This is a simple vim "plugin", based on a couple of Python scripts and a little bit of vimrc, for interacting with LLM inference endpoints from within the ergonomic comfort of vim.
To install:

    ./install_qq.py [--prefix=$HOME/.local]
    cat vimrc.example >> $HOME/.vimrc

You should also ensure that `$HOME/.local/bin` is in your `$PATH`.
Configure endpoint API tokens via environment variables, e.g. `$DEEPSEEK_API_KEY`, `$ANTHROPIC_API_KEY`, `$XAI_API_KEY`, etc.
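For example, in your shell startup file (the values here are placeholders):

```sh
export DEEPSEEK_API_KEY="..."
export ANTHROPIC_API_KEY="..."
```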
In vim:
1. `:new` or `:vnew` to open a new buffer.
2. `:qqq` to load a fresh `qq` text file in the current buffer.
3. Edit your prompt after the `^Q^Q` escape chars.
4. `:qq` to submit your context window to a chat completions endpoint (default model: DeepSeek-V3).
5. Wait a little bit.
6. The current buffer will auto-reload to display the new context window, with the assistant response after the `^A^A` escape chars. More `^Q^Q` escape chars are also appended at the end of the text file.
7. Repeat (3)-(6).
8. Any text that you place before (above/left of) the initial `^Q^Q` prompt will be interpreted as an initial system message (see the example buffer below).
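To make the file format concrete, here is a sketch of what a buffer might look like after one round trip. The content is hypothetical, and `^Q^Q` / `^A^A` stand for the literal control-character pairs:

```
You are a terse assistant.
^Q^QWhat does :vnew do in vim?
^A^AIt opens a new, empty buffer in a vertical split.
^Q^Q
```

Here the first line is the optional system message, the text after `^Q^Q` is your prompt, the text after `^A^A` is the assistant response, and the trailing `^Q^Q` is appended automatically for your next prompt.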
As of the December 2025 update, this script has been migrated to prefer `/v1/messages`-compatible APIs where possible. Technically, we implement a superset of the messages API by supporting an initial "system" role message, which is translated to a dedicated field in the API request.
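Below is a minimal sketch of that translation step, assuming an Anthropic-style request body where the system prompt lives in a top-level `system` field; the function and variable names are illustrative, not the plugin's actual internals:

```python
# Illustrative sketch only -- not the plugin's actual code.
# /v1/messages-style APIs carry the system prompt in a top-level
# "system" field, so a leading "system" role message is hoisted
# out of the messages list before the request body is built.

def build_messages_request(model, messages, max_tokens=1024):
    """Build a /v1/messages-compatible request body from a transcript
    that may begin with a "system" role message."""
    body = {"model": model, "max_tokens": max_tokens}
    if messages and messages[0]["role"] == "system":
        body["system"] = messages[0]["content"]  # hoist to the dedicated field
        messages = messages[1:]
    body["messages"] = messages  # only "user"/"assistant" turns remain
    return body


# Example transcript as it might be parsed from a qq buffer
# (model id is a placeholder):
body = build_messages_request(
    model="placeholder-model-id",
    messages=[
        {"role": "system", "content": "You are a terse assistant."},
        {"role": "user", "content": "What does :vnew do in vim?"},
    ],
)
```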
MIT License