# LLM
A CLI tool and Python library for interacting with OpenAI, Anthropic’s Claude, Google’s Gemini, Meta’s Llama and dozens of other Large Language Models, both via remote APIs and with models that can be installed and run on your own machine.
Watch *Language models on the command-line* on YouTube for a demo, or read the accompanying detailed notes.
With LLM you can:
- Run prompts from the command-line
- Store prompts and responses in SQLite
- Generate and store embeddings
- Extract structured content from text and images
- Grant models the ability to execute tools
- … and much, much more
## Quick start
First, install LLM using pip or Homebrew or pipx or uv:
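```bash
pip install llm
```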
Or with Homebrew (see warning note):
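```bash
brew install llm
```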
Or with pipx:
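```bash
pipx install llm
```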
Or with uv:
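```bash
uv tool install llm
```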
If you have an OpenAI API key you can run this:
```bash
# Paste your OpenAI API key into this
llm keys set openai

# Run a prompt (with the default gpt-4o-mini model)
llm "Ten fun names for a pet pelican"

# Extract text from an image
llm "extract text" -a scanned-document.jpg

# Use a system prompt against a file
cat myfile.py | llm -s "Explain this code"
```
Run prompts against Gemini or Anthropic with their respective plugins:
```bash
llm install llm-gemini
llm keys set gemini
# Paste Gemini API key here
llm -m gemini-2.0-flash 'Tell me fun facts about Mountain View'

llm install llm-anthropic
llm keys set anthropic
# Paste Anthropic API key here
llm -m claude-4-opus 'Impress me with wild facts about turnips'
```
You can also install a plugin to access models that can run on your local device. If you use Ollama:
```bash
# Install the plugin
llm install llm-ollama

# Download and run a prompt against the Llama 3.2 model
ollama pull llama3.2:latest
llm -m llama3.2:latest 'What is the capital of France?'
```
To start an interactive chat with a model, use `llm chat`:
```
Chatting with gpt-4.1
Type 'exit' or 'quit' to exit
Type '!multi' to enter multiple lines, then '!end' to finish
Type '!edit' to open your default editor and modify the prompt.
Type '!fragment <my_fragment> [<another_fragment> ...]' to insert one or more fragments
> Tell me a joke about a pelican
Why don't pelicans like to tip waiters?

Because they always have a big bill!
```
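The same models can be driven from Python. Here is a minimal sketch using the library API, assuming the OpenAI key configured above and the default gpt-4o-mini model:

```python
import llm

# Load a model by ID (uses the key saved with `llm keys set openai`)
model = llm.get_model("gpt-4o-mini")

# Run a prompt and print the generated text
response = model.prompt("Ten fun names for a pet pelican")
print(response.text())
```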
More background on this project:
- llm, ttok and strip-tags—CLI tools for working with ChatGPT and other LLMs
- The LLM CLI tool now supports self-hosted language models via plugins
- LLM now provides tools for working with embeddings
- Build an image search engine with llm-clip, chat with models with llm chat
- You can now run prompts against images, audio and video in your terminal using LLM
- Structured data extraction from unstructured content using LLM schemas
- Long context support in LLM 0.24 using fragments and template plugins
See also the llm tag on my blog.