Shellm is a lightweight client for interacting with the Ollama API, written entirely in a single Bash script. It provides a simple interface to generate responses from language models, interact with custom tools, and integrate AI capabilities into everyday Linux workflows.
- Single File Script: No complex dependencies—just a single Bash file.
- API Integration: Interacts with an Ollama API server to generate predictions.
- Model Download: Transparently instructs Ollama to download models that are not yet available locally.
- Piping and Chaining: Seamless integration into shell commands for input/output manipulation.
- Verbose Mode: Detailed debugging for troubleshooting or learning.
- Save the `shellm` Bash script to a directory of your choice, e.g., `/usr/local/bin`.
- Make the script executable:

  ```bash
  chmod +x /usr/local/bin/shellm
  ```

- Download the config YAML and place it in one of the recognized paths.
- Ensure the Ollama API is running on `localhost:11434`, or set the `API_URL` environment variable to your specific endpoint.
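For example, pointing shellm at an Ollama instance on another machine before the first run might look like this (the host address below is a placeholder):

```bash
# Hypothetical remote endpoint; replace with the address of your Ollama host.
export API_URL="http://192.168.1.50:11434/api"
shellm "Hello from a remote model."
```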
shellm "What is the weather like today?"
This will generate a response using the default model.
| Option | Description |
|---|---|
| `-u` | API URL (default: `http://localhost:11434/api`) |
| `-m` | Model name (default: `qwen2.5:3b-instruct-q5_K_M`) |
| `-n` | Number of predictions to generate (default: `200`) |
| `-v [0-2]` | Log level (verbosity) for debugging |
| `prompt` | The prompt for the model. If reading from stdin, this is prepended to the input. |
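Options can be combined in a single invocation; for example (the model name here is purely illustrative):

```bash
# Pick a non-default model, limit the number of predictions, and show warnings.
shellm -m "llama3:8b" -n 100 -v 1 "Give me a one-line summary of POSIX signals."
```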
Generate a response to a simple question:

```bash
shellm "Why is the sky blue?"
```

Use a different model:

```bash
shellm -m "newmodel-v1:6b" "Summarize the plot of 'The Great Gatsby'."
```

Translate a piece of text:

```bash
shellm "Translate the following text to French: 'Hello, how are you?'"
```
Shellm supports chaining multiple invocations, allowing the user to pass the output of one command as the input to the next. This is useful for refining AI responses or handling complex tasks:
```bash
response=$(shellm "What is the capital of France?")
shellm "Is $response a popular tourist destination?"
```
Shellm can be easily integrated into daily Linux workflows using piping. Here are a few examples:
Summarize the contents of a file:

```bash
cat myfile.txt | shellm "Summarize this text"
```

Generate a human-readable summary of files in a directory:

```bash
ls -l | shellm "Explain what these files are."
```

Translate system logs to another language:

```bash
journalctl -xe | shellm "Translate this to Spanish."
```
| Variable | Description | Default |
|---|---|---|
| `API_URL` | URL of the Ollama API | `http://localhost:11434/api` |
| `MODEL_SMALL` | Default model to use | `qwen2.5:3b-instruct-q5_K_M` |
| `VERBOSE` | Enable verbose output | `0` |
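Since these are ordinary environment variables, they can also be overridden per invocation; a sketch, with a placeholder endpoint and model name:

```bash
# One-off overrides; the endpoint and model below are examples, not defaults.
API_URL="http://ollama.lan:11434/api" MODEL_SMALL="mistral:7b-instruct" shellm "Say hi."
```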
For verbose output, use the `-v` flag and specify the desired log level:

- `-v 0` - Errors only
- `-v 1` - Add warnings
- `-v 2` - Add debug output
```bash
shellm -v 2 "Debug the script behavior."
```
Apart from banking on the significant overlap between "shell" and "llm", the name "Shellm" is also extremely close in spelling and pronunciation to the German word "Schelm", which means "rogue" or "jokester". This adds charm and character to the tool, so bear this in mind when it fails spectacularly.
Contributions are welcome! Feel free to open issues or submit pull requests for bug fixes or new features.
Shellm is released under the GPL license. See the LICENSE file for more details.
Enjoy using Shellm to bring AI capabilities directly into your shell! 😊