@ServeurpersoCom commented Dec 8, 2025

Router per-model config

This PR implements INI-based per-model configuration for llama-server router mode, as discussed in #17850.

Summary

Advanced users can define custom configurations using an .ini file. Each model can have its own preset with custom parameters while inheriting router defaults for unspecified options.

Motivation

Multi-model inference servers for small/medium teams need declarative configuration with zero operational friction. Operators should be able to set global defaults via router CLI and override specific parameters per model in a simple text file.

Key Features

  1. INI-based presets - Define model-specific configurations in a simple .ini file
  2. Three independent model sources - Cached models (LLAMA_CACHE), local GGUF files (--models-dir), and custom-path models (--models-preset, via paths defined in the INI file)
  3. Flexible argument formats - Accepts long args (ctx-size), short args (c), and env vars (LLAMA_ARG_CTX_SIZE) as INI keys
  4. Inheritance system - Preset args are merged with router base args before spawning child processes
  5. Custom model paths - Define models with absolute paths directly in presets, without filesystem scanning
  6. Deep directory support - Models in nested directories can be referenced through explicit preset definitions, with no recursive scanning overhead

Usage

llama-server --models-preset ./presets.ini

The router can combine multiple sources:

llama-server --models-dir ./local-models --models-preset ./custom-configs.ini -ngl 999 -fa

INI Format

Section names define model identifiers. Keys correspond to CLI arguments without leading dashes.

Supported key formats:

  • Long form: n-gpu-layers = 123
  • Short form: c = 4096
  • Env var: LLAMA_ARG_CACHE_RAM = 0

All three formats are equivalent and can be mixed in the same file.

Example presets.ini:

version = 1

; Preset for cached HuggingFace model
[ggml-org/gemma-3-27b-it-GGUF:Q6_K]
chat-template = chatml
ngl = 123
jinja = on
ctx-size = 131072

; Custom local model with absolute path
[my-custom-model]
m = /absolute/path/to/model.gguf
mmproj = /absolute/path/to/mmproj.gguf
ctx-size = 65536
temp = 0.7
top-p = 0.8

; MoE model with specific settings
[MoE-Qwen3-30B-A3B-Thinking]
m = /models/Qwen3-30B-A3B-Thinking-Q6_K.gguf
n-cpu-moe = 30
temp = 0.6
top-p = 0.95
ctx-size = 32768

How Models Are Loaded

The router discovers models from three sources:

  1. Cached models - Scanned from LLAMA_CACHE (typically ~/.cache/llama.cpp)
  2. Local directory - Scanned from --models-dir (non-recursive, direct children only)
  3. Preset definitions - Custom models defined in --models-preset with explicit paths

Model names from presets can match cached or local models to apply custom configurations, or define entirely new models with custom paths.
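
For example, a --models-dir layout could look like this (hypothetical filenames; only direct children are discovered, subdirectories are ignored):

./local-models/
  gemma-3-4b-it-Q4_K_M.gguf    <- discovered as a local model
  qwen3-8b-Q5_K_M.gguf         <- discovered as a local model
  archive/old-model.gguf       <- ignored (not a direct child)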

Argument Inheritance

When spawning a child process for a model, arguments are merged in this order:

  1. Start with preset args from INI (model-specific settings)
  2. Add router base args for any missing keys (global defaults from router CLI)
  3. Force control args (port, host, alias - always overridden by router)

Priority order (highest to lowest):

  • Control args (port, host, alias, model path) - managed by router, cannot be overridden
  • Preset args (from INI) - model-specific overrides
  • Router base args (inherited from router CLI) - fill in missing preset keys

Control args automatically managed by router:

  • --port, --host, --alias
  • --api-key
  • --model, --mmproj, --hf-repo
  • --models-dir, --models-max, --models-preset

If a preset contains control args, they are removed with a warning.
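
A minimal worked example (hypothetical values; the real port is assigned by the router): if the router is started as

llama-server --models-preset ./presets.ini -ngl 999 -fa -c 8192

then the [my-custom-model] preset above would spawn a child roughly equivalent to

llama-server -m /absolute/path/to/model.gguf --mmproj /absolute/path/to/mmproj.gguf --ctx-size 65536 --temp 0.7 --top-p 0.8 -ngl 999 -fa --host 127.0.0.1 --port 8081 --alias my-custom-model

The preset's ctx-size = 65536 overrides the router's -c 8192, the router's -ngl 999 -fa fill in keys the preset leaves unset, and host/port/alias plus the model path are forced by the router (the preset's m/mmproj tell the router which files to load).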

Changes

New files:

  • common/preset.cpp - INI parser using PEG grammar and preset management
  • common/preset.h - Preset structures and API

Modified files:

  • common/arg.cpp/h - Added common_params_parse() for map output, is_truthy/is_falsey/is_autoy utilities, and common_arg_get_env_name() for flag-to-env-var mapping
  • common/common.h - Added models_preset parameter
  • common/CMakeLists.txt - Added preset.cpp/h to build
  • tools/server/server-models.cpp/h - Integrated preset system with model loading and spawning
  • tools/server/README.md - Added preset documentation and examples

Technical Details

INI parsing:

  • Uses existing PEG parser from common/peg-parser.h (grammar by @aldehir)
  • Line-oriented parsing handles comments, blank lines, and standard INI sections
  • Whitespace and inline comments properly handled

Argument mapping:

  • common_arg_get_env_name() maps CLI flags to LLAMA_ARG_* env var names
  • Three key formats (long, short, env) all map to same common_arg via lookup table
  • Deduplication handled automatically (short -c and long --ctx-size are the same arg)

Child process spawning:

  • Child servers listen on 127.0.0.1 (not inherited hostname) to avoid conflicts when router runs on 0.0.0.0
  • Arguments passed as CLI args (not environment variables)
  • Router port exported as LLAMA_SERVER_ROUTER_PORT env var for child processes
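
For example (hypothetical port numbers), a router listening on 0.0.0.0:8080 would spawn each child along the lines of

LLAMA_SERVER_ROUTER_PORT=8080 llama-server --host 127.0.0.1 --port 8081 ...

so children bind only to loopback while the router remains reachable externally, and each child can locate the router through the exported variable.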

Use Case Example

Development team runs inference server with multiple models:

llama-server --models-preset ./configs.ini -ngl 999 -fa -ctk q8_0 -ctv q8_0

The configs.ini file defines per-model overrides:

[ggml-org/gemma-3-27b-it-GGUF:Q6_K]
; This model needs more context
ctx-size = 131072

[problematic-model]
m = /models/problematic-Q8_0.gguf
; Disable flash attention for this model
fa = off
; Reduce layers on GPU
ngl = 50

Global defaults (-ngl 999 -fa -ctk q8_0 -ctv q8_0) apply to all models, but each preset can override specific parameters. The router automatically manages ports, aliases, and model paths.

Testing Status

Tested configurations:

  • Multiple cached HuggingFace models with various quantizations
  • Local GGUF files with mmproj auto-detection
  • Custom path models defined in presets
  • Mixed sources (cached + local + preset) in single router instance
  • Argument inheritance and override behavior

Notes

  • File paths in INI are relative to server working directory (absolute paths recommended)
  • --models-dir and --models-preset are independent and can be used together
  • Presets are logged at startup with * prefix indicating custom configuration
  • /v1/models endpoint includes preset configuration in response for debugging
  • Boolean flags can use on/off, enabled/disabled, true/false, or 1/0 as values
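
For instance, these preset lines are all equivalent ways to enable flash attention (illustrative; the matching off/disabled/false/0 spellings disable it):

fa = on
fa = enabled
fa = true
fa = 1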

Related Issues

Closes #17850 (Proposal: allow arg.cpp to import/export configs from/to INI file)
Related to #17470, #10932

Credits

Co-authored-by: aldehir (INI parser PEG grammar)
Co-authored-by: ngxson (preset refactoring, API design, argument system integration)

Replace flat directory scan with recursive traversal using
std::filesystem::recursive_directory_iterator. Support for
nested vendor/model layouts (e.g. vendor/model/*.gguf).
Model name now reflects the relative path within --models-dir
instead of just the filename. Aggregate files by parent
directory via std::map before constructing local_model.

@ngxson left a comment

this looks interesting, I will clean this up a bit and push a commit

Collaborator

Will be nice if we can move part of this file into common/preset.cpp|h, so it can be reused by other tools

Comment on lines 502 to 507
    if (value == "false") {
        continue;
    }

    if (value == "true" || value.empty()) {
        child_env.push_back(key + "=");
Collaborator

I think leaving the original value for bool should be good? We can already handle these values using is_falsey / is_truthy in arg.cpp
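
(For reference, a minimal sketch of what such helpers could look like, assuming the boolean spellings documented in this PR; the actual arg.cpp implementation may differ:)

static bool is_truthy(const std::string & value) {
    // accepted truthy spellings per this PR: on / enabled / true / 1
    return value == "on" || value == "enabled" || value == "true" || value == "1";
}

static bool is_falsey(const std::string & value) {
    // accepted falsey spellings per this PR: off / disabled / false / 0
    return value == "off" || value == "disabled" || value == "false" || value == "0";
}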

Collaborator Author

Good point! I'll simplify the bool handling to pass through the original values (=true/=false) and let is_truthy/is_falsey handle the conversion

@aldehir commented Dec 8, 2025

@ServeurpersoCom

Here is a line-oriented approach for the parser:

static const auto ini_parser = build_peg_parser([](auto & p) {
    // newline ::= "\r\n" / "\n" / "\r"
    auto newline = p.rule("newline", p.literal("\r\n") | p.literal("\n") | p.literal("\r"));

    // ws ::= [ \t]*
    auto ws = p.rule("ws", p.chars("[ \t]", 0, -1));

    // comment ::= [;#] (!newline .)*
    auto comment = p.rule("comment", p.chars("[;#]", 1, 1) + p.zero_or_more(p.negate(newline) + p.any()));

    // eol ::= ws comment? (newline / EOF)
    auto eol = p.rule("eol", ws + p.optional(comment) + (newline | p.end()));

    // ident ::= [a-zA-Z_] [a-zA-Z0-9_.-]*
    auto ident = p.rule("ident", p.chars("[a-zA-Z_]", 1, 1) + p.chars("[a-zA-Z0-9_.-]", 0, -1));

    // value ::= (!eol-start .)*
    auto eol_start = p.rule("eol-start", ws + (p.chars("[;#]", 1, 1) | newline | p.end()));
    auto value = p.rule("value", p.zero_or_more(p.negate(eol_start) + p.any()));

    // header-line ::= "[" ws ident ws "]" eol
    auto header_line = p.rule("header-line", "[" + ws + p.tag("section-name", p.chars("[^]]")) + ws + "]" + eol);

    // kv-line ::= ident ws "=" ws value eol
    auto kv_line = p.rule("kv-line", p.tag("key", ident) + ws + "=" + ws + p.tag("value", value) + eol);

    // comment-line ::= ws comment (newline / EOF)
    auto comment_line = p.rule("comment-line", ws + comment + (newline | p.end()));

    // blank-line ::= ws (newline / EOF)
    auto blank_line = p.rule("blank-line", ws + (newline | p.end()));

    // line ::= header-line / kv-line / comment-line / blank-line
    auto line = p.rule("line", header_line | kv_line | comment_line | blank_line);

    // ini ::= line* EOF
    auto ini = p.rule("ini", p.zero_or_more(line) + p.end());

    return ini;
});

I assume the changes were because of the weirdness in consuming spaces/comments. This should alleviate those concerns.

And the visitor can really be something as simple as this:

std::map<std::string, std::map<std::string, std::string>> cfg;

std::string current_section = "default";
std::string current_key;

ctx.ast.visit(result, [&](const auto & node) {
    if (node.tag == "section-name") {
        current_section = std::string(node.text);
        cfg[current_section] = {};
    } else if (node.tag == "key") {
        current_key = std::string(node.text);
    } else if (node.tag == "value" && !current_key.empty()) {
        cfg[current_section][current_key] = std::string(node.text);
        current_key.clear();
    }
});
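
For example, feeding the parser an input like

[my-model]
ctx-size = 4096 ; inline comment

yields cfg["my-model"]["ctx-size"] == "4096": the eol-start rule stops the value before the inline comment, so the visitor needs no trimming.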

ServeurpersoCom and others added 2 commits December 8, 2025 12:29
PEG parser usage improvements:
- Simplify parser instantiation (remove arena indirection)
- Optimize grammar usage (ws instead of zero_or_more, remove optional wrapping)
- Fix last line without newline bug (+ operator instead of <<)
- Remove redundant end position check

Feature scope:
- Remove auto-reload feature (will be separate PR per @ngxson)
- Keep config.ini auto-creation and template generation
- Preserve per-model customization logic

Co-authored-by: aldehir <[email protected]>
Co-authored-by: ngxson <[email protected]>
Complete rewrite of INI parser grammar and visitor:
- Use p.chars(), p.negate(), p.any() instead of p.until()
- Support end-of-line comments (key=value # comment)
- Handle EOF without trailing newline correctly
- Strict identifier validation ([a-zA-Z_][a-zA-Z0-9_.-]*)
- Simplified visitor (no pending state, no trim needed)
- Grammar handles whitespace natively via eol rule

Business validation preserved:
- Reject section names starting with LLAMA_ARG_*
- Accept only keys starting with LLAMA_ARG_*
- Require explicit section before key-value pairs

Co-authored-by: aldehir <[email protected]>
@aldehir left a comment

Looks good as far as parsing is concerned!

I will need to add an expect() helper to provide helpful error messages to users when they make a mistake. I can do that separately in another PR.

@ServeurpersoCom marked this pull request as draft December 8, 2025 11:57
Children now receive minimal CLI args (executable, model, port, alias)
instead of inheriting all router args. Global settings pass through
LLAMA_ARG_* environment variables only, eliminating duplicate config
warnings.

Fixes: Router args like -ngl, -fa were passed both via CLI and env,
causing 'will be overwritten' warnings on every child spawn
@ServeurpersoCom

Now it's in a basic working state with the new line-based PEG parser. I'm testing it against my entire per-model configuration on the server to cover some edge cases, and then there's the @ngxson refactoring to do.

@ServeurpersoCom marked this pull request as ready for review December 8, 2025 12:34
@ServeurpersoCom commented Dec 8, 2025

Missing sampling parameters need .set_env() in common/arg.cpp (--temp, --top-p, --top-k, --min-p have no LLAMA_ARG_ env vars yet). Successfully migrated a llama-swap config (YAML) to config.ini via LLM: llama-server preserved all custom parameters (ctx-size, n-cpu-moe, mmproj, -m ....Q6_K), applied global CLI defaults (-ngl 999, -fa, --mlock, -ctk/-ctv, etc.) to all models, and automatically reorganized sections/keys alphabetically to maintain a normalized format.
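
For context, the missing env vars are a one-line addition to the existing add_opt entries in common/arg.cpp - a sketch of the pattern (the handler body here is illustrative, not the exact upstream code):

add_opt(common_arg(
    {"--temp"}, "N",
    string_format("temperature (default: %.1f)", (double) params.sampling.temp),
    [](common_params & params, const std::string & value) {
        params.sampling.temp = std::stof(value); // parse the sampling temperature
    }
).set_env("LLAMA_ARG_TEMP"));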

@ngxson commented Dec 8, 2025

Missing sampling parameters need .set_env() in common/arg.cpp (--temp, --top-p, --top-k, --min-p have no LLAMA_ARG_ env vars yet).

Hmm yeah, I didn't notice that some env vars are missing. I think it will be cleaner if we default to using the longest arg (for example, --ctx-size instead of -c).

Internally, the parser can accept all 3 forms: env, short arg, and long arg; there is no chance that they will collide anyway. I'll push the change for this

@ServeurpersoCom

Yes look, it just needs the missing .set_env("LLAMA_ARG_TEMP") etc... I'll wait for your change while I run some tests

llama-server --models-dir ./models_directory

The directory is scanned recursively, so nested vendor/model layouts such as `vendor_name/model_name/*.gguf` are supported. The model name in the router UI matches the relative path inside `--models-dir` (for example, `vendor_name/model_name`).
Collaborator

For visibility, I will remove recursive support from this PR because it's not related to config support - it should be added later via a dedicated PR

@ServeurpersoCom Dec 8, 2025

Yes, no worries (I have to keep it on my side, otherwise it breaks my integration server). I'll adapt the configuration on my side to test this feature-atomic PR if necessary.

@ngxson commented Dec 8, 2025

I moved most of the code inside server-config.cpp to common/preset.cpp

We're now using the term "preset", so I think it's easier to name the file presets.ini now (it can be extended for use outside of the server)

Since I'm now using the same common_arg to handle everything, including parsing and merging args, edge cases like deduplication of the short form -a and long form --abc are also handled

We don't yet support repeated args or args with 2 values (like --lora-scaled), but they can be added in the future

The API endpoint /v1/models is also extended to include the args and INI preset, which will be quite useful for debugging
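
A hypothetical entry in the extended response could look like this (field names are illustrative assumptions, not a final schema):

{
  "id": "my-custom-model",
  "object": "model",
  "args": ["--ctx-size", "65536", "--temp", "0.7"],
  "preset": { "ctx-size": "65536", "temp": "0.7" }
}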

Things that still need to improve:

  • add falsey and truthy check for input from ini
  • add documentation and example


Alternatively, you can also add GGUF based preset (see next section)

### Model presets
Collaborator

@ServeurpersoCom I updated the docs with an example - lmk if this works in your case

- Sanitize model names: replace / and \ with _ for display
- Recursive directory scan with relative path storage
- Convert relative paths to absolute when spawning children
- Filter router control args from child processes
- Refresh args after port assignment for correct port value
- Fallback preset lookup for compatibility
- Fix missing argv[0]: store server binary path before base_args parsing
first_shard_file = file;
} else {
model_file = file;
std::function<void(const std::string &, const std::string &)> scan_subdir =
@ngxson Dec 8, 2025

Please remove the recursive implementation - it's unrelated to the current PR, and it's also unsafe as it doesn't handle cases such as circular symlinks or circular mount points

@ServeurpersoCom Dec 8, 2025

You can retrieve the rest (except the recursion) and push --force; I won't touch the branch before tomorrow (rebase/test).
A two-level browsing system will be perfect for all cases (separate PR)

@emjomi commented Dec 8, 2025

Hey guys! Sorry to interrupt, but are the LLAMA_ARG_ prefixes required? I think they make the config a bit noisy.

One more thing: maybe it's better to put the config in ~/.config/llama.cpp/ on Linux, as specified in https://specifications.freedesktop.org/basedir/latest/?

Thank you so much for what you're doing!

@ServeurpersoCom

Hey guys! Sorry to interrupt, but are the LLAMA_ARG_ prefixes required? I think they make the config a bit noisy.

One more thing: maybe it's better to put the config in ~/.config/llama.cpp/ on Linux, as specified in https://specifications.freedesktop.org/basedir/latest/?

Thank you so much for what you're doing!

No worries! With the last refactor, the LLAMA_ARG_ prefixes are optional: you can use the short argument forms (e.g., ngl, c) or the long forms with dashes (e.g., n-gpu-layers, ctx-size) instead. All three formats are supported.

Regarding config location: the preset file path is fully customizable via --models-preset, so you can place it wherever you prefer, including ~/.config/llama.cpp/presets.ini if that fits your workflow better.

This is a WIP; I'll update the first message soon.

@ServeurpersoCom

I'll update the PR documentation with the new implementation today: no more INI auto-generation, deep GGUF tree support without scanning, all 3 variable formats supported, standard Linux binary/INI relative paths, and --models-dir and --models-preset are now independent
