vLLM 0.8+ does not seem to work with outlines anymore, due to a rather strange error that vLLM V1 should arguably not raise:
ValueError: vLLM V1 does not support per request user provided logits processors.
I suspect this is an upstream problem, but I wanted to flag it here in case anyone else is experiencing this issue. The current workaround is to downgrade to vllm==0.7.1.
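To support the suspicion that this is upstream, the same ValueError appears to be triggerable with vLLM alone, without outlines, as soon as a per-request logits processor is supplied. A minimal sketch (the no-op processor and prompt are placeholders, not taken from the original report):

```python
from vllm import LLM, SamplingParams

llm = LLM(model="microsoft/Phi-3-mini-4k-instruct")

# Any per-request logits processor should trip the V1 engine's validation.
noop_processor = lambda token_ids, logits: logits
params = SamplingParams(logits_processors=[noop_processor])

# On vLLM 0.8+ (V1 engine) this is expected to raise:
# ValueError: vLLM V1 does not support per request user provided logits processors.
llm.generate(["Hello"], params)
```

The full log from the failing outlines run is below.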
(oss-debug) λ ~/dottxt/oss-debug/ python vllm_test.py
INFO 03-27 10:53:14 [__init__.py:239] Automatically detected platform cuda.
INFO 03-27 10:53:24 [config.py:585] This model supports multiple tasks: {'reward', 'classify', 'embed', 'score', 'generate'}. Defaulting to 'generate'.
INFO 03-27 10:53:24 [config.py:1697] Chunked prefill is enabled with max_num_batched_tokens=8192.
INFO 03-27 10:53:26 [core.py:54] Initializing a V1 LLM engine (v0.8.2) with config: model='microsoft/Phi-3-mini-4k-instruct', speculative_config=None, tokenizer='microsoft/Phi-3-mini-4k-instruct', skip_tokenizer_init=False, tokenizer_mode=auto, revision=None, override_neuron_config=None, tokenizer_revision=None, trust_remote_code=False, dtype=torch.bfloat16, max_seq_len=4096, download_dir=None, load_format=LoadFormat.AUTO, tensor_parallel_size=1, pipeline_parallel_size=1, disable_custom_all_reduce=False, quantization=None, enforce_eager=False, kv_cache_dtype=auto, device_config=cuda, decoding_config=DecodingConfig(guided_decoding_backend='xgrammar', reasoning_backend=None), observability_config=ObservabilityConfig(show_hidden_metrics=False, otlp_traces_endpoint=None, collect_model_forward_time=False, collect_model_execute_time=False), seed=None, served_model_name=microsoft/Phi-3-mini-4k-instruct, num_scheduler_steps=1, multi_step_stream_outputs=True, enable_prefix_caching=True, chunked_prefill_enabled=True, use_async_output_proc=True, disable_mm_preprocessor_cache=False, mm_processor_kwargs=None, pooler_config=None, compilation_config={"level":3,"custom_ops":["none"],"splitting_ops":["vllm.unified_attention","vllm.unified_attention_with_output"],"use_inductor":true,"compile_sizes":[],"use_cudagraph":true,"cudagraph_num_of_warmups":1,"cudagraph_capture_sizes":[512,504,496,488,480,472,464,456,448,440,432,424,416,408,400,392,384,376,368,360,352,344,336,328,320,312,304,296,288,280,272,264,256,248,240,232,224,216,208,200,192,184,176,168,160,152,144,136,128,120,112,104,96,88,80,72,64,56,48,40,32,24,16,8,4,2,1],"max_capture_size":512}
WARNING 03-27 10:53:26 [utils.py:2321] Methods determine_num_available_blocks,device_config,get_cache_block_size_bytes,initialize_cache not implemented in <vllm.v1.worker.gpu_worker.Worker object at 0x763983533110>
INFO 03-27 10:53:27 [parallel_state.py:954] rank 0 in world size 1 is assigned as DP rank 0, PP rank 0, TP rank 0
INFO 03-27 10:53:27 [cuda.py:220] Using Flash Attention backend on V1 engine.
INFO 03-27 10:53:27 [gpu_model_runner.py:1174] Starting to load model microsoft/Phi-3-mini-4k-instruct...
WARNING 03-27 10:53:28 [topk_topp_sampler.py:63] FlashInfer is not available. Falling back to the PyTorch-native implementation of top-p & top-k sampling. For the best performance, please install FlashInfer.
INFO 03-27 10:53:28 [weight_utils.py:265] Using model weights format ['*.safetensors']
Loading safetensors checkpoint shards: 0% Completed | 0/2 [00:00<?, ?it/s]
Loading safetensors checkpoint shards: 50% Completed | 1/2 [00:00<00:00, 1.37it/s]
Loading safetensors checkpoint shards: 100% Completed | 2/2 [00:01<00:00, 1.74it/s]
Loading safetensors checkpoint shards: 100% Completed | 2/2 [00:01<00:00, 1.67it/s]
INFO 03-27 10:53:29 [loader.py:447] Loading weights took 1.25 seconds
INFO 03-27 10:53:29 [gpu_model_runner.py:1186] Model loading took 7.1184 GB and 2.009474 seconds
INFO 03-27 10:53:38 [backends.py:415] Using cache directory: /home/cameron/.cache/vllm/torch_compile_cache/dc009c0fc6/rank_0_0 for vLLM's torch.compile
INFO 03-27 10:53:38 [backends.py:425] Dynamo bytecode transform time: 8.92 s
INFO 03-27 10:53:41 [backends.py:132] Cache the graph of shape None for later use
INFO 03-27 10:54:06 [backends.py:144] Compiling a graph for general shape takes 27.13 s
INFO 03-27 10:54:15 [monitor.py:33] torch.compile takes 36.05 s in total
INFO 03-27 10:54:16 [kv_cache_utils.py:566] GPU KV cache size: 92,752 tokens
INFO 03-27 10:54:16 [kv_cache_utils.py:569] Maximum concurrency for 4,096 tokens per request: 22.64x
INFO 03-27 10:54:35 [gpu_model_runner.py:1534] Graph capturing finished in 19 secs, took 0.47 GiB
INFO 03-27 10:54:35 [core.py:151] init engine (profile, create kv cache, warmup model) took 65.58 seconds
Traceback (most recent call last):
  File "/home/cameron/dottxt/oss-debug/vllm_test.py", line 13, in <module>
    response = generator(prompt)
               ^^^^^^^^^^^^^^^^^
  File "/home/cameron/dottxt/oss-debug/.venv/lib/python3.12/site-packages/outlines/generate/api.py", line 504, in __call__
    completions = self.model.generate(
                  ^^^^^^^^^^^^^^^^^^^^
  File "/home/cameron/dottxt/oss-debug/.venv/lib/python3.12/site-packages/outlines/models/vllm.py", line 130, in generate
    results = self.model.generate(
              ^^^^^^^^^^^^^^^^^^^^
  File "/home/cameron/dottxt/oss-debug/.venv/lib/python3.12/site-packages/vllm/utils.py", line 1072, in inner
    return fn(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^
  File "/home/cameron/dottxt/oss-debug/.venv/lib/python3.12/site-packages/vllm/entrypoints/llm.py", line 457, in generate
    self._validate_and_add_requests(
  File "/home/cameron/dottxt/oss-debug/.venv/lib/python3.12/site-packages/vllm/entrypoints/llm.py", line 1308, in _validate_and_add_requests
    self._add_request(
  File "/home/cameron/dottxt/oss-debug/.venv/lib/python3.12/site-packages/vllm/entrypoints/llm.py", line 1326, in _add_request
    self.llm_engine.add_request(
  File "/home/cameron/dottxt/oss-debug/.venv/lib/python3.12/site-packages/vllm/v1/engine/llm_engine.py", line 184, in add_request
    request = self.processor.process_inputs(request_id, prompt, params,
              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/cameron/dottxt/oss-debug/.venv/lib/python3.12/site-packages/vllm/v1/engine/processor.py", line 183, in process_inputs
    self._validate_params(params)
  File "/home/cameron/dottxt/oss-debug/.venv/lib/python3.12/site-packages/vllm/v1/engine/processor.py", line 114, in _validate_params
    self._validate_supported_sampling_params(params)
  File "/home/cameron/dottxt/oss-debug/.venv/lib/python3.12/site-packages/vllm/v1/engine/processor.py", line 97, in _validate_supported_sampling_params
    raise ValueError("vLLM V1 does not support per request "
ValueError: vLLM V1 does not support per request user provided logits processors.
Steps/code to reproduce the bug:
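The original vllm_test.py is not included in the report, so the following is a reconstruction based on the traceback (the Example schema and the prompt are assumptions; any outlines generator backed by vLLM 0.8+ should hit the same code path):

```python
# Hypothetical reconstruction of vllm_test.py -- the schema and prompt are made up.
from pydantic import BaseModel

import outlines


class Example(BaseModel):
    name: str
    value: int


model = outlines.models.vllm("microsoft/Phi-3-mini-4k-instruct")
generator = outlines.generate.json(model, Example)

prompt = "Give me an example object as JSON."
response = generator(prompt)  # fails on vllm>=0.8, works on vllm==0.7.1
print(response)
```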
Expected result:
An `Example` object
Error message:
ValueError: vLLM V1 does not support per request user provided logits processors.
Outlines/Python version information:
Version information
Context for the issue:
No response