# [Renderer] Separate out RendererConfig from ModelConfig (#30145)
**Merged** · +971 −799
## Conversation
Documentation preview: https://vllm--30145.org.readthedocs.build/en/30145/
**DarkLight1337** (Member, Author): Force-merging as LoRA tests are timing out on main
**DarkLight1337** added a commit to DarkLight1337/vllm that referenced this pull request on Dec 7, 2025: `…llm-project#30145)" This reverts commit 27f4c2f.`
**vllm-bot** pushed a commit that referenced this pull request on Dec 7, 2025.
## Labels

- `deepseek`: Related to DeepSeek models
- `documentation`: Improvements or additions to documentation
- `frontend`
- `kv-connector`
- `llama`: Related to Llama models
- `multi-modality`: Related to multi-modality (#4194)
- `qwen`: Related to Qwen models
- `ready`: ONLY add when PR is ready to merge/full CI is needed
- `speculative-decoding`
- `structured-output`
- `tpu`: Related to Google TPUs
- `v1`
## Purpose
Move renderer-specific fields out of `ModelConfig` into a new `RendererConfig`:

- `ModelConfig.tokenizer` -> `RendererConfig.tokenizer`
- `ModelConfig.tokenizer_mode` -> `RendererConfig.tokenizer_mode`
- `ModelConfig.tokenizer_revision` -> `RendererConfig.tokenizer_revision`
- `ModelConfig.skip_tokenizer_init` -> `RendererConfig.skip_tokenizer_init`
- `ModelConfig.io_processor_plugin` -> `RendererConfig.io_processor_plugin`
- `MultiModalConfig.media_io_kwargs` -> `RendererConfig.media_io_kwargs`
- `ModelConfig.allowed_local_media_path` -> `RendererConfig.allowed_local_media_path`
- `ModelConfig.allowed_media_domains` -> `RendererConfig.allowed_media_domains`

Since the renderer may still need to access model config fields (like `hf_config`), `RendererConfig` contains `ModelConfig`. This also means a number of tests need to be updated to pass `RendererConfig(model_config=model_config)` explicitly to `VllmConfig`.
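To illustrate the containment relationship, here is a minimal sketch. The classes below are toy stand-ins with heavily trimmed field sets and made-up defaults, not the real vLLM definitions; only the `RendererConfig(model_config=model_config)` pattern comes from this PR.

```python
from dataclasses import dataclass, field


@dataclass
class ModelConfig:
    # Trimmed stand-in; the real class has many more fields.
    model: str = "facebook/opt-125m"


@dataclass
class RendererConfig:
    # Tokenizer-related fields now live here, but the renderer can still
    # reach model-level fields through the contained ModelConfig.
    model_config: ModelConfig = field(default_factory=ModelConfig)
    tokenizer: str | None = None
    tokenizer_mode: str = "auto"
    skip_tokenizer_init: bool = False


@dataclass
class VllmConfig:
    model_config: ModelConfig = field(default_factory=ModelConfig)
    renderer_config: RendererConfig = field(default_factory=RendererConfig)


# Tests that previously constructed only a ModelConfig now wrap the same
# object in a RendererConfig and pass it explicitly:
model_config = ModelConfig(model="facebook/opt-125m")
vllm_config = VllmConfig(
    model_config=model_config,
    renderer_config=RendererConfig(model_config=model_config),
)
assert vllm_config.renderer_config.model_config is vllm_config.model_config
```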
Related changes:

- Split `ModelConfig.maybe_pull_model_tokenizer_for_runai` into `ModelConfig.maybe_pull_model_for_runai` and `RendererConfig.maybe_pull_tokenizer_for_runai`. Also fix the latter, which was using `model` instead of `tokenizer`.
- Split `ModelConfig.get_and_verify_max_len` into `ModelConfig.recalculate_max_model_len` and call it twice during initialization: inside `ModelConfig.__post_init__` without the tokenizer, and again after `RendererConfig` is constructed with the tokenizer (see the sketch after this list).
- …`renderer_config` instead of `model_config`.
- …`renderer_config` instead of `model_config`.
- Update the `SupportsTranscription` interface methods to accept `renderer_config` instead of `model_config`.
- Add `_HfExamplesInfo.build_model_config` and `_HfExamplesInfo.build_renderer_config` convenience methods for testing.

Prepare for #22880.
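A rough sketch of the two-phase `max_model_len` recalculation described above. The `recalculate_max_model_len` name and the call-it-twice flow come from this PR; everything else (`derived_max_model_len`, the `tokenizer` parameter shape, `FakeTokenizer`) is an illustrative assumption.

```python
from dataclasses import dataclass


@dataclass
class ModelConfig:
    model: str = "facebook/opt-125m"
    max_model_len: int | None = None
    # Stand-in for the limit that real vLLM derives from hf_config.
    derived_max_model_len: int = 2048

    def __post_init__(self):
        # Phase 1: runs before any tokenizer exists, so the limit is
        # derived from the HF config alone.
        self.recalculate_max_model_len(tokenizer=None)

    def recalculate_max_model_len(self, tokenizer=None):
        # Successor to get_and_verify_max_len; when a tokenizer is given,
        # its own length limit can further clamp max_model_len.
        limit = self.derived_max_model_len
        max_len = getattr(tokenizer, "model_max_length", None)
        if max_len is not None:
            limit = min(limit, max_len)
        self.max_model_len = limit


class FakeTokenizer:
    model_max_length = 1024


model_config = ModelConfig()          # phase 1: max_model_len == 2048
# Phase 2: after RendererConfig has loaded the tokenizer.
model_config.recalculate_max_model_len(tokenizer=FakeTokenizer())
assert model_config.max_model_len == 1024
```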
## Test Plan
## Test Result
## Essential Elements of an Effective PR Description Checklist

- …`supported_models.md` and `examples` for a new model.