Conversation

@DarkLight1337 DarkLight1337 commented Dec 5, 2025

Purpose

Move renderer-specific fields out of ModelConfig into a new RendererConfig.

  • ModelConfig.tokenizer -> RendererConfig.tokenizer
  • ModelConfig.tokenizer_mode -> RendererConfig.tokenizer_mode
  • ModelConfig.tokenizer_revision -> RendererConfig.tokenizer_revision
  • ModelConfig.skip_tokenizer_init -> RendererConfig.skip_tokenizer_init
  • ModelConfig.io_processor_plugin -> RendererConfig.io_processor_plugin
  • MultiModalConfig.media_io_kwargs -> RendererConfig.media_io_kwargs
  • ModelConfig.allowed_local_media_path -> RendererConfig.allowed_local_media_path
  • ModelConfig.allowed_media_domains -> RendererConfig.allowed_media_domains

Since the renderer may still need access to model config fields (such as hf_config), RendererConfig contains a ModelConfig. This also means a number of tests need to be updated to pass RendererConfig(model_config=model_config) explicitly to VllmConfig.
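A minimal sketch of the composition described above, using hypothetical, heavily simplified stand-ins for the real vLLM classes (field names follow the PR description; defaults and bodies are illustrative only):

```python
from dataclasses import dataclass, field
from typing import Optional

# Hypothetical, simplified stand-in: tokenizer-related fields no
# longer live on ModelConfig.
@dataclass
class ModelConfig:
    model: str = "facebook/opt-125m"

# RendererConfig wraps ModelConfig so the renderer can still reach
# model-level fields such as hf_config.
@dataclass
class RendererConfig:
    model_config: ModelConfig = field(default_factory=ModelConfig)
    tokenizer: Optional[str] = None
    tokenizer_mode: str = "auto"
    tokenizer_revision: Optional[str] = None
    skip_tokenizer_init: bool = False

    def __post_init__(self) -> None:
        # Default the tokenizer path to the model path when unset.
        if self.tokenizer is None:
            self.tokenizer = self.model_config.model

# Tests now construct the wrapped config explicitly:
model_config = ModelConfig(model="meta-llama/Llama-3-8B")
renderer_config = RendererConfig(model_config=model_config)
```

Composition (rather than inheritance) keeps the two configs independently constructible while letting renderer code traverse `renderer_config.model_config` for anything model-specific.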

Related changes:

  • Separate ModelConfig.maybe_pull_model_tokenizer_for_runai into ModelConfig.maybe_pull_model_for_runai and RendererConfig.maybe_pull_tokenizer_for_runai. Also fix the latter, which previously used the model path instead of the tokenizer path.
  • Refactor ModelConfig.get_and_verify_max_len into ModelConfig.recalculate_max_model_len and call it twice during initialization: once inside ModelConfig.__post_init__ without the tokenizer, and again after RendererConfig is constructed with the tokenizer.
  • Update tokenizer/processor factories and chat utils to accept renderer_config instead of model_config.
  • Update multimodal registry and budget to accept renderer_config instead of model_config.
  • Update SupportsTranscription interface methods to accept renderer_config instead of model_config.
  • Add _HfExamplesInfo.build_model_config and _HfExamplesInfo.build_renderer_config convenience methods for testing.
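The two-phase max-model-len recalculation above can be sketched as follows. This is a hypothetical illustration only: the method name follows the PR description, but the body, field names, and FakeTokenizer are invented for the example.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ModelConfig:
    derived_max_model_len: int = 4096  # e.g. read from hf_config
    max_model_len: int = 0

    def __post_init__(self) -> None:
        # Phase 1: runs inside __post_init__, before any tokenizer exists.
        self.recalculate_max_model_len(tokenizer=None)

    def recalculate_max_model_len(self, tokenizer: Optional[object]) -> None:
        limit = self.derived_max_model_len
        model_max = getattr(tokenizer, "model_max_length", None)
        if model_max is not None:
            # Phase 2: once RendererConfig has built the tokenizer,
            # clamp against the tokenizer's own length limit.
            limit = min(limit, model_max)
        self.max_model_len = limit

class FakeTokenizer:
    model_max_length = 2048

model_config = ModelConfig()                             # phase 1: 4096
model_config.recalculate_max_model_len(FakeTokenizer())  # phase 2: 2048
```

Calling the same method twice keeps ModelConfig usable on its own (phase 1) while still letting the tokenizer tighten the limit once it is available (phase 2).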

Prepare for #22880

Test Plan

Test Result


Signed-off-by: DarkLight1337 <tlleungac@connect.ust.hk>
@DarkLight1337 DarkLight1337 added the ready ONLY add when PR is ready to merge/full CI is needed label Dec 5, 2025
@DarkLight1337 DarkLight1337 changed the title [Refactor] Separate out RendererConfig from ModelConfig` [Refactor] Separate out RendererConfig from ModelConfig Dec 5, 2025
@mergify

mergify bot commented Dec 5, 2025

Documentation preview: https://vllm--30145.org.readthedocs.build/en/30145/

@mergify mergify bot added documentation Improvements or additions to documentation deepseek Related to DeepSeek models frontend llama Related to Llama models labels Dec 5, 2025
@DarkLight1337 DarkLight1337 changed the title [Refactor] Separate out RendererConfig from ModelConfig [Renderer] Separate out RendererConfig from ModelConfig Dec 6, 2025
@DarkLight1337 DarkLight1337 enabled auto-merge (squash) December 6, 2025 04:40
@DarkLight1337 DarkLight1337 enabled auto-merge (squash) December 6, 2025 13:51
@DarkLight1337 DarkLight1337 removed the ready-run-all-tests Trigger CI with all tests for wide-ranging PRs label Dec 7, 2025
@vllm-bot vllm-bot merged commit 27f4c2f into vllm-project:main Dec 7, 2025
66 of 69 checks passed
@DarkLight1337
Member Author
Force-merging as LoRA tests are timing out on main

@DarkLight1337 DarkLight1337 deleted the renderer-config branch December 7, 2025 07:15
DarkLight1337 added a commit to DarkLight1337/vllm that referenced this pull request Dec 7, 2025
Labels

  • deepseek Related to DeepSeek models
  • documentation Improvements or additions to documentation
  • frontend
  • kv-connector
  • llama Related to Llama models
  • multi-modality Related to multi-modality (#4194)
  • qwen Related to Qwen models
  • ready ONLY add when PR is ready to merge/full CI is needed
  • speculative-decoding
  • structured-output
  • tpu Related to Google TPUs
  • v1

Projects

Status: Done
