🚀 The feature, motivation and pitch
The current Llava example uses Llama 2 7B as the pretrained text model: https://github.com/pytorch/executorch/blob/main/examples/models/llava/README.md
The latest quantized Llama 1B/3B models offer a good balance of accuracy and size. Let's make the text backbone swappable so these models can be used instead.
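One possible shape for the swap is a small registry keyed by model name, which the example's export path could consume. This is only an illustrative sketch; none of the names below (`TextModelConfig`, `TEXT_MODELS`, `select_text_model`) are part of the ExecuTorch API.

```python
from dataclasses import dataclass


# Hypothetical config for a swappable Llava text backbone.
# Field names are illustrative, not the ExecuTorch API.
@dataclass(frozen=True)
class TextModelConfig:
    name: str
    params_billion: float
    quantized: bool


# Candidate text backbones the example could accept.
TEXT_MODELS = {
    "llama2-7b": TextModelConfig("llama2-7b", 7.0, False),  # current default
    "llama-1b": TextModelConfig("llama-1b", 1.0, True),
    "llama-3b": TextModelConfig("llama-3b", 3.0, True),
}


def select_text_model(key: str = "llama2-7b") -> TextModelConfig:
    """Resolve a model key to its config, defaulting to the current 7B."""
    if key not in TEXT_MODELS:
        raise ValueError(
            f"unknown text model: {key}; choose from {sorted(TEXT_MODELS)}"
        )
    return TEXT_MODELS[key]
```

With this shape, the Llava export script would only need a single `--text-model` style argument to pick the backbone, keeping the 7B model as the backward-compatible default.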
Alternatives
No response
Additional context
No response
RFC (Optional)
No response