Introduces layer filtering for quantization and fixes GPTQ dependency inversion #21894
Conversation
Summary of Changes

Hello @JyotinderSingh, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed. This pull request significantly enhances the flexibility and control over model quantization by introducing a layer-filtering mechanism and fixing the dependency inversion in GPTQ. Users can now precisely select which layers to quantize, and the GPTQ process no longer relies on internal heuristics to infer the model structure; it requires an explicit definition instead. These changes make the quantization API more robust and adaptable to various model designs.
Code Review
This pull request introduces two significant improvements to the quantization functionality. First, it adds a flexible layer-filtering mechanism to the quantize method, allowing users to selectively quantize parts of a model using a regex, a list of regexes, or a callable. Second, it refactors the GPTQ post-training quantization implementation to decouple it from the Model class, a design improvement that enhances modularity and testability. The changes are well implemented and include corresponding tests. I have a few minor suggestions to improve code clarity and robustness.
Codecov Report

❌ Patch coverage is … Additional details and impacted files:

```
@@            Coverage Diff             @@
##           master   #21894      +/-   ##
==========================================
- Coverage   82.36%   82.36%    -0.01%
==========================================
  Files         578      579        +1
  Lines       59816    59830       +14
  Branches     9387     9394        +7
==========================================
+ Hits        49270    49278        +8
- Misses       8147     8150        +3
- Partials     2399     2402        +3
```
Overview
This PR introduces two major improvements to the Keras quantization API:
- Adds a `filters` argument to `model.quantize()`, allowing users to specify exactly which layers should be quantized using regexes or callables.
- Requires the model's layer structure for GPTQ to be defined explicitly, either through `GPTQConfig` or a new model hook, `get_quantization_layer_structure`.

Key Changes
Selective Quantization (`filters`)

- New signature: `model.quantize(mode, config, filters=...)`.
- The `filters` argument accepts a regex string, a list of regex strings, or a callable.
- Adds `keras.src.quantizers.utils.should_quantize_layer` to centralize the filtering logic.

Explicit GPTQ Structure
- Removes `_get_backbone_layers` and `_get_custom_layers` from `gptq_core.py`. The previous logic attempted to guess where the embedding and transformer blocks were, which was fragile and dependent on specific KerasHub naming conventions. This caused dependency inversion, where an upstream library had to be aware of downstream implementation details.
- Adds a `model.get_quantization_layer_structure(mode)` method. Model authors can override this to return the dictionary `{'pre_block_layers': [...], 'sequential_blocks': [...]}`.
- Adds a `quantization_layer_structure` field to `GPTQConfig`.
- Resolution order: `config.quantization_layer_structure` is checked first, then `model.get_quantization_layer_structure(mode)`; if neither provides a structure, a `ValueError` is raised.

Usage Examples
1. Using Filters (Regex)
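A minimal sketch of the regex form; the toy model and its layer names are illustrative, and only the `filters` argument itself comes from this PR:

```python
import keras

# Toy model; layer names are chosen so the regex below matches a subset.
model = keras.Sequential(
    [
        keras.Input(shape=(16,)),
        keras.layers.Dense(32, name="dense_attention"),
        keras.layers.Dense(32, name="dense_ffn"),
        keras.layers.Dense(10, name="output_head"),
    ]
)

# Only layers whose names match the pattern are quantized;
# a list of regex strings is also accepted.
model.quantize("int8", filters=r"dense_.*")
```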
2. Using Filters (Callable)
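A minimal sketch of the callable form, assuming the callable receives each layer instance and returns a boolean (the exact callable signature is not spelled out above, so treat it as an assumption):

```python
import keras

model = keras.Sequential(
    [
        keras.Input(shape=(16,)),
        keras.layers.Dense(64, name="hidden_0"),
        keras.layers.Dense(64, name="hidden_1"),
        keras.layers.Dense(10, name="classifier_head"),
    ]
)

def skip_head(layer):
    # Quantize every layer except the final classifier head.
    return not layer.name.startswith("classifier")

model.quantize("int8", filters=skip_head)
```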
3. GPTQ with Explicit Structure
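A sketch under stated assumptions: the `keras.quantizers.GPTQConfig` import path and its `dataset`/`tokenizer` arguments follow the pre-existing GPTQ API, while `model`, `calibration_texts`, `tokenizer`, and the attribute names `embedding`/`transformer_blocks` are placeholders; the structure dictionary keys and the `quantization_layer_structure` field come from this PR:

```python
from keras.quantizers import GPTQConfig

# `model`, `calibration_texts`, and `tokenizer` are placeholders: a real run
# needs a transformer-style model plus calibration data and a tokenizer,
# per the existing GPTQ workflow.
structure = {
    "pre_block_layers": [model.embedding],                # run once, before the blocks
    "sequential_blocks": list(model.transformer_blocks),  # quantized block by block
}

config = GPTQConfig(
    dataset=calibration_texts,
    tokenizer=tokenizer,
    quantization_layer_structure=structure,  # field added by this PR
)
model.quantize("gptq", config=config)
```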
Testing
- Adds tests for the new filtering behavior in `model_test.py`.
- Updates `gptq_core_test.py` and `gptq_test.py` to use explicit structure definitions.
- Adds tests for the `should_quantize_layer` utility (a rough sketch follows this list).
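A rough illustration of such a test; the exact signature of `should_quantize_layer` is an assumption, as only the utility's name and module are given above:

```python
from keras import layers
from keras.src.quantizers.utils import should_quantize_layer

def test_regex_filter_matches_layer_name():
    layer = layers.Dense(4, name="dense_ffn")
    # Assumed call shape: should_quantize_layer(layer, filters) -> bool;
    # the real signature may differ.
    assert should_quantize_layer(layer, r"dense_.*")
    assert not should_quantize_layer(layer, r"attention_.*")
```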
Related Changes

Since heuristic-based auto-detection of layers is no longer supported at the Keras level, KerasHub models are now required to define their own `get_quantization_layer_structure` hooks. A companion PR has been opened at keras-team/keras-hub#2462.
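A sketch of what such a hook override might look like; the model class below is illustrative rather than an actual KerasHub model, and the dictionary shape comes from the PR description:

```python
import keras

class TinyTransformer(keras.Model):
    """Illustrative model, not an actual KerasHub class."""

    def __init__(self, **kwargs):
        super().__init__(**kwargs)
        self.embedding = keras.layers.Embedding(1000, 64)
        self.blocks = [
            keras.layers.Dense(64, name=f"block_{i}") for i in range(4)
        ]
        self.head = keras.layers.Dense(1000)

    def call(self, inputs):
        x = self.embedding(inputs)
        for block in self.blocks:
            x = block(x)
        return self.head(x)

    def get_quantization_layer_structure(self, mode):
        # Dictionary shape per the PR description: layers run once before
        # the blocks, then the blocks to quantize sequentially.
        return {
            "pre_block_layers": [self.embedding],
            "sequential_blocks": self.blocks,
        }
```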