Conversation


@JyotinderSingh JyotinderSingh commented Dec 4, 2025

Description of the change

This change moves the logic for identifying a KerasHub model's layer structure (such as embedding layers and transformer blocks) into keras-hub itself. This is achieved by implementing get_quantization_layer_structure hooks on the models.

The original setup placed this layer-discovery logic in the keras core library, which created a dependency inversion: keras had to know the internal implementation details of models in keras-hub. That made the code difficult to maintain, prone to going stale, and harder to understand.
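For context, a hook of this shape might look like the minimal sketch below. The method name comes from this PR, but the toy model, its attribute names, and the dictionary return format ("embedding"/"blocks" keys) are assumptions for illustration, not the actual keras-hub contract.

```python
import keras


class TinyCausalLM(keras.Model):
    """Toy stand-in for a KerasHub causal LM; all names are illustrative."""

    def __init__(self, vocab_size=100, hidden_dim=16, num_blocks=2):
        super().__init__()
        self.token_embedding = keras.layers.Embedding(vocab_size, hidden_dim)
        self.transformer_blocks = [
            keras.layers.Dense(hidden_dim, name=f"block_{i}")
            for i in range(num_blocks)
        ]

    def call(self, inputs):
        x = self.token_embedding(inputs)
        for block in self.transformer_blocks:
            x = block(x)
        return x

    def get_quantization_layer_structure(self):
        # The model self-describes the parts GPTQ cares about, so the
        # core library never needs to inspect its internals. The dict
        # format is an assumption, not the real keras-hub contract.
        return {
            "embedding": self.token_embedding,
            "blocks": self.transformer_blocks,
        }
```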

Reference

keras-team/keras#21894

Checklist

  • I have added all the necessary unit tests for my change.
  • I have verified that my change does not break existing code and works with all backends (TensorFlow, JAX, and PyTorch).
  • My PR is based on the latest changes of the main branch (if unsure, rebase the code).
  • I have followed the Keras Hub Model contribution guidelines in making these changes.
  • I have followed the Keras Hub API design guidelines in making these changes.
  • I have signed the Contributor License Agreement.

@JyotinderSingh JyotinderSingh changed the title from Adds get_quantization_layer_structure hooks to Adds get_quantization_layer_structure hooks for GPTQ on Dec 4, 2025
@gemini-code-assist

Summary of Changes

Hello @JyotinderSingh, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request refactors how KerasHub models expose their internal layer structure for quantization. By implementing get_quantization_layer_structure hooks directly within the models, it removes the Keras core library's previous dependency on the internal details of KerasHub model implementations. This resolves a key dependency inversion problem and leads to a more modular, robust, and easier-to-maintain codebase, as models now self-describe their quantization-relevant components.

Highlights

  • Dependency Inversion: The logic for identifying a KerasHub model's layer structure has been shifted from the Keras core library to the KerasHub models themselves, resolving a dependency inversion issue (a consumer-side sketch follows this list).
  • New Hook Implementation: New get_quantization_layer_structure hooks have been implemented in various KerasHub models (CausalLM, GemmaCausalLM, GPT2CausalLM, MaskedLM) to expose their internal layer configurations for quantization.
  • Improved Maintainability: This change enhances code maintainability, reduces the likelihood of outdated code, and improves the overall understandability of the model architectures by centralizing layer discovery within the models.
  • Test Coverage: Unit tests have been added or updated for Gemma, GPT2, Mistral, and Phi3 Causal LMs to verify the correct functionality of the new get_quantization_layer_structure method.
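To make the Dependency Inversion point concrete, the consumer side of the hook might reduce to something like the hypothetical helper below; the function name and error message are illustrative and not part of the actual Keras API.

```python
def get_gptq_target_layers(model):
    """Hypothetical core-library helper: discover the quantizable
    structure without any model-specific branching."""
    hook = getattr(model, "get_quantization_layer_structure", None)
    if hook is None:
        raise ValueError(
            f"{type(model).__name__} does not implement "
            "get_quantization_layer_structure(); GPTQ needs the model "
            "to describe its embedding and transformer-block layout."
        )
    return hook()


# Usage with the toy model sketched earlier:
#   structure = get_gptq_target_layers(TinyCausalLM())
#   structure["blocks"]  # ordered blocks to quantize sequentially
```

With this shape, supporting a new architecture only requires the model to implement the hook; the quantizer itself never changes.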


@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request introduces get_quantization_layer_structure hooks to various models, which is a good step towards decoupling quantization logic from the core library and improving maintainability. The implementations for specific models like Gemma and GPT-2 correctly encapsulate model-specific pre-processing logic.

I've left a couple of minor suggestions to improve code quality:

  • One comment addresses code duplication between CausalLM and MaskedLM for better long-term maintainability (a hypothetical refactor is sketched after this comment).
  • Another comment points out an unnecessary local import that can be removed to adhere to standard Python style conventions.

The added tests are thorough and correctly validate the new functionality. Overall, this is a solid contribution.
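As a rough illustration of the deduplication suggestion, the shared hook logic could live in a single mixin that both task classes reuse. This is purely hypothetical; the attribute names and class layout are assumptions, not the PR's actual resolution.

```python
class QuantizationStructureMixin:
    """Hypothetical shared home for the hook logic duplicated between
    CausalLM and MaskedLM (attribute names are assumptions)."""

    def get_quantization_layer_structure(self):
        backbone = self.backbone
        return {
            "embedding": backbone.token_embedding,
            "blocks": backbone.transformer_layers,
        }


# Both task classes could then inherit the hook:
#   class CausalLM(QuantizationStructureMixin, Task): ...
#   class MaskedLM(QuantizationStructureMixin, Task): ...
```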


@divyashreepathihalli divyashreepathihalli left a comment


LGTM

@divyashreepathihalli divyashreepathihalli merged commit 5d0c852 into keras-team:master Dec 4, 2025
9 of 12 checks passed
@JyotinderSingh JyotinderSingh deleted the quantization-hooks branch December 5, 2025 01:40