Conversation


@hypdeb hypdeb commented Dec 5, 2025

Purpose

Sampling tokens uniformly at random from a tokenizer's vocabulary produces benchmarking data that is not representative when using speculative decoding or expert parallelism.

On the other hand, random datasets are very flexible and offer complete control over the input and output sequence lengths, which is desirable for creating reproducible benchmarks.

This PR introduces a new type of benchmarking dataset, TxtSlicesDataset, which offers a compromise between the flexibility of a random dataset and the fidelity of a real dataset: it samples slices from a user-provided .txt file.
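The core idea can be sketched as follows. This is a minimal, hypothetical illustration (the function name `sample_text_slices` and the word-level splitting are stand-ins, not the PR's actual tokenizer-based implementation): draw contiguous slices of a fixed length from a corpus, using a dedicated seeded RNG so the benchmark stays reproducible.

```python
import random


def sample_text_slices(
    text: str, num_samples: int, slice_len: int, seed: int = 0
) -> list[str]:
    """Sample contiguous word slices from a corpus.

    Word-level stand-in for the token-level slicing the PR describes:
    slices retain natural-language structure, unlike random token IDs,
    while still giving exact control over the sample length.
    """
    words = text.split()
    if len(words) < slice_len:
        raise ValueError("corpus is shorter than the requested slice length")
    rng = random.Random(seed)  # private RNG: reproducible regardless of global state
    slices = []
    for _ in range(num_samples):
        start = rng.randrange(len(words) - slice_len + 1)
        slices.append(" ".join(words[start : start + slice_len]))
    return slices
```

Because the slices come from real text, they are far more predictable for a draft model than random tokens, and their routing statistics under expert parallelism resemble real traffic, while the caller still fixes the input length exactly.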

Content

  • The implementation of TxtSlicesDataset
  • Fixes to typing in datasets.py
  • Factored out internal utils from datasets.py in an attempt to bring the file to a more manageable size
  • A unit test for the new dataset type

@mergify mergify bot added the performance Performance-related issues label Dec 5, 2025

@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request introduces TxtSlicesDataset for benchmarking, which samples data from a text file. It also includes significant refactoring by moving utility functions from datasets.py to a new dataset_utils.py file and improving typing throughout. The changes are well-structured. My review focuses on improving the robustness and reproducibility of the new TxtSlicesDataset and its tests. I've pointed out a resource leak in the tests and potential for non-reproducible behavior due to the use of the global random module. I've also identified a missing check that could lead to a crash with certain input files.
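The two reproducibility concerns raised above (use of the global random module, and a missing guard for files shorter than one slice) can be addressed with a pattern like the following. This is a hedged sketch, not the PR's code; the helper name `pick_offsets` is hypothetical.

```python
import random


def pick_offsets(n_tokens: int, slice_len: int, count: int, seed: int) -> list[int]:
    """Draw slice start offsets with a private RNG.

    Using random.Random(seed) instead of the module-level functions means
    that seeding or drawing from the global RNG elsewhere in the benchmark
    cannot perturb the offsets, keeping runs reproducible.
    """
    if n_tokens < slice_len:
        # Guard against the crash mentioned in the review: the input file
        # must contain at least one full slice.
        raise ValueError(f"need at least {slice_len} tokens, got {n_tokens}")
    rng = random.Random(seed)
    return [rng.randrange(n_tokens - slice_len + 1) for _ in range(count)]
```

The resource-leak point applies to the tests: opening the .txt fixture with a `with open(...)` context manager (or `pathlib.Path.read_text`) guarantees the file handle is closed even when an assertion fails.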


@chatgpt-codex-connector chatgpt-codex-connector bot left a comment


💡 Codex Review

Here are some automated review suggestions for this pull request.

ℹ️ About Codex in GitHub

Codex has been enabled to automatically review pull requests in this repo. Reviews are triggered when you

  • Open a pull request for review
  • Mark a draft as ready
  • Comment "@codex review".

If Codex has suggestions, it will comment; otherwise it will react with 👍.

When you sign up for Codex through ChatGPT, Codex can also answer questions or update the PR, like "@codex address that feedback".


mergify bot commented Dec 6, 2025

This pull request has merge conflicts that must be resolved before it can be
merged. Please rebase the PR, @hypdeb.

https://docs.github.com/en/pull-requests/collaborating-with-pull-requests/working-with-forks/syncing-a-fork

@mergify mergify bot added the needs-rebase label Dec 6, 2025
Signed-off-by: jdebache <jdebache@nvidia.com>
@hypdeb hypdeb force-pushed the datasets_refactor branch from d1ba173 to 5c92be2 on December 6, 2025 11:20
@mergify mergify bot removed the needs-rebase label Dec 6, 2025