Skip to content

Conversation

@shailja-thakur
Copy link

Summary

This PR adds the ability to test Mellea m-program robustness by integrating with BenchDrift- semantic variation generation and evaluation pipeline. Users can now systematically evaluate how consistently their m-programs answer semantically equivalent variations of a problem.

What This Enables

  • Generate semantic variations of a problem (different phrasings, same meaning)
  • Execute m-programs on all variations to measure consistency
  • Measure pass rates, drift patterns, and identify failure modes
  • Understand where m-programs break and where they perform well

Key Components

  • run_benchdrift_pipeline(): Orchestrates BenchDrift's 3-stage pipeline (generate variations → execute m-program → evaluate)
  • MelleaModelClientAdapter: Bridges Mellea m-programs to BenchDrift's test framework
  • analyze_robustness_from_probes(): Computes robustness metrics from test results
  • Configurable variation strategies (generic, cluster-based, persona-based, long-context)

…ting

- Add variation_types parameter to run_benchdrift_pipeline() to allow users to customize which semantic variation types to generate (generic, cluster_variations, persona, long_context)
- Update test/1_test_robustness_testing.py to demonstrate variation_types usage
- Add docs/ROBUSTNESS_TESTING.md with comprehensive documentation for robustness testing workflow
- Enables fine-grained control over robustness testing configurations

🤖 Generated with Claude Code

Co-Authored-By: Claude Haiku 4.5 <noreply@anthropic.com>
Copy link
Collaborator

@delucs21 delucs21 left a comment

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

Reviewed with some necessary changes before merging

### Step 1: Install BenchDrift
Install BenchDrift from source (required for robustness testing pipeline):
```bash
git clone https://github.com/ritterinvest/BenchDrift.git
Copy link
Collaborator

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

This repo returns 404. I proceeded with testing using the internal repo, but BenchDrift needs to be in a publicly accessible repo.

Copy link
Collaborator

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

I suggest renaming this file for consistency. Perhaps something like "test_benchdrift_robustness.py"

Copy link
Collaborator

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

I suggest renaming this to be more specific - something like "benchdrift_model_client_adapter.py"

import logging
import tempfile
from typing import List, Dict, Any, Callable, Optional, Tuple

Copy link
Collaborator

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

missing import os

'unified_file': temp_output_filename,
'input_problems': temp_input_filename,
'batch_size': 2,
'max_workers': 4,
Copy link
Collaborator

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

max_workers is hardcoded instead of using the passed parameter.


# --- Core API Functions ---

def run_benchdrift_pipeline(
Copy link
Collaborator

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

Documentation defines variation_types parameter for run_benchdrift_pipeline, but it is missing from the function implementation. Following the doc raises a TypeError.

def run_benchdrift_pipeline(
baseline_problem: str,
ground_truth_answer: str,
m_program_callable: Optional[Callable[[str, Dict[str, Any]], Any]] = None,
Copy link
Collaborator

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

m_program_callable type hint implies 2 arguments, but invocations define 1. Should this be Callable[[str], Any] instead?

Sign up for free to join this conversation on GitHub. Already have an account? Sign in to comment

Labels

None yet

Projects

None yet

Development

Successfully merging this pull request may close these issues.

2 participants