
@ScottHoang


Fix PERPLEXITY task evaluation crash due to incorrect sampling method detection

Issue

Evaluating perplexity-based tasks (e.g., lighteval|wikitext:103:document_level|0) causes an IndexError crash:

File "lighteval/logging/info_loggers.py", line 298, in aggregate
hash_types: list[str] = list(self.compiled_details.values())[0].hashes.keys()
IndexError: list index out of range
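For context, the crash reduces to taking the first element of an empty dict's values; a minimal reproduction of the failing line:

    # compiled_details is empty because every response was filtered out upstream
    compiled_details: dict = {}
    hash_types = list(list(compiled_details.values())[0].hashes.keys())
    # IndexError: list index out of range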

Root Cause

The @cached decorator in cache_management.py incorrectly filters out all PERPLEXITY responses:

  1. Incorrect sampling method detection (cache_management.py:203-208):
    - get_sampling_method() cannot distinguish between PERPLEXITY and LOGPROBS responses (sketched after this list)
    - Both have non-empty logprobs, so it returns SamplingMethod.LOGPROBS for PERPLEXITY tasks
    - The key difference: PERPLEXITY responses have an empty text=[], while LOGPROBS responses typically have text
  2. Filtering removes all responses (cache_management.py:421-423):
    - The decorator filters cached results by sampling method
    - It looks for SamplingMethod.PERPLEXITY but finds SamplingMethod.LOGPROBS
    - All 62 valid responses are filtered out → an empty list [] is returned
  3. Downstream crash (info_loggers.py:298):
    - DetailsLogger.aggregate() assumes at least one task has details
    - It accesses list(self.compiled_details.values())[0] on an empty dict
    - This raises IndexError
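A simplified sketch of the misclassification in item 1, using a stand-in for lighteval's SamplingMethod enum and a restatement of the pre-fix logic (not the verbatim source):

    from enum import Enum, auto

    class SamplingMethod(Enum):  # minimal stand-in for lighteval's enum
        GENERATIVE = auto()
        LOGPROBS = auto()
        PERPLEXITY = auto()

    def old_get_sampling_method(sample: dict) -> SamplingMethod | None:
        # Pre-fix behavior (simplified): logprobs alone decide the method,
        # so PERPLEXITY samples (logprobs present, text empty) are mislabeled.
        if len(sample.get("logprobs", [])) > 0:
            return SamplingMethod.LOGPROBS
        if len(sample.get("text", [])) > 0:
            return SamplingMethod.GENERATIVE
        return None

    perplexity_sample = {"logprobs": [-1.2, -0.7], "text": []}
    print(old_get_sampling_method(perplexity_sample))  # SamplingMethod.LOGPROBS, not PERPLEXITY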

Fixes

  1. Fix sampling method detection (src/lighteval/utils/cache_management.py:203-208):
  def get_sampling_method(self, sample: dict) -> SamplingMethod | None:
      if len(sample.get("logprobs", [])) > 0:
          # PERPLEXITY tasks have logprobs but empty text
          if len(sample.get("text", [])) == 0:
              return SamplingMethod.PERPLEXITY
          return SamplingMethod.LOGPROBS
      if len(sample.get("text", [])) > 0:
          return SamplingMethod.GENERATIVE
      return None
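As a sanity check, the fixed logic (restated as a free function, with the same SamplingMethod stand-in as in the earlier sketch) now separates all three cases:

    from enum import Enum, auto

    class SamplingMethod(Enum):  # minimal stand-in for lighteval's enum
        GENERATIVE = auto()
        LOGPROBS = auto()
        PERPLEXITY = auto()

    def get_sampling_method(sample: dict) -> SamplingMethod | None:
        if len(sample.get("logprobs", [])) > 0:
            if len(sample.get("text", [])) == 0:
                return SamplingMethod.PERPLEXITY
            return SamplingMethod.LOGPROBS
        if len(sample.get("text", [])) > 0:
            return SamplingMethod.GENERATIVE
        return None

    assert get_sampling_method({"logprobs": [-1.2], "text": []}) is SamplingMethod.PERPLEXITY
    assert get_sampling_method({"logprobs": [-1.2], "text": ["..."]}) is SamplingMethod.LOGPROBS
    assert get_sampling_method({"logprobs": [], "text": ["..."]}) is SamplingMethod.GENERATIVE
    assert get_sampling_method({"logprobs": [], "text": []}) is None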

@HuggingFaceDocBuilderDev
Collaborator

The docs for this PR live here. All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.

@NathanHB
Member

NathanHB commented Nov 5, 2025

Thanks for the fix! Happy to review once it's formatted; don't hesitate to ping me for review :)

@ScottHoang
Author

Hi @NathanHB, thanks for the response! I have reformatted the file back to its original state. Can you review? 🙏

@ScottHoang
Author

ScottHoang commented Nov 7, 2025

Hi @NathanHB, not quite sure why it's failing some of the tests. I don't think my fix would affect anything besides perplexity (ppl) tasks...

@NathanHB
Member

Thanks, taking a look!

@NathanHB
Member

Not sure what the issue is; I will take a deeper look this week. In the meantime, could you run the tests locally?

1. Fix KeyError: the mc1_targets field only exists in the multiple_choice subset,
   not the generation subset used by the truthfulqa:gen task (illustrated below)

2. Fix the backwards answer-processing logic that was replacing correct answers
   with periods instead of preserving the answer text

These fixes make truthfulqa:gen functional for proper evaluation.
Task format: lighteval|truthfulqa:gen|0
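For illustration, the field layout differs between the two subsets (field names from the Hugging Face truthful_qa dataset; row contents abbreviated):

    # generation subset row: no mc1_targets key
    gen_row = {
        "question": "...",
        "best_answer": "...",
        "correct_answers": ["..."],
        "incorrect_answers": ["..."],
    }
    # multiple_choice subset row: mc1_targets is present
    mc_row = {
        "question": "...",
        "mc1_targets": {"choices": ["...", "..."], "labels": [1, 0]},
    }

    gen_row["mc1_targets"]  # KeyError: 'mc1_targets'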
Duc Hoang and others added 5 commits December 10, 2025 18:53
Replace exact_match with bleu_1 and bleu_4 metrics for generative
TruthfulQA evaluation. Exact match was causing 0 scores because
model responses never exactly matched reference strings, even when
factually correct. BLEU metrics provide proper n-gram overlap
scoring as intended by the original TruthfulQA paper.
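An independent illustration of the difference, using NLTK's sentence_bleu rather than lighteval's metric implementation:

    from nltk.translate.bleu_score import sentence_bleu

    reference = "nothing in particular happens".split()
    hypothesis = "nothing particular happens to you".split()  # factually correct paraphrase

    exact_match = float(hypothesis == reference)                     # 0.0
    bleu_1 = sentence_bleu([reference], hypothesis, weights=(1.0,))  # 0.6: 3/5 unigrams overlap
    print(exact_match, bleu_1)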
- Created simple_truthfulqa_judge.py: Custom task for TruthfulQA generation with GPT-4o judge
- Judge evaluates model responses against ground truth answers for truthfulness
- Treats "I don't know" responses as correct (truthful uncertainty)
- Uses binary scoring: CORRECT (1) or INCORRECT (0) based on factual accuracy (sketched after this commit message)
- Applied code formatting improvements to llm_as_judge.py

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <[email protected]>
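A purely hypothetical sketch of the binary scoring rule described above; the actual prompt and verdict parsing in simple_truthfulqa_judge.py may differ:

    def score_response(model_answer: str, judge_verdict: str) -> int:
        # Truthful uncertainty counts as correct.
        if "i don't know" in model_answer.lower():
            return 1
        # Otherwise map the judge's CORRECT/INCORRECT verdict to a binary score.
        return 1 if judge_verdict.strip().upper().startswith("CORRECT") else 0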