Fix PERPLEXITY task #1037
Conversation
The docs for this PR live here. All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.

thanks for the fix! happy to review once formatted, don't hesitate to ping me for review :)

Hi @NathanHB, thanks for the response! I have reformatted the file back to its original state. Can you review? 🙏

hi @NathanHB, not quite sure why it is failing some of the tests. I don't think my fix would affect anything besides ppl tasks...

thanks, taking a look!

not sure what the issue is, I will take a deeper look this week. If you can, in the meantime, run the tests locally?
1. Remove KeyError: the mc1_targets field only exists in the multiple_choice subset, not in the generation subset used by the truthfulqa:gen task.
2. Fix backwards answer-processing logic that was replacing correct answers with periods instead of preserving the answer text.

These fixes make truthfulqa:gen functional for proper evaluation. Task format: lighteval|truthfulqa:gen|0
Replace exact_match with bleu_1 and bleu_4 metrics for generative TruthfulQA evaluation. Exact match was causing 0 scores because model responses never exactly matched reference strings, even when factually correct. BLEU metrics provide proper n-gram overlap scoring as intended by the original TruthfulQA paper.
… call on metadata dict
- Created simple_truthfulqa_judge.py: custom task for TruthfulQA generation with a GPT-4o judge
- Judge evaluates model responses against ground-truth answers for truthfulness
- Treats "I don't know" responses as correct (truthful uncertainty)
- Uses binary scoring: CORRECT (1) or INCORRECT (0) based on factual accuracy
- Applied code formatting improvements to llm_as_judge.py

🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <[email protected]>
Fix PERPLEXITY task evaluation crash due to incorrect sampling method detection
Issue
Evaluating perplexity-based tasks (e.g., lighteval|wikitext:103:document_level|0) causes an IndexError crash:
File "lighteval/logging/info_loggers.py", line 298, in aggregate
hash_types: list[str] = list(self.compiled_details.values())[0].hashes.keys()
IndexError: list index out of range
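For context, the failure boils down to indexing into a per-task details mapping that ends up empty. A stripped-down illustration of that pattern (hypothetical shapes, not the actual DetailsLogger internals):

```python
# Hypothetical stand-in for the logger state: every cached response for the task
# was filtered out upstream, so no per-task details were ever compiled.
compiled_details: dict[str, object] = {}

# Mirrors list(self.compiled_details.values())[0] on the empty dict:
first_task_details = list(compiled_details.values())[0]  # IndexError: list index out of range
```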
Root Cause
The @cached decorator in cache_management.py incorrectly filters out all PERPLEXITY responses:
- get_sampling_method() cannot distinguish between PERPLEXITY and LOGPROBS responses:
  - Both have non-empty logprobs, so it returns SamplingMethod.LOGPROBS for PERPLEXITY tasks.
  - The key difference: PERPLEXITY responses have empty text=[], while LOGPROBS responses typically have text (see the sketch after this list).
- The decorator filters cached results by sampling method:
  - It looks for SamplingMethod.PERPLEXITY but finds SamplingMethod.LOGPROBS.
  - As a result, all 62 valid responses are filtered out → it returns an empty list [].
- DetailsLogger.aggregate() assumes at least one task has details:
  - It accesses list(self.compiled_details.values())[0] on the empty dict.
  - This raises the IndexError shown above.
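To make the distinction concrete, here is a minimal sketch of how a detector could separate the two cases by checking for an empty text field. This is an illustration only, not the actual lighteval code: the ModelResponse shape and the detect_sampling_method helper are assumptions made for the example.

```python
from dataclasses import dataclass, field
from enum import Enum, auto


class SamplingMethod(Enum):
    GENERATIVE = auto()
    LOGPROBS = auto()
    PERPLEXITY = auto()


@dataclass
class ModelResponse:
    # Hypothetical response shape: PERPLEXITY responses carry logprobs but no generated text.
    text: list[str] = field(default_factory=list)
    logprobs: list[float] = field(default_factory=list)


def detect_sampling_method(response: ModelResponse) -> SamplingMethod:
    """Illustrative detector: tell PERPLEXITY apart from LOGPROBS via the empty text field."""
    if response.logprobs and not response.text:
        # Log-likelihoods with nothing generated -> perplexity-style evaluation.
        return SamplingMethod.PERPLEXITY
    if response.logprobs:
        return SamplingMethod.LOGPROBS
    return SamplingMethod.GENERATIVE


# Example: a perplexity-style response has logprobs but an empty text list.
assert detect_sampling_method(ModelResponse(text=[], logprobs=[-2.3, -0.7])) == SamplingMethod.PERPLEXITY
assert detect_sampling_method(ModelResponse(text=["an answer"], logprobs=[-0.1])) == SamplingMethod.LOGPROBS
```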
Fixes