Conversation

@safayavatsal
- Implement `WhisperAdapter` class for efficient fine-tuning (see the sketch after this list)
- Add `AdaptedWhisperModel` with selective parameter freezing
- Create `FineTuningDataset` for data preparation
- Include `WhisperFineTuner` main training class
- Support adapter saving/loading functionality
- Address fine-tuning requests from GitHub Discussions openai#64 (Finetuning/Training code ?) and openai#759 (Fine-tuning Whisper)
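The adapter and freezing pieces could look roughly like the sketch below. This is a minimal illustration, not the PR's actual diff: only the class names (`WhisperAdapter`, `AdaptedWhisperModel`) come from the description above, while the bottleneck width, the zero-initialization, and the hook-based wiring are assumptions.

```python
import torch
import torch.nn as nn


class WhisperAdapter(nn.Module):
    """Bottleneck adapter: down-project, ReLU, up-project, residual add.

    Zero-initializing the up-projection makes each adapter start as an
    identity function, so training begins from the pretrained behavior.
    (Hypothetical sketch; `bottleneck_dim=64` is a placeholder.)
    """

    def __init__(self, dim: int, bottleneck_dim: int = 64):
        super().__init__()
        self.down = nn.Linear(dim, bottleneck_dim)
        self.up = nn.Linear(bottleneck_dim, dim)
        nn.init.zeros_(self.up.weight)
        nn.init.zeros_(self.up.bias)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x + self.up(torch.relu(self.down(x)))


class AdaptedWhisperModel(nn.Module):
    """Freezes a loaded Whisper model and attaches an adapter after each
    residual block of the selected target modules (assumed wiring)."""

    def __init__(self, model, target_modules=("encoder", "decoder")):
        super().__init__()
        self.model = model
        for p in self.model.parameters():
            p.requires_grad = False  # selective freezing: base model stays fixed
        widths = {
            "encoder": model.dims.n_audio_state,
            "decoder": model.dims.n_text_state,
        }
        self.adapters = nn.ModuleList()
        for name in target_modules:
            for block in getattr(model, name).blocks:
                adapter = WhisperAdapter(widths[name])
                self.adapters.append(adapter)
                # A forward hook that returns a value replaces the block's
                # output, so the adapter runs after every selected block.
                block.register_forward_hook(
                    lambda _mod, _inp, out, a=adapter: a(out)
                )

    def forward(self, mel: torch.Tensor, tokens: torch.Tensor) -> torch.Tensor:
        return self.model(mel, tokens)
```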

Features:

- Parameter-efficient fine-tuning using adapter layers
- Flexible target module selection
- Integrated training pipeline with validation (data-preparation and training sketches follow this list)
- Compatible with all Whisper model sizes
- Memory-efficient training approach
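On the data side, a plausible shape for the dataset class, assuming (audio path, transcript) pairs and the whisper package's own preprocessing helpers (`load_audio`, `pad_or_trim`, `log_mel_spectrogram`, `get_tokenizer`); the constructor arguments are illustrative, and large-v3 would additionally need `n_mels=128` in `log_mel_spectrogram`:

```python
import torch
from torch.utils.data import Dataset

import whisper
from whisper.tokenizer import get_tokenizer


class FineTuningDataset(Dataset):
    """Turns (audio_path, transcript) pairs into (mel, tokens) tensors
    using the same preprocessing Whisper applies at inference time."""

    def __init__(self, samples, multilingual: bool = True, language: str = "en"):
        self.samples = samples  # list of (audio_path, transcript) pairs
        self.tokenizer = get_tokenizer(
            multilingual, language=language, task="transcribe"
        )

    def __len__(self):
        return len(self.samples)

    def __getitem__(self, idx):
        audio_path, text = self.samples[idx]
        # Pad/trim to the 30-second window, then compute log-mel features.
        audio = whisper.pad_or_trim(whisper.load_audio(audio_path))
        mel = whisper.log_mel_spectrogram(audio)
        # <|sot|> sequence + transcript tokens + <|eot|> as the target.
        tokens = (
            list(self.tokenizer.sot_sequence)
            + self.tokenizer.encode(text)
            + [self.tokenizer.eot]
        )
        return mel, torch.tensor(tokens, dtype=torch.long)
```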

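Finally, a sketch of how the training class and the adapter save/load round-trip might fit together. Everything here is illustrative: the real `WhisperFineTuner` may expose different arguments, device placement and mixed precision are omitted for brevity, and `adapters.pt` is a placeholder filename.

```python
import torch
import torch.nn.functional as F
from torch.utils.data import DataLoader


class WhisperFineTuner:
    """Minimal train/validate loop that updates only the adapter weights."""

    def __init__(self, model, train_set, val_set, lr=1e-4, epochs=3):
        self.model = model
        # batch_size=1 sidesteps token padding; a real pipeline would use a
        # collate_fn that pads token sequences to a common length.
        self.train_loader = DataLoader(train_set, batch_size=1, shuffle=True)
        self.val_loader = DataLoader(val_set, batch_size=1)
        trainable = [p for p in model.parameters() if p.requires_grad]
        self.optimizer = torch.optim.AdamW(trainable, lr=lr)
        self.epochs = epochs

    def _loss(self, mel, tokens):
        # Teacher forcing: positions 0..n-2 predict tokens 1..n-1.
        logits = self.model(mel, tokens[:, :-1])
        return F.cross_entropy(logits.transpose(1, 2), tokens[:, 1:])

    def train(self):
        for epoch in range(self.epochs):
            self.model.train()
            for mel, tokens in self.train_loader:
                loss = self._loss(mel, tokens)
                self.optimizer.zero_grad()
                loss.backward()
                self.optimizer.step()
            self.model.eval()
            with torch.no_grad():
                val = sum(self._loss(m, t).item() for m, t in self.val_loader)
            print(f"epoch {epoch}: mean val loss {val / len(self.val_loader):.4f}")


# Adapter-only checkpoints stay small because the frozen base is excluded:
#
#   tuner = WhisperFineTuner(model, train_set, val_set)
#   tuner.train()
#   torch.save(model.adapters.state_dict(), "adapters.pt")       # save
#   model.adapters.load_state_dict(torch.load("adapters.pt"))    # load
```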