Model customization in Amazon Bedrock enhances foundation models for specific use cases through three methods:

- **Distillation**: Transfers knowledge from a larger "teacher" model to a smaller, more efficient "student" model, using automated data synthesis to generate fine-tuning data.
- **Fine-tuning**: Uses labeled data to adjust a model's parameters and improve task-specific performance (a job-creation sketch follows this list).
- **Continued Pre-training**: Exposes a model to domain-specific unlabeled data to refine its knowledge and adapt it to specialized inputs.
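For orientation, here is a minimal sketch of starting a fine-tuning job with the boto3 `create_model_customization_job` API. The job name, model name, role ARN, S3 URIs, and hyperparameter values are placeholders, and the set of valid hyperparameters depends on the base model:

```python
import boto3

bedrock = boto3.client("bedrock")

# Minimal sketch: all names, ARNs, and S3 URIs below are placeholders.
# Valid hyperparameter keys and values vary by base model.
response = bedrock.create_model_customization_job(
    jobName="my-finetune-job",
    customModelName="my-custom-model",
    roleArn="arn:aws:iam::111122223333:role/MyBedrockCustomizationRole",
    baseModelIdentifier="meta.llama3-1-8b-instruct-v1:0",  # example; check region support
    customizationType="FINE_TUNING",  # or "DISTILLATION" / "CONTINUED_PRE_TRAINING"
    trainingDataConfig={"s3Uri": "s3://my-bucket/train.jsonl"},
    outputDataConfig={"s3Uri": "s3://my-bucket/output/"},
    hyperParameters={"epochCount": "2", "learningRate": "0.00001"},
)
print(response["jobArn"])  # poll get_model_customization_job to track status
```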
Before creating a custom model job (fine-tuning or distillation) in Amazon Bedrock, validate your dataset with the matching script from the table below; a minimal illustrative check is sketched after the table.
| Custom Model Job Type | Model | Dataset Validation Script |
|---|---|---|
| Fine-Tuning | Llama | Llama models validation script |
| Fine-Tuning | Nova | Nova models validation script |
| Fine-Tuning | Haiku | Haiku models validation script |
| Distillation | Supported (teacher, student) models | Model Distillation validation script |
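The linked scripts enforce model-specific rules; as a rough illustration of the kind of structural checks involved, the sketch below validates a JSONL dataset, assuming the prompt/completion record format (not every model uses this format, so treat it as an example rather than a replacement for the official scripts):

```python
import json
import sys

def validate_jsonl(path: str) -> list[str]:
    """Return a list of problems found in a prompt/completion JSONL file.
    Illustrative only; the official per-model scripts above also check
    token limits, record counts, and model-specific schemas."""
    errors = []
    with open(path, encoding="utf-8") as f:
        for line_no, line in enumerate(f, start=1):
            line = line.strip()
            if not line:
                continue  # skip blank lines
            try:
                record = json.loads(line)
            except json.JSONDecodeError as exc:
                errors.append(f"line {line_no}: invalid JSON ({exc})")
                continue
            for key in ("prompt", "completion"):
                if not isinstance(record.get(key), str) or not record[key].strip():
                    errors.append(f"line {line_no}: missing or empty '{key}'")
    return errors

if __name__ == "__main__":
    problems = validate_jsonl(sys.argv[1])
    print("\n".join(problems) if problems else "Dataset looks structurally valid.")
```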
- Fine-Tuning - Examples showing how different LLMs can be fine-tuned on Amazon Bedrock.
- Model Distillation - Examples showing how different LLMs can be distilled on Amazon Bedrock.
- import_models - Examples showing how different open-source LLMs can be imported into Amazon Bedrock (see the sketch below).
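For the import examples, Custom Model Import jobs are started with the boto3 `create_model_import_job` API. A minimal sketch with placeholder names and URIs:

```python
import boto3

bedrock = boto3.client("bedrock")

# Minimal sketch: import open-source model weights stored in S3.
# Job name, model name, role ARN, and S3 URI are placeholders.
response = bedrock.create_model_import_job(
    jobName="my-import-job",
    importedModelName="my-imported-model",
    roleArn="arn:aws:iam::111122223333:role/MyBedrockImportRole",
    modelDataSource={"s3DataSource": {"s3Uri": "s3://my-bucket/model-weights/"}},
)
print(response["jobArn"])  # poll get_model_import_job to track status
```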
We welcome community contributions! Please ensure your sample aligns with AWS best practices, and update the Contents section of this README with a link to your sample and a brief description.