Technique applied for finetuning #292
-
Hello, I am fine-tuning the AWS Chronos-Bolt model. I have run several experiments and was able to get better performance with the fine-tuned version. However, a question comes to mind: which fine-tuning technique is applied under the hood? I mean, LoRA, DoRA? I am fine-tuning with the AutoGluon library. If anyone can help me answer this question, I would be very grateful!
Replies: 1 comment
-
@sebassaras02 full fine-tuning is used in AutoGluon: all model weights are updated. Since the models are quite small and efficient, we did not experiment with LoRA or other parameter-efficient fine-tuning methods.
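For reference, enabling fine-tuning of Chronos-Bolt through AutoGluon's `TimeSeriesPredictor` looks roughly like the sketch below. The `fine_tune` hyperparameter, the `bolt_small` model path, and the column/file names are assumptions based on the AutoGluon-TimeSeries Chronos documentation, so check the docs for the version you have installed.

```python
# Minimal sketch: full fine-tuning of Chronos-Bolt via AutoGluon-TimeSeries.
# The "fine_tune" flag and "bolt_small" model path are assumptions based on
# the AutoGluon Chronos tutorial; verify against your installed version.
from autogluon.timeseries import TimeSeriesDataFrame, TimeSeriesPredictor

# Long-format data with item_id / timestamp / target columns (path is hypothetical).
train_data = TimeSeriesDataFrame.from_path("train.csv")

predictor = TimeSeriesPredictor(prediction_length=48).fit(
    train_data,
    hyperparameters={
        "Chronos": [
            # Zero-shot baseline for comparison
            {"model_path": "bolt_small", "ag_args": {"name_suffix": "ZeroShot"}},
            # Fine-tuned variant: all model weights are updated (no LoRA/DoRA adapters)
            {
                "model_path": "bolt_small",
                "fine_tune": True,
                "ag_args": {"name_suffix": "FineTuned"},
            },
        ]
    },
    enable_ensemble=False,
)

# Compare validation scores of the zero-shot and fine-tuned variants.
print(predictor.leaderboard())
```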