Hi @Weiyu Yan,
Thank you for reaching out to the Microsoft Q&A forum!
According to the Azure OpenAI documentation, the exact fine-tuning method used for GPT-3.5 is not explicitly stated. However, Azure OpenAI uses a technique called Low-Rank Adaptation (LoRA) to fine-tune models in a way that reduces computational cost without significantly affecting performance. Instead of updating a model's full weight matrices, LoRA freezes the original weights and learns a low-rank update to them, so only a small subset of parameters is trained during the supervised fine-tuning phase. For users, this makes training faster and more affordable than full fine-tuning.
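To make the idea concrete, here is a minimal sketch of the LoRA technique in PyTorch. This is an illustration of the general method, not Azure's actual implementation; the class name, rank `r`, and scaling factor `alpha` are illustrative choices.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Wraps a frozen linear layer with a trainable low-rank update:
    y = W x + (alpha / r) * B A x, where A (r x in) and B (out x r) are small."""
    def __init__(self, base: nn.Linear, r: int = 8, alpha: int = 16):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False  # freeze the original high-rank weights
        # A is initialized small, B at zero, so training starts from the base model.
        self.A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(base.out_features, r))
        self.scale = alpha / r

    def forward(self, x):
        # Base output plus the low-rank correction.
        return self.base(x) + (x @ self.A.T @ self.B.T) * self.scale
```

With this setup, only `A` and `B` are trained, i.e. `r * (in_features + out_features)` parameters instead of the full `in_features * out_features` weight matrix, which is where the speed and cost savings come from.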
Using large quantities of high-quality training and validation data with gpt-35-turbo-0613 will yield better fine-tuning results.
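For reference, here is a minimal sketch of submitting a fine-tuning job for gpt-35-turbo-0613 with the openai Python SDK (v1) against an Azure OpenAI resource. The endpoint, key, file names, and api_version value are placeholders you would replace with your own:

```python
from openai import AzureOpenAI

# Placeholder endpoint, key, and API version for your Azure OpenAI resource.
client = AzureOpenAI(
    azure_endpoint="https://YOUR-RESOURCE.openai.azure.com/",
    api_key="YOUR-API-KEY",
    api_version="2024-02-01",
)

# Upload JSONL training and validation files (chat-format examples).
train = client.files.create(file=open("training.jsonl", "rb"), purpose="fine-tune")
val = client.files.create(file=open("validation.jsonl", "rb"), purpose="fine-tune")

# Start the fine-tuning job on the base model.
job = client.fine_tuning.jobs.create(
    model="gpt-35-turbo-0613",
    training_file=train.id,
    validation_file=val.id,
)
print(job.id, job.status)
```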
For more info: Azure OpenAI GPT-3.5 Turbo fine-tuning and Customize a model with fine-tuning.
I hope this information helps. Do let us know if you have any further queries.
If this answers your query, do click Accept Answer and Yes for "Was this answer helpful".