Hello @Sariga Rahul
Thanks for reaching out to us. Generally, you want to keep one of them at its default value and only adjust the other according to your use case. Personally, I use 0.7 as my temperature for most of my cases and keep top_p at its default. The recommended values for top_p and temperature depend on the specific use case and the desired output.
temperature - Controls the "creativity" of the generated text and ranges between 0 and 2. A higher temperature results in more diverse and unexpected responses, while a lower temperature results in more conservative and predictable responses. The default value for temperature is 1.0, but you can experiment with different values to see what works best for your use case. Higher values mean the model will take more risks. Try 0.9 for more creative applications, and 0 (argmax sampling) for ones with a well-defined answer. We generally recommend altering this or top_p, but not both.
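To give an intuition for what temperature does under the hood, here is a minimal sketch in plain Python. It is an illustration of temperature-scaled softmax sampling, not the service's actual implementation, and the logit values are made up for the example:

```python
import math

def softmax_with_temperature(logits, temperature):
    """Divide logits by the temperature before applying softmax.
    Lower temperature sharpens the distribution (more predictable);
    higher temperature flattens it (more diverse)."""
    if temperature == 0:
        # Temperature 0 degenerates to argmax sampling:
        # all probability mass goes to the highest-scoring token.
        probs = [0.0] * len(logits)
        probs[logits.index(max(logits))] = 1.0
        return probs
    scaled = [x / temperature for x in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical logits for three candidate tokens.
logits = [2.0, 1.0, 0.5]
print(softmax_with_temperature(logits, 1.0))  # moderate spread
print(softmax_with_temperature(logits, 0.2))  # much sharper peak
print(softmax_with_temperature(logits, 0))    # argmax: [1.0, 0.0, 0.0]
```

Running this, you can see how the same logits produce a flatter or sharper distribution as the temperature changes, which is exactly the "creativity" knob described above.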
top_p - An alternative to sampling with temperature, called nucleus sampling, in which the model considers only the tokens comprising the top_p probability mass. The default value is 1, so 0.1 means only the tokens comprising the top 10% probability mass are considered. We generally recommend altering this or temperature, but not both.
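Similarly, here is a minimal sketch of the nucleus (top_p) filtering idea in plain Python. Again, this is an illustration rather than the service's actual implementation, and the probability values are made up for the example:

```python
def top_p_filter(probs, top_p):
    """Keep the smallest set of tokens whose cumulative probability
    reaches top_p, then renormalise so the kept probabilities sum to 1."""
    # Sort token indices by probability, highest first.
    ranked = sorted(enumerate(probs), key=lambda kv: kv[1], reverse=True)
    kept, cumulative = [], 0.0
    for index, p in ranked:
        kept.append((index, p))
        cumulative += p
        if cumulative >= top_p:
            break  # the nucleus is complete
    total = sum(p for _, p in kept)
    return {index: p / total for index, p in kept}

# Hypothetical probabilities for four candidate tokens.
probs = [0.5, 0.3, 0.15, 0.05]
print(top_p_filter(probs, 0.9))  # the 0.05 token falls outside the nucleus
```

With top_p = 0.9, the lowest-probability token is excluded before sampling; lowering top_p further would shrink the candidate set even more, which is why a small top_p behaves like a low temperature in practice.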
Please refer to the documentation: https://learn.microsoft.com/en-us/azure/cognitive-services/openai/reference
I hope this helps.
Regards,
Yutong
-Please kindly accept the answer and vote 'Yes' if you found it helpful, to support the community. Thanks a lot.