Hello @Jake
Thanks for reaching out to us. I experienced the same issue and solved it by selecting the Llama-2-7b model rather than Llama-2-7b-chat. Please check which model you have selected and let me know if you can see the button now.
A NOTE about compute requirements when using Llama 2 models: finetuning, evaluating, and deploying Llama 2 models requires GPU compute with V100 or A100 SKUs. You can find the exact SKUs supported for each model in the information tooltip next to the compute selection field in the finetune / evaluate / deploy wizards. You can view and request AzureML compute quota here.
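In case it is useful, here is a minimal Python sketch (using the azure-ai-ml SDK) of how you could create a GPU compute cluster from code instead of through the wizard. The subscription/workspace values and the cluster name are placeholders, and the SKU shown is just one A100 example; please confirm the exact SKUs supported for your model in the wizard tooltip before creating it.

```python
# Minimal sketch: provision a GPU compute cluster for Llama 2 finetuning/deployment.
# All "<...>" values and the cluster name are placeholders, not from the original answer.
from azure.identity import DefaultAzureCredential
from azure.ai.ml import MLClient
from azure.ai.ml.entities import AmlCompute

ml_client = MLClient(
    credential=DefaultAzureCredential(),
    subscription_id="<subscription-id>",
    resource_group_name="<resource-group>",
    workspace_name="<workspace-name>",
)

gpu_cluster = AmlCompute(
    name="llama2-gpu-cluster",            # hypothetical cluster name
    size="Standard_NC24ads_A100_v4",      # example A100 SKU; V100 SKUs such as Standard_NC6s_v3 may also be supported
    min_instances=0,
    max_instances=1,
)

# Requires sufficient quota for the chosen SKU in your region.
ml_client.compute.begin_create_or_update(gpu_cluster).result()
```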
I hope this helps. Please let me know if you are still having this issue; I am happy to help further.
Regards,
Yutong
Please accept the answer if you found it helpful, to support the community. Thanks a lot.