Infinite loading issue when uploading training data to fine-tune GPT-4

Freddie 10 Reputation points
2024-07-06T21:43:52.38+00:00

I am trying to fine-tune GPT-4 for live chat queries using Azure AI Studio. After uploading training data in the Fine-tuning (preview) section, I click the "+ Fine-tune model" button to submit it. However, the blue loading bar appears and runs indefinitely, and the data is never submitted. This is preventing me from fine-tuning my model. Attached is the format of my training data; the file is about 28 MB.

Azure Machine Learning
An Azure machine learning service for building and deploying models.

1 answer

  1. mikelydick 76 Reputation points
    2024-07-07T12:36:00.4133333+00:00

    Ensure that your training data is correctly formatted and within the size limits specified by Azure AI Studio. The data should typically be in JSONL format and validated using tools like the OpenAI CLI data preparation tool. Given that your data is 28MB, it should be within acceptable limits, but double-check the formatting.
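    As a quick sanity check, here is a minimal Python sketch (unofficial, not an Azure or OpenAI tool) that reads a JSONL file line by line and flags lines that are not valid JSON or that lack the expected "messages"/"role"/"content" structure. The file name training_data.jsonl is just a placeholder; point it at your own file.

    import json

    VALID_ROLES = {"system", "user", "assistant"}

    def check_jsonl(path: str) -> None:
        # Read as UTF-8, which is what the fine-tuning service expects.
        with open(path, "r", encoding="utf-8") as f:
            for line_no, line in enumerate(f, start=1):
                line = line.strip()
                if not line:
                    continue  # ignore blank lines
                try:
                    record = json.loads(line)
                except json.JSONDecodeError as exc:
                    print(f"Line {line_no}: invalid JSON ({exc})")
                    continue
                messages = record.get("messages") if isinstance(record, dict) else None
                if not isinstance(messages, list) or not messages:
                    print(f"Line {line_no}: missing or empty 'messages' array")
                    continue
                for msg in messages:
                    if not isinstance(msg, dict) or msg.get("role") not in VALID_ROLES or "content" not in msg:
                        print(f"Line {line_no}: message without a valid 'role'/'content' pair")

    check_jsonl("training_data.jsonl")  # placeholder path; replace with your file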

    Verify that the region you are using supports fine-tuning for the specific model you are working with. Some regions may have capacity constraints or may not support fine-tuning at all times. You can check the supported regions and models in the Azure AI Studio documentation.

    Someone with a similar region issue:
    https://www.reddit.com/r/AZURE/comments/13y6e0g/issues_uploading_training_file_on_azure_ai_studio/

    See the regional capacity table:
    https://learn.microsoft.com/en-us/azure/ai-studio/concepts/fine-tuning-overview


    For fine-tuning GPT chat models in Azure AI Studio, the training data generally needs to adhere to the following format requirements:

    1. File Format:
      • The file should be in JSONL (JSON Lines) format (.jsonl extension), where each line is a single, valid JSON object representing one training example.
    2. Data Structure:
      • Each entry in the file should represent a single conversation or example.
      • The structure usually follows a pattern of alternating "system", "user", and "assistant" messages.
    3. Required Fields:
      • Each entry needs a "messages" array containing the conversation.
      • Each message in the array should have a "role" (system, user, or assistant) and a "content" field.
    4. Example Structure:

    { "messages": [ {"role": "system", "content": "You are a helpful assistant."}, {"role": "user", "content": "What's the capital of France?"}, {"role": "assistant", "content": "The capital of France is Paris."} ] }

    5. Multiple Conversations:
      • If you're including multiple conversations, each should be a separate JSON object.
      • In JSONL format, each line is one complete conversation (see the two-line sample after this list).
    6. Special Characters:
      • Ensure that any special characters are properly escaped according to JSON rules.
      • Be particularly careful with quotation marks and backslashes.
    7. UTF-8 Encoding:
      • The file should be saved with UTF-8 encoding to properly handle all characters.
    8. File Size:
      • While you mentioned your file is about 28 MB, make sure it doesn't exceed the maximum file size limit set by Azure AI Studio (this can vary, so check the current documentation).
    9. Consistency:
      • Ensure all entries in your dataset follow the same structure.
    10. Validation:
      • Use a JSON validator tool to check your file for syntax errors before uploading.
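    For reference, a two-line JSONL sample with two separate conversations (one complete JSON object per line) would look like the following; the content is illustrative only, and the second line also shows quotation marks escaped inside "content":

    {"messages": [{"role": "system", "content": "You are a helpful live-chat assistant."}, {"role": "user", "content": "What's the capital of France?"}, {"role": "assistant", "content": "The capital of France is Paris."}]}
    {"messages": [{"role": "system", "content": "You are a helpful live-chat assistant."}, {"role": "user", "content": "How do I reset my password?"}, {"role": "assistant", "content": "Use the \"Forgot password\" link on the sign-in page."}]}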

    If your data doesn't meet these requirements, the fine-tuning process may fail to start or you might encounter the indefinite loading issue you described. It's worth double-checking your data against these criteria and perhaps sharing a small, anonymized sample of your data structure (without any sensitive information) to get more specific guidance.

