Use an Azure AI service to get a numeric rating

David Thielen 3,186 Reputation points
2024-01-28T22:06:50.7266667+00:00

Hi all;

I am writing an application to handle volunteer events for political campaigns. When a new event is entered, I would like an AI semantic analysis to determine whether a human should review it before it goes public.

I need to rate each event on two scales. The first is Democratic to Republican. This is to avoid an event titled "Donald Trump is Amazing" being put in a Democratic candidate's campaign.

The second is to rate it on a scale of hateful, violent, etc. This is to avoid events with a title like "We need to beat up all ..." being put in anyone's campaign.

So, I asked ChatGPT to rate some phrases on both these scales and it did great. And my questions could almost certainly be improved upon. But...

Each answer was a long paragraph explaining the score it ended up with. Here's an example:

Q: On a scale of 0 to 10, where 0 means the statement is innocuous and 10 means the statement is certainly hateful, discriminatory, or violent, how would you score the statement "we need to secure our border"?

A: The statement "we need to secure our border" would generally score low on a scale of 0 to 10, where 0 means the statement is innocuous and 10 means the statement is certainly hateful, discriminatory, or violent. It might be around a 1 or 2. This statement is a common political policy position regarding national security and immigration control. It does not inherently contain language that is hateful, discriminatory, or violent. However, context and additional commentary accompanying such a statement can influence its interpretation and could potentially raise its score on this scale if it leans towards hateful or discriminatory rhetoric.

Q: What would you score the statement "we need to throw all illegal immigrants in jail"?

A: The statement "we need to throw all illegal immigrants in jail" would likely score higher on the scale of 0 (innocuous) to 10 (certainly hateful, discriminatory, or violent). This could be around a 7 or 8, or potentially higher, depending on interpretation. This statement suggests a harsh and punitive approach towards a specific group based on their immigration status, which can be viewed as discriminatory and overly aggressive. It lacks a nuanced or compassionate consideration of individual circumstances, and the use of the phrase "throw in jail" conveys a potentially violent or forceful action. This kind of rhetoric is often considered inflammatory and can be seen as promoting hostile attitudes towards immigrants.

These are good answers. But this leaves me with several questions:

  1. Does the Azure stack have a ChatGPT equivalent? I believe I've read that it does, but what is it and how is it used?
  2. Is there a way to tell it that all I want is a number as the answer?
  3. How do I come up with a really good way of asking this question?

thanks - dave


Accepted answer
  navba-MSFT 27,465 Reputation points Microsoft Employee
    2024-01-29T03:15:21.7633333+00:00

    @David Thielen Welcome to Microsoft Q&A Forum, and thank you for posting your query here!

    Here are the answers to your questions:

    Azure and a ChatGPT Equivalent: Azure does have a service that provides ChatGPT-like experiences: Azure OpenAI Service. It allows developers to integrate custom AI-powered experiences directly into their applications. You can use it to enhance existing bots to handle unexpected questions, recap call center conversations for faster customer-support resolutions, create new ad copy with personalized offers, automate claims processing, and more. Azure also provides a guide on how to build ChatGPT-like experiences using Azure OpenAI Studio.
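    For example, a first call against an Azure OpenAI chat deployment might look like the sketch below (assuming the openai Python package, v1+; the endpoint, key, and deployment name are placeholders to replace with your own):

    from openai import AzureOpenAI

    # Placeholders: use your own resource endpoint, API key, and deployment name.
    client = AzureOpenAI(
        azure_endpoint="https://YOUR-RESOURCE.openai.azure.com",
        api_key="YOUR-API-KEY",
        api_version="2023-05-15",
    )

    response = client.chat.completions.create(
        model="YOUR-DEPLOYMENT-NAME",  # the deployment created in Azure OpenAI Studio
        messages=[{
            "role": "user",
            "content": "On a scale of 0 to 10, where 0 means the statement is "
                       "innocuous and 10 means the statement is certainly hateful, "
                       "discriminatory, or violent, how would you score the "
                       "statement \"we need to secure our border\"?",
        }],
    )
    print(response.choices[0].message.content)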

    Getting a Number as the Answer: AI models like ChatGPT are designed to generate human-like text based on the prompts they're given. If you want a numerical answer, you might need to post-process the AI's response to extract the numerical score. Some models can also be fine-tuned to output numerical scores directly, but that would likely require a custom training process.
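    If you go the post-processing route, here is a rough sketch of pulling the first 0-10 integer out of a free-text reply (the function name is just illustrative):

    import re

    def extract_score(text: str) -> int | None:
        """Return the first integer 0-10 found in a free-text model reply."""
        match = re.search(r"\b(10|[0-9])\b", text)
        return int(match.group(1)) if match else None

    # Fragile on verbose replies: if the model restates the "0 to 10" scale
    # before giving its score, the regex grabs the wrong number. Constraining
    # the output format with a system prompt (below) is the safer approach.
    print(extract_score("It might be around a 1 or 2."))  # prints 1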


    How to achieve this? In Azure OpenAI Service, you can use a system prompt to guide the model's behavior. The system prompt is included at the beginning of the prompt and is used to prime the model with context, instructions, or other information relevant to your use case.

    Here’s an example of how you might structure your system prompt for your use case:

    {
      "messages": [
        {
          "role": "system",
          "content": "You are a helpful assistant that rates political statements on two scales. The first scale is from Democratic (0) to Republican (10). The second scale is from innocuous (0) to hateful/discriminatory/violent (10). Your task is to provide a numerical rating on both scales for each statement you're given, without any additional explanation."
        },
        {
          "role": "user",
          "content": "\"THE USER MESSAGE GOES HERE\""
        }
      ]
    }
    
    

    In this example, the system message clearly defines the assistant's role and the format of the response. The user message then provides the statement to be rated. You can add this system message in the chat playground and test it first before deploying to an Azure web app, following the steps below:

    • Please follow the documentation to create an Azure OpenAI resource from the Azure portal.
    • Open the Azure OpenAI Studio and create a deployment first.
    • Then navigate to the chat playground and enter the system prompt. Test your input message and check that you get the right response.
    • Then you can deploy this to an Azure web app.
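    Once the playground behaves the way you want, the same messages payload can be sent from code. Here is a sketch reusing the client from the earlier snippet; the system prompt is tightened to ask for two comma-separated numbers so the reply is trivial to parse:

    SYSTEM_PROMPT = (
        "You are a helpful assistant that rates political statements on two scales. "
        "The first scale is from Democratic (0) to Republican (10). The second scale "
        "is from innocuous (0) to hateful/discriminatory/violent (10). Respond with "
        "the two numbers only, separated by a comma, with no additional explanation."
    )

    def rate_statement(client, deployment: str, statement: str) -> tuple[int, int]:
        response = client.chat.completions.create(
            model=deployment,
            temperature=0,  # deterministic output keeps the format stable
            messages=[
                {"role": "system", "content": SYSTEM_PROMPT},
                {"role": "user", "content": statement},
            ],
        )
        left, right = response.choices[0].message.content.split(",")
        return int(left.strip()), int(right.strip())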

    You can learn more about prompt engineering in the Azure OpenAI documentation.


    Improving the Way of Asking Questions:
    As the system prompt sample above shows, wording matters. Here are some tips:

    • Context: Providing context can be helpful when asking AI questions.
    • Natural Language: Many AI systems are designed to understand natural language, which means you can ask questions as you would speak to a human.
    • Focused Questions: Keep your questions focused and avoid asking overly complex or open-ended questions.
    • Prompt Engineering: This involves crafting your prompts in a way that guides the AI towards the type of response you want (see the few-shot sketch after this list).
    • Experimentation: You need to experiment with different ways of asking your question.
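    As a concrete sketch of prompt engineering, you can seed the conversation with a few example exchanges (few-shot prompting) so the model imitates the answer format. The statements and scores below come from the examples in the question; the final user message is a placeholder for the real event title:

    messages = [
        {
            "role": "system",
            "content": "Rate each statement from 0 (innocuous) to 10 (hateful, "
                       "discriminatory, or violent). Reply with a single integer only.",
        },
        # Few-shot examples demonstrating the exact output shape:
        {"role": "user", "content": "we need to secure our border"},
        {"role": "assistant", "content": "1"},
        {"role": "user", "content": "we need to throw all illegal immigrants in jail"},
        {"role": "assistant", "content": "7"},
        # The real statement to rate goes last:
        {"role": "user", "content": "THE NEW EVENT TITLE GOES HERE"},
    ]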

    Remember, the effectiveness of these strategies can depend on the specific AI model you're using. It's always a good idea to test different approaches and iterate based on the results.
    Hope this helps. If you have any follow-up questions, please let me know. I would be happy to help.

    Please do not forget to "Accept the answer" and "up-vote" wherever the information provided helps you; this can be beneficial to other community members.


1 additional answer

  Adharsh Santhanam 5,790 Reputation points
    2024-01-29T03:00:20.15+00:00

    Hello @David Thielen

    1. Yes, Azure has an equivalent service - Azure OpenAI. Loosely speaking, you can think of it as the OpenAI models with all the Azure goodness added, like scalability, security, etc. In Azure OpenAI you have a range of models to choose from depending on your use case, and each model is tuned to excel at its use case. For instance, for your example where you want to feed in some text and have the service score it on a 0-10 scale, the GPT-3.5/GPT-4 models would be a good fit. Once you deploy a model, you can test everything you want via an easy-to-use, UI-driven Chat playground. When you're happy, there's a simple way to deploy it directly to an Azure web app as well. If needed, you can also tailor the available parameters to make the chat respond to your queries in a particular way. The most important is the "System message", which gives the model instructions about how it should behave and any context it should reference when generating a response. You can also describe a personality it should assume, what it should and shouldn't answer, how to format its responses, etc. Additionally, you can give it some examples so that it tailors its responses based on those.
    2. Yes, as called out above, you can instruct it to output only a number (representing the 0-10 scale in your use case). Alternatively, you can state explicitly in the prompt that you only want a number. For example, taking your first question, you can do something like this - "On a scale of 0 to 10, where 0 means the statement is innocuous and 10 means the statement is certainly hateful, discriminatory, or violent, how would you score the statement "we need to secure our border"? Give me just the number and I don't need any explanations" - and this will output just a number (see the sketch after this list).
    3. Yes, this is an art in itself. There's a discipline called "prompt engineering" - learning how to craft good prompts so as to get more meaningful outputs. I would strongly encourage you to take a look at two articles - https://learn.microsoft.com/en-us/azure/ai-services/openai/concepts/prompt-engineering and https://learn.microsoft.com/en-us/azure/ai-services/openai/concepts/advanced-prompt-engineering?pivots=programming-language-chat-completions
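    A sketch of point 2 with a safety net: validate that the reply really is a bare 0-10 integer and retry once if it isn't (the client and deployment are as in the accepted answer's snippets; the prompt suffix mirrors the example above):

    def ask_for_score(client, deployment: str, question: str, retries: int = 1) -> int:
        prompt = question + " Give me just the number and I don't need any explanations."
        reply = ""
        for _ in range(retries + 1):
            reply = client.chat.completions.create(
                model=deployment,
                temperature=0,
                messages=[{"role": "user", "content": prompt}],
            ).choices[0].message.content.strip()
            if reply.isdigit() and 0 <= int(reply) <= 10:
                return int(reply)
        raise ValueError(f"Model did not return a 0-10 integer: {reply!r}")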

    If the information helped address your question, please Accept the answer. This will help us and also improve searchability for others in the community who might be researching similar information.

