@lakshmi Welcome to Microsoft Q&A Forum, Thank you for posting your query here!
Here’s the information specific to features provided by GPT-4 model:
- Improved Problem Solving: GPT-4 can solve difficult problems with greater accuracy than any of OpenAI’s previous models.
- GPT-4 Turbo is more capable and has knowledge of world events up to April 2023. It has a 128K context window, so your applications can draw on far more custom data tailored to your use case using techniques like RAG (Retrieval Augmented Generation).
- GPT-4 Turbo is available to all Azure OpenAI customers immediately. GPT-4 Turbo is 3x more cost-effective for input tokens and 2x more cost-effective for output tokens compared to GPT-4, while offering more than 15x the context window.
- Higher Max Request (Token) Limits: GPT-4 models support larger maximum request (token) limits than the previous (legacy) models. See here.
- Improved Safety & Alignment: With GPT-4, new research advances from OpenAI have enabled an additional layer of protection. Guided by human feedback, safety is built directly into the GPT-4 model, which enables the model to be more effective at handling harmful inputs, thereby reducing the likelihood that the model will generate a harmful response. See here.
- API Changes: The GPT-4 models in Azure OpenAI Service use the new Chat Completions API. This API is conversation-in and message-out, meaning the models expect input formatted in a specific chat-like transcript format, and return a completion that represents a model-written message in the chat.
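To make the "conversation-in and message-out" format concrete, here is a minimal sketch of the request body the Chat Completions API expects: a list of role-tagged messages. The prompts and the helper function name are illustrative placeholders, not values from the service.

```python
# Sketch of a Chat Completions request body: a chat-like transcript of
# role-tagged messages. The model returns a completion representing a
# model-written "assistant" message in the same conversation.
def build_chat_request(system_prompt: str, user_message: str) -> dict:
    """Assemble the messages portion of a Chat Completions request."""
    return {
        "messages": [
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": user_message},
        ],
        "max_tokens": 256,
    }

body = build_chat_request(
    "You are a helpful assistant.",
    "What is Azure OpenAI?",
)
```

The same structure is used for follow-up turns: you append the model's "assistant" reply and the next "user" message to the list and send the whole transcript again.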
More Information:
- Improved Function Calling Function calling, launched in June 2023, enables builders to use Generative AI to connect applications to external tools using API calls. GPT-4 Turbo improves the ability to generate function calls based on user natural language inputs. In addition, GPT-4 Turbo offers the ability to generate multiple function and tool calls in parallel, so that applications can use external systems more efficiently.
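As a sketch of how an application describes its external tools to the model, the snippet below builds a `tools` array with one function schema. The `get_weather` function and its parameters are hypothetical, chosen only to illustrate the shape of a tool definition; with parallel tool calls, a single user message like the one shown could yield two calls, one per city.

```python
# Hypothetical tool schema for function calling. The model does not execute
# get_weather itself; it returns structured call requests the application runs.
tools = [
    {
        "type": "function",
        "function": {
            "name": "get_weather",
            "description": "Look up the current weather for a city.",
            "parameters": {
                "type": "object",
                "properties": {"city": {"type": "string"}},
                "required": ["city"],
            },
        },
    }
]

request_body = {
    "messages": [
        {"role": "user", "content": "What's the weather in Paris and in Rome?"}
    ],
    "tools": tools,
}
```

The application inspects the model's response for tool calls, executes each one against its own systems, and feeds the results back as additional messages.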
- JSON Mode GPT-4 Turbo also introduces JSON Mode, which improves on GPT-4’s ability to generate correctly formatted JSON output that interoperates with software systems. This is a highly requested feature for builders integrating OpenAI models with their applications. You can enable JSON Mode by setting response_format to { "type": "json_object" }.
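A minimal sketch of a request with JSON Mode enabled is shown below. The prompts and key names are placeholders; note that the prompt itself should still instruct the model to reply in JSON.

```python
# Sketch of a Chat Completions request with JSON Mode enabled via
# response_format, so the completion is a syntactically valid JSON object.
request_body = {
    "response_format": {"type": "json_object"},
    "messages": [
        {
            "role": "system",
            "content": "Reply in JSON with the keys 'sentiment' and 'summary'.",
        },
        {"role": "user", "content": "Summarize this product review: ..."},
    ],
}
```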
- Reproducible Output
Generative AI models like GPT-4 Turbo generate their outputs probabilistically. In a wide variety of cases, this non-determinism is a benefit, enabling desirable outcomes like creative prose and imaginative drawings. Application builders sometimes want more predictable output from similar inputs. The new seed parameter in GPT-4 Turbo gives builders more control over the language model output. More info here.
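As a sketch of how the seed parameter is used, the request below fixes `seed` (and lowers `temperature`) so that repeated calls with the same inputs are more likely to return the same output. The values shown are illustrative; determinism is best-effort rather than guaranteed.

```python
# Sketch of a request using the seed parameter for more reproducible output.
# Keeping the prompt, seed, and other parameters identical across calls makes
# matching outputs more likely, though not guaranteed.
request_body = {
    "messages": [
        {"role": "user", "content": "Write a haiku about the sea."}
    ],
    "seed": 42,
    "temperature": 0,
}
```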
On a side note: GPT-4 Turbo with Vision is a large multimodal model (LMM) developed by OpenAI that can analyze images and provide textual responses to questions about them. It incorporates both natural language processing and visual understanding. More info here.
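To illustrate how an image is passed alongside text, the sketch below builds a user message whose content is a list of parts, one text part and one image reference. The image URL is a placeholder, not a real resource.

```python
# Sketch of a multimodal Chat Completions request: the user message content
# is a list of parts mixing text with an image_url reference.
request_body = {
    "messages": [
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "What is shown in this image?"},
                {
                    "type": "image_url",
                    "image_url": {"url": "https://example.com/photo.png"},
                },
            ],
        }
    ],
    "max_tokens": 300,
}
```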
Resources: - Refer to this article, which explains how to work with the GPT-4 models.
- To learn more about how to interact with GPT-4 and the Chat Completions API, check out our in-depth how-to.
- The pricing for the GPT-4 models is explained here.
- This Quickstart helps you get started using GPT-35-Turbo and GPT-4 with Azure OpenAI Service.
Hope this helps. If you have any follow-up questions, please let me know. I would be happy to help.
Please do not forget to “Accept the answer” and “up-vote” wherever the information provided helps you; this can be beneficial to other community members.