Large language models

Understanding how generative AI works can help educators stay at the forefront of technological advancements in education. Let’s dig deeper into AI vocabulary.

What are large language models?

A large language model (LLM) is an AI model, such as GPT-4 (and later versions) from OpenAI, that is trained on massive amounts of text and can generate conversational responses on the spot by predicting which words come next in a phrase, much like putting together a puzzle. Large language models can perform various natural language tasks like:

  • Classification
  • Summarization
  • Translation
  • Content generation
  • Dialogue (for example, virtual assistants)

Large language models are trained on billions of language examples from diverse sources like books, articles, and websites, which help them to respond with facts, grammatically correct text, argumentation, and a semblance of creativity.
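
To make the idea concrete, here's a minimal sketch of how an application might ask an LLM to perform one of these tasks (summarization) through an API. It assumes the OpenAI Python SDK; the model name, prompt, and lesson text are illustrative and not part of this module.

```python
# Minimal sketch: asking an LLM to summarize text through an API.
# Assumes the OpenAI Python SDK (pip install openai) and an OPENAI_API_KEY
# environment variable; the model name and prompt are illustrative only.
from openai import OpenAI

client = OpenAI()  # reads the API key from the environment

lesson_text = (
    "Photosynthesis is the process by which plants use sunlight, water, "
    "and carbon dioxide to produce oxygen and energy in the form of sugar."
)

response = client.chat.completions.create(
    model="gpt-4o",  # illustrative choice; any chat-capable model works
    messages=[
        {"role": "system", "content": "You are a helpful teaching assistant."},
        {
            "role": "user",
            "content": f"Summarize this for a 5th-grade class in one sentence:\n{lesson_text}",
        },
    ],
)

print(response.choices[0].message.content)  # the model's summary
```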

The popular ChatGPT system from OpenAI is an example of this type of generative AI. ChatGPT is powered by OpenAI’s GPT (generative pretrained transformer) large language models, such as GPT-4. Think of ChatGPT as an application built on top of a large language model that has been fine-tuned for interactive chat.

People using an app powered by a large language model can direct the model’s output through prompts—the text they enter in the app’s interface. Prompts can be natural language sentences or questions, code snippets or commands, or any combination of text or code.

When a prompt is specific and detailed, LLMs can generate text, expand on main points, condense information into key points, and answer questions efficiently. The art of defining LLM prompts is an emerging practice known as prompt design or prompt engineering: crafting effective and efficient prompts that produce the desired response. Educators and learners might need to experiment with the words, phrases, symbols, and formats that guide the model to generate high-quality, relevant text.

Some tips for writing effective prompts (see the example after this list) are:

  • Be specific.
  • Use the right model for the task.
  • Ask for results from a certain point of view.
  • Guide the model to generate the desired length.
  • Use the new topic button when you want to change topics (The art of the prompt: How to get the best out of generative AI).
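
As a hypothetical illustration of these tips, the sketch below contrasts a vague prompt with a more specific one that sets a point of view and a desired length. The wording is invented for this example; either string could be pasted into Copilot or sent to any LLM.

```python
# Illustrative prompts only: contrast a vague request with one that applies
# the tips above (be specific, ask for a point of view, guide the length).
vague_prompt = "Write about urban planning."

specific_prompt = (
    "Act as a university lecturer in urban planning. "            # point of view
    "Write a 150-word course description for an introductory "    # desired length
    "undergraduate course that covers zoning, transportation, "   # be specific
    "and sustainable development."
)

# The specific prompt reliably produces a more focused, useful response.
print(vague_prompt)
print(specific_prompt)
```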

Microsoft uses large language model technology to power the capabilities of Copilot.

Copilot is like having a research assistant, personal planner, and creative partner at your side whenever you search the web. With this set of AI-powered features, you can:

  • Ask an actual question. When you ask complex questions, Copilot can give you detailed replies.
  • Get an actual answer. Copilot looks at search results across the web to provide a summarized answer.
  • Be creative. When you need inspiration, Copilot can help you write poems, stories, or even create a brand-new image.
  • In the chat experience, you can also ask follow-up questions like “Can you explain that in simpler terms?” or “Give me more options” to get different, more detailed answers. However, each chat conversation has a limited number of turns to keep the exchange grounded in search.

Note

Always fact-check results. Although LLM responses can appear convincing, they might be inaccurate, incomplete, or inappropriate, and LLMs can’t evaluate the accuracy of their own responses. Copilot cites the sources of the online content it uses in its responses, so educators and learners can evaluate that content before relying on it as a trusted source.

A faculty member at a university needs to write a new syllabus for a course on urban planning. They start by asking Copilot to write a summary of a university-level course on urban planning. The summary is detailed but doesn't include all the course elements. The faculty member modifies the prompt to include the course outline and specifies that the summary is for a course syllabus. The second iteration is closer to what they need for the syllabus. They copy the text, paste it into a Word document, and change just a few words. The summary is done. They then ask Copilot to write learning objectives for the course based on the outline and summary. In minutes, they complete this task and can move on to creating course materials.

While LLMs are impressive in many ways, they’re best suited for tasks that involve categorization, generating new ideas, or summarizing text rather than retrieving specific details from a large dataset.


