Are transformer-based models part of semantic language models, or vice versa?

Mayank Malik 20 Reputation points
2024-05-30T01:25:24+00:00

I am confused because the core fundamentals seem the same: tokens with similar semantic meaning are placed closer together in a multi-dimensional coordinate space.

This question is related to the following Learning Module

Azure AI Language
An Azure service that provides natural language capabilities including sentiment analysis, entity extraction, and automated question answering.

Azure AI services
A group of Azure services, SDKs, and APIs designed to make apps more intelligent, engaging, and discoverable.

Accepted answer
    Marcin Policht 16,420 Reputation points MVP
    2024-05-30T01:48:18.77+00:00

    Transformers are a type of neural network architecture that excels at understanding and processing sequences of words by focusing on important parts of the input, which helps them handle long sentences and complex language structures efficiently. Examples of these models include BERT and GPT, which are used for tasks like translation and text generation.
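
    As a hedged illustration of how such a model is used in practice (the model name and the Hugging Face `transformers` package are my own choices here, not something the Learning Module prescribes), here is a minimal sketch of a transformer producing contextual token embeddings:

    ```python
    # Minimal sketch: a pretrained transformer (BERT) turns a sentence into
    # one contextual embedding vector per token. Assumes `pip install
    # transformers torch`; the model name is an illustrative choice.
    import torch
    from transformers import AutoModel, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
    model = AutoModel.from_pretrained("bert-base-uncased")

    inputs = tokenizer("Transformers handle long, complex sentences well.",
                       return_tensors="pt")
    with torch.no_grad():
        outputs = model(**inputs)

    # Self-attention lets each token's vector reflect the whole sentence,
    # which is what makes long-range context manageable.
    print(outputs.last_hidden_state.shape)  # (batch, num_tokens, 768)
    ```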

    Semantic language models aim to understand and represent the meaning of words and sentences, placing similar meanings close together in a conceptual space. While transformers are often used to build these semantic models because of their powerful capabilities, semantic modeling can also use other methods, such as earlier non-transformer embeddings like word2vec and GloVe. In essence, transformers are a tool that helps achieve the goal of semantic understanding, so transformer-based models are best thought of as one way of implementing semantic language models, rather than the other way around.
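
    To make the "similar meanings sit close together" point concrete, here is a small sketch (assuming the `sentence-transformers` package; the model name is just a common default, not part of the module):

    ```python
    # Sketch of semantic proximity: related words should score a higher
    # cosine similarity than unrelated words. Assumes `pip install
    # sentence-transformers`; the model choice is illustrative.
    from sentence_transformers import SentenceTransformer
    from sentence_transformers.util import cos_sim

    model = SentenceTransformer("all-MiniLM-L6-v2")
    embeddings = model.encode(["cat", "kitten", "airplane"])

    print(cos_sim(embeddings[0], embeddings[1]))  # cat vs kitten: higher
    print(cos_sim(embeddings[0], embeddings[2]))  # cat vs airplane: lower
    ```

    Note that this particular embedding model is itself a transformer under the hood, which is exactly the relationship described above: transformers are one (currently dominant) way to produce semantic embeddings, while older techniques such as word2vec produce them without transformers.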


    If the above response helps answer your question, remember to "Accept Answer" so that others in the community facing similar issues can easily find the solution. Your contribution is highly appreciated.

    hth

    Marcin

    1 person found this answer helpful.

0 additional answers
