Intelligent Recommendations Modeling FAQ

This article provides a deeper look into the types of modeling algorithms that the Intelligent Recommendations service uses and provides answers to common modeling questions.


Intelligent Recommendations Algorithms Overview

The Intelligent Recommendations Modeling component uses several different algorithms to create ranked lists. The List API responds to queries and returns results depending on the type of algorithm selected for modeling. The following table gives more information about the types of algorithms used by the Intelligent Recommendations service:


As a best practice, use Experimentation to compare the results from a few different list types and/or data types before making a final choice about which algorithm is best for your business use case and dataset (which is a combination of the data types and actual behavior).

Algorithm Type Description
Matrix Factorization (MF) Matrix Factorization is a type of collaborative filtering algorithm that focuses on creating user-to-item and item-to-item relationships based on specific user interactions (purchase, usage, clickthrough, views, downloads, and so on). This algorithm type ranks lists based on the historical preferences of a specific user, which is what we call their personal “Taste”-based ranking. It also derives similarities between items based on the interactions of users with items.

Matrix Factorization generates symmetric (if ‘A’ is similar to ‘B’ then ‘B’ is also similar to ‘A’) and transitive (if ‘A’ is similar to ‘B’, and ‘B’ is similar to ‘C’, then ‘A’ is similar to ‘C’) rankings. For best results, use the Matrix Factorization algorithm type when using a dataset with substantial interaction signals and catalog metadata. This feature is great for entertainment domains like Movies and TV, Gaming, or Streaming, but works well in other domains that rely on customer interaction signals including: Retail, Grocery, Trip Itineraries, Manufacturing, and more.
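The symmetric, taste-based behavior described above can be illustrated with a minimal matrix factorization sketch. This is a toy SGD implementation on made-up data, not the service's actual Bayesian matrix factorization; all names and values are illustrative:

```python
import math
import random

# Toy implicit-feedback data: observed (user, item) interaction pairs.
observed = [(0, 0), (0, 1), (1, 0), (1, 1), (1, 2),
            (2, 1), (2, 2), (2, 3), (3, 2), (3, 3)]
observed_set = set(observed)
n_users, n_items, k = 4, 4, 2

random.seed(0)
user_vecs = [[random.gauss(0, 0.1) for _ in range(k)] for _ in range(n_users)]
item_vecs = [[random.gauss(0, 0.1) for _ in range(k)] for _ in range(n_items)]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

# SGD on squared error: observed pairs pull toward 1, randomly sampled
# unobserved pairs pull toward 0 (simple one-class negative sampling).
for _ in range(3000):
    u, i = random.choice(observed)
    if random.random() < 0.5:
        i = random.randrange(n_items)
    target = 1.0 if (u, i) in observed_set else 0.0
    err = target - dot(user_vecs[u], item_vecs[i])
    for f in range(k):
        uf, itf = user_vecs[u][f], item_vecs[i][f]
        user_vecs[u][f] += 0.05 * err * itf
        item_vecs[i][f] += 0.05 * err * uf

def item_similarity(a, b):
    """Cosine similarity between learned item embeddings."""
    norm = lambda v: math.sqrt(dot(v, v))
    return dot(item_vecs[a], item_vecs[b]) / (norm(item_vecs[a]) * norm(item_vecs[b]))

# Symmetric by construction: item_similarity(0, 1) == item_similarity(1, 0)
```

Because item-to-item similarity is computed as a distance between embeddings, the symmetry described above falls out of the geometry: similarity of A to B is by definition the similarity of B to A.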
Direct Associated Similarities (DAS) The Direct Associated Similarities (DAS) algorithm is well suited for local/directed affinity domains with high utility-based needs, such as Apps, where usefulness is more important than historical preference (taste). For example, people who do ‘A’, then ‘B’, then ‘C’ actions tend to do action ‘D’ afterwards. DAS is nonsymmetric and not associative.

Our service uses the DAS algorithm to power the Next Best Action API, which creates content suggestions based on distinct, repeatable groupings. A common application of Next Best Action is often seen on Retail check-out experiences in “basket-completion” scenarios like Frequently bought together – which provides complementary item suggestions based on the contents of a user's cart.

DAS can also recombine groups and recommend items from different subdomains. For example, a grocery store shopper may be recommended napkins and plates with their burger patties and buns in their cart.

Domains that benefit from “Next Best Action” include Grocery, Sales, Troubleshooting, Accounting, and more.
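The directed, order-sensitive behavior of DAS can be sketched with simple co-occurrence counting over ordered groupings. This is a toy stand-in for the actual algorithm, with illustrative item names:

```python
from collections import Counter

# Ordered action sequences, grouped the way InteractionGroupingId
# groups rows in the Interactions data entity (toy data).
sequences = [
    ["patties", "buns", "napkins"],
    ["patties", "buns", "plates"],
    ["patties", "buns", "napkins"],
    ["buns", "patties"],
]

# Count directed pairs: item A followed (anywhere later) by item B.
follows = Counter()
for seq in sequences:
    for i, a in enumerate(seq):
        for b in seq[i + 1:]:
            follows[(a, b)] += 1

def next_best(item, top_n=2):
    """Rank what tends to come after `item` (direction matters)."""
    ranked = [(b, c) for (a, b), c in follows.items() if a == item]
    return [b for b, _ in sorted(ranked, key=lambda x: -x[1])[:top_n]]

# Asymmetric: "patties -> buns" is counted separately from "buns -> patties".
print(next_best("buns"))  # → ['napkins', 'plates']
```

Note that the directed counts make the result nonsymmetric: "buns" strongly suggests "napkins", while nothing forces the reverse ranking to match.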
Visual-Based Similarity (VBS) Visual-Based Similarity (VBS) is a deep learning, visual cognition algorithm that returns visually similar recommendations for items with similar images, for a given seed item. Like Matrix Factorization, the recommendations produced by the VBS algorithm are symmetric.

This deep learning, convolutional neural network uses ‘Argus’ as a backbone; however, it's trained further using deeper techniques on the tenant's images for view invariance, providing much more relevant recommendations for the tenant's domain. VBS is incredibly powerful in domains like Fashion, Design, and Jewelry, where visual attributes are a major selling point of the product.
Text-Based Similarity (TBS) The Text-Based Similarity (TBS) algorithm returns textually similar recommendations for a given seed item by further training a language model on the titles and descriptions of the items in the provided catalog. This algorithm works especially well in domains where titles and descriptions are descriptive, producing unique and intuitive recommendations. The model uses the ‘TNLR’ Transformer-based language model as a backbone; however, the model also uses transfer learning and deeper training techniques on the provided dataset, allowing this algorithm to provide state-of-the-art recommendations that make sense semantically.

TBS uses Natural Language Processing (NLP) on text input, making this algorithm applicable in many different domains, including trip itineraries and excursions, wineries, scientific journal research databases, troubleshooting, and more.
Browse Lists Browse lists enable catalog browsing using heuristically based charts sorted by information such as total sales, sum of clicks, release date, or a combination of different metrics. Supported lists are: ‘New’, ‘Trending’, ‘Popular’. Charts are a great starting place to quickly get end users to interact with your products and see the newest and best of your product catalog.

Browse Lists can be further augmented by changing the input interaction type. For example, a model based on purchase signals returns “Most Popular Purchased Products”, while changing the model signals to views returns “Most Popularly Viewed Products”.
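As a toy illustration of how the same catalog yields different browse charts per signal type (field and item names are made up, not part of the service's schema):

```python
from collections import Counter

# Toy interaction rows, each with an interaction type.
interactions = [
    {"item": "A", "type": "purchase"}, {"item": "A", "type": "view"},
    {"item": "B", "type": "view"},     {"item": "B", "type": "view"},
    {"item": "B", "type": "view"},     {"item": "C", "type": "purchase"},
    {"item": "C", "type": "purchase"},
]

def popular_chart(signal):
    """A 'Popular' browse list computed over a single signal type."""
    counts = Counter(r["item"] for r in interactions if r["type"] == signal)
    return [item for item, _ in counts.most_common()]

# The same data produces different charts depending on the input signal:
print(popular_chart("purchase"))  # → ['C', 'A']
print(popular_chart("view"))      # → ['B', 'A']
```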

Back to top

Frequently asked questions

This section covers a series of questions commonly asked about Intelligent Recommendations models and their applications.

How can I track the modeling status?

Intelligent Recommendations customers can track the modeling status for each of the models they've created on their account. After you configure a model, the service will periodically create a status log file to report on the current status of all algorithms (with respect to your modeling tier). To learn more about how to access these logs, see Modeling Status Reports Guide.

Back to top

What algorithm and list type should I use for my business?

Selecting a List type and algorithm to use depends on the business use case, experience, and data available for modeling. See List Names, AlgoTypes Table, Refinements for a full list of available list names and AlgoType combinations.

In general, modeling interactions reflect what people interact with. For example, we describe the list type "People Also," which uses the MF algorithm, as "customers who do this action, also do this action." When the action is purchase, the list becomes "People who bought this, also bought."

Item metadata can also be used to establish similarities between items, assuming that the metadata is sufficient in volume and quality. For example, items with similar descriptions can be considered closely related, just as items with similar product images might be closely related. This metadata has been useful for creating results for items when no interactions are available (also known as modeling “cold items”).
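As a toy illustration of description-based similarity for cold items, here is a bag-of-words cosine comparison. This is a simple stand-in, not the TNLR-based TBS model; all catalog entries are made up:

```python
import math
from collections import Counter

# Toy catalog descriptions; a "cold" item has no interactions but
# still has text that a description-based model can compare.
catalog = {
    "red-wine":   "dry red wine with dark berry notes",
    "white-wine": "crisp white wine with citrus notes",
    "umbrella":   "compact folding umbrella for rain",
}

def vec(text):
    return Counter(text.split())

def cosine(a, b):
    """Cosine similarity between two bag-of-words vectors."""
    va, vb = vec(a), vec(b)
    num = sum(va[w] * vb[w] for w in va)
    den = (math.sqrt(sum(c * c for c in va.values())) *
           math.sqrt(sum(c * c for c in vb.values())))
    return num / den

def similar_description(seed):
    """Most textually similar catalog item to the seed item."""
    others = [(k, cosine(catalog[seed], v))
              for k, v in catalog.items() if k != seed]
    return max(others, key=lambda kv: kv[1])[0]

print(similar_description("red-wine"))  # → white-wine
```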

Approaches that combine interactions and metadata (for items and/or users) can be used with Intelligent Recommendations to customize scenarios and experiences. Use multiple different models (with one model per account) to experiment and see which approach works best for your use cases.

Mapping available data types and use cases to algorithm type

Data Type: Interactions (e.g., Views, Purchases, Usage)
Available Scenarios: What did users do?
- Picks for you
- People also do
- Next Best Action
Algorithms: Matrix Factorization (MF), Direct Association (DAS)

Data Type: Textual metadata (e.g., Title, Description)
Available Scenarios: Similar Description
Algorithm: Text-Based Similarity (TBS)

Data Type: Visual metadata (e.g., Product images from multiple angles)
Available Scenarios: Similar looks
Note: Not all domains fit this scenario. Use it where images are a good representation of an item.
Algorithm: Visual-Based Similarity (VBS)

Data Type: Other item metadata (e.g., Shape, Category, Tags)
Available Scenarios: Same as Interactions. The service also allows models to be built in different ways:
- In a hybrid fashion, combining item metadata with interactions
- Built using only item metadata (with MF or DAS algorithms)
Algorithms: Matrix Factorization (MF), Direct Association (DAS)

Data Type: User metadata (e.g., Demographics)
Available Scenarios: Relevant scenarios are around user personalization:
- Picks for you
- Personalization
The service allows models to be built in different ways:
- In a hybrid fashion, combining user metadata with interactions
- Built using only user metadata (with MF or DAS algorithms)
Algorithms: Matrix Factorization (MF), Direct Association (DAS)

Back to top

How should I decide whether to use the Matrix Factorization or Direct Association algorithms?

It's recommended to try both with your data to see which algorithm returns more suitable results based on your business requirements.

Try the Matrix Factorization (MF) algorithm if:

  • The connection between items in your domain is mostly Commutative (symmetric, that is, if A=>B then B=>A) and Associative (that is, if A=>B and B=>C then A=>C).
  • Your data is sparse, and you still want enough recommendations for many items.

Try the Direct Association (DAS) algorithm if:

  • The connection between items in your domain is mostly directed (asymmetric, that is, A=>B doesn't mean B=>A) and direct (not associative).
  • ‘Next Best Action’ (given ordered list of items, what should be the next one) is an important scenario for you.
  • You want to recommend one subdomain of your items to another.
  • Direct connections that appear more often should be reflected more strongly in results.

For more information, see List Names, AlgoTypes Tables, Refinements.

Back to top

How many interactions do I need to ensure good recommendations?

To properly model a domain for a set of important products, each product should have at least five interactions for scenarios like "People also like" or "Picks" (personalization). You also need sufficient interactions that include more than one product, grouped by the InteractionGroupingId (each item in the same order has a row in the Interactions data entity with the same InteractionGroupingId), to generate results for "Next Best Action".

A good rule of thumb is to aim for about five times as many interactions as items. For example, if there are 1,000 items in the catalog, it would be good to try modeling with at least 5,000 interactions.
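The two rules of thumb above can be checked against a dataset with a few lines of code. This is a toy sanity check with made-up SKU names, not part of the service:

```python
from collections import Counter

# Toy list of interaction rows, keyed by ItemId.
interactions = ["sku-1"] * 6 + ["sku-2"] * 5 + ["sku-3"] * 1

per_item = Counter(interactions)
n_items = len(per_item)

# Rule of thumb: about five times as many interactions as items overall,
# and at least five interactions per important item.
enough_overall = sum(per_item.values()) >= 5 * n_items
thin_items = [item for item, c in per_item.items() if c < 5]

print(enough_overall)  # 12 total interactions vs 5 * 3 items → False
print(thin_items)      # → ['sku-3']
```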

When in doubt, it helps to try it out with a simple model (fewer columns) and as many Interactions (more rows) in the input dataset as possible. To evaluate your data contract for quality and to see metrics regarding model performance, see Intelligent Recommendations Dashboards.

Back to top

Why do I need InteractionGroupingId, UserId, ItemId and ItemVariantId included with my Interactions Data Entity?

InteractionGroupingId indicates connected groups to the system, especially for items, enabling better overall inference across the board. For example, grouping transactions by InteractionGroupingId in a retail scenario can help the system learn the products that are “Frequently bought together” in a shopping cart, tasks that are completed in sequence for “Next Best Action”, or similar items in “People also like”.

UserId is used by the system to model the relationships formed between items and users who interact with items, which, depending on how the model is focused, can create both personalized and nonpersonalized modeling scenarios. In the personalized approach with UserId, the system models a mapping between users and items, based on the historical preferences of each individual user. It then produces the “based on your previous history, you might like” model, referred to as “Picks for you”.

ItemId is the actual item reference. It's essential to connect each item with its interactions and allow the patterns to emerge in the model. ItemIds that don't have interactions won't appear in recommendations for other products and may also suffer from poor recommendations when used as the seed for models like “People who like this item also like”.

ItemVariantId is used mainly for the “Similar looks” scenario and Visual Based Similarity (VBS) algorithm, which takes image metadata into account instead of interactions. This field isn't required for Models and Algorithms that rely on Interactions.
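Putting the four fields together, a few illustrative Interactions rows might look like this (field names are from this FAQ; the values are made up, and the full schema lives in the Data Contract reference):

```python
# Sketch of Interactions data entity rows (illustrative values only).
interactions = [
    # Same InteractionGroupingId = same cart/order, so the model can
    # learn "Frequently bought together" groupings.
    {"InteractionGroupingId": "order-001", "UserId": "u-1",
     "ItemId": "patties", "ItemVariantId": None},
    {"InteractionGroupingId": "order-001", "UserId": "u-1",
     "ItemId": "buns", "ItemVariantId": None},
    # ItemVariantId matters mainly for the image-driven "Similar looks"
    # scenario (VBS); interaction-based models don't require it.
    {"InteractionGroupingId": "order-002", "UserId": "u-2",
     "ItemId": "tshirt", "ItemVariantId": "tshirt-red"},
]

# Grouping by InteractionGroupingId recovers the cart contents:
order_001 = {r["ItemId"] for r in interactions
             if r["InteractionGroupingId"] == "order-001"}
print(sorted(order_001))  # → ['buns', 'patties']
```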

To learn more about the required data entities per scenario, see Data Entities Mapping Table.

Back to top

Can I use Item Metadata like Category, Color, Model etc.?

Item metadata can be helpful in many ways:

  • Better modeling of items in addition to the interactions input, so items with few or even no interactions (cold items) can still get “People also like” recommendations.
  • It's possible to have a model based entirely on item metadata (such as content tags) and return “similar items” recommendations.
    • How to do this: Give the metadata item a TagId. In the Interactions data entity, for each interaction row, set the InteractionGroupingId to the TagId, while keeping the item as ItemId and the user as UserId. To learn more about how TagIds work, see the guide to Metadata tagging and bucketing.


Use a separate account for the item metadata-based model, so that you have one Intelligent Recommendations model per account, kept separate from the account for the pure user-interactions-based model.

  • Items with informative textual descriptions can get “Similar description” recommendations, driven by our NLP deep model.
  • Items and variants with images, can get “Similar look" recommendations, driven by our visual cognition deep learning model.
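The TagId steps above can be sketched as a small data transformation (all tags, SKUs, and values are illustrative):

```python
# Toy mapping from items to their metadata tag (TagId).
item_tags = {"sku-1": "outdoor", "sku-2": "outdoor", "sku-3": "kitchen"}

# Original Interactions rows (toy data).
original = [
    {"InteractionGroupingId": "order-1", "UserId": "u-1", "ItemId": "sku-1"},
    {"InteractionGroupingId": "order-2", "UserId": "u-2", "ItemId": "sku-2"},
    {"InteractionGroupingId": "order-3", "UserId": "u-2", "ItemId": "sku-3"},
]

# Per the steps above: set InteractionGroupingId to the item's TagId,
# keeping the item as ItemId and the user as UserId.
tagged = [
    {**row, "InteractionGroupingId": item_tags[row["ItemId"]]}
    for row in original
]

# Items sharing a tag now share an InteractionGroupingId, so they group
# together exactly like items that appeared in the same shopping cart.
outdoor = {r["ItemId"] for r in tagged
           if r["InteractionGroupingId"] == "outdoor"}
print(sorted(outdoor))  # → ['sku-1', 'sku-2']
```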

Back to top

Can I use User metadata, like demographics, to personalize recommendations?

The Intelligent Recommendations service allows customers to include user metadata through a process of metadata tagging. User metadata can be powerful for recommending relevant content to all users, including:

  • New or infrequent customers (also known as “cold users”).
  • Connecting users who share common attributes through metadata tagging. To learn more about demographic bucketing with recommendations and to see examples, see guide to Metadata tagging and bucketing.

Back to top

Can I do User-to-User recommendations?

At the moment, full User-to-User recommendations aren't supported. For now, it's possible for some datasets to get User-to-User recommendations by making some changes to the Data Contract:

  • For each original Interaction input, construct each row to:
    • Write ItemId in the InteractionGroupingId column
    • Write UserId in the ItemId column
  • Make the API request: after making the previous changes to the data contract, call the “People also” list type with a UserId to return a list of similar users.
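The column swap described above can be sketched as a small transformation over the Interactions rows (values are illustrative):

```python
# Toy original Interactions rows.
original = [
    {"InteractionGroupingId": "order-1", "UserId": "u-1", "ItemId": "sku-1"},
    {"InteractionGroupingId": "order-2", "UserId": "u-2", "ItemId": "sku-1"},
]

# Swap columns so the item-to-item "People also" model effectively
# compares users: ItemId becomes the grouping, UserId becomes the item.
user_to_user = [
    {
        "InteractionGroupingId": row["ItemId"],  # ItemId -> grouping column
        "ItemId": row["UserId"],                 # UserId -> item column
    }
    for row in original
]

print(user_to_user[0])  # → {'InteractionGroupingId': 'sku-1', 'ItemId': 'u-1'}
```

After this rewrite, users who interacted with the same items share an InteractionGroupingId, so the “People also” list treats them as co-occurring "items" and returns similar users.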

Back to top

Where can I learn more about the Matrix Factorization model used with Intelligent Recommendations?

Our MF model: One-class collaborative filtering with random graphs. We developed an in-house version of Bayesian matrix factorization, which we described here and which can be used to learn any embeddings, as we explained here.

The paper is a bit heavy on the math. If you want a much gentler intro to matrix factorization, try this paper (which is different from what we do, but a good starting point for getting your feet wet).

Back to top

See Also

Troubleshooting Guide
API Status Codes
Data Contract
Data Entities Mapping Table
Guide to Metadata tagging and bucketing