Personalizer frequently asked questions

This article contains answers to frequently asked troubleshooting questions about the Personalizer service.

Single region data residency

When will Personalizer be deprecated?

Starting September 20, 2023, you can't create new Personalizer resources. The Personalizer service will be retired on October 1, 2026.

How is my data replicated in a region with single region data residency?

Personalizer doesn't store or process customer data outside the region in which the customer deploys the service instance.

Configuration issues

I changed a configuration setting and now my loop isn't performing at the same learning level. What happened?

Some configuration settings will reset your model. Configuration changes should be planned and executed carefully after reading the documentation.

When configuring Personalizer with the API, I received an error. What happened?

If you use a single API request to both configure your service and change your learning behavior, you'll get an error. Make two separate API calls: first configure the service, then change the learning behavior.
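
For example, a minimal sketch using Python's requests library; the endpoint path, field names, and values are assumptions based on the Personalizer REST reference, and the resource endpoint and key are placeholders:

```python
import requests

# Placeholders: substitute your resource endpoint and key.
ENDPOINT = "https://<your-resource>.cognitiveservices.azure.com"
HEADERS = {"Ocp-Apim-Subscription-Key": "<your-key>"}
CONFIG_URL = f"{ENDPOINT}/personalizer/v1.0/configurations/service"

# Call 1: update service configuration settings only.
config = {"rewardWaitTime": "PT10M", "defaultReward": 0.0, "explorationPercentage": 0.2}
requests.put(CONFIG_URL, headers=HEADERS, json=config).raise_for_status()

# Call 2: change the learning behavior in a separate request.
requests.put(CONFIG_URL, headers=HEADERS, json={"learningMode": "Apprentice"}).raise_for_status()
```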

Transaction errors

I get an HTTP 429 (Too many requests) response from the service. What can I do?

If you picked a free pricing tier when you created the Personalizer instance, there's a quota limit on the number of Rank requests allowed. Review your API call rate for the Rank API (on the Metrics pane in the Azure portal for your Personalizer resource), and adjust the pricing tier (on the Pricing Tier pane) if your call volume is expected to exceed the threshold for the chosen tier.
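
While you adjust the tier, your client can also back off when it's throttled. A minimal sketch that honors the standard Retry-After response header (the Rank URL and key are placeholders):

```python
import time
import requests

RANK_URL = "https://<your-resource>.cognitiveservices.azure.com/personalizer/v1.0/rank"
HEADERS = {"Ocp-Apim-Subscription-Key": "<your-key>"}

def rank_with_throttling(body: dict) -> dict:
    """Send a Rank request, waiting out HTTP 429 (Too many requests) responses."""
    while True:
        response = requests.post(RANK_URL, headers=HEADERS, json=body)
        if response.status_code != 429:
            response.raise_for_status()
            return response.json()
        # Honor the server's Retry-After hint when present; otherwise wait one second.
        time.sleep(float(response.headers.get("Retry-After", 1)))
```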

I'm getting a 5xx error on Rank or Reward APIs. What should I do?

5xx errors are typically transient. If they persist, contact support by selecting New support request in the Support + troubleshooting section of the Azure portal for your Personalizer resource.
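
For transient 5xx responses, a bounded retry with exponential backoff is a common pattern. A hypothetical sketch; the send_request callable stands in for your Rank or Reward request:

```python
import time
import requests

def call_with_retries(send_request, max_attempts: int = 5) -> requests.Response:
    """Retry a request on transient 5xx errors with exponential backoff."""
    for attempt in range(max_attempts):
        response = send_request()
        if response.status_code < 500:
            response.raise_for_status()  # surface non-transient 4xx errors
            return response
        time.sleep(2 ** attempt)  # 1s, 2s, 4s, 8s, ...
    raise RuntimeError(f"Still receiving 5xx responses after {max_attempts} attempts")

# Usage (RANK_URL, HEADERS, and body as in the earlier sketch):
# result = call_with_retries(lambda: requests.post(RANK_URL, headers=HEADERS, json=body))
```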

Learning loop

In Apprentice mode, the learning loop doesn't attain a 100% match to the non-personalized (baseline) policy. How do I fix this?

Personalizer's effectiveness in Apprentice mode rarely approaches 100% of the application's baseline, and it never exceeds it. The best practice is not to aim for 100% attainment; a range of 60% to 80% should be achievable, depending on the use case. However, if learning performance is slow or plateaus below 60%, one of the following issues may have occurred:

  • Not enough features are sent with the Rank API call
  • Bugs in the features sent, such as passing non-aggregated data (for example, raw timestamps) to the Rank API
  • Bugs in loop processing, such as not sending reward data to the Reward API for events

To address these issues, adjust the features sent to the loop, or make sure the reward score accurately captures the value of the action returned by the Rank API call. One common feature fix is sketched below.
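
For example, a raw timestamp is unique to every event, so the model can't generalize from it; aggregating it into coarse buckets gives the model something it can learn from. A minimal sketch (the feature names are illustrative, not required by the API):

```python
from datetime import datetime, timezone

now = datetime.now(timezone.utc)

# Problematic: a raw timestamp is unique per event, so the model can't generalize from it.
bad_context = [{"time": now.isoformat()}]

# Better: aggregate the timestamp into coarse, repeatable buckets.
good_context = [{
    "dayOfWeek": now.strftime("%A"),  # for example, "Monday"
    "timeOfDay": "morning" if now.hour < 12 else "afternoon" if now.hour < 18 else "evening",
}]

rank_body = {
    "contextFeatures": good_context,
    "actions": [{"id": "article-a", "features": [{"topic": "sports"}]}],
}
```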

The learning loop doesn't seem to learn effectively or quickly. How do I fix this?

The learning loop needs a few thousand Reward calls before Rank calls prioritize effectively.
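
To verify that rewards are reaching the loop, each Reward call must reference the eventId returned by the corresponding Rank call. A minimal round-trip sketch using Python's requests library (the endpoint, key, features, and reward value are placeholders):

```python
import requests

ENDPOINT = "https://<your-resource>.cognitiveservices.azure.com"
HEADERS = {"Ocp-Apim-Subscription-Key": "<your-key>"}
rank_body = {
    "contextFeatures": [{"timeOfDay": "morning"}],
    "actions": [{"id": "article-a", "features": [{"topic": "sports"}]}],
}

# 1. Rank: ask Personalizer to choose an action.
rank = requests.post(f"{ENDPOINT}/personalizer/v1.0/rank", headers=HEADERS, json=rank_body)
rank.raise_for_status()
event_id = rank.json()["eventId"]

# 2. Reward: report how well the chosen action worked, keyed to the same eventId.
reward = requests.post(
    f"{ENDPOINT}/personalizer/v1.0/events/{event_id}/reward",
    headers=HEADERS,
    json={"value": 1.0},
)
reward.raise_for_status()
```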

If you are unsure about how your learning loop is currently behaving, run an offline evaluation and apply the corrected learning policy.

I keep getting Rank results with the same probabilities for all items. How do I know Personalizer is learning?

Personalizer returns the same probabilities in a Rank API result when it has just started and has an empty model, or when you've reset the Personalizer loop and the model is still within its model update frequency period.

When the new update period begins, you will see the probabilities change with the updated model results.
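
One way to watch for this transition is to compare the probability of each ranked action in the Rank response; identical probabilities across all actions suggest the model hasn't started differentiating yet. A small sketch, assuming the ranking and probability fields from the Rank API reference:

```python
# `rank_response` is a Rank API response body parsed with response.json().
probabilities = {item["id"]: item["probability"] for item in rank_response["ranking"]}
print(probabilities)

if len(set(probabilities.values())) == 1:
    print("All actions have equal probability; the model may be empty or newly reset.")
else:
    print("Model is differentiating; chosen action:", rank_response["rewardActionId"])
```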

The learning loop was learning but doesn't seem to learn anymore, and the quality of the Rank results isn't good. What should I do?

  • Make sure you've completed and applied one evaluation in the Azure portal for that loop.
  • Make sure all rewards were sent successfully via the Reward API, and processed.

How do I know that the learning loop is getting updated regularly and is used to score my data?

You can find the time the model was last updated on the Model and Learning Settings page of the Azure portal. An old timestamp likely means you aren't sending Rank and Reward calls; if the service has no incoming data, it doesn't update the model. If the learning loop isn't updating frequently enough, you can edit the loop's Model update frequency.
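
You can also check the timestamp programmatically. A hedged sketch, assuming the model properties endpoint and field names from the Personalizer REST reference:

```python
import requests

ENDPOINT = "https://<your-resource>.cognitiveservices.azure.com"
HEADERS = {"Ocp-Apim-Subscription-Key": "<your-key>"}

# Assumed endpoint: returns model metadata such as creationTime and lastModifiedTime.
props = requests.get(f"{ENDPOINT}/personalizer/v1.0/model/properties", headers=HEADERS)
props.raise_for_status()
print("Model last updated:", props.json()["lastModifiedTime"])
```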

Offline evaluations

An offline evaluation's feature importance returns a long list with hundreds or thousands of items. What happened?

This is typically caused by fine-grained features, such as timestamps, user IDs, or other high-cardinality values, being sent in.

I created an offline evaluation and it succeeded almost instantly. Why is that, and why don't I see any results?

The offline evaluation uses the trained model and data from the events that were sent to the Rank/Reward APIs in that time period. If your application did not send any data between the start and end times of the evaluation, it will complete quickly without any results.
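
Before you start an evaluation, double-check that the chosen window overlaps your Rank/Reward traffic. A hedged sketch of creating an evaluation over an explicit window, assuming the evaluations endpoint and body fields from the REST reference:

```python
import requests

ENDPOINT = "https://<your-resource>.cognitiveservices.azure.com"
HEADERS = {"Ocp-Apim-Subscription-Key": "<your-key>"}

evaluation = {
    "name": "my-evaluation",              # illustrative name
    "startTime": "2023-08-01T00:00:00Z",  # the window must contain Rank/Reward events
    "endTime": "2023-08-15T00:00:00Z",
    "enableOfflineExperimentation": True, # also search for better learning policies
}
response = requests.post(f"{ENDPOINT}/personalizer/v1.0/evaluations", headers=HEADERS, json=evaluation)
response.raise_for_status()
```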

Learning policy

How do I import a learning policy?

Learn more about learning policy concepts and how to apply a new learning policy. If you don't want to select a learning policy, you can use an offline evaluation to suggest a learning policy based on your current events.
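
If you've exported a policy (for example, one suggested by an offline evaluation), applying it is a single configuration call. A hedged sketch, assuming the policy configuration endpoint and body shape from the REST reference; the arguments value is illustrative and would come from your exported policy:

```python
import requests

ENDPOINT = "https://<your-resource>.cognitiveservices.azure.com"
HEADERS = {"Ocp-Apim-Subscription-Key": "<your-key>"}

policy = {
    "name": "policy-from-evaluation",               # illustrative name
    "arguments": "--cb_explore_adf --epsilon 0.2",  # illustrative; use your exported policy's arguments
}
response = requests.put(f"{ENDPOINT}/personalizer/v1.0/configurations/policy", headers=HEADERS, json=policy)
response.raise_for_status()
```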

Security

What API authentication protocols does Personalizer support?

Personalizer APIs use Microsoft Entra ID, which supports a variety of authentication and synchronization protocols.
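
For example, with the azure-identity library you can acquire a Microsoft Entra ID token for the standard Cognitive Services scope and send it as a bearer token instead of an API key:

```python
import requests
from azure.identity import DefaultAzureCredential

# Acquire a Microsoft Entra ID token for the Cognitive Services scope.
credential = DefaultAzureCredential()
token = credential.get_token("https://cognitiveservices.azure.com/.default")

ENDPOINT = "https://<your-resource>.cognitiveservices.azure.com"
headers = {"Authorization": f"Bearer {token.token}"}
rank_body = {
    "contextFeatures": [{"timeOfDay": "morning"}],
    "actions": [{"id": "article-a", "features": [{"topic": "sports"}]}],
}

response = requests.post(f"{ENDPOINT}/personalizer/v1.0/rank", headers=headers, json=rank_body)
response.raise_for_status()
```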

The API key for my loop has been compromised. What can I do?

You can regenerate one key after switching your clients to use the other key. Having two keys lets you roll keys over gradually, without downtime. For security purposes, we recommend rotating keys at a regular cadence.
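
One client-side pattern that takes advantage of the two keys is falling back to the secondary key when the primary is rejected, so you can regenerate one key while traffic continues on the other. A hypothetical sketch:

```python
import requests

PRIMARY_KEY = "<key-1>"
SECONDARY_KEY = "<key-2>"

def post_with_key_failover(url: str, body: dict) -> requests.Response:
    """Try the primary key; fall back to the secondary if the primary is rejected."""
    for key in (PRIMARY_KEY, SECONDARY_KEY):
        response = requests.post(url, headers={"Ocp-Apim-Subscription-Key": key}, json=body)
        if response.status_code != 401:
            return response
    return response  # both keys rejected; the caller handles the 401
```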