Introduction to guidelines for human-AI interaction

This article provides background on 18 recommended guidelines for human-AI interaction design from Microsoft and how to apply them. Microsoft experts in the field of AI have drawn on years of research and thinking to develop this guidance.

Why do we need guidelines for human-AI interaction?

We need guidelines for human-AI interaction because AI systems can behave unpredictably in ways that are disruptive, confusing, offensive, or even dangerous. Because of these behaviors, AI systems often violate traditional principles of human-computer interaction design.

If a traditional application or product doesn't behave consistently, it is judged to have a design deficiency or a bug. Inconsistency and uncertainty, however, are inherent in AI-infused systems because of their probabilistic nature and because they change over time as they learn from new data.

Attributes of AI services, including their accuracy, their failure modes, and how easily they can be understood, raise new challenges and opportunities for product, service, and application developers.

What are the guidelines?

Microsoft proposes 18 generally applicable design guidelines for human-AI interaction.

These guidelines synthesize more than two decades of thinking and research about how to make AI user-friendly. The Microsoft team ran them through three rounds of validation to ensure they are specific, observable, and easy to understand. You can read the original research paper.

All human-AI interaction guidelines in one image

The 18 guidelines provide recommendations on how to create meaningful AI-infused experiences that leave users feeling in control and that respect their values, goals, and attention.

The guidelines are grouped into four categories, depending on the phase of user interaction to which they apply.

The initial phase

Upon initial exposure to the system, users should learn what to expect. What is this system capable of? And how well can it perform? Setting unreasonably high expectations can result in frustration and product abandonment, so it’s important to communicate honestly what the product can do, and how well. Therefore, the guidelines in the first category are about setting expectations:

1. Make clear what the system can do. Help the user understand what the AI system is capable of doing.

2. Make clear how well the system can do what it can do. Help the user understand how often the AI system may make mistakes.

During interaction

This subset of guidelines is about context. Whether it’s the larger social and cultural context or the local context of a user’s setting, current task, and attention, AI systems make inferences about people and their needs that depend on context.

3. Time services based on context. Time when to act or interrupt based on the user’s current task and environment.

4. Show contextually relevant information. Display information relevant to the user’s current task and environment.

5. Match relevant social norms. Ensure the experience is delivered in a way that users would expect, given their social and cultural context.

6. Mitigate social biases. Ensure the AI system’s language and behaviors don't reinforce undesirable and unfair stereotypes and biases.

Guidelines 5 and 6 in this group remind us to consider social norms and biases. Is the data set representative of the population? Is the model learning and replicating undesirable social biases? To apply these guidelines effectively, ensure your team has enough diversity to cover each other’s blind spots.
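Guideline 3 in this group, timing services based on context, can be made concrete with a small sketch. The names below (`UserContext`, `should_interrupt`, the urgency thresholds) are illustrative assumptions for this article, not a real API:

```python
from dataclasses import dataclass

@dataclass
class UserContext:
    in_meeting: bool        # e.g., inferred from the user's calendar
    typing: bool            # actively working in the foreground app
    do_not_disturb: bool    # explicit user setting

def should_interrupt(ctx: UserContext, urgency: float) -> bool:
    """Decide whether to surface a proactive suggestion now or defer it."""
    if ctx.do_not_disturb:
        return False              # always respect an explicit opt-out
    if ctx.in_meeting or ctx.typing:
        return urgency > 0.9      # busy user: defer all but urgent matters
    return urgency > 0.3          # otherwise a modest bar suffices

# A routine suggestion is deferred while the user is in a meeting.
print(should_interrupt(UserContext(True, False, False), 0.5))
```

The exact thresholds matter less than the structure: explicit user controls override everything, and the bar for interrupting rises with how occupied the user appears to be.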

When the system is wrong

AI-infused systems will inevitably be wrong, and you need to plan for it. The system might not trigger when expected or might trigger at the wrong time, so it should be easy to invoke and dismiss. When the system is wrong, it should be easy to correct it, and when it's uncertain, the user should be able to complete the task on their own. For example, the AI system can gracefully fade out or ask the user for clarification.

When the system is wrong, users are understandably puzzled and frustrated. How can you prevent such feelings? Explaining why the system did what it did helps users understand how it works, empathize with it, and feel less frustrated. To do that, though, you need to build explainability into your system rather than treating the model as a black box.

7. Support efficient invocation. Make it easy to invoke or request the AI system’s services when needed.

8. Support efficient dismissal. Make it easy to dismiss or ignore undesired AI system services.

9. Support efficient correction. Make it easy to edit, refine, or recover when the AI system is wrong.

10. Scope services when in doubt. Engage in disambiguation or gracefully degrade the AI system’s services when uncertain about a user’s goals.

11. Make clear why the system did what it did. Enable the user to access an explanation of why the AI system behaved as it did.
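Guideline 10 is often implemented as confidence-banded behavior: act when confident, ask a clarifying question when uncertain, and gracefully fade out when confidence is low. Here is a minimal sketch of that pattern; the thresholds and names are assumptions for illustration, not values from the research:

```python
# Confidence bands: above HIGH we act, between LOW and HIGH we disambiguate,
# below LOW we step aside and let the user complete the task themselves.
HIGH, LOW = 0.85, 0.50

def scope_action(prediction: str, confidence: float) -> str:
    if confidence >= HIGH:
        return f"act: {prediction}"                      # confident: take the action
    if confidence >= LOW:
        return f"clarify: did you mean {prediction}?"    # uncertain: ask the user
    return "fade out"                                    # low confidence: do nothing

print(scope_action("reply-all", 0.92))  # act: reply-all
print(scope_action("reply-all", 0.60))  # clarify: did you mean reply-all?
print(scope_action("reply-all", 0.20))  # fade out
```

The middle band is what distinguishes this from a simple on/off threshold: asking for disambiguation keeps the user in control instead of silently guessing.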

Over time

AI systems learn and improve over time, and they should learn from user behavior. The guidelines in this group encourage users to teach the system by providing granular feedback. As users provide feedback, convey the consequences of their actions: at a minimum, acknowledge that the feedback was recorded, and ideally indicate how it will affect their experience in the future.

However, as the system learns, improves, and updates its model, be cautious about introducing large, disruptive changes to the user experience. Plan to roll out such changes gradually and notify users about them.

12. Remember recent interactions. Maintain short-term memory and allow the user to make efficient references to that memory.

13. Learn from user behavior. Personalize the user’s experience by learning from their actions over time.

14. Update and adapt cautiously. Limit disruptive changes when updating and adapting the AI system’s behaviors.

15. Encourage granular feedback. Enable the user to provide feedback indicating their preferences during regular interaction with the AI system.

16. Convey the consequences of user actions. Immediately update or convey how user actions will impact future behaviors of the AI system.

17. Provide global controls. Allow the user to globally customize what the AI system monitors and how it behaves.

18. Notify users about changes. Inform the user when the AI system adds or updates its capabilities.
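Guidelines 12, 15, and 16 fit together naturally in code: keep a short-term memory of recent interactions, accept granular feedback during normal use, and immediately acknowledge how that feedback will shape future behavior. The sketch below is a hypothetical illustration; the class and method names are not a real Microsoft API:

```python
from collections import deque

class AssistantSession:
    def __init__(self, memory_size: int = 5):
        # Guideline 12: bounded short-term memory of recent interactions.
        self.recent = deque(maxlen=memory_size)
        self.muted_topics: set[str] = set()

    def record(self, interaction: str) -> None:
        self.recent.append(interaction)

    def give_feedback(self, topic: str, wanted: bool) -> str:
        # Guideline 15: granular, per-topic feedback during regular use.
        if wanted:
            msg = f"Got it. You'll see more {topic} suggestions."
        else:
            self.muted_topics.add(topic)
            msg = f"Got it. You'll see fewer {topic} suggestions."
        # Guideline 16: convey the consequence of the action immediately.
        return msg

session = AssistantSession()
session.record("suggested a meeting time")
print(session.give_feedback("meeting", wanted=False))
```

Bounding the memory (`maxlen=5`) also makes guideline 14 easier to honor: the system's recent behavior stays stable and inspectable rather than drifting with every interaction.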