Ensure your bot treats people fairly

The possibility that AI-based systems will perpetuate existing societal biases, or introduce new ones, is one of the top concerns identified by the AI community, and it is a valid concern. Many instances of machine learning and other AI tools perpetuating societal biases have already been identified, and more appear in the news almost daily. A bot will not earn trust by mimicking or adopting humanity’s worst traits, or by participating in behavior that perpetuates existing problems.

A recent example is that of a machine learning bot used in recruitment that was found to be replicating and amplifying a known societal problem - resumes with female or “foreign”-sounding names tended to be discounted by recruiters. The bot, which was meant to learn from human recruiting best practice, instead inadvertently perpetuated and magnified the existing bias.

This has been allowed to happen so far because (in addition to a number of social issues that are beyond the scope of this content) AI has often been assumed to be inherently free of bias, since it has no true mind of its own and is based on mathematical functions. While it is technically correct that mathematical functions on their own lack bias, this is a specious argument: the application of any function by humans can certainly be discriminatory, and if that application is assumed to be fair because of its mathematical origins, the likelihood that discrimination will sneak in the back door only increases. The fact that algorithmic bias is usually unintentional is itself part of the problem. Most people are not intentionally biased, and blind spots are, by their very nature, invisible to the people who have them. In software, as in life, it is only by careful reflection and examination that these blind spots can be revealed and corrected for.

Consider the potential biases that your bot may - and probably does - contain

The best way to identify these biases before deployment is to include fairness and diversity at every level of the ideation and creation process. It’s important to explicitly include diversity in the design of your feedback mechanisms, so that groups that would otherwise be under-represented are encouraged to participate rather than alienated.
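As one illustration of what this can look like in practice (a minimal sketch, with hypothetical names and data throughout), the snippet below records an optional, self-reported cohort alongside each piece of user feedback and reports how much of the feedback each cohort contributes, so groups your feedback channel is failing to reach become visible early.

```python
from collections import Counter
from dataclasses import dataclass
from typing import Optional

@dataclass
class FeedbackEntry:
    """A single piece of user feedback about the bot."""
    user_id: str
    rating: int                   # e.g. a 1-5 satisfaction score
    comment: str
    cohort: Optional[str] = None  # optional, self-reported group (never inferred)

def cohort_representation(entries: list[FeedbackEntry]) -> dict[str, float]:
    """Return the share of feedback contributed by each self-reported cohort."""
    counts = Counter(e.cohort or "undisclosed" for e in entries)
    total = sum(counts.values())
    return {cohort: count / total for cohort, count in counts.items()}

# Example with made-up entries: if one cohort supplies almost no feedback,
# your feedback loop may be alienating that group and is worth redesigning.
feedback = [
    FeedbackEntry("u1", 4, "Helpful answers", cohort="cohort_a"),
    FeedbackEntry("u2", 2, "Did not understand me", cohort="cohort_b"),
    FeedbackEntry("u3", 5, "Great", cohort="cohort_a"),
]
print(cohort_representation(feedback))
```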

Continually monitor and assess the data used, consumed, and produced by your bot and its AI systems

An old computing adage is easily adapted to fit AI: bias in, bias out. If your bot is consuming a biased dataset, it will produce the same bias in its results. Wherever possible, create monitoring systems to ensure that the data your bot uses has appropriate representativeness and quality, and make sure you do your best to understand the lineage and the relevant attributes of your training data. Bias detection tools are now beginning to appear; adopt them as they become available.
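As a minimal sketch of such monitoring (all column and group names here are hypothetical), the snippet below compares how each group is represented in a training dataset against a reference population and flags groups whose share diverges beyond a chosen tolerance. Dedicated bias detection tools go much further, but even a simple check like this can surface obvious skew before training.

```python
import pandas as pd

def representation_report(df: pd.DataFrame,
                          group_column: str,
                          reference_shares: dict[str, float],
                          tolerance: float = 0.10) -> pd.DataFrame:
    """Compare each group's share of the training data with a reference
    population and flag groups that diverge by more than `tolerance`."""
    observed = df[group_column].value_counts(normalize=True)
    rows = []
    for group, expected in reference_shares.items():
        actual = float(observed.get(group, 0.0))
        rows.append({
            "group": group,
            "expected_share": expected,
            "actual_share": actual,
            "flagged": abs(actual - expected) > tolerance,
        })
    return pd.DataFrame(rows)

# Example with made-up data: group_b is under-represented relative to the
# population the bot is meant to serve, so it gets flagged for review.
training_data = pd.DataFrame({"group": ["group_a"] * 80 + ["group_b"] * 20})
reference = {"group_a": 0.55, "group_b": 0.45}
print(representation_report(training_data, "group", reference))
```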

Another powerful tool against bias is for development teams to commit to the ideal of treating all people fairly. This needs to start at the team level and extend up through the organization. Strive for diversity in your teams - this will help ensure that you account for different perspectives and backgrounds while building your bot.

Perfection is impossible, but that shouldn’t stop us from striving to make our bots the best they can be. While the nature of humanity may prevent bias from being eliminated altogether, being conscious of the likely presence of bias, and taking steps to curb its worst effects, will help prevent your bot from creating more problems than it solves. It will also allow you to learn quickly once your bot is deployed, and to correct for instances of bias that do make it into production environments.