Take special care if your bot will support consequential use cases


Ethical design is always important, for any application, but it becomes even more so when your bot serves a particularly consequential purpose, such as healthcare, education, finance, or security.

A consequential use case is one involving a service that, if denied, would have a significant impact on someone's daily life.

If your bot includes any of these applications, or might otherwise have consequential impacts, it's important to design for them. That includes making sure a human handoff is always and immediately available to the user, and it could mean choosing not to deploy a bot at all, at this time.
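To make the handoff requirement concrete, here is a minimal sketch of a turn handler that escalates to a human both on explicit request and when the bot's confidence is low. All names (`route_to_human`, `generate_bot_reply`), the phrase list, and the 0.6 threshold are illustrative assumptions, not part of any particular framework.

```python
# Hypothetical handoff logic: escalate on user request or low confidence.
HANDOFF_PHRASES = {"agent", "human", "representative", "speak to a person"}

def handle_turn(user_message: str, confidence: float) -> str:
    """Escalate immediately on request, or when the bot is unsure."""
    text = user_message.lower()
    if any(phrase in text for phrase in HANDOFF_PHRASES):
        return route_to_human(user_message, reason="user_requested")
    if confidence < 0.6:  # threshold is illustrative; tune per domain
        return route_to_human(user_message, reason="low_confidence")
    return generate_bot_reply(user_message)

def route_to_human(message: str, reason: str) -> str:
    # In a real deployment this would enqueue the conversation for a
    # live agent; here it just acknowledges the transfer.
    return f"Connecting you with a person now (reason: {reason})."

def generate_bot_reply(message: str) -> str:
    # Stand-in for the bot's normal response generation.
    return "Bot reply placeholder."
```

The key design point is that the escalation check runs before any bot reply is generated, so the handoff path is never gated on the bot "deciding" it has failed.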

Designing for consequential use cases

Before beginning any work on your bot, design or otherwise, first consider whether your bot has the capability to affect the well-being of your users, and whether there are legal requirements to consider. Some actions may ethically or legally require human judgment or approval; common examples include financial and healthcare systems.

For example, many countries and regions have strict rules about which institutions and individuals may give financial advice, and further rules about what actually constitutes financial advice. In such a jurisdiction, a bot operating in the fintech space must be programmed carefully so it doesn't stray into accidentally handing out legally fraught information. If a bot breaks the law, liability can be many-pronged, but legal responsibility will almost always begin and end with the developer.
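One common precaution is a pre-response guardrail that intercepts messages straying into regulated territory and defers to a licensed human instead. The sketch below is a simplistic keyword filter purely for illustration; the topic list, function names, and wording are assumptions, and a real system would need legal review and likely a trained intent classifier rather than string matching.

```python
# Illustrative guardrail: refuse to answer regulated financial-advice
# questions and point the user to a licensed professional instead.
REGULATED_TOPICS = ("should i buy", "should i sell", "invest in", "which stock")

def guard_financial_reply(user_message: str, draft_reply: str) -> str:
    """Return the draft reply only if the message avoids regulated topics."""
    text = user_message.lower()
    if any(topic in text for topic in REGULATED_TOPICS):
        return ("I can't provide financial advice. "
                "Please consult a licensed financial adviser.")
    return draft_reply
```

Running the guard over every outgoing reply, rather than trusting the response generator to self-censor, keeps the legal boundary in one auditable place.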

Is the use case suitable for automation at all?

Some use cases, particularly in industries where human health is at stake, are currently too important or too fraught for automation. Some things are simply too critical to be left to machines. However, simply because a use case is consequential doesn't mean it isn't suitable for automation. In fact, many life-saving technologies would never have been deployed at all if their developers had stopped at the fact that their use involves great risks.

What does this mean for bot deployment and automation? While “gut feeling” may be a useful guide, it can often be misleading. When use cases are consequential, it’s vitally important to consult with a wide range of experts in the domain to ensure that your application isn’t going to cause serious adverse impacts if it doesn’t work as expected.

“The first rule of any technology used in a business is that automation applied to an efficient operation will magnify the efficiency. The second is that automation applied to an inefficient operation will magnify the inefficiency” — Bill Gates.

We might well extend Gates' maxim on automation to consequential use cases. If automation is applied poorly, it will exacerbate adverse effects. But if automation is applied well, conversational AI bots may be able to help preserve or extend human life, magnify and protect wealth, and increase access to education. Just because a use case is consequential doesn't mean it shouldn't have automation applied. It does, however, require that the question of whether automation should be applied, and if so, how, be taken very seriously and with all due care.