Design Principles for Conversational AI

4 years ago, I wrote about design principles for smart speakers and the example I used was as follows:

Me: Alexa, turn on bedroom lights.
Alexa: There are two services with the name bedroom
Me: Alexa, turn on bedroom lights
Alexa: Error Sound
Me: Alexa, turn on bedroom lights.
Alexa: I can’t find a device by that name
Me: Forget it, I’ll just use my phone

https://alexarnow.com/2021/02/19/4-principles-for-voice-design/

How different things are now! First, AI is much more capable than turning lights on after 4 tries. It can now pass the LSAT, start a business and get you fame, and write a press release. Next, the conversational interface often gets it right on the first try, and in less time than it takes to pull out our phone and look it up.

The technology is amazing, but like all tools, it works best when it is usable and meets the needs of the end user. When you are using conversational AI and designing an experience for another human, keep the following 5 design principles in mind:

  • Natural Language Processing (NLP) and Understanding – Use a model designed to handle natural language and to understand a wide range of idioms and dialects. Conversational AI applications should be designed to understand natural language and respond in a natural and conversational way. This requires a robust implementation of NLP and the ability to interpret the user’s intent.
  • User Centered Design – Design with the user’s experience first. This means that the application should be intuitive and easy to use, with clear prompts and feedback to guide the user through the conversation. This is enabled by designing from the user’s goals and needs (rather than starting from the data).
  • Error Handling – Conversational AI applications should be designed to handle errors and unexpected input from the user. The application should be able to recognize when the user has provided input that is outside of its capabilities and respond appropriately, either by providing an error message or redirecting the conversation.  There should be no blocking experiences in conversational AI. 
  • Personalization – Personalize applications to the user based on known information.  This can be from preferences, customer data or previous interaction histories. 
  • Contextual Awareness – Design applications to understand and maintain context throughout the conversation. This means that the application should be able to remember previous interactions and draw from other known information to guide the conversation. 

What are the best practices for each?

Apply the best practices for each of the design principles to execute on your conversational AI roadmap. Neglecting any one of these 5 categories means that you’ll deliver a crappy experience at great cost.

Natural Language Processing (NLP) and Understanding – Use the best NLP/U technology you can find; anything less will give you the crappy chatbot-type experience. This is where most designers settle for less and technologists don’t understand the value. Whether or not the experience is usable comes down to the quality of understanding. Make sure you have rigorous ways of defining success and measuring quality. Any organization, large or small, will need some form of machine learning operations to augment traditional development and support processes. A small code sketch follows the list below.

  • Use advanced NLP algorithms to understand complex sentences and phrases.
  • Use sentiment analysis to interpret the user’s emotional state and respond appropriately.
  • Use machine learning algorithms to learn from user interactions and improve the accuracy of responses.
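
As an illustration of the intent and sentiment bullets above, here is a minimal sketch in Python. The nlu client and its classify_intent / analyze_sentiment methods are hypothetical placeholders for whatever NLP/U service you choose; the point is that intent, confidence, and sentiment combine into a single routing decision.

```python
# Minimal sketch: route a user turn using intent + sentiment.
# `nlu` is a hypothetical NLU client; swap in your provider of choice.

def route_turn(nlu, user_text: str) -> str:
    intent = nlu.classify_intent(user_text)        # e.g. {"name": "book_table", "confidence": 0.91}
    sentiment = nlu.analyze_sentiment(user_text)   # e.g. {"label": "negative", "score": 0.87}

    # Low-confidence understanding: don't guess, ask for clarification.
    if intent["confidence"] < 0.6:
        return "I want to make sure I get this right. Could you ask me a different way?"

    # Frustrated user: acknowledge before continuing with the task.
    if sentiment["label"] == "negative" and sentiment["score"] > 0.8:
        return "Sorry about the trouble. Let me help with " + intent["name"].replace("_", " ") + "."

    return handle_intent(intent)  # hand off to normal fulfillment


def handle_intent(intent: dict) -> str:
    # Placeholder for task fulfillment logic.
    return "Handling " + intent["name"] + "..."
```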

User Centered Design – It needs to work better and more easily than what it is competing with. Think not only about traditional channels and competitors but also about asking a family member, friend or colleague. A quick sketch of confirmation feedback follows the list below.

  • Use clear and concise prompts to guide the user through the conversation.
  • Provide feedback to the user to let them know that their input has been received and understood.
  • Use humor or personality to make the conversation more engaging and enjoyable for the user.
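
To make the feedback bullet above concrete, here is a small sketch of confirming what the application understood before acting on it. The slot names and wording are illustrative assumptions, not a prescribed format.

```python
# Sketch: echo back the parsed request as feedback before acting on it,
# so the user knows their input was received and understood.

def confirmation_prompt(intent_name: str, slots: dict) -> str:
    summary = ", ".join(f"{name}: {value}" for name, value in slots.items())
    return f"Got it. You'd like to {intent_name.replace('_', ' ')} ({summary}). Shall I go ahead?"


print(confirmation_prompt("book_table", {"party size": 2, "time": "7pm"}))
# -> Got it. You'd like to book table (party size: 2, time: 7pm). Shall I go ahead?
```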

Error Handling – Things don’t always work right. Don’t make it the user’s problem, and don’t get lazy with "oops, something went wrong". Don’t block the interaction; instead, provide a next best action – "Could you ask me a different way?" A sketch of this pattern follows the list below.

  • Use error messages to guide the user when their input is not understood.
  • Provide suggestions or alternatives when the user provides input that is outside the application’s capabilities.
  • Use machine learning algorithms to improve error handling over time by learning from user interactions.
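
A minimal sketch of the next-best-action idea, assuming the application keeps a small catalog of supported intents; the names and wording here are illustrative.

```python
# Sketch: never dead-end the user. When input falls outside the
# application's capabilities, suggest supported alternatives instead of
# a generic "oops, something went wrong".

SUPPORTED_INTENTS = {
    "check_order_status": "check the status of an order",
    "update_address": "update your delivery address",
    "contact_support": "talk to a support agent",
}

def handle_unrecognized(user_text: str, log_for_review) -> str:
    # Keep the unhandled utterance so the team can learn from it later.
    log_for_review(user_text)
    suggestions = ", ".join(SUPPORTED_INTENTS.values())
    return (
        "I didn't quite catch that. Could you ask me a different way? "
        f"For example, you can ask me to {suggestions}."
    )
```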

Personalization – This often gets implemented lazily. A great way to think about implementing this well is to look at the revealed preferences shown by the user and integrate them seamlessly into the conversation so they reduce the amount of information gathering required. If we know, for example, that a user typically runs errands in one part of town, tailor the recommendations to that geographic area. This will enhance value in a way that doesn’t sound like the application is Facebook-stalking the end user. A sketch of this follows the list below.

  • Use user data to personalize the conversation to their preferences and needs.
  • Use machine learning algorithms to recommend relevant products or services based on the user’s history.
  • Use known information to make the conversation more personal and engaging but don’t assume familiarity.
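
To illustrate the revealed-preference idea from the paragraph above, here is a sketch that biases recommendations toward the area where a user's past interactions actually happened. The data shapes and field names are assumptions for illustration.

```python
from collections import Counter

# Sketch: infer a user's usual area from past interactions (revealed
# preference) and use it to rank recommendations, rather than asking
# the user to state a location every time.

def usual_area(interaction_history: list) -> str:
    areas = [i["area"] for i in interaction_history if i.get("area")]
    return Counter(areas).most_common(1)[0][0] if areas else ""

def rank_recommendations(candidates: list, interaction_history: list) -> list:
    area = usual_area(interaction_history)
    if not area:
        return candidates  # nothing revealed yet; don't guess
    # Put options in the user's usual area first, without excluding the rest.
    return sorted(candidates, key=lambda c: c.get("area") != area)
```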

Contextual Awareness – Similar to personalization, we can make a lot of assumptions about the end user. Don’t shy away from them, as they can provide useful information for decision making: "People like you typically do X, would you like to try it?" A sketch of carrying context across turns follows the list below.

  • Use context to provide more relevant and accurate responses.
  • Use previous interactions to personalize the conversation and make it more engaging.
  • Use information about the user’s location or other environmental factors to provide more relevant responses.
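
As a sketch of carrying context across turns, here is a tiny slot memory that lets a follow-up like "what about tomorrow?" reuse the previous intent. The structure and intent names are assumptions for illustration.

```python
# Sketch: keep the last understood intent and slots so a follow-up turn
# can reuse them instead of forcing the user to repeat everything.

class DialogueContext:
    def __init__(self):
        self.last_intent = None
        self.slots = {}

    def update(self, intent, new_slots: dict):
        if intent:                      # a fresh request resets the context
            self.last_intent = intent
            self.slots = dict(new_slots)
        else:                           # a follow-up only overrides some slots
            self.slots.update(new_slots)
        return self.last_intent, self.slots


ctx = DialogueContext()
ctx.update("check_weather", {"city": "Seattle", "day": "today"})
# Follow-up: "what about tomorrow?" carries no new intent, only a new slot value.
print(ctx.update(None, {"day": "tomorrow"}))
# -> ('check_weather', {'city': 'Seattle', 'day': 'tomorrow'})
```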

A new way of operating is required

To use these practices successfully, a new operating model is required. This operating model doesn’t simply extend existing processes; it requires new governance to complement them. Because the model is a critical driver of the experience, the care and feeding of the model is the most important investment in conversational AI.

Machine learning (ML) Operations (MLOps) is a practice that focuses on the management and deployment of ML models in production environments. When it comes to generative AI, MLOps plays an important role in ensuring that the models are deployed and maintained effectively. MLOps helps to manage the infrastructure and resources required to train and deploy these models.

For example, MLOps can be used to automate the process of training and deploying a generative AI model. This can involve setting up pipelines to automatically train the model on new data, testing the model for accuracy and performance, and deploying the model to a production environment where it can generate new content.
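
As a sketch of what such a pipeline might look like, here is a simplified train/evaluate/deploy gate. The function names and the accuracy threshold are assumptions standing in for whatever training jobs, evaluation harness, and deployment tooling your team actually uses.

```python
# Sketch of an automated retrain-and-deploy gate. Each step would be
# backed by real tooling; the callables and threshold are placeholders.

ACCURACY_THRESHOLD = 0.90  # assumed quality bar; set it from your own success metrics

def retrain_pipeline(new_data, train_model, evaluate_model, deploy_model, flag_for_review):
    model = train_model(new_data)       # retrain on the latest data
    metrics = evaluate_model(model)     # held-out accuracy, latency, etc.

    if metrics["accuracy"] >= ACCURACY_THRESHOLD:
        deploy_model(model)             # promote to production
        return "deployed", metrics

    # Below the bar: keep the current production model and have a human look.
    flag_for_review(metrics)
    return "rejected", metrics
```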

MLOps can also help to ensure that the generative AI models are operating effectively in a production environment. This may involve monitoring the performance of the model over time, identifying any issues that arise, and implementing updates or changes to the model as needed.
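
A minimal monitoring sketch, assuming each production interaction is logged with a simple success signal (for example, whether the model's answer was accepted); the window size and alert threshold are illustrative.

```python
from collections import deque

# Sketch: track a rolling success rate in production and raise an alert
# when quality drifts below an agreed threshold.

class QualityMonitor:
    def __init__(self, window: int = 500, alert_below: float = 0.85):
        self.outcomes = deque(maxlen=window)   # 1 = accepted answer, 0 = fallback/rejected
        self.alert_below = alert_below

    def record(self, success: bool) -> None:
        self.outcomes.append(1 if success else 0)

    def healthy(self) -> bool:
        if len(self.outcomes) < self.outcomes.maxlen:
            return True                        # not enough data yet to judge
        rate = sum(self.outcomes) / len(self.outcomes)
        if rate < self.alert_below:
            # In practice: page the on-call team, open a retraining ticket, etc.
            print(f"ALERT: success rate dropped to {rate:.2%}")
            return False
        return True
```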

Next, I’ll dive into the details of a good MLOps organization with tips on what to look for and who to partner with.