Month: July 2022

  • Multimodal Conversational AI assistants

    Artificial Intelligence (AI) adoption has skyrocketed over the last 18 months, and Gartner places chatbots just one step away from the slope of enlightenment on its AI hype cycle. At the same time, AI technologies are helping businesses accelerate growth and engineer trust. Together with Conversation Design, Conversational AI is transforming customer experience, customer support, and digital customer service for an onscreen world.

    From mobile-first experience to Conversational AI multimodality in customer interaction

    “Mobile-first experience” — this is the paradigm that has been the number one goal in the strategy of IT companies since Google announced this concept back in 2010. Now in 2022, it’s time for companies to expand on that approach and think about multimodality.

    To determine if multimodal experiences are best for your users, you need to ask yourself the following questions:

    • Do your users have access to multimodal devices?
    • How valuable is that for those users?
    • What natural conversations are your users having?
    • What are they looking for? And how could a bot help them achieve it?

    The mobile world shows the flexibility and scalability of company offerings, and virtual assistants are no different. But not everyone has a multimodal assistant in their household, and enterprise adoption is still in its infancy.

    Featured resources: Free guide to Conversation Design and How to Approach It.

    Multimodal Conversation Design is exciting because it marries voice and chat, letting each fill gaps the other experience may leave. For example, today’s voice technology is still limited, with challenges such as understanding certain accents. Multimodal technology can address this pain point by offering visuals for the user to lean on instead of the voice experience, making the experience more accessible to all users.

    “During consultation for the automotive industry, when we looked at English support it became very clear that for English US, English UK, Australian etc cultural context is extremely important to consider. So the way you would name a car part in English US would be different from English UK, and you really need to customize your language model.” — Quirine van Walt Meijer, Senior Designer in Conversational AI at Microsoft.

    Conversational AI starts from stable, well-trained language models as a foundation; from there, you look outward at the context: which channels are interesting, and which modalities can best surface the brand or user experience. Language is the biggest factor in Conversational AI: once you start building a conversation, you will probably encounter dialects or several languages within one country. Check out our investigation of the different names for soft drinks in the United States in a recent post, Dialect Diversity in Conversation Design.

    Regional Word Variations Across the US

    It’s essential for conversation design teams to understand how the end-users talk about products, services, and things the virtual assistant will need to know. Always collect sample dialog from a diverse representative sample of the bot’s end users to ensure the system will understand all the different types of jargon and phrases.

    Read also: Three Secrets Behind Impactful Troubleshooting Chatbot Conversation Flows

    Best use cases for Multimodal Conversational AI Assistants

    A great multimodal experience is one that feels seamless, switching contexts easily. A good example is a self-driving vehicle agent that takes your booking through a text box, then talks to you by voice once you are inside the vehicle. Check out more Multimodal Conversation Design Use Cases and opportunities for enterprises.

    Multimodal Conversational AI Assistants

    The Future of Multimodal Conversation Designed Experiences

    In the not-so-far future, every time a brand launches a conversational experience, it will run across multiple channels, each specially designed for that channel. Brands need to invest in offering automation to their customers across multiple voice and chat channels, creating more accessible solutions. By allowing more entryways for users to self-serve, a company’s ROI will only increase.

    Want to Reduce Customer Support Costs? We analyze your customer pain points and address them with automation. Get in touch with us!

    Multimodal Conversational AI assistants was originally published in Chatbots Life on Medium, where people are continuing the conversation by highlighting and responding to this story.

  • The Evolution of Conversational AI

    Talking to machines through the years

    Old fashioned red telephone
    Photo by Miryam León on Unsplash

    Conversation comes naturally to us. It’s remarkable just how fluently we can converse on any number of topics, and adapt our style with ease to speak with any number of people.

    In contrast, our conversations with machines can be clumsy and stilted. Conversational AI has been a long-standing research topic, and much progress has been made over the last decades. There are some large-scale deployed systems that we’re able to interact with by language, both spoken and written, although I’m sure very few people would call the interactions natural. But whether it’s a task-based conversation like booking a travel ticket, or a social chatbot that makes small talk, we’ve seen continual evolution in the way the technology is built.

    The first chatbot

    One of the first, and still most famous, chatbots, Eliza, was built around 1966. It emulates a psychotherapist, using rule-based methods to discover keywords in what a user types and reformulate them into a pre-scripted question to ask back to the user. There are implementations still around today which you can try.

    Eliza’s inventor, Joseph Weizenbaum, conceived Eliza as a way to show the superficiality of communication between people and machines. And so he was surprised by the emotional attachment that some people went on to develop with Eliza.

    “Press 1 to make a booking, press 2 for cancellations…”

    The personal computer wasn’t a reality until the late 1970s. So at the time of Eliza there wasn’t really a way for people to interact with a text-based chatbot, unless they happened to work with computers. Chat technology instead began to be used in customer service scenarios over the phone. These systems were dubbed Interactive Voice Response (IVR). DTMF (dual-tone multi-frequency) was initially a key part of these systems for enabling user input. DTMF assigns each keypad button two frequencies that sound when it is pressed, which the receiver can decode to figure out which button the user pressed. This is the mechanism behind the scenes when call centres ask you to “Press 1 for bookings, press 2 for cancellations…”, etc.
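    The keypad-to-frequency mapping can be sketched in a few lines of JavaScript. The frequency table below is the standard DTMF assignment; the decoder is a simplification, since a real receiver detects the two tones inside an audio signal rather than receiving clean frequency values:

```javascript
// Standard DTMF layout: each key sits at the intersection of a
// low-frequency row and a high-frequency column (values in Hz).
const ROWS = [697, 770, 852, 941];
const COLS = [1209, 1336, 1477, 1633];
const KEYS = [
  ['1', '2', '3', 'A'],
  ['4', '5', '6', 'B'],
  ['7', '8', '9', 'C'],
  ['*', '0', '#', 'D'],
];

// Given the two detected tones, recover which key was pressed.
function decodeDtmf(lowHz, highHz) {
  const row = ROWS.indexOf(lowHz);
  const col = COLS.indexOf(highHz);
  if (row === -1 || col === -1) return null; // not a valid DTMF pair
  return KEYS[row][col];
}

console.log(decodeDtmf(697, 1209)); // '1' -> "press 1 for bookings"
console.log(decodeDtmf(770, 1336)); // '5'
```

    This two-of-eight scheme is why a receiver can distinguish all sixteen keys from just eight detectable frequencies.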

    The first commercial IVR system for inventory control was invented in 1973, with commercialisation of IVRs picking up in the 1980s as computer hardware improved. Through the 1990s, as voice technology improved, limited vocabulary speech-to-text (STT) was increasingly able to handle some voice input from users, alongside continued use of DTMF. Phone conversations also need a way to respond to the user with voice. Initially, this would have been pre-recorded audio, and later text-to-speech (TTS).

    In early systems, the natural language processing (NLP) used to interpret what users said was typically rule-based. To make life easier, the questions asked by the system might be very direct, in order to reduce the number of things a person might plausibly say in response, e.g. “Please say either booking or cancellation”, or “Please state the city you are departing from”.

    The conversation flow — i.e. what to say next — in these systems was handcrafted, like a flowchart. Standards were developed for writing conversational flows. VoiceXML is one such standard that came into being in 1999. It allowed VUI designers to focus solely on designing the conversation, while software engineers could focus on the system implementation.
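    A minimal, hand-written VoiceXML sketch (illustrative only, not taken from any particular deployment) shows how a designer could express one turn of such a flow, accepting either speech or a DTMF key press:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<vxml version="2.0" xmlns="http://www.w3.org/2001/vxml">
  <form id="mainMenu">
    <field name="service">
      <prompt>Please say either booking or cancellation.</prompt>
      <!-- Each option pairs a spoken phrase with a DTMF key. -->
      <option dtmf="1" value="booking">booking</option>
      <option dtmf="2" value="cancellation">cancellation</option>
      <filled>
        <prompt>You chose <value expr="service"/>.</prompt>
      </filled>
    </field>
  </form>
</vxml>
```

    The designer writes only dialogue elements like these; the VoiceXML interpreter and telephony platform handle the speech recognition, audio playback, and call control underneath.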

    Learning how to converse

    Handcrafting conversation flows is complex, and leads to sometimes clumsy interactions and brittle systems that can break when users say something unexpected. From the early 2000s, researchers looked into ways to learn conversation flows rather than handcraft them. Many of the models at this time were based on reinforcement learning, and were able to learn a conversation flow (or ‘dialogue policy’) through interacting with simulators and by having lots of conversations with real people.

    One of the difficulties of deploying such statistical systems for dialogue policy is in the lack of control they offer to developers. In a world where companies like to maintain control of their brand in customer service interactions, it’s difficult to accept randomness in performance that might reflect poorly on them. A particularly egregious case is that of Tay — a social chatbot released by Microsoft in 2016 which quickly learnt to post offensive and inflammatory tweets, and had to be taken down.

    As the internet grew, so too did the places in which conversational AI technology was deployed. Web browsers, instant messaging and mobile apps quickly became channels in which text-based chat was now viable.

    The deep learning boom

    Through the 2010s, deep learning had a big impact on STT and TTS systems, significantly improving their ability to handle a wider range of language. Deep learning also started to have an impact in the NLP community. Understanding the meaning of what a user says in a conversation is cast as two machine learning tasks: intent recognition and slot (or entity) recognition. Commercial platforms like Amazon Lex and Google Dialogflow are built around the ideas of intent and slot. Intent recognition is a text classification task which predicts which of a predefined set of intents a user has expressed. For example, a ticket booking system might have MakeBooking or MakeCancellation intents. Slot recognition is a named entity recognition (NER) task which aims at picking salient entities (or slots) out of the text. In a ticket booking scenario, DestinationCity and SourceCity might be among the slots a system aims to recognise. Together, the intent and slots can be used to infer that “I’d like to book a ticket to London” and “Please can I buy a ticket and I’m going to London” effectively mean the same thing to a ticket booking system. The system can use the recognised intent and slots to communicate with a wide range of systems (databases, knowledge graphs, APIs, etc.) and act on a user’s request.
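    For illustration only, here is a toy JavaScript sketch of that target representation. The keyword rules, city list, and output shape are invented for this example; real platforms use trained classifiers and their own response formats. The point is that both example utterances reduce to the same intent and slots:

```javascript
// A toy NLU result: whatever the surface wording, the interpreted
// meaning is an intent plus a set of slots.
function interpret(utterance) {
  const text = utterance.toLowerCase();
  // A real system uses a trained classifier; this sketch fakes it
  // with keyword rules just to show the target representation.
  const intent = text.includes('cancel') ? 'MakeCancellation' : 'MakeBooking';
  const slots = {};
  const CITIES = ['london', 'paris', 'berlin']; // toy gazetteer
  for (const city of CITIES) {
    if (text.includes('to ' + city)) slots.DestinationCity = city;
    if (text.includes('from ' + city)) slots.SourceCity = city;
  }
  return { intent, slots };
}

const a = interpret("I'd like to book a ticket to London");
const b = interpret("Please can I buy a ticket and I'm going to London");
console.log(a); // { intent: 'MakeBooking', slots: { DestinationCity: 'london' } }
console.log(JSON.stringify(a) === JSON.stringify(b)); // true: same meaning
```

    Once different phrasings collapse into this structure, downstream code only ever has to handle intents and slots, not raw text.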

    Using machine learning for NLP leads to conversational systems that can robustly handle a wide range of user inputs. Still, it’s common to have a layer of handcrafted rules alongside the ML model to handle edge cases or guarantee the system will behave appropriately for particularly important or common user queries. Further, even when machine learning can interpret individual user utterances, the overall conversation flow still usually remains handcrafted.

    Deep Learning for Dialogue

    The intent-and-slot approach has its limitations as a way of modelling dialogue. For now, though, it’s a common way to build both voice and chat bots in real-world applications.

    Deep learning continues to shape the trajectory of conversational AI. Deep neural networks (DNNs) were first used for learning dialogue policies. The natural next step in using DNNs for both NLP and dialogue policy is to build a single model that directly predicts appropriate responses in a conversation. An example of this kind of model is Google’s Meena, a large neural network trained to respond appropriately in conversations about different topics.

    These end-to-end neural dialogue models build on large non-conversational language models like BERT and GPT-3. However, they’re difficult to use in commercial products because of some key issues. It’s difficult to have any control over the conversation flow, and they can sometimes produce biased or inappropriate responses. This isn’t great for company branding! Also, they struggle to retain a consistent persona throughout a conversation, forget what they’ve previously said, often produce relatively boring responses, and cannot easily link with external sources of information like knowledge bases or APIs to take action or to find the right information. These models of dialogue are new, however, and current research is addressing these limitations.

    Conversational AI has been the topic of extensive research and development for decades, and a lot has changed in that time. It’s impossible to do justice to all of the research that’s happened, and is still going on, so this is a small snapshot of how the field has developed. Things will look very different in a few years’ time as the challenges of the current technology are addressed.

    The Evolution of Conversational AI was originally published in Chatbots Life on Medium, where people are continuing the conversation by highlighting and responding to this story.

  • How to Add Chatbot to React Native

    Building a chatbot on a React Native app may have been a complicated affair in the past, but not so today, thanks to Kommunicate’s Kompose chatbot builder.

    In this tutorial, we are going to build a chatbot application from scratch using Kompose (Kommunicate’s chatbot builder) and React Native.

    We’ll do the integration in 2 phases:

    1. Create a Kompose chatbot and set up the answers.
    2. Add the created chatbot to your React Native app.

    Let’s jump right into it.

    Phase 1: Create a chatbot in Kompose and set up the answers

    Step 1: Set up an account in Kommunicate

    If you do not have an account in Kommunicate, you can create one here for free.

    Next, log in to your Kommunicate dashboard and navigate to the Bot Integration section. Locate the Kompose section and click on Integrate Bot.

    If you want to build a bot from scratch, select a blank template and go to the Set up your bot section. Select the name of your Bot, your bot’s Avatar, and your bot’s default language and click “Save and Proceed”.

    You are now done creating your bot; all you have to configure now is “Enable bot to human transfer” for when the bot encounters a query it does not understand. Enable this feature and click “Finish Bot Setup.”

    From the next page, you can choose if this bot will handle all the incoming conversations. Click on “Let this bot handle all the conversations” and you are good to go.

    You can find your newly created bot here: Dashboard → Bot Integration → Manage Bots.

    Step 2: Create welcome messages & answers for your chatbot

    Go to the ‘Kompose — Bot Builder’ section and select the bot you created.

    First, set the welcome message for your chatbot. The welcome message is the first message that the chatbot sends to the user who initiates a chat.

    Click the “Welcome Message” section. In the “Enter Welcome message — Bot’s Message” box, provide the message your chatbot should show to users when they open the chat, and then save the welcome intent.

    After creating the welcome message, the next step is to feed answers/intents. These answers/intents can be the common questions about your product and service.

    The answers section is where you have to add all the user’s messages and the chatbot responses.

    Go to the “Answer” section, click +Add, then give an ‘Intent name’.

    In the Configure user’s message section, mention the phrases that you expect from users that will trigger this intent.

    In the Configure bot’s reply section, mention the responses (text or rich messages) the chatbot will deliver to users for that particular message. You can add any number of answers and follow-up responses for the chatbot. Here, I have used a custom payload by selecting the “Custom” option under “More”.

    Once you have configured the responses, click “Train Bot”, found at the bottom right, to the left of the preview screen. Once training succeeds, a toast reading “Answer training completed” appears at the top right corner.

    Phase 2: Add the created chatbot to your React Native project:

    Step 1: Set up the React Native development environment

    Step 2: Create a React Native app

    Create a new React Native app (my-app) by using the command in your terminal or Command Prompt:

    npx react-native init my-app

    Step 3: Now, navigate to the my-app folder

    cd my-app

    Step 4: Install Kommunicate to your project

    To add the Kommunicate module to your React Native application, install it using npm:

    npm install react-native-kommunicate-chat --save

    Step 5: Add Kommunicate code to your project

    Navigate to App.js in your project. By default, a new project contains demo code that is not required. You can remove that code and write your own to start a conversation in Kommunicate.

    First, import Kommunicate using:
    import RNKommunicateChat from 'react-native-kommunicate-chat';

    Then, create a method to open a conversation before returning any views. Next, we need a button which, when clicked, opens that conversation. Add the following React elements and return them; the pieces assemble into an App.js like this:

    import React from 'react';
    import {
      Button,
      SafeAreaView,
      StatusBar,
      StyleSheet,
      Text,
      View,
      useColorScheme,
    } from 'react-native';
    import { Colors, Header } from 'react-native/Libraries/NewAppScreen';
    import RNKommunicateChat from 'react-native-kommunicate-chat';

    const App = () => {
      const isDarkMode = useColorScheme() === 'dark';

      const backgroundStyle = {
        backgroundColor: isDarkMode ? Colors.darker : Colors.lighter,
      };

      // Opens a Kommunicate conversation for this app.
      const startConversation = () => {
        let conversationObject = {
          'appId': 'eb775c44211eb7719203f5664b27b59f', // The [APP_ID] obtained from the Kommunicate dashboard.
        };

        RNKommunicateChat.buildConversation(
          conversationObject,
          (response, responseMessage) => {
            if (response == 'Success') {
              console.log('Conversation started successfully with id: ' + responseMessage);
            }
          },
        );
      };

      return (
        <SafeAreaView style={[styles.con, backgroundStyle]}>
          <StatusBar barStyle={isDarkMode ? 'light-content' : 'dark-content'} />
          <Header />
          <View style={{backgroundColor: isDarkMode ? Colors.black : Colors.white}}>
            <Text style={styles.title}>Here you can talk with our customer support.</Text>
            <View style={styles.container}>
              <Button
                title="Start conversation"
                onPress={() => startConversation()}
              />
            </View>
          </View>
        </SafeAreaView>
      );
    };

    const styles = StyleSheet.create({
      con: {flex: 1},
      container: {padding: 16},
      title: {fontSize: 18, textAlign: 'center', marginTop: 16},
    });

    export default App;

    Here is my screenshot:

    Originally Published at on 21/06/2022

    How to Add Chatbot to React Native was originally published in Chatbots Life on Medium, where people are continuing the conversation by highlighting and responding to this story.

  • Conversation Design Discussion: Designing for voice agent

    Welcome to part 2 of our Conversation Design discussion. In part 1 we learned about multimodal Conversational AI assistants: best use cases and their future. Today we’ll dive into voice assistants: how to build and test new customer experiences, collect feedback on them, and decide whether to invest in Conversational AI solutions.

    Today, voice agents are commonly chatbots with voice communication, helping you with certain quick FAQs and tasks; but if we look at the realm of the future, it will be a multi-sensory, multimodal experience. It’s not just about the modality: other factors come into play, such as what we’ve seen from the Metaverse. With mixed reality there’s so much opportunity, so how do companies support that in the best way? Check out our Voice Assistant Use Cases for Business to automate repetitive or labor-intensive tasks.

    Voice agents and other potential user interactions to provide better customer experience

    We often see that companies start off with chatbot development and then replicate the experience to a voice channel. A big mistake companies make is not recognizing and designing for cognitive load. When listening, humans can remember around 2–3 pieces of information in one communication, but in text-based communication we can give users 5–9+ pieces of information. Additionally, readers can re-read the prompt without the bot having to repeat it for them.
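    As a purely hypothetical sketch of that guideline (the per-channel limits and the helper function are invented for illustration, not taken from any framework), a designer might cap menu length by modality:

```javascript
// Rough working-memory budgets per channel, following the guideline
// above: voice listeners retain ~3 items, readers can handle more.
const MAX_OPTIONS = { voice: 3, chat: 7 };

// Trim a menu to fit the channel, folding the rest behind a catch-all item.
function presentOptions(options, channel) {
  const limit = MAX_OPTIONS[channel] ?? 3; // unknown channels get the strictest budget
  if (options.length <= limit) return options;
  return [...options.slice(0, limit - 1), 'More options'];
}

const menu = ['Bookings', 'Cancellations', 'Refunds', 'Baggage', 'Check-in', 'Upgrades'];
console.log(presentOptions(menu, 'voice')); // [ 'Bookings', 'Cancellations', 'More options' ]
console.log(presentOptions(menu, 'chat').length); // 6: all six fit in the chat budget
```

    The same content then surfaces differently per channel: the voice prompt stays short enough to hold in memory, while the chat version can show everything at once.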

    Cognitive load is a big challenge with voice agents because some voices, particularly default ones, may sound a little monotone to users. As time goes on, however, we are seeing more human-like voices arrive on assistants, with better inflections that help users focus and listen. Those variations of voice may give Conversation Design a little more flexibility in how much information we can give a user than we have today.

    Testing in Conversation Design process

    Much of the quality in Conversation Design experiences lies in quality testing, especially in voice. If you’re creating conversational flows, it’s essential to act out and read your dialogs aloud; this is the basics of the conversation design process. We recommend back-to-back conversations, so users can respond organically to prompts without looking at the speaker’s facial expressions. Check out more insights from our specialists in troubleshooting chatbot conversation flows to level up the answers of your bot.

    Download a Conversational flow chart diagram with the scenario of building dialogues for your chatbot

    These testing exercises allow the designer to address potential points of friction and confusion, immediately correct them, and retest, all before launching the experience. This mitigates unanticipated errors and promotes strong conversations and completion of your assistant’s conversational flows.

    Conversation Design Process workflow

    How to collect feedback about your AI chatbot or voice agent?

    Collecting feedback is the most critical step in the Conversation Design process in ensuring your conversational solution is useful and enjoyable for customers to use. As your solution expands in features, user feedback will allow your team to make mindful and impactful decisions to further improve the customer experience.

    A common mistake companies make is having their voice or chat bot ask the user for feedback after every prompt: did it answer the question, was it helpful or not? This requires the customer to give multiple pieces of feedback in one interaction, which can quickly cause frustration or have them ignore it entirely. Think of your own experiences with human agents: do they ask you how they did every minute? It’s unlikely, because it’s uncommon for humans to give feedback multiple times in one conversation, and the interaction would turn into a very robotic experience.

    It’s challenging enough for users to fill out a single survey at the end of an experience, so asking for multiple rounds of feedback is even less likely.

    A great way to ask for customer feedback is at the end of newly launched flows, or in the form of an ‘Anything Else’ menu at the end of longer, more complex flows. As more feedback is received on new flows, conversation designers and bot tuners can further optimize the experience, and once enough positive feedback is received, the feedback prompt is removed.

    How to decide whether your business should invest in Conversational AI?

    Before investing in conversational AI, firms should consider the following factors:

    • Technical abilities and constraints: The ability to integrate a conversational AI solution and connect it via API is a must. If your firm has limited technology, with few opportunities for the bot to integrate with the systems end users engage with today, your user experience may be slower than the current state and the expected ROI will be constrained. Remember, if users are being pushed to switch to a chat or voice channel, that experience must be an improvement on the current state, otherwise customers won’t make the channel shift.
    • Economic viability: Ensure your firm has the financial investment to support discovery, design, development, and optimization costs for a conversational AI solution. More importantly, ensure there is a budget to support ongoing improvements and new feature builds.
    Results of Conversational AI implementation according to Gartner research
    How to Choose Conversational AI Platform. Get the checklist

    It’s so critical for businesses to ensure their internal systems are flexible enough to allow for integrations. These will help produce frictionless, impactful and engaging experiences that end-users will want to interact with. This is why conversation design is so critical in the bot building process, so that what is launched is something that these users will want to use.

    Want to learn more about how voice can transform your customer experience? Reach out to our expert today!

    Conversation Design Discussion: Designing for voice agent was originally published in Chatbots Life on Medium, where people are continuing the conversation by highlighting and responding to this story.

  • Russian TV show to have virtual human, Elena created by Sber.

    The Russian company Sber is going a step further by introducing a virtual human to host shows. Yes, the Russian tech giant has announced its virtual human, Elena, who is currently hosting RBC TV’s Markets and Investor Calendar shows. To transform pictures and recordings into an AI character that can easily stand in for a human host, Sber used its Visper platform.

    Virtual Host

    With the Visper platform, SberDevices expects users to be able to make the same kind of avatar as Elena. Visper is capable of creating any kind of AI character, from a realistic virtual human like Elena to an animated cartoon character. Sber thinks these characters can be highly beneficial for online interaction.

    “The Visper platform is quite young, but it is evolving rapidly, and so are the characters we create with it. For example, they have recently learned to speak English,” SberDevices stated.

    TV shows elsewhere also seem quick to jump on such technology for hosting. For example, the Japanese company Nikkei’s innovation lab recently introduced a platform for producing virtual-human videos, which includes a news anchor character.

    “This is an interesting and important experiment for RBC. Similar technology is developing throughout the world, and we will soon see similar virtual hosts on many channels,” stated RBC TV’s managing director.


    Russian TV show to have virtual human, Elena created by Sber. was originally published in Chatbots Life on Medium, where people are continuing the conversation by highlighting and responding to this story.


  • Google Cloud Collaborates With Cohere to Set Up NLP Models

    Google Cloud has recently announced a multi-year collaboration with Cohere, a startup whose platform makes it easier for developers to build Natural Language Processing models into applications. Running these models requires a lot of infrastructure and hardware, and Cohere currently doesn’t own many such resources. So Google will act as the infrastructure provider, supporting Cohere with Google Cloud’s machine learning hardware.

    Cohere and Google

    To draw out the best conclusions and trends, Cohere ingests an enormous amount of unstructured data and text, and this collaboration will support Cohere’s language model development with Google’s infrastructure. Additionally, Google will give Cohere access to its Tensor Processing Units. Cohere will also be able to use Google Cloud’s servers to provide wider access to its models for sales. Developers will be able to adjust the models with a minimum amount of time and coding.

    Just before the announcement of the collaboration, Cohere released an API for its language models. Now, with this collaboration, the company is set to expand its appeal by providing affordable language models.

    Cohere CEO Aidan Gomez stated while announcing the collaboration: “By partnering with Google Cloud and using Cloud TPUs, Google’s best-in-class machine learning infrastructure, we are bringing cutting-edge NLP technology to every developer and every organization through a powerful platform.”

    “Until now, high-quality NLP models have been the sole domain of large companies. Through this partnership, we’re giving developers access to one of the most important technologies to emerge from the modern AI revolution.”

    Cohere isn’t alone in collaborating with Google Cloud: multiple companies, including Wendy’s, have recently joined hands with Google Cloud to incorporate AI and voice tools, in Wendy’s case across its restaurant chain.

    “Leading companies around the world are using AI to fundamentally transform their business processes and deliver more helpful customer experiences,” Google Cloud’s CEO explained.


    GOOGLE CLOUD COLLABORATES WITH COHERE TO SET UP NLP MODELS was originally published in Chatbots Life on Medium, where people are continuing the conversation by highlighting and responding to this story.

  • Call Center Experience with Voice agent: Challenges, Use Cases and Case Study

    This is part 2 of a series about how call centers can evolve with the help of Conversational AI solutions. Part 1 looked at the theory about choosing the right technology for call center automation and the opportunity to grow with Conversational AI solutions for business. Here we take an even closer look at a specific scenario of Conversational AI in finance and general use cases of voice agents for both end customers and live agents.

    Customer Experience with Voice Agents

    Voice assistants are growing in popularity these days. The ability for a solution to understand voice, interpret the intent and meaning, and provide value to users is growing at an incredible rate. When working with the voice channel, there are many challenges that can be encountered, where a voice agent can create a huge benefit.

    Challenges with Voice

    • Determine why a customer is calling (their intent).
    • Customer authentication and verification.
    • Troubleshooting.
    • Inconsistent customer service.
    • First contact resolution.
    • Agent engagement and productivity.
    • Accurate log analysis and data collection.

    An automated voice agent can help to mitigate some of the challenges listed above.

    Customer Experience With Voice Assistant

    Authentication of the user can be a challenge, depending on how you integrate voice into the solution. If you’re using an IVR, asking security questions is part of the normal process flow for user authentication. But for an embeddable voice solution within an application installed on a mobile device, where the user has already authenticated, you should be able to make certain presumptions of authentication. Any data already presented in the app should be available through the voice agent without additional authentication. Where security needs enhancing, it’s a matter of identifying how to integrate voice technology into existing workflows.

    Inconsistent customer service is also a problem. People react differently to the personality, mood, language, accents, word selections, and slang of live agents. A voice agent can mitigate many of those elements by providing the same answer each time, ensuring every customer receives the same information for the same question.

    Use Cases of Customer Experience with Voice Agents

    • Intent capture & intelligent call routing.
    • Conversation transcription.
    • Handle thousands of calls simultaneously, making them a solution for peak times or off-hours support.
    • Handle transactions during the call routing stage.
    • Authenticating customers through natural conversation.
    • Personalized service based on customer history.

    Download the best Use Cases of Chatbot Personalization

    One key use case of voice systems is conversation transcription. Its first value is the ability to properly capture all customer activity into a transcript for review and analysis, which can be used to quantify new intents and actions. Its second value is letting the bot “listen” to how a live conversation between a user and an agent is going, allowing the bot to proactively recommend to the live agent how to support the customer.
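    The second use, agent assist over a live transcript, can be sketched as a toy “listener” that scans the customer’s turns for trigger phrases and surfaces a tip to the agent. The triggers and tips below are invented for illustration; a real system would use NLU rather than substring matching.

```python
# Invented agent-facing suggestions keyed by trigger phrases.
SUGGESTIONS = {
    "refund": "Refunds under $50 can be approved without escalation.",
    "cancel": "Offer the retention discount before processing a cancellation.",
}

def assist(transcript_turns):
    """transcript_turns is a list of (speaker, text) pairs; return the
    agent-facing tips triggered by the customer's wording."""
    tips = []
    for speaker, text in transcript_turns:
        if speaker != "customer":
            continue
        for trigger, tip in SUGGESTIONS.items():
            if trigger in text.lower() and tip not in tips:
                tips.append(tip)
    return tips
```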

    An AI bot doesn’t have to be customer-facing; it can be a right hand for live agents, making them more efficient and more effective.

    An additional benefit of a Conversational AI solution is volume management. Scaling an automated service for short bursts is much more viable than scaling up a live agent call center: an automated service can be scaled up in a matter of minutes or hours, allowing for quick support when needed. The ability to provide a consistent support experience in off-hours or on holidays has huge value.

    Embracing omnichannel opportunities and automation offers huge potential for enterprises. Transactions through call routing also matter, since you need to understand who the right live agent is for a particular user engagement. You don’t want to transfer someone to a live agent only to find out it was the wrong department; that’s a poor customer experience. Using an automation tool to identify where the user needs to be transferred, whom they need to talk to, and to manage expectations around wait time and availability is key to a successful engagement.
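    A minimal sketch of that routing step, mapping a recognized intent to a department and setting a wait-time expectation, might look like this. The department names, queue sizes, and handle times are made-up examples, not real values.

```python
# Hypothetical departments with current queue depth and average handle time.
DEPARTMENTS = {
    "billing": {"queue": 4, "avg_handle_min": 6},
    "support": {"queue": 1, "avg_handle_min": 9},
}
FALLBACK_DEPARTMENT = "support"

def route(intent: str) -> dict:
    """Map a recognized intent to a department and estimate the wait."""
    department = intent if intent in DEPARTMENTS else FALLBACK_DEPARTMENT
    info = DEPARTMENTS[department]
    return {
        "department": department,
        "expected_wait_min": info["queue"] * info["avg_handle_min"],
    }
```

    Surfacing `expected_wait_min` to the caller before the transfer is what manages the expectation the paragraph above describes.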

    How do you measure the success of implementing a Conversational AI solution?

    Bringing quantifiable numbers to the table is important, and an analytical approach can provide this data, including:

    • How many conversations were initiated?
    • How many conversations were able to be responded to within the bot?
    • How many had escalations to a live agent?
    • What was the reduction in wait times for live agents as a result of the inclusion of a bot?
    • What intents were not able to be identified by the bot?

    These are some metrics that can be measured to create success criteria. But the key metric for a Conversational AI bot is user satisfaction, and a method for measuring it should be part of the equation.
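    The questions above can be turned into numbers once conversation logs are available. The sketch below assumes a simple log format (one dict per conversation) invented for the example; real platforms expose this data through their own analytics APIs.

```python
def bot_metrics(conversations):
    """Compute containment/escalation rates and collect unmatched
    utterances (candidates for new intents) from a conversation log."""
    total = len(conversations)
    contained = sum(c["resolved_by_bot"] for c in conversations)
    escalated = sum(c["escalated"] for c in conversations)
    unmatched = [u for c in conversations for u in c["unmatched_utterances"]]
    return {
        "initiated": total,
        "containment_rate": contained / total if total else 0.0,
        "escalation_rate": escalated / total if total else 0.0,
        "unmatched_utterances": unmatched,
    }
```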

    AI Chatbot to Human Agent Handoff: a Case Study for a Financial Institution

    Let’s take a look at an experience Master of Code has provided, where we were able to create a live agent handoff system. Through the use of questions and the bot’s understanding of the user’s intent, we were able to transfer the user to the right department to get the needed information.

    Use Cases: billing and account management FAQs, plus specific live agent handoff dependent on 12 topics. The AI chatbot for banking and financial services was built by Master of Code on a partner platform. The client serves financial institutions, financial planners, and broker-dealers.

    Download Finance Use Cases for Conversational AI report with Top Examples

    The result was a 30% reduction in transfers. This came both from the Conversational AI in finance solution answering some of the questions itself and from its ability to find the right agent. A 30% reduction in transfers means that 30% of those calls were handled by the Conversational AI, so wait times in the queue for live agents were significantly lessened. Live agents no longer had to rush through a conversation with a user to get to the next call in the queue, which made for a more effective experience.

    1.4M hours saved by implementing the automated customer service tool Erica at Bank of America. Ready to transform? Get in touch

    Call Center Experience with Voice agent: Challenges, Use Cases and Case Study was originally published in Chatbots Life on Medium, where people are continuing the conversation by highlighting and responding to this story.

  • Digital Avatars Bring New Meaning to Multitasking

    Half visible man hiding behind digital code. Image by Peter Linforth, courtesy of Pixabay.

    Imagine writing a blog at home, teaching a writing course, and attending a Zoom meeting simultaneously. Believe it or not, digital AI companies are doing just that. I’m not talking about gaming avatars, but doppelgangers that sound like you and use your facial and body expressions. These digital twins are bringing new meaning to multitasking.

    The Use of Digital Twins in Business is Limitless

    In November of 2021, HourOne debuted REALS, a platform where anyone can create one or more digital twins for business use. The business can create AI synthetic assistants that can do just about anything.

    These AI assistants can answer phones, make appointments, and give a presentation in four different languages. Unfortunately, you will still have to get your own coffee, but your AI assistant can order your lunch and have it delivered!

    You have complete control over how they interact with your customers. You can program and re-program them in virtually (no pun intended!) minutes, making it easy to revise a marketing task that isn’t going as well as planned or reproduce it using a different assistant and new languages.

    This will be an asset to small businesses and gig workers who can put this technology to great use with international clients and remotely located workers. Business is ever-changing, and how you brand and market your business is essential to its success.

    REALS stated that you can set up any presentation using “thousands of text lines in multiple languages.” This assistant can give the same presentation in numerous countries simultaneously, something that would take a human presenter months and cost the company thousands of dollars.

    The look of marketing will be changed as companies create an interactive avatar that becomes synonymous with their brand. Imagine your kids being able to interact with a character on the cereal box.

    Personal Use of Digital Doppelgangers Will Expand Your World

    While these interactive characters will be a great marketing tool, they can also be created for personal use, especially in the metaverse. Imagine visiting with your sister on a beach in Greece for a monthly platform fee, or talking to family members and friends.

    Housebound seniors and people with disabilities will be able to create and use avatars to experience virtual life, find people who share their experiences, make and visit new friends, and experience life in a way they haven’t been able to before.

    3-D Software That Creates Collaboration

    Nvidia’s Omniverse will launch free 3-D software that will allow real-time collaboration between businesses. They believe that increasing access will benefit us all.

    After its beta launch one year ago, it has seen 100,000 downloads by innovators using it to enhance their workflows.

    This will expand collaboration across industries and make working with partners worldwide possible for companies big and small.

    The use of digital avatars is in its infancy; its use in business and the metaverse will change human interaction in untold ways. The changes they offer are going to bring new meaning to multitasking!

    Digital Avatars Bring New Meaning to Multitasking was originally published in Chatbots Life on Medium, where people are continuing the conversation by highlighting and responding to this story.

  • Dubai residents can now ask ‘Fares’ about any municipal issue

    Dubai has decided to go a step further by introducing a virtual assistant that answers citizens’ questions and requests regarding municipal issues. Using conversational AI, the voice assistant ‘Fares’ gives citizens multiple options to chat or talk with Dubai officials, including through WhatsApp.

    Virtual municipal service

    It’s an advanced way of serving the people of Dubai: they can communicate with ‘Fares’ by calling the city hotline or connecting through the WhatsApp number, and they can ask any question, whether about the city and its services or about the status of a request they made earlier. Additionally, there is an option to verify rumors about the city. Citizens can even ask ‘Fares’ about the status of paying housing taxes and report any issues.

    “Communication with #DubaiMunicipality is easier through ‘Fares’, the virtual assistant who answers your inquiries around the clock and helps you submit your services’ reports,” the city’s official Twitter feed explained. “‘Fares’ is available via WhatsApp, the website, and the Municipality’s unified application. Visit our website and learn more about him.”

    Many cities and countries have opted to provide smart services to their citizens, so Dubai isn’t the first to take such an initiative; governments in India, Russia, and the U.S. state of West Virginia have also been experimenting with conversational AI.


    Dubai residents are now able to ask any municipal issues to ‘Fares’ was originally published in Chatbots Life on Medium, where people are continuing the conversation by highlighting and responding to this story.

  • Call Center Automation using AI-Powered Chatbot

    Welcome to our discussion of call center evolution using AI-powered chat and voice agents. Today we want to share our experiences and observations, as one of the leading Conversational AI companies, on how chat and voice assistants can take call center automation to the next level, providing value to both customers and call center agents regardless of the communication channel. Then in part 2, we’ll discuss the call center experience with voice agents: challenges, use cases, and a case study.

    How to choose the right technology for a Conversational AI solution for Call Centers?

    There is no single platform or technology that’s a golden ticket to a successful Conversational AI experience for call center automation. In general, for every industry and field, it takes multiple technologies and multiple systems to create an effective solution. Bringing those systems together is something Master of Code has experience delivering, which has earned us recognition as a trusted partner by some significant providers of Conversational AI solutions in the market, including Amazon and Microsoft.

    We work with many platforms based on customer needs and the selected solutions, and we have the knowledge and skill to create Conversational AI experiences within an existing platform to optimize it, not just for a cloud deliverable. This allows us to understand what works and what doesn’t, and to provide recommendations and guidance throughout the lifecycle of the engagement.

    Implementing a Conversational AI experience within a call center

    There are a few fundamental components that must exist, beginning with the call center tools implemented for an organization. There is no one right or wrong tool, simply what works best for your organization. All of the major solutions, such as Cisco, RingCentral, Zendesk, and many more, bring value to the call center automation experience, and these solutions enable customers to enter the queue to engage with live agents.

    Opportunities for call center automation with Conversational AI

    • Reducing repetitive requests to agents by answering easy questions.
    • Reducing wait times for users, which also improves agent stats.
    • Bringing a high level of conversational automation into the equation.

    Working with a conversational platform allows you to marry the live agent component to the automation piece in a much simpler fashion. In many cases, these call center solutions have a conversational element, either pre-built or through partnerships that can be leveraged. Otherwise, the two systems can talk to one another via existing APIs or through custom integrations that can be developed.

    How to Choose Conversational AI Platform. Get the checklist

    Top Conversational AI channels and types for customer engagement

    When deciding how you want to engage customers, you need to identify the most applicable channels and conversation types. That could mean adding a chatbot to your website, using existing digital support channels such as Apple Business Chat, Facebook Messenger, or Microsoft Teams, or replacing a tone-based IVR phone system with a Conversational AI-based one.

    Conversational platform for Call Center
    • Channels and communication methodologies drive the use case priority and provide a foundation for measuring success. Since each channel will offer different ways of user engagement, strong knowledge of the channel and what is available within it is key to creating that optimal experience. This selection, which can grow as your needs change, is one of the fundamental pieces that can drive digital engagement for your brand.
    • Workforce management tools. These allow for agent planning, but also for accurately routing a customer to the appropriate live agent, person, or department. The faster a solution can give the user an answer, the more positive the experience.
    • Agent assist. Useful in determining the customer’s need and finding the right agent or workflow to execute the request.
    • Menu-driven navigation systems. These can be low-cost to implement, but they create a much more linear experience and provide limited metrics. You might know how many people follow a certain path, but you don’t necessarily get insight into what else they’re looking to do within your chatbot.
    • Analytics. By converting the experience to a Conversational AI flow, the amount of data you get increases dramatically. You will see what people want to do, identify new flows and user experiences, and have data-centric metrics to support your growth decisions.
    Download a Conversational Flow Chart Diagram with the Scenario of Building Dialogues for your Chatbot

    In addition, you can add an NLP solution, either a cloud-based one like Microsoft LUIS or an on-prem solution such as RASA. Based on organizational needs, you can fine-tune the experience to flow in a way that is virtually seamless to the end user.
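    Whichever NLU you pick, the call shape is similar: text in, a ranked intent with a confidence score out. The keyword scorer below is only a stand-in that mimics that response shape (it is not the LUIS or RASA API), and the intents and keywords are invented for the example.

```python
# Invented intents and trigger keywords for the stand-in NLU.
INTENT_KEYWORDS = {
    "check_balance": {"balance", "how much"},
    "reset_password": {"password", "reset", "locked"},
}

def parse(utterance: str) -> dict:
    """Return the best-matching intent and a naive confidence score."""
    text = utterance.lower()
    scores = {intent: sum(kw in text for kw in kws)
              for intent, kws in INTENT_KEYWORDS.items()}
    best = max(scores, key=scores.get)
    if scores[best] == 0:
        return {"intent": "fallback", "confidence": 0.0}
    return {"intent": best,
            "confidence": scores[best] / len(INTENT_KEYWORDS[best])}
```

    Swapping this stub for a real NLU endpoint leaves the rest of the flow untouched, which is exactly why the tool selection can be revisited later without rebuilding the bot.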

    It is important to select a tool for call center automation based on your business needs, such as supported languages, what PII concerns exist, and the technological constraints of each provider.

    Finding the right NLP to manage, understand, and train for your call center automation is key. This can extend further into other AI automation components such as sentiment analysis, document analysis, visual recognition, and other cognitive services. Having a long-term strategy helps with the right selection and can save time and money down the road.

    Additionally, we cannot forget the line of business tools that house the detailed data that is needed to make the conversation useful to end-users. This is where users can authenticate themselves, perform appropriate tasks, and access CRM, ERP, or other operational services to allow a live agent to engage and answer user questions. As a bot obtains more access to information, its value continues to increase, resulting in customers who can obtain assistance much more quickly.

    Value of integrating Conversational AI solutions for call center automation

    Whether it is the channel itself, a workforce management tool, NLU or other cognitive systems, line-of-business tools, or an analytics platform, we cannot deny the importance of integrations. No-code systems may have challenges obtaining the information needed for an effective exchange, while a low-code approach will, at minimum, allow for the development of these custom solutions to provide value.

    Omnichannel support allows the bot to work alongside any channel and over multiple communication methods. When a new channel is released, businesses just need to create the experience for that channel based on existing flows; if the Conversational platform does not support it, an investigation into the optimal experience and implementation in that channel will be required.

    Integration makes conversations more performant because information can be presented in a more conversational manner. Providing information from a Conversational AI solution in the same way a live agent would makes for a more pleasant experience. We’re not limited to assisting a customer directly; providing the right solution to a problem may mean implementing a right hand for live agents.

    API data unification allows us to bring all of the data points together into a coherent message. We may get data from a CRM, an inventory system, an authentication or SSO platform, or directly from a database. Knowing where we can get the data from means we can unify the experience and merge the data in a way that makes sense for the user.
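    The unification step itself can be as simple as merging each source’s response into one profile the bot answers from. The fetch functions below are stand-ins for real CRM and order-system APIs, with invented field names.

```python
def fetch_crm(user_id):
    """Stand-in for a CRM lookup."""
    return {"name": "Ada", "tier": "gold"}

def fetch_orders(user_id):
    """Stand-in for an inventory/order-system lookup."""
    return {"open_orders": 2}

def unified_profile(user_id):
    """Merge every back-end source into one dict the bot can answer from."""
    profile = {"user_id": user_id}
    for source in (fetch_crm, fetch_orders):
        profile.update(source(user_id))
    return profile
```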

    Value of Integration in Call Center automation

    Integration provides flexibility, such as adding data sources without impacting existing ones. When a CRM upgrades to a new version with new APIs, we update the connectivity library and ensure we get the same information with no changes to the Conversational AI flow. The system can also be configured to fall back to the previous iteration, allowing it to remain operational even when downstream services are challenged.
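    The fallback idea can be sketched as trying connector versions newest-first and dropping back to an older one when a downstream call fails. Both connector functions here are hypothetical stand-ins.

```python
def get_account(user_id, connectors):
    """connectors is an ordered list (newest version first); each
    connector either returns data or raises on failure."""
    last_error = None
    for connector in connectors:
        try:
            return connector(user_id)
        except Exception as err:
            last_error = err  # remember the failure and try the next version
    raise RuntimeError("all connector versions failed") from last_error
```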

    Also read: Three Secrets Behind Impactful Troubleshooting Chatbot Conversation Flows

    User request translation is a key value proposition for both a bot and a human agent. Letting a bot handle the interpretation allows for a more probabilistic understanding of the request, which can lead to routing to the appropriate person or simply pulling down the right information. Integrating with NLP and other business services to extract that data and make it available to respond to the user’s request effectively is a huge value statement.

    The Conversational AI approach is much more natural, and the more human-like we make it, the more users will engage and perform actions without needing a live agent. Building trust that Conversational AI solutions can answer those questions is key. They can provide a 24/7 support model, in languages that your office perhaps can’t support directly through live agents.

    Many enterprises have older systems that require a more hands-on approach to obtaining data: a mainframe system, something written in a language or on a platform that is no longer viable, or a system whose supporting subject matter expert has left the organization. Accessing legacy APIs and surfacing their data in a modern system provides significant value to the customer and the business, which a no-code solution may not be able to provide.

    Having built many extensive Conversational AI solutions, we at Master of Code are well versed in finding the right efficiencies and use cases, bringing information at the right time to create an optimal experience. With each of our partners, we work with stakeholders to best understand the ability to implement Conversational AI solutions. It includes choosing the right technology for the task at hand, data sources, and integrations to generate the best experience for users. The objective is to create efficiency and address customer concerns quickly and correctly.

    Take the first step in leveling up your Conversational AI experience:

    Call Center Automation using AI-Powered Chatbot was originally published in Chatbots Life on Medium, where people are continuing the conversation by highlighting and responding to this story.