First steps in Chatbot Performance Testing with Botium Box
One major pitfall of building chatbots is underestimating the importance of performance. The UI of a chatbot is usually very simple, so it is easy to forget the complexity behind these virtual assistants. A slow chatbot might be acceptable for a home project, but a company cannot afford to neglect performance. Bad performance is a serious UX killer.
Performance Testing is the key to ensuring that your chatbot stays responsive under high load. The worst case is a chatbot that does not answer at all because it was not able to recover after heavy load caused by clients or a DDoS attack.
Stress Testing
To collect some basic knowledge about the performance of your chatbot, we suggest starting with a stress test. Stress testing basically means starting with a small number of parallel users that is gradually increased during the execution. After a few minutes it will answer two fundamental questions:
First, the main output is how many parallel users can be served without slowing down the system. Determining which parts are slow of course requires human interpretation.
Second, but no less important, are the detected errors. A stress test is usually the first production-like scenario in which a chatbot experiences heavy load.
Let’s see some more real-life examples…
First Stress Test
By default, all parallel users have a very simple conversation with the bot, saying just ‘hello’, for example. They wait for an answer and repeat the conversation until the end of the test is reached. Any response coming from the chatbot is accepted, so if the chatbot sometimes answers with ‘Hello User’ and sometimes with ‘I don’t understand’, it has no impact on the performance results. The main goal here is to get an answer, while an HTTP error code or a missing answer is of course interpreted as a failure.
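For illustration, a minimal BotiumScript convo file for this kind of test could look roughly like the sketch below (the convo name and the exact wording are my assumptions, not taken from the article; a #bot step left empty simply waits for any answer from the chatbot):
simple hello

#me
hello

#bot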
In order to start a Stress Test, we have to select the “Performance Tests” tab in our Test Project, and choose Stress Test there.
The duration can be 1 minute. For the parallel users, let’s use 5 for starting and 1000 for ending. (You can enter 1000 by editing the field directly.)
“Required percentage of Successful Users?” can be 0 percent, because we don’t want to stop the test on failed responses (at the moment).
I have two goals with these settings: getting a first impression of the maximum number of parallel users the chatbot is able to serve, and discovering how the chatbot breaks under extreme load. In production your chatbot could face much more load than expected, so you have to test it.
The first chart in the stress test results shows that there were errors.
We see that the bot can easily handle 5 parallel users. But at the next step, when the number of parallel users was increased to over 200, the chatbot already started to fail.
For more insights into the failures you can download a detailed report.
This report contains all errors detected by Botium Box. The types of failures are:
- Timeout (the chatbot does not answer at all, or is too slow)
- Invalid response (API limit reached, server down, invalid credentials)
- Response with unexpected content (we send ‘hi’ and expect ‘Hello!’, but the chatbot answers with ‘Sorry, some error was detected, come back later!’)
In my case there is just one kind of error in the report:
2168/Line 6: error waiting for bot — Bot did not respond within 10000ms
After 10s Botium Box has given up waiting for the response: the chatbot’s answer time was over the limit, or it did not answer at all. Slowing down is common under extreme load, but receiving no answer at all is more critical and cannot be ignored.
By checking the Response Times chart in our stress test results, I notice that the chatbot is simply too slow. The response time keeps increasing until it reaches 10s, and most of the responses fail. The next step would be to check chatbot-side metrics/logs or to increase the timeout in Botium Box. For the sake of simplicity I assume that my impression is correct.
The first stress test showed me that the maximum number of parallel users is somewhere around 200, and that under heavy load the chatbot does not break but slows down significantly.
Second Stress Test
Our first stress test had very wide parameters. I had two goals there: getting some rough numbers for the maximum parallel users and checking what happens under extreme load. With the second stress test I want to refine the number of maximum parallel users.
I set the test step duration to 2 seconds, meaning that every two seconds Botium Box will increase the number of parallel users. I set the minimum number of users to 150 and the maximum to 300. Based on these parameters, Botium Box will add roughly 14 new parallel users every 2 seconds until it ends up with 300 after twelve steps.
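Just to double-check that arithmetic with a few lines of plain Python (this is only illustrative and not part of Botium Box), ramping from 150 to 300 users in twelve steps means adding roughly 14 users per 2-second step:
min_users, max_users, steps = 150, 300, 12
increment = (max_users - min_users) / (steps - 1)  # ~13.6, so about 14 users per step
schedule = [round(min_users + i * increment) for i in range(steps)]
print(schedule)  # [150, 164, 177, ..., 300]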
We can set the ‘Required percentage of Successful Users’ to 95%. Our goal now is not to go beyond the limit of the chatbot, but to stay around normal usage. I already expect the test to fail because of too many timeouts. The question is when it will fail (at which number of parallel users).
After the test is finished I see that it failed as expected:
To be sure, I check the Failed Convos report. It contains the same timeout errors as in the first test, just as I expected. That’s good.
Next step is checking the charts on the stress test results:
In the first step (150 parallel users) there are no errors (first chart) and the average response time is 5s (second chart). The second step (150 + 14 parallel users) looks the same. And there I have already found the limit of my chatbot: I don’t want failures (responses above 10s) or average response times above 5s.
The third step is very interesting. The average response time keeps increasing even though the number of users does not change within a test step, so we would expect a flat response time as before. And checking the first chart, we see that there were some errors.
In the last step we can see the test teardown phase in chart 3. The emulated users did not start any new conversations, they just finished the current one.
Back to the third step. The chatbot seems to be overloaded, getting more requests than it is able to process. Another finding is that in the teardown phase the response time does not recover, probably because the response times chart is capped at 10s due to the timeout used by Botium Box. I suppose the response times do fall back, but they are still above 10s, so we don’t see it.
If overloading is not the cause, then maybe something is leaking there (a memory leak, for example). But I’m pretty sure it’s overloading, because the slowdown is too progressive to be anything else. I will prove that in my next article.
Summary
Based on the two tests I have determined the limits of my chatbot, and I know how it behaves under extreme load. Based on those KPIs I can decide on the next actions. Depending on my business requirements and the chatbot’s architecture there are many options: fixing existing bugs, doing some horizontal/vertical scaling, or applying the necessary code/architecture changes, to name a few.
Next
Stress testing alone can already be sufficient if your chatbot is based on an external service. In more complex cases, if you own the chatbot architecture at least partially, it is strongly recommended to dig deeper. In my next article I will cover (memory) leaks, API limits, DDoS attacks and performance monitoring.
-
QnA Chatbot using sentence similarity
Source: https://images.app.goo.gl/wR8nbH8Wdp9miFLp7
Introduction
Transformer models have taken the world of Natural Language Processing by storm. This post builds a simple QnA chatbot. We will first build a knowledge base from the training set, then use the distilbert-base-uncased transformer model to calculate a sentence vector for each sentence in the knowledge base. These sentence vectors capture the context of the sentence and, in turn, help to understand it.
We will store the sentence vectors in a MongoDB database. When the user sends a query, a vector representation of the query is calculated. Using cosine similarity, we find the sentence vector from the knowledge base that best matches the query vector. The intent of the matching sentence is recognized, and the output displayed to the user is based on the responses defined for that intent.
Distilbert-base-uncased
DistilBERT is a small, fast, cheap, and light Transformer model trained by distilling BERT base. It has 40% fewer parameters than BERT-base-uncased and runs 60% faster while preserving over 95% of BERT’s performance as measured on the GLUE language understanding benchmark.
Pytorch
PyTorch is an open-source machine learning library based on the Torch library, used for applications such as computer vision and natural language processing, primarily developed by Facebook’s AI Research lab (FAIR). It is free and open-source software.
PyTorch provides two high-level features:
1. Tensor computing (like NumPy) with strong acceleration via graphics processing units (GPU)
2. Automatic differentiation system
Installation of Packages
Transformers: This library brings together over 40 state-of-the-art pre-trained NLP models (BERT, GPT-2, Roberta, etc..)
# Install Transformers
!pip install transformers==3
Import Libraries
Importing the libraries that are required to perform operations on the dataset.
import random
import numpy as np
import pandas as pd
import torch
from sklearn.metrics.pairwise import cosine_similarity
Importing the DistilBertModel and DistilBertTokenizer to calculate the sentence vectors.
from transformers import DistilBertModel, DistilBertTokenizer
tokenizer = DistilBertTokenizer.from_pretrained('distilbert-base-uncased')
model = DistilBertModel.from_pretrained("distilbert-base-uncased")
Reading the dataset
df = pd.read_excel(r'D:\ML\ML_Rev\Datasets\KB.xlsx')
print(df.head())
sentences = df['Text']
l_intent = df['Intent']
Generate sentence embeddings for the sentences in the dataset
def get_embedding(sents):
    # initialize dictionary: stores tokenized sentences
    token = {'input_ids': [], 'attention_mask': []}
    for sentence in sents:
        # encode each sentence, append to dictionary
        new_token = tokenizer(sentence, max_length=128,
                              truncation=True, padding='max_length',
                              return_tensors='pt')
        token['input_ids'].append(new_token['input_ids'][0])
        token['attention_mask'].append(new_token['attention_mask'][0])
    # reformat list of tensors to single tensor
    # combining a list of tensors into a 2D single tensor
    token['input_ids'] = torch.stack(token['input_ids'])
    token['attention_mask'] = torch.stack(token['attention_mask'])
    output = model(**token)
    embeddings = output[0]
    print(embeddings.shape)
    att_mask = token['attention_mask']
    print(att_mask.shape)
    mask = att_mask.unsqueeze(-1).expand(embeddings.size()).float()
    print(mask.shape)
    mask_embeddings = embeddings * mask
    print(mask_embeddings.shape)
    summed = torch.sum(mask_embeddings, 1)
    print(summed.shape)
    print(summed)
    summed_mask = torch.clamp(mask.sum(1), min=1e-9)
    print(summed_mask.shape)
    mean_pooled = summed / summed_mask
    print(mean_pooled)
    # convert from PyTorch tensor to numpy array
    mean_pooled = mean_pooled.detach().numpy()
    return mean_pooled
Generate Embeddings
mean_pooled = get_embedding(sentences)
(The original post shows screenshots of the printed outputs here: the shapes of the embeddings, attention mask, mask, masked embeddings, summed and summed-mask tensors, and the mean-pooled values.)
Storing the results
result = []
for i in range(len(sentences)):
    d = {}
    d['Text'] = sentences[i]
    d['Intent'] = l_intent[i]
    d['Embedding'] = mean_pooled[i].tolist()
    result.append(d)
Building the knowledge base
Storing the sentences and vectors in Mongo DB
Importing the required library
import pymongo
Establish a connection to the DB
DEFAULT_CONNECTION_URL = “mongodb://localhost:27017/”
DB_NAME = "KB"
# Establish a connection with mongoDB
client = pymongo.MongoClient(DEFAULT_CONNECTION_URL)
Create a database and the collection
data_base = client[DB_NAME]
collection_name = 'QuestionAnswer'
collection = data_base[collection_name]
Insert the records into the database
records = collection.insert_many(result)
When the user enters the query:
1. Calculate the sentence vector of the query
2. Identify the most matching sentence from the knowledge base using cosine similarity
def calculate_similarity(query):
    mean_pooled = get_embedding([query])
    question = []
    similarity = []
    intent = []
    for rec in collection.find():
        question.append(rec['Text'])
        intent.append(rec['Intent'])
        cos_sim = cosine_similarity([mean_pooled[0]],
                                    [np.fromiter(rec['Embedding'], dtype=np.float32)])
        similarity.append(cos_sim)
    index = np.argmax(similarity)
    recognized_intent = intent[index]
    return recognized_intent
Intents JSON file
Here we set up an intents JSON file that defines the intentions of the chatbot user.
For example:
A user may wish to know the name of our chatbot; therefore, we have created an intent called name.
A user may wish to know the age of our chatbot; therefore, we have created an intent called age.
In this chatbot, we have used 5 intents: name, age, date, greeting, and goodbye. When the user enters any input, the intent will be recognized by the bot.
Within this intents JSON file, alongside each intent’s tag, there are responses. For our chatbot, once the intent is recognized, the response will be randomly selected from the static set of responses associated with that intent.
# used a dictionary to represent an intents JSON
data = {"intents": [
{"tag": "greeting",
"responses": ["Howdy Partner!", "Hello", "How are you doing?", "Greetings!", "How do you do?"],
},
{"tag": "age",
"responses": ["I am 24 years old", "I was born in 1996", "My birthday is July 3rd and I was born in 1996", "03/07/1996"]
},
{"tag": "date",
"responses": ["I am available all week", "I don't have any plans", "I am not busy"]
},
{"tag": "name",
"responses": ["My name is Kippi", "I'm Kippi", "Kippi"]
},
{"tag": "goodbye",
"responses": ["It was nice speaking to you", "See you later", "Speak soon!"]
}
]}
Generate the response to the user’s query
def get_response(intent):
    for i in data['intents']:
        if i["tag"] == intent:
            result = random.choice(i["responses"])
            break
    return result
Now it’s time to test our chatbot
recognized_intent = calculate_similarity('Hello my friend')
print(f"Intent: {recognized_intent}")
print(f"Response: {get_response(recognized_intent)}")
recognized_intent = calculate_similarity('What do people call you')
print(f"Intent: {recognized_intent}")
print(f"Response: {get_response(recognized_intent)}")
recognized_intent = calculate_similarity('How old are you')
print(f"Intent: {recognized_intent}")
print(f"Response: {get_response(recognized_intent)}")
recognized_intent = calculate_similarity('what are your weekend plans')
print(f"Intent: {recognized_intent}")
print(f"Response: {get_response(recognized_intent)}")
recognized_intent = calculate_similarity('Catch you later')
print(f"Intent: {recognized_intent}")
print(f"Response: {get_response(recognized_intent)}")
-
How does IFTTT integrate with Google Assistant to read voice commands without an explicit invocation phrase?
In IFTTT you can create an applet that connects to your Google account and gets invoked when you say certain phrases directly to the Assistant. I’ve worked on Dialogflow chatbots with some complex interactions and fulfilment services, but it was always for in-house projects, so I never really got to a point where my bots were being used in a B2C manner. So for these bots you’d have to say something like “Talk to xyz” to invoke your agent, and then you could continue talking and interacting with the agent.
I’d like to recreate what IFTTT does with Google Assistant, so that I can have users sign up once and then use my chatbot directly from their Assistant device, without the need to explicitly trigger the agent. Can someone please tell me how this works, or point me towards where I need to research this?
Thanks for reading!
submitted by /u/dudes_indian
-
Conversational Chatbot using Transformers and Streamlit
Source: https://simplify360.com/conversational-ai/
Artificial Intelligence is rapidly getting into the workflow of many businesses across various industries. Due to the advancements in Natural Language Processing (NLP), Natural Language Understanding (NLU), and Deep Learning (DL), we are now able to develop technologies capable of imitating human-like interactions, recognizing both speech and text.
In this article, we are going to build a conversational chatbot app using a Transformer model (microsoft/DialoGPT-medium) and Streamlit.
Transformer Model
A State-of-the-Art Large-scale Pretrained Response generation model (DialoGPT)
DialoGPT is a SOTA large-scale pretrained dialogue response generation model for multi-turn conversations. The human evaluation results indicate that the responses generated by DialoGPT are comparable to human response quality under a single-turn conversation Turing test. The model is trained on 147M multi-turn dialogues from Reddit discussion threads.
Streamlit
Streamlit is an open-source Python library
Streamlit makes it easy to create and share beautiful, custom web apps for machine learning and data science. In just a few minutes we can build and deploy powerful data apps.
Installation of Packages
Transformers: This library brings together over 40 state-of-the-art pre-trained NLP models (BERT, GPT-2, Roberta, etc..)
Torch: Python-based scientific computing package
Streamlit: To create a custom web app.
# Install Transformers
!pip install transformers
!pip install torch
!pip install streamlit
Import Libraries
Importing the libraries that are required to perform operations on the dataset.
import streamlit as st
import torch
import transformers
from transformers import AutoModelForCausalLM, AutoTokenizer
Load the tokenizer and model
Streamlit’s cache stores the tokenizer and the model. This avoids reloading the tokenizer and the model on every interaction and thus improves performance.
@st.cache(hash_funcs={transformers.models.gpt2.tokenization_gpt2_fast.GPT2TokenizerFast: hash}, suppress_st_warning=True)
def load_data():
    tokenizer = AutoTokenizer.from_pretrained("microsoft/DialoGPT-medium")
    model = AutoModelForCausalLM.from_pretrained("microsoft/DialoGPT-medium")
    return tokenizer, model
tokenizer, model = load_data()
Set up the streamlit
In the session state of streamlit, we are storing the below items:
chat_history_ids: Stores the conversations made by the user in that session.
count: Count of conversations. It was observed that the model was not giving good results after 5 sequential conversations. So we use this counter to clear the chat history.
old_response: Stores the previous response by the model. Sometimes the model generates the same response as the previous, using this variable we can track such duplicate responses and we can regenerate the model response.
st.write("Welcome to the Chatbot. I am still learning, please be patient")
input = st.text_input('User:')
if 'count' not in st.session_state or st.session_state.count == 6:
    st.session_state.count = 0
    st.session_state.chat_history_ids = None
    st.session_state.old_response = ''
else:
    st.session_state.count += 1
Tokenizing the user input and returning the tensor output
new_user_input_ids = tokenizer.encode(input + tokenizer.eos_token, return_tensors='pt')
Appending the user input ids to the chat history ids
bot_input_ids = torch.cat([st.session_state.chat_history_ids, new_user_input_ids], dim=-1) if st.session_state.count > 1 else new_user_input_ids
Generating a response while limiting the total chat history to 5000 tokens
st.session_state.chat_history_ids = model.generate(bot_input_ids, max_length=5000, pad_token_id=tokenizer.eos_token_id)
Decoding the response
response = tokenizer.decode(st.session_state.chat_history_ids[:, bot_input_ids.shape[-1]:][0], skip_special_tokens=True)
Regenerating the response if the old response from the model is the same as the current response.
if st.session_state.old_response == response:
    bot_input_ids = new_user_input_ids
    st.session_state.chat_history_ids = model.generate(bot_input_ids, max_length=5000, pad_token_id=tokenizer.eos_token_id)
    response = tokenizer.decode(st.session_state.chat_history_ids[:, bot_input_ids.shape[-1]:][0], skip_special_tokens=True)
Displaying the response on the UI
st.write(f"Chatbot: {response}")
Updating the old_response variable
st.session_state.old_response = response
The app can be easily run in the local environment
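Assuming the code above is saved in a file such as app.py (the filename is my assumption; it is not given in the original post), the app can be started locally with Streamlit’s command-line interface:
streamlit run app.py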
The conversational chatbot is up and running
-
Supercharging Contact Center Agents with Virtual Agents
The digital era has always amazed us with new, innovative ideas and inventions. Digital India is a dream of every Indian, and that dream has almost come true. This has been possible largely by applying artificial intelligence, and one example is AI virtual agents as contact center solutions, which are being introduced in every field.
Highlights:
- Firstly, what is a supercharging contact center?
- What are virtual agents?
- Working of virtual agents
- How will supercharging contact centers with virtual agents be beneficial?
- Conclusion
Virtual assistant call center services have played an essential role in customer servicing by assisting clients across various channels, including email, web chat, social media, etc., and have taken intelligent virtual assistance to a new level.
Having said that, contact centers have always been run by human beings, and there are unforeseen mistakes and risks which no company can afford. In this competitive world, we expect every piece of work to be done perfectly and at a lower cost.
This has become possible through one medium: virtual agents. Are you wondering how they work and what their benefits are? Then let us not wait any further. Let’s get started.
Firstly, what is a supercharging contact center?
As said earlier, supercharging contact centers, also called e-contact centers, are one-point contact hubs where all customer interaction channels are managed with the help of CRM (Customer Relationship Management). Basically, they include one or more call centers integrated with a CRM.
As the business grows, contact center solutions become quite important for achieving the utmost customer satisfaction. A contact center solution that helps users with their queries and other product-related issues is essential for a company’s growth, and it helps maintain customer retention over a longer period.
What are virtual agents?
Virtual agents are programs that follow certain rules and are backed by conversational AI in order to assist with customer service through various mediums. These virtual agents can help us through chatbots, emails, voice bots, or even interactive voice response systems.
In simple words, virtual agents can also be called digital assistants, which are mainly helpful for huge organizations dealing with enormous numbers of customers or internal employees. Unlike simpler agents such as chatbots, virtual agents can communicate through any medium available.
Working of virtual agents
Basically, virtual agents, or virtual assistant call center services, are pre-programmed or scripted products built to understand human language with ease. Upon a human’s request, virtual agents receive the language and convert it to machine language with the help of ASR (automatic speech recognition). Virtual agents then identify keywords and search for solutions.
Later, the extracted solution is converted back into human-understandable language with the help of dialog conversation. Thus, virtual agents can be effective and efficient.
How will supercharging contact centers with virtual agents be beneficial?
Virtual agents can provide numerous benefits to AI-powered contact centers. Below are the main benefits to focus on:
1. Improves customer experience
Digital India has made us more comfortable with technology than before. Hence, no one wants to wait in a queue to be answered when they can resolve issues on their own with a little help available. Virtual agents, with their unique approach, help customers reach the needed solutions in a short span of time.
Even though virtual agents provide minimal, need-based solutions, they are much more effective and useful for solving our problems compared to other methods.
2. Available round the clock
We cannot expect human beings to work 24/7, as there are physical and mental limits, but with virtual agents this is possible seven days a week. In this busy world, users do not have time to log in to companies’ websites to search for solutions. With AI virtual agents, nothing stops us from assisting our customers and achieving the utmost satisfaction.
3. Efficient data gathering
Every time a customer interacts with a virtual agent, the information shared is different from others. The virtual agent saves this information in the backend of the system, where it is later used for fetching user data, upgrading or updating the user system, better decision-making, and so on.
From all the questions and issues handled by virtual agents, managers or superiors can check the patterns or frequently asked questions so that solutions can be found easily. It also helps to find out exactly where an issue lies, at which level customers are facing the difficulty, and at which location.
4. Benefits to human agents
During festive seasons, offers or special occasions there are high volumes of calls or inquiries, and at this rate of calls there is a lot of pressure, which results in not every customer being answered efficiently. Thus, virtual agents are used to interact effectively. Here, we have to note that virtual agents are not here to replace contact center agents but to assist them.
Hence, a virtual AI assistant for contact center agents can answer the commonly asked questions that customers find difficult to figure out, while human agents take on the more challenging tasks, which together results in efficient customer service.
5. Advanced AI
Contact center virtual agents are a real boon to contact centers, as the AI is built with the capability to upgrade or correct itself on its own. With every customer interaction, the virtual agent service learns more and more. This takes the weight off the developer of having to update virtual agents frequently.
With every interaction with users, virtual agents show that they have the potential to satisfy customers’ needs with ease.
6. Enhanced lead generation
As these intelligent virtual agents can gather any amount of information in a fraction of a second and store it, they are very suitable for interacting with clients. An intelligent virtual assistant understands the customer’s needs and can assist them at multiple levels and across a wider area compared to traditional contact centers.
Conclusion
Considering all the above points, a contact center virtual agent can perform better, achieve the utmost user satisfaction and maintain high retention. This is the most effective and easiest way to resolve customer issues at very little cost.
-
How to Collect User Data in Messenger and Export It to Google Sheets with the ManyChat Bot — EmpathyBots
Are you planning to generate leads for your business with Messenger?
Or, just growing a subscriber list of your newsletter?
Whatever it is, you just want to collect user data like name, email, contact number, etc.
And you know what, it is very easy to collect user data in Messenger with the ManyChat bot.
Not just that. In this tutorial, I’m going to show you how to collect user data in Messenger and export it to Google Sheets with the ManyChat bot so that you can easily access it later.
Source: EmpathyBots
Before starting out, if you are not familiar with technical terms in chatbot development, then please refer to my guide on technical chatbot terminology, because in this tutorial you need to be aware of that technical jargon.
Ok, I’m going to divide this tutorial into 2 parts: first we’ll see how to collect user data, and then how to export it to Google Sheets.
So, let’s dive in!
How to Collect User Data in Messenger with ManyChat Bot
Before starting, I hope you have a ManyChat account, if not, then go create one.
Ok! For this tutorial, we will collect user data such as first name, last name, email address, gender, and user ID.
If you are wondering what the user ID is, it is nothing but a unique ID given to each user by Facebook. I will tell you why we need it in a while.
Just follow the steps below to collect user data in Messenger with ManyChat.
Step 1:
Go to the Automation tab > New Flow > Start From Scratch.
Step 2:
Now you will see a starting step connected to a blank Messenger content block.
Click anywhere on the Messenger content block. A new window will open where you will see the Add Text section; just delete/cancel it.
Source: EmpathyBots
Step 3:
Scroll a little and you’ll see a User Input option, select it.
Source: EmpathyBots
User Input is a feature to accept user inputs in multiple formats such as text, number, email, phone, etc.
Step 4:
Write the text you want in the text box or simply paste this one,
“Please confirm your email ID or enter an alternate one”
Then, select the reply type to Email.
Source: EmpathyBots
Now you can see two toggles: Save Input to System Field and Set Email Opt-in.
Turn off the first one because we are going to create a custom field to store email addresses.
Select your custom field, if you haven’t created it, you can create one by just typing the name you want and selecting New User Field.
Source: EmpathyBots
I have already created one, so I will just select that.
If you scroll down a little, you will see an option to add the Skip button, I’m not going to add it, you can if you want.
Woohoo! You have just created your first user data field.
What it does is send a message and show a user’s email address as a quick reply button so that they can quickly enter an email address or type an alternative one if they want.
You need not ask users for their user ID, first name, and last name because you can extract them from their profile directly. I will show you how in the Google Sheets section.
Step 5:
Repeat the same process to collect the gender of a user.
Add the following text into a textbox,
“Please select your gender”
This time select the reply type as Multiple Choice because we’re going to give users the ability to select from the options.
Then, check the box asking to allow free text input as well, select your custom field, and add the options to select beneath a textbox.
Source: EmpathyBots
That’s set! We’ve performed all the steps to collect user data, and now it’s time to export it to Google Sheets.
So, let’s do it in the next section.
How to Export User Data to Google Sheets with ManyChat Bot
Step 1:
Before exporting user data to Google Sheets, you need to set up two things.
First, create a Google sheet like this:
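Based on the columns used later in this tutorial, the header row of that sheet would contain something like the following (the exact order is my assumption, since the original shows it only in a screenshot):
User ID | First Name | Last Name | Email Address | Gender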
Source: EmpathyBots
And second, integrate Google Sheets with your ManyChat account by going into Settings > Integrations under the Extensions section.
For doing so, you just need to connect your Google account with ManyChat. Make sure it is the same account on which you have created a Google sheet.
Source: EmpathyBots
Now, you’re all set to export user data to Google Sheets with the ManyChat bot.
Step 2:
Jump over to the same flow you created to collect user data, click on the content block, and then on the first textbox.
There you can see an option called Perform Actions, which allows you to create an action to perform when a particular input is received from the user.
Source: EmpathyBots
Click on it, then on an action block > Actions > Google Sheets Action.
Then, click on Select Action under Google Sheets Action > Insert Row > and select your sheet.
Now, as I told you before, you can extract the user ID, first name, and last name directly from the user’s profile. Just insert the following system fields in ManyChat data in front of the respective column titles,
User Id
First Name
Last Name
And then, insert a custom field you have created to collect email addresses in front of the Email Address column title,
Email Address (my custom field)
Source: EmpathyBots
Step 3:
Repeat the same process to export Gender’s data.
But this time select the Google Sheets Action as Update Row; otherwise, it will add this data to a new row in the sheet.
And to update a row, you need a lookup column and a lookup value to identify the user.
This is where the user ID comes into the picture; it is the most accurate parameter to identify a user.
Just add the Lookup Column’s name as it appears in the sheet, which is “User ID” in our case, and the Lookup Value as the system field “User Id”.
And insert a custom field you have created to collect gender in front of the Gender column title,
Gender (my custom field)
Source: EmpathyBots
Here’s how this feature works on my chatbot:
Now, go and test yours by clicking on the Preview button.
Suggested Guides:
- How to Build a Simple FAQ Chatbot with ManyChat (No-Code) in 202
- 11 Chatbot Best Practices You Should Follow to Create a Powerful Chatbot
Let’s wrap this tutorial!
Wrapping Up
So, we have just seen how to collect user data in Messenger and export it to google sheets with the ManyChat bot.
This is a very important and useful feature especially when it comes to lead generation.
Just go and implement it!
Liked this story? Consider following me to read more stories like this.
-
Coding memes
submitted by /u/ADSPLTech7512
-
Chatbots Market Size | Global & Regional Trends Analysis, 2026
The Global Chatbots Market is projected to witness around 24.10% CAGR during the forecast period, i.e., 2021-26. The growth of the market is driven primarily by the mounting adoption of Chatbots and the integration of Artificial Intelligence (AI) in them to enhance interaction with the customers & solve their queries instantly, without the need for human intervention in noncritical situations.
Get the details here: https://www.marknteladvisors.com/research-library/global-chatbot-market.html
submitted by /u/MarknTeladvisors