Month: October 2021
-
Why Institutions Should Work Towards Universal Design
In our previous article, we talked about what universal design for learning (UDL) is and explained its key principles.
In this article, we’ll tell you why lecturers should start using universal design for learning ASAP. If you’re already interested in incorporating UDL into your classroom, we also explain how you can do that.
-
Can I use BERT for intent classification?
Hello, I have been trying to implement a chatbot in Microsoft Azure. My first step is the NLU module, where there will be intent classification. Is it possible to do this using BERT? Thank you.
submitted by /u/SoftPawpaw
-
Chatbot that broadcasts messages from fb posts
Hello, I was wondering if anyone could help me with this. I need to create a free prototype of a chatbot that reads posts from a Facebook page and sends them to WhatsApp contacts. How do I proceed? Can someone point me to a resource? Thanks so much in advance
submitted by /u/whatinthepinkfloyd
-
testing chatbot
I’m developing a chatbot. Where can I find help with testing it? I would like to share it with some communities, people who like to play with chatbots, etc.
I don’t want to open it to everyone because I don’t have the infrastructure to serve high traffic, just a limited number of people.
submitted by /u/ImpressionHefty7255
-
Cross platform chatbot?
Hey all,
Not sure if this is the right place, but here goes. We’d like to create a cross-platform, automatically updating wiki and bot. We’ve got a pretty big budget for this.
Think of the following scenarios:
Someone shares a presentation related to a topic in a public Microsoft Teams channel? Throw it in the wiki (collaboration platform)
Someone’s account manager for a client? Throw it in the wiki (CRM Platform)
Someone’s available to work in 2 weeks? Throw it in the wiki (ERP Platform)
The problem right now is that I have no idea where to begin. Anyone got any input? Could be platforms, AI, machine learning, anything really. The more specific the better. All I’ve got is ‘start with the data people will actually need / that should be easily available’.
submitted by /u/EmilKay
-
Mercury — A Chat-bot for Food Order Processing using ALBERT & CRF
Unless you have been out of touch with the Deep Learning world, chances are that you have heard about BERT, ALBERT and CRF (Conditional Random Field).
Mercury, named after the Roman god (the counterpart of the Greek god Hermes) who served as the messenger of the gods, is a chatbot service that can be integrated with food-delivery brands such as Swiggy or Zomato, where a user can simply type in an order and send it as a text.
Mercury can then extract the essential information from the order and place the order for the User accordingly.
Here is a list of technologies involved in Mercury:
=> ALBERT (which looks like BERT++)
=> CRFs (Conditional Random Field)
=> gRPC (Google Remote Procedure Calls)
=> JointALBERT Slot-Filling & Intent Classification
=> Flutter (Front-End)
Since this article is about Mercury, I will only provide a brief summary and some useful links for a more in-depth understanding of each concept.
What is BERT?
Let’s take a look at these sentences, where the same word has different meanings:
- I was late to work because I left my phone at home and had to go back.
- Go straight for a mile and then take a left turn.
- Some left-wing parties are shifting towards centrist ideas.
How do you differentiate between each meaning of the word left?
These differences are almost certainly obvious to you, but what about a machine? Can it spot the differences? What understanding of language do you have that a machine does not?
Or rather, what understanding of language do you have that machines did not have earlier?
The answer: context.
Your brain automatically filters out the incorrect meaning of words depending on the other words in the sentence, i.e. depending on the context.
But how does a machine do it?
This is where BERT, a language model which is bidirectionally trained (this is also its key technical innovation), comes into the picture.
This means that machines can now have a deeper sense of language by deriving contextual meaning of each word.
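To make this concrete, here is a minimal sketch of contextual embeddings in action. It assumes the Hugging Face transformers library (not mentioned in the original post) and shows that BERT gives the word “left” a different vector depending on its sentence:

```python
# The same surface word receives different contextual vectors.
import torch
from transformers import BertModel, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertModel.from_pretrained("bert-base-uncased")

def embedding_of(sentence: str, word: str) -> torch.Tensor:
    """Return the contextual embedding of `word` inside `sentence`."""
    inputs = tokenizer(sentence, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**inputs).last_hidden_state[0]  # (seq_len, 768)
    position = inputs["input_ids"][0].tolist().index(
        tokenizer.convert_tokens_to_ids(word))
    return hidden[position]

a = embedding_of("I left my phone at home and had to go back.", "left")
b = embedding_of("Go straight for a mile and then take a left turn.", "left")
# Different contexts push the similarity well below 1.0:
print(torch.nn.functional.cosine_similarity(a, b, dim=0))
```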
What is ALBERT?
ALBERT was proposed in 2019 with the goal of improving the training and results of the BERT architecture through several techniques:
- Parameter sharing (a drop in the number of parameters by over 80%)
- Inter-sentence coherence loss
- Factorization of the embedding matrix
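The parameter drop is easy to check for yourself. A minimal sketch, again assuming the Hugging Face transformers library, that compares the base checkpoints of both models:

```python
# ALBERT's parameter sharing and factorized embeddings shrink the model
# from roughly 110M parameters (BERT-base) to roughly 12M (ALBERT-base).
from transformers import AlbertModel, BertModel

bert = BertModel.from_pretrained("bert-base-uncased")
albert = AlbertModel.from_pretrained("albert-base-v2")

def param_count(model) -> int:
    return sum(p.numel() for p in model.parameters())

print(f"BERT:   {param_count(bert):,}")
print(f"ALBERT: {param_count(albert):,}")
```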
Results of ALBERT on NLP benchmarks:
[Figure: ALBERT vs BERT; ALBERT achieves SOTA results with 20% of the parameters]
What are Conditional Random Fields?
A CRF assigns each input a label from a list of possible labels, taking neighbouring labels into account.
I will go into a little more detail shortly, but for now, just understand that CRFs are used for predicting label sequences, where each prediction depends on the previous labels in the sentence.
They are often used in NLP in various tasks such as Part-Of-Speech Tagging and Named-Entity Recognition since CRFs excel in modelling sequential data such as words in a sentence.
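As a hedged sketch of what that looks like in code (using the third-party pytorch-crf package, which the article itself does not name), a CRF layer scores whole tag sequences during training and decodes the most probable one with the Viterbi algorithm:

```python
# CRF sequence tagging. In Mercury the emission scores would come from an
# encoder such as ALBERT; random scores are used here only to show the API.
import torch
from torchcrf import CRF  # pip install pytorch-crf

num_tags = 5  # e.g. O, B-food_name, I-food_name, B-qty, B-restaurant_name
crf = CRF(num_tags, batch_first=True)

emissions = torch.randn(1, 7, num_tags)         # (batch, seq_len, num_tags)
gold_tags = torch.randint(0, num_tags, (1, 7))  # a labelled training sequence

loss = -crf(emissions, gold_tags)   # negative log-likelihood to minimise
best_path = crf.decode(emissions)   # Viterbi: most probable tag sequence
print(loss.item(), best_path)
```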
What is gRPC?
It is an open-source, high-performance Remote Procedure Call framework.
Its main advantage is that the client and server can exchange multiple messages over a single TCP connection via the gRPC Bidirectional Streaming API.
Mercury uses gRPC bidirectional streaming API for implementing Speech-To-Text functionality by using the Google Speech-To-Text API.
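As a rough illustration of that call shape, here is a hedged Python sketch of gRPC bidirectional streaming. The service and message names are hypothetical stand-ins, not Mercury’s actual .proto definitions; the point is that the client sends an iterator of requests and reads an iterator of responses over one connection:

```python
import grpc
import mercury_pb2, mercury_pb2_grpc  # hypothetical modules generated by protoc

def request_stream():
    """Yield audio chunks as they become available (e.g. from a microphone)."""
    for chunk in (b"...", b"..."):
        yield mercury_pb2.AudioRequest(data=chunk)

channel = grpc.insecure_channel("localhost:50051")
stub = mercury_pb2_grpc.SpeechStub(channel)

# Both directions stay open at once: transcripts arrive while audio streams.
for response in stub.StreamingRecognize(request_stream()):
    print(response.transcript)
```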
Mercury — What’s under the Hood?
What does Mercury do before placing the order for the User?
How does Mercury know that the text it has received is indeed a request for placing an order?
Let’s take a look at this sentence:
“I would like to have 1 non veg Taco, 3 veg Pizzas and 3 cold drinks from Domino’s.”
How does Mercury go from this raw sentence to a structured order with slots and an intent?
This is where Joint-ALBERT (Slot-Filling & Intent-Classification) comes into the picture.
Sneak Peek under the Hood of Mercury’s Model
Training:
We come up with some desired labels for our model.
Intent Label: <OrderFood>
Slot Labels: <restaurant_name>, <food_name>, <food_type>, <qty>, <O> (<O> means that the specific word does not carry much value in the sentence and can be masked or ignored).
We create hundreds of sample sentences, with labels associated with each word.
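For illustration, one such labelled sample might look like the following (a hypothetical format; the B-/I- prefixes are explained a little further down):

```python
# One training sample: a token sequence, a slot label per token, one intent.
sample = {
    "tokens": ["I", "would", "like", "1", "non", "veg", "Taco",
               "from", "Domino's"],
    "slots":  ["O", "O", "O", "B-qty", "B-food_type", "I-food_type",
               "B-food_name", "O", "B-restaurant_name"],
    "intent": "OrderFood",
}
```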
ALBERT + Conditional Random Field (Joint-ALBERT):
We have already learnt that CRFs excel in modelling sequential data. So how does it help Mercury?
CRFs essentially help in mapping each word to its appropriate label.
For example:
It can map the number “1” to <qty> denoting quantity.
It can map the word “Domino’s” to <restaurant_name>.
Great! So if CRFs can do this, why do we even need ALBERT?
In our original sentence:
“I would like to have 1 non veg Taco, 3 veg Pizzas and 3 cold drinks from Domino’s.”
How does CRF know that the word “non” is a <B-food_type> and the word “veg” is <I-food_type> (B means beginning & I means continuation of B)?
How does CRF know that the word “non” is not the dictionary meaning “anti”?
As you probably already guessed, ALBERT provides CRF the contextual meaning of each word which helps CRF in classifying each word into the correct slot labels.
CRF does slot identification by scoring the possible label sequences for the sentence and picking the mapping with the highest probability.
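Putting the pieces together, here is a hedged sketch of the Joint-ALBERT wiring (one possible implementation, not Mercury’s actual code, and again assuming transformers and pytorch-crf): ALBERT produces contextual token vectors, a linear layer turns them into per-tag emission scores for the CRF, and a second head scores the intents. In the JointBERT paper the intent is read off the [CLS] token, as below; the CRF-driven intent prediction described next is a variation on this.

```python
# Joint slot-filling + intent-classification model (sketch).
import torch.nn as nn
from torchcrf import CRF
from transformers import AlbertModel

class JointAlbert(nn.Module):
    def __init__(self, num_slots: int, num_intents: int):
        super().__init__()
        self.encoder = AlbertModel.from_pretrained("albert-base-v2")
        hidden = self.encoder.config.hidden_size
        self.slot_head = nn.Linear(hidden, num_slots)      # per-token emissions
        self.intent_head = nn.Linear(hidden, num_intents)  # sentence-level scores
        self.crf = CRF(num_slots, batch_first=True)

    def forward(self, input_ids, attention_mask):
        out = self.encoder(input_ids=input_ids, attention_mask=attention_mask)
        emissions = self.slot_head(out.last_hidden_state)
        slots = self.crf.decode(emissions)                       # best tag path
        intents = self.intent_head(out.last_hidden_state[:, 0])  # [CLS] vector
        return slots, intents.argmax(dim=-1)
```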
[Figure: the bold line represents the most probable mapping]
Finally, how is the intent of the sentence predicted?
CRF does this part too, by figuring out that a specific sequence of slot labels leads to a specific intent.
For example:
If the slots <food_type>, <food_name> and <restaurant_name> are found in a sentence, then the sentence probably has the intent <OrderFood>.
[Figure: intent prediction based on slot labels]
Flutter Front-End
Mercury has a simple and elegant front-end for the User.
Some Useful Links:
You can watch a quick 3-minute Demo of Mercury on My Mercury Website.
You can also check out my other projects on My Main Website.
BERT Paper: Here is the arxiv BERT Research Paper.
ALBERT Paper: Here is the arxiv ALBERT Research Paper.
CRF Paper: Here is the arxiv CRF + LSTM Research Paper for Sequence Tagging.
JointBERT: Here is the arxiv JointBERT (Intent Classification and Slot Filling) Research Paper.
gRPC Introduction: This will get you started with gRPC Basics.
adios, amigos!
-
To what extent does a virtual Assistant exhibit Cognition?
Humans have long been fascinated by the idea of machines thinking and working like humans, at least intellectually. Though there might be potentially devastating consequences to a machine becoming as intelligent as humans, we simply cannot deny that such machines could be of great help and improve our daily lives.
Highlights:
- What is Cognition?
- How can we test the Cognition of Software?
- Cognitive Capabilities of a Cognitive Virtual Assistant
- Memory Retention
- Understanding Spelling Mistakes and Paraphrases
- Understanding long-form Sentences
Cognitive AI has been powering virtual assistant services throughout their existence. The capabilities of virtual assistants are increasing year by year, and with various virtual personal assistants showing very advanced intelligence, it is natural to wonder what a virtual assistant AI is and how far the “cognition” they exhibit really goes.
In this article, let us try to understand exactly that. Before we do, however, we must understand what cognition means and what it means for software to exhibit cognition.
What is Cognition?
Basically, cognition is an entity’s state of awareness of its surroundings and its ability to evaluate them and give an intelligent response to stimuli. Defined more formally, it is a mental action or process by which a being acquires understanding through thinking, experience, and the senses, which it uses to communicate with the world.
Other than being able to “sense” the world, the cognitive being must have the intelligence to understand and react to it and also have a working memory.
Humans have intuitive cognition, developed through many means. A machine exhibiting cognition, however, means that it too is “aware” of its surroundings and gives intelligent responses when we interact with it.
How can we test the Cognition of Software?
In humans, cognitive capabilities are tested by having them perform various activities that require a certain level of cognition. A similar approach can be used to test and assess the level of cognition of software.
This is essentially the core idea behind the Turing test, a thought experiment that goes like this: a computer is hidden behind the door of a room, and a person communicates with it by exchanging written notes. If the computer manages to answer all queries satisfactorily without the person suspecting they are talking to a machine, the computer wins!
Artificial intelligence is generally divided into weak AI and strong AI. The former is artificial intelligence that works only on a narrow subset of problems and requires a lot of training material to tackle a new set of problems; the latter is a general AI that can solve almost any problem given to it. Strong AI has not been fully developed yet, and much of that development is still ongoing.
So this is the first limitation we see with today’s AI: it can exhibit only finite cognition, in the limited fields it is trained on. There is another general problem associated with the present state of AI: its black-box nature.
That is, most machine learning models used in practice are black boxes: we cannot easily see how a particular decision is made, so we cannot examine the “process” and conclude whether the machine is “thinking” and going through the “right steps” of intelligence in giving an answer.
So, what can we do? One easy way is to look at various examples in which virtual assistants give intelligent responses and analyze how intelligently they handled the situation.
Cognitive Capabilities of a Cognitive Virtual Assistant
Here, let us see various examples of how an intelligent virtual assistant would respond to a query and analyze it from various viewpoints of cognition.
Memory Retention
Starting with memory, we expect a virtual assistant to have an exceptional memory, able to retrieve the appropriate information whenever there is a particular use for it.
Suppose you are talking to a virtual assistant about going somewhere for a vacation, and that a while ago you told it you like to visit a particular place in the winter season. Now, when you ask the assistant to suggest a place to visit, it can check that the current season is winter and, from its understanding of your interests, suggest places that really suit you.
Beyond this long-term memory retention, virtual assistants can also retain the current context across previous messages and thus give intelligent answers based on the conversation so far.
Understanding Spelling Mistakes and Paraphrases
The ability to understand spelling mistakes and paraphrases signifies that a virtual assistant is cognizant enough not to take for granted that what you type is correct and contains your complete intent. Even if you give an incorrect input or do not spell out your intent, the virtual assistant can extract the correct intent and proceed with the next actions.
For example, suppose you type “when my event is scheduled?”. The assistant might understand that you are asking when your event is scheduled. You also did not specify which event you were asking about, so the virtual assistant guesses the most appropriate event and provides a response based on that.
Understanding long-form Sentences
Understanding long-form sentences signifies that a virtual assistant is able to break sentences down and still understand the overall intent and meaning.
For example, if you write “Hi, I am John Doe. Currently, I have a mid-tier plan. I am not satisfied with it. Either solve my problem or get me a higher plan.”, the virtual assistant would understand that you are somewhat angry and tailor its response accordingly. It would first ask about the problem and, if the problem is not easily solvable, provide information about higher plans. If you are still not satisfied, your request, with all the appropriate information, would be forwarded to a support representative.
Conclusion
From the above analysis, we can conclude that virtual assistants, with the help of cognitive AI technologies, exhibit exceptionally good cognitive capabilities: they discern the intent of the user and also remember and understand the context.
So the limit of a virtual assistant’s cognitive capabilities cannot really be specified, except to say that it is bounded only by our imagination. The existing attributes and level of intelligence are certainly growing at a drastic pace.
-
Making chatbots reply smarter with context using Dialogflow Fulfillment
In the previous article, I wrote about why good chatbots need context instead of tree-based flows. The benefits of introducing context are that users can engage in a more natural dialogue with your chatbot, get direct replies and change information without restarting the conversation.
I’ll be using Dialogflow and Cloud Functions for Firebase to describe and explain the implementation. The ticket price inquiry example is based on the scenario described in my previous article, so take a look if you have not.
Concept
1. Instead of one intent with the required slot filling parameters, create that intent followed by one intent for each parameter. (See purple boxes above)
2. In those intents with a single slot filling parameter, set it optional.
3. Put all entities extracted from any intent into the conversation context programmatically. (See the blue box above)
4. Make a functional response for a group of related intents (see the orange box above), so that the chatbot replies based on the user’s intent plus the information (either mentioned explicitly or recalled from context), instead of the intent alone.
Let’s take a closer look at the code.
Intent mapping
Start by creating a map of intents. Let the agent (a webhook client) use the intent map to handle incoming messages.
Remember to create those intents in Dialogflow and turn on the webhook fulfilment.
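The article’s own snippets use the Node.js dialogflow-fulfillment library; as a language-neutral sketch of the same intent-map idea, here is equivalent dispatch logic against the raw Dialogflow v2 webhook JSON in Python with Flask. Intent names and handlers are illustrative, not the actual agent’s:

```python
from flask import Flask, jsonify, request

app = Flask(__name__)

def reply_ticket_price(body):
    ...  # the shared functional reply, sketched in the next section

INTENT_MAP = {
    "ticket.price":        reply_ticket_price,
    "ticket.site":         reply_ticket_price,  # the parameter-based intents
    "ticket.participants": reply_ticket_price,  # share one functional reply
    "ticket.citizenship":  reply_ticket_price,
}

@app.route("/webhook", methods=["POST"])
def webhook():
    body = request.get_json()
    intent = body["queryResult"]["intent"]["displayName"]
    return jsonify(INTENT_MAP[intent](body))
```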
Intent fulfilment
Next, for each intent that acts on the customer’s query (such as the ticket price inquiry in this case), the chatbot will reply based on the number of participants (children, adults and seniors), the site they wish to visit, and their citizenship.
- Extract both the slot filling values (parameters) and the context parameters found in the request body.
- If the customer has explicitly mentioned the site, the number of people and/or their citizenship, use that. If not, recall from the context in the current conversation session.
- Make a functional reply using those parameters, such as replyTicketPrice(agent, citizenship, participants, site).
- Finally, keep all (and new) parameters mentioned in the context:
agent.context.set({name, lifespan, parameters: { ...currentParams, ...newParams }})
This is helpful when the bot goes back to steps 1 and 2, preventing the user from repeating the details.
- In many cases, a chatbot is designed to fulfil a wide range of customer requests, so a good idea is to assign a topic name in the context to keep the current conversation relevant (be it a ticket price inquiry, membership registration or something else) in case the user informs the chatbot about a different site or a different number of visitors.
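Here is a hedged sketch of those four steps against the raw webhook JSON, continuing the Python translation from above (the context name and parameter keys are illustrative):

```python
def reply_ticket_price(body):
    query = body["queryResult"]

    # 1. Parameters the customer explicitly mentioned in this turn.
    mentioned = {k: v for k, v in query["parameters"].items() if v}

    # 2. Recall whatever the conversation context already holds.
    ctx_name = body["session"] + "/contexts/ticket-inquiry"
    stored = next((c.get("parameters", {})
                   for c in query.get("outputContexts", [])
                   if c["name"] == ctx_name), {})
    merged = {**stored, **mentioned}  # newly mentioned values win

    # 3. Build the functional reply from whatever is known so far
    #    (reply_text is sketched further down).
    text = reply_text(merged.get("citizenship"),
                      merged.get("participants"),
                      merged.get("site"))

    # 4. Write all (and new) parameters back for the next turn.
    return {
        "fulfillmentText": text,
        "outputContexts": [
            {"name": ctx_name, "lifespanCount": 5, "parameters": merged}
        ],
    }
```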
So how exactly should the bot reply? Since each of the three parameters may be known or unknown, there are 2^3 = 8 possible ways to reply, and the choice is up to your conversation design.
If the customer comes with a short and sweet message like “ticket prices”, the chatbot will ask for one of the required parameters. Or if the customer says “ticket prices to the cloud forest”, there’s an equal chance that the bot will ask for the number of participants or for the citizenship. All bot replies are in the form of questions that elicit a relevant answer from the user; if all parameters are already known, the bot tells the customer the ticket prices right away.
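One simple ordering of that logic might look like this; the bot may equally well ask for either missing parameter, so the order below is just one design choice (reply_text is the hypothetical helper used in the previous sketch):

```python
# Ask for the first missing parameter; if all three are known, quote prices.
def reply_text(citizenship, participants, site):
    if site is None:
        return "Which site would you like to visit?"
    if participants is None:
        return "How many children, adults and seniors are going?"
    if citizenship is None:
        return "Are you a local resident or a tourist?"
    return f"Ticket prices to the {site} for {participants} ({citizenship}): ..."
```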
What happens when the user says “2 adults, 1 child”, “cloud forest”, or “tourist”? The respective parameter-based intent is triggered, not the ticket price intent. In this situation, the chatbot will invoke the replyTicketPrice response based on whatever information is passed to it each time. There is also the possibility that the customer starts talking to the chatbot that they are “interested to visit” without a specific purpose like inquiring about the ticket prices, so the bot may ask “Which site?”.
The fulfilment of this parameter-based intent (site) shares the same design pattern as the other two (participants and citizenship).
Conclusion
Context is a great way to carry important details from intent to intent, especially if the customer changes information, interrupts the flow by going off-topic, or wants the bot to complete multiple requests.
You’ve probably looked up local weather info using digital assistants, and then asked “what about (this city) instead?” and still get the weather forecast. That’s context at work. Can you think of other use cases too? Are you thinking of re-designing your bot dialogues? Or do you have different ways to accomplish context using other bot frameworks?
Opinions expressed are solely my own and do not express the views or opinions of my employer. If you enjoyed this, subscribe to my updates or connect with me over LinkedIn.
-
HELP!!!
Can someone help me find a chatbot that has failed but is still online to test? (For research purposes.)
submitted by /u/OutrageousSession327
-
Lol kuki_ai
submitted by /u/jobless_introvert004