Facebook Messenger SDK stopped sending PSID in GetContext() call
Sorry if this is the wrong place for such an extremely targeted question, but I can’t find anyone complaining about this issue anywhere. I have a Facebook Messenger chatbot (a production and a dev version) and I use web extensions to launch UIs that are too complex to handle in free text. Within these web extensions I run the Facebook Messenger SDK (JavaScript) to identify the current user. Once initialised, I call the GetContext() method, which returns a context object. This previously returned a payload containing the user’s PSID (Page Scoped User ID) and a JWT that also contained the PSID. The web extension sends data to our API using this JWT for authorization. This has been working fine for 2+ years. A few days ago I noticed that the API calls from the web extensions are now failing, and after debugging I’ve found that the PSID is now an empty string in both the context object and the JWT (when decrypted), so our backend has no way to identify the user.
The bot is connected to the Page, and the Web Extension is treated as an App in Facebook; both are owned by the same FB Business account. The App has “Messenger” added as a Product.
Has something changed (silly question for FB I know)?
submitted by /u/fr4nklin_84
-
How US legal firms can and must compete with robo-lawyer services
Most US legal firms, like their European counterparts, are steeped in tradition. Even newer firms formed by eager law graduates have their education rooted in similar structures. As legal journals and business magazines impress on the need to modernize and digitize how we work, the external threats to legal firms are growing, but how do we address them?
Automated legal services such as chatbots and form creators are a threat to the legal profession, or they can be viewed as a challenge to be met. As was the case with iTunes and Spotify in music and Amazon in retail, some rivals adapted to face a new market reality, others folded, sold up or were driven into a particular niche, while more startups arrived to compete. Whatever the market, there are plenty of ways to survive and thrive as the momentum and disruption of automation builds, as the legal sector will find out very quickly.
In the legal profession, the adoption of digital technology for collaboration and efficiency, powered by cloud services, has been mixed. But as all markets look to a post-COVID future, there is fresh impetus to grasp the lessons of the crisis and adopt technology that makes legal operations more streamlined and efficient.
Lack of awareness is no defense: Deloitte explored the disruption issue back in 2017 with a report on “The case for disruptive technology in the legal profession”, highlighting the key issues of:
· The opportunity that technology creates for legal.
· The growing importance of big data and analytics in legal cases.
· The effects of technology on legal business models.
· Potential legal disrupters.
All of these remain valid today, but the disruption is now more visible, in every lawyer’s face and rising up the boardroom agenda of every firm with a large legal footprint.
The rise of disruptive legal services
Legal business evolution is driven by vertical-specific vendors iterating well-known IT products and cloud solutions, often packaged by legal IT specialists and sold to enterprises with extensive legal departments, and then down to smaller firms.
But the word “disruption” is the driving force behind more radical change. Many startups and “ideas people”, both from within and outside the legal profession, see opportunities to shake up the old order. They create new products and types of service that eliminate the high cost and slow-moving nature of most legal offerings and services.
Behind their ideas, new products are driven by the limitless power of the cloud to deliver services and scale marketing to enormous proportions. While most of them will fail to gain the much-coveted traction, those that succeed act as inspiration for more to try, while rapidly taking business from existing legal firms or providing them with the tools to compete.
The current poster child for disruptive legal tech is DoNotPay, a company founded by an English teenager, Joshua Browder, in 2015. His business started with an automated way to dispute parking tickets and expanded to the US, providing bots that help consumers with legal form filling, filing for airfare refunds, accessing legal services and much more. Others include Zegal (legal templates) and Lisa Robot Lawyer (NDAs and property contracts).
DoNotPay has blossomed into a consumer rights champion, offering virtual credit cards and student advice, and has started eating further up the legal food chain with an automated contract builder and other tools. It can even send these forms as faxes to services that are stuck in their ways.
DoNotPay does away with legal jargon and complexity and, more importantly, saves time. People who never knew they needed a lawyer are using DoNotPay, or the growing number of rivals serving local, national or regional markets, without ever having to find traditional representation. Digital-native generations will use these tools and never bother Googling for “lawyer near me.” And this is only the start, as automated real estate, bail bond and company creation services, business contracts, leases and other legal processes are consumed as instant services.
Getting your legal firm up to speed
Larger legal firms may find that the recent changes barely affect them, but the pace of change continues to increase, and the impact will be felt eventually. Many firms are wondering how they can meet this challenge. Some play to their strengths, using cash piles for acquisitions to corner a market or expand into new territory.
Others will follow the well-trodden path of digital legal services adoption, doing whatever their rivals do to keep pace through cloud-based practice management services, at a cost and with the usual upheaval of adopting new services. Bucking tradition, perhaps the best approach to the automated-services era is for firms to ask their domain experts how they can innovate to counteract or outpace those threatening to disrupt the legal landscape.
Where DoNotPay and its rivals may falter is that they are not law firms and the T&Cs state that “The information provided by DoNotPay along with the content on our website related to legal matters (“Legal Information”) is provided for your private use and does not constitute legal advice.”
A law firm can fill the breach with automated services that do provide legal advice or take the next steps that robo-firms currently do not. And creating these tools is simpler than you might think.
BRYTER does legal magic with no-code
BRYTER, which has been peppering the top 10 legal startup lists (Financial Times) recently, is currently setting up offices in New York after two successful years in Europe. Its no-code service automation product highlights how legal firms can build their own tools without relying on expensive and time-consuming IT rollouts.
Putting design tools in the hands of lawyers and legal professionals to build the applications they need, from chatbots (see BRYTER’s new “lawyer’s guide to chatbots” white paper) to form creators, tax checkers and other necessities, gets them live in days, not months.
They enable a company to prototype and trial applications quickly to take advantage of new legal market opportunities, and then scale digital legal services to meet demand. BRYTER is already used by leading legal firms for timely issues like privacy, COVID-19, GDPR, CCPA, repapering among other matters, all helping grow digital ideas within businesses.
BRYTER’s arrival in the US should make waves. These tools can save time or generate revenue for the firm directly, or be sold to clients as part of service packages, diversifying beyond the traditional billable hour.
BRYTER’s full-service offering provides the tools, expertise and experience to embed no-code tools as a core product within teams or legal practice groups. It took one person to build DoNotPay, and it could be one of your own lawyers, looking to innovate or deliver savings, who brings the next big idea to life and, with it, success to your practice.
How US legal firms can and must compete with robo-lawyer services was originally published in Chatbots Life on Medium, where people are continuing the conversation by highlighting and responding to this story.
-
Language Translation with Transformers in PyTorch
Mike Wang, John Inacay, and Wiley Wang (All authors contributed equally)
If you’ve been using online translation services, you may have noticed that translation quality has improved significantly in recent years. Since it was introduced in 2017, the Transformer deep learning model has rapidly replaced the recurrent neural network (RNN) as the architecture of choice for natural language processing (NLP) tasks; models like OpenAI’s Generative Pre-trained Transformer (GPT) and Google’s Bidirectional Encoder Representations from Transformers (BERT) are prominent examples. With the Transformer’s parallelization ability and the utilization of modern computing power, these models are large and fast evolving, and generative language models frequently draw media attention for their capabilities. If you’re like us, relatively new to NLP but comfortable with machine learning fundamentals, this tutorial may help you kick-start your understanding of Transformers with a real-life example: building an end-to-end German-to-English translator.
In creating this tutorial, we based our work on two resources: the PyTorch RNN-based language translator tutorial and a translator implementation by Andrew Peng. Using an openly available dataset, we’ll demonstrate our Colab implementation of German-to-English translation with PyTorch and the Transformer model.
Architecture Details
Figure 1: The sequence of data flowing all the way from input to output (Image by Authors)
To start with, let’s talk about how data flows through the translation process; it follows the diagram shown above. An input sequence is converted to a tensor of token indices, embedded, and passed through the Transformer; the Transformer’s outputs then go through an unpictured “de-embedding” step that converts embeddings into the final output sequence. Note that during inference we obtain words one by one from repeated forward passes, rather than receiving a translation of the full text all at once from a single inference.
Input Sequence and Embedding Module
At the start, we have our input sequence. For example, we start with the German sentence “Zwei junge personen fahren mit dem schlitten einen hügel hinunter.” The ground-truth English translation is “Two young people are going down a hill on a slide.” Below, we show how the Transformer is used, with some insight into its inner workings. The model expects both the source German sentence and whatever portion of the translation has been inferred so far; the translation process thus forms a feedback loop that predicts the next word in the translation.
Figure 2: An example showing how a Transformer translates from German to English (Image by Authors)
Dataset
For the task of translation, we use the German-English `Multi30k` dataset from `torchtext`. This dataset is small enough to be trained in a short period of time, yet large enough to show reasonable language relations. It consists of 30k paired German and English sentences. To improve computational efficiency, the translation pairs are sorted by length; because the lengths of the German and English sentences in a pair can differ significantly, the sort considers both their combined and individual lengths. Finally, the sorted pairs are loaded in batches. For the Transformer, the input sequences are padded to a fixed length for both the German and English sentences of a pair, together with position-based padding masks. We train the model to map German input sentences to English output sentences.
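As a minimal sketch of the batching step, the snippet below pads variable-length sentence pairs and builds the boolean key-padding masks described later; the `collate_batch` helper and the `PAD_IDX` value (matching the special-token table in the Vocabulary section) are illustrative, not the exact code from our Colab notebook.

```python
import torch
from torch.nn.utils.rnn import pad_sequence

PAD_IDX = 3  # index of the <PAD> token (see the Vocabulary section)

def collate_batch(batch):
    """Pad a list of (source, target) index tensors into fixed-length batches.

    Each element of `batch` is a pair of 1-D LongTensors holding vocabulary
    indices for one German/English sentence pair (illustrative shapes).
    """
    src_seqs, tgt_seqs = zip(*batch)
    # pad_sequence stacks sequences along dim=1 -> shape (max_len, batch_size)
    src_batch = pad_sequence(src_seqs, padding_value=PAD_IDX)
    tgt_batch = pad_sequence(tgt_seqs, padding_value=PAD_IDX)
    # Boolean key-padding masks: True where a position is padding
    src_padding_mask = (src_batch == PAD_IDX).transpose(0, 1)  # (batch, src_len)
    tgt_padding_mask = (tgt_batch == PAD_IDX).transpose(0, 1)  # (batch, tgt_len)
    return src_batch, tgt_batch, src_padding_mask, tgt_padding_mask
```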
Vocabulary
We use the spaCy Python package for vocabulary encoding. Vocabulary indices are assigned by word frequency, with indices 0 to 3 reserved for special tokens:
- 0: <SOS> as “start of sentence”
- 1: <EOS> as “end of sentence”
- 2: <UNK> as “unknown” words
- 3: <PAD> as “padding”
Uncommon words that appear fewer than 2 times in the dataset are denoted with the <UNK> token. Inside the Transformer structure, the frequency-index encoding of the input passes through an nn.Embedding layer, which converts it to the actual nn.Transformer dimension; this embedding mapping is applied per word. From an input sentence of 10 German words, we get a tensor of length 10 where each position holds the embedding of the corresponding word.
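The sketch below shows one way to build such a frequency-based vocabulary with the four reserved tokens; the helper names are illustrative, and a real run would tokenize with spaCy as described above.

```python
from collections import Counter

SPECIALS = ["<SOS>", "<EOS>", "<UNK>", "<PAD>"]  # indices 0-3, as listed above
MIN_FREQ = 2  # words seen fewer than 2 times map to <UNK>

def build_vocab(tokenized_sentences):
    """Map words to integer indices by descending frequency, after the specials."""
    counter = Counter(tok for sent in tokenized_sentences for tok in sent)
    vocab = {tok: idx for idx, tok in enumerate(SPECIALS)}
    for word, freq in counter.most_common():
        if freq >= MIN_FREQ:
            vocab[word] = len(vocab)
    return vocab

def encode(sentence_tokens, vocab):
    """Convert a tokenized sentence to indices, wrapping it in <SOS>/<EOS>."""
    unk = vocab["<UNK>"]
    return [vocab["<SOS>"]] + [vocab.get(t, unk) for t in sentence_tokens] + [vocab["<EOS>"]]
```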
Positional Encoding
Compared to RNNs, Transformers differ in requiring positional encoding. An RNN, with its sequential nature, encodes location information naturally. A Transformer processes all words in parallel, and therefore needs location information to be explicitly encoded into its inputs.
We calculate positional encoding as a function of position. This function combines sine and cosine components at multiple frequencies. The intuition is that this combination lets attention regard words far away from the word being processed while remaining invariant to sentence length, thanks to the cyclic components. We then add this information to the word embedding: in our case, we add it to each token in the sentence, though another possible method is concatenation to each word.
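A sketch of the standard sinusoidal positional encoding, added to a (seq_len, batch, d_model) embedding as described above; the module name and hyperparameter defaults are our assumptions rather than the exact values used in our notebook.

```python
import math
import torch
import torch.nn as nn

class PositionalEncoding(nn.Module):
    """Add sinusoidal position information to a (seq_len, batch, d_model) embedding."""

    def __init__(self, d_model, max_len=5000, dropout=0.1):
        super().__init__()
        self.dropout = nn.Dropout(dropout)
        position = torch.arange(max_len, dtype=torch.float).unsqueeze(1)  # (max_len, 1)
        div_term = torch.exp(torch.arange(0, d_model, 2) * (-math.log(10000.0) / d_model))
        pe = torch.zeros(max_len, 1, d_model)
        pe[:, 0, 0::2] = torch.sin(position * div_term)  # even dimensions
        pe[:, 0, 1::2] = torch.cos(position * div_term)  # odd dimensions
        self.register_buffer("pe", pe)  # stored with the model but not trained

    def forward(self, x):
        # x: (seq_len, batch, d_model); add the encoding for each position
        x = x + self.pe[: x.size(0)]
        return self.dropout(x)
```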
Transformer Model
Here we focus on the Transformer layers and how the cost function is constructed.
How to Use nn.Transformer Module
PyTorch’s Transformer module is at the core of our application. The torch.nn.Transformer forward pass takes the following arguments: src, tgt, src_key_padding_mask, tgt_key_padding_mask, memory_key_padding_mask, and tgt_mask. These are defined as:
src: the source sequence
tgt: the target sequence. Note that the target input is always shifted by one time step relative to the translation output
src_key_padding_mask: a boolean tensor from the source language where 1 indicates padding and 0 indicates an actual word
tgt_key_padding_mask: a boolean tensor from the target language where 1 indicates padding and 0 indicates an actual word
memory_key_padding_mask: a boolean tensor where 1 indicates padding and 0 indicates an actual word. In our example, this is the same as the src_key_padding_mask
tgt_mask: a lower triangular matrix is used to process target generation recursively where 0 indicates an actual predicted word and negative infinity indicates a prediction to ignore
The Transformer is designed to take in a full sentence, so an input shorter than the Transformer’s input capacity is padded. The key padding masks allow the Transformer to perform calculations efficiently by excluding positions after a sentence ends. When the Transformer is used in sequence-to-sequence applications, it’s crucial to understand that even though the input sequence is processed all at once, the output sequence is produced progressively. This sequential progression is configured through tgt_mask. During training and inference, the target output is always one step ahead of the target input, as each recursion generates one additional word; this is reflected in the `tgt_inp, tgt_out = tgt[:-1, :], tgt[1:, :]` split used during training. The tgt_mask is composed as a lower triangular matrix:
Figure 3: Example tgt_mask showing the lower triangular matrix
Row by row, a new position is unlocked for the target output, i.e. a new target word. The newly extended sentence is then fed back as the target input in the next recursion.
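To make the wiring concrete, here is a minimal sketch of one training-style forward pass through nn.Transformer with the arguments described above. The model sizes, variable names, and random token tensors are illustrative (in practice the indices come from the padded batches shown earlier), and positional encoding is omitted for brevity.

```python
import torch
import torch.nn as nn

d_model, vocab_size, PAD_IDX = 512, 10000, 3   # illustrative sizes
transformer = nn.Transformer(d_model=d_model)
src_embed = nn.Embedding(vocab_size, d_model)
tgt_embed = nn.Embedding(vocab_size, d_model)
generator = nn.Linear(d_model, vocab_size)      # "de-embedding" back to vocabulary logits

src = torch.randint(0, vocab_size, (12, 4))     # (src_len, batch) of token indices
tgt = torch.randint(0, vocab_size, (9, 4))      # (tgt_len, batch)

# Shift: the decoder sees tokens up to step t-1 and is trained to predict token t
tgt_inp, tgt_out = tgt[:-1, :], tgt[1:, :]

# Lower-triangular (subsequent) mask: 0 where attending is allowed, -inf elsewhere
tgt_mask = transformer.generate_square_subsequent_mask(tgt_inp.size(0))

src_key_padding_mask = (src == PAD_IDX).transpose(0, 1)      # (batch, src_len)
tgt_key_padding_mask = (tgt_inp == PAD_IDX).transpose(0, 1)  # (batch, tgt_len-1)

out = transformer(
    src_embed(src), tgt_embed(tgt_inp),
    tgt_mask=tgt_mask,
    src_key_padding_mask=src_key_padding_mask,
    tgt_key_padding_mask=tgt_key_padding_mask,
    memory_key_padding_mask=src_key_padding_mask,  # same as the source padding mask
)
logits = generator(out)  # (tgt_len-1, batch, vocab_size)
```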
Loss Function
While we build the translation word by word at inference time, we can train our model on a full input and output sequence at once. Each word in the predicted sentence is compared with the corresponding word in the ground-truth sentence. Since we have a finite vocabulary for our word embeddings, we can treat translation as a per-word classification task. As a result, we train the network with a cross-entropy loss at the individual-word level on the translation output, in both the RNN and Transformer formulations of the task.
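A minimal sketch of that per-word cross-entropy: the decoder logits are flattened so each target position becomes one classification example, and padding positions are ignored. The tensor shapes here are illustrative placeholders for the outputs of the forward pass above.

```python
import torch
import torch.nn as nn

PAD_IDX, vocab_size = 3, 10000
criterion = nn.CrossEntropyLoss(ignore_index=PAD_IDX)  # padding positions don't contribute

# Illustrative shapes: logits from the decoder, ground truth shifted by one step
logits = torch.randn(8, 4, vocab_size, requires_grad=True)  # (tgt_len-1, batch, vocab)
tgt_out = torch.randint(0, vocab_size, (8, 4))              # (tgt_len-1, batch)

loss = criterion(logits.reshape(-1, vocab_size),  # (positions, vocab_size)
                 tgt_out.reshape(-1))             # (positions,)
loss.backward()
```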
Building a Translator Using Inference
When we perform the actual German-to-English translation, the entire German sentence is used as the source input, but the target output (the English sentence) is generated word by word, starting with <SOS> and ending with <EOS>. At each step, we apply an argmax over the vocabulary at the last target position to obtain the next target word. Note that progressively choosing the highest-probability word from the network is a form of greedy decoding.
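A hedged sketch of that greedy decoding loop: `model` is assumed to be a wrapper around the embedding, nn.Transformer, and output projection from the earlier sketches, returning vocabulary logits, and the special-token indices follow the table in the Vocabulary section.

```python
import torch

SOS_IDX, EOS_IDX = 0, 1  # special-token indices from the Vocabulary section

@torch.no_grad()
def greedy_translate(model, src, max_len=50):
    """Greedily decode one source sentence given as a (src_len, 1) index tensor.

    `model(src, tgt)` is assumed to return logits of shape (tgt_len, 1, vocab_size).
    """
    tgt = torch.tensor([[SOS_IDX]])            # start the output with <SOS>
    for _ in range(max_len):
        logits = model(src, tgt)               # one forward pass per generated word
        next_word = logits[-1, 0].argmax()     # greedy choice over the vocabulary
        tgt = torch.cat([tgt, next_word.view(1, 1)], dim=0)
        if next_word.item() == EOS_IDX:        # stop once <EOS> is produced
            break
    return tgt.squeeze(1).tolist()             # list of predicted token indices
```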
An Insight on How Transformers Actually Work
The Transformer model is very effective at solving sequence-to-sequence problems. Funnily enough, its effectiveness comes from processing a sentence as a graph instead of an explicit sequence: each word at a particular position considers all other words. The Transformer powers this approach with the attention mechanism, which captures word relations and applies attention weights to the words in focus. Unlike recurrent neural networks, the Transformer’s computation can be done in parallel. Note that the Transformer model works on fixed-length input and output sequences; sentences are padded to that length with <PAD> tokens.
Figure 4: An example translating a sentence from French to English. Note that the intermediate layers are neither entirely valid French nor English but an intermediate representation. (Image by Authors)
A full Transformer network consists of a stack of encoding layers and a stack of decoding layers, each composed of self-attention and feed-forward layers. One of the basic building blocks of the Transformer is the self-attention module, which contains Key, Value, and Query vectors. At a high level, the Query and Key vectors together calculate an attention score between 0 and 1 that scales how much the current item is weighted. If the attention score only scaled items up or down, we couldn’t really call it a transformer yet; to start transforming the input, the Value vector is applied to the input vector, and that output is scaled by the attention score calculated earlier.
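To make the Query/Key/Value description concrete, here is a minimal sketch of single-head scaled dot-product self-attention in the textbook formulation (not PyTorch’s internal implementation); the projection matrices and dimensions are illustrative.

```python
import math
import torch
import torch.nn.functional as F

def self_attention(x, w_q, w_k, w_v):
    """Single-head scaled dot-product self-attention.

    x: (seq_len, d_model) input embeddings; w_q/w_k/w_v: (d_model, d_k) projections.
    """
    q, k, v = x @ w_q, x @ w_k, x @ w_v
    # Attention scores in [0, 1]: each row sums to 1 over all positions
    scores = F.softmax(q @ k.transpose(0, 1) / math.sqrt(k.size(-1)), dim=-1)
    # Each output position is a score-weighted mix of the value vectors
    return scores @ v

# Tiny usage example with random projection weights
x = torch.randn(6, 16)                      # 6 tokens, model dimension 16
w_q, w_k, w_v = (torch.randn(16, 8) for _ in range(3))
out = self_attention(x, w_q, w_k, w_v)      # (6, 8)
```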
Resources
- Our Language Translation Colab Implementation
- Language Translation with TorchText
- Translator Implementation by Andrew Peng
- Understanding Transformers, the Programming Way
- Transformer Wikipedia
- The Illustrated Transformer
- Pytorch’s Transformer module
Language Translation with Transformers in PyTorch was originally published in Chatbots Life on Medium, where people are continuing the conversation by highlighting and responding to this story.
-
“Hello World”, chatbot version — Complete example
The Hello World program is the typical first example you see when learning any programming language; it was first used in a tutorial for learning B (the predecessor of the C language) in 1973. It is often the first program written by people learning to code, and its success resides in its simplicity: writing its code is very simple in most programming languages. It’s also used as a sanity test to make sure the editor, compiler,… is properly installed and configured. For these same reasons, it makes sense to have a “Hello World” version for chatbots. Such a bot could be defined as follows:
A Hello World chatbot is a chatbot that replies “Hello World” every time the user greets the bot
So, something like this:
While this chatbot is indeed simple (compared with any other chatbot), it’s much more deceptive than its Hello World counterparts in programming languages. That’s because of the essential complexity of chatbot development: even the simplest chatbot is a complex system that needs to interact with communication channels (on the “front-end”) and a text processing / NLP engine (in the “backend”), among, potentially, other external services. Clearly, creating and deploying a Hello World chatbot is not exactly your typical Hello World exercise.
Chatbots are complex systems
But don’t be scared; let me show you how to build your first chatbot with our open-source platform Xatkit. Our Fluent API will help you create and assemble the different parts of the chatbot. Let’s look at the chatbot code you need to write.
Recognizing when the user says “Hi”
The chatbot needs to detect when the user is greeting it. This is the only intention we need to care about. So it’s enough to define a single Intent with a few training sentences. Any NLP Provider (e.g. DialogFlow or nlp.js) would do a good job with this simple intent.
Replying Hello World
To process the user’s greeting, we need at least one state that replies by printing the “Hello World” text. But to keep the bot in a loop (who knows, maybe many users want to say hi!), we’ll use a couple of them.
Configuring the chatbot
As we mentioned above, chatbots come with some inherent essential complexity. At the very least, they need to wait and listen to the user on some channel and then reply to the same channel. In Xatkit, we use the concept of Platform for this. In the code below, we indicate that the bot is displayed as a widget on a webpage and that it will get both events (e.g. the page loaded event) and user utterances via this platform.
And this is basically all you need for your Hello World chatbot! Feel free to clone our Xatkit bot template to get a Greetings Bot ready to use and play with.
Of course, this is a very simple Hello World chatbot (e.g. what if the user doesn’t say hi but something else?), but I think it’s the closest we can get to the Hello World equivalent you’re so used to seeing for other languages. Remember you can head to our main GitHub repo for more details on Xatkit or check out some of our other bot examples.
“Hello World”, chatbot version — Complete example was originally published in Chatbots Life on Medium, where people are continuing the conversation by highlighting and responding to this story.
-
Top 10 Chatbot Affiliate Programs for 2021
Affiliate marketing is a hot topic worldwide; many people are already into it, but some fail to find a way to succeed. Don’t worry: this ultimate guide will help you get started and win big. Going on, here are the top 10 chatbot affiliate programs for 2021.
submitted by /u/botpenguin1
-
Testing Conversational AI
Measuring chatbot performance beyond traditional metrics and software testing methods
-
COVID-19 is Accelerating Voice Technology Adoption
The ongoing COVID-19 health crisis is boosting the use of voice technology as a contactless alternative to…