Year: 2021

  • How US legal firms can and must compete with robo-lawyer services

    Most US legal firms, like their European counterparts, are steeped in tradition. Even newer firms formed by eager law graduates have their education rooted in similar structures. As legal journals and business magazines impress on readers the need to modernize and digitize how we work, the external threats to legal firms are growing, but how do we address them?

    Automated legal services like chatbots and form creators are a threat to the legal profession, or they can be viewed as a challenge to be met. As was the case with iTunes and Spotify for music and Amazon for retail, some rivals adapted to face a new market reality, others folded, sold up or were driven into a particular niche, while more startups arrived to compete. Whatever the market, there are plenty of ways to survive and thrive as the momentum and disruption of automation builds, as the legal sector will find out very quickly.

    In the legal profession, the adoption of digital technology for collaboration and efficiency, powered by cloud services, has been mixed. But as all markets look to a post-COVID world, there is fresh impetus to grasp the learnings from the crisis and adopt technology to make legal operations more streamlined and efficient.

    Lack of awareness is no defense: Deloitte explored the disruption issue back in 2017 with a report, “The case for disruptive technology in the legal profession”, highlighting the key issues of:

    · The opportunity that technology creates for legal.

    · The growing importance of big data and analytics in legal cases.

    · The effects of technology on legal business models.

    · Potential legal disrupters.

    All of which remain valid today, but now the disruption is more visible, in every lawyer’s face and rising up the boardroom agenda for all firms with a large legal footprint.

    Trending Bot Articles:

    1. The Messenger Rules for European Facebook Pages Are Changing. Here’s What You Need to Know

    2. This Is Why Chatbot Business Are Dying

    3. Facebook acquires Kustomer: an end for chatbots businesses?

    4. The Five P’s of successful chatbots

    The rise of disruptive legal services

    Legal business evolution is driven by vertical-specific vendors iterating well-known IT products and cloud solutions, often packaged by legal IT specialists and sold to enterprises with extensive legal departments, and then down to smaller firms.

    But the word “disruption” is the driving force behind more radical change. Many startups and “ideas people” both from within and external to the law profession see opportunities to shake up the old order. They create new products and types of service that eliminate the high cost and slow-moving nature of most legal offerings and services.

    Behind their ideas, new products are driven by the limitless power of the cloud to deliver services and scale marketing to enormous proportions. While most of them will fail to gain the much-coveted traction, those that succeed act as inspiration for more to try, while rapidly taking business from existing legal firms or providing them with the tools to compete.

    The current poster child for disruptive legal tech is DoNotPay, a company founded by an English teenager, Joshua Browder, in 2015. His business started with an automated way to dispute parking tickets and expanded to the US, providing bots that help consumers with legal form filling, filing for airfare refunds, providing access to legal services and much more. Others include Zegal (legal templates) and Lisa Robot Lawyer (NDAs and property contracts).

    DoNotPay has blossomed into a consumer rights champion, offering virtual credit cards, student advice and has started eating further up the legal food chain with an automated contract builder and other tools. It can even send these forms as faxes to services that are stuck in their ways.

    DoNotPay does away with legal jargon and complexity, and more importantly saves time

    People who never knew they needed a lawyer are using DoNotPay or the growing number of rivals serving local, national or regional markets, without ever having to find traditional representation. Digital-native generations will use these tools and never bother Googling for “lawyer near me.” And this is only the start as automated real estate, bail bond, company creation services, business contracts, leases and other legal processes are consumed as instant services.

    Getting your legal firm up to speed

    Larger legal firms may find that recent changes barely affect them, but the pace of change continues to increase, and the impact will be felt eventually. Many firms are wondering how they can meet this challenge. Some play to their strengths, using cash piles for acquisitions to corner a market or expand into new territory.

    Others will follow the well-trodden path of digital legal services adoption, doing whatever their rivals do to keep pace through cloud-based practice management services, at cost and with the usual upheaval of adopting new services. Bucking tradition, perhaps the best approach to the automated services era is for firms to ask their domain experts how they can innovate to counteract or outpace those threatening to disrupt the legal landscape.

    Where DoNotPay and its rivals may falter is that they are not law firms and the T&Cs state that “The information provided by DoNotPay along with the content on our website related to legal matters (“Legal Information”) is provided for your private use and does not constitute legal advice.”

    A law firm can fill the breach with automated services that do provide legal advice or take the next steps that robo-firms currently do not. And creating these tools is simpler than you might think.

    BRYTER does legal magic with no-code

    BRYTER, a company that has been peppering recent top-10 legal startup lists (Financial Times), is currently setting up offices in New York after two successful years in Europe. Its no-code service automation product highlights how legal firms can build their own tools without relying on expensive and time-consuming IT rollouts.

    Putting design tools in the hands of lawyers and legal professionals to build the applications they need, from chatbots (see BRYTER’s new “lawyer’s guide to chatbots” white paper), form creators, tax checkers and other necessities gets them live in days, not months.

    They enable a company to prototype and trial applications quickly to take advantage of new legal market opportunities, and then scale digital legal services to meet demand. BRYTER is already used by leading legal firms for timely issues like privacy, COVID-19, GDPR, CCPA, repapering among other matters, all helping grow digital ideas within businesses.

    BRYTER’s arrival in the US should make waves

    They can save time or generate revenue for the company directly, or be sold to clients as part of service packages, diversifying beyond the traditional billable hour.

    BRYTER’s full-service offering provides the tools, expertise and experience to embed no-code tools as a core product within teams or legal practice groups. It took one person to build DoNotPay and it could be one of your lawyers looking to innovate or deliver savings that comes up with the next big thing that brings success to your practice by bringing an idea to life.


    How US legal firms can and must compete with robo-lawyer services was originally published in Chatbots Life on Medium, where people are continuing the conversation by highlighting and responding to this story.

  • Language Translation with Transformers in PyTorch

    Mike Wang, John Inacay, and Wiley Wang (All authors contributed equally)

    If you’ve been using online translation services, you may have noticed that the translation quality has significantly improved in recent years. Since it was introduced in 2017, the Transformer deep learning model has rapidly replaced the recurrent neural network (RNN) as the architecture of choice for natural language processing (NLP) tasks, with models like OpenAI’s Generative Pre-trained Transformer (GPT) and Google’s Bidirectional Encoder Representations from Transformers (BERT) leading the way. With the Transformer’s parallelization ability and the utilization of modern computing power, these models are big and fast-evolving, and generative language models frequently draw media attention for their capabilities. If you’re like us, relatively new to NLP but generally comfortable with machine learning fundamentals, this tutorial may help you kick-start your understanding of Transformers with a real-life example: building an end-to-end German-to-English translator.

    In creating this tutorial, we based our work on two resources: the Pytorch RNN based language translator tutorial and a translator implementation by Andrew Peng. With an openly available database, we’ll be demonstrating our Colab implementation for how to translate between German and English using Pytorch and the Transformer model.

    Architecture Details

    Figure 1: The sequence of data flowing all the way from input to output (Image by Authors)

    To start with, let’s talk about how data flows through the translation process. The data flow follows the diagram shown above. An input sequence is converted to a tensor of embeddings, and each of the Transformer’s outputs then goes through an unpictured “de-embedding” step that converts it from the embedding space back into the final output sequence. Note that during inference we obtain words one by one from each forward pass, rather than receiving a translation of the full text all at once from a single inference.
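    As a rough illustration of this flow, here is a minimal sketch in PyTorch. The dimensions, the class and attribute names, and the omission of positional encoding are all simplifications for illustration, not the tutorial’s actual code:

```python
import torch
import torch.nn as nn

class TranslatorSketch(nn.Module):
    # Illustrative dimensions; positional encoding is omitted for brevity.
    def __init__(self, src_vocab=8000, tgt_vocab=6000, d_model=512):
        super().__init__()
        self.src_emb = nn.Embedding(src_vocab, d_model)   # token ids -> vectors
        self.tgt_emb = nn.Embedding(tgt_vocab, d_model)
        self.transformer = nn.Transformer(d_model=d_model)
        self.generator = nn.Linear(d_model, tgt_vocab)    # the "de-embedding" step

    def forward(self, src_ids, tgt_ids):
        # Shapes: (seq_len, batch) -> (seq_len, batch, d_model)
        out = self.transformer(self.src_emb(src_ids), self.tgt_emb(tgt_ids))
        return self.generator(out)  # logits over the target vocabulary

model = TranslatorSketch()
src = torch.randint(0, 8000, (12, 2))  # 12 source tokens, batch of 2
tgt = torch.randint(0, 6000, (9, 2))   # 9 target tokens so far
logits = model(src, tgt)
print(logits.shape)  # one distribution per target position: (9, 2, 6000)
```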

    Input Sequence and Embedding Module

    At the start, we have our input sequence. For example, we start with the German sentence “Zwei junge personen fahren mit dem schlitten einen hügel hinunter.” The ground truth English translation is “Two young people are going down a hill on a slide.” Below, we show how the Transformer is used, with some insight into its inner workings. The model itself expects the source German sentence and whatever portion of the translation has been inferred so far. The Transformer translation process thus forms a feedback loop that predicts the following word of the translation.

    Figure 2: An example showing how a Transformer translates from German to English (Image by Authors)

    Dataset

    For the task of translation, we use the German-English `Multi30k` dataset from `torchtext`. This dataset is small enough to be trained in a short period of time, but big enough to show reasonable language relations. It consists of 30k paired German and English sentences. To improve calculation efficiency, the dataset of translation pairs is sorted by length. As the length of German and English sentence pairs can vary significantly, the sorting is by the sentences’ combined and individual lengths. Finally, the sorted pairs are loaded as batches. For Transformers, the input sequence lengths are padded to fixed length for both German and English sentences in the pair, together with location based masks. For our model, we train on an input of German sentences to output English sentences.
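    The padding step can be sketched with PyTorch’s pad_sequence. The token ids below are toy values; padding here is to the longest sentence in the batch, and the index 3 for <PAD> follows the special-token table in the Vocabulary section:

```python
import torch
from torch.nn.utils.rnn import pad_sequence

PAD_IDX = 3  # reserved <PAD> index
# Toy "tokenized" sentences of different lengths (ids are illustrative)
src_batch = [torch.tensor([5, 6, 7]), torch.tensor([5, 8, 9, 10, 11])]

padded = pad_sequence(src_batch, padding_value=PAD_IDX)  # (max_len, batch)
key_padding_mask = (padded == PAD_IDX).transpose(0, 1)   # (batch, max_len); True = padding
print(padded.shape)                 # torch.Size([5, 2])
print(key_padding_mask.tolist()[0]) # [False, False, False, True, True]
```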

    Vocabulary

    We use the spaCy Python package for vocabulary encoding. The vocabulary indexing is based on the frequency of words, though numbers 0 to 3 are reserved for special tokens:

    • 0: <SOS> as “start of sentence”
    • 1: <EOS> as “end of sentence”
    • 2: <UNK> as “unknown” words
    • 3: <PAD> as “padding”

    Uncommon words that appear fewer than 2 times in the dataset are denoted with the <UNK> token. Note that inside the Transformer structure, the input encoding, which uses frequency indices, passes through the nn.Embedding layer to be converted into the actual nn.Transformer dimension. Note that this embedding mapping is per word. From our input sentence of 10 German words, we get a tensor of length 10 where each position is the embedding of the corresponding word.
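    The frequency-based indexing with reserved special tokens can be sketched in plain Python. The tutorial itself relies on spaCy and torchtext; the helper names and toy sentences here are illustrative:

```python
from collections import Counter

SPECIALS = ["<SOS>", "<EOS>", "<UNK>", "<PAD>"]  # reserved indices 0-3
MIN_FREQ = 2  # words seen fewer than 2 times become <UNK>

def build_vocab(tokenized_sentences, min_freq=MIN_FREQ):
    counts = Counter(tok for sent in tokenized_sentences for tok in sent)
    # Frequency-ordered indexing after the four reserved tokens
    words = [w for w, c in counts.most_common() if c >= min_freq]
    return {tok: i for i, tok in enumerate(SPECIALS + words)}

vocab = build_vocab([["zwei", "junge", "personen"], ["zwei", "junge", "leute"]])
unk = vocab["<UNK>"]
encode = lambda sent: [vocab.get(tok, unk) for tok in sent]
print(encode(["zwei", "personen"]))  # "personen" appears once -> <UNK>: [4, 2]
```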

    Positional Encoding

    Unlike RNNs, Transformers require positional encoding. An RNN, with its sequential nature, encodes location information naturally. Transformers process all words in parallel, therefore requiring location information to be explicitly encoded into the inputs.

    We calculate positional encoding as a function of time. This function is expected to contain cyclic (sine and cosine functions) and non-cyclic components. The intuition here is that this combination will allow attention to regard other words far away relative to the word being processed while being invariant to the length of sentences due to the cyclic component. We then add this information to the word embedding. In our case, we add this to each token in the sentence, but another possible method is concatenation to each word.
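    The standard sine/cosine formulation of this encoding can be sketched as follows. This is a common implementation pattern, assumed here rather than taken from the tutorial’s code:

```python
import math
import torch

def positional_encoding(max_len, d_model):
    # Sine on even dimensions, cosine on odd ones (the standard formulation)
    pe = torch.zeros(max_len, d_model)
    position = torch.arange(max_len, dtype=torch.float).unsqueeze(1)
    div_term = torch.exp(torch.arange(0, d_model, 2).float()
                         * (-math.log(10000.0) / d_model))
    pe[:, 0::2] = torch.sin(position * div_term)
    pe[:, 1::2] = torch.cos(position * div_term)
    return pe

pe = positional_encoding(max_len=50, d_model=512)
# Added (not concatenated) to each token embedding:
#   embedded = token_embedding + pe[:seq_len]
print(pe.shape)  # torch.Size([50, 512])
```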

    Transformer Model

    Here we emphasize Transformer layers and how cost functions are constructed.

    How to Use nn.Transformer Module

    PyTorch’s Transformer module is at the core of our application. The torch.nn.Transformer parameters include: src, tgt, src_key_padding_mask, tgt_key_padding_mask, memory_key_padding_mask, and tgt_mask. These parameters are defined as:

    src: the source sequence

    tgt: the target sequence. Note that the target input compared to the translation output is always shifted by 1 time step

    src_key_padding_mask: a boolean tensor from the source language where 1 indicates padding and 0 indicates an actual word

    tgt_key_padding_mask: a boolean tensor from the target language where 1 indicates padding and 0 indicates an actual word

    memory_key_padding_mask: a boolean tensor where 1 indicates padding and 0 indicates an actual word. In our example, this is the same as the src_key_padding_mask

    tgt_mask: a lower triangular matrix is used to process target generation recursively where 0 indicates an actual predicted word and negative infinity indicates a prediction to ignore

    The Transformer is designed to take in a full sentence, so an input shorter than the transformer’s input capacity is padded. The key padding masks allow the Transformer to perform calculations efficiently by excluding elements after sentences end. When the Transformer is used in sequence-to-sequence applications, it’s crucial to understand that even though the input sequence is processed all at once, the output sequence is produced progressively. This sequential progression is configured by tgt_mask. During training or inference, the target output is always one step ahead of the target input, as each recursion generates one additional word; this is reflected in the “tgt_inp, tgt_out = tgt[:-1, :], tgt[1:, :]” split during training. The tgt_mask is composed as a lower triangular matrix:

    Figure 3: Example tgt_mask showing the lower triangular matrix

    Row by row, a new position is unlocked for target output, e.g. a new target word. The newly appended sentence is then fed back as the target input in this recursion.
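    Both the tgt_mask and the one-step shift can be sketched as follows; make_tgt_mask is a hypothetical helper name, and the token ids are toy values:

```python
import torch

def make_tgt_mask(size):
    # Lower triangle (including diagonal) = 0.0: positions the decoder may attend to.
    # Upper triangle = -inf: future positions, ignored by attention.
    return torch.triu(torch.full((size, size), float("-inf")), diagonal=1)

print(make_tgt_mask(4))

# The one-step shift between target input and target output during training:
tgt = torch.tensor([[0], [7], [8], [1]])    # <SOS>, word, word, <EOS>; batch of 1
tgt_inp, tgt_out = tgt[:-1, :], tgt[1:, :]  # input lags output by one position
```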

    Loss Function

    While we do build the translation word-by-word for inference, we can train our model using a full input and output sequence at once. Each word in the predicted sentence can be compared with each word in the ground truth sentence. Since we have a finite vocabulary with our word embeddings, we can treat translation as a classification task for each word. As a result, we train our network with the Cross Entropy loss on an individual word level for the translation output in both the RNN and Transformer formulations of the task.
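    A minimal sketch of this word-level Cross Entropy setup follows. The ignore_index on <PAD> is a common refinement we are assuming, not necessarily the tutorial’s exact configuration, and the logits and targets are random stand-ins:

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
PAD_IDX = 3
criterion = nn.CrossEntropyLoss(ignore_index=PAD_IDX)  # padded positions add no loss

# Pretend logits for 5 target positions, batch of 2, vocabulary of 10 words
logits = torch.randn(5, 2, 10)
tgt_out = torch.randint(0, 10, (5, 2))

# CrossEntropyLoss expects (N, C) vs (N,): flatten positions and batch together
loss = criterion(logits.reshape(-1, 10), tgt_out.reshape(-1))
print(loss.item())
```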

    Building a Translator Using Inference

    When we perform the actual German-to-English translation, the entire German sentence is used as the source input, but the target output, i.e. the English sentence, is translated word by word, starting with <SOS> and ending with <EOS>. At each step, we apply an argmax over the vocabulary at the target output to obtain the next target word. Note that choosing the highest-probability word at each step is a form of greedy sampling.
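    The greedy decoding loop can be sketched as follows. greedy_decode and the stub model are illustrative, not the tutorial’s code; the stub only exists so the loop runs end to end:

```python
import torch

SOS, EOS = 0, 1

def greedy_decode(model, src_ids, max_len=20):
    """Grow the target sentence one argmax word per forward pass."""
    tgt_ids = torch.tensor([[SOS]])          # (seq_len=1, batch=1)
    for _ in range(max_len):
        logits = model(src_ids, tgt_ids)     # (tgt_len, 1, vocab)
        next_word = logits[-1, 0].argmax().item()
        tgt_ids = torch.cat([tgt_ids, torch.tensor([[next_word]])])
        if next_word == EOS:
            break
    return tgt_ids.squeeze(1).tolist()

# A stub "model" that always predicts <EOS>, just to show the loop's shape
stub = lambda src, tgt: torch.zeros(tgt.shape[0], 1, 5).index_fill_(
    2, torch.tensor([EOS]), 1.0)
print(greedy_decode(stub, torch.tensor([[4], [5]])))  # [0, 1]
```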

    An Insight on How Transformers Actually work

    The Transformer model is very effective in solving sequence-to-sequence problems. Funnily enough, its effectiveness comes from processing a sentence as a graph instead of an explicit sequence. Each word at a particular position considers all other words. The Transformer powers this approach with the attention mechanism, which captures word relations and applies attention weights to words of focus. Unlike recurrent neural networks, the Transformer module’s calculations can be done in parallel. Note that the Transformer model operates on fixed-length sequences for inputs and outputs. Sentences are padded with <PAD> tokens to the fixed length.

    Figure 4: An example translating a sentence from French to English. Note that the intermediate layers are neither entirely valid French nor English but an intermediate representation. (Image by Authors)

    A full transformer network consists of a stack of encoding layers and a stack of decoding layers. These encoding and decoding layers are composed of self-attention and feed forward layers. One of the basic building blocks of the transformer is the self-attention module which contains Key, Value, and Query vectors. At a high level, the Query and Key vectors together calculate an attention score between 0 and 1 which scales how much the current item is being weighted. Note that if the attention score is only scaling items to be bigger or smaller, we can’t really call it a transformer yet. In order to start transforming the input, the Value vector is applied to the input vector. The output of the Value vector applied to the Input Vector is scaled by the Attention Score we calculated earlier.
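    A single-head, scaled dot-product version of this mechanism can be sketched as follows. The weight matrices are random for illustration; real Transformers learn them and use multiple heads:

```python
import math
import torch

def self_attention(x, Wq, Wk, Wv):
    # Project the input into Query, Key, and Value spaces
    Q, K, V = x @ Wq, x @ Wk, x @ Wv
    d_k = Q.shape[-1]
    # Attention scores: every position attends to every other position,
    # softmax keeps each row of scores between 0 and 1
    scores = torch.softmax(Q @ K.transpose(-2, -1) / math.sqrt(d_k), dim=-1)
    return scores @ V  # scores weight the Value-transformed inputs

torch.manual_seed(0)
seq_len, d = 4, 8
x = torch.randn(seq_len, d)
Wq, Wk, Wv = (torch.randn(d, d) for _ in range(3))
out = self_attention(x, Wq, Wk, Wv)
print(out.shape)  # torch.Size([4, 8])
```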


    Language Translation with Transformers in PyTorch was originally published in Chatbots Life on Medium, where people are continuing the conversation by highlighting and responding to this story.

  • “Hello World”, chatbot version — Complete example

    Hello World for chatbots

    The Hello World program is the typical first example you see when learning any programming language, ever since it was first used in a tutorial for B (a predecessor of the C language) in 1973. It is often the first program written by people learning to code. Its success resides in its simplicity: writing its code is very simple in most programming languages. It’s also used as a sanity test to make sure the editor, compiler, etc. are properly installed and configured. For these same reasons, it makes sense to have a “Hello World” version for chatbots. Such a bot could be defined as follows:

    A Hello World chatbot is a chatbot that replies “Hello World” every time the user greets the bot

    So, something like this:

    While this chatbot is indeed simple (compared with any other chatbot), it’s much more deceptive than its Hello World counterparts for programming languages. That’s because of the essential complexity of chatbot development. Even the simplest chatbot is a complex system that needs to interact with communication channels (on the “front-end”) and a text processing / NLP engine (in the “backend”), among, potentially, other external services. Clearly, creating and deploying a Hello World chatbot is not exactly your typical Hello World exercise.

    Chatbots are complex systems

    But don’t be scared, let me show you how to build your first chatbot with our open-source platform Xatkit. Our Fluent API will help you to create and assemble the different parts of the chatbot. Let’s see the chatbot code you need to write.

    Recognizing when the user says “Hi”

    The chatbot needs to detect when the user is greeting it. This is the only intention we need to care about. So it’s enough to define a single Intent with a few training sentences. Any NLP Provider (e.g. DialogFlow or nlp.js) would do a good job with this simple intent.

    Replying Hello World

    To process the user’s greeting, we need at least one state that replies by printing the “Hello World” text. But to keep the bot in a loop (who knows, maybe many users want to say Hi!), we’ll use a couple of them.
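    Framework aside (Xatkit’s own Fluent API is not shown here), the greet-and-reply logic boils down to something like this Python sketch, where intent recognition is reduced to naive string matching purely for illustration; a real NLP provider does far more:

```python
GREETINGS = {"hi", "hello", "hey", "good morning"}  # toy "training sentences"

def handle(utterance):
    # "Intent recognition" reduced to simple set membership for illustration
    if utterance.strip().lower() in GREETINGS:
        return "Hello World"   # the reply state
    return None                # no intent matched: fall through

print(handle("Hi"))   # Hello World
print(handle("Bye"))  # None
```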

    Configuring the chatbot

    As we mentioned above, chatbots come with some inherent essential complexity. At the very least, they need to wait and listen to the user on some channel and then reply to the same channel. In Xatkit, we use the concept of Platform for this. In the code below, we indicate that the bot is displayed as a widget on a webpage and that it will get both events (e.g. the page loaded event) and user utterances via this platform.

    And this is basically all you need for your Hello World chatbot! Feel free to clone our Xatkit bot template to get a Greetings Bot ready to use and play with.

    Of course, this is a very simple Hello World chatbot (e.g. what happens if the user says something other than Hi?), but I think it’s the closest we can get to the Hello World equivalent you’re so used to seeing for other languages. Remember, you can head to our main GitHub Repo for more details on Xatkit or check some of our other bot examples.


    “Hello World”, chatbot version — Complete example was originally published in Chatbots Life on Medium, where people are continuing the conversation by highlighting and responding to this story.

  • Top 10 Chatbot Affiliate Programs for 2021

    Affiliate marketing is a hot topic worldwide; many people are already into it, but some fail to find a way to succeed. Don’t worry: this ultimate guide will help you get started and win immensely. Here are the top 10 chatbot affiliate programs for 2021.


  • Testing Conversational AI

    Measuring chatbot performance beyond traditional metrics and software testing methods

  • COVID-19 is Accelerating Voice Technology Adoption

    The ongoing COVID-19 health crisis is boosting the use of voice technology as a contactless alternative to…

  • 10 Threats of Artificial Intelligence that We Need to Be Aware of in 2021

    There is no denying that AI is the future. It will unlock doors to new vistas and make it possible to do things that once seemed impossible.

    We will talk to our cars, we will argue with our virtual assistants, and even break new records in sports with the help of AI coaches.

    The truth is we all are enchanted by AI, the amazing things it can do, and the real-world problems AI can solve.

    But there is also another dark side to AI which we cannot ignore. Dangers of AI are no longer movie fiction. They are real and right in front of us. If we do not become aware now, it will be too late.

    In this blog, I will walk you through 10 such critical AI threats that we need to become aware of this year and in the future. Let’s begin:

    10 Threats of AI that We Need to be Aware of in 2021

    1. AI Promoting Racism & Bias

    The idea behind AI was to create systems that are free from hate, racism, and biased opinions that are gripping us humans.

    However, what if AI systems also start promoting racism, hate, and bias?

    Something like this happened back in 2016 when a Microsoft AI chatbot called Tay went full Nazi on Twitter and started tweeting Nazi sentiments and racial epithets. Things blew so much out of proportion that Microsoft had to take the AI offline in just 16 hours.

    It turned out that the AI was trying to mimic the behavior of other human users who were deliberately provoking it.

    It is not the only scenario of AI promoting bias. There are many examples of AI systems that treat ethnic minorities unfairly compared to the white population.

    This leads to a serious concern: these poor human values can be passed on to AI systems as well, and that will only make things worse.

    2. AI Impacting Jobs

    A few days back, I was reading a post on LinkedIn which talked about how AI is going to replace most of the jobs in the future.

    “Nearly 400–800 million jobs will be replaced by AI by the year 2030 and 375 million people will have to look for other career options.” The post mentioned.

    Hence, AI automating jobs is a serious concern that can lead to increased unemployment, a demoralized youth, and, even worse, violence.

    AI is even making it harder for job seekers to find new jobs. Most job applications are rejected by applicant tracking systems right away.

    While recruiters call it filtering for ideal candidates, there are times when even the most promising candidates cannot make it to the shortlist. This way, both the job seekers and the recruiters suffer.

    3. AI Violating Privacy

    The Facebook-Cambridge Analytica scandal of 2016 shook us to the core. For the first time, we understood that our private details are not private anymore.

    We are generating 2.5 million terabytes of data each day. Ever wondered how much damage an AI system with access to this much data could do?

    Companies can influence our behavior and make us do what they want. They can make us buy their products, vote for people who align with their purpose, and support decisions that are in their favor.

    This way our decisions will no longer be ours. We will just end up becoming a puppet in the hands of organizations who are willing to stoop to any level for profit.

    4. AI Promoting Unreliability and Fake News

    One of the biggest achievements of AI is that it is capable of creating content on its own. There are apps in the market that can create faces, compose texts, write tweets, and clone voices. It can help a lot in advertising.

    However, people with malicious intent are using the same software to spread fake news, rumors, and blackmail others.

    So, it is not very difficult for someone to take your photo, create a fake video, and then blackmail you by threatening to release it to your friends. There is even a term for all this: face-swap video blackmailing.

    Even worse, they may simply release the video. Then what? Your hard-earned reputation will be gone in just a few minutes. The worst part is that no one will even bother to check whether it is true or not.

    Celebrities and politicians are already falling prey to this threat and soon it will also haunt people like me and you.

    5. AI-driven Attacks

    Even the most blissful things turn into a curse when they fall into wrong hands, and no analogy can describe this better than AI.

    Artificial Intelligence (AI) not only took cybersecurity to the next level but also opened doors to new threats.

    We are no longer facing old-fashioned, commodity malware human-driven attacks. Cyber-terrorists have evolved. They are now leveraging AI to attack and shut down systems.

    All thanks to AI, cracking safe systems, encrypting secure chats, and hacking into highly confidential websites is a piece of cake for cybercriminals.

    Can you even imagine how much damage it can do?

    6. Increasing Influence of Major Tech Companies

    Facebook, Microsoft, Google, Apple, Alibaba, Tencent, Baidu, and Amazon. These are the eight most powerful companies in the world. They have the power and financial capacity to take the power of AI to a whole new level.

    So, such a powerful technology will end up in the hands of a few players who can utilize it the way they want and serve their best interests. This puts us at risk of data monopoly and control.

    7. AI Resulting in Loss of Skills

    Apart from making our life easy breezy by simplifying complex tasks, AI is also making us lose touch with things that made us human.

    We no longer write with our hands, read books, go on little walks, or spend time with nature. Instead of talking to family members, we spend hours looking at our phones and smiling like idiots.

    It is like we are losing real-life skills and becoming increasingly dependent on technology, and this is not a good thing. It will deprive us of everything that makes us human and turn us into the slaves of the same technology which was expected to make our life better.

    8. Autonomous Weapons

    “Mark my words — AI is far more dangerous than nukes.” Elon Musk had said in a Southwest tech conference back in 2018, warning about the impact of autonomous weapons controlled by AI.

    Along with 115 other experts, Mr. Musk had pointed out the potential threats of autonomous weapons and the level of damage they can do.

    What he said makes complete sense. Technology has evolved and the worst part is that it is easily accessible. You can buy a high-quality drone with a camera that you control from your phone, install facial-recognition software on the drone, and use it to track and hunt the specific person you hold a grudge against.

    Would we want that? Making it so easy for someone to take a life, all while sitting on their couch controlling a drone from miles away?

    Even worse, what if the AI takes it upon itself to make decisions about life and death? There will be a massacre which is something we would never want.

    9. AI Superintelligence

    Here is something I find myself wondering the most:

    What if one day we build a system that is far more intelligent than us humans and it decides to take over the world?

    Although I am not the only one raising this concern, most of the time it is refuted as fear fed by movies.

    But this is the time we should start taking this threat seriously. AI is getting more powerful than before. We are developing AI systems that are defeating current chess champions, we are creating robots that are behaving like humans, we are creating virtual assistants that do a much better job than a personal secretary.

    How long do you think it will take for someone to create a system that is more intelligent than humans? When that happens, the consequences could be drastic.

    The last thing we would want is a Terminator-like scenario in which machines have taken over the world and killer robots are roaming freely down the street.

    10. Liability of Actions

    “Who is responsible for the accidents caused by self-driving cars?” Experts often find themselves wondering.

    It is the main reason why self-driving cars faced a lot of backlash from legal authorities when companies decided to launch them in the market. They just could not decide who should be held accountable if an accident happens: the car owner or the company that designed the self-driving car.

    It is not just the self-driving cars. Authorities have similar concerns in scenarios in which AI went rogue after learning by itself and started drawing its own conclusions.

    Authorities often wonder whether to blame the company or the AI itself, and they still do not have a reliable answer.

    How to Overcome These AI Threats

    • We should feed the right information to AI systems, because they behave according to the information we feed them.
    • Enterprises should automate wisely. Automating routine, mundane tasks is sensible, but some areas will still need human involvement.
    • Consider using VPN software. It prevents the misuse of your crucial data by keeping your details confidential on the web.
    • Employee training is a must. Employees should know how to use AI and what to keep in mind to prevent any wrongdoing.
    • Recruiters should use applicant tracking systems but avoid relying on them entirely. This ensures they do not lose deserving candidates just because an AI did not deem them fit.
    • People should keep enhancing their skills; this will always keep them in demand. Remember: while AI can replace labor, it cannot replace skills.
    • There should be strict laws governing what companies do with the data we share online.
    • Social media channels should use the power of AI to keep a check on accounts that spread hate, racism, and bias. Instagram is already shadowbanning such accounts and hashtags; in fact, this is one of the main reasons hashtags stop working on social media accounts.
    • We need to be aware of how much information we share online so that it cannot be misused.
    • We need to keep a strict check on AI systems capable of creating content. There should be clear guidelines preventing the misuse of someone’s private information without their consent, and violators should be punished.
    • Cybersecurity has to evolve: only AI can overcome the threat of AI attacks. Businesses have to start considering advanced security measures like a cybersecurity mesh.
    • Authorities must draw strict guidelines on how companies can leverage AI. Some governments have already done so by banning apps that used AI for spying and other harmful purposes.
    • We should use AI where it improves our lives, but not become overly dependent on it. We should not lose touch with the skills that make us human, so that we can survive when technology is not around. We need to find the balance.
    • Companies and scientists developing AI must not lose their conscience, as it will play a critical role in deciding how an AI system behaves.
    • Companies should take responsibility and ensure AI is used for the good of society.

    In a Nutshell

    I would love to quote a few words from a video I recently watched on YouTube:

    “AI machines do not have a conscience. The way they act is just the reflection of human consciousness.”

    Hence, it is essential that we humans do not lose our conscience when developing these AI systems. Instead of passing down all our racism, greed, bias, and bigotry, we should pass on the right information.

    After all, we want AI to become our great partner, not the most evil version of ourselves. That is the only thing that will keep us on the right path and ensure AI does not become a threat to our existence.

    And the best time to act is now, because tomorrow will be too late. What are your thoughts?



    10 Threats of Artificial Intelligence that We Need to Be Aware of in 2021 was originally published in Chatbots Life on Medium, where people are continuing the conversation by highlighting and responding to this story.

  • Do you really need a chatbot?

    A chatbot isn’t the instant quick-win that many businesses assume it to be. And the initial cost and effort of implementing a (value-added) bot into your customer service offering is often underestimated.

    Realistically speaking, you only need a chatbot if your business sits in a service-heavy industry (i.e. financial services, retail, travel, telecom, etc.).

    And, even then, you still only need a chatbot if your company is large enough to have a contact centre operation or a customer support help desk. That is, if you field large volumes of inbound customer emails, chats, tickets, calls, or webform queries per day.

    If you’re a one-man band, or an agency with a few select customers, it’s unlikely that a bot deployment will pay off.

    That’s because a successful, integrated, and well-managed chatbot deployment takes significant work and cost.

    Source: https://www.whoson.com/chatbots-ai/do-i-need-a-chatbot/

    submitted by /u/roxanneonreddit