Category: Chat

  • How to train LLMs to ask follow-up questions

    Hi, I am experimenting with using Llama for sales/interview calls.

    One of the major challenges I am facing is training LLMs to ask follow-up questions. I haven't been able to find a quality dataset to train the model. I tried controlling it through context, but that doesn't work accurately.

    The other challenge is scalability: the questions vary from domain to domain. A B2B sales call differs from a B2C call, and a digital marketing interview call differs from a product manager interview call.

    Any research papers, resources, or techniques that this community has tried would be helpful.

    submitted by /u/StrictSir8506

  • Companies that implement Chatbots

    Hi guys! I'm a bit curious whether you've heard about companies wanting to implement internal chatbots for their internal processes. I'm asking because I'm a solution consultant but don't want to come across as too salesy or shady by hiding my real question behind weird ones.

    I'm asking because I've heard that a lot of companies are looking for one but can't find it, or don't understand how to implement a chatbot internally rather than externally (e.g., as a public FAQ bot).

    Edit: I'm not promoting anything (I didn't post the name of the company I work at).

    submitted by /u/Feisty_Ocelot1394

  • Integrating Twilio WhatsApp API with a Node.js Application

    In this article, we will be exploring how to integrate the Twilio WhatsApp API with a Node.js application. Twilio provides an easy-to-use API for integrating messaging services such as WhatsApp into your applications. By the end of this tutorial, you’ll have a functioning Node.js application that can send and receive messages using the Twilio WhatsApp API.

    Table of Contents

    1. Prerequisites
    2. Setting up a Twilio Account
    3. Installing the Twilio SDK
    4. Sending WhatsApp Messages using Twilio
    5. Receiving WhatsApp Messages using Twilio
    6. Implementing a Basic Echo Bot
    7. Conclusion

    1. Prerequisites

    Before we begin, ensure that you have the following installed on your system:

    • Node.js (version 12 or higher)
    • npm (Node.js package manager)
    • A code editor (e.g., Visual Studio Code)

    2. Setting up a Twilio Account

    To start using the Twilio API, you need to create an account on their platform. Visit the Twilio website and sign up for a free account. After signing up, follow the instructions to enable the WhatsApp Sandbox; you will typically be asked to opt in by sending the sandbox's join code from your WhatsApp account. Note down the following details:

    • Account SID
    • Auth Token
    • Sandbox Number

    You’ll need these to authenticate your application with Twilio.
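
    As a side note, rather than hard-coding these credentials in your scripts, you may prefer to load them from environment variables. Here is a minimal sketch (the variable names below are our own convention, not something Twilio requires):

    // Hypothetical env-var names; export them in your shell before running, e.g.:
    //   export TWILIO_ACCOUNT_SID=ACxxxxxxxx
    //   export TWILIO_AUTH_TOKEN=xxxxxxxx
    const accountSid = process.env.TWILIO_ACCOUNT_SID;
    const authToken = process.env.TWILIO_AUTH_TOKEN;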

    3. Installing the Twilio SDK

    Create a new directory for your Node.js application and navigate to it in your terminal. Run the following command to initialize a new Node.js project:

    npm init -y

    Next, install the Twilio SDK by running:

    npm install twilio

    4. Sending WhatsApp Messages using Twilio

    Create a new file called sendWhatsAppMessage.js and open it in your code editor. First, import the Twilio module and initialize a Twilio client using your Account SID and Auth Token:

    const twilio = require('twilio');
    const accountSid = 'your_account_sid';
    const authToken = 'your_auth_token';
    const client = twilio(accountSid, authToken);

    Replace ‘your_account_sid’ and ‘your_auth_token’ with the respective values from your Twilio account.

    Now, create a function to send WhatsApp messages using the Twilio client:

    async function sendWhatsAppMessage(to, message) {
      try {
        const response = await client.messages.create({
          body: message,
          from: 'whatsapp:+14155238886', // Your Twilio Sandbox number
          to: `whatsapp:${to}`,
        });
        console.log(`Message sent to ${to}: ${response.sid}`);
      } catch (error) {
        console.error(`Failed to send message: ${error}`);
      }
    }

    sendWhatsAppMessage('+1234567890', 'Hello from Twilio WhatsApp API!'); // Replace with your phone number

    Replace +1234567890 with your own phone number, including the country code, and run the script:

    node sendWhatsAppMessage.js

    You should receive a WhatsApp message from the Twilio Sandbox number.

    5. Receiving WhatsApp Messages using Twilio

    To receive WhatsApp messages, you need to set up a webhook for incoming messages. We'll use the Express framework for our web server and ngrok to expose our local server to the internet. Install the required packages:

    npm install express ngrok

    Create a new file called receiveWhatsAppMessage.js and open it in your code editor. Set up a basic Express server:

    const express = require('express');
    const app = express();
    const port = 3000;

    app.use(express.urlencoded({ extended: false }));

    app.post('/incoming', (req, res) => {
      const message = req.body;
      console.log(`Received message from ${message.From}: ${message.Body}`);
      res.status(200).send('OK');
    });

    app.listen(port, () => {
      console.log(`Server running on http://localhost:${port}`);
    });

    In this code, we create an Express server and define a route for incoming messages at `/incoming`. When a message is received, we log the sender’s phone number and message content to the console. Next, expose your local server to the internet using `ngrok`.

    Create a new file called `start.js` and add the following code:

    const ngrok = require('ngrok');
    const { spawn } = require('child_process');

    (async () => {
      const url = await ngrok.connect(3000);
      console.log(`ngrok tunnel opened at ${url}`);

      // Run the webhook server as a child process
      const receiveWhatsAppMessageProcess = spawn('node', ['receiveWhatsAppMessage.js'], { stdio: 'inherit' });

      // On Ctrl+C, close the tunnel and the child process before exiting
      process.on('SIGINT', async () => {
        console.log('Shutting down…');
        await ngrok.kill();
        receiveWhatsAppMessageProcess.kill('SIGINT');
        process.exit(0);
      });
    })();

    This script starts the ngrok tunnel and runs our receiveWhatsAppMessage.js script as a child process. When the script is terminated, it will close the ngrok tunnel and child process.

    Run the start.js script:

    node start.js

    You should see the ngrok tunnel URL in your console. Copy this URL and append /incoming to it. Update the webhook URL for your Twilio Sandbox number by going to your Twilio Console, selecting your Sandbox number, and pasting the full URL into the “A MESSAGE COMES IN” field. Save the changes.

    Now, send a message to your Twilio Sandbox number, and you should see the message details logged in your console.

    6. Implementing a Basic Echo Bot

    As a practical example, let’s create a simple echo bot that replies to incoming messages. Update the /incoming route in your receiveWhatsAppMessage.js file:

    const MessagingResponse = require('twilio').twiml.MessagingResponse;

    app.post('/incoming', (req, res) => {
      const message = req.body;
      console.log(`Received message from ${message.From}: ${message.Body}`);

      const twiml = new MessagingResponse();
      twiml.message(`You said: ${message.Body}`);

      res.writeHead(200, { 'Content-Type': 'text/xml' });
      res.end(twiml.toString());
    });

    This code creates a TwiML response using the Twilio SDK’s MessagingResponse class. The response contains a new message echoing the original message’s content. When Twilio receives the response, it sends the reply to the sender.
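
    For reference, the TwiML returned by this handler looks roughly like the following (assuming the incoming message was “Hi”; the SDK emits it on a single line):

    <?xml version="1.0" encoding="UTF-8"?>
    <Response>
      <Message>You said: Hi</Message>
    </Response>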

    Restart your start.js script, and send another message to your Twilio Sandbox number. You should receive a reply with the message content.

    Conclusion

    In this article, we’ve shown you how to integrate the Twilio WhatsApp API with a Node.js application. You learned how to send and receive messages using the Twilio API, and we demonstrated a simple echo bot example. With these building blocks, you can now create more complex chatbots and integrate WhatsApp messaging into your applications using Node.js and Twilio. In the next article, I’ll explain how to integrate the WhatsApp API with ChatGPT.


    Integrating Twilio WhatsApp API with a Node.js Application was originally published in Chatbots Life on Medium, where people are continuing the conversation by highlighting and responding to this story.

  • Generative AI — The future of Engineering


    On June 22, 2024, I had the privilege of presenting a master class at the Congrès National des Junior-Entreprises de la Confédération des Junior-Entreprises Marocaines. This event brought together some of the brightest minds and most enthusiastic learners eager to explore the potential of Generative AI and its impact on the future of engineering.

    [Image: Master class at CJEM — Hamza BARBOUCH]

    Introduction to Generative AI

    Generative AI represents a significant leap forward in artificial intelligence, designed to understand and generate human language. These advanced AI systems, such as GPT-3 and GPT-4, are capable of performing a wide range of tasks, from generating text and answering questions to translating languages and creating code. The foundation of these models lies in their training on vast amounts of data, allowing them to learn patterns and produce human-like responses.

    The example below illustrates the remarkable progression of an LLM, evolving from a novice akin to a toddler to a highly sophisticated entity comparable to Einstein.

    [Image: Illustration of how LLMs get better through the learning process]

    GPT-3 was trained on an extensive corpus drawn from roughly 45 TB of text. To put that into perspective, a 1 GB text file contains approximately 178 million words. By this calculation, the training data for GPT-3 encompassed on the order of 8 trillion words (45,000 GB × 178 million words/GB), highlighting the sheer scale and depth of its training process.

    Mindsets Towards AI

    The integration of AI into various aspects of our lives and work has elicited a range of mindsets. These can generally be categorized into three types:

    1. Denial: Some believe that AI cannot perform their job, underestimating the capabilities of modern AI systems.
    2. Panic: Others fear that AI will take over their jobs, leading to anxiety and resistance towards adopting new technologies.
    3. Positive: The most constructive mindset is viewing AI as a tool to enhance and improve one’s skills and productivity. Embracing AI can lead to new opportunities and career growth.
    [Image: Negative & positive mindsets]

    The Foundation: How It All Started

    The journey of Generative AI began with foundational research, particularly the groundbreaking paper “Attention is All You Need.” This paper introduced the Transformer architecture, which has become the cornerstone of modern AI models. The evolution of Generative AI has been fueled by the diversity, volume, and quality of training data, encompassing books, research papers, web pages, code repositories, public datasets, social media, and Wikipedia.

    [Image: “Attention Is All You Need” paper — Transformer architecture]

    Hardware Evolution

    The advancements in Generative AI would not have been possible without significant progress in hardware. The development of powerful GPUs and TPUs has enabled the training of large models on enormous datasets. The cost of training these models, such as the reported $100 million-plus spent on training GPT-4, reflects the immense computational resources required.

    [Image: Hardware evolution players]

    The Architecture of Transformers

    At the heart of Generative AI is the Transformer architecture, which comprises several key components that work together to process and generate text. This architecture allows for parallel processing of data, making it more efficient than previous models. Transformers have paved the way for the development of various AI models, including BERT, GPT-2, T5, and the latest versions like GPT-3 and GPT-4.

    [Image: Transformer architecture components — encoder & decoder]
    • Encoder-Decoder Structure: The transformer model uses an encoder to read input data (such as text) and a decoder to produce the output. The encoder and decoder each consist of multiple layers.
    • Self-Attention Mechanism: This mechanism allows the model to weigh the importance of different words in a sentence. It helps in understanding context by considering the relationships between words, regardless of their position in the sequence (a code sketch of this mechanism follows this list).
    • Multi-Head Attention: By using multiple attention heads, the transformer can focus on different parts of the input sequence simultaneously. This enhances the model’s ability to capture various aspects of the data.
    • Feed-Forward Neural Networks: Each layer in the encoder and decoder contains a feed-forward neural network that processes the output of the attention mechanisms. These networks help in further transforming the data.
    • Positional Encoding: Since transformers do not have a built-in sense of word order, positional encoding is added to the input embeddings to give the model information about the position of each word in the sequence.
    • Layer Normalization and Residual Connections: These techniques help in stabilizing and speeding up the training process. Layer normalization ensures that the output of each layer has a consistent scale, while residual connections allow gradients to flow more easily through the network.
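
    To make the self-attention mechanism concrete, here is a minimal, illustrative sketch of scaled dot-product attention (softmax(QKᵀ / √d_k) · V) in plain JavaScript. It is a toy version operating on arrays of row vectors, not a library implementation; real transformers run this on batched tensors across many attention heads:

    // Numerically stable softmax over an array of scores
    function softmax(xs) {
      const m = Math.max(...xs);
      const exps = xs.map(x => Math.exp(x - m));
      const sum = exps.reduce((a, b) => a + b, 0);
      return exps.map(e => e / sum);
    }

    // Dot product of two equal-length vectors
    const dot = (a, b) => a.reduce((s, ai, i) => s + ai * b[i], 0);

    // Q, K, V: arrays of row vectors; K and V must have the same number of rows
    function attention(Q, K, V) {
      const dk = K[0].length;
      return Q.map(q => {
        // One attention weight per key, scaled by sqrt(d_k) to keep logits stable
        const weights = softmax(K.map(k => dot(q, k) / Math.sqrt(dk)));
        // Output is the attention-weighted sum of the value vectors
        return V[0].map((_, j) => weights.reduce((s, w, i) => s + w * V[i][j], 0));
      });
    }

    // Toy usage: two tokens with d_k = 2
    console.log(attention([[1, 0], [0, 1]], [[1, 0], [0, 1]], [[1, 2], [3, 4]]));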

    The Emergence of LLMs

    Large Language Models (LLMs) have evolved from small models to the behemoths we see today. The progression from BERT to GPT-4 showcases the rapid advancements in model size and capabilities. These models differ in their architecture and primary use cases, ranging from natural language processing tasks to text generation and summarization.

    [Image: LLM models — open source & paid]

    Tools for Leveraging Generative AI: Ollama and Hugging Face

    In addition to understanding the architecture and training of Generative AI models, it is essential to be familiar with the tools that facilitate their application. Ollama is a versatile platform that allows you to run your favorite Large Language Models (LLMs) locally, offering flexibility and control over model usage. On the other hand, Hugging Face is a renowned hub for AI models and machine learning tools. It provides an extensive library of pre-trained models and APIs, making it easier for developers to implement and experiment with cutting-edge AI technologies. Both platforms are invaluable resources for those looking to harness the power of Generative AI effectively.

    [Image: Ollama tool to run LLMs locally]
    [Image: Hugging Face — the GitHub-like community for downloading models & datasets]
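
    As a hypothetical illustration of the Ollama workflow, the sketch below queries a locally running Ollama server through its HTTP API (assuming Ollama is installed, listening on its default port 11434, and the llama3 model has already been pulled):

    // A minimal sketch; requires Node 18+ for the built-in fetch
    (async () => {
      const res = await fetch('http://localhost:11434/api/generate', {
        method: 'POST',
        body: JSON.stringify({
          model: 'llama3', // Assumes this model was pulled beforehand
          prompt: 'Explain the Transformer architecture in one sentence.',
          stream: false,   // Return the full completion as a single JSON response
        }),
      });
      const data = await res.json();
      console.log(data.response); // The generated text
    })();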

    Effective Use of Generative AI

    To harness the full potential of Generative AI, one must understand the principles of prompt engineering. Designing and refining prompts to guide the outputs of large language models is crucial. Here are some best practices for effective prompt engineering:

    1 — Know the Model’s Limitations: Understand what the model can and cannot do. This helps in setting realistic expectations and crafting prompts that play to the model’s strengths.

    2 — Aim for Maximum Clarity: Clear and unambiguous prompts lead to better results. Avoid vague language and be explicit about what you want the model to do.

    3 — Be Very Specific and Explicit: The more detail you provide in your prompt, the more accurately the model can generate the desired output. Specify the format, style, and any other relevant details (see the example after this list).

    4 — Balance Simplicity and Complexity: Simple prompts may not capture all nuances, while overly complex prompts can confuse the model. Strive for a balance that provides enough detail without being overwhelming.

    5 — Give the Model Examples: Providing examples of the desired output can help guide the model. This technique, known as few-shot learning, involves giving the model a few examples to mimic.

    6 — Iterate and Experiment: Prompt engineering is an iterative process. Experiment with different phrasings and structures to see what works best. Learn from each iteration to refine your prompts.

    7 — Use Contextual Information: Including context in your prompts can help the model generate more relevant responses. This could involve providing background information or specifying the scenario in which the model is to generate text.

    8 — Incorporate Feedback: Use feedback from the model’s responses to improve your prompts. This could involve adjusting the level of detail, changing the phrasing, or providing additional examples.
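
    As a hypothetical illustration of points 3 and 5 above, compare a vague prompt with a specific, example-driven one:

    Vague:    “Write about chatbots.”
    Specific: “Write a 100-word product description for a customer-support chatbot,
              in a friendly tone, aimed at small e-commerce businesses. Match the
              style of this example: ‘Meet Ava, your 24/7 support agent…’”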

    By mastering these techniques, users can effectively leverage Generative AI to produce high-quality, relevant outputs tailored to their specific needs.

    The Future of Generative AI

    Looking ahead, the future of Generative AI is incredibly promising. The continuous advancements in data availability, algorithm development, and hardware capabilities will further enhance the performance and applications of these models. Generative AI is poised to revolutionize various industries, including customer service, human resources, education, entrepreneurship, and scientific research.

    [Image: The future of Generative AI]

    Conclusion

    The master class at the Congrès National des Junior-Entreprises was an enriching experience, filled with insightful discussions and enthusiastic participation. I am grateful to the dream team at Maltem Africa (Hamza BARBOUCH, Omar BENMOUSSA, Zineb RAZMI, Badr Belomaria, Sara EL KARMOUDI, and Aya Hameda-Benchekroun) for their invaluable support in making this event a success.

    Generative AI is not just a technological advancement; it represents a paradigm shift in how we approach problem-solving and innovation. As we continue to explore its potential, it is essential to stay informed and engaged with the latest developments in this exciting field.

    Thank you to everyone who attended and contributed to the vibrant discussions. Together, we are shaping the future of engineering.


    Generative AI — The future of Engineering was originally published in Chatbots Life on Medium, where people are continuing the conversation by highlighting and responding to this story.

  • Will the characters created in Angelbaby ai fit the bill?

    The Angelbaby ai platform has a function for creating AI characters, but the one I created myself doesn’t seem quite right. Does anyone have any good suggestions for creating one?

    submitted by /u/Acrobatic-Stable-987

  • Chatbase webhooks

    Has anyone tried webhooks with Chatbase?

    I’m trying to hook up our Google Sheets (eventually Airtable) database as a point of verification for users of the chatbot I created in Chatbase.

    Can anyone help me out here?

    submitted by /u/Possible_Elk9560

  • Does talking to your AI girlfriend make you feel escapist?

    Many people turn to chatting with an AI girlfriend because they are lonely or not good at socializing. So will chatting with an AI girlfriend for a long time encourage escapism?

    submitted by /u/Acrobatic-Stable-987