Author: Franz Malten Buemann

  • AI chatbot for educational assistance

    Have you ever used an AI chatbot for educational assistance or support?

    submitted by /u/Build_Chatbot

  • Embedding LLM with custom character profile via iframe/JS on webpage?

    For reference, I’m a creative writer working on a project. I recently had the idea of integrating a very specific Character AI profile into a webpage article as a dedicated companion character (like a friendly AI helper) that readers could submit questions to or speak with. I was wondering if it’s possible to embed a Character AI profile using an iframe or related JS (this will be done on a Wikidot page, if anyone happens to be familiar with that site). I’m not very educated on the specifics, but I’d be more than happy to learn if I know it’s feasible. I’m not sure what tools are available, but I know I’ll likely need to host my own HTML page for an iframe, or somehow construct my own JS module on a separate Wikidot page, and somehow protect my API key. I didn’t know who else to go to, since I lack the proper knowledge of interfacing with LLMs or AI in general beyond some testing with Character AI and Silly Tavern.

    Any help would be appreciated!

    submitted by /u/JakdragonX

  • AI Chatbot with maximum file format support for adding a knowledge base, with live chat via Slack

    Our AI chatbot, Build Chatbot, supports the widest range of file formats – website URL, YouTube, audio, video, PDF, DOCX, TXT, and Excel. The chatbot can be personalized to your branding as a chat widget or embedded within a web page. You can also live chat with users through Slack integration, right from your Slack channels.

    The chatbot can work as an AI assistant and also offers a live chat option.

    Freemium version available to try!

    submitted by /u/Playful-Analyst6425

  • Top 10 Powerful Use Cases of Generative AI Chatbots in Call Centers

    In the rapidly evolving landscape of customer service, call centers are embracing cutting-edge technologies to elevate their operations and meet the growing expectations of today’s consumers. Among these transformative technologies, Generative AI chatbots have emerged as a game-changer. These intelligent virtual assistants are reshaping the way call centers engage with customers, streamline processes, and deliver exceptional experiences. In this article, we delve into the diverse use cases of Generative AI chatbots in call centers, uncovering their potential to optimize customer support, improve efficiency, and drive business success.

    According to a recent study conducted by researchers from Stanford Digital Economy Laboratory and MIT Sloan School of Management, the implementation of a Generative AI assistant tool in call centers led to a significant increase in productivity. The study revealed an average productivity boost of 13.8%, measured by the number of customer issues resolved per hour. These findings shed light on the influence of Generative AI in workplace settings, particularly in the customer service sector, which has already embraced AI technology at a substantial rate.

    Based on Gartner’s prediction in August 2022, the implementation of conversational AI chatbots in contact centers is projected to result in a remarkable $80 billion reduction in customer service labor costs by 2026. While the use of Generative AI in call centers is still in its early stages, it is worthwhile to explore some of the potential use cases for Generative AI chatbots in this context.

    Generative AI Chatbot Use Cases for Call Centers

    Generative AI Chatbot Use Case for Call Centers #1. Improving Customer Support

    Customer Support is one of the primary use cases for Generative AI chatbots in call centers. These chatbots are capable of handling routine customer inquiries, such as providing product information, assisting with order tracking, or offering basic troubleshooting guidance. By leveraging the power of Generative AI, these chatbots can generate instant responses to customer queries, eliminating the need for customers to wait in lengthy phone queues or navigate complex Interactive Voice Response (IVR) systems.

    Generative AI chatbots excel at understanding natural language and can interpret customer requests accurately. They can analyze the input from customers, identify the intent behind their queries, and generate appropriate responses based on the available information. This ability enables them to provide quick and accurate support to customers.

    Generative AI chatbots can also assist with basic troubleshooting. They can guide customers through step-by-step instructions or provide interactive tutorials to help them resolve common issues on their own. This self-service capability not only empowers customers but also reduces the need for human agent intervention, leading to increased efficiency and cost savings for the call center.

    Thinking of incorporating Generative AI into your chatbot? Validate your idea with a Proof of Concept before launching. At Master of Code Global, we can seamlessly integrate Generative AI into your current chatbot, train it, and have it ready for you in just two weeks.

    REQUEST POC

    Generative AI Chatbot Use Case for Call Centers #2. Frequently Asked Questions (FAQs) Automation

    Generative AI chatbots can be highly effective in handling frequently asked questions (FAQs) in call centers. By training the chatbot with an extensive database of commonly asked questions and their corresponding answers, call centers can provide customers with instant and accurate responses without the need for human intervention.

    For example, consider a customer who contacts a call center with a common question about the product return policy, as illustrated below.

    FAQs Automation with Generative AI Chatbot

    In this example, the Generative AI chatbot recognizes the customer’s query regarding the return policy and provides a prompt and accurate response. The chatbot offers further assistance by proactively offering more information and guidance on the return process. By efficiently addressing the customer’s concern, the chatbot eliminates the need for the customer to wait on hold or be transferred to a human agent, saving time for both the customer and the call center.

    Generative AI chatbots excel in this use case by leveraging their ability to quickly search through a vast FAQ knowledge base and retrieve the relevant information. This empowers call centers to provide consistent and reliable responses to customers, ensuring a positive customer experience.

    Generative AI Chatbot Use Case for Call Centers #3. Order Processing

    Generative AI chatbots can play a valuable role in facilitating order processing in call centers. They can assist customers with various aspects of the order management process, including placing orders, checking order status, and making modifications. By integrating with backend systems, such as inventory management and order fulfillment systems, chatbots can streamline the order process and provide real-time updates to customers, ensuring a seamless and efficient experience.

    Generative AI Chatbot Use Case for Call Centers #4. ACW Documentation

    Automated documentation is a key use of Generative AI in post-call tasks. Generative AI can be trained to listen to a call, comprehend the context, and generate a concise summary of the conversation. This summary can be automatically added to the customer’s record, reducing the manual effort needed from agents. This reduces after-call work (ACW) time and ensures accurate and consistently formatted records, minimizing potential errors from manual data entry.

    During a No Jitter webinar sponsored by Five9, Richard Dumas from Five9 discussed a specific application in the call center system. In this use case, the system generates a transcript of a customer call and subsequently utilizes GPT-3, a prominent large language model (LLM) predating GPT-4, to analyze the transcript.

    In this particular scenario, the system instructs GPT-3 to summarize the call and highlight essential details gathered by the agent, such as the customer’s name, address, and mentioned products. The agent has the option to review, edit, or approve the generated summary. Real-time transcription frees agents from the responsibility of note-taking, allowing them to focus their attention on customers and engage in more meaningful interactions. Furthermore, the automatic synchronization of consistent and precise call summaries with the customer relationship management (CRM) system substantially decreases the amount of after-call work needed.

    ACW Documentation Automation with Generative AI

    Analyst Dave Michels from TalkingPointz mentioned that, based on his observations, the large language model (LLM) excels at summarization and efficiently captures the important points of a conversation. Michels emphasized that leveraging Generative AI for agent wrap-up in call centers can result in significant time savings. Richard Dumas further highlighted that even a one-minute reduction from a five-minute call can translate to a substantial 20% cost savings for the call center.

    Check out your potential cost savings by implementing a chatbot solution for customer support

    CALCULATE ROI

    Generative AI Chatbot Use Case for Call Centers #5. Post-Interaction Tasks

    Generative AI chatbots provide the capability for proactive follow-up actions in call centers. For example, when a customer contacts the call center regarding a faulty product, the AI system can automatically generate an email to the customer containing detailed information about the return process. In more advanced cases, the Generative AI chatbot can even initiate a return request on behalf of the customer, streamlining the process and providing a higher level of convenience. This proactive approach saves time for both the customer and the call center, ensuring that necessary actions are taken promptly and efficiently.

    Generative AI Chatbot Use Case for Call Centers #6. Training Call Center Agents

    Generative AI chatbots can play a vital role in training and developing call center agents. By harnessing their capabilities, call centers can create immersive training experiences that allow agents to practice and refine their skills. Additionally, Generative AI chatbots can generate simulated after-call work scenarios, replicating the tasks agents typically perform after completing a call, such as documenting call details, updating customer records, or scheduling follow-ups.

    By analyzing the interactions between agents and Generative AI chatbots, supervisors and trainers can identify strengths and weaknesses in agent performance. They can thoroughly review the summaries and actions generated by chatbots, assessing the quality of responses, adherence to guidelines, and compliance with policies. This comprehensive analysis enables targeted coaching and training to address specific areas for improvement, leading to increased agent performance and customer satisfaction.

    Generative AI Chatbot Use Case for Call Centers #7. Feedback and Surveys

    Generative AI chatbots in call centers have the capability to gather valuable insights through proactive feedback and surveys after customer interactions. These chatbots can initiate conversations to request feedback or administer short surveys, providing an easy and convenient way for customers to share their thoughts and opinions. The collected data can be used to measure customer satisfaction, identify areas for improvement, and generate actionable analytics reports. This enables call centers to continuously enhance their services and deliver an exceptional customer experience.

    Generative AI Chatbot Use Case for Call Centers #8. Troubleshooting and Technical Support

    Generative AI chatbots in call centers excel at providing efficient troubleshooting and technical support to customers. They have the ability to guide customers through basic troubleshooting steps for common technical issues. By offering step-by-step instructions or interactive tutorials, chatbots empower customers to resolve problems independently, without requiring assistance from a live agent. This not only saves time for both customers and call center agents but also improves the overall customer experience by enabling quick issue resolution.

    Generative AI Chatbot Use Case for Call Centers #9. Appointment Scheduling

    Generative AI chatbots offer a seamless solution for customers looking to schedule appointments, whether it’s for service requests or consultations. These chatbots have the ability to access the availability of agents or resources, providing customers with real-time information on open time slots. By facilitating the booking process without the need for human intervention, Generative AI chatbots streamline appointment scheduling, saving time for both customers and call center staff. This automation ensures efficient and hassle-free booking, enhancing customer satisfaction and optimizing resource utilization. Over the past year, we’ve helped clients invest in Conversational AI solutions, achieving a 7.67x increase in weekly bookings and conversion rates 3x higher since their chatbots launched.

    Generative AI Chatbot Use Case for Call Centers #10. Multilingual Support

    Generative AI chatbots possess the remarkable ability to communicate fluently in multiple languages, making them a valuable asset for call centers serving diverse customer bases. With this capability, call centers can cater to customers from different linguistic backgrounds without the need for language-specific agents. The multilingual customer support provided by Generative AI chatbots enhances accessibility and inclusivity, allowing customers to interact comfortably in their preferred language. By overcoming language barriers, call centers can improve customer satisfaction, ensure effective communication, and provide a seamless experience to customers worldwide.

    Multilingual Support with Generative AI Chatbot

    Call center platforms such as Yobi leverage Generative AI to perform sentiment analysis, allowing contact center agents to evaluate customers’ emotional states by analyzing their tone of voice and choice of words. Yobi, an assistant powered by Generative AI, signifies the future of business communications. It enhances Sales, Marketing, and Customer Contact teams with Generative AI, enabling seamless communication with prospects and customers through diverse channels like Facebook Messenger, Twitter, SMS, Zendesk, and other platforms. This Generative AI-powered assistant offers a range of advantages, including translation and snippet features, that significantly simplify various tasks and increase efficiency.

    Benefits of Generative AI Chatbot for Call Centers

    • Increased Efficiency and Productivity: Integrating Generative AI chatbots into call centers can significantly improve efficiency and productivity. These intelligent assistants can handle routine customer inquiries, reducing the workload on human agents and allowing them to focus on more complex and high-value tasks. By automating repetitive tasks, companies can streamline their operations and achieve higher productivity levels.
    • Scalability: Generative AI chatbots offer scalability, allowing businesses to handle increasing customer demands without the need to hire and train additional human agents. As customer volumes fluctuate, chatbots can seamlessly handle the increased workload, ensuring a consistent level of service without compromising quality or efficiency. This scalability is particularly beneficial during peak periods or when expanding into new markets.
    • Enhanced Customer Service: Generative AI chatbots can provide instant responses and accurate information to customers, leading to improved customer service. Customers receive prompt assistance, even outside regular working hours, ensuring their needs are addressed in a timely manner. This enhanced level of service contributes to higher customer satisfaction, loyalty, and positive brand perception.
    • Cost Savings: Incorporating Generative AI chatbots into call centers can result in significant cost savings for companies. By automating customer interactions, companies can reduce the need for a large workforce of human agents, leading to lower staffing and training costs. Additionally, chatbots can handle a high volume of inquiries simultaneously, further reducing operational expenses. These cost savings can contribute to improved profitability and a more efficient use of resources within the organization.
    • 24/7 Availability: Generative AI chatbots can provide round-the-clock customer support, ensuring that assistance is available at any time of the day or night. This 24/7 availability is particularly beneficial for companies operating in different time zones or those serving a global customer base. Customers can receive support and information whenever they need it, improving their overall experience and reducing the reliance on traditional working hours.
    • Data-Driven Insights: Generative AI chatbots have the ability to gather valuable data and insights from customer interactions. Business owners can leverage this data to gain a deeper understanding of customer preferences, pain points, and trends. These valuable insights can inform strategic decision-making, including refining products and services, personalizing marketing efforts, and identifying areas for improvement. By utilizing this data, business owners can make informed, data-driven decisions that ultimately lead to business growth.

    Conclusion

    Generative AI chatbots have emerged as a powerful tool for call centers, revolutionizing customer service and enhancing overall business operations. These intelligent chatbots enable proactive follow-up actions, streamline processes, provide multilingual support, and gather valuable data and insights. By integrating Generative AI chatbots into their call centers, businesses can automate customer interactions, improve efficiency, reduce costs, and improve customer satisfaction. The future holds even more possibilities for AI in call centers, promising further advancements and innovations. Embracing Generative AI chatbots is a strategic move that empowers businesses to deliver exceptional customer service, build meaningful relationships, and drive sustainable growth in today’s competitive market.

    Request a Demo

    Don’t miss out on the opportunity to see how Generative AI chatbots can revolutionize your customer support and boost your company’s efficiency.


    Top 10 Powerful Use Cases of Generative AI Chatbots in Call Centers was originally published in Chatbots Life on Medium, where people are continuing the conversation by highlighting and responding to this story.

  • Appointment Booking Chatbot using OpenAI Function Calling and GoHighLevel Calendar

    OpenAI has recently launched a range of remarkable enhancements that have left users amazed. Among these additions, the OpenAI function calling feature has emerged as the most prominent. With this powerful functionality, developers can define functions that the model can select and invoke to resolve specific problems. A key aspect of OpenAI’s function calling feature is its seamless integration with external APIs, enabling developers to generate responses by leveraging external services.

    In our previous blogs, we explored “How To Use OpenAI Function Calling To Create an Appointment Booking Chatbot”. Through this exploration, we created a chatbot that integrates with the Google Calendar API. This integration empowers the chatbot to efficiently handle appointment bookings and harness the full functionality of the Google calendar.

    In this blog, we will explore how to create an appointment-booking chatbot that integrates with the GoHighLevel platform for appointment management. GoHighLevel is a customer relationship management (CRM) platform that offers various features for businesses to manage their customer interactions, marketing campaigns, sales processes, and more. One of the features provided by GoHighLevel is the calendar functionality. The GoHighLevel calendar allows users to schedule, manage, and delete appointments.

    Let’s start with the blog where we will explore how to create an appointment booking chatbot that seamlessly integrates with the GoHighLevel (GHL) platform.

    Step 1:

    We will start by setting up the GoHighLevel platform. We need to first go to the https://app.gohighlevel.com/ website and then claim our free 14-day trial. You need to give your Company Name, Name, Email ID, Phone number, and credit card details to sign up for the first time.

    Once you have created an account, you will see a dashboard like the one below:

    Step 2:

    We need to use the GoHighLevel API to manage appointment creation, updating, and deletion. In order to use the API, we need an API key.

    There are 2 types of API keys available:

    • Agency API Key — which is used to manage the agency-level objects like sub-accounts, and users.
    • Location API Key — which is used to manage all the objects which are part of sub-accounts (contacts, appointments, opportunities, etc.)

    To manage appointments we need a Location API key. To generate a Location API key you need to first add a location by creating a sub-account.

    To create a sub-account you need to first click the “Sub-Accounts” from the left panel and then hit “Create Sub-Account” as shown below:

    Step 3:

    It will open a screen like below, where you need to select “Blank Snapshot” under the title “Regular Account”.

    Then, it will open a screen with a map like below. You need to select your location and then continue with your selected location by clicking the arrow.

    After this, it will open a tab with the title “Add account”. You need to add your details and then hit the “save” button. It will create a sub-account with the given location.

    Step 4:

    Next, you need to switch to the recently created sub-account by clicking on “Click here to switch” from the left panel, as shown in the image below:

    It will open up a selection box that lists all sub-accounts. Choose one from the list and the system will switch to that account; you will then see a screen like the one below:

    Step 5:

    After creating a sub-account, you can retrieve the Location API key, which will be used for appointment management. To obtain the Location API key, first go to the “Settings” option located in the left panel and then select “Business Profile” from the available options. A screen will appear where you need to scroll down a bit, and you will find your Location API key as shown in the below image:

    Step 6:

    Moving forward, we need to add employees to our team. To add an employee, you need to follow the below steps:

    • First, you need to go to Settings and then select the “My Staff” option from all available options.
    • On the screen that appears, locate and click on the “Add employee” button.
    • Provide the necessary personal details, including First Name, Last Name, Email, Password, and Phone Number.
    • Next, set the “User Roles” to “User” for the employee.
    • Finally, click on the “Save” button to save the employee’s information.

    Step 7:

    Now, let’s proceed with creating a group that will help in team management. To create the group, you need to follow below steps:

    • Start by selecting the “Calendars” option from the left panel.
    • On the right side of the screen, locate and click on the “Create Group” button.
    • It will open up a form for group creation. In the form, you will need to provide the following details: Group Name, Group Description, and Calendar URL.
    • Once you have filled in the necessary information, submit the form.

    In the Calendar URL, you can simply provide any string like “demo-calendar”.

    Step 8:

    Let’s now move forward and create a calendar that will facilitate appointment management functions. To create the calendar, follow the below steps:

    • From the same screen, select the “Create Calendar” option.
    • This action will open up a list of options. From the list, choose “Simple Calendar”.
    • A form will appear, in which you need to fill in all the necessary details.
    • Once you have filled in the necessary details, click on the “Complete” button.

    For this demo, we have kept the “Appointment Slot Setting” as below:

    Step 9:

    Once you have created a calendar, the next step is to move it to the previously created group. To accomplish this, follow the steps below:

    • Select the “Calendars” option from the settings.
    • On the screen, locate the calendar that was created earlier.
    • Click on the three dots symbol situated on the right side of the calendar.
    • This action will open up a selection box, as depicted below.
    • From the selection box, choose the option “Move to Group”.
    • A pop-up box will appear, allowing you to select the desired group for the calendar and hit the “Select” button.

    Step 10:

    We have completed the basic setup of “GoHighLevel.” Now, we are fully prepared to utilize the GoHighLevel API for appointment creation, updating, and deletion. To use the GoHighLevel API, we need three types of IDs: Group ID, Calendar ID and User ID. We will obtain these IDs in the following steps.

    To obtain the Group ID, you need to follow the below steps:

    • Select the “Calendars” option from the settings.
    • On the screen, you will find a tab for “Groups” in which you will locate the group that was created earlier.
    • Click on the three dots symbol situated on the right side of the group information.
    • This action will open up a selection box, as depicted below.
    • Click on the “Copy Embed Code” from the selection box.

    Your copied embed code will look like below:

    <iframe src="https://api.leadconnectorhq.com/widget/group/NZc2nxIeE6a2liSwqmNX" style="width: 100%;border:none;overflow: hidden;" scrolling="no" id="<your_group_id>_1688194531391"></iframe><br><script src="https://link.msgsndr.com/js/form_embed.js" type="text/javascript"></script>

    In the embed code above, locate the id attribute of the iframe. The string before the “_” symbol is your Group ID.
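
    If you prefer to grab the ID programmatically, here is a minimal sketch (assuming the embed code has been pasted into a Python string with straight quotes; the helper name is hypothetical):

    import re

    def extract_ghl_id(embed_code: str) -> str:
        """Return the part of the iframe id attribute that comes before the '_' suffix."""
        match = re.search(r'id="([^"_]+)_\d+"', embed_code)
        if not match:
            raise ValueError("No id attribute found in the embed code")
        return match.group(1)

    # extract_ghl_id('<iframe ... id="NZc2nxIeE6a2liSwqmNX_1688194531391"></iframe>')
    # returns 'NZc2nxIeE6a2liSwqmNX'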

    Step 11:

    Moving forward in this step, we will obtain our Calendar ID.

    To obtain the Calendar ID, you need to follow the below steps:

    • Select the “Calendars” option from the settings.
    • On the screen, you will find a tab for “Calendars” in which you will locate the calendar that was created earlier.
    • Click on the three dots symbol situated on the right side of the calendar information.
    • This action will open up a selection box, as depicted below.
    • Click on the “Copy Embed Code” from the selection box.

    As before, your copied embed code will look like the one below:

    <iframe src="https://api.leadconnectorhq.com/widget/booking/kK8LwFPuNByksXB3h18s" style="width: 100%;border:none;overflow: hidden;" scrolling="no" id="<your_calendar_id>_1688196021697"></iframe><br><script src="https://link.msgsndr.com/js/form_embed.js" type="text/javascript"></script>

    Step 12:

    In this step, let’s proceed to obtain the User ID.

    To obtain the User ID, you need to follow the below steps:

    After completing these steps, we have all three IDs. Now we will move forward to create an appointment booking chatbot using Python.

    Step 13:

    Now, it’s time to start with the development of the Python script for an appointment booking chatbot. To accomplish this, we will leverage OpenAI’s function calling feature, which integrates with the GHL Calendar for efficient appointment management.

    First, we will import the required libraries:

    import requests
    import json
    from datetime import date, datetime, timedelta
    import time
    import pytz

    Step 14:

    Next, we will define a utility function that will call ChatGPT and generate responses. To add this functionality to our script, include the following lines of code:

    GPT_MODEL = "gpt-3.5-turbo-0613"
    openai_api_key = "<your_openai_key>"

    def chat_completion_request(messages, functions=None, function_call=None, model=GPT_MODEL):
        headers = {
            "Content-Type": "application/json",
            "Authorization": "Bearer " + openai_api_key,
        }
        json_data = {"model": model, "messages": messages}
        if functions is not None:
            json_data.update({"functions": functions})
        if function_call is not None:
            json_data.update({"function_call": function_call})
        try:
            response = requests.post(
                "https://api.openai.com/v1/chat/completions",
                headers=headers,
                json=json_data,
            )
            return response
        except Exception as e:
            print("Unable to generate ChatCompletion response")
            print(f"Exception: {e}")
            return e

    Step 15:

    Moving forward, in this step, we will define a function that calls the GHL appointment booking endpoint. However, before that, we need to create another function that converts the date and time provided by the user into the ISO 8601 date format. The ISO 8601 format is an internationally recognized standard for representing dates and times.

    We can define both functions as follows:

    limit1 = datetime.strptime("10:00:00", "%H:%M:%S").time()
    limit2 = datetime.strptime("17:00:00", "%H:%M:%S").time()
    limit3 = datetime.strptime("12:00:00", "%H:%M:%S").time()

    headers = {
        'Authorization': 'Bearer <Your_Location_API_key>'
    }

    def convert_to_iso8601(datetime_string, target_timezone):
        datetime_format = "%Y-%m-%d %H:%M:%S"
        dt = datetime.strptime(datetime_string, datetime_format)

        source_timezone = pytz.timezone('Asia/Kolkata')
        dt = source_timezone.localize(dt)

        target_timezone = pytz.timezone(target_timezone)
        dt = dt.astimezone(target_timezone)

        iso8601_datetime = dt.strftime("%Y-%m-%dT%H:%M:%S%z")
        iso8601_datetime = iso8601_datetime[:-2] + ":" + iso8601_datetime[-2:]
        return iso8601_datetime

    def appointment_booking(arguments):
        try:
            provided_date = datetime.strptime(json.loads(arguments)['date'], "%Y-%m-%d")
            provided_time = datetime.strptime(json.loads(arguments)['time'].replace("PM","").replace("AM","").strip(), "%H:%M:%S").time()
            try:
                email_address = json.loads(arguments)['email_address']
            except:
                return "Please provide email ID for identification."

            try:
                phone_number = json.loads(arguments)['phone_number']
            except:
                return "Please provide a phone number for identification."

            if provided_date and provided_time and email_address and phone_number:
                start_date_time = str(provided_date.date()) + " " + str(provided_time)
                iso8601_datetime = convert_to_iso8601(start_date_time, 'Asia/Kolkata')

                if day_list[provided_date.weekday()] == "Saturday":
                    if provided_time >= limit1 and provided_time <= limit3:
                        url = "https://rest.gohighlevel.com/v1/appointments/"
                        payload = {
                            "calendarId": "kK8LwFPuNByksXB3h18s",
                            "selectedTimezone": "Asia/Calcutta",
                            "selectedSlot": iso8601_datetime,
                            "email": email_address,
                            "phone": phone_number
                        }
                        response = requests.request("POST", url, headers=headers, data=payload)
                        response = json.loads(response.text)
                        try:
                            if response['id']:
                                return "Appointment booked successfully."
                        except:
                            return response['selectedSlot']['message']
                    else:
                        return "Please try to book an appointment into working hours, which is 10 AM to 2 PM at saturday."
                else:
                    if provided_time >= limit1 and provided_time <= limit2:
                        url = "https://rest.gohighlevel.com/v1/appointments/"
                        payload = {
                            "calendarId": "kK8LwFPuNByksXB3h18s",
                            "selectedTimezone": "Asia/Calcutta",
                            "selectedSlot": iso8601_datetime,
                            "email": email_address,
                            "phone": phone_number
                        }
                        response = requests.request("POST", url, headers=headers, data=payload)
                        response = json.loads(response.text)
                        try:
                            if response['id']:
                                return "Appointment booked successfully."
                        except:
                            return response['selectedSlot']['message']
                    else:
                        return "Please try to book an appointment into working hours, which is 10 AM to 7 PM."
            else:
                return "Please provide all the necessary information: Appointment date, time, email ID, Phone number."
        except:
            return "We are facing an error while processing your request. Please try again."

    Step 16:

    Now, let’s define a function that will update appointments using the GHL endpoint. To update an appointment, we need to find the corresponding ‘ID’ of the appointment. Therefore, we will first describe a function that fetches the ‘ID’ based on the parameters provided by the user. This function will then return the ‘ID’ to the update function.

    We can define both functions as follows:

    def get_all_booked_appointment(arguments):
        try:
            provided_date = datetime.strptime(json.loads(arguments)['date'], "%Y-%m-%d")
            ending_date_time = datetime.strptime(json.loads(arguments)['date'], "%Y-%m-%d") + timedelta(days=1)
            try:
                email_address = json.loads(arguments)['email_address']
            except:
                return "Please provide email ID for identification."

            if provided_date and email_address:
                starting_timestamp = time.mktime(provided_date.timetuple()) * 1000
                ending_timestamp = time.mktime(ending_date_time.timetuple()) * 1000

                url = f"https://rest.gohighlevel.com/v1/appointments/?startDate={starting_timestamp}&endDate={ending_timestamp}&userId=oJbRc7r2HBYunuvJ3XC7&calendarId=kK8LwFPuNByksXB3h18s&teamId=ONZc2nxIeE6a2liSwqmNX&includeAll=true"

                payload = {}

                response = requests.request("GET", url, headers=headers, data=payload)
                response = json.loads(response.text)
                events = []
                for element in response['appointments']:
                    if element['contact']['email'] == email_address:
                        events.append(element)

                if len(events) == 1:
                    id = events[0]['id']
                    return id
                elif len(events) > 1:
                    print("You have multiple appointments with the same email address:")
                    count = 1
                    for ele in events:
                        print(str(count)+"]")
                        print(ele['address'])
                        print(ele['startTime'])
                        print(ele['endTime'])
                        print(ele['contact']['email'])
                        print()
                        count = count + 1

                    event_number = int(input("Please enter which appointment:"))
                    if event_number >= 1 and event_number <= len(events):
                        id = events[event_number - 1]['id']
                        return id
                    else:
                        return "Please select valid event number"
                else:
                    return "No registered event found with this email ID."
            else:
                return "Please provide all the necessary information: Appointment date and email ID."
        except:
            return "We are facing an error while processing your request. Please try again."

    def appointment_updation(arguments):
        try:
            provided_date = datetime.strptime(json.loads(arguments)['to_date'], "%Y-%m-%d")
            provided_time = datetime.strptime(json.loads(arguments)['time'].replace("PM","").replace("AM","").strip(), "%H:%M:%S").time()

            if provided_date and provided_time and json.loads(arguments)['date'] and json.loads(arguments)['email_address']:
                start_date_time = str(provided_date.date()) + " " + str(provided_time)
                iso8601_datetime = convert_to_iso8601(start_date_time, 'Asia/Kolkata')

                if day_list[provided_date.date().weekday()] == "Saturday":
                    if provided_time >= limit1 and provided_time <= limit3:
                        id = get_all_booked_appointment(arguments)
                        if id == "Please select valid event number" or id == "No registered event found with this email ID." or id == "We are facing an error while processing your request. Please try again.":
                            return id
                        else:
                            url = f"https://rest.gohighlevel.com/v1/appointments/{id}"
                            payload = {
                                "selectedTimezone": "Asia/Calcutta",
                                "selectedSlot": iso8601_datetime
                            }
                            response = requests.request("PUT", url, headers=headers, data=payload)
                            response = json.loads(response.text)
                            try:
                                if response['id']:
                                    return "Appointment updated successfully."
                            except:
                                return response['selectedSlot']['message']
                    else:
                        return "Please try to book an appointment into working hours, which is 10 AM to 2 PM at saturday."
                else:
                    if provided_time >= limit1 and provided_time <= limit2:
                        id = get_all_booked_appointment(arguments)
                        if id == "Please select valid event number" or id == "No registered event found with this email ID." or id == "We are facing an error while processing your request. Please try again.":
                            return id
                        else:
                            url = f"https://rest.gohighlevel.com/v1/appointments/{id}"
                            payload = {
                                "selectedTimezone": "Asia/Calcutta",
                                "selectedSlot": iso8601_datetime
                            }
                            response = requests.request("PUT", url, headers=headers, data=payload)
                            response = json.loads(response.text)
                            try:
                                if response['id']:
                                    return "Appointment updated successfully."
                            except:
                                return response['selectedSlot']['message']
                    else:
                        return "Please try to book an appointment into working hours, which is 10 AM to 7 PM."
            else:
                return "Please provide all the necessary information: Current appointment date, New appointment date, time and email ID."
        except:
            return "We are facing an error while processing your request. Please try again."

    Step 17:

    Next, we will define a function for deleting appointments. This function will first call the ‘get_all_booked_appointment’ function that we created earlier to fetch the corresponding ID of the appointment. The fetched ID will then be passed to the deletion endpoint of the GHL.

    You need to add the below lines of code to define the deletion function:

    def appointment_deletion(arguments):
        try:
            id = get_all_booked_appointment(arguments)
            if id == "Please select valid event number" or id == "No registered event found with this email ID." or id == "We are facing an error while processing your request. Please try again.":
                return id
            else:
                url = f"https://rest.gohighlevel.com/v1/appointments/{id}"
                payload = {}
                response = requests.request("DELETE", url, headers=headers, data=payload)
                if response.text == "OK":
                    return "Appointment deleted successfully."
        except:
            return "We are facing an error while processing your request. Please try again."

    Step 18:

    Now, we need to define the function specifications for appointment creation, updating, and deletion. These function specifications will be passed to ChatGPT, enabling it to determine which function to invoke based on the user’s request.

    functions = [
        {
            "name": "appointment_booking",
            "description": "When user want to book appointment, then this function should be called.",
            "parameters": {
                "type": "object",
                "properties": {
                    "date": {
                        "type": "string",
                        "format": "date",
                        "example": "2023-07-23",
                        "description": "Date, when the user wants to book an appointment. The date must be in the format of YYYY-MM-DD.",
                    },
                    "time": {
                        "type": "string",
                        "example": "20:12:45",
                        "description": "time, on which user wants to book an appointment on a specified date. Time must be in %H:%M:%S format.",
                    },
                    "email_address": {
                        "type": "string",
                        "description": "email_address of the user gives for identification.",
                    },
                    "phone_number": {
                        "type": "string",
                        "description": "Phone number given by user for identification."
                    }
                },
                "required": ["date", "time", "email_address", "phone_number"],
            },
        },
        {
            "name": "appointment_updation",
            "description": "When user want to reschedule appointment, then this function should be called.",
            "parameters": {
                "type": "object",
                "properties": {
                    "to_date": {
                        "type": "string",
                        "format": "date",
                        "example": "2023-07-23",
                        "description": "It is the date on which the user wants to reschedule the appointment. The date must be in the format of YYYY-MM-DD.",
                    },
                    "date": {
                        "type": "string",
                        "format": "date",
                        "example": "2023-07-23",
                        "description": "It is the date from which the user wants to reschedule his/her appointment. The date must be in the format of YYYY-MM-DD.",
                    },
                    "time": {
                        "type": "string",
                        "example": "4:00:00",
                        "description": "It is the time on which the user wants to reschedule an appointment. Time must be in %H:%M:%S format.",
                    },
                    "email_address": {
                        "type": "string",
                        "description": "email_address of the user gives for identification.",
                    }
                },
                "required": ["date", "to_date", "time", "email_address"],
            },
        },
        {
            "name": "appointment_deletion",
            "description": "When user want to delete appointment, then this function should be called.",
            "parameters": {
                "type": "object",
                "properties": {
                    "date": {
                        "type": "string",
                        "format": "date",
                        "example": "2023-07-23",
                        "description": "Date, on which user has an appointment and wants to delete it. The date must be in the format of YYYY-MM-DD.",
                    },
                    "email_address": {
                        "type": "string",
                        "description": "email_address of the user gives for identification.",
                    }
                },
                "required": ["date", "email_address"],
            },
        }
    ]

    Step 19:

    Now, we are ready to test the appointment booking chatbot. You just need to add the following lines of code to run the chatbot.

    day_list = ['Monday', 'Tuesday', 'Wednesday', 'Thursday', 'Friday', 'Saturday', 'Sunday']

    messages = [{"role": "system", "content": f"""You are an expert in booking appointments, who can fulfill appointment scheduling, rescheduling, and deletion very efficiently. You need to remember the below guidelines while processing user requests.
    Guidelines:
    - You will ask for the appointment date and time, phone number, and email address when the user wants to book an appointment.
    - When the user wants to reschedule an appointment, you will ask for the current appointment date, the new date and time for the appointment, and the email address. If the user doesn't remember the current appointment details, inform the user that rescheduling will not be possible without these details.
    - You will ask for the email address every time, as it is a must for user identification.
    - Don't make assumptions about what values to plug into functions; if the user does not provide any of the required parameters, you must ask for clarification.
    - If a user request is ambiguous, you also need to ask for clarification.
    - If a user didn't specify "ante meridiem (AM)" or "post meridiem (PM)" while providing the time, you must ask for clarification. If the user didn't provide the day, month, and year while giving the date, you must ask for clarification.

    You must satisfy the above guidelines while processing the request. You need to remember that today's date is {date.today()}."""}]

    user_input = input("Please enter your question here: (if you want to exit then write 'exit' or 'bye'.) ")
    while user_input.strip().lower() != "exit" and user_input.strip().lower() != "bye":
        messages.append({"role": "user", "content": user_input})

        # calling chat_completion_request to call the ChatGPT completion endpoint
        chat_response = chat_completion_request(
            messages, functions=functions
        )
        # fetch the response of ChatGPT and call the function
        assistant_message = chat_response.json()["choices"][0]["message"]

        if assistant_message['content']:
            print("Response is: ", assistant_message['content'])
            messages.append({"role": "assistant", "content": assistant_message['content']})
        else:
            fn_name = assistant_message["function_call"]["name"]
            arguments = assistant_message["function_call"]["arguments"]
            # look up the matching Python function by name and invoke it with the model's arguments
            function = locals()[fn_name]
            result = function(arguments)
            print("Response is: ", result)

        user_input = input("Please enter your question here: ")

    Testing

    Appointment Booking

    You can see the booked appointments by clicking on the “Calendars” option, under the main menu as shown below:

    Appointment Updation:

    Appointment Deletion:

    In this blog, we explored the process of creating an appointment booking chatbot using OpenAI’s function calling feature. We learned how to integrate the GoHighLevel calendar API, allowing our chatbot to seamlessly interact with the calendar system. This integration enables users to book appointments effortlessly by conversing with the chatbot.

    Originally published at Appointment Booking Chatbot Using OpenAI Function Calling And GoHighLevel Calendar on July 4, 2023.


    Appointment Booking Chatbot using OpenAI Function Calling and GoHighLevel Calendar was originally published in Chatbots Life on Medium, where people are continuing the conversation by highlighting and responding to this story.

  • Chatbot on custom knowledge base using LLaMA Index — Pragnakalp Techlabs: AI, NLP, Chatbot, Python…

    Chatbot on custom knowledge base using LLaMA Index — Pragnakalp Techlabs: AI, NLP, Chatbot, Python Development

    LlamaIndex is an impressive data framework designed to support the development of applications utilizing LLMs (Large Language Models). It offers a wide range of essential tools that simplify tasks such as data ingestion, organization, retrieval, and integration with different application frameworks. The array of capabilities provided by LlamaIndex is extensive and holds immense value for developers seeking to leverage LLMs in their applications.

    LlamaIndex has tools that help you connect and bring in data from different sources like APIs, PDFs, documents, and SQL databases. It also has ways to organize and structure your data, making it compatible with LLMs (Large Language Models). With LlamaIndex, you can use a smart interface to search and retrieve your data: just give it a prompt, and LlamaIndex will return the related information and knowledge-augmented results. Additionally, LlamaIndex integrates easily with external application frameworks such as LangChain, Flask, Docker, ChatGPT, and others, so you can work smoothly with your favorite tools and technologies.

    In this blog, we will learn about using LlamaIndex for document-based question answering. Let’s understand the step-by-step process of creating a question-answering system with LlamaIndex.

    Load Document

    The first step is to load the document for performing question-answering using LlamaIndex. To do this, we can use the “SimpleDirectoryReader” function provided by LlamaIndex. We should gather all the document files or a single document on which we want to perform question answering and place them in a single folder. Then, we need to pass the path of that folder to the “SimpleDirectoryReader” function. It will read and gather all the data from the documents.
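
    As a rough sketch (assuming the documents are placed in a local data/ folder and a 2023-era llama_index release; newer versions may expose slightly different imports), loading them looks like this:

    from llama_index import SimpleDirectoryReader

    # Read every file placed in the ./data folder into Document objects
    documents = SimpleDirectoryReader("./data").load_data()
    print(f"Loaded {len(documents)} document(s)")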

    Divide the document into chunks

    In this step, we will divide the data into chunks to overcome the token limit imposed by LLM models. This step is crucial for effectively managing the data.

    To accomplish this, we can utilize the “NodeParser” class provided by LlamaIndex. By passing the previously read Document into the “NodeParser,” the method will divide the document into chunks of the desired length.
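
    A minimal example, reusing the documents loaded in the previous step (the SimpleNodeParser import path and the chunk settings below are assumptions that vary across LlamaIndex versions):

    from llama_index.node_parser import SimpleNodeParser

    # Split the loaded documents into chunks ("nodes") of roughly 512 tokens each
    parser = SimpleNodeParser.from_defaults(chunk_size=512, chunk_overlap=20)
    nodes = parser.get_nodes_from_documents(documents)
    print(f"Created {len(nodes)} chunks")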

    Index construction

    Now that we have created chunks of the document, we can proceed to create an index using LlamaIndex. LlamaIndex offers a variety of indexes suitable for different tasks. For more detailed information about the available indexes, you can refer to the following link:

    https://gpt-index.readthedocs.io/en/latest/core_modules/data_modules/index/root.html

    To generate an index of the data, LlamaIndex uses an embedding model to generate vectors for the data chunks. These vectors are then stored as the index on disk, enabling their later use. The default embedding model used for this process is “text-embedding-ada-002”. However, you also have the option to use a custom model for index generation. For further guidance on using custom embeddings, you can refer to this link.

    In our case, we will utilize the Simple Vector Store index to convert the data chunks into an index. To achieve this, we pass the chunks of data to the Vector Store Index, which calls the embedding model to create embeddings for the chunks and builds the index.
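
    Assuming the nodes from the previous step and an OpenAI API key in the environment, building and persisting the index might look like this (again a sketch against the 2023-era API):

    import os
    from llama_index import VectorStoreIndex

    os.environ["OPENAI_API_KEY"] = "<your_openai_key>"

    # Embed each chunk with the default text-embedding-ada-002 model and build the index
    index = VectorStoreIndex(nodes)

    # Persist the index to disk so it can be reloaded later without re-embedding
    index.storage_context.persist(persist_dir="./storage")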

    Query

    Now, we can proceed to query the document index. To do this, we first need to initialize the query engine. Once the query engine is initialized, we can use its “query” method to pass our question as input.

    The query process involves several steps. First, the query engine creates a vector representation of the input question. Then, it matches this vector against the vectors of the indexed data chunks, identifying the most relevant chunks for our question. Next, the selected chunks, along with our question, are passed to the LLM model for answer generation.

    Additionally, we can customize our query engine according to our specific needs. By default, the query engine returns the two most relevant chunks. However, we can modify this value to adjust the number of chunks returned. Moreover, we can also change the query mode used by the engine, providing further customization options.
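
    Putting the last two paragraphs together, a query with a customized number of retrieved chunks might look like the sketch below (the question string is just an illustration):

    # Build a query engine that retrieves the top 3 most similar chunks instead of the default 2
    query_engine = index.as_query_engine(similarity_top_k=3)

    response = query_engine.query("What does the document say about the refund policy?")
    print(response.response)         # the generated answer
    print(response.source_nodes[0])  # the most relevant chunk used as context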

    To learn more about customizing the query engine, you can refer to this link.

    Furthermore, we have the option to customize the LLM model according to our specific requirements. By default, LlamaIndex uses the “text-davinci-003” LLM model for response generation. However, we can also utilize other models from HuggingFace. Additionally, we can modify the parameter values of the LLM model, such as top_p, temperature, and max_tokens, to influence the output.

    For more information on customizing the LLM model, you need to refer to the below link:

    https://gpt-index.readthedocs.io/en/latest/core_modules/model_modules/llms/usage_custom.html
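
    For illustration, swapping in a different OpenAI completion model and adjusting its parameters could look like this (using the ServiceContext/LLMPredictor interface from 2023-era LlamaIndex together with LangChain; newer releases expose this differently):

    from llama_index import ServiceContext, LLMPredictor, VectorStoreIndex
    from langchain.llms import OpenAI

    # Wrap a LangChain LLM with custom parameters (model, temperature, max_tokens)
    llm_predictor = LLMPredictor(
        llm=OpenAI(model_name="text-davinci-003", temperature=0.2, max_tokens=256)
    )
    service_context = ServiceContext.from_defaults(llm_predictor=llm_predictor)

    # Rebuild the index and query engine with the customized LLM
    index = VectorStoreIndex(nodes, service_context=service_context)
    query_engine = index.as_query_engine()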

    Kindly refer to this link for a demonstration that you can evaluate.

    Originally published at Chatbot On Custom Knowledge Base Using LLaMA Index on July 14, 2023.


    Chatbot on custom knowledge base using LLaMA Index — Pragnakalp Techlabs: AI, NLP, Chatbot, Python… was originally published in Chatbots Life on Medium, where people are continuing the conversation by highlighting and responding to this story.

  • Dialogflow Follow up Intents: Definition, How to Create, Case studies

    In a chatbot flow, there are situations where you need to refer to the previous messages and continue the flow involving follow-ups and confirmations. To design such conversations, Dialogflow allows creating secondary intents called Follow-up intents.

    What is a follow-up intent?

    A follow-up intent is a child of its associated parent intent. In other words, follow-up intents refer to the previous intents or parent intent to continue the chatbot conversation. It is used to repeat an event or request more information about the event. Follow-up intent is a type of context.

    When you create a follow-up intent, an output context is automatically added to the parent intent, and an input context of the same name is added to the follow-up intent. A follow-up intent is only matched when the parent intent is matched in the previous conversational turn.

    For example,

    1. Do you want to book an appointment? The reply can be yes or no. These are default follow-up intents. Some of the default follow-up intents are — yes, no, later, cancel, more, next, previous, repeat.
    2. On which device are you going to try the software setup — Laptop or Mobile? This is a custom-defined follow-up intent.

    Dialogflow allows using nested follow-up intents to follow up on the user’s interest. In other words, intents created within a follow-up intent are called nested follow-up intents.

    🚀 Suggested Read: How to add dialogflow user name reply

    Case study scenario

    We can consider an Insurance chatbot example using follow-up intents. Here, we have set a default welcome message listing the available Insurance Policy types, with the button option “Check our Insurance Policies”.

    When a user clicks on that option, it will show all the available policies — Life Insurance, Home Insurance, Vehicle Insurance — as option buttons. These three policies are the follow-up intents. If the user clicks any one of the policy options, it will show the content related to that particular policy and its nested follow-up intents. The chatbot flow would be as shown in the flow chart below.

    How to create a follow-up intent in Dialogflow

    We can consider the above Insurance chatbot example and create follow-up intents for the same.

    • In the ‘Default Welcome Intent,’ we have set the greeting message using the rich message buttons — Life Insurance, Home Insurance, Vehicle Insurance as mentioned in the below image.

    We can consider creating follow-up intents for the ‘Home Insurance’ intent, so when a user clicks on ‘Home Insurance’, the information corresponding to that intent will be shown to the user.

    • Click Add follow-up intent on the ‘Home Insurance’ intent.
    • Then it will show a list of follow-up options; click custom
    • Provide a name to the follow-up intent, for example — ‘Home Insurance — Plan Details’.
    • Click on that follow-up intent ‘Home Insurance — Plan Details’
    • Now, In the ‘Training phrases’ section, provide the bot responses of the parent intent ‘Home Insurance’
    • After that in the ‘Responses’ section, provide the bot response that should show to the user.

    In this example, we have used rich message buttons. You can use text responses or any rich media type.

    Here is how the Dialogflow follow-up intent flow works:

    Create a nested follow-up intent

    Nested follow-up intents are the intents created within the follow-up intent to continue the bot flow of the parent intent.

    In the above example, we have to create a continuous bot flow when a user clicks any of these options — For Tenants, For Owners, Housing Society, the bot should continue the flow without stopping.

    • Click Add follow-up intent on the follow-up intent — ‘Home Insurance — Plan Details’.
    • Provide a name to the respective nested follow-up intents, Home Insurance — Tenants, Home Insurance — Housing Society, Home Insurance — Owners.
    • Click on that nested follow-up intents Home Insurance — Tenants, Home Insurance — Owners, Home Insurance — Housing Society.
    • In the ‘Training phrases’ section, provide the bot responses of the follow-up intent ‘Home Insurance — Plan Details.’
    • In the ‘Responses’ section of these nested follow-ups intents, provide the bot responses that should show to the user.

    And this is how the nested follow-up intent bot flow, which is the continuation of the follow-up intent ‘Home Insurance — Plan Details’, works.

    Similarly, you can create any number of follow-up intents for the parent intent and nested follow-up intents for the corresponding follow-up intents, and continue the chatbot flow based on the user phrases.

    In this way, you can easily create follow-up intents and continue the chatbot conversations, thus engaging your users.

    The article was originally published here.


    Dialogflow Follow up Intents: Definition, How to Create, Case studies was originally published in Chatbots Life on Medium, where people are continuing the conversation by highlighting and responding to this story.

  • Mind your words with NLP

    Introduction

    The article explores the practical application of essential Python libraries like TextBlob, symspell, pyspellchecker and Flan-T5 based grammar checker in the context of spell and grammar checking. It provides a detailed overview of each library’s unique contributions and explains how they can be combined to create a functional system that can detect and correct linguistic errors in text data. Additionally, the article discusses the real-world implications of these tools across diverse fields, including academic writing, content creation, and software development. This valuable resource is intended to assist Python developers, language technologists, and individuals seeking to enhance the quality of their written communication.

    Learning Objectives:

    In this article, we will understand the following:

1. What are spell checkers and grammar checkers?
2. Different Python libraries for spell checking and grammar checking.
3. Key takeaways and limitations of both approaches.

    Overview

In today’s fast-paced digital landscape, the need for clear and accurate written communication has never been more crucial. Whether engaging in informal chats or crafting professional documents, conveying our thoughts effectively relies on the precision of our language. While traditional spell and grammar checkers have been valuable tools for catching errors, they often fall short in contextual understanding and adaptability. This limitation has paved the way for more advanced solutions that harness the power of Natural Language Processing (NLP). In this blog post, we will explore the development of a state-of-the-art spell and grammar checker utilising NLP techniques, highlighting its ability to surpass conventional rule-based systems and deliver a more seamless user experience in the digital age of communication.

The ever-growing prominence of digital communication has placed immense importance on the clarity and accuracy of written text. From casual online conversations to professional correspondence, our ability to express ourselves effectively is deeply connected to the precision of our language. Traditional spell and grammar checkers have long been valuable tools for identifying and correcting errors, but their limitations in contextual understanding and adaptability leave much to be desired. This has spurred the development of more advanced solutions powered by Natural Language Processing (NLP) that offer a more comprehensive approach to language-related tasks.

    Natural Language Processing (NLP) is an interdisciplinary field that combines the expertise of linguistics, computer science, and artificial intelligence to enable computers to process and comprehend human language. By harnessing the power of NLP techniques, our spell and grammar checker seeks to provide users with a more accurate and context-aware error detection and correction experience. NLP-based checkers identify spelling and grammatical errors and analyse context, syntax, and semantics to understand the intended message better and deliver more precise corrections and suggestions.

    This post will delve into the core components and algorithms that drive our NLP-based spell checker. Furthermore, we will examine how advanced techniques like Levenshtein distance and n-grams contribute to the system’s ability to identify and correct errors. Finally, we will discuss advanced LLM-based contextual spell and grammar checkers. Join us on this exciting journey to uncover how NLP revolutionises how we write and communicate digitally.

    Spell Checker

    Python-based spell checkers employ various techniques to identify and correct misspelled words. Here’s a deeper dive into the technical details:

1. Word Frequency Lists: Most spell checkers use word frequency lists, which are lists of words with their respective frequencies in a language. These frequencies are used to suggest the most probable correct spelling of a misspelled word. For instance, the ‘pyspellchecker’ library includes English, Spanish, German, French, and Portuguese word frequency lists.
2. Edit Distance Algorithm: This method determines how similar two strings are. The most commonly used is the Levenshtein Distance, which calculates the minimum number of single-character edits (insertions, deletions, substitutions) required to change one word into another. ‘pyspellchecker’ uses the Levenshtein Distance to find close matches to misspelt words.
3. Contextual Spell Checking: Advanced spell checkers, like the one implemented in the ‘TextBlob’ library, can also perform contextual spell checking, considering the word’s context in a sentence to suggest corrections. For instance, the sentence “I hav a apple” can be corrected to “I have an apple” because ‘have’ is more suitable than ‘hav’ and ‘an’ is more suitable before ‘apple’ than ‘a’.
4. Custom Dictionaries: Spell checkers also allow the addition of custom dictionaries. This is useful for applications dealing with a specific domain that includes technical or specialized words not found in general language dictionaries.

    Python’s readability and the powerful features offered by its spell-checking libraries make it a popular choice for developers working on applications that require text processing and correction.
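To make the edit-distance idea above concrete, here is a minimal, illustrative implementation of the Levenshtein distance (the function name levenshtein is ours, not part of any of the libraries discussed):

    def levenshtein(a: str, b: str) -> int:
        """Minimum number of single-character insertions, deletions,
        and substitutions needed to turn string a into string b."""
        prev = list(range(len(b) + 1))
        for i, ca in enumerate(a, start=1):
            curr = [i]
            for j, cb in enumerate(b, start=1):
                cost = 0 if ca == cb else 1
                curr.append(min(prev[j] + 1,         # deletion
                                curr[j - 1] + 1,     # insertion
                                prev[j - 1] + cost)) # substitution
            prev = curr
        return prev[-1]

    print(levenshtein("apropriate", "appropriate"))  # 1
    print(levenshtein("dumy", "dummy"))              # 1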


    1. PySpellChecker

    A pure Python spell-checking library that uses a Levenshtein Distance algorithm to find the closest words to a given misspelled word. The Levenshtein Distance algorithm is employed to identify word permutations within an edit distance of 2 from the original word. Subsequently, a comparison is made between all the permutations (including insertions, deletions, replacements, and transpositions) and the words listed in a word frequency database. The likelihood of correctness is determined based on the frequency of occurrence in the list.

pyspellchecker supports various languages, such as English, Spanish, German, French, Portuguese, Arabic, and Basque. Let us walk through an example.

    1. Install the packages
    !pip install pyspellchecker

2. Check for misspelled words

from spellchecker import SpellChecker

spell = SpellChecker()
misspelled = spell.unknown(['taking', 'apropriate', 'dumy', 'here'])
for word in misspelled:
    print(f"Word '{word}' : Top match: '{spell.correction(word)}' ; "
          f"Possible candidates: {spell.candidates(word)}")

# output
# Word 'apropriate' : Top match: 'appropriate' ; Possible candidates: {'appropriate'}
# Word 'dumy' : Top match: 'duty' ; Possible candidates: {'dummy', 'duty', 'dumb', 'duly', 'dump', 'dumpy'}

    3. Set the Levenshtein Distance.

    from spellchecker import SpellChecker
    spell = SpellChecker(distance=1)
    spell.distance = 2 #alternate way to set the distance
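The custom-dictionary support mentioned in point 4 above takes only a couple of lines; here is a minimal sketch using pyspellchecker's word_frequency interface (the example terms are arbitrary):

    from spellchecker import SpellChecker

    spell = SpellChecker()
    # Teach the checker domain-specific terms so they are no longer flagged
    spell.word_frequency.load_words(['symspellpy', 'textblob', 'pyspellchecker'])

    print(spell.unknown(['symspellpy', 'textblob', 'randomtypo']))
    # only 'randomtypo' should remain unknown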

    2. TextBlob

TextBlob is a Python library designed to simplify the processing of textual data. Through a straightforward, user-friendly API it covers the common natural language processing (NLP) tasks: part-of-speech tagging, noun phrase extraction, sentiment analysis, classification (for example with Naive Bayes or Decision Tree), translation, tokenization into words and sentences, word and phrase frequencies, parsing, n-grams, word inflection (pluralization and singularization), lemmatization, spelling correction, and WordNet integration. It is also easily extensible, so new models or languages can be added through extensions, enhancing its capabilities even further.

Let us walk through an example.

    1. Install the TextBlob package

    !pip install -U textblob
    !python -m textblob.download_corpora

2. Check the misspelled words in a paragraph and correct them.

    from textblob import TextBlob
    b = TextBlob("I feel very energatik.")
    print(b.correct())

    3. Word objects have a Word.spellcheck() method that returns a list of (word,confidence) tuples with spelling suggestions.

    from textblob import Word
    w = Word('energatik')
    print(w.spellcheck())
    # [('energetic', 1.0)]

    The technique used for spelling correction is derived from Peter Norvig’s “How to Write a Spelling Corrector” [1], which has been implemented in the pattern library. The accuracy of this approach is approximately 70%.

Both pyspellchecker and TextBlob can be used for misspelled-word identification and correction.

    3. Symspellpy

SymSpellPy is a Python implementation of the SymSpell spelling correction algorithm. It is designed for high-performance typo correction and fuzzy string matching, capable of correcting words or phrases at a rate of over 1 million words per second, depending on the system’s performance. The SymSpell algorithm works by precomputing all possible variants for a given dictionary within a specified edit distance and storing them in a lookup table, allowing for quick search and correction. This makes symspellpy suitable for various natural language processing tasks, such as spell checking, autocomplete suggestions, and keyword searches.
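To give a feel for that precomputation step, here is a rough illustration (our own sketch, not symspellpy's internal code) of generating every deletion variant of a word up to a given edit distance:

    def deletes_within_distance(word: str, max_distance: int = 2) -> set:
        """All strings reachable from `word` by deleting up to `max_distance`
        characters -- the kind of variants SymSpell precomputes per dictionary entry."""
        results, frontier = {word}, {word}
        for _ in range(max_distance):
            frontier = {w[:i] + w[i + 1:] for w in frontier for i in range(len(w))}
            results |= frontier
        return results

    print(len(deletes_within_distance("management")))  # count of 0-, 1- and 2-deletion variants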

The dictionary files ship with symspellpy and can be accessed via pkg_resources. Let us walk through an example.

    1. Install the package

    !pip install symspellpy

2. Use symspellpy for misspelled-word identification
The dictionary file “frequency_dictionary_en_82_765.txt” ships with the pip install.

import pkg_resources
from symspellpy import SymSpell, Verbosity

sym_spell = SymSpell(max_dictionary_edit_distance=2, prefix_length=7)
dictionary_path = pkg_resources.resource_filename(
    "symspellpy", "frequency_dictionary_en_82_765.txt"
)
sym_spell.load_dictionary(dictionary_path, term_index=0, count_index=1)

input_term = "Managment"  #@param
# (max_edit_distance_lookup <= max_dictionary_edit_distance)
suggestions = sym_spell.lookup(
    input_term, Verbosity.CLOSEST, max_edit_distance=2, transfer_casing=True
)
for suggestion in suggestions:
    print(suggestion)

3. Return the original word if no matching word is found

sym_spell = SymSpell(max_dictionary_edit_distance=2, prefix_length=7)
dictionary_path = pkg_resources.resource_filename(
    "symspellpy", "frequency_dictionary_en_82_765.txt"
)
sym_spell.load_dictionary(dictionary_path, term_index=0, count_index=1)

input_term = "miss-managment"  #@param
suggestions = sym_spell.lookup(
    input_term, Verbosity.CLOSEST, max_edit_distance=2,
    include_unknown=True, transfer_casing=True
)
for suggestion in suggestions:
    print(suggestion)

4. Spell correction on an entire text

import pkg_resources
from symspellpy import SymSpell

sym_spell = SymSpell(max_dictionary_edit_distance=2, prefix_length=7)
dictionary_path = pkg_resources.resource_filename(
    "symspellpy", "frequency_dictionary_en_82_765.txt"
)
bigram_path = pkg_resources.resource_filename(
    "symspellpy", "frequency_bigramdictionary_en_243_342.txt"
)
# load both dictionaries
sym_spell.load_dictionary(dictionary_path, term_index=0, count_index=1)
sym_spell.load_bigram_dictionary(bigram_path, term_index=0, count_index=2)

# lookup suggestions for multi-word input strings
input_term = "I m mastring the spll checker for reserch pupose."  #@param
suggestions = sym_spell.lookup_compound(
    input_term, max_edit_distance=2, transfer_casing=True,
)
# display suggestion term, edit distance, and term frequency
for suggestion in suggestions:
    print(suggestion)

Parameter list

The following parameters can be tuned for optimization; the full parameter table is available in the symspellpy documentation (see References).
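As a hedged sketch of the most commonly tuned knobs (parameter names as in symspellpy's documented API; it assumes the frequency dictionary has been loaded as in the earlier snippets):

    from symspellpy import SymSpell, Verbosity

    # Index-building parameters (set once, when constructing the object)
    sym_spell = SymSpell(
        max_dictionary_edit_distance=2,  # largest edit distance precomputed for dictionary entries
        prefix_length=7,                 # length of the word prefix used to build the deletes index
        count_threshold=1,               # minimum corpus frequency for a term to count as valid
    )

    # Lookup-time parameters (assumes a dictionary has been loaded as shown earlier)
    suggestions = sym_spell.lookup(
        "managment",
        Verbosity.CLOSEST,       # TOP, CLOSEST, or ALL suggestions
        max_edit_distance=2,     # must not exceed max_dictionary_edit_distance
        include_unknown=True,    # return the input term itself if nothing matches
        transfer_casing=True,    # copy the casing of the input onto the suggestion
    )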

    Grammar Checker

    1. Flan-T5 Based
    The Flan-T5 model, which serves as the foundation for our approach, has undergone meticulous fine-tuning using the JFLEG (JHU FLuency-Extended GUG corpus) dataset. This particular dataset is specifically designed to emphasize the correction of grammatical errors. During the fine-tuning process, great care was taken to ensure that the model’s output aligns with the natural fluency of native speakers.

    It is worth noting that the dataset is readily accessible as a Hugging Face dataset, facilitating ease of use and further exploration.

{
    'sentence': "They are moved by solar energy .",
    'corrections': [
        "They are moving by solar energy .",
        "They are moved by solar energy .",
        "They are moved by solar energy .",
        "They are propelled by solar energy ."
    ]
}

    sentence: original sentence
    corrections: human corrected version

    Dataset description

    • This dataset contains 1511 examples and comprises a dev and test split.
    • There are 754 and 747 source sentences for dev and test, respectively.
    • Each sentence has four corresponding corrected versions.

To illustrate this process, we will utilize a specific example. Here, the input text is subdivided into individual sentences. Subsequently, each sentence is subjected to a two-step procedure.

• A grammar detection pipeline identifies grammatical inconsistencies or errors.
• The sentences are then refined via a grammar correction pipeline to rectify the previously detected mistakes. This two-fold process ensures the accuracy and grammatical correctness of the text.

Please make sure you have a GPU enabled for faster responses.

1. Install the packages

    !pip install -U -q transformers accelerate

    2. Define the Sentence splitter function

#@title define functions
import re

params = {
    'max_length': 1024,
    'repetition_penalty': 1.05,
    'num_beams': 4
}

def split_text(text: str) -> list:
    # Split the text into sentences using regex
    sentences = re.split(r"(?<=[^A-Z].[.?]) +(?=[A-Z])", text)
    sentence_batches = []
    temp_batch = []
    for sentence in sentences:
        temp_batch.append(sentence)
        # If the temporary batch holds 2 to 3 sentences, or this is the last
        # sentence, add the batch to the list of sentence batches
        if (2 <= len(temp_batch) <= 3) or sentence == sentences[-1]:
            sentence_batches.append(temp_batch)
            temp_batch = []
    return sentence_batches
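A quick sanity check of the splitter (the example text is arbitrary; the output follows the 2-to-3-sentence batching rule above):

    print(split_text("my helth is not well. I hv to tak 2 day leave. Pls aprove it."))
    # [['my helth is not well.', 'I hv to tak 2 day leave.'], ['Pls aprove it.']]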

    3. Define the grammar checker and corrector function

from tqdm import tqdm

def correct_text(text: str, checker, corrector, separator: str = " ") -> str:
    sentence_batches = split_text(text)
    corrected_text = []
    for batch in tqdm(
        sentence_batches, total=len(sentence_batches), desc="correcting text.."
    ):
        raw_text = " ".join(batch)
        results = checker(raw_text)
        # Run correction unless the checker assigns LABEL_1 with a score >= 0.9
        if results[0]["label"] != "LABEL_1" or (
            results[0]["label"] == "LABEL_1" and results[0]["score"] < 0.9
        ):
            # Correct the text using the text2text-generation pipeline
            corrected_batch = corrector(raw_text, **params)
            corrected_text.append(corrected_batch[0]["generated_text"])
        else:
            corrected_text.append(raw_text)
    return separator.join(corrected_text)

    4. Initialize the text-classification and text-generation pipeline

from transformers import pipeline

# Initialize the text-classification pipeline
checker = pipeline("text-classification", "textattack/roberta-base-CoLA")

# Initialize the text-generation pipeline
corrector = pipeline(
    "text2text-generation",
    "pszemraj/flan-t5-large-grammar-synthesis",
    device=0
)

    5. Process the input paragraph.

import pprint
pp = pprint.PrettyPrinter(indent=2)

raw_text = "my helth is not well, I hv to tak 2 day leave."
corrected_text = correct_text(raw_text, checker, corrector)
pp.pprint(corrected_text)

# output:
# 'my health is not well, I have to take 2 days leave.'

    Key Takeaway

    • ‘Pyspellchecker’ effectively identifies misspelled words but may mistakenly flag person names and locations as misspelt words.
    • TextBlob is proficient in correcting misspelt words, but there are instances where it autocorrects person and location names.
    • Symspell demonstrates high speed during inference and performs well in correcting multiple words simultaneously.
• It’s important to note that most spell checkers, including the ones mentioned above, are based on the concept of edit distance, which means they may not always provide accurate corrections.
• The Flan-T5-based grammar checker is effective in correcting grammatical errors.
    • The grammar checker does not adequately handle abbreviations.
    • Fine-tuning may be necessary to adapt the domain and improve performance.

    Limitation

    Spell Checker
    Python-based spell checkers, such as pySpellChecker and TextBlob, are popular tools for identifying and correcting spelling errors. However, they do come with certain limitations:

    1. Language Support: Many Python spell checkers are primarily designed for English and may not support other languages, or their support for other languages might be limited.
    2. Contextual Mistakes: They are typically not very good at handling homophones or other words that are spelt correctly but used incorrectly in context (for example, “their” vs. “they’re” or “accept” vs. “except”).
    3. Grammar Checking: Python spell checkers are primarily designed to identify and correct spelling errors. They typically do not check for grammatical errors.
    4. Learning Capability: Many spell checkers are rule-based and do not learn from new inputs or adapt to changes in language use over time.
    5. Handling of Specialized Terminology: Spell checkers can struggle with domain-specific terms, names, acronyms, and abbreviations that are not part of their dictionaries.
    6. Performance: Spell checking can be computationally expensive, particularly for large documents, leading to performance issues.
    7. False Positives/Negatives: There is always a risk of false positives (marking correct words as incorrect) and false negatives (failing to identify wrong words), which can affect the accuracy of the spell checker.
8. Dependency on Quality of Training Data: A Python spell checker’s effectiveness depends on the quality and comprehensiveness of its training data. If the training data is biased, incomplete, or outdated, the spell checker’s performance may suffer.
    9. No Semantic Understanding: Spell checkers generally do not understand the semantics of the text, so they may suggest incorrect corrections that don’t make sense in the context.

    Remember that these limitations are not unique to Python-based spell checkers; they are common to general spell-checking and text analysis tools. Also, there are ways to mitigate some of these limitations, such as using more advanced NLP techniques, integrating with a grammar checker, or using a custom dictionary for specialized terminology.

    Grammar Checker

The limitations of grammar checkers are as follows.

    1. Training data quality and bias: ML-based grammar checkers heavily rely on training data to learn patterns and make predictions. If the training data contains errors, inconsistencies, or biases, the grammar checker might inherit those issues and produce incorrect or biased suggestions. Ensuring high-quality, diverse, and representative training data can be a challenge.
    2. Generalization to new or uncommon errors: ML-based grammar checkers tend to perform well on errors resembling patterns in the training data. However, they may struggle to handle new or uncommon errors that deviate significantly from the training data. These models often have limited generalization ability and may not effectively handle linguistic nuances or context-specific errors.
    3. Lack of explanations: ML models, including grammar checkers, often work as black boxes, making it challenging to understand the reasoning behind their suggestions or corrections. Users may receive suggestions without knowing the specific grammar rule or linguistic principle that led to the suggestion. This lack of transparency can limit user understanding and hinder the learning experience.
    4. Difficulty with ambiguity: Ambiguity is inherent in language, and ML-based grammar checkers may face challenges in resolving ambiguity accurately. They might misinterpret the intended meaning or fail to distinguish between multiple valid interpretations. This can lead to incorrect suggestions or false positives.
    5. Comprehension of context and intent: While ML-based grammar checkers can consider some contextual information, they may still struggle to understand the context and intent of a sentence fully. This limitation can result in incorrect suggestions or missing errors, especially in cases where the correct usage depends on the specific meaning or purpose of the text.
    6. Domain-specific limitations: ML-based grammar checkers may perform differently across various domains or subject areas. If the training data is not aligned with the target domain, the grammar checker might not effectively capture the specific grammar rules, terminology, or writing styles associated with that domain.
    7. Performance and computational requirements: ML-based grammar checkers can be computationally intensive, requiring significant processing power and memory resources. This can limit their scalability and efficiency, particularly when dealing with large volumes of text or real-time applications.
    8. Lack of multilingual support: ML-based grammar checkers often focus on specific languages or language families. Expanding their capabilities to support multiple languages accurately can be complex due to linguistic variations, structural differences, and the availability of diverse training data for each language.

    It’s worth noting that the limitations mentioned above are not inherent to Python itself but are associated with ML-based approaches used in grammar checking, regardless of the programming language. Ongoing research and advancements in NLP and ML techniques aim to address these limitations and enhance the performance of grammar checkers.

    Notebooks
    Spell Checker: here
    Grammar Checker: here

    Conclusion

    In conclusion, the development of a spell and grammar checker using Python showcases the power and versatility of this programming language in the realm of natural language processing. Through the utilization of Python packages such as TextBlob, symspellpy, and pyspellchecker, I have demonstrated the ability to create a robust system capable of detecting and correcting spelling and grammar errors in text.

This article has provided a comprehensive guide, walking readers through the step-by-step process of implementing these packages and integrating them into a functional spell and grammar checker. By harnessing the capabilities of these Python libraries, we can enhance the accuracy and quality of written communication, ensuring that our messages are clear, professional, and error-free.

    Moreover, the practical applications of spell and grammar checkers are vast and diverse. From academic writing and content creation to software development and beyond, these tools play a crucial role in improving language proficiency and ensuring the effectiveness of written content. As our reliance on digital communication continues to grow, the need for reliable language correction tools becomes increasingly apparent.

    Looking ahead, the field of language processing and correction holds immense potential for further advancements and refinements. Python’s extensive ecosystem of packages provides a strong foundation for continued innovation in this domain. Future enhancements may include the incorporation of machine learning algorithms for more accurate error detection and correction, as well as the integration of contextual analysis to address nuanced grammatical issues.

Ultimately, the spell and grammar checker built with Python exemplifies the power of this language in enabling effective language correction. By leveraging the capabilities of Python packages, we can enhance communication, foster clarity, and elevate the overall quality of written content in various professional and personal contexts.

Please get in touch for more details here. If you like my article, follow me on Medium for more content.
    Previous blog: Chetan Khadke

    References

    1. https://huggingface.co/pszemraj/flan-t5-large-grammar-synthesis
    2. https://paperswithcode.com/dataset/jfleg
    3. https://huggingface.co/datasets/jfleg
    4. https://textblob.readthedocs.io/en/dev/
    5. https://symspellpy.readthedocs.io/en/latest/examples/index.html
    6. https://pyspellchecker.readthedocs.io/en/latest/quickstart.html


    Mind your words with NLP was originally published in Chatbots Life on Medium, where people are continuing the conversation by highlighting and responding to this story.