Hi Reddit, I am the co-founder of a small AI chatbot team called “Charfriend.” We recently launched our website, charfriend.com, where you can chat with characters for free and create your own. Our traffic is not as good as we expected :((( but we are really ready to shape it into a better experience. Here is a short introduction, and it would be highly appreciated if you could share your thoughts. Thanks!!!! “A romantic AI website – chat with your virtual AI companion, like a new AI girlfriend or AI boyfriend, who is always available for real-time roleplay and companionship to set your imagination free. Some of these intelligent agents have a higher degree of intelligence and personality, allowing you to embark on special and mysterious missions with them. Many surprises await you.”
Utopia Messenger, a fully decentralized messaging platform, is proud to announce the addition of ChatGPT, your personal assistant available 24/7 right after installing the app. ChatGPT uses artificial intelligence to answer your questions and provide helpful information in real time.
With Utopia Messenger, you can have the power of ChatGPT in your pocket, absolutely free of cost. It is a powerful tool that can help you with a variety of tasks. Whether you need help finding a restaurant nearby, looking up the latest news, or just want to chat with a friendly virtual assistant, ChatGPT has got you covered. Plus, with Utopia Messenger’s commitment to privacy and security, you can be sure that all your conversations with ChatGPT are completely confidential.
Utopia Messenger is more than just a messaging app. It is a fully decentralized platform that puts you in control of your data and communications. With features like end-to-end encryption, anonymous accounts, and no central servers, you can communicate and collaborate with complete peace of mind. And now, with ChatGPT, you can have a personal assistant right at your fingertips.
With it you can send instant text and voice messages, transfer files, create group chats, channels, and news feeds, and conduct private discussions. A channel can be geotagged using the integrated uMaps, which simplifies the Utopia channel search and adds an additional security layer. As a result, there is no need to use public map services, which are known to collect your data to feed Big Data repositories.
WHAT CAN YOU DO USING UTOPIA?
While using Utopia, you can send personal messages or participate in group chats (both public and private), send internal uMail (email used only inside the ecosystem), send voice messages, share files with your friends, and make financial transactions denominated in our own cryptocurrency, Crypton. All of this in total privacy.
Even better, while using Utopia you will be earning Cryptons through a process called mining which does not slow down your computer. There is nothing more satisfying than using your favorite software and earning simultaneously.
WHY USE UTOPIA OVER OTHER SYSTEMS AND MESSENGERS?
It is important to note that Utopia is not a whitepaper, an abstract idea, or a statement of intent. It is a fully functional software product, ready to use. This makes Utopia a one-of-a-kind decentralized ecosystem with no true alternative or comparison. Key privacy features of the Utopia ecosystem:
A truly decentralized peer-to-peer ecosystem with no point of failure
Interception-proof advanced encryption based on elliptic curves
It cannot be banned by internet censorship
No third-party software is involved, all tools are available within the Utopia ecosystem
No one collects your data, such as chat messages, emails, IP address, or geolocation
Local storage is encrypted with 256-bit AES, protecting all of your data, history, and settings
I am looking to make a nonprofit chatbot (hobby project) for the construction subreddit. I want to host the chatbot on a domain and teach/ground it with my own data: around 35,000 Word files (some contain image diagrams, almost like book pages).
Does anyone know the best way to go about this, and which tool would be most efficient? People have suggested storing the information in a vector database, but I wasn’t sure if that is suitable for something like this with so many files.
I made this; it discusses everything from generative AI to artificial general intelligence and the singularity. I touch on everything from alignment and existential risk to the most societally impactful jobs it will automate. The video took me about a month to make, so I’d appreciate it if everyone would check it out.
“DensiPaper presents an insightful article titled ‘What Are the Expected AI Innovations in 2022?’ Delve into the future of artificial intelligence as the article forecasts groundbreaking advancements set to shape the year ahead. From cutting-edge technologies to emerging trends, the piece offers a comprehensive overview of the potential AI landscape in 2022. Whether you’re an industry enthusiast, tech aficionado, or simply curious about the latest developments, this article provides a glimpse into the exciting possibilities that AI has in store.”
In the quest to make better chatbots, we made an AI chatbot that outperforms BotPress’s internal help chatbot. We find it super useful and use it constantly when building bots on BotPress. Here’s a Loom of it answering questions side by side with BotPress.
As we dive deeper into our digital age, new technologies are sprouting up in every corner, and education is not left out of this tech boom. Two superheroes in today’s learning world – AI tutors and human tutors – are taking centre stage, all with a shared goal: to make learning better! In this article, we’re taking a closer look at these two mighty tutors, how they work and what we can expect from tutoring in the future.
By utilizing this powerful combination, you have the ability to create a demo for your potential prospect. In just a matter of seconds, they can create a fully functional virtual assistant tailored specifically to their business needs. This will allow you to effectively showcase the value of your solution and potentially engage in further discussions regarding the acquisition of your comprehensive offering. Let me know what you think.
How to run LLaMA-13B or OpenChat-8192 on a Single GPU — Pragnakalp Techlabs: AI, NLP, Chatbot, Python Development
Recently, numerous open-source large language models (LLMs) have been launched. These powerful models hold great potential for a wide range of applications. However, one major challenge that arises is the limitation of resources when it comes to testing these models. While platforms like Google Colab Pro offer the ability to test up to 7B models, what options do we have when we wish to experiment with even larger models, such as 13B?
In this blog post, we will see how to run the LLaMA 13B and OpenChat 13B models on a single GPU. Here we are using Google Colab Pro’s GPU, a T4, with 25 GB of system RAM. Let’s check how to run it step by step.
Step 1:
Install the requirements. You need to install accelerate and transformers from source, and make sure you have the latest version of the bitsandbytes library (0.39.0) installed.
Step 2:
We are using the quantization technique in our approach, employing the BitsAndBytes functionality from the transformers library. This technique allows us to perform quantization using various 4-bit variants, such as NF4 (normalized float 4, which is the default) or pure FP4 quantization. With 4-bit bitsandbytes, weights are stored in 4 bits, while the computation can still occur in 16 or 32 bits. Different combinations, including float16, bfloat16, and float32, can be chosen for computation.
To enhance the efficiency of matrix multiplication and training, we recommend utilizing a 16-bit compute dtype (the default being torch.float32). The recent introduction of BitsAndBytesConfig in transformers provides the flexibility to modify these parameters according to specific requirements.
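To build some intuition for what “weights stored in 4 bits, computation in 16 bits” means, here is a toy NumPy sketch of 4-bit absmax quantization. This is a simplified illustration only, not the actual bitsandbytes kernels: real NF4 uses non-uniform, normal-distribution-based levels and per-block scales, while this sketch uses 16 uniform levels and a single per-tensor scale.

```python
import numpy as np

def quantize_4bit_absmax(w):
    """Toy 4-bit absmax quantization: store each weight as one of 16
    integer levels plus one scale. (bitsandbytes' NF4 uses non-uniform
    levels and per-block scales, but the storage idea is similar.)"""
    scale = np.abs(w).max() / 7.0                                 # map weights into [-7, 7]
    codes = np.clip(np.round(w / scale), -8, 7).astype(np.int8)   # 16 levels -> fits in 4 bits
    return codes, scale

def dequantize_4bit(codes, scale, dtype=np.float16):
    # Computation happens in 16-bit (or 32-bit) after dequantizing.
    return (codes.astype(np.float32) * scale).astype(dtype)

w = np.random.randn(8).astype(np.float32)
codes, scale = quantize_4bit_absmax(w)
w_hat = dequantize_4bit(codes, scale)
print("max abs reconstruction error:", np.abs(w - w_hat.astype(np.float32)).max())
```

The point of the sketch is the trade-off: each weight is stored with only 16 possible values (4 bits), while the dequantized values used for matrix multiplication are in a 16-bit floating-point dtype, which is what bnb_4bit_compute_dtype controls.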
import torch
from transformers import BitsAndBytesConfig
from transformers import AutoModelForCausalLM, AutoTokenizer

quantization_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_compute_dtype=torch.bfloat16
)
Step 3:
Once we have added the configuration, in this step we will load the tokenizer and the model. Here we are using the OpenChat model; you can use any 13B model available on the Hugging Face Hub.
If you want to use the LLaMA 13B model, just change the model id to “openlm-research/open_llama_13b” and run the steps below again.
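As a sketch, the loading step could look like the following. Note that the model id "openchat/openchat_8192" is an assumption based on the post title, not confirmed by the original text; substitute whichever 13B checkpoint you are using. Running this downloads the full model and requires a GPU, so it is shown here for orientation only.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

# 4-bit quantization config from Step 2
quantization_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)

# Assumed model id; swap in "openlm-research/open_llama_13b" for LLaMA 13B.
model_id = "openchat/openchat_8192"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model_bf16 = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=quantization_config,
    device_map="auto",  # place the quantized layers on the available GPU
)
```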
Once we have loaded the model, it is time to test it. You can provide any input of your choice, and also increase the “max_new_tokens” parameter to the number of tokens you would like to generate.
text = "Q: What is the largest animal?\nA:"
device = "cuda:0"

inputs = tokenizer(text, return_tensors="pt").to(device)
outputs = model_bf16.generate(**inputs, max_new_tokens=35)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
Output:
You can run any 13B model with this quantization technique on a single GPU, such as the one in Google Colab Pro.