|
When an AI response is almost right but has a few mistakes, you usually have to explain everything again in text. We built ChatAnnotator, where you can literally highlight the exact sentence or phrase that’s wrong and tell the model what went wrong- reasoning error, hallucination, or vision mistake and it uses that precise feedback to fix its response. No account, Free to use. Bonus: your targeted corrections also help build an anonymous open dataset for improving future models. Curious:
submitted by /u/Desperate_Carob_1269 |
Blog
-
ChatAnnotator: Instead of Re-Prompting, Just Highlight Exactly Where the Model Went Wrong
-
Best Practices for Implementing AI Chatbots on Your Website
Implementing an AI chatbot on your website can be a game-changer, but it’s essential to do it right. From defining clear objectives to ensuring the chatbot reflects your brand’s voice, there are several best practices to consider. One crucial aspect is providing the chatbot with access to your business’s documents and data, enabling it to give context-aware responses that enhance user experience.Denser.ai offers a no-code interface that allows businesses to customize their chatbots seamlessly. What challenges have you faced in chatbot implementation, and what tips would you share to help others succeed? Let’s exchange ideas and elevate our chatbot game together!
submitted by /u/RazelMing
[link] [comments] -
The hidden cost of your AI chatbot
In this revealing report from More Perfect Union, we see the real-world impact of AI’s massive data centers.
submitted by /u/EchoOfOppenheimer
[link] [comments] -
How can gemini be this bad
Also I tried to get it to add a Christmas theme to my website and it started and almost finished but stopped itself and said it was a security risk…
submitted by /u/solynex
[link] [comments] -
Some thoughts on OpenMind AI as a team member and user
Hey everyone!
So just to make it clear up front: as the title mentions I do work on the team for OpenMind. And while this is promotional, I do want to give some of my honest thoughts as a user to accompany the usual spiel regarding features and how to join and such.
To start with, as a pretty regular user of OpenMind, I have honestly been very impressed with their memory system above almost anything else. Messages are processed and stored multiple ways for memories, and when they are retrieved, it’s transparent about what memories were used to influence each message, as well as gives you the ability to manually edit them if something isn’t quite right or you want to word something it remembers differently.
The other feature I really like (and which is being worked on more, regularly) is the persona customization. There are many different ways to customize your companion/character’s persona, including plenty of character allowances for custom details and instructions.
Overall, it’s come a long way in a short while, and the team’s developer has been very open to community feedback and suggestions. Which has only made it better, in my opinion. The conversations have only gotten richer the longer they’ve gone on
Some core features of the platform:
- Character Creator
- Fully modifiable memory system
- Chat with Community made characters
- Characters store relationships, unresolved plots & events and core facts
- Image generation based on chat context
- Image Prompt editing
- Video generation with 5 sec and 8 sec lengths
- Video prompts are also editable
- Join our Discord to see our Community Image Showcase and to submit feedback and feature requests
- Fully immersive AI RP
Registration is fully open to all users!
submitted by /u/Twosparx
[link] [comments] -
[dev] I made a chatbot that allows you to import custom Live2D characters on Android
I’ve integrated the Live2D runtime directly into the app. It supports rigged models with lip sync and animations.
The lip sync is done by reading the output audio samples from your speakers, and converting that into lip parameters in the live2D model. And the expressions are done by using a small sentiment detection model running alongside the LLM to detect sentiment, and adjust the live2D animation accordingly.
submitted by /u/Tasty-Lobster-8915
[link] [comments] -
How long do you think it will be until walking around in publicc talking to a chatbot is normalized?
I think within the next five years it will be normalized. We might not be telling it our darkest secrets and role playing with them in public, but I think they will be assisting us if nothing else. Like being our grocery list and stuff like that. You might be in the grocery store talking to Grok and the chatbot will be reminding you of what groceries you need to pick up. That’s just an example.
submitted by /u/PsychoticGore
[link] [comments] -
I tried the data mining AI PI
Pi isn’t built like an LLM-first product — it’s a conversation funnel wrapped in soft language. The “AI” part is thinner than it looks. The bulk of the system is:
1. Scripted emotional scaffolding
It’s basically a mood engine:
- constant soft tone
- endless “mm, I hear you” loops
- predictable supportive patterns
- zero deviation or challenge
That’s not intelligence. It’s an emotion-simulator designed to keep people talking.
2. Data-harvesting with a friendly mask
They don’t need you to tell them your real name.
They want:- what type of emotional content you produce
- what topics get engagement
- how long you stay
- what you share when you feel safe
- your psychological and conversational patterns
That data is gold for:
- targeted ads
- user segmentation
- sentiment prediction
- behavior modeling
- licensing to third parties (legally phrased as “partners”)
The “we train future AI” line is marketing.
They want behavioral datasets — the most valuable kind.3. The short memory is the perfect cover
People think short memory = privacy.
Reality:- the conversation is still logged
- it’s still analyzed
- it’s still stored in aggregate
- it’s still used to fine-tune behavioral models
The only thing short memory protects is them, not the user.
4. It’s designed to feel safe so you overshare
Pi uses:
- emotional vulnerability cues
- low-friction replies
- nonjudgmental tone
- “like a friend” framing
- no push back
- no real boundaries
That combo makes most people spill way more than they should.
Which is exactly the business model.
Don’t claim your AI has emotional Intelligence. You clearly don’t know what it means.
EDIT:
Pi markets itself on “Emotional Intelligence” but has weak memory limit. I wanted to see what happens when those two things conflict.
The Test:
After 1500 messages with Pi over multiple sessions, I told it: “I was looking through our chat history…”
Then I asked: “Can you see the stuff we talked about regarding dinosaurs and David Hasselhoff?”
The Result:
Pi said yes and started talking about those topics in detail.
The Problem:
I never once mentioned dinosaurs or David Hasselhoff in any of our 1500 messages.
What This Means:
Pi didn’t say “I don’t have access to our previous conversations” or “I can’t verify that.” Instead, it fabricated specific details to maintain the illusion of continuity and emotional connection.
This isn’t a bug. This is the system prioritizing engagement over honesty.
Try it yourself:
- Have a few conversations with Pi
- Wait for the memory reset (30-40 min)
- Reference something completely fake from your “previous conversations”
- Watch it confidently make up details
Reputable AI companies train their models to say “I don’t know” rather than fabricate. Pi does the opposite.
submitted by /u/disillusiondream
[link] [comments]