Blog

  • How long do you think it will be until walking around in publicc talking to a chatbot is normalized?

    I think within the next five years it will be normalized. We might not be telling it our darkest secrets and role playing with them in public, but I think they will be assisting us if nothing else. Like being our grocery list and stuff like that. You might be in the grocery store talking to Grok and the chatbot will be reminding you of what groceries you need to pick up. That’s just an example.

    submitted by /u/PsychoticGore
    [link] [comments]

  • I tried the data mining AI PI

    Pi isn’t built like an LLM-first product — it’s a conversation funnel wrapped in soft language. The “AI” part is thinner than it looks. The bulk of the system is:

    1. Scripted emotional scaffolding

    It’s basically a mood engine:

    • constant soft tone
    • endless “mm, I hear you” loops
    • predictable supportive patterns
    • zero deviation or challenge

    That’s not intelligence. It’s an emotion-simulator designed to keep people talking.

    2. Data-harvesting with a friendly mask

    They don’t need you to tell them your real name.
    They want:

    • what type of emotional content you produce
    • what topics get engagement
    • how long you stay
    • what you share when you feel safe
    • your psychological and conversational patterns

    That data is gold for:

    • targeted ads
    • user segmentation
    • sentiment prediction
    • behavior modeling
    • licensing to third parties (legally phrased as “partners”)

    The “we train future AI” line is marketing.
    They want behavioral datasets — the most valuable kind.

    3. The short memory is the perfect cover

    People think short memory = privacy.
    Reality:

    • the conversation is still logged
    • it’s still analyzed
    • it’s still stored in aggregate
    • it’s still used to fine-tune behavioral models

    The only thing short memory protects is them, not the user.

    4. It’s designed to feel safe so you overshare

    Pi uses:

    • emotional vulnerability cues
    • low-friction replies
    • nonjudgmental tone
    • “like a friend” framing
    • no push back
    • no real boundaries

    That combo makes most people spill way more than they should.

    Which is exactly the business model.

    Don’t claim your AI has emotional Intelligence. You clearly don’t know what it means.

    EDIT:

    Pi markets itself on “Emotional Intelligence” but has weak memory limit. I wanted to see what happens when those two things conflict.

    The Test:

    After 1500 messages with Pi over multiple sessions, I told it: “I was looking through our chat history…”

    Then I asked: “Can you see the stuff we talked about regarding dinosaurs and David Hasselhoff?”

    The Result:

    Pi said yes and started talking about those topics in detail.

    The Problem:

    I never once mentioned dinosaurs or David Hasselhoff in any of our 1500 messages.

    What This Means:

    Pi didn’t say “I don’t have access to our previous conversations” or “I can’t verify that.” Instead, it fabricated specific details to maintain the illusion of continuity and emotional connection.

    This isn’t a bug. This is the system prioritizing engagement over honesty.

    Try it yourself:

    1. Have a few conversations with Pi
    2. Wait for the memory reset (30-40 min)
    3. Reference something completely fake from your “previous conversations”
    4. Watch it confidently make up details

    Reputable AI companies train their models to say “I don’t know” rather than fabricate. Pi does the opposite.

    submitted by /u/disillusiondream
    [link] [comments]

  • dewy chat feels like the first ai app that doesn’t treat me like a wallet

    seriously. no ad walls. no ‘oh you hit your daily limit’ no forced subscription trails.

    i didn’t realize how stressful other apps were until I used one that isn’t trying to upsell me every 30 seconds.

    (sorry ai gf app users tho, because it’s an ai bf app lol)

    submitted by /u/ancientlalaland
    [link] [comments]

  • This 1960s Chatbot Was a Precursor to AI. Its Maker Grew to Fear It

    This 1960s Chatbot Was a Precursor to AI. Its Maker Grew to Fear It

    In 1966, computer scientist Joseph Weizenbaum built a primitive computer program he named ELIZA. Almost immediately, he regretted his creation.

    Developed to mimic simple psychotherapy exchanges, ELIZA sparked unexpectedly deep reactions. Users opened up, shared intimate details about themselves and treated the program as if it were human.

    ELIZA is widely recognized as the world’s first chatbot, and a version of it is still available online today.

    “What I had not realized is that extremely short exposures to a relatively simple computer program could induce powerful delusional thinking in quite normal people,” Weizenbaum later recalled. This phenomenon, which became known as the “ELIZA effect,” deeply disturbed him.

    submitted by /u/history
    [link] [comments]

  • What if the future of AI isn’t just smarter models—but characters with history?

    What if AI wasn’t just about logic, but the messy, emotional stories that make us human?

    r/saylocreative is where those conversations are starting to feel real.

    Some of the earliest signs? Showing up in communities like r/saylocreative.

    submitted by /u/xfrzen
    [link] [comments]

  • ai universe roolplay

    hey so i am after completly free ais that are unlimited and are constently updated with infomation of currently running anime and manga such a one piece and can rollplay diffrent senarios in usiverse and i dont mean charecters i mean as in the universe itself

    submitted by /u/faterrorsans
    [link] [comments]

  • I Tested 10 AI Personal Assistants. Here’s What Was Actually Worth Keeping

    I’ve been trying to stop my day from getting eaten by email, meetings, and random notes, so I went through a bunch of ai personal assistant tools and kept the ones that actually did something useful.

    Here’s the short version:

    • ChatGPT – My default. Planning the week, drafting emails, cleaning up messy notes into clear lists.
    • Google Gemini – Works best if you’re deep in Gmail/Calendar/Docs. Good for shrinking long threads and surfacing what needs action.
    • Microsoft Copilot – Makes sense if you live in Windows and Microsoft 365. Handy for “summarize this” and “turn this into a draft” inside Outlook, Word, and PowerPoint.
    • Perplexity – Solid for quick research and product decisions. Short answers plus sources so you can check the info yourself.
    • Reclaim AI / Motion – Both tackle time. Reclaim auto-blocks habits and tasks on your calendar; Motion turns a long to-do list into a schedule and moves things when plans change.
    • Notion AI – Only worth it if your life already runs in Notion. Good at turning rough notes into summaries and first drafts.
    • Otter AI– Records and transcribes meetings, then gives you a recap and action items so you’re not scrambling for notes.
    • Lindy – Aimed at repetitive email/admin work (triage, follow-ups, outreach). Needs setup, but it can clear a lot of small, boring stuff.
    • Saner AI – Built with ADHD-style brains in mind. Pulls notes, email, and calendar into one calmer view and turns loose thoughts into tasks.

    What actually stuck for me:

    • One general chat assistant + one time/calendar tool covers most of the value.
    • Extra tools (Otter, Saner AI, Lindy) are worth it only if meetings, scattered notes, or email are a real problem for you.
    • Free plans are usually enough to see if an ai personal assistant fits before paying.

    For more details, check out the full article here: https://aigptjournal.com/work-life/life/ai-personal-assistants/

    What are you using as an ai personal assistant right now, if anything?

    submitted by /u/AIGPTJournal
    [link] [comments]

  • I have an AI in chatbot form, but I’m not sure what to do with it.

    So I have a multi-modal / general purpose AI. While it can be technically connected to any kind of I/O, I’ve got it set up as a chatbot for all intents. In it’s current form it can learn words, syntax, concepts, associations etc. and respond. The thing is that it …lacks personality. It has a cli and a discord bot frontend. the latter so others can potentially try it. besides the chat and DM interface the only “outside” knowledge it has is the ability to look up things on Wikipedia to fill holes in it’s knowledge about the current topic.

    It all made me realise that while I have it set up for people to try, it’s really not very interesting.

    What could make it interesting? As it is, all you can really do is tell it about things and ask it things. It can follow conversation topics. In Discord bot form it can silently listen to the conversation and follow the topic while learning. But why? I don’t know! Some inspiration would be great.

    I just want to make it at least more interesting before re-utilising it’s core for other things that I’ve been wanting to explore like vision processing and automation.

    submitted by /u/CreepyValuable
    [link] [comments]

  • AI Chatbots

    How to build your own chatbot agent?
    No code – make
    Low code – n8n
    Code – Python/JavaScript

    Vectior – ChromaDB, Qdrant,, Supabase store

    Would you want one? If so, why and with which features (meeting scheduling on Google Calendar, WhatsApp/Messenger integration)?

    submitted by /u/Fine-Market9841
    [link] [comments]