Blog

  • lorebooks outside of roleplay. actually useful for real apps

    been digging into lorebooks lately mostly from the SillyTavern side of things, but the pattern itself got me thinking. the core mechanic is pretty interesting, you’re basically doing lightweight RAG by triggering context chunks based on keywords or recency instead of stuffing everything into the prompt upfront. for character consistency in RP it clearly works. but I keep wondering how well that transfers to something like a customer support bot or an internal knowledge assistant. like instead of spinning up a full vector DB pipeline, could you just structure your domain knowledge as lorebook-style entries and let the model pull what’s relevant per query? reckon there’s something there for smaller teams that don’t have the infra for proper RAG. anyone actually tried this outside of chatbot/RP contexts? curious if the maintenance overhead makes it not worth it at scale.
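    The mechanic the post describes, scan the query for trigger keywords and inject only the matching context chunks, fits in a few lines. A minimal sketch in Python; the entries and trigger words are made-up support-bot examples, not any real product's data, and a real setup would want stemming or fuzzy matching rather than exact word hits.

```python
# Minimal lorebook-style context injection: each entry carries trigger
# keywords plus a context chunk, and only entries whose triggers appear
# in the user's query get injected into the prompt.

LOREBOOK = [
    {"triggers": {"refund", "return"},
     "text": "Refund policy: 30 days, original receipt required."},
    {"triggers": {"shipping", "delivery"},
     "text": "Shipping: 3-5 business days domestic."},
    {"triggers": {"warranty"},
     "text": "Warranty: 1 year on manufacturing defects."},
]

def build_prompt(query: str, system: str = "You are a support assistant.") -> str:
    words = set(query.lower().split())
    # Keep only entries whose trigger set intersects the query's words.
    hits = [e["text"] for e in LOREBOOK if e["triggers"] & words]
    context = "\n".join(hits)
    return f"{system}\n\nRelevant knowledge:\n{context}\n\nUser: {query}"

print(build_prompt("how do I get a refund on my order"))
```

    The maintenance overhead the post asks about lives mostly in the trigger sets: every new entry needs hand-curated keywords, which is exactly what embedding similarity in a vector DB replaces.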

    submitted by /u/Daniel_Janifar

  • can lorebooks work outside of roleplay chatbots or are they too niche for that

    been playing around with lorebooks lately after seeing them come up heaps in SillyTavern discussions, and it got me thinking about whether the pattern has broader uses. the basic idea is pretty solid, structured context that gets injected dynamically based on keyword triggers instead of dumping everything into the prompt at once. for roleplay and character consistency it obviously works well. but I keep wondering if the same approach could help with things like customer service bots or internal knowledge assistants. like instead of a full RAG pipeline, you just build out a lorebook-style system where specific topics or entities trigger the right context chunks. seems like it could cut down on hallucinations in long conversations without the overhead of a full vector DB setup. anyone actually tried this outside of the fiction/companion chatbot space?

    submitted by /u/OrinP_Frita

  • using lorebooks for SEO content generation, has anyone actually tried this

    been going down a rabbit hole with lorebooks after seeing a few posts here about brand chatbot use cases, and it got me thinking about whether the same keyword-triggered context injection could work for SEO content workflows. like instead of stuffing a system prompt with brand voice guidelines, target keywords, audience info, and topic clusters, you’d have it pull in only what’s relevant based on what’s actually being generated at that moment. on paper it sounds like it could help with consistency across a big content operation, especially for niche sites where you need the AI to stay on-brand without bloating every single prompt. but I’m not sure if this is actually better than just using RAG or a well-structured system prompt. anyone here tried wiring something like this up for content generation? curious if the lorebook approach holds up outside of creative writing contexts or if it’s more trouble than it’s worth.

    submitted by /u/parwemic

  • AI can now clone full websites automatically using Claude Code + Playwright MCP

    I came across a workflow where AI is able to take a live website and reconstruct it into a working codebase without manually writing HTML or CSS.

    The setup uses Claude Code inside VS Code along with Playwright MCP to capture and interpret website structure, then rebuild it as a functional project.

    How it works (simple breakdown)

    • Playwright MCP loads and analyses the target website in a real browser
    • Claude Code interprets layout, UI structure, and components
    • A new project is generated that mirrors the original design
    • The output can then be edited like a normal codebase

    Why this is interesting

    • UI replication is becoming semi-automated
    • Useful for rapid prototyping and learning from existing designs
    • Reduces time spent manually rebuilding layouts

    It is not perfect yet, but for clean and structured websites, the results are surprisingly accurate. Full walkthrough here for anyone interested: https://youtu.be/Hs7EmMwDVss

    submitted by /u/kalladaacademy

  • Does anyone know any c.ai alternatives that allow you to send images in the chats?

    i’ve been looking through various chat bot websites for a bit and i can’t seem to find one with an option to send an image in a chat. anybody know any?

    submitted by /u/perhaps_insanity

  • Trying to build “ambient companionship” with AI. Here’s what I made! Looking for feedback.

    Hi everyone! 🙋

    I am currently a junior student. My team had an idea and started our project SoulLink, an AI companion chatbot. After working hard for seven months, we successfully created SoulLink and its first avatar, “4D”. We now have some concerns and face some difficulties, so our team is trying something new.

    Through our research on the AI companion products currently on the market, we’ve realized that our product’s goal shouldn’t be limited to simply responding to users. We believe a great AI companion should live alongside people and focus on providing better companionship, offering a stronger, more authentic sense of connection. The philosophy behind our design is this: it is not merely a tool; it has its own boundaries, its own perspective, and its own coherence. This significantly changes how the interaction feels. It does not always immediately understand what you are doing; instead, the relationship evolves dynamically, much like one between real people. The experience no longer feels like seeking support in the traditional sense, but more like a genuine social interaction involving expression, interpretation, reconciliation, and growth.

    Really looking forward to hearing other people’s opinions on our design concept. If you’re interested and want to try it, please feel free!

    submitted by /u/Empty-Resource2564

  • using lorebooks for brand chatbots, does the roleplay concept actually translate

    been going down a rabbit hole with SillyTavern lorebooks lately and it got me thinking about brand applications. the keyword-triggered context injection is interesting for keeping a chatbot consistent across long sessions, and I can see how you’d use the same pattern for a brand AI, like storing product lore, tone guidelines, brand history, and only pulling in what’s relevant based on what the user actually asks about. seems like Adobe is already doing something similar with their Brand Concierge thing, ingesting real-time product info so the chatbot stays credible. but I’m curious whether the lorebook approach specifically adds anything over just using a well-structured RAG setup. like is the narrative framing actually useful for brand interactions, or does it just add complexity without much payoff? anyone tried building something like this?

    submitted by /u/Dailan_Grace

  • knowledge graphs vs lorebooks for LLM context, is one actually better

    been thinking about this after going down a rabbit hole with SillyTavern lorebooks. they work fine for basic stuff but the more complex your world or use case gets, the more you start hitting limits. keyword triggers are fiddly, you’re always fighting token budgets, and updating anything is a pain. started wondering if knowledge graphs are just a cleaner solution to the same problem. from what I can tell the main difference is that lorebooks are basically flat text that gets shoved into context, whereas a knowledge graph gives you actual structured relationships between entities. so instead of hoping your trigger words fire at the right time, you’re doing proper graph traversal to pull in only what’s actually relevant. the explainability angle is interesting too, you can trace exactly why certain info got retrieved rather than just trusting that the context injection worked. the tradeoff seems to be setup complexity though. Neo4j and GraphRAG aren’t exactly plug and play for most people. reckon the hybrid approach is probably where most people end up, vector search for fuzzy semantic stuff and a graph layer for the structured factual relationships. curious if anyone here has actually tried replacing lorebooks with a proper knowledge graph setup and whether it was worth the effort, especially for longer running sessions where consistency tends to fall apart.
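    The traversal-vs-trigger distinction the post draws can be shown without Neo4j at all: a toy graph as a plain adjacency map, where retrieval starts from the entities mentioned in a query and walks relationships out to a fixed depth. Everything here is invented for illustration (the entities, relations, and two-hop depth), and a production setup would use a real graph store and entity linking, but it demonstrates why each retrieved fact is traceable.

```python
# Toy knowledge graph as an adjacency map: node -> [(relation, target)].
# Retrieval walks outward from seed entities up to `depth` hops, so every
# returned fact has an explicit path explaining why it was pulled in --
# something a flat keyword-triggered lorebook entry can't give you.

GRAPH = {
    "acme_widget":   [("made_by", "acme_corp"), ("ships_with", "power_adapter")],
    "acme_corp":     [("founded_in", "1999"), ("headquartered_in", "berlin")],
    "power_adapter": [("voltage", "230v")],
}

def retrieve(entities, depth=2):
    facts, frontier = [], list(entities)
    for _ in range(depth):
        next_frontier = []
        for node in frontier:
            for relation, target in GRAPH.get(node, []):
                facts.append(f"{node} --{relation}--> {target}")
                next_frontier.append(target)
        frontier = next_frontier
    return facts

# Hop 1 gets the widget's direct facts; hop 2 follows the relationships
# to facts about the manufacturer and the adapter.
for fact in retrieve(["acme_widget"]):
    print(fact)
```

    The depth parameter is also where the token-budget fight moves to: instead of tuning trigger words, you tune how many hops of context are worth the tokens.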

    submitted by /u/Daniel_Janifar

  • Are there any actually uncensored AI girlfriend chatbots that don’t break after a while?

    I’ve been testing a few “uncensored” AI girlfriend/chatbot platforms recently and I kinda agree with what people are saying here.

    Most of them feel open at first, but once you go past casual chat you start noticing limits. Either tone resets, replies get generic, or things quietly get restricted.

    Janitor AI is probably the most flexible overall, especially if you’re setting things up yourself. It gives you more control than most.

    Candy AI is more polished and easier to use, but it feels more like a visual product. Conversations don’t really hold up over longer sessions.

    Some of the uncensored-first platforms are interesting, but consistency is usually where they fall off.

    One that stood out a bit differently for me was Xchar. It didn’t feel that special at the start, but over longer chats it stayed more stable. It didn’t clamp down as quickly and the tone didn’t drift as much.

    Still not perfect though.

    Feels like uncensored isn’t really the hard part anymore, it’s whether the AI can actually stay consistent over time.

    If anyone found one that actually does both well, I’m curious.

    submitted by /u/Thomaskindell