been digging into lorebooks lately, mostly from the SillyTavern side of things, but the pattern itself got me thinking. the core mechanic is pretty interesting: you're basically doing lightweight RAG by triggering context chunks on keywords or recency instead of stuffing everything into the prompt upfront. for character consistency in RP it clearly works. but I keep wondering how well that transfers to something like a customer support bot or an internal knowledge assistant. like instead of spinning up a full vector DB pipeline, could you just structure your domain knowledge as lorebook-style entries and let the model pull what's relevant per query? reckon there's something there for smaller teams that don't have the infra for proper RAG. anyone actually tried this outside of chatbot/RP contexts? curious if the maintenance overhead makes it not worth it at scale.
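for anyone who hasn't poked at lorebooks, here's a rough sketch of the mechanic I mean. the entry format and names are made up for illustration, not SillyTavern's actual schema — just keyword-triggered injection with a cap on how much gets pulled in:

```python
# Illustrative lorebook-style retrieval: an entry's content is injected
# into the prompt only when one of its trigger keywords appears in the
# user's query. Entry fields and contents are hypothetical examples.

LOREBOOK = [
    {"keys": ["refund", "return"],
     "content": "Refunds are issued within 14 days of purchase."},
    {"keys": ["shipping", "delivery"],
     "content": "Standard shipping takes 3-5 business days."},
    {"keys": ["warranty"],
     "content": "Hardware carries a 1-year limited warranty."},
]

def activate_entries(query, lorebook, max_entries=3):
    """Return content of entries whose keywords appear in the query."""
    q = query.lower()
    hits = [e["content"] for e in lorebook
            if any(k in q for k in e["keys"])]
    return hits[:max_entries]  # cap injected context, like a token budget

def build_prompt(query, lorebook):
    """Prepend only the activated entries, not the whole knowledge base."""
    context = "\n".join(activate_entries(query, lorebook))
    if context:
        return f"Relevant notes:\n{context}\n\nUser: {query}"
    return f"User: {query}"

print(build_prompt("how long does delivery take?", LOREBOOK))
```

the appeal for small teams is that this is just a list of dicts (or a JSON file) plus substring matching — no embeddings, no vector DB, and the "index" is human-editable. the obvious scaling question is keyword collisions and synonym coverage as the entry count grows.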
submitted by /u/Daniel_Janifar