Is there a hack or prompt configuration to activate any NSFW mode in Cici?
submitted by /u/ELMASTER365666316
[link] [comments]
I’ve been using AI chatbots to help me study by giving them my textbook pages and lecture slides and then having them teach the material back to me. For the past year or two, I’ve had a Perplexity Education Pro license that I’ve used without issues, and I’ve been able to give it dozens of files without hitting a limit. But now it’s been capping my file uploads a lot sooner, which makes it borderline impossible to use the way I usually do. Are there any alternative chatbots that offer a high file upload limit at a reasonable-ish price? I’m willing to pay $30 a month at the very most. What do you guys think would be my best option?
The AI doesn’t need to be the most accurate model with the craziest reasoning skills, but it needs to be able to accurately analyze documents and teach them back to me.
submitted by /u/superarash_
[link] [comments]
I’ve seen a lot of people mention that “long-term memory” is one of the most important features in AI roleplay or chatbot experiences, but I feel like it can mean very different things depending on the person.
So I’m curious from a user perspective – what does “good” long-term memory actually look like for you?
Some things I’m wondering about:
Also, if you’ve ever used a bot that felt like it had genuinely good memory, what did it do differently?
I’m trying to understand this more from actual user experience rather than just a technical definition.
submitted by /u/MagiNeko
[link] [comments]
been going down a rabbit hole with lorebooks lately (mostly from the SillyTavern side) and a thought hit me. the core mechanic is basically dynamic context injection based on keyword triggers, which is. kind of what you’d want for brand consistency across a content pipeline? like instead of cramming your brand voice guide, product details, and audience personas into every single prompt, you structure them as lorebook entries and let the model pull what’s relevant per piece. the obvious gap is that tools like NovelAI are built entirely around fiction and have zero native marketing workflows. no scheduling, no CRM hooks, nothing. so you’d basically be hacking something together manually or trying to replicate the pattern in LangChain or a custom RAG setup. not impossible but heaps more work than just using Copy.ai or Jasper with a decent brand voice setup. has anyone here actually tried building something like this for real content workflows, or does the token bloat and setup complexity make it not worth it compared to just using purpose-built tools?
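the core mechanic described above is simple enough to sketch. this is a minimal toy version of lorebook-style injection, not any platform’s actual implementation; the entry names, keywords, and content are made-up examples:

```python
# Minimal sketch of lorebook-style context injection: brand-voice and
# product entries are injected only when their trigger keywords appear
# in the brief, instead of being crammed into every prompt.
# All entry names, keywords, and content below are hypothetical.

LOREBOOK = {
    "brand_voice": {
        "keywords": {"tone", "voice", "style"},
        "content": "Write in a friendly, plain-English tone. Avoid jargon.",
    },
    "product_widget": {
        "keywords": {"widget", "pricing"},
        "content": "The Widget Pro costs $49/mo and targets small teams.",
    },
}

def build_prompt(brief: str) -> str:
    """Assemble a prompt, injecting only the lorebook entries whose
    trigger keywords appear in the brief (case-insensitive)."""
    words = set(brief.lower().split())
    injected = [
        entry["content"]
        for entry in LOREBOOK.values()
        if entry["keywords"] & words  # any trigger keyword present
    ]
    context = "\n".join(injected)
    return f"{context}\n\nTask: {brief}" if context else f"Task: {brief}"

print(build_prompt("Draft a pricing page for the widget launch"))
```

a real setup would need tokenized matching, regex triggers, and insertion-order rules, but the keyword-gated lookup is the whole trick.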
submitted by /u/resbeefspat
[link] [comments]
been thinking about this a lot lately. lorebooks do a solid job keeping things consistent inside a single chat, keyword triggers inject the right context at the right time without bloating the whole prompt upfront. for RP and character work that’s genuinely useful. but the moment you try to scale it across multiple separate conversations, things get messier. most platforms only let you bind one lorebook per chat, so you end up juggling global books or doing workarounds that feel a bit clunky. the token limits are also worth thinking about. something like 740 tokens on a free tier isn’t a lot to work with if your lore is even slightly complex. and because triggers only activate on the last few messages, anything that isn’t referenced recently just. doesn’t show up. so for multi-session setups where context resets between chats, you’re basically starting fresh each time unless you build something custom around it. some people in the SillyTavern community are pushing for AI-driven dynamic states and better multi-lorebook support for group chats, which sounds promising but isn’t really there yet. curious whether anyone’s actually made lorebooks work well for something more structured, like a support bot or internal knowledge tool, rather than just RP. or if you hit the same limitations and ended up going with a vector DB approach instead.
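the scan-depth behavior mentioned above is easy to demonstrate. in this toy sketch (the scan depth, token budget, and entries are illustrative, not any platform’s real defaults), an entity mentioned early in the chat stops activating its entry once it falls outside the window:

```python
# Sketch of why lorebook entries "drop out": triggers are typically
# scanned only against the last few messages (the scan depth), so an
# entity mentioned early in a long chat stops activating its entry.
# SCAN_DEPTH, TOKEN_BUDGET, and the entries are made-up examples.

SCAN_DEPTH = 2      # only the last 2 messages are checked for triggers
TOKEN_BUDGET = 40   # toy cap on injected lore per turn

ENTRIES = [
    {"keyword": "dragon", "text": "The dragon Veyra guards the northern pass."},
    {"keyword": "tavern", "text": "The Gilded Goose tavern is a smugglers' front."},
]

def active_entries(history: list[str]) -> list[str]:
    """Return lore entries triggered by the recent message window."""
    window = " ".join(history[-SCAN_DEPTH:]).lower()
    picked, spent = [], 0
    for e in ENTRIES:
        if e["keyword"] in window:
            cost = len(e["text"].split())  # crude token estimate
            if spent + cost <= TOKEN_BUDGET:
                picked.append(e["text"])
                spent += cost
    return picked

chat = [
    "We met the dragon two days ago.",  # now outside the scan window
    "Let's head into town.",
    "Is there a tavern nearby?",
]
print(active_entries(chat))
```

only the tavern entry fires here, even though the dragon is arguably still plot-relevant, which is exactly the multi-session staleness problem.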
submitted by /u/Dailan_Grace
[link] [comments]
been digging into lorebooks lately mostly from the SillyTavern side of things, but the pattern itself got me thinking. the core mechanic is pretty interesting, you’re basically doing lightweight RAG by triggering context chunks based on keywords or recency instead of stuffing everything into the prompt upfront. for character consistency in RP it clearly works. but I keep wondering how well that transfers to something like a customer support bot or an internal knowledge assistant. like instead of spinning up a full vector DB pipeline, could you just structure your domain knowledge as lorebook-style entries and let the model pull what’s relevant per query? reckon there’s something there for smaller teams that don’t have the infra for proper RAG. anyone actually tried this outside of chatbot/RP contexts? curious if the maintenance overhead makes it not worth it at scale.
submitted by /u/Daniel_Janifar
[link] [comments]
been playing around with lorebooks lately after seeing them come up heaps in SillyTavern discussions, and it got me thinking about whether the pattern has broader uses. the basic idea is pretty solid, structured context that gets injected dynamically based on keyword triggers instead of dumping everything into the prompt at once. for roleplay and character consistency it obviously works well. but I keep wondering if the same approach could help with things like customer service bots or internal knowledge assistants. like instead of a full RAG pipeline, you just build out a lorebook style system where specific topics or entities trigger the right context chunks. seems like it could cut down on hallucinations in long conversations without the overhead of a full vector DB setup. anyone actually tried this outside of the fiction/companion chatbot space?
submitted by /u/OrinP_Frita
[link] [comments]
been going down a rabbit hole with lorebooks after seeing a few posts here about brand chatbot use cases, and it got me thinking about whether the same keyword-triggered context injection could work for SEO content workflows. like instead of stuffing a system prompt with brand voice guidelines, target keywords, audience info, and topic clusters, you’d have it pull in only what’s relevant based on what’s actually being generated at that moment. on paper it sounds like it could help with consistency across a big content operation, especially for niche sites where you need the AI to stay on-brand without bloating every single prompt. but I’m not sure if this is actually better than just using RAG or a well-structured system prompt. anyone here tried wiring something like this up for content generation? curious if the lorebook approach holds up outside of creative writing contexts or if it’s more trouble than it’s worth.
submitted by /u/parwemic
[link] [comments]
I came across a workflow where AI can take a live website and reconstruct it as a working codebase without anyone manually writing HTML or CSS.
The setup uses Claude Code inside VS Code along with Playwright MCP to capture and interpret website structure, then rebuild it as a functional project.
How it works (simple breakdown)
Why this is interesting
It is not perfect yet, but for clean and structured websites, the results are surprisingly accurate. Full walkthrough here for anyone interested: https://youtu.be/Hs7EmMwDVss
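The capture step can be approximated without Playwright. This is a rough stdlib stand-in, not the actual workflow (which uses Playwright MCP to snapshot a live, rendered page): it just walks static HTML and records the tag hierarchy, which is the kind of structural outline the model rebuilds from. The sample page is made up for illustration.

```python
# Toy version of the "capture structure" step: walk an HTML document
# and record each opening tag with its nesting depth. A real capture
# via Playwright MCP would also include rendered layout and styles.
from html.parser import HTMLParser

class StructureOutline(HTMLParser):
    """Record each opening tag, indented by its nesting depth."""
    def __init__(self):
        super().__init__()
        self.depth = 0
        self.outline = []

    def handle_starttag(self, tag, attrs):
        self.outline.append("  " * self.depth + tag)
        self.depth += 1

    def handle_endtag(self, tag):
        self.depth -= 1

# Hypothetical sample page, for illustration only.
page = "<html><body><header><nav>menu</nav></header><main><h1>Hi</h1></main></body></html>"
p = StructureOutline()
p.feed(page)
print("\n".join(p.outline))
```

Feeding the outline (rather than raw HTML) to the model is what keeps the reconstruction prompt small and structured.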
submitted by /u/kalladaacademy
[link] [comments]