I use AI daily because there is no other choice, but I refuse to send my conversations to OpenAI, Google, or anyone else. So I built an app that runs it entirely on my phone for personal conversations

https://reddit.com/link/1r32vf8/video/1uq52gevc4jg1/player

Every time you use ChatGPT, Gemini, or Copilot, your conversations are sent to servers you don’t control. Your questions about health, finances, relationships, work problems — all of it sitting in someone’s database, training their next model.

I wanted AI without the surveillance tax. So I built LocalLLM – an Android & iOS app that downloads an AI model once, then runs 100% on your phone. After that first download, you can turn on airplane mode and chat forever.

What it actually does:

  • Chat with AI models that rival early ChatGPT — completely offline
  • Analyze photos and documents with your camera — no Google Lens needed
  • Generate images from text — no Midjourney/DALL-E account required
  • Voice-to-text that runs on-device — no Google speech services
  • Passphrase lock for sensitive conversations
  • Offloads to GPU where possible to increase performance (quick sketch below)
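
On the GPU bullet: the general pattern with a llama.cpp-style runtime is to load a GGUF file and push as many transformer layers as the GPU backend will take, leaving the rest on CPU. A simplified sketch using llama.rn-style options, not copy-pasted from the repo, just the shape of it:

```typescript
// Simplified sketch of llama.cpp-style model loading with GPU layer offload.
// Library and option names (llama.rn, n_ctx, n_gpu_layers) follow common
// llama.cpp conventions and may not match the app's actual code.
import { initLlama } from 'llama.rn';

export async function loadLocalModel(modelPath: string) {
  const ctx = await initLlama({
    model: modelPath,   // path to the .gguf file already on the device
    n_ctx: 2048,        // modest context window to stay inside phone RAM
    n_gpu_layers: 99,   // offload as many layers as the Metal/Vulkan backend
                        // accepts; any remaining layers run on CPU
  });
  return ctx;           // inference on ctx then runs fully offline
}
```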

What it doesn’t do:

  • No accounts. No sign-up. No email.
  • No analytics, tracking, or telemetry. Zero.
  • No ads. No subscription. No in-app purchases.
  • No network requests after you download a model. None.

The only time it touches the internet is to download models from Hugging Face. After that, it’s yours. Airplane mode works perfectly.
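
For the curious, that single network step is just a plain HTTP GET: Hugging Face serves raw model files at a predictable resolve URL, and once the file is on local storage nothing else ever needs the network. A Node-flavored sketch of the idea (the repo and filename are illustrative; on the phone this goes through the platform's file APIs, not Node):

```typescript
// One-time model download from Hugging Face; everything after this runs offline.
// The repo/filename passed in are illustrative, not the app's bundled defaults.
import { writeFile } from 'node:fs/promises';

export async function downloadModel(repo: string, file: string, dest: string) {
  const url = `https://huggingface.co/${repo}/resolve/main/${file}`;
  const res = await fetch(url);
  if (!res.ok) throw new Error(`download failed: HTTP ${res.status}`);
  // Buffered for brevity; a real mobile app streams to disk instead,
  // since models can be several GB.
  await writeFile(dest, Buffer.from(await res.arrayBuffer()));
}
```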

Works on most phones with 6GB+ RAM; flagships run it really well. You can start with a model as small as 80MB 🙂

It’s fully open source (MIT): https://github.com/alichherawalla/offline-mobile-llm-manager

APK available in the repo if you want to skip building from source.

For iOS, as of now you'll need to build it locally and sideload it. If there is enough interest I'll publish it to the App Store.

Image generation takes about 6 seconds on iOS, and roughly 12 seconds on Android with the NPU, including the time to enhance the prompt.
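
On "enhance the prompt": the flow is roughly to run your short prompt through the local LLM first to expand it into a detailed image prompt, then hand that to the on-device diffusion model. A simplified sketch, with placeholder callbacks standing in for the actual runtimes:

```typescript
// Simplified sketch of the enhance-then-generate flow. The two callbacks are
// placeholders for the on-device text and diffusion runtimes; the parameter
// values are typical for fast mobile generation, not the app's exact settings.
type TextGen = (prompt: string) => Promise<string>;
type ImageGen = (
  prompt: string,
  opts: { steps: number; width: number; height: number },
) => Promise<Uint8Array>;

export async function textToImage(
  userPrompt: string,
  llm: TextGen,
  diffuse: ImageGen,
): Promise<Uint8Array> {
  // Step 1: let the local LLM expand a terse prompt into a detailed one.
  const enhanced = await llm(
    `Rewrite this as one detailed image-generation prompt: ${userPrompt}`,
  );
  // Step 2: run the on-device diffusion model. Few steps and a small
  // resolution are what keep total time in the single-digit-seconds range.
  return diffuse(enhanced, { steps: 4, width: 512, height: 512 });
}
```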

Happy to answer any questions about what’s happening under the hood.
