This project is scheduled for launch on Tuesday, November 3, 2026 at 08:00 UTC.
im.ai is a small, privacy-first chat app for iPhone, iPad, and Mac. It runs open source large language models directly on your device, and it also lets you bring your own API keys for major cloud providers. One app, one interface, every model you care about, with no account, no server, and no prompt logging.
Most chat apps for AI models are either cloud-only, locked into a single provider, or bloated with subscription features you never use. On Apple Silicon, local inference is finally fast enough to be genuinely useful, and modern Apple devices have enough memory to run capable 7B and 8B models in quantised form. im.ai was built to take advantage of that shift while keeping things minimal and respectful of your data.
Local inference runs through llama.cpp, the widely adopted open source runtime. You can load GGUF format models such as Llama 3.2, Gemma 2, Qwen 2.5, and Falcon. Models are downloaded once and cached on device. Conversations never leave your iPhone or Mac when you are in local mode. Nothing is sent anywhere, nothing is logged, nothing is uploaded for training. You can pull the cable and the app keeps working.
When you want the strongest frontier models, im.ai supports bring your own key for Google Gemini, Groq, OpenRouter, Cerebras, and GitHub Models. You paste your key once. Keys are stored in the Apple Keychain, protected by your device passcode or biometric unlock. There is no intermediary server between im.ai and the provider. Requests go directly from your device to the provider endpoint using standard HTTPS. You pay the provider, not me.
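The bring-your-own-key flow is easy to reason about because a request is nothing more than standard HTTPS straight to the provider's own endpoint. As an illustrative sketch (in Python rather than the app's native Swift, with a placeholder key), this is roughly what one such direct request to OpenRouter's OpenAI-compatible chat completions endpoint looks like:

```python
import json
import urllib.request

# Placeholder key for illustration only -- in im.ai the real key lives in
# the Apple Keychain, never in source code or on any intermediary server.
API_KEY = "sk-or-...your-key..."

def build_chat_request(model: str, prompt: str) -> urllib.request.Request:
    """Build a direct HTTPS request to OpenRouter's chat completions
    endpoint. The URL is the provider's own: no proxy, no middleman."""
    body = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }).encode("utf-8")
    return urllib.request.Request(
        "https://openrouter.ai/api/v1/chat/completions",
        data=body,
        headers={
            "Authorization": f"Bearer {API_KEY}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

req = build_chat_request("meta-llama/llama-3.2-3b-instruct", "Hello!")
print(req.full_url)  # the provider's endpoint, reached directly over TLS
```

The same shape applies to the other supported providers: only the base URL and the header carrying the key change.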
im.ai has no sign-up flow. There is no account, no email collection, no analytics SDK. There are no in-app purchases and no subscription. It does not run a backend. It does not proxy your prompts through any third party. It does not collect telemetry about what you type or which models you use.
im.ai is built for developers who want to test prompts against multiple providers from one place; privacy-conscious users who do not want their conversations ending up in training data; people with older or unreliable internet connections who want a capable chat assistant that works offline; researchers and students who want to experiment with different open source models without spinning up a server; and anyone who already has an OpenRouter or Gemini key and wants a clean native client.
im.ai requires iOS 18.6 or later on iPhone and iPad, or macOS 15.7 or later on Apple Silicon and Intel Macs. Both platforms support local inference, though model size is limited by available memory. A recent iPhone can comfortably run a 3B-parameter quantised model; a Mac with 16 GB or more can run 7B models at useful speed.
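Those memory figures are easy to sanity-check with back-of-the-envelope arithmetic: a 4-bit quantised model needs roughly half a byte per parameter, plus room for the KV cache and runtime. A rough sketch, where the 4.5 bits-per-weight figure and the 20% overhead factor are assumptions for illustration, not measurements from im.ai:

```python
def approx_model_gb(params_billions: float,
                    bits_per_weight: float = 4.5,
                    overhead: float = 1.2) -> float:
    """Rough memory footprint of a quantised model: weights at about
    bits_per_weight bits each (common Q4 GGUF quants average a little
    over 4 bits), scaled by an assumed 20% overhead for the KV cache
    and runtime structures."""
    weight_bytes = params_billions * 1e9 * bits_per_weight / 8
    return weight_bytes * overhead / 1e9

print(f"3B model: ~{approx_model_gb(3):.1f} GB")  # comfortably within a recent iPhone
print(f"7B model: ~{approx_model_gb(7):.1f} GB")  # leaves headroom on a 16 GB Mac
```

By this estimate a 3B model lands around 2 GB and a 7B model under 5 GB, consistent with the device guidance above.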
im.ai is free on the App Store with no in-app purchases. There is also an open TestFlight build for people who want the latest changes. If you bring your own cloud keys, you pay the cloud provider directly at their rates. There is no im.ai subscription, ever.