Tiles SDK

Run and build with open-source model tools

An open-source Ollama alternative with private memory fine-tuning across your devices

Tiles Gen 1 Flash Drive

$60
/once

What's included?

Kingston DTMAX/512GB drive (spec sheet)
Plug-and-play usage with the included SDK
Free updates for all 1.x versions
Community support included

How do I get it?

After you purchase, we'll email you a private download link that includes everything you need.

Review all of the FAQs prior to purchase.

If anything isn't clear, just email us.


Neurons, a Tiles blog

Our approach

Ankesh Bharti

We envision a future shaped by many small, task-specific models, each finely tuned to its purpose and context. We aim to get there through continual learning: building the best product for privately capturing episodic memory on secure edge and consumer devices, where a model can adapt to each user's tacit and local knowledge, with additional redundancy layers for data security.

Our first technology preview includes an SDK and a Swift-based Mac menu bar application built with MLX (Apple's Metal-backed ML framework), Apple's Containerization framework, and Iroh for networking. For this developer preview, we are using Google’s gemma3n-e4b and OpenAI’s gpt-oss-20b as base models. We’re also developing a cross-platform Modelfile abstraction with intelligent defaults to make it easier to run and fine-tune open models on-device, whether you’re building apps or training models.
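
To make the base-model side of this concrete, here is a minimal sketch of running an MLX-converted checkpoint locally with the mlx-lm Python package, one common way to run open models on Apple silicon. The repo id is a placeholder, and the Tiles SDK's own interface may differ.

```python
# Minimal sketch: on-device text generation from an MLX-converted base model.
# Substitute the checkpoint you actually use (e.g. an MLX conversion of
# gemma3n-e4b or gpt-oss-20b) for the placeholder repo id below.
from mlx_lm import load, generate

model, tokenizer = load("mlx-community/<your-model-repo>")  # placeholder repo id

prompt = "Summarize my last three meeting notes as bullet points."
text = generate(model, tokenizer, prompt=prompt, max_tokens=200)
print(text)
```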

We work at the intersection of research and product design, co-designing agentic software systems that help us unlock continual learning.

As part of our work on continual learning, we are exploring research in diffusion transformer designs, Per-Layer Embeddings (PLE) with offloading to flash storage, runtime LoRA generation with hypernetworks, and RL in distributed consumer environments. We will also contribute upstream changes to the open-source dependencies we build upon.
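
As a rough illustration of one of those directions, the sketch below shows the general shape of runtime LoRA generation with a hypernetwork: a small network maps a per-user or per-task context embedding to low-rank adapter factors that are added to a frozen base layer. It is written against MLX's Python API for illustration only; the module and dimensions are hypothetical and do not reflect our actual designs.

```python
# Illustrative sketch (not Tiles code): a hypernetwork emits LoRA factors at
# runtime from a context embedding, adapting a frozen base linear layer.
import mlx.core as mx
import mlx.nn as nn


class HyperLoRALinear(nn.Module):
    def __init__(self, in_dim, out_dim, ctx_dim, rank=8, alpha=16.0):
        super().__init__()
        self.base = nn.Linear(in_dim, out_dim)   # pretrained weight, kept frozen
        self.rank, self.scale = rank, alpha / rank
        self.in_dim, self.out_dim = in_dim, out_dim
        # Hypernetwork heads: context embedding -> flattened A and B factors.
        self.to_a = nn.Linear(ctx_dim, rank * in_dim)
        self.to_b = nn.Linear(ctx_dim, out_dim * rank)

    def __call__(self, x, ctx):
        a = self.to_a(ctx).reshape(self.rank, self.in_dim)   # (r, in)
        b = self.to_b(ctx).reshape(self.out_dim, self.rank)  # (out, r)
        delta = (x @ a.T) @ b.T                              # low-rank update path
        return self.base(x) + self.scale * delta


layer = HyperLoRALinear(in_dim=64, out_dim=64, ctx_dim=32)
x = mx.random.normal((1, 64))      # token activations
ctx = mx.random.normal((1, 32))    # per-user / per-task context embedding
print(layer(x, ctx).shape)         # (1, 64)
```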

We'd love to partner with early-stage companies to build together. Join us in the Tiles Discord server.

Subscribe to our publication Neurons for updates on on-device AI and personalization research. This work is currently funded by our publication’s subscribers. You can also explore more resources on our GitHub.

They don't measure things like understanding streaming video. These are language models. They don't have things like episodic memory. Humans have a working memory, for things that have happened quite recently, and then we have a cortical memory: things being stored in our cortex. But there's also a system in between: episodic memory, in the hippocampus. It's for learning specific things very rapidly. So if you remember some of the things I say to you tomorrow, that'll be your episodic memory. Our models don't really have that kind of thing, so we don't really test for it. We just try to make the context windows, which is more like working memory, longer and longer to compensate.

It's a difficult question because the generality of human intelligence is very broad. You have to go into the weeds of trying to find out if there are specific types of things that are missing from existing benchmarks or different categories of benchmarks that don't currently exist.

— Shane Legg
Cofounder and chief AGI scientist at Google DeepMind