We envision a future shaped by many small, task-specific models, each finely tuned to its purpose and context. We aim to get there through continual learning: building the best product for privately capturing episodic memory on secure edge and consumer devices, where the model can adapt to each user's tacit and local knowledge, with additional redundancy layers for data security.
Our first technology preview includes an SDK and a Swift-based Mac menu bar application built using MLX, Apple's Metal platform, the Containerization Framework, and Iroh for networking capabilities. For this developer preview, we are using Google’s gemma3n-e4b and OpenAI’s gpt-oss-20b as base models. We’re also developing a cross-platform Modelfile abstraction with intelligent defaults to make it easier to run and fine-tune open models on-device, whether you’re building apps or training models.
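The Modelfile abstraction itself isn't documented yet, so as a rough sketch of the shape such a file might take (syntax borrowed from Ollama's Modelfile convention; every directive and value below is an illustrative assumption, not the actual format):

```
# Hypothetical Modelfile — directives and defaults are illustrative assumptions
FROM gpt-oss-20b
PARAMETER temperature 0.7
PARAMETER num_ctx 8192
ADAPTER ./adapters/user-episodic-lora
SYSTEM "You are a private, on-device assistant with access to the user's episodic memory."
```

The appeal of this kind of declarative format is that the same file could select sane backend defaults (MLX on a Mac, another runtime elsewhere) without the app or training script changing.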
We work at the intersection of research and product design, co-designing agentic software systems that help us unlock continual learning.
As part of our work on continual learning, we are exploring diffusion transformer designs, Per-Layer Embeddings (PLE) with offloading to flash storage, runtime LoRA generation with hypernetworks, and RL in distributed consumer environments. We will also contribute upstream changes to the open-source dependencies we build on.
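To make the "runtime LoRA generation with hypernetworks" idea concrete: a hypernetwork maps a small task or user embedding to the low-rank LoRA factors for a frozen base layer, so adapters can be produced on the fly instead of trained and stored per task. The sketch below is a minimal NumPy illustration of that mechanism only; all shapes, names, and the single-linear-map hypernetwork are our assumptions, not Tiles' actual design.

```python
import numpy as np

rng = np.random.default_rng(0)

d_model = 64   # width of the frozen base linear layer
rank = 4       # LoRA rank
d_task = 16    # size of the task/user embedding fed to the hypernetwork

# Frozen base weight (stands in for one linear layer of the base model).
W = rng.normal(size=(d_model, d_model))

# Hypernetwork: here just one linear map from the task embedding to the
# flattened LoRA factors A and B (a real hypernetwork would be a small MLP).
H = rng.normal(scale=0.02, size=(d_task, 2 * rank * d_model))

def generate_lora(task_embedding: np.ndarray):
    """Map a task embedding to LoRA factors A (rank x d_model) and B (d_model x rank)."""
    flat = task_embedding @ H
    A = flat[: rank * d_model].reshape(rank, d_model)
    B = flat[rank * d_model :].reshape(d_model, rank)
    return A, B

def adapted_forward(x: np.ndarray, task_embedding: np.ndarray, alpha: float = 1.0):
    """Base layer plus the generated low-rank update: x @ (W + alpha * B @ A).T"""
    A, B = generate_lora(task_embedding)
    return x @ (W + alpha * (B @ A)).T

x = rng.normal(size=(1, d_model))
task = rng.normal(size=(d_task,))
y = adapted_forward(x, task)
print(y.shape)  # (1, 64)
```

The point of the low-rank structure is cost: the hypernetwork emits `2 * rank * d_model` numbers per layer rather than a full `d_model x d_model` weight update, which is what makes per-user, on-device adaptation plausible.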
We'd love to partner with early-stage companies to build together. Join us in the Tiles Discord server.
Subscribe to our publication Neurons for updates on on-device AI and personalization research. This work is currently funded by our publication’s subscribers. You can also explore more resources on our GitHub.