Tiles Launcher Documentation

Overview

Tiles is building on-device AI infrastructure for continual learning with private model fine-tuning. We envision a future shaped by many small, task-specific models, each finely tuned to its purpose and context.

Our Approach

We aim to achieve continual learning by building the best product for privately capturing episodic memory on secure edge and consumer devices, where the model can adapt to each user's tacit and local knowledge, backed by additional redundancy layers for data security.

Think of it as a local-first alternative to the "Sign in with ChatGPT" feature.

Technology Stack

Current Technology Preview

  • SDK: Swift-based client app built with mlx-lm
  • Infrastructure: Apple's Metal platform for inference and fine-tuning
  • Networking: Iroh for distributed functionality
  • Base Models:
    - Google's gemma3n-e4b
    - OpenAI's gpt-oss-20b

Core Technologies

  • Model Chaining: Parameter-efficient fine-tuning methods
  • On-device Processing: Edge and consumer device deployment
  • Cross-platform Abstraction: Modelfile with intelligent defaults
  • Private Memory: Secure episodic memory capture
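
As an illustration of the Modelfile abstraction, a definition might look like the following. This is a hypothetical sketch, loosely modeled on Ollama-style Modelfiles; the actual Tiles directive names and defaults may differ:

```
# Hypothetical example only; the actual Tiles Modelfile syntax may differ.
FROM gemma3n-e4b

# Intelligent defaults can be overridden per deployment target.
PARAMETER temperature 0.7

# Attach a user-specific adapter produced by on-device fine-tuning.
ADAPTER ./adapters/user-episodic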
Research Areas

We work at the intersection of research and product design, exploring:

  • Diffusion Transformer Designs: Advanced architectural patterns
  • MatFormer Architectures: Specialized transformer variants
  • Runtime LoRA Generation: Dynamic adaptation with hypernetworks
  • Reinforcement Learning: Distributed consumer environments
  • Continual Learning: Episodic memory systems
  • Open Source Contributions: Upstream improvements to dependencies
Development Philosophy

Local-First Design

  • Privacy by design with on-device processing
  • Secure edge computing capabilities
  • User data stays on user devices
  • Additional redundancy layers for data security

Model Specialization

  • Task-specific model fine-tuning
  • Context-aware adaptations
  • Parameter-efficient methods
  • Continual learning capabilities
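
To make "parameter-efficient" concrete: a LoRA-style method leaves the base weight matrix W frozen and learns a low-rank update, W' = W + (alpha / r) · B · A, where A is r×k, B is d×r, and r is small. The sketch below uses plain Swift arrays for clarity; in practice this would run on MLX tensors, and the function names are illustrative, not part of the Tiles SDK.

```swift
// Naive dense matrix multiply, used only to illustrate the math.
func matmul(_ x: [[Double]], _ y: [[Double]]) -> [[Double]] {
    let rows = x.count, inner = y.count, cols = y[0].count
    var out = Array(repeating: Array(repeating: 0.0, count: cols), count: rows)
    for i in 0..<rows {
        for k in 0..<inner {
            for j in 0..<cols {
                out[i][j] += x[i][k] * y[k][j]
            }
        }
    }
    return out
}

// Merge a rank-r adapter into frozen base weights: W' = W + (alpha / r) * B * A.
func applyLoRA(base: [[Double]], a: [[Double]], b: [[Double]], alpha: Double) -> [[Double]] {
    let r = Double(a.count)          // adapter rank
    let delta = matmul(b, a)         // d x k low-rank update
    return zip(base, delta).map { (rowW, rowD) in
        zip(rowW, rowD).map { $0 + (alpha / r) * $1 }
    }
}
```

Only A and B are trained, so the number of tunable parameters scales with r rather than with the full d×k weight matrix.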

Getting Started

Prerequisites

  • Apple Silicon Mac (for current technology preview)
  • macOS with Metal support
  • Development environment with Swift support

Installation

Currently in developer preview stage. Contact us for early access.

Architecture

On-Device AI Infrastructure

  • Inference Engine: mlx-lm integration
  • Fine-tuning Pipeline: Parameter-efficient methods
  • Memory System: Episodic memory capture
  • Security Layer: Private data handling

Model Management

  • Modelfile Abstraction: Cross-platform model definitions
  • Intelligent Defaults: Optimized configurations
  • Dynamic Loading: Runtime model switching
  • Version Control: Model versioning and updates

Use Cases

For Developers

  • Build applications with private AI
  • Implement continual learning systems
  • Create task-specific models
  • Integrate on-device intelligence

For Researchers

  • Explore episodic memory systems
  • Study continual learning approaches
  • Develop parameter-efficient methods
  • Contribute to open source projects

Episodic Memory

Traditional language models lack episodic memory: the ability to rapidly learn and recall specific experiences. Our approach addresses this gap by:

  • Implementing hippocampus-inspired memory systems
  • Enabling rapid learning of specific interactions
  • Bridging working memory and long-term storage
  • Supporting personalized model adaptation
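
The rapid-learning-and-recall loop above can be sketched as similarity-based retrieval over stored interactions. Everything below is a hypothetical illustration (the `Episode` and `EpisodicStore` types are not the Tiles API): each episode is stored with an embedding, and `recall` ranks episodes by cosine similarity to the query.

```swift
import Foundation

// One captured interaction, with an embedding assumed to come from the on-device model.
struct Episode {
    let interaction: String
    let response: String
    let embedding: [Double]
}

struct EpisodicStore {
    private var episodes: [Episode] = []

    // Storing is a single append: new experiences are learned immediately.
    mutating func store(_ episode: Episode) {
        episodes.append(episode)
    }

    // Cosine similarity between two embedding vectors.
    private func similarity(_ a: [Double], _ b: [Double]) -> Double {
        let dot = zip(a, b).map(*).reduce(0, +)
        let normA = sqrt(a.map { $0 * $0 }.reduce(0, +))
        let normB = sqrt(b.map { $0 * $0 }.reduce(0, +))
        return dot / (normA * normB)
    }

    // Return the `limit` stored episodes most similar to the query embedding.
    func recall(query: [Double], limit: Int) -> [Episode] {
        episodes
            .sorted { similarity($0.embedding, query) > similarity($1.embedding, query) }
            .prefix(limit)
            .map { $0 }
    }
}
```

A production system would replace the linear scan with an approximate nearest-neighbor index and periodically consolidate recalled episodes into the model's weights, which is the "bridge" between working memory and long-term storage.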

Community & Support

Design Partners

We're seeking design partners among early-stage companies for our prototype stage.

Communication Channels

  • Discord: #tiles channel on User & Agents Discord
  • Direct Contact: go.tiles.run/talk
  • Email: hello@tiles.run

Resources

  • Blog: Neurons publication at blog.tiles.run
  • GitHub: tileshq organization
  • Updates: Subscribe to Neurons for research updates

Funding & Development

This work is currently funded by our publication's subscribers. We believe in transparent, community-supported development of AI infrastructure.

Technical Specifications

Supported Platforms

  • macOS (Apple Silicon)
  • iOS (planned)
  • Cross-platform support (in development)

Model Formats

  • MLX-compatible models
  • Hugging Face format support
  • Custom Modelfile definitions
  • LoRA adapters

Performance

  • On-device inference
  • Real-time fine-tuning
  • Memory-efficient operations
  • Battery-optimized processing

API Reference

Model Loading

```swift
// Load base model
let model = try await TilesLauncherModel.load("gemma3n-e4b")

// Apply fine-tuning
let adapter = try await model.createAdapter(from: episodicData)
let tunedModel = try await model.apply(adapter)
```

Memory Management

```swift
// Capture episodic memory
let memory = EpisodicMemory(interaction: userInput, response: modelOutput)
try await model.store(memory)

// Retrieve relevant memories
let memories = try await model.recall(query: currentInput, limit: 10)
```

Continual Learning

```swift
// Enable continual learning
model.configure(.continualLearning(enabled: true))

// Fine-tune on new data
try await model.fineTune(data: newExamples, method: .lora)
```

Roadmap

Short Term

  • Expand platform support
  • Improve model efficiency
  • Enhance developer tools
  • Integrate community feedback

Medium Term

  • Production-ready SDK
  • Advanced memory systems
  • Collaborative learning features
  • Enterprise support

Long Term

  • Multi-modal capabilities
  • Advanced reasoning systems
  • Ecosystem partnerships
  • Open source ecosystem

Contributing

We welcome contributions to our open source components:

  • Bug reports and feature requests
  • Code contributions
  • Documentation improvements
  • Research collaborations

Contact Information

  • Website: https://tiles.run
  • Email: hello@tiles.run
  • Blog: https://blog.tiles.run
  • GitHub: https://github.com/tileshq
  • Discord: #tiles on User & Agents
---

*Last updated: January 2025*

*This documentation is continuously updated as we develop our technology.*