

🫖 Teapot AI

Building AI-powered LLM agents, privately brewed in your browser.


🚀 Announcement: TinyTeapotLLM Launched! Check out the Model →

TeapotAI builds open-source small language models that can run anywhere: on CPUs, mobile devices, and even in the browser. Our models are designed for low-latency, private deployment, enabling organizations to own their AI infrastructure and models end-to-end without relying on costly external APIs.
For Developers

🤖 Explore Models

Browse TinyTeapot, TeapotLLM, and other open-source small language models optimized for fast local inference, RAG workflows, and grounded reasoning.

  • Ultra-low latency small models
  • Local CPU, mobile, and browser inference
  • RAG, extraction, and classification workflows
  • Open-source and community driven
View Models →
For Organizations

🏢 Enterprise & Deployment

Work directly with the TeapotAI team to deploy, fine-tune, and optimize lightweight language models for production systems and private AI environments.

  • Custom fine-tuning and model alignment
  • Hosting and low-latency inference optimization
  • Private and on-device deployments
  • Evaluation, integration, and ongoing support
Learn About Enterprise →

Teapot LLM

Teapot is a family of small open-source language models fine-tuned on synthetic data and optimized to run locally on resource-constrained devices such as smartphones, browsers, and CPUs. It is built for grounded reasoning, efficient inference, and real-world deployment.
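As a sketch of what grounded, local usage can look like: the helper below builds a context-grounded prompt, and the commented-out lines show hypothetical inference via the Hugging Face `transformers` pipeline. The model id `teapotai/teapotllm`, the `text2text-generation` task, and the prompt format are assumptions to verify against the model card.

```python
def grounded_prompt(context: str, question: str) -> str:
    # Teapot-style grounding: ask the model to answer only from the
    # supplied context rather than from its parametric memory.
    return (
        "Answer using only the context below.\n\n"
        f"Context: {context}\n\n"
        f"Question: {question}"
    )

# Hypothetical local inference with Hugging Face transformers (model id and
# task are assumptions; the first run downloads the weights, after which
# inference is fully local and runs on CPU):
#
#   from transformers import pipeline
#   generator = pipeline("text2text-generation", model="teapotai/teapotllm")
#   prompt = grounded_prompt(retrieved_docs, "Where can Teapot models run?")
#   print(generator(prompt)[0]["generated_text"])
```

Because the prompt carries its own evidence, the same helper works for the RAG, extraction, and classification workflows mentioned above.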

Key Capabilities

[Diagram: ./assets/teapot_diagram.png]