About TeichAI

A collaboration between two AI researchers focused on making frontier model capabilities accessible through open-source distillation.

Our Mission

Frontier AI models from Anthropic, OpenAI, and Google are incredibly capable but require API access and can be expensive to use at scale. We believe the open-source community deserves access to similar reasoning capabilities.

Our approach is simple: we build high-quality datasets of reasoning traces by querying frontier models with diverse prompts, then fine-tune open-source base models on those traces. The result is smaller, locally runnable models that capture much of the original model's reasoning style.

All our work is open source. We use Unsloth for efficient fine-tuning and release models in GGUF format for easy local deployment.

How We Work

1. Dataset Creation

We curate diverse prompts covering coding, math, science, and general reasoning. These are sent to frontier models with high reasoning effort settings to capture detailed thinking traces.
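
As a rough sketch (not our exact pipeline), collecting a single trace might look like the following, using the Anthropic Python SDK with extended thinking enabled. The model id, token budgets, and record schema here are illustrative assumptions, not a published spec.

```python
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment


def collect_trace(prompt: str) -> dict:
    """Ask a frontier model for an answer with extended thinking enabled,
    keeping both the thinking trace and the final answer."""
    response = client.messages.create(
        model="claude-opus-4-20250514",  # illustrative model id
        max_tokens=8000,
        thinking={"type": "enabled", "budget_tokens": 4000},
        messages=[{"role": "user", "content": prompt}],
    )
    # The response content interleaves "thinking" and "text" blocks.
    thinking = "".join(
        block.thinking for block in response.content if block.type == "thinking"
    )
    answer = "".join(
        block.text for block in response.content if block.type == "text"
    )
    return {"prompt": prompt, "thinking": thinking, "answer": answer}


if __name__ == "__main__":
    print(collect_trace("Prove that the square root of 2 is irrational."))
```

Records like this can be appended to a JSONL file and later rendered into the chat format expected by the fine-tuning step.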

2. Fine-tuning

Using Unsloth for 2x faster training, we fine-tune open-source base models (primarily Qwen3 variants) on our reasoning datasets. This transfers the frontier model's reasoning patterns to the smaller model.
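
A minimal sketch of this stage with Unsloth and trl's SFTTrainer is below. The base model, LoRA rank, file names, and hyperparameters are placeholders, and argument names shift a bit between trl versions, so treat this as an outline rather than our exact training script.

```python
from unsloth import FastLanguageModel  # import unsloth first so it can patch transformers
from datasets import load_dataset
from trl import SFTTrainer, SFTConfig

# Load a base model in 4-bit; model name and context length are placeholders.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="Qwen/Qwen3-8B",
    max_seq_length=8192,
    load_in_4bit=True,
)

# Attach LoRA adapters so only a small fraction of the weights are trained.
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    lora_alpha=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
)

# One JSONL record per trace, pre-rendered into a single "text" field
# using the base model's chat template.
dataset = load_dataset("json", data_files="reasoning_traces.jsonl", split="train")

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    args=SFTConfig(
        dataset_text_field="text",
        max_seq_length=8192,
        per_device_train_batch_size=2,
        gradient_accumulation_steps=4,
        num_train_epochs=1,
        learning_rate=2e-4,
        output_dir="outputs",
    ),
)
trainer.train()
```

After training, the LoRA adapters can be merged back into the base weights so the result is a standard Hugging Face checkpoint ready for conversion in the next step.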

3. Quantization & Release

Models are converted to GGUF format with multiple quantization levels (Q3, Q4, Q6, Q8) for use with llama.cpp. This enables local deployment on consumer hardware.
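
For a rough idea of the release step, the sketch below drives llama.cpp's conversion and quantization tools from Python. It assumes a local llama.cpp checkout built with cmake; the paths and the specific quantization presets shown (Q3_K_M, Q4_K_M, Q6_K, Q8_0) are illustrative choices, not a fixed list.

```python
import subprocess
from pathlib import Path

LLAMA_CPP = Path("~/llama.cpp").expanduser()   # assumed local llama.cpp checkout
MERGED_MODEL = Path("outputs/merged")          # fine-tuned model merged to full weights
GGUF_F16 = Path("model-f16.gguf")

# Convert the Hugging Face checkpoint to a full-precision GGUF file.
subprocess.run(
    ["python", str(LLAMA_CPP / "convert_hf_to_gguf.py"),
     str(MERGED_MODEL), "--outfile", str(GGUF_F16), "--outtype", "f16"],
    check=True,
)

# Produce quantized variants (names follow llama.cpp's preset list).
for quant in ["Q3_K_M", "Q4_K_M", "Q6_K", "Q8_0"]:
    subprocess.run(
        [str(LLAMA_CPP / "build/bin/llama-quantize"),
         str(GGUF_F16), f"model-{quant}.gguf", quant],
        check=True,
    )
```

Each preset trades file size and memory use against output fidelity, so releasing several levels lets users pick what fits their hardware.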

Support Our Work

We're college students funding this research ourselves. Creating high-quality datasets from frontier models isn't cheap: our Claude Opus dataset alone cost over $52 to generate. If you find our models useful, please consider supporting us.