Early access opening soon

Your code.
Your hardware.
Your control.

Maurice is a local-first agentic coding framework. Run sophisticated AI agents entirely on your infrastructure — no API calls, no data exfiltration, no compromises.

No spam. Early access notification only.

maurice — ~/projects/api
maurice init --model qwen2.5-coder-32b
maurice plan "Add OAuth2 with full test coverage"
◆ Analyzing codebase structure...
Found: FastAPI app, 12 endpoints, pytest suite
◆ Decomposing into 4 subtasks...
maurice execute
[1/4] Creating OAuth2 schema and models
[2/4] Implementing token endpoints
[3/4] Adding middleware and dependencies
[4/4] Generating test suite (14 tests)
✓ Complete. All tests passing.
🔒
100% Local
Zero external API calls
🛡️
Air-Gap Ready
Built for sensitive environments
🔬
Security-First
Built by threat researchers

Three commands to autonomous coding

Maurice transforms natural language into executed code changes. No prompt engineering required.

1

Initialize

Point Maurice at your codebase. It builds a semantic index of your architecture, patterns, and conventions in minutes.

2

Plan

Describe what you want in plain English. Maurice decomposes complex requests into subtasks and shows you the execution plan.

3

Execute

Watch as coordinated agents implement changes, write tests, and validate results — all running locally on your hardware.
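
The three steps above boil down to a decompose-then-iterate pipeline. As a purely illustrative sketch of that flow — Maurice's internals are not public, and every name here is hypothetical — it might look like:

```python
from dataclasses import dataclass, field

@dataclass
class Subtask:
    description: str
    done: bool = False

@dataclass
class Plan:
    goal: str
    subtasks: list[Subtask] = field(default_factory=list)

def plan(goal: str) -> Plan:
    # A real planner would ask the local model to decompose the goal;
    # here we hard-code the OAuth2 example from the demo above.
    steps = [
        "Create OAuth2 schema and models",
        "Implement token endpoints",
        "Add middleware and dependencies",
        "Generate test suite",
    ]
    return Plan(goal, [Subtask(s) for s in steps])

def execute(p: Plan) -> None:
    for i, task in enumerate(p.subtasks, 1):
        print(f"[{i}/{len(p.subtasks)}] {task.description}")
        task.done = True  # a real agent would edit files and run tests here

p = plan("Add OAuth2 with full test coverage")
execute(p)
print("Complete." if all(t.done for t in p.subtasks) else "Incomplete.")
```

The point of the sketch is the shape, not the code: planning and execution are separate phases, so you see (and can veto) the plan before any agent touches your files.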

Built for real work

🧠

Self-Improving Intelligence

Maurice implements research-backed learning loops. Multi-agent coordination decomposes complex tasks into specialized subtasks. Continuous feedback integration means the system improves with use — your architectural decisions, naming conventions, and team patterns become part of its working knowledge.

🔒

Fully Local Execution

Runs on your hardware using quantized models optimized for consumer GPUs. No API keys, no cloud dependencies, no telemetry.

Agentic Architecture

Multi-agent coordination for complex projects. Maurice architects, refactors, tests, and ships entire features autonomously.

🛡️

Air-Gap Ready

Perfect for classified environments and regulated industries. Run in fully isolated networks with zero external connectivity.

🔧

Model Agnostic

Works with any GGUF-quantized model. Balance capability against hardware constraints on your terms.

How Maurice differs

Most AI coding tools require sending your code to external servers. Maurice keeps everything local.

Feature                    Maurice    Cursor    Copilot    Claude Code
Fully local execution      ✓          ✗         ✗          ✗
No API keys required       ✓          ✗         ✗          ✗
Air-gap compatible         ✓          ✗         ✗          ✗
Multi-agent coordination   ✓          ✗         ✗          Limited
Self-improving learning    ✓          ✗         ✗          ✗
Your code stays private    ✓          Policy    Policy     Policy
Bring your own model       ✓          ✗         ✗          ✗

Use what works for you

Maurice supports any GGUF-quantized model. Start with our recommended configuration or bring your own.

Qwen 2.5 Coder 32B ★ · DeepSeek Coder V2 · CodeLlama 34B · Mistral Large · Llama 3.1 70B · Codestral 22B · Any GGUF model

★ Recommended: Qwen 2.5 Coder 32B (Q4_K_M) — best capability/VRAM balance for 24GB+ cards

Simple, honest pricing

The core framework is free forever. Pay only if you need enterprise features.

Individual

Free

Full framework for personal and commercial projects.

  • All core features
  • Unlimited local usage
  • Community support
  • MIT licensed

Why local matters

"The best AI assistant is one that respects the boundaries of your machine — and your mind."

Every line of code carries context — your architecture decisions, your team's conventions, your business logic. That context is valuable. It shouldn't leave your control.

Maurice was built on a simple premise: powerful AI coding assistance shouldn't require trusting a third party with your intellectual property. Run it on your hardware. Keep your data where it belongs.

Named for Maurits Cornelis Escher — the artist who found infinite complexity in simple rules — Maurice embraces recursive self-improvement and elegant problem decomposition.

Common questions

What hardware do I need?

A GPU with 16GB+ VRAM for smaller models, 24GB+ for optimal performance with Qwen 32B. Works on NVIDIA, AMD, and Apple Silicon.
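
A rough rule of thumb for sizing: a quantized model's weight footprint is parameters × bits per weight, plus headroom for KV cache and activations. Assuming Q4_K_M averages roughly 4.5 bits per weight (these figures are approximate, not Maurice's official sizing guidance):

```python
def weight_vram_gb(params_billion: float, bits_per_weight: float) -> float:
    """Approximate VRAM needed for model weights alone, in GB."""
    bytes_total = params_billion * 1e9 * bits_per_weight / 8
    return bytes_total / 1024**3

# Qwen 2.5 Coder 32B at Q4_K_M (~4.5 bits/weight): ~16.8 GB of weights,
# which is why a 24 GB card leaves comfortable headroom for context.
print(f"{weight_vram_gb(32, 4.5):.1f} GB")
```

The same arithmetic explains the 16 GB floor: a 7B–14B model at 4–5 bits per weight fits with room to spare, while 32B-class models want a 24 GB card.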

How is this different from running Ollama?

Maurice adds agentic capabilities on top of local models — multi-step planning, codebase awareness, self-improvement, and coordinated execution.
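
The difference is the loop around the model. A bare completion server maps one prompt to one response and stops; an agentic layer wraps the model in repeated plan → act → verify iterations, feeding each result back in. A stubbed illustration (the `local_model` function stands in for any local inference call — this is not Maurice's actual code):

```python
def local_model(prompt: str) -> str:
    # Stand-in for a one-shot local inference call (e.g. a llama.cpp
    # or Ollama server). A bare completion API stops here.
    return f"<completion for: {prompt!r}>"

def agentic_loop(goal: str, max_steps: int = 3) -> list[str]:
    """An agent layer wraps the model in plan -> act -> verify iterations."""
    trace = []
    for step in range(1, max_steps + 1):
        action = local_model(f"step {step} toward: {goal}")
        trace.append(action)
        # A real agent would apply the edit, run the tests,
        # and feed the results into the next step's prompt.
    return trace

trace = agentic_loop("add OAuth2 endpoints")
print(len(trace))
```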

Can I use it in air-gapped environments?

Yes. Maurice requires zero network connectivity after initial setup. Perfect for classified, financial, or healthcare environments.

Is my code used to train models?

Never. Everything runs locally. Your code never leaves your machine, and there's no telemetry or data collection.

What languages are supported?

Any language your chosen model supports. Qwen 2.5 Coder handles Python, TypeScript, Rust, Go, and 90+ other languages well.

When is the public release?

Early access is rolling out now. Join the waitlist to secure your spot. Public release planned for Q2 2025.

Ready to own your AI?

Join the waitlist for early access. Be the first to run truly autonomous coding on your own terms.