Local API Server
Jan provides an OpenAI-compatible API server that runs entirely on your computer. Use the same API patterns you know from OpenAI, but with complete control over your models and data.
Features
- OpenAI-compatible - Drop-in replacement for the OpenAI API
- Local models - Run GGUF models via llama.cpp
- Cloud models - Proxy to OpenAI, Anthropic, and others
- Privacy-first - Local models never send data externally
- No vendor lock-in - Switch between providers seamlessly
Quick Start
Start the server in Settings > Local API Server and make requests to http://localhost:1337/v1:
```bash
curl http://localhost:1337/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -d '{
    "model": "MODEL_ID",
    "messages": [{"role": "user", "content": "Hello!"}]
  }'
```
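The response follows the OpenAI chat-completion format. A minimal sketch of extracting the assistant's reply in Python — the field values below are illustrative, not output from a real server; only the shape follows OpenAI's schema:

```python
import json

# Illustrative response body in the OpenAI chat-completion shape;
# the actual id, model, and content come from your running server.
raw = '''{
  "id": "chatcmpl-123",
  "object": "chat.completion",
  "model": "MODEL_ID",
  "choices": [{
    "index": 0,
    "message": {"role": "assistant", "content": "Hello! How can I help?"},
    "finish_reason": "stop"
  }]
}'''

response = json.loads(raw)
# The assistant's text lives at choices[0].message.content.
reply = response["choices"][0]["message"]["content"]
print(reply)
```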
Documentation
- API Reference - Interactive API documentation with Try It Out
- API Configuration - Server settings, authentication, CORS
- Engine Settings - Configure llama.cpp for your hardware
- Server Settings - Advanced configuration options
Integration Examples
Continue (VS Code)
```json
{
  "models": [{
    "title": "Jan",
    "provider": "openai",
    "baseURL": "http://localhost:1337/v1",
    "apiKey": "YOUR_API_KEY",
    "model": "MODEL_ID"
  }]
}
```
Python (OpenAI SDK)
```python
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:1337/v1",
    api_key="YOUR_API_KEY",
)

response = client.chat.completions.create(
    model="MODEL_ID",
    messages=[{"role": "user", "content": "Hello!"}],
)
```
JavaScript/TypeScript
```javascript
const response = await fetch('http://localhost:1337/v1/chat/completions', {
  method: 'POST',
  headers: {
    'Content-Type': 'application/json',
    'Authorization': 'Bearer YOUR_API_KEY'
  },
  body: JSON.stringify({
    model: 'MODEL_ID',
    messages: [{ role: 'user', content: 'Hello!' }]
  })
});
```
Supported Endpoints
| Endpoint | Description |
|---|---|
| /v1/chat/completions | Chat completions (streaming supported) |
| /v1/models | List available models |
| /v1/models/{id} | Get model information |
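When the request body includes `"stream": true`, `/v1/chat/completions` returns server-sent events in the OpenAI streaming format: each `data:` line carries a JSON chunk whose `choices[0].delta` holds a fragment of the reply, and a `data: [DONE]` sentinel closes the stream. A sketch of assembling the full reply, using hardcoded sample lines in place of a live connection:

```python
import json

# Sample SSE lines in the OpenAI streaming format; a live server
# would deliver these incrementally over the HTTP response.
sse_lines = [
    'data: {"choices": [{"delta": {"role": "assistant"}}]}',
    'data: {"choices": [{"delta": {"content": "Hello"}}]}',
    'data: {"choices": [{"delta": {"content": ", world!"}}]}',
    'data: [DONE]',
]

parts = []
for line in sse_lines:
    payload = line.removeprefix("data: ")
    if payload == "[DONE]":  # sentinel marking the end of the stream
        break
    chunk = json.loads(payload)
    delta = chunk["choices"][0]["delta"]
    if "content" in delta:  # the first chunk may carry only the role
        parts.append(delta["content"])

full_reply = "".join(parts)
print(full_reply)  # "Hello, world!"
```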
Why Use Jan's API?
- Privacy - Your data stays on your machine with local models
- Cost - No API fees for local model usage
- Control - Choose your models, parameters, and hardware
- Flexibility - Mix local and cloud models as needed
Related Resources
- Models Overview - Available models
- Data Storage - Where Jan stores data
- Troubleshooting - Common issues
- GitHub Repository - Source code