# Models Overview

AI models power Jan's conversations. You can run models locally on your device for privacy, or connect to cloud providers for more power.
## Quick Start

- New to Jan? Start with Jan-v1 (4B) - it runs on most computers
- Limited hardware? Use cloud models with your API keys
- Privacy focused? Download any local model - your data never leaves your device
## Local Models

Local models are managed through Llama.cpp and use the GGUF format. When run locally, they consume your computer's memory (RAM) and processing power, so make sure you download models that match your hardware specifications.
### Adding Local Models

#### 1. Download from Jan Hub (Recommended)

The easiest way to get started is Jan's built-in model hub, which connects to Hugging Face's Model Hub:
- Go to the Hub tab
- Browse available models and click on any model to see details
- Choose a model that fits your needs & hardware specifications
- Click Download on your chosen model

#### 2. Import from Hugging Face

You can download models with a direct link from Hugging Face:
Note: Some models require a Hugging Face Access Token. Enter your token in Settings > Model Providers > Hugging Face before importing.
- Visit Hugging Face Models
- Find a GGUF model that fits your computer
- Copy the model ID (e.g., TheBloke/Mistral-7B-v0.1-GGUF)
- In Jan, paste the model ID into the search bar on the Hub page
- Select your preferred quantized version to download
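The model ID and quantized filename combine into a predictable direct-download URL. A small sketch, assuming an example repo and filename (check the repo's "Files" tab for the quantized filenames it actually ships):

```python
# Sketch: build the direct-download URL for a GGUF file on Hugging Face.
# The repo ID and filename below are examples, not requirements.
repo_id = "TheBloke/Mistral-7B-v0.1-GGUF"
filename = "mistral-7b-v0.1.Q4_K_M.gguf"

# Hugging Face serves raw files at /<repo>/resolve/<revision>/<filename>.
url = f"https://huggingface.co/{repo_id}/resolve/main/{filename}"
print(url)
```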
#### 3. Import Local Files

If you already have GGUF model files on your computer:
- Go to Settings > Model Providers > Llama.cpp
- Click Import and select your GGUF file(s)
- Choose how to import:
  - Link Files: Creates symbolic links (saves space)
  - Duplicate: Copies files to Jan's directory
- Click Import to complete
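The two import modes roughly correspond to a symbolic link versus a full copy on disk. A minimal sketch of the difference, using temporary paths as stand-ins (Jan manages its own data folder internally):

```python
import os
import shutil
import tempfile

# Stand-in paths; Jan's real data folder layout may differ.
src_dir = tempfile.mkdtemp()
jan_models = tempfile.mkdtemp()
src = os.path.join(src_dir, "model.gguf")
with open(src, "wb") as f:
    f.write(b"\x00" * 1024)  # placeholder for a real multi-GB GGUF file

# "Link Files": a symbolic link, so no extra disk space is used.
# (On Windows, creating symlinks may require developer mode or admin rights.)
linked = os.path.join(jan_models, "model-linked.gguf")
os.symlink(src, linked)

# "Duplicate": a full copy inside Jan's directory, costing the full file size.
copied = os.path.join(jan_models, "model-copy.gguf")
shutil.copy2(src, copied)

print(os.path.islink(linked), os.path.getsize(copied))
```

Linking saves space but breaks if the original file moves; duplicating is self-contained at the cost of storage.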

#### 4. Manual Setup

For advanced users who want to add models not available in Jan Hub:
##### Step 1: Create Model File

- Navigate to the Jan Data Folder
- Open the `models` folder
- Create a new folder for your model
- Add your `model.gguf` file
- Add a `model.yml` configuration file. Example:
```yaml
model_path: llamacpp/models/Jan-v1-4B-Q4_K_M/model.gguf
name: Jan-v1-4B-Q4_K_M
size_bytes: 2497281632
```

That's it! Jan now uses a simplified YAML format. All other parameters (temperature, context length, etc.) can be configured directly in the UI when you select the model.
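The steps above can be scripted. A sketch that creates the folder layout and generates a matching `model.yml`, using a temporary directory as a stand-in for the real Jan Data Folder:

```python
import os
import tempfile

# Temp directory stands in for the Jan Data Folder in this sketch.
data_folder = tempfile.mkdtemp()
model_dir = os.path.join(data_folder, "llamacpp", "models", "Jan-v1-4B-Q4_K_M")
os.makedirs(model_dir)

# Placeholder file; your real downloaded GGUF goes here.
gguf_path = os.path.join(model_dir, "model.gguf")
open(gguf_path, "wb").close()

# Generate the minimal YAML format shown above; size_bytes is read from disk.
yml = (
    "model_path: llamacpp/models/Jan-v1-4B-Q4_K_M/model.gguf\n"
    "name: Jan-v1-4B-Q4_K_M\n"
    f"size_bytes: {os.path.getsize(gguf_path)}\n"
)
with open(os.path.join(model_dir, "model.yml"), "w") as f:
    f.write(yml)
print(yml)
```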
##### Step 2: Customize in the UI

Once your model is added:
- Select it in a chat
- Click the gear icon next to the model
- Adjust any parameters you need
### Delete Local Models

- Go to Settings > Model Providers > Llama.cpp
- Find the model you want to remove
- Click the three dots icon and select Delete Model

## Cloud Models

Jan supports connecting to various AI cloud providers through OpenAI-compatible APIs, including OpenAI (GPT-4o, o1), Anthropic (Claude), Groq, Mistral, and more.
### Setting Up Cloud Models

- Navigate to Settings
- Under Model Providers in the left sidebar, choose your provider
- Enter your API key
- Activated cloud models appear in your model selector

Once you add your API key, you can select any of that provider's models in the chat interface.
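Under the hood, these providers all accept the same OpenAI-style chat-completions request. A minimal sketch of that request shape, using the OpenAI endpoint and `gpt-4o` as examples (substitute your provider's base URL and model name); the request is only sent if an API key is present in the environment:

```python
import json
import os
import urllib.request

# Example values; swap in your provider's OpenAI-compatible endpoint and model.
base_url = "https://api.openai.com/v1"
payload = {
    "model": "gpt-4o",
    "messages": [{"role": "user", "content": "Hello!"}],
}

api_key = os.environ.get("OPENAI_API_KEY")
if api_key:
    # Standard OpenAI-compatible chat-completions call.
    req = urllib.request.Request(
        f"{base_url}/chat/completions",
        data=json.dumps(payload).encode(),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        print(json.load(resp)["choices"][0]["message"]["content"])
else:
    # No key configured: just show the request body that would be sent.
    print(json.dumps(payload, indent=2))
```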

## Choosing Between Local and Cloud

### Local Models

Best for:
- Privacy-sensitive work
- Offline usage
- Unlimited conversations without costs
- Full control over model behavior
Requirements:
- 8GB RAM minimum (16GB+ recommended)
- 10-50GB storage per model
- CPU or GPU for processing
### Cloud Models

Best for:
- Advanced capabilities (GPT-4, Claude 3)
- Limited hardware
- Occasional use
- Latest model versions
Requirements:
- Internet connection
- API keys from providers
- Usage-based payment
## Hardware Guidelines

| RAM | Recommended Model Size |
|---|---|
| 8GB | 1-3B parameters |
| 16GB | 7B parameters |
| 32GB | 13B parameters |
| 64GB+ | 30B+ parameters |
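The table above can be approximated with a back-of-the-envelope formula: a quantized model needs roughly parameter-count times bits-per-weight divided by 8 bytes for the weights, plus runtime overhead. A sketch using an assumed ~4.5 bits per weight (typical of Q4_K_M quantization) and an arbitrary 20% overhead factor, neither of which is an official figure:

```python
def estimated_ram_gb(params_billions: float, bits_per_weight: float = 4.5) -> float:
    """Rough heuristic, not an official formula.

    At 8 bits per weight, 1B parameters take about 1 GB; the 1.2 factor is an
    illustrative allowance for the KV cache and runtime overhead.
    """
    weights_gb = params_billions * bits_per_weight / 8
    return round(weights_gb * 1.2, 1)

for size in (3, 7, 13, 30):
    print(f"{size}B params -> ~{estimated_ram_gb(size)} GB RAM")
```

Actual usage also depends on context length and the specific quantization, so treat the table as the practical guide and the formula as intuition.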
## Next Steps

- Explore Jan Models - our optimized models
- Set up Cloud Providers - Connect external services
- Learn Model Parameters - Fine-tune behavior
- Create AI Assistants - Customize models with instructions