Local API Reference

Run Jan locally on your machine with llama.cpp's high-performance inference engine

Base URL: http://localhost:1337
Engine: llama.cpp
Format: OpenAI Compatible
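Because the server speaks the OpenAI format, any OpenAI-style client can talk to it. Below is a minimal sketch in TypeScript using the built-in fetch API, assuming the standard OpenAI-compatible /v1/chat/completions route; the model id is a placeholder, substitute whichever model you have downloaded in Jan.

```typescript
// Minimal sketch: send an OpenAI-style chat completion request to the
// local Jan server. Requires Node 18+ (or any runtime with global fetch).
async function chat(prompt: string): Promise<string> {
  const res = await fetch("http://localhost:1337/v1/chat/completions", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      model: "llama3.2-1b-instruct", // placeholder: use a model you have installed
      messages: [{ role: "user", content: prompt }],
    }),
  });
  if (!res.ok) throw new Error(`Request failed: HTTP ${res.status}`);
  const data = await res.json();
  // OpenAI-compatible responses return choices[].message.content
  return data.choices[0].message.content;
}

chat("Hello from the local API!").then(console.log).catch(console.error);
```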
Getting Started: Make sure Jan is running locally on your machine. You can start the server from the Jan application or via its CLI. The default port is 1337, but you can change it in the server settings.
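Before sending requests, you can verify the server is reachable. The sketch below assumes the standard OpenAI-compatible GET /v1/models route is exposed on the configured port; adjust the base URL if you changed the port in settings.

```typescript
// Minimal sketch: confirm the Jan server is up and list available models.
async function checkServer(baseUrl = "http://localhost:1337"): Promise<void> {
  try {
    const res = await fetch(`${baseUrl}/v1/models`);
    if (!res.ok) throw new Error(`HTTP ${res.status}`);
    // OpenAI-compatible model lists put entries under a "data" array
    const { data } = await res.json();
    console.log(`Server is up. ${data.length} model(s) available.`);
  } catch (err) {
    console.error("Jan server not reachable. Is the app running?", err);
  }
}

checkServer();
```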