📄️ Quick Start
Quick start CLI, Config, Docker
📄️ Getting Started - E2E Tutorial
End-to-End tutorial for LiteLLM Proxy to:
📄️ 🐳 Docker, Deployment
You can find the Dockerfile to build the LiteLLM Proxy here
📄️ Demo App
Here is a demo of the proxy. To log in pass in:
📄️ ⚡ Best Practices for Production
1. Use this config.yaml
🗃️ Architecture
2 items
🔗 📖 All Endpoints (Swagger)
📄️ ✨ Enterprise Features
To get a license, get in touch with us here
📄️ Langchain, OpenAI SDK, LlamaIndex, Instructor, Curl examples
LiteLLM Proxy is OpenAI-Compatible, and supports:
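Because the proxy is OpenAI-compatible, any OpenAI SDK can talk to it by swapping the base URL. A minimal sketch, assuming the proxy runs locally on port 4000 with a virtual key `sk-1234` (both are placeholders for your own deployment):

```python
from openai import OpenAI

# Point the standard OpenAI client at the LiteLLM Proxy.
# The base URL and API key below are placeholders, not fixed values.
client = OpenAI(api_key="sk-1234", base_url="http://0.0.0.0:4000")

response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # any model name configured on the proxy
    messages=[{"role": "user", "content": "Hello from the proxy!"}],
)
print(response.choices[0].message.content)
```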
📄️ Proxy Config.yaml
Set model list, api_base, api_key, temperature & proxy server settings (master-key) on the config.yaml.
📄️ Rate Limit Headers
When you make a request to the proxy, the proxy will return the following OpenAI-compatible headers:
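One way to inspect those headers is a raw HTTP call. A sketch using `requests`, assuming a local proxy on port 4000 and the OpenAI-style `x-ratelimit-*` header names:

```python
import requests

# Placeholder URL and key for a local proxy deployment.
resp = requests.post(
    "http://0.0.0.0:4000/chat/completions",
    headers={"Authorization": "Bearer sk-1234"},
    json={
        "model": "gpt-3.5-turbo",
        "messages": [{"role": "user", "content": "ping"}],
    },
)

# Rate limit headers come back on the response; the exact names below
# assume the OpenAI x-ratelimit-* convention.
for name in ("x-ratelimit-remaining-requests", "x-ratelimit-remaining-tokens"):
    print(name, "=", resp.headers.get(name))
```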
📄️ Fallbacks, Load Balancing, Retries
- Quick Start load balancing
🗃️ 🔑 Authentication
5 items
🗃️ 💸 Spend Tracking + Budgets
6 items
🗃️ Routing
4 items
🗃️ Pass-through Endpoints (Provider-specific)
6 items
🗃️ Admin UI
3 items
🗃️ 🪢 Logging, Alerting, Metrics
6 items
🗃️ 🛡️ [Beta] Guardrails
9 items
🗃️ Secret Manager - storing LLM API Keys
2 items
📄️ Caching
Cache LLM Responses
📄️ ➡️ Create Pass Through Endpoints
Add pass through routes to LiteLLM Proxy
📄️ Email Notifications
Send an email to your users when:
📄️ Attribute Management changes to Users
Call management endpoints on behalf of a user. (Useful when connecting the proxy to your development platform.)
📄️ Model Management
Add new models + get model info without restarting the proxy.
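For example, model info can be fetched from the proxy's management API without a restart. A sketch assuming a local proxy, a master key, and a `/model/info` route (the route and response shape here are assumptions; check the Swagger docs for your version):

```python
import requests

# Placeholder proxy URL and master key.
resp = requests.get(
    "http://0.0.0.0:4000/model/info",
    headers={"Authorization": "Bearer sk-1234"},
)

# Assumes the response wraps configured models in a "data" list.
for model in resp.json().get("data", []):
    print(model.get("model_name"))
```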
📄️ Health Checks
Use this to health check all LLMs defined in your config.yaml
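A sketch of calling the health endpoint, assuming a local proxy on port 4000, a master key `sk-1234`, and a `/health` route (the response field names below are assumptions):

```python
import requests

# Placeholder URL and key; /health runs a check against every model in the config.
resp = requests.get(
    "http://0.0.0.0:4000/health",
    headers={"Authorization": "Bearer sk-1234"},
)
report = resp.json()

# Field names assumed for illustration; inspect the raw JSON for your version.
print("healthy:", report.get("healthy_count"))
print("unhealthy:", report.get("unhealthy_count"))
```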
📄️ Debugging
Two levels of debugging are supported.
📄️ Modify / Reject Incoming Requests
- Modify data before making LLM API calls on the proxy
📄️ Post-Call Rules
Use this to fail a request based on the output of an LLM API call.
📄️ CLI Arguments
CLI arguments: --host, --port, --num_workers