Teddy Amkie dcbccd1fea Corrected docs updates sept 2025 (#14916)
* docs: Corrected documentation updates from Sept 2025

This PR contains the actual intended documentation changes, properly synced with main:

 Real changes applied:
- Added AWS authentication link to bedrock guardrails documentation
- Updated Vertex AI with Gemini API alternative configuration
- Added async_post_call_success_hook code snippet to custom callback docs
- Added SSO free for up to 5 users information to enterprise and custom_sso docs
- Added SSO free information block to security.md
- Added cancel response API usage and curl example to response_api.md
- Added image for modifying default user budget via admin UI
- Re-ordered sidebars in documentation

 Sync issues resolved:
- Kept all upstream changes that were added to main after branch diverged
- Preserved Provider-Specific Metadata Parameters section that was added upstream
- Maintained proper curl parameter formatting (-d instead of -D)

This corrects the sync issues from the original PR #14769.

* docs: Restore missing files from original PR

Added back ~16 missing documentation files that were part of the original PR:

 Restored files:
- docs/my-website/docs/completion/usage.md
- docs/my-website/docs/fine_tuning.md
- docs/my-website/docs/getting_started.md
- docs/my-website/docs/image_edits.md
- docs/my-website/docs/image_generation.md
- docs/my-website/docs/index.md
- docs/my-website/docs/moderation.md
- docs/my-website/docs/observability/callbacks.md
- docs/my-website/docs/providers/bedrock.md
- docs/my-website/docs/proxy/caching.md
- docs/my-website/docs/proxy/config_settings.md
- docs/my-website/docs/proxy/db_deadlocks.md
- docs/my-website/docs/proxy/load_balancing.md
- docs/my-website/docs/proxy_api.md
- docs/my-website/docs/rerank.md

 Fixed context-caching issue:
- Restored provider_specific_params.md to main version (preserving Provider-Specific Metadata Parameters section)
- Your original PR didn't intend to modify this file - it was just a sync issue

Now includes all ~26 documentation files from the original PR #14769.

* docs: Remove files that were deleted in original PR

- Removed docs/my-website/docs/providers/azure_ai_img_edit.md (was deleted in original PR)
- sdk/headers.md was already not present

Now matches the complete intended changes from original PR #14769.

* docs: Restore azure_ai_img_edit.md from main

- Restored docs/my-website/docs/providers/azure_ai_img_edit.md from main branch
- This file should not have been deleted as it was a newer commit
- SDK headers file doesn't exist in main (was reverted) and wasn't part of your original changes

Fixes the file restoration issues.

* docs: Fix vertex.md - preserve context caching from newer commit

- Restored vertex.md to main version to preserve context caching content (lines 817-887)
- Added back only your intended change: alternative gemini config example
- Context caching content from newer commit is now preserved

Fixes the vertex.md sync issue where newer content was incorrectly deleted.

* docs: Fix providers/bedrock.md - restore deleted content from newer commit

- Restored providers/bedrock.md to main version
- Preserves 'Usage - Request Metadata' section that was added in newer commit
- Your actual intended change was to proxy/guardrails/bedrock.md (authentication tip) which is preserved
- Now only has additions, no subtractions as intended

Fixes the bedrock.md sync issue.

* docs: Restore missing IAM policy section in bedrock.md

Added back your intended IAM policy documentation that was lost when restoring main version:

 Added IAM AssumeRole Policy section:
- Explains requirement for sts:AssumeRole permission
- Shows error message example when permission missing
- Provides complete IAM policy JSON example
- Links to AWS AssumeRole documentation
- Clarifies trust policy requirements

Now bedrock.md has both:
- All newer content preserved (Request Metadata section)
- Your intended IAM policy addition restored

---------

Co-authored-by: Cursor Agent <cursoragent@cursor.com>
2025-09-25 15:49:19 -07:00

🚅 LiteLLM

Deploy to Render | Deploy on Railway

Call all LLM APIs using the OpenAI format [Bedrock, Huggingface, VertexAI, TogetherAI, Azure, OpenAI, Groq etc.]

LiteLLM Proxy Server (LLM Gateway) | Hosted Proxy (Preview) | Enterprise Tier

PyPI Version | Y Combinator W23 | Whatsapp | Discord | Slack

LiteLLM manages:

  • Translate inputs to the provider's completion, embedding, and image_generation endpoints
  • Consistent output: text responses are always available at ['choices'][0]['message']['content']
  • Retry/fallback logic across multiple deployments (e.g. Azure/OpenAI) - Router (see the sketch after this list)
  • Set budgets & rate limits per project, API key, and model - LiteLLM Proxy Server (LLM Gateway)
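
A minimal Router sketch for the retry/fallback bullet above; the deployment entries, keys, and api_base values are illustrative placeholders, not a definitive setup:

# A minimal Router sketch - two illustrative deployments behind one model alias.
# Keys, api_base, and api_version values are placeholders, not working credentials.
from litellm import Router

router = Router(
    model_list=[
        {
            "model_name": "gpt-4o",  # alias callers use
            "litellm_params": {
                "model": "azure/gpt-4o",
                "api_key": "your-azure-key",
                "api_base": "https://your-endpoint.openai.azure.com",
                "api_version": "2024-02-01",
            },
        },
        {
            "model_name": "gpt-4o",
            "litellm_params": {"model": "openai/gpt-4o", "api_key": "your-openai-key"},
        },
    ],
    num_retries=2,  # retries before falling back to the other deployment
)

response = router.completion(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Hello, how are you?"}],
)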

Jump to LiteLLM Proxy (LLM Gateway) Docs
Jump to Supported LLM Providers

🚨 Stable Release: use docker images with the -stable tag. These have undergone 12-hour load tests before being published. More information about the release cycle is here

Support for more providers is ongoing. Missing a provider or LLM platform? Raise a feature request.

Usage (Docs)

Important

LiteLLM v1.0.0 now requires openai>=1.0.0. Migration guide here.
LiteLLM v1.40.14+ now requires pydantic>=2.0.0. No changes required.

Open In Colab
pip install litellm
from litellm import completion
import os

## set ENV variables
os.environ["OPENAI_API_KEY"] = "your-openai-key"
os.environ["ANTHROPIC_API_KEY"] = "your-anthropic-key"

messages = [{ "content": "Hello, how are you?","role": "user"}]

# openai call
response = completion(model="openai/gpt-4o", messages=messages)

# anthropic call
response = completion(model="anthropic/claude-sonnet-4-20250514", messages=messages)
print(response)

Response (OpenAI Format)

{
    "id": "chatcmpl-1214900a-6cdd-4148-b663-b5e2f642b4de",
    "created": 1751494488,
    "model": "claude-sonnet-4-20250514",
    "object": "chat.completion",
    "system_fingerprint": null,
    "choices": [
        {
            "finish_reason": "stop",
            "index": 0,
            "message": {
                "content": "Hello! I'm doing well, thank you for asking. I'm here and ready to help with whatever you'd like to discuss or work on. How are you doing today?",
                "role": "assistant",
                "tool_calls": null,
                "function_call": null
            }
        }
    ],
    "usage": {
        "completion_tokens": 39,
        "prompt_tokens": 13,
        "total_tokens": 52,
        "completion_tokens_details": null,
        "prompt_tokens_details": {
            "audio_tokens": null,
            "cached_tokens": 0
        },
        "cache_creation_input_tokens": 0,
        "cache_read_input_tokens": 0
    }
}

Call any model supported by a provider with model=<provider_name>/<model_name>. There may be provider-specific details, so refer to the provider docs for more information.
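
A short sketch of the same convention with two other providers; the model names and keys below are illustrative placeholders:

# Illustrative only - the provider prefix routes the call; keys and model names
# are placeholders and may differ from what your account has access to.
import os
from litellm import completion

os.environ["GROQ_API_KEY"] = "your-groq-key"
os.environ["MISTRAL_API_KEY"] = "your-mistral-key"

messages = [{"role": "user", "content": "Hello, how are you?"}]

response = completion(model="groq/llama-3.1-8b-instant", messages=messages)
response = completion(model="mistral/mistral-small-latest", messages=messages)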

Async (Docs)

from litellm import acompletion
import asyncio

async def test_get_response():
    user_message = "Hello, how are you?"
    messages = [{"content": user_message, "role": "user"}]
    response = await acompletion(model="openai/gpt-4o", messages=messages)
    return response

response = asyncio.run(test_get_response())
print(response)

Streaming (Docs)

LiteLLM supports streaming the model response back: pass stream=True to get a streaming iterator in the response. Streaming is supported for all models (Bedrock, Huggingface, TogetherAI, Azure, OpenAI, etc.)

from litellm import completion
response = completion(model="openai/gpt-4o", messages=messages, stream=True)
for part in response:
    print(part.choices[0].delta.content or "")

# claude sonnet 4
response = completion('anthropic/claude-sonnet-4-20250514', messages, stream=True)
for part in response:
    print(part)

Response chunk (OpenAI Format)

{
    "id": "chatcmpl-fe575c37-5004-4926-ae5e-bfbc31f356ca",
    "created": 1751494808,
    "model": "claude-sonnet-4-20250514",
    "object": "chat.completion.chunk",
    "system_fingerprint": null,
    "choices": [
        {
            "finish_reason": null,
            "index": 0,
            "delta": {
                "provider_specific_fields": null,
                "content": "Hello",
                "role": "assistant",
                "function_call": null,
                "tool_calls": null,
                "audio": null
            },
            "logprobs": null
        }
    ],
    "provider_specific_fields": null,
    "stream_options": null,
    "citations": null
}
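
Async and streaming can also be combined; a small sketch (not part of the quick start above) using acompletion with stream=True:

# A minimal async + streaming sketch: acompletion also accepts stream=True
# and returns an iterator you can consume with `async for`.
import asyncio
from litellm import acompletion

async def stream_response():
    response = await acompletion(
        model="openai/gpt-4o",
        messages=[{"role": "user", "content": "Hello, how are you?"}],
        stream=True,
    )
    async for part in response:
        print(part.choices[0].delta.content or "", end="")

asyncio.run(stream_response())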

Logging Observability (Docs)

LiteLLM exposes pre-defined callbacks to send data to Lunary, MLflow, Langfuse, DynamoDB, S3 buckets, Helicone, Promptlayer, Traceloop, Athina, and Slack

import os
import litellm
from litellm import completion

## set env variables for logging tools (when using MLflow, no API key set up is required)
os.environ["LUNARY_PUBLIC_KEY"] = "your-lunary-public-key"
os.environ["HELICONE_API_KEY"] = "your-helicone-auth-key"
os.environ["LANGFUSE_PUBLIC_KEY"] = ""
os.environ["LANGFUSE_SECRET_KEY"] = ""
os.environ["ATHINA_API_KEY"] = "your-athina-api-key"

os.environ["OPENAI_API_KEY"] = "your-openai-key"

# set callbacks
litellm.success_callback = ["lunary", "mlflow", "langfuse", "athina", "helicone"] # log input/output to lunary, mlflow, langfuse, athina, helicone

#openai call
response = completion(model="openai/gpt-4o", messages=[{"role": "user", "content": "Hi 👋 - i'm openai"}])
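
Besides the string integrations above, litellm also accepts plain Python callables as success callbacks; a minimal sketch (the function name here is illustrative):

# A hedged sketch of a custom callback function. litellm passes the request
# kwargs, the completion response, and start/end datetimes to the callable.
import litellm
from litellm import completion

def log_duration_callback(kwargs, completion_response, start_time, end_time):
    # kwargs carries the original request params; start_time/end_time are datetimes
    print("model:", kwargs.get("model"))
    print("duration (s):", (end_time - start_time).total_seconds())

litellm.success_callback = [log_duration_callback]
response = completion(model="openai/gpt-4o", messages=[{"role": "user", "content": "Hi"}])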

LiteLLM Proxy Server (LLM Gateway) - (Docs)

Track spend + Load Balance across multiple projects

Hosted Proxy (Preview)

The proxy provides:

  1. Hooks for auth
  2. Hooks for logging (a hook sketch follows this list)
  3. Cost tracking
  4. Rate Limiting

📖 Proxy Endpoints - Swagger Docs

Quick Start Proxy - CLI

pip install 'litellm[proxy]'

Step 1: Start litellm proxy

$ litellm --model huggingface/bigcode/starcoder

#INFO: Proxy running on http://0.0.0.0:4000

Step 2: Make ChatCompletions Request to Proxy

Important

💡 Use LiteLLM Proxy with Langchain (Python, JS), OpenAI SDK (Python, JS), Anthropic SDK, Mistral SDK, LlamaIndex, Instructor, and curl

import openai # openai v1.0.0+
client = openai.OpenAI(api_key="anything", base_url="http://0.0.0.0:4000")  # set proxy to base_url
# request sent to model set on litellm proxy, `litellm --model`
response = client.chat.completions.create(model="gpt-3.5-turbo", messages=[
    {
        "role": "user",
        "content": "this is a test request, write a short poem"
    }
])

print(response)
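
The proxy also works through the other clients mentioned above; a hedged Langchain sketch, assuming the langchain_openai package is installed (URL and key are the quick-start placeholders):

# Sketch only - point Langchain's OpenAI-compatible client at the LiteLLM proxy.
from langchain_openai import ChatOpenAI

chat = ChatOpenAI(
    model="gpt-3.5-turbo",           # whichever model the proxy is serving
    api_key="anything",              # the proxy holds the real provider credentials
    base_url="http://0.0.0.0:4000",  # LiteLLM proxy endpoint
)

print(chat.invoke("this is a test request, write a short poem"))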

Proxy Key Management (Docs)

Connect the proxy with a Postgres DB to create proxy keys

# Get the code
git clone https://github.com/BerriAI/litellm

# Go to folder
cd litellm

# Add the master key - you can change this after setup
echo 'LITELLM_MASTER_KEY="sk-1234"' > .env

# Add the litellm salt key - you cannot change this after adding a model
# It is used to encrypt / decrypt your LLM API Key credentials
# We recommend - https://1password.com/password-generator/
# password generator to get a random hash for litellm salt key
echo 'LITELLM_SALT_KEY="sk-1234"' >> .env

source .env

# Start
docker-compose up

UI available at /ui on your proxy server

Set budgets and rate limits across multiple projects with POST /key/generate

Request

curl 'http://0.0.0.0:4000/key/generate' \
--header 'Authorization: Bearer sk-1234' \
--header 'Content-Type: application/json' \
--data-raw '{"models": ["gpt-3.5-turbo", "gpt-4", "claude-2"], "duration": "20m","metadata": {"user": "ishaan@berri.ai", "team": "core-infra"}}'

Expected Response

{
    "key": "sk-kdEXbIqZRwEeEiHwdg7sFA", # Bearer token
    "expires": "2023-11-19T01:38:25.838000+00:00" # datetime object
}
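
The same call in Python, sketched with the requests library and the placeholder master key from this quick start:

# Sketch only - mirrors the curl request above against the local proxy.
import requests

resp = requests.post(
    "http://0.0.0.0:4000/key/generate",
    headers={"Authorization": "Bearer sk-1234", "Content-Type": "application/json"},
    json={
        "models": ["gpt-3.5-turbo", "gpt-4", "claude-2"],
        "duration": "20m",
        "metadata": {"user": "ishaan@berri.ai", "team": "core-infra"},
    },
)
generated_key = resp.json()["key"]  # use this as the Bearer token in later requests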

Supported Providers (Docs)

Provider Completion Streaming Async Completion Async Streaming Async Embedding Async Image Generation
openai
Meta - Llama API
azure
AI/ML API
aws - sagemaker
aws - bedrock
google - vertex_ai
google - palm
google AI Studio - gemini
mistral ai api
cloudflare AI Workers
CompactifAI
cohere
anthropic
empower
huggingface
replicate
together_ai
openrouter
ai21
baseten
vllm
nlp_cloud
aleph alpha
petals
ollama
deepinfra
perplexity-ai
Groq AI
Deepseek
anyscale
IBM - watsonx.ai
voyage ai
xinference [Xorbits Inference]
FriendliAI
Galadriel
GradientAI
Novita AI
Featherless AI
Nebius AI Studio
Heroku
OVHCloud AI Endpoints

Read the Docs

Run in Developer mode

Services

  1. Set up a .env file in the root
  2. Run dependent services: docker-compose up db prometheus

Backend

  1. (In root) create a virtual environment: python -m venv .venv
  2. Activate the virtual environment: source .venv/bin/activate
  3. Install dependencies: pip install -e ".[all]"
  4. Start the proxy backend: python litellm/proxy_cli.py

Frontend

  1. Navigate to ui/litellm-dashboard
  2. Install dependencies npm install
  3. Run npm run dev to start the dashboard

Enterprise

For companies that need better security, user management and professional support

Talk to founders

This covers:

  • Features under the LiteLLM Commercial License:
  • Feature Prioritization
  • Custom Integrations
  • Professional Support - Dedicated discord + slack
  • Custom SLAs
  • Secure access with Single Sign-On

Contributing

We welcome contributions to LiteLLM! Whether you're fixing bugs, adding features, or improving documentation, we appreciate your help.

Quick Start for Contributors

This requires poetry to be installed.

git clone https://github.com/BerriAI/litellm.git
cd litellm
make install-dev    # Install development dependencies
make format         # Format your code
make lint           # Run all linting checks
make test-unit      # Run unit tests
make format-check   # Check formatting only

For detailed contributing guidelines, see CONTRIBUTING.md.

Code Quality / Linting

LiteLLM follows the Google Python Style Guide.

Our automated checks include:

  • Black for code formatting
  • Ruff for linting and code quality
  • MyPy for type checking
  • Circular import detection
  • Import safety checks

All these checks must pass before your PR can be merged.

Support / talk with founders

Why did we build this

  • Need for simplicity: Our code started to get extremely complicated managing & translating calls between Azure, OpenAI and Cohere.

Contributors
