litellm/document.txt
Ishaan Jaff 831694897e [Feat] RAG API - QA - allow internal user keys to access api, allow using litellm credentials with API, raise clear exception when RAG API fails (#17169)
* allow using a cred with RAG API

* add /rag/ingest to llm api routes

* add rag endpoints under llm api routes

* raise clear exception when RAG API fails

* use async methods for bedrock ingest

* fix ingestion

* fix _create_opensearch_collection

* fix qa check and linting
2025-11-26 17:07:30 -08:00


LiteLLM provides a unified interface for calling 100+ different LLM providers.
Key capabilities:
- Translates requests into each provider's native format
- Returns consistent, OpenAI-compatible responses
- Applies retry and fallback logic across deployments
- Ships a proxy server with authentication and rate limiting
- Supports streaming, function calling, and embeddings
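The retry-and-fallback behavior listed above can be sketched roughly as follows. This is a simplified, library-agnostic illustration — the function and stub provider names are hypothetical, not LiteLLM's actual implementation (which lives in its Router):

```python
# Simplified sketch of retry-with-fallback across deployments.
# All names here are hypothetical stand-ins, not LiteLLM internals.

def call_with_fallbacks(providers, prompt, retries_per_provider=2):
    """Try each provider in order, retrying transient failures,
    and return the first successful response."""
    last_error = None
    for call_provider in providers:
        for _attempt in range(retries_per_provider):
            try:
                return call_provider(prompt)
            except Exception as err:  # real code would catch narrower error types
                last_error = err
    raise RuntimeError("all providers failed") from last_error


# Stub "providers" standing in for real LLM calls:
def flaky_primary(prompt):
    raise TimeoutError("primary deployment unavailable")

def stable_fallback(prompt):
    # Returns an OpenAI-shaped response dict, mirroring the unified format.
    return {"choices": [{"message": {"content": f"echo: {prompt}"}}]}

result = call_with_fallbacks([flaky_primary, stable_fallback], "hello")
print(result["choices"][0]["message"]["content"])  # echo: hello
```

The point of the sketch: because every provider returns the same OpenAI-compatible shape, the fallback loop never needs provider-specific handling.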
Popular providers supported:
- OpenAI (GPT-4, GPT-3.5)
- Anthropic (Claude)
- AWS Bedrock
- Azure OpenAI
- Google Vertex AI
- Cohere
- And 95+ more
This lets developers switch providers by changing only the model string, with no other code changes.
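In practice, switching providers comes down to the model string: LiteLLM routes calls by a provider prefix (e.g. "anthropic/...", "bedrock/...", "vertex_ai/..."), with bare model names treated as OpenAI models. A minimal, hypothetical parser sketching that convention — not LiteLLM's real resolution logic:

```python
# Hypothetical sketch of LiteLLM-style model-string routing.
# The "provider/model" prefix convention is real; this parser is illustrative only.

def parse_model(model: str) -> tuple[str, str]:
    """Split 'provider/model-name' into (provider, model-name).
    Strings with no prefix default to the 'openai' provider."""
    if "/" in model:
        provider, _, name = model.partition("/")
        return provider, name
    return "openai", model

# Switching providers changes only the model string passed to the same call:
print(parse_model("gpt-4"))                        # ('openai', 'gpt-4')
print(parse_model("anthropic/claude-3-haiku"))     # ('anthropic', 'claude-3-haiku')
print(parse_model("bedrock/anthropic.claude-v2"))  # ('bedrock', 'anthropic.claude-v2')
```

Splitting on the first "/" only is deliberate: Bedrock model IDs contain dots and further path-like segments, so everything after the provider prefix must pass through untouched.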