Mirror of https://github.com/BerriAI/litellm.git (synced 2025-12-06 11:33:26 +08:00)
Latest commit:
* allow using a cred with RAG API
* add /rag/ingest to llm api routes
* add rag endpoints under llm api routes
* raise clear exception when RAG API fails
* use async methods for bedrock ingest
* fix ingestion
* fix _create_opensearch_collection
* fix qa check and linting
20 lines
576 B
Plaintext
LiteLLM provides a unified interface for calling 100+ different LLM providers.

Key capabilities:
- Translate requests to provider-specific formats
- Consistent OpenAI-compatible responses
- Retry and fallback logic across deployments
- Proxy server with authentication and rate limiting
- Support for streaming, function calling, and embeddings
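The first and third capabilities can be sketched together: one OpenAI-style request gets translated into a provider-specific payload, and the call is retried across deployments until one succeeds. This is a minimal illustration, not LiteLLM's actual internals; every function name here is a hypothetical stand-in.

```python
# Hypothetical sketch of request translation plus fallback across deployments.
# None of these names come from the LiteLLM codebase.

OPENAI_STYLE_REQUEST = {
    "model": "gpt-4",
    "messages": [{"role": "user", "content": "Hello"}],
}

def to_anthropic_format(request):
    # Illustrative only: a provider-specific payload shape.
    return {
        "model": request["model"],
        "messages": request["messages"],
        "max_tokens": 1024,
    }

def call_with_fallback(request, deployments):
    """Try each deployment in order; return the first successful response."""
    errors = []
    for call in deployments:
        try:
            return call(request)
        except Exception as exc:
            errors.append(exc)
    raise RuntimeError(f"all deployments failed: {errors}")

def flaky_deployment(request):
    # Simulates a deployment that is currently down.
    raise TimeoutError("primary deployment timed out")

def healthy_deployment(request):
    # Whatever the backend, the caller sees an OpenAI-compatible shape.
    return {"choices": [{"message": {"role": "assistant", "content": "Hi!"}}]}

response = call_with_fallback(OPENAI_STYLE_REQUEST,
                              [flaky_deployment, healthy_deployment])
print(response["choices"][0]["message"]["content"])  # prints "Hi!"
```

Because every deployment returns the same response shape, the caller never needs to know which one actually served the request.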
Popular providers supported:
- OpenAI (GPT-4, GPT-3.5)
- Anthropic (Claude)
- AWS Bedrock
- Azure OpenAI
- Google Vertex AI
- Cohere
- And 95+ more
This allows developers to switch between providers without code changes.
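"Without code changes" boils down to one entry point that picks the provider from the model string, so the call site stays identical when you swap models. The sketch below mirrors LiteLLM's "provider/model" naming convention, but the handler functions are hypothetical stand-ins for real provider SDK calls.

```python
# Hypothetical dispatcher illustrating provider switching via the model string.
# The handlers are placeholders, not real SDK calls.

def openai_handler(model, messages):
    return f"[openai:{model}] ok"

def anthropic_handler(model, messages):
    return f"[anthropic:{model}] ok"

HANDLERS = {"openai": openai_handler, "anthropic": anthropic_handler}

def completion(model, messages):
    """One entry point; the provider is chosen from the model prefix."""
    provider, _, model_name = model.partition("/")
    return HANDLERS[provider](model_name, messages)

msgs = [{"role": "user", "content": "Hello"}]
print(completion("openai/gpt-4", msgs))        # routed to the OpenAI handler
print(completion("anthropic/claude-3", msgs))  # same call site, different provider
```

Switching providers is then a configuration change (the model string), not an application-code change.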