Mirror of https://github.com/BerriAI/litellm.git (synced 2025-12-06 11:33:26 +08:00)
docs: cleanup README and improve agent guides (#17003)
* docs: cleanup README and improve AI agent guides
  - Remove obsolete version warnings (openai>=1.0.0, pydantic>=2.0.0)
  - Add note about Responses API in README
  - Add GitHub templates section to CLAUDE.md, GEMINI.md, and AGENTS.md
  - Remove temporary test file test_pydantic_fields.py
* update files
* update Gemini file
AGENTS.md (+19)
@@ -98,6 +98,25 @@ LiteLLM supports MCP for agent workflows:
Use `poetry run python script.py` to run Python scripts in the project environment (for non-test files).

## GITHUB TEMPLATES

When opening issues or pull requests, follow these templates:

### Bug Reports (`.github/ISSUE_TEMPLATE/bug_report.yml`)
- Describe what happened vs. expected behavior
- Include relevant log output
- Specify LiteLLM version
- Indicate if you're part of an ML Ops team (helps with prioritization)

### Feature Requests (`.github/ISSUE_TEMPLATE/feature_request.yml`)
- Clearly describe the feature
- Explain motivation and use case with concrete examples

### Pull Requests (`.github/pull_request_template.md`)
- Add at least 1 test in `tests/litellm/`
- Ensure `make test-unit` passes
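For illustration, here is a minimal sketch of what such a test could look like, loosely adapted from the ad-hoc debug script removed in this commit; the file name and the no-argument construction are assumptions made for the example, not an existing test.

```python
# tests/litellm/test_generate_key_request.py  (hypothetical file name)
# Minimal sketch of a unit test, adapted from the debug script removed in this commit.
from litellm.proxy._types import GenerateKeyRequest


def test_generate_key_request_minimal_construction():
    # Simulate FastAPI parsing a minimal request body: construction with no
    # arguments is assumed to succeed, i.e. all fields are optional.
    instance = GenerateKeyRequest()
    # pydantic v2: model_dump() serializes the instance back to a plain dict.
    assert isinstance(instance.model_dump(), dict)
```

Such a file can be run on its own with `poetry run pytest tests/litellm/test_generate_key_request.py -v`, or together with the rest of the suite via `make test-unit`.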
## TESTING CONSIDERATIONS

1. **Provider Tests**: Test against real provider APIs when possible
CLAUDE.md (+16)
@@ -28,6 +28,22 @@ This file provides guidance to Claude Code (claude.ai/code) when working with code in this repository.
### Running Scripts
- `poetry run python script.py` - Run Python scripts (use for non-test files)

### GitHub Issue & PR Templates
When contributing to the project, use the appropriate templates:

**Bug Reports** (`.github/ISSUE_TEMPLATE/bug_report.yml`):
- Describe what happened vs. what you expected
- Include relevant log output
- Specify your LiteLLM version

**Feature Requests** (`.github/ISSUE_TEMPLATE/feature_request.yml`):
- Describe the feature clearly
- Explain the motivation and use case

**Pull Requests** (`.github/pull_request_template.md`):
- Add at least 1 test in `tests/litellm/`
- Ensure `make test-unit` passes

## Architecture Overview

LiteLLM is a unified interface for 100+ LLM providers with two main components:
GEMINI.md (+19)
@@ -25,6 +25,25 @@ This file provides guidance to Gemini when working with code in this repository.
- `poetry run pytest tests/path/to/test_file.py -v` - Run specific test file
- `poetry run pytest tests/path/to/test_file.py::test_function -v` - Run specific test

### Running Scripts
- `poetry run python script.py` - Run Python scripts (use for non-test files)

### GitHub Issue & PR Templates
When contributing to the project, use the appropriate templates:

**Bug Reports** (`.github/ISSUE_TEMPLATE/bug_report.yml`):
- Describe what happened vs. what you expected
- Include relevant log output
- Specify your LiteLLM version

**Feature Requests** (`.github/ISSUE_TEMPLATE/feature_request.yml`):
- Describe the feature clearly
- Explain the motivation and use case

**Pull Requests** (`.github/pull_request_template.md`):
- Add at least 1 test in `tests/litellm/`
- Ensure `make test-unit` passes

## Architecture Overview

LiteLLM is a unified interface for 100+ LLM providers with two main components:
README.md

@@ -48,10 +48,6 @@ Support for more providers. Missing a provider or LLM Platform, raise a [feature
# Usage ([**Docs**](https://docs.litellm.ai/docs/))

> [!IMPORTANT]
> LiteLLM v1.0.0 now requires `openai>=1.0.0`. Migration guide [here](https://docs.litellm.ai/docs/migration)
> LiteLLM v1.40.14+ now requires `pydantic>=2.0.0`. No changes required.

<a target="_blank" href="https://colab.research.google.com/github/BerriAI/litellm/blob/main/cookbook/liteLLM_Getting_Started.ipynb">
  <img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/>
</a>

@@ -114,6 +110,8 @@ print(response)
}
```

> **Note:** LiteLLM also supports the [Responses API](https://docs.litellm.ai/docs/response_api) (`litellm.responses()`)
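As a quick illustration, the sketch below assumes `litellm.responses()` mirrors the OpenAI Responses API's `model`/`input` parameters; the model name and credential are placeholders, so check the linked docs for the authoritative signature.

```python
import os

import litellm

os.environ["OPENAI_API_KEY"] = "your-openai-key"  # placeholder credential

# Assumed call shape: provider-prefixed model plus an `input` string,
# mirroring the OpenAI Responses API.
response = litellm.responses(
    model="openai/gpt-4o",
    input="Write a one-line haiku about rate limits.",
)
print(response)
```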
Call any model supported by a provider, with `model=<provider_name>/<model_name>`. There might be provider-specific details here, so refer to [provider docs for more information](https://docs.litellm.ai/docs/providers)
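For example, here is a hedged sketch of switching providers by changing only the model string (the specific model identifier and key below are illustrative; see the provider docs for the exact values):

```python
import os

from litellm import completion

os.environ["ANTHROPIC_API_KEY"] = "your-anthropic-key"  # placeholder credential

messages = [{"role": "user", "content": "Hello, how are you?"}]

# The call shape stays the same across providers; only "provider/model" changes.
response = completion(model="anthropic/claude-3-5-sonnet-20240620", messages=messages)
print(response)
```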
## Async ([Docs](https://docs.litellm.ai/docs/completion/stream#async-completion))
test_pydantic_fields.py (removed)

@@ -1,42 +0,0 @@
```python
from litellm.proxy._types import GenerateKeyRequest

# Test 1: Check if fields exist in model
print("=== Test 1: Check model_fields ===")
print(
    f"rpm_limit_type in model_fields: {'rpm_limit_type' in GenerateKeyRequest.model_fields}"
)
print(
    f"tpm_limit_type in model_fields: {'tpm_limit_type' in GenerateKeyRequest.model_fields}"
)

# Test 2: Create instance with empty dict (simulating FastAPI parsing minimal request)
print("\n=== Test 2: Create instance with minimal data ===")
instance = GenerateKeyRequest()
print(f"Instance created: {instance}")
print(f"Instance dict: {instance.model_dump()}")

# Test 3: Try to access the fields
print("\n=== Test 3: Try direct attribute access ===")
try:
    print(f"instance.rpm_limit_type = {instance.rpm_limit_type}")
    print(f"instance.tpm_limit_type = {instance.tpm_limit_type}")
except AttributeError as e:
    print(f"AttributeError: {e}")

# Test 4: Try getattr
print("\n=== Test 4: Try getattr ===")
print(
    f"getattr(instance, 'rpm_limit_type', None) = {getattr(instance, 'rpm_limit_type', None)}"
)
print(
    f"getattr(instance, 'tpm_limit_type', None) = {getattr(instance, 'tpm_limit_type', None)}"
)

# Test 5: Check what fields are actually set
print("\n=== Test 5: Check model_fields_set ===")
print(f"model_fields_set: {instance.model_fields_set}")

# Test 6: Check instance __dict__
print("\n=== Test 6: Check instance __dict__ ===")
print(f"'rpm_limit_type' in __dict__: {'rpm_limit_type' in instance.__dict__}")
print(f"'tpm_limit_type' in __dict__: {'tpm_limit_type' in instance.__dict__}")
```