diff --git a/AGENTS.md b/AGENTS.md
index d72b00f7e1..2c778dc0d7 100644
--- a/AGENTS.md
+++ b/AGENTS.md
@@ -98,6 +98,25 @@ LiteLLM supports MCP for agent workflows:
 
 Use `poetry run python script.py` to run Python scripts in the project environment (for non-test files).
 
+## GITHUB TEMPLATES
+
+When opening issues or pull requests, follow these templates:
+
+### Bug Reports (`.github/ISSUE_TEMPLATE/bug_report.yml`)
+- Describe what happened vs. expected behavior
+- Include relevant log output
+- Specify LiteLLM version
+- Indicate if you're part of an ML Ops team (helps with prioritization)
+
+### Feature Requests (`.github/ISSUE_TEMPLATE/feature_request.yml`)
+- Clearly describe the feature
+- Explain motivation and use case with concrete examples
+
+### Pull Requests (`.github/pull_request_template.md`)
+- Add at least 1 test in `tests/litellm/`
+- Ensure `make test-unit` passes
+
+
 ## TESTING CONSIDERATIONS
 
 1. **Provider Tests**: Test against real provider APIs when possible
diff --git a/CLAUDE.md b/CLAUDE.md
index 1598432339..23a0e97eae 100644
--- a/CLAUDE.md
+++ b/CLAUDE.md
@@ -28,6 +28,22 @@ This file provides guidance to Claude Code (claude.ai/code) when working with code in this repository.
 
 ### Running Scripts
 - `poetry run python script.py` - Run Python scripts (use for non-test files)
 
+### GitHub Issue & PR Templates
+When contributing to the project, use the appropriate templates:
+
+**Bug Reports** (`.github/ISSUE_TEMPLATE/bug_report.yml`):
+- Describe what happened vs. what you expected
+- Include relevant log output
+- Specify your LiteLLM version
+
+**Feature Requests** (`.github/ISSUE_TEMPLATE/feature_request.yml`):
+- Describe the feature clearly
+- Explain the motivation and use case
+
+**Pull Requests** (`.github/pull_request_template.md`):
+- Add at least 1 test in `tests/litellm/`
+- Ensure `make test-unit` passes
+
 ## Architecture Overview
 
 LiteLLM is a unified interface for 100+ LLM providers with two main components:
diff --git a/GEMINI.md b/GEMINI.md
index efcee04d4c..a9d40c910b 100644
--- a/GEMINI.md
+++ b/GEMINI.md
@@ -25,6 +25,25 @@ This file provides guidance to Gemini when working with code in this repository.
 
 - `poetry run pytest tests/path/to/test_file.py -v` - Run specific test file
 - `poetry run pytest tests/path/to/test_file.py::test_function -v` - Run specific test
 
+### Running Scripts
+- `poetry run python script.py` - Run Python scripts (use for non-test files)
+
+### GitHub Issue & PR Templates
+When contributing to the project, use the appropriate templates:
+
+**Bug Reports** (`.github/ISSUE_TEMPLATE/bug_report.yml`):
+- Describe what happened vs. what you expected
+- Include relevant log output
+- Specify your LiteLLM version
+
+**Feature Requests** (`.github/ISSUE_TEMPLATE/feature_request.yml`):
+- Describe the feature clearly
+- Explain the motivation and use case
+
+**Pull Requests** (`.github/pull_request_template.md`):
+- Add at least 1 test in `tests/litellm/`
+- Ensure `make test-unit` passes
+
 ## Architecture Overview
 
 LiteLLM is a unified interface for 100+ LLM providers with two main components:
diff --git a/README.md b/README.md
index b29c86a112..b6a946318f 100644
--- a/README.md
+++ b/README.md
@@ -48,10 +48,6 @@ Support for more providers. Missing a provider or LLM Platform, raise a [feature
 
 # Usage ([**Docs**](https://docs.litellm.ai/docs/))
 
-> [!IMPORTANT]
-> LiteLLM v1.0.0 now requires `openai>=1.0.0`. Migration guide [here](https://docs.litellm.ai/docs/migration)
-> LiteLLM v1.40.14+ now requires `pydantic>=2.0.0`. No changes required.
-
 Open In Colab
@@ -114,6 +110,8 @@ print(response)
 }
 ```
 
+> **Note:** LiteLLM also supports the [Responses API](https://docs.litellm.ai/docs/response_api) (`litellm.responses()`)
+
 Call any model supported by a provider, with `model=<provider_name>/<model_name>`. There might be provider-specific details here, so refer to [provider docs for more information](https://docs.litellm.ai/docs/providers)
 
 ## Async ([Docs](https://docs.litellm.ai/docs/completion/stream#async-completion))
diff --git a/test_pydantic_fields.py b/test_pydantic_fields.py
deleted file mode 100644
index 6df535cfb9..0000000000
--- a/test_pydantic_fields.py
+++ /dev/null
@@ -1,42 +0,0 @@
-from litellm.proxy._types import GenerateKeyRequest
-
-# Test 1: Check if fields exist in model
-print("=== Test 1: Check model_fields ===")
-print(
-    f"rpm_limit_type in model_fields: {'rpm_limit_type' in GenerateKeyRequest.model_fields}"
-)
-print(
-    f"tpm_limit_type in model_fields: {'tpm_limit_type' in GenerateKeyRequest.model_fields}"
-)
-
-# Test 2: Create instance with empty dict (simulating FastAPI parsing minimal request)
-print("\n=== Test 2: Create instance with minimal data ===")
-instance = GenerateKeyRequest()
-print(f"Instance created: {instance}")
-print(f"Instance dict: {instance.model_dump()}")
-
-# Test 3: Try to access the fields
-print("\n=== Test 3: Try direct attribute access ===")
-try:
-    print(f"instance.rpm_limit_type = {instance.rpm_limit_type}")
-    print(f"instance.tpm_limit_type = {instance.tpm_limit_type}")
-except AttributeError as e:
-    print(f"AttributeError: {e}")
-
-# Test 4: Try getattr
-print("\n=== Test 4: Try getattr ===")
-print(
-    f"getattr(instance, 'rpm_limit_type', None) = {getattr(instance, 'rpm_limit_type', None)}"
-)
-print(
-    f"getattr(instance, 'tpm_limit_type', None) = {getattr(instance, 'tpm_limit_type', None)}"
-)
-
-# Test 5: Check what fields are actually set
-print("\n=== Test 5: Check model_fields_set ===")
-print(f"model_fields_set: {instance.model_fields_set}")
-
-# Test 6: Check instance __dict__
-print("\n=== Test 6: Check instance __dict__ ===")
-print(f"'rpm_limit_type' in __dict__: {'rpm_limit_type' in instance.__dict__}")
-print(f"'tpm_limit_type' in __dict__: {'tpm_limit_type' in instance.__dict__}")
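The deleted `test_pydantic_fields.py` above relies on standard Pydantic v2 introspection (`model_fields` on the class, `model_fields_set` on the instance). A minimal self-contained sketch of the same checks, using a hypothetical `KeyRequest` model in place of LiteLLM's actual `GenerateKeyRequest`:

```python
from typing import Optional

from pydantic import BaseModel


# Hypothetical stand-in for litellm.proxy._types.GenerateKeyRequest:
# a Pydantic v2 model with two optional rate-limit fields.
class KeyRequest(BaseModel):
    rpm_limit_type: Optional[str] = None
    tpm_limit_type: Optional[str] = None


# Field declarations are visible on the class via model_fields.
assert "rpm_limit_type" in KeyRequest.model_fields
assert "tpm_limit_type" in KeyRequest.model_fields

# An instance built from an empty payload (as FastAPI would do for a
# minimal request body) still exposes both attributes with defaults...
req = KeyRequest()
assert req.rpm_limit_type is None
assert req.tpm_limit_type is None

# ...but model_fields_set records that nothing was supplied explicitly.
assert req.model_fields_set == set()

# Explicitly passed values do show up in model_fields_set.
req2 = KeyRequest(rpm_limit_type="per_key")
assert req2.model_fields_set == {"rpm_limit_type"}
```

This is why the debug script was safe to delete: the distinction it probes (declared-with-default vs. explicitly set) is defined Pydantic behavior, not something specific to LiteLLM.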