* fix: handle None content
* fix: defensive check on None value
* Fix test failures: Azure OCR skip, None content handling, PublicAI JSON config
- Skip aocr/ocr call types in Azure test (they don't use Azure SDK client)
- Handle None content in Responses API transformation by skipping message creation (see the sketch after this list)
- Update PublicAI tests to use JSON-based configuration system
- Add None check in PublicAI test fixture to fix type error
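For reference, a minimal sketch of the None-content check described above, using a hypothetical transformation helper (the name and message shape are illustrative, not the actual LiteLLM code):

```python
# Hypothetical helper illustrating the defensive check: when an item's
# content is None, no message is created for it.
from typing import Any, Dict, List, Optional


def transform_items_to_messages(items: List[Dict[str, Any]]) -> List[Dict[str, Any]]:
    messages: List[Dict[str, Any]] = []
    for item in items:
        content: Optional[str] = item.get("content")
        if content is None:
            # Skip message creation entirely rather than emitting a null body.
            continue
        messages.append({"role": item.get("role", "user"), "content": content})
    return messages
```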
Fixed two issues:
1. Presidio guardrail test TypeError:
- Issue: test_presidio_apply_guardrail() was calling apply_guardrail() with
incorrect arguments (text=, language=) instead of the correct signature
(inputs=, request_data=, input_type=)
- Fix: Updated test to use correct method signature:
- Changed from: apply_guardrail(text=..., language=...)
- Changed to: apply_guardrail(inputs={'texts': [...]}, request_data={}, input_type='request')
- Also updated assertions to extract text from response['texts'][0]
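A self-contained sketch of the call-shape change, with a dummy guardrail standing in for the Presidio hook (only the signature described above is taken from the actual fix):

```python
from typing import Any, Dict


class DummyGuardrail:
    """Stand-in that mirrors the apply_guardrail signature described above."""

    def apply_guardrail(
        self, inputs: Dict[str, Any], request_data: Dict[str, Any], input_type: str
    ) -> Dict[str, Any]:
        # Pretend the guardrail masks the phone number in every text.
        return {
            "texts": [t.replace("555-555-5555", "<PHONE_NUMBER>") for t in inputs["texts"]]
        }


guardrail = DummyGuardrail()

# Old, incorrect call: guardrail.apply_guardrail(text=..., language=...)  -> TypeError
# New, correct call:
response = guardrail.apply_guardrail(
    inputs={"texts": ["My phone number is 555-555-5555"]},
    request_data={},
    input_type="request",
)

# Assertions now read the processed text from response["texts"][0]
assert "<PHONE_NUMBER>" in response["texts"][0]
```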
2. License verification base64 decoding error:
- Issue: verify_license_without_api_request() was failing with
'Invalid base64-encoded string: number of data characters (185) cannot be
1 more than a multiple of 4' when license keys lacked proper base64 padding
- Root cause: Base64 strings must be a multiple of 4 characters. Some license
keys were missing padding characters (=) needed for proper decoding
- Fix: Added automatic padding before base64 decoding:
- Compute the missing padding from len(license_key) % 4
- Append '=' characters until the length is a multiple of 4 (as sketched below)
- This makes license verification robust to keys with or without padding
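A minimal sketch of the padding logic, assuming a hypothetical helper name (the real check sits inside verify_license_without_api_request()):

```python
import base64


def _decode_license_key(license_key: str) -> bytes:
    # Hypothetical helper name; base64 payloads must be a multiple of 4
    # characters long, so append any missing '=' padding before decoding.
    missing_padding = -len(license_key) % 4
    return base64.b64decode(license_key + "=" * missing_padding)


# Works whether or not the key carries its padding characters:
assert _decode_license_key("aGVsbG8") == b"hello"
assert _decode_license_key("aGVsbG8=") == b"hello"
```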
Both fixes ensure the code handles edge cases properly and tests use correct APIs.
* fix: resolve code quality issues from ruff linter
- Fix duplicate imports in anthropic guardrail handler
- Remove duplicate AllAnthropicToolsValues import
- Remove duplicate ChatCompletionToolParam import
- Remove unused variable 'tools' in guardrail handler
- Replace print statement with proper logging in json_loader
- Use verbose_logger.warning() instead of print()
- Remove unused imports
- Remove _update_metadata_field from team_endpoints
- Remove unused ChatCompletionToolCallChunk imports from transformation
- Refactor update_team function to reduce complexity (PLR0915)
- Extract budget_duration handling into a _set_budget_reset_at() helper (sketched below)
- Minimal refactoring to reduce function from 51 to 50 statements
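A rough sketch of the extraction, with hypothetical field handling (the real update_team endpoint does considerably more):

```python
from datetime import datetime, timedelta, timezone
from typing import Optional


def _set_budget_reset_at(budget_duration: Optional[str]) -> Optional[datetime]:
    # Helper extracted from update_team so the parent function stays under
    # ruff's PLR0915 statement limit. Duration parsing here is simplified to
    # day-based values like "30d"; the real parser handles more formats.
    if budget_duration is None:
        return None
    return datetime.now(timezone.utc) + timedelta(days=int(budget_duration.rstrip("d")))


def update_team(updates: dict) -> dict:
    # update_team now delegates budget_duration handling to the helper.
    if "budget_duration" in updates:
        updates["budget_reset_at"] = _set_budget_reset_at(updates["budget_duration"])
    return updates
```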
All ruff linter errors resolved. Fixes F811, F841, T201, F401, and PLR0915 errors.
* docs: add missing environment variables to documentation
Add 8 missing environment variables to the environment variables reference section:
- AIOHTTP_CONNECTOR_LIMIT_PER_HOST: Connection limit per host for aiohttp connector
- AUDIO_SPEECH_CHUNK_SIZE: Chunk size for audio speech processing
- CYBERARK_SSL_VERIFY: Flag to enable/disable SSL certificate verification for CyberArk
- LITELLM_DD_AGENT_HOST: Hostname or IP of DataDog agent for LiteLLM-specific logging
- LITELLM_DD_AGENT_PORT: Port of DataDog agent for LiteLLM-specific log intake
- WANDB_API_KEY: API key for Weights & Biases (W&B) logging integration
- WANDB_HOST: Host URL for Weights & Biases (W&B) service
- WANDB_PROJECT_ID: Project ID for Weights & Biases (W&B) logging integration
Fixes test_env_keys.py test that was failing due to undocumented environment variables.
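Illustrative only: how settings like these are typically read at startup; the default values shown here are assumptions, not LiteLLM's actual defaults.

```python
import os

aiohttp_limit_per_host = int(os.getenv("AIOHTTP_CONNECTOR_LIMIT_PER_HOST", "0"))
audio_speech_chunk_size = int(os.getenv("AUDIO_SPEECH_CHUNK_SIZE", "1024"))
cyberark_ssl_verify = os.getenv("CYBERARK_SSL_VERIFY", "true").lower() == "true"
dd_agent_host = os.getenv("LITELLM_DD_AGENT_HOST", "localhost")
dd_agent_port = int(os.getenv("LITELLM_DD_AGENT_PORT", "8126"))
wandb_api_key = os.getenv("WANDB_API_KEY")
wandb_host = os.getenv("WANDB_HOST", "https://api.wandb.ai")
wandb_project_id = os.getenv("WANDB_PROJECT_ID")
```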
* fix(generic_guardrail_api.py): add 'structured_messages' support
allows guardrail provider to know if text is from system or user
* fix(generic_guardrail_api.md): document 'structured_messages' parameter
give api provider a way to distinguish between user and system messages
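For illustration, a hedged example of what the documented field carries: OpenAI chat completion spec messages alongside the flat text list, so the provider can tell system text from user text (field names other than `structured_messages` are assumptions about the request shape):

```python
guardrail_request = {
    "texts": [
        "You are a helpful assistant.",
        "My SSN is 123-45-6789.",
    ],
    # structured_messages preserves the role of each piece of text
    "structured_messages": [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "My SSN is 123-45-6789."},
    ],
}
```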
* feat(anthropic/): return openai chat completion format structured messages when calls are made via `/v1/messages` on Anthropic
* feat(responses/guardrail_translation): support 'structured_messages' param for guardrails
passes structured openai chat completion spec messages to guardrail checks when using the /v1/responses api
allows guardrail checks to work consistently across APIs
* fix(unified_guardrail.py): support during_call event type for unified guardrails
allows guardrails overriding apply_guardrails to work 'during_call'
* feat(generic_guardrail_api.py): support new 'tool_calls' field for generic guardrail api
returns the tool calls emitted by the LLM API to the user
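A hedged illustration of the new field: OpenAI-style tool calls from the LLM response are forwarded to the guardrail API (the surrounding request shape is an assumption):

```python
guardrail_request = {
    "texts": [],
    # tool_calls exposes what the LLM asked to execute so the guardrail can
    # inspect function names and arguments
    "tool_calls": [
        {
            "id": "call_abc123",
            "type": "function",
            "function": {
                "name": "get_weather",
                "arguments": '{"city": "Paris"}',
            },
        }
    ],
}
```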
* fix(generic_guardrail_api.py): working anthropic /v1/messages tool call response
sends llm tool calls to the guardrail api when called via the `/v1/messages` API
* fix(responses/): run generic_guardrail_api on responses api tool call responses
* fix: fix tests
* test: fix tests
* fix: fix tests
Fixes #17517
- Fixed bug where only the first matching blocked keyword was masked
- Now iterates through ALL blocked keywords and masks each one
- Added 3 regression tests for multiple keyword masking
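A minimal sketch of the behavior change, using a hypothetical helper (the real guardrail code differs):

```python
from typing import List


def mask_blocked_keywords(text: str, blocked_keywords: List[str], mask: str = "[REDACTED]") -> str:
    # Previously the function stopped after the first matching keyword;
    # now every blocked keyword in the list is masked.
    masked = text
    for keyword in blocked_keywords:
        masked = masked.replace(keyword, mask)
    return masked


assert mask_blocked_keywords("foo then bar", ["foo", "bar"]) == "[REDACTED] then [REDACTED]"
```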
* init schema.prisma
* init LiteLLM_ObjectPermissionTable with agents and agent_access_groups
* TestAgentRequestHandler
* refactor agent list
* add AgentRequestHandler
* fix agent access controls by key/team
* feat - new migration for LiteLLM_AgentsTable
* fix: add LiteLLM_ObjectPermissionBase with agents and agent_access_groups
* add agent routes to llm api routes
* add agent routes as llm route