* Fix Bedrock guardrail apply_guardrail method and test mocks
Fixed 4 failing tests in the guardrail test suite:
1. BedrockGuardrail.apply_guardrail now returns the original texts when the
guardrail allows content but does not provide the output/outputs fields.
Previously it returned an empty list, causing
test_bedrock_apply_guardrail_success to fail.
2. Updated test mocks to use correct Bedrock API response format:
- Changed from 'content' field to 'output' field
- Fixed nested structure from {'text': {'text': '...'}} to {'text': '...'}
- Added missing 'output' field in filter test
3. Fixed endpoint test mocks to return GenericGuardrailAPIInputs format:
- Changed from tuple (List[str], Optional[List[str]]) to dict {'texts': [...]}
- Updated method call assertions to use 'inputs' parameter correctly
All 12 guardrail tests now pass successfully.
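The fallback described in item 1 can be sketched as follows. This is an illustrative stand-alone function, not LiteLLM's actual `BedrockGuardrail` code; the `output` shape `[{'text': '...'}]` follows the corrected mock format from item 2, and the example inputs are hypothetical:

```python
from typing import List


def extract_guardrail_texts(response: dict, original_texts: List[str]) -> List[str]:
    """Return guardrail-processed texts, falling back to the originals.

    When the guardrail allows content unchanged, Bedrock's response may
    omit the 'output'/'outputs' fields entirely; in that case the caller
    must get the original texts back rather than an empty list.
    """
    outputs = response.get("output") or response.get("outputs")
    if not outputs:
        return original_texts  # guardrail allowed content unmodified
    return [item["text"] for item in outputs]


# Guardrail intervened and returned masked text:
masked = extract_guardrail_texts(
    {"action": "GUARDRAIL_INTERVENED", "output": [{"text": "Hello {NAME}"}]},
    ["Hello Alice"],
)
# Guardrail allowed the content; no 'output' field in the response:
passthrough = extract_guardrail_texts({"action": "NONE"}, ["Hello Alice"])
```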
* fix: remove python3-dev from Dockerfile.build_from_pip to avoid Python version conflict
The base image cgr.dev/chainguard/python:latest-dev already includes Python 3.14
and its development tools. Installing python3-dev pulls in Python 3.13
packages, which conflict with the existing Python 3.14 installation and
cause file ownership errors during apk install.
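The change amounts to dropping the redundant package from the apk install step. A sketch of the relevant Dockerfile.build_from_pip lines (the surrounding package list is illustrative, not the file's exact contents):

```dockerfile
FROM cgr.dev/chainguard/python:latest-dev

# Before: python3-dev pulled in Python 3.13, conflicting with the
# image's bundled Python 3.14.
# RUN apk add --no-cache gcc musl-dev python3-dev

# After: rely on the dev toolchain already present in the base image.
RUN apk add --no-cache gcc musl-dev
```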
* fix: disable callbacks in vertex fine-tuning tests to prevent Datadog logging interference
The test was failing because Datadog logging was making an HTTP POST request
that was being caught by the mock, causing assert_called_once() to fail.
By disabling callbacks during the test, we prevent Datadog from making any
HTTP calls, allowing the mock to only see the Vertex AI API call.
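A self-contained illustration of the failure mode (not litellm's real code): the Datadog logger and the Vertex AI client share the same mocked HTTP transport, so the mock records two calls unless callbacks are disabled first.

```python
from unittest.mock import MagicMock

http_post = MagicMock()   # stands in for the patched HTTP client
callbacks = ["datadog"]   # stands in for litellm.callbacks


def run_fine_tuning_request():
    http_post("https://vertex-ai.example/fine-tune")   # the call under test
    for cb in callbacks:
        if cb == "datadog":
            http_post("https://datadog.example/logs")  # logging interference


run_fine_tuning_request()
assert http_post.call_count == 2   # mock saw TWO calls -> assert_called_once() fails

http_post.reset_mock()
callbacks.clear()                  # the fix: disable callbacks during the test
run_fine_tuning_request()
http_post.assert_called_once()     # now only the Vertex AI call is recorded
```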
* fix: ensure test isolation in test_logging_non_streaming_request
Add proper cleanup to restore original litellm.callbacks after test execution.
This prevents test interference when running as part of a larger test suite,
where global state pollution was causing async_log_success_event to be
called multiple times instead of once.
Fixes test failure where the test expected async_log_success_event to be
called once but was being called twice due to callbacks from previous tests
not being cleaned up.
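The cleanup pattern can be sketched like this (a stand-in for litellm.callbacks, not the actual test code): the original value is captured before the test mutates global state, and a finally block restores it even if the test body raises.

```python
# Global state polluted by an earlier test in the suite:
callbacks = ["leftover_logger_from_previous_test"]


def test_logging_non_streaming_request():
    global callbacks
    original = callbacks
    callbacks = ["custom_logger"]   # only this test's callback is active
    try:
        # ...test body: with exactly one callback installed,
        # async_log_success_event fires exactly once...
        assert callbacks == ["custom_logger"]
    finally:
        callbacks = original        # runs even if the assertions above fail


test_logging_non_streaming_request()
```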
# Docker Development Guide

This guide provides instructions for building and running the LiteLLM application using Docker and Docker Compose.
## Prerequisites
- Docker
- Docker Compose
## Building and Running the Application

To build and run the application, you will use the `docker-compose.yml` file located in the root of the project. This file is configured to use the `Dockerfile.non_root` for a secure, non-root container environment.
### 1. Set the Master Key

The application requires a `MASTER_KEY` for signing and validating tokens. You must set this key as an environment variable before running the application.

Create a `.env` file in the root of the project and add the following line:

```
MASTER_KEY=your-secret-key
```

Replace `your-secret-key` with a strong, randomly generated secret.
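One quick way to generate such a secret (the `sk-` prefix follows the key convention LiteLLM uses; any sufficiently random string works):

```python
import secrets

# 32 random bytes, URL-safe encoded, prefixed in LiteLLM's key style
master_key = "sk-" + secrets.token_urlsafe(32)
print(f"MASTER_KEY={master_key}")
```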
### 2. Build and Run the Containers

Once you have set the `MASTER_KEY`, you can build and run the containers using the following command:

```
docker compose up -d --build
```

This command will:

- Build the Docker image using `Dockerfile.non_root`.
- Start the `litellm`, `litellm_db`, and `prometheus` services in detached mode (`-d`).

The `--build` flag ensures that the image is rebuilt if there are any changes to the Dockerfile or the application code.
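For orientation, a minimal sketch of the shape such a `docker-compose.yml` can take. The service names match this guide, but the images and settings shown here are assumptions; the repository's actual file is authoritative:

```yaml
# Illustrative sketch only -- see the real docker-compose.yml in the repo root.
services:
  litellm:
    build:
      context: .
      dockerfile: Dockerfile.non_root
    env_file: .env          # supplies MASTER_KEY
  litellm_db:
    image: postgres          # assumed backing database image
  prometheus:
    image: prom/prometheus   # assumed metrics image
```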
### 3. Verifying the Application is Running

You can check the status of the running containers with the following command:

```
docker compose ps
```

To view the logs of the `litellm` container, run:

```
docker compose logs -f litellm
```
### 4. Stopping the Application

To stop the running containers, use the following command:

```
docker compose down
```
## Troubleshooting

- **`build_admin_ui.sh: not found`**: This error can occur if the Docker build context is not set correctly. Ensure that you are running the `docker-compose` command from the root of the project.
- **`Master key is not initialized`**: This error means the `MASTER_KEY` environment variable is not set. Make sure you have created a `.env` file in the project root with `MASTER_KEY` defined.