mirror of
https://github.com/BerriAI/litellm.git
synced 2025-12-06 19:43:17 +08:00
54 lines
1.8 KiB
Python
"""
|
|
Test suite for Azure video router functionality.
|
|
Tests that the router method gets called correctly for Azure video generation.
|
|
"""
|
|
|
|
import pytest
|
|
from unittest.mock import Mock, patch, MagicMock
|
|
import litellm
|
|
|
|
|
|
class TestAzureVideoRouter:
|
|
"""Test suite for Azure video router functionality"""
|
|
|
|
def setup_method(self):
|
|
"""Setup test fixtures"""
|
|
self.model = "azure/sora-2"
|
|
self.prompt = "A beautiful sunset over mountains"
|
|
self.seconds = "5"
|
|
self.size = "1280x720"
|
|
|
|
@patch("litellm.videos.main.base_llm_http_handler")
|
|
def test_azure_video_generation_router_call_mock(self, mock_handler):
|
|
"""Test that Azure video generation calls the router method with mock response"""
|
|
# Setup mock response
|
|
mock_response = {
|
|
"id": "video_123",
|
|
"model": "sora-2",
|
|
"object": "video",
|
|
"status": "processing",
|
|
"created_at": 1234567890,
|
|
"progress": 0
|
|
}
|
|
|
|
# Configure the mock handler
|
|
mock_handler.video_generation_handler.return_value = mock_response
|
|
|
|
# Call the video generation function with mock response
|
|
result = litellm.video_generation(
|
|
prompt=self.prompt,
|
|
model=self.model,
|
|
seconds=self.seconds,
|
|
size=self.size,
|
|
custom_llm_provider="azure",
|
|
mock_response=mock_response
|
|
)
|
|
|
|
# Verify the result is a VideoObject with the expected data
|
|
assert result.id == mock_response["id"]
|
|
assert result.model == mock_response["model"]
|
|
assert result.object == mock_response["object"]
|
|
assert result.status == mock_response["status"]
|
|
assert result.created_at == mock_response["created_at"]
|
|
assert result.progress == mock_response["progress"]
|
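
# The test above leans on litellm's mock_response short-circuit: when a dict is
# passed, no provider call is made and the dict is surfaced as an object with
# attribute access. A minimal stdlib-only sketch of that pattern, using a
# hypothetical video_generation stand-in (the real litellm signature and
# VideoObject type may differ):

from types import SimpleNamespace


def video_generation(prompt, mock_response=None, **kwargs):
    # Hypothetical stand-in: when mock_response is given, skip any network
    # call and wrap the dict so fields read as attributes (result.id, ...).
    if mock_response is not None:
        return SimpleNamespace(**mock_response)
    raise NotImplementedError("real provider call elided in this sketch")


result = video_generation(
    prompt="A beautiful sunset over mountains",
    mock_response={"id": "video_123", "status": "processing"},
)
assert result.id == "video_123"
assert result.status == "processing"

# This keeps unit tests fast and deterministic: assertions exercise only the
# response-shaping path, not the Azure endpoint itself.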