Mirror of https://github.com/vllm-project/vllm.git (synced 2025-12-06 06:53:12 +08:00)
vllm/docs/assets

Latest commit: f5d3d93c40 by Amr Mahdi (2025-12-03 11:41:53 +00:00)
[docker] Build CUDA kernels in separate Docker stage for faster rebuilds (#29452)
Signed-off-by: Amr Mahdi <amrmahdi@meta.com>
contributing    [docker] Build CUDA kernels in separate Docker stage for faster rebuilds (#29452)    2025-12-03 11:41:53 +00:00
deployment      Add Hugging Face Inference Endpoints guide to Deployment docs (#25886)               2025-09-30 14:35:06 +00:00
design          [Docs] Add guide to debugging vLLM-torch.compile integration (#28094)                2025-11-05 21:31:46 +00:00
features        [Core] Encoder separation for Encode-Prefill-Decode Disaggregation (#25233)          2025-11-11 18:58:33 -08:00
logos           Migrate docs from Sphinx to MkDocs (#18145)                                          2025-05-23 02:09:53 -07:00