Commit Graph

6224 Commits

Author SHA1 Message Date
Sayak Paul
055d08adc6 Merge branch 'main' into flux-test-refactor 2026-02-08 11:46:56 +05:30
YiYi Xu
fd705bd8ff [Modular] refactor Wan: modular pipelines by task etc (#13063)
* initial

* fix init_pipeline etc

* style

* copies

* fix copies

* upup more

* fix test

* add output type (#13091)

---------

Co-authored-by: yiyi@huggingface.co <yiyi@ip-26-0-160-103.ec2.internal>
Co-authored-by: Álvaro Somoza <asomoza@users.noreply.github.com>
2026-02-07 11:28:27 -10:00
hlky
09dca386d0 ZImageControlNet cfg (#13080)
Co-authored-by: Álvaro Somoza <asomoza@users.noreply.github.com>
Co-authored-by: YiYi Xu <yixu310@gmail.com>
2026-02-07 10:40:55 -10:00
YiYi Xu
10dc589a94 [modular]simplify components manager doc (#13088)
* simplify components manager doc

* Apply suggestion from @yiyixuxu

* Apply suggestion from @stevhliu

Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>

* Apply suggestion from @stevhliu

Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>

---------

Co-authored-by: yiyi@huggingface.co <yiyi@ip-26-0-160-103.ec2.internal>
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
2026-02-06 09:55:34 -10:00
David El Malih
44b8201d98 docs: improve docstring scheduling_dpmsolver_multistep_inverse.py (#13085)
* Improve docstring scheduling dpmsolver sde

* Update scheduling_dpmsolver_sde.py

Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>

* run make fix-copies

---------

Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
2026-02-06 09:20:05 -08:00
dxqb
ca79f8ccc4 GGUF fix for unquantized types when using unquantize kernels (#12498)
Even if the `qweight_type` is one of the `UNQUANTIZED_TYPES`, `qweight` still has to be "dequantized" because it is stored as an 8-bit tensor. Without this, there is a shape mismatch in the following matmul.

Side notes:
 - why isn't DIFFUSERS_GGUF_CUDA_KERNELS on by default? It's significantly faster and only used when installed
 - https://huggingface.co/Isotr0py/ggml/tree/main/build has no build for torch 2.8 (or the upcoming 2.9). Who can we contact to make such a build?

Co-authored-by: YiYi Xu <yixu310@gmail.com>
2026-02-06 08:56:19 +05:30
CalamitousFelicitousness
99e2cfff27 Feature/zimage inpaint pipeline (#13006)
* Add ZImageInpaintPipeline

- Updated the pipeline structure to include ZImageInpaintPipeline alongside ZImagePipeline and ZImageImg2ImgPipeline.
- Implemented the ZImageInpaintPipeline class for inpainting tasks, including the necessary methods for encoding prompts, preparing masked latents, and denoising.
- Enhanced the auto_pipeline to map the new ZImageInpaintPipeline for inpainting generation tasks.
- Added unit tests for ZImageInpaintPipeline to ensure functionality and performance.
- Updated dummy objects to include ZImageInpaintPipeline for testing purposes.

* Add documentation and improve test stability for ZImageInpaintPipeline

- Add torch.empty fix for x_pad_token and cap_pad_token in test
- Add # Copied from annotations for encode_prompt methods
- Add documentation with usage example and autodoc directive

* Address PR review feedback for ZImageInpaintPipeline

Add batch size validation and callback handling fixes per review,
using diffusers conventions rather than suggested code verbatim.

* Update src/diffusers/pipelines/z_image/pipeline_z_image_inpaint.py

Co-authored-by: Álvaro Somoza <asomoza@users.noreply.github.com>

* Update src/diffusers/pipelines/z_image/pipeline_z_image_inpaint.py

Co-authored-by: Álvaro Somoza <asomoza@users.noreply.github.com>

* Add input validation and fix XLA support for ZImageInpaintPipeline

- Add missing is_torch_xla_available import for TPU support
- Add xm.mark_step() in denoising loop for proper XLA execution
- Add check_inputs() method for comprehensive input validation
- Call check_inputs() at the start of __call__

Addresses PR review feedback from @asomoza.

* Cleanup

---------

Co-authored-by: Álvaro Somoza <asomoza@users.noreply.github.com>
2026-02-05 11:48:25 -03:00
Sayak Paul
a3dcd9882f [core] make qwen hidden states contiguous to make torchao happy. (#13081)
make qwen hidden states contiguous to make torchao happy.
2026-02-05 09:02:32 +05:30
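The fix in the commit above can be sketched in isolation (a minimal illustration with invented shapes, not the Qwen transformer code): view-producing ops like `transpose` yield non-contiguous tensors, which some torchao quantized kernels reject.

```python
import torch

# transpose returns a view with permuted strides, not a new memory layout,
# so the result is non-contiguous:
hidden_states = torch.randn(2, 16, 8).transpose(1, 2)  # (2, 8, 16)
assert not hidden_states.is_contiguous()

# .contiguous() materializes the tensor in standard row-major layout,
# which quantized matmul kernels can consume:
hidden_states = hidden_states.contiguous()
assert hidden_states.is_contiguous()
```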
Sayak Paul
9fe0a9cac4 [core] make flux hidden states contiguous (#13068)
* make flux hidden states contiguous

* make fix-copies
2026-02-05 08:39:44 +05:30
David El Malih
03af690b60 docs: improve docstring scheduling_dpmsolver_multistep_inverse.py (#13083)
Improve docstring scheduling dpmsolver multistep inverse
2026-02-04 09:21:57 -08:00
Sayak Paul
90818e82b3 [docs] Fix syntax error in quantization configuration (#13076)
Fix syntax error in quantization configuration
2026-02-04 08:31:03 -08:00
Alan Ponnachan
430c557b6a Add support for Magcache (#12744)
* add magcache

* formatting

* add magcache support with calibration mode

* add imports

* improvements

* Apply style fixes

* fix kandinsky errors

* add tests and documentation

* Apply style fixes

* improvements

* Apply style fixes

* make fix-copies.

* minor fixes

---------

Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
2026-02-04 13:45:12 +05:30
Sayak Paul
1b8fc6c589 [modular] change the template modular pipeline card (#13072)
* start better template for modular pipeline card.

* simplify structure.

* refine.

* style.

* up

* add tests
2026-02-04 10:09:10 +05:30
YiYi Xu
6d4fc6baa0 [Modular] mellon doc etc (#13051)
* add metadata field to input/output param

* refactor mellonparam: move the template outside, add metaclass, define some generic template for custom node

* add from_custom_block

* style

* up up fix

* add mellon guide

* add to toctree

* style

* add mellon_types

* style

* mellon_type -> input_types + output_types

* update doc

* add quant info to components manager

* fix more

* up up

* fix components manager

* update custom block guide

* update

* style

* add a warning for mellon and add new guides to overview

* Apply suggestions from code review

Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>

* Update docs/source/en/modular_diffusers/mellon.md

Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>

* more update on custom block guide

* Update docs/source/en/modular_diffusers/mellon.md

Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>

* a few manual fixes

* apply suggestion: turn into bullets

* support define mellon meta with MellonParam directly, and update doc

* add the video

---------

Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
Co-authored-by: yiyi@huggingface.co <yiyi@ip-26-0-160-103.ec2.internal>
2026-02-03 13:38:57 -10:00
YiYi Xu
ebd06f9b11 [Modular] loader related (#13025)
* tag loader_id from Automodel

* style

* load_components by default only loads components that are not already loaded

* by default, skip loading components that do not have a repo id
2026-02-03 05:34:33 -10:00
Dhruv Nair
700f882bc5 update 2026-02-03 10:09:28 +01:00
songkey
b712042da1 [Flux2] Fix LoRA loading for Flux2 Klein by adaptively enumerating transformer blocks (#13030)
* Resolve Flux2 Klein 4B/9B LoRA loading errors

* Apply style fixes

---------

Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
2026-02-02 20:36:19 +05:30
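The "adaptively enumerating transformer blocks" idea from the commit title can be sketched as follows (a hedged example with made-up key names, not the actual diffusers LoRA converter): instead of hard-coding a block count, infer it from the LoRA state dict keys so differently sized variants (e.g. Klein 4B vs 9B) load through the same code path.

```python
import re

# Illustrative LoRA state dict keys; real checkpoints have many more.
lora_keys = [
    "transformer_blocks.0.attn.to_q.lora_A.weight",
    "transformer_blocks.3.attn.to_q.lora_A.weight",
    "transformer_blocks.11.attn.to_q.lora_A.weight",
]

# The highest block index present determines how many blocks to enumerate.
pattern = re.compile(r"transformer_blocks\.(\d+)\.")
num_blocks = 1 + max(int(pattern.match(k).group(1)) for k in lora_keys)
```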
Dhruv Nair
0b76728e27 Refactor Model Tests (#12822)
* update (×26)

---------

Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
2026-02-02 18:51:44 +05:30
DefTruth
973e334443 feat: support Ulysses Anything Attention (#12996)
* feat: support Ulysses Anything Attention

* feat: support Ulysses Anything Attention

* feat: support Ulysses Anything Attention

* feat: support Ulysses Anything Attention

* fix UAA broken while using joint attn

* update

* post check

* add docs

* add docs

* remove lru cache

* move codes

* update
2026-02-02 17:04:32 +05:30
YiYi Xu
769a1f3a12 [Modular]add a real quick start guide (#13029)
* add a real quick start guide

* Update docs/source/en/modular_diffusers/quickstart.md

* update a bit more

* fix

* Apply suggestions from code review

Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>

* Update docs/source/en/modular_diffusers/quickstart.md

Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>

* Update docs/source/en/modular_diffusers/quickstart.md

Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>

* update more

* Apply suggestions from code review

Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>

* address more feedback: move components manager earlier, explain blocks vs sub-blocks etc

* more

* remove the link to mellon guide, not exist in this PR yet

---------

Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
2026-01-31 09:43:20 -10:00
Mikko Lauri
ec6b2bcccb Fix aiter availability check (#13059)
Update import_utils.py
2026-01-30 19:24:05 +05:30
Jared Wen
6a1904eb06 [bug fix] GLM-Image fit new get_image_features API (#13052)
change get_image_features API

Signed-off-by: JaredforReal <w13431838023@gmail.com>
Co-authored-by: YiYi Xu <yixu310@gmail.com>
2026-01-29 16:16:42 -10:00
Sayak Paul
f5b6b6625a [wan] fix wan 2.2 when either of the transformers isn't present. (#13055)
fix wan 2.2 when either of the transformers isn't present.
2026-01-29 08:45:24 -10:00
Olexandr88
1be2f7e8c5 docs: fix grammar in fp16_safetensors CLI warning (#13040)
* docs: fix grammar in fp16_safetensors CLI warning

* Apply style fixes

---------

Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
2026-01-29 21:33:09 +05:30
Sayak Paul
314cfddf3a [ci] uniform run times and wheels for pytorch cuda. (#13047)
* uniform run times and wheels for pytorch cuda.

* 12.9

* change to 24.04.

* change to 24.04.
2026-01-29 19:22:30 +05:30
Sayak Paul
e7de7d8449 [wan] fix layerwise upcasting tests on CPU (#13039)
up
2026-01-29 13:16:57 +05:30
Vinh H. Pham
a2ea45a5da LTX2 distilled checkpoint support (#12934)
* add constants for distilled sigma values and allow the LTX pipeline to pass in sigmas

* add time conditioning conversion and token packing for latents

* make style & quality

* remove prenorm

* add sigma param to ltx2 i2v

* fix copies and add pack latents to i2v

* Apply suggestions from code review

Co-authored-by: dg845 <58458699+dg845@users.noreply.github.com>

* Infer latent dims if latents/audio_latents is supplied

* add note for predefined sigmas

* run make style and quality

* revert distill timesteps & set original_state_dict_repo_id to default None

* add latent normalize

* add create noised state, delete last sigmas

* remove normalize step in latent upsample pipeline and move it to ltx2 pipeline

* add create noise latent to i2v pipeline

* fix copies

* parse none value in weight conversion script

* explicit shape handling

* Apply suggestions from code review

Co-authored-by: dg845 <58458699+dg845@users.noreply.github.com>

* make style

* add two stage inference tests

* add ltx2 documentation

* update i2v expected_audio_slice

* Apply suggestions from code review

Co-authored-by: dg845 <58458699+dg845@users.noreply.github.com>

* Apply suggestion from @dg845

Co-authored-by: dg845 <58458699+dg845@users.noreply.github.com>

* Update ltx2.md to remove one-stage example

Removed one-stage generation example code and added comments for noise scale in two-stage generation.

---------

Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
Co-authored-by: dg845 <58458699+dg845@users.noreply.github.com>
Co-authored-by: Daniel Gu <dgu8957@gmail.com>
2026-01-28 19:35:43 -08:00
Jayce
a58d0b9bec Fix Wan/WanI2V patchification (#13038)
* Fix Wan/WanI2V patchification

* Apply style fixes

* Apply suggestions from code review

I agree with the idea of using `patch_size` instead. Thanks! 😊

Co-authored-by: dg845 <58458699+dg845@users.noreply.github.com>

* Fix logger warning

---------

Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
Co-authored-by: dg845 <58458699+dg845@users.noreply.github.com>
2026-01-28 18:06:58 -08:00
David El Malih
0ab2124958 docs: improve docstring scheduling_dpm_cogvideox.py (#13044) 2026-01-28 10:40:08 -08:00
Vasiliy Kuznetsov
74a0f0b694 remove torchao autoquant from diffusers docs (#13048)
Summary:

Context: https://github.com/pytorch/ao/issues/3739

Test Plan: CI, since this does not change any Python code
2026-01-28 21:10:22 +05:30
Sayak Paul
2c669e8480 change to CUDA 12.9. (#13045)
* change to CUDA 12.9.

* up

* change runtime base

* FROM
2026-01-28 17:22:27 +05:30
Ita Zaporozhets
2ac39ba664 fast tok update (#13036)
* v5 tok update

* ruff

* keep pre v5 slow code path

* Apply style fixes

---------

Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
2026-01-28 17:13:04 +05:30
Sayak Paul
ef913010d4 [QwenImage] fix prompt isolation tests (#13042)
* up

* up

* up

* fix
2026-01-28 15:44:12 +05:30
YiYi Xu
53d8a1e310 [modular]support klein (#13002)
* support klein

* style

* copies

* Apply suggestions from code review

Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
Co-authored-by: Álvaro Somoza <asomoza@users.noreply.github.com>

* Update src/diffusers/modular_pipelines/flux2/encoders.py

* a few fix: unpack latents before decoder etc

* style

* move guidance into its own block

* style

* flux2-dev work in modular setting

* up

* up up

* add tests

---------

Co-authored-by: yiyi@huggingface.co <yiyi@ip-26-0-160-103.ec2.internal>
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
Co-authored-by: Álvaro Somoza <asomoza@users.noreply.github.com>
2026-01-27 15:43:14 -10:00
Kashif Rasul
d54669a73e [Qwen] avoid creating attention masks when there is no padding (#12987)
* avoid creating attention masks when there is no padding

* make fix-copies

* torch compile tests

* set all ones mask to none

* fix positional encoding from becoming > 4096

* fix from review

* slice freqs_cis to match the input sequence length

* keep only the attention masking change

---------

Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
2026-01-27 12:42:48 -10:00
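The optimization in the commit above can be sketched with a small helper (a hedged illustration, not the diffusers code; the helper name and shapes are invented): when no sequence in the batch is padded, the attention mask is all ones and carries no information, so passing `None` lets attention skip the masked code path entirely.

```python
import torch

def maybe_drop_mask(attention_mask):
    # An all-ones mask masks nothing; drop it so attention can take the
    # faster unmasked path.
    if attention_mask is not None and bool(attention_mask.all()):
        return None
    return attention_mask

full = torch.ones(2, 10, dtype=torch.bool)       # no padding anywhere
padded = torch.ones(2, 10, dtype=torch.bool)
padded[1, 7:] = False                            # second sequence is padded

assert maybe_drop_mask(full) is None
assert maybe_drop_mask(padded) is padded
```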
Jared Wen
22ac6fae24 [GLM-Image] Add batch support for GlmImagePipeline (#13007)
* init

Signed-off-by: JaredforReal <w13431838023@gmail.com>

* change from right padding to left padding

Signed-off-by: JaredforReal <w13431838023@gmail.com>

* try i2i batch

Signed-off-by: JaredforReal <w13431838023@gmail.com>

* fix: revert i2i prior_token_image_ids to original 1D tensor format

* refactor KVCache for per prompt batching

Signed-off-by: JaredforReal <w13431838023@gmail.com>

* fix KVCache

Signed-off-by: JaredforReal <w13431838023@gmail.com>

* fix shape error

Signed-off-by: JaredforReal <w13431838023@gmail.com>

* refactor pipeline

Signed-off-by: JaredforReal <w13431838023@gmail.com>

* fix for left padding

Signed-off-by: JaredforReal <w13431838023@gmail.com>

* insert seed to AR model

Signed-off-by: JaredforReal <w13431838023@gmail.com>

* delete generator, use torch manual_seed

Signed-off-by: JaredforReal <w13431838023@gmail.com>

* add batch processing unit tests for GlmImagePipeline

Signed-off-by: JaredforReal <w13431838023@gmail.com>

* simplify normalize images method

Signed-off-by: JaredforReal <w13431838023@gmail.com>

* fix grids_per_sample

Signed-off-by: JaredforReal <w13431838023@gmail.com>

* fix t2i

Signed-off-by: JaredforReal <w13431838023@gmail.com>

* delete comments, simplify condition statement

Signed-off-by: JaredforReal <w13431838023@gmail.com>

* change generate_prior_tokens outputs

Signed-off-by: JaredforReal <w13431838023@gmail.com>

* simplify if logic

Signed-off-by: JaredforReal <w13431838023@gmail.com>

* support user provided prior_token_ids directly

Signed-off-by: JaredforReal <w13431838023@gmail.com>

* remove blank lines

Signed-off-by: JaredforReal <w13431838023@gmail.com>

* align with transformers

Signed-off-by: JaredforReal <w13431838023@gmail.com>

* Apply style fixes

---------

Signed-off-by: JaredforReal <w13431838023@gmail.com>
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
2026-01-27 12:22:02 -10:00
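The "right padding to left padding" change in the commit above reflects a standard trick for batched autoregressive generation, sketched below (a hedged example; `pad_id` and the sequences are invented): left padding aligns the newest token of every prompt at the right edge, so generation appends in the same position for all batch elements.

```python
import torch

pad_id = 0
seqs = [[5, 6, 7], [8, 9]]  # prompts of unequal length

# Pad on the LEFT so each sequence ends at the same index; newly generated
# tokens can then be concatenated on the right for the whole batch at once.
max_len = max(len(s) for s in seqs)
batch = torch.tensor([[pad_id] * (max_len - len(s)) + s for s in seqs])
```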
Aditya Borate
71a865b742 Fix: Cosmos2.5 Video2World frame extraction and add default negative prompt (#13018)
* fix: Extract last frames for conditioning in Cosmos Video2World

* Added default negative prompt

* Apply style fixes

* Added default negative prompt in cosmos2 text2image pipeline

---------

Co-authored-by: YiYi Xu <yixu310@gmail.com>
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
2026-01-27 12:20:44 -10:00
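The frame-extraction fix above can be illustrated with a slicing sketch (shapes are assumptions, not the Cosmos pipeline's actual layout): Video2World should condition on the *last* frames of the input clip so the generated continuation attaches where the video left off.

```python
import torch

video = torch.randn(1, 3, 16, 32, 32)   # (batch, channels, frames, H, W)
num_cond_frames = 2

# Take the trailing frames along the frame axis, not the leading ones.
cond = video[:, :, -num_cond_frames:]
```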
Sam Edwards
53279ef017 [From Single File] support from_single_file method for WanAnimateTransformer3DModel (#12691)
* Add `WanAnimateTransformer3DModel` to `SINGLE_FILE_LOADABLE_CLASSES`

* Fixed dtype mismatch when loading a single file

* Fixed a bug that results in white noise for generation

* Update dtype check for time embedder - caused white noise output

* Improve code readability

* Optimize dtype handling

Removed unnecessary dtype conversions for timestep and weight.

* Apply style fixes

* Refactor time embedding dtype handling

Adjust time embedding type conversion for compatibility.

* Apply style fixes

* Modify comment for WanTimeTextImageEmbedding class

---------

Co-authored-by: Sam Edwards <sam.edwards1976@gmail.com>
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
2026-01-27 11:57:21 +05:30
Salman Chishti
d9959bd53b Upgrade GitHub Actions to latest versions (#12866)
* Upgrade GitHub Actions to latest versions

Signed-off-by: Salman Muin Kayser Chishti <13schishti@gmail.com>

* fix: Correct GitHub Actions upgrade (fix branch refs and version formats)

* fix: Correct GitHub Actions upgrade (fix branch refs and version formats)

* fix: Correct GitHub Actions upgrade (fix branch refs and version formats)

---------

Signed-off-by: Salman Muin Kayser Chishti <13schishti@gmail.com>
2026-01-27 11:52:50 +05:30
YiYi Xu
b1c77f67ac [modular] add auto_docstring & more doc related refactors (#12958)
* up

* up up

* update outputs

* style

* add modular_auto_docstring!

* more auto docstring

* style

* up up up

* more more

* up

* address feedbacks

* add TODO in the description for empty docstring

* refactor based on dhruv's feedback: remove the class method

* add template method

* up

* up up up

* apply auto docstring

* make style

* remove space in make docstring

* Apply suggestions from code review

* revert change in z

* fix

* Apply style fixes

* include auto-docstring check in the modular ci. (#13004)

* Run ruff format after auto docstring generation

* up

* upup

* upup

* style

---------

Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
2026-01-26 17:12:13 -10:00
David El Malih
956bdcc3ea Flag Flax schedulers as deprecated (#13031)
flag flax schedulers as deprecated
2026-01-26 09:41:48 -08:00
Hameer Abbasi
2af7baa040 Remove *pooled_* mentions from Chroma inpaint (#13026)
Remove `*pooled_*` mentions from Chroma as it has just one TE.
2026-01-26 10:18:29 -03:00
David El Malih
a7cb14efbe Improve docstrings and type hints in scheduling_ddpm_parallel.py (#13027)
* docs: improve docstring scheduling_ddpm_parallel.py

* Update scheduling_ddpm_parallel.py

Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>

---------

Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
2026-01-25 10:43:43 -08:00
David El Malih
e8e88ff2ce Improve docstrings and type hints in scheduling_ddpm_flax.py (#13024)
docs: improve docstring scheduling_ddpm_flax.py
2026-01-23 11:51:47 -08:00
David El Malih
6e24cd842c Improve docstrings and type hints in scheduling_ddim_parallel.py (#13023)
* docs: improve docstring scheduling_ddim_parallel.py

* docs: improve docstring scheduling_ddim_parallel.py

* Update src/diffusers/schedulers/scheduling_ddim_parallel.py

Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>

* Update src/diffusers/schedulers/scheduling_ddim_parallel.py

Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>

* Update src/diffusers/schedulers/scheduling_ddim_parallel.py

Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>

* Update src/diffusers/schedulers/scheduling_ddim_parallel.py

Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>

* fix style

---------

Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
2026-01-23 10:00:32 -08:00
Garry Ling
981eb802c6 feat: add qkv projection fuse for longcat transformers (#13021)
feat: add qkv fuse for longcat transformers

Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
2026-01-23 23:02:03 +05:30
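QKV projection fusion, as referenced in the commit above, can be sketched generically (a hedged example with toy dimensions, not the longcat transformer code): concatenate the weights of the separate q/k/v projections into one linear layer so attention performs a single matmul instead of three.

```python
import torch
import torch.nn as nn

dim = 8
q, k, v = (nn.Linear(dim, dim, bias=False) for _ in range(3))

# Stack the three weight matrices row-wise into one (3*dim, dim) projection.
fused = nn.Linear(dim, 3 * dim, bias=False)
with torch.no_grad():
    fused.weight.copy_(torch.cat([q.weight, k.weight, v.weight], dim=0))

# One matmul, then split the output back into q/k/v.
x = torch.randn(2, dim)
qf, kf, vf = fused(x).chunk(3, dim=-1)
assert torch.allclose(qf, q(x), atol=1e-6)
```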
jiqing-feng
1eb40c6dbd Resnet only use contiguous in training mode. (#12977)
* fix contiguous

Signed-off-by: jiqing-feng <jiqing.feng@intel.com>

* update tol

Signed-off-by: jiqing-feng <jiqing.feng@intel.com>

* bigger tol

Signed-off-by: jiqing-feng <jiqing.feng@intel.com>

* fix tests

Signed-off-by: jiqing-feng <jiqing.feng@intel.com>

* update tol

Signed-off-by: jiqing-feng <jiqing.feng@intel.com>

---------

Signed-off-by: jiqing-feng <jiqing.feng@intel.com>
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
2026-01-23 18:40:10 +05:30
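The training-only `.contiguous()` change above can be sketched as follows (the module is illustrative, not the diffusers ResNet block): `.contiguous()` copies memory, so gating it on `self.training` skips the copy during inference where it is not needed.

```python
import torch
import torch.nn as nn

class Block(nn.Module):
    def forward(self, x):
        x = x.permute(0, 2, 1)   # produces a non-contiguous view
        if self.training:
            x = x.contiguous()   # only pay for the copy when training
        return x

block = Block()
x = torch.randn(1, 4, 6)

block.train()
assert block(x).is_contiguous()      # copy happens in training mode

block.eval()
assert not block(x).is_contiguous()  # view passes through in eval mode
```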
Sayak Paul
bff672f47f fix Dockerfiles for cuda and xformers. (#13022) 2026-01-23 16:45:14 +05:30
David El Malih
d4f97d1921 Improve docstrings and type hints in scheduling_ddim_inverse.py (#13020)
docs: improve docstring scheduling_ddim_inverse.py
2026-01-22 15:42:45 -08:00
David El Malih
1d32b19ad4 Improve docstrings and type hints in scheduling_ddim_flax.py (#13010)
* docs: improve docstring scheduling_ddim_flax.py

* docs: improve docstring scheduling_ddim_flax.py

* docs: improve docstring scheduling_ddim_flax.py
2026-01-22 09:11:14 -08:00