runtime error
Exit code: 1. Reason:
model.safetensors: 100%|██████████| 4.49G/4.49G [00:55<00:00, 80.3MB/s]
generation_config.json: 100%|██████████| 136/136 [00:00<00:00, 1.41MB/s]
Traceback (most recent call last):
  File "/home/user/app/app.py", line 23, in <module>
    model = TransformersModel(
  File "/home/user/app/smolvlm_inference.py", line 10, in __init__
    self.model = AutoModelForImageTextToText.from_pretrained(model_id, torch_dtype=torch.bfloat16).to(to_device)
  File "/usr/local/lib/python3.10/site-packages/transformers/modeling_utils.py", line 4459, in to
    return super().to(*args, **kwargs)
  File "/usr/local/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1369, in to
    return self._apply(convert)
  File "/usr/local/lib/python3.10/site-packages/torch/nn/modules/module.py", line 928, in _apply
    module._apply(fn)
  File "/usr/local/lib/python3.10/site-packages/torch/nn/modules/module.py", line 928, in _apply
    module._apply(fn)
  File "/usr/local/lib/python3.10/site-packages/torch/nn/modules/module.py", line 928, in _apply
    module._apply(fn)
  [Previous line repeated 1 more time]
  File "/usr/local/lib/python3.10/site-packages/torch/nn/modules/module.py", line 955, in _apply
    param_applied = fn(param)
  File "/usr/local/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1355, in convert
    return t.to(
  File "/usr/local/lib/python3.10/site-packages/torch/cuda/__init__.py", line 412, in _lazy_init
    torch._C._cuda_init()
RuntimeError: No CUDA GPUs are available
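The traceback shows the failure happens when the freshly downloaded model is moved to a CUDA device (`.to(to_device)` in `smolvlm_inference.py`) on hardware where no GPU is visible, e.g. a CPU-only Space. A minimal sketch of a device-agnostic fix is below; the helper names `pick_device`/`pick_dtype` are illustrative, not part of the app's code:

```python
import torch

def pick_device() -> torch.device:
    # Use CUDA only when a GPU is actually visible; otherwise fall back to CPU.
    return torch.device("cuda" if torch.cuda.is_available() else "cpu")

def pick_dtype(device: torch.device) -> torch.dtype:
    # bfloat16 is a good default on GPU; float32 is the safe choice on CPU,
    # where bf16 support is uneven and often slow.
    return torch.bfloat16 if device.type == "cuda" else torch.float32
```

The loading line in `__init__` would then read something like `AutoModelForImageTextToText.from_pretrained(model_id, torch_dtype=pick_dtype(device)).to(device)`, so the same code runs on both GPU and CPU hardware instead of crashing with `RuntimeError: No CUDA GPUs are available`.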