Conversation


@vku-ibm (Contributor) commented Oct 7, 2025

Not 100% sure if this is enough, but we can give it a try.
I didn't make changes for cu126 because these versions:

cu126 = [
  "torch>=2.6.0,<=2.6.0",
  "torchvision>=0.21.0",
]

run into a conflict with vllm.
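
For illustration, a minimal sketch of why such a pin cannot be resolved alongside a stricter torch requirement, using the packaging library; the vllm-side constraint shown here is an assumption for the example, not vllm's actual pin:

```python
# Minimal illustration (not actual resolver output) of why the cu126 pin
# cannot be satisfied together with a stricter torch requirement from vllm.
# The vllm-side constraint below is a HYPOTHETICAL example, not vllm's real pin.
from packaging.specifiers import SpecifierSet
from packaging.version import Version

cu126_pin = SpecifierSet(">=2.6.0,<=2.6.0")   # effectively torch==2.6.0
vllm_req = SpecifierSet(">=2.7.0")            # assumed, for illustration only

combined = cu126_pin & vllm_req
candidates = [Version(v) for v in ("2.5.1", "2.6.0", "2.7.0", "2.7.1")]

# No candidate version satisfies both specifier sets, which is what pip/uv
# reports as a dependency conflict.
print([str(v) for v in candidates if v in combined])   # -> []
```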


github-actions bot commented Oct 7, 2025

DCO Check Passed

Thanks @vku-ibm, all your commits are properly signed off. 🎉


mergify bot commented Oct 7, 2025

Merge Protections

Your pull request matches the following merge protections and will not be merged until they are valid.

🟢 Enforce conventional commit

Wonderful, this rule succeeded.

Make sure that we follow https://www.conventionalcommits.org/en/v1.0.0/

  • title ~= ^(fix|feat|docs|style|refactor|perf|test|build|ci|chore|revert)(?:\(.+\))?(!)?:
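
For reference, a small sketch of how that pattern separates the original PR title from the renamed one (see the title change further down):

```python
# Check PR titles against the conventional-commit rule quoted above.
import re

pattern = r"^(fix|feat|docs|style|refactor|perf|test|build|ci|chore|revert)(?:\(.+\))?(!)?:"

old_title = "Added arm64 platform for pr, enabled pr image push"
new_title = "feat: added arm64 platform for pr, enabled pr image push"

print(bool(re.match(pattern, old_title)))  # False - no conventional-commit prefix
print(bool(re.match(pattern, new_title)))  # True  - "feat:" prefix satisfies the rule
```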


dosubot bot commented Oct 7, 2025

Related Documentation

Checked 2 published document(s). No updates required.

You have 5 draft document(s). Publish docs to keep them always up to date.


@vku-ibm changed the title from "Added arm64 platform for pr, enabled pr image push" to "feat: added arm64 platform for pr, enabled pr image push" on Oct 8, 2025
@igobypenn

I ran this on an Ampere Altra server with an NVIDIA GPU, and this branch just constantly keeps restarting, saying "Starting production server 🚀".
_docling-serve_logs.txt

I ran docker exec docling-serve nvidia-smi and it does return the GPU, so it is mounted properly.

Thu Oct  9 06:03:29 2025       
+-----------------------------------------------------------------------------------------+
| NVIDIA-SMI 580.65.06              Driver Version: 580.65.06      CUDA Version: 13.0     |
+-----------------------------------------+------------------------+----------------------+
| GPU  Name                 Persistence-M | Bus-Id          Disp.A | Volatile Uncorr. ECC |
| Fan  Temp   Perf          Pwr:Usage/Cap |           Memory-Usage | GPU-Util  Compute M. |
|                                         |                        |               MIG M. |
|=========================================+========================+======================|
|   0  NVIDIA RTX 2000 Ada Gene...    Off |   00000004:01:00.0 Off |                  Off |
|100%   51C    P2              8W /   70W |    1716MiB /  16380MiB |     18%      Default |
|                                         |                        |                  N/A |
+-----------------------------------------+------------------------+----------------------+

+-----------------------------------------------------------------------------------------+
| Processes:                                                                              |
|  GPU   GI   CI              PID   Type   Process name                        GPU Memory |
|        ID   ID                                                               Usage      |
|=========================================================================================|
|  No running processes found                                                             |
+-----------------------------------------------------------------------------------------+
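
nvidia-smi only shows that the device is visible inside the container; a quick check like the following sketch (assuming torch is installed in the image) would show whether the arm64 PyTorch build itself has CUDA support, which might help narrow down the restart loop:

```python
# Check whether PyTorch (not just the container runtime) can use the GPU.
import torch

print("torch version:", torch.__version__)
print("CUDA available:", torch.cuda.is_available())
if torch.cuda.is_available():
    print("device:", torch.cuda.get_device_name(0))
```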
