You could try
Specific versions of packages can be installed like so:
But that's the CPU version of torch, which is very slow. Use |
tl;dr - I'm looking to source a trustworthy Debian-flavored Python 3.11.7 & modules, as I don't want to compile from source and enter that dep-hell. Then I have to figure out how to get the specific PyTorch installed (as mentioned later below). I'll have to do the same for Fedora 41 next.
Automatic1111 works for me with the default installed `v1-5-pruned-emaonly.safetensors` and other <=4 GB checkpoints. But when I try to load `ponyDiffusionV6XL_v6StartWithThisOne.safetensors` (>6 GB), it throws an OOM and crashes -- stating it needs 20 GB VRAM.

Before I can troubleshoot that OOM situation, I need to tackle an earlier-listed error for A1111's PyTorch dependency. I suspect that with this failure, like it says, memory management is not possible -- which constrains me to smaller and pruned checkpoints.
Looking for an assist to overcome the following error:
FYI, others have had A1111 working on a GTX 1660 6 GB without issue using startup flags. So I don't think I'm necessarily hardware-constrained -- things will just take longer to render.
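For reference, the startup flags people usually mean are set via `COMMANDLINE_ARGS` in `webui-user.sh`. This is only a sketch of the common low-VRAM setup -- verify the exact flags against your A1111 version:

```shell
# webui-user.sh fragment: low-VRAM flags for small cards.
# --medvram trades speed for memory; --lowvram is the more aggressive option.
export COMMANDLINE_ARGS="--medvram"
```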
Here's what I have to work with and what I've done so far on my low-profile system (that use case constrains me). Here's the build-out, done as so many HOWTOs have said:
I also confirmed CUDA is available:
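A minimal sketch of that check (the original command isn't shown here, so this is guarded to also run where torch isn't installed):

```python
import importlib.util

def cuda_status() -> str:
    """Report the installed torch build and whether it can see a CUDA device."""
    if importlib.util.find_spec("torch") is None:
        return "torch not installed"
    import torch
    return f"torch {torch.__version__}, cuda_available={torch.cuda.is_available()}"

print(cuda_status())
```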
With only `ponyDiffusionV6XL_v6StartWithThisOne.safetensors` installed, I tried the following, and it was crashing with the OOM error. I then noticed the earlier message (it had quickly scrolled off the screen). I double-checked with the default `v1-5-pruned-emaonly.safetensors` -- yup, still there, even though I could use A1111 with <=4 GB checkpoints.

I then tried to re-install the transformers:
Nope, it didn't re-install. Then I dove down deeper rabbit holes. For example, inside the `venv`, that should be possible using `pip3 install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu121`, given that index still exists. I just don't know yet how to force it to install `torch-2.1.2+cpu.cxx11.abi-cp311-cp311-linux_x86_64.whl`, considering `torch-2.6.0+cpu.cxx11.abi-cp311-cp311-linux_x86_64.whl` is likely what would be installed, being the latest from that repo. I tried `ppa:deadsnakes/ppa` as well, which put me right back where I'm at now, with the container adding another layer of hurt (pun intended).

If there's someone who could provide some guidance, I would be very thankful. I really would like to have A1111 working. Having the prompt and settings captured in the PNG image text blocks is extremely helpful. IIRC, Comfy has a similar feature.
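On the version-pinning question above: pip accepts an `==` version specifier together with `--index-url`, so forcing a specific build rather than the latest looks roughly like this. (A sketch -- the torchvision/torchaudio versions shown are my assumption of the pair matching torch 2.1.2, so verify them against the index before relying on this.)

```shell
# Pin an exact torch release from the cu121 index instead of taking the latest.
# torchvision 0.16.2 / torchaudio 2.1.2 are believed to pair with torch 2.1.2.
pip3 install torch==2.1.2 torchvision==0.16.2 torchaudio==2.1.2 \
    --index-url https://download.pytorch.org/whl/cu121
```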
I find the WebUI a little more intuitive vs. Comfy's "flowcharting" (even though Comfy looks to be more powerful & flexible). Also, A1111 is what my cohorts are using on our project, so I'm kind of locked into getting A1111 operational.
Thanks in advance for the assist. Cheers.
FYI, I'd rather avoid using Conda; Miniconda might be tolerable. I'll also need to figure out how to get this working with ROCm on my Framework 13 running Fedora 41 Silverblue. I suspect I'll have a similar challenge there finding an older, available Python 3.11.7 and modules.