Oh you should be able to batch the heck out of that on a 4080. Are you not using HF diffusers or something?
I’d check out stable-fast if you haven’t already:
https://github.com/chengzeyi/stable-fast
VoltaML is also getting old at this point, but it has a really fast AITemplate implementation for SD 1.5: https://github.com/VoltaML/voltaML-fast-stable-diffusion
I usually run batches of 16 at 512x768 at most; doing more than that causes bottlenecks, though I feel like I could do the same on the 3070 Ti. I’ll look into those other tools when I’m home, thanks for the resources. (HF diffusers? I’m still using A1111)
(ETA: I have written a bunch of unreleased plugins to make A1111 work better for me, like VSCode-style editing for special symbols like (/[, plus a bunch of other optimizations. I haven’t released them because they’re not “perfect” yet and I have other projects to work on, but there are reasons I haven’t left A1111)
Eh, that’s a problem, because the A1111 “engine” is messy and unoptimized. You could at least try switching to the “reForge” fork, which should preserve extension compatibility while letting you use features like torch.compile.
Hmm maybe in the new year I’ll try and update my process. I’m in the middle of a project though so it’s more about reliability than optimization. Thanks for the info though.
Yep. I didn’t mean to process shame you or anything, just trying to point out obscure but potentially useful projects most don’t know about :P
It’s sort of a niche within a niche and I appreciate your sharing some knowledge with me, thanks!