Best Stable Diffusion + ROCm options on Windows. ROCm working with AMD on Windows - HUGE speed boost! I tried installing Stable Diffusion for the first time yesterday and had a very difficult time getting it to work with my AMD RX 6800 XT.

If you don't want to use a Linux system, you cannot use AUTOMATIC1111 with your GPU; try SHARK instead. The tomshardware graph above shows results under SHARK, which runs on Vulkan.

Aug 28, 2023 · Step 3: Download lshqqytiger's version of the AUTOMATIC1111 WebUI and start it with ./webui.

Apr 14, 2023 · Stable Diffusion on Windows, here we come! I've been blown away by how much faster Stable Diffusion is on my 6800 XT under Linux compared to my GTX 1070 under Windows.

SHARK environment setup (Python 3.11 on Linux, macOS or Windows PowerShell):
python -m venv shark_venv
source shark_venv/bin/activate  # Use shark_venv/Scripts/activate on Windows
# If you are using conda, create and activate a new conda env instead
# Some older pip installs may not be able to handle the recent PyTorch deps
python -m pip install ...

HOW-TO: Stable Diffusion on an AMD GPU. On my 6800 XT, a 512x512 generation in Stable Diffusion takes about 30 seconds on DirectML and 5 seconds on ROCm.

Now, ROCm is coming to the Windows operating system, and the company has extended ROCm support to consumer graphics cards instead of only supporting professional-grade GPUs.

Install both the AUTOMATIC1111 WebUI and ComfyUI. Got Automatic1111 working with a 7900 XTX and ROCm 5.x.

Option 2: Use the 64-bit Windows installer provided by the Python website.

Installing ROCm on Windows with GPU RX 6700 XT (Navi 22) #2370.

Using CPU: docker stop stablediff...

Feb 12, 2024 · Download the installer from the HIP SDK download page. If you can't wait for more features and don't mind the slower image processing, you can go for the ONNX format setup. OC brings the card to 16.x it/s, which is the limit at the moment, at least in my testing.

Follow these steps: Download a Stable Diffusion model file from the Hugging Face website (https://huggingface.co/models).

I looked into it a few months back and my understanding is that it doesn't work. UPDATE: Nearly all AMD GPUs from the RX 470 and above are now working.

Speeding Up Stable Diffusion on Windows. Start with Quick Start (Windows) or follow the detailed instructions below. Plus I can actually use larger batch sizes now.

The .yaml you can find in stable-diffusion-webui-directml\extensions\sd-webui-controlnet\models\.

Mar 12, 2024 · Clone the Web UI. To stop Stable Diffusion, press Ctrl + C and use the command below.

I finally got an accelerated version of Stable Diffusion working. Then yesterday I upgraded llama.cpp.

Tutorial for the NovelAI Tieba one-click deployment package.

...04 with an AMD RX 6750 XT GPU, by following these two guides: https://www.

This Docker container deploys an AMD ROCm 5.2 container based on Ubuntu 22.04 with PyTorch 2.x. Note that the installer is a graphical application with a WinMain entry point, even when called on the command line.

AMD GPU with ROCm on Linux / Ubuntu -> do it. With the release of ROCm 5.3 I got it working with Automatic1111 on actual Ubuntu 22.04.

Never tried ROCm on Windows myself, but everything I've read and googled tells me that ROCm will NOT work under WSL or any other VM under Windows.

Sep 1, 2022 · To use ROCm, the container needs to see /dev/kfd and /dev/dri, so pass them through with --device so they are visible inside the container.

Not native ROCm. ..."./stable_diffusion_onnx", provider = "DmlExecutionProvider") prompt = "A happy celebrating robot on a mountaintop, happy, landscape, dramatic lighting, art by artgerm greg rutkowski alphonse mucha, 4k uhd" image = pipe(prompt).images[0]. Inference is fine, around 16 it/s.
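The ONNX/DirectML snippet quoted just above is split across this page; reassembled, a minimal runnable version looks roughly like the following. This is a sketch: the ./stable_diffusion_onnx folder and the DmlExecutionProvider come from the quoted fragments, and it assumes the model has already been converted to ONNX and that onnxruntime-directml is installed.

```python
# Minimal sketch of the DirectML/ONNX route quoted above. Assumes the model was
# previously converted to ONNX into ./stable_diffusion_onnx and that the
# onnxruntime-directml package is installed so DmlExecutionProvider is available.
from diffusers import StableDiffusionOnnxPipeline

pipe = StableDiffusionOnnxPipeline.from_pretrained(
    "./stable_diffusion_onnx",
    provider="DmlExecutionProvider",  # DirectML on the AMD GPU; "CPUExecutionProvider" also works, slowly
)

prompt = ("A happy celebrating robot on a mountaintop, happy, landscape, "
          "dramatic lighting, art by artgerm greg rutkowski alphonse mucha, 4k uhd")
image = pipe(prompt).images[0]  # the pipeline returns PIL images
image.save("robot.png")
```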
Fooocus is a rethinking of Stable Diffusion's and Midjourney's designs: learned from Stable Diffusion, the software is offline, open source, and free.

...launch.py --precision full --no-half. You can run "git pull" after "cd stable-diffusion-webui" from time to time to update the entire repository from GitHub. The Automatic1111 repo is the best one IMO.

For Windows OS, please change the format of the path to be like .\sd instead of ./sd.

The only mentioned RDNA3 GPUs are the Radeon RX 7900 XTX and the Radeon PRO W7900. AMD has published an exciting development for its Radeon Open Compute Ecosystem (ROCm) users today.

What is your Stable Diffusion WebUI experience with a 6800 XT? I still have to stick with an RTX 2080 because of ControlNet support.

Oct 24, 2023 · One user, for example, turned a 4600G APU into a 16GB VRAM GPU that could run AI workloads, including Stable Diffusion, without too much of a hitch, according to a video they posted.

We'll use the environment to optimize ONNX models of Stable Diffusion exported by huggingface diffusers or optimum (a short export sketch follows at the end of this section).

Dec 5, 2023 · Stable Diffusion, developed by stability.ai, is an open-source text-to-image model that empowers you to swiftly create artistic visuals from natural language prompts. In a matter of seconds, this generative AI tool transforms your textual input into compelling visual compositions.

Apr 11, 2023 · His next test will be for training (which is why he got the card, and I will as well). Tested on my RX 6900 XT.

If you are offering GPU compute over the internet (for example, running Stable Diffusion jobs for others via ROCm), configuring this appropriately is...

The ROCm team had the good idea to release an Ubuntu image with the whole SDK and runtime pre-installed.

./build-rocm to build the Docker image.

A few months back there was no ROCm support for RDNA3 yet, so I just up and ordered a second 13700K with an RTX 4090.

Contribute to leejet/stable-diffusion.cpp development by creating an account on GitHub.

Dec 15, 2023 · Windows 11 Pro 64-bit (22H2). Our test PC for Stable Diffusion consisted of a Core i9-12900K, 32GB of DDR4-3600 memory, and a 2TB SSD.

To give you an idea of how powerful it is: StabilityAI, the creators of Stable Diffusion, use ComfyUI to test Stable Diffusion internally.

Using CPU: docker start -a stablediff-cpu-runner; Using CUDA: docker start -a stablediff-cuda-runner; Using ROCm: docker start -a stablediff-rocm-runner. Stopping Stable Diffusion:

The first is NMKD Stable Diffusion GUI running ONNX DirectML with AMD GPU drivers, along with several CKPT models converted to ONNX diffusers.

Suggestion: first try AI image generation under Windows with the ultra-simplified, unzip-and-run package.

Jul 8, 2023 · From now on, to run the WebUI server, just open up Terminal and type runsd; to exit or stop the running WebUI server, press Ctrl+C. It also removes unnecessary temporary files and folders because we...

Sep 15, 2022 · In this video we'll show how to run Stable Diffusion with an AMD RX 580 GPU on the Windows operating system.

ROCm on Linux is very viable BTW, for Stable Diffusion and for any of today's LLM chat models, if you want to experiment with booting into Linux. https://nmkd.itch.io/t2i-gui

Copy and paste the following commands to install ROCm: sudo apt install "linux-headers-$(uname -r)" "linux-modules-extra-$(uname -r)"
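One snippet above mentions optimizing ONNX models of Stable Diffusion exported by huggingface diffusers or Optimum; a minimal export sketch with Optimum could look like this. Hedged: the model id and output folder are illustrative examples, not something this page prescribes.

```python
# Hedged sketch: exporting a Stable Diffusion checkpoint to ONNX with Hugging
# Face Optimum, so it can then be run through onnxruntime (e.g. DirectML).
from optimum.onnxruntime import ORTStableDiffusionPipeline

model_id = "runwayml/stable-diffusion-v1-5"      # example checkpoint
pipe = ORTStableDiffusionPipeline.from_pretrained(model_id, export=True)
pipe.save_pretrained("./stable_diffusion_onnx")  # folder later passed to the ONNX pipeline
```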
So native ROCm on Windows is days away at this point for Stable Diffusion. RX 7900 XTX Windows PC. A Stable Diffusion WebUI configuration for AMD ROCm. You might have to mess with some arguments though; for me I run it with: --opt-sub-quad-attention --disable-nan-check --autolaunch --no-half --precision full --opt-split-attention-v1

Stable Diffusion in pure C/C++.

Mar 14, 2023 · Maybe you can try mine; I'm using a 5500 XT 4GB and I can say these are the best settings for my card. Might have to do some additional things to actually get DirectML going (it's not part of Windows by default until a certain point in Windows 10). You should see a line like this: C:\Users\YOUR_USER_NAME. https://git

Dec 20, 2023 · Checklist. You can change lowvram to medvram.

We need a few Python packages, so we'll use pip to install them into the virtual environment, like so: pip install diffusers==0.x. Run "command prompt" on your PC.

This means that the application lifetime is tied to a window, even on headless systems where that window may not be visible.

1.1 Check your local system information. Since ROCm currently (as of 2024...) only supports Linux...

ComfyUI was created in January 2023 by Comfyanonymous, who created the tool to learn how Stable Diffusion works.

I understand that many people in the AI image generation world have an NVIDIA GPU or use a cloud service such as Clipdrop. I personally run it just fine on Windows 10 after some debugging, and if you need help with setup there are a lot of people that can help you.

...--upcast-sampling --precision autocast. Running Stable Diffusion from a VM.

So I decided to document my process of going from a fresh install of Ubuntu 20.04 to a working Stable Diffusion. If you want fast image generation, you need to deploy it under Linux.

LoRA training on AMD (ROCm) with kohya_ss starts here ↓↓↓↓↓↓↓. Happy diffusion.

To Test the Optimized Model.

Shark - 1200+ s recompile if I change ANY setting, looks awful without VAE, and can't change resolution beyond 512x512.

Jan 17, 2024 · Learn how to install Stable Diffusion, a powerful AI platform for generating high-quality images, on your Windows system using Python 3.x. Dependencies: PyTorch and ROCm.

AMD card users can currently only use ROCm acceleration for AI image generation under Linux; under Windows only the CPU can be used.

Hope AMD doubles down on compute power with RDNA4 (same with Intel). CUDA is well established; it's questionable if and when people will start developing for ROCm.

In this blog, we show you how to use pre-trained Stable Diffusion models to generate images from text (text-to-image), transform existing visuals (image-to-image), and restore damaged pictures (inpainting) on AMD GPUs using ONNX Runtime. Updated on Dec 01, 2023.

The amd-gpu install script works well on them. This is where stuff gets kinda tricky; I expected there to just be a package to install and be done with it, but not quite.

Mar 13, 2024 · AMD Drivers & ROCm on Ubuntu.

It will only use maybe 2 CPU cores total and then it will max out my regular RAM for brief moments; a 1-4 batch of 1024x1024 txt2img takes almost 3 hours.

...and yes, it includes the 7900 XT. Had to install python3.10.

Earlier this week ZLuda was released to the AMD world, and across this same week the SDNext team have beavered away implementing it into their Stable...

Jan 17, 2023 · from diffusers import StableDiffusionOnnxPipeline; pipe = StableDiffusionOnnxPipeline.from_pretrained(...

Dec 24, 2022 · 54 seconds, 512x512, 50 steps, Radeon RX 570, anything-v3.0.
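For the PyTorch + ROCm route on Linux that several of the comments above describe, no ONNX conversion is needed; a minimal sketch is shown below (the model id is an example, not taken from this page).

```python
# Hedged sketch: plain diffusers on a ROCm build of PyTorch. On ROCm the AMD
# GPU is still addressed through the "cuda" device string.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # example model id
    torch_dtype=torch.float16,
)
pipe = pipe.to("cuda")  # lands on the AMD GPU when the ROCm wheel of PyTorch is installed

image = pipe("a photo of an astronaut riding a horse on mars").images[0]
image.save("astronaut.png")
```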
or both.

Jul 27, 2023 · Deploy ROCm on Windows. It'll also work out of the box if your ROCm is properly installed. Stable Diffusion Dockerfile for ROCm.

I think ROCm has better support for Arch Linux, but Ubuntu is a bit more user friendly if you are familiar with Windows already.

Fooocus is an image generating software (based on Gradio). Learned from Midjourney, manual tweaking is not needed, and users only need to focus on the prompts and images.

Not sure whether the setup experience has improved or not with ROCm 5.x. > And I guess XTX performance can increase with ROCm updates and AMD drivers? I'm fully prepared to be corrected on that.

python --version  # Check you have 3.x

Now, this is where it can get complicated: what we're going to do is get the basics of the Stable Diffusion installation on your PC.

Good news for those who already have AMD cards, but still not good enough for recommending AMD cards, as Nvidia cards are still faster than AMD (at a similar price), plus ROCm is on Linux.

...release candidate Docker container that works properly on 7900 XT / 7900 XTX cards, but you have to also compile PyTorch yourself.

I've tried your Docker image, but when PLMS sampling starts, the temperature of the GPU goes from ~42°C to over 100°C (reaching 115°C) in just a few seconds, which leads to a shutdown of my system ("kernel: amdgpu 0000:2d:00.0: amdgpu: ERROR: GPU over temperature range (SW CTF) detected!"). I've tried running the thing with the fan manually...

Open the Settings (F12) and set Image Generation Implementation to Stable Diffusion (ONNX - DirectML - For AMD GPUs).

RunwayML Stable Diffusion 1.x (all variants); StabilityAI Stable Diffusion XL; StabilityAI Stable Video Diffusion Base, XT 1.0, XT 1.1;

But if there is any way to add support for AMD to your todo list, it would be greatly appreciated. Another thing that hurts is no Triton, but here is what he told me yesterday "regarding the 7900 XTX".

Then I started the webui with export HSA_OVERRIDE_GFX_VERSION=9.0.0 (a quick sanity check for this kind of override is sketched at the end of this section).

It's not ROCm news as such, but an overlapping circle of interest: plenty of people use ROCm on Linux for speed for Stable Diffusion (i.e. not the cabbage-nailed-to-the-floor speeds on Windows with DirectML).

Stable Diffusion Boost with AMD ROCm on Windows! Table of Contents: Introduction. Stable Diffusion with AMD ROCm on Windows. Enjoy the best performance by utilizing your AMD graphics card efficiently.

Many good questions, let me try and answer them all: > I haven't tried it, but I have to admit that I really dislike the vertically stacked interface.

Generation runs at about 15-16 it/s, which is about 3-4x what I got on my 2070 Super.

HIP already exists on Windows, and is used in Blender, although the ecosystem on Windows isn't all that well developed (not that it is on Linux).

The t-shirt and face were created separately with the method and recombined.

I've documented the procedure I used to get Stable Diffusion up and running on my AMD Radeon 6800 XT card. Here are the changes I made: Install Python 3.x.

...only supports Linux, so if your system is Windows or Mac (hackintosh), you can only run image generation on the CPU.

Though I don't know if I'm going to be in a hurry to try and upgrade just yet; I'm with you on waiting to update.

Running the .sh script, it took about 1 minute to load the model (some 2 GB, photorealistic) and another minute to transfer it to VRAM (apparently).

This method should work for all the newer Navi cards that are supported by ROCm. DirectML fork is your best bet with Windows and A1111. ...and I got around 16 it/s.

...04, and that's what I'm going to assume you'll use...

Jan 16, 2024 · Option 1: Install from the Microsoft store. 2 - Find and install the AMD GPU drivers.
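Once the drivers and a ROCm build of PyTorch are installed, a quick sanity check along the lines of the HSA_OVERRIDE_GFX_VERSION trick mentioned above looks like this. Hedged: the override value shown is a common example for RDNA2 cards, not something this page specifies.

```python
# Hedged sketch: verify that the ROCm build of PyTorch can actually see the AMD GPU.
# The override must be set before torch initializes the ROCm runtime.
import os
os.environ.setdefault("HSA_OVERRIDE_GFX_VERSION", "10.3.0")  # example value, card-dependent

import torch

print("torch", torch.__version__, "hip", torch.version.hip)  # hip is None on CUDA builds
print("gpu visible:", torch.cuda.is_available())
if torch.cuda.is_available():
    print("device:", torch.cuda.get_device_name(0))
```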
I tried many methods to speed up the process, but kept... Use the command below every time you want to run Stable Diffusion.

In this video I want to go over how to get Stable Diffusion working on an AMD RX 6700 XT.

The issue exists after disabling all extensions; the issue exists on a clean installation of the webui; the issue is caused by an extension, but I believe it is caused by a bug in the webui.

I've been using ROCm 6 with an RX 6800 on Debian the past few days and it seemed to be working fine. ...to the latest commit (Mixtral prompt processing speedup) and somehow everything exploded: llama.cpp froze, the hard drive was instantly filled by gigabytes of kernel logs spewing errors, and after a while the PC stopped responding.

Stable Diffusion: using NMKD to run the desktop version of Stable Diffusion, a blessing for AMD GPU owners! [Test] A 6750 XT running Stable Diffusion, a reference for anyone choosing a ROCm card. The impressive performance of the 7900 XTX 24GB in Stable Diffusion, driver version Adrenalin 23.x.

I am using Fedora, so the process is slightly different. Install Python 3.10 by running the following command: sudo dnf install python3.10.

Press the Windows key or click on the Windows icon (Start icon). Search for "Command Prompt" and click on the Command Prompt app when it appears.

2023-07-27. 3 min read time. Applies to Windows.

Apr 12, 2023 · AMD is keeping awfully quiet, but I somehow stumbled across a ROCm 5.x...

The feature, motivation and pitch: AMD has released ROCm Windows support, as docs.amd.com shows. Please add PyTorch support of Windows on AMD GPUs! Alternatives: no response. Additional context: no response. cc @jeffdaily @sunway513 @jithunn

Aug 18, 2023 · The model folder will be called "stable-diffusion-v1-5".

I've got Stable Diffusion stable (mostly) on Linux; I don't think I'm going to mess with this until other braver souls go first and/or there's a big...

Hello! I'm about to get a new gaming rig; it will run Windows. However, I'm also interested in machine learning, so I'm wondering: can I run Linux in a VM, pass the GPU through to it, and achieve the same ML efficiency as I would if I did a dual-boot?
I believe you need 2 GPUs to pass one to the VM.

The Status of ROCm on Windows for Stable Diffusion. Updates (2023...). MIOpen Development for Windows. That's interesting, although I'm not sure if you mean a build target for everything or just HIP. The support depends on torch, which in turn depends on MIOpen; you can see more in both PRs: pytorch/pytorch#106161 and ROCm/MIOpen#2272.

So, I've been keeping an eye on the progress of ROCm 5.6 and its release notes in hopes that it may bring Windows compatibility for PyTorch.

Looks great the first time and I can set the resolution to 1024x1024. Side by side comparison with the original.

But that's simply not enough to conquer the market and gain trust.

512x512 images take about 2 minutes to generate on the 1070, and only about 30 seconds on the 6800 XT. I used Ubuntu 22.04.

PLANET OF THE APES - Stable Diffusion Temporal Consistency.

Feb 6, 2024 · Install Stable Diffusion XL locally on Linux and maximize your data processing capabilities.

Nod.ai Shark is extraordinarily easy to set up and works really well. ...(FP16), Windows 11, Shark Stable Diffusion. *Currently officially supports RDNA2 / RDNA3.

It seems you can do without --security-opt seccomp=unconfined.

Last time I checked, in order to get the 7900 XTX working I still needed to compile PyTorch manually (it was ROCm 5.x).

Aug 4, 2023 · Many features are disabled on Auto1111, such as training.

I am employed by Microsoft and am working on the ONNX Runtime ROCm EP (as of 2023-04-20). The result of this repo is a side effect of my work and is not endorsed by Microsoft. The code is hereby provided for the ease of reproducibility of the conversion and optimization of the model pipeline.

pip install transformers, pip install onnxruntime.

This was only developed to run on Linux because ROCm is only officially supported on Linux. This development milestone is essential for making...

Dec 19, 2023 · ComfyUI is a node-based user interface for Stable Diffusion.

When I finally got it to work, I was frustrated that it took several minutes to generate an image.

Then I installed stable-diffusion-webui (Arch Linux).

I also have an RX 6750. Try NMKD Stable Diffusion GUI (AI Image Generator); it works with my RX 6700 XT and supports picture-to-text and model replacement. Otherwise, I have downloaded and begun learning Linux this past week, and messing around with Python getting Stable Diffusion Shark Nod AI going has helped with the learning curve, but I'm so used to Windows that I would like to go with what I'm...

Now, we need to go and download a build of Microsoft's DirectML ONNX runtime.

Get an NV card during Black Friday. DirectML is great, but slower than ROCm on Linux.

Jan 2, 2024 · Step 4: Download a Stable Diffusion Model File. Follow this step-by-step guide to set up Stable Diffusion and access its web user interface.
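The model-file download step mentioned above can also be scripted instead of done in the browser; a sketch follows (the repo id, filename, and target folder are illustrative examples, not taken from this page).

```python
# Hedged sketch: fetch a Stable Diffusion checkpoint from the Hugging Face Hub
# and drop it into the webui's model folder instead of downloading it manually.
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="runwayml/stable-diffusion-v1-5",          # example repository
    filename="v1-5-pruned-emaonly.safetensors",        # example checkpoint file
    local_dir="stable-diffusion-webui/models/Stable-diffusion",
)
print("checkpoint saved to", path)
```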
Information about SDXL and AMD ROCm: https://stabledi...

Feb 25, 2023 · For the depth model you need image_adapter_v14...

I looked around and saw that there was a DirectML version. Apr 14, 2023.

I have had ROCm 5.5 on Linux for ~2 months now (using the leaked RC before the official 5.5 release). Possibly of interest to the lonely AMD crowd (like me).

I've set up Stable Diffusion using AUTOMATIC1111 on my system with a Radeon RX 6800 XT, and generation times are ungodly slow. For a single 512x512 image, it takes upwards of five minutes. The best I am able to get is 512x512 before getting out-of-memory errors.

I have two SD builds running on Windows 10 with a 9th Gen Intel Core i5, 32GB RAM, and an AMD RX 580 with 8GB of VRAM. NMKD GUI. Download and unpack NMKD Stable Diffusion GUI, then launch StableDiffusionGui.exe. 2nd implementation. For things not working with ONNX, you probably answered your question in this post actually: you're on Windows 8.x.

Using Microsoft Olive ONNX. Use Stable Diffusion XL/SDXL locally with AMD ROCm on Linux. Checking for Updates and Announcements. Linux outperforms Windows, providing up to 8 times faster iteration times.

To test the optimized model, run the following command: python stable_diffusion.py --interactive --num_images 2. Use the following command to see what other models are supported: python stable_diffusion.py --help.

We tested 45 different GPUs in total, everything that has...

DirectML: 0.067-0.037 it/s.

I cannot get SHARK to work. Stable Diffusion runs like a pig that's been shot multiple times and is still trying to zig-zag its way out of the line of fire. It refuses to even touch the GPU other than 1 GB of its RAM.

Jun 29, 2023 · I've been asked about how to get Stable Diffusion working on Windows instead of doing it on Linux. 🔧 If you would like to discuss building a Generative AI... May 2, 2023 · Windows support: DirectML sucks in terms of performance, I've found out. This is my go-to.

Feb 12, 2024 · Deploying the Stable Diffusion WebUI on an AMD graphics card: 1. Preparation. Enter your system and run uname -m && cat /etc/*release ## check your system version and note it; uname -srmv. The tutorial uses Ubuntu 20.x. ...WHQL; a benefit for low-end cards: multiply your Stable Diffusion inference speed.

Mar 6, 2023 · That's because Windows does not support ROCm; it only supports Linux. You're using the CPU for the calculation, not the GPU; that's why it is that slow.

If you really want to work with AMD GPUs you need a Linux distro, NOT Windows (if you want to generate images using all your GPU computing power). Just like Nvidia has CUDA for high performance computing, AMD has ROCm, currently only available for Linux distros (no Windows support until later this year).

And if you get hooked on generating stuff with SD and don't want to wait for stable ROCm support for Windows, consider installing Linux on a second drive as dual boot. Ubuntu or Debian work fairly well; they are built for stability and easy usage. ROCm is a real beast that pulls in all sorts of dependencies. Neat, but IMHO one of the chief historical problems.

Stable Diffusion WebUI: I used the command-line args --opt-sub-quad-attention --no-half-vae --disable-nan-check --autolaunch. Took positive and negative prompts and CFG from Tom's Hardware's article on the Stable Diffusion benchmark, and used both the SD-v1-5-pruned-emaonly model and neverendingDreamNED_v122BakedVae.

First, remove all Python versions you have previously installed. (If you use this option, make sure to select "Add Python 3.10 to PATH".) I recommend installing it from the Microsoft Store.

...XT 1.1; LCM: Latent Consistency Models; Playground v1, v2 256, v2 512, v2 1024 and latest v2.5; Stable Cascade Full and Lite; aMUSEd 256 and 512; Segmind Vega; Segmind SSD-1B; Segmind SegMoE SD and SD-XL. ROCm 6.0 released.

I used my AMD 6800 XT with Auto1111 in Windows, but images at 512 took forever.

Expanding on my temporal consistency method for a 30-second, 2048x4096 pixel total override animation.
/r/StableDiffusion is back open after the protest of Reddit killing open API access, which will bankrupt app developers, hamper moderation, and exclude blind users from the site.

Copy and rename it so it's the same as the model (in your case coadapter-depth-sd15v1.yaml) and place it alongside the model.

It worked. toolbox enter --container stable-diffusion; cd stable-diffusion-webui; source venv/bin/activate; python3...

No, I freshly installed Ubuntu in dual-boot mode.

Sep 14, 2022 · Installing Dependencies. Obtain sd-v1-4.ckpt and put it in models/.

Oct 21 2022 Update: The latest way to run SD on W... It is still slower than Linux and ROCm, but not as slow as the first commenter said. The system has a 5950X and a 6800 XT running Pop!_OS 22.04 LTS. 15 it/s afterwards, though.

...and ComfyUI fully lacks support for it.

What's the status of AMD ROCm on Windows, especially regarding Stable Diffusion? Is there a fast alternative? We speed up Stable... First published: 23 Feb 2024.

I've been using a 7900 XTX using DirectML on Windows and ROCm 5.x.

You can give PyTorch with ROCm a try if you're on one of the ROCm-supported Linux distros like Ubuntu. Then you get around 15-17 it/s depending on the ROCm version.

Run Stable Diffusion on an AMD card, using this method. Here are the new official instructions for installing the AMD drivers using the package manager. Good news would be having it on Windows at this point.

Windows 10 was added as a build target back in ROCm 5.x. They were supported since ROCm 5.x.

If 512x512 is true, then even my ancient RX 480 can almost render at...

I couldn't get the training to work, mostly because of what I assume is a bug with the ROCm fork of Triton that's currently...

I previously failed (probably because I was being impatient while installing / downloading, or drunk).

What were your settings? Because if it's the 512x512 example image, it's suspiciously slow and could hint at wrong or missing launch arguments. --no-half --always-batch-cond-uncond --opt-sub-quad-attention --lowvram --disable-nan-check.

If you're having issues with an installation, I would recommend installing Stability Matrix; it is a front end for installing AI installations and it takes away the potential human-based pitfalls (i.e. fecking it up).

./run-rocm to run a shell in the Docker container. Launch the installer.

Later today, I found out there is a Stable Diffusion web UI benchmark where the 6800 XT on Linux can achieve 8 it/s, so I did a little digging and changed my boot arguments to only: python launch...

Once ROCm is vetted out on Windows, it'll be comparable to ROCm on Linux.

ROCm + SD only works under Linux, which should dramatically enhance your generation speed.