AMD’s Radeon RX 7900 XTX showcases impressive capabilities when running the DeepSeek R1 AI model, outperforming NVIDIA’s GeForce RTX 4090 in inference benchmarks.
### AMD Accelerates Support for DeepSeek’s R1 LLM Models, Exceeding Performance Expectations
DeepSeek’s latest AI model has certainly made waves across the industry. Much of the attention has focused on the computational power required to train it, but AMD’s Radeon RX 7900 XTX, built on the “RDNA 3” architecture, shows that ordinary consumer hardware can run the distilled models capably. AMD has published its own DeepSeek R1 benchmark data, showing the card ahead of its NVIDIA counterpart, the GeForce RTX 4090, across several model sizes.
> DeepSeek performing exceptionally on @AMDRadeon 7900 XTX. Discover how to optimize Radeon GPUs and Ryzen AI APUs here:
Many consumers have found success using GPUs for AI tasks, largely because they offer a better performance-to-cost ratio than dedicated AI accelerators. Running models locally also means enhanced privacy, which is a significant consideration given the concerns surrounding DeepSeek’s cloud-hosted AI models. Luckily, AMD has released a detailed guide for running DeepSeek R1 models on its GPUs, and here’s a quick rundown of the process:
1. Ensure you’re running the 25.1.1 Optional or newer Adrenalin driver.
2. Download LM Studio 0.3.8 or later from lmstudio.ai/ryzenai.
3. Install LM Studio and bypass the initial onboarding screen.
4. Navigate to the “discover” tab.
5. Choose your preferred DeepSeek R1 Distill. Smaller versions like the Qwen 1.5B distill offer the fastest speeds and are ideal for beginners, while larger ones provide stronger reasoning capabilities. All options are highly capable.
6. On the right, select the “Q4_K_M” quantization and hit “Download.”
7. After downloading, return to the chat tab, select the DeepSeek R1 distill from the dropdown, and ensure “manually select parameters” is checked.
8. In the GPU offload layers section, push the slider to the maximum.
9. Click “Model Load.”
10. Engage with a reasoning model fully operational on your AMD hardware!
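Beyond the chat tab, LM Studio can also expose the loaded model through a local OpenAI-compatible HTTP server (enabled from its developer/server view, listening on port 1234 by default). The sketch below, using only Python’s standard library, shows roughly how you could query a loaded DeepSeek R1 distill that way; the endpoint URL and the model identifier string are assumptions that depend on your LM Studio setup, not values confirmed by AMD’s guide.

```python
import json
from urllib import request

# LM Studio's default local server endpoint (assumption -- check the
# server view in LM Studio for the actual host/port on your machine).
LMSTUDIO_URL = "http://localhost:1234/v1/chat/completions"


def build_chat_request(model: str, prompt: str, temperature: float = 0.6) -> bytes:
    """Encode an OpenAI-style chat-completion payload as JSON bytes."""
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "temperature": temperature,
    }
    return json.dumps(payload).encode("utf-8")


def ask_local_model(model: str, prompt: str) -> str:
    """POST the prompt to the local server and return the model's reply."""
    req = request.Request(
        LMSTUDIO_URL,
        data=build_chat_request(model, prompt),
        headers={"Content-Type": "application/json"},
    )
    with request.urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]
```

With the server running, a call might look like `ask_local_model("deepseek-r1-distill-qwen-1.5b", "Why is the sky blue?")`, where the model name is a hypothetical identifier — use whatever name LM Studio shows for the distill you downloaded.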
If you run into any issues, AMD has released a tutorial on YouTube that breaks down each step in detail. Checking it out should help you run DeepSeek’s LLMs on your AMD systems securely, protecting your data from potential misuse. With the next wave of GPUs from both NVIDIA and AMD, we anticipate a significant boost in inferencing power, thanks to dedicated AI engines designed to handle such workloads efficiently.