https://radwaytech.com/running-tomography-pipelines-on-amd-rocm/
Running tomography pipelines on AMD ROCm - RadwayTech Services
Apr 29, 2026 - An experimental port of the httomo tomography stack to ROCm, comparing the performance of a Radeon RX 7900 GRE against a number of NVIDIA accelerators.
https://www.nextplatform.com/ai/2024/11/26/amd-rocm-63-has-goodies-for-ai-aficionados-and-hpc-gurus-alike/1654181
AMD ROCm 6.3 Has Goodies For AI Aficionados And HPC Gurus Alike
Dec 4, 2024 - Speeds and feeds are great, but hardware is only as useful as the software that can harness it, and, for AMD, that’s the ROCm software stack. If you’re...
https://www.phoronix.com/news/AMD-ROCm-7.2.1
AMD ROCm 7.2.1 Released With Ubuntu 24.04.4 LTS Support, Bug Fixes - Phoronix
Building off the release of ROCm 7.2 from January, ROCm 7.2.1 is now available with Ubuntu 24.04.4 LTS support as well as various bug fixes to this open-source...
https://rocm-handbook.amd.com/projects/amd-rocm-programming-guide/en/latest/
AMD ROCm Programming Guide — AMD ROCm Programming Guide 7.2.2
AMD ROCm programming guide
https://gpuopen.com/learn/amd-lab-notes/amd-lab-notes-gpu-aware-mpi-readme/
GPU-aware MPI with ROCm - AMD GPUOpen
MPI is the de facto standard for inter-process communication in High-Performance Computing. This post will guide you through the process of setting up an MPI...
https://rocm.blogs.amd.com/artificial-intelligence/mlperf-inf_v6.0-repro/README.html
Reproducing the AMD MLPerf Inference v6.0 Submission Result — ROCm Blogs
Provides instructions for potential customers and partners to verify AMD's MLPerf Inference v6.0 submission result.
https://rocm.blogs.amd.com/software-tools-optimization/eaisuite-autoscaling/README.html
Leveraging AMD AI Workbench to Scale LLM Inference for Optimal Resource Utilization — ROCm Blogs
Learn how to use the AMD AI Workbench GUI and AIM Engine CLI capabilities to enable and configure autoscaling for your AI workloads.
https://debconf25.debconf.org/talks/182-cirocmdebiannet-a-debian-ci-with-amd-gpus-for-aiml-packages/
ci.rocm.debian.net: a Debian CI with AMD GPUs, for AI/ML packages - DebConf25
https://rocm.blogs.amd.com/artificial-intelligence/mlperf-inference-v6.0/README.html
AMD Instinct™ GPUs MLPerf Inference v6.0 Submission — ROCm Blogs
In this blog, we share the technical details of how we accomplish the results in our MLPerf Inference v6.0 submission.
https://www.crusoe.ai/cloud/gpus/amd-mi300x
AMD MI300X Cloud | 192GB HBM3 & AMD ROCm | Crusoe Cloud
Power your AI and HPC projects with the AMD Instinct MI300X GPU on Crusoe Cloud. Get exceptional performance for generative AI and training with 192GB of HBM3...
https://rocm.docs.amd.com/en/latest/how-to/rocm-for-ai/inference-optimization/workload.html
AMD Instinct MI300X workload optimization — ROCm Documentation
Learn about workload tuning on AMD Instinct MI300X GPUs for optimal performance.
https://www.crusoe.ai/cloud/gpus/amd-mi355x
AMD MI355X Cloud | 288GB HBM3e & AMD ROCm | Crusoe Cloud
Access the AMD Instinct MI355X GPU on Crusoe Cloud for next-generation generative AI. Leverage a massive 288GB of HBM3e memory and the open AMD ROCm software...