Robuta

https://gpuopen.com/learn/onnx-directlml-execution-provider-guide-part2/
Learn how to optimize neural network inference on AMD hardware using the ONNX Runtime with the DirectML execution provider and DirectX 12 in the second part of...
Tags: provider guide, onnx, directml, execution, part
https://www.amd.com/en/resources/support-articles/release-notes/RN-RAD-WIN-23-40-27-06-DIRECTML.html
Tags: amd software, adrenalin edition, directml