Is it possible to run CUDA on AMD GPUs?

I'm looking to extend my skill set into GPU computing. I am familiar with raytracing and realtime graphics (OpenGL), but the next generation of graphics and high-performance computing seems to be GPU computing or something like it.

I currently use an AMD HD 7870 card in my home computer. Can I write CUDA code for it? (My intuition says no, but since Nvidia released compiler binaries, I might be wrong.)

A second, more general question: where do I get started with GPU computing? I'm sure this is a frequently asked question, but the best answers I've seen are from '08, and I figure the field has changed quite a bit since then.

Nope, you can't use CUDA for that. CUDA is limited to NVIDIA hardware. OpenCL would be the best alternative.
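
To give a flavor of the alternative, here is a minimal OpenCL vector-add sketch (host code in C++; names such as vadd are illustrative, error checking and resource cleanup are omitted for brevity, and you would typically link with -lOpenCL):

#include <CL/cl.h>
#include <cstdio>
#include <vector>

// The kernel is plain OpenCL C, compiled at runtime from this string.
const char* kSrc =
    "__kernel void vadd(__global const float* a,\n"
    "                   __global const float* b,\n"
    "                   __global float* c) {\n"
    "    int i = get_global_id(0);\n"
    "    c[i] = a[i] + b[i];\n"
    "}\n";

int main() {
    const size_t n = 1024;
    std::vector<float> a(n, 1.0f), b(n, 2.0f), c(n, 0.0f);

    // Pick the first GPU on the first platform.
    cl_platform_id plat;
    clGetPlatformIDs(1, &plat, nullptr);
    cl_device_id dev;
    clGetDeviceIDs(plat, CL_DEVICE_TYPE_GPU, 1, &dev, nullptr);
    cl_context ctx = clCreateContext(nullptr, 1, &dev, nullptr, nullptr, nullptr);
    cl_command_queue q = clCreateCommandQueue(ctx, dev, 0, nullptr);

    // Build the kernel from source and set up device buffers.
    cl_program prog = clCreateProgramWithSource(ctx, 1, &kSrc, nullptr, nullptr);
    clBuildProgram(prog, 1, &dev, nullptr, nullptr, nullptr);
    cl_kernel k = clCreateKernel(prog, "vadd", nullptr);

    cl_mem da = clCreateBuffer(ctx, CL_MEM_READ_ONLY | CL_MEM_COPY_HOST_PTR,
                               n * sizeof(float), a.data(), nullptr);
    cl_mem db = clCreateBuffer(ctx, CL_MEM_READ_ONLY | CL_MEM_COPY_HOST_PTR,
                               n * sizeof(float), b.data(), nullptr);
    cl_mem dc = clCreateBuffer(ctx, CL_MEM_WRITE_ONLY,
                               n * sizeof(float), nullptr, nullptr);

    clSetKernelArg(k, 0, sizeof(cl_mem), &da);
    clSetKernelArg(k, 1, sizeof(cl_mem), &db);
    clSetKernelArg(k, 2, sizeof(cl_mem), &dc);

    // Launch one work-item per element and read the result back.
    size_t global = n;
    clEnqueueNDRangeKernel(q, k, 1, nullptr, &global, nullptr, 0, nullptr, nullptr);
    clEnqueueReadBuffer(q, dc, CL_TRUE, 0, n * sizeof(float), c.data(), 0, nullptr, nullptr);

    printf("c[0] = %f\n", c[0]);  // expect 3.0
    return 0;
}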

Khronos itself has a list of resources, as does the StreamComputing.eu website. For AMD-specific resources, you might want to have a look at AMD's APP SDK page.

Note that at this time there are several initiatives to translate/cross-compile CUDA to different languages and APIs. One such example is HIP. Note, however, that this still does not mean that CUDA runs on AMD GPUs.

You can't use CUDA for GPU programming, as CUDA is supported by NVIDIA devices only. If you want to learn GPU computing, I would suggest you start with CUDA and OpenCL simultaneously; that would be very beneficial for you. Talking about CUDA, you can use mCUDA, which doesn't require an NVIDIA GPU.

I think it is going to be possible soon on AMD FirePro GPUs; see the press release here, but support for the development tools is coming in Q1 2016:

An early access program for the "Boltzmann Initiative" tools is planned for Q1 2016.

Yup. :) You can use Hipify to convert CUDA code very easily to HIP code, which can be compiled and run quite well on both AMD and NVIDIA hardware. Here are some links:

GPUOpen: a very cool site by AMD that has tons of tools and software libraries to help with different aspects of GPU computing, many of which work on both platforms

The HIP GitHub repository, which shows the process to hipify code

HIP GPUOpen Blog

Update 2021: AMD changed the website link; go to the ROCm website:

https://rocmdocs.amd.com/en/latest/
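
For illustration, converting a single source file with the hipify-perl script from the HIP repository and compiling the result looks roughly like this (file names are hypothetical):

hipify-perl vector_add.cu > vector_add.cpp
hipcc vector_add.cpp -o vector_add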

You can run NVIDIA® CUDA™ code on Mac, and indeed on OpenCL 1.2 GPUs in general, using Coriander. Disclosure: I'm the author. Example usage:

cocl cuda_sample.cu
./cuda_sample

Result: (screenshot of the sample's output omitted)

As of 2019-10-10 I have NOT tested it, but there is the "GPU Ocelot" project

http://gpuocelot.gatech.edu/

which, according to its advertisement, tries to compile CUDA code for a variety of targets, including AMD GPUs.

These are some basic details I could find.

Linux

ROCm supports the major ML frameworks like TensorFlow and PyTorch with ongoing development to enhance and optimize workload acceleration.

It seems the support is only for Linux systems (https://rocmdocs.amd.com/en/latest/Installation_Guide/Installation-Guide.html).

ROCm is based on HIP:

Heterogeneous-Computing Interface for Portability (HIP) is a C++ dialect designed to ease conversion of CUDA applications to portable C++ code. It provides a C-style API and a C++ kernel language. The C++ interface can use templates and classes across the host/kernel boundary. The HIPify tool automates much of the conversion work by performing a source-to-source transformation from CUDA to HIP. HIP code can run on AMD hardware (through the HCC compiler) or NVIDIA hardware (through the NVCC compiler) with no performance loss compared with the original CUDA code.
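
As a concrete illustration of that description, here is a minimal HIP vector-add sketch (names such as vadd are illustrative and error checking is omitted); the same source compiles with hipcc for AMD GPUs and, through the NVCC path, for NVIDIA GPUs:

#include <hip/hip_runtime.h>
#include <cstdio>

// The kernel language is the same C++ dialect as in CUDA.
__global__ void vadd(const float* a, const float* b, float* c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;  // global thread index
    if (i < n) c[i] = a[i] + b[i];
}

int main() {
    const int n = 1024;
    const size_t bytes = n * sizeof(float);
    float ha[1024], hb[1024], hc[1024];
    for (int i = 0; i < n; ++i) { ha[i] = 1.0f; hb[i] = 2.0f; }

    // C-style API, mirroring cudaMalloc/cudaMemcpy.
    float *da, *db, *dc;
    hipMalloc(&da, bytes);
    hipMalloc(&db, bytes);
    hipMalloc(&dc, bytes);
    hipMemcpy(da, ha, bytes, hipMemcpyHostToDevice);
    hipMemcpy(db, hb, bytes, hipMemcpyHostToDevice);

    // Triple-chevron kernel launches work in HIP just as in CUDA.
    vadd<<<(n + 255) / 256, 256>>>(da, db, dc, n);

    hipMemcpy(hc, dc, bytes, hipMemcpyDeviceToHost);
    printf("hc[0] = %f\n", hc[0]);  // expect 3.0

    hipFree(da); hipFree(db); hipFree(dc);
    return 0;
}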

The TensorFlow ROCm port is https://github.com/ROCmSoftwarePlatform/tensorflow-upstream and their Docker container is https://hub.docker.com/r/rocm/tensorflow
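
As a sketch, pulling and running that container usually looks something like the following; the exact device flags depend on the ROCm release, so check the image's README:

docker pull rocm/tensorflow
docker run -it --device=/dev/kfd --device=/dev/dri --group-add video rocm/tensorflow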

Mac

This support is for macOS 12.0+ (as per their claim):

Testing conducted by Apple in October and November 2020 using a production 3.2GHz 16-core Intel Xeon W-based Mac Pro system with 32GB of RAM, AMD Radeon Pro Vega II Duo graphics with 64GB of HBM2, and 256GB SSD.

You can now leverage Apple’s tensorflow-metal PluggableDevice in TensorFlow v2.5 for accelerated training on Mac GPUs directly with Metal.

As others have already stated, CUDA can only be directly run on NVIDIA GPUs. As also stated, existing CUDA code could be hipify-ed, which essentially runs a sed script that changes known CUDA API calls to HIP API calls. Then the HIP code can be compiled and run on either NVIDIA (CUDA backend) or AMD (ROCm backend) GPUs.
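
To make that concrete, the transformation is largely mechanical renaming; a hypothetical fragment before and after hipify-ing might look like this:

// Before (CUDA):
cudaMalloc(&d_buf, bytes);
cudaMemcpy(d_buf, h_buf, bytes, cudaMemcpyHostToDevice);

// After (HIP):
hipMalloc(&d_buf, bytes);
hipMemcpy(d_buf, h_buf, bytes, hipMemcpyHostToDevice);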

The new piece of information I'd like to contribute is that if someone doesn't want to hipify their existing CUDA code (i.e., change all CUDA API calls to HIP API calls), there is another option: simply add (and include) a header file that redefines the CUDA calls as HIP calls. For example, a simple vector addition code might use the following header file:

#include "hip/hip_runtime.h"


#define cudaMalloc hipMalloc
#define cudaMemcpy hipMemcpy
#define cudaMemcpyHostToDevice hipMemcpyHostToDevice
#define cudaMemcpyDeviceToHost hipMemcpyDeviceToHost
#define cudaFree hipFree

...where the main program would include the header file:

#include "/path/to/header/file"


int main(){


...


}

Compilation would, of course, require using nvcc (as normal) on an NVIDIA GPU and hipcc on an AMD GPU.
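
For example, assuming the source file is named vector_add.cu (name hypothetical):

nvcc vector_add.cu -o vector_add     # on an NVIDIA GPU
hipcc vector_add.cu -o vector_add    # on an AMD GPU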

Regarding where to get started with GPU computing (in general), I would recommend starting with CUDA since it has the most documentation, example codes, and user-experiences available via a Google search. The good news is, once you know how to program in CUDA, you essentially already know how to program in HIP : )

Last year, as part of the ROCm initiative, AMD launched an interesting open-source project named GPUFort.

While it's (obviously) not a way to simply "run CUDA code on AMD GPUs", it helps developers move away from CUDA.

Quick description here: https://www.phoronix.com/news/AMD-Radeon-GPUFORT

As it's open source, you can find it on GitHub: https://github.com/ROCmSoftwarePlatform/gpufort