Modern GPUs meet the requirements of High Performance Computing. GPUs are often used for compute-intensive algorithms, but in most cases only within a single workstation containing several GPUs. The next logical step toward higher performance is a GPU-based cluster or cloud. An API that virtualizes GPUs over a common network would be highly valuable for academic use and R&D companies.
This API enables the concurrent use of GPGPU-compatible devices remotely. It acts as a transparent communication layer between clients and servers: one client can connect to several servers and perform distributed computation across all nodes, and multiple clients can share a single server. It is useful in three different environments:
- Clusters and clouds. To reduce the number of GPUs installed in High Performance Clusters, which yields energy savings as well as related savings in acquisition costs, maintenance, space, cooling, etc.
- Academia. In commodity networks, to offer many students concurrent access to a few high-performance GPUs.
- Virtual Machines. To give virtual machines access to the CUDA facilities of the physical machine.
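The transparent communication layer described above can be illustrated with a minimal sketch. The snippet below is a hypothetical toy model, not the actual API: a server executes a stand-in "kernel" (`vector_add`) on behalf of a client, and the client stub (`remote_call`) makes the remote invocation look like a local function call, which is the essence of forwarding GPU work over the network.

```python
# Toy sketch of a transparent remote-execution layer (hypothetical names;
# the real API intercepts CUDA calls, while here a simple kernel-like
# function is forwarded over TCP to illustrate the client/server split).
import json
import socket
import threading

def vector_add(a, b):
    """Stands in for a GPU kernel running on the server's device."""
    return [x + y for x, y in zip(a, b)]

KERNELS = {"vector_add": vector_add}

def serve(sock):
    """Server loop: accept one client and execute the forwarded call."""
    conn, _ = sock.accept()
    with conn:
        request = conn.makefile("r").readline()
        name, args = json.loads(request)
        result = KERNELS[name](*args)
        conn.sendall((json.dumps(result) + "\n").encode())

def remote_call(addr, name, *args):
    """Client stub: looks like a local call, runs on the remote server."""
    with socket.create_connection(addr) as conn:
        conn.sendall((json.dumps([name, list(args)]) + "\n").encode())
        return json.loads(conn.makefile("r").readline())

srv = socket.socket()
srv.bind(("127.0.0.1", 0))  # ephemeral port on the loopback interface
srv.listen(1)
addr = srv.getsockname()
t = threading.Thread(target=serve, args=(srv,))
t.start()

result = remote_call(addr, "vector_add", [1, 2, 3], [10, 20, 30])
t.join()
print(result)  # [11, 22, 33]
```

In the same spirit, a client could open connections to several servers and split the input across them, which is how one node can distribute computation over many remote GPUs.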