AI/GPU Roadmap Spotlight: GPU Provider

What Is GPU Provider?

This component of the AI/GPU ecosystem delivers the software that connects GPU-equipped computers and servers to the Golem Network. With the GPU Provider, these machines can be rented by end users (Requestors, in Golem's nomenclature) in a secure and convenient way. In exchange for the GPU computing power, Providers receive GLM tokens from Requestors.

The software is delivered as a live USB image that needs to be written to an external drive. The image contains the operating system, drivers, and the Provider software. After the image is written and the drive is connected, the computer boots from it and starts the Configuration Wizard. Once configured, the Provider is ready to be rented by Requestors.
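Before writing a live image to a drive, it is good practice to verify its checksum against the published value. As an illustration only (not part of the official Provider tooling; the filename and hash below are hypothetical), here is a minimal Python sketch of such a verification step:

```python
import hashlib


def verify_image(image_path: str, expected_sha256: str, chunk_size: int = 1 << 20) -> bool:
    """Stream the image file and compare its SHA-256 digest to the published one.

    Reading in chunks keeps memory use constant even for multi-gigabyte images.
    """
    digest = hashlib.sha256()
    with open(image_path, "rb") as f:
        while chunk := f.read(chunk_size):
            digest.update(chunk)
    return digest.hexdigest() == expected_sha256.lower()


# Hypothetical usage -- substitute the image you actually downloaded
# and the checksum published alongside it:
# if verify_image("golem-gpu-live.img", "3a7bd3e2360a..."):
#     print("Checksum OK, safe to write to the USB drive")
```

Only after the checksum matches should the image be written to the external drive.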

Why Are We Doing It?

We aim to build a decentralized physical infrastructure for open-source developers and AI companies. With the increasing demand for computing power in AI projects, we are expanding the Golem ecosystem, and the GPU Provider will help supply computing power to the AI industry. More specifically, we are responding to the rapid development of AI models and of services built on top of them.

Who Is It For?

GPU Provider is addressed to people and companies who have computers or servers running on the x86 platform, equipped with NVIDIA GPUs from the RTX 30xx series or newer.

Those interested in becoming GPU Providers must dedicate their computer's entire resources to the Golem Network while participating. While the Provider is running, the computer cannot be used for other tasks, because the Provider software takes over the graphics card. Since the software ships with its own operating system on the live USB, it does not matter which OS (e.g., Linux or Windows) is normally installed on the machine.

Technical Implementation

The Provider software uses virtualization to separate tasks performed for the Requestor from the Provider's own system. The virtual machine accesses the Provider's graphics card directly via GPU passthrough, which removes most of the performance overhead associated with virtualization.

To protect the hardware from malicious Requestors, IOMMU (I/O memory management unit) mechanisms are used to isolate the GPU from other devices. Further payload isolation is based on Linux kernel capabilities, which force Requestor code to run with limited privileges in the guest system.
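On Linux, the kernel exposes IOMMU groups under /sys/kernel/iommu_groups; a GPU can only be passed through in isolation if it (together with its companion functions, such as the HDMI audio device) sits in its own group. As an illustrative sketch only (not the Provider's actual implementation), the following Python snippet enumerates those groups; the base path parameter exists so the logic can be exercised outside a real sysfs:

```python
from pathlib import Path


def list_iommu_groups(base: str = "/sys/kernel/iommu_groups") -> dict[str, list[str]]:
    """Map each IOMMU group number to the PCI addresses of its devices.

    An empty result usually means the IOMMU is disabled in firmware or
    on the kernel command line. A GPU sharing a group with unrelated
    devices cannot be safely isolated for passthrough.
    """
    groups: dict[str, list[str]] = {}
    root = Path(base)
    if not root.is_dir():
        return groups
    for group in sorted(root.iterdir(), key=lambda p: p.name):
        devices = group / "devices"
        if devices.is_dir():
            groups[group.name] = sorted(d.name for d in devices.iterdir())
    return groups


# Example: flag groups containing more than one device.
# for num, devs in list_iommu_groups().items():
#     marker = " (shared!)" if len(devs) > 1 else ""
#     print(f"group {num}: {devs}{marker}")
```

A Provider machine suitable for passthrough would typically show the GPU's PCI functions (e.g., 0000:01:00.0 and 0000:01:00.1) alone in one group.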

Benefits for Users

The benefits achieved by GPU Provider users include:

  • Receiving GLM tokens from Requestors for renting resources.
  • Personal development in the GPU/AI domain and understanding of decentralized platforms.
  • Co-creation of the project and contribution to the development of Golem Network’s open-source platform.

Connection with Other AI/GPU Projects

The GPU Provider will be used by AI model inference applications, the Rent GPU service, and other Requestors connecting their applications via the Worker API or SDKs. GPU Provider differs from AI Provider, prepared with Golem's partner Gamerhash, in that it can run any computational tasks, not only AI tasks as AI Provider does. AI Provider is also dedicated to the Windows platform only.

Milestones and Next Steps

Q1 - Beta tests: We have completed the first phase of beta testing. Thanks to the involvement of about 50 beta testers, we have improved the software's quality and ease of use.

Q2 - Further development: For the second quarter, we plan to extend the GPU Provider with support for multiple GPUs in one PC or server, and with a provider management panel; management is currently only possible by connecting remotely to the machine running the Provider and using the command line. We also plan to grow the number of GPU Provider users this quarter.

Do You Want to Get Involved?

Our Beta Testing waitlist is still open. Once we expand the number of GPU Providers, we will notify Beta Testers from the waitlist. Interested?