- Summary:
- io.net launched its testnet last week and has set out to help data centres around the world generate revenue from idle GPU capacity.
Io.net, the company on a mission to build the world's largest decentralised Graphics Processing Unit (GPU) computing network, has today reported that 107,000 GPUs from data centres and private clusters will plug into its newly launched decentralised physical infrastructure network (DePIN) beta.
The news marks a strong start for io.net's novel approach to decentralised GPU computing and a significant boost to the company's efforts to take on Web2 cloud computing market leaders such as AWS and Azure. Notably, the company will focus most of its efforts on powering machine learning and artificial intelligence (AI) computing, which is good news for Web3.
One of io.net's core strengths is its ability to cluster GPUs spread across different geographic locations in minutes. Notably, the company is partnering with Render, enabling it to leverage Render's DePIN network of distributed GPU suppliers. These computing resources will be deployed on io.net's platform.
Optimising GPU capacity the io.net way
The partnership with Render is one of the most valuable for io.net. The Render network sources GPU rendering capacity from multiple decentralised suppliers at higher speeds and lower costs than centralised cloud solutions. Under the partnership, io.net and Render have allocated $2,600,000 to an incentive program for GPU resource providers that join the network. In turn, Render nodes have the option of expanding their existing GPU capacity from graphics rendering to AI and machine learning workloads.
As we reported earlier, io.net is focused on helping data centres put idle GPU capacity to work. The company says it aims to serve the many data centres globally that underutilise their GPUs, tens of thousands of which are top-end cards. Specifically, io.net says data centres typically use only 12% to 18% of their GPU capacity, leaving the rest idle.
In terms of the broader market, the company is building a DePIN that will primarily cater to machine learning engineers and businesses looking for a highly customisable experience. This modular design lets users specify exactly what they need: the number of GPUs, their location, security parameters and other requirements.
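To illustrate the kind of configuration such a modular design implies, here is a minimal sketch of what a cluster request could look like. It is purely hypothetical: the field names and the `request_cluster` helper are assumptions made for illustration and do not reflect io.net's actual API.

```python
# Hypothetical sketch of a GPU cluster request on a DePIN-style platform.
# Field names and the request_cluster() helper are illustrative assumptions,
# not io.net's actual API.
from dataclasses import dataclass, field


@dataclass
class ClusterRequest:
    gpu_count: int                      # number of GPUs to cluster
    gpu_model: str                      # e.g. a specific top-end card
    locations: list[str] = field(default_factory=list)  # preferred regions
    encrypted_traffic: bool = True      # example security parameter
    max_hourly_price_usd: float = 2.0   # cost ceiling per GPU-hour


def request_cluster(req: ClusterRequest) -> dict:
    """Pretend submission: in practice this would call the platform's API."""
    return {
        "status": "pending",
        "requested_gpus": req.gpu_count,
        "regions": req.locations or ["any"],
    }


# Example: a 64-GPU cluster spread across two regions.
order = request_cluster(ClusterRequest(
    gpu_count=64,
    gpu_model="A100",
    locations=["us-east", "eu-west"],
))
print(order)
```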