
The Trust Dilemma in AI: Can Blockchain Provide a Solution?

Written By: Michael Abadha
Summary:
• AI investments attracted close to $190 billion in 2023, with notable deals such as Microsoft’s $10 billion investment in OpenAI.

Billions have been invested in the AI industry over the past three years. According to a report by WisdomTree, global corporate AI investments attracted close to $190 billion in 2023, more than ten times what was invested in 2013 ($14.5 billion).

However, amidst this success lies a billion-dollar question: are AI innovations trustworthy in their current state? Quite a number of stakeholders from various sectors, including tech, regulators, academia, and more critical niches such as healthcare, have voiced concerns in the recent past.

Elon Musk, for instance, is on record criticizing Google’s Gemini AI chatbot for ‘woke’ responses that triggered a backlash on debut. The Tesla CEO has also indicated that he is not comfortable with Apple’s potential integration of ChatGPT.

“If Apple integrates OpenAI at the OS level, then Apple devices will be banned at my companies,” tweeted Elon Musk. “Visitors will have to check their Apple devices at the door, where they will be stored in a Faraday cage.”

Whether or not this take is justified is up for debate, but what’s clear is that there is a huge trust problem. The next sections of this article will dive into the fundamental shortcomings that are fueling mistrust in AI development, as well as blockchain’s potential to address some of these issues.

The AI Credibility Gap

If, like me, you’re a regular user of large language models (LLMs), chances are you cannot explain the process through which ChatGPT, Llama, or Bard generates outputs, or what data informs the responses that appear once you query these models.

Apart from AI developers and tech enthusiasts, only a handful of people understand how outputs are generated: a classic black-box problem. LLMs do not tell their users where their training data was gathered from, what limitations are embedded in the code, or any other nuances that may be useful. This is one of the areas where AI innovations are failing to build trust.

Of course, one might be tempted to argue that regulations such as the EU AI Act will compel companies in this realm to be more transparent. However, it is not as simple as it sounds. Players are now hiding behind ‘open source’ labels, making it hard to distinguish what is genuinely disclosed from the fine print, which in many cases buries concessions on data privacy.

At the same time, we also cannot ignore the fact that the very companies we expect to be honest have proven time and again to be cunning, especially in this era where data is the new oil.

For context, all of the biggest tech companies have poured money into AI over the past year, with notable deals such as Microsoft’s $10 billion investment in OpenAI. Meanwhile, Nvidia dominates the AI chip market almost single-handedly. At this pace, the AI revolution is bound to suffer the same centralization fate as Web2.

And that’s just the tip of the iceberg; there are other major factors posing a risk to AI’s trustworthiness, including ethical and environmental concerns. Estimates show that AI data centers’ demand for power will grow by 10% within the next year, a projection that does not align well with the UN’s current sustainability goals.

Decentralizing AI Innovation 

What if more people were involved in AI development? Currently, this is not the case; tech giants are the primary players in the field, which is not surprising given that they hold the vast datasets essential for AI training.

However, the narrative is gradually changing. AI-oriented blockchain platforms such as Qubic are opening up the AI space to more participants. This Layer 1 chain leverages a consensus mechanism called Useful Proof of Work; instead of miners wasting their extra computational energy, Qubic allows them to direct this power toward researching solutions for the Aigarth AI software, which is built on top of the Qubic blockchain infrastructure.

The idea is simple: use blockchain technology to source computing power from more participants while also expanding the training knowledge for Aigarth’s artificial neural networks (ANNs). Qubic’s goal is to eventually build self-improving ANNs, naturally mimicking the evolution of human intelligence.
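To make the useful-proof-of-work idea a little more concrete, below is a minimal Python sketch of how mining could double as model search: instead of grinding hashes, a miner hunts for neural-network candidates that score well on a training task and submits the best one, which any node can verify by re-scoring it. Every name and scoring rule in the sketch is an assumption made for illustration; it does not describe Qubic’s or Aigarth’s actual protocol.

    import hashlib
    import random

    # Purely illustrative "useful proof of work" loop: the miner searches for
    # candidate weights that fit a toy dataset instead of searching for a
    # hash below a difficulty target.

    def evaluate_candidate(weights, dataset):
        """Toy fitness function: negative squared error against the data."""
        return -sum((w - x) ** 2 for w, x in zip(weights, dataset))

    def mine_useful_work(dataset, target_score, max_attempts=100_000):
        best = None
        for _ in range(max_attempts):
            candidate = [random.uniform(-1, 1) for _ in dataset]
            score = evaluate_candidate(candidate, dataset)
            if best is None or score > best[1]:
                best = (candidate, score)
            if score >= target_score:
                break
        # The candidate itself is the "proof": any node can re-run
        # evaluate_candidate to verify the claimed score.
        return best

    if __name__ == "__main__":
        data = [0.2, -0.5, 0.9, 0.1]
        candidate, score = mine_useful_work(data, target_score=-0.05)
        print("best score:", round(score, 4))
        print("commitment:", hashlib.sha256(repr(candidate).encode()).hexdigest()[:16])

In this toy version the “useful” output is a set of model weights rather than a discarded hash, which is the essential trade the article describes: the same compute that secures the chain also advances the training search.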

Blockchain infrastructures also have the potential to bring transparency to AI innovations. Blockchain-powered AI systems are designed to store data on-chain rather than in centralized databases, which opens the door to transparent audits of the input data, a particular cause for concern.

Moreover, with blockchain technology, it becomes difficult for contributors to manipulate the input information without other stakeholders raising alarms. What is to stop big tech from tweaking perspectives as they have consistently done over the past decade? Only decentralization can prevent centralized AI companies from succumbing to this temptation.
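As a rough illustration of how such audits could work, the following Python sketch fingerprints a set of training records with a Merkle root; if that root were anchored on-chain, any later change to a record would produce a different root and be immediately detectable. The publish_on_chain call is a hypothetical placeholder rather than a real API, and the rest is a generic hashing pattern, not any specific platform’s implementation.

    import hashlib

    def record_hash(record: str) -> str:
        """SHA-256 fingerprint of a single training record."""
        return hashlib.sha256(record.encode("utf-8")).hexdigest()

    def merkle_root(hashes):
        """Fold a list of leaf hashes into a single root hash."""
        if not hashes:
            return record_hash("")
        level = list(hashes)
        while len(level) > 1:
            if len(level) % 2:              # duplicate the last leaf on odd levels
                level.append(level[-1])
            level = [record_hash(level[i] + level[i + 1])
                     for i in range(0, len(level), 2)]
        return level[0]

    training_records = ["doc-001: ...", "doc-002: ...", "doc-003: ..."]
    leaves = [record_hash(r) for r in training_records]
    root = merkle_root(leaves)
    print("dataset fingerprint:", root)
    # publish_on_chain(root)  # hypothetical placeholder: anchor the fingerprint on-chain

Because the root commits to every record, a contributor who quietly swaps or edits a document after the fact cannot reproduce the published fingerprint, which is exactly the kind of tamper-evidence the paragraph above points to.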

Conclusion 

AI will continue evolving as the world embraces the fourth industrial revolution (4IR). But just like the internet, not all AI innovations that currently exist will survive; in fact, most will fail as a result of the shortcomings highlighted in this article. What will be more intriguing to witness, however, is the synergy between AI and other emerging technologies to deliver innovations that are truly for the people and not another Trojan horse to control the masses.

This post was last modified on Jul 31, 2024, 15:58 BST.

Written By: Michael Abadha

Michael is a self-taught financial markets analyst, who specializes in analysis of equities, forex and crypto markets. He draws his inspiration from the fact that markets provide an interface through which the world interacts in search of a better tomorrow.
