Most AI Tokens Are Not Decentralized AI, Research Finds
Decentralized AI promises something profound.
It imagines a future where intelligence is not controlled by a handful of technology firms, where users retain ownership over data and models, and where AI systems operate as shared public infrastructure rather than proprietary assets.
Over the past few years, this vision has been bundled into a fast-growing category of crypto assets known as AI-based tokens.
Momentum accelerated after the release of ChatGPT in late 2022. A familiar narrative took hold: crypto would decentralize AI in the same way it was supposed to decentralize finance.
The research paper AI-Based Crypto Tokens: The Illusion of Decentralized AI? asks the necessary (and uncomfortable) question beneath that narrative:
How much of this decentralization is real, and how much exists only in name?
Decentralized in branding, centralized in practice
At first glance, AI-token projects appear decentralized. They use blockchains. They issue governance tokens. They advertise open participation and community ownership.
The paper shows that this surface-level decentralization masks a deeply centralized reality.
Across nearly all major projects, the most important computational tasks—model training, inference, and data hosting—occur off-chain on servers controlled by a small group of actors. The blockchain functions primarily as a coordination and payment layer, not as the execution environment for intelligence itself.
This architectural choice is understandable. Modern AI workloads are expensive, data-intensive, and poorly suited to on-chain execution. But it creates a fundamental contradiction: the trustless guarantees of blockchain end precisely where AI computation begins.
Why AI tokens fail to outperform centralized AI
The research systematically compares AI-token platforms with centralized AI services such as cloud APIs and model marketplaces.
The conclusion is not subtle.
Centralized providers outperform AI-token platforms on nearly every practical dimension: performance, cost, reliability, and user experience. They scale quickly, deploy massive GPU clusters, and offer simple interfaces. AI-token platforms introduce additional friction (wallets, tokens, governance votes, settlement delays) without delivering commensurate benefits.
In many cases, the underlying business model is nearly identical to centralized AI:
credit cards are replaced with tokens
APIs are wrapped in smart contracts
infrastructure control remains concentrated
The result is not decentralization, but indirection.
The Core Misconception: Tokens ≠ Decentralization
The paper’s most important insight is conceptual: tokenization and decentralization are not the same thing.
A project can issue a token, run votes, and settle payments on-chain while every consequential decision still rests with a core team.
The paper calls this conflation the illusion of decentralized AI.
The illusion persists because decentralization is often inferred from design elements rather than measured in practice. The relevant questions are operational:
Who runs the models?
Who controls updates?
Who can meaningfully influence outcomes?
In most AI-token systems, the answers still point to a small core team.
This mirrors a broader Web3 pattern: decentralization assumed by architecture, undermined by incentives, coordination costs, and technical constraints.
What the Research Actually Finds
1. Decentralization Rarely Extends to the AI Core
Across nearly all surveyed projects, decentralization stops at coordination.
The paper finds that:
Training is centralized
Inference is centralized
Data pipelines are centrally curated
Blockchains are used primarily for:
payments
access control
token distribution
In other words, the blockchain coordinates economic activity, but does not execute intelligence.
The authors describe this pattern as “coordination decentralization without execution decentralization.” The system appears decentralized at the surface level, while the computational core remains tightly controlled.
This distinction is critical. Control over training, inference, and data determines what models can do, how they evolve, and whose interests they serve. When those functions remain centralized, decentralization becomes cosmetic rather than structural.
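To make the distinction concrete, here is a minimal sketch of the prevailing architecture, with hypothetical class names that stand in for no particular project: the on-chain layer escrows payment and records jobs, while inference runs on a server the team controls, and nothing on-chain verifies the result.

```python
# Minimal sketch of "coordination decentralization without execution
# decentralization". Hypothetical classes; not any real protocol's API.

import hashlib

class CoordinationContract:
    """Stand-in for the on-chain layer: escrow, job records, payouts."""
    def __init__(self):
        self.jobs = {}

    def post_job(self, user: str, prompt: str, payment: int) -> str:
        job_id = hashlib.sha256(f"{user}:{prompt}".encode()).hexdigest()[:16]
        # The chain records that a job exists and that payment is escrowed...
        self.jobs[job_id] = {"user": user, "prompt": prompt,
                             "payment": payment, "result": None}
        return job_id

    def settle(self, job_id: str, result: str) -> None:
        # ...and accepts whatever result the provider reports.
        # Nothing here checks *how* the result was produced.
        self.jobs[job_id]["result"] = result


class CentralizedProvider:
    """Stand-in for the off-chain layer: the actual model execution."""
    def run_inference(self, prompt: str) -> str:
        # In practice: a proprietary model on GPUs the team controls.
        return f"model output for: {prompt}"


contract = CoordinationContract()
provider = CentralizedProvider()

job = contract.post_job(user="alice", prompt="summarize this paper", payment=10)
output = provider.run_inference(contract.jobs[job]["prompt"])  # trust required here
contract.settle(job, output)
print(contract.jobs[job]["result"])
```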
2. Governance Tokens Do Not Alter Control Paths
The paper closely examines on-chain governance across AI-token platforms, with a focus on what token holders can actually influence.
The findings are consistent:
Voting rarely affects core model decisions
Infrastructure upgrades remain team-driven
Token holders influence parameters, not architecture
Governance mechanisms tend to operate at the margins—adjusting fees, rewards, or access rules—while decisions about model design, training regimes, and deployment pipelines remain centralized.
As a result, governance tokens function less as instruments of control and more as:
signaling tools
fundraising instruments
They convey participation without conferring authority.
The paper is explicit on this point: decentralized governance cannot compensate for centralized execution. When the technical core is not governable, voting becomes symbolic.
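A toy model makes the asymmetry visible. The names and rules below are hypothetical, not drawn from any specific protocol: token holders can move a fee parameter, but the model itself changes only when the team's key signs.

```python
# Toy governance model: token votes reach parameters, not architecture.
# Entirely illustrative; names and rules are hypothetical.

class Platform:
    def __init__(self, team_key: str):
        self.team_key = team_key
        self.fee_bps = 30            # governable parameter
        self.model_version = "v1"    # NOT governable

    def vote_fee(self, yes_tokens: int, no_tokens: int, new_fee_bps: int):
        # Token holders can move marginal economic knobs...
        if yes_tokens > no_tokens:
            self.fee_bps = new_fee_bps

    def upgrade_model(self, signer_key: str, new_version: str):
        # ...but the computational core changes only when the team says so.
        if signer_key != self.team_key:
            raise PermissionError("model upgrades are team-controlled")
        self.model_version = new_version

p = Platform(team_key="team-multisig")
p.vote_fee(yes_tokens=1_000_000, no_tokens=400_000, new_fee_bps=25)  # passes
try:
    p.upgrade_model(signer_key="token-holder", new_version="v2")     # rejected
except PermissionError as e:
    print(e)
p.upgrade_model(signer_key="team-multisig", new_version="v2")        # succeeds
```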
3. Verification Is the Missing Primitive
The most repeated technical conclusion in the paper is stark:
Without verifiable computation, decentralization collapses at the boundary between blockchain and AI.
Blockchains cannot currently verify:
whether inference was executed correctly
whether training followed protocol
whether datasets were manipulated
Because these steps occur off-chain, trust is reintroduced precisely where decentralization claims are strongest.
Some projects attempt mitigation through staking, reputation systems, or peer evaluation. The paper acknowledges these efforts but finds them insufficient. They reduce risk at the margins without eliminating the trust gap.
For the authors, this is the central unsolved problem. Until computation itself becomes verifiable, decentralized AI remains structurally incomplete.
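It is worth seeing why the problem is hard. The most obvious mitigation, which staking and peer-evaluation schemes approximate, is redundant re-execution: have several independent nodes run the same job and accept the majority answer. The deliberately naive sketch below shows the two failure modes: verification multiplies compute cost, and nondeterministic models can break the equality check.

```python
# Naive verification by redundant execution: k nodes re-run the job
# and the majority answer wins. Illustrative sketch, not a real protocol.

from collections import Counter
from typing import Callable, List

def verify_by_replication(job: str, nodes: List[Callable[[str], str]]) -> str:
    results = [node(job) for node in nodes]          # cost scales with len(nodes)
    answer, votes = Counter(results).most_common(1)[0]
    if votes <= len(nodes) // 2:
        # No majority: with floating-point nondeterminism or sampling,
        # honest nodes can disagree and the scheme stalls.
        raise RuntimeError("no majority; cannot distinguish fault from noise")
    return answer

honest = lambda job: f"output({job})"
cheater = lambda job: "garbage"                       # skips the computation

print(verify_by_replication("prompt-1", [honest, honest, cheater]))  # output(prompt-1)
# Three nodes did the work of one: verification tripled the compute bill.
```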
4. Economic Incentives Favor Centralization
Even in cases where decentralization is technically feasible, the paper finds that economic incentives consistently push systems back toward centralization.
Documented advantages of centralized infrastructure include:
cheaper GPU access through scale
lower latency via centralized coordination
faster iteration and better user experience
Over time, these pressures lead teams to recentralize, for reasons that are rational rather than ideological.
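The scale advantage is easy to illustrate with back-of-envelope arithmetic. The numbers below are hypothetical placeholders chosen only to show the mechanism: utilization, not raw hardware price, dominates the effective cost of compute.

```python
# Back-of-envelope: utilization drives effective GPU cost.
# All numbers are hypothetical placeholders, not measurements.

def effective_cost_per_useful_hour(hourly_cost: float, utilization: float) -> float:
    """Cost per GPU-hour of *useful* work, given average utilization."""
    return hourly_cost / utilization

centralized = effective_cost_per_useful_hour(hourly_cost=2.00, utilization=0.90)
distributed = effective_cost_per_useful_hour(hourly_cost=1.50, utilization=0.35)

print(f"centralized: ${centralized:.2f}/useful GPU-hour")  # $2.22
print(f"distributed: ${distributed:.2f}/useful GPU-hour")  # $4.29
# Even with cheaper raw hardware, low utilization makes the
# decentralized network more expensive per unit of useful compute.
```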
The paper explicitly connects this dynamic to earlier Web3 patterns:
DeFi protocols with centralized frontends
Layer-2 networks with single sequencers
Cross-chain bridges with trusted operators
In each case, decentralization erodes unless actively defended against economic gravity.
AI-token systems follow the same trajectory.
5. Speculation Dominates Measured Usage
Finally, the paper examines usage data rather than narratives.
Across AI-token ecosystems, empirical indicators show:
low transaction-to-market-cap ratios
limited recurring usage
shallow developer adoption
Token velocity is driven primarily by trading, not by demand for AI computation.
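The first of those indicators is straightforward to operationalize. The sketch below uses hypothetical figures (the paper reports the pattern, not these numbers) to show how usage-to-market-cap compares with trading turnover.

```python
# Utility vs. speculation, as a ratio. Figures are hypothetical
# placeholders illustrating the metric, not data from the paper.

def usage_to_mcap(usage_volume_usd: float, market_cap_usd: float) -> float:
    """Share of market cap backed by actual platform usage per period."""
    return usage_volume_usd / market_cap_usd

monthly_inference_fees = 150_000        # paid for AI computation
monthly_trading_volume = 900_000_000    # token changing hands on exchanges
market_cap = 1_200_000_000

print(f"usage/mcap:   {usage_to_mcap(monthly_inference_fees, market_cap):.5%}")
print(f"trading/mcap: {usage_to_mcap(monthly_trading_volume, market_cap):.0%}")
# When trading turnover dwarfs usage by several orders of magnitude,
# token velocity reflects speculation, not demand for computation.
```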
The authors are careful not to frame this as a moral failure. Instead, they describe it as a structural outcome. When tokens are freely tradable and product-market fit is weak, speculative activity overwhelms functional use.
AI tokens, the paper argues, have largely replicated this pattern.
Examples of Decentralized AI
Bittensor
Bittensor attempts decentralization at the model contribution and incentive layer.
Participants contribute models rather than raw compute
Models are scored by peers based on usefulness
Rewards are allocated algorithmically, not by a central operator (a simplified sketch follows this list)
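A simplified sketch of that incentive layer, in the spirit of Bittensor's design but omitting stake weighting, consensus clipping, and everything else the real protocol does:

```python
# Simplified peer-scored reward allocation, in the spirit of Bittensor's
# incentive layer. The real consensus mechanism is substantially more involved.

def allocate_rewards(scores: dict[str, list[float]], budget: float) -> dict[str, float]:
    """Each model's reward is its share of total peer-assigned score."""
    totals = {model: sum(peer_scores) for model, peer_scores in scores.items()}
    grand_total = sum(totals.values())
    return {model: budget * t / grand_total for model, t in totals.items()}

# Peers score each model's usefulness on queries they sent it.
peer_scores = {
    "model_a": [0.9, 0.8, 0.95],   # consistently useful
    "model_b": [0.4, 0.5, 0.3],    # mediocre
    "model_c": [0.1, 0.0, 0.2],    # near-useless
}

for model, reward in allocate_rewards(peer_scores, budget=100.0).items():
    print(f"{model}: {reward:.1f} tokens")
# No central operator decides payouts; the split falls out of peer scores.
```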
What’s still centralized
Core protocol development
Network parameters
Heavy reliance on off-chain execution
Why it matters
Bittensor shows that model-level decentralization is possible, even if infrastructure-level decentralization is not yet solved.
Gensyn
Gensyn targets the compute verification problem directly.
Focuses on decentralized ML training
Uses cryptographic techniques to verify off-chain computation (a simplified sketch follows this list)
Explicitly acknowledges blockchains cannot run AI directly
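One family of techniques in this space is probabilistic spot-checking: the provider logs checkpoints, and a verifier re-executes a random subset of steps. The sketch below is a deliberately simplified stand-in for this idea, not Gensyn's actual protocol, and its final comment shows why naive sampling still leaves a trust gap.

```python
# Illustrative spot-check of off-chain training: the verifier re-executes
# a random subset of training steps from logged checkpoints and compares
# results. A toy stand-in for verifiable-training protocols, not Gensyn's design.

import random

def train_step(state: int, step: int) -> int:
    # Toy deterministic "training step" so re-execution is checkable.
    return (state * 31 + step) % 10_007

def honest_training(steps: int) -> list[int]:
    """Provider logs a checkpoint after every step."""
    state, log = 1, []
    for step in range(steps):
        state = train_step(state, step)
        log.append(state)
    return log

def spot_check(log: list[int], samples: int) -> bool:
    """Re-run `samples` random steps; one mismatch means the log is fraudulent."""
    for step in random.sample(range(1, len(log)), samples):
        if train_step(log[step - 1], step) != log[step]:
            return False
    return True

log = honest_training(steps=1_000)
print(spot_check(log, samples=20))   # True: checks pass at ~2% of full cost

log[500] += 1                        # provider fakes one checkpoint
print(spot_check(log, samples=20))   # False only if a sampled step hits the
                                     # forgery (~4% chance here): naive
                                     # spot-checks leave a trust gap
```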
What’s still missing
Production-scale adoption
End-user applications
Fully trustless verification at scale
Why it matters
The paper highlights Gensyn as an example of research-aligned design: it tackles the actual bottleneck, not the narrative layer.
Fetch.ai
Fetch.ai focuses on agent coordination, not foundation models.
Autonomous agents interact on-chain
AI logic often runs locally or off-chain
Blockchain handles discovery, identity, and settlement
Trade-off
Intelligence is distributed
Training and inference are not
Why it matters
The paper categorizes this as partial decentralization: useful, but not equivalent to decentralized AI infrastructure.
Off-Chain Computation Is the Real Bottleneck
One finding appears repeatedly throughout the research: heavy reliance on off-chain computation.
Blockchains cannot efficiently run modern AI models. As a result, decentralized AI platforms depend on:
external servers
GPU providers
specialized compute nodes
The blockchain records coordination and payments, but cannot verify whether computation was executed correctly.
This creates a structural trust gap.
Users must trust off-chain providers to behave honestly. As noted earlier, staking, reputation systems, and peer evaluation narrow this gap only at the margins; they do not close it.
Until AI computation becomes verifiable, decentralization remains partial by design.
Speculation overwhelms utility, again
The paper places AI tokens within a familiar historical pattern.
Across multiple crypto cycles, utility tokens have often failed to achieve sustained usage even as speculation flourished. AI tokens follow this trajectory closely. Prices respond strongly to AI narratives, while actual platform usage remains limited.
In many ecosystems, token trading dominates token utility.
This outcome is not necessarily driven by bad faith. It is structural. When tokens are freely tradable, financial incentives often overwhelm functional ones. Without strong product-market fit, tokens become speculative assets first and coordination tools second.
The research suggests most AI-token ecosystems have not yet crossed that threshold.
What real decentralized AI would require
The paper does not reject decentralized AI outright. Instead, it outlines the conditions under which it could become real.
Promising directions include verifiable off-chain computation using zero-knowledge proofs, trusted execution environments, or AI oracles that allow blockchains to verify AI outputs. Federated learning coordinated on-chain could enable collaborative model training without centralized data control. Modular blockchain architectures could support AI-specific execution layers rather than forcing AI onto general-purpose chains.
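Of these directions, federated learning is the simplest to make concrete. Below is a minimal sketch of the aggregation step (plain FedAvg over toy two-parameter models); the on-chain coordination of rounds and contributions is elided.

```python
# Minimal federated averaging (FedAvg): clients train locally and share
# only weight updates; no raw data leaves a participant. A chain could
# coordinate rounds and attribute contributions; that part is elided.

from typing import List

def local_update(weights: List[float], gradient: List[float], lr: float) -> List[float]:
    """One client's local training step on its private data (gradient stands in)."""
    return [w - lr * g for w, g in zip(weights, gradient)]

def fed_avg(client_weights: List[List[float]]) -> List[float]:
    """Aggregate: element-wise mean of client models."""
    n = len(client_weights)
    return [sum(ws) / n for ws in zip(*client_weights)]

global_model = [0.0, 0.0]
# Each client computes a gradient on data it never shares.
client_grads = [[0.2, -0.1], [0.4, 0.1], [0.3, 0.0]]
locals_ = [local_update(global_model, g, lr=1.0) for g in client_grads]
global_model = fed_avg(locals_)
print(global_model)   # [-0.3, 0.0] : averaged update, no data pooled
```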
Most importantly, decentralized AI must offer capabilities centralized AI cannot—or will not—provide. Privacy-preserving training on sensitive data. Collective ownership of foundational models. Censorship-resistant deployment.
Without such differentiation, decentralization remains ideological rather than practical.
The deeper structural lesson
The broader lesson of this research extends beyond AI tokens.
Decentralization is not a branding choice. It is an emergent property of architecture, incentives, and governance. When systems rely on centralized computation, centralized coordination, and centralized upgrades, tokens alone cannot make them decentralized.
This explains why many AI-token projects feel simultaneously ambitious and underwhelming. The vision points forward. The infrastructure pulls backward.
Conclusion: decentralized AI is still research, not product
The paper’s conclusion is measured but firm.
AI-based crypto tokens today are better understood as experiments than as solutions.
They explore important ideas and surface real constraints, but they have not yet delivered decentralized AI at scale.
Decentralized AI needs:
verifiable off-chain computation
cryptographic proofs of inference
incentive-compatible data contribution
governance that reaches execution, not just policy
None of these are solved problems.
Decentralized AI will not emerge by “adding crypto” to AI.
It will emerge when AI systems are redesigned around verification, coordination, and collective ownership from first principles.