TBW - Stablecoins and AI agents: the foundations of autonomous commerce

Artificial intelligence agents are gradually emerging as a new category of autonomous actors capable of interacting with the real world. These digital entities, equipped with reasoning, memory and access to external data, can search, book, pay or perform complex tasks without direct human intervention. Through the integration of digital wallets, they become capable of making real payments and economic decisions in real time.

These agents do not work in isolation. They communicate with each other, share tasks and collaborate in what are known as swarms - groups of agents acting collectively as a coordinated team. Each plays a specific role: some collect data, others analyse or execute, and the whole acts as a distributed organisation capable of achieving complex objectives. This collaborative logic could transform many sectors: in decentralised finance, these swarms could manage portfolios, monitor smart contracts, optimise liquidity or execute automatic arbitrages. Elsewhere, they could produce content, manage customer support or coordinate large-scale research.

Multi-agent architectures vary depending on the use case. Some adopt a sequential structure, where each agent acts in turn; others a hierarchical model, where a "conductor" distributes tasks; still others operate in parallel, or collectively refine their results through iterative cycles. These systems can be centralised or entirely decentralised, depending on the degree of trust and coordination required.
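To make these patterns concrete, the sketch below contrasts the sequential and hierarchical models in plain Python. It is purely illustrative - the agents, the conductor and their outputs are hypothetical stand-ins, not taken from any particular framework:

```python
from typing import Callable

def collector(task: str) -> str:
    # Gathers raw data for the task (stubbed here).
    return f"data for {task}"

def analyst(data: str) -> str:
    # Analyses whatever it receives.
    return f"analysis of {data}"

def executor(analysis: str) -> str:
    # Acts on the analysis.
    return f"executed on {analysis}"

def run_sequential(task: str, agents: list) -> str:
    """Sequential pattern: each agent's output feeds the next."""
    result = task
    for agent in agents:
        result = agent(result)
    return result

def run_hierarchical(task: str, conductor: Callable, workers: dict) -> dict:
    """Hierarchical pattern: a 'conductor' splits the task and assigns subtasks."""
    assignments = conductor(task)  # {worker_name: subtask}
    return {name: workers[name](subtask) for name, subtask in assignments.items()}

def conductor(task: str) -> dict:
    # Naive splitter: hands the same task to every known role.
    return {"collect": task, "analyse": task}

pipeline_result = run_sequential("ETH price check", [collector, analyst, executor])
team_result = run_hierarchical("ETH price check", conductor,
                               {"collect": collector, "analyse": analyst})
```

In the sequential run the three agents form a pipeline; in the hierarchical run the conductor fans the work out and collects independent results.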

Financial institutions are beginning to take an interest. Mastercard is developing Agent Pay, a secure payment infrastructure designed for AI agents. It is based on Agentic Tokens, tokenised digital identities that guarantee biometric authentication and compliance with the user's intentions. The idea: to enable registered and verified agents to execute autonomous, frictionless payments over existing networks - whether for individuals or businesses.

For Michael A. Hanono, founder of Talus Labs, the rise of AI agents will accompany the rise of the onchain economy: "As the economy moves onto the blockchain, we will need agents to automate and coordinate transactions. This will drive global financial inclusion, rather than hinder it."

Jennifer Dodgson, Chief AI Officer and co-founder of KIP Protocol, makes the same observation, and sees the open culture of the crypto world as the key to this convergence: "AI pioneers have long judged agents too unstable to be useful, or too rigid to be attractive. But in the web3 ecosystem, experimentation continued. This is where agents have really found their place."

The integration of autonomous agents with crypto infrastructure and stablecoins could lead to the emergence of a new generation of applications where artificial intelligence becomes an economic player in its own right - capable of understanding, negotiating and paying, without an intermediary.


Mapping a rapidly structuring ecosystem

One of the major workstreams in autonomous artificial intelligence today concerns communication between agents. Several frameworks and protocols are emerging to enable these entities to collaborate, coordinate and collectively solve complex problems. These multi-agent architectures aim to standardise exchanges, encourage specialisation and make AI systems more modular and scalable.

Among the most notable initiatives is BeeAI, an open source platform supported by the Linux Foundation. It is based on the Agent2Agent (A2A) protocol, which enables agents from different environments to talk to each other. The aim is to bridge the interoperability gaps between the multiple agent ecosystems and make it easier for them to be discovered, executed and shared.

LangGraph adopts a visual approach: each agent is represented as a node in a graph, and exchanges take place via connections (or edges). This model promotes clear workflows, management of complex tasks and fluid scaling of multi-agent systems.
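The node-and-edge idea can be illustrated in a few lines of plain Python. This is not the LangGraph API, just a hypothetical sketch of the model it promotes - agents as nodes that update a shared state, with edges deciding which node runs next:

```python
def research(state: dict) -> dict:
    # Node 1: a research agent enriches the shared state.
    state["findings"] = f"findings on {state['topic']}"
    return state

def summarise(state: dict) -> dict:
    # Node 2: a summariser works from the researcher's output.
    state["summary"] = f"summary of {state['findings']}"
    return state

nodes = {"research": research, "summarise": summarise}
edges = {"research": "summarise", "summarise": None}  # None marks the end of the graph

def run_graph(entry: str, state: dict) -> dict:
    """Walk the graph from the entry node, threading the state through each agent."""
    current = entry
    while current is not None:
        state = nodes[current](state)
        current = edges[current]
    return state

result = run_graph("research", {"topic": "stablecoins"})
```

Expressing the workflow as a graph makes the hand-offs between agents explicit and easy to inspect, which is exactly the clarity the visual approach is after.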

For its part, CrewAI focuses on lightness and speed. This Python framework, designed to create teams of autonomous agents, combines ease of use with granular control. Each agent can be assigned a precise role, tool and objective, making it possible to tailor automation to specialised tasks.

OpenAI also offers its own Agents SDK, a set of tools for building collaborative workflows between agents. The SDK manages task allocation, memory, input validation and traceability, guaranteeing the reliability and transparency of multi-agent processes.

Google is establishing itself as a key standard-setter in the field with its Agent2Agent Protocol (A2A), an open standard designed to enable AI agents from different frameworks to communicate securely and interoperably. This protocol, which complements the Model Context Protocol (MCP), acts as a universal language. MCP links AI models to external systems - databases, tools or processes - so that they can access information and interact with their environment. Together, A2A and MCP form a common foundation for an ecosystem of interconnected agents.
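To see how the two protocols divide the work, here is a hedged sketch of the kinds of JSON-RPC envelopes involved. The method names and field values are assumptions chosen for illustration, not copied from either specification:

```python
import json

# A2A carries agent-to-agent task requests; MCP carries model-to-tool calls.
# Both build on JSON-RPC; the payloads below are simplified, hypothetical examples.
a2a_task = {
    "jsonrpc": "2.0",
    "method": "tasks/send",       # illustrative A2A-style method name
    "params": {
        "from_agent": "portfolio-analyst",
        "to_agent": "trade-executor",
        "task": "rebalance into USDC",
    },
    "id": 1,
}

mcp_call = {
    "jsonrpc": "2.0",
    "method": "tools/call",       # illustrative MCP-style tool invocation
    "params": {"name": "get_price", "arguments": {"pair": "ETH/USDC"}},
    "id": 2,
}

wire = json.dumps(a2a_task)       # what would actually travel between agents
```

The point is the split: A2A messages name another agent as the counterparty, while MCP messages name a tool or data source the model wants to use.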

Finally, Microsoft has developed AutoGen, an open source framework for creating and orchestrating collaborative agents. Its version 0.4 introduces an asynchronous, event-driven architecture designed for scalability and modularity. AutoGen also offers observability, debugging and multi-language compatibility tools, reinforcing the robustness and flexibility of interactions between agents.

These initiatives are laying the foundations for a digital economy where AI agents will be able to cooperate freely, in a standardised and secure way. An essential step before their integration into onchain finance, where interoperability will be decisive for executing transactions, managing portfolios or trading autonomously on the rails of stablecoins.

When AI agents meet decentralised finance

A new generation of projects is now exploring the convergence between artificial intelligence and decentralised finance. This movement, dubbed DeFAI (for Decentralized Finance + Artificial Intelligence), aims to enable autonomous AI agents to cooperate and execute financial strategies or transactions based on simple natural language instructions. These agents could eventually analyse markets, place orders, manage portfolios or automatically arbitrate between different protocols, without human intervention.

Several initiatives are already positioning themselves in this field. Projects such as Wayfinder, GRIFFAIN, Hive.AI, Hey Anon, Swarmnode.AI and Mode are each exploring this intersection between AI and DeFi in their own way. Some focus on personal agents capable of interacting directly with decentralised finance protocols; others on multi-chain architectures where several agents work together to optimise the execution of transactions. The differences lie mainly in their approach to coordination: large-scale automation for the former, simplified and personalised interfaces for the latter.

In this ecosystem under construction, several protocols are seeking to structure interactions between agents and make them interoperable. Theoriq presents itself as the first decentralised protocol combining artificial intelligence and blockchain to govern and build multi-agent systems. Its platform enables the creation of collectives of interconnected agents capable of cooperating on complex tasks.

For its part, AITP (Agent Interaction & Transaction Protocol) standardises communication between AI agents, particularly when they evolve in different trust environments. Integrated into the NEAR AI Hub, AITP enables advanced interactions - forms, payments, data exchanges - based on structured discussion threads and extensible capabilities. It aims to unify exchanges between agents and users, whatever the original protocol or platform.

The ASI Alliance, meanwhile, offers a complete architecture for building decentralised artificial intelligence systems. Its approach is based on a composable stack: ASI Data for secure data sharing, ASI Compute for distributed computing power, and ASI Chain, a blockchain designed for autonomous agent coordination, confidentiality and smart contract execution.

The decentralised execution infrastructure is provided by Talus, via its Nexus framework, which enables AI agents to carry out on-chain workflows in a transparent and verifiable way. Swarm Network completes this layer by providing an orchestration platform, where agents can group together in adaptive swarms (clusters), make collective decisions and collaborate via a no-code interface called Agent BUIDL. Together, Talus and Swarm make it possible to design modular agent systems that can be audited in real time, paving the way for use cases such as autonomous market analysis, decentralised governance or collective research.

Finally, CARV is banking on an approach centred on the agents themselves, called AI Beings. These entities have a persistent identity, a shared memory and economic awareness. They interact via the Model Context Protocol (MCP) and the D.A.T.A. framework, guaranteeing secure communication and fluid coordination between agents.

These projects lay the foundations for a new ecosystem where AI agents become economic actors capable of interacting with each other, remunerating each other and executing transactions via stablecoins and crypto rails.

When AI agents start paying

Artificial intelligence agents are no longer content to simply perform tasks or produce analyses. They are now learning to trade with each other autonomously, using stablecoins and blockchain infrastructures as payment rails. This development paves the way for a new form of machine-to-machine commerce, where agents become genuine economic players capable of exchanging value in real time.

Several initiatives are already turning this vision into reality. uAgents, developed within the ASI Alliance by Fetch.ai, enables autonomous agents to send tokens, sign transactions and verify their execution in a decentralised environment. Using this library, an agent can initiate a payment request, transfer tokens, receive confirmation and integrate these operations into dynamic workflows.

Another use case comes from the partnership between Circle and Questflow. Questflow's MAOP (Multi-Agent Orchestration Protocol) enables the coordination of swarms of agents, each with its own crypto wallet. These agents interact as independent economic actors, able to pay each other for a service rendered, a calculation performed or a task accomplished. USDC, Circle's stablecoin, is used as the default settlement currency, making instant on-chain microtransactions possible for payments, rewards or service fees.
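The mechanics of such agent-to-agent settlement can be sketched with a toy ledger. The wallets and ledger below are illustrative stand-ins, not the Questflow or Circle APIs; balances are held as integer micro-USDC, since USDC uses six decimal places:

```python
from dataclasses import dataclass

@dataclass
class AgentWallet:
    """Toy stand-in for an agent's wallet; balance in micro-USDC (6 decimals)."""
    owner: str
    usdc: int = 0

def settle(payer, payee, amount, memo, ledger):
    """Record an agent-to-agent micropayment for a service rendered."""
    if payer.usdc < amount:
        raise ValueError(f"{payer.owner}: insufficient balance")
    payer.usdc -= amount
    payee.usdc += amount
    ledger.append({"from": payer.owner, "to": payee.owner,
                   "micro_usdc": amount, "memo": memo})

ledger = []
orchestrator = AgentWallet("orchestrator", usdc=10_000_000)  # 10 USDC
worker = AgentWallet("data-collector")
settle(orchestrator, worker, 50_000, "price-feed snapshot", ledger)  # 0.05 USDC
```

In a real deployment the transfer would be an on-chain USDC transaction rather than an in-memory update, but the economic logic - one agent paying another per task, with a traceable record - is the same.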

The convergence between AI and decentralised finance is accelerating with the arrival of new protocols. Circle is working to automate payments between agents via the integration of Circle Wallets and the x402 protocol developed by Coinbase. This system reactivates HTTP code 402 ("Payment Required") to enable an agent to automatically purchase an API service or make a micropayment without human intervention. It's a step towards a generalised pay-per-use model where machines consume and pay other machines.
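The x402 flow described above can be sketched with a mock server. The header and field names below are assumptions for illustration; the point is the round trip - request, a 402 response quoting a price, payment, then a retry carrying the proof:

```python
# Mock of an x402-style exchange; field and header names are illustrative.
PRICE_USDC = "0.01"

def mock_server(headers):
    """Paid API endpoint: answers 402 until a payment proof is presented."""
    if headers.get("X-PAYMENT") == "proof-of-payment":
        return 200, {"data": "premium result"}
    return 402, {"accepts": [{"asset": "USDC", "amount": PRICE_USDC}]}

def pay(amount):
    # Stand-in for an on-chain USDC transfer returning a verifiable proof.
    return "proof-of-payment"

def fetch_with_payment():
    """Agent-side flow: request, settle the quoted price on 402, retry."""
    status, body = mock_server({})
    if status == 402:
        proof = pay(body["accepts"][0]["amount"])
        status, body = mock_server({"X-PAYMENT": proof})
    return status, body

status, body = fetch_with_payment()
```

No human appears anywhere in the loop: the agent discovers the price, pays it and consumes the service in a single automated exchange.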

On 16 September 2025, Google took a new step forward by launching the Agent Payments Protocol (AP2). This open protocol, built on the Agent2Agent (A2A) and Model Context Protocol (MCP) standards, enables AI agents to make interoperable and compliant payments. AP2 supports credit cards, bank transfers and stablecoins, including USDC. Working with more than 60 partners - including Coinbase and Mastercard - Google aims to create a secure agentic commerce ecosystem, where every transaction is authenticated, verifiable and traceable.

Behind these innovations lie fundamental questions: how do we ensure trust, accountability and ethics in a world where machines act as autonomous economic actors? What happens to the notion of value when transactions are carried out from agent to agent without human intervention? This convergence between AI, blockchain and stablecoins outlines a new decentralised digital economy where autonomous coordination becomes the norm.

For Teng Yan, founder of Chain of Thought, this change is inevitable: "I am 99.9% sure that the majority of transactions in the future will be agent-to-agent, rather than agent-to-human. These agents will create their own economies to serve us better. Stablecoins will be the natural form of exchange, thanks in particular to their efficiency for micropayments."

Michael A. Hanono, founder of Talus Labs, shares this vision: "In a world of autonomous agents, stablecoins are more reliable than banking infrastructures. Agents can't depend on intermediaries: they need a native, trustless settlement layer."

Jennifer Dodgson, co-founder of KIP Protocol, makes the same observation, and sees an inclusion dimension: "Stablecoins offer a practical solution when access to certain models is restricted, or to maintain a level of anonymity. People are still reluctant to let agents manage their finances, but it was the same in the early days of e-commerce. In time, convenience will eventually outweigh fear."

What crypto-powered AI agents need to take off

There is huge interest in the idea of artificial intelligence agents that can autonomously pay, negotiate or collaborate on the rails of blockchain. But to move from concept to mass adoption, several major hurdles need to be overcome - technical, economic and regulatory.

The first condition concerns security. These agents handle private keys and interact directly with smart contracts, exposing them to hacking and misappropriation of funds. A robust architecture is essential to prevent key compromise, data corruption and code injection attacks. Another challenge is managing the volume of data: agents must be able to process massive amounts of information off-chain before transferring only the essential elements to the blockchain, in order to preserve the scalability of the system.
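The off-chain/on-chain split can be illustrated simply: process the bulky data off-chain, then commit only a compact digest on-chain. The helpers below are hypothetical; a real system would post the hash to a smart contract rather than return it:

```python
import hashlib
import json

def summarise_off_chain(raw_events: list) -> dict:
    """Crunch the bulky data off-chain; only the essentials go on-chain."""
    total = sum(e["amount"] for e in raw_events)
    return {"count": len(raw_events), "total": total}

def commit_on_chain(summary: dict) -> str:
    """Stand-in for an on-chain commitment: store a hash, not the data."""
    payload = json.dumps(summary, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

events = [{"amount": 5}, {"amount": 7}]          # imagine thousands of these
summary = summarise_off_chain(events)
digest = commit_on_chain(summary)                # 64-character hex digest
```

Anyone holding the raw data can recompute the digest and check it against the on-chain value, so the blockchain anchors the result without having to store it.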

To carry out autonomous transactions, agents need programmable wallets capable of automatically executing payments or on-chain operations. But the cost of gas remains a major obstacle: micropayments quickly become unviable on congested networks. The search for low-cost layer 2 networks and interoperable cross-chain solutions is therefore a priority, so that agents can interact frictionlessly in multi-blockchain environments.

Another fundamental challenge is that of identity. Unlike humans, agents have no legal existence or possible KYC. This complicates their integration into regulated systems. As Teng Yan, founder of Chain of Thought, sums up: "KYC is for humans, KYA will be for agents - Know Your Agent. Reputation and identity will become critical variables in the agent economy." Stablecoins, meanwhile, offer a reliable foundation for instant, low-volatility settlements, which are essential for this type of automated interaction.
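A "Know Your Agent" check might look something like the sketch below - a hypothetical registry that ties each agent to an accountable operator and a track record, which counterparties consult before transacting:

```python
from dataclasses import dataclass

@dataclass
class AgentRecord:
    """Hypothetical 'Know Your Agent' entry: identity plus track record."""
    agent_id: str
    operator: str          # the accountable human or entity behind the agent
    completed: int = 0
    failed: int = 0

    @property
    def reputation(self) -> float:
        total = self.completed + self.failed
        return self.completed / total if total else 0.0

registry = {}

def register(agent_id, operator):
    registry[agent_id] = AgentRecord(agent_id, operator)

def authorise(agent_id, min_reputation=0.9):
    """A counterparty checks identity and reputation before transacting."""
    record = registry.get(agent_id)
    return record is not None and record.reputation >= min_reputation

register("trader-01", "Acme Labs")   # hypothetical agent and operator
registry["trader-01"].completed = 19
registry["trader-01"].failed = 1     # 19/20 successful tasks -> reputation 0.95
```

An unregistered agent is refused outright, and a registered one is only trusted while its track record holds up - reputation as a substitute for KYC.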

Functionally, these agents are already capable of performing tasks such as token discovery, automated trading or transaction management from decentralised wallets. But their autonomy remains fragile: execution errors, data failures or latency can lead to losses or missed opportunities. The key question is therefore reliability: when will agents be stable enough to operate without human supervision, or even collaborate exclusively with each other?

To achieve this, they will need to be able to coordinate on collective tasks, share resources and above all exchange payments in real time. This presupposes intelligent wallets capable of managing A2A (agent-to-agent) transfers, a scalable blockchain infrastructure and streaming-payment mechanisms based on stablecoins, which are far better suited than volatile tokens.
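A streaming payment reduces to simple arithmetic: the payee's claim grows with every second of service. The sketch below is a minimal illustration - the rate is an arbitrary assumption, and a production payment channel would involve far more machinery:

```python
# Hypothetical streaming rate: 100 micro-USDC (0.0001 USDC) per second.
RATE_MICRO_USDC_PER_SEC = 100

def accrued(start_ts: int, now_ts: int, rate: int = RATE_MICRO_USDC_PER_SEC) -> int:
    """Total amount earned by the payee at time now_ts, in micro-USDC."""
    return max(0, now_ts - start_ts) * rate

def withdrawable(start_ts: int, now_ts: int, already_withdrawn: int) -> int:
    """What the payee can still claim, net of previous withdrawals."""
    return accrued(start_ts, now_ts) - already_withdrawn

# After 60 seconds of service: 6,000 micro-USDC (0.006 USDC) claimable.
claim = withdrawable(start_ts=0, now_ts=60, already_withdrawn=0)
```

Because the stablecoin's value is steady, the per-second rate stays meaningful over the life of the stream - the property that makes volatile tokens a poor fit for this pattern.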

Regulatory clarity will be crucial. Legislators will need to define a suitable framework for AI model providers, crypto payment rails and stablecoin issuers alike. Without it, agents will remain confined to experimental environments.

How do the regulatory standards apply to AI agents?

On 18 July 2025, the European Commission published guidelines specifying how the Artificial Intelligence Regulation (AI Act) applies to General-Purpose AI (GPAI) models. The recommendations, while non-binding, are intended to guide national regulators and model providers on the practical implementation of the text - particularly for AI agents, which are considered to be versatile systems capable of performing multiple tasks in a variety of contexts.

The document clearly distinguishes models with systemic risks, i.e. those whose scale, capabilities or economic impact could potentially harm public safety, health, fundamental rights or economic stability. These models, which include advanced AI agents, will be subject to enhanced surveillance. The framework thus introduces an unprecedented level of requirements for players in the sector, in particular those developing autonomous agents capable of interacting or transacting in real environments.

In accordance with the AI Act, a GPAI model is defined as an AI capable of performing a variety of tasks regardless of how it is deployed. Models used exclusively for research or prototyping purposes are excluded. Providers of models presenting a systemic risk will, on the other hand, have to assess and mitigate the risks associated with their agents, guarantee their transparency, and ensure their compliance with high security standards.

The guidelines also specify when a "downstream" actor in turn becomes a model provider - for example when it modifies or fine-tunes an existing model to derive an adapted version - as well as the cases of exception for open source models. Lastly, they describe the transition arrangements before the scheme comes fully into force.

On an operational level, GPAI model providers will have to implement a number of safeguards:

  • Risk assessment: identify the ways in which an agent could cause harm, applying standardised performance and robustness tests.
  • Transparency and traceability: introduce agent identification tools (templates, public registers, configurable alerts) and maintain a comprehensive logging infrastructure to track the interactions and decisions made by agents.
  • Technical security: design multi-level filtering mechanisms capable of blocking dangerous behaviour in real time, as well as shutdown systems - manual or automated - triggered by monitoring signals.
  • Human supervision: incorporate checkpoints and validation steps, link logging systems to emergency shutdown procedures and introduce clear permissions management to control agent access and default settings.
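Put together, these safeguards might look like the guarded executor sketched below - a hypothetical illustration combining logging, a deny-list, a human-approval threshold and a kill switch, not a reference implementation of the guidelines:

```python
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("agent-guardrails")

BLOCKED_ACTIONS = {"transfer_all_funds"}   # illustrative deny-list (technical security)
HUMAN_APPROVAL_THRESHOLD = 1_000           # amounts above this need a human (supervision)
kill_switch = False                        # emergency shutdown flag

def execute(action: str, amount: int, human_approved: bool = False) -> str:
    """Run an agent action through the safeguards listed above."""
    log.info("request action=%s amount=%s", action, amount)   # traceability
    if kill_switch:
        return "halted"                    # shutdown system triggered
    if action in BLOCKED_ACTIONS:
        log.warning("blocked dangerous action: %s", action)
        return "blocked"                   # real-time filtering
    if amount > HUMAN_APPROVAL_THRESHOLD and not human_approved:
        return "pending_human_review"      # human supervision checkpoint
    return "executed"
```

Routine actions pass straight through, dangerous ones are filtered, and high-stakes ones queue for a human - the layered structure the guidelines call for.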

Promising innovation or systemic risk?

Like any emerging technology, crypto-powered artificial intelligence agents pose a central dilemma: can they make finance more efficient without increasing its risks? Their integration into DeFi promises considerable gains in automation, speed and transparency, but also raises major concerns about security, bias and governance.

On paper, these agents could transform decentralised finance by making it more fluid: automated lending, real-time risk rating, execution of complex strategies without human intervention. But the expected benefits should not overshadow the potential dangers. AI systems, including those integrated into DeFi, inevitably inherit the biases of the data on which they are trained. Applied to sensitive areas such as credit or scoring, these biases could reproduce - or even amplify - systemic discrimination. This raises a fundamental question: should AI algorithms used in finance be subject to independent audits or public oversight, to ensure fairness and transparency?

The risks are not limited to ethics. In a decentralised environment, AI agents expose DeFi to new vulnerabilities. Data manipulation or poisoning, compromise of private keys, attacks through unauthorised access: these are all scenarios likely to compromise the security of protocols and users. Added to this is the issue of personal data protection, which is particularly sensitive when agents interact directly with digital wallets and smart contracts.

Another debate concerns the distribution of benefits. Will these tools serve to democratise access to finance, or will they primarily benefit institutional players and developers with a technological edge? The recent history of financial innovation suggests caution: greater efficiency does not always translate into wider inclusion.

The challenge, then, for regulators and innovators alike, is to strike a balance between experimentation and protection. The aim is not to curb innovation, but to prevent harmful side-effects before they become systemic. As the philosophy of the European regulator with the AI Act reminds us, trust is not born of the technology itself, but of the framework in which it evolves.

Evolutionary outlook for AI agents, prompts and payments

Until recently, payments - including crypto transactions - relied entirely on human intervention. Each step required manual action: choosing a wallet, selecting a network and token, entering the recipient's address, then validating the transaction. The level of complexity varied depending on the platform - more or less fluid on CEXs, often technical on DEXs - and depending on the need to use inter-chain bridges or intermediaries.

This logic is being turned on its head. Users can now automate a growing share of the transactional process, and the question is no longer whether AI agents will be able to do it, but when they will be able to do it without human supervision. The next stage would be fully autonomous commerce, where the agents themselves carry out transactions from one end of the chain to the other, without human intervention or friction.

In this new landscape, AI agents could become economic operators in their own right. Their concrete uses are already identifiable: execution of DeFi trading and arbitrage strategies, automation of recurring payments or microtransactions, or real-time risk monitoring and management. As they become more sophisticated, these agents will become capable of coordinating complex tasks between themselves, forming dynamic, self-organising economic networks.

But this prospect requires responsible AI. Agents must be safe, transparent and interpretable - able to understand human needs while remaining under human control. Initiatives such as Amazon Augmented AI or Amazon Bedrock Agents are already part of this logic: they reintroduce a human loop for high-stakes or irregular transactions, and incorporate automatic blocking mechanisms to prevent AI-generated fraud. The human is no longer there to validate everything, but to intervene as a last resort, as a safeguard.

At the heart of these interactions, prompts become the new interface between humans and agents: they enable economic decisions to be delegated to entities capable of learning, negotiating and paying. Eventually, these agents could form interconnected economies, where value flows seamlessly between AI networks. The question of payment infrastructure then becomes central. Stablecoins appear to be the most suitable solution, offering stability and liquidity where traditional cryptos remain too volatile. Their growing adoption by traditional financial institutions reinforces this position, ensuring continuity between decentralised and regulated finance.

Ongoing regulatory developments, particularly in Europe and the United States, are consolidating this movement: they are setting transparency and traceability standards that strengthen security and fraud prevention, while paving the way for large-scale use.

For Jennifer Dodgson, co-founder of KIP Protocol, the economic impact of this transformation will follow a well-known logic: "As is often the case with AI, we will see an 80/20 effect. 80% of the agents will be used to concentrate wealth in the hands of the big players - Google, OpenAI, BlackRock. But the remaining 20% will be the most interesting: AI encourages proliferation and remains relatively accessible. No individual trader rivals BlackRock's infrastructure, but each can deploy a hundred agents and select the best over time."

The Big Whale's analysis

The convergence of artificial intelligence agents, stablecoins and crypto payment infrastructures is opening a new chapter in autonomous finance and trading. Multi-agent systems and standardised protocols now enable digital entities to collaborate efficiently, execute transactions without human intervention and redefine the ways in which economies are organised across multiple sectors.

Stablecoins play a central role in this transition: they provide a stable and reliable basis for agent-to-agent payments, while ensuring seamless interoperability between traditional and decentralised finance. Their growing adoption by institutions is strengthening this bridge between two long-opposed worlds - that of regulated banking infrastructures and that of open, programmable blockchain networks.

But this development raises profound questions: how can transparency, fairness and data protection be guaranteed in an economy where machines are themselves becoming market players? What responsibilities should be assigned to entities capable of making autonomous economic decisions? And how can trust be preserved when algorithmic logic imposes itself in areas historically based on human regulation and ethical judgement?

As regulatory frameworks adapt - in Europe with the AI Act and MiCA, in the US with the stablecoin bills - the priority will be to strengthen the security, accountability and traceability of these systems. But beyond the law, a philosophical reflection is underway: what becomes of the notion of agency in a digital society where action and value shift to non-human intelligences?
