
When people choose what to buy, they want everything at once: the best quality, the best price, convenient delivery, and a product that matches the picture they already have in their head.
And of course an AI assistant can often help more than a plain search box: it knows far more about its user than a brand can realistically learn in years of close contact. It’s not surprising that more and more buyers are choosing AI as their main way of shopping.
So what are brands supposed to do?
Tear down their architecture and rebuild it from scratch?
Integrate with every agent out there and multiply the number of touchpoints?
Even the McKinseys of this world are asking a similar question: will AI agents start setting the rules for retailers and optimising only for their own upside, forgetting both the sellers and what the customer actually wanted?
Because if it’s easy to recommend the “right” product, it’s just as easy to nudge the user away from this watch and towards that one instead — from a brand that happens to pay this AI ecosystem a bit more, for example.
Let’s unpack this.
While your competitors are busy redesigning their bulky websites, it’s worth pausing to notice that the buyer doesn’t actually need to visit them anymore.
From the buyer’s point of view, the flow looks like this: they ask the assistant, “Find me running shoes under $120”, the assistant pulls a few options from different stores, the purchase is completed in the same conversation, and they may not even notice which company’s systems are actually running the checkout.
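That assistant-side flow can be sketched in a few lines of code. The store data, field names, and budget logic here are invented for illustration, not any real assistant's API:

```python
# Sketch of the assistant-side flow: pull items from several stores'
# feeds and keep the ones under the buyer's budget. Store names and
# fields are hypothetical.

STORES = {
    "store_a": [{"sku": "A1", "title": "Road runners", "price": 99}],
    "store_b": [{"sku": "B7", "title": "Trail runners", "price": 149}],
}

def find_options(budget: int) -> list[dict]:
    """Collect matching items from every store's feed, tagged with the store."""
    options = []
    for store, items in STORES.items():
        for item in items:
            if item["price"] <= budget:
                options.append({**item, "store": store})
    return options
```

The point of the sketch: the assistant compares offers across stores on data alone, and the seller whose feed is missing or stale simply never enters the list.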
Your interaction with the customer now starts and ends without you. Most of your old touchpoints simply aren’t on the customer journey anymore.
If you don’t act deliberately, you really do end up as nothing more than a single line in a list: “just another seller who happens to have the right SKU”.
If you strip away the nice language, one simple question remains: who makes money on this flow? And who takes the reputational and financial hit when something goes wrong?
To stay in real touch with the customer, you need to be clear about a few things.
1. Who is responsible?
Very specifically: who is responsible for accurate recommendations and product descriptions, who owns pricing, taxes and delivery options, and who pays for a decision in the customer’s favour when there’s a dispute.
2. What signals does the customer get along the way?
At a minimum: who they are actually buying from, what the final terms of the deal are, and who to contact when something goes wrong.
If at every step it feels like “I’m talking to a faceless system”, there is no relationship — neither with the brand nor with the platform.
3. What data is all of this built on?
For the assistant, the process now looks very different. It doesn’t rely on your landing page as the main source of truth. The key inputs are a structured product feed (in the format ChatGPT expects), a checkout API that returns the final terms of the deal, and order statuses.
If you’re not properly represented in this data, if you don’t update prices and availability, if you skip important details, then no amount of “customer first” declarations will help.
The system simply cannot act in the customer’s interest if it doesn’t have a realistic picture of your business.
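To make those inputs concrete, here is a minimal sketch of a feed item and a checkout quote. The field names and the flat tax rate are illustrative assumptions, not the actual format ChatGPT expects:

```python
# Illustrative sketch of the data an assistant consumes instead of your
# landing page. Field names and tax logic are hypothetical, not a spec.

feed_item = {
    "sku": "RUN-AX-42",
    "title": "Trail running shoes",
    "price": {"amount": 11900, "currency": "USD"},  # minor units (cents)
    "availability": "in_stock",
    "shipping": {"regions": ["US"], "days": 3},
}

def checkout_quote(item: dict, quantity: int) -> dict:
    """Return the final terms of the deal, as a checkout API would."""
    subtotal = item["price"]["amount"] * quantity
    tax = round(subtotal * 0.08)  # assumed flat rate, for the sketch only
    return {
        "sku": item["sku"],
        "quantity": quantity,
        "subtotal": subtotal,
        "tax": tax,
        "total": subtotal + tax,
        "currency": item["price"]["currency"],
    }
```

If a field here is missing or stale in your real feed, the assistant's "realistic picture of your business" degrades in exactly the way the paragraph above describes.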
4. What limits do you set for the agent?
By default, an agent will try to maximize its own metrics: conversion in the conversation, the speed with which the deal closes, and revenue for its own ecosystem.
If you don’t set boundaries, it will optimize for its own goals. For example, it might use emotional but not entirely accurate arguments to push a product over the line.
Those limits need to be agreed upfront both in your contracts and in how the channel is configured: what exactly the assistant is allowed to promise, which categories it’s allowed to handle in the first place, and where a “human stop button” is mandatory and a person has to step in.
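Those limits can live as explicit channel configuration rather than informal agreement. A minimal sketch, with every name and threshold invented for illustration:

```python
# Hypothetical guardrail config for an assistant channel: which
# categories it may handle, what it may promise, and when a human
# must step in. All names and thresholds are assumptions.

AGENT_POLICY = {
    "allowed_categories": {"footwear", "apparel"},
    "allowed_claims": {"price", "availability", "delivery_time"},
    "human_escalation": {"refund_over": 10000, "legal_question": True},
}

def requires_human(event: dict, policy: dict = AGENT_POLICY) -> bool:
    """Decide whether the 'human stop button' applies to this event."""
    if event.get("type") == "refund":
        return event["amount"] > policy["human_escalation"]["refund_over"]
    if event.get("type") == "legal_question":
        return policy["human_escalation"]["legal_question"]
    return False
```

The useful property of writing it down like this: the same rules can sit in the contract and in the integration, so there is no gap between what was agreed and what the channel actually enforces.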
The layer between the seller, the assistant and the buyer can be designed in different ways. In practice there are two main options.
The seller integrates directly: publishes its own product feed, exposes its own checkout API and order statuses, and stays Merchant of Record for every transaction.
Or the seller sells via a marketplace or infrastructure layer that aggregates feeds from many sellers, runs checkout and payments on their behalf, and may itself act as Merchant of Record.
In both cases, this layer can strengthen trust, by taking on the complex infrastructure and making roles and flows more transparent, or erode it, when it’s unclear who is responsible for what and the customer feels abandoned.
The key question here isn’t what the product is called or which stack it uses. It’s whether this layer gives the seller and the platform enough control and transparency — and whether it makes the channel clearer for the customer, not more confusing.
If the infrastructure can clearly show who is Merchant of Record in a specific transaction, how the payment moves through the system, and who owns the dispute, it helps both the brand and the AI platform save face.
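One way to make that transparency concrete is for every transaction to carry an explicit record of who plays which role. A sketch with hypothetical field names:

```python
# Hypothetical per-transaction role record: who is Merchant of Record,
# how the payment moves through the system, and who owns a dispute.
# Field names are invented for illustration.

transaction = {
    "order_id": "ord_1017",
    "merchant_of_record": "brand",          # or "marketplace"
    "payment_path": ["buyer", "psp", "brand"],
    "dispute_owner": "brand",
}

def dispute_contact(tx: dict) -> str:
    """Answer the customer's question: who handles my dispute?"""
    return tx["dispute_owner"]
```

When this record exists per transaction, neither the brand nor the AI platform has to improvise an answer in front of an unhappy customer.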
You don’t have to love agentic commerce or be sure how big it will get and how fast.
But it’s already part of how people shop, and it makes sense to be ready to work with it.
**Look at your current catalogue through the assistant’s eyes.**
If you hide all your frontends and leave only the feed and statuses — is it clear who you sell to, what you sell, and where the limits and risks are? Or will the assistant have to fill in the gaps for you?
**Choose an interaction model you’re actually comfortable with.**
In your existing contracts and integrations, who takes the reputational hit when the channel fails? And are you okay with that setup if, on top of it, an AI agent becomes the main entry point?
When the basics are in place, conversations about protocols, marketplaces and architecture get much easier.
Agentic commerce by itself doesn’t kill your relationship with the customer — especially if you prepare for the shift in advance and choose a way of working with this new sales channel that actually fits your business.