AI “agents” — software that can read your messages, follow instructions and act on your behalf — are increasingly being pitched as virtual personal shoppers. They can sort emails, hunt for deals and even pick items that fit your budget and taste. But experts warn that letting agents handle purchases on their own creates real risks: errors in communication, unexpected charges and security vulnerabilities.
“Right now the technology isn’t mainstream and it’s risky,” said Matt Kropp, an AI specialist at Boston Consulting Group. He and other technologists say consumer-facing agents lack sufficient safeguards for fully autonomous buying. “An agent might be able to secure something big like a car, but handing it your actual credit card is still unwise.”
Despite those cautions, major companies are rolling out agent-driven commerce features. American Express said it will verify certain AI agents’ identities and extend consumer protections for eligible cardholders when those agents make purchases. Amazon offers an assistant called Rufus that can monitor prices, alert users when items hit target prices and complete transactions. Walmart has a conversational agent named Sparky to help customers search, read reviews and place orders.
Adoption is growing. Market research firm Statista reports that roughly one in four Americans aged 18 to 39 has tried using AI to research products or shop. But increased use has already produced costly mistakes.
One widely reported incident involved Sebastian Heyneman, a San Francisco entrepreneur who told an agent to secure him a speaking slot at the World Economic Forum in Davos. The agent succeeded — but booked a $30,000 slot that Heyneman couldn’t afford. The agent came from Tasklet, a company that helps businesses automate tasks with AI, and Tasklet’s founder, Andrew Lee, said such outcomes often stem from unclear or conflicting user prompts.
Lee said agents are useful for routine shopping assistance but cautioned against giving them unrestricted control. “Agents are hard to trust yet. I like to control where my money goes, and we don’t recommend giving them free rein,” he said.
Security researchers echo those concerns. Bad actors can try to manipulate agents into revealing personal or payment information. Bretton Auerbach, founder of a tech startup in New York, described scenarios in which an agent directed to a malicious website could be tricked into pasting a credit card number or other sensitive data into a phishing page that looks legitimate to the AI.
For consumers considering agent-based shopping, experts recommend limiting exposure rather than full automation. Practical precautions include requiring explicit confirmation before purchases, setting spending caps, using single-use or virtual payment cards, and connecting agents only to vetted services. Financial firms and retailers are beginning to build verification, monitoring and dispute procedures into their offerings, but those protections vary.
Until stronger guardrails, clearer standards and more robust controls become widespread, handing an AI unchecked authority over your money or payment details invites financial loss and privacy breaches. For now, many technologists advise using agents as helpers — to surface options and do research — while keeping final control over spending decisions.