By Megan Cerullo
Reporter, MoneyWatch
April 17, 2026 / 5:00 AM EDT / CBS News
Artificial intelligence “agents” promise to do everything from tidying your email to buying a pair of heels based on your budget and style. But technology experts warn that outsourcing key decisions to AI exposes consumers to risks — communications errors, financial losses and security vulnerabilities — especially when agents are allowed to make purchases autonomously.
“It isn’t mainstream yet and it’s pretty risky right now, because there aren’t enough guardrails in the system for people to feel comfortable with agents autonomously buying things for them,” said Matt Kropp, an AI expert with Boston Consulting Group. “It could potentially go buy a car, but I wouldn’t say, ‘Here’s my credit card.’”
Still, major companies are pushing agentic commerce as a new way to engage customers and drive sales by letting AI do the legwork. American Express announced services and protections for cardholders who make purchases using specified AI agents, saying it will verify an agent’s identity and protect eligible customers from charges related to AI agent error. Amazon offers an agentic assistant called “Rufus” that can track prices, alert customers when prices hit a target and complete purchases. Walmart has deployed a “conversational” agent named Sparky to help consumers find products, read reviews and place orders.
Roughly a quarter of Americans ages 18 to 39 say they have tried using AI to research products or shop, according to market research firm Statista. But as adoption grows, so do the mishaps.
One high-profile example: Sebastian Heyneman, founder of a San Francisco tech startup, instructed an AI agent to secure him a speaking slot at the World Economic Forum in Davos. The agent succeeded — but booked the slot for $30,000, a fee he couldn’t afford. Heyneman used a bot from Tasklet, which lets businesses automate routine tasks with AI agents. Tasklet founder Andrew Lee said such problems can arise when user prompts give the agent conflicting instructions.
Lee acknowledged agents can handle normal consumer shopping tasks, but cautioned against handing them unrestricted control. “The specific use case of shopping is not a good thing to use these systems for — yet,” he said. “The agents are fundamentally hard to trust. Personally, I am not super comfortable with that yet. I like to control where my money goes myself, and as a business, we don’t recommend that.”
Security researchers and startup founders warn of other dangers: bad actors can trick agents into disclosing personal data or payment information. Bretton Auerbach, founder of a New York tech startup, explained that if an agent is directed to a malicious site, it might be fooled into pasting a credit card number or other sensitive details into a phishing page that looks legitimate to the AI.
For now, experts advise caution. Companies are beginning to add verification and protections, but agents remain prone to misunderstandings, prompt misconfigurations and adversarial manipulation. Letting an AI manage your money or make purchases without strict safeguards can expose you to financial and privacy risks that many consumers may not be ready to accept.
Edited by Alain Sherter
In: E-Commerce, Artificial Intelligence