Mitigating the risks of autonomous AI with agent-ready data
Date:
Mon, 23 Mar 2026 11:22:53 +0000
Description:
Most AI failures aren't caused by poor models, but by missing context.
FULL STORY ======================================================================

The rollout of autonomous AI agents presents major opportunities to organizations.
However, without the right foundations and approach, the risks are vast when there is no way to guarantee that agents will make correct, reliable decisions. Agents do not perceive reality as we do. Instead, they act upon
the snapshots of reality captured in the data they access.

Peter Manta, Global AI Practice Leader at Informatica from Salesforce.

When data quality
is poor, an agent will make bad decisions, and there is no human around to identify the mistake and correct it.
The good news is that there are data strategies that can mitigate the risks.

Agents of chaos

In a traditional Machine Learning (ML) system using automation, data problems can reduce accuracy. But in an agentic ecosystem, the actions of one agent can have catastrophic downstream impacts.
At worst, a rogue agent could cause a data cascade in which one error sparks
a chain reaction of flawed outputs that get stored, treated as truth, and
then reused.
In large organizations, these failures don't look dramatic at first. Everything seems normal, until someone realizes the system has been acting on the wrong frame for weeks. By then, it's too late: other agents are already acting on the mistakes made upstream.
A lie whispered into the system becomes a command shouted out of the other side.

Data quality is necessary, but not sufficient. Agents need decision-quality data to mitigate the risk of poor outcomes.

Data vs reality

Given my role as an AI data leader working with global enterprises, I was reflecting on a key data problem during a recent trip to London. I stopped to eat a famous
British dish and when I asked for some chips, I was confident I wouldn't be served a plateful of silicon.
The data I had gathered about my own personal situation and the context - sitting in a restaurant in the UK capital - made me confident the waiter
would bring me some cooked potatoes served alongside fish, not a handful of CPUs. Analysis of local data told me what kind of food I should expect. If I had made the wrong judgment and gathered or interpreted data incorrectly, it would have been a disappointing dinner.
A New Zealand supermarket recently provided another excellent - and hilarious - illustration of the challenge of interpreting data without all the right pieces in place. It created an AI recipe builder to help customers use up their leftovers, inviting people to type in the ingredients they have available and have the bot generate recipes.
Inevitably, people started asking it to make dishes with bleach, ant killer and other dangerous ingredients, so the AI began generating less-than-lovely-sounding suggestions like glue sandwiches and French toast flavored with a soupçon of methanol.
The ingredients were complete and the instructions were correct, but the AI only understood the structure of a recipe - not the purpose or intent of
using ingredients to make nourishing, rather than noxious, food.
Now imagine if that AI had been agentic and tasked with, for example, instructing a food assembly plant to generate recipes and ingredient boxes to be sent to customers. The story above was funny; the nightmare scenario of poison being mailed to customers is anything but.

A source of truth for agents

That tale carries a clear lesson: if organizations haven't established a trusted foundation for their data, the jump to agentic AI is extremely risky. That trust doesn't magically appear when agents are deployed. It comes from governance, metadata, lineage and understanding not just what the data says, but where it came from and why it exists.
And what should that data look like? It needs to be authoritative and trustworthy, as well as comprehensive and up-to-date to inform timely, complete decisions. It must also be responsible - meaning agents can safely act on it - and secure to prevent misuse.
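Those attributes can be expressed as concrete checks that run before an agent is allowed to act. The sketch below is my own illustration under assumed field names (source, updated_at, lineage), an assumed trust list, and an assumed 24-hour freshness window; none of these come from the article:

```python
# A minimal "agent-ready data" gate: a record must come from a trusted
# source, be fresh, and carry lineage metadata before an agent acts on it.
from datetime import datetime, timedelta, timezone

TRUSTED_SOURCES = {"crm_master", "finance_ledger"}  # hypothetical trust list
MAX_AGE = timedelta(hours=24)                       # hypothetical freshness window

def is_agent_ready(record: dict) -> bool:
    """Return True only if the record is safe for an agent to act on."""
    fresh = datetime.now(timezone.utc) - record["updated_at"] <= MAX_AGE
    trusted = record["source"] in TRUSTED_SOURCES
    has_lineage = bool(record.get("lineage"))  # do we know where it came from?
    return fresh and trusted and has_lineage

record = {
    "source": "crm_master",
    "updated_at": datetime.now(timezone.utc) - timedelta(hours=2),
    "lineage": ["crm_master -> cleansing_job -> feature_store"],
}
print(is_agent_ready(record))  # True: fresh, trusted, lineage present
```

A record that is stale, comes from an unlisted source, or arrives without lineage would fail the gate, and the agent would be required to escalate rather than act.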
Finally, context must be considered at every stage. It is the secret sauce that ties all these aspects of good data management together.

Big decisions on data

As they roll out autonomous AI tools, organizations are discovering that accuracy alone is not enough. Agentic systems don't just predict; they act, and those actions compound. That means missing context is far more dangerous than it was in earlier generations of AI.
The organizations that succeed will be those that treat governance, metadata, and lineage not as an annoying requirement from their compliance teams, but
as a strong foundation for their agents. When data drifts away from the
truth, that movement doesn't stop - it gets worse. Building a solid truth
layer will help to stop that and prevent systems from failing down the line.
Many IT management teams won't know where this layer is. Maybe they will point vaguely to a warehouse. But they need to know where the truth their agents rely on can be found, because if it's not in place upstream, then everything else is at risk.
Right now, this appears to be a relatively small issue. But as agents take on more and more critical roles, it's going to become a big one. Getting the basics right today is not just a prudent decision, but an unavoidable hedge against future chaos.
======================================================================
Link to news story:
https://www.techradar.com/pro/mitigating-the-risks-of-autonomous-ai-with-agent-ready-data
--- Mystic BBS v1.12 A49 (Linux/64)
* Origin: tqwNet Technology News (1337:1/100)