• I'm a cybersecurity professional, here's why I'm preparing for an AI data breach

    From TechnologyDaily@1337:1/100 to All on Sunday, March 15, 2026 10:15:26
    I'm a cybersecurity professional, here's why I'm preparing for an AI data breach

    Date:
    Sun, 15 Mar 2026 10:00:00 +0000

    Description:
    AI companies host a treasure trove of data - third-party data breaches pose major risk.

    FULL STORY ======================================================================

    Recently, OpenAI acknowledged a security breach at a third-party data analytics vendor that led to the exposure of some of its API users' personal information, including email addresses, names, and browser details.

    On its own, the incident underscores the continuing risks of supply-chain targeting and third-party data exposure. But beyond that, it serves as a potential shot across the bow for the cybersecurity community and the broader public.

    Mike Kosak, Director of Threat Intelligence at LastPass.

    Treasure trove of data

    AI companies are a treasure trove of data. Not just the data the models are trained on, or even the intellectual property involved in the actual technology: AI companies can be viewed, akin to Cloud Service Providers (CSPs), as repositories for a massive amount and variety of customer-provided data.

    As we saw in the late 2010s, nation-states and other threat actors increased their targeting of CSPs to maximize their return on investment, and it is a matter of time until we see a major breach of one of the AI companies and the accompanying exposure of personal and proprietary data.

    The data is too attractive, and threat actors are too capable.

    This isn't to take anything away from the security programs at these companies. On the contrary, there is no doubt that, particularly among the most advanced firms that would draw the biggest interest from threat actors, the security programs are world-class and incredibly well-resourced and operated. But it's the classic issue: defenders need to be right all the time, while attackers only need to be right once.

    Secure by design

    To be clear, this isn't even taking into consideration the recent security issues identified within Moltbook after its rapid adoption in the last few weeks, including major vulnerabilities independently discovered by both Wiz, as captured in their excellent blog post, and Jameson O'Reilly, which were highlighted by 404 Media.

    While Moltbook is the focus of these recent reports, the issues arising from insecure development of AI tools - especially as the capabilities and technology proliferate - are much larger and more distressing, and they deserve their own analysis.

    These issues go back to an overarching emphasis on speed of implementation, an overreliance on vibe coding, and a fundamental failure to implement the secure-by-design mantra, which is creating security issues that threat actors will most certainly leverage. But again, that's another topic; back to the issue at hand.

    What makes a potential large-scale breach of a major AI firm so unique is the variety and sensitivity of the data. Many companies don't even realize some of their most sensitive data may have already been shared via their employees.

    According to a study earlier this year from Harmonic, 45.4% of companies' sensitive data submissions into AI apps came from personal accounts, and Varonis found 99% of organizations have sensitive data exposed to AI tools, including unsanctioned apps.

    Combine this with the deeply personal information individuals are sharing with AI chatbots, including questions that have later been used in criminal cases and AI leveraged for mental health and therapy-like discussions.

    The potential for extortion and blackmail becomes a concern as well, particularly among those who may feel pressure to avoid going to therapists or reporting mental health concerns, such as those in the intelligence community, among first responders, or in the military.

    People are viewing AI chatbots as a safe place to share their thoughts and questions while maintaining a sense of anonymity, when this may not be the case, particularly in the long term.

    Enforcing robust AI

    I raise these concerns not to be a naysayer or a Cassandra, but in hopes of preparing the larger AI customer base for the inevitable, so that they can take the appropriate steps now, before something happens.

    This means examining their risk appetite, be it personal, professional, or organizational, for what they are willing to share with AI and let be stored in perpetuity on third-party servers that are viewed as rich targets. In other words, users should examine what, if any, sensitive data they are comfortable sharing with an external organization.

    For companies, which often have data classification policies, this is easier to do. For personal users, this can be more difficult. Once this examination is complete, it means taking steps to adjust behavior, again either personal or organizational, to align with that risk appetite.
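As one illustration of how a data classification policy can feed into that risk-appetite check, here is a minimal sketch. The categories, regex patterns, and function names are my own assumptions for demonstration, not anything from the article; real data loss prevention tooling is far more thorough than a handful of regexes.

```python
import re

# Hypothetical sensitivity categories. A real classification policy would be
# far broader (DLP tooling, named-entity recognition, customer identifiers).
SENSITIVE_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "api_key": re.compile(r"\b(?:sk|key)-[A-Za-z0-9]{16,}\b"),
}

def classify(text: str) -> list[str]:
    """Return the sensitive-data categories detected in the text."""
    return [name for name, pat in SENSITIVE_PATTERNS.items() if pat.search(text)]

def within_risk_appetite(text: str) -> bool:
    """Allow submission to an external AI tool only if nothing sensitive matched."""
    return not classify(text)
```

The point is the workflow, not the patterns: decide up front which categories you are unwilling to have stored on a third-party server, then gate submissions on that decision rather than on habit.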

    This may mean developing, implementing, and (most importantly) enforcing robust AI use policies within your company. This may also mean researching chatbots before leveraging them for personal and/or sensitive questions that you may not want out in the open in the event of a large breach.

    Major breach

    AI and its continuing rapid development obviously have some amazing and wonderful implications for companies and individuals alike. But these companies' place as highly prized targets for advanced threat actors means it is almost certainly just a matter of time until a major breach occurs.
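One concrete way an AI use policy can be enforced in practice is a redaction pass that strips obvious identifiers from a prompt before it ever leaves the organization. The sketch below is illustrative only: the placeholder tokens and the three patterns are assumptions I have added, and any production filter would need a much richer ruleset.

```python
import re

# Illustrative redaction rules (assumed, not exhaustive): replace obvious
# identifiers with placeholder tokens before the prompt is sent externally.
PATTERNS = [
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"), "[EMAIL]"),        # email addresses
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "[SSN]"),            # US SSN format
    (re.compile(r"\b\d(?:[ -]?\d){12,15}\b"), "[CARD]"),        # 13-16 digit card numbers
]

def redact(prompt: str) -> str:
    """Apply each redaction rule in order and return the sanitized prompt."""
    for pattern, placeholder in PATTERNS:
        prompt = pattern.sub(placeholder, prompt)
    return prompt
```

Running prompts through a filter like this before they reach an external chatbot means that even if the provider is later breached, the stored conversation contains placeholders rather than the identifiers themselves.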

    It is best for users to consider now what data they would like to avoid having exposed in the event of a major breach, by refraining from submitting it in the first place.

    We've featured the best encryption software.

    This article was produced as part of TechRadarPro's Expert Insights channel where we feature the best and brightest minds in the technology industry today. The views expressed here are those of the author and are not necessarily those of TechRadarPro or Future plc. If you are interested in contributing find out more here: https://www.techradar.com/news/submit-your-story-to-techradar-pro



    ======================================================================
    Link to news story: https://www.techradar.com/pro/im-a-cybersecurity-professional-heres-why-im-preparing-for-an-ai-data-breach


    --- Mystic BBS v1.12 A49 (Linux/64)
    * Origin: tqwNet Technology News (1337:1/100)