• Amazon's new Health AI Chatbot is rife with potential for misuse

    From TechnologyDaily@1337:1/100 to All on Saturday, March 14, 2026 09:15:26
    Amazon's new Health AI Chatbot is rife with potential for misuse: here's
    why I wouldn't trust it with my data

    Date:
    Sat, 14 Mar 2026 09:00:00 +0000

    Description:
    Amazon's new Health AI will recommend you Amazon Pharmacy products and train itself on your conversations.

    FULL STORY ======================================================================

    Amazon has launched its new Health AI service in the US, a chatbot which
    can help you understand, treat and diagnose health conditions. Available to
    all US Amazon Prime subscribers, it can have conversations with you about
    your health issues, recommend fixes and health products, and connect you
    with doctors.

    Let's get this out of the way: I think AI has a place in healthcare. As
    medical staff battle creaking hospital infrastructure and overwhelming
    demand, patients suffer long waitlists and (in the US, at least) seemingly
    ever-rising costs driven by for-profit pharmaceutical and medical
    industries. AI has the potential to ease all of those problems. However,
    its implementation needs to be carefully considered, and it's not
    necessarily something I'd trust Amazon to do.

    While we've all ended up trusting Amazon with our data in some form or
    another (Amazon Web Services is the biggest provider of cloud storage in
    the world, after all), I'd still be reluctant to hand over my sensitive
    health information to its chatbot. Let's break down exactly why.

    What exactly is Amazon Health AI?

    Amazon describes Health AI as "an agentic AI health assistant designed to
    make health care easier". It says Health AI is "designed to be a
    personalized health agent that knows you and your medical history so it can
    provide more helpful responses and take meaningful action, including
    connecting you to the professionals, treatments, and account services you
    need to get and stay well".

    These services include recommending you Amazon Pharmacy products and connecting you to healthcare providers (specifically from Amazon's One
    Medical group). With permission, Amazon can also access your medical records and have the chatbot discuss them with you.

    Amazon insists security is tight: it says Health AI is a "HIPAA-compliant"
    environment, referring to the US' Health Insurance Portability and
    Accountability Act. This means all your protected health information (PHI)
    is treated as it would be at a doctor's office, and is subject to all legal
    privacy requirements.

    What are the potential risks?

    The HIPAA Journal, in an article about AI published last year, said that
    healthcare providers and vendors using AI run risks as a result of the
    technology. It stated that "lurking surreptitiously behind the potential
    benefits of using PHI in AI technology lies a murky mix of risks that could
    negatively impact [healthcare providers], your vendors, and even your
    patients, especially when HIPAA compliance and patient PHI are involved."

    Some of these risks stem from the fact that AI models require huge corpora
    of data to train on. If you're building a health-based chatbot, you need a
    lot of health data to do so, and Amazon, one of the world's biggest
    collectors of data, sees the value in your personal information.

    Amazon insists "Protected health information from Amazon One Medical and Amazon Pharmacy is not used in the broader Amazon store to market general merchandise or by Amazon Ads, and Amazon does not sell customers' personal data" and "we only use protected health information for purposes permitted under HIPAA".

    However, it's not hard to see the conflict of interest: it'll tell you the
    problem, and then sell you the solution from its own website, or connect
    you to its own medical providers. While your health information won't be
    used in Amazon's general store, Amazon said nothing about using it for
    marketing purposes in relation to Amazon Pharmacy. Did you tell the chatbot
    you're having trouble sleeping? Expect to have deals surfaced relating to
    over-the-counter solutions for better sleep.

    While Amazon will record your data to train future models of its AI, it
    does say it will remove names. However, that in itself isn't properly
    private. Meta was found to be able to link Facebook accounts to users of
    the period-tracking app Flo, even after names and other identifying account
    information had been removed, thanks to a unique identification number. It
    was found in court to be spying on Flo users. I'm bringing this up because
    I wouldn't trust name removal to be an effective way of anonymising
    accounts when it comes to harvesting chat logs to train the next generation
    of AI.

    If there are data breaches, or Amazon uses this data in a way that isn't
    HIPAA compliant, I also don't think any punishment can be effectively
    enforced. The Facebook court case mentioned above took many years to reach
    a verdict, and still no punishment has been handed out. Amazon is simply
    too big to fine, and too big to discipline, especially as its data centers
    are used by everyone from governments to grocers.

    Big tech's desire to help build a happy, healthy populace is second to its desire for the profits it could wring out of your personal data.

    With that in mind, Amazon isn't doing a good enough job of convincing me
    its new Health AI service is safe, private and in my best interests. The
    state of New York evidently feels the same way: it's in the process of
    blocking AI chatbots from giving legal or medical advice.




    ======================================================================
    Link to news story:
    https://www.techradar.com/health-fitness/amazons-new-health-ai-chatbot-is-ripe-with-misuse-potential


    --- Mystic BBS v1.12 A49 (Linux/64)
    * Origin: tqwNet Technology News (1337:1/100)