Google Removes AI Health Features Amid Accuracy and Safety Concerns

Google has removed its 'What People Suggest' AI health feature and scaled back 'AI Overviews' in search results due to widespread concerns over misleading and potentially dangerous medical information. Investigations revealed inaccuracies, particularly regarding liver test data, raising patient safety alarms. The move underscores the critical need for accuracy in AI-generated health advice.

Key Highlights

  • Google removed 'What People Suggest' AI health feature.
  • Specific 'AI Overviews' in search results also removed.
  • Decision followed reports of misleading medical information.
  • Concerns included inaccurate liver test data and oversimplification.
  • Experts warned of potential harm to patient safety.
  • The move highlights ongoing debate on AI ethics in healthcare.

Google has quietly removed its controversial artificial intelligence (AI) search feature, 'What People Suggest,' and scaled back its 'AI Overviews' in search results for health-related queries. The decisions follow growing concern over the accuracy and safety of AI-generated medical information, which, according to investigations, risked giving misleading and potentially dangerous advice to users worldwide.

The 'What People Suggest' feature, launched in March 2025, aimed to provide crowdsourced health advice, letting users share and find tips based on lived medical experience. Initially touted by Google as an example of AI's potential to improve global health outcomes, it drew considerable scrutiny for its inaccuracies. The company confirmed the removal, attributing it to a 'broader simplification' of its search page; that explanation, however, appears to downplay the pressure from safety and quality concerns widely reported across media outlets.

More broadly, Google's 'AI Overviews', the generative AI summaries that appear at the top of search results for billions of users each month, also came under fire. An investigation by The Guardian in January 2026 found that these summaries were presenting false and misleading health information, potentially endangering users. For searches such as 'what is the normal range for liver blood tests,' the AI summaries gave generic numerical ranges without crucial context such as a patient's age, sex, ethnicity, or specific medical history. Experts warned that such oversimplified, decontextualized information could lead seriously ill patients to falsely believe their results were normal, deterring them from seeking necessary medical care.

Following The Guardian's investigation and subsequent criticism from medical professionals and organizations such as the British Liver Trust, Google began removing AI Overviews for specific problematic medical queries, including those related to liver function tests. While the removals were welcomed, experts caution that similar misleading AI-generated summaries may still appear for slightly reworded questions, and concerns persist around sensitive topics such as cancer and mental health.

The episode is part of a larger, ongoing global debate about the ethical deployment of AI in healthcare. Beyond public-facing search features, Google has also developed Med-PaLM 2, a large language model designed for use in healthcare settings, primarily by medical professionals. Even this more controlled deployment has faced scrutiny: in August 2023, U.S. Senator Mark R. Warner raised concerns with Google CEO Sundar Pichai about Med-PaLM 2's potential inaccuracies, patient privacy, and the need for greater transparency and ethical safeguards in its rollout. The World Health Organization (WHO) and various patient advocacy groups have likewise urged caution with AI tools in healthcare to protect human welfare and autonomy.

At the core are the inherent challenges of generating accurate, nuanced, and contextually appropriate medical advice with AI. Models can oversimplify complex medical topics and, in some cases, 'hallucinate' incorrect information. The high stakes in healthcare mean that such errors can have severe, real-world consequences for patient health and for trust in medical information.
The industry must balance AI's immense potential to improve care and operational efficiency against the need for rigorous safety, accountability, and ethical governance. Google maintains that its AI Overviews are shown only when the company has high confidence in the quality of the response, and that it continuously reviews performance across categories, with internal clinical teams reviewing flagged examples. The repeated instances of misleading health information, however, highlight the need for more comprehensive oversight and robust validation protocols before AI tools are widely deployed, especially in critical domains like health. While AI holds transformative promise for healthcare, its implementation must prioritize patient well-being and be guided by stringent ethical considerations and regulatory frameworks if it is to build and maintain public trust.

Frequently Asked Questions

What Google AI health features have been removed or scaled back?

Google has quietly removed its 'What People Suggest' feature and has also removed specific 'AI Overviews' from search results for certain health queries, particularly those concerning liver blood tests.

Why did Google remove these AI health features?

The removal stems from significant safety concerns and reports of misleading or inaccurate medical information provided by these AI features. Investigations, notably by The Guardian, highlighted how oversimplified or incorrect advice could potentially put users at risk by deterring them from seeking proper medical care.

What were the specific inaccuracies identified in Google's AI health summaries?

Key inaccuracies included presenting generic 'normal ranges' for liver blood tests without considering individual factors like age, sex, or ethnicity. This lack of crucial context could lead patients to misinterpret their health status. Concerns also remain for other sensitive areas like cancer and mental health information.

Does Google use AI for healthcare in other ways?

Yes, Google continues to develop and test more advanced AI models like Med-PaLM 2, primarily for use by healthcare professionals to assist with tasks like answering medical questions and summarizing documents. However, even these enterprise-level AI tools have faced scrutiny regarding accuracy, transparency, and ethical deployment.

What is the broader implication of this move for AI in healthcare?

This incident underscores the critical need for rigorous ethical guidelines, transparency, and robust verification processes for AI applications in healthcare. It highlights the challenge of balancing AI innovation with patient safety and the importance of ensuring AI-generated medical information is accurate, contextually relevant, and does not replace professional medical advice.