This is a guest post written by Andy Slavitt, the Co-Founder and General Partner at Town Hall Ventures, and Toyin Ajayi, M.D., the Co-Founder and CEO at Cityblock Health. They share their perspective on the potential for using AI in clinical care, responding to a recent Opinion post in The Guardian.
By Andy Slavitt and Toyin Ajayi, M.D.
Dr. Oni Blackstock and Leah Goodridge wrote a thoughtful opinion piece last week objecting to doctors’ use of AI as a patient listening and diagnostic tool for low-income and unhoused populations. To illustrate, they cite Southern California clinics led by Akido Labs, an AI healthcare technology company that provides medical services, where medical assistants use AI to transcribe conversations with unhoused patients and generate diagnoses and treatment plans that are then reviewed by a doctor. So far, so good, it would seem. Making care accessible for unhoused and other marginalized populations remains an urgent challenge for our healthcare system, and for our society as a whole.
But the writers object to the program’s premise: that AI can be an accurate, trustworthy tool enabling providers to deliver more care for these populations. They take the position that AI developers are experimenting with unproven technology on communities that don’t have the economic clout or social voice to object, and that AI perpetuates medical classism and diagnostic bias. In their view, something is not better than nothing; it’s dangerously worse.
We have dedicated our careers as policymakers, entrepreneurs, and investors to improving quality, access, and outcomes for marginalized and vulnerable populations. We deeply appreciate that examples of bias, misuse of data, and exploitation of these populations are ubiquitous in our healthcare system.
But we believe the conclusion that AI should be withheld from people with low incomes or those who are unhoused misunderstands both how AI is already being used and how inequities in healthcare often arise. The greater danger is not that AI will be imperfect—every healthcare tool (and human) is—but that withholding effective tools from certain populations will deepen the very disparities we seek to reduce.
High-income, highly educated individuals are already using AI at scale and appear to be loving it. Surveys show widespread consumer adoption of large language models (LLMs) for health information, medication understanding, symptom triage, appointment preparation, and navigating insurance and benefits. A 2024 Pew Research Center survey found that younger, higher-income, and more educated adults are significantly more likely to use AI tools for information gathering, including health-related questions. Similarly, a 2023 Stanford HAI report documented rapid uptake of generative AI tools among professionals and affluent consumers, with demonstrated gains in task completion, comprehension, and efficiency. OpenAI reported that over 40 million people use ChatGPT daily to ask questions about healthcare. The use of AI in longevity tools is becoming a status symbol among well-to-do populations. Likewise, clinical teams are rapidly adopting AI to access the best treatments and research and to make visits smoother.
Withholding AI from lower-income populations and their providers does not prevent harm—it creates a digital caste system, where affluent patients benefit from AI-augmented understanding, preparation, and advocacy, while others are denied the same tools in the name of “protection.”
That is not equity. That is paternalism.
There are credible studies showing a real benefit when AI is part of the care process. A 2023 study in JAMA Internal Medicine found that AI-generated responses to patient questions were rated higher than physician responses for empathy and clarity, even when accuracy was comparable. Another study, published in npj Digital Medicine in 2024, found that LLMs significantly improved patient comprehension of discharge instructions, particularly among individuals with lower baseline health literacy.
As to the contention in The Guardian regarding the unhoused population served by Akido, a survey of 5,500 patients showed that 63% of people saw a medical provider on the very first day of interaction and 87% were still receiving care after three months. Patients also visited the emergency room 31% less frequently and initiated addiction treatment more quickly.
Importantly, this is not an “AI is better than nothing” argument. It is an argument that people value these tools, they find them trustworthy, and they demonstrably improve understanding, engagement, and care.
Denying access because outcomes are not perfect ignores how healthcare actually functions today—where misunderstanding, rushed encounters, and opaque systems are already the norm.
Bias Is Real—but Not Unique to AI, and Often More Fixable
The writers cite well-known studies showing bias in clinical AI models, including the 2021 Nature Medicine paper on chest X-ray algorithms and disparities in breast cancer screening tools. These findings are real and serious.
But three points are consistently missed:
1. Human clinicians exhibit persistent and well-documented bias. Decades of evidence show that Black patients, women, Medicaid recipients, and low-income patients receive worse pain management, fewer diagnostic tests, and lower-quality care from human providers. A landmark 2016 study in PNAS found that medical trainees held false beliefs about biological differences between Black and white patients, directly affecting treatment decisions.
2. Algorithmic bias is auditable; human bias often is not. AI systems can be evaluated, stress-tested, retrained, and monitored in ways individual clinician behavior cannot. The appropriate response to biased models is not blanket exclusion; it is better data, transparency, and governance.
3. Workforce problems are getting worse, not better. Without advances in productivity, the people least able to advocate for themselves will increasingly be left out.
Recent evidence shows progress. A 2024 Lancet Digital Health review found that when models are trained on more representative datasets and evaluated with subgroup performance reporting, disparities can be significantly reduced or eliminated. Bias is not inevitable; we need to work with broad populations to reduce it.
The False Choice Between AI and Human Care
The writers frame AI and human care as mutually exclusive. That framing is not supported by evidence and is unlikely to be how the world works going forward.
The strongest results come from AI-augmented care, not AI replacing clinicians. A 2023 study in Health Affairs showed that AI-supported primary care teams improved chronic disease management outcomes while reducing clinician burnout, particularly in under-resourced settings. All of us are going to get a blend of clinicians, team members, and technology depending on the situation. Low-income people aren’t getting inferior care in these models.
For communities facing clinician shortages, long wait times, and fragmented care, insisting on a “human-only” standard that does not exist in practice risks preserving scarcity rather than alleviating it.
A Better Standard: Choice, Transparency, and Inclusion
Equity does not mean shielding people from tools others are using successfully. It means:
Giving people choice in whether and how they use AI
Being transparent about limitations and risks
Including impacted communities in design and evaluation
Ensuring AI adds to, not subtracts from, human care
Refusing to deploy AI in low-income or unhoused populations until it is “perfect” guarantees those populations will be last to benefit—just as they have been with telehealth, digital portals, and specialty care access.

