This is a guest post written by Janine Knudsen, MD, senior advisor at the Common Health Coalition, and Shoshanah Brown, MS, MBA, strategic advisor at Popai Health. Their combined experience across innovative care models like Accompany Health, Aledade, Cityblock Health, and AIRnyc, paired with tenures at the New York City Department of Health and Mental Hygiene, gives them a unique perspective on this framework for deploying AI in safety-net environments, and we think it’s an important addition to the conversation.

Discussions of artificial intelligence have increasingly examined its implications for underserved populations. In an HTN discussion, Andy Slavitt and Toyin Ajayi articulated an important perspective on AI adoption and equity, emphasizing that restricting access to emerging technologies may unintentionally reinforce disparities. Their analysis underscored a critical consideration for healthcare leaders: ensuring that AI-enabled capabilities are not disproportionately concentrated in well-resourced environments, where they may widen existing gaps.

This framing surfaces a complementary challenge for healthcare organizations, particularly in safety-net and Medicaid contexts: how to evaluate and prioritize AI investments alongside the workforce, workflow, and operational realities that ultimately determine care effectiveness.

AI may be deployed within safety-net environments, yet still fail to improve equity and drive value when design assumptions do not align with the workforce and workflows that shape outcomes.

We write as healthcare operators who have designed, implemented, and scaled safety-net care models serving medically and socially complex populations. Our experience spans community health centers, public delivery systems, and emerging care organizations, where we have built community health worker programs, supported interdisciplinary teams, managed resource constraints, and navigated the operational and financial pressures that shape sustainability. This vantage point provides a practical perspective on how AI tools intersect with frontline workflows, workforce dynamics, and whole-person care delivery.

A structural constraint shaping current AI development is that most tools are designed for physicians and delivery systems organized around traditional clinical encounters. In safety-net and Medicaid environments, however, outcomes frequently depend on a different set of functions: sustained patient engagement, identification of social barriers, referral coordination, and longitudinal follow-up. These responsibilities are largely carried by community health workers, care managers, patient navigators, and care coordinators — roles whose workflows, information needs, and constraints differ substantially from clinician-centered models. AI systems that fail to align with these operational realities may produce limited impact despite considerable technical sophistication.

The Design Mismatch

A recurring challenge in AI deployment is that design assumptions frequently reflect standardized clinical workflows rather than the distributed and coordination-intensive dynamics that shape safety-net care delivery.

Safety-net care delivery often unfolds in environments where critical information is conversational, contextual, and longitudinal. Patient priorities and barriers evolve over time, and progress depends on sustained engagement and coordination rather than discrete encounters. These conditions create structural friction for AI systems designed around structured inputs and standardized workflows.

Under these circumstances, AI systems optimized primarily for clinician documentation or encounter workflows may generate only partial value. Dominant sources of operational friction — referral tracking, loop closure, prioritization of outreach, and synthesis of fragmented updates — frequently sit outside traditional physician-centered workflows.

The challenge is not technical capability, but alignment between AI design assumptions and the operational realities of safety-net care.

Care Plans and Workforce Realities

Care plans provide a useful lens through which to observe the consequences of this misalignment. In safety-net and Medicaid contexts, care plans are intended to function as coordination infrastructure across interdisciplinary teams rather than merely documentation artifacts.

Maintaining care plans that accurately reflect patient circumstances presents persistent challenges. Information relevant to care execution — emerging barriers, referral status changes, engagement patterns, and distributed updates — is often fragmented across systems and interactions. No single team member possesses comprehensive visibility.

Care plans fail as coordination tools not because clinical judgment is inadequate, but because synthesizing fragmented information into actionable, shared logic is structurally difficult.

These dynamics directly affect the frontline workforce. Community health workers, care managers, navigators, and care coordinators routinely operate within the gap between documented plans and operational reality. Significant effort is devoted to reconciling inconsistent records, tracking referral outcomes, updating multiple systems, and following up on unresolved barriers.

AI tools designed primarily to accelerate documentation may save time and reduce friction. AI capabilities that improve care plan visibility, highlight unresolved constraints, and support prioritization may alter workforce effectiveness more materially.

For interdisciplinary teams, AI’s highest-value contribution may lie less in automation than in strengthening coordination and informational reliability.

Social Care and Loop Closure

Social care workflows expose some of the most persistent coordination challenges in safety-net settings. Digital referral platforms have meaningfully improved resource discovery and documentation, yet fragmentation across social service ecosystems remains substantial.

Identifying a resource is rarely the most difficult step.

Ensuring that patients successfully connect with services — loop closure — frequently requires repeated follow-up, eligibility verification, status tracking, and coordination across organizations operating under distinct constraints. These processes remain highly manual and labor-intensive.

For frontline roles, referral resolution often represents a dominant source of operational burden. Significant time is spent reconciling outcomes, tracking unresolved needs, and updating fragmented systems with incomplete information.

AI systems that enhance documentation or matching workflows address important but partial elements of this challenge. Systems that assist with tracking, detect stalled pathways, synthesize updates, and support prioritization or escalation logic may offer greater leverage.

The opportunity is not simply digitizing social care interactions, but strengthening the reliability of loop closure processes that determine whether needs are actually resolved.

The Efficiency Trap

AI-driven efficiency gains introduce a predictable organizational risk within workforce-intensive care models. Improvements in workflow speed or task execution may generate pressure to expand caseloads or compress interaction time without corresponding adjustments to care model expectations.

Many safety-net care functions derive effectiveness from relational continuity and contextual understanding. These mechanisms are inherently time-dependent and resistant to simple productivity scaling.

When efficiency gains are interpreted primarily through throughput metrics, organizations may inadvertently weaken the relational substrate that drives outcomes. Expanded panels and reduced engagement time can erode the processes frontline roles are designed to sustain.

This dynamic reflects not a failure of AI technologies, but the interaction between efficiency incentives and workforce-dependent care models.

Evaluating AI impact, therefore, requires assessing whether efficiency gains strengthen or strain workforce capacity to sustain engagement, resolve barriers, and maintain care plan alignment.

Efficiency that dilutes care effectiveness is not operational progress.

Strengthening the Non-Clinical Safety-Net Workforce

Artificial intelligence will not replace the relational functions central to safety-net care. Trust-building, contextual interpretation, and sustained engagement remain inherently human capabilities performed by community health workers, care managers, navigators, coordinators, and social workers.

AI can, however, materially strengthen the infrastructure supporting this work.

In safety-net environments, performance constraints frequently arise from coordination burden, information fragmentation, documentation demands, and referral resolution challenges. Technologies that reduce these sources of friction may exert greater system-level impact than those focused narrowly on accelerating clinical workflows.

Realizing this potential requires alignment between AI design priorities and operational reality. Safety-net care operates under workforce structures, reimbursement constraints, compliance demands, and social service dependencies that differ fundamentally from standard clinical environments.

Promising Examples

The safety net does not primarily require technologies that demonstrate isolated technical capability. It requires infrastructure that functions reliably under social complexity, workforce constraints, regulatory requirements, and constrained reimbursement.

This is not theoretical. We are beginning to see promising examples:

  • Companies like Waymark are developing AI systems that surface “next best actions” for care teams, helping interdisciplinary staff prioritize outreach based on emerging risks — not just claims data.

  • ThriveLink is building AI agents that streamline applications for government safety net programs such as SNAP, reducing the bureaucratic friction that frontline teams and patients navigate daily.

  • Suki AI has shown promise in adapting its scribe technology to better capture social context that is often lost in clinical notes — bringing visibility to factors that shape outcomes but rarely appear in structured fields.

These efforts point toward a different future — one where AI augments the workforce closest to social complexity rather than bypassing it.

The central question is not whether AI reaches underserved populations. It is whether it truly strengthens the workforce and systems that already serve them.
