Every so often we share what we’re hearing straight from the field—real patients, real clinicians—to keep our work grounded. From a national survey of 400 U.S. adults, a few patterns stood out about primary care and healthcare technology. At Lumeris, we lean on human-centered research to shape AI and UX that fit actual clinical workflows, not wishful thinking. These are some of the highlights we’ve been trading internally within the Lumeris Lab.
Methodology
This post summarizes results from a Lumeris UX survey: a 28-question instrument fielded to U.S. adults (18+) via a research firm in October 2025, with 400 completed responses. The reported margin of error is ±5% at a 95% confidence level. The sample spans gender, race, income, education, urban/suburban/rural, age, and life stage; respondents skew slightly toward people managing chronic conditions.
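As a quick sanity check, the stated ±5% figure is consistent with the standard worst-case margin-of-error formula for a simple random sample of 400. This is a sketch of that arithmetic, not a description of the survey's actual weighting or design:

```python
import math

def margin_of_error(n: int, p: float = 0.5, z: float = 1.96) -> float:
    """Worst-case margin of error for a simple random sample.

    n: sample size
    p: assumed proportion (0.5 is the conservative worst case)
    z: critical value (1.96 corresponds to ~95% confidence)
    """
    return z * math.sqrt(p * (1 - p) / n)

moe = margin_of_error(400)
print(f"±{moe * 100:.1f}%")  # ±4.9%, which rounds to the reported ±5%
```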
Most patients aren’t peeking over the horizon for AI—they’re already using it. That matters for how we design and deploy Tom: assume familiarity, meet people where they are, and focus on real health use cases and utility for the patient.
Findings
1) Familiarity with AI is mainstream
Nearly half of respondents (46%) say they’re very or extremely familiar with AI tools in daily life (Q1).
Why it matters: You don’t need a remedial AI explainer inside a health experience. Design for confident users and provide guardrails and transparency for everyone else.
2) Frequent use is common
A majority of respondents (52%) report using AI tools often or always (Q2).
Why it matters: Patients are already tasking AI with everyday work. Your product needs to feel efficient, responsive, and useful from the first interaction—anything clunky will be judged against the tools they already use.
3) Which tools people use (hint: chat assistants dominate)
When asked which AI tools they’ve used in the last 3 months (multi-select), respondents most often chose ChatGPT/OpenAI (91%), Google AI tools such as Gemini/Bard (80%), and built-in voice assistants like Siri (70%) and Alexa (70%).
Why it matters: Conversational patterns are learned behaviors. Patients expect a chat-first experience that can also hand off into guided actions—exactly the kind of orchestration Tom provides.
4) Health is already a top AI use case
Over half (54%) say they use AI for health/fitness/well-being—behind only work/productivity (73%) and social/entertainment (64%).
Why it matters: Patients are primed to use AI for care navigation, prep, and follow-through. Building around those behaviors increases adoption with minimal education cost.
Design implications for Tom
Start where people already are: Offer a fast chat entry point with clear next steps (schedule, message, refill), then escalate to a human when needed. (Brand guidance: refer to Tom as “it,” focus on what it enables; avoid anthropomorphizing.)
Make usefulness obvious, immediately: Lead with one-tap actions and “Best Next Action” summaries at the point of care and between visits.
Respect time: People who use AI often expect speed. Keep flows short and outcomes clear (e.g., “You’re scheduled,” “Labs ordered,” “Refill confirmed”).
Meet learned expectations: Natural language, concise evidence-backed answers, and transparent citations build trust with frequent users.
Lumeris learnings
Patients already lean on AI to research, plan, and prepare. Tom is designed to orchestrate those moments into care—surfacing the Best Next Action, coordinating with the care team, and handing off to humans when the situation calls for judgment. That’s how we turn everyday AI habits into measurable clinical impact.
See Tom in action. Book a 30-minute walkthrough of how Tom coordinates between-visit support, surfaces Best Next Actions, and integrates into your workflows—without adding burden to clinicians. (Role-based form: health system leader, physician leader, product/IT.)
Research facilitation provided by Savvy Cooperative.