
Case Study — Global Health Systems

OpenELIS Usability Study

Organization
OpenELIS × University of Washington
Timeline
Add timeline
Role
Add role
Tags
Healthcare · Open Source · Domain Tag
Hero Placeholder

Mission-critical workflows under real lab pressure.

Reserved for a strong top-of-page artifact that gives the lab environment and software stakes immediate visual context.

Hero media slot for a future lab workflow image or interface capture.
01

The Brief

OpenELIS serves high-volume laboratories in lower-resource settings, where software has to support accuracy, speed, and training under real operational pressure. The study focused on how the system performed across geographically distant deployments, and on how usability issues could compound when infrastructure, staffing, and workflow conditions varied from site to site.

"Researching a lab system across four countries meant paying attention to local conditions without losing sight of the shared workflow problems underneath them."

Study framing

Haiti · Côte d'Ivoire · Mauritius · Vietnam

Fig 1 — OpenELIS deployment locations recreated in an OpenStreetMap view.

02

Research & Discovery

Use this section for field research, task analysis, and observations of how the software fits into lab operations.

Field Interviews
Add participant groups and what you learned from them (n=xx)

Task Analysis
Capture key workflows, bottlenecks, and time-sensitive interactions (n=xx)

Usability Sessions
Describe the tasks tested and the failures or friction you found (n=xx)

Workflow artifact

Process Map

Pain point

Critical Friction

Fig 2 — Replace with research artifacts

03

Strategic Framing

State the insight that helped prioritize the highest-impact usability issues.

Core Insight

Add the insight that reframed the usability problem in the context of real lab work.

Add design or research principles like clarity under pressure, error recovery, training burden, or localization.

04

Explorations

Show the usability recommendations, interface directions, or workflow changes you considered.

Direction A

Workflow Revision

Direction B

Interface Revision

Rejected

Dead End

Fig 3 — Replace with explorations

05

The Solution

Use this section for the proposed interaction model, recommendations, and usability improvements.

Final Recommendation
Add screen, flow, or service blueprint

Before / after

Module One

Critical interaction

Module Two

Fig 4 — Replace with solution artifacts

06

Outcomes & Impact

Replace with time-on-task, error reduction, learning curve, or implementation impact metrics.

-00%
Error reduction metric
+00%
Task success or speed metric
00 sites
Program reach or deployment metric
07

Reflection

Even though this began as master's-level work, the product context and usability sessions were real, which made the study feel consequential from the start. The biggest lesson was that rigorous research practice matters even more when the software supports healthcare workflows across multiple countries, languages, and levels of familiarity with the system.

What I'd do differently

I would run the pilot earlier. An earlier pilot, ideally with international participants, would have helped us refine task wording, post-task questions, timing, and the cultural shape of feedback before the full study began. I also would have spent more time in the test server up front to catch small inconsistencies in the supporting materials before moderation started.

What I'd defend

A structured but human interview script made the sessions stronger. It created consistency across moderators while still leaving room to probe, clarify, and respond naturally. Pairing moderation with dedicated note-taking also made the data richer and the sessions easier to run under pressure.

What changed for me

This study deepened my appreciation for well-documented qualitative and quantitative analysis. Standardizing responses, organizing findings for tabulation, and ranking issues by frequency and severity made the recommendations much more credible and actionable. It also reinforced how much careful recruitment matters when the user base spans different countries and experience levels.
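As a purely illustrative sketch of that kind of tabulation (not the study's actual analysis, and with made-up issue names, counts, and ratings), ranking findings by frequency and severity can be as simple as the following Python:

# Hypothetical example: ranking usability findings by frequency x severity.
# Issue names, counts, and severity ratings are illustrative, not study data.
from dataclasses import dataclass

@dataclass
class Finding:
    issue: str       # short description of the usability issue
    frequency: int   # number of participants who hit the issue
    severity: int    # moderator-assigned severity, 1 (minor) to 4 (blocking)

findings = [
    Finding("Sample status hard to locate", 6, 3),
    Finding("Ambiguous error message on save", 4, 4),
    Finding("Date format mismatch in results entry", 2, 2),
]

# Sort so the highest-impact issues surface first.
for f in sorted(findings, key=lambda f: f.frequency * f.severity, reverse=True):
    print(f"{f.frequency * f.severity:>3}  {f.issue} (n={f.frequency}, sev={f.severity})")

A simple score like this is only a starting point; the credibility came from documenting how each response was standardized and counted before any ranking happened.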

Why it mattered

I was especially motivated by the real-world impact of the work. OpenELIS supports laboratories and clinics involved in testing for conditions like COVID-19 and HIV, so usability improvements are not abstract. They affect day-to-day work for the people relying on the system, and ultimately the people those labs serve.

The project also reinforced a more democratic way of working: clear roles, shared research standards, and strong facilitation norms make collaborative studies more respectful, more efficient, and more useful, especially in remote and cross-cultural settings.