
In two Democratic Republic of the Congo facilities, an observer-led mobile health tool met its feasibility threshold for observing births, with mixed results for debriefing uptake.
LIVEBORN, a mobile health application designed to give real-time guidance and support debriefing during newborn resuscitation, was feasible to use for observing births in two facilities. In this pilot, 74% of births were observed with LIVEBORN at the facility using real-time guidance and 67% at the facility using debriefing, exceeding the study’s feasibility threshold of 50%. Usability was acceptable overall (System Usability Scale median score 68), but debriefing did not reach the study’s threshold for how many bag-mask ventilation cases were actually debriefed (42% vs a 50% target).
Quick summary
- What the study found: Using LIVEBORN to observe births was feasible at both sites; usability was adequate overall; real-time guidance met all pre-specified feasibility criteria; debriefing fell short on uptake for bag-mask ventilation cases.
- Why it matters: Feedback during and after resuscitation may improve bag-mask ventilation—a core practice for reducing newborn deaths from respiratory depression—especially where bedside coaching is hard to sustain.
- What to be careful about: LIVEBORN depends on an observer entering accurate real-time data; staffing constraints and disagreement with some prompts (especially “ventilate” vs continued suctioning) can limit consistent use.
What was found
The journal article "Mobile health supported real-time guidance and debriefing for newborn resuscitation: A pilot study of LIVEBORN feedback" tested whether LIVEBORN could be incorporated into routine care in two health facilities in the Democratic Republic of the Congo. LIVEBORN works by having an observer enter data, in real time, on what the provider is doing and how the newborn is responding. The system then provides feedback in two modes: real-time guidance during the resuscitation, and debriefing prompts after the resuscitation.
The main question was feasibility: could staff realistically observe enough births with LIVEBORN for the approach to function? The study defined feasibility as observing at least 50% of births with LIVEBORN. During pilot testing, both facilities cleared that bar: 74% of births were observed at the facility allocated to real-time guidance, and 67% at the facility allocated to debriefing.
The team also measured usability using the System Usability Scale (a standard questionnaire that yields a score indicating how usable a tool feels to users). Overall usability met the study’s threshold: median System Usability Scale score was 68 (with quartiles 65 and 78). Usability differed by facility: the real-time guidance facility had a median score of 73, while the debriefing facility had a median score of 65, which fell below the study’s pre-set usability threshold.
For real-time guidance, another implementation measure—the Feasibility of Intervention Measure (a brief scale capturing whether an intervention seems doable in real settings)—met the study’s threshold, with a median score of 16. Debriefing faced a practical bottleneck: only 3 of 7 bag-mask ventilation cases were debriefed (42%), below the 50% threshold the study set for feasibility of debriefing uptake. Debriefing also occurred in six cases without bag-mask ventilation, with time to first breath ranging from 49 to 163 seconds after birth.
What it means
The key signal is operational: this pilot suggests it is possible to integrate an observer-driven mobile feedback tool into busy delivery settings and capture data for a majority of births. That matters because the study frames bag-mask ventilation as a central “basic resuscitation” practice that reduces newborn deaths from respiratory depression. If a system can reliably support better timing and quality of these actions—especially early ventilation—it has a plausible pathway to impact.
But this study is not an effectiveness test of survival or clinical outcomes. What it shows is that a feedback workflow can be built with frontline staff and run at meaningful coverage, at least during a pilot period, even in a low-resource environment. It also shows where a promising tool can stall: debriefing is easier to endorse than to execute consistently.
The midwives’ comments add an adoption reality check. They reported that audio prompts in real-time guidance felt supportive rather than punitive and did not feel distracting. At the same time, midwives sometimes disagreed with guidance to ventilate—particularly when the system urged them to stop suctioning and switch to ventilation, and they believed additional suctioning would lead to a cry.
Where it fits
In clinical learning terms, LIVEBORN combines two well-known mechanisms. Real-time guidance functions like “just-in-time” cognitive support—short prompts that reduce reliance on memory under stress. Debriefing functions like structured reflection: a facilitated review soon after an event, designed to help teams notice gaps, correct habits, and build shared standards.
In psychology, both elements can strengthen skill acquisition (building a reliable routine through practice and feedback) and reduce cognitive load (the mental burden of tracking steps while also managing an emergency). The study’s focus group discussions illustrate this: midwives described leaving “old practices” and becoming more orderly, including greater attention to the “golden minute” (a resuscitation concept emphasizing speed in establishing breathing within the first minute after birth).
The results also reflect a common implementation pattern: tools are not adopted just because they are useful; they are adopted when the workflow is workable. Here, the system hinges on having someone available to observe and enter data accurately. When staffing is thin, feasibility is less about motivation and more about logistics.
How to use it
If you are considering a similar tool or workflow, treat the observer system as the product, not an accessory. In this pilot, insufficient staffing to observe births was the primary barrier at both facilities. The teams addressed it by broadening observation from "high-risk births" to all births, and by widening who could serve as observers.
At the real-time guidance facility, the final approach used a three-pronged observer pool: midwives with experience using smartphones or touchscreen tablets, environmental health services staff, and one full-time research staff member. A call system was developed to alert environmental health services staff when a birth was impending. The workflow also specified who kept equipment charged and ready: NeoBeat (a newborn heart rate meter used during resuscitation in this workflow) was cleaned with the other resuscitation equipment and then placed on its charger; the tablet was kept in the acute care room, with the acute care nurse ensuring it was charged; and before every birth, the resuscitator prepared NeoBeat while the observer prepared the LIVEBORN tablet.
At the debriefing facility, the team ultimately limited observers to midwives plus one research staff member, after testing trainees and finding that midwives preferred to observe and coach trainees simultaneously. They also learned that a paper checklist to identify high-risk births was used inconsistently and felt burdensome, so they dropped it and focused on observing all births. The tablet-storage solution was different: a wooden box with a padlock near the NeoBeats, attached to a metal trolley so the observer could rest the tablet during observations, with the head nurse midwife ensuring the tablet was charged.
For debriefing, the operational lesson is to design for “minimum viable debrief.” In this pilot, debriefings were conducted during morning meetings led by the head nurse midwife. Even with that structure, the debriefing feasibility threshold for bag-mask ventilation cases was not met, and each midwife attended about one debriefing per month on average. If debriefing is the goal, the system likely needs explicit triggers, protected time, and a clear definition of which cases must be reviewed.
Limits & what we still don’t know
This was a pilot focused on feasibility and usability, not clinical effectiveness. The findings do not establish that LIVEBORN improves newborn survival or reduces complications. They do show that observation coverage can reach high levels in these settings, but they also show that debriefing uptake can lag even when staff value it.
The approach is dependent on an observer entering accurate information about both the newborn’s condition (especially breathing) and provider actions. The study notes that training observers was intensive but essential. That dependence creates scale-up risk: when data are incomplete or delayed, feedback can become less trustworthy, and trust is central to whether clinicians follow prompts under pressure.
There were also human-factors barriers: midwives described a learning curve with the tablet, including touchpad issues and unfamiliarity, with improvement over time through practice and peer tips. Clinician disagreement with some prompts—especially around switching from suctioning to ventilation—signals a potential mismatch between algorithm timing and clinician judgment in specific scenarios, or a need for clearer training on when to prioritize ventilation.
Finally, debriefing may be particularly sensitive to workplace culture. Midwives described feeling embarrassed or sad about mistakes in front of colleagues and managers, while also feeling encouraged to improve. That combination can either build a “culture of improvement” or shut people down, depending on facilitation quality and psychological safety (a team climate where people can acknowledge errors without fear of humiliation).
Closing takeaway
This pilot shows that LIVEBORN can be integrated into clinical practice at meaningful coverage, with overall acceptable usability, when teams co-design practical workflows for charging, storage, and—most critically—observation staffing. Real-time guidance met pre-specified feasibility criteria, while debriefing fell short on consistent follow-through for bag-mask ventilation cases despite being valued by midwives. The next question is effectiveness, but the immediate operational message is clear: if you can’t staff the observer role reliably, the feedback system will struggle no matter how good the software is.
Data in this article is provided by PLOS.
Related Articles
- Nasal temperature drops during stress, especially social speech stress
- High-flow nasal therapy costs more than low-flow oxygen in COAST
- Self-centered reflection increased sense of agency; selfless reflection decreased it
- Autistic adults report lasting mental health benefits from psychedelics
- Working memory links broadly to preadolescent psychopathology in network analysis
- Teachers on the Front Line of Bullying: What Drives Action and What Gets in the Way
- Turning Heartbreak Into a Story: How Writing About a Breakup Changes What You Remember and Expect Next
- Parents of autistic children reported heavy strain and relied on coping rituals
- When ADHD Care Works, It’s Usually Because the System Finally Does
- When Teachers Become the Front Line for Child Mental Health
- Screening in a Single PE Class: FUNMOVES Brings Early Motor-Skills Checks to Spanish Schools
- When Campus Life Feels Too Much: How Creativity and Mind-Body Practices Can Strengthen Well-Being
- When the Brain Stops Staying in Its Lane: What LSD Reveals About Flexibility and Synchrony