Automated surveillance was 100 percent accurate, while experts relying on manual recording and interpretation of individual patient data made errors. The automated system is also faster and can be programmed to run checks at any time.
An automated system for identifying patients at risk for complications associated with the use of mechanical ventilators was significantly more accurate than traditional surveillance methods that rely on manual recording and interpretation of individual patient data.
In a paper published this month, researchers at Massachusetts General Hospital (MGH) report that an algorithm developed by the hospital’s Division of Infectious Diseases Infection Control Unit and the Clinical Data Animation Center (CDAC) was 100 percent accurate in identifying at-risk patients when provided with necessary data.
“Ventilator-associated pneumonia is a very serious problem that is estimated to develop in up to half the patients receiving mechanical ventilator support,” said Brandon Westover of the Department of Neurology, director of CDAC and co-senior author. “Many patients die each year from ventilator-associated pneumonia, which can be prevented by following good patient care practices, such as keeping the head of the bed elevated and taking measures to prevent the growth of harmful bacteria in patients’ airways.”
Traditional surveillance of patients receiving mechanical ventilation involves manual recording of ventilator settings every 12 hours, usually by a respiratory therapist, as those settings are adjusted throughout the day. The settings, which reflect the pressure required to keep a patient’s lungs open at the end of a breath (positive end-expiratory pressure, or PEEP) and the percentage of oxygen being delivered to the patient (FiO2), are reviewed by an infection control practitioner for signs that indicate possible ventilator-associated pneumonia.
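The paper does not publish the MGH algorithm itself, but the kind of check an automated system applies to these settings can be sketched. The rule below is a simplified illustration modeled on the general shape of the CDC/NHSN ventilator-associated condition criterion, which flags a sustained rise in daily minimum PEEP or FiO2 after a period of stability; the function name, thresholds as coded, and baseline handling are assumptions for illustration, not the hospital's implementation.

```python
# Illustrative rule-based detection of a possible ventilator-associated
# event (VAE). NOT the MGH algorithm; a simplified sketch loosely modeled
# on CDC/NHSN-style criteria: after roughly two days of stable or
# improving daily minimum settings, a rise sustained for at least two
# days, of >= 3 cmH2O in daily minimum PEEP or >= 0.20 in daily minimum
# FiO2, flags a possible event.

def detect_vae(daily_min_peep, daily_min_fio2):
    """Return the index of the first day that begins a possible VAE,
    or None. Inputs are parallel lists, one entry per calendar day:
    daily minimum PEEP (cmH2O) and daily minimum FiO2 (fraction)."""
    n = len(daily_min_peep)
    for day in range(2, n - 1):
        # Baseline taken from the two preceding days (approximates the
        # required period of stability or improvement).
        peep_base = min(daily_min_peep[day - 2], daily_min_peep[day - 1])
        fio2_base = min(daily_min_fio2[day - 2], daily_min_fio2[day - 1])
        # The worsening must hold for at least two consecutive days.
        peep_rise = all(daily_min_peep[d] >= peep_base + 3 for d in (day, day + 1))
        fio2_rise = all(daily_min_fio2[d] >= fio2_base + 0.20 for d in (day, day + 1))
        if peep_rise or fio2_rise:
            return day
    return None
```

For example, a patient whose daily minimum PEEP runs 5, 5, 5 and then jumps to 9 for two days would be flagged on the first day of the sustained rise, while a patient with flat settings would not.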
“In our study, manual surveillance made many more errors than automated surveillance,” said Erica Shenoy of the Division of Infectious Diseases, the Infection Control Unit and hospital epidemiology lead for CDAC, and lead author of the report, published in Infection Control & Hospital Epidemiology.
The errors included false positives, reporting cases that on review did not meet criteria for what are called ventilator-associated events; misclassifications, reporting an event as more or less serious than it really was; and missed cases, failing to detect and report cases that, on closer inspection, actually met criteria.
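These three error types amount to comparing each surveillance report against an adjudicated reference review. A minimal sketch of that tally, with hypothetical patient data and tier labels (the study's actual adjudication process is not detailed here):

```python
# Illustrative scoring of surveillance reports against adjudicated
# review, counting the three error types described above. Data and
# tier names are hypothetical.

def score_surveillance(reported, adjudicated):
    """Both arguments map patient ID -> reported event tier, with
    None meaning no event. Returns (correct_count, error_counts)."""
    errors = {"false_positive": 0, "misclassified": 0, "missed": 0}
    correct = 0
    for pid, truth in adjudicated.items():
        rep = reported.get(pid)
        if rep == truth:
            correct += 1
        elif truth is None:
            # Reported an event that review found did not meet criteria.
            errors["false_positive"] += 1
        elif rep is None:
            # Review found an event that surveillance failed to report.
            errors["missed"] += 1
        else:
            # An event was reported, but at the wrong severity tier.
            errors["misclassified"] += 1
    return correct, errors
```

For instance, reporting a "red" event that review adjudicates as "orange" counts as a misclassification, while reporting "yellow" for a patient review clears entirely counts as a false positive.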
“In contrast, so long as the necessary electronic data were available, the automated method performed perfectly,” Shenoy said.
Initial testing and debugging of the automated system were carried out from January to March of 2015 in four MGH intensive care units. During that time 1,325 patients were admitted, 479 of whom received ventilator support. A retrospective analysis comparing manual versus automated surveillance of data gathered from patients cared for during this development period showed the automated system was 100 percent accurate in detecting ventilator-associated events, distinguishing patients with such events from those without, and predicting the development of ventilator-associated pneumonia.
In contrast, the accuracy of manual surveillance for each of three staged measures, corresponding to yellow, orange and red levels of risk that a patient would develop pneumonia, was 40 percent, 89 percent and 70 percent, respectively.
A validation study to further test the algorithm was conducted using data from a similar three-month period in 2016, during which 1,234 patients were admitted to the ICUs, 431 of whom received ventilator support. During that period, manual surveillance produced accuracies of 71 percent, 98 percent and 87 percent, while the automated system was 85 percent, 99 percent and 100 percent accurate.
The drop-off in accuracy of the automated system during the validation period reflects a temporary interruption of data availability while software was being upgraded, and the researchers subsequently developed a monitoring system to alert staff to any future interruptions.
“An automated surveillance system could relieve the manual effort of large-scale surveillance, freeing up more time for clinicians to focus on infection prevention,” Westover said. “Automated surveillance is also much faster than manual surveillance and can be programmed to run as often as desired, which opens the way to using it for clinical monitoring, not just retrospective surveillance.”