Friday, October 30, 2015

MSSA and MRSA: Both are dangerous!

Jessica Ericson and co-workers recently published a remarkable investigation of invasive Staphylococcus aureus infection in hospitalized infants. In it they describe a retrospective multicenter study of 348 NICUs in which a group of 3888 infants suffered invasive S. aureus infection between 1997 and 2012. They compare the demographics and mortality of infants with invasive MRSA and MSSA; determine the relative annual proportions of MSSA and MRSA; and calculate the risk of death after an invasive MSSA and MRSA infection. It's a fascinating study and I recommend reading it.

Among other results, they find that infant mortality following invasive MRSA and MSSA infection is essentially the same. Moreover, in their cohort of patients, MSSA was responsible for a larger burden of disease and death in infants than MRSA. Based on their findings, and consistent with previous studies, the authors recommend that
Measures to prevent S. aureus infection should include MSSA in addition to MRSA.
This is an important point. The goal of infection prevention is to protect patients; both MRSA and MSSA are deadly, so we should be mindful of each. Commonly, patients are screened for MRSA carriage only, and isolated and decolonized if found to be positive. Previous research suggests that it may be possible to reduce the incidence of both MRSA and MSSA infection by screening for S. aureus universally, and this latest study shows why this is critically important.

After drafting this post I realized that Mike Edmond, on the Controversies in Hospital Infection Prevention blog, had already written a great piece in connection with this paper. You should read the post. In it, he captures the issue powerfully in a single line: 
I often joke that I've never had a patient tell me that they don't want a MRSA infection, but they'll take an MSSA infection. 
Indeed, both pathogens deserve attention and respect, as has long been known.

(image source: Wikipedia)

Saturday, October 24, 2015

Rift Valley fever and the problem of forecasting
The notion that prediction is difficult, especially about the future, is absolutely true in the domain of infectious disease. Despite the difficulty we must try, and I think there's reason for hope, given recent studies utilizing powerful machine learning techniques and diverse data. There's much innovation being brought to the issue.

Recently, a topic I've written about in the past has appeared in the news: Forecasting of Rift Valley fever (RVF) in East Central Africa. Bernard Bett has written a thoughtful piece on the PLoS Translational Global Health blog, which I recommend reading, examining the gamut of public health tools and responses needed in order to combat RVF. He begins by framing the issue succinctly, 
Recent climate predictions suggest East Africa may be in line for an epidemic of Rift Valley fever -- an infectious disease which can hit people, their livestock and livelihoods, and national economies hard. Data from the Climate Prediction Centre and the International Research Institute for Climate and Society suggest there is a 99.9% chance there will be an El Niño occurrence this year, with a 90% chance it will last until March/April 2016. At least two of the most recent Rift Valley fever epidemics in East Africa -- those in 1997/98 and 2006/2007 -- were associated with El Niño weather patterns, with Kenya suffering losses amounting to US$32 million in the most recent. Given the strong predictions of an El Niño occurrence, and the established association between El Niño and Rift Valley fever risk, countries in the Horn of Africa need to start laying out measures to manage the developing risk. . . 
The health, economic, and social costs of this disease are well known, and there is a wealth of research establishing both RVF epidemiology and its strong ties to climate (including El Niño) and the environment. Nonetheless, despite remotely sensed (i.e., satellite-based) RVF forecasting, there was little early response undertaken in 2006-07. Peter Roeder described the situation in a 2007 ProMED post (archive 20070112.0164),
It is interesting, if rather disheartening, to watch another RVF epizootic emerge and evolve in eastern Africa and to note that it is such a close recapitulation of events that occurred in 1997/8 and decades before. It is a recapitulation not only with respect to disease evolution but also in terms of national and international preparedness—or lack of it. Those who followed ProMED in those days will be aware that the epizootic attracted intense international attention and was closely reported in postings, which contain much useful information. Despite seminal work on developing early warning systems based on remote sensing . . . it seems that the capacity to respond has not improved greatly in the high-risk countries in Africa. 
We are presently seeing the emergence of a very powerful El Niño, possibly one of the strongest in the historical record, and this was forecast in mid-August of this year. While such a climate forecast, especially when combined with other data, could reasonably be interpreted as a 2-3 month warning of the potential for RVF in parts of Africa, it's important to appreciate the complexity of acting on such information. As I wrote in 2012,
A recent, comprehensive set of case studies of the 2006–2007 outbreak in East Central Africa was published in the American Journal of Tropical Medicine and Hygiene (August 2010), and many of the nuances are described there. For example, current preparations of the Smithburn vaccine have a shelf life of approximately 4 years. Outbreaks in the Horn of Africa region occur aperiodically, with a mean of nearly 10 years between outbreaks. Veterinary health authorities cannot spend scarce resources on continually replenishing a stock of RVF vaccine when other needs are present continuously. Nor can manufacturers maintain large stocks that are likely to expire before sale. Thus, vaccine may not be available at any given time. Nonetheless, waiting until there is a need to manufacture vaccine is problematic.
In other words, although vaccination is a powerful strategy for protecting against RVF virus transmission, maintaining vaccine stocks isn't straightforward. Moreover, simply having vaccine available isn't enough: Effective and safe administration triggered by an early warning, such as the one described in the impressive study of Anyamba et al in 2009, is complicated. In the case of the 2006-07 outbreak, for example, by the time a warning was issued, early outbreak areas were already inundated by rains, making travel and delivery of supplies difficult. In fact, in some scenarios it may take up to 150 days from a RVF vaccine order until the successful acquisition of vaccine-associated herd immunity -- much greater than the few weeks of advance warning the state of the art can currently supply. (Note: There's a distinction between a statistical forecast for a specific disease and simply noting that the strongest El Niño in decades is going to mess with everything.)

If a disease forecast is to have impact, many factors must come into alignment, including the forecast supplying sufficient lead time, decision-makers having enough confidence in the forecast to act, and the existence of a public health infrastructure capable of supporting an effective (and potentially complex) response. These are important issues to keep in mind when thinking about surveillance and early warning, regardless of the disease and setting.

(image source: Wikipedia)

Thursday, July 9, 2015

Measles, vaccines, and the herd

The first confirmed measles death in the US since 2003 was recorded in Washington State recently, where a woman died from measles-associated pneumonia. According to a health department news release, she had an underlying condition and was taking medications that suppressed her immune system. People undergoing immunosuppressive therapy are at high risk of contracting infections and, if they develop infection, often do not exhibit the signs immunocompetent persons show. This woman is thought to have become infected at a medical clinic during a local outbreak; the etiology of her pneumonia wasn't recognized as measles until autopsy.

Measles is highly contagious (R0, the basic reproduction number of the pathogen, can be as high as 18) and there are hundreds of thousands, if not millions, of immunocompromised persons in the US who depend upon the immunity of those around them for protection. When a large fraction of the community possesses immunity to a pathogen, circulation of the pathogen becomes less intense. When the prevalence of immunity becomes high enough, it ceases to circulate. In this simple picture, if the fraction 1 - 1/R0 of a population can be made immune, and that fraction is maintained over time, then a pathogen can be eradicated.
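The threshold arithmetic can be sketched in a few lines (a toy illustration of the simple well-mixed-population picture; the R0 values are rough textbook figures, not from the studies discussed here):

```python
def herd_immunity_threshold(r0: float) -> float:
    """Fraction of a well-mixed population that must be immune
    for a pathogen to stop circulating: 1 - 1/R0."""
    return 1.0 - 1.0 / r0

# Rough textbook R0 values, for illustration only.
for pathogen, r0 in [("measles", 18.0), ("seasonal influenza", 1.5)]:
    print(f"{pathogen}: R0 = {r0} -> ~{herd_immunity_threshold(r0):.1%} immune")
```

For measles, with R0 near 18, the threshold works out to roughly 94%, which is why even modest pockets of non-immunity can sustain transmission.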

In reality, herd immunity is more complex than this. Many complications arise from imperfect vaccine immunity, population heterogeneity (including network effects), uneven vaccination, and those who opt not to receive vaccines. These complexities make it challenging, from a public health practice perspective, to protect populations with vaccines. Nonetheless, this woman's death illustrates how important it is to immunize as many people as possible: Doing so heightens protection of those vulnerable to vaccine-preventable infections.

This case has been reported within the context of anti-vaccination notions, or as I prefer to think of it, vaccine skepticism. Regardless of the terminology, there is one simple truth to the incident: She developed what proved to be a fatal infection because someone in her community was not immune to the measles virus. That seems needless when a safe and effective vaccine that conveys long-lived immunity is available. Hopefully laws like those recently enacted in California and Vermont will spread to other states and help to increase the prevalence of vaccine-associated immunity in communities throughout the US.

(image source: Wikipedia)

Saturday, June 13, 2015

MERS as (another) messenger of prevention

It's hard for me to know how to interpret the MERS situation in South Korea. At a high level, a recently recognized viral respiratory pathogen has traveled halfway around the world and is causing morbidity and mortality in a small section of an immunologically naive population. It appears to be associated with hospitals. What do we take away from this? Lessons will be learned when the event subsides and people study what happened, but to me, MERS reminds us that outbreaks of pathogens for which there are no vaccines or drug therapies underscore the importance of prevention.

When possible, preventing pathogens from physically reaching or entering a host by respiratory, percutaneous, alimentary, blood-borne, or other pathways is preferable to relying on pharmaceuticals. Drugs tend to be complex and costly to develop, can take a long time to enter the marketplace, and -- especially in the case of antibiotics and antivirals -- they can become obsolete over time. Moreover, drugs are often toxic to the patient. Prevention is applicable in situations when appropriate drugs don't exist (e.g., for newly emerged pathogens), when it isn't possible to administer drugs in a timely manner, or when patients cannot tolerate them.

Consider two anecdotes related to the spread of MERS virus in South Korean hospitals. As described by Choe Sang-Hun, it appears that the index patient in the South Korean event had "coughed and wheezed his way through four hospitals before officials figured out, nine days later, that he had something far more serious and contagious." Furthermore, ED wait times in Korea can be extraordinarily long by US standards. Another patient, who waited two-and-a-half days in the emergency department before a hospital bed became available, infected 55 additional individuals during that wait. Apparently, 2.5 days isn't an unusually long waiting time in some Seoul hospitals.

Applying effective prevention measures to patients suspected of infection is the only way of stopping the chain of transmission in such environments. Unfortunately, it is unclear how to achieve good infection control for MERS and a range of other pathogens. Eli Perencevich described the issue clearly, as usual, in the Controversies in Hospital Infection Prevention blog recently: 
. . . we don't actually know how to achieve good infection control for MERS and the other diseases he [Tom Frieden] mentioned [measles, DR-TB, SARS, Ebola]. If only we invested in studies to understand how to best implement PPE in these [hospital] settings. One could imagine improved PPE technology, refined PPE donning and doffing algorithms and enhanced environmental cleaning as potential targets for future studies examining optimal protection from MERS. Not coincidentally, many of these are the same targets that Mike, Dan and I mentioned in our Ebola+PPE editorial several months ago. If we invest in infection prevention technology and implementation research, our health care system will be safer regardless of the pathogen du jour.
And that's the point that MERS makes me think about. Yes, we need antimicrobials and vaccines that work against specific pathogens, of course we do, but developing such drugs is a major effort. Biochemical pathways must be understood, pathogen life histories and survival strategies must be elucidated, and the host response must be characterized, among many, many other things. Doesn't it make sense that research on pathogen-agnostic approaches to prevention, which don't require such specific and complex information, might be simpler and broadly applicable?

Investing in research on infection prevention approaches, and how to implement them sustainably in realistic clinical environments, would pay benefits far beyond helping to thwart the spread of exotic and newly emerged pathogens. We may learn how to better control and prevent the usual suspects of hospital-associated infection, which, after all, are responsible for a tremendous burden of disease day in and day out.

(image source: Wikipedia)

Wednesday, June 3, 2015

Vaccines, cancer, and science communication: Oh my!

Tara Haelle, writing for NPR recently, tells how a university press release, inaccurately entitled "Study explains how early childhood vaccination reduces leukemia risk", was covered widely in the press last month. The release attempted to explain newly published research carried out, in part, at UCSF. She describes how conversations with the senior author of the study failed to temper some questionable passages in the press release, and how follow-up with other researchers expert in vaccines helped to provide clarity. It's a good article and I recommend reading it. The bottom line, she suggests, is to always be skeptical of press releases.

That's certainly sage advice. I don't want to go into the details of this particular case; Haelle does that very competently, as do others, and it's easy enough to check the face validity of the claims of the release and author interview yourself via the SEER Website and FDA vaccine license time lines. Rather, I want to step back from the details and think about the episode more broadly.

This is a case of a group of scientists carrying out research that passed peer review and was published in a prestigious journal. When such a study is published, universities understandably want to make the presumed important information available to a broader audience. The titles and topics of many research articles probably won't draw the attention of casual readers, so universities have media relations teams that work with researchers to write press releases and help investigators interact with the press. One danger of this, and again it is understandable, is the potential for oversimplifying and overselling research in order to make it accessible and relevant to the public.

It can also lead to a misunderstanding of the process of discovery. Medical science is a fluid, dynamic endeavor in which knowledge emerges iteratively, often triggered by conflicting results. Rarely does one study show anything definitively. In fact, many findings described in peer-reviewed studies are refuted later by the findings of other peer-reviewed studies. It has been argued that most published research findings are false, and recent evidence suggests that an alarming number of published results aren't reproducible. It reminds one of a quote attributed to physicist Wolfgang Pauli,
I don't mind your thinking slowly; I mind your publishing faster than you think.
Regardless of publication rates (and the pressures leading to those rates), knowledge emerges as conflicts are resolved, which can take years or even decades. It's unclear whether this is widely understood by those consuming medical and scientific information via the popular press. We need to think about how to communicate both new scientific findings and the process of science to the public more effectively. 

(image source: SEER)

Tuesday, May 12, 2015

Intensive care from afar: Caregiver versus patient watcher

A recent NPR story by Michael Tomsic recounted the remarkable story of how the Carolinas HealthCare System monitors ICU patients in 10 of its hospitals from a remote "command center"-like facility. Several critical care specialists staff the center; nurses are present around the clock and doctors work nights. Command center staff also spend time at the hospitals they monitor.

The system began doing this roughly two years ago and has since found that the quiet atmosphere of the command center ("none of the bells and whistles going off that most ICUs need to alert nurses and doctors down the hall that they're needed") allows medical staff in the center to maintain a constant focus on patients. The approach seems to be working for the system: They've observed a higher patient volume, lower mortality rate, and decreased length of stay since opening the center (though, as the article describes, such improvement likely isn't due solely to the remote monitoring program).

The issue of alarm fatigue is recognized as an important patient safety issue, so the idea of placing a group of specialists outside the immediate patient environment for monitoring purposes has a strong rationale. What I found most interesting about the article, however, was revealed in remarks from two nurses interviewed. One observed that "There are things that I'm able to view here [in the command center] — trends that I'm able to view here — that I'm not able to view at the bedside", while another noted that since the command center staff has easy access to patient data, handoffs are better and issues are less likely to be missed.

Assuming that these ICUs are not fundamentally different from ICUs in other facilities, the story highlights an issue that is endemic far beyond this particular set of hospitals: the frequent failure to bring data to the bedside in an effective way. This is ironic, as the big data and IT revolution brags -- incessantly, it sometimes seems -- about delivering data and analytics to the point where they can be most useful. That isn't consistent with the remarks from healthcare workers in this article.

Is caregiving versus patient monitoring an either-or proposition? I doubt it, as I've seen data-driven intensive care delivered reliably over long periods of time. Rather, I think the question is how to make data actionable through delivery to the right people without disrupting their workflow. It's a question for all clinical environments beyond the ICU. We need to make more effective use of routine clinical data.

(image source: Wikipedia)

Saturday, May 9, 2015

Microbial ecology: Keeping one step ahead of the bad bugs

Two papers were published recently that apply notions of bacterial interference and competition rather elegantly. The first was a study by Dale Gerding et al on administering nontoxigenic Clostridium difficile spores to prevent recurrent C. diff infection. The study aimed to determine the safety, fecal colonization, recurrence rate, and optimal dosing schedule of nontoxigenic C. difficile, and the authors found that
Among patients with CDI who clinically recovered following treatment with metronidazole or vancomycin, oral administration of spores of NTCD-M3 was well tolerated and appeared to be safe. Nontoxigenic C. difficile strain M3 colonized the gastrointestinal tract and significantly reduced CDI recurrence. 
It's a fascinating study and I recommend reading it. In addition to contemplating this as a potential future treatment for recurrent CDI, it's intriguing to wonder if patients could have their GI tracts colonized by nontoxigenic C. diff prophylactically before receiving antibiotics associated with CDI.

The other study, by Alice Deasy et al, demonstrates how nasal inoculation with the commensal Neisseria lactamica inhibits carriage of N. meningitidis in young adults. N. lactamica is a commensal occupying the same ecological niche (the nasopharynx) as the pathogenic organism N. meningitidis, which is associated with epidemic meningitis. They observed a significant inhibition of meningococcal carriage in carriers of N. lactamica, which was attributed to displacement of existing meningococci and to inhibition of new acquisition. Their findings suggest N. lactamica as a potential "novel bacterial medicine to suppress meningococcal outbreaks". Again, I recommend reading the complete study.

The notion of exploiting microbial ecology is appealing for many reasons, including that it doesn't require developing intrinsically new pharmacologic compounds and that it may have no significant side effects. At the same time, it's important to remember previous trials employing bacterial interference, such as the deliberate colonization of newborn children with "low virulence" Staph. aureus, so that old missteps aren't repeated.

(image source: Wikipedia)