Ranson’s Criteria: For the History Books

Acute pancreatitis – a common indication for hospitalization in the US – is a feared complication of alcohol use, gallstones, abdominal trauma, steroids, mumps, elevated triglyceride and calcium levels, ERCP, and even scorpion stings. Up to 25% of patients with acute pancreatitis will develop severe acute pancreatitis. It’s a tricky clinical situation to manage because the severity of the disease is difficult to estimate early on; yet distinguishing mild from severe disease is critical, as mortality rates are 1-2% for mild pancreatitis and up to 17% for severe cases. Prior to this study, physicians relied purely on clinical judgment to triage patients to the floor or the ICU, a method known to underestimate the severity of the disease. Ranson et al provided 11 criteria – 5 assessed at admission and the other 6 at 48 hours – to help identify the most severe cases, with the goal of aggressively treating these patients in the ICU or the OR.
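For readers who like to see the tally spelled out, here is a minimal sketch of how the 11 criteria add up to a score. The cutoffs below are the commonly taught thresholds (for non-gallstone pancreatitis) rather than values quoted from the paper itself, and the code is illustrative only.

```python
# Illustrative tally of Ranson's 11 criteria; cutoffs are the commonly taught
# thresholds, not reproduced from the original article.

ADMISSION_CRITERIA = {
    "age_years":     lambda v: v > 55,
    "wbc_per_mm3":   lambda v: v > 16000,
    "glucose_mg_dl": lambda v: v > 200,
    "ldh_iu_l":      lambda v: v > 350,
    "ast_iu_l":      lambda v: v > 250,
}

CRITERIA_48H = {
    "hct_drop_percent":      lambda v: v > 10,
    "bun_rise_mg_dl":        lambda v: v > 5,
    "calcium_mg_dl":         lambda v: v < 8,
    "pao2_mmhg":             lambda v: v < 60,
    "base_deficit_meq_l":    lambda v: v > 4,
    "fluid_sequestration_l": lambda v: v > 6,
}

def ranson_score(values: dict) -> int:
    """Count how many of the 11 criteria are met; missing values count as 0."""
    score = 0
    for criteria in (ADMISSION_CRITERIA, CRITERIA_48H):
        for name, is_positive in criteria.items():
            if name in values and is_positive(values[name]):
                score += 1
    return score

# Example: an older patient with a high WBC and a falling hematocrit scores 3.
print(ranson_score({"age_years": 70, "wbc_per_mm3": 18000, "hct_drop_percent": 12}))
```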

The study suffered from many shortcomings, which complicated data analysis. The data were likely skewed by selection and observer bias, the small number of patients enrolled, and the variety of treatment methods employed. For example, 21 patients underwent abdominal exploration within 7 days of admission. Another 10 patients were randomized to early operative or non-operative management; of those, all but one spent at least 8 days in the ICU. A third, non-randomized group of 17 patients was managed by early operation within 48 hours of diagnosis. Operations varied widely, but the existing standard “recommended early laparotomy with cholecystostomy, gastrostomy, [jejunostomy] and pancreatic drainage in patients with severe acute pancreatitis”. With all of these variables, it becomes impossible to assess outcomes by treatment modality – the power within each group, and for the study as a whole, is greatly reduced. Nonetheless, some of the conclusions regarding blood loss and fluid depletion were accurate. Importantly, the authors noted that a worsening BUN in the face of aggressive fluid replacement was a more sensitive index of survival than the BUN at admission.

At the time, the 11 criteria now known as Ranson’s Criteria were the best available tool for predicting severity. However, more recent studies have demonstrated that Ranson’s criteria, taken as an aggregate, are a relatively poor predictor of disease severity. Today we have at our disposal a number of scoring systems that have been shown to be better prognosticators than Ranson’s Criteria. One, the APACHE II score, was developed for critically ill ICU patients. Modern guidelines from the American Gastroenterological Association recommend the APACHE II scoring system because of its good negative predictive value for severe acute pancreatitis. Still, it remains difficult to accurately predict outcomes and severity, although early aggressive management and diagnostic modalities not available to the authors at the time have improved survival.

The complexity of the interactions among the numerous risk factors that ultimately lead to an attack of acute pancreatitis almost precludes a simple scoring system. It seems that Ranson’s criteria are now best used during hospital rounds, where medical students and residents can rapidly regurgitate them. It is our hope that trainees recognize this study for its historic value and instead employ APACHE II to guide early management.

Ranson JH, Rifkind KM, Roses DF, Fink SD, Eng K, Spencer FC. Prognostic signs and the role of operative management in acute pancreatitis. Surg Gynecol Obstet. 1974; 139:69.

(Unfortunately there isn’t even an abstract of the original article available online. For those at UTSW, I can e-mail you a photocopy of the article since it isn’t available online through the library)

ISIS-2: Managing an Acute MI

Got chest pain? Then pop some aspirin and head to the ER. This was, in a nutshell, the conclusion of the Second International Study of Infarct Survival (ISIS-2). 1.5 million Americans suffer a heart attack every year, and many are treated in the ER with morphine, oxygen, nitrates, aspirin and beta-blockers. Some are taken to the cath lab. Part of this methodical management stems from the results of ISIS-2. Prior to ISIS-2, there was exactly one trial that examined the use of aspirin, an anti-platelet agent, acutely during a myocardial infarction (MI). (In that study, one dose of aspirin was given for a suspected MI, and mortality was assessed at one month. The single dose conferred no mortality benefit, and the authors concluded that aspirin was not beneficial in the acute setting.) The focus on streptokinase in ISIS-2 was less groundbreaking, as there were a number of other concurrent trials studying thrombolytic therapy for MIs. Still, ISIS-2, in conjunction with other randomized clinical trials (GISSI, ISAM, etc.), quickly established a time frame of 3 hours for thrombolytic therapy, giving rise to the phrase “time is muscle”. Today, with rapid triage in the ED and improved times to percutaneous coronary intervention (PCI), we know it as the 90-minute “door to balloon” target.

The methodology was consistent throughout the trial. Eligibility criteria were kept straightforward to encourage participation across countries, and ultimately 17,187 patients were recruited. Patients were randomized twice, once for aspirin and once for streptokinase, in a 2x2 factorial design. This created 4 distinct groups: streptokinase infusion, aspirin 160mg, streptokinase + aspirin, or placebo infusion and tablets. Participating physicians were encouraged to continue all other aspects of patient care as they saw fit, though they were required to report the intention to use any anticoagulation in addition to the trial medications. At the time of publication, discharge information on 204 patients (1.1%) was not available. Otherwise follow-up was strong, with 97% follow-up at 5 weeks after discharge and a median follow-up of 15 months.
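To make the factorial design concrete, here is a minimal sketch – assuming simple independent coin-flip randomization, not the trial’s actual allocation machinery – of how two separate randomizations yield the four arms described above.

```python
import random

def randomize() -> dict:
    """Assign a patient independently to streptokinase-or-placebo and to
    aspirin-or-placebo, yielding the four factorial arms described above."""
    return {
        "streptokinase": random.random() < 0.5,  # False -> placebo infusion
        "aspirin": random.random() < 0.5,        # False -> placebo tablets
    }

# Roughly a quarter of the 17,187 patients land in each of the four groups.
counts = {}
for _ in range(17187):
    arm = randomize()
    key = (arm["streptokinase"], arm["aspirin"])
    counts[key] = counts.get(key, 0) + 1
print(counts)
```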

Both 5-week mortality and 24-month mortality were analyzed. Aspirin afforded a 23% reduction in the odds of death when given within 24 hours of the onset of chest pain and continued for a month. This translates into 25 deaths avoided per 1000 patients treated (a number needed to treat, or NNT, of 40), along with 15 non-fatal cardiovascular events avoided. The effect was further enhanced if aspirin was given within 4 hours of the onset of pain; mortality benefits were less pronounced if it was given later. When both fatal and non-fatal events in the years following a myocardial infarction are considered, the number needed to treat is far lower than 40. This survival benefit was independent of the survival gains seen with streptokinase therapy. The combination of thrombolytic and anti-platelet therapy, administered within 4 hours of onset of chest pain, led to a 40-50% reduction in the odds of death.
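The number-needed-to-treat arithmetic is worth seeing once; a quick back-of-the-envelope check using the figures quoted above:

```python
# Back-of-the-envelope NNT from the aspirin figures quoted above.
deaths_avoided_per_1000 = 25
absolute_risk_reduction = deaths_avoided_per_1000 / 1000   # 0.025
nnt = 1 / absolute_risk_reduction                          # 40 patients treated per death avoided
print(f"ARR = {absolute_risk_reduction:.1%}, NNT = {nnt:.0f}")
```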

The administration of aspirin in the setting of an acute MI can substantially alter the outcome. Moreover, the gains in survival over the first few weeks persist well into the future with small daily doses of aspirin. These benefits, definitively established by ISIS-2, cannot be overstated. Simply put, for the 1.5 million MIs that occur in the US every year, taking a drug that is already found in nearly all households can prevent tens of thousands of deaths.

ISIS-2 Collaborative Group. Randomised trial of intravenous streptokinase, oral aspirin, both, or neither among 17,187 cases of suspected acute myocardial infarction: ISIS-2. Lancet. 1988 Aug 13;2(8607):349-60.

(for those at UTSW, I can e-mail you a photocopy of the article since it isn’t available online through the library)

Albumin or Normal Saline? Either way you’re SAFE

“A Comparison of Albumin and Saline for Fluid Resuscitation in the Intensive Care Unit” – The SAFE Study Investigators, The New England Journal of Medicine, May 27, 2004

Introduction

One of the most powerful acute interventions in medicine today is fluid resuscitation. With adequate administration of fluids, it is possible to rapidly increase perfusion to all vital organs, salvaging vast amounts of nutrient- and oxygen-deprived tissue. The fluids administered are simple, sterile, water-based solutions that do not carry the adverse effects or complications of blood products or other medications. Therefore, the initial response to patients in extremis – those who require immediate and acute care – usually includes obtaining vascular access and infusing large amounts of fluid.

Although there are many types of fluids, two major categories represent two physiologic approaches to fluid management. Crystalloid solutions are solutions of water and small molecules – small enough to leak out of capillaries and equilibrate with the extracellular fluid that surrounds cells. Colloid solutions are solutions of water and large molecules – large enough that they do not cross intact vascular membranes (they do not leak out of healthy blood vessels). Among other differences, colloid solutions tend to hold water inside the vessels better, but they can increase intravascular viscosity, predisposing to high-viscosity, low-flow states. Prior to this publication, small, conflicting studies had shown both benefit and lack of benefit from choosing one fluid type or the other. The SAFE investigators settled this dispute with a large (~7000 patients), double-blinded clinical trial that compared fluid management with 0.9% Normal Saline (NS, a crystalloid) against 4% Albumin (a colloid) in Intensive Care Unit (ICU) patients, with the primary outcome being all-cause mortality at 28 days.

Results

After initial fluid resuscitation with Albumin or NS, there was no difference in 28-day mortality among all patients admitted to the ICU. Patients given NS tended to receive more fluid initially, resulting in a greater net positive fluid balance than those who received Albumin. However, even when broken down by common presenting illnesses – sepsis, trauma, or Acute Respiratory Distress Syndrome (ARDS) – mortality did not differ between Albumin and NS in any individual group. In fact, the only significant result this study produced was that NS administration was associated with lower mortality than Albumin in trauma patients with brain injury, a very small subgroup of the study population.

Why We Do What We Do

We give NS. We give Albumin. The SAFE study showed that the two main fluid categories, and the two most popular fluids, are both relatively safe. When administered to ICU patients – those least able to defend themselves against physiologic challenges – neither fluid proved inferior or superior. So was this study worthless because it did not identify a single solution and change or guide medical practice? I contend that strong, robust studies such as the SAFE study that do not find significant differences actually strengthen medical practice as a whole. Knowing these results gives doctors both fluid options when treating a patient. More importantly, doctors can use physiologic reasoning, derived from basic scientific principles, to tailor the choice of fluid and management to the presenting patient.

Large studies such as SAFE are designed to adequately test and compare interventions. However, these studies are not intended to dictate prognosis for the patients. The data does not imply that one patient resuscitated with albumin will have the same outcome as if he had been treated with NS. Each patient is different, and giving the clinician the opportunity to apply his knowledge and training along with the guiding clinical evidence is how optimal care is provided.

Finfer S, Bellomo R, Boyce N, French J, Myburgh J, Norton R; SAFE Study Investigators. A comparison of albumin and saline for fluid resuscitation in the intensive care unit. N Engl J Med. 2004 May 27;350(22):2247-56. PubMed PMID: 15163774. http://www.ncbi.nlm.nih.gov/pubmed/15163774

Smoking and Lung Cancer

As early as 1912, scientists had begun to suspect smoking as a factor in the increased rates of death from lung cancer, but strong retrospective and prospective studies were unavailable. Through the 1940s, prominent scientific figures continued to blame everything from improved cancer detection rates to tarred roads to industrial plant fumes as the cause of increased rates of lung cancer. While the association seems obvious to us today, there was a paucity of data to support it at that time. The ubiquity of tobacco didn’t help either. During the first half of the 20th century, nearly 80% of American men noted some amount of tobacco use. Numerous retrospective studies emerged in 1950, but there was no trump card.

Doll and Hill’s seminal study took a broad-based, prospective approach to the issue. In 1951, a simple survey asking for a brief smoking history was sent to every physician practicing in the UK. Of the replies received, men under 35 and all women were excluded from the analysis, leaving approximately 24,000 men aged 35 years and above. In 1954, Doll and Hill used national death records to determine how many of these physicians had died and of what cause. All 36 men who died of lung cancer were smokers. The death rate was 0.48 per 1000 for those smoking approximately 1 g of tobacco daily and 1.14 per 1000 for those smoking more than 25 g daily. Moreover, the risk of lung cancer rose steadily with the amount smoked.
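A quick back-of-the-envelope check of those quoted rates shows heavy smokers dying of lung cancer at roughly 2.4 times the rate of light smokers:

```python
# Death rates per 1000 quoted above for light (~1 g/day) and heavy (>25 g/day) smokers.
rate_light = 0.48 / 1000
rate_heavy = 1.14 / 1000
print(f"Rate ratio: {rate_heavy / rate_light:.2f}")  # ~2.4x higher among heavy smokers
```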

The simplicity of their study proved to be its genius. There were no advanced statistics – all of the analysis could be done on a basic calculator. The correlation between tobacco and lung cancer could not have been made more obvious. Unfortunately, the results did not lead to rapid policy changes. A joint advertising blitz by tobacco companies left the public unaware of, or unconvinced by, the harmful effects of tobacco for a few more decades. Still, the evidence was undeniable, and the scientific community coalesced around this body of data, now widely regarded as the turning point in the war against tobacco.

Doll R, Hill AB. The mortality of doctors in relation to their smoking habits: a preliminary report. Br Med J. 1954;1(4877):1451-5.

Transfuse at 7 – The TRICC Trial

“A Multicenter, Randomized, Controlled Clinical Trial of Transfusion Requirements in Critical Care” – Transfusion Requirements in Critical Care Investigators, Canadian Critical Care Trials Group; February 11, 1999, The New England Journal of Medicine

Introduction

Ask any physician when he initiates blood transfusions and you will get many different answers. The most appropriate one (and the one medical students should memorize for their wards services) is when it matters – when the patient has lost enough red blood cells (RBCs) to cause symptoms. However, in many situations the symptoms of RBC loss or destruction are confounded by numerous other medical conditions. Patients in the Intensive Care Unit (ICU) are usually the most difficult to assess because of the severity and complexity of the illnesses they present with. For this reason, this Canadian group attempted to set a standard threshold for initiating blood transfusions based on a simple, routine laboratory test – the hemoglobin concentration (Hgb). The Hgb measures the concentration of the oxygen-carrying molecule in RBCs, and normal adult values run from roughly 13-15 g/dL.

Blood transfusions save lives. In the most general sense, if your blood cannot carry oxygen, then there is no point in breathing it in; if your Hgb drops too low, it is just as bad as suffocating. However, blood transfusions are also high-risk, potentially harmful interventions. When you receive blood from another individual, your immune system recognizes it as foreign. This triggers an immune response, whether mild or severe, which may divert resources away from fighting the pathogen that originally caused the illness, further weakening the body. Additionally, there is always the risk of acquiring blood-borne illnesses, such as hepatitis C or HIV, despite the screening efforts of blood banks. Because of both their value and their risks, blood transfusions must be used carefully, and the TRICC trial was vital to understanding how to do so.

Results

This trial split patients into two groups. The first, the restrictive group, was given blood transfusions only when the Hgb dropped below 7 g/dL, and the Hgb was maintained in the 7 to 9 range. The second, the liberal group, was transfused any time the Hgb fell below 10 g/dL, and the Hgb was maintained in the 10 to 12 range. The primary outcome, all-cause mortality at 30 days, showed no statistical difference between the two groups. However, when the analysis was repeated for patients younger than 55, mortality was significantly lower in the restrictive (transfuse at 7) group. The analysis was also repeated for less severely ill patients, defined by an APACHE II score of less than 20. (APACHE II is a score from 0 to 71 calculated from multiple physiologic factors – vital signs, lab values, etc. – with higher scores indicating more severe disease and a greater risk of death.) In this sub-analysis as well, mortality was lower in the restrictive group. For patients older than 55 and with an APACHE II score above 20, there was no difference in 30-day mortality between the restrictive and liberal groups.
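As a minimal sketch (thresholds in g/dL taken from the protocol description above; an illustration only, not clinical guidance), the two strategies boil down to a simple decision rule:

```python
def should_transfuse(hgb_g_dl: float, strategy: str) -> bool:
    """Return True if a transfusion would be triggered under the given strategy.

    Restrictive: transfuse below 7 g/dL, then maintain Hgb 7-9.
    Liberal:     transfuse below 10 g/dL, then maintain Hgb 10-12.
    """
    threshold = 7.0 if strategy == "restrictive" else 10.0
    return hgb_g_dl < threshold

print(should_transfuse(8.2, "restrictive"))  # False: above the 7 g/dL trigger
print(should_transfuse(8.2, "liberal"))      # True: below the 10 g/dL trigger
```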

Why We Do What We Do

The authors of this study presented their results in a very objective and humble manner. Even though there was a clear trend toward reduced mortality in the restrictive group for the study as a whole (18.7 percent mortality in the restrictive group vs. 23.3 percent in the liberal group), they declined to draw conclusions from it because it did not reach statistical significance (P = 0.11). At the least, this confirms that no additional harm is done when transfusions are restricted to patients with Hgb < 7. Patients in the restrictive group received less blood overall, reducing their risk of contracting transfusion-associated infectious diseases and of major transfusion-associated complications such as lung injury (TRALI) or cardiac overload (TACO). For younger patients and those who were not as severely ill, over-transfusion was actually deleterious, and this finding was statistically significant. So the next time you approach a patient with possible transfusion requirements, it is not a bad idea to use a Hgb of 7 as the starting point to guide management.

Hébert PC, Wells G, Blajchman MA, Marshall J, Martin C, Pagliarello G, Tweeddale M, Schweitzer I, Yetisir E. A multicenter, randomized, controlled clinical trial of transfusion requirements in critical care. Transfusion Requirements in Critical Care Investigators, Canadian Critical Care Trials Group. N Engl J Med. 1999 Feb 11;340(6):409-17. Erratum in: N Engl J Med 1999 Apr 1;340(13):1056. PubMed PMID: 9971864. http://www.ncbi.nlm.nih.gov/pubmed/9971864

Model for End-Stage Liver Disease (MELD)

The liver has an incredible capacity to recover following an injury. In some patients this capacity is overwhelmed by intrinsic disease (primary biliary cirrhosis, Wilson’s disease, etc.); others manage to lay waste to their livers through external factors such as alcohol, drugs, or a hepatitis virus. Whatever the cause, a number of patients end up with cirrhosis and portal hypertension. To alleviate portal hypertension by bypassing the liver, a transjugular intrahepatic portosystemic shunt (TIPS) can be placed. The use of this procedure outside of an emergent setting in advanced liver disease was controversial, and its morbidity and mortality benefits over endoscopic or medical management were unknown.

Prior to this study, the Child-Pugh class and scoring system loosely determined when to proceed with a TIPS procedure. While the Child-Pugh system is beneficial for classifying a broader spectrum of liver dysfunction, its shortcomings lie in an inability to distinguish between imminent hepatic failure and cirrhosis that can be managed medically for a few more months. It fails to account for increases in creatinine and kidney failure, which are associated with increased mortality. By using this system, we were performing elective TIPS placement without a proper risk-benefit analysis. This created a scenario in which a perceived improvement in symptoms was traded for increased encephalopathy and worsening liver function.

Malinchoc and colleagues looked at 231 patients who underwent an elective TIPS procedure for recurrent variceal bleeding (75%) or refractory ascites (25%). Of this group, 25 patients were either lost to follow-up or received liver transplants within 3 months and were excluded from the analysis. The authors found that mortality was strongly associated with the bilirubin, creatinine, and INR values and with the cause of cirrhosis. In fact, in certain situations TIPS had the opposite of the intended effect – 3-month mortality increased significantly post-procedure. Using multivariate analysis, the team built a scoring system, and a bedside nomogram, from these four variables to predict 3-month mortality after TIPS placement. Now known as the MELD score, it was praised for a strong concordance (c)-statistic of 0.87, indicating high predictive accuracy.

The model quickly gave rise to a pediatric counterpart (PELD), used in a similar fashion. While the scope of the study itself was narrow, the model proved generalizable to chronic liver disease and to stratification of liver transplant candidates. MELD quickly replaced the 36-year-old Child-Pugh classification for transplant allocation by the United Network for Organ Sharing (UNOS). This study was a game changer for liver transplant candidates: patients with the most severe disease, who previously may have been classified as moderate under the Child-Pugh system, moved up the transplant list. It also provided a means to soundly advise patients and family members on the risks of an elective TIPS, which was no small victory. Altogether, the MELD model will likely remain the standard for stratifying chronic liver disease for years to come.
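For the curious, here is a minimal sketch of the score in the three-variable form later adopted for transplant allocation. The coefficients, floors, and creatinine cap are the widely published ones rather than values reproduced from the original paper (which also included a term for the cause of cirrhosis), so treat this as an approximation.

```python
import math

def meld_score(bilirubin_mg_dl: float, inr: float, creatinine_mg_dl: float) -> float:
    """Commonly cited (allocation-era) form of the MELD score.

    Coefficients are the widely published ones (ten times the regression weights
    from the original TIPS model); values below 1.0 are floored at 1.0 and
    creatinine is capped at 4.0 mg/dL, per the usual allocation conventions.
    """
    bili = max(bilirubin_mg_dl, 1.0)
    inr = max(inr, 1.0)
    cr = min(max(creatinine_mg_dl, 1.0), 4.0)
    return 3.78 * math.log(bili) + 11.2 * math.log(inr) + 9.57 * math.log(cr) + 6.43

# Example: bilirubin 3.0 mg/dL, INR 1.5, creatinine 1.8 mg/dL gives a score in the low 20s.
print(round(meld_score(3.0, 1.5, 1.8)))
```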


Malinchoc M, Kamath PS, Gordon FD, Peine CJ, Rank J, ter Borg PC. A model to predict poor survival in patients undergoing transjugular intrahepatic portosystemic shunts. Hepatology. 2000;31(4):864-71.

Testicular Cancer: Changing Tides

Prior to 1974, localized testicular cancer was readily cured with orchiectomy, while metastatic disease was effectively a death sentence – cure rates were as low as 5% with a variety of chemotherapeutic agents. This trial, conducted by an up-and-coming researcher using an experimental drug called cis-diamminedichloroplatinum (or, as we know it, cisplatin), provided a much-needed win in the battle against solid tumors. From 1974 to 1976, Lawrence Einhorn used a combination of cisplatin, vinblastine, and bleomycin to treat patients with mild to severe metastatic disease. Some patients received adjunctive surgery to remove residual tumors. At the time of publication, 38 of 47 patients were disease free. Moreover, at 13-year follow-up (remarkable in and of itself), over 50% of patients remained disease free, an achievement described as a “one-log increase in the cure rate.”

In addition to improving cure rates, cisplatin did not cause the pronounced myelosuppression seen with other chemotherapeutic drugs. It was not without its vices, though. As Einhorn described, platinum had toxic effects on the kidney, manifested as a 25-50% reduction in baseline creatinine clearance. This was likely exacerbated by dehydration from the intractable nausea and vomiting, eloquently described in Mukherjee’s The Emperor of All Maladies. From a critical perspective, the study was well conducted given the limited number of patients. Criteria for partial versus complete remission were well defined, and follow-up was more than sufficient. Additional trials in the following years would cement the new findings.

Although partial results had appeared in reports in the Journal of Urology and American Family Physician, the complete results of the trial were published in the Annals of Internal Medicine in 1977. Given the overwhelmingly positive response to the therapy and the aggressive nature of non-seminomatous testicular cancer, cisplatin was quickly approved by the FDA in 1978 and the regimen became the standard of care. The combination chemotherapy was modified in the 1980s to use etoposide instead of vinblastine, leading to further improvements in cure rates. That same combination is in use today, and cure rates exceed 95%. Moments like this, when one can claim a logarithmic increase in cure rates, are few and far between in medicine.

Einhorn LH, Donohue J. Cis-diamminedichloroplatinum, vinblastine, and bleomycin combination chemotherapy in disseminated testicular cancer. Ann Intern Med. 1977 Sep;87(3):293-8.