PIOPED II: Assessing CTA in PE Diagnosis

Pulmonary embolism (PE) is the blockage of the arteries that perfuse the lungs, generally by clots. Rapid assessment and initiation of anticoagulation in suspected PE are paramount, as the risk of recurrent PE in confirmed cases could be as high as 25% within the first 24 hours. Fortunately, the diagnosis and management of pulmonary embolism improved rapidly at the turn of the 21st century. Between 1995 and 2001, Wells et al. developed criteria to assess the likelihood of a pulmonary embolism based on the clinical presentation. At the same time, a new development in computed tomography (CT), termed spiral CT, permitted visualization of the pulmonary vasculature. The Prospective Investigation of Pulmonary Embolism Diagnosis II (PIOPED II) study addressed the benefits of CT angiography (CTA).

One issue when assessing the accuracy and usefulness of a new imaging modality is having a strong gold standard against which to compare it. Given that the existing gold standard for PE was pulmonary angiography, or digital subtraction pulmonary angiography (DSPA) – a highly invasive, time-consuming study that uses direct arterial catheter access for contrast-enhanced imaging – an alternative method was needed. Using multiple ancillary tests, the authors employed a composite reference diagnosis as a gold standard in addition to DSPA. A diagnosis of PE was given when ventilation-perfusion (V/Q) scanning showed high probability in a patient without a history of PE, or when V/Q scanning showed moderate probability with positive lower-extremity venous ultrasonography. PE was ruled out with low pretest probability and negative results from V/Q scanning or venous ultrasonography. Using these standards, 632 patients were ruled out for PE. Of these, 592 received CTA and were followed for 6 months. Only 2 of that group ultimately required anticoagulation, indicating that the composite reference diagnosis was a suitable substitute when DSPA was not necessary.
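
To make the branching of that composite rule explicit, here is a minimal Python sketch of the logic as the paragraph states it. Treating a "negative" V/Q result as a normal read is our simplifying assumption; the actual protocol had additional arms and caveats.

```python
# Minimal sketch of the composite reference rule as described above.
# Mapping a "negative" V/Q result to a normal read is a simplifying
# assumption; the actual protocol had additional arms and caveats.

def composite_reference(vq: str, prior_pe: bool,
                        leg_us_positive: bool,
                        low_pretest: bool) -> str:
    """Classify as 'PE', 'no PE', or 'inconclusive' (falls back to DSPA)."""
    if vq == "high" and not prior_pe:
        return "PE"            # high-probability V/Q, no history of PE
    if vq == "moderate" and leg_us_positive:
        return "PE"            # moderate V/Q plus positive leg ultrasound
    if low_pretest and (vq == "normal" or not leg_us_positive):
        return "no PE"         # low pretest probability, negative testing
    return "inconclusive"      # anything else goes on to DSPA
```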

The results, while impressive on their own, are more compelling when compared with those of the first PIOPED study, which examined V/Q scans. In the original study, V/Q scans achieved 98% sensitivity but very poor specificity (10%). In contrast, CTA achieved 83% sensitivity and 96% specificity. When CT venography was added (CTA-CTV), sensitivity improved to 92%. CTA also inherently provided imaging of the whole chest and upper neck, which in certain cases produced alternative diagnoses.

The results of PIOPED II demonstrate that, when used in conjunction with the modified Wells criteria, CTA provides high positive and negative predictive values for PE. Effectively, concordant clinical assessment and imaging can rule in or rule out the diagnosis, something that was not possible with V/Q scans alone. In patients with suspected PE and no contraindication to IV contrast, CTA offers a shorter clinical algorithm for diagnosis and management by eliminating the uncertainty of moderate-probability V/Q scans. Today, CTA has become a second gold standard in the diagnosis of PE because of the invasiveness of pulmonary angiography. Nevertheless, in both algorithms, any inconclusive result must be followed up with DSPA or serial lower-extremity ultrasounds. As the authors themselves admit, the benefits of CTA-CTV depend largely on the expertise of the radiology staff reading the study. For institutions with this capability, the diagnosis and management of PE are greatly improved with CTA.
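
The interplay between pretest probability and predictive values is worth making concrete. The short sketch below applies Bayes' rule to the sensitivity (83%) and specificity (96%) reported above; the pretest probabilities assigned to the Wells categories are illustrative assumptions, not figures from the paper.

```python
# Illustrative only: predictive values of CTA for PE via Bayes' rule,
# using the sensitivity/specificity reported in PIOPED II.
SENS, SPEC = 0.83, 0.96

def predictive_values(pretest: float) -> tuple[float, float]:
    """Return (PPV, NPV) of CTA at a given pretest probability of PE."""
    tp = SENS * pretest              # true positives
    fp = (1 - SPEC) * (1 - pretest)  # false positives
    fn = (1 - SENS) * pretest        # false negatives
    tn = SPEC * (1 - pretest)        # true negatives
    return tp / (tp + fp), tn / (tn + fn)

# Hypothetical pretest probabilities for low/moderate/high Wells categories
for label, p in [("low", 0.10), ("moderate", 0.30), ("high", 0.70)]:
    ppv, npv = predictive_values(p)
    print(f"{label:>8} pretest {p:.0%}: PPV {ppv:.0%}, NPV {npv:.0%}")
```

The pattern matches the trial's message: concordant results (a positive CTA with high clinical probability, or a negative CTA with low probability) are close to definitive, while discordant results warrant further testing.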


Stein PD, Fowler SE, Goodman LR, et al. Multidetector computed tomography for acute pulmonary embolism. N Engl J Med. 2006 Jun 1;354(22):2317-27.


Statins: Game Changers in CVD

In a recent post we discussed the use of aspirin in the setting of an acute MI. Our second look at landmark trials in cardiovascular disease (CVD) focuses on secondary prevention. Elevated cholesterol levels have long been implicated in the progression of CVD, and numerous medications have been developed to reduce plasma cholesterol as a means of reducing the incidence of myocardial infarction and other cardiovascular events. In fact, the Lipid Research Clinics Coronary Primary Prevention Trial (LRC-CPPT) revealed that CVD risk reduction was proportional to the reduction in LDL cholesterol. Through the 1980s, fibrates and bile acid resins (cholestyramine) were in use, and studies from multiple institutions had demonstrated their ability to moderately reduce cholesterol levels. Used alone, however, neither cholestyramine nor the fibrates could achieve more than a 10-15% reduction in LDL. The most promising development was the approval of HMG-CoA reductase inhibitors, otherwise known as statins. Several smaller studies suggested an improvement in plasma cholesterol levels and cardiovascular outcomes with statin therapy. The Scandinavian Simvastatin Survival Study (4S) assessed the effect of simvastatin on total mortality and cardiovascular outcomes and confidently addressed the question.

Beginning in 1988, 4444 patients between the ages of 35 and 70 were randomly assigned to receive simvastatin or placebo. Dosing was titrated to a target serum cholesterol value; patients above this value received up to 40 mg per day, and those below it were titrated down. The study group was followed for a median of 5.4 years. The primary endpoint of 4S was total mortality – an important distinction, since a few trials with fibrates, including one from the WHO, had shown an increase in non-cardiovascular deaths. Randomization successfully balanced the groups with respect to patients already on multiple medications for hypertension, angina, and diabetes. A similar study conducted at the same time in Scotland, WOSCOPS (the West of Scotland Coronary Prevention Study), examined the effect of pravastatin on primary prevention. While 4S was not as straightforward as WOSCOPS, it was far more generalizable in a number of ways. For one, it included women. Other aspects that stand out are the broader age range, the inclusion of patients with existing CVD or diabetes, and a percentage of smokers closer to that of the US.

The data from 4S were impressive. Patients on simvastatin saw, on average, a 35% decrease in LDL cholesterol. More importantly, this decrease translated into a relative risk of death of 0.70 compared with placebo (182 vs. 256 deaths). The reduction in deaths was attributable to a 42% decrease in cardiovascular mortality. Simvastatin likewise led to a pronounced decrease in non-fatal cardiovascular events (RR 0.66). Adverse effects were few – a single episode of rhabdomyolysis in the simvastatin group – and complaints of myalgias and elevations in LFTs were similar between the placebo and simvastatin groups.
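
As a back-of-the-envelope check, the relative risk and number needed to treat can be recomputed from the raw death counts. The group sizes used below (2221 simvastatin, 2223 placebo) are the published enrollment figures, quoted here as an assumption, and the crude ratio will differ slightly from the trial's survival-analysis estimate of 0.70.

```python
# Back-of-the-envelope check of the 4S mortality result. Death counts
# are from the text; group sizes (2221/2223) are the published
# enrollment figures, quoted here as an assumption.
deaths_simva, n_simva = 182, 2221
deaths_placebo, n_placebo = 256, 2223

risk_simva = deaths_simva / n_simva        # ~8.2%
risk_placebo = deaths_placebo / n_placebo  # ~11.5%

rr = risk_simva / risk_placebo   # crude relative risk, ~0.71
arr = risk_placebo - risk_simva  # absolute risk reduction, ~3.3%
nnt = 1 / arr                    # ~30 patients treated to avert one death

print(f"RR {rr:.2f}, ARR {arr:.1%}, NNT {nnt:.0f}")
```

An NNT of roughly 30 over five and a half years of therapy underscores just how large the effect was.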

Prior to this study, there was no consensus on the benefits of lipid-lowering therapy. The 4S trial conclusively demonstrated that statin therapy was a safe and more effective choice for lowering serum LDL cholesterol than other lipid-lowering agents. Moreover, it produced the largest reduction in cardiovascular mortality to date, and the benefit was seen in all age groups. Today many newer statins exist, but simvastatin continues to be first-line therapy for many of our patients.


The Scandinavian Simvastatin Survival Study Group. Randomized trial of cholesterol lowering in 4444 patients with coronary heart disease: the Scandinavian Simvastatin Survival Study (4S). Lancet. 1994;344:1383-1389.

Ranson’s Criteria: For the History Books

Acute pancreatitis – a common indication for hospitalization in the US – is a feared complication of alcohol use, gallstones, abdominal trauma, steroids, mumps, elevated triglyceride and calcium levels, ERCP, and even scorpion stings. Up to 25% of patients with acute pancreatitis will develop severe acute pancreatitis. It's a tricky clinical situation to manage because estimating the severity of the disease is difficult; yet distinguishing mild from severe disease is critical, as mortality is 1-2% for mild pancreatitis and up to 17% for severe cases. Prior to this study, physicians relied purely on clinical judgment to triage patients to the floor or the ICU, a method known to underestimate the severity of the disease. Ranson et al. provided 11 criteria – 5 assessed at admission and the other 6 at 48 hours – to help predict the most severe cases, with the goal of aggressively treating these patients in the ICU or the OR.
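
For concreteness, the sketch below shows how the criteria are typically tallied. The cutoffs are the commonly quoted values for non-gallstone pancreatitis rather than figures pulled from the original paper, so treat them as assumptions for illustration.

```python
# Illustrative tally of Ranson's criteria. The cutoffs are the commonly
# quoted values for non-gallstone pancreatitis, assumed for illustration.

ADMISSION = {
    "age > 55 years":      lambda p: p["age"] > 55,
    "WBC > 16,000/mm3":    lambda p: p["wbc"] > 16_000,
    "glucose > 200 mg/dL": lambda p: p["glucose"] > 200,
    "LDH > 350 IU/L":      lambda p: p["ldh"] > 350,
    "AST > 250 IU/L":      lambda p: p["ast"] > 250,
}

AT_48_HOURS = {
    "hematocrit fall > 10%":     lambda p: p["hct_drop_pct"] > 10,
    "BUN rise > 5 mg/dL":        lambda p: p["bun_rise"] > 5,
    "calcium < 8 mg/dL":         lambda p: p["calcium"] < 8,
    "PaO2 < 60 mmHg":            lambda p: p["pao2"] < 60,
    "base deficit > 4 mEq/L":    lambda p: p["base_deficit"] > 4,
    "fluid sequestration > 6 L": lambda p: p["fluid_seq_l"] > 6,
}

def ranson_score(patient: dict, criteria: dict) -> int:
    """Count how many of the given criteria the patient meets."""
    return sum(check(patient) for check in criteria.values())

# Hypothetical patient: 4 criteria at admission, 6 more at 48 hours
patient = {"age": 62, "wbc": 18_500, "glucose": 240, "ldh": 400,
           "ast": 120, "hct_drop_pct": 12, "bun_rise": 7, "calcium": 7.6,
           "pao2": 58, "base_deficit": 5, "fluid_seq_l": 7}
total = ranson_score(patient, ADMISSION) + ranson_score(patient, AT_48_HOURS)
print(total)  # 10 of 11 criteria met
```

Classically, three or more criteria flag severe disease, though as discussed below the score's predictive performance has not held up well.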

The study suffered from many shortcomings that complicated data analysis. The data were likely skewed by selection and observer bias, the small number of patients enrolled, and the variety of treatment methods employed. For example, 21 patients underwent abdominal exploration within 7 days of admission. Another 10 patients were randomized to early operative or non-operative management; of those, all but one spent at least 8 days in the ICU. In an additional non-randomized early-management group, 17 patients were managed by operation within 48 hours of diagnosis. Operations varied widely, but the existing standard "recommended early laparotomy with cholecystostomy, gastrostomy, [jejunostomy] and pancreatic drainage in patients with severe acute pancreatitis". With all these variables, it becomes impossible to assess outcomes by treatment modality – the power in each group, and for the whole study, is greatly reduced. Nonetheless, some of the conclusions regarding blood loss and fluid depletion were accurate. Importantly, the authors noted that a worsening BUN in the face of aggressive fluid replacement was a more sensitive index of survival than the BUN at admission.

At the time, the 11 criteria now known as Ranson's Criteria were the best available tool for predicting severity. More recent studies, however, have demonstrated that Ranson's criteria in aggregate are a relatively poor predictor of disease severity. Today we have at our disposal a number of scoring systems shown to be better prognosticators. One, APACHE II, was developed for critically ill ICU patients, and modern guidelines from the American Gastroenterological Association recommend it because of its good negative predictive value for severe acute pancreatitis. Accurately predicting outcomes and severity remains difficult, although early aggressive management and diagnostic modalities not available to the authors at the time have improved survival.

The complexity of the interactions among the numerous risk factors that ultimately lead to an attack of acute pancreatitis almost precludes a simple scoring system. It seems that Ranson's criteria are best used during hospital rounds, where medical students and residents can rapidly regurgitate them. It is our hope that they recognize this study for its historical value and instead employ APACHE II to guide early management.

Ranson JH, Rifkind KM, Roses DF, Fink SD, Eng K, Spencer FC. Prognostic signs and the role of operative management in acute pancreatitis. Surg Gynecol Obstet. 1974;139:69.

(Unfortunately, not even an abstract of the original article is available online. For those at UTSW, I can e-mail you a photocopy of the article.)

ISIS-2: Managing an Acute MI

Got chest pain? Then pop some aspirin and head to the ER. This was, in a nutshell, the conclusion of the Second International Study of Infarct Survival (ISIS-2). Every year, 1.5 million Americans suffer a heart attack, and many are treated in the ER with morphine, oxygen, nitrates, aspirin, and beta-blockers; some are taken to the cath lab. Part of this methodical management stems from the results of ISIS-2. Prior to ISIS-2, exactly one trial had examined the use of aspirin, an anti-platelet agent, acutely during a myocardial infarction (MI). (In that study, a single dose of aspirin was given for a suspected MI, and mortality was assessed at one month. The single dose conferred no mortality benefit, and the authors concluded that aspirin was not beneficial in the acute setting.) The focus on streptokinase in ISIS-2 was less groundbreaking, as a number of concurrent trials were studying thrombolytic therapy for MI. Still, ISIS-2, in conjunction with other randomized clinical trials (GISSI, ISAM, etc.), quickly established a 3-hour window for thrombolytic therapy, giving rise to the phrase "time is muscle". Today, with rapid triage in the ED and improved times to percutaneous coronary intervention (PCI), we know it as the 90-minute "door to balloon" target.

The methodology was consistent throughout the trial. Eligibility criteria were kept straightforward to encourage participation across countries, and ultimately 17,187 patients were recruited. Patients were randomized to both aspirin and streptokinase in a 2×2 factorial design, creating 4 distinct groups: streptokinase infusion, aspirin 160 mg, streptokinase plus aspirin, or placebo infusion and tablets. Participating physicians were encouraged to continue all other aspects of patient care as they saw fit, though they were required to report any intention to use anticoagulation in addition to the trial medications. At the time of publication, discharge information was unavailable for 204 patients (1.1%). Otherwise, follow-up was strong: 97% at 5 weeks after discharge, with a median follow-up of 15 months.

Both 5-week and 24-month mortality were analyzed. Aspirin afforded a 23% reduction in the odds of death when given within 24 hours of the onset of chest pain and continued for a month. This translates into 25 deaths avoided per 1000 patients treated (an NNT of 40), plus 15 non-fatal cardiovascular events avoided. The effect was further enhanced if aspirin was given within 4 hours of pain onset; mortality benefits were less pronounced after 4 hours. When both fatal and non-fatal events in the years following a myocardial infarction are counted, the number needed to treat is far less than 40. This survival benefit was independent of the gains seen with streptokinase therapy; the combination of thrombolytic and anti-platelet therapy, administered within 4 hours of the onset of chest pain, led to a 40-50% reduction in the odds of death.
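
To see how a 23% reduction in the odds of death lines up with the quoted 25-deaths-per-1000 figure, one can convert odds back to risks. The ~12% baseline 5-week mortality used below is an assumed value, chosen to be consistent with the numbers in the text rather than taken from the paper.

```python
# Converting the ISIS-2 odds reduction into absolute terms. The ~12%
# baseline 5-week mortality is an assumed figure, chosen to be
# consistent with the 25-per-1000 reduction quoted above.
baseline_risk = 0.12
odds_reduction = 0.23

baseline_odds = baseline_risk / (1 - baseline_risk)
treated_odds = baseline_odds * (1 - odds_reduction)
treated_risk = treated_odds / (1 + treated_odds)

arr = baseline_risk - treated_risk  # ~0.025, i.e. 25 per 1000
nnt = 1 / arr                       # ~40
print(f"ARR {arr*1000:.0f} per 1000, NNT {nnt:.0f}")
```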

The administration of aspirin in the setting of an acute MI can substantially alter the outcome. Moreover, the gains in survival over the first few weeks persist well into the future with lower daily doses of aspirin. These benefits, definitively established by ISIS-2, cannot be overstated. Simply put, for the 1.5 million MIs that occur in the US every year, taking a drug already found in nearly every household can prevent tens of thousands of deaths.

ISIS-2 Collaborative Group. Randomized trial of intravenous streptokinase, oral aspirin, both, or neither among 17,187 cases of suspected acute myocardial infarction: ISIS-2. Lancet. 1988 Aug 13;2(8607):349-60.

(For those at UTSW, I can e-mail you a photocopy of the article since it isn't available online through the library.)

Smoking and Lung Cancer

As early as 1912, scientists had begun to suspect smoking as a factor in the rising death rate from lung cancer, but strong retrospective and prospective studies were unavailable. Through the 1940s, prominent scientific figures continued to blame everything from improved cancer detection to tarred roads to industrial plant fumes for the rise in lung cancer. While the association seems obvious to us today, there was a paucity of data to support it at the time. The ubiquity of tobacco didn't help either: during the first half of the 20th century, nearly 80% of American men reported some amount of tobacco use. Numerous retrospective studies emerged in 1950, but there was no trump card.

Doll and Hill’s seminal study attempted a broad-based, prospective approach to address the issue. In 1951, a simple survey asking for a brief smoking history was sent to each physician practicing in the UK. Of the replies received, men under 35 and all women were excluded from the analysis, leaving approximately 24,000 men aged 35 years and above. In 1954, Doll and Hill looked at a national database to determine how many of these physicians had died and the cause of death. All 36 men who died of lung cancer were smokers. The death rate was 0.48 per 1000 for those smoking approximately 1g daily and 1.14 per 1000 for those smoking more than 25g daily. Moreover, an increase in cigarette use was directly correlated to the risk of lung cancer.

The simplicity of their study proved to be its genius. There were no advanced statistics – all of the analysis could be done on a basic calculator – and the correlation between tobacco and lung cancer could not have been made more obvious. Unfortunately, the results did not lead to rapid policy change. An advertising blitz by the tobacco companies left the public unaware of, or unconvinced by, the harmful effects of tobacco for a few more decades. Still, the evidence was undeniable, and the scientific community coalesced around this body of data, now widely regarded as the turning point in the war on tobacco.

Doll R, Hill AB. The mortality of doctors in relation to their smoking habits: a preliminary report. Br Med J. 1954;1(4877):1451-1455.

Model for End-Stage Liver Disease (MELD)

The liver has an incredible capacity to recover following injury. In some patients this capacity is overwhelmed by intrinsic disease such as primary biliary cirrhosis or Wilson's disease; others manage to lay waste to their livers through external factors like alcohol, drugs, or a hepatitis virus. Whatever the cause, a number of patients end up with cirrhosis and portal hypertension. To alleviate portal hypertension by bypassing the liver, a transjugular intrahepatic portosystemic shunt (TIPS) can be placed. Outside of the emergent setting, the use of this procedure in advanced liver disease was controversial, and its morbidity and mortality benefits over endoscopic or medical management were unknown.

Prior to this study, the Child-Pugh class and score loosely determined when to proceed with TIPS. While the Child-Pugh system is useful for classifying a broad spectrum of liver dysfunction, its shortcoming lies in an inability to distinguish imminent hepatic failure from cirrhosis that can be managed medically for a few more months. In particular, it fails to account for rising creatinine and kidney failure, which are associated with increased mortality. Using this system, we were performing elective TIPS placement without a proper risk-benefit analysis, creating a scenario in which a perceived improvement in symptoms was traded for increased encephalopathy and worsening liver function.

Malinchoc and colleagues looked at 231 patients who underwent an elective TIPS procedure for recurrent variceal bleeding (75%) or refractory ascites (25%). Of this group, 25 patients were either lost to follow-up or received liver transplants within 3 months and were excluded from analysis. The authors found that mortality was strongly associated with bilirubin, creatinine, INR, and the cause of cirrhosis. In fact, in certain situations TIPS had the opposite of the intended effect – 3-month mortality increased significantly post-procedure. Using multivariate analysis, the team built a scoring system and bedside nomogram from these four variables to predict 3-month mortality after TIPS placement. Now known as the MELD score, it was praised for a strong concordance statistic (c-statistic) of 0.87, indicating high discriminative accuracy.

While the scope of the study itself was narrow, the model proved generalizable to chronic liver disease and to stratification for liver transplantation, and its framework quickly gave rise to an analogous pediatric model (PELD). MELD soon replaced the decades-old Child-Pugh classification for transplant allocation by the United Network for Organ Sharing (UNOS). This study was a game changer for liver transplant candidates: patients with the most severe disease, who previously may have been classified as moderate under the Child-Pugh system, moved up in line for a transplant. It also provided a means to soundly advise patients and family members on the risks of an elective TIPS, which was no small victory. Altogether, the MELD score will likely remain the standard for stratifying chronic liver disease for years to come.
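
The model itself is compact enough to reproduce at the bedside. The sketch below uses the widely published UNOS form of the score that grew out of this study's regression, in which the original etiology term was folded into a constant; the coefficients and clamping conventions are quoted from later practice, not from the paper's nomogram, so treat them as assumptions.

```python
import math

def meld_score(creatinine: float, bilirubin: float, inr: float) -> int:
    """UNOS-style MELD score (assumed form; coefficients from later
    practice, derived from the Malinchoc regression).

    Inputs: creatinine and bilirubin in mg/dL, INR dimensionless.
    """
    # Conventional clamping: floor each value at 1.0 so the logs are
    # non-negative, and cap creatinine at 4.0 mg/dL.
    cr = min(max(creatinine, 1.0), 4.0)
    bili = max(bilirubin, 1.0)
    inr = max(inr, 1.0)
    score = (3.78 * math.log(bili)
             + 11.2 * math.log(inr)
             + 9.57 * math.log(cr)
             + 6.43)
    return round(score)

# Example: a hypothetical decompensated cirrhotic
print(meld_score(creatinine=2.0, bilirubin=4.5, inr=1.8))  # ~25
```

Higher scores predict higher 3-month mortality – precisely the property UNOS later exploited for ranking transplant candidates.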


Malinchoc M, Kamath PS, Gordon FD, Peine CJ, Rank J, ter Borg PC. A model to predict poor survival in patients undergoing transjugular intrahepatic portosystemic shunts. Hepatology. 2000;31(4):864.

Testicular Cancer: Changing Tides

Prior to 1974, localized testicular cancer was easily cured with orchiectomy, while metastatic disease was effectively a death warrant – cure rates were as low as 5% with a variety of chemotherapeutic agents. This trial, conducted by an up-and-coming researcher using an experimental drug called cis-diamminedichloroplatinum (or, as we know it, cisplatin), provided a much-needed win in the battle against solid cancers. From 1974 to 1976, Lawrence Einhorn used a combination of cisplatin, vinblastine, and bleomycin to treat patients with mild to severe metastatic disease; some also underwent surgery to remove residual tumors. At the time of publication, 38 of 47 patients were disease-free. Moreover, at 13-year follow-up (remarkable in and of itself), over 50% of patients remained disease-free, an achievement described as a "one-log increase in the cure rate."

In addition to improved cure rates, cisplatin did not cause the pronounced myelosuppression seen with other chemotherapeutic drugs. It was not without its vices, though. As Einhorn described, platinum was toxic to the kidney, manifesting as a 25-50% reduction in baseline creatinine clearance. This was likely exacerbated by dehydration from the intractable nausea and vomiting, eloquently described in Mukherjee's The Emperor of All Maladies. From a critical perspective, the study was well conducted given the limited number of patients. Criteria for partial versus complete remission were well defined, and follow-up was more than sufficient. Additional trials in the following years would cement the new findings.

Although partial results had appeared in reports in the Journal of Urology and American Family Physician, the complete results of the trial were published in Annals of Internal Medicine in 1977. Given the overwhelmingly positive response to the therapy and the aggressive nature of non-seminomatous testicular cancer, cisplatin was quickly approved by the FDA in 1978, and the regimen became the standard of care. The combination chemotherapy was modified in the 1980s to use etoposide instead of vinblastine, leading to further improvements in cure rates. That same combination is in use today, and cure rates exceed 95%. Moments like this, when one can claim a logarithmic increase in cure rates, are few and far between in medicine.

Einhorn LH, Donohue J. Cis-diamminedichloroplatinum, vinblastine, and bleomycin combination chemotherapy in disseminated testicular cancer. Ann Intern Med. 1977 Sep;87(3):293-8.