Saturday, January 11, 2014

Digoxin, digoxout: an appraisal of digoxin immune fab

Preface: The inspiration for this topic came from an exchange on Twitter between @PharmERToxGuy, @DavidJuurlink, and me, representing one of the things I love most about #hcsm -- the opportunity for dialogue across diverse backgrounds and practice settings. In this particular case, we debated the appropriate use of digoxin immune fab (DigiFab®), which was certainly challenging to do in segments of 140 characters or fewer. Below I have outlined a more detailed rationale for why I advocate its conservative use in the management of digoxin toxicity.

My reasons for advocating the conservative use of digoxin immune fab (DigiFab®) are unrelated to its efficacy, as it is undoubtedly the most effective antidote for digoxin toxicity. Instead, I contend that in many cases it is an unnecessary, overly aggressive, and costly approach to a scenario that may be managed just as effectively with thoughtful monitoring and supportive therapy. While it can be challenging to predict whether patients might require fab therapy at a later time, I believe a more judicious approach is possible if one considers the severity of toxicity, the circumstances in which it occurred, and whether fab administration would substantially alter the patient's clinical course.

Digoxin toxicity is difficult to characterize as a result of heterogeneity in the literature (e.g., study methods, definitions for toxicity) and how use of digoxin has evolved over time (i.e., patient populations, indications, dosing, target concentrations). As an example, a patient with a ventricular arrhythmia and serum digoxin concentration of 10.0 ng/mL in 1994 and one with symptomatic bradycardia and a serum digoxin concentration of 2.0 ng/mL in 2014 are both classified as having digoxin toxicity (and both cases often characterized simply as a dysrhythmia), although the severity of their presentations is vastly different. These and similar challenges may explain in part some of the discrepancies in the literature, as some studies demonstrate a decline in the prevalence of digoxin toxicity while others claim it has not changed [1-3].

What has changed considerably over the last several decades is how digoxin is used. In the late 1980s and early 1990s, it was not uncommon for the vast majority of patients with heart failure to be receiving digoxin therapy -- as many as 9 out of 10 in some studies [4]. Today that proportion is substantially lower, as digoxin therapy is often reserved for patients with advanced symptomatic disease. When it is used in this population, a lower serum concentration (i.e., 0.5-0.9 ng/mL) is targeted, ameliorating many of the more severe adverse effects observed in the setting of the elevated concentrations of the past [5,6]. Additionally, patients with heart failure are likely to be on concomitant therapies (e.g., beta blockers, aldosterone antagonists, implantable defibrillators and other devices) that may confer protection from some of the more severe forms of digoxin toxicity or prevent it altogether (e.g., less hypokalemia as a result of aldosterone antagonist use). Similar trends, including a decline in overall digoxin use and reservation for only the most advanced cases, have also been observed in the atrial fibrillation population, where lenient rate control targets have obviated the need for digoxin in many patients [7-9].

Whether or not these differences impact the number of patients presenting with digoxin toxicity, they likely influence how, and perhaps more importantly, why patients present. In my practice setting, digoxin toxicity often manifests as a result of something more problematic (i.e., renal impairment as a result of worsening heart failure, emergence of underlying conduction abnormalities) rather than the consequence of a drug-drug interaction or acute overdose. In these latter cases, fab administration may be a reasonable approach for preventing hospital admission. However, for the 4 out of every 5 patients with digoxin toxicity who require hospitalization either way, fab administration may not confer substantial benefit over what would be provided by monitoring and symptomatic support [3].

Patients with worsening heart failure often require days of clinical evaluation whether or not they have signs or symptoms consistent with digoxin toxicity (which can often mimic those of worsening heart failure). Furthermore, complete digoxin withdrawal may actually worsen outcomes in this population [6, 10]. In the case of renal impairment, digoxin immune fab may not be an ideal strategy if renal impairment is advanced or does not improve substantially, as it too requires renal clearance and is not removed by hemodialysis. Although an earlier review substantiates fab use in patients with mild to moderate renal impairment, several limitations make it difficult to derive similar conclusions when renal impairment is severe [11]. Although manifestations of digoxin toxicity may initially improve in this latter population, recrudescent toxicity may occur days to weeks later as digoxin redistributes from peripheral tissues, a phenomenon that has been well-documented in the literature [12, 13]. For patients on chronic digoxin therapy, this may occur even in the absence of severe renal impairment. In these scenarios, fab use may provide clinicians with a false sense of security, resulting in less frequent monitoring or premature discharge when the patient should be observed for recrudescent toxicity or worsening signs and symptoms of heart failure.

Finally, as I alluded to in several instances above, digoxin immune fab may not be the most cost-effective strategy in a given patient. Notably, many cost-effectiveness analyses are a decade or more old, making them subject to the same limitations as the epidemiological studies described above. Given the financial woes of today's health care environment, cost-effectiveness should be a factor in determining whether a therapy is indicated, especially when less expensive alternatives exist or if the therapy is unlikely to alter the long-term outcome of the patient. Otherwise, we endanger our ability to use these more expensive therapies in patients who have no alternatives. In the US, a single vial of digoxin immune fab costs between $1,200 and $1,500 (or more), and most patients require multiple vials based on their body weight and/or serum digoxin concentration. Unless hospitalization can be substantially shortened or avoided altogether, the cost of fab therapy may quickly outpace reimbursement. For example, the average reimbursement for a drug overdose at my institution runs about $6,500, whereas a heart failure admission runs around $8,300 [14].
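For readers who want to sanity-check these numbers, the arithmetic is straightforward. The sketch below (Python) uses the vial-estimation formulas from the DigiFab package insert; the cost range is the illustrative figure quoted above, not actual billing data, and the function names are my own:

```python
import math

VIAL_COST_USD = (1200, 1500)  # illustrative per-vial price range from the text

def digifab_vials_from_level(sdc_ng_ml: float, weight_kg: float) -> int:
    """Estimate vials from a steady-state serum digoxin concentration
    (ng/mL) and body weight (kg): vials = (SDC x weight) / 100, rounded up."""
    return math.ceil(sdc_ng_ml * weight_kg / 100)

def digifab_vials_from_ingestion(dose_mg: float) -> int:
    """Estimate vials after an acute ingestion of a known amount:
    body load (mg) = dose x 0.8 (oral bioavailability); one vial binds ~0.5 mg."""
    return math.ceil(dose_mg * 0.8 / 0.5)

# Example: a 4.0 ng/mL steady-state level in an 80 kg patient
vials = digifab_vials_from_level(4.0, 80)
low, high = (vials * c for c in VIAL_COST_USD)
print(vials, low, high)  # 4 vials at roughly $4800-$6000
```

Even a modest chronic exposure can run several thousand dollars in antidote alone, which is the point of the reimbursement comparison above.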

That being said, the following are situations where I would definitely recommend the use of digoxin immune fab:
  • Ventricular arrhythmias, accelerated junctional rhythms
  • Life-threatening bradyarrhythmias unresponsive to chronotropic agents (and when temporary pacing is not readily available)
  • Acute mental status changes
  • Acute overdose
I generally avoid recommending fab on the basis of a specific serum digoxin concentration alone, as concentrations are often open to interpretation (e.g., timing of ingestion, timing of the laboratory draw). Furthermore, a toxic concentration is any concentration that results in clinically meaningful adverse sequelae in a given patient. A serum digoxin concentration of 2.0 ng/mL resulting in a ventricular arrhythmia is toxic and requires emergent treatment, whereas a patient with a serum concentration of 4.0 ng/mL and no adverse sequelae requires close observation, not emergent therapy.

Outside the indications outlined above, the strategy I most commonly recommend for managing digoxin toxicity is to simply facilitate urine output (e.g., intravenous fluids), provide supportive therapy when necessary, and monitor closely should a need for fab arise. If the patient has symptomatic bradycardia, this may require intermittent use of a chronotropic agent. Although atropine is often recommended in this scenario, its short half-life makes it less than ideal for counteracting a drug that may require hours to days to clear. Instead, I prefer a dopamine infusion in this setting, as it may be turned on or off (or titrated) based on patient need. Importantly, dopamine and other catecholamine-based therapies should be monitored closely so as not to exacerbate other rhythm disturbances commonly associated with digoxin toxicity.

Peer review: Special thanks goes to Jo Ellen Rodgers, PharmD, FCCP, BCPS (AQ Cardiology), a clinical associate professor at the University of North Carolina Eshelman School of Pharmacy, and Jonathan Cicci, PharmD, BCPS, a clinical pharmacy specialist in cardiology at the University of North Carolina Health Care for their review of this entry.

References
  1. Haynes K, Heitjan D, Kanetsky P, Hennessy S. Declining public health burden of digoxin toxicity from 1991 to 2004. Clin Pharmacol Ther. 2008 Jul;84(1):90–4.
  2. Yang EH, Shah S, Criley JM. Digitalis toxicity: a fading but crucial complication to recognize. Am J Med. 2012 Apr;125(4):337–43. 
  3. See I, Shehab N, Kegler SR, Laskar SR, Budnitz DS. Emergency Department Visits and Hospitalizations for Digoxin Toxicity: United States, 2005-2010. Circ Heart Fail. 2013 Dec 3; 
  4. Effects of enalapril on mortality in severe congestive heart failure. Results of the Cooperative North Scandinavian Enalapril Survival Study (CONSENSUS). The CONSENSUS Trial Study Group. N Engl J Med. 1987 Jun 4;316(23):1429–35. 
  5. Rathore SS, Curtis JP, Wang Y, Bristow MR, Krumholz HM. Association of serum digoxin concentration and outcomes in patients with heart failure. JAMA J Am Med Assoc. 2003 Feb 19;289(7):871–8. 
  6. Ahmed A, Gambassi G, Weaver MT, Young JB, Wehrmacher WH, Rich MW. Effects of discontinuation of digoxin versus continuation at low serum digoxin concentrations in chronic heart failure. Am J Cardiol. 2007 Jul 15;100(2):280–4. 
  7. Wyse DG, Waldo AL, DiMarco JP, Domanski MJ, Rosenberg Y, Schron EB, et al. A comparison of rate control and rhythm control in patients with atrial fibrillation. N Engl J Med. 2002 Dec 5;347(23):1825–33. 
  8. Hohnloser SH, Crijns HJGM, van Eickels M, Gaudin C, Page RL, Torp-Pedersen C, et al. Effect of dronedarone on cardiovascular events in atrial fibrillation. N Engl J Med. 2009 Feb 12;360(7):668–78. 
  9. Van Gelder IC, Groenveld HF, Crijns HJGM, Tuininga YS, Tijssen JGP, Alings AM, et al. Lenient versus strict rate control in patients with atrial fibrillation. N Engl J Med. 2010 Apr 15;362(15):1363–73. 
  10. Packer M, Gheorghiade M, Young JB, Costantini PJ, Adams KF, Cody RJ, et al. Withdrawal of digoxin from patients with chronic heart failure treated with angiotensin-converting-enzyme inhibitors. RADIANCE Study. N Engl J Med. 1993 Jul 1;329(1):1–7. 
  11. Wenger TL. Experience with digoxin immune Fab (ovine) in patients with renal impairment. Am J Emerg Med. 1991 Mar;9(2 Suppl 1):21–23; discussion 33–34. 
  12. Rajpal S, Beedupalli J, Reddy P. Recrudescent digoxin toxicity treated with plasma exchange: a case report and review of literature. Cardiovasc Toxicol. 2012 Dec;12(4):363–8. 
  13. Hazara AM. Recurrence of digoxin toxicity following treatment with digoxin immune fab in a patient with renal impairment. QJM Mon J Assoc Physicians. 2013 Sep 27; 
  14. Centers for Medicare & Medicaid Services. Medicare Provider Charge Data Overview [Internet]. Baltimore (MD): CMS; 2013 [cited 2013 Dec 24]. Available from: http://www.cms.gov/Research-Statistics-Data-and-Systems/Statistics-Trends-and-Reports/Medicare-Provider-Charge-Data/index.html

Sunday, December 29, 2013

The year in review: top posts from 2013

With the year quickly wrapping up, here is a look back at the top three posts from 2013:
  1. Cocaine and beta blockers: all it's cracked up to be?
    The proposed pharmacologic interaction between cocaine and beta blockers is one of the most oft-quoted, if not controversial, teaching points in all of cardiovascular medicine... but is there evidence to support it?
    Posted May 5, 2013

  2. The trouble with diltiazem infusions
    Few therapies are more problematic in the cardiac intensive care unit than diltiazem infusions. Although effective for heart rate control, extended infusions are notorious for their adverse effects, including refractory bradycardia and hypotension.
    Posted August 11, 2013

  3. No love lost for labetalol infusions: risks of prolonged use
    Two cases where patients experienced life-threatening events prompted this review of continuous infusion labetalol and the dangers associated with prolonged use.
    Posted December 26, 2013

Apparently it was a good (or bad?) year for AV nodal blockers.

Thursday, December 26, 2013

No love lost for labetalol infusions: risks of prolonged use

In the past three months, two patients transferred to our institution have experienced life-threatening complications as a consequence of continuous infusions of intravenous (IV) labetalol. The first presented to an outside facility with hypertensive emergency, where he was initiated on a labetalol infusion. By the time he was transferred to us the next morning, the infusion had been continued for nearly 36 hours (over 20 times the maximum recommended dose) and he subsequently developed hypotension refractory to multiple vasopressors. He also had complete loss of neurologic function, although it was unclear whether this resulted from his initial presentation or anoxic injury as a consequence of profound hypotension. The second case involved a patient transferred to one of our services for the management of an unrelated medical condition after having received cardiopulmonary resuscitation and targeted temperature management at an outside facility for bradycardic arrest secondary to a prolonged labetalol infusion.

Labetalol is a beta blocker with potent antihypertensive effects, and it may be administered orally or intravenously. This latter feature makes it especially useful for the acute management of elevated blood pressure, although whether prompt treatment is necessary outside the setting of hypertensive urgency or emergency remains an area of controversy [1]. Nevertheless, it is one of the most frequently used agents in this scenario. Labetalol is one of the few drugs where the IV and oral formulations have different pharmacologic effects, with a β:α effect ratio of 7:1 and 3:1 for the IV and oral forms, respectively.

Labetalol may be administered as a slow continuous infusion for the purpose of a rapid load and transition to intermittent oral dosing. However, its availability as a continuous infusion often deceives clinicians into selecting it as a maintenance antihypertensive infusion. With a half-life of 5-8 hours, labetalol accumulates rapidly during continuous administration. For this reason, the maximum cumulative dose of IV labetalol is 300 mg, based on how the drug was studied in clinical trials. At usual infusion rates (1-2 mg/min), the maximum dose is reached in only 2.5-5 hours, at which point the patient should be transitioned to oral therapy (or at least intermittent IV boluses). Prolonged infusions substantially increase the risk of refractory beta blockade, which may result in profound bradycardia, hypotension, or cardiovascular collapse [2], as observed in the two cases mentioned above.
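The timing follows directly from the infusion rate. A trivial sketch (Python; the function name is mine) makes the point that the 300 mg cap arrives far sooner than many clinicians expect:

```python
def minutes_to_max_cumulative(rate_mg_min: float,
                              max_cumulative_mg: float = 300.0,
                              bolus_mg: float = 0.0) -> float:
    """Minutes of infusion until the maximum cumulative IV labetalol
    dose is reached, counting any initial bolus toward the total."""
    return (max_cumulative_mg - bolus_mg) / rate_mg_min

# At the usual 1-2 mg/min, the cap is reached in 2.5-5 hours
print(minutes_to_max_cumulative(1.0) / 60)                 # 5.0 hours
print(minutes_to_max_cumulative(2.0) / 60)                 # 2.5 hours
print(minutes_to_max_cumulative(2.0, bolus_mg=20.0) / 60)  # ~2.3 hours after a 20 mg bolus
```

A 36-hour infusion like the one in the first case above represents many multiples of this cumulative limit.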

Examples of prolonged infusions being used without adverse sequelae exist in the literature [3,4], but the risks often outweigh the benefits given the availability of viable alternatives in most clinical scenarios.

If a continuous IV labetalol infusion must be used, a bolus of 20-40 mg should be given initially, followed by an infusion of 1-3 mg/min up to a maximum cumulative dose of 300 mg. As with diltiazem infusions (see why these are problematic here), a labetalol infusion cannot be rapidly titrated for effect. Once the maximum cumulative dose is reached, therapy should be transitioned to oral dosing (starting at 100-200 mg two to three times daily) or intermittent intravenous boluses (20-80 mg every 6-8 hours).

References
  1. Weder AB. Treating acute hypertension in the hospital: a Lacuna in the guidelines. Hypertension. 2011 Jan;57(1):18–20.
  2. Fahed S, Grum DF, Papadimos TJ. Labetalol infusion for refractory hypertension causing severe hypotension and bradycardia: an issue of patient safety. Patient Saf Surg. 2008;2:13.
  3. Goldsmith TL, Barker DE, Strodel WE. Prolonged labetalol infusion for management of severe hypertension and tachycardia in a critically ill trauma patient. DICP Ann Pharmacother. 1990 Mar;24(3):235–8. 
  4. Vaughan LM, Sudduth CD, Sahn SA. Long-term continuous infusion of labetalol. Chest. 1991 Feb;99(2):522.

Saturday, December 7, 2013

ENGAGE-AF: Me too! Or four, rather… edoxaban represents yet another alternative to warfarin in patients with atrial fibrillation

This entry is the fourth part of a series on late-breaking clinical trials from the American Heart Association Scientific Sessions 2013. For a list of all reviewed trials, click here.

Summary:
In the Effective Anticoagulation with Factor Xa Next Generation in Atrial Fibrillation (ENGAGE-AF) trial [1], patients with atrial fibrillation and a CHADS2 score of 2 or higher were randomized in a double-blind, double-dummy fashion to high-dose edoxaban (60 mg daily), low-dose edoxaban (30 mg daily), or warfarin titrated to an INR of 2-3. Patients with an estimated creatinine clearance (CrCl) < 30 mL/min and those taking dual antiplatelet therapy were excluded. The dose of edoxaban was halved if patients were < 60 kg, had a CrCl of 30-50 mL/min, or if a strong P-glycoprotein inhibitor (i.e., dronedarone, quinidine, verapamil) was added. Notable baseline characteristics include a median age of 72 years, a history of stroke in 28.3%, and heart failure in 57.4%; over three-fourths of patients had a CHADS2 score of 3 or less. The median time in therapeutic range (TTR) for those on warfarin was 68.4%.
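The protocol's dose-reduction logic can be captured in a few lines. A sketch (Python; the function and parameter names are mine, encoding only the criteria listed above):

```python
def edoxaban_dose_mg(assigned_dose_mg: int, weight_kg: float,
                     crcl_ml_min: float, on_strong_pgp_inhibitor: bool) -> int:
    """Halve the assigned ENGAGE-AF dose (60 or 30 mg daily) if any
    protocol dose-reduction criterion applies: weight < 60 kg,
    CrCl 30-50 mL/min, or a strong P-glycoprotein inhibitor."""
    if weight_kg < 60 or 30 <= crcl_ml_min <= 50 or on_strong_pgp_inhibitor:
        return assigned_dose_mg // 2
    return assigned_dose_mg

print(edoxaban_dose_mg(60, 55, 80, False))  # 30 (low body weight)
print(edoxaban_dose_mg(30, 70, 45, False))  # 15 (reduced CrCl)
print(edoxaban_dose_mg(60, 70, 80, False))  # 60 (no criterion met)
```

Note that CrCl < 30 mL/min was an exclusion criterion, so the renal check only ever sees values of 30 mL/min or above in the trial population.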

In terms of the primary endpoint of stroke or systemic embolism, both doses of edoxaban were non-inferior to warfarin (1.18% with high-dose edoxaban and 1.61% with low-dose edoxaban vs. 1.50% with warfarin [p < 0.001 and p = 0.005, respectively]). While a trend favoring edoxaban was observed in the superiority analysis, it did not reach statistical significance. Rates of major bleeding were lower with both doses of edoxaban (2.75% and 1.61% with high and low-dose edoxaban vs. 3.34% with warfarin, both p < 0.001), as were rates of intracranial hemorrhage. Gastrointestinal (GI) bleeding was higher with edoxaban. Improvements in several key secondary endpoints were also observed with edoxaban, including death from cardiovascular causes. Patients receiving the lower dose of edoxaban had a higher rate of ischemic strokes, while rates were similar among those on warfarin and high-dose edoxaban.

Commentary:
With a few exceptions, the findings of ENGAGE-AF are almost identical to those observed in the comparison of rivaroxaban, another factor Xa inhibitor, and warfarin in the ROCKET-AF trial [2]. Most experts argue that it is not possible to compare the new oral anticoagulants with each other because none were studied head-to-head. While this is true from the standpoint of academic purity, it does little to guide the clinician faced with the challenge of selecting an agent for an individual patient. For this latter case, a discussion of the similarities and differences is important.

Similarities between ENGAGE-AF and ROCKET-AF include:
  • Trial design: both were randomized, double-blind, double-dummy trials
  • Inclusions/exclusions: similar thresholds for renal function, high-risk exclusions (e.g., patients on concomitant dual antiplatelet therapy were excluded) 
  • Drug dosing: once daily dosing vs. warfarin titrated to an INR of 2-3 
  • Efficacy: both shown to be non-inferior to warfarin 
  • Safety: both safer than warfarin in the severest of safety endpoints (fatal bleeding, intracranial hemorrhage), but higher rates of GI bleeding

A few key differences:
  • Study population: on average, the patients enrolled in ENGAGE-AF were healthier than those in ROCKET-AF, as demonstrated by lower median CHADS2 scores and fewer patients with a history of stroke or heart failure
  • Warfarin management: warfarin was more optimally managed in ENGAGE-AF based on a TTR of 68.4% compared to 55% in ROCKET-AF; that being said, INR control is closely related to overall health status, so the fact that the INR was less problematic in the healthier population of ENGAGE-AF is not altogether surprising 
  • Safety endpoints: although both drugs reduced the incidence of severe bleeding, rivaroxaban was similar to warfarin in the primary safety endpoint of major bleeding, whereas edoxaban was safer in ENGAGE-AF 
  • Transition at study termination: the investigators of ENGAGE-AF should be applauded for the lessons learned from ROCKET-AF, where rebound thrombotic events were observed among patients being transitioned from rivaroxaban to open-label warfarin at conclusion of the trial. At the end of ENGAGE-AF, a carefully monitored transition from edoxaban to warfarin was performed, resulting in no differences in rebound thrombotic events 
  • Differences in secondary endpoints: edoxaban showed improvements in some secondary endpoints (e.g., cardiovascular death, other composites)

So now that edoxaban will represent the fourth alternative to warfarin, what are clinicians to do? To be honest, it is my personal opinion that edoxaban offers few if any clinical advantages over rivaroxaban. Despite being studied in a healthier population (i.e., one where differences in drug metabolism and clearance are less likely to complicate management), edoxaban was still only non-inferior to warfarin. While it was safer than warfarin in terms of major bleeding, this could again be attributed to the healthier nature of the patient population. Both rivaroxaban and edoxaban reduced the incidence of the severest safety endpoints – fatal bleeding and intracranial hemorrhage.

Compared to the other new oral anticoagulants, apixaban and dabigatran, my feelings on edoxaban are similar to those on rivaroxaban, which I wrote about in this entry in November 2011 and again in October 2012. I still tend to favor apixaban as my first-line alternative to warfarin based on it having the most comparative advantages, although I would still consider dabigatran in younger patients with normal renal function. The whole notion that the once-daily dosing made possible by rivaroxaban, and now edoxaban, is better suited to less compliant patients remains a dangerous proposition (see my note at the end of the selection tool posted in October 2012). For patients likely to miss doses, a once-daily drug that only lasts half a day actually offers less protection from stroke and systemic embolism than one taken twice daily.

Bottom line:
Edoxaban is non-inferior to warfarin for preventing stroke and systemic embolism in patients with atrial fibrillation while reducing the risk of major bleeding.

References
  1. Giugliano RP, Ruff CT, Braunwald E, Murphy SA, Wiviott SD, Halperin JL, et al. Edoxaban versus warfarin in patients with atrial fibrillation. N Engl J Med. 2013 Nov 28;369(22):2093–104.
  2. Patel MR, Mahaffey KW, Garg J, Pan G, Singer DE, Hacke W, et al. Rivaroxaban versus warfarin in nonvalvular atrial fibrillation. N Engl J Med. 2011 Sep 8;365(10):883–91.

Saturday, November 30, 2013

COAG, EU-PACT, and a tale of three trials: pharmacogenomic-guided warfarin dosing

This entry is the third part of a series on late-breaking clinical trials from the American Heart Association Scientific Sessions 2013. For a list of all reviewed trials, click here.

Summaries:
COAG: In the Clarification of Optimal Anticoagulation through Genetics (COAG) study [1], 1015 patients at sites throughout the US were randomized to an initial warfarin dosing strategy that incorporated both pharmacogenomic and clinical factors or clinical factors only. After 5 days, both arms were managed according to a standardized adjustment algorithm. At 4 weeks of follow-up, there was no difference in the primary outcome of INR time-in-therapeutic range (TTR) between the two groups (45.2% vs. 45.5% in the pharmacogenomic and clinically-guided group, respectively, p = 0.91). A difference was found only in the subgroup of self-identified black patients, where TTR was lower with the pharmacogenomic-guided strategy (35.2% vs. 43.5% with control, p = 0.01). No differences in thrombotic or bleeding events were observed.
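As an aside for readers unfamiliar with the endpoint, TTR in trials like COAG is typically computed by the Rosendaal linear-interpolation method, which assumes the INR changes linearly between consecutive measurements and counts the fraction of each interval spent in range. A minimal sketch (Python; my own implementation, not the trial's code):

```python
def ttr_rosendaal(days, inrs, low=2.0, high=3.0):
    """Percent time in therapeutic range by Rosendaal linear interpolation:
    for each pair of consecutive INR measurements, count the fraction of
    the interval during which the interpolated INR lies within [low, high]."""
    in_range_days = total_days = 0.0
    for (d0, i0), (d1, i1) in zip(zip(days, inrs), zip(days[1:], inrs[1:])):
        span = d1 - d0
        total_days += span
        if i0 == i1:
            in_range_days += span if low <= i0 <= high else 0.0
            continue
        # times (as fractions of the interval) at which the line crosses the range bounds
        t_lo = (low - i0) / (i1 - i0)
        t_hi = (high - i0) / (i1 - i0)
        t0, t1 = sorted((t_lo, t_hi))
        overlap = max(0.0, min(t1, 1.0) - max(t0, 0.0))
        in_range_days += overlap * span
    return 100.0 * in_range_days / total_days

# INR rises linearly from 1.5 to 3.5 over 10 days: in range (2-3) half the time
print(ttr_rosendaal([0, 10], [1.5, 3.5]))  # 50.0
```

The method's sensitivity to measurement frequency and follow-up duration is one reason the 4-week versus 12-week comparisons discussed below matter.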

EU-PACT trials: In a similarly designed study conducted by the European Pharmacogenetics of Anticoagulant Therapy (EU-PACT) group [2], 548 patients were randomized to a warfarin dosing strategy that used both pharmacogenomic and clinical factors or clinical factors only. After the first 5-7 days, patients were managed according to local standards of practice. At a follow-up of at least 10 weeks, no difference in the primary endpoint of TTR was observed between the two groups (61.6% vs. 60.2% in the pharmacogenomic and clinically-guided groups, respectively, p = 0.52). An improvement in TTR was observed with pharmacogenomic-guided dosing at 4 weeks (52.8% vs. 47.5% with control, p = 0.02), a pre-specified secondary endpoint of the study. No differences were observed in specific subgroups, nor were there any differences in clinical outcomes.

In a second trial conducted by the EU-PACT group [3], 455 patients were randomized to a dosing strategy that combined pharmacogenomic and clinical factors or to usual care, which was defined as an initial dose of 10 mg, 5 mg, and 5 mg on days 1-3 for patients < 75 years of age and 5 mg daily on days 1-3 for patients 75 years and older. After 5 days, patients were managed according to local standards of care. After a follow-up period of 12 weeks, the pharmacogenomic-guided strategy improved TTR by a mean of 7% (67.4% vs. 60.3% with usual care, 95% CI 3.3-10.6, p < 0.001). The median time to reach a therapeutic INR was 8 days shorter in the pharmacogenomic-guided group (21 days vs. 29 days with usual care, p < 0.001). No differences in clinical endpoints were observed.

Commentary:
Let me be first to admit that I missed the fact that there were three studies of pharmacogenomic-guided warfarin dosing strategies published simultaneously. The major headline from AHA13 was that this strategy did not confer significant improvements in the management of therapy, and I walked away thinking that was the end of the story. I even tweeted so. However, based on the findings of the third study, I think further discussion of these trials is warranted.

Note: For the sake of simplicity, I will refer to the EU-PACT study that compared pharmacogenomic plus clinical factors versus clinical factors alone as EU-PACT-1 and the study comparing pharmacogenomic plus clinical factors to usual care as EU-PACT-2.

First, the COAG and EU-PACT-1 trials were almost identical in design, with the main difference being their length of follow-up (4 weeks in COAG vs. a minimum of 10 weeks and goal of 12 weeks in EU-PACT-1) [1,2]. Both obtained genotypes for the major polymorphisms expected to influence warfarin management (i.e., CYP2C9 and VKORC1) and compared 5 days of an algorithm that combined this information with clinical features already known to impact warfarin dosing (intervention) versus an algorithm that used only clinical features (control). While this design was essential for determining whether the addition of genetic information would influence outcomes, one should be cautious not to interpret the control arm as representing the standard of care for most practices. As an example of how complex the control algorithm was, the following equation was used to determine the initial dose of warfarin in COAG:

Dose (in mg/day) = exp[0.613 – (0.0075 x age) + (0.156 if black race) + (0.108 if smokes) + (0.425 x body surface area) – (0.257 if on amiodarone) + (0.216 x target INR) + (0.0784 if indication for warfarin is DVT/PE)]
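For the curious, the equation translates directly into code. A sketch (Python; parameter names are mine) of the clinical-factors-only initial dose:

```python
import math

def coag_clinical_dose_mg_day(age_years: float, black: bool, smokes: bool,
                              bsa_m2: float, on_amiodarone: bool,
                              target_inr: float, dvt_pe: bool) -> float:
    """Initial warfarin dose (mg/day) from the COAG clinical-factors-only
    algorithm printed above: a linear predictor, then exponentiated."""
    x = (0.613
         - 0.0075 * age_years
         + (0.156 if black else 0.0)
         + (0.108 if smokes else 0.0)
         + 0.425 * bsa_m2
         - (0.257 if on_amiodarone else 0.0)
         + 0.216 * target_inr
         + (0.0784 if dvt_pe else 0.0))
    return math.exp(x)

# e.g., a 65-year-old non-smoking white patient, BSA 2.0 m2, target INR 2.5,
# atrial fibrillation indication, no amiodarone
print(round(coag_clinical_dose_mg_day(65, False, False, 2.0, False, 2.5, False), 1))  # ~4.6 mg/day
```

Even without the genetic terms, this "control" already adjusts for seven patient-level factors, which is the point of the paragraph that follows.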

While these and other dosing calculations have been validated in smaller studies, I am not aware of any large robust clinical trials comparing them to usual care. Even so, they are not included in current practice guidelines and are therefore unlikely to be widely used.  Whether or not they have been validated, they certainly incorporate many of the features one should consider when managing warfarin therapy.  Given their inclusion in the dose determinations for these two studies, I am not surprised that the addition of genetic characteristics failed to have an incremental impact on TTR control in a study of only a few hundred patients.  I imagine many thousands of patients would be necessary to distinguish the impact of genetic characteristics on top of all of the other adjustments already included in the equation.

Although no differences in TTR were observed at 4 weeks in COAG, an improvement with the pharmacogenomic-guided dosing strategy was observed at this time interval in EU-PACT-1 (although the primary analysis was conducted at 12 weeks). The cause of this discrepancy is unclear but the poor performance of the pharmacogenomic-guided algorithm among blacks in COAG (40% of the study population) may have been a contributor.

The design of EU-PACT-2 represents a more accurate comparison of a pharmacogenomic-guided dosing strategy and usual care [3]. While some ambulatory care practices and inpatient consultation services may utilize equations similar to the ones highlighted above, the vast majority of patients receiving a diagnosis that warrants anticoagulation therapy are initiated on a fixed dose with subsequent adjustments for changes in INR. The specific dose selected often incorporates many of the features included in these dosing equations, but not in a formal sense. When compared to this usual-care approach, EU-PACT-2 demonstrated that a strategy incorporating both pharmacogenomic and clinical factors improves TTR control. Unfortunately, because this trial did not include an arm in which patients were managed using clinical factors alone, it is impossible to know the true incremental impact of genetic information on warfarin dosing.

Finally, all three trials were conducted mostly at large academic medical centers with access to specialist providers, including pharmacists specifically trained in the management of anticoagulation therapy.  Although the intervention was largely blinded for the first 5-7 days, the specialized care provided at these centers may have influenced the potential impact of the pharmacogenomic-guided strategy by the end of study follow-up at 4-12 weeks. What would have been more interesting to know is whether this strategy would have conferred improvements among patients managed by general care practitioners or those discharged from the hospital who were unable to follow up for INR management until 1-2 weeks later. Perhaps my practice environment has biased me, but I feel these latter scenarios are far more common.

Bottom line:
The addition of genetic information to an algorithm already incorporating clinical features known to influence warfarin dosing does not improve management at 12 weeks, although differences may be observed in certain subgroups or at earlier time points. On the other hand, an algorithm incorporating both genetic and clinical information significantly improves warfarin management when compared to usual care.


References
  1. Kimmel SE, French B, Kasner SE, Johnson JA, Anderson JL, Gage BF, et al. A Pharmacogenetic versus a Clinical Algorithm for Warfarin Dosing. N Engl J Med. 2013 Nov 19;
  2. Verhoef TI, Ragia G, de Boer A, Barallon R, Kolovou G, Kolovou V, et al. A Randomized Trial of Genotype-Guided Dosing of Acenocoumarol and Phenprocoumon. N Engl J Med. 2013 Nov 19;
  3. Pirmohamed M, Burnside G, Eriksson N, Jorgensen AL, Toh CH, Nicholson T, et al. A Randomized Trial of Genotype-Guided Dosing of Warfarin. N Engl J Med. 2013 Nov 19;

Saturday, November 23, 2013

TOPCAT: not purrfect, but a signal of benefit with spironolactone in heart failure with preserved ejection fraction

This entry is the second part of a series on late-breaking clinical trials from the American Heart Association Scientific Sessions 2013. For a list of all reviewed trials, click here.

Note: details of this trial have not yet been published, so the following has been compiled from ClinicalTrials.gov and results presented at AHA13.

Summary:
In the Treatment of Preserved Cardiac Function Heart Failure with an Aldosterone Antagonist (TOPCAT) trial, 3445 patients with heart failure with preserved ejection fraction (HFpEF) were randomized in a double-blind fashion to spironolactone (15 mg titrated to 30-45 mg per day) or placebo. Investigators defined HFpEF as the presence of at least one sign and symptom of heart failure and EF ≥ 45%. Patients were also required to have controlled systolic blood pressure (SBP < 140 mmHg, or < 160 mmHg if taking 3 or more antihypertensive medications) and serum potassium < 5.0 mEq/L. The mean systolic blood pressure at enrollment was 129.2±14.0 mmHg and potassium was 4.3±0.4 mEq/L. Other notable characteristics at baseline included the presence of hypertension in 91% of patients and chronic kidney disease in 39%. Additionally, 84% of patients were taking an ACE inhibitor (ACEi) or angiotensin receptor blocker (ARB), and 82% were taking a diuretic.

After an average follow-up of 3.3 years, spironolactone failed to reduce the primary endpoint, a composite of cardiovascular mortality, cardiac arrest, or heart failure hospitalizations (18.6% vs. 20.4%, HR 0.89 (95% CI 0.77-1.04), p = 0.138). Spironolactone did reduce heart failure hospitalizations alone (12.0% vs. 14.2% with placebo, HR 0.83 (95% CI 0.69-0.99), p = 0.042), but it also doubled the rate of hyperkalemia (defined as serum potassium > 5.5 mEq/L) (18.7% vs. 9.1% with placebo, p < 0.001) and increased the incidence of renal failure (data not available at the time of writing). In a post-hoc analysis of TOPCAT, regional variability was observed: patients in the Eastern Hemisphere (primarily Russia and the Republic of Georgia) did not benefit from the addition of spironolactone, while a slight benefit was observed among patients in the Americas (HR 0.82 (95% CI 0.69-0.98)).

Commentary:
The results of TOPCAT follow a consistent theme in patients with HFpEF – no therapies have been shown to have a substantial impact on disease progression and it remains an incredibly difficult condition to treat. Optimism for spironolactone had been high due in part to the results of Aldo-DHF, where its use resulted in improvements in left ventricular function, although no differences in clinical endpoints were observed [1]. The authors attributed this lack of benefit to the short duration of the study and its relatively young, healthy population. However, as TOPCAT revealed, spironolactone does not appear to confer significant benefit even when a larger and sicker population is followed for a longer period of time.

Several findings from TOPCAT are worth further comment. Although the results were largely negative, the fact that spironolactone reduced heart failure hospitalizations may be a signal of benefit in carefully selected patients, such as those who can be monitored closely for hyperkalemia and changes in renal function. Although some experts have heralded it as the first study to show a benefit in HFpEF, this is not entirely accurate, as candesartan demonstrated similar improvements in heart failure hospitalizations as a secondary endpoint of the CHARM-Preserved trial [2]. Nonetheless, it does represent another potential approach in a patient population with so few therapeutic options.

Some of the baseline characteristics of patients in TOPCAT also warrant further discussion. The vast majority had hypertension, which is not altogether surprising given its role in the pathophysiology of HFpEF. However, the fact that patients were required to have controlled hypertension may have limited the impact of spironolactone. I recognize this probably had to be done to assess whether the potential impact of spironolactone was independent of its effects on blood pressure. However, given the number of trials showing benefit with spironolactone in refractory hypertension (as well as the role of hypertension in HFpEF), perhaps spironolactone would have been a better choice than the medication therapies patients were required to be taking in order to meet inclusion criteria. For example, the incidence of hyperkalemia (and its effect on the primary endpoint) may not have been so high had so many patients not been taking ACEi or ARB therapy.

The higher rate of renal failure observed in the spironolactone arm is also intriguing, as a similar result was observed in Aldo-DHF, but this finding is not consistent with trials of aldosterone antagonists in patients with heart failure with reduced ejection fraction (HFrEF). Many patients with HFpEF are notoriously preload-dependent, which can complicate strategies for maintaining appropriate volume status. Spironolactone is a weak diuretic at the low doses commonly used in HFrEF and usually has only a minimal impact on volume status, even when combined with loop diuretics. However, could it have had a more substantial impact in patients with HFpEF, given the more tenuous nature of their volume status? In TOPCAT, 4 out of every 5 patients were already on a diuretic at baseline -- could the addition of spironolactone have been enough to tip the balance toward hypovolemia and subsequent renal failure?

All in all, I would still consider spironolactone in select patients with HFpEF (i.e., those who can be monitored for changes in serum potassium concentrations and renal function). I would especially consider its use in patients with HFpEF and refractory hypertension as well as those for whom only minor diuresis is necessary to maintain volume status. That being said, I am definitely cautious about its use in combination with diuretics or ACEi or ARB therapy. In fact, given the lack of benefit shown with these other agents, I think it would be reasonable to consider spironolactone first based on its potential for reducing heart failure hospitalizations (and potential for adverse events when combined with other agents).

Bottom line:
Similar to the agents studied before it, spironolactone does not substantially impact clinical outcomes in patients with HFpEF. It appears to reduce heart failure hospitalizations, but does so at the expense of increased rates of hyperkalemia and renal failure. Accordingly, its use should only be considered in select patients. 

References
  1. Edelmann F, Wachter R, Schmidt AG, Kraigher-Krainer E, Colantonio C, Kamke W, et al. Effect of spironolactone on diastolic function and exercise capacity in patients with heart failure with preserved ejection fraction: the Aldo-DHF randomized controlled trial. JAMA. 2013 Feb 27;309(8):781-91.
  2. Yusuf S, Pfeffer MA, Swedberg K, Granger CB, Held P, McMurray JJ, et al. Effects of candesartan in patients with chronic heart failure and preserved left-ventricular ejection fraction: the CHARM-Preserved Trial. Lancet. 2003 Sep;362(9386):777-81.

Thursday, November 21, 2013

ROSE AHF: Mostly thorns for low-dose dopamine, nesiritide in acute decompensated heart failure and renal impairment

This entry is the first part of a series on late-breaking clinical trials from the American Heart Association Scientific Sessions 2013. For a list of all reviewed trials, click here.

Summary: 
In the Renal Optimization Strategies Evaluation in Acute Heart Failure (ROSE AHF) trial [1], patients with acute decompensated heart failure (ADHF) and renal impairment were randomized in a double-blind fashion to 72 hours of low-dose dopamine (2 mcg/kg/min), low-dose nesiritide (0.005 mcg/kg/min), or placebo. Patients were eligible for enrollment if they had at least one sign and symptom of ADHF (irrespective of ejection fraction) and an estimated glomerular filtration rate (eGFR) of 15-60 mL/min/1.73 m2. Baseline characteristics were similar between the three groups with a median systolic blood pressure of 115 mmHg, median ejection fraction (EF) of 33% (over two-thirds with EF < 50%), and eGFR of 42 mL/min/1.73 m2.

Low-dose dopamine failed to produce a difference in the co-primary endpoints of cumulative urine output (UOP) or change in cystatin C at 72 hours compared to placebo (cumulative UOP of 8254 mL vs. 8296 mL, respectively, p = 0.59). Drug discontinuation was similar between the two groups, although low-dose dopamine was more likely to be discontinued for tachycardia (7.2% vs. 0.9% with placebo, p < 0.001) while placebo was discontinued more frequently for hypotension (10.4% vs. 0.9% with low-dose dopamine, p < 0.001). Likewise, low-dose nesiritide failed to confer significant differences in the co-primary endpoints (cumulative UOP of 8574 mL vs. 8296 mL with placebo, p = 0.49). Compared to placebo, hypotension was more common in the low-dose nesiritide group (18.8% vs. 10.4%, p = 0.07). Results for the co-primary endpoints were similar across subgroups with the exception of EF: patients with preserved EF tended to do better with placebo than with dopamine (p = 0.01 for interaction), while nesiritide appeared to benefit those with reduced EF, although this difference was not statistically significant. No differences in clinical endpoints (e.g., symptom relief, death, rehospitalization) were observed between any of the groups.

Commentary: 
Those who follow my blog know that I am no fan of using low-dose dopamine for the purposes of renoprotection in ADHF (previous entries here and here). While I am not opposed to its use as a mixed inotrope/vasopressor (i.e., for patients in whom peripheral vasodilation from a traditional inotrope might compromise hemodynamics), the renoprotective properties of low-dose dopamine have been widely discredited [2]. Given the lack of benefit observed in ROSE AHF, hopefully this myth has been debunked once and for all. Importantly, dopamine not only failed to produce a significant difference in the co-primary endpoints; it also resulted in lower rates of hypotension and higher rates of tachycardia, indicating that even at doses as low as 2 mcg/kg/min, dopamine is not entirely selective for renal vascular beds.

Unfortunately, a signal of differential response between those with reduced versus preserved EF was observed (although not statistically significant), which may provide just enough justification to continue evaluating this approach in select subgroups. Although the inclusion of a low-dose nesiritide arm in this study tested an interesting hypothesis, the fact that it did not produce a meaningful difference in outcomes is not altogether surprising.

Bottom line: 
Neither low-dose dopamine nor low-dose nesiritide provide renoprotective effects in patients with ADHF and renal impairment.

References
  1. Chen HH, Anstrom KJ, Givertz MM, Stevenson LW, Semigran MJ, Goldsmith SR, et al. Low-Dose Dopamine or Low-Dose Nesiritide in Acute Heart Failure With Renal Dysfunction: The ROSE Acute Heart Failure Randomized Trial. JAMA. 2013 Nov 18;
  2. Cicci JD, Reed BN, McNeely EB, Oni-Orisan A, Patterson JH, Rodgers JE. Acute Decompensated Heart Failure: Evolving Literature and Implications for Future Practice. Pharmacotherapy. 2013 Nov 11;

Wednesday, November 20, 2013

Clinical Trial Highlights from the American Heart Association Scientific Sessions 2013 (AHA13)

Over the next couple of weeks, I will be providing trial summaries and commentary on all of the medication-related late-breakers from the American Heart Association Scientific Sessions (AHA13). Although there were few breakthroughs presented at the meeting, Marc Pfeffer, MD, PhD, professor at the Harvard Medical School and lead investigator for the TOPCAT trial, cautioned attendees to refrain from calling the results negative, as the true purpose of research is to learn, even when a therapy proves not to be beneficial.

Keep checking back for updates!

Posted as of December 7, 2013:

Sunday, September 22, 2013

Should tolvaptan be used routinely for hyponatremia in patients with heart failure? Na.

One can hardly open a medical publication without seeing an advertisement for Otsuka's tolvaptan (Samsca®), an oral vasopressin antagonist approved for the management of hyponatremia in the setting of heart failure. Despite only minimal improvements in clinical trials and new warnings issued by the US Food & Drug Administration (FDA), the use of tolvaptan remains a topic of interest.

Hyponatremia is common among hospitalized patients and is associated with poor prognosis among those with heart failure [1]. What is less clear, however, is whether this relationship is a result of cause-and-effect or merely correlation. The latter is worth investigating, as this has been observed with other surrogate markers that were once considered potential therapeutic targets, with hemoglobin being one of the most recent to be called into question [2].

To date, the available evidence suggests that, while serum sodium concentrations can be improved with vasopressin antagonist therapy, these changes do not appear to confer meaningful differences in clinical outcomes. In SALT, a trial evaluating the use of tolvaptan in patients with hyponatremia (a third of whom had heart failure), patients randomized to tolvaptan experienced improvements in urine output and serum sodium concentrations, but these improvements persisted only while patients remained on therapy; within a week of discontinuing tolvaptan, serum sodium concentrations returned to baseline [3]. In EVEREST, a trial specifically enrolling patients with acute decompensated heart failure (irrespective of serum sodium concentrations), those randomized to tolvaptan experienced greater reductions in body weight (a difference of less than 1 kg versus placebo) and improvements in some but not all heart failure signs and symptoms [4]. Tolvaptan failed to impart any clinically meaningful differences in mortality, hospitalizations, worsening heart failure, or quality of life [5]. Serum sodium concentrations improved initially, but these differences dissipated with time.

In other words, tolvaptan and other vasopressin antagonists appear to have no appreciable effect on the underlying pathophysiology of heart failure. While serum sodium concentrations can be improved, recurrence of hyponatremia should be expected following cessation of therapy if underlying causes (e.g., reduced renal perfusion, hypervolemia, etc.) are not addressed. Coupled with emerging evidence of liver injury that eventually prompted FDA to limit its use to less than 30 days (and avoid it altogether in patients with evidence of liver impairment), tolvaptan has only limited utility in patients with heart failure.

There are a couple of scenarios where a short course of tolvaptan may be considered:
  • Patients with symptomatic hyponatremia at any serum sodium concentration; or,
  • As a temporizing measure (i.e., up to 5 days or so) to stabilize critically low serum sodium concentrations (< 125 mEq/L, to prevent patients from becoming symptomatic) while underlying causes are corrected, i.e., discontinuation of potentially offending drugs (select antipsychotics and antidepressants, thiazide diuretics), optimization of standard heart failure therapies, addition of vasodilators or inotropes to improve renal perfusion, or aggressive diuresis to correct hypervolemia.
That being said, there is evidence that suggests small boluses of hypertonic saline can improve hyponatremia in these scenarios without worsening fluid balance [6].

In summary, routine use of tolvaptan should be avoided, as it both fails to improve long-term clinical outcomes and represents an incredibly expensive strategy for improving symptoms and/or treating a surrogate marker that has yet to be associated with improved clinical endpoints. Although the price of tolvaptan has likely come down since its introduction to the market, at one time its use represented spending about $50 for each mEq/L increase in serum sodium concentration per day, and about $3 for each additional mL of urine output per day.

Note: I borrowed the title for this entry from an old Chemistry Cat meme, so let me end by giving credit (or blame?) to whomever it is due.

References
  1. Adams KF Jr, Fonarow GC, Horton DP, et al; ADHERE Scientific Advisory Committee and Investigators. Characteristics and outcomes of patients hospitalized for heart failure in the United States: rationale, design, and preliminary observations from the first 100,000 cases in the Acute Decompensated Heart Failure National Registry (ADHERE). Am Heart J. 2005 Feb;149(2):209-16.
  2. Swedberg K, Young JB, van Veldhuisen DJ, et al; for the RED-HF Investigators. Treatment of anemia with darbepoetin alfa in systolic heart failure. N Engl J Med. 2013 Mar 28;368(13):1210-9.
  3. Schrier RW, Gross P, Orlandi C, et al; for the SALT Investigators. Tolvaptan, a selective oral vasopressin V2-receptor antagonist, for hyponatremia. N Engl J Med. 2006 Nov 16;355(20):2099-112.
  4. Gheorghiade M, Konstam MA, Orlandi C, et al; Efficacy of Vasopressin Antagonism in Heart Failure Outcome Study With Tolvaptan (EVEREST) Investigators. Short-term clinical effects of tolvaptan, an oral vasopressin antagonist, in patients hospitalized for heart failure: the EVEREST Clinical Status Trials. JAMA. 2007 Mar 28;297(12):1332-43.
  5. Konstam MA, Gheorghiade M, Orlandi C, et al; Efficacy of Vasopressin Antagonism in Heart Failure Outcome Study With Tolvaptan (EVEREST) Investigators. Effects of oral tolvaptan in patients hospitalized for worsening heart failure: the EVEREST Outcome Trial. JAMA. 2007 Mar 28;297(12):1319-31.
  6. Licata G, Di Pasquale P, Paterna S, et al. Effects of high-dose furosemide and small-volume hypertonic saline solution infusion in comparison with a high dose of furosemide as bolus in refractory congestive heart failure: long-term effects. Am Heart J. 2003 Mar;145(3):459-66.

Saturday, August 17, 2013

Perspectives on the Bush stenting case: lifestyle modifications and the risk of cardiovascular disease

Significant controversy (examples here, here, and here) has surrounded whether former President George W. Bush should have undergone percutaneous coronary intervention (PCI) and stent placement. Many have called PCI an overly aggressive strategy based on his clinical presentation, citing studies that have shown no advantages with PCI among patients with stable coronary artery disease (CAD) [1]. However, the purpose of this entry is not to discuss the clinical appropriateness of the stent (as few individuals outside of the team taking care of Bush have the data to determine this), but instead how his case has refocused attention on the pathophysiology of CAD (for an elegant explanation of this, see John M's blog) and more importantly, how living a healthy lifestyle is not always the be-all end-all strategy for reducing one's cardiovascular risk.

First, let me be clear that the association between cardiovascular disease and many of the characteristic features of an unhealthy lifestyle (e.g., poor nutrition, excess sodium intake, physical inactivity) is undeniable. In fact, the growing prevalence of these traits is largely responsible for the rate at which cardiovascular disease has overtaken malnutrition and infectious diseases as the most common cause of worldwide morbidity and mortality.

Unfortunately, these associations have also been used to stigmatize many patients with cardiovascular disease as simply paying the dues for a lifetime of poor decisions, and how easily their problems could be "fixed" with healthier choices (or by that same token, why health care benefits should not be provided to them for their past indiscretions). Often these statements come from individuals who can easily afford a gym membership (or live in a neighborhood where it is safe enough to exercise outside), can purchase fresh foods (not to mention having the time to properly prepare them), or who were raised in homes or school systems where they were taught the importance of nutrition and exercise. In Bush's case, we have an individual who purportedly eats healthy, exercises regularly (he recently completed a 100-km bike ride), and has access to the best preventative care in the world, yet has CAD significant enough to at least warrant discussion of coronary stent placement.

While making healthy lifestyle decisions can undoubtedly reduce one's risk of cardiovascular disease, the risk never evaporates entirely. More importantly, the impact of these decisions on cardiovascular risk is a complex interplay of genetic, physiologic, and biochemical interactions, many of which we do not understand or have any influence upon. Even if we did reach consensus on what exactly constitutes a healthy lifestyle (for example, what should the daily limit of sodium be?), it is not clear that everyone would respond favorably, if at all. A recent example of this was observed in the Look AHEAD trial, where aggressive changes in diet and increased physical activity failed to improve outcomes among overweight patients with diabetes [2].

A predisposition to developing cardiovascular disease has already been well-characterized among several congenital disease states and conditions, such as type 1 diabetes, familial hypercholesterolemia, and a number of kidney disorders. Even cardiovascular risk factors traditionally characterized as being under the influence of lifestyle decisions (e.g., hypertension, type 2 diabetes) can be impacted significantly by underlying genetic differences.  For example, African Americans are known to demonstrate enhanced sodium retention as well as low plasma renin activity, making them more susceptible to hypertension and conferring differences in how they respond to certain classes of antihypertensive medications [3]. Similar effects have also been observed with diabetes, where both African Americans and American Indians have been shown to be at higher risk for developing insulin resistance compared to other ethnic groups [4].  While some are quick to point out the socioeconomic and cultural features that may lead to these differences, an independent association between ethnicity and disease often remains, even after controlling for dietary and other lifestyle factors.

In summary, while it is clear that therapeutic lifestyle modifications can have a significant impact on the development and progression of cardiovascular disease, it is not yet clear how many of these risk factors -- and to what extent -- are under our control. While we should emphasize to patients that healthy lifestyle decisions can be an effective strategy for reducing their risk (which I believe should also include attempts at removing barriers that would otherwise prevent them from making healthy choices), we should recognize that cardiovascular disease may still occur anyway, as it did in the case of former President Bush. Because it is capable of prevailing in the face of even the most intensive lifestyle interventions, cardiovascular disease should be viewed as a villain we all oppose, not as fair and just punishment for a few unhealthy decisions.

References
  1. Boden WE, O'Rourke RA, Weintraub WS, et al; for the COURAGE Trial Research Group. Optimal medical therapy with or without PCI for stable coronary disease. N Engl J Med. 2007 Apr 12;356(15):1503-16.
  2. Wing RR, Bolin P, Yanovski SZ, et al; for the Look AHEAD Research Group. Cardiovascular effects of intensive lifestyle intervention in type 2 diabetes. N Engl J Med. 2013 Jul 11;369(2):145-54.
  3. Gibbs CR, Beevers DG, Lip GY. The management of hypertensive disease in black patients. QJM. 1999 Apr;92(4):187-92.
  4. Steinberger J, Daniels SR; for the American Heart Association Atherosclerosis, Hypertension, and Obesity in the Young Committee (Council on Cardiovascular Disease in the Young); American Heart Association Diabetes Committee (Council on Nutrition, Physical Activity, and Metabolism). Obesity, insulin resistance, diabetes, and cardiovascular risk in children. Circulation. 2003 Mar 18;107(10):1448-53.

Sunday, August 11, 2013

The trouble with diltiazem infusions

My general opposition to diltiazem infusions was well-known at my previous institution, and several people have asked me to write an entry explaining why. Being a native of the South, writing in the form of a three-point sermon comes only naturally, and perhaps is a fitting way to write an entry on this sunny Sunday afternoon.

"[Intravenous diltiazem] is a terrible drug. It ought to be removed from the formulary."
- Head of electrophysiology at my previous institution

As I begin, I think it is important to clarify that much of my opposition to diltiazem infusions (i.e., "dilt drips") is based on their use for acute rate control in patients with significant underlying cardiovascular disease. There are other settings (e.g., general medicine, emergency medicine), where the temporizing use of a diltiazem infusion may be an appropriate strategy, assuming the drug is properly managed from a practical standpoint. However, in patients with a history of coronary artery disease, heart failure with reduced ejection fraction (HFrEF), and other chronic cardiovascular conditions, diltiazem infusions are rarely an ideal strategy.

First, several pharmacokinetic characteristics make intravenous diltiazem a suboptimal agent to administer as a continuous infusion. Its delayed onset of action necessitates the administration of bolus doses with the initiation of a continuous infusion and with each rate increase. Unfortunately, these are often omitted, which may lead clinicians to believe an infusion is inadequate at its current rate (often resulting in rapid dose escalation). Additionally, its long half-life increases the risk of accumulation with prolonged use, especially with rapid dose escalation. Finally, diltiazem does not demonstrate linear pharmacokinetics, so changes in dose rarely correlate with its therapeutic effects. Minimal (if any) evidence supports the use of diltiazem infusions beyond 24 hours, leaving clinicians with little guidance on its practical management [1].

Second, as a continuous infusion, there is a tendency to believe that diltiazem can be titrated like a rapidly-acting vasoactive agent; this, combined with the aforementioned pharmacokinetic characteristics, makes it a commonly mismanaged agent in the acute care setting. Given its delayed onset of action, a weight-based bolus should be administered prior to the initiation of a continuous infusion. Although the package insert recommends a 0.25 mg/kg bolus, this often results in hypotension, which may explain why bolus doses are often omitted. I usually recommend starting with a 0.15 mg/kg bolus and administering additional boluses if necessary, although some evidence suggests that pre-medicating with intravenous calcium may mitigate the risk of hypotension. As I alluded to above, when bolus doses are omitted, the tendency is to rapidly increase the rate of infusion. Although a therapeutic effect may be observed at 1-2 hours, significant accumulation occurs as the drug approaches steady state (i.e., 4-6 hours later), increasing the risk for adverse effects (e.g., precipitous changes in heart rate or blood pressure, high-degree atrioventricular block) that may persist for 10-12 hours or more. This delay to steady state is also why diltiazem infusions should not be titrated to effect.
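To put numbers on the bolus arithmetic above, a minimal sketch (the 80-kg patient is hypothetical):

```python
def diltiazem_bolus_mg(weight_kg, mg_per_kg=0.15):
    """Weight-based IV diltiazem bolus in mg, rounded to the nearest mg."""
    return round(weight_kg * mg_per_kg)

# For a hypothetical 80-kg patient: the package-insert 0.25 mg/kg bolus
# versus the more conservative 0.15 mg/kg starting bolus discussed above.
label_bolus = diltiazem_bolus_mg(80, mg_per_kg=0.25)   # 20 mg
conservative_bolus = diltiazem_bolus_mg(80)            # 12 mg
```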

Third, in many patients with significant underlying cardiovascular disease, diltiazem is rarely an ideal long-term strategy for rate control. Although the negative inotropic effects of diltiazem are probably no worse than beta blockers in an acute setting, it does not confer the same long-term benefits associated with beta blockers across several cardiovascular conditions, such as reductions in the risk of sudden cardiac death among those with prior myocardial infarction [2], or improvements in all-cause mortality in patients with HFrEF [3]. In fact, in this latter population, the use of diltiazem is actually associated with a nearly twofold increase in the risk of worsening heart failure [4].

Proponents of diltiazem infusions will often argue that its disadvantages are related to chronic administration and this should not preclude its use in an acute setting.  While I agree its acute risks are probably no worse than beta blockers, why even start down a path that does not represent a long-term solution (when reasonable alternatives exist), especially given the practical limitations associated with its use? Moreover, if diltiazem is successful in the immediate setting, will therapy be continued? If not, how quickly can a more definitive management strategy be implemented? Should a patient's length of stay be extended solely for the purposes of cross-titrating to a more appropriate long-term strategy? My point is this: if no plans for a more definitive strategy exist, and there are not plans to continue an agent in the long-term, it just seems like a step backward to even start there.

There are some scenarios where diltiazem infusions are a reasonable approach, such as a short-term infusion to maintain hemodynamic stability as a patient awaits more definitive management (e.g., ablation), or as a transition to oral therapy in a patient without structural heart disease, in whom diltiazem is a reasonable agent for long-term rate control. However, even in these scenarios, caution should still be exercised in its practical management in order to reduce the risks associated with drug accumulation.

References
  1. Diltiazem HCl Powder for Solution [package insert]. Lake Forest, IL: Hospira, Inc; 2008.
  2. Turi ZG, Braunwald E. The use of beta-blockers after myocardial infarction. JAMA. 1983 May 13;249(18):2512-6.
  3. Effect of metoprolol CR/XL in chronic heart failure: Metoprolol CR/XL Randomised Intervention Trial in Congestive Heart Failure (MERIT-HF). Lancet. 1999 Jun 12;353(9169):2001-7.
  4. Goldstein RE, et al; for the Adverse Experience Committee and the Multicenter Diltiazem Postinfarction Research Group. Diltiazem increases late-onset congestive heart failure in postinfarction patients with early reduction in ejection fraction. Circulation. 1991 Jan;83(1):52-60.

Sunday, June 16, 2013

Transitions

I am excited to announce that I have accepted a full-time faculty position at the University of Maryland School of Pharmacy, where I will be practicing at the University of Maryland Medical Center in Baltimore, MD. To those of you from the University of North Carolina who have followed this blog from the very beginning, thank you for encouraging me to keep it up.  While taking this next step in my career is definitely bittersweet, I am grateful that cardiology remains a small world, and I look forward to opportunities to collaborate in the future.

Tuesday, May 21, 2013

Desensitization in patients with an aspirin allergy

After tweeting about an aspirin desensitization we performed last week, I have received several requests for our approach in patients with aspirin allergies, as well as the protocol that we use to desensitize those in whom we feel therapy is clinically indicated.

Given the time and resources required for a desensitization (e.g., drug preparation, admission to an intensive care unit, frequency of monitoring, etc.), the most important initial steps are determining if aspirin is indicated (and no other reasonable alternatives exist), and whether the patient has a history of a true type I hypersensitivity reaction (e.g., anaphylaxis) to aspirin.  In the case of the former, all of our aspirin desensitizations have been performed for the purpose of providing dual antiplatelet therapy in the setting of an acute coronary syndrome (ACS), often with coronary stent placement. For patients with stable coronary disease (or for those in whom monotherapy may mitigate excess bleeding risk), clopidogrel monotherapy may serve as a suitable alternative to aspirin; based on the results of the CAPRIE trial, clopidogrel is associated with comparable rates of both ischemic and bleeding outcomes compared to aspirin [1]. Unfortunately, the number of patients for whom this would be a reasonable strategy is quite small, making aspirin desensitization necessary in the majority of cases.

If aspirin is clinically indicated, a thorough interview of the patient should be performed to determine the type of allergic reaction experienced. In many cases, the reported allergy is not a type I hypersensitivity reaction, or it is simply an adverse effect (e.g., gastritis) that has been mislabeled as an allergy. If the history is unclear, or if the patient provides any information that might be concerning (e.g., a rash occurred but unsure whether swelling or wheals were involved, unsure about timing related to exposure, etc.), I usually err on the side of caution.

At our institution, we transfer patients undergoing aspirin desensitization to the cardiac intensive care unit, where they can receive frequent monitoring of vital signs and observation for adverse reactions. We use the procedure described by Wong, et al. [2] to reach a target dose of 325 mg over a 3-hour period. For the doses preceding 81 mg, we compound a liquid formulation by crushing an 81 mg chewable tablet and mixing it with a sufficient quantity of sterile water to create a 1 mg/mL solution. Additionally, we compound two batches in case the patient vomits a dose.

The desensitization is then performed as follows:
  1. Pre-medicate with oral diphenhydramine 25 mg and famotidine 20 mg.
  2. Check vital signs at baseline and every 20 minutes thereafter.
  3. At 20 minute intervals, administer the following doses of aspirin:
    [Time 00:00] 0.1 mg (0.1 mL)
    [Time 00:20] 0.3 mg (0.3 mL)
    [Time 00:40] 1 mg (1 mL)
    [Time 01:00] 3 mg (3 mL)
    [Time 01:20] 10 mg (10 mL)
    [Time 01:40] 20 mg (20 mL)
    [Time 02:00] 40 mg (40 mL)
    [Time 02:20] 81 mg (one 81 mg tablet)
    [Time 02:40] 162 mg (two 81 mg tablets)
    [Time 03:00] 325 mg (one 325 mg tablet)
  4. After the last dose of the desensitization, a normal administration schedule (i.e., every 24 hours) may be resumed.
  5. If an allergic reaction is observed at any time, rescue medications (intravenous diphenhydramine, epinephrine) should be administered.
We target an initial dose of 325 mg because this is the standard loading dose at our institution for patients presenting with ACS (some institutions use 162 mg for this purpose); however, after achieving this dose during the desensitization process, we then administer a maintenance dose of 81 mg daily.
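For readers who want to sanity-check the timeline, the escalation above can be sketched programmatically. This is a hypothetical helper (the names and structure are mine, not part of the Wong protocol); it assumes the compounded 1 mg/mL solution for the doses below 81 mg, so the volume in mL equals the dose in mg:

```python
# Hypothetical sketch of the dose-escalation schedule described above.
# Assumes the compounded 1 mg/mL solution for doses below 81 mg (mL == mg);
# names and structure are illustrative, not part of any published protocol.

DOSES_MG = [0.1, 0.3, 1, 3, 10, 20, 40, 81, 162, 325]
INTERVAL_MIN = 20  # doses are administered at 20-minute intervals

def schedule():
    """Return (time "HH:MM", dose in mg, formulation) tuples for each step."""
    steps = []
    for i, dose in enumerate(DOSES_MG):
        minutes = i * INTERVAL_MIN
        time_str = f"{minutes // 60:02d}:{minutes % 60:02d}"
        if dose < 81:
            form = f"{dose} mL of 1 mg/mL solution"
        elif dose < 325:
            form = f"{dose // 81} x 81 mg chewable tablet(s)"
        else:
            form = "one 325 mg tablet"
        steps.append((time_str, dose, form))
    return steps

for time_str, dose, form in schedule():
    print(f"[Time {time_str}] {dose} mg ({form})")
```

Walking through the output confirms that the final 325 mg dose lands at the 3-hour mark, consistent with the timeline described above.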

Acknowledgement: Special thanks to Abigail Miller Cook, PharmD, BCPS, with whom I collaborated on the above process at our institution; Abbie is currently a clinical pharmacy specialist at Loyola University Medical Center in Chicago, IL.

References
  1. CAPRIE Steering Committee. A randomised, blinded, trial of clopidogrel versus aspirin in patients at risk of ischaemic events (CAPRIE). Lancet. 1996 Nov 16;348(9038):1329-39.
  2. Wong JT, Maclean JA, Bloch KJ, et al. Rapid oral challenge-desensitization for patients with aspirin-related urticaria-angioedema. J Allergy Clin Immunol. 2000 May;105(5):997-1001.

Sunday, May 12, 2013

Uncharted territory: bivalirudin and the new P2Y12 inhibitors

Earlier this week, we were discussing the evidence to support the direct thrombin inhibitor (DTI) bivalirudin in patients undergoing percutaneous coronary intervention (PCI) and how its use has evolved to include the full spectrum of acute coronary syndromes (ACS) [1-4]. In general, when compared to the combination of unfractionated heparin (UFH) and a glycoprotein IIb/IIIa inhibitor (GPI), bivalirudin is associated with similar ischemic outcomes but a lower incidence of bleeding. Given the poor outcomes associated with bleeding after ACS, these characteristics have conferred a very advantageous benefit-risk profile for bivalirudin in the setting of PCI.

One of the few disadvantages associated with the use of bivalirudin monotherapy is the potential for early stent thrombosis, a phenomenon mostly noted in HORIZONS-AMI, which specifically enrolled patients with ST-segment elevation myocardial infarction receiving early PCI [3]. Although patients randomized to bivalirudin experienced a benefit in net clinical adverse events (9.2% vs. 12.1% with UFH plus GPI, p = 0.005), an increase in stent thrombosis in the first 24 hours was also observed (1.3% vs. 0.3% with UFH plus GPI, p < 0.001). Despite this early difference, rates of stent thrombosis at 30 days were not different between the two groups.

The most plausible explanation for the early increase in stent thrombosis observed in the bivalirudin group is that many patients were probably not yet experiencing the antiplatelet effects of clopidogrel. Although the onset of action is thought to occur more quickly (around 2 hours) with the 600 mg loading dose, only about two-thirds of patients received this dose prior to PCI.  Even at 2 hours, patients randomized to bivalirudin likely experienced a delayed onset of dual antiplatelet therapy compared to those in the UFH plus GPI group, where the onset of GPI therapy would have been almost immediate.

Despite this potential disadvantage with the use of bivalirudin, the overall net clinical benefit still weighs heavily in its favor, so current practice guidelines recognize it as being an acceptable alternative to heparin (with or without a GPI) in patients undergoing PCI [5]. As a result, bivalirudin has largely supplanted the use of heparin at our institution, as well as many other large PCI centers.

However, as we have also expanded our use of the newer P2Y12 inhibitors prasugrel and ticagrelor, the thought occurred to me that these two agents have not been extensively studied with bivalirudin. In fact, I was astounded by how little bivalirudin was used in the landmark trials comparing prasugrel and ticagrelor to clopidogrel -- only 3% and 2% in TRITON-TIMI 38 and PLATO, respectively [6, 7].

Given the proposed advantages of these agents compared to clopidogrel (e.g., earlier onset of action, greater potency, no susceptibility to genetic polymorphisms, etc.), one would anticipate that they would be at least non-inferior in terms of ischemic outcomes, but do we really know? Prasugrel demonstrated a clear difference in efficacy after only a few hours in its comparison to clopidogrel, but some have attributed at least part of this difference to the lower loading dose and delayed administration of clopidogrel in TRITON-TIMI 38 [6]. Similar early differences were not observed with ticagrelor, where nearly half of patients were receiving clopidogrel prior to randomization, and of those randomized to continue receiving clopidogrel, more received an appropriate loading dose prior to PCI [7]. Interestingly, a recent study of the pharmacodynamic effects of prasugrel and ticagrelor demonstrated that both had fairly poor antiplatelet activity in the hours following an initial loading dose, which makes me wonder just how much of a clinical advantage they provide in the hours immediately following an ACS [8].

Therefore, should we anticipate improvements in the incidence of stent thrombosis and other thrombotic complications when the new P2Y12 inhibitors are used in combination with bivalirudin? More importantly, are these agents associated with similar rates of bleeding as clopidogrel and bivalirudin (at least when compared to UFH plus a GPI)? While one might anticipate comparable rates of bleeding between clopidogrel and ticagrelor (based on similarities observed in the overall trial), I am not sure we can anticipate this with prasugrel given its higher rates of bleeding and fatal bleeding compared to clopidogrel at baseline.

Based on the increased uptake of bivalirudin and the new P2Y12 inhibitors, the combination of the two will undoubtedly become a standard of care -- but is it one that we have robustly tested? While I certainly do not believe we are putting patients at excessive risk with the combination of bivalirudin and a newer P2Y12 inhibitor, I am not sure we have much evidence to support it -- and if there is one thing I have learned from practicing in cardiology, it is that placing faith over evidence is one of the quickest ways to get burned.

References
  1. Lincoff AM, Bittl JA, Topol EJ, et al; for the REPLACE-2 Investigators. Bivalirudin and provisional glycoprotein IIb/IIIa blockade compared with heparin and planned glycoprotein IIb/IIIa blockade during percutaneous coronary intervention: REPLACE-2 randomized trial. JAMA. 2003 Feb 19;289(7):853-63.
  2. Stone GW, McLaurin BT, Ohman EM, et al; for the ACUITY Investigators. Bivalirudin for patients with acute coronary syndromes. N Engl J Med. 2006 Nov 23;355(21):2203-16.
  3. Stone GW, Witzenbichler B, Mehran R, et al; for the HORIZONS-AMI Trial Investigators. Bivalirudin during primary PCI in acute myocardial infarction. N Engl J Med. 2008 May 22;358(21):2218-30.
  4. Kastrati A, Neumann FJ, Mehilli J, et al; for the ISAR-REACT 4 Trial Investigators. Abciximab and heparin versus bivalirudin for non-ST-elevation myocardial infarction. N Engl J Med. 2011 Nov 24;365(21):1980-9.
  5. Levine GN, Bates ER, Ting HH, et al. 2011 ACCF/AHA/SCAI Guideline for Percutaneous Coronary Intervention: A Report of the American College of Cardiology Foundation/American Heart Association Task Force on Practice Guidelines and the Society for Cardiovascular Angiography and Interventions. J Am Coll Cardiol. 2011 Dec 6;58(24):e44-e122.
  6. Wiviott SD, Braunwald E, Antman EM, et al; for the TRITON-TIMI 38 Investigators. Prasugrel versus clopidogrel in patients with acute coronary syndromes. N Engl J Med. 2007 Nov 15;357(20):2001-15.
  7. Wallentin L, Becker RC, Thorsén M, et al; for the PLATO investigators. Ticagrelor versus clopidogrel in patients with acute coronary syndromes. N Engl J Med. 2009 Sep 10;361(11):1045-57.
  8. Parodi G, Valenti R, Bellandi B, et al. Comparison of prasugrel and ticagrelor loading doses in ST-segment elevation myocardial infarction patients: RAPID (Rapid Activity of Platelet Inhibitor Drugs) Primary PCI Study. J Am Coll Cardiol. 2013;61:1601-6.

Sunday, May 5, 2013

Cocaine and beta blockers: all it's cracked up to be?

Note: The use of beta blockers in the setting of cocaine abuse is one of the most controversial topics in cardiovascular medicine. The following are my perspectives based on the available evidence, and refer to their use for compelling indications (e.g., myocardial infarction, heart failure, etc.), not for the acute management of cocaine toxicity.

Beta blockers remain a cornerstone in the management of several cardiovascular disorders, including myocardial infarction (MI) and heart failure -- two conditions where they have been associated with improvements in survival as well as a myriad of other clinical outcomes [1, 2].  However, because of the theoretical risks attributed to their use in the setting of cocaine abuse, many clinicians are reluctant to administer them despite these well-established benefits.

The pharmacologic basis for these fears is unopposed alpha-mediated coronary vasospasm -- that is, that beta blockers occupy post-synaptic beta receptors and shunt excitatory neurotransmitters (present in excess concentrations as a result of cocaine-inhibited reuptake) toward alpha receptors, resulting in vasoconstriction. While this is certainly a plausible mechanism for myocardial injury, more recent research has indicated that cocaine-induced cardiotoxicity is complex, involving pro-thrombotic effects, progressive atherosclerosis, and ventricular remodeling [3] -- all scenarios where beta blockers may actually be helpful. Furthermore, beta blockers are not associated with worsening outcomes in other hyperadrenergic states, so why would they be in the setting of cocaine abuse?

While the risk of using beta blockers in the setting of cocaine abuse is often discussed, the data to support a clinically meaningful interaction is slim at best.  The pharmacologic mechanism mentioned above was first hypothesized from a single case report involving a patient with cocaine toxicity who experienced a mild increase in blood pressure and decrease in heart rate after being treated with propranolol [4]. Much of the remaining evidence has been derived from experimental or animal models. Evidence of cocaine-provoked vasospasm was thought to be observed in early cardiac catheterization studies but has since been challenged by more recent investigations, including some that have involved direct administration of cocaine [5]. Even when vasospasm has been observed, it has not been associated with defects in coronary perfusion. In fact, in a case series of patients with MI in the setting of cocaine use, the vast majority of patients had clear evidence of thrombosis during percutaneous coronary intervention [6].

The limited evidence to support an interaction between cocaine and beta blockers often involves propranolol, a non-selective beta blocker that is used only rarely in contemporary clinical practice. Carvedilol and labetalol, two beta blockers with concomitant alpha-blocking activity, have been proposed as reasonable alternatives in patients who continue to use cocaine, as they would provide similar clinical benefits while moderating the theoretical alpha-mediated effects of cocaine. In fact, patients receiving labetalol in the setting of cocaine use experienced improvements in blood pressure without evidence of coronary vasoconstriction [7]. Additionally, in a small prospective trial comparing labetalol to calcium channel blockers among patients presenting with an acute coronary syndrome in the setting of active cocaine use, administration of labetalol was associated with favorable changes in hemodynamics and inflammatory mediators without increasing the incidence of adverse cardiovascular events [8].

Two retrospective studies of patients presenting to the emergency department have also cast doubt on the risk attributed to beta blockers in the setting of cocaine abuse [9, 10]. In one study, use of beta blockers was associated with a reduction in MI; in patients for whom the admission was their first event, a reduction in in-hospital mortality was also observed with beta blocker use [9]. In another study, beta blockers were not associated with increased mortality among patients presenting with chest pain following cocaine abuse; in fact, after adjustment for potential confounders, beta blocker use was associated with a reduction in cardiovascular death [10]. Obviously, these retrospective analyses are not without limitations, including notable differences in baseline characteristics, use of urine toxicology data as evidence of cocaine abuse, and inability to assess interim behaviors following hospital discharge. While conclusions cannot be made based on these results alone, I do think it is enough to call into question a concept that has become a widely held notion in cardiovascular medicine (despite having only minimal evidence to support its existence).

So what should clinicians do with this seemingly discordant information?  As is often the case in clinical practice, one should probably consider the potential risk versus benefit in an individual patient. Although the studies to date are helpful in clarifying the role of beta blockers in this setting, I am not sure enough evidence exists to support their indiscriminate use in cocaine-induced myocardial injury. For young, otherwise healthy patients, it is unlikely that beta blockers will provide much benefit (although I remain doubtful that there is an increased risk associated with their use). However, beta blockers should be considered in those patients who are at high risk for coronary artery disease or MI, given the complex cardiotoxicity of cocaine and the potential for benefit in these conditions.  In fact, in the studies mentioned above, the patients who seemed to benefit most were those who were at increased risk for cardiovascular events, either by having established ischemic heart disease at baseline or multiple cardiovascular risk factors (e.g., advanced age, hypertension).  I think similar arguments can be made for patients with heart failure.

I recognize that some may remain reluctant to use beta blockers in the setting of cocaine abuse as a result of professional liability concerns. However, a certain degree of personal responsibility should also be expected of the patient. As health care providers, we should make available to patients the tools necessary to improve their condition, whether it be in the form of education (including substance abuse counseling and/or therapies, if indicated), therapeutic lifestyle changes, invasive procedures, or medications. However, whether a patient chooses to use these tools -- or similarly, to avoid those activities that might put them at risk -- is ultimately up to them.

Note: this entry was revised from an earlier version to include the study by Hoskins, et al comparing labetalol to calcium channel blockers in patients presenting with cocaine-induced myocardial injury.

References
  1. Turi ZG, Braunwald E. The use of beta-blockers after myocardial infarction. JAMA. 1983 May 13;249(18):2512-6.
  2. Effect of metoprolol CR/XL in chronic heart failure: Metoprolol CR/XL Randomised Intervention Trial in Congestive Heart Failure (MERIT-HF). Lancet. 1999 Jun 12;353(9169):2001-7.
  3. Afonso L, Mohammad T, Thatai D. Crack whips the heart: a review of the cardiovascular toxicity of cocaine. Am J Cardiol. 2007 Sep 15;100(6):1040-3.
  4. Ramoska E, Sacchetti AD. Propranolol-induced hypertension in treatment of cocaine intoxication. Ann Emerg Med. 1985;14:1112-3.
  5. Majid PA, Cheirif JB, Rokey R, et al. Does cocaine cause coronary vasospasm in chronic cocaine abusers? A study of coronary and systemic hemodynamics. Clin Cardiol. 1992;15:253-8.
  6. Minor RL Jr, Scott BD, Brown DD, Winniford MD. Cocaine-induced myocardial infarction in patients with normal coronary arteries. Ann Intern Med. 1991;115:797-806.
  7. Boehrer JD, Moliterno DJ, Willard JE, Hillis LD, Lange RA. Influence of labetalol on cocaine-induced coronary vasoconstriction in humans. Am J Med. 1993;94:608-10.
  8. Hoskins MH, Leleiko RM, Khan BV, et al. Effects of labetalol on hemodynamic parameters and soluble biomarkers of inflammation in acute coronary syndrome in patients with active cocaine use. J Cardiovasc Pharmacol Ther. 2010 Mar;15(1):47-52.
  9. Dattilo PB, Hailpern SM, Nordin C, et al. Beta-blockers are associated with reduced risk of myocardial infarction after cocaine use. Ann Emerg Med. 2008 Feb;51(2):117-25.
  10. Rangel C, Shu RG, Marcus GM, et al. Beta-blockers for chest pain associated with recent cocaine use. Arch Intern Med. 2010 May 24;170(10):874-9.