Advice for Interpreting the Literature

Morgan Belling, PharmD
Clinical Hematology/Oncology Pharmacist
The University of Kansas Health System
Kansas City, KS


One of the essential skills of a hematology/oncology pharmacist is the ability to identify, analyze, and apply available data to patients and to use that knowledge to make evidence-based recommendations to the multidisciplinary team, both at a micro level (direct patient care) and at a macro level (the institution). Pharmacists are also routinely involved in research themselves, whether as advisors on residency research projects or as principal investigators for trials. How to stay up-to-date on the literature was discussed in a previous issue of HOPA News (Volume 13, Issue 3, “The Resident’s Cubicle”). But what about interpreting that information? Courses are offered on the subject, of course, but what key points should you think about when reading clinical trials, review articles, and guidelines, especially as an up-and-coming or new practitioner?

Evaluating Clinical Studies (Focusing on Randomized Controlled Trials)
Study design: Is the study prospective (ideally) or retrospective? Blinded or open label? Was it conducted at a single center or at multiple institutions? Remember the hierarchy of study designs, from strongest to weakest evidence: systematic reviews and meta-analyses; randomized controlled trials; cohort studies; case-control studies; cross-sectional studies; case series and case reports; and finally, reviews and expert consensus or opinion. What are potential sources of bias or confounding as a result of the study design? How did the researchers attempt to limit bias and account for confounders? What were the primary and secondary objectives? Typically, when one is evaluating the efficacy of a cancer therapy, overall survival is considered the gold standard, but other end points may be appropriate.

Methods: Were appropriate statistics used, given the type of data analyzed? When one is assessing a randomized controlled trial comparing two treatment groups, it’s important to pay attention not only to the intervention arm but also to the comparator arm: is the novel intervention being compared to the current standard of care, or have newer data been published that support a different standard of care? This issue is especially important if a trial spans several years, because the comparator arm may no longer be as relevant as it once was. From a pharmacist’s perspective, were there medications that were contraindicated for use with the study drug, or were there drug-drug interactions that warranted a change in the dose of an agent? All these points should be kept in mind.
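
To make the question about appropriate statistics concrete, here is a minimal sketch (in Python, using SciPy, with invented numbers) of how the type of end point drives the choice of test: a categorical end point such as response rate is compared with Fisher’s exact test, and a skewed continuous measure with the Mann-Whitney U test, whereas a time-to-event end point such as overall survival would call for a log-rank test or Cox model instead.

```python
# Hypothetical illustration: matching the statistical test to the type of data.
# All numbers below are invented for demonstration and come from no real trial.
from scipy import stats

# Categorical end point (e.g., overall response rate): a 2x2 table of
# responders vs. nonresponders in the intervention and comparator arms.
table = [[45, 55],   # intervention arm: 45 of 100 responded
         [30, 70]]   # comparator arm:   30 of 100 responded
_, p_categorical = stats.fisher_exact(table)

# Continuous, non-normally distributed end point (e.g., a symptom score):
# a nonparametric rank-based comparison rather than a t test.
scores_intervention = [2, 3, 3, 4, 5, 5, 6, 7]
scores_comparator = [4, 5, 5, 6, 6, 7, 8, 8]
_, p_continuous = stats.mannwhitneyu(scores_intervention, scores_comparator,
                                     alternative="two-sided")

print(f"Response rate comparison (Fisher's exact test): p = {p_categorical:.3f}")
print(f"Symptom score comparison (Mann-Whitney U test): p = {p_continuous:.3f}")
# A time-to-event end point such as overall survival would instead call for a
# log-rank test or a Cox proportional hazards model (not shown here).
```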

Patient population: What was the sample size of the patient population? Was it sufficient to achieve the planned statistical power? It’s important to pay attention to inclusion and exclusion criteria; the latter are just as important as the former. Let’s say that an attending physician wants to use blinatumomab (Blincyto) for an adult patient with relapsed Philadelphia chromosome-negative B-cell acute lymphoblastic leukemia (ALL) with active malignancy in the central nervous system (CNS). It would be important to know that patients with active CNS involvement were excluded from a key phase 2 study that led to U.S. Food and Drug Administration approval. Essentially, does your patient match the patient population in which the drug was studied? What was the performance status of the patients? How many lines of therapy had they received before being included in the current trial? Among the groups studied, were the baseline characteristics similar? If there were differences in baseline characteristics, do those differences matter clinically, and how might they influence your interpretation of the trial results?
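
As a rough, hypothetical illustration of what “sufficient to meet power” implies, the sketch below applies the standard normal-approximation sample-size formula for comparing two proportions; the response rates, significance level, and target power are all invented, and real trials (particularly those with time-to-event end points) rely on more involved, event-driven calculations.

```python
# Hypothetical illustration of an approximate per-arm sample size for
# comparing two response rates; all inputs are invented for demonstration.
from scipy.stats import norm

p1, p2 = 0.45, 0.30        # assumed response rates in the two arms
alpha, power = 0.05, 0.80  # two-sided significance level and target power

z_alpha = norm.ppf(1 - alpha / 2)   # approximately 1.96
z_beta = norm.ppf(power)            # approximately 0.84

# Normal-approximation formula for two independent proportions.
n_per_arm = ((z_alpha + z_beta) ** 2
             * (p1 * (1 - p1) + p2 * (1 - p2))
             / (p1 - p2) ** 2)

print(f"Approximate sample size per arm: {n_per_arm:.0f}")  # roughly 160
# Smaller assumed differences between arms drive the required sample size up
# sharply, which is why underpowered trials can miss real effects.
```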

Results: How were the results analyzed: intention-to-treat or per-protocol? Was the primary end point appropriate for the disease? Pay close attention to the tables and figures and interpret them for yourself, looking for trends. Does your interpretation of the data mirror that of the authors? As the saying goes, “Trust, but verify!” Look critically at the available information and ask those questions. In addition to evaluating efficacy end points, also consider the safety analyses. What were the most common side effects in the intervention arm, and how did their rates compare with those in the comparator group? What grades of side effects were seen? How many patients required a dose reduction or discontinued treatment in either study group? It’s often useful to know the time to onset of efficacy or adverse events. If a patient is starting ruxolitinib for management of steroid-refractory acute graft-versus-host disease, for example, when might a response be expected? If a patient is initiated on nivolumab for metastatic melanoma, she might ask you how quickly a rash could develop; it would be useful to know this information and be able to reference the published data on the subject. When you are interpreting results, it’s also imperative to take some findings with a grain of salt; a subgroup analysis, for instance, deserves careful appraisal. Remember that a prespecified subgroup analysis is more useful than a post-hoc analysis: a post-hoc analysis is, by definition, retrospective, and the likelihood of a false positive (identifying a difference between groups when one does not truly exist) increases because many more comparisons are typically made than in a prespecified subgroup analysis. Also ask: How do the results from this study compare with those from other available literature? And what does this study contribute to the literature and clinical practice, and what gaps in knowledge remain?
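
To see why unadjusted post-hoc comparisons inflate the risk of a false positive, the short calculation below (a hypothetical worked example assuming independent comparisons, each tested at a significance level of 0.05 with no true difference present) shows how quickly the chance of at least one spurious finding grows with the number of subgroups examined.

```python
# Hypothetical illustration: probability of at least one false-positive
# finding when k independent comparisons are each tested at alpha = 0.05
# and no true difference exists in any of them.
alpha = 0.05
for k in (1, 5, 10, 20):
    familywise_error = 1 - (1 - alpha) ** k
    print(f"{k:>2} comparisons: {familywise_error:.0%} chance of at least one false positive")
# The chance rises from 5% with a single prespecified comparison to roughly
# 64% with 20 unadjusted post-hoc comparisons, which is why such findings are
# best treated as hypothesis generating rather than confirmatory.
```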

Conclusions: On the basis of the study design, are the authors’ conclusions indeed supported by the study results? Can the findings from this study be applied to your practice and patient population? Although results may be statistically significant, are they clinically significant? Would you change the way you practice on the basis of this study?

Evaluating Guidelines
When reading guidelines, investigate the organization’s guideline development process. The American Society of Clinical Oncology and the National Comprehensive Cancer Network, for example, provide detailed descriptions of their processes online. How are categories or levels of evidence defined? Are the guidelines predominantly based on systematic reviews of the literature and randomized controlled trials, or do they also incorporate less robust evidence? How does expert or consensus opinion factor into the guideline development process? And when the guideline makes a recommendation, what is the associated category? Again, the “trust, but verify” mantra applies: go back to the primary literature that is referenced, and read those studies for yourself to become familiar with the reasoning behind the recommendations. It’s not realistic to do this for every referenced study, of course, but the most influential trials, the ones prompting the strongest recommendations, are a good place to focus! What evidence do we have that is definitive, and which clinical questions still necessitate further study but require extrapolation or consensus opinion in the interim? For some clinical questions, a randomized controlled trial to address the issue may never be conducted, for reasons related to ethics, recruitment, or other valid concerns.

Evaluating clinical studies and guidelines is a challenge and a necessary component of our practice as hematology/oncology pharmacists. Like many aspects of your practice, this skill will develop as you gain experience. Participating in the design and implementation of clinical trials will give you insight into the many biases that can arise in different scenarios. The more you read the medical literature, keep the questions above in mind, and actively search for their answers, and the more you face situations in which you must apply the evidence alongside your clinical judgment, the more your abilities will improve. Your career will become increasingly rewarding as your evidence-based practice contributes to high-quality care for the patients to whom you dedicate your work. You will then pass these skills on to others as you mentor students, residents, and junior practitioners who learn from your methods and insight.
