    Voices on Value: The importance of patient-focused value assessment, Part 2
    By Jennifer Bright, Executive Director, Innovation and Value Initiative, and Edward Lee, Head of U.S. HEOR Strategy, UCB

    This is part two of our conversation with UCB’s U.S. Head of Health Economics and Outcomes Research, Edward Lee, and the Innovation and Value Initiative’s (IVI) Executive Director, Jennifer Bright, on creating patient-centric value assessment. You can find part one here.

    Part one focused on why value assessment tools must be patient-centered. In part two, Lee and Bright weigh in on the characteristics of frameworks that support sustainable value and equitable access to health care, and explain how transparency and real-world evidence can help ensure the patient's voice is heard.

    To learn more about UCB’s perspectives on how value assessment can be patient-centered, please read UCB’s Principles for Value Assessment in the U.S.

    Note: The conversation has been edited and condensed for clarity.

    Some approaches to value, such as multi-criteria decision analysis, and some novel aspects of value (e.g., indirect patient benefit, caregiver benefit) are uniquely patient-focused but largely missing from traditional value assessment. Why are these approaches important?

    Jennifer Bright: First, if anyone thinks that value assessment is all figured out and that there’s one magic formula that gives us the answer, they’re mistaken. This is a field that is evolving, and that’s a good thing. IVI’s open-source value model platform allows us to showcase and experiment with new ways of thinking and new methods. We are trying things like multi-criteria decision analysis and distributional cost-effectiveness analysis, which allows us to better account for health equity.

    These methods are technical, and we need to learn how to use them better, more efficiently, and more consistently. And we need to learn how to use these tools because traditional cost-effectiveness analysis doesn’t represent the patient perspective adequately, and we don’t always have the data upfront. We all must explore new methods, not just take stuff off the shelf.
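
    To make this concrete, below is a minimal sketch of a weighted-sum model, one of the simplest forms of multi-criteria decision analysis. Every criterion, weight, and score in it is hypothetical, invented purely for illustration; the point is only to show how patient-weighted factors such as caregiver burden, which traditional cost-effectiveness analysis typically omits, can change which therapy looks more valuable.

```python
# Minimal sketch of multi-criteria decision analysis (MCDA) via a
# weighted-sum model. Every criterion, weight, and score below is
# hypothetical, invented for illustration only.

# Hypothetical therapies scored 0-10 on criteria that matter to patients.
treatments = {
    "Therapy A": {"clinical_benefit": 8, "caregiver_burden_relief": 4,
                  "ability_to_work": 6, "affordability": 5},
    "Therapy B": {"clinical_benefit": 7, "caregiver_burden_relief": 8,
                  "ability_to_work": 8, "affordability": 4},
}

# Hypothetical weights, e.g., elicited from patients; they sum to 1.
weights = {"clinical_benefit": 0.40, "caregiver_burden_relief": 0.20,
           "ability_to_work": 0.25, "affordability": 0.15}

for name, scores in treatments.items():
    total = sum(weights[c] * scores[c] for c in weights)
    print(f"{name}: weighted value score = {total:.2f}")
```

    With these made-up numbers, the therapy with the lower clinical-benefit score ranks higher overall once caregiver- and work-related factors are weighted in, which is exactly the kind of shift patient-centered methods are meant to surface.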

    Edward Lee: It’s important to include approaches like multi-criteria decision analysis because they account for factors that matter to patients. An important first step is to identify success from the patient’s perspective.

    I think what we see with some of the frameworks today is that they react to how health care gets funded or reimbursed. And while there are certainly barriers to formally including patient-centered factors in assessments, we need to do our due diligence to incorporate as many factors that patients value as possible. And we need to be transparent about how we’ve identified the factors that are included, which factors are needed, and the limitations to the data we do have.

    Bright: We all have an obligation to identify those priorities, invest in measuring them and invest in finding new methods. Otherwise, we’re in a perpetual cycle of not meeting patient needs.

    What we’re really trying to do with these frameworks is determine what is worth spending our health care resources on, and we’re doing that without a complete picture of the outcomes that are important. We run the risk of hard-coding existing disparities, and that’s a risk we should not be willing to take.

    Why is transparency so vital in value assessment frameworks and methodology? How do you ensure the value work you do is transparent?

    Lee: Of all the principles in our Value Assessment Principles that need to be followed, transparency is among the most important. Ultimately, a great deal of work, effort, and resources go into value assessment, and we want those results to be actionable for the stakeholders. For value assessments to be actionable, we need the stakeholders to buy into the validity and the interpretability of the ongoing value assessments.

    That buy-in is a function of understanding the complexity and intricacy of the value assessment framework; transparency is a means to that understanding, not an end in itself. Transparency allows you to understand the output of the assessment, even if you don’t necessarily agree with it.

    So, first, the evidence that’s being used in the assessment and the models should be publicly available. That way, anyone who wants to take a deep dive into the assumptions, the limitations, or the decisions that led to the results can do so. Second, clearly state the limitations and the assumptions, and be consistent about doing so throughout. This is fundamental to interpreting the results of the value assessment.

    Bright: I could not agree more with Eddie. Transparency is IVI’s second principle of value assessment behind patient-centricity. The models should be open-source—you should see the assumptions, the coding, the mechanics behind the model.

    Value assessment frameworks should be a common good because our health care system is complex, decision-making isn’t static, and evidence is evolving every day. Our current approach to value, though, is very static: we go through the assessment process, get our answer, and that’s the “definitive answer.” We know that’s not how things happen in the real world. We need a structure that allows for flexibility, evolution, and diverse perspectives.

    How important is real-world evidence (RWE) to the value assessment process in ensuring decision-makers understand patient priorities and/or long-term patient benefit? How do digital platforms help with this?

    Lee: Digital platforms have allowed UCB to leverage new capabilities and data assets that are more representative of real-world populations. What we realized is that there is a gap between the insights and efficacy that can be established in a clinical trial and what happens when patients are using our products in the real world. With these platforms, we’re better able to track the outcomes we actually achieve: Are there patient segments with better outcomes than others? Are there imbalances in which patients receive different types of therapies? How does insurance impact the patient journey? These are all things we must track to ensure there’s equitable care.

    Bright: When we talk about real-world data, we’re talking about claims data, patient registries, surveys, focus groups, state data, etc. Typically, this is not data that is included in value assessments because it’s very heterogeneous in how it’s gathered, housed, labeled, etc.

    When we set this data aside, we send two messages: one, it’s not important, no matter how much we say it is, and two, that it’s not important for the scientific community to figure out how to use it. These are both damaging to building trust. When we sideline an entire body of knowledge, we are missing an opportunity.

    I think, in short, real-world data is essential to this process. We must work with patient communities to understand the top issues that need to be captured; then help those communities understand the best way to collect that information; then create a common-good repository for it.

    Lee: I agree with Jen’s comments, and I think there are a couple of things we can do now to incorporate real-world evidence. First, we assume that all products with a similar mechanism of action have the same continuation rate. We can use real-world evidence to look at individual continuation rates for individual therapies and bring that into assessment models. Second, we also assume all patients have the same journey once they are diagnosed, but we know that’s very heterogeneous, based on where you live in the U.S. and what kind of insurance you have. That’s a huge opportunity for insights to be pulled into decision-making today.
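
    To illustrate that first opportunity, here is a minimal sketch contrasting a single class-wide continuation assumption with therapy-specific rates of the kind real-world evidence can supply. All rates, horizons, and benefit values are hypothetical, not drawn from any actual assessment; the point is only that therapies that look identical under a class-wide assumption can diverge once real-world persistence enters the model.

```python
# Minimal sketch of the first opportunity above: swapping a single
# class-wide continuation (persistence) assumption for therapy-specific
# rates informed by real-world evidence. All numbers are hypothetical.

YEARS = 5                        # hypothetical model horizon
BENEFIT_PER_TREATED_YEAR = 0.1   # hypothetical benefit while on therapy

def expected_benefit(annual_continuation_rate: float) -> float:
    """Total benefit over the horizon, scaled by the share still on therapy."""
    return sum(BENEFIT_PER_TREATED_YEAR * annual_continuation_rate ** t
               for t in range(YEARS))

# Traditional assumption: every product in the class continues at 80%/year.
print(f"Class-wide assumption: {expected_benefit(0.80):.3f}")

# Hypothetical therapy-specific rates observed in real-world data.
rwe_rates = {"Therapy A": 0.85, "Therapy B": 0.70}
for name, rate in rwe_rates.items():
    print(f"{name} (real-world rate {rate:.0%}): {expected_benefit(rate):.3f}")
```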

    How do we ensure that the outputs of value assessment are actionable so that patients, providers, payers, etc. can leverage the data to improve patient outcomes?

    Lee: Actionable outputs get to the heart of a well-structured value assessment process.

    We need consistency and transparency to lead to action. I’m not saying that every value assessment has to be the same; in fact, they should be specific to the population of interest, whether that’s based on disease state, a patient segment, etc. Overall, the outputs, data collection, and data aggregation should all be informed by what matters to the patient. And if we can’t do that, then we need to be as transparent as possible about what is needed to get there, and we need to do this consistently across all frameworks. That way we can judge whether an assessment followed the right approach and how to weigh its results in decision-making.

    The second component is education, making sure that the end user has the right tools and resources to independently evaluate the findings. They need to understand the results and apply them appropriately when they’re making decisions for the populations they’re managing.

    Bright: Those are all good points, and I might add a couple more.

    One, we absolutely have to understand the patient perspective. And we have to engage with the user communities, and that’s not just large payers; it could be the employer community, as well. We’ve been doing this with our major depressive disorder model to gain insights into what’s meaningful, what they want to achieve, and what they’re trying to answer. That’s a way to make it relevant for real-time decision-making.

    Next, collaboration. We need to get better at collaborating with each other: IVI and the Institute for Clinical and Economic Review (ICER), academia, patient communities, and other external entities. This will help us accelerate our learning, standardize best practices, and then scale.

    Finally, how do we identify use cases to really test new methods and see where they lead us? A model cannot just be generated and put on a shelf; it has to be tested in the real world. How does it influence decision-making and payment strategy? What impact is it having on patient health? If we don’t build this in, we won’t be measuring anything we’re doing, and that’s a huge mistake.
