Tuesday, June 2, 2009

Healthcare X PRIZE - Additional Questions and Answers

Part I of a series of answers to questions that have been submitted to the Healthcare X PRIZE team for consideration. We wanted to share some of our early thinking on these questions.


1. Our team would like to emphasize a community / public health /population paradigm. How can we take this approach if the target population is not a traditional community defined by geography?

Our target population will be contained within a defined geographic area (the first four digits of a ZIP code). We believe the best aggregator of participants will be employers rather than a traditional community. Employers represent the channel through which 60+% of health care value is purchased and a natural delivery mechanism for information and programs. They also have access to specific information that will be required for prize operations. At the same time, a traditional community is attractive because we believe improving population health should be the designed outcome of any health system. There are advantages and disadvantages to both types of aggregators. From a competition and operational perspective, we currently anticipate working through employers.

2. Can a team suggest a target population local to its area? (And some teams will not be located in a WellPoint market area). Given that there will be considerable team / provider interaction, local knowledge, relations and physical access will be a major advantage. Therefore, either all teams should have the option of choosing local populations, or no team should be able to do this.

We agree with this perspective. Many potential teams can deliver exceptional health value in their home markets because of the organizational, contractual, and cultural advantages they have already established. While these groups represent market leaders, the aim of the HXP is to see whether these advantages can be replicated in new markets. As a result, we are designing the competition so that teams will have to compete outside of their “home court,” removing any unfair advantages.

Because of this and the geographic constraints, we are still considering creating a demonstration division in which teams can “compete alongside” the official competition (following the same rules, competition guidelines, reporting requirements, etc.) without being a part of it.

3) To what extent can we address social factors and social determinants? Can we collaborate with social services or community organizations more broadly? If so, then the advantage may go to teams in a clearly delimited geographic area (say, as compared to an employer-linked population group dispersed across a very large metro area).

Teams are strongly encouraged to take a “community based approach,” as we believe health must involve and engage the community, including the social services and community organizations that can and should contribute to improvements in community health. We will take geographic dispersion into consideration when selecting “communities” and attempt to control for this variable in the selection process.

4) Ideally any US health care solution should address the problems of the uninsured, underinsured or lower socio-economic groups likely to be on Medicaid. Might it be possible to include some of these cohorts proportionately in the target population?

Given the pre-eminent focus on access by others, we have chosen to focus on other aspects of the health care reform debate. We are currently working with our sponsor WellPoint, as well as with state Medicaid programs, to include disadvantaged groups such as those you mention. We believe that including those most in need will also improve the impact and relevance of our efforts. We also like the notion of specific cohorts that each team will have to manage, and are considering how this could be operationalized.


5) A target population of 10K is far too small for at least two reasons: there will be limited efficiencies of scale; and the likelihood of meaningful impact on enrollees due to the epidemiology of disease over a three year period. We would recommend target populations of at least 35K.

We are currently investigating the number of outcomes (events) that occur in representative populations of different sizes, working with WellPoint's analytic division, HealthCore, to determine the correct population size. The size of the population must be balanced against the operational constraints of prize administration. We agree that a larger number would be better, but believe a population of 10K is a reasonable compromise. Do you have evidence of significant epidemiological differences between populations of 35K and 10K that would justify the additional cost, complexity, and challenge of operating a larger test population?
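To make the 10K-vs-35K trade-off concrete, here is a minimal sketch of the statistics involved. The 2% annual event rate is purely illustrative (it is not a Prize or HealthCore figure), and the formula is a standard normal approximation for comparing Poisson event counts between a test population and an equal-size matched control, not the actual methodology being developed:

```python
import math

def expected_events(pop_size, annual_rate, years=3):
    """Expected number of events over the competition period,
    assuming a constant annual event rate (illustrative only)."""
    return pop_size * annual_rate * years

def min_detectable_reduction(pop_size, annual_rate, years=3,
                             z_alpha=1.96, z_power=0.84):
    """Rough minimal detectable relative reduction in event rate for a
    test-vs-control comparison of equal-size groups, using a normal
    approximation for Poisson counts (5% two-sided alpha, 80% power)."""
    events = expected_events(pop_size, annual_rate, years)
    # Detectable difference in counts ~ (z_alpha + z_power) * sqrt(2 * events)
    detectable_count_diff = (z_alpha + z_power) * math.sqrt(2 * events)
    return detectable_count_diff / events

# Assumed 2% annual rate of major events over the 3-year competition:
for n in (10_000, 35_000):
    print(n, expected_events(n, 0.02),
          round(min_detectable_reduction(n, 0.02), 3))
```

Under these assumptions a 10K population can detect roughly a 16% relative reduction in event rates, while 35K can detect roughly 9%; whether interventions are expected to produce effects in between is exactly the question posed back to the team.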

6) The issue of enrollee turnover will be a challenge. If we expect a typical 20% per annum turnover among the insured population, then only about 50% of the population will be present after three years. It might be appropriate to stratify impact by length of enrollment or limit the final population to continuous enrollees, but that would require a much larger starting number.

We will be contracting with the “communities” (employers) for a three year period. We are considering stability of community when selecting communities, but have not finalized our methodology for handling attrition (perhaps using person-years?). We are also evaluating how we can use COBRA and retiree programs to extend participation after turnover. Modeling the 20% attrition you assume would require a base population of approximately 15,000 people (~50% of the population would turn over across three years). We will take this under consideration.
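The turnover arithmetic above can be sketched directly, assuming independent (compounding) 20% annual turnover rather than a flat per-year loss; these helper functions and figures are illustrative, not part of the Prize methodology:

```python
def retained(start_pop, annual_turnover=0.20, years=3):
    """Enrollees still continuously present after `years`,
    assuming independent (compounding) annual turnover."""
    return start_pop * (1 - annual_turnover) ** years

def start_needed(final_pop, annual_turnover=0.20, years=3):
    """Starting enrollment required to end with `final_pop`
    continuous enrollees under the same assumption."""
    return final_pop / (1 - annual_turnover) ** years

print(round((1 - 0.20) ** 3, 3))  # 0.512 -> roughly half the cohort remains
print(round(retained(15_000)))    # continuous enrollees left from a 15K start
```

The ~51% three-year retention matches the "~50% turnover" figure cited in the question; `start_needed` solves the inverse problem if a minimum continuous-enrollee cohort is specified.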


7) Communication with providers will be key. If providers have a very small percentage of their enrollees involved in the competition, how will we get their attention? The only way this would work is in a small geographic area where most of their patients are insured by Blue Cross (WellPoint), but such communities are not fully representative of the US and are not scalable.

We are asking prospective communities to highlight HXP participation support from local providers as part of their application process. We don’t expect that teams will work with all local providers, but rather that they will offer incentives and additional support to those who seem most receptive to their programs. We believe that participation of individuals in the test program can be accomplished through the support of “activated providers” who will offer new programs (e.g., home/office visits, telephonic outreach, transition management programs, care coordination, etc.). Of necessity, this “recruitment” will most likely succeed with traditional early adopters, who will benefit from a virtuous cycle of recognition, innovation, and media attention. We realize this will be challenging, and we will not win over every provider at once, but we believe we will find enough providers willing to participate to have a viable competition.

8) Provider EHR/HIT and consumer PHRs/ web portals will be key to any group’s response. To what degree will the communities that are targeted already be wired? If not, this will be a major undertaking and the extent to which this HIT development is necessary must be comparable across teams to ensure a level playing field. Also, given the “HITECH” subsidies being rolled out soon perhaps these resources can be coordinated so as not to count towards the cost-of-care input costs. We would recommend close cooperation with the Office of the National Coordinator for HIT.

We share your opinion about the value of EHR/PHR tools in potentially contributing to delivering, managing, and improving health interventions. However, given that <10% of hospitals and <35% of providers have an EHR/PHR, we are not aware of evidence that these IT tools will be necessary to achieve the targeted outcomes. Furthermore, much of the data that will be shared as part of this competition is not included in current EHR/PHR offerings (claims data, other feeds from the insurer/employer, self-reported surveys, etc.). We will consider adding “degree of connectedness” to the community criteria to attempt to avoid selection bias. We anticipate that many teams will have to add several new capabilities to the communities they are managing.

9) We assume that the price of health care varies across target populations; this will be adjusted for. This involves not only provider price but also practice “signature.” That is, if current practice patterns vary considerably (e.g., the MN/FL issue), that must be taken into consideration for comparability’s sake. Moreover, it may actually be an unfair advantage for a team to be assigned a FL-type “intense use” practice environment, as it will be much easier to obtain savings in such an area given the likely “fat” in the system.

This variability will be handled both by a selection process that accounts for variations in spending and by a matched control in an adjacent 4-digit ZIP code area. Our measurement infrastructure will measure relative deltas, not absolutes. While we will attempt to control for this through selection/matching, we may not be able to control all aspects of practice signature. However, we expect that the ability to influence both outcomes and costs can help equalize this (a FL-type market might need to focus more on cost-related issues, while a MN-type market that has already solved cost to some degree would focus more heavily on quality-related issues). We are open to suggestions on how to optimize this.
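Measuring relative deltas against a matched control amounts to a simple difference-in-differences calculation. The sketch below uses invented per-capita cost figures (not Prize data) to show how a test community's change is netted against its matched control's change:

```python
def relative_delta(test_before, test_after, control_before, control_after):
    """Difference-in-differences on a per-capita metric: the test
    community's relative change net of the matched control's relative
    change over the same period."""
    test_change = (test_after - test_before) / test_before
    control_change = (control_after - control_before) / control_before
    return test_change - control_change

# Illustrative: test community cuts per-capita cost 4% while its matched
# control (adjacent 4-digit ZIP) rises 3% -> roughly a 7% net improvement
print(relative_delta(8000, 7680, 8200, 8446))
```

Because both areas share regional price levels and secular cost trends, the subtraction cancels much of the background variation, which is exactly why absolute spending comparisons across MN-type and FL-type markets are avoided.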

10) Will we need to supply a complete care-management/DM infrastructure? Or can we (or must we) use WellPoint’s current care management providers?

We are considering whether WellPoint can actually be a vendor to the competition. This requires clearance from ethics/legal teams. WellPoint will clearly not be able to be an exclusive member of any one team, nor do we expect that any team will have to use their solutions.

11) The provider mix may be very different across the locales (e.g., primary care/specialist mix, presence of groups/IDSs). The general adequacy of supply may also vary across areas. Each provider situation will offer different challenges and will have a very significant impact on outcomes. This will need to be controlled for.

We are aware of this situation (variability of provider mix), and teams will have to adjust for the local provider mix. This is another trade-off of a real world competition, reflecting the reality in which any competitor would find themselves. The Dartmouth team has some tools to evaluate provider mix that we hope to deploy to help adjust for this in the community selection process.


12) The issue of intellectual property (IP) needs to be clarified. To what extent will concepts, methods, and tools remain the property of the team, and to what extent must they be put in the public domain? Also, will WellPoint have access to any “left behind” IP for its own commercial use without paying future ongoing licensing fees to the team?

This issue will be clarified in the Master Teaming Agreement (MTA). WellPoint's intention is to make publicly available any IP it garners from the competition, and each team will maintain exclusive rights to the intellectual property it develops as part of the competition. Teams will be required to publish their results as part of the evaluation process; this will include public reporting of standardized outcomes information and a methodologic overview, but we will make every effort to ensure that individual IP is protected.

A stated objective of the Prize is to create an entirely new industry and we want teams to be able to mature into viable business entities. As such, we understand and respect the need to protect IP.

13) There is some discussion of a comparison group in the initial plan design. Will there be one selected for each target or only one overall? This is a good idea and you will need advice from research design methodologists. Please elaborate more fully in the next iteration of contest documentation.

We will be providing a comparison or control group for each target area to account for some of the regional / payer variability that we discussed above. We will be utilizing a third party entity (Milliman, HealthCore, etc) to help us with this process.

14) The simulation approach you will use to select finalists is not entirely clear. In order for all teams to provide enough information to allow you to do this “bench test,” we will need considerably more clarity on the framework and content you will require as part of our preliminary plan response.

We are currently working on this concept. Essentially, teams will have access to a WellPoint database from which teams must identify their population, assign them interventions, make projections of estimated impact, and provide a credible financial model of impact. Teams will also need to highlight how they will be able to achieve the 50% health value improvement. Teams who advance will be subject to further evaluation wherein they will have to justify their assumptions in order to qualify to be drafted by a community.

15) When the intervention is “turned on”, the referees must be meticulous about accounting for all costs that “deep pocketed” teams could absorb on their own. Also, you may need to control for potential geographic variation in a team’s costs (such as more travel by a distant team) to ensure a level playing field regarding input costs.

Each team will be required to bill all operational services to the community. Back-end development and internal costs not deployed to participants are not restricted. Costs will be audited by an independent auditor. Deep-pocketed teams may have some advantage in development resources, but they will be competing against the agile development processes of smaller companies. We anticipate an exciting ecosystem evolving that will allow unique solutions to be developed and to evolve quickly. We hope this seeds a very dynamic marketplace/industry going forward.

16) We understand that the competition needs a beginning and an end. However, most of the seeds of the intervention will bear fruit well beyond the 3 year period. Is it possible to continue monitoring after three years and to estimate impact beyond 3 years?

We believe that successful competitors will remain integrated into the test communities long after the competition. We hope that, with an appropriate measurement framework and auditing function in place, teams and communities will continue to operate the intervention and continue to prove successful outcomes.

Our goal is to create independent business entities. Teams and communities will need to determine how to sustain their business relationship beyond the competition time frame.

17) Do you want further input on suggested outcome metrics as part of the plan? This would suggest iterations of the end-points so that all teams can be on the same page.

We welcome public comment and feedback on the outcome metrics. We are working closely with Dartmouth, Brookings, IHI, Milliman, and others to ensure that appropriate metrics are evaluated and incorporated into the competition. We are attempting to keep the public involved in the iterative process by reporting back our findings.