A week ago I puzzled:
If a measure of medical quality does not perfectly correlate with quality, that seems to many a sufficient reason to prevent people from seeing or acting on the measure. … We prevent hospitals from publishing mortality statistics, because such stats may sometimes be "misinterpreted." … "As corporations and other organizations mine electronic data to draw conclusions about them … doctors could begin to 'cherry pick' healthier patients."
Many commenters defended such fears. Toby Ord:
A systematically biased estimate of quality … is feared to create damaging incentives in the medical profession (cherry picking patients, not doing work on the unmeasured aspects etc). … doing more harm than good … Restricting the data to the government or supervisory bodies that understand its weaknesses may be the best solution.
Yet every industry with imperfect quality measures suffers similarly. For example, consider the bad incentives from these imperfect college quality measures:
Student SAT scores: Prefer to admit students with high scores, rather than students who would benefit most from your school.
Student GRE Scores: Teach to the GRE test, neglecting other topics.
Graduation Rates: Fail too few students, and give too many A grades.
Campus visits: Invest too much in pretty grounds, and in visible events while students visit.
Research prestige: Invest too much in prestigious professors who neglect teaching for research.
Sports success: Invest too much in winning teams that gain attention.
To avoid these problems should we have the government assign students to colleges, or should we prevent schools from having researchers or sports teams, allowing campus visits, or publicizing test scores, graduation rates, or research success? If not, what makes medicine so different?
As a PhD student who will probably never leave the university, I'm always interested in how to assess program quality and compare programs. So far, the most illuminating indication of undergraduate education quality for me has been the reports of graduate students at my school who did their undergraduate work elsewhere. For example, I got an undergraduate degree in computer science, and when I meet people who got undergraduate computer science degrees from other institutions, I often find that I learned a lot more math than they did, while they learned more about software engineering, reflecting a difference in program goals. I expect similar stories across most programs. The trouble is that it's hard to get this kind of information before you enter college, because universities generally focus more on advertising clear indications of status than on describing the objectives and social setting of each degree program.
As I always say to my students, algebra is the same whether you're learning it in Cambridge, MA or right here. What matters is your desire to learn.
Toby, the fact that you are uncertain about how to best match patients and doctors does not seem a good reason to prevent them from trying. It seems to me you are too quick to assume better students should go to better schools, but that richer or sicker patients should not go to better doctors.
In any case, there is a good reason to let people evaluate quality even if there are no sorting benefits from matching who goes with whom: better quality evaluation creates better incentives to produce quality.