There are many settings in which agents with differing types choose among assessments that attempt to measure those types. For example, students may take either the SAT or the ACT, and bond issuers may choose among the three main rating agencies. Assessments that award higher ratings are obviously preferable to all agents; preferences over an assessment's accuracy are less obvious. Intuitively, low types prefer less accurate assessments because they stand to gain more from mistakes, while high types prefer more accurate assessments because they benefit from a precise description of their type. We propose a condition on assessments that ensures agents choose among them assortatively: higher types select more accurate assessments. When the assessments have only two scores, this condition implies Blackwell's informativeness criterion; with three or more scores, the implication fails. When the assessments induce the same unconditional distribution of scores, our condition implies the concordance order. We extend the analysis to repeated testing and to mechanism design, showing that a principal can use menus of garbled assessments to improve the informativeness of high scores.
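The intuition that low types prefer noisier assessments while high types prefer accurate ones can be sketched numerically. The following is a hypothetical illustration, not a model from the paper: two types in {0, 1} with equal priors face a binary-score assessment of accuracy p that reports the true type with probability p, and an agent's payoff is the market's posterior expectation of its type given the reported score.

```python
# Hypothetical illustration (not the paper's model): types theta in {0, 1},
# equal priors, binary-score assessment with accuracy p >= 1/2 that reports
# the true type with probability p. Payoff = posterior E[theta | score].

def expected_payoffs(p):
    """Expected posterior payoff of the low and high type at accuracy p."""
    # With equal priors, the posterior after score 1 is p, after score 0 is 1 - p.
    high = p * p + (1 - p) * (1 - p)   # high type draws score 1 w.p. p
    low = p * (1 - p) + (1 - p) * p    # low type draws score 0 w.p. p
    return low, high

for p in [0.5, 0.7, 0.9]:
    low, high = expected_payoffs(p)
    print(f"accuracy {p:.1f}: low type {low:.2f}, high type {high:.2f}")
# accuracy 0.5: low type 0.50, high type 0.50
# accuracy 0.7: low type 0.42, high type 0.58
# accuracy 0.9: low type 0.18, high type 0.82
```

As accuracy rises the low type's expected payoff falls and the high type's rises, so if each type could pick its assessment, selection would be assortative, matching the intuition stated above.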