Readability of Assessments in a Digital Age (Part 2): Practical Issues

  • We want test questions that are very detailed, highly complex, engage the test taker, reflect the job, and, oh yes, are at a 6th grade reading level.
  • Grade level = 5.9: the reading level, as calculated by Word, of paragraphs taken from a 3rd grade reader from 1960.

This is part two of a blog dealing with the measurement of readability and the establishment of appropriate reading levels. For purposes of this blog, readability can be defined as the ability of material to be comprehended by its intended audience.

In Part 1, we investigated approaches to readability based on:

  1. The measurement of grammatical features or readability formulas.
  2. The linguistic perspective.
  3. Job analysis.

In Part 2, we turn our attention to more practical issues such as:

  • How are readability indices used by assessment professionals?
  • What adjustments can or should be made when evaluating multiple-choice tests?
  • How has the changing nature of jobs impacted readability?


March 25th, 2015 | Readability

Readability of Assessments in a Digital Age (Part 1): Bet You Won’t Read This Whole Blog

  • People online don’t read.
  • Olny samrt poelpe cna raed tish – cna you?

The opening epigraphs both deal with readability. The first is a commonly encountered claim that people scan rather than read when perusing material online. What does that mean for employment websites and the associated assessments? The second is a teaser that often comes with the false claim that very few people can read the material, when in fact almost everyone can; it illustrates that individuals can make sense of what appears to be unreadable or scrambled text. Both have implications for the topic of this two-part blog: the readability of assessments.

Measuring readability and establishing appropriate reading levels is a critical responsibility that many selection specialists and personnel managers in the public sector face on a regular basis. The task involves analyzing both the materials used on the job and the tests or assessments used in selection. Readability can be defined as the ability of material to be comprehended by its intended audience.

Unfortunately, most of our knowledge of the impact of readability on assessments was developed in an era of paper-and-pencil, multiple-choice tests. Even that literature is limited in that most of it deals with educational tests; very few studies look at the actual impact of readability on the difficulty of employment tests or on potential racial bias in those tests. I could spend this blog complaining ad nauseam about researchers conducting highly artificial studies of irreproducible phenomena of little generalizability while ignoring questions of real practical importance, but that is a topic for another day and another forum.

One of the questions we will examine in the second part of this blog is whether readability is still relevant for computer-based tests. Before we do, however, we will review the more traditional literature on readability and how it is measured.

In Part 1, we investigate approaches to readability based on:

  1. The measurement of grammatical features or readability formulas (see the sketch after this list).
  2. The linguistic perspective.
  3. Job analysis.
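To make the first approach concrete, here is a minimal sketch of one such formula: the Flesch-Kincaid Grade Level, the index Word reports (as in the epigraph to Part 2 above). The vowel-run syllable counter is a rough heuristic of my own, not part of any standard implementation, so treat its output as approximate.

```python
import re

def count_syllables(word: str) -> int:
    # Rough heuristic: count runs of vowels; every word gets at least one syllable.
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def flesch_kincaid_grade(text: str) -> float:
    # Flesch-Kincaid Grade Level:
    # 0.39 * (words per sentence) + 11.8 * (syllables per word) - 15.59
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    n_words = max(1, len(words))
    syllables = sum(count_syllables(w) for w in words)
    return 0.39 * (n_words / sentences) + 11.8 * (syllables / n_words) - 15.59

sample = ("Readability can be defined as the ability of material "
          "to be comprehended by its intended audience.")
print(round(flesch_kincaid_grade(sample), 1))
```

Notice that longer sentences and longer words are the only things that push the grade level up; surface formulas like this see nothing else about the text, which is one reason they can rate paragraphs from a 1960 third-grade reader at grade 5.9.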


March 11th, 2015 | Readability