Successive Hurdles, Test Weighting and Certification Rules: Part 4

The articles in this series, taken as a whole, present a picture of the challenges and potential pitfalls in developing effective selection instruments and test batteries. In addition to being reliable and valid so that they support the selection of the best available workforce, instruments must also withstand legal scrutiny. Unfortunately, experience has shown that the local laws, statutes, and civil service rules that provide the blueprint for how HR work is to be done often conflict with exam development and validation procedures. In particular, certification rules that dictate the number of candidates from a ranked list that can be certified for a hiring authority to consider can undo the effort made to conform to professional standards.

Many individuals tasked with writing civil service rules, particularly in the infancy of merit systems, did not have the benefit of a background in test development and statistics. Many systems focused on fairness and on avoiding abuses such as the spoils system or the good ol’ boy system, but they did not take into consideration statistical concepts related to test scores, in particular whether meaningful differences existed between scores. Sometimes certification rules narrowly defined the group eligible for certification; in other instances, rules were modified in an attempt to address equal employment issues. These modifications often took the form of certifying the whole list, which meant the hiring agency could select anyone on the entire list to put through the final selection interviews.
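The contrast between a narrow certification rule and whole-list certification can be sketched in a few lines. This is a hypothetical illustration, not a description of any particular jurisdiction's rule: the names, scores, and "rule of three" cutoff are invented, and real rules vary in how they handle ties and multiple vacancies.

```python
# Hypothetical illustration: two certification rules applied to the same
# ranked eligible list. A narrow "rule of three" certifies only the top
# three scorers; whole-list certification passes everyone to the hiring
# authority. Names and scores are invented.

eligible = [("A", 96), ("B", 95), ("C", 95), ("D", 94), ("E", 80)]  # ranked

def certify_top_n(ranked_list, n):
    # Certify the first n names from the ranked list.
    return [name for name, _ in ranked_list[:n]]

rule_of_three = certify_top_n(eligible, 3)
whole_list = certify_top_n(eligible, len(eligible))

print(rule_of_three)  # ['A', 'B', 'C'] -- D is excluded despite trailing C by one point
print(whole_list)     # everyone is referable, regardless of score differences
```

Note how the narrow rule excludes candidate D over a one-point gap that may not be statistically meaningful, while the whole-list rule discards the ranking information entirely — the two failure modes the excerpt describes.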

Successive Hurdles, Test Weighting and Certification Rules: Part 3

In the last article we focused on weighting the tests and subtests that comprise the total selection process. We identified some instruments that should be used only on a pass/fail basis and others that can be used to rank candidates. The tests and subtests suitable for ranking are those identified through the job analysis as helping to differentiate potential job performance. We also identified a problem with weighting tests and subtests by simply multiplying test results by the percentage we want them to contribute to the total: tests with greater variance tend to influence the ranking more than their intended weight. Simply put, tests tend to self-weight based on their variance.

A simple illustration shows that tests that spread scores out (have greater variance) will have a greater impact on the final ranking of candidates than tests that lump everyone together (have less variance). Taking this concept to its extreme: if a group of five people all received the same score on a multiple-choice exam but achieved widely divergent scores on a structured interview, the multiple-choice exam would carry zero weight in the final ranking and the interview would carry one hundred percent.
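The extreme case above can be worked through numerically. This is a minimal sketch with invented scores: a nominal 50/50 weighting of raw scores lets the higher-variance test drive the entire ranking, and standardizing each test to z-scores before weighting is one common way to restore the intended weights.

```python
# Illustration (hypothetical scores): a nominal 50/50 weighting of raw
# scores lets the higher-variance test dominate the final ranking.
from statistics import mean, pstdev

written = [85, 85, 85, 85, 85]     # zero variance: everyone tied
interview = [60, 70, 80, 90, 100]  # wide spread

def zscores(scores):
    m, s = mean(scores), pstdev(scores)
    # A zero-variance test cannot differentiate candidates at all,
    # so its standardized contribution is zero for everyone.
    return [0.0 if s == 0 else (x - m) / s for x in scores]

# Naive 50/50 combination of raw scores: the written exam adds the same
# constant to every candidate, so only the interview changes the ranking.
naive = [0.5 * w + 0.5 * i for w, i in zip(written, interview)]

# Standardize first, then weight: each test contributes in proportion to
# its intended weight (here the tied written exam still contributes
# nothing, which makes the self-weighting-by-variance effect explicit).
weighted_z = [0.5 * zw + 0.5 * zi
              for zw, zi in zip(zscores(written), zscores(interview))]

print(naive)       # [72.5, 77.5, 82.5, 87.5, 92.5] -- ordering set by interview alone
print(weighted_z)  # ranking unchanged, but each test's contribution is now explicit
```

The naive combination differs from the interview alone only by a constant, which is exactly the sense in which the multiple-choice exam "weighs zero" despite its nominal 50 percent weight.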

Successive Hurdles, Test Weighting and Certification Rules: Part 2

In the previous article, I introduced the concept of weighting the exams that comprise the battery of instruments in a selection process. This article explores that process in more depth. To begin with, some instruments lend themselves to being weighted, and thus to influencing the final ranking of candidates, while others do not. Which instruments are appropriate for ranking, and the weight given to those that are, should be established through a comprehensive job analysis designed to support the content validity model of test development.

There are numerous published methodologies for conducting job analyses designed to comply with the Uniform Guidelines on Employee Selection Procedures (UGESP), although they differ in how they combine subject matter experts’ ratings on KSAPs, which ultimately determine the weight given to selection components. Typically, these systems collect ratings on KSAPs and then review them to determine which ones are rated as required at time of hire, important for job success, and linked to performing important job tasks effectively. Often, to make the system more manageable, the next step groups KSAPs into domains, as recommended by several job analysis procedures designed to conform to the requirements of the UGESP.
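The screen-then-group-then-weight sequence described above can be sketched as follows. Everything here is hypothetical: the KSAP names, rating scales, screening thresholds, and domain labels are invented for illustration, and published methodologies differ in exactly these details.

```python
# Hypothetical sketch: average SME ratings on KSAPs, screen for those
# required at hire and important for success, group survivors into
# domains, and derive percentage weights from each domain's summed
# importance. All names, thresholds, and numbers are invented.

ksaps = {
    # name: (mean "required at hire" rating, mean importance rating, domain)
    "report writing":         (4.6, 4.2, "written communication"),
    "interviewing witnesses": (4.1, 4.5, "oral communication"),
    "criminal law":           (3.9, 4.0, "job knowledge"),
    "empathy":                (2.2, 3.1, "interpersonal"),  # screened out below
}

REQUIRED_AT_HIRE = 3.5  # assumed screening thresholds on a 1-5 scale
IMPORTANCE = 3.5

# Keep only KSAPs rated as required at hire AND important for success.
retained = {k: v for k, v in ksaps.items()
            if v[0] >= REQUIRED_AT_HIRE and v[1] >= IMPORTANCE}

# Sum importance ratings within each domain.
domain_totals = {}
for req, imp, domain in retained.values():
    domain_totals[domain] = domain_totals.get(domain, 0.0) + imp

# Normalize domain totals into percentage weights for selection components.
total = sum(domain_totals.values())
weights = {d: round(100 * t / total, 1) for d, t in domain_totals.items()}
print(weights)
```

A domain weight like this would then be assigned to whichever selection component (written exam, structured interview, etc.) measures that domain.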

Successive Hurdles, Test Weighting and Certification Rules: Part 1

Medical doctors and psychologists rarely rely on the results of one clinical test when making diagnoses. Similarly, selection experts recommend using a battery of selection instruments when making employee selections. The people in these professions realize that the accuracy and reliability of their conclusions are greatly enhanced when they have a broader range of information on those being evaluated.

In selection it is often critical to measure quite divergent knowledge, skills, and abilities, which necessitates the use of multiple selection instruments in the battery that comprises the selection process. Most jobs require cognitive abilities, and some require a body of knowledge, which in many cases can be measured by a written exam. In addition, most jobs require some degree of verbal communication ability. Since written tests cannot measure verbal communication, a second test, usually a structured interview, is necessary to measure whether a candidate possesses the verbal abilities required for the target job.

Beyond these abilities, many positions call for additional abilities that require additional selection instruments. Many job classes, such as police officer, firefighter, corrections officer, and park ranger, require the measurement of candidates’ physical abilities, psychological stability, medical fitness, and suitability of background. To use these instruments effectively and efficiently, they must be combined in a manner that best supports administration of the selection process and maximizes each instrument’s validity. Combining the information from multiple instruments is where the employee selection model differs from the medical model.
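One common way to combine such a battery is the successive-hurdles arrangement named in this series' title: each pass/fail instrument screens the pool before the next (often more expensive) instrument is administered, and only the survivors are ranked. The sketch below is hypothetical — candidate data, the written-exam cutoff, and the choice of which instruments are hurdles versus ranking components are all invented for illustration.

```python
# Hypothetical successive-hurdles sketch: pass/fail hurdles screen the
# candidate pool stage by stage; survivors are then ranked on a scored
# component. All candidate data and cutoffs are invented.

candidates = {
    "A": {"written": 78, "physical": True,  "interview": 88},
    "B": {"written": 62, "physical": True,  "interview": 95},  # fails written
    "C": {"written": 81, "physical": False, "interview": 90},  # fails physical
    "D": {"written": 90, "physical": True,  "interview": 72},
}

WRITTEN_CUTOFF = 70  # assumed pass point

# Hurdle 1: written exam, used pass/fail in this sketch.
pool = {n: c for n, c in candidates.items() if c["written"] >= WRITTEN_CUTOFF}

# Hurdle 2: physical ability test, inherently pass/fail.
pool = {n: c for n, c in pool.items() if c["physical"]}

# Ranking stage: only survivors are ranked, here on the structured interview.
ranked = sorted(pool, key=lambda n: pool[n]["interview"], reverse=True)
print(ranked)  # ['A', 'D'] -- B and C never reach the ranking stage
```

Note that B's strong interview score never enters the picture: a failed hurdle removes a candidate outright, which is precisely what distinguishes a hurdle from a weighted component.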