Now we’ll get into more of the details surrounding the development of a biodata instrument. To help illustrate how such instruments are developed — and how the collection and scoring of this type of information can enhance selection systems — we’ll review the development of IPMA-HR’s Correctional Officer Biodata Questionnaire (CO-BDQ) through its Technical Report, which describes in detail the steps IPMA-HR and Bruce Davey and Associates (BDA) took to develop and validate the 120-item test.
By the end of this article, you should have a clearer picture of what information is gathered to develop a biodata instrument, how it relates to the job, how candidates are tested and how their results determine how good a “fit” they are with the profile of successful job performers. Keep in mind throughout that, as with any selection instrument, the CO-BDQ had to be two things: reliable (i.e., measure consistently) and valid (i.e., measure what it is supposed to measure).
Developers wanted the CO-BDQ to measure the potential for good job performance coupled with a low probability of turnover; doing this consistently would prove its reliability. Because of the nature of this type of instrument and the type of information it gathers and utilizes, extra effort had to be made to demonstrate its efficacy.
Those involved in the hiring process generally agree: the more you know about each candidate, the more likely you are to make good hiring decisions. To that end, successful selection systems — i.e., valid and reliable — for police and corrections officers typically do not rely on just one type of test. As the term “systems” suggests, selection specialists develop a battery of tests and methods to increase the accuracy of their agency’s hiring decisions.
Job analysis plays a critical part in the development of effective public safety selection systems by identifying the tasks involved in the job, as well as the knowledge, skills, abilities and personal characteristics (KSAPs) necessary to perform them. While test instruments are developed to measure the degree to which candidates possess the prerequisite KSAPs, not all KSAPs identified in the job analysis are measured.
Deciding which KSAPs would be left out was primarily a matter of how easily they could be measured. If a KSAP could be measured by traditional test instruments, such as multiple-choice written tests and structured interviews, it was included. As a result, exam and selection plan outlines focused on dividing the KSAPs based on the availability and capability of test instruments: if a KSAP could not be measured, or if measuring it was too expensive or time-consuming, it was added to the “no” column.
Perhaps it is a character flaw, but I have never enjoyed reading long technical reports. In my early years as a practitioner in Human Resources, I was typically so busy that I did my best to gain the key information I needed from written materials by skimming them. This approach worked much of the time, yet there was more than one occasion when I would have benefited from reading an entire document thoroughly.
My days of skimming ended when one of the jurisdictions I had just started working for was sued by the Department of Justice for patterns and practices of discrimination in their entry-level hiring and promotional processes. One of the important lessons that came out of defending that lawsuit was how critical it is to read all important documents thoroughly.
In that regard, if yours is one of the jurisdictions that has chosen to utilize IPMA-HR’s public safety tests, you may be overlooking a wealth of information if you do not take the time to thoroughly read the “Test Response Data Report,” which is available to Test Security Agreement signers on request. Reviewing this report will provide you with key information about the test and how your candidates performed compared with all test takers combined.
The beauty of this report is in its simplicity. Unlike typical research studies that are weighed down with technical jargon, this report comes complete with all the information necessary to understand it. While a background in statistics may help in interpreting the significance of some of the information, it is not required to understand the report. To its credit, the document also provides a concise explanation of adverse impact and makes it clear that adverse impact does not equal discrimination.
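One piece of context that makes the report’s adverse impact discussion easier to follow is the UGESP “four-fifths rule”: adverse impact is typically flagged when a group’s selection rate falls below 80 percent of the highest group’s rate. The sketch below uses hypothetical pass counts, not figures from any Test Response Data Report:

```python
# Hypothetical applicant and pass counts for two groups
# (illustrative numbers only, not from any IPMA-HR report).
groups = {
    "Group A": {"applied": 100, "passed": 60},
    "Group B": {"applied": 50, "passed": 21},
}

# Selection rate = passed / applied for each group.
rates = {g: c["passed"] / c["applied"] for g, c in groups.items()}
highest = max(rates.values())

# Four-fifths rule: flag any group whose rate is below 80% of the
# highest group's rate. A flag signals possible adverse impact --
# it is a screening threshold, not a finding of discrimination.
for group, rate in rates.items():
    ratio = rate / highest
    flagged = ratio < 0.8
    print(f"{group}: rate={rate:.2f}, ratio={ratio:.2f}, flagged={flagged}")
```

Note that with small applicant counts these ratios can be unstable, which is one reason a flag is only a starting point for further analysis.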
When I was in school — particularly elementary school, where the practice seemed to be more prevalent — it troubled me to witness one student copying off another during tests. I always thought this was unfair. I wish I could say that I was upset by cheating because it damaged the educational system, but in reality I was angry because of the impact on me personally.
I carried that impression into the HR world. Since many decisions are based on test results that affect the health and effectiveness of an organization, I felt all the more justified in taking an active part in preventing cheating in this arena.
It pleased me a great deal when I discovered that there were ways to discourage copying beyond the use of diligent test proctors. Creating different forms of the same test was perhaps a devious method of discouraging copying. On the other hand, we always announced to test takers that multiple forms of the test might be in use, so they would know that the person next to them might not have the same test.
If you’d like to review the previous articles in this series, which were posted back in November, you can find them here: Part 1: Complaints & Appeals Related to Testing: An Overview and Part 2: Considering Your Appeals Process.
This is the third article in this series on complaints and appeals. It is intended to give courage and hope to those of you in the HR profession who are dealing with rules governing complaints and appeals that do not support sound test development and validation procedures. If we are to support and improve the effectiveness of testing and the value of the work done by those in our profession, we need to recognize that there are times when we must work to change rules that are contrary to sound practices. While being a change agent can be fraught with risk, it can also produce rewards. Before going forward with any effort to modify existing rules, it is critical to assess the climate in which you work and the impact appeal procedures have on the utility of the tests you are using.
Some of the basic things we know about test development and test validation include the fact that tests measure only the KSAPs (knowledge, skills, abilities and personal characteristics) an individual possesses at the time of testing. We also know that most tests we use in Human Resources are either aptitude tests or achievement tests.
In general terms:
- Aptitude tests measure one’s ability to learn and retain information over time; they are usually the types of tests used for entry-level testing.
- Achievement tests are designed to measure one’s knowledge of a particular subject after having received training and/or experience in that area.
Anything that occurs post-test with regard to providing candidates the opportunity to review the test and appeal test items changes the body of knowledge candidates can apply to the test, and therefore negatively impacts the reliability of the test. That is, we are now measuring candidates’ abilities to conduct research and make cogent arguments about the quality of test items and their answers compared to the keyed answers. We are no longer able to determine what candidates knew or did not know at the time of the test. So when we change scores for candidates based on appeals, we are giving them credit for information they may or may not have had during the test. That means we are no longer measuring what the test was intended to measure, and alterations in scores that negatively impact reliability also reduce the validity of the test.
As stated in the last article, appeals are typically more formal than complaints and there are usually written rules and procedures that govern the handling of appeals. These rules typically spell out how an appellant will go about appealing and how the agency will go about responding to the appeal.
The related rules range from general to very specific, and they can often be found in the civil service rules for the agency or in a separate document of their own. At times, unions negotiate appeal procedures, but these are not typically mandatory subjects of bargaining. At one point in my career, I worked for a police agency that had been rocked by a cheating scandal. To prevent further cheating, it created a very onerous and expensive test creation and appeal process that wreaked havoc on the ability to develop valid and reliable written tests.
This brings me back to a point I made in the previous article and will stress again here: if you have written appeal procedures that are not consistent with sound test development and validation strategies, you should make it a goal to get them rewritten and approved so that they are. Your rules should work for your test development and validation program, not against it.
If you have an active selection program that processes large numbers of candidates through successive-hurdle processes that include written and oral exams, the probability that you will receive complaints and appeals is high. The number you receive and how you handle them are largely within your control. While it is true that most jurisdictions have an appeal process spelled out by their Civil Service Rules or other regulations that guide their operations, it is also generally true that these rules are subject to change. So I acknowledge that, for the present, the manner in which you deal with complaints and appeals may be dictated to you. I would also stress that, in your role as an HR professional, it is part of your responsibility to do what you can to ensure that your rules reflect current practices and procedures in the field of testing.
You may also have written contracts with unions and/or consultants and test publishers that specify how appeals will be handled, and in those cases you are obligated to follow those guidelines for the life of the contracts. However, I still believe the number of complaints and appeals you receive can be minimized by your approach to the testing process. Even though much of the finesse involved in handling applicants and their issues must be learned through experience, having good guidelines to follow and a positive customer service attitude can go a long way in mitigating the impact of complaints and appeals.
This is the third article in a three-part series on job analysis. We have covered the fundamentals of job analysis and reviewed a report prepared by IPMA-HR as a means of illustrating the role of a Human Resources Analyst in evaluating the work of test developers and consultants. In particular, recognize that as an HR professional you may not be personally responsible for creating job analysis procedures, writing tests or conducting validation studies, but knowing how they are done allows you to play this key role for your agency. Even if you hire a test developer or consultant, you may be asked to assist in the process, and understanding how job analyses are done will prove valuable to you in that role as well.
In the first article I stressed that a thorough job analysis is the foundation for most of the technical work performed in Human Resources. As we have already seen, a job analysis is critical for developing content-valid selection instruments, which should be the heart of your recruiting and selection program. As if that were not sufficient reason for conducting job analyses, the information obtained from thoroughly analyzing the jobs in your agency can also support your training, classification and compensation, and performance evaluation programs; inform disciplinary action and remediation; and serve as a basis for transportability studies, as discussed in the last article.
As indicated in the first article in this series, a thorough job analysis should be the foundation for most of the technical work performed in Human Resources. We also discussed that while analysts in the field today may not necessarily need to be able to design their own job analysis systems and create written exams from the results, they should have an understanding of the process and the ability to recognize whether or not products and vendors meet professional standards and can stand up to court scrutiny.
Our focus, as suggested above, will be the utilization of job analyses to create content-valid selection instruments, with other uses for job analysis results discussed in the next article. In addition, it is important to stress that the Uniform Guidelines on Employee Selection Procedures (UGESP, 1978), along with the Society for Industrial and Organizational Psychology (SIOP) Principles for the Validation and Use of Personnel Selection Procedures (Principles, 2003), are still the guiding documents for determining the adequacy of content validation procedures (see references at the end of this post). It is also important to note that the UGESP (1978) apply not only to written exams but to all selection instruments, including oral exams, physical fitness tests, background investigations and one-on-one hiring interviews.
When I started in Human Resources, the Uniform Guidelines on Employee Selection Procedures (UGESP, 1978) had just been adopted by the Civil Service Commission, the Department of Labor, the Equal Employment Opportunity Commission and the Department of Justice. These guidelines spelled out the requirements for demonstrating that selection procedures had content validity, criterion-related validity and/or construct validity.
The adoption of the Guidelines was followed by a wave of class action lawsuits, filed primarily by the Department of Justice against public safety agencies, alleging illegal discrimination in hiring procedures and failure to demonstrate the validity of the instruments being used for selection. This created a demand for individuals with test development and validation experience to assist public sector agencies in developing and validating new selection instruments or, as some agencies chose at the time, to opt out of using written exams.
The hiring of new “Personnel Analysts” focused on individuals with backgrounds in research, statistics and testing. Training for new analysts focused on test writing and validation, which had at its heart the development of job analysis procedures. Job analyses intended to serve as the basis for developing content-valid selection procedures had to be designed to withstand the rigorous scrutiny of the Department of Justice, whose Guidelines, as their chief writer admitted to me, were designed to tip the scales in the DOJ’s favor when it came to litigation. In addition, while the Guidelines contained a section outlining in detail the requirements for demonstrating content validity, the Department of Justice team of attorneys responsible for litigating most of its cases had a distinct bias toward criterion-related validity and held content validity in low regard. That is, even if an agency followed the Guidelines to demonstrate that its selection instruments were developed to “build in” their validity (content validity), the DOJ would tend to pick the agency’s analyses and studies apart if it did not also demonstrate a statistical correlation between test performance and a job-related criterion, such as job performance (criterion-related validity).