Legal Update: Physical Abilities Testing and Big Data

Physical Abilities Testing

There have been recent developments in two cases related to the use of physical abilities tests as part of the pre-employment process.

United States of America v. Commonwealth of Pennsylvania et al.

In August 2016, the U.S. Department of Justice (DOJ) asked a federal judge in the U.S. District Court for the Middle District of Pennsylvania to grant a partial summary judgment finding that the physical abilities test battery used by the Pennsylvania State Police (PSP) disproportionately screened out female applicants, clearing the way for trial against the PSP. The DOJ claims the court should conclude that the state’s use of its physical abilities test for screening entry-level state police trooper positions has a disparate impact on women, and should therefore allow the parties to proceed to trial to determine whether the test is job-related and consistent with business necessity or whether there is a less discriminatory alternative employment practice that serves the state’s needs.

The suit, originally filed in July 2014, states that from 2003 to 2008 the PSP used a physical abilities test battery that consisted of five events: (1) a 300-meter run, (2) sit-ups, (3) push-ups, (4) a vertical jump, and (5) a 1.5-mile run. There was a cutoff score for each event, and recruits had to pass every event to continue in the selection process. From 2003 to 2008, approximately 94% of male applicants passed the tests while approximately 71% of female applicants passed (adverse impact ratio = .76). In addition, the suit claims that in 2009 the PSP modified the battery, adding an agility run and removing the sit-ups. With the new battery, between 2009 and 2012, 98% of male applicants passed and 72% of female applicants passed (adverse impact ratio = .73). The DOJ contends that the difference in pass rates between female and male applicants is statistically significant and that the female pass rate is less than 80% of the male pass rate for both the 2003–2008 and 2009–2012 periods.
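The adverse impact ratios cited in the suit follow the EEOC’s four-fifths (80%) rule of thumb: divide the protected group’s pass rate by the comparison group’s pass rate, and treat a result below 0.80 as initial evidence of adverse impact. As a minimal sketch (the function name and structure are illustrative, not from the suit), the PSP figures work out as follows:

```python
def adverse_impact_ratio(protected_pass_rate: float, comparison_pass_rate: float) -> float:
    """Ratio of the protected group's pass rate to the comparison group's.

    Under the EEOC's four-fifths (80%) rule of thumb, a ratio below 0.80
    is commonly taken as initial evidence of adverse impact.
    """
    return protected_pass_rate / comparison_pass_rate

# Pass rates cited in the DOJ suit against the PSP
periods = {
    "2003-2008": (0.71, 0.94),  # female pass rate, male pass rate
    "2009-2012": (0.72, 0.98),
}

for label, (female, male) in periods.items():
    ratio = adverse_impact_ratio(female, male)
    flag = "below" if ratio < 0.80 else "at or above"
    print(f"{label}: adverse impact ratio = {ratio:.2f} ({flag} the 80% threshold)")
```

Both periods fall below the 0.80 threshold, which is why the DOJ argues the ratios (together with tests of statistical significance, which require the underlying applicant counts rather than the rounded percentages shown here) support a disparate impact finding.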

Ernst v. City of Chicago

In September 2016, the U.S. Court of Appeals for the Seventh Circuit reversed one District Court decision and remanded another to the lower court for reconsideration. In 2008, five female licensed paramedics filed a Title VII gender discrimination suit after they failed a physical abilities test battery used by the City’s fire department for hiring. The battery included a modified stair-climb, an arm endurance assessment, and a leg lift. The plaintiffs argued that the City had a discriminatory motive against women (disparate treatment) because, without a contract competition, it hired a consulting firm that had previously created a test for the City’s entry-level firefighter position that had a disparate impact on females, a decision which they claimed reflected the City’s desire to reduce the number of women in the paramedic ranks. They also argued that the paramedic battery had a disparate impact on females and that improper statistical methods were used to develop and validate the test battery.

In District Court, the case was split into two parts. The disparate treatment claims went to a jury, which returned a verdict in favor of the City. The disparate impact claim was tried in a separate bench trial, in which the court ruled that the plaintiffs had established a disparate impact against women. Between 2000 and 2009, 98% of male applicants passed the physical abilities test while 60% of women passed (adverse impact ratio = .61). The court nevertheless ruled that the City’s validation study established that the assessment battery was job-related and consistent with business necessity. Although the plaintiffs proffered equally valid alternatives with less adverse impact, the court found that they had presented them without supporting evidence and ruled in favor of the City.

On appeal, the Seventh Circuit reviewed both claims. For the disparate treatment claim, the court remanded the case for a new jury trial because the jury instructions were not clear on the plaintiffs’ burden of proving that the City was motivated by anti-female bias in creating the tests. On the disparate impact claim, the court reversed the bench trial verdict, stating that the validation study was neither reliable nor valid. The court took issue with several aspects of the concurrent criterion-related validity study used to establish the job-relatedness of the battery.

First, the court questioned the representativeness of the validation study participants, who were volunteers. It suggested that the sample did not represent the skill set of the general population of Chicago paramedics, because the volunteers performed better than public-sector and private-sector paramedics normally perform, resulting in an artificially high standard. The court also suggested that one of the three criterion measures (work samples) used to validate the assessments lacked reliability.

In addition, because the work samples were used to validate the assessments without being validated themselves, the court could not conclude that they reflected the primary paramedic skills learned on the job. There was no evidence that the work sample tests properly measured job skills, and the court questioned whether they actually tested the skills that Chicago paramedics learn on the job.

Big Data and the EEOC

In an earlier update, we reported on a Federal Trade Commission public workshop entitled “Big Data: A Tool for Inclusion or Exclusion?” (2014) and a White House report entitled “Big Data: Seizing Opportunities, Preserving Values” (2014). Big data is continuing to “disrupt” the HR space, changing practice in a number of areas, including the assessment of engagement and the development and validation of selection assessments (as well as the types of assessments used and the constructs measured). This was highlighted in October 2016 at the Society for Industrial and Organizational Psychology’s 12th Annual Leading Edge Consortium, “Talent Analytics: Data Science to Drive People Decisions and Business Impact,” which showcased new and emerging methods in data collection, analysis, and presentation.

In addition, on October 13, 2016, the EEOC held a public hearing on how big data is being used to make hiring and other employment decisions, such as background checks. According to the EEOC, big data in this context refers to the use of algorithms, data scraping of the internet, and other means of evaluating thousands of pieces of information about an individual. The EEOC wants to ensure that employers’ use of technology-driven tools does not lead to discrimination in the hiring process. EEOC Chair Jenny Yang stated that “big data has the potential to drive innovations that reduce bias in employment decisions and help employers make better decisions in hiring, performance evaluations, and promotions.” At the same time, Chair Yang indicated that “it is critical that these tools are designed to promote fairness and opportunity, so that reliance on these expanding sources of data does not create new barriers to opportunity.”

The EEOC Commissioners heard from a panel of experts, including industrial psychologists, lawyers, and labor economists, who discussed the pros and cons of big data in the workplace. Drs. Eric Dunleavy and Kathleen Lundquist were the industrial psychologists on the panel. Dr. Dunleavy reported that a survey of Society for Human Resource Management members indicated that about one third of responding organizations are using big data in employment, and that the proportion is higher among larger employers. Dr. Lundquist noted that big data models measure correlation, not causation, and do not actually look for the knowledge, skills, and abilities necessary to do the job. Ifeoma Ajunwa, of the University of the District of Columbia School of Law, pointed out that the use of big data is not limited to hiring decisions but extends to current employees, who are subject to workforce monitoring and even wearable tracking devices. She indicated that we should not envision a workplace where workers must surrender all privacy rights to hold a job. The prepared statements and bios of the panel members are available on the EEOC’s website.

Big data is changing what we assess and how we assess it. As its impact on the field grows, it will be important to monitor the methods used, to consider how it affects measurement and validation, and to keep an eye on the legal implications.

Co-Author: Brian O’Leary U.S. Government Retired, Independent Consultant

Reprinted with permission from the Personnel Testing Council of Metropolitan Washington
