About Dennis Doverspike

Dennis Doverspike, Ph.D., ABPP, is President of Doverspike Consulting LLC. He is certified as a specialist in Industrial-Organizational Psychology and in Organizational and Business Consulting Psychology by the American Board of Professional Psychology (ABPP), serves on the Board of the American Board of Organizational and Business Consulting Psychology, and is a licensed psychologist in the State of Ohio. Dr. Doverspike has over forty years of experience working with consulting firms and with public and private sector organizations. He is the author of 3 books and over 150 other professional publications. Dennis Doverspike received his Ph.D. in Psychology in 1983 from the University of Akron.

Should I Provide Assessment Feedback?

My blog topic for this month deals with employers providing developmental feedback to candidates based on the results of an employment test or assessment. Although feedback of results from employment tests is common in many other countries, such feedback is provided less often in the United States.

Specifically, I will discuss using assessment or test results to provide developmental feedback and suggestions to employees. Although I will deal with feedback from tests in general, I will pay special attention to assessments that allow for a more in-depth, comprehensive view of the individual, such as that offered by assessment centers.

[For more information on assessment centers, see Public Safety Assessment Center System (PSACS) and Assessment Center Educational Materials (ACEM)]

Some Findings from a Quick Literature Search

I had a graduate student perform a quick search of the current literature. Regarding developmental feedback policies among employers in the United States, we found that organizations rarely provide scores or give feedback to job applicants on pre-employment tests. Feedback is more common for promotional candidates, but even there it may skew toward simply providing results or scores. Expansive or detailed feedback is most likely where the tests are used specifically for training or developmental purposes.

As for assessment centers, the International Congress on Assessment Center Methods has a document entitled the 2014 Guidelines and Ethical Considerations for Assessment Center Operations (6th Edition). According to those guidelines, feedback should be provided, and if the assessees are members of the organization, then they have the right to “read any formal, summary, written reports concerning their own performance and recommendations that are prepared and made available to management.” (more…)

By | January 20th, 2016 | Assessment, Assessment Centers, Assessment Feedback, Uncategorized | Comments Off on Should I Provide Assessment Feedback?

Test Score Posting Policies in the Public Sector

My blog this month deals with what I believe is a complex question that requires deft consideration of the demands of multiple stakeholders and the careful weighing of legal and ethical issues. I am speaking of the question of how a public sector jurisdiction makes decisions regarding the posting of scores, both during and after the completion of an assessment or selection project.

As human resource and assessment professionals, we have to resolve the conflict among equally important values: transparency of feedback to test takers, the privacy rights and expectations of confidentiality held by job or promotional candidates, and the public’s right to know, along with the media’s right to information. Deciding how and what type of information to post can seem like a judgment worthy of Solomon, as the human resource professional must reconcile:

  • the public’s right to information, including possible public record laws;
  • the candidate’s desire for feedback and test score information; and
  • the right of the candidate to privacy and the candidate’s expectation that their scores will be handled in a confidential and sensitive manner.

In my opinion, one of the complicating factors is that decisions about the release and posting of test score information must take into account many considerations beyond simple psychometric and assessment issues. Some of the questions that need to be asked and answered include:

  • Are there federal or state laws that govern the release of public sector employment test information, as well as public records in general?
  • Are there local Civil Service regulations or rules?
  • Does the union contract specify how test results will be issued?
  • Are there past, relevant court decisions?
  • How have we done it in the past? What are the existing precedents?
  • What precedent, if any, do we want to create for future tests?

My own experience has been that every jurisdiction tends to make decisions regarding the posting and release of a candidate’s test and score information differently, even within a specific geographic area such as Northeast Ohio. I know of some cities that publicly post all the test score information for each candidate, while similar nearby cities post only the final rankings of the test takers.

If at this point you are starting to mumble to yourself, “I fear that Doverspike is not going to give us a simple answer in this blog,” you are correct. However, I am going to share with you some data from a survey conducted by IPMA-HR Assessment Services. (more…)

By | September 29th, 2015 | Assessment, Test Scores | Comments Off on Test Score Posting Policies in the Public Sector

Job Analysis – What’s New?

What’s new in job analysis? A cynic might reply – “very little.”  However, such a conclusion would lead to a very short blog and, more importantly, would not be accurate.  Despite the foundational nature of job analysis, there have been some recent developments worth sharing.

Consensus on Recommended Practices

Although it is still true that the Uniform Guidelines and the courts show no preference for any specific method of job analysis, a professional consensus has begun to emerge around recommended practices for job analysis, driven in part by regulatory agencies’ demands for documentation. The associated principles can be expressed as follows. A job analysis should:

  • Be task-based. Despite continued mention of worker-oriented approaches, including the emergence of competency models, the job description should be task-oriented, including detailed listings of tasks and the associated knowledge, skills, abilities, and personal characteristics (KSAPs).
  • Identify linkages. The identification and measurement of linkages between tasks and KSAPs is critical.  When job analysis is used in test development, it is equally important to establish linkages between the KSAPs and the test content.
  • Utilize interviews and focus groups. The appropriate use of interviews or focus groups remains important in obtaining job information from incumbents and supervisors.
  • Incorporate questionnaires. Where practical (with practicality primarily a function of the number of incumbents and the quality of the information obtained from the interviews), questionnaires should be used to gather quantitative ratings of tasks, KSAPs, and linkages. The collected data can then be subjected to statistical analysis (a brief sketch follows this list). Technological developments, including the widespread availability of easy-to-use online survey software, have made it much simpler and more cost-effective to create and distribute job analysis instruments. In designing surveys, practitioners should be aware of the now ubiquitous nature of smartphones. Large matrices of the type so frequently used to collect job ratings do not translate well to small screens. As a result, analysts must be creative in designing surveys when incumbents will be responding on mobile devices, including tablets and smartphones.
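
As a rough illustration of what that statistical analysis might look like, here is a minimal sketch in Python that summarizes hypothetical questionnaire ratings and screens task-KSAP linkages. The rating scales, task and KSAP names, and cutoff values are all assumptions made for the example; they are not requirements drawn from the Uniform Guidelines or from any particular job analysis system.

```python
# Minimal sketch: summarizing hypothetical job analysis questionnaire ratings.
# Scales, names, and cutoffs are illustrative assumptions, not prescribed values.

from statistics import mean

# Each incumbent rates the importance of each task (assumed 1-5 scale).
task_ratings = {
    "Prepare incident reports": [5, 4, 5, 4],
    "Operate two-way radio":    [4, 5, 4, 4],
    "File archived records":    [2, 1, 2, 2],
}

# SMEs rate how strongly each KSAP is linked to each task (assumed 0-3 scale).
linkage_ratings = {
    ("Prepare incident reports", "Written communication"): [3, 3, 2],
    ("Prepare incident reports", "Attention to detail"):   [2, 3, 3],
    ("Operate two-way radio",    "Oral communication"):    [3, 3, 3],
}

TASK_CUTOFF = 3.0     # keep tasks rated at least "important" on average (illustrative)
LINKAGE_CUTOFF = 2.0  # keep linkages rated at least "moderate" on average (illustrative)

retained_tasks = {task: mean(r) for task, r in task_ratings.items() if mean(r) >= TASK_CUTOFF}
print("Retained tasks and mean importance:", retained_tasks)

for (task, ksap), ratings in linkage_ratings.items():
    if task in retained_tasks and mean(ratings) >= LINKAGE_CUTOFF:
        print(f"Linkage kept: {task} -> {ksap} (mean = {mean(ratings):.1f})")
```

In practice, the scales, retention rules, and cutoffs would be set to fit the jurisdiction’s own job analysis procedures and documentation requirements.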

(more…)

By | August 5th, 2015 | Assessment, Job Analysis | 1 Comment

To Retest or Not to Retest: Answers (Part 2)

In our previous blog, I reviewed the research literature related to the retesting of applicants. Summarizing our findings from Part 1:

  1. If someone takes a test again, his/her score will increase.
  2. If a group of individuals is retested, the rank order will change.
  3. At least two months, but more realistically six months to a year, should be required between most retests.
  4. Given that a candidate is willing, there seems to be no reason to limit the number of retests. The real issues are whether to allow a first retest at all and how much time to require between retests.
  5. In typical situations, where only a portion of the applicants take the test a second time, the first administration will probably be the most valid, although many factors may influence this conclusion. And, following from point 1 above, we would expect those taking the test a second time to score higher than first-time examinees.

This month, my goal is to arrive at some practical recommendations for applied practice, based on professional and government guidelines, the public sector testing model, and the research findings summarized above. This will include a discussion of how we should determine a score for someone who is retested. (more…)
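
As a preview of that discussion, the brief sketch below compares three scoring policies an agency might adopt for a retested candidate: use the most recent score, the highest score, or the average. The candidate data and the policies themselves are illustrative assumptions only, not recommendations from this blog or from any guideline.

```python
# Illustrative comparison of possible retest scoring policies; the candidate
# data and the policies shown are assumptions, not recommendations.

def score_of_record(attempts, policy="most_recent"):
    """Return the score of record from a list of (administration_order, score) pairs."""
    scores = [score for _, score in sorted(attempts)]
    if policy == "most_recent":
        return scores[-1]
    if policy == "highest":
        return max(scores)
    if policy == "average":
        return sum(scores) / len(scores)
    raise ValueError(f"Unknown policy: {policy}")

candidate = [(1, 78), (2, 85)]  # hypothetical first attempt and retest
for policy in ("most_recent", "highest", "average"):
    print(policy, "->", score_of_record(candidate, policy))
```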

By | June 3rd, 2015 | Retesting | Comments Off on To Retest or Not to Retest: Answers (Part 2)

To Retest or Not to Retest; That is the Question (Part 1)

Well, not the only question. In this blog, we will consider a series of questions including:

  1. If someone takes the test again, will their score change?
  2. If a group of individuals are retested, will the rank-order of the scores change?
  3. How much time should there be between retests?
  4. How many retests should be allowed?
  5. Which test scores are the most valid for predicting performance?

My answers will be based primarily on the research literature.  However, retesting is one of those topics where the importance of the question to practitioners has far outpaced the quantity and applicability of the published research literature.

(more…)

By | April 27th, 2015 | Retesting | Comments Off on To Retest or Not to Retest; That is the Question (Part 1)

Readability of Assessments in a Digital Age (Part 2): Practical Issues

  • We want test questions that are very detailed, highly complex, engage the test taker, reflect the job, and, oh yes, are at a 6th grade reading level.
  • Grade level = 5.9: the reading level, as calculated by Word, of paragraphs taken from a 1960 third-grade reader.

This is part two of a blog dealing with the measurement of readability and the establishment of appropriate reading levels. For purposes of this blog, readability can be defined as the ability of material to be comprehended by its intended audience.

In Part 1, we investigated approaches to readability based on:

  1. The measurement of grammatical features or readability formulas.
  2. The linguistic perspective.
  3. Job analysis.

In Part 2, we turn our attention to more practical issues such as:

  • How are readability indices used by assessment professionals?
  • What adjustments can or should be made when evaluating multiple-choice tests?
  • How has the changing nature of jobs impacted readability?

(more…)

By | March 25th, 2015 | Readability | Comments Off on Readability of Assessments in a Digital Age (Part 2): Practical Issues

Readability of Assessments in a Digital Age (Part 1): Bet You Won’t Read This Whole Blog

  • People online don’t read.
  • Olny samrt poelpe cna raed tish – cna you?

The opening epigraphs both deal with readability. The first is a commonly encountered claim that people scan rather than read when perusing material online. What does that mean for employment websites and the associated assessments? The second is a teaser that often makes the false claim that very few people can read the material, when in fact almost everyone can. It illustrates that individuals can make sense out of what appears to be unreadable or scrambled text. Both have implications for our topic for this two-part blog, which involves the readability of assessments.

The measurement of readability and the establishment of appropriate reading levels are critical responsibilities faced on a regular basis by many selection specialists and personnel managers in the public sector. This task involves an analysis of both the materials used on the job and the tests or assessments used in selection. Readability can be defined as the ability of material to be comprehended by its intended audience.

Unfortunately, most of our knowledge of the impact of readability on assessments was developed in an era where we used paper-and-pencil, multiple-choice tests. Even that literature is limited in that most of it deals with educational tests. Very few studies look at the actual impact of readability on the difficulty of employment tests or potential racial bias in tests. I could spend this blog complaining ad nauseam about researchers conducting highly artificial studies of irreproducible phenomena of little generalizability, while ignoring questions of real practical importance, but that is another topic for another day or forum.

One of the questions we will examine in the second part of this blog is whether readability is still relevant for computer-based tests. However, before we do, we will review the more traditional literature on readability and how we measure readability.

In Part 1, we investigate approaches to readability based on:

  1. The measurement of grammatical features or readability formulas (a brief sketch follows this list).
  2. The linguistic perspective.
  3. Job analysis.
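
As a concrete illustration of the first, formula-based approach, the sketch below estimates the Flesch-Kincaid grade level of a passage from word, sentence, and syllable counts. The syllable counter is a crude heuristic and the sample passage is invented, so the output should be treated as approximate rather than as the calculation performed by Word or any commercial readability tool.

```python
# Rough, formula-based readability check using the Flesch-Kincaid grade level:
# 0.39 * (words / sentences) + 11.8 * (syllables / words) - 15.59.
# The syllable counter is a crude heuristic, so treat the result as approximate.

import re

def count_syllables(word: str) -> int:
    """Approximate syllables as runs of vowels, with a floor of one per word."""
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def fk_grade_level(text: str) -> float:
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return 0.39 * (len(words) / sentences) + 11.8 * (syllables / len(words)) - 15.59

sample = ("The officer completed the incident report. "
          "She filed the report before the end of the shift.")
print(f"Estimated grade level: {fk_grade_level(sample):.1f}")
```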

(more…)

By | March 11th, 2015 | Readability | 1 Comment

Welcome to the New Year and Forecasting the Future of Assessment

Traditionally, I have started the new year with a blog that recaps the past and looks to the future in assessment. My habit has been to insert a statement concerning how difficult it is to predict the future. However, this year I was surprised to find that some of the topics I would select as future trends were actually covered in my blogs over the past year. So, maybe with age I am getting better at prophecy.

My predictions for future trends or hot topics over the coming year include:

  • Mobile Devices and Technology
  • Big Data and Predictive Analytics
  • Branding
  • Police Performance

Mobile Devices and Technology

The Society for Industrial and Organizational Psychology (SIOP) recently listed its top ten trends for 2015; Number 1 was Mobile Assessments. It is clear that mobile assessments increase “flow through,” or the number and diversity of individuals tested. On the other hand, mobile assessments present a number of challenges in terms of programming, comparability of scores, and noise in the testing environment.

Practitioners have a large number of questions concerning the use of mobile devices, more questions than there are evidence-based answers. As a result, I can safely predict an increase in the number of publications we will see dealing with the issue of mobile testing, especially research looking at the question of measurement equivalence across device type. From a personal perspective, I see the issues involved in using mobile devices for assessment as one piece of the larger technology puzzle, which would include the movement to greater use of online assessment.  Candidates have come to expect that assessments will be offered in an online version. Online assessments have many advantages including simplicity, efficiency, and cost savings.

IPMA-HR is now offering selected tests through the new Online Test Administration Service (OTAS). The plan is to have all tests available for online administration in the near future.

Big Data and Predictive Analytics

I must admit to having concerns regarding the faddish nature of the Big Data and Predictive Analytics movement. From my perspective, assessment professionals have been engaged in Predictive Analytics for over 60 years. Nevertheless, Big Data is here to stay and finished Number 2 on the SIOP trend list. Assessment professionals will have to familiarize themselves with the language of Big Data and Predictive Analytics.

We will have to increase our awareness of advancements in the use of Big Data and Predictive Analytics and ensure that all selection decisions are made in a fair and valid manner, whether based on traditional models or empirical relationships discovered through Big Data analytics. In particular, within the public sector, we should remain wary of any attempt to substitute the measurement of demographic or Big Data-based variables for professionally developed and validated assessments of individual merit.

Branding

The management of “brand” or “image” can be seen as particularly important in the public sector because of a strong belief, expressed in a variety of media, that young job seekers are not attracted to government agencies and jobs. Thus, human resource professionals must be concerned with maintaining a positive public sector image. Selection strategies can affect the image that applicants, employees, and the general public hold of your organization, so we should take responsibility for the impact of our selection methods and decisions on the organization’s reputation.

An important part of branding is the impression that your employees make on the public. Those who work directly with the public are a major factor in shaping the image the public will hold of your agency. Effective and efficient customer service is key. With the improvement of customer service in mind, IPMA-HR will be working on rolling out a “generic” customer service test for 2015. The availability of this new instrument should aid organizations in the task of identifying and hiring the best customer service personnel available.

Finally, as promised several months ago, some results on branding in the public sector from my mini-survey. Unfortunately, I only received 19 complete responses; I thank those who took the time to complete the survey. Based on the readership of the blog, I would assume most individuals are employed in the public sector.

Now the results (again based on a small, mostly public sector sample):

  • The public sector was seen as having the following positive attributes:
    • A good image among respondents and being a good place to work.
    • A socially responsible image.
    • Hard working employees.
  • The public sector was seen as having the following negative attributes:
    • A poor or below average image in the mind of the general public.
    • A below average financial image and future.
    • An image as failing to pay fairly, although with good benefits.
    • An image as failing to pay based on performance or merit.

Looking at the results, I believe they suggest a divide between the way public sector employees see the government as an employer, which is generally positive, and the way those employees believe the public sees the government, which is not very positive. The respondents were also concerned about the financial shape and future of the government, which seems tied to the general issue of pay fairness as well as the ability to attract future employees.

Compared to the reactions of many private sector employees to their companies, the reaction of public sector employees to the government as an employer tends to be pretty positive. The results of this mini-survey support this viewpoint. Hopefully, the positive message of work in the public sector can be communicated to future candidates for employment.

Police Performance

As I write this blog, the topic of police performance dominates the news cycle and social media. Police work is incredibly difficult and the educational requirements associated with the job continue to rise.  At the same time, many communities are reporting that they are experiencing a shortage of applicants; recent events will probably exacerbate that trend. As assessment professionals, we are under continued pressure to recruit, screen, select, train, and retain highly qualified individuals to serve the public through police work.

To assist you, IPMA-HR Assessment Services has added to its existing suite of products for police selection. In addition to the previously mentioned availability of online testing, Assessment Services is introducing a Police Officer Structured Interview System (POSIS). Based on extensive studies with nearly 1,000 candidates, POSIS will provide in one package everything you need to conduct structured and defensible oral interviews, thereby adding a new item to your assessment arsenal.

The POSIS guides you through all the steps of the process, including planning, rater training, interview delivery, and scoring. The POSIS should greatly aid communities in delivering a standardized, valid interview process, which should lead to the hiring of highly competent police officers and a more positive image for both human resources and the police department.

Conclusion

My final prediction is that public sector assessment professionals will have to expand their competency in various areas of emerging technology, including knowledge of online assessment, Big Data, and the development of “branded” assessments. This will require that we learn to communicate and work on teams with members of allied professions, especially those in Information Technology. As we move into the future, you can count on IPMA-HR Assessment Services to develop and deliver innovative products and selection systems while continuing to support its traditional battery of tests.

This brings to a close my first year as a blogger for Assessment Services. I hope you, the reader, found my blogs to be informative and enjoyable. I am always willing to consider any feedback you have or suggestions for future topics.

By | January 26th, 2015 | Assessment | Comments Off on Welcome to the New Year and Forecasting the Future of Assessment

Thoughts on Adverse Impact: Part 2

In my previous post, Thoughts on Adverse Impact Part 1, I offered my suggestions on how to plan and think about an adverse impact study. Summarizing and reviewing some of my main points:

From the Practitioner Perspective, adverse impact involves a technology, not a science.

  • The ways in which we prepare for, calculate, and interpret the results of an adverse impact analysis are guided primarily by the Uniform Guidelines and case law, as opposed to strict principles of statistics.
  • Adverse Impact can be defined as practical or significant differences in selection rate as a function of protected group status.

In this month’s blog, I deal with more commonly discussed issues, such as various approaches to quantifying adverse impact and the sequencing of tests. Again, as a caution, this blog does contain my opinions on a controversial topic. In addition, although I have tried to simplify the discussion, this is a complex topic that would be difficult to cover fully even in a long article or book, let alone a blog.
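
As one concrete example of quantifying adverse impact, the sketch below applies the Uniform Guidelines’ four-fifths (80%) rule of thumb to invented selection data. The applicant and hire counts are hypothetical, and the four-fifths comparison is only one of several indices (tests of statistical significance are another) that an analyst might report.

```python
# Minimal sketch of the Uniform Guidelines "four-fifths" (80%) rule of thumb.
# Applicant and selection counts are invented for illustration only.

def selection_rates(selected_focal, applicants_focal, selected_ref, applicants_ref):
    """Return (focal rate, reference rate, impact ratio)."""
    rate_focal = selected_focal / applicants_focal
    rate_ref = selected_ref / applicants_ref
    return rate_focal, rate_ref, rate_focal / rate_ref

focal_rate, ref_rate, ratio = selection_rates(
    selected_focal=20, applicants_focal=100,  # hypothetical protected-group counts
    selected_ref=40, applicants_ref=100,      # hypothetical reference-group counts
)
print(f"Selection rates: {focal_rate:.2f} vs. {ref_rate:.2f}; impact ratio = {ratio:.2f}")
if ratio < 0.80:
    print("Below four-fifths: generally regarded as evidence of adverse impact.")
```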

Distinguishing Between Adverse Impact and Subgroup Differences

Although adverse impact and subgroup differences between protected classes are distinct concepts, the two terms are often used interchangeably, not only in the practitioner literature but also by experts who should know better. To appreciate some of the issues in the analysis of adverse impact, one must first develop an understanding of the difference between these two frequently confused concepts. (more…)
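
To make the distinction concrete, the hypothetical sketch below computes a standardized subgroup difference (Cohen’s d) from test scores and a selection-rate-based impact ratio at an arbitrary cutoff for the same invented data. The point is simply that the two indices answer different questions: one describes the size of the score difference, the other the difference in selection rates, which also depends on where the cutoff is placed.

```python
# Hypothetical illustration: Cohen's d describes the size of the subgroup
# difference in test scores, while the impact ratio describes the difference
# in selection rates, which also depends on where the cutoff is set.

from statistics import mean, stdev

group_ref = [72, 75, 78, 80, 83, 85, 88, 90]    # invented reference-group scores
group_focal = [68, 70, 73, 75, 78, 80, 83, 86]  # invented focal-group scores
CUTOFF = 80                                     # illustrative passing score

def cohens_d(x, y):
    """Standardized mean difference using a pooled standard deviation."""
    nx, ny = len(x), len(y)
    pooled_sd = (((nx - 1) * stdev(x) ** 2 + (ny - 1) * stdev(y) ** 2) / (nx + ny - 2)) ** 0.5
    return (mean(x) - mean(y)) / pooled_sd

def pass_rate(scores):
    return sum(s >= CUTOFF for s in scores) / len(scores)

d = cohens_d(group_ref, group_focal)
impact_ratio = pass_rate(group_focal) / pass_rate(group_ref)
print(f"Subgroup difference d = {d:.2f}; impact ratio at cutoff {CUTOFF} = {impact_ratio:.2f}")
```

Moving the cutoff up or down changes the impact ratio even though d stays the same, which is one reason the two indices should not be treated as interchangeable.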

By | November 19th, 2014 | Adverse Impact | Comments Off on Thoughts on Adverse Impact: Part 2

Thoughts on Adverse Impact: Part 1

  • If your head hurts just thinking about adverse impact, cheer up: you are not alone.

A long time ago, I had an international student ask me why my class was not titled Thoughts of Professor Doverspike; in her country, all classes were simply listed as Thoughts of Professor _____. I have always thought that was a wonderful idea and so this month’s blog is Thoughts on Adverse Impact (actually, I originally titled it Thought on Adverse Impact, but I was unsure whether that was a typo, a Freudian slip, or an accurate depiction of my limited cognitive ability).

I have spent most of my professional life, which now amounts to over 35 years, dealing with adverse impact issues. During this time period I have found that most human resource professionals, from the novice to the experienced employment attorney, struggle when it comes to understanding even the basics of adverse impact.

From the Practitioner Perspective, adverse impact involves a technology, not a science. Thus, in my opinion, the ways in which we prepare for, calculate, and interpret the results of an adverse impact analysis are guided primarily by the Uniform Guidelines and case law, as opposed to strict principles of statistics. Therefore, after a brief definition and some thoughts on technology, I offer my suggestions on how to plan and think about your adverse impact study. In my next blog, I will deal with more commonly discussed issues, such as various approaches to quantifying adverse impact, the sequencing of tests, and the interpretation of results.

As a serious caution, this blog does contain my opinions on a controversial topic. It would not be difficult to find other experts who might disagree with some of my recommendations and conclusions. (more…)

By | October 1st, 2014 | Adverse Impact | 1 Comment