1. Know What You’re Trying to Predict
It sounds obvious, but an assessment is only valid insofar as it actually predicts the outcome it is meant to predict. Therefore, the first, and perhaps most critical, step in creating a valid certification or licensure assessment is to know exactly what it is you want that assessment to predict. Are you creating a licensure examination aimed at ensuring the safety of the public? If so, what knowledge, skills and abilities would the licensed job incumbent need to possess in order to perform the job safely and effectively? Is your certification examination designed to predict performance in a particular job, or to demonstrate mastery of a particular set of standards? The predictive validity of any examination depends on your ability to answer these questions at the outset.
The process of comprehensively evaluating the scope of knowledge, skills, abilities and other characteristics that must be assessed is referred to as job/task analysis or competency analysis. This process helps ensure that the assessment is fair and valid because it evaluates only the knowledge, skills and other characteristics that are actually important to predicting the intended outcome (e.g. effective job performance). Clearly understanding the required scope of your assessment is the foundation of the exam development process, and neglecting this initial step may not only put a credentialing organization at legal risk but will also likely produce an assessment that fails to achieve its objective.
2. Recruit Representative ‘SMEs’
A surefire way to jeopardize the legal defensibility and validity of any credentialing examination is to develop the assessment using input only from individuals who aren’t actually representative of the candidate population. Using subject matter experts, or SMEs, who are not only highly familiar with the required assessment content but also representative of the candidate population in terms of critical job and demographic variables is imperative. To ensure your credentialing organization is effectively representing the candidate population throughout the exam development process, you need to assess who the candidate population is or will be, what factors influence how the job is performed and what knowledge is important. For example, using only senior managers to perform a job analysis for an entry-level certification may not be appropriate, as their knowledge of the tasks performed may be skewed compared to that of actual job incumbents. Additionally, it is important to know whether the required knowledge differs across regions, countries or work environments. Make sure the assessment is developed in such a way that all of these differences are appropriately reflected. The broader your assessment is, the trickier this can be.
3. Balance the Ability to Score Objectively with Job Relatedness
A common challenge faced by many credentialing organizations is effectively balancing the legal defensibility of an assessment with face validity, or the appearance of validity. For certification examinations in particular, both candidates and their potential employers must perceive the assessment as a valuable performance predictor in order for the test to remain credible, competitive and marketable. Credentialing organizations often have more success establishing face validity when they develop assessments that more realistically mirror actual job performance. This can be achieved with the use of innovative item types, such as simulations and work samples. When thoughtfully developed, these innovative examination questions can more effectively predict job performance by assessing candidates’ ability to apply their knowledge to a task, rather than simply memorizing and regurgitating it. However, the legal defensibility of such complex assessments can be called into question when the items cannot be developed and scored objectively, and in these cases the credentialing organization is at risk of inaccurately discriminating between competent and incompetent candidates. Oral, essay and performance-based assessments are commonly scrutinized and legally challenged for this reason. One solution for balancing assessment innovation with legal defensibility is the use of more standardized but advanced item types, such as Drag and Place, Hot Spot, case-based or media-rich questions. Frequently these item types can better assess the complexities of procedural knowledge and skills while maintaining a high level of scoring objectivity.
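To make the scoring-objectivity point concrete, here is a minimal sketch of how a Hot Spot item can be scored with no human judgment in the loop: the item key defines a target region on the stimulus image, and a response is correct only if the candidate’s click lands inside it. The function name, region format and coordinates below are hypothetical illustrations, not any test platform’s actual API.

```python
def score_hot_spot(click, region):
    """Return 1 if the click falls inside the keyed region, else 0.

    click  -- (x, y) pixel coordinates of the candidate's response
    region -- (left, top, right, bottom) bounds of the keyed target area
    """
    x, y = click
    left, top, right, bottom = region
    return int(left <= x <= right and top <= y <= bottom)

# Hypothetical keyed region for a Hot Spot item: the area of the
# image containing the correct structure.
KEY_REGION = (120, 80, 200, 140)

print(score_hot_spot((150, 100), KEY_REGION))  # click inside the region -> 1
print(score_hot_spot((40, 300), KEY_REGION))   # click outside the region -> 0
```

Because the key is fixed before administration and the rule is deterministic, every rater (or machine) produces the same score for the same response, which is exactly the property that oral and essay formats struggle to guarantee.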
4. Embrace Change
Are you still delivering paper-and-pencil examinations in a technology-driven industry? Are you testing candidates’ knowledge of a particular manual procedure when job incumbents have moved on to using software or other resources to perform that task? If so, the assessment is at risk of being perceived as obsolete (hence less valuable), and even unfair (hence legally questionable).
Failing to embrace industry change as it comes is a guaranteed way to lose credibility as a credentialing organization. As discussed above, embracing change involves using assessment techniques and item types that accurately reflect job performance. However, it is equally important that the credentialing organization stay abreast of any examination content that has become outdated, and that it periodically update the assessment accordingly. Examination development is a cyclical process, and all of its steps (from job analysis to scoring) must be revisited and re-evaluated as often as needed. How often your organization should revisit the various phases of the examination development cycle will depend on how rapidly the nature of the job and the related industry change. It can be difficult for subject matter experts who have been working in the industry (or on a particular examination) for a long time to be fully aware of these changes; periodically recruiting new SMEs with a variety of experience levels and backgrounds can bring fresh perspectives and input.
Tiffany Baker, PhD.
Global Operations Manager