How to Use GenAI to Enhance Upskilling Through Smarter Testing and Training


This article originally appeared in Training magazine. Read the full article here >

At work and in the world, change is in the air.

The Great Resignation? That’s behind us. The debate over hybrid and remote work? It’s less relevant. The struggle to retain top talent in a competitive job market? It’s an ongoing challenge.

Now the conversation has coalesced around generative AI (genAI) and its potential impact on the present and future of work.

According to Gallup, 72 percent of Chief Human Resources Officers (CHROs) say AI will replace jobs in their organization in the next three years, while 65 percent say the technology can be used to improve employee performance throughout their companies. Consequently, as Gallup notes, “the demand for new skills is higher than ever. The time to act on upskilling the workforce is now.”

Employees know it and are ready to act.

Fewer than half of employees strongly agree they have the skills they need to excel at their jobs, and they are eager to expand their skill sets. That’s why many professionals are pursuing skills-based training and certifications, creating testing challenges for employers, regulatory bodies, and other stakeholders.

Training and Testing Challenges When Upskilling Employees

Upskilling is a powerful lever for workforce development. However, many training initiatives don’t formally upgrade skills because they don’t validate and certify knowledge acquisition. That’s because testing and certification protocols are inherently complex and resource-intensive.

When developing certification examinations, in-house and third-party certification programs typically follow a rigorous and systematic process. They convene subject matter experts (SMEs) to conduct a job task analysis, defining the knowledge, skills, and abilities expected of practitioners in the profession or specialty. These SMEs outline the content domain for assessment, drawing on their extensive expertise. They also gather input from others in the field before test creators apply these insights to test item creation.

Experts draft initial test items covering specific content areas of the exam blueprint, and a separate cohort of SMEs reviews and refines these draft items for accuracy, difficulty, and relevance.

SMEs are valued and valuable, and they are in limited supply. Their efforts are often hindered by manual workflows and resource constraints, creating a need for new efficiencies that make creating training and testing materials more adaptable, dynamic, and cost-effective.
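For illustration only, here is a minimal sketch of how a genAI model might draft candidate exam items from an exam-blueprint topic for SME review. The prompt wording, model name, and use of the OpenAI Python client are assumptions for this sketch, not the approach described in the article.

```python
# Hypothetical sketch: asking a genAI model to draft a candidate exam item
# from an exam-blueprint topic, to be reviewed and revised by SMEs.
# Model choice and prompt wording are illustrative assumptions.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def draft_test_item(blueprint_topic: str, difficulty: str = "intermediate") -> str:
    """Request one draft multiple-choice item for a blueprint topic."""
    prompt = (
        f"Write one multiple-choice question assessing: {blueprint_topic}.\n"
        f"Target difficulty: {difficulty}.\n"
        "Include four options (A-D), mark the correct answer, and add a "
        "one-sentence rationale.\n"
        "This is a DRAFT for subject matter expert review, not a final exam item."
    )
    response = client.chat.completions.create(
        model="gpt-4o",  # illustrative model choice
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content


if __name__ == "__main__":
    # Example blueprint topic; SMEs would still vet the output for accuracy,
    # difficulty, and relevance before it enters an item bank.
    print(draft_test_item("interpreting confined-space entry permit requirements"))
```

In a workflow like this, genAI handles the first draft while SMEs keep their role as reviewers and final arbiters of accuracy, difficulty, and relevance.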

Read the full article here >