Failing to measure the job and skill competencies of employees objectively and quantitatively is not a problem until it becomes one. In the current business climate, where skill requirements can change from week to week, skills gaps can appear — and widen — faster than ever before. Add to that the quickly shifting dynamics between employer and employee, and you’ve got a major challenge for learning and development (L&D).
Filling skills gaps becomes easier when objective, quantitative rubrics are used to measure and document employee job knowledge and/or job skill competencies, thus reducing or eliminating subjective assessments and bias toward the employee.
Quantitative Versus Qualitative Rubrics
Quantitative employee competency rubrics are developed by identifying the knowledge and skill sets that are core to employees’ job tasks. Their development and use also help standardize both training and the assessment of core job knowledge and skills. Job-related assessments are an important tool for measuring employee competency in core tasks, proving that employees have the knowledge and skills to perform their jobs competently, a necessary requirement in today’s fast-paced environment. Quantitative assessment rubrics do have limitations, however, such as an inability to quantify some behavioral variables often associated with job skills and customer service competencies.
Employees can be competent in their core job knowledge and/or skills yet offend customers and colleagues with their language, actions or inactions. To get a complete picture of employees’ abilities, it may be necessary to supplement quantitative performance measures with qualitative ones: their ability to work with others on group projects and design teams, and to apply and synthesize what they have learned from training programs and experience.
Developmental Stages of Quantitative Rubrics to Determine Job Competencies
Stage 1: Review the Core Job Tasks
Prior to developing a quantitative rubric to measure employee job competency, obtain the job task analysis (JTA) description for the job and learn what core tasks are required for the position. For example, an administrative assistant may perform many more tasks than what the title implies, but not all may need to be measured.
Stage 2: Identify Relevant Job Tasks
After reviewing the JTA for each job being evaluated, identify the relevant core job tasks and/or skills. These may include various skill levels (e.g., typing a specific number of words per minute, performing cardiopulmonary resuscitation (CPR) or flying an aircraft).
Stage 3: Measuring Competencies by Learning Domain
Instructional designers and/or human resource evaluators generally measure employee competency within three learning domains: cognitive, affective and psychomotor. The cognitive domain focuses on measuring employee knowledge about a job. Online or pencil-and-paper assessments can be useful instruments for measuring such knowledge; examples include assessing employee knowledge of key organizational policies, rules and statutory requirements that directly affect the job.
Affective domain measurement focuses on employees’ attitudes and feelings. An example is an employee who adheres to and practices safety guidelines during a training session, versus one who disregards them entirely, endangering other trainees and training staff.
Psychomotor domain assessment focuses on performance skills that cannot be adequately assessed via online or pencil-and-paper testing. Correctly installing a can light in a ceiling is an example.
Stage 4: Developing a Quantitative Assessment
After identifying the relevant core job knowledge, skills and/or tasks, use multiple-choice questions to assess specific employee knowledge about them (e.g., how to file and handle confidential documents). A checklist can be used to identify the key steps necessary to demonstrate job skill competency. After reviewing the instructional lesson plan for job-related skills training, identify the skills and steps that must be correctly performed to demonstrate competency.
Next, sequentially list the steps for each job-related skill (use a separate page per skill). Determine a passing score from the steps that must be completed. If there are 10 steps and eight steps are critical to demonstrate skill competency of a core task, then a passing score of 80% (8 of 10 steps) must be achieved by the employee being evaluated.
Highlight the critical steps that cannot be skipped or improperly performed. These key steps are labeled “terminal”: if a terminal step is missed or improperly performed, the employee automatically fails to demonstrate competency in that particular skill, regardless of the overall score. The result is objectively documented, and the employee must receive remedial training before being reassessed for competency.
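The scoring logic above, a minimum percentage of correctly performed steps plus automatic failure on any missed terminal step, can be sketched in code. This is a minimal illustration, not part of any actual assessment system; the step names and passing threshold are hypothetical.

```python
# Sketch of the checklist-scoring logic: an employee passes only if
# (a) the percentage of correctly performed steps meets the passing
# score AND (b) no "terminal" (critical) step was missed or performed
# improperly. Step names below are hypothetical.

def score_skill(steps, performed, passing_pct=80):
    """steps: dict mapping step name -> is_terminal (bool).
    performed: set of step names the employee completed correctly.
    Returns (passed, percentage, list of missed terminal steps)."""
    missed_terminal = [name for name, terminal in steps.items()
                       if terminal and name not in performed]
    pct = 100 * len(performed & set(steps)) / len(steps)
    passed = pct >= passing_pct and not missed_terminal
    return passed, pct, missed_terminal

# Example: a 10-step skill in which steps 1-8 are terminal,
# with an 80% passing score.
steps = {f"step_{i}": i <= 8 for i in range(1, 11)}
performed = {f"step_{i}" for i in range(1, 10)}   # steps 1-9 correct
passed, pct, missed = score_skill(steps, performed)
# 9 of 10 steps correct (90%) and no terminal step missed -> pass
```

If the same employee had skipped step 1 instead of step 10, the percentage would still be 90%, but the missed terminal step would trigger an automatic failure and remedial training.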
Each job-related core task must have one performance objective per skill being evaluated. To personalize the evaluation, you must identify the conditions needed to complete the task, including the minimum passing score, as well as the steps that must be correctly demonstrated. Employees, whenever possible, should have their skills assessed in a real-world setting. This is referred to as authentic assessment.
Stage 5: Train Evaluators and Pre-test the Rubric
Employees will say they are not against change, but experience has shown they are against changing. This goes for the evaluators, too. Prior to using a newly developed rubric to quantitatively measure employee competency, it is imperative to train the evaluators (i.e., trainers and managers) on how to properly explain and/or use it. The integrity of competency-based assessment depends largely upon the rubrics and how well they are used by evaluators.
Consider using hypothetical employee examples of core tasks to train the evaluators on how the relevant job tasks and/or skills were identified and how to measure them using the quantitative rubric. Develop three or four different hypotheticals for various work-related skills to make sure the evaluators learn to use the measurement system consistently and objectively. After each hypothetical skill assessment, ask the evaluators to discuss how they measured competency. This is a great opportunity to address any confusion or ambiguity, and to seek rubric feedback. Amend the rubric or the evaluator training based on their feedback.
Hypothetical Situation and Quantitative Skill Rubric
Assume a high school requires its faculty and staff to know how to properly apply an automated external defibrillator (AED). Annual competency assessment and documentation of these employees is required (psychomotor domain), along with a written assessment (cognitive domain). A single-skill competency quantitative rubric is shown in the sidebar.
Stage 6: Communicate the Quantitative Rubric to Employees
Discuss the quantitative competency rubric with employees so they understand how it will be used to objectively evaluate their competency in job knowledge and/or skills (for example, the correct application of AED pads to the chest). No one likes surprises, unless it is a birthday party or a gift, so educate them about the assessment process and explain how the quantitative rubric was designed to objectively measure and determine their job-related competencies.
Stage 7: Review Outcomes with Employee
Following the competency-based assessment, review its results with the employee regardless of the outcome (pass or need remedial instruction). Focus on the positive and how the quantitative rubric helps all employees improve their job knowledge and/or skills, and how this impacts their career and the organizational mission. For those employees who need remedial training and reassessment, make sure these are explained in a positive and encouraging manner.
Keep in mind that today’s employees may not speak English as a first language or may have a learning or other disability that may impact their knowledge and/or skill assessments. Depending on the hurdle, evaluators may need to make reasonable accommodations for those employees. If an employee has trouble taking an online or written cognitive domain assessment because of vision impairment or dyslexia, for example, a reasonable accommodation may be reading the questions and possible answers aloud and asking the employee to verbally identify the correct answer.
In summary, competency-based assessments help organizations manage risk through quantitative measurement of employee core tasks. Should a customer claim an employee caused personal or property injury through incompetence, documented competency-based assessments will help mitigate these allegations. Of course, operator error is always possible, even when the employee was deemed competent in job knowledge and/or skills. Similarly, reviewing a quantitative competency rubric with a disgruntled employee, and explaining that the competency assessment was objective rather than subjective, may forestall further legal action.
Developing competency-based assessments takes time, but once developed, they can be used until job knowledge, skills or training change. Using quantifiable, objective rubrics to measure, determine and document employee job competencies also minimizes or eliminates evaluator favoritism and implicit or actual bias.