
Domain of Evaluation

Instructional technologists must continuously determine the effectiveness of the learning process. By conducting various forms of evaluation, an instructional designer can determine whether instruction addresses the learners' needs and solves the performance problem. Evaluation is a necessary component of design, development, and implementation, and it involves collecting and analyzing data. The four sub-domains of evaluation are problem analysis, criterion-referenced measurement, formative evaluation, and summative evaluation.

Problem Analysis

Before an instructional technologist can begin designing and developing ways to make improvements, an analysis of the performance problem is conducted. This analysis determines the course of action the instructional technologist must take. The analysis of a problem is also known as a front-end analysis, problem analysis, or needs assessment (Rossett, 1987). Training needs assessments specifically determine an organization's optimal performance (optimals), actual performance (actuals), the feelings of stakeholders, the causes of the problem, and possible solutions. To gather data for the analysis, instructional technologists conduct user surveys, interviews, documentation reviews, and analyses of extant data. A problem analysis can determine whether training, organizational change, an increased budget, or some other solution is needed.


Criterion-Referenced Measurement

Criterion-referenced measurement is based on goals and objectives determined by the instructional technologist. It is most often used in training as an assessment tool for learners. This type of tool identifies exactly which skills the learner has mastered. Criterion-referenced assessments may take the form of written tests or rubrics. Instructional technologists conduct a thorough task analysis in order to develop this type of measurement. These assessments are useful because the data are concrete and easy to analyze. Instructional technologists can then revise instruction to address specific skills and objectives that have not been mastered.

Formative Evaluation

Formative evaluations should be conducted during the development of materials and instruction. Instructional technologists use formative evaluations to gather information about the effectiveness of instruction and then use that information to complete the development of materials (Seels and Glasgow, 1998). Formative evaluations may be conducted one-on-one, in small groups, or in small or large field tests. Instructional technologists gather information through verbal interviews, observations, and user surveys. Some instructional design teams may also give learners a pretest or posttest. This information is then used to revise materials before they are fully implemented. Instructional technologists work closely with users, learners, and development teams to ensure that potential problems with materials are identified and solved.


Summative Evaluation

Summative evaluations are used to determine whether instruction, projects, or changes should be adopted for a longer period of time or by a more widespread group. These evaluations take place after implementation is complete. Instructional technologists often ask that summative evaluations be conducted by an outside group, which can give impartial feedback and suggestions for improvement.

According to Kirkpatrick, instructional technologists look to gain feedback at four major levels of evaluation. These levels are learner satisfaction (How did the learner react?), learner achievement (Did the learner gain the knowledge?), transfer to the job (Does the learning improve abilities used on the job?), and impact on the organization (How did the results affect the organization?) (Kirkpatrick, 1994). Once this information is gathered and analyzed, the organization can determine its rate of success, and instructional technologists can use the evaluation to make revisions and improve future projects. Many evaluation models may be used for this process; some effective models are James D. Russell's Evaluation Model, Arthur Anderson's Formative Evaluation Model, and Kirkpatrick's Four-Level Model.

A valuable, comprehensive model for conducting summative evaluations of long-term projects is the CIPP (context, input, process, and product) Evaluation Model created by Daniel Stufflebeam. It is a framework for evaluating programs, projects, personnel, products, institutions, and systems. It contains a checklist of activities that addresses issues of impact, effectiveness, sustainability, and transportability (Stufflebeam, 2004).

Table 3 - CIPP Model for Evaluation

CIPP Evaluation Criteria | Key Questions for Summative Evaluations
Impact | Were the right participants reached?
Effectiveness | Were the participants' needs met?
Sustainability | Were the gains for the participants sustained?
Transportability | Did the processes that produced the gains prove transportable and adaptable for effective use in other settings?



This site was designed and developed by Patricia M. Gonzalez-McQuiston © 2005. Last updated on 12-5-05
