Instructional Technology Portfolio for Susan Elizabeth Allred

Domain of Evaluation

“Evaluation is the process of determining the adequacy of instruction and learning” (Seels & Richey, 1994, p. 54). Designers rely on evaluation models to determine relevant methods for collecting data and the types of data that need to be collected. For example, the Context, Input, Process, and Product (CIPP) systems model for program evaluation provides these four categories for collecting information. Context evaluation involves conducting a needs analysis, determining objectives, and deciding whether the objectives are appropriate to satisfy the identified need. Input evaluation involves analyzing the inputs and resources of the system through a benefit-versus-cost analysis, a design analysis, and an analysis of alternative strategies. Process evaluation involves analyzing how the program is performing within the system through formative evaluation. Product evaluation involves analyzing the outcomes of the program through summative evaluation.
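
As a purely illustrative sketch (not drawn from the CIPP literature), the short Python example below organizes the four categories described above as a simple lookup; the category activities are paraphrased from this paragraph, and the helper name cipp_checklist is hypothetical.

```python
# Illustrative only: the four CIPP categories paraphrased from the paragraph above,
# organized so a designer can pull the activities for one phase of an evaluation.
CIPP = {
    "context": ["Conduct a needs analysis",
                "Determine objectives",
                "Check that objectives satisfy the identified need"],
    "input":   ["Analyze benefit versus cost",
                "Analyze the design",
                "Analyze alternative strategies"],
    "process": ["Monitor how the program is performing (formative evaluation)"],
    "product": ["Analyze program outcomes (summative evaluation)"],
}

def cipp_checklist(phase: str) -> list[str]:
    """Return the evaluation activities for one CIPP phase (hypothetical helper)."""
    return CIPP[phase.lower()]

print(cipp_checklist("context"))
```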

There are four subcategories in the Evaluation domain:

  • Problem Analysis
  • Criterion-Referenced Measurement
  • Formative Evaluation
  • Summative Evaluation

Problem Analysis (also called needs analysis or performance analysis) involves gathering information, identifying discrepancies, analyzing performance, identifying priorities and goals, and writing a problem statement. Information about the context of the problem helps the designer make decisions about feasible solutions. The information gathered should define the actual performance as well as the optimal performance. A discrepancy is the gap between optimal behavior and actual behavior. The performance analysis step results in identifying the nature of the problem and its causes. A problem statement should summarize the nature of the problem, the constraints, the resources, and the decisions made about goals.
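
Because a discrepancy is simply the gap between optimal and actual performance, the identification step can be pictured with a small sketch. The Python example below is hypothetical: the performance measures and the find_discrepancies function are invented for illustration and are not part of any cited model.

```python
# Hypothetical example: optimal vs. actual performance figures for a few measures.
optimal = {"orders processed per hour": 20, "data-entry error rate (%)": 2, "calls resolved (%)": 90}
actual  = {"orders processed per hour": 14, "data-entry error rate (%)": 9, "calls resolved (%)": 88}

def find_discrepancies(optimal, actual):
    """Return the gap between optimal and actual performance for each measure,
    largest gaps first, so the designer can identify priorities and goals."""
    gaps = {measure: optimal[measure] - actual[measure] for measure in optimal}
    return sorted(gaps.items(), key=lambda item: abs(item[1]), reverse=True)

for measure, gap in find_discrepancies(optimal, actual):
    print(f"{measure}: gap of {gap}")
```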

The problem analysis process as defined by Allison Rossett (1987) is to determine purposes based on initiators, identify sources, select tools, conduct the needs assessment in stages, and use the findings for decision making. The purpose can be determined by meeting with the stakeholder who initially raised the problem. This meeting can also yield valuable information about the location of, access to, and constraints on existing data that may be useful for analysis. Once that information is gathered and analyzed, the designer must decide which tools to use to gather the additional information the existing data did not provide. These tools include interviews, observations, records, group facilitation, and surveys. The designer should choose multiple tools and conduct the needs assessment in multiple steps or stages. Finally, the designer should compile and analyze all of the existing and collected data to define the gap between actual and optimal performance. This gap defines the goals for the proposed solutions.
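
One way to keep the "multiple tools, multiple stages" advice visible while planning is to lay the assessment out as a staged plan. The sketch below is only an illustration; the stages, tools, and purposes are invented examples and are not taken from Rossett's text.

```python
# Hypothetical needs-assessment plan: each stage pairs one data-gathering tool
# with the question it is meant to answer, per the staged approach described above.
assessment_plan = [
    {"stage": 1, "tool": "interviews",     "purpose": "Meet the initiating stakeholder; clarify purposes and constraints"},
    {"stage": 2, "tool": "extant records", "purpose": "Review existing data on actual performance"},
    {"stage": 3, "tool": "observations",   "purpose": "Watch performers to fill gaps the records did not cover"},
    {"stage": 4, "tool": "surveys",        "purpose": "Confirm findings with a larger group of performers"},
]

for step in assessment_plan:
    print(f"Stage {step['stage']}: {step['tool']} - {step['purpose']}")
```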

A Criterion-Referenced Measurement is used if the problem analysis shows that there is a training need. Once the training goal is established, objectives for the instruction are written along with criterion-referenced assessment items. These items are deliberately constructed so as to yield results that are directly interpretable in terms of specific performance standards (Glaser & Nitko, 1971, p. 65). These measurements evaluate a learner against specified objectives instead of comparing learners to one another.
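
The contrast with norm-referenced testing is easiest to see in a small scoring sketch. In the hypothetical Python example below, the objectives, the 80% mastery cutoff, and the mastery_report function are all invented for illustration; the point is simply that each learner is judged against a fixed standard for each objective, never against other learners.

```python
# Hypothetical criterion-referenced scoring: each objective has a fixed mastery
# cutoff (here, 80% of its items answered correctly).
MASTERY_CUTOFF = 0.80

def mastery_report(items_correct: dict, items_total: dict) -> dict:
    """Return 'mastered' or 'not yet mastered' for each objective."""
    report = {}
    for objective, total in items_total.items():
        score = items_correct.get(objective, 0) / total
        report[objective] = "mastered" if score >= MASTERY_CUTOFF else "not yet mastered"
    return report

# Example learner: 5 of 5 items correct on Objective 1, 2 of 4 on Objective 2.
print(mastery_report({"Objective 1": 5, "Objective 2": 2},
                     {"Objective 1": 5, "Objective 2": 4}))
```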

Formative Evaluation involves “gathering information on the adequacy of an instructional product or program and using this information as a basis for further development” (Seels & Richey, 1994, p. 128). Formative evaluation is an ongoing process that provides the information used to revise instruction while it is in development. Designers gather this information through one-on-one evaluations, small-group evaluations, and field tests. The designer begins by testing the material with potential users one-on-one. These users evaluate the material, content, and usability of the instruction, which provides the designer with valuable information for revisions before full-scale production of materials begins. The second phase is a small-group evaluation in which the designer tests the revised draft with a group of potential users in a setting similar to the one in which the actual instruction will take place. Again, this provides information for revision. The third phase, a field test, is an implementation on a small scale and provides the data for a third and final revision.
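
The three phases form a cycle of test, gather feedback, and revise. The sketch below only illustrates that flow; the phase names come from the paragraph above, while the learner counts and the placeholder feedback are hypothetical.

```python
# Hypothetical formative-evaluation cycle: each phase tests the current draft with
# more learners, and the feedback drives a revision before the next phase.
phases = [("one-on-one", 3), ("small group", 12), ("field test", 30)]  # (phase, learners)

draft_version = 1
for phase, learners in phases:
    feedback = f"notes from {learners} learners during the {phase} phase"  # placeholder data
    print(f"Draft {draft_version}: tested in {phase} -> {feedback}")
    draft_version += 1  # revise the materials before the next phase

print(f"Draft {draft_version} is the revised version carried into full implementation.")
```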

Summative Evaluation involves “systematically gathering information on the adequacy and outcomes of an instructional intervention and using this information to make decisions about utilization” (Seels & Richey, 1994, p. 134). Summative evaluation occurs after the instruction has been implemented. Its purpose is to present conclusions about the success of the instruction and to make recommendations for any necessary improvements.

Competencies, Job Descriptions, and Evidence for the Domain of Evaluation

Competency: Plan and conduct needs assessment.
Job Description: Assess and design training and performance needs of client population.
Evidence: 530 project – Center for Teaching Excellence Training Needs Assessment Report

Competency: Plan and conduct evaluation of instruction/training.
Job Description: Leads in the ongoing evaluation of the effectiveness of the instructional technology program.
Evidence: 500 project – Geometry Proofs Tutorial – Evaluation Results; 513 project – Algebra tutorial; 515 project – WebCT module

Competencies: Plan and conduct summative evaluation of instruction/training; plan and conduct product evaluation.
Job Descriptions: Monitors the computer competency assessment plan for staff in grades 6-12; specifies learning impact and measures of student learning.
Evidence: 512 project – Web Evaluation Instrument

