Marolyn Anith Schauss
Electronic Portfolio



The Field of IT ~ Domain of Evaluation

 

Problem Analysis | Criterion-Referenced Measurement | Formative Evaluation | Summative Evaluation

 



The Domain of Evaluation
According to Seels and Richey (1994), “Evaluation is the process of determining the adequacy of instruction and learning” (p. 54). From an instructional designer’s view, evaluation is a tool to measure what has been learned. The Joint Committee Standards for Educational Evaluation (1981) identifies three types of evaluation: program evaluations, project evaluations, and materials evaluations. According to Worthen and Sanders (1987), “in the domain of evaluation distinctions are made between program, project and product (materials) evaluation, just as distinctions are made between formative and summative evaluations” (p. 22). Table 1 is a chart providing distinctions among these types of evaluation:

 Table 1: Types of Evaluation. From: The Joint Committee Standards for Educational Evaluation (1981).

Program evaluations – “evaluations that assess educational activities which provide services on a continuing basis and often involve curricular offerings” (p. 12). Some examples an instructional designer might encounter under an evaluation contract are: evaluations of a school district’s reading program; a state’s special education program; or a university’s continuing education program.

 

Project evaluations – “evaluations that assess activities that are funded for a defined period of time to perform a specific task. Projects that become institutionalized in effect become programs” (pp. 12, 13). Some examples an instructional designer might be contracted to evaluate are: a three-day workshop on behavioral objectives (a short, fixed period of time); or a three-year career education demonstration project (a longer period that may become indefinite).

 

Materials evaluation (instructional products) – evaluations that assess the merit or worth of content-related physical items, including books, curricular guides, films, tapes, and other tangible instructional products (p. 13).

 

To guide the evaluation process, instructional designers often use an evaluation model. One comprehensive evaluation model used by instructional designers is the CIPP model proposed by D. L. Stufflebeam (1969). The CIPP model is a guide for program, project, personnel, institution, and systems evaluations, and it distinguishes four types of evaluation: Context, Input, Process, and Product (CIPP). Context evaluations assess needs and determine the appropriateness of the program’s goals; Input evaluations determine the feasibility and accountability of the strategies chosen for the project; Process evaluations monitor and document the implementation of the program’s activities; Product evaluations determine the program’s success over time. Anthony Finkelstein (2000) introduces the Objective Oriented model. When using this model, an instructional designer compares the results of learning with the established goal(s). Figure 1 is a graphic representation of the Objective Oriented model.


Figure 1: The Development Process. Objective Oriented Model. Finkelstein (2000).
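To make this objective-oriented comparison concrete, the minimal Python sketch below checks each learning result against the goal established for it. The objective names, target levels, and observed scores are invented for illustration and are not drawn from Finkelstein (2000).

```python
# Hypothetical sketch of an objective-oriented comparison: each observed
# learning result is checked against the goal established for it.

goals = {
    "identify the parts of a lesson plan": 0.80,   # required proportion correct
    "write a measurable objective": 0.90,
}

results = {
    "identify the parts of a lesson plan": 0.85,   # observed proportion correct
    "write a measurable objective": 0.72,
}

for objective, target in goals.items():
    observed = results.get(objective, 0.0)
    status = "goal met" if observed >= target else "goal not met"
    print(f"{objective}: target {target:.0%}, observed {observed:.0%} -> {status}")
```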

An instructional designer places value and significance on the results of evaluation to determine whether the learner is transferring the new knowledge. The following are the sub-domains of the evaluation domain: problem analysis, criterion-referenced measurement, formative evaluation, and summative evaluation.

Problem Analysis – the preliminary step in evaluating instructional development. An instructional designer clarifies learner goals and constraints during the Problem Analysis step by using information-gathering and decision-making strategies. The information gathered defines a gap between ‘what is’ and ‘what should be’ in terms of results (Kaufman, 1972). The gap is identified using a matrix similar to the one represented in Table 2:

 Table 2: Identifying the Gap Matrix. Kaufman (1972).

Need | What Is | What Should Be | Gap
Information | Information | Information | Information
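The gap matrix lends itself to a very small calculation: for each need, the gap is simply ‘what should be’ minus ‘what is’. The following minimal Python sketch illustrates this with invented needs and scores; it is not taken from Kaufman (1972).

```python
# Hypothetical needs-assessment data illustrating the gap matrix above:
# for each need, gap = "what should be" minus "what is" (invented scores).

needs = [
    # (need, what_is, what_should_be) -- measured on the same scale
    ("reading comprehension score", 62, 80),
    ("keyboarding speed (wpm)", 25, 40),
]

for need, what_is, what_should_be in needs:
    gap = what_should_be - what_is
    print(f"{need}: what is = {what_is}, what should be = {what_should_be}, gap = {gap}")
```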

Once the instructional designer has determined the gap through information gathering, the designer develops assessment instruments to measure mastery of the learning. The instruments are based on specific criteria.


Criterion-Referenced Measurement – involves techniques for determining learner mastery of pre-specified content. According to Linn and Miller (2005), the criterion for determining adequacy of learner mastery is the extent to which the learner has met the objective and demonstrated the attitudes or skills relative to that objective.

The instructional designer uses criterion-referenced measurements to determine how well learners performed relative to a standard. According to Seels & Richey (1994), “Criterion-Referenced Post-Measurement determines whether major objectives have been met” (p. 57).
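As a minimal illustration of criterion-referenced scoring, the Python sketch below judges each learner’s score against a pre-specified cut score rather than against the performance of other learners. The learners, scores, and the 80% criterion are all invented for the example.

```python
# Hypothetical criterion-referenced scoring: each learner's score is judged
# against a pre-specified criterion (an assumed 80% cut score), not against
# the performance of other learners.

CUT_SCORE = 0.80
scores = {"Learner A": 0.92, "Learner B": 0.74, "Learner C": 0.81}

for learner, score in scores.items():
    status = "objective met" if score >= CUT_SCORE else "objective not yet met"
    print(f"{learner}: {score:.0%} -> {status}")
```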

The matrix in Table 3 is a practical means an instructional designer uses to tally and measure whether the objectives have been met:

 Table 3: Matrix for Measuring Objectives

Stakeholders | Expectations | Information Needs | Goals | Constraints
Information | Information | Information | Information | Information

Once the matrix information is gathered, the instructional designer conducts evaluations through one-on-one sessions, small groups, and field trials in a process known as formative evaluation. It is during the design and development stages of a project that the instructional designer conducts a formative evaluation. At the end of the project, the instructional designer schedules a summative evaluation so that the organization can decide whether the instruction should be adopted. The instructional designer usually does not make the summative adoption decision because of the potential for bias.


Formative Evaluation – involves gathering information on adequacy and using the information as a basis for further development. According to Michael Scriven (1967), the distinction between Formative Evaluation and Summative Evaluation is as recorded in Table 4:

 Table 4: Formative Evaluation Description

Formative evaluation is conducted during the development or improvement of a program or product (or person, etc.). It is an evaluation which is conducted for the in-house staff of the program and normally remains in-house; but it may be done by an internal or external evaluator or (preferably) a combination. The distinction between formative and summative has been well summed up in a sentence of Bob Stake’s: “When the cook tastes the soup, that’s formative; when the guests taste the soup, that’s summative” (p. 56).

 

Instructional designers rely on formative evaluations for a review of content through observations, interviews, one-to-one tryouts, and small-group strategies. A designer and subject matter experts make revisions to the materials before presenting them for large-group tryouts. Formative evaluations utilize both quantitative and qualitative measures; one example of a quantitative measure is an analysis run in a statistical package such as SPSS. Quantitative evaluations measure the objectives, whereas qualitative evaluations measure the subjective and experiential aspects of the project.
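While SPSS is the package mentioned above, the same kind of quantitative check can be sketched in a few lines of Python. The example below uses invented small-group tryout data and an assumed 70% revision threshold to flag items that should be revised before the large-group (field) tryout.

```python
# Hypothetical small-group tryout data: the proportion of learners answering
# each item correctly. Items falling below an assumed revision threshold are
# flagged for revision before the large-group (field) tryout.

from statistics import mean

item_scores = {"item 1": 0.95, "item 2": 0.55, "item 3": 0.80, "item 4": 0.40}
REVISION_THRESHOLD = 0.70   # assumed cutoff for acceptable item performance

print(f"mean proportion correct across items: {mean(item_scores.values()):.2f}")
for item, p_correct in sorted(item_scores.items(), key=lambda kv: kv[1]):
    if p_correct < REVISION_THRESHOLD:
        print(f"{item}: {p_correct:.0%} correct -> revise before the field trial")
```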


Summative Evaluation – involves gathering information on adequacy and using this information to make decisions about utilization. According to Michael Scriven (1967), the definition of Summative Evaluation is provided in Table 5:

 Table 5: Summative Evaluation Description

Summative evaluation is conducted after completion and for the benefit of some external audience or decision-maker (e.g., a funding agency, or future possible users), though it may be done by either internal or external evaluators or a mixture. For reasons of credibility, it is much more likely to involve external evaluators than is a formative evaluation. It should not be confused with outcome evaluation, which is simply an evaluation focused on outcomes rather than on process – it could be either formative or summative (p. 130).

 

Summative evaluation often uses a group-comparison study in a quasi-experimental design. According to Seels & Richey (1994), “when a product finally goes to market and is evaluated by an outside agency, which plays a consumer reports role, the purpose of the evaluation is clearly summative – i.e., helping buyers make a wise selection of a product” (p. 58). Summative evaluations require a balance between quantitative and qualitative measures. An instructional designer uses both kinds of measures in a summative evaluation to determine whether the material met the original objectives, what the impact of the instruction was, and whether the problem was solved.
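As a minimal sketch of such a quasi-experimental group comparison, the Python example below contrasts invented post-test scores for an instructed group and a comparison group using an independent-samples t-test (scipy.stats.ttest_ind); the data and design details are assumptions made purely for illustration.

```python
# Hypothetical quasi-experimental comparison for a summative evaluation:
# post-test scores of learners who received the instruction versus a
# comparison group that did not (all scores invented for illustration).

from statistics import mean
from scipy import stats

instructed = [82, 88, 75, 91, 84, 79, 86]
comparison = [70, 74, 68, 81, 73, 77, 69]

t_stat, p_value = stats.ttest_ind(instructed, comparison)
print(f"instructed group mean = {mean(instructed):.1f}")
print(f"comparison group mean = {mean(comparison):.1f}")
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
```

A full summative evaluation would pair a quantitative comparison like this with qualitative evidence, in keeping with the balance of measures described above.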

 

