Chapter 2
The Domains of the Field
The 1994 definition is built around five separate areas of concern to instructional
technologists: Design, Development, Utilization, Management, and Evaluation.
These are the domains of the field. In this chapter there are definitions for
each of these domains, the domain subcategories, and related concepts.
The Role of the Domains
The Functions of the Domains
To complete the task of defining a field, a means for
identifying and organizing the relationships emerging from theory and practice
must be developed. Taxonomies, or classifications, are often used to simplify
these relationships (Carrier and Sales, 1987; Knezek, Rachlin and Scannell,
1988; Kozma and Bangert-Drowns, 1987). A taxonomy is a classification based on relationships.
In the classic Taxonomy of Educational Objectives: Cognitive Domain,
Benjamin Bloom differentiates between a taxonomy and a simpler classification
scheme. According to Bloom, a taxonomy: (1) may not have arbitrary elements,
(2) must correspond to some real phenomena represented by the terms, and (3)
must be validated through consistency with the theoretical views of the field.
The major purpose in constructing a taxonomy . . . is to
facilitate communication . . . the major task in setting up any kind of taxonomy
is that of selecting appropriate symbols, giving them precise and usable
definitions, and securing the consensus of the group which is to use them
(Bloom, 1956, pp. 10-11).
An up-to-date taxonomic structure is essential to the
future development of Instructional Technology; in addition, the field needs a common
conceptual framework and agreement on terminology. Without this framework it is
difficult to make generalizations, or even communicate easily across
sub-fields. Common understandings are especially critical since much of the
work of instructional technologists is done in teams, and to be effective teams
need to agree upon their terminology and conceptual framework.
The rapidity of technological change and modification
necessitates the transfer of what is known from one technology to another.
Without this 'transferability' the research base must be recreated for each new
technology. By identifying taxonomic areas, academics and practitioners can
work to resolve research issues, and practitioners can work with theorists to
identify where theories are weak in supporting and predicting real world
Instructional Technology applications. Without clearly delineated categories
and functions, cooperation between academics and practitioners becomes even
more difficult due to a variety of definitions of the same term. Consequently,
the validation of theory and practice can be impeded.
Fleishman
and Quaintance (1984) summarized several potential benefits of developing a
taxonomy of human performance:
· to aid in conducting literature reviews;
· to create the capacity to generate new tasks;
· to expose gaps in knowledge by delineating categories and subcategories of knowledge, exposing holes in research, and promoting theoretical discussion or evaluation; and
· to assist in theory development by evaluating how successfully theory organizes the observational data generated by research within the field of Instructional Technology.
Several of the previous approaches to taxonomies of
Instructional Technology have used a functional approach. The 1977 definition
of the field (AECT, 1977) proposed that instructional management functions and
instructional development functions operated on instructional systems
components. Ronald L. Jacobs (1988) also proposed a domain of human performance
technology that included both theory and practice and identified the functions
practitioners fulfill. In Jacobs' proposed domain there are three functions:
management functions, performance systems development functions and human performance
systems components which are the conceptual bases for performing the other
functions. Each function has a purpose and components. The subcomponents of
management are administrative and personnel. The subcomponents of development
are the steps in the development process, and the subcomponents of human
performance systems are concepts about organization, motivation, behavior,
performance and feedback.
The Relationships Among Domains
The relationship among the domains shown in Figure 2.1 is
not linear. It becomes easier to understand how the domains are complementary
when the research and theory areas in each domain are presented. Figure 2.1,
The Domains of Instructional Technology, summarizes the major areas in the
knowledge base for each domain.
While researchers can concentrate on one domain,
practitioners must often fulfill functions in several or all domains. Although
they may focus on one domain or area in the domain, researchers draw on theory
and practice from other domains. The relationship between the domains is
synergistic. For example, a practitioner working in the development domain uses
theory from the design domain, such as instructional systems design theory and
message design theory. A practitioner working in the design domain uses theory
about media characteristics from the development and utilization domains and
theory about problem analysis and measurement from the evaluation domain. The
complementary nature of the relationships between domains is shown in Figure
2.2, The Relationship Between the Domains of the Field.
It is clear from Figure 2.2 that each domain contributes
to the other domains and to research and theory that is shared by the domains.
An example of shared theory is theory about feedback which is used in some way
by each of the domains. Feedback can be included in both an instructional
strategy and a message design. Feedback loops are used in management systems,
and evaluation provides feedback.
[Figure 2.1. The Domains of Instructional Technology; Figure 2.2. The Relationship Between the Domains of the Field]
Although four major subcategories are shown for each
domain in Figure 2.1, there may be others that are independent, but not shown.
These areas may not appear because the body of theory is insufficient or
because they are currently less important. One example is the area of
electronic performance support systems which may be given more importance in
future definitions and domains of the field. Nevertheless, most areas of the
field fit in the subcategories identified. Indeed, some fit in more than one
subcategory, as is the case with the media selection area which is part of the
instructional utilization domain. The pursuit of definitional clarity could
lead to specifying the taxonomic levels more completely by breaking each major
subcategory into finer distinctions. This task will be left for the future.
The rest of this chapter will be devoted to a discussion
of each domain and its relationship to the other domains. For each domain there
will be an explanation of its roots, of what it encompasses, of the subcategories
in the domain, and of the characteristics associated with each subcategory.
Some trends or issues in the domain will be noted.
A Description of the Domains
The Domain of Design
In part, the design domain had its genesis in the
psychology of instruction movement. There were several catalysts: 1) the 1954
article by B. F. Skinner on "The Science of Learning and the Art of
Teaching" and his theory of programmed instruction; 2) the 1969 book by
Herbert Simon on The Sciences of the
Artificial which discussed the general characteristics of a prescriptive
science of design; and 3) the establishment in the early 1960s of centers for
the design of instructional materials and programs, such as the Learning
Resource and Development Center dt the University of Pittsburgh. During the
1960s and 1970s Robert Glaser, director of that center, wrote and spoke about
instructional design being the core of educational technology (Glaser, 1976).
Many instructional psychology roots of the design domain were nurtured in these
Pittsburgh associations. Not only was this the home of Simon, Glaser and the
Learning Research and Development Center, but Skinner's influential paper
"The Science of Learning and the Art of Teaching" was first presented
in Pittsburgh prior to its publication later that year (Spencer, 1988).
Complementing the instructional psychology roots was the
application of systems theory to instruction. Introduced by Jim Finn and
Leonard Silvern, the instructional systems approach gradually developed into a
methodology and began to incorporate ideas from instructional psychology. The
systems approach led to the instructional systems design movement as
exemplified by the instructional development process used in higher education
in the 1970s (Gustafson and Bratton, 1984). Interest in message design also
grew during the late 1960s and early 1970s. The collaboration of Robert Gagne
and Leslie Briggs at the American Institutes for Research in the 1960s (also in
Pittsburgh) and at Florida State University in the 1970s brought instructional
psychology expertise together with systems design talent. Together they brought
the instructional design concept to life (Briggs, 1968; Briggs, 1977; Briggs,
Campeau, Gagne, and May, 1967; Gagne, 1965; Gagne, 1989; Gagne and Briggs,
1974).
The domain of instructional design at times has been
confused with development, or even with the broader concept of instruction
itself. This definition, however, limits design to the planning function, but
planning on the micro as well as the macro level. Consequently, the domain's
knowledge base is complex and includes an array of procedural models,
conceptual models, and theory. Nevertheless, the knowledge base of any field is
not static. This is certainly the case with instructional design, in spite of
its firm foundation in traditional bodies of knowledge. Moreover, because of
the close relationship between instructional design and the other domains of
Instructional Technology, the design knowledge base also changes to maintain
consistency with development, utilization, management, and evaluation.
Design theory is more fully developed than those facets of
the field that have greatly relied upon traditions of practice to shape their
knowledge bases. However, with respect to the uses of technology, design
research and theory have almost always followed practitioner exploration of the
intricacies and capabilities of a new piece of hardware or software. This is
certainly the case now. The challenge, for both academics and practitioners
alike, is to continue to define the knowledge base as well as respond to the
pressure of the workplace.
Design is the
process of specifying conditions for learning. The purpose of design is to create strategies and
products at the macro level, such as programs and curricula, and at the micro
level, such as lessons and modules. This definition is in accord with current
definitions of design which refer to creating specifications (Ellington and
Harris, 1986; Reigeluth, 1983; Richey, 1986). It differs from previous
definitions in that the emphasis is on conditions for learning rather than on
the components of an instructional system (Wallington, et al., 1970). Thus,
the scope of instructional design is broadened from learning resources or individual
components of systems to systemic considerations and environments. Tessmer (1990)
has analyzed the factors, questions and tools that are used to design
environments.
The domain of design encompasses at least four major areas
of theory and practice. These areas are identifiable because they are the categories
into which research and theory development efforts fall. The design domain
includes the study of instructional systems design, message design,
instructional strategies and learner characteristics. Definitions and
descriptions for each of these areas follow.
Instructional
Systems Design. Instructional Systems Design (ISD) is an organized procedure that includes the steps of analyzing, designing,
developing, implementing and evaluating instruction. The word 'design' has
meaning at both the macro- and micro-level in that it refers to both the
systems approach and to a step in the systems approach. The steps in the
process each have a separate base in theory and practice as does the overall
ISD process. In simple terms, analyzing is the process of defining what is to
be learned; designing is the process of specifying how it is to be learned;
developing is the process of authoring and producing the instructional
materials; implementing is actually using the materials and strategies in
context; and evaluating is the process of determining the adequacy of the
instruction. ISD is generally a linear and iterative procedure which demands
thoroughness and consistency. It is characteristic of the process that all of
the steps must be completed in order to serve as a check and balance on each
other. In ISD, the process is as important as the product because confidence in
the product is based on the process.
Message Design. Message design involves "planning for the manipulation of the physical form of the message" (Grabowski, 1991, p. 206). It
encompasses principles of attention, perception and retention that direct
specifications for the physical form of messages which are intended to
communicate between a sender and a receiver. Fleming and Levie (1993) limit
messages to those patterns of signs or symbols that modify cognitive, affective
or psychomotor behavior. Message design deals with the most micro of levels
through small units such as individual visuals, sequences, pages and screens.
Another characteristic of message design is that designs must be specific to
both the medium and the learning task. This means that principles for message
design will differ depending on whether the medium is static, dynamic or a combination
of both (e.g., a photograph, a film or a computer graphic), and on whether the
task involves concept or attitude formation, skill or learning strategy
development, or memorization (Fleming, 1987; Fleming and Levie, 1993).
Instructional Strategies. Instructional
strategies are specifications for selecting and sequencing events and activities within a lesson.
Research on instructional strategies has contributed to knowledge about the
components of instruction. A designer uses instructional strategy theories or
components as principles of instruction. Characteristically, instructional
strategies interact with learning situations. These learning situations are
often described by models of instruction. The model of instruction and the
instructional strategy needed to implement the model differ depending on the
learning situation, the nature of the content and the type of learning desired
(Joyce and Weil, 1972; Merrill, Tennyson, and Posey, 1992; Reigeluth, 1987a).
Instructional strategy theories cover learning situations, such as situated or
inductive learning, and components of the teaching/learning process, such as
motivation and elaboration (Reigeluth, 1987b).
Reigeluth (1983a) differentiated between macro- and micro-
strategies:
Micro-strategy variables are elemental methods for
organizing the instruction on a single idea (i.e. a single concept, principle,
etc.). They include such strategy components as definition, example, practice,
and alternate representation . . . Macro-strategy variables are elemental
methods for organizing those aspects of instruction that relate to more than
one idea, such as sequencing, synthesizing, and summarizing (previewing and
reviewing) the ideas that are taught (p. 19).
Since 1983, the terms have been used more generally to
compare the design of a curriculum with the design of a lesson (Smith and Ragan,
1993a). The more typical use of the terms today is for micro-design to be synonymous
with instructional strategy design and macro-design to refer to the steps in
the ISD process. The phrases "micro-strategy" and
"macro-strategy" are not often used today.
Micro-design has also broadened in meaning to provide for
specifications for even smaller units of instruction, such as text pages,
screens, and visuals. Thus, there are those now who use the term
"micro-design", or "micro-level", to refer to message design,
as well as to instructional strategy design. Micro-design at the message design
level will be discussed in Chapter Three.
Learner
Characteristics. Learner
characteristics are those facets of the learner's experiential background that
impact the effectiveness of a learning process. Research on learner
characteristics often overlaps research on instructional strategies, but it is
done for a different purpose: to describe facets of the learner that need to be
accounted for in design. Research on motivation is an example of an overlapping
area. The instructional strategy area uses motivation research to specify the
design of components of instruction. The learner characteristics area uses
motivation research to identify variables that should be taken into account and
to specify how to take them into account. Learner characteristics, therefore,
impact the components of instruction studied under instructional strategies.
They interact not only with strategies but also with the situation or context
and content (Bloom, 1976; Richey, 1992).
Trends and
Issues. Trends and issues in the design
domain cluster around the use of traditional instructional systems design (ISD)
models, the application of learning theory to design, and the impact of the new
technologies on the design process. Although there is consensus that the more
traditional systematic approach to instructional design is still of major
significance, some are raising questions regarding the efficacy of ISD models,
and the tendency to use them in an inflexible, linear manner. Dick (1993)
advocates an enhanced ISD that
incorporates elements of the performance technology approach, attempts to
reduce the typical ISD cycle time, and places an increased emphasis on
electronic performance support systems. There is also a growing concern about
the absence of ISD in the schools as a means of curriculum design. Some are
calling for a more thorough examination of the applicability of standard ISD
procedures for use in schools whether one is planning instruction for children
or staff development for teachers and administrators (Gustafson, 1993; Martin
and Clemente, 1990; Richey and Sikorski, 1993).
One issue of great importance is the need for theory which
relates learning classification to media selection. Each of the steps in the
ISD process, from task analysis to evaluation, with the exception of media
selection, has a basis in learning classification theory and procedures for
implementing that theory. Although some media selection models require
consideration of types of learning (Reiser and Gagne, 1982), ways to base these
decisions on objectives and strategies while taking other variables into
account are insufficiently developed.
With respect to other theoretical issues, there are
concerns that practitioners typically emphasize only those general design steps
highlighted in an ISD model and ignore the use of general learning principles
(Winn, 1989). However, there are also questions as to the most appropriate
orientation to learning. The field has been voicing a cognitive stance, even
though procedures and tactics reflect both a behavioral and cognitive orientation.
Today there is also growing support for the constructivist position, resulting
in an emphasis on learner experience, learner control and learner definitions
of meaning and reality. This is consistent with the trend towards
contextualization of content which is evident in the situated and anchored
learning research (Cognition and Technology Group at Vanderbilt, 1992), the
performance technology movement and the systemic approach to designing
instruction (Richey, 1993a). The search for collaboratively and
cooperatively-based alternatives to individualized and independent learning
approaches is another example of pressure to develop alternative strategies.
Perhaps the more basic trend will be the acceptance of alternative approaches
to design.
Regardless of one's philosophical or theoretical
orientation, all designers are being affected by the rapid advancements in
technology which provide new platforms for instructional delivery, as well as a
means of automating facets of the design process itself. As a delivery
alternative, these technologies allow not only more effective visualization,
but also instant access to information, the ability to link information, more
adaptable and interactive design, and learning through other than formal means (Hannafin,
1992). As a means of automating design, the new technologies allow designers to
use more detailed rules for instructional strategy selection, implement
"just-in-time" training, and efficiently respond to the expectations
and requirements of their organizations (Dick, 1993). These trends are a
reaction to issues and affect the fundamentals of instructional design (Richey,
1993a; Seels, 1993a).
The Domain of Development
The roots of the development domain are in the area of
media production, and through the years changes in media capabilities have led
to changes in the domain. Although the development of textbooks and other instructional
aids preceded film, the emergence of film was the first major landmark in the
progression from the audio-visual movement to the modern day Instructional
Technology era. In the 1930s theatrical film began to be used instructionally.
As a result, the first film catalogs appeared; film libraries and companies
were established; film studies were undertaken; and commercial organizations,
such as the Society for Visual Education, were established. These events
stimulated not only the production of materials for education, but also
journals about these materials, such as Educational
Screen and See and Hear.
During World War II, many types of materials were produced
for military training, especially films (Saettler, 1968). After the war, the
new medium of television was also applied to education, and a new genre of
television program emerged. Concurrently, large scale government funding
supported curriculum projects which incorporated other types of instructional
media. During the late 1950s and early 1960s programmed instructional materials
were developed. By the 1970s computers were used for instruction, and
simulation games were in vogue in schools. During the 1980s theory and practice
in the area of computer-based instruction came to fruition, and by the 1990s
computer-based integrated multimedia was part of the domain.
Development is
the process of translating the design specifications into physical form. The development domain encompasses the wide variety of
technologies employed in instruction. It is not, however, isolated from the
theory and practice related to learning and design. Nor does it function
independently of evaluation, management or utilization. Rather, development is
driven by theory and design and must respond to the formative demands of evaluation
and utilization practices and management needs. Similarly, the development
domain does not consist solely of the hardware of instruction but incorporates
both hardware and software, visual and auditory materials, as well as the
programs or packages which integrate the various parts.
Within the development domain, there exists a complex
interrelationship between the technology and the theory which drives both
message design and instructional strategies. Basically, the development domain
can be described by:
· the message, which is content driven;
· the instructional strategy, which is theory driven; and
· the physical manifestation of the technology—the hardware, software and instructional materials.
The last of these descriptors, technology, represents the driving
force of the development domain. Starting from this assumption, we can define
and describe the various types of instructional media and their characteristics.
This process should not, however, be thought of as simply a categorization, but
instead as an elaboration of the characteristics that technology draws from
theory and design principles.
The development domain can be organized into four
categories: print technologies (which provide the foundation for the other
categories), audiovisual technologies, computer-based technologies, and
integrated technologies. Because the development domain encompasses design, production,
and delivery functions, a material can be designed using one type of
technology, produced using another, and delivered using a third. For example,
message design specifications can be translated into script or storyboard form
using a computer-based technology; then, the script or storyboard can be
produced using audiovisual technologies and delivered using an integrated
technology, such as interactive multimedia. Within the development domain, the
concept of design assumes a third meaning. In addition to referring to
macro-level instructional systems design (identifying goals, content, and
objectives) and micro-level instructional design (specifying and sequencing
activities), design can also refer to specialized applications, such as screen
design in the development domain.
The sub-categories of the development domain reflect
chronological changes in technology. As one technology gives way to another
there is an overlap between the old and the new. For example, the oldest technologies
are print technologies based on mechanical principles. The audiovisual
technologies followed as ways to utilize mechanical and electronic inventions within
an educational setting. Microprocessor-based technologies led to computer
applications and interactivity, and today elements of the print technologies
are often combined with computer-based technologies, as in desktop publishing.
With the digitized age, it is now possible to integrate the old technological
forms, and thus capitalize on the advantages of each.
Print
Technologies. Print technologies are ways to produce or deliver materials, such as
books and static visual materials, primarily through mechanical or photographic
printing processes. This subcategory includes text, graphic, and
photographic representation and reproduction. Print and visual materials
involve the most basic and pervasive technologies. They provide the foundation
for both the development and utilization of most other instructional materials.
These technologies generate materials in hard copy form. Text displayed by a
computer is an example of the use of computer-based technology for production.
When that text is printed in hard copy to be used for instruction, it is an
example of delivery in a print technology.
The two components of this technology are verbal text
materials and visual materials. The development of both types of instructional
material relies heavily upon the theory related to visual perception, reading,
and human information processing, as well as theories of learning. The oldest
and still the most common instructional materials occur in the form of
textbooks in which sensory impressions, implied through linguistic mediators
and printed visual material, represent reality. The relative effectiveness of
different degrees of realism has been addressed by a number of conflicting
theories (Dwyer, 1972; 1978). In its purest form, visual media can carry the
complete message, but this is generally not the case in most instructional
exchanges. Most commonly, a combination of textual and visual information is
provided.
The manner in which both print and visual information is
organized can contribute greatly to the types of learning which will occur. At
the most basic level, simple textbooks provide sequentially organized, yet
randomly accessible information in a "user-friendly" manner. Other
forms of print technologies, such as programmed instruction, have been developed
based upon other theoretical prescriptions and instructional strategies.
Specifically, print/visual technologies have the following characteristics:
· text is read linearly, whereas visuals are scanned spatially;
· both usually provide one-way, receptive communication;
· they present static visuals;
· their development relies strongly on principles of linguistics and visual perception;
· they are learner-centered; and
· the information can be reorganized or restructured by the user.
Audiovisual
Technologies. Audiovisual technologies are ways to produce or deliver materials
by using mechanical or electronic machines to present auditory and visual
messages. Audiovisual instruction is most obviously characterized by the
use of hardware in the teaching process. Audiovisual machines make possible the
projection of motion pictures, the playback of sounds, and the display of large
visuals. Audiovisual instruction is defined as the production and utilization
of materials that involve learning through sight and hearing and that do not
depend exclusively on the comprehension of words or other similar symbols.
Typically, audiovisual technologies project material, such as films, slides and
transparencies. Television, however, represents a unique technology in that it
bridges from audiovisual to computer-based and integrated technologies. Video,
when produced and stored as videotape, is clearly audiovisual in nature since
it is linear and generally intended for expository presentation rather than
interaction. When the video information is on a videodisc, it becomes randomly
accessible and demonstrates most of the characteristics of computer-based or
integrated technologies, i.e. non-linear, random access and learner driven.
Specifically, audiovisual technologies tend to have the
following characteristics:
· they are usually linear in nature;
· they usually present dynamic visuals;
· they typically are used in a manner pre-determined by the designer/developer;
· they tend to be physical representations of real and abstract ideas;
· they are developed according to principles of both behavioral and cognitive psychology; and
· they are often teacher-centered and involve a low degree of learner interactivity.
Computer-based
Technologies. Computer-based technologies are ways to produce or deliver materials
using microprocessor-based resources.
Computer-based technologies are distinguished from other technologies because
information is stored electronically in the form of digital data rather than as
print or visuals. Basically, computer-based technologies use screen displays to
present information to students. The various types of computer applications are
generally called computer-based instruction (CBI), computer-assisted
instruction (CAI), or computer-managed instruction (CMI). These applications
were developed almost directly from behavioral theory and programmed
instruction, but today reflect a more cognitive theoretical base (Jonassen,
1988). Specifically, the four CBI applications are tutorials, where primary
instruction is presented; drill and practice, which helps the learner to
develop fluency in previously learned material; games and simulations, which
afford opportunities to apply new knowledge; and databases, which enable
learners to access large data structures on their own or using externally-prescribed
search protocols.
Computer-based technologies, both hardware and software,
generally have these characteristics:
· they can be used in random or nonsequential, as well as linear, ways;
· they can be used the way the learner desires, as well as in ways the designer/developer planned;
· ideas usually are presented in an abstract fashion with words, symbols and graphics;
· the principles of cognitive science are applied during development; and
· learning can be student-centered and incorporate high learner interactivity.
Integrated
Technologies. Integrated technologies are ways to produce and deliver materials which
encompass several forms of media under the control of a computer. Many believe
this is the most sophisticated technique for instruction. Examples of the hardware components of an integrated
system could include a powerful computer with large amounts of random access
memory, a large internal hard drive, and a high resolution color monitor.
Peripheral devices controlled by the computer would include videodisc players,
additional display devices, networking hardware, and audio systems. Software
may include videodiscs, compact discs, networking software, and digitized
information. These all may be controlled by a hypermedia lesson running under
an authoring system such as HyperCard or ToolBook. A primary feature of this
technology is the high degree of learner interactivity among the various
information sources.
Integrated technology instruction has the following
characteristics:
· it can be used in random or nonsequential, as well as linear, ways;
· it can be used the way the learner desires, not only in ways the developer planned;
· ideas are often presented realistically in the context of the learner's experiences, according to what is relevant to the learner, and under the control of the learner;
· principles of cognitive science and constructivism are applied in the development and utilization of the lesson; learning is cognitively-centered and organized so that knowledge is constructed as the lesson is used;
· materials demonstrate a high degree of learner interactivity; and
· materials integrate words and imagery from many media sources.
Trends and
Issues. Trends and issues in the print
technologies and audiovisual technologies include increased attention to text
design and visual complexity and to the use of color for cueing (Berry, 1992).
Trends and issues in the computer-based technologies and integrated
technologies areas of the development domain relate to design challenges for
interactive technologies, application of constructivist and social learning
theory, expert systems and automated development tools, and applications for
distance learning.
For example, there is currently great interest in
integrated learning systems (ILS) and electronic performance support systems
(EPSS). ILS's are "complex, integrated hardware/software management
systems using computer-based instruction" (Bailey, 1992, p. 5). These
systems are characterized by lessons which: 1) are based on objectives; 2) are
integrated into the curriculum; 3) are delivered through networks; and 4) include
performance-tracking components (Bailey, 1992).
Specifically these systems can randomly generate problems,
adjust the sequence and difficulty of problems based on student performance,
and provide appropriate and immediate feedback (in private). Instruction is
'individualized' and 'personalized' with ILS's (Bailey, 1992, p. 5).
Gloria Gery (1991) similarly describes the sophisticated
performance support systems used in industry which combine hardware and
software components to provide an Infobase, computer-based management, expert
tutoring, and job aids and tools within one system. EPSS is a concept, not a
technology.
ILS's and EPSS's are examples of the trend toward greater
integration of the development domain with other domains such as design, management,
and evaluation. As instructional projects become more sophisticated, the
demarcations between domains blur and the activities of one domain are
inescapably dependent on the activities of another.
The Domain of Utilization
Utilization may have the longest heritage of any of the
domains of Instructional Technology, in that the regular use of audiovisual
materials predates the widespread concern for the systematic design and
production of instructional media. The domain of utilization began with the
visual education movement, which flourished during the first decade of the
twentieth century when school museums were established. The first systematic
experiments in the preparation of exhibits for instructional purposes were
conducted. Also during the early years of the twentieth century, teachers were
finding ways to use theatrical films and short subjects in the classroom, thus creating a
market for films designed specifically for educational purposes. By 1923 visual
education budgets in city school systems covered projectors, stereopticons,
film rentals and lantern slides. Among the earliest formal research on
educational applications of media was Lashley and Watson's program of studies
on the use of World War I military training films (on the prevention of
venereal disease) with civilian audiences. The focus was on how these films
might be used to best effect. McCluskey and Hoban's research in the 1930s also
focused primarily on the classroom effects of different film utilization
practices (Saettler, 1968; 1990).
After World War II, the audiovisual instruction movement
organized and promoted the use of materials. The available supply of instructional
materials expanded as production increased, leading to new ways to assist
teachers. During the 1960s instructional media centers were established in many
schools and colleges, and curriculum projects incorporating media became available.
These events all contributed to the utilization domain. Probably the most
significant event, however, was the publication in 1946 of the first post World
War II textbook devoted to utilization, Audiovisual
Materials in Teaching (Dale, 1946), which attempted to provide a general
rationale for the selection of appropriate learning materials and activities.
Published in several languages and used all over the world, new editions of
this text appeared regularly for the next 20 years. It led to other textbooks
on utilization that were used in a widely taught course introducing teachers to
audiovisual materials. In 1982 Heinich, Molenda, and Russell's Instructional Media and the New Technologies of Instruction was published.
This updated the utilization information presented to pre- and in-service
teachers, and became another landmark text on utilization. After several
editions, the ASSURE model presented in this text has become a widely
disseminated procedural guide to help instructors plan for and implement the
use of media in teaching. The steps in this model are: Analyze learners, State
objectives, Select media and materials, Utilize media and materials, Require
learner participation, Evaluate and revise.
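The ASSURE steps above form an ordered procedural checklist, and that structure can be made concrete with a short sketch. The following Python fragment is purely illustrative and not part of the original text; the list and the `next_step` helper are invented here for the example.

```python
# Illustrative sketch: the ASSURE model as an ordered checklist.
# The step names come from the text; the helper function is hypothetical.
ASSURE_STEPS = [
    "Analyze learners",
    "State objectives",
    "Select media and materials",
    "Utilize media and materials",
    "Require learner participation",
    "Evaluate and revise",
]

def next_step(completed: int) -> str:
    """Return the next ASSURE step given how many steps are already done."""
    if completed >= len(ASSURE_STEPS):
        return "Cycle complete: revise and begin again"
    return ASSURE_STEPS[completed]
```

Note that the first letters of the six steps spell the model's name, which is how the acronym functions as a memory aid for instructors.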
The growth of theory during the 1970s and 1980s produced
several texts on media selection. Media selection processes are represented
through instructional design models because they are systematic (Reynolds and
Anderson, 1991). Media selection is a step in instructional systems design, and
when the teacher selects media, he or she is performing an instructional design
function, not a utilization function. Media selection is so closely related to
utilization that it overlaps the design and utilization domains. When media
selection is done by someone who uses a systematic design process, it is a
design task. When it is done based on subject content or media characteristics
using a simpler design process, it is closer to a utilization task. Thus, here
again we see the integrated nature of the taxonomy associated with the 1994
definition of the field.
For many years the utilization domain was centered around
the activities of teachers and media specialists who aided teachers. Models and
theories in the domain of utilization have tended to focus on the user's
perspective. In the late 1960s, however, the concept 'diffusion of
innovations', referring to the communication process used to spread information
and involve users in order to facilitate adoption of an idea, was introduced
and attention turned to the provider's perspective. This area was stimulated by
the publication of Diffusion of
Innovations by Everett M. Rogers in 1962. This book has gone through
several editions. Starting with 405 studies culled from fields as diverse as
education, medicine, public policy, and farming, the author analyzed and
synthesized findings from these fields. The synthesis was reported with a model
and case histories to substantiate propositions about the stages, process and
variables involved in diffusion, which was defined as the spread, adoption and
maintenance of an innovation. More recently, Rogers (1983) expanded the study
to over 3000 case histories. The importance of this area for the utilization
function is that utilization depends on the promotion of awareness, trial and
adoption of innovations. Since the book was first published other scholars have
pursued questions related to innovation, contributed to the knowledge base in
this area, and developed other innovation and diffusion models.
AECT's 1977 definition linked utilization and
dissemination into one function, Utilization-Dissemination. The purpose of the
function was "to bring learners into contact with information about
educational technology" (AECT, 1977, p. 66). The 1977 definition also
included a separate function called utilization which was similarly defined as
"bringing learners into contact with learning resources and instructional
systems components" (p. 65). In the 1994 definition, dissemination tasks,
meaning "deliberately and systematically making others aware of a
development by circulating information" (Ellington and Harris, 1986, p.
51), are included in the diffusion of innovations sub-category of the
utilization domain.
Once a product has been adopted the processes of
implementation and institutionalization begin. In order to evaluate the
innovation, implementation must occur. While the instructional design
literature considers implementation a required step prior to evaluation, it
is not considered necessary for the step to occur before specifications for
instruction are determined. Consequently, little design literature addresses
the implementation process. Like summative evaluation and diffusion planning,
implementation planning is often omitted due to a shortage of time and money.
The research base of implementation and
institutionalization is not as well developed as other areas, although
contributions have been made from the literature on organizational development
and education. Organizational Development
(OD) is defined as "a response
to change, a complex educational
strategy intended to change the beliefs, attitudes, values, and structure of
organizations so that they can better adapt to new technologies, markets, and
challenges, and the dizzying rate of change itself" (Bennis, 1969, p. 2). As
such, it promotes planned organizational change (Cunningham, 1982). The
difference between diffusion of innovations and organizational development is
that OD is primarily concerned with change in organizations, whereas diffusion of
innovations is primarily concerned with individuals accepting and using ideas.
The overlap between these two concepts is evident. The literature on
organizational development is helpful in understanding implementation and
institutionalization.
The concept of institutionalization is prominent in other
sectors of education. It refers to the integration of the innovation within the
structure of the organization. The process and variables affecting
implementation and institutionalization of curricular innovations are described
in a ten-year follow-up study of the quarter plan to provide year-round schools
in grades 9-12 in Buena Vista, California. Based on this study, the administration,
faculty, and students recommended that their board of education
institutionalize the four-quarter system, including a voluntary fourth quarter,
by providing adequate funds (Bradford, 1987).
Historically, each domain has policies and regulations
associated with it. It is the domain of utilization, however, that is most
affected by policies and regulations. The use of television programming, for
example, is heavily regulated. The copyright law affects the use of print,
audiovisual, computer-based, and integrated technologies. State policy and regulations
affect the use of technology in the curriculum. Thus, the study and practice of
institutionalization may lead to involvement in issues of policy formation,
political behavior, organizational development, ethics, and sociological or
economic principles. Institutionalization may require the adjustment of laws, regulations,
or policies either at the local level or higher.
The utilization function is important because it addresses
the interface between the learner and the instructional material or system.
This is obviously a critical function because use by learners is the only
raison d'être of instructional materials. Why bother acquiring or creating materials
if they are not going to be used? The domain of utilization encompasses a wide
range of activities and teaching strategies.
Utilization then requires systematic use, dissemination,
diffusion, implementation, and institutionalization. It is constrained by
policies and regulations. The utilization function is important because it
describes the interface between the learner and instructional materials and
systems. The four subcategories in the domain of utilization are: media
utilization, diffusion of innovations, implementation and institutionalization,
and policies and regulations.
Utilization is
the act of using processes and resources for learning. Those engaged in
utilization are responsible for matching learners with specific materials and
activities, preparing learners for interacting with the selected materials and
activities, providing guidance during engagement, providing for assessment of
the results, and incorporating this usage into the continuing procedures of the
organization.
Media
Utilization. Media utilization is the systematic use of resources for
learning. The media utilization process
is a decision-making process based on instructional design specifications. For
example, how a film is introduced or "followed-up" should be tailored
to the type of learning desired. Principles of utilization also are related to
learner characteristics. A learner may need visual or verbal skill assistance
in order to profit from an instructional practice or resource.
Diffusion of
Innovations. Diffusion of innovations is
the process of communicating through planned strategies for the purpose
of gaining adoption. The ultimate goal is to bring about change. The first stage
in the process is to create awareness through dissemination of information. The
process includes stages such as awareness, interest, trial and adoption. Rogers
(1983) describes the stages as knowledge, persuasion, decision, implementation,
and confirmation. Characteristically, the process follows a communications
process model which uses a multi-step flow including communication with
gatekeepers and opinion leaders.
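Rogers's five stages describe a strict progression that an adopter moves through one stage at a time. The small sketch below is an illustration added here, not part of the original text; the `Adopter` class and its methods are invented for the example, while the stage names follow Rogers (1983) as cited above.

```python
# Illustrative sketch of Rogers's (1983) innovation-decision stages.
# Stage names are from the text; the Adopter class is hypothetical.
ROGERS_STAGES = ["knowledge", "persuasion", "decision",
                 "implementation", "confirmation"]

class Adopter:
    """Models one potential adopter progressing through the stages."""

    def __init__(self):
        self.stage_index = 0  # every adopter starts at "knowledge"

    @property
    def stage(self) -> str:
        return ROGERS_STAGES[self.stage_index]

    def advance(self) -> str:
        """Move to the next stage, stopping at confirmation."""
        if self.stage_index < len(ROGERS_STAGES) - 1:
            self.stage_index += 1
        return self.stage
```

The one-way, bounded progression in `advance` mirrors the model's assumption that adoption is a staged communication process rather than a single decision event.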
Implementation and Institutionalization. Implementation
is using instructional materials or strategies in real (not simulated)
settings. Institutionalization is the continuing, routine use of the
instructional innovation in the structure and culture of an organization. Both
depend on changes in individuals and changes in
the organization. However, the purpose of implementation is to ensure
proper use by individuals in the organization. The purpose of institutionalization
is to integrate the innovation in the structure and life of the organization.
Some of the past failures of large scale Instructional Technology projects,
such as computers in schools and instructional television, emphasize the
importance of planning for both individual and organizational change (Cuban,
1986).
Policies and
Regulations. Policies and regulations are the rules and actions of society (or its
surrogates) that affect the diffusion and use of Instructional Technology.
Policies and regulations are usually constrained by ethical and economic
issues. They are created both as a result of action by individuals or groups in
the field and action from without the field. They have more effect on practice
than on theory. The field of Instructional Technology has been involved in
policy generation related to instructional and community television, copyright
law, standards for equipment and programs, and the creation of administrative
units to support Instructional Technology.
Trends and
Issues. Trends and issues in the
utilization domain often center around policies and regulations which affect
use, diffusion, implementation and institutionalization. Another issue
associated with this domain is how the influence of the school restructuring
movement might affect the use of instructional resources. The role of
technology in school restructuring is still evolving. The proliferation of
computer-based materials and systems has raised the economic and political
stakes for those contemplating adoption. Instructional Technology professionals
are now involved in decisions about multi-million dollar expenditures,
affecting not just individual teachers and individual classrooms, but whole
school districts, colleges, and corporations. The field is increasingly
involved in political and economic issues at the level of the whole
organization. These factors often have an impact on the ways in which
utilization is conducted.
The Domain of Management
The concept of management is integral to the field of
Instructional Technology and to roles held by many instructional technologists.
Individuals in the field are regularly called upon to provide management in a
variety of settings. An instructional technologist might be involved with
efforts such as the management of an instructional development project or the
management of a school media center. The actual goals for the management
activity may vary greatly from setting to setting, but the underlying
management skills remain relatively constant regardless of setting.
Many instructional technologists have position titles that
imply a clear management function. For example, an individual may be the Learning
Resources Center Director at a university. This individual is responsible for
the entire learning resources program including goals, organization, staff,
finances, facilities, and equipment. Another individual may be employed as the
media specialist in an elementary school. This individual may have the
responsibility for the entire media center program. The programs administered
by these individuals may differ greatly, but the basic skills necessary to
manage the program will remain constant. These skills include organizing
programs, supervising personnel, planning and administering budget and
facilities, and implementing change. Although each author uses slightly different
terms, these types of management are described in Chisholm and Ely (1976),
Prostano and Prostano (1987), and Vlcek and Wiman (1989).
The management domain evolved originally from the
administration of media centers, programs and services. A melding of the
library and media programs led to school library media centers and specialists.
These school media programs merged print and non-print materials and led to the
increased use of technological resources in the curriculum. In 1976 Chisholm
and Ely wrote Media Personnel in Education: A Competency Approach which emphasized
that the administration of media programs played a central role in the field.
AECT's 1977 definition divided the management function into organization
management and personnel management as practiced by administrators of media
centers and programs.
As practice in the field became more sophisticated,
general management theory was applied and adapted. As projects in the field,
especially instructional design projects, became more and more involved, project
management theory was applied. Techniques for managing these projects had to be
created or borrowed from other fields. New developments in the field have
created new management needs. Distance learning depends on successful
management because several locations are involved. With the advent of new
technologies, new ways to access information are becoming available. As a
consequence, the area of information management has great potential for the
field.
One theoretical base for information management comes from
the discipline of information science. Other bases are emerging from practice
in the integrated technologies area of the development domain and from the
field of library science. The information management area opens many
possibilities for instructional design, especially in the areas of curriculum
development and implementation and self-designed instruction.
Management
involves controlling Instructional Technology through planning, organizing,
coordinating and supervising. Management
is generally the product of an operational value system. The complexity of
managing multiple resources, personnel, and design and development efforts is
multiplied as the size of the intervention grows from small, one-school or
-company departments, to state-wide instructional interventions and global
multi-national company changes. Regardless of the size of the Instructional
Technology program or project, one key ingredient essential to success is
management. Change rarely occurs at only the micro-instructional level. To
ensure the success of any instructional intervention, any cognitive,
behavioral, or affective change must occur in tandem with change at the
macro-level. With few exceptions (Greer, 1992; Hannum and Hansen, 1989;
Romiszowski, 1981), managers of Instructional Technology programs and projects
looking for sources on how to plan for and manage these multiple macro-level
change models will be disappointed.
In summary, there are four subcategories of the management
domain: project management, resource management, delivery system management and
information management. Within each of these subcategories there is a common
set of tasks that must be accomplished. Organization must be assured, personnel
hired and supervised, funds planned and accounted for, and facilities developed
and maintained. In addition, planning for short- and long-term goals must
occur. To control the organization the manager must establish a structure that
aids decision-making and problem-solving. This manager should also be a leader
who can motivate, direct, coach, support, delegate, and communicate (Prostano
and Prostano, 1987). Personnel tasks include recruiting, hiring, selecting,
supervising and evaluating. Fiscal tasks encompass budget planning,
justification and monitoring, accounting and purchasing. Responsibility for
facilities entails planning, supporting and supervising. A manager may have
responsibility for developing a long-range plan (Caffarella, 1993).
Project
Management. Project management involves planning, monitoring, and controlling instructional design and
development projects. According to Rothwell and Kazanas (1992), project
management differs from traditional management, which is line and staff
management, because: (a) project members may be new, short-term members of a
team; (b) project managers often lack long-term authority over people because
they are temporary bosses; and (c) project managers enjoy greater control and
flexibility than is usual in line and staff organizations.
Project managers are responsible for planning, scheduling
and controlling the functions of instructional design or other types of
projects. They must negotiate, budget, install information monitoring systems,
and evaluate progress. The project management role is often one of dealing with
threats to success and recommending internal changes.
Resource
Management. Resource management involves planning, monitoring, and controlling resource support systems and
services. The management of resources is a critical area because it
controls access. Resources can include personnel, budget, supplies, time, facilities,
and instructional resources. Instructional resources encompass all of the technologies
described in the section on the development domain. Cost effectiveness and
justification of effectiveness for learning are two important characteristics
of resource management.
Delivery System Management. Delivery system
management involves planning, monitoring and controlling "the method by
which distribution of instructional materials is organized . . . [it is the]
combination of medium and method of usage that is employed to present
instructional information to a learner" (Ellington and Harris, 1986, p. 47). Distance learning projects, such as
those at National Technological University and Nova University, provide
examples of such management. Delivery system management focuses on product
issues, such as hardware/software requirements and technical support to users
and operators, and process issues, such as guidelines for designers and
instructors. Within these parameters decisions must be made that match the technology's
attributes with the instructional goals. Decisions about delivery system
management are often dependent on resource management systems.
Information Management. Information
Management involves planning, monitoring and controlling the storage, transfer
or processing of information in order to provide resources for learning. There is a great deal of overlap between storing,
transferring and processing because often one function is necessary in order to
perform the other. The technologies described in the development domain are
methods of storage and delivery. Transmission or transfer of information often
occurs through integrated technologies. "Processing consists of changing
some aspect of information [through computer programs] . . . to make it more
suitable for some purpose" (Lindenmayer, 1988, p. 317). Information
management is important for providing access and user friendliness. The
importance of information management is its potential for revolutionizing
curriculum and instructional design applications. The growth of knowledge and
knowledge industries beyond the scope that today's educational system can
accommodate means that this is an area of great importance to Instructional
Technology in the future. An important component of the field will continue to
be the management of information storage systems for instructional purposes.
Trends and Issues. The trend towards quality improvement and quality
management that is seen in industrial settings is likely to spread to
educational settings. If so, it will have an influence on the management
domain. A synthesis of diffusion of innovations, performance technology and
quality management could provide a powerful tool for organizational change.
Diminishing availability will challenge managers to make better use of current
resources. The marriage of information systems and management will grow and
affect Instructional Technology in that management decision-making will be more
and more dependent on computerized information.
The Domain of Evaluation
Evaluation
in its broadest sense is a commonplace human activity. In daily life we are
constantly assessing the worth of activities or events according to some system
of valuing. The development of formalized educational programs, many funded by
the federal government, has brought with it the need for formalized evaluation
programs. The evaluation of these programs required the application of more
systematic and scientific procedures.
Curriculum specialist Ralph Tyler is generally credited
with promulgating the concept of evaluation in the 1930s (Worthen and Sanders,
1973). The year 1965 saw the passage of the landmark Elementary and Secondary
Education Act, mandating formal needs assessments and evaluation of certain
types of programs. Since that time, evaluation has grown into a field of its
own, with professional associations (e.g. the American Evaluation Association)
and a long list of published books and journal sources.
The publication of Robert Mager's Preparing Instructional Objectives in 1962 was an important event
in the evolution of evaluation. When preparing for a workshop on programmed
instruction, Mager decided to use programmed instruction as an introduction to
writing measurable objectives. The program was refined and later published. It
is probably one of the most influential publications in the field. Other
important contributions historically were the development of the domains of
educational objectives (Bloom, 1956; Krathwohl, Bloom and Masia, 1964) and
learning classifications (Gagne, 1965).
In the late 1960s Stufflebeam (1969) introduced another
approach to evaluation which has now become classic, one which sought "not
to prove but to improve" (Stufflebeam, 1983, p. 118). His model suggested
four types of evaluation: context, input, process, and product (CIPP). The four
elements in the CIPP model provide for considering information relating to:
needs assessment; design decisions which address content and strategy; guidance
for implementation; and outcome assessment (Braden, 1992).
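The CIPP model pairs each evaluation type with the kind of information it considers, and that mapping can be sketched as a simple lookup table. This is a hypothetical illustration added here, not from the original text; the pairings follow the description above, and the `information_for` helper is invented for the example.

```python
# Illustrative lookup of Stufflebeam's CIPP evaluation model, pairing each
# evaluation type with the information it considers (per the text above).
CIPP = {
    "context": "needs assessment",
    "input": "design decisions which address content and strategy",
    "process": "guidance for implementation",
    "product": "outcome assessment",
}

def information_for(evaluation_type: str) -> str:
    """Return what a given CIPP evaluation type provides information about."""
    return CIPP[evaluation_type.lower()]
```

Laying the model out this way makes the acronym's structure explicit: the four keys are exactly the C, I, P, and P of CIPP.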
With the concern for more formalized evaluation, it became
evident that to evaluate one needed to compare results with goals. Thus, the
area of evaluation came to encompass needs assessment. With this orientation,
Roger Kaufman (1972) presented a conceptual structure for analyzing the
appropriateness of teaching goals.
The evaluation domain grew as the educational research and
methodology field grew, often in tandem or parallel with that field. Important
distinctions between traditional educational research and evaluation became
clearer as both areas developed. Scriven (1980) emphasized the difference
between evaluation and other types of research. He said that while evaluation
is the process of determining the merit, worth or value of a process or product,
and while this is itself a research process, the purpose of educational evaluation is
different from the purpose of other educational research. The purpose of
evaluation is to support the making of sound value judgments, not to test
hypotheses.
Evaluation research and traditional research, then, are
distinguished by several characteristics. While they often employ similar
tools, the ends are different. For traditional research, the end is an increase
in knowledge broadly defined. For evaluation research, the end is the provision
of data for decision making in order to improve, expand, or discontinue a
project, program or product. The aims of traditional research are less time and
situation specific because it attempts to uncover principles that apply more
broadly. With evaluation research, the object being evaluated is most often a
specific program or project in a given context. In other words, much less
attention is paid to the question of generalizing the findings to a larger
population. While both types of research have common roots historically and
share many characteristics and processes, the enterprises in practice are quite
distinct.
Evaluation is the
process of determining the adequacy of instruction and learning. Evaluation begins with problem analysis. This is an important
preliminary step in the development and evaluation of instruction because goals
and constraints are clarified during this step.
In the domain of evaluation important distinctions are
made between program, project and product evaluations; each is an important
type of evaluation for the instructional designer, as are formative and
summative evaluation. According to Worthen and Sanders (1987):
Evaluation is the determination of a thing's value. In
education, it is formal determination of the quality, effectiveness or value of
a program, product, project, process, objective, or curriculum. Evaluation uses
inquiry and judgment methods, including: (1) determining standards for judging
quality and deciding whether those standards should be relative or absolute;
(2) collecting relevant information; and (3) applying the standards to
determine quality (pp. 22-23).
As seen in the root concept of the word, the assignment of
value is central to the concept. That this assignment is done fairly,
accurately, and systematically is the concern of both evaluators and clients.
One important way of distinguishing evaluations is by
classifying them according to the object being evaluated. Common distinctions
are programs, projects, and products (materials). The Joint Committee on
Standards for Educational Evaluation (1981) provided definitions for each of
these types of evaluation.
Program
evaluations—evaluations that assess
educational activities which provide services on a continuing basis and often
involve curricular offerings. Some examples are evaluations of a school
district's reading program, a state's special education program, or a
university's continuing education program (p. 12).
Project
evaluations—evaluations that assess
activities that are funded for a defined period of time to perform a specific
task. Some examples are a three-day workshop on behavioral objectives, or a
three-year career education demonstration project. A key distinction between a
program and a project is that the former is expected to continue for an
indefinite period of time, whereas the latter is usually expected to be short
lived. Projects that become institutionalized in effect become programs (pp.
12,13).
Materials
evaluations (instructional products)—evaluations
that assess the merit or worth of content-related physical items, including
books, curricular guides, films, tapes, and other tangible instructional
products (p. 13).
An important distinction here is the separation of
personnel evaluation from other categories. In practice, such a distinction is
difficult to accomplish. People become personally involved with the development
or success of a program or product; even though an evaluator may constantly
refer to a separation, with statements like: "People are not being
evaluated here. We just want to know if this model program works or not."
The people responsible for creating and maintaining these entities are justifiably
concerned about the outcomes of the evaluation. In practice, people's
effectiveness is often judged by the success of their program or product,
regardless of what definitional distinctions one would like to make.
Within the domain of evaluation there are four subdomains:
problem analysis, criterion-referenced measurement, and formative and summative
evaluation. Each of these subdomains will be explained below.
Problem
Analysis. Problem analysis involves determining the nature and parameters of the
problem by using information-gathering and decision-making strategies.
Astute evaluators have long argued that the really thorough evaluation will
begin as the program is being conceptualized and planned. In spite of the best
efforts of its proponents, the program that focuses on unacceptable ends will
be judged as unsuccessful in meeting needs.
Thus, evaluation efforts include identifying needs,
determining to what extent the problem can be classified as instructional in
nature, identifying constraints, resources and learner characteristics, and
determining goals and priorities (Seels and Glasgow, 1990). A need has been
defined as "a gap between 'what is' and 'what should be' in terms of
results" (Kaufman, 1972), and needs assessment is a systematic study of
these needs. An important distinction should be offered here. A needs assessment
is not conducted in order to perform a more defensible evaluation as the
project progresses. Instead, its purpose is more adequate program planning.
Criterion-Referenced Measurement.
Criterion-referenced measurement involves techniques for determining learner
mastery of pre-specified content. Criterion-referenced
measures, which are sometimes tests, can also be called content-referenced,
objective-referenced, or domain-referenced. This is because the criterion for
determining adequacy is the extent to which the learner has met the objective. A
criterion-referenced measure provides information about a person's mastery of
knowledge, attitudes or skills relative to the objective. Success on a
criterion-referenced test often means being able to perform certain
competencies. Usually a cut-off score is established, and everyone reaching or
exceeding the score passes the test. There is no limit to the number of
test-takers who can pass or do well on such a test because judgments are not
relative to other persons who have taken the test.
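The cut-off-score logic described above can be sketched in a few lines of code. This is an illustrative example only; the cut-off value, the learner names, and the scores are all hypothetical.

```python
# Sketch of criterion-referenced scoring: each learner is judged against a
# fixed standard (the cut-off score), never against other test-takers, so
# there is no limit on how many learners may pass.

CUT_OFF = 80  # hypothetical mastery standard, in percent


def criterion_referenced_result(score: int) -> str:
    """Pass/fail judgment relative to the fixed cut-off only."""
    return "pass" if score >= CUT_OFF else "fail"


# Hypothetical learners: note that every score at or above 80 passes,
# regardless of how the others performed.
scores = {"Learner A": 92, "Learner B": 80, "Learner C": 67}
for learner, score in scores.items():
    print(learner, criterion_referenced_result(score))
```

Because the judgment references the criterion rather than a norm, adding more high-scoring learners never changes anyone else's result.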
Criterion-referenced measurements let the students know
how well they performed relative to a standard. Criterion-referenced items are
used throughout instruction to measure whether prerequisites have been mastered.
Criterion-referenced post-measures can determine whether major objectives have
been met (Seels and Glasgow, 1990). Curriculum designers and other educators
were interested in criterion-referenced measurement before Mager described
behavioral objectives (Tyler, 1950). Early contributors to the application of criterion-referenced
measurement in Instructional Technology came from the programmed instruction
movement and included James Popham and Eva Baker (Baker, 1972; Popham, 1973).
Current contributors include Sharon Shrock and William Coscarelli (Shrock and
Coscarelli, 1989).
Formative and Summative Evaluation. Formative
evaluation involves gathering information on adequacy and using this
information as a basis for further development. Summative evaluation involves
gathering information on adequacy and using this information to make decisions
about utilization.
The emphasis on both formative evaluation in the early
stages of product development and summative evaluation after instruction is a
prime concern of instructional technologists. The distinction between these two
types of evaluation was first made by Scriven (1967), although Cambre has
traced these same types of activities to the 1920s and 1930s in the development
of film and radio instruction (Cambre, cited in Flagg, 1990).
According to Michael Scriven (1967):
Formative evaluation is conducted during the development or improvement of a program or product (or
person, etc.). It is an evaluation conducted for the in-house staff of the program and normally remains in-house;
but it may be done by an internal or
external evaluator or (preferably) a combination. The distinction between
formative and summative has been well summed up in a sentence of Bob Stake's:
"When the cook tastes the soup, that's formative; when the guests taste
the soup, that's summative" (p. 56).
Summative evaluation is conducted after completion and for the
benefit of some external audience or decision-maker (e.g., funding agency, or
future possible users), though it may be done by either internal or external evaluators or a mixture. For
reasons of credibility, it is much more likely to involve external evaluators
than is a formative evaluation. It should not be confused with outcome evaluation,
which is simply an evaluation focused on outcomes rather than on process—it
could be either formative or summative (p. 130).
In product development, the use of formative and summative
evaluation is particularly important at varying stages. At the initial stages
of development (alpha stage testing), many changes are possible, and formative
evaluation efforts can have wide ranging scope. As the product is developed
further, the feedback becomes more specific (beta testing), and the range of
acceptable alternative changes is more limited. These are both examples of
formative evaluation. When the product finally goes to market and is evaluated
by an outside agency, which plays a "consumer reports" role, the
purpose of the evaluation is clearly summative—i.e., helping buyers make a wise
selection of a product. At this stage, without a wholesale revamping of the
product, revision is virtually impossible. Thus, we see that in the development
of a product, the uses of formative and summative evaluation vary with the
stage of progress and that the range of acceptable suggestions narrows over time.
The methods used by formative and summative evaluation
differ. Formative evaluation relies on technical (content) review and tutorial,
small-group, or large-group tryouts. Methods of collecting data are often
informal, such as observations, debriefings, and short tests. Summative
evaluation, on the other hand, requires more formal procedures and methods of
collecting data. Summative evaluation often uses a comparative group study in a
quasi-experimental design.
Both formative and summative evaluation require
considerable attention to the balance between quantitative and qualitative
measures. Quantitative measures will typically involve numbers and will
frequently work toward the idea of "objective" measurement.
Qualitative measures frequently emphasize the subjective and experiential
aspects of the project and most often involve verbal descriptions as the means
of reporting results.
Trends and
Issues. Needs assessment and other types
of front end analyses have been primarily behavioral in orientation through
their emphasis on performance data and on breaking content down into its component
parts. However, current stress on the impact of context on learning is giving a
cognitive, and at times a constructivist, orientation to the needs assessment
process. This emphasis on context is evident in the performance technology
movement, situated learning theories, and the new emphasis on more systemic
approaches to design (Richey, 1993). Consequently, the needs assessment phase
is gaining in importance. In addition, many are recommending that the needs
assessment phase assume greater breadth, moving beyond concentration on content
and placing new emphases on learner analysis and organizational and
environmental analysis (Richey, 1992; Tessmer and Harris, 1992). The performance
technology movement is also making an important contribution to the new needs
assessment emphasis. Performance technology approaches may cause a broadening
of the designer's role to include identifying aspects of the problem that are
not instructional and working with others to create a multi-faceted solution.
The quality improvement movement will affect the
evaluation domain. Quality control requires continuous evaluation, including
extending the cycle beyond summative evaluation. Confirmative evaluation
(Misanchuk, 1978) is the next logical step in the cycle. In a 1993 article
Hellebrandt and Russell argue that:
Confirmative evaluation of instructional materials and
learners completes a cycle of evaluative steps in order to maintain performance
standards of an instructional system. Following some time after formative and
summative evaluation, a team of unbiased evaluators uses tools like checklists,
interviews, rating scales, and tests to answer two fundamental questions:
first, do materials still meet the original objectives; second, have learners maintained
their level of competence?
Other researchers are re-examining criterion-referenced
measurement techniques. For example, Baker and O'Neil (1985) explore in depth
the issue of assessing instructional outcomes including new directions for
criterion-referenced measurement. They present a new model of evaluation
adapted to the new technologies. Their model takes into account the goals,
intervention, context, information base and feedback loops.
Other areas of great interest are the measurement of
higher level cognitive objectives, affective objectives and psychomotor
objectives. Research on computerized criterion-referenced measurement will
stimulate this domain, as will the research on qualitative measures, such as
portfolios and more realistic measurement items like case studies and evaluation
of taped presentations. Cognitive science will continue to influence this
domain in terms of newer approaches to diagnosis (Tennyson, 1990). These areas
will be discussed further in Chapter Three.
New technologies have raised further issues in the
evaluation domain and created a need for new techniques and methods. For
example, attention needs to be directed toward improving the evaluation of
distance learning projects. These tend to be evaluated superficially. It is
important that evaluation of distance learning cover many aspects, i.e.,
personnel, facilities, equipment, materials, programming (Clark, 1989;
Morehouse, 1987). Reeves (1992) recommends formative experimentation, which uses
a small-scale trial-and-error approach to study a variable in a real-life context.
Tessmer (1993) proposes a formative evaluation model which
accommodates a 'layers of necessity' approach. This approach takes into
consideration the resources and constraints of each project, and attempts to
avoid planning layers of formative evaluation which cannot be realistically
accomplished within a project.
Eastmond (1991) presents a scenario of an evaluator's
dilemma in 2010. In the scenario, the evaluator's role becomes one of
questioning data collected by sophisticated information management tools.
Duchastel (1987) suggests a triangular procedure of checks and balances on data
collected for the evaluation of software. Thus, product review, checklist
procedure, user observation and objective data evaluations are used together to
give a more complete picture of the software. This approach supports the trend
towards a combination of quantitative and qualitative data gathering techniques
(Seels, 1993c).
Summary
The five domains of Instructional
Technology highlight the diversity of the field. In addition, these domains are
complex entities in themselves. This chapter emphasizes the taxonomic nature of
the domain structure. One could continue the definition process and develop
more specific levels of the taxonomy. The future work of instructional
technologists will shape more finite definitions of the subcategories and the
areas within them.