
Old Concerns with New Distance Education Research

Media Comparison Studies Have Fatal Flaws That Taint Distance Learning Research Relying On This Strategy
Barbara Lockee, Mike Moore, & John Burton

Editor’s Note

This article was originally published in Educause Quarterly and is republished here by permission. Educause retains the copyright [https://edtechbooks.org/-ke], and the original article can be found here [https://edtechbooks.org/-mL].

Lockee, B. B., Moore, M., & Burton, J. (2001). Old concerns with new distance education research. Educause Quarterly, 24(2), 60–62. Retrieved from https://net.educause.edu/ir/library/pdf/EQM0126.pdf

Distance educators who have gone through the process of designing, developing, and implementing distance instruction soon realize that the investment is great and the results of their efforts possibly tenuous. Ultimately, the question arises, “Does it work?” Unfortunately, to answer the question, many educators use a strategy of comparing distance courses with traditional campus-based courses in terms of student achievement. This phenomenon, called the media comparison study, has actually been in use since the inception of mediated instruction.

The following analysis looks at research currently being conducted by some stakeholders in the name of valid distance learning research. Such “research” essentially repeats the mistakes of prior media studies that use the long-discredited media comparison approach.

Flawed Research Design

A popular educational research strategy of the past compared different types of media-based instruction (for example, film to television) or compared mediated instruction to teacher-presented instruction (lecture) to determine which was “best.” These types of studies became known as media comparison studies.[1] Such studies assumed that each medium was unique and would therefore affect learning in its own way. Researchers conducting this type of study, comparing one medium to another, looked at each medium as a whole and gave little thought to its attributes and characteristics, to learner needs, or to psychological learning theories.

The research design is based on the standard scientific approach of applying a treatment variable (otherwise known as the independent variable) to see if it has an impact on an outcome variable (the dependent variable). For example, to determine if a new medicine could cure a given illness, scientists would create a treatment group (those with the illness who would receive the new medicine) and a control group (those with the illness who would receive a placebo). The researchers would seek to determine if those in the treatment group had a significantly different (hopefully positive) reaction to the drug than those in the control group. (The terms “significant difference” and “no significant difference” are statistical phrases referring to the measurement of the experimental treatment’s effect on the dependent variable.)
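A minimal sketch of this design, assuming simulated recovery scores and the scipy library (the group sizes, means, and significance threshold below are illustrative assumptions, not values from the article), might look like this:

```python
# Sketch of the treatment/control design described above, using a
# hypothetical drug trial and an independent-samples t-test.
# All numbers are illustrative assumptions.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Hypothetical recovery scores for patients randomly assigned to two groups.
control = rng.normal(loc=50, scale=10, size=40)    # placebo group
treatment = rng.normal(loc=56, scale=10, size=40)  # new-medicine group

# Test whether the independent variable (the treatment) shifted the
# dependent variable (recovery score).
t_stat, p_value = stats.ttest_ind(treatment, control)

alpha = 0.05
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
if p_value < alpha:
    print("Significant difference: the treatment measurably affected the outcome.")
else:
    print("No significant difference detected (which, as discussed later, "
          "does not prove the two groups are equivalent).")
```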

In the case of media comparison studies, the delivery medium becomes the treatment variable and student achievement, or learning, is seen as the dependent variable. While such an approach may seem logical at face value, it’s unfortunately plagued with a variety of problems. Such a design fails to consider the many variables that work together to create an effective instructional experience. Such factors include, but are certainly not limited to, learner characteristics, media attributes, instructional strategy choices, and psychological theories.

Learner Characteristics

In media comparison studies, researchers view students as a homogeneous unit instead of as individuals with unique characteristics and learning needs. As anyone who has ever taught knows, learners bring with them a variety of qualities and experiences. For example, if learners have a certain cognitive style that affects their perception of complex visual information (that is, field dependence), they may be disadvantaged in Web-based courses, particularly if the interface lacks intuitiveness or consistency. To lump all learners together ignores important traits that may affect learning.

Media Attributes

Media comparison studies usually provide little information about a specific medium’s capabilities.[2] The comparison design inherently assumes that each medium is unique and can affect learning in some way. The confounding factor here is that each medium consists of many attributes that may affect the value of the medium’s instructional impact.

Media attributes are traditionally defined as “…the properties of stimulus materials which are manifest in the physical parameters of media.”[3] Levie and Dickie provided a comprehensive taxonomy of media attributes, including type of information representation (text, image, or sound), sensory modalities addressed (auditory, visual, and so on), level of realism (abstract to concrete), and ability to provide feedback (overt, covert, immediate, or delayed). So, instead of treating a distance delivery medium as amorphous, we could ask a more relevant question by targeting the specific qualities or attributes of the medium.

For example, is a videotape instructionally successful because of the movement it illustrates, the realistic color image, the close-up detail, the authentic sound, or some combination of these characteristics? Individual attributes need to be isolated and tested as variables in and of themselves, instead of treating the whole delivery system as one functional unit.

Instructional Strategies

Clark[4] maintained that one of the primary flaws in media comparison studies is the confusion of instructional methods with the delivery medium. Instead of treating the distance delivery technology as a facilitator of the chosen instructional strategies, many who engage in such comparisons treat the medium (Web-based instruction, for example) as the strategy itself. For example, comparing a face-to-face course to a Web-based course doesn’t tell us anything about what the teacher or students did in the face-to-face class, or what strategies the Web-based event employed. Perhaps a Web-based event succeeded because its students engaged in collaborative problem-solving, while students in the face-to-face setting simply received information through lectures. Note that the students in a face-to-face class could also engage in collaborative problem-solving. In fact, occupying the same room during such an exercise might actually enhance the experience.

Any instructional environment can support a variety of instructional methods, some better than others. To credit or blame the delivery medium for learning ignores the effectiveness of the instructional design choices made while creating a learning event.

Theoretical Foundations

The conduct of research relies on the testing of some theory. Research regarding the processes of learning should frame its inquiries around the psychological theories that underpin these processes. For example, the theoretical position of behaviorism depends on the use of reinforcement to strengthen or weaken targeted learning behaviors.[5] A research study that invokes this theory might investigate the use of positive reinforcement to reduce procrastination in distance education, for example. The primary concern related to media comparison studies is that they test no theoretical foundation — they simply evaluate one instructional delivery technology against another. Inquiry devoid of theory is not valid research.

As indicated earlier, many factors work together to create an effective instructional event. In addition to the previous variables, any study should also consider instructional content and context, as well as the type of learning (cognitive, affective, or psychomotor). It’s possible that the interactions of all these elements contribute to more effective experiences. Given the many good questions to ask, it should prove relatively easy to avoid asking a poor one — like comparing different distance delivery media.

In 1973, Levie and Dickie[6] suggested that comparison studies were “fruitless” and that most learning could be received by means of “a variety of different media.” To avoid the same errors, we should heed their advice and seek answers to more beneficial questions. Unfortunately, problems affecting comparison studies often don’t stop with their research design flaws, but continue with the interpretation of their outcomes.

Misuse of Results

Ask a poor question, get a poor answer. Clearly, any outcomes generated by comparison studies are invalid because the questions themselves are inherently confounded. However, that fact doesn’t stop those who conduct such studies from misinterpreting and misapplying their results.

Most media comparison studies result in “no significant difference” findings. This means that the treatment had no measurable effect on the outcome, or dependent, variable. A distance-education comparison study typically compares the achievement of students on campus to the achievement of students engaged in distance-delivered instruction. Unfortunately, researchers often incorrectly interpret a “no significant difference” result as evidence that the mediated, or distance-delivered, instruction is as effective as traditional, or teacher-led, instruction in promoting learning.

Many early comparison studies aimed to prove an instructional medium’s effectiveness to justify the purchase and implementation of new technologies (radio, television, computers, and so forth). The outcomes of current distance-education comparison studies are being used to demonstrate not the superiority of the distance experience, but the equality of it. The problem lies in the flawed logic behind the interpretation of “no significant difference.”

“No significant difference” is an inconclusive result, much like a “not guilty” verdict in the U.S. legal system. It means just that and nothing more — not guilty does not mean innocent. A finding of “no significant difference” between face-to-face instruction and distance-delivered instruction does not mean they’re equally good or bad.
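A small simulation helps show why such a result is inconclusive. The sketch below assumes a genuine five-point advantage for one condition and small, underpowered study samples (every value here is an illustrative assumption); even so, most of the simulated comparison studies report “no significant difference.”

```python
# Illustrative simulation: even when a real difference exists, small,
# underpowered comparison studies frequently return "no significant
# difference" -- which is why that result cannot be read as evidence
# that two conditions are equally effective. All values are assumed.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

true_effect = 5        # assumed real advantage of 5 points for one condition
n_per_group = 15       # small class sections, hence low statistical power
n_studies = 2000
nonsignificant = 0

for _ in range(n_studies):
    campus = rng.normal(70, 12, n_per_group)
    distance = rng.normal(70 + true_effect, 12, n_per_group)
    _, p = stats.ttest_ind(distance, campus)
    if p >= 0.05:
        nonsignificant += 1

print(f"{nonsignificant / n_studies:.0%} of simulated studies found "
      "'no significant difference' despite a genuine 5-point effect.")
```

In other words, failing to detect a difference often reflects a study’s limited power, not the equivalence of the two conditions being compared.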

Russell’s brief work,[7] as well as his Web site, is widely referenced. Yet he demonstrated an apparent misunderstanding of why comparison studies aren’t appropriate:

“[A department head]…long felt that such studies amounted to beating a dead horse.” This is true. There no longer is any doubt that the technology used to deliver instruction will not impact the learning for better or for worse. Comparative studies, such as those listed in the “no significant difference” document, are destined to provide the same “no significant difference” results. So why do they continue to be produced?

Could it be that the inevitable results are not acceptable? When this listing was first compiled and published in 1992, it was stated that it was and continues to be folly to disagree with those who say that it is time to stop asking the question: Does the technology used to deliver instruction improve it? Clearly, it does not; however, it does not diminish it either. As far as learning is concerned, there is just “no significant difference.”[8]

In this statement, Russell showed little understanding of the problems inherent in comparison studies that we described (technically, the inherent violations of the assumption of ceteris paribus — that all things are assumed equal except for those conditions that are actually manipulated; see Orey,[9] for example). Worse, he committed the fallacy of assuming that “no significant difference” means “the same.”

Research and Evaluation

Many authors of early comparison studies intended to justify implementation of new media or replacement of “traditional” methods of teaching with more efficient (but equally effective) approaches. These reasons are painfully similar to what current research is being asked to do concerning the quality of distance education experiences. On a positive note, the past 20 years have seen attempts to move away from these comparison approaches and place more emphasis on content to be learned, the role of the learner, and the effectiveness of instructional design decisions, rather than on the instructional quality of a specific medium.

Many investigators who engage in media comparison studies sincerely believe they’re conducting valid research that will generalize to the larger distance learning population. More appropriately, they should focus on the localized evaluation of their particular distance education courses and programs.

The distinction between research and evaluation sometimes blurs because they share many of the same methods. However, the intentions differ considerably. Research involves testing theories and constructs to inform practice, while evaluation seeks to determine if a product or program was successfully developed and implemented according to its stakeholders’ needs. To assess the effectiveness of a given distance education experience, investigators can answer relevant questions through the more appropriate evaluation techniques.

Application Exercises

  • According to the author, what would be a valid method of evaluating distance education?
  • In what ways do media and pedagogy intersect? Which do you believe to have a greater impact on student learning?
  • In your own words, describe why media comparison studies may not be productive.
  • In a small group, design a study that would more accurately test the difference between in-class and online classes. How would you isolate the variables Lockee suggests?
Please complete this short survey to provide feedback on this chapter: http://bit.ly/OldConcernsWithNewDE
  1. A. Lumsdaine, “Instruments and Media of Instruction,” Handbook of Research on Teaching, N. Gage, ed. (Chicago: Rand McNally, 1963).
  2. G. Salomon and R. E. Clark, “Reexamining the Methodology of Research on Media and Technology in Education,” Review of Educational Research, 47 (1977), 99–120.
  3. W. H. Levie and K. Dickie, “The Analysis and Applications of Media,” The Second Handbook of Research on Teaching, R. Travers, ed. (Chicago: Rand McNally, 1973), 860.
  4. R. E. Clark, “Reconsidering Research on Learning from Media,” Review of Educational Research, 53 (4) (1983), 445–459.
  5. M. P. Driscoll, Psychology of Learning for Instruction, Second edition (Boston: Allyn and Bacon, 2000).
  6. Levie and Dickie, 855.
  7. T. L. Russell, “Technology Wars: Winners and Losers,” Educom Quarterly, 32 (2) (March/April 1997).
  8. Ibid.
  9. M. A. Orey, J. W. Garrison, and J. K. Burton, “A Philosophical Critique of Null-Hypothesis Testing,” Journal of Research and Development in Education, 22 (3) (1989), 12–21.
Barbara Lockee

Virginia Tech

Dr. Barbara B. Lockee is a professor of education at Virginia Tech. She received her B.S. in communications media and her M.A. in curriculum and instruction from Appalachian State University. She received a Ph.D. in curriculum and instruction with concentration in instructional technology from Virginia Tech. She has authored or coauthored more than 90 publications. Her awards and honors include the XCaliber Award for Excellence in Courseware Development in 2000 and the Clifton Garvin Fellowship in 2002.

Mike Moore

Virginia Tech

Dr. David Mike Moore is an emeritus professor in instructional technology in the College of Human Resources and Education at Virginia Tech. His research has been focused on instructional design and distance learning, and he is the author of over 100 articles in that field. He is also the author of Visual Literacy, which received AECT’s outstanding textbook publication award.

John Burton

Virginia Tech

Dr. John Burton is a professor in the education department at Virginia Tech and specializes in instructional design and technology. He is currently the director of the Center for Instructional Technology Solutions in Industry and Education (CITSIE) and consulting editor for Educational Technology Research & Development. He also served as associate director for Educational Research and Outreach from 2005 to 2008. Dr. Burton received his PhD in educational psychology from the University of Nebraska-Lincoln.

This content is provided to you freely by BYU Open Learning Network.

Access it online or download it at https://open.byu.edu/lidtfoundations/old_concerns_distance_education.