Many instructional designers who design innovative learning experiences, and then conduct research that investigates the usefulness of those learning experiences, fail to fully apply the instructional theory framework as a design foundation. This reduces the usefulness of their designs and ultimately leads to learners and other stakeholders not fully adopting and benefitting from the designer’s learning experiences.
The aims of this chapter are to (1) help both designers and researchers improve the usefulness of their instructional designs and subsequent research, and (2) reduce diffusion barriers that impact the dissemination and adoption of learning experiences. The sections of this chapter include:
Formally linking instructional design and research to the instructional theory framework and its related design principles enables designers and researchers to answer questions about the relative advantage, compatibility, complexity, observability, and trialability of their innovations (Rogers, 2003).
What is a theory? Simply put, it is a set of ideas about how something might work. For example, Darwin’s theory of evolution contains the ideas of genetic variation and natural selection, among a host of others. In the education field, learning theory is a set of ideas about how people learn, such as behaviorism, cognitivism, and constructivism. However, what should most interest instructional designers is instructional theory, a set of ideas for how best to help people learn.
Here is a simple example of an instructional theory: Drill-and-practice is a useful method for efficiently helping a learner memorize such things as the names of all the U.S. states. This theory contains the idea of an instructional method (drill-and-practice) to help a person remember things. How well will it work? We will not know until we actually deliver our instructional theory to learners in the intended type of situation.
Yanchar et al. (2010) suggested that some instructional designers feel instructional theory has little relevance in how they design instruction: “There is clearly an uneasiness about the applicability of theories and other conceptual tools in everyday design work” (p. 41). Honebein and Honebein (2014), on the other hand, suggest designers do use instructional theory, “but their usage of theory is tacit—e.g. not apparent, even to them” (p. 2).
Thus, the number-one job for instructional designers is to create or modify instructional theories more overtly in a way that meets a client’s requirements. To accomplish this, designers should consider using the instructional theory framework (Honebein & Reigeluth, 2020; Reigeluth & Carr-Chellman, 2009a), illustrated in Figure 1.
A Simple Representation of the Instructional Theory Framework in Action
Note. We call the product (outcome) of this framework an instructional theory.
The instructional theory framework is a design theory, a set of ideas focused on how to “create” instruction rather than “describe” instruction. Central to this idea of creating things is the concept of a method, which encapsulates the know-how a designer uses to create something.
There are several categories of methods that instructional designers use in their design work, such as process methods (e.g., ADDIE), instructional methods (e.g., demonstrations and practice with feedback), media methods (e.g., words, pictures, or video to communicate content), and data-management methods (e.g., gradebooks and learning management systems). While the instructional theory framework guides all of these types of design decisions, our specific interest in this chapter is instructional methods, such as lecture and project-based instruction, which promote learning.
Designers use the instructional theory framework as a way to select instructional methods that promote learning. To select the most useful instructional methods, designers rely on the instructional situation to guide them. Front-end analysis (the “A” part of ADDIE) is where the instructional theory framework begins its journey to deliver value. As shown in Figure 1, the instructional situation has two parts: conditions and values.
Conditions are matters of fact about the situation that a designer can elicit empirically and objectively from stakeholders and documents. Conditions include information about:
This type of information is usually what designers and clients focus on collecting during front-end analysis.
However, what designers often fail to collect during front-end analysis is information about values. Values are matters of opinion that are subjective in nature. For example, a client might say “I hate lectures and I don’t want them in my course!” In that statement, the client is expressing a value—an opinion—that is true for them but may not be true for others. A designer elicits values empirically and multidimensionally from a variety of stakeholders. Values can have a huge impact on the success of an instructional design project. The instructional theory framework specifies four unique types of values:
Like conditions and values, instructional methods have their own unique characteristics, such as:
Once a designer understands the situation (conditions and values), the designer uses their knowledge of the situation in combination with method characteristics to select the best methods. In other words, the instructional theory framework is like a conditional heuristic (Figure 1) whereby a manager or client gives a designer a situation (a mess), and the designer must then consider the conditions and values to select methods that enable the designer to create a solution that cleans up the mess.
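The conditional-heuristic idea above can be sketched in code. The following Python snippet is purely illustrative: the situation attributes, candidate methods, and scoring weights are hypothetical examples we invented for this sketch, not part of the instructional theory framework itself.

```python
# Illustrative sketch only: a toy "conditional heuristic" for method selection.
# The data and weights below are hypothetical, not prescribed by the framework.

def select_method(situation, candidates):
    """Pick the candidate method whose characteristics best fit the situation."""
    def fit(method):
        score = 0
        # Conditions: matters of fact about the situation (elicited objectively).
        if method["supports_content"] == situation["content_type"]:
            score += 2
        # Values: matters of opinion held by stakeholders (elicited subjectively).
        if method["name"] not in situation["disliked_methods"]:
            score += 1
        return score
    return max(candidates, key=fit)

situation = {
    "content_type": "memorization",    # a condition (fact)
    "disliked_methods": {"lecture"},   # a value (opinion)
}
candidates = [
    {"name": "lecture", "supports_content": "concepts"},
    {"name": "drill-and-practice", "supports_content": "memorization"},
]

best = select_method(situation, candidates)
print(best["name"])  # prints: drill-and-practice
```

The point of the sketch is the shape of the reasoning, not the scoring rule: the same candidate methods would rank differently under a different situation, which is exactly the framework's claim that situation drives method selection.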
The framework shown in Figure 1 is a pattern that produces and characterizes all instructional theories. Essentially, an instructional theory is the product of the instructional theory framework. It contains a collection of one or more instructional methods that best fit one or more designated situations. An instructional theory is different from learning theories, such as behaviorism, cognitivism, and constructivism. As shown in Figure 2, learning theory descriptively explains the “what happens” of the learning process, typically what might be going on in one’s head. For example, a cognitivist learning theory suggests that information received by a learner is first processed in short-term memory, and then transferred to long-term memory. Also, notice that a learning theory does not include any methods.
Learning Theory Describes the “What Happens” of Various Learning Processes
Note. The arrow indicates the flow of knowledge.
Instructional Theory Prescribes Possible Methods for “How” One Might Effectively, Efficiently, and Appealingly Learn
Note. The arrow indicates that drill & practice is a possible instructional method that enables a learner to recall U.S. state names.
Examples of instructional theories include those summarized in the four volumes of the “Green Book” (Instructional-Design Theories and Models) (Reigeluth, 1983; Reigeluth, 1999; Reigeluth & Carr-Chellman, 2009a; Reigeluth et al., 2017), such as Schank and colleagues’ (1999) goal-based scenarios and Huitt and colleagues’ (1999) direct approach to instruction. Instructional theories do not need to be published in a book to be instructional theories. Because instructional theory is situational, anyone can create instructional theories for situations that are narrow or wide in scope, and they can improve, change, or “mash up” existing instructional theories in any way they want to fit the situation. In fact, we believe that all people have their own personal theory of instruction, often as tacit knowledge, based on their experiences as a learner or instructor. The challenge for designers is to improve and expand their personal instructional theories.
Applying the instructional theory framework is not hard. In some regards, the instructional theory framework is like a checklist of good practices. To elaborate our ideas about how to use the framework, we have synthesized six core principles that will help instructional designers embrace the ideals of the instructional theory framework:
These principles facilitate the transition from situations to methods that leads to superior instructional solutions, and they help demonstrate a solution’s value. We suggest instructional designers find renewed inspiration to embrace the instructional theory framework through these six principles.
Instructional designers conduct the work they do in a living, self-organizing, complex system (You, 1993; Rowland, 1993, 2007; Solomon, 2000, 2002; Honebein, 2009). What this means is that learning experience designs will behave in ways that designers cannot predict or expect; their nature is emergent “…in that [it is] shaped and developed over time through an evolutionary process” (Honebein, 2009, p. 29). For example, an instructor can design and teach a class one semester, and then the very next semester teach the same class again, and the experience for the instructor, the learners, and any other stakeholders will likely be different.
Reigeluth and Carr-Chellman’s (2009b) and Honebein’s (2019) explorations of the “galaxy” question, which is about whether some instructional methods have universal properties, provide evidence to support the proposition that the instructional theory framework represents a complex system. Merrill (2002, 2009) has argued that some instructional methods are universal, that they are present in all good instruction, such as his first principles. However, those principles are described at a very imprecise level. The implementation of any of those principles will vary from one situation to another, making any reasonably precise description of the principle situational, in recognition of the complexity of instructional situations. Furthermore, given that the instructional theory framework provides categories for conditions, values, and methods, the permutations of instances for each category a learning-experience designer could combine are immeasurable. In other words, situations and methods represent a complex system (Honebein & Reigeluth, 2020).
This idea of complexity is expanded upon philosophically by Cilliers (2000), who distinguishes a system as simple, complicated, and complex based upon the distance from which one observes that system. For example, an aquarium seen in one’s home, observed as a decoration, is simple. That same aquarium can seem complicated when observed by a person who needs to repair it, in terms of heaters, pumps, tubes, and chemicals. The aquarium becomes complex when a person observes the aquarium as an ecosystem, with an immeasurable number of variables.
What does this mean for designers? It means that designers should be comfortable knowing their design situation qualitatively, whereby a variety of learning-experience “experiences” are possible due to the number of elements present in a situation and the interaction between those elements (Honebein & Reigeluth, 2020).
A design fundamental is a “good practice” that one expects a learning-experience designer to overtly apply when designing a learning experience. For us, learning-experience-design fundamentals focus on three key instructional design practices: (1) clearly synthesized situations (conditions, including the nature of the content, and values) that should be stated as instructional objectives, (2) aligned assessments, and (3) formative evaluation that demonstrates a learning experience can achieve the mastery standard specified in the objective.
When a learning-experience designer conducts an instructional analysis, the designer gathers data about the situation in the form of conditions and values. The designer then synthesizes the situation’s primary, actionable factors into a form that enables the selection of instructional methods: an instructional objective.
A well-formed instructional objective has three parts: the conditions for performing the behavior, the behavior, and a standard of performance (criteria for mastery). There are specific rules for each part that maintain logical consistency and hierarchy of the instructional objective (Mager, 1984). In instructional theory framework terms, the specification of mastery is called values about goals, and since it is a value (a matter of opinion), a designer can define it quantitatively, qualitatively, or some mixture of both.
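As a sketch, the three parts of a well-formed objective can be captured in a simple record. The field names and the sample objective below are our own illustrative choices for the state-names example used earlier, not Mager's notation.

```python
from dataclasses import dataclass

@dataclass
class InstructionalObjective:
    # The three parts of a well-formed objective (Mager, 1984):
    conditions: str  # the conditions under which the behavior is performed
    behavior: str    # the observable behavior itself
    standard: str    # the mastery criterion (quantitative, qualitative, or both)

# A hypothetical objective for the U.S. state-names example:
objective = InstructionalObjective(
    conditions="Given a blank map of the United States",
    behavior="the learner will write the name of each state",
    standard="with at least 45 of 50 states named correctly",
)

print(f"{objective.conditions}, {objective.behavior}, {objective.standard}.")
```

Note that the `standard` field carries the "values about goals" decision: because mastery is a matter of opinion, a different stakeholder could legitimately set it higher, lower, or in qualitative terms.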
Instructional designers must specifically link and align instructional objectives with assessments. Assessments not only confirm mastery of desired behaviors, but also provide data about formative improvements.
What is typically missing in criterion-referenced assessments is an indication of acceptable mastery. For example, learning experience “A” might report test performance of 83%, while learning experience “B” might report test performance of 89%. If the instructional objective guiding both learning experiences lacks mastery criteria, it becomes very difficult to assess the efficacy of each learning experience across the outcomes of effectiveness, efficiency, and appeal. Designers must identify acceptable mastery so that other designers and researchers can assess the improved learning effects within the context of efficiency and appeal. You can learn more about this in the Measuring Student Learning chapter.
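To illustrate, consider the 83% and 89% scores above against an explicit mastery criterion. The 85% threshold in this sketch is a hypothetical "values about goals" choice we made for illustration, not a recommended standard.

```python
# Without a mastery criterion, 83% vs. 89% is hard to interpret; with one,
# each score becomes a defensible judgment about the learning experience.

MASTERY = 0.85  # hypothetical mastery standard (a "values about goals" decision)

def meets_mastery(mean_score, mastery=MASTERY):
    """Return True if a learning experience's mean test score meets the standard."""
    return mean_score >= mastery

results = {"A": 0.83, "B": 0.89}
for name, score in results.items():
    verdict = "meets" if meets_mastery(score) else "falls short of"
    print(f"Learning experience {name} ({score:.0%}) {verdict} the mastery standard.")
```

Under this threshold, experience "B" meets the standard while "A" falls short; under a different stakeholder's threshold, the verdicts could change, which is why the criterion must be stated in the objective.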
Learning-experience design must be more about “improving” and less about “proving.” Research methods to prove the usefulness of an instructional method or theory make little sense when the instructional situation surrounding the learning experience can vary so much that the level of usefulness does not generalize (Honebein & Reigeluth, 2020; Reeves & Lin, 2020). What makes more sense is research that aims to improve the instructional theory, such as formative evaluation using single-subject techniques (Brenneman, 1989) or expert reviews, and formative research (Reigeluth & An, 2009; Reigeluth & Frick, 1999). This improvement orientation is particularly important when a method or theory is at a relatively early stage in its development. However, the iterative nature of research to improve also allows a general method to be tailored to a specific situation. This enables designers to, over time, offer multiple versions of the general method for different situations. Designers can learn more about this in the Continuous Improvement of Instructional Materials chapter.
When designers implement a learning experience (delivery to actual learners), they should collect formative data about all three metrics: its effectiveness, efficiency, and appeal. From that point, the learning experience may undergo any number of formative improvements over various iterations in its design lifecycle. Why? Because the learning experience will never be perfect; the situation (a complex system) is always changing, forcing the learning experience to change and adapt to deliver the right proportions of effectiveness, efficiency, and appeal. Thus, designers must not only consider the changes in methods and the resulting changes to effectiveness, efficiency, and appeal, but also how the changes in the situation influence the various effects of those methods.
What do these three design fundamentals mean for designers? They mean that no matter where you work or who your clients are, you will have core data that enables you to defend your designs against nit-picking know-it-alls. And if the client has new or additional data, you will have a structure for collaboratively improving the design rather than fighting to prove it.
Instructional designers must adopt a mindset that considers all instructional methods as having unknown or neutral usefulness until the instructional situation is known. Only once that situational data is present can a designer assess an instructional method’s usefulness (Honebein, 2016, 2019).
This practice helps avoid philosophical bias. Honebein’s research showed that designers have a pre-existing bias toward certain instructional methods; the instructional theory framework calls this values about methods. For example, as shown in Figure 3, many designers view authentic tasks as very useful, whereas those same designers view peer-based or cooperative methods as less useful. This type of biased thinking can lead designers to reject instructional methods that might be very useful in a given situation, just because those methods are incompatible with their biases. There are situations in which behaviorist methods (e.g., drill and practice) are useful and many situations in which they are not.
Chart Illustrating Designer Bias
Note. Instructional designers in introductory and capstone instructional design courses at two different universities were asked to rate the usefulness of various instructional methods in the absence of any type of “condition.” This chart illustrates designer bias (values about methods), where more useful (powerful) methods are to the left, and less useful methods are to the right. If there were no bias, all bars in the chart would be the same height, 3.5. From Honebein (2019).
What does unbiased consideration of instructional methods mean for designers? It means that the solution to a thorny instructional design situation might just be an instructional method that you or your client hates. So, anytime you find yourself saying, “I hate lectures,” watch Randy Pausch’s Last Lecture (20+ million views), and find the hidden value and inspiration present in all instructional methods.
All instructional designs involve some sort of sacrifice (Gropper & Kress, 1965; Tosti & Ball, 1969; Clark & Angert, 1980; Hannafin & Rieber, 1989). Honebein and Honebein’s (2015) research into this topic suggested that an instructional design iron triangle likely exists in all instructional design projects (see Figure 4). The theory of the iron triangle is that if you have three competing factors, you can only maximize two of them; you always sacrifice one. In instructional design, the competing factors (outcomes) are effectiveness, efficiency, and appeal. For example, if a designer favors effectiveness and appeal, the designer will sacrifice efficiency. Favoring efficiency and appeal sacrifices effectiveness, and favoring effectiveness and efficiency sacrifices appeal.
The Instructional Design Iron Triangle
Note. The triangle depicts the three outcomes (or constraints) associated with instructional methods: effectiveness, efficiency, and appeal. An instructional theory, model, or method typically involves the sacrifice of one or more of the outcomes.
What does the iron triangle mean for designers? It means that you’ll always have to give up something in your designs, and that is okay. Perfect is the enemy of the good.
Methods and media have a unique influence on effectiveness, efficiency, and appeal. We have already defined instructional methods earlier in this paper. Examples of instructional methods include lecture, drill and practice, and apprenticeship, as well as others depicted in Figure 3. Media is, of course, the communication channel that carries instructional methods to learners (Heinich et al., 1989). Media itself is a method, and as such one should differentiate between instructional methods and media methods. Media methods include such things as words, diagrams, pictures, films, models, and realia—organized across categories of enactive, iconic, and symbolic (Bruner, 1966).
There has been much debate in our field about how instructional methods and instructional media contribute to effectiveness, efficiency, and appeal (Clark, 1985, 1986, 1994). Following Clark’s arguments and our own design experiences, we feel that instructional methods influence effectiveness, efficiency, and appeal, whereas instructional media influence only efficiency and appeal.
Why should the designer be aware of this point of view? Because as designers explore our field’s research to help construct their own personal instructional theory, they will find studies and examples that describe what Tennyson (1994) calls “big wrench” solutions, which take the form of a panacea. Big wrenches, which are typically media methods, are sprinkled throughout the history of our science, from Thomas Edison’s “motion pictures,” which promised to make books obsolete, to today’s mobile game-based learning (mGBL) solutions. The studies and examples may suggest it is the media method that drives effectiveness, whereas in reality it is more likely the instructional method that does.
What does this mean for designers? It means that you should be cautious about media in terms of touting its influence on effectiveness. No one doubts its strong impact on efficiency and appeal. As noted previously, learning experiences are complex systems that mash together a variety of methods to deliver results. A designer may never know what design element (or more likely, combination of elements) served as the secret sauce for effectiveness. But more than likely, it will be an instructional method (or combination of instructional methods).
For more than 15 years now, the authors have taught a capstone instructional theory framework course for graduate students at Indiana University. The course culminates in students writing their own personal theory of instruction (see the application exercise below). In writing their personal theory paper, students consider the conditions (learner, content, context, constraints) and values (about goals, outcomes, methods, power) associated with their situation, and discuss the instructional methods that reflect their “stamp” as a designer.
The activity our students complete should be an activity that all designers regularly engage in as well, since the activity is all about drawing a line in the sand about your design principles (Brown & Campione, 1996; Stolterman & Nelson, 2000; Collins et al., 2004; Boling et al., 2017). We see such design principles connected to important emerging ideas from the above authors related to design character, design judgment, core judgment, and accountability. As we understand these terms, one’s design character represents inherent, assumed responsibilities for both creative process and outcomes. Design judgment involves creativity and innovation, integrating multiple forms of judgment associated with those aims. Stolterman and Nelson (2000) refer to design judgment as “an act of faith” (p. 8). After a designer experiences the results of their design judgments, design judgments contribute to core judgment, in which certain judgments over time become fixed and very hard to change. For example, the learning-experience-design fundamentals we discuss in this paper are, for us, core judgments. Ultimately, designers must be accountable for their designs in terms of effectiveness, efficiency, and appeal, and avoid the temptation to move, hide, or remove accountability to some other stakeholder. The aggregate of these ideas represents one’s design character and one’s belief “in his or her capacity to make good judgments” (p. 8). That belief is reinforced in terms of how one reflects on their actions.
What does this mean for designers? We think one’s design principles were meant to be dynamic, not static. As the comedian Groucho Marx once said, “Those are my principles, and if you don’t like them …, well, I have others.” Groucho was wise, as he appears to have known the instructional theory framework’s foundational idea that situation drives methods, or in this case, principles. Whether a learning-experience designer is eclectic or orthodox in their adoption of learning theories and instructional methods (Yanchar & Gabbitas, 2011; Honebein & Sink, 2012), the designer’s choice of methods must be dependent on the situation. Designers should not assume even Merrill’s (2002, 2009) first principles to be appropriate in all situations (Honebein, 2019). It is through the ideas of formative evaluation, design research, and reflection-in-action/reflection-on-action that one’s principles increase and decrease in strength.
The instructional theory framework is a cornerstone that must guide our field’s practice and research. The foundations of situation and methods are simple, logical, and aligned with good practice. Philosophically, the instructional theory framework addresses both the objective world (conditions) and the subjective world (values), which mimics what instructional designers encounter in the real world. It provides designers a means to assess and articulate their design judgment, enabling them to be more confident in assuming accountability (Stolterman & Nelson, 2000). And as shown by Honebein and Honebein (2014, 2015) and Honebein (2017, 2019), the instructional theory framework functions as expected. The instructional theory framework’s key benefit is that it guides designers in creating learning experiences that have a higher relative advantage (effectiveness, efficiency, and appeal), better compatibility, lower complexity, easier observability, and actionable trialability.
However, a word of advice: avoid using the “theory” word when you are talking with your clients and subject-matter experts about how you design learning experiences. It will scare them. Keep it as your little, tacit secret.
Boling, E., Alangari, H., Hajdu, I. M., Guo, M., Gyabak, K., Khlaif, Z., Kizilboga, R., Tomita, K., Alsaif, M., Lachheb, A., Bae, H., Ergulec, F., Zhu, M., Basdogan, M., Buggs, C., Sari, A., & Techawitthayachinda, R. I. (2017). Core judgments of instructional designers in practice. Performance Improvement Quarterly, 30(3), 199–219. https://doi.org/10.1002/piq.21250
Brenneman, J. (1989, March). When you can’t use a crowd: Single-subject testing. Performance & Instruction, 22–25.
Brown, A., & Campione, J. (1996). Psychological theory and the design of innovative learning environments: On procedures, principles, and systems. In L. Schauble & R. Glaser (Eds.), Innovations in learning: New environments for education (pp. 289–325). Mahwah, NJ: Lawrence Erlbaum Associates.
Bruner, J. S. (1966). Toward a theory of instruction. Cambridge, MA: Belknap Press.
Cilliers, P. (2000). Complexity and postmodernism. London: Routledge.
Clark, R. E. (1985). Evidence for confounding in computer-based instruction studies: Analyzing the meta-analyses. Educational Communications and Technology Journal, 33(4), 249–262.
Clark, R. E. (1986). Absolutes and angst in educational technology research: A reply to Don Cunningham. Educational Communications and Technology Journal, 34(1), 8–10.
Clark, R. E. (1994). Media will never influence learning. Educational Technology Research & Development, 42(2), 21–29.
Clark, F. E., & Angert, J.F. (1980). Instructional design research and teacher education. Paper presented at the Annual Meeting of the Southwest Educational Research Association (San Antonio, TX, February 8, 1980). ERIC Document 183528.
Collins, A., Joseph, D., & Bielaczyc, K. (2004). Design research: Theoretical and methodological issues. The Journal of the Learning Sciences, 13(1), 15–42. https://edtechbooks.org/-srB
Czeropski, S., & Pembroke, C. (2017). E-learning ain’t performance: Revising HPT in an era of agile and lean. Performance Improvement, 56(8), 37–47. https://doi.org/10.1002/pfi.21728
Gibbons, A. S., & Rogers, P. C. (2009). The architecture of instructional theory. In C. M. Reigeluth & A. Carr-Chellman (Eds.), Instructional-design theories and models: Building a common knowledge base (Vol. III) (pp. 305–326). Hillsdale, NJ: Lawrence Erlbaum Associates.
Gropper, G. L. & Kress, G.C. (1965). Individualizing instruction through pacing procedures. AV Communications Review, 13(2), 165–182.
Hannafin, M. J., & Rieber, L. P. (1989). Psychological foundations of instructional design for emerging computer-based instructional technologies, part II. Educational Technology Research and Development, 37(2), 102–114.
Heinich, R., Molenda, M., & Russell, J. D. (1989). Instructional media (3rd ed.). New York: Macmillan.
Honebein, P. C. (2009, January-February). Transmergent learning and the creation of extraordinary learning experiences. Educational Technology, 27–34.
Honebein, P. C. (2016). The influence of values and rich conditions on designers’ judgments about useful instructional methods. Educational Technology Research and Development, 65(2), 341–357. https://edtechbooks.org/-StAK
Honebein, P. C. (2019). Exploring the galaxy question: The influence of situation and first principles on designers’ judgments about useful instructional methods. Educational Technology Research and Development, 67(3), 665–689. https://doi.org/10.1007/s11423-019-09660-9
Honebein, P. C., & Honebein, C. H. (2014). The influence of cognitive domain content levels and gender on designer judgments regarding useful instructional methods. Educational Technology Research and Development, 62(1), 53–69. https://edtechbooks.org/-UmC
Honebein, P. C., & Honebein, C. H. (2015). Effectiveness, efficiency, and appeal: pick any two? The influence of learning domains and learning outcomes on designer judgments of useful instructional methods. Educational Technology Research and Development, 63(6), 937–955. https://edtechbooks.org/-bwhu
Honebein, P. C., & Reigeluth, C. M. (2020). The instructional theory framework appears lost. Isn’t it time we find it again? Revista de Educación a Distancia, 64(20). https://edtechbooks.org/-ERaX
Honebein, P. C., & Sink, D. L. (2012). The practice of eclectic instructional design. Performance Improvement, 51(10), 26–31.
Huitt, W. G., Monetti, D. D., & Hummel, J. H. (1999). Direct approach to instruction. In C. M. Reigeluth (Ed.), Instructional-design theories and models: A new paradigm of instructional theory, volume II (pp. 73–97). Hillsdale, NJ: Lawrence Erlbaum Associates.
Mager, R. F. (1984). Preparing instructional objectives. Belmont, CA: Lake.
Merrill, M. D. (2002). First principles of instruction. Educational Technology Research and Development, 50(3), 43–59.
Merrill, M. D. (2009). First principles of instruction. In C. M. Reigeluth & A. Carr-Chellman (Eds.), Instructional-design theories and models: Building a common knowledge base (Vol. III) (pp. 41–56). Hillsdale, NJ: Lawrence Erlbaum Associates.
Molenda, M. (2003). In search of the elusive ADDIE model. Performance Improvement, 42(4), 34–36.
Nelson, H. G., & Stolterman, E. (2012). The design way: Intentional change in an unpredictable world (2nd ed.). Cambridge, MA: MIT Press.
Reeves, T. C., & Lin, L. (2020). The research we have is not the research we need. Educational Technology Research and Development, 68, 1991–2001. https://doi.org/10.1007/s11423-020-09811-3
Reigeluth, C. M. (1983). Instructional design: What is it and why is it? In C.M. Reigeluth (Ed.), Instructional-design theories and models: An overview of their current status (pp. 3–36). Hillsdale, NJ: Lawrence Erlbaum Associates.
Reigeluth, C. M. (1999). What is instructional-design theory and how is it changing? In C.M. Reigeluth (Ed.), Instructional-design theories and models: A new paradigm of instructional theory, volume II (pp. 5–29). Hillsdale, NJ: Lawrence Erlbaum Associates.
Reigeluth, C. M. & An, Y. (2009). Theory building. In C. M. Reigeluth & A. Carr-Chellman (Eds.), Instructional-design theories and models: Building a common knowledge base (Vol. III) (pp. 365–386). New York, NY: Routledge.
Reigeluth, C. M., Beatty, B. J., & Myers, R. D. (Eds.). (2017). Instructional-design theories and models, Volume IV: The learner-centered paradigm of education. New York, NY: Routledge.
Reigeluth, C. M. & Carr-Chellman, A. (2009a). Understanding instructional theory. In C. M. Reigeluth & A. Carr-Chellman (Eds.), Instructional-design theories and models: Building a common knowledge base (Vol. III) (pp. 3–26). Hillsdale, NJ: Lawrence Erlbaum Associates.
Reigeluth, C. M. & Carr-Chellman, A. (2009b). Situational principles of instruction. In C. M. Reigeluth & A. Carr-Chellman (Eds.), Instructional-design theories and models: Building a common knowledge base (Vol. III) (pp. 57–68). Hillsdale, NJ: Lawrence Erlbaum Associates.
Reigeluth, C. M., & Frick, T. W. (1999). Formative research: A methodology for creating and improving design theories. In C.M. Reigeluth (Ed.), Instructional-design theories and models: A new paradigm of instructional theory, volume II (pp. 633–651). Hillsdale, NJ: Lawrence Erlbaum Associates.
Reigeluth, C.M. & Keller, J. B. (2009). Understanding instruction. In C. M. Reigeluth & A. Carr-Chellman (Eds.), Instructional-design theories and models: Building a common knowledge base (Vol. III) (pp. 27–35). Hillsdale, NJ: Lawrence Erlbaum Associates.
Reigeluth, C. M., Myers, R. D., & Lee, D. (2017). The learner-centered paradigm of education. In C. M. Reigeluth, B. J. Beatty, & R. D. Myers (Eds.), Instructional-design theories and models, Volume IV: The learner-centered paradigm of education (Vol. IV, pp. 5–32). New York, NY: Routledge.
Rogers, E. M. (2003). Diffusion of Innovations (5th Ed.). New York: Free Press.
Rowland, G. (1993). Designing and instructional design. Educational Technology Research and Development, 41(3), 79–91.
Rowland, G. (2007). Performance improvement assuming complexity. Performance Improvement Quarterly, 20(2), 117–136.
Schank, R. C., Berman, T. R., & Macpherson, K. A. (1999). Learning by doing. In C. M. Reigeluth (Ed.), Instructional-design theories and models: A new paradigm of instructional theory, volume II (pp. 161–181). Hillsdale, NJ: Lawrence Erlbaum Associates.
Solomon, D. L. (2000). Toward a post-modern agenda in instructional technology. Educational Technology Research and Development, 48(4), 5–20.
Solomon, D. L. (2002). Rediscovering post-modern perspectives in IT: Deconstructing Voithofer and Foley. Educational Technology Research and Development, 50(1), 15–20.
Stolterman, E., & Nelson, H. (2000). The guarantor of design. In L. Svensson, U. Snis, C. Sørensen, H. Fägerlind, T. Lindroth, M. Magnusson, & C. Östlund (Eds.), Proceedings of IRIS 23. Laboratorium for Interaction Technology, University of Trollhättan Uddevalla.
Tennyson, R. D. (1994). The big wrench vs. integrated approaches: The great media debate. Educational Technology Research and Development, 42(3), 15–28.
Tosti, D. T., & Ball, J. R. (1969). A behavioral approach to instructional design and media selection. AV Communications Review, 17(1), 5–24.
Yanchar, S. C., & Gabbitas, B. W. (2011). Between eclecticism and orthodoxy in instructional design. Educational Technology Research and Development, 59(3), 383–398. https://doi.org/10.1007/s11423-010-9180-3
Yanchar, S. C., South, J. B., Williams, D. D., Allen, S., & Wilson, B. G. (2010). Struggling with theory? A qualitative investigation of conceptual tool use in instructional design. Educational Technology Research and Development, 58(1), 39–60.
You, Y. (1993). What can we learn from chaos theory? An alternative approach to instructional systems design. Educational Technology Research and Development, 41(3), 17–32.
CC BY-NC: This work is released under a CC BY-NC license, which means that you are free to do with it as you please as long as you (1) properly attribute it and (2) do not use it for commercial gain.