Saturday, May 28, 2011

Frick and Boling - Effective Web Instruction (Chapters 1 & 2)

In this blog post, I summarize and critique the first two chapters of Effective Web Instruction: Handbook for an Inquiry-Based Process.


Summary:
Frick and Boling (2002) present an inquiry-based, iterative instructional design and development process in order to avoid common instructional pitfalls such as:
  • No user input
  • Little or no testing
  • “No record of decision-making”
  • No justification for design decisions (pp.2-3)
In their process, objectives are created up front and assessments are built before work on the content begins. The process includes iterative reviews, first of a paper prototype, then of a computer prototype, and finally of the site itself. The results of each iteration are analyzed, and the site is improved based on this analysis (p.4).

The instructional goals are developed with all stakeholders, recognizing that some perspectives are more valuable than others. The reading recommends thinking about how you’ll assess the instructional goals while you’re developing the instruction, which is in line with Mager’s guidelines for writing instructional objectives (p.12).

The learner analysis section discusses the importance of knowing your learners and contends that the best way to do this is to try teaching the content to them, at least once. The final section, context analysis, advises against pursuing a Web solution simply for its own sake. You should ask yourself: “What can we do with information technology that could not be done without it to help students learn?”

Critique:
I didn’t find the learner analysis section particularly helpful or practical. Yes, it’s great to teach the subject matter at least once in order to better understand your learners’ needs, but I’ve never had this opportunity as an e-learning developer. So if you can’t teach the subject matter beforehand, how does one uncover learner needs?

I found the paper prototype recommendation interesting and look forward to the later section of the reading, where the approach is explained in more detail. My clients tend to request computer prototypes, so I’m not sure how a paper prototype would actually be implemented.

Mager's Tips on Instructional Objectives

This blog post summarizes and critiques "Mager's Tips on Instructional Objectives."

Summary:
The reading provides a concise summary of the key points from Mager’s book, Preparing Instructional Objectives.

The three main reasons for stating objectives:
  1. They lay out a road map for you to follow when creating instruction.
  2. If you don’t state the objective up front, you won’t know whether the instruction actually accomplishes it.
  3. They provide an overview for the learner, which allows students to create a personal strategy for accomplishing the instructional goals (p.1).
Useful objectives are those that clearly define the audience, behavior, condition, and degree (p.1). Behaviors can be overt (observed directly) or covert (not observed directly). Covert behaviors require an indicator, so the performance can be demonstrated (p.2).
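To make this structure concrete, here is a minimal sketch of how an objective’s four parts (plus an indicator for covert behaviors) might be represented; the class, field names, and the sample objective are my own illustration, not anything from Mager or the reading:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Objective:
    """One instructional objective in Mager's audience/behavior/condition/degree form."""
    audience: str                    # who performs (e.g., "the learner")
    behavior: str                    # the main intent, stated as a performance
    condition: str                   # circumstances under which the behavior occurs
    degree: str                      # criterion for acceptable performance
    indicator: Optional[str] = None  # observable stand-in, needed when the behavior is covert

    def render(self) -> str:
        base = f"Given {self.condition}, {self.audience} will {self.behavior}"
        if self.indicator:
            base += f" ({self.indicator})"  # indicator shown in parentheses, per Mager
        return f"{base}, {self.degree}."

# Hypothetical example: a covert behavior ("identify") paired with an observable indicator.
obj = Objective(
    audience="the learner",
    behavior="identify the misspelled words in a passage",
    condition="a one-page printed passage",
    degree="catching at least 9 of 10 errors",
    indicator="circle each one",
)
print(obj.render())
```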

The reading concludes with some common pitfalls in writing objectives, including false performance, false givens, teaching points, gibberish, instructor performance, and false criteria.

Critique:
While Mager’s book is very informative and helpful for people who have little experience with creating objectives, it is also long and spends a great deal of time explaining specific points. This reading summarizes Mager’s key points in eight pages. It’s an ideal reference for people who have already read the book but need a refresher or instructional designers who already have some familiarity with instructional objectives. 

The reading does a good job of reminding you to always state the main intent, not just an observable behavior. Mager recommends stating the main intent and then an observable behavior in parentheses if the objective is covert. I think this is a helpful practice for an ID who is planning a course, but it hasn’t been my experience as a student to see objectives stated this way. What do others think?

I appreciated the pitfalls section, but I would have liked to see revised statements. Examples are provided of what not to do, but it would have been helpful to see a revised version of the same statement, particularly for the teaching points section, which I found confusing.
 

Kim and Frick - Changes in Student Motivation During Online Learning

In this blog post, I summarize and critique Kim and Frick's (2011) "Changes in Student Motivation during Online Learning." 

Summary:
Taking into account the high attrition rate in online courses and the fact that attrition is often caused by a lack of motivation, Kim and Frick (2011) investigate learner motivation in online learning. The first section of this reading is a literature review and the second section describes the actual study.

The literature review discusses internal, external, and personal factors that influence motivation in Web-based instruction. One internal factor is that instruction is more motivating if it applies the ARCS model (attention, relevance, confidence, satisfaction) or Merrill’s first principles. On the other hand, “cognitive overload” can decrease motivation to learn (p.3). External factors, such as technical difficulties with the learning environment or lack of support from an employer, also influence motivation. Finally, personal factors, such as a preference for a particular learning style, can impact motivation.

The rest of the reading describes the study itself in great detail, including participants, research instruments, and data collection and analysis methods. Table 2 on page 18 outlines eight instructional design principles based on the findings:
  1. Provide learners with content that is relevant and useful to them.
  2. Incorporate multimedia presentations that stimulate learner interest.
  3. Include learning activities that simulate real-world situations.
  4. Provide content at a difficulty level which is in a learner's zone of proximal development.
  5. Provide learners with hands-on activities that engage them in learning.
  6. Provide learners with feedback on their performance.
  7. Design the website so that it is easy for learners to navigate.
  8. If possible, incorporate some social interaction in the learning process (e.g., with an instructor, technical support staff, or an animated pedagogical agent) (p.18).

Critique:
I appreciated Table 2, which includes practical guidelines for increasing learner motivation based on the study results. These are things I can actually apply to my Web-based courses.

In addition, I found the notion of disruptive innovations very interesting. I think we’re at an exciting point for Web-based instruction, where quality is improving and courses are being distributed more widely. Still, the following statistic blew me away: “50% of high school courses will be offered online by 2019” (p.2). That’s only eight years from now, and 50% seems high to me. What do others think?

Kim, K.-J., & Frick, T. W. (2011). Changes in student motivation during online learning. Journal of Educational Computing Research, 44(1), 1-24.

Sunday, May 22, 2011

Merrill's 5 Star Instructional Design Rating

Summary:
Merrill’s 5 Star Instructional Design Rating presents a simple method for evaluating instructional products based on five questions:
  1. Is the courseware presented in the context of real-world problems?
  2. Does the courseware attempt to activate relevant prior knowledge or experience?
  3. Does the courseware demonstrate (show examples of) what is to be learned rather than merely tell information about what is to be learned?
  4. Do learners have an opportunity to practice and apply their newly acquired knowledge or skill?
  5. Does the courseware provide techniques that encourage learners to integrate (transfer) the new knowledge or skill into their everyday life? (Merrill, 2007)
Each of these questions includes sub-questions that can be asked in order to make the correct assessment. 
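Since I’ll be applying this rubric to two courses below, here is a tiny helper I sketched for tallying ratings. The five criteria and the none/silver/gold scale follow Merrill’s rating system as I understand it; the function and variable names are purely my own sketch, not part of the reading:

```python
# A minimal sketch (my own, not Merrill's) for tallying 5 Star ratings.
CRITERIA = [
    "Problem-centered",   # 1. real-world problems
    "Activation",         # 2. prior knowledge or experience
    "Demonstration",      # 3. show, don't just tell
    "Application",        # 4. practice and apply
    "Integration",        # 5. transfer to everyday life
]

VALID_RATINGS = {"none", "silver", "gold"}


def summarize(ratings: dict) -> str:
    """Return a tally such as '1 gold, 3 silver, 1 none' for one course."""
    for criterion, rating in ratings.items():
        if criterion not in CRITERIA or rating not in VALID_RATINGS:
            raise ValueError(f"Unexpected entry: {criterion}={rating}")
    counts = {r: sum(1 for v in ratings.values() if v == r)
              for r in ("gold", "silver", "none")}
    return ", ".join(f"{n} {r}" for r, n in counts.items() if n)


# Ratings I assigned to the first course reviewed below.
course_1 = {
    "Problem-centered": "silver",
    "Activation": "none",
    "Demonstration": "gold",
    "Application": "silver",
    "Integration": "silver",
}
print(summarize(course_1))  # -> 1 gold, 3 silver, 1 none
```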

Critique:
I like the simplicity of Merrill’s 5 Star Instructional Design Rating and that it’s presented in a manner that should be easy to use when I evaluate the two e-learning courses. Merrill could have created this rating system using a series of checklists with statements such as “All demonstrations (examples) are consistent with the content being taught.” Instead, he poses questions, which caused me to pause and reflect for a moment. I think I would have been more likely to skim over statements.

My main critique of this reading is that Merrill uses unfamiliar, and at times vague, terms early on. For example, he doesn’t explain what he means by “kinds-of,” “how-to,” and “what-happens.” He assumes the reader knows what he means, but this wasn’t the case for me. In addition, he recommends using his rating system for tutorial or experiential courseware but never defines these types of courseware; he only explains what they aren’t: receptive or exploratory courseware.

Ratings for Instructional Products:
1. E-learning course on how to give core messages
  • I gave this course a silver star for presenting content in the context of real-world problems. It addresses the first two sub-points but involves a single problem, not a progression of problems.
  • I gave this course no stars for activation of prior knowledge. There is no pre-test and the learner is never asked to recall prior knowledge.
  • I gave the course a gold star for demonstration of concepts to be learned. It does this very well and provides multiple examples and non-examples. It uses short videos for these examples, which is the right choice of media for this content.
  • I gave the course a silver star for application because the practice activity is realistic, effective, and provides helpful feedback; however, the learner cannot access help if necessary.
  • I gave the course a silver star for integration. The main objective is to deliver core messages related to abstinence and safe sex in a clear and unbiased manner. There is a question and answer section that provides integration guidance; however, since this is an e-learning course, there is no realistic way for the student to demonstrate the new skill because that would require public speaking.
Final score = three silver stars and one gold star. If you have time, I definitely recommend checking out the course. It’s a good example of e-learning and it takes less than 10 minutes to complete. I would love to hear if you agree with my rating.


2. E-learning course on how to give core messages
  • I gave this course a gold star for presenting content in the context of real-world problems. It addresses all three sub-points and is especially effective at presenting the problem in a series of steps.
  • As with the first course I reviewed, I gave this course no stars for activation of prior knowledge. There is no pre-test and the learner is never asked to recall prior knowledge.
  • I gave the course a silver star for demonstration of concepts to be learned. I didn’t think the media used was always relevant to the content and it didn’t always enhance the training.
  • I gave the course a gold star for application because there are many practice activities in the course that allow learners to reflect on and apply what they’ve learned. Good feedback is always provided.
  • I gave the course a gold star for integration. The final assignment is a clever way to get the learner to reflect on what they’ve learned and take the first step of transferring their new knowledge to the real world in a realistic situation.
Final score = three gold stars and one silver star. I’m curious to see what ratings others assigned for this course.  

Merrill, M. D. (2007). 5 Star Instructional Design Rating. Retrieved May 13, 2011, from http://id2.usu.edu/5Star/FiveStarRating.PDF

Thursday, May 12, 2011

First Principles of Instruction


In “Prescriptive Principles for Instructional Design,” Merrill reviews instructional design models and theories from experts in the field and identifies five principles that many of them share. Merrill’s (2008) five principles for promoting learning, or what he calls the “first principles of instruction,” are:

  • Task-centered approach – Instruction should be organized around a progression of real-world tasks.
  • Activation principle – Instruction should activate the learner’s previous knowledge.
  • Demonstration principle – Instruction should include relevant demonstrations of new skills.
  • Application principle – Instruction should let learners apply what they have learned and should provide feedback.
  • Integration principle – Instruction should be relevant to real life, and learners should have an opportunity to practice what they have learned in the real world (p.174).

According to Merrill (2008), “these design principles apply regardless of the instructional program or practices prescribed by a given theory or model. If this premise is true, research will demonstrate that when a given instructional program or practice violates or fails to implement one or more of these underlying principles, there will be a decrement in learning and performance” (p.175). Merrill (2008) cites an example from Shell EP, where over 65 courses were redesigned based on the first principles of instruction, leading to deeper learning, greater business relevance of the subject matter, and improved job performance (p.177).

I enjoyed Merrill’s analysis of other instructional design principles and how they overlap with the first principles of instruction. However, I don’t quite understand why Merrill included the section on Designing Task-Centered Instruction in this chapter; it felt out of place and incomplete. I expected Merrill to explain each principle in more detail, but he stopped after Task-Centered Instruction. In addition, I found the writing in this section confusing, and Figures 14.3 and 14.4 did not clarify the concepts for me, although I am a visual learner.


Merrill, M. D., Barclay, M., & van Schaak, A. (2008). Prescriptive principles for instructional design. In J. M. Spector et al. (Eds.), Handbook of Research on Educational Communications and Technology (3rd ed., pp. 173-184). New York: Routledge, Taylor & Francis Group.