Monday, June 20, 2011

Frick and Boling - Effective Web Instruction (Chapters 5 & 6)

Summary:
The final reading by Frick and Boling is divided into three sections: building a Web prototype, assessing and maintaining the site, and conducting the analysis. The first section provides an overview of hosting servers and the programming languages you might use for your site. It offers different approaches to building the site, along with precautions to take.

The second section explains in detail how to test your site for quality assurance. It provides a sample QA matrix, explains how to use this matrix, and discusses the QA process.
The final section explains how to conduct a thorough needs assessment and analyze the results. It presents advantages and disadvantages of one-on-one interviews and focus groups. It also offers many tips for reviewing the data collected and eventually drawing conclusions. 
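
As a side note, here is roughly how I picture a QA matrix like the one in section 2. This is my own sketch, not the book's actual matrix: the column names and test cases are invented, and while the one-through-four bug priority scale is the authors', I'm assuming 1 means most severe.

```python
# A toy QA matrix: each row pairs a test case with an environment and an
# outcome. The 1-4 priority scale is from the chapter (I'm assuming 1 is
# the most severe); the columns and test cases are my own guesses.
qa_matrix = [
    {"test": "Submit quiz answers", "browser": "Firefox", "passed": False, "priority": 1},
    {"test": "Submit quiz answers", "browser": "IE 8",    "passed": True,  "priority": None},
    {"test": "Print lesson page",   "browser": "Firefox", "passed": False, "priority": 4},
]

# Triage: list the open bugs, most urgent first.
open_bugs = sorted(
    (row for row in qa_matrix if not row["passed"]),
    key=lambda row: row["priority"],
)
for bug in open_bugs:
    print(f"P{bug['priority']}: {bug['test']} ({bug['browser']})")
```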

Critique:
This last reading started off slowly but provided some good information in the final two sections. I didn't find the first section especially useful because it either covered basic information, such as its overview of the Web, or information that I wasn't interested in, such as the descriptions of different programming languages. However, sections 2 and 3 were well worth reading. I liked the QA matrix presented in section 2 as well as the systematic approach the authors take to QA. I also liked how they explained prioritizing bugs and the distinctions between priorities one through four. My favorite part of section 3 explained how to analyze and synthesize data. I find this to be the hardest part of conducting interviews and focus groups. Collecting data can be fun and interesting, but analyzing it is often challenging and overwhelming. The authors describe an approach that I believe is very effective, and I appreciated the practical tips provided throughout the section. For example, to identify trends in data, the authors recommend the following (I sketch a toy version of the "similar items" tip after the list):

  • Really look at what’s there
  • Look for items that are similar
  • Look for items that may be linked
  • Look for items that are missing
  • Pay attention to the questions that come to your mind
  • Don’t be too quick to summarize the data
  • Be sure to summarize the data eventually (p. 131-134)
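
Out of curiosity, I tried to imagine what the "look for items that are similar" tip would look like if you mechanized the tallying step. Everything below is invented for illustration; real qualitative coding is far more iterative and judgment-heavy than this.

```python
from collections import Counter

# Invented interview excerpts, each hand-tagged with the themes it touches.
coded_comments = [
    ("The navigation confused me",    {"navigation"}),
    ("I couldn't find the syllabus",  {"navigation", "content"}),
    ("Videos wouldn't load at work",  {"technical"}),
    ("The menus were hard to follow", {"navigation"}),
]

# Count how often each theme recurs across comments -- the "similar items".
theme_counts = Counter(tag for _, tags in coded_comments for tag in tags)
for theme, count in theme_counts.most_common():
    print(f"{theme}: mentioned {count} time(s)")
```

Seeing "navigation" surface three times in four comments is exactly the kind of trend the authors want you to notice before you summarize.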

Merrill's What Makes Effective, Efficient, Engaging Instruction?


Summary:
In this reading, Merrill starts off by identifying three recent approaches to instructional design and development: “problem-based learning, communities of learners, and distributed learning via the Internet” (p. 1). Merrill states that people often consider these approaches distinct and separate when they should really be integrated in order to produce instruction that is effective, efficient, and engaging. Merrill asserts that his First Principles can be used to integrate the approaches and produce e3 instruction.
While explaining his argument, Merrill briefly discusses two kinds of memory structure: associative memory and mental models. He explains that much of the instruction in the world relies on associative memory, which is not as effective as a mental model. Because they are problem-centered, Merrill’s First Principles rely more on mental models.

Critique:
I’ve enjoyed exploring Merrill’s thoughts in more depth this semester, and this reading is no exception. However, one of my pet peeves with Merrill is that his diagrams aren’t very intuitive to me, and he doesn’t provide as many examples as I would like. Also, the examples he does provide could be more concrete and visual. For example, in the second paragraph on page 4, he outlines how peer interaction might take place in the classroom. Rather than keeping the scenario in general terms, I would have loved a specific example that I could actually visualize.

Sunday, June 12, 2011

Snyder's Making a Paper Prototype

Summary:

The reading explains paper prototyping in great detail, including what it is, how to do it, and why it's useful. It begins by outlining the materials needed to create a prototype and then explains why you might choose to include a background (e.g., poster board) and how to do so. The majority of the reading discusses three closely related topics: representing interface widgets, representing users' choices, and simulating interactions.

Near the end, the reading moves away from software prototyping and provides recommendations for hardware prototyping.

Critique:

In a way, this reading complemented the week 4 reading by Frick and Boling. Whereas that reading focused more on the prototype testing process as a whole, this reading concentrated on the paper-based prototype itself. While I found some of the tips and techniques interesting, the Frick and Boling reading was more helpful. My biggest problem with this reading is that the prototype described sounded very confusing, complicated, and cumbersome to me. It's a process I would have to observe in order to understand. I found myself thinking: wouldn't it be easier at this point to create a low-fidelity computer mock-up, rather than keeping track of all these note cards, transparencies, and invisible tape?

Saturday, June 11, 2011

Frick and Boling - Effective Web Instruction (Chapters 3 & 4)

Summary:

This reading provides instruction for creating a paper-based prototype, conducting a pilot session, and analyzing the results in order to make necessary changes.

According to Frick and Boling, the main goal of prototype testing is to address three key issues with any instructional site:
  1. Is the instruction effective?
  2. Are students satisfied with the instruction?
  3. Is the product usable by students? (p.70)
Designers want to make these determinations without spending unnecessary time and money. This is one reason for testing a paper-based prototype instead of a computer-based prototype. In addition, subjects are more forthcoming with feedback when what they are testing is clearly in draft form.

Critique:

The sections on prototype testing are excellent. I thoroughly enjoyed the reading and learned a great deal. At one point, the authors suggest that the best way to learn how to test prototypes is to watch someone who knows what they're doing. I couldn't agree more because this is how I learned how to facilitate focus groups. I observed a more experienced consultant conducting focus groups until I felt comfortable taking the lead. Unfortunately, I didn't have this opportunity with prototype testing. Now, I wish I had read these sections before my first prototype test. I can see where I went wrong in many areas and how important some of Frick and Boling's points are. Key points that I will take away from the reading include:

  • "Spend the least amount of time, resources and effort in order to learn as much as we can about how to improve the instructional product we hope to eventually develop." (p. 19) - This is always important to keep in mind, especially since many clients have limited resources for formative evaluations. 
  • "Observe subjects using the materials and ask them to think aloud." (p. 20) - I've certainly failed to do this in the past. Most of my clients insist on group pilot sessions, which makes it nearly impossible to observe all participants. Also, it forces us to draw our own conclusions because there are too many subjects for everyone to think aloud. 
  • "Your learners will be more frank about their problems if they feel that the instruction is still being worked on--they usually don't want to bring up issues that might disappoint you." (p. 27) - This is very true and the best argument for testing a paper prototype initially. 
  • "Don't bog down the process by testing a prototype with technical problems. Users will stumble on those problems and you won't get the best data that you could." (p. 38) - This is another mistake I've made in the past. People will jump to point out the obvious problems and many won't get past these issues. 
  • "The cause of observed problems is always in the design, so diagnoses (or findings) are always stated in terms of design and not the subjects." (p. 72) - This is a great reminder. It's too easy and too tempting to blame subjects (e.g., they weren't computer savvy enough for e-learning) when test results aren't what you anticipated.
The reading also left me with a few questions. First, are the authors advising one-on-one sessions, that is, one subject per observer? It's been my experience that when clients ask for formative evaluation like this (and many don't), they set up group sessions, where it's hard to observe and listen to individual subjects. The suggested approach makes sense; I just want to confirm that I understood correctly.
My other question is how you conduct a pilot for a multi-module course with many different tasks. If each module is 40 minutes long and there are 8 modules, that's more than five hours of testing per subject.

A final question is whether the paper prototype can take the place of storyboarding or if the design phase would include both.

Overall, I found the reading very informative, and I wish the authors had covered other topics, such as learner and context analysis, in the same depth.

Saturday, May 28, 2011

Frick and Boling - Effective Web Instruction (Chapters 1 & 2)

In this blog post, I summarize and critique the first two chapters of Effective Web Instruction: Handbook for an Inquiry-Based Process.


Summary:
Frick and Boling (2002) present an inquiry-based, iterative instructional design and development process in order to avoid common instructional pitfalls such as:
  • No user input
  • Little or no testing
  • “No record of decision-making”
  • No justification for design decisions (p.2-3)
In their process, objectives are created upfront and the assessments are built before work on the content begins. The process includes iterative reviews first of a paper prototype, then a computer prototype, and finally the site itself. The results of each iteration are analyzed and the site is improved based on this analysis. (p.4)
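
The way I read it, this amounts to a loop over increasingly faithful prototypes. Here is a schematic sketch: the paper-to-computer-to-site sequence comes from the reading, but the function names and loop structure are placeholders of my own.

```python
def run_pilot_session(stage):
    """Placeholder: observe subjects using the materials; return problems found."""
    return []  # pretend this round came back clean

def revise_design(stage, findings):
    """Placeholder: restate each finding as a design change and apply it."""

# Iterate through the stages described in the reading, revising each
# prototype until a pilot session comes back clean.
for stage in ["paper prototype", "computer prototype", "live site"]:
    findings = run_pilot_session(stage)
    while findings:
        revise_design(stage, findings)
        findings = run_pilot_session(stage)
```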

The instructional goals are developed with all stakeholders, recognizing that some perspectives are more valuable than others. The reading recommends thinking about how you’ll assess the instructional goals while you’re developing the instruction, which is in line with Mager’s guidelines for writing instructional objectives. (p.12)

The learner analysis section discusses the importance of knowing your learners and contends that the best way to do this is to try teaching the content to them at least once. The final section, on context analysis, advises against pursuing a Web solution without good reason. You should ask yourself: “What can we do with information technology that could not be done without it to help students learn?”

Critique:
I didn’t find the learner analysis section particularly helpful or practical. Yes, it’s great to teach the subject matter at least once in order to better understand learner needs, but I’ve never had this opportunity as an e-learning developer. If you can’t teach the subject matter beforehand, how do you uncover learner needs?

I found the paper prototype recommendation interesting and am looking forward to that section of the reading so I can better understand the approach. My clients tend to request computer prototypes, so I’m not sure how a paper prototype would actually be implemented.

Mager's Tips on Instructional Objectives

This blog post summarizes and critiques "Mager's Tips on Instructional Objectives."

Summary:
The reading provides a good summary of the key points from Mager’s book, Preparing Instructional Objectives.

The three main reasons for stating objectives:
  1. They lay out a road map for you to follow when creating instruction.
  2. If you don’t state the objective upfront, you won’t know whether it’s being met.
  3. They provide an overview for the learner, which allows students to create a personal strategy for accomplishing the instructional goals (p.1).
Useful objectives are those that clearly define the audience, behavior, condition, and degree (p.1). Behaviors can be overt (observed directly) or covert (not observed directly). Covert behaviors require an indicator, so the performance can be demonstrated (p.2).
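
Mager's four parts map neatly onto a simple structure, which helped me internalize them. The sketch below is mine, not the reading's, and the sample objective is invented.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Objective:
    """An objective in Mager's audience/behavior/condition/degree form."""
    audience: str
    behavior: str                    # overt (directly observable) or covert
    condition: str
    degree: str
    indicator: Optional[str] = None  # required when the behavior is covert

# Invented example: a covert behavior ("identify") paired with an overt
# indicator ("circle"), as Mager recommends.
sample = Objective(
    audience="the learner",
    behavior="identify the misplaced modifier",
    condition="given a list of ten sentences",
    degree="in at least eight of the ten",
    indicator="circle the modifier on the printed list",
)
print(sample.behavior, "->", sample.indicator)
```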

The reading concludes with some common pitfalls in writing objectives, including false performance, false givens, teaching points, gibberish, instructor performance, and false criteria.

Critique:
While Mager’s book is very informative and helpful for people who have little experience with creating objectives, it is also long and spends a great deal of time explaining specific points. This reading summarizes Mager’s key points in eight pages. It’s an ideal reference for people who have already read the book but need a refresher or instructional designers who already have some familiarity with instructional objectives. 

The reading does a good job of reminding you to always state the main intent, not just an observable behavior. Mager recommends stating the main intent and then an observable behavior in parentheses if the objective is covert. I think this is a helpful practice for an ID who is planning their courses, but it hasn’t been my experience as a student to see objectives stated this way. What do others think?

I appreciated the pitfalls section, but I would have liked to see revised statements. Examples are provided of what not to do, but it would have been helpful to see a corrected version of the same statement, especially for the teaching points section, which I found confusing.
 

Kim and Frick - Changes in Student Motivation During Online Learning

In this blog post, I summarize and critique Kim and Frick's (2011) "Changes in Student Motivation during Online Learning." 

Summary:
Taking into account the high attrition rate in online courses and the fact that attrition is often caused by a lack of motivation, Kim and Frick (2011) investigate learner motivation in online learning. The first section of this reading is a literature review and the second section describes the actual study.

The literature review discusses internal, external, and personal factors that influence motivation in Web-based instruction. One internal factor is that instruction is more motivating if it applies the ARCS model (attention, relevance, confidence, satisfaction) or Merrill’s first principles. On the other hand, “cognitive overload” can decrease motivation to learn (p.3). External factors, such as technical difficulties with the learning environment or lack of support from an employer, also influence motivation. Finally, personal factors, such as a preference for a particular learning style, can impact motivation.

The rest of the reading describes the current study in great detail, including the participants, research instrument, and data collection and analysis methods. Table 2 on page 18 outlines eight instructional design principles based on the findings:
  1. Provide learners with content that is relevant and useful to them.
  2. Incorporate multimedia presentations that stimulate learner interest.
  3. Include learning activities that simulate real-world situations.
  4. Provide content at a difficulty level which is in a learner's zone of proximal development.
  5. Provide learners with hands-on activities that engage them in learning.
  6. Provide learners with feedback on their performance.
  7. Design the website so that it is easy for learners to navigate.
  8. If possible, incorporate some social interaction in the learning process (e.g., with an instructor, technical support staff, or an animated pedagogical agent). (p.18). 

Critique:
I appreciated Table 2, which includes practical guidelines for increasing learner motivation based on the study results. These are things I can actually apply to my Web-based courses.
In addition, I found the notion of disruptive innovations very interesting. I think we’re at an exciting point for Web-based instruction, where quality is improving and it’s being distributed more widely. Still, the following statistic blew me away: "50% of high school courses will be offered online by 2019" (p.2). That’s only eight years from now, and 50% seems high to me. What do others think?

Kim, K.-J., & Frick, T. W. (2011). Changes in student motivation during online learning. Journal of Educational Computing Research, 44(1), 1-24.

Sunday, May 22, 2011

Merrill's 5 Star Instructional Design Rating

Summary:
Merrill’s 5 Star Instructional Design Rating presents a simple method for evaluating instructional products based on five questions:
  1. Is the courseware presented in the context of real world problems?
  2. Does the courseware attempt to activate relevant prior knowledge or experience?
  3. Does the courseware demonstrate (show examples of) what is to be learned rather than merely tell information about what is to be learned?
  4. Do learners have an opportunity to practice and apply their newly acquired knowledge or skill?
  5. Does the courseware provide techniques that encourage learners to integrate (transfer) the new knowledge or skill into their everyday life? (Merrill, 2007)
Each of these questions includes sub-questions that can be asked in order to make the correct assessment. 
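
Since I apply the rating to two courses below, here is the toy scorecard I used to keep track. The no-star/silver/gold scale matches my ratings below; the numeric mapping is my own device, not Merrill's.

```python
# Merrill's five principles, with ratings of None (no star), "silver", or
# "gold". Mapping stars to numbers is my own shorthand, not Merrill's.
principles = ["problem", "activation", "demonstration", "application", "integration"]
star_value = {None: 0, "silver": 1, "gold": 2}

def summarize(ratings):
    """Tally one course's ratings into a rough score out of 10."""
    return sum(star_value[ratings[p]] for p in principles)

# The first course I rate below: three silvers, one gold, one no-star.
course_1 = {"problem": "silver", "activation": None, "demonstration": "gold",
            "application": "silver", "integration": "silver"}
print(summarize(course_1))  # 5 out of a possible 10
```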

Critique:
I like the simplicity of Merrill’s 5 Star Instructional Design Rating and that it’s presented in a manner that should be easy to use when I evaluate the two e-learning courses. Merrill could have created this rating system using a series of checklists with statements such as “All demonstrations (examples) are consistent with the content being taught.” Instead, he poses questions, which caused me to pause and reflect for a moment. I think I would have been more likely to skim over statements.
My main critique of this reading is that Merrill uses unfamiliar, and at times vague, terms at the beginning. For example, he doesn’t explain what he means by “kinds-of,” “how-to,” and “what-happens.” It’s assumed that the reader knows what he means, but this wasn’t the case for me. In addition, he recommends using his rating system for tutorial or experiential courseware but never explains how he would define these types of courseware; he only explains what they aren’t: receptive or exploratory courseware.

Ratings for Instructional Products:
1. E-learning course on how to give core messages
  •  I gave this course a silver star for presenting content in the context of real world problems. It addresses the first two sub-points but involves a single problem, not a progression of problems.
  • I gave this course no stars for activation of prior knowledge. There is no pre-test and the learner is never asked to recall prior knowledge.
  • I gave the course a gold star for demonstration of concepts to be learned. It does this very well and provides multiple examples and non-examples. It uses short videos for these examples, which is the right choice of media for this content.
  •  I gave the course a silver star for application because the practice activity is realistic, effective, and provides helpful feedback; however, the learner cannot access help if necessary.
  • I gave the course a silver star for integration. The main objective is to deliver core messages related to abstinence and safe sex in a clear and unbiased manner. There is a question and answer section that provides integration guidance; however, since this is an e-learning course, there is no realistic way for the student to demonstrate the new skill because that would require public speaking.
Final score = three silver stars and one gold star. If you have time, I definitely recommend checking out the course. It’s a good example of e-learning and it takes less than 10 minutes to complete. I would love to hear if you agree with my rating.


2. E-learning course on how to give core messages
  • I gave this course a gold star for presenting content in the context of real world problems. It addresses all three sub-points and is especially effective at presenting the problem in a series of steps.
  • As with the first course I reviewed, I gave this course no stars for activation of prior knowledge. There is no pre-test and the learner is never asked to recall prior knowledge.
  • I gave the course a silver star for demonstration of concepts to be learned. I didn’t think the media used was always relevant to the content and it didn’t always enhance the training.
  • I gave the course a gold star for application because there are many practice activities in the course that allow learners to reflect on and apply what they’ve learned. Good feedback is always provided.
  • I gave the course a gold star for integration. The final assignment is a clever way to get the learner to reflect on what they’ve learned and take the first step of transferring their new knowledge to the real world in a realistic situation.
Final score = three gold stars and one silver star. I’m curious to see what ratings others assigned for this course.  

Merrill, M. D. (2007). 5 Star Instructional Design Rating. Retrieved 13 May 2011 from http://id2.usu.edu/5Star/FiveStarRating.PDF

Thursday, May 12, 2011

First Principles of Instruction


In “Prescriptive Principles for Instructional Design,” Merrill reviews instructional design models and theories by experts in the field and identifies five principles that many of the models and theories share. Merrill’s (2008) five principles for promoting learning, or what he calls the “first principles of instruction,” are:

  • Task-centered approach – Instruction should be sequenced in small tasks.
  • Activation principle – Instruction should activate the learner’s previous knowledge.
  • Demonstration principle – Instruction should include relevant demonstrations of new skills.
  • Application principle – Instruction should allow learners to apply what they have learned and should provide feedback.
  • Integration principle – Instruction should be relevant to real life, and learners should have an opportunity to practice what they have learned in the real world (p.174).

According to Merrill (2008), “these design principles apply regardless of the instructional program or practices prescribed by a given theory or model. If this premise is true, research will demonstrate that when a given instructional program or practice violates or fails to implement one or more of these underlying principles, there will be a decrement in learning and performance” (p.175). Merrill (2008) cites an example from Shell EP where over 65 courses were redesigned based on the first principles of instruction and this led to deeper learning, greater business relevance of the subject matter, and an increase in job performance (p.177).

I enjoyed Merrill’s analysis of other instructional design principles and how they overlap with the first principles of instruction. I don’t quite understand why Merrill included the section on Designing Task-Centered Instruction in this chapter. I felt it was out of place and incomplete. I thought Merrill would explain each principle in more detail, but he stopped after Task-Centered Instruction. In addition, I found the writing in this section confusing and Figures 14.3 and 14.4 did not clarify the concepts for me although I am a visual learner.


Merrill, M. D., Barclay, M., & van Schaak, A. (2008). Prescriptive Principles for Instructional Design. In M. J. Spector et al. (Eds.), Handbook of Research on Educational Communications and Technology (3rd ed., pp. 173-184). New York: Routledge, Taylor & Francis Group.