Saturday, June 11, 2011

Frick and Boling - Effective Web Instruction (Chapters 3 & 4)

Summary:

This reading provides instruction for creating a paper-based prototype, conducting a pilot session, and analyzing the results in order to identify necessary changes.

According to Frick and Boling, the main goal of prototype testing is to address three key issues with any instructional site:
  1. Is the instruction effective?
  2. Are students satisfied with the instruction?
  3. Is the product usable by students? (p. 70)
Designers want to make these determinations without spending unnecessary time and money. This is one reason for testing a paper-based prototype instead of a computer-based prototype. In addition, subjects are more forthcoming with feedback when what they are testing is clearly in draft form.

Critique:

The sections on prototype testing are excellent. I thoroughly enjoyed the reading and learned a great deal. At one point, the authors suggest that the best way to learn how to test prototypes is to watch someone who knows what they're doing. I couldn't agree more because this is how I learned how to facilitate focus groups. I observed a more experienced consultant conducting focus groups until I felt comfortable taking the lead. Unfortunately, I didn't have this opportunity with prototype testing. Now, I wish I had read these sections before my first prototype test. I can see where I went wrong in many areas and how important some of Frick and Boling's points are. Key points that I will take away from the reading include:

  • "Spend the least amount of time, resources and effort in order to learn as much as we can about how to improve the instructional product we hope to eventually develop." (p. 19) - This is always important to keep in mind, especially since many clients have limited resources for formative evaluations. 
  • "Observe subjects using the materials and ask them to think aloud." (p. 20) - I've certainly failed to do this in the past. Most of my clients insist on group pilot sessions, which makes it nearly impossible to observe all participants. Also, it forces us to draw our own conclusions because there are too many subjects for everyone to think aloud. 
  • "Your learners will be more frank about their problems if they feel that the instruction is still being worked on--they usually don't want to bring up issues that might disappoint you." (p. 27) - This is very true and the best argument for testing a paper prototype initially. 
  • "Don't bog down the process by testing a prototype with technical problems. Users will stumble on those problems and you won't get the best data that you could." (p. 38) - This is another mistake I've made in the past. People will jump to point out the obvious problems and many won't get past these issues. 
  • "The cause of observed problems is always in the design, so diagnoses (or findings) are always stated in terms of design and not the subjects." (p. 72) - This is a great reminder. It's too easy and too tempting to blame subjects (e.g., they weren't computer savvy enough for e-learning) when test results aren't what you anticipated.
The reading also left me with a few questions. First, are the authors advising that you conduct one-on-one sessions, with one subject per observer? It's been my experience that when clients ask for formative evaluation like this (and many don't), they set up group sessions, where it's hard to observe and listen to individual subjects. The suggested approach makes sense, but I just want to confirm that I understood correctly.
My other question is how to conduct a pilot for a multi-module course with many different tasks. If each module is 40 minutes long and there are 8 modules, that's a lot of testing.

A final question is whether the paper prototype can take the place of storyboarding or if the design phase would include both.

Overall, I found the reading very informative and wish that the authors had covered other topics, such as learner and context analysis, in similar depth.

5 comments:

  1. I accidentally highlighted the passage above and couldn't figure out how to remove it. Sorry. I'm really not trying to emphasize that section.

  2. Thanks, Nicole, for sharing your insights from your own experiences with prototyping. It seems that it's rare to find anyone who does it at all! I wish I had a client who asked for it, even if they don't know how to properly gather data.

    Although I can't answer your questions definitively, my guess is that one-on-one is exactly what they mean. As you have observed, unless you can get direct feedback from one subject's thinking aloud, you don't really have accurate data.

    Re pilot testing multiple modules, I would think that the common elements like the navigation, interface, path through the learning and format of assessments--the elements that are pretty much standard throughout the whole experience--are what you'd be testing. My approach would be to standardize those elements anyway since you want the learner to get comfortable with those elements and focus on the content and the learning, not the technology. So you'd make the pilot a standardized version of the common elements throughout the course. Prototyping the interface is more the point. Content is a different animal and needs to be formatively tested in other ways--if you're lucky enough to get that chance.

  3. Great summary and critique, Nicole. I also learn by observing, so this felt a bit like I had done so and I too wish I'd read this even one month prior to a recent data collection I did using video. Hindsight - razor sharp.

    Regarding two of your critique points:

- learners' honesty - I've found that learners' ethnic and professional backgrounds impact the extent to which they can honestly critique someone else's work. Arab learners, and others from Eastern cultures, find it very difficult to be openly critical of another - even when criticism is explicitly solicited and the purpose is explained. We struggle with that issue when we have learners evaluate the 'learning environment' (including teachers). Inevitably we get 'strongly agree' and only positive comments on questionnaires. Professionally, I find teachers and those with education backgrounds are excellent at providing critical feedback - not surprisingly, I guess.

- Being careful not to blame the learner for any stumbles with the materials is something I constantly remind myself about. After tough teaching days, I tend to blame my students, but quite often, on reflection, the root of the problem is instruction whose design (or delivery) has not engaged, challenged, or satisfied the learners. That was a good reminder for me as well.

    Thanks for the insights - you've had some interesting experiences!

  4. Karen - Thanks for the interesting insight on learners' ethnic and professional backgrounds. I've run into this as well.

  5. MediaSage - Thanks for answering my questions and clarifying that prototyping the interface is the point right now.
