Monday, June 20, 2011

Frick and Boling - Effective Web Instruction (Chapters 5 & 6)

Summary:
The final reading by Frick and Boling is divided into three sections: building a Web prototype, assessing and maintaining the site, and conducting the analysis. The first section provides an overview of hosting servers and the programming languages you might use for your site. It also presents different approaches to building the site, along with precautions to keep in mind.

The second section explains in detail how to test your site for quality assurance. It provides a sample QA matrix, explains how to use this matrix, and discusses the QA process.
The final section explains how to conduct a thorough needs assessment and analyze the results. It presents advantages and disadvantages of one-on-one interviews and focus groups. It also offers many tips for reviewing the data collected and eventually drawing conclusions. 

Critique:
This last reading started off slow but provided some good information in the final two sections. I didn’t find the first section very useful because it either covered basic information, such as its overview of the Web, or information I wasn’t interested in, such as the descriptions of different programming languages. However, sections 2 and 3 were well worth reading. I liked the QA matrix presented in section 2 as well as the systematic approach the authors take to QA. I also liked how they explained prioritizing bugs and the distinctions they draw among priorities one through four. My favorite part of section 3 explained how to analyze and synthesize data. I find this to be the hardest part of conducting interviews and focus groups. Collecting data can be fun and interesting, but analyzing it is often challenging and overwhelming. The authors describe an approach that I believe is very effective, and I appreciated the practical tips provided throughout this section. For example, to identify trends in data, the authors recommend:

  • Really look at what’s there
  • Look for items that are similar
  • Look for items that may be linked
  • Look for items that are missing
  • Pay attention to the questions that come to your mind
  • Don’t be too quick to summarize the data
  • Be sure to summarize the data eventually (pp. 131-134)

Merrill's What Makes Effective, Efficient, Engaging Instruction?

Summary:
In this reading, Merrill starts off by identifying three recent approaches to instructional design and development: “problem-based learning, communities of learners, and distributed learning via the Internet” (p. 1). Merrill states that people often treat these approaches as distinct and separate when they should really be integrated in order to produce instruction that is effective, efficient, and engaging. Merrill asserts that his First Principles can be used to integrate the approaches and bring about e3 instruction.
While explaining his argument, Merrill briefly discusses two memory processes: associative memory and mental models. He explains that much of the instruction in the world relies on the associative process, which is not as effective as building mental models. Because they are problem-centered, Merrill’s First Principles rely more on mental models.

Critique:
I’ve enjoyed exploring Merrill’s thoughts in more depth this semester, and this reading is no exception. However, one of my pet peeves with Merrill is that his diagrams aren’t very intuitive to me, and he doesn’t provide as many examples as I would like. Also, the examples he does provide could be more concrete and visual. For example, in the second paragraph on page 4, he outlines how peer interaction might take place in the classroom. Rather than keeping the scenario in general terms, I would have loved a specific example that I could actually visualize.

Sunday, June 12, 2011

Snyder's Making a Paper Prototype

Summary:

The reading explains paper prototyping in great detail, including what it is, how to do it, and why it's useful. It begins by outlining the materials needed to create a prototype and then explains why you might choose to include a background (e.g., poster board) and how to do so. The majority of the reading discusses three closely related topics: representing interface widgets, representing users' choices, and simulating interactions.

Near the end, the reading moves away from software prototyping and provides recommendations for hardware prototyping.

Critique:

In a way, this reading complemented the week 4 reading by Frick and Boling. Whereas that reading focused more on the prototype testing process as a whole, this reading concentrated on the paper-based prototype itself. While I found some of the tips and techniques interesting, the Frick and Boling reading was more helpful. My biggest problem with this reading is that the prototype Snyder describes sounded very confusing, complicated, and cumbersome to me. It's a process I would have to observe in order to understand. I found myself thinking, wouldn't it be easier to create a low-fidelity computer mock-up at this point, rather than keeping track of all these note cards, transparencies, and invisible tape?

Saturday, June 11, 2011

Frick and Boling - Effective Web Instruction (Chapters 3 & 4)

Summary:

This reading provides instruction for creating a paper-based prototype, conducting a pilot session, and analyzing the results in order to identify necessary changes.

According to Frick and Boling, the main goal of prototype testing is to address three key issues with any instructional site:
  1. Is the instruction effective?
  2. Are students satisfied with the instruction?
  3. Is the product usable by students? (p. 70)
Designers want to make these determinations without spending unnecessary time and money. This is one reason for testing a paper-based prototype instead of a computer-based prototype. In addition, subjects are more forthcoming with feedback when what they are testing is clearly in draft form.

Critique:

The sections on prototype testing are excellent. I thoroughly enjoyed the reading and learned a great deal. At one point, the authors suggest that the best way to learn how to test prototypes is to watch someone who knows what they're doing. I couldn't agree more because this is how I learned how to facilitate focus groups. I observed a more experienced consultant conducting focus groups until I felt comfortable taking the lead. Unfortunately, I didn't have this opportunity with prototype testing. Now, I wish I had read these sections before my first prototype test. I can see where I went wrong in many areas and how important some of Frick and Boling's points are. Key points that I will take away from the reading include:

  • "Spend the least amount of time, resources and effort in order to learn as much as we can about how to improve the instructional product we hope to eventually develop." (p. 19) - This is always important to keep in mind, especially since many clients have limited resources for formative evaluations. 
  • "Observe subjects using the materials and ask them to think aloud." (p. 20) - I've certainly failed to do this in the past. Most of my clients insist on group pilot sessions, which makes it nearly impossible to observe all participants. Also, it forces us to draw our own conclusions because there are too many subjects for everyone to think aloud. 
  • "Your learners will be more frank about their problems if they feel that the instruction is still being worked on--they usually don't want to bring up issues that might disappoint you." (p. 27) - This is very true and the best argument for testing a paper prototype initially. 
  • "Don't bog down the process by testing a prototype with technical problems. Users will stumble on those problems and you won't get the best data that you could." (p. 38) - This is another mistake I've made in the past. People will jump to point out the obvious problems and many won't get past these issues. 
  • "The cause of observed problems is always in the design, so diagnoses (or findings) are always stated in terms of design and not the subjects." (p. 72) - This is a great reminder. It's too easy and too tempting to blame subjects (e.g., they weren't computer savvy enough for e-learning) when test results aren't what you anticipated.
The reading also left me with a few questions. First, are the authors advising that you conduct one-on-one sessions, that is, one subject per observer? It's been my experience that when clients ask for formative evaluation like this (and many don't), they set up group sessions, where it's hard to observe and listen to individual subjects. The approach the authors suggest makes sense, but I just want to confirm that I understood correctly.
My other question is how to conduct a pilot for a multi-module course with many different tasks. If each module is 40 minutes long and there are 8 modules, that's a lot of testing.

A final question is whether the paper prototype can take the place of storyboarding or if the design phase would include both.

Overall, I found the reading very informative and wish the authors had covered other topics, such as learner and context analysis, in similar depth.