Summary: The authors describe the development of a writing assessment program at Eastern Michigan University. A key point of the piece is that assessment programs should be grounded in a particular context (they use the term "place"). Additionally, they used the assessment program to "make visible the work of first-year writing students in various ways across campus," and to make the results useful--to instructors, students, and other constituencies on campus, including administration. One of the most compelling elements of the article is how the authors engaged the university community in a cross-disciplinary discussion about writing and what makes it good; they then used several different mapping methods to produce an assessment process and rubric that reflected the values of that particular university. The resulting process yielded qualitative and quantitative data on students' portfolio writing that enabled them to revise (and explain) the writing program and its courses.
Response: I was taken with the authors' process of developing the assessment program and its emphasis on place. It seems like an elegant idea--find out what people in a specific context value and use that to guide the assessment. I also really liked how they drew together a diverse group: I felt that would encourage the college as a whole to increase its investment (both emotional and, one would hope, financial) in the writing program and to better understand what was happening in it. And I liked their emphasis on continual revision and the generation of meaningful assessment data, rather than just jumping through assessment hoops for compliance's sake.

As someone who has co-developed and been involved in a writing assessment program for over a decade, though, I think their form may take too long to fill out and analyze. I have been caught myself in the bind between wanting lots of information and trying to make the process easy and quick enough that faculty do not get burned out. (The longest form I designed, complete with Likert scales and fill-in-the-reason sections similar to the authors', was unmanageable. Instructors were spending as much as eight to ten hours assessing departmental portfolios on top of their other work. They took it with excellent grace, but in retrospect, I can't believe I wasn't burned in effigy.)

Assessment is always a balancing act--do you statistically sample and use a more comprehensive form, thereby generating good data for the department but very little of use to individual students or teachers, or do you assess each student, which provides better data for instructors and students but necessitates a less time-consuming process for each portfolio, thereby generating less useful data for the department? I have not solved this problem yet.
Again, though, the process the authors described for developing their instrument was excellent.
Uses: The design of an assessment program.