Participatory Assessment

I have been toying with an idea in my head for some time… it's finally spilling out my ears: how to make assessment more meaningful for students and parents, and more manageable for teachers.

So much of what separates good quality assessment from bad hinges on the quality of the feedback that (1) teachers provide, (2) students receive AND understand, (3) students act on, and (4) teachers reflect on. If we could somehow improve the timing and nature of our feedback, we would improve the quality and accuracy of our evaluation and, in theory, improve student learning.

It’s the last part of this complex assessment equation that seems to be the hardest to track though…

While we may already define the learning goal, co-create the criteria for success, find time to assess student work, and write good feedback for learners, the true test is whether students can internalize the feedback and apply it to their understanding, thinking, or application. We often scrutinize the success criteria, but do we take a close look at the quality of the feedback? Finding the time for teachers to reflect holistically on how the quality of the feedback impacted students' subsequent learning and teachers' subsequent teaching is so challenging. Do we provide different opportunities to 'try and try again'?

One way to overcome this challenge is not to do it alone. Imagine more of a participatory approach to assessment (and, not to worry, I will pause here to emphasize that the teacher is the one who ultimately applies his/her professional judgment when it comes to evaluation). First off, I'd love to advocate for teacher collaboration because when I work in grade-level teams it forces me to reflect on and improve my practice; I would like to think that my colleagues find peer-to-peer collaboration helpful as well. So now that we have set our long-range plans together, imagine a model where students (and even parents???) feel they play an integral part in meaningful assessment for learning.

As noted above, some educators have figured out how to create a solid AFL program by involving student voice in co-constructing success criteria and putting rubrics and other assessment tools into parent-friendly language. But I am talking about more than this…

Could students play a more direct role in planning the when? and how? the expectations will be taught and learned after they receive the feedback (essentially, helping with the sticky second part of the equation that I referenced above, the middle of the AFL process, before evaluation)? The Ministry expectations (the what?) are pre-determined, but the how? and how well? are left up to the teacher (and maybe the students) to determine, no?

Might a more participatory approach to classroom assessment, where together we regularly revisit and redirect how well we are doing as individual students, groups, educators, and even leaders, give us the time we need to provide higher quality 'just in time' feedback? And wouldn't this have the potential to increase our efficacy as teachers and learners?

How do you engage multiple participants in your assessment practice? Is there a role for technology somewhere in the mix? Does it improve the quality of your feedback? Student learning?


Expanding our view of student achievement

I engaged in a riveting conversation with a colleague at my new workplace, Pearson Canada-School Division (an awesome place to work, btw!), regarding the way we measure and define student achievement. Could it be that all of this focus on grades and achievement is actually interfering with student learning?

[Image: Punctuation marks made of puzzle pieces. Photo by Horia Varlan, Attribution 2.0 Generic (CC BY 2.0)]

This latest a-ha moment reminded me of a fantastic infographic from the CEA website (2011) that depicts student engagement from grades 5-12. Take a look and compare the level of student engagement at the various grades with your school environment.

  • Do these same ratios ring true where you teach or lead? Why or why not?
  • What sets your setting apart?

The “What did you do at school today?” study provides great insight into the difference between how teachers define 'engagement' and 'student success' and how students define these qualities.

How can we expand our definition of student achievement to include more than just grades, aka intellectual engagement? How can these ‘other’ things be measured/monitored? Should they?


Student readiness

Some may know that the computer-based vs. pencil-based testing debate has become near and dear to my heart; so dear that it is actually the main thrust and focus of my doctoral study, for which I will begin collecting data next week 🙂 Insert happy dance here!

[Image: Out the Classroom Window. Photo by Elfboy, CC]

And so, as I transition out of the classroom into my exciting new role as Pearson Canada’s Digital Learning Research & Communications Manager, I have been learning even more about how districts are tackling this debate.

In my reading, I came across an online social networking community created and maintained by the larger Pearson in the US called FWD. [SIDE NOTE: I encourage you to check out this online community, as it hosts a lot of topics, like next generation learning, educator and leader effectiveness, and instructional improvement, that I know will resonate with my fellow Canadian educators and thought leaders. Besides which, we have a lot to add to the conversation from our perspective.] In one post from last year entitled Lessons Learned, @bryanbleil shares first-hand tips and tricks from the field to make the transition to online testing more manageable from an implementation point of view. Some practical suggestions include ensuring the district has the necessary bandwidth and testing the testing instrument ahead of time. Prudent moves indeed, but one other BIG caveat I would offer is to FIRST ensure students have the pre-requisite skills to complete online assessments. Before assessing a district’s readiness for administering online tests, I might suggest educators need to ask themselves, “Are my students ready?”

At the very least, shouldn’t both conversations happen simultaneously?

Student readiness: a REAL factor
In their joint feasibility study and report with the Texas Education Agency regarding Texas’ readiness for state-wide implementation of online testing, Pearson researchers noted:

A majority of districts discussed “digital gaps,” such as the lack of equitable access across the student population to computers and the technology skills necessary for online testing. The digital gap was perceived as being primarily attributable to the student body’s socio-economic status; districts reported a belief that students from lower socio-economic families with more limited access to computers outside of school might be at a disadvantage with respect to online testing when compared with other students. (2008, p. 5)

I am encouraged by the recommendation that, before online testing occurs, staff AND students receive the training they need to set the testing experience up for success. How much technology training students need, and how they access it, is another blog post for another day…


Texas Education Agency. (2008). An evaluation of districts’ readiness for online testing (Document No. GE09 212 01). Austin, TX: Texas Education Agency.