CILP [Collaborative Inquiry and Learning of PADI Online Diving Course]

As I make my way through the 15+ hours of online learning prior to becoming certified for closed and open water diving with my family (holy time commitment Batman! Sounded like a good idea at the time when we booked our trip…), we find ourselves having a curious out-of-body experience of sorts. As we read through and listen to the content, watch the ‘at point of use’ videos, and branch off to outside links should our interests take us there, we have collaborative a-ha moments where we realize maybe THIS is what personalized learning is supposed to be???

The PADI online course has a lot going for it, but my favourite feature is the short checks for understanding that I get to do after short chunks of new learning. Learning goals are stated right up front as I move along to the next set of ideas, and there are guiding questions in the top right-hand corner of the screen where the content can be found. I have been called ‘old school’ in my approach to learning, because I still find it necessary to take notes with pencil and paper (my kids don’t do this, and interestingly enough, neither does my husband), but this system works for me.

After I complete each section, the program provides me with a summary of BIG ideas that I then compare to my own notes. The little quizzes help prepare me for the larger end-of-unit assessment, and when I score 100% I feel great. I really like the way the automated-scoring final assessment works too. With each answer, I get more information about why my answer is correct. If I answer incorrectly, the feedback box tells me why my answer is wrong, provides me with the correct response AND directs me to the part of the course I can review to consolidate my learning. Imagine if students could be provided with such timely feedback? Imagine what I could do as an educator, seeing how many times certain students needed to revisit certain concepts? This would be so helpful for guided instruction…

I’m only on module 2 of 5, but I can’t get over how each of my 5 family members is going through the course differently. The boys click on videos whenever they are available, whereas Julia and I are taskmasters: we want to keep a steady, efficient pace, so we avoid the extras but take notes to stay accurate. Gender difference maybe, hmmm?

To date, because of individual extracurricular and work schedules, we have not worked on the online coursework simultaneously. I think it’d be interesting if we did, because we could experience the benefits of ‘blended’ learning. Given the number of computers in our house, this would require someone doing it on a mobile device. Wonder what that would be like? If we complete the modules concurrently, I wonder if we will turn to discuss ideas with one another? Will save those thoughts for the next CILP reflection…

Sure has been an interesting snapshot into how we each approach the task differently. PADI has got it right: technology is allowing us to customize the learning experience.

Yours in SCUBA newbie-ness,


Participatory Assessment

I have been toying with an idea in my head for some time… it’s finally spilling out of my ears. It’s about how to make assessment more meaningful for students and parents, and more manageable for teachers.

So much of what separates good-quality assessment from bad hinges on the quality of the feedback that (1) teachers provide, (2) students receive AND understand, (3) students act on, and (4) teachers reflect on. If we could somehow improve the timing and nature of our feedback, we would improve the quality and accuracy of our evaluation and, in theory, improve student learning.

It’s the last part of this complex assessment equation that seems to be the hardest to track though…

While we may already define the learning goal, co-create the criteria for success, find time to assess student work, and write good feedback for learners, the true test is whether students can internalize the feedback and apply it to their understanding, thinking, or application. We often scrutinize the success criteria, but do we take a close look at the quality of the feedback? Finding the time for teachers to reflect holistically on how the quality of the feedback impacted students’ subsequent learning and teachers’ subsequent teaching is so challenging. Do we provide different opportunities to ‘try and try again’?

One way to overcome this challenge is not to do it alone. Imagine more of a participatory approach to assessment (and, not to worry, I will pause here to emphasize that the teacher is the one who ultimately applies his or her professional judgment when it comes to evaluation). First off, I’d love to advocate for teacher collaboration, because when I work in grade-level teams it forces me to reflect on and improve my practice; I would like to think that my colleagues find peer-to-peer collaboration helpful as well. So now that we have set our long-range plans together, imagine a model where students (and even parents???) feel they play an integral part in meaningful assessment for learning.

As noted above, some educators have figured out how to create a solid AFL program by involving student voice in co-constructing success criteria and putting rubrics and other assessment tools into parent-friendly language. But I am talking about more than this…

Could students play more of a direct role in planning when and how the expectations will be taught and learned after they receive the feedback (essentially, helping with the sticky second part of the equation that I referenced above: the middle of the AFL process, before evaluation)? The Ministry expectations (the what) are pre-determined, but the how and how well are left up to the teacher (and maybe the students) to determine, no?

Might more of a participatory approach to classroom assessment, where together we regularly revisit and redirect/inform how well we are doing as individual students, groups, educators, and even leaders, give us the time we need to provide higher-quality ‘just in time’ feedback? And wouldn’t this have the potential to increase our efficacy as teachers and learners?

How do you engage multiple participants in your assessment practice? Is there a role for technology somewhere in the mix? Does it improve the quality of your feedback? Student learning?