2016/09/22

EQAO: Meeting the Needs of an "Average" Student

http://meandmythrees.blogspot.ca/

In my previous post I expressed concern over the nature of EQAO testing and its use of closed-ended questions. At the time I felt that this design afforded no opportunity for differentiation; however, after attending a professional development workshop on EQAO assessment, I now have a better understanding of the role EQAO serves in our schools.

I should begin by mentioning that we had a guest speaker from the Dairy Farmers of Canada during a recent health and physical education class. The presentation concerned teaching healthy eating habits to grades 7 and 8 through the use of Canada's Food Guide. While the DFC has worked closely with the Ministry of Education in developing such resources, concerns were voiced over potential biases and the pushing of an agenda. As someone who does not eat meat or dairy products, I can see how one might take issue with the presentation; however, it occurred to me as I listened that the presentation was not for me. The presentation, like Canada's Food Guide itself, is intended to provide information on healthy eating to the average Canadian: that is, the average meat- and dairy-consuming Canadian who lives in close proximity to a big chain grocery store. The Guide was developed from the eating habits of Canadians, and these habits were in turn used to outline ways of meeting our physiological needs from the foods we already eat.

(Note: web- and mobile-based versions of the Food Guide are available that can be tailored to your eating habits, as well as versions for FNMI communities.)

http://www.hc-sc.gc.ca/fn-an/food-guide-aliment/index-eng.php

I found many similarities between the concerns voiced over the DFC presentation and my own feelings towards EQAO testing. How can a one-size-fits-all model suit all of our eating habits, or all of our learners? The idea flies in the face of diversity and differentiation. But while I was willing to look past my differences in the case of the Food Guide, I found myself more resistant when it came to EQAO. Perhaps it is my background in mathematics, or my experience of having an IEP, that has made me skeptical of standardized testing, but I never really saw a place for EQAO in education.

The role of EQAO, however, is much like that of the Food Guide. Like the Food Guide, EQAO testing is based on a set of underlying standards (nutrition and curriculum, respectively) and serves as a way for us to examine and direct instruction for the average learner. Students with accommodations or modifications outlined in an IEP will have those needs met during their sittings of EQAO, and resources such as manipulatives are made available to all students. Some degree of differentiation therefore exists in the process of writing EQAO; the product, however, remains the same as a result of the aforementioned closed-ended questions answered with pencil and paper. Fortunately, work is being done that will hopefully allow technology to be used to deliver EQAO to students, affording new opportunities to differentiate not only the process but the product as well.

While I remain somewhat skeptical of testing such as EQAO, I have come to see its intended role in our education system. I do, however, take issue with the practice of "teaching to the test." Because EQAO is based on the Ontario curriculum, teaching with the test in mind will direct students towards meeting the required overall expectations. The problem is that in preparing for the test it is not uncommon to have students solve EQAO-like questions (or questions from previous EQAO tests), and this only serves to propagate a very traditional approach to mathematics in which closed-ended questions reign supreme. For us to use EQAO data reliably, the test will need to be updated to better reflect the teaching and learning practices of the 21st century. Gone are the days of pencil-and-paper rote learning, so why are we still relying on data produced this way?



An interesting component of the EQAO workshop was an opportunity to review the assessment tools and try assessing a handful of student responses (including the one pictured above). Open-response questions are scored on a scale of 10 to 40, which roughly corresponds to the levels 1 to 4 we would see on a report card. In addition, the codes B and I are used for blank and illegible responses, respectively. I was happy to see these latter components included in the assessment guidelines, as I believe the distinction between a level 1 and having made no attempt at a solution is an important one.

The assessment conversation became heated when we discussed the solution pictured above. Workshop participants were split between scores of 30 and 40, with most favouring a 30. The argument for a 30 was that the student had not shown a "complete" solution process, having never written out 24 - 20 = 4 even though the question required them to show their work. As with any open-response assessment, some degree of uncertainty will arise; the solution, however, was assessed as a 40, and I am inclined to agree. Had the question dealt with decimals or fractions, one could argue that the final subtraction step should be shown. But the student clearly used a counting strategy, as evidenced by the marks on the diagrams, so the difference is easily noted without this arithmetic. The solution is complete.

So now the real question, one I have been wondering about ever since watching a video on how the tests are created. EQAO problems, or "items," go through rigorous screening, and those that "perform nicely" (as it was put during the workshop) are eventually placed on the test. But when only 50% of students are meeting the standard, what does it mean for an item to perform nicely? Are we pulling questions that students struggle with and replacing them with ones they perform better on? If items are screened so thoroughly, why do some fare better than others? Are we introducing bias?

Related: How the "average" person does not exist.