On 7 April, the Department for Education (DfE) published the results of a study comparing the three types of reception baseline assessment used by schools in the 2015 to 2016 academic year. In a statement accompanying the report, the DfE said: "The study concludes that the three different assessments are not sufficiently comparable to create a fair starting point from which to measure pupils' progress." No one should have been remotely surprised.
In its efforts to create market-driven competition in early-years assessment, the DfE gave schools the choice of one of three providers: Early Excellence (EE), Durham University's Centre for Evaluation and Monitoring (CEM) and the National Foundation for Educational Research (NFER), all of whom secured their contracts partly because they already held at least 10% of the early-years assessment market. GL Assessment, which lost out at the bidding stage for failing to meet the 10% threshold, may have wasted the £200,000 it rashly spent on marketing before the tender results were announced, but at least it dodged the bullet of hiring the small army of programmers and consultants required to make the assessments actually happen. I'm told that all three successful providers were given just one hour's notice of the decision before the press announcement was made.

Although the assessments only narrowly fell short of the DfE's required level of comparability, the result came as no surprise to practitioners familiar with the three main providers' approaches. Quite apart from differences in the actual content and weighting of the subjects in the tests, EE planned, as expected, to assess over several weeks, whilst NFER and CEM aimed for more of an initial snapshot. In addition, under the rules of the tender the organisations were not allowed to communicate or collaborate in any way. Not only was the DfE aware of and concerned by all of this - hence the commissioning of the comparability study - but the whole situation was entirely unnecessary, created by the Department's own ideological market approach.

Which is a shame, because whilst there has been vocal opposition to the tests, particularly from the NUT, the rationale for the assessments is sound: children come into the educational setting having received vastly different experiences and inputs, and these need to be carefully assessed (as they already are by good institutions and practitioners). The policy is also backed by Russell Hobby, general secretary of the NAHT (National Association of Head Teachers), who 'firmly believes that the performance of schools should be measured in terms of progress... and in order to measure progress you need a baseline'.

What the vast majority of the public probably aren't aware of is that all the assessments are based on play and interaction with the practitioner, so all that would have changed from current practice is that a baseline measure would have been applied to a more standardised set of observations to monitor progress - leading to an accountability that should benefit children, and that the children themselves certainly wouldn't have noticed! Of course, the baseline could be misused by teachers to label children in the same way that National Curriculum levels were - but that is not its purpose, nor should it be portrayed that way.

What is clear is that, after a very avoidable debacle from which neither the Department nor its critics emerged with much credit, there is now an opportunity to establish common ground with an early assessment approach and methodology that underpins great teaching and learning - as well as facilitating essential accountability. Our children deserve nothing less.