Assessment at each Key Stage has until now been determined by National Curriculum level statements, interpreted in the context of the programmes of study for the particular Key Stage. In the early days of the National Curriculum it was envisaged that there would be a ten-level scale covering the 5 to 16 age range, and learners would progress through it on their way through school. Like a lot of ideas that seem sound in principle, in practice things turned out not to be quite so straightforward. Key Stage 4 quickly reverted to GCSE grades A*-G in place of the National Curriculum scale, and levels 9 and 10 effectively disappeared altogether. Key Stage 3 became levels 4 to 7 for pretty well everyone, and we ended up with arguments about levels at the Key Stage 2 to Key Stage 3 transition. Bizarrely, this ended with Key Stage 2 level 4 being declared different from Key Stage 3 level 4. I say bizarrely because to anyone outside education that is what it must seem. The snag is that if level 4 is the KS2 benchmark and level 5 the KS3 benchmark, why do most pupils make only one level of progress in those three years? If progression were linear, starting at level 0 and ending at level 7 after 11 years, we'd expect 7/11 of a level per year, or 21/11 (approximately 2 levels) across KS3. That, though, assumes both that progression is linear and that the scale itself is linear.
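The arithmetic behind that expectation can be sketched as follows (assuming, as the argument does purely for the sake of the comparison, a linear scale and steady progress):

```python
# Sketch of the linear-progression arithmetic questioned above.
# Assumes 7 levels gained over the 11 years from age 5 to age 16,
# and that progress is steady; both are assumptions, not facts.
levels_gained = 7        # level 0 at age 5 to level 7 at age 16
years_in_school = 11
ks3_years = 3

per_year = levels_gained / years_in_school   # 7/11 of a level per year
across_ks3 = per_year * ks3_years            # 21/11, roughly 2 levels

print(f"{per_year:.2f} levels per year, {across_ks3:.1f} levels across KS3")
```

On these assumptions pupils would gain nearly two levels across KS3, not the single level the benchmarks imply.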
So what is the basis of the attainment target level descriptors? Most were made up by subject specialists on the basis of what was thought reasonable to achieve at particular ages in each subject. The word benchmark is really misleading if taken to mean a carefully researched, evidence-based fixed point known with absolute precision. There was no real cross-subject coordination to ensure that expectations were the same across all subjects. (I tried this with the revisions to DT mapped into science but it never got very far.) They work out roughly right because of the history and experience of those devising them, not because of some sort of absolute measure of cognitive ability. There is a whole raft of assumptions. And why should 80% achieve the benchmark at the end of a key stage? Only because the Secretary of State says so. There is no real way of knowing whether this is an under- or over-estimate of what is reasonable to expect, except that it's a round number somewhere in the vicinity of what has happened in the past.
Does this mean levels are no use? No, but we need to understand their limitations. What grew from simple beginnings has become a complex morass attempting to do things to a precision that level statements are never going to provide, at least not without a crippling amount of bureaucracy and associated opportunity costs in other areas. The National Strategies show what goes wrong when people too far removed from the practicalities "have a good idea". Assessment foci and fine sub-levels are a nice idea, like a ten-level scale for the 5-16 age range, but in practice it is not so easy. Without very detailed cross-moderation, for which there were never going to be sufficient resources, the chances of working reliably to that precision across the nation are pretty well zero. Certainly some individuals or even groups will be able to do something with it, but national strategies need to be implemented by the great majority, not just a group of enthusiasts. If we can't operate to that precision, why waste a lot of effort on it? Especially since that effort could be put to better use.
So let’s clear up the word levels first. I’ll refer to National Curriculum levels as NCLs, because there are other levels that can be used and it gets confusing when the word level is used both generically and to refer to a particular type of level. The ICT NCLs are not usable with the new Computing Programmes of Study because the content on which they are based is different. You can of course still teach ICT and use the levels, or a subset of them, to set targets and monitor progress. Here is a link to the ICT AT level statements translated into assessment criteria: https://theingots.org/community/NCU1ICT There is no statutory requirement to do this any more, but it is likely that OFSTED will expect some system for monitoring progress in ICT, either incorporated into Computing or as a separate entity. Here is what the DfE said last June:
“Ofsted’s inspections will be informed by whatever pupil tracking data schools choose to keep. Schools will continue to benchmark their performance through statutory end of key stage assessments, including national curriculum tests.”
So something is needed, and for Computing that something has to be new, whereas for ICT it could be, for example, the 10 criteria at level 5 based on the old attainment target. There is nothing in these that would be controversial, and since there is no current mandatory ICT programme of study you are free to interpret them from a zero base, maybe, dare I say, taking evidence from across the curriculum. But this does not fix the wider issue of the new parts of the computing programmes of study, so perhaps it would be better to start from scratch. After all, how hard is it to write 10 criteria in keeping with the old level 5 to fit the new Programmes of Study? Of course the downside is that every school will be writing its own criteria, so the chances of consistency are not high. Nevertheless, this is the freedom the DfE have provided. It is a little ironic that we previously had a lot of complaints about the National Curriculum stifling innovation, and now that a lot of the constraints have been taken away we get complaints that there is no guidance. Maybe a case of be careful what you wish for.
One way round this dilemma is to use a different level system altogether. The Qualifications and Credit Framework (QCF) was devised just a few years ago to give a more flexible way of specifying nationally accredited qualifications and referencing them to the European Qualifications Framework. https://theingots.org/community/QCF_levels The argument against is that the QCF was primarily designed for adults, but if you read the general level descriptors there is no real reason why they would not provide a reasonable starting point in schools. In the end, it’s really what you can do that matters, not how old you are. Remember, even the NCLs were interpreted in the context of the programmes of study in a Key Stage, so interpreting a QCF level descriptor in the context of, say, the KS3 programme of study for computing is perfectly reasonable. The advantage is that a QCF qualification at level 1 or level 2 would then be no different from a summary of the teacher assessment at the end of Year 9. Progression on to levels 2 and 3 in Key Stage 4 then has much more coherence, and it’s simpler because we are dealing with one national framework and much simpler levels than was the case with NCLs. Of course we don’t have to make it a formal qualification; we could just use the framework to provide the progress monitoring OFSTED will want, with a clear rationale for raising attainment since there is no disjunction between the assessment frameworks from level 1 to levels 2 and 3.
So what about differentiation? The snag with trying to differentiate by matching criteria to work in fine detail is that achieving any precision is very time-consuming. My advice is to use the criteria broadly to set targets and ensure a baseline of competence across the programme of study; don’t try to achieve formal differentiation from it, at least not to start with. Exams are much better for this purpose as they are quick and easy to deploy. In fact, in computing at KS3 an online multiple-choice test with well-defined questions will probably suffice. One such exam at the end of each year could be used to predict progress towards a particular GCSE or equivalent grade, given sufficient statistical data. This is another drawback of individual schools doing their own thing. If a representative sample of schools sit the same exam in Year 7, and 20% of pupils achieve grade A at GCSE in Year 11, then the top 20% of Year 7 scorers are on target for a grade A. That assumes grade inflation comes to a halt and the proportion achieving grade A at GCSE does not vary very much. No prediction is ever going to be 100% accurate; all we can do is try to make it as accurate and as informative as possible, and make any necessary adjustments when variations become known. Whatever the case, it’s probably going to be at least as good as using the old levels and a lot less administrative hassle.
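The rank-order prediction described above can be sketched in a few lines. The scores, the grade label and the 20% split are all invented for illustration; a real mapping would need national statistical data behind it:

```python
# Minimal sketch of mapping a common Year 7 test to a predicted GCSE grade
# by rank order: the top fraction of scorers is flagged as on target for
# the top grade, mirroring the proportion achieving it at GCSE in Year 11.
# All figures here are hypothetical.

def predict_grades(scores, top_fraction=0.20):
    """Label each score on or below the top-grade track."""
    cutoff = max(1, round(len(scores) * top_fraction))
    threshold = sorted(scores, reverse=True)[cutoff - 1]
    return {s: ("on target for A" if s >= threshold else "below A track")
            for s in scores}

# A hypothetical cohort of ten Year 7 scores out of 100
cohort = [34, 91, 58, 77, 45, 88, 63, 52, 70, 41]
predictions = predict_grades(cohort)
print(predictions[91])   # top 20% of this cohort
print(predictions[34])
```

With ten pupils and a 20% top fraction, the two highest scorers (91 and 88) are flagged as on target; everyone else falls below the grade A track.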
If taking a simple annual test can be used to monitor progress, why bother with assessment criteria at all? Firstly, we have to base the test on something. Translating the programmes of study into assessment criteria makes it easier to design valid tests. And if we are going to the trouble of devising assessment criteria, why not share them with the pupils? After all, if they know what is expected and work with focus, they are more likely to do well on this and subsequent tests. In short, they will learn better. For those who believe in assessment for learning, assessment criteria enable self-assessment, peer assessment and target setting at whatever level of detail is deemed appropriate. Using ICT (and let’s face it, if we aren’t prepared to do this in ICT/Computing, where will it ever happen?) can support all this without any paperwork at all. A reduction in routine marking is possible, with better-quality feedback. But to start with we can have a simple set of end-of-year exams and a system to check a limited number of criteria as we work through the programmes of study. Manageable, but with the potential to improve incrementally once the system is in place. Perhaps it’s time to acknowledge that allowing teachers time to adjust to new systems and build from them is necessary for sustainable development, so plan it like that from the outset. Good teaching is getting people from where they are to where we want them to be.
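A paper-free system for checking a limited number of criteria could start as something this small. The criterion codes and pupil name are placeholders, not part of any official scheme:

```python
# Toy sketch of tracking a small set of assessment criteria per pupil,
# the kind of paperwork-free record the paragraph suggests ICT can keep.
# Criterion wording and the pupil name are invented placeholders.

CRITERIA = ["C1 plan a program", "C2 use a loop", "C3 evaluate a solution"]

class ProgressRecord:
    def __init__(self, pupil):
        self.pupil = pupil
        self.met = set()

    def mark_met(self, criterion):
        # Record a criterion as met, ignoring anything not being tracked.
        if criterion in CRITERIA:
            self.met.add(criterion)

    def coverage(self):
        """Fraction of the tracked criteria met so far."""
        return len(self.met) / len(CRITERIA)

record = ProgressRecord("Pupil A")
record.mark_met("C2 use a loop")
print(f"{record.pupil}: {record.coverage():.0%} of criteria met")
```

A record like this could feed self-assessment and peer assessment directly, and later be aggregated across a class or year group without any extra marking.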
1. What are the main strengths and weaknesses of how you currently assess progress in ICT?
2. Do you consider that the outcomes of your current assessment justify the time and effort doing it?
3. Do you see the lack of ready-made assessment criteria for computing as a threat or an opportunity? Why?
4. How much time do you think it will take to devise an assessment structure for monitoring progress in computing across a key stage? KS3? KS4?