Did the Bretons Break? (And What’s It Got To Do With Textbooks?)

Picture the scene. A cold morning in the middle of October. A dreich mist envelops the fields. The air is full of the Old English and mangled French battle cries of two armies arrayed and poised for action. The repetitive thud, thud, thud of plastic swords and cardboard axes against shields of neoprene…

It is time once more to re-enact the most famous battles in British history.

It is time once more for the wind to keep William in port with a wafted piece of paper. Time once more for one Viking to single-handedly hold the scrap-paper Stamford Bridge for hours. Time for Harold Godwinson to force-march an army south to the other side of the classroom to once again re-fight a battle he can only lose. Like the most craven of addicts, or the inmate of some circle of Hades, he is trapped eternally in a repetitive cycle of bad decisions that will ultimately and inevitably lead to his destruction. Every year he will ignore his brothers’ advice. Every year he will fail to take William by surprise. Every year his shield wall will break and every year an archer, from the back of the room by the pile of school bags, will nock an imaginary arrow, draw an imaginary bow and loose a fateful board pen into the mêlée. Every year Edith Swanneck will pick her way through the tangle of giggling, uniformed corpses looking for her lover.

And we love it.

One thing that trads and progs both love is an Ian Luff-inspired Battle of Hastings role-play. It is direct instruction and it is kids taking part for themselves. You might get kids to take notes at the end. You might create a huge spider diagram. You might have students shouting ‘Stop!’ when they hear reasons why William won. This might be the basis of an essay or this might just be an introduction to Year 7 purely designed to get one-up on the geography department.

Whatever the reason and rationale, it is a sequence repeated up and down the land. But it is never the same twice. There are myriad subtle variations and nuances that each teacher brings to the story. We each embroider the narrative with our own favourite facts. We each add details to spin out the story, to cast the spell wider. You might amuse your Year 7s with Taillefer’s sword-juggling. You might take delight in demonstrating the use of a Dane axe. You might have all of the fyrd making scramaseaxes from shatter-proof rulers… These details may represent your swag, your pomp, your professional pride, your delight in conveying knowledge and story slightly better than you heard it. Or maybe you are less petty-minded than me. In a sense, it doesn’t matter whether you label kids as Gyrth and Leofwine or not. It doesn’t really matter if you miss out Edwin and Morcar’s heroic failure at Fulford. Most of the good-story bits are not essential to the narrative.

But some are. In your version of these lessons do the Bretons break?

When William invaded England, he did not just bring Normans with him; his army was a coalition of different elements ‘attracted by the well-known liberality of the duke’* as William of Poitiers would have it or, in Orderic Vitalis’s blunter phrase, because they were ‘panting for the spoils of England’. This meant that at Hastings, the left of William’s army was (probably) made up of a contingent of Bretons. There is a dispute about their actions in the battle and I believe that thinking about how we as history teachers deal with this issue raises a lot of other questions about the teaching of causal thinking and the resources we use to do so.

There are two near-contemporary accounts of the Battle of Hastings. The most famous is by William of Poitiers who was a trained soldier who became a priest and worked as chaplain to Duke William. He probably wasn’t at Hastings but knew, and talked to, people who were, including William himself. He wrote an account of the Battle of Hastings in his Gesta Guillelmi sometime in the 1070s.

The second source is a poem called the Carmen de Hastingae Proelio, probably written by Bishop Guy of Amiens, someone connected with the French court, possibly within months of Hastings. The reasons for its creation are unclear but it may have been intended for the ears of William himself.

The event concerning the Bretons occurs about half-way through the battle. According to William of Poitiers, stout English defence by the shield wall at the top of the hill causes the Bretons on the left of William’s army to break. At the same time, there is a rumour in the Norman ranks that William has been killed. William rescues the situation by raising his helmet and showing his face. His men, given new heart by their leader, turn and begin to slaughter the English who were foolish enough to have left their defensive position at the top of the hill. Inspired by these events, William orders another cavalry attack and this time feigns a retreat, causing enough of the English to break ranks to fatally weaken their defences.

The Carmen relates similar events but makes no mention of the troops that broke being Breton. Also, in this account the first retreat starts off as a trick but, when the Saxons press the Normans harder than they expected, the feigned retreat becomes a real one until, as in William of Poitiers’ account, William, through sheer force of personality, saves the day and rallies his men. Again, as in the Gesta, after this there is a second feigned retreat with its terrible consequences for Harold.

So who to believe? Which version should be acted out in classrooms and gymnasiums all over the country?

At first glance, William of Poitiers’ account seems the most convincing; he was, after all, a soldier, so should know what he’s talking about. Also, he probably spoke to people who were at Hastings themselves and, while the Gesta is almost certainly Norman propaganda, it appears in this instance as if William’s army is weak and foolish – they are broken by the English. In comparison to the Carmen, which has William trying a clever feigned retreat (albeit one that goes wrong), this doesn’t seem to be hagiography.

But look again. It is William that saves the day and it is not, technically speaking, his men that broke – it is the mercenaries of the Breton contingent. William is then able to use that misfortune to his advantage as it gives him the idea for the famous trick that would break, or at least fatally weaken, the Saxon shield wall. William actually comes out better in Poitiers’ account than in the Carmen. Is William of Poitiers excusing this mistake by dressing it up as the fault of the Bretons?

Although there is potential merit in both accounts, if we are going to re-enact the Battle of Hastings we should probably choose one version of events. So, do you teach the William of Poitiers version, with William rescuing a terrible situation created by the weakness of the mercenary Bretons? Or do you have William trying and re-trying his famous trick until the Saxon shield wall is thinned out enough to leave Harold vulnerable?

Like, I suspect, most of you, I teach the William of Poitiers version, despite the fact that, to my mind, the Carmen account seems more plausible. Does this mean that I am teaching students something that is untrue? Am I lying to Year 7? I would suggest that I am doing neither of those things. In order to tell a narrative of the past, I have created a model of events. We cannot be sure which of the accounts we have is most accurate and, while on balance the Carmen seems preferable to the Gesta, it would not be unreasonable to go with William of Poitiers’ account.

So what? Why does it matter whether you have 11-year-olds on imaginary horses pretending to be Bretons or Normans?

It doesn’t matter at all. Except that if you have Bretons, the William you are creating for your students is a very different character from the William that would be conjured through the Carmen’s account. One is dynamic and daring, able to exploit an unfortunate situation to his advantage, in striking contrast to Harold who makes some very poor decisions. The other William is a much less exciting figure who does have a skilful cavalry but who, like an adolescent magician, has one trick that he rolls out again and again until it gets the reaction he hoped for.

Again, so what? So what if William of Poitiers’ Conqueror is a better character than the Carmen’s William? What does it matter other than it allows a history teacher to show off more through a more dramatic re-telling of events?

It matters because of what you want your students to do with your William – and here it is worth pausing to point out that I do mean your William. Students using the Battle of Hastings role-play as the basis of a piece of causal writing are not writing about the eleventh-century duke of Normandy, they are writing about the duke of Normandy created in your classroom.

Any causal explanation can only be a model of the events of the past. When you are leading a role-play like the Battle of Hastings you are in control of creating that model. When you ask students to explain why William won the battle, based on your (re)creation, you are asking them to fit together the pieces you created. If you created a William based on William of Poitiers’ account then it would be reasonable for a Year 7 to argue that William won the Battle of Hastings because of his tactical brilliance. This argument would be less easy to sustain if you have given them a William who uses the feigned retreat a second time after it nearly ended in disaster for him the first time. That William may be dogged or determined but he may also be desperate and lacking in imagination.

But aren’t we engaged in a search for historical truth? Couldn’t we engage students with the complexity of the historical debate about these sources? Shouldn’t we be encouraging students to write using the language of doubt? Do we not have a moral and professional duty to highlight the temporary and contingent nature of historical ‘fact’? Could we not ‘extend the most able’ by asking them to do ‘research’ on this question?

Well, yes. Yes to all. Each of those (with the possible exception of the last, which smacks of planning-by-booking-a-computer-room) would be a valid thing to do with Year 7 and, at other times, I would happily help students engage with them, but they are not what I am using the Battle of Hastings role-play for in this instance. In this instance, I want to see how students can construct an argument and support it using the information they have been given. To this end, I am going to deliberately select and limit the information they have. I am going to create a simplified model of the past – a Duplo version of history. This is not an insult to my students, nor should one imagine that it makes the task of explaining William’s victory at Hastings simple. Textbooks do this all the time. In our GCSE book, in the section on the Reformation, we take the complexity of the ‘new learning’ of humanism and boil it down to a couple of paragraphs. Essentially, it says ‘Erasmus wanted a Greek Bible – oh, and there was a thing called the Renaissance.’ This would be appalling if the syllabus questions were about the Renaissance but, if we are looking at the impact of the Reformation on English people, then, yep, ‘there was a thing called the Renaissance’.

I have read many calls for an increase in the use of texts and textbooks in history classrooms. This is not necessarily a bad thing but it should be realised that a textbook is, if it is used thoroughly, a course book – its contents become the syllabus. Necessary decisions about omissions and simplifications are taken by the authors and editors and not by the teachers using them. This is fine if that is what is wanted and is recognised by those who use them. If, however, it is assumed that textbooks are neutrally-voiced and transparently convey historical truth then there are problems.

Similarly, this issue highlights problems when anyone tries to make comparisons between classrooms and schools. I have heard many calls recently for all history marking to be done through comparative judgement and there is a lot of merit in this. However, it is difficult to know how a level playing field might be constructed to allow this to happen. If the William created in my room is a ‘Poitiers William’ then, as noted above, the conclusions that might reasonably be drawn about him are not the same as those that might be drawn about the ‘Carmen William’ who existed briefly in the Sports Hall of the school down the road. So when looking at essays, how can I judge the difference between them? If one of the essays doesn’t mention the first feigned retreat, is that an omission by the student or did it never happen in their particular Hastings-verse? How might I judge?

Similarly, what good will it do my students to buy into some sort of knowledge-organising, testing system that has not been designed for their particular course? If I have chosen to simplify Charles I’s religious beliefs for Year 8 to ‘he’s a Protestant but he wanted services to be beautiful’ what merit is there in a bunch of questions about Arminianism? It’s not that Year 8 are incapable of understanding the subtleties of seventeenth-century beliefs but that is not the focus of what I am asking them to do.

To sum up, what I suppose I am arguing for is teachers who have deep and rich subject knowledge, so that they know what simplifications and omissions they are making, and who have a firm grasp of what thinking they want students to do with the information they are given, so that they can make those simplifications in a deliberate way. It is also vital to recognise that any assessment of student work has to have some reference to the syllabus students have been taught (whether that is created by the teacher or taken from a book) as even potentially trivial differences in approach, such as whether the Bretons broke or not, can have profound implications for students’ work.

* Much of the history for this comes from Marc Morris’s excellent The Norman Conquest ISBN: 9780099537441. As always, mistakes are all mine.

1991 And All That – Why I won’t be buying anything from Pearson Progression Services

Lost in the Supermarket

This week I was in a branch of a major supermarket trying to find some new swimming shorts for my 2-year-old son. Amongst the cartoon-violence-film-franchise trunks and unicorn-loveheart child bikinis I chanced upon three pairs of day-glo knee-length shorts – one pink, one yellow and one green, each in eye-watering neon with a black stripe across them. The vividness of the solid blocks of acid colour pitched me straight back to the summers of my youth. I would have seriously coveted these shorts in my pre-teen years and I felt a pang of nostalgia for a time when I didn’t yet know that the answers to life’s great mysteries were, very often, themselves mysterious.

It turns out that I am not the only one who has been wondering what it would be like to return to 1991…

An Introduction From a Trusted Friend

The Historical Association recently told me that they were “pleased” to bring me a message sent via email from the publishing/examination behemoth Pearson touting a tool to help me “find new ways to track and report on your students’ progress in History after the removal of National Curriculum Levels”.

However, when I looked at what was on offer my heart sank like Cambridge United’s dreams of winning the Second Division playoffs. [1]

Let me explain my disappointment. The package on offer claimed it would allow me access to a ‘Progression Map’ that “builds on our 12 step scale, breaking down the curriculum and providing clear progress descriptors, prior knowledge requirements and boosters for additional challenge.”

At first glance this might look like Pearson have just replaced the National Curriculum level descriptors with Pearson-defined level descriptors. However, unlike the National Curriculum level descriptors, the Pearson ‘Steps’ are designed to be applied to individual pieces of work and are designed to be divided ‘horizontally’ to allow fine grading.[2]

Also, unlike every National Curriculum since 1991 that has had level descriptors that deliberately interwove their different elements, Pearson has taken the trouble of dividing the descriptors ‘vertically’ as well; divorcing ‘Cause and Consequence’, ‘Change and Continuity’, ‘Evidence’, ‘Interpretations’, ‘Structuring and Organising Knowledge’, ‘Using historical vocabulary’ and ‘Chronological Understanding’ into different ‘sub-strands’. This would, I suppose, allow you to clearly separate your assessment of a student’s understanding of ‘Using historical vocabulary’ from their understanding of ‘Cause and Consequence’ etc.

However, there is more. In order to keep track of my students’ progress through this 12-step programme they had “developed a straightforward, time-saving and reliable approach to monitor learning throughout KS3 and KS4.”

What Pearson is selling is a re-write of the 1991 National Curriculum and an Excel spreadsheet.

Why I Won’t Be Buying

It is sad that an international organisation such as Pearson, with its abundance of resources and huge influence, is peddling something that is so conceptually flawed. The criticisms levelled at the (mis-)use of the NC Levels [3] are exactly applicable to this system and, while retro seems always to be the order of the day, the memories of those arguments are too fresh and the scars too raw to revive them here. However, what is worth saying is that Pearson are selling this conceptually-flawed product without having taken the trouble to even address the flaws in its execution.

It would be churlish to cherry-pick and isolate phrases from the Step Sub-Strand descriptors to challenge or question. I’m not going to spend time here making jokes about flux-capacitors and the phrase “Learners are able to manoeuvre within their own chronological framework with ease”. I’m not going to ask you whether “starting to make judgements about sources and how they can be used for a specified enquiry” is more or less difficult than making “supported inferences about the past by using a source and the detail contained within it.” Nor am I going to point out that a huge publishing house has published documents that use both spellings of judg(e)ment on the same page. It is precisely because writing these generalised descriptors is so hard that their creation is meaningless. It is for precisely these reasons that the application of these generalised statements to individual pieces of work is meaningless.

I would, however, like to draw your attention to the Baseline Test that Pearson invites teachers to set Year 7 after a brief topic on the Norman Conquest.

The idea of a baseline test for the beginning of a Key Stage is a good one – it gives the teacher some idea of the strengths and weaknesses of their students. It can be used to tailor support, intervention, extension etc. etc. With appropriate caveats, it is not unreasonable to compare the results of this test with later ones to help inform some judgements about students’ progress and, perhaps, the efficacy of some aspects of a teacher’s performance.

However, in order that a baseline test is effective it must be a fair test. The Pearson Year 7 Baseline test is flawed in many, many ways: [4]

[Screenshot: question 3 of the baseline test]

3 – Are ‘Romans’ an era? Shouldn’t this read ‘Roman Britain’, ‘the Roman period’, ‘the Roman era’, ‘the period of the ascendancy of Romano-British culture in South-Eastern England’…?

[Screenshot: question 4 of the baseline test]

4 – If ‘The Dark Ages’ is an era, don’t at least two of these labels also require the definite article?

[Screenshot: question 5 of the baseline test]

5 – An emperor or empress would be the ruler of an empire (and an Empire?) and be just as much of a monarch as a king or queen.

[Screenshot: question 6 of the baseline test]

6 – I looked at the mark scheme and realised that I got this one wrong.

[Screenshot: question 7 of the baseline test]

7a – It would not be unreasonable to describe a way of explaining a set of historical facts as a ‘cause’. A cause is identified (constructed?) by a historian, therefore it is a way of explaining historical facts. An ‘interpretation’ is not a way of explaining historical facts it is a construction made from (the selection of those things that the historian determines are pertinent) facts.

7b – Interpretations happen because of something else. That is in the nature of ‘interpretations’ of history: a historian’s Marxist beliefs will cause them to have a Marxist interpretation etc. Long-term causes of historical events are, in turn, caused by other things.

7c – A short-term cause of William’s victory at Hastings didn’t happen a short while ago. Things that happened a short while ago and had an impact can also be consequences of something else.

[Screenshot: question 8 of the baseline test]

8 – I’m not even going to start to pretend that I understand the subtleties of what (bastard) feudalism is/was/whether it ever existed… but I do know that it would be perfectly reasonable to offer, “Because it wasn’t a feudal society,” as an answer to 8b. Would this count as an explanation?

[Screenshot: question 10 of the baseline test]

10 – Wouldn’t it be more useful to phrase the question as the difference between what the historians are saying in Interpretations 1 and 2?

[Screenshot: question 13 of the baseline test]

13 – This implies that the historian’s questioning itself is evidence of why William won as if William ushered in a new era of evidential thinking in the discipline of history. I think they mean ‘usefulness’.


Does It Really Matter?

Okay, so some of the questions are clumsy in their execution and some suggest some clumsy thinking. Again, this wouldn’t be terrible if you had cooked it up with a colleague in the last week of term because you needed an end-of-year test, but if you are one of the world’s largest educational publishers it’s probably a bit embarrassing. However, I would argue, much more importantly (and I know that some friends and colleagues will roll their eyes at this point and suggest that I have spent too long in the company of Mr. Hyperbole), that the system that supports this test is dangerous and unhelpful.

It is not unreasonable to give numerical scores to questions on a history test. What is unreasonable is to use those numbers to draw unsupportable conclusions.

According to Pearson’s Baseline Test Markbook, all elements of question 7 are at a Step 4 level of difficulty but each answer is worth only 1 mark. This is the same value as question 1 which is only rated as Step 3 level. This happens all over the test and this causes problems.

[Screenshot: the markbook with fake student data]

While I appreciate that the screenshot of the fake data in the markbook is probably illegible, please take my word that students Joseph Bloggs and Anne Nother have the same overall score: Step 2 Developing [5]. This is despite the fact that Joseph Bloggs failed to get any of the simpler questions (i.e. those rated at Step 2) right but aced those rated 4 and got somewhere with those rated 7. Anne Nother got all of the simpler questions right but did less well on the more difficult ones. Are these students at the same level?

Well, yes and no. The data about how each student performed on each individual question is interesting and can be useful and pertinent. However, this system is designed to smooth out all of that nuance and produce a summative grade. This in itself is still not necessarily a problem. So long as everybody is clear that the grade given refers only to the performance of that student on that day on that particular test, this average can have some meaning. The problem, however, is that Pearson are implying that the score on that test has some relation to a student’s capacity to perform according to complex level descriptors.

It does not.

The fact that the students scored 13 marks on the test tells us that they got 13 on that test. It is fair to say that Bloggs scored below the class average on that test. It is fair to say that Nother scored 26% on that test. It is fair to say that one of them probably doesn’t know what ‘a decade’ is because they got question 2 wrong.

It is not fair to use that score to describe either of them as Developing Step 2.[6] The score in no way relates to the descriptors. If a student is doing some parts of Step 7 but not Step 2, it doesn’t mean that they are doing the things described in Step 4.

Creating mean averages and then extrapolating judgements about a student’s capabilities is a gross over-simplification and, while it does allow people to generate pretty line graphs, it is impossible for those graphs to convey any meaningful information. They generate a lot of noise but very little signal.
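The arithmetic behind that complaint is easy to demonstrate. Here is a toy sketch in Python (the names, difficulty ratings and scores are all invented for illustration, not taken from Pearson’s materials): two response profiles with identical totals and identical means, carrying completely different information about what each student can actually do.

```python
# A toy illustration of how a mean score hides the per-question
# profile beneath it. All numbers are invented.
from statistics import mean

# Hypothetical 'Step' difficulty rating for each of six questions.
steps = [2, 2, 2, 4, 4, 7]

# 1 = correct, 0 = wrong, question by question.
bloggs = [0, 0, 0, 1, 1, 1]   # misses the easy ones, gets the hard ones
nother = [1, 1, 1, 0, 0, 0]   # the reverse profile

# Identical totals, identical means...
assert sum(bloggs) == sum(nother) == 3
assert mean(bloggs) == mean(nother) == 0.5

# ...but completely different patterns of attainment.
def profile(answers):
    """Group correct/incorrect answers by the Step each question is rated at."""
    out = {}
    for step, correct in zip(steps, answers):
        out.setdefault(step, []).append(correct)
    return out

print(profile(bloggs))   # {2: [0, 0, 0], 4: [1, 1], 7: [1]}
print(profile(nother))   # {2: [1, 1, 1], 4: [0, 0], 7: [0]}
```

Averaging collapses those two very different profiles into the same single number; any descriptor hung off that number describes neither student.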

The Illusion of Reliability

So what, you ask? So the system is imperfect; it’s better than nothing. It gives heads of department/heads of year/heads some rough-and-ready data to help them out. I would strongly argue that it is much, much worse than nothing. The problem lies in the illusion of reliability that numbers give – if you put together a system that generates numbers, it won’t be long before some idiot assumes that they mean something. After that, it won’t be long before people are judged on whether those numbers appear next to particular students’ names. After that, it won’t be long before sets/rewards/trips/badges or promotions/pay awards/professional reputation/the ability to put food in your child’s mouth are dependent on those numbers. After that, it won’t be long before the stakes for not getting the numbers are so high that teaching is to the test and marking is done with one eye on self-preservation. After that, the numbers obscure the things that they are supposed to be measuring. After that, habit, fear and exhaustion will lead us to a place where we are teaching students how to get numbers rather than how to get excited about the past.

The abolition of the National Curriculum Level Descriptors has provided us, as professionals, with such a wonderful opportunity. Paying money for a system like Pearson’s Progression Services is just Stockholm Syndrome – a self-defeating desire for the comfort of our previous imprisonment – don’t succumb.

Matt Stanford


[1] Okay, so technically that match was in 1992 but that season started in 1991.

[2] A student can be ‘beginning’ step 4, ‘developing’ step 4, ‘securing’ step 4 or ‘excelling’ step 4… no, hang on… ‘beginning’ to understand step 4, ‘developing’ to understand… no, hang on… ‘beginning’ to perform at step 4-level, ‘developing’ to perform at… no, hang on.. ‘securing’ their understanding of step 4 before ‘developing’ their… no, hang on… 4a, 4b, 4c… or was it 4c, 4b, 4a…?

[3] For example, Burnham and Brown in Teaching History 115 & 157, Fordham in Teaching History Supplement 153, Ofsted in History for All, 2011, Final report of the Commission on Assessment without Levels, 2015.

[4] I am prepared to admit that at least some of these criticisms verge on pedantry. However, had any of my colleagues suggested these questions I would have asked them to consider the following changes. If we expect accuracy and clarity of thought from our students, shouldn’t we expect it from ourselves? However, if you have a low tolerance for smug nit-picking please feel free to skip on to the section entitled “Does It Really Matter?”

[5] They are developing Step 2? Their understanding of the historical thinking required to achieve Step 2 is developing from slight to comprehensive?? They are developing Step 2 into Step 3???

[6] Step 2 Descriptors:

Cause and Consequence Step descriptor: Learners show a basic comprehension of causes and understand that things happen in the past for more than one reason. However, they view these relationships as unmoving or definite, i.e. X was always going to cause Y. They may display a simple understanding of consequence.

Change and Continuity Step descriptor: Learners can identify basic differences between our lives and the lives of people in the past, but will often see the present as a time when problems of the past have been solved or sorted out.

Evidence Step descriptor: Learners have a sense that historians need to look at evidence about the past to find out what happened, but they see this evidence as independent and able to speak for itself. For example, they may believe that a report or relic has its own truth without any interrogation.

Interpretations Step descriptor: Learners can decide what they think about the past (e.g. I think that King John was bad) but cannot link this idea to the way in which history is constructed. They may be able to repeat stories that they have been told about the past, but cannot see that these stories are interpretations.

Knowledge Step descriptors: Learners begin to use simple historical terms, such as years, and understand that some things happened a long time ago. However, they are unable to distinguish between different lengths of time. They may be able to talk about periods that they have studied (e.g. Ancient Greeks, Romans) but cannot fit these into their existing knowledge. Learners can remember historical vocabulary with some relevance within a given period (e.g. Roman emperors, Viking longships) but struggle to use it to describe the period or features of the period.
Learners can recount simple stories about the past (e.g. myths, battles) but are unable to move beyond what they have already been told or to combine knowledge together.


What are we doing when we think that we are dual coding? – Part Two: Does consistency matter?

We have a Year 7 enquiry question that asks students to consider the historical significance of four medieval women.

We’re pretty happy with it.

It allows us to cover some chronology that would otherwise be missing, introduces students to the idea that historical significance is ascribed rather than inherent, suggests some criteria by which historical significance may be judged, makes a moral point about the exclusion of some groups of people from conventional historical narratives, asks students to practise supporting claims with examples and allows them, if they choose, to create some dramatic artwork. It’s quite good fun and it sits nicely within a progression model of the second-order concept of significance planned into our Key Stage 3 curriculum.

Like I said, we’re pretty happy with it.

Teaching this topic this year, Maggie Johnson, one of our brilliant Specialist TAs from our Hearing Support Centre, said that she was concerned about the amount of substantive content there was in this course and asked if she could make an aide memoire to stick on the desk of a student with hearing difficulties so that she and the student could refer to it when signing. Feeling once again grateful to work with such amazing TAs, I said, “of course”.

Which is when Maggie, very politely, pointed out that the inconsistency of the pictures might actually prove to be an obstacle to developing students’ understanding and that the pictures were, in short, rubbish.

Have a look at the pictures we were using…

Eleanor of Aquitaine…

…Julian of Norwich…

[Picture of Julian of Norwich]

…Margery Kempe…

[Picture of Margery Kempe]

 and Margaret of Anjou…

It’s not the fact that Eleanor of Aquitaine appears to be riding a My Little Pony that was the problem. Nor was it that the picture of Margaret of Anjou is from the ‘wonky’ school of art that was so popular in the medieval period – it’s that there is no consistency between the images. Three of the pictures are modern artistic impressions, two are from medieval manuscripts, one is from a medieval mural and one is a twenty-first-century photograph of a twenty-first-century statue. Not only do the images look radically different from each other; the intentions behind their creation are radically different too. Yet we were going to use them to try and help Year 7s learn about women who, as far as they are concerned in this enquiry, are of the same order: they are all medieval women, they are all objects of our study and they will all be subjected to analysis of their potential historical significance. Whether explicitly or implicitly, the pictures did not suggest that parity.

If we wanted students to understand that the women were of the same order we needed to have pictures that made this clear – we needed consistency in our dual coding.

We rectified this problem by drawing our own pictures.

We projected the images onto a whiteboard and went around them with a dry-wipe marker. We then took a photo of our drawings, emailed it to ourselves and tidied and coloured the jpeg in Photoshop. Below are the results:

MedievalWomen

The images we have created are certainly not more beautiful than (some of) the originals but they are at least consistent – images that represent things (in this case, medieval women) that are of the same importance, and will be studied and analysed in the same way, now look the same.

However, they are not exactly the same. They still have visual clues that might act as cues to remembering a little about these women’s lives – the queens are wearing crowns, Julian of Norwich is dressed as a nun and Margery Kempe is holding her imaginatively titled autobiography, The Book of Margery Kempe.*

For me, the takeaway from this experience was that consistency in the images you use has more than just aesthetic value…

…and that there are people who paint pictures of medieval queens riding fantasy horses and post them on the internet.

 


* I realised later that we would need to correct the image of Margery Kempe before teaching this again next year – one of the things she does that gets her into trouble with the Mayor of Leicester is wearing white despite being a married woman.

What are we doing when we think that we are dual coding? – Part One: What do students already need to know in order to understand the pictures?

Dual coding in some subjects must be easy.

In biology, the picture you need to illustrate the parts of a leaf is one that shows the parts of a leaf. In physical geography, a description of the creation of oxbow lakes would probably be best dual-coded by a diagram of the formation of an oxbow lake.

But in history?

I wouldn’t suggest that we are the only subject that has to deal with abstract concepts, but there does seem to be a disproportionate number of them. Take, for example, the Soviet invasion of Afghanistan in 1979. The exam-board-endorsed textbook has four paragraphs on the subject – it is, after all, only one small part of a one-hundred-year course on international relations. However, beneath that brevity lies much complexity. Take, for example, the sentence:

“The conservative, strictly Muslim rural Afghan people disliked communism because it was an atheist ideology (it denied the existence of God).”

Let’s just unpack that. In order to parse that sentence, a student needs to hold in their head some understanding of (at least some of) the following concepts: ‘conservative’, ‘Muslim/Islam’, ‘rural’, ‘Afghanistan’, ‘communism’, ‘atheist’ and ‘ideology’. The schemata you need to understand this sentence are numerous. If you have that cultural knowledge, then this is a clear, concise summation of the reaction of some Afghans to Taraki’s government. If you don’t, it is intimidating and confusing. All of this for a question that, if it comes up at all, might be worth 4.76% of a GCSE.

I know that building these schemata and developing knowledge is what Key Stage 3 is for – and we should at least be engaged with the question of which substantive concepts we want to teach our students (and which we want them to retain). But, if we are being honest, how many of those concepts do our students understand because they are taught them at KS3? How much of students’ understanding of other concepts like ‘law’, ‘parliament’, ‘economy’, ‘trade’ etc. is built by lessons? Isn’t it actually the case that the students who are most au fait with those terms have had most of them introduced, defined, explained, illustrated and clarified by their experiences outside the classroom? Don’t tell me that the tens of thousands of hours with their families, peers, books, telly and internet are not more instrumental in their development than the hundreds of hours they spend in history lessons – even if they are devastatingly brilliant lessons.

Even if it were reasonable to expect history teachers to be responsible for the entirety of a student’s historical, political, economic and cultural education, the weight of outside influences will always be heavier than the influences of a history teacher, no matter how inspirational, dedicated or passionate. This is how privilege becomes entrenched – those who are born into families that help them develop cultural capital valued by academia have easier access to more of it. Those who are not face a harder challenge.

So, what does this mean for dual coding?

First, it means we need it. We need to give students something on which to hang the myriad abstract ideas that lie behind our subject.

Second, it means that it is hard to get that dual coding right. If an image is supposed to illustrate a point you are making, it needs to resonate with the students looking at it. It needs to tap into, and latch on to, students’ understanding of the world around them. It means that the students need to have the cultural understanding to make sense of the images you are showing them. Take a look at this picture from a well-known and mostly excellent GCSE textbook:

Unemployment Diagram

This image is part of a diagram designed to dual-code the idea of the spread of the Great Depression to the rest of the world. I guess that if you are reading this blog then you have the cultural reference points to understand why three men standing in a row represents unemployment. You see that image and it triggers for you memories of photos of the breadlines in the Great Depression. Maybe it’s the Thursday dole queue. Perhaps you see the workers from Metropolis. You might see that long line of Hendon Young Conservatives used in Saatchi & Saatchi’s Labour Isn’t Working poster from the 1979 election campaign. You might even be humming a Hot Chocolate song thinking about The Full Monty.

But what about the kids who have never seen those images? What about those who have never seen those films? What about the kids with the least cultural capital? What about those kids who do not share our mental schema? It is our duty to help introduce them into the great conversation that is academic discourse and yet the very thing we have selected in order to try and help them understand a complicated idea is an image that means nothing to them. If you do not already have those images in your head you are excluded from the thing that will help you put a new idea into your head. If you are not already part of a group that values academic knowledge, you are further excluded from it.

Okay, so a picture of men standing in a row is very unlikely to cause a budding historian from a non-traditionally-academic background to give up in frustration but, if it does not place another tiny brick in the wall, it certainly does not help them climb over it.

We, as history teachers, have to think carefully about what assumptions we are making when we illustrate a point. What cultural knowledge, what schema, is required for the pictures we use to be meaningful for all of our students?

Who are we excluding and discouraging when we think we are being helpful?

Cognitive Psychology in the History Classroom – An Introduction

In January of this year, we were lucky enough to hear a presentation from Dr. Yana Weinstein from the University of Massachusetts (@doctorwhy) and Dr. Carolina Kuepper-Tetzel from the University of Dundee (@pimpmymemory), both of the organisation The Learning Scientists. They gave a talk about six strategies to help students develop their long-term memory. These strategies were:

  • Dual coding;
  • Retrieval practice;
  • Elaboration;
  • Interleaving;
  • Spaced practice; and
  • Concrete examples.

We were very impressed.

What made Weinstein and Kuepper-Tetzel’s presentation different was the fact that, unlike so many of the fads and fashions teachers have been encouraged to take up, what they were saying was based upon actual scientific research.

This is not the place for us to (badly) rehash their work. If you want to know more about what these techniques are and their background, we heartily recommend you visit The Learning Scientists’ website.

This is, however, the place to discuss the opportunities and difficulties faced by history teachers in applying their work. With that in mind, we intend to write a series of posts about cognitive psychology in the history classroom based around these six techniques.

Hello World!

Hello,

Welcome to ‘…what a wonderful world this would be’, a blog about teaching history.

My name is Matt Stanford and I am a history teacher at a proudly comprehensive, non-selective, co-educational, 11-16 state school.

Recently, my colleague Corinne Goullée and I have had some people express interest in learning more about the workshop that we presented at the Schools History Project Summer Conference on the possible application of ideas from cognitive psychology in the history classroom. While we were flattered at the interest and very happy to share our thoughts, the slides on their own made little or no sense. So, we thought that it might be more useful to share the ideas in a different way.

Hence the blog.

While that work will probably form the content of the first few blog posts, we also hope to offer thoughts and questions on other aspects of history teaching.

This is intended to be a collaborative blog by me, Corinne and our colleague Geraint Brown and we hope that our musings here will be of some interest and possibly some use.

Comments and criticisms will be warmly received.