#CSEd #Education #PGCE
# An Investigation of Pupil Progress and Assessment in Computing
## *Evaluating Digital Mini-Whiteboards as a Student Response System in Assessment for Learning*

### Preamble
This is not the essay I hoped to write. To accommodate difficult circumstances, I was only able to scope my investigation broadly, looking at assessment for learning (AfL) and its implementation with whole-class questioning over mini-whiteboards (MWB). There is, however, a computing twist: the whiteboards are digital (DMWB). I took full advantage of a free trial of [whiteboard.fi](https://www.whiteboard.fi/), using it across all the year groups I taught at my 'short placement' school, a Catholic, all-girls academy ("School B").
MWBs are a *medium* of assessment more than a *method* -- *validity* and *reliability* are solely dependent on the questions being asked, not where/how the answers are written. However, I will endeavour to show in this essay that digital MWBs improve both *practicality* and *equitability* of assessment.
### Literature Review
Assessment is crucial to any form of teaching. It is one of six standards in the Department for Education's Core Content Framework for teacher training (Twiselton et al., 2019) and one of eight sections in the teaching part of the Teachers' Standards (DfE, 2011). It is a broad notion: my intuitive conception of it, prior to entering the field of education, focused on marks and exams -- the *summative* function of assessment; little did it occur to me that a straightforward question-answer-feedback exchange between teacher and pupil *was* assessment too, of the *formative* kind.
#### Before the Black Box: teaching exchanges and formative assessment
The template for this kind of questioning was formalised by Sinclair and Coulthard (1975) as 'Initiation-Response-Feedback' (IRF), a type of teaching exchange (TE) they found to be so commonplace as to constitute the basic steps through which a lesson progresses. Mehan (1979) broadened this framework, substituting 'evaluation' for 'feedback' as the third step, and including pupil-initiated discursive exchanges; however, none of these authors speak of 'assessment' as we now use the term. The formative/summative distinction can be traced to Michael Scriven's *The Methodology of Evaluation* (1967) -- although he applied it to the evaluation of 'educational instruments' (e.g., curriculums), not the assessment of individual students[^1]. Royce Sadler (1989) offered a framework of guiding principles for formative assessment which is still the basis of current best practice, in particular his criteria for effective feedback.
#### Beneath the Black Box: The work of the Assessment Reform Group
The 1990s in the UK marked the rise to prominence of the Assessment Reform Group (ARG): founded in 1996 as a project of the Nuffield Foundation, its members had been working together since 1989, as the Policy Task Group on Assessment within the British Educational Research Association. The ARG counts amongst its members Dylan Wiliam and Paul Black, whose small pamphlet[^2] *Inside the Black Box* (1998a) casts a long shadow over British education[^3]. It marked the start of a shift in semantics driving a change in practices. The ARG argued that the adjective 'formative' was too ambiguous to be helpful, in particular because, although it implied teacher feedback (as per Sadler, 1989), it did not guarantee the student had the means to act upon that feedback (Broadfoot et al., 1999); this prompted the coining of the term 'Assessment for Learning' (AfL), which the ARG defined as "the process of seeking and interpreting evidence for use by learners and their teachers to decide where the learners are in the learning, where they need to go and how to best get there" (Broadfoot et al., 2002, p. 2). The start of what Brand (2016) refers to as the 'AfL movement' caused the assessment paradigm to pivot around a procession of prepositions: assessment *for* learning, but also *as* learning, particularly for self- or peer assessment. Summative assessment thus became assessment *of* learning (AoL[^4]).
#### Beyond the Black Box: Assessment in the 2000s
The ARG's definition of AfL hinges on three questions about the learners, which, in a beautiful fractal twist, have since been asked *of AfL itself.* Writing in 2006, Black asks 'Where is it now? Where is it going?' and expresses concern that AfL has turned into 'a free brand name to attach to any practice' (Black, 2006: 11). A decade later, Wiliam's tone is dire when he asks: 'What will it take to make it work?' (Wiliam and Thompson, 2017). Schellekens and colleagues, in their 2021 scoping review of English-language articles published between 1999 and 2018, aggregate texts pertaining to AfL, AoL and AaL -- notions they call 'appealing in theory', but 'unclear (...) to comprehend' (Schellekens et al., 2021: 1). They identify no fewer than nine themes, among which we find the formative and summative functions[^5], under the category of *Educational Outcomes*, with the remaining themes pertaining to outcomes in the *Teacher-Students Roles and Relationships* and *Learning Environment* categories. This illustrates that Assessment \[preposition\] Learning is far from being just about "Learning".
#### Mini Whiteboards as Student Response Systems
Mini whiteboards are the inheritors of writing slates, which predate paper books as a writing medium for pupils and were a feature of classrooms well into the twentieth century (Davies, 2005). If used to communicate information to the teacher, they are *student response systems* (SRS); Huang (2021) notes that both [nearpod.com](https://nearpod.com/) ("high-tech SRS") and mini whiteboards ("low-tech SRS") lead to improved academic outcomes (reading comprehension and vocabulary acquisition in EFL for nursing students) compared to SRS-free teaching (*n* ≥ 50 in each group). There was no significant difference between the high-tech and the low-tech group. Gimbutas (2019) found mini whiteboards to increase engagement for 70% of pupils, and academic engagement for 80%. This only involved ten pupils (tenth grade, English): mini whiteboards can be difficult to scale -- as equipment, if not as practice. There is an overhead in time and class focus in deploying any kind of equipment in the classroom. This is an issue that digital mini whiteboards obviate[^6], which speaks to their **practicality** as an assessment technique.
When comparing SRSs, Fallon and Forrest (2011) found that students preferred clickers to response cards because of the *anonymity* the former offered in responding, although clickers did not improve learning compared to cards (*n* = 70 psychology undergraduates). Stowell and Nelson (2007) posit that the anonymity offered by clickers contributed to a significant increase in participation, compared to both response cards and teacher-initiated IRF questioning, noting further that students in the latter groups inhibited or influenced each other's responses. DMWBs offer a similar advantage over their physical counterparts (whilst still allowing an individual student's work to be displayed to the whole class), which scores a strong point for their **equitability**.
[^1]: The picture is further complicated by the fact American practitioners speak of 'evaluation' (not as Mehan means it) to describe what their British counterparts would call 'assessment'.
[^2]: It is in fact the executive summary, aimed at policymakers, of a much longer work published earlier the same year (Black and Wiliam, 1998b).
[^3]: Expert colleagues who have stayed in the profession long enough to see the before and after have attested to this.
[^4]: Or, in the case of secondary qualifications disproportionately hinging on high-stakes, one-off exams, Assessment *over* Learning!
[^5]: *Enhancing student learning* and *determining the status of learning achievements* in the text.
[^6]: Assuming of course all pupils are already logged into a functioning computer. An assumption that is only safe -- if that -- in computing lessons....
### Planning
I have been using [whiteboard.fi](https://whiteboard.fi/)[^7] increasingly at School B: with [year seven, for retrieval practice](#_Retrieval_Practice:_HTML) starters on HTML ([Appendix A](#appendix-a-y7-web-design-css---lesson-plan)), as well as with [years eleven and thirteen, for exam-question](#exam-questions-with-years-11-and-13) based activities ([Appendix B](#appendix-b-y11-networkprotocols---lesson-plan-and-reflectionstable-description-automatically-generated)). This section details the planning of [a lesson for a mixed-ability year 9 class](#cybercrime-with-9z) (*n* = 25). This was a class acknowledged as disruptive by other teachers in the department; I had only taught them for a couple of weeks and had yet to build relationships with[^8] most of the students. Add to this that the lesson, although on the department's scheme of work, was a PSHE lesson, departing from the creative approach used for computing topics, and a perfect storm of off-task behaviour was appearing on the horizon. Investigating the use of response cards in primary school, Lambert and colleagues (2006) found SRS to decrease disruption, keep pupils on task, and increase response rates; Preszler and colleagues (2007) also found active response systems (in this instance clickers) to increase engagement, especially for otherwise passive learners; they also found improvement in exam results[^9]. I chose to embed DMWBs in the planning and delivery of the lesson, in the hope they would maximise engagement and minimise disruption.
The lesson was adapted from the "Exploring Cybercrime" [materials](https://pshe-association.org.uk/search?queryTerm=Exploring%20Cybercrime) published by the PSHE Association, in partnership with the National Crime Agency (NCA) and its National Cyber Crime Unit (NCCU). The published resources include two lesson plans with associated materials, the first focusing on the types of cybercrime and possible motivations for it, the second on its effects and consequences. I adapted the available materials into a single lesson, targeting the following learning objectives:
- Learn the definition of cybercrime, and its specific instances
- Explore the motivations of people turning to cybercrime
- Understand the personal, social, and legal consequences of cybercrime
A full lesson plan is attached in [Appendix C](#_Appendix_C:_Y9): it proceeds through three activities, each suited to being carried out on MWBs: the 'Ideas Shower'[^10], to elicit prior knowledge; the 'Agreement Spectrum', to promote discussion by asking pupils to declare (and argue) their views; and the 'Diamond Nine', to prioritise the consequences of cybercrime by degree of seriousness. I set up an 'assignment' in whiteboard.fi: a set of whiteboard background pages for pupils to load and copy to their own boards to work on.
<figure>
<img src="media/image2.png" style="width:3.48819in;height:2.47244in" alt="Text Description automatically generated with low confidence" />
<figcaption><p>Figure 1 - The prepared assignment: a page for each activity, plus a reference page on the consequences of cybercrime.</p></figcaption>
</figure>
### Teaching
#### Cybercrime with 9Z
This lesson had the most heavily integrated use of DMWBs, and the *mind maelstrom*[^11] activity was by far the one where they shone. The other two, the *agreement spectrum* and the *diamond nine*, were more typical 'workbook' activities, intended for individual reflection and engaging higher levels of Bloom's taxonomy; still, those tasks benefited from the possibility of displaying a student's work to the class[^12] and using it as a starting point for classroom discussion. Pupils were immediately engaged by the novelty: I introduced *whiteboard.fi* as being by the makers of the much-loved *Kahoot!*, which piqued their interest even before they logged onto the site.[^13]
In the mind maelstrom, working in pairs but on their individual DMWBs, pupils were given five minutes to note down examples of cybercrime, and potential motivations for cybercriminals. I used the teacher's screen to look at all pupils' whiteboards, and the classroom's screen to display selected contributions. I had found out early that leaving the panoptic view on the big screen created disruption and dissuaded some from offering responses, as these would be visible[^14] to the rest of the class.
<figure>
<img src="media/image6.png" style="width:3.48819in;height:2.47244in" alt="Text Description automatically generated with low confidence" />
<figcaption><p>Figure 2 - Monet's Mind Maelstrom. Masterful.</p></figcaption>
</figure>
I first asked for volunteers to show and explain their responses, offering praise, the odd correction, and complementary information, and asking the pupils to use a different colour on their whiteboards to capture fellow students' responses. Each item, be it a type of cybercrime or a motivation, was a starting point for classroom discussion, with definitions and examples.
<figure>
<img src="media/image7.png" style="width:3.53542in;height:2.51944in" alt="Graphical user interface, text, application Description automatically generated" />
<figcaption><p>Figure 3 - Charlotte embraces moral ambiguity</p></figcaption>
</figure>
Going through answers from volunteers covered most of the types of cybercrime and their underlying motives. I then used the teacher's view of all pupils' whiteboards to identify outliers -- contributions made by just a pair or two -- and asked whether they could explain them to the class[^15]. By the end of this activity, pupils had a whiteboard page with their own answers, and, in another colour, other answers from the class, along with any notes on my explanations and commentary. The other activities were conducted on other pages, and each page saved.
<figure>
<img src="media/image8.png" style="width:3.64514in;height:2.4875in" alt="Diagram, table Description automatically generated" />
<figcaption><p>Figure 4 - Annabel's Diamond Nine</p></figcaption>
</figure>
#### Retrieval Practice: HTML with year 7
<figure>
<img src="image9.png" style="width:2.74051in;height:1.87818in" alt="Text, letter Description automatically generated" />
<figcaption><p>Figure 5 - Retrieval quiz, with correction in green.</p></figcaption>
</figure>
In the second lesson on Web Design with the Year 7 classes, I used DMWB for retrieval practice as my starter. I put questions to the class such as "What is the structure of an HTML document?", "Write down examples of HTML tags" or "I have an image called cat.jpg, how do I display it in my HTML page?" -- all concepts they had encountered in the previous lesson.
This was a quick way of revisiting the key points of the previous lesson; I asked pupils who responded correctly to explain their responses to the class, displaying their whiteboard on the big screen. Seeing all the responses enabled me to identify and target specific misconceptions (e.g., quotation marks around attribute values in HTML tags). The persistence of the data on my screen during the lesson gave me a quick and easy gauge of the aptitude of the pupils in a class I didn't know well and had only taught a couple of times. This in turn allowed me to prioritise when circulating, spending more time with the ones needing the most help before they knew they would need it.
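For reference, a minimal sketch of the kind of model answer these questions were driving at (the page title and heading text here are placeholders of my own, not taken from the lesson materials; only `cat.jpg` comes from the actual question):

```html
<!DOCTYPE html>
<html>
  <head>
    <!-- metadata lives in the head -->
    <title>My first page</title>
  </head>
  <body>
    <!-- visible content lives in the body -->
    <h1>Hello, year 7!</h1>
    <p>Some example tags: h1, p, a, img.</p>
    <!-- the attribute value is quoted: the sticking point mentioned above -->
    <img src="cat.jpg" alt="A photograph of a cat">
  </body>
</html>
```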
<figure>
<img src="media/image10.png" style="width:2.82388in;height:1.99123in" alt="Text Description automatically generated" />
<figcaption><p>Figure 6 – Yargui spotted all <strong>five</strong> errors!</p></figcaption>
</figure>
Another functionality unique to DMWBs is the possibility for the teacher to write on their own whiteboard, then copy this to every pupil's whiteboard for amendments and correction. This was ideal for a "Fix this HTML code" exercise.
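By way of illustration, a hypothetical snippet of the kind pushed out for correction -- a reconstruction, not the exact exercise shown in Figure 6 -- with the planted errors flagged in a comment:

```html
<!-- Broken on purpose: an unclosed heading, an unquoted attribute
     value, and a misspelt tag name -->
<html>
  <body>
    <h1>My pets
    <p>I have a cat called Marmalade.</p>
    <img src=cat.jpg>
    <strnog>She is very fluffy.</strnog>
  </body>
</html>
```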
#### Exam questions with years 11 and 13
One of the most insightful lessons I have had the chance to observe was the PCM of School A[^16] teaching business to a small group of A-level students, making masterful use of physical mini whiteboards. This engaged the whole class in assessment, and the teacher would check for understanding, asking additional questions before moving to the next part of the lesson. The MWBs were the medium, but the true skill was 'the "respond" part of responsive teaching' (Barton and Morgan, 2022): the information going from the teacher to the pupils is as important as that going the other way.
<figure>
<img src="media/image11.png" style="width:2.98403in;height:2.08264in" alt="Graphical user interface, text, application Description automatically generated" />
<figcaption><p>Figure 7 - Activity questions over DWMBs</p></figcaption>
</figure>
At School B, year 11 students had to learn, in a single lesson, two very specific pieces of information left over in the Edexcel GCSE specification for 2022 after publication of the advance notice document. I ran this as two smaller 30-minute lessons with the same structure: a starter quiz, five minutes of presenting the material, question activities (e.g. calculating the transfer time of a file), and a short exam question to finish[^17]; DMWBs were embedded throughout -- for the starter, we used them as physical MWBs, but in the later activities, pupils were effectively writing as if in their books (they went on to save their whiteboards and include them in their notes), with the teacher *able to see the work of the whole class in real time*. Instead of circulating, I would go directly to a pupil I had identified as needing support or scaffolding. Independently of any pupil progress, the practical advantages are immediately self-evident.
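For context, the transfer-time questions rest on the standard GCSE relationship between file size and bandwidth; a worked example with hypothetical figures (not the numbers used in the actual lesson):

$$\text{transfer time (s)} = \frac{\text{file size (bits)}}{\text{bandwidth (bits per second)}}$$

so a 60 MB file over a 16 Mbps connection takes (60 × 8) Mb ÷ 16 Mbps = 30 s.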
<figure>
<img src="media/image12.png" style="width:3.01181in;height:2.0748in" alt="Text, letter Description automatically generated" class="internal-embed"/>
<figcaption><p>Figure 8 – Exam question on email protocols</p></figcaption>
</figure>
### Evaluation
#### Quantitative (Cybercrime lesson)[^18]
Pupils filled in short polls at the start and end of the lesson. Each had three questions asking them to rate their understanding of the nature, motives, and consequences of cybercrime, plus a free-text field for what they hoped to learn[^19] or what new piece of information they did learn. The metrics are eloquent:
| LESSON START (*n* = 25) | Q1 | Q2 | Q3 |
|---|---|---|---|
| Mean | 6.28 | 6.12 | 6.12 |
| Standard Deviation | 2.09 | 1.99 | 2.73 |
| Median | 6 | 7 | 7 |
| Mode | 8 | 7 | 4 |
| Min | 2 | 1 | 1 |
| Max | 10 | 9 | 10 |

| LESSON END (*n* = 23[^20]) | Q1 | Q2 | Q3 |
|---|---|---|---|
| Mean | 8.61 | 8.30 | 8.65 |
| Standard Deviation | 1.44 | 1.61 | 1.34 |
| Median | 9 | 9 | 9 |
| Mode | 10 | 9 | 10 |
| Min | 4 | 5 | 5 |
| Max | 10 | 10 | 10 |
*Rate your understanding of:*
- ***Q1** What cybercrime is*
- ***Q2** Why do people commit cybercrime (motivations)*
- ***Q3** What consequences cybercrime has*
At lesson start, although the distributions (cf. [Appendix D](#_Appendix_D:_Y9)) look very different -- a reassuring sign the pupils are not filling in the form *completely* randomly -- the statistical metrics are strikingly similar; my experience at School A is that KS3 pupils overestimate their prior knowledge in surveys. Overall, self-reported understanding has improved and its spread has reduced, consistent with Black and Wiliam (1998). Particularly notable to me was the increase in the *minimum*.
#### Qualitative reflections
Digital MWBs are a powerful AfL tool -- they offer all the functionality of physical MWBs, minus the logistical difficulties (getting a board and a working marker into the hands of each pupil -- and collecting them afterwards). They also have unique advantages: pupils' contributions need not be shown to the whole class, which facilitates participation (Stowell and Nelson, 2007; Fallon and Forrest, 2011), and there is a degree of persistence, both within the lesson and across the unit. They enable a great amount of creativity from the pupils, something which some of the girls made full use of ([Appendix F](#_Appendix_F:_Selected)).
I also evaluated DMWBs as part of digital note-taking and record keeping, another potential use. Microsoft Teams offers integration with two other products, OneNote and Whiteboard, with education-specific features that seem to point to this use case. *whiteboard.fi*, however, is ill-suited to 'book use': students need to save pages individually and, without authentication, cannot recover their work if their session times out -- even with the premium version.
As an assessment method, I have found them, as expected, to be equitable[^21], extremely practical, and exactly as valid and reliable as the questions and exercises conducted with them.
### Conclusion: black boxes all the way ~~down~~ in
> The basic tool for the manipulation of reality is the manipulation of words. If you can control the meaning of words, you can control the people who must use the words. ([Dick](https://external-preview.redd.it/02w_lkoOjGVSCUCHwrq-lYPrNEMpBHJLLGmAnhYyXd0.jpg?width=640&crop=smart&auto=webp&s=199d7da5c314034f5f1b753e2e0df4d682587f4c), 1978)
Throughout the literature, the nomenclature focus has evolved from assessment (in its strictest definition, *the gathering of pupil attainment information*), to *formative* assessment (the same, *plus feedback from the teacher to the pupil*), to *AfL* (assessment, feedback, *plus the ability for the student to act on this feedback*). The definition has moved from the *nature* of assessment, to include its *function*, then its *intention.* The term 'assessment for learning' directly implies that assessment *promotes pupils' progress*, but formative assessment can only *support students' strides*[^22] if it is embedded in a loop in which the teacher acts on assessment information, delivering feedback, and the student is able to receive this feedback and act upon it.
Opening the black box of the classroom, we therefore find three smaller, but equally black, boxes[^23], whose outputs feed into each other's inputs. Herein lies the difficulty of formal evaluation of 'aspects' or 'methods' of assessment: for them to promote pupil progress, they need to be embedded in this feedback loop, and it is all three steps (or indeed boxes) that are evaluated simultaneously. Hattie and Timperley (2007) agree that effective feedback answers the three questions of AfL, and they note further that it does so at four *levels*: task, process, self-regulation, and self (identity). The last two fall under the umbrella [of](https://ctl.s6img.com/society6/img/EvknIZcv4Nui85j6dXaHSGsrcXE/w_700/posters/top/~artwork,fw_2719,fh_3621,fx_-251,fy_241,iw_3264,ih_3264/s6-original-art-uploads/society6/uploads/misc/29ec7f9cbad44b65890ebba3eb6274f5/~~/vancouver-seal-with-umbrella-posters.jpg) SEAL, the social and emotional aspects of learning: the way in which the feedback is delivered, and the relationship, nascent as it may be, matter just as much, if not more, to pupil progress than the method of assessment. This in turn makes any study of assessment for progress difficult to control[^24], impossible to blind, and gravid with confounding factors that are poorly understood and rarely mentioned -- let alone controlled for. The drive of the field of education towards evidence-informed practice is noble, and the ARG was at its forefront; Tom Sherrington, on *Inside the Black Box*, blogs that 'It was the first time many of us realised that people undertook research into classroom practice at all' (Sherrington, 2021, first paragraph). It is great that Black and Wiliam explain, three pages into their pamphlet, what an effect size is and how it is calculated; it is a shame that they mention neither *n* nor *p*. This fetishisation of 'evidence' without critical understanding[^25] is a perfect illustration of *a little knowledge being a dangerous thing.*
Schellekens and colleagues conclude in their review that 'an assessment culture with a central role for students is *still in its infancy*' (Schellekens et al., 2021: 8) -- for a child in its twenties, this is a problem. A continuous professional development disorder. Teachers, more than policymakers, have a role in the growth of a student-centred assessment culture that promotes progress. I do not know, however, that the prescriptive[^26] approach of the ARG[^27] and its inheritors is the best way to achieve this. As practitioners, we are less interested in an academic evaluation of assessment methods in a vacuum than we are in a personal, organic evaluation of the compatibility of an assessment method with our individual teaching style, and our classes. If we want to investigate assessment to promote pupil progress, we would be well advised to simply *try things*[^28] *and see what works.*
### Bibliography
Barton, C. (2022). *Tips for Teachers: Jo Morgan* \[Podcast\]. Available at: <https://tipsforteachers.co.uk/jo-morgan/> (Accessed April 2022).
Black, P. (2006). Assessment for learning: where is it now? Where is it going? *Improving student learning through assessment*, 9-20.
Black, P. J., Wiliam, D., and King's College London, Department of Education and Professional Studies (1998a). *Inside the black box: raising standards through classroom assessment.* London: Nelson.
Black, P. and Wiliam, D. (1998b). Assessment and classroom learning. *Assessment in Education: principles, policy & practice*, *5*(1), 7-74.
Brand, D. (2016). *Shifting the IRF paradigm: an action research approach to improving whole-class interactional questioning competence*. Doctoral dissertation, Newcastle University. Available at: <http://theses.ncl.ac.uk/jspui/handle/10443/3230>
Department for Education (2011). *Teachers' Standards.* Department for Education.
Davies, P. (2005). Writing slates and schooling. *Australasian Historical Archaeology*, *23*(2005), 63-69.
Dick, P. K. (1978, printed 1995). How to build a universe that doesn't fall apart two days later. *The Shifting Realities of Philip K. Dick: Selected Literary and Philosophical Writings*. Vintage.
Fallon, M. and Forrest, S. L. (2011). High-tech versus low-tech instructional strategies: A comparison of clickers and handheld response cards. *Teaching of Psychology*, *38*(3), 194-198.
Gimbutas, E. C. (2019). *The effects of using mini whiteboards on the academic performance and engagement of students in a tenth grade resource English/Language Arts classroom*. Master's dissertation, Rowan University. At: <https://rdw.rowan.edu/etd/2712/>
Hattie, J. and Timperley, H. (2007). The power of feedback. *Review of educational research*, *77*(1), 81-112.
Huang, J. W. T. (2021). Is SRS (Student Response System) icing on the cake? Comparing efficacy of different modalities of SRS engagement incorporated into collaborative reading in an EFL classroom. *The Reading Matrix: An International Online Journal*, *21*(1).
Lambert, M. C., Cartledge, G., Heward, W. L. and Lo, Y. Y. (2006). Effects of response cards on disruptive behavior and academic responding during math lessons by fourth-grade urban students. *Journal of Positive Behavior Interventions*, *8*(2), 88-99.
Mehan, H. (1979). *Learning lessons: Social organization in the classroom*. Cambridge, MA : Harvard University Press.
Preszler, R. W., Dawe, A., Shuster, C. B. and Shuster, M. (2007). Assessment of the effects of student response systems on student learning and attitudes over a broad range of biology courses. *CBE---Life Sciences Education*, *6*(1), 29-41.
Sadler, D. R. (1989). Formative assessment and the design of instructional systems. *Instructional science*, *18*(2), 119-144.
Scriven, M. (1967). *The Methodology of Evaluation*. In Perspectives of Curriculum Evaluation. AERA. Monograph 1.
Sherrington, T. (2021). *'Inside the Black Box'. Classic Education Gold from Wiliam and Black.* Available at: <https://teacherhead.com/2021/09/26/inside-the-black-box-classic-education-gold-from-wiliam-and-black/> (Accessed April 2022)
Sinclair, J. M. and Coulthard, M. (1975). *Towards an analysis of discourse: The English used by teachers and pupils*. Oxford University Press, USA.
Stowell, J. R. and Nelson, J. M. (2007). Benefits of electronic audience response systems on student participation, learning, and emotion. *Teaching of psychology*, *34*(4), 253-258.
Twiselton, S., Blake, J., Francis, B., Gill, R., Hamer, M., Hollis, E., Moore, R. and Rogers, J.N., (2019). *ITT core content framework.* Department for Education.
Wiliam, D. and Thompson, M. (2017). Integrating assessment with learning: What will it take to make it work? In *The future of assessment* (pp. 53-82). Routledge.
## Appendices
### Appendix A: Y7 Web Design (CSS) - Lesson Plan
This is a more realistic example of my lesson plans than the official ones in the University of Roehampton pro-forma. I will often display it on the visualiser at the very start (hence the relative neatness of this one), sometimes writing some or most of it live. This is a great way to model not only what a correctly maintained workbook should look like (product), but also how to get there (process). I was inspired by how effective (and fun) live coding is and ported the technique to book work.
{width="6.409027777777778in" height="4.807638888888889in"}
I have found that letting the pupils know the outline of the lesson (**"L.O." -- left**) alleviates their anxiety and gets them to think of progress through the lesson (Learning Objectives, **"L.O." -- right**) as a collaborative endeavour with the teacher. To hold up the teacher's side of the deal, I try to stick to the timings, using a physical kitchen timer.
The "**MWB"** in the first ten minutes is retrieval practice using mini whiteboards. "Cheevos"[^29] means A*chieve*ment Points -- the school's reward system. A quiz without prizes is a cruel trick to play on twelve-year-olds.
### Appendix B: Y11 Network/Protocols - Lesson Plan and Reflections
{width="6.40890748031496in" height="7.410256999125109in"}
### Appendix C: Y9 Cybercrime - Lesson Plan
### Appendix D: Y9 Cybercrime pupils survey results
#### What is cybercrime?
<figure>
<img src="media/image18.png" style="width:6.26806in;height:2.98125in" alt="Mean: 6.52173913 Std Dev: " />
<figcaption><p>Figure 9 Lesson Start</p></figcaption>
</figure>
<figure>
<img src="media/image19.png" style="width:6.26806in;height:2.97986in" alt="Application Description automatically generated with low confidence" />
<figcaption><p>Figure 10 Lesson End</p></figcaption>
</figure>
#### Why do people turn to cybercrime?
<figure>
<img src="media/image20.png" style="width:6.26806in;height:2.98125in" alt="Chart Description automatically generated with medium confidence" />
<figcaption><p>Figure 11 Lesson start</p></figcaption>
</figure>
<figure>
<img src="media/image21.png" style="width:6.26806in;height:2.97986in" alt="A picture containing chart Description automatically generated" />
<figcaption><p>Figure 12 Lesson end</p></figcaption>
</figure>
#### What are the consequences of cybercrime?
<figure>
<img src="media/image22.png" style="width:6.26806in;height:2.98125in" alt="Chart, bar chart, histogram Description automatically generated" />
<figcaption><p>Figure 13 Lesson start</p></figcaption>
</figure>
<figure>
<img src="media/image23.png" style="width:6.26806in;height:2.97986in" alt="A picture containing application Description automatically generated" />
<figcaption><p>Figure 14 Lesson end</p></figcaption>
</figure>
### Appendix E: PSHE UK/NCA Resources on Cybercrime
{width="6.76875in" height="9.514583333333333in"}
### Appendix F: Selected examples of pupil creativity
<figure>
<img src="media/image27.png" style="width:5.20934in;height:3.67433in" alt="Shape Description automatically generated with low confidence" />
<figcaption><p>Figure 16 - Y11 “What does [b, c, a, t, z] look like after the first pass of a bubble sort?”</p></figcaption>
</figure>
<figure>
<img src="media/image28.png" style="width:5.22139in;height:3.94484in" alt="Text, whiteboard Description automatically generated" />
<figcaption><p>Figure 17 - I call this one 'Safeguarding Concern'</p></figcaption>
</figure>
<figure>
<img src="media/image29.png" style="width:6.76875in;height:4.67292in" alt="Diagram Description automatically generated" />
<figcaption><p>Figure 18 - not all in 9Z were on task all the time but come on - look at that jungle.</p></figcaption>
</figure>
[^7]: After finding out about it from Amy Cartwright at Ibstock School Place.
[^8]: (or indeed learn the names of)
[^9]: The study was on undergraduates in six distinct biology courses (total *n* = 549). Equally excitingly, values of [*p*](https://www.explainxkcd.com/wiki/images/3/3f/significant.png) were calculated and published.
[^10]: A new, more sensitive naming for the exercise formerly known as 'brainstorm', this term leaves me underwhelmed. I am trying to make 'mind maelstrom' happen instead, with limited success thus far.
[^11]: I'm leaning into it.
[^12]: Always with their explicit consent, asked for in private to minimise pressure, lest it cancels the 'anonymity effect' (Stowell and Nelson, 2007; Fallon and Forrest, 2011). Also, basic courtesy.
[^13]: It was their very first exposure. I expect this magic will abate. I use the indicative because 100% of computing teachers to whom I have shown whiteboard.fi have immediately decided to use it heavily (*n*=2, my mentors)
[^14]: They would have been unreadable for most beyond the first row, but the *notional* loss of anonymity still had a visible impact.
[^15]: And, if not, doing it myself. There were a fair few 'I've heard the word and know it's bad, but beyond this I'm not sure' responses - which incidentally was very much my stance on 'assessment' before this assignment.
[^16]: A selective, *invoice-sending,* Catholic independent mixed-sex school.
[^17]: Those completing the question at the end of the second half moved on to an extension on sorting algorithms, also run over DMWBs.
[^18]: I readily acknowledge the many flaws in my methodology, but *you wanted data, Miles.*
[^19]: To the question "what are you hoping to learn in today's lesson?" S. answered "how to commit cyber crime anonymously". *Precious, aren't they?*
[^20]: There's always one. Or two, in this instance. One of whom was S.
[^21]: Though they will only ever be as *accessible* as the computers themselves.
[^22]: ... or accelerate apprentices' advancement...leverage learners'...learning?
[^23]: Of course, a computational thinker would attempt such *decomposition.* Typical.
[^24]: If I teach the control class, my lack of excitement for printed worksheets and paper books will be sure to transpire in my feedback. If someone else does it, having a different teacher delivering feedback is a confounding factor.
[^25]: I am not implying Black and Wiliam lack this critical understanding, merely that *their readers* might.
[^26]: It is telling that the key literature on assessment in the UK is aimed -- quite ostensibly -- at policy makers first, and educators second. More than ten years after the dissolution of the ARG, I witness the results of its efforts in the classroom, in CPD sessions and at university, but I am struggling to see the connection to government policy, beyond the mentions in the CCF and the Teachers' Standards.
[^27]: The ARG was dissolved in 2010, the year of the appointment of Michael Gove as Secretary of State for Education and the start of his reforms. "[Mission Accomplished](https://knowyourmeme.com/memes/mission-accomplished)" ?
[^28]: Whilst still being methodical, conscious and reflective throughout.
[^29]: I have been leaning into this one too.