Picking up a previous post, “… what are we measuring?”, I believe there is a fundamental problem in how we measure the positive benefits of technology.
Whatever new and exciting creative media might be dreamed up in a web 2.0 world, educators and exam boards squash and flatten it into a 2D paper-based medium in order to assess it. I admit to being perplexed when vocational courses such as GNVQ were introduced into schools. In the main they were not vocational; rather, they were flattened, mechanistic, portfolio-based 2D representations.
Of course there are true vocational qualifications. You only have to see the work of centres like Thanet Skills Studio in Thanet and Community College Whitstable, amongst others. But these are still not typical; the landscape is still dominated by soft vocational options that can be delivered in a classroom. It’s no wonder that the greatest worry about technology is plagiarism.
Even in schools that have embedded ICT use, students must leave their familiar technology at the exam-room door and work alone. What are we measuring? What’s the point of technology-rich environments when the de facto measure is a flattened 2D world of paper, representing lonely achievement with no authentic audience? This is what the success of our schools and Headteachers is measured by, and it is why evidence of the impact of technology is scarce.
Children in the real world create intellectual assets in a range of media: video, images, blogs, MySpace pages and so on. Whilst schools are recognising that rich media have curricular value, especially in terms of audience, the assessment system compresses all of this into a 2D sheet of paper that is shipped off to an external exam board. The recent exam round shows just how embedded this is becoming in the technology exam boards use: they are now digitally scanning text for marking, which only entrenches the Flatworld approach and the 2D environment. Assessment must recognise digital assets in a range of formats, but examination boards don’t know how to achieve this; that should be the focus of the debate.
There is also the problem of what we are assessing. I have come across automated web 2.0 tools that do all the work of creating whizzy digital artefacts. A different child might have much deeper knowledge and skills but produce something less polished, simply because they don’t have access to the same software or resources. How do we measure that? Is it fair to base assessment on the final product, or do we need to define the process more consistently?
This post presents some of the challenges I see as fundamental, but it does not offer answers. Please comment.