Without Theory, there are only Opinions
By Roger Dale, University of Bristol.
The phrase ‘without data you’re just another person with an opinion’, frequently uttered by Andreas Schleicher in defence of the PISA surveys, is – I argue – not only indefensibly dismissive but deeply misleading. Without knowing why and how data were collected (i.e., the theory or theories informing their collection), data such as those assembled by PISA remain mere agglomerations of numbers, open to the post hoc attribution of whatever evidential status an observer might wish to attach to them. We might therefore reply that ‘without theory, data both lack meaning and carry the capacity to seriously mislead’. Or, as Immanuel Kant put it rather more pithily, ‘Concepts without percepts[1] are empty (but) percepts without concepts are blind’.
And this leads to a crucial question: in the absence of any theory of comparison beyond the simplest juxtaposition, and given the suspension of disbelief required to accept that the same instruments can validly provide any meaningful comparison between countries that clearly differ from each other in many respects, why does anyone take notice of the results?
There are two quite different sets of issues here, though both relate to what is being ‘compared’ through PISA. One concerns ‘countries’ as the units of comparison.[2] On the one hand, the analytic problems generated by methodological nationalism[3] (about which I have written at some length) are by now well recognised and, increasingly, observed. On the other hand, when comparing countries’ performance on tests, what assumptions are made about ‘countries’ as the bases of the comparisons? These difficulties have been recognised at least since Przeworski and Teune pointed out the nonsense of assuming that the proper names of countries in themselves contain everything necessary to enable effective inter-country comparison, and argued for replacing them with the names of variables: ‘replacing the notion that “nations differ” by statements formulated in terms of specific variables’ (Przeworski and Teune, 1970: 29-30).[4]
The other, and possibly greater, problem concerns the goals of PISA, and the foundations upon which they are erected. To take just one instance: PISA says it needs to ‘develop indicators that show how effectively countries have prepared their 15-year-olds to become active, reflective and intelligent citizens from the perspective of their uses’ of these subjects (OECD, 2006: 114). No evidence or data are, or could be, provided about the future consequences of what 15-year-olds have learned at school – which makes the claim sound very much like somebody’s opinion, or a fairly baseless wish. The problem is that it is not just anybody’s opinion, but that of what some see as the most powerful agent of education policy formation in the world.
So all this leads us to ask: what does it matter, and what purposes does it serve? To answer this, we need to take PISA itself as explanandum rather than explanans (something requiring explanation rather than providing it), and try to theorise the basis of its success. This entails asking not just how far it has achieved what it claims to, but what differences it has actually made. One piece of evidence is a recent internal review of how member countries have used PISA in their own systems. This shows that in a significant number of cases, countries’ responses are based on doing what will maintain their relative ranking rather than on radically altering their education systems. It is the position that matters, it seems, not what the position is based on (see Breakspear, 2012). This might lead us to consider the possibility that the most significant outcome of PISA is not how it has changed the education of 15-year-olds around the world, but how, through ranking, it has created a high level of reputational risk at the level of national education systems, which acts as a very powerful pressure to demonstrate conformity. Such competitive comparison, even when lacking in validity, is an extremely effective technology of governance in governments’ attempts to control education systems.
Such fears on the part of governments, though generated by the changing forms and demands of capitalism, have been formulated into diagnoses and remedies by international organisations such as the OECD.
As part of these diagnoses (which do not so much represent solutions for national governments, as provide definitions of the problems they face), measurement becomes a tool of management. Where this happens, as has frequently been pointed out, ‘what gets measured gets managed’, though we should note that the full proposition is ‘What gets measured gets managed – even when it’s pointless to measure and manage it, and even if it harms the purpose of the organisation to do so’ (Caulkin, 2008).
Concluding Comments
The data collectors of course recognise that data are never sufficient in themselves, but they seem content if the data feel ‘intuitively’ good enough, especially when they have been enthusiastically received by many interested and informed parties and fit nicely with their perpetrators’ prejudices. So what’s the problem? Why can’t we settle for technically brilliant data? Why do I argue that opinions should not be based on data alone, without knowing why and how the data were generated? And, most importantly, why does it matter so much with PISA?
The problem is fourfold:
First, surveys like PISA can never reveal, let alone take into account, everything about any particular issue in any one country. Nor do they have grounds (without theory) to claim that the differences we know exist between countries are irrelevant – itself another theory-dependent claim. And if we say it is not necessary to know everything, only what is important, we are already acknowledging the need for a theory of what makes things important, because the data do not speak for themselves.
The second problem is that, as a result of these shortcomings, theory is replaced by the informed guess, or at best by what are more politely known as empirical generalisations. The most prominent example of this in education is ‘the correlation’, whose ‘intuitive’ appeal often manages to overcome all the health warnings and examples of spuriousness that elementary methods textbooks can throw at us.[5]
The third problem is that the consequences, as opposed to the outputs, of the exercise are not implicit in its findings. This is extremely important in an area like education – because ‘the facts’ of education management do not arrive from nowhere. They do not exist, except as a result of the PISA system, and moreover they cannot be taken in isolation. Not only are the consequences – or even the uses to which they are put – of the assumed ‘effects’ of the findings unknowable, but the findings themselves could be explained in myriad ways.
The final problem, and perhaps the most important, is that this subordination of theory to data means that we have no way of knowing why or how the findings work, on and for whom, or under what conditions – nor how they might be changed, either in themselves or in their implications, in ways that might bring about changes deemed desirable.
These are the critical elements that data fetishism tends to make invisible and that, even worse, could clear the way for the emergence of a nightmarish new paradigm of knowledge where, as one sardonic critic put it:
‘massive amounts of data and applied mathematics replace every other tool that might be brought to bear. Out with every theory of human behaviour, from linguistics to sociology! Forget taxonomy, ontology, and psychology! Who knows why people do what they do? The point is they do it, and we can track and measure it with unprecedented fidelity. With enough data, the numbers speak for themselves.’
Roger Dale is a Professor of Education at the Centre for Globalisation, Education and Society, University of Bristol, and Co-editor and Review Editor, ‘Globalisation, Societies and Education’. Email: R.Dale@bristol.ac.uk
[1] A percept is an object of perception: that which is apprehended through the senses (not to be confused with a precept, a general rule prescribing a course of action, conduct or thought).
[2] Blog Editor: This use of countries as a unit of comparison is, of course, now widespread in many global reports.
[3] Blog Editor: First discussed in the mid-1970s, methodological nationalism is ‘the assumption that the nation/state/society is the natural social and political form of the modern world’ (Wimmer and Glick Schiller, 2002: 302). The nation-state is considered to be the appropriate primary unit of analysis.
[4] Przeworski, A. and Teune, H. (1970) The Logic of Comparative Social Inquiry. New York: Wiley-Interscience.
[5] Blog Editor: A spurious relationship is a mathematical relationship in which two variables (let’s call them x and y) have no direct causal connection, yet it may be wrongly inferred that they do. The fact that x and y are correlated is not proof of a causal relationship; both x and y may have been affected by a third variable (or more).
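To make the footnote concrete, here is a minimal sketch in Python using NumPy (the variables z, x and y are purely illustrative and not drawn from PISA or any real dataset), showing how a shared third variable can produce a strong correlation between two variables that never influence each other:

```python
import numpy as np

rng = np.random.default_rng(seed=0)
n = 10_000

# z is the hidden common cause; x and y have no direct causal link.
z = rng.normal(size=n)
x = 2.0 * z + rng.normal(size=n)   # x is driven only by z (plus noise)
y = -1.5 * z + rng.normal(size=n)  # y is driven only by z (plus noise)

# Naive reading: x and y look strongly (negatively) correlated.
print("corr(x, y):", np.corrcoef(x, y)[0, 1])  # roughly -0.74

# Controlling for z: regress each variable on z, then correlate the residuals.
x_res = x - np.polyval(np.polyfit(z, x, 1), z)
y_res = y - np.polyval(np.polyfit(z, y, 1), z)
print("corr after controlling for z:", np.corrcoef(x_res, y_res)[0, 1])  # near 0
```

The first printed correlation is strongly negative even though neither variable affects the other; once the common cause z is controlled for, the correlation collapses towards zero – exactly the kind of ‘health warning’ the textbooks issue.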