There should be greater public involvement in deciding what is a legitimate ‘nudge’
The Coalition Government has been at the forefront of using insights from behavioural research to craft more effective policies: ‘nudging’ citizens, in other words. Rikki Dean argues that ‘nudges’, especially those that rely on deception or concealment, should be subject to a ‘participatory principle’. Only citizens themselves can legitimately rule on what is in their own interest, and so there should be greater exploration of how to involve the public in decisions about their use.
Since its creation, the Cabinet Office’s Behavioural Insights Team (BIT), or ‘Nudge Unit’, has received a lot of attention. Fêted both in the media and by the Prime Minister for its innovative, primarily experimental, approach to policy-making, BIT recently ran into its first scandal when The Guardian newspaper claimed that its changes to Jobcentre Plus procedures involved forcing job-seekers to undertake ‘bogus psychometric tests’ designed to boost their psychological resilience. The story raises some interesting questions about the ethical limits of libertarian paternalism. Is it acceptable for government to deceive us if it is for our own good? And can we trust that a nudge is a helping hand and not a shove in the back?
Libertarian paternalism is a novel reformulation of the Enlightenment project which simultaneously rejects and reaffirms the notion of human perfectibility. Unlike Marxism, for instance, where a rationally organised utopia is predicated on a rational transformation of human nature, libertarian paternalism rejects the perfectibility of individual humans and instead proposes that the clever design of the right ‘choice architecture’ can harness the imperfections of individuals – their irrationality, their inertia and so on – to fulfil the promise of a more rational society. Such a philosophy inevitably raises the questions of who the choice architects are, what their rational project is, and on what basis they can legitimately implement it. In the case of psychometric testing, the answer to the first question is simple: BIT.
Question two is more difficult. The stated agenda is ‘helping people back into work’, but there are some unstated objectives too: for instance, optimising Jobcentre Plus processes and reducing government spending on out-of-work benefits. The third question is harder still: it is hard to see a sound basis for legitimately deceiving people into taking psychometric tests to boost their psychological resilience.
One potential basis for legitimacy is the consequentialist argument: does it work? If the tests inculcated greater psychological resilience, which in turn led to the job-seekers finding employment, then they are justified. The BIT trial achieved quite impressive results: the treatment group was 15-20% more likely to find work than the control group – quite an achievement when compared with the derisory, worse-than-doing-nothing performance of the expensive Work Programme. However, there are some notable difficulties in interpreting the results of the randomised controlled trial (RCT) for this intervention. First, four interventions were tested simultaneously – the equivalent of giving a patient four different drugs at once in a medical trial – so it is impossible to know whether the psychological resilience element works on its own. Second, it is doubtful that the effect would be scalable, since the effect size measured by the RCT occurs in a partial equilibrium and full implementation would create a general equilibrium effect: if the intervention mainly helps treated job-seekers compete more effectively for a fixed pool of vacancies, rolling it out to every claimant would do little to raise employment overall. Scalability is a common problem with RCTs in social policy, and an important consideration in assessing whether an RCT is an appropriate test for an intervention, yet it is missing from BIT’s guide to RCTs.
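To make the attribution problem concrete, the toy simulation below is a purely illustrative sketch: the baseline rate and component effects are invented and bear no relation to the actual trial data. It models a two-arm trial in which the treatment bundles four components; the trial recovers the combined effect, but the same observed difference is consistent with many different splits across the four components, so it cannot tell us whether the psychometric-test element contributes anything at all.

```python
# Toy illustration only: all numbers are hypothetical, not BIT trial data.
import random

random.seed(1)

BASELINE = 0.30                               # assumed job-finding rate in the control arm
COMPONENT_EFFECTS = [0.02, 0.00, 0.05, 0.01]  # invented effects of the four bundled changes
N = 10_000                                    # participants per arm

def finds_work(p):
    """Simulate whether one job-seeker finds work, with probability p."""
    return random.random() < p

control_rate = sum(finds_work(BASELINE) for _ in range(N)) / N
treated_rate = sum(finds_work(BASELINE + sum(COMPONENT_EFFECTS)) for _ in range(N)) / N

# The trial observes only this single contrast (roughly 0.08 here). The same
# figure would arise from effects of [0.08, 0, 0, 0] or [0.02, 0.02, 0.02, 0.02]:
# one number cannot be decomposed into four component effects without further,
# untested assumptions.
print(f"Observed treatment-control difference: {treated_rate - control_rate:.3f}")
```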
‘What works’ has become something of a sacred mantra in policy circles since the Blair years, and recently reached its rhetorical apogee in the proposal for ‘what works’ centres for evidence in social policy. However, the resurgence of this naïve belief in ‘scientific government’ – outmoded in the social sciences since the failure of the US Great Society Programmes of the 1960s – does little to resolve our ‘legitimacy of deception’ conundrum. A world view that posits the policy-maker as scientist, society as her laboratory and citizens as her unwitting lab rats is anathema to a generation of researchers raised on the importance of ethical reflection and informed consent. Ever since Kant, the notion of respect for the moral autonomy of persons has arguably been the bedrock of Western morality, and any policy that does not respect this autonomy – whether it works or not – is unlikely to command legitimacy.
That a nudge should respect, or even promote, individuals’ moral autonomy is a useful principle for judging its legitimacy. This would not prohibit all nudges; such a principle is compatible with forcing citizens to choose whether or not to become an organ donor, for instance. Thaler and Sunstein, in their bestselling book Nudge: Improving Decisions about Health, Wealth and Happiness, invoke this Kantian tradition to draw the boundaries of libertarian paternalism and reject invidious nudges such as subliminal advertising. They call for transparency in the use of nudges and argue for a Rawlsian ‘publicity principle’: that government should not adopt any policy it would be unwilling to defend in public. I would go further: nudges, especially those that rely on deception or concealment, should be subject to a ‘participatory principle’.
Nudges may be publicly justified as being in citizens’ best interests; however, Foucault has written extensively on how the modern state manifests its authority in concern for the population and the optimisation of its health, wealth and happiness, and uses this concern to justify the regulation of citizens’ conduct. Interpreted in the light of the Foucauldian notion of ‘subjectification’, the nudge becomes a disciplinary technology through which citizens’ actions are policed by their own unconscious selves; it is by this process that government is accomplished through the agency of the governed. In our psychometric testing example, the intervention is couched in terms of helping people back into work (a laudable aim), but it also targets a key government policy of reducing expenditure on unemployment. This is representative of the subjectified nature of current debates on unemployment, which frame the issue as a problem of agency among the unemployed rather than as, say, a structural problem of a lack of employment opportunities. Framed in this way, it is only by governing through the agency of the unemployed that budget deficits can be remedied.
Separating the interests of citizens from the interests of the state is not a simple task. BIT may be trying to promote citizens’ health, wealth and happiness, but its existence rests on raising or saving money for the Treasury – it has to save at least ten times what is spent on funding the Team. Nudges may be justified as the altruistic acts of a beneficent government department, but whether they are in fact in citizens’ own interests should be properly scrutinised. There is only one authority that can legitimately rule on what is in citizens’ own interests: citizens themselves. Therefore the publicity principle is not sufficient and, if we are to abide by our long-held moral intuitions regarding respect for the moral autonomy of persons, there should be greater exploration of how to involve the public in decisions about the applications of libertarian paternalism. Citizens themselves should judge whether they think a nudge is a helping hand or a shove in the back. There is, of course, something of a paradox in predicating the use of techniques that can shape the way the public thinks on what the public thinks of them and, it should go without saying, the public should not be nudged into consenting to nudges.
Originally posted at LSEpoliticsblog, 17 May 2013. Reposted on Democratic Audit.