The world is a complicated place – partly because humans are complicated animals. One of the many things this means is that when governments try to influence our behaviour, their chances of stuffing up are surprisingly high.
Consider this. Say I’m an investment adviser telling you (or your parents or grandparents) where to invest your retirement savings. I warn you that, should you take my advice, I’ll be paid a commission by the managers of the investments I put you into.
How do you react?
Well, you should react by becoming a lot more cautious about following my advice. It’s clear I have a conflict of interest. Is my advice aimed at doing the best I can for you, or at maximising the commissions I earn?
When governments require investment advisers to disclose any conflict of interest to their clients, that’s how the pollies expect you’ll react. They also expect that this requirement will prompt advisers to eliminate or reduce any conflict so their advice is more likely to be trusted.
But research by Dr Sunita Sah, a psychologist at Cornell University in upstate New York, has found it often doesn’t work like that. Such disclosures do indeed reduce clients’ trust, but they often leave people feeling social pressure to act on the advice anyway.
Clients may be concerned that refusing to follow the advice would be a signal of their distrust in the adviser, with whom they’ve often formed a personal bond. They may even interpret the disclosure as a request that the advice be taken, as a favour to the adviser who, after all, needs to earn a living like the rest of us.
Sah found that clients given advice they knew to be conflicted were twice as likely to follow that advice as clients to whom no disclosure was made.
The lesson is not that we should stop requiring advisers to disclose their conflicts, but that government policymakers need to think carefully about the specific design of their policies.
It turns out you can reduce the undesirable effects of disclosures if they come from a third party – that is, someone other than the adviser. It also helps if clients’ decisions are made in private, or if there’s a cooling-off period before the decision is finalised.
Have you guessed where this is leading? It’s a plug for a relatively new tool that’s been added to the bureaucrats’ policy toolkit – “behavioural insights”.
In a speech he gave in Canada last week, Dr David Gruen, a deputy secretary in our Department of Prime Minister and Cabinet, explained that behavioural insights is an approach to policymaking that draws on psychology, cognitive science and economics to better understand human behaviour, help people make good choices more easily, and improve the effectiveness of public policy interventions.
As the case of conflict-of-interest disclosures illustrates, people’s responses to government policy measures can be surprising. Politicians and bureaucrats need to be more conscious of such behavioural insights when designing policies to fix problems.
And the behavioural insights tool can also be used for real-world testing of how policy measures are working – or not working – in practice.
The first government to establish a behavioural insights team was Britain in 2010, at the initiative of prime minister David Cameron, Gruen says. It’s since become a partly privatised joint venture.
By now, according to the Organisation for Economic Co-operation and Development, there are more than 200 public sector organisations around the world that have applied behavioural insights to their work.
In Australia, the federal government’s behavioural economics team – BETA – was set up to apply behavioural insights to public policy and to build behavioural-insights capability across the public service. It’s at the centre of a network of 10 behavioural-insights teams across the federal government, and works alongside several state government teams.
These teams are also known as “nudge” units because they’re often trying to give individuals a nudge in the direction of making more sensible decisions, while leaving them free to do something else should they choose. You’re not forced, just nudged.
Gruen offered several examples of what the feds have been doing. BERT, the behavioural economics research team in the Department of Health, looked at the ballooning cost of reimbursements to doctors for providing after-hours care.
After-hours care considered urgent was remunerated at about twice the rate of a non-urgent visit. Who judged whether the care was urgent? The doctor.
The department identified the 1200 doctors with the highest urgent after-hours claims, and ran a randomised controlled trial, sending each of them one of three alternative letters, with the letter a doctor received chosen at random.
One letter compared the doctor’s billing practices with their peers, showing they were claiming the urgent category far more often than others were. This drew on the behavioural insight that individuals are often motivated to change their behaviour when they are out of step with their peers.
The second letter emphasised the consequences of non-compliance, including possible penalties and legal action. This letter drew on the behavioural insight that people tend to avoid losses more than they seek equivalent gains.
The third letter was the control – the standard bureaucratic compliance letter, running to three pages.
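To picture the mechanics, here’s a minimal sketch, in Python, of how a trial like this assigns participants to groups at random. Everything here is hypothetical except the 1200 doctors and the three letter types:

    import random

    # Hypothetical sketch of the assignment step, not the department's
    # actual code. Doctor IDs are invented for illustration.
    doctors = [f"doctor_{i}" for i in range(1, 1201)]
    letters = ["peer_comparison", "loss_framing", "standard_control"]

    random.seed(1)           # fixed seed so the split is reproducible
    random.shuffle(doctors)

    # Deal the shuffled doctors evenly across the three letter groups, so
    # any later difference in claiming can be credited to the letter itself.
    assignment = {doctor: letters[i % 3] for i, doctor in enumerate(doctors)}

Because chance alone decides who gets which letter, the three groups should be alike in every other respect – which is what lets the trial isolate the effect of the letters.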
All three letters were successful in reducing claims, but the peer-comparison letter was far more effective than either the loss-framing letter or the standard compliance letter, reducing claims by 24 per cent.
And it was just a nudge, not a threat of punishment for dishonestly claiming cases to be urgent when they weren’t.
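To see how an effect of that size would be calculated, here’s a toy calculation. The claim counts below are invented purely for illustration; only the peer-comparison letter’s 24 per cent figure comes from the trial:

    # Invented before-and-after claim counts per letter group, for
    # illustration only. The real trial reported a 24 per cent drop
    # for the peer-comparison letter; the other figures are made up.
    claims = {
        "peer_comparison":  {"before": 10000, "after": 7600},
        "loss_framing":     {"before": 10000, "after": 9200},
        "standard_control": {"before": 10000, "after": 9400},
    }

    for letter, counts in claims.items():
        cut = (counts["before"] - counts["after"]) / counts["before"] * 100
        print(f"{letter}: urgent claims down {cut:.0f} per cent")

Comparing each letter’s group against the others, rather than just before and after, is what separates the letters’ effects from anything else changing at the same time.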
In the six months after the letters were sent, the 1200 high-claiming doctors reduced their claims by more than $11 million (across all three letters), and 18 doctors voluntarily owned up to more than $1 million in previous incorrect claims.
So, as Gruen concludes, a simple and cheap nudge can yield big dividends.