This is the live blog for ESRC ‘behaviour change’ seminar number seven. Check back here from 1pm on 21st June for our live feed.

A few minutes til kick off…

Sabine Trepte

We don’t frame our research as behaviour change but it is. We use longitudinal studies and experiments. How do people change when we manipulate different circumstances? How can we effect behaviour change? I will present different scenarios.

People are concerned about their privacy but they don’t get the chance to translate these concerns into behaviour. There is a paradox between attitudes and behaviour, however; people have a deep wish to follow up on their concerns.

There are different scenarios which mediate people’s experiences. If someone is open-minded on their social media, they might change if they have negative or positive offline experiences.

Also knowledge and education. These things have an effect. I’ll present the data.

Negative experiences:

If people receive hostile messages on Facebook, do they have a higher risk assessment half a year later? Do they change their privacy settings? Do they share their details as frequently? Are they as willing to share their emotions and fears?

Negative experiences result in heightened privacy behaviour.

No relationship between negative experiences and social or psychological privacy.

People change their privacy settings but not how they communicate with others; the change is to the infrastructure of the social networking site.

Positive experiences - emotional or social support, instrumental support. How do people learn from these experiences? Previous research says the more I disclose, the more support I get. This was not found in the last 60 years of social psychology research on face-to-face relationships; offline it isn’t as clear cut, because disclosure isn’t required as much. Online, people have to say that they need help. How do positive experiences online transfer to offline encounters? There is an interesting relationship between online and face-to-face encounters: if people have positive experiences of disclosure online, this can transfer to their offline relationships.

Knowledge: development of an online privacy literacy scale, through literature review and validation. Four dimensions.

Knowledge is correlated with different practices:

Maybe people are resigned? They don’t feel like they can do anything so don’t bother. Or maybe they feel like they’re part of the community. A question of interest. Rather, law inspires self-efficacy.

Education - the privacy tutorial did not influence how many fields were filled out, nor what kind of content people filled in. What was affected was what they considered to be private, e.g. gender and university subject. It just influences awareness.

Awareness is only one of the first steps, not the last. It doesn’t influence people’s routines of disclosure.

There’s a contradiction in the behaviours associated with privacy (seclusion, not sharing, withdrawing: not positive experiences). How do we turn privacy into a positive experience?

Alessandro Acquisti

Information Technology and Public Policy

Privacy Nudges

Economist by training. Interested in behavioural economics. Dichotomy between attitudes and behaviours. Behavioural decision research.

Nudging Privacy. Can we use libertarian paternalism to influence online behaviours?

Online, privacy decision making is harder. Bad decisions can be damaging: over- and under-sharing.

Positive not normative argument. Positive is ‘how it is’ rather than normative ‘how the world should be’.  Would nudges work? That was my starting point. Various studies.

Study - Facebook regrets.

Asking what people regret. Why do people post things they later regret?

58% of Facebook users have regretted something. Specifically sex, relationships, profanity, alcohol...

Hot and cold state theory. People do things in their ‘hot state’ that they would then regret. Instant reactions.

Another study - Twitter regrets. How quickly did people regret things they’d said? What ‘states’ led to the regret? How did people ‘repair’?

1221 Twitter users. Randomly assigned.

Differences between offline conversations and Twitter regrets. Generally an emotional state leads someone to make a comment they regret, e.g. being so happy they overshare. People regretted…

Differences between Twitter and face to face conversation - in terms of repair strategies.

People delete first, then apologise. But deleting isn’t an effective strategy because once it’s out, it’s out.

How long did it take before people realised the tweet was an error? In conversation, within a few minutes.

Messaging

Another study - asking people what they would most like to change about their behaviour on SNS. We expected people to say they wanted to use social media less. No one is going to say they wish they’d tweeted more, but people want to use SNS better. People wanted to use it more, or better, or to avoid bad usage, or to use it less frequently. People wanted to expand their contacts and cultivate their personal brand.

Respondents have behaviour change goals. Everyone wanted to change their behaviour but the goals varied enormously. Given such different goals, how can we help people achieve them?

Nudges embedded within the interface of Facebook. A complicated experiment: we needed people to install things on their computers which modified the way they saw Facebook. Facebook is always changing their code, so it was a difficult study to run. We collected data about participants’ Facebook usage, then data about the nudges. We didn’t want to increase or decrease disclosure, just to see if the nudges worked.

E.g. we had a line saying ‘your post will be published in [countdown] seconds’. Rather than making it as easy as possible to disclose, this enables people to reconsider and withdraw without posting.

E.g. ‘your post can be seen by xxxx and 100 more people’. This draws on the identifiable victim effect: if you can see who will see your posts, you are more likely to think of your audience as real people, and this might affect behaviour. People then changed their settings.
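A minimal sketch of how the first of these nudges could work (the function and callback names here are hypothetical, not the study’s actual code):

```python
import time

def publish_with_countdown(text, send, should_cancel, delay_seconds=10):
    """Hold a post for a few seconds so the author can withdraw it.

    `send` and `should_cancel` are assumed callbacks: `send` publishes the
    text; `should_cancel` returns True if the author clicked 'cancel'.
    """
    for remaining in range(delay_seconds, 0, -1):
        print(f"Your post will be published in {remaining} seconds...")
        if should_cancel():
            print("Post withdrawn before publishing.")
            return False
        time.sleep(1)
    send(text)
    return True
```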

Study - Android mobile security cloud. Data from 5 million users. What were they allowing apps to see? You can learn from the choices people make and transfer this to the decisions other people make in other contexts.
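A toy sketch of that crowd-learning idea (the data and names are invented): recommend the permission setting that most other users chose for the same app.

```python
from collections import Counter

# Invented example data: other users' allow/deny decisions per (app, permission).
decisions = {
    ("FlashlightApp", "location"): ["deny", "deny", "allow", "deny"],
    ("MapsApp", "location"):       ["allow", "allow", "allow", "deny"],
}

def recommend(app, permission):
    """Suggest the majority decision made by other users, if any exists."""
    votes = Counter(decisions.get((app, permission), []))
    if not votes:
        return None  # no crowd data to learn from
    return votes.most_common(1)[0][0]

print(recommend("FlashlightApp", "location"))  # -> deny
```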

We developed Personal Privacy Support…

Should we nudge? I don’t know. I think it’s effective in some cases but it’s not an alternative to regulation. My study is about how we can manipulate online behaviour.

Angela Sasse

If people don’t do what we want them to do, we presume they’re stupid, but we need to understand what the problem is first. It’s not necessarily individual behaviour change that is the problem. When it comes to changing behaviour there are lots of different sources of information, and there are contradictions. We have to make the message as simple as possible to understand. A good first step?

If you look at what’s available in the public space, you hear about announcements and protection. Lots of different campaigns going on: ‘Get Safe Online’, with an earnest guy telling you how to change settings on a firewall. Cyber Streetwise, with slick graphics designed by Saatchi. Google’s ‘Good to Know’ campaign, including Tube ads. BBC WebWise, with high-profile people (Myra Seeall) talking about concerns about kids. So much confusing stuff going on, without really thinking about how the information is being delivered. The email from the BBC came through and said ‘this message may be a scam’...

A few years ago there was a campaign called ‘the devil’s in the detail’... a brilliant campaign but so scary that people rang their banks and said they didn’t want to do online banking any more. The overall aim is a functional digital economy; you can’t have tunnel vision and just scare people.

The BBC campaign was poor. Lots of stats but no actionable goal. You have to be REALLY motivated to keep going. And eventually you click through to the Citizens Advice Bureau page and it says ‘Page Not Found’. Very poor campaign, poor advice.

People think security is a joke because their experience tells them it is. They get security warnings saying ‘this is dangerous’ but they don’t see anything bad happen. For every correct warning about a ‘malicious certificate’ there are 15,000 false ones. That is why people think all security warnings are a joke. If your experience is that you can ignore the warnings and get away with it, you carry on ignoring them.

If you look at the literature: the Information Security Forum pooled their resources to develop security advice and tools. A report in 2014: “Awareness is just background noise”. Everyone’s doing security awareness but it isn’t changing behaviour.

A lot of the efforts are pitiful: large organisations not spending enough money, no executive support, employees not listening. Organisations feel they have to do something but aren’t prepared to do what it would take. There is a sense that scaring people and providing information is enough and can be done cheaply, but that isn’t the case. There aren’t any good metrics for security behaviour.

The framing is misguided. We teach people about phishing by phishing them. What does that say about the relationship between employee and employer? This is blaming, not based on or building trust. If you look at how much is being spent, it’s quite astounding. Phishing just makes people more grumpy.

Research into how to frame a message to tell people what they should be doing: framing didn’t make a difference. 1,000 people in the study. The same people who clicked a phishing link clicked again three months later.

What can we do?

Engage engage engage. Relationship building. Don’t push information at them. That’s not the way forward. No tricking.

If you’re trying to change behaviour, then awareness is only the first step. Then you have to embark on the awareness maturity curve. Difference between automatic and deliberate behaviour.

Kahneman - human factors. 90% of our behaviour is automatic; we don’t think about it. Once you’ve learnt to drive, it’s automatic. Behaviours are triggered and conducted automatically.

We can’t ‘yank people out’ of their routinised behaviours. Just saying people shouldn’t do something isn’t enough. We must replace an insecure behaviour with a safe one, and do this in an easy way which doesn’t create more problems for people.

Changing behaviour is the opposite of a cheap fix. You tackle the top few behaviours; you can’t change them all at the same time.

Cyber Streetwise - segmentation and measurement. Different topical concerns drawn from the statistics. A step in the right direction, but we still need to question whether the advice is of good enough quality.

Questions/discussion:

Security hygiene - to make sure that security behaviour doesn’t get in the way of their primary business.

Any studies about whether people get treated differently now that we have ‘disclosed’ information about ourselves?

Alessandro - Facebook disclosure can lead to discrimination in the job market. CVs did not include Facebook links, but employers search and would find the LinkedIn and Facebook profiles, revealing sexuality, ethnicity and religious affiliation. Callback rates were measured. Religion seemed to correlate with lower callback rates (Muslim candidates).

There is also a tendency to believe that the more we disclose about ourselves, the more accepting we are of others. But we found the opposite: people become more judgemental of others making the same disclosures online. These papers are available.

Question for Alessandro - are nudges good for longer term privacy?

There is evidence for the short term. If you can do ‘just in time’ nudges and that’s all you need, then we’re OK. If the issue is making people more aware of how mobile apps are going to use their information, maybe it’s a good enough solution. However, other behaviours (clicking on phishing links) are hard to influence in a long-term way. I’m fearful of nudging becoming the ‘whole’ policy answer. Nudges can’t be the only or main solution.

Angela

Long term goals and reflection are also important. We need to adjust. Short term goals will always ‘win’...

Question - how can interventions be transferred across contexts, e.g. work and home?

Sabine - compared different media. We saw a large effect across different media.

Angela - the observation is that behaviours do transfer.

Question - how much of it is about changing behaviour, and how much is about attitudes, systems and so on?

Angela - the systems are badly considered. Designs for behaviour change need to be built in.

Alessandro - the term behaviour change ‘irks me’, or puzzles me, because it presumes there is no external influence. Nudges make it transparent that choice architects have an influence on our decision making. Whatever Facebook decides is the default setting is already a nudge; nudges are already there. We can nudge to make the existing behaviour changes more transparent. Refers to self-commitment: people plan to act in a certain way but in their hot state they don’t. Self-commitment tools - the first was Ulysses chaining himself to the mast against the Sirens. You can download a nudge app which helps you commit to a set of behaviours online (e.g. how much time you spend on Facebook).

Culture - does it play a role?

Question - how can we identify who is most likely to click on a phishing email?

There is some research out there on how to segment.

Privacy is both culturally universal and culturally dependent (Alessandro).

Sabine - people who are more anxious about anything will be less likely to click. Also, people in collectivist cultures will be less likely to click if they are worried about the company…

Angela - you can’t modify your recruitment according to whether someone is more neurotic about security. That’s putting the horse behind the cart!

Yummy cake!

Adam Joinson

Our research is about individual susceptibility to nefarious influence. If we were treating this as a behaviour change problem, how would we design an intervention?

A four-step approach, quite well used in behaviour change:

  1. Analyse the problem
  2. What’s been done before
  3. Design interventions
  4. Evaluate

Why is it a problem, what causes, who benefits if we solve it, what’s been done before?

So far we have done work on individuals and what makes them susceptible; on organisations and the phishing attacks they face; profiling people; lab studies; evaluating interventions.

A Ukraine cyber attack took down the power grid. Attackers phished power workers. Passwords were shared between non-secure and secure systems; the attackers got passwords and triggered something that broke the valves.

Last week a report came out about older people being susceptible to certain types of scam. The FCA’s ScamSmart campaign. Some people are more susceptible than others.

Most attempts rely on well-established social psychological influence techniques. Cialdini’s weapons of influence: reciprocity, following other people, herding and social proof. Most attempts rely on these kinds of influence processes, on top of a sense of urgency (asking people to act really quickly) and negative emotions such as concern. Influence attempts use certain techniques and require that people respond within a certain time period or something negative will happen. This is well established within marketing, but the end product is different: with security attacks the end product is generally non-existent.

Are some people more susceptible than others?

Phishing - the Suspicion, Cognition and Automaticity Model (Vishwanath, Harrison and Ng, 2016). People should approach emails in a suspicious frame of mind. What increases suspicion: systematic processing, habit, beliefs about cyber risks, and the ability to self-regulate. These predict how suspicious people will be.

What makes people susceptible?

State based vulnerabilities

If you engage in lots of self control then you get tired. You expend your quantity of self control and then can’t control yourself later on.

How do phishing emails work?

Can you predict the type of person who will be most susceptible?

You can - various factors make a particular attack vector more likely to succeed.

You can map this and then work out how to protect people.

Romance scams - certain approaches and techniques to achieve trustworthy-looking profiles. Crises and trigger points. All of this interacts with the individual, their amount of time, and so on.

Study - spear phishing simulations, using companies which simulate phishing. The link goes to a survey. Sometimes all we have is the message they sent and the click rate.

Techniques used (e.g. special offer, parcel delivery, policy update) and the success rate. The same techniques as in social psychology: urgency, authority. Most of the time you can predict the success rate by looking at the subject lines of the emails.
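As a rough illustration (the keyword lists and click rates below are invented, not the study’s data or method), one could tag subject lines by influence technique and compare average click rates:

```python
TECHNIQUE_KEYWORDS = {  # invented keyword lists for illustration
    "urgency":   ["urgent", "immediately", "expires", "final notice"],
    "authority": ["it department", "hr", "policy update", "compliance"],
    "curiosity": ["special offer", "parcel", "you have won"],
}

def techniques(subject):
    """Return the influence techniques a subject line appears to use."""
    s = subject.lower()
    return [t for t, words in TECHNIQUE_KEYWORDS.items()
            if any(w in s for w in words)]

def mean_click_rate(campaigns):
    """campaigns: (subject, click_rate) pairs from phishing simulations."""
    totals, counts = {}, {}
    for subject, rate in campaigns:
        for t in techniques(subject):
            totals[t] = totals.get(t, 0.0) + rate
            counts[t] = counts.get(t, 0) + 1
    return {t: totals[t] / counts[t] for t in totals}

print(mean_click_rate([
    ("URGENT: your mailbox expires today", 0.31),    # invented figures
    ("Policy update from the IT department", 0.22),
    ("Your parcel could not be delivered", 0.27),
]))
```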

The most successful exploits use urgency, curiosity and authority.

Some geographic differences, subsets of vulnerable users. No consistent findings. Interesting!

Final study - if you can bore people for long enough, are they more susceptible to different influences?

We began by boring people. Students were presented with different videos. There was no change in level of boredom or alertness during the lecture.

Extraversion - extraverted people were more likely to click on a link. Generally a few patterns.

Potentially something about a need to escape; something that gives them a positive emotional release.

Debi Ashenden - Cranfield.

What happened when we tried to apply social marketing in cyber security: a practical, skeleton way to work through behaviour change.

Introducing the concept of social marketing: a very structured way of developing a behaviour change intervention, using the same techniques used in marketing in order to achieve behaviour change.

Opportunity to see how social marketing might work in a cyber security field.

We spent three months trying to identify a clear, crisp, non-divisible target behaviour, one that could cripple a business… The main problem was evaluation: where are the baselines? We were stumped. There was no suitable behaviour to try social marketing out on.

One problem our client kept coming back to: create interactions between security and the business in a way that both sides value and that holistically benefits the business.

There was a lack of clarity here. Relationships aren’t easily measurable. We knew it was messy but also worth tackling.

Insider threat happens when there is a poor security culture and other factors in combination.

We are interested in pushing the boundaries of what the social marketing framework does and how it can be used. If we’re so keen to borrow from traditional marketing - marketing now uses relationship marketing. If we have a relationship problem, can’t we use relationship marketing on it?

We segmented our audience: security practitioners and software developers. Where the relationship is poor, we learnt, there is fear. If developers took their ideas to security, they worried the ideas would be disallowed or shot down. Lack of trust. The developers would put off engaging until too late in the lifecycle. Processes were disrupted. The organisation was getting a very incomplete view of the risk it faced: there was never an open enough dialogue with developers. We realised that if the relationship was bad, insufficient time and resources would be allocated to security.

A few individuals thought they had a good relationship. We could see there were benefits in trying to tackle this problem.

An intervention with more than one aim. Gather more information about this problem.

Action research - an extended focus group to use as a way of changing the behaviour of security and developers.

Pilot phase over a 12-month period. Three workshops: two in the client organisation and one in a similar organisation.

Workshops were initially three days each, but people were too exhausted, so they became 2.5 days. Participants were being taught material they found very difficult, and we were pulling information from them. We wanted to give something back: we were taking information and teaching them in exchange.

3 researchers in the room. 18 people in the workshop.

Every time we ran a workshop we revised the workshop and then ran it again.

A good relationship can make a bad security process work.

POWER - comes through ability to influence and persuade

The stereotypical image of security practitioners can inhibit progress.

Security practitioners are seen as the police - the people who say ‘no’. They have done work to try to dispel this myth but this is their baggage.

When we started to design this intervention… we knew what we wanted to achieve.

Every workshop started with a problem from outside the security space, which we then brought back into the security space, followed by a role play. E.g. the GP-patient relationship: we talked through why you might not adhere to what a GP says.

Patients don’t take GPs’ medical advice. We wanted to move the dialogue: to move patients from being non-compliant, through adherence (collaboration to a certain extent), to a position of concordance.

How could we make the ‘product’ more appealing?

We asked them what they do to stop or help the security process - a reflective exercise. A lot of work around questioning skills and negotiating compromise. A lot of them had done CIPD training before but had never internalised it because it hadn’t seemed relevant. Questioning skills were hard for them to pick up: they found it hard to construct different kinds of sentences, to wait for answers, to stop partway through an interaction when they realised something was going wrong.

Exercise - develop a message.

We gave them a framework for working through behaviour change. Hug, Shove, Smack or Nudge? Come up with examples.

Evaluation - how do you come up with proxy measures that show people have a good relationship? It has to be a collaborative programme.

Tried to take these workshops into the private space. Different nuances.

We are interested in the role of language in the way these security dialogues occur.

Ian Brown - Oxford Internet Institute

Moving to the societal level. Promotion and regulation.

Nudging is frequently associated with political agendas: a neoliberal agenda, reducing regulation. We need to be aware of how discussions of nudges, consumer ‘education’ and changing individual behaviours are associated with broader political contexts. Links to ‘responsibilisation’... it’s your fault.

There are implications of this political approach:

We have to think about whether there is a distributive impact. How can we expect illiterate people to read 30,000-word privacy statements? Most people are not like us…

Western, Educated … ‘weirdos’ (WEIRD samples). Most psychology is based on this.

Market regulation is avoided. Market failures come from information asymmetry between providers and consumers. E.g. all data is collected online. Digital technology has changed things: unlike fallible memory, data can be perfectly captured, and invisibly. People generally have no idea; they don’t realise it is happening.

Privacy policies are unreadable. And how would you enforce them anyway?

Behavioural economics - most individuals are bad at weighing up options that have immediate benefits and deferred, uncertain costs. Deferred self-gratification: most people won’t wait for that second sweet! People overshare; they want to show off online. Teenagers don’t think about the future - that oversharing might stop them getting a job or mortgage.

The impact of competition in markets: the perfectly free market is a theoretical construct, and there are few perfect markets. In perfect markets there are no brands; there is total competition. That’s not how business-to-consumer markets work in the west. Google has 95% market share! In these marketplaces there are strong network effects: goods with network effects become more valuable as more people have them.

You can’t just put the responsibility on the consumer. Consumers can’t make an informed decision, and even when they can, people are bad at making decisions in their long-term interest. Behavioural economics can help us avoid making ‘hot’ decisions (Loewenstein) that are bad for our long-term wellbeing.

All humans have their price (although the price varies)... There have been people inside organisations with legitimate access to data who would sell this information.

You can pick organisations you think have taken steps to protect privacy, but there is a limit to how many steps organisations will take themselves.

Data Protection Act. Organisations are ‘data controllers’: any organisation that holds data about people. Some success. The US has hardly any rules; the EU has lots. No clear-cut difference.

In 2010 the EU decided that technology had changed so much that they should update the directive. A long drawn out process, and contentious: the most lobbied piece of legislation they’d ever been involved in. We now have the GDPR, coming into force in 2018. The innovation is that those processing data have to take some responsibility, even in the design of their systems. The primary target is computer scientists. It is so complicated - a make-work scheme for computer scientists. What it means in practice is managing data through systems. No other region in the world has taken this approach, but it is the right one: trying to regulate the use of personal data only once ‘accidents’ or ‘breaches’ have happened is far too late.

Examples:

Data minimisation…

Limit and decentralise. Can you leave data with users rather than in the cloud? Cryptography: protect against hackers and corrupt insiders.
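As a rough illustration of the cryptography point, here is a minimal sketch assuming the widely used Python `cryptography` package (the upload call is hypothetical): data is encrypted on the user’s device, so the cloud provider only ever stores ciphertext.

```python
from cryptography.fernet import Fernet

key = Fernet.generate_key()      # the secret key stays on the user's device
box = Fernet(key)

record = b"21 June: visited the clinic at 2pm"   # invented personal data
ciphertext = box.encrypt(record)                 # only this leaves the device

# upload_to_cloud(ciphertext)   # hypothetical call: the server, a hacker or
#                               # a corrupt insider sees only ciphertext
assert box.decrypt(ciphertext) == record
```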

Users should be notified of and give consent to what is done with their data. How do you do this with the internet of things? No screens…

Using features on mobile phone networks… serving personalised ads but without using profiling technologies.

Smart meters collect energy consumption information which can reveal intimate details about people’s lives. Privacy-friendly smart meters? Privacy-friendly congestion charging?
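One well-known approach to privacy-friendly metering (a sketch under assumed parameters; the talk didn’t specify a mechanism) is to add random noise to each reported reading, masking individual habits while aggregates stay roughly accurate:

```python
import random

def noisy_reading(true_kwh):
    """Report a half-hourly reading with Gaussian noise added for privacy."""
    return true_kwh + random.gauss(0, 0.5)

readings = [0.3, 0.2, 2.9, 0.4, 1.1]             # invented household usage
reported = [noisy_reading(r) for r in readings]
print(sum(readings), sum(reported))              # totals remain close
```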

What about increasing consumer choice - taking data from one service (Facebook) and putting it into another?

Conclusion

Questions and discussion

Can we control our own data and monetise it ourselves? The economists were sceptical.

Australian Privacy Foundation - it should always be an option for users to pay for a service with money, not with their data. Paper tickets are three times the Oyster price! TfL can see all your journeys. It’s easy to re-identify anonymous data streams: you can identify 67% of the population from four position points from a mobile phone.
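A toy sketch of what re-identification from a few points means (all data invented): a user is identified if some small set of their (location, hour) observations matches nobody else’s trace.

```python
from itertools import combinations

traces = {  # invented (cell tower, hour-of-day) observations per user
    "user_a": {("cell_12", 8), ("cell_40", 9), ("cell_12", 18), ("cell_7", 22)},
    "user_b": {("cell_12", 8), ("cell_40", 9), ("cell_3", 18), ("cell_7", 22)},
    "user_c": {("cell_55", 8), ("cell_40", 9), ("cell_12", 18), ("cell_9", 23)},
}

def fraction_unique(traces, k):
    """Fraction of users pinned down by some k of their own points."""
    unique = 0
    for user, points in traces.items():
        others = [pts for u, pts in traces.items() if u != user]
        if any(not any(set(combo) <= other for other in others)
               for combo in combinations(points, k)):
            unique += 1
    return unique / len(traces)

print(fraction_unique(traces, 4))  # 1.0 here: four points single out everyone
```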

Do individual differences, attitudes and so on have much to do with guiding people’s behaviour, or are systems and infrastructures more important? Do people matter?

Debi - yes, people matter, but you need all of this. We are skewed because we’re only looking at certain parts of the problem. We need secure systems but awareness is a problem too. Since the data breaches we’ve just been rolling out security awareness programmes; we’ve been behind the curve. What’s required is incorporating individual programmes with systems. We’re not learning from the disciplines that did the original research.

Skewing - an important point. A dangerous love of evidence: we measure phishing because we can, and it skews our impression of what works. Let’s focus on something that is measurable and easy to understand.

Technologies for enhancing security…

WhatsApp - encryption. Overnight, millions more people were sending more secure messages.