Usable privacy: a research perspective

We saw in Step 1.9 that online privacy is subject to a ‘privacy paradox’: people’s stated privacy concerns often do not match their online behaviour.

Current research at Newcastle University aims to unpick this paradox by looking at the way our mental models differ in offline and online behaviour when it comes to privacy. You can read more about this work by looking at the position paper included in the ‘See Also’ section below this article.

Online privacy design uses a combination of two approaches. The first is ‘privacy by policy’, which (as we have already seen) is legalistic, confusing and unusable: policies are presented to users, who are then left to conduct their own risk assessment. The second is ‘privacy by architecture’, in which tools such as access control and encryption are provided to manage the complexity, but users then have to understand enough about those tools to know how to use them.
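To make the ‘privacy by architecture’ idea concrete, here is a minimal sketch in Python. The names (Post, can_view) and the group labels are invented for illustration, not taken from any real platform’s API; the point is that this style of tool only protects privacy if the user understands groups and sets them correctly.

```python
# Illustrative sketch of a toy access-control mechanism of the kind a
# 'privacy by architecture' tool exposes to its users. All names here
# are hypothetical.

from dataclasses import dataclass, field


@dataclass
class Post:
    author: str
    content: str
    # The user must decide which groups may see this post.
    allowed_groups: set[str] = field(default_factory=set)


def can_view(post: Post, viewer_groups: set[str]) -> bool:
    """A viewer sees the post only if they share at least one allowed group."""
    return bool(post.allowed_groups & viewer_groups)


post = Post("alice", "Holiday photos", allowed_groups={"family", "friends"})
print(can_view(post, {"friends"}))          # True
print(can_view(post, {"work-colleagues"}))  # False
```

Even in this tiny example, the burden of the risk assessment sits with the user: choosing the wrong group names is all it takes for information to reach the wrong audience.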

Cognitive psychologists use a so-called ‘dual-process’ model to describe how people respond to and interact with the world (Kahneman, 2013). Put simply, thought processes can be intuitive (for example, processing what our eyes see to provide an interpretation of the world), or analytical (making logical choices and decisions as and when needed). When we become used to performing a particular task through practice - playing music, driving a car - more of the thought processes involved become automatic.

In the offline world, it is thought that personal privacy is largely intuitive. We are happy to share information with others as long as established social norms are respected. But exactly how much we share will depend on how circumstances change over time and on how we choose to present ourselves (e.g. we may present different versions of ourselves to different audiences: work colleagues, family and friends).

In the online world, privacy becomes a matter of complex decisions, and so an analytical process: we must work out what information is being shared, with whom it is shared, and how third parties can pass it on. Current designs for privacy expect users to understand the privacy mechanisms at their disposal and how to use them.

Mental models

Researchers in Human-Computer Interaction (HCI) believe that users build mental models as they learn how to use a computer system. These mental models guide users’ interactions with the system and evolve as users improve their understanding and become able to predict and explain the system’s behaviour.

A mental model is a small-scale, simplified impression of the system and its behaviour as it appears to the user. Users who are given a conceptual model of the system before they interact with it will generally use the system more effectively, because they do not have to build their own model from scratch.

We generally help users to build a mental model through the use of metaphors. For example, to help users understand risk and how to manage it, we use metaphors such as worms and viruses derived from a medical model, breaches derived from a model of criminality, and perimeter control derived from models of physical security. For privacy, we might use a metaphor of an audience view to help users form a model of the recipients of shared information.
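As a rough illustration of the audience-view metaphor, the sketch below (again in Python, with invented group names and memberships) computes and displays the concrete set of recipients before anything is shared, which is the picture of ‘who will see this’ that the metaphor is meant to put in the user’s head.

```python
# Sketch of an 'audience view': show the user exactly who will receive
# shared information. The group membership data is invented for
# illustration only.

groups = {
    "family":  {"mum", "dad", "sam"},
    "friends": {"jo", "priya"},
    "work":    {"boss", "hr-team"},
}


def audience_view(selected_groups: list[str]) -> set[str]:
    """Return the concrete set of people a post will reach."""
    audience = set()
    for group in selected_groups:
        audience |= groups.get(group, set())
    return audience


# Sharing with 'family' and 'friends' surfaces these recipients up front:
print(sorted(audience_view(["family", "friends"])))
# ['dad', 'jo', 'mum', 'priya', 'sam']
```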

So, research in this area seeks to understand how users build their own mental models, how those models affect privacy decision making, and how this understanding can be used in the design of privacy tools - currently influenced by expert models rather than user models - to make those tools more usable.
