Designing meaningful choices to protect user privacy

The key elements that designers should take into consideration when building privacy-aware interfaces and experiences

Luiza Jarovsky
UX Collective


Photo by Nathan Dumlao on Unsplash

Continuing last post’s conversation, in which I presented Transparency by Design — one of the frameworks I am proposing in my Ph.D. — today I would like to discuss the design of meaningful choices in the context of online data protection.

Transparency by Design highlights UX design’s essential role as a tool to empower users with relevant, timely, and adequate information about how organizations collect and process their personal data. Another important element of Transparency by Design, particularly connected to its fairness dimension, is the availability of meaningful choices to users. Choice is users’ main voice in their daily online interactions with organizations; without meaningful choices, users have a weaker presence and existing informational vulnerabilities are exacerbated. Even when accessible privacy notices contain all the legally mandated information, users cannot exercise their autonomy if meaningful privacy choices are absent.

In terms of meaningful choices, at this point I am not discussing their content or the extent to which privacy deliberations should be open to discussion with users. Instead, I focus on design guidelines that help organizations implement meaningful privacy choices in real-world interfaces and systems, following the approach proposed by the HCI scholars Yuanyuan Feng, Yaxing Yao, and Norman Sadeh, whose model I incorporate in my research.

By designing choice mechanisms that are meaningful and account for users’ cognitive biases and vulnerabilities, organizations can support autonomy and empower users, thereby helping to reduce the informational vulnerabilities that permeate online interactions between users and organizations.

Feng et al. developed a design space of five key dimensions to consider when designing privacy choices. Their design space also provides a “taxonomy to categorize, evaluate, and communicate different privacy choice design options with all involved stakeholders, including users and legal professionals.”

According to the authors, the design space for privacy choices has five key dimensions: type, functionality, channel, timing, and modality, and each dimension offers multiple options to choose from. Under Transparency by Design, an organization should evaluate which options can help mitigate users’ informational vulnerabilities and promote the other elements of the framework. Below is an image from Feng et al.’s research illustrating the options within each of the five key dimensions of the design space for privacy choices:

Yuanyuan Feng, Yaxing Yao & Norman Sadeh, A Design Space for Privacy Choices: Towards Meaningful Privacy Control in the Internet of Things, CHI’21 (2021).
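To make the design space more concrete, here is a minimal sketch in TypeScript of how a single privacy choice could be described along the five dimensions. The type names and the specific option values are illustrative placeholders of my own, not the paper’s exact vocabulary, and this is a sketch rather than a prescribed implementation.

```typescript
// Sketch: describing one privacy choice along the five dimensions of
// Feng et al.'s design space. Option values below are illustrative
// placeholders, not the authors' exact terminology.

type ChoiceType = "binary" | "multiple" | "contextualized";
type Functionality = "presentation" | "enforcement" | "feedback";
type Channel = "primary" | "secondary" | "on-demand";
type Timing = "at-setup" | "just-in-time" | "context-aware" | "periodic";
type Modality = "visual" | "auditory" | "machine-readable";

interface PrivacyChoice {
  purpose: string; // the data practice this choice governs
  type: ChoiceType;
  functionality: Functionality;
  channel: Channel;
  timing: Timing;
  modality: Modality;
}

// Example: an analytics opt-in surfaced the first time analytics would run.
const analyticsChoice: PrivacyChoice = {
  purpose: "Share usage analytics with third parties",
  type: "binary",
  functionality: "enforcement", // the choice actually changes processing
  channel: "primary",           // delivered inside the product itself
  timing: "just-in-time",       // shown when the practice becomes relevant
  modality: "visual",
};
```

Describing every privacy choice along all five dimensions forces a deliberate decision on each of them, instead of leaving timing, channel, or modality to whatever the default UI pattern happens to be.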

Transparency by Design proposes that, to promote fairness, which is one of the aspects of transparency obligations in the GDPR, an organization must be aware of the key design elements that compose a privacy choice, as developed by Feng et al. While designing products and services, an organization must create privacy choice mechanisms that can reduce users’ informational vulnerabilities.

Meaningful privacy choices sit at the intersection of Privacy by Design and Transparency by Design, as they highlight how the design of a product or service should have privacy in mind from its inception. Privacy is a complex topic, and even a simple privacy toggle with “yes” and “no” options, to be coherent with Privacy by Design and Transparency by Design, must be backed by planning and a strategy for serving users’ best interests and mitigating informational vulnerabilities.
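As a small illustration of that background planning, here is a minimal sketch, with hypothetical names and assumptions of my own, of how even a binary toggle can default to the privacy-protective option and record enough context for the user’s decision to be honoured and audited later.

```typescript
// Sketch: a "yes/no" privacy toggle backed by a consent record, so the
// choice is enforceable and auditable. Names and fields are hypothetical.

interface ConsentRecord {
  practice: string;      // the data practice the toggle controls
  granted: boolean;
  decidedAt: Date;
  noticeVersion: string; // which privacy notice the user was shown
  setByUser: boolean;    // false means the protective default still applies
}

function defaultConsent(practice: string, noticeVersion: string): ConsentRecord {
  // Privacy-protective default: no processing until the user opts in.
  return { practice, granted: false, decidedAt: new Date(), noticeVersion, setByUser: false };
}

function recordToggle(record: ConsentRecord, granted: boolean): ConsentRecord {
  // Capture the user's explicit decision and when it was made.
  return { ...record, granted, decidedAt: new Date(), setByUser: true };
}
```

The point is not the code itself but the decisions behind it: what the default is, what gets recorded, and how the choice is enforced are exactly the kind of strategic questions even a yes/no toggle should answer.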

Meaningful privacy choices are also part of the broader concept of usable privacy. Jakob Nielsen has defined usability as “a quality attribute that assesses how easy user interfaces are to use.” He added that:

“Usability is defined by 5 quality components: a) learnability: how easy is it for users to accomplish basic tasks the first time they encounter the design? b) efficiency: once users have learned the design, how quickly can they perform tasks? c) memorability: when users return to the design after a period of not using it, how easily can they reestablish proficiency? d) errors: how many errors do users make, how severe are these errors, and how easily can they recover from the errors? e) satisfaction: how pleasant is it to use the design?”

When discussing usable privacy, we are asking about methods and tools to protect privacy that embrace, or aim to achieve, the quality attributes described above. Meaningful privacy choices are one of these methods. The aim is to bring usability to the choices presented to users so that they can make meaningful decisions about their privacy.

There is a lot to unpack here and I hope you continue this journey with me. You can read my full article about Transparency by Design here and the one about Dark Patterns in Data Protection here. I personally believe that the web can be a more empowering place, where people can have their vulnerabilities protected, their privacy respected and their autonomy valued. I think this transformation is possible and I invite you to join the conversation.


See you next week. All the best, Luiza Jarovsky


CEO of Implement Privacy, LinkedIn Top Voice, Ph.D. Researcher, Author of Luiza's Newsletter, Host of Luiza's Podcast, Speaker, Latina, Polyglot & Mother of 3.