Consent mechanisms in privacy engineering
Filipczuk, Dorota (2021) Consent mechanisms in privacy engineering. University of Southampton, Doctoral Thesis, 164pp.
Record type: Thesis (Doctoral)
Abstract
As the number of online services powered by personal data grows, the technology behind those services raises unprecedented concerns about users’ privacy. Although significant privacy engineering efforts have been made to provide users with an acceptable level of privacy, users often lack mechanisms to understand, decide and control how their personal data is collected, processed and used. On one hand, this affects users’ trust in the service provider; on the other, under some regulatory frameworks the service provider is legally required to obtain the user’s consent to the collection, use and processing of personal data. Therefore, in this thesis, we focus on privacy engineering mechanisms for consent. As opposed to the simple act of clicking ‘I agree’, we view consent as a process which involves the formation of the user’s privacy preferences, the agreement between the user and the service provider, and the implementation of that agreement in the service provider’s system.
Firstly, we focus on understanding the user’s consent decision-making. Specifically, we explore the role of privacy knowledge in data sharing. To that end, we conduct an experiment in which we inform participants how they can stop the collection of their online activity data. We compare the behaviour of two groups with an increased knowledge of data collection: one provided only with actionable information on privacy protection, and one additionally informed about the details of how and by whom the collection is conducted. In our experiment, we observe no significant difference between the two groups. Our results suggest that procedural privacy knowledge about how users can control their privacy has an impact on their consent decisions. However, we also find that providing factual privacy knowledge in addition to procedural knowledge does not affect users’ prevention intent or behaviour. These outcomes suggest that information about privacy protection may itself act as a stimulus for users to refuse consent to data collection.
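To make the group comparison concrete, the sketch below shows one standard way a two-group experiment like this could be analysed. The participant counts and the choice of a chi-squared test are illustrative assumptions for exposition, not the thesis’s actual data or analysis.

    # Illustrative only: fabricated counts stand in for the real data.
    from scipy.stats import chi2_contingency

    # Rows: group; columns: [stopped collection, did not stop] (made-up numbers).
    observed = [[34, 66],   # actionable (procedural) information only
                [31, 69]]   # procedural plus factual information
    chi2, p, dof, expected = chi2_contingency(observed)
    print(f"chi2 = {chi2:.2f}, p = {p:.3f}")  # a large p-value: no evidence of a difference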
Secondly, we investigate the idea of agent-based privacy negotiations between a user and a service provider. To that end, we propose a novel framework for the implementation of semi-automated, multi-issue negotiation. Our findings suggest that such a framework is more suitable for negotiation in the privacy domain than the ‘take-it-or-leave-it’ approach or setting privacy preferences manually, because it allows for a collaborative search for mutually beneficial agreements: users consent to data use more often, consent is more consistent with users’ data-sharing sensitivity, and less user effort is required. Moreover, in order for an agent to accurately represent the user, the agent needs to learn the user’s privacy preferences. To address this problem, we compare two approaches to privacy preference elicitation through a user study: one where the preferences are personalised for each user based on their previous consent decisions, and one where the user is classified into one of three privacy profiles and later re-classified if their consent decisions reflect a change. We find that the latter approach can represent the user more accurately in the initial negotiation rounds than the former.
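The abstract includes no code, but the following minimal Python sketch illustrates the general shape of an alternating-offers, multi-issue negotiation of the kind described: every (data type, purpose) pair is a boolean issue, each side scores offers with its own weights, and both sides concede over time. The issue names, the linear concession strategy and the shared acceptance threshold are hypothetical simplifications, not the framework’s actual design.

    from itertools import product

    # One boolean issue per (data type, purpose) pair; True = data use granted.
    ISSUES = list(product(["location", "browsing", "contacts"],
                          ["personalisation", "analytics", "advertising"]))

    def utility(offer, weights, wants_grant):
        # Normalised weighted share of issues resolved in this side's favour;
        # `weights` maps every issue to a positive importance weight.
        won = sum(w for issue, w in weights.items() if offer[issue] == wants_grant)
        return won / sum(weights.values())

    def negotiate(provider_w, user_w, rounds=10, start=0.9, floor=0.5):
        # Alternating offers with a linear, time-based concession strategy.
        sides = [(provider_w, True), (user_w, False)]  # the provider wants grants
        for t in range(rounds):
            aspiration = start - (start - floor) * t / max(rounds - 1, 1)
            weights, wants_grant = sides[t % 2]
            # Start from the proposer's ideal offer, then flip the issues it
            # values least while its own utility stays above the aspiration.
            offer = {issue: wants_grant for issue in ISSUES}
            for issue in sorted(weights, key=weights.get):
                trial = dict(offer)
                trial[issue] = not wants_grant
                if utility(trial, weights, wants_grant) >= aspiration:
                    offer = trial
            opp_w, opp_grant = sides[(t + 1) % 2]
            if utility(offer, opp_w, opp_grant) >= aspiration:
                return offer  # the responding side accepts
        return None  # no agreement within the deadline

In this toy setting, the user agent’s weights would come from the elicited preferences, whether personalised from the user’s past consent decisions or taken from the privacy profile into which the user is classified.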
Finally, we look at the implementation of consent on the service provider’s side after the agreement regarding data use has been made. In more detail, we consider a scenario where a user can deny consent to the processing of certain data for certain purposes. Existing approaches do not allow service providers to satisfy the user’s consent in an optimal way in this setting. Therefore, we propose a novel graph-theoretic model for the service provider to store consent, which indicates the kinds of data processing that can be performed under the privacy agreement. Then, we formalise the consent problem as a constraint satisfaction problem on graphs. We provide several algorithms to solve the problem and compare them in terms of their trade-off between execution time and quality of the solution. Our algorithms can provide a nearly optimal solution for tens of constraints and graphs with thousands of nodes within a few seconds.
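As a rough illustration of the flavour of such a model (a sketch for exposition, not the thesis’s actual formalism), consent can be pictured as a directed graph whose nodes are data items, processing steps and purposes: a denial requires that some (data, purpose) pair be unreachable, and the provider must choose which edges to cut so that every denial holds while losing as little permitted processing as possible.

    from collections import defaultdict

    class ConsentGraph:
        # Toy model: an edge (a, b) means the output of step a may feed
        # step b under the current privacy agreement.
        def __init__(self):
            self.edges = defaultdict(set)

        def allow(self, src, dst):
            self.edges[src].add(dst)

        def reachable(self, src, dst, cut=frozenset()):
            # Iterative DFS: can src still flow to dst while avoiding cut edges?
            stack, seen = [src], {src}
            while stack:
                node = stack.pop()
                if node == dst:
                    return True
                for nxt in self.edges[node]:
                    if (node, nxt) not in cut and nxt not in seen:
                        seen.add(nxt)
                        stack.append(nxt)
            return False

        def satisfies(self, denials, cut):
            # A set of cut edges satisfies consent if every denied
            # (data, purpose) pair has become unreachable.
            return all(not self.reachable(d, p, cut) for d, p in denials)

    g = ConsentGraph()
    g.allow("location", "profiling")
    g.allow("profiling", "advertising")
    g.allow("location", "analytics")
    denials = [("location", "advertising")]  # the user denies this purpose
    print(g.satisfies(denials, cut={("profiling", "advertising")}))  # True

Finding a cheapest set of edges to cut under many simultaneous denials is a constrained optimisation problem, which is where the trade-off between execution time and solution quality arises.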
The research presented in this thesis contributes to understanding users’ consent decision-making and addresses an emerging need for technologies that can help service providers manage users’ consent. We propose ideas for potentially fruitful lines of exploration within this area.
Text: Dorota_Filipczuk_Thesis_Final - Version of Record
Text: PTD_Thesis_Filipczuk-SIGNED - Restricted to Repository staff only
More information
Submitted date: 18 October 2021
Identifiers
Local EPrints ID: 456829
URI: http://eprints.soton.ac.uk/id/eprint/456829
PURE UUID: 21d774be-6c56-4506-bd62-954ec3aecdde
Catalogue record
Date deposited: 12 May 2022 16:45
Last modified: 17 Mar 2024 03:03
Contributors
Author: Dorota Filipczuk
Thesis advisor: Enrico Gerding