The General Data Protection Regulation (GDPR) rests on the premise that the protection of personal data guarantees the protection of privacy. This premise has been subject to substantial criticism. It has been convincingly argued that with the help of algorithm-based processing of large data sets, knowledge about individuals can be inferred without violating GDPR provisions. This raises several questions regarding the impact of these new challenges: How will individuals adapt their behavior to the so-called datafication of life? Will privacy rights be enough to “disrupt” the chilling effect?
The “datafication” of life – which refers to the growing transformation of various aspects of life into data – means that the data available for processing is vast and often diverse. Analyzing such data is only possible if carried out in an automated fashion, that is, through the use of learning algorithms. As a result, automated, algorithm-based data processing (in the following: automated data processing) has become the predominant means of extracting information from data.
Automated data processing draws on correlations to make inferences about a person. This involves sorting data points into categories through an algorithm. These categories, or groups, are then used to make predictions about individuals. These predictions are derived not from causal relationships but from correlations between variables in the information available about all members of a group. The statistical summary of all group members’ histories and development is thus used to generate information about the future of each individual member. These predictions are then translated into evaluations of a person, and these evaluations in turn heavily influence how a person is perceived and treated.
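To make this mechanism concrete, the following minimal Python sketch illustrates correlation-based group inference. All names, data points and figures in it are hypothetical and serve only as an illustration: individuals are sorted into groups by a shared data point, each group is summarized statistically, and that summary then serves as the prediction for every member, regardless of anything causal about the individual.

```python
# Minimal, illustrative sketch of correlation-based group inference.
# All names and figures are hypothetical, not taken from any real system.
from statistics import mean

# Historical records: (postcode, repaid_loan) pairs for past customers.
history = [
    ("10115", True), ("10115", True), ("10115", False),
    ("20095", False), ("20095", False), ("20095", True),
]

# Step 1: sort data points into categories (here: by postcode).
groups: dict[str, list[bool]] = {}
for postcode, repaid in history:
    groups.setdefault(postcode, []).append(repaid)

# Step 2: summarize each group statistically (share of members who repaid).
group_rates = {
    postcode: mean(1.0 if repaid else 0.0 for repaid in outcomes)
    for postcode, outcomes in groups.items()
}

# Step 3: predict a new individual's future from their group's summary.
# Nothing causal about the person themselves enters the prediction.
def predicted_repayment(postcode: str) -> float:
    return group_rates.get(postcode, 0.5)  # neutral prior for unknown groups

print(predicted_repayment("10115"))  # about 0.67: inherited from neighbours
print(predicted_repayment("20095"))  # about 0.33: same logic, other group
```

The point of the sketch lies in the last two lines: the prediction attached to a person is inherited entirely from the group they happen to fall into.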
Self-policing and the chilling effect
The practice of translating predictions into evaluations is often referred to as “scoring” and is already in use in several sectors. Access to crucial services such as healthcare, loans or housing often depends on the algorithms that provide such scores. These effects of automated data processing on the everyday lives of individuals are profound – and obvious. Innovative scholars have already developed elaborate responses to such effects and have called for transparent, bias-free and accountable automated decision-making. However, there is another, more subtle aspect to be considered in terms of the effects of automated data processing: the privacy of individuals.
From a reductionist perspective, the value of privacy is derived from its capacity to prevent undesirable consequences that can come in a variety of forms. Two of the most important forms are self-policing, which is understood as the exercise of self-restraint and self-censorship, and its counterpart, the development of a so-called chilling effect in society, which describes the process by which certain activities are discouraged by the threat of legal or social sanctions. The novel threats posed by automated data processing can best be understood in terms of its impact on both of these undesirable consequences.
Automated data processing prompts changes in behavioral responses
As undesirable consequences of automated data processing, self-policing and the chilling effect can heavily influence how individuals behave, how they interact with their environment and how they control it. They are also a direct result of the way in which knowledge is acquired in automated data processing. In the conventional understanding of self-policing, the fear of being subject to observation compels individuals to monitor or control their behavior. They fear that certain actions they take and statements they make will be traced back to them, and that they will suffer undesirable consequences because of their behavior. Traditionally, these undesirable consequences consisted of sanctions imposed by the state, and the logic on which such sanctions were based was causal: people who express views that challenge the regime in power, who engage in activities not aligned with a regime’s underlying values, or who are in contact with enemies of the regime are considered a threat to it. In order to avoid sanctions, individuals must police their behavior and avoid any activities potentially deemed hostile by the regime. The fear of being subject to surveillance and sanctions by the state leads to a widespread adaptation and moderation of behavior in a society, a phenomenon otherwise known as the “chilling effect.” As automated data processing is implemented more broadly in various spheres of life, the dual problems of self-policing and the chilling effect will gain in relevance, and the nature of each will also change.
In order to understand why the problems of self-policing and the chilling effect will intensify, one has to understand how the context in which self-policing evolves is changing, and how people become aware of the importance of automated data processing. In the past, people feared surveillance exercised by the state; today, automated data processing is used largely by companies. The sanctions we fear are no longer limited to state sanctions but come in a variety of forms and are often inexplicit and even unknown to those affected by them. As life becomes increasingly datafied, almost all areas of life are subject to data analytics. The diversification and multiplication of collected data points bring with them a magnification of privacy concerns. Given these circumstances, individuals’ desire to have control over information about them is only rational.
Skeptics will argue that self-policing presupposes an awareness of algorithms and that this awareness is lacking. Today, so the argument goes, most people are aware neither of the use of automated data processing nor of the logic driving such processing. It is therefore unrealistic to assert that people adapt their behavior in response to the use of automated data processing. This argument may hold today, but it will weaken with time. In fact, we already see examples of people beginning to respond to the use of algorithms in certain areas. An obvious and rather harmless example is found in advertisements. People are well aware of the fact that their browsing behavior today determines the online ads that will be displayed to them in the future. Even if it is not a fully conscious awareness, people have begun to understand that the advertisements they see are chosen on the basis of their data combined with the data of those with similar interests. Indeed, “Customers who bought this item also bought …” is a familiar phrase to all online shoppers. As the use of algorithms in important sectors grows, so, too, will concerns about this use. When people are denied loans because of weak scoring evaluations or have to pay higher fees for their health insurance, they will want to know what lies behind these evaluations. It may take 10, 20 or 30 years until the majority of the population becomes fully aware of the relevance of algorithms and their impact. Accordingly, it may take decades for most people to change the way they police their lives – but the change will come, and it will be profound.
It has been shown that with the use of automated data processing, the way one is perceived and treated depends significantly on the categories or groups one is assigned to. In order to understand what knowledge is generated about myself, I have to consider both what I do and what everybody else does. Thus, in order to prevent unwanted perceptions and treatment, in addition to policing my own behavior, I have to police with whom I share data points. In an era of automated data processing, self-policing can thus be understood as the urge to control one’s environment in order to control the knowledge and evaluations generated about oneself and thus how one is treated. The object of control expands from one’s behavior to include one’s environment.
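A second, equally hypothetical sketch makes this point about the environment explicit: with group-level scoring, a person’s evaluation can shift when other members of their group change, even if their own record stays exactly the same. The data and the gym-membership example below are invented for illustration only.

```python
# Hypothetical sketch: my score depends on everyone I am grouped with.
from statistics import mean

def group_score(member_records: list[float]) -> float:
    """Group-level summary used as the prediction for every member."""
    return mean(member_records)

# My group today: people I share a data point with (e.g., a gym membership).
# My own record is the last entry and never changes in this example.
group_today = [1.0, 1.0, 0.8]
print(round(group_score(group_today), 2))     # 0.93

# Tomorrow, two members with poor records join the same group.
group_tomorrow = group_today + [0.1, 0.2]
print(round(group_score(group_tomorrow), 2))  # 0.62: my evaluation drops,
                                              # although my own data is unchanged
```

Under such a scheme, the only way to protect the evaluation attached to oneself is to manage which groups one ends up in, which is exactly the expansion of self-policing from behavior to environment described above.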
The urge to self-police is an urge to reassess one’s environment
With the widespread use of automated data processing, the urge to police the perception of oneself intensifies. As the echo of one’s perception becomes omnipresent – be it in the ads one sees, the financial credibility attributed to one or the evaluation of one’s health – the desire to control grows. As a person’s environment is used to evaluate their personality, they have a strong incentive to adapt their environment in order to create a favorable picture of themselves. As a response to automated data processing, the urge to self-police is an urge to reassess one’s environment – to clean it and screen out all influences that could have a detrimental effect on the knowledge generated about oneself. The effects of such self-policing do not always have to be negative. It is quite possible, even likely, that individuals will make positive changes to their behavior. Self-policing may help individuals cast off negative habits (e.g., unhealthy eating) or adopt healthy activities (e.g., joining sports clubs). However, in a pluralist society there is no singular, encompassing concept of what constitutes good or bad. Hence, if automated data processing influences people’s behavior on a large scale, its justification cannot rest on a value-based assessment of its effects. In order to reconcile automated data processing with individual freedom, other standards, such as the criterion of transparency, are likely to provide more suitable starting points.
Self-policing turns into a vaguer chilling effect when one loses sight of how certain behavior is translated into categories and groups and results in predictions about one’s future (a dynamic Tijmen Schep has referred to as “social cooling”). When groups become too random, too granular, too counter-intuitive and too opaque to be anticipated, automated data processing no longer triggers specific behavioral responses. Rather, a feeling of uncertainty emerges, and such a feeling is likely to usher in a general modification and moderation of a person’s behavior. Aware, if only vaguely, that whom they are grouped with has severe consequences for their lives, individuals will explicitly or subconsciously adapt their behavior toward what they believe leads to the best evaluations and predictions about them.
This new form of self-policing and this new type of chilling effect pose a novel challenge for privacy protection. Legal scholars have to refine privacy rights in order to develop effective responses and protection standards. However, such refinement presupposes a deep understanding of self-policing and chilling effects and will therefore require the help of sociologists, anthropologists, psychologists and other social scientists. Transdisciplinary research must provide the basis on which a new right to privacy, one robust to automated data processing, can be defined, and it must translate these insights into specific legal rights that complement the protections offered by the GDPR.