In 2016, for the first time in France, online advertising investment exceeded that of television advertising. Algorithms now play an increasingly significant role in the purchase of advertising space on websites, raising many ethical and legal issues.
Algorithms rise to power
The digital advertising market in France is now estimated at 3.5 billion euros. Whereas until now this advertising mainly involved display ads on websites and the purchase of Google AdWords, the automated purchase of advertising space (known as programmatic buying) has now emerged. Internet users are profiled using the traces of their web activity, which makes it possible to predict their interest in an ad at any given time. Algorithms can therefore calculate, in real time, the value of the advertising space on the page a user is viewing.
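To make this concrete, here is a minimal sketch of how a programmatic exchange might value an impression and run an auction in real time. Everything in it is illustrative: the interest score, the bid values and the second-price rule are common simplifications, not the method of any actual platform.

```python
# Hypothetical sketch of real-time bidding (RTB) valuation.
# All names, profiles and numbers are invented for illustration.

def predicted_interest(profile: dict, ad_topic: str) -> float:
    """Crude interest score: share of recent page visits matching the ad topic."""
    visits = profile.get("recent_topics", [])
    if not visits:
        return 0.0
    return visits.count(ad_topic) / len(visits)

def bid_value(profile: dict, ad_topic: str, value_per_click: float) -> float:
    """Expected value of the impression: P(interest) times the value of a click."""
    return predicted_interest(profile, ad_topic) * value_per_click

def second_price_auction(bids: dict):
    """Highest bidder wins but pays the second-highest bid,
    a rule historically common on programmatic exchanges."""
    ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
    winner = ranked[0][0]
    price = ranked[1][1] if len(ranked) > 1 else ranked[0][1]
    return winner, price

# A user whose browsing traces suggest an interest in travel.
profile = {"recent_topics": ["sport", "travel", "travel", "news"]}

bids = {
    "travel_ad": bid_value(profile, "travel", 2.00),  # 0.5 * 2.00 = 1.00
    "sport_ad": bid_value(profile, "sport", 1.20),    # 0.25 * 1.20 = 0.30
}
winner, price = second_price_auction(bids)
print(winner, round(price, 2))  # travel_ad wins and pays 0.3
```

The point of the sketch is the speed and opacity the article describes: the user never sees the profile, the scores or the auction, only the winning banner.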
The use of algorithms has the advantage of displaying banner ads that match our interests, but their uncontrolled use carries risks. The lack of transparency in how these algorithms operate means they influence Internet users’ behaviour without them realising it. What is more, algorithms are sometimes granted undue confidence, even though their results can be discriminatory.
This raises questions about the neutrality of algorithms and the ethical issues they entail. The study of ethics in this area must be based on an understanding of how we are linked to these new technologies. This involves, on the one hand, how algorithms are covered by law and, on the other, the development of the digital advertising ecosystem.
In light of these new challenges, it would be wise to focus on the algorithms themselves, rather than on the data that is processed, by establishing systems capable of testing and controlling them, in order to prevent harmful consequences.
Law and algorithms: reforms in Europe
A new revolution is underway, based on data collection and processing on an unprecedented scale, which stimulates the creation of new products and services. This increase in the amount and diversity of data is explained by the development of connected objects and the empowerment of consumers. Their ability to act has grown with technology: businesses are becoming more and more dependent not only on the data consumers produce, but also on their opinions, and must therefore constantly ensure they maintain a good e-reputation.
In light of this situation, European institutions have begun the process of reforming personal data legislation. The new European General Data Protection Regulation (GDPR) entered into force in all member states in May 2018. It imposes increased transparency and the accountability of those who process data, based on a policy of compliance with the law, and it provides for severe penalties.
Similarly, it affirms the right to data portability, and those in charge of processing personal data must ensure that their operations comply with personal data protection standards, starting at the design stages for a product or service (privacy by design).
The GDPR strives, implicitly, to regulate the algorithmic processing of data. A trend is visible in the advertising sector: in general, sites, services and products that use algorithms are careful not to refer to them. They hide the crucial role algorithms play, referring instead to “customisation”. However, where there is customisation, there is often “algorithmisation”.
Legislation ill-suited to digital advertising
Laws pertaining to “traditional” advertising are based on the principle of receiving prior informed consent from individuals before processing their data. However, this concept of data protection is less relevant when it comes to digital advertising.
Data collected in the context of traditional marketing often involves objective and relatively predictable information such as name, age, gender, address or marital status. Yet the concept of “data” radically changes when it comes to digital marketing. On social networks, the data is not only basic classification information (age, gender, address), but also includes data from everyday life: what I’m doing, what I’m listening to, etc.
This new situation questions the relevance of the distinction between personal and non-personal data. It also raises questions about the relevance of the principle of prior consent. It is often virtually impossible to use an application without agreeing to be tracked. Consent therefore becomes a precondition for using the technology, while exactly how the data will be used by the data controller remains completely unknown. The problem is therefore no longer prior consent itself, but rather the automatic, predictive deductions made by the companies that collect this data.
Algorithms accentuate this trend by multiplying the collection and use of trivial, decontextualised data, which can be used to profile individuals in detail, creating “knowledge” about their personal and intimate inclinations based on probabilities rather than certainties. In this situation, rather than examining the data feeding the algorithms, wouldn’t it be more relevant to examine the algorithms that process them and generate new data?
Legal and ethical challenges of online advertising
Influencing consumer choices, exerting subliminal influence, altering the perception of reality: behavioural targeting carries serious risks. Requirements for the accountability, transparency and verifiability of the actions carried out by algorithms have become crucial in preventing potential excesses.
This situation calls into question the relationship between law and ethics, two notions that are unfortunately often confused. Laws are established to regulate behaviour – what is allowed, forbidden or required from a legal perspective – whereas ethics refers more broadly to the distinction between good and bad, independent of and beyond compliance with the law. Ethics applied to algorithmic processing would need to focus on two major principles: transparency, and the establishment of tests that check algorithms’ results in order to prevent possible harm.
Transparency and accountability of algorithms
The activities of online platforms are essentially based on the selection and classification of information, as well as on offers for goods or services. They design and activate various algorithms that influence consumption behaviour and how users think. This customisation is sometimes misleading, since it is based on the machine’s concept of how we think.
This concept is not based on who we are, but rather on what we have done and looked at. This observation reveals the need for transparency: the people affected by an algorithm should first of all be informed of the existence of the algorithmic processing, as well as what it implies, the type of data it uses and its end purpose, so that they may file a claim if need be.
Tests for algorithms?
In advertising, algorithms can lead to differentiation in the price of a product or service, or can even establish typologies of high-risk policyholders in order to calculate insurance premiums based on criteria that are sometimes illegal, by cross-checking “sensitive” information. Not only are the collection and processing of such data (racial and ethnic origins, political and religious opinions) generally prohibited, but the results of these algorithmic methods can be discriminatory. For example, the first international beauty contest judged entirely by algorithms selected only white candidates.
To avoid this type of abuse, urgent steps must be taken to establish tests for the results produced by algorithms. In addition to legislation and the role played by data protection authorities (such as France’s CNIL), codes of conduct are also beginning to appear: advertising professionals belonging to the Digital Advertising Alliance (DAA) have introduced a protocol in which a visible icon next to a targeted ad explains how it works.
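One simple form such a test could take is a disparate-impact check on an algorithm’s outputs. The sketch below applies the “four-fifths” rule commonly used in fairness auditing: an outcome is flagged if any group’s selection rate falls below 80% of the most-favoured group’s rate. The audit data, group labels and threshold are all invented for illustration; real audits would use far richer statistics.

```python
# Illustrative disparate-impact check on an algorithm's outcomes,
# using the "four-fifths" (80%) rule from fairness auditing.
# The outcome data and group labels are invented for the example.

def selection_rates(outcomes):
    """outcomes: list of (group, selected) pairs -> selection rate per group."""
    totals, selected = {}, {}
    for group, ok in outcomes:
        totals[group] = totals.get(group, 0) + 1
        selected[group] = selected.get(group, 0) + (1 if ok else 0)
    return {g: selected[g] / totals[g] for g in totals}

def passes_four_fifths(outcomes, threshold=0.8):
    """Flag disparate impact if any group's selection rate is below
    `threshold` times the highest group's rate."""
    rates = selection_rates(outcomes)
    best = max(rates.values())
    passed = all(rate >= threshold * best for rate in rates.values())
    return passed, rates

# Hypothetical audit log: (group, was the offer shown to this user?)
outcomes = ([("A", True)] * 8 + [("A", False)] * 2 +
            [("B", True)] * 4 + [("B", False)] * 6)

ok, rates = passes_four_fifths(outcomes)
print(rates, ok)  # A: 0.8, B: 0.4 -> 0.4 < 0.8 * 0.8, so the test fails
```

Such a check says nothing about why the disparity arises, but it gives regulators and auditors a concrete, repeatable signal that an algorithm’s results deserve scrutiny, which is the point the article is making.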
It is in companies’ interests to adopt more ethical behaviour in order to maintain a good reputation, and hence a competitive advantage. Internet users are weary of advertising deemed too intrusive. If the ultimate goal of advertising is to better anticipate our needs to “consume better”, it must occur in an environment that complies with legislation and is responsible and ethical. This could be a vector for a new industrial revolution that is mindful of fundamental rights and freedoms, in which citizens are invited to take their rightful place and take ownership of their data.
The original French version of this article was translated to English by the Institut Mines-Télécom.