
I am seen, therefore I am - mass liquid surveillance and government hacking

Friday, August 9, 2024

Updated at 07:38

INTRODUCTION

This article offers critical reflections grounded in the works of Professor David Lyon and in the surveillance course he taught in February 2024 at CEADIN, USP Ribeirão Preto, which I coordinated together with Professor Nuno Coelho. David Lyon is the principal investigator of the Big Data Surveillance Project, emeritus professor of sociology and law at Queen's University, former director of the Surveillance Studies Centre, and one of the foremost specialists on the topic. The objective is to reflect on some of his main works in dialogue with other authors who study the subject, covering the stages prior to the digital surveillance society: especially Foucault's studies of the societies of normalization, discipline, and regulation, and their evolution in the works of Deleuze and Byung-Chul Han, with the perspectives of the control society and the digital panopticon. Concrete paradigmatic cases are analyzed in order to combine theory with practice, in the sense of "phronesis," the Greek term for practical knowledge.

Surveillance is a key dimension of the modern world and is now closely tied to big data (Big Data Surveillance Project, Surveillance Studies Centre, Canada) and to AI applications such as facial recognition and predictive policing. Surveillance has become massive, operating under the slogan "collect everything" through the analysis of, and access to, vast volumes of personal data. Beyond the general vulnerability created by informational ubiquity and the asymmetry of this relationship, AI applications increasingly enable real-time prediction, the automation of outcomes, and the modulation of human behaviors, intentions, and emotions (neuromarketing, captology, data brokers, affective computing), creating new and specific vulnerabilities. The issues raised go far beyond the protection of individual fundamental rights such as privacy and data protection: they involve modern democratic principles and the limits of such surveillance in a Democratic State of Law, since the lack of transparency practically eliminates the possibility of control, accountability, and responsibility in cases of abuse or error. These topics must also be considered critically in light of the new colonialisms (data, carbon, and biocolonialism), since countries with a historical record of discrimination against parts of their population are more fragile, as shown in a recent study by the Security Observatories Network.

I AM SEEN, THEREFORE I EXIST - MASS LIQUID SURVEILLANCE

The main characteristic of current security intelligence is extensive collaboration with technology companies, which store, process, and use our digital footprints on the basis of big data. This expands the earlier focus on collaboration with telecommunications companies such as AT&T, whose cooperation with the US government was the subject of a lawsuit filed by the Electronic Frontier Foundation (EFF). The lawsuit, however, was dismissed after Congress passed the controversial FISA Amendments Act of 2008, which amended the Foreign Intelligence Surveillance Act (FISA) of 1978, granting retroactive immunity to AT&T and allowing the Attorney General to request dismissal of such cases if the government secretly certified to the court that the surveillance did not occur, was legal, or was authorized by the president, whether lawful or not. This retroactive immunity from criminal liability nullified the possibility of prosecution under the law prohibiting warrantless wiretaps, effectively replacing the law with the presidential order, legal or illegal, and undermining the foundations of the separation of powers and the Rule of Law.

This immunity is becoming the rule, increasingly invoked by governments to enable their mass surveillance activities. Retroactive immunity reveals the illegal origin of mass surveillance, which operates in an anti-law zone that blurs the line between legal and illegal surveillance; such practices exist in a kind of "gray area." One example of the growth of surveillance technologies and the hegemony of the big data business model is the offer of "free" informational services and software to public education institutions by the largest data technology companies in the world, known by the acronym GAFAM (Google, Apple, Facebook, Amazon, Microsoft). The counterpart is full access to the personal data of thousands of users, affecting what can be understood as state sovereignty, since the Big Techs are based mostly in the USA and, increasingly, in China. The relationship is opaque: no data is officially disclosed by the companies or the institutions that would allow the details of the operation to be verified.

There is an asymmetry of power and knowledge, given the evident disparity between what companies operating under surveillance capitalism know about us and what we know about what they do with our personal data, along with a deepening of north-south asymmetries. As research points out, the inequalities and potential violations of human and fundamental rights in the field of AI are most acute in Global South countries, with the greatest impact where rights are systematically denied to communities with a history of oppression (NOBLE, Safiya Umoja, 2018). The agreements between companies and Brazilian universities, especially those involving Google Suite for Education and Microsoft Office 365 for Schools & Students, reveal how opaque such relationships are: real black boxes, lacking transparency, which is the fundamental requirement for speaking of trustworthy AI, especially for those whose personal data is being used, as pointed out in the Electronic Frontier Foundation report "Spying on Students: School-Issued Devices and Student Privacy."

In this sense, David Lyon, in the course held by CEADIN - the Advanced Center for Studies in Innovation and Law at the University of São Paulo Law School, Ribeirão Preto campus - points out that in the 1990s surveillance was originally defined as systematic and routine attention to personal details with the intention of influencing, managing, protecting, or directing individuals. It involved targeted, systematic, and routine observation for various purposes, including influence over social media, labor relations, and organizational behavior. Although generally associated with entities such as the police, security agencies, and border controls, surveillance can also shape life choices, purchasing decisions, and work. The concept was later expanded to include both the operation and the experience of surveillance, involving the collection, analysis, and use of personal data to shape choices or manage groups and populations. Twenty-first-century surveillance is characterized by its ubiquity, involving a "surveillance culture": a new dimension of surveillance that relies on our voluntary participation as a fundamental factor, with personal data as its main ingredient. Smartphones, for example, have become the predominant surveillance devices due to their widespread adoption, with their data-analysis capacity used by large companies, public and private entities, and government agencies to monitor individuals, often without any indication that they are suspects.

Among David Lyon's various works, "Liquid Surveillance," co-authored with Zygmunt Bauman (LYON, BAUMAN, 2014), stands out. It resulted from successive exchanges of messages, dialogues, and joint activities, such as participation in the 2008 biennial conference of the Surveillance Studies Network. The authors point to a new phase of liquid, mobile, and flexible surveillance that infiltrates and spreads across various areas of our lives, becoming an ever more present aspect of them and assuming ever-changing characteristics, distinct from the old panoptic form studied by Foucault and Deleuze.

According to Foucault, in his studies of disciplinary, regulatory, and normalization societies, the panopticon is one of the main instruments of disciplinary power: a surveillance mechanism that allows seeing without being seen, producing the effect of a constant state of visibility. The architecture is designed so that light passes through; everything must be illuminated, everything must be visible. In the transparency society, nothing should be left out. For Deleuze, in his "Postscript on Control Societies," control societies are characterized by informatics and computers, as a mutation of capitalism. In control societies the essential is no longer a signature or a number but a code: the code is a password. Individuals have become "dividuals," divisible, and masses have become samples, data, markets, or "banks."

The characteristic of the digital panopticon, according to Byung-Chul Han in his account of the "transparency society," is that the globalized reach of digital winds transforms the world into a single panopticon: "there is no outside of the panopticon; it becomes total, with no wall separating the inside from the outside." Network giants like Google and Facebook present themselves as spaces of freedom, but they can also adopt panoptic forms, as shown by Edward Snowden's 2013 revelations about the PRISM program, which allowed the United States National Security Agency (NSA) to obtain practically anything it wanted from internet companies. A fundamental feature of the digital panopticon is the total protocolization of life, which entirely replaces trust with control, following a logic of efficiency. Instead of Big Brother, there is big data. We live under the illusion of freedom, based on self-exposure and self-exploitation; here, everyone observes and surveils everyone else.

The surveillance market in the democratic state has a dangerous proximity to the digital surveillance state. Instead of biopower there is psychopower, which can intervene in psychological processes. It is more efficient than biopower because it surveils, controls, and influences the human being not from the outside but from within: this is the era of digital psychopolitics. Large volumes of data are thus a decisive factor of change. From the ubiquitous barcode, which identifies products by type or factory, we have evolved to radio frequency identification (RFID) chips, which carry an individual identifier for each product, and to quick response (QR) codes, sets of symbols placed on products and scanned by smartphones to access certain websites. These codes reveal different uses and applications of monitoring, for example for customer convenience, such as reducing queues in supermarkets.
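
To make the mechanics concrete, here is a minimal sketch, in Python, of how such a symbol is produced. It assumes the open-source qrcode package is installed, and the product URL is a hypothetical placeholder.

```python
# Minimal sketch: encode a (hypothetical) product URL into a QR symbol.
# Assumes the open-source "qrcode" package: pip install qrcode[pil]
import qrcode

img = qrcode.make("https://example.com/product/12345")
img.save("product_qr.png")  # any smartphone camera can now resolve this link
```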

Big Data can thus be defined as the data resulting from this ubiquity. Volume and velocity are its main characteristics, but what matters most are the new applications it enables, such as predictive policing and neuromarketing, as pointed out in the Big Data Surveillance Project. Lyon highlights that big data results mainly from the combination of databases from various sources, often merged into a single one. Big data is therefore both complex and complicated, capturing details of our lives in amounts almost impossible to compute. This phenomenon underpins surveillance capitalism, a new "non-violent" economic and social order denounced in Shoshana Zuboff's 2018 work "The Age of Surveillance Capitalism." Zuboff reveals an economic system based on the commodification of personal data for profit, involving an emergent logic of accumulation with unprecedented power: the extraction and commodification of personal data to predict and modify human behavior through big data analytics. The consequences are significant for democratic societies, pointing to the influence of new information technologies on our understanding and experience of freedom, power, democracy, and control, in both individual and social terms. In response, we must critically reflect on the digital surveillance society. Digital surveillance is one of the fundamental dimensions of contemporary society, involving new forms of vulnerability and new models of organization, and fundamentally modifying democracy itself. These relationships must be critically analyzed so that we are not confined to digital sociotechnical black boxes.
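
As a rough illustration of the database-merging practice Lyon describes, the sketch below joins three hypothetical sources on a shared identifier. It assumes Python with pandas; all datasets and column names are invented.

```python
# Minimal sketch: records from separate sources merged into a single
# consolidated profile keyed on one identifier. Assumes pandas.
import pandas as pd

purchases = pd.DataFrame({"user_id": [1, 2], "last_purchase": ["book", "phone"]})
locations = pd.DataFrame({"user_id": [1, 2], "last_city": ["Recife", "Salvador"]})
social    = pd.DataFrame({"user_id": [1, 2], "posts_per_day": [12, 3]})

profile = purchases.merge(locations, on="user_id").merge(social, on="user_id")
print(profile)  # one consolidated record per person, built from three sources
```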

Regarding the new mass surveillance system, Snowden states in his book "Permanent Record" that we have moved from targeted surveillance of individuals to mass surveillance of entire populations. National identity cards are a central factor in this shift: they combine high-precision technology with embedded biometrics and RFID chips, justified by arguments of accuracy, efficiency, and speed, as well as immigration control, anti-terrorism measures, and e-government. Despite these alleged benefits, there are numerous potential dangers, including unforeseen financial costs, increased security threats, and an unacceptable imposition on citizens, making independent and continuous risk assessment and regular review of management practices essential (LYON, David; BENNETT, Colin J., 2008). There is talk of a true "Card Cartel" involving the state, companies, and technical standards, which has generated significant controversy in countries such as Australia, the United Kingdom, Japan, and France.

I am seen, therefore I exist. This phrase reflects the desire to be seen on social networks, which leads to the voluntary, even enthusiastic, sharing of personal data. The market uses this data to personalize ads with a high potential for manipulating choices (through seduction, not coercion), commodifying our lives and personas. At the same time, there is consumer surveillance in a positive sense, directed at the consumer market, and in a negative sense, aimed at those who do not conform to expectations, resulting in "rational discrimination" and creating a negative spiral in which the poor become poorer and wealth concentration increases (LYON, David, 2005).

Algorithms are part of the essential infrastructure of security surveillance, with algorithm-based alerts used to detect suspicious activities and control the movements of suspects. In the big data era, the problems of inference and profiling stand out, fed by the enormous amount of personal data and amplified by the questionable role of data brokers, who sell personal data in unethical and illegal ways, since genuine consent (informed, granular, and renewed for each new purpose and each change in the company benefiting from the data) is absent. These data are analyzed via deep learning, using quantitative optimization to enhance behavioral and emotional manipulation: personalized ads are crafted to maximize the probability of a purchase or the time spent on a social network, a decisive factor in creating previously nonexistent desires.
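
A minimal sketch of this quantitative-optimization logic, in Python, is shown below: a logistic score estimating purchase probability and a threshold rule that triggers a personalized ad. All features, weights, and the threshold are invented for illustration, not taken from any real system.

```python
# Minimal sketch: a logistic model scoring purchase probability from
# behavioral signals, plus a threshold rule that triggers an ad.
import math

def purchase_probability(features, weights, bias=-2.0):
    """Sigmoid of a weighted sum of behavioral signals."""
    z = bias + sum(w * x for w, x in zip(weights, features))
    return 1.0 / (1.0 + math.exp(-z))

# Hypothetical signals: product-page visits, minutes on site, past purchases.
features = [4, 23.5, 1]
weights  = [0.6, 0.05, 1.2]

p = purchase_probability(features, weights)
if p > 0.5:  # arbitrary threshold; real systems tune this to maximize revenue
    print(f"show personalized ad (estimated purchase probability = {p:.2f})")
```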

As Morozov points out (MOROZOV, Evgeny, 2018, p. 33 et seq.), in 2012 Facebook entered into an agreement with Datalogix that allowed it to associate what we buy at the supermarket with the ads displayed on Facebook. Similarly, Google has an application that analyzes nearby stores and restaurants in order to recommend offers.

In turn, several interesting cases are cited by Kai-Fu Lee in his book "2041: How Artificial Intelligence Will Transform Your World" (LEE, Kai-Fu, 2022); although it is a book of fictional stories, it brings information, examples, and scenarios that already occur in reality. There are, for example, AI-based fintech companies such as Lemonade in the United States and Waterdrop in China, which sell insurance through apps or grant loans with instant approval. In the chapter "Quantum Genocide," Kai-Fu Lee addresses the notion that technology is inherently neutral, which follows what Jose van Dijck calls "dataism," the belief in the "objectivity of quantification," and what is termed "solutionism," the idea that the solution to every social problem lies in data and in the analysis of results rather than causes. He argues that "disruptive technologies can become our Promethean fire or Pandora's box, depending on how they are used." He cites the example of the Ganesha insurance company, whose algorithm's objective function is to reduce the cost of insurance as much as possible: with each behavior of the insured, the insurance cost rises or falls. The insurer is also linked to several applications that share user data, encompassing e-commerce, recommendations and coupons, investments, ShareChat (a popular Indian social network), and the fictional FateLeaf, a divination app. One alternative the author mentions for balancing an objective function aimed purely at maximizing corporate profit would be to teach AI complex objective functions, such as lowering insurance costs while maintaining justice. He believes, however, that such a requirement would only come about through regulation, since it runs against commercial interests, which prevents voluntary adoption. He also mentions the important role of corporate responsibility, such as ESG - Environmental, Social, and Corporate Governance.
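
The sketch below illustrates, in Python, what such a "complex objective function" might look like: a loss combining average predicted cost with a crude fairness penalty on the premium gap between two groups. It is an assumption-laden toy, not the formulation from the book; the fairness metric and all numbers are invented.

```python
# Minimal sketch: a multi-objective loss trading off cost against fairness.
def combined_objective(predicted_costs, premiums_group_a, premiums_group_b,
                       fairness_weight=0.5):
    cost_term = sum(predicted_costs) / len(predicted_costs)
    # Fairness penalty: gap between average premiums charged to two
    # demographic groups (a crude "demographic parity" style measure).
    gap = abs(sum(premiums_group_a) / len(premiums_group_a)
              - sum(premiums_group_b) / len(premiums_group_b))
    return cost_term + fairness_weight * gap

# Hypothetical numbers: an optimizer would tune pricing to lower this value,
# which it cannot do by exploiting one group alone.
print(combined_objective([120.0, 90.0, 150.0], [100.0, 110.0], [180.0, 170.0]))
```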

In the book "Big Data Surveillance and Security Intelligence - the Canadian Case" by David Lyon and David Murakami Wood (LYON, David, MURUKAMI, D. 2020), the change in surveillance practices with the use of "big data" and new data analysis methods to assess potential risks to national security is emphasized. The "Five Eyes" partnership involving Australia, Canada, New Zealand, the United Kingdom, and the United States stands out, with the interconnection between "security intelligence" and "surveillance," now including internet monitoring and, especially, social networks, linked to personal data analysis. The notion of security expands to encompass a series of new domains, allowing the use of torture and interrogation as extraordinary means, as happened with the Canadian Maher Arar after the September 11, 2001 event, considered a suspect.

The connection of national security activities with big data and surveillance, now in terms of "mass surveillance," is corroborated by the revelations of American security agents such as William Binney, Thomas Drake, Mark Klein, and Edward Snowden. A study of more than 500 documents disclosed by Snowden shows how metadata can be used to build detailed profiles of the lives of those under surveillance (LYON, David; MURAKAMI WOOD, David, 2020).
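
The sketch below suggests, with invented records, why bare metadata already profiles a life. It assumes only the Python standard library; the call log, names, and fields are hypothetical.

```python
# Minimal sketch: profiling from call metadata alone (no call content).
from collections import Counter

call_log = [  # (caller, callee, hour of day)
    ("alice", "oncology_clinic", 9),
    ("alice", "oncology_clinic", 14),
    ("alice", "insurance_hotline", 15),
    ("alice", "bob", 22),
]

contacts = Counter(callee for _, callee, _ in call_log)
night_calls = [c for c in call_log if c[2] >= 21]

# Without hearing a single word, the metadata suggests a medical condition,
# an insurance concern, and a late-night personal relationship.
print(contacts.most_common(), night_calls)
```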

On the other hand, there are several initiatives to legalize state mass surveillance, as in Canada, with bills proposed in 2009 (Bill C-46, the "Investigative Powers for the 21st Century Act," and Bill C-47, the "Technical Assistance for Law Enforcement in the 21st Century Act"). Bill C-51 of 2015 stands out, giving intelligence authorities more powers domestically and internationally, along with immunity from liability for the use of those powers; it was followed by Bill C-59 (National Security Act, 2017). This follows a global wave of legalization, such as the "Big Brother Laws" in France (anti-terrorism measures enacted after 2015) and Japan's surveillance laws: the Secrecy Law and the 2016 Wiretapping Law, which expanded the categories of crimes subject to wiretap investigations by the police, legitimized wiretapping in criminal investigations, and, in sum, authorized the police to potentially eavesdrop on everyone's conversations.

Despite this legal foundation, however, transparency measures are lacking. It must be demonstrated, for example, that security measures have been adopted for the personal data used, so as not to violate the Canadian Charter of Rights and Freedoms and international human rights treaties, and that the so-called "four-part constitutional test" has been respected, showing that secrecy and the other security measures adopted are minimal, proportional, necessary, and effective. There is a lack of information about what content is intercepted, what types of metadata are stored, where the data is stored and for how long, how it is disposed of, which organizational entities have access to it and for what purposes, and whether the data is anonymized or "minimization" and security procedures have been adopted (R v Oakes, [1986] 1 SCR 103; "Necessary and Proportionate: International Principles on the Application of Human Rights to Communications Surveillance," 2014).

The proportionality of such measures is questioned in light of their potential infringement of privacy and freedom of expression, as measures of exception are indeed becoming the norm. This was foreseen by Nietzsche and Walter Benjamin, explored more recently by Giorgio Agamben, and to some extent by Shoshana Zuboff, who speaks of a "state of exception by Google" in her account of surveillance capitalism. It is in line with what Morozov asserts when pointing to algorithmic governance, evidenced by the numerous social experiments carried out by Facebook as a true real-life laboratory, as well as by the defense of "information sovereignty" by Russia, China, and Iran.

In this context, the Council of Europe Convention on Cybercrime of 2001 stands out, favoring surveillance legislation during the War on Terror. Signed by forty-three countries, including non-member states such as Canada, Japan, South Africa, and the United States, the convention requires participating nations to enact legislation facilitating the investigation and prosecution of crimes committed over the internet, and it also provides law enforcement authorities with broad legal access to traffic data.

These AI tools used for big data surveillance carry the potential for "bias" in the sense of a feedback loop of prejudices and skewed data, encompassing content shaped by structural prejudice and reproduced via algorithms (LYON, David; MURAKAMI WOOD, David, 2020). Facial recognition technology in particular may be duplicating or amplifying the institutional and structural racism that exists in society, resulting in coded inequity that fosters unjust infrastructures, since it perpetuates injustice and other forms of discrimination through instances of "bias" that are not systematically addressed by an appropriate algorithmic governance framework. Studies by Big Brother Watch, for example, indicate that 98% of the matches flagged by the cameras that alert UK police incorrectly identified innocent people as fugitives.
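
The arithmetic behind that statistic is worth making explicit. The Python sketch below uses hypothetical counts chosen to reproduce the 98% figure; what it computes is the false discovery rate among alerts, which is the number that matters to an innocent passer-by, regardless of the system's overall accuracy.

```python
# Minimal sketch: false discovery rate among facial recognition alerts.
# Counts are hypothetical, chosen to reproduce the 98% figure cited above.
false_matches = 98
true_matches = 2
total_matches = false_matches + true_matches

false_discovery_rate = false_matches / total_matches
print(f"{false_discovery_rate:.0%} of alerts point at innocent people")  # 98%
```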

Other issues concern the absence of mechanisms through which citizens can assert their rights, and the lack of preventive and mitigating measures for damage and information security. There is also a lack of assessment of the proportionality between the negative impacts and the positive externalities, which are generally associated with greater effectiveness, although this is questionable, as pointed out in a 2021 LAPIN report. The report states that there is a lack of transparency owing to the absence of systematized, consolidated, or publicized statistical data on the processing of data by facial recognition technologies in Public Administration. There is therefore no evidence of greater efficiency in public sector activities; in other words, according to the disclosed data, "the narrative of the technology's efficiency does not seem to be statistically confirmed" ("Report on the use of facial recognition technologies and surveillance cameras by Public Administration in Brazil").

Since there are other ways to achieve the same purpose, and since the errors and other issues raised cast doubt on the technology's efficiency, questioning the proportionality of the measure seems valid, given the potential harm to the fundamental rights of millions of people who, without being suspects, are subjected to mass surveillance by the State and have their personal data collected. A paradigmatic example is Salvador's 2020 carnival, where 80 cameras with facial recognition led to the arrest of 42 fugitives while capturing the biometric data of 11.7 million people, adults and children alike.
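
The proportionality arithmetic can be made explicit with the figures cited; the short Python sketch below simply divides arrests by the number of people scanned.

```python
# Minimal sketch: hit rate for the Salvador 2020 carnival figures cited above.
arrests = 42
people_scanned = 11_700_000

hit_rate = arrests / people_scanned
print(f"hit rate: {hit_rate:.6%}")  # roughly 0.000359%: about 1 in 278,000 scans
```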

To reduce the mythology surrounding the neutrality and objectivity of algorithms and their predictions, it is important to emphasize that data is only a sample and never speaks for itself. Correlations can be random and may generate incorrect conclusions when contextual and domain-specific knowledge is lacking. It is therefore essential that technical teams, usually drawn from the exact sciences, be expanded to include qualified personnel with expertise in law, philosophy (ethics), and sociology, providing an interdisciplinary and holistic analysis.
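
A small simulation makes the point about random correlations. The Python sketch below, assuming only numpy, searches pairs of pure noise and reports the strongest "correlation" it stumbles upon; in small samples the value is often strikingly high.

```python
# Minimal sketch: spurious correlation hunting in pure random noise.
import numpy as np

rng = np.random.default_rng(0)
best = 0.0
for _ in range(1000):            # examine 1,000 pairs of unrelated series
    x, y = rng.normal(size=8), rng.normal(size=8)
    best = max(best, abs(np.corrcoef(x, y)[0, 1]))

print(f"strongest 'correlation' found in pure noise: {best:.2f}")
```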

Given its potential for errors and infringements of fundamental rights, and its classification as high-risk in various international documents, the application of such technology should be preceded by an algorithmic impact assessment. This ensures that measures are taken to mitigate negative impacts, striking a better balance between the benefits the measure is meant to achieve and the damage to fundamental rights. Finally, it is essential that this document be prepared independently by a multidisciplinary team, in order to ensure its legitimacy and impartiality.