
"Into Our Minds" - "Neuromarketing and AI accountable: The New Frontier of Behavioral Manipulation" and AI Compliance

Friday, June 28, 2024

Updated at 07:29

In today's surveillance and data society, it is urgent to analyze the critical issues and negative externalities associated with AI, alongside its positive impacts, through an interdisciplinary, critical, holistic, and multifaceted approach, as recommended by the European Commission. This approach aims to mitigate negative effects through "legal frameworks on fundamental rights" applied in the compliance and design phases within a multi-layered governance system. Such a system involves practical measures to protect the rights potentially affected by AI applications, fostering responsible and sustainable innovation. Preemptive measures can help businesses avoid or significantly reduce reputational damage and legal repercussions, enhancing their competitive edge.

One of the most notable examples of market-adopted frameworks is the CIPL's "Building Accountable AI Programs - Examples and Best Practices for Implementing the Core Elements of Accountability." Here, accountability is central to a framework used in AI compliance, balancing business sustainability with individual protection. However, the focus must also include collective and social harms, with emphasis on democratic values and environmental concerns. An essential aspect of this process is external supervision by a multidisciplinary team, ensuring the independence and legitimacy of the instruments adopted or audited. This holistic responsibility can be demonstrated through effective governance structures and adequate supervisory teams, fostering awareness and support throughout the organization. Digital trust is closely linked to sustainable, long-term business growth, meeting the increasing expectations of customers, investors, regulators, and the media, and thus becoming a competitive differentiator.

This text aims to address the right questions, rather than providing ready-made answers, going beyond dualistic thinking and unquestionable dogmas. It embraces the provisional nature of scientific thought being developed on this topic, grounded in practical case analysis and committed research, as AI, the most disruptive technology, evolves.

AI's ubiquity is evident across sectors, though many people remain unaware of how it is used and of its potential human rights infringements. This includes the use of behavioral control and manipulation techniques, particularly those related to "captology" for economic and political purposes, linked to current issues like fake news, hate speech, and filter bubbles, as Eli Pariser discusses in "The Filter Bubble", highlighting the post-truth era.

Behavioral manipulation and persuasion have long been tools of advertising and marketing. With social media and AI, however, these activities have increased exponentially, transforming how news is produced, disseminated, and interpreted. Previously, news sources were limited and relatively reliable; now, new forms of publication, content sharing, and viral dissemination have emerged. This clickbait logic in social media values online content by its traffic volume, not its truthfulness. Sensationalist stories and images are crafted to capture user attention, directing users to propagandistic sites with consumerist goals. The problem with fake news is compounded by many people's lack of critical thinking and of an active stance toward checking information sources. According to Hervey (2017), bad news is the only news because it is addictive, while good news remains invisible because it does not sell.
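
To make this "clickbait logic" concrete, the minimal Python sketch below (the story data and `predicted_clicks` field are invented for illustration) ranks a feed solely by predicted engagement; note that nothing in the score rewards accuracy:

```python
# Toy feed ranking: content is ordered purely by predicted engagement;
# the "verified" flag plays no role in the score (the clickbait logic).
stories = [
    {"headline": "Measured policy analysis", "predicted_clicks": 120, "verified": True},
    {"headline": "SHOCKING claim goes viral", "predicted_clicks": 9800, "verified": False},
]

feed = sorted(stories, key=lambda s: s["predicted_clicks"], reverse=True)
for story in feed:
    print(story["headline"], "->", story["predicted_clicks"], "predicted clicks")
```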

A lesser-explored development is "neuromarketing", which uses brain and biometric data analysis to decode emotional reactions to persuasive advertisements. Companies in this sector (e.g., salesbrain.com.br) often omit compliance or data protection measures. The technique can involve facial analysis to decode universal micro-expressions, which occur mainly below the conscious level (Morin, Christophe; Renvoise, Patrick. "The Persuasion Code: How Neuromarketing Can Help You Persuade Anyone, Anytime, Anywhere", DVS Editora, Kindle Edition). Measuring our emotions is a step toward controlling them, and thus toward influencing our behavior: numerous studies emphasize that decisions are often driven by impulses and emotions, a point reflected in the Latin origin of "emotion" in "movere", to move. Primal brain activity, responsible for emotional responses, is monitored through voice analysis, skin conductance response, heart rate variability, respiratory sinus arrhythmia, eye movement tracking, facial expression decoding, and frontal lobe dominance, which records blood flow changes.
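
As an illustration of how one of these physiological channels is quantified, the Python sketch below computes two standard heart rate variability metrics (SDNN and RMSSD) from a hypothetical series of inter-beat (RR) intervals; the sample values are invented, and real neuromarketing pipelines would combine many such signals:

```python
import numpy as np

def hrv_metrics(rr_intervals_ms):
    """Two standard heart rate variability (HRV) metrics computed
    from inter-beat (RR) intervals given in milliseconds."""
    rr = np.asarray(rr_intervals_ms, dtype=float)
    sdnn = rr.std(ddof=1)                       # overall variability
    rmssd = np.sqrt(np.mean(np.diff(rr) ** 2))  # beat-to-beat variability
    return {"SDNN_ms": sdnn, "RMSSD_ms": rmssd}

# Hypothetical RR series (ms) recorded while a subject watches an ad.
sample_rr = [812, 790, 805, 778, 760, 795, 810, 802]
print(hrv_metrics(sample_rr))
```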

Neural data are highly sensitive, relating to the fundamental right to mental integrity, which protects against manipulation. This is already a reality at the state level, as seen with PEC 298 of 2023 in Rio Grande do Sul, which amends the state Constitution to protect mental identity against brain-affecting research conducted without consent. The provision is limited, however: it does not address data protection or damage mitigation measures when such data is used commercially. Also notable are EC 29/23, recognizing mental integrity and algorithmic transparency as fundamental rights, and PL 522/22, which amends the LGPD to define and regulate this right.

Comparatively, Chile has explicitly recognized neuro-rights as fundamental rights in Article 19 of its Constitution, as highlighted by the recent Supreme Court decision in "Guido Girardi vs. Emotiv Inc." (Case No. 105.065-2023, rel. min. Ángela Vivanco, ruled on 9/8/23).1 Additionally, the OECD Recommendation on Responsible Innovation in Neurotechnology, the Inter-American Declaration of Principles on Neurosciences, Neurotechnologies, and Human Rights by the OAS, and the UNESCO Report on the topic are significant references.

Finally, PL 2.338/23 stipulates the need for an AIA (algorithmic impact assessment) in high-risk applications, though it lacks the detailed procedural specifications, minimum requirements, and standardization that would contribute to legal certainty. Article 14 prohibits the implementation and use of AI systems with excessive risk, including subliminal techniques and those exploiting vulnerabilities of specific groups or profiling individuals based on behavioral analysis or personality traits, except as provided in article 15.
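
Since the bill leaves the AIA's minimum content open, the sketch below is merely one assumption of what a structured assessment record could look like in code; every field name is hypothetical, and the prohibition check simply mirrors the article 14 language summarized above:

```python
from dataclasses import dataclass, field

@dataclass
class AlgorithmicImpactAssessment:
    """Illustrative, non-normative AIA record; PL 2.338/23 does not
    prescribe these fields, so they are assumptions for discussion."""
    system_name: str
    risk_level: str                                # e.g. "high", "excessive"
    affected_rights: list = field(default_factory=list)
    uses_subliminal_techniques: bool = False
    exploits_vulnerable_groups: bool = False
    mitigation_measures: list = field(default_factory=list)
    independent_review: bool = False

    def prohibited(self) -> bool:
        # Rough mirror of the article 14 prohibition described above.
        return self.uses_subliminal_techniques or self.exploits_vulnerable_groups
```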

Thus, just as a data protection culture is consolidating, it is essential for AI to ensure sustainable, responsible, and trustworthy applications (accountable AI), avoiding what Byung-Chul Han describes as the "total protocol of life" and the digital panopticon, where trust is replaced entirely by control, a hallmark of the transparency society. Instead of Big Brother, we have Big Data, and we live the illusion of freedom (self-exposure and self-exploitation); here, everyone observes and monitors everyone. Surveillance markets in democratic states dangerously approach digital surveillance states. Psychopower replaces biopower, intervening in psychological processes, and is more efficient because it controls from within. In this phase of capitalism, the hypertrophy of the "surveillance capitalism" model alters traditional concepts like democracy, citizenship, and sovereignty, now linked to digitalization. The generalization of the control society and the new digital panopticon, anticipated by the philosopher G. Deleuze in "Postscript on the Societies of Control" (in "Negotiations"), follows Foucault's earlier work on discipline, regulation, and normalization.

New forms of control, both soft and hard, have emerged with social media since the first decade of the 21st century, as seen in the DHS's use of social media for "harder" surveillance and the creation of "Socmint" (Social Media Intelligence) units within security agencies.

Thus, it is urgent to analyze the ethical, political, and legal aspects of persuasive technology use, as such practices exponentially increase, diversify, and become "invisible", blending into everyday life for anyone with internet access and a computer or smartphone. With the Internet of Things and services embedding AI into everyday objects and environments, the potential for precise intervention grows, enhancing persuasive power. Persuasive technology now spans ever more areas, including advertising, marketing, sales, labor relations, and political and economic use in general.

New persuasive forms, especially those built on AI, big data, and machine learning, have greater potential for intrusion and harm. Persuasion techniques are more effective when interactive, adapting influence tactics to evolving situations based on real-time feedback. This personalizes persuasion in a way that traditional media's behavioral manipulation tactics, which cannot produce individualized results, never could.
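
A minimal sketch of this feedback loop, assuming a hypothetical set of message variants and click signals, is an epsilon-greedy bandit: it mostly shows whichever variant has worked best so far while occasionally exploring others, so the influence tactic adapts with every interaction:

```python
import random

class EpsilonGreedyPersuader:
    """Toy epsilon-greedy loop: favors the message variant with the
    best observed click rate, exploring alternatives with prob. eps."""

    def __init__(self, variants, eps=0.1):
        self.eps = eps
        self.stats = {v: [0, 0] for v in variants}  # variant -> [clicks, shows]

    def _rate(self, variant):
        clicks, shows = self.stats[variant]
        return clicks / shows if shows else 0.0

    def choose(self):
        if random.random() < self.eps:
            return random.choice(list(self.stats))   # explore
        return max(self.stats, key=self._rate)       # exploit

    def record(self, variant, clicked):
        self.stats[variant][0] += int(clicked)
        self.stats[variant][1] += 1

# Hypothetical usage: each impression updates what is shown next.
persuader = EpsilonGreedyPersuader(["fear_appeal", "social_proof", "scarcity"])
shown = persuader.choose()
persuader.record(shown, clicked=True)
```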

Computers' capacity for large-scale data analysis enables persuasion techniques such as suggestion, simplified framing, and other influence tactics. A clear example of manipulation is "dataism": social media users trust these platforms, believing their data is secure and not used for other purposes without informed consent. Users are often unaware of what happens behind the scenes, of their real position, and of the fact that these "free" services make them the product rather than mere users; they pay with their data and behavioral profiles, becoming subjects of countless behavioral experiments without their knowledge and without transparency.

"Captology", coined by B.J. Fogg in the 1990s, led to the "Persuasive Tech Lab" at Stanford University, researching this field. Sometimes confused with a Scientology offshoot, captology relates to a long-standing aspect: human behavior manipulation, evident in marketing, advertising, media, and politics. Now, technology, computers, and AI amplify this potential, enhancing manipulation dissemination speed and target vulnerability. Captology studies computers and technologies as persuasive tools, manipulating behavior, habits, emotions, and feelings using psychology principles in technology and design. This can create new products aimed at behavioral change, often non-transparent and even surreptitious, exemplified by the Cambridge Analytica scandal. Cambridge Analytica used behavioral and psychological research from Facebook data to profile individuals, personalizing ads, messages, and publicity for behavior manipulation, aiming to elect political clients. Facebook estimated up to 87 million users' data and behavioral analysis were improperly shared with Cambridge Analytica during U.S., Philippines, Indonesia, and U.K. elections.

Conclusion 

Computers, AI, and technologies for behavioral manipulation are far more potent than human-only methods because of their intrusiveness, speed, and interactivity, which increase target vulnerability.

In an era dominated by AI and pervasive surveillance, it is crucial to establish robust legal and ethical frameworks that ensure AI accountability and adequately and systematically protect potentially affected fundamental rights, preemptively through compliance and design. The integration of interdisciplinary expertise and independent oversight can foster a culture of responsible AI use, safeguarding against the harmful effects of behavioral manipulation. By doing so, companies can not only avoid reputational damage and legal repercussions but also contribute to a more ethical and sustainable technological future. This proactive stance is essential to prevent a dystopian reality where trust is replaced by control, as warned by thinkers like Byung-Chul Han. Ultimately, promoting transparency and accountability in AI will be the cornerstone of a democratic society in the digital age, enabling us to uphold a Democratic State of Law from conception and achieve algorithmic justice.

__________

1 Available here.