In recent weeks, owing to the rapid advancement and large-scale adoption of AI tools based on natural language processing, such as ChatGPT, and to broader concerns about other AI applications, especially high-risk ones launched on the market without knowledge of all their possible impacts and externalities, particularly the negative ones, researchers, entrepreneurs, and countries such as Italy, and the EU more generally, have issued statements either requesting or already applying a suspension or moratorium, and in some cases a ban.
Among the problematic points highlighted is the possible violation of personal data protection rules, such as the European GDPR and, in Brazil, the LGPD: in particular, the principles of transparency, good faith, and data minimization, and user rights such as the right to prior and qualified information.
As can be read on the institutional page of the company OpenAI, and understood as an effort toward "compliance", a few documents have recently been published ("Documents and policies"). They provide only a weak level of information, qualitative and quantitative, and fall far short of real compliance instruments such as the Data Protection Impact Assessment (DPIA) or the Algorithmic Impact Assessment (AIIA), as recommended by several international documents and bodies of the highest repute in order to verify potential risks to fundamental/human rights, assess the level of risk against international standards, "frameworks", and documents, and adopt risk mitigation measures. Two of the cited documents are recent, published only after the product was already on the market (03.01.2023 and 03.23.2023); others do not state the date of the version in which they were written and are not complete in the proper sense of compliance, starting with the fact that it was the company itself that defined and assessed which activities it considered higher-risk than others, offering only meager recommendations. Furthermore, the approach is not proactive (the "privacy by design" sub-principle), since the burden is not on the company to be more protective and to meet the requirements for trustworthy AI; instead, responsibility is placed on the user, who is told that if he adopts those simple recommendations the application will be safe and responsible.
Given the under-representation of Global South countries in the arena of discussions on AI, and therefore the lack of epistemic diversity there, as well as the under-representation of vulnerable groups in the discussion of specific documents on the subject and in the bodies that oversee "compliance" documents, and despite some isolated proposals for a decolonial AI and for an inclusive and democratic AI respectful of each country's socio-cultural characteristics, there are still few specific initiatives in this direction, and so far none of significance in Brazil.
The present petition is intended as a manifesto calling for greater social, political, and public engagement in general: economic incentives for good practices and compliance; research teams with expertise in the subject and transdisciplinary training; and investment in public universities, so that the discussions can be followed independently and the "Big Techs" can be challenged, since only they currently possess the financial means, the infrastructure, and the huge databases needed for such development, and the informational asymmetry in the field must be overcome. A change in Brazil's position is therefore urgent in terms of public policies and incentives, including tax incentives and private donations: for the adoption of good practices; for startups and small companies working with high-risk applications to take compliance measures without hindering innovation; for the formation and hiring of specialized teams; and for public investment in both public and private universities of recognized quality in the relevant areas, especially the humanities, given the derisory number of research scholarships and incentives for high-level research in those fields.
We advocate a perspective that goes beyond "human-centered AI" toward a "life-centered AI", sensitive to concerns of inclusion, diversity, and respect for fundamental and human rights at all levels: individual, collective, and social. It is necessary to seek proportionality between guaranteeing technological development and guaranteeing the protection of such rights, thereby reducing negative externalities and considering, above all, the socio-cultural context of the country and of the user involved.
Considering the fundamental epistemic character of AI, which originates from a trans-classical, holistic discipline, Cybernetics, it is essential to strengthen research in the humanities as well, that is, in critical, innovative, and independent thinking, since the subject requires trans- and interdisciplinarity in order to be well understood and adequately regulated.
In this sense, we call upon all countries of the Global South, and whoever else understands the decisive importance of this agenda anywhere in the world, to unite around it and related causes in a long-term perspective that would benefit society as a whole and strengthen the Democratic Rule of Law, locally and worldwide. One example would be taxes or fees that favor not automation, as occurs in the USA, but investment in the production of critical, trans/interdisciplinary knowledge, with priority investment in public universities and in the new skills required by ever greater automation with AI, since such investment is in fact falling (in the USA, for example, it has been reduced by 50% over the last ten years). The aim is less social inequality, the recovery of some principles of the welfare state, and thereby a reduction of socially disseminated violence.
We therefore urgently call for the union of representatives of the Global South, so that we are not merely dependent on technology but become producers of technology, and of knowledge in general, that works in our favor. The present manifesto serves this perspective by demanding the participation of Global South representatives in international discussions and in the collective bodies representing these themes, in order to promote respect for cultural specificity and make it possible for everyone's voice to be heard, so that we can broaden the concepts of equity and algorithmic justice, as well as social justice as a whole, and avoid affronts to what is called the principle of the prohibition of social regression.
In this context of absurd complexity and vertiginous speed of change, the creation of a legal framework regulating the production, uses, and applications of AI systems seems urgent to us, without prejudice to incentives for good practices and compliance. In our understanding, the best option is urgent heteroregulation, even if incomplete or deficient, since it is preferable to none; the "loopholes" will occur anyway. In this sense, the creation of a National Authority also seems crucial to us.
Another fundamental point is the proposal of a general power of caution that, under certain conditions, would make possible the immediate suspension of excessive-risk or even high-risk technologies, since the announced pause of only six months will not solve this problem.
The problem of control (Deleuze) has reached the entire noosphere, the infosphere; the solutions will therefore always be partial and precarious, and a universalizing pretension on this issue does not seem attainable (not least because the concept of the universal, which according to Badiou emerges with St. Paul, is limited to the Christian religions and to the West, which adopts it most, so to speak), nor would it be able to take into account the socio-cultural context of each country, except in a very generic way or in the form of a compromise solution.
Facing such issues, the manifesto also asserts the urgent need to enact federal legislation on AI for our country, not least because some state laws are already being applied. This is especially urgent for high-risk AI, for example the use of "facial recognition" in elementary and high schools and on other vulnerable groups, with several weaknesses, without observance of a list of principles and assured rights, and without minimum "compliance" measures, in particular the Data Protection Impact Assessment (DPIA) and the Algorithmic Impact Assessment (AIIA) focused on fundamental rights. The present manifesto is thus also a call to the Brazilian Legislative Power to treat the issue as urgent and to face it!
Finally, in the wake of the initiative taken by Chile, and just as data protection was expressly recognized via constitutional amendment as a fundamental right in our Constitution of 1988, it is urgent that the Constitution also expressly provide for new fundamental rights, the so-called "neurorights", which are:
1. The right to mental privacy
2. The right to personal identity
3. The right to free will
4. The right to equal access to mental enhancement
5. The right to protection against prejudice
It is also proposed that, preceding the insertion of these new fundamental rights, a "caput" for the respective article be enacted via constitutional amendment, stating, as an essential value of the Democratic State of Law and in respect of republican values, that scientific and technological development, in particular that related to disruptive technologies such as AI, must obligatorily be at the service of people (life-centered AI) and must, in addition to those values, respect the fundamental rights of all, including the protection of the new neurorights to brain activity and to the information derived from it. This would require prior evaluation and authorization in a manner similar to medical/pharmacological regulation, as well as the prohibition of the purchase or sale of the data resulting from such analysis.
P.S.: I would like to thank Willis S. Guerra Filho, Belmiro Patto and Cristina Amazonas, from the Ethikai Institute Study Group - ethics as a service, for their careful reading and valuable comments on this manifesto.