
The ability to make informed and free choices is at the heart of modern democracies. However, robotics and other artificial intelligence (AI) machines (e.g., self-driving cars, drones, facial recognition technologies, chatbots, satellite-based GPS, smart assistants) are gaining an increasing level of control over individuals and their lives. The growing development of smart environments (e.g., smart houses, smart restaurants, smart cars), with embedded sensors that can recognise individuals’ presence, and the launch of new intelligent machines with the capacity to change people’s perception and moral conscience have raised an overwhelming range of concerns for human freedom and may impair the exercise of individuals’ fundamental rights and values. As a result, robotics and smart environments may have a direct impact on the value of autonomy and thus give rise to dignity, privacy, and data protection issues.
Through these AI technologies, data can easily be tracked, accumulated, and combined from a number of different sources (e.g., social network profiles, fitness trackers, e-coaching apps, banking services delivered through robotic process automation (RPA) systems). These technologies enable real-time adjustments by monitoring and collecting data on individuals’ daily (online and offline) activities, which are automatically evaluated and categorised into profiles. Such profiles allow business entities to personalise their products and services and to adopt persuasive strategies in their practices so as to increase their profitability.
AI has therefore progressively maximised technology’s ability to understand individuals’ behaviour and strengthened its adaptive capacity to influence people’s behaviour and opinions. Persuasive strategies, profiling, and personalisation practices are at the cutting edge of the marketplace’s development and of the revolution of today’s smart, artificial society. Further, the use of captology (the study of computers as persuasive technologies) to target individuals’ behaviour enables business entities to change people’s perceptions by influencing what they think and what they do.
In this context, AI technology is becoming omnipresent and influential, even controlling, while people are on the move. This is because, through their programming, such machines may determine how individuals can express themselves, and may alter their behaviour and thoughts based on the tools and patterns provided.
As a result, through their algorithmic decision-making and their extensive use in almost every context of our lives, AI machines may distort the functioning of society and infringe upon the freedom of choice of individuals, who may be vulnerable because of knowledge and power asymmetries.
AI technologies, therefore, may be able to impose their own ‘wishes’ and ‘choices’ on individuals’ decision-making processes and thereby threaten individuals’ autonomy and their freedom to develop their own personality and identity within society. As such, the use of robotics and other AI machines challenges the protection of individuals’ fundamental rights and freedoms and creates asymmetries and unbalanced distributions of knowledge and power.
Moreover, the extensive deployment of smart environments and the use of robotics and smart assistants (e.g., Amazon Alexa, Apple Siri, Google Assistant, Microsoft Cortana) for individuals’ daily routine activities can lead to habit formation and thus to people’s dependency on those intelligent machines (i.e., habit-forming technology). Such dependency may impair the quality of human life and undermine human relationships of every kind. Further, the capacity of the technology to increasingly influence individuals’ personal choices and moral sense, to absorb their capabilities, and to mimic aspects of human behaviour may interfere with human processes and human nature. Arguably, therefore, AI technology challenges the boundaries between human and non-human life and brings into conflict the (levels of) legal and non-legal protection afforded to humans and non-human beings (i.e., techno-humans and robots). In particular, the use of robots and smart assistants can threaten the dignity of the human person, the protection of personal data, and respect for the individual’s rights to privacy and to freedom of thought and conscience. All these rights are necessary instruments of a democratic society.
Therefore, robots and other smart machines are likely to pose wider threats in relation to legal and social justice, democracy, liability, and the survival of life and human nature as we know it.
What are Europe’s Responses to Techno-human Challenges?
In response to these techno-human challenges, Europe has proposed a new law: the EU Artificial Intelligence (AI) Act, published by the European Commission in April 2021. The AI Act intends to regulate the acceptable and unacceptable uses of AI technologies. Like the GDPR, the AI Act is very ambitious and is intended to become an international standard regulating the use of AI technologies across all sectors (private and governmental) worldwide. In particular, the AI Act aims to apply not only to organisations (i.e., providers, users, importers, and distributors of AI systems) established in the EU, but also to non-EU organisations that supply AI systems in the EU.
The principal objective of the Act is to set the rules establishing the boundaries around the use of AI systems across all sectors. In doing so, the Act introduces, firstly, a Risk-Based Approach under which Europe restricts certain uses of AI systems and, secondly, rules governing the use of biometric identification systems (i.e., facial, voice, or gait recognition technology) by private entities or governmental actors.
Under the Risk-Based Approach, Europe bans certain unacceptable uses of AI technologies (e.g., dark-pattern AI, social scoring systems), strictly regulates other uses of AI that carry high risk (e.g., AI in toys and medical devices), and leaves largely unregulated the technologies that, according to the Act, pose limited risk (e.g., deep fakes, smart assistants, emotion recognition systems) or low or no risk at all (e.g., spam filters). For the latter categories, the Act recommends the adoption of Codes of Conduct by organisations using such minimal- or limited-risk systems.
In relation to remote biometric identification systems, the Act sets out the key prohibitions and obligations for those using biometric recognition technologies. For this purpose, Europe defines two distinguishable levels of use. The ‘first level of use’ requires that any biometric identification technology comply with the requirements of the Risk-Based Approach mentioned above. However, any such technology to be used within the EU, whether by private or by governmental entities, must also undergo a third-party conformity assessment or comply with the European standards (to be published along with the AI Act) and be subject to ex-post surveillance requirements. The ‘second level of use’ prohibits law enforcement actors from using biometric recognition technology in public places, on the ground that such technology enables mass surveillance and identification of people, unless certain exceptions apply. Such exceptions apply if the use of the AI system is necessary to aid specific investigations (e.g., for missing children), to prevent a substantial threat (e.g., a terrorist attack), or to detect, localise, identify, or prosecute a suspect of certain serious crimes (e.g., human trafficking, terrorism, and rape).
It must be highlighted, however, that the proposed ‘second level of use’ only bans the use of biometric technology by the law enforcement sector. The Act does not expressly address the use of biometric technology by the private sector. Thus, private entities are permitted to use biometric recognition technology as long as they comply with the requirements of the Risk-Based Approach (under the high-risk category) of the Act.
What is to be expected?
Following Europe’s legislative responses, it remains to be seen whether the AI Act will match the global regulatory influence of the GDPR, and whether the boundaries set out in its provisions are sufficient and effective in restoring and maintaining trust, that is, in protecting people’s safety and fundamental rights within a techno-human services environment.
The AI Act is currently going through a detailed legislative process and is not expected to become binding law until late 2023 or early 2024. Once it becomes binding, there will be a grace period (24-36 months) before the Act’s requirements apply, giving organisations time to adjust and comply with their new legal obligations.