As part of its ambitious digital strategy, the European Union (EU) wants to regulate Artificial Intelligence (AI) to ensure better conditions for developing and using this innovative technology. AI has the potential to bring many benefits, such as improved health systems, safer and cleaner transportation, more efficient manufacturing, and cheaper and more sustainable energy.
In April 2021, the European Commission proposed the first EU legislative framework for AI. It provides for the analysis and classification of AI systems used in different applications according to the risks they pose to users. Different levels of risk will involve more or less restrictive regulations. Once approved, these will be the world's first AI rules.
Parliament's priorities for AI legislation
Parliament's priority is to ensure that AI systems used in the EU are safe, transparent, traceable, non-discriminatory and environmentally friendly. AI systems should be overseen by people, rather than by automation, to prevent harmful outcomes.
Parliament also wants to establish a technology-neutral and uniform definition of AI that could also be applied to future AI systems.
AI Act: Different rules for different levels of risk
The new rules set obligations for providers and users of AI, depending on the level of risk posed by the artificial intelligence. While many AI systems pose only minimal risk, they still need to be assessed.
Unacceptable risk
AI systems deemed a threat to humans will be banned. These include:
- Cognitive behavioral manipulation of people or specific vulnerable groups, such as voice-activated toys that encourage dangerous behavior in children.
- Social scoring: classifying people based on behavior, socio-economic status or personal characteristics.
- Biometric identification and categorization of people.
- Real-time and remote biometric identification systems such as facial recognition.
Some exceptions may be allowed for law enforcement purposes. Real-time biometric identification systems will be allowed in a limited number of serious cases, while "post" remote biometric identification systems, where identification takes place after a significant delay, will be allowed only for the prosecution of serious crimes and with court approval.
High risk
AI systems that adversely affect safety or fundamental rights will be considered high risk and will be divided into two categories:
- AI systems used in products covered by EU product safety legislation. This includes toys, aviation, automobiles, medical devices and elevators.
- AI systems that fall into specific areas, which will need to be registered in an EU database:
  - Management and operation of critical infrastructure.
  - Education and vocational training.
  - Employment, workforce management and access to self-employment.
  - Access to and use of essential private and public services and benefits.
  - Law enforcement.
  - Management of migration, asylum and border control.
  - Assistance in the interpretation and application of the law.
All high-risk AI systems will be assessed before they are placed on the market and throughout their lifetime.
General and generative AI
Generative AI like ChatGPT will need to comply with transparency requirements such as:
- Disclosure that the content was generated by AI.
- Model design to prevent the generation of illegal content.
- Publishing summaries of copyrighted data used for training.
High-impact general-purpose AI models that may pose systemic risks, such as the more advanced GPT-4 model, will have to undergo thorough evaluations, and any serious incidents will have to be reported to the European Commission.
Limited risk
Limited-risk AI systems should comply with minimal transparency requirements that allow users to make informed decisions. Users should be informed when they are interacting with AI, and after interacting with such applications they can decide whether they want to continue using them. This includes AI systems that generate or manipulate image, audio or video content, such as deepfakes.
Implications and enforcement of the rules
The AI Act will have a significant impact on providers and users of AI systems in the EU. Companies that fail to comply can be fined up to 7% of their global turnover. Bans on prohibited AI practices will take effect after six months, transparency requirements after 12 months, and the full set of rules in about two years.
The AI Act also makes it easier to protect copyright and imposes stricter transparency requirements on general-purpose AI systems, including with regard to their energy use.
European Commissioner Thierry Breton emphasized that Europe has positioned itself as a pioneer in this field, understanding the importance of its role as a global standard-setter. However, it will take years before it is possible to assess whether the AI Act succeeds in taming the downsides of Silicon Valley's latest export.
Conclusion
The regulation of Artificial Intelligence in the European Union through the AI Act is a landmark initiative that establishes a unique legislative framework in the field. These rules aim to ensure the safe and sustainable development of AI technology while protecting citizens' fundamental rights and the environment. By establishing risk categories and imposing specific rules for each level of risk, the AI Act strengthens confidence in the use of Artificial Intelligence in the EU.