The European Union’s Parliament has passed the world’s first comprehensive set of AI regulations: ground rules intended to govern artificial intelligence development projects based on their perceived level of risk. The measure passed with 523 votes in favor and 46 against.
The EU AI Act assigns each artificial intelligence project to one of four categories based on its perceived level of risk:

- Low risk
- Medium risk
- High risk
- Unacceptable
The act calls for an all-out ban on technologies that fall into the “unacceptable” category. Artificial intelligence that falls into one of the other three categories will soon be subject to regulation.
Developers of more dynamic “general purpose” AI technology may be required to submit a comprehensive summary of the content that is used to train their AI models.
The Artificial Intelligence Act also requires that AI-generated “deepfake” content be explicitly labeled as such.
The EU AI Act will go into effect in May 2024, after it undergoes review and receives endorsement from the European Council.
Talks concerning the implementation of the EU AI Act are ongoing, as Romanian lawmaker Dragos Tudorache confirmed. Tudorache, who took part in the efforts to bring the act into law, wrote, “The AI Act is not the end of the journey, but, rather, the starting point for a new model of governance built around technology. We must now focus our political energy in turning it from the law in the books to the reality on the ground.”
Tudorache’s remark also touches upon a very real fear in the development world, where innovators are concerned that their next big invention could be stifled by overly harsh regulations.
Many surmise that this could lead to the formation of a sort of AI underground, which could be a source of dangerous technology that would be impossible to monitor, regulate or even control. For this reason, legislators abroad and in the US are taking a more measured approach that’s akin to erecting guardrails instead of a stone tunnel.
In fact, virtually all reputable and ethical AI developers are very careful with the technology they create, ensuring that there are no harmful consequences, intended or unintended. As such, you can be certain that there has been no shortage of updates to company ethics policies since artificial intelligence entered the mainstream.
European Union Parliament President Roberta Metsola called the artificial intelligence regulations “trailblazing,” adding that “artificial intelligence is already very much part of our daily lives. Now, it will be part of our legislation too,” according to her social media post on the topic.
What’s Allowed and Disallowed Under the EU Artificial Intelligence Act?
In creating the EU AI Act, lawmakers took a risk-based approach, evaluating the level of risk associated with each artificial intelligence application.
An overwhelming majority of AI projects will fall into the “low risk” category, such as AI-powered email spam filters or a tool that recommends related content to users.
At the other end of the spectrum, there are “high risk” AI applications, such as AI that’s used in conjunction with medical devices. AI-powered software for infrastructure systems, such as a power plant, would also fall into the high risk category.
There is a diverse range of technology that falls into the “unacceptable” category, including predictive policing and AI-powered emotion recognition technology in a school or workplace setting. The same is true for social scoring systems aimed at manipulating human behavior and remote AI-powered facial scanning technology. There is an exception for the latter in cases of serious crimes such as murder, kidnapping and acts of terrorism.
What’s more, innovative developers are using AI to detect AI that holds the potential to do harm.
Who’s Affected by the EU Artificial Intelligence Act?
This new AI law affects companies and organizations doing business in one or more of the 27 European Union member nations.
US-based businesses that do not operate in the EU are not subject to the EU’s Artificial Intelligence Act, so most will not feel any direct impact when the law goes into effect in May. It should be noted, however, that AI has been a common topic of discussion and debate among American lawmakers, and it’s expected that the EU AI law will serve as a model of sorts for other nations seeking to establish AI “guardrails.”
Transparency and accountability are essential aspects of the AI development process and these regulations reinforce that idea. For reputable and ethical artificial intelligence innovators, very little — if anything — will change when the law goes into effect.
Why Is It Important to Hire an AI Development Company With Knowledge of Related Laws?
It’s essential that your AI and machine learning development partner is well-versed in the applicable laws and regulations that could affect your technology and your organization as a whole. You don’t want to find yourself in a situation where you have a brand-new AI deployment that’s a liability at best or unusable at worst.
A reputable, industry-leading AI development company will have a solid grasp of the latest laws, regulations and technological advancements. This awareness is especially important in the rapidly evolving field of artificial intelligence and machine learning. It’s impossible to know how AI and ML will evolve in the coming weeks, months and years, but one thing is certain: we will continue to see roll-outs of new legislation and new regulations impacting machine learning and artificial intelligence development projects.
Hiring a Dallas AI Development Company
AI holds the power to transform an organization and its operations. At 7T, we specialize in enterprise AI development, which begins with a deep dive into an organization’s strategy, challenges and objectives. Then, we create a value-generating solution with machine learning-driven AI technology. This problem → solution approach to AI development has the potential to bring about exceptional results, generating new, profitable opportunities with the latest, most innovative technologies.
Connecting with the right artificial intelligence developer can be challenging, as you need a partner who understands the organization’s pain points, objectives and business strategy — both today and in the future. At 7T, we take the time to get to know your business, providing clients with a Business Requirements Document (BRD). The completed BRD leads to a comprehensive understanding of an organization’s needs, pain points and future objectives.
Also, check out our latest eBook, A Guide to Prepare Your Business for Artificial Intelligence (AI) Development.
7T’s Digital Transformation development team is guided by the approach of “Digital Transformation Driven by Business Strategy.” As such, the 7T development team works with company leaders who are seeking to solve problems and drive ROI through Digital Transformation with innovative business solutions such as multimodal machine learning-powered AI implementations. 7T has offices in Dallas, Houston, and Austin, but our clientele spans the globe. If you’re ready to learn more about Digital Transformation development solutions, contact 7T today.