The European Union has become the first major jurisdiction in the world to legislate on the proper use of AI technology. What does this mean for enterprise architects looking at AI adoption and AI governance?
The European Parliament has passed an artificial intelligence (AI) regulation that has been four years in the making. The legislation is the first of its kind, but other nations are sure to follow with their own official guidelines for AI, with the EU rules acting as a template.
Following a 2020 report on the risks of AI, the European Commission began formulating rules to protect EU citizens. The EU Parliament finally endorsed the resulting act on March 13, 2024.
The act is expected to come into force in May, when it will regulate the improper use of large language models (LLMs) and other AI technology. But how will this impact enterprise architects?
Well, the legislation won't impact your organization directly, but it could lead to some AI vendors withdrawing or limiting their products. Enterprise architects need to be prepared to swap out or abandon any tools that fall foul of the new regulation.
To find out more about how you can adopt AI tools without the risk of vendor lock-in, see our previous article.
Otherwise, let's look more closely at the history of the EU AI Act and what it means for enterprise architecture.
Artificial intelligence (AI) has been on the European Parliament's mind for years now. Interest began with a European Commission report, entitled "White Paper on Artificial Intelligence – A European Approach to Excellence and Trust", which documented the opportunity for proper regulation to guide Europe toward becoming a center of excellence for AI adoption.
Discussions of this paper led to the EU putting a road map for AI regulation in place in May 2022. The goal wasn't to hold back AI, but to promote an environment where it could be developed and leveraged in an atmosphere of trust, with the approval of consumers.
This sets an example for enterprises, showcasing that AI governance can actually encourage AI adoption among nervous users. The EU maintains that governance actively supports the adoption of innovative technology.
The EU Parliament finally endorsed the AI Act on March 13, 2024, with 523 votes in favor and 46 against. The legislation will go into effect in May, with implementation beginning in 2025.
The European Union Artificial Intelligence (AI) Act primarily comprises a ranking system for the risk levels of various types of AI technology. Each category will receive a scaled level of scrutiny from regulators.
The risk levels run from minimal to unacceptable. Let's look at each level and what the regulations will entail (we'll sketch how these tiers might be modeled in code after the list):
Minimal-risk AI tools include games or minor utilities like spam filters. Items that pose no risk to the public may be used freely.
AI tools like chatbots that don't control systems, but may provide customers with misinformation, can be used. However, these tools must be leveraged transparently so users are aware that the information they're being provided isn't moderated by a human and could contain errors.
AI tools used in infrastructure or to make important decisions with real consequences, such as in law enforcement, may be used under strict regulation. Providers of high-risk AI must ensure data quality, documentation, and human oversight of their tools; they will be audited by government agencies to ensure accuracy and must gain approval before going to market.
AI systems that pose unacceptable risks to humans will be banned. This includes those that threaten people's safety, livelihood, and rights, such as social scoring tools that classify people based upon their personal characteristics.
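To make the tiers concrete, here's a minimal Python sketch of how an architecture team might encode them when tagging applications in a portfolio. The tier names come from the act itself, but the example categories and the classify_tool helper are purely illustrative assumptions, not anything defined by the legislation:

```python
from enum import Enum

class AIRiskTier(Enum):
    """The four risk tiers defined by the EU AI Act."""
    MINIMAL = 1       # e.g. spam filters, AI in games: free to use
    LIMITED = 2       # e.g. chatbots: allowed, with transparency obligations
    HIGH = 3          # e.g. infrastructure, law enforcement: strict regulation
    UNACCEPTABLE = 4  # e.g. social scoring: banned outright

# Illustrative mapping only -- real classification needs legal review.
EXAMPLE_CATEGORIES = {
    "spam_filter": AIRiskTier.MINIMAL,
    "customer_chatbot": AIRiskTier.LIMITED,
    "credit_scoring": AIRiskTier.HIGH,
    "social_scoring": AIRiskTier.UNACCEPTABLE,
}

def classify_tool(category: str) -> AIRiskTier:
    """Look up a tool's risk tier; unknown categories need manual review."""
    tier = EXAMPLE_CATEGORIES.get(category)
    if tier is None:
        raise ValueError(f"Unknown category '{category}': needs legal assessment")
    return tier

if __name__ == "__main__":
    for category in EXAMPLE_CATEGORIES:
        print(f"{category}: {classify_tool(category).name}")
```

Tagging each application with a tier like this makes it easy to query your portfolio later for anything sitting in the high-risk or unacceptable buckets.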
The European Union Artificial Intelligence (AI) Act won't directly impact the majority of enterprise architects. Unless you're developing your own internal AI platform that poses a risk to consumers, the law won't apply to your work.
What could be an issue for enterprise architects, however, is if their AI vendors are caught out by the new regulation. If your organization is legitimately leveraging an AI tool that falls into a high-risk category, you could suddenly lose that functionality should the vendor be forced to withdraw the product.
Of course, these rules will only apply to software made or sold within the EU. However, other nations are likely to follow the EU act's template if it proves successful, so these rules could soon apply worldwide.
If you lose an essential AI tool with little warning, it could have major consequences for your IT landscape. As such, the onus is on enterprise architects to be aware of which AI tools they're using, how likely those tools are to be restricted or banned under the act, and how easily they can be removed.
Artificial intelligence (AI) tools are only just being implemented in IT landscapes, so having them suddenly ripped away is a daunting prospect. To be prepared, enterprise architects need complete oversight of the AI in use across their application portfolio, along with detailed information on each tool.
To do that, you need a tool that can track your application portfolio and identify which applications have AI capabilities. You also need complete vendor lifecycle information to identify when software is about to be withdrawn.
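As a rough illustration, here's a minimal Python sketch of such a portfolio check. The Application record, its has_ai_capability flag, and the end_of_life field are hypothetical stand-ins for data you'd pull from your EA repository; this is not a real LeanIX API:

```python
from dataclasses import dataclass
from datetime import date, timedelta

@dataclass
class Application:
    name: str
    vendor: str
    has_ai_capability: bool   # does this app embed AI/LLM features?
    risk_tier: str            # assumed tier under the EU AI Act, e.g. "high"
    end_of_life: date | None  # vendor-announced withdrawal date, if any

def flag_at_risk(portfolio: list[Application],
                 horizon_days: int = 365) -> list[Application]:
    """Return AI-capable apps that are high risk under the act, or whose
    vendor has announced withdrawal within the given horizon."""
    cutoff = date.today() + timedelta(days=horizon_days)
    return [
        app for app in portfolio
        if app.has_ai_capability
        and (app.risk_tier == "high"
             or (app.end_of_life is not None and app.end_of_life <= cutoff))
    ]

# Example usage with made-up portfolio entries.
portfolio = [
    Application("HelpDeskBot", "AcmeAI", True, "limited", None),
    Application("FraudScorer", "RiskWorks", True, "high", date(2025, 3, 1)),
    Application("Payroll", "LedgerCo", False, "minimal", None),
]

for app in flag_at_risk(portfolio):
    print(f"Review replacement options for {app.name} ({app.vendor})")
```

Running a report like this on a schedule gives you early warning to plan a replacement before a withdrawal catches you off guard.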
LeanIX will empower you to track all the AI in use across your estate and give you the detailed information you need to prepare to retire software that faces a ban in the EU. To find out more about how LeanIX can support you, book a demo.