AI legislation is approaching rapidly. Discover what enterprise architects need to know in order to stay compliant with incoming AI regulation.
Generative artificial intelligence (AI) could revolutionize the way we do business. Unfortunately, it also has huge potential for illicit use.
Legislators are already moving to prevent misuse of AI technology. Efforts range from an executive order issued by US President Joe Biden calling for compulsory measures to the EU AI Act currently gaining momentum in the European Parliament.
Most of this legislation will be aimed at malicious actors rather than legitimate companies, but businesses will still be required to comply with rules enforcing the fair treatment of customers. What will this involve, and how can you leverage AI technology while staying compliant?
The key is to have full oversight of your IT landscape, and of how and where you're leveraging generative AI technology across your tech stack. To find out more about how LeanIX can support you with this, book a demo:
Meanwhile, let's look more closely at how AI is being used in an illicit manner. This will also show us why AI regulation is inevitable.
Protect Taylor Swift
At the time of writing, explicit 'deepfake' images of pop singer Taylor Swift are being shared across social media. The images are not real; they were created using generative AI tools.
Public outrage at the defamatory fake images led thousands of fans to post official, approved images of the singer under the hashtag "#protecttaylorswift", flooding social media to drown out the illicit images in search results. High-profile cases like this of AI being put to illicit use are likely to hasten the onset of regulation even further.
Meanwhile, in the same week, voters in New Hampshire, USA, received an AI-generated robocall imitating President Joe Biden's voice, urging them not to vote in the state's primary election. While the Taylor Swift images are concerning, the prospect of voter manipulation is even more alarming.
Enterprises are, of course, highly unlikely to target celebrity singers or voters in this way, but coming legislation may well have implications for how generative AI can legally be leveraged. So, let's look more closely at five things you can do to prepare.
1 Choose Your AI Carefully
A whole raft of AI tools has entered the market since ChatGPT opened the floodgates at the end of November 2022. The question is: are any of them actually of value?
AI technology has tremendous potential, but there are also many startups making promises they can't fulfill. Some of these tools could even be fronts for data harvesting, or simply not secured against cyber attack.
Cybersecurity firm Group-IB reports finding over 100,000 stolen ChatGPT logins available on the dark web. Cybercriminals who access these accounts may be able to mine confidential information from your ChatGPT usage history.
It's easy to get swept up in the excitement and potential of AI technology, but Group-IB's research shows how important it is to be careful about which tools you add to your application portfolio. Careful auditing will be essential to ensure security and compliance with regulators.
2 Monitor AI Use In Your Company
AI tools are so easy to use that anyone in your organization can adopt them without technical support. This means you need to keep a careful eye not just on the applications you've authorized, but also on any AI tools your colleagues may be using without authorization.
When leveraging AI tools to generate content for your organization, employees could unwittingly input private data into the public instance of ChatGPT. Not only does this share that data with ChatGPT's vendor, OpenAI, but the data may also be used to train the model, meaning ChatGPT could potentially surface that information to users outside your organization.
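As a minimal sketch of one mitigation, assuming you route prompts through an internal gateway before they reach any public AI service, a simple redaction layer might look like this. The patterns and names below are purely illustrative; a real deployment would rely on a proper data loss prevention solution rather than a handful of regexes:

```python
# Minimal sketch: redact obviously sensitive patterns from text before it is
# sent to a public generative AI tool. Illustrative only, not exhaustive.
import re

REDACTION_PATTERNS = {
    "EMAIL":   re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "IPV4":    re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b"),
    "API_KEY": re.compile(r"\b(?:sk|key)-[A-Za-z0-9]{16,}\b"),  # key-like tokens
}

def redact(prompt: str) -> str:
    """Replace each match with a placeholder naming the redacted category."""
    for label, pattern in REDACTION_PATTERNS.items():
        prompt = pattern.sub(f"[REDACTED_{label}]", prompt)
    return prompt

text = "Contact jane.doe@example.com, server 10.0.0.12, token sk-abc123def456ghi789."
print(redact(text))
# Contact [REDACTED_EMAIL], server [REDACTED_IPV4], token [REDACTED_API_KEY].
```

Even a crude filter like this makes accidental leaks visible, which is often the first step toward a proper policy.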
Equally, unsupervised use of generative AI could lead to factual or textual errors being published to your customers. Generative AI tools can "hallucinate" plausible-sounding mistakes and are unable to self-edit, so their output needs careful human review.
It's equally important to be able to report to regulators on what AI is being used across your company, so they can see you're compliant. Keeping a formal inventory of AI use will likely become a regulatory requirement in the near future.
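As a starting point, a lightweight internal register of AI tools can make that reporting trivial. Here's a minimal sketch in Python; the fields and values are illustrative assumptions, not a LeanIX schema:

```python
# Minimal sketch of an internal AI usage register for compliance reporting.
# All field names and values are illustrative, not tied to any platform.
from dataclasses import dataclass, asdict
import json

@dataclass
class AIToolRecord:
    name: str               # tool or vendor name
    owner: str              # team accountable for its use
    use_case: str           # what the tool is used for
    data_shared: str        # classification of data sent to the tool
    private_instance: bool  # whether data stays inside the organization
    approved: bool          # passed internal security/compliance review

register = [
    AIToolRecord("ChatGPT (public)", "Marketing", "draft copy", "public only", False, True),
    AIToolRecord("Internal LLM", "Engineering", "code review", "confidential", True, True),
]

# A report like this could be handed to auditors or regulators on request.
print(json.dumps([asdict(r) for r in register], indent=2))
```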
3 Get A Private AI Instance
Last year, Samsung employees experimented with ChatGPT by inputting proprietary company code into the interface and asking the tool to fix bugs in it. As a result, Samsung banned the internal use of ChatGPT and instead acquired a private, secure instance of the platform for its employees to experiment with.
These internal instances are trained on the same data as the public version but aren't connected to it, so nothing you enter into the system will find its way outside your organization. This is vital for leveraging the power of generative AI without putting your data at risk.
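In practice, switching to a private instance can be as simple as pointing your client at an internal endpoint instead of the public API. Here's a minimal sketch using the OpenAI Python SDK; the endpoint URL, environment variable, and model name are hypothetical placeholders you'd swap for the values your vendor provides:

```python
# Minimal sketch: routing requests to a private, OpenAI-compatible endpoint
# instead of the public ChatGPT service. URL and model name are hypothetical.
import os
from openai import OpenAI

client = OpenAI(
    base_url="https://ai.internal.example.com/v1",  # private endpoint, not api.openai.com
    api_key=os.environ["INTERNAL_AI_API_KEY"],      # key issued by your own IT team
)

response = client.chat.completions.create(
    model="gpt-4-private",  # hypothetical internal deployment name
    messages=[{"role": "user", "content": "Summarize our Q3 architecture review."}],
)
print(response.choices[0].message.content)
```

The design point is that nothing in your application code needs to change beyond the endpoint configuration, which makes migrating away from the public instance relatively painless.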
Showing regulators that the data you input into AI tools remains secure will be vital for compliance. If your AI vendors can't offer a private instance, you may need to reconsider working with them.
4 Avoid AI Prejudice
We, as human beings, have prejudices. We can, however, train ourselves to recognize and resist them.
AI tools are trained on the way we speak, meaning they will also absorb the prejudices embedded in our speech and content. However, AI cannot self-regulate against those prejudices, as it isn't aware of them.
If we leverage generative AI tools to read all of the applications we receive for a job role and highlight the best candidates, how can we be sure the AI won't copy our prejudices and highlight only the white, male candidates? This would not only cost us the opportunity to acquire key talent, but could also lead to us falling foul of regulators.
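One widely used statistical sanity check here is the "four-fifths rule" from US employment-selection guidance: flag any group whose selection rate falls below 80% of the highest group's rate. A minimal sketch, with purely illustrative group labels and numbers:

```python
# Minimal sketch of a disparate-impact check on AI-screened candidates,
# using the "four-fifths rule". All numbers are illustrative only.
def selection_rates(shortlisted: dict[str, int], applied: dict[str, int]) -> dict[str, float]:
    """Selection rate per group: shortlisted / applied."""
    return {g: shortlisted[g] / applied[g] for g in applied}

def four_fifths_check(rates: dict[str, float]) -> list[str]:
    """Flag groups whose selection rate is below 80% of the best group's rate."""
    best = max(rates.values())
    return [g for g, r in rates.items() if r < 0.8 * best]

applied     = {"group_a": 120, "group_b": 80}   # applications received, by group
shortlisted = {"group_a": 30,  "group_b": 10}   # candidates the AI highlighted

rates = selection_rates(shortlisted, applied)
print(rates)                     # {'group_a': 0.25, 'group_b': 0.125}
print(four_fifths_check(rates))  # ['group_b'] -- below 80% of group_a's rate, worth auditing
```

A flagged result doesn't prove bias on its own, but it tells you exactly where to start a human review of the AI's output.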
World governments appear just as concerned about the prejudicial nature of AI tools as they are about AI being used by cybercriminals. This is likely to be a key factor in coming regulation.
5 Train Your Staff
In addition to the measures above, it's key to ensure your staff understand the realities of generative AI tools, so they can avoid its pitfalls even when you aren't there to advise them.
Train your team to understand the limitations of AI and to supervise AI tools carefully to ensure success. While enterprise architects should monitor the use of AI across the IT landscape, they need the support of your entire organization.
If you can show regulators that you're training your staff in AI awareness, it will go a long way toward proving compliance and ensuring you're ready for coming regulation.
LeanIX For AI Administration
The first step to ensuring compliance with coming AI regulation is knowing exactly where, how, and by whom AI is being used across your IT landscape. That requires a full map of your application portfolio within a dedicated platform.
Not only does the LeanIX platform log and map the infrastructure of your software applications and IT components, it also leverages AI technology itself. Using a built-in, private ChatGPT instance, you can generate reports based on the information stored in your LeanIX instance and even have it carry out automated research for you.
To find out more, read our previous article: