Artificial intelligence has tremendous potential, but it also poses risks, from security breaches to AI hallucination. Discover how you can ensure safe AI use in your organization without holding your team back from leveraging AI for ideation and innovation.
Artificial intelligence (AI) tools are set to revolutionize the way we all do business. It's tremendously exciting to see computers complete everyday workplace tasks for us simply because we typed a prompt asking them to.
As such, it's no wonder that workers the world over are rushing to play around with tools like ChatGPT to see what they can do and how they can enhance productivity. Their excitement, however, must be tempered by caution.
Samsung recently discovered employees pasting proprietary code into ChatGPT to test its bug-fixing capabilities. This not only allowed ChatGPT's vendor, OpenAI, to view the code, but the code could also have been absorbed into ChatGPT's training data, meaning fragments of it could potentially appear in answers to other users' prompts.
Samsung, however, realized that banning employees from using the tool wasn't an effective way to protect the organization. Instead, it provided employees with a private instance of ChatGPT that OpenAI could not read and that would not output information to users outside Samsung.
This allows Samsung employees to continue to experiment with ChatGPT and find new ways to leverage its potential, but in a safe environment where they can't put the company at risk. Enabling the use of AI tools within appropriate safeguards will be essential to realizing the tools' potential.
Advancing AI Adoption
Artificial intelligence (AI) has incredible potential. Analyzing complex data sets and making strategic decisions far faster than a human can is impressive enough, but generative AI's ability to produce human-sounding content and functional software code in seconds is a game-changer.
Holding your team back from these tools will leave them less productive than competitors whose teams are already making use of generative AI, putting you at an immediate competitive disadvantage.
Generative AI can also be used for ideation, and organizations that don't take advantage of this could find themselves struggling to compete creatively. It's therefore vital for IT teams and enterprise architects to be able to empower their organizations with innovative AI tools.
This requires knowledge of the developing AI marketplace, so that enterprise architects can find the tools their colleagues need to succeed. More challenging, however, is identifying the best place to fit these tools into your IT landscape.
To properly empower your organization with AI tools, your enterprise architects need to start with a complete map of your application portfolio and data landscape. Yet, getting the tools in place is just the start of your work.
Safeguarding Your Organization
Artificial intelligence (AI) tools are so valuable because they can act without direct instruction, reducing the load on your people. AI can analyze data to derive intelligence, direct resources to where they're needed, cut off wasted investment using predictive algorithms, and even create new software before you know you need it.
The question, however, is what happens when AI gets it wrong. Nothing is infallible, and even a 1% chance of error means errors will still occur.
This is, of course, no more than you can say about a human being. Both humans and AI make mistakes; the difference is that human beings can recognize and correct their mistakes when they make them.
AI cannot reliably diagnose its own errors. Worse, it will often fabricate evidence to support its output, a phenomenon known as AI 'hallucination'.
Nor is this a theoretical concern. There are plenty of recent, real-world examples of AI causing actual problems for organizations.
When AI Goes Wrong
Microsoft recently made around 50 writers in its news division redundant, using generative artificial intelligence (AI) to write articles instead. This led to embarrassment when the AI listed the Ottawa Food Bank, a Canadian charity, as a popular attraction for tourists to visit.
More seriously, a federal court in Manhattan recently fined the law firm Levidow, Levidow & Oberman USD 5,000 for submitting legal precedents in a court filing that had been researched using ChatGPT. The precedents turned out to have been entirely fabricated by the AI tool.
This isn't malice or a false claim of capability; generative AI tools are simply unable to tell the difference between factual information and plausible-sounding fiction. And these are just a few examples of AI tools getting it badly wrong.
Generative AI misuse is one thing, but when you're letting AI platforms control your logistics or monitor your data for cyber attacks, errors can be critical. What happens when your AI blocks access to your network because it thinks all your employees are attackers, or worse, decides a cyber attacker is a legitimate user and gives them full control?
When leveraging AI tools, you must put the right safeguards in place to ensure human oversight of AI innovation. Yet, you need to do this without blocking the competitive edge that AI tools can offer.
Finding The Right Balance
Artificial intelligence (AI) has incredible potential and those who don't take advantage of it risk being left behind in the market. Yet, we have seen that there are tremendous risks involved in the unsupervised use of these tools.
You can't simply give your employees free rein to use these tools unchecked ('shadow AI'), but you also can't prevent your organization from utilizing AI innovation. Striking the right balance means aiding AI adoption while simultaneously ensuring AI governance.
The key here is to manage your AI platforms as you would an employee. Let them do the work while your organization supervises their activity and monitors their performance.
To do that, you need an AI strategy and an AI governance framework: a way to evaluate and approve AI tools for use in your organization, supervise their addition to your application portfolio, and then continue to monitor them to ensure their ongoing utility. Rather than being a blocker to AI innovation, enterprise architects must become AI champions, sourcing, implementing, and developing the use of AI tools.
By becoming the one most often saying 'yes' to AI, you will also become the one people are more likely to listen to when you have to say 'no'. Doing that, however, means gathering enough intelligence to be able to confidently say 'yes' when it's safe to do so.
LeanIX Keeps Track Of Your AI Portfolio
Artificial intelligence (AI) platform management requires enterprise architects to understand both the capabilities and limitations of AI tools, so that they can best implement them within their application portfolio and IT landscape. They need to know where these tools are safe to use and where they are not.
Gaining this insight into your application portfolio will empower you to say 'yes' to AI innovation without the risks. To do that, you need the LeanIX platform to map out your entire application portfolio in a live, collaborative environment.
To find out more about how the LeanIX platform can unlock AI adoption and enable AI governance, book a demo: