Continuous Transformation Blog

AI Governance And Enterprise Architecture: Our Perspective

Written by LeanIX | June 27, 2024

As a market leader in enterprise architecture, our customers have been asking us what our position is on AI adoption and governance. Seth Lippincott explores the SAP LeanIX viewpoint on AI and our recommendations for leveraging AI tools.

Enterprise architecture has long been associated with governance. In fact, it's sometimes seen as the enforcing power for IT.

Enterprise architects, however, have worked hard to push back against their perception as 'IT cops'. Instead, they're increasingly becoming champions of innovation and transformation.

Thankfully, the meteoric rise of artificial intelligence (AI) is offering enterprise architects yet another opportunity to demonstrate the value of IT governance. Governance isn't a blocker to innovative agility, but rather a practice that makes the strategic, sustainable adoption of AI possible.

This is why we feel that enterprise architects are best placed to drive AI governance. Likewise, we believe that governance is the key to accelerating AI adoption.

To find out what our customers are saying about generative AI adoption and more about our recommendations for governance, download our report on the results of our latest customer survey:

REPORT: SAP LeanIX AI Survey Results 2024

 

First Things First

Before we explore our position on artificial intelligence (AI) governance and adoption, it’s important to emphasize a few things:

1 AI Governance Is Just Governance

First, AI governance is not separate from any of the existing processes a company may have in place. The adoption of technology is already controlled by IT governance policies, just as data collection, storage, and management are overseen by data governance policies, and financial practices, among other activities, are subject to corporate governance.

2 AI Governance Is Not Just For EAs

Second, AI governance is not solely the domain of enterprise architects. Legal and compliance teams, for example, have a stake in AI governance, particularly given the ongoing introduction of laws and regulations focused on AI in countries around the world.

Product teams have a stake as more companies seek to add AI capabilities to their existing products or develop standalone AI applications. Customer support, marketing, and HR teams likewise get pulled in as companies begin using AI to improve the customer experience or to increase the effectiveness of recruiting and onboarding systems.

3 AI Governance Is About People

Finally, it’s important to remember that we don’t really govern applications or technology. We govern people, providing direction on the behaviors we wish to encourage and guidelines to help people avoid behaviors that pose risk.

From this perspective, AI governance is as much about management in general as it is about technology.

 

SAP's AI Ethics And Pillars

SAP LeanIX is part of the larger SAP ecosystem, which informs how we approach artificial intelligence (AI) within our products, as well as how we look to support our customers’ AI initiatives. SAP is at the forefront of embedding AI capabilities into its products, while also enabling customers to take full advantage of the specific AI tools they want to deploy, whether provided by SAP or not.

SAP’s philosophy regarding AI is simple: AI solutions should be...

  • Relevant - they should solve real business problems
  • Reliable - you should be able to trust the output (e.g., no hallucinations)
  • Responsible - they should be developed in accordance with clear ethical guidelines

The ethical guidelines SAP has established for AI development, which can be applied to any AI tools a company adopts, rest on three pillars:

First Pillar: Human Agency And Oversight

According to this pillar, AI solutions should not be able to make decisions that humans cannot reverse or that humans have no insight into.

Second Pillar: Addressing Bias And Discrimination

The training data used to build an AI model can introduce biases, some unexpected and some foreseeable. To abide by this pillar, AI solutions must be free of any bias baked into or derived from the training data.

Third Pillar: Transparency And Explainability

This pillar helps ensure that the models or tools you use or develop are unbiased. It does so by insisting on visibility into the data used to build your models and by requiring explainability of how the AI generated its results.

 

Mapping Your AI As-Is Landscape

A recent survey conducted among SAP LeanIX customers uncovered something concerning: 90% of respondents said that having a comprehensive overview of artificial intelligence (AI) in the IT landscape was critical for compliance and governance, but only 14% reported that they had such an overview today.

REPORT: SAP LeanIX AI Survey Results 2024

One of the core EA beliefs that SAP LeanIX has promoted from the very outset is that, before designing or planning anything for the future, enterprise architects must have a solid grasp of the current state of their IT landscape:

  • What’s in it
  • What business capabilities it supports
  • Who owns what
  • How it’s all connected

We call this 'mapping the as-is state'. No improvement, innovation, or transformation of the IT landscape is possible without having taken this first step.

Whatever plans your company has for AI, if you are going to govern its adoption and use, you need to know where it has already been adopted and where it is being used. Creating an inventory of all AI-enabled applications is, in this regard, no different from creating an inventory of any other technology in your landscape.

That being said, there are certain aspects of these applications that you will want to record. To be compliant with regulations such as the EU AI Act, for example, you must be able to account for the data sources used by the AI in question.

READ: What World-First EU AI Act Means For Enterprise Architects

You also need to ensure that AI is not being used for any prohibited activities. Along those lines, you will also want to assess the risk associated with particular AI applications, such as exposure of proprietary or personally identifiable information.
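To make this concrete, below is a minimal sketch of what an entry in such an inventory might capture. The field names, risk tiers, and example record are illustrative assumptions only (the risk tiers are a loose simplification of the EU AI Act categories), not a prescribed SAP LeanIX data model:

from dataclasses import dataclass, field
from enum import Enum


class RiskTier(Enum):
    # Simplified tiers, loosely modeled on the EU AI Act risk categories
    PROHIBITED = "prohibited"
    HIGH = "high"
    LIMITED = "limited"
    MINIMAL = "minimal"


@dataclass
class AIApplicationRecord:
    # One entry in a hypothetical AI application inventory
    name: str
    owner: str                              # accountable team or person
    business_capability: str                # capability the application supports
    provider: str                           # vendor or internal team
    data_sources: list[str] = field(default_factory=list)
    handles_pii: bool = False               # personally identifiable information
    handles_proprietary_data: bool = False
    risk_tier: RiskTier = RiskTier.MINIMAL


def flag_for_review(inventory):
    # Records that warrant closer compliance review
    return [
        record for record in inventory
        if record.risk_tier in (RiskTier.PROHIBITED, RiskTier.HIGH)
        or record.handles_pii
        or record.handles_proprietary_data
    ]


inventory = [
    AIApplicationRecord(
        name="Support Chat Assistant",
        owner="Customer Support",
        business_capability="Customer Service",
        provider="ExampleVendor",
        data_sources=["support tickets", "product documentation"],
        handles_pii=True,
        risk_tier=RiskTier.LIMITED,
    ),
]

for record in flag_for_review(inventory):
    print(f"Review needed: {record.name} ({record.risk_tier.value})")

In practice these records would live in your EA repository rather than in a script, but the same attributes apply.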

If this kind of basic data collection around AI usage were already common practice, we wouldn't be placing such an emphasis on it. In fact, the collection of AI usage data is still quite rare in enterprises, and we strongly believe that any company interested in AI governance needs to address this gap immediately.

Without reliable insight into the current state of AI adoption and usage, any efforts to govern AI will be fruitless.

 

Share The Data, Plan Adoption

To maximize the impact of your efforts to inventory data on artificial intelligence (AI) use within your organization, the data must be shared. As we have already said, AI governance is an organization-wide responsibility.

Making this data available to everyone from compliance managers to the C-suite allows each person in your organization to understand at a glance where AI is being used and the related risk profile. Just as importantly, this level of visibility will also reveal where AI is not in use.

Of course, there's no need to deploy AI everywhere; there's no point in paying for AI or AI-enhanced tools where there isn't a use case for them. Yet you may have overlooked opportunities to improve efficiency or increase productivity in parts of your business by leveraging AI.

Collaboration between your enterprise architecture teams and your business process teams can help remedy this situation. Process analysis focused on performance, manual effort, and bottlenecks can uncover areas where AI may be appropriate.

With these targets in mind, enterprise architects can then recommend AI solutions to address the issue or provide insight into opportunities for technological investment.

 

Governing What You Build

So far, we've covered the fundamental steps that a company can take to begin governing the artificial intelligence (AI) applications they have acquired. What about governing the AI that you're building?

We will stress, once again, that enterprise architects can't be solely responsible for this. SAP, for example, maintains an AI Ethics Office to review proposals for AI products.

This office decides whether or not a particular proposal is in line with SAP’s stated criteria for ethical AI. Still, enterprise architects have an important role to play here, specifically when it comes to modeling the tool chain being used in AI development and how it connects to the broader architecture.

Such models are important from both a reporting and a security standpoint, because they allow you to quickly visualize data flows and critical dependencies.
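As a purely illustrative sketch (the component names below are invented, not part of any SAP or SAP LeanIX product), such a tool-chain model can be treated as a small directed graph of data flows, which makes it straightforward to trace which systems sit downstream of a sensitive data source:

from collections import deque

# Hypothetical AI development tool chain, modeled as a directed graph:
# each component maps to the components that receive data from it.
tool_chain = {
    "crm_data_lake": ["feature_store"],
    "feature_store": ["model_training"],
    "model_training": ["model_registry"],
    "model_registry": ["inference_service"],
    "inference_service": ["customer_portal"],
}


def downstream_of(component, graph):
    # Every component that directly or indirectly receives data from
    # the given component -- useful for exposure and impact analysis.
    seen, order = set(), []
    queue = deque(graph.get(component, []))
    while queue:
        node = queue.popleft()
        if node in seen:
            continue
        seen.add(node)
        order.append(node)
        queue.extend(graph.get(node, []))
    return order


# Which systems are exposed if the data lake holds sensitive data?
print(downstream_of("crm_data_lake", tool_chain))

In practice you would maintain this model in your EA tool rather than in code; the point is simply that an explicit model of the chain lets you answer such questions quickly.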

 

Supporting AI Adoption Through Governance

Every company must decide for itself how it will adopt artificial intelligence (AI) and whether or not it will invest in its development. Every organization must, likewise, decide how much care it wants to take over ethics in these pursuits and how much risk it is willing to assume in its efforts, though regulators will soon enforce a certain standard.

Even if you don't plan to leverage AI, you will still need oversight of whether your employees are using unauthorized AI tools in their work. For example, last year, Samsung discovered its developers were uploading proprietary code into ChatGPT for bug-fixing.

As such, regardless of your specific AI strategy, reliable visibility into your evolving AI-enabled landscape is essential. Without it, it’s hard to imagine how you would execute any strategy, let alone avoid paying costly fines for failure to comply with the relevant regulatory regimes.

What’s more, such visibility must be shared with all interested parties across the enterprise. For this reason, it would serve you well to create your inventory in a tool that allows for easy visualization and sharing of inventory data.

For support with inventorying your AI landscape, book a demo of SAP LeanIX: