
SAP LeanIX AI Survey 2024 | Flying blind

Written by LeanIX | Jun 18, 2024 12:00:00 PM

Bonn/Boston, June 18, 2024. Companies have high expectations for AI. The majority (78%) of those surveyed believe it will increase efficiency and nearly half believe it will significantly improve work quality. The fact that 80% of companies are already using generative AI reflects these high expectations. However, concerns about data security risks and uncertainty around legal issues mean that most of these companies are not yet using AI extensively. In fact, three quarters of companies have imposed limits on AI usage. Almost as many (71%) have implemented some kind of AI governance framework or are developing one.

Despite these measures, 85% of those surveyed believe they are not yet well prepared, or only partially prepared, to comply with existing and upcoming AI regulations. The SAP LeanIX AI Survey 2024 suggests a possible reason for this: Although almost all respondents (90%) believe that a comprehensive overview of generative AI usage will be important in the near future, only 14% currently have one. To accelerate the broad, responsible adoption of AI, let alone make the most of its potential, companies will need to focus on closing this enormous gap.

What opportunities and challenges do companies associate with the introduction of AI? What are they doing to address the challenges and maximize the opportunities?

To explore these questions, SAP LeanIX conducted an online survey of its global customer base from December 2023 through January 2024. Of the 226 IT professionals who responded, 71% work for European companies and 19% for US companies. Respondents were evenly distributed across companies of different sizes, with 40% working in organizations with over 10,000 employees. In terms of job roles, enterprise architects made up the largest proportion of the sample at 67%.

Greater efficiency and a significant increase in the quality of work through AI

Almost 80% of companies expect AI to help them get work done more quickly. Many (47%) also believe AI will improve the quality of work performed.

80% of companies are using AI, though with some restrictions

Almost a third of companies surveyed (32%) report using generative AI extensively today, whether embedded in applications or as a separate tool. Nearly half (48%) report using AI, though to a limited extent. The remaining 20% say they have not yet begun using it.

What prevents companies from using this technology extensively? A large majority (72%) cite data security risks, with uncertainty around legal issues also a major concern (59%). Almost half (48%) of those surveyed cite a lack of employee know-how. Interestingly, only 22% cite ethical concerns and just 16% cite excessive costs as hurdles.

Three quarters of the companies already using AI have chosen to address these concerns by limiting the use of, or access to, AI. Half of the companies using AI have restricted its use to AI models from pre-approved providers. Just as many restrict its use to certain tasks.

A clear need for AI governance frameworks

Almost all respondents see the need for an AI governance framework. However, only 19% have already implemented one. For the majority, an AI governance framework remains a work in progress.

That so many companies around the world recognize the importance of a governance framework reflects the general understanding that legal regulations for AI are coming. The European Union is the most advanced in this regard. The European Parliament approved the EU AI Act in March 2024, and it will come into full force two years after its official publication, which is expected this summer. The act requires companies to provide detailed information on the risk level of their AI use cases, with penalties of up to 7% of total annual revenue for non-compliance.

Few feel well prepared to comply with AI regulations

Only 15% of companies surveyed say they are well prepared to comply with emerging legal requirements for the use of AI in the regions where they operate. More than half see themselves as partially prepared. A third do not believe they are prepared at all.

It is worth noting that even in companies with an AI governance framework already implemented, fewer than 40% believe themselves to be well prepared.

The widespread uncertainty around compliance readiness can be explained, at least in part, by the regulations themselves still being a work in progress. Nevertheless, the coming rules have already been approved in the EU, where the majority of those surveyed operate. What else might explain the perceived lack of full preparedness?

Most lack a comprehensive overview of AI usage

Regardless of what form legal regulations may ultimately take, compliance has to start with a full understanding of where AI is being used in the organization. Respondents to this survey almost unanimously see it this way. Looking ahead to the near future, 90% say that a comprehensive overview of generative AI usage in the company is either important or very important. However, in more than a third of companies surveyed, no one is clearly responsible for collecting such data, or it is not collected at all.

Given this situation, the gap between the perceived need for an overview and its general absence is not surprising. Currently, only 14% of companies surveyed have access to comprehensive data on AI usage.

Conclusion: AI governance demands meaningful insight into AI usage

This survey shows that three quarters of companies already using generative AI limit its use in various ways. Almost all of them see the need for an AI governance framework. Most companies are currently working on developing such a framework or have already implemented one.

What close to 90% of companies lack, however, is a comprehensive overview of AI usage. And this despite the fact that nearly all respondents agree this data is urgently needed. Such data necessarily provides the basis for any AI governance framework and is indispensable for meeting legal requirements. How, after all, can companies report on the risk level of AI use cases if they only have a partial overview, or no view at all, of AI usage?

There is no consensus among companies surveyed regarding responsibility for creating such an overview. In our view, enterprise architects are perfectly suited for this role. By consistently and comprehensively tracking AI usage, enterprise architects can enable the responsible, legally compliant usage of artificial intelligence, on the one hand, while accelerating its responsible adoption across the organization on the other.


Annotation:

For better readability, the results in this report are presented as percentages without decimal places. Where these values do not add up to exactly 100%, the difference is due to rounding.