AI Governance Interview: Victoria Riess From Riess Consulting

Posted by LeanIX on December 19, 2024

Artificial intelligence (AI) was mentioned by policymakers twice as often in 2023 as in 2022. We spoke to AI expert Victoria Riess from Riess Consulting to find out where these conversations will lead.

 

While the initial buzz around generative artificial intelligence (AI) and ChatGPT may have begun to die down, the technology isn't going away. Regulators are working to limit the risks posed by the adoption of the technology, while organizations are prioritizing the benefits of AI for fear of losing their competitive edge.

How can you rapidly and comprehensively leverage all the capabilities of AI, while staying compliant with regulation that's still developing? The closest thing we have to a crystal ball is the advice of AI governance specialists.

To support you, we're speaking to AI experts around the world to find the substance behind the AI buzz. In this part of the series, we spoke to Victoria Riess from Riess Consulting.

To find out more about what the market is saying about artificial intelligence (AI) governance, download our AI survey results:

REPORT: SAP LeanIX AI Survey Results 2024

 

Meet Victoria Riess

Victoria Riess, MBA, is an ESG and AI board advisor, strategy executive, and keynote speaker, and the founder, CEO, and CIO of Riess Consulting, a leading German digital and artificial intelligence (AI) strategy consultancy. For her contributions on AI, leadership, digital transformation, and future trends, Victoria has been recognized five times as a Top Women Leader in AI, and she regularly delivers keynotes across Europe.

Starting in 2023, Victoria built Riess Consulting & Tech on this expertise without outside capital, and the company has been operating profitably since day one. She builds and leads C-level digital and AI strategy consulting programs with global teams and renowned clients, and helps guide CEOs through AI transformation.

With all this experience, Victoria is exactly the right person to speak to about AI adoption and governance. We began by asking her about the state of AI regulation.

 

How Are Regulators Governing AI?

"victoria-riess-tech-advisor-digital-experiences_1980_2640Artificial intelligence (AI) is already subject to regulations that apply to more than just the technology itself. Furthermore, the regulation of AI is increasing.

"In 2024, the EU AI Act came into force after years of debate and anticipation, which will result in the establishment of risk management frameworks. In the US, the focus will be on how regulators and jurisdictions take action against companies that spread algorithmic discrimination or intentionally use bad data and dark patterns.

"In China, the regulatory activity is explicitly focused on generative AI with China's Interim Administrative Measures for Generative Artificial Intelligence Services. The scope and diversity of AI regulations and standards is expected to increase further in the foreseeable future as policymakers grapple with how to manage AI risks.

"There are three challenges for artificial intelligence (AI) regulation:

  1. The pace of AI-driven change may exceed the existing powers and authority of governments. Existing regulations are not flexible enough to keep up with the speed of AI development.
  2. As AI is a multi-faceted capability, a 'one-size-fits-all' approach will regulate too much in some cases and too little in others. The regulation of AI must therefore be risk-based and targeted.
  3. In general, there is a first-mover advantage in regulation. Not least because of the interconnectedness of the 21st century, the government that sets the first rules frames the discussion for all other countries."
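To make the risk-based, targeted approach Victoria describes more concrete, here is a minimal Python sketch of how an organization might triage its own AI use cases against the EU AI Act's four risk tiers. The tier names reflect the Act's structure, but the example use cases, tier assignments, and obligation lists are simplified assumptions for illustration, not legal guidance.

```python
from enum import Enum

class RiskTier(Enum):
    """The four risk tiers defined by the EU AI Act (simplified labels)."""
    UNACCEPTABLE = "prohibited"   # e.g. social scoring by public authorities
    HIGH = "high-risk"            # e.g. hiring, credit scoring, critical infrastructure
    LIMITED = "limited-risk"      # e.g. chatbots (transparency obligations)
    MINIMAL = "minimal-risk"      # e.g. spam filters, most other applications

# Illustrative tier assignments for internal use cases -- an assumption of this
# sketch; a real assessment requires legal review against the Act's annexes.
USE_CASE_TIERS = {
    "cv_screening": RiskTier.HIGH,
    "credit_scoring": RiskTier.HIGH,
    "customer_support_chatbot": RiskTier.LIMITED,
    "email_spam_filter": RiskTier.MINIMAL,
}

def obligations_for(use_case: str) -> list[str]:
    """Return simplified obligations attached to a use case's risk tier."""
    tier = USE_CASE_TIERS.get(use_case, RiskTier.MINIMAL)
    if tier is RiskTier.UNACCEPTABLE:
        return ["do not deploy"]
    if tier is RiskTier.HIGH:
        return ["risk management system", "data governance", "human oversight",
                "logging and traceability", "conformity assessment"]
    if tier is RiskTier.LIMITED:
        return ["transparency: disclose that users are interacting with an AI system"]
    return ["voluntary codes of conduct"]

if __name__ == "__main__":
    for use_case, tier in USE_CASE_TIERS.items():
        print(f"{use_case}: {tier.value} -> {obligations_for(use_case)}")
```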

 

How Do You Stay Compliant With Emerging Regulation?

"In the face of new and ever-changing regulatory measures, organizations should adopt self-governance approaches to stay compliant, align with their corporate values, and enhance their reputation. Implementing an organization's principles often involves adhering to ethical standards that go beyond legal requirements.

"Self-governance of artificial intelligence (AI) systems will involve both organizational and increasingly automated technical controls. AI requires strong organizational management systems with controls as described in the international standard ISO/IEC 42001.

"Technical controls are just as important as socio-technical systems. Automation can often help them, for example, by automating AI red-teaming, meta-data identification, logging, monitoring, and alerts. [i]"In a nutshell: collaboration between humans and AI will remain important."

 

What Role Do Global Standards Play In AI Regulation?

"victoria-riess-tech-advisor-digital-experiences_1980_2640Artificial intelligence (AI) was discussed by policymakers around the world and mentioned twice as often in legislative processes in 2023 as in 2022. Activities in the field of AI standards and international co-operation have also increased.

"Examples include initiatives by the Organization for Economic Co-operation and Development (OECD), the US National Institute of Standards and Technology (NIST), The United Nations Educational, Scientific and Cultural Organization (UNESCO), the International Organization for Standardization (ISO), and the the Group of Seven (G7). In 2024, the focus on AI safety increased with the establishment of new AI safety institutes and the expansion of efforts by institutes in the EU, US, UK, Singapore and Japan.

"International agreements on interoperable standards and basic regulatory requirements will play an important role in enabling innovation and improving AI safety. Companies may choose to use voluntary methodologies and frameworks, such as the US NIST AI Risk Management Framework, the Singapore AI Verify Framework and Toolkit, and the UK AI Safety Inspect's open AI safety testing platform."

 

How Are Organizations Implementing AI Governance?

"victoria-riess-tech-advisor-digital-experiences_1980_2640In addition to the potential value of artificial intelligence (AI), executives are also concerned about its risks, including bias, security, and reputational damage if something goes wrong. Many executives also recognize that mitigating these risks can lead to a competitive advantage and fundamentally contribute to the success of their business.

"Therefore, the ethical and responsible adoption of the technology has become an important consideration, leading to the rapid development and adoption of AI governance. Companies can use a holistic approach to evaluate return on investment in AI governance by examining not only traditional returns, but also the returns generated by the impact on reputation and the opportunity to build new organizational capabilities.

"Executives should ensure that they are looking at AI governance from a value-creating perspective and not just from a risk avoidance perspective. Several companies have already successfully put Responsible AI into practice and produced AI-powered products and services in an ethical and meaningful way, for example:

 

How Will AI Regulation Evolve In The Next 5-10 Years?

"Recognizing that each jurisdiction has a regulatory approach consistent with its own cultural norms and legal context, I see six regulatory trends in the field of AI, unified by the general principle of mitigating the potential harms of artificial intelligence (AI) while enabling its use for the economic and social benefit of citizens:

  1. Fundamental principles: the AI regulation and guidance under consideration will be consistent with the fundamental principles for AI defined by the Organization for Economic Co-operation and Development (OECD) and endorsed by the Group of 20 (G20)
  2. Risk-based approach: these countries will take a risk-based approach to the regulation of AI
  3. Sector-independent and sector-specific: due to the different use cases of AI, some jurisdictions will be focusing on the need for sector-specific regulation in addition to sector-independent regulation
  4. Policy alignment: jurisdictions will be implementing AI-related regulations in the context of other digital policy priorities
  5. Collaboration with the private sector: many of these countries will be using regulatory sandboxes as a tool for the private sector to work with policymakers
  6. International co-operation: driven by shared concerns about the new generative AI systems, countries will be seeking international cooperation to understand and address these risks"

 

What Tools Can Help Organizations Comply With AI Regulation?

"I have found that the best way for organizations to prepare for artificial intelligence (AI) regulation is through a Responsible AI Strategy. At the heart of this is a set of five principles that emphasize accountability, transparency, privacy, and security, as well as fairness and inclusivity in the development and use of algorithms:

  1. Empowering Responsible AI Strategy leadership: one person responsible for the Responsible AI Strategy plays a critical role in keeping the strategy and initiatives on track
  2. Building and implementing an ethical AI framework: at the heart of the Responsible AI Strategy is a set of principles and guidelines, an ethical AI framework, that organizations create and incorporate into their culture
  3. Include people in the AI loop: most current and proposed AI regulations call for strong governance and human accountability
  4. Create Responsible AI Strategy reviews and integrate tools and methods: the goal is to identify and fix problems as early as possible in development and be vigilant during and after implementation
  5. Participate in the Responsible AI Strategy ecosystem: actively contributing to a Responsible AI Strategy consortium or working group is a good strategy, as it encourages collaboration and sharing"
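Victoria's fourth point, building Responsible AI Strategy reviews into development, can be illustrated with a minimal Python sketch of an automated review gate that could run, for example, in a CI pipeline. The required model-card fields and the fairness threshold are illustrative assumptions, not a standard checklist.

```python
# Required model-card fields and fairness threshold are illustrative assumptions.
REQUIRED_MODEL_CARD_FIELDS = {
    "intended_use", "training_data_summary", "known_limitations",
    "fairness_evaluation", "human_oversight_plan",
}
MAX_APPROVAL_RATE_GAP = 0.05  # maximum tolerated gap between demographic groups

def review(model_card: dict, approval_rates: dict[str, float]) -> list[str]:
    """Return review findings; an empty list means the gate passes."""
    findings = []

    missing = REQUIRED_MODEL_CARD_FIELDS - model_card.keys()
    if missing:
        findings.append(f"model card is missing fields: {sorted(missing)}")

    gap = max(approval_rates.values()) - min(approval_rates.values())
    if gap > MAX_APPROVAL_RATE_GAP:
        findings.append(f"approval-rate gap {gap:.2f} exceeds limit {MAX_APPROVAL_RATE_GAP}")

    return findings

if __name__ == "__main__":
    card = {"intended_use": "loan pre-screening",
            "training_data_summary": "applications from 2019-2023"}
    rates = {"group_a": 0.62, "group_b": 0.48}
    for finding in review(card, rates) or ["review passed"]:
        print(finding)
```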

 

To find out more about what the market is saying about artificial intelligence (AI) governance, download our AI survey results:

REPORT: SAP LeanIX AI Survey Results 2024

