Continuous Transformation Blog

AI Governance Interview: Frith Tweedie From Simply Privacy

Written by LeanIX | October 3, 2024

AI governance is becoming a priority for enterprise architects, but how can you build a framework to leverage AI safely? We spoke to AI expert Frith Tweedie to find out.

 

Artificial intelligence (AI) is a revolutionary technology that's already fundamentally changed the way we do business. Yet, because it's so new, it's still poorly understood.

For every AI innovation, there's a question about AI ethics, AI sustainability, and the possibility of cyber criminals leveraging the technology for nefarious purposes. How can you leverage AI safely and effectively?

To find out, we spoke to AI experts across the world to cut through the AI buzz. In the first part of this series, we spoke to Frith Tweedie from Simply Privacy.

To find out more about what the market is saying about AI, download our AI survey results:

REPORT: SAP LeanIX AI Survey Results 2024


 

Meet Frith Tweedie

Frith Tweedie has been working in artificial intelligence (AI) governance since 2019, when she was leading the digital law practice at EY in New Zealand. She also became a member of the executive council of the AI Forum NZ.

Last year, Frith was appointed to the International Association of Privacy Professionals (IAPP) AI governance center's global advisory board. She is also a member of the Data Ethics Advisory Group for the New Zealand public sector. Throughout, Frith's work has been in helping her clients design and develop responsible AI governance frameworks.

This includes risk management tools, such as the Algorithm Impact Assessment Toolkit she developed for government agencies in New Zealand. Frith also teaches the IAPP AI governance professional training course.

As such, we felt Frith was the perfect person to share the best methodology for AI governance and adoption. To begin, we asked her just what AI governance is, in her opinion.

 

Can You Define AI Governance And Its Importance?

"Artificial intelligence (AI) governance - or what I prefer to call 'responsible AI' - involves a holistic framework designed to help direct, manage, and monitor the AI activities of an organization across the AI lifecycle. Responsible AI is a critical enabler for responsible innovation and informed decision-making."

 

What Are The Challenges Implementing AI Governance?

"Some of the biggest challenges I'm seeing organizations face are being crystal-clear on their artificial intelligence (AI) strategy and risk appetite, and having a sufficiently robust view of what data and AI are being used across the organization.

"Other challenges include determining where responsibility for AI governance should sit, how to efficiently risk-assess AI use cases, and identifying appropriate metrics for testing AI systems across their lifecycle."

 

What Role Will AI Governance Have In The Next 5 Years?

"I expect awareness of the importance of artificial intelligence (AI) governance to grow significantly as we see more AI-specific legislation emerge around the world alongside ongoing examples of the potential harm that poorly managed AI can cause - as well as the impact this can have on an organization's reputation, trust, and bottom line."

 

How Do You Balance Innovation Against AI Compliance?

"It's critical that you take a cross-functional approach to artificial intelligence (AI) that draws on the skills and experience of a wide range of different stakeholders. Those stakeholders need to have a pro-innovation mindset and take a risk-based approach to their work.

"The goal should always be maximizing the benefits of AI while minimizing potential risks."

 

How Are Organizations Implementing AI Governance?

"I've worked with small and large corporate clients to design their responsible artificial intelligence (AI) framework. That can include developing responsible AI principles tailored to the organization to guide their overall approach, developing strategic and risk management approaches, focusing on key risk areas like vendor due diligence, and developing policies and training.

"Key lessons include the need to raise awareness across the business, particularly at senior levels, of the key risks and, importantly, of how a Responsible AI approach not only manages risk but delivers real value in terms of better-performing models, competitive advantage, and trust.

"Early identification of accountabilities, roles and responsibilities is important, although it can get political. It's important to take everyone on the journey with you, ensuring relationships are built with key stakeholders such as data science, data governance, privacy, cybersecurity, legal and enterprise architecture teams."

 

What Role Do Stakeholders Play In AI Governance?

"All stakeholders play important roles and should be involved as much as practical and appropriate. Leadership (including at the board level) needs to set the tone from the top, emphasizing the importance and value of a responsible artificial intelligence (AI) approach.

"This can then flow down to inform developers' thinking, ensuring - for example - that users and those impacted by the AI system are not disadvantaged and are aware when they're dealing with AI. Policymakers need to understand how AI works, how it is being used and what the risks might be, then take a risk-based approach that supports innovation while appropriately managing risk."

 

How Do You Get An Overview Of AI Use Across Your Operations?

"A key first step is mapping all of your artificial intelligence (AI) systems, models, and use cases so you understand what is being used, where, by whom, and why. From there you can undertake a risk assessment of that usage to inform your risk-based approach going forward, tailoring your Responsible AI framework to reflect the specifics of your organization."
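The "map, then risk-assess" step Frith describes can be sketched in code. The following is a minimal, hypothetical illustration of an AI use-case inventory with a simple triage score; all field names, weights, and the threshold are illustrative assumptions, not a prescribed methodology.

```python
from dataclasses import dataclass

@dataclass
class AIUseCase:
    """One entry in the AI inventory: what is used, where, by whom, and why."""
    name: str
    owner: str              # team accountable for the system
    vendor: str             # in-house build or third-party supplier
    data_sensitivity: int   # 1 (public data) .. 5 (highly sensitive data)
    user_impact: int        # 1 (low) .. 5 (decisions affecting people)

    def risk_score(self) -> int:
        # Illustrative heuristic: risk grows with data sensitivity
        # and with the impact on affected users.
        return self.data_sensitivity * self.user_impact

def triage(inventory: list[AIUseCase], threshold: int = 12) -> list[AIUseCase]:
    """Return use cases whose score warrants a full impact assessment,
    highest risk first."""
    return sorted(
        (uc for uc in inventory if uc.risk_score() >= threshold),
        key=lambda uc: uc.risk_score(),
        reverse=True,
    )

# Example inventory (entirely hypothetical)
inventory = [
    AIUseCase("CV screening", "HR", "Third-party SaaS", 4, 5),
    AIUseCase("Internal doc search", "IT", "In-house", 2, 1),
    AIUseCase("Credit pre-approval", "Risk", "In-house", 5, 5),
]

for uc in triage(inventory):
    print(f"{uc.name}: score {uc.risk_score()}")
```

In practice the inventory would live in an enterprise architecture or governance tool rather than a script, but the principle is the same: enumerate every system first, then let a risk-based score decide where deeper assessment effort goes.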

 
