AI governance is a complex and multi-faceted undertaking that requires foresight on how AI will develop in the future. We spoke to Seth Dobrin, founder of 1Infinity, about what AI has in store for the market.
Artificial intelligence (AI) governance is key to ensuring the power of AI is harnessed responsibly. With AI fueling everything from snake-oil startups whose products don't do all they claim, to large language models (LLMs) designed to enhance cyber attacks, we need to ensure that the money spent on AI investment is going to the right place.
How can we ensure that AI is being used for the right reasons, both ethically and in terms of delivering real value? To find out, we spoke to AI experts worldwide to separate substance from the AI buzz.
In the second part of this series, we spoke to Seth Dobrin from Qantm AI and AI fund 1Infinity Ventures. To find out more about what the market is saying about AI, download our AI survey results:
After completing a PhD in genetics, Seth started his career at Monsanto, where he witnessed one of the first two successful digital transformations of a Fortune 500 company. He led the transformation’s data and AI portion from 2011 until 2016, creating billions of dollars of new revenue and cost savings through the application of data and AI.
In 2016, he left Monsanto to become IBM’s first Global Chief AI Officer. There, he focused on aligning AI innovation with ethical practices, ensuring that the technology delivered both business outcomes and societal benefits.
Since then, Seth has co-founded Qantm AI, become president of the Responsible AI Institute, and co-founded 1Infinity Ventures, the world's first responsible AI fund, to invest in startups pushing the frontiers of generative AI while adhering to ethical standards. Inside 1Infinity, he also runs Silicon Sands Studio, a creative consultancy supporting AI startups.
Seth's passion for building ethical AI systems that reflect the values of the humans who use them made him the perfect person to help expand our viewpoint on AI governance. We began by asking what he believes is the importance of AI governance.
"AI governance is essential because AI is embedded across various sectors, which introduces risks, such as algorithmic bias, a lack of transparency in decision-making, and concerns over data privacy and misuse. Robust AI governance ensures these risks do not undermine the technology's benefits, maximizing AI's positive impact.
"One aspect of AI governance is its alignment with internationally recognized frameworks, such as the OECD AI Principles. These principles, adopted by many countries, set a global standard for trustworthy and responsible AI.
"They focus on five key areas, ensuring that AI is developed and used to benefit society and respect human rights:

- Inclusive growth, sustainable development, and well-being.
- Respect for human rights and democratic values, including fairness and privacy.
- Transparency and explainability.
- Robustness, security, and safety.
- Accountability.
"Aligning AI governance with the OECD AI Principles is essential for creating a globally consistent standard of responsible AI development. Doing so ensures compliance and fosters trust among businesses and the public that AI systems are developed ethically and serve the greater good."
"Large, pre-trained foundation models like ChatGPT are trained on vast datasets, often spanning much of the public internet and multiple domains, presenting risks such as the perpetuation of bias and ethical concerns. Governing these models is particularly complex because organizations may not have control over the data used in training, making it more difficult to ensure that models comply with ethical standards and regulations like GDPR or the EU AI Act.
"Key governance challenges include:

- Limited visibility into, and control over, the data used to train foundation models.
- The risk of perpetuating bias present in that training data.
- Difficulty demonstrating compliance with regulations such as GDPR and the EU AI Act.
- A lack of transparency in how these models arrive at their outputs.
"To address these issues, AI governance should align with frameworks like the OECD AI Principles, ensuring that systems are transparent, fair, and accountable. By doing so, organizations can navigate the complex landscape of foundation models while ensuring responsible and ethical AI deployment."
"Instead of global regulatory convergence, we will likely see a proliferation of diverse and often conflicting AI regulations. Companies must develop agile governance structures that adapt to specific legal requirements in each operating region.
"Governments worldwide will likely impose stricter transparency requirements on tech companies, compelling them to reveal more about how their AI systems work. Organizations must be prepared for audits, explainability mandates, and compliance checks as governments enforce stricter oversight.
"As AI systems become more complex, organizations will use regtech automation to manage governance. This will allow companies to manage the challenges of operating across multiple jurisdictions with different regulatory standards while maintaining consistent governance practices.
"Governments will prioritize eliminating bias in AI, especially in areas like gender and race, where inequities are more immediately apparent. However, ensuring that AI systems account for different regions’ diverse moral and ethical constructs may remain a lower priority as regulatory bodies focus on enforcing fairness and transparency.
"As AI models continue to evolve post-deployment, organizations must adopt governance structures that account for continuous learning and 'model drift'. This means implementing real-time monitoring and governance processes that ensure AI systems remain compliant and aligned with ethical standards as they adapt to new data and applications."
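The "model drift" monitoring described above can be sketched in code. The example below is a minimal, illustrative implementation assuming one common approach, a population stability index (PSI) over binned score distributions; the function names (`psi`, `check_drift`) and the 0.2 threshold are hypothetical conventions, not from any specific governance product.

```python
# Minimal sketch of post-deployment drift monitoring via a population
# stability index (PSI). All names and thresholds here are illustrative.
import math

def psi(expected: list[float], actual: list[float], bins: int = 10) -> float:
    """Population stability index between a baseline sample and a live sample."""
    lo = min(min(expected), min(actual))
    hi = max(max(expected), max(actual))
    width = (hi - lo) / bins or 1.0

    def histogram(values: list[float]) -> list[float]:
        counts = [0] * bins
        for v in values:
            idx = min(int((v - lo) / width), bins - 1)
            counts[idx] += 1
        # Smooth zero bins so the log ratio stays finite.
        return [(c + 0.5) / (len(values) + 0.5 * bins) for c in counts]

    e, a = histogram(expected), histogram(actual)
    return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))

def check_drift(baseline: list[float], live: list[float],
                threshold: float = 0.2) -> bool:
    """Flag the model for review when the PSI exceeds a policy threshold."""
    return psi(baseline, live) > threshold
```

In practice a check like this would run on a schedule against each deployed model's live inputs or outputs, feeding the real-time alerting Seth describes.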
"Balancing innovation with compliance is one of the central challenges organizations face.
"One of the main strategies is to build adaptive governance structures that evolve alongside AI technologies. These frameworks should be flexible enough to allow innovation, particularly in areas like generative AI.
"As AI systems, especially foundation models, become more complex and integrated into critical sectors like healthcare and finance, openness and explainability become paramount. This is particularly important as governments push for more stringent regulations, requiring tech companies to be more transparent about their AI systems' operations.
"Ultimately, organizations can balance innovation and compliance by creating governance systems that continuously monitor AI systems, implement transparency measures, and ensure that AI models are developed with ethical considerations at the forefront. This continuous monitoring is a key factor in ensuring the reliability and ethical use of AI systems, allowing companies to navigate the challenges of operating in a rapidly evolving technological and regulatory landscape without compromising their ability to innovate."
"Stakeholder engagement at every level is critical to AI governance.
"Good governance starts at the top and needs to be mandated by the board and the C-suite so AI governance comes with the authority and ‘teeth’ to follow through on implementation and have full decision rights. By championing AI governance, leadership ensures that ethical considerations, such as fairness and transparency, are integrated from the top down, providing teams with the resources and authority to adhere to governance frameworks.
"[Yet,] effective AI governance requires collaboration across leaders, policymakers, developers, and users. Each stakeholder group contributes to a robust governance structure that fosters innovation and ensures that AI systems are transparent, fair, and accountable."
"Organizations must first gain full visibility into the AI systems they use before they can govern them.
"Automated AI discovery tools are particularly effective at mapping AI models to the data they interact with, ensuring sensitive information is protected and aligned with global data regulations. By integrating these tools with broader governance practices, organizations can achieve greater visibility into how AI models are deployed, what data they use, and whether they adhere to regulatory and ethical standards.
"In addition to leveraging automated tools, organizations should establish centralized governance frameworks to oversee AI initiatives across departments. These frameworks ensure that AI models are inventoried and continuously monitored for fairness, transparency, and accountability.
"Cross-functional collaboration between IT, data science, compliance, and business teams is vital to maintaining a comprehensive view of AI usage. Regular audits and assessments, both automated and manual, can help ensure that AI systems align with the organization’s ethical standards and regulatory requirements while also enabling innovation."
"To effectively manage AI governance, a tool needs several core capabilities.
"A core feature automatically discovers and catalogs all AI models across various environments and departments. This capability provides an up-to-date, comprehensive view of all AI systems, including shadow AI models that may operate outside formal governance oversight.
"The tool should be able to assess and continuously monitor AI models for potential risks, including bias, data drift, model drift, and security vulnerabilities. Real-time alerts and risk scoring can help organizations identify and mitigate potential issues before they become problematic, ensuring AI systems comply with ethical standards and performance expectations.
"The solution should include data mapping capabilities that trace data flow through various AI models. This allows organizations to track where sensitive data is used and ensure compliance with data privacy regulations such as GDPR.
"As AI systems become more complex, especially with the rise of black-box models like deep learning, the tool should include features that make AI decision-making transparent and understandable. Explainability modules allow users to see how a model arrived at a particular decision, especially in critical fields such as healthcare, finance, and law.
"Cross-functional collaboration is essential for successful AI governance. To ensure everyone adheres to governance frameworks, a tool should enable seamless integration across various teams, including data scientists, developers, compliance officers, and business units.
"Effective AI governance tools should be able to manage the entire lifecycle of AI models while providing transparency, ethical oversight, and regulatory compliance. Automated discovery, risk monitoring, and collaboration features ensure that AI systems operate responsibly within an organization’s governance framework."
To find out more about what the market is saying about artificial intelligence (AI) governance, download our AI survey results: