AI regulation: Balancing innovation and accountability

As generative AI and large language models become increasingly mainstream, integrating them into enterprise operations raises two profound ethical challenges: responsible use and trustworthy outputs.

On top of that, there is a deeper, more philosophical layer to this technological evolution. The advancement of AI brings into focus critical questions about human consciousness, including what truly differentiates human intelligence from AI, and the potential implications of AI achieving a level of sentience.

AI is moving faster than any technological innovation in history. Policymakers cannot be expected to keep pace, but without an effective framework of standards and policies to ensure AI is developed and used responsibly, humanity risks never realising its full potential for the greater good.

Elements of an effective government-regulated AI framework

Regulation ultimately needs to hold humans accountable for decisions based on generative AI results. People are more likely to trust AI-driven solutions when they can understand how these systems achieve their outcomes.

Currently, there are more than 1,600 AI policies and strategies worldwide. The European Union, United States, and United Kingdom have been instrumental in shaping the development and governance of AI on the global stage. Policymakers in the EU have taken swift and decisive action to regulate the riskiest uses, such as deepfakes and facial recognition, with the passing of the AI Act. Meanwhile, US President Joe Biden has issued an executive order to improve “AI safety and security,” though it lacks the permanence and force of congressional legislation and leaves open many questions, including what AI systems must demonstrate to stay compliant: transparency, trustworthiness, and accessibility.

AI regulation is an ethical conundrum on a global scale, but the route forward is grounded in finding pragmatic ways to train AI models responsibly on the massive amounts of data that feed them.

Current international efforts to regulate AI are still missing a component that is crucial to their success. To ensure AI systems benefit humanity, instil trust, and meet key regulatory standards, it is essential to consider the broader technology infrastructure, and most critically, the data systems. Among these, knowledge graphs stand out for their ability to enhance AI solutions with greater accuracy, transparency, and explainability.

Balancing AI regulation to foster innovation

Data is widely recognised as the fuel for AI: models must be trained on massive amounts of information. Data quality is critical to achieving accuracy, transparency, and explainability, but structured data alone is not enough to ensure safe uses and accurate outcomes. That requires a much more comprehensive approach, one that considers the many different real-world outcomes and risks, as well as the interconnected technologies that make up an AI ecosystem.

Graph databases and knowledge graphs provide additional guardrails because they anchor AI systems in human-readable, knowledge-based representations of data. They provide contextual understanding, reasoning, and training for machine learning, making AI outcomes more accurate, transparent, and explainable. Yet this essential technology, which organisations worldwide already use for mission-critical work, is conspicuously missing from conversations with lawmakers who are seeking input from industry leaders to build frameworks that safeguard AI development and use. A minimal sketch of the idea follows.
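To make the idea concrete, the short Python sketch below shows one way a knowledge graph can ground a generative model's output: facts are retrieved by entity and injected into the prompt together with their provenance, so the answer can be checked and explained. The toy in-memory graph, the retrieve() helper, and the prompt format are illustrative assumptions, not any particular product's API.

from dataclasses import dataclass

@dataclass(frozen=True)
class Fact:
    subject: str
    predicate: str
    obj: str
    source: str  # provenance, so an answer can be traced back to its origin

# Toy in-memory knowledge graph; a real deployment would query a graph database.
KNOWLEDGE_GRAPH = [
    Fact("EU AI Act", "regulates", "high-risk AI uses", "europa.eu"),
    Fact("EU AI Act", "covers", "facial recognition", "europa.eu"),
    Fact("EU AI Act", "covers", "deepfakes", "europa.eu"),
]

def retrieve(entity: str) -> list[Fact]:
    """Return every fact that mentions the entity in its subject or object."""
    needle = entity.lower()
    return [f for f in KNOWLEDGE_GRAPH
            if needle in f.subject.lower() or needle in f.obj.lower()]

def grounded_prompt(question: str, entity: str) -> str:
    """Build a prompt that asks the model to answer only from retrieved facts."""
    facts = retrieve(entity)
    context = "\n".join(
        f"- {f.subject} {f.predicate} {f.obj} (source: {f.source})" for f in facts
    )
    return (
        "Answer using only the facts below and cite their sources.\n"
        f"Facts:\n{context}\n\nQuestion: {question}"
    )

if __name__ == "__main__":
    print(grounded_prompt("What does the EU AI Act regulate?", "EU AI Act"))

Because each fact carries its source, the same structure can be used to audit an answer after the fact, which is exactly the kind of traceability regulators are asking for.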

This discrepancy underscores a critical truth about the future of the AI landscape: We need a more systemic, evidence-based approach that brings together the core tenets that will define safe and responsible uses of AI.

Considerations for developing AI policies

AI policies must consider diverse technological issues underpinning AI systems, including data storage, analytics, programming, and security. Together, these elements enable AI to make autonomous decisions.

Where compliance or safety is at stake, the ideal policy environment is one that informs how a technology is used. Our best response is to blend the best of generative AI with other tools that uphold safety, rigour, and transparency. This approach can benefit organisations, businesses, and society at large. The decisions made today about data and AI investments will determine the future leaders in the field.

The role of open sourcing in enhancing AI accountability and transparency

Policymakers globally have focused on making generative AI safer for all, whether by applying existing regulations, such as data protection or equal opportunity laws, to generative AI, or by issuing new guidance for its use within education, government, and other sectors. Discussions about open source, however, have largely centred on whether these models should be excluded from regulation, rather than on how open-source principles could strengthen AI regulation.

To further promote accountability, open-sourcing AI allows the broader community of stakeholders to continuously review and improve systems. This approach carries inherent risks and challenges, but the security industry's experience shows that open sourcing enables widespread scrutiny by experts worldwide, leading to quicker identification and resolution of vulnerabilities.

Applying these principles to AI can aid in regulatory compliance by offering greater transparency into the workings of AI systems. This increased transparency is likely to foster greater accountability and more effective policies.