Lead image by Ryzhi / iStock

By Dan Rubinstein

Artificial intelligence (AI) is one of the most powerful tools humans have ever created. It can interpret medical images quickly and accurately, for example, helping doctors make better-informed decisions, and it can automate routine jobs, recalibrating the labour market by freeing people to take on more nuanced and meaningful tasks.

AI also presents some of the biggest risks we have ever encountered, from reinforcing societal biases and blurring the lines between reality and fiction to, worst-case scenario, commandeering weapon systems.

Maximizing the potential of AI while managing and mitigating these dangers is challenging because the technology is evolving so rapidly — and because just like the mysterious and complex human brain, we don’t fully understand how artificial intelligence works.


Carleton University’s Canada Research Chair in Governance and Artificial Intelligence, Adegboyega Ojo

“It’s a bit of a black box,” says Adegboyega Ojo, the Canada Research Chair (CRC) in Governance and Artificial Intelligence at Carleton University and one of the keynote speakers at the third annual Carleton Challenge Conference on May 13, which is exploring the transformation and disruption of AI.

“AI technologies are getting better and better so quickly and our understanding is not keeping pace. Having something so powerful in our hands without a complete understanding of it raises a lot of concern for different stakeholders, from the public and policy makers to tech companies themselves, if they’re being honest.”

The solution, according to Ojo, is strong governance.

As a researcher in Carleton’s School of Public Policy and Administration, that’s something he thinks about a lot. And while the word “governance” may seem like bureaucratic minutiae — the tiny details underpinning the big wheels of government — how Canada and other countries regulate AI will go a long way toward shaping our world in the next few decades and beyond.


A Radical New Paradigm

Ojo first encountered the forerunners of today’s AI systems more than 30 years ago as a computer science PhD student in Lagos, Nigeria.

He was developing a program to translate Yoruba into English when his supervisor returned from a visit to the International Centre for Theoretical Physics in Italy with a book about artificial neural networks, which were still in their infancy.

Ojo fell in love with this new paradigm, a radical departure from traditional computing, and its ability to learn from training data. Backpropagation learning algorithms (now called deep learning) became the focus of his PhD studies, but even though early experiments in areas such as computer vision and autonomous vehicles hinted at the technology’s potential, he moved on after receiving his doctorate, gravitating toward software engineering and later tech policy research.

Until about four years ago, when the opportunity to join Carleton as a CRC arose and brought Ojo “squarely back to my roots,” he says.


An International Leader in AI Governance

Ojo’s expertise in digital government, and in how new technologies can support public sector innovation, positions him as an international leader in AI governance.

A paper he co-authored last year clearly outlines the benefits and risks of AI for governments. It can help the public sector transform its internal processes and provide better services, yet it also poses social, ethical and legal challenges that require both technical and institutional solutions.

“Like with any new technology, there’s a possibility of misuse,” says Ojo.

“Consider social media. Something that seems safe and harmless at the outset can become dangerous over time. Especially because of the scale and speed of change.”

With conventional software, he continues, coders discover a bug, deconstruct it and fix it so a specific task can be completed. But today’s “frontier” AI models — created to perform a diverse array of functions — don’t offer the same degree of certainty. And because the data they are trained on can contain real-world biases, those biases can be faithfully reproduced in the models’ outputs.

Moreover, AI may not always align with public sector ethics, says Ojo, such as the need to explain the rationale behind decisions. An automated system could be deployed, for instance, to screen applications for social security benefits, to ensure forms that people submit are complete. But it might not be the best way to approve or deny an individual’s claim.

“For more careful decision making, humans will still be required,” says Ojo.

“For now, AI should be used for low-hanging fruit. We’re not ready yet for other things because of potential controversy. We don’t want it to blow up in our face.”


The Need for a Bold Vision

Earlier this spring, Canada released a new AI strategy for the federal public service. Ojo supports this approach — a clear plan and gradual progress, with ground rules for responsible use in place.

But in his morning keynote at the Carleton Challenge Conference, he’ll talk about the need for a bold AI vision in Canada.

“It’s not enough to have excellent research and infrastructure and talent,” he says.

“The country also needs to better support startups and small companies that want to develop AI services and products. We need an atmosphere in which small companies can experiment. Canada needs to incentivize this.”


Elena Fersman, Vice President and Head of Global AI Accelerator, Ericsson

Elena Fersman, a vice president and head of the Global AI Accelerator at Swedish telecommunications multinational Ericsson, will deliver an afternoon keynote at the conference.

Other speakers include Kate Purchase, Microsoft’s Senior Director for International AI Governance, and several business, government and academic AI specialists.

In addition to Ojo, three more Carleton researchers are on the agenda: nursing program director Danielle Manley, Intelligent Machines Lab director Majid Komeili and cognitive scientist Mary Kelly, who, like Ojo, speaks of the need for a human-in-the-loop approach to AI decision making.



First wide image by Sansert Sangsakawrat / iStock
Second wide image by Alexander Sikov / iStock
Third wide image by monkeybusinessimages / iStock
Final wide image by dit:demaerre / iStock

Thursday, May 1, 2025