By Dan Rubinstein
Photos by Matthew Murnaghan
Artificial intelligence (AI) is an incredibly powerful tool, but navigating the risks and responsibilities that come with such an enormously disruptive and transformative technology will not be easy.
This dichotomy — potential versus pitfalls — was the focus of the third annual Carleton Challenge Conference on May 13.
Adegboyega Ojo, the Canada Research Chair in Governance and Artificial Intelligence at Carleton University, provided an international and national AI overview in the opening keynote — perfect context for the rest of the day.

Adegboyega Ojo, the Canada Research Chair in Governance and Artificial Intelligence at Carleton, provided the opening keynote
His talk explored an emerging paradox: Canada is a global leader in AI research and talent, yet it continues to face challenges in building a dynamic AI ecosystem. In an increasingly complex and uncertain global environment, he asked, what will it take for Canada to turn its AI leadership into broader societal and economic gains?
There’s been a surge in AI usage in recent years, Ojo noted, with 78 per cent of organizations around the world now using it and millions more people tapping into the technology for work and leisure, along with an accompanying increase in legislative activity and global cooperation.
On the flip side, Ojo said, most national AI strategies are weak on human rights protection, very few countries have standards to ensure the safety and accuracy of AI systems, and the number of AI incidents is rising. From AI-enabled factory robots endangering workers in China to a WhatsApp scam imitating a child’s voice, he cited a string of incidents from just two days earlier this month.
“That’s the good and the bad,” Ojo said. “Things are great but also not so great.”
Canada is a leading international AI innovator, he continued, but this isn’t translating into meaningful benefits. There’s strong domestic research and development in AI, especially by startups, but low industry demand, so talent and products are exported.
Ojo’s conclusion is that Canada needs clear, measurable and bold AI goals, and needs to use public funding to unlock and attract private venture capital.
“It’s go time,” he said. “A new government, new opportunities, a fresh start. The time is now.”

A Digital Tsunami
The first panel of the day, “AI Value — Catching the Wave of a Digital Tsunami,” explored some of the broad applications of AI and how the technology can improve efficiencies at scale and help solve complex problems for all types of organizations.
Danielle Manley, the director of Carleton’s new nursing program, specializes in healthcare innovation and the use of AI in medicine and education.
Healthcare is one of the biggest issues in the country, she said, and one of the opportunities for AI is “ambient listening” — tools that transcribe conversations between patients and healthcare providers, so the latter can look at and listen to people instead of concentrating on documentation.
“AI can take away the administrative burden and reconnect providers to patients,” Manley said, “which is why most of us are in this profession.
“There’s a big cultural shift in patient care,” she added.
“Providers used to ‘own’ your data. Now patients have full information access. We’re moving away from this provider-centric model.”

Left to right: Carleton Intelligent Machines Lab director Majid Komeili, Carleton nursing program director Danielle Manley, Mistral Venture Partners Director of Finance Julien Kathiresan and Invest Ottawa CEO Sonya Shorey, with moderator Allan Thompson, director of Carleton’s School of Journalism and Communication
Fellow panelist Majid Komeili, director of the Intelligent Machines Lab at Carleton, conducts interdisciplinary research on AI-driven solutions for real-world issues, including using machine learning to predict an individual’s risk of chronic homelessness to support early and effective intervention.
“This system can be used to help us ensure that nobody is left behind,” said Komeili, “and can also be used to help train junior staff.”
Joining them at this session were Julien Kathiresan, the Director of Finance at Ottawa-based Mistral Venture Partners, which invests in “smart enterprise” companies, and Sonya Shorey, the President and CEO of Invest Ottawa.
Asked about the most exciting AI advances, Kathiresan said that the technology can now take in multimodal information — voice, text, images, video, structured and unstructured data — and come up with a cohesive interpretation.
“That’s very human-like and five years ago it would have been considered science fiction,” he said.
“But unless we add trust, we’re never going to get to operational efficiency.”
Asked where she sees AI in five years, Shorey talked about a world “where everybody will have more data at our fingertips and this will change how we live our lives. If we equip companies with the right guardrails, we’ll see tremendous benefits.”

Ethics and Governance
The conference’s second panel focused on issues such as ethics, policy, bias, governance and risk.
Kate Purchase, the Senior Director for International AI Governance at Microsoft, said that her company has six principles behind responsible AI practices, including fairness, transparency and inclusiveness.
Purchase’s team spends a lot of time thinking about the constraints facing AI policymakers, and because much of the expertise in this field sits in the private sector, Microsoft is constantly thinking about how to partner with the public sector and support research.
“There is a risk of not using this technology to help solve wide-scale societal challenges,” she said, adding that the biggest overall risk around AI is that “we don’t know what the biggest risk actually is.”
Jordan Zed, Assistant Secretary of the Artificial Intelligence Secretariat in the Privy Council Office, represented Canada’s federal government on the panel. His presence was timely on the day the government’s new cabinet was unveiled, including MP Evan Solomon as the new Minister of Artificial Intelligence and Digital Innovation — a brand-new role on Parliament Hill.
When asked about the biggest risks of AI, Zed talked about the role it can play in amplifying disinformation. “One of the things I worry about most is the impact on democracy and our institutions. But there are technical solutions we can advance and international efforts to provide tools to help people determine whether the content they’re seeing is credible and authentic.”

Left to right: Carleton cognitive science researcher Mary Kelly, Assistant Secretary of the Artificial Intelligence Secretariat in the Privy Council Office Jordan Zed, National Research Council of Canada research officer Kathleen Fraser and Kate Purchase, the Senior Director for International AI Governance at Microsoft, with moderator Allan Thompson
Kathleen Fraser, a research officer at the National Research Council of Canada who specializes in AI, said that because AI systems can do so many diverse things, it’s difficult to even define problems such as bias.
Among the concerns that researchers need to prioritize, she pointed out that “we don’t have a really good way yet to get a good explanation from a large language model as to how it came to a decision. This is an area in which we need to do a lot of work.”
Carleton cognitive science researcher Mary Kelly added a different perspective to the panel, drawing from her background in psychology and philosophy in addition to computer science and machine learning.
Her research bridges human cognition and AI with the aim of developing fairer, more adaptable systems, and also seeks to enhance our understanding of how human brains work.
Calling our brains “meat computers,” Kelly said “we exist in a culture that has all sorts of harmful prejudices embedded in it, and these prejudices also exist in AI systems.”
Even if people hold some of these implicit biases, however, they don’t go around intentionally acting on them. But AI doesn’t have intention — “ChatGPT is just trying to insert the next most probable word” — so we need to be more careful about how we train AI systems.
“They seem like they can reason like human minds, but when push comes to shove, they can’t,” Kelly said. “Getting AI systems to understand what humans are thinking is a big part of making them safe and reliable.”

The Collective Good
The Carleton Challenge Conference was emceed and moderated by Allan Thompson, director of the university’s School of Journalism and Communication.
Carleton President and Vice-Chancellor Wisdom Tettey delivered opening remarks.
“The disruption of AI is something we’re all familiar with,” Tettey said, noting that while AI brings anxiety, it also holds incredible potential “for the common good, the collective good and the continued forward movement of our society.”

Carleton President and Vice-Chancellor Wisdom Tettey delivered opening remarks
Elena Fersman, Vice President and Head of Global AI Accelerator at Ericsson, was the closing keynote speaker.
Ericsson, a multinational networking and telecommunications company, is involved in a major multi-year partnership with Carleton and was one of the conference’s sponsors, along with the Danbe Foundation.
“We are an AI company,” Fersman said, “because we are delivering AI infrastructure.”
As 5G networks evolve into 6G networks, AI is an important part of this “technological convergence,” according to Fersman. It’s an application but is also becoming part of the infrastructure.
“We use a lot of AI in the telecom sector,” she said, “to make our networks effective and predictable, and so anyone can use AI.”

Elena Fersman, Vice President and Head of Global AI Accelerator at Ericsson, delivered the closing keynote
Tuesday, May 13, 2025 in Artificial Intelligence (AI), Challenge, Events