By Joseph Mathieu
Photos by Josh Hotz
Will Canadian democracy be able to withstand the inflammatory online tactics of foreign interference during this October’s federal election?
It was the key question of Artificial Intelligence, Democracy, and Your Election, an event hosted by Carleton University’s Initiative for Parliamentary and Diplomatic Engagement on Feb. 25, 2019.
A panel of experts and stakeholders responded to a report by the Communications Security Establishment (CSE) warning that multiple hacker-activist groups will “very likely” try to influence the 2019 federal election. An update to the report, “Cyber Threats to Canada’s Democratic Process,” will be released before the election, said Minister of Democratic Institutions Karina Gould.
“Foreign interference is not new,” said Gould. “But the tools and the ways and the ability to engage directly with citizens [are].”
Before the panel discussion, the minister presented the government’s plans to counter cyber threats to democracy. They include combatting foreign interference through legislation such as the recently passed Bill C-76, improving co-ordination between government departments and agencies, and pressing social media companies to act responsibly. The best defence, said Gould, is raising public awareness of potential fraud and disinformation.
Electoral Interference and the Emerging Field of Artificial Intelligence
Carleton’s Merlyna Lim, Canada Research Chair in Digital Media and Global Network Society, primed the audience of parliamentarians, students and diplomats on the emerging field of artificial intelligence (AI).
AI is already being used in many industries, from health care to robotics. Its most prominent use in politics, however, has been in social media manipulation: from demographic targeting for campaign ads to password-cracking bots and fake accounts that spread propaganda on Facebook, Twitter, Reddit and Instagram.
“We are anticipating the use of deep-fakes, which are videos created by similar algorithms that show someone doing or saying something they didn’t actually do (or say),” said Lim.
Roughly 40 per cent of Internet traffic is fake, said Matthew Hindman, a media and public affairs professor at George Washington University. He suggested that fake accounts could be quickly identified if social media platforms more regularly checked whether those posting were real people.
“Forcing somebody to check a box and solve a series of captchas (online challenges) for political speech pretty regularly . . . would make a significant difference,” said Hindman.
Regulating Social Media
Allan Rock, commissioner of the Transatlantic Commission on Election Integrity, pointed out that Germany, the European Union and the United Kingdom are all at different stages of regulating social media through compulsory codes of conduct or independent regulators.
“I want to make it clear, this is about more than just Facebook and other social media platforms,” said Rock. “It’s about legislation, it’s about educating the population, it’s about ultimately the kind of conversation we want to have in our elections.”
Kevin Chan, global director and head of public policy at Facebook Canada, said Facebook is already regulated by Canadian hate speech and regulated goods laws. Chan agreed there is more to do, but said Facebook has already added 20,000 people to its safety and security teams and released a Canadian Election Integrity Initiative plan.
“We all want the same thing, we want free and fair elections,” said Chan.
Rock praised the initiatives and agreed there was more to do. He also lauded the “team effort” of the G7 Rapid Response Mechanism (established in June 2018 to improve co-ordination between G7 countries in identifying and responding to threats to democracy) and suggested the government’s primary role is to protect Canadians from powerful corporate interests that would try to undermine democracy.