Canada’s security agencies urged to detail AI use

A federal advisory body is calling on Canada’s security agencies to publish detailed descriptions of their current and intended uses of artificial intelligence systems and software applications.

In a new report, the National Security Transparency Advisory Group also urges the government to look at amending legislation being considered by Parliament to ensure oversight of how federal agencies use AI.

The recommendations are among the latest measures proposed by the group, created in 2019 to increase accountability and public awareness of national security policies, programs and activities.

The government considers the group an important means of implementing a six-point federal commitment to be more transparent about national security.

Federal intelligence and security agencies responded to the group’s latest report by stressing the importance of openness, though some pointed out the nature of their work limits what they can divulge publicly.

Security agencies are already using AI for tasks ranging from translation of documents to detection of malware threats. The report foresees increased reliance on the technology to analyze large volumes of text and images, recognize patterns, and interpret trends and behaviour.

As use of AI expands across the national security community, “it is essential that the public know more about the objectives and undertakings” of national border, police and spy services, the report says.

“Appropriate mechanisms must be designed and implemented to strengthen systemic and proactive openness within government, while better enabling external oversight and review.”

As the government collaborates with the private sector on national security objectives, “openness and engagement” are crucial enablers of innovation and public trust, while “secrecy breeds suspicion,” the report says.

A key challenge in explaining the inner workings of AI to the public is the "opacity of algorithms and machine learning models," the so-called "black box" that could leave even national security agencies without a full understanding of their own AI applications, the report notes.

Ottawa has issued guidance on federal use of artificial intelligence, including a requirement to carry out an algorithmic impact assessment before creation of a system that assists or replaces the judgment of human decision-makers.


It has also introduced the Artificial Intelligence and Data Act, currently before Parliament, to ensure responsible design, development and rollout of AI systems.

However, the act and a new AI commissioner would not have jurisdiction over government institutions such as security agencies, prompting the advisory group to recommend Ottawa look at extending the proposed law to cover them.

The Communications Security Establishment, Canada’s cyberspy agency, has long been at the forefront of using data science to sift and analyze huge amounts of information.

Harnessing the power of AI does not mean removing humans from the process, but rather enabling them to make better decisions, the agency says.

In its latest annual report, the CSE describes using its high-performance supercomputers to train new artificial intelligence and machine learning models, including a custom-made translation tool.

The tool, which can translate content from more than 100 languages, was introduced in late 2022 and made available to Canada’s main foreign intelligence partners the following year.

The CSE’s Cyber Centre has used machine learning tools to detect phishing campaigns targeting the government and to spot suspicious activity on federal networks and systems.

In response to the advisory group report, the CSE noted its various efforts to contribute to the public’s understanding of artificial intelligence.

However, it indicated the CSE "faces unique limitations within its mandate to protect national security" that could make it difficult to publish details of its current and planned AI use.

“To ensure our use of AI remains ethical, we are developing comprehensive approaches to govern, manage and monitor AI and we will continue to draw on best practices and dialogue to ensure our guidance reflects current thinking.”

The Canadian Security Intelligence Service, which investigates threats including extremist activity, espionage and foreign meddling, welcomed the transparency group’s report.

The spy service said work is underway to formalize plans and governance concerning use of artificial intelligence, with transparency underpinning all considerations. But it added: “Given CSIS’s mandate, there are important limitations on what can be publicly discussed in order to protect the integrity of operations, including matters related to the use of AI.”

In 2021, Daniel Therrien, the federal privacy commissioner at the time, found the RCMP broke the law by using cutting-edge facial-recognition software to collect personal information.

Therrien said there were serious and systemic failings by the RCMP to ensure compliance with the Privacy Act before it gathered information from U.S. firm Clearview AI.

Clearview AI’s technology allowed for the collection of vast numbers of images from various sources that could help police forces, financial institutions and other clients identify people.

Amid concern over Clearview AI, the RCMP created the Technology Onboarding Program to evaluate compliance of collection techniques with privacy legislation.

The transparency advisory group report urges the Mounties to tell the public more about the initiative. “If all activities carried out under the Onboarding Program are secret, transparency will continue to suffer.”

The RCMP said it plans to soon publish a transparency blueprint that will provide an overview of the onboarding program’s key principles for responsible use of technologies, as well as details about tools the program has assessed.

The Mounties said they are also developing a national policy on the use of AI that will include a means of ensuring transparency about tools and safeguards.

The transparency advisory group also chides the government for a lack of public reporting on the progress or achievements of its transparency commitment. It recommends a formal review of the commitment with “public reporting of initiatives undertaken, impacts to date, and activities to come.”

Public Safety Canada said the report’s various recommendations have been shared with the department’s deputy minister and the broader national security community, including relevant committees.

However, the department stopped short of saying whether it agreed with recommendations or providing a timeline for implementing them.

