America’s Newfound Interest in Regulating Tech May Be a Game Changer

Because the United States is home to so many of the tech giants whose products affect our lives, the White House’s newfound interest in regulation could be transformative.

PHOTO: Former Facebook employee and critic Frances Haugen answers questions during a U.S. House Subcommittee on Communications and Technology hearing on Capitol Hill in Washington, December 1, 2021. (REUTERS/Elizabeth Frantz)
Could the tide be turning on our tacit acceptance of the role big tech plays in moulding our minds?

On March 1, President Joe Biden declared in his State of the Union address that “we must hold social media platforms accountable for the national experiment they’re conducting on our children for profit.” He credited the courage of Facebook whistle-blower Frances Haugen, whom he hosted as his guest at the joint session of Congress and whose revelations last fall made social media’s impact on the mental health of young people around the world undeniable. It is a relief that Haugen’s inside story has finally struck a chord in the United States. For many, the issues she highlights were no surprise.

As campaigners such as Privacy International have flagged, in the current data economy even our mental health is for sale. Many mental health websites around the world share information about their visitors, including, in some cases, answers to self-assessment questionnaires about mental health. That information, whether about children or the adults around them, is highly valuable in an online ecosystem where it is assumed to be legal to prey on and play with people’s emotional states. And it is the kind of information that could be fed into the algorithms that decide the price, or availability, of your medical insurance.

Reports from Australia in 2017 claimed that Facebook had offered advertisers real-time access to the emotional states of teenagers and young adults, allowing them to be targeted when they were at their lowest ebb. It was a claim that Facebook denied. But last year, Reset Australia found that it could buy advertising targeting thousands of children with dangerous interests such as extreme weight loss, alcohol and gambling for a few dollars. Surveillance advertising is the exploitation of all our mental states for someone else’s benefit.

But that weighted blanket Facebook wants to sell you to calm your anxious dreams is just the tip of the iceberg. The surveillance advertising business model is the oil that drives disinformation about COVID-19, turning it into “a partisan dividing line” instead of an infectious disease. That business model is also the pusher of conspiracy theories that lead people to take up arms on the steps of the US Capitol Building. It is the fuel for Russian information warfare in the current crisis in Ukraine, and it has been targeting democracies around the world for years. The algorithms that support surveillance advertising thrive on division, whatever the topic.

Campaigners and legislators have been grappling with these issues for more than a decade. Earlier this year, campaigners in Europe had a groundbreaking win with a ruling from the Belgian Data Protection Authority that the consent pop-ups used to legitimize massive online tracking by advertisers are in fact a breach of EU law. Meta's response to increased EU regulation has been to threaten to withdraw its business from Europe. Perhaps that would be no bad thing. If its business model cannot respect our rights, maybe it's time for a new tech paradigm.

In the European Union, the Digital Services Act and the AI Act attempt to limit the human rights impacts of technology. And the UK’s Online Safety Bill, touted as a flagship piece of legislation to make the internet safer, was recently published. It will no doubt provoke more intense debates that pit safety against freedom of speech. But the bill’s focus on content is simultaneously too broad and too narrow and fails to touch the real problem. It is the systems, not the content, that cause the real harm. And the issues caused by business models built on surveillance, profiling and targeting go far beyond what we say. They affect how we feel, how we behave, how we spend and how we vote.

Even China has introduced regulation to tackle the influence of recommender algorithms that manipulate the way we see the world. The United States may be late to the party, but as the home of many of the tech giants that affect all our lives, its newfound interest in regulation may be a game changer.

But the reality is that any genuine move to address the harms must go beyond legislating to protect children online or to police content. It is the business model, which uses vast troves of data on each of us to understand what we think and to work out how to press our individual emotional buttons to change our opinions and our actions, that is the biggest threat to our collective human future. Our children will only be safe when we are all free of it.

This article first appeared on Techonomy.


The opinions expressed in this article/multimedia are those of the author(s) and do not necessarily reflect the views of CIGI or its Board of Directors.

Susie Alegre is a CIGI senior fellow and an international human rights lawyer. She is an associate at Doughty Street Chambers in the United Kingdom.

