How does facial recognition work - and why do tech companies use it? How afraid should we be that tech companies can “recognise” us?
Amid accelerating concerns about user safety and privacy, Facebook has announced it will shutter its super-sophisticated facial recognition system, deleting a vast biometric database of more than a billion faceprints.
What is facial recognition?
Simply put, facial recognition technology refers to software that maps, analyses and confirms the identity of a face in a photo or video. Arguably the most powerful surveillance tool ever created, the controversial technology is used by law enforcement, in airports and schools, by governments and private companies alike.
(To be clear: the faceprint that unlocks your phone is entirely different and goes no further than your own device.)
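To make the idea of a "faceprint" concrete, here is a minimal sketch of how this kind of matching works in principle, using the open-source Python `face_recognition` library. This is purely illustrative - it is not Facebook's DeepFace system - and the image filenames are hypothetical placeholders.

```python
# Illustrative sketch only: maps faces to numeric "faceprints" (embeddings)
# and compares them. Requires the open-source `face_recognition` library.
import face_recognition

# Map a known face to a 128-number faceprint.
known_image = face_recognition.load_image_file("alice.jpg")        # hypothetical file
known_encoding = face_recognition.face_encodings(known_image)[0]

# Do the same for a new, unidentified photo.
unknown_image = face_recognition.load_image_file("unknown.jpg")    # hypothetical file
unknown_encoding = face_recognition.face_encodings(unknown_image)[0]

# "Recognition" boils down to measuring how close the two faceprints are.
distance = face_recognition.face_distance([known_encoding], unknown_encoding)[0]
is_match = face_recognition.compare_faces([known_encoding], unknown_encoding)[0]

print(f"Distance: {distance:.3f} -> {'same person' if is_match else 'different people'}")
```

Systems used at scale work on the same basic principle - convert a face to numbers, then compare those numbers against a database - just with far larger databases and more sophisticated models.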
What are the dangers?
Because it can be used to identify people from afar without their knowledge or consent, fears about the misuse of facial recognition software abound. Those concerns are well-founded.
In China, for example, facial recognition has been used by the government to track the Uyghurs, the Muslim minority group that has been rounded up into mass “reeducation” camps.
Less extreme risks include predatory marketing practices, identity fraud and privacy harms ranging from stalking to discrimination when applying for jobs.
Facebook's history of 'saving face'
Facebook maintains it has only used its facial-recognition functionality on its own site. The company also insists it has not sold its software to third parties. Concerns around the practice stem more from what the company could do with the data if it chose than from what it has actually done.
Yet the company has faced heat - and massive fines - over its facial recognition practices in the past. In August 2019, the company lost a federal appeal in a class-action lawsuit accusing it of collecting and storing "biometric data" without seeking users' consent.
The class-action suit had been ongoing since 2015. It finally concluded in January 2020, when Facebook agreed to pay a $650 million settlement. In 2019, the Federal Trade Commission fined the company a record $5 billion to settle a range of privacy complaints, its facial recognition practices among them.
A company spokesperson maintained at the time of the settlement, “We have always disclosed our use of face recognition technology and that people can turn it on or off at any time.”
That’s debatable. Facebook has used software to auto-tag people by name in photos since 2010. It wasn’t until 2018 that the company started to explain its facial capture technology to users, pointing to a settings page where the feature could be disabled.
And it wasn’t until 2019 that it revised its policy to make facial recognition on the platform opt-in only.
Before that time, faceprints were collected by default for all users.
Facebook’s version of the technology, which bears the spooky name DeepFace, is said to be 97% accurate - compared to the FBI’s system at 85% accuracy.
What's the point?
Why did Facebook build the system in the first place? For the same reason Facebook does most things: to grow. According to anonymous sources who spoke to the Washington Post, company data scientists realised that alerting people when they’d been tagged in photos was a powerful driver of engagement on the platform.
The feature’s success in driving users to the platform was later replicated on Instagram (which the company acquired in 2012), despite objections from some employees that it was “creepy and tacky.”
A broader trend
Facebook - or rather, Meta, to use its umbrella entity’s new corporate name - isn’t the only social platform with a history of using facial recognition as a growth hack, while at the same time amassing a gigantic database of valuable biometric information.
TikTok is another. In June 2020, the company released a statement about its "For You" page, detailing how its algorithm recommends videos to users; it made no mention of facial recognition. But this year, TikTok agreed to pay $92 million to settle a lawsuit alleging it did indeed use facial recognition to identify users’ age, gender and ethnicity.
Other big tech companies have marketed faceprint software directly to police, but have pulled back in recent times. Amazon, for example, has indefinitely extended its moratorium on police use of its facial recognition software, citing a lack of legal clarity about its use. IBM and Microsoft have also stopped selling their facial recognition technology to police.
Industry watchdogs believed the Facebook about-face could motivate lawmakers to re-examine the practice and take long-overdue legal steps to regulate it.
With just a few clicks from your smartphone, Family Zone lets you manage your child's access to streaming services, games and social media - and so much more.
Create a home where children thrive, and start your free trial today.