I spent last week interviewing the parents of a teenager who committed suicide after a terrible encounter with sextortion. The entire episode began and ended in a single, horrible night on Instagram. I'll tell their full story another day, but the interviews are haunting me. Can't something be done? Companies like Meta and Alphabet know where I'm having lunch next week. Can't they stop sextortion in progress?
School districts around the country are already trying -- and the results are also haunting me. A fantastic story published this week by The Seattle Times and The Associated Press takes an honest, even-handed look at surveillance software designed to protect kids, and all that can go wrong with it. Their piece begins:
One student asked a search engine, “Why does my boyfriend hit me?” Another threatened suicide in an email to an unrequited love. A gay teen opened up in an online diary about struggles with homophobic parents, writing they just wanted to be themselves. In each case and thousands of others, surveillance software powered by artificial intelligence immediately alerted Vancouver Public Schools staff in Washington state.
Software named "Gaggle" had flagged these posts during a typical school year in that Pacific Northwest city. According to the Electronic Frontier Foundation, thousands of schools around the country use such software. Getting flagged by it is so common the kids call it getting "Gaggled."
I'd like to begin my thoughts with something I rarely admit: I'm deeply conflicted. For years, people have wasted money on technology solutions to major social problems. Tech companies often overpromise, while buyers are often hoping for a cheap, quick answer that doesn't exist.
But with those parents in my head, I can't dismiss the idea out of hand.
Here are just some of the problems.
Many parents and kids are unaware that they are under such intense scrutiny. And, as is often the case with privacy issues, others might be vaguely aware thanks to a fine-print disclosure, but not aware of the extent of the surveillance -- they've not given informed consent.
Picking through thousands of messages each minute to decide which are dangerous or illegal is a sensitive and challenging task. It's easy to imagine, for example, a literature student searching for phrases from classic novels and triggering nudity or violence alerts.
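To make that false-positive problem concrete, here's a toy sketch of keyword-based flagging in Python. It is not how Gaggle or any real monitoring product works -- vendors keep their algorithms secret, as noted below -- and the watchlist and function names are invented purely for illustration.

```python
# Toy sketch of keyword-based flagging, invented purely for illustration.
# It is NOT how Gaggle or any real monitoring product works; real systems
# are proprietary and more sophisticated, but the false-positive risk is real.

FLAG_TERMS = {"blood", "gun", "naked", "kill"}  # hypothetical watchlist

def flag_message(text: str) -> set:
    """Return any watchlist terms that appear in a message, with no context."""
    words = {word.strip(".,!?;:'\"").lower() for word in text.split()}
    return FLAG_TERMS & words

# A student quoting Macbeth for an English essay trips the same filter
# that a genuine threat would.
essay_query = "Will all great Neptune's ocean wash this blood clean from my hand?"
print(flag_message(essay_query))  # -> {'blood'}: flagged, context ignored
```

Even this crude example shows why human review of every alert matters: a word match carries no information about intent.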
And -- this should surprise no one -- when The Seattle Times and the AP asked questions about the software, they discovered some pretty egregious vulnerabilities that exposed students' very private data.
Students around the country have fought back against Gaggle and similar software. Student journalists in Kansas have complained that the software violates their free press rights (they are correct). And when clever students ran "tests" against the software to see what phrases would trigger it, administrators told them to cut it out.
The EFF points out that companies that sell the software keep their algorithms secret, so parents and students don't really understand how they are being watched. The software could also be used to out LGBTQ+ students against their will, a potentially dangerous misstep.
And the software isn't cheap. The story points out that the six-figure price tag could instead fund a full-time school counselor, maybe more.
It's important to note that student surveillance software is used -- for now, anyway -- only on school-issued devices. That changes the calculus around students' expectations of privacy.
Now for the obvious and more compelling counter-argument -- if a student says they are planning a mass shooting on a school-issued device, or sends a note indicating they are contemplating self-harm, don't we want to do everything we can to stop that? In fact, isn't there a burden on schools to do so? Yes, and yes.
These all-or-nothing privacy debates feel coldly academic in a blog post, but they have serious, life-or-death consequences. If you stick around the privacy world long enough, you'll hear variations of this debate over and over. Imagine, for example, the health secrets we could unlock if all patient medical data were fed into brilliant AI machines.
I'm not arrogant enough to offer an answer about school surveillance software here, though I have observations to share. Firms that sell it should welcome clever experiments by students; they will only serve to improve the product. Parents and students should never feel surprised by it. Disclosures should be clear and frequent. Secrecy around this kind of software sets off my spidey sense, and it should set off yours, too.
Companies that sell the software should open themselves up to rigorous, independent testing for efficacy. I have a sneaking suspicion that it does not work as well as advertised -- first, because kids have a way of evading filtering tools, and second, because the chilling effect of always-on surveillance could easily do more harm than good. (Here's more reading from RAND on that.)
The hope lies in the future. As I often say, tech created this problem, so tech is going to have to solve it. Plenty of tech firms are working on "privacy-enhancing technologies" like differential privacy, which offer the promise of data sharing without privacy compromises. In whatever realm you can imagine -- fighting terrorism, evaluating mortgage applicants, finding at-risk children -- privacy-enhancing technologies will be absolutely essential to unlocking tech's true potential. They will also get us out of these endless debate loops that pit privacy and safety against each other.
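For readers who have never seen differential privacy in action, here is a minimal sketch of the classic Laplace mechanism applied to a simple count. The function name, the epsilon value, and the scenario are my own assumptions for illustration; real deployments involve far more care (privacy budgets, composition, and so on).

```python
# Minimal sketch of differential privacy via the Laplace mechanism.
# Everything here (names, epsilon, scenario) is illustrative, not a real product.
import random

def dp_count(true_count: int, epsilon: float) -> float:
    """Return a noisy count satisfying epsilon-differential privacy.

    A counting query has sensitivity 1 (adding or removing one person
    changes the count by at most 1), so Laplace noise with scale
    1/epsilon is enough.
    """
    scale = 1.0 / epsilon
    # The difference of two exponentials is a Laplace(0, scale) sample;
    # the standard library has no laplace() helper, so build it this way.
    noise = random.expovariate(1.0 / scale) - random.expovariate(1.0 / scale)
    return true_count + noise

# Example: report roughly how many students searched a sensitive phrase
# district-wide, without the released number pinning down any one student.
print(dp_count(true_count=42, epsilon=0.5))
```

The point is that the trade-off becomes explicit: a smaller epsilon means more noise and stronger privacy, a larger epsilon means a more accurate number and weaker privacy. The debate becomes a dial rather than an all-or-nothing fight.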
In the meantime, I have to say, I sure wish school districts would invest in great teachers and counselors rather than questionable software. But I take it on good faith that many people are trying their best. I never want to interview parents whose child was killed by software again -- though sadly, I'm sure I will.
Safety and security cannot be had by sending children to public indoctrination jails. They must be taught morality and personal responsibility at home. When parents relinquish access to their children for 8+ hours a day to a party that answers to liability-management attorneys rather than to the long-term welfare of the child, you get this kind of outcome.
Most schools are pinched for budget and lack the competent leadership to have a strong Chief Technology Officer, even if only on a fractional basis. Even fewer have a Chief Information Security Officer.
Governance does not exist in these places. The boards provide no viable oversight, because that would require hard work and an investment of time in what are unpaid jobs.
I have had dozens of conversations in my CTO/CISO capacity advising school executives to be sensitive to the long-term implications of their decisions. Instead, what I see in 99% of cases is a decision to go with whatever offers the lowest friction and the most immediately perceived convenience for admin staff and teachers.
In a secondary school I know of, the entire curriculum is dictated by software solutions that perform the grading on behalf of the teacher. It has nothing to do with whether the curriculum actually provides students the instruction they need to gain the skills for the expected outcome. It's all about teacher convenience.
The Google Education machine is all about certifications and credentials for teachers while decoupling instruction and grading from the skills (or lack thereof) of individual instructors. Administrators like it better when the teacher is simply a plug-and-play facilitator, resulting in fewer continuity interruptions when a teacher is out or departs.
All parents should realize they would be vastly better off using self-paced classes in a homeschool model. The counterparty risk and the indoctrination war on the parents' values are eliminated by taking responsibility for teaching one's own children.
If you think you don't have time to homeschool, what you really don't have time for is going to war with the school, the teachers, or the school board for constantly violating your child's safety, security, privacy, and more.