When the Algorithm Doesn't Ask Your Name | Book Review: "Weapons of Math Destruction"
What I’m reading, what it reminded me of, and why I think it matters more now than ever.
I just finished reading Weapons of Math Destruction by Cathy O’Neil, and I can’t stop thinking about it. If you’ve read it, I’d love to hear your thoughts. And if you haven’t, it might be time. It came out in 2016, and that part matters. Almost a decade later, the urgency of the book hasn’t faded; it’s deepened. The predictions have aged into reality. If you have something else in this vein I should be reading next, drop it my way. I like reading topics in trios.
Weapons of Math Destruction is basically about how data gets dressed up like truth and turned loose on people, especially people without the power to push back. O’Neil calls these systems “Weapons of Math Destruction,” or WMDs (I wonder who’ll get that reference). They’re algorithms that are opaque, scalable, and harmful. And when I read her descriptions of education rankings, hiring filters, and risk scores in courtrooms, I didn’t feel surprised. I felt seen.
If you’re here for a play-by-play review, you’re in the wrong spot. That’s what Amazon reviews are for. Let’s talk about how this book affected me and why it’s an important read.
One of the first stories that came to mind while reading happened a few years ago. We were handed a multi-page rubric during a staff meeting and were told that the administration was trying a new approach to evaluations. It was framed as feedback-seeking: take a look, let us know what works, help shape the system. I didn’t think much of it. I filled mine out generically and turned it in. Others, especially newer teachers, wrote with raw honesty, believing they were helping design something better. Weeks later, we found out those “feedback forms” were our actual evaluations. I had enough status to get waved through. But those who answered with sincerity were judged, sometimes harshly. Their words, meant to improve a tool, became the same data that hurt them. It’s a small example. But it’s also a precise one. “Opacity” doesn’t always mean code. Sometimes it’s the illusion of participation.
The book hit even closer when I thought about my daughter. A sharp, thoughtful young woman with an enviable shock of wavy red hair, she had been accepted to a handful of universities her senior year. But she knew she wasn’t quite ready, so she deferred. Instead, she spent a year working at an outdoor science school, living on site and leading elementary students through ecology lessons in the woods. During that year, she took a few community college classes, just enough to keep her progress going. When she contacted the university that had already admitted her, they told her she’d need to reapply, this time as a transfer student. The algorithm, designed to sort efficiently, saw only partial coursework and no degree, so it denied her as a non-completer transfer student. Never mind the full-time teaching, the maturity, the clarity she’d gained. The system never asked. Why would it? It wasn’t designed for that. Eventually, she appealed and wrote an essay explaining her story. A human read it. She was accepted.
This, to me, is the heart of O’Neil’s warning. These systems don’t just miss the point while damaging people; they make it nearly impossible for the point to be known.
I remembered another story too. Early in my teaching career, I worked in a district with a large Hmong population. One year, about eight middle school students arrived straight from refugee camps in Thailand. They didn’t speak English. They didn’t know the alphabet. They were unfamiliar with the cultural landscape of an American school. And yet, within weeks, they were required to take the state’s standardized tests, complete with references to Scooby-Doo and Shaggy. I sat beside one of them as a translator tried to explain who those characters were so the question would make sense. It didn’t. It couldn’t. But their scores were calculated just the same as those of the dominant-culture students who grew up in the neighborhood. Somewhere, an algorithm registered failure. It never logged the fact that they’d only just arrived, that they were incredibly intelligent, resourceful, happy, playful, and literate in two other languages, or that they were navigating not just a new school but a new world.
Reading this book made me feel like I had names for things I’d been seeing for years. But it also left me wondering: what happens when the math gets more powerful? In 2016, the systems O’Neil described were mostly predictive. Now, we’re layering in generative AI, systems that not only sort, but simulate. We’re not just being measured anymore. We’re being spoken to by things trained on our data, our language, our choices. The stakes are bigger. The speed is faster. And still, the patterns of harm follow the same grooves: marginalized groups made more vulnerable, complexity flattened into inputs, systems that say “we’re just being efficient” while quietly deciding who gets seen, who gets skipped, and who never even makes the list.
I don’t think this means we abandon the tools. But I do think it means we need to ask better questions. O’Neil ends the book with a call for transparency, for regulation, for an insistence that algorithms be held to the same ethical scrutiny we’d demand of any policy or practice that affects millions of lives. I’d add this: we have to keep telling the stories they miss. The ones where a rubric pretends to be a conversation. The ones where a young woman’s year of service almost disqualified her from continuing her education. The ones where a student faces a test built around a cartoon he’s never seen.
Maybe the most dangerous thing about a Weapon of Math Destruction is that it never asks your name. And when it’s done with its analysis, no one else asks either.