This article is part of a limited-run newsletter. You can sign up here.
Recently, I published a chart of something called the Martech 5000 (also known in the industry as the LumaScape). It sounds high-tech and menacing, like a Transformer. Really, it’s just a map of the 7,040 companies that make up the “marketing technology landscape.”
If you want to know where your personal data goes and who it’s shared with, you can follow this handy chart (Spoiler: No mere mortal can follow it). If this is the industry standard, the industry is broken, I argued. Turns out I wasn’t alone.
For Lisa Macpherson, the former senior vice president of marketing at Hallmark Cards, the chart represents the reason she left her career as a chief marketing officer. “The LumaScape has become a symbol of everything that’s gone wrong with consumer marketing over the past 10 years or so,” she told me recently (she’s now at Harvard doing research on how to help rein in digital platforms).
But her distaste goes well beyond a convoluted chart; at heart, she believes that, for all of the privacy-invading and data-sucking, the advertising technology isn’t all that good at doing what it’s supposed to do: help companies understand their customers.
“As marketers we now have access to our customers in every nook and cranny of their lives. But with more layers of technology between us, we have less insight on what makes them tick,” she said. “Algorithms cobble together ads based on ‘optimization,’ instead of creative teams dreaming them up based on human insight. We’re told that all that data is making ads ‘more relevant’ but consumers find our ads annoying and their use of ad blockers is at an all-time high.”
I’ve argued that we’re in the early stages of a broader privacy reckoning and I think Macpherson’s story gets at an underlying frustration when it comes to online ads, platforms and privacy: It all feels deeply unhuman.
Macpherson argues that human behavior is complex and that data mining is, despite its insights, still a brute-force tactic. “Desire and motivation are influenced by many factors that require context and conversation in order to decode,” she told me. “The data shows what people do (and even what they will do), but not why they do it.”
It’s a powerful notion, and I think it’s behind our dissatisfaction with so much of the technology we use. Even those of us who are willing to trade our personal information for more tailored, targeted services can take offense at the way in which we’re being algorithmically nudged toward outcomes and then brazenly told it’s what we want.
And perhaps the algorithms are right much of the time! But the decision to segment, mine and target us in a complex 7,040-company ecosystem that largely removes humans from the process seems destined to fail in the long run. We may love the immediate convenience of the services that run on our personal information or an ad that appeals to our animal urges of consumption. But those glimmers of dissatisfaction — of a lack of autonomy — behind each click or app install are real, too, and they accumulate.
It might be wishful thinking to hope that there’s a human piece of us that computers can’t quite understand, even if they’re able ultimately to predict what we might end up doing with our complex desires. But that gap, between the algorithms knowing what we’ll do and understanding why we do it, seems destined to catch up to us at some point.
Our collective fears about facial recognition — and its deployment by private companies and governments — have been simmering for a while. But it feels like the dam burst a bit in the month of May. Here’s a brief rundown to catch you up on a few of the big things.
Thanks in large part to the A.C.L.U., the San Francisco board of supervisors voted earlier this month to ban the use of facial-recognition technology by both the police and other agencies.
A large study out of Georgetown Law exposed how Chicago, Detroit and a few other cities are moving quickly to implement sweeping facial recognition systems.
Last week, shareholders at Amazon — which sells facial recognition technology — got enough traction to put two nonbinding proposals to curb facial recognition on the ballot at the company’s recent shareholder meeting. The proposals were voted down. But activists suggest it was an important symbolic move.
The same day, Amazon came under harsh criticism for directing users to report abuses of its facial recognition platform via a pithy online form. “They simply do not know what they are doing,” a civil liberties attorney said of the move.
Facial recognition got a day on Capitol Hill. “This is a moment for us, in a bipartisan way, to say stop. We already have lost our right to be left alone,” Representative Mark DeSaulnier said during the hearing. Other representatives, including Alexandria Ocasio-Cortez, skewered big tech companies and city governments for creating the infrastructure of a surveillance state, linking it to issues like abortion.
The debate over facial recognition is just beginning, but recent activity seems to suggest that it’s going to be a roiling one (last Thursday, Wired declared “Facial Recognition Has Already Reached Its Breaking Point”). Plenty of technology appears without legislation or any real conversation about ethics, civil liberties or a framework for adoption. While there’s bound to be hasty facial recognition deployment from cities and agencies to come, there’s also a protracted fight over its future that lies ahead.
It’s a fascinating time capsule, both for the book’s utopian 1997 web predictions and for Kakutani’s prescient skepticism about the internet. These questions from Kakutani stand out, as many still have no good answers in 2019:
How can people sift out truth from opinion on the Net, facts from misinformation? What should be done about genuinely dangerous information (bomb-making instructions, say, or maps of security installations) that may be found in cyberspace? How can users protect their own privacy? How can they avoid being deceived and cheated by online con men, hiding behind a cloak of anonymity or an assumed name?
There are also some eerily good predictions that ring very true in our current moment:
She contends that the Net will give individuals “the ability to be heard across the world, the ability to find information about almost anything . . . along with the ability to spread lies worldwide, to discover secrets about friends or strangers, and to find potential victims of fraud, child abuse or other harassment.” She writes that it will foster decentralization, undermining “central authorities whether they are good or bad” and nurture “a profusion of entrepreneurs.”
There will be a “premium on people who can market themselves,” she writes. “In a world where competitive advantage comes either from new design or from the attention of people, those who succeed will be those who are good at getting their new designs or themselves noticed.”
Microphones are everywhere. We willingly install them inside our homes so we can yell at our speakers to play N.P.R. and have them tell us the weather. But not all are benevolent; some pieces of software or websites might access your microphone and you might not even know it. Unlike the little green light for your camera (which is by no means foolproof), there’s no microphone activity indicator.
Enter Micro Snitch. It’s a downloadable app that I’ve been experimenting with that tells you whenever your microphone is in use. Once you download it, a big pop-up appears when your mic is active — you’ll see it if you FaceTime or video chat. But it’ll also appear if some shady third-party software or site tries to access it. The app also keeps a running second-by-second log of activity, which is helpful, should you end up with somebody spying on you.
One caveat: I liked this disclaimer I found from designer Tobias van Schneider while looking into Micro Snitch. If you’re having a sensitive conversation and “you do it with a MacBook, two phones and a Google Home or Amazon Alexa in the same room, the conversation is most certainly not really private.”
An interesting piece from a doctor who makes the case that a lack of privacy is a health concern.
If you want to get into the technical weeds, here’s a proposal for changing ad tracking.
A look at privacy at the state level.
Comcast rumored to pilot in-home health monitors before year’s end.