The Future of Vandals

Since this is what goes, anything goes. Raising questions about what we were led to believe – no, an after-the-fact description in place of an assessment is not one. It’s a critique, disassociated and casually thoughtless. And all the while confirming that anyone can just do anything and… wait a minute: who is the nihilist here? Oh. The advertising company.

The Wikimedia Foundation released a statement asserting that North Face and the ad agency behind the campaign, Leo Burnett Tailor Made, had “unethically manipulated Wikipedia” and “risked your trust in our mission for a short-lived marketing stunt.”

“Wikipedia and the Wikimedia Foundation did not collaborate on this stunt, as The North Face falsely claims,” the statement read. “When The North Face exploits the trust you have in Wikipedia to sell you more clothes, you should be angry.”

And then ‘Brought to you by’ declares they will commit to ‘ensuring their teams and vendors are better trained on the site policies,’ though of course they did not say they are committed now, or when they would be. Until then, and perhaps for some time afterward, we should remain vigilant about what we are led to believe.

Tech Fascination Capture

A somewhat cheeky line connects the many points along what I’ll call our Tech Fascination Capture. Describing that line can be tricky, but that’s what blogs are for, so here goes.

Interpretive problems that computers cannot solve, or rather those they can solve that aren’t the important ones, are at the center of a cognitive gap that is only increasing – and doing so fueled by our gaze and awe. We can’t seem to figure out why or how Russian troll farms might have swayed the most recent U.S. presidential election, if not others. Will artificial intelligence and the occupations lost to robots be good/a net value/desired? Self-driving cars – will we get there safely?

Much of this mystery is obscured by the need for a single answer to any one question, of course. But we are also frightened by the prospect of a single answer to multiple questions. This fear is a sort of disbelief itself, based on our own uncertainty about what we know from what we’ve learned, plus this more recent tendency to fall back on what everyone knows to be true. I’m actually unsure about the origin of that dynamic, though I am unafraid to speculate.

But, one thing is certain (and demanding of emphatic, if parenthetical, punctuation!): the answers lie in the questions themselves.

On social media misinformation, we don’t seem to want to contemplate the very top-level tradeoff: is the ability to connect with people worth the price of manipulation? That is, information and disinformation flow through the same tube – whether we believe one is sacred and the other profanely immoral is of no consequence whatsoever. There is one tube/portal; these are its uses; do you want to play?

Will robots kick us to the curb and take our places? Who programs what robots can do? What will machine learning do about the should question? Is there such a thing as robot creativity, outside of MFA programs, that is?

Self-driving cars: so few startups and new products have anything to do with actual technology anymore that this one – which does – should (ha!) come with a free-rider proviso. The billions of dollars and pixels that accrue to its pursuit all ignore the same problem with driverless cars: unanticipated events. If a couple, holding hands, is jaywalking and a young mother is in the crosswalk with her carriage on the same section of street at the same moment, who gets run over? It all happens in an instant – plus bikes, buses, other cars (are there bad self-drivers?), weather, darkness… the idea that these variables can be solved is an answer to a solution, not to a problem.

This is not to suggest understanding our capture is simple. But let’s think about it.

The last and the next 20 years

Peter Singer’s 1975 book Animal Liberation is perhaps the seminal text on awakening human consciousness about nonhuman animals. More of a philosophical tract, it presents an even-handed narrative of why animals’ interests should be considered, one that is neither ‘good’ nor ‘bad’ per se. Its big idea of ‘the greatest good’ is an effective route to ethical behavior, and it resonates with the challenge of how to get people to care about nature, which – if not cast as satire – is one of the most urgent ideas of the last and the next twenty years:

It is easy to see how bleak accounts of the state of the planet can overwhelm people and make them feel hopeless. What is the point of even trying if the world is going down the drain anyway?

To muster public and political support on a scale that matches our environmental challenges, research shows that negative messaging is not the most effective way forward. As a conservation scientist and social marketer, I believe that to make the environment a mainstream concern, conservation discussions should focus less on difficulties. Instead we should highlight the growing list of examples where conservation efforts have benefited species, ecosystems and people living alongside them.

The promise of positive messaging and marketing language to sway greater environmental sh*t-giving is cynical, but here we are. He’s not wrong, though the degree to which this kind of promotion will necessarily muster the language of commodity (a great cause of said looming catastrophic scenarios) to save the Earth makes the pain in my neck throb. It could also make messages that feel like Coca-Cola ads that much easier to dismiss out of familiarity. Optimism in the face of destruction has its limits, and sometimes we need to look at things as they are and act accordingly. Like adults instead of media companies.

Still, Lost & Found is a good idea. We can do worse than trying to invigorate the public with the wonder of natural wonder, as long as they don’t begin to believe too strongly in its resilience. We can lead the water to horses, but can we make them care?