On Popularism

Popularism is the political strategy of advising politicians, particularly Democrats, to prioritize policies with majority support (and Wall Street's backing) while avoiding divisive, ideologically extreme, or unpopular issues. In practice it consists of poll-driven, focus-grouped balderdash designed to resonate with existing public sentiment. Following polls is already a bad approach, but worse, lefty or progressive stances that poll strongly are often not considered at all. Popularism is pseudo-analysis that elides clear public support for things like funding childcare and protecting immigrant communities, a.k.a. communities.
What are some leading issues? Are they popular? It’s an election year – let’s advise our candidates.
Over the past year, the White House has courted tech billionaires and gone out of its way to protect the AI industry’s agenda, fast-tracking permits for data centre construction and approving the sales of advanced chips to China while cracking down on states’ attempts to regulate chatbots … But across the US, citizens, clergy and elected officials in conservative communities are leading a grassroots rebellion against the rapid rollout of the technology.
Conclusion: not popular

And it was all too easy to be pessimistic about the prospects both for cooperation and for persuading voters to accept even modest future-oriented sacrifices.

Then came the renewable energy revolution. Solar and wind power have become cost-competitive with fossil fuels — they are, in particular, clearly cheaper than coal. Huge progress in batteries has rapidly reduced the problem of intermittency (the sun doesn’t always shine, the wind doesn’t always blow.) There’s now a clear path for a transition to an “electrotech” economy in which renewable-generated electricity heats our homes, powers our cars, and much more.

Conclusion: very pop– wait. It seems that the Trump administration has decided to block or roll back this transition that would benefit the planet, ensuring that the US will be left behind in global competition. Oh well.

ICE detention centers:

Communities across the country have been shocked to learn that DHS wants to use warehouses in their towns for detention space amid the ongoing immigration crackdown.

Conclusion: as popular as the plague. And speaking of…

The bipartisan American investment, which the Trump administration led, was absolutely key to containing a horrific global pandemic which could have been exponentially worse without the stunning accelerated development of mRNA vaccines — one of the great public health triumphs in modern history. But this miracle cure was only the beginning. The massive investment in mRNA opened doors to numerous other medical advances…

These mRNA advances would obviously benefit people in the United States, who would be much less likely to die of cancer, flu, pandemics, and a range of other illnesses.

Conclusion: popular, life-saving, beneficial across borders and populations. Unfortunately, also vulnerable to disinformation by cranks and malefactors willing to lie to enrich themselves and endanger others.

So even among this small variety of issues, clear policy preferences can be sorted. Public opinion has a role to play, and it’s especially important in the face of corporate media with thumbs on the scale, keeping unpopular issues and policies in a kind of eternal toggle state where the jury is still out. These should not be avoided. Look for candidates who run toward your preferences. Some might even already be there.

Fake intelligence not intelligent

Similar to the junk science being peddled at present to torture parents by blaming autism on Tylenol – as in, it would be a terrible and cruel metaphor except that it's actually happening.

Transpose that idiocy (generous interpretation) onto a much larger scale and you have the new dance craze known as AI. Well, the media is dancing, nonstop.

You can follow the money, and it just doesn’t make sense.

Check out the imagery, actually do not do that. It started as slop and it’s getting worse.

But science!, one might say. Surely, there are infinite uses! And there may be some, for data-set analyses on a massive scale – finding exoplanets and folding proteins. And yet, if we return to the most commonly propagated use case and raisin debt of the whole monstrous waste of natural resources as well as cash, it's fake, broken turtles all the way down:

Gu and his team asked OpenAI’s ChatGPT, running on the GPT-4o model, questions based on information from 21 retracted papers on medical imaging. The chatbot’s answers referenced retracted papers in five cases but advised caution in only three. While it cited non-retracted papers for other questions, the authors note it may not have recognized the retraction status of the articles. In a study from August, a different group of researchers used ChatGPT-4o mini to evaluate the quality of 217 retracted and low-quality papers from different scientific fields; they found that none of the chatbot’s responses mentioned retractions or other concerns. (No similar studies have been released on GPT-5, which came out this August.)

The public uses AI chatbots to ask for medical advice and diagnose health conditions. Students and scientists increasingly use science-focused AI tools to review existing scientific literature and summarize papers. That kind of usage is likely to increase. The US National Science Foundation, for instance, invested $75 million in building AI models for science research this August.

“If [a tool is] facing the general public, then using retraction as a kind of quality indicator is very important,” says Yuanxi Fu, an information science researcher at the University of Illinois Urbana-Champaign. There’s “kind of an agreement that retracted papers have been struck off the record of science,” she says, “and the people who are outside of science—they should be warned that these are retracted papers.” OpenAI did not provide a response to a request for comment about the paper results.
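The quality gate Fu describes isn't exotic, either. Here's a minimal sketch of what retraction-as-quality-indicator could look like, with entirely made-up DOIs and helper names – an illustration of the idea, not any real tool's implementation:

```python
# Hypothetical sketch: screen a list of candidate citations against a
# known set of retracted DOIs before a tool surfaces them to a user.
# The DOI strings and the RETRACTED_DOIS set are invented for this example.

RETRACTED_DOIS = {
    # In practice this would come from a maintained source such as the
    # Retraction Watch database; these entries are placeholders.
    "10.1000/example.retracted.001",
    "10.1000/example.retracted.002",
}

def screen_citations(dois):
    """Split candidate citations into usable papers and flagged retractions."""
    usable, flagged = [], []
    for doi in dois:
        (flagged if doi in RETRACTED_DOIS else usable).append(doi)
    return usable, flagged

usable, flagged = screen_citations([
    "10.1000/example.ok.123",
    "10.1000/example.retracted.001",
])
print(usable)   # ['10.1000/example.ok.123']
print(flagged)  # ['10.1000/example.retracted.001']
```

A set-membership check. That's the bar the chatbots are clearing in zero out of 217 tries.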

Quality indicators. Inventing a need for the very things the new thing told us would no longer be necessary. Truly the wave of the future.

Be skeptical. Don’t abandon the ability to discern just yet.

Carrying the water [away]

The metaphors become really complicated at this level, given the thirsty water requirements of LLMs. But give Bloomberg its due for the most succinct cut-line in the history of such things:

It cuts way down past the chase, to the quick, and presents what seems an unlikely reveal, inevitable as it may be. We can be relatively sure that neither Fallon nor Google is ashamed to be called out like this. And Photographer: Google really adds that special something.

The clown show is hard, one would imagine. When making people laugh is what keeps the audience coming back, eventually the comedian will become a water carrier for the status quo. It’s the raisin debt of every influencer, about which they are quite open. The question is what it does to us and everything around us, shaded in this light, as it were. The quick can still burn, if the numbness isn’t total.

Pay attention to what ‘becomes the norm.’ It’s certainly not as passive an activity as the construction suggests.

When will you know

Google anything and you’ll see what they’re up to, with the “AI” results pushed up top. Scare quotes for reasons but really, the Google is doing a weird thing to the internet by this strategy – and they know they’re doing it.

But when will people realize it? When will they know?

Because the appearance of “AI” in expected places has already become… expected and rather commonplace. And this is what they at the Google understand well, that humans get used to stuff. However, now, you must look past these top results to find actual websites, with real information from people trying to provide it. People trying to find you – or have you find them. That’s one reason they have a site. I know, it sounds trite.

But those sites are being buried beneath these “AI” results. How concerned should you be? Should you care?

So, context: OpenAI has unleashed all these free products – models like ChatGPT, plus several iterations you can pay for – in order to train us to use and become dependent on their products. But they need to make money. Lots of money, and fast.

So we can know that, should we so choose. It’s expensive to run these things – both in financial and environmental costs, and the information they provide becomes degraded rather quickly – a thing humans still readily recognize [they call that foreshadowing in the biz].

These tech companies realize all of this, plus the fact that the media does all of their PR for free. And yet they are still burning through cash, mostly, it seems, on the vibes that people aren’t noticing the sixth finger or the clunky syntax, and will grudgingly embrace, er, accept most of it with a kind of dull fascination.

Are you overwhelmed? Is it all so little that it seems too much? What if there was a way to sharpen your fascination instead?

 

Stunning-Kruger-incidence

This is perhaps over-determined, but how were we to know? Is it just the mildest coincidence that, at the very moment critical thinking skills are most needed, a mysterious and mostly useless tool is helping us file down any remaining sharp points and edges?

A new paper from researchers at Microsoft and Carnegie Mellon University finds that as humans increasingly rely on generative AI in their work, they use less critical thinking, which can “result in the deterioration of cognitive faculties that ought to be preserved.”

“[A] key irony of automation is that by mechanising routine tasks and leaving exception-handling to the human user, you deprive the user of the routine opportunities to practice their judgement and strengthen their cognitive musculature, leaving them atrophied and unprepared when the exceptions do arise,” the researchers wrote.

I’m convinced that key ironies need to be mandatory elements of all strategic planning documents going forward, enumerated AND annotated. Making dumb dumber and lazy lazier is sufficiently opportune that making us softer and doughier, paired nicely with a ’54 magnum of News You Can Trust and a much more recent vintage of doing your own research, births the inevitability of powerlessness. Aside from the button that releases the treats, of course.

The charge is that the hard work of cowing a populace into submitting to – into not noticing – authoritarianism is far easier than imagined, especially when people allow themselves to be confused about the difference between important things and trivialities. When you’re not sure how to watch out for what you don’t know you need to watch out for, please note the lack of passive construct before proceeding.

Image: Discreet nose. Fruity. Smoke. Suave and rounded on the palate, almost sweet.

Are greenhouse gasses actually a delicious dessert topping AND a floor polish?

NYT runs an ad, er, sponsored content article about AI and Hollywood without once mentioning water or energy usage.

Can’t honestly quote it because it’s so cheerlead-y all the way through, it doesn’t seem to have any other point – and yet it leaves out so many. And Tom Hanks’ concerns about his estate are simply adorable.

The energy requirements of super-computing aren’t just downsides to be footnoted; the whole enterprise runs on them. This weird thing no one seems to actually want isn’t possible without massive electricity consumption. See also: bits o’ coin.

Image: Inadvertently apropos actual article at Bloomberg today.

If it has sentience

it’s being used to protect us? Now that is thoughtful:

Imagine a medical-advice chatbot that lists fewer diseases that match your symptoms, because it was trained on a narrower spectrum of medical knowledge generated by previous chatbots. Or an A.I. history tutor that ingests A.I.-generated propaganda and can no longer separate fact from fiction.

Just as a copy of a copy can drift away from the original, when generative A.I. is trained on its own content, its output can also drift away from reality, growing further apart from the original data that it was intended to imitate.

In a paper published last month in the journal Nature, a group of researchers in Britain and Canada showed how this process results in a narrower range of A.I. output over time — an early stage of what they called “model collapse.”
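The "copy of a copy" dynamic is easy to demonstrate on a toy scale. Here's a sketch of the general phenomenon – not the Nature paper's actual method – that repeatedly fits a simple model (a Gaussian) to samples drawn from the previous generation's fit, and watches the spread narrow:

```python
# Toy model-collapse simulation: each "generation" trains (fits a mean and
# standard deviation) on synthetic data produced by the previous generation.
# Over many generations the fitted spread shrinks toward zero, the
# narrowing-of-output effect described as an early stage of model collapse.
import random
import statistics

random.seed(0)

def one_generation(mu, sigma, n=50):
    """Sample n points from the current model, then refit mean and spread."""
    samples = [random.gauss(mu, sigma) for _ in range(n)]
    return statistics.mean(samples), statistics.pstdev(samples)

mu, sigma = 0.0, 1.0          # generation zero: "real" data distribution
initial_sigma = sigma
for _ in range(500):
    mu, sigma = one_generation(mu, sigma)

print(initial_sigma, sigma)   # the fitted spread collapses toward zero
```

Nothing nefarious is required: the fitting step systematically loses a little of the tails every generation, and there is no fresh real-world data to restore them.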

Apparently, visual artists have been attempting to poison the models for a while now, to the point where they can’t tell the difference between a cat and a cow. Turns out even in Plato’s Cave you need people who know things.

But using itself to replicate itself is, shall we say, projecting deformity.

Hapsburg AI, indeed.

Image: Based on research by Ilia Shumailov and others.

Natural selection

It’s important to step back for a moment and consider the scrum from which the hype around Artificial Intelligence arises.

Even without casting [m]any aspersions on the tools as they are bandied about – and there ARE documented, purposeful uses for crunching data with super computers, from folding proteins to finding exoplanets; real stuff and revolutionary for these fields – the general rush to embrace AI for all sorts of, let’s say, less purposeful applications should be acknowledged.

After decades of artificial sweeteners, fabrics, food, and foliage – and of course the accompanying devastation of health impacts and pollution from plastics, PCBs, and many more – a noticeable shift toward the all-natural, hand-selected, bespoke, organic, non-invasive ensued, at least in the marketing materials. This acknowledgement, more human-centered, initially had a kind of desperate last-gasp tone that morphed into a realm of preference, if not elevated choice. Thanks, branding!

But it was more than that, and the shift itself coincided with a growing awareness about the dangers of this fakeness and its seamless integration into the activities as well as the mindset that led to and accelerated global warming.

So, now – if you’re keeping score at home – because some of our overlord disruptors in Silicon Valley need to get in on the ground floor of the next new thing, we’re ready to wreak further devastation on the information and images we use to navigate the world. It’s not enough to use the verb ‘consume.’ Once we began to use the word and consider ourselves consumers and now just customers instead of citizens, students, patrons, whatever, everything else became easier. And by everything else, I refer to most things unpleasant, empty, lesser, vapid, wasteful of your time, and detrimental to your heart. Yes, doesn’t that sound quaint. Your heart, come now! C’est drôle.

It’s not that the next new thing could destroy us, but that we are so happy to play our part in the destruction. Suddenly we’re helpless, watching another dynamic seize control of how we navigate the physical world as humans. You need not be an AI skeptic to be a tiny bit underwhelmed by that prospect.

The next new thing after this (not investment advice!) will surely consist of selling us back the key to imagination(tm) we somehow lost because everything is fake.

We worry about AI taking jobs but do our part in cheer-leading the takeover, in wonder no less at the ease with which it all happens and the productivity gains sure to follow. In this senseless meandering from one shiny thing to the next, AI might appear to be just another trend we might try, even get used to. Meanwhile, our only job ever has been to discern not the good from the bad, but the real from the fake.

Natural selection, by humans. Darwin should have been more specific.

And by the way, I’m not at all amused by the extent to which this all rhymes with the original rationale I presented for the green blog, oh so [no that] many years ago.

Fighting emissions with AI

Fossil fuel-derived emissions, that is. Ahem.

So… some of the coverage of COP28, when it’s not debating whether science supports eliminating all fossil fuels, has of course focused on our latest and shiniest of objects, AI.

Artificial intelligence has been a breakout star in the opening days of COP28, the United Nations climate summit in Dubai, United Arab Emirates. Entrepreneurs and researchers have dazzled attendees with predictions that the fast-improving technology could accelerate the world’s efforts to combat climate change and adapt to rising temperatures.

But they have also voiced worries about A.I.’s potential to devour energy, and harm humans and the planet.

We should just go with ‘machine learning’ but that train has apparently left the barn, been sold for parts and reassembled as an uncanny train. But there is a direct conflict with using inordinate computer power to push giant algorithms to solve immense problems, namely the word ‘power.’

Leaders at the companies developing A.I. technology have already cautioned that it could someday pose a risk of extinction to humanity, on par with nuclear war. Researchers at COP28 have focused on a different risk — that the computing power required to run advanced A.I. could be enormous. That electricity appetite could send emissions soaring and make climate change worse.

A peer-reviewed analysis published in October estimated that A.I. systems worldwide could use as much energy in 2027 as all of Sweden. That would almost certainly add to emissions, even though countries are lagging on their pledges to cut them. (A Boston Consulting Group study for Google also noted that powering A.I. would quite likely require vast quantities of water and produce an increasing amount of waste.)

So, here’s your query, Alexa/Google/Siri: if super-computing will require all of the energy we produce – what is energy or super-computing for?

Image: remember Fantastic Contraption?

No new shows

Another episode in the continuing series ‘what does green mean?’ Ahem.

And a sub-theme of what does the Screen Actors Guild strike have to do with sustainability – in the business sense, everything. Every. Little. Thing.

The issues of the strike might simultaneously seem clear and be difficult to parse, especially when the sides are show writers, actors, and creators versus the studios. One might think they would be able to work in concert, at least for the sake of self-preservation. But panning out just a little, the sand in the gears becomes a bit more apparent. From the third link above:

If you read any of the business, publishing or entertainment press you’ll see stories about hard times in streaming world. This means Netflix, Amazon Prime Video, Max, Hulu et al. This is undoubtedly true. You’ve likely seen this in the rising prices you pay and the declining offerings your subscription gets you. I don’t write to dispute any of this. But it’s nothing new under the sun. It is more or less exactly what we’ve seen in the digital new[s] industry. The same pattern.

Entrants raise large sums of money (or use cash on hand from other business lines) and then spend substantially more than your subscription merits. They lose money in order to build market share. At some point the industry becomes mature and then they have to convert the business to one that can sustain itself and make a profit. That means substantial retrenchment. Inevitably that means spending less on the product and charging you more.

Another way of looking at this is that the product as you knew it was never viable. You were benefiting from the excess spending that was aimed at building market share. Now the market is saturated. So that era of great stuff for relatively little money is over. At a basic level what many of us enjoyed as a Golden Age of TV was really this period of excess spending. It was based on a drive for market share, funding lots of great shows with investments aimed at building market share.

Very important to realize that, as Josh points out, streaming media is not a viable business. Without transparency and the upfront, continual re-investment in creative, there is no model, because there is no business. The streaming services don’t own anything – they have platforms and partners. One set of partners is now standing up for themselves and pointing out something very important to us and to the tech companies. If we will listen. World domination or bust is a faulty Silicon Valley idea and a very costly reality. Maybe they’ll make a show about that. Maybe that’s what they’re doing. Don’t touch that dial.

Image: SAG-AFTRA president Fran Drescher, left, takes part in a rally by striking writers and actors outside Netflix studio in Los Angeles in July. (Chris Pizzello / Associated Press) via LA Times