
The human race has gotten its layoff notice, and now we're only training our replacements

Hey there, I hope you got every present you wanted, that all your friends and family are well, and that you have a warm cup of something laced with pumpkin pie spice close at hand. Because this may be the gloomiest, soot-stained story to come down your chimney this holiday season.

Because, not to sound like one of those late-night ads pushing gold coins (or 20 years’ worth of dried food), we’re right on the edge of a disruptive event that threatens to absolutely sink our economy, toss a good deal of society into chaos, and take us swiftly into a new Age. That’s big-’A’ Age. As in the transition from the Bronze Age to the Iron Age. You know, the one that historians still talk about as “the Bronze Age collapse,” in which every major civilization that had defined the previous age either went into steep decline or disappeared outright.

The threat taking us there—and I’m dead serious about this—is not MAGA or QAnon. It’s not even COVID-19. It’s something that slipped out in the last few months that hasn’t received a thousandth of the concern it should be getting. It’s a chatbot.

Over 20 years ago, I worked at a company that happened to be the second-largest landowner in the country, right behind the Union Pacific railroad. In addition to owning a lot of acres, that company had something like 80,000 contracts on land that it leased from the government, companies, and individuals. Those contracts, having mostly been written by guys in cowboy hats and ostrich-skin boots who sidled up onto someone’s porch for a “chat,” were essentially chaos. They could include anything, from demands that you pay for driving on the property, to a schedule of payments for storing things on the property, to fees or adjustments based on anything else you might think of—and some you can’t. Absolutely arbitrary arrangements.

As a result, there was a floor full of people whose entire job was to read these contracts, parse what they said, and put out a continuous flow of checks to all those landowners. Because failing to pay what someone was owed, every month, right down to the penny, could result in losing rights to the land. Which could be a disaster if that property happened to be in the middle of some new project.

Then, over a period of about a year, a team of developers put together a system that could parse those contracts, determine what inputs were required, and automate the process of paying landowners. Within two years, that floor full of people was gone. What had seemed to be a nearly impossible task requiring the full-time attention of dozens of people really came down to about 100,000 lines of code. At the time it seemed amazing. I led that development team, and I can tell you that we thought it was pretty damned impressive work.
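
To give you a feel for the shape of that system, here’s a toy sketch of the core idea once a contract has been boiled down to structured terms. Every name and field below is invented for illustration; the real thing was orders of magnitude bigger, and this is not its actual code.

```python
# Hypothetical sketch: once a contract is reduced to structured terms,
# cutting the monthly check is just careful arithmetic. All names and
# fields here are invented for illustration.
from dataclasses import dataclass, field
from decimal import Decimal

@dataclass
class LeaseTerms:
    owner: str
    base_rent: Decimal                    # flat monthly payment
    road_use_fee: Decimal = Decimal("0")  # per-trip charge for driving on the land
    road_trips: int = 0
    storage_fees: dict = field(default_factory=dict)  # item -> monthly fee

def monthly_payment(terms: LeaseTerms) -> Decimal:
    """Total owed this month, right down to the penny."""
    total = terms.base_rent
    total += terms.road_use_fee * terms.road_trips
    total += sum(terms.storage_fees.values(), Decimal("0"))
    return total.quantize(Decimal("0.01"))

lease = LeaseTerms(
    owner="J. Smith",
    base_rent=Decimal("1250.00"),
    road_use_fee=Decimal("3.50"),
    road_trips=12,
    storage_fees={"equipment yard": Decimal("200.00")},
)
print(monthly_payment(lease))  # 1492.00
```

The hard part, of course, was the parsing that filled in those fields from 80,000 one-of-a-kind documents. Once that was done, the rest was bookkeeping.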

In retrospect, it was trivial.

Almost a decade ago, the popular YouTube channel CGP Grey put out a genuinely terrifying video, “Humans Need Not Apply,” which predicted the inevitable fall of all human work to the combination of “mechanical muscle” and “mechanical minds.”

[Embedded YouTube video: “Humans Need Not Apply”]

Watching that video years later, the first thing that becomes clear is that human beings are really, really bad at predicting the future, and particularly bad at predicting the future of computing technology. This has been true of science fiction and professional “futurists” for decades. It’s still true now. This isn’t all that surprising. Since we don’t understand our own consciousness, it makes sense that we’re generally awful at seeing the possibilities of a different kind of mind. Even the people who have a very good batting average at predicting future tech have been solidly awful about the development of AI.

The biggest issue is that people seem utterly incapable of determining the difficulty of a problem until they try to solve it. Parsing all those dusty old contracts in a way that gave meaningful results and generated accurate payments seemed hard, but it fell to a core programming team of no more than a half-dozen people pecking away without a PhD in sight. Building a fully self-driving car seemed like a given 10 years ago, considering the available instruments and computing power that could be applied to a task humans can do with minimal training. Thousands of programmers, billions of dollars, and a number of horrific accidents later, that problem is still a long way from being solved to the point where people would be happy to kick back and just tell the car “take me home.” It turns out to be what mathematicians would call “a non-trivial problem.”

These spectacular failures have given a lot of people license to sneer at the predictions made in videos like “Humans Need Not Apply,” as well as to dismiss the large number of books, articles, and scientific papers warning about the coming age of general artificial intelligence. Problems that seem simple, even obvious, can turn out to be devilish when trying to turn them into code—especially code that is required to deal with the messy irregularity of the physical world.

[AI-generated image: “Waitress serving strawberry pie to an old man in a Santa suit.”]

However, as we’ve seen recently, some classes of problem that we might have expected to be the last to fall to any sort of machine learning turn out to be much less of a challenge than anyone predicted a decade ago. Here’s an example. The image above is the result of feeding the prompt “A waitress serving a strawberry pie to an old man in a Santa suit” to an AI.

By now, you’ve surely seen dozens, if not hundreds, of examples of AI-generated art from programs like DALL·E 2. As it happens, the one that did this pie art uses a somewhat different generative approach under the hood, but the ability to produce high-quality images that seem as if they might have come from the hand, or camera, of a human artist is impressive no matter how the underlying algorithm works.

This particular example came from an open source program called Stable Diffusion. It’s based on a latent diffusion model, a kind of neural network developed by researchers in Germany. It was trained on hundreds of millions of captioned images scraped from the public web, learning to associate the text in those captions with components of the images. It’s available to use for free at a number of online sites. Unlike most of the other systems, it’s also lightweight enough that you can download the whole thing and run it on your own laptop.
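
“Run it on your own laptop” is not an exaggeration. Here’s a minimal sketch using the Hugging Face diffusers library; the model ID and settings are one common setup I’m assuming, not the only way to do it.

```python
# A minimal sketch of running Stable Diffusion locally via the
# Hugging Face diffusers library. The model ID and options shown
# are one common configuration, not the only one.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # a publicly hosted SD checkpoint
    torch_dtype=torch.float16,         # halves memory use on a GPU
)
pipe = pipe.to("cuda")  # or "mps" on Apple Silicon; "cpu" if you're patient

prompt = "A waitress serving a strawberry pie to an old man in a Santa suit"
image = pipe(prompt).images[0]
image.save("pie.png")
```

On a decent GPU that’s a few seconds per image; on a bare CPU it’s more of a coffee-break affair.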

[AI-generated image: “Robot riding a Vespa through Rome, as a colored pencil sketch.”]

All of these programs have their limitations. Stable Diffusion seems to have a very poor understanding of human limbs and hands. If I showed you the three alternate images it generated in response to the strawberry pie prompt, none of them would be acceptable to illustrate a holiday magazine, and some of them are downright disturbing. Hands merge with plates, fingers articulate like the legs of a spider crab, and in one particular example the man has a dark depression where his eye should be while the waitress’ face is reduced to … I really don’t want to describe it.

But for every miss the program generates there’s at least one image that is utterly convincing. If I happened to be writing a book that involved a robot hero trying to make his escape through the twisting roads of Rome on a “borrowed” Vespa, I’d certainly be happy if the illustrator handed me this image to pop onto a page.

Naturally, artists—actual human artists—are extremely concerned about this sudden proliferation of image generators. While many of the results you see stuck into social media are cranked out from the free version of these programs, almost all of them have pay-to-play versions that are capable of handling much more detailed prompts to produce images that are much, much better than these. Images that look like they were photographed by Richard Avedon or sketched by Leonardo da Vinci.

That “look like” bit is one reason that artists are, rightfully, concerned. If an AI produces a work that looks like it came from the brush of a well-known artist, shouldn’t that artist be compensated in some way? Many of the artists whose styles can be pulled up on command by these programs are long dead, but many of those whose work was part of those millions of images that went into training the models are very much still around. And most are not at all thrilled by the idea of a program that drank in their work only in order to learn how to replace their work.

Are any of these programs capable of producing something that is genuinely new, original, and deeply moving art? Maybe. Maybe not. But then, most artists—whether they make images or words—are just adequate hacks. I am an adequate hack. On my good days.

My first thought in seeing the images from programs like Stable Diffusion wasn’t “gee, that’s going to put a lot of people out of work” but “gee, I’m going to be able to get illustrations a lot cheaper.” But those two sentences? They’re the same thing.

Still, in the best of all worlds, humans and AI can cooperate on creative tasks by leveraging the unique strengths of each. Humans are capable of creative thinking and have the ability to come up with unique ideas and concepts. AI, on the other hand, is able to process large amounts of data quickly and accurately, allowing it to analyze and evaluate ideas in a way not possible for humans. By working together, humans and AI can combine their strengths to generate new and creative ideas that neither could have come up with on their own.

Does that sound like a rosy outcome? That may be because that whole last paragraph was written by an AI. It was the response of a model called GPT-3, exposed through the ChatGPT interface. That paragraph was a response to the prompt “Can humans and AI cooperate on creative tasks?” Sorry that I didn’t put it in block quotes for you, but I wanted to see how many people got to this point before noticing.
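
And you don’t even need the chat window to pull that trick. The same family of models sits behind an ordinary API, so the whole exercise is scriptable. Here’s a sketch using OpenAI’s Python library as it existed when I wrote this; the model name and parameters are typical choices I’m assuming, not whatever ChatGPT itself runs.

```python
# A sketch of posing the same prompt to a GPT-3-era completion model
# through OpenAI's Python library (pre-1.0 API style). The model name
# and parameters are assumptions, not what ChatGPT itself uses.
import openai

openai.api_key = "sk-..."  # your API key goes here

response = openai.Completion.create(
    model="text-davinci-003",  # a GPT-3-family completion model
    prompt="Can humans and AI cooperate on creative tasks?",
    max_tokens=200,            # cap the length of the reply
    temperature=0.7,           # a little randomness, a little polish
)
print(response.choices[0].text.strip())
```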

What my pal the AI suggests is exactly what a lot of pundits are saying at this point. Programs like GPT won’t replace writers, just as programs like Stable Diffusion won’t replace artists. They’ll just be new tools as we usher in the happy age of human-AI cooperation! It’ll be like moving from the fountain pen to the typewriter, or putting down those pastels to use a digital pad.

Except it won’t be.

The transition from pen to typewriter to word processor may have liberated writers from ink-stained cuffs and the joys of Wite-Out, but the fundamental writing process didn’t change. This is a change to the fundamental process. On the most basic level.

I’ve been here before. Many of us have. It’s that place where you find yourself training your own replacement. Hey, GPT, is that what we’re doing here?

GPT-3: It's important to remember that AI is a tool that is created and trained by humans. While AI can perform certain tasks more efficiently than humans, it is not capable of independently making decisions or understanding the world in the same way that humans can. AI relies on humans to provide it with the data and instructions it needs to function, and ultimately it is up to humans to decide how to use AI and what its capabilities will be. So, in a sense, we are training AI to do certain things, but it is not a replacement for human intelligence.

Somehow, I am not reassured. And oh yeah, before you dust off the “programs need programmers” response, note that GPT can already write fully functioning code. Somewhere there is a whole floor full of people who used to read contracts who are looking at this with satisfaction.

A couple of months ago, Cleo Abram put together this video in which she and an artist friend took turns using a generative art program.

[Embedded YouTube video]

Abram is a solid communicator, both as a writer and a video host, and she seems to get the possibilities of this technology. However, she doesn’t go even one minute into this video before noting that the images produced by the artist friend are better than those produced in response to her own prompts. The reassuring message: “it doesn’t seem like AI is leveling the playing field.” Except that the difference here seems to be just that the friend took a moment to study the way prompts to the system can be elaborated on to produce different styles. That’s a difference in “level” of five minutes’ reading. Not decades of hard-won skill.

Let me show you one more trick from ChatGPT. This is just one paragraph from its response to the prompt “write an essay about the book ‘A Wrinkle in Time,’ seventh grade level.”

At first, Meg is unsure of herself and doesn't think she can do anything to help find her father. But with the help of her new friends, she learns to believe in herself and her own strengths. As they travel through the universe, they face many challenges and dangers, but they never give up.

Think for a moment about why teachers ask students to write essays. Sure, you want to know that they actually read the book, but you also want to see that they’ve learned the concepts in the book, that they’ve grasped the themes, that they understand what’s important and can give back that information in a way that shows they’ve internalized what the book had to offer.

It’s easy to dismiss ChatGPT as “just another chatbot.” But what’s the difference between being able to fake understanding and actual understanding? If you have an answer to that, please call your local university, because that’s a problem that’s been kicked around for millennia.

Some of you are now thinking “Well, you’re only really concerned about general purpose AI now because it’s coming for your job, when you thought you had a lot of years left.” To which I say, Damn Straight. Still, don’t expect it to be too long before you have your own reason for concern, and don’t get too comfortable. It’s coming for every job, sooner than you expect, and not in the order you expect.

Many of those general purpose robot thoughts in the “Humans Need Not Apply” video still seem silly. They will, right up until they don’t. We are this close to having the entire foundation of our economy pulled out from under us, and facing a whole different system with absolutely no plan for how to deal with it.

Because things like Stable Diffusion and ChatGPT? They’re just a crack in the dam. There’s a flood on the way.

In the early days of atomic weapon development, researchers engaged in a practice called “tickling the dragon’s tail.” It involved sitting at a table with a plutonium core and a neutron-reflecting shell, separated by only a few millimeters. Then they pushed those pieces around with the tip of a screwdriver, while a Geiger counter let them know how close they were to critical mass.

This is where we are with AI right now. Only there’s no Geiger counter.


On top of releasing “demos” that are already threatening to upend the economy in a way absolutely no one is prepared to address, people at half a dozen different companies and groups, as well as literally thousands of developers working with publicly released code, are months, or weeks, away from systems that can not only generate photorealistic depictions of any event, but also modify their own code in non-trivial ways.


On May 21, 1946, Dr. Louis Slotin was lowering a beryllium reflector over a plutonium core when his screwdriver slipped. There was a flash of blue light and a wave of warmth that rippled through the room. Slotin ripped the two pieces apart with his bare hands, but he knew it was already too late. He died nine days later, his body practically dissolved by the burst of incredibly intense radiation. Even so, that was a lucky outcome. Because it was only Slotin, not everyone in a ten-mile radius.

Luck is not guaranteed.
 