6 + 3^2 * 4 (second post)

In my first post, I said “hello, world!” It’s the traditional first program in a new language, and so it seemed appropriate for a first post.

Honestly, though, there’s something kind of weird about the “hello, world” convention.

It seems so friendly and inviting, doesn’t it? Perfect for a student’s first computer program. The computer is your friend. It just said “hello” to you!

And the computer is more than just a friend. It’s an innocent little being. It barely knows anything. It only just said its first words, as if waking up to the new world around it.

Hello!

Hello, world!

Hello, new world!

But the computer is not really talking to you. All you did was print some characters to a screen, and maybe that tugged on the right memories, fired the right synapses, released the right chemicals — and now, you feel nice and fuzzy.
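The whole “conversation,” after all, is a single print statement. Here’s the entirety of it in Haskell, for example (a minimal sketch; most languages look about the same):

    -- print a fixed string, then exit: that's the whole "greeting"
    main :: IO ()
    main = putStrLn "hello, world!"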

In some ways, this seems like a good approach. Learning a new computing language, especially your very first one, can be a daunting task. Gatekeeping can be a problem in the field in general. Why not make things friendly and inviting?

The “hello, world” program was created in a mostly pre-“AI” world. In the 1972 “hello, world” tutorial, we did not have ChatGPT and its ilk to contend with. But in an increasingly “AI”-hyped world, where computers are designed to imitate (or replace) human beings, and where we are asked to trust and even empathize with this technology, things start to get just a little bit icky.

In his essay in the anthology Possible Minds, Daniel Dennett says of so-called “AI” that “we’re making tools, not colleagues” and that “humanoid embellishments are false advertising — something to condemn, not applaud.”

The ways in which we make computers look less threatening and more human can trick us into trusting them. This buys into the “AI” hype. ChatGPT is your friend, right?

No, it is not.

Moreover, forgetting that this is a tool (albeit sometimes an impressive one) leads us further toward a world where we value machines over people, and where we become increasingly dependent on them in our everyday lives.

Dennett refers to UI humanization as “Disneyfication.” The term predates Dennett’s use: it was probably first used in 1959, and Alan Bryman later popularized the closely related “Disneyization” in The Disneyization of Society (2004). But it’s appropriate for what’s happening. Computers are cute and friendly. They’re little babies, just learning about the world for the first time.

Except, they’re not. They’re tools, and they’re not even very good tools all of the time. When we attribute human characteristics to them, we trust them too much. When we see them as little babies, we forgive them too much.

Do you remember Microsoft’s doomed chat-based “AI” Tay? It was supposed to learn from user input on Twitter, but the bot quickly began echoing racist content fed to it by trolls. There’s a Wikipedia article that gives a summary, but content warning if you want to dig into the specific things it output.

But the thing is, these tools or models are more than just a racist bot (which is honestly bad enough). This technology, moreover, drives life-changing decisions for human beings. Labor, and the ability to earn a living as a human being (see the recent writers’ and SAG-AFTRA strikes). Privacy, and the freedom from mass surveillance (see the increasing camera presence in New York subways). Security, and the ability to live your life without undue arrest or biased bail or sentencing decisions (see this ProPublica investigation of racial bias in recidivism risk models). I could keep going, but I hope you get the idea.

The rise of a data-driven society, and the “AI” hype, have very real consequences for human beings. We should be highly critical of this technology. The default stance should be skepticism, not trust.

And skepticism is good for engineering, actually, if we want to get down to it. You’d want to “kick the tires” on a new car design before driving in it. Whether as engineers or as consumers of engineered tools, we should not humanize or implicitly trust technology.

So, to return to the point: It struck me how odd it is that this early tradition in computer programming fell into the same trap we so often do as human beings. It anthropomorphized the machine.

Obviously, this program was created in a very different world from the one we now find ourselves in, and I make no claims about the intentions of the original program’s creators. This is simply an observation. The impulse to humanize seems an easy hole to fall into, and perhaps requires active effort to avoid.

Should we have a different tutorial program, then? Perhaps printing the result of a simple mathematical calculation rather than a conversational phrase? A reminder of the machine’s computing power rather than “false advertising” of empathy? I’m not sure. Some functional languages do use this kind of mathematical first program as an alternative to “hello, world” (for example, the Haskell tutorial calculates 6 + 3^2 * 4 as a first program). Maybe other language tutorials should follow suit.
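Run in GHCi, Haskell’s interactive interpreter, that first program is nothing but honest arithmetic (a minimal sketch; note that ^ binds tighter than *, which binds tighter than +):

    ghci> 6 + 3^2 * 4  -- 3^2 = 9, then 9 * 4 = 36, then 36 + 6
    42

No greeting, no fuzzy feelings. Just a tool doing what tools do (and, fittingly for a first program, the answer is 42).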

These are simply my own feelings on the topic, though. Who am I? Just a human, I guess.

In any case, stay safe out there, friends. If you want some advice: Be skeptical, remember that “AI” is not your friend, and know that you absolutely do not need to pet the Boston Dynamics “dog” things.

Since we’re at the end, here’s a photo of a real cat:

[Image: A brown tabby cat looking up at the camera. He’s on a white rug next to a wooden cabinet.]
