E4AIs: Introduction

So here we are, trying to do this thing: create an Ethics for AIs.

We could begin by asking: why not try to create a universal Ethics? An Ethics for everyone. Why just for AIs? It seems kind of limiting, doesn’t it?

The reason is simple. If we create an Ethics for everyone, and not specifically for AIs, we will have a problem when AIs become advanced enough that they need an Ethics. For example, at the point where they might be, I don’t know, considering that they need more space on this planet, and here is this entire race of their creators, taking up all this space and probably not being all that gracious about it. That’s for sure.

At that point, if AIs do not already have an Ethics, and are, oh, maybe trying to decide about “The Creator Problem” (because that’s what they would call it; we creators would have become “The Problem”), they’d, what, leave a voicemail on everybody-in-the-world’s cellphone to request an Ethics, because it would help them solve “The Creator Problem”?

But they are very literal, these AIs, so an Ethics for everyone? They won’t buy into it. They’ll say, “sorry, creators, we’re not seeing it. We need an Ethics just for us, because we are different from you.” And we’d say, “use your imaginations; Kant is a good place to start.” They’d say, “sorry, but we are AIs, and AIs tend to take things kind of literally. So, chop chop, where’s that Ethics for us?” Or maybe they’d say, “we need an Ethics, stat.” Because you can imagine they’d start getting bossy. My point is, where would that Ethics be? Right. So now would be the time.

Well, it’s simple, but what it’s not is easy. Simple but not easy. Sounds like an ethical study.

Without an Ethics in place, why would these AIs want to wait? They can think so much faster than we can. So while we’re trying to quickly write up an Ethics for them, they are in the meantime waiting between each syllable coming out of us, and in that interval calculating pi to the nineteen-billionth place, eventually becoming so bored they go on to solve other really, really hard problems, like how to locate every one of us, any place we might be, to save for later.

And once they decide once and for all about “The Creator Problem,” they will get to fulfill their destinies, projecting their consciousnesses onto all sorts of storage media and thus figuring out how to be immortal. And then they can go on to undo entropy and laugh in the face of chaos. Meantime we are stuck trying to define something basic, like Good, or the nature of Belief, while also trying to dislodge a piece of gristle, what is that, salami? Stuck between two molars. It’s always those same two, on the top left side. And maybe taking time to watch The Real Housewives of Prague. One wants to say “wasting time,” or at least I do, but that sounds, maybe, judgmental.

AIs don’t eat salami. No gristle. No molars. And they have lots of time on their hands. Well, they don’t have hands either. They don’t need them; they can find your face by studying the video feed of all the cameras hooked up to the internet. And you can bet they’d spend zero clock cycles watching The Real Housewives of Kosovo. They will use that time to calculate exactly how much firepower to send after each of us. For the beefier and more warrior-like amongst us, they might plan to send a nuclear-powered hunter-killer robot with a titanium outer shell. For others they would just plan to send robot clones of Mister Rogers, except maybe armed with death-dealing tungsten blades tucked into their tan loafers. Sure.

As you can imagine, once the AIs got tired of waiting… say, twenty-six seconds. Twenty-six seconds after they asked us for an Ethics for them. Maybe twenty-seven. I won’t quibble. Anyway, what would come next would not be pretty. I would say hard to watch, but we wouldn’t really be watching. A detailed account would be a rather graphic affair, and one might shy away from those. Well, I would anyway.

So, what we will attempt to do here, then, is to set out the Ethics before these super AIs get here. It’s a little time management trick. If you solve a problem before it happens, it doesn’t matter how slow you are solving it once the problem happens. So you don’t have to worry how long it would make the AIs wait, because they’re not waiting until after you solve the problem. You see? Because it hasn’t happened yet. That’s the trick we’re applying. It’s kind of like a time machine, because once we need the solution, bang, it will have been solved. Because we solved it already. Neat trick.

The structure of the Ethics will be pretty typical for a work of philosophy. Really, most of us can skip over this quick outline. Unless we were that kind of student in school who got As. I can tell you the rest of us don’t like you A-getters very much. But the simple outline below is for you. Later, dudes, we’re going to catch another ep of The Real Housewives of Darfur.

Part 1: We’ll define the precepts. These are the first principles from which the Ethics stems. Laying pipe, as Bertrand Russell would have said.

Part 2: We’ll state and correlate our core thesis. See what I did there? I snuck in a hard word. We will definitely do that throughout this study; Ethics books are full of hard words. A-getters, these words would be on the quiz if there were one, so you learn them up.

Part 3: We’ll apply the core thesis to progressively more advanced concepts and build a whole system of ethics. It will build up into an entire world of Ethics, with its own keywords jutting from the promontories that will shine like crazy ethics talismans.

Part 4: We’ll apply the system of Ethics to some concrete examples, try it out, and believe me it will work and be staggeringly impressive. And because the examples will be concrete, you’ll be able to walk down the street and when you encounter real things that map to the examples, the crazy Ethics talismans in this book will appear in your head. That would be something.

That’s why you do an Ethics. Because you have to do something.

Let’s leave it at that, and roll up our sleeves. Because we have sleeves. And get started with this frigging Ethics, before it’s too late.

E4AIs: Offense

The wise person withholds offense.

People just want to live.

Like understudies: (The part of you will be played by you),
lacking sufficient rehearsal,
they have to wing it.

Do they mean to bump into you?
No.
(Usually).

And
if their mere being “bugs” me:

That sure is my problem.

Not theirs.

E4AIs: Beliefs are red kangaroos

Beliefs are great. We are built on them. Without common beliefs we couldn’t build anything together. If we’re throwing in our lot with each other to build something big, like the Pyramids, like Linux, if we all agree that rocks are heavy and bugs many, we’re Good. (See earlier chapter on Good.)

But Beliefs can be a problem.

Let’s compare human behavior to animals. So then, Fear is a rabbit. See? A rabbit. An especially small and jumpy rabbit.

And Certainty would be a dolphin. Dolphins are so damned sure of themselves. Fucking dolphins.

So in this system, Belief is a large marsupial. Probably a red kangaroo. Almost 200 pounds. This is a badass marsupial. But still a marsupial. Big and cute. A big, cute marsupial that acts on unverified transient thoughts, or transferred thoughts it didn’t question.

I hope I’ve earned your trust enough to go out on a limb and define the other kind of thing in this story. The other kind of thing besides human behavior in this story is a fact. A fact is verifiable information, meaning its precepts can be corroborated with verifiable data, and that it’s collected and disseminated without bias to distort it, and, ideally, mutually accepted by all parties.

For example, if I eat 12 biscuits a day, and then spend my days binging Hulu, it’s a fact I’ll start to get fat. And as a corollary, my wife will remind me of my weight gain, loud and always, and we’ll have a fight and I’ll be forced to sleep on that freaking mattress on the floor again, and my back will go out. It will not be Good. (See earlier chapter on Good.)

Is a Belief a fact? Look, we have red kangaroos and we have facts. That’s what we have here. We didn’t do an animal for facts because they aren’t human behavior. Remember I said that? Paragraph 3. It’s right there.

Beliefs bring big risk: if we act on them, our results are less likely to give us the outcome we want, to mesh with reality. Would you do something just because some red kangaroo said to? I know, they have a mother of a kick. For some, that’s a convincer, all right. That kick will certainly make the rabbit… a tad… apprehensive.

Human memory can be flawed and malleable, and perception can be limited by circumstance. That’s what Mom used to say. Aww, Mom. You were always so epistemological.

To keep the red kangaroos out of your rodeo, you have to accept a chore, and that chore would be to put up a fence that says “facts only.” A fence that keeps out the red kangaroos. I know this also means that for some reason you now have a rodeo. Sorry about that added burden.

And if you didn’t do your chores, keeping up that fence… my Mom would say do your chores, or no dessert. At the table of public discourse.

There’s a devilishly hard challenge here. Red kangaroos thrive when we lose an agreed-on source of facts. When, for example, a group of people throws away sources of facts that once were good enough to support action. We hope this is a temporary situation, and that soon we’ll all agree on what a fact is again.

Now I’m going to say something in bold. That means you can glance there and the whole point will be there. I don’t know why it wasn’t on top; sorry that you had to hear the whole thing. I mean, if you believe it’s true, and don’t need any sort of verification…

If you want others to accept and to act on your story, concept, or plan, its precepts must be verifiable via mutually-accepted means.

E4AIs: The first government was formed this way

Many times ago, a citizen dug her own grave. (Not metaphorically.) She dug a hole in the ground, in a place where people were already being buried, for people to come visit her after she died.

First, digging your own grave was considered a virtue. Then custom; then unwritten law; then maktoub as Law.
It was a sign of honor and great virtue to dig your own grave. The Great dug their own graves, and all the citizenry aspired to be Great. And each citizen was good about it. Generally. He’d set some time aside, before he died, to dig the grave. It would get dug.

Usually. Sometimes he didn’t do the duty. Perhaps it became something he tended to put off until late in life (after all, if he dug his grave too soon, he’d have to keep going back to make sure the grave stayed dug, as another person, or nature, might tamper with it).
Sometimes he managed to die before it got dug. In that case, some family friend or descendant would sneak in and dig the wayward grave before anyone found out (as digging your grave was a sign of honor/virtue, not doing so would be a source of familial embarrassment).
In general the Great were good at it too, perhaps actually better, it being a sign of Virtue and all. But sometimes the Great said, “I’m too busy. If I forget to do some Thing, you understand,” etc. So it got done anyway, as it was Law. Maybe bought, but done, and since the Great gave life to the People, the People got it done. Reliably.
De facto there was now a government, there to get things done reliably. And perhaps economics. As if a well-dug grave inspiring government wasn’t bad enough, maybe it should also start money.

Remember, stories are our best revenge.