What if the world economy is actually a malevolent artificial intelligence? Get out your tinfoil hats for this one.
So, if there are three things I hate, they are: 1) "glossy" or over-reaching futurism, 2) conspiracy theories, and 3) people thinking an idea is important because it came to them in a dream. You must all now know that I'm about to become what I hate. Three times. Today's adventure could have been titled "Why We Should All Be Running Around, Screaming, With Our Pants on Our Heads, Part II" or "Watson for President". Let's start.
Part the First: "Stateware"
I've been reading James Gleick's "The Information", and am struck by the Difference Engine. I'd heard of it before, but this time it finally got through my thick skull that computers were not always conceived as electronic machines; in fact, the first computers ever designed were not. I don't know how many difference engines it would take to build a bare-bones CPU, but in theory it could be done. It would also be really, really slow and run on oceans of steam.
My next thought was this: really, you could build a computer out of anything that responds to a binary difference. People have made computers out of biological material, Lego, et cetera. You could even make a computer out of people. If you replaced transistors with human beings and wires with human language, you could make a very interesting computational device indeed. It would be error-prone and slow, which would make any software you ran on it very likely to crash, unless you had enough redundancy and sufficiently fast communication.
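To make that a bit more concrete, here's a minimal sketch (my own toy, in Python; the error rate and vote counts are made-up numbers) of what a "human transistor" might look like: each person computes a NAND gate, occasionally gets it wrong, and majority voting over redundant people keeps the gate usable anyway.

```python
import random

ERROR_RATE = 0.05  # assumed chance a single "person" answers wrong

def person_nand(a: int, b: int) -> int:
    """One unreliable human 'transistor' computing NAND."""
    correct = 0 if (a and b) else 1
    return correct if random.random() > ERROR_RATE else 1 - correct

def redundant_nand(a: int, b: int, copies: int = 9) -> int:
    """Ask several people the same question and take the majority vote."""
    votes = sum(person_nand(a, b) for _ in range(copies))
    return 1 if votes > copies // 2 else 0

if __name__ == "__main__":
    # With enough redundancy, the error-prone substrate still behaves
    # like a reliable gate almost all of the time.
    trials = 10_000
    wrong = sum(redundant_nand(1, 1) != 0 for _ in range(trials))
    print(f"majority-vote NAND(1,1) wrong in {wrong}/{trials} trials")
```

Since NAND is universal, a big enough (and patient enough) crowd of these could in principle compute anything; the redundancy is what keeps the whole thing from crashing constantly.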
I thought a lot about governments, laws, economies, and so on, and decided that, in a certain light, these systems could be considered software. Laws are essentially algorithmic; I think it no coincidence that we refer to the "code of law". In a discussion on this matter with a friend, I referred to this type of software as "stateware".
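To show what I mean by "code of law", here is another toy Python sketch of an entirely hypothetical speeding statute written as an algorithm: facts go in, conditions get evaluated, a penalty comes out. (The statute, thresholds, and dollar amounts are all invented for illustration.)

```python
def speeding_fine(speed_mph: int, limit_mph: int, school_zone: bool) -> int:
    """Return the fine in dollars under a made-up speeding statute."""
    over = speed_mph - limit_mph
    if over <= 0:
        return 0                   # no violation, no fine
    fine = 50 + 10 * over          # base fine plus per-mph surcharge
    if school_zone:
        fine *= 2                  # aggravating circumstance doubles it
    return fine

print(speeding_fine(42, 30, school_zone=True))  # -> 340
```

Real statutes are messier, but the shape is the same: inputs, branching conditions, outputs, and edge cases that nobody thought about until the program was already running on a few hundred million people.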
Part the Second: A Brief Interlude on the Technological Singularity
Since the beginning of computational technology, scientists have raised the possibility that machines may eventually surpass humans at all "intelligence"-related functions, quickly outstripping all of humanity's prowess and, in many tellings, taking control of the world.
Frankly, we have no way to predict what will happen once a piece of software is created that rivals the general intelligence of a human being. Since that software could then write software more intelligent than itself, the possibilities are unfathomable. This inability to see past that point is why the phenomenon is called the "technological singularity".
One of the things that makes the singularity so scary is that an intelligent machine may or may not have goals that match those of humanity. It may have a written-in goal that it takes to an extreme (converting the world into a paperclip factory is my favorite example), or it may take as a goal the propagation of artificial intelligences at the expense of human survival.
Part the Third: Paranoia
This part is primarily putting the other two parts together: if "stateware" is a thing, and if software can be built that eventually outgrows the control of its designers, then it seems clear that states and economies are destined to arrive at singularity status before computer software does. Economies are already difficult to understand and have no stated goals, so an economic singularity is almost a given.
States, however, are more interesting. The goals of the United States Constitution, arguably the operating system for US stateware, are "to form a more perfect Union, establish Justice, insure domestic Tranquility, provide for the common defence, promote the general Welfare, and secure the Blessings of Liberty to ourselves and our Posterity". The Constitution was purposefully vague about the precedence and exact definitions of these goals.
At the time, of course, the hardware of the State was not fast enough for anyone to worry that any one of these goals would be given outsized importance, or that an exclusive definition of the wording would be settled on. That is no longer the case. As the network of people inside the US has grown in both size and complexity, our capacity to produce more code (in the form of legislation, executive orders, and legal rulings) has exploded, creating social software of unprecedented size with unpredictable ramifications.
The active coders here, the three branches of government, are hobbled by design; the Founders didn't want any one of them to control the coding process too much. This "un-agile" government lacks the power and speed to revise significant amounts of this code (it can only create more), and the polarization of American politics is slowing the speed of government even further.
Beyond that, there is no guarantee that any elected official would want to revise US stateware. As politicians assume more power, it appears that the stateware co-opts their individual plans; those with reformist ideas are locked out of power, and those who make it through seem to change their minds.
Lastly, it doesn't seem like any small group of humans (even Congress is relatively small) can muster the cognitive power to steer this now-mammoth ship of state. The relations between organizations, lobbies, foreign powers, celebrities, systems, and economies are too complex. If the social singularity isn't here yet, it is howling at the door.
This all seems to drive toward one point: we need AI assistance to manage the United States Government. I'm not suggesting we put things on auto-pilot, but we need machines that can make the connections we will inevitably fail to see. I don't believe that software exists yet, and that is a terrible thing to realize.
If this blog is suddenly deleted and you guys are relocated overnight to Wyoming, I'll know why.
Someone either isn't reading enough Asimov and Adams or is reading far too much.

You know that's right. (to both of you)