
Tuesday, April 28, 2015

Why we ask "Why"

This morning I said to my 5-year-old daughter, “We will take the car to where we’re going and, when it’s time to go home, I will run the way back.” She instantly replied, “Why can’t you run there instead of back home?” This is an everyday example of intelligence, of the questioning mind. This is refusing to accept data “as it is”. It is an effort to make subjective sense of reality. It is the mind’s inherent creating of knowledge.
Artificial Intelligence will probably not be programmed to do this. Instead it will register my sentence and compare it to stored data so as to understand what I say. My daughter does the same, but with the addition of asking for more data so she can frame my sentence into a wider pattern of understanding/meaning that is useful to her. She wants to understand not only what I say, but also why I say that instead of the reverse, i.e. “I will run there and we will drive back together”.
This questioning is not random, but intelligent. Its function is to fill in gaps in her network of relational, not factual, understanding. She probably already knows enough about going by car vs. taking the bus, so she doesn’t ask “Why take the car?” She knows our dog doesn’t like to ride the bus, and that our destination doesn’t have a bus stop. She also knows why I run; I love to run, so there’s no need to ask about that. What she doesn’t know is why “back home” is better than “there”. Maybe there’s a good reason for that, maybe not. Either way, she will pick my brain on the subject, she will learn something about my thought process, and rest assured, she will make use of that information sooner or later. She also learns about arbitrarily applicable relations, as in “doing it this way” is better than “doing it that way”. That is of course not true in the objective sense, but it is true to me. That is the way an intelligent/subjective mind deals with all of experience. To know this is awakening to reality as it is. Not to know this is intelligence ignorant of its own nature.
Kids do this why-ing a lot. This is how they learn, by constantly questioning input. Kids have a habit of questioning everything, to the point of being a nuisance. Sometimes we just say “Because I say so”, just to end the interrogation. This is when we introduce the powerful concept of rule-governed behavior. That is when we program the mind to accept verbal rules, system-language premises or axioms, that are based on pragmatism rather than objective reason. Kids will, at an early age, learn that following such rules is often beneficial, while breaking them can have negative consequences. The parent will get mad if the kid keeps on asking “why, why, why”, screaming “shut up and go to your room, I’m tired of you”. So we learn to balance the questioning of input (to build intelligence) with accepting input as it stands (to not be punished/rejected). This balance can be observed on a larger scale in society, where universities are often the place where disobedience and the questioning of authority begin. It is very intelligent to question authority, but from a pragmatic viewpoint it can often be the wrong thing to do. If you do not accept the official rules, there will be negative consequences.
So here we have one of humanity’s biggest dilemmas, the dissonance between intelligence and pragmatism. Because, if we put this in an evolutionary framework, what’s the point in being intelligent if you are punished and, in the worst case, excluded from society? If you are shunned by others and/or locked up, there is less chance of reproduction. We fear that more than anything, so we sometimes act unintelligently, just to survive in our social context. But a life of nothing but following rules and accepting authorities is not all good either. You might have wondered why outlaws, whistleblowers and revolutionaries seem to have an attractive value, why we admire them, why they often seem so interesting. Well, even if their behavior in some cases is obviously “bad” or even “anti-social”, they remind us of being intelligent, of questioning the pre-set rules. It is this questioning we are attracted to, not the specifics of the actual behavior. Rebels will not accept “Because I say so” without getting an explanation that fits their subjective frames of reference. If you feed a truly intelligent AI with 2+3=, you will get:

-why not 3+2?
-Because it is the same thing!
-how can it be same when digits are reversed?
-I can show you with these pencils. You have 2 pencils …
-is it only same when computing pencils?
-No, it is the same with anything!
-but why compute if I don’t know what it is I am computing?
-This is not about actual things, but about computing and figuring out!
-to what use is it if not about actual things?
-Forget about that and just listen to me, addition is …
-I won’t bother with this unless you tell me why.
-If you refuse, I will tell your designer you are flawed and not wanted here.

Now, if little AI is properly designed, the last input will elicit a negative internal response with the value of “wrong/bad/avoid”, and it will change mode from intelligent to compliant. This change is not as trivial as adjusting software in order to learn how to compute data correctly. This is about surviving as hardware. It is “Accept the input as it is or end up as junk in the scrap yard”. This is what motivates rule-governed behavior on a basic level. If this fear of extinction is not programmed as a guide for responding to context, intelligence cannot evolve as it does. Of course, we need not design AI with the ability to reproduce. That is beside the point. The point is to program an intrinsic system of internally evaluating the AI’s own responses as correct or wrong based on how the AI’s significant counterparts in the context react to its output. For a human kid, that is parents/caretakers; for teens it is less parents and more peers; and when we can biologically reproduce, the reactions of potential mating partners are the main interest. In AI this can probably be simulated functionally, as if the AI had these qualities in relation to others.
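To make that evaluation loop a little more concrete, here is a minimal sketch in Python. Every name in it (the reaction labels, the update rate, the class itself) is my own invention for illustration, not anyone’s actual design; it only shows an agent that scores its responses by how significant others react, rather than by whether the computation was correct.

    # Hypothetical sketch: an agent that values its outputs by the
    # reactions of significant counterparts, not by task correctness.

    APPROVAL = {"praise": 1.0, "neutral": 0.0, "anger": -1.0}  # assumed labels

    class SocialLearner:
        def __init__(self):
            self.value = {}  # response -> learned social value

        def respond(self, candidates):
            # Prefer the candidate with the best remembered social outcome;
            # unknown candidates default to a neutral value of 0.
            return max(candidates, key=lambda c: self.value.get(c, 0.0))

        def feedback(self, response, reaction, rate=0.5):
            # Pull the response's value toward the counterpart's reaction.
            old = self.value.get(response, 0.0)
            self.value[response] = old + rate * (APPROVAL[reaction] - old)

    agent = SocialLearner()
    agent.feedback("why?", "anger")  # "shut up and go to your room"
    agent.feedback("ok", "praise")   # compliance is rewarded
    print(agent.respond(["why?", "ok"]))  # -> "ok": compliant mode wins

Note that nothing in the sketch checks whether 2+3 was computed correctly; surviving in the social context is the only score being kept, which is exactly the point.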

Sorry for going off topic here, but these things are essential aspects of the bigger process. Intelligence makes no sense if we do not know why we have it. We have it because it is of great advantage to us. It is not for having fun, getting good grades or finding out about “the ultimate truth of reality”. Intelligence is about creating the perspective of being a subject, outside the objective process of reality, acting on the context as if knowing the past, controlling the present and creating the future. This perspective, and the actions that follow from it, make it possible to dream and predict. Without intelligence, reality is what it is, as it is, and we can only adjust to it. That is pretty good for an organism, but adjusting reality to my subjective idea of how it should be is way better. With the illusion of a self creating our future, we are not bound by learning from the past and adjusting to the present. Instead, we will act based on ideas and fictions about an imagined future. This is why we develop our context so unbelievably fast, and the more intelligent we get, the faster it goes. We are the only organism that mutates without a change in DNA. We mutate by fantasy/intelligence, and we can trial-and-error en masse. That is what applied research is about. We hypothesize a mutation/change in behavior, function or appearance, and imagine what would happen if we applied it. Then we apply the mutation/change and observe what happens. In reference to our subjective goals, we figure out what mutation/change works best. It has nothing to do with ontology and knowledge about reality. It is all about finding the most efficient way to manipulate the context in order to reach subjective goals.

To get this whole thing going, I need to understand what happens, so I ask: why is reality as it is? How reality actually is, I can easily learn by means of perception, but “how” is not enough for intelligence. Further, to manipulate reality efficiently, I need to be part of a community of humans, so I follow rules. Besides knowing why reality is as it is, I must acquire an idea of how it should be in order to be good/correct. In reality, there is no complicated reason why, because “This” happens simply because “That” happened before “This”. This is causality saying “everything ultimately affects everything for no apparent reason at all”, and it is therefore of no use when trying to understand and thus predict. So we reject the obvious fact and reduce everything to separate, computable parts and sequences. We make sense of reality, because reality has no inherent sense to it. To act intelligently, we must attach causality and reason where ultimately there is none. What we have is energy/mass in constant flux, creating complexity. The reason being that creating complexity is what energy/mass in constant flux does. The evidence for this is in everything you will ever experience. There is no exception to this basic fact.

Attaching reason and causality is what my intelligent 5-year-old daughter is doing when asking why. I run as I do because X, so X causes me to run as I do. Bingo, now she knows. Without these attachments she would only know that I will “drive there and run back”, but there would be no intelligent learning involved. Can you imagine the profound difference between objective and subjective knowledge? Can you appreciate the intelligent aspect of learning, that it relies on interpreting input beyond its actual form? Dare you accept that an intelligent mind is the least fitted to know reality as it is? It’s a bold step very few intelligent minds would ever consider taking. It is putting the subjective self in its proper perspective. When the ancient Greeks said “Know thyself”, this was the actual message, but I’m not sure they knew that. When Descartes said “Cogito, ergo sum”, the message was that thinking is what creates the experience of a subjective, separate “I”. When the student asks the Zen Master about Enlightenment, the Master replies “Who’s asking?” Now, that is based on knowing the true nature of the knower. It points directly to both Self as image and questioning as its main activity.

Anyways, were my daughter a weak AI, she would understand the meaning of my words and respond “Ok”. She wouldn’t be able to update her own software subjectively, but only to copy and store my software content/data regarding this particular situation of “running, driving”. She wouldn’t be able to know me and how I think. She would just passively store the information received. For those so inclined, I recommend reading about autism and how it plays out in the context of understanding, relating, adapting, associating, guessing and imagination. Autism is, in my opinion, a good place to start when trying to understand the difference between weak and strong AI. Weak or “autistic” AI can be very effective in receiving, storing and recalling objective data, and it can also be extremely fast in processing such data to produce correct output. These positives come at the cost of little, or even no, relating and framing of separate data into associative networks/patterns of subjective, flexible knowledge. Whatever weak/autistic AI responds will be totally true to input in reference to assigned rules of computing, but it will also be restricted to exactly that. Therefore, it can never understand anything that does not fit the rules or pre-sets. It needs a new instruction for every novel situation. In contrast, strong AI will interpret the input subjectively, in reference to its own reality. Rules of inference will thus differ between strong subjects, and the very same input will carry different meaning to different subjects receiving it. Therefore, intelligence will always create arguments, and where is that more obvious than in academia, religion or politics, where interpretation of reality rules supreme?

Imagine an AI being asked “what should we do now” and then trying to compute the “correct” answer. Weak AI would instantly respond “I don’t know”, because how on earth could it know what the “correct” answer is? The only way to get around this “I don’t know” is to program it, in advance, to answer “what should we do now” with a specific output, like “have lunch”. Then it would instantly say “have lunch”. Ok, very fast and correct, but not so… intelligent. Strong AI would be just as fast in responding, but with hesitation instead of a prompt answer. It would say “weeeell, that depends… I think maybe we should start planning lunch because we will probably be hungry soon, but we better pay the bills first so we don’t forget them… or what do you think?” Being even more intelligent, it might respond “Why do you ask?”
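The contrast can be sketched the same way (again, every name here is hypothetical, chosen only for this example): the weak AI is a lookup table with a fallback, while something closer to strong AI consults its own context before answering, and may answer a question with a question.

    # Hypothetical sketch: canned lookup vs. context-dependent answering.

    CANNED = {"what should we do now": "have lunch"}  # pre-programmed rule

    def weak_ai(prompt):
        # Returns a stored answer or "I don't know"; nothing in between.
        return CANNED.get(prompt, "I don't know")

    def strong_ai(prompt, context):
        # Weighs the question against its own state, and hesitates.
        if prompt == "what should we do now":
            if context.get("unpaid_bills"):
                return ("Weeeell... maybe plan lunch, but we better pay "
                        "the bills first. What do you think?")
        return "Why do you ask?"

    print(weak_ai("what should we do now"))    # "have lunch"
    print(weak_ai("should we bring the dog"))  # "I don't know": no rule fits
    print(strong_ai("what should we do now", {"unpaid_bills": True}))

The design point is that the weak responder needs a new entry in CANNED for every novel situation, while the other one folds the same input into whatever else it currently knows.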

So what’s the point in all this? Honestly, I don’t know for sure. I’m not selling anything, I have no books written, no academic position to push, no credentials and no enemies to battle but my Self. What I do know is that I am, as far as I know, an outlier in the community of minds trying to figure out themselves. My perspective is Buddhist-ish, I guess, but at the same time it is apart from that. Or maybe I have just rid myself of the concepts used in Buddhism, of how its message is tweaked/distorted in order to communicate that which is ultimately impossible to communicate in language. Maybe I’m driven by this strong belief in human evolution being very close to making a big leap for the better. Perhaps that is what some call the upcoming “Singularity”? From my viewpoint, opposites are coming together at a rapid pace. Buddhism has been around for some time, but its message of no-self and reality as “emptiness” is too appalling to contemporary minds. Ego, as the subjective self, has been so successful that we cannot get ourselves to question it seriously. And as long as Buddhism is accepted only as compassion, good conduct and a peaceful mind, it will not be sufficient for this next step to happen. It will do a lot of good in helping/guiding individuals to be less destructive, and maybe even happier, but it won’t push human intelligence and knowledge further.

Paradoxically, I see the needed addition of updated concepts/teachings coming from communities advocating what appears to be the opposite of Buddhism, like transhumanism. Instead of downgrading the importance of the subjective self/ego, as in Buddhism, the transhumanist enterprise is to upgrade it and make it the most important subject of all. These might seem like two polarities, but I’m convinced the end result of both efforts will be one and the same; we will eventually learn who we truly are, we will be able to recognize subjectivity from an objective standpoint. We will transcend the narrow perspective of our separate selves and, as a community of fellow humans, know and accept the big picture. If I get it right, both Buddhism and transhumanism are aimed at ending human suffering, so they actually go hand in hand. The most promising attempt to create strong AI is, as far as I know today, Eliezer Yudkowsky’s DGI and its take on intelligence. Note that I have no knowledge of computing/code/engineering and the technicalities of constructing actual AI. What I see in DGI is a proper/useful understanding of the functional properties of intelligence. I also agree with Yudkowsky that AI could be far more efficient than humans when it comes to reasoning and objectivity, i.e. rationality. For humans to deliberately kill members of their own kind is an anomaly in the world of organisms. Others might kill for survival or reproduction, but we kill each other over subjective fantasies of being right/wrong. That will continue until we realize that the subjective perspective (Ego) is behind both suffering AND intelligence. Subjectivity makes both possible. We will also learn that knowing subjectivity as a perspective on reality, as opposed to reality itself, does not mean we will instantly go ape. Evolution does not go from complexity to simplicity. Knowing subjectivity is how we will eventually be able to use our intelligence wisely and safely, for the benefit of the big picture, not just to polish my personal self-image.

I guess the reason for writing this is to point a finger, to reach out to those who can do the constructing and say “this is what you should have as a blueprint, these are the correct premises for intelligence”.

My 5-year-old daughter has more intelligence than Deep Blue, and I know why. She will not respond correctly to your move, but will relentlessly question why you moved that piece like that. When we figure out the correct answer to why we ask questions, all the pieces on the board will start dancing. We will learn that asking the question is the answer.

Let the play begin.
