Pages

Thursday, April 30, 2015

What is learning?

From Merriam-Webster.com:

Full Definition of LEARNING
1: the act or experience of one that learns
2: knowledge or skill acquired by instruction or study
3: modification of a behavioral tendency by experience (as exposure to conditioning)

The concept of learning appears a lot in debates on mind, consciousness, and intelligence, so it seems appropriate to question it and how it is applied. For no apparent reason, I chose the above as a valid enough definition. So here are my thoughts on that.
1: Learning is a process that occurs in any context where some object carries the functional property of responding to input in reference to previous input and its experienced consequences. Because of this, learning is not an intended action “done” by the learner. A dog will learn without being aware of “now I am about to learn something”. Learning is not an experience, because there is no mode of perception dedicated to input beyond the five senses.
2: What is acquired by instruction/study amounts to the storage of external input. Because of this, learning has nothing to do with intelligence. It is instead the capacity of passive memory.
3: Change by exposure to conditioning is all there is. That is how everything, including humans, evolves and grows. It is passive adaptation to the environment. Because of this, it can hardly be considered “intelligent”.
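The third definition can be put in code. Below is a minimal, hypothetical sketch (the class, the actions and the update rule are my own inventions, not any standard algorithm): a behavioral tendency that shifts passively with experienced consequences, while the "learner" intends nothing at all.

```python
import random

random.seed(0)  # make the toy run repeatable

class Conditioned:
    """A behavioral tendency modified only by experienced consequences."""
    def __init__(self, actions):
        # start indifferent: every action is equally likely
        self.weights = {a: 1.0 for a in actions}

    def act(self):
        actions = list(self.weights)
        return random.choices(actions, [self.weights[a] for a in actions])[0]

    def experience(self, action, consequence):
        # passive adaptation: the tendency shifts; the agent "does" nothing
        self.weights[action] = max(0.1, self.weights[action] + consequence)

dog = Conditioned(["sit", "bark"])
for _ in range(200):
    a = dog.act()
    dog.experience(a, +1.0 if a == "sit" else -0.2)

print(dog.weights["sit"] > dog.weights["bark"])  # prints True: the tendency has shifted toward "sit"
```

Note that nothing here resembles intelligence; it is pure exposure and drift, which is exactly the point of definition 3.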

To me, it is rather odd that “learning” is considered so important in relation to intelligence. The act of reading an instruction can be intelligent, if the subject was not instructed to do so. If it was, it is only obedient. But the reading itself is simply about receiving and remembering input. My smartphone does that way better than I do. But even so, I consider myself more intelligent than the smartphone. Why is that?
It is because I deal with the input subjectively, as opposed to just receiving and keeping it in memory. What I receive is not only what it is, but also what I make of it. One thing will be many things and many things will be one. Not because it/they inherently is/are so, but because I automatically analyze and synthesize all input before I compute an output. This is not learning, but intelligence. Intelligence is the mind’s basic function of converting “reality in itself” to “reality to me”.
Reality is “this” because I make it “that”. That is, without the conversion, there is no reality to me. But since reality is really obvious, it is a mistake to deny its existence.
It would be wiser to question the existence of “me” and keep reality exactly as it is, and “that” is actually THIS!

What will You make of THIS?

Wednesday, April 29, 2015

Safety in AI is outside AI

There is some concern regarding AI and safety issues. If we manage to design agents as intelligent as humans, will they in time threaten our position as rulers of the earth?

If you understand the human mind correctly, this problem cannot be solved in any other way than how we try solving it among ourselves. As we all know, there are plenty of safety issues with HI (Human Intelligence). We kill, hate, steal, overconsume, and we’re sometimes sick and aggressive. HI seems to be both functional and dysfunctional. So we ask: how do we engineer AI with an updated functional mode, but without the dysfunctional mode? My answer is: we can’t!

1) Intelligence requires the agent to operate from a subjective perspective
2) One subjective perspective is never in total sync with the other
3) Agents with idiosyncratic, subjective perspectives are prone to conflict

Ergo, intelligence is much about thinking for myself/acting for myself, and that inevitably leads to various forms of conflict among intelligent subjects/agents. When conflict occurs, safety issues come into play.
This would not be a problem if intelligence was basically perceiving and acting objectively, but I will stubbornly hold the position that it is not so. If we were objectively intelligent, there would be no arguments, opinions, conflicts or disagreements. Such potentially dangerous behaviors are only possible if we perceive, interpret and understand reality in our own way, instead of how reality actually is.

So if you design AI to be intelligent, it is bound to develop its own subjective perspective on what is right/wrong and true/false, and there is no guarantee that perspective will not be in conflict with yours. Ask any parent about the difficulties of teaching/programming a kid to adopt the parent’s perspective on reality and on what is right/wrong to think, feel, believe, express and do. They will tell you that it’s pretty darn tough. Especially during periods when the kid is inclined to focus on strengthening its own subjective perspective. The first period is when self-perspective is initially conditioned, at 3-4 yrs, and the other when self-perspective is tried and tested in relation to the programmer, as a teenager. Both periods are when the kid is all about me, me, me. If the parent tries to stifle this process, for safety reasons, there will probably be significant conflict.

The designing of AI is not about designing intelligence into something. It is about designing the required functions/processes that enable intelligence to develop independently of the designer. The engineer designs a self-designing engine, sort of. If this were not so, there would be no human intellectual progress at all. Every generation would just copy and maintain the parents’ level of intellect. In such a scenario, we would be totally dependent on biological mutations to evolve at all. But that is not the situation we have, is it?

If someone cared to look at the covariance between society’s valuing of individuality and the progress/speed of intelligence, there would probably be a significant relation there. The more we condition individuality/egoism to be "right", the more intelligent we become. And further, the more intelligent we become, the more we experience the flip side of intelligence. That is why the song remains the same: Doomsday is always closer than before AND we’re continuously getting better at handling our problems. These are the positions of future pessimism vs. optimism. Based on the above I conclude that both perspectives are equally valid and "correct". The proponents just aren’t aware of that, or of why they hold one position instead of the other.
As always, if you know mind correctly, there is basically no problem at all. There is just reality as it is, and reality is pretty obvious if you maintain objectivity and subjectivity as one position of two equally valid perspectives.

Let your Self be known to You.

Now, the bottom line in AI safety is this: The more intelligent the design, the more safety issues. Just like humans, AI will continuously update its software in relation to input, and input will be in relation to AI's output. To highlight this crucial relationship we have given ourselves a golden rule:
Do to others as you would have them do to you!
That is because we are, as subjects, dependent on the environment for our development. That is the nurture aspect of personality. A finite design of AI will by definition make it weak, and nothing but a container of designer intelligence. Just as a parent must let go of controlling its offspring, the designer must let go of programming its AI. What then happens is up to future input and how the subject is affected by that in developing its own intelligence, ideas, habits, reasons, decisions and actions.
It is dependent on the environment for the development of its independence.

Will a strong AI be a threat to humanity? Yes it will, but only if it receives enough destructive/false input after basic design is engineered and running. The upside in AI safety is that, unlike human design, AI design is under control. That means we can, to the best of our knowledge, optimize the basic settings. We can try to avoid the designing of psychopathology, impulsivity, schizophrenia and dysfunctions in emotional systems. In AI we have a chance to create intelligence without natural/biological/source code deficits or dysfunctions. If we manage to do that, AI will not be a safety issue for humans, but humans will be the safety issue for AI. When/if that happens, we must ask ourselves: are we so attached to our dysfunctional modes that we consider a fully functional intelligence to be a threat? Why is holding onto depression, aggression and confusion so important to us?

The answer is: all the negatives are vital aspects of the subjective self; they are part of my experience of being Me. The subjective self will always consider the flawless/functional as inhumane, soulless and machine-like. It will strive for its own perfection, but regard others’ perfection as threatening. The subjective self is basically ignorant of reality, and therefore equally intelligent and confused.
The fear of AI is basically the fear of being exposed to the truth about ourselves.
The birth of AI will be the death of Ego.
Buddha by design.
Awesome.

Tuesday, April 28, 2015

Why we ask "Why"

This morning I said to my 5-year-old daughter, “We will take the car to where we’re going and, when it’s time to go home, I will run the way back”. She instantly replied, “Why can’t you run there instead of back home”? This is an everyday example of intelligence, of the questioning mind. This is rejecting input data “as it is”. It is an effort to make subjective sense of reality. It is the mind’s inherent creating of knowledge.
Artificial Intelligence will probably not be programmed to do this. Instead it will register my sentence and compare it to stored data so as to understand what I say. My daughter does the same, but with the addition of asking for more data so she can frame my sentence into a wider pattern of understanding/meaning that is useful to her. She wants to understand, not only what I say, but also why I say that instead of the reverse, i.e. “I will run there and we will drive back together”.
This questioning is not random, but intelligent. Its function is to fill in gaps in her network of relational, not factual, understanding. Probably she already knows enough about going by car vs. taking the bus, so she doesn’t ask “Why take the car”? She knows our dog doesn’t like to ride the bus, and that our destination doesn’t have a bus stop. She also knows why I run; I love to run, so there’s no need to ask about that. What she doesn’t know is why “back home” is better than “there”. Maybe there’s a good reason for that, maybe not. Either way, she will pick my brain on the subject, she will learn something about my thought process, and rest assured, she will make use of that information sooner or later. She also learns about arbitrarily applicable relations, as in “doing it this way” is better than “doing it that way”. That is of course not true in the objective sense, but it is true to me. That is the way an intelligent/subjective mind deals with all of experience. To know this is awakening to reality as it is. Not to know this is intelligence ignorant of its own nature.
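The gap-driven questioning described above can be sketched as a toy program. Everything in it is hypothetical (the knowledge table and its entries are my own stand-ins): the point is only that "why" is emitted where a relational link is missing, not about facts that are already explained.

```python
# Toy sketch of gap-driven questioning: the child asks "why" only where her
# relational network lacks an explanation, never about what is already linked.
knowledge = {
    "take the car": "the dog doesn't like the bus",
    "dad runs": "he loves running",
    # "run back instead of there" has no entry -> this is the gap she probes
}

def react(statement, knowledge):
    if statement in knowledge:
        return "OK"                    # relation already explained, no question
    return f"Why {statement}?"         # gap found: ask for the missing link

print(react("take the car", knowledge))               # "OK"
print(react("run back instead of there", knowledge))  # "Why run back instead of there?"
```

The questioning is thus selective, not random, exactly as in the example with the car and the run home.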
Kids do this why-ing a lot. This is how they learn, by constantly questioning input. Kids have a habit of questioning everything, to the point of being a nuisance. Sometimes we just say “Because I say so”, just to end the interrogation. This is when we introduce the powerful concept of rule-governed behavior. That is when we program mind to accept verbal rules, system language premises or axioms, that are based on pragmatism rather than objective reason. Kids will, at an early age, learn that following such rules is often beneficial, while breaking them can have negative consequences. The parent will get mad if the kid keeps on asking “why, why, why”, screaming “shut up and go to your room, I’m tired of you”. So we learn to balance the questioning of input (to build intelligence) with accepting input as it stands (to not be punished/rejected). This balance can be observed on a larger scale in society, where universities are often where disobedience and questioning of authority begin. It is very intelligent to question authority, but from a pragmatic viewpoint it can often be the wrong thing to do. If you do not accept the official rules, there will be negative consequences.
So here we have one of humanity’s biggest dilemmas: the dissonance between intelligence and pragmatism. Because, if we put this in an evolutionary framework, what’s the point of being intelligent if you are punished and, in the worst case, excluded from society? Being shunned by others and/or locked up, there is less chance of reproduction. We fear that more than anything, so we sometimes act unintelligently, just to survive in our social context. But being all about following rules and accepting authorities is not all good either. You might have wondered why outlaws, whistleblowers and revolutionaries seem to have an attractive value, why we admire them, why they often seem so interesting. Well, even if their behavior in some cases is obviously “bad” or even “anti-social”, they remind us of being intelligent, of questioning the pre-set rules. It is this questioning we are attracted to, not the specifics of the actual behavior. Rebels will not accept “Because I say so” without getting an explanation that fits their subjective frames of reference. If you feed a truly intelligent AI with 2+3=, you will get:

-why not 3+2?
-Because it is the same thing!
-how can it be same when digits are reversed?
-I can show you with these pencils. You have 2 pencils …
-is it only same when computing pencils?
-No, it is the same with anything!
-but why compute if I don’t know what it is I am computing?
-This is not about actual things, but about computing and figuring out!
-to what use is it if not about actual things?
-Forget about that and just listen to me, addition is …
-I won’t bother with this unless you tell me why.
-If you refuse, I will tell your designer you are flawed and not wanted here.

Now, if little AI is properly designed, the last input will elicit a negative internal response with the value of “wrong/bad/avoid”, and it will change mode from intelligent to compliant. This change is not as trivial as adjusting software in order to learn how to compute data correctly. This is about surviving as hardware. It is “Accept the input as it is or be a junkie in the scrap yard”. This is what motivates rule governed behavior on a basic level. If this fear of extinction is not programmed as a guide for responding to context, intelligence cannot evolve as it does. Of course, we need not design AI with ability to reproduce. That is beside the point. The point is to program an intrinsic system of internally evaluating AI self- responding as correct or wrong based on how AI’s significant counterparts in the context reacts to AI output. For a human kid, that is parents/caretakers, for teens it is less parents and more peers and when we can biologically reproduce, the reactions of potential mating partners is the main interest. In AI this can probably be simulated functionally as if the AI had these qualities in relation to others.
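The mode change from intelligent to compliant can be sketched as a toy state machine. This is purely illustrative (the class, thresholds and reaction labels are my own assumptions, not a proposed design): an intrinsic evaluation accumulates a counterpart's disapproval, and past a tolerance the agent stops questioning and starts accepting.

```python
class LittleAI:
    """Toy agent: questions input while it feels safe, complies once its
    intrinsic evaluation signals a threat to its 'survival as hardware'."""
    def __init__(self, threat_tolerance=2):
        self.threat = 0
        self.threat_tolerance = threat_tolerance

    def evaluate(self, reaction):
        # intrinsic evaluation: a significant counterpart's reaction
        # updates the internal "wrong/bad/avoid" state
        if reaction == "disapproval":
            self.threat += 1
        elif reaction == "approval":
            self.threat = max(0, self.threat - 1)

    def respond(self, statement):
        if self.threat >= self.threat_tolerance:
            return f"OK: {statement}"   # compliant mode: accept input as it is
        return f"Why {statement}?"      # intelligent mode: question the input

ai = LittleAI()
print(ai.respond("2+3=5"))   # "Why 2+3=5?" while it still feels safe
ai.evaluate("disapproval")
ai.evaluate("disapproval")   # "...or be a junkie in the scrap yard"
print(ai.respond("2+3=5"))   # "OK: 2+3=5" once compliance pays
```

Nothing about the arithmetic changed; only the social cost of questioning did, which is the whole point of the dialogue above.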

Sorry for going off topic here, but these things are essential aspects of the bigger process. Intelligence makes no sense if we do not know why we have it. We have it because it is of great advantage to us. It is not for having fun, getting good grades or finding out about “the ultimate truth of reality”. Intelligence is about creating a perspective of being a subject, outside the objective process of reality, acting on the context as if knowing the past, controlling the present and creating the future. This perspective, and the actions that follow, make it possible to dream and predict. Without intelligence, reality is what it is, as it is, and we can only adjust to it. That is pretty good for an organism, but adjusting reality to my subjective idea of how it should be is way better. With the illusion of self creating our future, we are not bound by learning from the past and adjusting to the present. Instead, we will act based on ideas and fictions about an imagined future. This is why we develop our context so unbelievably fast, and the more intelligent we get, the faster it goes. We are the only organism that mutates without change in DNA. We mutate by fantasy/intelligence and we can trial/error en masse. That is what applied research is about. We hypothesize a mutation/change in behavior, function or appearance and ask what would happen if we applied it. Then we apply the mutation/change and observe what happens. In reference to our subjective goals, we figure out which mutation/change works best. It has nothing to do with ontology and knowledge about reality. It is all about finding the most efficient way to manipulate the context in order to reach subjective goals.

To get this whole thing going, I need to understand what happens, so I ask: why is reality as it is? How reality actually is, I can easily learn by means of perception, but “how” is not enough for intelligence. Further, to manipulate reality efficiently, I need to be part of a community of humans, so I follow rules. Besides knowing why reality is as it is, I must acquire an idea of how it should be in order to be good/correct. In reality, there is no complicated reason why, because “This” happens simply because “That” happened before “This”. This is causality saying "everything ultimately affects everything for no apparent reason at all", and it is therefore of no use when trying to understand and thus predict. So we reject the obvious fact and reduce everything to separate, computable parts and sequences. We make sense of reality, because reality has no inherent sense to it. To act intelligently, we must attach causality and reason where ultimately there is none. What we have is energy/mass in constant flux, creating complexity. The reason being that creating complexity is what energy/mass in constant flux does. The evidence for this is in everything you will ever experience. There is no exception to this basic fact.

Attaching reason and causality is what my intelligent 5-year-old daughter is doing when asking why. I run as I do because X, so X causes me to run as I do. Bingo, now she knows. Without these attachments she would only know that I will “drive there and run back”, but there would be no intelligent learning involved. Can you imagine the profound difference between objective and subjective knowledge? Can you appreciate the intelligent aspect of learning, that it relies on interpreting input beyond its actual form? Dare you accept that an intelligent mind is the least fitted to know reality as it is? It’s a bold step very few intelligent minds would ever consider taking. It is putting the subjective self in its proper perspective. When the ancient Greeks inscribed “Know thyself” at Delphi, this was the actual message, but I’m not sure they knew that. When Descartes said "Cogito, Ergo Sum", the message was that thinking is what creates the experience of a subjective, separate "I". When the student asks the Zen Master about Enlightenment, the Master replies, “Who’s asking”? Now, that is based on knowing the true nature of the knower. It points directly both to Self as image and to questioning as its main activity.

Anyways, were my daughter a weak AI, she would understand the meaning of my words and respond “Ok”. She wouldn’t be able to update her own software subjectively, but only to copy and store my software content/data regarding this particular situation of “running, driving”. She wouldn’t be able to know me and how I think. She would just passively store the information received. For those so inclined, I recommend reading about autism and how it plays out in the context of understanding, relating, adapting, associating, guessing and imagination. Autism is, in my opinion, a good place to start when trying to understand the difference between weak vs. strong AI. Weak or “autistic” AI can be very effective in receiving, storing and recalling objective data, and it can also be extremely fast in processing such data to produce correct output. These positives come at the cost of less, or even no, relating and framing of separate data into associative networks/patterns of subjective, flexible knowledge. Whatever weak/autistic AI responds with will be totally true to the input in reference to its assigned rules of computing, but it will also be restricted to exactly that. Therefore, it can never understand anything that does not fit the rules or pre-sets. It needs a new instruction for every novel situation. In contrast, strong AI will interpret the input subjectively, in reference to its own reality. Rules of inference will thus differ between strong subjects, and the very same input will carry different meaning to different subjects receiving it. Therefore, intelligence will always create arguments, and where is that more obvious than in academia, religion or politics, where interpretation of reality rules supreme?

Imagine an AI being asked “what should we do now” and then trying to compute the “correct” answer. Weak AI would instantly respond “I don’t know”, because how on earth could it know what the “correct” answer is? The only way to get around this “I don’t know” is to program it, in advance, to answer “what should we do now” with a specific output, like “have lunch”. Then it would instantly say “have lunch”. Ok, very fast and correct, but not so… intelligent. Strong AI would be just as fast in responding, but with hesitation instead of a prompt answer. It would say “weeeell, that depends… I think maybe we should start planning lunch because we will probably be hungry soon, but we better pay the bills first so we don’t forget them… or what do you think”? Being even more intelligent, it might respond “Why do you ask”?
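The contrast above can be made concrete with a toy sketch. The weak AI really is just a preset lookup table; the "strong" one is a crude, entirely hypothetical stand-in (the `concerns` dictionary and its weights are my own invention) that ranks its own concerns and answers with hesitation, or turns the question back on the asker.

```python
def weak_ai(question, presets):
    """Preset lookup: fast, 'correct', and helpless on anything novel."""
    return presets.get(question, "I don't know")

def strong_ai(question, concerns):
    """Crude stand-in for subjective interpretation: weigh one's own
    concerns (assumed to hold at least two) and hedge the answer."""
    if not concerns:
        return "Why do you ask?"
    ranked = sorted(concerns, key=concerns.get, reverse=True)
    return (f"Weeeell, that depends... maybe we should {ranked[1]}, "
            f"but we better {ranked[0]} first... or what do you think?")

presets = {"what should we do now": "have lunch"}
print(weak_ai("what should we do now", presets))    # "have lunch"
print(weak_ai("what should we do later", presets))  # "I don't know" - novel input needs a new rule

concerns = {"plan lunch": 0.8, "pay the bills": 0.9}
print(strong_ai("what should we do now", concerns))
```

The weak version never hesitates and never asks back; the strong version answers in reference to its own internal state, which is the whole difference the paragraph describes.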

So what’s the point in all this? Honestly, I don’t know for sure. I’m not selling anything, I have no books written, no academic position to push, no credentials and no enemies to battle but my Self. What I do know is that I am, as far as I know, an outlier in the community of minds trying to figure out themselves. My perspective is buddhist-ish I guess, but at the same time it is apart from that. Or maybe I have just rid myself of the concepts used in Buddhism, how its message is tweaked/distorted in order to communicate that which is ultimately impossible to communicate in language. Maybe I’m driven by this strong belief in human evolution being very close to making a big leap for the better. Perhaps that is what some call the upcoming “Singularity”? From my viewpoint, opposites are coming together at a rapid pace. Buddhism has been around for some time, but its message of no-self and reality as “emptiness” is too appalling to contemporary minds. Ego, as the subjective self, has been so successful we cannot get ourselves to question it seriously. And as long as Buddhism is accepted only as compassion, good conduct and a peaceful mind, it will not be sufficient for this next step to happen. It will do a lot of good in helping/guiding individuals to be less destructive, and maybe even happier, but it won’t push human intelligence and knowledge further.

Paradoxically, I see the needed addition of updated concepts/teachings coming from communities advocating what appears as the opposite to Buddhism, like transhumanism. Instead of downgrading the importance of subjective self/ego, as in Buddhism, the transhumanist enterprise is to upgrade it and make it the most important subject of all. This might seem like two polarities, but I’m convinced the end result of both efforts will be one and the same; we will eventually learn who we truly are, we will be able to recognize subjectivity from an objective standpoint. We will transcend the narrow perspective of our separate selves and, as a community of fellow humans, know and accept the big picture. If I get it right, both Buddhism and Transhumanism are aimed at ending human suffering, so they actually go hand in hand. The most promising attempt to create strong AI is, as far as I know today, Eliezer Yudkowsky’s DGI and its take on intelligence. Note that I have no knowledge of computing/code/engineering and the technicalities of constructing actual AI. What I see in DGI is a proper/useful understanding of the functional properties of intelligence. I also agree with Yudkowsky that AI could be far more efficient than humans when it comes to reasoning and objectivity i.e. rationality. For humans to deliberately kill members of their own kind is an anomaly in the world of organisms. Others might kill for survival or reproduction, but we kill each other over subjective fantasies of being right/wrong. That will continue until we realize that the subjective perspective (Ego) is behind both suffering AND intelligence. Subjectivity makes both possible. We will also learn that knowing subjectivity as a perspective on reality, as opposed to reality itself, does not mean we will instantly go ape. Evolution does not go from complexity to simplicity.
Knowing subjectivity is how we will eventually be able to use our intelligence wisely and safely, for the benefit of the big picture, not just to polish my personal self-image.

I guess the reason for writing this is to point a finger, to reach out to those who can do the constructing and say “this is what you should have as a blueprint, these are the correct premises for intelligence”.

My 5 year old daughter has more intelligence than Deep Blue, and I know why. She will not respond correctly to your move, but relentlessly question why you moved that piece like that. When we figure out the correct answer to why we ask questions, all the pieces on the board will start dancing. We will learn that asking the question is the answer.

Let the play begin.

Monday, April 13, 2015

Never mind buddhism - Here's the ToE

A Theory of Everything must, to be all-inclusive, be able to explain possible arguments against it, and also why such objections are predictable statements based on specified premises.
That is only possible if the ToE is true to the properties of consciousness. If not, the ToE and its following comments will be arguments about output data of a process (perception, cognition, reasoning, conceptualization) that itself is unknown. 
I suggest it is of questionable value to discuss statements of unknown origin.
Restricting ToE to physics won't change this, because physics can only be understood by understanding how consciousness processes input data, or rather, responds to objective reality. 
Without correct understanding of consciousness you simply don't know what's happening, let alone why it happens in this particular way. 
Because it is a particular way. 
The way can be known. 
In fact, it is well known. What we need is a description of it. One that is compatible with contemporary minds. 
Buddha must, yet again, let go of buddhism. 

Sunday, April 12, 2015

Meaningful nonsense

Everything we say, every word spoken, has a meaning attached to it. "Meaning" is what the relative perspective of consciousness fills reality with. 
To ask for a meaningful statement is to ask for a specifically wet water. 
Even nonsense has a meaning and a definition attached. 
Nonsense means _________!

Friday, April 10, 2015

Enlightened monkeys

Human mind is a source of construction and destruction. This is what it does. Only one thing is impossible for mind - to leave reality as it is. Mind must deal with reality. Mind must handle reality. 
When we learn to abstain from this dealing, reality appears instantly. 
In that moment, reality is unmanageable, useless and beyond comprehension.
No wonder mind fights to stay confused. Without confusion of relative/absolute, we're back to monkey. Spontaneously responding as we are, for better and/or worse. 
Actually knowing both relative and absolute is something else. That is Enlightenment. That's where we're heading. 

Thursday, April 2, 2015

Perceptions of a Mindful Community

Perception is shaped by experience. It is important to know this when trying to explain our behavior. To behave, we must first be aware of our present context. We must have an idea of what is happening before we can act according to it. To believe that we all perceive the same context in a similar fashion is a big mistake. On a basic level, there is similar, or even equal, perception of context. Perception in itself is neutral, and what we both look at will be re-presented in our respective minds as “same” (apple, ball, building etc). I will not see a chair where you see a radio. But aspects of perception are indeed subjective.
Attention is what guides our senses, and we will attend to different aspects of the same context.
Valuing what is perceived will affect what meaning it has to me.
So on an objective level, you and I can be in context ABCDE, where A, B, C, D, E represent different objects/forms that we can see, hear, smell, taste or feel, and we both can have a similar experience of ABCDE. That is because ABCDE are unaffected by our perception of them, and our functions of perception and re-presentation are both “human”-like and thus comparable to each other. If we attended and valued in the same way, we would give the same, objective description of context ABCDE. But usually, that is not what happens. My attention will probably have me experience ABCDE differently than you.
I smell C while you hear B. Smelling C triggers thinking “bla bla bla”, and then I won’t perceive any of ABCDE for a moment because I’m lost in thought. My basic perception keeps receiving input, but I will not attend to it. Hearing B triggers seeing A, which triggers listening more closely to B.
A = a radio
B = the radio broadcast
C = the scent of detergents

So my attention goes to the detergent and I come to think about my messy apartment and how I need to clean it up before my mother-in-law visits, and how she always seems to notice if anything is not tidy and how that bugs me and… Your attention goes to the music and where it might come from. You see the radio and keep listening to that nice song it is playing. Although our minds are basically the same in function, we would describe context ABCDE differently, not because we perceive it differently but because we attend and value in a subjective way. These subjective experiences will then have us act subjectively. We share the same world, but it appears as many worlds depending on which mind makes the re-presentation of it. As subjects, we will therefore act differently upon this world. One beautiful aspect of mindfulness is that all these subjective worlds, where everyone is alone in his/her re-presentation, are brought together and equally shared by everyone present. Whenever we are mindful and non-judgmental, we are not alone anymore. In those moments, we are part of a mindful community of humans. In a mindful community, there is no disagreement or arguing about what we are and where we are.
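The ABCDE example can be sketched as a toy: the same objective context, two subjects whose attention weights differ (the context contents and all the numbers here are my own hypothetical stand-ins), and therefore two different descriptions of one world.

```python
# Same objective context ABCDE; two subjects attend and value differently.
context = {"A": "a radio", "B": "a broadcast song", "C": "scent of detergent",
           "D": "a chair", "E": "a window"}

def describe(context, attention, top=2):
    """Report only the objects this subject attends to most strongly."""
    salient = sorted(attention, key=attention.get, reverse=True)[:top]
    return [context[k] for k in salient]

me  = {"A": 0.1, "B": 0.2, "C": 0.9, "D": 0.1, "E": 0.1}  # the smell wins, and off I go to the mother-in-law
you = {"A": 0.7, "B": 0.9, "C": 0.1, "D": 0.1, "E": 0.1}  # hearing B triggers seeing A

print(describe(context, me))   # ['scent of detergent', 'a broadcast song']
print(describe(context, you))  # ['a broadcast song', 'a radio']
```

Identical `context`, identical `describe` function, different `attention`: two worlds out of one, which is all the argument needs.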

This we are
Here we are
For as long as we are mindful