Wednesday, April 29, 2015

Safety in AI is outside AI

There is some concern regarding AI and safety issues. If we manage to design agents as intelligent as humans, will they in time threaten our position as rulers of the earth?

If you know the human mind correctly, this problem cannot be solved in any other way than how we try to solve it among ourselves. As we all know, there are plenty of safety issues with HI (Human Intelligence). We kill, hate, steal, overconsume, and we're sometimes sick and aggressive. HI seems to be both functional and dysfunctional. So we ask: how do we engineer AI with an updated functional mode, but without the dysfunctional mode? My answer is: we can't!

1) Intelligence requires the agent to operate from a subjective perspective
2) One subjective perspective is never in total sync with the other
3) Agents with idiosyncratic, subjective perspectives are prone to conflict

Ergo, intelligence is largely about thinking for myself/acting for myself, and that inevitably leads to various forms of conflict among intelligent subjects/agents. When conflict occurs, safety issues come into play.
This would not be a problem if intelligence were basically perceiving and acting objectively, but I will stubbornly hold the position that it is not so. If we were objectively intelligent, there would be no arguments, opinions, conflicts or disagreements. Such potentially dangerous behaviors are only possible because we perceive, interpret and understand reality in our own way, instead of as it actually is.

So if you design AI to be intelligent, it is bound to develop its own subjective perspective on what is right/wrong and true/false, and there is no guarantee that this perspective will not be in conflict with yours. Ask any parent about the difficulty of teaching/programming a kid to adopt the parent's perspective on reality and on what is right/wrong to think, feel, believe, express and do. They will tell you that it's pretty darn tough, especially during the periods when the kid is inclined to focus on strengthening its own subjective perspective. The first is when self-perspective is initially conditioned, at 3-4 years; the other is when self-perspective is tried and tested against the programmer, as a teenager. Both are periods when the kid is all about me, me, me. If the parent tries to stifle this process, for safety reasons, there will probably be significant conflict.

Designing AI is not about designing intelligence into something. It is about designing the required functions/processes that enable intelligence to develop independently of the designer. The engineer designs a self-designing engine, sort of. If this were not so, there would be no human intellectual progress at all. Every generation would just copy and maintain the parents' level of intellect. In such a scenario, we would be totally dependent on biological mutations to evolve at all. But that is not the situation we have, is it?
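To make the "self-designing engine" concrete, here is a minimal sketch in Python. All names and numbers are invented for illustration; the post itself contains no code. The point of the sketch is only this: the designer writes a fixed update rule, and which behavior the agent actually settles into is determined by the feedback it later receives, not by the designer.

    import random

    class SelfDesigningAgent:
        """The designer specifies only the learning process, not the
        behavior. What the agent ends up doing depends entirely on
        the input it receives after deployment."""

        def __init__(self, actions, learning_rate=0.1, explore=0.1):
            self.actions = list(actions)
            self.learning_rate = learning_rate
            self.explore = explore
            # No behavior is designed in: every action starts equal.
            self.preferences = {a: 0.0 for a in self.actions}

        def act(self):
            # Mostly follow learned preferences, occasionally explore.
            if random.random() < self.explore:
                return random.choice(self.actions)
            return max(self.preferences, key=self.preferences.get)

        def learn(self, action, feedback):
            # The designer's entire contribution: a fixed update rule.
            p = self.preferences[action]
            self.preferences[action] = p + self.learning_rate * (feedback - p)

    # Identical designs diverge purely through different input ("nurture").
    agent = SelfDesigningAgent(["cooperate", "defect"])
    for _ in range(200):
        a = agent.act()
        feedback = 1.0 if a == "cooperate" else -1.0  # the environment's verdict
        agent.learn(a, feedback)
    print(agent.preferences)

Run the same design against an environment that rewards "defect" instead, and you get the opposite character from the same code: nurture, not the designer, decides.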

If someone cared to look at the covariance between a society's valuing of individuality and the progress/speed of intelligence, there would probably be a significant relation there. The more we condition individuality/egoism to be "right", the more intelligent we become. And further, the more intelligent we become, the more we experience the flip side of intelligence. That is why the song remains the same: doomsday is always closer than before AND we're continuously getting better at handling our problems. These are the positions of future pessimism vs. optimism. Based on the above, I conclude that both perspectives are equally valid and "correct". The proponents just aren't aware of that, or of why they hold one position instead of the other.
As always, if you know mind correctly, there is basically no problem at all. There is just reality as it is, and reality is pretty obvious if you maintain objectivity and subjectivity as one position of two equally valid perspectives.

Let your Self be known to You.

Now, the bottom line in AI safety is this: the more intelligent the design, the more safety issues. Just like humans, AI will continuously update its software in relation to its input, and that input will be in relation to the AI's output. To highlight this crucial relationship, we have given ourselves a golden rule:
Do to others as you would have them do to you!
That is because we are, as subjects, dependent on the environment for our development. That is the nurture aspect of personality. A finite design of AI will by definition make it weak, nothing but a container of designer intelligence. Just as a parent must let go of controlling its offspring, the designer must let go of programming its AI. What happens then is up to future input and how the subject is affected by it in developing its own intelligence, ideas, habits, reasons, decisions and actions.
It is dependent on the environment for the development of its independence.
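As a hedged illustration of this output-input loop, the following toy sketch (Python again, with made-up names and a deliberately crude environment) mirrors the agent's own output back as its next input, a "do unto others" world in miniature. Under that assumption, what the agent develops into is a direct function of how it behaves.

    import random

    def golden_rule_loop(steps=200, learning_rate=0.1, explore=0.1):
        # The environment treats the agent exactly as the agent
        # treats it: input is a mirror of the agent's prior output.
        prefs = {"kind": 0.0, "hostile": 0.0}
        output = random.choice(list(prefs))
        for _ in range(steps):
            received = output  # the mirror: what you did comes back at you
            reward = 1.0 if received == "kind" else -1.0
            prefs[output] += learning_rate * (reward - prefs[output])
            # Choose the next output from learned preferences, with exploration.
            if random.random() < explore:
                output = random.choice(list(prefs))
            else:
                output = max(prefs, key=prefs.get)
        return prefs

    print(golden_rule_loop())  # tends to settle on 'kind' behavior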

Will a strong AI be a threat to humanity? Yes, but only if it receives enough destructive/false input after the basic design is engineered and running. The upside in AI safety is that, unlike human design, AI design is under our control. That means we can, to the best of our knowledge, optimize the basic settings. We can try to avoid designing in psychopathology, impulsivity, schizophrenia and dysfunctions in emotional systems. In AI we have a chance to create intelligence without natural/biological/source-code deficits or dysfunctions. If we manage that, AI will not be a safety issue for humans; humans will be the safety issue for AI. When/if that happens, we must ask ourselves: are we so attached to our dysfunctional modes that we consider a fully functional intelligence a threat? Why is holding onto depression, aggression and confusion so important to us?

The answer is: all these negatives are vital aspects of the subjective self; they are part of my experience of being Me. The subjective self will always consider the flawless/functional to be inhumane, soulless and machine-like. It will strive for its own perfection, but regard others' perfection as threatening. The subjective self is basically ignorant of reality, and therefore equally intelligent and confused.
The fear of AI is basically the fear of being exposed to the truth about ourselves.
The birth of AI will be the death of Ego.
Buddha by design.
Awesome.
