Sunday, July 26, 2015

Intelligence

I've always been curious about intelligence, both as it relates to our species' ability to reason about its surroundings and to how much of it we can actually fit into software. I've done little bits of reading, here and there, for years, but it's never been intentional. As such, I've built up a considerable number of disjoint knowledge fragments that I thought I'd try to organize into something a little more coordinated. Given my utterly amateur status on this topic, any comments, scepticism, corrections or other fragments are highly welcome.

I'll start with a long series of statements, then I'll do my best to explain what I think is both right and wrong about them:

  1. Life forms are dynamic, driven internally.
  2. Life forms choose how to spend their energy.
  3. Smart is the sophistication of that choice.
  4. The choice can be very simple, such as fight or flee.
  5. The choice can be made on the fly, picking the best current option.
  6. The choice can be based on internal models, which supports longer-term benefits.
  7. The models are dynamic, both the data and the structure itself can adapt and change.
  8. Intelligence is the ability to create models of the world and benefit from them: to adapt to changes faster than evolution can.
  9. Life forms can be self-aware.
  10. Self-awareness makes a life form smarter.
  11. Internal questions are spontaneous queries directed at models. They are not direct responses to external stimuli.
  12. The purpose of asking internal questions is to refine the models.
  13. Consciousness is a combination of models, being self-aware and asking internal questions.
  14. Humans are not entirely logical; their models get created for historic and/or emotional reasons.
  15. Some understanding comes initially as instinctive models.
  16. Some understanding gets built up over time from the environment and cultures.
  17. Common sense is only common to inherited cultural models.
  18. Contradictions arise from having multiple models.

Definitions

I'll start by expanding a few basic definitions and then gradually delve into the more interesting aspects:

An object can be static, like a rock. With respect to its surroundings, a rock doesn't change. An object can be dynamic, like an ocean. It is constantly changing, but it does so because of the external forces around it. If an object can change, but the reason for the change is encapsulated in the object itself, then I see this as some form of life. In order to accomplish the change, the object must build up a supply of energy. A cell is a good example, in that it stores up energy and then applies it in a variety of different ways, such as splitting, signalling other cells or repairing itself. A machine, however, is not alive. It may have sophisticated functionality, but it is always driven by some external life form. It is an extension of that life form.

One big problem I am having with the above definition is hybrid cars. They store up energy and they choose internally whether to run on electric or gas power. A car is just a machine and its destination is the choice of the driver, but that energy switchover collides with my first two points. I'll try to address that later.

I generally take 'smart' to be an external perspective on an object's behaviour and 'intelligent' to be the internal perspective. That is, something can behave smartly in its environment even if it doesn't have a lot of intelligence, and intelligent beings aren't necessarily being smart about what they are doing externally. Splitting the two on external vs. internal boundaries helps clarify the situation.

Given the external perspective, a life form that is smart is one that adapts well to its environment in a way that benefits it as much as possible. Depending on the environment, this behaviour could range considerably in sophistication. Very simple rules for determining when autumn has arrived help maple trees decide to shed their leaves. It's likely that behaviour is a simple chemical process that gradually accumulates over a period of time to act as a boolean switch: cluster enough cold days together over a month or so and a tree decides to hibernate. That may be smart, but we generally do not define it as intelligence. It is a reactive triggering mechanism tied to external stimuli.
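
To make that concrete, here is a minimal sketch of this kind of trigger in Python. The threshold and day count are invented numbers, not botany; the point is only that a running tally can act as a boolean switch with no model behind it:

    def should_shed_leaves(recent_daily_temps_c, cold_threshold=5.0, trigger_count=20):
        # No memory and no projection: just tally the cold days in a
        # recent window and flip the switch once enough accumulate.
        cold_days = sum(1 for t in recent_daily_temps_c if t < cold_threshold)
        return cold_days >= trigger_count

    print(should_shed_leaves([2.0] * 25 + [8.0] * 5))  # True: a cold month
    print(should_shed_leaves([12.0] * 30))             # False: still warm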

Some life forms make considerably more sophisticated choices. Not just reactions; there is a long-term component. In order to project out into the future, one needs some data about both the present and the past. But this data can't be unstructured. Since it is internal, at best it can be some combination of symbolic representations and explicit sensory input. The structure is no doubt chaotic (naturally organized), but it exists nevertheless. It is this resource that I call a model. A very simple example might just be a crude map of where to go to get food. A more complex one might handle social organizational relationships. Practically, it is likely that models get coalesced into larger models, and that there are several different internal and external ways in which this is triggered. Still, it is expected that life forms with this capacity would hold multiple models, each at a different stage of development.
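
As a toy illustration of what I mean by a model, here is a crude food map sketched in Python. Everything in it is invented; the point is that both the data (how much food a site holds) and the structure itself (new sites, new paths) can grow as the creature learns:

    # A crude map of where to go to get food, held as a symbolic graph.
    food_map = {
        "nest":     {"paths": ["river", "clearing"], "food": 0},
        "river":    {"paths": ["nest"],              "food": 3},
        "clearing": {"paths": ["nest"],              "food": 1},
    }

    def best_food_site(model):
        # Query the model, not the world: pick the remembered site
        # with the highest payoff.
        return max(model, key=lambda site: model[site]["food"])

    # The structure itself can change, not just the numbers inside it.
    food_map["meadow"] = {"paths": ["clearing"], "food": 5}
    food_map["clearing"]["paths"].append("meadow")
    print(best_food_site(food_map))  # meadow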

The purpose of keeping a model is that it can be used to project future events. It is essentially a running total of all of the relevant information the life form has been exposed to. Once you know that night is coming in a few hours, you can start arranging early to find a safe sleeping place. Certainly, a great many creatures have crude models of the world around them. Many apply crude logic (computation?) to them as well. We have shown a strong preference for only accepting sufficiently advanced models as truly being intelligence, but that is likely a species bias. Sometimes people also insist that what we really mean by intelligence is the ability to serialize paths through a model and communicate them to others, who can then update their own understanding. The ability to transfer parts of our models. As far as we know, we're the only creatures with that ability. Other animals do communicate their intentions, but it seems restricted to the present.
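
A small sketch of that kind of projection, continuing in Python: the model holds a crude belief about when night falls, and the creature acts on the projected event before it arrives rather than reacting to the darkness itself. The fixed sunset hour and the two-hour lead time are assumptions for illustration only:

    from datetime import datetime, timedelta

    SUNSET_HOUR = 20  # the model's crude belief about when night falls

    def next_sunset(now):
        sunset = now.replace(hour=SUNSET_HOUR, minute=0, second=0, microsecond=0)
        return sunset if sunset > now else sunset + timedelta(days=1)

    def choose_action(now, lead_time=timedelta(hours=2)):
        # Decide based on the projected future, not the current light level.
        if next_sunset(now) - now <= lead_time:
            return "find a safe sleeping place"
        return "keep foraging"

    print(choose_action(datetime(2015, 7, 26, 19, 0)))  # find a safe sleeping place
    print(choose_action(datetime(2015, 7, 26, 10, 0)))  # keep foraging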

Douglas Hofstadter made an interesting point in "I Am a Strange Loop" about the strength of self-awareness. A feedback loop can have very deep and interesting behaviours. Life forms whose models include an understanding of their own physical existence can no doubt make more sophisticated choices about their actions.

Self-awareness clearly plays a crucial role in behaviour. I'm just not sure how deeply that goes. Most mobile creatures have at least the fight-or-flee responses, and it seems that fleeing could require self-awareness: an attempt to save oneself. Perhaps, or it could just be wired into the behaviour at the instinctual level. Or both.

The really big question about life forms has always been consciousness. Some feel that it is just self-awareness, but I think you can make many valid objections to it being that simple. As I mentioned above, lots of creatures will flee at the first sign of danger, but some of those, like insects, seem simple enough that it is hard to imagine calling them intelligent, let alone conscious. Their behaviour most often seems to be statically wired. It doesn't really change quickly, if at all.

Some others feel it is just a thread or stream of conscious thought. That really only works if the stream is aware of itself and is essentially monitoring itself. Still, it doesn't seem strong enough to explain consciousness properly, and it doesn't provide any reason why it would have evolved that way.

Recently I've come to think of it as the ability to internally ask questions of one's own models. That is, it is a stream of self-aware thoughts that frequently questions the models on how (and why) to expend energy. A non-conscious life form would just react to the external changes around it directly, via the gradual buildup of rules or simple models. A conscious life form would also question those models and use those questions to further enhance them. Simply put, a conscious life form is curious about itself and its own view of the world.

That opens the door to someone correctly arguing that there is a big hole in this definition. You could point out that people who never question their own actions are not conscious. For example, someone who joins a cult and then does crazy stuff only because the leader says that they should. I'm actually fine with the idea that there are degrees of consciousness, that some people are barely conscious, and that some people can be turned that way. It matches up with the sense I got when I was younger that I had suddenly 'emerged' as a being. Before that I was just a kid doing whatever the adults specified; then, almost suddenly, I started getting my own views, questioning what others were saying or what I was told. I can accept that as becoming gradually more conscious. So it isn't boolean; it is a matter of degrees.

Asking questions does show up in most fiction about artificial intelligence. From Blade Runner to Ex Machina, the theme of the AI deviating from expectations is always driven by its questioning of the status quo. That makes sense, in that if the AI just did what it was told, the story would be boring; it would just be the story of a machine. Instead, there needs to be tension, and it derives from the AI coming to a different conclusion than at least one of the main characters. We could see that as 'questions that are used to find and fill gaps in their models'. Given that each intelligent life form's model is unique, essentially because it is gradually built up over time, internal expansions would tend towards uniqueness as well. No doubt the models are somewhat sticky; that is, it is easier to add stuff than it is to make large modifications. People seem similar in this regard. Perhaps that is why there are so many disagreements in the world, even when there are established facts?

Artificial Intelligence

Now that I've sort of established a rough set of definitions, I can talk about more interesting things.

The most obvious is artificial intelligence. For a machine to make that leap, it would have to have a dynamic, adaptable model embedded within it; otherwise it is just a machine with some embedded static intelligence. To test this, you would have to ensure that the model changed in a significant way. Thus, you might start with a blank slate and teach the machine arithmetic on integers. Then you would give it a definition for complex numbers (which are structurally different from integers) and see if it could automatically extend its mathematical understanding to those new objects.
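
Here is a rough sketch of what that test might look like; the names and interface are hypothetical. The machine's arithmetic lives in a rule table (data) rather than in fixed code. In this sketch we supply the componentwise rule ourselves; a genuinely dynamic machine would derive it from the structural definition on its own:

    class KnowledgeBase:
        def __init__(self):
            self.add_rules = {}  # type -> addition rule; this is the "model"

        def teach(self, typ, rule):
            self.add_rules[typ] = rule

        def add(self, a, b):
            rule = self.add_rules.get(type(a))
            if rule is None:
                raise NotImplementedError("no model for " + type(a).__name__)
            return rule(a, b)

    kb = KnowledgeBase()
    kb.teach(int, lambda a, b: a + b)  # the blank slate plus integer arithmetic
    print(kb.add(2, 3))  # 5

    # Complex numbers represented as pairs of integers. The test is
    # whether the machine could produce this rule itself, instead of
    # waiting for us to call teach() again.
    kb.teach(tuple, lambda a, b: (kb.add(a[0], b[0]), kb.add(a[1], b[1])))
    print(kb.add((1, 2), (3, 4)))  # (4, 6)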

These days in software development we always make a trade-off between static, finite data types and having less structure in some generic dynamic representation (a much rarer style of programming, but still done at times, in applications like symbolic algebra systems). I've never seen both overlaid on top of each other; for now it remains a trade-off.
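
A small sketch of the trade-off, with invented names: the static version fixes its shape when the code is written, while the generic dynamic version can absorb new entities and new properties at runtime, giving up compile-time guarantees in exchange:

    from dataclasses import dataclass

    # Static and finite: the structure is fixed at design time.
    @dataclass
    class Sighting:
        place: str
        food: int

    # Generic and dynamic: one open-ended structure that can take on
    # new entities and new properties while the program is running.
    knowledge = {}

    def observe(entity, prop, value):
        knowledge.setdefault(entity, {})[prop] = value

    observe("river", "food", 3)
    observe("river", "danger", True)  # a property the static type never declared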

A fully dynamic structure, as a massive graph, would give you some form of sophisticated knowledge representation, but that would still not be artificial intelligence as we often think of it. To get that next leap, you'd need this thread of constant questioning that is gradually refining the model. Testing to see if this exists would be problematic, since by definition it isn't connected to external events. My guess is that you need to use time. You give the machine a partial model with an obvious gap, and later you return to see whether the gap has been filled in. An important point here is that at a bare minimum it would require two different tests spaced out over time, which shows a huge weakness in the definition of the Turing test. A Turing test would have to run really long in order to allow the modifications to happen, and possibly by definition it couldn't, because the subject would be preoccupied by the test itself.

A second critical point is that any life form that is questioning itself and its surroundings is inherently non-controllable. That is, you could never guarantee that the answers to those questions always came out in your favor. As such, any fixed set of rules, like Isaac Asimov's three laws of robotics, is probably not enough to guarantee a domesticated and benevolent artificial intelligence. It's more like a slot machine, where a jackpot means big trouble. Once the wrong question is asked, all bets are off. A conscious being always has enough free will to extend its model in a way that is not in the best interests of those around it. It is looking after itself with respect to the future, but that forward projection could literally be anything.

Self-driving and self-repairing cars

The current state of self-driving cars leads to some interesting questions. The version at Google clearly models the world around it. I don't know how dynamic that modelling is, but it's aligned in a way that could potentially make it a life form. The differentiator is probably when the car encounters some completely new dynamic object and not only adds it to its database but also all of its new, previously unseen properties. That update would have to alter both the data and its structure. But to arrive at a life form, it might not even need to be dynamic...

At some point, a self-driving car will be able to drop you off at work and then find a cheap place to park during the day. What if one day it has an accident after it leaves you? It could easily be wired with sensors that notify it that the fender is dented. At that point, it could call up a garage, schedule an appointment and afterwards pay for the repair with your credit card. This is fairly autonomous; is it a life form now? One could argue that it is still reliant on external objects for the repair, but we have the technology to bypass that. What if it had a built-in 3D printer and some robot arms? Then it could skulk away somewhere and repair itself; no one needs to know.

So let's say it's a normal day: the car drops you off and you go to work. If later in the afternoon the police show up at your office and start questioning you about a fatal hit-and-run accident, what happens? You tell them that you've been at work all day; they ask to see the car. You summon it, and it is in pristine condition, nothing wrong. What if, as part of the repair, the car automatically deleted any logs? You ask the car if it got into an accident. It says no. At this point, you would not expect to be held liable for the accident. You weren't there, there isn't any proof, and the car itself was involved in the cover-up, not you. It does seem that, with the right combination of existing technologies, the car could appear to be fully autonomous. Making its own decisions. Repairing itself. At this point, has it crossed the threshold between machine and life form? It also seems as if it doesn't even need to be dynamic to make that leap. It has just become smart enough that it can decide how to spend its energy.

Origins of life

If life is just an internally dynamic process that got set in motion, a crucial question is how and why it all started. My naive view is that somehow bundles of consumable energy coalesced. From there, all that would be necessary to go forward is a boolean mechanism to build up containment around this energy. Given those two pieces, something like a cell could start. Initially this would work to fully contain the current bundle, and then gradually extend out to more sophisticated behaviours, such as gathering more energy. In that sense, one can start to see how the origins of life could be derived from the interplay between mechanical processes.

In a way, that's similar to the issues with a self-driving car. What's radically different, though, is what set it all in motion. Our perspective on evolution implies that some of these internally dynamic processes are a consequence of external dynamic processes. That is, the mechanism to attempt to contain, and later collect, energy is a normal part of our physical world. It is akin to the way weather and water shape the landscape, but in this case the external force gradually morphs to maintain its own self-interest. This is an extraordinarily deep issue, in that it draws the line between machines and life forms. If a self-driving car does become a life form, it is because we set it in motion. We're in motion because of all the life forms before us. But before that, it seems as if we need to look to physical processes for an explanation.

Dynamic Relationships

There are sometimes very complex relationships between different things. They might be continually dynamic, always changing. Still, they exist in a physical system that bounds their behaviour. Computation can be seen as the dynamic expression of these relationships. In that sense, it is a complex time-based linkage, whether explicit or not, that relates things together. We have some great theories for what is computable and what is not. We know the physical world has boundaries as well; a classic example is the limit on exponential operations such as paper folding. Over the millennia we have refined all sorts of ways of expressing these relationships, and have dealt with different degrees of formality as well as uncertainty. In doing that, we externally model the relationships and then apply them back to the real world, in order to help us make better choices. If that sounds remarkably like my earlier description of intelligent life forms, it is no coincidence.
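
The paper-folding example is worth working out, since it shows how quickly the physical bound appears. Assuming a 0.1 mm sheet whose thickness doubles with each fold:

    THICKNESS_MM = 0.1  # a typical sheet of paper

    for folds in (7, 20, 42):
        thickness_m = THICKNESS_MM * 2 ** folds / 1000
        print("{:2d} folds -> {:,.2f} m".format(folds, thickness_m))

    # 7 folds is about 1 cm, roughly where real paper stops folding;
    # 20 folds is about 105 m; 42 folds is about 440,000 km, past the
    # Moon. The relationship is perfectly computable, but the physical
    # world refuses to instantiate it.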

We started out communicating parts of the models, but now we are developing ways to work on the models externally. If the addition of internal models helped us along, the ability to create shared external ones is central to our continued survival. Change is the default state of the universe, and the planet; it can be locally faster than many life forms can handle. Thus speed and also accuracy are essential for dynamic processes to continue. The sooner we figure out how to collectively model the world around us, the more likely we will be prepared for the next oncoming major change. There will always be a next major change.

Getting back to computation, we can express any mechanism as a collective series of relationships. If we think of the flow of the universe as going from chaos to natural organization and then back to chaos again, this could be seen as permuting most forms of relationships. That is, eventually, somewhere, a given relationship is going to be instantiated. Most likely more than once. If that relationship exists, then the mechanism exists and if that is the ability to contain energy, then we can see how the dynamic nature of the universe can morph, rather unintentionally, into some internally dynamic process. At that point, the snowball just starts rolling down the hill...

Final thoughts

Given my lack of concentrated depth on this subject, my expectation is that my views will continue to change as I learn more. I am not certain of anything I said in this post, but I do find this line of reasoning interesting, and it seems to be fairly explanatory. It is, in the terms of this post, just a minor model that has been gradually and accidentally accumulating over time. Lots of gaps, and plenty of overlap with the many other models floating about in my thoughts.

We have arrived at a point in our development as an intellectual species where it seems that we really need a deeper understanding of these issues. They need to be clarified so that we can work upwards to correct some of our accidental historic baggage. That is, our loosely organized collective model of the world, as it is chaotically distributed between the billions of us, is not particularly accurate. With lower population densities and less technological sophistication this wasn't a problem, but that seems to have changed. Every person will predict the future differently because their starting points, their models, will differ. Those differences lead to conflict, and enough conflict leads us into taking a rather massive leap backwards. History implies this is inevitable, but I tend to think of it as just one more changing aspect of the world that we have to learn to adapt to. Intelligence, or our collective lack of it, is crucial to understanding how we should best proceed in the future.
