code and the oracular

We are Gödelians: logic is serial, feeling is parallel


In the early days of AI, people thought that representing knowledge by symbols would crack the hard problem. Since the waning of symbolic AI in the 1980s, more researchers have come to believe that unconscious cognition is sub-symbolic, and perhaps the larger part of mental activity.

We had to work through the phase of fascination with the symbolic, though. Human beings, amazingly, create culture: they make symbolic stories about phenomena that can be passed to other brains. This means that children can learn skills accumulated over time from a huge repository. But this is not all of cognition, because the sub-symbolic engine still drives us. Freud’s leap in positing an unconscious was truly a feat… language is not everything!

There is an evolutionary leap from creatures guided only by unconscious activity to humans, who have that and, in addition, symbolic knowledge. This is the great innovation that defines us. But animals are still smart, don’t forget. In some sense they have the common-sense glue needed to make decisions, so this glue is the sub-symbolic, and thus the chief remaining challenge.

Trying to equip machines with a huge breadth of ordinary atoms of general knowledge is underway, but this still misses the glue that enables people to work with huge collections of facts creatively. A “fact” is still a represented thing, more linear. A “situation” is something contextual, and the amount of data involved may be too much for linear cognition. We navigate situations with instinct – feeling, as I would say.

The qualification problem, or: “it’s the exception that proves the rule”. Note that “proves” here means “prove” as in “proving ground”, i.e. a rigorous testing that tempers. It is not “prove” as in a demonstration of truth. (Note: the usage also applies to leaving bread to rise…)

The qualification problem is a story about how, when you try to make a rule about everyday stuff (as opposed to the weirdness of mathematical formalism), the situation will be riddled with exceptions. Handling these exceptions is a key skill that AI doesn’t really have yet. It’s part of the mystery glue, because it handles the failure of logic dynamically; it is zen, it cannot be said. When the doors of perception are cleansed we see everything as infinite. Serial cognition crashes on infinities, but feeling finds a way.
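As a concrete sketch of the problem (using the classic “birds fly” example from the AI literature; all the names and conditions here are illustrative, not from any particular system):

```python
# The qualification problem in miniature: a default rule ("birds fly")
# and the open-ended list of exceptions it accumulates.

def can_fly(bird, facts):
    """Default rule: birds fly -- unless some exception applies."""
    exceptions = [
        lambda f: f.get("species") == "penguin",
        lambda f: f.get("species") == "ostrich",
        lambda f: f.get("wing_broken", False),
        lambda f: f.get("covered_in_oil", False),
        # ...and so on: the list never closes, which is the whole problem.
    ]
    return not any(rule(facts) for rule in exceptions)

print(can_fly("tweety", {"species": "canary"}))                      # True
print(can_fly("pingu", {"species": "penguin"}))                      # False
print(can_fly("polly", {"species": "parrot", "wing_broken": True}))  # False
```

The point of the sketch is that the exception list can never be completed in advance; every new situation can qualify the rule in a way the programmer did not anticipate.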

Human beings generate words; words (and the formal representations of mathematics, which are also a language) don’t generate themselves, yet that was what symbolic AI was, perhaps naively, hoping. I think the best use of projects like Cyc will be as a hugely complicated creativity assistant that occasionally throws out a strange metaphorical connection between concepts. I feel Cyc can never be reliable enough to stand alone or be integrated into a general intelligence, because it is making the symbolic error described here. The glue that links and filters common-sense concepts is sub-symbolic, and building this capability is AI-complete.

Some rules can be represented formally, but I suppose Gödel showed that some truths can’t be, and these may be like the glue we are searching for. The missing glue, like his paradoxical statement “G”, stems from the self-referential character of consciousness that creates the strange loop of what we are. That inimitable reality is one we share with animals, who have “feelings too”, I assert.
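For reference, the self-referential sentence G can be sketched informally like this (where Prov is the system’s provability predicate and ⌜G⌝ a numerical code for G; this is a loose gloss, not the full construction):

```latex
% The Goedel sentence G asserts its own unprovability:
G \;\leftrightarrow\; \neg \mathrm{Prov}(\ulcorner G \urcorner)
% If the system is consistent, G is true but unprovable within it --
% a truth the formalism cannot reach from the inside.
```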

Logical thought and reasoning is a serial phenomenon: you do one thing at a time in a linear progression. Feeling is parallel: you deal with more than one thing at once. Logic is verbality; feeling is of the body and its universal language. Feeling is the mystery glue, for it is feeling that is sub-symbolic.

Feelings provide true data sometimes, false picturings of reality at other times… but when they give truth it can’t be proven using verbality, because the context is too complicated for logic to be applied in time, so reactiveness kicks in. Later you can reason about how you felt and what you did, so that consciousness is a complementary joining of linear and feeling, serial and parallel.

The world of feeling is a different space from the world of words, and humanity is realising this now as our understanding and integration of data improves. It is feeling cognition that finds its way round strange loops. Where logic is baffled by the self-referential and the recursive, evolution found feeling to be the most reliable guide.

A deduction is not as powerful as a movement; a movement changes everything, because it changes the world of your situatedness around you. Every movement remakes the world, and linear thought cannot adapt fast enough when this happens. The paradigm of the linear is geared around stasis, or change slow and simple enough for the logical mind to view. Deduction is too slow for action in the moment, but the linear can analyse afterwards, and its results can be saved in culture for future use.

Feelings are the reason the embodied mind is aware, and why no mind can be disembodied. The avoidance of pain is the driver, and in this, having a sensate body is the universal. Social pain hurts just as much as physical pain, and is mediated by the same brain regions. Imagining others’ pain is the essence of consideration for other humans, and this is foundational to ethics. Moral principles are ultimately chosen, not enforced, because they emerge from feeling.

The richest metaphors of our linear functioning are built from embodiment. Knowing where you are and how you feel about it… metaphors of space and movement stem from the ineffable knowings of your feeling awareness. The power of mathematics inherits a view of space from our embodied knowledge, a view of symbol from our verbality, and a view of relationship from our social knowing and even our social pain.

The driver for this emergence is mortality. Genetic time is the big computational iron of nature; through it we came to be. Human engineering is reliant on the linear, but the only way to create AI is to foster its emergence, not to design it: engineering fails here, and the evolutionary approach is the best bet. You are a gardener or a farmer of systems rather than a designer.

Planning in AI is also a linear assumption, because feelings are there for dynamic reaction in the face of an ineffably complex environment, and the feeling mind doesn’t plan that well. This is partly because planning assumes determinism… feeling also kicks in when the situation is not known well enough for the linear. What’s the point in a plan? Just do something. Doing anything rolls the dice and buys change, which places the agent in a new situation where there are new opportunities.

It is a bias of the linear to assume that a representation of the situation stays reliable, when feelings are zen and tao… situations in their “Real Nature” change all the time. This is reflected in the journey of AI from the symbolic into later stages like the Brooksian reflex. In a real situation involving humans, meanings are fed back between each different mind, and this acts like an explosion in the complexity of the data. Again logic is lost, but a magical voice or guide leads us through.
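The Brooksian reflex idea can be sketched roughly as follows (a toy rendition of a subsumption-style architecture, not Brooks’s actual robot code; the sensor keys and actions are all illustrative):

```python
# A minimal sketch of Brooks-style layered reflexes: no world model,
# no plan -- each layer reacts directly to the sensed situation, and
# higher-priority layers subsume the ones below.

def avoid(sensors):
    """Highest-priority layer: reflexively turn away from obstacles."""
    if sensors.get("obstacle_ahead"):
        return "turn_left"
    return None  # nothing urgent: defer to lower layers

def wander(sensors):
    """Fallback layer: keep moving when nothing more urgent fires."""
    return "move_forward"

def act(sensors, layers):
    # Consult layers in priority order; the first non-None action
    # subsumes everything beneath it.
    for layer in layers:
        action = layer(sensors)
        if action is not None:
            return action

layers = [avoid, wander]  # priority order: reflex before wandering
print(act({"obstacle_ahead": True}, layers))   # turn_left
print(act({"obstacle_ahead": False}, layers))  # move_forward
```

Notice that the agent never represents the situation at all: it re-reads the sensors on every step, which is exactly the design answer to situations that “change all the time”.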

How we think about ways to bring up kids is telling too: do you help a child to be their own person, or do you try to train them? The growing awareness of the need of infants and children to be loved and understood, not merely trained, is another aspect of humanity’s increasing understanding of the differences between linear and feeling.

So also we project upon our machines… do we want an artificial friend, or really a slave? Do we foster and see what emerges, or do we command and control?

So also all programs and creations of humans are limited by a complexity ceiling; software module size tends to reflect this. And this is the whole point – making a mind is making an entity where the complexity of the whole, or even of its modules, is way beyond our comfort zone.

We are too complex to understand and simulate: if we weren’t, we would be; and if we were, we wouldn’t. Another strange loop. And feelings help us survive without being trapped in the recursive vortex… feelings are “scruffy”: their ordering is asymmetric and messy, they are not elegant, but they work. Sounds familiar? That’s the way of evolutionary innovation too.


Written by Luke Dunn

September 13, 2014 at 9:40 am

One Response


  1. Feeling is just as important as knowing.


    September 16, 2014 at 8:16 pm
