Sunday, June 02, 2013

AI Rambling

I've always had delusions of grandeur. That's what inspired me to start this blog: the idea that I would be able to chronicle my development of AI software. Of course I have since found that I'm not quite smart enough to do that. However, today I will indulge myself.
I have recently been reading New Scientist articles on consciousness and the analogical nature of thought.

Firstly: it (finally?) occurred to me that AI should be layered. I had always had a distributed model in mind, but everything sat on a single layer. If instead some processes were "unconscious" and others "conscious" (say, the aggregation of raw input versus the perception of that input), then combining input into conscious thoughts becomes easier, because each kind of processing unit is specialised for its job. The AI would only have thoughts that made sense to it (so the theory goes).
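To make that concrete for myself, here is a minimal sketch in Python of what I mean by layering. The class names, the threshold, and the notion of "strength" are all made up for illustration; the only point is that raw samples live in the unconscious layer, and only aggregated percepts ever cross over into the conscious one.

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class Percept:
    """A digested piece of input -- the only thing the conscious layer ever sees."""
    label: str
    strength: float


class UnconsciousAggregator:
    """Collects raw samples and only emits a Percept once enough has built up."""

    def __init__(self, label: str, threshold: float = 3.0):
        self.label = label
        self.threshold = threshold
        self.total = 0.0

    def feed(self, sample: float) -> Optional[Percept]:
        self.total += sample
        if self.total >= self.threshold:
            percept = Percept(self.label, self.total)
            self.total = 0.0
            return percept
        return None  # stays below the level of "awareness"


class ConsciousLayer:
    """Works only with pre-digested Percepts, never with raw samples."""

    def __init__(self):
        self.thoughts = []

    def perceive(self, percept: Percept) -> None:
        self.thoughts.append(
            f"I notice {percept.label} (strength {percept.strength:.1f})"
        )


# Wiring the two layers together.
aggregator = UnconsciousAggregator("movement")
mind = ConsciousLayer()
for sample in [0.5, 1.2, 0.4, 1.3, 0.9]:
    percept = aggregator.feed(sample)
    if percept is not None:  # only aggregated input crosses the boundary
        mind.perceive(percept)
print(mind.thoughts)
```

The appeal of splitting it this way is that the conscious layer never has to make sense of raw noise; by the time anything reaches it, it is already in a form the layer is built to handle.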

The distributed model would have various components, each trying to produce an analogue of something like "the seat of consciousness". Which brings me to my second thought: how analogy fits in. Douglas Hofstadter says that "Analogy is the machinery that allows us to use our past fluidly to orient ourselves in the present." So analogy can serve not only as the storage mechanism for thought, but as the transport too. It's important to remember that "storage" is shorthand not only for long-term and short-term memories, but also for the information currently being processed: the thought currently bubbling through your prefrontal cortex is itself analogical.
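As a toy picture of "using the past to orient in the present", here is a rough sketch in which past situations are stored as small sets of features, and the present moment is handled by pulling up whichever past situation overlaps it most. The feature sets and the overlap measure are placeholders of my own invention, not a real analogy engine.

```python
def similarity(a: set, b: set) -> float:
    """Jaccard overlap -- a crude stand-in for structural analogy."""
    return len(a & b) / len(a | b) if (a | b) else 0.0


# Past situations stored as bundles of features.
memory = {
    "being chased by a dog": {"threat", "moving fast", "outdoors"},
    "waiting in a queue": {"boredom", "standing still", "indoors"},
}

# The current moment, described in the same vocabulary.
present = {"threat", "moving fast", "indoors"}

# Orient in the present by finding the most analogous past situation.
best = max(memory, key=lambda name: similarity(memory[name], present))
print("This feels like:", best)  # -> "being chased by a dog"
```

The same structure does double duty here: the feature bundles are the storage, and the retrieval-by-overlap is the transport that carries an old situation into the present one.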

The problem for me is figuring out what to base a program's analogies on. Humans use feeling. What is a good analogy for feeling in a program? Processor strain? The amount of memory being used? What are good feelings? What are bad feelings? Could it be some sort of arbitrary value? I don't think an arbitrary value would work, because of its ungrounded nature: I would prefer something based on what actually constitutes reality for the program, not some abstract idealism concocted by my imagination. What feels good or bad for a program won't be the same as what feels good or bad for me, but there will be a way to link the two through analogy.
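To see whether that idea even holds together, here is a toy sketch where "feeling" is grounded in things the program can actually measure about itself: its own memory footprint and how long a piece of work took. The comfort limits and the mapping onto good/bad are arbitrary choices of mine, so this only illustrates the shape of the idea, not what feelings are.

```python
import time
import tracemalloc


def felt_quality(work) -> float:
    """Run `work` and return a valence in [-1, 1]:
    strained and slow feels bad, light and quick feels good."""
    tracemalloc.start()
    start = time.process_time()
    work()
    elapsed = time.process_time() - start
    _, peak_bytes = tracemalloc.get_traced_memory()
    tracemalloc.stop()

    # Hypothetical comfort limits: ~50 MB of memory and ~0.5 s of CPU time.
    memory_strain = min(peak_bytes / 50_000_000, 1.0)
    time_strain = min(elapsed / 0.5, 1.0)
    return 1.0 - 2.0 * max(memory_strain, time_strain)


print(felt_quality(lambda: sum(range(1_000_000))))    # light work -> near +1
print(felt_quality(lambda: list(range(20_000_000))))  # heavy work -> closer to -1
```

The point is only that the numbers come from the program's own situation rather than from a value I hand it, which is what I mean by grounding.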

I think I will continue this after lunch.
