I write software, mostly in object-oriented C-like languages and JavaScript. I'm keeping a web log of my activities.
Monday, June 24, 2013
More AI Ramblings
The other problem I thought up today was that I don't really know what each node of the distributed system would do.
Should they all do the same thing? Neurons are all basically the same; should each processing unit be basically the same too? Or should a unit be more like a part of the brain, so that one unit would be like Broca's area, another like the visual cortex, and so on? But not those things exactly, because they are building blocks of a human, not blocks of the programme.
Let's say there are N types of block. That is analogous to the modular approach in neurology and cognitive psychology. But the blocks need to be resilient to failure, so any block's functions should, over time, be transferable to some other block as required. That is closer to the monolithic approach. Meeting in the middle seems par for the course with psychology: shades of grey are inherent in defining consciousness. Maybe it's even more exciting than a grey scale; perhaps it involves all values of colour.
So the blocks' functions would be mutable. That's a scary thought, but practical.
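To make that concrete, here is a rough JavaScript sketch of what a mutable block might look like. Everything here (ProcessingUnit, adoptFunction, the roles) is a placeholder I'm inventing for illustration, not a real design:

    // A minimal sketch: each unit starts with one role (one of the N
    // block types), but its behaviour is just a reassignable function,
    // so a failing unit's job can be handed to any other unit.
    function ProcessingUnit(role, behaviour) {
      this.role = role;           // e.g. "language", "vision"
      this.behaviour = behaviour; // the function this block currently performs
    }

    ProcessingUnit.prototype.process = function (input) {
      return this.behaviour(input);
    };

    // Transfer a failing unit's function to a healthy one.
    ProcessingUnit.prototype.adoptFunction = function (failedUnit) {
      this.role = failedUnit.role;
      this.behaviour = failedUnit.behaviour;
    };

    var vision = new ProcessingUnit("vision", function (pixels) {
      return { edges: pixels.length }; // stand-in for real processing
    });
    var spare = new ProcessingUnit("idle", function () { return null; });

    spare.adoptFunction(vision); // the spare block takes over vision's job

The scary part is exactly what the last line shows: nothing about a block is fixed except that it can run some function.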
I shall carry this on another day.
Sunday, June 02, 2013
AI Rambling
I've always had delusions of grandeur. That's what inspired me to start this blog: the idea that I would be able to chronicle my development of AI software. Of course, I have since found that I'm not quite smart enough to do that. However, today I will indulge myself.
I have recently been reading New Scientist articles on consciousness and the analogical nature of thought.
Firstly: it (finally?) occurred to me that AI should be layered. I had always had a distributed model in mind, but it had always been a single layer. If some processes seemed "unconscious" and others "conscious" (for example, the aggregation of input versus the perception of input), then the specialised nature of the "conscious" and "unconscious" processing units would make it easier to combine input into conscious thoughts. The AI would only have thoughts that made sense to it (so the theory goes).
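As a toy illustration of the layering, again in JavaScript and with everything invented for the example: the "unconscious" layer aggregates raw input, and the "conscious" layer only ever sees the aggregated result.

    // Unconscious layer: reduce raw input to a pre-digested summary.
    function aggregate(samples) {
      var sum = samples.reduce(function (a, b) { return a + b; }, 0);
      return { mean: sum / samples.length, count: samples.length };
    }

    // Conscious layer: only ever receives percepts in a form it can
    // handle, so every "thought" already makes sense to it.
    function perceive(percept) {
      return "I sense a signal averaging " + percept.mean.toFixed(2);
    }

    var rawInput = [0.2, 0.4, 0.9, 0.5]; // e.g. raw sensor readings
    console.log(perceive(aggregate(rawInput)));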
The distributed model would have various components, each trying to produce an analogue of something like "the seat of consciousness". Which brings me to my second thought: how analogy fits in. Douglas Hofstadter says that "Analogy is the machinery that allows us to use our past fluidly to orient ourselves in the present." So analogy can be used not only as the storage mechanism for thought, but as the transport too. It's important to remember that "storage" is shorthand not only for long-term and short-term memories, but also for the information currently being processed: the thought currently bubbling through your prefrontal cortex is analogical.
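I can only gesture at what analogical storage might mean in code. One naive reading (all names and the matching scheme are hypothetical): a memory is stored as a mapping between two situations, and recall means finding the stored mapping whose source best fits the present.

    // A memory is a mapping from one situation's features to another's.
    var memories = [
      { from: { hot: true, surface: "stove" }, to: { action: "withdraw" } },
      { from: { hot: true, surface: "sand" },  to: { action: "hop" } }
    ];

    // Recall by analogy: score each memory by how many features of its
    // source situation are shared with the present one.
    function recall(present) {
      var best = null, bestScore = -1;
      memories.forEach(function (m) {
        var score = Object.keys(m.from).filter(function (k) {
          return m.from[k] === present[k];
        }).length;
        if (score > bestScore) { bestScore = score; best = m; }
      });
      return best && best.to; // the past, used to orient the present
    }

    recall({ hot: true, surface: "pavement" }); // -> { action: "withdraw" }

That is the past being used fluidly in a present it was never stored for, which is about as close to Hofstadter's sentence as a dozen lines can get.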
The problem for me is trying to figure out what to base a programme's analogies on. Humans use feeling. What is a good analogue of feeling in a programme? Processor strain? The amount of memory being used? What are good feelings, and what are bad ones? Could it be some sort of arbitrary value? I don't think an arbitrary value would work, because of its ungrounded nature. I would prefer something based on what represents reality for the programme, not some abstract idealism concocted by my imagination. What feels good or bad for a programme won't be the same as what feels good or bad for me, but there will be a way to link the two together through analogy.
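For what it's worth, the least arbitrary thing I can think of is to derive a "feeling" from measurements the programme can actually make about itself. A sketch, assuming Node.js (process.memoryUsage() is a real Node API; the valence formula itself is pure invention):

    // "Feeling" grounded in the programme's own state: memory pressure
    // feels bad, a responsive event loop feels good.
    function currentFeeling(callback) {
      var mem = process.memoryUsage();
      var pressure = mem.heapUsed / mem.heapTotal; // 0 (calm) .. 1 (strained)

      var scheduled = Date.now();
      setTimeout(function () {
        var lag = Date.now() - scheduled - 10;    // event-loop strain in ms
        // Invented valence: positive reads as "good", negative as "bad".
        var valence = (1 - pressure) - Math.min(lag / 100, 1);
        callback(valence);
      }, 10);
    }

    currentFeeling(function (valence) {
      console.log(valence > 0 ? "feeling good" : "feeling strained");
    });

The numbers are grounded in the programme's reality rather than in mine, which is the whole point: the analogy to my feelings comes later, through the mapping, not through the values themselves.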
I think I will continue this after lunch.