This follows on from my post Object Thinking is the natural way to think.

Introduction

This post deals with how the brain perceives the world as objects.
A neurological perspective on how perception works, via the study of perceptual disorders, is covered in chapter two of Neuropsychology: from theory to practice [1]. This is a review of that chapter. Studying perceptual disorders tells us how we work: by looking at damage to the brain in people (or deliberately damaging brains in animals) and observing how that affects what is perceived.
The chapter concentrates largely on visual perception, due to “the natural dominance of our visual sensory system”. It starts out by identifying two major pathways in the brain: the “what” pathway, which is responsible for the identification of objects, and the “where” pathway, which is responsible for location, position and motion. These were originally identified in monkeys in 1983 by Mishkin, Ungerleider and Macko. Milner and Goodale (1995) expanded on this model, arguing that the “where” pathway is dedicated to the preparation of movement.
This demonstrates that humans understand the world as objects and actions: one pathway identifies what a thing is, the other prepares what can be done with it.
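To make the software analogy concrete, here is a minimal, purely illustrative sketch (the class and method names are my own invention, not from the book): the object's data answers the “what” question, while its methods are the actions that can be prepared and performed on it.

```java
// Purely illustrative sketch: data answers "what", methods are "actions".
public class Mug {
    private final String name = "coffee mug"; // what it is
    private boolean held = false;             // current state

    // "What" pathway analogue: identify the object.
    public String identify() {
        return name;
    }

    // "Where" pathway analogue: an action performed on the object.
    public void pickUp() {
        held = true;
        System.out.println("Picked up the " + name + " (held=" + held + ")");
    }

    public static void main(String[] args) {
        Mug mug = new Mug();
        System.out.println("I see a " + mug.identify());
        mug.pickUp();
    }
}
```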
The chapter goes on to explain that these two pathways are linked; essentially the flow of data goes primary visual cortex → “what” pathway → “where” pathway → motor cortex. The system also gets feedback, via other pathways, from interactions with the environment to aid learning. This of course means that we get better at performing actions the more we do them.
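To show the shape of that flow in code, here is a hedged sketch (the stage and method names are invented for the example, and the feedback is reduced to a single number): data flows from sensing through identification and movement planning to action, and acting feeds back to improve later attempts.

```java
// Illustrative only: stage names mirror the flow described in the chapter,
// and the "skill" field stands in for the feedback that improves later runs.
public class PerceptionPipeline {
    private double skill = 0.0; // feedback accumulated from past interactions

    String sense(String scene)       { return "features of " + scene; }        // primary visual cortex
    String identify(String features) { return "object in " + features; }       // "what" pathway
    String plan(String object)       { return "movement plan for " + object; } // "where" pathway

    void act(String plan) {                                                     // motor cortex
        skill += 0.1; // feedback loop: each interaction makes the next one better
        System.out.println("Executing " + plan + " (skill now " + skill + ")");
    }

    void perceiveAndAct(String scene) {
        act(plan(identify(sense(scene))));
    }

    public static void main(String[] args) {
        PerceptionPipeline p = new PerceptionPipeline();
        p.perceiveAndAct("a cup on the desk");
        p.perceiveAndAct("a cup on the desk"); // second attempt benefits from feedback
    }
}
```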
The next section of the chapter deals with sensation versus perception. It is not particularly relevant to this discussion. In short: sensation occurs before perception, and is not consciously recognised. In vision, the sensation pathways are those that link the retina to the visual cortex. People with damage to these pathways will not notice that they don't see something, unless they are made aware of it appearing and disappearing from view.
Discussion of the hierarchy of the visual cortex follows. This has quite a strong neurological focus, and describes a lot of the brain's structure in this area. The key point relevant here is that the brain is modular and parallel, which means that human thinking is modular and parallel too; this is clearly analogous to separation of concerns. The parallelism is accomplished through pathways that allow feedback between modules. This could be thought of as message passing, although it might be a stretch to say it scales up to conscious thought.
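As a rough analogy (again mine, not the chapter's), modules that run in parallel and interact only by passing messages might look something like this sketch, where two threads exchange a message via a queue.

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

// Illustrative sketch of modules running in parallel and communicating only
// through messages on a queue; the module names are invented for the example.
public class MessagePassingModules {
    public static void main(String[] args) throws InterruptedException {
        BlockingQueue<String> whatToWhere = new ArrayBlockingQueue<>(10);

        // "What" module: identifies an object and sends a message.
        Thread whatModule = new Thread(() -> {
            try {
                whatToWhere.put("identified: cup");
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });

        // "Where" module: receives the message and plans a movement.
        Thread whereModule = new Thread(() -> {
            try {
                String message = whatToWhere.take();
                System.out.println("Planning movement for " + message);
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });

        whatModule.start();
        whereModule.start();
        whatModule.join();
        whereModule.join();
    }
}
```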
Next the chapter discusses what certain disorders show us about visual perception. The two types of disorder covered are apperceptive agnosia – a condition in which the patient has difficulty distinguishing between objects – and associative agnosia – in which the patient is unable to recognise objects or their functions.
Apperceptive agnosia, and its milder counterpart, categorisation deficit, give strong evidence that the mind perceives the world as objects. People with these disorders cannot discern one object from another. This impedes problem solving, as the person with the condition does not know how to act on what they see. In fact, in the case of apperceptive agnosia, it can be equivalent to blindness, as those with the condition find it easier to navigate with their eyes shut.
Associative agnosia prevents people from recognising objects or their functions. This class of agnosia can affect any of the senses; the book focuses on vision.
People with associative agnosia can copy (e.g. by drawing) and match objects, but they cannot recognise them. So it appears that primary perceptual processing is intact.
The current theory for
what causes this agnosia is that the “what” pathway has become
disconnected from the memory store for associative meaning. People
with this condition can write something down, such as their name or
address, but are completely unable to read it back. This is clear
evidence that we use background knowledge to solve problems.
The chapter gives an example (p. 53) of a patient with associative visual agnosia who can only tell what a banana is after eating it, and even then only through logical deduction: “...and here I go right back to the stage where I say well if it's not a banana, we wouldn't have this fruit.”
The next section of the chapter discusses object and face recognition. The focus is on how this works at a neurological level, and on the difference between face recognition and object recognition. The key point it makes is that the left hemisphere of the brain deals with the parts of objects, while the right deals with objects as a whole. (Faces are a special case, however, as they seem to be perceived as wholes rather than as parts, i.e. most facial recognition is done in the right hemisphere.) The brain is set up to understand composition.
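In object-oriented terms, this part/whole split is just composition. Here is a hedged sketch (the class and method names are invented for illustration, not taken from the book): a whole is built from parts, and we can reason about either the individual parts or the composed whole.

```java
import java.util.List;

// Illustrative composition sketch: a whole built from parts, which can be
// described either part by part or as a single whole.
public class Face {
    private final List<String> parts;

    public Face(List<String> parts) {
        this.parts = parts;
    }

    // "Left hemisphere" view: deal with the individual parts.
    public String describeParts() {
        return String.join(", ", parts);
    }

    // "Right hemisphere" view: treat the composition as a single whole.
    public String describeWhole() {
        return "a face made of " + parts.size() + " parts";
    }

    public static void main(String[] args) {
        Face face = new Face(List.of("eyes", "nose", "mouth"));
        System.out.println("Parts: " + face.describeParts());
        System.out.println("Whole: " + face.describeWhole());
    }
}
```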
The rest of the chapter focuses on describing top-down processing (using past experience to influence perception) and bottom-up processing (working from first principles) of visual information, and comes to a conclusion about how the left and right hemispheres interact to give what we see meaning. Essentially they work together: the left hemisphere identifies objects and the meaning of objects, while the right analyses structural form and orientation and does holistic analysis of an object.
So, in conclusion, the chapter lays out clearly that human beings perceive the world as objects, even at a neurological level. This is our nature. Thus it makes sense, when designing software, to think of our problem space in terms of the objects in it.
The next post deals with why action is integral to how we think about the world, and can be found here: Object Thinking - Objects have actions.
[1] Neuropsychology: from theory to practice, David Andrewes (2001, Psychology Press)