
Monday, June 24, 2013

More AI Ramblings

(This is technically after lunch, don't judge me!)

The other problem I thought up today was that I don't really know what each node of the distributed process would do.

Should they all do the same thing? Neurons are all basically the same; should each processing unit be basically the same too? Or should a unit be more like a part of the brain, so that a unit would be like Broca's area, or the visual cortex, and so on? But not those things exactly, because they are human building blocks, not blocks of the programme.

Let's say there are N types of block. That is analogous to the modular approach to neurology and cognitive psychology. But they need to be resilient to defects, so any block's functions should, over time, be transferable to some other block as required. This is sort of analogous to the monolithic approach. Meeting in the middle seems par for the course with psychology. Shades of grey are inherent in defining consciousness. Maybe it's even more exciting than a grey scale; perhaps it involves all values of colour.

So the blocks' functions would be mutable. That's a scary thought, but practical.
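
To make that a bit more concrete, here is a minimal sketch of what a block with a mutable, transferable function might look like. Everything in it (the ProcessingBlock class, the role names, the transfer_to method) is invented for the example; it is nowhere near a real distributed system.

    # Minimal sketch: a block whose function is data that can be handed
    # to another block, rather than behaviour hard-wired into the block.

    class ProcessingBlock:
        """A node whose current role and behaviour are mutable state."""

        def __init__(self, role, handler):
            self.role = role          # e.g. "language", "vision", "idle"
            self.handler = handler    # the function this block currently performs

        def process(self, signal):
            return self.handler(signal)

        def transfer_to(self, other):
            """Move this block's function to another block, e.g. after a defect."""
            other.role, other.handler = self.role, self.handler
            self.role, self.handler = "idle", (lambda s: s)

    # A "language" block develops a fault, so its role migrates to a spare block.
    language = ProcessingBlock("language", lambda s: "parsed: " + s)
    spare = ProcessingBlock("idle", lambda s: s)
    language.transfer_to(spare)
    print(spare.role, spare.process("hello"))   # language parsed: hello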

I shall carry this on after today.

Friday, April 27, 2012

Object Thinking - Anthropomorphism

This follows on from Object Thinking - Objects have actions

Anthropomorphism is essential for object thinking to take place. Anthropomorphism is when a person attributes human mental states to other, non-human, things. Attributing human-like mental states to objects allows a programmer to treat an object as an agent, rather than as something inanimate, and so bestow upon it behaviours that let it act appropriately within the application, interacting with other objects. The amount of responsibility that you want an object to have will reflect how much you anthropomorphise it. It is important not to give an object too much responsibility, as explained by the Single Responsibility Principle.
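
As a rough illustration only (the Invoice class and its behaviour are my own invention, not from any of the texts reviewed here), an object treated as an agent with a single, clear responsibility might look like this:

    # Illustrative sketch: the object is spoken about, and written, as an
    # agent that does things itself, rather than as inert data that other
    # code pokes at.

    class Invoice:
        """An invoice that knows how to settle itself: one responsibility."""

        def __init__(self, amount_due):
            self.amount_due = amount_due
            self.settled = False

        def settle(self, payment):
            """The invoice itself decides whether a payment settles it."""
            if payment >= self.amount_due:
                self.settled = True
            return self.settled

    # We talk about it as an agent: "the invoice settles itself when paid".
    invoice = Invoice(100)
    print(invoice.settle(100))   # True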

That anthropomorphism occurs is so obvious it doesn't need investigating! So, obviously, it has been researched by a huge number of people. The paper being looked at here, Making Sense by Making Sentient: Effectance Motivation Increases Anthropomorphism, by Waytz et al. (2010)[1], is one that attempts to explain why and how people anthropomorphise.

Their hypothesis is that one of the reasons people anthropomorphise objects is to satisfy their effectance motivation: the motivation to be an effective agent. The researchers conduct six experiments based on this hypothesis.

The first experiment asked participants to rate their computers. Half of the participants (A) were asked to rate how much they felt their computer had a mind of its own. The other half (B) were asked to rate how much their computer appeared to behave as if it had its own beliefs and desires. Both sets were asked how often they had problems with the computer or its software. The hypothesis for the study was that the more problems people have with their computer, the more they will anthropomorphise it.

Results showed that, in accordance with the hypothesis, the more often participants in group A had problems with their computers, the more they thought their computers had minds of their own and that the more often participants in group B had problems, the more likely they were to believe their computers had beliefs and desires.

The second experiment asked participants to judge the agency of gadgets, each of which had been assigned one of two descriptions. A gadget's description made it seem as though what it did was either within or outside the user's control, but always described the same functionality. There were two groups of participants; both saw the same set of gadgets, but with alternating sets of descriptions. After reading the descriptions, the participants were asked to rate how much control they thought they had over each gadget, and then, on the same scale, to assess how much the gadget had a “mind of its own”, had “intentions, free will and consciousness”, and appeared to experience emotions.

In alignment with their hypothesis, the participants rated the gadgets perceived as hard to control as more anthropomorphic than those perceived as easier to control.

The third experiment was essentially a replica of the second, but the participants were given an fMRI scan while rating the gadgets. This was done because the researchers reasoned that people could be using mind as a metaphor for the behaviour they were seeing, rather than actually attributing minds to the objects. By determining the region of the brain in use when anthropomorphising takes place, they could rule out certain modes of thinking and give weight to a possible seat for anthropomorphism in the brain. The researchers propose, with reference to previous studies, that the superior temporal sulcus (STS) is involved in social or biological motion, the medial prefrontal cortex (MPFC) is in use when considering people versus objects and when considering the mind of another, and the amygdala, inferior parietal lobe and intraparietal sulcus are active when evaluating unpredictability. They therefore hypothesise that the MPFC will increase in activity when anthropomorphising.

The results of the experiment showed the ventral MPFC (vMPFC) to be the most active region, whereas the STS was not active.

The results also showed activation in a network of areas that strongly resembles the circuit responsible for self-projection, mentalising and general social cognition, which is what would be expected for anthropomorphism.

This implies that unpredictable gadgets are perceived to have a mind, in an actual rather than metaphorical sense.

The results are inconsistent with the alternative hypotheses: that attribution of mind to objects relates only to social or biological motion analogies; that processing unpredictability is the cause of the activation; or that the activation is influenced by animism.

The fourth experiment asked participants to evaluate a robot that would answer yes/no questions the participants asked. There were three conditions to which participants were randomly assigned: one where the robot answered yes as often as no, one where it answered no more often, and one where it answered yes more often. The latter two were the predictable conditions.

After asking the questions and receiving answers, the participants were asked to rate the robot on predictability, then on how much they thought it had free will, its own intentions, consciousness, desires, beliefs and the ability to express emotions. The participants were also asked to rate the robot on attractiveness, efficiency and strength. The ratings were done on a five-point scale from “Not at all” (1) to “Extremely” (5).

Results from the experiment showed that participants in the predictable groups found the robot to be predictable, more so than those in the unpredictable group. Also, predictable-no was felt to be more predictable than predictable-yes.

Importantly, anthropomorphism was found to be more prevalent where the robot was found to be less predictable.

The only significant difference between the conditions on the non-anthropomorphic evaluations was that predictable-yes participants found the robot more attractive than predictable-no participants did. The researchers do not discuss this finding. There was no significant interaction found between liking the robot and anthropomorphising it.

These results show that people anthropomorphise unpredictable agents, and demonstrate a causal link between the two. This is important, as the previous three experiments could be interpreted as showing a simple association rather than a clear cognitive process.

Experiment five gave some participants an incentive to predict the behaviour of a robot, while the others were asked to predict its behaviour without being incentivised. The hypothesis was that increasing a person's motivation to understand, explain and predict an agent should increase how much they anthropomorphise it.

Participants evaluated a robot on a computer screen. They watched videos of the robot performing, but not completing, a task. Participants were then shown options for what the robot would do next and were asked to pick what they thought would happen. Participants in the motivation condition were offered $1 per correct answer. All participants then evaluated the robot's anthropomorphism. Finally, the participants were shown the outcome and compensated where necessary.

Results showed that motivated participants rated the robot as more anthropomorphic.

This shows that anthropomorphism is driven by effectance motivation, increasing when a person is motivated to understand an agent, and is not simply controlled by the predictability of the agent.

The sixth and final experiment was predicated on the hypothesis that anthropomorphism should satisfy effectance motivation, i.e. anthropomorphism should satiate the motivation for mastery and make agents seem more predictable and understandable.

Participants evaluated four stimuli (a dog, a robot, an alarm clock and shapes). Half of the participants were told to evaluate the dog and alarm clock objectively and the robot and shapes anthropomorphically; the other half were given the opposite instructions.

Each participant was shown a video of each stimulus three times. After the third time the participant was asked to evaluate the stimulus on two scales: the extent to which they understood the stimulus and the extent to which they felt capable of predicting its future behaviour.

The results showed that the dog and shapes were found to be easier to understand than the robot or alarm clock.

Importantly, participants perceived greater understanding and predictability of agents they had been told to anthropomorphise. The effect did not seem to depend on the group the participant was in.

This study implies that anthropomorphism satisfies effectance motivation.

It is clear from this paper that anthropomorphism is a natural part of human cognition, one that is used to make the behaviour of objects in the world around us seem more predictable and thus give us a better sense of control. It also shows that there is a neurological basis for this behaviour; the brain is set up to anthropomorphise the world around us.

[1] Making Sense by Making Sentient: Effectance Motivation Increases Anthropomorphism. A. Waytz, C. K. Morewedge, N. Epley, G. Monteleone, J. H. Gao, J. T. Cacioppo. Journal of Personality and Social Psychology, 2010, Vol. 99, No. 3, 410–435

Wednesday, October 19, 2011

Object Thinking - Objects have actions

This post follows on from Object Thinking - Objects: a neurological basis

The paper being reviewed is Micro-affordance: The potentiation of components of action by seen objects (Ellis and Tucker, 2000)[1]
 
The paper focuses on two experiments. The first is concerned with power and precision micro-affordance, and the second with wrist rotation micro-affordance.

In the first experiment the participants were told to memorise objects as they were shown them. They were then tested on the objects halfway through the experiment and again at the end. During the memorisation phase, whenever a participant heard a tone, they were to either squeeze a cylindrical button with their whole hand or pinch a small button between their index finger and thumb.

The type of grip response was dependent on the type of tone: high or low. So there were two mappings known to the participants: high – large grip, low – small grip, and high – small grip, low – large grip. There were also two unknown mappings: high – large object, low – small object, and high – small object, low – large object.

Each participant was assigned one mapping from each of the two groups and this was sustained throughout the experiment.

In the results from the experiment there was a statistically significant positive correlation between grip type and object type.

The second experiment was set up much the same as the first. The differences were that, instead of making large or small grips, the participant would make clockwise or anticlockwise wrist rotations dependent on the tone, and the objects were categorised as ones more easily grasped with an anticlockwise or a clockwise wrist rotation.

The results showed a statistically significant positive correlation between wrist rotation and object type.

The paper classifies micro-affordance (MA) as the state of an observer that gives rise to stimulus-response compatibility (SRC) between what the viewer sees and what actions they perform regardless of their intention. The theory is meant as a solution to the symbol grounding problem. (The reference to this problem in the paper is Harnad, 1990[2].)

The paper explains that SRC has been demonstrated in many previous experiments, by various researchers, in forced-choice reaction time tests. For example, an advantage is gained when reaching for something on the left with the left hand, and similarly for the right. In fact an advantage is gained even in non-reaching tasks, where the location of the stimulus gives an advantage when it is on the same side as the response; this is known as the Simon effect.

Previous experiments by Ellis and Tucker show that location is not the only action related feature encoded in this way.

This preparedness for action is thought to be a coordination of the what and where pathways in the brain.

The paper reports that the theoretical implications of the results of the study are:
  1. MA differs from Gibsonian affordance in that the affordance is suggested to be encoded in the viewer's nervous system (not in the object being viewed), it applies only to grasping, and only to grasping appropriate to the object.
  2. SRC works because what is being responded to is unrelated to what is causing the compatibility effect. SRC theories suggest that stimulus → response options elicit particular mental codes, so the location of an object elicits a left or right handed response. MA, however, can be evoked without evoking a coherent action.
    This means that MA should interfere with SRC experiments.
    SRC effects have been modelled as ecological relations between visual properties and actions. They have also been modelled as effect codes that can be combined into whole actions.
    MA and these two approaches share the assumption that a compatibility effect arises from visual objects and possible, real-world actions that can be performed on them.
    MA diverges from the ecological approach by retaining representation of objects, and from effect codes by having a direct connection between vision and action. MA diverges from both because it states that actions are potentiated whenever an object is seen, regardless of the intention of the viewer.
  3. Developmentally, MA fits in well with the popular theory of Neural Darwinism. Development of adaptive behaviours requires integration of sensory and motor processes. The paper proposes that learning coordinated actions results from gradual adaptation of the neuron groups involved. This leads to coupling of the motor and sensory systems.
    The implication of the experiments is that MA reflects the involvement of the motor components of the global mapping, which have come to represent visual objects.

So what does this tell us about how natural object thinking is? Object thinking requires that you understand the objects you are working with in terms of the behaviours that they can perform. You need to be able to create your objects so that discovering what behaviours are available is intuitive — i.e. when others come to your API they aren't spending hours going through the documentation; they can just get on and use it.
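
As a loose sketch of that idea, with an invented domain and invented names: the behaviours carry the names of the actions the object affords, so a reader can discover them by skimming the class rather than trawling the documentation.

    # Illustrative only: the behaviours read as the actions the object
    # affords, so the API can be discovered directly from the class.

    class BankAccount:
        def __init__(self, balance=0):
            self._balance = balance

        def deposit(self, amount):
            """Put money in."""
            self._balance += amount

        def withdraw(self, amount):
            """Take money out, refusing to go overdrawn."""
            if amount > self._balance:
                raise ValueError("insufficient funds")
            self._balance -= amount

        def balance(self):
            return self._balance

    account = BankAccount()
    account.deposit(50)
    account.withdraw(20)
    print(account.balance())   # 30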

Ellis and Tucker show that the brain is well suited to understanding and preparing for expected behaviours. When we see an object, we immediately know the actions that the object has available, and are primed to use them.

This implies that once we have a good understanding of a problem domain, we should be able to model the behaviours of the objects in the domain intuitively, and anyone else with a good understanding of the problem domain will be able to intuitively discover each object and its behaviours.

The behaviour driven aspects of object thinking are intrinsic to how the human mind works at the brain level.

The next section deals with anthropomorphism, why OT needs it and where it comes from: Object Thinking - Anthropomorphism.

[1] Micro-affordance: The potentiation of components of action by seen objects; Rob Ellis, Mike Tucker. British Journal of Psychology (2000), 91, 451-471
[2] Harnad, S. (1990). The symbol grounding problem. Physica D, 42, 335–346. (As cited in [1])

Thursday, September 22, 2011

Object Thinking - Objects: a neurological basis



This post deals with how the brain perceives the world as objects.

A neurological perspective on how perception works, via the study of perceptual disorders, is covered in chapter two of Neuropsychology: from theory to practice [1]. This is a review of that chapter.

Studying perceptual disorders tells us how we work by looking at damaged brains in people, or by damaging brains in animals, and seeing how that affects what is perceived.

The chapter concentrates largely on visual perception, due to “the natural dominance of our visual sensory system”. It starts out by identifying two major pathways in the brain: the “what” pathway, which is responsible for identification of objects, and the “where” pathway, which is responsible for location, position and motion. These were originally identified in monkeys in 1983 by Mishkin, Ungerleider and Macko. Milner and Goodale (1995) expanded on this model to explain that the “where” pathway is dedicated to the preparation of movement.

This demonstrates that humans understand the world as objects and actions. 

The chapter goes on to explain that these two pathways are linked; essentially the flow of data goes primary visual cortex → “what” pathway → “where” pathway → motor cortex. The system also gets feedback, via other pathways, from interactions with the environment to aid in learning. This of course means that we get better at performing actions the more we do them.

The next section of the chapter deals with sensation versus perception. It is not particularly relevant to this discussion. In short summary: sensation occurs before perception, and is not consciously recognised. In vision the sensation pathways are those that link the retina to the visual cortex. People with damage to these pathways will not notice that they don't see something, unless they are made aware of it appearing and disappearing from view.

Discussion of the hierarchy of the visual cortex follows on. This has quite a strong neurological focus, and describes a lot of the brain's structure in this area. The key point relevant here is that the brain is modular and parallel, which means that human thinking is modular and parallel, which is clearly analogous to separation of concerns. The parallelism is accomplished through pathways that allow feedback between modules. This could be thought of as message passing, although it might be a stretch to say it scales up to conscious thought.

Next the chapter discusses what certain disorders show us about visual perception. The two types of disorder covered are apperceptive agnosia – a condition in which the patient has difficulty distinguishing between objects – and associative agnosia – in which the patient is unable to recognise objects or their functions.

Apperceptive agnosia, and its milder counterpart, categorisation deficit, give strong evidence that the mind perceives the world as objects. People with these disorders cannot discern one object from another. This impedes problem solving, as the person with the condition does not know how to act on what they see. In fact, in the case of apperceptive agnosia, it can be equivalent to blindness, as those with the condition find it easier to navigate with their eyes shut.

Associative agnosia prevents people from being able to recognise objects or their functions. This class of agnosia can affect any of the senses. The book focuses on vision.

People with associative agnosia can copy (e.g. by drawing) and match objects, but they cannot recognise them. So it appears that primary perceptual processing is intact.

The current theory for what causes this agnosia is that the “what” pathway has become disconnected from the memory store for associative meaning. People with this condition can write something down, such as their name or address, but are completely unable to read it back. This is clear evidence that we use background knowledge to solve problems.

The chapter gives an example (p. 53) of a patient, with associative visual agnosia, who can only tell what a banana is after eating it, and even then only through logical deduction: “...and here I go right back to the stage where I say well if it's not a banana, we wouldn't have this fruit.”

The next section of the chapter discusses object and face recognition. The focus is on how this works at a neurological level, and on the difference between face recognition and object recognition. The key point it makes is that the left hemisphere of the brain deals with parts of objects, and the right deals with objects as a whole. (Faces are a special case, however, as they seem to be perceived as a whole and not as parts, i.e. most facial recognition is done in the right hemisphere.) The brain is set up to understand composition.

The rest of the chapter focuses on describing top-down (using past experience to influence perception) and bottom-up (working from first principles) processing of visual information, and comes to a conclusion about how the left and right hemispheres interact to give what we see meaning. Essentially they work together: the left hemisphere identifies objects and the meaning of objects, while the right analyses structural form and orientation and performs holistic analysis of an object.

So, in conclusion, the chapter lays out clearly that human beings perceive the world as objects, even at a neurological level. This is our nature. Thus it makes sense, when designing software, to think of our problem space in terms of the objects in it.

The next section will deal with why action is integral to how we think about the world, and can be found here: Object Thinking - Objects have actions.

[1] Neuropsychology: from theory to practice, David Andrewes (2001, Psychology Press)

Saturday, July 09, 2011

Object Thinking is the natural way to think. Introduction

Preface
I don't know why I'm up so early on a Saturday, but I am. *yawn*. So I've been writing a paper reviewing other texts, to explain why Object Thinking is the natural way to think.
I am doing this because I do not want to lose an internet argument. I know. I've already lost. Both sides have. That's how internet arguments work.
The argument is at Programmers, particularly my answer to the question "is OOP hard because it is not natural?" SK-Logic is zealously anti-OO, and I am equally zealously pro-OO.
Then the other day I was discussing what I'm writing with Pierre 303, in the Programmers' chat room, and he suggested that I make it into several 'blog articles, because then it would be easier to digest. I agree, so that's what I'm doing. I still don't know why I'm up so early, but at least I'm doing something.


Introduction
Object Thinking: it's been around for decades as a paradigm for software design, but what is it? When presented with a problem, someone using object thinking will start to decompose the problem into discrete sections that can interact with each other. You could, for example, be forced to change the tyre on your car. A simple task, certainly, but to do it you must understand the tools and relevant components of your car, and how they need to work together to achieve your goal.

It might take several attempts to reach a fine-grained enough understanding to effectively solve the problem. Your first pass at the above example might leave you with the idea of taking the wheel off your car. A second pass might make you realise that you need to lift the car off the ground to do that, and so on.
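
One possible way to sketch that decomposition as collaborating objects is shown below; the classes and their interactions are just my reading of the example, not a definitive design.

    # One reading of the tyre-change example as collaborating objects.

    class Jack:
        def lift(self, car):
            car.raised = True

    class Wheel:
        def __init__(self, flat=False):
            self.flat = flat

    class Car:
        def __init__(self):
            self.raised = False
            self.wheel = Wheel(flat=True)

        def replace_wheel(self, new_wheel, jack):
            # The second pass of understanding: the car must be lifted first.
            if not self.raised:
                jack.lift(self)
            self.wheel = new_wheel

    car = Car()
    car.replace_wheel(Wheel(), Jack())
    print(car.raised, car.wheel.flat)   # True False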

One thing that can give you a head start in solving a problem using object thinking is background knowledge. Knowing about your problem domain, what the objects in it are capable of, makes it easier to plan how to use them. Not knowing enough can cause issues, however, if assumptions are made based on incomplete knowledge.

For example: You are asked to stick a poster to the wall, without leaving holes in the wall. You are given a hamster, newspaper and some Blu Tack®, along with the poster. If you don't know what Blu Tack® is for then your understanding of the problem domain is incomplete and you could end up using the hamster to chew up newspaper into balls, and use those to stick the poster to the wall.

It is also important to note that not everything present in your problem domain will necessarily be used to solve the problem. So, in the previous example, you might not use the newspaper or hamster at all (or, of course, you might find the hamster solution better, as it reuses the newspaper, which is more ecological).

So how does this apply to software design? Software is just “algorithms and data structures”, right? Well, in the end maybe, but you've still got to design it. Software is the output of people's attempt to solve a problem. Solving a problem with object thinking is the natural way, as this series of posts hopes to demonstrate, because it uses people's natural problem solving techniques.

Object thinking is a core tenet of Object Oriented Design (OOD), a well known software design paradigm. The inventors of OOD set out to fix what they saw as the main problem with software design: software design was taught to make people think like computers, so that they could write software for computers.
 
A book that extensively covers the meaning and practical aspects of object thinking is Object Thinking by David West (2004, Microsoft Press). In it he likens the way that traditional programmers use OOD to writing lots of small COBOL programmes [1]. Objects in this sense have been turned into data structures with algorithms wrapped around them. While modularising code is better than having one large function, it only makes designing software a little easier. It still focuses the attention of the designer on how a computer works and not on how the problem should be solved.
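
A small, invented contrast in the spirit of West's point (the example itself is mine, not his): the first version is a bare record with the logic living outside it, the second asks the object to do the work itself.

    # "Lots of small COBOL programmes": a bare record, logic outside it.
    order_record = {"items": [10.0, 5.0], "discount": 0.2}

    def total_of(record):
        return sum(record["items"]) * (1 - record["discount"])

    # Object thinking: the order is asked for its own total.
    class Order:
        def __init__(self, items, discount):
            self.items = items
            self.discount = discount

        def total(self):
            return sum(self.items) * (1 - self.discount)

    print(total_of(order_record), Order([10.0, 5.0], 0.2).total())   # 12.0 12.0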

So what makes reasoning about large systems easier? Focusing on the problem space and decomposing it into several smaller problems helps. But what is easier to think about? Is it easier to think how those problems translate into code? Perhaps in the short term, but you will end up solving the same problems over and over again, and your code will probably be inflexible.

Would it be better to think about software design the same way you think about real-world problems? That way you can use your innate problem solving skills to frame and express your design.

It turns out that the way people reason about real world problems is to break them down into smaller parts, using their background understanding of the problem space, treat the parts of the problem space as objects that can do things and have things done to them, and find ways for the objects to interact.[2]

This works well because people like to anthropomorphise objects, so that they can imagine the object doing things under its own agency, even if in the end it's a person causing the action.[3]

How can you be sure this is how you think, and is therefore the more sensible way to approach software design? Well it turns out that there is an oft-ignored backwater science known as Cognitive Psychology, and scientists in this field have been studying people for decades to find out how they work.

Future posts in this series will review certain cognitive psychology and neuropsychology texts and expand on how this applies to object thinking. The end goal is to demonstrate that object thinking is innate and therefore the best strategy for designing software.

Next post in the series: Object Thinking - Objects: a neurological basis

References
[1] Object Thinking, D. West (2004, Microsoft Press) p9
[2] Problem Solving from an Evolutionary Perspective, visited 9th July 2011
[3] Object Thinking, D. West (2004, Microsoft Press) p101

Blu-Tack is a registered trademark of Bostik. I am not affiliated with Bostik.