Sunday, January 24, 2016

Moral Objectivism

A friend of mine asked me to write about the existence of an objective morality. He said I had done something about it for my MSc., but I can't remember it; that was 10 years ago. This is a new take, I guess.

I am an atheist, but I will try to address my understanding from both the "there is a supernatural being that knows how everyone should behave" standpoint and the atheist standpoint.

There is a being that judges humans good or bad

Let's assume that one of the religions that says there is a being that knows right from wrong, e.g. Santa Claus, is right. Whatever that being declares to be right or wrong is the ultimate moral truth.

So then that would mean there are objective morals.

OK, but since the being is intangible, and not everyone believes in that being, how can a human know what is and isn't moral?

Those who have faith say that their being (or beings) has instructed them in the way. There is more than one faith, so which is right? Also, faith comes from within, and therefore is subjective.

For some reason, humans have to decide for themselves which of the beings is correct. Thus, from a human standpoint, morals must be subjective, because humans cannot objectively know the right way from the wrong way. If the right way could be shown, then we would all know whether what we were doing was objectively right or wrong, and we could all agree about it, so we would know whether someone was "bad" or not. Since we don't all agree about what is right or wrong, bad or good, the true path isn't self-evident.

Atheists are right

Let's assume there is no being that has moral oversight of the universe. So, then morality must be subjective. Each person has their own idea of right and wrong.

If that is the case then why do we have laws? Why do we have power structures? Why are there any rules at all in society?

Two reasons: to aid in coöperation and to reïnforce the power structure.

In the society of the UK as I am writing this, if I want something somebody else owns:
  • I can buy it, if they are willing to sell it
  • I can be given it, if they are willing to give it away
  • I can steal it.
We class the first two as morally correct and the third as morally incorrect. Except when we don't. If the thing has come into the possession of the other person by immoral means, e.g. if they stole it, then my buying or receiving it is deemed immoral. If the possessor stole the object and I steal it and give it back to the original owner, I am deemed to have behaved morally.

Stealing is seen as morally incorrect because it reduces the likelihood of coöperation in the future (people protect their possessions more and don't share as much), and also because people with useful possessions have power over those who need to use them, and so stealing breaks the power structure.

(This is also why gift giving to a stranger is seen as weaker than selling things to them, because without an exchange of money, the power structure is changed, so the person who has given is perceived as weaker and weakness is perceived as bad.)

So are increasing coöperation and reïnforcing power structures moral absolutes? Should we always seek to do these things?


If being morally correct is a choice, then no.


Sometimes increasing coöperation will change the current power structure. Sometimes enforcing the current power structure will decrease coöperation.

Let's say, for example, you want to increase the coöperation of environmentalists with oil prospectors.

Environmentalists (E) believe oil prospectors (OP) are wrong because E believe that the results of the OP will damage the environment, which is anti-coöperation, as it hurts lots of people, and will result in a complete collapse of the current power systems.

OP believe that E are wrong because the OP's results keep the current power systems running and therefore represent maximum coöperation.

Forcing one to coöperate with the other will cause a change in the power structure, because they will both have to give way, and so lose power.

Humans have very low predictive power. We struggle to accurately predict the repercussions of anything we do further ahead than a year, and most of the time we don't even try for further than a moment.

That's natural, due to how many moving parts our universe has.

Neither OP nor E can be sure about the long term outcome of their actions, so their moral stances are both relative to their understandings of the situation.

Conclusion

In the grand scheme of things, due to the human race's lack of understanding of the universe, moral decision-making is relative: we don't know whether the universe is for anything, so we cannot know whether our actions improve or impede the chance of universal success.

If the universe isn't for anything, then morals are a personal thing.

If the universe is under the moral purview of some being or beings then absolute morals are their concern, but not something we can divine.

Friday, November 27, 2015

I hate Drupal

To the tune of Tigger's Song

The terrible thing about drupal
Is drupal's a terrible thing
Its frontend is sluggish and clunky
Its backend keeps making me scream
It's inconsistent, bugs persistent

What's doc-u-men-ta-tion?
But the most terrible thing about drupal
Is I am using it.

I

am using it.

Wednesday, August 26, 2015

Conversions between one dimensional pixel arrays and two dimensional coördinates

If you have an image as a one dimensional array of pixels, organised by width, you can calculate the x, y coördinate of a particular index like so:

x = index modulo width
y = index ÷ width

Where the division is an integer division.

To convert to an array index from an x, y coördinate:

index = x + (y × width)
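
For illustration, here is a minimal sketch of those two conversions in C# (the method names are mine, just for this example):

// Convert a one dimensional pixel-array index into an (x, y) coördinate.
public static void IndexToCoordinate(int index, int width, out int x, out int y)
{
    x = index % width;   // remainder of the integer division gives the column
    y = index / width;   // integer division gives the row
}

// Convert an (x, y) coördinate back into a one dimensional pixel-array index.
public static int CoordinateToIndex(int x, int y, int width)
{
    return x + (y * width);
}

For example, in an image 10 pixels wide, index 23 maps to x = 3, y = 2, and (3, 2) maps back to index 23.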

Monday, June 24, 2013

More AI Ramblings

(This is technically after lunch, don't judge me!)

The other problem I thought up today was that I don't really know what each node of the distributed processes would do.

Should they all do the same thing? Neurons are all basically the same, should each processing unit be basically the same? Or should a unit be more like a part of the brain? So a unit would be like Broca's area, or the visual cortex, etc. But not those things exactly because they are human building blocks, not blocks of the programme.

Let's say there are N types of block. That is analogous to the modular approach to neurology and cognitive psychology. But they need to be resilient to defect, so any block's functions should, over time, be able to be transferred to some other block, as required. This is sort of analogous to the monolithic approach. Meeting in the middle seems par for the course with psychology. Shades of grey are inherent in defining consciousness. Maybe it's even more exciting than a grey scale, perhaps it involves all values of colour.

So the blocks' functions would be mutable. That's a scary thought, but practical.
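
As a concrete (and entirely speculative) sketch of that idea, a block could hold its functions in a lookup table so that any of them can be handed to another block at runtime; the names here (Block, Learn, TransferTo) are mine, just for the example:

using System;
using System.Collections.Generic;

// A processing block whose capabilities are just named functions,
// so any capability can be transferred to another block if this one degrades.
public class Block
{
    private readonly Dictionary<string, Func<object, object>> _capabilities =
        new Dictionary<string, Func<object, object>>();

    public void Learn(string name, Func<object, object> capability)
    {
        _capabilities[name] = capability;
    }

    public object Process(string name, object input)
    {
        return _capabilities[name](input);
    }

    // Hand a capability over to another block, e.g. when this block is failing.
    public void TransferTo(Block other, string name)
    {
        other.Learn(name, _capabilities[name]);
        _capabilities.Remove(name);
    }
}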

I shall carry this on after today.

Sunday, June 02, 2013

AI Rambling

I've always had delusions of grandeur. That's what inspired me to start this blog: that I would be able to chronicle my development of AI software. Of course I have since found that I'm not quite smart enough to do that. However today I will indulge myself.
I have recently been reading New Scientist articles on consciousness and the analogical nature of thought.

Firstly: It (finally?) occurred to me that AI should be layered. I had always had in mind a distributed model, but it had always been on one layer. But if some processes seemed "unconscious" and others "conscious", for example the aggregation of input vs the perception of input, then it would be easier to combine input into conscious thoughts because of the specialised nature of the "conscious" and "unconscious" processing units. The AI would only have thoughts that made sense to it (so the theory goes).

The distributed model would have various components, trying to produce analogies of things like "the seat of consciousness". Which brings me to my second thought: how analogy fits in. Douglas Hofstadter says that "Analogy is the machinery that allows us to use our past fluidly to orient ourselves in the present." So analogy can be used not only as the storage mechanism for thought, but as the transport too. It's important to remember that "storage" is shorthand not only for long term or short term memories, but also for the information currently being processed. The thought currently bubbling through your prefrontal cortex, etc. is analogical.

The problem for me is trying to figure out what to base analogies for a programme on. Humans use feeling. What is a good analogy for feeling in a programme? Processor strain? Amount of memory being used? What are good feelings? What are bad feelings? Could it be some sort of arbitrary value? I don't think an arbitrary value would work because of its ungrounded nature. I would prefer things based on what represents reality for the programme, not some abstract idealism concocted by my imagination. What feels good or bad for a programme won't be the same as what feels good or bad for me, but there will be a way to link them together through analogy.

I think I will continue this after lunch.

Saturday, June 02, 2012

Sparse Matrix Multiplication

I want the Math.NET Numerics developers to know their work is great: they have put together an easy-to-use, astoundingly well-documented numerical library for .NET. Please know this little criticism comes from a place of respect. It could even be that the code has been updated since your last release and that what I'm going to point out is no longer a problem.

I really don't know much about calculus and mathematics at that level. I barely passed A-level maths, and the only time I've used any of the knowledge gained therein was when I had to calculate the first derivative of 1 − e⁻ˣ at university. My mathematics skills are weak (sadly). So, when, in mid-April, I was asked at work to implement some maths-heavy algorithms, I felt suitably challenged. Thankfully the scientist who was feeding me the algorithms understood them really well and was on hand to explain things to me over and over again until we finally got things working yesterday. Yay!

Some of what we did relied on sparse matrices, something I had heard of but never used. So my first thought was that I needed a third-party library to do these calculations. The library we are currently using is the bluebit .NET matrix library; it's not perfect and we'll have to replace it with something faster, but for the moment it makes the code testable. This library was not my first choice: ideally I wanted something we didn't have to pay for. My first stop was the Math.NET Numerics library. This, unfortunately, proved to be too slow. I also tried out Extreme Optimization, but this library was also too slow. Other libraries I looked at were ILNumerics, IMSL.NET and Center Space NMath. I looked at but did not test these last three, because each library's API and help were so bad I couldn't figure out how to do what I needed to do. I don't have time to figure out matrix maths; this is why I'm looking for a library. If you want me to choose yours, make it easy to use.

So that was the bulk of the outcome of my foray into numerical libraries. Bluebit is my current choice, but I will have to change it for something faster. That is not the only thing I learned, though. I learned something that I hope the Math.NET developers will be able to use in their code, if they haven't already. I've no time to dive into the project and patch it myself — as I've said, my understanding of the maths is not great — so feel free to take the code here and fix it to work in the library.

At work I'm dealing with quite large matrices. The stuff I've been testing with is 8K x 8K points, and the real data will probably be up to 32K x 32K. But these are sparse matrices, so working with them should not be too processor and memory intensive. The major things I need to do are transposition, multiplication and inversion. Inversion is the killer, and understanding it is currently over my head. It's the place where Extreme Optimization fell down, and where bluebit struggles. I need the algorithms to run in a few seconds. Currently, with 16K x 16K points and bluebit, it's taking 2 minutes. The algorithm did not complete with the other two libraries. I waited for over half an hour, and still nothing, and that was with 8K data.

The first problem that Math.NET encountered was with the multiplication of the matrices. This is what I hope I've optimised. All I've done is profile their code and change the bit that took forever: assigning data to a point in the matrix.

My first step was to write these two tests, to make sure I was multiplying the matrices correctly:

[Test]
public void MatrixMultiplication()
{
    var leftM = new double[,] {{4, 5, 6, 7, 8, 1, 2}, {3, 9, 6, 7, 3, 3, 1}, {2, 2, 8, 4, 1, 8, 1}, {1, 9, 9, 4, 3, 1, 2}};
    var rightM = new double[,] {{1, 8, 1}, {2, 6, 2}, {3, 4, 1}, {4, 2, 2}, {5, 1, 1}, {6, 3, 2}, {7, 5, 1}};
    var expectedM = new double[,] {{120, 121, 46}, {107, 133, 51}, {106, 98, 40}, {97, 122, 43}};

    var sm = new SparseMatrix();

    var resultM = sm.MultiplyMatrices(leftM, rightM);

    Assert.AreEqual(expectedM.Rank, resultM.Rank);
    Assert.AreEqual(expectedM.GetLength(0), resultM.GetLength(0));
    Assert.AreEqual(expectedM.GetLength(1), resultM.GetLength(1));

    for(int row = 0; row < 4; row++)
    {
        for(int col = 0; col < 3; col++)
        {
            Assert.AreEqual(expectedM[row, col], resultM[row, col]);
        }
    }
}

[Test]
public void SparseMatrixMultiplication()
{
    var leftM = new double[,] {{1,2,3,0,0,0,0,0,0,0}, {0,0,0,0,0,1,2,0,0,0}, {1,0,4,0,0,5,0,0,0,0}, {0,4,0,5,0,6,0,0,7,0}, {9,0,0,0,0,0,8,0,0,0}};
    var rightM = new double[,] {{0,2,0,4,0}, {1,0,0,1,1}, {3,0,1,3,0}, {4,0,0,0,0}, {0,5,6,0,0}, {0,9,0,6,0}, {0,1,0,3,0}, {0,0,8,0,9}, {0,0,0,0,7}, {0,1,0,0,5}};
    var expectedM = new double[,] {{11,2,3,15,2}, {0,11,0,12,0}, {12,47,4,46,0}, {24,54,0,40,53}, {0,26,0,60,0}};
 
    var sm = new SparseMatrix();

    var resultM = sm.MultiplyMatrices(leftM, rightM);

    for (int row = 0; row < 5; row++)
    {
        for (int col = 0; col < 5; col++)
        {
            Assert.AreEqual(expectedM[row, col], resultM[row, col]);
        }
    }
}

(SparseMatrix isn't really the name of the class, I put the multiplication into the class that was handling the algorithm, but I'm not allowed to talk about that!)

Then I spent ages struggling with the Math.NET code (because of my ignorance; the code itself is easy to read) trying to understand sparse matrix multiplication: how it could be faster than normal matrix multiplication, and how I could implement it faster. It took a couple of days. I spent a couple of days, rather than giving up and finding a proprietary library right away, because I thought that Math.NET would do the business when it came to inversion. Sadly this isn't the case. Anyway, this is my optimised sparse matrix multiplication method:

private IEnumerable<int> GetNonZeroIndicesForMatrixColumn(double[,] matrix, long col, int rowcount)
{
    for (int row = 0; row < rowcount; row++)
    {
        if (matrix[row, col] != 0)
        {
            yield return row;
        }
    }
}

private IEnumerable<int> GetNonZeroIndicesForMatrixRow(double[,] matrix, int row, int colcount)
{
    for (int col = 0; col < colcount; col++)
    {
        if (matrix[row, col] != 0)
        {
            yield return col;
        }
    }
}
        
/// <summary>
/// Matrix multiplication optimised for sparse matrices
/// </summary>
/// <param name="matrix1">Matrix on the left of the multiplication</param>
/// <param name="matrix2">Matrix on the right of the multiplication</param>
/// <returns>A matrix that is the multiplication of the two passed in</returns>
public double[,] MultiplyMatrices(double[,] matrix1, double[,] matrix2)
{
    int j = matrix1.GetLength(1);
    if (j != matrix2.GetLength(0))
    {
        throw new ArgumentException("matrix1 must have the same number of columns as matrix2 has rows.");
    }

    int m1Rows = matrix1.GetLength(0);
    int m2Cols = matrix2.GetLength(1);
    double[,] result = new double[m1Rows, m2Cols];

    var nonZeroRows = new List<int>[m1Rows];

    Parallel.For(0, m1Rows, row =>
    {
        nonZeroRows[row] = GetNonZeroIndicesForMatrixRow(matrix1, row, j).ToList();
    });

    var nonZeroColumns = new List<int>[m2Cols];

    Parallel.For(0, m2Cols, col =>
    {
        nonZeroColumns[col] = GetNonZeroIndicesForMatrixColumn(matrix2, col, j).ToList();
    });

    Parallel.For(0, m1Rows, row =>
    {
        Parallel.For(0, m2Cols, column =>
        {
            var ns = nonZeroColumns[column].Intersect(nonZeroRows[row]);
            double sum = ns.Sum(n => matrix1[row, n] * matrix2[n, column]);
            result[row, column] = sum;
        });
    });

    return result;
}

As you can see, there is a lot of reliance on the parallel methods that come with .NET 4. That, coupled with the trick of intersecting the non-zero indices of each row of the left matrix with those of each column of the right matrix, seems to be the major advantage of my method over Math.NET's, because their assignments can't be done in parallel. This could be to do with Silverlight compatibility issues; I don't know. I don't have to worry about Silverlight.

I have run a benchmark for my code. I created a 5000 x 5000 point matrix and filled it at random points with random data (well, pseudo-random). I benchmarked at 5, 50, 150 and 500 non-zero items per row. I ran the test 10 times, to get a mean. The table shows the results:

Number of non-zeros per row | Mean seconds taken to multiply | Standard deviation
5                           | 6.24465716                     | 0.1037383251
50                          | 51.10972332                    | 0.8521258197
150                         | 93.29733629                    | 77.751344564
500                         | 13.18435411                    | 6.4991175895

I find it strange that the standard deviation for the 150 condition is so high. If anyone can see a problem in my code, I'd be really happy to hear it! The full test is below:

using System;
using System.Collections.Generic;
using System.IO;
using System.Linq;
using System.Threading.Tasks;

namespace Math.NetBenchmark
{
    class Program
    {
        // Note: System.Random is not thread-safe; the parallel loops in MakeMatrix
        // may interleave calls to it, which could skew the generated test data.
        private static Random _r = new Random();

        static void Main(string[] args)
        {
            const int rows = 5000;
            const int cols = 5000;
            var nonzerosPerRow = new [] {5, 50, 150, 500};

            Console.WriteLine("started");
            using (var sw = new StreamWriter("MyMX10.results"))
            {
                sw.WriteLine("Number of non-zeros,Time taken");
                foreach (var nzpr in nonzerosPerRow)
                {
                    Console.Write(nzpr+" - making left");
                    var left = MakeMatrix(rows, cols, nzpr);
                    Console.Write("making right");
                    var right = MakeMatrix(rows, cols, nzpr);
                    Console.Write("multiplying...");
                    var startTime = DateTime.Now;
                    MultiplyMatrices(left, right);
                    var endTime = DateTime.Now;
                    var diff = endTime - startTime;
                    sw.WriteLine(nzpr + "," + diff.TotalSeconds);
                    Console.WriteLine("done");
                }
            }

            Console.WriteLine("done");
        }

        private static double[,] MakeMatrix(int rows, int cols, int nonzerosPerRow)
        {
            var result = new double[rows, cols];
            var colsPoss = Enumerable.Range(0, cols).ToArray();
            Parallel.For(0, rows, iRow =>
            {
                var posleft = colsPoss;
                Console.Write(".");
                for (int i = 0; i < nonzerosPerRow; i++)
                {
                    int posindex = _r.Next(posleft.Length);
                    int index = posleft[posindex];
                    result[iRow, index] = 1+_r.NextDouble();
                    posleft = posleft.Take(index).Concat(posleft.Skip(index+1)).ToArray();
                }
            });
            return result;
        }

        private static IEnumerable<int> GetNonZeroIndicesForMatrixColumn(double[,] matrix, long col, int rowcount)
        {
            for (int row = 0; row < rowcount; row++)
            {
                if (matrix[row, col] != 0)
                {
                    yield return row;
                }
            }
        }

        private static IEnumerable<int> GetNonZeroIndicesForMatrixRow(double[,] matrix, int row, int colcount)
        {
            for (int col = 0; col < colcount; col++)
            {
                if (matrix[row, col] != 0)
                {
                    yield return col;
                }
            }
        }

        public static double[,] MultiplyMatrices(double[,] matrix1, double[,] matrix2)
        {
            int j = matrix1.GetLength(1);
            if (j != matrix2.GetLength(0))
            {
                throw new ArgumentException("matrix1 must have the same number of columns as matrix2 has rows.");
            }

            int m1Rows = matrix1.GetLength(0);
            int m2Cols = matrix2.GetLength(1);
            double[,] result = new double[m1Rows, m2Cols];

            var nonZeroRows = new List<int>[m1Rows];

            Parallel.For(0, m1Rows, row =>
            {
                nonZeroRows[row] = GetNonZeroIndicesForMatrixRow(matrix1, row, j).ToList();
            });

            var nonZeroColumns = new List<int>[m2Cols];

            Parallel.For(0, m2Cols, col =>
            {
                nonZeroColumns[col] = GetNonZeroIndicesForMatrixColumn(matrix2, col, j).ToList();
            });

            Parallel.For(0, m1Rows, row =>
            {
                Parallel.For(0, m2Cols, column =>
                {
                    var ns = nonZeroColumns[column].Intersect(nonZeroRows[row]);
                    double sum = ns.Sum(n => matrix1[row, n] * matrix2[n, column]);
                    result[row, column] = sum;
                });
            });

            return result;
        }
    }
}

Friday, April 27, 2012

Object Thinking - Anthropomorphism

This follows on from Object Thinking - Objects have actions

Anthropomorphism is essential for object thinking to take place. Anthropomorphism is when a person attributes human mental states to other, non-human, things. Attributing human-like mental states to objects allows a programmer to treat an object as an agent, as opposed to something inanimate, and so bestow upon it behaviours that let it act appropriately within the application, interacting with other objects. The amount of responsibility that you want an object to have will reflect how much you anthropomorphise it. It is important not to give an object too much responsibility, as explained by the Single Responsibility Principle.
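
As a rough illustration of that agent-like framing (the domain and names here are invented for this post, not taken from the paper), an object with a single, human-sounding responsibility might look like this:

// An object treated as an agent: it "knows" one thing (whether an order
// is acceptable) and is asked for its judgement, rather than having its
// data pulled out and checked elsewhere.
public class OrderValidator
{
    private readonly decimal _maximumValue;

    public OrderValidator(decimal maximumValue)
    {
        _maximumValue = maximumValue;
    }

    // The validator decides for itself whether an order is acceptable.
    public bool IsAcceptable(decimal orderValue)
    {
        return orderValue > 0 && orderValue <= _maximumValue;
    }
}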

That anthropomorphism occurs is so obvious it doesn't need investigating! So obviously it has been researched by a huge number of people. The paper being looked at here, Making Sense by Making Sentient: Effectance Motivation Increases Anthropomorphism, by Waytz et al. in 2010[1], is one that attempts to explain why and how people anthropomorphise.

Their hypothesis is that one of the reasons people anthropomorphise objects is that they want to increase their effectance motivation. This is the motivation to be an effective social agent. The researchers conduct six experiments based on this hypothesis.

The first experiment asked participants to rate their computers. Half of the participants (A) were asked to rate how much they felt their computer has a mind of its own. The other half (B) were asked to rate how much their computer appeared to behave as if it has its own beliefs and desires. Both sets were asked how often they had problems with the computer or its software. The hypothesis for the study is that the more problems people have with their computer, the more they will anthropomorphise it.

Results showed that, in accordance with the hypothesis, the more often participants in group A had problems with their computers, the more they thought their computers had minds of their own and that the more often participants in group B had problems, the more likely they were to believe their computers had beliefs and desires.

The second experiment asked participants to judge the agency of gadgets that had each been assigned one of two descriptions. A gadget's description made it seem as though what it did was either within or outside the control of the user, but always described the same functionality. There were two groups of participants: they all saw the same set of gadgets, but with alternating sets of descriptions. After reading the descriptions, the participants were asked to rate how much control they thought they had over each gadget, and then to assess how much the gadget had a “mind of its own”, had “intentions, free will and consciousness” and appeared to experience emotions, in the same way they had rated how much control they thought they had over the gadget.

In alignment with their hypothesis, the participants rated the gadgets with low controllability as more anthropomorphic than those that were perceived to be easier to control.

The third experiment was essentially a replica of the second, but the participants were subjected to an fMRI scan while rating the gadgets. This was conducted because the researchers reasoned that people could be using mind as a metaphor for the behaviour they were seeing, rather than actually attributing minds to the objects. By determining the region of the brain in use when anthropomorphising takes place, they could rule out certain modes of thinking and give weight to a possible seat for anthropomorphism in the brain. The researchers propose, through reference to previous studies, that the superior temporal sulcus (STS) is involved in social or biological motion, that the medial prefrontal cortex (MPFC) is in use when considering people versus objects and when considering the mind of another, and that the amygdala, inferior parietal lobe and intraparietal sulcus are active when evaluating unpredictability. They therefore hypothesise that the MPFC will increase in activity when anthropomorphising.

The results of the experiment showed the ventral MPFC (vMPFC) to be the most active region, whereas the STS was not active.

The results also showed activation in a network of areas related to mentalising, which strongly resembles a circuit corresponding to processing of self-projection, mentalising and general social cognition, which is what would be expected for anthropomorphism.

This implies that unpredictable gadgets are perceived to have a mind, in an actual rather than metaphorical sense.

The results are inconsistent with the alternative hypotheses: attribution of mind to objects only related to social or biological motion analogies; that processing unpredictability is the cause of the activation; or that the activation is influenced by animism.

The fourth experiment asked participants to evaluate a robot that would answer yes/no questions the participants asked. There were three conditions that the participants were randomly assigned to: the condition where the robot answered yes as often as no, the condition where the robot answered no more often, and the condition where the robot answered yes more often. The second two conditions were the predictable conditions.

After asking the questions and receiving answers, the participants were asked to rate the robot on predictability, then on how much they thought it had free will, its own intentions, consciousness, desires, beliefs and the ability to express emotions. The participants were also asked to rate the robot on attractiveness, efficiency and strength. The ratings were done on a five point scale from “Not at all” (1) to “Extremely” (5).

Results from the experiment showed that participants in the predictable groups found the robot to be predictable, more so than those in the unpredictable group. Also, predictable-no was felt to be more predictable than predictable-yes.

Importantly, anthropomorphism was found to be more prevalent where the robot was found to be less predictable.

The only significant difference between the conditions and the non-anthropomorphic evaluation was that predictable-yes participants found the robot to be more attractive than predictable-no. The researchers do not discuss this finding. There was no significant interaction found between liking the robot and anthropomorphising it.

These results show people anthropomorphise unpredictable agents, and present a causal link between the two. This is important as the previous three experiments could be interpreted as a simple association rather than a clear cognitive process.

Experiment five gave some participants a motivation to predict the behaviour of a robot, while the others were asked to predict the behaviour without being motivated. The hypothesis was that increasing the motivation to understand, explain and predict an agent should increase anthropomorphism.

Participants evaluated a robot on a computer screen. They watched videos of the robot perform but not complete a task. Participants saw options of what the robot would do next and were asked to pick what they thought would happen. Participants in the motivation condition were offered $1 per correct answer. All participants then evaluated the robot's anthropomorphism. Finally the participants were shown the outcome, and compensated where necessary.

Results showed that motivated participants rated the robot as more anthropomorphic.

This shows that effectance motivation is increased when a person is motivated to understand an agent, and not simply controlled by the predictability of the agent.

The sixth and final experiment was predicated on the hypothesis that anthropomorphism should satisfy effectance motivation, i.e. anthropomorphism should satiate the motivation for mastery and make agents seem more predictable and understandable.

Participants evaluated four stimuli (dog, robot, alarm clock, shapes). Half of the participants were told to evaluate the dog and alarm clock objectively, and the robot and shapes in an anthropomorphic fashion, the other half were given the opposite instructions.

Each participant was shown a video of each stimulus three times. After the third time the participant was asked to evaluate the stimulus on two scales: the extent to which they understood the stimulus and the extent to which they felt capable of predicting its future behaviour.

The results showed that the dog and shapes were found to be easier to understand than the robot or alarm clock.

Importantly, participants perceived greater understanding and predictability of agents they had been told to anthropomorphise. The effect did not seem to depend on the group the participant was in.

This study implies that anthropomorphism satisfies effectance motivation.

It is clear from this paper that anthropomorphism is a natural part of human cognition that is used to make the behaviour of objects in the world around us seem more predictable, and thus give us a better sense of control. It also shows that there is a neurological basis for this behaviour; the brain is set up to anthropomorphise the world around us.

[1] Making Sense by Making Sentient: Effectance Motivation Increases Anthropomorphism. A. Waytz, C. K. Morewedge, N. Epley, G. Monteleone, J. H. Gao, J. T. Cacioppo. Journal of Personality and Social Psychology 2010, Vol.99, No.3, 410–435