Mojo's software development blog. I write software, mostly in object oriented C-like languages and JavaScript. I'm keeping a web log of my activities.

<h3>Angry man figures out how to make a bar chart in D3 v4 (2016-09-15)</h3>
<p><strong>Oh, hey!</strong> Apparently D3.js is really amazing because</p>
<blockquote>Modifying documents using the W3C DOM API is tedious: the method names are verbose, and the imperative approach requires manual iteration and bookkeeping of temporary state.</blockquote>
<p>Yeah, sure. Whatever. I need to draw a bar chart. Great. Super boring. Whatever. Is there a tutorial for that?</p>
<p><em>Quick Google search on D3 bar chart tutorials</em></p>
<p>Oh! Look at that. Fucking marvellous. <a href="http://bost.ocks.org/mike/bar/">A tutorial specifically about bar charts</a>. Made my day. Wait. WTF is this? Sideways bar charts? Who in their right mind does that? Useless. Oh. It's in three parts. <em> skips to the end </em> Cool. I'll copy/paste this into a browser and see what's up...</p>
<blockquote><pre>d3.scale is undefined</pre></blockquote>
<p>What? Code that doesn't work? What the fuck is wrong with these people? And why are they loading in a tsv? I'm getting my data from a server, numbnuts. Show me how to do it with json.</p>
<p><em>Heads over to the tutorial section of the official documentation</em></p>
<blockquote>Tutorials may not be up-to-date with the latest version 4.0 of D3</blockquote>
<p>Just great. No notes to tell me which work with the current version of the software. Fuckin A. Guess I'll write my own.</p>
<p>This is the data I'm working with. Pretty simple stuff.</p>
<pre class="prettyprint">
data.weekday_visits =
{
'Monday':132,
'Tuesday':140,
'Wednesday':159,
'Thursday':129,
'Friday':158,
'Saturday':132,
'Sunday':150,
}
</pre>
<p>Seems pretty reasonable, right? The first thing I need to do is change it to suit D3 almighty.</p>
<pre class="prettyprint">
var weekdayVisits = [];
var days = ['Monday', 'Tuesday', 'Wednesday', 'Thursday', 'Friday', 'Saturday', 'Sunday'];
var maxVisits = 0;
for(var i = 0; i < days.length; i++)
{
if(data.weekday_visits[days[i]] > maxVisits)
{
maxVisits = data.weekday_visits[days[i]];
}
weekdayVisits.push({'day': days[i], 'value': data.weekday_visits[days[i]]});
}
</pre>
<p>That's right. Now it's an array of dictionaries. Amazing.</p>
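<p>For what it's worth, the same reshaping can be done more tersely with <code>Array.prototype.map</code>. This is just a sketch, not what the post uses, with the sample <code>data</code> object inlined so it runs standalone:</p>

```javascript
// Sample input in the same shape as data.weekday_visits above.
var data = { weekday_visits: { 'Monday': 132, 'Tuesday': 140, 'Wednesday': 159,
                               'Thursday': 129, 'Friday': 158, 'Saturday': 132, 'Sunday': 150 } };
var days = ['Monday', 'Tuesday', 'Wednesday', 'Thursday', 'Friday', 'Saturday', 'Sunday'];

// Build the array of {day, value} objects D3 wants.
var weekdayVisits = days.map(function (day) {
  return { day: day, value: data.weekday_visits[day] };
});

// The largest value, for the y scale's domain.
var maxVisits = Math.max.apply(null, weekdayVisits.map(function (d) { return d.value; }));
console.log(maxVisits); // 159
```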
<p>Now I need to figure out how to draw a fucking chart. What does D3 even do, beyond the marketing babble they spout on the homepage? Apparently it can manipulate SVG into a bar chart, or so the out of date tutorial told me. </p>
<pre class="prettyprint">
<svg id="weekdayChart" width="400" height="400"></svg>
</pre>
<p>There, least I can do. So. Now what? Let's try drawing an axis. Should be easy, right? Good fucking luck. I've never used this library in my life. It's going to be a nightmare.</p>
<pre class="prettyprint">
var chart = document.getElementById('weekdayChart');
var yAxis = d3.axisLeft(maxVisits); //remember maxVisits from above? I bet you thought I was crazy. Yep, but also we need this. This is for the "scale" parameter
yAxis(chart); //That's all I have to do to add an axis to a chart? Nice!
</pre>
<p>Let's give that a test...</p>
<blockquote><pre>TypeError: n.domain is not a function</pre></blockquote>
<p>Bollocks. What is n? How do I set its domain? Fucksake. What does the <a href="https://github.com/d3/d3-axis/blob/master/README.md#_axis">API</a> say about axis domains? Fuck all. Great. Thanks a million for this really easy to use library. Of course searching brings up everything for version 3 and nothing for version 4 so that's as good as punching myself in the balls for half an hour. OK, so the out of date tutorial says something like: <code>y.domain(some function)</code> It's worth a shot.</p>
<pre class="prettyprint">
yAxis.domain(function(d){ return d.value; });
</pre>
<p> Aaaaand eval... </p>
<blockquote><pre>TypeError: yAxis.domain is not a function</pre></blockquote>
<p>Well, at least this is consistent with the API. What next? OK, what if scales are a type? Maybe that's what they mean in the API. OK, clicking on a few links seems to confirm that, although it's not clear what scale I should use. I'm guessing linear; what does the documentation say? Ah! They mention domains! Finally. Also something called ranges, which also appeared in the out of date tutorial. I'm sure this will fuck me up, too. </p>
<pre class="prettyprint">
var yAxis = d3.axisLeft(d3.scaleLinear().domain([0, maxVisits]));
</pre>
<p><strong>A NOPE!</strong></p>
<blockquote><pre>TypeError: g.selectAll is not a function</pre></blockquote>
<p>More mysterious error messages about objects I know nothing about. SO HELPFUL! OK, OK. I'll stop using the .min version.</p>
<blockquote><pre>TypeError: selection.selectAll is not a function</pre></blockquote>
<p>Oh yeah. So much more helpful now. I haven't selected anything, so I have no idea what that's about. OK, I'll dig around in the code like a monkey scrounging for shit. Apparently something called <code>context</code> needs to have a method called <code>selectAll</code> or have a member called <code>selection</code>. I'll change things up a bit then.</p>
<pre class="prettyprint">
var chart = d3.select('#weekdayChart');
</pre>
<p>Well, now it's running, but it doesn't... wait... what's that black spot? It's not a dead pixel! I think we've got something. I guess I need to give the scale a range now, so it'll stop being too tiny. I'll take my cue from the out of date tutorial (why am I doing this to myself?)</p>
<pre class="prettyprint">
var yAxis = d3.axisLeft(d3.scaleLinear().domain([0, maxVisits]).range([400, 0]));
</pre>
<p>Well hey! Look at that. I got a black line. Maybe I can do the same for the x axis.</p>
<pre class="prettyprint">
var yAxis = d3.axisLeft(d3.scaleLinear().domain([0, maxVisits]).range([0, 400]));
var xAxis = d3.axisBottom(d3.scaleBand().domain(days).range([0, 400]));
</pre>
<p>OK. The x axis looks a little dumb, but it's on the way to looking right. Now, though, my y axis is tiny again, or maybe completely invisible. Why the fuck has that happened?</p>
<p>Looking over the out of date tutorial, apparently I need a <code>g</code> for each axis. Fine.</p>
<pre class="prettyprint">
var yAxisHolder = chart.append('g');
var xAxisHolder = chart.append('g');
var yAxis = d3.axisLeft(d3.scaleLinear().domain([0, maxVisits]).range([0, 400]));
var xAxis = d3.axisBottom(d3.scaleBand().domain(days).range([0,400]));
yAxis(yAxisHolder);
xAxis(xAxisHolder);
</pre>
<p>OK. Now the y axis is back, buuuuuuuuuuuut the x axis is at the top. What the fuck? It's called <code>axisBottom</code>, so why's it at the top? Gahhhhhh! Looking at the out of date tutorial, the x axis holder needs to be translated to the bottom of the SVG. Great. But not completely to the bottom, because <strong>that</strong> hides the labels. Also, that means I need to change the range of the y axis to match the translation.</p>
<pre class="prettyprint">
var xAxisHolder = chart.append('g').attr("transform", "translate(0," + 380 + ")");
var yAxis = d3.axisLeft(d3.scaleLinear().domain([0, maxVisits]).range([0, 380]));
</pre>
<p>It turns out that the whole <code>axisBottom</code> vs <code>axisTop</code> thing determines whether the labels will be above or below the line. That is actually mentioned in the API, so fine, but also, that's a terrible name. Of course, adding numbers to the y axis means more fiddling with where everything is, but at least it's easy to add the numbers (see the <code>ticks</code> at the end of the axis declaration).</p>
<pre class="prettyprint">
var yAxisHolder = chart.append('g').attr("transform", "translate(30,10)");
var xAxisHolder = chart.append('g').attr("transform", "translate(30,390)");
var yAxis = d3.axisLeft(d3.scaleLinear().domain([0, maxVisits]).range([0, 380])).ticks();
var xAxis = d3.axisBottom(d3.scaleBand().domain(days).range([0,370]));
</pre>
<p>And I'll make the SVG bigger so it will all fit.</p>
<pre class="prettyprint">
<svg id="weekdayChart" width="400" height="410"></svg>
</pre>
<p>Huh. The numbers are going from 0 at the top. Why? I think I know this. This is the range thing. I have to put the numbers in backwards. That's in the out of date tutorial.</p>
<pre class="prettyprint">
var yAxis = d3.axisLeft(d3.scaleLinear().domain([0, maxVisits]).range([380,0])).ticks();
</pre>
<p>Yeah, that's fixed it.</p>
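<p>The reversed range makes sense once you remember that SVG's y coördinate grows downward from the top-left corner. A linear scale is just a straight-line map from domain to range, so <code>range([380, 0])</code> sends data value 0 to pixel 380 (the bottom) and the maximum to pixel 0 (the top). Here's a hand-rolled sketch of what <code>d3.scaleLinear</code> is doing under the hood (<code>makeScale</code> is my own stand-in, not a D3 function):</p>

```javascript
// Linear interpolation from domain to range, like d3.scaleLinear's default behaviour.
function makeScale(domain, range) {
  return function (value) {
    var t = (value - domain[0]) / (domain[1] - domain[0]);
    return range[0] + t * (range[1] - range[0]);
  };
}

var y = makeScale([0, 159], [380, 0]); // maxVisits is 159 in this post
console.log(y(0));   // 380 -- data 0 sits at the bottom of the chart
console.log(y(159)); // 0   -- the maximum sits at the top
```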
<p>OK. Now I want some bars. From the out of date tutorial, it looks like I want to add <code>rect</code> objects to the chart. This might actually work... <em>doesn't hold breath</em></p>
<pre class="prettyprint">
chart.selectAll(".bar")
.data(weekdayVisits)
.enter().append("rect")
.attr("class", "bar")
.attr("x", function(d) { return xScale(d.day); })
.attr("y", function(d) { return yScale(d.value); })
.attr("height", function(d) { return 390 - yScale(d.value); })
.attr("width", xScale.rangeBand());
</pre>
<p>Now we get the error:</p>
<blockquote><pre>TypeError: xScale.rangeBand is not a function</pre></blockquote>
<p>The API seems to suggest that <code>bandwidth</code> is the correct function name.</p>
<pre class="prettyprint">
chart.selectAll(".bar")
.data(weekdayVisits)
.enter().append("rect")
.attr("class", "bar")
.attr("x", function(d) { return xScale(d.day); })
.attr("y", function(d) { return yScale(d.value); })
.attr("height", function(d) { return 390 - yScale(d.value); })
.attr("width", xScale.bandwidth());
</pre>
<p>Well, the error is gone, and I do see some bars but they are <strong>way</strong> too big, and off centre. Let me just hack at these position values...</p>
<pre class="prettyprint">
chart.selectAll(".bar")
.data(weekdayVisits)
.enter().append("rect")
.attr("class", "bar")
.attr("x", function(d) { return 40 + xScale(d.day); })
.attr("y", function(d) { return 10 + yScale(d.value); })
.attr("height", function(d) { return 380 - yScale(d.value); })
.attr("width", xScale.bandwidth()-20);
</pre>
<p>Amazing. Now I have a bar chart. It only took two days of hacking to get here. Thanks to Mike Bostock for the out of date tutorial, and to the D3.js team for the API. Getting help from either was like panning for gold in a river: a lot of grit, but there are nuggets in there if you've got two days to spare.</p>
<p>See below for the full code:</p>
<h3>HTML</h3>
<pre class="prettyprint">
<svg id="weekdayChart" width="400" height="410"></svg>
<script type="text/javascript" src="d3.js"></script>
<script type="text/javascript" src="blogpost.js"></script>
</pre>
<h3>CSS</h3>
<pre class="prettyprint">
.bar { fill: steelblue; }
</pre>
<h3>JavaScript</h3>
<pre class="prettyprint">
data.weekday_visits =
{
'Monday':132,
'Tuesday':140,
'Wednesday':159,
'Thursday':129,
'Friday':158,
'Saturday':132,
'Sunday':150,
}
var weekdayVisits = [];
var days = ['Monday', 'Tuesday', 'Wednesday', 'Thursday', 'Friday', 'Saturday', 'Sunday'];
var maxVisits = 0;
for(var i = 0; i < days.length; i++)
{
if(data.weekday_visits[days[i]] > maxVisits)
{
maxVisits = data.weekday_visits[days[i]];
}
weekdayVisits.push({'day': days[i], 'value': data.weekday_visits[days[i]]});
}
var chart = d3.select('#weekdayChart');
var yAxisHolder = chart.append('g').attr("transform", "translate(30,10)");
var xAxisHolder = chart.append('g').attr("transform", "translate(30,390)");
var yScale = d3.scaleLinear().domain([0, maxVisits]).range([380,0]);
var xScale = d3.scaleBand().domain(days).range([0,370]);
var yAxis = d3.axisLeft(yScale).ticks();
var xAxis = d3.axisBottom(xScale);
yAxis(yAxisHolder);
xAxis(xAxisHolder);
chart.selectAll(".bar")
.data(weekdayVisits)
.enter().append("rect")
.attr("class", "bar")
.attr("x", function(d) { return 40 + xScale(d.day); })
.attr("y", function(d) { return 10 + yScale(d.value); })
.attr("height", function(d) { return 380 - yScale(d.value); })
.attr("width", xScale.bandwidth()-20);
</pre>
<h3>SVG output</h3>
<svg height="410" width="400" id="weekdayChart"><g transform="translate(30,10)" fill="none" font-size="10" font-family="sans-serif" text-anchor="end"><path class="domain" stroke="#000" d="M-6,380.5H0.5V0.5H-6"/><g class="tick" opacity="1" transform="translate(0,380)"><line stroke="#000" x2="-6" y1="0.5" y2="0.5"/><text fill="#000" x="-9" y="0.5" dy="0.32em">0</text></g><g class="tick" opacity="1" transform="translate(0,329.3333333333333)"><line stroke="#000" x2="-6" y1="0.5" y2="0.5"/><text fill="#000" x="-9" y="0.5" dy="0.32em">20</text></g><g class="tick" opacity="1" transform="translate(0,278.6666666666667)"><line stroke="#000" x2="-6" y1="0.5" y2="0.5"/><text fill="#000" x="-9" y="0.5" dy="0.32em">40</text></g><g class="tick" opacity="1" transform="translate(0,228)"><line stroke="#000" x2="-6" y1="0.5" y2="0.5"/><text fill="#000" x="-9" y="0.5" dy="0.32em">60</text></g><g class="tick" opacity="1" transform="translate(0,177.33333333333334)"><line stroke="#000" x2="-6" y1="0.5" y2="0.5"/><text fill="#000" x="-9" y="0.5" dy="0.32em">80</text></g><g class="tick" opacity="1" transform="translate(0,126.66666666666669)"><line stroke="#000" x2="-6" y1="0.5" y2="0.5"/><text fill="#000" x="-9" y="0.5" dy="0.32em">100</text></g><g class="tick" opacity="1" transform="translate(0,76)"><line stroke="#000" x2="-6" y1="0.5" y2="0.5"/><text fill="#000" x="-9" y="0.5" dy="0.32em">120</text></g><g class="tick" opacity="1" transform="translate(0,25.333333333333314)"><line stroke="#000" x2="-6" y1="0.5" y2="0.5"/><text fill="#000" x="-9" y="0.5" dy="0.32em">140</text></g></g><g transform="translate(30,390)" fill="none" font-size="10" font-family="sans-serif" text-anchor="middle"><path class="domain" stroke="#000" d="M0.5,6V0.5H370.5V6"/><g class="tick" opacity="1" transform="translate(26.428571428571427,0)"><line stroke="#000" y2="6" x1="0.5" x2="0.5"/><text fill="#000" y="9" x="0.5" dy="0.71em">Monday</text></g><g class="tick" opacity="1" 
transform="translate(79.28571428571428,0)"><line stroke="#000" y2="6" x1="0.5" x2="0.5"/><text fill="#000" y="9" x="0.5" dy="0.71em">Tuesday</text></g><g class="tick" opacity="1" transform="translate(132.14285714285714,0)"><line stroke="#000" y2="6" x1="0.5" x2="0.5"/><text fill="#000" y="9" x="0.5" dy="0.71em">Wednesday</text></g><g class="tick" opacity="1" transform="translate(184.99999999999997,0)"><line stroke="#000" y2="6" x1="0.5" x2="0.5"/><text fill="#000" y="9" x="0.5" dy="0.71em">Thursday</text></g><g class="tick" opacity="1" transform="translate(237.85714285714283,0)"><line stroke="#000" y2="6" x1="0.5" x2="0.5"/><text fill="#000" y="9" x="0.5" dy="0.71em">Friday</text></g><g class="tick" opacity="1" transform="translate(290.7142857142857,0)"><line stroke="#000" y2="6" x1="0.5" x2="0.5"/><text fill="#000" y="9" x="0.5" dy="0.71em">Saturday</text></g><g class="tick" opacity="1" transform="translate(343.57142857142856,0)"><line stroke="#000" y2="6" x1="0.5" x2="0.5"/><text fill="#000" y="9" x="0.5" dy="0.71em">Sunday</text></g></g><rect class="bar" x="40" y="22.666666666666686" height="367.3333333333333" width="32.857142857142854"/><rect class="bar" x="92.85714285714286" y="10" height="380" width="32.857142857142854"/><rect class="bar" x="145.71428571428572" y="91.06666666666666" height="298.93333333333334" width="32.857142857142854"/><rect class="bar" x="198.57142857142856" y="40.39999999999998" height="349.6" width="32.857142857142854"/><rect class="bar" x="251.42857142857142" y="12.53333333333336" height="377.46666666666664" width="32.857142857142854"/><rect class="bar" x="304.2857142857143" y="10" height="380" width="32.857142857142854"/><rect class="bar" x="357.1428571428571" y="10" height="380" width="32.857142857142854"/></svg>
<h3>Moral Objectivism (2016-01-24)</h3>
A friend of mine asked me to write about the existence of an objective morality. He said I had done something about it for my MSc, but I can't remember it; that was 10 years ago. This is a new take, I guess.<br />
<br />
I am an atheist, but I will try to address my understanding from both the "there is a supernatural being that knows how everyone should behave" standpoint and the atheist standpoint.<br />
<br />
<h3>
There is a being that judges humans good or bad</h3>
Let's assume that one of the religions that says there is a being that knows right from wrong, e.g. Santa Claus, is right. Whatever that being declares to be the ultimate moral truth is just that.<br />
<br />
So then that would mean there are objective morals.<br />
<br />
OK, but since the being is intangible, and not everyone believes in that being, how can a human know what is and isn't moral?<br />
<br />
Those who have faith say that their being (or beings) has instructed them in the way. There is more than one faith, so which is right? Also, faith comes from within, and therefore is subjective.<br />
<br />
For some reason humans have to decide for themselves which of the beings is correct. Thus, from a human standpoint, morals must be subjective, because humans cannot objectively know the right way from the wrong way. If that could be shown, we would all know whether what we were doing was objectively right or wrong; we could all agree about it, so we would know whether someone was "bad" or not. Since we don't all agree about what is right or wrong, bad or good, the true path isn't self evident.<br />
<br />
<h3>
Atheists are right</h3>Let's assume there is no being that has moral oversight of the universe. Then morality must be subjective; each person has their own idea of right and wrong.<br />
Let's assume there is no being that has moral oversight of the universe. So, then morality must be subjective. Each person has their own idea of right and wrong.<br />
<br />
If that is the case then why do we have laws? Why do we have power structures? Why are there any rules at all in society?<br />
<br />
Two reasons: to aid in coöperation and to reïnforce the power structure.<br />
<br />
In the society of the UK as I am writing this, if I want something somebody else owns:<br />
<ul>
<li>I can buy it, if they are willing to sell it</li>
<li>I can be given it, if they are willing to give it away</li>
<li>I can steal it.</li>
</ul>
We class the first two as morally correct and the third as morally incorrect. Except when we don't. If the thing has come into the possession of the other person by immoral means, e.g. if they stole it, then my buying or receiving it is deemed immoral. If the possessor stole the object and I steal it and give it back to the original owner, I am deemed to have behaved morally.<br />
<br />
Stealing, then, is seen as morally incorrect because it reduces the likelihood of coöperation in the future (people protect their possessions more and don't share as much), and also because people with useful possessions have power over those who need to use them, so stealing breaks the power structure.<br />
<br />
(This is also why gift giving to a stranger is seen as weaker than selling things to them, because without an exchange of money, the power structure is changed, so the person who has given is perceived as weaker and weakness is perceived as bad.)<br />
<br />
So are increasing coöperation and reïnforcing power structures moral absolutes? Should we always seek to do these things?<br />
<br />
<br />
If being morally correct is a choice, then no.<br />
<br />
<br />
Sometimes increasing coöperation will change the current power structure. Sometimes enforcing the current power structure will decrease coöperation.<br />
<br />
Let's say, for example, you want to increase the coöperation of environmentalists with oil prospectors.<br />
<br />
Environmentalists (E) believe oil prospectors (OP) are wrong because E believe that the results of the OP will damage the environment, which is anti-coöperation, as it hurts lots of people, and will result in a complete collapse of the current power systems.<br />
<br />
OP believe that E are wrong because OP's results keep the current power systems running, and therefore represent maximum coöperation.<br />
<br />
Forcing one to coöperate with the other will cause a change in the power structure, because they will both have to give way, and so lose power. <br />
<br />
Humans have very low predictive power. We struggle to accurately predict the repercussions of anything we do further ahead than a year, and most of the time we don't even try for further than a moment.<br />
<br />
That's natural, due to how many moving parts our universe has.<br />
<br />
Neither OP nor E can be sure about the long term outcome of their actions, so their moral stances are both relative to their understandings of the situation.<br />
<br />
<h3>
Conclusion</h3>
In the grand scheme of things, due to the human race's lack of understanding of the universe, moral decision-making is relative: we don't know if the universe is for anything, so we cannot know whether our actions improve or impede the chance of universal success.<br />
<br />
If the universe isn't for anything, then morals are a personal thing.<br />
<br />
If the universe is under the moral purview of some being or beings then absolute morals are their concern, but not something we can divine.

<h3>I hate Drupal (2015-11-27)</h3>
To the tune of Tigger's Song<br />
<br />
<em>The terrible thing about drupal<br />Is drupal's a terrible thing<br />Its frontend is sluggish and clunky<br />Its backend keeps making me scream<br />It's inconsistent, bugs persistent</em><br /><em>What's doc-u-men-ta-tion?</em><br /><em>But the most terrible thing about drupal</em><br /><em>Is I am using it.</em><br />
<br />
<em>I</em><br />
<br />
<em>am using it. </em>

<h3>Conversions between one dimensional pixel arrays and two dimensional coördinates (2015-08-26)</h3>
If you have an image as a one dimensional array of pixels, organised by width, you can calculate the x, y coördinate of a particular index like so:<br />
<br />
x = index modulo width<br />
y = index ÷ width<br />
<br />
Where the division is an integer division.<br />
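In JavaScript, where integer division has to be done explicitly with <code>Math.floor</code>, both conversions (including the inverse given below) can be sketched like this. The function names and the 640-pixel width are my own, purely for illustration:<br />

```javascript
// Index -> (x, y), assuming a row-major (organised-by-width) pixel array.
function indexToCoord(index, width) {
  return { x: index % width, y: Math.floor(index / width) };
}

// (x, y) -> index: the inverse conversion.
function coordToIndex(x, y, width) {
  return x + y * width;
}

// Round trip on a hypothetical 640-pixel-wide image:
var c = indexToCoord(1337, 640);
console.log(c.x, c.y);                    // 57 2
console.log(coordToIndex(c.x, c.y, 640)); // 1337
```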
<br />
To convert to an array index from an x, y coördinate:<br />
<br />
index = x + (y × width)

<h3>More AI Ramblings (2013-06-24)</h3>
(This is technically after lunch, don't judge me!)<br />
<br />
The other problem I thought up today was that I don't really know what each node of the distributed processes would do.<br />
<br />
Should they all do the same thing? Neurons are all basically the same, should each processing unit be basically the same? Or should a unit be more like a part of the brain? So a unit would be like Broca's area, or the visual cortex, etc. But not those things exactly because they are human building blocks, not blocks of the programme.<br />
<br />
Let's say there are N types of block. That is analogous to the modular approach to neurology and cognitive psychology. But they need to be resilient to defects, so any block's functions should, over time, be transferable to some other block, as required. This is sort of analogous to the monolithic approach. Meeting in the middle seems par for the course with psychology. Shades of grey are inherent in defining consciousness. Maybe it's even more exciting than a grey scale; perhaps it involves all values of colour.<br />
<br />
So the blocks' functions would be mutable. That's a scary thought, but practical.<br />
<br />
I shall carry this on after today.

<h3>AI Rambling (2013-06-02)</h3>
<p>I've always had delusions of grandeur. That's what inspired me to start this blog: that I would be able to chronicle my development of AI software. Of course I have since found that I'm not quite smart enough to do that. However, today I will indulge myself.<br />
I have recently been reading New Scientist articles on consciousness and the analogical nature of thought.
</p>
<p>
Firstly: It (finally?) occurred to me that AI should be layered. I had always had in mind a distributed model, but it had always been on one layer. But if some processes seemed "unconscious" and others "conscious", for example the aggregation of input vs the perception of input, then it would be easier to combine input into conscious thoughts because of the specialised nature of the "conscious" and "unconscious" processing units. The AI would only have thoughts that made sense to it (so the theory goes).
</p>
<p>
The distributed model would have various components, trying to produce analogies of things like "the seat of consciousness". Which brings me to my second thought: how analogy fits in. Douglas Hofstadter says that "Analogy is the machinery that allows us to use our past fluidly to orient ourselves in the present." So analogy can be used not only as the storage mechanism for thought, but as the transport too. It's important to remember that "storage" is shorthand not only for long term or short term memories, but also for the information currently being processed. The thought currently bubbling through your prefrontal cortex, etc. is analogical.
</p>
<p>
The problem for me is trying to figure out what to base analogies for a programme on. Humans use feeling. What is a good analogy for feeling in a programme? Processor strain? Amount of memory being used? What are good feelings? What are bad feelings? Could it be some sort of arbitrary value? I don't think an arbitrary value would work because of its ungrounded nature. I would prefer things based on what represents reality for the programme, not some abstract idealism concocted by my imagination. What feels good or bad for a programme won't be the same as what feels good or bad for me, but there will be a way to link them together through analogy.
</p>
<p>
I think I will continue this after lunch.
</p>

<h3>Sparse Matrix Multiplication (2012-06-02)</h3>
<p>I want the <a href="http://mathnetnumerics.codeplex.com/">Math.NET Numerics</a> developers to know their work is great; they put together an easy to use, astoundingly well documented numerical library for .NET. Please know this little criticism comes from a place of respect. It could even be that the code has been updated since your last release and what I'm going to point out is no longer a problem.</p>
<p>I really don't know much about calculus and mathematics at that level. I barely passed <a href="http://en.wikipedia.org/wiki/GCE_Advanced_Level">A-level</a> maths, and the only time I've used any of the knowledge gained therein was when I had to calculate the first derivative of 1-e<sup>-x</sup> at university. My mathematics skills are weak (sadly). So, when, in mid-April, I was asked at work to implement some maths heavy algorithms, I felt suitably challenged. Thankfully the scientist who was feeding me the algorithms understood them really well and was on hand to explain things to me over and over again until we finally got things working yesterday. Yay!</p>
<p>Some of what we did relied on <a href="http://en.wikipedia.org/wiki/Sparse_matrix">sparse matrices</a>, something I had heard of, but never used. So my first thought was that I needed a third party library to do these calculations. The library we are currently using is the <a href="http://www.bluebit.gr/net/">bluebit .NET matrix library</a>; it's not perfect and we'll have to replace it with something faster, but for the moment it makes the code testable. This library was not my first choice; ideally I wanted something we didn't have to pay for. My first stop was the Math.NET Numerics library. This, unfortunately, proved to be too slow. I also tried out <a href="http://www.extremeoptimization.com/">Extreme Optimization</a>, but this library was also too slow. Other libraries I looked at were <a href="http://ilnumerics.net/">ILNumerics</a>, <a href="http://www.roguewave.com/products/imsl-numerical-libraries/.net-library.aspx">IMSL.NET</a> and <a href="http://www.centerspace.net/">Center Space NMath</a>. I looked but I did not test these last three, because each library's API and help were so bad I couldn't figure out how to do what I needed to do. I don't have time to figure out matrix maths; this is why I'm looking for a library. If you want me to choose yours, make it easy to use.</p>
<p>
So that was the bulk of the outcome of my foray in numerical libraries. Bluebit is my current choice, but I will have to change it for something faster. This is not the only thing I learned. I learned something that I hope, if they haven't already, the Math.NET developers will be able to use in their code. I've not time to dive into the project, and patch it myself — as I've said, my understanding of the maths is not great — so feel free to take the code here and fix it to work in the library.
</p>
<p>At work I'm dealing with quite large matrices. The stuff I've been testing with is 8K x 8K points, and the real data will probably be up to 32K x 32K. But these are sparse matrices, so working with them should not be too processor and memory intensive. The major things I need to do are transposition, multiplication and inversion. Inversion is the killer, and understanding it is currently over my head. It's the place where Extreme Optimization fell down, and where bluebit struggles. I need the algorithms to run in a few seconds. Currently, with 16K x 16K points and bluebit, it's taking 2 minutes. The algorithm did not complete with the other two libraries. I waited for over half an hour, and still nothing, and that was with 8K data.</p>
<p>The first problem that Math.NET encountered was with the multiplication of the matrices. This is what I hope I've optimised. All I've done is profile their code and change the bit that took forever: assigning data to a point in the matrix.</p>
<p>My first step was to write these two tests, to make sure I was multiplying the matrices correctly:</p>
<pre class="prettyprint">
[Test]
public void MatrixMultiplication()
{
var leftM = new double[,] {{4, 5, 6, 7, 8, 1, 2}, {3, 9, 6, 7, 3, 3, 1}, {2, 2, 8, 4, 1, 8, 1}, {1, 9, 9, 4, 3, 1, 2}};
var rightM = new double[,] {{1, 8, 1}, {2, 6, 2}, {3, 4, 1}, {4, 2, 2}, {5, 1, 1}, {6, 3, 2}, {7, 5, 1}};
var expectedM = new double[,] {{120, 121, 46}, {107, 133, 51}, {106, 98, 40}, {97, 122, 43}};
var sm = new SparseMatrix();
var resultM = sm.MultiplyMatrices(leftM, rightM);
Assert.AreEqual(expectedM.Rank, resultM.Rank);
Assert.AreEqual(expectedM.GetLength(0), resultM.GetLength(0));
Assert.AreEqual(expectedM.GetLength(1), resultM.GetLength(1));
for(int row = 0; row < 4; row++)
{
for(int col = 0; col < 3; col++)
{
Assert.AreEqual(expectedM[row, col], resultM[row, col]);
}
}
}
[Test]
public void SparseMatrixMultiplication()
{
var leftM = new double[,] {{1,2,3,0,0,0,0,0,0,0}, {0,0,0,0,0,1,2,0,0,0}, {1,0,4,0,0,5,0,0,0,0}, {0,4,0,5,0,6,0,0,7,0}, {9,0,0,0,0,0,8,0,0,0}};
var rightM = new double[,] {{0,2,0,4,0}, {1,0,0,1,1}, {3,0,1,3,0}, {4,0,0,0,0}, {0,5,6,0,0}, {0,9,0,6,0}, {0,1,0,3,0}, {0,0,8,0,9}, {0,0,0,0,7}, {0,1,0,0,5}};
var expectedM = new double[,] {{11,2,3,15,2}, {0,11,0,12,0}, {12,47,4,46,0}, {24,54,0,40,53}, {0,26,0,60,0}};
var sm = new SparseMatrix();
var resultM = sm.MultiplyMatrices(leftM, rightM);
for (int row = 0; row < 5; row++)
{
for (int col = 0; col < 5; col++)
{
Assert.AreEqual(expectedM[row, col], resultM[row, col]);
}
}
}
</pre>
<p>(SparseMatrix isn't really the name of the class, I put the multiplication into the class that was handling the algorithm, but I'm not allowed to talk about that!)</p>
<p>Then I spent ages struggling with the Math.NET code (because of my ignorance - the code is easy to read) to try to understand sparse matrix multiplication: how it could be faster than normal matrix multiplication, and how I could implement it faster. It took a couple of days. I spent that time, rather than giving up and finding a proprietary library right away, because I thought that Math.NET would do the business when it came to inversion. Sadly this isn't the case. Anyway, this is my optimised sparse matrix multiplication method:</p>
<pre class="prettyprint">
private IEnumerable<int> GetNonZeroIndicesForMatrixColumn(double[,] matrix, long col, int rowcount)
{
for (int row = 0; row < rowcount; row++)
{
if (matrix[row, col] != 0)
{
yield return row;
}
}
}
private IEnumerable<int> GetNonZeroIndicesForMatrixRow(double[,] matrix, int row, int colcount)
{
for (int col = 0; col < colcount; col++)
{
if (matrix[row, col] != 0)
{
yield return col;
}
}
}
/// <summary>
/// Matrix multiplication optimised for sparse matrices
/// </summary>
/// <param name="matrix1">Matrix on the left of the multiplication</param>
/// <param name="matrix2">Matrix on the right of the multiplication</param>
/// <returns>A matrix that is the multiplication of the two passed in</returns>
public double[,] MultiplyMatrices(double[,] matrix1, double[,] matrix2)
{
int j = matrix1.GetLength(1);
if (j != matrix2.GetLength(0))
{
throw new ArgumentException("matrix1 must have the same number of columns as matrix2 has rows.");
}
int m1Rows = matrix1.GetLength(0);
int m2Cols = matrix2.GetLength(1);
double[,] result = new double[m1Rows, m2Cols];
var nonZeroRows = new List<int>[m1Rows];
Parallel.For(0, m1Rows, row =>
{
nonZeroRows[row] = GetNonZeroIndicesForMatrixRow(matrix1, row, j).ToList();
});
var nonZeroColumns = new List<int>[m2Cols];
Parallel.For(0, m2Cols, col =>
{
nonZeroColumns[col] = GetNonZeroIndicesForMatrixColumn(matrix2, col, j).ToList();
});
Parallel.For(0, m1Rows , row =>
{
Parallel.For(0, m2Cols, column =>
{
var ns = nonZeroColumns[column].Intersect(nonZeroRows[row]);
double sum = ns.Sum(n => matrix1[row, n] * matrix2[n, column]);
result[row, column] = sum;
});
});
return result;
}
</pre>
<p>As you can see, there is a lot of reliance on the parallel methods that come with .NET 4. That, coupled with the trick of getting the intersection of the non-zeros in the rows of the left matrix with the columns of the right matrix, seems to be the major advantage of my method over Math.NET, because their assignments can't be done in parallel. This could be to do with Silverlight compatibility issues, I don't know. I don't have to worry about Silverlight.</p>
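<p>To make the intersection trick concrete, here is a small self-contained sketch (my own illustration, not the production code) of how the pre-computed non-zero index lists cut down the work for a single output cell:</p>

```csharp
using System;
using System.Linq;

static class IntersectionDemo
{
    // Dot product of a matrix row and a matrix column, visiting only
    // the indices that are non-zero in *both* vectors.
    public static double DotSparse(double[] leftRow, double[] rightCol)
    {
        var leftNonZeros = Enumerable.Range(0, leftRow.Length).Where(i => leftRow[i] != 0);
        var rightNonZeros = Enumerable.Range(0, rightCol.Length).Where(i => rightCol[i] != 0);
        // Indices where either side is zero contribute nothing to the sum.
        return leftNonZeros.Intersect(rightNonZeros).Sum(i => leftRow[i] * rightCol[i]);
    }

    static void Main()
    {
        // Non-zeros at {0, 2, 5} on the left and {2, 5, 6} on the right,
        // so only indices 2 and 5 are multiplied: 3*1 + 5*9 = 48.
        double cell = DotSparse(new double[] { 1, 0, 3, 0, 0, 5, 0 },
                                new double[] { 0, 0, 1, 0, 0, 9, 7 });
        Console.WriteLine(cell);
    }
}
```

<p>The sparser the matrices, the smaller the intersection, which is why the gain grows as the non-zero count drops.</p>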
<p>I have run a benchmark for my code. I created a 5000 x 5000 point matrix and filled it at random points with random data (well, pseudo-random). I benchmarked at 5, 50, 150 and 500 non-zero items per row. I ran the test 10 times, to get a mean. The table shows the results:</p>
<table>
<thead>
<tr>
<th>Number of non-zeros per row</th><th>Mean seconds taken to multiply</th><th>Standard Deviation</th>
</tr>
</thead>
<tbody>
<tr>
<td>5</td><td>6.24465716</td><td>0.1037383251</td>
</tr>
<tr>
<td>50</td><td>51.10972332</td><td>0.8521258197</td>
</tr>
<tr>
<td>150</td><td>93.29733629</td><td>77.751344564</td>
</tr>
<tr>
<td>500</td><td>13.18435411</td><td>6.4991175895</td>
</tr>
</tbody>
</table>
<p>I find it strange that the standard deviation for the 150 condition is so high. If anyone can see a problem in my code, I'd be really happy to hear it! The full test is below:</p>
<span onclick="toggleCode('testCode')" style="color: blue; cursor: pointer; text-decoration: underline;">toggle test code</span>
<pre id="testCode" class="prettyprint hideCode">
using System;
using System.Collections.Generic;
using System.IO;
using System.Linq;
using System.Threading.Tasks;
namespace Math.NetBenchmark
{
class Program
{
private static Random _r = new Random();
static void Main(string[] args)
{
const int rows = 5000;
const int cols = 5000;
var nonzerosPerRow = new [] {5, 50, 150, 500};
Console.WriteLine("started");
using (var sw = new StreamWriter("MyMX10.results"))
{
sw.WriteLine("Number of non-zeros,Time taken");
foreach (var nzpr in nonzerosPerRow)
{
Console.Write(nzpr+" - making left");
var left = MakeMatrix(rows, cols, nzpr);
Console.Write("making right");
var right = MakeMatrix(rows, cols, nzpr);
Console.Write("multiplying...");
var startTime = DateTime.Now;
MultiplyMatrices(left, right);
var endTime = DateTime.Now;
var diff = endTime - startTime;
sw.WriteLine(nzpr + "," + diff.TotalSeconds);
Console.WriteLine("done");
}
}
Console.WriteLine("done");
}
private static double[,] MakeMatrix(int rows, int cols, int nonzerosPerRow)
{
var result = new double[rows, cols];
var colsPoss = Enumerable.Range(0, cols).ToArray();
Parallel.For(0, rows, iRow =>
{
// System.Random isn't thread-safe, so each row gets its own instance
var r = new Random(unchecked(Environment.TickCount * 31 + iRow));
var posleft = colsPoss;
Console.Write(".");
for (int i = 0; i < nonzerosPerRow; i++)
{
int posindex = r.Next(posleft.Length);
int index = posleft[posindex];
result[iRow, index] = 1 + r.NextDouble();
posleft = posleft.Take(index).Concat(posleft.Skip(index + 1)).ToArray();
}
});
return result;
}
private static IEnumerable<int> GetNonZeroIndicesForMatrixColumn(double[,] matrix, long col, int rowcount)
{
for (int row = 0; row < rowcount; row++)
{
if (matrix[row, col] != 0)
{
yield return row;
}
}
}
private static IEnumerable<int> GetNonZeroIndicesForMatrixRow(double[,] matrix, int row, int colcount)
{
for (int col = 0; col < colcount; col++)
{
if (matrix[row, col] != 0)
{
yield return col;
}
}
}
public static double[,] MultiplyMatrices(double[,] matrix1, double[,] matrix2)
{
int j = matrix1.GetLength(1);
if (j != matrix2.GetLength(0))
{
throw new ArgumentException("matrix1 must have the same number of columns as matrix2 has rows.");
}
int m1Rows = matrix1.GetLength(0);
int m2Cols = matrix2.GetLength(1);
double[,] result = new double[m1Rows, m2Cols];
var nonZeroRows = new List<int>[m1Rows];
Parallel.For(0, m1Rows, row =>
{
nonZeroRows[row] = GetNonZeroIndicesForMatrixRow(matrix1, row, j).ToList();
});
var nonZeroColumns = new List<int>[m2Cols];
Parallel.For(0, m2Cols, col =>
{
nonZeroColumns[col] = GetNonZeroIndicesForMatrixColumn(matrix2, col, j).ToList();
});
Parallel.For(0, m1Rows, row =>
{
Parallel.For(0, m2Cols, column =>
{
var ns = nonZeroColumns[column].Intersect(nonZeroRows[row]);
double sum = ns.Sum(n => matrix1[row, n] * matrix2[n, column]);
result[row, column] = sum;
});
});
return result;
}
}
}
</pre>Mojohttp://www.blogger.com/profile/11704648978910126233noreply@blogger.com1tag:blogger.com,1999:blog-34763567.post-74772561013064784742012-04-27T21:48:00.001+00:002012-04-27T21:48:23.219+00:00Object Thinking - Anthropomorphism<p>This follows on from <a href="http://mojoai.blogspot.com/2011/10/object-thinking-objects-have-actions.html">Object Thinking - Objects have actions</a></p>
<p>Anthropomorphism is essential for object thinking to take place. Anthropomorphism is when a person attributes human mental states to other, non-human, things. Attributing human-like mental states to objects allows a programmer to treat the object as an agent, rather than something inanimate, and so bestow appropriate behaviours upon it, allowing it to act appropriately within the application and interact with other objects. The amount of responsibility that you want an object to have will reflect how much you anthropomorphise it. It is important not to give an object too much responsibility, as explained by the Single Responsibility Principle.</p>
<p>That anthropomorphism occurs is so obvious it doesn't need investigating! So, obviously, it has been researched by a huge number of people. The paper being looked at here - Making Sense by Making Sentient: Effectance Motivation Increases Anthropomorphism, by Waytz et al. in 2010[1] - is one that attempts to explain why and how people anthropomorphise.</p>
<p>Their hypothesis is that one of the reasons people anthropomorphise objects is to satisfy their effectance motivation: the motivation to interact effectively with the world around them. The researchers conduct six experiments based on this hypothesis.</p>
<p>The first experiment asked participants to rate their computers. Half of the participants (A) were asked to rate how much they felt their computer has a mind of its own. The other half (B) were asked to rate how much their computer appeared to behave as if it has its own beliefs and desires. Both sets were asked how often they had problems with the computer or its software. The hypothesis for the study is that the more problems people have with their computer, the more they will anthropomorphise it.</p>
<p>Results showed that, in accordance with the hypothesis, the more often participants in group A had problems with their computers, the more they thought their computers had minds of their own and that the more often participants in group B had problems, the more likely they were to believe their computers had beliefs and desires.</p>
<p>The second experiment asked participants to judge the agency of gadgets, each of which had been assigned one of two descriptions. The gadget's description made it seem as though what it did was either within or outside the control of the user, but always described the same functionality. There were two groups of participants. They all saw the same set of gadgets, but with alternating sets of descriptions. After reading the descriptions, the participants were asked to rate how much control they thought they had over the gadgets, and then to assess how much each gadget had a “mind of its own”, had “intentions, free will and consciousness” and appeared to experience emotions, on the same scale they had used to rate how much control they thought they had over the gadget.</p>
<p>In alignment with their hypothesis, the participants rated the gadgets with low controllability as more anthropomorphic than those that were perceived to be easier to control.</p>
<p>The third experiment was essentially a replica of the second, but the participants were subject to an fMRI scan while rating the gadgets. This was conducted because the researchers reasoned that people could be using mind as a metaphor for the behaviour they were seeing, rather than actually attributing minds to the objects. By determining the region of the brain in use when anthropomorphising takes place they could rule out certain modes of thinking and give weight to a possible seat for anthropomorphism in the brain. The researchers propose, through reference to previous studies, that the superior temporal sulcus (STS) is involved in social or biological motion, the medial prefrontal cortex (MPFC) is in use when considering people vs objects and considering the mind of another, and the amygdala, inferior parietal lobe and intraparietal sulcus are active when evaluating unpredictability. They therefore hypothesise that the MPFC will increase in activity when anthropomorphising.</p>
<p>The results of the experiment showed the ventral MPFC (vMPFC) to be the most active region, whereas the STS was not active.</p>
<p>The results also showed activation in a network of areas related to mentalising, which strongly resembles a circuit corresponding to processing of self-projection, mentalising and general social cognition, which is what would be expected for anthropomorphism.</p>
<p>This implies that unpredictable gadgets are perceived to have a mind, in an actual rather than metaphorical sense.</p>
<p>The results are inconsistent with the alternative hypotheses: attribution of mind to objects only related to social or biological motion analogies; that processing unpredictability is the cause of the activation; or that the activation is influenced by animism.</p>
<p>The fourth experiment asked participants to evaluate a robot that would answer yes/no questions the participants asked. There were three conditions that the participants were randomly assigned to: the condition where the robot answered yes as often as no, the condition where the robot answered no more often, and the condition where the robot answered yes more often. The second two conditions were the predictable conditions.</p>
<p>After asking the questions and receiving answers, the participants were asked to rate the robot on predictability, then on how much they thought it had free will, its own intentions, consciousness, desires, beliefs and the ability to express emotions. The participants were also asked to rate the robot on attractiveness, efficiency and strength. The ratings were done on a five point scale from “Not at all” (1) to “Extremely” (5).</p>
<p>Results from the experiment showed that participants in the predictable groups found the robot to be predictable, more so than those in the unpredictable group. Also, predictable-no was felt to be more predictable than predictable-yes.</p>
<p>Importantly, anthropomorphism was found to be more prevalent where the robot was found to be less predictable.</p>
<p>The only significant difference between the conditions and the non-anthropomorphic evaluation was that predictable-yes participants found the robot to be more attractive than predictable-no. The researchers do not discuss this finding. There was no significant interaction found between liking the robot and anthropomorphising it.</p>
<p>These results show people anthropomorphise unpredictable agents, and present a causal link between the two. This is important as the previous three experiments could be interpreted as a simple association rather than a clear cognitive process.</p>
<p>Experiment five gave some participants motivation to predict the behaviour of a robot, while the others were asked to predict the behaviour without being motivated. The hypothesis was that motivation to understand, explain and predict an agent should increase anthropomorphism of it.</p>
<p>Participants evaluated a robot on a computer screen. They watched videos of the robot perform but not complete a task. Participants saw options of what the robot would do next and were asked to pick what they thought would happen. Participants in the motivation condition were offered $1 per correct answer. All participants then evaluated the robot's anthropomorphism. Finally the participants were shown the outcome, and compensated where necessary.</p>
<p>Results showed that motivated participants rated the robot as more anthropomorphic.</p>
<p>This shows that effectance motivation is increased when a person is motivated to understand an agent, and not simply controlled by the predictability of the agent.</p>
<p>The sixth and final experiment was predicated on the hypothesis that anthropomorphism should satisfy effectance motivation, i.e. anthropomorphism should satiate the motivation for mastery and make agents seem more predictable and understandable.</p>
<p>Participants evaluated four stimuli (dog, robot, alarm clock, shapes). Half of the participants were told to evaluate the dog and alarm clock objectively, and the robot and shapes in an anthropomorphic fashion; the other half were given the opposite instructions.</p>
<p>Each participant was shown a video of each stimulus three times. After the third time the participant was asked to evaluate the stimulus on two scales: the extent to which they understood the stimulus and the extent to which they felt capable of predicting its future behaviour.</p>
<p>The results showed that the dog and shapes were found to be easier to understand than the robot or alarm clock.</p>
<p>Importantly, participants perceived greater understanding and predictability of agents they had been told to anthropomorphise. The effect did not seem to depend on the group the participant was in.</p>
<p>This study implies that anthropomorphism satisfies effectance motivation.</p>
<p>It is clear from this paper that anthropomorphism is a natural part of human cognition, used to make the behaviour of objects in the world around us seem more predictable and thus give us a better sense of control. It also shows that there is a neurological basis for this behaviour; the brain is set up to anthropomorphise the world around us.</p>
<p>[1] Making Sense by Making Sentient: Effectance Motivation Increases Anthropomorphism. A. Waytz, C. K. Morewedge, N. Epley, G. Monteleone, J. H. Gao, J. T. Cacioppo. Journal of Personality and Social Psychology 2010, Vol.99, No.3, 410–435</p>Mojohttp://www.blogger.com/profile/11704648978910126233noreply@blogger.com0tag:blogger.com,1999:blog-34763567.post-34386977840842495232012-02-17T22:34:00.000+00:002012-05-24T08:19:13.102+00:00What does #deadbeef; look like?I've been working with WPF themes a lot this week. My task has been to take a theme from one application and put it into another, otherwise unrelated, application. This is not as easy as it sounds. The themes from the original application do not transplant to other applications without judicious use of a hacksaw.<br />
<br />
While going through the theme's various XAML files I noticed things like <code>color="#FF123456"</code>, and I couldn't figure out what colour I was looking at. There are a lot of these hex-notation colours and they all seemed opaque to me.<br />
<br />
It struck me that it would be nice if I could just hover my mouse over the hex and get the colour to pop up. Sounded like an easy enough task. So I set out to write an extension for Visual Studio to do just that.<br />
<br />
My first attempt at writing an extension - for Firefox - met with disappointment: I couldn't figure out how to get started. So I was a little apprehensive about writing an extension for Visual Studio. Luckily extensions for Visual Studio are easy to create (so long as you have Visual Studio).<br />
<br />
<ol>
<li>Download the <a href="http://www.microsoft.com/download/en/details.aspx?displaylang=en&id=2680">Visual Studio 2010 SDK</a> (or <a href="http://www.blogger.com/">this one</a> for Service Pack 1)</li>
<li>Install the SDK</li>
<li>Start a new project by selecting from the C#/Extensibility templates - I chose Editor Text Adornment, because I wanted to adorn the editor text with something.</li>
</ol>
The project comes with code already in place, so you can just hit <kbd>F5</kbd> and you'll be able to see the extension at work right away. Then read the code to see how it works! It's pretty obvious, and with the Intellisense of Visual Studio you can discover all the bits you'll need with ease.<br />
<br />
So my task, now that I have the ability to write an extension, was to write an extension that does what I want - i.e. show a colour swatch of the hex notation I'm hovering over.<br />
<br />
Step 1: create a regex that picks out the hex. I tried one or two and settled on this one: <code>#(([0-9A-F]{6})|([0-9A-F]{8})|([0-9A-F]{3}))["<;]</code>. There might be ways to write it shorter, and I'm willing to hear them, but I'm not a regex guru, so I'll stick with simple. You'll notice that I've constrained the hex to start with a # and end with ", <, or ;. This way the regex will only pick up hex that is the right length, and not any old length, and is most likely meant to be a colour. All the colour hexes I could see ended in ", < or ;. I could have missed an edge case, but not so far!<br />
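To sanity-check the pattern, here's a throwaway snippet of mine running it over a few sample strings (using the same case-insensitive option the extension applies later):<br />
<br />

```csharp
using System;
using System.Text.RegularExpressions;

static class RegexCheck
{
    // Group 1 captures the hex digits; the trailing ", < or ; stops the
    // match picking up hex of any old length.
    public const string Pattern = "#(([0-9A-F]{6})|([0-9A-F]{8})|([0-9A-F]{3}))[\"<;]";

    static void Main()
    {
        var samples = new[]
        {
            "Color=\"#FF123456\"",  // 8 digits (ARGB) - matches
            "Color=\"#ABC\">",      // 3-digit shorthand - matches
            "Color=\"#12345\"",     // 5 digits - correctly rejected
        };
        foreach (var s in samples)
        {
            var m = Regex.Match(s, Pattern, RegexOptions.IgnoreCase);
            Console.WriteLine(m.Success ? m.Groups[1].Value : "(no match)");
        }
    }
}
```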
<br />
Step 2: turn that string into a colour. There might be a library function for doing this, but I couldn't find it (would be glad if someone were to tell me about it!). I wrote my own:<br />
<br />
<pre class="prettyprint">
private Tuple<byte, byte, byte, byte> BytesFromColourString(string colour)
{
string alpha;
string red;
string green;
string blue;
if (colour.Length == 8)
{
alpha = colour.Substring(0, 2);
red = colour.Substring(2, 2);
green = colour.Substring(4, 2);
blue = colour.Substring(6, 2);
}
else if (colour.Length == 6)
{
red = colour.Substring(0, 2);
green = colour.Substring(2, 2);
blue = colour.Substring(4, 2);
alpha = "FF";
}
else if (colour.Length == 3)
{
red = colour.Substring(0, 1) + colour.Substring(0, 1);
green = colour.Substring(1, 1) + colour.Substring(1, 1);
blue = colour.Substring(2, 1) + colour.Substring(2, 1);
alpha = "FF";
}
else
{
throw new ArgumentException(String.Format("The colour string may be 8, 6 or 3 characters long, the one passed in is {0}", colour.Length));
}
return new Tuple<byte, byte, byte, byte>( Convert.ToByte(alpha, 16)
, Convert.ToByte(red, 16)
, Convert.ToByte(green, 16)
, Convert.ToByte(blue, 16));
}</pre>
<br />
OK, so this actually returns a <code class="language-cs">Tuple<byte, byte, byte, byte></code>. I'm not entirely sure why I chose that over returning an actual colour. I might refactor that later. Anyway, turning the tuple into a <code>System.Windows.Media.Color</code> is a trivial call to the static method <code>Color.FromArgb(byte, byte, byte, byte)</code>. Also, the above method is a brute force approach to breaking down the colour string into bytes, there could well be a better way. I'm sticking with what works until I'm shown something better.<br />
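For what it's worth, here's the post's namesake pushed through that conversion (a standalone snippet of mine; the final <code>Color.FromArgb</code> step is left as a comment because it needs the WPF assemblies):<br />
<br />

```csharp
using System;

static class HexToBytes
{
    // Split an 8-character ARGB hex string into its four byte channels.
    public static (byte A, byte R, byte G, byte B) Split(string colour)
    {
        return (Convert.ToByte(colour.Substring(0, 2), 16),
                Convert.ToByte(colour.Substring(2, 2), 16),
                Convert.ToByte(colour.Substring(4, 2), 16),
                Convert.ToByte(colour.Substring(6, 2), 16));
    }

    static void Main()
    {
        var (a, r, g, b) = Split("deadbeef");
        Console.WriteLine($"{a} {r} {g} {b}"); // 222 173 190 239
        // In the extension these four bytes feed Color.FromArgb(a, r, g, b).
    }
}
```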
<br />
My next hurdle was figuring out how to place the colour swatch where I wanted it. I was able to return the position in text the mouse was hovering over, which would give me a single character, but I couldn't think of how to use that position and character to get the hex colour string.<br />
<br />
In the end I opted for a two-stage approach. Stage one: when the layout updates, find the start and end positions of any colours in the view. Stage two: when the mouse hovers somewhere, see if its position is in any of the ranges previously stored.<br />
<br />
Stage one looks like this:<br />
<pre class="prettyprint">
private void OnLayoutChanged(object sender, TextViewLayoutChangedEventArgs e)
{
_colourPositions = new List<Tuple<int, int, Color>>();
var matches = Regex.Matches(_view.TextSnapshot.GetText(), "#(([0-9A-F]{6})|([0-9A-F]{8})|([0-9A-F]{3}))[\"<;]", RegexOptions.IgnoreCase);
foreach(var m in matches)
{
var match = m as Match;
var mgrp = match.Groups[1] as Group;
var colourbytes = BytesFromColourString(mgrp.Value);
var colour = Color.FromArgb(colourbytes.Item1, colourbytes.Item2, colourbytes.Item3, colourbytes.Item4);
_colourPositions.Add(new Tuple<int,int,Color>(mgrp.Index, mgrp.Index + mgrp.Length, colour));
}
}</pre>
I went with a list to store the position of the colours because I think it makes cleaner code than a dictionary would.
<br />
Stage two's like this:<br />
<pre class="prettyprint">
private void ShowColourSwatch(int position, IMappingPoint textPosition, ITextView textView)
{
_layer.RemoveAllAdornments();
SnapshotPoint? snapPoint = textPosition.GetPoint(textPosition.AnchorBuffer, PositionAffinity.Predecessor);
if (snapPoint.HasValue)
{
SnapshotSpan charSpan = textView.GetTextElementSpan(snapPoint.Value);
var colourPos = _colourPositions.Find(cp => (cp.Item1 <= charSpan.Start) && (cp.Item2 >= charSpan.Start));
if(colourPos != null)
{
Image image = CreateSwatchImage(colourPos, charSpan);
_layer.AddAdornment(AdornmentPositioningBehavior.TextRelative, charSpan, null, image, null);
Thread t = new Thread(p =>
{
Thread.Sleep(3500);
lock (lockObject)
{
Application.Current.Dispatcher.Invoke(new Action(() =>
{
_layer.RemoveAdornmentsByVisualSpan(charSpan);
}), new object[]{});
}
});
t.Start();
}
}
}</pre>
<br />
The <code>Thread</code> in there just makes sure that the colour swatch disappears after three and a half seconds. <code>CreateSwatchImage</code> uses a lot of the code from the example project that Visual Studio gives you to start with, and just draws the colour swatch on a black and white background for contrast.<br />
<br />
That is pretty much all the important code that I wrote in constructing the extension. There is one last snippet, I had to modify a single line in the auto-generated factory class so that the swatch would be above the text: <code>[Order(After = PredefinedAdornmentLayers.Text, Before = PredefinedAdornmentLayers.Caret)]</code>. Before that the property made the adornment go behind the text, which looked silly for my purposes.<br />
<br />
The last thing that tripped me up was installing the extension. Obviously I can't sign my extension, because I'm too cheap to pay for a certificate, so I can't get it put on the online extensions gallery. However I was sure I could find a way.<br />
<br />
My first attempt was to double click the .vsix file that Visual Studio had generated for me. This looked promising - it ran me through an install process and told me it had been successful - so I loaded up Visual Studio, but my extension was nowhere to be found. I tried rebooting my computer, just in case, but to no avail. So I sought out where the extension had been placed and deleted it - which is how you are meant to uninstall extensions, by the way - and went online to find out The Right Way™.<br />
<br />
A few places told me to put the extension in a folder under %appdata%, but that didn't seem to work. Eventually I found an MSDN page that explained I should be putting it under %localappdata%, which sorted me right out. Essentially the path should go something like <code>%localappdata%\Microsoft\VisualStudio\10.0\Extensions\[company]\[extensionName]\[version]\</code>, although you can probably leave out [company] and [version] and it will still work. Once I put the extension there and loaded up Visual Studio, I checked the Extension Manager in the Tools menu and it was there, but needed enabling. After being enabled, and restarting Visual Studio, the extension was working like a charm! No more wondering about what a hex colour string means for me.<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
</div>
<div class="separator" style="clear: both; text-align: center;">
<a href="http://i11.photobucket.com/albums/a173/NeoMojo/deadbeefcapture.gif" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img alt="what #deadbeef; looks like" border="0" src="http://i11.photobucket.com/albums/a173/NeoMojo/deadbeefcapture.gif" /></a></div>
<br />
<br />
To view all the code for my extension, and download it for yourself, visit <a href="http://github.com/Mellen/Colour-for-Colour">my Github repository</a>.Mojohttp://www.blogger.com/profile/11704648978910126233noreply@blogger.com0tag:blogger.com,1999:blog-34763567.post-44854017937482680192011-10-19T02:38:00.002+00:002012-04-27T21:52:14.667+00:00Object Thinking - Objects have actionsThis post follows on from <a href="http://mojoai.blogspot.com/2011/09/object-thinking-objects-neurological.html">Object Thinking - Objects: a neurological basis</a><br />
<br />
<div class="western" style="font-style: normal;">
The paper being reviewed is <i>Micro-affordance: The potentiation of components of action by seen objects</i> (Ellis and Tucker, 2000)[1].</div>
<div class="western" style="font-style: normal;">
The paper focuses on two experiments. The first is concerned with power and precision micro-affordance, and the second with wrist rotation micro-affordance.</div>
<div class="western" style="font-style: normal;">
<br />
In the first experiment the participants were told to memorise objects as they were shown them. They were then tested on the objects halfway through the experiment and at the end. During the memorisation phase, whenever they heard a tone, the participant was to either squeeze a cylindrical button with their whole hand, or pinch a small button between their index finger and thumb.<br />
<br />
The type of grip response would be dependent on the type of tone: high or low. So there were two mappings known to the participants: high – large grip, low – small grip, and high – small grip, low – large grip. There were also two unknown mappings: high – large object, low – small object, and high – small object, low – large object. </div>
<div class="western" style="font-style: normal;">
<br />
Each participant was assigned one mapping from each of the two groups and this was sustained throughout the experiment.</div>
<div class="western" style="font-style: normal;">
<br />
In the results from the experiment there was a statistically significant positive correlation between grip type and object type.</div>
<div class="western" style="font-style: normal;">
<br />
The second experiment was set up much the same as the first. The differences were that instead of large or small grips, the participant would make clockwise or anticlockwise wrist rotations dependent on tone, and the objects were categorised as ones more easily grasped with an anticlockwise or clockwise wrist rotation.</div>
<div class="western" style="font-style: normal;">
<br />
The results showed a statistically significant positive correlation between wrist rotation and object type.</div>
<div class="western" style="font-style: normal;">
<br />
The paper classifies micro-affordance (MA) as the state of an observer that gives rise to stimulus-response compatibility (SRC) between what the viewer sees and what actions they perform regardless of their intention. The theory is meant as a solution to the symbol grounding problem. (The reference to this problem in the paper is Harnad, 1990[2].)</div>
<div class="western" style="font-style: normal;">
<br />
The paper explains that SRC is demonstrated in many previous experiments, by various researchers, in forced choice reaction time tests. For example an advantage is gained when reaching for something on the left with the left hand, and similarly for the right. In fact an advantage is gained even in non-reaching tasks, where the location of the stimulus gives an advantage when it is on the same side as the response, this is known as the Simon Effect.</div>
<div class="western" style="font-style: normal;">
<br />
Previous experiments by Ellis and Tucker show that location is not the only action related feature encoded in this way.</div>
<div class="western" style="font-style: normal;">
<br />
This preparedness for action is thought to be a coordination of the what and where pathways in the brain.</div>
<div class="western" style="font-style: normal;">
<br />
The paper reports that the theoretical implications of the results of the study are:</div>
<ol>
<li><div class="western" style="font-style: normal;">
MAs differ from Gibsonian affordances in that they suggest the affordance is encoded in the viewer's nervous system (not in the object being viewed), they apply only to grasping, and only to grasping appropriate to the object.</div>
</li>
<li><div class="western" style="font-style: normal;">
SRC works because what is being responded to is unrelated to what is causing the compatibility effect. SRC theories suggest that stimulus → response options elicit particular mental codes, so the location of an object elicits a left or right handed response. MA, however, can be evoked without evoking a coherent action.</div>
<div class="western" style="font-style: normal;">
This means that MA should interfere with SRC experiments.</div>
<div class="western" style="font-style: normal;">
SRC effects have been modelled as ecological relations between visual properties and actions. They have also been modelled as effect codes that can be combined into whole actions.</div>
<div class="western" style="font-style: normal;">
MA and these two approaches share the assumption that a compatibility effect arises from visual objects and possible, real-world actions that can be performed on them.</div>
<div class="western" style="font-style: normal;">
MA diverges from the ecological approach by retaining representation of objects, and from effect codes by having a direct connection between vision and action. MA diverges from both because it states that actions are potentiated whenever an object is seen, regardless of the intention of the viewer.</div>
</li>
<li><div class="western" style="font-style: normal;">
Developmentally, MA fit in well with the popular theory of Neural Darwinism. Development of adaptive behaviours requires integration of sensory and motor processes. The paper proposes that learning coordinated actions results from gradual adaptation of the neuron groups involved. This leads to coupling of the motor and sensory systems.</div>
<div class="western" style="font-style: normal;">
The implication of the experiments is that MA reflect the involvement of the motor components of the global mapping, which have come to represent visual objects.<br />
<br /></div>
</li>
</ol>
<div class="western" style="font-style: normal;">
So what does this tell us about how natural object thinking is? Object thinking requires that you understand the objects you are working with in terms of the behaviours they can perform. You need to be able to create your objects so that discovering what behaviours are available is intuitive, i.e. so that when others come to your API they aren't spending hours going through the documentation; they can just get on and use it.<br />
<br />
Ellis and Tucker show that the brain is well suited to understanding and preparing for expected behaviours. When we see an object, we immediately know the actions that the object has available, and are primed to use them.<br />
<br />
This implies that once we have a good understanding of a problem domain, we should be able to model the behaviours of the objects in the domain intuitively, and anyone else with a good understanding of the problem domain will be able to intuitively discover each object and its behaviours.<br />
<br />
The behaviour driven aspects of object thinking are intrinsic to how the human mind works at the brain level.<br />
<p>The next section deals with anthropomorphism, why OT needs it and where it comes from: <a href="http://mojoai.blogspot.co.uk/2012/04/object-thinking-anthropomorphism.html">Object Thinking - Anthropomorphism</a>.</p>
[1] Micro-affordance: The potentiation of components of action by seen objects; Rob Ellis, Mike Tucker. British Journal of Psychology (2000), 91, 451-471<br />
[2] Harnad, S. (1990). The symbol grounding problem. Physica D, 42, 335–346. (As cited in [1])</div>Mojohttp://www.blogger.com/profile/11704648978910126233noreply@blogger.com0tag:blogger.com,1999:blog-34763567.post-92190648635209419942011-09-22T20:50:00.000+00:002011-10-19T02:40:25.377+00:00Object Thinking - Objects: a neurological basis<br />
<div class="western" style="font-style: normal;">
This follows on from my post <a href="http://mojoai.blogspot.com/2011/07/object-thinking-is-natural-way-to-think.html">Object Thinking is the natural way to think. Introduction</a></div>
<div class="western" style="font-style: normal;">
<br /></div>
<div class="western" style="font-style: normal;">
This post deals with how the brain perceives the world as objects.</div>
<div class="western" style="font-style: normal;">
<br /></div>
<div class="western" style="font-style: normal;">
A neurological perspective of how perception works, via the study of perceptual disorders, is covered in chapter two of Neuropsychology: from theory to practice [1]. This is a review of that chapter.</div>
<div class="western" style="font-style: normal;">
<br /></div>
<div class="western" style="font-style: normal;">
</div>
<div class="western" style="font-style: normal;">
Studying perceptual disorders tells us how we work by looking at damaged brains in people, or damaging brains in animals, and seeing how that affects what is perceived.</div>
<div class="western" style="font-style: normal;">
<br /></div>
<div class="western" style="font-style: normal;">
The chapter
concentrates largely on visual perception, due to “the natural
dominance of our visual sensory system”. It starts out by
identifying two major pathways in the brain, the “what” pathway,
which is responsible for identification of objects, and the “where”
pathway, which is responsible for location, position and motion. These
were originally identified in monkeys in 1983 by Mishkin, Ungerleider
and Macko. Milner and Goodale (1995) expanded on this model to
explain that the “where” pathway is dedicated to the preparation
of movement.</div>
<div class="western" style="font-style: normal;">
<br /></div>
<div class="western" style="font-style: normal;">
This demonstrates that
humans understand the world as objects and actions. </div>
<div class="western" style="font-style: normal;">
<br /></div>
<div class="western" style="font-style: normal;">
The chapter goes
on to explain that these two pathways are linked: essentially the flow
of data goes primary visual cortex → “what” pathway → “where”
pathway → motor cortex. The system also gets feedback, via other
pathways, from interactions with the environment to aid in learning.
This of course means that we get better at performing actions the
more we do them.</div>
<div class="western" style="font-style: normal;">
<br /></div>
<div class="western" style="font-style: normal;">
The next section of the
chapter deals with sensation versus perception. It is not
particularly relevant to this discussion. In short summary: sensation
occurs before perception, and is not consciously recognised. In
vision the sensation pathways are those that link the retina to the
visual cortex. People with damage to these pathways will not notice
that they don't see something, unless they are made aware of it
appearing and disappearing from view.</div>
<div class="western" style="font-style: normal;">
<br /></div>
<div class="western" style="font-style: normal;">
Discussion of the
hierarchy of the visual cortex follows on. This has quite a strong
neurological focus, and describes a lot of the brain's structure in
this area. The key point relevant here is that the brain is modular
and parallel, which means that human thinking is modular and
parallel, which is clearly analogous to separation of concerns. The
parallelism is accomplished through pathways that allow feedback
between modules. This could be thought of as message passing,
although it might be a stretch to say it scales up to conscious
thought.</div>
<div class="western" style="font-style: normal;">
<br /></div>
<div class="western" style="font-style: normal;">
Next the chapter
discusses what certain disorders show us about visual perception. The
two types of disorder covered are apperceptive agnosia – a
condition that means the patient has a difficulty distinguishing
between objects – and associative agnosia – in which the patient
is unable to recognise objects or their functions.
</div>
<div class="western" style="font-style: normal;">
<br /></div>
<div class="western" style="font-style: normal;">
Apperceptive agnosia,
and its milder counterpart, categorisation deficit, give strong
evidence that the mind perceives the world as objects. People with
these disorders cannot discern one object from another. This impedes
problem solving, as the person with the condition does not know how
to act on what they see. In fact, in the case of apperceptive
agnosia, it can be equivalent to blindness, as those with the
condition find it easier to navigate with their eyes shut.</div>
<div class="western" style="font-style: normal;">
<br /></div>
<div class="western" style="font-style: normal;">
Associative agnosia
prevents people from being able to recognise objects or their
functions. This class of agnosia can affect any of the senses. The
book focuses on vision.</div>
<div class="western" style="font-style: normal;">
<br /></div>
<div class="western" style="font-style: normal;">
People with associative
agnosia can copy (e.g. by drawing) and match objects, but they cannot
recognise them. So it appears that primary perceptual processing is
intact.</div>
<div class="western" style="font-style: normal;">
<br /></div>
<div class="western" style="font-style: normal;">
The current theory for
what causes this agnosia is that the “what” pathway has become
disconnected from the memory store for associative meaning. People
with this condition can write something down, such as their name or
address, but are completely unable to read it back. This is clear
evidence that we use background knowledge to solve problems.</div>
<div class="western" style="font-style: normal;">
<br /></div>
<div class="western" style="font-style: normal;">
The chapter gives an
example (p. 53) of a patient, with associative visual agnosia, who
can only tell what a banana is after eating it, and even then only
through logical deduction: “...and here I go right back to the
stage where I say well if it's not a banana, we wouldn't have this
fruit.”</div>
<div class="western" style="font-style: normal;">
<br /></div>
<div class="western" style="font-style: normal;">
The next section of the
chapter discusses object and face recognition. The focus is on how
this works at a neurological level, and the difference between face
recognition and object recognition. The key point it makes is that
the left hemisphere of the brain deals with parts of objects, and the
right deals with objects as a whole. (Faces are a special case,
however, as they seem to be perceived as a whole, and not as parts,
i.e. most of facial recognition is done in the right hemisphere.) The
brain is set up to understand about composition.</div>
<div class="western" style="font-style: normal;">
<br /></div>
<div class="western" style="font-style: normal;">
The rest of the chapter
focuses on describing top down (using past experience to influence
perception) and bottom up (working from first principles) processing
of visual information, and comes to a conclusion about how the left
and right hemispheres interact to give what we see meaning.
Essentially they work together, the left hemisphere identifying
objects and the meaning of objects, while the right analyses
structural form, orientation and does holistic analysis of an object.</div>
<div class="western" style="font-style: normal;">
<br /></div>
<div class="western" style="font-style: normal;">
So, in conclusion, the chapter lays out clearly that human beings perceive the world as objects, even at a neurological level. This is our nature. Thus it makes sense, when designing software, to think of our problem space in terms of the objects in it.</div>
<div class="western" style="font-style: normal;">
<br /></div>
<div class="western" style="font-style: normal;">
The next section will deal with why action is integral to how we think about the world, and can be found here: <a href="http://mojoai.blogspot.com/2011/10/object-thinking-objects-have-actions.html">Object Thinking - Objects have actions</a>.</div>
<div class="western" style="font-style: normal;">
<br /></div>
<div class="western" style="font-style: normal;">
[1] Neuropsychology: from theory to practice, David Andrewes (2001, Psychology Press)</div>Mojohttp://www.blogger.com/profile/11704648978910126233noreply@blogger.com1tag:blogger.com,1999:blog-34763567.post-35734330091198988242011-07-09T16:57:00.010+00:002011-09-26T12:43:05.934+00:00Object Thinking is the natural way to think. Introduction<span style="font-size: large;">Preface</span><br />
<span style="font-size: small;">I don't know why I'm up so early on a Saturday, but I am. *yawn*. So I've been writing a paper reviewing other texts, to explain why Object Thinking is the natural way to think.</span><br />
<span style="font-size: small;">I am doing this because I do not want to lose an internet argument. I know. I've already lost. Both sides have. That's how internet arguments work.</span><br />
<span style="font-size: small;">The argument is at <a href="http://programmers.stackexchange.com/">Programmers</a>, particularly my answer to the question <a href="http://programmers.stackexchange.com/questions/59387/is-oop-hard-because-it-is-not-natural/59479#59479">"is OOP hard because it is not natural?"</a> <a href="http://programmers.stackexchange.com/users/13612/sk-logic">SK-Logic</a> is zealously anti OO, and I am equally zealously pro OO.</span><br />
<span style="font-size: small;">Then the other day I was discussing what I'm writing with <a href="http://programmers.stackexchange.com/users/2567/pierre-303">Pierre 303</a>, in the Programmers' chat room, and he suggested that I make it into several 'blog articles, because then it would be easier to digest. I agree, so that's what I'm doing. I still don't know why I'm up so early, but at least I'm doing something.</span><br />
<br />
<br />
<span style="font-size: x-large;">Introduction</span><br />
<div class="western">
Object Thinking: it's been around for decades as a paradigm for software design, but what is it? When presented with a problem, someone using
object thinking will start to decompose the problem into discrete
sections that can interact with each other. You could, for example, be forced to change the tyre on your car. A simple task, certainly, but to do it you must understand the tools and relevant components of your car, and how they need to work together to achieve your goal.</div>
<div class="western">
<br /></div>
<div class="western">
</div>
<div class="western">
It might take several attempts to achieve a fine
grained enough understanding to effectively solve the problem. Your
first pass at the above example might leave you with the idea to take
the wheel off your car. A second pass might make you realise that
you need to lift the car off the floor to do that, and so on.</div>
<div class="western">
<br /></div>
<div class="western">
</div>
<div class="western">
One thing that can give you a head start in
solving a problem using object thinking is background knowledge.
Knowing about your problem domain, what the objects in it are capable
of, makes it easier to plan how to use them. Not knowing enough can
cause issues, however, if assumptions are made based on incomplete
knowledge.</div>
<div class="western">
<br /></div>
<div class="western">
</div>
<div class="western">
For example: You are asked to stick a poster to
the wall, without leaving holes in the wall. You are given a hamster,
newspaper and some Blu Tack®, along with the poster. If you don't
know what Blu Tack® is for then your understanding of the problem
domain is incomplete and you could end up using the hamster to chew
up newspaper into balls, and use those to stick the poster to the
wall.</div>
<div class="western">
<br /></div>
<div class="western">
It is also important to note that not everything
present in your problem domain will necessarily be used to solve the
problem. So, in the previous example, you might not use the newspaper
or hamster at all (or, of course, you might find the hamster solution
better, as it reuses the newspaper, which is more ecological).</div>
<div class="western">
<br /></div>
<div class="western">
</div>
<div class="western">
So how does this apply to software design?
Software is just “algorithms and data structures”, right? Well,
at the end maybe, but you've still got to design it. Software is the
output of people's attempt to solve a problem. Solving a problem with
object thinking is the natural way, as this series of posts hopes to
demonstrate, because it uses people's natural problem solving
techniques.</div>
<div class="western">
<br /></div>
<div class="western">
</div>
<div class="western">
Object thinking is a core tenet of Object Oriented
Design (OOD), a well known software design paradigm. The inventors of
OOD set out to fix what they saw as the main problem with software
design – software design was taught to make people think like
computers, so that they could write software for computers.
</div>
<br />
<div class="western">
A book that extensively covers the meaning and
practical aspects of object thinking is Object Thinking by David West
(2004, Microsoft Press). In it he likens the way that traditional
programmers use OOD to writing lots of small COBOL
programmes [1]. Objects in this sense have been turned into data
structures with algorithms wrapped around them. While modularising
code is better than having one large function, it only makes
designing software a little easier. It still focuses the attention of
the design on how a computer works and not how the problem should be solved.</div>
<div class="western">
<br /></div>
<div class="western">
So what makes reasoning about large systems
easier? Focusing on the problem space and decomposing it into several
smaller problems helps. But what is easier to think about? Is it
easier to think how those problems translate into code? Perhaps in
the short term, but you will end up solving the same problems over
and over again, and your code will probably be inflexible.</div>
<div class="western">
<br /></div>
<div class="western">
</div>
<div class="western">
Would it be better to think about software design
the same way you think about real-world problems? That way you can
use your innate problem solving skills to frame and express your
design.<br />
<br /></div>
<div class="western">
It turns out that the way people reason about real
world problems is to break them down into smaller parts, using their
background understanding of the problem space, take the parts of the
problem space and treat them as objects that can do things and have
things done to them, and find ways for the objects to interact. [2]</div>
<div class="western">
<br /></div>
<div class="western">
</div>
<div class="western">
This works well because people like to
anthropomorphise objects, so that they can imagine the object doing
things under its own agency, even if in the end it's a person causing
the action.[3]</div>
<div class="western">
<br /></div>
<div class="western">
</div>
<div class="western">
How can you be sure this is how you think, and is
therefore the more sensible way to approach software design? Well it
turns out that there is an oft ignored backwater science known as
Cognitive Psychology, and scientists in this field have been studying
people for decades, to find out how they work.</div>
<div class="western">
<br /></div>
<div class="western">
Future posts in this series will review certain cognitive psychology and neuropsychology texts and expand on how this applies to object thinking. The end goal is to demonstrate that object thinking is innate and therefore the best strategy for designing software.<br />
<br />
Next post in the series: <a href="http://mojoai.blogspot.com/2011/09/object-thinking-objects-neurological.html">Object Thinking - Objects: a neurological basis </a> </div>
<div class="western">
<br />
<span style="font-size: large;">References</span><br />
[1] Object Thinking, D. West (2004, Microsoft Press) p9<br />
[2] <a href="http://en.wikibooks.org/wiki/Cognitive_Psychology_and_Cognitive_Neuroscience/Problem_Solving_from_an_Evolutionary_Perspective#How_is_a_problem_represented_in_mind.3F">Problem Solving from an Evolutionary Perspective</a><span style="font-size: small;"> visited 9th July 2011</span><br />
[3] Object Thinking, D. West (2004, Microsoft Press) p101<br />
<br />
Blu-Tack is a registered trademark of <a href="http://www.bostik.co.uk/">Bostik</a>. I am not affiliated with Bostik.</div>
Mojohttp://www.blogger.com/profile/11704648978910126233noreply@blogger.com2tag:blogger.com,1999:blog-34763567.post-31541352448297963132011-04-29T12:09:00.001+00:002012-05-25T09:09:15.685+00:00Networking client / server exampleAt work I have been writing a lot of code relating to sending data over a TCP connection.<br />
<br />
I have also seen a couple of questions, recently, on Stack Overflow asking about why networking code wasn't working. Unfortunately I didn't have time to answer them, but it did make me think that there must be a dearth of good samples of networking code online.<br />
<br />
Allow me to make that dearth one sample fewer! (Does that make sense?)<br />
<br />
For the full listing visit my Github repository: <a href="https://github.com/Mellen/Networking-Samples">https://github.com/Mellen/Networking-Samples</a><br />
<br />
One problem that sparked my interest was how to keep the server running <a href="http://stackoverflow.com/questions/5715269/constantly-running-server">when a client disconnects</a>. The server needs to know when a client disconnects, and not just choke and die. A client disconnecting is not an exceptional circumstance.<br />
<br />
The first problem is to not let the server die when a client disconnects, the second is to keep the server looking for new connections, so that it can be a server.<br />
<br />
<span style="font-size: large;">Keep it alive! </span><br />
<br />
My solution to the disconnection problem got generalised to both the client and the server classes, because it makes sense to not have the client die if the server disappears. The user might want to try to reconnect.<br />
<br />
You'll find this code in the file NetworkSampleLibrary/NetworkStreamHandler.cs<br />
<br />
<pre class="prettyprint">
protected void ReadFromStream(object worker, DoWorkEventArgs args)
{
    BackgroundWorker streamWorker = worker as BackgroundWorker;
    NetworkStream stream = args.Argument as NetworkStream;
    try
    {
        HandleStreamInput(stream);
    }
    catch (Exception ex)
    {
        // Any of these three exceptions means the other end has disconnected,
        // so stop the worker that reads from this stream.
        if (ex is IOException || ex is ObjectDisposedException || ex is InvalidOperationException)
        {
            streamWorker.CancelAsync();
        }
        // In these two cases the stream has not been disposed yet, so do it here.
        if (ex is IOException || ex is InvalidOperationException)
        {
            stream.Dispose();
        }
        // Let any subscribers know which stream failed, and why.
        if (StreamError != null)
        {
            StreamError(ex, stream);
        }
    }
}</pre>
<br />
You might have noticed that the method is an event handler. More on that below.<br />
<br />
As you can see, there are three types of exception that can happen if a client disconnects from the server: <a href="http://msdn.microsoft.com/en-us/library/system.io.ioexception.aspx">IOException</a>, <a href="http://msdn.microsoft.com/en-us/library/system.objectdisposedexception.aspx">ObjectDisposedException</a> and <a href="http://msdn.microsoft.com/en-us/library/system.invalidoperationexception.aspx">InvalidOperationException</a>. I found this out through trial and error.<br />
<br />
The most common exception that gets thrown when a client disconnects is IOException. This is because the server will be trying to read from the client when it leaves.<br />
<br />
Because of the threaded nature of the system, ObjectDisposedException gets thrown when another exception has already been thrown and the server still tries to read from the stream in the meantime.<br />
<br />
I'm not entirely sure why InvalidOperationException gets thrown, and it doesn't happen a lot, but it is always when the client disconnects.<br />
<br />
My strategy is to catch all exceptions, deal with the disconnection exceptions by disposing of the stream if necessary and cancelling the worker that reads from the stream, and then raise an event carrying the exception and the stream that threw it. I could have created a custom exception here, but I settled on an event in case something that couldn't catch the exception wanted to know about it.<br />
<br />
<span style="font-size: large;">All are welcome</span><br />
<br />
The next part of the puzzle is to make sure that more than one client can connect to your server.<br />
<br />
This is achieved in the NetworkServer class. This can be found at NetworkServerSample / NetworkServer.cs<br />
<br />
The pertinent parts are listed below: <br />
<br />
<pre class="prettyprint">
public NetworkServer(int port)
{
_listener = new TcpListener(IPAddress.Any, port);
_listener.Start();
_listener.BeginAcceptTcpClient(AcceptAClient, _listener);
DataAvilable += SendDataToAll;
StreamError += (ex, stream) =>
{
if (ex is IOException || ex is InvalidOperationException || ex is ObjectDisposedException)
{
_streams.Remove(stream);
Console.WriteLine("lost connection {0}", ex.GetType().Name);
}
else
{
throw ex;
}
};
}
private void AcceptAClient(IAsyncResult asyncResult)
{
TcpListener listener = asyncResult.AsyncState as TcpListener;
try
{
TcpClient client = listener.EndAcceptTcpClient(asyncResult);
Console.WriteLine("Got a connection from {0}.", client.Client.RemoteEndPoint);
HandleNewStream(client.GetStream());
}
catch (ObjectDisposedException)
{
Console.WriteLine("Server has shutdown.");
}
if (!_disposed)
{
listener.BeginAcceptTcpClient(AcceptAClient, listener);
}
}
private void HandleNewStream(NetworkStream networkStream)
{
_streams.Add(networkStream);
BackgroundWorker streamWorker = new BackgroundWorker();
streamWorker.WorkerSupportsCancellation = true;
streamWorker.DoWork += ReadFromStream;
streamWorker.RunWorkerCompleted += (s, a) =>
{
if (_streams.Contains(networkStream) && !a.Cancelled)
{
streamWorker.RunWorkerAsync(networkStream);
}
};
streamWorker.RunWorkerAsync(networkStream);
}</pre>
<br />
In the constructor, the server is set up to listen on a particular port for incoming connections and handle the connection requests asynchronously. It also creates an event handler for when the network stream throws an exception, as explained above. This makes sure that the stream is removed from the list of streams, so that it doesn't try to get disposed of when the server is disposed, and that no data gets broadcast down it.<br />
<br />
The method that deals with the asynchronous requests for connection (AcceptAClient) has to make sure that the server hasn't been disposed of when the connection attempt is made, hence the try-catch block. Once the connection request has been handled then the method starts listening for another connection attempt. This is all it takes, essentially asynchronous recursion.<br />
<br />
The HandleNewStream method also uses asynchronous recursion to read each message from the client. It sets up a <a href="http://msdn.microsoft.com/en-us/library/system.componentmodel.backgroundworker.aspx">BackgroundWorker</a> instance that asynchronously calls the ReadFromStream method in the previous section, and when the work is complete, the worker will call the method again, so long as the stream is in the list of streams on the server and the worker has not been cancelled.<br />
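The same survive-the-disconnect pattern can be sketched in Python. This is a loose, hypothetical translation for illustration, not the code from the repository: an accept loop that hands each connection to a reader thread, and a reader that treats disconnection as normal control flow rather than a fatal error.

```python
import socket
import threading

def serve(server_sock, streams):
    # Accept clients until the server socket itself is closed.
    # A client disconnecting must never end this loop.
    while True:
        try:
            client, _ = server_sock.accept()
        except OSError:
            break  # the server socket was closed: time to shut down
        streams.append(client)
        threading.Thread(target=read_from_stream,
                         args=(client, streams), daemon=True).start()

def read_from_stream(client, streams):
    # Read until the peer disconnects, then clean up instead of crashing.
    try:
        while True:
            data = client.recv(4096)
            if not data:                # orderly disconnect from the peer
                break
            for s in list(streams):     # broadcast, like SendDataToAll
                if s is not client:
                    try:
                        s.sendall(data)
                    except OSError:
                        pass            # that peer vanished; its own reader cleans up
    except OSError:
        pass                            # abrupt disconnect, Python's rough IOException
    finally:
        streams.remove(client)
        client.close()
```

Here OSError stands in for the IOException / ObjectDisposedException family above: catch it around the read, remove the dead stream from the list, and carry on accepting new clients.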
<br />
That's the meat of the server. Accepting and handling input from more than one client is achieved with a list and asynchronous recursion. Dealing with clients disconnecting is done with exception handling and events.Mojohttp://www.blogger.com/profile/11704648978910126233noreply@blogger.com0tag:blogger.com,1999:blog-34763567.post-76743030680209056132011-04-28T21:47:00.003+00:002011-04-28T21:56:12.563+00:00Really basic programming maths (part 1)So I've been trying to mentally do hexadecimal addition. I've found that I'm not very good at it.<br />
<br />
I'm going to slowly explain how I go about working stuff out, with the hope that it will stick in my head and get easier. (Binary is written with the most significant bit first, and all numbers are unsigned.)<br />
<br />
First of all there is how to think about numbers in binary and hex.<br />
<br />
Decimal numbers get split up into multiples of powers of ten.<br />
<br />
For example 4181 can be broken down as:
<br />
<ul>
<li>4 x 10<sup>3</sup></li>
<li>1 x 10<sup>2</sup></li>
<li>8 x 10<sup>1</sup></li>
<li>1 x 10<sup>0</sup></li>
</ul>
<br />
Remember that any non-zero number raised to the power of 0 is 1.<br/>
<br />
This applies to both binary and hexadecimal too.<br />
<br />
So 0xFEED breaks down to:<br />
<ul>
<li>F(15) x 10(16)<sup>3</sup></li>
<li>E(14) x 10(16)<sup>2</sup></li>
<li>E(14) x 10(16)<sup>1</sup></li>
<li>D(13) x 10(16)<sup>0</sup></li>
</ul>
<br /> The numbers in parenthesis are the decimal representations of the hexadecimal numbers.<br />
<br />
And 0b1101 breaks down to:<br />
<ul>
<li>1(1) x 10(2)<sup>3</sup></li>
<li>1(1) x 10(2)<sup>2</sup></li>
<li>0(0) x 10(2)<sup>1</sup></li>
<li>1(1) x 10(2)<sup>0</sup></li>
</ul>
<br /> The numbers in parenthesis are the decimal representations of the binary numbers.<br />
<br />
Next up is the easy way to transition from hex to binary and back.<br />
<br />
Since an individual hex digit occupies exactly four bits, any hex number can be represented as a collection of four-bit groups (nibbles).<br />
<br />
So 0x4432 can be broken down into 0b0100, 0b0100, 0b0011, 0b0010<br />
<br />
This can be reversed. Say you have the 32bit number 0b10011100110100110101101011110011.<br />
<br />
If you break it down into four bit chunks you get: <br />
<ul>
<li>0b1001</li>
<li>0b1100</li>
<li>0b1101</li>
<li>0b0011</li>
<li>0b0101</li>
<li>0b1010</li>
<li>0b1111</li>
<li>0b0011</li>
</ul>
<br />
Each chunk can be represented as a hex digit: <br />
<ul>
<li>0x9</li>
<li>0xC</li>
<li>0xD</li>
<li>0x3</li>
<li>0x5</li>
<li>0xA</li>
<li>0xF</li>
<li>0x3</li>
</ul>
<br />
Which gives us the number 0x9CD35AF3.<br />
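The regrouping is easy to check in Python (an illustrative snippet, not part of the original workings): shift out each four-bit chunk and map it to a hex digit.

```python
n = 0b10011100110100110101101011110011

# Break the 32-bit number into four-bit chunks, most significant first...
nibbles = [(n >> shift) & 0xF for shift in range(28, -4, -4)]

# ...and map each chunk to a single hex digit.
hex_digits = "".join("0123456789ABCDEF"[nib] for nib in nibbles)

print(hex_digits)  # 9CD35AF3
```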
<br />
The difficult part comes in getting that number as decimal.<br />
<br />
To do it from hex, you need to add up all the powers of sixteen that there are:
<br />
<ul>
<li>9 x 16<sup>7</sup></li>
<li>12 x 16<sup>6</sup></li>
<li>13 x 16<sup>5</sup></li>
<li>3 x 16<sup>4</sup></li>
<li>5 x 16<sup>3</sup></li>
<li>10 x 16<sup>2</sup></li>
<li>15 x 16<sup>1</sup></li>
<li>3 x 16<sup>0</sup></li>
</ul>
<br />
Which turns out to be: 2631097075. Not easy to calculate in your head. To do it from binary would take even longer as you would need to add up all the powers of two from 31 to 0.
<br /><br />
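As a sanity check, the sum of powers above can be written out directly in Python (purely illustrative):

```python
# 0x9CD35AF3 as a sum of digit * power-of-16 terms,
# written out exactly as in the list above.
value = (  9 * 16**7
        + 12 * 16**6
        + 13 * 16**5
        +  3 * 16**4
        +  5 * 16**3
        + 10 * 16**2
        + 15 * 16**1
        +  3 * 16**0)

print(value)  # 2631097075
```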
Thus endeth part one.Mojohttp://www.blogger.com/profile/11704648978910126233noreply@blogger.com0tag:blogger.com,1999:blog-34763567.post-47625909803177863642010-12-13T22:31:00.000+00:002010-12-13T22:31:52.028+00:00Addresses in databases<style>
th,td
{
padding: 4px;
}
</style>
<br />
Whenever I see something like this:<br />
<br />
<div style="background: none repeat scroll 0% 0% rgb(255, 255, 193); border-style: groove; margin-left: 20px; padding-right: 10px; width: 330px;">
<ul style="list-style-type: none; text-align: right;">
<li>Address Line 1: <input type="text" /></li>
<li>Address Line 2: <input type="text" /></li>
<li>Address Line 3: <input type="text" /></li>
<li>City: <input type="text" /></li>
<li>Country: <input type="text" /></li>
<li>Post Code: <input type="text" /></li>
</ul>
</div>
<br />
I want to find the database designer and smack them.<br />
<br />
What is it about addresses that makes people think that they don't need normalising?<br />
<br />
No! Of course! The solution to storing addresses is to create a table and force all addresses to fit into five lines plus a postal code. Brilliant. Really smart.<br />
<br />
There is one mandatory field in the address: country. That's the only one. Everyone lives in a country. I don't want to get into stupid arguments like "Wales isn't a country it's a principality", etc., when you put it in an address it's a country.<br />
<br />
You know something people know? How many lines there are in their address. So don't force them to have 3, 4, 5, xty mumble-jillion, or however many you think is sufficient.<br />
<br />
This is what I want to see from now on:<br />
<br />
<div style="background: none repeat scroll 0% 0% rgb(255, 255, 193); border-style: groove; margin-left: 20px; padding-right: 10px; width: 330px;">
<b>Address</b><br />
<input type="text" /><br />
<input type="button" value="Add a line" />
</div>
<br />
If you do the post / zip / whatever code search thing, then great, but be sure to store the address lines in a sensible manner.<br />
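Sketched with SQLite (the table and column names come from the example table below; this is an illustration, not a production schema), one row per address line means an address can have exactly as many lines as the user typed:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE address_lines (
        address_id INTEGER NOT NULL,
        line_id    INTEGER NOT NULL,  -- 1-based position within the address
        text       TEXT NOT NULL,
        PRIMARY KEY (address_id, line_id)
    )""")

def save_address(conn, address_id, lines):
    # One row per line: no fixed number of "Address Line N" columns.
    conn.executemany(
        "INSERT INTO address_lines (address_id, line_id, text) VALUES (?, ?, ?)",
        [(address_id, i, line) for i, line in enumerate(lines, start=1)])

def load_address(conn, address_id):
    rows = conn.execute(
        "SELECT text FROM address_lines WHERE address_id = ? ORDER BY line_id",
        (address_id,))
    return [text for (text,) in rows]

save_address(conn, 1, ["My House Name", "My Street Name", "My City Name",
                       "My Post Code", "My Country"])
```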
<br />
<table>
<thead>
<tr>
<th>address_id</th>
<th>line_id</th>
<th>text</th>
</tr>
</thead>
<tbody>
<tr>
<td>1</td>
<td>1</td>
<td>My House Name</td>
</tr>
<tr>
<td>1</td>
<td>2</td>
<td>My Street Name</td>
</tr>
<tr>
<td>1</td>
<td>3</td>
<td>My City Name</td>
</tr>
<tr>
<td>1</td>
<td>4</td>
<td>My Post Code</td>
</tr>
<tr>
<td>1</td>
<td>5</td>
<td>My Country</td>
</tr>
</tbody>
</table>Mojohttp://www.blogger.com/profile/11704648978910126233noreply@blogger.com2tag:blogger.com,1999:blog-34763567.post-74980116242193699382010-12-02T16:43:00.001+00:002010-12-02T16:43:33.186+00:00Re: quick ideaIt's not trivial. There is no easy way to convert a file like jpg/png/gif into icon format. Arbitraried!Mojohttp://www.blogger.com/profile/11704648978910126233noreply@blogger.com0tag:blogger.com,1999:blog-34763567.post-83301135573793877962010-11-14T18:34:00.000+00:002010-11-14T18:34:56.937+00:00No coding SundaysI've decided that I'm going to not code on Sundays.<br />
<br />
I'll try and cut out Stack Overflow too, except for next Sunday because that is my 99th consecutive day. I NEED MY BADGE.<br />
<br />
Sundays will be given over to something else. Anything else.<br />
<br />
It's not that I've stopped loving coding. I think I love it too much. I'm going to see what else there is.Mojohttp://www.blogger.com/profile/11704648978910126233noreply@blogger.com0tag:blogger.com,1999:blog-34763567.post-32656180179980466652010-10-22T08:54:00.002+00:002010-10-22T08:54:48.367+00:00Quick ideaI think it should be trivial to make a png/jpeg/gif/bmp -> icon creator<br />
<br />
I'm going to work on one.Mojohttp://www.blogger.com/profile/11704648978910126233noreply@blogger.com0tag:blogger.com,1999:blog-34763567.post-43204961769051468632010-10-08T21:24:00.003+00:002010-10-09T17:36:18.056+00:00Solving SudokuI was chatting with my manager the other day, just shooting the breeze, and we got on to how he knocked together a python script to prove to his girlfriend that programmatically solving sudoku puzzles is easy.<br />
<br />
I disagreed for a moment and then realised I was thinking of generating sudoku puzzles, which we agreed isn't easy.<br />
<br />
I had tried to make a sudoku helper app before, to practice MVVM and WPF, but had messed up in some calculation or other. Probably at the point where I was calculating which block a square was in. Anyway I had deleted that one, but my boss had spurred my interest in doing it again.<br />
<br />
I'm a better programmer than I was that first time - I understand both WPF and MVVM better now, so this little solver is pretty sweet. (Unless you look at the code.)<br />
<br />
It has all the features I need. I can fill in the known numbers, delete mistakes, and click a button to solve the unknowns (once the knowns are in place).<br />
Sometimes you don't even need the button, since the programme eliminates possibilities as you type. One puzzle I tried was solved before I typed in all the known numbers!<br />
<br />
So my amazing solver has two simple algorithms doing the solving:<br />
<ol>
<li>Each square has an event that fires when its number of possible values reaches 1, either programmatically or by user intervention. This event is subscribed to by all the squares related to it (row, column, block), and so each related square will remove this value from their possible values list. This can cause a chain reaction of updates, solving the sudoku puzzle when enough knowns are typed in.</li>
<li>If elimination alone doesn't do the job then the second algorithm is just a button click away. I might have overthought this one:</li>
<ol>
<li>Create a list of squares that have at least 2 possible values, sorted in ascending order of number of possible values</li>
<li>Take the first square and find all the squares in the same block</li>
<li>Add these squares to a checked block list</li>
<li>Flatten the lists of potential values into one list</li>
<li>Find any unique values in that list</li>
<li>If there are any unique values then these represent solved squares so break out of the loop and update the squares related to those values.</li>
<li>If there isn't a unique value then repeat 3, 4 and 5 for the row, then the column of the current square.</li>
<li>If after that there still isn't a unique value, move on to the next square that hasn't been checked yet.</li>
</ol>
</ol>
If at the end of the second algorithm a number hasn't been updated then the programme lets the user know that it needs more knowns; otherwise it starts the second algorithm again until all the squares are filled. <br />
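If the description of the first algorithm is too wordy, here's a rough sketch of the elimination idea in Python (not the actual C# code - no events or WPF here, just sets of candidates on a standard 9x9 grid):

```python
# Sketch of algorithm 1: when a square's candidate set shrinks to one value,
# remove that value from every related square (row, column, block),
# possibly setting off a chain reaction.

def peers(index):
    # Row, column and 3x3 block mates of a square on a 9x9 grid (cells 0..80).
    r, c = divmod(index, 9)
    same_row = {r * 9 + i for i in range(9)}
    same_col = {i * 9 + c for i in range(9)}
    br, bc = 3 * (r // 3), 3 * (c // 3)
    same_block = {(br + i) * 9 + (bc + j) for i in range(3) for j in range(3)}
    return (same_row | same_col | same_block) - {index}

def assign(candidates, index, value):
    # candidates is a list of 81 sets. Typing in a known number
    # pins the square to one value and triggers elimination.
    candidates[index] = {value}
    eliminate(candidates, index, value)

def eliminate(candidates, index, value):
    for p in peers(index):
        if value in candidates[p]:
            candidates[p].discard(value)
            if len(candidates[p]) == 1:
                # Chain reaction: this square just became solved too.
                eliminate(candidates, p, next(iter(candidates[p])))
```

With enough knowns assigned, this propagation alone can finish an easy puzzle, which is why the button is sometimes unnecessary.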
<br />
I know what you're thinking. You're thinking that if a user makes a mistake inputting a value, then when they delete it and input a new value the possible values list for the related squares will be wrong. Fear not! Deleting a value fires an event that does the opposite of inserting a value, so things go back to the way they were. Phew!<br />
<br />
If you want to look at the code it's on github here: <a href="http://github.com/Mellen/SudokuSolver">http://github.com/Mellen/SudokuSolver</a><br />
<br />
The code is C#. The project is a Visual Studio 2010 project that runs on the .NET 4.0 framework. It even has a couple of unit tests. Yes, I'm <i>that </i>guy. I unit test toy projects.<br />
<br />
The executable is available from github: <a href="http://github.com/downloads/Mellen/SudokuSolver/SudokuSolver1.0.2.zip/qr_code">SudokuSolver1.0.2.zip</a>. It requires .NET version 4.0.<br />
<br />
Anyway! This was a fun little diversion. It makes me happy that I got it right the second time.<br />
Mojohttp://www.blogger.com/profile/11704648978910126233noreply@blogger.com0tag:blogger.com,1999:blog-34763567.post-90843731535506347982010-09-20T18:31:00.000+00:002010-09-20T18:31:50.062+00:00Thinking about learningSo, my lack of knowledge needs to take a bit of a beating.<br />
<br />
If I'm to get significantly better at writing C#, I need to understand the specification.<br />
<br />
It seems like a daunting task, but I think if I tackle a point at a time, writing small programmes to demonstrate each point, I'll get a much deeper understanding of how my programmes hang together and how to write them better.<br />
<br />
Wish me luck!Mojohttp://www.blogger.com/profile/11704648978910126233noreply@blogger.com0tag:blogger.com,1999:blog-34763567.post-795876621312581132010-08-28T19:49:00.000+00:002010-08-28T19:49:43.568+00:00Learning to see patterns in my own behaviour<p>So, a week and a half ago I was looking at a question on Stack Overflow (<a href="http://stackoverflow.com/questions/3510586/algorithm-to-calculate-the-number-of-combinations-to-form-100">Algorithm to calculate the number of combinations to form 100 </a>). I set about solving it in Haskell, and came up against a block to my success:</p>
<p>Given a list of numbers <span class="code">xs</span> and another number <span class="code">n</span>, generate a list of all the possible combinations lists of length <span class="code">n</span> that contain the numbers from <span class="code">xs</span>.</p>
<p>So, given the list <span class="code">[1,2]</span> and the number <span class="code">3</span>, the function should generate this list of lists: <span class="code">[[1,1,1],[1,1,2],[1,2,1],[1,2,2],[2,1,1],[2,1,2],[2,2,1],[2,2,2]]</span></p>
<p>I was pretty sure that this had been done before, but because I'm trying to get better at deducing algorithms, I'm stubborn, and I'm doing this for fun, I decided to figure out the algorithm for myself.</p>
<p>It wasn't as easy as it seemed.</p>
<p>I sat down and wrote out the outputs for a few different sets of inputs, I looked at them, I looked some more. I could see a couple of patterns, namely that <span class="code">(length of xs)<sup>n</sup></span> is the length of the final output and that you could create a rectangle of answers with width <span class="code">length of xs</span> and height <span class="code">(length of xs)<sup>n - 1</sup></span>. Neither of these was helpful.</p>
<p>I left the problem alone for a while, hoping that time would give me perspective. I was surprised how hard I was finding it to find the pattern.</p>
<p>Today I came back to it with a fresh brain and time to kill. I took a walk to the park, sat down, and started to write out the output where the input is a list of length 3, with <span class="code">n</span> as 3. As I was writing, I had the realisation that the way to solve this was to figure out the algorithm of how to write it down. The problem in my previous examples of output was that I hadn't written them in a good enough pattern. I started writing out the output for a different input: a list of length 4, with <span class="code">n</span> of 4 (256 items, for those keeping count). This time I was very systematic about how I wrote out the output. I got to the 44th list in the list and stopped to see if I could see it yet. I could: the last element in the individual lists was repeating every 4 items.</p>
<p>I stood up and, as is my wont when I am thinking, I started pacing. I must have looked a little unhinged, as I was pacing in a small circle around my bag.</p>
<p>It took me a few minutes, but eventually I figured out how to represent what I was seeing in my written output as an algorithm: the first time through, each item of <span class="code">xs</span> is appended to an empty list, for each subsequent time through, each item in <span class="code">xs</span> is appended to each list in the list of lists.</p>
<p>In Haskell, I came up with this function to do the work:</p>
<p class="code">makeallsets :: Integral a => [a] -> a -> [[a]]
makeallsets xs n = mas (addtoonelist [] xs) xs (n - 1)
where mas yss _ 0 = yss
mas yss xs (n + 1) = mas (addtoeachlist yss xs) xs n
where addtoeachlist [] xs = []
addtoeachlist (ys:yss) xs = (addtoonelist ys xs) ++ (addtoeachlist yss xs)
addtoonelist ys [] = []
addtoonelist ys (x:xs) = (x : ys) : (addtoonelist ys xs)</p>
<p>This allowed me to create an answer to the Stack Overflow problem. (Although there's no point posting it for 3 very good reasons: 1. it's not in the target language (which is Scala); 2. It uses the brute force approach; 3. There is already a better answer.)</p>
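(Incidentally, for anyone doing this somewhere other than Haskell: the thing I was deducing is just an n-fold Cartesian product. A quick Python sketch of the same enumeration, via the standard library:)

```python
from itertools import product

def make_all_sets(xs, n):
    # Every length-n combination (repetition allowed, order significant)
    # is one element of the n-fold Cartesian product of xs with itself.
    return [list(t) for t in product(xs, repeat=n)]
```

For `xs = [1, 2]` and `n = 3` this produces exactly the eight lists from the example above, with the last element cycling fastest - the same repeating pattern I spotted in the park.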
<p>Score 1 for perseverance!</p>
<p>P.s. if anyone would like to show me a better way, I'd be very glad to hear it.</p>Mojohttp://www.blogger.com/profile/11704648978910126233noreply@blogger.com1tag:blogger.com,1999:blog-34763567.post-54098717479742959232010-07-25T10:20:00.000+00:002010-07-25T10:20:55.673+00:00Update to ToDoList<p>I have made an update to the ToDoList WPF application I wrote some time ago.</p>
<p><a href="http://github.com/downloads/Mellen/To-Do-List/ToDo1.2.0.0.zip">ToDoList version 1.2.0.0</a></p>
<h3>Changes:</h3>
<ul>
<li>Created a ViewModel for the To Do List object and To Do List items.</li>
<li>Setup templates in the MainWindow XAML that display the ViewModel.</li>
<li>Added in an edit window.</li>
<li>Added in a context menu for items that allows for editing, deletion and marking as done.</li>
<li>Added in edit and delete functionality.</li>
</ul>
<p>I think the final addition will be to allow users to view done items. I'll get around to this at some point :D</p>Mojohttp://www.blogger.com/profile/11704648978910126233noreply@blogger.com0tag:blogger.com,1999:blog-34763567.post-29259714368518341822010-05-05T13:47:00.010+00:002012-05-29T09:04:27.665+00:00Memoizing functions in c++<p>I was thinking about <a href="http://en.wikipedia.org/wiki/Memoization">memoization</a>, and how I'd not yet used it. I thought this was a bad thing simply because not using it might lead to me forget about it. So I'm putting together this blog post to help me solidify the concept.</p>
<p>A long while ago I realised a simple fact about square numbers: x² = (x-1)² + (x-1) + x, x ∈ <b>N</b>. I.e. for any positive integer, its square is the square of the previous integer plus the previous integer plus itself; expanding the right-hand side gives x² - 2x + 1 + x - 1 + x = x². (e.g. 17*17 = 16*16 + 16 + 17)</p>
<p>This is something that is unlikely to be interesting or useful, except that I can use it to demonstrate memoization.</p>
<p>From the above formula you can write a recursive function:</p>
<pre class="prettyprint">int square(int n)
{
if(1 == n)
{
return 1;
}
return (square(n - 1) + (n - 1) + n);
}</pre>
<p>As you can see this is a very long-winded way to get the square of a number, and not a function that would ever be used in reality, but it is a good candidate for memoization.</p>
<p>Memoization in this instance is very easy. Simply add in a static <span class="code">map<int, int></span> and update it for each number you haven't calculated yet:</p>
<pre class="prettyprint">int square(int n)
{
static std::map<int, int> results;
if(1==n)
{
return 1;
}
if(0 == results[n])
{
results[n] = square(n-1) + n-1 + n;
}
return results[n];
}</pre>
<p>It might be that you'll want to make the <span class="code">results</span> variable on the heap with some sort of smart pointer, so that it automatically deletes itself, but other than that this second version should give a performance increase over the original.</p>
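As an aside, some languages make this pattern nearly free. Here's the same recurrence memoized in Python with functools.lru_cache, just to show the idea (not a translation of the C++ above - the decorator does all the bookkeeping the static map did by hand):

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def square(n):
    # Same recurrence: x^2 = (x-1)^2 + (x-1) + x.
    # lru_cache stores every result, so each value is computed at most once.
    if n == 1:
        return 1
    return square(n - 1) + (n - 1) + n
```

The second call with the same argument is a dictionary lookup, which is exactly the behaviour the hand-rolled map gives the C++ version.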
<p>I carried out some simple timing tests with <span class="code">std::clock()</span>. The programme had to calculate the squares from 1 to 32767 using the memoized and non-memoized functions, in a loop:</p>
<div>
<span onclick="toggleCode('testCode')" style="color: blue; cursor: pointer; text-decoration: underline;">toggle test code</span>
<pre id="testCode" class="prettyprint hideCode">
#include <map>
#include <ostream>
#include <ctime>
int calcSqr(int);
int calcSqrSlow(int);
int main()
{
clock_t start1 = std::clock();
for(int i = 1; i <= 32767; ++i)
{
calcSqrSlow(i);
}
clock_t start2 = std::clock();
std::cout << "Ticks taken (slow): " << start2 - start1 << std::endl;
clock_t start3 = std::clock();
for(int i = 1; i <= 32767; ++i)
{
calcSqr(i);
}
clock_t start4 = std::clock();
std::cout << "Ticks taken (memo): " << start4 - start3 << std::endl;
return 0;
}
int calcSqrSlow(int n)
{
if(1 == n)
{
return 1;
}
return (calcSqrSlow(n - 1) + (n - 1) + n);
}
int calcSqr(int n)
{
static std::map<int, int> results;
if(1==n)
{
return 1;
}
if(0 == results[n])
{
results[n] = calcSqr(n-1) + n-1 + n;
}
return results[n];
}</pre>
</div>
<p>Ticks taken for the normal function: 3120<br/>
Ticks taken for the memoized function: 78</p>
<p>Obviously this test was biased towards the memoized function, but I really did it to show the potential benefits of memoizing a function where the results can be reused.</p>Mojohttp://www.blogger.com/profile/11704648978910126233noreply@blogger.com1tag:blogger.com,1999:blog-34763567.post-48833200058239747972010-03-23T22:59:00.010+00:002020-06-19T14:13:27.552+00:00SVG + Javascript drag and zoomRecently I've been working on a project that uses SVG (<a href="http://en.wikipedia.org/wiki/Svg">Scalable Vector Graphics</a>).<br />
<br />
I have been using SVGWeb (<a href="http://code.google.com/p/svgweb/">http://code.google.com/p/svgweb/</a>) so that the SVG will work in all the major browsers.<br />
<br />
It is a fantastic library and I am so grateful to the people who work on it.<br />
<br />
The things I found difficult were figuring out how to get zooming with the mouse wheel and dragging to work. I had it working in Firefox, using its native SVG renderer, however SVGWeb does things differently. It took me a while to work out how. I'm going to share what I found here. (Hooking the mouse wheel is actually explained on the SVGWeb mailing list: <a href="http://groups.google.com/group/svg-web/browse_thread/thread/fcc7573769813e0c/329fd28f126f1575?lnk=gst&q=mouse+wheel#329fd28f126f1575">Mouse Wheel Events</a>.)<br />
<br />
With dragging, I knew I needed to store the old X and Y values of the position of the mouse and take the difference between them and the new mouse position. For some reason setting global variables for the old X and Y values didn't quite work - the delta was very small, approximately 7.5 times too small.<br />
<br />
With zooming, the SVGWeb library doesn't pick up the mouse wheel event. The way to get around this is to attach the mouse wheel event to the container tag (e.g. <code>div</code>) that is surrounding the <code>object</code> tag that is holding the SVG on the HTML page.<br />
<p>On to the code!</p>
I did not come up with the Javascript - I took it from various places;
mostly the SVGWeb mailing list entry above and the "photos" demo that
comes with SVGWeb. <br />
<p>This is the main HTML and Javascript for the page that is holding the SVG:</p>
<p><span onclick="toggleCode('mainHTML')" style="color: blue; cursor: pointer; text-decoration: underline;">toggle code</span></p>
<pre class="prettyprint hideCode" id="mainHTML"><!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd"><br /><html xmlns="http://www.w3.org/1999/xhtml" xml:lang="en" lang="en"><br /> <head><br /> <meta http-equiv="Content-Type" content="text/html; charset=ISO-8859-1" /><br /> <title>SVG Example</title><br /> <meta name="svg.render.forceflash" content="true" /><br /> <link rel="SHORTCUT ICON" href="favicon.ico" /><br /> </head><br /> <body onload="loaded()"><br /> <div id="svgContainer"><br /> <!--[if IE]><br /> <object id="svgImage" src="example.svg" classid="image/svg+xml" width="100%" height="768px"><br /> <![endif]--><br /> <!--[if !IE]>--><br /> <object id="svgImage" data="example.svg" type="image/svg+xml" width="100%" height="768px"><br /> <!--<![endif]--><br /> </object><br /> </div><br /> <script type="text/javascript" src="svg/src/svg.js" data-path="svg/src/" ></script><br /> <script type="text/javascript"><br /> function loaded()<br /> {<br /> hookEvent("svgContainer", "mousewheel", onMouseWheel);<br /> }<br /> function hookEvent(element, eventName, callback)<br /> {<br /> if(typeof(element) == "string")<br /> element = document.getElementById(element);<br /> if(element == null)<br /> return;<br /> if(element.addEventListener)<br /> {<br /> if(eventName == 'mousewheel')<br /> element.addEventListener('DOMMouseScroll', callback, false); <br /> element.addEventListener(eventName, callback, false);<br /> }<br /> else if(element.attachEvent)<br /> element.attachEvent("on" + eventName, callback);<br /> }<br /> function cancelEvent(e)<br /> {<br /> e = e ? e : window.event;<br /> if(e.stopPropagation)<br /> e.stopPropagation();<br /> if(e.preventDefault)<br /> e.preventDefault();<br /> e.cancelBubble = true;<br /> e.cancel = true;<br /> e.returnValue = false;<br /> return false;<br /> }<br /> function onMouseWheel(e)<br /> {<br /> var doc = document.getElementById("svgImage").contentDocument; <br /> e = e ? 
e : window.event;<br /> doc.defaultView.onMouseWheel(e);<br /> return cancelEvent(e);<br /> }<br /> </script><br /> </body><br /></html></pre>
<p>This is the SVG and Javascript:</p>
<p><span onclick="toggleCode('mainSVG')" style="color: blue; cursor: pointer; text-decoration: underline;">toggle code</span></p>
<pre class="prettyprint hideCode" id="mainSVG"><?xml version="1.0" encoding="UTF-8" standalone="no"?><br /><svg version="1.0" xmlns="http://www.w3.org/2000/svg" onload="loaded()" id="svgMain" ><br /> <script type="text/javascript" language="javascript"><br /> <![CDATA[<br /> var isDragging = false;<br /> var mouseCoords = { x: 0, y: 0 };<br /> var gMain = 0;<br /> <br /> function loaded()<br /> {<br /> var onloadFunc = doload;<br /><br /> if (top.svgweb) <br /> {<br /> top.svgweb.addOnLoad(onloadFunc, true, window);<br /> }<br /> else <br /> {<br /> onloadFunc();<br /> }<br /> }<br /> <br /> function doload()<br /> {<br /> hookEvent('mover', 'mousedown', onMouseDown);<br /> hookEvent('mover', 'mouseup', onMouseUp);<br /> hookEvent('mover', 'mousemove', onMouseMove);<br /> hookEvent('mover', 'mouseover', onMouseOver);<br /> gMain = document.getElementById('gMain');<br /> gMain.vScale = 1.0;<br /> gMover = document.getElementById('mover');<br /> gMover.vTranslate = [50,50];<br /> setupTransform();<br /> }<br /> <br /> function onMouseDown(e)<br /> {<br /> isDragging = true;<br /> }<br /> <br /> function onMouseUp(e)<br /> {<br /> isDragging = false;<br /> }<br /> <br /> function onMouseOver(e)<br /> {<br /> mouseCoords = {x: e.clientX, y: e.clientY};<br /> }<br /> <br /> function onMouseMove(e)<br /> {<br /> if(isDragging == true)<br /> {<br /> var g = e.currentTarget;<br /> var pos = g.vTranslate;<br /> var xd = (e.clientX - mouseCoords.x)/gMain.vScale;<br /> var yd = (e.clientY - mouseCoords.y)/gMain.vScale;<br /> g.vTranslate = [ pos[0] + xd, pos[1] + yd ];<br /> g.setAttribute("transform", "translate(" + g.vTranslate[0] + "," + g.vTranslate[1] + ")");<br /> }<br /> <br /> mouseCoords = {x: e.clientX, y: e.clientY};<br /> <br /> return cancelEvent(e);<br /> }<br /> <br /> function setupTransform() <br /> {<br /> gMain.setAttribute("transform", "scale(" + gMain.vScale + "," + gMain.vScale + ")");<br /> }<br /> <br /> function hookEvent(element, eventName, 
callback)<br /> {<br /> if(typeof(element) == "string")<br /> element = document.getElementById(element);<br /> if(element == null)<br /> return;<br /> if(eventName == 'mousewheel')<br /> {<br /> element.addEventListener('DOMMouseScroll', callback, false); <br /> }<br /> else<br /> {<br /> element.addEventListener(eventName, callback, false);<br /> }<br /> }<br /> <br /> function cancelEvent(e)<br /> {<br /> e = e ? e : window.event;<br /> if(e.stopPropagation)<br /> e.stopPropagation();<br /> if(e.preventDefault)<br /> e.preventDefault();<br /> e.cancelBubble = true;<br /> e.cancel = true;<br /> e.returnValue = false;<br /> return false;<br /> }<br /> <br /> function onMouseWheel(e)<br /> {<br /> e = e ? e : window.event;<br /> var wheelData = e.detail ? e.detail * -1 : e.wheelDelta / 40;<br /> <br /> if((gMain.vScale > 0.1) || (wheelData > 0))<br /> {<br /> gMain.vScale += (0.02 * wheelData);<br /> }<br /> <br /> setupTransform();<br /> <br /> return cancelEvent(e);<br /> }<br /> ]]><br /> </script><br /> <g id="gMain"><br /> <g transform="translate(50,50)" id="mover"><br /> <circle stroke-width="2" stroke="black" cx="0" cy="0" r="20" fill="red"/><br /> <text font-family="verdana" text-anchor="middle" transform="translate(0,40)" fill="black" stroke-width="1" font-size="12" >Drag me!</text><br /> </g><br /> </g><br /></svg></pre>
There is some overlap in the Javascript presented here; this is just to keep things simple if you're copy/pasting it to test for yourself.<br />
<br />
This Javascript in the main file passes the mouse wheel event info to the SVG document:
<pre class="prettyprint">function onMouseWheel(e)<br />{<br /> var doc = document.getElementById("svgImage").contentDocument; <br /> e = e ? e : window.event;<br /> doc.defaultView.onMouseWheel(e);<br /> return cancelEvent(e);<br />}</pre>
The rest of the important Javascript is in the SVG document.<br/>
To get dragging to work, first define a global object to hold position information:
<br />
<pre class="prettyprint">var mouseCoords = { x: 0, y: 0 };</pre>
When the mouse moves over the desired element, update the object:
<pre class="prettyprint">function onMouseOver(e)<br />{<br /> mouseCoords = {x: e.clientX, y: e.clientY};<br />}</pre>
There also needs to be a global boolean to switch dragging on and off. I called mine <code>isDragging</code>. Toggle dragging when the mouse is up or down on the element.<br />
<pre class="prettyprint">function onMouseDown(e)<br />{<br /> isDragging = true;<br />}<br /> <br />function onMouseUp(e)<br />{<br /> isDragging = false;<br />}</pre>
When moving the mouse with dragging on, change the position of the element and update the object. Notice that the delta is being divided by the scale. This prevents the movement from becoming erratic.<br />
<pre class="prettyprint">function onMouseMove(e)<br />{<br /> if(isDragging == true)<br /> {<br /> var g = e.currentTarget;<br /> var pos = g.vTranslate;<br /> var xd = (e.clientX - mouseCoords.x)/gMain.vScale;<br /> var yd = (e.clientY - mouseCoords.y)/gMain.vScale;<br /> g.vTranslate = [ pos[0] + xd, pos[1] + yd ];<br /> g.setAttribute("transform", "translate(" + g.vTranslate[0] + "," + g.vTranslate[1] + ")");<br /> }<br /> <br /> mouseCoords = {x: e.clientX, y: e.clientY};<br /> <br /> return cancelEvent(e);<br />}</pre><br />
And that's how it works.<br />
<br />Mojohttp://www.blogger.com/profile/11704648978910126233noreply@blogger.com1tag:blogger.com,1999:blog-34763567.post-5420906542410684212010-03-05T12:52:00.002+00:002012-01-17T10:31:55.447+00:00Pomodoro!I've been feverishly subscribing to blogs recently after I realised I'm only really reading channel9.<br />
<br />
I've got so much reading to do it's unreal. I've got through about 50 .NET posts so far and I've got 50 more to go before I'm caught up. I've also got about 50 PHP posts to read too.<br />
<br />
In my .NET blogs I came across this entry: <a href="http://www.developingfor.net/productivity/you-say-tomato-i-say-pomodoro.html">You say tomato i say pomodoro</a> at the developing for .NET blog. The post outlines a simple way to help manage your time effectively. It has inspired me to create a little timer app and a todo list app.<br />
<br />
The timer app is really simple: it's a picture of a tomato with a button on it that minimises the app to the notification area and sets a timeout period. Once the period is reached (the length is set in the config file) then the app pops back up and plays a sound at you. I've put the code over at GitHub: <a href="http://github.com/Mellen/Pomodoro">code for Pomodoro timer</a>.<br />
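For anyone curious, the core of that timer is nothing more than a one-shot countdown. A rough Python sketch of the idea (not the actual C#/WPF code; the notify callback stands in for the pop-up and sound):

```python
import threading

def start_pomodoro(seconds, notify):
    # One-shot timer: after `seconds` have elapsed, call notify().
    # In the real app this is where the window pops back up and
    # plays a sound; the length would come from the config file.
    timer = threading.Timer(seconds, notify)
    timer.start()
    return timer
```

Everything else in the app is window dressing around that single deferred callback.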
<br />
The todo list app is equally simple, just a list view and list item entry controls. On close it writes to a file. The source is also at GitHub: <a href="http://github.com/Mellen/To-Do-List">code for To Do List</a>.<br />
<br />
<b>update</b><br />
<br />
I've uploaded the binaries for each, so you don't have to compile them!<br />
<br />
<a href="http://github.com/downloads/Mellen/To-Do-List/ToDoList.zip">To Do List executable</a><br />
<a href="http://github.com/downloads/Mellen/Pomodoro/Pomodoro.zip">Pomodoro executable</a>Mojohttp://www.blogger.com/profile/11704648978910126233noreply@blogger.com0