The trials and tribulations of debugging code

Sometimes debugging computer code can be very challenging. We do a lot of computer modeling in the lab, and in this case I discovered a bug in the process of having a computer model, which we call Tempus, attempt to respond similarly to human learners under the same conditions. This is done through a process called ‘fitting’: you give a model some initial settings, compare its behaviour with some target (the human data), and then fiddle with those settings (for instance using gradient descent on the error) to try to make the model produce data that looks more like the human data. In this case, some training data was withheld from both the humans and the model during learning, in order to see how well the learning generalizes to novel stimuli. This is a comparison of the model with human data as of last fall:


These fits look decent, but we then needed to modify our approach to run the model multiple times at each setting and average its behaviour in order to describe the effect of noise. After implementing this change, a quick look showed the model to be doing quite poorly on this particular measure.


If this had been the only thing that changed, it would have been easier to narrow down the source of the problem. However, I had used this update as an opportunity to make a few other little adjustments to the model (it can be time consuming to verify output after every seemingly inconsequential commit). When I started to see poor results on this particular measure, I just assumed it would take the fitting process a bit of time to find some good settings again. When the fitting was unable to improve this measure, however, I started to get concerned. What was going on? I decided to look at single runs of the model instead of the noisy averages. I knew immediately after doing so that there was something wrong with the averaging: single runs of the model were coming exceptionally close to the human output. An example subject and its associated fit looked like this:


To check this, I needed to look at the actual distribution of simulations being averaged. And here we find our snake in the grass: a piece of code that produces the programmer’s worst nightmare, an error whose output systematically retains the same shape and scale as correct data.

modelIndividualTProbs = reshape(cell2mat(lookupTable(constParamRows,5)),length(constParamRows),...

What this code is designed to do is take data that is shaped like this:

1
0.5
1
0
0
1
0.5
0.5
1
1
0
0

And reshape it into data that looks like this:

1 0.5 1
0 0 1
0.5 0.5 1
1 0 0

That is, there are 4 simulations in this example, with 3 pieces of data each. Because the data originally came in a format that doesn’t let you distinguish the individual simulations very easily, it needs to be massaged into a shape that has the same number of elements (12), but is a 4×3 matrix instead of 12×1. But what does the code I had written actually do?

1 0 1
0.5 1 1
1 0.5 0
0 0.5 0

Notice that this output is incorrect, but it looks very similar to the kind of output we might expect. So rather than getting an average of:

0.625 0.25 0.75

you get:

0.625 0.5 0.5

The corrected piece of code should look like:

modelIndividualTProbs = reshape(cell2mat(lookupTable(constParamRows,5)),...

Which breaks the individual simulations up as columns of a 3×4 matrix first, and then transposes the output into the 4×3 we want. So now we can rest easy with results that make a bit more sense.
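The pitfall here is that MATLAB’s reshape fills matrices column-major, not row-major, so reshaping straight to 4×3 interleaves values from different simulations. As a sketch of both the bug and the fix outside MATLAB, here is the same example in Python with NumPy, using order='F' to mimic MATLAB’s column-major fill (the values are the ones from the tables above; variable names are mine, not from the model code):

```python
import numpy as np

# Flattened data: 4 simulations with 3 values each, stored
# one simulation after another in a single 12-element vector.
flat = np.array([1, 0.5, 1,    # simulation 1
                 0, 0, 1,      # simulation 2
                 0.5, 0.5, 1,  # simulation 3
                 1, 0, 0])     # simulation 4

# The bug: a column-major reshape straight to 4x3 fills each
# column with consecutive values, scrambling the simulations.
wrong = flat.reshape((4, 3), order='F')
# [[1.   0.   1. ]
#  [0.5  1.   1. ]
#  [1.   0.5  0. ]
#  [0.   0.5  0. ]]

# The fix: reshape column-major to 3x4 so each COLUMN is one
# simulation, then transpose so each ROW is one simulation.
right = flat.reshape((3, 4), order='F').T
# [[1.   0.5  1. ]
#  [0.   0.   1. ]
#  [0.5  0.5  1. ]
#  [1.   0.   0. ]]

print(wrong.mean(axis=0))  # [0.625 0.5   0.5  ] -- plausible, but wrong
print(right.mean(axis=0))  # [0.625 0.25  0.75 ] -- the correct averages
```

Note that both versions have exactly the same shape and the same set of values, and even agree on the first column’s average, which is what made the bug so hard to spot from the averaged output alone.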




Goodbye Victor and Betty


Betty and Victor have been important team members in the CSLab – they were a steady presence and never ceased to put a smile on our faces. They have helped with everything from coding, to running research studies, to hosting lab parties.

Although they are moving on to pursue their dreams, it will not go unnoticed that we have said our good-byes to some of the most valuable members of our tight-knit team. We are forever grateful for their support and dedication; the impact they have both left upon us is great, and they will be missed.

Nicole, from Germany

Thank you for being a part of the lab; your presence brightened everybody’s day. It was a pleasure to have you! Please come back to visit Canada soon.



I’m Nicole, a graduate student of psychology from Cologne, Germany, and I’ll finish my master’s degree this summer. My research interests are in cognitive science and forensic psychology. I joined the CSL for 5 weeks. It was a pretty cool experience. The people are really nice and funny and took care of me. Apart from work I love travelling (no surprise), music, movies, dancing, workouts, laughing and having a lot of fun with chill people! So, thanks to all you guys for this great time at your lab in Canada and for all the support, help, taking me out, introducing me to others and explaining how things in the lab work!!! Maybe there’s a chance for me to work in Canada some time!

Christmas Party 2013!

It was finally time to relax! Some of us played boxing, tennis and golf on the Wii while others played StarCraft. Then came dinner time – we ate delicious pasta from Anton’s while watching the classic and heartwarming movie “Elf”. Later on we played Secret Santa, the “I’m-going-to-trade-presents-with-you” version. Then things got hyped up when we played “Who? What? Where?”, a drawing game where the premise was to draw pictures of whatever we got from decks of cards. The party went for hours; we all went home with smiles on our faces. It was a great start to the well-deserved break!

Halloween Party!

Hot pot! 🙂

Great news! We’ve been published!

Thank you for participating and helping to make the SkillCraft project a success! With your support we received an amazing 4400 replays from players across multiple skill levels. The study has been published in PLOS ONE and can be viewed at

We are also still collecting replays for our current longitudinal StarCraft 2 study. If you have not yet participated, please consider doing so at

Thank you again for your support. Please look forward to our future studies!

Camping Trip 2013

Rain or shine we have a good time! See what happened during our first ever lab camping trip!

Initial Findings of Skillcraft Study

The initial findings of the largest study of expertise were released in a Team Liquid post. The findings include action latency as a predictor of skill, how variable importance changes across different skill levels, setting preferences across leagues, and more. Click here to read all about it! If you find that interesting, be sure to participate in our new study that will be even larger and more comprehensive!