Dealing with “drift” in desk-mounted eye-tracking experiments

Despite what this blog may imply, we manage to get a lot of sciencing done around here. (In fact, a whopping 25% of all StarCraft-related activities conducted in this lab are research-related, but I’ll let someone else fill you in on those details.) In addition to struggling with big-picture questions about human cognition, though, we also need to address the nitty-gritty details of – and problems with – collecting data with an eye-tracker.

Here in the cogslab we use four desk-mounted Tobii X120 eye-trackers for data collection. These machines record the location of your eye-gaze 120 times per second – that is, once every 8.3 milliseconds. Wow! As you can imagine, a single participant generates thousands and thousands of data points. In an experiment with 480 trials that takes about 45 minutes to complete, for example, the average participant yields 201,200 samples. We use a modified dispersion threshold algorithm (Salvucci & Goldberg, 2000) to condense this raw data into “fixations” by identifying points on the screen that the eyes pause at (or “fixate”). Fixations are essentially [x,y] coordinates paired with durations, and our average participant in the aforementioned experiment has only 4,215 of these.
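For the curious, the basic (unmodified) dispersion-threshold idea can be sketched in a few lines of Python. This is a minimal illustration, not our actual analysis code: the threshold values, the fixed 8.3 ms sample interval, and the function names are all assumptions made for the example.

```python
def dispersion(window):
    """Spread of a window of (x, y) gaze samples: (max x - min x) + (max y - min y)."""
    xs = [p[0] for p in window]
    ys = [p[1] for p in window]
    return (max(xs) - min(xs)) + (max(ys) - min(ys))

def detect_fixations(samples, max_dispersion=35.0, min_duration_ms=100.0,
                     sample_interval_ms=8.3):
    """Basic I-DT (Salvucci & Goldberg, 2000) sketch.

    samples: list of (x, y) gaze points recorded at a fixed rate.
    Returns a list of fixations as (centroid_x, centroid_y, duration_ms).
    The threshold and duration defaults here are illustrative, not ours.
    """
    min_samples = int(round(min_duration_ms / sample_interval_ms))
    fixations = []
    i, n = 0, len(samples)
    while i + min_samples <= n:
        j = i + min_samples
        if dispersion(samples[i:j]) <= max_dispersion:
            # Window qualifies as a fixation: grow it until dispersion is exceeded.
            while j < n and dispersion(samples[i:j + 1]) <= max_dispersion:
                j += 1
            window = samples[i:j]
            cx = sum(p[0] for p in window) / len(window)
            cy = sum(p[1] for p in window) / len(window)
            fixations.append((cx, cy, len(window) * sample_interval_ms))
            i = j
        else:
            # Too spread out (e.g. mid-saccade): slide the window forward.
            i += 1
    return fixations
```

Feeding this two stretches of tightly clustered samples separated by a few fast-moving “saccade” points yields two fixations at the clusters’ centroids, which is exactly the raw-samples-to-fixations condensation described above.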



Error-driven attentional learning? um, no.

We just submitted a manuscript to Psych Science on error-driven attentional learning. The basic finding is that people do not seem to shift their attention more on error trials than on correct trials, violating a basic prediction of error-driven accounts of attentional learning. Look for it soon, we hope!

Another paper off!

We just submitted another paper. I love submitting papers, because the whole project is someone else’s problem for like two months. This particular paper was difficult because we had to cut out the working memory bits: working memory span kept correlating with different things in different experiments, and nothing ever replicated. If you are interested, email me and I will tell you about all the time we wasted running the ANT, the AOSPAN, and the SYMSPAN tasks hoping to understand how working memory was related to attentional allocation in category learning. Ugh. Now that we’ve cut out all the messy WM results, though, we’re left with some very nice (and replicable) eye-tracking data. I’m hopeful it will go this time. Of course, I’m always hopeful they will go, so who can say.