Visiting York’s Centre for Vision Research

I love the west coast. Although I consider myself a “traveller” in the sense that I’ve put in more road-trip hours around BC and Washington than anyone else I know, in my adult life I’ve really never been farther east than Creston. So when I was invited to attend the Centre for Vision Research at York University in Toronto for their annual week-long summer school, I was excited. Because it’s old and features in many third-grade early Canadian history classes, Toronto’s a bit mythic in my mind. More people live in Toronto than in Vancouver and Seattle combined, and Toronto has a real subway system. The Kids in the Hall are from Toronto. While the Vancouver International Film Festival is fun to attend, it’s TIFF that gets coverage from the American and British media. Toronto = neat.

More than travelling, though, I was looking forward to visiting the CVR. The cogslab is only one degree of separation from the CVR – our newest lab member Caitlyn completed her undergraduate thesis in a CVR lab and is working here over the summer in anticipation of starting a Master’s degree with Mark in the fall. (I’m sure she’ll be blogging soon!) Established in 1992, the CVR brings together researchers in health science, biology, psychology, computer science, kinesiology, and neuroscience who are all interested in studying different aspects of vision. The CVR’s approach to vision science mirrors the one that drives all of the cognitive sciences: collaborative and interdisciplinary. Its annual week-long summer school aims to expose undergraduate students to this approach to vision research, and to provide daily crash-courses on everything from basic perception all the way up to human-computer interaction.

The week was pretty intense. The summer school students stayed in a dorm about a five-minute walk from the CSE building, where the CVR resides. Although I can’t say I enjoy staying in dorm rooms, ours was conveniently located above a coffee outlet, so I couldn’t complain. Every morning we enjoyed two lectures by CVR researchers; in the afternoons, we split up into smaller groups and had the opportunity to explore the CVR labs, ask questions of professors and their graduate students, and participate in a few experiments.

I was surprised that, despite the variety of the talks, I found them all quite compelling. Even the talks on topics I don’t often think about were insightful. For example, Wolfgang Stürzlinger lectured on three-dimensional interfaces and explained how complicated they are to design, given that we aren’t very good at navigating in “full” 3D. Aside from deep-sea divers, astronauts, and fighter pilots (who all need to be trained), we’re used to navigating in a constrained ‘2.5D’ environment – rotation in three dimensions is not an intuitive component of human perception. And though I usually find clinical research interesting but not fascinating, Jennifer Steeves’s talk about her research program with clinical patients showed how studying people with visual agnosia can actually lead us to a greater understanding of normal perception. She told us about her work with a number of patients, including patient DF, who was able to easily classify scenes based on their colours and textures despite her deficit in object recognition – suggesting that scene perception is not simply a bottom-up process of recognizing and combining objects.

Some of the research presented was more closely related to areas I was familiar with as a cognitive psychology student. Maz Fallah gave a rapid but detailed overview of selective attention in a number of contexts, including spatial, feature, and object selection. One of the things he discussed was the role of the frontal eye field (FEF) in spatial attention. It had already been established that the FEF is involved in redirecting spatial attention, but Dr. Fallah was able to show a direct link between the FEF and spatial selection through microstimulation of FEF neurons – if the stimulated neurons are associated with a target’s location in the visual field, detection of the target is facilitated. Another topic he covered was colour hierarchy – he explained that when colour is an irrelevant feature for the task at hand, we appear to have a consistent hierarchy (red-green-yellow-blue, with red being the strongest) for automatic colour selection during motion processing.

Laurie Wilcox gave a lecture on binocular vision, and I learned that we know far less about stereopsis – seeing in three dimensions – than I realized. She is currently working on a project that has found that young children are better than adults at detecting both fine and coarse stimulus disparities (that is, children are more accurate at identifying which of two rapidly presented stimuli is “in front” of the other). The reason for this age difference is unclear – it must have something to do with perceptual development, but there is no obvious explanation for how these differences arise. We had a chance to visit the Wilcox lab and speak with Dr. Wilcox and her students about some of the areas they are studying, including monocular occlusion and how people can use occlusions as depth cues. As part of the 3D Film Innovation Consortium, Dr. Wilcox also studies how people experience 3D movies. And although I must admit that 3D films have yet to win me over, I still find it interesting to think about all the things filmmakers must consider: for example, in traditional filmmaking, background objects are often left blurry or out of focus in order to draw attention to the foreground action, but in 3D movies, blurry three-dimensional background objects can look quite strange to viewers.

We also attended a lecture by Ian Howard, the founder of the Centre for Vision Research. Dr. Howard’s reputation had preceded him – I had heard he was something of an engineer, with a home full of toys and contraptions he had made himself. Dr. Howard doesn’t use computer-generated stimuli in his studies of binocular vision, instead opting to build his own experimental apparatuses (it worked for Helmholtz and Kohler, so why not?). He explained his most recent device, the dichoptiscope, a system of mirrors and rotatable tracks designed to investigate motion in depth.

Dr. Howard’s lecture covered the evolution of vision and the basic ingredients required for any new sensory signal to emerge: if a chance mutation creates a signal that can be associated with an event in the world, and the organism can respond to that signal to its advantage, then local gene expression can allow for the plasticity needed to form stronger associations with the signal, which can in turn be refined through adaptation (in a very simple nutshell) – thus, eyes! Or photoreceptor cells and eyespots to begin with, at least. He talked about the evolution of binocular vision (in which the eyes have overlapping visual fields) through Hebbian synapses, and discussed visual illusions. To drive home the point that visual illusions are only illusions in two dimensions, he brought in three-dimensional models of some classics, like Shepard’s tables. He also gave a brief history of how artists have represented space over time, and how early artists didn’t take lines of perspective into consideration. The Greeks were among the first to use perspective in their illustrations, but it didn’t become popular until the Renaissance. Although we all use perspective to perceive the world, replicating it is not an intuitive task – without training, we tend to draw the concepts that make up objects (ovals as the tops of coffee cups, squares as the fronts of blocks) rather than the image the retina picks up.

We had the chance to do so many things and hear so many lectures that I can’t talk about them all in detail, but here are a handful more: we visited the Tumbling Room, where we sat stationary in a chair while a room (with everything nailed down) was rotated around us – it gives you a very, very strong illusion of being upside-down; we visited a student in the Perception & Plasticity Lab who is using change blindness tasks with macaques as a tool to study memory; students in the Sensorimotor Control Lab took us on a tour of their projects on visuomotor adaptation, body-position judgements, and behaviour in the face of conflicting sensory information; and the Human and Computer Vision Lab showed us their face-detecting Attentive Panoramic Sensor (since I can’t do it justice, see the video here). Laurence Harris and Richard Dyde discussed their Bodies in the Space Environment project, which looks at how physical and visual cues are combined with internal body representations to produce our perception of “up” – both in us regular earth-dwellers, in whom all these cues tend to agree but can be manipulated, and in astronauts (see the Rick Mercer Report here).

On a more personal note, I found the whole thing really exciting. What I love the most about research is learning about the creative solutions people come up with to solve problems. Everyone at the CVR is clearly dedicated to the research they’re doing, and the CVR has managed to pull together an inventive and resourceful group of people who were happy to share their experiences and teach us as much as possible in one short week.

– Kim