Content previously hosted at CNS Vision Lab
Hosted with permission by Andrew Browning, who authored this page while completing his doctoral dissertation at Boston University’s Department of Cognitive and Neural Systems.


ViSTARS: Visual Navigation in a Cluttered World

A model of primate motion processing cells demonstrates how primates, including humans, use visual information for reactive navigation. Heading (direction of travel) is combined with obstacle and goal location to generate steering commands around obstacles and towards a goal. The model links neurophysiology (the study of neurons) with psychophysics (the study of the relationship between stimuli and perception) and technology.

Our model mimics macaque monkey neurons and connects them together to form a neural network. The neural network is then used to process video and detect heading. The model embodies important elements of human responses to optic flow.

The model successfully reproduces neurophysiological data, matches human psychophysics data, and demonstrates the technological potential of neural algorithms.


Heading Perception: Where am I going?

[Missing figure: optic flow field.]

James J. Gibson introduced the concept of optic flow during World War II. He defined optic flow as information carried by light resulting from environmental structure and the animal’s path through the environment.
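Gibson's idea can be made concrete with a standard pinhole-camera model. The sketch below (an illustration of the concept, not ViSTARS code; the function name and parameters are my own) generates the ideal flow field for a purely translating observer viewing a scene at uniform depth: with forward motion, the vectors radiate outward from the focus of expansion.

```python
import numpy as np

def translational_flow(xs, ys, T, Z, f=1.0):
    """Ideal optic flow at image points (xs, ys) for a pinhole camera with
    focal length f translating with velocity T through a scene at depth Z."""
    Tx, Ty, Tz = T
    u = (xs * Tz - f * Tx) / Z  # horizontal image motion
    v = (ys * Tz - f * Ty) / Z  # vertical image motion
    return u, v

# Flow for pure forward translation, sampled on a 9x9 grid of image points.
xs, ys = np.meshgrid(np.linspace(-1, 1, 9), np.linspace(-1, 1, 9))
u, v = translational_flow(xs, ys, T=(0.0, 0.0, 1.0), Z=10.0)
# Vectors point away from the focus of expansion at the image centre.
```

With T = (0, 0, 1) the flow vanishes at the image centre and grows toward the periphery, the classic expansion pattern an observer sees when moving straight ahead.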



Model code for heading simulations

Stimuli for heading simulations

Hamburg taxi sequence

Driving video

Stimuli for steering simulations

Humans are able to estimate heading on the basis of very limited visual information. It has been shown that optic flow can be used to detect direction of travel, or heading.

[Missing video: driving sequence.]

Example video used to test the model.

[Missing video: model retinal ganglion cells producing a moving boundary representation.]

The first stage of the network mimics retinal ganglion cells.
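Retinal ganglion cell receptive fields are commonly modelled as a difference of Gaussians: an excitatory centre minus an inhibitory surround. The sketch below (a generic textbook version with assumed parameters, not the model's actual front end) shows how such ON-centre cells cancel in uniform regions and respond at contrast boundaries, yielding a boundary representation.

```python
import numpy as np

def convolve2d(image, kernel):
    """Direct same-size 2-D convolution with zero padding."""
    kh, kw = kernel.shape
    ph, pw = kh // 2, kw // 2
    padded = np.pad(image, ((ph, ph), (pw, pw)))
    out = np.zeros_like(image, dtype=float)
    for i in range(image.shape[0]):
        for j in range(image.shape[1]):
            out[i, j] = np.sum(padded[i:i + kh, j:j + kw] * kernel[::-1, ::-1])
    return out

def gaussian(size, sigma):
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    g = np.exp(-(xx**2 + yy**2) / (2 * sigma**2))
    return g / g.sum()

def on_center_response(image, sigma_c=1.0, sigma_s=3.0, size=13):
    """Difference-of-Gaussians: narrow excitatory centre minus broad
    inhibitory surround; uniform regions cancel, boundaries survive."""
    center = convolve2d(image, gaussian(size, sigma_c))
    surround = convolve2d(image, gaussian(size, sigma_s))
    return np.maximum(center - surround, 0.0)  # half-wave rectified firing rate

# A bright square on a dark background: response concentrates at the edges.
img = np.zeros((32, 32))
img[8:24, 8:24] = 1.0
r = on_center_response(img)
```

Inside the square and in the background the centre and surround cancel; only pixels near the luminance boundary drive the cells.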

[Missing video: model MT cells producing an optic flow estimate.]

Neurons in primary visual cortex (V1) compute local motion estimates, which are then integrated across space and time in the middle temporal area (MT).
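This two-stage idea can be illustrated with a toy block-matching sketch (not the model's V1/MT circuitry; function names and parameters are assumptions): estimate a local displacement for each image patch, then average the noisy local estimates, as MT-like spatial integration would.

```python
import numpy as np

def local_motion(frame0, frame1, max_shift=2, patch=5):
    """V1-like stage (sketch): for each patch of frame0, pick the integer
    displacement into frame1 with the smallest sum of squared differences."""
    h, w = frame0.shape
    vecs = []
    for i in range(max_shift, h - patch - max_shift + 1, patch):
        for j in range(max_shift, w - patch - max_shift + 1, patch):
            ref = frame0[i:i + patch, j:j + patch]
            best, best_err = (0, 0), np.inf
            for dy in range(-max_shift, max_shift + 1):
                for dx in range(-max_shift, max_shift + 1):
                    cand = frame1[i + dy:i + dy + patch, j + dx:j + dx + patch]
                    err = np.sum((ref - cand) ** 2)
                    if err < best_err:
                        best_err, best = err, (dy, dx)
            vecs.append(best)
    return np.array(vecs, dtype=float)

def mt_pool(vecs):
    """MT-like stage (sketch): integrate noisy local estimates across space."""
    return vecs.mean(axis=0)

# A random texture shifted one pixel rightward between frames.
rng = np.random.default_rng(0)
frame0 = rng.random((20, 20))
frame1 = np.roll(frame0, 1, axis=1)
pooled = mt_pool(local_motion(frame0, frame1))  # (dy, dx) estimate
```

Each patch recovers the one-pixel rightward shift, and pooling returns the global motion (0, 1).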

[Missing video: model MSTd cells estimating heading.]

The output of the middle temporal area, an approximation of optic flow, is then used in the dorsal medial superior temporal area (MSTd) to determine heading. Our model determines heading accurately and is robust to noise in the input stream.
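The template-matching idea often used to describe MSTd can be sketched as follows (an illustration with assumed names and a synthetic flow field, not the model's implementation): each candidate heading corresponds to a cell tuned to radial expansion from one focus of expansion (FOE), and the best-matching template wins even when the flow is noisy.

```python
import numpy as np

def heading_from_flow(xs, ys, u, v, candidates):
    """MSTd-like stage (sketch): score each candidate focus of expansion by
    how well the flow projects onto outward radial directions from it."""
    best, best_score = None, -np.inf
    for cx, cy in candidates:
        rx, ry = xs - cx, ys - cy
        norm = np.hypot(rx, ry) + 1e-9
        score = np.sum((u * rx + v * ry) / norm)  # radial projection of flow
        if score > best_score:
            best_score, best = score, (cx, cy)
    return best

# Synthetic expansion flow with the FOE at (0.2, -0.1), plus mild noise.
rng = np.random.default_rng(1)
xs, ys = np.meshgrid(np.linspace(-1, 1, 15), np.linspace(-1, 1, 15))
u = (xs - 0.2) + 0.02 * rng.standard_normal(xs.shape)
v = (ys + 0.1) + 0.02 * rng.standard_normal(ys.shape)
grid = np.linspace(-1, 1, 21)
estimate = heading_from_flow(xs, ys, u, v, [(x, y) for x in grid for y in grid])
```

Because the score pools evidence from the whole field, perturbing individual vectors barely moves the winning template, which is one intuition for why heading estimates from population templates tolerate noise.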


Object Localization: Where are the goals and obstacles?

Human steering decisions can be modeled by combining heading with the location of a goal and the location of obstacles. Optic flow can be used to determine the location of objects, along with their direction and speed of travel.
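A minimal sketch of how optic flow reveals an independently moving object (a generic residual-flow illustration with synthetic values, not the model's MT/MST mechanism): estimate the dominant background motion, then flag locations whose flow departs from it.

```python
import numpy as np

# Background flow: uniform rightward motion; one region moves differently.
h, w = 30, 30
u = np.full((h, w), 1.0)   # horizontal flow component
v = np.zeros((h, w))       # vertical flow component
u[10:18, 12:20] = 0.0      # an object moving unlike the background
v[10:18, 12:20] = -1.5

# Estimate background motion robustly with the median, then flag pixels
# whose flow departs from it: those belong to independently moving objects.
bg_u, bg_v = np.median(u), np.median(v)
residual = np.hypot(u - bg_u, v - bg_v)
object_mask = residual > 0.5

ys, xs = np.nonzero(object_mask)
centroid = (ys.mean(), xs.mean())                           # object position
velocity = (v[object_mask].mean(), u[object_mask].mean())   # direction, speed
```

The residual isolates the object exactly because its flow contradicts the global pattern; its centroid gives the location and its mean flow gives the direction and speed of travel.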

[Missing video: model response to the Hamburg taxi sequence.]

Model response to Hamburg Taxi Sequence. The model detects the position, direction and speed of the pedestrian and the three cars. The middle panel illustrates neural activations in area MT, the left panel shows MT activation on a close-up of the pedestrian, and the right panel illustrates activations in area MST. Model responses evolve over time becoming more precise and accurate.

[Missing video: ViSTARS steering.]

When heading information is combined with goal and obstacle location, the ViSTARS model steers around obstacles towards a goal.
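The combination can be sketched with attractor/repeller steering dynamics in the spirit of behavioral models such as Fajen and Warren's (a hypothetical minimal version; the gains and repulsion profile are assumptions, not ViSTARS parameters): the goal attracts the current heading while each obstacle repels it, with repulsion fading as the obstacle moves off-axis.

```python
import numpy as np

def steering_rate(heading, goal_angle, obstacle_angles,
                  k_goal=1.0, k_obs=2.0, decay=3.0):
    """Turn-rate command (sketch): goal attraction plus obstacle repulsion.
    Angles are in radians relative to straight ahead."""
    turn = -k_goal * (heading - goal_angle)  # attraction toward the goal
    for obs in obstacle_angles:
        delta = heading - obs
        # Repulsion pushes the heading away from the obstacle, and is
        # strongest when the obstacle lies near the direction of travel.
        turn += k_obs * np.sign(delta) * np.exp(-decay * abs(delta))
    return turn

# Goal straight ahead (0 rad), obstacle just left of the current heading.
rate = steering_rate(heading=0.1, goal_angle=0.0, obstacle_angles=[-0.2])
```

With the obstacle just left of the heading, repulsion outweighs goal attraction and the command turns rightward (positive); once clear of the obstacle, the goal term steers the heading back.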


This research is supported in part by

The National Science Foundation (NSF BCS-0235398 and NSF SBE-0354378), the National Geospatial-Intelligence Agency (NMS201-01-1-2016), and the Office of Naval Research (ONR N00014-01-1-0624).

  • NGA
  • ONR
  • NSF


Selected Publications

Browning, N. A., Grossberg, S., & Mingolla, E. (2007). Heading from optic flow of natural scenes during motion processing by cortical areas MT and MST. Society for Neuroscience conference, November 2007, San Diego.