Below you will find some brief descriptions of recent investigations in our lab. Please click on the headings for more information.

Extreme Image Transformations

Current deep neural networks often use shortcuts to recognize objects, and their learned representations are not useful when presented with out-of-distribution data. This work presents novel, computationally cheap data transforms that highlight the differences between the strategies humans and machines use for object recognition. These Extreme Image Transformations (EITs) also help networks learn more concise object representations.
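The specific transforms used in this work are not detailed above; as one illustrative example of a computationally cheap transform that disrupts global structure while preserving local texture, consider a simple patch-shuffle (the function name and grid parameter here are hypothetical, not from the EIT paper):

```python
import numpy as np

def patch_shuffle(image, grid=4, rng=None):
    """Split an image into a grid x grid set of patches and shuffle them.

    A transform of this kind destroys global object shape while keeping
    local texture intact, probing whether a recognizer relies on
    shortcut features rather than object structure.
    """
    rng = np.random.default_rng(rng)
    h, w = image.shape[:2]
    ph, pw = h // grid, w // grid
    # Collect patches row by row.
    patches = [image[r * ph:(r + 1) * ph, c * pw:(c + 1) * pw].copy()
               for r in range(grid) for c in range(grid)]
    order = rng.permutation(len(patches))
    out = image.copy()
    for i, j in enumerate(order):
        r, c = divmod(i, grid)
        out[r * ph:(r + 1) * ph, c * pw:(c + 1) * pw] = patches[j]
    return out
```

The transform is cheap (a single pass over the image) and preserves the pixel distribution exactly, so any drop in recognition accuracy reflects reliance on spatial structure rather than color statistics.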


Object Tracking Without Appearance Cues

Today’s deep neural networks track objects by re-segmenting the objects of interest in every frame of a video. How can they track when every object looks exactly the same? This work presents a new tracking task, inspired by cognitive psychology, that can be used to strain the capacity of state-of-the-art deep neural network based trackers.
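When all targets are visually identical, appearance-based re-segmentation offers no identity signal, so a tracker must rely on motion continuity alone. A minimal sketch of this idea (a toy greedy nearest-neighbour tracker, not the task or models from this work) shows where identity must come from:

```python
import numpy as np

def nearest_neighbour_track(frames):
    """Propagate identities across frames of identical dots.

    frames: list of (N, 2) arrays of dot positions, in arbitrary order
    per frame. Returns a (T, N, 2) array in which column i always holds
    the same tracked identity. With identical-looking targets there is
    no appearance cue, so matching uses spatial proximity only.
    """
    tracks = [np.asarray(frames[0], dtype=float)]
    for pts in frames[1:]:
        pts = np.asarray(pts, dtype=float)
        prev = tracks[-1]
        assigned = np.empty_like(prev)
        used = set()
        for i, p in enumerate(prev):
            # Greedy nearest-neighbour match; a real tracker would use a
            # globally optimal assignment (e.g. the Hungarian algorithm).
            dists = np.linalg.norm(pts - p, axis=1)
            for j in np.argsort(dists):
                if int(j) not in used:
                    used.add(int(j))
                    assigned[i] = pts[j]
                    break
        tracks.append(assigned)
    return np.stack(tracks)
```

Such a tracker fails exactly when trajectories cross or occlude one another, which is what makes identical-object tracking a stringent test of motion-based identity maintenance.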

Subitizing and Occlusion

Humans are able to rapidly and accurately count small numbers of targets, up to about four – this is called subitizing. How is this accomplished? How do partial occlusions affect the human ability to subitize?

Color Perception

Color perception is a complex process mediated by opponent color channels. There are multiple independent sources of perceived chromaticity, including real light, spatial chromatic induction, and both negative and positive afterimages. What determines the colors of the objects you see? How are these sources combined perceptually?
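As a minimal sketch of what "opponent color channels" means computationally, the conversion below maps RGB into one achromatic and two chromatic opponent axes. The channel weights are illustrative only; physiological opponent channels are built from cone signals (L, M, S), not camera RGB:

```python
import numpy as np

def rgb_to_opponent(rgb):
    """Convert RGB to a simple opponent representation.

    Channels: achromatic luminance (R+G+B)/3, red-green (R-G), and
    blue-yellow (B-(R+G)/2). Illustrative weights, not a calibrated
    physiological model.
    """
    rgb = np.asarray(rgb, dtype=float)
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    lum = (r + g + b) / 3.0          # achromatic channel
    rg = r - g                        # red vs. green opponency
    by = b - (r + g) / 2.0            # blue vs. yellow opponency
    return np.stack([lum, rg, by], axis=-1)
```

Note that any achromatic input (R = G = B) yields zero on both chromatic axes, which is the defining property of an opponent code.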


Border-Ownership Selectivity

Recently, cells with only local receptive fields have been found to be selective for border ownership. Can we model the operations such cells may perform in order to achieve this selectivity?

Data Acquisition and Tracking Robot

This robot improves video capture for object-tracking analysis: it records stable video for later analysis and also serves as a platform for testing tracking algorithms using its attached pan-tilt-zoom (PTZ) camera.

Detecting Motion Onsets and Offsets for Kinetic Occlusion

Surfaces can appear to lie at different depths even when the only information available to the visual system comes from motion. What mechanisms contribute to these percepts? See the published article in the Journal of Vision (JoV); the FMO model source code is also available.
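To make "motion onsets and offsets" concrete, here is a simple temporal-derivative sketch that flags when motion energy in a grayscale sequence starts and stops. This is an illustrative baseline, not the FMO model; the function name and threshold are hypothetical:

```python
import numpy as np

def motion_onsets_offsets(frames, thresh=0.1):
    """Detect motion onset/offset frames in a grayscale video.

    Motion energy per frame transition is the mean absolute temporal
    difference; an onset is a rise through `thresh`, an offset a fall
    through it. Returns frame indices of onsets and offsets.
    """
    frames = np.asarray(frames, dtype=float)
    # energy[t] measures change between frame t and frame t+1.
    energy = np.abs(np.diff(frames, axis=0)).mean(axis=(1, 2))
    moving = energy > thresh
    onsets = [t + 1 for t in range(1, len(moving))
              if moving[t] and not moving[t - 1]]
    offsets = [t + 1 for t in range(1, len(moving))
               if not moving[t] and moving[t - 1]]
    return onsets, offsets
```

In kinetic-occlusion displays, an offset of this kind at an occluding edge (texture elements abruptly stop being visible) is one cue the visual system could use to assign relative depth.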

ViSTARS: Visual Navigation in a Cluttered World

By combining heading information with goal and obstacle locations, the ViSTARS model steers around obstacles toward a goal. (Work by Andrew Browning.)

Rapid Numerosity Estimation

Finding objects in natural environments is challenging even for simple objects with known properties. Because of occlusion and distance, the number and extent (visual angle) of visible surfaces do not directly convey the number of objects in view. Are those three reddish surfaces one nearby, partially occluded apple or three distant apples? Resolving such ambiguities seems effortless for human observers. How is it done?