The Center for Signal Processing, Imaging, Reasoning, and Learning (SPIRAL) is a federation of labs in the Department of Electrical and Computer Engineering at Northeastern University. The center comprises 8 faculty and more than 30 graduate students and postdocs working in research areas such as statistical signal processing, machine learning, distributed computing, and optimization.

Research in SPIRAL is generously supported by the National Science Foundation, the National Institutes of Health, the Department of Health and Human Services, the Office of Naval Research, the Defense Advanced Research Projects Agency, the Intelligence Advanced Research Projects Activity, the U.S. Army Research Laboratory, the Advanced Robotics for Manufacturing Institute, Google Research, Facebook Research, MathWorks, Amazon Cloud Services, the Simons Foundation, the Nancy Lurie Marks Family Foundation, and NVIDIA.


Attend the SPIRAL Seminar Series


Social media and online resources


SPIRAL Twitter Feed

Read about the latest papers from SPIRAL researchers.




Watch presentations by SPIRAL researchers as well as recent seminars.



SPIRAL LinkedIn Page

Read about job opportunities at SPIRAL.

Our Research

Human-Robot Object Handover

Coordination of Dyadic Object Handover for Human-Robot Interactions is a project funded by NSF. In collaboration with the Tunik and RIVER Labs at Northeastern, we are modeling natural human-to-human object handover dynamics in order to develop robotic behavior strategies for more human-like human-to-robot and robot-to-human object handover in the human-robot teams of the future.

Estimating Protein Function From Structure

Mining for Mechanistic Information to Predict Protein Function is a project funded by NSF. In collaboration with researchers from the Chemistry Department, we are using machine learning techniques to develop computational models that can predict protein function from chemical and molecular structure. The models will also be explainable, in the sense that active residues will be identified and their roles connected to the predicted protein function.

Predicting Epileptogenesis After TBI

Multimodal Signal Analysis and Data Fusion for Post-traumatic Epilepsy Prediction is a project funded by NIH. In collaboration with researchers at USC Medical School, we are using machine learning techniques to discover features from multimodal data such as EEG, fMRI, DTI, and blood chemistry in order to build models that can predict whether a traumatic brain injury (TBI) patient is susceptible to epileptogenesis, the emergence of epilepsy following TBI.
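A minimal sketch of the data-fusion idea, using early fusion: feature vectors from each modality are concatenated and fed to a single classifier. The modality names, feature dimensions, and synthetic labels below are illustrative assumptions, not the project's actual features or models.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 40

# Toy per-modality feature blocks for n patients (dimensions are made up).
eeg = rng.normal(size=(n, 6))    # e.g., spectral band powers
fmri = rng.normal(size=(n, 4))   # e.g., connectivity summaries
blood = rng.normal(size=(n, 3))  # e.g., biomarker levels

# Early fusion: concatenate modality features into one vector per patient.
X = np.hstack([eeg, fmri, blood])

# Synthetic binary labels depending on two of the fused features.
y = (X[:, 0] + 0.5 * X[:, 6] > 0).astype(float)

# Plain logistic regression trained by gradient descent on the fused features.
w = np.zeros(X.shape[1])
b = 0.0
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))  # predicted probabilities
    g = p - y                               # gradient of the log loss
    w -= 0.1 * X.T @ g / n
    b -= 0.1 * g.mean()

acc = ((1.0 / (1.0 + np.exp(-(X @ w + b))) > 0.5) == y).mean()
print("training accuracy:", round(acc, 2))
```

In practice, fusion can also happen later in the pipeline (per-modality models whose outputs are combined), and the choice of fusion stage is itself a design question this kind of project explores.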

Modeling TMS-induced Motor Evoked Potentials

Understanding Motor Cortical Organization Through Engineering Innovation to TMS-Based Brain Mapping is a project funded by NSF. In collaboration with researchers at the Tunik Lab and MGH, we are developing hybrid models that combine physics-based partial differential equations and deep neural networks to predict transcranial magnetic stimulation (TMS) induced motor evoked potentials (MEPs) in the upper limbs of humans. These models will then be used to develop inverse motor cortex activation imaging methods to estimate how activations of muscle groups in the arms are represented in the motor cortex. We are also exploring active learning techniques for rapid, label-efficient modeling in this context.
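The hybrid physics-plus-learning structure can be sketched as a simple additive model: a first-principles term provides a baseline prediction, and a small neural network learns the residual the physics does not capture. Everything below is a toy illustration under stated assumptions; the exponential-decay "physics" term, the feature choices, and all names are hypothetical, not the project's actual model.

```python
import numpy as np

rng = np.random.default_rng(0)

def physics_term(stim_intensity, depth_mm, decay=0.1):
    """Toy first-principles estimate: induced E-field magnitude assumed
    to decay exponentially with coil-to-cortex distance (an assumption)."""
    return stim_intensity * np.exp(-decay * depth_mm)

class TinyMLP:
    """Minimal one-hidden-layer network for the learned correction."""
    def __init__(self, n_in, n_hidden):
        self.W1 = rng.normal(0, 0.1, (n_in, n_hidden))
        self.b1 = np.zeros(n_hidden)
        self.W2 = rng.normal(0, 0.1, (n_hidden, 1))
        self.b2 = np.zeros(1)

    def __call__(self, x):
        h = np.tanh(x @ self.W1 + self.b1)
        return (h @ self.W2 + self.b2).ravel()

def hybrid_mep(features, mlp):
    """Hybrid prediction: physics baseline plus data-driven residual."""
    base = physics_term(features[:, 0], features[:, 1])
    return base + mlp(features)

# Five hypothetical stimulation trials: intensity and coil-to-cortex distance.
X = np.column_stack([rng.uniform(40, 80, 5),    # stimulator intensity (% output)
                     rng.uniform(10, 25, 5)])   # coil-to-cortex distance (mm)
mlp = TinyMLP(n_in=2, n_hidden=8)
preds = hybrid_mep(X, mlp)
print(preds.shape)
```

One appeal of this structure is that the network only has to learn the discrepancy between the physics and the data, which typically needs far fewer labeled trials than learning the full input-output map from scratch.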

Predicting the Onset of Aggression in Children with Autism Using Wearable Sensor Data Fusion

Predicting Onset of Aggression in Minimally Verbal Youth with Autism Using Biosensor Data and Machine Learning Algorithms is a project funded by the Simons Foundation and the U.S. Army. In collaboration with researchers at the Maine and UPitt Medical Centers, we are developing sensor fusion algorithms to predict the onset of aggressive behavior in minimally verbal children with autism. Our algorithms will give caregivers real-time information and cues regarding the mental state of the child they are interacting with.