BLADDER is a project funded by the NIH-SPARC program [NIH20c]. The team is led by Zach Danziger at FIU and includes partners at multiple academic institutions. The objective is to synergistically merge mechanistic and machine learning models to obtain system-level dynamic models of the bladder for the purpose of developing neurostimulation therapies.

Brain Basis of Emotion: A Category Construction Problem is a project funded by NSF [NSF20]. The team is led by Ajay Satpute in the PEN Cluster. The objective is to demonstrate, with larger sample sizes, that the a priori emotion categories used in brain activity analysis may not be justifiable through statistical modeling and hypothesis testing.

Automation of Characterization and Evaluation (ACE) in Personal Protection Equipment Manufacturing Plants is a project funded by the Army [ARM20]. The team, led by Taskin Padir, includes local PPE manufacturing partners. The objective is to develop robotic and machine learning methods to automate the characterization and assessment of manufactured PPE.

Probabilistic Models for Learning With Less Labels is a project funded by [DARPA19]. The team, led by Avi Pfeffer at Charles River Analytics, consists of multiple partner institutions, including Northeastern. The team at Northeastern includes Jan-Willem Van De Meent, Byron Wallace, and Deniz Erdogmus. The objective is to develop theory and methods for label-efficient learning of machine learning models. We are exploring the use of meta-learning and active learning in the context of probabilistic models.
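
As a toy illustration of the active learning direction, the sketch below implements pool-based uncertainty sampling with an off-the-shelf probabilistic classifier; the synthetic data, the choice of logistic regression, and the query budget are placeholders for illustration, not the project's actual methods.

```python
# Pool-based active learning sketch: at each round, request a label for the
# unlabeled example whose predictive distribution has the highest entropy.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic stand-in data; the project targets real, label-scarce settings.
X = rng.normal(size=(500, 10))
y = (X[:, 0] + 0.5 * X[:, 1] + 0.1 * rng.normal(size=500) > 0).astype(int)

# Small seed set containing both classes; the rest forms the unlabeled pool.
labeled = list(np.where(y == 0)[0][:5]) + list(np.where(y == 1)[0][:5])
pool = [i for i in range(500) if i not in labeled]

model = LogisticRegression()
for _ in range(20):                      # budget of 20 label queries
    model.fit(X[labeled], y[labeled])
    probs = model.predict_proba(X[pool])
    entropy = -(probs * np.log(probs + 1e-12)).sum(axis=1)
    query = pool.pop(int(np.argmax(entropy)))   # most uncertain example
    labeled.append(query)                # the oracle reveals its label

print("accuracy after active labeling:", model.score(X, y))
```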

Coordination of Dyadic Object Handover for Human-Robot Interactions is a project funded by [NSF19b]. The team consists of Northeastern researchers, including Eugene Tunik, Mat Yarossi, Taskin Padir, and Deniz Erdogmus, who collectively bring expertise in motor behavior neuroscience, robotics, human-robot interaction, and machine learning. The objective is to model natural human-to-human object handover dynamics in order to develop robotic behavior strategies for more human-like human-to-robot and robot-to-human object handover.

Mining for Mechanistic Information to Predict Protein Function is a project funded by [NSF19a]. The team consists of Northeastern researchers, including Mary Jo Ondrechen, Penny Beuning, and Deniz Erdogmus, who collectively bring expertise in protein function, biochemistry, and machine learning. The main objective is to develop computational models that can predict protein function from chemical and molecular structure. Models will also be explainable in the sense that active residues will be identified and their roles will be connected to predicted protein function.

Multimodal Signal Analysis and Data Fusion for Post-traumatic Epilepsy Prediction is a project funded by [NIH19]. This collaborative project team consists of researchers at the University of Southern California and Northeastern, including Dominique Duncan and Deniz Erdogmus, who collectively bring expertise in neuroimaging, biomedical signal and image analysis, and machine learning. The main objective is to discover features from multimodal data such as EEG, fMRI, DTI, and blood chemistry, in order to build models that can predict whether a traumatic brain injury (TBI) patient is susceptible to epileptogenesis, the emergence of epilepsy following TBI.

Understanding Motor Cortical Organization Through Engineering Innovation to TMS-Based Brain Mapping is a project funded by [NSF18]. This collaborative project team consists of researchers at Northeastern and Mass General Hospital, including Eugene Tunik, Taskin Padir, Deniz Erdogmus, and Wasim Malik, who collectively bring expertise in motor neuroscience, biomedical engineering, and machine learning. The main objective is to develop hybrid models that combine physics-based partial differential equations and deep neural networks to predict transcranial magnetic stimulation (TMS) induced motor evoked potentials (MEPs) in the upper limbs of humans. The project also aims to develop inverse motor cortex activation imaging methods to estimate how activations of muscle groups in the arms are represented in the motor cortex.
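
One way such a hybrid could be organized, offered purely as an assumption-laden sketch rather than the project's actual architecture: a PDE forward solver (run offline) produces E-field features at each coil placement, and a small neural network maps those features to an MEP amplitude. All names, shapes, and data below are illustrative.

```python
# Hybrid physics + neural network sketch: PDE-simulated E-field features in,
# predicted MEP amplitude out. PyTorch is used for the learned component.
import torch
import torch.nn as nn

class HybridMEPModel(nn.Module):
    def __init__(self, n_field_features: int):
        super().__init__()
        # The physics model supplies n_field_features per stimulation site;
        # the network learns the mapping from simulated field to response.
        self.net = nn.Sequential(
            nn.Linear(n_field_features, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
            nn.Linear(64, 1), nn.Softplus(),  # MEP amplitudes are nonnegative
        )

    def forward(self, efield_features: torch.Tensor) -> torch.Tensor:
        return self.net(efield_features).squeeze(-1)

# Training on placeholder data: 100 coil placements, 32 field features each.
model = HybridMEPModel(n_field_features=32)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x = torch.randn(100, 32)    # stands in for PDE-simulated E-field features
mep = torch.rand(100)       # stands in for measured MEP amplitudes
for _ in range(200):
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(x), mep)
    loss.backward()
    opt.step()
```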

Predicting Onset of Aggression in Minimally Verbal Youth with Autism Using Biosensor Data and Machine Learning Algorithms is a project funded by [Simons18, Army18]. This project team consists of researchers at Maine Medical Center Research Institute, University of Pittsburgh Medical Center, and Northeastern, including Matthew Siegel, Matthew Goodwin, Stratis Ioannidis, and Deniz Erdogmus, who collectively bring expertise in autism spectrum disorders, wearable sensors for monitoring and predicting behavior, machine learning, and multimodal sensor fusion. The consortium is collecting a large-scale multimodal dataset from minimally verbal children with autism and developing sensor fusion algorithms to predict the onset of aggressive behavior, a significant barrier to these individuals' successful participation in social engagements with family, friends, and caregivers.

EEG-Guided Electrical Stimulation for Immersive Virtual Reality is a project funded by [NSF17]. This project team consists of researchers at the University of Pittsburgh and Northeastern, including Murat Akcakaya, Doug Weber, and Deniz Erdogmus, who collectively bring expertise in electrophysiological sensing and stimulation, biomedical engineering, and machine learning. The team is working towards developing sensory models for fingertips stimulated with spatially and temporally patterned electrical current waveforms, in order to eventually enable realistic haptic feedback for virtual and mixed reality applications where the feeling of texture is essential for enhanced immersive experiences.

Automated Analysis of Severity for Retinopathy of Prematurity (ROP) Using Multimodal Clinical Data is a project funded by [NIH20a, NSF16b]. This consortium consists of researchers at Oregon Health and Science University, Mass General Hospital, University of Illinois Chicago Medical Center, and Northeastern, including Michael Chiang, Peter Campbell, Kemal Sonmez, Jayashree Kalpathy-Cramer, Jennifer Dy, Stratis Ioannidis, and Deniz Erdogmus, who collectively bring expertise in ophthalmology, ROP, multimodal (image, genetic, clinical) medical data analysis, and machine learning. The team is working towards building a large multimodal ROP dataset and accompanying diagnosis assistance tools that involve feature extraction and severity assessment models based on machine learning approaches such as learning to sort from comparison labels and active learning, to achieve label-efficient model learning.
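
To make "learning to sort from comparison labels" concrete, below is a minimal Bradley-Terry style sketch, with synthetic features and comparisons rather than the project's actual data or models: a linear severity score is fit so that the sigmoid of score differences matches expert judgments of the form "image i shows more severe ROP than image j".

```python
# Learning a severity score from pairwise comparison labels (Bradley-Terry).
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 16))        # stand-in image feature vectors
true_w = rng.normal(size=16)
severity = X @ true_w                 # latent ground-truth severity

# Simulate expert comparisons: each pair (i, j) means "i is more severe".
idx = rng.integers(0, 200, size=(1000, 2))
pairs = [(i, j) if severity[i] > severity[j] else (j, i)
         for i, j in idx if i != j]
D = np.array([X[i] - X[j] for i, j in pairs])   # score-difference features

# Maximize the likelihood of sigmoid(s_i - s_j) over observed comparisons.
w = np.zeros(16)
for _ in range(300):
    p = 1.0 / (1.0 + np.exp(-np.clip(D @ w, -30, 30)))
    w += 0.1 * ((1.0 - p) @ D) / len(D)   # gradient ascent step

# The learned score reproduces the latent severity ordering on these images.
print("score correlation:", np.corrcoef(X @ w, severity)[0, 1])
```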

Clinical Interactions of a Brain-Computer Interface for Communication is a project funded by [NIH20b]. This is a collaboration of researchers at Oregon Health and Science University and Northeastern, including Melanie Fried-Oken, Betts Peters, Barry Oken, Steven Bedrick, David Smith, and Deniz Erdogmus, who collectively bring expertise in clinical neuroscience and assistive technologies, natural language processing, multimodal sensor fusion, and machine learning. The objective of the project is to develop novel interfaces for computer access, including EEG-based brain interfaces, to improve the quality of life and communication capabilities for individuals with locked-in syndrome. To date we have developed multiple innovative solutions, internationally leading the field of brain interfaces for augmentative and alternative communication. To the best of our knowledge, in 2009, our team was the first in the world to introduce fusion of EEG and natural language models within a recursive Bayesian human intent inference framework. Since 2011 we have introduced multiple innovative approaches to active recursive Bayesian inference based on information theoretic coding (e.g., maximum mutual information based recursive Bayesian coding) and active learning (e.g., submodular monotonic bounds for mutual information) principles, in order to tackle the problem of speeding up brain interface based communication through optimal stimulus/query selection strategies. We have recently developed a unified recursive inference and active querying scheme that exploits the momentum of posteriors in the probability simplex. Driven by this framework of active recursive Bayesian inference through multimodal evidence fusion for human intent inference, we have introduced several novel concepts including the RSVP Keyboard, FlashType, Shuffle Speller, Web Speller, and Track Speller.
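
The core recursive update can be illustrated with a toy sketch; the alphabet, prior, likelihoods, and threshold below are placeholder values, not our calibrated EEG classifiers or language models. A language model supplies a prior over candidate symbols, EEG evidence likelihoods are folded in multiplicatively each round, and a symbol is committed once its posterior clears a confidence threshold.

```python
# Recursive Bayesian intent inference sketch for a BCI speller.
import numpy as np

symbols = list("abcd")                        # toy alphabet

# Language-model prior over the next symbol given the typed history.
prior = np.array([0.5, 0.2, 0.2, 0.1])

posterior = prior.copy()
# Each round, an EEG classifier scores p(evidence | symbol is the target)
# for every candidate symbol; placeholder likelihoods are used here.
for likelihood in [np.array([0.6, 0.3, 0.4, 0.2]),
                   np.array([0.7, 0.2, 0.3, 0.2])]:
    posterior *= likelihood                   # recursive Bayesian update
    posterior /= posterior.sum()              # renormalize on the simplex

# Commit once the maximum a posteriori symbol clears a confidence threshold;
# otherwise the next stimulus/query would be chosen to be maximally
# informative (e.g., by a mutual information criterion).
best = int(np.argmax(posterior))
if posterior[best] > 0.8:
    print("selected:", symbols[best])
else:
    print("query again; posterior:", posterior.round(3))
```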

Old Projects

Nested Control of Assistive Robots Through Human Intent Inference is a project funded by [NSF15]. This collaboration consists of researchers at Northeastern, Worcester Polytechnic Institute, and Spaulding Rehabilitation Hospital, including Deniz Erdogmus, Gunar Schirner, Taskin Padir, Cagdas Onal, and Paolo Bonato, who collectively bring expertise in machine learning and multimodal human-robot interfaces, cyber-physical systems, robotics, and rehabilitation. The objective of the project is to develop multimodal physiological and environmental sensor fusion, including EEG, EMG, cameras, eye tracking, and pressure sensors, to infer human intent during reach-to-grasp scenarios, and to use this human intent inference solution in guiding prosthetic and orthotic hand control strategies.