Automated Cognitive Behavior Analysis

Understanding and predicting human behavior has long been a fundamental research question in AI. This project proposes a methodology, termed automated cognitive behavior analysis (ACBA), to address that problem. Given behavior observed in the context of complex tasks, the methodology uncovers the underlying behavior structures: the mental plans that guide and shape people's actions in the world. It uses genetic programming (GP) to iteratively generate programs capable of explaining the behavior exhibited by an individual in a given task, and it includes a set of tools to help analyze and interpret the invariant cognitive structures responsible for different observed behaviors.
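While the actual ACBA toolchain is more sophisticated, the core idea of program search over observed behavior can be illustrated with a deliberately simplified, mutation-only evolutionary sketch. Everything below (the 1-D task, the `if<` primitive, the terminal set, the selection scheme) is an assumption made for demonstration, not the ACBA implementation: given an individual's observed state-action pairs, the search evolves a small program that reproduces them.

```python
import random

random.seed(1)

# Observed behavior to explain (hypothetical): in a 1-D world the individual
# moves right (action +1) whenever position < 5, and left (-1) otherwise.
OBSERVED = [(x, 1 if x < 5 else -1) for x in range(10)]

TERMS = ['x', -1, 0, 1, 5]   # terminals: the observed position and constants

def rand_tree(depth=3):
    """Random program: a terminal, or ('if<', a, b, then, else)."""
    if depth == 0 or random.random() < 0.3:
        return random.choice(TERMS)
    return ('if<',) + tuple(rand_tree(depth - 1) for _ in range(4))

def evaluate(tree, x):
    if tree == 'x':
        return x
    if isinstance(tree, int):
        return tree
    _, a, b, then, other = tree
    return evaluate(then, x) if evaluate(a, x) < evaluate(b, x) else evaluate(other, x)

def fitness(tree):
    """Number of observed (state, action) pairs the program reproduces."""
    return sum(evaluate(tree, x) == action for x, action in OBSERVED)

def mutate(tree, depth=3):
    if random.random() < 0.2:
        return rand_tree(depth)      # replace this subtree entirely
    if not isinstance(tree, tuple):
        return tree                  # keep terminal as-is
    return tree[:1] + tuple(mutate(child, depth - 1) for child in tree[1:])

pop = [1, -1] + [rand_tree() for _ in range(200)]   # seed trivial baselines
for generation in range(60):
    pop.sort(key=fitness, reverse=True)
    if fitness(pop[0]) == len(OBSERVED):            # perfect explanation found
        break
    survivors = pop[:50]                            # elitist truncation selection
    pop = survivors + [mutate(random.choice(survivors)) for _ in range(150)]

best = max(pop, key=fitness)
print(fitness(best), best)
```

Because survivors are carried over unchanged (elitism), the best program's fitness never decreases; the search often recovers the hidden decision rule, or a close approximation, and the winning tree can then be read as a candidate explanation of the behavior.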


Human-Robot Interaction

During my post-doc at GAIPS / INESC-ID I worked in the field of human-robot interaction (HRI) across several collaborative projects. Overall, I developed AI and machine-learning (ML) techniques to support the decision-making of autonomous interactive robots. In the EMOTE project, I developed AI and ML techniques to manage the game-play and autonomous interactive behavior of a robotic tutor. In the INSIDE project, I created methodologies to collect data and develop the robot's interaction behavior in the context of HRI for children with autism spectrum disorder. In the CoWriter project, I prepared and helped conduct Wizard-of-Oz studies with children on the impact of a robot managing a collaborative writing activity.


Intrinsically-Motivated Reinforcement Learning

My Ph.D. thesis focused on developing flexible and robust mechanisms for autonomous agents using the computational framework of reinforcement learning (RL). Within machine learning, RL is the discipline concerned with mechanisms that allow an agent to accomplish a task through trial-and-error interactions with a dynamic and sometimes uncertain and unreliable environment. Moreover, agents usually suffer from perceptual, motor, and adaptive limitations: they often do not have access to all the information required to make the best decisions, and normally do not know the environment's dynamics or the exact consequences of their actions. As a consequence, standard RL techniques present several design challenges, especially in complex problems, often requiring a great amount of fine-tuning and expert knowledge from the user.
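As a rough illustration of the intrinsically-motivated flavor of RL (not the specific mechanisms developed in the thesis), the sketch below augments standard tabular Q-learning with an assumed count-based novelty bonus: alongside the sparse extrinsic reward, the agent generates its own internal reward for visiting rarely-seen states, which drives exploration of a small chain environment.

```python
import random
from collections import defaultdict

random.seed(0)

N_STATES = 5
ACTIONS = (-1, +1)           # move left / move right along a 5-state chain
ALPHA, GAMMA, EPS, BETA = 0.1, 0.95, 0.2, 0.5

Q = defaultdict(float)       # state-action values
visits = defaultdict(int)    # state visit counts for the novelty bonus

def step(state, action):
    nxt = min(max(state + action, 0), N_STATES - 1)
    extrinsic = 1.0 if nxt == N_STATES - 1 else 0.0   # reward only at the far end
    return nxt, extrinsic

state = 0
for _ in range(10000):
    if random.random() < EPS:                         # epsilon-greedy selection
        action = random.choice(ACTIONS)
    else:
        action = max(ACTIONS, key=lambda a: Q[(state, a)])
    nxt, r_ext = step(state, action)
    visits[nxt] += 1
    r_int = BETA / visits[nxt] ** 0.5                 # novelty bonus fades with visits
    target = r_ext + r_int + GAMMA * max(Q[(nxt, a)] for a in ACTIONS)
    Q[(state, action)] += ALPHA * (target - Q[(state, action)])
    state = 0 if nxt == N_STATES - 1 else nxt         # episode resets at the goal

policy = [max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(N_STATES)]
print(policy)
```

Early on, the intrinsic bonus dominates and pulls the agent toward unvisited states; as visit counts grow the bonus fades, and the learned greedy policy is shaped by the extrinsic task reward, moving right toward the rewarding state.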


Associative Learning in Factored MDPs

Associative learning is a paradigm from behaviorism which posits that learning occurs whenever a change in behavior is observed. Classical conditioning is one of the best-known forms of associative learning and one of the most basic survival tools found in nature, allowing organisms to expand the range of contexts in which some of their already-known behaviors can be applied. By associating co-occurring stimuli from the environment, an organism can activate innate phylogenetic responses (e.g., fight-or-flight responses) in new and previously unknown situations.
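For illustration, the classic Rescorla-Wagner model captures this associative process computationally (this is a textbook sketch, not the factored-MDP formulation developed in this project): the associative strength V of a conditioned stimulus is nudged toward the level lambda supported by the unconditioned stimulus, in proportion to the prediction error.

```python
# Rescorla-Wagner update: dV = alpha * beta * (lambda - V), where alpha and
# beta are salience/learning-rate parameters and lambda is the maximum
# associative strength the unconditioned stimulus can support.

def rescorla_wagner(trials, alpha=0.3, beta=1.0, lam=1.0):
    """Associative strength of a CS over repeated CS-US pairings."""
    v = 0.0
    history = []
    for _ in range(trials):
        v += alpha * beta * (lam - v)   # prediction-error-driven update
        history.append(v)
    return history

history = rescorla_wagner(trials=20)
print(round(history[0], 3), round(history[-1], 3))   # 0.3 0.999
```

The resulting learning curve is negatively accelerated, with large early gains and diminishing later ones, matching the acquisition curves observed in conditioning experiments.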


Learning and Emotions

A great portion of my Ph.D. thesis was dedicated to incorporating ideas from the emotional processing mechanisms of humans and other animals into the framework of intrinsically-motivated reinforcement learning (IMRL). Emotions are one of the most basic behavioral phenomena observed in nature, yet they have often been considered detrimental to rational and sound decision-making. However, as research in psychology, biology, neuroscience, and other areas has shown, emotions are a beneficial adaptive mechanism for problem solving, enhancing perception, memory, attention, and other cognitive skills.
