[bibshow file=my-publications.bib show_links=1 format=custom-ieee template=custom-bibshow highlight="P. Sequeira"]
Building on previous work in affective computing, during my Ph.D. I addressed the question of "where do rewards come from?" within the IMRL framework, known as the reward design problem, by proposing an intrinsic reward mechanism for autonomous learning agents inspired by appraisal theories of emotion. The main technical contribution behind [bibcite key=sequeira2011acii,sequeira2013phdthesis,sequeira2014adb] is a set of four domain-independent, emotion-based reward features, namely novelty, valence, goal relevance, and control, that in combination can guide the learning process and behavior of an RL agent inhabiting an environment of which it has only limited perception.
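To make the idea concrete, here is a minimal sketch of how four appraisal-based features might be combined into a single intrinsic reward for a tabular RL agent. The feature definitions below are illustrative stand-ins of my own, not the exact formulations from the cited papers, and the value estimates are assumed to be maintained by the agent's learning rule.

```python
# Hypothetical sketch: a linear combination of four appraisal-based
# reward features (novelty, valence, goal relevance, control).
from collections import defaultdict

class AppraisalReward:
    def __init__(self, weights=(0.25, 0.25, 0.25, 0.25)):
        self.w = weights
        self.visits = defaultdict(int)    # state-action visit counts
        self.value = defaultdict(float)   # value estimates, updated by the agent

    def novelty(self, s, a):
        # less-visited state-action pairs are more novel
        return 1.0 / (1 + self.visits[(s, a)])

    def valence(self, s, r_ext):
        # positive when the outcome beats the current value estimate
        return r_ext - self.value[s]

    def goal_relevance(self, s):
        # placeholder: how promising the state looks for reaching the goal
        return self.value[s]

    def control(self, s, a):
        # confidence grows with experience of this state-action pair
        n = self.visits[(s, a)]
        return n / (1 + n)

    def intrinsic(self, s, a, r_ext):
        # record the experience, then combine the four features linearly
        self.visits[(s, a)] += 1
        feats = (self.novelty(s, a), self.valence(s, r_ext),
                 self.goal_relevance(s), self.control(s, a))
        return sum(w * f for w, f in zip(self.w, feats))
```

An agent would then learn from `r_ext + intrinsic(s, a, r_ext)` instead of the extrinsic reward alone; the weights themselves can be tuned or optimized per domain.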
I also investigated whether emotions could emerge as natural complements to an RL agent's perceptual capabilities. The work in [bibcite key=sequeira2013phdthesis,sequeira2014jaamas,sequeira2016aamas,sequeira2017sas] used a genetic programming procedure to evolve intrinsic reward functions that discover interesting sources of information in different simulated environments. The results show that useful rewards emerge whose structural and dynamical properties resemble those of many appraisal variables defined in the emotional appraisal literature.
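The evolutionary procedure can be sketched as a search over reward functions represented as expression trees. Everything below is a simplified illustration: the primitive set, the operators, and the fitness function (which in the cited work would run an agent in a simulated environment and measure its extrinsic return) are hypothetical placeholders.

```python
# Minimal sketch: evolving intrinsic reward functions by genetic
# programming over a small set of illustrative feature primitives.
import random

PRIMITIVES = ["novelty", "valence", "q_value", "visit_count"]
OPS = ["+", "-", "*"]

def random_expr(depth=2):
    # build a random expression tree over the reward primitives
    if depth <= 0 or random.random() < 0.3:
        return random.choice(PRIMITIVES)
    return (random.choice(OPS), random_expr(depth - 1), random_expr(depth - 1))

def evaluate(expr, feats):
    # evaluate an expression tree on a dict of feature values
    if isinstance(expr, str):
        return feats[expr]
    op, left, right = expr
    a, b = evaluate(left, feats), evaluate(right, feats)
    return a + b if op == "+" else a - b if op == "-" else a * b

def mutate(expr, depth=2):
    # replace a random subtree with a freshly generated one
    if isinstance(expr, str) or random.random() < 0.3:
        return random_expr(depth)
    op, left, right = expr
    if random.random() < 0.5:
        return (op, mutate(left, depth - 1), right)
    return (op, left, mutate(right, depth - 1))

def evolve(fitness, generations=20, pop_size=10):
    # fitness maps a reward expression to agent performance; here it
    # is supplied by the caller (a placeholder for episodes in a
    # simulated environment)
    pop = [random_expr() for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]
        pop = survivors + [mutate(random.choice(survivors)) for _ in survivors]
    return max(pop, key=fitness)
```

The interesting outcome in the actual experiments is not the search itself but what it finds: evolved expressions that repeatedly recombine the same few informative features, mirroring appraisal variables.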
Overall, the results of these experiments support the claim that reward functions inspired by biological evolutionary adaptive mechanisms, such as emotions, can provide agents with more autonomy and greater flexibility in reward design, while alleviating some limitations inherent in artificial agents. Moreover, they show that the emotional appraisal process may indeed provide essential information that complements perceptual capabilities and thus guides decision-making.