[bibshow file=my-publications.bib show_links=1 format=custom-ieee template=custom-bibshow highlight="P. Sequeira"]
The work behind this code focused on developing flexible and robust reward mechanisms for autonomous agents within the reinforcement learning (RL) computational framework. The problem is to design reward mechanisms that allow an agent to learn a task as efficiently as possible while taking into account its limitations. A further challenge is to create rewards that are generic enough to be used in a wide variety of situations, i.e., not tied to a specific task or domain. The main contributions implemented in the code are:
- A set of four domain-independent reward features based on appraisal theories of emotion that, when linearly combined, can guide the learning process of intrinsically-motivated RL (IMRL) agents acting in different dynamic environments over which they have only limited perceptions;
- A genetic programming (GP) procedure to evolve general-purpose, domain-independent intrinsic reward functions. Specifically, GP uses information produced by any standard RL algorithm as building blocks to discover useful sources of information that complement the agent's perceptual capabilities in partially-observable environments;
- An extension of the IMRL framework to multiagent settings involving complex interdependencies between the behaviors of several interacting agents, each using a particular intrinsic reward function. In particular, it includes generic social rewards inspired by natural mechanisms of reciprocation that enable the emergence of socially-aware individual behaviors within competitive multiagent scenarios.
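To make the first contribution concrete, the idea of an intrinsic reward built as a linear combination of a few domain-independent features can be sketched as below. This is a minimal illustration, not the actual code: the feature names and weight values are hypothetical stand-ins for the appraisal-based signals used in the work.

```python
# Minimal sketch of a linearly-combined intrinsic reward.
# Feature names and weights are illustrative assumptions, not the
# exact signals or parameters used in the actual code.

def intrinsic_reward(weights, features):
    """Linearly combine domain-independent reward features.

    weights  -- four scalars (e.g., evolved or hand-tuned per environment)
    features -- four feature values computed for the current state-action pair
    """
    assert len(weights) == len(features) == 4
    return sum(w * f for w, f in zip(weights, features))

# Hypothetical appraisal-like signals for one step of interaction.
w = [0.4, 0.2, 0.2, 0.2]
f = [1.0, 0.5, -0.3, 0.8]
r = intrinsic_reward(w, f)   # roughly 0.6 for these values
```

Because the combination is linear, searching for a good reward function for a given environment reduces to searching over a small weight vector, which keeps the approach tractable across domains.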
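The GP-based contribution can likewise be sketched as expression trees whose terminals are quantities any standard RL algorithm already computes. The terminal names (value estimate, TD error, visit count) and the tiny primitive set here are assumptions for illustration; the actual code may use different building blocks and a full evolutionary loop.

```python
import operator
import random

# Hypothetical sketch of GP over RL-produced signals: candidate reward
# functions are expression trees whose leaves are quantities a standard
# RL algorithm already maintains. Terminals and primitives are illustrative.

PRIMITIVES = {'add': operator.add, 'sub': operator.sub, 'mul': operator.mul}
TERMINALS = ('value', 'td_error', 'visits')

def random_tree(depth, rng):
    """Grow a random expression tree up to the given depth."""
    if depth == 0 or rng.random() < 0.3:
        return rng.choice(TERMINALS)
    op = rng.choice(sorted(PRIMITIVES))
    return (op, random_tree(depth - 1, rng), random_tree(depth - 1, rng))

def evaluate(tree, signals):
    """Compute the reward a tree assigns, given the current RL signals."""
    if isinstance(tree, str):
        return signals[tree]
    op, left, right = tree
    return PRIMITIVES[op](evaluate(left, signals), evaluate(right, signals))

# One candidate reward function and its value for some example signals.
rng = random.Random(0)
candidate = random_tree(3, rng)
reward = evaluate(candidate, {'value': 0.5, 'td_error': -0.1, 'visits': 3.0})
```

In a full GP run, each tree would be scored by the learning performance of an agent using it as a reward function, with crossover and mutation applied to the best-scoring trees across generations.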
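For the multiagent extension, a reciprocation-inspired social reward term could look like the sketch below: an agent values its own payoff but also returns past favors from peers. The decayed-memory formulation, parameter names, and weights are all illustrative assumptions, not the mechanism implemented in the code.

```python
# Hypothetical sketch of a reciprocation-inspired social reward: an
# agent blends its own payoff with a decayed memory of how much peers
# have recently helped it. Parameters and weights are illustrative.

def social_reward(own_payoff, received_from_peers, reciprocity=0.5, decay=0.9):
    """Blend own payoff with a decayed memory of peers' past help.

    own_payoff          -- extrinsic payoff the agent just obtained
    received_from_peers -- past benefits received from others, oldest first
    reciprocity         -- how strongly the agent weighs returning favors
    decay               -- how quickly old favors are forgotten
    """
    memory = 0.0
    for past_help in received_from_peers:
        memory = decay * memory + past_help
    return (1 - reciprocity) * own_payoff + reciprocity * memory
```

A term like this makes cooperation self-reinforcing: agents that have been helped recently assign higher value to outcomes that benefit their helpers, which is how socially-aware behavior can emerge even in competitive scenarios.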
Other works that used (possibly earlier) versions of this code: [bibcite key=sequeira2011acii,sequeira2014adb,sequeira2014jaamas,sequeira2016aamas,sequeira2017sas,sequeira2011icdl]