2020 Recognizing Families In the Wild (RFIW)

in conjunction with FG 2020


We are pleased to announce the fourth large-scale kinship recognition data competition, Recognizing Families In the Wild (RFIW), held in conjunction with FG 2020. RFIW has been made possible by the release of the largest and most comprehensive image database for automatic kinship recognition, Families in the Wild (FIW).

All submissions (i.e., challenge papers, general paper submissions, and Brave New Ideas) will be peer-reviewed for publication as part of RFIW2020 in the proceedings of the IEEE International Conference on Automatic Face and Gesture Recognition (FG). Authors are also expected to attend the workshop at the FG 2020 conference, 18-22 May in Buenos Aires, Argentina.

Check back for updates!

Press Releases


What is new for RFIW-2020?

There has recently been a surge of interest in visual kinship recognition: a recent Kaggle competition, the latest release of improved data (i.e., v0.1.2), and researchers reaching out for raw data to pursue various tasks and goals. To reduce the limitations on novel ideas, we are motivating the community via calls for general papers and papers expressing Brave New Ideas-- different ways of exploring, learning from, and interpreting the FIW data and the visual kinship recognition problem as a whole.

Along with the traditional verification task, RFIW2020 will support two new tracks: tri-subject verification and large-scale search-and-retrieval. Tri-subject verification is a natural extension of the verification task (i.e., when comparing two faces, one of a father and the other of a son, it is practical to assume the respective mother is known, as the wife of the father is likely accessible). Thus, this paradigm follows a 2-to-1 verification protocol: all rules mimic those of verification, with the difference that one item of each pair consists of a man and a woman, and the question becomes "are these the parents of this child," "are these two siblings children of this single parent," etc.

Large-scale search-and-retrieval will mimic the template-based, open-set protocols provided by benchmarks like IJB-B. The gallery will contain millions of distractors, and the task is to rank gallery faces by scores indicating whether each is a family member (i.e., blood relative) of the probe. Such a paradigm closely mimics the real-world application of finding missing children (i.e., a child is found online, exploited by an unknown party; it is unlikely the child's face is in any database, but a family member's likely is-- identify a family member, then determine the identity of the unknown child).
Additionally, the paradigm could be used to reunite families split by the modern-day refugee crisis (i.e., with technology to recognize family members from visual media, we could match family members across different camps at the cost of an inexpensive security video feed). More information to come.
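As a rough illustration of the search-and-retrieval setting described above, the sketch below ranks a gallery of face features by cosine similarity to a probe. Everything here is hypothetical: the `rank_gallery` helper, the toy 3-D features, and the gallery contents are illustrative only, and a real entry would use learned face embeddings against a gallery with millions of distractors.

```python
import numpy as np

def rank_gallery(probe, gallery):
    """Rank gallery faces by cosine similarity to a probe embedding.

    probe: (d,) feature vector for the query face.
    gallery: (n, d) matrix of gallery face features (relatives plus
             distractors).
    Returns gallery indices ordered most-similar first.
    """
    probe = probe / np.linalg.norm(probe)
    gallery = gallery / np.linalg.norm(gallery, axis=1, keepdims=True)
    scores = gallery @ probe          # cosine similarity per gallery face
    return np.argsort(-scores)        # indices in descending score order

# Toy example: 4 gallery faces in a 3-D feature space.
probe = np.array([1.0, 0.0, 0.0])
gallery = np.array([
    [0.9, 0.1, 0.0],    # likely relative (near the probe)
    [0.0, 1.0, 0.0],    # distractor
    [0.7, 0.7, 0.0],
    [-1.0, 0.0, 0.0],   # distractor
])
print(rank_gallery(probe, gallery))  # → [0 2 1 3]
```

In an open-set protocol, the ranked scores would additionally be thresholded, since the probe's relatives may not be present in the gallery at all.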

Specifically, the following new components will be part of RFIW2020:


In addition to the two tasks included in previous RFIW editions (i.e., Kinship Verification and Family Classification), a new track, Tri-Subject Verification, will be supported during the proposed RFIW2020 challenge. Tri-Subject Verification takes a slightly different view of kinship verification: the goal is to decide whether a child is related to a pair of parents. The tri-subject setting is a more realistic assumption, as knowing one parent typically means information about the other is accessible. Following this notion, we propose adding this track with additional tri-subject pair types (e.g., given a couple of known siblings, determine whether an unknown subject is also a sibling), and at scales far higher than previously possible. As in RFIW-2017, RFIW-2018, and RFIW-2019, we will use CodaLab to handle registration, provide data downloads, and automate scoring. As before, each task is organized by well-defined protocols and data splits. Participants may take part in one or all challenge tracks, as each submission will be handled independently. Benchmarks using conventional methods, along with results from prior RFIW editions, will be provided; source code to reproduce and demonstrate each task end-to-end will also be made available, enabling newcomers while challenging the experts. We will also call for general paper submissions of new work to expand the types of problems and the use cases of the FIW dataset.
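One way the 2-to-1 tri-subject protocol could be scored is by fusing the child's similarity to each parent. The sketch below is only an illustration under assumed cosine-similarity features: the function names, the simple averaging fusion rule, and the threshold value are all hypothetical, not part of the official protocol.

```python
import numpy as np

def cosine(a, b):
    """Cosine similarity between two feature vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def tri_subject_verify(father, mother, child, threshold=0.5):
    """2-to-1 verification sketch: average the child's similarity to
    each parent and compare against a decision threshold (hypothetical
    value; in practice it would be tuned on validation data)."""
    score = 0.5 * (cosine(father, child) + cosine(mother, child))
    return score >= threshold

# Toy 2-D features: the child's vector lies between the parents'.
father = np.array([1.0, 0.0])
mother = np.array([0.0, 1.0])
child = np.array([1.0, 1.0])
print(tri_subject_verify(father, mother, child))     # → True
stranger = np.array([-1.0, 0.0])
print(tri_subject_verify(father, mother, stranger))  # → False
```

The same fusion idea extends to the other tri-subject pair types mentioned above (e.g., two known siblings versus an unknown candidate sibling).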

Call for papers!

In addition to the three organized task evaluations, RFIW2020 adds a call for papers that use FIW in novel ways. The main reason for this addition is to ignite the creativity of the community beyond the controlled experiments of the task evaluations-- we found the assessments great for structuring existing problems so that researchers and practitioners can make fair comparisons of algorithms; however, this limits the scope of the problems of automatic kinship recognition. From this, we expect light to be shed in one or more of the following ways:


Honorary Chairs

Rama Chellappa
University of Maryland
Matthew A. Turk
Toyota Technological Institute at Chicago (TTIC)

General Chair

Yun Fu
Northeastern University

Workshop Chairs

Joseph Robinson
Northeastern University
Ming Shao
University of Massachusetts (Dartmouth)
Siyu Xia
Southeast University, Nanjing, China
Mike Stopa
Konica Minolta
Samson Timoner
Yu Yin
Northeastern University

Web and Publicity Co-Chairs

Zaid Khan
Northeastern University

Important Dates

3 Nov 2019: 1st CFP

8 Nov 2019: Challenge begins

13 Jan 2020: Test phase begins

20 Jan 2020: Challenge ends

1 Feb 2020: Papers due (extended from 28 Jan)

12 Feb 2020: Author notifications (i.e., oral or poster)

26 Feb 2020: Camera-ready due

18-22 May 2020 (TBD): Paper presentations & awards ceremony at the FG 2020 conference

Workshop Program

Coming soon!

Information for Authors

Submit via CMT: https://cmt3.research.microsoft.com/RFIW2020/Submission/Index
Following the guidelines of IEEE FG: https://fg2020.org/instructions-of-paper-submission-for-review/

Contact Us

Joseph Robinson (robinson.jo@husky.neu.edu) and Ming Shao (mshao@umassd.edu)
Department of Electrical and Computer Engineering, Northeastern University, Boston, MA, USA