The 8th IEEE Workshop on

Analysis and Modeling of Faces and Gestures (AMFG)

-- Deep Learning, Social Media, Insights and Beyond
In conjunction with CVPR 2018
Call for Papers
Over the past five years, we have experienced rapid advances in facial recognition technologies. This is due in large part to deep learning (dating back to AlexNet in 2012) and to large-scale, labeled facial image collections. Progress in deep learning continues to push renowned public facial recognition databases toward saturation, which in turn calls for ever more challenging image collections to be compiled as databases. To name a few: the Labeled Faces in the Wild (LFW) database, the YouTube Faces database, and, more recently, CASIA WebFace, MegaFace, and MS-Celeb-1M. In practice, and even widely in applied research, using off-the-shelf deep learning models has become the norm, as numerous pre-trained networks are available for download and are readily deployed to new, unseen data (e.g., VGG-Face, ResNet, among others). We have almost grown "spoiled" by such luxury, which, in actuality, has kept many truths hidden from us. Theoretically, what makes neural networks more discriminative than ever before remains unclear; rather, they act as a sort of black box to most practitioners and even researchers alike. More troublesome is the absence of tools to quantitatively and qualitatively characterize existing deep models, which could, in itself, yield greater insights about these all-too-familiar "black boxes". With the frontier moving forward at a rate incomparable to any spurt of the past, challenges such as high variations in illumination, pose, and age now confront us. Moreover, state-of-the-art deep learning models often fail when faced with such challenges, owing to the difficulty of modeling structured data and visual dynamics.

Alongside the effort spent on conventional face recognition is the research done to automatically understand social media content. This line of work has attracted attention from industry and academic researchers across many domains. Understanding social media requires the following capabilities: face and body tracking (e.g., facial expression analysis, face detection, gesture recognition), face and body characterization (e.g., behavioral understanding, emotion recognition), face, body, and gesture characteristic analysis (e.g., gait, age, gender, and ethnicity recognition), group understanding via social cues (e.g., kinship, non-blood relationships, personality), and visual sentiment analysis (e.g., temperament, arrangement). Thus, the ability to create effective models for such visual content has significant value for both the scientific community and the commercial market, with applications spanning human-computer interaction, social media analytics, video indexing, visual surveillance, and Internet vision. Researchers have made significant progress on many problems in the social domain, especially given the off-the-shelf, cost-efficient vision hardware products available these days, e.g., Kinect, Leap, SHORE, and Affdex. Nonetheless, serious challenges remain, and they are only amplified under the unconstrained imaging conditions of content captured by different sources focused on non-cooperative subjects. It is these latter challenges that especially grab our interest, as we seek to bring together the cutting-edge techniques and recent advances of deep learning to solve them in social media.
This one-day serial workshop (AMFG 2018) will provide a forum for researchers to review recent progress in the recognition, analysis, and modeling of faces, bodies, and gestures, while embracing the most advanced deep learning systems available for face and gesture analysis, particularly under unconstrained environments such as social media. The workshop includes up to two keynotes, along with peer-reviewed papers (oral and poster). Original high-quality contributions are solicited on the following topics:
  • Novel deep model, deep learning survey, or comparative study for face/gesture recognition;
  • Deep learning methodology, theory, and its application to social media analytics;
  • Deep learning for internet-scale soft biometrics and profiling: age, gender, ethnicity, personality, kinship, occupation, beauty ranking, and fashion classification by facial or body descriptor;
  • Deep learning for detection and recognition of faces and bodies with large 3D rotation, illumination change, partial occlusion, unknown/changing background, and aging (i.e., in the wild); special attention will be given to face and gesture recognition robust to large 3D rotation;
  • Motion analysis, tracking and extraction of face and body models captured by mobile devices;
  • Face, gait, and action recognition in low-quality (e.g., blurred), or low-resolution video from fixed or mobile device cameras;
  • Novel mathematical models and algorithms, sensors and modalities for face & body gesture and action representation, analysis, and recognition for cross-domain social media;
  • Social/psychological studies that aid in understanding computational modeling and in building better automated face and gesture systems with interactive features;
  • Novel social applications involving detection, tracking & recognition of face, body, and action;
  • Face and gesture analysis for sentiment analysis in social media;
  • Other applications involving face and gesture analysis in social media content.
Previous AMFG Workshops
The first workshop under this name was held in 2003, in conjunction with ICCV 2003 in Nice, France. It has since been held successfully seven times. The homepages of the previous AMFG workshops are as follows:
AMFG2003: http://brigade.umiacs.umd.edu/iccv2003/
AMFG2005: http://mmlab.ie.cuhk.edu.hk/iccv05/
AMFG2007: http://mmlab.ie.cuhk.edu.hk/iccv07/
AMFG2010: http://www.lv-nus.org/AMFG2010/cfp.html 
AMFG2013: http://www.northeastern.edu/smilelab/AMFG2013/home.html
AMFG2015: http://www.northeastern.edu/smilelab/AMFG2015/home.html
AMFG2017: https://web.northeastern.edu/smilelab/AMFG2017/index.html
Important Dates

Submission Deadline: 25 March 2018
Notification: 10 April 2018
Camera-Ready Due: 15 April 2018

Submissions are handled via the workshop's CMT website:

   https://cmt3.research.microsoft.com/AMFG2018/Submission/Index  


Following the guidelines of CVPR 2018: http://cvpr2018.thecvf.com/submission/main_conference/author_guidelines
  • 8 pages + references
  • Anonymous (double-blind) submission
  • Use the CVPR template
Workshop Organizers
Honorary General Chairs
Thomas S. Huang, University of Illinois
https://ece.illinois.edu/directory/profile/t-huang1
General Co-Chairs
Yun Fu, Northeastern University
http://www1.ece.neu.edu/~yunfu/
Matthew A. Turk, University of California, Santa Barbara
https://www.cs.ucsb.edu/~mturk/
Workshop Co-Chairs
Ming Shao, University of Massachusetts Dartmouth
http://www.cis.umassd.edu/~mshao/
Michael Jones, Mitsubishi Electric Research Labs
http://www.merl.com/people/mjones/
Web and Publicity Chair
Joseph Robinson, Northeastern University
http://www.jrobsvision.com/
Program Committee
  • Haibo Wang, Philips Research, USA
  • Di Huang, Beihang University, China
  • Meina Kan, Institute of Computing Technology, Chinese Academy of Sciences, China
  • Xiang Xiang, Amazon AI, USA
  • Anoop Cherian, MERL, USA
  • Davis King, Dlib, USA
  • Pooya Khorrami, MIT-LL, USA
  • Kevin Brady, MIT-LL, USA
  • Handong Zhao, Adobe Research, USA
  • Bineng Zhong, Huaqiao University, China
  • Siyu Xia, Southeast University, China
  • Xinchao Wang, Stevens Institute of Technology, USA
  • Jun Li, Northeastern University, USA
  • Chengcheng Jia, Huawei, USA
Program Schedule
8:30 AM
Invited Keynote: Jan Kautz, NVIDIA
9:10 AM
Oral 1: Recognizing American Sign Language Gestures from within Continuous Videos, YingLi Tian, Yuancheng Ye, Matt Huenerfauth, Jingya Liu
9:30 AM
Oral 2: On Detecting Domestic Abuse via Faces, Saheb Chhabra, Puspita Majumdar, Mayank Vatsa, Richa Singh
9:50 AM
Oral 3: Generative Adversarial Style Transfer Networks for Face Aging, Eirikur Agustsson, Radu Timofte, Luc Van Gool, Sveinn Pálsson
10:10 AM
Oral 4: Empirically Analyzing the Effect of Dataset Biases on Deep Face Recognition Systems, Thomas Vetter, Andreas M Schneider, Andreas Morel-Forster, Thomas Gerig, Bernhard Egger, Adam Kortylewski
10:30 AM
Coffee Break
11:00 AM
Oral 5: Improving Viseme Recognition using GAN-based Frontal View Mapping, Edmilson da Silva Morais, Andrea Britto Mattos, Dario Augusto Borges Oliveira
11:20 AM
Oral 6: Clothing Change Aware Person Identification, Zibo Meng, Karthik Katipally, Kees Zon, Haibo Wang, Jia Xue
11:40 AM
Oral 7: A Compact Deep Learning Model for Robust Facial Expression Recognition, Chieh-Ming Kuo, Shang-Hong Lai, Michel Sarkis
12:00 PM
Lunch Time
1:30 PM
Invited Keynote: Ajmal Mian, University of Western Australia
2:10 PM
Oral 8: FACSCaps: Pose-Independent Facial Action Coding with Capsules, Laszlo A Jeni, Jeffrey Cohn, Itir Onal Ertugrul
2:30 PM
Oral 9: Unraveling Human Perception of Facial Aging using Eye Gaze, Daksha Yadav, Naman Kohli, Ekampreet Kalsi, Mayank Vatsa, Richa Singh, and Afzel Noore
2:50 PM
Oral 10: Motion Fused Frames: Data Level Fusion Strategy for Hand Gesture Recognition, Gerhard Rigoll, Neslihan Kose, Okan Köpüklü
3:10 PM
Oral 11: Light-weight Head Pose Invariant Gaze Tracking, Jan Kautz, Shalini De Mello, Rajeev Ranjan
3:30 PM
Coffee Break
4:00 PM
Invited Keynote: Andrew Gallagher, Google Research
4:40 PM
Oral 12: Implementing a Robust Explanatory Bias in a Person Re-identification Network, Esube Bekele, Wallace Lawson, Zachary Horne, Sangeet Khemlani
5:00 PM
Oral 13: Fine-Grained Head Pose Estimation Without Keypoints, Nataniel Ruiz, Eunji Chong, James Rehg
5:20 PM
Closing Remarks
Keynote Speakers
  • Jan Kautz, Senior Director of Visual Computing and Machine Learning Research, NVIDIA
  • Ajmal Mian, Associate Professor, The University of Western Australia
  • Andrew Gallagher, Google Research