
Joint Gaze Correction and Face Beautification for Conference Video Using Dual Sparsity Prior (IEEE Transactions on Industrial Electronics, 2019)

Authors: Deming Zhai, Xianming Liu, Xiangyang Ji, Debin Zhao, Wen Gao



Abstract:


Gaze mismatch is a common problem in video conferencing, where the viewpoint captured by a camera (usually located above or below a display monitor) is not aligned with the gaze direction of the human subject, who typically looks at their counterpart in the center of the screen. This means that the two parties cannot converse eye-to-eye, hampering the visual communication experience. A recent popular approach to the gaze mismatch problem is to synthesize a gaze-corrected face image as viewed from the screen center via depth-image-based rendering (DIBR), assuming texture and depth maps are both available at the camera-captured viewpoint. Due to self-occlusion, however, there will be missing pixels in the DIBR-synthesized view image that require satisfactory filling. In this paper, we propose to jointly solve the hole-filling problem and the face beautification problem (subtle modifications of facial components and contour to enhance attractiveness of the rendered face) using a dual sparsity prior. Specifically, prior to the start of a video conference session, we first train two dictionaries separately offline using two large data sets: one with general face images, and the other with "beautiful" human faces, i.e., faces with high beauty scores. During the actual conference session, we solve the hole-filling and facial component beautification problems simultaneously by seeking two code vectors: one is sparse in the first dictionary and explains the available DIBR-synthesized pixels, while the other is sparse in the second dictionary and matches the first vector well in terms of feature-space distance. This ensures an acceptable level of recognizability of the conference subject, while increasing proximity to "beautiful" facial features to improve attractiveness. Experimental results show naturally rendered human faces with noticeably improved attractiveness.
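The dual-sparsity formulation described above can be sketched as an alternating sparse-coding loop: one code vector fits the observed (non-hole) DIBR pixels under the general-face dictionary, the other stays sparse in the "beautiful"-face dictionary while a coupling term keeps the two reconstructions close. The following is a minimal illustration only, not the paper's actual algorithm: it approximates the paper's feature-space distance with plain pixel-space distance and uses generic ISTA updates; the function name `joint_sparse_fill` and all parameter choices are hypothetical.

```python
import numpy as np

def soft_threshold(x, t):
    """Element-wise soft-thresholding: the proximal operator of the l1 norm."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def joint_sparse_fill(y, mask, D1, D2, lam=0.1, mu=0.5, n_iter=200):
    """
    Alternating ISTA sketch of the dual-sparsity idea:
      minimize over (a1, a2)
        0.5*||mask * (D1 a1 - y)||^2 + (mu/2)*||D1 a1 - D2 a2||^2
        + lam * (||a1||_1 + ||a2||_1)
    y    : observed (DIBR-synthesized) face patch, flattened; holes may hold junk
    mask : 1.0 where pixels are available, 0.0 inside holes
    D1   : dictionary trained offline on general face images
    D2   : dictionary trained offline on high-beauty-score face images
    Note: the paper couples the codes via a feature-space distance; here the
    pixel-space distance ||D1 a1 - D2 a2|| stands in for it (an assumption).
    """
    a1 = np.zeros(D1.shape[1])
    a2 = np.zeros(D2.shape[1])
    # Conservative step size from Lipschitz bounds of the two block gradients.
    L1 = (1.0 + mu) * np.linalg.norm(D1, 2) ** 2
    L2 = mu * np.linalg.norm(D2, 2) ** 2
    step = 1.0 / max(L1, L2)
    for _ in range(n_iter):
        # a1 update: fit observed pixels + stay close to the beautified code.
        r_data = mask * (D1 @ a1 - y)
        r_link = D1 @ a1 - D2 @ a2
        g1 = D1.T @ (r_data + mu * r_link)
        a1 = soft_threshold(a1 - step * g1, step * lam)
        # a2 update: only the coupling term involves a2.
        g2 = mu * D2.T @ (D2 @ a2 - D1 @ a1)
        a2 = soft_threshold(a2 - step * g2, step * lam)
    # D2 @ a2 is the hole-free, "beautified" reconstruction of the patch.
    return D2 @ a2, a1, a2
```

In the paper the two dictionaries are trained offline before the conference session, so only this light per-patch coding loop would run online; the sketch above reflects that split but omits the dictionary-learning stage entirely.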



Published in: IEEE Transactions on Industrial Electronics (Volume: 66, Issue: 12, Dec. 2019)
Page(s): 9601 - 9611
Date of Publication: 17 January 2019
INSPEC Accession Number: 18881912
Publisher: IEEE


