I am Lijian Lin, a Computer Vision Researcher at the International Digital Economy Academy (IDEA). Previously, I was a Computer Vision Researcher at Tencent ARC Lab.

My current research focuses on talking head generation, human video synthesis, talking body generation, human-centric 3D Gaussian Splatting, and video content generation. I welcome opportunities for academic collaboration; please feel free to contact me at linlijian@idea.edu.cn. I received my B.S. degree from Xiamen University in 2021, where I was advised by Prof. Hanzi Wang.

I serve as a reviewer for international conferences, including CVPR, ICCV, ECCV, NeurIPS, ICLR, ICML, ACM MM, and AAAI.

👏 We are currently looking for self-motivated interns to explore cutting-edge techniques such as Gaussian Splatting and DM/FM. Feel free to contact me if you are interested. zhihu

🔥 News

  • [July, 2025]:  🎉 One TVCG paper has been accepted. Project page and demos are coming soon.
  • [June, 2025]:  🎉 Two ICCV papers have been accepted. Code and demos are coming soon.
  • [May, 2025]:  🎉 We introduce GUAVA, a new Upper Body 3D Gaussian Avatar.
  • [May, 2025]:  🎉 The code of HRAvatar has been released.
  • [March, 2025]:  🎉 The code of TEASER has been released.
  • [February, 2025]:  🎉 Our HRAvatar has been accepted to CVPR 2025.
  • [January, 2025]:  🎉 Our TEASER has been accepted to ICLR 2025.

  • [2024]: 5 papers were accepted to ICLR, CVPR, AAAI, and ECCV.
Click for More
  • [December, 2024]:  🎉 One <a href="https://aaai.org/Conferences/AAAI-25/">AAAI</a> paper has been accepted.
  • [February, 2024]:  🎉 One <a href="https://cvpr2024.thecvf.com/">CVPR 2024</a> paper has been accepted.
  • [January, 2024]:  🎉 Our GPAvatar has been accepted to ICLR 2024.
  • [July, 2023]:  🎉 Two ICCV papers have been accepted.

📝 Publications

📩 denotes corresponding author, 📌 denotes co-first author.

TVCG 2025
sym

Qffusion: Controllable Portrait Video Editing via Quadrant-Grid Attention Learning

Maomao Li📌, Lijian Lin📌, Yunfei Liu, Ye Zhu, Yu Li

Project | Video

  • We propose a novel dual-frame-guided framework for portrait video editing, which propagates fine-grained local modification from the start and end video frames.
  • We propose a recursive inference strategy named Quadrant-grid Propagation (QGP), which can stably generate arbitrary-long videos.
ICCV 2025
sym

GUAVA: Generalizable Upper Body 3D Gaussian Avatar

Dongbin Zhang, Yunfei Liu📩, Lijian Lin, Ye Zhu, Yang Li, Minghan Qin, Yu Li, Haoqian Wang📩

Project | Video

  • ⚡️ Reconstructs 3D upper-body Gaussian avatars from a single image in 0.1s.
  • ⏱️ Supports real-time expressive animation and novel view synthesis at 50 FPS!
CVPR 2025
sym

HRAvatar: High-Quality and Relightable Gaussian Head Avatar

Dongbin Zhang, Yunfei Liu, Lijian Lin, Ye Zhu, Kangjie Chen, Minghan Qin, Yu Li, Haoqian Wang

Project | Code

  • We propose HRAvatar, a 3D Gaussian Splatting-based method that reconstructs high-fidelity, relightable 3D head avatars from monocular videos by jointly optimizing tracking, deformation, and appearance modeling.
  • By leveraging learnable blendshapes, physically-based shading, and end-to-end optimization, HRAvatar significantly improves head quality and realism under novel lighting conditions.
ICLR 2025
sym

TEASER: Token Enhanced Spatial Modeling for Expressions Reconstruction

Yunfei Liu, Lei Zhu, Lijian Lin, Ye Zhu, Ailing Zhang, Yu Li

Project | Code

  • A novel approach that achieves more accurate facial expression reconstruction by predicting a hybrid representation of faces from a single image.
  • A multi-scale facial appearance tokenizer and a token-guided neural renderer to generate high-fidelity facial images. The extracted token is interpretable and highly disentangled, enabling various downstream applications.
ICCV 2023
sym

MODA: Mapping-Once Audio-driven Portrait Animation with Dual Attentions

Yunfei Liu, Lijian Lin, Fei Yu, Changyin Zhou, Yu Li

Project

  • We propose a unified system for multi-person, diverse, and high-fidelity talking portrait video generation.
  • Extensive evaluations demonstrate that the proposed system produces more natural and realistic video portraits compared to previous methods.
AAAI 2023
sym

Accelerating the Training of Video Super-Resolution Models

Lijian Lin, Xintao Wang, Zhongang Qi, Ying Shan

  • Our method substantially reduces wall-clock training time without any performance drop for various VSR models.
ACM MM 2020
sym

Dual Semantic Fusion Network for Video Object Detection

Lijian Lin📌, Haosheng Chen📌, Honglun Zhang, Jun Liang, Yu Li, Ying Shan, Hanzi Wang

  • We present a dual semantic fusion network, which performs a multi-granularity semantic fusion at both frame level and instance level in a unified framework and then generates enhanced features for video object detection.
  • We introduce a geometric similarity measure into the proposed dual semantic fusion network along with the widely used appearance similarity measure to alleviate the information distortion caused by noise during the fusion process.

💻 Work Experience

  • 2021.07 - 2022.10, Tencent, ARC Lab, Shenzhen, China. Full-time Computer Vision Researcher.
  • 2019.12 - 2022.10, Tencent, ARC Lab, Shenzhen, China. Computer Vision Research Intern.
Last updated in June 2025.