Qingyang Zhang 张庆阳
I am a first-year Ph.D. student at Tianjin University, supervised by Prof. Changqing Zhang. I am currently an intern at Tencent AI Lab, co-supervised by Yatao Bian.
I received my Bachelor's degree from the School of Computer Science and Technology at Tianjin University (2018-2022). I then pursued my Master's at the same school and transitioned to Ph.D. candidacy through the direct doctoral program in 2024.
Email / Google Scholar / GitHub
News
[2025-01] One paper accepted by ICLR, thanks to all co-authors!
[2024-05] One paper accepted by NeurIPS, thanks to all co-authors!
[2024-05] We released a survey on the fusion of low-quality multi-modal data! [arXiv]
[2024-04] Started an internship at Tencent AI Lab, supervised by Yatao Bian.
[2023-04] Two papers accepted by ICML, including one Oral, thanks to all co-authors!
[2022-06] Graduated from Tianjin University.
Survey
Multimodal Fusion on Low-quality Data: A Comprehensive Survey
Qingyang Zhang, Yake Wei, Zongbo Han, Huazhu Fu, Xi Peng, Cheng Deng, Qinghua Hu, Cai Xu, Jie Wen, Di Hu, Changqing Zhang
arXiv / awesome list
A systematic survey on the fusion of low-quality multi-modal data.
Publications (* equal contribution)
The Best of Both Worlds: On the Dilemma of Out-of-Distribution Detection
Qingyang Zhang, Qiuxuan Feng, Joey Tianyi Zhou, Yatao Bian, Qinghua Hu, Changqing Zhang
NeurIPS, 2024
arXiv / code
Resolves the conflict between OOD detection and generalization to achieve strong performance on both.
Provable Dynamic Fusion for Low-quality Multimodal Learning
Qingyang Zhang, Haitao Wu, Changqing Zhang, Qinghua Hu, Huazhu Fu, Joey Tianyi Zhou, Xi Peng
ICML, 2023
arXiv / code
A theory-inspired dynamic fusion strategy for modalities of varying quality in real-world scenarios.
Calibrating Multimodal Learning
Huan Ma*, Qingyang Zhang*, Changqing Zhang, Bingzhe Wu, Huazhu Fu, Joey Tianyi Zhou, Qinghua Hu
ICML, 2023
arXiv / code
Mitigates the greedy nature of multimodal learning by regularizing model confidence.
Services
Conference Reviewer: ICLR 2022-2024, NeurIPS 2023-2024, ICML 2024
Awards
National Scholarship (top 1%), 2022 and 2023