Research

Posted on Jul 11, 2022

My goal is to demonstrate that deep learning systems, when interacting with other systems, are capable of much more than they can do today. I've worked (or am working) on training neural networks to do fact-checking over knowledge bases, solve reasoning tasks such as mathematics, understand video more efficiently, optimize computation graphs generated by compilers, and run faster than traditional algorithms in computational chemistry. Themes include CV, NLP, learning interleaved with search, and applied DL. Here is a list of the work I've contributed to:

Publications

D4FT: A Deep Learning Approach to Kohn-Sham Density Functional Theory (2023) [ICLR23 Spotlight]
Tianbo Li, Min Lin, Zheyuan Hu, Kunhao Zheng, Giovanni Vignale, Kenji Kawaguchi, A.H. Castro Neto, Kostya S. Novoselov, Shuicheng Yan

HloEnv: A Graph Rewrite Environment for Deep Learning Compiler Optimization Research (2022) [NeurIPS22 MLSys Workshop]
[library]
Chin Yang Oh*, Kunhao Zheng*, Bingyi Kang, Xinyi Wan, Zhongwen Xu, Shuicheng Yan, Min Lin, Yangzihao Wang

Formal Mathematics Statement Curriculum Learning (2022) [ICLR23 Spotlight]
[blog]
Stanislas Polu, Jesse Michael Han, Kunhao Zheng, Mantas Baksys, Igor Babuschkin, Ilya Sutskever

Prompting Visual-Language Models for Efficient Video Understanding (2021) [ECCV22]
[site]
Chen Ju, Tengda Han, Kunhao Zheng, Ya Zhang, Weidi Xie

MiniF2F: a cross-system benchmark for formal Olympiad-level mathematics (2021) [ICLR22]
[poster video]
Kunhao Zheng, Jesse Michael Han, Stanislas Polu