Huazhe(Harry) Xu



How to pronounce my name?
In the Wade-Giles system of romanization, it is rendered as Huache Tsu.
In Chinese characters, it is 许华哲.

xuhuazhe12@gmail.com

Download my CV

Google Scholar

About Me

I am a 4th-year Ph.D. student at Berkeley AI Research (BAIR), advised by Prof. Trevor Darrell. I received my bachelor's degree from Tsinghua University. I have also spent wonderful time at the research labs of Facebook AI, the University of Toronto, and the International Computer Science Institute, enjoying collaborations with Dr. Roberto Calandra, Prof. Tengyu Ma, Dr. Yuandong Tian, Prof. Jiashi Feng, Prof. Sergey Levine, Prof. Sanja Fidler, and Prof. Raquel Urtasun.

My research focuses on modeling the dynamics of the world, leveraging and discovering human priors for policy learning, and thereby enabling learning algorithms to learn in a sample-efficient manner. I am also interested in solving complex video games and real-world applications with deep learning and reinforcement learning.

I am also an amateur pianist and am actively looking for potential collaborations (both music-wise and research-wise!). If you work on reinforcement learning or computer vision projects, or you play the piano, the violin, the cello, etc., feel free to contact me for a potential project or just for fun!

I support Slow Science.

Education

Aug. 2012 - Jul. 2016, Department of Electronic Engineering, Tsinghua University,

Bachelor of Engineering, GPA: 93/100, ranking: 5/238. Average of math and math-related courses: 95.4/100.

Aug. 2014 - Dec. 2014, School of Electrical and Computer Engineering, University of Toronto,

Exchange Student, GPA: 4.0/4.0.

Jul. 2015 - Sept. 2015, Aug. 2016 - now, Department of Electrical and Computer Engineering, University of California, Berkeley,

Visiting Researcher, Ph.D. Student.

Selected Research Projects

Modeling Visual Dynamics

Hierarchical Style-based Networks for Motion Synthesis,

Joint work w/ J. Xu, X. Wang, T. Darrell

Published in ECCV'20 [pdf] [code]

Video Prediction via Example Guidance,

Joint work w/ J. Xu, T. Darrell

Published in ICML'20 [pdf] [code]

Disentangling Propagation and Generation for Video Prediction,

Joint work w/ H. Gao, Q. Cai, R. Wang, F. Yu, T. Darrell

Published in ICCV'19 [pdf], code available upon request.

Human Priors and Dynamics Models for Efficient Policy Learning

Scoring-Aggregating-Planning: A Framework for Learning Task-Agnostic Priors from Interactions and Rewards for Zero-Shot Generalization,

Joint work w/ B. Chen, Y. Gao, T. Darrell

AAAI'20 Genplan Workshop (spotlight) [pdf] [code]

Learning Self-Correctable Policies and Value Functions from Demonstrations with Negative Sampling,

Joint work w/ Y. Luo and T. Ma

Published in ICLR'20, also presented at the NeurIPS 2019 Deep Reinforcement Learning Workshop [pdf] [code]

Algorithmic Framework for Model-based Deep Reinforcement Learning with Theoretical Guarantees,

Joint work w/ Y. Luo, Y. Li, Y. Tian, T. Darrell, T. Ma

Published in ICLR'19 [pdf] [code] [website]

Reinforcement Learning from Imperfect Demonstrations,

Joint work w/ Y. Gao, J. Lin, F. Yu, S. Levine and T. Darrell

Appeared in the NeurIPS Deep RL Symposium [pdf] [code]

Learning Policies for Real-World Applications

End-to-End Learning of Driving Models from Large-scale Video Datasets,

Joint work w/ Y. Gao, F. Yu and T. Darrell

Published in CVPR'17 (oral) [pdf] [code]

Modular Architecture for StarCraft II with Deep Reinforcement Learning,

Joint work w/ D. Lee, H. Tang, J. Zhang, T. Darrell and P. Abbeel

Published in the AAAI Conference on Artificial Intelligence and Interactive Digital Entertainment (AIIDE) 2018 [pdf] [code]

Past Projects

Natural Language and Object Retrieval,

Joint work w/ R. Hu, M. Rohrbach, J. Feng, K. Saenko and T. Darrell,

Published in CVPR'16 (oral) [pdf] [code]

Automobile Visual Taste Ranking,

Joint work w/ S. Fidler and R. Urtasun,

Fall 2015, University of Toronto.

Publications and Manuscripts

Honors and Awards

News!

Service

Miscellaneous

Attempts

Here is a partial catalog of my attempts at becoming more than what I am today. I doubt I will ever succeed completely, but I hope I will never stop trying.
