Hao Zhu

I am a third-year undergraduate in the Department of Computer Science and Technology at Tsinghua University. I joined the Tsinghua Natural Language Processing Group (THUNLP) in early 2016, where I am fortunate to be advised by Prof. Zhiyuan Liu.

My ultimate goal is to understand human intelligence. Guided by Feynman's famous quote, "What I cannot create, I do not understand," I work on building machine learning models that exhibit human intelligence. More specifically, I am currently interested in teaching machines to speak human language and to perform human-level logical reasoning. I also have broad interests in other areas of cognitive science.

Email: zhuhao15 [at] mails.tsinghua.edu.cn

[Curriculum Vitæ] [Calendar] [Meet Me!]

[Full Publication List & Preprints]   [Google Scholar]


[Jun. 2018] Released the camera-ready version of our ACL 2018 paper and its code.

[Apr. 2018] Our paper "Incorporating Chinese Characters of Words for Lexical Sememe Prediction" has been accepted to ACL 2018! My co-authors, Huiming, Ruobing, and Prof. Zhiyuan Liu, are really fantastic! See you in Melbourne!

[Apr. 2018] I have become a fellow of the Tsinghua University Initiative Scientific Research Program, with funding of 32,000 USD!

[Apr. 2018] This summer and fall I am fortunate to be doing research with Prof. Matt Gormley at CMU and Prof. Jason Eisner at JHU. See you there!


Reviewer: EMNLP 2018

Volunteer/Review Assistant: IJCAI 2017/2018

Research Highlights

Iterative Entity Alignment via Joint Knowledge Embeddings

Entity alignment aims to link entities with their counterparts across different knowledge graphs. There are two main challenges in this project: (1) how to find synonymous entity counterparts across knowledge graphs, and (2) how to make full use of the entity pairs aligned during training. We proposed embedding both knowledge graphs into a joint embedding space, along with an iterative method that exploits newly aligned entity pairs.

Iterative Entity Alignment via Joint Knowledge Embeddings. International Joint Conference on Artificial Intelligence (IJCAI-17).
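The iterative idea above can be sketched in a few lines. This is a simplified, hypothetical variant, not the paper's exact method: instead of jointly training knowledge embeddings, it assumes pre-trained entity embeddings for the two graphs and fits a linear map on seed-aligned pairs, then repeatedly adds mutual-nearest-neighbor pairs whose distance falls under a confidence threshold, mimicking how newly aligned pairs feed back into training. The function names (`fit_mapping`, `iterative_align`) and the threshold are illustrative assumptions.

```python
import numpy as np

def fit_mapping(src, tgt, pairs):
    """Fit a least-squares linear map W so that src[i] @ W ~= tgt[j]
    for every currently aligned pair (i, j)."""
    A = src[[i for i, _ in pairs]]
    B = tgt[[j for _, j in pairs]]
    W, *_ = np.linalg.lstsq(A, B, rcond=None)
    return W

def iterative_align(src, tgt, seed_pairs, threshold=0.5, rounds=3):
    """Iteratively grow the alignment: after each round, confident new
    pairs (mutual nearest neighbors under `threshold`) are treated as
    training pairs for the next round."""
    pairs = list(seed_pairs)
    for _ in range(rounds):
        W = fit_mapping(src, tgt, pairs)
        mapped = src @ W
        # pairwise distances between mapped source entities and targets
        d = np.linalg.norm(mapped[:, None, :] - tgt[None, :, :], axis=-1)
        aligned_src = {i for i, _ in pairs}
        aligned_tgt = {j for _, j in pairs}
        new = []
        for i in range(len(src)):
            if i in aligned_src:
                continue
            j = int(np.argmin(d[i]))
            # accept only mutual nearest neighbors below the threshold
            if (j not in aligned_tgt
                    and int(np.argmin(d[:, j])) == i
                    and d[i, j] < threshold):
                new.append((i, j))
        if not new:
            break
        pairs.extend(new)
    return sorted(pairs)
```

On toy data where the two embedding spaces are related by an exact linear transform, a handful of seed pairs is enough for the loop to recover the remaining alignments.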

Incorporating Chinese Characters of Words for Lexical Sememe Prediction

Sememes are minimum semantic units of concepts in human languages, such that each word sense is composed of one or multiple sememes. Words are usually manually annotated with their sememes by linguists and form linguistic common-sense knowledge bases widely used in various NLP tasks. Recently, the lexical sememe prediction task was introduced. It consists of automatically recommending sememes for words, which is expected to improve annotation efficiency and consistency. However, existing methods of lexical sememe prediction typically rely on the external context of words to represent the meaning, which usually fails to deal with low-frequency and out-of-vocabulary words. To address this issue for Chinese, we propose a novel framework to take advantage of both internal character information and external context information of a word.

Incorporating Chinese Characters of Words for Lexical Sememe Prediction. Annual Meeting of the Association for Computational Linguistics (ACL-18).
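The intuition of combining internal and external information can be sketched as follows. This is a minimal toy illustration, not the paper's model: it assumes pre-trained character, word, and sememe embeddings (all hypothetical here), represents a word as a mix of its context embedding and the mean of its character embeddings, and falls back to characters alone for out-of-vocabulary words, which is exactly the case external-context-only methods fail on. Sememes are then ranked by dot-product similarity.

```python
import numpy as np

def word_vector(word, char_emb, word_emb, alpha=0.5):
    """Combine internal (character) and external (context) information.
    For out-of-vocabulary words the context embedding is missing, so the
    representation falls back to characters alone."""
    char_vec = np.mean([np.asarray(char_emb[c]) for c in word], axis=0)
    if word in word_emb:
        return alpha * np.asarray(word_emb[word]) + (1 - alpha) * char_vec
    return char_vec

def predict_sememes(word, char_emb, word_emb, sememe_emb, k=2):
    """Recommend the top-k sememes by similarity to the word vector."""
    v = word_vector(word, char_emb, word_emb)
    scores = {s: float(v @ np.asarray(e)) for s, e in sememe_emb.items()}
    return sorted(scores, key=scores.get, reverse=True)[:k]
```

For example, with toy 2-d embeddings where the character 水 ("water") lies close to the sememe "water", the unseen word 江水 ("river water") is still assigned the right sememe purely from its characters.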