Weekly Paper Reading | ACL 2019 & NAACL 2019: A Salon on Text Relation Extraction
"Weekly Paper Reading" is a paper reading-group series initiated by PaperWeekly. Each session's papers are selected from top-conference work and recent advances in natural language processing, computer vision, and machine learning, and we invite the authors to join us on site for deeper follow-up discussion.
We hope to offer PaperWeekly readers a fresh paper-reading experience, a chance to meet peers and find a community, and an opportunity to talk face to face with authors of papers from top international conferences.
At 2 PM on Sunday, June 30, "Weekly Paper Reading" will host Hao Zhu, an undergraduate in the Department of Computer Science at Tsinghua University, who will present his recent papers published at the top NLP conferences ACL 2019 and NAACL 2019.
The theme of this session is text relation extraction. The speaker will present the work from two angles: measuring similarity between relations and graph neural network methods. Anyone interested in relation extraction or natural language processing more broadly is welcome to join the on-site discussion.
01# Featured Speaker
Hao Zhu
Undergraduate, Department of Computer Science, Tsinghua University
Hao Zhu is graduating from Tsinghua University with a bachelor's degree in computer science and will join CMU LTI as a Ph.D. student this fall. He feels fortunate to have worked with Zhiyuan Liu, Jason Eisner (JHU), Matt Gormley (CMU), and Tat-Seng Chua (NUS) during his undergraduate research.
His ultimate goal is to understand human intelligence. Believing in Feynman's famous quote, "What I cannot create, I do not understand," he works on teaching machine learning models to attain human intelligence. More specifically, he is currently interested in teaching machines to speak human language and to perform human-level logical reasoning. To reach these goals, he pursues principled, computable, and effective approaches.
ACL 2019
Abstract: In this paper, we propose a novel graph neural network with generated parameters (GP-GNNs). The parameters in the propagation module, i.e., the transition matrices used in the message passing procedure, are produced by a generator that takes natural language sentences as inputs. We verify GP-GNNs on relation extraction from text, in both bag- and instance-level settings. Experimental results on a human-annotated dataset and two distantly supervised datasets show that the multi-hop reasoning mechanism yields significant improvements. We also perform a qualitative analysis to demonstrate that our model can discover more accurate relations by multi-hop relational reasoning. Code and data are released at https://github.com/thunlp/gp-gnn.
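The sketch below illustrates the core GP-GNN idea from the abstract: the transition matrices used in message passing are not free parameters but are generated from sentence encodings. The module structure, dimensions, and names here are illustrative assumptions, not the authors' released implementation (see the repository linked above for the official code).

```python
# Minimal sketch of message passing with generated transition matrices.
# All hyperparameters and module choices below are assumptions for illustration.
import torch
import torch.nn as nn


class GPGNNSketch(nn.Module):
    def __init__(self, vocab_size: int, word_dim: int = 50, node_dim: int = 8, layers: int = 2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, word_dim)
        # Encoder that reads the natural-language sentence mentioning an entity pair.
        self.encoder = nn.GRU(word_dim, word_dim, batch_first=True)
        # Generator: maps a sentence encoding to one node_dim x node_dim
        # transition matrix per propagation layer.
        self.generators = nn.ModuleList(
            [nn.Linear(word_dim, node_dim * node_dim) for _ in range(layers)]
        )
        self.node_dim = node_dim
        self.layers = layers

    def forward(self, edge_tokens: torch.Tensor, node_states: torch.Tensor,
                edges: list) -> torch.Tensor:
        """edge_tokens: (num_edges, seq_len) token ids of the sentence for each edge.
        node_states: (num_nodes, node_dim) initial entity-node representations.
        edges: list of (src, dst) node index pairs aligned with edge_tokens rows."""
        _, h = self.encoder(self.embed(edge_tokens))   # h: (1, num_edges, word_dim)
        sent_repr = h.squeeze(0)                       # (num_edges, word_dim)
        states = node_states
        for layer in range(self.layers):
            # Generate one transition matrix per edge from its sentence encoding.
            A = self.generators[layer](sent_repr).view(-1, self.node_dim, self.node_dim)
            new_states = torch.zeros_like(states)
            for e, (src, dst) in enumerate(edges):
                # Message from src to dst, transformed by the generated matrix.
                new_states[dst] = new_states[dst] + A[e] @ states[src]
            states = torch.tanh(new_states)
        return states  # refined node states; a relation classifier over node pairs would follow


# Toy usage: 3 entity nodes, 2 edges, each edge backed by a 5-token sentence.
model = GPGNNSketch(vocab_size=100)
tokens = torch.randint(0, 100, (2, 5))
nodes = torch.randn(3, 8)
out = model(tokens, nodes, edges=[(0, 1), (1, 2)])
print(out.shape)  # torch.Size([3, 8])
```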
ACL 2019
Abstract: We introduce a conceptually simple and effective method to quantify the similarity between relations in knowledge bases. Specifically, our approach is based on the divergence between the conditional probability distributions over entity pairs. In this paper, these distributions are parameterized by a very simple neural network. Although computing the exact similarity is intractable, we provide a sampling-based method to obtain a good approximation.
We empirically show the outputs of our approach significantly correlate with human judgments. By applying our method to various tasks, we also find that (1) our approach could effectively detect redundant relations extracted by open information extraction (Open IE) models, that (2) even the most competitive models for relational classification still make mistakes among very similar relations, and that (3) our approach could be incorporated into negative sampling and softmax classification to alleviate these mistakes. The source code and experiment details of this paper can be obtained from https://github.com/thunlp/relation-similarity.
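As a rough illustration of the idea in this abstract, the sketch below treats each relation as a neural distribution over (head, tail) entity pairs and estimates a similarity score from the symmetrized KL divergence by Monte Carlo sampling. The tiny softmax parameterization and the exp(-KL) squashing are illustrative assumptions, not the paper's exact formulation; see https://github.com/thunlp/relation-similarity for the authors' implementation.

```python
# Sampling-based estimate of similarity between two relation "fact distributions".
import torch
import torch.nn as nn


class RelationFactDistribution(nn.Module):
    """P(head, tail | r) over a small closed entity set, via a tiny MLP (an assumption)."""

    def __init__(self, num_entities: int, dim: int = 16):
        super().__init__()
        self.head_emb = nn.Embedding(num_entities, dim)
        self.tail_emb = nn.Embedding(num_entities, dim)
        self.scorer = nn.Sequential(nn.Linear(2 * dim, dim), nn.Tanh(), nn.Linear(dim, 1))
        self.num_entities = num_entities

    def log_probs(self) -> torch.Tensor:
        """Return log P(h, t) for every entity pair, shape (num_entities ** 2,)."""
        n = self.num_entities
        h = self.head_emb.weight.unsqueeze(1).expand(n, n, -1)
        t = self.tail_emb.weight.unsqueeze(0).expand(n, n, -1)
        scores = self.scorer(torch.cat([h, t], dim=-1)).reshape(-1)
        return torch.log_softmax(scores, dim=0)


def sampled_similarity(p: RelationFactDistribution, q: RelationFactDistribution,
                       num_samples: int = 2000) -> float:
    """Monte Carlo estimate of exp(-0.5 * (KL(p||q) + KL(q||p)))."""
    def kl(a: RelationFactDistribution, b: RelationFactDistribution) -> torch.Tensor:
        log_a, log_b = a.log_probs(), b.log_probs()
        # Sample entity pairs from a and average the log-ratio: E_{x~a}[log a(x) - log b(x)].
        idx = torch.multinomial(log_a.exp(), num_samples, replacement=True)
        return (log_a[idx] - log_b[idx]).mean()

    with torch.no_grad():
        return torch.exp(-0.5 * (kl(p, q) + kl(q, p))).item()


# Toy usage: two randomly initialized relations over 20 entities.
torch.manual_seed(0)
r1, r2 = RelationFactDistribution(20), RelationFactDistribution(20)
print(sampled_similarity(r1, r2))  # a value in (0, 1]; higher means more similar
```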
NAACL 2019
Abstract: We introduce neural finite state transducers (NFSTs), a family of string transduction models defining joint and conditional probability distributions over pairs of strings. The probability of a string pair is obtained by marginalizing over all its accepting paths in a finite state transducer. In contrast to ordinary weighted FSTs, however, each path is scored using an arbitrary function such as a recurrent neural network, which breaks the usual conditional independence assumption (Markov property). NFSTs are more powerful than previous finite-state models with neural features (Rastogi et al., 2016). We present training and inference algorithms for locally and globally normalized variants of NFSTs. In experiments on different transduction tasks, they compete favorably against seq2seq models while offering interpretable paths that correspond to hard monotonic alignments.
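The toy sketch below shows the scoring idea described in this abstract: the probability of a string pair is obtained by marginalizing over its accepting paths, where each whole path is scored by a neural function (here an LSTM over arc embeddings) rather than by a sum of independent arc weights. The tiny path set, the arc vocabulary, and the exhaustive global normalization are illustrative assumptions; the paper itself relies on approximate inference rather than enumeration.

```python
# Toy NFST-style scoring: marginalize LSTM path scores over accepting paths.
import torch
import torch.nn as nn


class PathScorer(nn.Module):
    """Scores a whole FST path with an LSTM, breaking the Markov property."""

    def __init__(self, num_arc_types: int, dim: int = 32):
        super().__init__()
        self.arc_emb = nn.Embedding(num_arc_types, dim)
        self.lstm = nn.LSTM(dim, dim, batch_first=True)
        self.out = nn.Linear(dim, 1)

    def forward(self, paths: torch.Tensor) -> torch.Tensor:
        """paths: (num_paths, path_len) arc-type ids -> (num_paths,) unnormalized scores."""
        _, (h, _) = self.lstm(self.arc_emb(paths))
        return self.out(h.squeeze(0)).squeeze(-1)


def log_prob_of_pair(scorer: PathScorer, accepting_paths: torch.Tensor,
                     all_paths: torch.Tensor) -> torch.Tensor:
    """log P(x, y) = logsumexp over paths accepting (x, y) minus logsumexp over all paths.
    Enumerating every path is only feasible in this toy setting."""
    return torch.logsumexp(scorer(accepting_paths), 0) - torch.logsumexp(scorer(all_paths), 0)


# Toy usage: 6 arc types, 3-arc paths; 2 of the 4 enumerated paths accept the string pair.
torch.manual_seed(0)
scorer = PathScorer(num_arc_types=6)
all_paths = torch.tensor([[0, 1, 2], [0, 3, 2], [4, 1, 5], [4, 3, 5]])
accepting = all_paths[:2]
print(log_prob_of_pair(scorer, accepting, all_paths).item())  # a log-probability <= 0
```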
Time: Sunday, June 30, 14:00–16:00
Venue: Conference Room 102, Beijing Academy of Artificial Intelligence
1-1 Zhongguancun South Street, Haidian District, Beijing
Zhongguancun Lingchuang Space (Information Valley)
1 / Long-press the QR code to register
2 / Join the NLP topic discussion group
Registration deadline: Saturday, June 29, 12:00
* Venue capacity is limited. Readers whose registration is confirmed will receive a text message containing an e-ticket QR code; please watch for it.
Notes:
* If you cannot attend on time, please leave a message in the PaperWeekly WeChat official account backend at least 24 hours before the event, in the format "cancel registration + registration phone number". Those who are absent without notice will no longer be eligible to register for future events.
Scan the QR code to follow PaperWeekly
Department of Computer Science and Technology, Tsinghua University
Beijing Academy of Artificial Intelligence (BAAI)
You can now also find us on Zhihu.
Search for "PaperWeekly" on the Zhihu homepage
and click "Follow" to subscribe to our column.
About PaperWeekly
PaperWeekly is an academic platform for recommending, interpreting, discussing, and reporting on cutting-edge AI research. If you study or work in AI, tap "Discussion Group" in the official account backend and our assistant will add you to a PaperWeekly discussion group.
▽ Click "Read Original" to register now