Open Access Issue
Truth Discovery with Memory Network
Tsinghua Science and Technology 2017, 22 (6): 609-618
Published: 14 December 2017

Truth discovery aims to resolve conflicts among multiple sources and find the truth. Conventional methods for truth discovery mainly investigate the mutual effect between the reliability of sources and the credibility of statements. These methods represent reliability with real numbers, which have a lower representational capacity than vectors. In addition, neural networks have not previously been applied to truth discovery. In this work, we propose memory-network-based models to address truth discovery. Our proposed models use feedforward and feedback memory networks to learn representations of the credibility of statements. Specifically, our models adopt a memory mechanism to learn the reliability of sources for truth prediction. The proposed models exploit both categorical and continuous data during learning by automatically assigning different weights in the loss function according to each data type's effect. Experimental results show that our proposed models outperform state-of-the-art methods for truth discovery.
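The core idea above — representing each source's reliability as a vector and using an attention-style memory mechanism to score candidate statements — can be illustrated with a minimal sketch. This is a hypothetical toy in NumPy, not the paper's actual model: the embeddings are random, and the attention/aggregation scheme is a simplified stand-in for the memory networks described in the abstract.

```python
import numpy as np

def softmax(x):
    """Numerically stable softmax over a 1-D array."""
    e = np.exp(x - x.max())
    return e / e.sum()

rng = np.random.default_rng(0)
d = 4                                   # embedding dimension (illustrative)
statements = rng.normal(size=(3, d))    # 3 conflicting candidate statements
reliability = rng.normal(size=(2, d))   # one reliability VECTOR per source
                                        # (vs. a single scalar in classic methods)

# Memory-style attention: each source attends over the candidate statements,
# scoring them by the match between its reliability vector and the statement.
scores = reliability @ statements.T               # shape (sources, statements)
attn = np.apply_along_axis(softmax, 1, scores)    # each row sums to 1

# Statement credibility = aggregated attention mass from all sources;
# the predicted truth is the statement with the highest credibility.
credibility = attn.sum(axis=0)
truth_idx = int(np.argmax(credibility))
print("predicted truth: statement", truth_idx)
```

The point of the vector representation is visible here: two sources with equal overall accuracy can still attend differently to different statements, which a single scalar reliability cannot express.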

Open Access Issue
Multi-Level Cross-Lingual Attentive Neural Architecture for Low Resource Name Tagging
Tsinghua Science and Technology 2017, 22 (6): 633-645
Published: 14 December 2017

Neural networks have been widely used for English name tagging and have delivered state-of-the-art results. However, for low resource languages, taggers tend to perform worse than for English because of limited resources and a lack of training data. In this paper, we tackle this challenging issue by incorporating multi-level cross-lingual knowledge as attention into a neural architecture, which guides low resource name tagging toward better performance. Specifically, we regard the entity type distribution as language independent and use bilingual lexicons to bridge cross-lingual semantic mapping. Then, we jointly apply word-level cross-lingual mutual influence and entity-type-level monolingual word distributions to enhance low resource name tagging. Experiments on three languages demonstrate the effectiveness of this neural architecture: for Chinese, Uzbek, and Turkish, we yield significant improvements in name tagging over all previous baselines.
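The transfer step described above — treating the entity type distribution as language independent and using a bilingual lexicon to reach it from a low-resource word — can be sketched as follows. This is a hypothetical illustration, not the paper's architecture: the lexicon, the type distributions, and the label set (GPE/PER/ORG) are invented for the example, and the real system feeds such signals into a neural tagger as attention rather than using them directly.

```python
# Hypothetical sketch: bridging a low-resource word to an English
# entity-type distribution through a bilingual lexicon.

# Toy bilingual lexicon: low-resource word -> English word (invented entries).
bilingual_lexicon = {"pekin": "beijing", "ankara": "ankara"}

# English entity-type distributions, e.g. estimated from annotated English
# data; treated here as language-independent knowledge (invented numbers).
en_type_dist = {
    "beijing": {"GPE": 0.90, "PER": 0.05, "ORG": 0.05},
    "ankara":  {"GPE": 0.85, "PER": 0.05, "ORG": 0.10},
}

# Uniform fallback when the word is out of the lexicon.
UNIFORM = {"GPE": 1 / 3, "PER": 1 / 3, "ORG": 1 / 3}

def type_features(word):
    """Map a low-resource word to an entity-type distribution via English."""
    en_word = bilingual_lexicon.get(word.lower())
    return en_type_dist.get(en_word, UNIFORM)

print(type_features("Pekin"))      # strong GPE signal via 'beijing'
print(type_features("qwerty"))     # unknown word falls back to uniform
```

In the full system such distributions would act as one attention signal among several (the abstract also mentions word-level cross-lingual mutual influence), rather than as hard features.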
