Research Highlights

Open-domain Dialogue Systems

Developing human-level conversational systems has been a long-term goal of AI since the Turing Test. Data-driven conversational models generally fall into two categories: retrieval-based methods, which select a response from a predefined repository, and generation-based methods, which usually employ an encoder-decoder framework.

Letting a chatbot perceive and express emotion has been a long-standing goal of AI. We studied the following problem: given a post and an emotion category (happy, sad, like, angry, etc.), how can we generate a response that expresses that emotion? We proposed the Emotional Chatting Machine (ECM), the first work to address the emotion factor in large-scale conversation generation. Building on the encoder-decoder framework, we designed an internal memory and an external memory that inject emotion information into the decoder. Experiments show that ECM can generate responses that are appropriate not only in content (relevant and grammatical) but also in emotion (emotionally consistent).
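
The internal-memory idea can be pictured in a few lines. Below is a minimal sketch, assuming PyTorch; the module name `EmotionDecoderCell` and the exact gating are simplifications for exposition, not ECM's precise formulation: an emotion embedding initializes an internal memory that is gated into each decoding step and decays as it is consumed.

```python
import torch
import torch.nn as nn

class EmotionDecoderCell(nn.Module):
    """Illustrative decoder cell: an emotion embedding is gated into the
    GRU input, and an internal emotion memory decays as it is consumed."""
    def __init__(self, vocab_size, emb_dim, hidden_dim, num_emotions):
        super().__init__()
        self.word_emb = nn.Embedding(vocab_size, emb_dim)
        self.emotion_emb = nn.Embedding(num_emotions, emb_dim)
        self.read_gate = nn.Linear(emb_dim + hidden_dim, emb_dim)
        self.write_gate = nn.Linear(hidden_dim, emb_dim)
        self.gru = nn.GRUCell(2 * emb_dim, hidden_dim)
        self.out = nn.Linear(hidden_dim, vocab_size)

    def forward(self, word_ids, hidden, emo_memory):
        w = self.word_emb(word_ids)                        # (B, emb)
        g_r = torch.sigmoid(self.read_gate(torch.cat([w, hidden], -1)))
        read = g_r * emo_memory                            # read from internal memory
        hidden = self.gru(torch.cat([w, read], -1), hidden)
        g_w = torch.sigmoid(self.write_gate(hidden))
        emo_memory = g_w * emo_memory                      # memory decays over time
        return self.out(hidden), hidden, emo_memory

# usage: initialize the internal memory from the target emotion category
cell = EmotionDecoderCell(vocab_size=10000, emb_dim=64, hidden_dim=128, num_emotions=6)
memory = cell.emotion_emb(torch.tensor([2]))               # e.g. category "sad"
logits, h, memory = cell(torch.tensor([5]), torch.zeros(1, 128), memory)
```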

Our work was reported by MIT Technology Review, The Guardian, and NVIDIA.

The lack of a consistent personality is one of the major obstacles preventing a chatbot from passing the Turing Test. Endowing a chatbot with a personality or identity is challenging but critical for delivering more realistic and natural conversations. We studied a novel problem for explicit personalization: given a chatbot’s profile, how can we generate responses that are coherent with that profile? This allows a system developer to control the profile/personality of a chatbot. We addressed the issue by designing a profile detector, a position detector, and a bidirectional decoder on top of the original seq2seq model. Post-level and session-level evaluations show that, given a chatbot profile, our model generates responses that are more coherent with the profile and exhibit greater language variety.
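
As a rough illustration of the profile detector's role, here is a minimal sketch assuming PyTorch (the class `ProfileDetector`, its two heads, and all dimensions are illustrative stand-ins, not the paper's architecture): it decides whether a post should be answered from the profile and, if so, which profile key (name, age, hobby, etc.) is relevant; the bidirectional decoder then generates the response backward and forward from the selected profile value.

```python
import torch
import torch.nn as nn

class ProfileDetector(nn.Module):
    """Illustrative profile detector: decides (a) whether a post should be
    answered using the chatbot's profile and (b) which profile key applies."""
    def __init__(self, vocab_size, emb_dim, hidden_dim, num_keys):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim)
        self.encoder = nn.GRU(emb_dim, hidden_dim, batch_first=True)
        self.use_profile = nn.Linear(hidden_dim, 1)       # use the profile at all?
        self.which_key = nn.Linear(hidden_dim, num_keys)  # which profile entry?

    def forward(self, post_ids):
        _, h = self.encoder(self.emb(post_ids))           # h: (1, B, hidden)
        h = h.squeeze(0)
        return torch.sigmoid(self.use_profile(h)), torch.softmax(self.which_key(h), -1)

detector = ProfileDetector(vocab_size=10000, emb_dim=64, hidden_dim=128, num_keys=5)
p_use, p_key = detector(torch.randint(0, 10000, (2, 12)))
```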

Semantic understanding, particularly when facilitated by commonsense knowledge or world facts, is essential to many natural language processing tasks, and it is undoubtedly a key factor in the success of dialogue systems, as conversational interaction is a semantic activity. We proposed a novel open-domain conversation generation model to demonstrate how large-scale commonsense knowledge can facilitate language understanding and generation. The model relies on two novel graph attention mechanisms: a static graph attention mechanism encodes the commonsense knowledge graphs retrieved for a post to augment the post's semantic representation, which helps the model understand the post; a dynamic graph attention mechanism attentively reads the knowledge graphs and the triples within each graph, and uses the semantic information from the graphs and triples for better response generation. Experiments show that the proposed model generates more appropriate and informative responses than state-of-the-art baselines.
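
The static mechanism can be illustrated compactly. Below is a simplified sketch in PyTorch (the class `StaticGraphAttention` and the concatenation-based scoring are our illustrative choices, not the paper's exact formulation): it attends over the (head, relation, tail) triples of a retrieved graph and returns a single graph vector that can augment a post word's representation.

```python
import torch
import torch.nn as nn

class StaticGraphAttention(nn.Module):
    """Illustrative static graph attention over the triples of one
    retrieved knowledge graph; returns a single graph vector."""
    def __init__(self, ent_dim, rel_dim):
        super().__init__()
        self.score = nn.Linear(2 * ent_dim + rel_dim, 1)

    def forward(self, heads, relations, tails):
        # heads/tails: (B, T, ent_dim), relations: (B, T, rel_dim)
        triples = torch.cat([heads, relations, tails], dim=-1)
        alpha = torch.softmax(self.score(triples).squeeze(-1), dim=-1)  # (B, T)
        # weighted sum of (head; tail) pairs as the graph representation
        pair = torch.cat([heads, tails], dim=-1)
        return torch.bmm(alpha.unsqueeze(1), pair).squeeze(1)

attn = StaticGraphAttention(ent_dim=64, rel_dim=32)
g = attn(torch.randn(2, 5, 64), torch.randn(2, 5, 32), torch.randn(2, 5, 64))
```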

Our paper “Commonsense Knowledge Aware Conversation Generation with Graph Attention” was selected as one of the IJCAI-ECAI 2018 Distinguished Papers from the 710 papers accepted out of 3,480 submissions.

Learning to ask questions aims to generate a question for a given input; deciding what to ask, and how, is a strong indicator of machine understanding. Asking good questions in large-scale open-domain conversational systems is important yet understudied. We observe that a good question is a natural composition of interrogatives, topic words, and ordinary words: interrogatives lexicalize the pattern of questioning, topic words carry the key information for topic transition in dialogue, and ordinary words play syntactic and grammatical roles in making a natural sentence. We devised two typed decoders (a soft typed decoder and a hard typed decoder) in which a distribution over the three word types is estimated and used to modulate the final generation distribution. Extensive experiments show that the typed decoders outperform state-of-the-art baselines and generate more meaningful questions; users are more likely to respond to questions generated by our decoders than to those generated by the baselines.
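
The soft typed decoder is essentially a mixture output layer. Here is a minimal sketch, assuming PyTorch (class name and dimensions are illustrative): a type distribution over {interrogative, topic, ordinary} mixes type-specific vocabulary distributions, while the hard variant would instead commit to a single type per step.

```python
import torch
import torch.nn as nn

class SoftTypedDecoderHead(nn.Module):
    """Illustrative soft typed decoder output layer: a distribution over
    word types mixes type-specific vocabulary distributions."""
    def __init__(self, hidden_dim, vocab_size, num_types=3):
        super().__init__()
        self.type_head = nn.Linear(hidden_dim, num_types)
        # one output projection per word type
        self.word_heads = nn.ModuleList(
            [nn.Linear(hidden_dim, vocab_size) for _ in range(num_types)])

    def forward(self, hidden):
        type_probs = torch.softmax(self.type_head(hidden), dim=-1)   # (B, 3)
        word_probs = torch.stack(
            [torch.softmax(h(hidden), dim=-1) for h in self.word_heads], dim=1)
        # final distribution: mixture of the type-specific distributions
        return torch.bmm(type_probs.unsqueeze(1), word_probs).squeeze(1)  # (B, V)

head = SoftTypedDecoderHead(hidden_dim=128, vocab_size=10000)
p = head(torch.randn(4, 128))   # each row sums to 1
```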

References
  • Augmenting End-to-End Dialogue Systems with Commonsense Knowledge. AAAI 2018.
  • Emotional Chatting Machine: Emotional Conversation Generation with Internal and External Memory. AAAI 2018.
  • Assigning Personality/Identity to a Chatting Machine for Coherent Conversation Generation. IJCAI-ECAI 2018.
  • Commonsense Knowledge Aware Conversation Generation with Graph Attention. IJCAI-ECAI 2018.
  • Learning to Ask Questions in Open-domain Conversational Systems with Typed Decoders. ACL 2018.
  • Generating Informative Responses with Controlled Sentence Function. ACL 2018.

Sentiment Classification

Sentiment classification, a major subtopic of sentiment analysis, aims to classify text into sentiment polarities such as positive or negative.

Aspect-level sentiment classification is a fine-grained sentiment classification task. For example, “Staffs are not that friendly, but the taste covers all.” is positive for the aspect food but negative for the aspect service. To explore the connection between an aspect and the opinion expressed about it, we proposed an attention-based Long Short-Term Memory network for aspect-level sentiment classification. The model attends to different parts of a sentence depending on which aspect is concerned. Experiments show that our models obtain superior performance over the baseline models.
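
The attention mechanism can be sketched as follows, assuming PyTorch (class and parameter names are illustrative simplifications): the aspect embedding is concatenated to each LSTM state to score positions, and the attended summary feeds the classifier, so different aspects focus on different parts of the sentence.

```python
import torch
import torch.nn as nn

class AspectAttentionLSTM(nn.Module):
    """Illustrative aspect-level classifier: an aspect embedding attends
    over the sentence's LSTM states before classification."""
    def __init__(self, vocab_size, emb_dim, hidden_dim, num_aspects, num_classes):
        super().__init__()
        self.words = nn.Embedding(vocab_size, emb_dim)
        self.aspects = nn.Embedding(num_aspects, emb_dim)
        self.lstm = nn.LSTM(emb_dim, hidden_dim, batch_first=True)
        self.attn = nn.Linear(hidden_dim + emb_dim, 1)
        self.cls = nn.Linear(hidden_dim, num_classes)

    def forward(self, sent_ids, aspect_ids):
        h, _ = self.lstm(self.words(sent_ids))            # (B, T, H)
        a = self.aspects(aspect_ids).unsqueeze(1).expand(-1, h.size(1), -1)
        alpha = torch.softmax(self.attn(torch.cat([h, a], -1)).squeeze(-1), -1)
        sent = torch.bmm(alpha.unsqueeze(1), h).squeeze(1)  # attended summary
        return self.cls(sent)

model = AspectAttentionLSTM(10000, 64, 128, num_aspects=8, num_classes=3)
logits = model(torch.randint(0, 10000, (2, 15)), torch.tensor([1, 3]))
```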

Linguistic resources (sentiment lexicons, negation words, and intensity words) play an important role in sentiment classification. We leverage these resources by imposing linguistically inspired regularizers on sequence LSTM models. The central idea is to model the linguistic role of sentiment, negation, and intensity words in sentence-level sentiment classification by regularizing the model's outputs at adjacent positions of a sentence. Results show that our models capture the linguistic roles of sentiment words, negation words, and intensity words in sentiment expression.
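
One way to picture such a regularizer is the simplified sketch below, in PyTorch (the function `adjacency_regularizer`, the symmetric-KL form, and the hinge margin are illustrative assumptions, not the paper's exact set of regularizers): predicted sentiment distributions at adjacent positions should stay close, except where a sentiment, negation, or intensity word intervenes and a shift is expected.

```python
import torch
import torch.nn.functional as F

def adjacency_regularizer(probs, special_mask, margin=0.5):
    """Illustrative linguistic regularizer. probs: (B, T, C) per-position
    sentiment distributions; special_mask: (B, T), 1.0 where position t
    holds a sentiment/negation/intensity word (a shift is allowed there)."""
    # symmetric KL between distributions at positions t and t-1
    kl = F.kl_div(probs[:, 1:].log(), probs[:, :-1], reduction="none").sum(-1) \
       + F.kl_div(probs[:, :-1].log(), probs[:, 1:], reduction="none").sum(-1)
    # hinge: penalize divergence above the margin only at ordinary positions
    penalty = torch.clamp(kl - margin, min=0.0) * (1.0 - special_mask[:, 1:])
    return penalty.mean()

probs = torch.softmax(torch.randn(2, 10, 5), dim=-1)
loss = adjacency_regularizer(probs, torch.zeros(2, 10))
```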

References
  • Encoding Syntactic Knowledge in Neural Networks for Sentiment Classification. ACM Trans. Inf. Syst. 35, 3, Article 26 (June 2017), 27 pages.
  • Linguistically Regularized LSTM for Sentiment Classification. ACL 2017.
  • Attention-based LSTM for Aspect-level Sentiment Classification. EMNLP 2016.
  • Learning Tag Embeddings and Tag-specific Composition Functions in Recursive Neural Network. ACL 2015.

Reinforcement Learning

Reinforcement learning (RL) is a learning paradigm in which an agent learns through interactions with its environment. It has many interesting applications in game playing, language processing, system control, and robotics.

As one of the most common NLP tasks, text classification depends heavily on the learned representation, yet existing structured representation models cannot identify task-relevant structures without explicit structure annotations. We proposed a reinforcement learning (RL) method to address this problem. Structure discovery is formulated as a sequential decision problem: the current decision (action) of structure discovery affects subsequent decisions, which can be naturally addressed by policy gradient methods. A delayed reward guides the learning of the policy for structure discovery; the reward is computed from the text classifier’s prediction based on the structured representation. We demonstrate two attempts to build structured representations: the Information Distilled LSTM (ID-LSTM), which selects only important, task-relevant words, and the Hierarchically Structured LSTM (HS-LSTM), which discovers phrase structures in a sentence. Results show that our method learns task-friendly representations by identifying important words or task-relevant structures without explicit structure annotations, and thus yields competitive performance.
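
The ID-LSTM policy can be sketched compactly, assuming PyTorch (the class `WordSelector` and the single-rollout REINFORCE update are illustrative simplifications): at each word the policy decides keep or delete, only kept words update the recurrent state, and the delayed reward from the classifier weights the accumulated action log-probabilities.

```python
import torch
import torch.nn as nn

class WordSelector(nn.Module):
    """Illustrative ID-LSTM-style policy: keep/delete each word; only kept
    words feed the representation later consumed by the classifier."""
    def __init__(self, emb_dim, hidden_dim):
        super().__init__()
        self.cell = nn.LSTMCell(emb_dim, hidden_dim)
        self.policy = nn.Linear(hidden_dim + emb_dim, 2)  # delete / keep

    def forward(self, word_vecs):
        B, T, _ = word_vecs.shape
        h = word_vecs.new_zeros(B, self.cell.hidden_size)
        c = torch.zeros_like(h)
        log_probs = []
        for t in range(T):
            x = word_vecs[:, t]
            dist = torch.distributions.Categorical(
                logits=self.policy(torch.cat([h, x], -1)))
            action = dist.sample()                        # 1 = keep the word
            log_probs.append(dist.log_prob(action))
            keep = action.float().unsqueeze(-1)
            h2, c2 = self.cell(x, (h, c))
            h, c = keep * h2 + (1 - keep) * h, keep * c2 + (1 - keep) * c
        return h, torch.stack(log_probs, dim=1)

# REINFORCE update with the delayed reward from the classifier:
# loss = -(reward.detach().unsqueeze(1) * log_probs).sum(1).mean()
```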

Relation classification, which aims to categorize the semantic relation between two entities in plain text, is an important problem in natural language processing. Many existing methods are based on distant supervision, where training data are automatically constructed by aligning a knowledge base with free text; this suffers from the noisy labeling problem. To address the issue, we propose a novel relation classification model consisting of two modules: an instance selector and a relation classifier. With an explicit instance selector, we first select high-quality sentences from a sentence bag and then predict a relation at the sentence level with the relation classifier. We address the challenge of joint training by casting instance selection as a reinforcement learning problem: the instance selector chooses sentences and obtains feedback (reward) on the quality of the selected sentences from the relation classifier. Experimental results show that our model deals effectively with noisy data and obtains better performance for relation classification at the sentence level.
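
A minimal sketch of the joint loop, assuming PyTorch (the function `select_and_reward` and the bag-averaging classifier are illustrative stand-ins for the paper's modules): sample a keep/drop decision per sentence in the bag, classify the relation from the kept sentences, and feed the classifier's log-likelihood back as the selector's reward.

```python
import torch
import torch.nn as nn

def select_and_reward(sent_vecs, label, selector, classifier):
    """Illustrative instance-selection step for one sentence bag.
    sent_vecs: (N, D) sentence encodings; label: gold relation id."""
    probs = torch.sigmoid(selector(sent_vecs)).squeeze(-1)   # (N,) keep probs
    dist = torch.distributions.Bernoulli(probs=probs)
    keep = dist.sample()                                     # (N,) 0/1 decisions
    if keep.sum() == 0:                                      # keep at least one
        keep[probs.argmax()] = 1.0
    bag_vec = (sent_vecs * keep.unsqueeze(-1)).sum(0) / keep.sum()
    log_lik = torch.log_softmax(classifier(bag_vec), dim=-1)[label]
    # REINFORCE: reward the selector with the classifier's log-likelihood
    selector_loss = -(log_lik.detach() * dist.log_prob(keep).sum())
    return selector_loss, -log_lik                           # RL loss, CE loss

selector, classifier = nn.Linear(128, 1), nn.Linear(128, 10)
rl_loss, ce_loss = select_and_reward(torch.randn(6, 128), torch.tensor(3),
                                     selector, classifier)
```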

Topic structure analysis plays a pivotal role in dialogue understanding. We propose a weakly supervised reinforcement learning (RL) method for topic segmentation and labeling in goal-oriented dialogues. First, a set of keywords is manually collected for each topic, and utterances are automatically labeled with these keywords. Second, a policy network is pretrained on the resulting noisy data. Third, the method iterates between refining the data and refining the policy by rewarding local topic continuity (the same topic lasts for a few turns within a session) and global topic structure (once all local topic assignments are made, the content similarity between adjacent segments should be small, while that within a segment should be large).

This logic creates a positive feedback loop: starting from noisy data, we correct noisy labels by optimizing the two rewards to obtain refined data; better data lead to better policies; and a better policy corrects more data. Results demonstrate that this weakly supervised method obtains substantial improvements over state-of-the-art baselines.
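
The two rewards can be sketched as follows, assuming PyTorch (the function `topic_rewards` and the cosine-based global term are illustrative simplifications of the paper's reward design): the local term favors topics that persist across turns, and the global term favors adjacent segments whose contents differ.

```python
import torch
import torch.nn.functional as F

def topic_rewards(utt_vecs, labels):
    """Illustrative reward for topic segmentation/labeling.
    utt_vecs: (T, D) utterance embeddings; labels: (T,) assigned topic ids."""
    local = (labels[1:] == labels[:-1]).float().mean()       # topic continuity
    # segment boundaries: positions where the topic label changes
    bounds = [0] + [t for t in range(1, len(labels)) if labels[t] != labels[t - 1]]
    centroids = torch.stack(
        [utt_vecs[s:e].mean(0) for s, e in zip(bounds, bounds[1:] + [len(labels)])])
    if len(centroids) > 1:                                   # adjacent-segment similarity
        across = F.cosine_similarity(centroids[1:], centroids[:-1], dim=-1).mean()
    else:
        across = torch.tensor(0.0)
    return local + (1.0 - across)   # high when adjacent segments differ

r = topic_rewards(torch.randn(12, 64),
                  torch.tensor([0, 0, 0, 1, 1, 1, 1, 2, 2, 2, 2, 2]))
```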

References
  • Learning Structured Representation for Text Classification via Reinforcement Learning. AAAI 2018.
  • Reinforcement Learning for Relation Classification from Noisy Data. AAAI 2018.
  • A Weakly Supervised Method for Topic Segmentation and Labeling in Goal-oriented Dialogues via Reinforcement Learning. IJCAI-ECAI 2018.

Ranking is a fundamental and widely studied problem in scenarios such as search, advertising, and recommendation. However, joint optimization for multi-scenario ranking systems, which aims to improve the overall performance of several ranking strategies in different scenarios, remains largely unexplored. We formulated multi-scenario ranking optimization as a fully cooperative, partially observable, multi-agent sequential decision problem, and proposed a novel model named Multi-Agent Recurrent Deterministic Policy Gradient (MA-RDPG), which has a communication component for passing messages, several private actors (agents) that take ranking actions, and a centralized critic that evaluates the overall performance of the co-working actors. Each scenario is treated as an agent (actor); agents collaborate by sharing a global action-value function (the critic) and by passing messages that encode historical information across scenarios. The model was evaluated online on a large e-commerce platform. Results show that the proposed model yields significant improvements over baselines in overall performance.
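
The overall wiring can be sketched as follows, assuming PyTorch (the classes `Actor` and `MARDPGSketch`, the GRU-based message update, and all dimensions are illustrative assumptions, not the deployed system): a recurrent communication module carries a message across scenarios, each private actor acts on its observation plus the message, and a single centralized critic scores the joint behavior.

```python
import torch
import torch.nn as nn

class Actor(nn.Module):
    """One scenario's private actor: maps its local observation plus the
    shared message to a deterministic ranking action (e.g. a weight vector)."""
    def __init__(self, obs_dim, msg_dim, act_dim):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(obs_dim + msg_dim, 64),
                                 nn.ReLU(), nn.Linear(64, act_dim), nn.Tanh())
    def forward(self, obs, msg):
        return self.net(torch.cat([obs, msg], -1))

class MARDPGSketch(nn.Module):
    """Illustrative MA-RDPG skeleton: actors + communication + one critic."""
    def __init__(self, obs_dim=16, msg_dim=8, act_dim=4, n_agents=2):
        super().__init__()
        self.actors = nn.ModuleList(Actor(obs_dim, msg_dim, act_dim)
                                    for _ in range(n_agents))
        self.comm = nn.GRUCell(obs_dim + act_dim, msg_dim)   # message update
        self.critic = nn.Linear(obs_dim + act_dim + msg_dim, 1)

    def step(self, agent_id, obs, msg):
        act = self.actors[agent_id](obs, msg)
        q = self.critic(torch.cat([obs, act, msg], -1))      # shared Q value
        msg = self.comm(torch.cat([obs, act], -1), msg)      # pass message on
        return act, q, msg

model = MARDPGSketch()
msg = torch.zeros(1, 8)
act, q, msg = model.step(0, torch.randn(1, 16), msg)   # scenario 1 acts
act, q, msg = model.step(1, torch.randn(1, 16), msg)   # scenario 2 sees its message
```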

Reference

  • Learning to Collaborate: Multi-Scenario Ranking via Multi-Agent Reinforcement Learning. WWW 2018.

Language Generation

Story ending generation aims at concluding a story and completing the plot given a story context. We argue that solving this task requires: 1) representing the context clues that contain the key information for planning a reasonable ending; and 2) using implicit commonsense knowledge to facilitate understanding of the story and to better predict what will happen next. To address these two issues, we devised a model equipped with an incremental encoding scheme to encode context clues effectively and a multi-source attention mechanism to use commonsense knowledge. The representation of the context clues is built by incrementally encoding the sentences in the story context one by one: when encoding the current sentence, the model attends not only to the words in the preceding sentence but also to the knowledge graphs retrieved from ConceptNet for each word. In this manner, commonsense knowledge is encoded into the model through graph representation techniques and can therefore be used to facilitate inferring coherent endings. By integrating context clues and commonsense knowledge, the model generates more reasonable endings than state-of-the-art baselines.
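
A minimal sketch of one incremental encoding step, assuming PyTorch (the class `IncrementalEncoder` and the attention wiring are illustrative simplifications of the paper's scheme): while encoding the current sentence, the model attends both to the previous sentence's hidden states and to per-word knowledge-graph vectors, and feeds the fused context into the recurrent encoder.

```python
import torch
import torch.nn as nn

class IncrementalEncoder(nn.Module):
    """Illustrative incremental encoding step with multi-source attention
    over the previous sentence's states and its knowledge-graph vectors."""
    def __init__(self, emb_dim, hidden_dim, graph_dim):
        super().__init__()
        self.lstm = nn.LSTM(emb_dim + hidden_dim + graph_dim, hidden_dim,
                            batch_first=True)
        self.state_attn = nn.Linear(hidden_dim, hidden_dim)
        self.graph_attn = nn.Linear(hidden_dim, graph_dim)

    def forward(self, cur_emb, prev_states, graph_vecs, prev_summary):
        # cur_emb: (B, T, E); prev_states: (B, S, H); graph_vecs: (B, S, G)
        a_s = torch.softmax(torch.bmm(self.state_attn(prev_summary).unsqueeze(1),
                                      prev_states.transpose(1, 2)), -1)
        ctx_s = torch.bmm(a_s, prev_states)                  # (B, 1, H)
        a_g = torch.softmax(torch.bmm(self.graph_attn(prev_summary).unsqueeze(1),
                                      graph_vecs.transpose(1, 2)), -1)
        ctx_g = torch.bmm(a_g, graph_vecs)                   # (B, 1, G)
        ctx = torch.cat([ctx_s, ctx_g], -1).expand(-1, cur_emb.size(1), -1)
        out, _ = self.lstm(torch.cat([cur_emb, ctx], -1))
        return out

enc = IncrementalEncoder(emb_dim=64, hidden_dim=128, graph_dim=32)
out = enc(torch.randn(2, 10, 64), torch.randn(2, 8, 128),
          torch.randn(2, 8, 32), torch.randn(2, 128))
```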

References

  • Commonsense Knowledge Aware Conversation Generation with Graph Attention. IJCAI-ECAI 2018.
  • Story Ending Generation with Incremental Encoding and Commonsense Knowledge. AAAI 2019.