IBM Research AI at ACL 2019

The 57th Annual Meeting of the Association for Computational Linguistics (ACL 2019) takes place July 28 – August 2 in Florence, Italy. There, IBM Research AI will present technical papers describing the latest results in our long-term push to help AI systems master language.

Many people interact daily with chatbots and built-in assistants to share and receive information, but most of these interactions are simple Q&A exchanges supported by natural language processing (NLP). Our long-term goal at IBM Research is to evolve beyond NLP and enable AI to better mirror human-to-human interaction, even reasoning about and forming its own responses to inquiries.

IBM Research is specifically focused on moving from NLP to Natural Language Understanding (NLU). NLU goes beyond today's basic NLP by outfitting AI systems with the ability to absorb, digest and form responses based on surrounding contextual information. At ACL, IBM scientists will unveil new research advancing this field of study, from semantic parsing to understanding incomplete text and improving the output of NLP systems.

IBM Research is proudly sponsoring ACL 2019 at the Platinum level. At our booth (#2), we will showcase several of the latest innovations from our global research team, including two from the accepted system demonstrations track: GLTR, a new ‘visual forensics’ method that allows humans and AI to work together to detect fake text, and HEIDL, a tool that helps create human-interpretable models for NLP.

Also in our booth is a demo of IBM Research Project Debater, the first AI system to debate humans on complex topics. This project marks an important step towards mastering language. At ACL, the researchers behind the technology will present four papers that further this goal.

Please stop by the booth to pick up some swag, and attend our presentations and workshops listed below. We hope to see you in Florence!

Papers at main conference

Are you Convinced? Choosing the more Convincing Evidence with a Siamese Network (#1224)
Martin Gleize, Eyal Shnarch, Leshem Choshen, Lena Dankin, Guy Moshkowich, Ranit Aharonov and Noam Slonim
Monday July 29, 13:50–14:10
(read blog)

Improved Language Modeling by Decoding the Past (#25)
Siddhartha Brahma
Monday July 29, 13:50–15:30

Extracting Multiple-Relations in One-Pass with Pre-Trained Transformers (#395)
Haoyu Wang, Ming Tan, Mo Yu, Shiyu Chang, Dakuo Wang, Kun Xu, Xiaoxiao Guo and Saloni Potdar
Monday July 29, 13:50–15:30
(read blog)

From Surrogacy to Adoption; From Bitcoin to Cryptocurrency: Debate Topic Expansion (#2447)
Roy Bar-Haim, Dalia Krieger, Orith Toledo-Ronen, Lilach Edelstein, Yonatan Bilu, Alon Halfon,
Yoav Katz, Amir Menczel, Ranit Aharonov and Noam Slonim
Monday July 29, 14:10–14:30
(read blog)

Argument Invention from First Principles (#590)
Yonatan Bilu, Ariel Gera, Daniel Hershcovich, Benjamin Sznajder, Dan Lahav, Guy Moshkowich, Anael Malet, Assaf Gavron and Noam Slonim
Monday July 29, 15:10–15:30
(read blog)

Unsupervised Neural Text Simplification (#1465)
Sai Surya, Abhijit Mishra, Anirban Laha, Parag Jain and Karthik Sankaranarayanan
Monday July 29, 16:00–17:40

Self-Supervised Learning for Contextualized Extractive Summarization (#2216)
Hong Wang, Xin Wang, Wenhan Xiong, Mo Yu, Xiaoxiao Guo, Shiyu Chang and William Yang Wang
Monday July 29, 16:00–17:40

TalkSumm: A Dataset and Scalable Annotation Method for Scientific Paper Summarization
Based on Conference Talks (#2464)
Guy Lev, Michal Shmueli-Scheuer, Jonathan Herzig, Achiya Jerbi and David Konopnicki
Monday July 29, 16:00–17:40
(read blog)

Cross-lingual Knowledge Graph Alignment via Graph Matching Neural Network (#659)
Kun Xu, Liwei Wang, Mo Yu, Yansong Feng, Yan Song, Zhiguo Wang and Dong Yu
Tuesday July 30, 10:30–12:10

A Large-Scale Corpus for Conversation Disentanglement (#1420)
Jonathan K. Kummerfeld, Sai R. Gouravajhala, Joseph J. Peper, Vignesh Athreya, Chulaka Gunasekara, Jatin Ganhotra, Siva Sankalp Patel, Lazaros C Polymenakos and Walter Lasecki
Tuesday July 30, 13:50–15:30
(read blog)

Rewarding Smatch: Transition-Based AMR Parsing with Reinforcement Learning (#1640)
Tahira Naseem, Abhishek Shah, Hui Wan, Radu Florian, Salim Roukos and Miguel Ballesteros
Wednesday July 31, 10:30–12:10

Unified Semantic Parsing with Weak Supervision (#2285)
Priyanka Agrawal, Ayushi Dalmia, Parag Jain, Abhishek Bansal, Ashish Mittal and Karthik Sankaranarayanan
Wednesday July 31, 10:30–12:10

Improving Question Answering over Incomplete KBs with Knowledge-Aware Reader (#2046)
Wenhan Xiong, Mo Yu, Shiyu Chang, Xiaoxiao Guo and William Yang Wang
Wednesday July 31, 11:43–11:56

Identification of Tasks, Datasets, Evaluation Metrics, and Numeric Scores for Scientific Leaderboards Construction (#1911)
Yufang Hou, Charles Jochim, Martin Gleize, Francesca Bonin and Debasis Ganguly
Wednesday July 31, 13:50–15:30

TWEETQA: A Social Media Focused Question Answering Dataset (#2196)
Wenhan Xiong, Jiawei Wu, Hong Wang, Vivek Kulkarni, Mo Yu, Shiyu Chang, Xiaoxiao Guo and William Yang Wang
Wednesday July 31, 14:30–14:50

Low-resource Deep Entity Resolution with Transfer and Active Learning (#1491)
Jungo Kasai, Kun Qian, Sairam Gurajada, Yunyao Li and Lucian Popa
Wednesday July 31, 16:40–17:00

Accepted workshop papers

Towards Effective Rebuttal: Listening Comprehension Using Corpus-Wide Claim Mining
Tamar Lavee, Matan Orbach, Lili Kotlerman, Yoav Kantor, Shai Gretz, Lena Dankin, Michal
Jacovi, Yonatan Bilu, Ranit Aharonov and Noam Slonim
Workshop: ArgMining 2019: 6th Workshop on Argument Mining
Thursday August 1, 14:00–15:30
(read blog)

Interactive White-Box Models through Collaborative Semantic Inference
Sebastian Gehrmann, Hendrik Strobelt, Robert Krüger, Hanspeter Pfister and Alexander Rush
Workshop: BlackboxNLP 2019
Thursday August 1, 14:50–16:00

Towards Universal Semantic Representation
Huaiyu Zhu, Yunyao Li and Laura Chiticariu
Workshop: The First International Workshop on Designing Meaning Representations (DMR)
Thursday August 1, 16:40–16:50

Content Customization for Micro Learning using Human Augmented AI Techniques
Ayush Shah, Tamer Abuelsaad, Jae-Wook Ahn, Prasenjit Dey, Ravi Kokku, Ruhi Sharma Mittal, Aditya Vempaty and Mourvi Sharma
Workshop: 14th Workshop on Innovative Use of NLP for Building Educational Applications
Friday August 2, 14:45–15:30

TALC accepted papers

Learning End-to-End Goal-Oriented Dialog with Maximal User Task Success and Minimal Human Agent Use
Janarthanan Rajendran, Jatin Ganhotra, Lazaros C Polymenakos
(read blog)

Complex Program Induction for Querying Knowledge Bases in the Absence of Gold Programs
Amrita Saha, Ghulam Ahmed Ansari, Abhishek Laddha, Karthik Sankaranarayanan, Soumen Chakrabarti

System demonstration papers

GLTR: Statistical Detection and Visualization of Generated Text
Sebastian Gehrmann, Hendrik Strobelt and Alexander Rush

HEIDL: Learning Linguistic Expressions with Deep Learning and Human-in-the-Loop
Prithviraj Sen, Yunyao Li, Eser Kandogan, Yiwei Yang and Walter Lasecki
(read blog)

Tutorial

Storytelling from Structured Data and Knowledge Graphs: An NLG Perspective
Abhijit Mishra, Anirban Laha, Karthik Sankaranarayanan, Parag Jain and Saravanan Krishnan
Sunday July 28, Afternoon

Research Scientist and Team Lead, Question Answering, IBM Research AI
