IBM Research AI Advancing, Trusting, and Scaling Learning at ICLR
May 2, 2019 | Written by: John R. Smith
The annual International Conference on Learning Representations (ICLR) takes place May 6–9, 2019, in New Orleans, LA. IBM researchers will present recent work on advancing, trusting, and scaling learning with applications in vision, speech, language, audio, interpretability, robustness, meta-learning, learning optimization, and reduced precision training. See details on our regular and workshop papers below and plan to attend our sessions. IBM Research is a Gold sponsor of ICLR and will be on-site at booth #211. Stop by to get your hands on our latest AI demos and digital experiences!
Accepted papers at ICLR
Big-Little Net: An Efficient Multi-Scale Feature Representation for Visual and Speech Recognition
Richard Chen, Quanfu Fan, Neil Mallinar, Tom Sercu, Rogerio Feris
Tue May 7th 11:00 AM — 01:00 PM @ Great Hall BC
GAN Dissection: Visualizing and Understanding Generative Adversarial Networks
David Bau, Jun-Yan Zhu, Hendrik Strobelt, Bolei Zhou, Joshua B. Tenenbaum, William T. Freeman, Antonio Torralba
Tue May 7th 11:00 AM — 01:00 PM @ Great Hall BC
Neural Network Gradient-Based Learning of Black-Box Function Interfaces
Ofer Lavi, Alon Jacovi, Guy Hadash, George Kour, Einat Kermany, Boaz Carmeli
Tue May 7th 11:00 AM — 01:00 PM @ Great Hall BC
Defensive Quantization: When Efficiency Meets Robustness
Chuang Gan
Tue May 7th 04:30 — 06:30 PM @ Great Hall BC
L2-Nonexpansive Neural Networks
Haifeng Qian, Mark Wegman
Tue May 7th 04:30 — 06:30 PM @ Great Hall BC
signSGD via Zeroth-Order Oracle
Sijia Liu, Pin-Yu Chen, Xiangyi Chen, Mingyi Hong
Tue May 7th 04:30 — 06:30 PM @ Great Hall BC
Structured Adversarial Attack: Towards General Implementation and Better Interpretability
Kaidi Xu, Sijia Liu, Pu Zhao, Pin-Yu Chen, Huan Zhang, Deniz Erdogmus, Yanzhi Wang, Xue Lin, Quanfu Fan
Tue May 7th 04:30 — 06:30 PM @ Great Hall BC
Learning to Learn without Forgetting by Maximizing Transfer and Minimizing Interference
Matt Riemer, Ignacio Cases, Robert Ajemian, Miao Liu, Irina Rish, Yuhai Tu, Gerald Tesauro
Wed May 8th 11:00 AM — 01:00 PM @ Great Hall BC
Learning Embeddings into Entropic Wasserstein Spaces
Charlie Frogner, Justin Solomon, Farzaneh Mirzazadeh
Wed May 8th 04:30 — 06:30 PM @ Great Hall BC
On the Convergence of a Class of Adam-Type Algorithms for Non-Convex Optimization
Xiangyi Chen, Sijia Liu, Ruoyu Sun, Mingyi Hong
Wed May 8th 04:30 — 06:30 PM @ Great Hall BC
Query-Efficient Hard-Label Black-Box Attack: An Optimization-Based Approach
Minhao Cheng, Thong Le, Pin-Yu Chen, Jinfeng Yi, Huan Zhang, Cho-Jui Hsieh
Wed May 8th 04:30 — 06:30 PM @ Great Hall BC
The Neuro-Symbolic Concept Learner: Interpreting Scenes, Words, and Sentences from Natural Supervision
Chuang Gan
Thu May 9th 10:00 — 10:15 AM @ Great Hall AD
Characterizing Audio Adversarial Examples Using Temporal Dependency
Zhuolin Yang, Bo Li, Pin-Yu Chen, Dawn Song
Thu May 9th 11:00 AM — 01:00 PM @ Great Hall BC
Accumulation Bit-Width Scaling for Ultra-Low Precision Training of Deep Networks
Charbel Sakr, Naigang Wang, Chia-Yu Chen, Jungwook Choi, Ankur Agrawal, Naresh Shanbhag, Kailash Gopalakrishnan
Thu May 9th 04:30 — 06:30 PM @ Great Hall BC
Information Theoretic Lower Bounds on Negative Log Likelihood
Luis Lastras
Thu May 9th 04:30 — 06:30 PM @ Great Hall BC
Wasserstein Barycenter Model Ensembling
Pierre Dognin, Igor Melnyk, Youssef Mroueh, Jarret Ross, Cicero Nogueira Dos Santos, Tom Sercu
Thu May 9th 04:30 — 06:30 PM @ Great Hall BC
Accepted papers at ICLR workshops
Exploring the Hyperparameter Landscape of Adversarial Robustness
Evelyn Duesterwald, Anupama Murthi, Ganesh Venkataraman, Mathieu Sinn, Deepak Vijaykeerthy
Safe Machine Learning: Specification, Robustness, and Assurance (SafeML) Workshop
Mon May 6th 09:45 AM — 06:30 PM @ Room R06
Evolutionary Search for Adversarially Robust Neural Networks
Mathieu Sinn, Martin Wistuba, Beat Buesser, Maria-Irina Nicolae, Minh Tran
Safe Machine Learning: Specification, Robustness, and Assurance (SafeML) Workshop
Mon May 6th 09:45 AM — 06:30 PM @ Room R06
Fairness GAN: Generating Datasets with Fairness Properties using a Generative Adversarial Network
Prasanna Sattigeri, Samuel Hoffman, Vijil Chenthamarakshan, Kush Varshney
Safe Machine Learning: Specification, Robustness, and Assurance (SafeML) Workshop
Mon May 6th 09:45 AM — 06:30 PM @ Room R06
Improved Adversarial Image Captioning
Pierre Dognin, Igor Melnyk, Youssef Mroueh, Jerret Ross, Tom Sercu
Deep Generative Models for Highly Structured Data Workshop
Mon May 6th 03:15 — 06:30 PM @ Room R02
Interactive Visual Exploration of Latent Space (IVELS) for Peptides Auto-Encoder Model Selection
Tom Sercu, Sebastian Gehrmann, Hendrik Strobelt, Payel Das, Inkit Padhi, Cicero Dos Santos, Kahini Wadhawan, Vijil Chenthamarakshan
Deep Generative Models for Highly Structured Data Workshop
Mon May 6th 03:15 — 06:30 PM @ Room R02
SAGE: Scalable Attributed Graph Embeddings for Graph Classification
Lingfei Wu, Zhen Zhang, Fangli Xu, Liang Zhao, Arye Nehorai
Representation Learning on Graphs and Manifolds Workshop
Mon May 6th 09:45 AM — 06:30 PM @ Room R07
Online Semi-Supervised Learning with Bandit Feedback
Mikhail Yurochkin, Sohini Upadhyay, Djallel Bouneffouf, Mayank Agarwal, Yasaman Khazaeni
Learning from Limited Labeled Data (LLD) Workshop
Mon May 6th 09:45 AM — 06:30 PM @ Room R01
High-frequency crowd insights for public safety and congestion control
Karthik Nandakumar, Sebastien Blandin, Laura Wynter
AI for Social Good Workshop
Mon May 6th 09:45 AM — 06:30 PM @ Room R05
John R. Smith is an IBM Fellow, IBM Research AI and Future of Computing, IBM Research.