Intel AI Research at ICLR
The International Conference on Learning Representations (ICLR) is dedicated to the advancement of deep learning. ICLR is globally renowned for presenting and publishing cutting-edge research on all aspects of deep learning used in artificial intelligence, statistics, and data science, as well as in application areas such as machine vision, computational biology, speech recognition, text understanding, gaming, and robotics.
Save the Date
ICLR 2019 takes place May 6–9, 2019 in New Orleans, LA. Intel is also a proud sponsor of the AI for Social Good workshop on Monday, May 6th and the LatinX in AI and Black in AI Joint Workshop on Tuesday, May 7th.
Agenda
Workshops - Day 1
Monday May 6, 2019
Workshop | Time | Location | Authors | Paper |
---|---|---|---|---|
Structure & Priors in Reinforcement Learning (SPiRL) | 9:00am - 6:00pm | New Orleans, LA | Varun Kumar, Hanlin Tang – Intel, Arjun Bansal – Intel | Graph-DQN: Fast generalization to novel objects using prior relational knowledge; presented during Poster Session #1 (10:30 – 11:00 AM). |
Task-Agnostic Reinforcement Learning (TARL) Workshop | 9:45am - 6:00pm | New Orleans, LA | Zach Dwiel, Madhavun Vasu, Mariano Phielipp, Arjun Bansal | Hierarchical Policy Learning is Sensitive to Goal Space Design. |
Limited Labeled Data (LLD) Workshop | 9:45am - 6:30pm | New Orleans, LA | Subarna Tripathi, Anahita Bhiwandiwalla, Alexei Bastidas, Hanlin Tang – Intel | Heuristics for Image Generation from Scene Graphs. |
Limited Labeled Data (LLD) Workshop | 9:45am - 6:30pm | New Orleans, LA | Tyler Lee, Ting Gong, Suchismita Padhy, Andrew Rouditchenko, Anthony Ndirango – Intel | Label-Efficient Audio Classification Through Multitask Learning and Self-Supervision. |
Poster Sessions - Day 2
Tuesday May 7, 2019
Title | Time | Location | Authors | Abstract |
---|---|---|---|---|
SPIGAN: Privileged Adversarial Learning from Simulation | 4:30pm - 6:30pm | New Orleans, LA | German Ros (Research Scientist) – Intel Labs, Kuan-Hui Lee – Toyota Research Institute, Jie Li – Toyota Research Institute, Adrien Gaidon (Machine Learning Lead) – Toyota Research Institute | The authors propose a new unsupervised domain adaptation algorithm, called SPIGAN, relying on Simulator Privileged Information (PI) and Generative Adversarial Networks (GAN). |
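For orientation, here is a minimal, hypothetical PyTorch sketch of the general recipe SPIGAN builds on: adversarial sim-to-real adaptation with a privileged-information regularizer. A generator adapts labeled synthetic images, a discriminator compares them against unlabeled real images, and auxiliary task and privileged-information networks are trained on the adapted images. All network sizes, loss weights, and tensor shapes below are illustrative placeholders, not the authors' implementation.

```python
# Hypothetical sketch of adversarial sim-to-real adaptation with a
# privileged-information regularizer (SPIGAN-style). Sizes and losses
# are illustrative only, not the authors' implementation.
import torch
import torch.nn as nn

class ConvNet(nn.Module):
    """Tiny stand-in for the generator / discriminator / task / privileged nets."""
    def __init__(self, in_ch, out_ch, final=None):
        super().__init__()
        layers = [nn.Conv2d(in_ch, 16, 3, padding=1), nn.ReLU(),
                  nn.Conv2d(16, out_ch, 3, padding=1)]
        if final is not None:
            layers.append(final)
        self.net = nn.Sequential(*layers)

    def forward(self, x):
        return self.net(x)

G = ConvNet(3, 3, nn.Tanh())   # maps synthetic images to "real-looking" images
D = ConvNet(3, 1)              # patch discriminator: real vs. adapted
T = ConvNet(3, 5)              # task net, e.g. 5-class segmentation logits
P = ConvNet(3, 1)              # predicts privileged info (e.g. depth) from adapted images

bce, ce, l1 = nn.BCEWithLogitsLoss(), nn.CrossEntropyLoss(), nn.L1Loss()
opt_g = torch.optim.Adam(list(G.parameters()) + list(T.parameters()) + list(P.parameters()), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)

# Dummy batch: labeled synthetic images with privileged depth, unlabeled real images.
x_syn = torch.randn(4, 3, 32, 32)
y_syn = torch.randint(0, 5, (4, 32, 32))
z_syn = torch.rand(4, 1, 32, 32)          # simulator-privileged depth maps
x_real = torch.randn(4, 3, 32, 32)

# --- Discriminator step: real images vs. adapted synthetic images ---
fake = G(x_syn).detach()
d_real, d_fake = D(x_real), D(fake)
d_loss = bce(d_real, torch.ones_like(d_real)) + bce(d_fake, torch.zeros_like(d_fake))
opt_d.zero_grad(); d_loss.backward(); opt_d.step()

# --- Generator / task / privileged step ---
fake = G(x_syn)
adv = bce(D(fake), torch.ones_like(D(fake)))   # fool the discriminator
task = ce(T(fake), y_syn)                      # task supervision from simulator labels
priv = l1(P(fake), z_syn)                      # privileged-information regularizer
g_loss = adv + task + 0.1 * priv
opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```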
Poster Sessions - Day 3
Wednesday May 8, 2019
Title | Time | Location | Authors | Abstract |
---|---|---|---|---|
Deep Layers as Stochastic Solvers | 4:30pm - 6:30pm | New Orleans, LA | Adel Bibi – King Abdullah University of Science and Technology, Vladlen Koltun – Intel Labs, René Ranftl – Intel Labs, Bernard Ghanem – King Abdullah University of Science and Technology | The authors provide a novel perspective on the forward pass through a block of layers in a deep network. |
Sparse Dictionary Learning by Dynamical Neural Networks | 4:30pm - 6:30pm | New Orleans, LA | Tsung-Han Lin – Intel Labs, Peter Tang – Intel Labs | By combining ideas of top-down feedback and contrastive learning, a dynamical network for solving the l1-minimizing dictionary learning problem can be constructed, and the true gradients for learning are provably computable by individual neurons. |
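As background for the objective this paper targets, the sketch below shows a conventional baseline for l1-minimizing dictionary learning: ISTA sparse coding alternated with a gradient step on the dictionary. It is not the authors' dynamical network; the dimensions, step sizes, and the `soft_threshold`/`ista` helpers are illustrative assumptions.

```python
# Conventional baseline for the underlying problem:
#   min_{D,s}  0.5 * ||x - D s||^2 + lam * ||s||_1
# ISTA sparse coding alternated with dictionary gradient steps.
# NOT the paper's dynamical/spiking network -- a sketch of the objective only.
import numpy as np

rng = np.random.default_rng(0)
n, m, k = 64, 128, 200              # signal dim, dictionary atoms, training samples
X = rng.standard_normal((n, k))     # toy data (columns are signals)
D = rng.standard_normal((n, m))
D /= np.linalg.norm(D, axis=0)      # unit-norm atoms
lam = 0.1

def soft_threshold(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ista(x, D, lam, iters=100):
    """Sparse-code a single signal x against dictionary D."""
    L = np.linalg.norm(D, 2) ** 2   # Lipschitz constant of the smooth part
    s = np.zeros(D.shape[1])
    for _ in range(iters):
        grad = D.T @ (D @ s - x)
        s = soft_threshold(s - grad / L, lam / L)
    return s

for epoch in range(10):
    S = np.stack([ista(X[:, i], D, lam) for i in range(k)], axis=1)
    # Gradient step on the dictionary, then re-normalize the atoms.
    D -= 0.01 * (D @ S - X) @ S.T / k
    D /= np.maximum(np.linalg.norm(D, axis=0), 1e-8)
    obj = 0.5 * np.sum((X - D @ S) ** 2) / k + lam * np.abs(S).sum() / k
    print(f"epoch {epoch}: objective per sample = {obj:.3f}")
```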
Poster Sessions - Day 4
Thursday May 9, 2019
Title | Time | Location | Authors | Abstract |
---|---|---|---|---|
Trellis Networks for Sequence Modeling | 11:00am - 1:00pm | New Orleans, LA | Vladlen Koltun – Intel Labs, Shaojie Bai – Carnegie Mellon University, Zico Kolter – Carnegie Mellon University and Bosch Center for AI | The authors introduce trellis networks, a new architecture for sequence modeling that outperforms current methods on a variety of benchmarks. |
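The following is a deliberately simplified, hypothetical sketch of the two ingredients that characterize a trellis network: the same convolutional weights are reused at every layer, and the raw input sequence is re-injected at every layer. The published architecture uses an LSTM-style gated activation and careful history handling; the `TinyTrellis` module below substitutes a plain causal convolution with ReLU purely for illustration.

```python
# Simplified, hypothetical trellis-network sketch: depth-wise weight tying
# plus input injection at every layer. Gated activations from the paper are
# replaced by ReLU to keep the sketch short.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyTrellis(nn.Module):
    def __init__(self, in_dim, hidden, depth):
        super().__init__()
        self.depth = depth
        self.hidden = hidden
        # One shared causal convolution over the concatenated [input, hidden] stream.
        self.conv = nn.Conv1d(in_dim + hidden, hidden, kernel_size=2)

    def forward(self, x):                       # x: (batch, in_dim, time)
        b, _, t = x.shape
        z = torch.zeros(b, self.hidden, t, device=x.device)
        for _ in range(self.depth):             # the SAME self.conv at every layer
            inp = torch.cat([x, z], dim=1)      # input injection at every layer
            inp = F.pad(inp, (1, 0))            # left-pad time axis for causality
            z = torch.relu(self.conv(inp))
        return z                                # (batch, hidden, time)

seq = torch.randn(8, 16, 50)                    # 8 sequences, 16 features, 50 steps
model = TinyTrellis(in_dim=16, hidden=32, depth=6)
print(model(seq).shape)                         # torch.Size([8, 32, 50])
```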
More Ways to Engage
- Reinforcement Learning Coach — An open source research framework for training and evaluating reinforcement learning (RL) agents by harnessing the power of multi-core CPU processing to achieve state-of-the-art results.
- Aspect Based Sentiment Analysis – Deep learning, powerful computing resources, and greater access to useful datasets have driven many advances in Natural Language Processing (NLP) in recent years. At Intel AI Research, our team of NLP researchers and developers released NLP Architect, an open source library built entirely on deep learning topologies, to share with the community and create a platform for future research and collaboration.
- Generalization to Novel Objects Using Prior Knowledge – Graph-DQN, a new model, combines information from knowledge graphs and visual scenes, allowing the agent to learn, reason about, and apply agent-object and object-object relations (see the illustrative sketch after this list).
- AI Lab at Intel – At Intel, we believe there is a virtuous cycle between research, algorithms and compute that’s leading to the tremendous growth we are seeing in AI.
- The Intel AI Lab team will be presenting a talk on NLP Architect at the ICLR 2019 Expo on Tuesday, May 7th at 1:00 PM.
- Intel® AI Research Luncheon at ICLR 2019 on Wednesday, May 8th from 12:00 PM – 2:00 PM CT at Tomas Bistro.
- Follow us @IntelAIResearch for the latest happenings at @iclr2019 in New Orleans!
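As referenced in the Graph-DQN item above, here is a generic, hypothetical sketch of mixing prior relational knowledge into Q-value prediction: object features are combined along the edges of a small knowledge graph via one round of message passing before a Q-value head scores the actions. The `RelationalQNet` module and the toy adjacency matrix are illustrative assumptions, not the architecture from the paper.

```python
# Generic illustration of combining a prior knowledge graph with Q-learning:
# object embeddings exchange messages along graph edges, then a Q-value head
# scores the actions. Not the Graph-DQN architecture itself.
import torch
import torch.nn as nn

class RelationalQNet(nn.Module):
    def __init__(self, obj_dim, hidden, n_actions):
        super().__init__()
        self.embed = nn.Linear(obj_dim, hidden)
        self.gcn = nn.Linear(hidden, hidden)        # shared message transform
        self.q_head = nn.Linear(hidden, n_actions)  # Q-values from pooled objects

    def forward(self, obj_feats, adj):
        # obj_feats: (batch, n_objects, obj_dim); adj: (n_objects, n_objects)
        h = torch.relu(self.embed(obj_feats))
        # Normalize so each object averages its neighbours' messages.
        norm_adj = adj / adj.sum(dim=1, keepdim=True).clamp(min=1)
        h = torch.relu(self.gcn(norm_adj @ h))      # one round of message passing
        pooled = h.mean(dim=1)                      # permutation-invariant readout
        return self.q_head(pooled)                  # (batch, n_actions)

# Toy scene: 4 observed objects, 3 possible actions, and a prior knowledge
# graph saying which objects are related (e.g. key -- door).
adj = torch.tensor([[1., 1., 0., 0.],
                    [1., 1., 0., 0.],
                    [0., 0., 1., 1.],
                    [0., 0., 1., 1.]])
obj_feats = torch.randn(2, 4, 8)                    # batch of 2 observations
q_values = RelationalQNet(obj_dim=8, hidden=32, n_actions=3)(obj_feats, adj)
print(q_values.shape)                               # torch.Size([2, 3])
```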
Event Speakers
Hanlin Tang
Senior Director of the AI Lab, Artificial Intelligence Platforms Group
Arjun Bansal
Vice President and General Manager, Artificial Intelligence Software and Lab at Intel
Mariano Phielipp
Senior Deep Learning Data Scientist, Intel AI Lab
Subarna Tripathi
Deep Learning Data Scientist, Artificial Intelligence Product Group
Anahita Bhiwandiwalla
Deep Learning Researcher and Engineer, Intel AI Lab
Alexei Bastidas
Deep Learning Data Scientist, Intel AI Lab
Tyler Lee
Deep Learning Data Scientist, Intel AI Lab
Ting Gong
Deep Learning Data Scientist, Intel AI Lab
Suchismita Padhy
Deep Learning Data Scientist, Intel AI Lab
Vladlen Koltun
Director, Intelligent Systems Lab
René Ranftl
Research Scientist, Intelligent Systems Lab
Ping Tak Peter Tang
Senior Principal Engineer, Core and Visual Computing Group
Research Publications
Deep Layers as Stochastic Solvers
We provide a novel perspective on the forward pass through a block of layers in a deep network. In particular,...
Trellis Networks for Sequence Modeling
We present trellis networks, a new architecture for sequence modeling. On the one hand, a trellis network is a temporal...
Sparse Dictionary Learning by Dynamical Neural Networks
A dynamical neural network consists of a set of interconnected neurons that interact over time continuously. It can exhibit computational...
SPIGAN: Privileged Adversarial Learning from Simulation
Deep Learning for Computer Vision depends mainly on the source of supervision. Photo-realistic simulators can generate large-scale automatically labeled synthetic...
Graph-DQN: Fast Generalization To Novel Objects Using Prior...
Humans have a remarkable ability to both generalize known actions to novel objects, and reason about novel objects once their...
Heuristics For Image Generation From Scene Graphs
Generating realistic images from scene graphs requires neural networks to be able to reason about object relationships and compositionality. Learning...
Label Efficient Audio Classification Through Multitask Learning And...
While deep learning has been incredibly successful in modeling tasks with large, carefully curated labeled datasets, its application to problems...
Hierarchical Policy Learning Is Sensitive To Goal Space...
Hierarchy in reinforcement learning agents allows for control at multiple time scales yielding improved sample efficiency, the ability to deal...