Boosting Dilated Convolutional Networks with Mixed Tensor Decompositions

The driving force behind deep networks is their ability to compactly represent rich classes of functions. The primary notion for formally reasoning about this phenomenon is expressive efficiency, which refers to a situation where one network must grow unfeasibly large in order to realize (or approximate) functions of another. To date, expressive efficiency analyses focused on the architectural feature of depth, showing that deep networks are representationally superior to shallow ones. In this paper we study the expressive efficiency brought forth by connectivity, motivated by the observation that modern networks interconnect their layers in elaborate ways...
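To make the dilated convolutions referenced in the title concrete (this is a generic illustration, not the paper's mixed-decomposition construction), here is a minimal sketch of a 1D dilated convolution in NumPy; the function name and the choice of a size-2 kernel are illustrative assumptions:

```python
import numpy as np

def dilated_conv1d(x, w, dilation):
    """1D dilated convolution, valid padding:
    output[t] = sum_j w[j] * x[t + j * dilation]."""
    k = len(w)
    span = (k - 1) * dilation + 1  # receptive field of a single layer
    out_len = len(x) - span + 1
    return np.array([sum(w[j] * x[t + j * dilation] for j in range(k))
                     for t in range(out_len)])

# Stacking layers with dilations 1, 2, 4, ... grows the receptive field
# exponentially with depth while each kernel stays small -- the structural
# property that dilated convolutional networks (e.g. WaveNet-style models)
# exploit.
x = np.arange(16, dtype=float)
w = np.array([1.0, 1.0])          # size-2 kernel (illustrative)
h = dilated_conv1d(x, w, dilation=1)
h = dilated_conv1d(h, w, dilation=2)
h = dilated_conv1d(h, w, dilation=4)
# Each final output now aggregates 8 consecutive inputs.
```

With kernel size 2 and dilations 1, 2, 4, three layers cover a window of 8 inputs per output, whereas undilated convolutions of the same size and depth would cover only 4.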


Nadav Cohen, Ronen Tamari, Amnon Shashua

