Deep Mixture of Experts via Shallow Embedding

Xin Wang, Fisher Yu, Lisa Dunlap, Yi-An Ma, Ruth Wang, Azalia Mirhoseini, Trevor Darrell, Joseph E. Gonzalez
UAI 2019

Abstract

Larger networks generally have greater representational power, at the cost of increased computational complexity. Sparsifying such networks has been an active area of research, but has generally been limited to static regularization or dynamic approaches using reinforcement learning. We explore a mixture-of-experts (MoE) approach to deep dynamic routing, which activates certain experts in the network on a per-example basis. Our novel DeepMoE architecture increases the representational power of standard convolutional networks by adaptively sparsifying and recalibrating channel-wise features in each convolutional layer. We employ a multi-headed sparse gating network to determine the selection and scaling of channels for each input, leveraging exponential combinations of experts within a single convolutional network. Our proposed architecture is evaluated on four benchmark datasets and tasks, and we show that DeepMoEs are able to achieve higher accuracy with lower computation than standard convolutional networks.
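
To make the mechanism concrete, below is a minimal PyTorch sketch of the per-example select-and-scale gating described in the abstract. This is an illustrative reading of the architecture, not the authors' released code: the class names (ShallowEmbedding, GatedConv), the layer sizes, and the use of plain ReLU gates are all assumptions made for this sketch.

# Sketch of a DeepMoE-style gated convolution (illustrative, assumed reading
# of the paper; names and hyperparameters here are not the authors' code).
import torch
import torch.nn as nn
import torch.nn.functional as F

class ShallowEmbedding(nn.Module):
    """Maps the raw input to a compact embedding shared by all gating heads."""
    def __init__(self, in_channels, embed_dim):
        super().__init__()
        self.conv = nn.Conv2d(in_channels, embed_dim, kernel_size=3, stride=2, padding=1)

    def forward(self, x):
        h = F.relu(self.conv(x))
        return h.mean(dim=(2, 3))  # global average pool -> (batch, embed_dim)

class GatedConv(nn.Module):
    """A convolution whose output channels ('experts') are sparsely
    selected and rescaled per example by one gating head."""
    def __init__(self, in_channels, out_channels, embed_dim):
        super().__init__()
        self.conv = nn.Conv2d(in_channels, out_channels, kernel_size=3, padding=1)
        self.gate = nn.Linear(embed_dim, out_channels)  # one gating head

    def forward(self, x, embedding):
        g = F.relu(self.gate(embedding))           # ReLU yields exact zeros -> sparse selection
        return self.conv(x) * g[:, :, None, None]  # channel-wise select-and-scale

# Usage: one shallow embedding per input, one gating head per convolutional layer.
x = torch.randn(8, 3, 32, 32)
embed = ShallowEmbedding(3, embed_dim=64)
layer1 = GatedConv(3, 32, embed_dim=64)
layer2 = GatedConv(32, 64, embed_dim=64)
e = embed(x)
out = layer2(layer1(x, e), e)
print(out.shape)  # torch.Size([8, 64, 32, 32])

In the full model, a sparsity-inducing penalty on the gate values would push many channels to exactly zero for each example; that regularization term is omitted here for brevity.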

Paper

Citation

@inproceedings{wang2019deep,
  title={Deep mixture of experts via shallow embedding},
  author={Wang, Xin and Yu, Fisher and Dunlap, Lisa and Ma, Yi-An and Wang, Ruth and Mirhoseini, Azalia and Darrell, Trevor and Gonzalez, Joseph E},
  booktitle={Uncertainty in Artificial Intelligence},
  year={2019},
}

Related


SkipNet: Learning Dynamic Routing in Convolutional Networks

ECCV 2018. We introduce SkipNet, a modified residual network that uses a gating network to selectively skip convolutional blocks based on the activations of the previous layer.


IDK Cascades: Fast Deep Learning by Learning not to Overthink

UAI 2018. We introduce the “I Don’t Know” (IDK) prediction cascades framework to accelerate inference without a loss in prediction accuracy.


TAFE-Net: Task-Aware Feature Embeddings for Low Shot Learning

CVPR 2019. We propose Task-Aware Feature Embedding Networks (TAFE-Nets), which learn to adapt the image representation to a new task in a meta-learning fashion.


Deep Layer Aggregation

CVPR 2018 (Oral). We augment standard architectures with deeper aggregation to better fuse information across layers.


Dilated Residual Networks

CVPR 2017. We show that dilated residual networks (DRNs) outperform their non-dilated counterparts in image classification without increasing the model’s depth or complexity.