Dense Prediction with Attentive Feature Aggregation

Yung-Hsu Yang, Thomas E. Huang, Samuel Rota Bulò, Peter Kontschieder, Fisher Yu
arXiv 2021


Abstract

Aggregating information from features across different layers is an essential operation for dense prediction models. Despite its limited expressiveness, feature concatenation dominates the choice of aggregation operations. In this paper, we introduce Attentive Feature Aggregation (AFA) to fuse different network layers with more expressive non-linear operations. AFA exploits both spatial and channel attention to compute a weighted average of the layer activations. Inspired by neural volume rendering, we extend AFA with Scale-Space Rendering (SSR) to perform late fusion of multi-scale predictions. AFA is applicable to a wide range of existing network designs. Our experiments show consistent and significant improvements on challenging semantic segmentation benchmarks, including Cityscapes, BDD100K, and Mapillary Vistas, at negligible computational and parameter overhead. In particular, AFA improves the performance of the Deep Layer Aggregation (DLA) model by nearly 6% mIoU on Cityscapes. Our experimental analyses show that AFA learns to progressively refine segmentation maps and to improve boundary details, leading to new state-of-the-art results on the BSDS500 and NYUDv2 boundary detection benchmarks.
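To make the fusion idea concrete, below is a minimal PyTorch-style sketch of attention-weighted fusion of two same-shape feature maps, illustrating how spatial and channel attention can produce the weights for an average instead of concatenating features. The module name, layer choices, and tensor shapes are illustrative assumptions for this sketch, not the released AFA implementation (see the code link below for that).

```python
# Illustrative sketch only: spatial and channel attention produce a gate that
# blends a lower-level and a higher-level feature map via a weighted average,
# rather than concatenating them.
import torch
import torch.nn as nn


class AttentiveFusion(nn.Module):
    def __init__(self, channels):
        super().__init__()
        # Channel attention: global average pooling followed by a small bottleneck MLP.
        self.channel_att = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // 4, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // 4, channels, kernel_size=1),
            nn.Sigmoid(),
        )
        # Spatial attention: a 1x1 conv over the concatenated inputs yields a per-pixel gate.
        self.spatial_att = nn.Sequential(
            nn.Conv2d(2 * channels, 1, kernel_size=1),
            nn.Sigmoid(),
        )

    def forward(self, low, high):
        # "low" and "high" are activations from different layers, assumed to be
        # resized to the same spatial resolution beforehand.
        gate = self.channel_att(high) * self.spatial_att(torch.cat([low, high], dim=1))
        # Weighted average of the two activations in place of concatenation.
        return gate * low + (1.0 - gate) * high


x_low = torch.randn(1, 64, 32, 32)
x_high = torch.randn(1, 64, 32, 32)
fused = AttentiveFusion(64)(x_low, x_high)  # -> shape [1, 64, 32, 32]
```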

Results

We show semantic segmentation and boundary detection results on videos from Cityscapes, BDD100K, and NYUDv2. Predictions are made on each frame independently, without using temporal context.

Cityscapes

Examples of running AFA-DLA-X-102 on Cityscapes for semantic segmentation.

BDD100K

Examples of running AFA-DLA-X-169 on BDD100K for semantic segmentation.

NYUDv2

Examples of running AFA-DLA-34 on NYUDv2 for boundary detection.

Paper

arXiv:2111.00770

Code

github.com/SysCV/dla-afa

Citation

@misc{yang2021dense,
      title={Dense Prediction with Attentive Feature Aggregation}, 
      author={Yung-Hsu Yang and Thomas E. Huang and Samuel Rota Bulò and Peter Kontschieder and Fisher Yu},
      year={2021},
      eprint={2111.00770},
      archivePrefix={arXiv},
      primaryClass={cs.CV}
}

Related


Prototypical Cross-Attention Networks for Multiple Object Tracking and Segmentation

NeurIPS 2021 Spotlight We propose Prototypical Cross-Attention Network (PCAN), capable of leveraging rich spatio-temporal information for online multiple object tracking and segmentation.


BDD100K: A Diverse Driving Dataset for Heterogeneous Multitask Learning

CVPR 2020 Oral The largest driving video dataset for heterogeneous multitask learning.


Exploring Cross-Image Pixel Contrast for Semantic Segmentation

ICCV 2021 Oral We propose a pixel-wise contrastive algorithm for semantic segmentation in the fully supervised setting.


Learning Saliency Propagation for Semi-Supervised Instance Segmentation

CVPR 2020 We propose a ShapeProp module to propagate information between object detection and segmentation supervisions for semi-supervised instance segmentation.


Characterizing Adversarial Examples Based on Spatial Consistency Information for Semantic Segmentation

ECCV 2018 We aim to characterize adversarial examples based on spatial context information in semantic segmentation.


Deep Layer Aggregation

CVPR 2018 Oral We augment standard architectures with deeper aggregation to better fuse information across layers.


Dilated Residual Networks

CVPR 2017 We show that dilated residual networks (DRNs) outperform their non-dilated counterparts in image classification without increasing the model’s depth or complexity.


FCNs in the Wild: Pixel-level Adversarial and Constraint-based Adaptation

arXiv 2016 We introduce the first domain adaptive semantic segmentation method, proposing an unsupervised adversarial approach to pixel prediction problems.


Multi-Scale Context Aggregation by Dilated Convolutions

ICLR 2016 We study dilated convolutions in depth. They have become a fundamental network operation.