Preprint / Version 1

Dynamic deformable attention (DDANet) for semantic segmentation

DOI:

https://doi.org/10.31224/osf.io/wcm24

Keywords:

Attention Mechanism, CCNet, COVID-19, Criss-Cross Attention, Deformable Attention, Segmentation, U-Net

Abstract

Deep-learning-based medical image segmentation is an important step in diagnosis; it depends on capturing sufficient spatial context without resorting to overly complex models that are hard to train with limited labelled data. Training data are particularly scarce for segmenting infection regions in CT images of COVID-19 patients. Attention modules help deep networks gather contextual information and benefit semantic segmentation tasks. The recent criss-cross attention module approximates global self-attention while remaining memory- and time-efficient by separating horizontal and vertical self-similarity computations. However, gathering attention from all non-local locations can adversely affect the accuracy of semantic segmentation networks. We propose a new Dynamic Deformable Attention Network (DDANet) that computes contextual information more accurately while remaining similarly efficient. Our novel technique is based on a deformable criss-cross attention block that learns both attention coefficients and attention offsets in a continuous way. A deep segmentation network (in our case a U-Net [Jo2019]) that employs this attention mechanism captures attention from pertinent non-local locations and improves performance on semantic segmentation compared to criss-cross attention within a U-Net, evaluated on a challenging COVID-19 lesion segmentation task. Our validation experiments show that the performance gain of the recursively applied dynamic deformable attention blocks comes from their ability to capture dynamic and precise (wider) attention context. DDANet achieves Dice scores of 73.4% and 61.3% for ground-glass opacity and consolidation lesions in COVID-19 segmentation, improving accuracy by 4.9 percentage points over a baseline U-Net.
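To illustrate the core idea of learning continuous attention offsets together with attention coefficients, the following is a minimal PyTorch sketch, not the authors' released code. It predicts free 2D offsets per sampling point and uses bilinear sampling to keep them differentiable; the paper's actual block factorizes attention along rows and columns (criss-cross), which this simplified sketch does not reproduce. Names such as DeformableCrissCrossAttention, offset_conv, and n_points are illustrative assumptions.

```python
# Minimal sketch of deformable attention with learned continuous offsets.
# Assumption: free 2D offsets per point, not the paper's row/column factorization.
import torch
import torch.nn as nn
import torch.nn.functional as F

class DeformableCrissCrossAttention(nn.Module):
    def __init__(self, channels, n_points=9, reduction=8):
        super().__init__()
        inter = max(channels // reduction, 1)
        self.query = nn.Conv2d(channels, inter, 1)
        self.key = nn.Conv2d(channels, inter, 1)
        self.value = nn.Conv2d(channels, channels, 1)
        # Predict continuous (dx, dy) offsets, in normalized coordinates,
        # for each of the n_points sampling locations.
        self.offset_conv = nn.Conv2d(channels, 2 * n_points, 3, padding=1)
        self.gamma = nn.Parameter(torch.zeros(1))  # learnable residual weight
        self.n_points = n_points

    def forward(self, x):
        b, c, h, w = x.shape
        q = self.query(x)                              # (B, C', H, W)
        k = self.key(x)
        v = self.value(x)
        offsets = self.offset_conv(x).view(b, self.n_points, 2, h, w)

        # Base sampling grid in normalized [-1, 1] coordinates.
        ys = torch.linspace(-1, 1, h, device=x.device)
        xs = torch.linspace(-1, 1, w, device=x.device)
        base_y, base_x = torch.meshgrid(ys, xs, indexing="ij")
        base = torch.stack((base_x, base_y), dim=-1)   # (H, W, 2)

        attn_maps, sampled_vals = [], []
        for p in range(self.n_points):
            off = offsets[:, p].permute(0, 2, 3, 1)    # (B, H, W, 2)
            grid = base.unsqueeze(0) + off             # deformed sampling grid
            k_p = F.grid_sample(k, grid, align_corners=True)
            v_p = F.grid_sample(v, grid, align_corners=True)
            # Similarity between each query and its deformably sampled key.
            attn_maps.append((q * k_p).sum(dim=1, keepdim=True))
            sampled_vals.append(v_p)

        # Attention coefficients over the sampled points, then weighted sum.
        attn = torch.softmax(torch.cat(attn_maps, dim=1), dim=1)  # (B, P, H, W)
        out = sum(attn[:, p:p + 1] * sampled_vals[p] for p in range(self.n_points))
        return x + self.gamma * out
```

Because both the offsets and the coefficients come from convolutions and bilinear sampling, the whole block is trained end-to-end with the segmentation network (e.g. dropped into a U-Net decoder stage) and can be applied recursively, as described in the abstract.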

Posted

2020-08-25