Distil-DCCRN: An Effective Feature Based Knowledge Distillation Method for DCCRN in Speech Enhancement

0. Contents

  1. Abstract
  2. Samples of DNS Challenge test set


1. Abstract

The deep complex convolution recurrent network (DCCRN) achieves excellent speech enhancement performance by exploiting the complex-valued features of the audio spectrum. However, it has a large number of parameters. We propose a smaller model, Distil-DCCRN, with only 30% of the parameters of DCCRN. To keep the performance of Distil-DCCRN on par with DCCRN, we employ knowledge distillation (KD), in which a larger teacher model helps train the smaller student model. Specifically, we design a KD method that integrates attention transfer and Kullback-Leibler divergence (AT-KL) to train the student model Distil-DCCRN, and we adopt Uformer, a better-performing model with a more complicated structure, as the teacher. Unlike previous KD approaches that focus mainly on model outputs, our method also leverages intermediate features from the models' middle layers, enabling rich knowledge transfer between differently structured models despite mismatched layer configurations and discrepancies in the channel and time dimensions of the intermediate features. With AT-KL, Distil-DCCRN outperforms DCCRN as well as several other competitive models in both PESQ and SI-SNR on the DNS test set, and achieves DNSMOS results comparable to DCCRN.
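To make the feature-based distillation idea concrete, below is a minimal PyTorch sketch of an attention-transfer plus KL-divergence loss in the spirit described above. It is a generic illustration, not the paper's exact formulation: the function names, tensor shapes, the interpolation step for mismatched time/frequency resolution, and the loss weights `alpha` and `beta` are all illustrative assumptions.

```python
# Minimal sketch of a feature-level AT-KL distillation loss (illustrative only).
import torch
import torch.nn.functional as F

def attention_map(feat: torch.Tensor) -> torch.Tensor:
    """Collapse the channel dimension by summing squared activations, then
    L2-normalize, so teacher and student channel counts need not match.
    feat: (batch, channels, time, freq) intermediate feature (assumed layout)."""
    att = feat.pow(2).sum(dim=1)        # (batch, time, freq)
    att = att.flatten(start_dim=1)      # (batch, time * freq)
    return F.normalize(att, p=2, dim=1)

def at_kl_loss(student_feat: torch.Tensor,
               teacher_feat: torch.Tensor,
               alpha: float = 1.0,
               beta: float = 1.0) -> torch.Tensor:
    """Attention transfer (MSE between normalized attention maps) plus a
    KL-divergence term between softmax-normalized attention distributions."""
    # Align time/frequency resolution if the chosen layers disagree (assumption).
    if student_feat.shape[2:] != teacher_feat.shape[2:]:
        teacher_feat = F.interpolate(teacher_feat, size=student_feat.shape[2:],
                                     mode="bilinear", align_corners=False)
    q_s = attention_map(student_feat)
    q_t = attention_map(teacher_feat)
    at = F.mse_loss(q_s, q_t)
    kl = F.kl_div(F.log_softmax(q_s, dim=1), F.softmax(q_t, dim=1),
                  reduction="batchmean")
    return alpha * at + beta * kl
```

In practice such a loss would be computed at several matched teacher/student layer pairs and added to the usual enhancement loss (e.g. SI-SNR) when training the student.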



2. Samples of DNS Challenge test set

| Models | Sample 1 | Sample 2 | Sample 3 | Sample 4 |
| --- | --- | --- | --- | --- |
| clean | Sample 1 image | Sample 2 image | Sample 3 image | Sample 4 image |
| noisy | Sample 1 image | Sample 2 image | Sample 3 image | Sample 4 image |
| Distil-DCCRN (trained directly without AT-KL) | Sample 1 image | Sample 2 image | Sample 3 image | Sample 4 image |
| DCCRN | Sample 1 image | Sample 2 image | Sample 3 image | Sample 4 image |
| Distil-DCCRN (AT-KL) | Sample 1 image | Sample 2 image | Sample 3 image | Sample 4 image |


| Models | Sample 5 | Sample 6 | Sample 7 | Sample 8 |
| --- | --- | --- | --- | --- |
| clean | Sample 5 image | Sample 6 image | Sample 7 image | Sample 8 image |
| noisy | Sample 5 image | Sample 6 image | Sample 7 image | Sample 8 image |
| Distil-DCCRN (trained directly without AT-KL) | Sample 5 image | Sample 6 image | Sample 7 image | Sample 8 image |
| DCCRN | Sample 5 image | Sample 6 image | Sample 7 image | Sample 8 image |
| Distil-DCCRN (AT-KL) | Sample 5 image | Sample 6 image | Sample 7 image | Sample 8 image |