LauraTSE: Target Speaker Extraction using Auto-Regressive Decoder-Only Language Models

Beilong Tang, Bang Zeng, Ming Li

Abstract

We propose LauraTSE, an Auto-Regressive Decoder-Only Language Model for Target Speaker Extraction (TSE) built on the LauraGPT backbone. It employs a small-scale auto-regressive decoder-only language model that takes continuous representations of both the mixture and the reference speech and predicts the first few layers of the target speech's discrete codec representations. In addition, a one-step encoder-only language model reconstructs the sum of the predicted codec embeddings, conditioning on both the mixture and the reference information. Our approach achieves performance superior or comparable to existing generative and discriminative TSE models. We also conduct a thorough study of data scalability and of the role of the encoder-only language model.
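To make the two-stage design concrete, below is a minimal, hypothetical PyTorch sketch of the pipeline the abstract describes: a causal decoder-only LM consumes continuous reference and mixture features and predicts the first few codec layers of the target speech, after which a one-step encoder-only LM reconstructs the sum of the predicted codec embeddings from the mixture and reference context. All module sizes, layer counts, and names (e.g. `LauraTSESketch`, `feat_dim`, `n_codebooks`) are illustrative assumptions, not the authors' configuration; positional encodings and the teacher-forcing token shift are omitted for brevity, and the actual LauraGPT backbone is not reproduced here.

```python
import torch
import torch.nn as nn


class LauraTSESketch(nn.Module):
    """Illustrative sketch of LauraTSE's two-stage design (all sizes assumed)."""

    def __init__(self, feat_dim=80, d_model=512, n_codebooks=2,
                 codebook_size=1024, n_dec_layers=6, n_enc_layers=1):
        super().__init__()
        # Project continuous mixture/reference features into the LM space.
        self.input_proj = nn.Linear(feat_dim, d_model)
        # Auto-regressive decoder-only LM (self-attention with a causal mask).
        dec_layer = nn.TransformerEncoderLayer(d_model, nhead=8, batch_first=True)
        self.decoder_lm = nn.TransformerEncoder(dec_layer, n_dec_layers)
        # One classification head per predicted codec layer ("first few layers").
        self.heads = nn.ModuleList(
            nn.Linear(d_model, codebook_size) for _ in range(n_codebooks))
        # Codec token embeddings; the layers are summed into one vector per frame.
        self.codec_emb = nn.ModuleList(
            nn.Embedding(codebook_size, d_model) for _ in range(n_codebooks))
        # One-step (single forward pass, bidirectional) encoder-only LM that
        # reconstructs the summed codec embeddings given mixture + reference.
        enc_layer = nn.TransformerEncoderLayer(d_model, nhead=8, batch_first=True)
        self.encoder_lm = nn.TransformerEncoder(enc_layer, n_enc_layers)
        self.out_proj = nn.Linear(d_model, d_model)

    def forward(self, reference, mixture, target_tokens):
        # reference: (B, T_ref, feat_dim); mixture: (B, T_mix, feat_dim)
        # target_tokens: (B, n_codebooks, T_tgt) discrete codec ids (teacher forcing)
        cond = torch.cat([self.input_proj(reference),
                          self.input_proj(mixture)], dim=1)
        # Sum the target codec layers' embeddings and prepend the conditioning.
        tgt = sum(emb(target_tokens[:, i]) for i, emb in enumerate(self.codec_emb))
        seq = torch.cat([cond, tgt], dim=1)
        # Causal mask so each position attends only to earlier positions.
        L = seq.size(1)
        causal = torch.triu(torch.full((L, L), float("-inf"),
                                       device=seq.device), diagonal=1)
        h = self.decoder_lm(seq, mask=causal)
        h_tgt = h[:, cond.size(1):]  # positions aligned with target frames
        # Per-layer token logits: (B, n_codebooks, T_tgt, codebook_size).
        logits = torch.stack([head(h_tgt) for head in self.heads], dim=1)
        # Stage 2: embed the predicted tokens, sum across layers, and refine
        # them in one non-autoregressive pass conditioned on cond.
        pred_tokens = logits.argmax(-1)
        pred_emb = sum(emb(pred_tokens[:, i])
                       for i, emb in enumerate(self.codec_emb))
        refined = self.encoder_lm(torch.cat([cond, pred_emb], dim=1))
        refined_emb = self.out_proj(refined[:, cond.size(1):])
        return logits, refined_emb


if __name__ == "__main__":
    model = LauraTSESketch()
    ref = torch.randn(2, 50, 80)                # reference speech features
    mix = torch.randn(2, 120, 80)               # mixture features
    tok = torch.randint(0, 1024, (2, 2, 120))   # target codec ids
    logits, refined = model(ref, mix, tok)
    print(logits.shape, refined.shape)          # (2, 2, 120, 1024) (2, 120, 512)
```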

Architecture

[Figure] Model overview of LauraTSE.

Demo on Libri2Mix Clean

[Audio samples] Columns: Mixture | Ground Truth | LauraTSE | Reference