Project C6 - Speaker separation for hearing aids with small-footprint deep learning methods
This project explores deep learning for the acoustic separation of speakers' signals captured with hearing aids. The solutions will be compatible with small-footprint hardware and should improve the communication ability of the hearing-aid user.
This is achieved by combining state-of-the-art speaker separation strategies based on recurrent architectures with auditory models of perception for hearing-aid processing in realistic environments. The project advances training algorithms for complex binaural scenes, methods for preserving binaural cues during speech separation, and quality measures for separated signals.
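The recurrent separation strategies mentioned above typically predict a time-frequency mask that is applied to the mixture spectrogram. As a minimal, hypothetical sketch of that core operation (not the project's actual pipeline), the example below uses an "oracle" ideal ratio mask in place of a trained network's prediction, separating one source from a two-source mixture:

```python
import numpy as np

# Stand-ins for two speakers: tones at distinct frequencies.
fs = 16000
t = np.arange(fs) / fs
s1 = np.sin(2 * np.pi * 440 * t)           # "speaker 1"
s2 = 0.8 * np.sin(2 * np.pi * 1000 * t)    # "speaker 2"
mix = s1 + s2

def frames_fft(x, n=512, hop=256):
    """Windowed FFT frames (a simple STFT) of signal x."""
    starts = np.arange(0, len(x) - n, hop)
    win = np.hanning(n)
    return np.array([np.fft.rfft(x[i:i + n] * win) for i in starts])

S1, S2, X = frames_fft(s1), frames_fft(s2), frames_fft(mix)

# Ideal ratio mask: fraction of mixture magnitude belonging to source 1.
# A recurrent separation network would estimate such a mask from X alone.
eps = 1e-8
irm1 = np.abs(S1) / (np.abs(S1) + np.abs(S2) + eps)

# Applying the mask to the mixture yields an estimate of source 1.
S1_hat = irm1 * X

# The masked estimate is far closer to S1 than the raw mixture is.
err_masked = np.mean(np.abs(S1_hat - S1) ** 2)
err_mix = np.mean(np.abs(X - S1) ** 2)
print(err_masked < err_mix)
```

In a learned system, the oracle mask is replaced by the output of a recurrent network trained on mixtures; for binaural hearing-aid use, masking must additionally preserve interaural level and time differences, which is one of the challenges this project addresses.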