WPE speech dereverberation


Papers related to WPE implementation

  1. Tomohiro Nakatani, Takuya Yoshioka, Keisuke Kinoshita, Masato Miyoshi, and Biing-Hwang Juang,
    "Speech dereverberation based on variance-normalized delayed linear prediction,"
    IEEE Transactions on Audio, Speech, and Language Processing, vol. 18, no. 7, pp. 1717-1731, Sep. 2010.
  2. Takuya Yoshioka and Tomohiro Nakatani,
    "Generalization of multi-channel linear prediction methods for blind MIMO impulse response shortening,"
    IEEE Transactions on Audio, Speech, and Language Processing, vol. 20, no. 10, pp. 2707-2720, Dec. 2012.

Papers related to applications of WPE

  1. Takuya Yoshioka, Tomohiro Nakatani, and Masato Miyoshi,
    "Integrated speech enhancement method using noise suppression and dereverberation,"
    IEEE Transactions on Audio, Speech, and Language Processing, vol. 17, no. 2, pp. 231-246, Feb. 2009.
  2. Takuya Yoshioka, Tomohiro Nakatani, Masato Miyoshi, and Hiroshi G. Okuno,
    "Blind separation and dereverberation of speech mixtures by joint optimization,"
    IEEE Transactions on Audio, Speech, and Language Processing, vol. 19, no. 1, pp. 69-84, Jan. 2011.
  3. Marc Delcroix, Takuya Yoshioka, Atsunori Ogawa, Yotaro Kubo, Masakiyo Fujimoto, Nobutaka Ito,
    Keisuke Kinoshita, Miquel Espi, Takaaki Hori, Tomohiro Nakatani, and Atsushi Nakamura,
    "Linear prediction-based dereverberation with advanced speech enhancement and recognition technologies for the REVERB challenge,"
    in Proceedings of the 2014 REVERB Workshop, May 2014.
  4. Marc Delcroix, Takuya Yoshioka, Atsunori Ogawa, Yotaro Kubo, Masakiyo Fujimoto, Nobutaka Ito,
    Keisuke Kinoshita, Miquel Espi, Takaaki Hori, and Tomohiro Nakatani,
    "Strategies for distant speech recognition in reverberant environments,"
    EURASIP Journal on Advances in Signal Processing, 2015.
  5. Takuya Yoshioka and Mark J. F. Gales,
    "Environmentally robust ASR front-end for deep neural network acoustic models,"
    Computer Speech and Language, vol. 31, no. 1, pp. 65-86, May 2015.
  6. Takuya Yoshioka, Nobutaka Ito, Marc Delcroix, Atsunori Ogawa, Keisuke Kinoshita, Masakiyo Fujimoto,
    Chengzhu Yu, Wojciech Fabian, Miquel Espi, Takuya Higuchi, Shoko Araki, and Tomohiro Nakatani,
    "The NTT CHiME-3 system: advances in speech enhancement and recognition for mobile multi-microphone devices,"
    in Proceedings of the IEEE Automatic Speech Recognition and Understanding Workshop (ASRU), Dec. 2015.

Other papers and links related to speech dereverberation

  1. Patrick A. Naylor and Nikolay D. Gaubitch (eds.),
    "Speech Dereverberation,"
    Springer, 2010.
  2. Takuya Yoshioka, Armin Sehr, Marc Delcroix, Keisuke Kinoshita, Roland Maas, Tomohiro Nakatani, and Walter Kellermann,
    "Making machines understand us in reverberant rooms: Robustness against reverberation for automatic speech recognition,"
    IEEE Signal Processing Magazine, vol. 29, no. 6, pp. 114-126, Nov. 2012.
  3. Keisuke Kinoshita, Marc Delcroix, Takuya Yoshioka, Tomohiro Nakatani, Emanuel Habets, Reinhold Haeb-Umbach,
    Volker Leutnant, Armin Sehr, Walter Kellermann, Roland Maas, Sharon Gannot, and Bhiksha Raj,
    "The REVERB challenge: a common evaluation framework for dereverberation and recognition of reverberant speech,"
    in Proceedings of the IEEE Workshop on Applications of Signal Processing to Audio and Acoustics (WASPAA), Oct. 2013.
  4. The REVERB challenge, http://reverb2014.dereverberation.com/, accessed Oct. 2015.