Contact

Head of division

Prof. Dr. Bernd T. Meyer

+49 441 798 3280

W2 2-260

Administrative Support

Jessica Jurado Garcia

+49 441 798 3003

+49 441 798 3902

W2 1-169

Postal address

Communication Acoustics, Fk. VI
Carl von Ossietzky Universität Oldenburg
D-26111 Oldenburg, Germany

How to find us

Publications

Journal papers

  • Westhausen, N.L. and Meyer, B.T. (2023). "Binaural Multichannel Blind Speaker Separation With a Causal Low-Latency and Low-Complexity Approach", IEEE Open Journal of Signal Processing, https://doi.org/10.1109/OJSP.2023.3343320
  • Ooster, J., Tuschen, L., Meyer, B.T. (2023). Self-conducted speech audiometry using automatic speech recognition: simulation results for listeners with hearing loss, Computer Speech & Language, doi:10.1016/j.csl.2022.101447
  • Cooke, M., Scharenborg, O., Meyer, B.T. (2022). "The time course of adaptation to distorted speech", The Journal of the Acoustical Society of America 151, 2636,  doi.org/10.1121/10.0010235 
  • Kayser, H., Hermansky, H., Meyer, B.T. (2022). "Spatial speech detection for binaural hearing aids using deep phoneme classifiers", Acta Acustica, doi.org/10.1051/aacus/2022013
  • Roßbach, J., Kollmeier, B., Meyer, B.T. (2022). A model of speech recognition for hearing-impaired listeners based on deep learning, Journal of the Acoustical Society of America, doi.org/10.1121/10.0009411
  • Castro Martínez, A.M., Spille, C., Roßbach, J., Kollmeier, B., Meyer, B.T. (2021). Prediction of speech intelligibility with DNN-based performance measures. Computer Speech & Language, doi.org/10.1016/j.csl.2021.101329
  • Westhausen, N.L., Huber, R., Baumgartner, H., Sinha, R., Rennies, J., Meyer, B.T. (2021). Reduction of Subjective Listening Effort for TV Broadcast Signals with Recurrent Neural Networks, IEEE Transactions on Audio, Speech, and Language Processing, doi.org/10.1109/TASLP.2021.3126931
  • Geirnaert, S., Vandecappelle, S., Alickovic, E., de Cheveigné, A., Lalor, E., Meyer, B.T., Miran, S., Francart, T., Bertrand, A. (2021). "Neuro-Steered Hearing Devices: Decoding Auditory Attention From the Brain," IEEE Signal Processing Magazine, doi:10.1109/MSP.2021.3075932
  • Ooster, J., Krüger, M., Bach, J.B., Wagener, K.C., Kollmeier, B., Meyer, B.T. (2020). "Speech audiometry at home: Automated listening tests via smart speakers with normal-hearing and hearing-impaired listeners," Trends in Hearing, doi:10.1177/2331216520970011
  • Kollmeier, B., Spille, C., Castro Martínez, A.M., Ewert, S.D., Meyer, B.T. (2020). Modelling human speech recognition in challenging noise maskers using machine learning, Acoustical Science and Technology, Vol. 41 (1), pp. 94-98. doi.org/10.1250/ast.41.94
  • De Taillez, T., Denk, F., Mirkovic, B., Kollmeier, B., Meyer, B.T. (2019). "Modeling nonlinear transfer functions from speech envelopes to encephalography with neural networks,"  International Journal of Psychological Studies, doi.org/10.5539/ijps.v11n4p1 
  • Castro Martínez, A.M., Gerlach, L., Payá-Vayá, G., Hermansky, H., Ooster, J., Meyer, B.T. (2019). "DNN-based performance measures for predicting error rates in automatic speech recognition and optimizing hearing aid parameters," Speech Communication, doi.org/10.1016/j.specom.2018.11.006
  • Xiong, F., Goetze, S., Kollmeier, B., Meyer, B.T. (2019). "Joint Estimation of Reverberation Time and Early-to-Late Reverberation Ratios from Single-Channel Speech Signals," IEEE/ACM Transactions on Audio, Speech, and Language Processing, doi.org/10.1109/TASLP.2018.2877894 
  • Huber, R., Ooster, J., and Meyer, B.T. (2018) “Single-ended speech quality prediction based on automatic speech recognition,” Journal of the Audio Engineering Society 66 (10), pp. 759-769, doi.org/10.17743/jaes.2018.0041 
  • Stuckenberg, M.V., Nayak, C.V., Meyer, B.T., Völker, C., Hohmann, V., Bendixen, A. (2018). “Age effects on concurrent speech segregation by onset asynchrony,” Journal of Speech, Language, and Hearing Research. doi:10.1044/2018_JSLHR-H-18-0064
  • Xiong, F., Goetze, S., Kollmeier, B., Meyer, B.T. (2018). "Exploring Auditory-Inspired Acoustic Features for Room Acoustic Parameter Estimation from Monaural Speech," IEEE/ACM Transactions on Audio, Speech, and Language Processing, doi.org/10.1109/TASLP.2018.2843537
  • Spille, C., Kollmeier, B., Meyer, B.T. (2018). "Comparing human and automatic speech recognition in simple and complex acoustic scenes," Computer Speech and Language, doi.org/10.1016/j.csl.2018.04.003.
  • Ooster, J., Huber, R., Kollmeier, B., Meyer, B.T. (2018). "Evaluation of an automated speech-controlled listening test with spontaneous and read responses," Speech Communication, doi.org/10.1016/j.specom.2018.01.005.
  • Huber, R., Krüger, M., Meyer, B.T. (2018). "Single-ended prediction of listening effort using deep neural networks," Hearing Research, doi.org/10.1016/j.heares.2017.12.014
  • de Taillez, T., Kollmeier, B., Meyer, B.T. (2018). "Machine learning for decoding listeners' attention from EEG evoked by continuous speech," European Journal of Neuroscience. doi.org/10.1111/ejn.13790
  • Spille, C., Ewert, S.D., Kollmeier, B., Meyer, B.T. (2018). "Predicting Speech Intelligibility with Deep Neural Networks," Computer Speech and Language 48, pp. 51-66. doi.org/10.1016/j.csl.2017.10.004
  • Spille, C., Kollmeier, B., Meyer, B.T. (2017). "Combining binaural and cortical features for robust speech recognition," IEEE/ACM Transactions on Audio, Speech, and Language Processing 25 (4), pp. 756-767. doi.org/10.1109/TASLP.2017.2661712
  • Castro Martínez, A.M., Mallidi, S.H., Meyer, B.T. (2017). "On the Relevance of Auditory-Based Gabor Features for Deep Learning in Automatic Speech Recognition," Computer Speech & Language 45, pp. 21-38. doi.org/10.1016/j.csl.2017.02.006
  • Kollmeier, B., Schädler, M.R., Warzybok, A., Meyer, B.T., Brand, T. (2016). "Sentence recognition prediction for hearing-impaired listeners in stationary and fluctuation noise with FADE: Empowering the Attenuation and Distortion concept by Plomp with a quantitative processing model," Trends in Hearing, Sep 7;20. doi:10.1177/2331216516655795.
  • Xiong, F., Meyer, B.T., Moritz, N., Rehr, R., Anemueller, J., Gerkmann, T., Doclo, S., Goetze, S. (2015). "Front-End Technologies for Robust ASR in Reverberant Environments - Spectral Enhancement-based Dereverberation and Auditory Modulation Filterbank Features," EURASIP Journal on Advances in Signal Processing, 2015: 70. doi:10.1186/s13634-015-0256-4
  • Schädler, M.R., Meyer, B.T., and Kollmeier, B. (2012). "Spectro-temporal modulation subspace-spanning filter bank features for robust automatic speech recognition", J. Acoust. Soc. Am. Volume 131, Issue 5, pp. 4134-4151. [pdf - see copyright notice below]
  • Meyer, B.T. and Kollmeier, B. (2011). "Robustness of spectro-temporal features against intrinsic and extrinsic variations in automatic speech recognition", Speech Communication 53 (5) (Special issue on Statistical and Perceptual Audition), pp. 753-767. doi.org/10.1016/j.specom.2010.07.002
  • Meyer, B.T., Brand, T., Kollmeier, B. (2011). "Effect of speech-intrinsic variations on human and automatic recognition of spoken phonemes", J. Acoust. Soc. Am. 129, pp. 388-403, doi:10.1121/1.3514525. [ pdf - see copyright notice below (1)]
  • Meyer, B.T., Jürgens, T., Wesker, T., Brand, T., Kollmeier, B. (2010). "Human phoneme recognition as a function of speech-intrinsic variabilities", J. Acoust. Soc. Am. 128 (5), pp. 3126–3141, doi:10.1121/1.3493450 [ pdf - see copyright notice below (1)]

Peer-reviewed conference proceedings

  • Westhausen, N.L. and Meyer, B.T. (2023). Low Bit Rate Binaural Link for Improved Ultra Low-Latency Low-Complexity Multichannel Speech Enhancement in Hearing Aids, IEEE Workshop on Applications of Signal Processing to Audio and Acoustics (WASPAA), https://doi.org/10.1109/WASPAA58266.2023.10248154 
  • Reuter, P.M., Rollwage, C., Meyer, B.T. (2023). Multilingual Query-by-Example Keyword Spotting with Metric Learning and Phoneme-to-Embedding Mapping, in Proc. ICASSP, https://doi.org/10.1109/ICASSP49357.2023.10095400
  • Westhausen, N.L. and Meyer, B.T. (2022). tPLCnet: Real-time Deep Packet Loss Concealment in the Time Domain Using a Short Temporal Context, in Proc. Interspeech 2022, https://doi.org/10.21437/Interspeech.2022-10157 
  • Roßbach, J., Huber, R., Röttges, S., Hauth, C.F., Biberger, T., Brand, T., Meyer, B.T., Rennies, J. (2022). Speech Intelligibility Prediction for Hearing-Impaired Listeners with the LEAP Model, in Proc. Interspeech 2022, https://doi.org/10.21437/Interspeech.2022-10460
  • Roßbach, J., Röttges, S., Hauth, C.F., Brand, T., Meyer, B.T. (2021). Non-intrusive binaural prediction of speech intelligibility based on phoneme classification, in Proc. ICASSP, doi.org/10.1109/ICASSP39728.2021.9413874
  • Westhausen, N.L., Meyer, B.T. (2021). Acoustic echo cancellation with the dual-signal transformation LSTM network, in Proc. ICASSP, doi.org/10.1109/ICASSP39728.2021.9413510 
  • Westhausen, N. L.  and Meyer, B.T. (2020). Dual-Signal Transformation LSTM Network for Real-Time Noise Suppression, in Proc. Interspeech, doi.org/10.21437/Interspeech.2020-2631
  • Tammen, M., Fischer, D., Meyer, B.T., Doclo, S. (2020). "DNN-based speech presence probability estimation for multi-frame single-microphone speech enhancement," in Proc. ICASSP, doi.org/10.1109/ICASSP40776.2020.9054196
  • Ooster, J., Porysek Moreta, P. N., Bach, J.-H., Holube, I., Meyer, B.T. (2019). "Computer, test my hearing: Accurate speech audiometry with smart speakers," in Proc. Interspeech, doi.org/10.21437/Interspeech.2019-2118 [ pdf ]
  • Ooster, J. and Meyer, B.T. (2019). "Improving Deep Models of Speech Quality Prediction through Voice Activity Detection and Entropy-based Measures," in Proc. ICASSP, doi.org/10.1109/ICASSP.2019.8682754 [ pdf ]
  • Ooster, J., Huber, R., Meyer, B.T. (2018). "Prediction of Perceived Speech Quality Using Deep Machine Listening," Interspeech 2018, doi.org/10.21437/Interspeech.2018-1374 
  • Kranzusch, P., Huber, R., Krüger, M., Kollmeier, B., Meyer, B.T. (2018). "Prediction of Subjective Listening Effort from Acoustic Data with Non-Intrusive Deep Models," Interspeech 2018, doi:10.21437/Interspeech.2018-1375.
  • Huber, R., Spille, C., Meyer, B.T. (2017). "Single-ended prediction of listening effort based on automatic speech recognition," in Proc. Interspeech, doi:10.21437/Interspeech.2017-1360. [ pdf ]
  • Spille, C. and Meyer, B.T. (2017). "Listening in the dips: Comparing relevant features for speech recognition in humans and machines," Proc. Interspeech, doi:10.21437/Interspeech.2017-1168. [ pdf ]
  • Meyer, B.T., Mallidi, S.H., Kayser, H., Hermansky, H. (2017). "Predicting error rates for unknown data in automatic speech recognition," in Proc. ICASSP, doi:10.1109/ICASSP.2017.7953174. [ pdf ]
  • Xiong, F., Goetze, S., Meyer, B.T. (2017). "Combination strategy based on relative performance monitoring for multi-stream reverberant speech recognition," in Proc. ICASSP, doi:10.1109/ICASSP.2017.7953082. [ pdf ]
  • Xiong, F., Goetze, S., Meyer, B.T. (2017). "On DNN posterior probability combination in multi-stream speech recognition for reverberant environments," in Proc. ICASSP, doi:10.1109/ICASSP.2017.7953158. [ pdf ]
  • Xiong, F., Meyer, B.T., Cauchi, B., Jukic, A., Doclo, S., Goetze, S. (2017). "Performance Comparison of Real-Time Single-Channel Speech Dereverberation Algorithms," in Proc. Workshop on Hands-free Speech Communication and Microphone Arrays (HSCMA), San Francisco, CA, doi:10.1109/HSCMA.2017.7895575.
  • Meyer, B.T., Mallidi, S.H., Castro Martínez, A.M., Paya-Vaya, G., Kayser, H., Hermansky, H. (2016). "Performance monitoring for automatic speech recognition in noisy multi-channel environments," IEEE Workshop on Spoken Language Technology, doi:10.1109/SLT.2016.7846244. [ pdf ]
  • Spille, C., Kayser, H., Hermansky, H., Meyer, B.T. (2016). "Assessing speech quality in speech-aware hearing aids based on phoneme posteriorgrams," in Proc. Interspeech, pp. 1755-1759, doi:10.21437/Interspeech.2016-1318. [ pdf ]
  • Exter, M., Meyer, B.T. (2016). "DNN-based automatic speech recognition as a model for human phoneme perception," in Proc. Interspeech, pp. 615-619, doi:10.21437/Interspeech.2016-1285.  [ pdf ]
  • Frye, M., Micheli, C., Schepers, I.M., Schalk, G., Rieger, J.W., Meyer, B.T. (2016). "Neural responses to speech-specific modulations derived from a spectro-temporal filter bank," in Proc Interspeech, doi:10.21437/Interspeech.2016-1327. [ pdf ]
  • Eichenauer, A., Dietz, M., Meyer, B.T., Jürgens, T. (2016). "Introducing temporal rate coding for speech in cochlear implants: A microscopic evaluation in humans and models," in Proc. Interspeech. [ pdf ]
  • Xiong, F., Goetze, S., Meyer, B.T. (2015). "Joint Estimation of Reverberation Time and Direct-To-Reverberation Ratio from Speech Using Auditory-Inspired Features," ACE Challenge Workshop, satellite event of IEEE-WASPAA. 
  • Meyer, B.T., Kollmeier, B., and Ooster, J. (2015). "Autonomous measurement of speech intelligibility utilizing automatic speech recognition," in Proc. Interspeech. [ pdf ]
  • Kayser, H., Spille, C., Marquardt, D., Meyer, B.T. (2015). "Improving automatic speech recognition in spatially-aware hearing aids," in Proc. Interspeech. [ pdf ]
  • Xiong, F., Meyer, B., Goetze, S. (2015). "A Study on Joint Beamforming and Spectral Enhancement for Robust Speech Recognition in Reverberant Environments," Proc. 40th International Conference on Acoustics, Speech, and Signal Processing (ICASSP). [ pdf ]
  • Spille, C., Meyer, B.T. (2014). "Identifying the human-machine differences in complex binaural scenes: What can be learned from our auditory system," in Proc. Interspeech, pp. 626-631. [ pdf ]
  • Castro Martinez, A.M., Moritz, N., Meyer, B.T. (2014). "Should deep neural nets have ears? The role of auditory features in deep learning approaches," in Proc. Interspeech, pp. 2435-2439. [ pdf ]
  • Xiong, F., Moritz, N., Rehr, R., Anemüller, J., Meyer, B.T., Gerkmann, T., Doclo, S., Goetze, S. (2014). "Robust ASR in reverberant environments using temporal cepstrum smoothing for speech enhancement and an amplitude modulation filterbank for feature extraction," in Proc. REVERB (REverberant Voice Enhancement and Recognition Benchmark) challenge. [ pdf ]
  • Xiong, F., Goetze, S., Meyer, B.T. (2014). "Estimating room acoustic parameters for speech recognizer adaptation and combination in reverberant environments," Proc. 39th International Conference on Acoustics, Speech, and Signal Processing (ICASSP), pp. 5522-5526. [ pdf ]
  • Meyer, B.T. (2013). "What's the difference? Comparing humans and machines on the Aurora2 speech database," in Proc. Interspeech 2013, 2634-2638. [ pdf ]
  • Spille, C., Dietz, M., Hohmann, V., Meyer, B.T. (2013). "Using binaural processing for automatic speech recognition in multi-talker scenes," Proc. 38th International Conference on Acoustics, Speech, and Signal Processing (ICASSP), pp. 7805-7809. [ pdf ]
  • Xiong, F., Goetze, S., Meyer, B.T. (2013). "Blind estimation of reverberation time based on spectro-temporal modulation filtering," Proc. 38th International Conference on Acoustics, Speech, and Signal Processing (ICASSP), pp. 443-447. [ pdf ]
  • Chang, S., Meyer, B.T., Morgan, N. (2013). "Spectro-temporal features for noise-robust speech recognition using power-law nonlinearity and power-bias subtraction," Proc. 38th International Conference on Acoustics, Speech, and Signal Processing (ICASSP), pp. 7063-7067. [ pdf ]
  • Moritz, N., Schädler, M.R., Adiloglu, K., Meyer, B.T., Jürgens, T., Gerkmann, T., Goetze, S. (2013). "Noise robust distant automatic speech recognition utilizing NMF based source separation and auditory feature extraction," Workshop on Machine Listening in Multisource Environments (CHiME 2013). [ pdf ]
  • Meyer, B.T., Spille, C., Kollmeier, B., and Morgan, N. (2012). "Hooking up spectro-temporal filters with auditory-inspired representations for robust automatic speech recognition," in Proc. Interspeech. [ pdf ]
  • Kollmeier, B., Schädler, M.R., Meyer, A., Anemüller, J., and Meyer, B.T. (2012). "Do we need STRFs for cocktail parties? - On the relevance of physiologically motivated features for human speech perception derived from automatic speech recognition", in Proc. International Symposium of Hearing (ISH), Cambridge, UK.
  • Lei, H., Meyer, B.T., and Mirghafori, N. (2012). "Spectro-temporal Gabor features for speaker recognition," in Proc. ICASSP, pp. 4241-4244. [ pdf ]
  • Meyer, B.T. (2011). "Improving automatic speech recognition by learning from human errors," in Proc. 162nd Meeting of the Acoustical Society of America. Selected as one of the highlights of the ASA meeting. POMA Volume 14, pp. 060001.
  • Meyer, B.T. (2011). "Extraction of Spectro-Temporal Speech Cues for Robust Automatic Speech Recognition," in Proc. 42nd International Conference of the Audio Engineering Society (AES), pp. 108-116.
  • Meyer, B.T., Ravuri, S., Schädler, M.R., and Morgan, N. (2011). "Comparing different flavors of spectro-temporal features for ASR", in Proc. Interspeech, pp. 1269-1272. [ pdf ]
  • Meyer, B.T. and Kollmeier, B. (2010). "Learning from human errors: Prediction of phoneme confusions based on modified ASR training", in Proc. Interspeech. [ pdf ]
  • Meyer, B. and Kollmeier, B. (2009). "Complementarity of MFCC, PLP and Gabor features in the presence of speech-intrinsic variabilities," in Proc. Interspeech. [ pdf ]
  • Meyer, B.T. and Kollmeier, B. (2008). "Optimization and Evaluation of Gabor feature sets for ASR," in Proc. Interspeech. [ pdf ]
  • Garcia Lecumberri, M.L., Cooke, M., Cutugno, F., Giurgiu, M., Meyer, B.T., Scharenborg, O., van Dommelen, W., and Volin, A. (2008). "The non-native consonant challenge for European languages," in Proc. Interspeech.
  • Meyer, B.T., Wächter, M., Brand, T., and Kollmeier, B. (2007). "Phoneme confusions in human and automatic speech recognition," in Proc. Interspeech, Antwerp, Belgium, pp. 1485-1488. [ pdf ]
  • Kollmeier, B., Meyer, B.T., Jürgens, T., Beutelmann, R., Meyer, R., and Brand, T. (2007). "Speech reception in noise: How much do we understand?," in Proceedings of the International Symposium on Auditory and Audiological Research (ISAAR), Helsingør, Denmark.
  • Meyer, B.T., Wesker, T., Brand, T., Mertins, A., and Kollmeier, B. (2006). "A human-machine comparison in speech recognition based on a logatome corpus," in Workshop on Speech Recognition and Intrinsic Variation, Toulouse, France. [ pdf ]
  • Wesker, T., Meyer, B., Wagener, K., Anemüller, J., Mertins, A., and Kollmeier, B. (2005). "Oldenburg Logatome Speech Corpus (OLLO) for speech recognition experiments with humans and machines," in Proceedings of Interspeech, Lisbon, Portugal, pp. 1273-1276. [ pdf ]

Book Chapters

  • Spille, C., Meyer, B.T., Dietz, M., Hohmann, V. (2013). Chapter "Binaural scene analysis with multi-dimensional statistical filters," in "The Technology of Binaural Listening" (Ed. Blauert, J.), Springer, Berlin.
  • Kollmeier, B., Brand, T., and Meyer, B. (2008). Chapter "Perception of speech and sound," in "Springer Handbook of Speech Processing," pp. 61-82, Springer, Berlin.

Theses

  • Meyer, B., "Human and automatic speech recognition in the presence of speech-intrinsic variations," Ph.D. thesis, Carl von Ossietzky Universität, Oldenburg, 2009. [ url ]
  • Meyer, B., "Robust Speech Recognition based on Spectro-Temporal Features," diploma thesis, Carl von Ossietzky Universität, Oldenburg, 2004. [ pdf ]

Other publications

  • Hülsmeier, D., Hauth, C. F., Röttges, S., Kranzusch, P., Roßbach, J., Schädler, M. R., Meyer, B. T., Warzybok, A., Brand, T. (2021). "Towards Non-Intrusive Prediction of Speech Recognition Thresholds in Binaural Conditions," Proc. ITG Conference on Speech Communication. 
  • Ooster, J., Krueger, M., Bach, J.-H., Wagener, K. C., Kollmeier, B., Meyer, B. T. (2021). "Hearing test using smart speakers: Speech audiometry with Alexa," Proc. Virtual Conference on Computational Audiology (VCCA)
  • Roßbach, J., Röttges, S., Hauth, C. F., Brand, T., Meyer, B.T., (2021). "Binaural prediction of speech intelligibility based on a blind model using automatic phoneme recognition," Proc. Virtual Conference on Computational Audiology (VCCA)
  • Huber, R., Pusch, A., Moritz, N., Rennies, J., Schepker, H., Meyer, B.T. (2018). "Objective Assessment of a Speech Enhancement Scheme with an Automatic Speech Recognition-Based System," Proc. ITG Conference on Speech Communication
  • Kollmeier, B., Spille, C., Castro Martínez, A. M., Ewert, S.D., Meyer, B.T. (2018). "Modelling human speech recognition in challenging noise maskers: Machine learning and auditory modelling," in Proc. International Symposium of Hearing, Copenhagen.
  • Meyer, B.T., Spille, C., Ewert, S.D., Kollmeier, B. (2017). Modelling human speech recognition with automatic speech recognition techniques based on deep neural nets, in Proc. ARCHES (Audiological Research Cores in Europe), Leuven. 
  • de Taillez, T., Kollmeier, B., Meyer, B.T. (2017). “Decoding speaker attendance from EEG data using deep machine learning in continuous speech,” Speech in noise workshop, Oldenburg.
  • Spille, C. and Meyer, B.T. (2016). “Are deep neural network speech recognizers still hearing-impaired?,” Speech in noise workshop, Groningen.
  • Kollmeier, B., Schädler, M.R., Warzybok, A., Meyer, B.T., Brand, T. (2015). "Individual speech recognition in noise, the audiogram & more: Using automatic speech recognition (ASR) as a modelling tool and consistency check across audiological measures," abstract for the International Symposium On Auditory And Audiological Research (ISAAR).
  • Meyer, B. (2011). "Human and automatic speech recognition in the presence of speech-intrinsic variations", Summary of Ph.D. thesis, Zeitschrift für Audiologie 50 (2), pp. 77-78.
  • Schädler, M. R., Meyer, B. and Kollmeier, B. (2011). "Robuste Spracherkennung mit spektro-temporalen Filterbankmerkmalen", Fortschritte der Akustik - Tagungsband der DAGA.
  • Meyer, B. and Kollmeier, B., "Einfluss intrinsischer Sprachvariation auf automatische Spracherkenner – Vergleich spektraler und spektro-temporaler Merkmale", in Proc. DAGA, Berlin, 2010.
  • Meyer, B., Brand, T., and Kollmeier, B., "Phonemverwechslungen bei menschlicher und automatischer Spracherkennung," in Proceedings of DAGA, Stuttgart, Germany, 2007, pp. 79-80. [ pdf ]
  • Meyer, B. and Kleinschmidt, M., "Robust Speech Recognition Based on Localized Spectro-Temporal Features," in Proceedings of the Elektronische Sprach- und Signalverarbeitung (ESSV), Karlsruhe, 2003.
  • Wesker, T., Meyer, B., Brand, T., Wagener, K., and Kollmeier, B., "OLLO - Ein Logatom-Sprachkorpus für Sprachverständlichkeitsmessungen und Erkennungsexperimente mit Menschen und Maschinen," in 9. Jahrestagung der Deutschen Gesellschaft für Audiologie, Zeitschrift für Audiologie, Suppl. IX, 2006.

(1) Copyright Acoustical Society of America. This article may be downloaded for personal use only. Any other use requires prior permission of the author and the Acoustical Society of America.
