References

[1] Y. Huang, J. Benesty, and J. Chen, “A blind channel identification-based two-stage approach to separation and dereverberation of speech signals in a reverberant environment,” IEEE Transactions on Speech and Audio Processing, vol. 13, no. 5, pp. 882-895, 2005.

[2] K. Kinoshita et al., “A summary of the REVERB challenge: state-of-the-art and remaining challenges in reverberant speech processing research,” EURASIP Journal on Advances in Signal Processing, vol. 2016, no. 1, p. 7, 2016.

[3] M. Miyoshi and Y. Kaneda, “Inverse filtering of room acoustics,” IEEE Transactions on Acoustics, Speech, and Signal Processing, vol. 36, no. 2, pp. 145-152, 1988.

[4] K. Furuya, “Noise reduction and dereverberation using correlation matrix based on the multiple-input/output inverse-filtering theorem (MINT),” in Proc. International Workshop on Hands-Free Speech Communication, 2001.

[5] S. C. Douglas, H. Sawada, and S. Makino, “Natural gradient multichannel blind deconvolution and speech separation using causal FIR filters,” IEEE Transactions on Speech and Audio Processing, vol. 13, no. 1, pp. 92-104, 2005.

[6] I. Kodrasi and S. Doclo, “Joint dereverberation and noise reduction based on acoustic multi-channel equalization,” IEEE/ACM Transactions on Audio, Speech, and Language Processing, vol. 24, no. 4, pp. 680-693, 2016.

[7] A. V. Oppenheim and R. W. Schafer, Digital Signal Processing. Englewood Cliffs, NJ: Prentice-Hall, 1975.

[8] D. Bees, M. Blostein, and P. Kabal, “Reverberant speech enhancement using cepstral processing,” in Proc. IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP), 1991, pp. 977-980.

[9] S. T. Neely and J. B. Allen, “Invertibility of a room impulse response,” The Journal of the Acoustical Society of America, vol. 66, no. 1, pp. 165-169, 1979.

[10] D. Zhang and G. Chen, “Speech signal dereverberation with cepstral processing,” Technical Acoustics, no. 1, pp. 39-44, 2009.

[11] M. Wu and D. Wang, “A two-stage algorithm for one-microphone reverberant speech enhancement,” IEEE Transactions on Audio, Speech, and Language Processing, vol. 14, no. 3, pp. 774-784, 2006.

[12] Q. Liao, R. Kong, Y. Shen, J. Gu, H. Zhao, and Z. Tao, “Dereverberation based on minimum phase decomposition,” Communications Technology, vol. 44, no. 6, pp. 78-82, 2011.

[13] Q.-G. Liu, B. Champagne, and P. Kabal, “A microphone array processing technique for speech enhancement in a reverberant space,” Speech Communication, vol. 18, no. 4, pp. 317-334, 1996.

[14] P. Mowlaee, R. Saeidi, and Y. Stylianou, “Advances in phase-aware signal processing in speech communication,” Speech communication, vol. 81, pp. 1-29, 2016.

[15] K. Paliwal, K. Wójcicki, and B. Shannon, “The importance of phase in speech enhancement,” Speech Communication, vol. 53, no. 4, pp. 465-494, 2011.

[16] R. Peng, Z.-H. Tan, X. Li, and C. Zheng, “A perceptually motivated LP residual estimator in noisy and reverberant environments,” Speech Communication, vol. 96, pp. 129-141, 2018.

[17] T. Yoshioka, T. Nakatani, M. Miyoshi, and H. G. Okuno, “Blind separation and dereverberation of speech mixtures by joint optimization,” IEEE Transactions on Audio, Speech, and Language Processing, vol. 19, no. 1, pp. 69-84, 2010.

[18] K. Kinoshita, M. Delcroix, H. Kwon, T. Mori, and T. Nakatani, “Neural network-based spectrum estimation for online WPE dereverberation,” in Interspeech, 2017, pp. 384-388.

[19] M. Parchami, W.-P. Zhu, and B. Champagne, “Speech dereverberation using weighted prediction error with correlated inter-frame speech components,” Speech Communication, vol. 87, pp. 49-57, 2017.

[20] T. Nakatani and K. Kinoshita, “A unified convolutional beamformer for simultaneous denoising and dereverberation,” IEEE Signal Processing Letters, vol. 26, no. 6, pp. 903-907, 2019.

[21] X. Zhang, Y. Li, C. Zheng, T. Cao, M. Sun, and G. Min, “Research progress and prospect of speech dereverberation technology,” Journal of Data Acquisition and Processing, vol. 32, no. 6, pp. 1069-1081, 2017.

[22] D. Giacobello and T. L. Jensen, “Speech dereverberation based on convex optimization algorithms for group sparse linear prediction,” in 2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2018, pp. 446-450.

[23] S. Braun and E. A. Habets, “Linear prediction-based online dereverberation and noise reduction using alternating Kalman filters,” IEEE/ACM Transactions on Audio, Speech, and Language Processing, vol. 26, no. 6, pp. 1119-1129, 2018.

[24] L. Mousavi, F. Razzazi, and A. Haghbin, “Blind speech dereverberation using sparse decomposition and multi-channel linear prediction,” International Journal of Speech Technology, vol. 22, no. 3, pp. 729-738, 2019.

[25] K. Lebart, J.-M. Boucher, and P. N. Denbigh, “A new method based on spectral subtraction for speech dereverberation,” Acta Acustica united with Acustica, vol. 87, no. 3, pp. 359-366, 2001.

[26] Z. Chen, F. Yin, and W. Peng, “An audio reverberation suppression device and suppression method,” Chinese patent, 2013.

[27] R. Martin, “Speech enhancement based on minimum mean-square error estimation and supergaussian priors,” IEEE Transactions on Speech and Audio Processing, vol. 13, no. 5, pp. 845-856, 2005.

[28] Z. Li, W. Wu, Q. Zhang, and H. Ren, “Multi-band spectral subtraction of speech enhancement based on maximum posteriori phase estimation,” Journal of Electronics and Information Technology, vol. 39, no. 9, pp. 2282-2286, 2017.

[29] Y. Guo, R. Peng, C. Zheng, and X. Li, “Maximum skewness-based multichannel inverse filtering for speech dereverberation,” Applied Acoustics, vol. 38, no. 1, pp. 58-67, 2019.

[30] M. G. Christensen and A. Jakobsson, “Multi-pitch estimation,” Synthesis Lectures on Speech and Audio Processing, vol. 5, no. 1, pp. 1-160, 2009.

[31] B. Harvey and S. O’Young, “A harmonic spectral beamformer for the enhanced localization of propeller-driven aircraft,” Journal of Unmanned Vehicle Systems, vol. 7, no. 2, pp. 156-174, 2019.

[32] A. Schmidt, H. W. Löllmann, and W. Kellermann, “A novel ego-noise suppression algorithm for acoustic signal enhancement in autonomous systems,” in 2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2018, pp. 6583-6587.

[33] T. Nakatani and M. Miyoshi, “Blind dereverberation of single channel speech signal based on harmonic structure,” in 2003 IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP), 2003, vol. 1, pp. I-92.

[34] K. Kinoshita, T. Nakatani, and M. Miyoshi, “Fast estimation of a precise dereverberation filter based on speech harmonicity,” in 2005 IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP), 2005, vol. 1, pp. I-1073-I-1076.

[35] T. Nakatani, K. Kinoshita, and M. Miyoshi, “Harmonicity-based blind dereverberation for single-channel speech signals,” IEEE Transactions on Audio, Speech, and Language Processing, vol. 15, no. 1, pp. 80-95, 2006.

[36] N. Roman and D. Wang, “Pitch-based monaural segregation of reverberant speech,” The Journal of the Acoustical Society of America, vol. 120, no. 1, pp. 458-469, 2006.

[37] S. Mosayyebpour, H. Sheikhzadeh, T. A. Gulliver, and M. Esmaeili, “Single-microphone LP residual skewness-based inverse filtering of the room impulse response,” IEEE Transactions on Audio, Speech, and Language Processing, vol. 20, no. 5, pp. 1617-1632, 2012.

[38] T. Hussain, S. M. Siniscalchi, H.-L. S. Wang, Y. Tsao, V. M. Salerno, and W.-H. Liao, “Ensemble hierarchical extreme learning machine for speech dereverberation,” IEEE Transactions on Cognitive and Developmental Systems, 2019.

[39] N. Kilis and N. Mitianoudis, “A novel scheme for single-channel speech dereverberation,” Acoustics, vol. 1, no. 3, pp. 711-725, 2019.

[40] Y. Zhao, Z.-Q. Wang, and D. Wang, “Two-stage deep learning for noisy-reverberant speech enhancement,” IEEE/ACM Transactions on Audio, Speech, and Language Processing, vol. 27, no. 1, pp. 53-62, 2018.

[41] M. Jeub, C. Nelke, C. Beaugeant, and P. Vary, “Blind estimation of the coherent-to-diffuse energy ratio from noisy speech signals,” in 2011 19th European Signal Processing Conference, 2011, pp. 1347-1351.

[42] A. Schwarz and W. Kellermann, “Coherent-to-diffuse power ratio estimation for dereverberation,” IEEE/ACM Transactions on Audio, Speech, and Language Processing, vol. 23, no. 6, pp. 1006-1018, 2015.
