LONG-TERM PREDICTION

by: Adit Aviv 
      Kfir Grichman

Introduction:

The speech signal has been studied for various reasons and applications by many researchers for many years. Some studies break the speech signal down into its smallest portions, called phonemes. Here, however, we will describe the speech signal in terms of its general characteristics. The traditional vocoders which have been in use for many years classify the input speech signal as either voiced or unvoiced. A voiced speech segment is recognised by its relatively high energy content and, more importantly, by its periodicity; the period is called the pitch of voiced speech. The unvoiced part of speech, on the other hand, looks more like random noise with no periodicity. However, there are some parts of speech that are neither voiced nor unvoiced, but a mixture of the two. These are usually called the transition regions, where there is a change either from voiced to unvoiced or from unvoiced to voiced.

One of the most powerful speech analysis methods is Linear Predictive Coding, or LPC analysis as it is commonly referred to. In LPC analysis the short-term correlations between speech samples (the formants) are modelled and removed by a very efficient short-order filter. Another equally powerful and related method is pitch prediction, in which the long-term correlations between speech samples are modelled. In the following report, these linear prediction techniques will be examined and discussed.

 

LINEAR PREDICTIVE CODING (LPC) OF SPEECH

The linear predictive coding (LPC) method for speech analysis and synthesis is based on modelling the vocal tract as a linear all-pole (IIR) filter having the system function

$H(z) = \dfrac{G}{1 - \sum_{k=1}^{p} a_p(k)\, z^{-k}}$

where p is the number of poles, G is the filter gain, and {a_p(k)} are the parameters that determine the poles. Two mutually exclusive excitation functions are used to model voiced and unvoiced speech sounds. On a short-time basis, voiced speech is periodic with a fundamental frequency F0, or a pitch period 1/F0, which depends on the speaker.
Thus voiced speech is generated by exciting the all-pole filter model with a periodic impulse train whose period equals the desired pitch period.
Unvoiced speech sounds are generated by exciting the all-pole filter model with the output of a random-noise generator.

Block diagram model for the generation of a speech signal
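
As a concrete illustration of this source-filter model, the following minimal synthesis sketch (Python with numpy and scipy assumed; the function name, arguments and parameter values are illustrative, not taken from the original) drives the all-pole filter with either an impulse train or random noise:

import numpy as np
from scipy.signal import lfilter

def synthesize_frame(a, gain, n_samples, voiced, pitch_period=None):
    """Generate one frame of speech from the LPC source-filter model.
    a            : LPC coefficients a_p(1..p)
    gain         : filter gain G
    voiced       : True -> periodic impulse-train excitation (voiced),
                   False -> random-noise excitation (unvoiced)
    pitch_period : pitch period in samples (needed only when voiced)
    """
    if voiced:
        excitation = np.zeros(n_samples)
        excitation[::pitch_period] = 1.0          # periodic impulse train
    else:
        excitation = np.random.randn(n_samples)   # white noise
    # All-pole synthesis filter H(z) = G / (1 - sum_k a_p(k) z^-k)
    return lfilter([gain], np.concatenate(([1.0], -np.asarray(a))), excitation)
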

Given a short-time segment of a speech signal, usually about 20 ms (160 samples at an 8 kHz sampling rate), the speech encoder at the transmitter must determine the proper excitation function, the pitch period for voiced speech, the gain parameter G, and the coefficients a_p(k).
A block diagram that illustrates the speech encoding system is given in the next figure :

Encoder and decoder for LPC

At the receiver the speech signal is synthesized from the model and the excitation signal.

The parameters of the all-pole filter model are easily determined from the speech samples by means of linear prediction.

To be specific, the output of the FIR linear prediction filter is

$\hat{s}(n) = \sum_{k=1}^{p} a_p(k)\, s(n-k)$                 (1.1)

and the corresponding error between the observed sample s(n) and the predicted value is,

$e(n) = s(n) - \hat{s}(n) = s(n) - \sum_{k=1}^{p} a_p(k)\, s(n-k)$                 (1.2)

By minimizing the sum of squared errors, that is,

$\mathcal{E} = \sum_{n} e^2(n) = \sum_{n} \Big[ s(n) - \sum_{k=1}^{p} a_p(k)\, s(n-k) \Big]^2$                 (1.3)

we can determine the pole parameters {a_p(k)} of the model. Differentiating the sum of squared errors with respect to each of the parameters and equating the result to zero yields a set of p linear equations,

$\sum_{k=1}^{p} a_p(k)\, r_{ss}(m-k) = r_{ss}(m), \qquad m = 1, 2, \ldots, p$                 (1.4)

where r_ss(m) is the autocorrelation of the sequence s(n), defined as

$r_{ss}(m) = \sum_{n} s(n)\, s(n+m)$                 (1.5)

The linear equations (1.4) can be expressed in matrix form as

$\mathbf{R}_{ss}\, \mathbf{a} = \mathbf{r}_{ss}$                 (1.6)

where R_ss is a p×p autocorrelation matrix, r_ss is a p×1 autocorrelation vector, and a is a p×1 vector of model parameters. Hence

$\mathbf{a} = \mathbf{R}_{ss}^{-1}\, \mathbf{r}_{ss}$                 (1.7)
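
A minimal numerical sketch of this autocorrelation solution follows (Python with numpy assumed; the function name and the synthetic frame are illustrative, and in practice the Toeplitz structure of R_ss would normally be exploited with the Levinson-Durbin recursion rather than a direct solve):

import numpy as np

def lpc_coefficients(s, p):
    """Estimate LPC coefficients a_p(1..p) for one frame s by solving
    the normal equations R_ss a = r_ss (eqs. 1.4-1.7)."""
    s = np.asarray(s, dtype=float)
    # Autocorrelation r_ss(m) for m = 0..p (eq. 1.5)
    r = np.array([np.dot(s[:len(s) - m], s[m:]) for m in range(p + 1)])
    # p x p Toeplitz autocorrelation matrix R_ss and p x 1 vector r_ss
    R = np.array([[r[abs(i - j)] for j in range(p)] for i in range(p)])
    a = np.linalg.solve(R, r[1:])                  # eq. 1.7
    return a, r

# Illustrative usage on a synthetic 20 ms frame (160 samples at 8 kHz)
frame = np.sin(2 * np.pi * 100 * np.arange(160) / 8000) + 0.01 * np.random.randn(160)
a, r = lpc_coefficients(frame, p=10)
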
LONG-TERM PREDICTION (LTP)

The LPC residual shows the ability of LPC analysis to remove the correlations between adjacent or neighbouring samples present in speech. As observed, this is equivalent to removing the spectral envelope from the signal spectrum. However, as can be seen from the figure below, after LPC analysis there are still considerable variations in the spectrum, i.e. it is far from white.
  
  
  
Spectra of (a) original speech envelope, (b) original speech spectrum, and (c) LPC residual spectrum.

Looking at the residual signal in the Figures above, it is clear that long-term correlations, especially during voiced regions, still exist between samples.

To hear the original signal click here... 
To hear the residual LPC click here...

The most evident of these are the sharp periodic pulses which, being the excitation signal, are hardly surprising, as our original source-filter model assumes this type of input signal. This also explains why the LPC analysis, which models our vocal tract, cannot adequately remove them. Consequently, to remove the periodic structure of the residual or excitation signal, a second stage of prediction is required. The objective of this second stage is again to spectrally flatten our signal, i.e. to remove the fine structure. But unlike the LPC analysis, it exploits correlation between speech samples that are one 'pitch' period, or multiple 'pitch' periods, away. For this reason the pitch predictor (filter) is usually called the long-term predictor (LTP), and the filter delay is called the lag. In the following report, these long-term or distant-sample based predictors will be described.
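
For reference, the short-term residual discussed here is the output of the LPC inverse (analysis) filter; a minimal sketch of that step (Python with scipy assumed, the function name being an illustrative choice):

import numpy as np
from scipy.signal import lfilter

def lpc_residual(s, a):
    """Pass a speech frame s through the LPC inverse (analysis) filter
    A(z) = 1 - sum_k a_p(k) z^-k to obtain the short-term residual r(n)."""
    return lfilter(np.concatenate(([1.0], -np.asarray(a))), [1.0], s)
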

 Pitch predictor (filter) formulation

Before discussing methods of pitch or long-term prediction, it is perhaps worth considering what our objectives are. Our aim is to model the long-term correlation left in the speech residual signal after LPC inverse filtering (or in the original speech signal) such that when the model parameters are used in a filter, it will remove the long-term correlation as much as possible, or spectrally flatten our signal. There are no obvious reasons why we must use the residual and not the original signal to model the long-term correlation in the speech signal, as long as the effects of the formants are taken into account during determination of the long-term delay (pitch) in our model.

The order of the LTP is not too critical if the STP-LTP combination is carefully optimised, e.g. block edge effects must be carefully compensated to avoid 'clicking' type distortions. It is worth noting that the prediction gain of the combined system will always be less than the sum of the gains of systems employing the STP and the LTP in isolation. This is because in reality the vocal tract and excitation are not completely separable, as assumed in our model, but are interconnected. The LTP can be interpreted as the synthesis filter
  
  
$\dfrac{1}{P(z)} = \dfrac{1}{1 - \sum_{j=-I}^{I} b_j\, z^{-(T+j)}}$                 (1.8)
  
where T is the 'pitch period' and the b_j are the 'pitch gain' coefficients, which reflect the amount of correlation between the distant samples. The combined analysis model can be represented by a time-domain difference equation:
$s(n) = \sum_{i=1}^{p} a_i\, s(n-i) + \sum_{j=-I}^{I} b_j\, r(n-T-j) + e(n)$                 (1.9)

where r(n) is the past excitation signal. Following a similar procedure to that of the LPC analysis, our goal is to determine estimates (B_j, T, A_i) of the model parameters (b_j, T, a_i). The prediction error is then given by:
  
$e(n) = s(n) - \sum_{i=1}^{p} A_i\, s(n-i) - \sum_{j=-I}^{I} B_j\, r(n-T-j)$                 (2.0)

The mean squared error solution to equation (2.0) is not as straightforward as for the LPC analysis due to the presence of the delay factor T. To overcome this hurdle, two sub-optimal approaches can be taken:

(1) One-Shot Optimisation: if one assumes that the pitch spectrum information of the residual r(n) is close to the pitch spectrum information of the input speech s(n), then we can solve for the A_i as before and then use the residual from the LPC inverse filter to determine (B, T). Thus, during the first iteration, the STP coefficients are estimated to minimise the intermediate residual energy. The LTP coefficients are then found using this intermediate residual signal. This procedure can be considered near optimal provided the long-term lag, T, is greater than the analysis frame size, i.e. T > N.

(2) Iterative Sequential Approach: an analysis similar to the one-shot method described above is first performed. During subsequent iterations, the STP is re-optimised given the previously determined LTP coefficients, and the LTP is then recalculated based on the newly formed intermediate residual. This iteration process is continued until a certain threshold, or a fixed number of iterations, is reached.

For practical reasons, the one-shot method is usually preferred as it only requires one iteration. In the iterative sequential method the main difficulty is to set a suitable threshold for the termination of the iteration run; overall, it is substantially more complicated. However, the iterative method has been reported to give a better prediction gain and better perceptual performance, usually achieved by shifting prediction gain from the STP to the LTP. Here, only the one-shot method is considered, as follows.
By removing the STP effect in equation (2.0), we obtain 
$e(n) = r(n) - \sum_{j=-I}^{I} B_j\, r(n-T-j)$                 (2.1)

The estimates can now be determined by minimising the mean squared error, i.e.
$E = \mathrm{E}\big[ e^2(n) \big]$                 (2.2)

Replacing the expectation with finite summations, we get 
$E = \sum_{n=0}^{N-1} \Big[ r(n) - \sum_{j=-I}^{I} B_j\, r(n-T-j) \Big]^2$                 (2.3)

By setting $\partial E / \partial B_i$ to zero for each $i = -I, \ldots, I$, we obtain

$\sum_{j=-I}^{I} B_j \sum_{n=0}^{N-1} r(n-T-i)\, r(n-T-j) = \sum_{n=0}^{N-1} r(n)\, r(n-T-i), \qquad i = -I, \ldots, I$                 (2.4)
  
which can be written in matrix form as 
  
$\mathbf{V}\, \mathbf{B} = \mathbf{c}$                 (2.5)
  
where

$V(i,j) = \sum_{n=0}^{N-1} r(n-T-i)\, r(n-T-j)$                 (2.6)

$c(i) = \sum_{n=0}^{N-1} r(n)\, r(n-T-i)$                 (2.7)

The B_j coefficients can now be obtained by inverting V(i,j), e.g. using Cholesky decomposition. In the above formulation, a 'fix-up' may be used to ensure that the resulting filter is stable; e.g. by adding a small noise source into the formulation, the matrix inversion to obtain [V(i,j)]^-1 can be made more reliable. However, a stable LTP is not a pre-condition of the LTP analysis, as rapid transitions are sometimes desired.
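
A small sketch of this multi-tap solve (Python with numpy assumed; the function name, the diagonal-loading value eps used as the 'fix-up', and the index conventions are illustrative choices, not taken from the original):

import numpy as np

def ltp_coefficients(r, T, I=1, eps=1e-6):
    """Solve V B = c (eqs. 2.4-2.7) for the (2I+1)-tap LTP gains B_j,
    given the LPC residual r and a candidate lag T.
    eps adds a small diagonal loading (the 'fix-up') so the Cholesky
    factorisation of V stays well conditioned."""
    r = np.asarray(r, dtype=float)
    N = len(r) - (T + I)                        # usable summation range
    n = np.arange(T + I, T + I + N)             # indices with n - T - j >= 0 for all taps
    taps = np.arange(-I, I + 1)
    D = np.stack([r[n - T - j] for j in taps])  # delayed residual, one row per tap j
    V = D @ D.T + eps * np.eye(2 * I + 1)       # eq. 2.6 plus diagonal loading
    c = D @ r[n]                                # eq. 2.7
    L = np.linalg.cholesky(V)                   # Cholesky decomposition of V
    B = np.linalg.solve(L.T, np.linalg.solve(L, c))
    return B
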

In the above formulation it is assumed that the pitch lag, T, has already been found and that B_j = B_{j,T}. To determine T, various pitch measurement algorithms can be used (in our project we took the 3-tap LTP, I = 1, which forms the pitch prediction from three past samples at lags T - 1, T, and T + 1). These include the autocorrelation, average magnitude difference function (AMDF), cepstrum and maximum likelihood methods. These methods perform with different characteristics, especially with a noisy input signal. For simplicity, the autocorrelation algorithm is used for the general description below.
As the preceding analysis to determine the B_j has shown, pitch analysis is performed on a block containing N samples. However, the window from which the block is taken is required to be considerably longer than the analysis frame length N, because the pitch value T can vary between a minimum, Tmin, of around 16 samples and a maximum, Tmax, of around 150 samples. Therefore, our ideal analysis window is much longer (N + Tmax in length, typically 200-256 samples) so that it contains more than one complete pitch period. For simplicity, consider a 1-tap LTP, i.e. I = 0:

$e(n) = r(n) - B\, r(n-T)$                 (2.8)

Thus

$\dfrac{\partial E}{\partial B} = -2 \sum_{n=0}^{N-1} \big[ r(n) - B\, r(n-T) \big]\, r(n-T) = 0$                 (2.9)

$B = \dfrac{\sum_{n=0}^{N-1} r(n)\, r(n-T)}{\sum_{n=0}^{N-1} r^2(n-T)}$                 (3.0)

Substituting this into equation (2.3),

$E = \sum_{n=0}^{N-1} r^2(n) - \dfrac{\Big[ \sum_{n=0}^{N-1} r(n)\, r(n-T) \Big]^2}{\sum_{n=0}^{N-1} r^2(n-T)}$                 (3.1)

The main problem is to determine the optimum T. Values of the lag are tested between Tmin and Tmax, and the lag which minimises the error E is the optimum. Having found T, the gain B can be found. Plots of the LPC residual and of the signal (secondary excitation) after LTP inverse filtering are shown in the next figures.
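
A compact open-loop version of this lag search (Python with numpy assumed; the function name, the lag bounds, and the convention of keeping Tmax past residual samples in front of the current frame are illustrative assumptions):

import numpy as np

def one_tap_ltp(r, t_min=16, t_max=150):
    """Open-loop 1-tap LTP: test lags T in [t_min, t_max] and keep the one
    that minimises the error E of eq. (3.1); the gain B follows from eq. (3.0).
    The buffer r is assumed to hold t_max past residual samples followed by
    the current analysis frame."""
    r = np.asarray(r, dtype=float)
    cur = r[t_max:]                              # current frame r(n)
    energy = np.dot(cur, cur)                    # sum of r^2(n)
    best = (None, 0.0, np.inf)                   # (T, B, E)
    for T in range(t_min, t_max + 1):
        seg = r[t_max - T:len(r) - T]            # r(n - T), aligned with cur
        cross = np.dot(cur, seg)                 # sum r(n) r(n-T)
        denom = np.dot(seg, seg)                 # sum r^2(n-T)
        if denom <= 0.0:
            continue
        err = energy - cross * cross / denom     # eq. (3.1)
        if err < best[2]:
            best = (T, cross / denom, err)       # gain B from eq. (3.0)
    return best
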

To hear the residual LPC click here...

To hear the residual LTP click here...

Time-domain plots of both LPC and pitch residuals.

It is clear that the secondary excitation no longer possesses the sharp pulse-like characteristics of the LPC residual, i.e. it looks much whiter. A similar formulation can also be given for multiple-tap LTPs.

Multiple-tap LTPs tend, in general, to provide better performance than the single-tap LTP, but with increased complexity and a larger capacity requirement for the extra two filter taps B_-1 and B_+1.

Final conclusion:
Having obtained the "white" residual, we can compress the error signal to a minimum and thereby obtain a clear signal at a low bit rate.
  
To hear the final signal after the receiver click here...
