Author: 66

(Please credit the source when reposting.)

Reference: http://blog.csdn.net/hevc_cjl/article/details/8200793

Intra prediction of the luma component involves quite a few modules: CU-to-PU partitioning, reference sample filling, the Hadamard transform used to compute SATD (a stand-in for full rate-distortion cost), construction of the candidate prediction mode list, and selection of the best mode.

First, some background. 1. CU-to-PU partitioning: in intra prediction, a 2Nx2N CU has two PU partition modes, 2Nx2N and NxN. Inter prediction has eight: 2Nx2N, NxN, 2NxN, Nx2N, nRx2N (left:right 3:1), nLx2N (left:right 1:3), 2NxnU (top:bottom 1:3), and 2NxnD (top:bottom 3:1).
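For reference, the partition shapes listed above can be written down as an enum. This mirrors the ordering of HM's `PartSize` enum (the exact values below are an assumption taken from HM; only `SIZE_2Nx2N` and `SIZE_NxN` are legal for intra CUs):

```cpp
// PU partition shapes, mirroring HM's PartSize enum.
// Only SIZE_2Nx2N and SIZE_NxN may be used by intra CUs;
// the rest (including the four asymmetric AMP shapes) are inter-only.
enum PartSize {
    SIZE_2Nx2N, // whole CU as one PU
    SIZE_2NxN,  // two horizontal halves (inter only)
    SIZE_Nx2N,  // two vertical halves (inter only)
    SIZE_NxN,   // four quadrants
    SIZE_2NxnU, // horizontal 1:3 split, narrow part on top (AMP)
    SIZE_2NxnD, // horizontal 3:1 split, narrow part at the bottom (AMP)
    SIZE_nLx2N, // vertical 1:3 split, narrow part on the left (AMP)
    SIZE_nRx2N  // vertical 3:1 split, narrow part on the right (AMP)
};
```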

2. Reference sample filling: as mentioned before, HEVC adds the below-left boundary to the reference samples (compared with H.264), to go with the added angular modes.

3. The Walsh-Hadamard Transform (WHT) is a member of the generalized Fourier transform family. Its transform matrix Hm has power-of-two size, as shown below:

Figure 1. Hadamard matrix

The main properties of this matrix: its entries are +1/-1, so it is cheap to compute; it is orthogonal and symmetric; its odd rows (columns) are even-symmetric and its even rows (columns) odd-symmetric; the transform preserves energy; and because the sum of absolute values of a residual's WHT coefficients is close to that of its DCT coefficients, it can serve as a fast mode-selection metric in video coding.
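As an illustration, a minimal 4x4 Walsh-Hadamard SATD can be sketched as follows. This is only the textbook form of the idea behind HM's `m_pcRdCost->calcHAD`; the real implementation uses butterfly-optimized kernels and applies an extra normalisation factor.

```cpp
#include <cassert>
#include <cstdlib>

// 4x4 Walsh-Hadamard SATD on a residual block (org - pred).
// Entries of H are +1/-1, so the "multiplications" are just adds/subtracts.
static int satd4x4(const int diff[4][4])
{
    static const int H[4][4] = {
        { 1,  1,  1,  1 },
        { 1, -1,  1, -1 },
        { 1,  1, -1, -1 },
        { 1, -1, -1,  1 },
    };
    int tmp[4][4], out[4][4];
    // tmp = H * diff
    for (int i = 0; i < 4; i++)
        for (int j = 0; j < 4; j++) {
            tmp[i][j] = 0;
            for (int k = 0; k < 4; k++)
                tmp[i][j] += H[i][k] * diff[k][j];
        }
    // out = tmp * H^T (H is symmetric, so H^T == H)
    for (int i = 0; i < 4; i++)
        for (int j = 0; j < 4; j++) {
            out[i][j] = 0;
            for (int k = 0; k < 4; k++)
                out[i][j] += tmp[i][k] * H[j][k];
        }
    // SATD = sum of absolute transformed coefficients
    // (HM additionally scales the result, e.g. halving it for the 4x4 kernel).
    int sum = 0;
    for (int i = 0; i < 4; i++)
        for (int j = 0; j < 4; j++)
            sum += std::abs(out[i][j]);
    return sum;
}
```

A flat residual concentrates all energy in the DC coefficient: for an all-ones 4x4 block the only nonzero transformed coefficient is 16, so the unscaled SATD is 16.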

4. Selecting the candidate prediction modes:

Figure 2. Position of the reference PUs relative to the current PU

HEVC defines 35 luma intra prediction modes: Planar (0), DC (1), and 33 angular modes (2-34).

As the figure shows, intra prediction consults the neighbouring PUs (the left PU's mode, modeA, and the above PU's mode, modeB) to build a three-entry candidate list, candModeList[3].

The list is built by the following rules:

① modeA and modeB are the same

1. Both Planar or both DC:

candModeList[3] = {Planar, DC, angular 26 (vertical)}

2. Both the same angular mode:

candModeList[3] = {modeA, modeA-1, modeA+1}

Note: the ±1 neighbours wrap around within the angular range 2-34. HM computes them as 2 + ((modeA + 29) % 32) and 2 + ((modeA - 1) % 32), so for mode 2 the neighbours are 33 and 3, and for mode 34 they are likewise 33 and 3.

② modeA and modeB differ

candModeList = {modeA, modeB, choice}

The third entry, choice, is picked as follows:

1. If neither modeA nor modeB is Planar:

choice = Planar

2. Otherwise, if neither is DC:

choice = DC

3. Otherwise (one is Planar and the other is DC):

choice = 26 (vertical)
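The selection rules above can be condensed into a small sketch. It is modelled on HM's `TComDataCU::getIntraDirLumaPredictor`, but simplified to take the two neighbouring modes directly; the wrap-around uses HM's modular formula:

```cpp
#include <cassert>

// Build the 3-entry MPM candidate list from the left (modeA) and
// above (modeB) neighbouring modes. Planar = 0, DC = 1, angular = 2..34.
static void buildCandModeList(int modeA, int modeB, int candModeList[3])
{
    if (modeA == modeB) {
        if (modeA < 2) {
            // Both Planar or both DC
            candModeList[0] = 0;   // Planar
            candModeList[1] = 1;   // DC
            candModeList[2] = 26;  // angular 26 (vertical)
        } else {
            // Same angular mode: +-1 neighbours with wrap-around over 2..34
            candModeList[0] = modeA;
            candModeList[1] = 2 + ((modeA + 29) % 32); // modeA - 1, wrapped
            candModeList[2] = 2 + ((modeA - 1) % 32);  // modeA + 1, wrapped
        }
    } else {
        candModeList[0] = modeA;
        candModeList[1] = modeB;
        if (modeA != 0 && modeB != 0)
            candModeList[2] = 0;   // neither is Planar -> Planar
        else if (modeA != 1 && modeB != 1)
            candModeList[2] = 1;   // neither is DC -> DC
        else
            candModeList[2] = 26;  // one Planar, one DC -> vertical
    }
}
```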

After candModeList is built and the rate-distortion sweep has determined the best prediction mode modeC, the PU's mode is coded as follows:

① If modeC is in the list, only its position within the list is coded.

② If it is not, the following steps are taken:

1. Sort the candidate modes in candModeList in ascending order.

2. Compare modeC against each entry in the list: whenever modeC > candModeList[i] (modeC can never equal an entry here), decrement modeC by 1. After the loop, code the final value of modeC.
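Steps ① and ② can be sketched like this (the function name mapModeForCoding is made up for illustration; in HM the equivalent logic lives in `TEncSbac::codeIntraDirLumaAng`):

```cpp
#include <algorithm>
#include <cassert>

// Map the selected luma mode for entropy coding.
// Returns the MPM index (0..2) if bestMode is in the list; otherwise
// returns -1 and stores in *rem the shrunken remainder mode to code
// (5 bits cover the 32 non-MPM modes).
static int mapModeForCoding(int bestMode, const int candModeList[3], int *rem)
{
    for (int i = 0; i < 3; i++)
        if (bestMode == candModeList[i])
            return i;               // signal only the position in the list

    int sorted[3] = { candModeList[0], candModeList[1], candModeList[2] };
    std::sort(sorted, sorted + 3);  // step 1: ascending order
    int mode = bestMode;
    for (int i = 2; i >= 0; i--)    // largest entry first
        if (mode > sorted[i])
            mode--;                 // step 2: shrink past each smaller MPM
    *rem = mode;
    return -1;
}
```

For example, with candModeList = {0, 1, 26}, mode 26 is signalled as MPM index 2, while mode 10 shrinks past entries 0 and 1 and is coded as remainder 8.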

Overall, the flow is:

Partition the current CU into PUs → loop over each PU → for each PU find the best of the 35 modes, compare it against the predicted (MPM) modes, and finally settle on the best prediction mode.

The details are in the code below. I still have some open questions about it and will revise this post once they are resolved:

```cpp
// Intra prediction search for the luma component
Void TEncSearch::estIntraPredQT( TComDataCU* pcCU,
                                 TComYuv*    pcOrgYuv,
                                 TComYuv*    pcPredYuv,
                                 TComYuv*    pcResiYuv,
                                 TComYuv*    pcRecoYuv,
                                 UInt&       ruiDistC,
                                 Bool        bLumaOnly )
{
  UInt    uiDepth        = pcCU->getDepth(0); // depth of the current CU
  // uiNumPU: number of PUs after partitioning. Intra has 2 partition modes (2Nx2N, NxN);
  // inter has 8: four symmetric (2Nx2N, 2NxN, Nx2N, NxN) and four asymmetric
  // (2NxnU top 1:3, 2NxnD top 3:1, nLx2N left 1:3, nRx2N left 3:1).
  // Inter additionally has a skip mode, used when no residual needs to be coded.
  UInt    uiNumPU        = pcCU->getNumPartitions(); // number of PUs in the current CU
  UInt    uiInitTrDepth  = pcCU->getPartitionSize(0) == SIZE_2Nx2N ? 0 : 1; // initial transform depth
  UInt    uiWidth        = pcCU->getWidth (0) >> uiInitTrDepth;  // PU width
  UInt    uiHeight       = pcCU->getHeight(0) >> uiInitTrDepth;  // PU height
  UInt    uiQNumParts    = pcCU->getTotalNumPart() >> 2; // a quarter of the CU's 4x4 minimum partitions
  UInt    uiWidthBit     = pcCU->getIntraSizeIdx(0);
  UInt    uiOverallDistY = 0;
  UInt    uiOverallDistC = 0;
  UInt    CandNum;
  Double  CandCostList[ FAST_UDI_MAX_RDMODE_NUM ];

  //===== set QP and clear Cbf =====
  if ( pcCU->getSlice()->getPPS()->getUseDQP() == true )
  {
    pcCU->setQPSubParts( pcCU->getQP(0), 0, uiDepth );
  }
  else
  {
    pcCU->setQPSubParts( pcCU->getSlice()->getSliceQp(), 0, uiDepth );
  }

  //===== loop over partitions =====
  UInt uiPartOffset = 0; // Z-order address of the current PU
  for( UInt uiPU = 0; uiPU < uiNumPU; uiPU++, uiPartOffset += uiQNumParts )
  {
    //===== init pattern for luma prediction =====
    Bool bAboveAvail = false;
    Bool bLeftAvail  = false;
    pcCU->getPattern()->initPattern   ( pcCU, uiInitTrDepth, uiPartOffset );
    // Check availability of the PU's neighbours and filter the reference samples.
    // The width/height here are named after the CU, but PU/TU splitting is an implicit,
    // depth-based split, so the "CU" in these names effectively stands for the PU or TU.
    pcCU->getPattern()->initAdiPattern( pcCU, uiPartOffset, uiInitTrDepth, m_piYuvExt, m_iYuvExtStride, m_iYuvExtHeight, bAboveAvail, bLeftAvail );

    //===== determine set of modes to be tested (using prediction signal only) =====
    Int numModesAvailable     = 35; // total number of intra modes
    Pel* piOrg         = pcOrgYuv ->getLumaAddr( uiPU, uiWidth );
    Pel* piPred        = pcPredYuv->getLumaAddr( uiPU, uiWidth );
    UInt uiStride      = pcPredYuv->getStride();
    UInt uiRdModeList[FAST_UDI_MAX_RDMODE_NUM];
    // Number of modes kept for the full RD search:
    // g_aucIntraModeNumFast[] = {3,8,8,3,3,3,3} for 2x2,4x4,8x8,16x16,32x32,64x64,128x128
    Int numModesForFullRD = g_aucIntraModeNumFast[ uiWidthBit ];
    Bool doFastSearch = (numModesForFullRD != numModesAvailable); // always true here
    if (doFastSearch) // always taken
    {
      assert(numModesForFullRD < numModesAvailable);
      for( Int i = 0; i < numModesForFullRD; i++ )
      {
        CandCostList[ i ] = MAX_DOUBLE; // initialise the cost list to MAX for later comparison
      }
      CandNum = 0;
      for( Int modeIdx = 0; modeIdx < numModesAvailable; modeIdx++ ) // loop over all 35 prediction modes
      {
        UInt uiMode = modeIdx;
        // generate the luma intra prediction for this mode
        predIntraLumaAng( pcCU->getPattern(), uiMode, piPred, uiStride, uiWidth, uiHeight, bAboveAvail, bLeftAvail );
        // use hadamard transform here: SATD = sum of absolute Hadamard-transformed residuals
        UInt uiSad = m_pcRdCost->calcHAD(g_bitDepthY, piOrg, uiStride, piPred, uiStride, uiWidth, uiHeight );
        UInt   iModeBits = xModeBitsIntra( pcCU, uiMode, uiPU, uiPartOffset, uiDepth, uiInitTrDepth ); // bits needed to code this mode
        Double cost      = (Double)uiSad + (Double)iModeBits * m_pcRdCost->getSqrtLambda(); // RD cost estimate
        CandNum += xUpdateCandList( uiMode, cost, numModesForFullRD, uiRdModeList, CandCostList ); // update the candidate mode list
      }
#if FAST_UDI_USE_MPM
      Int uiPreds[3] = {-1, -1, -1};
      Int iMode = -1; // iMode = 1 if the two neighbouring modes are equal, otherwise 2
      Int numCand = pcCU->getIntraDirLumaPredictor( uiPartOffset, uiPreds, &iMode ); // get the three luma MPMs
      if( iMode >= 0 )
      {
        numCand = iMode;
      }
      for( Int j = 0; j < numCand; j++ )
      {
        Bool mostProbableModeIncluded = false;
        Int mostProbableMode = uiPreds[j]; // take one predicted MPM
        for( Int i = 0; i < numModesForFullRD; i++ )
        {
          mostProbableModeIncluded |= (mostProbableMode == uiRdModeList[i]); // already in uiRdModeList?
        }
        if (!mostProbableModeIncluded) // if not, append it to the full-RD candidate list
        {
          uiRdModeList[numModesForFullRD++] = mostProbableMode;
        }
      }
#endif // FAST_UDI_USE_MPM
    }
    else
    {
      for( Int i = 0; i < numModesForFullRD; i++ )
      {
        uiRdModeList[i] = i;
      }
    }

    // Determining the best intra mode takes the following steps:
    // 1. loop over the numModesForFullRD candidates and compute RD costs,
    //    recursing at most one extra depth, which speeds things up;
    // 2. keep the best mode (and possibly the second-best as well);
    // 3. re-run the retained mode(s) with a full TU split search for the final result.
    //===== check modes (using r-d costs) =====
#if HHI_RQT_INTRA_SPEEDUP_MOD
    UInt   uiSecondBestMode  = MAX_UINT;
    Double dSecondBestPUCost = MAX_DOUBLE;
#endif
    UInt    uiBestPUMode  = 0;          // best prediction mode
    UInt    uiBestPUDistY = 0;          // its luma distortion
    UInt    uiBestPUDistC = 0;          // its chroma distortion
    Double  dBestPUCost   = MAX_DOUBLE; // its RD cost
    for( UInt uiMode = 0; uiMode < numModesForFullRD; uiMode++ )
    {
      // set luma prediction mode
      UInt uiOrgMode = uiRdModeList[uiMode];
      pcCU->setLumaIntraDirSubParts ( uiOrgMode, uiPartOffset, uiDepth + uiInitTrDepth );
      // set context models
      m_pcRDGoOnSbacCoder->load( m_pppcRDSbacCoder[uiDepth][CI_CURR_BEST] );
      // determine residual for partition
      UInt   uiPUDistY = 0;   // luma distortion of the current mode
      UInt   uiPUDistC = 0;   // chroma distortion of the current mode
      Double dPUCost   = 0.0; // RD cost of the current mode
#if HHI_RQT_INTRA_SPEEDUP
      xRecurIntraCodingQT( pcCU, uiInitTrDepth, uiPartOffset, bLumaOnly, pcOrgYuv, pcPredYuv, pcResiYuv, uiPUDistY, uiPUDistC, true, dPUCost );
#else
      xRecurIntraCodingQT( pcCU, uiInitTrDepth, uiPartOffset, bLumaOnly, pcOrgYuv, pcPredYuv, pcResiYuv, uiPUDistY, uiPUDistC, dPUCost );
#endif
      // check r-d cost
      if( dPUCost < dBestPUCost ) // update the best-mode bookkeeping
      {
#if HHI_RQT_INTRA_SPEEDUP_MOD
        // second-best mode
        uiSecondBestMode  = uiBestPUMode;
        dSecondBestPUCost = dBestPUCost;
#endif
        uiBestPUMode  = uiOrgMode;
        uiBestPUDistY = uiPUDistY;
        uiBestPUDistC = uiPUDistC;
        dBestPUCost   = dPUCost;
        xSetIntraResultQT( pcCU, uiInitTrDepth, uiPartOffset, bLumaOnly, pcRecoYuv );
        UInt uiQPartNum = pcCU->getPic()->getNumPartInCU() >> ( ( pcCU->getDepth(0) + uiInitTrDepth ) << 1 );
        ::memcpy( m_puhQTTempTrIdx,  pcCU->getTransformIdx()       + uiPartOffset, uiQPartNum * sizeof( UChar ) );
        ::memcpy( m_puhQTTempCbf[0], pcCU->getCbf( TEXT_LUMA     ) + uiPartOffset, uiQPartNum * sizeof( UChar ) );
        ::memcpy( m_puhQTTempCbf[1], pcCU->getCbf( TEXT_CHROMA_U ) + uiPartOffset, uiQPartNum * sizeof( UChar ) );
        ::memcpy( m_puhQTTempCbf[2], pcCU->getCbf( TEXT_CHROMA_V ) + uiPartOffset, uiQPartNum * sizeof( UChar ) );
        ::memcpy( m_puhQTTempTransformSkipFlag[0], pcCU->getTransformSkip(TEXT_LUMA)     + uiPartOffset, uiQPartNum * sizeof( UChar ) );
        ::memcpy( m_puhQTTempTransformSkipFlag[1], pcCU->getTransformSkip(TEXT_CHROMA_U) + uiPartOffset, uiQPartNum * sizeof( UChar ) );
        ::memcpy( m_puhQTTempTransformSkipFlag[2], pcCU->getTransformSkip(TEXT_CHROMA_V) + uiPartOffset, uiQPartNum * sizeof( UChar ) );
      }
#if HHI_RQT_INTRA_SPEEDUP_MOD
      else if( dPUCost < dSecondBestPUCost )
      {
        uiSecondBestMode  = uiOrgMode;
        dSecondBestPUCost = dPUCost;
      }
#endif
    } // Mode loop
#if HHI_RQT_INTRA_SPEEDUP
#if HHI_RQT_INTRA_SPEEDUP_MOD
    for( UInt ui = 0; ui < 2; ++ui )
#endif
    {
#if HHI_RQT_INTRA_SPEEDUP_MOD
      UInt uiOrgMode   = ui ? uiSecondBestMode  : uiBestPUMode;
      if( uiOrgMode == MAX_UINT )
      {
        break;
      }
#else
      UInt uiOrgMode = uiBestPUMode; // use the best mode found above
#endif
      pcCU->setLumaIntraDirSubParts ( uiOrgMode, uiPartOffset, uiDepth + uiInitTrDepth );
      // set context models
      m_pcRDGoOnSbacCoder->load( m_pppcRDSbacCoder[uiDepth][CI_CURR_BEST] );
      // determine residual for partition
      UInt   uiPUDistY = 0;
      UInt   uiPUDistC = 0;
      Double dPUCost   = 0.0;
      // note the second-to-last argument is now false: full TU recursion
      xRecurIntraCodingQT( pcCU, uiInitTrDepth, uiPartOffset, bLumaOnly, pcOrgYuv, pcPredYuv, pcResiYuv, uiPUDistY, uiPUDistC, false, dPUCost );
      // check r-d cost
      if( dPUCost < dBestPUCost )
      {
        uiBestPUMode  = uiOrgMode;
        uiBestPUDistY = uiPUDistY;
        uiBestPUDistC = uiPUDistC;
        dBestPUCost   = dPUCost;
        xSetIntraResultQT( pcCU, uiInitTrDepth, uiPartOffset, bLumaOnly, pcRecoYuv );
        UInt uiQPartNum = pcCU->getPic()->getNumPartInCU() >> ( ( pcCU->getDepth(0) + uiInitTrDepth ) << 1 );
        ::memcpy( m_puhQTTempTrIdx,  pcCU->getTransformIdx()       + uiPartOffset, uiQPartNum * sizeof( UChar ) );
        ::memcpy( m_puhQTTempCbf[0], pcCU->getCbf( TEXT_LUMA     ) + uiPartOffset, uiQPartNum * sizeof( UChar ) );
        ::memcpy( m_puhQTTempCbf[1], pcCU->getCbf( TEXT_CHROMA_U ) + uiPartOffset, uiQPartNum * sizeof( UChar ) );
        ::memcpy( m_puhQTTempCbf[2], pcCU->getCbf( TEXT_CHROMA_V ) + uiPartOffset, uiQPartNum * sizeof( UChar ) );
        ::memcpy( m_puhQTTempTransformSkipFlag[0], pcCU->getTransformSkip(TEXT_LUMA)     + uiPartOffset, uiQPartNum * sizeof( UChar ) );
        ::memcpy( m_puhQTTempTransformSkipFlag[1], pcCU->getTransformSkip(TEXT_CHROMA_U) + uiPartOffset, uiQPartNum * sizeof( UChar ) );
        ::memcpy( m_puhQTTempTransformSkipFlag[2], pcCU->getTransformSkip(TEXT_CHROMA_V) + uiPartOffset, uiQPartNum * sizeof( UChar ) );
      }
    } // Mode loop
#endif

    //--- update overall distortion ---
    uiOverallDistY += uiBestPUDistY;
    uiOverallDistC += uiBestPUDistC;

    //--- update transform index and cbf ---
    UInt uiQPartNum = pcCU->getPic()->getNumPartInCU() >> ( ( pcCU->getDepth(0) + uiInitTrDepth ) << 1 );
    ::memcpy( pcCU->getTransformIdx()       + uiPartOffset, m_puhQTTempTrIdx,  uiQPartNum * sizeof( UChar ) );
    ::memcpy( pcCU->getCbf( TEXT_LUMA     ) + uiPartOffset, m_puhQTTempCbf[0], uiQPartNum * sizeof( UChar ) );
    ::memcpy( pcCU->getCbf( TEXT_CHROMA_U ) + uiPartOffset, m_puhQTTempCbf[1], uiQPartNum * sizeof( UChar ) );
    ::memcpy( pcCU->getCbf( TEXT_CHROMA_V ) + uiPartOffset, m_puhQTTempCbf[2], uiQPartNum * sizeof( UChar ) );
    ::memcpy( pcCU->getTransformSkip(TEXT_LUMA)     + uiPartOffset, m_puhQTTempTransformSkipFlag[0], uiQPartNum * sizeof( UChar ) );
    ::memcpy( pcCU->getTransformSkip(TEXT_CHROMA_U) + uiPartOffset, m_puhQTTempTransformSkipFlag[1], uiQPartNum * sizeof( UChar ) );
    ::memcpy( pcCU->getTransformSkip(TEXT_CHROMA_V) + uiPartOffset, m_puhQTTempTransformSkipFlag[2], uiQPartNum * sizeof( UChar ) );

    //--- set reconstruction for next intra prediction blocks ---
    if( uiPU != uiNumPU - 1 )
    {
      Bool bSkipChroma  = false;
      Bool bChromaSame  = false;
      UInt uiLog2TrSize = g_aucConvertToBit[ pcCU->getSlice()->getSPS()->getMaxCUWidth() >> ( pcCU->getDepth(0) + uiInitTrDepth ) ] + 2;
      if( !bLumaOnly && uiLog2TrSize == 2 )
      {
        assert( uiInitTrDepth  > 0 );
        bSkipChroma  = ( uiPU != 0 );
        bChromaSame  = true;
      }
      UInt    uiCompWidth   = pcCU->getWidth ( 0 ) >> uiInitTrDepth;
      UInt    uiCompHeight  = pcCU->getHeight( 0 ) >> uiInitTrDepth;
      UInt    uiZOrder      = pcCU->getZorderIdxInCU() + uiPartOffset;
      Pel*    piDes         = pcCU->getPic()->getPicYuvRec()->getLumaAddr( pcCU->getAddr(), uiZOrder );
      UInt    uiDesStride   = pcCU->getPic()->getPicYuvRec()->getStride();
      Pel*    piSrc         = pcRecoYuv->getLumaAddr( uiPartOffset );
      UInt    uiSrcStride   = pcRecoYuv->getStride();
      for( UInt uiY = 0; uiY < uiCompHeight; uiY++, piSrc += uiSrcStride, piDes += uiDesStride )
      {
        for( UInt uiX = 0; uiX < uiCompWidth; uiX++ )
        {
          piDes[ uiX ] = piSrc[ uiX ];
        }
      }
      if( !bLumaOnly && !bSkipChroma )
      {
        if( !bChromaSame )
        {
          uiCompWidth   >>= 1;
          uiCompHeight  >>= 1;
        }
        piDes         = pcCU->getPic()->getPicYuvRec()->getCbAddr( pcCU->getAddr(), uiZOrder );
        uiDesStride   = pcCU->getPic()->getPicYuvRec()->getCStride();
        piSrc         = pcRecoYuv->getCbAddr( uiPartOffset );
        uiSrcStride   = pcRecoYuv->getCStride();
        for( UInt uiY = 0; uiY < uiCompHeight; uiY++, piSrc += uiSrcStride, piDes += uiDesStride )
        {
          for( UInt uiX = 0; uiX < uiCompWidth; uiX++ )
          {
            piDes[ uiX ] = piSrc[ uiX ];
          }
        }
        piDes         = pcCU->getPic()->getPicYuvRec()->getCrAddr( pcCU->getAddr(), uiZOrder );
        piSrc         = pcRecoYuv->getCrAddr( uiPartOffset );
        for( UInt uiY = 0; uiY < uiCompHeight; uiY++, piSrc += uiSrcStride, piDes += uiDesStride )
        {
          for( UInt uiX = 0; uiX < uiCompWidth; uiX++ )
          {
            piDes[ uiX ] = piSrc[ uiX ];
          }
        }
      }
    }

    //=== update PU data ====
    pcCU->setLumaIntraDirSubParts     ( uiBestPUMode, uiPartOffset, uiDepth + uiInitTrDepth );
    pcCU->copyToPic                   ( uiDepth, uiPU, uiInitTrDepth );
  } // PU loop

  if( uiNumPU > 1 )
  { // set Cbf for all blocks
    UInt uiCombCbfY = 0;
    UInt uiCombCbfU = 0;
    UInt uiCombCbfV = 0;
    UInt uiPartIdx  = 0;
    for( UInt uiPart = 0; uiPart < 4; uiPart++, uiPartIdx += uiQNumParts )
    {
      uiCombCbfY |= pcCU->getCbf( uiPartIdx, TEXT_LUMA,     1 );
      uiCombCbfU |= pcCU->getCbf( uiPartIdx, TEXT_CHROMA_U, 1 );
      uiCombCbfV |= pcCU->getCbf( uiPartIdx, TEXT_CHROMA_V, 1 );
    }
    for( UInt uiOffs = 0; uiOffs < 4 * uiQNumParts; uiOffs++ )
    {
      pcCU->getCbf( TEXT_LUMA     )[ uiOffs ] |= uiCombCbfY;
      pcCU->getCbf( TEXT_CHROMA_U )[ uiOffs ] |= uiCombCbfU;
      pcCU->getCbf( TEXT_CHROMA_V )[ uiOffs ] |= uiCombCbfV;
    }
  }

  //===== reset context models =====
  m_pcRDGoOnSbacCoder->load(m_pppcRDSbacCoder[uiDepth][CI_CURR_BEST]);

  //===== set distortion (rate and r-d costs are determined later) =====
  ruiDistC                   = uiOverallDistC;
  pcCU->getTotalDistortion() = uiOverallDistY + uiOverallDistC;
}

Void TEncSearch::estIntraPredChromaQT( TComDataCU* pcCU,
                                       TComYuv*    pcOrgYuv,
                                       TComYuv*    pcPredYuv,
                                       TComYuv*    pcResiYuv,
                                       TComYuv*    pcRecoYuv,
                                       UInt        uiPreCalcDistC )
{
  UInt    uiDepth     = pcCU->getDepth(0);
  UInt    uiBestMode  = 0;
  UInt    uiBestDist  = 0;
  Double  dBestCost   = MAX_DOUBLE;

  //----- init mode list -----
  UInt  uiMinMode = 0;
  UInt  uiModeList[ NUM_CHROMA_MODE ];
  pcCU->getAllowedChromaDir( 0, uiModeList );
  UInt  uiMaxMode = NUM_CHROMA_MODE;

  //----- check chroma modes -----
  for( UInt uiMode = uiMinMode; uiMode < uiMaxMode; uiMode++ )
  {
    //----- restore context models -----
    m_pcRDGoOnSbacCoder->load( m_pppcRDSbacCoder[uiDepth][CI_CURR_BEST] );
    //----- chroma coding -----
    UInt    uiDist = 0;
    pcCU->setChromIntraDirSubParts  ( uiModeList[uiMode], 0, uiDepth );
    xRecurIntraChromaCodingQT       ( pcCU,   0, 0, pcOrgYuv, pcPredYuv, pcResiYuv, uiDist );
    if( pcCU->getSlice()->getPPS()->getUseTransformSkip() )
    {
      m_pcRDGoOnSbacCoder->load( m_pppcRDSbacCoder[uiDepth][CI_CURR_BEST] );
    }
    UInt    uiBits = xGetIntraBitsQT( pcCU,   0, 0, false, true, false );
    Double  dCost  = m_pcRdCost->calcRdCost( uiBits, uiDist );
    //----- compare -----
    if( dCost < dBestCost )
    {
      dBestCost   = dCost;
      uiBestDist  = uiDist;
      uiBestMode  = uiModeList[uiMode];
      UInt  uiQPN = pcCU->getPic()->getNumPartInCU() >> ( uiDepth << 1 );
      xSetIntraResultChromaQT( pcCU, 0, 0, pcRecoYuv );
      ::memcpy( m_puhQTTempCbf[1], pcCU->getCbf( TEXT_CHROMA_U ), uiQPN * sizeof( UChar ) );
      ::memcpy( m_puhQTTempCbf[2], pcCU->getCbf( TEXT_CHROMA_V ), uiQPN * sizeof( UChar ) );
      ::memcpy( m_puhQTTempTransformSkipFlag[1], pcCU->getTransformSkip( TEXT_CHROMA_U ), uiQPN * sizeof( UChar ) );
      ::memcpy( m_puhQTTempTransformSkipFlag[2], pcCU->getTransformSkip( TEXT_CHROMA_V ), uiQPN * sizeof( UChar ) );
    }
  }

  //----- set data -----
  UInt  uiQPN = pcCU->getPic()->getNumPartInCU() >> ( uiDepth << 1 );
  ::memcpy( pcCU->getCbf( TEXT_CHROMA_U ), m_puhQTTempCbf[1], uiQPN * sizeof( UChar ) );
  ::memcpy( pcCU->getCbf( TEXT_CHROMA_V ), m_puhQTTempCbf[2], uiQPN * sizeof( UChar ) );
  ::memcpy( pcCU->getTransformSkip( TEXT_CHROMA_U ), m_puhQTTempTransformSkipFlag[1], uiQPN * sizeof( UChar ) );
  ::memcpy( pcCU->getTransformSkip( TEXT_CHROMA_V ), m_puhQTTempTransformSkipFlag[2], uiQPN * sizeof( UChar ) );
  pcCU->setChromIntraDirSubParts( uiBestMode, 0, uiDepth );
  pcCU->getTotalDistortion      () += uiBestDist - uiPreCalcDistC;

  //----- restore context models -----
  m_pcRDGoOnSbacCoder->load( m_pppcRDSbacCoder[uiDepth][CI_CURR_BEST] );
}
```
