It has been a while since I last wrote anything. Today a colleague happened to ask me about the audio stream path, and it had been long enough that I couldn't quite remember the details, so I'm writing them down here. That way I won't have to dig through the code all over again next time; as they say, the palest ink beats the best memory.

So this article is about how data gets from AudioTrack into AudioFlinger, and how AudioFlinger then writes it down to the HAL layer. The focus is on AudioFlinger's processing flow, along with some of the details.

Getting the audio stream

1. The client writes data:

After the app client creates an AudioTrack, it keeps calling AudioTrack's write method during playback, continuously writing data to AudioFlinger.

//frameworks\av\media\libaudioclient\AudioTrack.cpp
ssize_t AudioTrack::write(const void* buffer, size_t userSize, bool blocking)
{
    if (mTransfer != TRANSFER_SYNC) {
        return INVALID_OPERATION;
    }

    if (isDirect()) {
        AutoMutex lock(mLock);
        int32_t flags = android_atomic_and(
                ~(CBLK_UNDERRUN | CBLK_LOOP_CYCLE | CBLK_LOOP_FINAL | CBLK_BUFFER_END),
                &mCblk->mFlags);
        if (flags & CBLK_INVALID) {
            return DEAD_OBJECT;
        }
    }

    if (ssize_t(userSize) < 0 || (buffer == NULL && userSize != 0)) {
        // Sanity-check: user is most-likely passing an error code, and it would
        // make the return value ambiguous (actualSize vs error).
        ALOGE("AudioTrack::write(buffer=%p, size=%zu (%zd)", buffer, userSize, userSize);
        return BAD_VALUE;
    }

    size_t written = 0;
    Buffer audioBuffer;

    while (userSize >= mFrameSize) {
        audioBuffer.frameCount = userSize / mFrameSize;

        status_t err = obtainBuffer(&audioBuffer,
                blocking ? &ClientProxy::kForever : &ClientProxy::kNonBlocking);
        if (err < 0) {
            if (written > 0) {
                break;
            }
            if (err == TIMED_OUT || err == -EINTR) {
                err = WOULD_BLOCK;
            }
            return ssize_t(err);
        }

        size_t toWrite = audioBuffer.size;
        memcpy(audioBuffer.i8, buffer, toWrite);
        buffer = ((const char *) buffer) + toWrite;
        userSize -= toWrite;
        written += toWrite;

        releaseBuffer(&audioBuffer);
    }

    if (written > 0) {
        mFramesWritten += written / mFrameSize;
    }
    return written;
}

2. The service-side write

On the service side, AudioFlinger keeps receiving and processing the data written by the client.

//frameworks/av/services/audioflinger/Tracks.cpp
bool AudioFlinger::PlaybackThread::OutputTrack::write(void* data, uint32_t frames)
{
    Buffer *pInBuffer;
    Buffer inBuffer;
    bool outputBufferFull = false;
    inBuffer.frameCount = frames;
    inBuffer.raw = data;
    // Handle the data written by the client. The ring buffer is not covered
    // in detail here; read the code yourself if you are interested.
}

AudioFlinger audio stream processing

The main processing steps:
Track pre-processing
Mixing
Writing the output to the HAL

bool AudioFlinger::PlaybackThread::threadLoop()
{
    // 1. Track pre-processing
    mMixerStatus = prepareTracks_l(&tracksToRemove);

    // 2. Mixing
    threadLoop_mix();

    if (mMixerBufferValid) {
        void *buffer = mEffectBufferValid ? mEffectBuffer : mSinkBuffer;
        audio_format_t format = mEffectBufferValid ? mEffectBufferFormat : mFormat;

        // mono blend occurs for mixer threads only (not direct or offloaded)
        // and is handled here if we're going directly to the sink.
        if (requireMonoBlend() && !mEffectBufferValid) {
            mono_blend(mMixerBuffer, mMixerBufferFormat, mChannelCount,
                    mNormalFrameCount, true /*limit*/);
        }

        // Copy mMixerBuffer out in the HAL's format; in the end the data
        // still lands in mSinkBuffer.
        memcpy_by_audio_format(buffer, format, mMixerBuffer, mMixerBufferFormat,
                mNormalFrameCount * mChannelCount);
    }

    // 3. Write the data to the HAL
    ret = threadLoop_write();
}

Track pre-processing

This updates the track objects held by mAudioMixer, among other things.

//prepareTracks_l
AudioFlinger::PlaybackThread::mixer_state AudioFlinger::MixerThread::prepareTracks_l(Vector< sp<Track> > *tracksToRemove)
{
    .....
    for (size_t i = 0; i < count; i++) {
        const sp<Track> t = mActiveTracks[i];
        ....
        // if an active track doesn't exist in the AudioMixer, create it.
        if (!mAudioMixer->exists(name)) {
            status_t status = mAudioMixer->create(
                    name, track->mChannelMask, track->mFormat, track->mSessionId);
            if (status != OK) {
                ALOGW("%s: cannot create track name"
                        " %d, mask %#x, format %#x, sessionId %d in AudioMixer",
                        __func__, name, track->mChannelMask, track->mFormat,
                        track->mSessionId);
                tracksToRemove->add(track);
                track->invalidate(); // consider it dead.
                continue;
            }
        }
        ....
    }
    .....
}

Mixing

void AudioFlinger::MixerThread::threadLoop_mix()
{
    // mix buffers...
    mAudioMixer->process();
    mCurrentWriteLength = mSinkBufferSize;
    // increase sleep time progressively when application underrun condition clears.
    // Only increase sleep time if the mixer is ready for two consecutive times to avoid
    // that a steady state of alternating ready/not ready conditions keeps the sleep time
    // such that we would underrun the audio HAL.
    if ((mSleepTimeUs == 0) && (sleepTimeShift > 0)) {
        sleepTimeShift--;
    }
    mSleepTimeUs = 0;
    mStandbyTimeNs = systemTime() + mStandbyDelayNs;
    //TODO: delay standby when effects have a tail
}
//frameworks/av/media/libaudioprocessing/include/media/AudioMixerBase.h
void process() {
    preProcess();
    // this is where the data actually gets processed
    (this->*mHook)();
    postProcess();
}

Data fetching & mixing: (this->*mHook)()
Take process__genericResampling as an example of mHook:

//frameworks/av/media/libaudioprocessing/AudioMixerBase.cpp
// generic code with resampling
void AudioMixerBase::process__genericResampling()
{
    ALOGVV("process__genericResampling\n");
    int32_t * const outTemp = mOutputTemp.get(); // naked ptr
    size_t numFrames = mFrameCount;

    for (const auto &pair : mGroups) {
        const auto &group = pair.second;
        const std::shared_ptr<TrackBase> &t1 = mTracks[group[0]];

        // clear temp buffer
        memset(outTemp, 0, sizeof(*outTemp) * t1->mMixerChannelCount * mFrameCount);
        for (const int name : group) {
            const std::shared_ptr<TrackBase> &t = mTracks[name];
            int32_t *aux = NULL;
            if (CC_UNLIKELY(t->needs & NEEDS_AUX)) {
                aux = t->auxBuffer;
            }

            // this is a little goofy, on the resampling case we don't
            // acquire/release the buffers because it's done by
            // the resampler.
            if (t->needs & NEEDS_RESAMPLE) {
                (t.get()->*t->hook)(outTemp, numFrames, mResampleTemp.get() /* naked ptr */, aux);
            } else {
                size_t outFrames = 0;
                while (outFrames < numFrames) {
                    t->buffer.frameCount = numFrames - outFrames;
                    // fetch the data written by the client
                    t->bufferProvider->getNextBuffer(&t->buffer);
                    t->mIn = t->buffer.raw;
                    // t->mIn == nullptr can happen if the track was flushed just after having
                    // been enabled for mixing.
                    if (t->mIn == nullptr) break;

                    (t.get()->*t->hook)(
                            outTemp + outFrames * t->mMixerChannelCount, t->buffer.frameCount,
                            mResampleTemp.get() /* naked ptr */,
                            aux != nullptr ? aux + outFrames : nullptr);
                    outFrames += t->buffer.frameCount;

                    // release the buffer, notifying the client that there is
                    // buffer space available for writing again
                    t->bufferProvider->releaseBuffer(&t->buffer);
                }
            }
        }

        /*
         * This mainBuffer is associated with the mSinkBuffer used in
         * mOutput->write((char *)mSinkBuffer, 0).
         * Related code:
         * //frameworks\av\services\audioflinger\Threads.cpp
         * AudioFlinger::PlaybackThread::mixer_state AudioFlinger::MixerThread::prepareTracks_l(
         *         Vector< sp<Track> > *tracksToRemove)
         * {
         *     ........
         *     mAudioMixer->setParameter(name, AudioMixer::TRACK,
         *             AudioMixer::MIXER_FORMAT, (void *)mMixerBufferFormat);
         *     mAudioMixer->setParameter(name, AudioMixer::TRACK,
         *             AudioMixer::MAIN_BUFFER, (void *)mMixerBuffer);
         *     ........
         * }
         */
        // copy outTemp into mainBuffer, converted to the final output format
        convertMixerFormat(t1->mainBuffer, t1->mMixerFormat,
                outTemp, t1->mMixerInFormat, numFrames * t1->mMixerChannelCount);
    }
}

Writing data to the HAL

// frameworks\av\services\audioflinger\Threads.cpp
ssize_t AudioFlinger::MixerThread::threadLoop_write()
{
    // FIXME we should only do one push per cycle; confirm this is true
    // Start the fast mixer if it's not already running
    if (mFastMixer != 0) {
        FastMixerStateQueue *sq = mFastMixer->sq();
        FastMixerState *state = sq->begin();
        if (state->mCommand != FastMixerState::MIX_WRITE &&
                (kUseFastMixer != FastMixer_Dynamic || state->mTrackMask > 1)) {
            if (state->mCommand == FastMixerState::COLD_IDLE) {
                // FIXME workaround for first HAL write being CPU bound on some devices
                ATRACE_BEGIN("write");
                // AudioStreamOut *mOutput;
                // At this point AudioFlinger's write path is essentially done.
                // What remains is the write into the audio HAL; after the HAL
                // processes the data, it is written to the driver via tinyalsa,
                // which completes the output of the audio stream.
                mOutput->write((char *)mSinkBuffer, 0);
                ATRACE_END();
                int32_t old = android_atomic_inc(&mFastMixerFutex);
                if (old == -1) {
                    (void) syscall(__NR_futex, &mFastMixerFutex, FUTEX_WAKE_PRIVATE, 1);
                }
#ifdef AUDIO_WATCHDOG
                if (mAudioWatchdog != 0) {
                    mAudioWatchdog->resume();
                }
#endif
            }
            state->mCommand = FastMixerState::MIX_WRITE;
#ifdef FAST_THREAD_STATISTICS
            mFastMixerDumpState.increaseSamplingN(mAudioFlinger->isLowRamDevice() ?
                    FastThreadDumpState::kSamplingNforLowRamDevice : FastThreadDumpState::kSamplingN);
#endif
            sq->end();
            sq->push(FastMixerStateQueue::BLOCK_UNTIL_PUSHED);
            if (kUseFastMixer == FastMixer_Dynamic) {
                mNormalSink = mPipeSink;
            }
        } else {
            sq->end(false /*didModify*/);
        }
    }
    return PlaybackThread::threadLoop_write();
}

Additional notes

Every client that creates an AudioTrack does so through AudioFlinger. AudioFlinger delegates the creation to Threads.cpp, and the newly created track is stored in the thread's mTracks collection. On each loop iteration, prepareTracks_l is called first to prepare the data, and at that point mTracks is synchronized into the AudioMixer. (The Track objects are not handed to the AudioMixer directly: the AudioMixer creates a corresponding AudioMixer::Track for each entry in mTracks, and that AudioMixer::Track in turn wraps the Track. When the data is eventually fetched, it is obtained from the client through this Track.)

sp<IAudioTrack> AudioFlinger::createTrack(const CreateTrackInput& input,CreateTrackOutput& output,status_t *status)
{
    ......
    track = thread->createTrack_l(client, streamType, input.attr, &output.sampleRate,
            input.config.format, input.config.channel_mask,
            &output.frameCount, &output.notificationFrameCount,
            input.notificationsPerBuffer, input.speed,
            input.sharedBuffer, sessionId, &output.flags,
            input.clientInfo.clientTid, clientUid, &lStatus, portId);
    ......
}

sp<AudioFlinger::PlaybackThread::Track> AudioFlinger::PlaybackThread::createTrack_l(
        const sp<AudioFlinger::Client>& client,
        audio_stream_type_t streamType,
        const audio_attributes_t& attr,
        uint32_t *pSampleRate,
        audio_format_t format,
        audio_channel_mask_t channelMask,
        size_t *pFrameCount,
        size_t *pNotificationFrameCount,
        uint32_t notificationsPerBuffer,
        float speed,
        const sp<IMemory>& sharedBuffer,
        audio_session_t sessionId,
        audio_output_flags_t *flags,
        pid_t tid,
        uid_t uid,
        status_t *status,
        audio_port_handle_t portId)
{
    ......
    track = new Track(this, client, streamType, attr, sampleRate, format,
            channelMask, frameCount,
            nullptr /* buffer */, (size_t)0 /* bufferSize */, sharedBuffer,
            sessionId, uid, *flags, TrackBase::TYPE_DEFAULT, portId);

    lStatus = track != 0 ? track->initCheck() : (status_t) NO_MEMORY;
    if (lStatus != NO_ERROR) {
        ALOGE("createTrack_l() initCheck failed %d; no control block?", lStatus);
        // track must be cleared from the caller as the caller has the AF lock
        goto Exit;
    }
    mTracks.add(track);

    sp<EffectChain> chain = getEffectChain_l(sessionId);
    if (chain != 0) {
        ALOGV("createTrack_l() setting main buffer %p", chain->inBuffer());
        track->setMainBuffer(chain->inBuffer());
        chain->setStrategy(AudioSystem::getStrategyForStream(track->streamType()));
        chain->incTrackCnt();
    }
    ......
}

Finally

Some jumps don't work when reading this code in Source Insight, which is a bit annoying.
