Hello! This is 风筝's blog,

welcome to discuss and exchange ideas with me.


The Android framework code is largely the same across platforms; only the HAL code is vendor-specific and differs from platform to platform. Taking the MTK platform as an example, this post records the MTK HAL audio capture (recording) code flow.

The overall call flow is roughly as follows:

AudioALSAHardware: openInputStream()
AudioALSAStreamManager: openInputStream()
AudioALSAStreamIn: AudioALSAStreamIn() // new AudioALSAStreamIn
AudioALSAStreamIn: set() // done, devices: 0x80000004, flags: 0, acoustics: 0x0, format: 0x1, sampleRate: 48000/48000, num_channels: 12/2, buffer_size: 3840, tempDebugflag: 0
AudioALSAStreamIn: checkOpenStreamFormat
AudioALSAStreamIn: checkOpenStreamChannels
AudioALSAStreamIn: checkOpenStreamSampleRate
AudioALSAHardware: createAudioPatch()
AudioALSAStreamManager: setParameters() // IOport = 70, keyValuePairs = input_source=1;routing=-2147483644
AudioALSAStreamIn: setParameters()
AudioALSAStreamManager: routingInputDevice() // input_device: 0x80000004 => 0x80000004
AudioALSAStreamIn: read()
AudioALSAStreamIn: open()
AudioALSAStreamManager: createCaptureHandler()
AudioALSAStreamManager: ulStreamAttributeTargetCustomization
AudioALSACaptureDataProviderNormal: AudioALSACaptureDataProviderNormal()
AudioALSACaptureDataProviderBase: AudioALSACaptureDataProviderBase()
AudioALSACaptureHandlerNormal: AudioALSACaptureHandlerNormal()
AudioALSACaptureHandlerNormal: init()
AudioALSACaptureHandlerNormal: open() // input_device = 0x80000004, input_source = 0x1, sample_rate=48000, num_channels=2; configures period_count, channels, ...
AudioALSACaptureDataProviderBase: AudioALSACaptureDataProviderBase()
AudioALSACaptureDataProviderDspRaw: AudioALSACaptureDataProviderDspRaw()
AudioALSACaptureDataClientAurisysNormal: AudioALSACaptureDataClientAurisysNormal(+) // mCaptureDataClient = new AudioALSACaptureDataClientAurisysNormal
AudioALSACaptureDataProviderBase: configStreamAttribute() // audio_mode: 0 => 0, input_device: 0x0 => 0x80000004, flag: 0x0 => 0x0, input_source: 0 -> 1, output_device: 0x0 => 0x2, sample_rate: 0 => 48000, period_us: 0 => 0, DSP out sample_rate: 0 => 48000
AudioALSACaptureDataProviderBase: attach
AudioALSACaptureDataProviderDspRaw: open(+) // configures format, channels
AudioALSADeviceConfigManager: ApplyDeviceTurnonSequenceByName() DeviceName = ADDA_TO_CAPTURE1 descriptor->DeviceStatusCounte = 0
AudioALSACaptureDataProviderBase: enablePmicInputDevice
AudioALSAHardwareResourceManager: +startInputDevice()
AudioALSADeviceConfigManager: ApplyDeviceTurnonSequenceByName() DeviceName = builtin_Mic_DualMic descriptor->DeviceStatusCounte = 0
AudioALSACaptureDataProviderBase: getInputSampleRate()
AudioMTKGainController: +SetCaptureGain() // mode=0, source=1, input device=0x80000004, output device=0x2
AudioMTKGainController: ApplyMicGain()
AudioALSACaptureDataProviderDspRaw: openApHwPcm(), mPcm = 0xf2b55260
AudioDspStreamManager: addCaptureDataProvider() // adds to mCaptureDataProviderVector
AudioDspStreamManager: openCaptureDspHwPcm(), mDspHwPcm = 0xf2b55340
pthread_create(&hReadThread, NULL, AudioALSACaptureDataProviderDspRaw::readThread, (void *)this); // create the read thread
AudioALSACaptureDataProviderDspRaw: +readThread()
AudioALSACaptureDataProviderBase: waitPcmStart
AudioALSACaptureDataProviderBase: pcm_start
AudioALSACaptureDataProviderBase: pcmRead
AudioALSACaptureDataProviderBase: provideCaptureDataToAllClients
AudioALSACaptureDataClientAurisysNormal: copyCaptureDataToClient() // hands data to the client, then keeps reading, looping like this
pthread_create(&hProcessThread, NULL, AudioALSACaptureDataClientAurisysNormal::processThread, (void *)this); // create the process thread
AudioALSACaptureDataClientAurisysNormal: processThread
AudioALSACaptureHandlerNormal: read()
bytes = mCaptureDataClient->read(buffer, bytes);
AudioMTKStreamInInterface *AudioALSAStreamManager::openInputStream(
        uint32_t devices,
        int *format,
        uint32_t *channels,
        uint32_t *sampleRate,
        status_t *status,
        audio_in_acoustics_t acoustics,
        uint32_t input_flag) {
    // check the parameter configuration
    if (format == NULL || channels == NULL || sampleRate == NULL || status == NULL) {
        ALOGE("%s(), NULL pointer!! format = %p, channels = %p, sampleRate = %p, status = %p",
              __FUNCTION__, format, channels, sampleRate, status);
        if (status != NULL) { *status = INVALID_OPERATION; }
        return NULL;
    }
    ALOGD("%s(), devices = 0x%x, format = 0x%x, channels = 0x%x, sampleRate = %d, status = %d, acoustics = 0x%x, input_flag 0x%x",
          __FUNCTION__, devices, *format, *channels, *sampleRate, *status, acoustics, input_flag);

    // create stream in
    AudioALSAStreamIn *pAudioALSAStreamIn = new AudioALSAStreamIn();
    // mainly sets some parameters
    pAudioALSAStreamIn->set(devices, format, channels, sampleRate, status, acoustics, input_flag);
    pAudioALSAStreamIn->setIdentity(mStreamInIndex);
    mStreamInVector.add(mStreamInIndex, pAudioALSAStreamIn); // add to mStreamInVector

    return pAudioALSAStreamIn;
}

status_t AudioALSAStreamIn::set(
        uint32_t devices,
        int *format,
        uint32_t *channels,
        uint32_t *sampleRate,
        status_t *status,
        audio_in_acoustics_t acoustics,
        uint32_t flags) {
    // check format
    if (checkOpenStreamFormat(static_cast<audio_devices_t>(devices), format) == false) {
        *status = BAD_VALUE;
    }
    // check channel mask
    if (checkOpenStreamChannels(static_cast<audio_devices_t>(devices), channels) == false) {
        *status = BAD_VALUE;
    }
    // check sample rate
    if (checkOpenStreamSampleRate(static_cast<audio_devices_t>(devices), sampleRate) == false) {
        *status = BAD_VALUE;
    }

    // config stream attribute (the main parameter setup)
    if (*status == NO_ERROR) {
        // format
        mStreamAttributeTarget.audio_format = static_cast<audio_format_t>(*format);
        // channel
        mStreamAttributeTarget.audio_channel_mask = static_cast<audio_channel_mask_t>(*channels);
        mStreamAttributeTarget.num_channels = popcount(*channels);
        // sample rate
        mStreamAttributeTarget.sample_rate = *sampleRate;
        // devices
        mStreamAttributeTarget.input_device = static_cast<audio_devices_t>(devices);
        // acoustics flags
        mStreamAttributeTarget.acoustics_mask = static_cast<audio_in_acoustics_t>(acoustics);
    }
}

The upper-layer service sends parameters down to the HAL via setParameters; the parameters delivered here are: IOport = 70, keyValuePairs = input_source=1;routing=-2147483644.
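As a side note, routing=-2147483644 is just the signed decimal rendering of the device bitmask 0x80000004 (the input-direction bit 0x80000000 | built-in mic 0x4). A minimal sketch of parsing such a "key=value;key=value" string; `parseKeyValuePairs` is an illustrative helper, not the HAL's actual AudioParameter API:

```cpp
#include <cstdint>
#include <cstdlib>
#include <map>
#include <sstream>
#include <string>

// Hypothetical helper (not the HAL's AudioParameter class): split a
// "key=value;key=value" string into a map of key -> value.
static std::map<std::string, std::string> parseKeyValuePairs(const std::string &kvp) {
    std::map<std::string, std::string> out;
    std::stringstream ss(kvp);
    std::string pair;
    while (std::getline(ss, pair, ';')) {
        size_t eq = pair.find('=');
        if (eq != std::string::npos) {
            out[pair.substr(0, eq)] = pair.substr(eq + 1);
        }
    }
    return out;
}

// Reinterpret the signed routing value as the unsigned device bitmask,
// which is how -2147483644 becomes 0x80000004.
static uint32_t routingToDevice(const std::string &routing) {
    return static_cast<uint32_t>(
        static_cast<int32_t>(std::strtol(routing.c_str(), nullptr, 10)));
}
```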

status_t AudioALSAStreamIn::setParameters(const String8 &keyValuePairs) {
    /// routing
    if (param.getInt(keyRouting, value) == NO_ERROR) {
        status = mStreamManager->routingInputDevice(this, mStreamAttributeTarget.input_device, inputdevice);
    }
}

// setParameters parses the key/value pairs; in this case the request is a routing operation
status_t AudioALSAStreamManager::routingInputDevice(
        AudioALSAStreamIn *pAudioALSAStreamIn,
        const audio_devices_t current_input_device,
        audio_devices_t input_device) {
    // if the device to route to is the same as last time, return directly
    if (input_device == AUDIO_DEVICE_NONE ||
        input_device == current_input_device) {
        ALOGW("-%s(), input_device(0x%x) is AUDIO_DEVICE_NONE(0x%x) or current_input_device(0x%x), return",
              __FUNCTION__, input_device, AUDIO_DEVICE_NONE, current_input_device);
        return NO_ERROR;
    }

    // to switch devices, suspend first
    setAllInputStreamsSuspend(true, false);
    standbyAllInputStreams();

    if (mStreamInVector.size() > 0) {
        for (size_t i = 0; i < mStreamInVector.size(); i++) {
            if ((input_device == AUDIO_DEVICE_IN_FM_TUNER) || (current_input_device == AUDIO_DEVICE_IN_FM_TUNER) ||
                (input_device == AUDIO_DEVICE_IN_TELEPHONY_RX) || (current_input_device == AUDIO_DEVICE_IN_TELEPHONY_RX)) {
                if (pAudioALSAStreamIn == mStreamInVector[i]) {
                    status = mStreamInVector[i]->routing(input_device);
                    ASSERT(status == NO_ERROR);
                }
            } else {
                // this is the actual routing operation
                status = mStreamInVector[i]->routing(input_device);
                ASSERT(status == NO_ERROR);
            }
        }
    }

    // resume from suspend
    setAllInputStreamsSuspend(false, false);
}

Next comes the most important call, read(): the upper-layer service calls read() to fetch the capture data:

ssize_t AudioALSAStreamIn::read(void *buffer, ssize_t bytes) {
    /// check open
    if (mStandby == true) {
        status = open();
    }
}

open() is done inside read():

status_t AudioALSAStreamIn::open() {
    if (mStandby == true) {
        // create capture handler
        ASSERT(mCaptureHandler == NULL);
        mCaptureHandler = mStreamManager->createCaptureHandler(&mStreamAttributeTarget);
        if (mCaptureHandler == NULL) {
            status = BAD_VALUE;
            return status;
        }
        // open audio hardware
        status = mCaptureHandler->open();
        mStandby = false;
    }
    return status;
}

It first creates a CaptureHandler (different devices and sources produce different CaptureHandler implementation classes), and then opens the corresponding CaptureHandler.
It is worth paying attention to the logic in createCaptureHandler, which mainly distinguishes between devices:

AudioALSACaptureHandlerBase *AudioALSAStreamManager::createCaptureHandler(stream_attribute_t *stream_attribute_target) {
    // Init input stream attribute here (configure stream_attribute_target)
    stream_attribute_target->audio_mode = mAudioMode; // set mode to stream attribute for mic gain setting
    stream_attribute_target->output_devices = current_output_devices; // set output devices to stream attribute for mic gain setting and BesRecord parameter
    stream_attribute_target->micmute = mMicMute;

    // Customized path: configuring the APP2 scene lets normal recording also take
    // the VoIP path, so the AEC algorithm can run to cancel echo
    /* StreamAttribute customization for scene */
    ulStreamAttributeTargetCustomization(stream_attribute_target);

    // voice wake-up
    if (stream_attribute_target->input_source == AUDIO_SOURCE_HOTWORD) {
        if (mAudioALSAVoiceWakeUpController->getVoiceWakeUpEnable() == false) {
            mAudioALSAVoiceWakeUpController->setVoiceWakeUpEnable(true);
        }
        if (mVoiceWakeUpNeedOn == true) {
            mAudioALSAVoiceWakeUpController->SeamlessRecordEnable();
        }
        pCaptureHandler = new AudioALSACaptureHandlerVOW(stream_attribute_target);
    } else if (stream_attribute_target->input_source == AUDIO_SOURCE_VOICE_UNLOCK ||
               stream_attribute_target->input_source == AUDIO_SOURCE_ECHO_REFERENCE) {
        pCaptureHandler = new AudioALSACaptureHandlerSyncIO(stream_attribute_target);
    // CS phone call over the real network
    } else if (isPhoneCallOpen() == true) {
        pCaptureHandler = new AudioALSACaptureHandlerVoice(stream_attribute_target);
    // customized AEC scenes
    } else if ((stream_attribute_target->NativePreprocess_Info.PreProcessEffect_AECOn == true) ||
               (stream_attribute_target->input_source == AUDIO_SOURCE_VOICE_COMMUNICATION) ||
               (stream_attribute_target->input_source == AUDIO_SOURCE_CUSTOMIZATION1) || // MagiASR enable AEC
               (stream_attribute_target->input_source == AUDIO_SOURCE_CUSTOMIZATION2)) { // Normal REC with AEC
        AudioALSAHardwareResourceManager::getInstance()->setHDRRecord(false); // turn off HDR record for VoIP
        switch (stream_attribute_target->input_device) {
        case AUDIO_DEVICE_IN_BLUETOOTH_SCO_HEADSET: {
            if (stream_attribute_target->output_devices & AUDIO_DEVICE_OUT_ALL_SCO) {
                pCaptureHandler = new AudioALSACaptureHandlerAEC(stream_attribute_target);
            } else {
                pCaptureHandler = new AudioALSACaptureHandlerBT(stream_attribute_target);
            }
            break;
        }
        case AUDIO_DEVICE_IN_USB_DEVICE:
        case AUDIO_DEVICE_IN_USB_HEADSET:
#if defined(MTK_AURISYS_FRAMEWORK_SUPPORT)
            pCaptureHandler = new AudioALSACaptureHandlerAEC(stream_attribute_target);
#else
            pCaptureHandler = new AudioALSACaptureHandlerUsb(stream_attribute_target);
#endif
            break;
        default: {
            if (isAdspOptionEnable() &&
                ((isCaptureOffload(stream_attribute_target) && !isIEMsOn &&
                  !AudioALSACaptureDataProviderNormal::getInstance()->getNormalOn()) ||
                 isBleInputDevice(stream_attribute_target->input_device))) {
                pCaptureHandler = new AudioALSACaptureHandlerDsp(stream_attribute_target);
            } else {
                pCaptureHandler = new AudioALSACaptureHandlerAEC(stream_attribute_target);
            }
            break;
        }
        } // switch
    } else {
        switch (stream_attribute_target->input_device) {
        case AUDIO_DEVICE_IN_BLUETOOTH_SCO_HEADSET: {
            pCaptureHandler = new AudioALSACaptureHandlerBT(stream_attribute_target);
            break;
        }
        case AUDIO_DEVICE_IN_USB_DEVICE:
        case AUDIO_DEVICE_IN_USB_HEADSET:
            pCaptureHandler = new AudioALSACaptureHandlerUsb(stream_attribute_target);
            break;
        case AUDIO_DEVICE_IN_BUILTIN_MIC:
        case AUDIO_DEVICE_IN_BACK_MIC:
        case AUDIO_DEVICE_IN_WIRED_HEADSET:
        case AUDIO_DEVICE_IN_BLE_HEADSET:
        case AUDIO_DEVICE_IN_BUS:
        default: {
            if (AudioSmartPaController::getInstance()->isInCalibration()) {
                pCaptureHandler = new AudioALSACaptureHandlerNormal(stream_attribute_target);
                break;
            }
            if (isAdspOptionEnable() &&
                !(AUDIO_INPUT_FLAG_MMAP_NOIRQ & stream_attribute_target->mAudioInputFlags) &&
                ((isCaptureOffload(stream_attribute_target) && !isIEMsOn &&
                  !AudioALSACaptureDataProviderNormal::getInstance()->getNormalOn()) ||
                 isBleInputDevice(stream_attribute_target->input_device))) {
                if (isPhoneCallOpen() == true) {
                    pCaptureHandler = new AudioALSACaptureHandlerVoice(stream_attribute_target);
                } else {
                    pCaptureHandler = new AudioALSACaptureHandlerDsp(stream_attribute_target);
                }
            } else {
                pCaptureHandler = new AudioALSACaptureHandlerNormal(stream_attribute_target);
            }
            break;
        }
        } // switch
    }

    // save capture handler object in vector
    mCaptureHandlerVector.add(mCaptureHandlerIndex, pCaptureHandler);
    return pCaptureHandler;
}

createCaptureHandler contains quite a lot of logic. Here we analyze AUDIO_DEVICE_IN_BUILTIN_MIC, i.e. the ordinary built-in MIC recording scenario, which corresponds to AudioALSACaptureHandlerNormal.

status_t AudioALSACaptureHandlerNormal::open() {
    if (!AudioSmartPaController::getInstance()->isInCalibration()) {
        if (isAdspOptionEnable() &&
            (AudioDspStreamManager::getInstance()->getDspRawInHandlerEnable(mStreamAttributeTarget->mAudioInputFlags) > 0) &&
            (AudioDspStreamManager::getInstance()->getDspInHandlerEnable(mStreamAttributeTarget->mAudioInputFlags) > 0) &&
            !isIEMsOn &&
            !AudioALSACaptureDataProviderNormal::getInstance()->getNormalOn()) {
            mCaptureDataClient = new AudioALSACaptureDataClientAurisysNormal(
                AudioALSACaptureDataProviderDspRaw::getInstance(),
                mStreamAttributeTarget, NULL); // NULL: w/o AEC
        } else {
            mCaptureDataClient = new AudioALSACaptureDataClientAurisysNormal(
                AudioALSACaptureDataProviderNormal::getInstance(),
                mStreamAttributeTarget, NULL); // NULL: w/o AEC
        }
    } else {
        mCaptureDataClient = new AudioALSACaptureDataClientAurisysNormal(
            AudioALSACaptureDataProviderEchoRefExt::getInstance(),
            mStreamAttributeTarget, NULL); // NULL: w/o AEC
    }
}

The path taken here is AudioALSACaptureDataClientAurisysNormal, and the DataProvider is AudioALSACaptureDataProviderDspRaw (described later).

Let's continue with AudioALSACaptureDataClientAurisysNormal:

AudioALSACaptureDataClientAurisysNormal::AudioALSACaptureDataClientAurisysNormal(
        AudioALSACaptureDataProviderBase *pCaptureDataProvider,
        stream_attribute_t *stream_attribute_target,
        AudioALSACaptureDataProviderBase *pCaptureDataProviderEchoRef) {
    // config attribute for input device
    mCaptureDataProvider->configStreamAttribute(mStreamAttributeTarget);

    // attach client to capture data provider (after data buf ready)
    mCaptureDataProvider->attach(this);

    // get latency (for library, but not data provider)
    mLatency = (IsLowLatencyCapture()) ? UPLINK_LOW_LATENCY_MS : UPLINK_NORMAL_LATENCY_MS;

    if (mAudioALSAVolumeController != NULL) {
        mAudioALSAVolumeController->SetCaptureGain(
            mStreamAttributeTarget->audio_mode,
            mStreamAttributeTarget->input_source,
            mStreamAttributeTarget->input_device,
            mStreamAttributeTarget->output_devices);
    }

    // create lib manager; aurisys is used to pass data through the algorithms on the DSP
    CreateAurisysLibManager();

    // depop: MTK drops an initial chunk of data when recording starts to avoid pop noise
    drop_ms = getDropMs(mStreamAttributeTarget);
    if (drop_ms) {
        if ((drop_ms % mLatency) != 0) { // drop data size needs to align to the interrupt rate
            drop_ms = ((drop_ms / mLatency) + 1) * mLatency; // ceil()
        }
        mDropPopSize = (audio_bytes_per_sample(mStreamAttributeTarget->audio_format) *
                        mStreamAttributeTarget->num_channels *
                        mStreamAttributeTarget->sample_rate *
                        drop_ms) / 1000;
    }

    // processThread
    ret = pthread_create(&hProcessThread, NULL,
                         AudioALSACaptureDataClientAurisysNormal::processThread,
                         (void *)this);
}
  • Configure the stream attributes
  • Attach to the DataProvider, which supplies the capture data
  • Compute the latency, i.e. the capture interrupt interval
  • Set the capture gain
  • Create the aurisys manager, used by the audio algorithms
  • Compute mDropPopSize, the amount of data to drop at the start of recording to avoid pop noise
  • Create a thread to read the capture data
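To make the depop arithmetic concrete, here is a standalone sketch of the mDropPopSize computation (`computeDropPopSize` is an illustrative name, not a HAL function): drop_ms is first rounded up to a multiple of the interrupt period mLatency, then converted into a byte count.

```cpp
#include <cstdint>

// Illustrative re-implementation of the mDropPopSize math from the
// constructor above: round drop_ms up to a multiple of latency_ms (the
// interrupt period), then convert milliseconds to a byte count.
static uint32_t computeDropPopSize(uint32_t drop_ms, uint32_t latency_ms,
                                   uint32_t bytes_per_sample,
                                   uint32_t num_channels,
                                   uint32_t sample_rate) {
    if (drop_ms == 0) {
        return 0;
    }
    if (drop_ms % latency_ms != 0) { // align drop size to the interrupt rate
        drop_ms = ((drop_ms / latency_ms) + 1) * latency_ms; // ceil()
    }
    // bytes = bytes_per_sample * channels * samples_per_second * ms / 1000
    return static_cast<uint32_t>(
        static_cast<uint64_t>(bytes_per_sample) * num_channels * sample_rate * drop_ms / 1000);
}
```

For example, with 16-bit stereo at 48 kHz, a 20 ms latency and drop_ms = 50, drop_ms is rounded up to 60 ms, giving 2 * 2 * 48000 * 60 / 1000 = 11520 bytes.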
void *AudioALSACaptureDataClientAurisysNormal::processThread(void *arg) {
    /* process thread created */
    client->mProcessThreadLaunched = true;

    /* get buffer address */
    raw_ul    = &client->mRawDataBuf; // ring buffer address
    processed = &client->mProcessedDataBuf;

    while (client->mEnable == true) {
        data_count_raw_ul = audio_ringbuf_count(raw_ul);

        // data not ready, wait for data
        if ((data_count_raw_ul < client->mRawDataPeriodBufSize) ||
            (client->IsAECEnable() == true &&
             ((client->mIsEchoRefDataSync == false && client->isNeedSkipSyncEchoRef() == false) ||
              data_count_raw_aec < client->mEchoRefDataPeriodBufSize))) {
            wait_result = AL_WAIT_MS(client->mRawDataBufLock, MAX_PROCESS_DATA_WAIT_TIME_OUT_MS);
        }

        // copy data
        audio_pool_buf_copy_from_ringbuf(ul_in, raw_ul, client->mRawDataPeriodBufSize);

        aurisys_process_ul_only(manager, ul_in, ul_out, ul_aec);

        // depop
        if (client->mDropPopSize > 0) {
            ALOGV("data_count %u, mDropPopSize %u, %dL", data_count, client->mDropPopSize, __LINE__);
            if (data_count >= client->mDropPopSize) {
                audio_ringbuf_drop_data(&ul_out->ringbuf, client->mDropPopSize);
                data_count -= client->mDropPopSize;
                client->mDropPopSize = 0;
            } else {
                audio_ringbuf_drop_data(&ul_out->ringbuf, data_count);
                client->mDropPopSize -= data_count;
                data_count = 0;
            }
        }

        // copy to processed buf and signal read()
        audio_ringbuf_copy_from_linear(processed, effect_buf, data_count);
    }

    pthread_exit(NULL);
    return NULL;
}

The thread loops continuously, copying data out of the client->mRawDataBuf ring buffer; this is the capture data. Each chunk is run through the aurisys uplink algorithm chain, and then, using the mDropPopSize computed earlier in the AudioALSACaptureDataClientAurisysNormal constructor, an initial chunk of data is dropped to avoid pop noise at the start of the recording.

Which raises the question: why is there capture data in the ring buffer in the first place? Who provides it?

This is what was mentioned earlier: the DataProvider is AudioALSACaptureDataProviderDspRaw.

Earlier, AudioALSACaptureDataClientAurisysNormal::AudioALSACaptureDataClientAurisysNormal called mCaptureDataProvider->attach(this);
mCaptureDataProvider is exactly the AudioALSACaptureDataProviderDspRaw that was passed in!

void AudioALSACaptureDataProviderBase::attach(IAudioALSACaptureDataClient *pCaptureDataClient) {
    mCaptureDataClientVector.add(pCaptureDataClient->getIdentity(), pCaptureDataClient);
    size = (uint32_t)mCaptureDataClientVector.size();

    // open pcm interface when 1st attach
    if (size == 1) {
        mOpenIndex++;
        open();
    } else {
        if (!hasLowLatencyCapture && pCaptureDataClient->IsLowLatencyCapture()) {
            // update HW interrupt rate by HW sample rate
            updateReadSize(getPeriodBufSize(pStreamAttr, UPLINK_NORMAL_LATENCY_MS) *
                           lowLatencyMs / UPLINK_NORMAL_LATENCY_MS);
            if (mCaptureDataProviderType != CAPTURE_PROVIDER_DSP) {
                mHardwareResourceManager->setULInterruptRate(mStreamAttributeSource.sample_rate *
                                                             lowLatencyMs / 1000);
            } else if (isAdspOptionEnable()) {
                AudioDspStreamManager::getInstance()->UpdateCaptureDspLatency();
            }
        }
        enablePmicInputDevice(true);
    }
}

On the first recording, size is necessarily 1, so open() is entered:

status_t AudioALSACaptureDataProviderDspRaw::open() {
    unsigned int feature_id = CAPTURE_RAW_FEATURE_ID;
    // tell the DSP that raw capture data is needed
    mAudioMessengerIPI->registerAdspFeature(feature_id);

    if (AudioALSAHardwareResourceManager::getInstance()->getNumPhoneMicSupport() > 2 &&
        mStreamAttributeSource.input_device != AUDIO_DEVICE_IN_WIRED_HEADSET) {
        mApTurnOnSequence = AUDIO_CTL_ADDA_TO_CAPTURE1_4CH;
    } else {
        mApTurnOnSequence = AUDIO_CTL_ADDA_TO_CAPTURE1;
    }
    // turn on the mixer controls of the corresponding path
    AudioALSADeviceConfigManager::getInstance()->ApplyDeviceTurnonSequenceByName(mApTurnOnSequence);

    // configure mStreamAttributeSource
    /* Reset frames readed counter */
    mStreamAttributeSource.Time_Info.total_frames_readed = 0;
    mStreamAttributeSource.sample_rate = getInputSampleRate(mStreamAttributeSource.input_device,
                                                            mStreamAttributeSource.output_devices);
    mStreamAttributeSource.audio_format = AUDIO_FORMAT_PCM_8_24_BIT;
    if (mStreamAttributeSource.input_device == AUDIO_DEVICE_IN_WIRED_HEADSET ||
        mStreamAttributeSource.input_source == AUDIO_SOURCE_UNPROCESSED) {
        mStreamAttributeSource.num_channels = 1;
    } else {
        mStreamAttributeSource.num_channels = 2;
    }
    mStreamAttributeSource.latency = mlatency;

    // configure mConfig
    setApHwPcm();

    // callback invoked when each DMA transfer completes: processDmaMsgWrapper
    mAudioMessengerIPI->registerDmaCbk(
        TASK_SCENE_CAPTURE_RAW,
        0x2000,
        0xF000,
        processDmaMsgWrapper,
        this);

    mAudioALSAVolumeController->SetCaptureGain(mStreamAttributeSource.audio_mode,
                                               mStreamAttributeSource.input_source,
                                               mStreamAttributeSource.input_device,
                                               mStreamAttributeSource.output_devices);

    openApHwPcm(); // pcmOpen
    AudioDspStreamManager::getInstance()->addCaptureDataProvider(this); // pcm_prepare, pcm_start

    int ret = pthread_create(&hReadThread, NULL, AudioALSACaptureDataProviderDspRaw::readThread, (void *)this);
}

Inside open(), the main work is reaching down to pcm_open and pcm_start at the driver level to start capturing; then comes the most important part, the readThread!

void *AudioALSACaptureDataProviderDspRaw::readThread(void *arg) {
    pDataProvider->waitPcmStart();

    // read raw data from alsa driver
    char linear_buffer[kReadBufferSizeNormal];
    while (pDataProvider->mEnable == true) {
        // read capture data from the driver into linear_buffer
        ret = pDataProvider->pcmRead(pDataProvider->mPcm, linear_buffer, kReadBufferSize);

        // use ringbuf format to save buffer info
        pDataProvider->mPcmReadBuf.pBufBase = linear_buffer;
        pDataProvider->mPcmReadBuf.bufLen   = Read_Size + 1; // +1: avoid pRead == pWrite
        pDataProvider->mPcmReadBuf.pRead    = linear_buffer;
        pDataProvider->mPcmReadBuf.pWrite   = linear_buffer + Read_Size;

        pDataProvider->provideCaptureDataToAllClients(open_index);
    }

    pthread_exit(NULL);
    return NULL;
}

Here is the key point: this thread keeps reading capture data from the alsa driver into linear_buffer, wraps it in pDataProvider->mPcmReadBuf.pBufBase, and provides it to all clients, i.e. our AudioALSACaptureDataClientAurisysNormal.
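The `+1` on bufLen is the classic one-slot-free ring-buffer convention: pRead == pWrite means "empty", so a buffer of N+1 bytes is needed to hold N bytes without the full and empty states colliding. A small sketch of the resulting count arithmetic (`ringbufCount` is illustrative, not the HAL's RingBuf implementation):

```cpp
#include <cstddef>

// Illustrative helper: number of readable bytes in a ring buffer of
// bufLen bytes, where read/write are byte offsets into the buffer.
// With the "one slot free" convention, read == write means empty, and
// at most bufLen - 1 bytes can ever be stored.
static size_t ringbufCount(size_t read, size_t write, size_t bufLen) {
    return (write + bufLen - read) % bufLen;
}
```

With Read_Size = 3840, the provider sets bufLen = 3841, pRead at offset 0 and pWrite at offset 3840, so a client sees exactly 3840 readable bytes, and "full" (3840 bytes) is still distinguishable from "empty".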

void AudioALSACaptureDataProviderBase::provideCaptureDataToAllClients(const uint32_t open_index) {
    for (size_t i = 0; i < mCaptureDataClientVector.size(); i++) {
        pCaptureDataClient = mCaptureDataClientVector[i];
        pCaptureDataClient->copyCaptureDataToClient(mPcmReadBuf);
    }
}

It works like a broadcast: iterate over the whole client vector and hand the capture data out to every client.

Why did MTK design it this way?

The benefit is that even with multiple AudioStreamIn instances, each instance's client has its own ring read buffer; data arriving from the hardware is thrown into each client's own ring buffer, so the streams do not interfere with each other.
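The attach/broadcast pattern can be sketched as follows (a simplified illustration: the real classes carry locking, identities and ring buffers, here replaced by plain vectors):

```cpp
#include <cstddef>
#include <vector>

// Simplified sketch of the provider/client relationship described above.
// Each client owns its own buffer; the provider broadcasts every PCM
// chunk to all attached clients, so streams do not interfere.
struct CaptureClient {
    std::vector<char> rawDataBuf; // stands in for the client's ring buffer
    void copyCaptureDataToClient(const char *data, size_t len) {
        rawDataBuf.insert(rawDataBuf.end(), data, data + len);
    }
};

struct CaptureProvider {
    std::vector<CaptureClient *> clients;
    void attach(CaptureClient *c) { clients.push_back(c); } // the 1st attach would open() the PCM
    void provideCaptureDataToAllClients(const char *data, size_t len) {
        for (CaptureClient *c : clients) {
            c->copyCaptureDataToClient(data, len); // broadcast to every client
        }
    }
};
```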

uint32_t AudioALSACaptureDataClientAurisysNormal::copyCaptureDataToClient(RingBuf pcm_read_buf) {
    pcm_read_buf_wrap.base  = pcm_read_buf.pBufBase;
    pcm_read_buf_wrap.read  = pcm_read_buf.pRead;
    pcm_read_buf_wrap.write = pcm_read_buf.pWrite;
    pcm_read_buf_wrap.size  = pcm_read_buf.bufLen;
    audio_ringbuf_copy_from_ringbuf_all(&mRawDataBuf, &pcm_read_buf_wrap);
}

Finally the data is filled into mRawDataBuf, so that AudioALSACaptureDataClientAurisysNormal::processThread can pick up the capture data there.

Overall, the recording flow is fairly straightforward: AudioALSACaptureDataProviderDspRaw::readThread produces the data and AudioALSACaptureDataClientAurisysNormal::processThread consumes it, a classic producer-consumer model.
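The producer-consumer shape of readThread/processThread can be reduced to a minimal sketch, where std::mutex and a condition variable stand in for the HAL's mRawDataBufLock and AL_WAIT_MS, and a deque stands in for the ring buffer (all names here are illustrative):

```cpp
#include <condition_variable>
#include <deque>
#include <mutex>

// Minimal producer-consumer sketch mirroring the structure above: one
// thread produces capture chunks (like readThread), another waits for
// data and consumes it (like processThread).
struct CaptureQueue {
    std::mutex lock;
    std::condition_variable dataAvailable;
    std::deque<int> chunks;   // stands in for the raw-data ring buffer
    bool enabled = true;      // like the client's mEnable flag

    void produce(int chunk) { // called by the "readThread"
        {
            std::lock_guard<std::mutex> g(lock);
            chunks.push_back(chunk);
        }
        dataAvailable.notify_one(); // like signaling mRawDataBufLock
    }

    bool consume(int *out) {  // called by the "processThread"
        std::unique_lock<std::mutex> g(lock);
        // data not ready -> wait, as processThread does with AL_WAIT_MS
        dataAvailable.wait(g, [this] { return !chunks.empty() || !enabled; });
        if (chunks.empty()) {
            return false;     // stopped with no data left
        }
        *out = chunks.front();
        chunks.pop_front();
        return true;
    }

    void stop() {             // like clearing mEnable on teardown
        {
            std::lock_guard<std::mutex> g(lock);
            enabled = false;
        }
        dataAvailable.notify_all();
    }
};
```

The real implementation differs in the details (it moves byte ranges between ring buffers and runs aurisys processing in between), but the synchronization skeleton is the same.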
