1. Since we are analyzing audio output devices, we first need to know which audio output devices the current phone supports

adb shell dumpsys media.audio_policy > /home/jon/audio_policy.txt

We focus on the following fields:

- Available output devices:
  Device 1:
  - id:  1
  - tag name: Earpiece
  - type: AUDIO_DEVICE_OUT_EARPIECE
  - Profiles:
      Profile 0:
      - format: AUDIO_FORMAT_PCM_16_BIT
      - sampling rates: 48000
      - channel masks: 0x0010
  Device 2:
  - id:  2
  - tag name: Speaker
  - type: AUDIO_DEVICE_OUT_SPEAKER
  - Profiles:
      Profile 0:
      - format: AUDIO_FORMAT_PCM_16_BIT
      - sampling rates: 48000
      - channel masks: 0x0003
  Device 3:
  - id:  6
  - tag name: Telephony Tx
  - type: AUDIO_DEVICE_OUT_TELEPHONY_TX
  - Profiles:
      Profile 0:
      - format: AUDIO_FORMAT_PCM_16_BIT
      - sampling rates: 8000, 16000
      - channel masks: 0x0001, 0x0003

From this we learn that the current device supports three audio output routes: Earpiece, Speaker (loudspeaker), and Telephony Tx (used to carry audio routed to the phone's radio).
Next, let's see where this output information comes from:
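The `type` values in the dump are individual bits of the `audio_devices_t` bitmask, so the whole set of available devices can be stored and tested as a single integer. A minimal sketch of that idea (the bit values follow AOSP's system/audio.h, but `availableFromDump` and `isAvailable` are illustrative helpers of ours, not AOSP APIs):

```cpp
#include <cassert>
#include <cstdint>

// Output device bits as defined in AOSP's system/audio.h (illustrative subset)
typedef uint32_t audio_devices_t;
const audio_devices_t AUDIO_DEVICE_NONE              = 0x0;
const audio_devices_t AUDIO_DEVICE_OUT_EARPIECE      = 0x1;
const audio_devices_t AUDIO_DEVICE_OUT_SPEAKER       = 0x2;
const audio_devices_t AUDIO_DEVICE_OUT_WIRED_HEADSET = 0x4;

// The "Available output devices" list above collapses into one mask
audio_devices_t availableFromDump() {
    return AUDIO_DEVICE_OUT_EARPIECE | AUDIO_DEVICE_OUT_SPEAKER;
}

// Membership is a single bitwise AND
bool isAvailable(audio_devices_t available, audio_devices_t device) {
    return (available & device) != AUDIO_DEVICE_NONE;
}
```

This is why, throughout the policy code below, "selecting a device" is expressed as masking a candidate bit against `availableOutputDevicesType`.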

frameworks/av/media/audioserver/main_audioserver.cpp

AudioPolicyService::instantiate();

frameworks/av/services/audiopolicy/service/AudioPolicyService.cpp

void AudioPolicyService::onFirstRef()
{
    ...
    mAudioPolicyClient = new AudioPolicyClient(this);
    mAudioPolicyManager = createAudioPolicyManager(mAudioPolicyClient);
    ...
}

frameworks/av/services/audiopolicy/managerdefault/AudioPolicyManager.cpp

AudioPolicyManager::AudioPolicyManager(AudioPolicyClientInterface *clientInterface)
{
    ...
    mVolumeCurves = new VolumeCurvesCollection();
    AudioPolicyConfig config(mHwModules, mAvailableOutputDevices, mAvailableInputDevices,
                             mDefaultOutputDevice, speakerDrcEnabled,
                             static_cast<VolumeCurvesCollection *>(mVolumeCurves));
    PolicySerializer serializer;
    if (serializer.deserialize(AUDIO_POLICY_XML_CONFIG_FILE, config) != NO_ERROR) {
        ...
    }
}

As shown above, this information comes from parsing the AUDIO_POLICY_XML_CONFIG_FILE (/system/etc/audio_policy_configuration.xml) configuration file.

2. Selecting the output device type (e.g. speaker (USB or primary), headset (USB or primary))
We know that whatever kind of audio is being played (music, ringtone, phone call, etc.), a native AudioTrack instance is eventually created. Here we take Music as the example (see my earlier native playback example):

AudioTrack::AudioTrack -> AudioTrack::set -> AudioTrack::createTrack_l

Let's analyze this method:

status_t AudioTrack::createTrack_l()
{
    // This is where we connect to AudioFlinger: the static member function
    // AudioSystem::get_audio_flinger() returns a proxy to the audioflinger service
    const sp<IAudioFlinger>& audioFlinger = AudioSystem::get_audio_flinger();
    audio_io_handle_t output;
    audio_stream_type_t streamType = mStreamType;
    // The streamType here is Music, so attr ends up NULL
    audio_attributes_t *attr = (mStreamType == AUDIO_STREAM_DEFAULT) ? &mAttributes : NULL;
    status_t status;
    // Pick the output device for this track from the attributes and stream type
    status = AudioSystem::getOutputForAttr(attr, &output,
                                           mSessionId, &streamType, mClientUid,
                                           mSampleRate, mFormat, mChannelMask,
                                           mFlags, mSelectedDeviceId, mOffloadInfo);
    ...
    // The next four calls query the chosen output for its latency, frame count,
    // HAL frame count and sampling rate
    status = AudioSystem::getLatency(output, &mAfLatency);
    status = AudioSystem::getFrameCount(output, &mAfFrameCount);
    status = AudioSystem::getFrameCountHAL(output, &afFrameCountHAL);
    status = AudioSystem::getSamplingRate(output, &mAfSampleRate);
    ...
    /* Hand track creation over to AudioFlinger, which will eventually create the
       track used for audio playback inside a playback thread */
    sp<IAudioTrack> track = audioFlinger->createTrack(streamType,
                                                      mSampleRate,
                                                      mFormat,
                                                      mChannelMask,
                                                      &temp,
                                                      &trackFlags,
                                                      mSharedBuffer,
                                                      output,
                                                      mClientPid,
                                                      tid,
                                                      &mSessionId,
                                                      mClientUid,
                                                      &status);
    ...
}

OK, back to the main theme of this article: let's see how the output device is chosen, by analyzing getOutputForAttr.

frameworks/av/media/libmedia/AudioSystem.cpp

status_t AudioSystem::getOutputForAttr(const audio_attributes_t *attr,
                                       audio_io_handle_t *output,
                                       audio_session_t session,
                                       audio_stream_type_t *stream,
                                       uid_t uid,
                                       uint32_t samplingRate,
                                       audio_format_t format,
                                       audio_channel_mask_t channelMask,
                                       audio_output_flags_t flags,
                                       audio_port_handle_t selectedDeviceId,
                                       const audio_offload_info_t *offloadInfo)
{
    const sp<IAudioPolicyService>& aps = AudioSystem::get_audio_policy_service();
    if (aps == 0) return NO_INIT;
    return aps->getOutputForAttr(attr, output, session, stream, uid,
                                 samplingRate, format, channelMask,
                                 flags, selectedDeviceId, offloadInfo);
}

frameworks/av/services/audiopolicy/service/AudioPolicyInterfaceImpl.cpp

status_t AudioPolicyService::getOutputForAttr(const audio_attributes_t *attr,
                                              audio_io_handle_t *output,
                                              audio_session_t session,
                                              audio_stream_type_t *stream,
                                              uid_t uid,
                                              uint32_t samplingRate,
                                              audio_format_t format,
                                              audio_channel_mask_t channelMask,
                                              audio_output_flags_t flags,
                                              audio_port_handle_t selectedDeviceId,
                                              const audio_offload_info_t *offloadInfo)
{
    if (mAudioPolicyManager == NULL) {
        return NO_INIT;
    }
    ALOGV("getOutput()");
    Mutex::Autolock _l(mLock);
    const uid_t callingUid = IPCThreadState::self()->getCallingUid();
    if (!isTrustedCallingUid(callingUid) || uid == (uid_t)-1) {
        ALOGW_IF(uid != (uid_t)-1 && uid != callingUid,
                 "%s uid %d tried to pass itself off as %d", __FUNCTION__, callingUid, uid);
        uid = callingUid;
    }
    return mAudioPolicyManager->getOutputForAttr(attr, output, session, stream, uid, samplingRate,
                                                 format, channelMask, flags, selectedDeviceId,
                                                 offloadInfo);
}

frameworks/av/services/audiopolicy/managerdefault/AudioPolicyManager.cpp

status_t AudioPolicyManager::getOutputForAttr(...)
{
    audio_attributes_t attributes;
    // For our current music track the attr passed in is NULL, as analyzed above
    if (attr != NULL) {
        attributes = *attr;
    } else {
        /* For the music case, attributes is filled in as follows:
           attr->content_type = AUDIO_CONTENT_TYPE_MUSIC;
           attr->usage = AUDIO_USAGE_MEDIA; */
        stream_type_to_audio_attributes(*stream, &attributes);
    }
    ...
    // Derive the routing strategy of this track from its audio attributes
    routing_strategy strategy = (routing_strategy) getStrategyForAttr(&attributes);
    // Derive the output device from the track's routing strategy
    audio_devices_t device = getDeviceForStrategy(strategy, false /*fromCache*/);
    // Choose the output path based on device, output flags, format, etc.;
    // this function is analyzed in detail at the end of this article
    *output = getOutputForDevice(device, session, *stream,
                                 samplingRate, format, channelMask,
                                 flags, offloadInfo);
    ...
}

So what we need to analyze next are the two functions getStrategyForAttr and getDeviceForStrategy:
first the audio attributes obtained above are mapped to a routing strategy, then that strategy is used to pick the output device type.

uint32_t AudioPolicyManager::getStrategyForAttr(const audio_attributes_t *attr) {
    // flags to strategy mapping
    if ((attr->flags & AUDIO_FLAG_BEACON) == AUDIO_FLAG_BEACON) {
        return (uint32_t) STRATEGY_TRANSMITTED_THROUGH_SPEAKER;
    }
    if ((attr->flags & AUDIO_FLAG_AUDIBILITY_ENFORCED) == AUDIO_FLAG_AUDIBILITY_ENFORCED) {
        return (uint32_t) STRATEGY_ENFORCED_AUDIBLE;
    }
    // usage to strategy mapping
    // the strategy returned here is STRATEGY_MEDIA
    return static_cast<uint32_t>(mEngine->getStrategyForUsage(attr->usage));
}

As shown above, getStrategyForAttr returns the strategy STRATEGY_MEDIA; this strategy is then used to pick the output device.
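In simplified form, the attribute-to-strategy mapping is a lookup keyed on flags first and usage second. A toy model of that logic (the enum names mirror the source above, but the values and the `strategyForAttr` helper are our illustrative sketch, not the real engine):

```cpp
#include <cassert>
#include <cstdint>

enum routing_strategy {
    STRATEGY_MEDIA,
    STRATEGY_PHONE,
    STRATEGY_ENFORCED_AUDIBLE,
    STRATEGY_TRANSMITTED_THROUGH_SPEAKER,
};

enum audio_usage_t { AUDIO_USAGE_MEDIA, AUDIO_USAGE_VOICE_COMMUNICATION };

const uint32_t AUDIO_FLAG_AUDIBILITY_ENFORCED = 0x1;
const uint32_t AUDIO_FLAG_BEACON              = 0x8;

struct audio_attributes_t {
    audio_usage_t usage;
    uint32_t flags;
};

// Flags take priority over usage, exactly as in getStrategyForAttr above
routing_strategy strategyForAttr(const audio_attributes_t &attr) {
    if ((attr.flags & AUDIO_FLAG_BEACON) == AUDIO_FLAG_BEACON)
        return STRATEGY_TRANSMITTED_THROUGH_SPEAKER;
    if ((attr.flags & AUDIO_FLAG_AUDIBILITY_ENFORCED) == AUDIO_FLAG_AUDIBILITY_ENFORCED)
        return STRATEGY_ENFORCED_AUDIBLE;
    // usage-to-strategy mapping: music playback maps to STRATEGY_MEDIA
    switch (attr.usage) {
    case AUDIO_USAGE_VOICE_COMMUNICATION: return STRATEGY_PHONE;
    case AUDIO_USAGE_MEDIA:
    default:                              return STRATEGY_MEDIA;
    }
}
```

For our music track (usage AUDIO_USAGE_MEDIA, no flags) this lands in the usage branch and yields STRATEGY_MEDIA.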

frameworks/av/services/audiopolicy/enginedefault/src/Engine.cpp

audio_devices_t Engine::getDeviceForStrategy(routing_strategy strategy) const
{
    DeviceVector availableOutputDevices = mApmObserver->getAvailableOutputDevices();
    DeviceVector availableInputDevices = mApmObserver->getAvailableInputDevices();
    const SwAudioOutputCollection &outputs = mApmObserver->getOutputs();
    return getDeviceForStrategyInt(strategy, availableOutputDevices,
                                   availableInputDevices, outputs);
}

Continuing:

audio_devices_t Engine::getDeviceForStrategyInt(routing_strategy strategy,
                                                DeviceVector availableOutputDevices,
                                                DeviceVector availableInputDevices,
                                                const SwAudioOutputCollection &outputs) const
{
    uint32_t availableOutputDevicesType = availableOutputDevices.types();
    ...
    switch (strategy) {
    ...
    case STRATEGY_MEDIA: {
        uint32_t device2 = AUDIO_DEVICE_NONE;
        ...
        // device2 has not been picked yet, and FORCE_NO_BT_A2DP has not been set
        // (setBluetoothA2dpOn(false) sets the AudioSystem.FOR_MEDIA /
        // AudioSystem.FORCE_NO_BT_A2DP markers)
        if ((device2 == AUDIO_DEVICE_NONE) &&
            (mForceUse[AUDIO_POLICY_FORCE_FOR_MEDIA] != AUDIO_POLICY_FORCE_NO_BT_A2DP) &&
            (outputs.getA2dpOutput() != 0)) {
            // First priority: if A2DP is available it wins, then we jump to the
            // special case marked with **** below
            device2 = availableOutputDevicesType & AUDIO_DEVICE_OUT_BLUETOOTH_A2DP;
            if (device2 == AUDIO_DEVICE_NONE) {
                device2 = availableOutputDevicesType & AUDIO_DEVICE_OUT_BLUETOOTH_A2DP_HEADPHONES;
            }
            if (device2 == AUDIO_DEVICE_NONE) {
                device2 = availableOutputDevicesType & AUDIO_DEVICE_OUT_BLUETOOTH_A2DP_SPEAKER;
            }
        }
        // Next comes the AudioSystem.FOR_MEDIA / FORCE_SPEAKER case: the speaker wins
        if ((device2 == AUDIO_DEVICE_NONE) &&
            (mForceUse[AUDIO_POLICY_FORCE_FOR_MEDIA] == AUDIO_POLICY_FORCE_SPEAKER)) {
            device2 = availableOutputDevicesType & AUDIO_DEVICE_OUT_SPEAKER;
        }
        // From here on, devices are tried in priority order: as soon as one is
        // selected we stop and only merge it with the co-existing devices below
        if (device2 == AUDIO_DEVICE_NONE) {
            device2 = availableOutputDevicesType & AUDIO_DEVICE_OUT_WIRED_HEADPHONE;
        }
        if (device2 == AUDIO_DEVICE_NONE) {
            device2 = availableOutputDevicesType & AUDIO_DEVICE_OUT_LINE;
        }
        if (device2 == AUDIO_DEVICE_NONE) {
            device2 = availableOutputDevicesType & AUDIO_DEVICE_OUT_WIRED_HEADSET;
        }
        if (device2 == AUDIO_DEVICE_NONE) {
            device2 = availableOutputDevicesType & AUDIO_DEVICE_OUT_USB_HEADSET;
        }
        if (device2 == AUDIO_DEVICE_NONE) {
            device2 = availableOutputDevicesType & AUDIO_DEVICE_OUT_USB_ACCESSORY;
        }
        if (device2 == AUDIO_DEVICE_NONE) {
            device2 = availableOutputDevicesType & AUDIO_DEVICE_OUT_USB_DEVICE;
        }
        if (device2 == AUDIO_DEVICE_NONE) {
            device2 = availableOutputDevicesType & AUDIO_DEVICE_OUT_DGTL_DOCK_HEADSET;
        }
        if ((device2 == AUDIO_DEVICE_NONE) && (strategy != STRATEGY_SONIFICATION)) {
            // no sonification on aux digital (e.g. HDMI)
            device2 = availableOutputDevicesType & AUDIO_DEVICE_OUT_AUX_DIGITAL;
        }
        if ((device2 == AUDIO_DEVICE_NONE) &&
            (mForceUse[AUDIO_POLICY_FORCE_FOR_DOCK] == AUDIO_POLICY_FORCE_ANALOG_DOCK)) {
            device2 = availableOutputDevicesType & AUDIO_DEVICE_OUT_ANLG_DOCK_HEADSET;
        }
        if (device2 == AUDIO_DEVICE_NONE) {
            device2 = availableOutputDevicesType & AUDIO_DEVICE_OUT_SPEAKER;
        }
        // *******************************************************************
        // Special case: ARC, SPDIF and AUX_LINE can co-exist with other devices
        int device3 = AUDIO_DEVICE_NONE;
        if (strategy == STRATEGY_MEDIA) {
            // If ARC, SPDIF or AUX_LINE is available, record it in device3
            device3 = availableOutputDevicesType & AUDIO_DEVICE_OUT_HDMI_ARC;
            device3 |= (availableOutputDevicesType & AUDIO_DEVICE_OUT_SPDIF);
            device3 |= (availableOutputDevicesType & AUDIO_DEVICE_OUT_AUX_LINE);
        }
        // Merge device2 with the co-existing ARC/SPDIF/AUX_LINE devices
        device2 |= device3;
        // In the normal case, device is still AUDIO_DEVICE_NONE at this point
        device |= device2;
        // If HDMI system audio mode is on, remove the speaker from the output list
        if ((strategy == STRATEGY_MEDIA) &&
            (mForceUse[AUDIO_POLICY_FORCE_FOR_HDMI_SYSTEM_AUDIO] ==
                AUDIO_POLICY_FORCE_HDMI_SYSTEM_AUDIO_ENFORCED)) {
            device &= ~AUDIO_DEVICE_OUT_SPEAKER;
        }
    } break;
    ...
}
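The whole STRATEGY_MEDIA cascade boils down to: walk a priority-ordered list of device bits, take the first one present in the available mask, then OR in the devices that are allowed to co-exist. A compact sketch of that pattern (the `deviceForMedia` helper and the shortened constant names are ours; the bit values are illustrative):

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

typedef uint32_t audio_devices_t;
const audio_devices_t OUT_NONE            = 0x0;
const audio_devices_t OUT_SPEAKER         = 0x2;
const audio_devices_t OUT_WIRED_HEADSET   = 0x4;
const audio_devices_t OUT_WIRED_HEADPHONE = 0x8;
const audio_devices_t OUT_AUX_LINE        = 0x200000;  // may co-exist with others

// First match in priority order wins, then co-existing devices are merged in
audio_devices_t deviceForMedia(audio_devices_t available) {
    const std::vector<audio_devices_t> priority = {
        OUT_WIRED_HEADPHONE, OUT_WIRED_HEADSET, OUT_SPEAKER,
    };
    audio_devices_t device = OUT_NONE;
    for (audio_devices_t candidate : priority) {
        device = available & candidate;
        if (device != OUT_NONE) break;     // stop at the first available device
    }
    device |= (available & OUT_AUX_LINE);  // AUX_LINE co-exists with the winner
    return device;
}
```

This is why plugging in a wired headphone silently "steals" media audio from the speaker: the headphone bit simply sits earlier in the priority walk.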

One thing worth explaining is where availableOutputDevicesType comes from. At the top of this function we have:

uint32_t availableOutputDevicesType = availableOutputDevices.types();

So we trace availableOutputDevices back into getDeviceForStrategy:

audio_devices_t Engine::getDeviceForStrategy(routing_strategy strategy) const
{
    DeviceVector availableOutputDevices = mApmObserver->getAvailableOutputDevices();
    DeviceVector availableInputDevices = mApmObserver->getAvailableInputDevices();
    ...
}

Here getAvailableOutputDevices comes from:

virtual const DeviceVector &getAvailableOutputDevices() const
{
    return mAvailableOutputDevices;
}

And mAvailableOutputDevices is exactly what was built in step 1 of this article.
3. Deciding which concrete device on which interface type (USB, primary, Bluetooth) is finally chosen

a. First, let's determine which audio interface types the current device supports

As with device-type selection earlier, this comes from parsing the AUDIO_POLICY_XML_CONFIG_FILE (/system/etc/audio_policy_configuration.xml) configuration file. Note that this XML file also includes three other files: a2dp_audio_policy_configuration.xml, usb_audio_policy_configuration.xml and r_submix_audio_policy_configuration.xml, so three additional hardware modules are added:

<module name="primary" halVersion="2.0">
<module name="a2dp" halVersion="2.0">
<module name="usb" halVersion="2.0">
<module name="r_submix" halVersion="2.0">

We can also dump the media.audio_policy service to see which modules the current system supports:

adb shell dumpsys media.audio_policy > /home/jon/audio_policy.txt

Focus on the following fields:

HW Modules dump:
- HW Module 1:
  - name: primary
...
- HW Module 2:
  - name: a2dp
...
- HW Module 3:
  - name: usb
...
- HW Module 4:
  - name: submix

As shown above, the dump contains the detailed configuration of each module and its devices. Let's take a first look at the dump data for each HW module:

- HW Module 1:
  - name: primary              // the concrete audio hardware interface type
  - handle: 10
  - version: 2.0
  - outputs:                   // the app layer has different streams, so the config has several outputs
      output 0:
      - name: primary output   // matched against the concrete stream type
      - Profiles:
          Profile 0:           // a stream can also have several encoding formats, hence several profiles
          - format: AUDIO_FORMAT_PCM_16_BIT
          - sampling rates: 48000
          - channel masks: 0x0003
      - flags: 0x0006
      - Supported devices:     // the same stream can likewise be output on several devices (headset, speaker, ...)
          Device 1:
          - id:  1
          - tag name: Earpiece
          - type: AUDIO_DEVICE_OUT_EARPIECE
          Device 2:
          - id:  2
          - tag name: Speaker
          - type: AUDIO_DEVICE_OUT_SPEAKER
          ...

As shown above, each hardware interface type is split into several outputs according to the stream use case. In the end we need to pick, among all these outputs, the one that satisfies our request, and then pick the final device from that output.
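The dump's module → output → profile → device hierarchy maps naturally onto nested structs. A hypothetical data model of what the XML parsing produces (the struct and field names here are ours, chosen for illustration; they are not AOSP's actual classes):

```cpp
#include <cassert>
#include <cstdint>
#include <string>
#include <vector>

typedef uint32_t audio_devices_t;
const audio_devices_t OUT_EARPIECE = 0x1;
const audio_devices_t OUT_SPEAKER  = 0x2;

struct Profile {                       // one encoding the output can produce
    std::string format;
    uint32_t samplingRate;
};

struct Output {                        // one audio path of the module
    std::string name;
    std::vector<Profile> profiles;
    audio_devices_t supportedDevices;  // union of the device bits it can drive
};

struct HwModule {                      // primary / a2dp / usb / r_submix
    std::string name;
    std::vector<Output> outputs;
};

// Build the "primary" module roughly as it appears in the dump above
HwModule primaryModule() {
    return HwModule{
        "primary",
        { Output{ "primary output",
                  { Profile{ "AUDIO_FORMAT_PCM_16_BIT", 48000 } },
                  OUT_EARPIECE | OUT_SPEAKER } },
    };
}
```

Output-path selection then amounts to filtering `outputs` by a device bit, which is exactly what getOutputsForDevice does below.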
b. Second: how the audio paths are loaded into the system, and how each audio path's default output device is set

Let's look at the code to see how the system stores all these outputs:

frameworks/av/services/audiopolicy/managerdefault/AudioPolicyManager.cpp

AudioPolicyManager::AudioPolicyManager(AudioPolicyClientInterface *clientInterface)
{
    mpClientInterface = clientInterface;
    ...
    // mHwModules.size() is the number of modules obtained above, 4 in our case
    for (size_t i = 0; i < mHwModules.size(); i++) {
        // Load the audio HAL libraries one by one so they can be called later
        mHwModules[i]->mHandle = mpClientInterface->loadHwModule(mHwModules[i]->getName());
        // Each IOProfile corresponds to one output;
        // outProfile->getTagName().string() is that output's name
        for (size_t j = 0; j < mHwModules[i]->mOutputProfiles.size(); j++) {
            // Fetch the configuration of each audio output path
            const sp<IOProfile> outProfile = mHwModules[i]->mOutputProfiles[j];
            ALOGE("Jon, outProfile name = %s\n", outProfile->getTagName().string());
            // If no device supports this output path, drop the path
            if (!outProfile->hasSupportedDevices()) {
                ALOGW("Output profile contains no device on module %s", mHwModules[i]->getName());
                continue;
            }
            // Mark the path if it supports TTS
            if ((outProfile->getFlags() & AUDIO_OUTPUT_FLAG_TTS) != 0) {
                mTtsOutputAvailable = true;
            }
            // Direct streams (no mixing needed, e.g. HDMI output) do not create a
            // direct playback thread here, so they are not added to the system's
            // default outputs either
            if ((outProfile->getFlags() & AUDIO_OUTPUT_FLAG_DIRECT) != 0) {
                continue;
            }
            // Which output devices (speaker, headset, ...) this path supports
            audio_devices_t profileType = outProfile->getSupportedDevicesType();
            /*
             * 1. First note that the default output device is the speaker.
             * 2. So this checks whether the path's supported devices include the
             *    speaker; if so, profileType is set to the speaker.
             * 3. If the path does not include the speaker, the first available
             *    device in the supported list is chosen in order (headset,
             *    speaker, earpiece).
             */
            if ((profileType & mDefaultOutputDevice->type()) != AUDIO_DEVICE_NONE) {
                profileType = mDefaultOutputDevice->type();
            } else {
                // chose first device present in profile's SupportedDevices also part of
                // outputDeviceTypes
                profileType = outProfile->getSupportedDeviceForType(outputDeviceTypes);
            }
            // If the path has no usable output device at all, drop it decisively
            if ((profileType & outputDeviceTypes) == 0) {
                continue;
            }
            // Create an output descriptor from outProfile (this path's config)
            // and mpClientInterface
            sp<SwAudioOutputDescriptor> outputDesc = new SwAudioOutputDescriptor(outProfile,
                                                                                 mpClientInterface);
            // All output devices this path supports
            const DeviceVector &supportedDevices = outProfile->getSupportedDevices();
            // Among them, the devices matching the selected type (speaker by default)
            const DeviceVector &devicesForType = supportedDevices.getDevicesFromType(profileType);
            // If the list is non-empty, take the first element's address field,
            // otherwise empty. BTW, this field is usually empty.
            String8 address = devicesForType.size() > 0 ? devicesForType.itemAt(0)->mAddress
                                                        : String8("");
            ...
            // Set the default output device of this path, usually the speaker
            outputDesc->mDevice = profileType;
            audio_io_handle_t output = AUDIO_IO_HANDLE_NONE;
            // A note on output (used later): openOutput initializes it to a
            // globally unique value, which is associated with the handle of a
            // PlaybackThread.
            // Open this output device using the path and the concrete device
            status_t status = mpClientInterface->openOutput(outProfile->getModuleHandle(),
                                                            &output,
                                                            &config,
                                                            &outputDesc->mDevice,
                                                            address,
                                                            &outputDesc->mLatency,
                                                            outputDesc->mFlags);
            if (status != NO_ERROR) {
                ALOGW("Cannot open output stream for device %08x on hw module %s",
                      outputDesc->mDevice, mHwModules[i]->getName());
            } else {
                outputDesc->mSamplingRate = config.sample_rate;
                outputDesc->mChannelMask = config.channel_mask;
                outputDesc->mFormat = config.format;
                for (size_t k = 0; k < supportedDevices.size(); k++) {
                    ssize_t index = mAvailableOutputDevices.indexOf(supportedDevices[k]);
                    // give a valid ID to an attached device once confirmed it is reachable
                    if (index >= 0 && !mAvailableOutputDevices[index]->isAttached()) {
                        mAvailableOutputDevices[index]->attach(mHwModules[i]);
                    }
                }
                if (mPrimaryOutput == 0 &&
                        outProfile->getFlags() & AUDIO_OUTPUT_FLAG_PRIMARY) {
                    mPrimaryOutput = outputDesc;
                }
                // Add the globally unique id and the path to the global mOutputs;
                // later we pick the right path from mOutputs based on device and
                // stream type
                addOutput(output, outputDesc);
                // Set the default output device for this path
                setOutputDevice(outputDesc, outputDesc->mDevice, true, 0, NULL, address.string());
            }
        }
        ...
    }
}

Let's first analyze mpClientInterface->openOutput, paying attention to how the two fields output and outputDesc->mDevice are assigned.

frameworks/av/services/audiopolicy/service/AudioPolicyClientImpl.cpp

status_t AudioPolicyService::AudioPolicyClient::openOutput(audio_module_handle_t module,
                                                           audio_io_handle_t *output,
                                                           audio_config_t *config,
                                                           audio_devices_t *devices,
                                                           const String8& address,
                                                           uint32_t *latencyMs,
                                                           audio_output_flags_t flags)
{
    sp<IAudioFlinger> af = AudioSystem::get_audio_flinger();
    if (af == 0) {
        ALOGW("%s: could not get AudioFlinger", __func__);
        return PERMISSION_DENIED;
    }
    return af->openOutput(module, output, config, devices, address, latencyMs, flags);
}

frameworks/av/services/audioflinger/AudioFlinger.cpp

sp<AudioFlinger::PlaybackThread> AudioFlinger::openOutput_l(audio_module_handle_t module,
                                                            audio_io_handle_t *output,
                                                            audio_config_t *config,
                                                            audio_devices_t devices,
                                                            const String8& address,
                                                            audio_output_flags_t flags)
{
    // The audio HAL libraries were already loaded in AudioPolicyManager; here we
    // simply look up the outHwDev pointer
    AudioHwDevice *outHwDev = findSuitableHwDev_l(module, devices);
    if (*output == AUDIO_IO_HANDLE_NONE) {
        // Generate a globally unique output id for each audio path
        *output = nextUniqueId(AUDIO_UNIQUE_ID_USE_OUTPUT);
    } else {
        ...
    }
    // Call openOutputStream
    status_t status = outHwDev->openOutputStream(&outputStream,
                                                 *output,
                                                 devices,
                                                 flags,
                                                 config,
                                                 address.string());
    if (status == NO_ERROR) {
        PlaybackThread *thread;
        // Create the matching PlaybackThread according to the stream type
        if (flags & AUDIO_OUTPUT_FLAG_COMPRESS_OFFLOAD) {
            thread = new OffloadThread(this, outputStream, *output, devices, mSystemReady);
        } else if ((flags & AUDIO_OUTPUT_FLAG_DIRECT)
                || !isValidPcmSinkFormat(config->format)
                || !isValidPcmSinkChannelMask(config->channel_mask)) {
            thread = new DirectOutputThread(this, outputStream, *output, devices, mSystemReady);
        } else {
            thread = new MixerThread(this, outputStream, *output, devices, mSystemReady);
        }
        mPlaybackThreads.add(*output, thread);
        return thread;
    }
}

The openOutputStream here calls down into the hardware layer; taking primary as an example:

hardware/libhardware/modules/audio/audio_hw.c

adev->device.open_output_stream = adev_open_output_stream;

adev_open_output_stream sets up the function pointers that operate on the audio hardware.
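The HAL contract is essentially a struct of function pointers that adev_open_output_stream fills in. A stripped-down illustration of that pattern (the struct layout and `out_write` are our simplified stand-ins, not the real audio_hw.c types):

```cpp
#include <cassert>
#include <cstddef>

// Minimal stand-in for the HAL output-stream struct (illustrative only)
struct audio_stream_out {
    // the HAL fills this with the function that actually writes PCM data
    ptrdiff_t (*write)(audio_stream_out *stream, const void *buffer, size_t bytes);
};

static ptrdiff_t out_write(audio_stream_out *, const void *, size_t bytes) {
    // a real HAL pushes the buffer to the driver; here we just report success
    return (ptrdiff_t)bytes;
}

// Analogue of adev_open_output_stream: wire up the operation table
int adev_open_output_stream(audio_stream_out *out) {
    out->write = out_write;
    return 0;
}
```

Once the pointers are wired, the PlaybackThread created above can call `write` on the stream without knowing which hardware backs it.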

c. Section b above set a default output device for every audio path. Earlier, during track creation, one of the steps was getOutputForAttr; we have covered obtaining the audio strategy and selecting the audio device, but one function is still left: getOutputForDevice. Given the device, format and requested audio policy flags we pass down, it picks the matching path from all the audio paths the system has stored.

audio_io_handle_t AudioPolicyManager::getOutputForDevice(audio_devices_t device,
                                                         audio_session_t session __unused,
                                                         audio_stream_type_t stream,
                                                         uint32_t samplingRate,
                                                         audio_format_t format,
                                                         audio_channel_mask_t channelMask,
                                                         audio_output_flags_t flags,
                                                         const audio_offload_info_t *offloadInfo)
{
    ...
    // Coming from AudioTrack, this is basically always a PCM stream
    if (audio_is_linear_pcm(format)) {
        // get which output is suitable for the specified stream. The actual
        // routing change will happen when startOutput() will be called.
        // This picks, from the stored mOutputs, the list of paths that contain
        // the device; note it is still a list, since more than one path may
        // well contain the device.
        SortedVector<audio_io_handle_t> outputs = getOutputsForDevice(device, mOutputs);
        // at this stage we should ignore the DIRECT flag as no direct output
        // could be found earlier; strip DIRECT from the requested policy flags
        flags = (audio_output_flags_t)(flags & ~AUDIO_OUTPUT_FLAG_DIRECT);
        // Use the format and policy flags to pick the best-suited path from the list
        output = selectOutput(outputs, flags, format);
    }
}
/*
 * Parameters:
 *   outputs: the list of audio paths that reach the target device
 *   flags:   the policy flags requested by the application
 *   format:  the format requested by the application
 */
audio_io_handle_t AudioPolicyManager::selectOutput(const SortedVector<audio_io_handle_t>& outputs,
                                                   audio_output_flags_t flags,
                                                   audio_format_t format)
{
    // select one output among several that provide a path to a particular device or set of
    // devices (the list was previously built by getOutputsForDevice()).
    // The priority is as follows:
    // 1: the output with the highest number of requested policy flags
    // 2: the output with the bit depth closest to the requested one
    // 3: the primary output
    // 4: the first output in the list
    // If no path satisfies the request, there is unfortunately nothing to offer: return 0
    if (outputs.size() == 0) {
        return 0;
    }
    // If only one path qualifies, there is no choice to make: return it
    if (outputs.size() == 1) {
        return outputs[0];
    }
    int maxCommonFlags = 0;
    audio_io_handle_t outputForFlags = 0;
    audio_io_handle_t outputForPrimary = 0;
    audio_io_handle_t outputForFormat = 0;
    audio_format_t bestFormat = AUDIO_FORMAT_INVALID;
    audio_format_t bestFormatForFlags = AUDIO_FORMAT_INVALID;
    for (size_t i = 0; i < outputs.size(); i++) {
        // Examine the qualifying paths one by one
        sp<SwAudioOutputDescriptor> outputDesc = mOutputs.valueFor(outputs[i]);
        if (!outputDesc->isDuplicated()) {
            // if a valid format is specified, skip output if not compatible
            if (format != AUDIO_FORMAT_INVALID) {
                // For direct outputs (HDMI etc., no mixing) the format must match;
                // if it does not, the path is dropped outright
                if (outputDesc->mFlags & AUDIO_OUTPUT_FLAG_DIRECT) {
                    if (!audio_formats_match(format, outputDesc->mFormat)) {
                        continue;
                    }
                } else if (!audio_is_linear_pcm(format)) {
                    // Otherwise the requested format must be PCM, else drop the path
                    continue;
                }
                // Among the qualifying paths, remember the one whose format best
                // matches the request
                if (AudioPort::isBetterFormatMatch(outputDesc->mFormat, bestFormat, format)) {
                    outputForFormat = outputs[i];
                    bestFormat = outputDesc->mFormat;
                }
            }
            // Among the paths' policy flags, prefer the path that satisfies as many
            // of the requested flags as possible
            int commonFlags = popcount(outputDesc->mProfile->getFlags() & flags);
            if (commonFlags >= maxCommonFlags) {
                if (commonFlags == maxCommonFlags) {
                    if (AudioPort::isBetterFormatMatch(outputDesc->mFormat, bestFormatForFlags, format)) {
                        outputForFlags = outputs[i];
                        bestFormatForFlags = outputDesc->mFormat;
                    }
                } else {
                    outputForFlags = outputs[i];
                    maxCommonFlags = commonFlags;
                    bestFormatForFlags = outputDesc->mFormat;
                }
                ALOGV("selectOutput() commonFlags for output %d, %04x", outputs[i], commonFlags);
            }
            // Check whether the list contains the primary output; if so, remember it
            if (outputDesc->mProfile->getFlags() & AUDIO_OUTPUT_FLAG_PRIMARY) {
                outputForPrimary = outputs[i];
            }
        }
    }
    if (outputForFlags != 0) {
        return outputForFlags;
    }
    if (outputForFormat != 0) {
        return outputForFormat;
    }
    if (outputForPrimary != 0) {
        return outputForPrimary;
    }
    return outputs[0];
}

For audio path selection we can now draw the following conclusions:
1. First, prefer the path that satisfies the most of the requested policy flags (e.g. deep buffer | fast | direct | primary);
2. Failing that, pick the path whose format best matches the request (e.g. AUDIO_FORMAT_PCM_16_BIT);
3. Failing that, pick the primary output path (which supports e.g. ringtone streams);
4. If none of the above applies, any choice is as good as another: just return the first path in the list.
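The four rules above can be condensed into a small standalone function: score each candidate by the number of matching flag bits, fall back to format match, then to the primary flag, then to list order. A simplified re-implementation of that ranking (this is our sketch, not the real selectOutput; `formatMatches` stands in for the bit-depth comparison):

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

const uint32_t FLAG_PRIMARY     = 0x2;
const uint32_t FLAG_FAST        = 0x4;
const uint32_t FLAG_DEEP_BUFFER = 0x8;

struct OutputDesc {
    int handle;          // globally unique output id
    uint32_t flags;      // the profile's policy flags
    bool formatMatches;  // stands in for the bit-depth comparison
};

int selectOutput(const std::vector<OutputDesc> &outputs, uint32_t requestedFlags) {
    if (outputs.empty()) return 0;
    int forFlags = 0, forFormat = 0, forPrimary = 0, maxCommon = 0;
    for (const OutputDesc &o : outputs) {
        // count how many requested flag bits this output shares
        int common = __builtin_popcount(o.flags & requestedFlags);
        if (common > maxCommon) { forFlags = o.handle; maxCommon = common; }
        if (o.formatMatches && forFormat == 0) forFormat = o.handle;
        if (o.flags & FLAG_PRIMARY) forPrimary = o.handle;
    }
    if (forFlags != 0)   return forFlags;     // 1: most requested policy flags
    if (forFormat != 0)  return forFormat;    // 2: best format match
    if (forPrimary != 0) return forPrimary;   // 3: the primary output
    return outputs[0].handle;                 // 4: first in the list
}
```

So a deep-buffer request naturally lands on a deep-buffer output even when a primary output is also reachable, which matches rule 1 beating rule 3.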

The second parameter of selectOutput, flags, deserves a detailed look; we can go straight to its definition:

typedef enum {
    AUDIO_OUTPUT_FLAG_NONE = 0x0,       // no attributes
    AUDIO_OUTPUT_FLAG_DIRECT = 0x1,     // this output directly connects a track
                                        // to one output stream: no software mixer
    AUDIO_OUTPUT_FLAG_PRIMARY = 0x2,    // this output is the primary output of
                                        // the device. It is unique and must be
                                        // present. It is opened by default and
                                        // receives routing, audio mode and volume
                                        // controls related to voice calls.
    AUDIO_OUTPUT_FLAG_FAST = 0x4,       // output supports "fast tracks",
                                        // defined elsewhere
    AUDIO_OUTPUT_FLAG_DEEP_BUFFER = 0x8,        // use deep audio buffers
    AUDIO_OUTPUT_FLAG_COMPRESS_OFFLOAD = 0x10,  // offload playback of compressed
                                                // streams to hardware codec
    AUDIO_OUTPUT_FLAG_NON_BLOCKING = 0x20,      // use non-blocking write
    AUDIO_OUTPUT_FLAG_HW_AV_SYNC = 0x40,   // output uses a hardware A/V synchronization source
    AUDIO_OUTPUT_FLAG_TTS = 0x80,          // output for streams transmitted through speaker
                                           // at a sample rate high enough to accommodate
                                           // lower-range ultrasonic playback
    AUDIO_OUTPUT_FLAG_RAW = 0x100,         // minimize signal processing
    AUDIO_OUTPUT_FLAG_SYNC = 0x200,        // synchronize I/O streams
    AUDIO_OUTPUT_FLAG_IEC958_NONAUDIO = 0x400, // Audio stream contains compressed audio in
                                               // SPDIF data bursts, not PCM.
    AUDIO_OUTPUT_FLAG_VOIP_RX = 0x800,     // use this flag in combination with DIRECT to
                                           // start voip over voice path.
    AUDIO_OUTPUT_FLAG_COMPRESS_PASSTHROUGH = 0x1000, // flag for HDMI compressed passthrough
    AUDIO_OUTPUT_FLAG_DIRECT_PCM = 0x2000, // flag for Direct PCM
} audio_output_flags_t;

These are the various audio output flags.

Finally, we still need to know how to check which output flags each of the system's many audio paths supports:

adb shell dumpsys media.audio_policy > /home/jon/audio_policy.txt

We focus on the following fields:
