policy: device selection

https://www.cxyzjd.com/article/VNanyesheshou/115659838

Android audio source code analysis — AudioTrack device selection

https://me.zhoujinjian.com/categories/Audio/

https://www.dazhuanlan.com/winterttr/topics/1258074

The process hosting AudioPolicyService

frameworks/av/media/audioserver
➜  audioserver git:(0111) ✗ tree
.
├── Android.mk
├── audioserver.rc
├── main_audioserver.cpp
└── OWNERS

int main(int argc __unused, char **argv)
{
    android::hardware::configureRpcThreadpool(4, false /*callerWillJoin*/);
    sp<ProcessState> proc(ProcessState::self());
    sp<IServiceManager> sm = defaultServiceManager();
    ALOGI("ServiceManager: %p", sm.get());

    AudioFlinger::instantiate();
    ALOGI("ServiceManager: AudioFlinger instantiate done %p", sm.get());

    AudioPolicyService::instantiate();
    ALOGI("ServiceManager: AudioPolicyService instantiate done %p", sm.get());

    instantiateVRAudioServer();
    ALOGI("ServiceManager: VRAudioServer instantiate done %p", sm.get());

    ALOGI("ServiceManager: done %p", sm.get());
    ProcessState::self()->startThreadPool();
    IPCThreadState::self()->joinThreadPool();
}

AudioPolicyService::instantiate()

This single line performs the entire initialization of AudioPolicyService.

instantiate() is not defined in class AudioPolicyService itself, so look at the base class BinderService:
template<typename SERVICE>
class BinderService
{
public:
    static status_t publish(bool allowIsolated = false,
                            int dumpFlags = IServiceManager::DUMP_FLAG_PRIORITY_DEFAULT) {
        sp<IServiceManager> sm(defaultServiceManager());
        return sm->addService(String16(SERVICE::getServiceName()),
                              new SERVICE(), allowIsolated, dumpFlags);
    }

    static void instantiate() { publish(); }
};

new SERVICE() -> new AudioPolicyService()

AudioPolicyService()

BinderService::instantiate() ends up calling new AudioPolicyService().

This triggers both object construction, AudioPolicyService(), and the first-use hook, AudioPolicyService::onFirstRef().
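onFirstRef() comes from Android's RefBase: it runs the first time a strong pointer (sp<>) takes a reference to the object, i.e. right after addService() wraps the freshly created AudioPolicyService in an sp<>. A minimal self-contained sketch of that idiom, using a simplified stand-in instead of the real RefBase/sp classes:

#include <cstdio>
#include <memory>

// Simplified stand-in for android::RefBase: onFirstRef() fires when the
// first strong reference is taken, which is where services do heavy setup.
struct RefCountedish {
    virtual ~RefCountedish() = default;
    virtual void onFirstRef() {}
};

template <typename T>
std::shared_ptr<T> make_sp(T *raw) {
    std::shared_ptr<T> strong(raw);
    strong->onFirstRef();          // first strong reference -> run the hook
    return strong;
}

struct FakePolicyService : RefCountedish {
    FakePolicyService() { std::printf("constructor: member init only\n"); }
    void onFirstRef() override { std::printf("onFirstRef: threads, APM, engine...\n"); }
};

int main() {
    // Mimics addService(..., new SERVICE(), ...) taking ownership of the raw pointer.
    auto service = make_sp(new FakePolicyService());
    (void)service;
    return 0;
}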

AudioPolicyService::AudioPolicyService()
    : BnAudioPolicyService(), // base-class constructor
      mAudioPolicyManager(NULL), // member initialization only
      mAudioPolicyClient(NULL),
      mPhoneState(AUDIO_MODE_INVALID),
      mCaptureStateNotifier(false) {
}

void AudioPolicyService::onFirstRef()
{
    {
        // start audio commands thread
        mAudioCommandThread = new AudioCommandThread(String8("ApmAudio"), this);
        // start output activity command thread
        mOutputCommandThread = new AudioCommandThread(String8("ApmOutput"), this);

        mAudioPolicyClient = new AudioPolicyClient(this);
        mAudioPolicyManager = createAudioPolicyManager(mAudioPolicyClient);
    }
    // load audio processing modules
    sp<AudioPolicyEffects> audioPolicyEffects = new AudioPolicyEffects();

    sp<UidPolicy> uidPolicy = new UidPolicy(this);
    uidPolicy->registerSelf();
}

AudioCommandThread

AudioCommandThread: where is the user-permission check done? Judging from the comment below, the check itself happens in AudioFlinger against the calling process; presumably routing the config commands through this thread means they reach AudioFlinger from the audioserver process, which has permission to modify audio settings, rather than from the original Binder caller.
// Thread used to send audio config commands to audio flinger
// For audio config commands, it is necessary because audio flinger requires that the calling
// process (user) has permission to modify audio settings.
class AudioCommandThread : public Thread {}
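AudioCommandThread is essentially a command queue drained by a worker thread: callers enqueue a command, the thread wakes up and executes it, optionally reporting the status back (in the real class, sendCommand()/insertCommand_l() play this role). A minimal self-contained sketch of that pattern, using standard C++ threading rather than Android's Thread/Condition classes:

#include <condition_variable>
#include <cstdio>
#include <deque>
#include <functional>
#include <mutex>
#include <thread>

// Generic command thread: callers enqueue closures, the worker drains them.
class CommandThread {
public:
    CommandThread() : mWorker([this] { threadLoop(); }) {}
    ~CommandThread() {
        post([this] { mExit = true; });   // last command asks the worker to stop
        mWorker.join();
    }
    // Roughly the role of AudioCommandThread::sendCommand()/insertCommand_l().
    void post(std::function<void()> cmd) {
        std::lock_guard<std::mutex> lock(mLock);
        mQueue.push_back(std::move(cmd));
        mWaitWorkCV.notify_one();
    }
private:
    void threadLoop() {
        std::unique_lock<std::mutex> lock(mLock);
        while (!mExit) {
            mWaitWorkCV.wait(lock, [this] { return !mQueue.empty(); });
            auto cmd = std::move(mQueue.front());
            mQueue.pop_front();
            lock.unlock();
            cmd();            // runs in the worker thread's context
            lock.lock();
        }
    }
    std::mutex mLock;
    std::condition_variable mWaitWorkCV;
    std::deque<std::function<void()>> mQueue;
    bool mExit = false;
    std::thread mWorker;      // declared last so it starts after the other members
};

int main() {
    CommandThread t;
    t.post([] { std::puts("set volume command executed by worker"); });
}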

AudioPolicyClient

class AudioPolicyClient : public AudioPolicyClientInterface

Its constructor is trivial:

AudioPolicyClient(AudioPolicyService *service) : mAudioPolicyService(service) {}

AudioPolicyManager

mAudioPolicyManager = createAudioPolicyManager(mAudioPolicyClient);

Most of the work is done by this call. Before analyzing it in detail, first look at AudioPolicyInterface and AudioPolicyClientInterface.

IAudioPolicyService  AudioPolicyInterface  AudioPolicyClientInterface

These three interfaces contain some similar-looking functions, which can be confusing. IAudioPolicyService is the interface exposed to the outside; the other two exist to serve it. AudioPolicyInterface is the platform-independent side — or rather, its implementation ends up calling the platform-related AudioPolicyClientInterface — and AudioPolicyClientInterface calls the AudioFlinger service to carry out the policy.

APM implements AudioPolicyInterface, and AudioPolicyClient implements AudioPolicyClientInterface.
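A minimal sketch of that layering (simplified, invented types; the real interfaces have many more methods): a Binder call arriving at IAudioPolicyService is forwarded to the AudioPolicyInterface implementation (the APM), which makes the routing decision and then uses AudioPolicyClientInterface to ask AudioFlinger to open/configure the actual streams.

#include <cstdio>

// Simplified model of the layering (names mirror the real ones, bodies are fake).
struct AudioPolicyClientInterfaceish {                 // role of AudioPolicyClientInterface
    virtual ~AudioPolicyClientInterfaceish() = default;
    virtual int openOutput(const char *device) = 0;    // would end up in AudioFlinger
};

struct AudioPolicyInterfaceish {                       // role of AudioPolicyInterface
    virtual ~AudioPolicyInterfaceish() = default;
    virtual int getOutput(int streamType) = 0;         // the policy decision lives here
};

struct FakeClient : AudioPolicyClientInterfaceish {
    int openOutput(const char *device) override {
        std::printf("AudioFlinger: open output on %s\n", device);
        return 1;                                      // io handle
    }
};

struct FakeManager : AudioPolicyInterfaceish {
    explicit FakeManager(AudioPolicyClientInterfaceish *client) : mClient(client) {}
    int getOutput(int /*streamType*/) override {
        // policy: pick a device, then ask the client (i.e. AudioFlinger) to open it
        return mClient->openOutput("SPEAKER");
    }
    AudioPolicyClientInterfaceish *mClient;
};

int main() {
    FakeClient client;
    FakeManager apm(&client);
    // An incoming IAudioPolicyService::getOutput() Binder call would land roughly here:
    std::printf("io handle = %d\n", apm.getOutput(/*AUDIO_STREAM_MUSIC*/ 3));
}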

// The AudioPolicyInterface and AudioPolicyClientInterface classes define the communication interfaces
// between the platform specific audio policy manager and Android generic audio policy manager.
// The platform specific audio policy manager must implement methods of the AudioPolicyInterface class.
// This implementation makes use of the AudioPolicyClientInterface to control the activity and
// configuration of audio input and output streams.
//
// The platform specific audio policy manager is in charge of the audio routing and volume control
// policies for a given platform.
// The main roles of this module are:
//
// [1] - keep track of current system state (removable device connections, phone state, user requests...).
//       System state changes and user actions are notified to the audio policy manager with methods of the AudioPolicyInterface.
//
// [2] - process getOutput() queries received when AudioTrack objects are created: Those queries
//       return a handle on an output that has been selected, configured and opened by the audio policy manager and that
//       must be used by the AudioTrack when registering to the AudioFlinger with the createTrack() method.
//       When the AudioTrack object is released, a putOutput() query is received and the audio policy manager can decide
//       to close or reconfigure the output depending on other streams using this output and current system state.
//
// [3] - similarly process getInput() and putInput() queries received from AudioRecord objects and configure audio inputs.
//
// [4] - process volume control requests: the stream volume is converted from an index value (received from UI) to a float value
//       applicable to each output as a function of platform specific settings and current output route (destination device). It
//       also makes sure that streams are not muted if not allowed (e.g. camera shutter sound in some countries).

AudioPolicyManager (APM)

AudioPolicyManager implements audio policy manager behavior common to all platforms
class AudioPolicyManager : public AudioPolicyInterface, public AudioPolicyManagerObserver
{}

mAudioPolicyManager = createAudioPolicyManager(mAudioPolicyClient);

frameworks/av/services/audiopolicy/manager/AudioPolicyFactory.cpp
extern "C" AudioPolicyInterface* createAudioPolicyManager(
        AudioPolicyClientInterface *clientInterface)
{
    AudioPolicyManager *apm = new AudioPolicyManager(clientInterface);
    status_t status = apm->initialize();
    if (status != NO_ERROR) {
        delete apm;
        apm = nullptr;
    }
    return apm;
}

// These methods should be used when finer control over APM initialization
// is needed, e.g. in tests. Must be used in conjunction with the constructor
// that only performs fields initialization. The public constructor comprises
// these steps in the following sequence:
//   - field initializing constructor;
//   - loadConfig;
//   - initialize.

createAudioPolicyManager consists of two parts:

1. new AudioPolicyManager(AudioPolicyClientInterface)

2. apm->initialize()

new AudioPolicyManager(clientInterface)

There are two constructors here — a one-argument and a two-argument version — but the key call is loadConfig().

AudioPolicyManager::AudioPolicyManager(AudioPolicyClientInterface *clientInterface)
        : AudioPolicyManager(clientInterface, false /*forTesting*/)
{
    loadConfig();
}

AudioPolicyManager::AudioPolicyManager(AudioPolicyClientInterface *clientInterface,
                                       bool /*forTesting*/)
    :
    mUidCached(AID_AUDIOSERVER), // no need to call getuid(), there's only one of us running.
    mpClientInterface(clientInterface),
    mLimitRingtoneVolume(false), mLastVoiceVolume(-1.0f),
    mA2dpSuspended(false),
    mConfig(mHwModulesAll, mOutputDevicesAll, mInputDevicesAll, mDefaultOutputDevice),
    mAudioPortGeneration(1),
    mEnableAudioLoopback(false),
    mIsCERegion(false),
    mBeaconMuteRefCount(0),
    mBeaconPlayingRefCount(0),
    mBeaconMuted(false),     
    mTtsOutputAvailable(false),
    mMasterMono(false),
    mMusicEffectOutput(AUDIO_IO_HANDLE_NONE)
{
}

loadConfig()

The format of audio_policy_configuration.xml:

audio_policy_configuration.xml
    <!-- Modules section:
        There is one section per audio HW module present on the platform.
        Each module section will contain two mandatory tags for audio HAL “halVersion” and “name”.
        The module names are the same as in the current .conf file:
                “primary”, “A2DP”, “remote_submix”, “USB”
        Each module will contain the following sections:
        “devicePorts”: a list of device descriptors for all input and output devices accessible via this
        module.
        This contains both permanently attached devices and removable devices.
        “mixPorts”: listing all output and input streams exposed by the audio HAL

        (so does the HAL itself not export devicePorts?)

        “routes”: list of possible connections between input and output devices or between streams and devices.
            "route": is defined by an attribute:
                -"type": <mux|mix> means all sources are mutually exclusive (mux) or can be mixed (mix)
                -"sink": the sink involved in this route
                -"sources": all the sources that can be connected to the sink via this route
        “attachedDevices”: permanently attached devices.
        The attachedDevices section is a list of device names. The names correspond to device names
        defined in the <devicePorts> section.
        “defaultOutputDevice”: device to be used by default when no policy rule applies
    -->

deserializeAudioPolicyXmlConfig parses the configuration file:

deserializeAudioPolicyXmlConfig(AudioPolicyConfig &config) ->
        deserializeAudioPolicyFile(audioPolicyXmlConfigFile, // "audio_policy_configuration.xml"
                                   &config);

frameworks/av/services/audiopolicy/common/managerdefinitions/src/Serializer.cpp

status_t deserializeAudioPolicyFile(const char *fileName, AudioPolicyConfig *config)
{    // exported entry point: the input is the configuration file name, and the parsed result is stored into config
    PolicySerializer serializer;
    return serializer.deserialize(fileName, config);
}

Template traits class: adds an element E to a collection C.

template<typename E, typename C>
struct AndroidCollectionTraits {
    typedef sp<E> Element;
    typedef C Collection;
    typedef void* PtrSerializingCtx;

    static status_t addElementToCollection(const Element &element, Collection *collection) {
        return collection->add(element) >= 0 ? NO_ERROR : BAD_VALUE;
    }
};
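Each *Traits type below plugs into a generic deserializeCollection<Trait>() helper in Serializer.cpp, which walks the XML children, calls Trait::deserialize() on every <tag> node found under the <collectionTag> node, and adds the result to the collection. A rough self-contained sketch of that pattern (a toy node type instead of libxml2, and a much-simplified trait):

#include <cstdio>
#include <string>
#include <vector>

// Toy stand-in for an xmlNode tree.
struct Node {
    std::string name;
    std::vector<Node> children;
};

// Toy trait: how to turn one <profile> node into an element, and where it goes.
struct ProfileTraits {
    using Element = std::string;
    using Collection = std::vector<std::string>;
    static constexpr const char *tag = "profile";
    static constexpr const char *collectionTag = "profiles";
    static Element deserialize(const Node &n) { return "parsed " + n.name; }
};

// Generic collection walker, analogous in spirit to deserializeCollection<Trait>().
template <typename Trait>
void deserializeCollection(const Node &cur, typename Trait::Collection *out) {
    for (const Node &child : cur.children) {
        if (child.name != Trait::collectionTag) continue;       // find <profiles>
        for (const Node &item : child.children) {
            if (item.name == Trait::tag) {                      // each <profile>
                out->push_back(Trait::deserialize(item));
            }
        }
    }
}

int main() {
    Node profile1{"profile", {}};
    Node profile2{"profile", {}};
    Node profiles{"profiles", {profile1, profile2}};
    Node mixPort{"mixPort", {profiles}};

    ProfileTraits::Collection parsed;
    deserializeCollection<ProfileTraits>(mixPort, &parsed);
    std::printf("parsed %zu profiles\n", parsed.size());
}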

Trait structs describing each section of the configuration file

// A profile section contains a name, one audio format and the list of supported sampling rates
// and channel masks for this format
struct AudioProfileTraits : public AndroidCollectionTraits<AudioProfile, AudioProfileVector>
{   // how to parse a <profile> element; format is the audio data format (PCM or otherwise)
    static constexpr const char *tag = "profile";
    static constexpr const char *collectionTag = "profiles";

    struct Attributes
    {
        static constexpr const char *samplingRates = "samplingRates";
        static constexpr const char *format = "format";
        static constexpr const char *channelMasks = "channelMasks";
    };

    static Return<Element> deserialize(const xmlNode *cur, PtrSerializingCtx serializingContext);
};

struct MixPortTraits : public AndroidCollectionTraits<IOProfile, IOProfileCollection>
{ // how to parse a <mixPort> element (the nested <profile> elements are handled separately); flags are audio_output/input_flags
    static constexpr const char *tag = "mixPort";
    static constexpr const char *collectionTag = "mixPorts";

    struct Attributes
    {
        static constexpr const char *name = "name";
        static constexpr const char *role = "role";
        static constexpr const char *roleSource = "source"; /**< <attribute role source value>. */
        static constexpr const char *flags = "flags";
        static constexpr const char *maxOpenCount = "maxOpenCount";
        static constexpr const char *maxActiveCount = "maxActiveCount";
    };

    static Return<Element> deserialize(const xmlNode *cur, PtrSerializingCtx serializingContext);
    // Children: GainTraits
};

struct DevicePortTraits : public AndroidCollectionTraits<DeviceDescriptor, DeviceVector>
{ // how to parse a <devicePort> element (tagName/type); judging by the attributes, mixPort and devicePort differ quite a bit
    static constexpr const char *tag = "devicePort";
    static constexpr const char *collectionTag = "devicePorts";

    struct Attributes
    {
        /**  <device tag name>: any string without space. */
        static constexpr const char *tagName = "tagName";
        static constexpr const char *type = "type"; /**< <device type>. */
        static constexpr const char *role = "role"; /**< <device role: sink or source>. */
        static constexpr const char *roleSource = "source"; /**< <attribute role source value>. */
        /** optional: device address, char string less than 64. */
        static constexpr const char *address = "address";
        /** optional: the list of encoded audio formats that are known to be supported. */
        static constexpr const char *encodedFormats = "encodedFormats";
    };

    static Return<Element> deserialize(const xmlNode *cur, PtrSerializingCtx serializingContext);
    // Children: GainTraits (optional)
};

struct RouteTraits : public AndroidCollectionTraits<AudioRoute, AudioRouteVector>
{ // a route describes which sources may be connected to a given sink
    static constexpr const char *tag = "route";
    static constexpr const char *collectionTag = "routes";

    struct Attributes
    {
        static constexpr const char *type = "type"; /**< <route type>: mix or mux. */
        static constexpr const char *typeMix = "mix"; /**< type attribute mix value. */
        static constexpr const char *sink = "sink"; /**< <sink: involved in this route>. */
        /** sources: all source that can be involved in this route. */
        static constexpr const char *sources = "sources";
    };

    typedef HwModule *PtrSerializingCtx;

    static Return<Element> deserialize(const xmlNode *cur, PtrSerializingCtx serializingContext);
};

struct ModuleTraits : public AndroidCollectionTraits<HwModule, HwModuleCollection>
{
    static constexpr const char *tag = "module";
    static constexpr const char *collectionTag = "modules";

    static constexpr const char *childAttachedDevicesTag = "attachedDevices";
    static constexpr const char *childAttachedDeviceTag = "item";
    static constexpr const char *childDefaultOutputDeviceTag = "defaultOutputDevice";

    struct Attributes
    {
        static constexpr const char *name = "name";
        static constexpr const char *version = "halVersion";
    };

    typedef AudioPolicyConfig *PtrSerializingCtx;

    static Return<Element> deserialize(const xmlNode *cur, PtrSerializingCtx serializingContext);
    // Children: mixPortTraits, devicePortTraits, and routeTraits
    // Need to call deserialize on each child
};

Each xxxTraits has its own deserialize() function.

ModuleTraits::deserialize

Return<ModuleTraits::Element> ModuleTraits::deserialize(const xmlNode *cur, PtrSerializingCtx ctx)
{
    // [1]: <module name="primary" halVersion="2.0">
    std::string name = getXmlAttribute(cur, Attributes::name);

    uint32_t versionMajor = 0, versionMinor = 0;
    std::string versionLiteral = getXmlAttribute(cur, Attributes::version);
    if (!versionLiteral.empty()) {
        sscanf(versionLiteral.c_str(), "%u.%u", &versionMajor, &versionMinor);
    }

    Element module = new HwModule(name.c_str(), versionMajor, versionMinor);

    // Deserialize children: Audio Mix Ports, Audio Device Ports (Source/Sink), Audio Routes

    // [2]: <mixPorts>
    MixPortTraits::Collection mixPorts;
    status_t status = deserializeCollection<MixPortTraits>(cur, &mixPorts, NULL);
    if (status != NO_ERROR) {
        return Status::fromStatusT(status);
    }
    module->setProfiles(mixPorts); // HwModule::addProfile -> addOutputProfile/addInputProfile:
                                   // mOutputProfiles/mInputProfiles.add(profile) and mPorts.add(profile)

    // [3]: <devicePorts>
    DevicePortTraits::Collection devicePorts;
    status = deserializeCollection<DevicePortTraits>(cur, &devicePorts, NULL);
    if (status != NO_ERROR) {
        return Status::fromStatusT(status);
    }
    module->setDeclaredDevices(devicePorts); // mDeclaredDevices = devices; mPorts.add(device[i])

    // [4]: <routes>
    RouteTraits::Collection routes;
    status = deserializeCollection<RouteTraits>(cur, &routes, module.get());
    if (status != NO_ERROR) {
        return Status::fromStatusT(status);
    }
    module->setRoutes(routes);

    // [5]: <attachedDevices> (excerpt)
    sp<DeviceDescriptor> device = module->getDeclaredDevices().getDeviceFromTagName(
            std::string(reinterpret_cast<const char*>(attachedDevice.get())));
    ctx->addDevice(device); // here ctx is the AudioPolicyConfig

    // [6]: <defaultOutputDevice> (excerpt)
    sp<DeviceDescriptor> device = module->getDeclaredDevices().getDeviceFromTagName(
            std::string(reinterpret_cast<const char*>(defaultOutputDevice.get())));
    ctx->setDefaultOutputDevice(device);
}

What is the difference between attachedDevices and devicePort? And between mixPort and devicePort?

AudioPolicyManager::initialize()

The APM implementation is libaudiopolicymanagerdefault:

cc_library_shared {
    name: "libaudiopolicymanagerdefault",
    srcs: [
        "AudioPolicyManager.cpp",
        "EngineLibrary.cpp",
    ],

    shared_libs: [
        "libaudiofoundation",
        // The default audio policy engine is always present in the system image.
        "libaudiopolicyenginedefault",
    ],
}

The engine is loaded programmatically (via dlopen) — what is it for?

The engine's job is to evaluate the configuration: given the input parameters and the policy described in the config files, it produces the corresponding devices and so on. There are two engine implementations, enginedefault and engineconfigurable — which one gets chosen?

(From the class diagram, not reproduced here:) engineconfigurable and enginedefault sit side by side; both are implementations of EngineInterface. For now it is enough to look at managerdefault with the default engine.

The makeup of libaudiopolicymanagerdefault

managerdefault is built as libaudiopolicymanagerdefault:
frameworks/av/services/audiopolicy/managerdefault/Android.bp
cc_library_shared {
    name: "libaudiopolicymanagerdefault",

    srcs: [
        "AudioPolicyManager.cpp",
        "EngineLibrary.cpp",
    ],

    shared_libs: [
        // The default audio policy engine is always present in the system image.
        // libaudiopolicyengineconfigurable can be built in addition by specifying
        // a dependency on it in the device makefile. There will be no build time
        // conflict with libaudiopolicyenginedefault.
        "libaudiopolicyenginedefault",
    ],
}

frameworks/av/services/audiopolicy/enginedefault/Android.bp
cc_library_shared {
    name: "libaudiopolicyenginedefault",
    srcs: [
        "src/Engine.cpp",
        "src/EngineInstance.cpp",
    ],
}

The base class EngineBase

frameworks/av/services/audiopolicy/engine/common/include/EngineBase.h
class EngineBase : public EngineInterface
{
// methods
// data members
private: // there is no constructor assigning these members; they rely on in-class initializers
    AudioPolicyManagerObserver *mApmObserver = nullptr;

    ProductStrategyMap mProductStrategies;
    ProductStrategyPreferredRoutingMap mProductStrategyPreferredDevices;
    VolumeGroupMap mVolumeGroups;
    LastRemovableMediaDevices mLastRemovableMediaDevices;
    audio_mode_t mPhoneState = AUDIO_MODE_NORMAL;  /**< current phone state. */

    /* if display-port is connected and can be used for voip/voice */
    bool mDpConnAndAllowedForVoice = false;

    /** current forced use configuration. */
    audio_policy_forced_cfg_t mForceUse[AUDIO_POLICY_FORCE_USE_CNT] = {};
};

engineConfig::ParsingResult EngineBase::loadAudioPolicyEngineConfig()
{
    auto result = engineConfig::parse();
    // process the result and populate mProductStrategies
    mProductStrategies.initialize();
}

engineConfig::parse() — why no path argument? Because the declaration supplies a default:
ParsingResult parse(const char* path = DEFAULT_PATH);
constexpr char DEFAULT_PATH[] = "/vendor/etc/audio_policy_engine_configuration.xml";

And the content of audio_policy_engine_configuration.xml is:

<configuration version="1.0" xmlns:xi="http://www.w3.org/2001/XInclude">

    <xi:include href="audio_policy_engine_product_strategies.xml"/>
    <xi:include href="audio_policy_engine_stream_volumes.xml"/>
    <xi:include href="audio_policy_engine_default_stream_volumes.xml"/>

</configuration>

So parsing it effectively expands the three included XML files above.

How does execution reach the parsing function loadAudioPolicyEngineConfig()?

status_t AudioPolicyManager::initialize() {
    {
        [1] auto engLib = EngineLibrary::load( // load the engine library: libaudiopolicyenginedefault.so
                        "libaudiopolicyengine" + getConfig().getEngineLibraryNameSuffix() + ".so");

        [2] mEngine = engLib->createEngine();
        if (mEngine == nullptr) {
            ALOGE("%s: Failed to instantiate the APM engine", __FUNCTION__);
            return NO_INIT;
        }
    }
    mEngine->setObserver(this);
    status_t status = mEngine->initCheck();
    if (status != NO_ERROR) {
        LOG_FATAL("Policy engine not initialized(err=%d)", status);
        return status;
    }

    // ...
}

[1] EngineLibrary::load

// EngineLibrary::load creates an EngineLibrary and initializes it with libraryPath

std::shared_ptr<EngineLibrary> EngineLibrary::load(std::string libraryPath)
{
    std::shared_ptr<EngineLibrary> engLib(new EngineLibrary());
    return engLib->init(std::move(libraryPath)) ? engLib : nullptr;
}

How the interface functions of the .so library are looked up and called:

bool EngineLibrary::init(std::string libraryPath)
{
    mLibraryHandle = dlopen(libraryPath.c_str(), 0);

    mCreateEngineInstance = (EngineInterface* (*)())dlsym(mLibraryHandle, "createEngineInstance");
    mDestroyEngineInstance = (void (*)(EngineInterface*))dlsym(
            mLibraryHandle, "destroyEngineInstance");
    ALOGD("Loaded engine from %s", libraryPath.c_str());
    return true;
}
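The excerpt above omits error handling. A small self-contained example of the same dlopen/dlsym pattern with the usual checks (libmyengine.so is a hypothetical plugin name):

#include <dlfcn.h>
#include <cstdio>

struct EngineInterface;  // opaque here; only a pointer is needed

int main() {
    // Hypothetical plugin path; the APM passes libaudiopolicyengine<suffix>.so instead.
    void *handle = dlopen("libmyengine.so", RTLD_NOW);
    if (handle == nullptr) {
        std::fprintf(stderr, "dlopen failed: %s\n", dlerror());
        return 1;
    }
    // Look up the C factory function exported by the library.
    using CreateFn = EngineInterface *(*)();
    auto create = reinterpret_cast<CreateFn>(dlsym(handle, "createEngineInstance"));
    if (create == nullptr) {
        std::fprintf(stderr, "dlsym failed: %s\n", dlerror());
        dlclose(handle);
        return 1;
    }
    EngineInterface *engine = create();   // corresponds to mCreateEngineInstance()
    (void)engine;                         // ... use the engine ...
    dlclose(handle);                      // only safe once all engine objects are destroyed
    return 0;
}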

[2] mEngine = engLib->createEngine()

EngineInstance EngineLibrary::createEngine()
{
    return EngineInstance(mCreateEngineInstance(), // the extern "C" factory exported by the engine library
            [lib = shared_from_this(), destroy = mDestroyEngineInstance] (EngineInterface* e) {
                destroy(e);
            });
}

extern "C" EngineInterface* createEngineInstance()  // i.e. what mCreateEngineInstance points to
{
    return new (std::nothrow) Engine();
}
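The deleter lambda passed to EngineInstance captures shared_from_this(), so the EngineLibrary (and thus the dlopen'd .so) stays alive at least as long as the engine object, and destroyEngineInstance is called from the right library. EngineInstance is, in effect, a smart pointer with a custom deleter; a minimal self-contained sketch of that ownership pattern (simplified names, not the real classes):

#include <cstdio>
#include <functional>
#include <memory>

struct Engineish { ~Engineish() { std::puts("engine destroyed"); } };

// Stand-in for EngineLibrary: owns the "loaded library" and hands out engines.
struct Libraryish : std::enable_shared_from_this<Libraryish> {
    ~Libraryish() { std::puts("library unloaded (dlclose)"); }

    using EngineInstance = std::unique_ptr<Engineish, std::function<void(Engineish *)>>;

    EngineInstance createEngine() {
        // The deleter keeps a shared_ptr to the library, so the library cannot
        // be unloaded while an engine created from it is still alive.
        return EngineInstance(new Engineish(),
                [lib = shared_from_this()](Engineish *e) { delete e; });
    }
};

int main() {
    Libraryish::EngineInstance engine;
    {
        auto lib = std::make_shared<Libraryish>();
        engine = lib->createEngine();
    }   // lib goes out of scope here, but the deleter still holds a reference
    std::puts("engine still usable after the local lib pointer is gone");
    return 0;   // the engine is destroyed first, then the library is "unloaded"
}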

Engine::Engine()
{
    auto result = EngineBase::loadAudioPolicyEngineConfig();
    auto legacyStrategy = getLegacyStrategy();
    for (const auto &strategy : legacyStrategy) {
        mLegacyStrategyMap[getProductStrategyByName(strategy.name)] = strategy.id;
    }
}

EngineBase::loadAudioPolicyEngineConfig()

At last we reach the function that parses the policy (engine) files.

engineConfig::ParsingResult EngineBase::loadAudioPolicyEngineConfig()
{
    auto result = engineConfig::parse();
    // process the result and populate mProductStrategies
    mProductStrategies.initialize();
}

The parsed result is built from the policy files below.

Policy file contents

1.  audio_policy_engine_product_strategies.xml

    <ProductStrategy name="STRATEGY_MEDIA">
        <AttributesGroup streamType="AUDIO_STREAM_ASSISTANT" volumeGroup="assistant">
            <Attributes>
                <ContentType value="AUDIO_CONTENT_TYPE_SPEECH"/>
                <Usage value="AUDIO_USAGE_ASSISTANT"/>
            </Attributes>
        </AttributesGroup>
         <AttributesGroup streamType="AUDIO_STREAM_MUSIC" volumeGroup="music">
            <Attributes> <Usage value="AUDIO_USAGE_MEDIA"/> </Attributes>
            <Attributes> <Usage value="AUDIO_USAGE_GAME"/> </Attributes>
            <Attributes> <Usage value="AUDIO_USAGE_ASSISTANT"/> </Attributes>
            <Attributes> <Usage value="AUDIO_USAGE_ASSISTANCE_NAVIGATION_GUIDANCE"/> </Attributes>
            <Attributes></Attributes>
        </AttributesGroup>
        <AttributesGroup streamType="AUDIO_STREAM_SYSTEM" volumeGroup="system">
            <Attributes> <Usage value="AUDIO_USAGE_ASSISTANCE_SONIFICATION"/> </Attributes>
        </AttributesGroup>
    </ProductStrategy>
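Each AttributesGroup maps a set of audio attributes (usage, content type, ...) to a stream type and a volume group; the empty <Attributes></Attributes> entry under AUDIO_STREAM_MUSIC acts as the catch-all/default. A rough self-contained sketch of how such groups could be matched against a track's attributes (simplified types and rules; the real matching lives in ProductStrategy inside the engine):

#include <cstdio>
#include <string>
#include <vector>

struct Attributes { std::string usage; std::string contentType; };   // simplified audio_attributes_t

struct AttributesGroup {
    std::string streamType;
    std::string volumeGroup;
    std::vector<Attributes> attributes;   // empty usage/contentType fields act as wildcards here
};

// Does one <Attributes> entry match the track's attributes?
static bool matches(const Attributes &rule, const Attributes &track) {
    return (rule.usage.empty() || rule.usage == track.usage) &&
           (rule.contentType.empty() || rule.contentType == track.contentType);
}

int main() {
    // Subset of the STRATEGY_MEDIA groups from audio_policy_engine_product_strategies.xml.
    std::vector<AttributesGroup> strategyMedia = {
        {"AUDIO_STREAM_ASSISTANT", "assistant",
         {{"AUDIO_USAGE_ASSISTANT", "AUDIO_CONTENT_TYPE_SPEECH"}}},
        {"AUDIO_STREAM_MUSIC", "music",
         {{"AUDIO_USAGE_MEDIA", ""}, {"AUDIO_USAGE_GAME", ""}, {"", ""} /* catch-all */}},
    };

    Attributes track{"AUDIO_USAGE_GAME", "AUDIO_CONTENT_TYPE_MUSIC"};
    for (const auto &group : strategyMedia) {
        for (const auto &rule : group.attributes) {
            if (matches(rule, track)) {
                std::printf("stream=%s volumeGroup=%s\n",
                            group.streamType.c_str(), group.volumeGroup.c_str());
                return 0;
            }
        }
    }
    std::puts("no match (would fall back to the default strategy)");
}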

2. audio_policy_engine_stream_volumes.xml

    <volumeGroup>
        <name>music</name>
        <indexMin>0</indexMin>
        <indexMax>150</indexMax>
        <volume deviceCategory="DEVICE_CATEGORY_HEADSET" ref="DEFAULT_MEDIA_VOLUME_CURVE"/>
        <volume deviceCategory="DEVICE_CATEGORY_SPEAKER">
          <point>1,-5978</point>
          <point>13,-4617</point>
          <point>20,-3956</point>
          <point>27,-3368</point>
          <point>33,-3020</point>
          <point>40,-2656</point>
          <point>47,-1885</point>
          <point>53,-1709</point>
          <point>60,-1169</point>
          <point>66,-892</point>
          <point>73,-514</point>
          <point>80,-349</point>
          <point>87,-338</point>
          <point>93,-260</point>
          <point>100,0</point>
        </volume>
        <volume deviceCategory="DEVICE_CATEGORY_A2DP" ref="DEFAULT_MEDIA_VOLUME_CURVE"/>
    </volumeGroup>
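Each <volume> element is a curve of <point>index,millibel</point> pairs: the first number is a position on the 0-100 range mapped from the volume index between indexMin and indexMax, the second is an attenuation in millibels (1/100 dB). A rough self-contained sketch of how a volume index could be converted to dB by linear interpolation between curve points (an illustration of the data format, not the exact VolumeCurve code):

#include <cstdio>
#include <vector>

struct CurvePoint { int indexPercent; int attenuationMb; };   // one <point>x,y</point>

// Map a UI volume index onto the curve and interpolate linearly, returning dB.
static float volIndexToDb(const std::vector<CurvePoint> &curve,
                          int indexMin, int indexMax, int index) {
    // Normalize the index to the 0-100 scale used by the curve points.
    float pos = 100.0f * (index - indexMin) / (indexMax - indexMin);
    if (pos <= curve.front().indexPercent) return curve.front().attenuationMb / 100.0f;
    if (pos >= curve.back().indexPercent)  return curve.back().attenuationMb / 100.0f;
    for (size_t i = 1; i < curve.size(); ++i) {
        if (pos <= curve[i].indexPercent) {
            float span = curve[i].indexPercent - curve[i - 1].indexPercent;
            float frac = (pos - curve[i - 1].indexPercent) / span;
            float mb = curve[i - 1].attenuationMb +
                       frac * (curve[i].attenuationMb - curve[i - 1].attenuationMb);
            return mb / 100.0f;
        }
    }
    return 0.0f;
}

int main() {
    // DEFAULT_MEDIA_VOLUME_CURVE from audio_policy_engine_default_stream_volumes.xml.
    std::vector<CurvePoint> mediaCurve = {{1, -5800}, {20, -4000}, {60, -1700}, {100, 0}};
    // music volumeGroup: indexMin=0, indexMax=150 (from the XML above).
    for (int index : {15, 75, 150}) {
        std::printf("index %3d -> %.1f dB\n", index, volIndexToDb(mediaCurve, 0, 150, index));
    }
}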

3. audio_policy_engine_default_stream_volumes.xml

    <reference name="DEFAULT_MEDIA_VOLUME_CURVE">
    <!-- Default Media reference Volume Curve -->
        <point>1,-5800</point>
        <point>20,-4000</point>
        <point>60,-1700</point>
        <point>100,0</point>
    </reference>

4. audio_policy_configuration.xml

This file has a more complex structure.

It holds the modules section and pulls in audio_policy_volumes.xml and default_volume_tables.xml.

(audio_policy_volumes.xml gives the volume curves per stream type, while audio_policy_engine_default_stream_volumes.xml speaks in terms of volumeGroups.)

Description of each module:

<module name="primary" halVersion="2.0">
    attachedDevices
    defaultOutputDevice
    mixPorts
        mixPort name="primary input" role="sink"
            profile name="" format="AUDIO_FORMAT_PCM_8_24_BIT"
                samplingRates="48000" channelMasks="AUDIO_CHANNEL_IN_STEREO"
    devicePorts
        devicePort tagName="Built-In Mic" type="AUDIO_DEVICE_IN_BUILTIN_MIC" role="source"

    routes  // a route describes the possible sources for a single sink; sink and source refer to the name/tagName of a mixPort or devicePort
        route type="mix" sink="primary input"
              sources="Built-In Mic,Built-In Back Mic,BT SCO Headset Mic,USB Device In,USB Headset In"/>

How does this parsed result get associated with the devices?

Loading the policy files has two parts:

EngineBase::loadAudioPolicyEngineConfig(); // audio_policy_engine_configuration.xml

AudioPolicyManager::AudioPolicyManager {loadConfig();} //audio_policy_configuration.xml

How are the two tied together?

Roughly: the APM calls the Engine's member functions through its mEngine member, and the Engine's implementation in turn goes through its mProductStrategies member to the ProductStrategy interfaces. The APM also hands itself to the Engine (as its observer), which is how the Engine learns about devices and the like — that is where the two sides meet. The details of the code, and what the exact matching criteria are, are still unclear to me.

In addition, the structs describing the Module config files live under audiopolicy/common/managerdefinitions/include/.

The call chain when a stream asks for an output:

AudioPolicyManager::getOutput(audio_stream_type_t stream) ->
    mEngine->getOutputDevicesForStream(stream, false /*fromCache*/) ->
        mProductStrategies.getAttributesForStreamType(stream) ->
            ProductStrategy::getAttributesForStreamType

Inside the Engine, the available devices and outputs come back from the APM through the observer:

const DeviceVector availableOutputDevices = getApmObserver()->getAvailableOutputDevices();

const SwAudioOutputCollection &outputs = getApmObserver()->getOutputs();
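Putting those pieces together, a rough self-contained sketch of the decision flow (simplified, invented types; the real Engine works with audio_devices_t masks, product strategies, and many special cases):

#include <cstdio>
#include <string>
#include <vector>

// Simplified stand-ins for device descriptors and the APM observer.
struct Device { std::string type; };                       // e.g. "SPEAKER", "WIRED_HEADSET"
using DeviceVector = std::vector<Device>;

struct Observerish {                                       // role of AudioPolicyManagerObserver
    DeviceVector available;                                // filled during APM initialization
    const DeviceVector &getAvailableOutputDevices() const { return available; }
};

struct Engineish {
    Observerish *mApmObserver = nullptr;

    // stream -> strategy -> preferred device types, in priority order (toy rule)
    std::vector<std::string> preferredForStream(const std::string &stream) const {
        if (stream == "AUDIO_STREAM_MUSIC") return {"WIRED_HEADSET", "BLUETOOTH_A2DP", "SPEAKER"};
        return {"SPEAKER"};
    }

    DeviceVector getOutputDevicesForStream(const std::string &stream) const {
        const DeviceVector &avail = mApmObserver->getAvailableOutputDevices();
        for (const auto &wanted : preferredForStream(stream)) {   // first preference present wins
            for (const auto &dev : avail) {
                if (dev.type == wanted) return {dev};
            }
        }
        return {};
    }
};

int main() {
    Observerish apm;                                        // the APM registers itself as observer
    apm.available = {{"SPEAKER"}, {"WIRED_HEADSET"}};       // from the parsed audio_policy_configuration.xml
    Engineish engine;
    engine.mApmObserver = &apm;
    for (const auto &d : engine.getOutputDevicesForStream("AUDIO_STREAM_MUSIC")) {
        std::printf("selected device: %s\n", d.type.c_str());
    }
}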

onNewAudioModulesAvailableInt

Iterate over all of mHwModulesAll, loading each HwModule; then walk every HwModule's OutputProfiles to form the available output devices, and its InputProfiles to get the available input devices. A rough sketch of that loop follows.
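A rough self-contained sketch of the loop described above (simplified types; loadHwModule here stands in for mpClientInterface->loadHwModule(), which the next snippet shows forwarding to AudioFlinger):

#include <cstdio>
#include <string>
#include <vector>

struct Device { std::string name; };

struct HwModuleish {
    std::string name;                      // "primary", "a2dp", "usb", ...
    int handle = 0;                        // 0 == not loaded yet
    std::vector<Device> outputDevices;     // from the module's output profiles / declared devices
    std::vector<Device> inputDevices;
};

static int loadHwModule(const std::string &name) {          // stand-in for the AudioFlinger call
    std::printf("AudioFlinger::loadHwModule(%s)\n", name.c_str());
    static int nextHandle = 1;
    return nextHandle++;
}

int main() {
    std::vector<HwModuleish> hwModulesAll = {
        {"primary", 0, {{"Speaker"}, {"Earpiece"}}, {{"Built-In Mic"}}},
        {"usb",     0, {{"USB Headset Out"}},       {{"USB Headset In"}}},
    };
    std::vector<Device> availableOutputDevices, availableInputDevices;

    // onNewAudioModulesAvailableInt, in spirit: load each module and collect its devices.
    for (auto &module : hwModulesAll) {
        if (module.handle == 0) {
            module.handle = loadHwModule(module.name);
        }
        availableOutputDevices.insert(availableOutputDevices.end(),
                                      module.outputDevices.begin(), module.outputDevices.end());
        availableInputDevices.insert(availableInputDevices.end(),
                                     module.inputDevices.begin(), module.inputDevices.end());
    }
    std::printf("%zu output devices, %zu input devices available\n",
                availableOutputDevices.size(), availableInputDevices.size());
}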

This concludes the analysis of the AudioPolicyService initialization code.

mpClientInterface->loadHwModule(module_name)

While iterating over the HwModules, onNewAudioModulesAvailableInt does one important piece of work: loadHwModule.

/* implementation of the client interface from the policy manager */
// module names include: primary, A2DP, remote_submix, USB
audio_module_handle_t AudioPolicyService::AudioPolicyClient::loadHwModule(const char *name)
{
    sp<IAudioFlinger> af = AudioSystem::get_audio_flinger();
    if (af == 0) {
        ALOGW("%s: could not get AudioFlinger", __func__);
        return AUDIO_MODULE_HANDLE_NONE;
    }

    return af->loadHwModule(name);
}

So the call ultimately lands in the AudioFlinger service.

Overall AudioPolicyService framework

What was analyzed above is only the instantiate() part of AudioPolicyService. The overall framework (sketched in a diagram that is not reproduced here) will be analyzed further as concrete problems come up.
