Basic Audio Concepts

Sampling Rate

The number of times per unit time that the analog signal is sampled. The higher the sampling rate, the more faithful and natural the reconstructed sound, and of course the larger the data volume. Common sampling rates by use case:

  • 8 kHz: telephony and similar uses; sufficient for recording human speech.
  • 22.05 kHz: used for radio broadcasting.
  • 44.1 kHz: audio CD.
  • 48 kHz: DVD and digital television.
  • 96 kHz–192 kHz: DVD-Audio, Blu-ray, and other high-definition formats.

Sample precision commonly ranges from 8 to 32 bits; CDs generally use 16 bits.
Bit Depth

Bit depth, also called quantization level, sample size, or quantization bits, is the range of values each sample point can represent, measured in bits. Common depths are 8 bits and 16 bits; the more bits, the more finely changes in the sound can be recorded, and correspondingly the larger the data. 8-bit quantization is low quality and 16-bit quantization is high quality; 16 bits is the most common sample precision.
Channel Count

The channel count is the number of independent audio channels (speakers producing distinct sound) that a device supports, and is one of the important metrics of audio equipment.
Quantization

Quantization is the process of expressing the amplitudes of the sampled, discrete signal as binary numbers. (In everyday terms: define a set of ranges or intervals, then assign each measured value to the interval it falls into.)
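
As a concrete illustration, here is a minimal sketch of 16-bit quantization of a normalized sample (QuantizeTo16Bit is a hypothetical helper for illustration, not WebRTC code):

#include <algorithm>
#include <cstdint>

// Quantize a normalized sample in [-1.0, 1.0] to a signed 16-bit value.
// Hypothetical helper for illustration; WebRTC performs this internally.
int16_t QuantizeTo16Bit(float sample) {
  // Clamp to the valid range first, then scale to the int16 range.
  sample = std::min(1.0f, std::max(-1.0f, sample));
  return static_cast<int16_t>(sample * 32767.0f);
}
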
Encoding

The sampled and quantized signal is still not a digital signal; it must be converted into digitally coded pulses, a process called encoding. The binary sequence obtained after sampling, quantizing, and encoding analog audio is the digital audio signal.
PCM

PCM (Pulse Code Modulation) is the result of just sampling and quantizing the sound, without any further encoding or compression.
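
WebRTC moves PCM around in chunks of 10 ms (see the APM notes below). A quick size calculation for one chunk, with the constant names below being ours for illustration:

#include <cstddef>

// Bytes in one 10 ms chunk of 16-bit stereo PCM at 96 kHz.
constexpr size_t kSampleRateHz = 96000;
constexpr size_t kChannels = 2;        // stereo
constexpr size_t kBytesPerSample = 2;  // 16-bit samples
constexpr size_t kSamplesPer10Ms = kSampleRateHz / 100;  // 960 per channel
constexpr size_t kBytesPer10Ms =
    kSamplesPer10Ms * kChannels * kBytesPerSample;  // 3840 bytes

// 3840 matches kMaxBufferSizeBytes ("10ms in stereo @ 96kHz") in the
// audio_device_buffer.h header quoted further below.
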
Bitrate

Bitrate (also called bit rate) is the amount of information that passes through a data stream per second, and it reflects compression quality. For example, common MP3 bitrates are 128 kbit/s, 160 kbit/s, 320 kbit/s, and so on; the higher the bitrate, the better the sound quality.

Bitrate = sample rate × bit depth × channel count. For example, audio with a 44100 Hz sample rate, 16-bit depth, and 2 channels has a bitrate of 44100 × 16 × 2 = 1411200 bps.
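A minimal sketch of that calculation:

#include <cstdint>
#include <iostream>

// Uncompressed PCM bitrate = sample rate x bit depth x channel count.
uint64_t PcmBitrateBps(uint32_t sample_rate_hz,
                       uint32_t bits_per_sample,
                       uint32_t channels) {
  return static_cast<uint64_t>(sample_rate_hz) * bits_per_sample * channels;
}

int main() {
  // CD-quality stereo: 44100 * 16 * 2 = 1411200 bps (~1411 kbit/s).
  std::cout << PcmBitrateBps(44100, 16, 2) << " bps\n";
}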

AudioDeviceModule

AudioDeviceModule is an abstract class that defines the audio-related interfaces: device enumeration, device selection, speaker volume controls, microphone volume controls, speaker mute control, microphone mute control, stereo support, audio transport control, and so on.

#ifndef MODULES_AUDIO_DEVICE_INCLUDE_AUDIO_DEVICE_H_
#define MODULES_AUDIO_DEVICE_INCLUDE_AUDIO_DEVICE_H_

#include "modules/audio_device/include/audio_device_defines.h"
#include "rtc_base/scoped_ref_ptr.h"
#include "rtc_base/refcount.h"

namespace webrtc {

class AudioDeviceModule : public rtc::RefCountInterface {
 public:
  // Deprecated.
  // TODO(henrika): to be removed.
  enum ErrorCode { kAdmErrNone = 0, kAdmErrArgument = 1 };

  enum AudioLayer {
    kPlatformDefaultAudio = 0,
    kWindowsCoreAudio = 2,
    kLinuxAlsaAudio = 3,
    kLinuxPulseAudio = 4,
    kAndroidJavaAudio = 5,
    kAndroidOpenSLESAudio = 6,
    kAndroidJavaInputAndOpenSLESOutputAudio = 7,
    kAndroidAAudioAudio = 8,
    kAndroidJavaInputAndAAudioOutputAudio = 9,
    kDummyAudio = 10
  };

  enum WindowsDeviceType {
    kDefaultCommunicationDevice = -1,
    kDefaultDevice = -2
  };

  // TODO(bugs.webrtc.org/7306): deprecated.
  enum ChannelType { kChannelLeft = 0, kChannelRight = 1, kChannelBoth = 2 };

 public:
  // Creates an ADM.
  static rtc::scoped_refptr<AudioDeviceModule> Create(
      const AudioLayer audio_layer);
  // TODO(bugs.webrtc.org/7306): deprecated (to be removed).
  static rtc::scoped_refptr<AudioDeviceModule> Create(
      const int32_t id,
      const AudioLayer audio_layer);

  // Retrieve the currently utilized audio layer
  virtual int32_t ActiveAudioLayer(AudioLayer* audioLayer) const = 0;

  // Full-duplex transportation of PCM audio
  virtual int32_t RegisterAudioCallback(AudioTransport* audioCallback) = 0;

  // Main initialization and termination
  virtual int32_t Init() = 0;
  virtual int32_t Terminate() = 0;
  virtual bool Initialized() const = 0;

  // Device enumeration
  virtual int16_t PlayoutDevices() = 0;
  virtual int16_t RecordingDevices() = 0;
  virtual int32_t PlayoutDeviceName(uint16_t index,
                                    char name[kAdmMaxDeviceNameSize],
                                    char guid[kAdmMaxGuidSize]) = 0;
  virtual int32_t RecordingDeviceName(uint16_t index,
                                      char name[kAdmMaxDeviceNameSize],
                                      char guid[kAdmMaxGuidSize]) = 0;

  // Device selection
  virtual int32_t SetPlayoutDevice(uint16_t index) = 0;
  virtual int32_t SetPlayoutDevice(WindowsDeviceType device) = 0;
  virtual int32_t SetRecordingDevice(uint16_t index) = 0;
  virtual int32_t SetRecordingDevice(WindowsDeviceType device) = 0;

  // Audio transport initialization
  virtual int32_t PlayoutIsAvailable(bool* available) = 0;
  virtual int32_t InitPlayout() = 0;
  virtual bool PlayoutIsInitialized() const = 0;
  virtual int32_t RecordingIsAvailable(bool* available) = 0;
  virtual int32_t InitRecording() = 0;
  virtual bool RecordingIsInitialized() const = 0;

  // Audio transport control
  virtual int32_t StartPlayout() = 0;
  virtual int32_t StopPlayout() = 0;
  virtual bool Playing() const = 0;
  virtual int32_t StartRecording() = 0;
  virtual int32_t StopRecording() = 0;
  virtual bool Recording() const = 0;

  // Audio mixer initialization
  virtual int32_t InitSpeaker() = 0;
  virtual bool SpeakerIsInitialized() const = 0;
  virtual int32_t InitMicrophone() = 0;
  virtual bool MicrophoneIsInitialized() const = 0;

  // Speaker volume controls
  virtual int32_t SpeakerVolumeIsAvailable(bool* available) = 0;
  virtual int32_t SetSpeakerVolume(uint32_t volume) = 0;
  virtual int32_t SpeakerVolume(uint32_t* volume) const = 0;
  virtual int32_t MaxSpeakerVolume(uint32_t* maxVolume) const = 0;
  virtual int32_t MinSpeakerVolume(uint32_t* minVolume) const = 0;

  // Microphone volume controls
  virtual int32_t MicrophoneVolumeIsAvailable(bool* available) = 0;
  virtual int32_t SetMicrophoneVolume(uint32_t volume) = 0;
  virtual int32_t MicrophoneVolume(uint32_t* volume) const = 0;
  virtual int32_t MaxMicrophoneVolume(uint32_t* maxVolume) const = 0;
  virtual int32_t MinMicrophoneVolume(uint32_t* minVolume) const = 0;

  // Speaker mute control
  virtual int32_t SpeakerMuteIsAvailable(bool* available) = 0;
  virtual int32_t SetSpeakerMute(bool enable) = 0;
  virtual int32_t SpeakerMute(bool* enabled) const = 0;

  // Microphone mute control
  virtual int32_t MicrophoneMuteIsAvailable(bool* available) = 0;
  virtual int32_t SetMicrophoneMute(bool enable) = 0;
  virtual int32_t MicrophoneMute(bool* enabled) const = 0;

  // Stereo support
  virtual int32_t StereoPlayoutIsAvailable(bool* available) const = 0;
  virtual int32_t SetStereoPlayout(bool enable) = 0;
  virtual int32_t StereoPlayout(bool* enabled) const = 0;
  virtual int32_t StereoRecordingIsAvailable(bool* available) const = 0;
  virtual int32_t SetStereoRecording(bool enable) = 0;
  virtual int32_t StereoRecording(bool* enabled) const = 0;

  // Playout delay
  virtual int32_t PlayoutDelay(uint16_t* delayMS) const = 0;

  // Only supported on Android.
  virtual bool BuiltInAECIsAvailable() const = 0;
  virtual bool BuiltInAGCIsAvailable() const = 0;
  virtual bool BuiltInNSIsAvailable() const = 0;

  // Enables the built-in audio effects. Only supported on Android.
  virtual int32_t EnableBuiltInAEC(bool enable) = 0;
  virtual int32_t EnableBuiltInAGC(bool enable) = 0;
  virtual int32_t EnableBuiltInNS(bool enable) = 0;

// Only supported on iOS.
#if defined(WEBRTC_IOS)
  virtual int GetPlayoutAudioParameters(AudioParameters* params) const = 0;
  virtual int GetRecordAudioParameters(AudioParameters* params) const = 0;
#endif  // WEBRTC_IOS

 protected:
  ~AudioDeviceModule() override {}
};

}  // namespace webrtc

#endif  // MODULES_AUDIO_DEVICE_INCLUDE_AUDIO_DEVICE_H_
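
A hedged usage sketch built only from the methods declared above (error handling omitted, device index 0 chosen arbitrarily, and my_audio_transport standing in for your AudioTransport implementation):

// Create an ADM for the platform default audio layer and run full-duplex audio.
rtc::scoped_refptr<webrtc::AudioDeviceModule> adm =
    webrtc::AudioDeviceModule::Create(
        webrtc::AudioDeviceModule::kPlatformDefaultAudio);

adm->Init();
adm->RegisterAudioCallback(my_audio_transport);  // Your AudioTransport*.

adm->SetPlayoutDevice(0);    // First enumerated playout device.
adm->InitPlayout();
adm->SetRecordingDevice(0);  // First enumerated recording device.
adm->InitRecording();

adm->StartPlayout();
adm->StartRecording();
// ... call in progress ...
adm->StopRecording();
adm->StopPlayout();
adm->Terminate();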

AudioTransport is the key external interface. It is responsible both for feeding audio in (via NeedMorePlayData, consumed by Playout) and for delivering audio out (via RecordedDataIsAvailable, with data produced by the Record capture path).

// ----------------------------------------------------------------------------
//  AudioTransport
// ----------------------------------------------------------------------------

class AudioTransport {
 public:
  virtual int32_t RecordedDataIsAvailable(const void* audioSamples,
                                          const size_t nSamples,
                                          const size_t nBytesPerSample,
                                          const size_t nChannels,
                                          const uint32_t samplesPerSec,
                                          const uint32_t totalDelayMS,
                                          const int32_t clockDrift,
                                          const uint32_t currentMicLevel,
                                          const bool keyPressed,
                                          uint32_t& newMicLevel) = 0;  // NOLINT

  // Implementation has to setup safe values for all specified out parameters.
  virtual int32_t NeedMorePlayData(const size_t nSamples,
                                   const size_t nBytesPerSample,
                                   const size_t nChannels,
                                   const uint32_t samplesPerSec,
                                   void* audioSamples,
                                   size_t& nSamplesOut,  // NOLINT
                                   int64_t* elapsed_time_ms,
                                   int64_t* ntp_time_ms) = 0;  // NOLINT

  // Method to pull mixed render audio data from all active VoE channels.
  // The data will not be passed as reference for audio processing internally.
  virtual void PullRenderData(int bits_per_sample,
                              int sample_rate,
                              size_t number_of_channels,
                              size_t number_of_frames,
                              void* audio_data,
                              int64_t* elapsed_time_ms,
                              int64_t* ntp_time_ms) = 0;

  virtual void OnSpeakerError() = 0;

 protected:
  virtual ~AudioTransport() {}
};
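
To make the data directions concrete, a minimal hypothetical implementation that drops captured audio and plays out silence could look like this (SilenceTransport is our name for it; it assumes the header above is included):

#include <cstring>

class SilenceTransport : public webrtc::AudioTransport {
 public:
  int32_t RecordedDataIsAvailable(const void* audioSamples,
                                  const size_t nSamples,
                                  const size_t nBytesPerSample,
                                  const size_t nChannels,
                                  const uint32_t samplesPerSec,
                                  const uint32_t totalDelayMS,
                                  const int32_t clockDrift,
                                  const uint32_t currentMicLevel,
                                  const bool keyPressed,
                                  uint32_t& newMicLevel) override {
    newMicLevel = currentMicLevel;  // Leave the mic level unchanged.
    return 0;                       // Captured data is simply dropped.
  }

  int32_t NeedMorePlayData(const size_t nSamples,
                           const size_t nBytesPerSample,
                           const size_t nChannels,
                           const uint32_t samplesPerSec,
                           void* audioSamples,
                           size_t& nSamplesOut,
                           int64_t* elapsed_time_ms,
                           int64_t* ntp_time_ms) override {
    // Implementations must set safe values for all out parameters.
    // Assumes nBytesPerSample covers all channels of one sample frame.
    std::memset(audioSamples, 0, nSamples * nBytesPerSample);
    nSamplesOut = nSamples;
    *elapsed_time_ms = -1;
    *ntp_time_ms = -1;
    return 0;
  }

  void PullRenderData(int, int, size_t, size_t, void*, int64_t*,
                      int64_t*) override {}
  void OnSpeakerError() override {}
};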

AudioDeviceModuleImpl implements the AudioDeviceModule interface. On creation it calls CreatePlatformSpecificObjects to create the platform-specific implementation of the AudioDeviceGeneric interface, which abstracts the audio capture and playout logic. On Windows there are two implementations:

  • AudioDeviceWindowsWave implements the legacy Windows Wave APIs approach.
  • AudioDeviceWindowsCore implements the Windows Core Audio APIs approach supported since Vista.

AudioDeviceModuleImpl also maintains an AudioDeviceBuffer object that manages the audio data buffers and interacts directly with the external AudioTransport interface. For example:

  • When AudioDeviceWindowsWave or AudioDeviceWindowsCore needs audio data to play, it calls AudioDeviceBuffer's RequestPlayoutData method to request playout data and then its GetPlayoutData method to fetch the data just requested. AudioDeviceBuffer's RequestPlayoutData in turn calls the AudioTransport interface's NeedMorePlayData method to request the audio stream to be played.
  • When AudioDeviceWindowsWave or AudioDeviceWindowsCore has captured audio data, it calls AudioDeviceBuffer's SetRecordedBuffer method to pass in the captured data, then calls DeliverRecordedData to dispatch it; that dispatch is implemented by calling the AudioTransport interface's RecordedDataIsAvailable. Both flows are sketched right after this list.
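
The two flows summarized as a call-sequence sketch (names match the classes above; the loops are driven by the platform audio threads):

// Playout path (platform playout thread):
//   AudioDeviceWindowsCore / AudioDeviceWindowsWave
//     -> AudioDeviceBuffer::RequestPlayoutData(samples_per_channel)
//          -> AudioTransport::NeedMorePlayData(...)   // fills the play buffer
//     -> AudioDeviceBuffer::GetPlayoutData(audio_buffer)
//     -> hand audio_buffer to the OS audio API
//
// Recording path (platform recording thread):
//   AudioDeviceWindowsCore / AudioDeviceWindowsWave captures a 10 ms frame
//     -> AudioDeviceBuffer::SetRecordedBuffer(audio_buffer, samples_per_channel)
//     -> AudioDeviceBuffer::DeliverRecordedData()
//          -> AudioTransport::RecordedDataIsAvailable(...)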
#ifndef MODULES_AUDIO_DEVICE_AUDIO_DEVICE_IMPL_H_
#define MODULES_AUDIO_DEVICE_AUDIO_DEVICE_IMPL_H_

#if defined(WEBRTC_INCLUDE_INTERNAL_AUDIO_DEVICE)

#include <memory>

#include "modules/audio_device/audio_device_buffer.h"
#include "modules/audio_device/include/audio_device.h"
#include "rtc_base/checks.h"
#include "rtc_base/criticalsection.h"

namespace webrtc {

class AudioDeviceGeneric;
class AudioManager;

class AudioDeviceModuleImpl : public AudioDeviceModule {
 public:
  enum PlatformType {
    kPlatformNotSupported = 0,
    kPlatformWin32 = 1,
    kPlatformWinCe = 2,
    kPlatformLinux = 3,
    kPlatformMac = 4,
    kPlatformAndroid = 5,
    kPlatformIOS = 6
  };

  int32_t CheckPlatform();
  int32_t CreatePlatformSpecificObjects();
  int32_t AttachAudioBuffer();

  AudioDeviceModuleImpl(const AudioLayer audioLayer);
  ~AudioDeviceModuleImpl() override;

  // Retrieve the currently utilized audio layer
  int32_t ActiveAudioLayer(AudioLayer* audioLayer) const override;

  // Full-duplex transportation of PCM audio
  int32_t RegisterAudioCallback(AudioTransport* audioCallback) override;

  // Main initialization and termination
  int32_t Init() override;
  int32_t Terminate() override;
  bool Initialized() const override;

  // Device enumeration
  int16_t PlayoutDevices() override;
  int16_t RecordingDevices() override;
  int32_t PlayoutDeviceName(uint16_t index,
                            char name[kAdmMaxDeviceNameSize],
                            char guid[kAdmMaxGuidSize]) override;
  int32_t RecordingDeviceName(uint16_t index,
                              char name[kAdmMaxDeviceNameSize],
                              char guid[kAdmMaxGuidSize]) override;

  // Device selection
  int32_t SetPlayoutDevice(uint16_t index) override;
  int32_t SetPlayoutDevice(WindowsDeviceType device) override;
  int32_t SetRecordingDevice(uint16_t index) override;
  int32_t SetRecordingDevice(WindowsDeviceType device) override;

  // Audio transport initialization
  int32_t PlayoutIsAvailable(bool* available) override;
  int32_t InitPlayout() override;
  bool PlayoutIsInitialized() const override;
  int32_t RecordingIsAvailable(bool* available) override;
  int32_t InitRecording() override;
  bool RecordingIsInitialized() const override;

  // Audio transport control
  int32_t StartPlayout() override;
  int32_t StopPlayout() override;
  bool Playing() const override;
  int32_t StartRecording() override;
  int32_t StopRecording() override;
  bool Recording() const override;

  // Audio mixer initialization
  int32_t InitSpeaker() override;
  bool SpeakerIsInitialized() const override;
  int32_t InitMicrophone() override;
  bool MicrophoneIsInitialized() const override;

  // Speaker volume controls
  int32_t SpeakerVolumeIsAvailable(bool* available) override;
  int32_t SetSpeakerVolume(uint32_t volume) override;
  int32_t SpeakerVolume(uint32_t* volume) const override;
  int32_t MaxSpeakerVolume(uint32_t* maxVolume) const override;
  int32_t MinSpeakerVolume(uint32_t* minVolume) const override;

  // Microphone volume controls
  int32_t MicrophoneVolumeIsAvailable(bool* available) override;
  int32_t SetMicrophoneVolume(uint32_t volume) override;
  int32_t MicrophoneVolume(uint32_t* volume) const override;
  int32_t MaxMicrophoneVolume(uint32_t* maxVolume) const override;
  int32_t MinMicrophoneVolume(uint32_t* minVolume) const override;

  // Speaker mute control
  int32_t SpeakerMuteIsAvailable(bool* available) override;
  int32_t SetSpeakerMute(bool enable) override;
  int32_t SpeakerMute(bool* enabled) const override;

  // Microphone mute control
  int32_t MicrophoneMuteIsAvailable(bool* available) override;
  int32_t SetMicrophoneMute(bool enable) override;
  int32_t MicrophoneMute(bool* enabled) const override;

  // Stereo support
  int32_t StereoPlayoutIsAvailable(bool* available) const override;
  int32_t SetStereoPlayout(bool enable) override;
  int32_t StereoPlayout(bool* enabled) const override;
  int32_t StereoRecordingIsAvailable(bool* available) const override;
  int32_t SetStereoRecording(bool enable) override;
  int32_t StereoRecording(bool* enabled) const override;

  // Delay information and control
  int32_t PlayoutDelay(uint16_t* delayMS) const override;

  bool BuiltInAECIsAvailable() const override;
  int32_t EnableBuiltInAEC(bool enable) override;
  bool BuiltInAGCIsAvailable() const override;
  int32_t EnableBuiltInAGC(bool enable) override;
  bool BuiltInNSIsAvailable() const override;
  int32_t EnableBuiltInNS(bool enable) override;

#if defined(WEBRTC_IOS)
  int GetPlayoutAudioParameters(AudioParameters* params) const override;
  int GetRecordAudioParameters(AudioParameters* params) const override;
#endif  // WEBRTC_IOS

#if defined(WEBRTC_ANDROID)
  // Only use this accessor for test purposes on Android.
  AudioManager* GetAndroidAudioManagerForTest() {
    return audio_manager_android_.get();
  }
#endif

  AudioDeviceBuffer* GetAudioDeviceBuffer() { return &audio_device_buffer_; }

 private:
  PlatformType Platform() const;
  AudioLayer PlatformAudioLayer() const;

  AudioLayer audio_layer_;
  PlatformType platform_type_ = kPlatformNotSupported;
  bool initialized_ = false;
#if defined(WEBRTC_ANDROID)
  // Should be declared first to ensure that it outlives other resources.
  std::unique_ptr<AudioManager> audio_manager_android_;
#endif
  AudioDeviceBuffer audio_device_buffer_;
  std::unique_ptr<AudioDeviceGeneric> audio_device_;
};

}  // namespace webrtc

#endif  // defined(WEBRTC_INCLUDE_INTERNAL_AUDIO_DEVICE)

#endif  // MODULES_AUDIO_DEVICE_AUDIO_DEVICE_IMPL_H_

AudioDeviceBuffer holds the audio data exchanged with the device: audio captured by audio_device_ is handed to audio_device_buffer_ for processing and then transported via AudioTransport.

#ifndef MODULES_AUDIO_DEVICE_AUDIO_DEVICE_BUFFER_H_
#define MODULES_AUDIO_DEVICE_AUDIO_DEVICE_BUFFER_H_

#include "modules/audio_device/include/audio_device.h"
#include "rtc_base/buffer.h"
#include "rtc_base/criticalsection.h"
#include "rtc_base/task_queue.h"
#include "rtc_base/thread_annotations.h"
#include "rtc_base/thread_checker.h"
#include "typedefs.h"  // NOLINT(build/include)

namespace webrtc {

// Delta times between two successive playout callbacks are limited to this
// value before added to an internal array.
const size_t kMaxDeltaTimeInMs = 500;
// TODO(henrika): remove when no longer used by external client.
const size_t kMaxBufferSizeBytes = 3840;  // 10ms in stereo @ 96kHz

class AudioDeviceBuffer {
 public:
  enum LogState {
    LOG_START = 0,
    LOG_STOP,
    LOG_ACTIVE,
  };

  struct Stats {
    void ResetRecStats() {
      rec_callbacks = 0;
      rec_samples = 0;
      max_rec_level = 0;
    }

    void ResetPlayStats() {
      play_callbacks = 0;
      play_samples = 0;
      max_play_level = 0;
    }

    // Total number of recording callbacks where the source provides 10ms audio
    // data each time.
    uint64_t rec_callbacks = 0;

    // Total number of playback callbacks where the sink asks for 10ms audio
    // data each time.
    uint64_t play_callbacks = 0;

    // Total number of recorded audio samples.
    uint64_t rec_samples = 0;

    // Total number of played audio samples.
    uint64_t play_samples = 0;

    // Contains max level (max(abs(x))) of recorded audio packets over the last
    // 10 seconds where a new measurement is done twice per second. The level
    // is reset to zero at each call to LogStats().
    int16_t max_rec_level = 0;

    // Contains max level of recorded audio packets over the last 10 seconds
    // where a new measurement is done twice per second.
    int16_t max_play_level = 0;
  };

  AudioDeviceBuffer();
  virtual ~AudioDeviceBuffer();

  int32_t RegisterAudioCallback(AudioTransport* audio_callback);

  void StartPlayout();
  void StartRecording();
  void StopPlayout();
  void StopRecording();

  int32_t SetRecordingSampleRate(uint32_t fsHz);
  int32_t SetPlayoutSampleRate(uint32_t fsHz);
  int32_t RecordingSampleRate() const;
  int32_t PlayoutSampleRate() const;

  int32_t SetRecordingChannels(size_t channels);
  int32_t SetPlayoutChannels(size_t channels);
  size_t RecordingChannels() const;
  size_t PlayoutChannels() const;

  virtual int32_t SetRecordedBuffer(const void* audio_buffer,
                                    size_t samples_per_channel);
  virtual void SetVQEData(int play_delay_ms, int rec_delay_ms);
  virtual int32_t DeliverRecordedData();
  uint32_t NewMicLevel() const;

  virtual int32_t RequestPlayoutData(size_t samples_per_channel);
  virtual int32_t GetPlayoutData(void* audio_buffer);

  virtual void OnSpeakerError();

  int32_t SetTypingStatus(bool typing_status);

  // Called on iOS and Android where the native audio layer can be interrupted
  // by other audio applications. These methods can then be used to reset
  // internal states and detach thread checkers to allow for new audio sessions
  // where native callbacks may come from a new set of I/O threads.
  void NativeAudioPlayoutInterrupted();
  void NativeAudioRecordingInterrupted();

 private:
  // Starts/stops periodic logging of audio stats.
  void StartPeriodicLogging();
  void StopPeriodicLogging();

  // Called periodically on the internal thread created by the TaskQueue.
  // Updates some stats but does it on the task queue to ensure that access of
  // members is serialized hence avoiding usage of locks.
  // state = LOG_START => members are initialized and the timer starts.
  // state = LOG_STOP => no logs are printed and the timer stops.
  // state = LOG_ACTIVE => logs are printed and the timer is kept alive.
  void LogStats(LogState state);

  // Updates counters in each play/record callback. These counters are later
  // (periodically) read by LogStats() using a lock.
  void UpdateRecStats(int16_t max_abs, size_t samples_per_channel);
  void UpdatePlayStats(int16_t max_abs, size_t samples_per_channel);

  // Clears all members tracking stats for recording and playout.
  // These methods both run on the task queue.
  void ResetRecStats();
  void ResetPlayStats();

  // This object lives on the main (creating) thread and most methods are
  // called on that same thread. When audio has started some methods will be
  // called on either a native audio thread for playout or a native thread for
  // recording. Some members are not annotated since they are "protected by
  // design" and adding e.g. a race checker can cause failures for very few
  // edge cases and it is IMHO not worth the risk to use them in this class.
  // TODO(henrika): see if it is possible to refactor and annotate all members.

  // Main thread on which this object is created.
  rtc::ThreadChecker main_thread_checker_;

  // Native (platform specific) audio thread driving the playout side.
  rtc::ThreadChecker playout_thread_checker_;

  // Native (platform specific) audio thread driving the recording side.
  rtc::ThreadChecker recording_thread_checker_;

  rtc::CriticalSection lock_;

  // Task queue used to invoke LogStats() periodically. Tasks are executed on a
  // worker thread but it does not necessarily have to be the same thread for
  // each task.
  rtc::TaskQueue task_queue_;

  // Raw pointer to AudioTransport instance. Supplied to RegisterAudioCallback()
  // and it must outlive this object. It is not possible to change this member
  // while any media is active. It is possible to start media without calling
  // RegisterAudioCallback() but that will lead to ignored audio callbacks in
  // both directions where native audio will be active but no audio samples will
  // be transported.
  AudioTransport* audio_transport_cb_;

  // The members below that are not annotated are protected by design. They are
  // all set on the main thread (verified by |main_thread_checker_|) and then
  // read on either the playout or recording audio thread. But, media will never
  // be active when the member is set; hence no conflict exists. It is too
  // complex to ensure and verify that this is actually the case.

  // Sample rate in Hertz.
  uint32_t rec_sample_rate_;
  uint32_t play_sample_rate_;

  // Number of audio channels.
  size_t rec_channels_;
  size_t play_channels_;

  // Keeps track of if playout/recording are active or not. A combination
  // of these states are used to determine when to start and stop the timer.
  // Only used on the creating thread and not used to control any media flow.
  bool playing_ RTC_GUARDED_BY(main_thread_checker_);
  bool recording_ RTC_GUARDED_BY(main_thread_checker_);

  // Buffer used for audio samples to be played out. Size can be changed
  // dynamically. The 16-bit samples are interleaved, hence the size is
  // proportional to the number of channels.
  rtc::BufferT<int16_t> play_buffer_ RTC_GUARDED_BY(playout_thread_checker_);

  // Byte buffer used for recorded audio samples. Size can be changed
  // dynamically.
  rtc::BufferT<int16_t> rec_buffer_ RTC_GUARDED_BY(recording_thread_checker_);

  // Contains true if a key-press has been detected.
  bool typing_status_ RTC_GUARDED_BY(recording_thread_checker_);

  // Delay values used by the AEC.
  int play_delay_ms_ RTC_GUARDED_BY(recording_thread_checker_);
  int rec_delay_ms_ RTC_GUARDED_BY(recording_thread_checker_);

  // Counts number of times LogStats() has been called.
  size_t num_stat_reports_ RTC_GUARDED_BY(task_queue_);

  // Time stamp of last timer task (drives logging).
  int64_t last_timer_task_time_ RTC_GUARDED_BY(task_queue_);

  // Counts number of audio callbacks modulo 50 to create a signal when
  // a new storage of audio stats shall be done.
  int16_t rec_stat_count_ RTC_GUARDED_BY(recording_thread_checker_);
  int16_t play_stat_count_ RTC_GUARDED_BY(playout_thread_checker_);

  // Time stamps of when playout and recording starts.
  int64_t play_start_time_ RTC_GUARDED_BY(main_thread_checker_);
  int64_t rec_start_time_ RTC_GUARDED_BY(main_thread_checker_);

  // Contains counters for playout and recording statistics.
  Stats stats_ RTC_GUARDED_BY(lock_);

  // Stores current stats at each timer task. Used to calculate differences
  // between two successive timer events.
  Stats last_stats_ RTC_GUARDED_BY(task_queue_);

  // Set to true at construction and modified to false as soon as one audio-
  // level estimate larger than zero is detected.
  bool only_silence_recorded_;

  // Set to true when logging of audio stats is enabled for the first time in
  // StartPeriodicLogging() and set to false by StopPeriodicLogging().
  // Setting this member to false prevents (possibly invalid) log messages from
  // being printed in the LogStats() task.
  bool log_stats_ RTC_GUARDED_BY(task_queue_);

// Should *never* be defined in production builds. Only used for testing.
// When defined, the output signal will be replaced by a sinus tone at 440Hz.
#ifdef AUDIO_DEVICE_PLAYS_SINUS_TONE
  double phase_ RTC_GUARDED_BY(playout_thread_checker_);
#endif
};

}  // namespace webrtc

#endif  // MODULES_AUDIO_DEVICE_AUDIO_DEVICE_BUFFER_H_

The Audio Processing Module (APM) provides a collection of voice processing components designed for real-time communications software. Its header documents the intended usage:

// The Audio Processing Module (APM) provides a collection of voice processing
// components designed for real-time communications software.
//
// APM operates on two audio streams on a frame-by-frame basis. Frames of the
// primary stream, on which all processing is applied, are passed to
// |ProcessStream()|. Frames of the reverse direction stream are passed to
// |ProcessReverseStream()|. On the client-side, this will typically be the
// near-end (capture) and far-end (render) streams, respectively. APM should be
// placed in the signal chain as close to the audio hardware abstraction layer
// (HAL) as possible.
//
// On the server-side, the reverse stream will normally not be used, with
// processing occurring on each incoming stream.
//
// Component interfaces follow a similar pattern and are accessed through
// corresponding getters in APM. All components are disabled at create-time,
// with default settings that are recommended for most situations. New settings
// can be applied without enabling a component. Enabling a component triggers
// memory allocation and initialization to allow it to start processing the
// streams.
//
// Thread safety is provided with the following assumptions to reduce locking
// overhead:
//   1. The stream getters and setters are called from the same thread as
//      ProcessStream(). More precisely, stream functions are never called
//      concurrently with ProcessStream().
//   2. Parameter getters are never called concurrently with the corresponding
//      setter.
//
// APM accepts only linear PCM audio data in chunks of 10 ms. The int16
// interfaces use interleaved data, while the float interfaces use deinterleaved
// data.
//
// Usage example, omitting error checking:
// AudioProcessing* apm = AudioProcessingBuilder().Create();
//
// AudioProcessing::Config config;
// config.high_pass_filter.enabled = true;
// config.gain_controller2.enabled = true;
// apm->ApplyConfig(config);
//
// apm->echo_cancellation()->enable_drift_compensation(false);
// apm->echo_cancellation()->Enable(true);
//
// apm->noise_reduction()->set_level(kHighSuppression);
// apm->noise_reduction()->Enable(true);
//
// apm->gain_control()->set_analog_level_limits(0, 255);
// apm->gain_control()->set_mode(kAdaptiveAnalog);
// apm->gain_control()->Enable(true);
//
// apm->voice_detection()->Enable(true);
//
// // Start a voice call...
//
// // ... Render frame arrives bound for the audio HAL ...
// apm->ProcessReverseStream(render_frame);
//
// // ... Capture frame arrives from the audio HAL ...
// // Call required set_stream_ functions.
// apm->set_stream_delay_ms(delay_ms);
// apm->gain_control()->set_stream_analog_level(analog_level);
//
// apm->ProcessStream(capture_frame);
//
// // Call required stream_ functions.
// analog_level = apm->gain_control()->stream_analog_level();
// has_voice = apm->stream_has_voice();
//
// // Repeat render and capture processing for the duration of the call...
// // Start a new call...
// apm->Initialize();
//
// // Close the application...
// delete apm;
//
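
Condensed into code, a capture-side skeleton following the comment above might look like this (a sketch only, not verified against a specific WebRTC revision; delay_ms stands in for your measured device delay):

webrtc::AudioProcessing* apm = webrtc::AudioProcessingBuilder().Create();

webrtc::AudioProcessing::Config config;
config.high_pass_filter.enabled = true;
config.gain_controller2.enabled = true;
apm->ApplyConfig(config);

webrtc::AudioFrame render_frame;   // 10 ms of far-end (render) audio.
webrtc::AudioFrame capture_frame;  // 10 ms of near-end (capture) audio.
int delay_ms = 0;                  // Placeholder: measured render+capture delay.

// Per 10 ms tick:
apm->ProcessReverseStream(&render_frame);  // Analyze the render stream first.
apm->set_stream_delay_ms(delay_ms);        // Required before ProcessStream().
apm->ProcessStream(&capture_frame);        // Process capture audio in place.

// Close the application...
delete apm;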
