Project scenario:

A WebRTC application built on OWT: a Windows exe and the Chrome browser establish a connection to implement a remote-control tool similar to 向日葵 (Sunflower).

Problem description:

After the Windows application and Chrome establish a connection, no audio can be heard. Whether a stream is published (publish) or subscribed (subscribe) is controlled by parameters passed down from the business layer.

Cause analysis:

In the OWT/WebRTC threading model there is an initialization thread and a separate audio data (capture) thread. The capture thread detected an error while fetching data and exited immediately, so not a single audio packet was ever sent.
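
A minimal sketch of this race may help before reading the logs (purely illustrative: the thread bodies, flag names and timings are stand-ins, not the OWT/WebRTC code). The capture thread starts handling sample events before the initialization thread has finished, the data callback fails because the device state is not ready yet, and the loop treats that first failure as fatal and exits, so no audio packet is ever produced:

#include <atomic>
#include <chrono>
#include <iostream>
#include <thread>

// Illustrative flag standing in for CoreAudioBase::initialized_ / is_active_.
std::atomic<bool> initialized{false};

// Stand-in for CoreAudioInput::OnDataCallback(): fails while not initialized.
bool OnDataCallback() {
  if (!initialized) {
    std::cout << "callback fired before init -> returns false\n";
    return false;
  }
  return true;  // Normally: read a capture buffer and feed the send pipeline.
}

int main() {
  // Capture thread, mimicking the shape of CoreAudioBase::ThreadRun().
  std::thread capture([] {
    bool error = false;
    while (!error) {
      // A sample-ready event fires immediately, before init has completed.
      error = !OnDataCallback();
      std::this_thread::sleep_for(std::chrono::milliseconds(10));
    }
    // Without the fix, the very first failed callback ends the loop here,
    // which is the "WASAPI streaming failed" exit seen in the abnormal log.
    std::cout << "capture thread exits on error\n";
  });

  // Initialization thread: finishes slightly later than the first callback.
  std::this_thread::sleep_for(std::chrono::milliseconds(50));
  initialized = true;

  capture.join();
  return 0;
}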

//======= Log shows that audio capture was opened successfully =========
(core_audio_input_win.cc:276): --- Input audio stream is alive ---
(audio_device_buffer.cc:239): Size of recording buffer: 960
(render_delay_buffer.cc:362): Applying total delay of 5 blocks.
(matched_filter.cc:450): Filter 0: start: 0 ms, end: 128 ms.
(matched_filter.cc:450): Filter 1: start: 96 ms, end: 224 ms.
(matched_filter.cc:450): Filter 2: start: 192 ms, end: 320 ms.
(matched_filter.cc:450): Filter 3: start: 288 ms, end: 416 ms.
(matched_filter.cc:450): Filter 4: start: 384 ms, end: 512 ms.
(render_delay_buffer.cc:330): Receiving a first externally reported audio buffer delay of 16 ms.
// ========== The first audio packet was sent =============
(rtp_sender_audio.cc:309): First audio RTP packet sent to pacer
// ========== The first video packet was sent =============
(rtp_sender_video.cc:667): Sent first RTP packet of the first video frame (pre-pacer)
(rtp_sender_video.cc:671): Sent last RTP packet of the first video frame (pre-pacer)
(video_send_stream_impl.cc:467): SignalEncoderActive, Encoder is active.

Abnormal log (the "First audio RTP packet sent" line never appears):

//======== The Windows audio API reports an error on open ===========
(core_audio_base_win.cc:955): [Input] WASAPI streaming failed.
(channel.cc:821): Changing voice state, recv=0 send=1
(thread.cc:668): Message took 307ms to dispatch. Posted from: cricket::BaseChannel::UpdateMediaSendRecvState@../../third_party/webrtc/pc/channel.cc:800
(webrtc_video_engine.cc:1193): SetSend: true
(video_send_stream.cc:148): UpdateActiveSimulcastLayers: {1}
(bitrate_allocator.cc:523): UpdateAllocationLimits : total_requested_min_bitrate: 62 kbps, total_requested_padding_bitrate: 0 bps, total_requested_max_bitrate: 2532 kbps
(pacing_controller.cc:213): bwe:pacer_updated pacing_kbps=750 padding_budget_kbps=0
(video_stream_encoder.cc:1594): OnBitrateUpdated, bitrate 268000 stable bitrate = 268000 link allocation bitrate = 268000 packet loss 0 rtt 0
(video_stream_encoder.cc:1619): Video suspend state changed to: not suspended
(channel.cc:970): Changing video state, send=1
(video_stream_encoder.cc:1130): Encoder settings changed from EncoderInfo { ScalingSettings { min_pixels_per_frame = 57600 }, requested_resolution_alignment = 1, supports_native_handle = 0, implementation_name = 'unknown', has_trusted_rate_controller = 0, is_hardware_accelerated = 1, has_internal_source = 0, fps_allocation = [[ 1] ], resolution_bitrate_limits = [] , supports_simulcast = 0} to EncoderInfo { ScalingSettings { Thresholds { low = 29, high = 95}, min_pixels_per_frame = 57600 }, requested_resolution_alignment = 1, supports_native_handle = 0, implementation_name = 'libvpx', has_trusted_rate_controller = 0, is_hardware_accelerated = 0, has_internal_source = 0, fps_allocation = [[ 1] ], resolution_bitrate_limits = [] , supports_simulcast = 1}
// ========= Only the first video packet was sent ============
(rtp_sender_video.cc:667): Sent first RTP packet of the first video frame (pre-pacer)
(rtp_sender_video.cc:671): Sent last RTP packet of the first video frame (pre-pacer)
(video_send_stream_impl.cc:467): SignalEncoderActive, Encoder is active.

The logic of the audio capture thread (core_audio_base_win.cc):

// Thread entry point.
void Run(void* obj) {
  RTC_DCHECK(obj);
  reinterpret_cast<CoreAudioBase*>(obj)->ThreadRun();
}

// Thread body.
void CoreAudioBase::ThreadRun() {
  // ...
  HANDLE wait_array[] = {stop_event_.Get(), restart_event_.Get(),
                         audio_samples_event_.Get()};
  // Keep streaming audio until the stop event or the stream-switch event
  // is signaled. An error event can also break the main thread loop.
  while (streaming && !error) {  // Loop until an error occurs, then the thread exits immediately.
    // Wait for a close-down event, stream-switch event or a new render event.
    DWORD wait_result = WaitForMultipleObjects(arraysize(wait_array),
                                               wait_array, false, INFINITE);
    switch (wait_result) {
      case WAIT_OBJECT_0 + 0:
        // |stop_event_| has been set.
        streaming = false;
        break;
      case WAIT_OBJECT_0 + 1:
        // |restart_event_| has been set.
        error = !HandleRestartEvent();
        break;
      case WAIT_OBJECT_0 + 2: {
        // |audio_samples_event_| has been set.
        error = !on_data_callback_(device_frequency);
        if (!initialized_ || !is_active_) {  // This check did not exist originally; it is the fix.
          RTC_LOG(INFO) << "audio base not init, initialized:" << initialized_
                        << " is_active_:" << is_active_;
          error = 0;
        }
        break;
      }
      default:
        error = true;
        break;
    }
  }
  if (streaming && error) {  // Exit path on error.
    RTC_LOG(LS_ERROR) << "[" << DirectionToString(direction())
                      << "] WASAPI streaming failed. streaming:" << streaming
                      << " error:" << error;
    // Stop audio streaming since something has gone wrong in our main thread
    // loop. Note that, we are still in a "started" state, hence a Stop() call
    // is required to join the thread properly.
    result = audio_client_->Stop();
    if (FAILED(result.Error())) {
      RTC_LOG(LS_ERROR) << "IAudioClient::Stop failed: "
                        << core_audio_utility::ErrorToString(result);
    }
    // TODO(henrika): notify clients that something has gone wrong and that
    // this stream should be destroyed instead of reused in the future.
  }
  RTC_DLOG(INFO) << "[" << DirectionToString(direction())
                 << "] ...ThreadRun stops";
}
// The data callback invoked on the capture thread.
bool CoreAudioInput::OnDataCallback(uint64_t device_frequency) {
  RTC_DCHECK_RUN_ON(&thread_checker_audio_);
  if (!initialized_ || !is_active_) {  // Not initialized yet: return false right away; the check relies on shared state flags.
    // This is concurrent examination of state across multiple threads so will
    // be somewhat error prone, but we should still be defensive and not use
    // audio_capture_client_ if we know it's not there.
    RTC_LOG(INFO) << "data call back, initialized:" << initialized_
                  << " is_active_:" << is_active_;
    return false;
  }
  // ...
}

Set-up logic before the thread is started (CoreAudioBase constructor):

CoreAudioBase::CoreAudioBase(Direction direction,
                             bool automatic_restart,
                             OnDataCallback data_callback,
                             OnErrorCallback error_callback)
    : format_(),
      direction_(direction),
      automatic_restart_(automatic_restart),
      on_data_callback_(data_callback),
      on_error_callback_(error_callback),
      device_index_(kUndefined),
      is_restarting_(false) {
  RTC_DLOG(INFO) << __FUNCTION__ << "[" << DirectionToString(direction) << "]";
  RTC_DLOG(INFO) << "Automatic restart: " << automatic_restart;
  RTC_DLOG(INFO) << "Windows version: " << rtc::rtc_win::GetVersion();
  // Create the event which the audio engine will signal each time a buffer
  // becomes ready to be processed by the client.
  // Note: the three events created below are what the thread loop waits on.
  audio_samples_event_.Set(CreateEvent(nullptr, false, false, nullptr));
  RTC_DCHECK(audio_samples_event_.IsValid());
  // Event to be set in Stop() when rendering/capturing shall stop.
  stop_event_.Set(CreateEvent(nullptr, false, false, nullptr));
  RTC_DCHECK(stop_event_.IsValid());
  // Event to be set when it has been detected that an active device has been
  // invalidated or the stream format has changed.
  restart_event_.Set(CreateEvent(nullptr, false, false, nullptr));
  RTC_DCHECK(restart_event_.IsValid());
  enumerator_ = core_audio_utility::CreateDeviceEnumerator();
  enumerator_->RegisterEndpointNotificationCallback(this);
  RTC_LOG(INFO) << __FUNCTION__
                << ":Registered endpoint notification callback.";
}
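
The audio_samples_event_ created above is the event that wakes the loop in ThreadRun(). As background, here is a hedged sketch of how such an event is typically handed to WASAPI so the audio engine signals it for every ready buffer (the real OWT/WebRTC call site lives in the core_audio utility code and may differ; WireSampleEvent and its parameters are illustrative):

#include <audioclient.h>
#include <windows.h>

// Sketch: request an event-driven shared-mode stream and register the event.
// Assumes an already-activated IAudioClient and a valid WAVEFORMATEX.
HRESULT WireSampleEvent(IAudioClient* audio_client,
                        const WAVEFORMATEX* format,
                        HANDLE audio_samples_event) {
  // AUDCLNT_STREAMFLAGS_EVENTCALLBACK makes the engine set the event each
  // time a buffer becomes ready, which is what WaitForMultipleObjects() in
  // ThreadRun() (case WAIT_OBJECT_0 + 2) waits for.
  HRESULT hr = audio_client->Initialize(
      AUDCLNT_SHAREMODE_SHARED, AUDCLNT_STREAMFLAGS_EVENTCALLBACK,
      0 /* hnsBufferDuration */, 0 /* hnsPeriodicity */, format, nullptr);
  if (FAILED(hr))
    return hr;
  return audio_client->SetEventHandle(audio_samples_event);
}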

Solution:

1. A callback failure that only means initialization has not completed should not be reported as an error; eliminate this class of error so the capture thread does not exit (a condensed sketch follows this list).
2. If everything above is correct, the WebRTC log shows the audio channel is healthy, and Wireshark shows audio RTP packets being sent, yet still no audio is heard, the microphone may simply not be enabled. A microphone flag is usually passed down when the call is established; check whether that flag is turned on.
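
A condensed restatement of fix 1 (it mirrors the modification already shown in ThreadRun() above; IsFatalCallbackFailure is an illustrative helper, not a WebRTC API). One refinement worth considering is to downgrade the error only when the callback actually failed, so genuine streaming errors after initialization still stop the thread:

// Decision logic of the fix in isolation: a failed data callback is fatal
// only once the stream is fully initialized and active.
bool IsFatalCallbackFailure(bool callback_ok, bool initialized, bool is_active) {
  if (callback_ok)
    return false;                 // Callback succeeded: nothing to handle.
  if (!initialized || !is_active)
    return false;                 // Not ready yet: benign, keep waiting.
  return true;                    // Real streaming error: leave the loop.
}

Inside the WAIT_OBJECT_0 + 2 branch this would be used as:
error = IsFatalCallbackFailure(on_data_callback_(device_frequency), initialized_, is_active_);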
