
【WebRTC-12】What exactly does CreatePeerConnection create?

A soft reboot of the Android-RTC series: instead of the previous close, line-by-line reading of the source code, we will analyze the code through concrete questions. This makes the series more practical and helps build muscle memory. Questions of any kind and any difficulty are welcome online as entry points.

Question: what exactly does CreatePeerConnection create?

The point of asking this question is to map out the class structure and clarify the dependencies, which will make later articles easier to develop. It is best to first read the earlier article on PeerConnectionFactory. From it we know that PeerConnectionFactory holds a key class, ConnectionContext; please keep an eye on it. With that in mind, let's answer the question above, starting from PeerConnectionFactory->CreatePeerConnection.

I will skip the Java/JNI layer and go straight to the underlying implementation.

// File: pc\peer_connection_factory.cc
RTCErrorOr<rtc::scoped_refptr<PeerConnectionInterface>>
PeerConnectionFactory::CreatePeerConnectionOrError(
    const PeerConnectionInterface::RTCConfiguration& configuration,
    PeerConnectionDependencies dependencies) {
  ... ... ...
  std::unique_ptr<Call> call =
      worker_thread()->BlockingCall([this, &env, &configuration] {
        return CreateCall_w(env, configuration);
      });
  auto result = PeerConnection::Create(env, context_, options_, std::move(call),
                                       configuration, std::move(dependencies));
  if (!result.ok()) {
    return result.MoveError();
  }
  rtc::scoped_refptr<PeerConnectionInterface> result_proxy =
      PeerConnectionProxy::Create(signaling_thread(), network_thread(),
                                  result.MoveValue());
  return result_proxy;
}

Looking at the arguments to PeerConnection::Create: configuration and dependencies are passed in from outside, context_ is the ConnectionContext, and the rest are members created along with the PeerConnectionFactory — plus one newly created Call object. Let's look at this Call next.

std::unique_ptr<Call> PeerConnectionFactory::CreateCall_w(
    const Environment& env,
    const PeerConnectionInterface::RTCConfiguration& configuration) {
  CallConfig call_config(env, network_thread());
  if (!media_engine() || !context_->call_factory()) {
    return nullptr;
  }
  ... ... ...
  return context_->call_factory()->CreateCall(call_config);
}

Here it calls CreateCall on ConnectionContext's call_factory. To trace where that call_factory comes from, we can quickly locate ConnectionContext's constructor (covered in an earlier article); the call stack is as follows.

CreatePeerConnectionFactoryForJava -> EnableMedia(dependencies)
|-->dependencies.media_factory = std::make_unique<MediaFactoryImpl>();
|---->ConnectionContext::Create
|------>call_factory_(std::move(dependencies->media_factory))
// File: api\enable_media.cc
class MediaFactoryImpl : public MediaFactory {
 public:
  MediaFactoryImpl() = default;
  MediaFactoryImpl(const MediaFactoryImpl&) = delete;
  MediaFactoryImpl& operator=(const MediaFactoryImpl&) = delete;
  ~MediaFactoryImpl() override = default;

  std::unique_ptr<Call> CreateCall(const CallConfig& config) override {
    return webrtc::CreateCall(config);
  }

  std::unique_ptr<MediaEngineInterface> CreateMediaEngine(
      const Environment& env,
      PeerConnectionFactoryDependencies& deps) override {
    auto audio_engine = std::make_unique<WebRtcVoiceEngine>(
        &env.task_queue_factory(), deps.adm.get(),
        std::move(deps.audio_encoder_factory),
        std::move(deps.audio_decoder_factory), std::move(deps.audio_mixer),
        std::move(deps.audio_processing), std::move(deps.audio_frame_processor),
        env.field_trials());
    auto video_engine = std::make_unique<WebRtcVideoEngine>(
        std::move(deps.video_encoder_factory),
        std::move(deps.video_decoder_factory), env.field_trials());
    return std::make_unique<CompositeMediaEngine>(std::move(audio_engine),
                                                  std::move(video_engine));
  }
};

Here we see CreateCall, and alongside it the creation of the media engines in CreateMediaEngine. Following webrtc::CreateCall leads to Call::Create.

// File: call\create_call.cc
std::unique_ptr<Call> CreateCall(const CallConfig& config) {
  std::vector<DegradedCall::TimeScopedNetworkConfig> send_degradation_configs =
      GetNetworkConfigs(config.env.field_trials(), /*send=*/true);
  std::vector<DegradedCall::TimeScopedNetworkConfig>
      receive_degradation_configs =
          GetNetworkConfigs(config.env.field_trials(), /*send=*/false);

  std::unique_ptr<Call> call = Call::Create(config);

  if (!send_degradation_configs.empty() ||
      !receive_degradation_configs.empty()) {
    return std::make_unique<DegradedCall>(
        std::move(call), send_degradation_configs, receive_degradation_configs);
  }
  return call;
}

// File: call\call.cc
std::unique_ptr<Call> Call::Create(const CallConfig& config) {
  std::unique_ptr<RtpTransportControllerSendInterface> transport_send;
  if (config.rtp_transport_controller_send_factory != nullptr) {
    transport_send = config.rtp_transport_controller_send_factory->Create(
        config.ExtractTransportConfig());
  } else {
    transport_send = RtpTransportControllerSendFactory().Create(
        config.ExtractTransportConfig());
  }
  return std::make_unique<internal::Call>(config, std::move(transport_send));
}

This finally lands on the internal::Call class. Let's first look at Call's inheritance.

// A Call represents a two-way connection carrying zero or more outgoing
// and incoming media streams, transported over one or more RTP transports.
// A Call instance can contain several send and/or receive streams. All streams
// are assumed to have the same remote endpoint and will share bitrate estimates etc.
// When using the PeerConnection API, there is a one to one relationship
// between the PeerConnection and the Call.
 

// File: call\call.cc
class Call final : public webrtc::Call,                    // the abstract Call interface
                   public PacketReceiver,                  // receives incoming RTP/RTCP packets
                   public TargetTransferRateObserver,      // target transfer-rate callbacks
                   public BitrateAllocator::LimitObserver  // turns bandwidth estimates into encoder bitrate limits
{
  ... ... ...
};

That covers the overall picture. Now let's dig into Call's key member variables and methods to see, functionally, what Call actually does.

class Call final : public webrtc::Call,
                   public PacketReceiver,
                   public TargetTransferRateObserver,
                   public BitrateAllocator::LimitObserver {
  // Constructor; note the RtpTransportControllerSendInterface.
  Call(const CallConfig& config,
       std::unique_ptr<RtpTransportControllerSendInterface> transport_send);

  // Audio Send/Receive Stream.
  webrtc::AudioSendStream* CreateAudioSendStream(
      const webrtc::AudioSendStream::Config& config) override;
  void DestroyAudioSendStream(webrtc::AudioSendStream* send_stream) override;
  webrtc::AudioReceiveStreamInterface* CreateAudioReceiveStream(
      const webrtc::AudioReceiveStreamInterface::Config& config) override;
  void DestroyAudioReceiveStream(
      webrtc::AudioReceiveStreamInterface* receive_stream) override;

  // Video Send/Receive Stream.
  webrtc::VideoSendStream* CreateVideoSendStream(
      webrtc::VideoSendStream::Config config,
      VideoEncoderConfig encoder_config) override;
  void DestroyVideoSendStream(webrtc::VideoSendStream* send_stream) override;
  webrtc::VideoReceiveStreamInterface* CreateVideoReceiveStream(
      webrtc::VideoReceiveStreamInterface::Config configuration) override;
  void DestroyVideoReceiveStream(
      webrtc::VideoReceiveStreamInterface* receive_stream) override;

  // FlexFEC (forward error correction) receive streams.
  FlexfecReceiveStream* CreateFlexfecReceiveStream(
      const FlexfecReceiveStream::Config config) override;
  void DestroyFlexfecReceiveStream(
      FlexfecReceiveStream* receive_stream) override;

  // RTCP packet handler.
  void DeliverRtcpPacket(rtc::CopyOnWriteBuffer packet) override;

  // RTP packet handler.
  void DeliverRtpPacket(
      MediaType media_type,
      RtpPacketReceived packet,
      OnUndemuxablePacketHandler undemuxable_packet_handler) override;

  // Keeps receive streams associated with their local SSRC.
  void OnLocalSsrcUpdated(webrtc::AudioReceiveStreamInterface& stream,
                          uint32_t local_ssrc) override;
  void OnLocalSsrcUpdated(VideoReceiveStreamInterface& stream,
                          uint32_t local_ssrc) override;
  void OnLocalSsrcUpdated(FlexfecReceiveStream& stream,
                          uint32_t local_ssrc) override;

  // Implements TargetTransferRateObserver, ...
  // Implements BitrateAllocator::LimitObserver, ...
};

From this it should be clear that the Call object is essentially the manager of media stream sending and receiving, directly tied to the transport module. As we go deeper into the source, we will be dealing with this Call object frequently.

That's enough on Call for now. It is time to go back to PeerConnection::Create — after all, our starting question is what PeerConnection creates.

RTCErrorOr<rtc::scoped_refptr<PeerConnection>> PeerConnection::Create(
    const Environment& env,
    rtc::scoped_refptr<ConnectionContext> context,
    const PeerConnectionFactoryInterface::Options& options,
    std::unique_ptr<Call> call,
    const PeerConnectionInterface::RTCConfiguration& configuration,
    PeerConnectionDependencies dependencies) {
  ... ... ...
  // The PeerConnection constructor consumes some, but not all, dependencies.
  auto pc = rtc::make_ref_counted<PeerConnection>(
      env, context, options, is_unified_plan, std::move(call), dependencies,
      dtls_enabled);
  RTCError init_error = pc->Initialize(configuration, std::move(dependencies));
  if (!init_error.ok()) {
    RTC_LOG(LS_ERROR) << "PeerConnection initialization failed";
    return init_error;
  }
  return pc;
}

PeerConnection's constructor merely assigns member variables; what we need to focus on is PeerConnection::Initialize.

RTCError PeerConnection::Initialize(
    const PeerConnectionInterface::RTCConfiguration& configuration,
    PeerConnectionDependencies dependencies) {
  ... ... ...
  cricket::ServerAddresses stun_servers;
  std::vector<cricket::RelayServerConfig> turn_servers;
  RTCError parse_error = ParseAndValidateIceServersFromConfiguration(
      configuration, stun_servers, turn_servers, usage_pattern_);
  if (!parse_error.ok()) {
    return parse_error;
  }

  // Network thread initialization.
  transport_controller_copy_ = network_thread()->BlockingCall([&] {
    RTC_DCHECK_RUN_ON(network_thread());
    network_thread_safety_ = PendingTaskSafetyFlag::Create();
    ... ... ...
    InitializePortAllocatorResult pa_result =
        InitializePortAllocator_n(stun_servers, turn_servers, configuration);
    ... ... ...
    return InitializeTransportController_n(configuration, dependencies);
  });
  ... ... ...

  sdp_handler_ = SdpOfferAnswerHandler::Create(this, configuration,
                                               dependencies, context_.get());

  rtp_manager_ = std::make_unique<RtpTransmissionManager>(
      IsUnifiedPlan(), context_.get(), &usage_pattern_, observer_,
      legacy_stats_.get(), [this]() {
        RTC_DCHECK_RUN_ON(signaling_thread());
        sdp_handler_->UpdateNegotiationNeeded();
      });

  // Add default audio/video transceivers for Plan B SDP.
  if (!IsUnifiedPlan()) {
    rtp_manager()->transceivers()->Add(
        RtpTransceiverProxyWithInternal<RtpTransceiver>::Create(
            signaling_thread(), rtc::make_ref_counted<RtpTransceiver>(
                                    cricket::MEDIA_TYPE_AUDIO, context())));
    rtp_manager()->transceivers()->Add(
        RtpTransceiverProxyWithInternal<RtpTransceiver>::Create(
            signaling_thread(), rtc::make_ref_counted<RtpTransceiver>(
                                    cricket::MEDIA_TYPE_VIDEO, context())));
  }
  return RTCError::OK();
}

The excerpt above keeps only the main logic. The core member variables it sets up are:

  // The transport controller is set and used on the network thread.
  // Some functions pass the value of the transport_controller_ pointer
  // around as arguments while running on the signaling thread;
  // these use the transport_controller_copy.
  std::unique_ptr<JsepTransportController> transport_controller_;
  JsepTransportController* transport_controller_copy_ = nullptr;


  // The machinery for handling offers and answers. Const after initialization.
  std::unique_ptr<SdpOfferAnswerHandler> sdp_handler_;

  // Administration of senders, receivers and transceivers
  // Accessed on both signaling and network thread. Const after Initialize().
  std::unique_ptr<RtpTransmissionManager> rtp_manager_;

Summing up what we have so far: PeerConnection = Call + SDP handler + RTP manager + parts not yet covered (ICE/STUN/TURN). These are the core client-side classes; we will analyze the remaining pieces later, together with network transport.
