Notes on WebRTC's JitterBuffer
(The receive-side class diagram is incomplete.)
1. Main classes
The class video_coding::PacketBuffer receives and stores RTP packets; it is used inside RtpVideoStreamReceiver2.
packet_buffer_(clock_, kPacketBufferStartSize, PacketBufferMaxSize()) — the start size is 512 packets and the maximum is 2048.
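The sizing behaviour can be pictured with a small sketch. This only illustrates "start at 512 slots and double up to 2048" under invented names (SimplePacketBuffer, StoredPacket); it is not the real webrtc PacketBuffer:

    #include <cstddef>
    #include <cstdint>
    #include <utility>
    #include <vector>

    struct StoredPacket {
      uint16_t seq_num = 0;
      bool used = false;
    };

    class SimplePacketBuffer {
     public:
      SimplePacketBuffer(size_t start_size, size_t max_size)
          : max_size_(max_size), slots_(start_size) {}

      // Returns false only when the buffer is full even at its maximum size.
      bool Insert(uint16_t seq_num) {
        while (true) {
          StoredPacket& slot = slots_[seq_num % slots_.size()];
          if (!slot.used || slot.seq_num == seq_num) {
            slot.seq_num = seq_num;
            slot.used = true;
            return true;
          }
          if (slots_.size() >= max_size_)
            return false;  // collision and no room left to grow
          Grow();          // double the capacity and re-place existing packets
        }
      }

     private:
      void Grow() {
        std::vector<StoredPacket> old = std::move(slots_);
        slots_.assign(old.size() * 2, StoredPacket());
        // Packets from the old table occupy distinct indices modulo the old
        // size, so after doubling they still land in distinct slots.
        for (const StoredPacket& p : old)
          if (p.used) slots_[p.seq_num % slots_.size()] = p;
      }

      const size_t max_size_;
      std::vector<StoredPacket> slots_;
    };

    // Usage mirroring the constructor call quoted above:
    //   SimplePacketBuffer packet_buffer(/*start_size=*/512, /*max_size=*/2048);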
The class video_coding::FrameBuffer holds complete, fully assembled video frames; it is used inside VideoReceiveStream2.
2. Determining whether a frame is complete
Determine whether a packet is the first or the last packet of a frame:
parsed_payload->type.Video.is_first_packet_in_frame = first_fragment;  // S bit of the S|E|R flags in the second byte of an H.264 FU-A
video_header.is_last_packet_in_frame |= rtp_packet.Marker();  // RTP marker bit
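To make those two flags concrete, here is a hedged sketch of how they can be derived for H.264 — the first-packet flag from the S bit in the second byte of an FU-A (or trivially true for a non-fragmented NAL unit), the last-packet flag from the RTP marker bit. PacketFlags and ParseH264PacketFlags are invented names, not WebRTC's depacketizer API:

    #include <cstddef>
    #include <cstdint>

    struct PacketFlags {
      bool first_packet_in_frame = false;
      bool last_packet_in_frame = false;
    };

    PacketFlags ParseH264PacketFlags(const uint8_t* payload, size_t size,
                                     bool rtp_marker) {
      PacketFlags flags;
      if (size == 0) return flags;
      const uint8_t kFuA = 28;                     // FU-A fragmentation unit
      const uint8_t nal_type = payload[0] & 0x1F;  // low 5 bits of the NAL header
      if (nal_type == kFuA && size >= 2) {
        // Second byte of an FU-A holds the S (start) | E (end) | R (reserved) bits.
        flags.first_packet_in_frame = (payload[1] & 0x80) != 0;  // S bit
      } else {
        // A packet that is not a fragment starts its frame by itself.
        flags.first_packet_in_frame = true;
      }
      // The last packet of a frame is signalled by the RTP marker bit.
      flags.last_packet_in_frame = rtp_marker;
      return flags;
    }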
void RtpVideoStreamReceiver2::OnInsertedPacket(video_coding::PacketBuffer::InsertResult result) {
  for (auto& packet : result.packets) {
    ...
    // Assemble the RTP packets into one complete frame and hand it over to
    // video_coding::RtpFrameReferenceFinder for management.
    OnAssembledFrame(std::make_unique<video_coding::RtpFrameObject>(...));
  }
}

RtpVideoStreamReceiver2::OnAssembledFrame() --| reference_finder_->ManageFrame(std::move(frame));

3. Inter-frame completeness check (GOP references) — not fully understood
The sequence number of the frame's last packet is used as the pid (picture id), the unique identifier of the picture.
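My rough reading of what the reference finder does with that pid, written as a hedged sketch: a keyframe starts a new GOP, and every delta frame references the previous picture in its GOP. SimpleGopTracker is invented and far simpler than the real RtpFrameReferenceFinder, which also copes with reordering, sequence-number wrap-around and codec-specific headers:

    #include <cstdint>
    #include <optional>
    #include <vector>

    struct FrameMeta {
      uint16_t last_seq_num;  // becomes the picture id (pid)
      bool is_keyframe;
    };

    struct FrameRefs {
      uint16_t picture_id;
      std::vector<uint16_t> references;  // picture ids this frame depends on
    };

    class SimpleGopTracker {
     public:
      // Returns std::nullopt when a delta frame arrives before any keyframe,
      // i.e. its references cannot be resolved yet.
      std::optional<FrameRefs> ManageFrame(const FrameMeta& frame) {
        FrameRefs refs{frame.last_seq_num, {}};
        if (frame.is_keyframe) {
          last_picture_id_ = frame.last_seq_num;  // a new GOP starts here
          return refs;
        }
        if (!last_picture_id_) return std::nullopt;  // no GOP to attach to yet
        refs.references.push_back(*last_picture_id_);
        last_picture_id_ = frame.last_seq_num;
        return refs;
      }

     private:
      std::optional<uint16_t> last_picture_id_;
    };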
4. Where is FrameBuffer used? — decoding
VideoReceiveStream2::OnCompleteFrame()
  frame_buffer_->InsertFrame(std::move(frame));

VideoReceiveStream2::StartNextDecode() {
  // NextFrame() obtains a frame and hands it to HandleEncodedFrame for decoding.
  frame_buffer_->NextFrame()
    --| frame = absl::WrapUnique(GetNextFrame());
        HandleEncodedFrame(std::move(frame));
}
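The insert/decode hand-off above is a producer-consumer pattern: the receive path inserts complete frames, the decode path repeatedly asks for the next one. A minimal sketch of that pattern (TinyFrameBuffer and EncodedFrameStub are invented; the real FrameBuffer::NextFrame is asynchronous and uses a wait_ms deadline derived from the jitter estimate instead of blocking forever):

    #include <condition_variable>
    #include <cstdint>
    #include <memory>
    #include <mutex>
    #include <queue>
    #include <utility>

    struct EncodedFrameStub {
      int64_t rtp_timestamp = 0;
    };

    class TinyFrameBuffer {
     public:
      // Called from the receive path, like frame_buffer_->InsertFrame(...).
      void InsertFrame(std::unique_ptr<EncodedFrameStub> frame) {
        std::lock_guard<std::mutex> lock(mutex_);
        frames_.push(std::move(frame));
        cv_.notify_one();
      }

      // Called from the decode path; blocks until a frame is available.
      std::unique_ptr<EncodedFrameStub> NextFrame() {
        std::unique_lock<std::mutex> lock(mutex_);
        cv_.wait(lock, [this] { return !frames_.empty(); });
        std::unique_ptr<EncodedFrameStub> frame = std::move(frames_.front());
        frames_.pop();
        return frame;
      }

     private:
      std::mutex mutex_;
      std::condition_variable cv_;
      std::queue<std::unique_ptr<EncodedFrameStub>> frames_;
    };

    // Decode loop in the spirit of StartNextDecode()/HandleEncodedFrame():
    //   while (running) { auto frame = buffer.NextFrame(); Decode(*frame); }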
5. Computing JitterDelay

JitterDelay is estimated (predicted) with a Kalman filter. It depends on, among other things, the maximum and the average frame size:

JitterDelay = theta[0] * (MaxFS - AvgFS) + [noiseStdDevs * sqrt(varNoise) - noiseStdDevOffset]

double VCMJitterEstimator::CalculateEstimate()  // its return value is the JitterDelay

VCMJitterEstimator::UpdateEstimate
  --| KalmanEstimateChannel(frameDelayMS, deltaFS);  // updates the parameters of the formula

// Returns the current jitter estimate in milliseconds.
int VCMJitterEstimator::GetJitterEstimate(double rttMultiplier, absl::optional<double> rttMultAddCapMs)

// Updates the estimates with the new measurements.
void VCMJitterEstimator::UpdateEstimate(int64_t frameDelayMS, uint32_t frameSizeBytes, bool incompleteFrame /* = false */)
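Read the formula as a size term (how much longer the largest frame takes to arrive than an average one) plus a noise term. A direct transcription with made-up input numbers, just to see the magnitudes:

    #include <cmath>
    #include <cstdio>

    double CalculateJitterDelayMs(double theta0,          // estimated ms per byte of the channel
                                  double max_frame_size,  // MaxFS, bytes
                                  double avg_frame_size,  // AvgFS, bytes
                                  double var_noise,       // variance of the delay noise
                                  double noise_std_devs,
                                  double noise_std_dev_offset) {
      return theta0 * (max_frame_size - avg_frame_size) +
             (noise_std_devs * std::sqrt(var_noise) - noise_std_dev_offset);
    }

    int main() {
      // Example: a 25 kB keyframe vs. a 5 kB average frame on a channel where an
      // extra byte costs about 0.001 ms; all numbers are invented for illustration.
      double jitter_ms = CalculateJitterDelayMs(0.001, 25000.0, 5000.0,
                                                /*var_noise=*/400.0,
                                                /*noise_std_devs=*/2.33,
                                                /*noise_std_dev_offset=*/30.0);
      std::printf("JitterDelay ~= %.1f ms\n", jitter_ms);  // 20 + (46.6 - 30) = 36.6
      return 0;
    }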
6. JitterDelay: (1) it controls how long we wait before taking a frame from the receive queue (what is the point of that?); (2) it is folded into the frame's render time

// inter-frame delay --> JitterDelay --> wait_ms: how long to wait before taking a frame from the receive queue
// wait_ms stays within [0, 3000]

int64_t VCMTiming::RenderTimeMsInternal(...) {
  int64_t estimated_complete_time_ms = ts_extrapolator_->ExtrapolateLocalTime(frame_timestamp);  // ?
  return estimated_complete_time_ms + actual_delay;
}

EncodedFrame* FrameBuffer::GetNextFrame() {
  // The inter-frame delay frame_delay is fed into the filter.
  if (inter_frame_delay_.CalculateDelay(first_frame->Timestamp(), &frame_delay, receive_time_ms)) {
    jitter_estimator_.UpdateEstimate(frame_delay, superframe_size);
  }
  ...
  timing_->SetJitterDelay(jitter_estimator_.GetJitterEstimate(rtt_mult, rtt_mult_add_cap_ms));
  ...
}

int64_t FrameBuffer::FindNextFrame(int64_t now_ms) {
  EncodedFrame* frame = frame_it->second.frame.get();
  if (frame->RenderTime() == -1) {
    frame->SetRenderTime(timing_->RenderTimeMs(frame->Timestamp(), now_ms));
  }
  // wait_ms = expected render time (frame->RenderTime()) - current time - required decode time - render delay (10 ms)
  wait_ms = timing_->MaxWaitingTime(frame->RenderTime(), now_ms);
  return wait_ms;
}
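Putting the comment in FindNextFrame into numbers: the waiting time is the expected render time minus the current time, minus the required decode time, minus the render delay, kept within [0, 3000]. A small sketch (a simplification of VCMTiming::MaxWaitingTime; the concrete numbers are invented):

    #include <algorithm>
    #include <cstdint>
    #include <cstdio>

    int64_t MaxWaitingTimeMs(int64_t render_time_ms, int64_t now_ms,
                             int64_t required_decode_time_ms, int64_t render_delay_ms) {
      int64_t wait_ms =
          render_time_ms - now_ms - required_decode_time_ms - render_delay_ms;
      // The note above says wait_ms stays within [0, 3000].
      return std::min<int64_t>(std::max<int64_t>(wait_ms, 0), 3000);
    }

    int main() {
      // A frame scheduled to render 120 ms from now, with ~15 ms decode time and
      // a 10 ms render delay, should be pulled from the buffer in ~95 ms.
      const int64_t now_ms = 1000000;
      std::printf("wait_ms = %lld\n",
                  static_cast<long long>(MaxWaitingTimeMs(
                      now_ms + 120, now_ms,
                      /*required_decode_time_ms=*/15, /*render_delay_ms=*/10)));
      return 0;
    }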
7. RenderTime — the frame's render time, from which the render waiting time is computed?

Some articles claim the jitter buffer size is adjusted dynamically. That is not what happens here: only the render time of each frame is adjusted.
void IncomingVideoStream::Dequeue() {
  absl::optional<VideoFrame> frame_to_render = render_buffers_.FrameToRender();
  if (frame_to_render)
    callback_->OnFrame(*frame_to_render);  // i.e. VideoReceiveStream2::OnFrame()
  // Wait for wait_time, then render the next frame.
  if (render_buffers_.HasPendingFrames()) {
    uint32_t wait_time = render_buffers_.TimeToNextFrameRelease();
    incoming_render_queue_.PostDelayedTask([this]() { Dequeue(); }, wait_time);
  }
}

uint32_t VideoRenderFrames::TimeToNextFrameRelease() {
  // render_time_ms() is the time that was set via frame->SetRenderTime().
  const int64_t time_to_release = incoming_frames_.front().render_time_ms() -
                                  render_delay_ms_ - rtc::TimeMillis();
  return time_to_release < 0 ? 0u : static_cast<uint32_t>(time_to_release);
}

// Where the render time is propagated:
frame_info.renderTimeMs = frame.RenderTimeMs();
decodedImage.set_timestamp_us(frameInfo->renderTimeMs * rtc::kNumMicrosecsPerMillisec);

8. JitterBuffer tuning parameters?
1 rtp_video_header.playout_delay — an RTP header extension field (see the sketch after this list).
2 What is the relationship between the jitter buffer and the NACK module? There is no direct relationship.
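For the playout_delay knob, a hedged sketch of how it is commonly used: the extension signals a minimum and maximum playout delay (to my understanding, 12 bits each at 10 ms granularity), and the receiver keeps its jitter-driven target delay inside that range. EncodePlayoutDelay and ApplyPlayoutDelay are illustrative helpers, not WebRTC's API:

    #include <algorithm>
    #include <array>
    #include <cstdint>

    struct PlayoutDelay {
      int min_ms = -1;  // -1 means "not specified"
      int max_ms = -1;
    };

    // Pack non-negative min/max values (in ms) into the 3-byte extension payload,
    // assuming 10 ms granularity and 12 bits per field.
    std::array<uint8_t, 3> EncodePlayoutDelay(const PlayoutDelay& d) {
      const uint32_t min_units = static_cast<uint32_t>(d.min_ms / 10) & 0xFFF;
      const uint32_t max_units = static_cast<uint32_t>(d.max_ms / 10) & 0xFFF;
      const uint32_t packed = (min_units << 12) | max_units;
      return {static_cast<uint8_t>(packed >> 16),
              static_cast<uint8_t>(packed >> 8),
              static_cast<uint8_t>(packed)};
    }

    // Clamp a jitter-based target delay into the signalled [min, max] range.
    int ApplyPlayoutDelay(int target_delay_ms, const PlayoutDelay& d) {
      if (d.min_ms >= 0) target_delay_ms = std::max(target_delay_ms, d.min_ms);
      if (d.max_ms >= 0) target_delay_ms = std::min(target_delay_ms, d.max_ms);
      return target_delay_ms;
    }

    // Example: {min_ms = 0, max_ms = 0} asks the receiver to render as soon as
    // possible, effectively removing the extra jitter-buffer delay.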