ffplay.c Study Notes (4): Audio Output and Audio Resampling
Contents

The audio output module / Opening the SDL audio device / Opening the device: audio_open / Callback logic: sdl_audio_callback / Reading data in the callback / Audio resampling / Resampling logic / Sample compensation
1. The Audio Output Module
ffplay's audio output is implemented through SDL. The main flow is: open the SDL audio device and set its parameters; start SDL audio playback; the SDL audio callback then pulls data, at which point we read frames from the FrameQueue and fill the buffer the callback provides.

Under SDL, audio output is passive: once audio is started, SDL tells the application how much data it needs through a callback. This creates a mismatch: after FFmpeg decodes an audio AVPacket into an AVFrame, the amount of audio data in the AVFrame does not necessarily equal what the SDL callback asks for (the callback requests a fixed amount each time). In particular, if variable-speed playback is implemented, the size of a speed-adjusted frame will almost certainly differ from what the callback needs. The fix is one extra level of buffering: after a Frame is read from the FrameQueue, its data is first cached in a buffer, and the SDL callback is then served from that buffer.

The model works as follows: SDL asks ffplay for audio data through sdl_audio_callback; ffplay takes data out of sampq via audio_decode_frame, places it into is->audio_buf, and hands it to SDL. On subsequent callbacks, audio_buf is consumed first; only when it runs short is audio_decode_frame called again to refill it.

Note that the name audio_decode_frame is misleading: the function does no decoding. It only handles the path from sampq to audio_buf, at most performing resampling (when the source and output parameters differ).
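The two-level buffering described above can be sketched in plain C. This is a toy model, not ffplay's code: `AudioBuf`, `mock_decode`, and `fill_cb` are illustrative stand-ins for `is->audio_buf`, `audio_decode_frame`, and `sdl_audio_callback`.

```c
#include <assert.h>
#include <string.h>

/* Toy version of ffplay's two-level buffering: the "decoder" produces
 * variable-sized chunks, while the callback must fill exactly `len` bytes. */
typedef struct {
    unsigned char buf[4096]; /* plays the role of is->audio_buf          */
    int buf_size;            /* is->audio_buf_size                       */
    int buf_index;           /* is->audio_buf_index (next byte to read)  */
    int produced;            /* number of refills (for testing only)     */
} AudioBuf;

/* Stand-in for audio_decode_frame(): returns a chunk whose size need
 * not match what the callback asks for. */
static int mock_decode(AudioBuf *a) {
    memset(a->buf, 0xAB, 480);   /* pretend-PCM */
    a->produced++;
    return 480;                  /* variable in real life */
}

/* Stand-in for sdl_audio_callback(): must always fill `len` bytes. */
static void fill_cb(AudioBuf *a, unsigned char *stream, int len) {
    while (len > 0) {
        if (a->buf_index >= a->buf_size) {      /* buffer drained: refill */
            a->buf_size = mock_decode(a);
            a->buf_index = 0;
        }
        int len1 = a->buf_size - a->buf_index;  /* bytes still available  */
        if (len1 > len)
            len1 = len;
        memcpy(stream, a->buf + a->buf_index, len1);
        len -= len1;
        stream += len1;
        a->buf_index += len1;
    }
}
```

The point of the sketch: the callback's request size (1000 bytes, say) and the chunk size (480) never need to agree; the index pair absorbs the difference across calls.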
1. Opening the SDL Audio Device
The SDL audio output parameters are fixed up front; when the audio parameters decoded from the stream differ from the preset output parameters, the data must be resampled to match the preset parameters before it can play correctly. The audio device is actually opened in the demux thread: it first opens the audio device (registering the audio callback for SDL's audio playback thread to invoke), then creates the audio decode thread. The call chain is:
main() -->
  stream_open() -->
    read_thread() -->
      stream_component_open() --> audio_open(is, channel_layout, nb_channels, sample_rate, &is->audio_tgt);
First, the code that opens the SDL audio output (in stream_component_open):
#else
    sample_rate    = avctx->sample_rate;
    nb_channels    = avctx->channels;
    channel_layout = avctx->channel_layout;
#endif

    if ((ret = audio_open(is, channel_layout, nb_channels, sample_rate, &is->audio_tgt)) < 0)
        goto fail;
    is->audio_hw_buf_size = ret;
    is->audio_src = is->audio_tgt;
    is->audio_buf_size  = 0;
    is->audio_buf_index = 0;
Different audio output devices support different parameters, and the track's parameters may not be supported by the device (which is when resampling becomes necessary); audio_tgt stores the output device's parameters. audio_open is an ffplay wrapper: it first tries to open the device with the requested parameters and, if that fails, automatically searches for the best alternative parameters and retries; we won't analyze it line by line here. audio_src starts out equal to audio_tgt: if the device supports the track's parameters, audio_src stays identical to audio_tgt; otherwise later code sets it to the track's parameters and the resampling machinery kicks in. Finally, the audio_buf-related fields are initialized. The relevant variables:

- audio_buf: audio data (PCM) taken from the AVFrame to be output, resampled if necessary.
- audio_buf_size: total size of audio_buf.
- audio_buf_index: position of the next readable byte in audio_buf.
- audio_write_buf_size: remaining length of audio_buf, i.e. audio_buf_size - audio_buf_index.

Inside audio_open, SDL_OpenAudioDevice registers sdl_audio_callback as the audio output callback, so the main audio output logic lives in sdl_audio_callback.
2. Opening the Audio Device: audio_open
audio_open() fills in the desired audio parameters, opens the audio device, and stores the device's actual parameters in is->audio_tgt. The audio playback path later uses these parameters to resample the raw audio into a format the device supports.
static int audio_open(void *opaque, int64_t wanted_channel_layout, int wanted_nb_channels,
                      int wanted_sample_rate, struct AudioParams *audio_hw_params)
{
    SDL_AudioSpec wanted_spec, spec;
    const char *env;
    static const int next_nb_channels[]  = {0, 0, 1, 6, 2, 6, 4, 6};
    static const int next_sample_rates[] = {0, 44100, 48000, 96000, 192000};
    int next_sample_rate_idx = FF_ARRAY_ELEMS(next_sample_rates) - 1;

    env = SDL_getenv("SDL_AUDIO_CHANNELS");
    if (env) {  /* allow the channel count to be overridden via environment */
        wanted_nb_channels = atoi(env);
        wanted_channel_layout = av_get_default_channel_layout(wanted_nb_channels);
    }
    if (!wanted_channel_layout || wanted_nb_channels != av_get_channel_layout_nb_channels(wanted_channel_layout)) {
        wanted_channel_layout = av_get_default_channel_layout(wanted_nb_channels);
        wanted_channel_layout &= ~AV_CH_LAYOUT_STEREO_DOWNMIX;
    }
    wanted_nb_channels   = av_get_channel_layout_nb_channels(wanted_channel_layout);
    wanted_spec.channels = wanted_nb_channels;
    wanted_spec.freq     = wanted_sample_rate;
    if (wanted_spec.freq <= 0 || wanted_spec.channels <= 0) {
        av_log(NULL, AV_LOG_ERROR, "Invalid sample rate or channel count!\n");
        return -1;
    }
    /* position the fallback index at the first standard rate below the wanted one */
    while (next_sample_rate_idx && next_sample_rates[next_sample_rate_idx] >= wanted_spec.freq)
        next_sample_rate_idx--;
    wanted_spec.format   = AUDIO_S16SYS;
    wanted_spec.silence  = 0;
    wanted_spec.samples  = FFMAX(SDL_AUDIO_MIN_BUFFER_SIZE,
                                 2 << av_log2(wanted_spec.freq / SDL_AUDIO_MAX_CALLBACKS_PER_SEC));
    wanted_spec.callback = sdl_audio_callback;
    wanted_spec.userdata = opaque;
    while (!(audio_dev = SDL_OpenAudioDevice(NULL, 0, &wanted_spec, &spec,
                                             SDL_AUDIO_ALLOW_FREQUENCY_CHANGE | SDL_AUDIO_ALLOW_CHANNELS_CHANGE))) {
        av_log(NULL, AV_LOG_WARNING, "SDL_OpenAudio (%d channels, %d Hz): %s\n",
               wanted_spec.channels, wanted_spec.freq, SDL_GetError());
        /* try a different channel count first, then fall back to a lower rate */
        wanted_spec.channels = next_nb_channels[FFMIN(7, wanted_spec.channels)];
        if (!wanted_spec.channels) {
            wanted_spec.freq = next_sample_rates[next_sample_rate_idx--];
            wanted_spec.channels = wanted_nb_channels;
            if (!wanted_spec.freq) {
                av_log(NULL, AV_LOG_ERROR, "No more combinations to try, audio open failed\n");
                return -1;
            }
        }
        wanted_channel_layout = av_get_default_channel_layout(wanted_spec.channels);
    }
    if (spec.format != AUDIO_S16SYS) {
        av_log(NULL, AV_LOG_ERROR, "SDL advised audio format %d is not supported!\n", spec.format);
        return -1;
    }
    if (spec.channels != wanted_spec.channels) {
        wanted_channel_layout = av_get_default_channel_layout(spec.channels);
        if (!wanted_channel_layout) {
            av_log(NULL, AV_LOG_ERROR, "SDL advised channel count %d is not supported!\n", spec.channels);
            return -1;
        }
    }

    /* record the parameters the device actually accepted */
    audio_hw_params->fmt            = AV_SAMPLE_FMT_S16;
    audio_hw_params->freq           = spec.freq;
    audio_hw_params->channel_layout = wanted_channel_layout;
    audio_hw_params->channels       = spec.channels;
    audio_hw_params->frame_size     = av_samples_get_buffer_size(NULL, audio_hw_params->channels, 1,
                                                                 audio_hw_params->fmt, 1);
    audio_hw_params->bytes_per_sec  = av_samples_get_buffer_size(NULL, audio_hw_params->channels,
                                                                 audio_hw_params->freq, audio_hw_params->fmt, 1);
    if (audio_hw_params->bytes_per_sec <= 0 || audio_hw_params->frame_size <= 0) {
        av_log(NULL, AV_LOG_ERROR, "av_samples_get_buffer_size failed\n");
        return -1;
    }
    return spec.size;
}
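The fallback search in audio_open's retry loop can be isolated as a sketch. Here `device_accepts` is a mock predicate standing in for `SDL_OpenAudioDevice` (it is not an SDL call), but the table-driven stepping mirrors ffplay's: reduce the channel count first via `next_nb_channels[]`, and only when channels are exhausted drop to the next lower standard sample rate.

```c
#include <assert.h>

static const int next_nb_channels[]  = {0, 0, 1, 6, 2, 6, 4, 6};
static const int next_sample_rates[] = {0, 44100, 48000, 96000, 192000};

/* Mock device: pretend it only accepts stereo at 44.1 kHz. */
static int device_accepts(int channels, int freq) {
    return channels == 2 && freq == 44100;
}

/* Returns 0 on success (writing the accepted combination to *out_ch,
 * *out_freq), or -1 when no combination works. */
static int open_with_fallback(int channels, int freq, int *out_ch, int *out_freq) {
    int rate_idx = (int)(sizeof(next_sample_rates) / sizeof(next_sample_rates[0])) - 1;
    int wanted_channels = channels;
    /* position the index at the first standard rate below the wanted one */
    while (rate_idx && next_sample_rates[rate_idx] >= freq)
        rate_idx--;
    while (!device_accepts(channels, freq)) {
        channels = next_nb_channels[channels < 7 ? channels : 7];
        if (!channels) {                  /* channel options exhausted */
            freq = next_sample_rates[rate_idx--];
            channels = wanted_channels;   /* restart the channel search */
            if (!freq)
                return -1;                /* nothing left to try */
        }
    }
    *out_ch = channels;
    *out_freq = freq;
    return 0;
}
```

For a request of 6 channels at 96 kHz against this mock device, the search walks 6 → 4 → 2 → 1 channels at 96 kHz, then repeats at 48 kHz, then lands on stereo/44.1 kHz.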
3. Callback Logic: sdl_audio_callback
Now look at sdl_audio_callback:
static void sdl_audio_callback(void *opaque, Uint8 *stream, int len)
{
    VideoState *is = opaque;
    int audio_size, len1;

    audio_callback_time = av_gettime_relative();

    while (len > 0) {
        if (is->audio_buf_index >= is->audio_buf_size) {
            audio_size = audio_decode_frame(is);
            if (audio_size < 0) {
                /* if error, just output silence */
                is->audio_buf = NULL;
                is->audio_buf_size = SDL_AUDIO_MIN_BUFFER_SIZE / is->audio_tgt.frame_size * is->audio_tgt.frame_size;
            } else {
                if (is->show_mode != SHOW_MODE_VIDEO)
                    update_sample_display(is, (int16_t *)is->audio_buf, audio_size);
                is->audio_buf_size = audio_size;
            }
            is->audio_buf_index = 0;
        }
        len1 = is->audio_buf_size - is->audio_buf_index;
        if (len1 > len)
            len1 = len;
        if (!is->muted && is->audio_buf && is->audio_volume == SDL_MIX_MAXVOLUME)
            memcpy(stream, (uint8_t *)is->audio_buf + is->audio_buf_index, len1);
        else {
            memset(stream, 0, len1);
            if (!is->muted && is->audio_buf)
                SDL_MixAudioFormat(stream, (uint8_t *)is->audio_buf + is->audio_buf_index,
                                   AUDIO_S16SYS, len1, is->audio_volume);
        }
        len -= len1;
        stream += len1;
        is->audio_buf_index += len1;
    }
    is->audio_write_buf_size = is->audio_buf_size - is->audio_buf_index;
    /* Let's assume the audio driver that is used by SDL has two periods. */
    if (!isnan(is->audio_clock)) {
        set_clock_at(&is->audclk,
                     is->audio_clock - (double)(2 * is->audio_hw_buf_size + is->audio_write_buf_size) / is->audio_tgt.bytes_per_sec,
                     is->audio_clock_serial,
                     audio_callback_time / 1000000.0);
        sync_clock_to_slave(&is->extclk, &is->audclk);
    }
}
sdl_audio_callback is a typical buffered-output loop; the code and comments should make it clear. Three details deserve attention:

- Outputting audio_buf to stream: at maximum volume, a plain memcpy into stream suffices. Otherwise, SDL_MixAudioFormat is used to apply volume adjustment and mixing.
- When audio_buf is exhausted, audio_decode_frame is called to refill it; that function is analyzed next.
- When set_clock_at updates audclk, audio_clock is the presentation end time of the current audio_buf (pts + duration). Since the audio driver itself holds a buffer or two, typically two periods used alternately, the term 2 * is->audio_hw_buf_size appears; why audio_write_buf_size is also subtracted takes a little working through.

is->audio_clock is assigned in audio_decode_frame: is->audio_clock = af->pts + (double)af->frame->nb_samples / af->frame->sample_rate; so it is the timestamp of the end of audio_buf, not of its start. When audio_buf still has unconsumed data, the pts of the data actually handed out so far is is->audio_clock - (double)(is->audio_write_buf_size) / is->audio_tgt.bytes_per_sec. On top of that, the 2 * audio_hw_buf_size bytes sitting in the hardware buffers have not actually been played yet either, which gives is->audio_clock - (double)(2 * is->audio_hw_buf_size + is->audio_write_buf_size) / is->audio_tgt.bytes_per_sec. Finally, because playback keeps running while the callback fills its buffer, the reference time used is audio_callback_time, captured at the start of the callback. The final call is:
set_clock_at(&is->audclk,
             is->audio_clock - (double)(2 * is->audio_hw_buf_size + is->audio_write_buf_size) / is->audio_tgt.bytes_per_sec,
             is->audio_clock_serial,
             audio_callback_time / 1000000.0);
4. Reading Data in the Callback
Next, look at audio_decode_frame:
static int audio_decode_frame(VideoState *is)
{
    int data_size, resampled_data_size;
    int64_t dec_channel_layout;
    av_unused double audio_clock0;
    int wanted_nb_samples;
    Frame *af;

    if (is->paused)
        return -1;

    do {
#if defined(_WIN32)
        while (frame_queue_nb_remaining(&is->sampq) == 0) {
            if ((av_gettime_relative() - audio_callback_time) > 1000000LL * is->audio_hw_buf_size / is->audio_tgt.bytes_per_sec / 2)
                return -1;
            av_usleep(1000);
        }
#endif
        if (!(af = frame_queue_peek_readable(&is->sampq)))
            return -1;
        frame_queue_next(&is->sampq);
    } while (af->serial != is->audioq.serial);  /* drop frames from before a seek */

    data_size = av_samples_get_buffer_size(NULL, af->frame->channels,
                                           af->frame->nb_samples,
                                           af->frame->format, 1);

    dec_channel_layout =
        (af->frame->channel_layout && af->frame->channels == av_get_channel_layout_nb_channels(af->frame->channel_layout)) ?
        af->frame->channel_layout : av_get_default_channel_layout(af->frame->channels);
    wanted_nb_samples = synchronize_audio(is, af->frame->nb_samples);

    if (af->frame->format        != is->audio_src.fmt            ||
        dec_channel_layout       != is->audio_src.channel_layout ||
        af->frame->sample_rate   != is->audio_src.freq           ||
        (wanted_nb_samples       != af->frame->nb_samples && !is->swr_ctx)) {
        ...
    }

    if (is->swr_ctx) {
        const uint8_t **in = (const uint8_t **)af->frame->extended_data;
        uint8_t **out = &is->audio_buf1;
        int out_count = (int64_t)wanted_nb_samples * is->audio_tgt.freq / af->frame->sample_rate + 256;
        int out_size  = av_samples_get_buffer_size(NULL, is->audio_tgt.channels, out_count, is->audio_tgt.fmt, 0);
        int len2;
        if (out_size < 0) {
            av_log(NULL, AV_LOG_ERROR, "av_samples_get_buffer_size() failed\n");
            return -1;
        }
        if (wanted_nb_samples != af->frame->nb_samples) {
            int sample_delta = (wanted_nb_samples - af->frame->nb_samples) * is->audio_tgt.freq / af->frame->sample_rate;
            int compensation_distance = wanted_nb_samples * is->audio_tgt.freq / af->frame->sample_rate;
            if (swr_set_compensation(is->swr_ctx, sample_delta, compensation_distance) < 0) {
                av_log(NULL, AV_LOG_ERROR, "swr_set_compensation() failed\n");
                return -1;
            }
        }
        av_fast_malloc(&is->audio_buf1, &is->audio_buf1_size, out_size);
        if (!is->audio_buf1)
            return AVERROR(ENOMEM);
        len2 = swr_convert(is->swr_ctx, out, out_count, in, af->frame->nb_samples);
        if (len2 < 0) {
            av_log(NULL, AV_LOG_ERROR, "swr_convert() failed\n");
            return -1;
        }
        if (len2 == out_count) {
            av_log(NULL, AV_LOG_WARNING, "audio buffer is probably too small\n");
            if (swr_init(is->swr_ctx) < 0)
                swr_free(&is->swr_ctx);
        }
        is->audio_buf = is->audio_buf1;
        resampled_data_size = len2 * is->audio_tgt.channels * av_get_bytes_per_sample(is->audio_tgt.fmt);
    } else {
        is->audio_buf = af->frame->data[0];
        resampled_data_size = data_size;
    }

    audio_clock0 = is->audio_clock;
    /* update the audio clock with the pts */
    if (!isnan(af->pts))
        is->audio_clock = af->pts + (double)af->frame->nb_samples / af->frame->sample_rate;
    else
        is->audio_clock = NAN;
    is->audio_clock_serial = af->serial;
#ifdef DEBUG
    {
        static double last_clock;
        printf("audio: delay=%0.3f clock=%0.3f clock0=%0.3f\n",
               is->audio_clock - last_clock,
               is->audio_clock, audio_clock0);
        last_clock = is->audio_clock;
    }
#endif
    return resampled_data_size;
}
audio_decode_frame contains no real decode code; at most it resamples. The main steps:

1. Take a frame from sampq, dropping stale frames when necessary. After a seek, the serial becomes discontinuous, and frames carrying the old serial must be dropped.
2. Compute the frame's size in bytes, which av_samples_get_buffer_size does conveniently.
3. Obtain the frame's data. If the frame format differs from the output device's, resample; if the formats match, just pass the data pointer through. Either way, audio_buf must end up holding audio in the device's format.
4. Update audio_clock and audio_clock_serial, which are used to set audclk.
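Step 1's serial-based dropping can be sketched in isolation. `MockFrame` and `next_fresh_frame` here are simplified stand-ins for ffplay's Frame queue and the do/while loop in the listing above, just to show the consume-until-serial-matches behavior.

```c
#include <assert.h>

typedef struct { int serial; double pts; } MockFrame;

/* Mimics audio_decode_frame()'s do/while: keep consuming frames until one
 * matches the queue's current serial; returns its index, or -1 if the
 * queue is exhausted. Consuming = frame_queue_peek_readable + frame_queue_next. */
static int next_fresh_frame(const MockFrame *q, int count, int queue_serial) {
    int i = 0;
    int af;
    do {
        if (i >= count)
            return -1;       /* queue exhausted without a fresh frame */
        af = i++;
    } while (q[af].serial != queue_serial);
    return af;
}
```

After a seek bumps the queue serial from 1 to 2, frames still carrying serial 1 are skipped and the first serial-2 frame is returned.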
2. Audio Resampling
The format of an audio frame decoded by FFmpeg may not be supported by SDL. In that case the frame must be resampled, i.e. converted into an audio format SDL does support; otherwise it cannot play correctly. Resampling involves two steps:

- Preparation when opening the audio device: determine an SDL-supported audio format, which becomes the target format for later resampling (see the audio output module above).
- In the audio playback path, after taking an audio frame, resample it if needed (when the frame format does not match the SDL-supported format); otherwise output it directly.
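To see what rate conversion itself means, here is a toy mono nearest-neighbor converter. This is emphatically not ffplay's code: ffplay delegates resampling to libswresample's swr_convert(), which applies proper filtering; the sketch only illustrates the index mapping between the two sample rates.

```c
#include <assert.h>
#include <stdint.h>

/* Toy mono rate converter (nearest-neighbor), for illustration only.
 * Returns the number of output samples written; `out` must be large
 * enough for in_count * out_rate / in_rate samples. */
static int resample_nearest(const int16_t *in, int in_count, int in_rate,
                            int16_t *out, int out_rate) {
    int out_count = (int)((int64_t)in_count * out_rate / in_rate);
    for (int i = 0; i < out_count; i++) {
        /* map each output index back to the nearest input index */
        int64_t src = (int64_t)i * in_rate / out_rate;
        out[i] = in[src];
    }
    return out_count;
}
```

Doubling the rate (24 kHz → 48 kHz) simply repeats each input sample twice here; a real resampler interpolates and band-limits instead, which is why ffplay uses libswresample rather than anything this naive.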
1. Resampling Logic
Audio resampling is implemented in audio_decode_frame(): it takes a frame from the audio frame queue, resamples it into the specified format, and outputs it (decoding does not happen in this function). The details are fiddly; read the code alongside the comments:
static int audio_decode_frame(VideoState *is)
{
    int data_size, resampled_data_size;
    int64_t dec_channel_layout;
    av_unused double audio_clock0;
    int wanted_nb_samples;
    Frame *af;

    if (is->paused)
        return -1;

    do {
#if defined(_WIN32)
        while (frame_queue_nb_remaining(&is->sampq) == 0) {
            if ((av_gettime_relative() - audio_callback_time) > 1000000LL * is->audio_hw_buf_size / is->audio_tgt.bytes_per_sec / 2)
                return -1;
            av_usleep(1000);
        }
#endif
        if (!(af = frame_queue_peek_readable(&is->sampq)))
            return -1;
        frame_queue_next(&is->sampq);
    } while (af->serial != is->audioq.serial);

    data_size = av_samples_get_buffer_size(NULL, af->frame->channels,
                                           af->frame->nb_samples,
                                           af->frame->format, 1);

    dec_channel_layout =
        (af->frame->channel_layout && af->frame->channels == av_get_channel_layout_nb_channels(af->frame->channel_layout)) ?
        af->frame->channel_layout : av_get_default_channel_layout(af->frame->channels);
    wanted_nb_samples = synchronize_audio(is, af->frame->nb_samples);

    /* (re)create the resampler whenever the source parameters change,
     * or when sample compensation is needed and no swr_ctx exists yet */
    if (af->frame->format        != is->audio_src.fmt            ||
        dec_channel_layout       != is->audio_src.channel_layout ||
        af->frame->sample_rate   != is->audio_src.freq           ||
        (wanted_nb_samples       != af->frame->nb_samples && !is->swr_ctx)) {
        swr_free(&is->swr_ctx);
        is->swr_ctx = swr_alloc_set_opts(NULL,
                                         is->audio_tgt.channel_layout, is->audio_tgt.fmt, is->audio_tgt.freq,
                                         dec_channel_layout,           af->frame->format, af->frame->sample_rate,
                                         0, NULL);
        if (!is->swr_ctx || swr_init(is->swr_ctx) < 0) {
            av_log(NULL, AV_LOG_ERROR,
                   "Cannot create sample rate converter for conversion of %d Hz %s %d channels to %d Hz %s %d channels!\n",
                   af->frame->sample_rate, av_get_sample_fmt_name(af->frame->format), af->frame->channels,
                   is->audio_tgt.freq, av_get_sample_fmt_name(is->audio_tgt.fmt), is->audio_tgt.channels);
            swr_free(&is->swr_ctx);
            return -1;
        }
        /* remember the source parameters the resampler was configured for */
        is->audio_src.channel_layout = dec_channel_layout;
        is->audio_src.channels       = af->frame->channels;
        is->audio_src.freq           = af->frame->sample_rate;
        is->audio_src.fmt            = af->frame->format;
    }

    if (is->swr_ctx) {
        const uint8_t **in = (const uint8_t **)af->frame->extended_data;
        uint8_t **out = &is->audio_buf1;
        /* +256 samples of headroom for the resampler's internal delay */
        int out_count = (int64_t)wanted_nb_samples * is->audio_tgt.freq / af->frame->sample_rate + 256;
        int out_size  = av_samples_get_buffer_size(NULL, is->audio_tgt.channels, out_count, is->audio_tgt.fmt, 0);
        int len2;
        if (out_size < 0) {
            av_log(NULL, AV_LOG_ERROR, "av_samples_get_buffer_size() failed\n");
            return -1;
        }
        if (wanted_nb_samples != af->frame->nb_samples) {
            int sample_delta = (wanted_nb_samples - af->frame->nb_samples) * is->audio_tgt.freq / af->frame->sample_rate;
            int compensation_distance = wanted_nb_samples * is->audio_tgt.freq / af->frame->sample_rate;
            if (swr_set_compensation(is->swr_ctx, sample_delta, compensation_distance) < 0) {
                av_log(NULL, AV_LOG_ERROR, "swr_set_compensation() failed\n");
                return -1;
            }
        }
        av_fast_malloc(&is->audio_buf1, &is->audio_buf1_size, out_size);
        if (!is->audio_buf1)
            return AVERROR(ENOMEM);
        len2 = swr_convert(is->swr_ctx, out, out_count, in, af->frame->nb_samples);
        if (len2 < 0) {
            av_log(NULL, AV_LOG_ERROR, "swr_convert() failed\n");
            return -1;
        }
        if (len2 == out_count) {
            av_log(NULL, AV_LOG_WARNING, "audio buffer is probably too small\n");
            if (swr_init(is->swr_ctx) < 0)
                swr_free(&is->swr_ctx);
        }
        is->audio_buf = is->audio_buf1;
        resampled_data_size = len2 * is->audio_tgt.channels * av_get_bytes_per_sample(is->audio_tgt.fmt);
    } else {
        /* formats already match: pass the frame's data through unchanged */
        is->audio_buf = af->frame->data[0];
        resampled_data_size = data_size;
    }

    audio_clock0 = is->audio_clock;
    /* update the audio clock with the pts */
    if (!isnan(af->pts))
        is->audio_clock = af->pts + (double)af->frame->nb_samples / af->frame->sample_rate;
    else
        is->audio_clock = NAN;
    is->audio_clock_serial = af->serial;
#ifdef DEBUG
    {
        static double last_clock;
        printf("audio: delay=%0.3f clock=%0.3f clock0=%0.3f\n",
               is->audio_clock - last_clock,
               is->audio_clock, audio_clock0);
        last_clock = is->audio_clock;
    }
#endif
    return resampled_data_size;
}
2. Sample Compensation
The prototype of swr_set_compensation:
int swr_set_compensation(struct SwrContext *s, int sample_delta, int compensation_distance);
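ffplay computes both arguments in output-rate samples: sample_delta is how many samples to stretch or squeeze in (the sync correction, converted from the input rate to the output rate), and compensation_distance is the span of output samples over which to spread that correction. A sketch of that arithmetic (the helper names are mine, not ffplay's):

```c
#include <assert.h>
#include <stdint.h>

/* (wanted_nb_samples - frame_nb_samples) converted to output-rate samples:
 * the amount of correction synchronize_audio asked for. */
static int out_sample_delta(int wanted_nb_samples, int frame_nb_samples,
                            int out_freq, int in_rate) {
    return (int)((int64_t)(wanted_nb_samples - frame_nb_samples) * out_freq / in_rate);
}

/* wanted_nb_samples converted to output-rate samples: the window over
 * which the correction is distributed. */
static int out_compensation_distance(int wanted_nb_samples, int out_freq, int in_rate) {
    return (int)((int64_t)wanted_nb_samples * out_freq / in_rate);
}
```

For example, if a 1024-sample frame at 44.1 kHz must be stretched to 1034 samples for sync, and the output rate is 48 kHz, the resampler is told to insert 10 extra output samples spread across a 1125-sample distance, keeping the correction inaudible.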