FFmpeg Audio/Video Player Series (Part 3: Implementing Playback Progress Control with seek)
Contents
- Implementing playback progress control
- av_seek_frame
- Key points of the seek operation
- Seeking by the video stream
- Seeking by the audio stream
- Code implementation
The previous article implemented basic audio/video playback synchronization, plus simple key controls for pause, resume, and quit. This one adds playback progress control: fast-forward, rewind, and restarting playback. I don't plan to build a GUI with SDL; everything here is driven by key presses. The GUI part will be done in Qt later, since SDL's GUI facilities are not the focus of this series.
Implementing playback progress control
Progress control means random access into the media file, which requires the av_seek_frame or avformat_seek_file function.
av_seek_frame
- Prototype: int av_seek_frame(AVFormatContext *s, int stream_index, int64_t timestamp, int flags)
- AVFormatContext *s: the format context of the opened media
- int stream_index: index of the stream (video or audio) used as the reference for the seek
- int64_t timestamp: target timestamp; if a stream index is given, it is in units of that stream's AVStream.time_base; if no stream index is given, it is in units of AV_TIME_BASE
- int flags: flags controlling the seek behavior
```c
#define AVSEEK_FLAG_BACKWARD 1 ///< seek backward
#define AVSEEK_FLAG_BYTE     2 ///< seeking based on position in bytes
#define AVSEEK_FLAG_ANY      4 ///< seek to any frame, even non-keyframes
#define AVSEEK_FLAG_FRAME    8 ///< seeking based on frame number
```
When flags contains AVSEEK_FLAG_BYTE, the timestamp must be a byte offset rather than a time value.
When flags contains AVSEEK_FLAG_FRAME, the call seeks to the keyframe nearest to timestamp; for a video stream that is an I-frame (for audio streams the behavior is less clearly defined).
Key points of the seek operation
- Seek reference point: the current playback position; while playing, the DTS or PTS of the audio and video streams must be recorded in real time
- Seek time unit: determined by the stream index used for the seek, namely that stream's time_base
- Seek step size: how far forward or backward to jump; combined with the current position it yields the target timestamp
- Seek carrier stream: seek by either the audio or the video stream, not both; a single seek repositions the whole file
- Post-seek flush: before resuming normal playback, flush the codec's internal buffers and also clear any audio/video packets cached in your own queues
Seeking by the video stream
First, the video control structure:
```c
typedef struct __VideoCtrlStruct {
    AVFormatContext *pFormatCtx;
    AVStream *pStream;
    AVCodec *pCodec;
    AVCodecContext *pCodecCtx;
    SwsContext *pConvertCtx;
    AVFrame *pVideoFrame, *pFrameYUV;
    unsigned char *pVideoOutBuffer;
    int VideoIndex;
    int VideoCnt;
    int RefreshTime;
    int screen_w, screen_h;
    SDL_Window *screen;
    SDL_Renderer *sdlRenderer;
    SDL_Texture *sdlTexture;
    SDL_Rect sdlRect;
    SDL_Thread *video_tid;
    sem_t frame_put;
    sem_t video_refresh;
    PacketArrayStruct Video;
} VideoCtrlStruct;
```
Following the key points above, the seek step size is up to us; here it is fixed at 3 seconds.
- CurVideoDts: records the current DTS of the video stream
- Seek time unit: VideoCtrl.pStream->time_base is the video stream's time unit
- Forward: int64_t DstVideoDts = CurVideoDts + (int64_t)(3 / av_q2d(VideoCtrl.pStream->time_base));
- Backward: int64_t DstVideoDts = CurVideoDts - (int64_t)(3 / av_q2d(VideoCtrl.pStream->time_base));
- pStream->time_base is a rational number, and the DTS/PTS values in the stream are expressed in time_base units, so the seek step must be converted into time_base units as well
- Seek call: ret = av_seek_frame(pFormatCtx, VideoIndex, DstVideoDts, AVSEEK_FLAG_FRAME); for a backward seek, add AVSEEK_FLAG_BACKWARD to the flags
- Buffer flush: avcodec_flush_buffers(VideoCtrl.pCodecCtx);
- Since time_base.num is almost always 1, you can avoid floating-point multiplication and division by multiplying by the time_base denominator directly; see the code example below
Seeking by the audio stream
The audio control structure:
```c
typedef struct __AudioCtrlStruct {
    AVFormatContext *pFormatCtx;
    AVStream *pStream;
    AVCodec *pCodec;
    AVCodecContext *pCodecCtx;
    SwrContext *pConvertCtx;
    Uint8 *audio_chunk;
    Sint32 audio_len;
    Uint8 *audio_pos;
    int AudioIndex;
    int AudioCnt;
    uint64_t AudioOutChannelLayout;
    int out_nb_samples; // nb_samples: AAC-1024 MP3-1152
    AVSampleFormat out_sample_fmt;
    int out_sample_rate;
    int out_channels;
    int out_buffer_size;
    unsigned char *pAudioOutBuffer;
    sem_t frame_put;
    sem_t frame_get;
    PacketArrayStruct Audio;
} AudioCtrlStruct;
```
Following the key points above, the seek step size is again fixed at 3 seconds.
- CurAudioDts: records the current DTS of the audio stream
- Seek time unit: AudioCtrl.pStream->time_base is the audio stream's time unit
- Forward: int64_t DstAudioDts = CurAudioDts + (int64_t)(3 / av_q2d(AudioCtrl.pStream->time_base));
- Backward: int64_t DstAudioDts = CurAudioDts - (int64_t)(3 / av_q2d(AudioCtrl.pStream->time_base));
- pStream->time_base is a rational number, and the DTS/PTS values in the stream are expressed in time_base units, so the seek step must be converted into time_base units as well
- Seek call: ret = av_seek_frame(pFormatCtx, AudioIndex, DstAudioDts, AVSEEK_FLAG_FRAME); for a backward seek, add AVSEEK_FLAG_BACKWARD to the flags
- Buffer flush: avcodec_flush_buffers(AudioCtrl.pCodecCtx);
Code implementation
The code below uses the left and right arrow keys for backward and forward seeks, restarts from the beginning when the file finishes playing, and seeks by the audio stream.
```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#define __STDC_CONSTANT_MACROS

#ifdef __cplusplus
extern "C" {
#endif
#include <libavutil/time.h>
#include <libavutil/imgutils.h>
#include <libavutil/mathematics.h>
#include <libavcodec/avcodec.h>
#include <libavformat/avformat.h>
#include <libavdevice/avdevice.h>
#include <libswscale/swscale.h>
#include <libswresample/swresample.h>
#include <SDL2/SDL.h>
#include <errno.h>
#include <unistd.h>
#include <assert.h>
#include <pthread.h>
#include <semaphore.h>
#ifdef __cplusplus
};
#endif

#define MAX_AUDIO_FRAME_SIZE 192000 // 1 second of 48khz 32bit audio
#define PACKET_ARRAY_SIZE (60)

typedef struct __PacketStruct {
    AVPacket Packet;
    int64_t dts;
    int64_t pts;
    int state;
} PacketStruct;

typedef struct {
    unsigned int rIndex;
    unsigned int wIndex;
    PacketStruct PacketArray[PACKET_ARRAY_SIZE];
} PacketArrayStruct;

typedef struct __AudioCtrlStruct {
    AVFormatContext *pFormatCtx;
    AVStream *pStream;
    AVCodec *pCodec;
    AVCodecContext *pCodecCtx;
    SwrContext *pConvertCtx;
    Uint8 *audio_chunk;
    Sint32 audio_len;
    Uint8 *audio_pos;
    int AudioIndex;
    int AudioCnt;
    uint64_t AudioOutChannelLayout;
    int out_nb_samples; // nb_samples: AAC-1024 MP3-1152
    AVSampleFormat out_sample_fmt;
    int out_sample_rate;
    int out_channels;
    int out_buffer_size;
    unsigned char *pAudioOutBuffer;
    sem_t frame_put;
    sem_t frame_get;
    PacketArrayStruct Audio;
} AudioCtrlStruct;

typedef struct __VideoCtrlStruct {
    AVFormatContext *pFormatCtx;
    AVStream *pStream;
    AVCodec *pCodec;
    AVCodecContext *pCodecCtx;
    SwsContext *pConvertCtx;
    AVFrame *pVideoFrame, *pFrameYUV;
    unsigned char *pVideoOutBuffer;
    int VideoIndex;
    int VideoCnt;
    int RefreshTime;
    int screen_w, screen_h;
    SDL_Window *screen;
    SDL_Renderer *sdlRenderer;
    SDL_Texture *sdlTexture;
    SDL_Rect sdlRect;
    SDL_Thread *video_tid;
    sem_t frame_put;
    sem_t video_refresh;
    PacketArrayStruct Video;
} VideoCtrlStruct;

// Refresh Event
#define SFM_REFRESH_VIDEO_EVENT (SDL_USEREVENT + 1)
#define SFM_REFRESH_AUDIO_EVENT (SDL_USEREVENT + 2)
#define SFM_BREAK_EVENT         (SDL_USEREVENT + 3)

int thread_exit = 0;
int thread_pause = 0;
int audio_pause = 0;    // whether audio playback is paused: 1 = paused, 0 = playing
int video_pause = 0;    // whether video playback is paused: 1 = paused, 0 = playing
SDL_Keycode CurKeyCode; // records the seek key: right arrow = forward, left arrow = backward
int CurKeyProcess;      // whether the seek key has been handled: 0 = pending, 1 = handled
int64_t CurVideoDts;    // DTS of the video packet currently playing
int64_t CurVideoPts;    // PTS of the video packet currently playing
int64_t CurAudioDts;    // DTS of the audio packet currently playing
int64_t CurAudioPts;    // PTS of the audio packet currently playing
int64_t DstAudioDts;    // target audio DTS computed for a seek
int64_t DstAudioPts;    // target audio PTS computed for a seek
int64_t DstVideoDts;    // target video DTS computed for a seek
int64_t DstVideoPts;    // target video PTS computed for a seek

VideoCtrlStruct VideoCtrl;
AudioCtrlStruct AudioCtrl;

// video time_base.num:1, time_base.den:16, avg_frame_rate.num:8, avg_frame_rate.den:1
// audio time_base.num:1, time_base.den:48000, avg_frame_rate.num:0, avg_frame_rate.den:0

int IsPacketArrayFull(PacketArrayStruct *p)
{
    int i = p->wIndex % PACKET_ARRAY_SIZE;
    if (p->PacketArray[i].state != 0) return 1;
    return 0;
}

int IsPacketArrayEmpty(PacketArrayStruct *p)
{
    int i = p->rIndex % PACKET_ARRAY_SIZE;
    if (p->PacketArray[i].state == 0) return 1;
    return 0;
}

int PacketArrayClear(PacketArrayStruct *p)
{
    int i;
    for (i = 0; i < PACKET_ARRAY_SIZE; i++) {
        if (p->PacketArray[i].state != 0) {
            av_packet_unref(&p->PacketArray[i].Packet);
            p->PacketArray[i].state = 0;
        }
    }
    p->rIndex = 0;
    p->wIndex = 0;
    return 0;
}

int SDL_event_thread(void *opaque)
{
    SDL_Event event;
    while (1) {
        SDL_WaitEvent(&event);
        if (event.type == SDL_KEYDOWN) {
            // Pause
            if (event.key.keysym.sym == SDLK_SPACE) {
                thread_pause = !thread_pause;
                printf("video got pause event!\n");
            }
            if (event.key.keysym.sym == SDLK_RIGHT) {
                thread_pause = !thread_pause;
                CurKeyProcess = 0;
                CurKeyCode = SDLK_RIGHT;
                printf("video got right key event!\n");
            }
            if (event.key.keysym.sym == SDLK_LEFT) {
                thread_pause = !thread_pause;
                CurKeyProcess = 0;
                CurKeyCode = SDLK_LEFT;
                printf("video got left key event!\n");
            }
        } else if (event.type == SDL_QUIT) {
            thread_exit = 1;
            printf("------------------------------>video got SDL_QUIT event!\n");
            break;
        } else if (event.type == SFM_BREAK_EVENT) {
            break;
        }
    }
    printf("---------> SDL_event_thread end !!!! \n");
    return 0;
}

int video_refresh_thread(void *opaque)
{
    while (1) {
        if (thread_exit) break;
        if (thread_pause) {
            SDL_Delay(40);
            continue;
        }
        usleep(VideoCtrl.RefreshTime);
        sem_post(&VideoCtrl.video_refresh);
    }
    printf("---------> video_refresh_thread end !!!! \n");
    return 0;
}

static void *thread_audio(void *arg)
{
    AVCodecContext *pAudioCodecCtx;
    AVFrame *pAudioFrame;
    unsigned char *pAudioOutBuffer;
    AVPacket *Packet;
    int i, ret, GotAudioPicture;
    struct SwrContext *AudioConvertCtx;
    AudioCtrlStruct *AudioCtrl = (AudioCtrlStruct *)arg;

    pAudioCodecCtx = AudioCtrl->pCodecCtx;
    pAudioOutBuffer = AudioCtrl->pAudioOutBuffer;
    AudioConvertCtx = AudioCtrl->pConvertCtx;
    printf("---------> thread_audio start !!!! \n");
    pAudioFrame = av_frame_alloc();
    while (1) {
        if (thread_exit) break;
        if (thread_pause) {
            usleep(10000);
            audio_pause = 1;
            continue;
        }
        if (IsPacketArrayEmpty(&AudioCtrl->Audio)) {
            SDL_Delay(1);
            printf("---------> thread_audio empty !!!! \n");
            continue;
        }
        audio_pause = 0;
        i = AudioCtrl->Audio.rIndex;
        Packet = &AudioCtrl->Audio.PacketArray[i].Packet;
        CurAudioDts = AudioCtrl->Audio.PacketArray[i].dts;
        CurAudioPts = AudioCtrl->Audio.PacketArray[i].pts;
        if (Packet->stream_index == AudioCtrl->AudioIndex) {
            ret = avcodec_decode_audio4(pAudioCodecCtx, pAudioFrame, &GotAudioPicture, Packet);
            if (ret < 0) {
                printf("Error in decoding audio frame.\n");
                return 0;
            }
            if (GotAudioPicture > 0) {
                swr_convert(AudioConvertCtx, &pAudioOutBuffer, MAX_AUDIO_FRAME_SIZE,
                            (const uint8_t **)pAudioFrame->data, pAudioFrame->nb_samples);
                printf("Audio index:%5d\t pts:%ld\t packet size:%d, pFrame->nb_samples:%d\n",
                       AudioCtrl->AudioCnt, Packet->pts, Packet->size, pAudioFrame->nb_samples);
                AudioCtrl->AudioCnt++;
            }
            while (AudioCtrl->audio_len > 0) // wait until the previous chunk is consumed
                SDL_Delay(1);
            // Set audio buffer (PCM data)
            AudioCtrl->audio_chunk = (Uint8 *)pAudioOutBuffer;
            AudioCtrl->audio_pos = AudioCtrl->audio_chunk;
            AudioCtrl->audio_len = AudioCtrl->out_buffer_size;
            av_packet_unref(Packet);
            AudioCtrl->Audio.PacketArray[i].state = 0;
            i++;
            if (i >= PACKET_ARRAY_SIZE) i = 0;
            AudioCtrl->Audio.rIndex = i;
        }
    }
    printf("---------> thread_audio end !!!! \n");
    return 0;
}

static void *thread_video(void *arg)
{
    AVCodecContext *pVideoCodecCtx;
    AVFrame *pVideoFrame, *pFrameYUV;
    AVPacket *Packet;
    int i, ret, GotPicture;
    struct SwsContext *VideoConvertCtx;
    VideoCtrlStruct *VideoCtrl = (VideoCtrlStruct *)arg;

    pVideoCodecCtx = VideoCtrl->pCodecCtx;
    VideoConvertCtx = VideoCtrl->pConvertCtx;
    pVideoFrame = VideoCtrl->pVideoFrame;
    pFrameYUV = VideoCtrl->pFrameYUV;
    printf("---------> thread_video start !!!! \n");
    while (1) {
        if (thread_exit) break;
        if (thread_pause) {
            usleep(10000);
            video_pause = 1;
            continue;
        }
        if (IsPacketArrayEmpty(&VideoCtrl->Video)) {
            SDL_Delay(1);
            continue;
        }
        video_pause = 0;
        i = VideoCtrl->Video.rIndex;
        Packet = &VideoCtrl->Video.PacketArray[i].Packet;
        CurVideoDts = VideoCtrl->Video.PacketArray[i].dts;
        CurVideoPts = VideoCtrl->Video.PacketArray[i].pts;
        if (Packet->stream_index == VideoCtrl->VideoIndex) {
            ret = avcodec_decode_video2(pVideoCodecCtx, pVideoFrame, &GotPicture, Packet);
            if (ret < 0) {
                printf("Video Decode Error.\n");
                return 0;
            }
            printf("Video index:%5d\t dts:%ld\t, pts:%ld\t packet size:%d, GotVideoPicture:%d\n",
                   VideoCtrl->VideoCnt, Packet->dts, Packet->pts, Packet->size, GotPicture);
            VideoCtrl->VideoCnt++;
            if (GotPicture) {
                sws_scale(VideoConvertCtx, (const unsigned char *const *)pVideoFrame->data,
                          pVideoFrame->linesize, 0, pVideoCodecCtx->height,
                          pFrameYUV->data, pFrameYUV->linesize);
                sem_wait(&VideoCtrl->video_refresh);
                // SDL---------------------------
                SDL_UpdateTexture(VideoCtrl->sdlTexture, NULL, pFrameYUV->data[0], pFrameYUV->linesize[0]);
                SDL_RenderClear(VideoCtrl->sdlRenderer);
                SDL_RenderCopy(VideoCtrl->sdlRenderer, VideoCtrl->sdlTexture, NULL, NULL);
                SDL_RenderPresent(VideoCtrl->sdlRenderer);
                // SDL End-----------------------
            }
            av_packet_unref(Packet);
            VideoCtrl->Video.PacketArray[i].state = 0;
            i++;
            if (i >= PACKET_ARRAY_SIZE) i = 0;
            VideoCtrl->Video.rIndex = i;
        }
    }
    printf("---------> thread_video end !!!! \n");
    return 0;
}

/* The audio function callback takes the following parameters:
 * stream: A pointer to the audio buffer to be filled
 * len: The length (in bytes) of the audio buffer */
void fill_audio(void *udata, Uint8 *stream, int len)
{
    AudioCtrlStruct *AudioCtrl = (AudioCtrlStruct *)udata;
    // SDL 2.0
    SDL_memset(stream, 0, len);
    if (AudioCtrl->audio_len == 0) return;
    len = (len > AudioCtrl->audio_len ? AudioCtrl->audio_len : len); /* Mix as much data as possible */
    SDL_MixAudio(stream, AudioCtrl->audio_pos, len, SDL_MIX_MAXVOLUME);
    AudioCtrl->audio_pos += len;
    AudioCtrl->audio_len -= len;
}

int main(int argc, char *argv[])
{
    AVFormatContext *pFormatCtx;
    AVCodecContext *pVideoCodecCtx, *pAudioCodecCtx;
    AVCodec *pVideoCodec, *pAudioCodec;
    AVPacket *Packet;
    unsigned char *pVideoOutBuffer, *pAudioOutBuffer;
    int ret;
    unsigned int i;
    pthread_t audio_tid, video_tid;
    uint64_t AudioOutChannelLayout;
    int out_nb_samples; // nb_samples: AAC-1024 MP3-1152
    AVSampleFormat out_sample_fmt;
    int out_sample_rate;
    int out_channels;
    int out_buffer_size;
    struct SwsContext *VideoConvertCtx;
    struct SwrContext *AudioConvertCtx;
    int VideoIndex, VideoCnt;
    int AudioIndex, AudioCnt;

    memset(&AudioCtrl, 0, sizeof(AudioCtrlStruct));
    memset(&VideoCtrl, 0, sizeof(VideoCtrlStruct));
    char *filepath = argv[1];
    sem_init(&VideoCtrl.video_refresh, 0, 0);
    sem_init(&VideoCtrl.frame_put, 0, 0);
    sem_init(&AudioCtrl.frame_put, 0, 0);
    thread_exit = 0;
    thread_pause = 0;
    CurKeyProcess = 1;
    CurKeyCode = 0;

    av_register_all();
    avformat_network_init();
    pFormatCtx = avformat_alloc_context();
    if (avformat_open_input(&pFormatCtx, filepath, NULL, NULL) != 0) {
        printf("Couldn't open input stream.\n");
        return -1;
    }
    if (avformat_find_stream_info(pFormatCtx, NULL) < 0) {
        printf("Couldn't find stream information.\n");
        return -1;
    }
    VideoIndex = -1;
    AudioIndex = -1;
    for (i = 0; i < pFormatCtx->nb_streams; i++) {
        if (pFormatCtx->streams[i]->codec->codec_type == AVMEDIA_TYPE_VIDEO) {
            VideoIndex = i;
            // print the video stream info
            printf("video time_base.num:%d, time_base.den:%d, avg_frame_rate.num:%d, avg_frame_rate.den:%d\n",
                   pFormatCtx->streams[VideoIndex]->time_base.num,
                   pFormatCtx->streams[VideoIndex]->time_base.den,
                   pFormatCtx->streams[VideoIndex]->avg_frame_rate.num,
                   pFormatCtx->streams[VideoIndex]->avg_frame_rate.den);
        }
        if (pFormatCtx->streams[i]->codec->codec_type == AVMEDIA_TYPE_AUDIO) {
            AudioIndex = i;
            // print the audio stream info
            printf("audio time_base.num:%d, time_base.den:%d, avg_frame_rate.num:%d, avg_frame_rate.den:%d\n",
                   pFormatCtx->streams[AudioIndex]->time_base.num,
                   pFormatCtx->streams[AudioIndex]->time_base.den,
                   pFormatCtx->streams[AudioIndex]->avg_frame_rate.num,
                   pFormatCtx->streams[AudioIndex]->avg_frame_rate.den);
        }
    }

    if (VideoIndex != -1) { // set up the video decoding context
        pVideoCodecCtx = pFormatCtx->streams[VideoIndex]->codec;
        pVideoCodec = avcodec_find_decoder(pVideoCodecCtx->codec_id);
        if (pVideoCodec == NULL) {
            printf("Video Codec not found.\n");
            return -1;
        }
        if (avcodec_open2(pVideoCodecCtx, pVideoCodec, NULL) < 0) {
            printf("Could not open video codec.\n");
            return -1;
        }
        // prepare video
        VideoCtrl.pVideoFrame = av_frame_alloc();
        VideoCtrl.pFrameYUV = av_frame_alloc();
        ret = av_image_get_buffer_size(AV_PIX_FMT_YUV420P, pVideoCodecCtx->width, pVideoCodecCtx->height, 1);
        pVideoOutBuffer = (unsigned char *)av_malloc(ret);
        av_image_fill_arrays(VideoCtrl.pFrameYUV->data, VideoCtrl.pFrameYUV->linesize, pVideoOutBuffer,
                             AV_PIX_FMT_YUV420P, pVideoCodecCtx->width, pVideoCodecCtx->height, 1);
        VideoConvertCtx = sws_getContext(pVideoCodecCtx->width, pVideoCodecCtx->height, pVideoCodecCtx->pix_fmt,
                                         pVideoCodecCtx->width, pVideoCodecCtx->height,
                                         AV_PIX_FMT_YUV420P, SWS_BICUBIC, NULL, NULL, NULL);
        VideoCtrl.pFormatCtx = pFormatCtx;
        VideoCtrl.pStream = pFormatCtx->streams[VideoIndex];
        VideoCtrl.pCodec = pVideoCodec;
        VideoCtrl.pCodecCtx = pFormatCtx->streams[VideoIndex]->codec;
        VideoCtrl.pConvertCtx = VideoConvertCtx;
        VideoCtrl.pVideoOutBuffer = pVideoOutBuffer;
        VideoCtrl.VideoIndex = VideoIndex;
        if (pFormatCtx->streams[VideoIndex]->avg_frame_rate.num == 0 ||
            pFormatCtx->streams[VideoIndex]->avg_frame_rate.den == 0) {
            VideoCtrl.RefreshTime = 40000;
        } else {
            VideoCtrl.RefreshTime = 1000000 * pFormatCtx->streams[VideoIndex]->avg_frame_rate.den;
            VideoCtrl.RefreshTime /= pFormatCtx->streams[VideoIndex]->avg_frame_rate.num;
        }
        printf("VideoCtrl.RefreshTime:%d\n", VideoCtrl.RefreshTime);
    } else {
        printf("Didn't find a video stream.\n");
    }

    if (AudioIndex != -1) { // set up the audio decoding context
        pAudioCodecCtx = pFormatCtx->streams[AudioIndex]->codec;
        pAudioCodec = avcodec_find_decoder(pAudioCodecCtx->codec_id);
        if (pAudioCodec == NULL) {
            printf("Audio Codec not found.\n");
            return -1;
        }
        if (avcodec_open2(pAudioCodecCtx, pAudioCodec, NULL) < 0) {
            printf("Could not open audio codec.\n");
            return -1;
        }
        // prepare the output audio parameters
        AudioOutChannelLayout = AV_CH_LAYOUT_STEREO;
        out_nb_samples = pAudioCodecCtx->frame_size * 2; // nb_samples: AAC-1024 MP3-1152
        out_sample_fmt = AV_SAMPLE_FMT_S16;
        // derive this from pAudioCodecCtx->sample_rate; an unrelated value causes
        // under/over-sampling and audible noise
        out_sample_rate = pAudioCodecCtx->sample_rate * 2;
        out_channels = av_get_channel_layout_nb_channels(AudioOutChannelLayout);
        out_buffer_size = av_samples_get_buffer_size(NULL, out_channels, out_nb_samples, out_sample_fmt, 1);
        // mp3: out_nb_samples:1152, out_channels:2, out_buffer_size:4608, pCodecCtx->channels:2
        // aac: out_nb_samples:1024, out_channels:2, out_buffer_size:4096, pCodecCtx->channels:2
        printf("out_nb_samples:%d, out_channels:%d, out_buffer_size:%d, pCodecCtx->channels:%d\n",
               out_nb_samples, out_channels, out_buffer_size, pAudioCodecCtx->channels);
        pAudioOutBuffer = (uint8_t *)av_malloc(MAX_AUDIO_FRAME_SIZE * 2);
        // FIX: Some Codec's Context Information is missing
        int64_t in_channel_layout = av_get_default_channel_layout(pAudioCodecCtx->channels);
        // Swr
        AudioConvertCtx = swr_alloc();
        AudioConvertCtx = swr_alloc_set_opts(AudioConvertCtx, AudioOutChannelLayout, out_sample_fmt, out_sample_rate,
                                             in_channel_layout, pAudioCodecCtx->sample_fmt,
                                             pAudioCodecCtx->sample_rate, 0, NULL);
        swr_init(AudioConvertCtx);
        AudioCtrl.pFormatCtx = pFormatCtx;
        AudioCtrl.pStream = pFormatCtx->streams[AudioIndex];
        AudioCtrl.pCodec = pAudioCodec;
        AudioCtrl.pCodecCtx = pFormatCtx->streams[AudioIndex]->codec;
        AudioCtrl.pConvertCtx = AudioConvertCtx;
        AudioCtrl.AudioOutChannelLayout = AudioOutChannelLayout;
        AudioCtrl.out_nb_samples = out_nb_samples;
        AudioCtrl.out_sample_fmt = out_sample_fmt;
        AudioCtrl.out_sample_rate = out_sample_rate;
        AudioCtrl.out_channels = out_channels;
        AudioCtrl.out_buffer_size = out_buffer_size;
        AudioCtrl.pAudioOutBuffer = pAudioOutBuffer;
        AudioCtrl.AudioIndex = AudioIndex;
    } else {
        printf("Didn't find a audio stream.\n");
    }

    // Output Info-----------------------------
    printf("---------------- File Information ---------------\n");
    av_dump_format(pFormatCtx, 0, filepath, 0);
    printf("-------------- File Information end -------------\n");

    if (SDL_Init(SDL_INIT_VIDEO | SDL_INIT_AUDIO | SDL_INIT_TIMER)) {
        printf("Could not initialize SDL - %s\n", SDL_GetError());
        return -1;
    }
    if (VideoIndex != -1) {
        // SDL 2.0 Support for multiple windows
        VideoCtrl.screen_w = pVideoCodecCtx->width;
        VideoCtrl.screen_h = pVideoCodecCtx->height;
        VideoCtrl.screen = SDL_CreateWindow("Simplest ffmpeg player's Window", SDL_WINDOWPOS_UNDEFINED,
                                            SDL_WINDOWPOS_UNDEFINED, VideoCtrl.screen_w, VideoCtrl.screen_h,
                                            SDL_WINDOW_OPENGL);
        if (!VideoCtrl.screen) {
            printf("SDL: could not create window - exiting:%s\n", SDL_GetError());
            return -1;
        }
        VideoCtrl.sdlRenderer = SDL_CreateRenderer(VideoCtrl.screen, -1, 0);
        // IYUV: Y + U + V (3 planes)
        // YV12: Y + V + U (3 planes)
        VideoCtrl.sdlTexture = SDL_CreateTexture(VideoCtrl.sdlRenderer, SDL_PIXELFORMAT_IYUV,
                                                 SDL_TEXTUREACCESS_STREAMING,
                                                 pVideoCodecCtx->width, pVideoCodecCtx->height);
        VideoCtrl.sdlRect.x = 0;
        VideoCtrl.sdlRect.y = 0;
        VideoCtrl.sdlRect.w = VideoCtrl.screen_w;
        VideoCtrl.sdlRect.h = VideoCtrl.screen_h;
        VideoCtrl.video_tid = SDL_CreateThread(video_refresh_thread, NULL, NULL);
        ret = pthread_create(&video_tid, NULL, thread_video, &VideoCtrl);
        if (ret) {
            printf("create thr_rvs video thread failed, error = %d \n", ret);
            return -1;
        }
    }
    if (AudioIndex != -1) {
        // SDL_AudioSpec
        SDL_AudioSpec AudioSpec;
        AudioSpec.freq = out_sample_rate;
        AudioSpec.format = AUDIO_S16SYS;
        AudioSpec.channels = out_channels;
        AudioSpec.silence = 0;
        AudioSpec.samples = out_nb_samples;
        AudioSpec.callback = fill_audio;
        AudioSpec.userdata = (void *)&AudioCtrl;
        if (SDL_OpenAudio(&AudioSpec, NULL) < 0) {
            printf("can't open audio.\n");
            return -1;
        }
        ret = pthread_create(&audio_tid, NULL, thread_audio, &AudioCtrl);
        if (ret) {
            printf("create thr_rvs video thread failed, error = %d \n", ret);
            return -1;
        }
        SDL_PauseAudio(0);
    }

    SDL_Thread *event_tid;
    event_tid = SDL_CreateThread(SDL_event_thread, NULL, NULL);
    VideoCnt = 0;
    AudioCnt = 0;
    Packet = (AVPacket *)av_malloc(sizeof(AVPacket));
    av_init_packet(Packet);
    while (1) {
        if (thread_pause) {
            if ((CurKeyProcess == 0) && video_pause && audio_pause) {
                switch (CurKeyCode) {
                case SDLK_RIGHT:
                    // DstAudioDts = CurAudioDts + (int64_t)(3 / av_q2d(AudioCtrl.pStream->time_base));
                    // time_base.num is almost always 1, so multiply by the denominator
                    // directly to avoid floating-point math
                    DstAudioDts = CurAudioDts + 3 * AudioCtrl.pStream->time_base.den;
                    DstVideoDts = CurVideoDts + 3 * VideoCtrl.pStream->time_base.den;
                    ret = av_seek_frame(pFormatCtx, AudioIndex, DstAudioDts, AVSEEK_FLAG_FRAME);
                    // ret = av_seek_frame(pFormatCtx, VideoIndex, DstVideoDts, AVSEEK_FLAG_FRAME);
                    printf("SDLK_RIGHT av_seek_frame ret = %d, CurAudioDts:%ld, CurVideoDts:%ld, DstVideoDts:%ld, DstAudioDts:%ld\n",
                           ret, CurAudioDts, CurVideoDts, DstVideoDts, DstAudioDts);
                    avcodec_flush_buffers(AudioCtrl.pCodecCtx);
                    avcodec_flush_buffers(VideoCtrl.pCodecCtx);
                    PacketArrayClear(&VideoCtrl.Video);
                    PacketArrayClear(&AudioCtrl.Audio);
                    break;
                case SDLK_LEFT:
                    DstAudioDts = CurAudioDts - 3 * AudioCtrl.pStream->time_base.den;
                    DstVideoDts = CurVideoDts - 3 * VideoCtrl.pStream->time_base.den;
                    if (DstAudioDts < 0) DstAudioDts = 0;
                    if (DstVideoDts < 0) DstVideoDts = 0;
                    ret = av_seek_frame(pFormatCtx, AudioIndex, DstAudioDts,
                                        AVSEEK_FLAG_BACKWARD | AVSEEK_FLAG_FRAME);
                    // ret = av_seek_frame(pFormatCtx, VideoIndex, DstVideoDts, AVSEEK_FLAG_BACKWARD | AVSEEK_FLAG_FRAME);
                    printf("SDLK_LEFT av_seek_frame ret = %d, CurAudioDts:%ld, CurVideoDts:%ld, DstVideoDts:%ld, DstAudioDts:%ld\n",
                           ret, CurAudioDts, CurVideoDts, DstVideoDts, DstAudioDts);
                    avcodec_flush_buffers(AudioCtrl.pCodecCtx);
                    avcodec_flush_buffers(VideoCtrl.pCodecCtx);
                    PacketArrayClear(&VideoCtrl.Video);
                    PacketArrayClear(&AudioCtrl.Audio);
                    break;
                default:
                    break;
                }
                CurKeyProcess = 1;
                thread_pause = !thread_pause;
            }
            usleep(10000);
            continue;
        }
        if (av_read_frame(pFormatCtx, Packet) < 0) {
            // end of file: seek back to the beginning and replay
            av_seek_frame(pFormatCtx, AudioIndex, 0, AVSEEK_FLAG_BACKWARD | AVSEEK_FLAG_FRAME);
            continue;
        }
        if (Packet->stream_index == VideoIndex) {
            if (VideoCtrl.Video.wIndex >= PACKET_ARRAY_SIZE) {
                VideoCtrl.Video.wIndex = 0;
            }
            while (IsPacketArrayFull(&VideoCtrl.Video)) {
                usleep(5000);
            }
            i = VideoCtrl.Video.wIndex;
            VideoCtrl.Video.PacketArray[i].Packet = *Packet;
            VideoCtrl.Video.PacketArray[i].dts = Packet->dts;
            VideoCtrl.Video.PacketArray[i].pts = Packet->pts;
            VideoCtrl.Video.PacketArray[i].state = 1;
            VideoCtrl.Video.wIndex++;
        }
        if (Packet->stream_index == AudioIndex) {
            if (AudioCtrl.Audio.wIndex >= PACKET_ARRAY_SIZE) {
                AudioCtrl.Audio.wIndex = 0;
            }
            while (IsPacketArrayFull(&AudioCtrl.Audio)) {
                usleep(5000);
            }
            i = AudioCtrl.Audio.wIndex;
            AudioCtrl.Audio.PacketArray[i].Packet = *Packet;
            AudioCtrl.Audio.PacketArray[i].dts = Packet->dts;
            AudioCtrl.Audio.PacketArray[i].pts = Packet->pts;
            AudioCtrl.Audio.PacketArray[i].state = 1;
            AudioCtrl.Audio.wIndex++;
        }
        if (thread_exit) break;
    }

    SDL_WaitThread(event_tid, NULL);
    SDL_WaitThread(VideoCtrl.video_tid, NULL);
    pthread_join(audio_tid, NULL);
    pthread_join(video_tid, NULL);
    SDL_CloseAudio();
    // Close SDL
    SDL_Quit();
    swr_free(&AudioConvertCtx);
    sws_freeContext(VideoConvertCtx);
    av_free(pVideoOutBuffer);
    avcodec_close(pVideoCodecCtx);
    av_free(pAudioOutBuffer);
    avcodec_close(pAudioCodecCtx);
    avformat_close_input(&pFormatCtx);
    printf("--------------------------->main exit 8 !!\n");
}
```
That's it. In the next article I will start looking at combining Qt with FFmpeg, which will probably take some time.