The ijkplayer Audio Pipeline

ijkplayer does not support hardware decoding for audio; audio playback goes through either the OpenSL ES or the AudioTrack API.

  • AudioTrack
  • AudioTrack is a Java API provided specifically for Android applications.

    Outputting audio through the AudioTrack API requires copying audio data from the Java layer down to the native layer. The OpenSL ES API, by contrast, is a native interface provided by the Android NDK that can acquire and process data directly at the native layer, so for efficiency OpenSL ES should be preferred. The output API is selected through the following Java call (a value of 1 enables OpenSL ES; 0, the default, selects AudioTrack):

      ijkMediaPlayer.setOption(IjkMediaPlayer.OPT_CATEGORY_PLAYER, "opensles", 1);
    

    ijkplayer uses jni4android to auto-generate the JNI native code for AudioTrack's Java API.

    Since we prefer to study the lower-level path, this article walks through how the OpenSL ES API is used inside ijkplayer.

    Creating the player's audio output object

    The audio output object is created by calling the following function:

    SDL_Aout *SDL_AoutAndroid_CreateForOpenSLES()
    

    Create and initialize the audio engine:

    SLObjectItf slObject = NULL;
    ret = slCreateEngine(&slObject, 0, NULL, 0, NULL, NULL);
    CHECK_OPENSL_ERROR(ret, "%s: slCreateEngine() failed", __func__);
    opaque->slObject = slObject;
    //realize the engine object
    ret = (*slObject)->Realize(slObject, SL_BOOLEAN_FALSE);
    CHECK_OPENSL_ERROR(ret, "%s: slObject->Realize() failed", __func__);
    //obtain the SLEngineItf interface object slEngine
    SLEngineItf slEngine = NULL;
    ret = (*slObject)->GetInterface(slObject, SL_IID_ENGINE, &slEngine);
    CHECK_OPENSL_ERROR(ret, "%s: slObject->GetInterface() failed", __func__);
    opaque->slEngine = slEngine;

    Open the audio output device:

    //use slEngine to open the output device (the output mix)
    SLObjectItf slOutputMixObject = NULL;
    const SLInterfaceID ids1[] = {SL_IID_VOLUME};
    const SLboolean req1[] = {SL_BOOLEAN_FALSE};
    ret = (*slEngine)->CreateOutputMix(slEngine, &slOutputMixObject, 1, ids1, req1);
    CHECK_OPENSL_ERROR(ret, "%s: slEngine->CreateOutputMix() failed", __func__);
    opaque->slOutputMixObject = slOutputMixObject;
    //realize the output mix
    ret = (*slOutputMixObject)->Realize(slOutputMixObject, SL_BOOLEAN_FALSE);
    CHECK_OPENSL_ERROR(ret, "%s: slOutputMixObject->Realize() failed", __func__);
    

    Store the OpenSL ES objects created above in the SDL_Aout_Opaque struct.

    Set the callbacks on the player's audio output object:

    aout->free_l       = aout_free_l;
    aout->opaque_class = &g_opensles_class;
    aout->open_audio   = aout_open_audio;
    aout->pause_audio  = aout_pause_audio;
    aout->flush_audio  = aout_flush_audio;
    aout->close_audio  = aout_close_audio;
    aout->set_volume   = aout_set_volume;
    aout->func_get_latency_seconds = aout_get_latency_seconds;
    

    Configuring and creating the audio player

    This happens in the following function:

    static int aout_open_audio(SDL_Aout *aout, const SDL_AudioSpec *desired, SDL_AudioSpec *obtained)
    

    Configure the data source

     SLDataLocator_AndroidSimpleBufferQueue loc_bufq = {
         SL_DATALOCATOR_ANDROIDSIMPLEBUFFERQUEUE,
         OPENSLES_BUFFERS
     };
     SLDataFormat_PCM *format_pcm = &opaque->format_pcm;
     format_pcm->formatType       = SL_DATAFORMAT_PCM;
     format_pcm->numChannels      = desired->channels;
     format_pcm->samplesPerSec    = desired->freq * 1000; // in milliHz
     format_pcm->bitsPerSample    = SL_PCMSAMPLEFORMAT_FIXED_16;
     format_pcm->containerSize    = SL_PCMSAMPLEFORMAT_FIXED_16;
     switch (desired->channels) {
         case 2:
             format_pcm->channelMask = SL_SPEAKER_FRONT_LEFT | SL_SPEAKER_FRONT_RIGHT;
             break;
         case 1:
             format_pcm->channelMask = SL_SPEAKER_FRONT_CENTER;
             break;
         default:
             ALOGE("%s, invalid channel %d", __func__, desired->channels);
             goto fail;
     }
     format_pcm->endianness       = SL_BYTEORDER_LITTLEENDIAN;
     SLDataSource audio_source = {&loc_bufq, format_pcm};
    
     const SLInterfaceID ids2[] = { SL_IID_ANDROIDSIMPLEBUFFERQUEUE, SL_IID_VOLUME, SL_IID_PLAY };
     static const SLboolean req2[] = { SL_BOOLEAN_TRUE, SL_BOOLEAN_TRUE, SL_BOOLEAN_TRUE };
    

    Create the player

     ret = (*slEngine)->CreateAudioPlayer(slEngine, &slPlayerObject, &audio_source,
                             &audio_sink, sizeof(ids2) / sizeof(*ids2),
                             ids2, req2);
    

    Obtain the relevant interfaces

      //obtain the play interface
      ret = (*slPlayerObject)->GetInterface(slPlayerObject, SL_IID_PLAY, &opaque->slPlayItf);
      CHECK_OPENSL_ERROR(ret, "%s: slPlayerObject->GetInterface(SL_IID_PLAY) failed", __func__);
      //volume adjustment interface
      ret = (*slPlayerObject)->GetInterface(slPlayerObject, SL_IID_VOLUME, &opaque->slVolumeItf);
      CHECK_OPENSL_ERROR(ret, "%s: slPlayerObject->GetInterface(SL_IID_VOLUME) failed", __func__);
      //obtain the BufferQueue interface for audio output
      ret = (*slPlayerObject)->GetInterface(slPlayerObject, SL_IID_ANDROIDSIMPLEBUFFERQUEUE, &opaque->slBufferQueueItf);
      CHECK_OPENSL_ERROR(ret, "%s: slPlayerObject->GetInterface(SL_IID_ANDROIDSIMPLEBUFFERQUEUE) failed", __func__);      
    
  • Set the callback

  • The callback does not carry audio data; it merely tells the program: I am ready to accept more data for playback. At that point Enqueue can be called to push audio data into the BufferQueue.

        ret = (*opaque->slBufferQueueItf)->RegisterCallback(opaque->slBufferQueueItf, aout_opensles_callback, (void*)aout);
        CHECK_OPENSL_ERROR(ret, "%s: slBufferQueueItf->RegisterCallback() failed", __func__);    
    

    Initialize the remaining parameters

     opaque->bytes_per_frame   = format_pcm->numChannels * format_pcm->bitsPerSample / 8; //bytes per frame; here one sample point counts as one frame
     opaque->milli_per_buffer  = OPENSLES_BUFLEN; //audio duration held by one buffer, in milliseconds
     opaque->frames_per_buffer = opaque->milli_per_buffer * format_pcm->samplesPerSec / 1000000; // samplesPerSec is in milliHz; duration per buffer * frames (samples) per second = frames per buffer
     opaque->bytes_per_buffer  = opaque->bytes_per_frame * opaque->frames_per_buffer; //finally, the number of bytes per buffer
     opaque->buffer_capacity   = OPENSLES_BUFFERS * opaque->bytes_per_buffer;      
    

    Audio data handling follows the classic producer-consumer model: the decode thread decodes audio and pushes it onto a queue, while the audio driver pulls data off the queue and plays it.

    audio_thread is the audio decoding thread's main function:

    static int audio_thread(void *arg){
        ffp_audio_statistic_l(ffp);
        if ((got_frame = decoder_decode_frame(ffp, &is->auddec, frame, NULL)) < 0)//take a packet from the PacketQueue and decode it into one frame of data
        if (!(af = frame_queue_peek_writable(&is->sampq)))
            goto the_end;
        af->pts = (frame->pts == AV_NOPTS_VALUE) ? NAN : frame->pts * av_q2d(tb);
        af->pos = frame->pkt_pos;
        af->serial = is->auddec.pkt_serial;
        af->duration = av_q2d((AVRational){frame->nb_samples, frame->sample_rate});
        av_frame_move_ref(af->frame, frame);
        frame_queue_push(&is->sampq);//push the frame into the FrameQueue
    

    aout_thread_n is the audio output thread's main function:

    static int aout_thread_n(SDL_Aout *aout){
        SDL_LockMutex(opaque->wakeup_mutex);
        //if playback has not been aborted && (the player is paused || the number of buffers queued in the BufferQueue has reached OPENSLES_BUFFERS)
        if (!opaque->abort_request && (opaque->pause_on || slState.count >= OPENSLES_BUFFERS)) {
        //the inner while re-checks the condition after every timed wait, since the wait may return on timeout or spuriously
            while (!opaque->abort_request && (opaque->pause_on || slState.count >= OPENSLES_BUFFERS)) {
                //if not paused at this point, set the player state to PLAYING
                if (!opaque->pause_on) {
                    (*slPlayItf)->SetPlayState(slPlayItf, SL_PLAYSTATE_PLAYING);
                //whether paused or backed up with queued data, wait on a condition variable with a 1-second timeout, so the BufferQueue does not keep growing unchecked
                SDL_CondWaitTimeout(opaque->wakeup_cond, opaque->wakeup_mutex, 1000);
                SLresult slRet = (*slBufferQueueItf)->GetState(slBufferQueueItf, &slState);
                if (slRet != SL_RESULT_SUCCESS) {
                    ALOGE("%s: slBufferQueueItf->GetState() failed\n", __func__);
                    SDL_UnlockMutex(opaque->wakeup_mutex);
                //pause playback
                if (opaque->pause_on)
                    (*slPlayItf)->SetPlayState(slPlayItf, SL_PLAYSTATE_PAUSED);
            //resume playback
            if (!opaque->abort_request && !opaque->pause_on) {
                (*slPlayItf)->SetPlayState(slPlayItf, SL_PLAYSTATE_PLAYING);
        next_buffer = opaque->buffer + next_buffer_index * bytes_per_buffer;
        next_buffer_index = (next_buffer_index + 1) % OPENSLES_BUFFERS;
        //invoke the callback to produce the data to enqueue into the BufferQueue
        audio_cblk(userdata, next_buffer, bytes_per_buffer);
        //if the BufferQueue needs flushing, clear it; when a flush is needed is explained below
        if (opaque->need_flush) {
            (*slBufferQueueItf)->Clear(slBufferQueueItf);
            opaque->need_flush = false;
        //need_flush is checked a second time because audio_cblk above may have set it again while producing the buffer
        if (opaque->need_flush) {
            ALOGE("flush");
            opaque->need_flush = 0;
            (*slBufferQueueItf)->Clear(slBufferQueueItf);
        } else {
        //finally, enqueue the data into the BufferQueue
        slRet = (*slBufferQueueItf)->Enqueue(slBufferQueueItf, next_buffer, bytes_per_buffer);
    

    The following functions signal the condition variable opaque->wakeup_cond, so that the output thread responds promptly:

  • static void aout_opensles_callback(SLAndroidSimpleBufferQueueItf caller, void *pContext)

  • static void aout_close_audio(SDL_Aout *aout)

  • static void aout_pause_audio(SDL_Aout *aout, int pause_on)

  • static void aout_flush_audio(SDL_Aout *aout)

  • static void aout_set_volume(SDL_Aout *aout, float left_volume, float right_volume)

  • The first is the callback registered on the audio player's BufferQueue; it runs each time a buffer is consumed from the queue. This makes sense: as soon as one buffer is dequeued, the thread is woken up to Enqueue more data.

  • The second is called when the audio player is closed; it makes the thread exit immediately.

  • The third pauses/resumes the audio player, applying the play state right away.

  • The fourth is called when the BufferQueue is flushed; it immediately wakes the thread to Enqueue data.

  • The fifth sets the volume, applied immediately.

  • The data inserted into the BufferQueue is produced by calling the following function:

    static void sdl_audio_callback(void *opaque, Uint8 *stream, int len){
            if (is->audio_buf_index >= is->audio_buf_size) {
            //if the buffer has run dry, produce new data.
                   audio_size = audio_decode_frame(ffp);
            if (!is->muted && is->audio_buf && is->audio_volume == SDL_MIX_MAXVOLUME)
                //copy directly into stream
                memcpy(stream, (uint8_t *)is->audio_buf + is->audio_buf_index, len1);
            else {
                memset(stream, 0, len1);
                if (!is->muted && is->audio_buf)
                //adjust the volume and mix
                    SDL_MixAudio(stream, (uint8_t *)is->audio_buf + is->audio_buf_index, len1, is->audio_volume);
    

    The function that produces new data does not decode audio; it post-processes already-decoded frames, applying any necessary resampling or speed/pitch change.

    static int audio_decode_frame(FFPlayer *ffp){
        //resample
        len2 = swr_convert(is->swr_ctx, out, out_count, in, af->frame->nb_samples);
        //audio speed/pitch change
        int ret_len = ijk_soundtouch_translate(is->handle, is->audio_new_buf, (float)(ffp->pf_playback_rate), (float)(1.0f/ffp->pf_playback_rate),
                    resampled_data_size / 2, bytes_per_sample, is->audio_tgt.channels, af->frame->sample_rate);
        //finally, store the result in audio_buf
         is->audio_buf = (uint8_t*)is->audio_new_buf;
    

    This section analyzes how the variable-speed playback framework is implemented, not the sonic or soundtouch speed-change algorithms themselves. My article on the sonic speed/pitch-change principle explains in detail how speed and pitch changes are built on the pitch period.

    1. Where the speed change enters
    Starting from the JNI-layer _setPropertyFloat function

    static void ijkMediaPlayer_setPropertyFloat(JNIEnv *env, jobject thiz, jint id, jfloat value)
        IjkMediaPlayer *mp = jni_get_media_player(env, thiz);
        JNI_CHECK_GOTO(mp, env, NULL, "mpjni: setPropertyFloat: null mp", LABEL_RETURN);
        ijkmp_set_property_float(mp, id, value);
      LABEL_RETURN:
        ijkmp_dec_ref_p(&mp);
        return;
    

    we reach ffp_set_property_float in ff_ffplay.c, which sets the speed:

    void ffp_set_property_float(FFPlayer *ffp, int id, float value)
        switch (id) {
            case FFP_PROP_FLOAT_PLAYBACK_RATE:
                ffp_set_playback_rate(ffp, value);
                break;
            case FFP_PROP_FLOAT_PLAYBACK_VOLUME:
                ffp_set_playback_volume(ffp, value);
                break;
            default:
                return;
    

    Following ffp_set_playback_rate, we can see that it mainly stores the speed in ffp->pf_playback_rate and sets ffp->pf_playback_rate_changed to 1.

    void ffp_set_playback_rate(FFPlayer *ffp, float rate)
        if (!ffp)
            return;
        av_log(ffp, AV_LOG_INFO, "Playback rate: %f\n", rate);
        ffp->pf_playback_rate = rate;
        ffp->pf_playback_rate_changed = 1;
    

    2. Audio speed change
    Tracking these two variables, we can see that in audio_decode_frame a speed/pitch-change algorithm was added to handle audio speed:

    #if defined(__ANDROID__)
            if (ffp->soundtouch_enable && ffp->pf_playback_rate != 1.0f && !is->abort_request) {
                av_fast_malloc(&is->audio_new_buf, &is->audio_new_buf_size, out_size * translate_time);
                for (int i = 0; i < (resampled_data_size / 2); i++)
                    is->audio_new_buf[i] = (is->audio_buf1[i * 2] | (is->audio_buf1[i * 2 + 1] << 8));
                int ret_len = ijk_soundtouch_translate(is->handle, is->audio_new_buf, (float)(ffp->pf_playback_rate), (float)(1.0f/ffp->pf_playback_rate),
                        resampled_data_size / 2, bytes_per_sample, is->audio_tgt.channels, af->frame->sample_rate);
                if (ret_len > 0) {
                    is->audio_buf = (uint8_t*)is->audio_new_buf;
                    resampled_data_size = ret_len;
                } else {
                    translate_time++;
                    goto reload;
            } else if (ffp->sonic_enabled && ffp->pf_playback_rate != 1.0f && !is->abort_request) {
                av_fast_malloc(&is->audio_new_buf, &is->audio_new_buf_size, out_size * translate_time * 2);
                for (int i = 0; i < (resampled_data_size / 2); i++)
                    is->audio_new_buf[i] = (is->audio_buf1[i * 2] | (is->audio_buf1[i * 2 + 1] << 8));
                int ret_len = sonicStream_translate(is->sonic_handle,is->audio_new_buf,ffp->pf_playback_rate,
                (float)(1.0f/ffp->pf_playback_rate),is->audio_tgt.channels, af->frame->sample_rate,resampled_data_size / 2,bytes_per_sample);
                if (ret_len > 0) {
                    is->audio_buf = (uint8_t*)is->audio_new_buf;
                    resampled_data_size = ret_len;
                } else {
                    translate_time++;
                    goto reload;
    #endif
    

    The sonic speed path is my addition. Next, in the audio callback sdl_audio_callback:

    if (ffp->pf_playback_rate_changed) {
            ffp->pf_playback_rate_changed = 0;
    #if defined(__ANDROID__)
            if (!ffp->soundtouch_enable && !ffp->sonic_enabled) {
                SDL_AoutSetPlaybackRate(ffp->aout, ffp->pf_playback_rate);
    #else
            SDL_AoutSetPlaybackRate(ffp->aout, ffp->pf_playback_rate);
    #endif
        if (ffp->pf_playback_volume_changed) {
            ffp->pf_playback_volume_changed = 0;
            SDL_AoutSetPlaybackVolume(ffp->aout, ffp->pf_playback_volume);
    

    This mainly handles the check on ffp->pf_playback_rate_changed: if neither soundtouch nor sonic speed change is enabled, the speed change is delegated to AudioTrack. On systems below Android 6.0, AudioTrack's speed change suffers from pitch shift.

    So the audio speed change is handled above, but video-only streams (those without an audio stream) are not.

    3. Video speed change
    ijk uses the audio stream as the time base by default; let's see how video speed change works. Speeding up video alone is easy: most video formats carry a fixed number of frames per second, so frames can simply be skipped proportionally.

    static void video_refresh(FFPlayer *opaque, double *remaining_time)
        FFPlayer *ffp = opaque;
        VideoState *is = ffp->is;
        double time;
        Frame *sp, *sp2;
        if (!is->paused && get_master_sync_type(is) == AV_SYNC_EXTERNAL_CLOCK && is->realtime)
            check_external_clock_speed(is);
        if (!ffp->display_disable && is->show_mode != SHOW_MODE_VIDEO && is->audio_st) {
            time = av_gettime_relative() / 1000000.0;
            if (is->force_refresh || is->last_vis_time + ffp->rdftspeed < time) {
                video_display2(ffp);
                is->last_vis_time = time;
            *remaining_time = FFMIN(*remaining_time, is->last_vis_time + ffp->rdftspeed - time);
        if (is->video_st) {
    retry:
        //add by hxk: support speed change for video-only streams
            if(!is->audio_st && get_master_sync_type(is) == AV_SYNC_EXTERNAL_CLOCK) {
                if(ffp->pf_playback_rate != 1.0f){
                    change_external_clock_speed(is,ffp->pf_playback_rate);
        //add end
            if (frame_queue_nb_remaining(&is->pictq) == 0) {
                // nothing to do, no picture to display in the queue
            } else {
                double last_duration, duration, delay;
                Frame *vp, *lastvp;
                /* dequeue the picture */
                lastvp = frame_queue_peek_last(&is->pictq);
                vp = frame_queue_peek(&is->pictq);
                if (vp->serial != is->videoq.serial) {
                    frame_queue_next(&is->pictq);
                    goto retry;
                if (lastvp->serial != vp->serial)
                    is->frame_timer = av_gettime_relative() / 1000000.0;
                if (is->paused)
                    goto display;
                /* compute nominal last_duration */
                last_duration = vp_duration(is, lastvp, vp);
                delay = compute_target_delay(ffp, last_duration, is);
                time= av_gettime_relative()/1000000.0;
                if (isnan(is->frame_timer) || time < is->frame_timer)
                    is->frame_timer = time;
                if (time < is->frame_timer + delay) {
                    *remaining_time = FFMIN(is->frame_timer + delay - time, *remaining_time);
                    goto display;
                is->frame_timer += delay;
                if (delay > 0 && time - is->frame_timer > AV_SYNC_THRESHOLD_MAX)
                    is->frame_timer = time;
                SDL_LockMutex(is->pictq.mutex);
                if (!isnan(vp->pts))
                    update_video_pts(is, vp->pts, vp->pos, vp->serial);
                SDL_UnlockMutex(is->pictq.mutex);
                if (frame_queue_nb_remaining(&is->pictq) > 1) {
                    Frame *nextvp = frame_queue_peek_next(&is->pictq);
                    duration = vp_duration(is, vp, nextvp);
                    if(!is->step && (ffp->framedrop > 0 || (ffp->framedrop && get_master_sync_type(is) != AV_SYNC_VIDEO_MASTER)) && time > is->frame_timer + duration) {
                        frame_queue_next(&is->pictq);
                        goto retry;
                if (is->subtitle_st) {
                    while (frame_queue_nb_remaining(&is->subpq) > 0) {
                        sp = frame_queue_peek(&is->subpq);
                        if (frame_queue_nb_remaining(&is->subpq) > 1)
                            sp2 = frame_queue_peek_next(&is->subpq);
                            sp2 = NULL;
                        if (sp->serial != is->subtitleq.serial
                                || (is->vidclk.pts > (sp->pts + ((float) sp->sub.end_display_time / 1000)))
                                || (sp2 && is->vidclk.pts > (sp2->pts + ((float) sp2->sub.start_display_time / 1000))))
                            if (sp->uploaded) {
                                ffp_notify_msg4(ffp, FFP_MSG_TIMED_TEXT, 0, 0, "", 1);
                            frame_queue_next(&is->subpq);
                        } else {
                            break;
                frame_queue_next(&is->pictq);
                is->force_refresh = 1;
                SDL_LockMutex(ffp->is->play_mutex);
                if (is->step) {
                    is->step = 0;
                    if (!is->paused)
                        stream_update_pause_l(ffp);
                SDL_UnlockMutex(ffp->is->play_mutex);
    display:
            /* display picture */
            if (!ffp->display_disable && is->force_refresh && is->show_mode == SHOW_MODE_VIDEO && is->pictq.rindex_shown)
                video_display2(ffp);
        is->force_refresh = 0;
        if (ffp->show_status) {
            static int64_t last_time;
            int64_t cur_time;
            int aqsize, vqsize, sqsize __unused;
            double av_diff;
            cur_time = av_gettime_relative();
            if (!last_time || (cur_time - last_time) >= 30000) {
                aqsize = 0;
                vqsize = 0;
                sqsize = 0;
                if (is->audio_st)
                    aqsize = is->audioq.size;
                if (is->video_st)
                    vqsize = is->videoq.size;
    #ifdef FFP_MERGE
                if (is->subtitle_st)
                    sqsize = is->subtitleq.size;
    #else
                sqsize = 0;
    #endif
                av_diff = 0;
                if (is->audio_st && is->video_st)
                    av_diff = get_clock(&is->audclk) - get_clock(&is->vidclk);
                else if (is->video_st)
                    av_diff = get_master_clock(is) - get_clock(&is->vidclk);
                else if (is->audio_st)
                    av_diff = get_master_clock(is) - get_clock(&is->audclk);
                av_log(NULL, AV_LOG_INFO,
                       "%7.2f %s:%7.3f fd=%4d aq=%5dKB vq=%5dKB sq=%5dB f=%"PRId64"/%"PRId64"   \r",
                       get_master_clock(is),
                       (is->audio_st && is->video_st) ? "A-V" : (is->video_st ? "M-V" : (is->audio_st ? "M-A" : "   ")),
                       av_diff,
                       is->frame_drops_early + is->frame_drops_late,
                       aqsize / 1024,
                       vqsize / 1024,
                       sqsize,
                       is->video_st ? is->viddec.avctx->pts_correction_num_faulty_dts : 0,
                       is->video_st ? is->viddec.avctx->pts_correction_num_faulty_pts : 0);
                fflush(stdout);
                last_time = cur_time;
    

    This is in fact where the video speed change is realized; it is the same logic that synchronizes video to audio.

    3.1. Speed change for video without audio
    From the analysis above, we know ijk does not handle speed change for video-only streams (those with no audio stream). When there is no audio stream, in read_thread:

     /* open the streams */
        if (st_index[AVMEDIA_TYPE_AUDIO] >= 0) {
            stream_component_open(ffp, st_index[AVMEDIA_TYPE_AUDIO]);
        } else { //if there is no audio stream
            ffp->av_sync_type = AV_SYNC_VIDEO_MASTER;
            is->av_sync_type  = ffp->av_sync_type;
    

    We can see that without an audio stream, ijk chooses the video stream as the time base. Personally, I think the external clock makes a better, more accurate time base.

    So how can we change this?

    3.1.1. Use the external clock as the sync type for audio-less video

      /* open the streams */
        if (st_index[AVMEDIA_TYPE_AUDIO] >= 0) {
            stream_component_open(ffp, st_index[AVMEDIA_TYPE_AUDIO]);
        } else {
            ffp->av_sync_type = AV_SYNC_EXTERNAL_CLOCK;
            is->av_sync_type  = ffp->av_sync_type;
    

    When there is no audio stream, the external clock is chosen as the time base.

    3.1.2. Make the external clock change speed
    Modify the video_refresh function

    /* called to display each frame */
    static void video_refresh(FFPlayer *opaque, double *remaining_time)
        ...
        if (is->video_st) {
    retry:
        //when there is no audio stream but there is a video stream, and the time base is the external clock
        //add by hxk: support speed change for video-only streams
            if(!is->audio_st && get_master_sync_type(is) == AV_SYNC_EXTERNAL_CLOCK) {
                //if the rate is not 1.0, change the external clock speed
                if(ffp->pf_playback_rate != 1.0f){
                    change_external_clock_speed(is, ffp->pf_playback_rate);
                }
            }
        //add end
        ... //the rest of the function is unchanged from the listing above
    

    Add the external-clock speed-change function:

    //add by hxk
    static void change_external_clock_speed(VideoState *is, float speed) {
        if (speed != 1.0f) {
            set_clock_speed(&is->extclk, speed + EXTERNAL_CLOCK_SPEED_STEP * (1.0 - speed) / fabs(1.0 - speed));
        }
    }
    //add end
    

    With this, speed change for audio-less video streams is solved. Of course, if the video stream remains the time base, we can instead adjust the refresh rate to achieve video speed change.

    One last puzzling point is when the BufferQueue is actually flushed; look at where the flush command is issued:

    static void sdl_audio_callback(void *opaque, Uint8 *stream, int len)
             if (is->auddec.pkt_serial != is->audioq.serial) {
            is->audio_buf_index = is->audio_buf_size;
            memset(stream, 0, len);
            // stream += len;
            // len = 0;
            SDL_AoutFlushAudio(ffp->aout);
            break;
    

    The flush is issued from the audio output thread, inside the callback that fetches the data about to be enqueued into the BufferQueue, under the condition shown above: pkt_serial is the serial of the packet taken from the PacketQueue for decoding, and serial is the current serial of the PacketQueue. If the two differ, the BufferQueue must be flushed. The serial exists to guarantee continuity of consecutive packets: after a Seek, for example, the data is no longer continuous, so the stale data must be cleared.

    Note: in the player's VideoState, audioq and the queue inside the decoder member auddec are the same queue.