MediaRecorder recording configuration mainly covers the output file path, audio source, video source, output format, audio encoder, video encoder, bit rate, frame rate, and video size.

We assume the video input comes from the camera: the Camera2 API renders camera frames onto the Surface provided by MediaRecorder, and MediaRecorder encodes that rendered data as H.264.

/** Configure the recording parameters. */
private void configMediaRecorder() {
    File file = new File(getExternalCacheDir(), "demo.mp4");
    if (file.exists()) {
        file.delete();
    }
    mMediaRecorder.setAudioSource(MediaRecorder.AudioSource.MIC);       // audio source
    mMediaRecorder.setVideoSource(MediaRecorder.VideoSource.SURFACE);   // video source
    mMediaRecorder.setOutputFormat(MediaRecorder.OutputFormat.MPEG_4);  // output format
    mMediaRecorder.setAudioEncoder(MediaRecorder.AudioEncoder.AAC);     // audio encoder
    mMediaRecorder.setVideoEncoder(MediaRecorder.VideoEncoder.H264);    // video encoder
    // setVideoEncodingBitRate() takes an int, so the width * height * fps * 0.2
    // rule of thumb must be cast back to int (≈ 12.4 Mbps here).
    mMediaRecorder.setVideoEncodingBitRate((int) (1920 * 1080 * 30 * 0.2)); // bit rate
    mMediaRecorder.setVideoFrameRate(30);                                // frame rate
    mMediaRecorder.setVideoSize(1920, 1080);
    mMediaRecorder.setOutputFile(file.getAbsolutePath());
    try {
        mMediaRecorder.prepare();
    } catch (IOException e) {
        e.printStackTrace();
    }
}
...
mPreviewBuilder = mCameraDevice.createCaptureRequest(CameraDevice.TEMPLATE_RECORD);
List<Surface> surfaces = new ArrayList<>();
// Set up Surface for the camera preview
Surface previewSurface = new Surface(texture);
surfaces.add(previewSurface);
mPreviewBuilder.addTarget(previewSurface);
// Set up Surface for the MediaRecorder
Surface recorderSurface = mMediaRecorder.getSurface();
surfaces.add(recorderSurface);
mPreviewBuilder.addTarget(recorderSurface);
// Start a capture session
// Once the session starts, we can update the UI and start recording
mCameraDevice.createCaptureSession(surfaces, new CameraCaptureSession.StateCallback() {
    @Override
    public void onConfigured(@NonNull CameraCaptureSession cameraCaptureSession) {
        ...
    }
    @Override
    public void onConfigureFailed(@NonNull CameraCaptureSession cameraCaptureSession) {
        ...
    }
}, mBackgroundHandler);

getSurface() returns the Surface to record into when the SURFACE video source is used. It may only be called after prepare(…), and frames rendered to it before start() are discarded. getSurface() itself is a native method.
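A minimal usage sketch of that ordering, assuming an mMediaRecorder configured with VideoSource.SURFACE as in the sample above (the helper method name is illustrative):

// Hedged sketch: the call order getSurface() requires.
// Needs android.media.MediaRecorder, android.view.Surface, java.io.IOException.
private Surface obtainRecorderSurface(MediaRecorder recorder) throws IOException {
    recorder.prepare();                       // must run before getSurface()
    Surface surface = recorder.getSurface();  // IllegalStateException if called before prepare()
    // Hand 'surface' to the Camera2 capture session; frames rendered to it
    // before recorder.start() is called are discarded.
    return surface;
}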

frameworks/base/media/java/android/media/MediaRecorder.java

public class MediaRecorder implements AudioRouting,
                                      AudioRecordingMonitor,
                                      AudioRecordingMonitorClient,
                                      MicrophoneDirection
{
    ......
    /**
     * Gets the surface to record from when using SURFACE video source.
     *
     * <p> May only be called after {@link #prepare}. Frames rendered to the Surface before
     * {@link #start} will be discarded.</p>
     *
     * @throws IllegalStateException if it is called before {@link #prepare}, after
     * {@link #stop}, or is called when VideoSource is not set to SURFACE.
     * @see android.media.MediaRecorder.VideoSource
     */
    public native Surface getSurface();
    ......
}
The JNI implementation, android_media_MediaRecorder_getSurface(…), does three things:

  1. It calls getMediaRecorder(…), which reads the jlong value stored in the Java MediaRecorder's mNativeContext field and casts it to a MediaRecorder* pointer; the value was written there when the MediaRecorder was initialized.
  2. It calls MediaRecorder::querySurfaceMediaSourceFromMediaServer(…) to obtain a strong pointer to the IGraphicBufferProducer proxy.
  3. It calls android_view_Surface_createFromIGraphicBufferProducer(…) to wrap that IGraphicBufferProducer in a Java Surface and return it.

frameworks/base/media/jni/android_media_MediaRecorder.cpp

static sp<MediaRecorder> getMediaRecorder(JNIEnv* env, jobject thiz)
{
    Mutex::Autolock l(sLock);
    MediaRecorder* const p = (MediaRecorder*)env->GetLongField(thiz, fields.context);
    return sp<MediaRecorder>(p);
}
......
static jobject
android_media_MediaRecorder_getSurface(JNIEnv *env, jobject thiz)
{
    ALOGV("getSurface");
    sp<MediaRecorder> mr = getMediaRecorder(env, thiz);
    if (mr == NULL) {
        jniThrowException(env, "java/lang/IllegalStateException", NULL);
        return NULL;
    }

    sp<IGraphicBufferProducer> bufferProducer = mr->querySurfaceMediaSourceFromMediaServer();
    if (bufferProducer == NULL) {
        jniThrowException(
                env,
                "java/lang/IllegalStateException",
                "failed to get surface");
        return NULL;
    }

    // Wrap the IGBP in a Java-language Surface.
    return android_view_Surface_createFromIGraphicBufferProducer(env,
            bufferProducer);
}

The SurfaceMediaSource is queried from mediaserver over the binder interface. mSurfaceMediaSource is a strong-pointer (sp<IGraphicBufferProducer>) field, and mMediaRecorder is a strong-pointer (sp<IMediaRecorder>) field. From the initialization article, 《【Android 10 源码】MediaRecorder 录像流程:MediaRecorder 初始化》, it is easy to see that this querySurfaceMediaSource() call ultimately lands in the MediaRecorderClient method of the same name.

frameworks/av/media/libmedia/mediarecorder.cpp

// Query a SurfaceMediaSurface through the Mediaserver, over the
// binder interface. This is used by the Filter Framework (MediaEncoder)
// to get an <IGraphicBufferProducer> object to hook up to ANativeWindow.
sp<IGraphicBufferProducer> MediaRecorder::querySurfaceMediaSourceFromMediaServer()
{
    Mutex::Autolock _l(mLock);
    mSurfaceMediaSource =
            mMediaRecorder->querySurfaceMediaSource();
    if (mSurfaceMediaSource == NULL) {
        ALOGE("SurfaceMediaSource could not be initialized!");
    }
    return mSurfaceMediaSource;
}

MediaRecorderClient::querySurfaceMediaSource() does only a simple check and then delegates to StagefrightRecorder::querySurfaceMediaSource().

frameworks/av/media/libmediaplayerservice/MediaRecorderClient.cpp

sp<IGraphicBufferProducer> MediaRecorderClient::querySurfaceMediaSource()
{ALOGV("Query SurfaceMediaSource");Mutex::Autolock lock(mLock);if (mRecorder == NULL) {ALOGE("recorder is not initialized");return NULL;}return mRecorder->querySurfaceMediaSource();
}

As the comment below spells out, the client side of mediaserver asks it to create a SurfaceMediaSource and return an interface reference; the client will use it while encoding GL frames.

Here the method simply returns the value held in the StagefrightRecorder field mGraphicBufferProducer. Where is mGraphicBufferProducer assigned? The rest of the configuration flow analyzed below will make that clear.

frameworks/av/media/libmediaplayerservice/StagefrightRecorder.cpp

// The client side of mediaserver asks it to create a SurfaceMediaSource
// and return a interface reference. The client side will use that
// while encoding GL Frames
sp<IGraphicBufferProducer> StagefrightRecorder::querySurfaceMediaSource() const {
    ALOGV("Get SurfaceMediaSource");
    return mGraphicBufferProducer;
}

Now let's look at android_view_Surface_createFromIGraphicBufferProducer(…).

  1. It constructs a C++ Surface from the IGraphicBufferProducer, passing controlledByApp = true, meaning the buffer producer is controlled by the application.
  2. It then calls android_view_Surface_createFromSurface(…) to convert the C++ Surface into a Java Surface.

frameworks/base/core/jni/android_view_Surface.cpp

jobject android_view_Surface_createFromSurface(JNIEnv* env, const sp<Surface>& surface) {
    jobject surfaceObj = env->NewObject(gSurfaceClassInfo.clazz,
            gSurfaceClassInfo.ctor, (jlong)surface.get());
    if (surfaceObj == NULL) {
        if (env->ExceptionCheck()) {
            ALOGE("Could not create instance of Surface from IGraphicBufferProducer.");
            LOGE_EX(env);
            env->ExceptionClear();
        }
        return NULL;
    }
    surface->incStrong(&sRefBaseOwner);
    return surfaceObj;
}

jobject android_view_Surface_createFromIGraphicBufferProducer(JNIEnv* env,
        const sp<IGraphicBufferProducer>& bufferProducer) {
    if (bufferProducer == NULL) {
        return NULL;
    }

    sp<Surface> surface(new Surface(bufferProducer, true));
    return android_view_Surface_createFromSurface(env, surface);
}

Now back to the main line: setAudioSource(MediaRecorder.AudioSource.MIC). setAudioSource(…) is a JNI (native) method.

frameworks/base/media/java/android/media/MediaRecorder.java

public class MediaRecorder implements AudioRouting,
                                      AudioRecordingMonitor,
                                      AudioRecordingMonitorClient,
                                      MicrophoneDirection
{
    ......
    /**
     * Sets the audio source to be used for recording. If this method is not
     * called, the output file will not contain an audio track. The source needs
     * to be specified before setting recording-parameters or encoders. Call
     * this only before setOutputFormat().
     *
     * @param audioSource the audio source to use
     * @throws IllegalStateException if it is called after setOutputFormat()
     * @see android.media.MediaRecorder.AudioSource
     */
    public native void setAudioSource(@Source int audioSource)
            throws IllegalStateException;
    ......
}
The native implementation, android_media_MediaRecorder_setAudioSource(…), works as follows:

  1. It validates the argument as (the Java audioSource); if it is out of range, jniThrowException(…) throws an IllegalArgumentException.
  2. It calls getMediaRecorder(…) to obtain the native MediaRecorder object.
  3. It calls process_media_recorder_call(…) to check the status returned by the native setAudioSource(…) call; any status other than OK makes jniThrowException(…) throw an exception.

frameworks/base/media/jni/android_media_MediaRecorder.cpp

// Returns true if it throws an exception.
static bool process_media_recorder_call(JNIEnv *env, status_t opStatus, const char* exception, const char* message)
{ALOGV("process_media_recorder_call");if (opStatus == (status_t)INVALID_OPERATION) {jniThrowException(env, "java/lang/IllegalStateException", NULL);return true;} else if (opStatus != (status_t)OK) {jniThrowException(env, exception, message);return true;}return false;
}
......
static void
android_media_MediaRecorder_setAudioSource(JNIEnv *env, jobject thiz, jint as)
{ALOGV("setAudioSource(%d)", as);if (as < AUDIO_SOURCE_DEFAULT ||(as >= AUDIO_SOURCE_CNT && as != AUDIO_SOURCE_FM_TUNER)) {jniThrowException(env, "java/lang/IllegalArgumentException", "Invalid audio source");return;}sp<MediaRecorder> mr = getMediaRecorder(env, thiz);if (mr == NULL) {jniThrowException(env, "java/lang/IllegalStateException", NULL);return;}process_media_recorder_call(env, mr->setAudioSource(as), "java/lang/RuntimeException", "setAudioSource failed.");
}
MediaRecorder::setAudioSource(…) then works as follows:

  1. mMediaRecorder is a strong pointer (sp<IMediaRecorder>) that actually points to a BpMediaRecorder; if it is NULL, INVALID_OPERATION is returned immediately.
  2. mCurrentState was set to MEDIA_RECORDER_IDLE in the MediaRecorder constructor, so init() is called here first to initialize.
  3. If the mIsAudioSourceSet flag is already set, INVALID_OPERATION is returned.
  4. mCurrentState is then checked against MEDIA_RECORDER_INITIALIZED; if it is not in that state, initialization failed, and INVALID_OPERATION is returned here as well.
  5. Finally BpMediaRecorder::setAudioSource(…) is called for further processing — ultimately it is served by MediaRecorderClient::setAudioSource(…) — and mIsAudioSourceSet can then be set to true to record that the audio source has been set.

frameworks/av/media/libmedia/mediarecorder.cpp

status_t MediaRecorder::setAudioSource(int as)
{ALOGV("setAudioSource(%d)", as);if (mMediaRecorder == NULL) {ALOGE("media recorder is not initialized yet");return INVALID_OPERATION;}if (mCurrentState & MEDIA_RECORDER_IDLE) {ALOGV("Call init() since the media recorder is not initialized yet");status_t ret = init();if (OK != ret) {return ret;}}if (mIsAudioSourceSet) {ALOGE("audio source has already been set");return INVALID_OPERATION;}if (!(mCurrentState & MEDIA_RECORDER_INITIALIZED)) {ALOGE("setAudioSource called in an invalid state(%d)", mCurrentState);return INVALID_OPERATION;}status_t ret = mMediaRecorder->setAudioSource(as);if (OK != ret) {ALOGV("setAudioSource failed: %d", ret);mCurrentState = MEDIA_RECORDER_ERROR;return ret;}mIsAudioSourceSet = true;return ret;
}

init() mainly calls MediaRecorderClient::init() and MediaRecorderClient::setListener(…) for further processing, and sets mCurrentState to MEDIA_RECORDER_INITIALIZED.

frameworks/av/media/libmedia/mediarecorder.cpp

status_t MediaRecorder::init()
{ALOGV("init");if (mMediaRecorder == NULL) {ALOGE("media recorder is not initialized yet");return INVALID_OPERATION;}if (!(mCurrentState & MEDIA_RECORDER_IDLE)) {ALOGE("init called in an invalid state(%d)", mCurrentState);return INVALID_OPERATION;}status_t ret = mMediaRecorder->init();if (OK != ret) {ALOGV("init failed: %d", ret);mCurrentState = MEDIA_RECORDER_ERROR;return ret;}ret = mMediaRecorder->setListener(this);if (OK != ret) {ALOGV("setListener failed: %d", ret);mCurrentState = MEDIA_RECORDER_ERROR;return ret;}mCurrentState = MEDIA_RECORDER_INITIALIZED;return ret;
}

MediaRecorderClient::init() simply hands the init action over to StagefrightRecorder::init().

MediaRecorderClient::setListener(…) looks complicated but does something very simple: it first calls StagefrightRecorder::setListener(…) to set the listener, and then inserts DeathNotifier objects one by one into the std::vector referenced by mDeathNotifiers. From the code it is easy to see that when the camera service, the OMX service, or a Codec2 service dies, a corresponding notification is sent to the native MediaRecorder object, whose notify(…) method then forwards the message to JNIMediaRecorderListener, which throws it up to the Java layer.

frameworks/av/media/libmediaplayerservice/MediaRecorderClient.cpp

status_t MediaRecorderClient::init()
{ALOGV("init");Mutex::Autolock lock(mLock);if (mRecorder == NULL) {ALOGE("recorder is not initialized");return NO_INIT;}return mRecorder->init();
}
......status_t MediaRecorderClient::setListener(const sp<IMediaRecorderClient>& listener)
{ALOGV("setListener");Mutex::Autolock lock(mLock);mDeathNotifiers.clear();if (mRecorder == NULL) {ALOGE("recorder is not initialized");return NO_INIT;}mRecorder->setListener(listener);sp<IServiceManager> sm = defaultServiceManager();// WORKAROUND: We don't know if camera exists here and getService might block for 5 seconds.// Use checkService for camera if we don't know it exists.static std::atomic<bool> sCameraChecked(false);  // once true never becomes false.static std::atomic<bool> sCameraVerified(false); // once true never becomes false.sp<IBinder> binder = (sCameraVerified || !sCameraChecked)? sm->getService(String16("media.camera")) : sm->checkService(String16("media.camera"));// If the device does not have a camera, do not create a death listener for it.if (binder != NULL) {sCameraVerified = true;mDeathNotifiers.emplace_back(binder, [l = wp<IMediaRecorderClient>(listener)](){sp<IMediaRecorderClient> listener = l.promote();if (listener) {ALOGV("media.camera service died. ""Sending death notification.");listener->notify(MEDIA_ERROR, MEDIA_ERROR_SERVER_DIED,MediaPlayerService::CAMERA_PROCESS_DEATH);} else {ALOGW("media.camera service died without a death handler.");}});}sCameraChecked = true;{using ::android::hidl::base::V1_0::IBase;// Listen to OMX's IOmxStore/default{sp<IBase> base = ::android::hardware::media::omx::V1_0::IOmx::getService();if (base == nullptr) {ALOGD("OMX service is not available");} else {mDeathNotifiers.emplace_back(base, [l = wp<IMediaRecorderClient>(listener)](){sp<IMediaRecorderClient> listener = l.promote();if (listener) {ALOGV("OMX service died. ""Sending death notification.");listener->notify(MEDIA_ERROR, MEDIA_ERROR_SERVER_DIED,MediaPlayerService::MEDIACODEC_PROCESS_DEATH);} else {ALOGW("OMX service died without a death handler.");}});}}// Listen to Codec2's IComponentStore instances{for (std::shared_ptr<Codec2Client> const& client :Codec2Client::CreateFromAllServices()) {sp<IBase> base = client->getBase();mDeathNotifiers.emplace_back(base, [l = wp<IMediaRecorderClient>(listener),name = std::string(client->getServiceName())]() {sp<IMediaRecorderClient> listener = l.promote();if (listener) {ALOGV("Codec2 service \"%s\" died. ""Sending death notification",name.c_str());listener->notify(MEDIA_ERROR, MEDIA_ERROR_SERVER_DIED,MediaPlayerService::MEDIACODEC_PROCESS_DEATH);} else {ALOGW("Codec2 service \"%s\" died ""without a death handler",name.c_str());}});}}}mAudioDeviceUpdatedNotifier = new AudioDeviceUpdatedNotifier(listener);mRecorder->setAudioDeviceCallback(mAudioDeviceUpdatedNotifier);return OK;
}

The DeathNotifier can receive a notification when a service dies because, when it is inserted into the container, its constructor calls linkToDeath(…) on the service binder to register for that notification.

frameworks/av/media/libmediaplayerservice/DeathNotifier.cpp

DeathNotifier::DeathNotifier(sp<IBinder> const& service, Notify const& notify)
      : mService{std::in_place_index<1>, service},
        mDeathRecipient{new DeathRecipient(notify)} {
    service->linkToDeath(mDeathRecipient);
}

DeathNotifier::DeathNotifier(sp<HBase> const& service, Notify const& notify)
      : mService{std::in_place_index<2>, service},
        mDeathRecipient{new DeathRecipient(notify)} {
    service->linkToDeath(mDeathRecipient, 0);
}
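The same binder death-notification mechanism is available to application code through IBinder.linkToDeath(…). A minimal hedged sketch (the binder variable and log tag are illustrative, not part of the recorder flow):

// Hedged sketch: registering a death recipient on any remote service binder.
// Needs android.os.IBinder, android.os.RemoteException, android.util.Log.
IBinder.DeathRecipient recipient = new IBinder.DeathRecipient() {
    @Override
    public void binderDied() {
        // The process hosting the remote service died; clean up or rebind here.
        Log.w("DeathDemo", "remote service died");
    }
};
try {
    binder.linkToDeath(recipient, 0 /* flags */);
} catch (RemoteException e) {
    // The remote process was already dead when we tried to register.
}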

StagefrightRecorder::init() is very simple: it just creates an ALooper, names it recorder_looper, and calls start() so the looper is ready for later message handling.

frameworks/av/media/libmediaplayerservice/StagefrightRecorder.cpp

status_t StagefrightRecorder::init() {
    ALOGV("init");

    mLooper = new ALooper;
    mLooper->setName("recorder_looper");
    mLooper->start();

    return OK;
}

Back on the setAudioSource(…) main line, MediaRecorderClient::setAudioSource(…) does two things:

  1. It calls checkPermission(…): if the calling PID equals the current process PID, permission is granted directly; otherwise checkCallingPermission(…) is consulted.
  2. It calls StagefrightRecorder::setAudioSource(…) for further processing.

frameworks/av/media/libmediaplayerservice/MediaRecorderClient.cpp

static bool checkPermission(const char* permissionString) {
    if (getpid() == IPCThreadState::self()->getCallingPid()) return true;
    bool ok = checkCallingPermission(String16(permissionString));
    if (!ok) ALOGE("Request requires %s", permissionString);
    return ok;
}
......
status_t MediaRecorderClient::setAudioSource(int as)
{
    ALOGV("setAudioSource(%d)", as);
    if (!checkPermission(recordAudioPermission)) {
        return PERMISSION_DENIED;
    }
    Mutex::Autolock lock(mLock);
    if (mRecorder == NULL)  {
        ALOGE("recorder is not initialized");
        return NO_INIT;
    }
    return mRecorder->setAudioSource((audio_source_t)as);
}

StagefrightRecorder::setAudioSource(…) first checks the argument and returns BAD_VALUE if it is out of range. It then assigns the mAudioSource field: if the argument is AUDIO_SOURCE_DEFAULT, mAudioSource is set to AUDIO_SOURCE_MIC (the microphone); otherwise the argument as is assigned directly.

In the current scenario mAudioSource ends up as AUDIO_SOURCE_MIC, since the MIC source was passed in from the Java layer.

frameworks/av/media/libmediaplayerservice/StagefrightRecorder.cpp

status_t StagefrightRecorder::setAudioSource(audio_source_t as) {
    ALOGV("setAudioSource: %d", as);
    if (as < AUDIO_SOURCE_DEFAULT ||
        (as >= AUDIO_SOURCE_CNT && as != AUDIO_SOURCE_FM_TUNER)) {
        ALOGE("Invalid audio source: %d", as);
        return BAD_VALUE;
    }

    if (as == AUDIO_SOURCE_DEFAULT) {
        mAudioSource = AUDIO_SOURCE_MIC;
    } else {
        mAudioSource = as;
    }

    return OK;
}

Next, the Java MediaRecorder setVideoSource(MediaRecorder.VideoSource.SURFACE) call, which sets the video source. android_media_MediaRecorder_setVideoSource(…) is the native implementation of the setVideoSource(…) JNI method; it mirrors the code above and mainly calls the native MediaRecorder::setVideoSource(…) for further processing.

frameworks/base/media/jni/android_media_MediaRecorder.cpp

static void
android_media_MediaRecorder_setVideoSource(JNIEnv *env, jobject thiz, jint vs)
{ALOGV("setVideoSource(%d)", vs);if (vs < VIDEO_SOURCE_DEFAULT || vs >= VIDEO_SOURCE_LIST_END) {jniThrowException(env, "java/lang/IllegalArgumentException", "Invalid video source");return;}sp<MediaRecorder> mr = getMediaRecorder(env, thiz);if (mr == NULL) {jniThrowException(env, "java/lang/IllegalStateException", NULL);return;}process_media_recorder_call(env, mr->setVideoSource(vs), "java/lang/RuntimeException", "setVideoSource failed.");
}

MediaRecorder::setVideoSource(…) is almost identical to MediaRecorder::setAudioSource(…); the differences are that it ultimately calls MediaRecorderClient::setVideoSource(…) and that the flag it updates is mIsVideoSourceSet.

frameworks/av/media/libmedia/mediarecorder.cpp

status_t MediaRecorder::setVideoSource(int vs)
{ALOGV("setVideoSource(%d)", vs);if (mMediaRecorder == NULL) {ALOGE("media recorder is not initialized yet");return INVALID_OPERATION;}if (mIsVideoSourceSet) {ALOGE("video source has already been set");return INVALID_OPERATION;}if (mCurrentState & MEDIA_RECORDER_IDLE) {ALOGV("Call init() since the media recorder is not initialized yet");status_t ret = init();if (OK != ret) {return ret;}}if (!(mCurrentState & MEDIA_RECORDER_INITIALIZED)) {ALOGE("setVideoSource called in an invalid state(%d)", mCurrentState);return INVALID_OPERATION;}// following call is made over the Binder Interfacestatus_t ret = mMediaRecorder->setVideoSource(vs);if (OK != ret) {ALOGV("setVideoSource failed: %d", ret);mCurrentState = MEDIA_RECORDER_ERROR;return ret;}mIsVideoSourceSet = true;return ret;
}

MediaRecorderClient::setVideoSource(…) performs a permission check and then calls StagefrightRecorder::setVideoSource(…).

frameworks/av/media/libmediaplayerservice/MediaRecorderClient.cpp

status_t MediaRecorderClient::setVideoSource(int vs)
{ALOGV("setVideoSource(%d)", vs);// Check camera permission for sources other than SURFACEif (vs != VIDEO_SOURCE_SURFACE && !checkPermission(cameraPermission)) {return PERMISSION_DENIED;}Mutex::Autolock lock(mLock);if (mRecorder == NULL)     {ALOGE("recorder is not initialized");return NO_INIT;}return mRecorder->setVideoSource((video_source)vs);
}

StagefrightRecorder::setVideoSource(…) first checks whether the argument vs is out of range; if it is valid it is assigned to mVideoSource, except that when vs == VIDEO_SOURCE_DEFAULT (the default), mVideoSource is set to VIDEO_SOURCE_CAMERA.

frameworks/av/media/libmediaplayerservice/StagefrightRecorder.cpp

status_t StagefrightRecorder::setVideoSource(video_source vs) {
    ALOGV("setVideoSource: %d", vs);
    if (vs < VIDEO_SOURCE_DEFAULT ||
        vs >= VIDEO_SOURCE_LIST_END) {
        ALOGE("Invalid video source: %d", vs);
        return BAD_VALUE;
    }

    if (vs == VIDEO_SOURCE_DEFAULT) {
        mVideoSource = VIDEO_SOURCE_CAMERA;
    } else {
        mVideoSource = vs;
    }

    return OK;
}

The output format, audio encoder, video encoder, bit rate, frame rate, video size and output file path are all ultimately handled by the corresponding StagefrightRecorder functions. These are largely alike: most of them first check the argument for range errors and then assign it to the corresponding field, substituting a sensible default when the argument is the default constant.

StagefrightRecorder::setOutputFile(int fd) is slightly different. It first calls ftruncate(int fd, off_t length), which resizes the file referred to by fd (an already open descriptor that must be writable) to length bytes; any data beyond that size is discarded. Here length is 0, so the file is emptied. If mOutputFd is already >= 0, a descriptor is already held and is closed first with close(…). Finally dup(…) is called (the new descriptor returned by dup is guaranteed to be the lowest-numbered descriptor currently available) and the duplicated descriptor is stored in the mOutputFd field.

frameworks/av/media/libmediaplayerservice/StagefrightRecorder.cpp


status_t StagefrightRecorder::setOutputFormat(output_format of) {
    ALOGV("setOutputFormat: %d", of);
    if (of < OUTPUT_FORMAT_DEFAULT ||
        of >= OUTPUT_FORMAT_LIST_END) {
        ALOGE("Invalid output format: %d", of);
        return BAD_VALUE;
    }

    if (of == OUTPUT_FORMAT_DEFAULT) {
        mOutputFormat = OUTPUT_FORMAT_THREE_GPP;
    } else {
        mOutputFormat = of;
    }

    return OK;
}

status_t StagefrightRecorder::setAudioEncoder(audio_encoder ae) {
    ALOGV("setAudioEncoder: %d", ae);
    if (ae < AUDIO_ENCODER_DEFAULT ||
        ae >= AUDIO_ENCODER_LIST_END) {
        ALOGE("Invalid audio encoder: %d", ae);
        return BAD_VALUE;
    }

    if (ae == AUDIO_ENCODER_DEFAULT) {
        mAudioEncoder = AUDIO_ENCODER_AMR_NB;
    } else {
        mAudioEncoder = ae;
    }

    return OK;
}

status_t StagefrightRecorder::setVideoEncoder(video_encoder ve) {
    ALOGV("setVideoEncoder: %d", ve);
    if (ve < VIDEO_ENCODER_DEFAULT ||
        ve >= VIDEO_ENCODER_LIST_END) {
        ALOGE("Invalid video encoder: %d", ve);
        return BAD_VALUE;
    }

    mVideoEncoder = ve;

    return OK;
}

status_t StagefrightRecorder::setVideoSize(int width, int height) {
    ALOGV("setVideoSize: %dx%d", width, height);
    if (width <= 0 || height <= 0) {
        ALOGE("Invalid video size: %dx%d", width, height);
        return BAD_VALUE;
    }

    // Additional check on the dimension will be performed later
    mVideoWidth = width;
    mVideoHeight = height;

    return OK;
}

status_t StagefrightRecorder::setVideoFrameRate(int frames_per_second) {
    ALOGV("setVideoFrameRate: %d", frames_per_second);
    if ((frames_per_second <= 0 && frames_per_second != -1) ||
        frames_per_second > kMaxHighSpeedFps) {
        ALOGE("Invalid video frame rate: %d", frames_per_second);
        return BAD_VALUE;
    }

    // Additional check on the frame rate will be performed later
    mFrameRate = frames_per_second;

    return OK;
}
......
status_t StagefrightRecorder::setOutputFile(int fd) {
    ALOGV("setOutputFile: %d", fd);
    if (fd < 0) {
        ALOGE("Invalid file descriptor: %d", fd);
        return -EBADF;
    }

    // start with a clean, empty file
    ftruncate(fd, 0);

    if (mOutputFd >= 0) {
        ::close(mOutputFd);
    }
    mOutputFd = dup(fd);

    return OK;
}

Now we can finally analyze prepare().

  1. If the file path mPath is set, a RandomAccessFile is constructed from mPath and _setOutputFile(…) is called with its FileDescriptor; this eventually reaches StagefrightRecorder::setOutputFile(…), which dups the descriptor, after which the RandomAccessFile is closed. Clearly, if the descriptor were not dup'ed, the descriptor passed down from the Java layer would eventually be closed and there would no longer be a valid reference to the actual file (a usage sketch follows the framework code below).
  2. Otherwise mFd and mFile are checked for null; those two branches are handled in a similar way.
  3. Finally the _prepare() JNI method hands processing over to the native layer.

frameworks/base/media/java/android/media/MediaRecorder.java

public class MediaRecorder implements AudioRouting,
                                      AudioRecordingMonitor,
                                      AudioRecordingMonitorClient,
                                      MicrophoneDirection
{
    ......
    @UnsupportedAppUsage(maxTargetSdk = Build.VERSION_CODES.P, trackingBug = 115609023)
    private native void _prepare() throws IllegalStateException, IOException;

    /**
     * Prepares the recorder to begin capturing and encoding data. This method
     * must be called after setting up the desired audio and video sources,
     * encoders, file format, etc., but before start().
     *
     * @throws IllegalStateException if it is called after
     * start() or before setOutputFormat().
     * @throws IOException if prepare fails otherwise.
     */
    public void prepare() throws IllegalStateException, IOException
    {
        if (mPath != null) {
            RandomAccessFile file = new RandomAccessFile(mPath, "rw");
            try {
                _setOutputFile(file.getFD());
            } finally {
                file.close();
            }
        } else if (mFd != null) {
            _setOutputFile(mFd);
        } else if (mFile != null) {
            RandomAccessFile file = new RandomAccessFile(mFile, "rw");
            try {
                _setOutputFile(file.getFD());
            } finally {
                file.close();
            }
        } else {
            throw new IOException("No valid output file");
        }

        _prepare();
    }
    ......
}
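Because the descriptor is dup()'d on the native side, an application that hands MediaRecorder its own FileDescriptor can close its copy once prepare() has returned. A minimal hedged sketch (the helper name and output file are illustrative):

// Hedged sketch: pass a FileDescriptor, then close our copy after prepare();
// by then StagefrightRecorder::setOutputFile() has already dup()'d it.
// Needs android.media.MediaRecorder, android.os.ParcelFileDescriptor, java.io.File, java.io.IOException.
private void prepareWithFd(MediaRecorder recorder, File out) throws IOException {
    ParcelFileDescriptor pfd = ParcelFileDescriptor.open(out,
            ParcelFileDescriptor.MODE_CREATE | ParcelFileDescriptor.MODE_READ_WRITE);
    try {
        recorder.setOutputFile(pfd.getFileDescriptor()); // Java side only stores mFd here
        recorder.prepare();                              // the native dup() happens in here
    } finally {
        pfd.close();                                     // our descriptor is no longer needed
    }
}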

The native implementation of _prepare() is android_media_MediaRecorder_prepare(…). It first reads the Java Surface jobject from the MediaRecorder mSurface field; in our scenario it was never assigned, so the code goes straight to the native MediaRecorder::prepare() call.

frameworks/base/media/jni/android_media_MediaRecorder.cpp

static void
android_media_MediaRecorder_prepare(JNIEnv *env, jobject thiz)
{ALOGV("prepare");sp<MediaRecorder> mr = getMediaRecorder(env, thiz);if (mr == NULL) {jniThrowException(env, "java/lang/IllegalStateException", NULL);return;}jobject surface = env->GetObjectField(thiz, fields.surface);if (surface != NULL) {......}process_media_recorder_call(env, mr->prepare(), "java/io/IOException", "prepare failed.");
}
MediaRecorder::prepare() then performs the following steps:

  1. Check that mMediaRecorder (which actually points to a BpMediaRecorder) is not NULL.
  2. Check that the current state is MEDIA_RECORDER_DATASOURCE_CONFIGURED; if not, return INVALID_OPERATION. This state is set in MediaRecorder::setOutputFormat(…).
  3. Check that the audio source and audio encoder are set consistently; if only one of the two is set, an error log pinpointing the problem is printed and INVALID_OPERATION is returned.
  4. Check the video source and video encoder in the same way.
  5. Call BnMediaRecorder::prepare() for further processing; it is actually implemented by its subclass MediaRecorderClient.
  6. Finally set mCurrentState to MEDIA_RECORDER_PREPARED, meaning the MediaRecorder is ready.

frameworks/av/media/libmedia/mediarecorder.cpp

status_t MediaRecorder::prepare()
{ALOGV("prepare");if (mMediaRecorder == NULL) {ALOGE("media recorder is not initialized yet");return INVALID_OPERATION;}if (!(mCurrentState & MEDIA_RECORDER_DATASOURCE_CONFIGURED)) {ALOGE("prepare called in an invalid state: %d", mCurrentState);return INVALID_OPERATION;}if (mIsAudioSourceSet != mIsAudioEncoderSet) {if (mIsAudioSourceSet) {ALOGE("audio source is set, but audio encoder is not set");} else {  // must not happen, since setAudioEncoder checks this alreadyALOGE("audio encoder is set, but audio source is not set");}return INVALID_OPERATION;}if (mIsVideoSourceSet != mIsVideoEncoderSet) {if (mIsVideoSourceSet) {ALOGE("video source is set, but video encoder is not set");} else {  // must not happen, since setVideoEncoder checks this alreadyALOGE("video encoder is set, but video source is not set");}return INVALID_OPERATION;}status_t ret = mMediaRecorder->prepare();if (OK != ret) {ALOGE("prepare failed: %d", ret);mCurrentState = MEDIA_RECORDER_ERROR;return ret;}mCurrentState = MEDIA_RECORDER_PREPARED;return ret;
}

MediaRecorderClient::prepare() simply calls StagefrightRecorder::prepare().

frameworks/av/media/libmediaplayerservice/MediaRecorderClient.cpp

status_t MediaRecorderClient::prepare()
{ALOGV("prepare");Mutex::Autolock lock(mLock);if (mRecorder == NULL) {ALOGE("recorder is not initialized");return NO_INIT;}return mRecorder->prepare();
}

StagefrightRecorder::prepare() first checks whether mVideoSource is VIDEO_SOURCE_SURFACE and, if so, calls prepareInternal(); in our scenario the video source is indeed a Surface.

prepareInternal() mainly does the following:

  1. Obtain the calling UID and PID for permission checking.
  2. Because the configured output format is OUTPUT_FORMAT_MPEG_4, call setupMPEG4orWEBMRecording() for further processing.

frameworks/av/media/libmediaplayerservice/StagefrightRecorder.cpp

status_t StagefrightRecorder::prepareInternal() {
    ALOGV("prepare");
    if (mOutputFd < 0) {
        ALOGE("Output file descriptor is invalid");
        return INVALID_OPERATION;
    }

    // Get UID and PID here for permission checking
    mClientUid = IPCThreadState::self()->getCallingUid();
    mClientPid = IPCThreadState::self()->getCallingPid();

    status_t status = OK;

    switch (mOutputFormat) {
        case OUTPUT_FORMAT_DEFAULT:
        case OUTPUT_FORMAT_THREE_GPP:
        case OUTPUT_FORMAT_MPEG_4:
        case OUTPUT_FORMAT_WEBM:
            status = setupMPEG4orWEBMRecording();
            break;

        case OUTPUT_FORMAT_AMR_NB:
        case OUTPUT_FORMAT_AMR_WB:
            status = setupAMRRecording();
            break;

        case OUTPUT_FORMAT_AAC_ADIF:
        case OUTPUT_FORMAT_AAC_ADTS:
            status = setupAACRecording();
            break;

        case OUTPUT_FORMAT_RTP_AVP:
            status = setupRTPRecording();
            break;

        case OUTPUT_FORMAT_MPEG2TS:
            status = setupMPEG2TSRecording();
            break;

        case OUTPUT_FORMAT_OGG:
            status = setupOggRecording();
            break;

        default:
            ALOGE("Unsupported output file format: %d", mOutputFormat);
            status = UNKNOWN_ERROR;
            break;
    }

    ALOGV("Recording frameRate: %d captureFps: %f",
            mFrameRate, mCaptureFps);

    return status;
}

status_t StagefrightRecorder::prepare() {
    ALOGV("prepare");
    Mutex::Autolock autolock(mLock);
    if (mVideoSource == VIDEO_SOURCE_SURFACE) {
        return prepareInternal();
    }
    return OK;
}
setupMPEG4orWEBMRecording() proceeds as follows:

  1. Since the output format was set to MP4, the branch that creates an MPEG4Writer is taken.
  2. The video source is a Surface, so setDefaultVideoEncoderIfNecessary() is called; it only takes effect if no encoder was configured, in which case it picks a default one.
  3. setupMediaSource(…) is called to set up the video source.
  4. setupVideoEncoder(…) is called to set up the video encoder.
  5. MPEG4Writer::addSource(…) is called to add the video encoder.
  6. If audio is not disabled and an audio source is configured, setupAudioEncoder(…) is called to set up the audio encoder.
  7. According to the configuration, the MPEG4Writer is given the capture frame rate, interleave duration, geo data (latitude/longitude), maximum file duration, maximum file size and start-time offset (since the configured video source is a Surface, the offset is simply set to 100 ms).
  8. Finally a listener is set on the MPEG4Writer.

frameworks/av/media/libmediaplayerservice/StagefrightRecorder.cpp

status_t StagefrightRecorder::setupMPEG4orWEBMRecording() {
    mWriter.clear();
    mTotalBitRate = 0;

    status_t err = OK;
    sp<MediaWriter> writer;
    sp<MPEG4Writer> mp4writer;
    if (mOutputFormat == OUTPUT_FORMAT_WEBM) {
        writer = new WebmWriter(mOutputFd);
    } else {
        writer = mp4writer = new MPEG4Writer(mOutputFd);
    }

    if (mVideoSource < VIDEO_SOURCE_LIST_END) {
        setDefaultVideoEncoderIfNecessary();

        sp<MediaSource> mediaSource;
        err = setupMediaSource(&mediaSource);
        if (err != OK) {
            return err;
        }

        sp<MediaCodecSource> encoder;
        err = setupVideoEncoder(mediaSource, &encoder);
        if (err != OK) {
            return err;
        }

        writer->addSource(encoder);
        mVideoEncoderSource = encoder;
        mTotalBitRate += mVideoBitRate;
    }

    // Audio source is added at the end if it exists.
    // This help make sure that the "recoding" sound is suppressed for
    // camcorder applications in the recorded files.
    // disable audio for time lapse recording
    const bool disableAudio = mCaptureFpsEnable && mCaptureFps < mFrameRate;
    if (!disableAudio && mAudioSource != AUDIO_SOURCE_CNT) {
        err = setupAudioEncoder(writer);
        if (err != OK) return err;
        mTotalBitRate += mAudioBitRate;
    }

    if (mOutputFormat != OUTPUT_FORMAT_WEBM) {
        if (mCaptureFpsEnable) {
            mp4writer->setCaptureRate(mCaptureFps);
        }

        if (mInterleaveDurationUs > 0) {
            mp4writer->setInterleaveDuration(mInterleaveDurationUs);
        }
        if (mLongitudex10000 > -3600000 && mLatitudex10000 > -3600000) {
            mp4writer->setGeoData(mLatitudex10000, mLongitudex10000);
        }
    }
    if (mMaxFileDurationUs != 0) {
        writer->setMaxFileDuration(mMaxFileDurationUs);
    }
    if (mMaxFileSizeBytes != 0) {
        writer->setMaxFileSize(mMaxFileSizeBytes);
    }
    if (mVideoSource == VIDEO_SOURCE_DEFAULT
            || mVideoSource == VIDEO_SOURCE_CAMERA) {
        mStartTimeOffsetMs = mEncoderProfiles->getStartTimeOffsetMs(mCameraId);
    } else if (mVideoSource == VIDEO_SOURCE_SURFACE) {
        // surface source doesn't need large initial delay
        mStartTimeOffsetMs = 100;
    }
    if (mStartTimeOffsetMs > 0) {
        writer->setStartTimeOffsetMs(mStartTimeOffsetMs);
    }

    writer->setListener(mListener);
    mWriter = writer;
    return OK;
}

setDefaultVideoEncoderIfNecessary() only does something when mVideoEncoder is still the default VIDEO_ENCODER_DEFAULT. If the file format is WEBM, mVideoEncoder is set to VP8; otherwise the default encoder for CAMCORDER_QUALITY_LOW is picked, falling back to H.264 if no camcorder profile is available.

frameworks/av/media/libmediaplayerservice/StagefrightRecorder.cpp

void StagefrightRecorder::setDefaultVideoEncoderIfNecessary() {
    if (mVideoEncoder == VIDEO_ENCODER_DEFAULT) {
        if (mOutputFormat == OUTPUT_FORMAT_WEBM) {
            // default to VP8 for WEBM recording
            mVideoEncoder = VIDEO_ENCODER_VP8;
        } else {
            // pick the default encoder for CAMCORDER_QUALITY_LOW
            int videoCodec = mEncoderProfiles->getCamcorderProfileParamByName(
                    "vid.codec", mCameraId, CAMCORDER_QUALITY_LOW);

            if (videoCodec > VIDEO_ENCODER_DEFAULT &&
                videoCodec < VIDEO_ENCODER_LIST_END) {
                mVideoEncoder = (video_encoder)videoCodec;
            } else {
                // default to H.264 if camcorder profile not available
                mVideoEncoder = VIDEO_ENCODER_H264;
            }
        }
    }
}
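On the application side the same profile data is exposed through android.media.CamcorderProfile, so an app can pick settings that are known to be supported instead of hard-coding them. A minimal hedged sketch (camera id 0 is assumed; setAudioSource()/setVideoSource() must already have been called):

// Hedged sketch: apply the device's CAMCORDER_QUALITY_LOW profile.
// Needs android.media.CamcorderProfile and android.media.MediaRecorder.
CamcorderProfile profile = CamcorderProfile.get(0, CamcorderProfile.QUALITY_LOW);
mMediaRecorder.setProfile(profile);   // sets output format, codecs, bit rates, frame rate and size
// Individual fields can also be copied instead, e.g.:
// mMediaRecorder.setVideoEncodingBitRate(profile.videoBitRate);
// mMediaRecorder.setVideoSize(profile.videoFrameWidth, profile.videoFrameHeight);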

In our scenario mVideoSource equals VIDEO_SOURCE_SURFACE, so *mediaSource is simply set to NULL.

frameworks/av/media/libmediaplayerservice/StagefrightRecorder.cpp

// Set up the appropriate MediaSource depending on the chosen option
status_t StagefrightRecorder::setupMediaSource(sp<MediaSource> *mediaSource) {if (mVideoSource == VIDEO_SOURCE_DEFAULT|| mVideoSource == VIDEO_SOURCE_CAMERA) {sp<CameraSource> cameraSource;status_t err = setupCameraSource(&cameraSource);if (err != OK) {return err;}*mediaSource = cameraSource;} else if (mVideoSource == VIDEO_SOURCE_SURFACE) {*mediaSource = NULL;} else {return INVALID_OPERATION;}return OK;
}
setupVideoEncoder(…) does three things:

  1. Collect the various video encoder parameters and write them into an AMessage.
  2. Call MediaCodecSource::Create(…) to create a MediaCodecSource object.
  3. Obtain the IGraphicBufferProducer (IGBP) from the MediaCodecSource; it is used to build the Surface that MediaRecorder hands to the Camera as the rendering target.

frameworks/av/media/libmediaplayerservice/StagefrightRecorder.cpp

status_t StagefrightRecorder::setupVideoEncoder(
        const sp<MediaSource> &cameraSource,
        sp<MediaCodecSource> *source) {
    source->clear();

    sp<AMessage> format = new AMessage();

    switch (mVideoEncoder) {
        ......
        case VIDEO_ENCODER_H264:
            format->setString("mime", MEDIA_MIMETYPE_VIDEO_AVC);
            break;
        ......
    }
    ......
    if (cameraSource != NULL) {
        ......
    } else {
        format->setInt32("width", mVideoWidth);
        format->setInt32("height", mVideoHeight);
        format->setInt32("stride", mVideoWidth);
        format->setInt32("slice-height", mVideoHeight);
        format->setInt32("color-format", OMX_COLOR_FormatAndroidOpaque);

        // set up time lapse/slow motion for surface source
        if (mCaptureFpsEnable) {
            if (!(mCaptureFps > 0.)) {
                ALOGE("Invalid mCaptureFps value: %lf", mCaptureFps);
                return BAD_VALUE;
            }
            format->setDouble("time-lapse-fps", mCaptureFps);
        }
    }

    format->setInt32("bitrate", mVideoBitRate);
    format->setInt32("frame-rate", mFrameRate);
    format->setInt32("i-frame-interval", mIFramesIntervalSec);

    if (mVideoTimeScale > 0) {
        format->setInt32("time-scale", mVideoTimeScale);
    }
    if (mVideoEncoderProfile != -1) {
        format->setInt32("profile", mVideoEncoderProfile);
    }
    if (mVideoEncoderLevel != -1) {
        format->setInt32("level", mVideoEncoderLevel);
    }

    uint32_t tsLayers = 1;
    bool preferBFrames = true; // we like B-frames as it produces better quality per bitrate
    format->setInt32("priority", 0 /* realtime */);
    float maxPlaybackFps = mFrameRate; // assume video is only played back at normal speed

    if (mCaptureFpsEnable) {
        format->setFloat("operating-rate", mCaptureFps);

        // enable layering for all time lapse and high frame rate recordings
        if (mFrameRate / mCaptureFps >= 1.9) { // time lapse
            preferBFrames = false;
            tsLayers = 2; // use at least two layers as resulting video will likely be sped up
        } else if (mCaptureFps > maxPlaybackFps) { // slow-mo
            maxPlaybackFps = mCaptureFps; // assume video will be played back at full capture speed
            preferBFrames = false;
        }
    }

    // Enable temporal layering if the expected (max) playback frame rate is greater than ~11% of
    // the minimum display refresh rate on a typical device. Add layers until the base layer falls
    // under this limit. Allow device manufacturers to override this limit.

    // TODO: make this configurable by the application
    std::string maxBaseLayerFpsProperty =
        ::android::base::GetProperty("ro.media.recorder-max-base-layer-fps", "");
    float maxBaseLayerFps = (float)::atof(maxBaseLayerFpsProperty.c_str());
    // TRICKY: use !> to fix up any NaN values
    if (!(maxBaseLayerFps >= kMinTypicalDisplayRefreshingRate / 0.9)) {
        maxBaseLayerFps = kMinTypicalDisplayRefreshingRate / 0.9;
    }

    for (uint32_t tryLayers = 1; tryLayers <= kMaxNumVideoTemporalLayers; ++tryLayers) {
        if (tryLayers > tsLayers) {
            tsLayers = tryLayers;
        }
        // keep going until the base layer fps falls below the typical display refresh rate
        float baseLayerFps = maxPlaybackFps / (1 << (tryLayers - 1));
        if (baseLayerFps < maxBaseLayerFps) {
            break;
        }
    }

    if (tsLayers > 1) {
        uint32_t bLayers = std::min(2u, tsLayers - 1); // use up-to 2 B-layers
        uint32_t pLayers = tsLayers - bLayers;
        format->setString(
                "ts-schema", AStringPrintf("android.generic.%u+%u", pLayers, bLayers));

        // TODO: some encoders do not support B-frames with temporal layering, and we have a
        // different preference based on use-case. We could move this into camera profiles.
        format->setInt32("android._prefer-b-frames", preferBFrames);
    }

    if (mMetaDataStoredInVideoBuffers != kMetadataBufferTypeInvalid) {
        format->setInt32("android._input-metadata-buffer-type", mMetaDataStoredInVideoBuffers);
    }

    uint32_t flags = 0;
    if (cameraSource == NULL) {
        flags |= MediaCodecSource::FLAG_USE_SURFACE_INPUT;
    } else {
        // require dataspace setup even if not using surface input
        format->setInt32("android._using-recorder", 1);
    }

    sp<MediaCodecSource> encoder = MediaCodecSource::Create(
            mLooper, format, cameraSource, mPersistentSurface, flags);
    if (encoder == NULL) {
        ALOGE("Failed to create video encoder");
        // When the encoder fails to be created, we need
        // release the camera source due to the camera's lock
        // and unlock mechanism.
        if (cameraSource != NULL) {
            cameraSource->stop();
        }
        return UNKNOWN_ERROR;
    }

    if (cameraSource == NULL) {
        mGraphicBufferProducer = encoder->getGraphicBufferProducer();
    }

    *source = encoder;

    return OK;
}

In MediaCodecSource::Create(…) both the source and persistentSurface arguments are NULL here. The method constructs a MediaCodecSource object and calls its init() for further initialization.

frameworks/av/media/libstagefright/MediaCodecSource.cpp

sp<MediaCodecSource> MediaCodecSource::Create(
        const sp<ALooper> &looper,
        const sp<AMessage> &format,
        const sp<MediaSource> &source,
        const sp<PersistentSurface> &persistentSurface,
        uint32_t flags) {
    sp<MediaCodecSource> mediaSource = new MediaCodecSource(
            looper, format, source, persistentSurface, flags);

    if (mediaSource->init() == OK) {
        return mediaSource;
    }
    return NULL;
}

MediaCodecSource::init() mainly calls initEncoder() for further processing.

initEncoder() initializes the encoder:

  1. Create an ALooper object named codec_looper.
  2. Call MediaCodecList::findMatchingCodecs(…) to find the list of matching encoders.
  3. Iterate over that list, calling MediaCodec::CreateByComponentName(…) to create a MediaCodec; as soon as mEncoder is non-NULL and its configure(…) succeeds, break out of the loop.
  4. Now the key step: call MediaCodec::createInputSurface(…) to create the input Surface.
  5. Call MediaCodec::start() to start the encoder.

For the details of codec configuration, see the MediaCodec series of articles.

frameworks/av/media/libstagefright/MediaCodecSource.cpp

status_t MediaCodecSource::init() {
    status_t err = initEncoder();

    if (err != OK) {
        releaseEncoder();
    }

    return err;
}

status_t MediaCodecSource::initEncoder() {

    mReflector = new AHandlerReflector<MediaCodecSource>(this);
    mLooper->registerHandler(mReflector);

    mCodecLooper = new ALooper;
    mCodecLooper->setName("codec_looper");
    mCodecLooper->start();

    if (mFlags & FLAG_USE_SURFACE_INPUT) {
        mOutputFormat->setInt32(KEY_CREATE_INPUT_SURFACE_SUSPENDED, 1);
    }

    AString outputMIME;
    CHECK(mOutputFormat->findString("mime", &outputMIME));
    mIsVideo = outputMIME.startsWithIgnoreCase("video/");

    AString name;
    status_t err = NO_INIT;
    if (mOutputFormat->findString("testing-name", &name)) {
        ......
    } else {
        Vector<AString> matchingCodecs;
        MediaCodecList::findMatchingCodecs(
                outputMIME.c_str(), true /* encoder */,
                ((mFlags & FLAG_PREFER_SOFTWARE_CODEC) ? MediaCodecList::kPreferSoftwareCodecs : 0),
                &matchingCodecs);

        for (size_t ix = 0; ix < matchingCodecs.size(); ++ix) {
            mEncoder = MediaCodec::CreateByComponentName(
                    mCodecLooper, matchingCodecs[ix]);

            if (mEncoder == NULL) {
                continue;
            }

            ALOGV("output format is '%s'", mOutputFormat->debugString(0).c_str());

            mEncoderActivityNotify = new AMessage(kWhatEncoderActivity, mReflector);
            mEncoder->setCallback(mEncoderActivityNotify);

            err = mEncoder->configure(
                        mOutputFormat,
                        NULL /* nativeWindow */,
                        NULL /* crypto */,
                        MediaCodec::CONFIGURE_FLAG_ENCODE);

            if (err == OK) {
                break;
            }
            mEncoder->release();
            mEncoder = NULL;
        }
    }

    if (err != OK) {
        return err;
    }

    mEncoder->getOutputFormat(&mOutputFormat);
    sp<MetaData> meta = new MetaData;
    convertMessageToMetaData(mOutputFormat, meta);
    mMeta.lock().set(meta);

    if (mFlags & FLAG_USE_SURFACE_INPUT) {
        CHECK(mIsVideo);

        if (mPersistentSurface != NULL) {
            // When using persistent surface, we are only interested in the
            // consumer, but have to use PersistentSurface as a wrapper to
            // pass consumer over messages (similar to BufferProducerWrapper)
            err = mEncoder->setInputSurface(mPersistentSurface);
        } else {
            err = mEncoder->createInputSurface(&mGraphicBufferProducer);
        }

        if (err != OK) {
            return err;
        }
    }

    sp<AMessage> inputFormat;
    int32_t usingSwReadOften;
    mSetEncoderFormat = false;
    if (mEncoder->getInputFormat(&inputFormat) == OK) {
        mSetEncoderFormat = true;
        if (inputFormat->findInt32("using-sw-read-often", &usingSwReadOften)
                && usingSwReadOften) {
            // this is a SW encoder; signal source to allocate SW readable buffers
            mEncoderFormat = kDefaultSwVideoEncoderFormat;
        } else {
            mEncoderFormat = kDefaultHwVideoEncoderFormat;
        }
        if (!inputFormat->findInt32("android._dataspace", &mEncoderDataSpace)) {
            mEncoderDataSpace = kDefaultVideoEncoderDataSpace;
        }
        ALOGV("setting dataspace %#x, format %#x", mEncoderDataSpace, mEncoderFormat);
    }

    err = mEncoder->start();

    if (err != OK) {
        return err;
    }

    {
        Mutexed<Output>::Locked output(mOutput);
        output->mEncoderReachedEOS = false;
        output->mErrorCode = OK;
    }

    return OK;
}
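The same configure → createInputSurface() → start() sequence is exposed to applications through the public MediaCodec API. A minimal hedged sketch of a surface-input H.264 encoder (resolution, bit rate and frame rate are illustrative; exception handling is omitted):

// Hedged sketch: surface-input H.264 encoder via the public MediaCodec API,
// mirroring what initEncoder() does internally.
// Needs android.media.MediaCodec, MediaCodecInfo, MediaFormat and android.view.Surface.
MediaFormat format = MediaFormat.createVideoFormat("video/avc", 1920, 1080);
format.setInteger(MediaFormat.KEY_COLOR_FORMAT,
        MediaCodecInfo.CodecCapabilities.COLOR_FormatSurface); // required for surface input
format.setInteger(MediaFormat.KEY_BIT_RATE, 12_000_000);
format.setInteger(MediaFormat.KEY_FRAME_RATE, 30);
format.setInteger(MediaFormat.KEY_I_FRAME_INTERVAL, 1);

MediaCodec encoder = MediaCodec.createEncoderByType("video/avc"); // may throw IOException
encoder.configure(format, null /* surface */, null /* crypto */, MediaCodec.CONFIGURE_FLAG_ENCODE);
Surface inputSurface = encoder.createInputSurface(); // only valid between configure() and start()
encoder.start();
// Render frames into inputSurface, drain encoded output via MediaCodec callbacks,
// then stop() and release() the codec when recording ends.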

StagefrightRecorder::setupAudioEncoder(…) mainly calls createAudioSource() to create a MediaCodecSource (which internally starts the audio hardware encoder, a MediaCodec) and then calls MPEG4Writer addSource(…) to add that MediaCodecSource.

frameworks/av/media/libmediaplayerservice/StagefrightRecorder.cpp

status_t StagefrightRecorder::setupAudioEncoder(const sp<MediaWriter>& writer) {
    status_t status = BAD_VALUE;
    if (OK != (status = checkAudioEncoderCapabilities())) {
        return status;
    }

    switch(mAudioEncoder) {
        case AUDIO_ENCODER_AMR_NB:
        case AUDIO_ENCODER_AMR_WB:
        case AUDIO_ENCODER_AAC:
        case AUDIO_ENCODER_HE_AAC:
        case AUDIO_ENCODER_AAC_ELD:
        case AUDIO_ENCODER_OPUS:
            break;

        default:
            ALOGE("Unsupported audio encoder: %d", mAudioEncoder);
            return UNKNOWN_ERROR;
    }

    sp<MediaCodecSource> audioEncoder = createAudioSource();
    if (audioEncoder == NULL) {
        return UNKNOWN_ERROR;
    }

    writer->addSource(audioEncoder);
    mAudioEncoderSource = audioEncoder;
    return OK;
}

Finally, what does MPEG4Writer::addSource(…) do? It first checks whether a source is being added after recording has already started; if so it logs an error and bails out. Otherwise it creates the corresponding Track and pushes it onto the List referenced by mTracks.

frameworks/av/media/libstagefright/MPEG4Writer.cpp

status_t MPEG4Writer::addSource(const sp<MediaSource> &source) {
    Mutex::Autolock l(mLock);
    if (mStarted) {
        ALOGE("Attempt to add source AFTER recording is started");
        return UNKNOWN_ERROR;
    }

    CHECK(source.get() != NULL);

    const char *mime;
    source->getFormat()->findCString(kKeyMIMEType, &mime);

    if (Track::getFourCCForMime(mime) == NULL) {
        ALOGE("Unsupported mime '%s'", mime);
        return ERROR_UNSUPPORTED;
    }

    // This is a metadata track or the first track of either audio or video
    // Go ahead to add the track.
    Track *track = new Track(this, source, 1 + mTracks.size());
    mTracks.push_back(track);
    mHasMoovBox |= !track->isHeic();
    mHasFileLevelMeta |= track->isHeic();

    return OK;
}

