Android's camera stack has never been easy to work with; neither the first- nor the second-generation API is particularly developer-friendly. I recently needed to grab preview frames in real time for a project. I built it on Camera2, but Google's demo only takes still pictures and offers no real-time callback comparable to Camera1's onPreviewCallback. After some digging I learned that with Camera2 this is done through an ImageReader, so starting from the demo source I wrapped it in a helper class of my own. Here is what using the wrapper looks like:
public class MainActivity extends AppCompatActivity {

    private AutoFitTextureView mTextureView;
    private Camera2Helper helper;

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.activity_main);
        init();
    }

    @Override
    public void onResume() {
        super.onResume();
        helper.open(); // Requests the camera permission at runtime; override onRequestPermissionsResult to handle the result
    }

    public void init() {
        mTextureView = findViewById(R.id.texture);
        helper = new Camera2Helper(this, mTextureView);
        helper.setOnImageAvailableListener(new Camera2Helper.OnPreviewCallbackListener() {
            @Override
            public void onImageAvailable(Image image) {
                Log.d("weijw1", "helper onImageAvailable");
            }
        });
    }

    @Override
    public void onPause() {
        helper.closeCamera();
        super.onPause();
    }

    @Override
    public void onRequestPermissionsResult(int requestCode, @NonNull String[] permissions,
                                           @NonNull int[] grantResults) {
        if (requestCode == helper.getCameraRequestCode()) {
            if (grantResults.length != 1 || grantResults[0] != PackageManager.PERMISSION_GRANTED) {
                Toast.makeText(getApplicationContext(), R.string.request_permission, Toast.LENGTH_SHORT).show();
            }
        } else {
            super.onRequestPermissionsResult(requestCode, permissions, grantResults);
        }
    }
}
The wrapper class is called Camera2Helper. Its public surface is deliberately small: open, closeCamera, and setOnImageAvailableListener for receiving the frame callback.
public class Camera2Helper {

    private static final String TAG = "Camera2Helper";
    private Activity mActivity;
    private AutoFitTextureView mTextureView;
    private HandlerThread mBackgroundThread;
    private Handler mBackgroundHandler;
    private CaptureRequest.Builder mPreviewRequestBuilder;
    private CaptureRequest mPreviewRequest;
    private static final int REQUEST_CAMERA_PERMISSION = 1;
    private ImageReader mImageReader;
    private static final int STATE_PREVIEW = 0;
    private static final int STATE_WAITING_LOCK = 1;
    private static final int STATE_WAITING_PRECAPTURE = 2;
    private static final int STATE_WAITING_NON_PRECAPTURE = 3;
    private static final int STATE_PICTURE_TAKEN = 4;
    private static final int MAX_PREVIEW_WIDTH = 1920;
    private static final int MAX_PREVIEW_HEIGHT = 1080;
    private int mImageFormat = ImageFormat.YUV_420_888;
    private int mState = STATE_PREVIEW;
    private int mSensorOrientation; // Orientation of the camera sensor
    private String mCameraId;
    private Semaphore mCameraOpenCloseLock = new Semaphore(1);
    private CameraCaptureSession mCaptureSession;
    private CameraDevice mCameraDevice;
    private Size mPreviewSize;
    private OnOpenErrorListener mOpenErrorListener; // Open-error listener; the default policy is used if none is set
    private OnPreviewCallbackListener mImageAvaiableListener; // Called whenever a preview frame becomes available

    public Camera2Helper(@NonNull Activity activity, @NonNull AutoFitTextureView textureView) {
        this.mActivity = activity;
        this.mTextureView = textureView;
    }
    /**
     * {@link TextureView.SurfaceTextureListener} handles several lifecycle events on a
     * {@link TextureView}.
     */
    private final TextureView.SurfaceTextureListener mSurfaceTextureListener
            = new TextureView.SurfaceTextureListener() {

        @Override
        public void onSurfaceTextureAvailable(SurfaceTexture texture, int width, int height) {
            openCamera(width, height);
        }

        @Override
        public void onSurfaceTextureSizeChanged(SurfaceTexture texture, int width, int height) {
            configureTransform(width, height);
        }

        @Override
        public boolean onSurfaceTextureDestroyed(SurfaceTexture texture) {
            return true;
        }

        @Override
        public void onSurfaceTextureUpdated(SurfaceTexture texture) {
        }
    };
    /**
     * {@link CameraDevice.StateCallback} is called when {@link CameraDevice} changes its state.
     */
    private final CameraDevice.StateCallback mStateCallback = new CameraDevice.StateCallback() {

        @Override
        public void onOpened(@NonNull CameraDevice cameraDevice) {
            // This method is called when the camera is opened. We start camera preview here.
            mCameraOpenCloseLock.release();
            mCameraDevice = cameraDevice;
            createCameraPreviewSession();
        }

        @Override
        public void onDisconnected(@NonNull CameraDevice cameraDevice) {
            mCameraOpenCloseLock.release();
            cameraDevice.close();
            mCameraDevice = null;
        }

        @Override
        public void onError(@NonNull CameraDevice cameraDevice, int error) {
            mCameraOpenCloseLock.release();
            cameraDevice.close();
            mCameraDevice = null;
            if (mOpenErrorListener != null) {
                mOpenErrorListener.onOpenError();
            } else {
                mActivity.finish();
            }
        }
    };
    /**
     * A {@link CameraCaptureSession.CaptureCallback} that handles events related to JPEG capture.
     */
    private CameraCaptureSession.CaptureCallback mCaptureCallback
            = new CameraCaptureSession.CaptureCallback() {

        private void process(CaptureResult result) {
            switch (mState) {
                case STATE_PREVIEW: {
                    // We have nothing to do when the camera preview is working normally.
                    break;
                }
                case STATE_WAITING_LOCK: {
                    Integer afState = result.get(CaptureResult.CONTROL_AF_STATE);
                    if (afState == null) {
                    } else if (CaptureResult.CONTROL_AF_STATE_FOCUSED_LOCKED == afState ||
                            CaptureResult.CONTROL_AF_STATE_NOT_FOCUSED_LOCKED == afState) {
                        // CONTROL_AE_STATE can be null on some devices
                        Integer aeState = result.get(CaptureResult.CONTROL_AE_STATE);
                        if (aeState == null ||
                                aeState == CaptureResult.CONTROL_AE_STATE_CONVERGED) {
                            mState = STATE_PICTURE_TAKEN;
                        } else {
                            runPrecaptureSequence();
                        }
                    }
                    break;
                }
                case STATE_WAITING_PRECAPTURE: {
                    // CONTROL_AE_STATE can be null on some devices
                    Integer aeState = result.get(CaptureResult.CONTROL_AE_STATE);
                    if (aeState == null ||
                            aeState == CaptureResult.CONTROL_AE_STATE_PRECAPTURE ||
                            aeState == CaptureRequest.CONTROL_AE_STATE_FLASH_REQUIRED) {
                        mState = STATE_WAITING_NON_PRECAPTURE;
                    }
                    break;
                }
                case STATE_WAITING_NON_PRECAPTURE: {
                    // CONTROL_AE_STATE can be null on some devices
                    Integer aeState = result.get(CaptureResult.CONTROL_AE_STATE);
                    if (aeState == null || aeState != CaptureResult.CONTROL_AE_STATE_PRECAPTURE) {
                        mState = STATE_PICTURE_TAKEN;
                    }
                    break;
                }
            }
        }

        @Override
        public void onCaptureProgressed(@NonNull CameraCaptureSession session,
                                        @NonNull CaptureRequest request,
                                        @NonNull CaptureResult partialResult) {
            process(partialResult);
        }

        @Override
        public void onCaptureCompleted(@NonNull CameraCaptureSession session,
                                       @NonNull CaptureRequest request,
                                       @NonNull TotalCaptureResult result) {
            process(result);
        }
    };
    /**
     * Run the precapture sequence for capturing a still image. This method should be called when
     * we get a response in {@link #mCaptureCallback} from a focus-lock request.
     */
    private void runPrecaptureSequence() {
        try {
            // This is how to tell the camera to trigger.
            mPreviewRequestBuilder.set(CaptureRequest.CONTROL_AE_PRECAPTURE_TRIGGER,
                    CaptureRequest.CONTROL_AE_PRECAPTURE_TRIGGER_START);
            // Tell #mCaptureCallback to wait for the precapture sequence to be set.
            mState = STATE_WAITING_PRECAPTURE;
            mCaptureSession.capture(mPreviewRequestBuilder.build(), mCaptureCallback,
                    mBackgroundHandler);
        } catch (CameraAccessException e) {
            e.printStackTrace();
        }
    }
    /**
     * Creates a new {@link CameraCaptureSession} for camera preview.
     */
    private void createCameraPreviewSession() {
        try {
            SurfaceTexture texture = mTextureView.getSurfaceTexture();
            assert texture != null;

            // We configure the size of default buffer to be the size of camera preview we want.
            texture.setDefaultBufferSize(mPreviewSize.getWidth(), mPreviewSize.getHeight());

            // This is the output Surface we need to start preview.
            Surface surface = new Surface(texture);

            // We set up a CaptureRequest.Builder with the output Surface.
            mPreviewRequestBuilder
                    = mCameraDevice.createCaptureRequest(CameraDevice.TEMPLATE_PREVIEW);
            mPreviewRequestBuilder.addTarget(surface);
            mPreviewRequestBuilder.addTarget(mImageReader.getSurface()); // Without this target the ImageReader never receives frames

            // Here, we create a CameraCaptureSession for camera preview.
            mCameraDevice.createCaptureSession(Arrays.asList(surface, mImageReader.getSurface()),
                    new CameraCaptureSession.StateCallback() {

                        @Override
                        public void onConfigured(@NonNull CameraCaptureSession cameraCaptureSession) {
                            // The camera is already closed
                            if (null == mCameraDevice) {
                                return;
                            }

                            // When the session is ready, we start displaying the preview.
                            mCaptureSession = cameraCaptureSession;
                            try {
                                // Auto focus should be continuous for camera preview.
                                mPreviewRequestBuilder.set(CaptureRequest.CONTROL_AF_MODE,
                                        CaptureRequest.CONTROL_AF_MODE_CONTINUOUS_PICTURE);

                                // Finally, we start displaying the camera preview.
                                mPreviewRequest = mPreviewRequestBuilder.build();
                                mCaptureSession.setRepeatingRequest(mPreviewRequest,
                                        mCaptureCallback, mBackgroundHandler);
                            } catch (CameraAccessException e) {
                                e.printStackTrace();
                            }
                        }

                        @Override
                        public void onConfigureFailed(
                                @NonNull CameraCaptureSession cameraCaptureSession) {
                            Toast.makeText(mActivity, "Config Session Failed", Toast.LENGTH_SHORT).show();
                        }
                    }, null
            );
        } catch (CameraAccessException e) {
            e.printStackTrace();
        }
    }
    private void requestCameraPermission() {
        ActivityCompat.requestPermissions(mActivity, new String[]{
                Manifest.permission.CAMERA,
        }, REQUEST_CAMERA_PERMISSION);
    }

    public void open() {
        startBackgroundThread();
        // When the screen is turned off and turned back on, the SurfaceTexture is already
        // available, and "onSurfaceTextureAvailable" will not be called. In that case, we can open
        // a camera and start preview from here (otherwise, we wait until the surface is ready in
        // the SurfaceTextureListener).
        if (mTextureView.isAvailable()) {
            openCamera(mTextureView.getWidth(), mTextureView.getHeight());
        } else {
            mTextureView.setSurfaceTextureListener(mSurfaceTextureListener);
        }
    }

    /**
     * Opens the camera specified by {@link #mCameraId}.
     */
    private void openCamera(int width, int height) {
        if (ActivityCompat.checkSelfPermission(mActivity, Manifest.permission.CAMERA)
                != PackageManager.PERMISSION_GRANTED) {
            requestCameraPermission();
            return;
        }
        setUpCameraOutputs(width, height);
        configureTransform(width, height);
        CameraManager manager = (CameraManager) mActivity.getSystemService(Context.CAMERA_SERVICE);
        if (manager == null) {
            return;
        }
        try {
            if (!mCameraOpenCloseLock.tryAcquire(2500, TimeUnit.MILLISECONDS)) {
                throw new RuntimeException("Time out waiting to lock camera opening.");
            }
            manager.openCamera(mCameraId, mStateCallback, mBackgroundHandler);
        } catch (CameraAccessException e) {
            e.printStackTrace();
        } catch (InterruptedException e) {
            throw new RuntimeException("Interrupted while trying to lock camera opening.", e);
        }
    }
    /**
     * Sets up member variables related to camera.
     *
     * @param width  The width of available size for camera preview
     * @param height The height of available size for camera preview
     */
    @SuppressWarnings("SuspiciousNameCombination")
    private void setUpCameraOutputs(int width, int height) {
        CameraManager manager = (CameraManager) mActivity.getSystemService(Context.CAMERA_SERVICE);
        if (manager == null) {
            return;
        }
        try {
            for (String cameraId : manager.getCameraIdList()) {
                CameraCharacteristics characteristics
                        = manager.getCameraCharacteristics(cameraId);

                // We don't use a front facing camera in this sample.
                Integer facing = characteristics.get(CameraCharacteristics.LENS_FACING);
                if (facing != null && facing == CameraCharacteristics.LENS_FACING_FRONT) {
                    continue;
                }

                StreamConfigurationMap map = characteristics.get(
                        CameraCharacteristics.SCALER_STREAM_CONFIGURATION_MAP);
                if (map == null) {
                    continue;
                }

                // For still image captures, we use the largest available size.
                Size largest = Collections.max(
                        Arrays.asList(map.getOutputSizes(mImageFormat)),
                        new CompareSizesByArea());
                mImageReader = ImageReader.newInstance(largest.getWidth(), largest.getHeight(),
                        mImageFormat, /*maxImages*/2);
                mImageReader.setOnImageAvailableListener(
                        mOnImageAvailableListener, mBackgroundHandler);

                // Find out if we need to swap dimension to get the preview size relative to sensor
                // coordinate.
                int displayRotation = mActivity.getWindowManager().getDefaultDisplay().getRotation();
                //noinspection ConstantConditions
                mSensorOrientation = characteristics.get(CameraCharacteristics.SENSOR_ORIENTATION);
                boolean swappedDimensions = false;
                switch (displayRotation) {
                    case Surface.ROTATION_0:
                    case Surface.ROTATION_180:
                        if (mSensorOrientation == 90 || mSensorOrientation == 270) {
                            swappedDimensions = true;
                        }
                        break;
                    case Surface.ROTATION_90:
                    case Surface.ROTATION_270:
                        if (mSensorOrientation == 0 || mSensorOrientation == 180) {
                            swappedDimensions = true;
                        }
                        break;
                    default:
                        Log.e(TAG, "Display rotation is invalid: " + displayRotation);
                }

                Point displaySize = new Point();
                mActivity.getWindowManager().getDefaultDisplay().getSize(displaySize);
                int rotatedPreviewWidth = width;
                int rotatedPreviewHeight = height;
                int maxPreviewWidth = displaySize.x;
                int maxPreviewHeight = displaySize.y;

                if (swappedDimensions) {
                    rotatedPreviewWidth = height;
                    rotatedPreviewHeight = width;
                    maxPreviewWidth = displaySize.y;
                    maxPreviewHeight = displaySize.x;
                }

                if (maxPreviewWidth > MAX_PREVIEW_WIDTH) {
                    maxPreviewWidth = MAX_PREVIEW_WIDTH;
                }

                if (maxPreviewHeight > MAX_PREVIEW_HEIGHT) {
                    maxPreviewHeight = MAX_PREVIEW_HEIGHT;
                }

                // Danger, W.R.! Attempting to use too large a preview size could exceed the camera
                // bus' bandwidth limitation, resulting in gorgeous previews but the storage of
                // garbage capture data.
                mPreviewSize = chooseOptimalSize(map.getOutputSizes(SurfaceTexture.class),
                        rotatedPreviewWidth, rotatedPreviewHeight, maxPreviewWidth,
                        maxPreviewHeight, largest);

                // We fit the aspect ratio of TextureView to the size of preview we picked.
                int orientation = mActivity.getResources().getConfiguration().orientation;
                if (orientation == Configuration.ORIENTATION_LANDSCAPE) {
                    mTextureView.setAspectRatio(
                            mPreviewSize.getWidth(), mPreviewSize.getHeight());
                } else {
                    mTextureView.setAspectRatio(
                            mPreviewSize.getHeight(), mPreviewSize.getWidth());
                }

                mCameraId = cameraId;
                return;
            }
        } catch (CameraAccessException e) {
            e.printStackTrace();
        } catch (NullPointerException e) {
            // Currently an NPE is thrown when the Camera2 API is used but not supported on the
            // device this code runs on.
        }
    }
    private void startBackgroundThread() {
        mBackgroundThread = new HandlerThread("imageAvailableListener");
        mBackgroundThread.start();
        mBackgroundHandler = new Handler(mBackgroundThread.getLooper());
    }

    /**
     * Configures the necessary {@link android.graphics.Matrix} transformation to `mTextureView`.
     * This method should be called after the camera preview size is determined in
     * setUpCameraOutputs and also the size of `mTextureView` is fixed.
     *
     * @param viewWidth  The width of `mTextureView`
     * @param viewHeight The height of `mTextureView`
     */
    private void configureTransform(int viewWidth, int viewHeight) {
        if (null == mTextureView || null == mPreviewSize) {
            return;
        }
        int rotation = mActivity.getWindowManager().getDefaultDisplay().getRotation();
        Matrix matrix = new Matrix();
        RectF viewRect = new RectF(0, 0, viewWidth, viewHeight);
        RectF bufferRect = new RectF(0, 0, mPreviewSize.getHeight(), mPreviewSize.getWidth());
        float centerX = viewRect.centerX();
        float centerY = viewRect.centerY();
        if (Surface.ROTATION_90 == rotation || Surface.ROTATION_270 == rotation) {
            bufferRect.offset(centerX - bufferRect.centerX(), centerY - bufferRect.centerY());
            matrix.setRectToRect(viewRect, bufferRect, Matrix.ScaleToFit.FILL);
            float scale = Math.max(
                    (float) viewHeight / mPreviewSize.getHeight(),
                    (float) viewWidth / mPreviewSize.getWidth());
            matrix.postScale(scale, scale, centerX, centerY);
            matrix.postRotate(90 * (rotation - 2), centerX, centerY);
        } else if (Surface.ROTATION_180 == rotation) {
            matrix.postRotate(180, centerX, centerY);
        }
        mTextureView.setTransform(matrix);
    }
    /**
     * Closes the current {@link CameraDevice}.
     */
    public void closeCamera() {
        stopBackgroundThread();
        try {
            mCameraOpenCloseLock.acquire();
            if (null != mCaptureSession) {
                mCaptureSession.close();
                mCaptureSession = null;
            }
            if (null != mCameraDevice) {
                mCameraDevice.close();
                mCameraDevice = null;
            }
            if (null != mImageReader) {
                mImageReader.close();
                mImageReader = null;
            }
        } catch (InterruptedException e) {
            throw new RuntimeException("Interrupted while trying to lock camera closing.", e);
        } finally {
            mCameraOpenCloseLock.release();
        }
    }

    /**
     * Stops the background thread and its {@link Handler}.
     */
    private void stopBackgroundThread() {
        mBackgroundThread.quitSafely();
        try {
            mBackgroundThread.join();
            mBackgroundThread = null;
            mBackgroundHandler = null;
        } catch (InterruptedException e) {
            e.printStackTrace();
        }
    }

    /**
     * Returns the request code used when asking for the camera permission.
     *
     * @return the permission request code
     */
    public int getCameraRequestCode() {
        return REQUEST_CAMERA_PERMISSION;
    }
    /**
     * Given {@code choices} of {@code Size}s supported by a camera, choose the smallest one that
     * is at least as large as the respective texture view size, and that is at most as large as the
     * respective max size, and whose aspect ratio matches with the specified value. If such size
     * doesn't exist, choose the largest one that is at most as large as the respective max size,
     * and whose aspect ratio matches with the specified value.
     *
     * @param choices           The list of sizes that the camera supports for the intended output
     *                          class
     * @param textureViewWidth  The width of the texture view relative to sensor coordinate
     * @param textureViewHeight The height of the texture view relative to sensor coordinate
     * @param maxWidth          The maximum width that can be chosen
     * @param maxHeight         The maximum height that can be chosen
     * @param aspectRatio       The aspect ratio
     * @return The optimal {@code Size}, or an arbitrary one if none were big enough
     */
    private static Size chooseOptimalSize(Size[] choices, int textureViewWidth,
            int textureViewHeight, int maxWidth, int maxHeight, Size aspectRatio) {

        // Collect the supported resolutions that are at least as big as the preview Surface
        List<Size> bigEnough = new ArrayList<>();
        // Collect the supported resolutions that are smaller than the preview Surface
        List<Size> notBigEnough = new ArrayList<>();
        int w = aspectRatio.getWidth();
        int h = aspectRatio.getHeight();
        for (Size option : choices) {
            if (option.getWidth() <= maxWidth && option.getHeight() <= maxHeight &&
                    option.getHeight() == option.getWidth() * h / w) {
                if (option.getWidth() >= textureViewWidth &&
                        option.getHeight() >= textureViewHeight) {
                    bigEnough.add(option);
                } else {
                    notBigEnough.add(option);
                }
            }
        }

        // Pick the smallest of those big enough. If there is no one big enough, pick the
        // largest of those not big enough.
        if (bigEnough.size() > 0) {
            return Collections.min(bigEnough, new CompareSizesByArea());
        } else if (notBigEnough.size() > 0) {
            return Collections.max(notBigEnough, new CompareSizesByArea());
        } else {
            Log.e(TAG, "Couldn't find any suitable preview size");
            return choices[0];
        }
    }

    /**
     * Compares two {@code Size}s based on their areas.
     */
    static class CompareSizesByArea implements Comparator<Size> {

        @Override
        public int compare(Size lhs, Size rhs) {
            // We cast here to ensure the multiplications won't overflow
            return Long.signum((long) lhs.getWidth() * lhs.getHeight() -
                    (long) rhs.getWidth() * rhs.getHeight());
        }
    }
    /**
     * This is a callback object for the {@link ImageReader}. "onImageAvailable" will be called
     * when a preview frame is ready to be processed.
     */
    private final ImageReader.OnImageAvailableListener mOnImageAvailableListener
            = new ImageReader.OnImageAvailableListener() {

        @Override
        public void onImageAvailable(ImageReader reader) {
            Image image = reader.acquireNextImage();
            if (image == null) {
                Log.d(TAG, "onImageAvailable, image is null");
                return;
            }
            if (mImageAvaiableListener != null) {
                mImageAvaiableListener.onImageAvailable(image);
            }
            image.close(); // Must be closed, otherwise both the callback and the preview will stall
        }
    };

    /**
     * Sets the image format delivered by the preview callback.
     *
     * @param format one of the {@link ImageFormat} constants
     */
    public Camera2Helper setImageFormat(int format) {
        mImageFormat = format;
        return this;
    }

    /**
     * Returns the camera sensor orientation.
     *
     * @return the orientation; see how mSensorOrientation is assigned in {@link Camera2Helper#setUpCameraOutputs}
     */
    public int getSensorOrientation() {
        return mSensorOrientation;
    }

    /**
     * Sets the callback invoked when opening the camera fails. Optional; if it is not set, the
     * default policy is used.
     *
     * @param listener the callback listener
     */
    public Camera2Helper setOnOpenErrorListener(OnOpenErrorListener listener) {
        mOpenErrorListener = listener;
        return this;
    }

    /**
     * Sets the camera frame callback, similar to Camera1's PreviewCallback.
     *
     * @param listener the callback listener
     */
    public Camera2Helper setOnImageAvailableListener(OnPreviewCallbackListener listener) {
        mImageAvaiableListener = listener;
        return this;
    }

    /**
     * Called when opening the camera fails.
     */
    interface OnOpenErrorListener {
        void onOpenError();
    }

    /**
     * Called when a camera frame is available.
     */
    interface OnPreviewCallbackListener {
        void onImageAvailable(Image image);
    }
}
That is a lot of code, but it is essentially just a wrapper around Camera2; the key addition is this line:

mPreviewRequestBuilder.addTarget(mImageReader.getSurface()); // Without this target the ImageReader never receives frames

To avoid flooding apps with callbacks, Camera2 dropped the onPreviewCallback-style interface. Instead, frames are delivered only to the output targets you add to the request: some apps only need still capture, while others need every live frame for real-time comparison.
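If you need the raw pixels rather than just a log line, the callback can copy them out before returning. The sketch below is only an illustration, not part of the helper above: it assumes the helper is left at its default YUV_420_888 format, and processFrame is a hypothetical downstream consumer. It pulls the Y (luminance) plane into a plain byte array, which is already enough for many detection or frame-comparison pipelines (requires java.nio.ByteBuffer and android.media.Image imports).

helper.setOnImageAvailableListener(new Camera2Helper.OnPreviewCallbackListener() {
    @Override
    public void onImageAvailable(Image image) {
        // Runs on the helper's background thread; the helper closes the Image after this
        // method returns, so copy whatever is needed before returning.
        int width = image.getWidth();
        int height = image.getHeight();
        Image.Plane yPlane = image.getPlanes()[0]; // plane 0 is always Y for YUV_420_888
        ByteBuffer yBuffer = yPlane.getBuffer();
        int rowStride = yPlane.getRowStride();

        byte[] gray = new byte[width * height];
        for (int row = 0; row < height; row++) {
            yBuffer.position(row * rowStride); // rows may be padded, so seek per row
            yBuffer.get(gray, row * width, width);
        }
        // 'gray' now holds the luminance plane of this frame.
        processFrame(gray, width, height); // hypothetical consumer, e.g. a detector or comparator
    }
});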
Next comes AutoFitTextureView, the self-sizing view from Google's demo. For the preview surface, TextureView rather than SurfaceView is the recommended choice nowadays; the former has the better characteristics for this use.
public class AutoFitTextureView extends TextureView {

    private int mRatioWidth = 0;
    private int mRatioHeight = 0;

    public AutoFitTextureView(Context context) {
        this(context, null);
    }

    public AutoFitTextureView(Context context, AttributeSet attrs) {
        this(context, attrs, 0);
    }

    public AutoFitTextureView(Context context, AttributeSet attrs, int defStyle) {
        super(context, attrs, defStyle);
    }

    /**
     * Sets the aspect ratio for this view. The size of the view will be measured based on the ratio
     * calculated from the parameters. Note that the actual sizes of parameters don't matter, that
     * is, calling setAspectRatio(2, 3) and setAspectRatio(4, 6) make the same result.
     *
     * @param width  Relative horizontal size
     * @param height Relative vertical size
     */
    public void setAspectRatio(int width, int height) {
        if (width < 0 || height < 0) {
            throw new IllegalArgumentException("Size cannot be negative.");
        }
        mRatioWidth = width;
        mRatioHeight = height;
        requestLayout();
    }

    @Override
    protected void onMeasure(int widthMeasureSpec, int heightMeasureSpec) {
        super.onMeasure(widthMeasureSpec, heightMeasureSpec);
        int width = MeasureSpec.getSize(widthMeasureSpec);
        int height = MeasureSpec.getSize(heightMeasureSpec);
        if (0 == mRatioWidth || 0 == mRatioHeight) {
            setMeasuredDimension(width, height);
        } else {
            if (width < height * mRatioWidth / mRatioHeight) {
                setMeasuredDimension(width, width * mRatioHeight / mRatioWidth);
            } else {
                setMeasuredDimension(height * mRatioWidth / mRatioHeight, height);
            }
        }
    }
}
All in all, this API is easier to use than the first generation, but it is still not friendly enough; I am looking forward to the third-generation interface.