


How To Read Text From Camera In Android Studio

The Android framework includes support for various cameras and camera features available on devices, allowing you to capture pictures and videos in your applications. This document discusses a quick, simple approach to image and video capture and outlines an advanced approach for creating custom camera experiences for your users.

Note: This page describes the Camera class, which has been deprecated. We recommend using the CameraX Jetpack library or, for specific use cases, the camera2 class. Both CameraX and Camera2 work on Android 5.0 (API level 21) and higher.

Considerations

Before enabling your application to use cameras on Android devices, you should consider a few questions about how your app intends to use this hardware feature.

  • Camera Requirement - Is the use of a camera so important to your application that you do not want your application installed on a device that does not have a camera? If so, you should declare the camera requirement in your manifest.
  • Quick Picture or Customized Camera - How will your application use the camera? Are you just interested in snapping a quick picture or video clip, or will your application provide a new way to use cameras? For getting a quick snap or clip, consider Using Existing Camera Apps. For developing a customized camera feature, check out the Building a Camera App section.
  • Foreground Services Requirement - When does your app interact with the camera? On Android 9 (API level 28) and later, apps running in the background cannot access the camera. Therefore, you should use the camera either when your app is in the foreground or as part of a foreground service.
  • Storage - Are the images or videos your application generates intended to be only visible to your application or shared so that other applications such as Gallery or other media and social apps can use them? Do you want the pictures and videos to be available even if your application is uninstalled? Check out the Saving Media Files section to see how to implement these options.

The basics

The Android framework supports capturing images and video through the android.hardware.camera2 API or camera Intent. Here are the relevant classes:

android.hardware.camera2
This package is the primary API for controlling device cameras. It can be used to take pictures or videos when you are building a camera application.
Camera
This class is the older, deprecated API for controlling device cameras.
SurfaceView
This class is used to present a live camera preview to the user.
MediaRecorder
This class is used to record video from the camera.
Intent
An intent action type of MediaStore.ACTION_IMAGE_CAPTURE or MediaStore.ACTION_VIDEO_CAPTURE can be used to capture images or videos without directly using the Camera object.

Manifest declarations

Before starting development on your application with the Camera API, you should make sure your manifest has the appropriate declarations to allow use of camera hardware and other related features.

  • Camera Permission - Your application must request permission to utilize a device camera.
    <uses-permission android:proper name="android.permission.CAMERA" />            

    Note: If you lot are using the camera by invoking an existing camera app, your application does not need to request this permission.

  • Photographic camera Features - Your application must also declare use of photographic camera features, for example:
    <uses-feature android:proper noun="android.hardware.camera" />            

    For a list of camera features, see the manifest Features Reference.

    Adding camera features to your manifest causes Google Play to prevent your application from being installed on devices that do not include a camera or do not support the camera features you specify. For more information about using feature-based filtering with Google Play, see Google Play and Feature-Based Filtering.

    If your application can use a camera or camera feature for proper operation, but does not require it, you should specify this in the manifest by including the android:required attribute and setting it to false:

    <uses-feature android:name="android.hardware.camera" android:required="false" />
  • Storage Permission - Your application can save images or videos to the device's external storage (SD Card) if it targets Android 10 (API level 29) or lower and specifies the following in the manifest.
    <uses-permission android:name="android.permission.WRITE_EXTERNAL_STORAGE" />
  • Audio Recording Permission - For recording audio with video capture, your application must request the audio capture permission.
    <uses-permission android:name="android.permission.RECORD_AUDIO" />
  • Location Permission - If your application tags images with GPS location information, you must request the ACCESS_FINE_LOCATION permission. Note that, if your app targets Android 5.0 (API level 21) or higher, you also need to declare that your app uses the device's GPS:

    <uses-permission android:name="android.permission.ACCESS_FINE_LOCATION" /> ... <!-- Needed only if your app targets Android 5.0 (API level 21) or higher. --> <uses-characteristic android:name="android.hardware.location.gps" />            

    For more information about getting user location, see Location Strategies.

Using existing camera apps

A quick way to enable taking pictures or videos in your application without a lot of extra code is to use an Intent to invoke an existing Android camera application. The details are described in the training lessons Taking Photos Simply and Recording Videos Simply.
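If you only need a quick capture, a minimal sketch of firing such an intent is shown below. It assumes you are inside an Activity; the request-code constant is an illustrative placeholder.

Kotlin

const val REQUEST_IMAGE_CAPTURE = 1 // illustrative request code

private fun dispatchTakePictureIntent() {
    val takePictureIntent = Intent(MediaStore.ACTION_IMAGE_CAPTURE)
    // Verify that a camera app exists to handle the intent before starting it
    if (takePictureIntent.resolveActivity(packageManager) != null) {
        startActivityForResult(takePictureIntent, REQUEST_IMAGE_CAPTURE)
    }
}

By default the result intent only carries a small thumbnail under the "data" extra; to save a full-size photo, pass a content URI via MediaStore.EXTRA_OUTPUT as described in Saving media files.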

Building a camera app

Some developers may require a camera user interface that is customized to the look of their application or provides special features. Writing your own picture-taking code can provide a more compelling experience for your users.

Note: The following guide is for the older, deprecated Camera API. For new or advanced camera applications, the newer android.hardware.camera2 API is recommended.

The general steps for creating a custom camera interface for your application are as follows:

  • Detect and Access Camera - Create code to check for the existence of cameras and request access.
  • Create a Preview Class - Create a camera preview class that extends SurfaceView and implements the SurfaceHolder interface. This class previews the live images from the camera.
  • Build a Preview Layout - Once you have the camera preview class, create a view layout that incorporates the preview and the user interface controls you want.
  • Setup Listeners for Capture - Connect listeners for your interface controls to start image or video capture in response to user actions, such as pressing a button.
  • Capture and Save Files - Set up the code for capturing pictures or videos and saving the output.
  • Release the Camera - After using the camera, your application must properly release it for use by other applications.

Camera hardware is a shared resource that must be carefully managed so your application does not collide with other applications that may also want to use it. The following sections discuss how to detect camera hardware, how to request access to a camera, how to capture pictures or video, and how to release the camera when your application is done using it.

Caution: Remember to release the Camera object by calling Camera.release() when your application is done using it! If your application does not properly release the camera, all subsequent attempts to access the camera, including those by your own application, will fail and may cause your or other applications to be shut down.

Detecting camera hardware

If your application does not specifically require a camera using a manifest declaration, you should check to see if a camera is available at runtime. To perform this check, use the PackageManager.hasSystemFeature() method, as shown in the example code below:

Kotlin

/** Check if this device has a camera */
private fun checkCameraHardware(context: Context): Boolean {
    if (context.packageManager.hasSystemFeature(PackageManager.FEATURE_CAMERA)) {
        // this device has a camera
        return true
    } else {
        // no camera on this device
        return false
    }
}

Java

/** Check if this device has a camera */
private boolean checkCameraHardware(Context context) {
    if (context.getPackageManager().hasSystemFeature(PackageManager.FEATURE_CAMERA)){
        // this device has a camera
        return true;
    } else {
        // no camera on this device
        return false;
    }
}

Android devices can have multiple cameras, for example a back-facing camera for photography and a front-facing camera for video calls. Android 2.3 (API Level 9) and later allows you to check the number of cameras available on a device using the Camera.getNumberOfCameras() method.

Accessing cameras

If you have determined that the device on which your application is running has a camera, you must request access to it by getting an instance of Camera (unless you are using an intent to access the camera).

To access the primary camera, use the Camera.open() method and be sure to catch any exceptions, as shown in the code below:

Kotlin

/** A safe way to get an instance of the Camera object. */
fun getCameraInstance(): Camera? {
    return try {
        Camera.open() // attempt to get a Camera instance
    } catch (e: Exception) {
        // Camera is not available (in use or does not exist)
        null // returns null if camera is unavailable
    }
}

Java

/** A safe way to get an instance of the Camera object. */
public static Camera getCameraInstance(){
    Camera c = null;
    try {
        c = Camera.open(); // attempt to get a Camera instance
    }
    catch (Exception e){
        // Camera is not available (in use or does not exist)
    }
    return c; // returns null if camera is unavailable
}

Caution: Always check for exceptions when using Camera.open(). Failing to check for exceptions if the camera is in use or does not exist will cause your application to be shut down by the system.

On devices running Android 2.3 (API Level 9) or higher, you can access specific cameras using Camera.open(int). The example code above will access the first, back-facing camera on a device with more than one camera.

Checking camera features

Once you obtain access to a camera, you can get further information about its capabilities using the Camera.getParameters() method and checking the returned Camera.Parameters object for supported capabilities. When using API Level 9 or higher, use Camera.getCameraInfo() to determine if a camera is on the front or back of the device, and the orientation of the image.
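As a sketch of how these calls fit together on API Level 9 and higher, the following hypothetical helper walks the available cameras with Camera.getCameraInfo() to find a front-facing one; the function name is illustrative.

Kotlin

/** Find the id of the first front-facing camera, or -1 if none exists (API Level 9+). */
fun findFrontFacingCameraId(): Int {
    val info = Camera.CameraInfo()
    for (id in 0 until Camera.getNumberOfCameras()) {
        Camera.getCameraInfo(id, info)
        if (info.facing == Camera.CameraInfo.CAMERA_FACING_FRONT) {
            return id // open this camera with Camera.open(id)
        }
    }
    return -1 // no front-facing camera found
}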

Creating a preview class

For users to effectively take pictures or video, they must be able to see what the device camera sees. A camera preview class is a SurfaceView that can display the live image data coming from a camera, so users can frame and capture a picture or video.

The following example code demonstrates how to create a basic camera preview class that can be included in a View layout. This class implements SurfaceHolder.Callback in order to capture the callback events for creating and destroying the view, which are needed for assigning the camera preview input.

Kotlin

/** A basic Camera preview class */
class CameraPreview(
        context: Context,
        private val mCamera: Camera
) : SurfaceView(context), SurfaceHolder.Callback {

    private val mHolder: SurfaceHolder = holder.apply {
        // Install a SurfaceHolder.Callback so we get notified when the
        // underlying surface is created and destroyed.
        addCallback(this@CameraPreview)
        // deprecated setting, but required on Android versions prior to 3.0
        setType(SurfaceHolder.SURFACE_TYPE_PUSH_BUFFERS)
    }

    override fun surfaceCreated(holder: SurfaceHolder) {
        // The Surface has been created, now tell the camera where to draw the preview.
        mCamera.apply {
            try {
                setPreviewDisplay(holder)
                startPreview()
            } catch (e: IOException) {
                Log.d(TAG, "Error setting camera preview: ${e.message}")
            }
        }
    }

    override fun surfaceDestroyed(holder: SurfaceHolder) {
        // empty. Take care of releasing the Camera preview in your activity.
    }

    override fun surfaceChanged(holder: SurfaceHolder, format: Int, w: Int, h: Int) {
        // If your preview can change or rotate, take care of those events here.
        // Make sure to stop the preview before resizing or reformatting it.
        if (mHolder.surface == null) {
            // preview surface does not exist
            return
        }

        // stop preview before making changes
        try {
            mCamera.stopPreview()
        } catch (e: Exception) {
            // ignore: tried to stop a non-existent preview
        }

        // set preview size and make any resize, rotate or
        // reformatting changes here

        // start preview with new settings
        mCamera.apply {
            try {
                setPreviewDisplay(mHolder)
                startPreview()
            } catch (e: Exception) {
                Log.d(TAG, "Error starting camera preview: ${e.message}")
            }
        }
    }
}

Java

/** A basic Camera preview class */
public class CameraPreview extends SurfaceView implements SurfaceHolder.Callback {
    private SurfaceHolder mHolder;
    private Camera mCamera;

    public CameraPreview(Context context, Camera camera) {
        super(context);
        mCamera = camera;

        // Install a SurfaceHolder.Callback so we get notified when the
        // underlying surface is created and destroyed.
        mHolder = getHolder();
        mHolder.addCallback(this);
        // deprecated setting, but required on Android versions prior to 3.0
        mHolder.setType(SurfaceHolder.SURFACE_TYPE_PUSH_BUFFERS);
    }

    public void surfaceCreated(SurfaceHolder holder) {
        // The Surface has been created, now tell the camera where to draw the preview.
        try {
            mCamera.setPreviewDisplay(holder);
            mCamera.startPreview();
        } catch (IOException e) {
            Log.d(TAG, "Error setting camera preview: " + e.getMessage());
        }
    }

    public void surfaceDestroyed(SurfaceHolder holder) {
        // empty. Take care of releasing the Camera preview in your activity.
    }

    public void surfaceChanged(SurfaceHolder holder, int format, int w, int h) {
        // If your preview can change or rotate, take care of those events here.
        // Make sure to stop the preview before resizing or reformatting it.

        if (mHolder.getSurface() == null){
          // preview surface does not exist
          return;
        }

        // stop preview before making changes
        try {
            mCamera.stopPreview();
        } catch (Exception e){
          // ignore: tried to stop a non-existent preview
        }

        // set preview size and make any resize, rotate or
        // reformatting changes here

        // start preview with new settings
        try {
            mCamera.setPreviewDisplay(mHolder);
            mCamera.startPreview();

        } catch (Exception e){
            Log.d(TAG, "Error starting camera preview: " + e.getMessage());
        }
    }
}

If you want to set a specific size for your camera preview, set this in the surfaceChanged() method as noted in the comments above. When setting preview size, you must use values from getSupportedPreviewSizes(). Do not set arbitrary values in the setPreviewSize() method.
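As one possible way to do this, the hypothetical helper below picks the supported size whose area is closest to the surface dimensions; call it between stopPreview() and startPreview() in surfaceChanged(). The selection heuristic is only an example, not a recommended policy.

Kotlin

/** Pick the supported preview size closest in area to the surface (w x h). */
fun choosePreviewSize(camera: Camera, w: Int, h: Int) {
    val params = camera.parameters
    // getSupportedPreviewSizes() never returns null on the deprecated Camera API
    val best = params.supportedPreviewSizes.minByOrNull { size ->
        Math.abs(size.width * size.height - w * h)
    } ?: return
    params.setPreviewSize(best.width, best.height)
    camera.parameters = params // apply while the preview is stopped
}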

Note: With the introduction of the Multi-Window feature in Android 7.0 (API level 24) and higher, you can no longer assume the aspect ratio of the preview is the same as your activity even after calling setDisplayOrientation(). Depending on the window size and aspect ratio, you may have to fit a wide camera preview into a portrait-oriented layout, or vice versa, using a letterbox layout.

Placing preview in a layout

A camera preview class, such as the example shown in the previous section, must be placed in the layout of an activity along with other user interface controls for taking a picture or video. This section shows you how to build a basic layout and activity for the preview.

The following layout code provides a very basic view that can be used to display a camera preview. In this example, the FrameLayout element is meant to be the container for the camera preview class. This layout type is used so that additional picture information or controls can be overlaid on the live camera preview images.

<?xml version="i.0" encoding="utf-8"?> <LinearLayout xmlns:android="http://schemas.android.com/apk/res/android"     android:orientation="horizontal"     android:layout_width="fill_parent"     android:layout_height="fill_parent"     >   <FrameLayout     android:id="@+id/camera_preview"     android:layout_width="fill_parent"     android:layout_height="fill_parent"     android:layout_weight="1"     />    <Push button     android:id="@+id/button_capture"     android:text="Capture"     android:layout_width="wrap_content"     android:layout_height="wrap_content"     android:layout_gravity="centre"     /> </LinearLayout>        

On most devices, the default orientation of the camera preview is landscape. This example layout specifies a horizontal (landscape) layout and the code below fixes the orientation of the application to landscape. For simplicity in rendering a camera preview, you should change your application's preview activity orientation to landscape by adding the following to your manifest.

<activity android:name=".CameraActivity"
          android:label="@string/app_name"
          android:screenOrientation="landscape">
          <!-- configure this activity to use landscape orientation -->

          <intent-filter>
        <action android:name="android.intent.action.MAIN" />
        <category android:name="android.intent.category.LAUNCHER" />
    </intent-filter>
</activity>

Note: A camera preview does not have to be in landscape mode. Starting in Android 2.2 (API Level 8), you can use the setDisplayOrientation() method to set the rotation of the preview image. In order to change preview orientation as the user re-orients the phone, within the surfaceChanged() method of your preview class, first stop the preview with Camera.stopPreview(), change the orientation, and then start the preview again with Camera.startPreview().
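A sketch of computing that rotation, adapted from the algorithm described in the setDisplayOrientation() reference, is shown below; it assumes API Level 9 or higher for Camera.getCameraInfo(), and the helper name is illustrative.

Kotlin

/** Rotate the camera preview to match the current display rotation. */
fun setCameraDisplayOrientation(activity: Activity, cameraId: Int, camera: Camera) {
    val info = Camera.CameraInfo()
    Camera.getCameraInfo(cameraId, info)
    val degrees = when (activity.windowManager.defaultDisplay.rotation) {
        Surface.ROTATION_0 -> 0
        Surface.ROTATION_90 -> 90
        Surface.ROTATION_180 -> 180
        Surface.ROTATION_270 -> 270
        else -> 0
    }
    val result = if (info.facing == Camera.CameraInfo.CAMERA_FACING_FRONT) {
        (360 - (info.orientation + degrees) % 360) % 360 // compensate for front-camera mirroring
    } else {
        (info.orientation - degrees + 360) % 360
    }
    camera.setDisplayOrientation(result) // stop the preview first if it is already running
}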

In the activity for your camera view, add your preview class to the FrameLayout element shown in the example above. Your camera activity must also ensure that it releases the camera when it is paused or shut down. The following example shows how to modify a camera activity to attach the preview class shown in Creating a preview class.

Kotlin

class CameraActivity : Activity() {

    private var mCamera: Camera? = null
    private var mPreview: CameraPreview? = null

    override fun onCreate(savedInstanceState: Bundle?) {
        super.onCreate(savedInstanceState)
        setContentView(R.layout.activity_main)

        // Create an instance of Camera
        mCamera = getCameraInstance()

        mPreview = mCamera?.let {
            // Create our Preview view
            CameraPreview(this, it)
        }

        // Set the Preview view as the content of our activity.
        mPreview?.also {
            val preview: FrameLayout = findViewById(R.id.camera_preview)
            preview.addView(it)
        }
    }
}

Java

public class CameraActivity extends Activity {

    private Camera mCamera;
    private CameraPreview mPreview;

    @Override
    public void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.main);

        // Create an instance of Camera
        mCamera = getCameraInstance();

        // Create our Preview view and set it as the content of our activity.
        mPreview = new CameraPreview(this, mCamera);
        FrameLayout preview = (FrameLayout) findViewById(R.id.camera_preview);
        preview.addView(mPreview);
    }
}

Note: The getCameraInstance() method in the example above refers to the example method shown in Accessing cameras.

Capturing pictures

Once you have built a preview class and a view layout in which to display it, you are ready to start capturing images with your application. In your application code, you must set up listeners for your user interface controls to respond to a user action by taking a picture.

In order to retrieve a picture, use the Camera.takePicture() method. This method takes three parameters which receive data from the camera. In order to receive data in a JPEG format, you must implement a Camera.PictureCallback interface to receive the image data and write it to a file. The following code shows a basic implementation of the Camera.PictureCallback interface to save an image received from the camera.

Kotlin

private val mPicture = Camera.PictureCallback { data, _ ->
    val pictureFile: File = getOutputMediaFile(MEDIA_TYPE_IMAGE) ?: run {
        Log.d(TAG, "Error creating media file, check storage permissions")
        return@PictureCallback
    }

    try {
        val fos = FileOutputStream(pictureFile)
        fos.write(data)
        fos.close()
    } catch (e: FileNotFoundException) {
        Log.d(TAG, "File not found: ${e.message}")
    } catch (e: IOException) {
        Log.d(TAG, "Error accessing file: ${e.message}")
    }
}

Java

private PictureCallback mPicture = new PictureCallback() {

    @Override
    public void onPictureTaken(byte[] data, Camera camera) {

        File pictureFile = getOutputMediaFile(MEDIA_TYPE_IMAGE);
        if (pictureFile == null){
            Log.d(TAG, "Error creating media file, check storage permissions");
            return;
        }

        try {
            FileOutputStream fos = new FileOutputStream(pictureFile);
            fos.write(data);
            fos.close();
        } catch (FileNotFoundException e) {
            Log.d(TAG, "File not found: " + e.getMessage());
        } catch (IOException e) {
            Log.d(TAG, "Error accessing file: " + e.getMessage());
        }
    }
};

Trigger capturing an image by calling the Camera.takePicture() method. The following example code shows how to call this method from a button View.OnClickListener.

Kotlin

val captureButton: Button = findViewById(R.id.button_capture)
captureButton.setOnClickListener {
    // get an image from the camera
    mCamera?.takePicture(null, null, mPicture)
}

Java

// Add a listener to the Capture button
Button captureButton = (Button) findViewById(R.id.button_capture);
captureButton.setOnClickListener(
    new View.OnClickListener() {
        @Override
        public void onClick(View v) {
            // get an image from the camera
            mCamera.takePicture(null, null, mPicture);
        }
    }
);

Note: The mPicture member in the example above refers to the callback implementation shown earlier in this section.

Caution: Remember to release the Camera object by calling Camera.release() when your application is done using it! For information about how to release the camera, see Releasing the camera.
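One detail worth noting: the Camera API stops the preview once a picture has been captured, so if the user should be able to take another shot, restart the preview after the image data has been handled. A minimal sketch of doing this inside the JPEG callback, assuming the members from the examples above; the callback name is illustrative.

Kotlin

private val mPictureWithRestart = Camera.PictureCallback { data, camera ->
    getOutputMediaFile(MEDIA_TYPE_IMAGE)?.let { file ->
        try {
            FileOutputStream(file).use { it.write(data) } // save the JPEG data
        } catch (e: IOException) {
            Log.d(TAG, "Error accessing file: ${e.message}")
        }
    }
    camera.startPreview() // the preview stops after capture; restart it for the next shot
}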

Capturing videos

Video capture using the Android framework requires careful management of the Camera object and coordination with the MediaRecorder class. When recording video with Camera, you must manage the Camera.lock() and Camera.unlock() calls to allow MediaRecorder access to the camera hardware, in addition to the Camera.open() and Camera.release() calls.

Note: Starting with Android 4.0 (API level 14), the Camera.lock() and Camera.unlock() calls are managed for you automatically.

Unlike taking pictures with a device camera, capturing video requires a very particular call order. You must follow a specific order of execution to successfully prepare for and capture video with your application, as detailed below.

  1. Open Camera - Use Camera.open() to get an instance of the camera object.
  2. Connect Preview - Set up a live camera image preview by connecting a SurfaceView to the camera using Camera.setPreviewDisplay().
  3. Start Preview - Call Camera.startPreview() to begin displaying the live camera images.
  4. Start Recording Video - The following steps must be completed in order to successfully record video:
    1. Unlock the Camera - Unlock the camera for use by MediaRecorder by calling Camera.unlock().
    2. Configure MediaRecorder - Call the following MediaRecorder methods in this order. For more information, see the MediaRecorder reference documentation.
      1. setCamera() - Set the camera to be used for video capture, use your application's current instance of Camera.
      2. setAudioSource() - Set the audio source, use MediaRecorder.AudioSource.CAMCORDER.
      3. setVideoSource() - Set the video source, use MediaRecorder.VideoSource.CAMERA.
      4. Set the video output format and encoding. For Android 2.2 (API Level 8) and higher, use the MediaRecorder.setProfile method, and get a profile instance using CamcorderProfile.get(). For versions of Android prior to 2.2, you must set the video output format and encoding parameters:
        1. setOutputFormat() - Set the output format, specify the default setting or MediaRecorder.OutputFormat.MPEG_4.
        2. setAudioEncoder() - Set the audio encoding type, specify the default setting or MediaRecorder.AudioEncoder.AMR_NB.
        3. setVideoEncoder() - Set the video encoding type, specify the default setting or MediaRecorder.VideoEncoder.MPEG_4_SP.
      5. setOutputFile() - Set the output file, use getOutputMediaFile(MEDIA_TYPE_VIDEO).toString() from the example method in the Saving Media Files section.
      6. setPreviewDisplay() - Specify the SurfaceView preview layout element for your application. Use the same object you specified for Connect Preview.

      Caution: You must call these MediaRecorder configuration methods in this order, otherwise your application will encounter errors and the recording will fail.

    3. Prepare MediaRecorder - Prepare the MediaRecorder with the provided configuration settings by calling MediaRecorder.prepare().
    4. Start MediaRecorder - Start recording video by calling MediaRecorder.start().
  5. Stop Recording Video - Call the following methods in order to successfully complete a video recording:
    1. Stop MediaRecorder - Stop recording video by calling MediaRecorder.stop().
    2. Reset MediaRecorder - Optionally, remove the configuration settings from the recorder by calling MediaRecorder.reset().
    3. Release MediaRecorder - Release the MediaRecorder by calling MediaRecorder.release().
    4. Lock the Camera - Lock the camera so that future MediaRecorder sessions can use it by calling Camera.lock(). Starting with Android 4.0 (API level 14), this call is not required unless the MediaRecorder.prepare() call fails.
  6. Stop the Preview - When your activity has finished using the camera, stop the preview using Camera.stopPreview().
  7. Release Camera - Release the camera so that other applications can use it by calling Camera.release().

Note: It is possible to use MediaRecorder without creating a camera preview first and skip the first few steps of this process. However, since users typically prefer to see a preview before starting a recording, that process is not discussed here.

Tip: If your application is typically used for recording video, set setRecordingHint(boolean) to true prior to starting your preview. This setting can help reduce the time it takes to start recording.
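A minimal sketch of applying that hint, assuming the mCamera member from the earlier examples and API Level 14 or higher:

Kotlin

// Hint to the camera that this preview will be used for video recording
val params = mCamera?.parameters
params?.setRecordingHint(true)
mCamera?.parameters = params
// ...then start the preview as usual with mCamera?.startPreview()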

Configuring MediaRecorder

When using the MediaRecorder class to record video, you must perform the configuration steps in a specific order and then call the MediaRecorder.prepare() method to check and implement the configuration. The following example code demonstrates how to properly configure and prepare the MediaRecorder class for video recording.

Kotlin

private fun prepareVideoRecorder(): Boolean {
    mediaRecorder = MediaRecorder()

    mCamera?.let { camera ->
        // Step 1: Unlock and set camera to MediaRecorder
        camera.unlock()

        mediaRecorder?.run {
            setCamera(camera)

            // Step 2: Set sources
            setAudioSource(MediaRecorder.AudioSource.CAMCORDER)
            setVideoSource(MediaRecorder.VideoSource.CAMERA)

            // Step 3: Set a CamcorderProfile (requires API Level 8 or higher)
            setProfile(CamcorderProfile.get(CamcorderProfile.QUALITY_HIGH))

            // Step 4: Set output file
            setOutputFile(getOutputMediaFile(MEDIA_TYPE_VIDEO).toString())

            // Step 5: Set the preview output
            setPreviewDisplay(mPreview?.holder?.surface)

            // Step 6: Prepare configured MediaRecorder
            return try {
                prepare()
                true
            } catch (e: IllegalStateException) {
                Log.d(TAG, "IllegalStateException preparing MediaRecorder: ${e.message}")
                releaseMediaRecorder()
                false
            } catch (e: IOException) {
                Log.d(TAG, "IOException preparing MediaRecorder: ${e.message}")
                releaseMediaRecorder()
                false
            }
        }
    }
    return false
}

Java

private boolean prepareVideoRecorder(){

    mCamera = getCameraInstance();
    mediaRecorder = new MediaRecorder();

    // Step 1: Unlock and set camera to MediaRecorder
    mCamera.unlock();
    mediaRecorder.setCamera(mCamera);

    // Step 2: Set sources
    mediaRecorder.setAudioSource(MediaRecorder.AudioSource.CAMCORDER);
    mediaRecorder.setVideoSource(MediaRecorder.VideoSource.CAMERA);

    // Step 3: Set a CamcorderProfile (requires API Level 8 or higher)
    mediaRecorder.setProfile(CamcorderProfile.get(CamcorderProfile.QUALITY_HIGH));

    // Step 4: Set output file
    mediaRecorder.setOutputFile(getOutputMediaFile(MEDIA_TYPE_VIDEO).toString());

    // Step 5: Set the preview output
    mediaRecorder.setPreviewDisplay(mPreview.getHolder().getSurface());

    // Step 6: Prepare configured MediaRecorder
    try {
        mediaRecorder.prepare();
    } catch (IllegalStateException e) {
        Log.d(TAG, "IllegalStateException preparing MediaRecorder: " + e.getMessage());
        releaseMediaRecorder();
        return false;
    } catch (IOException e) {
        Log.d(TAG, "IOException preparing MediaRecorder: " + e.getMessage());
        releaseMediaRecorder();
        return false;
    }
    return true;
}

Prior to Android 2.2 (API Level 8), you must set the output format and encoding parameters directly, instead of using CamcorderProfile. This approach is demonstrated in the following code:

Kotlin

    // Step 3: Set output format and encoding (for versions prior to API Level 8)
    mediaRecorder?.apply {
        setOutputFormat(MediaRecorder.OutputFormat.MPEG_4)
        setAudioEncoder(MediaRecorder.AudioEncoder.DEFAULT)
        setVideoEncoder(MediaRecorder.VideoEncoder.DEFAULT)
    }

Java

    // Step 3: Set output format and encoding (for versions prior to API Level 8)
    mediaRecorder.setOutputFormat(MediaRecorder.OutputFormat.MPEG_4);
    mediaRecorder.setAudioEncoder(MediaRecorder.AudioEncoder.DEFAULT);
    mediaRecorder.setVideoEncoder(MediaRecorder.VideoEncoder.DEFAULT);

The following video recording parameters for MediaRecorder are given default settings; however, you may want to adjust these settings for your application, as sketched after this list:

  • setVideoEncodingBitRate()
  • setVideoSize()
  • setVideoFrameRate()
  • setAudioEncodingBitRate()
  • setAudioChannels()
  • setAudioSamplingRate()
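For example, the following sketch overrides a few of these values before calling prepare(); the numbers are illustrative placeholders rather than recommendations, and some devices only accept certain combinations of size, frame rate, and bitrate.

Kotlin

mediaRecorder?.apply {
    // Values below are illustrative only; pick settings that match your use case
    setVideoEncodingBitRate(3_000_000) // ~3 Mbps video
    setVideoSize(1280, 720)            // 720p; must be a size the device supports
    setVideoFrameRate(30)              // 30 fps
    setAudioEncodingBitRate(96_000)    // 96 kbps audio
    setAudioChannels(1)                // mono
    setAudioSamplingRate(44_100)       // 44.1 kHz
}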

Starting and stopping MediaRecorder

When starting and stopping video recording using the MediaRecorder class, you must follow a specific order, as listed below.

  1. Unlock the camera with Camera.unlock()
  2. Configure MediaRecorder as shown in the code example above
  3. Start recording using MediaRecorder.start()
  4. Record the video
  5. Stop recording using MediaRecorder.stop()
  6. Release the media recorder with MediaRecorder.release()
  7. Lock the camera using Camera.lock()

The following example code demonstrates how to wire up a button to properly start and stop video recording using the camera and the MediaRecorder class.

Note: When completing a video recording, do not release the camera or else your preview will be stopped.

Kotlin

var isRecording = false
val captureButton: Button = findViewById(R.id.button_capture)
captureButton.setOnClickListener {
    if (isRecording) {
        // stop recording and release camera
        mediaRecorder?.stop() // stop the recording
        releaseMediaRecorder() // release the MediaRecorder object
        mCamera?.lock() // take camera access back from MediaRecorder

        // inform the user that recording has stopped
        setCaptureButtonText("Capture")
        isRecording = false
    } else {
        // initialize video camera
        if (prepareVideoRecorder()) {
            // Camera is available and unlocked, MediaRecorder is prepared,
            // now you can start recording
            mediaRecorder?.start()

            // inform the user that recording has started
            setCaptureButtonText("Stop")
            isRecording = true
        } else {
            // prepare didn't work, release the camera
            releaseMediaRecorder()
            // inform user
        }
    }
}

Java

private boolean isRecording = false;

// Add a listener to the Capture button
Button captureButton = (Button) findViewById(R.id.button_capture);
captureButton.setOnClickListener(
    new View.OnClickListener() {
        @Override
        public void onClick(View v) {
            if (isRecording) {
                // stop recording and release camera
                mediaRecorder.stop();  // stop the recording
                releaseMediaRecorder(); // release the MediaRecorder object
                mCamera.lock();         // take camera access back from MediaRecorder

                // inform the user that recording has stopped
                setCaptureButtonText("Capture");
                isRecording = false;
            } else {
                // initialize video camera
                if (prepareVideoRecorder()) {
                    // Camera is available and unlocked, MediaRecorder is prepared,
                    // now you can start recording
                    mediaRecorder.start();

                    // inform the user that recording has started
                    setCaptureButtonText("Stop");
                    isRecording = true;
                } else {
                    // prepare didn't work, release the camera
                    releaseMediaRecorder();
                    // inform user
                }
            }
        }
    }
);

Note: In the above example, the prepareVideoRecorder() method refers to the example code shown in Configuring MediaRecorder. That method takes care of unlocking the camera and configuring and preparing the MediaRecorder instance.

Releasing the camera

Cameras are a resource that is shared by applications on a device. Your application can make use of the camera after getting an instance of Camera, and you must be particularly careful to release the camera object when your application stops using it, and as soon as your application is paused (Activity.onPause()). If your application does not properly release the camera, all subsequent attempts to access the camera, including those by your own application, will fail and may cause your or other applications to be shut down.

To release an instance of the Camera object, use the Camera.release() method, as shown in the example code below.

Kotlin

class CameraActivity : Activity() {
    private var mCamera: Camera? = null
    private var preview: SurfaceView? = null
    private var mediaRecorder: MediaRecorder? = null

    override fun onPause() {
        super.onPause()
        releaseMediaRecorder() // if you are using MediaRecorder, release it first
        releaseCamera() // release the camera immediately on pause event
    }

    private fun releaseMediaRecorder() {
        mediaRecorder?.reset() // clear recorder configuration
        mediaRecorder?.release() // release the recorder object
        mediaRecorder = null
        mCamera?.lock() // lock camera for later use
    }

    private fun releaseCamera() {
        mCamera?.release() // release the camera for other applications
        mCamera = null
    }
}

Java

public class CameraActivity extends Activity {
    private Camera mCamera;
    private SurfaceView preview;
    private MediaRecorder mediaRecorder;

    ...

    @Override
    protected void onPause() {
        super.onPause();
        releaseMediaRecorder();       // if you are using MediaRecorder, release it first
        releaseCamera();              // release the camera immediately on pause event
    }

    private void releaseMediaRecorder(){
        if (mediaRecorder != null) {
            mediaRecorder.reset();   // clear recorder configuration
            mediaRecorder.release(); // release the recorder object
            mediaRecorder = null;
            mCamera.lock();          // lock camera for later use
        }
    }

    private void releaseCamera(){
        if (mCamera != null){
            mCamera.release();        // release the camera for other applications
            mCamera = null;
        }
    }
}

Caution: If your application does not properly release the camera, all subsequent attempts to access the camera, including those by your own application, will fail and may cause your or other applications to be shut down.

Saving media files

Media files created by users such as pictures and videos should be saved to a device's external storage directory (SD Card) to conserve system space and to allow users to access these files outside your app. There are many possible directory locations to save media files on a device, but there are only two standard locations you should consider as a developer:

  • Environment.getExternalStoragePublicDirectory(Environment.DIRECTORY_PICTURES) - This method returns the standard, shared and recommended location for saving pictures and videos. This directory is shared (public), so other applications can easily discover, read, change and delete files saved in this location. If your application is uninstalled by the user, media files saved to this location will not be removed. To avoid interfering with users' existing pictures and videos, you should create a sub-directory for your application's media files within this directory, as shown in the code sample below. This method is available in Android 2.2 (API Level 8); for equivalent calls in earlier API versions, see Saving Shared Files.
  • Context.getExternalFilesDir(Environment.DIRECTORY_PICTURES) - This method returns a standard location for saving pictures and videos which are associated with your application. If your application is uninstalled, any files saved in this location are removed. Security is not enforced for files in this location and other applications may read, change and delete them.

The following example code demonstrates how to create a File or Uri location for a media file that can be used when invoking a device's camera with an Intent or as part of building a camera app.

Kotlin

val MEDIA_TYPE_IMAGE = 1
val MEDIA_TYPE_VIDEO = 2

/** Create a file Uri for saving an image or video */
private fun getOutputMediaFileUri(type: Int): Uri {
    return Uri.fromFile(getOutputMediaFile(type))
}

/** Create a File for saving an image or video */
private fun getOutputMediaFile(type: Int): File? {
    // To be safe, you should check that the SDCard is mounted
    // using Environment.getExternalStorageState() before doing this.

    val mediaStorageDir = File(
            Environment.getExternalStoragePublicDirectory(Environment.DIRECTORY_PICTURES),
            "MyCameraApp"
    )
    // This location works best if you want the created images to be shared
    // between applications and persist after your app has been uninstalled.

    // Create the storage directory if it does not exist
    mediaStorageDir.apply {
        if (!exists()) {
            if (!mkdirs()) {
                Log.d("MyCameraApp", "failed to create directory")
                return null
            }
        }
    }

    // Create a media file name
    val timeStamp = SimpleDateFormat("yyyyMMdd_HHmmss").format(Date())
    return when (type) {
        MEDIA_TYPE_IMAGE -> {
            File("${mediaStorageDir.path}${File.separator}IMG_$timeStamp.jpg")
        }
        MEDIA_TYPE_VIDEO -> {
            File("${mediaStorageDir.path}${File.separator}VID_$timeStamp.mp4")
        }
        else -> null
    }
}

Java

public static final int MEDIA_TYPE_IMAGE = 1;
public static final int MEDIA_TYPE_VIDEO = 2;

/** Create a file Uri for saving an image or video */
private static Uri getOutputMediaFileUri(int type){
      return Uri.fromFile(getOutputMediaFile(type));
}

/** Create a File for saving an image or video */
private static File getOutputMediaFile(int type){
    // To be safe, you should check that the SDCard is mounted
    // using Environment.getExternalStorageState() before doing this.

    File mediaStorageDir = new File(Environment.getExternalStoragePublicDirectory(
              Environment.DIRECTORY_PICTURES), "MyCameraApp");
    // This location works best if you want the created images to be shared
    // between applications and persist after your app has been uninstalled.

    // Create the storage directory if it does not exist
    if (! mediaStorageDir.exists()){
        if (! mediaStorageDir.mkdirs()){
            Log.d("MyCameraApp", "failed to create directory");
            return null;
        }
    }

    // Create a media file name
    String timeStamp = new SimpleDateFormat("yyyyMMdd_HHmmss").format(new Date());
    File mediaFile;
    if (type == MEDIA_TYPE_IMAGE){
        mediaFile = new File(mediaStorageDir.getPath() + File.separator +
        "IMG_"+ timeStamp + ".jpg");
    } else if(type == MEDIA_TYPE_VIDEO) {
        mediaFile = new File(mediaStorageDir.getPath() + File.separator +
        "VID_"+ timeStamp + ".mp4");
    } else {
        return null;
    }

    return mediaFile;
}

Note: Environment.getExternalStoragePublicDirectory() is available in Android 2.2 (API Level 8) or higher. If you are targeting devices with earlier versions of Android, use Environment.getExternalStorageDirectory() instead. For more information, see Saving Shared Files.

To make the URI support work profiles, first convert the file URI to a content URI. Then, add the content URI to EXTRA_OUTPUT of an Intent.
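A sketch of that conversion is shown below. It assumes you have declared a FileProvider in your manifest whose authority matches the string used here and whose file-paths resource covers the output directory; both the authority string and the surrounding helper usage are assumptions for illustration.

Kotlin

// Convert the file URI to a content URI and attach it to a capture intent
val photoFile = getOutputMediaFile(MEDIA_TYPE_IMAGE) ?: return
val contentUri = FileProvider.getUriForFile(
        context,
        "${context.packageName}.fileprovider", // authority declared in your manifest (assumed)
        photoFile
)
val captureIntent = Intent(MediaStore.ACTION_IMAGE_CAPTURE).apply {
    putExtra(MediaStore.EXTRA_OUTPUT, contentUri)
    addFlags(Intent.FLAG_GRANT_WRITE_URI_PERMISSION) // let the camera app write to the URI
}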

For more information about saving files on an Android device, see Data Storage.

Camera features

Android supports a broad array of camera features you can control with your camera application, such as picture format, flash mode, focus settings, and many more. This section lists the common camera features and briefly discusses how to use them. Most camera features can be accessed and set using the Camera.Parameters object. However, there are several important features that require more than simple settings in Camera.Parameters. These features are covered in the following sections:

  • Metering and focus areas
  • Face detection
  • Time lapse video

For general information about how to use features that are controlled through Camera.Parameters, review the Using camera features section. For more detailed information about how to use features controlled through the camera parameters object, follow the links in the feature list below to the API reference documentation.

Table 1. Common camera features sorted by the Android API Level in which they were introduced.

Feature API Level Description
Face Detection 14 Identify human faces within a picture and use them for focus, metering and white balance
Metering Areas 14 Specify one or more areas within an image for calculating white balance
Focus Areas 14 Set one or more areas within an image to use for focus
White Balance Lock 14 Stop or start automatic white balance adjustments
Exposure Lock 14 Stop or start automatic exposure adjustments
Video Snapshot 14 Take a picture while shooting video (frame grab)
Time Lapse Video 11 Record frames with set delays to record a time lapse video
Multiple Cameras 9 Support for more than one camera on a device, including front-facing and back-facing cameras
Focus Distance 9 Reports distances between the camera and objects that appear to be in focus
Zoom 8 Set image magnification
Exposure Compensation 8 Increase or decrease the light exposure level
GPS Data 5 Include or omit geographic location data with the image
White Balance 5 Set the white balance mode, which affects color values in the captured image
Focus Mode 5 Set how the camera focuses on a subject, such as automatic, fixed, macro or infinity
Scene Mode 5 Apply a preset mode for specific types of photography situations such as night, beach, snow or candlelight scenes
JPEG Quality 5 Set the compression level for a JPEG image, which increases or decreases image output file quality and size
Flash Mode 5 Turn flash on, off, or use automatic setting
Color Effects 5 Apply a color effect to the captured image such as black and white, sepia tone or negative
Anti-Banding 5 Reduces the effect of banding in color gradients due to JPEG compression
Picture Format 1 Specify the file format for the picture
Picture Size 1 Specify the pixel dimensions of the saved picture

Note: These features are not supported on all devices due to hardware differences and software implementation. For information on checking the availability of features on the device where your application is running, see Checking feature availability.

Checking feature availability

The first thing to understand when setting out to use camera features on Android devices is that not all camera features are supported on all devices. In addition, devices that support a particular feature may support it to different levels or with different options. Therefore, part of your decision process as you develop a camera application is to decide which camera features you want to support and to what level. After making that decision, you should plan on including code in your camera application that checks whether the device hardware supports those features and fails gracefully if a feature is not available.

You can check the availability of camera features by getting an instance of a camera's parameters object and checking the relevant methods. The following code sample shows you how to obtain a Camera.Parameters object and check if the camera supports the autofocus feature:

Kotlin

val params: Camera.Parameters? = camera?.parameters
val focusModes: List<String>? = params?.supportedFocusModes
if (focusModes?.contains(Camera.Parameters.FOCUS_MODE_AUTO) == true) {
    // Autofocus mode is supported
}

Java

// get Camera parameters
Camera.Parameters params = camera.getParameters();

List<String> focusModes = params.getSupportedFocusModes();
if (focusModes.contains(Camera.Parameters.FOCUS_MODE_AUTO)) {
  // Autofocus mode is supported
}

You can use the technique shown above for most camera features. The Camera.Parameters object provides a getSupported...(), is...Supported() or getMax...() method to determine if (and to what extent) a feature is supported.
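For example, here is a sketch of checking and applying zoom with isZoomSupported() and getMaxZoom(); the chosen zoom value is illustrative.

Kotlin

val params = camera?.parameters
if (params != null && params.isZoomSupported) {
    // zoom values range from 0 (no zoom) up to getMaxZoom()
    params.zoom = params.maxZoom / 2 // illustrative: jump to half of the maximum zoom
    camera?.parameters = params
}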

If your application requires certain camera features in order to function properly, you can require them through additions to your application manifest. When you declare the use of specific camera features, such as flash and auto-focus, Google Play restricts your application from being installed on devices which do not support these features. For a list of camera features that can be declared in your app manifest, see the manifest Features Reference.

Using camera features

Most camera features are activated and controlled using a Camera.Parameters object. You obtain this object by first getting an instance of the Camera object, calling the getParameters() method, changing the returned parameter object and then setting it back into the camera object, as demonstrated in the following example code:

Kotlin

val params: Camera.Parameters? = camera?.parameters
params?.focusMode = Camera.Parameters.FOCUS_MODE_AUTO
camera?.parameters = params

Java

// get Camera parameters
Camera.Parameters params = camera.getParameters();
// set the focus mode
params.setFocusMode(Camera.Parameters.FOCUS_MODE_AUTO);
// set Camera parameters
camera.setParameters(params);

This technique works for nearly all camera features, and most parameters can be changed at any time after you have obtained an instance of the Camera object. Changes to parameters are typically visible to the user immediately in the application's camera preview. On the software side, parameter changes may take several frames to actually take effect as the camera hardware processes the new instructions and then sends updated image data.

Important: Some camera features cannot be changed at will. In particular, changing the size or orientation of the camera preview requires that you first stop the preview, change the preview size, and then restart the preview. Starting with Android 4.0 (API Level 14), preview orientation can be changed without restarting the preview.

Other camera features require more code in order to implement, including:

  • Metering and focus areas
  • Face detection
  • Time lapse video

A quick outline of how to implement these features is provided in the following sections.

Metering and focus areas

In some photographic scenarios, automatic focusing and light metering may not produce the desired results. Starting with Android 4.0 (API Level 14), your camera application can provide additional controls to allow your app or users to specify areas in an image to use for determining focus or light level settings and pass these values to the camera hardware for use in capturing images or video.

Areas for metering and focus work very similarly to other camera features, in that you control them through methods in the Camera.Parameters object. The following code demonstrates setting two light metering areas for an instance of Camera:

Kotlin

// Create an instance of Camera
camera = getCameraInstance()

// set Camera parameters
val params: Camera.Parameters? = camera?.parameters

params?.apply {
    if (maxNumMeteringAreas > 0) { // check that metering areas are supported
        meteringAreas = ArrayList<Camera.Area>().apply {
            val areaRect1 = Rect(-100, -100, 100, 100) // specify an area in center of image
            add(Camera.Area(areaRect1, 600)) // set weight to 60%
            val areaRect2 = Rect(800, -1000, 1000, -800) // specify an area in upper right of image
            add(Camera.Area(areaRect2, 400)) // set weight to 40%
        }
    }
    camera?.parameters = this
}

Java

// Create an instance of Camera
camera = getCameraInstance();

// set Camera parameters
Camera.Parameters params = camera.getParameters();

if (params.getMaxNumMeteringAreas() > 0){ // check that metering areas are supported
    List<Camera.Area> meteringAreas = new ArrayList<Camera.Area>();

    Rect areaRect1 = new Rect(-100, -100, 100, 100);    // specify an area in center of image
    meteringAreas.add(new Camera.Area(areaRect1, 600)); // set weight to 60%
    Rect areaRect2 = new Rect(800, -1000, 1000, -800);  // specify an area in upper right of image
    meteringAreas.add(new Camera.Area(areaRect2, 400)); // set weight to 40%
    params.setMeteringAreas(meteringAreas);
}

camera.setParameters(params);

The Camera.Area object contains two data parameters: a Rect object for specifying an area within the camera's field of view, and a weight value, which tells the camera what level of importance this area should be given in light metering or focus calculations.

The Rect field in a Camera.Area object describes a rectangular shape mapped on a 2000 x 2000 unit grid. The coordinates -1000, -1000 represent the top, left corner of the camera image, and the coordinates 1000, 1000 represent the bottom, right corner of the camera image, as shown in the illustration below.

Figure 1. The red lines illustrate the coordinate system for specifying a Camera.Area within a camera preview. The blue box shows the location and shape of a camera area with the Rect values 333,333,667,667.

The bounds of this coordinate system always correspond to the outer edge of the image visible in the camera preview and do not shrink or expand with the zoom level. Similarly, rotation of the image preview using Camera.setDisplayOrientation() does not remap the coordinate system.
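As an illustration of working in this coordinate space, the simplified sketch below maps a touch point on the preview view to a focus area. It deliberately ignores display rotation and aspect-ratio differences, which a production app would need to account for, and the helper name and weight value are illustrative.

Kotlin

/** Map a touch point on the preview view to a Camera.Area focus rectangle (API Level 14+). */
fun focusOnTouch(camera: Camera, view: View, x: Float, y: Float) {
    // Convert view coordinates (0..width, 0..height) to driver coordinates (-1000..1000)
    val areaX = ((x / view.width) * 2000 - 1000).toInt().coerceIn(-900, 900)
    val areaY = ((y / view.height) * 2000 - 1000).toInt().coerceIn(-900, 900)
    val focusRect = Rect(areaX - 100, areaY - 100, areaX + 100, areaY + 100)

    val params = camera.parameters
    if (params.maxNumFocusAreas > 0) {
        params.focusAreas = listOf(Camera.Area(focusRect, 1000)) // weight is illustrative
        camera.parameters = params
        camera.autoFocus { _, _ -> /* focus attempt finished */ }
    }
}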

Face detection

For pictures that include people, faces are usually the most important part of the picture, and should be used for determining both focus and white balance when capturing an image. The Android 4.0 (API Level 14) framework provides APIs for identifying faces and calculating picture settings using face recognition technology.

Note: While the face detection feature is running, setWhiteBalance(String), setFocusAreas(List<Camera.Area>) and setMeteringAreas(List<Camera.Area>) have no effect.

Using the face detection feature in your camera application requires a few general steps:

  • Check that face detection is supported on the device
  • Create a face detection listener
  • Add the face detection listener to your camera object
  • Start face detection after the preview starts (and after every preview restart)

The face detection feature is not supported on all devices. You can check that this feature is supported by calling getMaxNumDetectedFaces(). An example of this check is shown in the startFaceDetection() sample method below.

In order to be notified of and respond to the detection of a face, your camera application must set a listener for face detection events. In order to do this, you must create a listener class that implements the Camera.FaceDetectionListener interface as shown in the example code below.

Kotlin

internal class MyFaceDetectionListener : Camera.FaceDetectionListener {

    override fun onFaceDetection(faces: Array<Camera.Face>, camera: Camera) {
        if (faces.isNotEmpty()) {
            Log.d("FaceDetection", ("face detected: ${faces.size}" +
                    " Face 1 Location X: ${faces[0].rect.centerX()}" +
                    "Y: ${faces[0].rect.centerY()}"))
        }
    }
}

Java

class MyFaceDetectionListener implements Camera.FaceDetectionListener {

    @Override
    public void onFaceDetection(Face[] faces, Camera camera) {
        if (faces.length > 0){
            Log.d("FaceDetection", "face detected: " + faces.length +
                    " Face 1 Location X: " + faces[0].rect.centerX() +
                    "Y: " + faces[0].rect.centerY());
        }
    }
}

After creating this class, you then set it into your application's Camera object, as shown in the example code below:

Kotlin

camera?.setFaceDetectionListener(MyFaceDetectionListener())            

Java

camera.setFaceDetectionListener(new MyFaceDetectionListener());            

Your application must start the face detection function each time you start (or restart) the camera preview. Create a method for starting face detection so you can call it as needed, as shown in the example code below.

Kotlin

fun startFaceDetection() {
    // Try starting Face Detection
    val params = mCamera?.parameters
    // start face detection only *after* preview has started

    params?.apply {
        if (maxNumDetectedFaces > 0) {
            // camera supports face detection, so can start it:
            mCamera?.startFaceDetection()
        }
    }
}

Java

public void startFaceDetection(){
    // Try starting Face Detection
    Camera.Parameters params = mCamera.getParameters();

    // start face detection only *after* preview has started
    if (params.getMaxNumDetectedFaces() > 0){
        // camera supports face detection, so can start it:
        mCamera.startFaceDetection();
    }
}

You must start face detection each time you start (or restart) the camera preview. If you use the preview class shown in Creating a preview class, add your startFaceDetection() method to both the surfaceCreated() and surfaceChanged() methods in your preview class, as shown in the sample code below.

Kotlin

override fun surfaceCreated(holder: SurfaceHolder) {
    try {
        mCamera.setPreviewDisplay(holder)
        mCamera.startPreview()

        startFaceDetection() // start face detection feature
    } catch (e: IOException) {
        Log.d(TAG, "Error setting camera preview: ${e.message}")
    }
}

override fun surfaceChanged(holder: SurfaceHolder, format: Int, w: Int, h: Int) {
    if (holder.surface == null) {
        // preview surface does not exist
        Log.d(TAG, "holder.getSurface() == null")
        return
    }
    try {
        mCamera.stopPreview()
    } catch (e: Exception) {
        // ignore: tried to stop a non-existent preview
        Log.d(TAG, "Error stopping camera preview: ${e.message}")
    }
    try {
        mCamera.setPreviewDisplay(holder)
        mCamera.startPreview()

        startFaceDetection() // re-start face detection feature
    } catch (e: Exception) {
        // ignore: tried to stop a non-existent preview
        Log.d(TAG, "Error starting camera preview: ${e.message}")
    }
}

Java

public void surfaceCreated(SurfaceHolder holder) {
    try {
        mCamera.setPreviewDisplay(holder);
        mCamera.startPreview();

        startFaceDetection(); // start face detection feature

    } catch (IOException e) {
        Log.d(TAG, "Error setting camera preview: " + e.getMessage());
    }
}

public void surfaceChanged(SurfaceHolder holder, int format, int w, int h) {

    if (holder.getSurface() == null){
        // preview surface does not exist
        Log.d(TAG, "holder.getSurface() == null");
        return;
    }

    try {
        mCamera.stopPreview();

    } catch (Exception e){
        // ignore: tried to stop a non-existent preview
        Log.d(TAG, "Error stopping camera preview: " + e.getMessage());
    }

    try {
        mCamera.setPreviewDisplay(holder);
        mCamera.startPreview();

        startFaceDetection(); // re-start face detection feature

    } catch (Exception e){
        // ignore: tried to stop a non-existent preview
        Log.d(TAG, "Error starting camera preview: " + e.getMessage());
    }
}

Note: Remember to call this method after calling startPreview(). Do not attempt to start face detection in the onCreate() method of your camera app's main activity, as the preview is not available at this point in your application's execution.

Time lapse video

Time lapse video allows users to create video clips that combine pictures taken a few seconds or minutes apart. This feature uses MediaRecorder to record the images for a time lapse sequence.

To record a time lapse video with MediaRecorder, you must configure the recorder object as if you are recording a normal video, setting the captured frames per second to a low number and using one of the time lapse quality settings, as shown in the code example below.

Kotlin

mediaRecorder.setProfile(CamcorderProfile.get(CamcorderProfile.QUALITY_TIME_LAPSE_HIGH))
mediaRecorder.setCaptureRate(0.1) // capture a frame every 10 seconds

Java

// Step 3: Set a CamcorderProfile (requires API Level 8 or higher)
mediaRecorder.setProfile(CamcorderProfile.get(CamcorderProfile.QUALITY_TIME_LAPSE_HIGH));
...
// Step 5.5: Set the video capture rate to a low number
mediaRecorder.setCaptureRate(0.1); // capture a frame every 10 seconds

These settings must be made as part of a larger configuration procedure for MediaRecorder. For a full configuration code example, see Configuring MediaRecorder. Once the configuration is complete, you start the video recording as if you were recording a normal video clip. For more information about configuring and running MediaRecorder, see Capturing videos.
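For context, here is a minimal Kotlin configuration sketch, not taken from the original guide, showing where the time lapse profile and capture rate fit into the overall MediaRecorder setup. It assumes camera is an already-opened Camera instance, holder is the SurfaceHolder of your preview surface, and outputPath is a writable file path; the prepareTimeLapseRecorder() helper name and the error handling are illustrative.

Kotlin

import android.hardware.Camera
import android.media.CamcorderProfile
import android.media.MediaRecorder
import android.view.SurfaceHolder

// Sketch only: configure MediaRecorder for time lapse capture with the deprecated Camera class.
fun prepareTimeLapseRecorder(camera: Camera, holder: SurfaceHolder, outputPath: String): MediaRecorder? {
    val recorder = MediaRecorder()

    // Step 1: Unlock the camera and hand it to MediaRecorder
    camera.unlock()
    recorder.setCamera(camera)

    // Step 2: Set the video source (time lapse profiles record no audio, so no audio source is set)
    recorder.setVideoSource(MediaRecorder.VideoSource.CAMERA)

    // Step 3: Use a time lapse CamcorderProfile
    recorder.setProfile(CamcorderProfile.get(CamcorderProfile.QUALITY_TIME_LAPSE_HIGH))

    // Step 4: Set the output file
    recorder.setOutputFile(outputPath)

    // Step 5: Set the preview output
    recorder.setPreviewDisplay(holder.surface)

    // Step 5.5: Capture one frame every 10 seconds
    recorder.setCaptureRate(0.1)

    // Step 6: Prepare; call start() on the returned recorder to begin recording
    return try {
        recorder.prepare()
        recorder
    } catch (e: Exception) {
        recorder.release()
        camera.lock() // return control of the camera to the app (illustrative cleanup)
        null
    }
}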

The Camera2Video and HdrViewfinder samples further demonstrate the use of the APIs covered on this page.

Camera fields that require permission

Apps running on Android 10 (API level 29) or higher must have the CAMERA permission in order to access the values of the following fields that the getCameraCharacteristics() method returns:

  • LENS_POSE_ROTATION
  • LENS_POSE_TRANSLATION
  • LENS_INTRINSIC_CALIBRATION
  • LENS_RADIAL_DISTORTION
  • LENS_POSE_REFERENCE
  • LENS_DISTORTION
  • LENS_INFO_HYPERFOCAL_DISTANCE
  • LENS_INFO_MINIMUM_FOCUS_DISTANCE
  • SENSOR_REFERENCE_ILLUMINANT1
  • SENSOR_REFERENCE_ILLUMINANT2
  • SENSOR_CALIBRATION_TRANSFORM1
  • SENSOR_CALIBRATION_TRANSFORM2
  • SENSOR_COLOR_TRANSFORM1
  • SENSOR_COLOR_TRANSFORM2
  • SENSOR_FORWARD_MATRIX1
  • SENSOR_FORWARD_MATRIX2
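
As an illustration of this permission requirement, the following Kotlin sketch (an assumed example, not from the original guide) uses the camera2 CameraManager to read LENS_INFO_MINIMUM_FOCUS_DISTANCE, one of the fields listed above. It assumes the CAMERA runtime permission has already been requested elsewhere in the app and that androidx.core is available; readMinimumFocusDistance() is a hypothetical helper name.

Kotlin

import android.Manifest
import android.content.Context
import android.content.pm.PackageManager
import android.hardware.camera2.CameraCharacteristics
import android.hardware.camera2.CameraManager
import androidx.core.content.ContextCompat

// Sketch only: read a permission-gated camera characteristic with camera2.
fun readMinimumFocusDistance(context: Context, cameraId: String): Float? {
    val granted = ContextCompat.checkSelfPermission(
        context, Manifest.permission.CAMERA
    ) == PackageManager.PERMISSION_GRANTED
    if (!granted) return null // on API 29+ this field is only returned when CAMERA is granted

    val manager = context.getSystemService(Context.CAMERA_SERVICE) as CameraManager
    val characteristics: CameraCharacteristics = manager.getCameraCharacteristics(cameraId)
    return characteristics.get(CameraCharacteristics.LENS_INFO_MINIMUM_FOCUS_DISTANCE)
}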

Additional sample code

To download sample apps, see the Camera2Basic sample and Official CameraX sample app.

Source: https://developer.android.com/guide/topics/media/camera
