Chapter 30. Photo Library and Image Capture


You should call the class method isSourceTypeAvailable: beforehand; if it doesn’t return YES, don’t present the controller with that source type.

You’ll probably want to specify an array of mediaTypes you’re interested in. This array

will usually contain kUTTypeImage, kUTTypeMovie, or both; or you can specify all available

types by calling the class method availableMediaTypesForSourceType:.

After doing all of that, and having supplied a delegate, present the view controller:

UIImagePickerControllerSourceType type =
    UIImagePickerControllerSourceTypePhotoLibrary;
BOOL ok = [UIImagePickerController isSourceTypeAvailable:type];
if (!ok) {
    return; // source type not available; bail out
}

UIImagePickerController* picker = [[UIImagePickerController alloc] init];

picker.sourceType = type;

picker.mediaTypes =

[UIImagePickerController availableMediaTypesForSourceType:type];

picker.delegate = self;

[self presentViewController:picker animated:YES completion:nil]; // iPhone

On the iPhone, the delegate (UIImagePickerControllerDelegate) will receive one of

these messages:

• imagePickerController:didFinishPickingMediaWithInfo:

• imagePickerControllerDidCancel:

On the iPad, there’s no Cancel button, so there’s no imagePickerControllerDidCancel:; you can detect the dismissal of the popover through the popover delegate. On

the iPhone, if a UIImagePickerControllerDelegate method is not implemented, the view

controller is dismissed automatically; but rather than relying on this, you should implement both delegate methods and dismiss the view controller yourself in both.

The didFinish... method is handed a dictionary of information about the chosen item.

The keys in this dictionary depend on the media type.

An image
The keys are:

UIImagePickerControllerMediaType
A UTI; probably @"public.image", which is the same as kUTTypeImage.

UIImagePickerControllerOriginalImage
A UIImage.

UIImagePickerControllerReferenceURL
An ALAsset URL (discussed later in this chapter).

A movie
The keys are:

UIImagePickerControllerMediaType
A UTI; probably @"public.movie", which is the same as kUTTypeMovie.

UIImagePickerControllerMediaURL
A file URL to a copy of the movie saved into a temporary directory. This would
be suitable, for example, to display the movie with an MPMoviePlayerController (Chapter 28).

UIImagePickerControllerReferenceURL
An ALAsset URL (discussed later in this chapter).
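In the delegate method, then, you might branch on the media type before extracting values. Here's a minimal sketch (the key constants are the real UIImagePickerController keys; what you do with the results is up to you):

```objc
NSString* type = [info objectForKey:UIImagePickerControllerMediaType];
if ([type isEqualToString:(NSString*)kUTTypeImage]) {
    UIImage* im = [info objectForKey:UIImagePickerControllerOriginalImage];
    // display or store the image
} else if ([type isEqualToString:(NSString*)kUTTypeMovie]) {
    NSURL* url = [info objectForKey:UIImagePickerControllerMediaURL];
    // hand the file URL to a movie player
}
```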

Optionally, you can set the view controller’s allowsEditing to YES. In the case of an

image, the interface then allows the user to scale the image up and to move it so as to

be cropped by a preset rectangle; the dictionary will include two additional keys:


UIImagePickerControllerCropRect
An NSValue wrapping a CGRect.

UIImagePickerControllerEditedImage
A UIImage.

In the case of a movie, if the view controller’s allowsEditing is YES, the user can trim

the movie just as with a UIVideoEditorController (Chapter 28). The dictionary keys

are the same as before, but the file URL points to the trimmed copy in the temporary directory.


Because of restrictions on how many movies can play at once (“There

Can Be Only One,” see Chapter 28), if you use a UIImagePickerController to let the user choose a movie and you then want to play that

movie in an MPMoviePlayerController, you must destroy the UIImagePickerController first. How you do this depends on how you displayed

the UIImagePickerController. If you’re using a presented view controller on the iPhone, you can use the completion handler to ensure that the

MPMoviePlayerController isn’t configured until after the animation

dismissing the presented view. If you’re using a popover on the iPad,

you can release the UIPopoverController (probably by nilifying the instance variable that's retaining it) after dismissing the popover, without ill effect.


Using the Camera

To prompt the user to take a photo or video in an interface similar to the Camera app,

instantiate UIImagePickerController and set its source type to UIImagePickerControllerSourceTypeCamera. Be sure to check isSourceTypeAvailable: beforehand; it

will be NO if the user’s device has no camera or the camera is unavailable. If it is YES,

call availableMediaTypesForSourceType: to learn whether the user can take a still photo

(kUTTypeImage), a video (kUTTypeMovie), or both. The result will guide your
mediaTypes setting. Set a delegate, and present the view controller. In this situation, it is legal

(and preferable) to use a presented view controller even on the iPad.

For video, you can also specify the videoQuality and videoMaximumDuration. Moreover,

these additional properties and class methods allow you to discover the camera capabilities:


isCameraDeviceAvailable:
Checks to see whether the front or rear camera is available, using one of these
values as its argument:
• UIImagePickerControllerCameraDeviceFront
• UIImagePickerControllerCameraDeviceRear

cameraDevice
Lets you learn and set which camera is being used.

availableCaptureModesForCameraDevice:
Checks whether the given camera can capture still images, video, or both. You
specify the front or rear camera; the method returns an NSArray of NSNumbers, from which
you can extract the integer values. Possible modes are:
• UIImagePickerControllerCameraCaptureModePhoto
• UIImagePickerControllerCameraCaptureModeVideo

cameraCaptureMode
Lets you learn and set the capture mode (still or video).

isFlashAvailableForCameraDevice:
Checks whether flash is available.

cameraFlashMode
Lets you learn and set the flash mode (or, for a movie, toggles the LED "torch").
Your choices are:
• UIImagePickerControllerCameraFlashModeOff
• UIImagePickerControllerCameraFlashModeAuto
• UIImagePickerControllerCameraFlashModeOn

Setting camera-related properties such as cameraDevice when there is no

camera or when the UIImagePickerController is not set to camera mode

can crash your app.
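A guarded device switch, per the warning above, might look like this (picker is assumed to be a UIImagePickerController already set to the camera source type):

```objc
// Only touch cameraDevice when the front camera actually exists
if ([UIImagePickerController isCameraDeviceAvailable:
        UIImagePickerControllerCameraDeviceFront])
    picker.cameraDevice = UIImagePickerControllerCameraDeviceFront;
```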

When the view controller appears, the user will see the interface for taking a picture,

familiar from the Camera app, possibly including flash button, camera selection button,

and digital zoom (if the hardware supports these), still/video switch (if your mediaTypes setting allows both), and Cancel and Shutter buttons. If the user takes a picture,

the presented view offers an opportunity to use the picture or to retake it.



Allowing the user to edit the captured image or movie, and handling the outcome with

the delegate messages, is the same as I described in the previous section. There won’t

be any UIImagePickerControllerReferenceURL key in the dictionary delivered to the

delegate because the image isn’t in the photo library. A still image might report a UIImagePickerControllerMediaMetadata key containing the metadata for the photo.

Here’s a very simple example in which we offer the user a chance to take a still image;

if the user does so, we insert the image into our interface in a UIImageView (iv):

- (IBAction)doTake:(id)sender {
    BOOL ok = [UIImagePickerController isSourceTypeAvailable:
               UIImagePickerControllerSourceTypeCamera];
    if (!ok) {
        NSLog(@"no camera");
        return;
    }
    NSArray* arr = [UIImagePickerController availableMediaTypesForSourceType:
                    UIImagePickerControllerSourceTypeCamera];
    if ([arr indexOfObject:(NSString*)kUTTypeImage] == NSNotFound) {
        NSLog(@"no stills");
        return;
    }
    UIImagePickerController* pick = [UIImagePickerController new];
    pick.sourceType = UIImagePickerControllerSourceTypeCamera;
    pick.mediaTypes = [NSArray arrayWithObject:(NSString*)kUTTypeImage];
    pick.delegate = self;
    [self presentViewController:pick animated:YES completion:nil];
}

- (void)imagePickerControllerDidCancel:(UIImagePickerController *)picker {
    [self dismissViewControllerAnimated:YES completion:nil];
}

- (void)imagePickerController:(UIImagePickerController *)picker
        didFinishPickingMediaWithInfo:(NSDictionary *)info {
    UIImage* im = [info objectForKey:UIImagePickerControllerOriginalImage];
    if (im)
        self->iv.image = im;
    [self dismissViewControllerAnimated:YES completion:nil];
}


In the image capture interface, you can hide the standard controls by setting showsCameraControls to NO, replacing them with your own overlay view, which you supply

as the value of the cameraOverlayView. In this case, you’re probably going to want some

means in your overlay view to allow the user to take a picture! You can do that through

these methods:

• takePicture

• startVideoCapture

• stopVideoCapture



You can supply a cameraOverlayView even if you don’t set showsCameraControls to NO;

but in that case you’ll need to negotiate the position of your added controls if you don’t

want them to cover the existing controls.

The key to customizing the look and behavior of the image capture interface is that a

UIImagePickerController is a UINavigationController; the controls shown at the bottom of the default interface are the navigation controller’s toolbar. In this example, I’ll

remove all the default controls and allow the user to double-tap the image in order to

take a picture:

// ... starts out as before ...

picker.delegate = self;

picker.showsCameraControls = NO;

CGRect f = self.view.window.bounds;

UIView* v = [[UIView alloc] initWithFrame:f];

UITapGestureRecognizer* t =

[[UITapGestureRecognizer alloc] initWithTarget:self action:@selector(tap:)];

t.numberOfTapsRequired = 2;

[v addGestureRecognizer:t];

picker.cameraOverlayView = v;

[self presentViewController:picker animated:YES completion:nil];

self->p = picker;

// ...

- (void) tap: (id) g {
    [self->p takePicture];
}


The interface is marred by a blank area the size of the toolbar at the bottom of the

screen, below the preview image. What are we to do about this? You can zoom or

otherwise transform the preview image by setting the cameraViewTransform property.

But this can be tricky, because different versions of iOS apply your transform differently; in iOS 4 and later, it is applied from the center, but before that it is applied from

the top. In this situation it is even more tricky, because we don’t know what values to

use; it’s hard to achieve a transform such that the way the image is framed in full-screen

is the same as how the final image is framed. A better solution might be simply to show

the toolbar and cover the blank area; in that case, the framing of the image as displayed

will match the framing of the image as captured.

Since we are the UIImagePickerController’s delegate, we are not only its UIImagePickerControllerDelegate but also its UINavigationControllerDelegate. We can therefore get some control over the navigation controller’s interface, and populate its root

view controller’s toolbar — but only if we wait until the root view controller’s view

actually appears. Here, I’ll increase the height of the toolbar to ensure that it covers the

blank area, and put a Cancel button into it:

- (void)navigationController:(UINavigationController *)nc
       didShowViewController:(UIViewController *)vc
                    animated:(BOOL)animated {
    [nc setToolbarHidden:NO];
    CGRect f = nc.toolbar.frame;
    CGFloat h = 56; // determined experimentally
    CGFloat diff = h - f.size.height;
    f.size.height = h;
    f.origin.y -= diff;
    nc.toolbar.frame = f;
    UIBarButtonItem* b =
        [[UIBarButtonItem alloc] initWithTitle:@"Cancel"
                                         style:UIBarButtonItemStyleBordered
                                        target:self
                                        action:@selector(doCancel:)]; // doCancel: is our own action method
    [nc.topViewController setToolbarItems:[NSArray arrayWithObject:b]];
}


When the user double-taps to take a picture, our didFinishPickingMediaWithInfo delegate method is called, just as before. We don’t automatically get the secondary interface where the user is shown the resulting image and offered an opportunity to use it

or retake the image. But we can provide such an interface ourselves by pushing another

view controller onto the navigation controller:

- (void)imagePickerController:(UIImagePickerController *)picker
        didFinishPickingMediaWithInfo:(NSDictionary *)info {
    UIImage* im = [info objectForKey:UIImagePickerControllerOriginalImage];
    if (!im)
        return;
    SecondViewController* svc =
        [[SecondViewController alloc] initWithNibName:nil bundle:nil image:im];
    [picker pushViewController:svc animated:YES];
}


(Designing the SecondViewController class is left as an exercise for the reader.)

Image Capture With AV Foundation

Instead of using UIImagePickerController, you can control the camera and capture

images using the AV Foundation framework (Chapter 28). You get no help with interface (except for displaying in your interface what the camera “sees”), but you get far

more detailed control than UIImagePickerController can give you; for example, for

stills, you can control focus and exposure directly and independently, and for video,

you can determine the quality, size, and framerate of the resulting movie. You can also

capture audio, of course.

The heart of all AV Foundation capture operations is an AVCaptureSession object. You

configure this and provide it as desired with inputs (such as a camera) and outputs

(such as a file); then you call startRunning to begin the actual capture. You can reconfigure an AVCaptureSession, possibly adding or removing an input or output, while it

is running — indeed, doing so is far more efficient than stopping the session and starting

it again — but you should wrap your configuration changes in beginConfiguration and commitConfiguration.
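A running-session reconfiguration might thus be bracketed like this (sess is assumed to be our running AVCaptureSession; the particular preset is arbitrary):

```objc
[self.sess beginConfiguration];
// e.g. switch presets, or add/remove an input or output
self.sess.sessionPreset = AVCaptureSessionPresetPhoto;
[self.sess commitConfiguration];
```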




As a rock-bottom example, let’s start by displaying in our interface, in real time, what

the camera sees. This requires an AVCaptureVideoPreviewLayer, a CALayer subclass.

This layer is not an AVCaptureSession output; rather, the layer receives its imagery by

owning the AVCaptureSession:

self.sess = [AVCaptureSession new];

AVCaptureDevice* cam =

[AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];

AVCaptureDeviceInput* input =

[AVCaptureDeviceInput deviceInputWithDevice:cam error:nil];

// error-checking omitted

[self.sess addInput:input];

AVCaptureVideoPreviewLayer* lay =

[[AVCaptureVideoPreviewLayer alloc] initWithSession:self.sess];

lay.frame = CGRectMake(10,30,300,300);

[self.view.layer addSublayer:lay];

[self.sess startRunning];

Presto! Our interface now contains a window on the world, so to speak. Next, let’s

permit the user to snap a still photo, which our interface will display instead of the realtime view of what the camera sees. As a first step, we’ll need to revise what happens as

we create our AVCaptureSession in the previous code. Since this image is to go directly

into our interface, we won’t need the full eight megapixel size of which the iPhone 4

camera is capable, so we’ll configure our AVCaptureSession’s sessionPreset to ask for

a much smaller image. We'll also provide an output for our AVCaptureSession, an
AVCaptureStillImageOutput:
self.sess = [AVCaptureSession new];

self.sess.sessionPreset = AVCaptureSessionPreset640x480;

self.snapper = [AVCaptureStillImageOutput new];

self.snapper.outputSettings =

[NSDictionary dictionaryWithObject:AVVideoCodecJPEG forKey:AVVideoCodecKey];

[self.sess addOutput:self.snapper];

// ... and the rest is as before ...

When the user asks to snap a picture, we send captureStillImageAsynchronouslyFromConnection:completionHandler: to our AVCaptureStillImageOutput object. This call

requires some preparation. The first argument is an AVCaptureConnection; to find it,

we ask the output for its connection that is currently inputting video. The second argument is the block that will be called, possibly on a background thread, when the

image data is ready. We capture the data into a UIImage and, moving onto the main

thread (Chapter 38), we construct in the interface a UIImageView containing that image, in place of the AVCaptureVideoPreviewLayer we were displaying previously:

AVCaptureConnection *vc = [self.snapper connectionWithMediaType:AVMediaTypeVideo];
typedef void(^MyBufBlock)(CMSampleBufferRef, NSError*);
MyBufBlock h = ^(CMSampleBufferRef buf, NSError *err) {
    NSData* data =
        [AVCaptureStillImageOutput jpegStillImageNSDataRepresentation:buf];
    UIImage* im = [UIImage imageWithData:data];
    dispatch_async(dispatch_get_main_queue(), ^{
        UIImageView* iv =
            [[UIImageView alloc] initWithFrame:CGRectMake(10,30,300,300)];
        iv.contentMode = UIViewContentModeScaleAspectFit;
        iv.image = im;
        [self.sess stopRunning];
        [[self.view.layer.sublayers lastObject] removeFromSuperlayer];
        [self.view addSubview: iv];
    });
};
[self.snapper captureStillImageAsynchronouslyFromConnection:vc
                                          completionHandler:h];

Our code has not illustrated setting the focus, changing the flash settings, and so forth;

doing so is not difficult (see the class documentation on AVCaptureDevice), but note

that you should wrap such changes in calls to lockForConfiguration: and unlockForConfiguration. You can turn on the LED “torch” by setting the back camera’s torchMode to AVCaptureTorchModeOn, even if no AVCaptureSession is running (new in iOS 5).
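For example, turning on the torch might be sketched like this:

```objc
AVCaptureDevice* cam =
    [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
NSError* err = nil;
if ([cam hasTorch] && [cam lockForConfiguration:&err]) {
    if ([cam isTorchModeSupported:AVCaptureTorchModeOn])
        cam.torchMode = AVCaptureTorchModeOn; // LED on
    [cam unlockForConfiguration];
}
```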

AV Foundation’s control over the camera, and its ability to process incoming data —

especially video data — goes far deeper than there is room to discuss here, so consult

the documentation; in particular, see the “Media Capture” chapter of the AV Foundation Programming Guide, plus the AV Foundation Release Notes for iOS 5. There are

also excellent WWDC videos on AV Foundation, and some fine sample code; in particular, I found Apple’s AVCam example very helpful while preparing this discussion.

The Assets Library Framework

The Assets Library framework does for the photo library roughly what the Media Player

framework does for the music library (Chapter 29), letting your code explore the library's contents. You'll need to link to AssetsLibrary.framework and import <AssetsLibrary/AssetsLibrary.h>. One obvious use of the Assets Library framework might be

to implement your own interface for letting the user choose an image in a way that

transcends the limitations of UIImagePickerController.

A photo or video in the photo library is an ALAsset. Like a media entity (Chapter 29),

an ALAsset can describe itself through key–value pairs called properties. (This use of

the word “properties” has nothing to do with the Objective-C properties discussed in

Chapter 12.) For example, it can report its type (photo or video), its creation date, its

orientation if it is a photo whose metadata contains this information, and its duration

if it is a video. You fetch a property value with valueForProperty:. The properties have

names like ALAssetPropertyType.
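Fetching a couple of property values might look like this (asset is assumed to be an ALAsset obtained by the means described later in this section):

```objc
NSString* type = [asset valueForProperty:ALAssetPropertyType];
// ALAssetTypePhoto or ALAssetTypeVideo
NSDate* created = [asset valueForProperty:ALAssetPropertyDate];
NSLog(@"%@ created %@", type, created);
```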

A photo can provide multiple representations (roughly, image file formats). A given

photo ALAsset lists these representations as one of its properties, ALAssetPropertyRepresentations, an array of strings giving the UTIs identifying the file formats; a typical

UTI might be @"public.jpeg" (kUTTypeJPEG, if you’ve linked to MobileCoreServices.framework). A representation is an ALAssetRepresentation. You can get a



photo’s defaultRepresentation, or ask for a particular representation by submitting a

file format’s UTI to representationForUTI:.

Once you have an ALAssetRepresentation, you can interrogate it to get the actual image, either as raw data or as a CGImage (see Chapter 15). The simplest way is to ask

for its fullResolutionImage or its fullScreenImage (the latter is more suitable for display

in your interface, and is identical in iOS 5 to what the Photos app displays); you may

then want to derive a UIImage from this using imageWithCGImage:scale:orientation:.

The original scale and orientation of the image are available as the ALAssetRepresentation’s scale and orientation. (In iOS 5, if all you need is a small version of

the image to display in your interface, you can ask the ALAsset itself for its aspectRatioThumbnail.) An ALAssetRepresentation also has a url, which is the unique identifier for

the ALAsset.
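Because the representation's url uniquely identifies the asset, it can be stored and later redeemed with ALAssetsLibrary's assetForURL:resultBlock:failureBlock:. A minimal sketch (url is assumed to be a URL saved earlier):

```objc
ALAssetsLibrary* lib = [[ALAssetsLibrary alloc] init];
[lib assetForURL:url resultBlock:^(ALAsset* asset) {
    // use the asset
} failureBlock:^(NSError* error) {
    NSLog(@"couldn't fetch asset: %@", error);
}];
```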

The photo library itself is an ALAssetsLibrary instance. It is divided into groups

(ALAssetsGroup), which have types. For example, the user might have multiple albums; each of these is a group of type ALAssetsGroupAlbum. (In iOS 5, you also have

access to the new PhotoStream album.) An ALAssetsGroup also has properties, such

as a name, which you can fetch with valueForProperty:; new in iOS 5, a group has a

URL, which is its unique identifier. To fetch assets from the library, you either fetch

one specific asset by providing its URL, or you can start with a group, in which case

you can then enumerate the group’s assets. To obtain a group, you can enumerate the

library’s groups of a certain type, in which case you are handed each group as an

ALAssetsGroup, or (new in iOS 5) you can provide a particular group’s URL. Before

enumerating a group’s assets, you may optionally filter the group using a simple

ALAssetsFilter; this limits any subsequent enumeration to photos only, videos only, or all assets.
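Applying a filter before enumeration might look like this (group is assumed to be an ALAssetsGroup obtained as just described):

```objc
[group setAssetsFilter:[ALAssetsFilter allPhotos]]; // videos will be skipped
NSInteger count = [group numberOfAssets]; // count reflects the filter
```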


The Assets Library framework uses Objective-C blocks for fetching and enumerating

assets and groups. These blocks behave rather oddly: at the end of the enumeration,

they are called one extra time with a nil first parameter. Thus, you must code your block

defensively to avoid treating the first parameter as real on that final call.

We now know enough for an example! I’ll fetch the first photo from the album named

“mattBestVertical” in my photo library and stick it into a UIImageView in the interface.

For readability, I’ve set up the blocks in my code separately as variables before they are

used, so it will help to read backward: we enumerate (at the end of the code) using the

getGroups block (previously defined), which itself enumerates using the getPix block

(defined before that). We must also be prepared with a block that handles the possibility

of an error. Here we go:

// what I'll do with the assets from the group
ALAssetsGroupEnumerationResultsBlock getPix =
    ^ (ALAsset *result, NSUInteger index, BOOL *stop) {
        if (!result)
            return;
        ALAssetRepresentation* rep = [result defaultRepresentation];
        CGImageRef im = [rep fullScreenImage];
        UIImage* im2 = [UIImage imageWithCGImage:im scale:rep.scale
                               orientation:(UIImageOrientation)rep.orientation];
        [self->iv setImage:im2]; // put image into our UIImageView
        *stop = YES; // got first image, all done
    };
// what I'll do with the groups from the library
ALAssetsLibraryGroupsEnumerationResultsBlock getGroups =
    ^ (ALAssetsGroup *group, BOOL *stop) {
        if (!group)
            return;
        NSString* title = [group valueForProperty: ALAssetsGroupPropertyName];
        if ([title isEqualToString: @"mattBestVertical"]) {
            [group enumerateAssetsUsingBlock:getPix];
            *stop = YES; // got target group, all done
        }
    };
// might not be able to access library at all
ALAssetsLibraryAccessFailureBlock oops = ^ (NSError *error) {
    NSLog(@"oops! %@", [error localizedDescription]);
    // e.g., "Global denied access"
};
// and here we go with the actual enumeration!
ALAssetsLibrary* library = [[ALAssetsLibrary alloc] init];
[library enumerateGroupsWithTypes: ALAssetsGroupAlbum
                       usingBlock: getGroups
                     failureBlock: oops];

You can write files into the Camera Roll / Saved Photos album. The basic function for

writing an image file to this location is UIImageWriteToSavedPhotosAlbum. Some kinds

of video file can also be saved here; in an example in Chapter 28, I checked whether

this was true of a certain video file by calling UIVideoAtPathIsCompatibleWithSavedPhotosAlbum, and I saved the file by calling UISaveVideoAtPathToSavedPhotosAlbum.

The ALAssetsLibrary class extends these abilities by providing five additional methods:


writeImageToSavedPhotosAlbum:orientation:completionBlock:
Takes a CGImageRef and orientation.

writeImageToSavedPhotosAlbum:metadata:completionBlock:
Takes a CGImageRef and optional metadata dictionary (such as might arrive
through the UIImagePickerControllerMediaMetadata key when the user takes a picture using UIImagePickerController).

writeImageDataToSavedPhotosAlbum:metadata:completionBlock:
Takes raw image data (NSData) and optional metadata.

videoAtPathIsCompatibleWithSavedPhotosAlbum:
Takes a file path string. Returns a boolean.

writeVideoAtPathToSavedPhotosAlbum:completionBlock:
Takes a file path string.



Saving takes time, so a completion block allows you to be notified when it’s over. The

completion block supplies two parameters: an NSURL and an NSError. If the first

parameter is not nil, the write succeeded, and this is the URL of the resulting ALAsset.

If the first parameter is nil, the write failed, and the second parameter describes the error.
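A sketch of saving with such a completion block, assuming im is a UIImage we already have in hand:

```objc
ALAssetsLibrary* lib = [[ALAssetsLibrary alloc] init];
[lib writeImageToSavedPhotosAlbum:im.CGImage
                      orientation:(ALAssetOrientation)im.imageOrientation
                  completionBlock:^(NSURL* assetURL, NSError* error) {
    if (assetURL)
        NSLog(@"saved; asset URL is %@", assetURL);
    else
        NSLog(@"save failed: %@", [error localizedDescription]);
}];
```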


Starting in iOS 5, you can create in the Camera Roll / Saved Photos album an image or

video that is considered to be a modified version of an existing image or video, by calling

an instance method on the original asset:

• writeModifiedImageDataToSavedPhotosAlbum:metadata:completionBlock:

• writeModifiedVideoAtPathToSavedPhotosAlbum:completionBlock:

Afterwards, you can get from the modified asset to the original asset through the former’s originalAsset property.

New in iOS 5, you are allowed to “edit” an asset — that is, you can replace an image

or video in the library with a different image or video — but only if your application

created the asset. Check the asset’s editable property; if it is YES, you can call either

of these methods:

• setImageData:metadata:completionBlock:

• setVideoAtPath:completionBlock:
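For instance, replacing an editable image asset's content might be sketched like this (asset and newJPEGData are assumed to be in hand):

```objc
if (asset.editable) {
    [asset setImageData:newJPEGData
               metadata:nil
        completionBlock:^(NSURL* assetURL, NSError* error) {
            if (!assetURL)
                NSLog(@"edit failed: %@", [error localizedDescription]);
        }];
}
```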

Also new in iOS 5, you are allowed to create an album. If an album is editable, which

would be because you created it, you can add an existing asset to it by calling addAsset:. (This is not the same as saving a new asset to an album other than the Camera

Roll / Saved Photos album; you can’t do that, but once an asset exists, it can belong to

more than one album.)
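Creating an album and adding an asset to it might be sketched like this (library and asset are assumed to be in hand; the album name is arbitrary):

```objc
[library addAssetsGroupAlbumWithName:@"My Album"
                         resultBlock:^(ALAssetsGroup* group) {
    if (group && group.editable)
        [group addAsset:asset];
} failureBlock:^(NSError* error) {
    NSLog(@"couldn't create album: %@", error);
}];
```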


