How do I export a UIImage array as a movie?

I have a serious problem: I have an NSArray with several UIImage objects. What I want to do now is create a movie from those UIImages, but I have no idea how to do it.

I hope someone can help me or send me a code snippet that does something like what I want.

Edit: For future reference: after applying the solution, if the video looks distorted, make sure the width of the images/area you are capturing is a multiple of 16. It took me many hours of struggling to find that out:
Why does my movie from UIImages get distorted?
Here is the complete solution (just make sure the width is a multiple of 16):
http://codethink.no-ip.org/wordpress/archives/673


Well, this is kind of hard to do in pure Objective-C... If you are developing for jailbroken devices, a good option is to use the command-line tool ffmpeg from inside your app. Creating a movie from images is quite easy with a command like:

ffmpeg -r 10 -b 1800 -i %03d.jpg test1800.mp4
Note that the images have to be named sequentially and also placed in the same directory. For more information, take a look at: http://electron.mit.edu/~gsteele/ffmpeg/
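If you go this route, you first need the UIImage array on disk as sequentially numbered JPEGs matching the %03d.jpg pattern. A minimal Swift sketch (the directory and JPEG quality are assumptions, not part of the original answer):

import UIKit

// Writes each image as 001.jpg, 002.jpg, ... into the given directory,
// so ffmpeg's %03d.jpg input pattern picks them up in order.
func exportImagesForFFmpeg(_ images: [UIImage], to directory: URL) throws {
    try FileManager.default.createDirectory(at: directory, withIntermediateDirectories: true)
    for (index, image) in images.enumerated() {
        guard let data = image.jpegData(compressionQuality: 0.9) else { continue }
        try data.write(to: directory.appendingPathComponent(String(format: "%03d.jpg", index + 1)))
    }
}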

Take a look at AVAssetWriter and the rest of the AVFoundation framework. The writer has an input of type AVAssetWriterInput, which in turn has a method called appendSampleBuffer: that lets you add individual frames to a video stream. Essentially you will have to:

1) Wire the writer:

NSError *error = nil;
AVAssetWriter *videoWriter = [[AVAssetWriter alloc] initWithURL:
[NSURL fileURLWithPath:somePath] fileType:AVFileTypeQuickTimeMovie
error:&error];
NSParameterAssert(videoWriter);


NSDictionary *videoSettings = [NSDictionary dictionaryWithObjectsAndKeys:
AVVideoCodecH264, AVVideoCodecKey,
[NSNumber numberWithInt:640], AVVideoWidthKey,
[NSNumber numberWithInt:480], AVVideoHeightKey,
nil];
AVAssetWriterInput* writerInput = [[AVAssetWriterInput
assetWriterInputWithMediaType:AVMediaTypeVideo
outputSettings:videoSettings] retain]; //retain should be removed if ARC


NSParameterAssert(writerInput);
NSParameterAssert([videoWriter canAddInput:writerInput]);
[videoWriter addInput:writerInput];

2) Start a session:

[videoWriter startWriting];
[videoWriter startSessionAtSourceTime:…]; //use kCMTimeZero if unsure

3) Write some samples:

// Or you can use AVAssetWriterInputPixelBufferAdaptor.
// That lets you feed the writer input data from a CVPixelBuffer
// that’s quite easy to create from a CGImage.
[writerInput appendSampleBuffer:sampleBuffer];

4) Finish the session:

[writerInput markAsFinished];
[videoWriter endSessionAtSourceTime:…]; //optional; you can call finishWriting without specifying an endTime
[videoWriter finishWriting]; //deprecated in ios6
/*
[videoWriter finishWritingWithCompletionHandler:...]; //ios 6.0+
*/

You will still have to fill in a lot of blanks, but I think the only really hard remaining part is getting a pixel buffer from a CGImage:

- (CVPixelBufferRef) newPixelBufferFromCGImage: (CGImageRef) image
{
NSDictionary *options = [NSDictionary dictionaryWithObjectsAndKeys:
[NSNumber numberWithBool:YES], kCVPixelBufferCGImageCompatibilityKey,
[NSNumber numberWithBool:YES], kCVPixelBufferCGBitmapContextCompatibilityKey,
nil];
CVPixelBufferRef pxbuffer = NULL;
CVReturn status = CVPixelBufferCreate(kCFAllocatorDefault, frameSize.width,
frameSize.height, kCVPixelFormatType_32ARGB, (CFDictionaryRef) options,
&pxbuffer);
NSParameterAssert(status == kCVReturnSuccess && pxbuffer != NULL);


CVPixelBufferLockBaseAddress(pxbuffer, 0);
void *pxdata = CVPixelBufferGetBaseAddress(pxbuffer);
NSParameterAssert(pxdata != NULL);


CGColorSpaceRef rgbColorSpace = CGColorSpaceCreateDeviceRGB();
CGContextRef context = CGBitmapContextCreate(pxdata, frameSize.width,
frameSize.height, 8, 4*frameSize.width, rgbColorSpace,
kCGImageAlphaNoneSkipFirst);
NSParameterAssert(context);
CGContextConcatCTM(context, frameTransform);
CGContextDrawImage(context, CGRectMake(0, 0, CGImageGetWidth(image),
CGImageGetHeight(image)), image);
CGColorSpaceRelease(rgbColorSpace);
CGContextRelease(context);


CVPixelBufferUnlockBaseAddress(pxbuffer, 0);


return pxbuffer;
}

frameSize is a CGSize describing the target frame size, and frameTransform is a CGAffineTransform that lets you transform the images when you draw them into frames.
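For reference, here is a sketch of plausible values for those two ivars, written in Swift for brevity (the 640x480 size matches the writer settings above; the identity transform is an assumption):

import CoreGraphics

// Target frame size; should match AVVideoWidthKey/AVVideoHeightKey above.
let frameSize = CGSize(width: 640, height: 480)

// Identity draws the image as-is; substitute a scale or rotation
// if your source images need to be fitted into the frame.
let frameTransform = CGAffineTransform.identity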

I took Zoul's main idea, incorporated the AVAssetWriterInputPixelBufferAdaptor approach, and turned it into the beginnings of a little framework.

Feel free to check it out and improve it! CEMovieMaker

Here is the latest working code for iOS 8, in Objective-C.

We had to make a variety of tweaks to @Zoul's answer above to get it to work on the latest version of Xcode and iOS 8. Here is our complete working code that takes an array of UIImages, turns them into a .mov file, saves it to a temp directory, and then moves it to the camera roll. We assembled code from multiple different posts to get this working. We have highlighted the traps we had to solve to get the code working, in the comments.

(1) Create your collection of UIImages

[self saveMovieToLibrary]




- (IBAction)saveMovieToLibrary
{
// You just need the height and width of the video here
// For us, our input and output video was 640 height x 480 width
// which is what we get from the iOS front camera
ATHSingleton *singleton = [ATHSingleton singletons];
int height = singleton.screenHeight;
int width = singleton.screenWidth;


// You can save a .mov or a .mp4 file
//NSString *fileNameOut = @"temp.mp4";
NSString *fileNameOut = @"temp.mov";


// We chose to save in the tmp/ directory on the device initially
NSString *directoryOut = @"tmp/";
NSString *outFile = [NSString stringWithFormat:@"%@%@",directoryOut,fileNameOut];
NSString *path = [NSHomeDirectory() stringByAppendingPathComponent:[NSString stringWithFormat:outFile]];
NSURL *videoTempURL = [NSURL fileURLWithPath:[NSString stringWithFormat:@"%@%@", NSTemporaryDirectory(), fileNameOut]];


// WARNING: AVAssetWriter does not overwrite files for us, so remove the destination file if it already exists
NSFileManager *fileManager = [NSFileManager defaultManager];
[fileManager removeItemAtPath:[videoTempURL path]  error:NULL];




// Create your own array of UIImages
NSMutableArray *images = [NSMutableArray array];
for (int i=0; i<singleton.numberOfScreenshots; i++)
{
// This was our routine that returned a UIImage. Just use your own.
UIImage *image =[self uiimageFromCopyOfPixelBuffersUsingIndex:i];
// We used a routine to write text onto every image
// so we could validate the images were actually being written when testing. This was it below.
image = [self writeToImage:image Text:[NSString stringWithFormat:@"%i",i ]];
[images addObject:image];
}


// If you just want to manually add a few images - here is code you can uncomment
// NSString *path = [NSHomeDirectory() stringByAppendingPathComponent:[NSString stringWithFormat:@"Documents/movie.mp4"]];
//    NSArray *images = [[NSArray alloc] initWithObjects:
//                      [UIImage imageNamed:@"add_ar.png"],
//                      [UIImage imageNamed:@"add_ja.png"],
//                      [UIImage imageNamed:@"add_ru.png"],
//                      [UIImage imageNamed:@"add_ru.png"],
//                      [UIImage imageNamed:@"add_ar.png"],
//                      [UIImage imageNamed:@"add_ja.png"],
//                      [UIImage imageNamed:@"add_ru.png"],
//                      [UIImage imageNamed:@"add_ar.png"],
//                      [UIImage imageNamed:@"add_en.png"], nil];






[self writeImageAsMovie:images toPath:path size:CGSizeMake(height, width)];
}

This is the main method that creates your AssetWriter and adds images to it for writing.

(2) Wire up an AVAssetWriter

-(void)writeImageAsMovie:(NSArray *)array toPath:(NSString*)path size:(CGSize)size
{


NSError *error = nil;


// FIRST, start up an AVAssetWriter instance to write your video
// Give it a destination path (for us: tmp/temp.mov)
AVAssetWriter *videoWriter = [[AVAssetWriter alloc] initWithURL:[NSURL fileURLWithPath:path]
fileType:AVFileTypeQuickTimeMovie
error:&error];




NSParameterAssert(videoWriter);


NSDictionary *videoSettings = [NSDictionary dictionaryWithObjectsAndKeys:
AVVideoCodecH264, AVVideoCodecKey,
[NSNumber numberWithInt:size.width], AVVideoWidthKey,
[NSNumber numberWithInt:size.height], AVVideoHeightKey,
nil];


AVAssetWriterInput* writerInput = [AVAssetWriterInput assetWriterInputWithMediaType:AVMediaTypeVideo
outputSettings:videoSettings];


AVAssetWriterInputPixelBufferAdaptor *adaptor = [AVAssetWriterInputPixelBufferAdaptor assetWriterInputPixelBufferAdaptorWithAssetWriterInput:writerInput
sourcePixelBufferAttributes:nil];
NSParameterAssert(writerInput);
NSParameterAssert([videoWriter canAddInput:writerInput]);
[videoWriter addInput:writerInput];

(3) Start a writing session (NOTE: this method continues on from above)

    //Start a SESSION of writing.
// After you start a session, you will keep adding image frames
// until you are complete - then you will tell it you are done.
[videoWriter startWriting];
// This starts your video at time = 0
[videoWriter startSessionAtSourceTime:kCMTimeZero];


CVPixelBufferRef buffer = NULL;


// This was just our utility class to get screen sizes etc.
ATHSingleton *singleton = [ATHSingleton singletons];


int i = 0;
while (1)
{
// Check if the writer is ready for more data, if not, just wait
if(writerInput.readyForMoreMediaData){


CMTime frameTime = CMTimeMake(150, 600);
// CMTime = Value and Timescale.
// Timescale = the number of ticks per second you want
// Value is the number of ticks
// For us - each frame we add will be 1/4th of a second
// Apple recommends 600 ticks per second for video because it is a
// multiple of the standard video rates 24, 30, 60 fps etc.
CMTime lastTime=CMTimeMake(i*150, 600);
CMTime presentTime=CMTimeAdd(lastTime, frameTime);


if (i == 0) {presentTime = CMTimeMake(0, 600);}
// This ensures the first frame starts at 0.




if (i >= [array count])
{
buffer = NULL;
}
else
{
// This command grabs the next UIImage and converts it to a CGImage
buffer = [self pixelBufferFromCGImage:[[array objectAtIndex:i] CGImage]];
}




if (buffer)
{
// Give the CGImage to the AVAssetWriter to add to your video
[adaptor appendPixelBuffer:buffer withPresentationTime:presentTime];
i++;
}
else
{

(4) Finish the session (NOTE: this method continues on from above)

                //Finish the session:
// This is important to be done exactly in this order
[writerInput markAsFinished];
// WARNING: finishWriting in the solution above is deprecated.
// You now need to give a completion handler.
[videoWriter finishWritingWithCompletionHandler:^{
NSLog(@"Finished writing...checking completion status...");
if (videoWriter.status != AVAssetWriterStatusFailed && videoWriter.status == AVAssetWriterStatusCompleted)
{
NSLog(@"Video writing succeeded.");


// Move video to camera roll
// NOTE: You cannot write directly to the camera roll.
// You must first write to an iOS directory then move it!
NSURL *videoTempURL = [NSURL fileURLWithPath:[NSString stringWithFormat:@"%@", path]];
[self saveToCameraRoll:videoTempURL];


} else
{
NSLog(@"Video writing failed: %@", videoWriter.error);
}


}]; // end videoWriter finishWriting Block


CVPixelBufferPoolRelease(adaptor.pixelBufferPool);


NSLog (@"Done");
break;
}
}
}
}
(5) Convert your UIImages to a CVPixelBufferRef
This method will give you a CV pixel buffer reference, which is what the AssetWriter needs. You get this from the CGImageRef obtained from your UIImage (above).

- (CVPixelBufferRef) pixelBufferFromCGImage: (CGImageRef) image
{
// This again was just our utility class for the height & width of the
// incoming video (640 height x 480 width)
ATHSingleton *singleton = [ATHSingleton singletons];
int height = singleton.screenHeight;
int width = singleton.screenWidth;


NSDictionary *options = [NSDictionary dictionaryWithObjectsAndKeys:
[NSNumber numberWithBool:YES], kCVPixelBufferCGImageCompatibilityKey,
[NSNumber numberWithBool:YES], kCVPixelBufferCGBitmapContextCompatibilityKey,
nil];
CVPixelBufferRef pxbuffer = NULL;


CVReturn status = CVPixelBufferCreate(kCFAllocatorDefault, width,
height, kCVPixelFormatType_32ARGB, (__bridge CFDictionaryRef) options,
&pxbuffer);


NSParameterAssert(status == kCVReturnSuccess && pxbuffer != NULL);


CVPixelBufferLockBaseAddress(pxbuffer, 0);
void *pxdata = CVPixelBufferGetBaseAddress(pxbuffer);
NSParameterAssert(pxdata != NULL);


CGColorSpaceRef rgbColorSpace = CGColorSpaceCreateDeviceRGB();


CGContextRef context = CGBitmapContextCreate(pxdata, width,
height, 8, 4*width, rgbColorSpace,
kCGImageAlphaNoneSkipFirst);
NSParameterAssert(context);
CGContextConcatCTM(context, CGAffineTransformMakeRotation(0));
CGContextDrawImage(context, CGRectMake(0, 0, CGImageGetWidth(image),
CGImageGetHeight(image)), image);
CGColorSpaceRelease(rgbColorSpace);
CGContextRelease(context);


CVPixelBufferUnlockBaseAddress(pxbuffer, 0);


return pxbuffer;
}

(6) Move your video to the camera roll
Because an AVAssetWriter cannot write directly to the camera roll, this moves the video from "tmp/temp.mov" (or whatever filename you used above) to the camera roll.

- (void) saveToCameraRoll:(NSURL *)srcURL
{
NSLog(@"srcURL: %@", srcURL);


ALAssetsLibrary *library = [[ALAssetsLibrary alloc] init];
ALAssetsLibraryWriteVideoCompletionBlock videoWriteCompletionBlock =
^(NSURL *newURL, NSError *error) {
if (error) {
NSLog( @"Error writing image with metadata to Photo Library: %@", error );
} else {
NSLog( @"Wrote image with metadata to Photo Library %@", newURL.absoluteString);
}
};


if ([library videoAtPathIsCompatibleWithSavedPhotosAlbum:srcURL])
{
[library writeVideoAtPathToSavedPhotosAlbum:srcURL
completionBlock:videoWriteCompletionBlock];
}
}

Zoul's answer above gives a nice outline of what you will be doing. We heavily commented this code so you can see how it is done with working code.

Updated to Swift 5

Last week I set out to write iOS code to generate a video from images. I had a little AVFoundation experience, but had never even heard of a CVPixelBuffer. I came across the answers on this page and also here. It took several days to dissect everything and put it all back together in Swift in a way that made sense to my brain. Below is what I came up with.

NOTE: If you copy/paste all of the code below into a single Swift file, it should compile. You will just need to tweak loadImages() and the RenderSettings values; a minimal loadImages() sketch follows.
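For example, a loadImages() could look like this sketch (the filenames 1.jpg through 10.jpg are placeholders; substitute your own image source):

import UIKit

// Replace this logic with your own.
func loadImages() -> [UIImage] {
    var images = [UIImage]()
    for index in 1...10 {
        // Assumes bundle assets named "1.jpg" ... "10.jpg".
        images.append(UIImage(named: "\(index).jpg")!)
    }
    return images
}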

Part 1: Setting things up

Here I group all the export-related settings into a single RenderSettings struct.

import AVFoundation
import UIKit
import Photos


struct RenderSettings {


var size : CGSize = .zero
var fps: Int32 = 6   // frames per second
var avCodecKey = AVVideoCodecType.h264
var videoFilename = "render"
var videoFilenameExt = "mp4"




var outputURL: URL {
// Use the CachesDirectory so the rendered video file sticks around as long as we need it to.
// Using the CachesDirectory ensures the file won't be included in a backup of the app.
let fileManager = FileManager.default
if let tmpDirURL = try? fileManager.url(for: .cachesDirectory, in: .userDomainMask, appropriateFor: nil, create: true) {
return tmpDirURL.appendingPathComponent(videoFilename).appendingPathExtension(videoFilenameExt)
}
fatalError("URLForDirectory() failed")
}
}

Part 2: The ImageAnimator

The ImageAnimator class knows about your images and uses the VideoWriter class to perform the rendering. The idea is to keep the video content code separate from the low-level AVFoundation code. I also added saveToLibrary() here as a class function, which gets called at the end of the chain to save the video to the Photo Library.

class ImageAnimator {


// Apple suggests a timescale of 600 because it's a multiple of standard video rates 24, 25, 30, 60 fps etc.
static let kTimescale: Int32 = 600


let settings: RenderSettings
let videoWriter: VideoWriter
var images: [UIImage]!


var frameNum = 0


class func saveToLibrary(videoURL: URL) {
PHPhotoLibrary.requestAuthorization { status in
guard status == .authorized else { return }


PHPhotoLibrary.shared().performChanges({
PHAssetChangeRequest.creationRequestForAssetFromVideo(atFileURL: videoURL)
}) { success, error in
if !success {
print("Could not save video to photo library:", error)
}
}
}
}


class func removeFileAtURL(fileURL: URL) {
do {
try FileManager.default.removeItem(atPath: fileURL.path)
}
catch _ as NSError {
// Assume file doesn't exist.
}
}


init(renderSettings: RenderSettings) {
settings = renderSettings
videoWriter = VideoWriter(renderSettings: settings)
//images = loadImages()
}


func render(completion: (()->Void)?) {


// The VideoWriter will fail if a file exists at the URL, so clear it out first.
ImageAnimator.removeFileAtURL(fileURL: settings.outputURL)


videoWriter.start()
videoWriter.render(appendPixelBuffers: appendPixelBuffers) {
ImageAnimator.saveToLibrary(videoURL: self.settings.outputURL)
completion?()
}


}


// This is the callback function for VideoWriter.render()
func appendPixelBuffers(writer: VideoWriter) -> Bool {


let frameDuration = CMTimeMake(value: Int64(ImageAnimator.kTimescale / settings.fps), timescale: ImageAnimator.kTimescale)


while !images.isEmpty {


if writer.isReadyForData == false {
// Inform writer we have more buffers to write.
return false
}


let image = images.removeFirst()
let presentationTime = CMTimeMultiply(frameDuration, multiplier: Int32(frameNum))
let success = videoWriter.addImage(image: image, withPresentationTime: presentationTime)
if success == false {
fatalError("addImage() failed")
}


frameNum += 1
}


// Inform writer all buffers have been written.
return true
}
}

Part 3: The VideoWriter

The VideoWriter class does all the AVFoundation heavy lifting. It is mostly a wrapper around AVAssetWriter and AVAssetWriterInput. It also contains the fancy code that knows how to translate an image into a CVPixelBuffer.

class VideoWriter {


let renderSettings: RenderSettings


var videoWriter: AVAssetWriter!
var videoWriterInput: AVAssetWriterInput!
var pixelBufferAdaptor: AVAssetWriterInputPixelBufferAdaptor!


var isReadyForData: Bool {
return videoWriterInput?.isReadyForMoreMediaData ?? false
}


class func pixelBufferFromImage(image: UIImage, pixelBufferPool: CVPixelBufferPool, size: CGSize) -> CVPixelBuffer {


var pixelBufferOut: CVPixelBuffer?


let status = CVPixelBufferPoolCreatePixelBuffer(kCFAllocatorDefault, pixelBufferPool, &pixelBufferOut)
if status != kCVReturnSuccess {
fatalError("CVPixelBufferPoolCreatePixelBuffer() failed")
}


let pixelBuffer = pixelBufferOut!


CVPixelBufferLockBaseAddress(pixelBuffer, CVPixelBufferLockFlags(rawValue: 0))


let data = CVPixelBufferGetBaseAddress(pixelBuffer)
let rgbColorSpace = CGColorSpaceCreateDeviceRGB()
let context = CGContext(data: data, width: Int(size.width), height: Int(size.height),
bitsPerComponent: 8, bytesPerRow: CVPixelBufferGetBytesPerRow(pixelBuffer), space: rgbColorSpace, bitmapInfo: CGImageAlphaInfo.premultipliedFirst.rawValue)


context!.clear(CGRect(x:0,y: 0,width: size.width,height: size.height))


let horizontalRatio = size.width / image.size.width
let verticalRatio = size.height / image.size.height
//aspectRatio = max(horizontalRatio, verticalRatio) // ScaleAspectFill
let aspectRatio = min(horizontalRatio, verticalRatio) // ScaleAspectFit


let newSize = CGSize(width: image.size.width * aspectRatio, height: image.size.height * aspectRatio)


let x = newSize.width < size.width ? (size.width - newSize.width) / 2 : 0
let y = newSize.height < size.height ? (size.height - newSize.height) / 2 : 0


context?.draw(image.cgImage!, in: CGRect(x:x,y: y, width: newSize.width, height: newSize.height))
CVPixelBufferUnlockBaseAddress(pixelBuffer, CVPixelBufferLockFlags(rawValue: 0))


return pixelBuffer
}


init(renderSettings: RenderSettings) {
self.renderSettings = renderSettings
}


func start() {


let avOutputSettings: [String: Any] = [
AVVideoCodecKey: renderSettings.avCodecKey,
AVVideoWidthKey: NSNumber(value: Float(renderSettings.size.width)),
AVVideoHeightKey: NSNumber(value: Float(renderSettings.size.height))
]


func createPixelBufferAdaptor() {
let sourcePixelBufferAttributesDictionary = [
kCVPixelBufferPixelFormatTypeKey as String: NSNumber(value: kCVPixelFormatType_32ARGB),
kCVPixelBufferWidthKey as String: NSNumber(value: Float(renderSettings.size.width)),
kCVPixelBufferHeightKey as String: NSNumber(value: Float(renderSettings.size.height))
]
pixelBufferAdaptor = AVAssetWriterInputPixelBufferAdaptor(assetWriterInput: videoWriterInput,
sourcePixelBufferAttributes: sourcePixelBufferAttributesDictionary)
}


func createAssetWriter(outputURL: URL) -> AVAssetWriter {
guard let assetWriter = try? AVAssetWriter(outputURL: outputURL, fileType: AVFileType.mp4) else {
fatalError("AVAssetWriter() failed")
}


guard assetWriter.canApply(outputSettings: avOutputSettings, forMediaType: AVMediaType.video) else {
fatalError("canApplyOutputSettings() failed")
}


return assetWriter
}


videoWriter = createAssetWriter(outputURL: renderSettings.outputURL)
videoWriterInput = AVAssetWriterInput(mediaType: AVMediaType.video, outputSettings: avOutputSettings)


if videoWriter.canAdd(videoWriterInput) {
videoWriter.add(videoWriterInput)
}
else {
fatalError("canAddInput() returned false")
}


// The pixel buffer adaptor must be created before we start writing.
createPixelBufferAdaptor()


if videoWriter.startWriting() == false {
fatalError("startWriting() failed")
}


videoWriter.startSession(atSourceTime: CMTime.zero)


precondition(pixelBufferAdaptor.pixelBufferPool != nil, "nil pixelBufferPool")
}


func render(appendPixelBuffers: ((VideoWriter)->Bool)?, completion: (()->Void)?) {


precondition(videoWriter != nil, "Call start() to initialze the writer")


let queue = DispatchQueue(label: "mediaInputQueue")
videoWriterInput.requestMediaDataWhenReady(on: queue) {
let isFinished = appendPixelBuffers?(self) ?? false
if isFinished {
self.videoWriterInput.markAsFinished()
self.videoWriter.finishWriting() {
DispatchQueue.main.async {
completion?()
}
}
}
else {
// Fall through. The closure will be called again when the writer is ready.
}
}
}


func addImage(image: UIImage, withPresentationTime presentationTime: CMTime) -> Bool {


precondition(pixelBufferAdaptor != nil, "Call start() to initialze the writer")


let pixelBuffer = VideoWriter.pixelBufferFromImage(image: image, pixelBufferPool: pixelBufferAdaptor.pixelBufferPool!, size: renderSettings.size)
return pixelBufferAdaptor.append(pixelBuffer, withPresentationTime: presentationTime)
}
}

Part 4: Make it happen

Once everything is in place, these are your 3 magic lines:

let settings = RenderSettings()
let imageAnimator = ImageAnimator(renderSettings: settings)
imageAnimator.render() {
print("yes")
}
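As written, RenderSettings.size defaults to .zero and images is never assigned (loadImages() is commented out in init), so a slightly fuller call site might look like this sketch (the 640x480 size is an assumption):

var settings = RenderSettings()
settings.size = CGSize(width: 640, height: 480) // must be non-zero

let imageAnimator = ImageAnimator(renderSettings: settings)
imageAnimator.images = loadImages() // supply your own frames
imageAnimator.render() {
    print("yes")
}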

Here is a Swift 2.x version tested on iOS 8. It combines answers from @Scott Raposa and @Praxiteles, along with code from @acj contributed for another question. The code from @acj is here: https://gist.github.com/acj/6ae90aa1ebb8cad6b47b. @TimBull also provided code.

Like @Scott Raposa, I had never even heard of CVPixelBufferPoolCreatePixelBuffer and several of these other functions, let alone understood how to use them.

Most of what you see below was pieced together by trial and error and from reading Apple docs. Please use with caution, and provide suggestions if there are mistakes.

Usage:

import UIKit
import AVFoundation
import Photos


writeImagesAsMovie(yourImages, videoPath: yourPath, videoSize: yourSize, videoFPS: 30)

Code:

func writeImagesAsMovie(allImages: [UIImage], videoPath: String, videoSize: CGSize, videoFPS: Int32) {
// Create AVAssetWriter to write video
guard let assetWriter = createAssetWriter(videoPath, size: videoSize) else {
print("Error converting images to video: AVAssetWriter not created")
return
}


// If here, AVAssetWriter exists so create AVAssetWriterInputPixelBufferAdaptor
let writerInput = assetWriter.inputs.filter{ $0.mediaType == AVMediaTypeVideo }.first!
let sourceBufferAttributes : [String : AnyObject] = [
kCVPixelBufferPixelFormatTypeKey as String : Int(kCVPixelFormatType_32ARGB),
kCVPixelBufferWidthKey as String : videoSize.width,
kCVPixelBufferHeightKey as String : videoSize.height,
]
let pixelBufferAdaptor = AVAssetWriterInputPixelBufferAdaptor(assetWriterInput: writerInput, sourcePixelBufferAttributes: sourceBufferAttributes)


// Start writing session
assetWriter.startWriting()
assetWriter.startSessionAtSourceTime(kCMTimeZero)
if (pixelBufferAdaptor.pixelBufferPool == nil) {
print("Error converting images to video: pixelBufferPool nil after starting session")
return
}


// -- Create queue for <requestMediaDataWhenReadyOnQueue>
let mediaQueue = dispatch_queue_create("mediaInputQueue", nil)


// -- Set video parameters
let frameDuration = CMTimeMake(1, videoFPS)
var frameCount = 0


// -- Add images to video
let numImages = allImages.count
writerInput.requestMediaDataWhenReadyOnQueue(mediaQueue, usingBlock: { () -> Void in
// Append unadded images to video but only while input ready
while (writerInput.readyForMoreMediaData && frameCount < numImages) {
let lastFrameTime = CMTimeMake(Int64(frameCount), videoFPS)
let presentationTime = frameCount == 0 ? lastFrameTime : CMTimeAdd(lastFrameTime, frameDuration)


if !self.appendPixelBufferForImageAtURL(allImages[frameCount], pixelBufferAdaptor: pixelBufferAdaptor, presentationTime: presentationTime) {
print("Error converting images to video: AVAssetWriterInputPixelBufferAdapter failed to append pixel buffer")
return
}


frameCount += 1
}


// No more images to add? End video.
if (frameCount >= numImages) {
writerInput.markAsFinished()
assetWriter.finishWritingWithCompletionHandler {
if (assetWriter.error != nil) {
print("Error converting images to video: \(assetWriter.error)")
} else {
self.saveVideoToLibrary(NSURL(fileURLWithPath: videoPath))
print("Converted images to movie @ \(videoPath)")
}
}
}
})
}




func createAssetWriter(path: String, size: CGSize) -> AVAssetWriter? {
// Convert <path> to NSURL object
let pathURL = NSURL(fileURLWithPath: path)


// Return new asset writer or nil
do {
// Create asset writer
let newWriter = try AVAssetWriter(URL: pathURL, fileType: AVFileTypeMPEG4)


// Define settings for video input
let videoSettings: [String : AnyObject] = [
AVVideoCodecKey  : AVVideoCodecH264,
AVVideoWidthKey  : size.width,
AVVideoHeightKey : size.height,
]


// Add video input to writer
let assetWriterVideoInput = AVAssetWriterInput(mediaType: AVMediaTypeVideo, outputSettings: videoSettings)
newWriter.addInput(assetWriterVideoInput)


// Return writer
print("Created asset writer for \(size.width)x\(size.height) video")
return newWriter
} catch {
print("Error creating asset writer: \(error)")
return nil
}
}




func appendPixelBufferForImageAtURL(image: UIImage, pixelBufferAdaptor: AVAssetWriterInputPixelBufferAdaptor, presentationTime: CMTime) -> Bool {
var appendSucceeded = false


autoreleasepool {
if  let pixelBufferPool = pixelBufferAdaptor.pixelBufferPool {
let pixelBufferPointer = UnsafeMutablePointer<CVPixelBuffer?>.alloc(1)
let status: CVReturn = CVPixelBufferPoolCreatePixelBuffer(
kCFAllocatorDefault,
pixelBufferPool,
pixelBufferPointer
)


if let pixelBuffer = pixelBufferPointer.memory where status == 0 {
fillPixelBufferFromImage(image, pixelBuffer: pixelBuffer)
appendSucceeded = pixelBufferAdaptor.appendPixelBuffer(pixelBuffer, withPresentationTime: presentationTime)
pixelBufferPointer.destroy()
} else {
NSLog("Error: Failed to allocate pixel buffer from pool")
}


pixelBufferPointer.dealloc(1)
}
}


return appendSucceeded
}




func fillPixelBufferFromImage(image: UIImage, pixelBuffer: CVPixelBufferRef) {
CVPixelBufferLockBaseAddress(pixelBuffer, 0)


let pixelData = CVPixelBufferGetBaseAddress(pixelBuffer)
let rgbColorSpace = CGColorSpaceCreateDeviceRGB()


// Create CGBitmapContext
let context = CGBitmapContextCreate(
pixelData,
Int(image.size.width),
Int(image.size.height),
8,
CVPixelBufferGetBytesPerRow(pixelBuffer),
rgbColorSpace,
CGImageAlphaInfo.PremultipliedFirst.rawValue
)


// Draw image into context
CGContextDrawImage(context, CGRectMake(0, 0, image.size.width, image.size.height), image.CGImage)


CVPixelBufferUnlockBaseAddress(pixelBuffer, 0)
}




func saveVideoToLibrary(videoURL: NSURL) {
PHPhotoLibrary.requestAuthorization { status in
// Return if unauthorized
guard status == .Authorized else {
print("Error saving video: unauthorized access")
return
}


// If here, save video to library
PHPhotoLibrary.sharedPhotoLibrary().performChanges({
PHAssetChangeRequest.creationRequestForAssetFromVideoAtFileURL(videoURL)
}) { success, error in
if !success {
print("Error saving video: \(error)")
}
}
}
}

Here is the Swift 3 version of how to convert an image array to video:

import Foundation
import AVFoundation
import UIKit


typealias CXEMovieMakerCompletion = (URL) -> Void
typealias CXEMovieMakerUIImageExtractor = (AnyObject) -> UIImage?




public class ImagesToVideoUtils: NSObject {


static let paths = NSSearchPathForDirectoriesInDomains(.documentDirectory, .userDomainMask, true)
static let tempPath = paths[0] + "/exportvideo.mp4"
static let fileURL = URL(fileURLWithPath: tempPath)
//    static let tempPath = NSTemporaryDirectory() + "/exportvideo.mp4"
//    static let fileURL = URL(fileURLWithPath: tempPath)




var assetWriter:AVAssetWriter!
var writeInput:AVAssetWriterInput!
var bufferAdapter:AVAssetWriterInputPixelBufferAdaptor!
var videoSettings:[String : Any]!
var frameTime:CMTime!
//var fileURL:URL!


var completionBlock: CXEMovieMakerCompletion?
var movieMakerUIImageExtractor:CXEMovieMakerUIImageExtractor?




public class func videoSettings(codec:String, width:Int, height:Int) -> [String: Any]{
if(Int(width) % 16 != 0){
print("warning: video settings width must be divisible by 16")
}


let videoSettings:[String: Any] = [AVVideoCodecKey: AVVideoCodecJPEG, //AVVideoCodecH264,
AVVideoWidthKey: width,
AVVideoHeightKey: height]


return videoSettings
}


public init(videoSettings: [String: Any]) {
super.init()




if(FileManager.default.fileExists(atPath: ImagesToVideoUtils.tempPath)){
guard (try? FileManager.default.removeItem(atPath: ImagesToVideoUtils.tempPath)) != nil else {
print("remove path failed")
return
}
}




self.assetWriter = try! AVAssetWriter(url: ImagesToVideoUtils.fileURL, fileType: AVFileTypeQuickTimeMovie)


self.videoSettings = videoSettings
self.writeInput = AVAssetWriterInput(mediaType: AVMediaTypeVideo, outputSettings: videoSettings)
assert(self.assetWriter.canAdd(self.writeInput), "add failed")


self.assetWriter.add(self.writeInput)
let bufferAttributes:[String: Any] = [kCVPixelBufferPixelFormatTypeKey as String: Int(kCVPixelFormatType_32ARGB)]
self.bufferAdapter = AVAssetWriterInputPixelBufferAdaptor(assetWriterInput: self.writeInput, sourcePixelBufferAttributes: bufferAttributes)
self.frameTime = CMTimeMake(1, 5)
}


func createMovieFrom(urls: [URL], withCompletion: @escaping CXEMovieMakerCompletion){
self.createMovieFromSource(images: urls as [AnyObject], extractor:{(inputObject:AnyObject) ->UIImage? in
return UIImage(data: try! Data(contentsOf: inputObject as! URL))}, withCompletion: withCompletion)
}


func createMovieFrom(images: [UIImage], withCompletion: @escaping CXEMovieMakerCompletion){
self.createMovieFromSource(images: images, extractor: {(inputObject:AnyObject) -> UIImage? in
return inputObject as? UIImage}, withCompletion: withCompletion)
}


func createMovieFromSource(images: [AnyObject], extractor: @escaping CXEMovieMakerUIImageExtractor, withCompletion: @escaping CXEMovieMakerCompletion){
self.completionBlock = withCompletion


self.assetWriter.startWriting()
self.assetWriter.startSession(atSourceTime: kCMTimeZero)


let mediaInputQueue = DispatchQueue(label: "mediaInputQueue")
var i = 0
let frameNumber = images.count


self.writeInput.requestMediaDataWhenReady(on: mediaInputQueue){
while(true){
if(i >= frameNumber){
break
}


if (self.writeInput.isReadyForMoreMediaData){
var sampleBuffer:CVPixelBuffer?
autoreleasepool{
// Skip this frame if extraction fails; force-unwrapping a nil
// image here would crash (the original //continue hinted at this).
if let img = extractor(images[i]){
sampleBuffer = self.newPixelBufferFrom(cgImage: img.cgImage!)
}else{
i += 1
print("Warning: could not extract one of the frames")
}
}
if (sampleBuffer != nil){
if(i == 0){
self.bufferAdapter.append(sampleBuffer!, withPresentationTime: kCMTimeZero)
}else{
let value = i - 1
let lastTime = CMTimeMake(Int64(value), self.frameTime.timescale)
let presentTime = CMTimeAdd(lastTime, self.frameTime)
self.bufferAdapter.append(sampleBuffer!, withPresentationTime: presentTime)
}
i = i + 1
}
}
}
self.writeInput.markAsFinished()
self.assetWriter.finishWriting {
DispatchQueue.main.sync {
self.completionBlock!(ImagesToVideoUtils.fileURL)
}
}
}
}


func newPixelBufferFrom(cgImage:CGImage) -> CVPixelBuffer?{
let options:[String: Any] = [kCVPixelBufferCGImageCompatibilityKey as String: true, kCVPixelBufferCGBitmapContextCompatibilityKey as String: true]
var pxbuffer:CVPixelBuffer?
let frameWidth = self.videoSettings[AVVideoWidthKey] as! Int
let frameHeight = self.videoSettings[AVVideoHeightKey] as! Int


let status = CVPixelBufferCreate(kCFAllocatorDefault, frameWidth, frameHeight, kCVPixelFormatType_32ARGB, options as CFDictionary?, &pxbuffer)
assert(status == kCVReturnSuccess && pxbuffer != nil, "newPixelBuffer failed")


CVPixelBufferLockBaseAddress(pxbuffer!, CVPixelBufferLockFlags(rawValue: 0))
let pxdata = CVPixelBufferGetBaseAddress(pxbuffer!)
let rgbColorSpace = CGColorSpaceCreateDeviceRGB()
let context = CGContext(data: pxdata, width: frameWidth, height: frameHeight, bitsPerComponent: 8, bytesPerRow: CVPixelBufferGetBytesPerRow(pxbuffer!), space: rgbColorSpace, bitmapInfo: CGImageAlphaInfo.noneSkipFirst.rawValue)
assert(context != nil, "context is nil")


context!.concatenate(CGAffineTransform.identity)
context!.draw(cgImage, in: CGRect(x: 0, y: 0, width: cgImage.width, height: cgImage.height))
CVPixelBufferUnlockBaseAddress(pxbuffer!, CVPixelBufferLockFlags(rawValue: 0))
return pxbuffer
}
}

I use this together with screen capture, to basically create a video of a screen recording; here is the full example.
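For orientation, a minimal usage sketch of the class above (yourImages and the 480x640 dimensions are assumptions; remember the warning about the width being divisible by 16):

let settings = ImagesToVideoUtils.videoSettings(codec: AVVideoCodecJPEG, width: 480, height: 640)
let maker = ImagesToVideoUtils(videoSettings: settings)
maker.createMovieFrom(images: yourImages) { fileURL in
    print("movie written to \(fileURL)")
}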

Just translated @Scott Raposa's answer for Swift 3 (with a few very small changes):

import AVFoundation
import UIKit
import Photos


struct RenderSettings {


var size : CGSize = .zero
var fps: Int32 = 6   // frames per second
var avCodecKey = AVVideoCodecH264
var videoFilename = "render"
var videoFilenameExt = "mp4"




var outputURL: URL {
// Use the CachesDirectory so the rendered video file sticks around as long as we need it to.
// Using the CachesDirectory ensures the file won't be included in a backup of the app.
let fileManager = FileManager.default
if let tmpDirURL = try? fileManager.url(for: .cachesDirectory, in: .userDomainMask, appropriateFor: nil, create: true) {
return tmpDirURL.appendingPathComponent(videoFilename).appendingPathExtension(videoFilenameExt)
}
fatalError("URLForDirectory() failed")
}
}




class ImageAnimator {


// Apple suggests a timescale of 600 because it's a multiple of standard video rates 24, 25, 30, 60 fps etc.
static let kTimescale: Int32 = 600


let settings: RenderSettings
let videoWriter: VideoWriter
var images: [UIImage]!


var frameNum = 0


class func saveToLibrary(videoURL: URL) {
PHPhotoLibrary.requestAuthorization { status in
guard status == .authorized else { return }


PHPhotoLibrary.shared().performChanges({
PHAssetChangeRequest.creationRequestForAssetFromVideo(atFileURL: videoURL)
}) { success, error in
if !success {
print("Could not save video to photo library:", error)
}
}
}
}


class func removeFileAtURL(fileURL: URL) {
do {
try FileManager.default.removeItem(atPath: fileURL.path)
}
catch _ as NSError {
// Assume file doesn't exist.
}
}


init(renderSettings: RenderSettings) {
settings = renderSettings
videoWriter = VideoWriter(renderSettings: settings)
//        images = loadImages()
}


func render(completion: (()->Void)?) {


// The VideoWriter will fail if a file exists at the URL, so clear it out first.
ImageAnimator.removeFileAtURL(fileURL: settings.outputURL)


videoWriter.start()
videoWriter.render(appendPixelBuffers: appendPixelBuffers) {
ImageAnimator.saveToLibrary(videoURL: self.settings.outputURL)
completion?()
}


}


//    // Replace this logic with your own.
//    func loadImages() -> [UIImage] {
//        var images = [UIImage]()
//        for index in 1...10 {
//            let filename = "\(index).jpg"
//            images.append(UIImage(named: filename)!)
//        }
//        return images
//    }


// This is the callback function for VideoWriter.render()
func appendPixelBuffers(writer: VideoWriter) -> Bool {


let frameDuration = CMTimeMake(Int64(ImageAnimator.kTimescale / settings.fps), ImageAnimator.kTimescale)


while !images.isEmpty {


if writer.isReadyForData == false {
// Inform writer we have more buffers to write.
return false
}


let image = images.removeFirst()
let presentationTime = CMTimeMultiply(frameDuration, Int32(frameNum))
let success = videoWriter.addImage(image: image, withPresentationTime: presentationTime)
if success == false {
fatalError("addImage() failed")
}


frameNum += 1
}


// Inform writer all buffers have been written.
return true
}


}




class VideoWriter {


let renderSettings: RenderSettings


var videoWriter: AVAssetWriter!
var videoWriterInput: AVAssetWriterInput!
var pixelBufferAdaptor: AVAssetWriterInputPixelBufferAdaptor!


var isReadyForData: Bool {
return videoWriterInput?.isReadyForMoreMediaData ?? false
}


class func pixelBufferFromImage(image: UIImage, pixelBufferPool: CVPixelBufferPool, size: CGSize) -> CVPixelBuffer {


var pixelBufferOut: CVPixelBuffer?


let status = CVPixelBufferPoolCreatePixelBuffer(kCFAllocatorDefault, pixelBufferPool, &pixelBufferOut)
if status != kCVReturnSuccess {
fatalError("CVPixelBufferPoolCreatePixelBuffer() failed")
}


let pixelBuffer = pixelBufferOut!


CVPixelBufferLockBaseAddress(pixelBuffer, CVPixelBufferLockFlags(rawValue: 0))


let data = CVPixelBufferGetBaseAddress(pixelBuffer)
let rgbColorSpace = CGColorSpaceCreateDeviceRGB()
let context = CGContext(data: data, width: Int(size.width), height: Int(size.height),
bitsPerComponent: 8, bytesPerRow: CVPixelBufferGetBytesPerRow(pixelBuffer), space: rgbColorSpace, bitmapInfo: CGImageAlphaInfo.premultipliedFirst.rawValue)


context!.clear(CGRect(x:0,y: 0,width: size.width,height: size.height))


let horizontalRatio = size.width / image.size.width
let verticalRatio = size.height / image.size.height
//aspectRatio = max(horizontalRatio, verticalRatio) // ScaleAspectFill
let aspectRatio = min(horizontalRatio, verticalRatio) // ScaleAspectFit


let newSize = CGSize(width: image.size.width * aspectRatio, height: image.size.height * aspectRatio)


let x = newSize.width < size.width ? (size.width - newSize.width) / 2 : 0
let y = newSize.height < size.height ? (size.height - newSize.height) / 2 : 0


context?.draw(image.cgImage!, in: CGRect(x:x,y: y, width: newSize.width, height: newSize.height))
CVPixelBufferUnlockBaseAddress(pixelBuffer, CVPixelBufferLockFlags(rawValue: 0))


return pixelBuffer
}


init(renderSettings: RenderSettings) {
self.renderSettings = renderSettings
}


func start() {


let avOutputSettings: [String: Any] = [
AVVideoCodecKey: renderSettings.avCodecKey,
AVVideoWidthKey: NSNumber(value: Float(renderSettings.size.width)),
AVVideoHeightKey: NSNumber(value: Float(renderSettings.size.height))
]


func createPixelBufferAdaptor() {
let sourcePixelBufferAttributesDictionary = [
kCVPixelBufferPixelFormatTypeKey as String: NSNumber(value: kCVPixelFormatType_32ARGB),
kCVPixelBufferWidthKey as String: NSNumber(value: Float(renderSettings.size.width)),
kCVPixelBufferHeightKey as String: NSNumber(value: Float(renderSettings.size.height))
]
pixelBufferAdaptor = AVAssetWriterInputPixelBufferAdaptor(assetWriterInput: videoWriterInput,
sourcePixelBufferAttributes: sourcePixelBufferAttributesDictionary)
}


func createAssetWriter(outputURL: URL) -> AVAssetWriter {
guard let assetWriter = try? AVAssetWriter(outputURL: outputURL, fileType: AVFileTypeMPEG4) else {
fatalError("AVAssetWriter() failed")
}


guard assetWriter.canApply(outputSettings: avOutputSettings, forMediaType: AVMediaTypeVideo) else {
fatalError("canApplyOutputSettings() failed")
}


return assetWriter
}


videoWriter = createAssetWriter(outputURL: renderSettings.outputURL)
videoWriterInput = AVAssetWriterInput(mediaType: AVMediaTypeVideo, outputSettings: avOutputSettings)


if videoWriter.canAdd(videoWriterInput) {
videoWriter.add(videoWriterInput)
}
else {
fatalError("canAddInput() returned false")
}


// The pixel buffer adaptor must be created before we start writing.
createPixelBufferAdaptor()


if videoWriter.startWriting() == false {
fatalError("startWriting() failed")
}


videoWriter.startSession(atSourceTime: kCMTimeZero)


precondition(pixelBufferAdaptor.pixelBufferPool != nil, "nil pixelBufferPool")
}


func render(appendPixelBuffers: ((VideoWriter)->Bool)?, completion: (()->Void)?) {


precondition(videoWriter != nil, "Call start() to initialze the writer")


let queue = DispatchQueue(label: "mediaInputQueue")
videoWriterInput.requestMediaDataWhenReady(on: queue) {
let isFinished = appendPixelBuffers?(self) ?? false
if isFinished {
self.videoWriterInput.markAsFinished()
self.videoWriter.finishWriting() {
DispatchQueue.main.async {
completion?()
}
}
}
else {
// Fall through. The closure will be called again when the writer is ready.
}
}
}


func addImage(image: UIImage, withPresentationTime presentationTime: CMTime) -> Bool {


precondition(pixelBufferAdaptor != nil, "Call start() to initialze the writer")


let pixelBuffer = VideoWriter.pixelBufferFromImage(image: image, pixelBufferPool: pixelBufferAdaptor.pixelBufferPool!, size: renderSettings.size)
return pixelBufferAdaptor.append(pixelBuffer, withPresentationTime: presentationTime)
}


}

For anyone still landing here in 2020 and getting distorted movies because the image width is not a multiple of 16 px: Core Video pads each pixel-buffer row out to an alignment boundary, so the actual bytes-per-row can be larger than 4 * width.

Change this:

CGContextRef context = CGBitmapContextCreate(pxdata,
width, height,
8, 4 * width,
rgbColorSpace,
kCGImageAlphaNoneSkipFirst);

to this:

CGContextRef context = CGBitmapContextCreate(pxdata,
width, height,
8, CVPixelBufferGetBytesPerRow(pxbuffer),
rgbColorSpace,
kCGImageAlphaNoneSkipFirst);

Credit for this goes to @bluedays: Output from AVAssetWriter (UIImages written to video) distorted