A GPU accelerated image and video processing framework built on Metal.

MetalPetal


An image processing framework based on Metal.

Design Overview

MetalPetal is an image processing framework based on Metal, designed to provide real-time processing for still images and video with easy-to-use programming interfaces.

This chapter covers the key concepts of MetalPetal and will help you get a better understanding of its design, implementation, performance implications, and best practices.

Goals

MetalPetal is designed with the following goals in mind.

  • Easy to use API

    Provides convenience APIs and avoids common pitfalls.

  • Performance

    Use CPU, GPU and memory efficiently.

  • Extensibility

    Easy to create custom filters, as well as to plug in your own image processing units.

  • Swifty

    Provides a fluid experience for Swift programmers.

Core Components

Some of the core concepts of MetalPetal are very similar to those in Apple's Core Image framework.

MTIContext

Provides an evaluation context for rendering MTIImages. It also stores a lot of caches and state information, so it's more efficient to reuse a context whenever possible.

MTIImage

A MTIImage object is a representation of an image to be processed or produced. It does not directly represent image bitmap data; instead, it has all the information necessary to produce an image, or more precisely, a MTLTexture. It consists of two parts: a recipe for producing the texture (MTIImagePromise) and other information, such as how a context caches the image (cachePolicy) and how the texture should be sampled (samplerDescriptor).

MTIFilter

A MTIFilter represents an image processing effect and any parameters that control that effect. It produces a MTIImage object as output. To use a filter, you create a filter object, set its input images and parameters, and then access its output image. Typically, a filter class owns a static kernel (MTIKernel); when you access its outputImage property, the filter asks the kernel to produce an output MTIImage from the input images and parameters.

MTIKernel

A MTIKernel represents an image processing routine. MTIKernel is responsible for creating the corresponding render or compute pipeline state for the filter, as well as building the MTIImagePromise for a MTIImage.

Optimizations

MetalPetal performs a lot of optimizations for you under the hood.

It automatically caches functions, kernel states, sampler states, etc.

It utilizes Metal features like programmable blending, memoryless render targets, resource heaps and Metal Performance Shaders to make rendering fast and efficient. On macOS, MetalPetal can also take advantage of the TBDR architecture of Apple silicon.

Before rendering, MetalPetal can look into your image render graph and figure out the minimal number of intermediate textures needed to do the rendering, saving memory, energy and time.

It can also re-organize the image render graph if multiple “recipes” can be concatenated to eliminate redundant render passes. (MTIContext.isRenderGraphOptimizationEnabled)

Concurrency Considerations

MTIImage objects are immutable, which means they can be shared safely among threads.

However, MTIFilter objects are mutable and thus cannot be shared safely among threads.

A MTIContext contains a lot of states and caches. There's a thread-safe mechanism for MTIContext objects, making it safe to share a MTIContext object among threads.
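
For example, a minimal sketch of sharing a single context across threads (the SharedContext wrapper is hypothetical, not a MetalPetal API):

import Metal
import MetalPetal

// One MTIContext, created once and shared among threads.
// Swift's static-let initialization is atomic, and MTIContext itself is thread-safe.
enum SharedContext {
    static let context: MTIContext? = {
        guard let device = MTLCreateSystemDefaultDevice() else { return nil }
        return try? MTIContext(device: device)
    }()
}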

Advantages over Core Image

  • Fully customizable vertex and fragment functions.

  • MRT (Multiple Render Targets) support.

  • Generally better performance. (Detailed benchmark data needed)

Builtin Filters

  • Color Matrix

  • Color Lookup

    Uses a color lookup table to remap the colors in an image.

  • Opacity

  • Exposure

  • Saturation

  • Brightness

  • Contrast

  • Color Invert

  • Vibrance

    Adjusts the saturation of an image while keeping pleasing skin tones.

  • RGB Tone Curve

  • Blend Modes

    • Normal
    • Multiply
    • Overlay
    • Screen
    • Hard Light
    • Soft Light
    • Darken
    • Lighten
    • Color Dodge
    • Add (Linear Dodge)
    • Color Burn
    • Linear Burn
    • Lighter Color
    • Darker Color
    • Vivid Light
    • Linear Light
    • Pin Light
    • Hard Mix
    • Difference
    • Exclusion
    • Subtract
    • Divide
    • Hue
    • Saturation
    • Color
    • Luminosity
    • ColorLookup512x512
    • Custom Blend Mode
  • Blend with Mask

  • Transform

  • Crop

  • Pixellate

  • Multilayer Composite

  • MPS Convolution

  • MPS Gaussian Blur

  • MPS Definition

  • MPS Sobel

  • MPS Unsharp Mask

  • MPS Box Blur

  • High Pass Skin Smoothing

  • CLAHE (Contrast-Limited Adaptive Histogram Equalization)

  • Lens Blur (Hexagonal Bokeh Blur)

  • Surface Blur

  • Bulge Distortion

  • Chroma Key Blend

  • Color Halftone

  • Dot Screen

  • Round Corner (Circular/Continuous Curve)

  • All Core Image Filters

Example Code

Create a MTIImage

You can create a MTIImage object from nearly any source of image data, including:

  • URLs referencing image files to be loaded
  • Metal textures
  • CoreVideo image or pixel buffers (CVImageBufferRef or CVPixelBufferRef)
  • Image bitmap data in memory
  • Texture data from a given texture or image asset name
  • Core Image CIImage objects
  • MDLTexture objects
  • SceneKit and SpriteKit scenes
let imageFromCGImage = MTIImage(cgImage: cgImage, isOpaque: true)

let imageFromCIImage = MTIImage(ciImage: ciImage)

let imageFromCoreVideoPixelBuffer = MTIImage(cvPixelBuffer: pixelBuffer, alphaType: .alphaIsOne)

let imageFromContentsOfURL = MTIImage(contentsOf: url)

// unpremultiply alpha if needed
let unpremultipliedAlphaImage = image.unpremultiplyingAlpha()

Apply a Filter

let inputImage = ...

let filter = MTISaturationFilter()
filter.saturation = 0
filter.inputImage = inputImage

let outputImage = filter.outputImage

Render a MTIImage

let options = MTIContextOptions()

guard let device = MTLCreateSystemDefaultDevice(), let context = try? MTIContext(device: device, options: options) else {
    return
}

let image: MTIImage = ...

do {
    try context.render(image, to: pixelBuffer) 
    
    //context.makeCIImage(from: image)
    
    //context.makeCGImage(from: image)
} catch {
    print(error)
}

Display a MTIImage

let imageView = MTIImageView(frame: self.view.bounds)

// You can optionally assign a `MTIContext` to the image view. If no context is assigned and `automaticallyCreatesContext` is set to `true` (the default value), a `MTIContext` is created automatically when the image view renders its content.
imageView.context = ...

imageView.image = image

If you'd like to move the GPU command encoding process off the main thread, you can use a MTIThreadSafeImageView. You may assign a MTIImage to a MTIThreadSafeImageView from any thread.

Connect Filters (Swift)

MetalPetal has a type-safe Swift API for connecting filters. You can use the => operator in the FilterGraph.makeImage function to connect filters and get the output image.

Here are some examples:

let image = try? FilterGraph.makeImage { output in
    inputImage => saturationFilter => exposureFilter => output
}
let image = try? FilterGraph.makeImage { output in
    inputImage => saturationFilter => exposureFilter => contrastFilter => blendFilter.inputPorts.inputImage
    exposureFilter => blendFilter.inputPorts.inputBackgroundImage
    blendFilter => output
}
  • You can connect unary filters (MTIUnaryFilter) directly using =>.

  • For a filter with multiple inputs, you need to connect to one of its inputPorts.

  • The => operator only works in the FilterGraph.makeImage method.

  • One and only one filter's output can be connected to output.

Process Video Files

Working with AVPlayer:

let context = try MTIContext(device: device)
let asset = AVAsset(url: videoURL)
let composition = MTIVideoComposition(asset: asset, context: context, queue: DispatchQueue.main, filter: { request in
    return FilterGraph.makeImage { output in
        request.anySourceImage! => filterA => filterB => output
    }!
})

let playerItem = AVPlayerItem(asset: asset)
playerItem.videoComposition = composition.makeAVVideoComposition()
player.replaceCurrentItem(with: playerItem)
player.play()

Export a video:

VideoIO is required for the following examples.

import VideoIO

var configuration = AssetExportSession.Configuration(fileType: .mp4, videoSettings: .h264(videoSize: composition.renderSize), audioSettings: .aac(channels: 2, sampleRate: 44100, bitRate: 128 * 1000))
configuration.videoComposition = composition.makeAVVideoComposition()
self.exporter = try! AssetExportSession(asset: asset, outputURL: outputURL, configuration: configuration)
exporter.export(progress: { progress in
    
}, completion: { error in
    
})

Process Live Video (with VideoIO)

VideoIO is required for this example.

import VideoIO

// Setup Image View
let imageView = MTIImageView(frame: self.view.bounds)
...

// Setup Camera
let camera = Camera(captureSessionPreset: .hd1920x1080, configurator: .portraitFrontMirroredVideoOutput)
try camera.enableVideoDataOutput(on: DispatchQueue.main, delegate: self)
camera.videoDataOutput?.videoSettings = [kCVPixelBufferPixelFormatTypeKey as String: kCVPixelFormatType_420YpCbCr8BiPlanarFullRange]

...

// AVCaptureVideoDataOutputSampleBufferDelegate

let filter = MTIColorInvertFilter()

func captureOutput(_ output: AVCaptureOutput, didOutput sampleBuffer: CMSampleBuffer, from connection: AVCaptureConnection) {
    guard let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) else {
        return
    }
    let inputImage = MTIImage(cvPixelBuffer: pixelBuffer, alphaType: .alphaIsOne)
    filter.inputImage = inputImage
    self.imageView.image = filter.outputImage
}

Please refer to CameraFilterView.swift in the example project for more about previewing and recording filtered live video.

Best Practices

  • Reuse a MTIContext whenever possible.

    Contexts are heavyweight objects, so if you do create one, do so as early as possible, and reuse it each time you need to render an image.

  • Use MTIImage.cachePolicy wisely.

    Use MTIImageCachePolicyTransient when you do not want to preserve the render result of an image, for example when the image is just an intermediate result in a filter chain, so the underlying texture of the render result can be reused. It is the most memory-efficient option. However, when you ask the context to render a previously rendered image, it may re-render that image since its underlying texture has been reused.

    By default, a filter's output image has the transient policy.

    Use MTIImageCachePolicyPersistent when you want to prevent the underlying texture from being reused.

    By default, images created from external sources have the persistent policy.
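
    For example, a minimal sketch, assuming MTIImage provides a withCachePolicy(_:) method for deriving an image with a different cache policy:

    // Keep the render result of this output image around for reuse,
    // instead of letting the context reuse its underlying texture.
    let persistentImage = filter.outputImage?.withCachePolicy(.persistent)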

  • Understand that MTIFilter.outputImage is a computed property.

    Each time you ask a filter for its output image, the filter may give you a new output image object, even if the inputs are identical to those of the previous call. So reuse output images whenever possible.

    For example,

    //          ╭→ filterB
    // filterA ─┤
    //          ╰→ filterC
    // 
    // filterB and filterC use filterA's output as their input.

    In this situation, the following solution:

    let filterOutputImage = filterA.outputImage
    filterB.inputImage = filterOutputImage
    filterC.inputImage = filterOutputImage

    is better than:

    filterB.inputImage = filterA.outputImage
    filterC.inputImage = filterA.outputImage

Build Custom Filter

If you want to include MTIShaderLib.h in your .metal file, you need to add the path of the MTIShaderLib.h file to the Metal Compiler - Header Search Paths (MTL_HEADER_SEARCH_PATHS) build setting.

For example, if you use CocoaPods you can set the MTL_HEADER_SEARCH_PATHS to ${PODS_CONFIGURATION_BUILD_DIR}/MetalPetal/MetalPetal.framework/Headers or ${PODS_ROOT}/MetalPetal/Frameworks/MetalPetal/Shaders. If you use Swift Package Manager, set the MTL_HEADER_SEARCH_PATHS to $(HEADER_SEARCH_PATHS).

Shader Function Arguments Encoding

MetalPetal has a built-in mechanism to encode shader function arguments for you. You can pass the shader function arguments as name: value dictionaries to the MTIRenderPipelineKernel.apply(toInputImages:parameters:outputDescriptors:), MTIRenderCommand(kernel:geometry:images:parameters:), etc.

For example, the parameter dictionary for the metal function vibranceAdjust can be:

// Swift
let amount: Float = 1.0
let vibranceVector = float4(1, 1, 1, 1)
let parameters = ["amount": amount,
                  "vibranceVector": MTIVector(value: vibranceVector),
                  "avoidsSaturatingSkinTones": true,
                  "grayColorTransform": MTIVector(value: float3(0,0,0))]
// vibranceAdjust metal function
fragment float4 vibranceAdjust(...,
                constant float & amount [[ buffer(0) ]],
                constant float4 & vibranceVector [[ buffer(1) ]],
                constant bool & avoidsSaturatingSkinTones [[ buffer(2) ]],
                constant float3 & grayColorTransform [[ buffer(3) ]])
{
    ...
}

The shader function argument types and the corresponding types to use in a parameter dictionary are listed below.

Shader Function Argument Type | Swift | Objective-C
------------------------------|-------|------------
float | Float | float
int | Int32 | int
uint | UInt32 | uint
bool | Bool | bool
simd (float2, float4, float4x4, int4, etc.) | simd (with MetalPetal/Swift) / MTIVector | MTIVector
struct | Data / MTIDataBuffer | NSData / MTIDataBuffer
other (float *, struct *, etc.), immutable | Data / MTIDataBuffer | NSData / MTIDataBuffer
other (float *, struct *, etc.), mutable | MTIDataBuffer | MTIDataBuffer
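
For example, with the MetalPetal/Swift additions you can put simd values into a parameter dictionary directly; MTIVector works with or without the Swift additions. A minimal sketch using the same vibranceAdjust arguments as above:

import simd
import MetalPetal

let parameters: [String: Any] = [
    "amount": Float(1.0),                                          // float
    "vibranceVector": SIMD4<Float>(1, 1, 1, 1),                    // simd (requires MetalPetal/Swift)
    "avoidsSaturatingSkinTones": true,                             // bool
    "grayColorTransform": MTIVector(value: SIMD3<Float>(0, 0, 0))  // simd wrapped in MTIVector
]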

Simple Single Input / Output Filters

To build a custom unary filter, you can subclass MTIUnaryImageRenderingFilter and override the methods in the SubclassingHooks category. Examples: MTIPixellateFilter, MTIVibranceFilter, MTIUnpremultiplyAlphaFilter, MTIPremultiplyAlphaFilter, etc.

//Objective-C

@interface MTIPixellateFilter : MTIUnaryImageRenderingFilter

@property (nonatomic) float fractionalWidthOfAPixel;

@end

@implementation MTIPixellateFilter

- (instancetype)init {
    if (self = [super init]) {
        _fractionalWidthOfAPixel = 0.05;
    }
    return self;
}

+ (MTIFunctionDescriptor *)fragmentFunctionDescriptor {
    // `bundle` is the bundle that contains the compiled shader library (default.metallib).
    return [[MTIFunctionDescriptor alloc] initWithName:@"pixellateEffect" libraryURL:[bundle URLForResource:@"default" withExtension:@"metallib"]];
}

- (NSDictionary<NSString *,id> *)parameters {
    return @{@"fractionalWidthOfAPixel": @(self.fractionalWidthOfAPixel)};
}

@end
//Swift

class MTIPixellateFilter: MTIUnaryImageRenderingFilter {
    
    var fractionalWidthOfAPixel: Float = 0.05

    override var parameters: [String : Any] {
        return ["fractionalWidthOfAPixel": fractionalWidthOfAPixel]
    }
    
    override class func fragmentFunctionDescriptor() -> MTIFunctionDescriptor {
        return MTIFunctionDescriptor(name: "pixellateEffect", libraryURL: MTIDefaultLibraryURLForBundle(Bundle.main))
    }
}

Fully Custom Filters

To build more complex filters, all you need to do is create a kernel (MTIRenderPipelineKernel/MTIComputePipelineKernel/MTIMPSKernel), then apply the kernel to the input image(s). Examples: MTIChromaKeyBlendFilter, MTIBlendWithMaskFilter, MTIColorLookupFilter, etc.

@interface MTIChromaKeyBlendFilter : NSObject <MTIFilter>

@property (nonatomic, strong, nullable) MTIImage *inputImage;

@property (nonatomic, strong, nullable) MTIImage *inputBackgroundImage;

@property (nonatomic) float thresholdSensitivity;

@property (nonatomic) float smoothing;

@property (nonatomic) MTIColor color;

@end

@implementation MTIChromaKeyBlendFilter

@synthesize outputPixelFormat = _outputPixelFormat;

+ (MTIRenderPipelineKernel *)kernel {
    static MTIRenderPipelineKernel *kernel;
    static dispatch_once_t onceToken;
    dispatch_once(&onceToken, ^{
        kernel = [[MTIRenderPipelineKernel alloc] initWithVertexFunctionDescriptor:[[MTIFunctionDescriptor alloc] initWithName:MTIFilterPassthroughVertexFunctionName] fragmentFunctionDescriptor:[[MTIFunctionDescriptor alloc] initWithName:@"chromaKeyBlend"]];
    });
    return kernel;
}

- (instancetype)init {
    if (self = [super init]) {
        _thresholdSensitivity = 0.4;
        _smoothing = 0.1;
        _color = MTIColorMake(0.0, 1.0, 0.0, 1.0);
    }
    return self;
}

- (MTIImage *)outputImage {
    if (!self.inputImage || !self.inputBackgroundImage) {
        return nil;
    }
    return [self.class.kernel applyToInputImages:@[self.inputImage, self.inputBackgroundImage]
                                      parameters:@{@"color": [MTIVector vectorWithFloat4:(simd_float4){self.color.red, self.color.green, self.color.blue,self.color.alpha}],
                                    @"thresholdSensitivity": @(self.thresholdSensitivity),
                                               @"smoothing": @(self.smoothing)}
                         outputTextureDimensions:MTITextureDimensionsMake2DFromCGSize(self.inputImage.size)
                               outputPixelFormat:self.outputPixelFormat];
}

@end

Multiple Draw Calls in One Render Pass

You can use MTIRenderCommand to issue multiple draw calls in one render pass.

// Create a draw call with kernelA, geometryA, and imageA.
let renderCommandA = MTIRenderCommand(kernel: self.kernelA, geometry: self.geometryA, images: [imageA], parameters: [:])

// Create a draw call with kernelB, geometryB, and imageB.
let renderCommandB = MTIRenderCommand(kernel: self.kernelB, geometry: self.geometryB, images: [imageB], parameters: [:])

// Create an output descriptor
let outputDescriptor = MTIRenderPassOutputDescriptor(dimensions: MTITextureDimensions(width: outputWidth, height: outputHeight, depth: 1), pixelFormat: .bgra8Unorm, loadAction: .clear, storeAction: .store)

// Get the output images; the output image count is equal to the output descriptor count.
let images = MTIRenderCommand.images(byPerforming: [renderCommandA, renderCommandB], outputDescriptors: [outputDescriptor])

You can also create multiple output descriptors to output multiple images in one render pass (MRT, see https://en.wikipedia.org/wiki/Multiple_Render_Targets).
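
A minimal MRT sketch using the same APIs as above (it assumes kernelA's fragment function writes to two color attachments):

let descriptorA = MTIRenderPassOutputDescriptor(dimensions: MTITextureDimensions(width: outputWidth, height: outputHeight, depth: 1), pixelFormat: .bgra8Unorm, loadAction: .clear, storeAction: .store)
let descriptorB = MTIRenderPassOutputDescriptor(dimensions: MTITextureDimensions(width: outputWidth, height: outputHeight, depth: 1), pixelFormat: .bgra8Unorm, loadAction: .clear, storeAction: .store)

// One render command, two output descriptors: two output images from a single render pass.
let outputs = MTIRenderCommand.images(byPerforming: [renderCommandA], outputDescriptors: [descriptorA, descriptorB])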

Custom Vertex Data

When MTIVertex cannot fit your needs, you can implement the MTIGeometry protocol to provide your custom vertex data to the command encoder.

Use the MTIRenderCommand API to issue draw calls and pass your custom MTIGeometry.

Custom Processing Module

In rare scenarios, you may want to access the underlying texture directly, use multiple MPS kernels in one render pass, do 3D rendering, or encode the render commands yourself.

MTIImagePromise protocol provides direct access to the underlying texture and the render context for a step in MetalPetal.

You can create new input sources or fully custom processing units by implementing the MTIImagePromise protocol. You will need to import an additional module to do so.

Objective-C

@import MetalPetal.Extension;

Swift

// CocoaPods
import MetalPetal.Extension

// Swift Package Manager
import MetalPetalObjectiveC.Extension

See the implementation of MTIComputePipelineKernel, MTICLAHELUTRecipe or MTIImage for example.

Alpha Types

If an alpha channel is used in an image, there are two common representations that are available: unpremultiplied (straight/unassociated) alpha, and premultiplied (associated) alpha.

With unpremultiplied alpha, the RGB components represent the color of the pixel, disregarding its opacity.

With premultiplied alpha, the RGB components represent the color of the pixel, adjusted for its opacity by multiplication.

MetalPetal handles alpha type explicitly. You are responsible for providing the correct alpha type during image creation.

There are three alpha types in MetalPetal.

MTIAlphaType.nonPremultiplied: the alpha value in the image is not premultiplied.

MTIAlphaType.premultiplied: the alpha value in the image is premultiplied.

MTIAlphaType.alphaIsOne: there's no alpha channel in the image or the image is opaque.

Typically, CGImage, CVPixelBuffer and CIImage objects have premultiplied alpha channels. MTIAlphaType.alphaIsOne is strongly recommended if the image is opaque, e.g. a CVPixelBuffer from a camera feed, or a CGImage loaded from a JPEG file.

You can call unpremultiplyingAlpha() or premultiplyingAlpha() on a MTIImage to convert the alpha type of the image.

For performance reasons, alpha type validation only happens in debug builds.

Alpha Handling of Built-in Filters

  • Most of the filters in MetalPetal accept unpremultiplied alpha and opaque images and output unpremultiplied alpha images.

  • Filters with an outputAlphaType property accept inputs of all alpha types, and you can use outputAlphaType to specify the alpha type of the output image.

    e.g. MTIBlendFilter, MTIMultilayerCompositingFilter, MTICoreImageUnaryFilter, MTIRGBColorSpaceConversionFilter

  • Filters that do not actually modify colors follow a passthrough alpha handling rule; that means the alpha type of the output image is the same as that of the input image.

    e.g. MTITransformFilter, MTICropFilter, MTIPixellateFilter, MTIBulgeDistortionFilter
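
For example, a minimal sketch using outputAlphaType on a blend filter (foreground and background are placeholder MTIImages):

let blendFilter = MTIBlendFilter(blendMode: .overlay)
blendFilter.inputImage = foreground
blendFilter.inputBackgroundImage = background
// Request a premultiplied-alpha output image.
blendFilter.outputAlphaType = .premultiplied
let output = blendFilter.outputImage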

For more about alpha types and alpha compositing, please refer to this amazing interactive article by Bartosz Ciechanowski.

Color Spaces

Color spaces are vital for image processing. The numeric values of the red, green, and blue components have no meaning without a color space.

Before continuing on how MetalPetal handles color spaces, you may want to know what a color space is and how it affects the representation of color values. There are many articles on the web explaining color spaces; to get started, we suggest Color Spaces, by Bartosz Ciechanowski.

Different software and frameworks have different ways of handling color spaces. For example, Photoshop has a default sRGB IEC61966-2.1 working color space, while Core Image, by default, uses a linear sRGB working color space.

Metal textures do not store any color space information with them. Most of the color space handling in MetalPetal happens during the input (MTIImage(...)) and the output (MTIContext.render...) of image data.

Color Spaces for Inputs

Specifying a color space for an input means that MetalPetal should convert the source color values to the specified color space during the creation of the texture.

  • When loading from URL or CGImage, you can specify which color space you'd like the texture data to be in, using MTICGImageLoadingOptions. If you do not specify any options when loading an image, the device RGB color space is used (MTICGImageLoadingOptions.default). A nil color space disables color matching; this is the equivalent of using the color space of the input image to create the MTICGImageLoadingOptions. If the model of the specified color space is not RGB, the device RGB color space is used as a fallback.

  • When loading from CIImage, you can specify which color space you'd like the texture data to be in, using MTICIImageRenderingOptions. If you do not specify any options when loading a CIImage, the device RGB color space is used (MTICIImageRenderingOptions.default). A nil color space disables color matching; color values are then loaded in the working color space of the CIContext.
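
For example, a minimal sketch of loading a CGImage with its color values converted to Display P3 (cgImage is a placeholder):

let options = MTICGImageLoadingOptions(colorSpace: CGColorSpace(name: CGColorSpace.displayP3))
let image = MTIImage(cgImage: cgImage, options: options)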

Color Spaces for Outputs

When specifying a color space for an output, the color space serves more as a tag that tells the rest of the system how to interpret the color values in the output. No actual color space conversion is performed.

  • You can specify the color space of an output CGImage using MTIContext.makeCGImage... or MTIContext.startTaskTo... methods with a colorSpace parameter.

  • You can specify the color space of an output CIImage using MTICIImageCreationOptions.

MetalPetal assumes that the output color values are in device RGB color space when no output color space is specified.

Color Spaces for CVPixelBuffer

MetalPetal uses CVMetalTextureCache and IOSurface to directly map CVPixelBuffers to Metal textures. So you cannot specify a color space for loading from or rendering to a CVPixelBuffer. However, you can specify whether to use a texture with a sRGB pixel format for the mapping.

In Metal, if the pixel format name has the _sRGB suffix, sRGB gamma compression and decompression are applied during the reading and writing of color values in the pixel. That means a texture with a _sRGB pixel format assumes the color values it stores are sRGB gamma corrected. When the color values are read in a shader, an sRGB-to-linear conversion is performed; when the color values are written in a shader, a linear-to-sRGB conversion is performed.
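
For example, a minimal sketch using the asynchronous rendering API with the sRGB option (pixelBuffer is a placeholder, assumed to be an IOSurface-backed buffer):

// Map the pixel buffer with a _sRGB pixel format while rendering.
try context.startTask(toRender: image, to: pixelBuffer, sRGB: true, completion: { task in
    // Called when the GPU finishes rendering.
})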

Color Space Conversions

You can use MTIRGBColorSpaceConversionFilter to perform color space conversions. Color space conversion functions are also available in MTIShaderLib.h.

  • metalpetal::sRGBToLinear (sRGB IEC61966-2.1 to linear sRGB)
  • metalpetal::linearToSRGB (linear sRGB to sRGB IEC61966-2.1)
  • metalpetal::linearToITUR709 (linear sRGB to ITU-R 709)
  • metalpetal::ITUR709ToLinear (ITU-R 709 to linear sRGB)
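
For example, a minimal sketch of using the conversion filter like any other unary filter (the properties for selecting the source and destination color spaces are not shown in this document; see the filter's header for the exact API):

let conversionFilter = MTIRGBColorSpaceConversionFilter()
conversionFilter.inputImage = image
// Configure the source and destination color spaces on the filter,
// then read the converted result.
let converted = conversionFilter.outputImage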

Extensions

Working with SceneKit

You can use MTISCNSceneRenderer to generate MTIImages from a SCNScene. You may want to handle the SceneKit renderer's linear RGB color space; see issue #76, "The image from SceneKit is darker than normal".

Working with SpriteKit

You can use MTISKSceneRenderer to generate MTIImages from a SKScene.

Working with Core Image

You can create MTIImages from CIImages.

You can render a MTIImage to a CIImage using a MTIContext.

You can use a CIFilter directly with MTICoreImageKernel or the MTICoreImageUnaryFilter class. (Swift Only)

Working with JavaScript

See MetalPetalJS

With MetalPetalJS you can create render pipelines and filters using JavaScript, making it possible to download your filters/renderers from "the cloud".

Texture Loader

It is recommended that you use APIs that accept MTICGImageLoadingOptions to load CGImages and images from URL, instead of using APIs that accept MTKTextureLoaderOption.

When you use APIs that accept MTKTextureLoaderOption, MetalPetal, by default, uses MTIDefaultTextureLoader to load CGImages, images from URL, and named images. MTIDefaultTextureLoader uses MTKTextureLoader internally and has some workarounds for MTKTextureLoader's inconsistencies and bugs at a small performance cost. You can also create your own texture loader by implementing the MTITextureLoader protocol. Then assign your texture loader class to MTIContextOptions.textureLoaderClass when creating a MTIContext.
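
For example, a minimal sketch of installing a custom loader (MyTextureLoader is a hypothetical class conforming to MTITextureLoader):

let options = MTIContextOptions()
options.textureLoaderClass = MyTextureLoader.self // hypothetical MTITextureLoader implementation
let context = try MTIContext(device: MTLCreateSystemDefaultDevice()!, options: options)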

Install

CocoaPods

You can use CocoaPods to install the latest version.

use_frameworks!

pod 'MetalPetal'

# Required if you are using Swift.
pod 'MetalPetal/Swift'

# Recommended if you'd like to run MetalPetal on Apple silicon Macs.
pod 'MetalPetal/AppleSilicon'

Sub-pod Swift

Provides Swift-specific additions and modifications to the Objective-C APIs to improve their mapping into Swift. Highly recommended if you are using Swift.

Sub-pod AppleSilicon

Provides the default shader library compiled in Metal Shading Language v2.3 which is required for enabling programmable blending support on Apple silicon Macs.

Swift Package Manager

Adding Package Dependencies to Your App

iOS Simulator Support

MetalPetal can run on Simulator with Xcode 11+ and macOS 10.15+.

MetalPerformanceShaders.framework is not available on Simulator, so filters that rely on MetalPerformanceShaders, such as MTIMPSGaussianBlurFilter and MTICLAHEFilter, do not work.

The Simulator supports fewer features and has different implementation limits than an actual Apple GPU. See Developing Metal Apps that Run in Simulator for details.

Quick Look Debug Support

If you do a Quick Look on a MTIImage, it'll show you the image graph that you constructed to produce that image.

Quick Look Debug Preview

Trivia

Why Objective-C?

Contribute

Thank you for considering contributing to MetalPetal. Please read our Contributing Guidelines.

License

MetalPetal is MIT-licensed. LICENSE

The files in the /MetalPetalExamples directory are licensed under a separate license. LICENSE.md

Documentation is licensed CC-BY-4.0.

Comments
  • ios13 beta Produces strange orientations

    I have a landscape app that uses Metal Petal and on the ios13 beta the previews in MTIImageView appear squashed. Videos recorded and images taken are the correct orientation and shape but the preview seems to squash the landscape image into a portrait frame regardless of the size of the images. Is this a known issue?

    This appears to be the same with the metal petal demo.

    opened by kallipigous 20
  • Memory issue

    Hello Yuao! Hope you are doing good!

    As you might remember, I was experimenting with using MetalPetal to generate a video asset from a set of images. I added a feature to it where, on completion, the class returns all the images it used to render the video, for any post-processing needs. I used let cgImages = try images.map({ try self.context.makeCGImage(from: $0) }). After that code is executed, I notice a huge memory footprint remaining in memory. I'm not sure if I'm doing anything wrong, but I checked the leak profiler and there was no leak detected.

    Usually, when I am generating a video out of images, the render method adds 15 MB - 20 MB per image. After the video generation is complete I am left with 600 MB in RAM; before, I would be left with approximately 140 MB, which is also bad, however, it was easier to get by with. Additionally, it's important for me to mention that I am rendering really high resolution images directly from the iOS camera; they are approximately 3K or 4K resolution. You might be wondering: why not just transform the images to a smaller resolution and render them then? That's because it's a requirement for me to be able to render really high resolution images, as I want the video to be high resolution itself.

    I also tried checking the allocation profiler, but there is an unusual case where the allocations show the memory spike to 30 MB when the video generation begins and then gradually go down to 10 MB; on the other hand, the leak profiler actually shows the spike at 400 MB - 600 MB. It is very weird.

    I have set up the project here if you are willing to do some investigations. I have set warnings where I think things are going wrong so that it would be easy to navigate to the problem directly. Thank you in advance for reading this long issue.

    opened by SerjPridestar 18
  • How to render text on top of saved video recording

    Not sure this should really be flagged as "issue", rather a "How does one achieve it?". Apologies if this creates issues.

    How do I dynamically render text on top of a movie recording and save it to the movie? I have managed to do it via the suggested answer in issue 89, but the output seems to continue to "stretch" the rendered text to full screen (width and height).

    I have tried to set lower values for maximum bounding size but it only seems to set "outer bound" (as expected), leading to truncation, but not minimizing font size/extrapolation/scaling.

    Is there a simple way to:

    • Add a string (ideally over multiple lines, but I can manage with line break)
    • Ensure string is "reasonably sized" (e.g. via font size?)
    • Have it placed in top left hand corner (with some padding)
    • Change font color?

    Any help appreciated and THANK YOU for a WONDERFUL framework 👍

    Where merging of MTIImages is occurring

    // Converting pixelBuffer to MTIImage
    let image = MTIImage(cvPixelBuffer: pixelBuffer, alphaType: .alphaIsOne)
    let filterOutputImage = self.filter(image, faces)
    let outputImage = self.recordingState.isVideoMirrored ? filterOutputImage.oriented(.upMirrored) : filterOutputImage
                    
    // Draw text to MTIImage
    let str = "Testing"
    let textBoxSize = CGSize(width: 200, height: 200)
    let mtiImage = CapturePipeline.renderSingleLineAttributedText(NSAttributedString(string: str), maximumBoundingSize: (textBoxSize))
    
    // Merging two images
    let mtiFilter = MTIBlendFilter(blendMode: .normal)
    mtiFilter.inputImage = mtiImage
    mtiFilter.inputBackgroundImage = image
    var renderOutput = try self.imageRenderer.render(outputImage, using: renderContext)
    if let outputImg = mtiFilter.outputImage {
        renderOutput = try self.imageRenderer.render(outputImg, using: renderContext)
    }
    
    try self.recorder?.appendSampleBuffer(SampleBufferUtilities.makeSampleBufferByReplacingImageBuffer(of: sampleBuffer, with: renderOutput.pixelBuffer)!)
    

    The drawing function which likely needs some tweaking?

    //    static func renderSingleLineAttributedText(_ text: NSAttributedString, maximumBoundingSize: CGSize) throws -> MTIImage {
            static func renderSingleLineAttributedText(_ text: NSAttributedString, maximumBoundingSize: CGSize) -> MTIImage? {
                    print("Maximum bounding size: \(maximumBoundingSize)")
                    let scale: CGFloat = 20
                    let textStorage = NSTextStorage(attributedString: text)
                    let layoutManager = NSLayoutManager()
                    layoutManager.allowsNonContiguousLayout = true
                    textStorage.addLayoutManager(layoutManager)
                    let textContainer = NSTextContainer(size: maximumBoundingSize)
                    textContainer.lineBreakMode = .byTruncatingTail
    //                textContainer.maximumNumberOfLines = 1
                    textContainer.maximumNumberOfLines = 0
                    textContainer.lineFragmentPadding = 0
                    layoutManager.addTextContainer(textContainer)
                    let range = layoutManager.glyphRange(for: textContainer)
                print("range: \(range)")
                    let usedRect = layoutManager.usedRect(for: textContainer).integral
                print("usedRect: \(usedRect)")
                UIGraphicsBeginImageContextWithOptions(usedRect.size, false, scale)
                    layoutManager.ensureLayout(for: textContainer)
                    layoutManager.drawGlyphs(forGlyphRange: range, at: usedRect.origin)
                    let image = UIGraphicsGetImageFromCurrentImageContext()
                    UIGraphicsEndImageContext()
                    guard let cgImage = image?.cgImage else {
        //                throw Error.imageRenderFailure
                return nil
            }
        return MTIImage(cgImage: cgImage, options: [.SRGB: false], isOpaque: false).unpremultiplyingAlpha()
    }
    
    

    What the output currently looks like: [screenshot]

    opened by wesselpeder 14
  • New pixel formats for HDR videos

    Context

    MetalPetal already supports a great range of pixel formats including the typical ones being used by iPhone cameras to capture videos (kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange and kCVPixelFormatType_420YpCbCr8BiPlanarFullRange).

    With the iPhone 12 series being introduced app users are now able to capture videos in kCVPixelFormatType_420YpCbCr10BiPlanarVideoRange pixel format expecting their videos being captured / played back / exported in HDR.

    Problem

    Using MetalPetal for processing HDR videos without converting to lower dynamic ranges within the pipeline is not possible as kCVPixelFormatType_420YpCbCr10BiPlanarVideoRange is not yet supported in MetalPetal.

    See the supported CVPixelFormats here: https://github.com/MetalPetal/MetalPetal/blob/b5254c1191cbad91585307ba3025706354675085/Frameworks/MetalPetal/MTICVPixelBufferPromise.m#L77-L112

    @YuAo are you planning to add this capability to MetalPetal? Also what do you think would be the right approach to process video/images in this new format considering the key concepts of MetalPetal?

    opened by gazadge 14
  • Banding issue

    Hi guys,

    I have a problem with the lut feature, but i am not blaming MetalPetal for this. As I also experienced the same problem with the other libraries.

    The problem is banding with some LUTs. I know how to create LUTs and I know what to use or not. What I don't understand is why the banding happens, and how to get rid of it. Is working in the P3 color space the remedy? That's why I decided to open the thread here, and I am attaching final output images showing the banding. You don't see this kind of problem in high-quality software.

    On the other hand, if I convert the 3D LUT image to a cube file using https://github.com/YuAo/ColorLookupTable2Cube, that cube file works perfectly in Photoshop without banding.

    And here are the images that have banding: https://imgur.com/67CPAGg https://imgur.com/7q77DSx

    opened by Umity 13
  • Get Single Color Channel

    Hello there,

    I am trying to get R, G and B channels of image for compositing later.

    This is what I've currently tried:

    let options = MTIContextOptions()
            
    guard let device = MTLCreateSystemDefaultDevice(), let context = try? MTIContext(device: device, options: options) else {
        return
    }
            
    imageView.context = context
            
    guard let eximage = UIImage(named: "example") else { return }
    guard let examp = CIImage(image: eximage) else { return }
    let image: MTIImage = MTIImage(ciImage: examp).unpremultiplyingAlpha()
    
    let filter = MTIRGBToneCurveFilter()
    filter.blueControlPoints = [MTIVector(value: CGPoint(x: 0, y: 0)), MTIVector(value: CGPoint(x: 0, y: 0))]
    filter.redControlPoints = [MTIVector(value: CGPoint(x: 0, y: 0)), MTIVector(value: CGPoint(x: 0, y: 0))]
    filter.greenControlPoints = [MTIVector(value: CGPoint(x: 0, y: 0)), MTIVector(value: CGPoint(x: 1, y: 0))]
    filter.inputImage = image
            
    let filter2 = MTIRGBToneCurveFilter()
    filter2.blueControlPoints = [MTIVector(value: CGPoint(x: 0, y: 0)), MTIVector(value: CGPoint(x: 1, y: 0))]
    filter2.redControlPoints = [MTIVector(value: CGPoint(x: 0, y: 0)), MTIVector(value: CGPoint(x: 0, y: 0))]
    filter2.greenControlPoints = [MTIVector(value: CGPoint(x: 0, y: 0)), MTIVector(value: CGPoint(x: 0, y: 0))]
    filter2.inputImage = image
            
    let filter3 = MTIRGBToneCurveFilter()
    filter3.blueControlPoints = [MTIVector(value: CGPoint(x: 0, y: 0)), MTIVector(value: CGPoint(x: 0, y: 0))]
    filter3.redControlPoints = [MTIVector(value: CGPoint(x: 0, y: 0)), MTIVector(value: CGPoint(x: 1, y: 1))]
    filter3.greenControlPoints = [MTIVector(value: CGPoint(x: 0, y: 0)), MTIVector(value: CGPoint(x: 0, y: 0))]
    filter3.inputImage = image
            
    guard let outputImage = filter.outputImage else { return }
    guard let outputImage2 = filter2.outputImage else { return }
    guard let outputImage3 = filter3.outputImage else { return }
            
    let compositeFilter = MTIMultilayerCompositingFilter()
    compositeFilter.layers = [MTILayer(content: outputImage, contentRegion: CGRect(x: 0, y: 0, width: outputImage.size.width, height: outputImage.size.height), compositingMask: nil, layoutUnit: .pixel, position: CGPoint(x: 0, y: 0), size: outputImage.size, rotation: 0, opacity: 1, blendMode: .normal),
    MTILayer(content: outputImage2, contentRegion: CGRect(x: 0, y: 0, width: outputImage2.size.width, height: outputImage2.size.height), compositingMask: nil, layoutUnit: .pixel, position: CGPoint(x: 0, y: 0), size: outputImage2.size, rotation: 0, opacity: 1, blendMode: .softLight),
    MTILayer(content: outputImage3, contentRegion: CGRect(x: 0, y: 0, width: outputImage3.size.width, height: outputImage3.size.height), compositingMask: nil, layoutUnit: .pixel, position: CGPoint(x: 0, y: 0), size: outputImage3.size, rotation: 0, opacity: 1, blendMode: .softLight)]
    compositeFilter.inputBackgroundImage = image
    guard let compositeImage = compositeFilter.outputImage else { return }
    imageView.image = outputImage
    

    Am I missing something? Or are there any simpler method for this?

    Thanks in advance.

    opened by onurgenes 13
  • Saving filtered video

    Hi, after hours of work I couldn't find a way to record the filtered video in the demo. Previously, I was able to save the filtered video, but now the demo uses SwiftUI and I couldn't do it.

    This didn't work; it saves the non-filtered video:

    func save(videoFileUrl: URL) {
        PHPhotoLibrary.shared().performChanges({
            PHAssetChangeRequest.creationRequestForAssetFromVideo(atFileURL: (videoPlayer?.currentItem?.asset as? AVURLAsset)!.url)
        }, completionHandler: { succeeded, error in
            guard error == nil, succeeded else {
                return
            }
        })
    }
    
    opened by Umity 12
  • Filters not applied for specific CGImage

    Hello,

    My application works with PHAsset. It loads a UIImage for a PHAsset and then applies MTI filters.

    I have faced a case where filters don't work for specific images. These are iPhone screenshots and panoramas.

    Please see below for details about the CGImages.

    CGImage that don't work with filters

    <CGImage 0x10638c1f0> (DP)
    	<<CGColorSpace 0x2835ce0a0> (kCGColorSpaceICCBased; kCGColorSpaceModelRGB; Display P3)>
    		width = 375, height = 376, bpc = 16, bpp = 64, row bytes = 3008 
    		kCGImageAlphaPremultipliedLast | kCGImageByteOrder16Little  | kCGImagePixelFormatPacked 
    		is mask? No, has masking color? No, has soft mask? No, has matte? No, should interpolate? Yes
    

    The scaled version of the CGImage above that works with filters.

    <CGImage 0x118228710> (DP)
    	<<CGColorSpace 0x282198ea0> (kCGColorSpaceICCBased; kCGColorSpaceModelRGB; Display P3)>
    		width = 300, height = 300, bpc = 8, bpp = 32, row bytes = 1216 
    		kCGImageAlphaNoneSkipFirst | kCGImageByteOrder32Little  | kCGImagePixelFormatPacked 
    		is mask? No, has masking color? No, has soft mask? No, has matte? No, should interpolate? Yes
    

    The code below fixes the issue.

    The key is CGImage.normalized. It looks like a hack to me, but I'm not good at image processing. I believe that it can be solved at the MetalPetal level.

    extension CGImage {
        var normalized: CGImage? {
            guard bitsPerComponent != 8 else {
                return self
            }
    
            let colorSpace = CGColorSpaceCreateDeviceRGB()
    
            var bitmapInfo: UInt32 =
                CGImageAlphaInfo.noneSkipFirst.rawValue |
                CGImageByteOrderInfo.order32Little.rawValue
    
            if #available(iOS 12.0, *) {
                bitmapInfo |= CGImagePixelFormatInfo.packed.rawValue
            }
    
            guard let context = CGContext(
                data: nil,
                width: width,
                height: height,
                bitsPerComponent: 8,
                bytesPerRow:  4 * width,
                space: colorSpace,
                bitmapInfo: bitmapInfo
            ) else { return nil }
    
            context.interpolationQuality = .default
    
            let destinationRect = CGRect(x: 0, y: 0, width: width, height: height)
    
            context.clear(destinationRect)
            context.draw(self, in: destinationRect)
    
            return context.makeImage()
        }
    }
    
    extension ... {
        func mtiImage(from image: UIImage) -> MTIImage? {
            guard
                let cgImageNotNormalized = image.cgImage,
                let cgImage = cgImageNotNormalized.normalized
            else {
                return nil
            }
    
            let mtiImage = MTIImage(
                cgImage: cgImage,
                options: [.SRGB : false],
                isOpaque: true
            )
    
            let lutFilter = id.flatMap { MTIColorLookupFilter(filterID: $0) }
            lutFilter?.intensity = adjustments.intensity
    
            let clarityFilterPort = adjustments.clarityFilter
                .flatMap { AnyIOPort($0) } ?? AnyIOPort(ImagePassthroughPort())
            let contrastFilterPort = adjustments.contrastFilter
                .flatMap { AnyIOPort($0) } ?? AnyIOPort(ImagePassthroughPort())
            let exposureFilterPort = adjustments.exposureFilter
                .flatMap { AnyIOPort($0) } ?? AnyIOPort(ImagePassthroughPort())
            let saturationFilterPort = adjustments.saturationFilter
                .flatMap { AnyIOPort($0) } ?? AnyIOPort(ImagePassthroughPort())
            let shadowsFilterPort = adjustments.shadowsFilter
                .flatMap { AnyIOPort($0) } ?? AnyIOPort(ImagePassthroughPort())
            let temperatureFilterPort = adjustments.temperatureFilter
                .flatMap { AnyIOPort($0) } ?? AnyIOPort(ImagePassthroughPort())
            let vibranceFilterPort = adjustments.vibranceFilter
                .flatMap { AnyIOPort($0) } ?? AnyIOPort(ImagePassthroughPort())
    
            return FilterGraph.makeImage { output in
                if let lutFilter = lutFilter {
                    mtiImage => lutFilter.inputPorts.inputImage
    
                    lutFilter
                        => clarityFilterPort
                        => contrastFilterPort
                        => exposureFilterPort
    
                    exposureFilterPort
                        => saturationFilterPort
                        => shadowsFilterPort
                        => temperatureFilterPort
    
                    temperatureFilterPort
                        => vibranceFilterPort
                        => output
                } else {
                    mtiImage
                        => clarityFilterPort
                        => contrastFilterPort
                        => exposureFilterPort
    
                    exposureFilterPort
                        => saturationFilterPort
                        => shadowsFilterPort
                        => temperatureFilterPort
    
                    temperatureFilterPort
                        => vibranceFilterPort
                        => output
                }
            }
        }
    }
    

    Could you please help with this case? Thank you!

    PS. Thank you for such a great library.

    opened by larryonoff 12
  • Distortion problem [失真问题]

    Large pictures resized to a very small size show serious distortion. Are there any good methods or solutions? For example, for an area around 58x58 in size, is it possible to go even smaller?

    opened by ccworld1000 11
  • Possible to use MetalPetal in this case?

    Hello, I am sorry to trouble you, but I have already spent several days making a prototype using your library. Unfortunately, I have not yet found a satisfactory solution for what I want. I need to clone the app below, but I am not sure whether I can achieve my goal using your library.

    features

    • local video filtering
    • face blur with Vision
    • Overlay image and merge
    • configure audio setting

    At first I tried to do the filtering, but ran into many problems: when applying filters using MTIVideoComposition and previewing with AVPlayer, I need to change values in real time (e.g. brightness), but AVPlayer doesn't preview in real time because MTIVideoComposition doesn't update in real time the way CADisplayLink does, and so on...

    How should I design my architecture around MetalPetal in order to implement what I want? What kinds of components should I use? If you can help me, I'd be very thankful!

    opened by mgstar1021 11
  • How to get aspect MTIImage ?

    Sorry, I'm a newbie with MetalPetal. Currently I can get an aspect-fitted MTIImage very easily if I render the texture from an MTKView, but for some reasons I hope I can get an aspect-fitted MTIImage directly. How should I do that?

    opened by PatrickSCLin 11
  • MTIHistogramDisplayFilter with transparent background

    I'm trying to display a histogram view over a camera viewfinder, and having a transparent background would be a big benefit here.

    It seems that there is no way to alter the output of MTIHistogramDisplayFilter to have no background, which would be most convenient, and so I have tried ways to remove it after the histogram image is created, with no success.

    I've looked through some other issues for how this might be achieved, but have had no success.

    Is there a way to replace black color with transparent color in MTIImage? #188 suggests using MTIChromaKeyBlendFilter, however this results in the white part of the histogram also being made transparent.

    I have also looked at Why MTIChromaKeyBlendFilter removes white colored pixels also in video? #311, but I don't think the suggestion of a screen blend filter works for my use. The histogram is added to a separate layer of the UI, and blending directly with the viewfinder is not feasible.

    Any help appreciated.

    • [x] I have read the README
    • [x] I have searched for similar issues
    opened by BenRiceM 0
  • FaceTrackingPixellate camera flip issue

    Hi, I am using MetalPetal and everything is OK, but when I use FaceTrackingPixellate for the camera preview shown in the example, it works fine with the front camera; after switching to the back camera, the pixellate area from the front camera still shows even though there is no face in the back camera feed. I tried removing the filter and adding the FaceTrackingPixellate filter again, but it is not working.

    opened by dinesh-advent 0
  • How to achieve Whites & Blacks adjustment similar to Lightroom

    I've been trying to write my own metal shader to achieve a whites & blacks adjustment similar to Lightroom with no success so far. Any advice, guidance or resource that you think can help would be greatly appreciated 🙏

    Thanks!

    opened by twomedia 0
  • Apply a background image/blur to a video

    Hi all, I have a question. Is it possible to make a blurred video background (or image) and then save a new video? Something like this:


    Thank you for help.

    opened by noho501 1
  • Performance improvement

    Hi @YuAo

    As ever thanks for the great library.

    I'm trying to improve the performance of our application. On occasion we are getting ever-increasing memory usage, and I have narrowed it down to the context.startTask(toRender: image, to: pixelBuffer, sRGB: false, completion:) call.

    We are using PixelBufferPoolBackedImageRenderer.swift from the examples.

    The startTask call can take as much as 0.2 seconds to return, and of course even at 30 fps this is going to progressively build up.

    Any thoughts or suggestions would be much appreciated!

    opened by dsmurfin 8
  • Color of exported video differs from the input video

    First of all thank you for this awesome library, it has helped me a lot. My issue is that the colors of the exported video are slightly different from the input video although I did not specify any custom colorspaces. I modified the demo project to reproduce the issue as shown below.

    Checklist

    • [x] I've read the README
    • [x] If possible, I've reproduced the issue using the master branch of this repo
    • [x] I've searched for existing GitHub issues

    Environment

    Info | Value
    -------------------|----------
    MetalPetal Version | 1.22.0
    Integration Method | CocoaPods
    Platform & Version | iOS 14.5
    Device | iPhone X

    Steps to Reproduce

    To see the color change in export, you can replace the function of VideoProcessorView#updateVideoUrl(url: URL) in the VideoProcessorView file with the following code and export the video.

    • This is a stripped down version of the demo which exports a brown (hex string #B5463D) square on a blue background.
    • In the Video Processing screen, you can select the brown square test video I attached below and export it.
            func updateVideoURL(_ url: URL) {
                let asset = AVURLAsset(url: url, options: [AVURLAssetPreferPreciseDurationAndTimingKey: true])
                let presentationSize = CGSize(width: 1080, height: 1920)
                let outputFilter = MultilayerCompositingFilter()
    
                let videoComposition = MTIVideoComposition(asset: asset, context: renderContext, queue: DispatchQueue.main, filter: { request in
                    guard let sourceImage = request.anySourceImage else {
                        return MTIImage.white
                    }
    
                    // Add a 7 sec brown square video.
                    let videoWidth: CGFloat = 500
                    let videoHeight: CGFloat = 500
                    let videoRect = CGRect(x: presentationSize.width / 2 - videoWidth / 2,
                                           y: presentationSize.height / 2 - videoHeight / 2,
                                           width: videoWidth,
                                           height: videoHeight)
                    let videoLayer = MultilayerCompositingFilter.Layer(content: sourceImage).frame(videoRect, layoutUnit: .pixel)
                    outputFilter.layers = [ videoLayer ]
                    
                    // Make the background blue
                    let color = MTIColor(red: 0, green: 0, blue: 1, alpha: 1)
                    let backgroundImage = MTIImage(color: color, sRGB: true, size: presentationSize)
                    outputFilter.inputBackgroundImage = backgroundImage
                    return outputFilter.outputImage!
                })
                
                videoComposition.renderSize = presentationSize
                
                let playerItem = AVPlayerItem(asset: asset)
                playerItem.videoComposition = videoComposition.makeAVVideoComposition()
                self.videoComposition = videoComposition
                videoAsset = asset
                videoPlayer = AVPlayer(playerItem: playerItem)
                videoPlayer?.play()
            }
    

    Expected behavior

    I expected the color of the exported video to be the same as the input video.

    Actual behavior

    Firstly, the color of the brown square in the preview video is very slightly different (#B5463F compared to the input #B5463D) but it is unnoticeable and thus not an issue. However, the exported video's brown square is quite different (#AB3B40) and is noticeable when the input video and exported videos are placed side by side.

    Input brown square 7s video https://user-images.githubusercontent.com/21056777/132044499-1201b67b-138d-49bd-ba1c-a3b7505880a4.mov

    Exported brown square on blue background video https://user-images.githubusercontent.com/21056777/132044571-568fe3ec-46d1-4b64-8ef6-a87f9dd96525.mp4

    Screenshot of the preview video: [screenshot]

    I intend to use MetalPetal in my video collage app which allows users to add stickers (images), text, etc onto a video and export it. Thus, it is important that the video colors do not deviate too much from the original. Even though this example uses a single video on a background, the same issue is observed when I try to add images onto the video using the MultilayerCompositingFilter.Layer method similar to the demo.

    What I've tried so far

    I suspected that this was a colorspace issue and thus tried to use a custom colorspace when initializing the images like this:

    // Tried different colorspaces like sRGB, linearSRGB etc
    let options = MTICGImageLoadingOptions(colorSpace: CGColorSpace(name: CGColorSpace.displayP3))
    let colorMtiImage = MTIImage(cgImage: cgImage, options: options)
    

    I also tried applying a colorspace conversion filter like so:

    let curveFilter = MTIRGBColorSpaceConversionFilter()
    curveFilter.outputPixelFormat = .bgra8Unorm_srgb
    return curveFilter.outputImage
    

    But the output colors are still different from the input video.

    If I'm not wrong, iOS' default colorspace is sRGB and this should be used by default when a MTIImage is initialized using a CGImage according to the docs, so I'm not sure what is causing the colors to be slightly off. Thanks once again for maintaining this library and I would appreciate it if you could offer some advice on how to fix this issue.

    opened by wilfredbtan 2
Releases(1.25.1)
  • 1.25.1(Nov 3, 2022)

  • 1.25.0(Oct 27, 2022)

  • 1.24.2(Jul 14, 2022)

    Enhancements

    • [Shaders] Support unified Metal language. https://github.com/MetalPetal/MetalPetal/commit/bdb515635033288b9d8f13aeab4e5343894f7aa9

      This also fixes the SwiftPM integration on iOS 16 and macOS 13.

    Source code(tar.gz)
    Source code(zip)
  • 1.24.1(Mar 30, 2022)

    Bug fixes

    • Fix rendering of CIImages that have non-zero origin. #314
    • [MTIDataBuffer] Use stride instead of size for raw pointer access. 58b226325e52ee13d18e869f28912522dc476ca1

    Enhancements

    • MTITexturePromise.texture is now public. 3e9bc59f02a8ed6e20f3642ba55f48ceb262891f
    • MTICVPixelBufferPromise.pixelBuffer is now public. 16d627405c3a9c7ac228a98998d319816583d038
    Source code(tar.gz)
    Source code(zip)
  • 1.24.0(Dec 7, 2021)

    Deprecation

    • Drops iOS 10 support.

    Enhancements

    • Refine Swift interfaces for MTIRenderPipelineKernel #293
    • Report an error instead of trapping when rendering a zero-size image #292
    Source code(tar.gz)
    Source code(zip)
  • 1.23.0(Oct 14, 2021)

    Enhancements

    • Automatically fall back to Core Image to create textures for non-IOSurface-backed CVPixelBuffers. 5b74467543412a41643f0f65f41271f5c29079d8
    • [MTIVideoComposition] Minor performance improvements by caching track transforms. 9f524836a5980880de646475007052fa870e98ab
    • [MTIVideoComposition] Add support for colorPrimaries, colorYCbCrMatrix and colorTransferFunction. 601636764678ac3e52ff71831fced506155542e2
    Source code(tar.gz)
    Source code(zip)
  • 1.22.0(Jun 23, 2021)

    Enhancements

    • [MTIAsyncVideoCompositionRequestHandler] Do not report noSourceFrame error. 84be369be4402122a3afd97b6985bbf5099839b2
    • [MTIAsyncVideoCompositionRequestHandler] Make Request.anySourceImage an optional value. #256
    • Improve memory handling on some failure branches. 7388f540a23da10d4aaf240c512135e0e58bcff5
    • Silence some compiler warnings on Xcode 13. eb67c48060989c67ece5dc75fa811b2489dd9da8
    Source code(tar.gz)
    Source code(zip)
  • 1.21.1(May 13, 2021)

    Bug fixes

    • Remove the load action assertion for memoryless render targets when the render target is not actually memoryless (Intel Macs). e7335c6440688b2c49c9dac51bce126c8bc7bfdf
    Source code(tar.gz)
    Source code(zip)
  • 1.21.0(May 10, 2021)

    Enhancements

    • MTIContext now automatically chooses to use MTIHeapTexturePool on supported devices. a0fa22797ea3b18fd7d2eb6324a06b19da887791

    Deprecation

    • MTIVector no longer conforms to NSCoding.

      MTIVector is designed for encoding small vector values for the shader functions. Data serialization should be done using other methods.

    • MTIContextOptions no longer conforms to NSCopying.

      MTIContextOptions is designed to be a temporary object. The context does not keep references to the context options. There is no need for the MTIContextOptions to conform to the NSCopying protocol.
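
      In a sketch, the options object is created, handed to the context initializer, and then discarded; the throwing MTIContext(device:options:) initializer is assumed from the Objective-C API:

      import Metal
      import MetalPetal

      let options = MTIContextOptions()
      // Configure `options` here as needed before creating the context.
      let context = try MTIContext(device: MTLCreateSystemDefaultDevice()!, options: options)
      // The context does not retain `options`; it can be discarded now.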

    Source code(tar.gz)
    Source code(zip)
  • 1.20.0(Apr 30, 2021)

    Features

    • Refactor MTIRoundCornerFilter to support both circular and continuous corner curves.
    • Add round corner support for MTILayer.

    Enhancements

    • Add Hashable conformance to MTIVertex.
    Source code(tar.gz)
    Source code(zip)
  • 1.19.0(Apr 19, 2021)

    Features

    • Add MTIRGBColorSpaceConversionFilter for converting between linear sRGB, sRGB and ITU-R 709 color spaces.
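
      A minimal usage sketch; the inputColorSpace/outputColorSpace property names and enum cases are assumptions inferred from the filter's purpose, not confirmed by these notes:

      // Convert a linear sRGB image to sRGB for display.
      let conversion = MTIRGBColorSpaceConversionFilter()
      conversion.inputImage = image              // `image` is a placeholder MTIImage
      conversion.inputColorSpace = .linearSRGB   // assumed case name
      conversion.outputColorSpace = .sRGB        // assumed case name
      let converted = conversion.outputImage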

    Enhancements

    • MTIUnpremultiplyAlphaFilter and MTIPremultiplyAlphaFilter no longer inherit from MTIUnaryImageRenderingFilter; this avoids some misuses.
    • Internal logic improvements for MTIMultilayerCompositeKernel.
    • Color space handling improvements. https://github.com/MetalPetal/MetalPetal#color-spaces

    Deprecation

    • Remove MTIUnpremultiplyAlphaWithSRGBToLinearRGBFilter; use MTIRGBColorSpaceConversionFilter instead.
    Source code(tar.gz)
    Source code(zip)
  • 1.18.0(Apr 10, 2021)

    Features

    • You can now set the outputAlphaType of MTIMultilayerCompositingFilter, MTICoreImageUnaryFilter and MTIBlendFilter (see the sketch below).
    • MTIBlendFilter now accepts images with premultiplied alpha channels.
    • You can now assign a mask to MTILayer. #237
    • You can now use MTIVideoComposition to process videos. VideoIO is no longer a requirement for video processing. #239 #236
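
      A compact sketch of the new outputAlphaType control, assuming the Swift MultilayerCompositingFilter wrapper forwards this property and that MTIAlphaType exposes a premultiplied case, as used elsewhere in MetalPetal:

      let filter = MultilayerCompositingFilter()
      // Ask the filter to emit premultiplied alpha (assumed case name).
      filter.outputAlphaType = .premultiplied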

    Enhancements

    • Improve the Swift API of MultilayerCompositingFilter; MultilayerCompositingFilter.Layer now supports method chaining (see the sketch below).
    • Performance improvements for MTIBlendFilter and MTIMultilayerCompositingFilter.
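
      Continuing the sketch above, layers can be configured through chained calls; frame(_:layoutUnit:) appears in the issue earlier in this page, while opacity(_:) and rotation(_:) are assumed chaining counterparts of the layer's properties:

      let sticker = MultilayerCompositingFilter.Layer(content: stickerImage) // `stickerImage` is a placeholder MTIImage
          .frame(CGRect(x: 20, y: 20, width: 100, height: 100), layoutUnit: .pixel)
          .opacity(0.8)      // assumed chaining method
          .rotation(.pi / 4) // assumed chaining method, radians
      filter.layers = [sticker]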
    Source code(tar.gz)
    Source code(zip)
  • 1.17.0(Mar 26, 2021)

  • 1.16.0(Feb 16, 2021)

    Features

    • You can now register a new blend mode from a Metal shader function string using MTIBlendFunctionDescriptors(blendFormula:), as sketched below.
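
      A hedged sketch of registration; the float4 blend(float4 Cb, float4 Cs) formula signature and the MTIBlendModes.registerBlendMode(_:with:) call are assumptions based on the API named in this note:

      // Define a custom "darken" blend mode from a Metal formula string.
      let myBlendMode = MTIBlendMode(rawValue: "MyDarken")
      MTIBlendModes.registerBlendMode(myBlendMode, with: MTIBlendFunctionDescriptors(blendFormula: """
      float4 blend(float4 Cb, float4 Cs) {
          // Cb: backdrop color, Cs: source color; keep the darker channel.
          return min(Cb, Cs);
      }
      """))
      // Use it like any built-in blend mode.
      let blendFilter = MTIBlendFilter(blendMode: myBlendMode)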

    Enhancements

    • Improvements for Apple silicon Macs. #229
    • Swift API enhancements. e9b192b d1ce01f
    • Update the parameter encoding logic of render and compute pipeline kernels. Number types are automatically converted; for example, you can now pass int values to float parameters. 1ec9966

    Bug fixes

    • Fix a bug in the multilayer compositing filter when its rasterSampleCount is greater than zero on Intel Macs.
    Source code(tar.gz)
    Source code(zip)
  • 1.15.0(Dec 21, 2020)

    Features

    • Added tvOS support. #220
    • Added kCVPixelFormatType_420YpCbCr10BiPlanarVideo/FullRange support. #218

    Bug Fixes

    • MTIHighPassSkinSmoothingFilter initial state fix. https://github.com/MetalPetal/MetalPetal/commit/31a9aee5efd362432f824862b93d9d3cee1ca9ee
    Source code(tar.gz)
    Source code(zip)
  • 1.14.0(Oct 20, 2020)

    Features

    • Added MTISKSceneRenderer
    • Added antialiasingMode and sRGB support to MTISCNSceneRenderer

    Enhancements

    • Added an option to prevent automatic context creation for MTIImageView and MTIThreadSafeImageView. 27c11f666bd61299f63351572b9f09cff4e0ca4d
    • Improve the internal logic of passthroughAlphaTypeHandlingRule 46f164effda7f0f32429e7e3bfd8ed04067d4c75
    Source code(tar.gz)
    Source code(zip)
  • 1.13.0(Jul 24, 2020)

  • 1.12.0(Jul 22, 2020)

    Demo

    • Add a "Sketch Board" demo.

    Enhancements

    • Add support for using SIMD vectors directly as shader function parameters in Swift. #171
    • Add support for short/ushort/char/uchar SIMD vectors to MTIVector. #171
    • Add support for tintColor to MTILayer. 1463c6ddcda446b8b38a50d8bd1091d158b708df
    • Improve color space handling in MTIDefaultTextureLoader 7e67cee8141ff32a06c8ff9bc4cb7806110198ee and MTIThreadSafeImageView 3bf5f67c8dbf8d49aa315af69b3e5ab05c503138

    Bug Fixes

    • MTIContext.renderedBuffer(for:) now respects the targetImage's dimensions. 14e5e40b3e3ed806e193e3048386736af5609aef
    Source code(tar.gz)
    Source code(zip)
  • 1.11.2(Jul 16, 2020)

  • 1.11.1(Jul 6, 2020)

  • 1.11.0(Jul 6, 2020)

    Features

    • Add MSAA support for MTIMultilayerCompositingFilter, MTITransformFilter and MTIRenderCommand. #166
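
      In a sketch, enabling 4x MSAA is a one-line change; the rasterSampleCount property name appears in the 1.16.0 notes above, 4 samples is commonly supported, but device capability should be checked:

      let filter = MultilayerCompositingFilter()
      filter.rasterSampleCount = 4 // multisample anti-aliasing sample count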

    Enhancements

    • MTITransformFilter now respects its outputPixelFormat #161
    • A new default texture loader that fixes many MTKTextureLoader-related problems. #164
    • Update public headers to be angle-bracketed instead of double-quoted. #163
    • Restrict subclassing for most of the Objective-C classes. #163
    • Tweak the Swift interface for MTICVMetalTextureBridging. b8fa1e7c8df07589144b19ad6925a7d2dffb8ea3
    Source code(tar.gz)
    Source code(zip)
  • 1.10.6(Jun 9, 2020)

  • 1.10.5(May 16, 2020)

    • Fixed an issue that may prevent MTIThreadSafeImageView's drawableSize from being updated. #149
    • Error handling improvements. #153 #152
    • Cleaned up a loop in MTIRenderPipelineKernel. #151
    Source code(tar.gz)
    Source code(zip)
  • 1.10.4(Apr 24, 2020)

  • 1.10.3(Apr 10, 2020)

    • Update podspec to support static linkage (MetalPetal/Static subpod).

      This is a workaround for https://github.com/CocoaPods/CocoaPods/issues/8403

    Source code(tar.gz)
    Source code(zip)
  • 1.10.2(Apr 7, 2020)

  • 1.10.1(Mar 22, 2020)

  • 1.10.0(Mar 17, 2020)

  • 1.9.0(Feb 22, 2020)

  • 1.8.0(Jan 21, 2020)

    • Added MTIRoundCornerFilter.
    • Added some utilities to work with VideoIO (Swift)
    • Forward MTIImageView's contentMode to its internal MTKView. #122
    • Added a method to render an image and discard the result. 459cfa770cf18fcf927426d67283d7773aed2741
    • Added some convenience methods to MTIAlphaTypeHandlingRule, MTIFunctionDescriptor and MTIImage. (Swift)
    • Improve the way image promises are resolved, making it easier to implement -[MTIImagePromise resolveWithContext:error:]. d302fab805e8f11d34fba6e854747f0b67ac47f2
    Source code(tar.gz)
    Source code(zip)