A camera designed in Swift for easily integrating CoreML models - as well as image streaming, QR/Barcode detection, and many other features

Overview



Would you like to use a fully-functional camera in an iOS application in seconds? Would you like to do CoreML image recognition in just a few more seconds on the same camera? Lumina is here to help.

Cameras are used frequently in iOS applications, and the addition of CoreML and Vision to iOS 11 has led to a wave of applications that perform live object recognition, whether from a still image or from a camera feed.

Writing AVFoundation code can be fun, but it is rarely quick. Lumina lets you skip writing AVFoundation code entirely, and gives you the tools you need to do everything you want with a camera that is already built for you.

Lumina can:

  • capture still images
  • capture videos
  • capture live photos
  • capture depth data for still images from dual camera systems
  • stream video frames to a delegate
  • scan any QR or barcode and output its metadata
  • detect the presence of a face and its location
  • use any CoreML compatible model to stream object predictions from the camera feed

Table of Contents

  • Requirements
  • Background
  • Contribute
  • Install
  • Usage
  • Maintainers
  • License

Requirements

  • Xcode 12.0+
  • iOS 13.0+
  • Swift 5.2+

Background

David Okun has experience working with image processing, and he wanted a camera module that could stream images, capture photos and videos, and accept a plugged-in CoreML model, streaming the object predictions back to you alongside the video frames.

Contribute

See the contribute file!

PRs accepted.

Small note: If editing the README, please conform to the standard-readme specification.

Install

Lumina fully supports Swift Package Manager. You can add the repo URL either directly in your Xcode project or in your Package.swift file under dependencies, as in the sketch below.
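
For reference, a minimal Package.swift might look like the following sketch. The repository URL and version shown are assumptions; verify them against the current release before using them.

// swift-tools-version:5.2
import PackageDescription

let package = Package(
    name: "MyCameraApp",
    platforms: [.iOS(.v13)],
    dependencies: [
        // Assumed repository URL and version - check the repo for the latest tag
        .package(url: "https://github.com/dokun1/Lumina.git", from: "1.6.0")
    ],
    targets: [
        .target(name: "MyCameraApp", dependencies: ["Lumina"])
    ]
)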

Usage

NB: This repository contains a sample application designed to demonstrate the entire feature set of the library. We recommend trying it out.

Initialization

The main use of Lumina is to present a ViewController. Here is what to add inside a boilerplate ViewController:

import Lumina

We recommend creating a single instance of the camera in your ViewController as early in your lifecycle as possible with:

let camera = LuminaViewController()

Presenting Lumina goes like so:

present(camera, animated: true, completion: nil)

Remember to add a description for Privacy - Camera Usage Description and Privacy - Microphone Usage Description in your Info.plist file, so that system permissions are handled properly.
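
Putting those pieces together, a minimal hosting ViewController might look like this sketch (presenting from viewDidAppear is just one option; present the camera wherever it suits your flow):

import UIKit
import Lumina

class HostViewController: UIViewController {
    // Create the camera once, as early in the lifecycle as possible
    let camera = LuminaViewController()

    override func viewDidAppear(_ animated: Bool) {
        super.viewDidAppear(animated)
        // Present the camera once the hosting view is on screen
        present(camera, animated: true, completion: nil)
    }
}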

Logging

Lumina allows you to set a level of logging for actions happening within the module. The logger in use is swift-log, made by the Swift Server Working Group team. The deeper your level of logging, the more you'll see in your console.

NB: While Lumina is licensed under the MIT license, swift-log is licensed under Apache 2.0. A copy of the license is also included in the source code.

To set a level of logging, set the static var on LuminaViewController like so:

LuminaViewController.loggingLevel = .notice

Levels read like so, from least to most logging:

  • CRITICAL
  • ERROR
  • WARNING
  • NOTICE
  • INFO
  • DEBUG
  • TRACE

Functionality

There are a number of properties that control Lumina's behavior. You can set them before presentation, or during use, like so:

camera.position = .front // could also be .back
camera.recordsVideo = true // if this is set, streamFrames and streamingModels are ignored
camera.streamFrames = true // could also be false
camera.textPrompt = "This is how to test the text prompt view" // assigning an empty string will make the view fade away
camera.trackMetadata = true // could also be false
camera.resolution = .highest // follows an enum
camera.captureLivePhotos = true // for this to work, .resolution must be set to .photo
camera.captureDepthData = true // for this to work, .resolution must be set to .photo, .medium1280x720, or .vga640x480
camera.streamDepthData = true // for this to work, .resolution must be set to .photo, .medium1280x720, or .vga640x480
camera.frameRate = 60 // can be any number, defaults to 30 if selection cannot be loaded
camera.maxZoomScale = 5.0 // not setting this defaults to the highest zoom scale for any given camera device

Object Recognition

NB: This only works for iOS 11.0 and up.

You must have one or more CoreML-compatible models to try this. Drag the model file(s) into your project and add them to your current application target.

The sample in this repository comes with the MobileNet and SqueezeNet image recognition models but, again, any CoreML-compatible model will work with this framework. Assign your model(s) to the framework using the convenience class LuminaModel like so:

camera.streamingModels = [LuminaModel(model: MobileNet().model, type: "MobileNet"), LuminaModel(model: SqueezeNet().model, type: "SqueezeNet")]

You are now set up to perform live video object recognition.
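
One caveat worth noting: on newer SDKs, Xcode's generated model classes deprecate the bare init() in favor of a throwing init(configuration:), so you may need a sketch like the following instead (this assumes MobileNet.mlmodel has been added to your target so that Xcode generates the MobileNet class):

import CoreML

if let mobileNet = try? MobileNet(configuration: MLModelConfiguration()) {
    camera.streamingModels = [LuminaModel(model: mobileNet.model, type: "MobileNet")]
}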

Handling output

To handle any output, such as still images, video frames, or scanned metadata, you will need to make your controller conform to LuminaDelegate and assign it like so:

camera.delegate = self

Because the functionality of the camera can be updated at runtime, all delegate functions are required.

To handle the Cancel button being pushed, which is likely used to dismiss the camera in most use cases, implement:

func dismissed(controller: LuminaViewController) {
    // here you can call controller.dismiss(animated: true, completion:nil)
}

To handle a still image being captured with the photo shutter button, implement:

func captured(stillImage: UIImage, livePhotoAt: URL?, depthData: Any?, from controller: LuminaViewController) {
    controller.dismiss(animated: true) {
        // still images always come back through this function, but live photos and depth data for the given still image are returned here as well
        // depth data must be manually cast to AVDepthData, as AVDepthData is only available in iOS 11.0 or higher
    }
}

To handle a video being captured with the photo shutter button being held down, implement:

func captured(videoAt: URL, from controller: LuminaViewController) {
    // here you can load the video file from the URL, which is located in NSTemporaryDirectory()
}
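
As one example of handling that URL, you could save the recording straight to the photo library; here is a minimal sketch using UIKit's UISaveVideoAtPathToSavedPhotosAlbum (this assumes the photo library usage description is also in your Info.plist):

func captured(videoAt: URL, from controller: LuminaViewController) {
    // Save the recording to the user's photo library if the format is supported
    let path = videoAt.path
    if UIVideoAtPathIsCompatibleWithSavedPhotosAlbum(path) {
        UISaveVideoAtPathToSavedPhotosAlbum(path, nil, nil, nil)
    }
}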

NB: It's important to note that if you are in video recording mode with Lumina, streaming frames is not possible. To enable frame streaming, you must set .recordsVideo to false and .streamFrames to true.

To handle a video frame being streamed from the camera, implement:

func streamed(videoFrame: UIImage, from controller: LuminaViewController) {
    // here you can take the image called videoFrame and handle it however you'd like
}
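
As an illustration, you could feed each frame into a Vision request of your own; a rough sketch follows (the rectangle detector is an arbitrary choice, and you may want to skip frames to limit the workload):

import Vision

func streamed(videoFrame: UIImage, from controller: LuminaViewController) {
    guard let cgImage = videoFrame.cgImage else {
        return
    }
    let request = VNDetectRectanglesRequest { request, _ in
        print("detected \(request.results?.count ?? 0) rectangle(s) in this frame")
    }
    // Keep Vision work off the main thread
    DispatchQueue.global(qos: .userInitiated).async {
        try? VNImageRequestHandler(cgImage: cgImage, options: [:]).perform([request])
    }
}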

To handle depth data being streamed from the camera on iOS 11.0 or higher, implement:

func streamed(depthData: Any, from controller: LuminaViewController) {
    // here you can take the depth data and handle it however you'd like
    // NB: you must cast the object to AVDepthData manually. It is returned as Any to maintain backwards compatibility with iOS 10.0
}
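
A minimal sketch of that cast, pulling the raw depth map out for further processing (the print statement is purely illustrative):

import AVFoundation

func streamed(depthData: Any, from controller: LuminaViewController) {
    guard let depth = depthData as? AVDepthData else {
        return
    }
    // depthDataMap is a CVPixelBuffer of per-pixel depth (or disparity) values
    let pixelBuffer = depth.depthDataMap
    print("depth map is \(CVPixelBufferGetWidth(pixelBuffer)) x \(CVPixelBufferGetHeight(pixelBuffer))")
}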

To handle metadata being detected and streamed from the camera, implement:

func detected(metadata: [Any], from controller: LuminaViewController) {
    // here you can take the metadata and handle it however you'd like
    // you must find the right kind of data to downcast to, whether it comes from a barcode, a QR code, or face detection
}
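
For example, QR codes and barcodes arrive as AVMetadataMachineReadableCodeObject, and detected faces arrive as AVMetadataFaceObject; a minimal sketch of the downcasts:

import AVFoundation

func detected(metadata: [Any], from controller: LuminaViewController) {
    for object in metadata {
        if let code = object as? AVMetadataMachineReadableCodeObject {
            print("scanned \(code.type.rawValue): \(code.stringValue ?? "<no string payload>")")
        } else if let face = object as? AVMetadataFaceObject {
            print("face detected at \(face.bounds)")
        }
    }
}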

To handle the user tapping the screen (outside of a button), implement:

func tapped(from controller: LuminaViewController, at: CGPoint) {
    // here you can take the position of the tap and handle it however you'd like
    // default behavior for a tap is to focus on tapped point
}
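
If you want visual feedback on top of the default focus behavior, one option is to flash a ring at the tapped point; this is a purely illustrative sketch:

func tapped(from controller: LuminaViewController, at: CGPoint) {
    // Draw a brief ring around the tapped point as focus feedback
    let ring = UIView(frame: CGRect(x: at.x - 35, y: at.y - 35, width: 70, height: 70))
    ring.backgroundColor = .clear
    ring.layer.borderColor = UIColor.yellow.cgColor
    ring.layer.borderWidth = 2
    ring.layer.cornerRadius = 35
    controller.view.addSubview(ring)
    UIView.animate(withDuration: 0.6, animations: {
        ring.alpha = 0
    }, completion: { _ in
        ring.removeFromSuperview()
    })
}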

To handle a CoreML model and its predictions being streamed with each video frame, implement:

func streamed(videoFrame: UIImage, with predictions: [LuminaRecognitionResult]?, from controller: LuminaViewController) {
  guard let predicted = predictions else {
    return
  }
  var resultString = String()
  for prediction in predicted {
    guard let values = prediction.predictions else {
      continue
    }
    guard let bestPrediction = values.first else {
      continue
    }
    resultString.append("\(String(describing: prediction.type)): \(bestPrediction.name)" + "\r\n")
  }
  controller.textPrompt = resultString
}

Note that this returns a class type representation associated with the detected results. The example above also makes use of the built-in text prompt mechanism for Lumina.

Changing the user interface

To adapt the user interface to your needs, you can set the visibility of the buttons by calling these methods on LuminaViewController:

camera.setCancelButton(visible: Bool)
camera.setShutterButton(visible: Bool)
camera.setSwitchButton(visible: Bool)
camera.setTorchButton(visible: Bool)

By default, all of the buttons are visible.

Adding your own controls outside the camera view

For some UI designs, apps may want to embed LuminaViewController within a custom view controller, adding controls adjacent to the camera view rather than putting all the controls inside the camera view.

Here is a code snippet that demonstrates adding a torch button and controlling the camera zoom level via the externally accessible API:

class MyCustomViewController: UIViewController {
    @IBOutlet weak var flashButton: UIButton!
    @IBOutlet weak var zoomButton: UIButton!
    var luminaVC: LuminaViewController? // set in prepare(for:) via the embed segue in the storyboard
    var flashState = false
    var zoomLevel: Float = 2.0
    let flashOnImage = UIImage(named: "Flash_On") // assumes an image with this name is in your asset catalog
    let flashOffImage = UIImage(named: "Flash_Off") // assumes an image with this name is in your asset catalog

    override public func viewDidLoad() {
        super.viewDidLoad()

        luminaVC?.delegate = self // requires MyCustomViewController to conform to LuminaDelegate (not shown)
        luminaVC?.trackMetadata = true
        luminaVC?.position = .back
        luminaVC?.setTorchButton(visible: false)
        luminaVC?.setCancelButton(visible: false)
        luminaVC?.setSwitchButton(visible: false)
        luminaVC?.setShutterButton(visible: false)
        luminaVC?.torchState = flashState ? .on(intensity: 1.0) : .off
        luminaVC?.currentZoomScale = zoomLevel
    }

    override public func prepare(for segue: UIStoryboardSegue, sender: Any?) {
        if segue.identifier == "Lumina" { // name this segue in the storyboard
            self.luminaVC = segue.destination as? LuminaViewController
        }
    }

    @IBAction func flashTapped(_ sender: Any) {
        flashState = !flashState
        luminaVC?.torchState = flashState ? .on(intensity: 1.0) : .off
        let image = flashState ? flashOnImage : flashOffImage
        flashButton.setImage(image, for: .normal)
    }

    @IBAction func zoomTapped(_ sender: Any) {
        if zoomLevel == 1.0 {
            zoomLevel = 2.0
            zoomButton.setTitle("2x", for: .normal)
        } else {
            zoomLevel = 1.0
            zoomButton.setTitle("1x", for: .normal)
        }
        luminaVC?.currentZoomScale = zoomLevel
    }
}

Maintainers

  • David Okun
  • Richard Littauer
  • Daniel Conde
  • Zach Falgout
  • Gerriet Backer
  • Greg Heo

License

MIT © 2019 David Okun

Comments
  • Use safe area guides for text label and shutter button

    To improve the layout, especially on iPhone X, the safe area guides are used for:

    • text label - might be partially occluded by the notch
    • shutter button - might be too close to the border
    opened by gerriet 7
  • could not install Lumina

    Dear Friend,

    I followed your instructions and added Lumina to my Podfile. After updating and installing, I opened Xcode, and after building I got a build failure with 46 errors. I am using a deployment target of 10.3 (would like to use 11.0) and Xcode 9.1.

    What am I doing wrong? :(

    bug help wanted 
    opened by zeevmindali 6
  • Upgraded to latest Swift syntax

    Nothing fancy, just fixing each compiler error and warning until it compiles without issue.

    Description

    Started with the CocoaPods version. Wasn't getting the same errors opening up your workspace from your git repo.

    Motivation and Context

    Can't compile on latest Xcode without it. I'm running Xcode 10.0 Beta 2 and targeting iOS 10.3+.

    Types of changes

    • [x] Bug fix (non-breaking change which fixes an issue)
    • [ ] New feature (non-breaking change which adds functionality)
    • [x] Breaking change (fix or feature that would cause existing functionality to change)

    Checklist:

    • [x] My code follows the code style of this project.
    • [x] My change requires a change to the documentation.
    • [x] I have updated the documentation accordingly.
    • [x] I have read the CONTRIBUTING document.
    opened by SamuelMarks 5
  • Custom icons

    Is your feature request related to a problem? Please describe. Would it be possible to change the icons?

    Describe the solution you'd like A way to provide custom icons for the capture/cancel/switch/torch icons. Either using swifticonfont? Or pure assets

    opened by aabanaag 4
  • is there a way to save 3 seconds before and after detection found?

    Is there a way to save 3 seconds before and after a detection is found? If so, what would be the way to do it? The camera would be in continuous use, but video would only be saved when a detection is found.

    opened by steve21124 4
  • Flash

    Describe the bug Tapping the torch button causes the app to crash with the error flashMode must be set to a value present in the supportedFlashModes array.

    To Reproduce Steps to reproduce the behavior:

    1. Show LuminaController (with torch not hidden)
    2. Tap torch button (either turning it on or using the auto)
    3. Tap shutter button (the app will crash)

    Expected behavior It should take a picture with the flash on.

    Smartphone (please complete the following information): iPad 2017 (12.1)

    opened by aabanaag 3
  • Depth data not delivered to captured delegate

    In the current release (1.2.1) and also in the actual version of the repository, it is not possible to capture depth images in the captured delegate.

    Expected Behavior

    If camera.captureDepthData = true is set, the depth data in the captured delegate should not be nil:

    if let data = depthData as? AVDepthData {
    	guard let depthImage = data.depthDataMap.normalizedImage(with: controller.position) else {
    	    print("could not convert depth data")
    	    return
    	}
    }
    

    Current Behavior

    The depth data is never set (the Optional<Any> is always nil).

    Steps to Reproduce (for bugs)

    Check out the ViewController.swift of my test project.

    Your Environment

    Tested on iPhone X.

    opened by cansik 3
  • Fix for issue #35

    Implementing pinch to zoom functionality in the ImageViewController.swift file. Embedded the UIImageView in a ScrollView and then embedded that in a parent UIView. Set the delegate in viewWillAppear since that's where the rest of the setup is occurring. The function viewForZooming is what gives the imageView the ability to be zoomed in on.

    opened by Zfalgout 3
  • Logging level: switched logic

    Logging should only happen if the logging level is at least as high as the one set; before, it was the other way around. E.g. when set to notice, then notice, warning, error, and critical messages should be logged, but info, debug, and trace should not be.

    Motivation and Context

    Before this, one could not set the logging level to something meaningful, because one would miss the messages with the highest priority.

    How Has This Been Tested?

    Added a helper method wouldLog to LuminaLogger with the same logic as the logging methods. Added a unit test with the levels as an example. Had to update the deployment target to 11.0 in two missing places for the tests to compile.

    Types of changes

    • [x] Bug fix (non-breaking change which fixes an issue)
    • [ ] New feature (non-breaking change which adds functionality)
    • [ ] Breaking change (fix or feature that would cause existing functionality to change)

    Checklist:

    • [x] My code follows the code style of this project.
    • [x] My change requires a change to the documentation.
    • [x] I have updated the documentation accordingly.
    • [x] I have read the CONTRIBUTING document.
    opened by gerriet 2
  • PR 94 made TorchState enum, torch var and currentZoomScale var public for external use

    Made TorchState public and pulled it up out of LuminaCamera so that that class could remain internal. In LuminaViewController, made var torchState a public wrapper property around camera?.torchState while keeping camera itself internal-only, and made var currentZoomScale public for external use.

    Description

    Motivation and Context

    for the use case of implementing apps having custom torch and zoom control UIs outside of the LuminaViewController itself.

    PR #94

    How Has This Been Tested?

    Built and run in simulator. Integrated and tested via external application

    Xcode 10.3, iOS 10.

    Types of changes

    • [ ] Bug fix (non-breaking change which fixes an issue)
    • [x] New feature (non-breaking change which adds functionality)
    • [ ] Breaking change (fix or feature that would cause existing functionality to change)

    Checklist:

    • [x] My code follows the code style of this project.
    • [ ] My change requires a change to the documentation.
    • [ ] I have updated the documentation accordingly.
    • [x] I have read the CONTRIBUTING document.
    opened by drmarkpowell 2
  • `streamed` not being called

    To Reproduce Check the console. I never get the "streamed" message, and never get any text on the screen (unless I uncomment the "testing" one).

    import UIKit
    import AVFoundation
    import Lumina
    
    class MainVC: UIViewController {
        override func viewDidAppear(_ animated: Bool) {
            super.viewDidAppear(animated)
            
            LuminaViewController.loggingLevel = .verbose
            
            let camera = LuminaViewController()
            camera.delegate = self
            
            if #available(iOS 11.0, *) {
                camera.streamFrames = true
                camera.textPrompt = ""
                camera.trackMetadata = true
                camera.streamingModels = [MobileNet(), Inceptionv3()]
            } else {
                print("Warning: this iOS version doesn't support CoreML")
            }
            
            // camera.textPrompt = "testing"
            present(camera, animated: true, completion: nil)
        }
    }
    
    extension MainVC: LuminaDelegate {
        func dismissed(controller: LuminaViewController) {
            controller.dismiss(animated: true, completion: nil)
        }
        
        func streamed(videoFrame: UIImage, with predictions: [LuminaRecognitionResult]?, from controller: LuminaViewController) {
            print("streamed")
            if #available(iOS 11.0, *) {
                guard let predicted = predictions else {
                    return
                }
                var resultString = String()
                for prediction in predicted {
                    guard let values = prediction.predictions else {
                        continue
                    }
                    guard let bestPrediction = values.first else {
                        continue
                    }
                    resultString.append("\(String(describing: prediction.type)): \(bestPrediction.name)" + "\r\n")
                }
                controller.textPrompt = resultString
            } else {
                print("Warning: this iOS version doesn't support CoreML")
            }
        }
    }
    

    Expected behavior I followed the YouTube video. It should show predictions on screen. Currently it shows no text, and doesn't log to the console that it even hit that function. dismissed works, though.

    Smartphone (please complete the following information):

    • Device: iPhone 6S
    • OS: iOS11.4
    • Version: 1.3.1 (with the edits from my PR)

    opened by SamuelMarks 2
  • Choosing aspect ratio

    I want to be able to choose the aspect ratio for photo and video, and not have the camera use the wrong one and then just crop it.

    Just a setting (formats) available for photo and for video.

    opened by gojogo88 0
  • Get Depth Information of every pixel in the image

    Thanks for this sample code; it's very useful.

    I want to develop a feature in which I get the distance of every pixel in the image. Can you please help with how we can achieve this?

    opened by mohitgorakhiya 0
  • AVCaptureDevice setActiveColorSpace

    Describe the bug My app crashes after calling self.present(camera, animated: true, completion: nil) twice.

    To Reproduce Steps to reproduce the behavior: call self.present(camera, animated: true, completion: nil) twice and the app will crash.

    Expected behavior Should not crash when re-opening the camera.

    Smartphone (please complete the following information):

    • Device: iPad Pro 11inch
    • OS: 12.3

    bug 
    opened by aabanaag 1
  • Depth image normalization seems to lose details

    I am currently trying to capture both the color image and the depth image. This is working, but the depth image looks really bad. The nearer areas are all blown out; sometimes the whole image is white.

    The depth stream, on the other hand, looks the way the depth image should look. I use the same method to convert the CVPixelBuffer into a grayscale image (the one you are using in the example).

    Am I missing something there, or why are these depth images not normalised the same way? I also tried to capture a depth image with the Halide app, and they seem much better in depth resolution than the ones I can capture/stream with Lumina. Are there other ways to capture depth images or normalize them?

    Here is the code I use to normalize the depth images:

    func captured(stillImage: UIImage, livePhotoAt: URL?, depthData: Any?, from controller: LuminaViewController) {
    	// save color image
    	CustomPhotoAlbum.shared.save(image: stillImage)
    
    	// save depth image if possible
    	print("trying to save depth image")
    	if #available(iOS 11.0, *) {
    	    if var data = depthData as? AVDepthData {
    	        
    	        // be sure its DisparityFloat32
    	        if data.depthDataType != kCVPixelFormatType_DisparityFloat32 {
    	            data = data.converting(toDepthDataType: kCVPixelFormatType_DisparityFloat32)
    	        }
    
    	        guard let depthImage = data.depthDataMap.normalizedImage(with: controller.position) else {
    	            print("could not convert depth data")
    	            return
    	        }
    
    	        ...
    
    extension CVPixelBuffer {
        func normalizedImage(with position: CameraPosition) -> UIImage? {
            let ciImage = CIImage(cvPixelBuffer: self)
            let context = CIContext(options: nil)
            if let cgImage = context.createCGImage(ciImage, from: CGRect(x: 0, y: 0, width: CVPixelBufferGetWidth(self), height: CVPixelBufferGetHeight(self))) {
                return UIImage(cgImage: cgImage , scale: 1.0, orientation: getImageOrientation(with: position))
            } else {
                return nil
            }
        }
        
        private func getImageOrientation(with position: CameraPosition) -> UIImageOrientation {
            switch UIApplication.shared.statusBarOrientation {
            case .landscapeLeft:
                return position == .back ? .down : .upMirrored
            case .landscapeRight:
                return position == .back ? .up : .downMirrored
            case .portraitUpsideDown:
                return position == .back ? .left : .rightMirrored
            case .portrait:
                return position == .back ? .right : .leftMirrored
            case .unknown:
                return position == .back ? .right : .leftMirrored
            }
        }
    }
    

    Here are the three images: the color image (img_3517), the depth image from the captured method (img_3518), and the depth image from the streamed method (img_3519).

    opened by cansik 4
  • QRCode box overlay

    Is it possible to implement an overlay to show where the qrcode is in the camera feed? just like this https://www.appcoda.com/barcode-reader-swift/

    I was wondering how I can get the videoPreviewLayer of the camera so I can use transformedMetadataObject.

    enhancement help wanted good first issue 
    opened by aabanaag 2
Releases (1.6.0)
  • 1.6.0 (Jul 18, 2021)

    This is a change to Lumina that hits the following high notes:

    • Compiles on Xcode 13.0b2
    • Minimum iOS version compatibility of iOS 13.0
    • Drops support for Cocoapods
    • Drops support for Carthage
    • Exclusive support for Swift Package Manager
    • A handy-dandy sample app icon generated by Bakery (thanks, @jordibruin !!)
  • 1.2.1 (Apr 3, 2018)

Owner
David Okun
iOS @ Charles Schwab. Really into frameworks.