A video composition framework built on top of AVFoundation. It's simple to use and easy to extend.

Overview

Chinese documentation: 中文使用文档

A high-level video composition framework built on top of AVFoundation. It's simple to use and easy to extend. Use it to make your life easier when implementing a video composition feature.

This project is built around a Timeline concept. Any resource can be placed on the Timeline. A resource can be an image, video, audio, GIF, and so on.

Features

  • Build the result content object in only a few steps:
  1. Create resource
  2. Set configuration
  3. Put them into Timeline
  4. Use Timeline to generate AVPlayerItem/AVAssetImageGenerator/AVAssetExportSession
  • Resource: supports video, audio, and image. Resource is extendable; you can create your own resource type, e.g. a GIF image resource
  • Video configuration: supports transform, opacity, and so on. The configuration is extendable.
  • Audio configuration: supports changing the volume or processing raw audio data in real time. The configuration is extendable.
  • Transition: clips can transition to the previous and next clip

Usage

Below is the simplest example. Create a resource from an AVAsset, set the video frame's scale mode to aspect fill, insert the trackItem into the timeline, and finally use CompositionGenerator to build an AVAssetExportSession/AVAssetImageGenerator/AVPlayerItem.

// 1. Create a resource
let asset: AVAsset = ...     
let resource = AVAssetTrackResource(asset: asset)

// 2. Create a TrackItem instance; a TrackItem holds the video & audio configuration
let trackItem = TrackItem(resource: resource)
// Set the video scale mode on canvas
trackItem.configuration.videoConfiguration.baseContentMode = .aspectFill

// 3. Add TrackItem to timeline
let timeline = Timeline()
timeline.videoChannel = [trackItem]
timeline.audioChannel = [trackItem]

// 4. Use CompositionGenerator to create AVAssetExportSession/AVAssetImageGenerator/AVPlayerItem
let compositionGenerator = CompositionGenerator(timeline: timeline)
// Set the video canvas's size
compositionGenerator.renderSize = CGSize(width: 1920, height: 1080)
let exportSession = compositionGenerator.buildExportSession(presetName: AVAssetExportPresetMediumQuality)
let playerItem = compositionGenerator.buildPlayerItem()
let imageGenerator = compositionGenerator.buildImageGenerator()
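
The export session returned above still needs an output URL and file type before it can run. A minimal, self-contained sketch, assuming a `Cabbage` module name and illustrative file paths (adjust both for your project):

```swift
import AVFoundation
import Cabbage // module name is an assumption; the CocoaPods pod is 'VFCabbage'

let asset = AVAsset(url: URL(fileURLWithPath: "/path/to/video.mp4")) // illustrative path
let trackItem = TrackItem(resource: AVAssetTrackResource(asset: asset))
let timeline = Timeline()
timeline.videoChannel = [trackItem]
timeline.audioChannel = [trackItem]

let generator = CompositionGenerator(timeline: timeline)
generator.renderSize = CGSize(width: 1280, height: 720)

let outputURL = URL(fileURLWithPath: NSTemporaryDirectory()).appendingPathComponent("cabbage-output.mp4")
try? FileManager.default.removeItem(at: outputURL) // the export fails if a file already exists

if let session = generator.buildExportSession(presetName: AVAssetExportPresetMediumQuality) {
    session.outputURL = outputURL
    session.outputFileType = .mp4
    session.exportAsynchronously {
        switch session.status {
        case .completed:
            print("Exported to \(outputURL)")
        case .failed, .cancelled:
            print("Export failed: \(String(describing: session.error))")
        default:
            break
        }
    }
}
```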

Basic Concept

Timeline

Used to describe the final content. The developer is responsible for placing resources at the right time ranges.
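
For example, the clips in a channel can be laid out back to back. The demo projects use a helper for this; treat the exact call as an assumption and check the source:

```swift
import AVFoundation
import Cabbage // module name is an assumption; the CocoaPods pod is 'VFCabbage'

// Two clips placed on the same channels; file paths are illustrative.
let clipA = TrackItem(resource: AVAssetTrackResource(asset: AVAsset(url: URL(fileURLWithPath: "/path/to/a.mp4"))))
let clipB = TrackItem(resource: AVAssetTrackResource(asset: AVAsset(url: URL(fileURLWithPath: "/path/to/b.mp4"))))

let timeline = Timeline()
timeline.videoChannel = [clipA, clipB]
timeline.audioChannel = [clipA, clipB]

// Recalculates every item's startTime so the clips play sequentially,
// in the order they appear in videoChannel (helper seen in the demo code).
try? Timeline.reloadVideoStartTime(providers: timeline.videoChannel)
```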

CompositionGenerator

Use CompositionGenerator to create an AVAssetExportSession/AVAssetImageGenerator/AVPlayerItem.

CompositionGenerator translates a Timeline instance into the underlying AVFoundation API.

Resource

A Resource provides image and/or audio data, along with timing information about that data.

Currently supported:

  • Image type:
    • ImageResource: provides a CIImage as the video frame
    • PHAssetImageResource: provides a PHAsset, loads a CIImage as the video frame
    • AVAssetReaderImageResource: provides an AVAsset, reads sample buffers as video frames using AVAssetReader
    • AVAssetReverseImageResource: provides an AVAsset, reads sample buffers as video frames using AVAssetReader, but in reverse order
  • Video & Audio type:
    • AVAssetTrackResource: provides an AVAsset, uses its AVAssetTracks for the video and audio frames.
    • PHAssetTrackResource: provides a PHAsset, loads an AVAsset from it.
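
For example, a still image can be turned into a three-second clip. The usage below is modeled on the issue threads later in this page; the solid-color image is a stand-in for a real photo:

```swift
import AVFoundation
import CoreImage
import Cabbage // module name is an assumption; the CocoaPods pod is 'VFCabbage'

// A solid-color CIImage stands in for a real photo.
let ciImage = CIImage(color: CIColor(red: 1, green: 0, blue: 0))
    .cropped(to: CGRect(x: 0, y: 0, width: 640, height: 640))

let imageResource = ImageResource(image: ciImage)
// A still image has no intrinsic duration, so tell the timeline
// how long the frame should stay on screen.
let threeSeconds = CMTime(value: 3, timescale: 1)
imageResource.duration = threeSeconds
imageResource.selectedTimeRange = CMTimeRange(start: .zero, duration: threeSeconds)

let stillItem = TrackItem(resource: imageResource)
```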

TrackItem

A TrackItem contains a Resource, a VideoConfiguration, and an AudioConfiguration.

Currently supported:

  • Video Configuration
    • baseContentMode: the video frame's scale mode, based on the canvas size
    • transform
    • opacity
    • configurations: custom filters can be added here.
  • Audio Configuration
    • volume
    • nodes: apply custom audio processing operations, e.g. VolumeAudioConfiguration
  • videoTransition, audioTransition
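
For example, transitions are attached per clip. The transition types below appear in the issue threads later on this page; the durations are illustrative:

```swift
import AVFoundation
import Cabbage // module name is an assumption; the CocoaPods pod is 'VFCabbage'

let trackItem = TrackItem(resource: AVAssetTrackResource(asset: AVAsset(url: URL(fileURLWithPath: "/path/to/video.mp4"))))

// Dissolve into the next clip over one second.
let transition = CrossDissolveTransition()
transition.duration = CMTime(value: 1, timescale: 1)
trackItem.videoTransition = transition

// Fade the audio in and out over the same second.
trackItem.audioTransition = FadeInOutAudioTransition(duration: CMTime(value: 1, timescale: 1))
```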

Advanced usage

Custom Resource

You can provide a custom resource type by subclassing Resource and implementing func tracks(for type: AVMediaType) -> [AVAssetTrack].

By subclassing ImageResource, you can use a CIImage as the video frame.
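
A sketch of such a subclass, using the tracks(for:) hook named above. This is illustrative only; real subclasses like AVAssetTrackResource also manage loading state, copying, and error handling, which are omitted here:

```swift
import AVFoundation
import Cabbage // module name is an assumption; the CocoaPods pod is 'VFCabbage'

// Hypothetical resource backed by a local media file.
class LocalFileResource: Resource {
    let asset: AVAsset

    init(url: URL) {
        self.asset = AVAsset(url: url)
        super.init()
        // Expose the file's full length to the timeline.
        duration = asset.duration
        selectedTimeRange = CMTimeRange(start: .zero, duration: asset.duration)
    }

    override func tracks(for type: AVMediaType) -> [AVAssetTrack] {
        return asset.tracks(withMediaType: type)
    }
}
```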

Custom Image Filter

An image filter needs to implement the VideoConfigurationProtocol protocol; then it can be added to TrackItem.configuration.videoConfiguration.configurations.

KeyframeVideoConfiguration is a concrete implementation.
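
A minimal sketch of a custom filter. The method name and info parameter below are assumptions modeled on the library's built-in configurations; check VideoConfigurationProtocol in the source for the exact requirements:

```swift
import CoreImage
import Cabbage // module name is an assumption; the CocoaPods pod is 'VFCabbage'

// Applies a sepia tone to every video frame.
class SepiaVideoConfiguration: NSObject, VideoConfigurationProtocol {
    // Assumed signature; verify against VideoConfigurationProtocol.
    func applyEffect(to sourceImage: CIImage, info: VideoConfigurationEffectInfo) -> CIImage {
        let filter = CIFilter(name: "CISepiaTone")!
        filter.setValue(sourceImage, forKey: kCIInputImageKey)
        filter.setValue(0.8, forKey: kCIInputIntensityKey)
        return filter.outputImage ?? sourceImage
    }
}

// Attach it to a track item:
// trackItem.configuration.videoConfiguration.configurations.append(SepiaVideoConfiguration())
```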

Custom Audio Mixer

An audio mixer needs to implement the AudioConfigurationProtocol protocol; then it can be added to TrackItem.configuration.audioConfiguration.nodes.

VolumeAudioConfiguration is a concrete implementation.
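
The idea, sketched with an assumed protocol shape: the node receives raw PCM buffers and mutates them in place. Mirror VolumeAudioConfiguration in the source for the real requirements:

```swift
import AVFoundation
import Cabbage // module name is an assumption; the CocoaPods pod is 'VFCabbage'

// Halves the volume by scaling every float sample in place.
class HalfVolumeNode: AudioConfigurationProtocol {
    // Assumed method shape; verify against AudioConfigurationProtocol.
    func process(timeRange: CMTimeRange, bufferListInOut: UnsafeMutablePointer<AudioBufferList>) {
        let buffers = UnsafeMutableAudioBufferListPointer(bufferListInOut)
        for buffer in buffers {
            guard let data = buffer.mData else { continue }
            let count = Int(buffer.mDataByteSize) / MemoryLayout<Float>.size
            let samples = data.bindMemory(to: Float.self, capacity: count)
            for i in 0..<count {
                samples[i] *= 0.5
            }
        }
    }
}
```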

Why I created this project

AVFoundation already provides powerful composition APIs for video and audio, but they are far from easy to use.

1. AVComposition

We need to know how and when to connect different tracks. Say we store the time range info for each track; we soon realize that this info is very fragile. Consider the scenarios below:

  • Change previous track's time range info
  • Change speed
  • Add new track
  • Add/remove transition

These operations will affect the timeline and all tracks' time range info need to be updated.

The bad news is that AVComposition only supports video tracks and audio tracks. If we want to combine photos with video, it's very difficult to implement.
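
To see why, here is what stitching just two clips looks like with raw AVMutableComposition; every edit to the timeline means recomputing all these insertion times by hand (file paths are illustrative):

```swift
import AVFoundation

let composition = AVMutableComposition()
let videoTrack = composition.addMutableTrack(withMediaType: .video,
                                             preferredTrackID: kCMPersistentTrackID_Invalid)

let assetA = AVAsset(url: URL(fileURLWithPath: "/path/to/a.mp4"))
let assetB = AVAsset(url: URL(fileURLWithPath: "/path/to/b.mp4"))

// `cursor` is the running insertion point; it must be kept in sync by hand.
var cursor = CMTime.zero
for asset in [assetA, assetB] {
    guard let source = asset.tracks(withMediaType: .video).first else { continue }
    let range = CMTimeRange(start: .zero, duration: asset.duration)
    try? videoTrack?.insertTimeRange(range, of: source, at: cursor)
    cursor = CMTimeAdd(cursor, asset.duration)
}
// Insert a clip in the middle, change a speed, or add a transition,
// and every cursor value after that point has to be recomputed.
```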

2. AVVideoComposition

AVVideoCompositionInstruction is used to construct the timeline, and AVVideoCompositionLayerInstruction to configure each track's transform. If we want to operate on raw video frame data, we need to implement the AVVideoCompositing protocol.
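
As a sketch of the raw shape of these APIs (the transform values and frame rate are illustrative):

```swift
import AVFoundation

// Assumes a composition with at least one video track,
// built as in the previous section.
let composition = AVMutableComposition()

let videoComposition = AVMutableVideoComposition()
videoComposition.renderSize = CGSize(width: 1920, height: 1080)
videoComposition.frameDuration = CMTime(value: 1, timescale: 30) // 30 fps

// One instruction spanning the whole composition.
let instruction = AVMutableVideoCompositionInstruction()
instruction.timeRange = CMTimeRange(start: .zero, duration: composition.duration)

if let track = composition.tracks(withMediaType: .video).first {
    // Per-track transform and opacity configuration.
    let layerInstruction = AVMutableVideoCompositionLayerInstruction(assetTrack: track)
    layerInstruction.setTransform(CGAffineTransform(scaleX: 0.5, y: 0.5), at: .zero)
    layerInstruction.setOpacity(1.0, at: .zero)
    instruction.layerInstructions = [layerInstruction]
}
videoComposition.instructions = [instruction]
// For per-pixel work (filters, blends) you must also implement AVVideoCompositing
// and set videoComposition.customVideoCompositorClass.
```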

After writing that code, I realized much of it was unrelated to business logic and should be encapsulated.

3. Difficult to extend features

AVFoundation only supports a few basic composition features. As far as I know, it can only change a video frame's transform and the audio volume. If a developer wants other features, e.g. applying a filter to a video frame, they need to implement AVVideoComposition's AVVideoCompositing protocol, and the workload suddenly becomes very large.

Life is hard; why should writing code be hard too? So I created Cabbage: an easy-to-understand API with flexible, extensible features.

Installation

CocoaPods

platform :ios, '9.0'
use_frameworks!

target 'MyApp' do
  # your other pod
  # ...
  pod 'VFCabbage'
end

Manually

It is not recommended to install the framework manually, but if you have to, you can:

  • simply drag the Cabbage/Sources folder into your project,
  • or add Cabbage as a submodule.
$ git submodule add https://github.com/VideoFlint/Cabbage.git

Requirements

  • iOS 9.0+
  • Swift 4.x

Projects using Cabbage

  • VideoCat: A demo project that demonstrates how to use Cabbage.

LICENSE

Under the MIT license

Special Thanks

Comments
  • Custom Filter

    Hi Vito, I'm wondering what the process looks like for processing a filter now that filterProcessor was removed in version 0.2. The VideoCat demo used this when applying Lookup table filters but I'm wondering how I should go about doing this now it's gone. Do you have an example I could try? Thank you for your work and time!

    opened by cosmicsalad 10
  • Merge Videos with different aspectRatios

    I have a list of videos AVAssets. I want to merge them into 1 video with their corresponding audios.

    But the videos sometimes are portrait, sometimes square, sometimes landscape (they may have infinite different width and heights). I want the videos to merge and stay aspectFit to the size frame of the first video.

    Is this possible with Cabbage? I'm having a hard time understanding your "timeline" concept.

    opened by omarojo 6
  • Cannot play audio in time range after adding it

    Hi vitoziv, I cannot play audio in a specific time range after adding it. This is my source code: I cannot play audio1.mp3 after setting startTime and duration; if I don't set them, it plays well. Thank you.

        let video1TrackItem: TrackItem = {
            let url = Bundle.main.url(forResource: "video1", withExtension: "mp4")!
            let resource = AVAssetTrackResource(asset: AVAsset(url: url))
            let trackItem = TrackItem(resource: resource)
            trackItem.videoConfiguration.contentMode = .aspectFit
            return trackItem
        }()
    
       let mp3TrackItem: TrackItem = {
            let url = Bundle.main.url(forResource: "audio1", withExtension: "mp3")!
            let resource = AVAssetTrackResource(asset: AVAsset(url: url))
            let trackItem = TrackItem(resource: resource)
            trackItem.startTime = CMTime(value: 1, timescale: 1)
            trackItem.duration = CMTime(value: 10, timescale: 1)
            return trackItem
        }()
    
       let timeline = Timeline()
        timeline.videoChannel = [video1TrackItem]
        timeline.audioChannel = [video1TrackItem]
        timeline.audios = [mp3TrackItem]
    
    opened by mvn-thanhluu-hn 5
  • Discussion: implementing subtitles and animated stickers on video

    Hi, first of all many thanks to vitoziv for the generous sharing! I have gone through Cabbage's Chinese documentation and source code, but some parts of its design and usage are still unclear to me, so I'd like to discuss them. My project needs to add subtitles and animated stickers to videos, and I'm not sure whether this should be implemented through the timeline's overlays or with CALayer. If I use CALayer, preview and rendering would need two separate sets of business logic, which feels cumbersome. I'd like to ask vitoziv: how does Cabbage intend to support this kind of requirement, and what would you suggest?

    opened by rayn0r126 5
  • AVPlayerItem created from images goes black in the second half of the video

    After debugging, I found that in VideoCompositionInstruction's method open func apply(request: AVAsynchronousVideoCompositionRequest) -> CIImage?, the request's sourceTrackIDs is empty after 16 seconds, and sourceFrame(byTrackID trackID: CMPersistentTrackID) also returns nil.

    Any help finding the cause would be appreciated.

    Below is the code I use to create an AVPlayerItem from images:

    func makePlayerItemFromImages(_ images: [UIImage]) {
            let ciImages = images.compactMap { $0.cgImage }.map { CIImage.init(cgImage: $0) }
            
            let resources = ciImages.map { ImageResource(image: $0) }
    
            for item in resources {
                let timeRange = CMTimeRange(start: kCMTimeZero, duration: CMTimeMake(100, 100))
                item.duration = timeRange.duration
                item.selectedTimeRange = timeRange
            }
            
            let items = resources.map { TrackItem(resource: $0) }
            
            var timeLineDuration = kCMTimeZero
            items.forEach {
                $0.configuration.videoConfiguration.baseContentMode = .aspectFit
                
                let timeRange = CMTimeRange(start: timeLineDuration, duration: $0.resource.duration)
                $0.configuration.timelineTimeRange = timeRange
                timeLineDuration = CMTimeAdd(timeLineDuration, timeRange.duration)
    
            }
            
            let timeline = Timeline()
            timeline.videoChannel = items
            
            let compositionGenerator = CompositionGenerator(timeline: timeline)
            compositionGenerator.renderSize = CGSize(width: 480, height: 480)
            let playerItem = compositionGenerator.buildPlayerItem()
            
            let controller = AVPlayerViewController.init()
            controller.player = AVPlayer.init(playerItem: playerItem)
            controller.view.backgroundColor = UIColor.white
            present(controller, animated: true, completion: nil)
        }
    
    opened by NeverAgain11 5
  • Applying user transform before content mode in TrackConfiguration

    Context: TrackConfiguration's transform property lets developers apply custom transformations to a TrackItem, while TrackConfiguration's contentMode is responsible for how the TrackItem's content is displayed on screen (aspectFit/aspectFill/custom).

    Issue: The CGAffineTransform stored in TrackConfiguration's transform property gets applied only after the contentMode transformations, so if a user transformation (e.g. rotation by 90 degrees) is set then .aspectFit and .aspectFill content modes won't work as expected:

    https://github.com/VideoFlint/Cabbage/blob/e65b80fc04c026d14ca0a3a3ed0567d597233f86/Cabbage/Sources/Track/Configuration/TrackConfiguration.swift#L61-L84

    Proposed solution: Move the user transform implementation just before the contentMode transformations.

    @vitoziv what do you think? I would expect the content modes to work even if the TrackItem is e.g. rotated via the transform property, but I also don't know how you specify the purpose of transform property.

    I'll create a pull request for this, please let me know what's your take on this proposal.

    opened by gazadge 4
  • How can I change the black background when creating a track item with an image resource

    I made a simple demo to add a text overlay on video as you suggested:

    (About text overlay, I suggest you add text's image to Timeline.passingThroughVideoCompositionProvider)

    It works, but I hit an issue: the text overlay always has a black background.

    I understand you use a black video to render the image as a video frame, but in this case how can I remove this black background?

    P/s: How can I create a custom video resource to build a track item from a different resource type, such as a GIF file?

    opened by mvn-tony-hn 4
  • Images cannot be played in VIPlayer

    Following the VideoCat demo, I picked an image from the photo library:

    let resource = PHAssetImageResource.init(asset: asset, duration: CMTime(value: 3000, timescale: 600))
                guard let task = resource.prepare(progressHandler: progressHandler, completion: { (status, error) in
                    if status == .avaliable {
                        resource.selectedTimeRange = CMTimeRange.init(start: CMTime.zero, end: resource.duration)
                        let trackItem: TrackItem = TrackItem(resource: resource)
                        let transition = CrossDissolveTransition()
                        transition.duration = CMTime(value: 900, timescale: 600)
                        trackItem.videoTransition = transition
                        let audioTransition = FadeInOutAudioTransition(duration: CMTime(value: 66150, timescale: 44100))
                        trackItem.audioTransition = audioTransition
                        if resource.isKind(of: ImageResource.self) {
                            trackItem.videoConfiguration.contentMode = .custom
                        } else {
                            trackItem.videoConfiguration.contentMode = .aspectFill
                        }
                        complete(trackItem)
                    } else {
                        Log.error("image track status is \(status), check prepare function, error: \(error?.localizedDescription ?? "")")
                        complete(nil)
                    }
                })
    

    I then checked the image selection: inside resource.prepare the image exists and the status is available.

    public func reloadPlayerItem(_ items: [TrackItem]) -> AVPlayerItem {
            let timeLine = TimeLineManager.current.timeline
            let width = UIScreen.main.bounds.width * UIScreen.main.scale
            let height = width
            timeLine.videoChannel = items
            timeLine.audioChannel = items
            do {
                try Timeline.reloadVideoStartTime(providers: timeLine.videoChannel)
            } catch {
                assert(false, error.localizedDescription)
            }
            timeLine.renderSize = CGSize.init(width: width, height: height)
            let compositonGenerator = CompositionGenerator.init(timeline: timeLine)
            return compositonGenerator.buildPlayerItem()
        }
    

    This is the method that builds the player item. Videos and Live Photos both play fine; only still images fail to play, and the duration shown in the player is wrong as well.

    master branch, Xcode 10.2.2, Swift 5.0.

    opened by AbySwifter 4
  • About the overlays track in Timeline

    I couldn't find a demo for the overlays track. I studied the source code and would like to check my understanding. Suppose I want to use the overlays track to place a 50*50 overlay at (x: 50, y: 50) on the current track. Following ImageOverlayItem's implementation, this can be done by passing the appropriate transform through trackItem.configuration.videoConfiguration.transform. It works, but it is not very convenient.

    My understanding is that for overlays: [VideoProvider], the overlays should not merely conform to the VideoProvider protocol; a protocol that additionally encapsulates a frame would be more appropriate.

    opened by xuzhenhao 4
  • About TrackItem Copy

    When I fill the audio track in the following way, I run into two problems: 1. Without deep-copying the resource instance, every audio resource's selectedTimeRange always matches the setting from the last loop iteration. 2. With a deep copy, the audio resource's scaledDuration is always the full audio length. The code is below:

    private func caculateMusicTrack(resource: AVAssetTrackResource, duration: CMTime) -> [TrackItem] {
            Log.out(">> Total VIDEO DURATION: \(duration.seconds)")
            Log.out(">> Music File Duration: \(resource.duration.seconds)")
            let numOfLoops = (duration.seconds - currentMusicStartOffset) / resource.duration.seconds
            let numOfLoopsRoundedUp = numOfLoops.rounded(.up)
            var sumPartsTotals = CMTime.zero
            var endS = CMTime.zero
            var result: [TrackItem] = []
            //Audio Trim
            for i in 0..<Int(numOfLoopsRoundedUp) {
                let mResource = resource.copy() as! AVAssetTrackResource
                Log.out(mResource)
                guard let musicAsset = mResource.asset else {
                    continue
                }
                //Audio Trim
                let start = CMTimeMake(value: Int64(0.0 * 600), timescale: 600)
                if i == Int(numOfLoopsRoundedUp) - 1 { //is the last chunk of audio
                    let lastChunkTimeFrac = numOfLoops.truncatingRemainder(dividingBy: 1) // ex 1.5 will give 0.5
                    let lastChunkTimeSecs = musicAsset.duration.seconds * lastChunkTimeFrac //music from 0 to this value
                    endS = CMTimeMake(value: Int64((lastChunkTimeSecs-0.05) * 600), timescale: 600)
                } else {
                    endS = CMTimeMake(value: Int64(musicAsset.duration.seconds * 600), timescale: 600)
                }
                let timeOffset = CMTime.init(seconds: currentMusicStartOffset, preferredTimescale: 600)
                if i == 0 {
                    let startTime = currentMusicStartOffset < 0 ? start - timeOffset : start
                    mResource.selectedTimeRange = CMTimeRange.init(start: startTime, end: endS)
                } else {
                    mResource.selectedTimeRange = CMTimeRange(start:start , end: endS)
                }
                mResource.selectedTimeRange = CMTimeRange(start:start , end: endS)
                Log.out("selectedStart:\(mResource.selectedTimeRange.start.seconds) - totalPart:\(mResource.selectedTimeRange.end.seconds)")
                let partMyTrackItem = TrackItem(resource: mResource)
                let zeroOffsetTime = CMTimeMultiply(musicAsset.duration, multiplier: Int32(i))
                if i == 0 {
                     partMyTrackItem.startTime = zeroOffsetTime + (currentMusicStartOffset < 0 ? CMTime.zero : timeOffset)
                } else {
                    partMyTrackItem.startTime = zeroOffsetTime + timeOffset
                }
                partMyTrackItem.startTime = zeroOffsetTime
                Log.out("start:\(partMyTrackItem.startTime.seconds) - totalPart:\(mResource.scaledDuration.seconds)")
                sumPartsTotals = CMTimeAdd(sumPartsTotals, mResource.scaledDuration)
                result.append(partMyTrackItem)
            }
            return result
        }
    
    opened by AbySwifter 3
  • How to set Music for entire Timeline or individual Video tracks

    Hi im back :)

    So previously I successfully implemented your suggestions to merge videos with their corresponding audio tracks. Now I'm wondering about two things.

    1- Is it possible to define separate audio for each video, with a specific range of that audio? Example:

    let tLine = Timeline()
    var vChannel = [TrackItem]()
    var aChannel = [TrackItem]()
    //VIDEO Tracks
    ... trackVideoItem1, trackVideoItem2, trackVideoItem3...
    //AUDIO Tracks
            let musicUrl = Bundle.main.url(forResource: "HumansWater", withExtension: "MP3")!
            let musicAsset = AVAsset(url: musicUrl)
            let resourceA = AVAssetTrackResource(asset: musicAsset)
            let trackAudioItem1 = TrackItem(resource: resourceA)
    ... same for trackAudioItem2, trackAudioItem3...
    But how do I specify the start-end and duration of those tracks.  ??
    
    tLine.videoChannel = [trackVideoItem1,trackVideoItem2,trackVideoItem3]
    tLine.audioChannel = [trackAudioItem1, trackAudioItem2, trackAudioItem3]
    try! Timeline.reloadVideoStartTime(providers: tLine.videoChannel)
    
    

    Currently the above creates an unreadable video.

    2- The other question is: is it possible to define one music track for the entire video composition? Example:

    let tLine = Timeline()
    var vChannel = [TrackItem]()
    var aChannel = [TrackItem]()
    //VIDEO Tracks
    ... trackVideoItem1, trackVideoItem2, trackVideoItem3...
    //AUDIO Track for everything
            let musicUrl = Bundle.main.url(forResource: "HumansWater", withExtension: "MP3")!
            let musicAsset = AVAsset(url: musicUrl)
            let resourceA = AVAssetTrackResource(asset: musicAsset)
            let trackMusicItem = TrackItem(resource: resourceA)
    But how do I specify the start-end of the audio (trimming the audio)
    
    tLine.videoChannel = [trackVideoItem1,trackVideoItem2,trackVideoItem3]
    tLine.audioChannel = [trackMusicItem]
    try! Timeline.reloadVideoStartTime(providers: tLine.videoChannel)
    
    

    Is it possible to set one audio track for everything? And what happens if the audio track is shorter than the entire video composition, or the composition is shorter than the audio track? Would the audio track repeat?

    Thanks in advance :) :)

    opened by omarojo 3
  • Exported video appears to be darker (colour shifting)

    I noticed that when exporting video using AVAssetExportSession, the result appears darker. Attached below are screenshots of the original and exported videos.

    Is this expected? Any suggestions/workarounds to preserve as much of the original's visual quality as possible in the export?

    Original: IMG_0876

    Exported: IMG_0877

    The code:

            let timeline = Timeline()
            
            var videoChannels = [TrackItem]()
            var audioChannels = [TrackItem]()
            
            var currentTime = CMTime.zero
            for asset in self.assets {
                let resource = AVAssetTrackResource(asset: asset)
    
                let trackItem = TrackItem(resource: resource)
                trackItem.videoConfiguration.contentMode = .aspectFit
                trackItem.startTime = currentTime
                currentTime = CMTimeAdd(currentTime, asset.duration)
                
                videoChannels.append(trackItem)
                audioChannels.append(trackItem)
            }
            
            timeline.videoChannel = videoChannels
            timeline.audioChannel = audioChannels
            timeline.renderSize = CGSize(width: 1080, height: 1920)
    
            let compositionGenerator = CompositionGenerator(timeline: timeline)
            let exportSession = compositionGenerator.buildExportSession(presetName: AVAssetExportPresetHighestQuality)
            exportSession?.outputFileType = .mov
    
            let outputURL = URL(fileURLWithPath: NSTemporaryDirectory().appending("test.mov"))
            exportSession?.outputURL = outputURL
            exportSession?.exportAsynchronously {
                if let error = exportSession?.error {
                    print("Failed to export: \(error)")
                } else {
                    print("Movie file generated: \(outputURL)")
                }
            }
    
    opened by m-at-drigmo 3
  • Does this framework support multiple-layer blend modes?

    The Cabbage framework looks very nice. I have a question: does Cabbage support blend modes across multiple layers? Some basic blend modes like overlay, multiply, screen.

    opened by phongle6893 1
  • black screen when the audio overlay goes to finish

                let voiceResource = AVAssetTrackResource(asset: asset)
                
                if videoTimelineView.duration <= ((voiceOvers[currentVoiceOver]?.startTime ?? 0.0) + asset.duration.seconds) {
                voiceResource.selectedTimeRange = CMTimeRange(start: .zero, duration: CMTime(seconds: asset.duration.seconds, preferredTimescale: 10))
                }
                item.identifier = voiceOvers[currentVoiceOver]?.name ?? ""
                item.startTime = CMTime(seconds: voiceOvers[currentVoiceOver]?.startTime ?? 0.0, preferredTimescale: 10)
    
    
    opened by ahmedsafadii 2
  • Multiple timelines

    Hello,

    just wanted to ask a question: is it possible to create multiple timelines and produce a single composition + videoComposition + audioMix? I'm trying to lay two videos on top of each other (e.g. like FaceTime calls). However, every timeline consists of multiple subsequent video clips.

    So it would look like this with two timelines and every letter representing a video file (mp4): aaaaaaabbbccccc dddeeefffffffffffff

    Thank you

    opened by progstre 1
  • Sound gone when scaled video

    When I try to change the video speed (speed it up or slow it down), the sound is gone but the video still plays.

    let resource = AVAssetTrackResource(asset: asset)
                        resource.scaledDuration = CMTime(seconds: 5.0, preferredTimescale: 600)
                        resource.duration = CMTime(seconds: 5.0, preferredTimescale: 600)
                        let trackItem = TrackItem(resource: resource)
                        trackItem.videoConfiguration.contentMode = .aspectFill
    
    opened by ahmedsafadii 1