Easily craft fast Neural Networks on iOS! Use TensorFlow models. Metal under the hood.

Overview

Bender

Build status Platform iOS Swift 4 compatible CocoaPods compatible Carthage compatible License: MIT

Bender is an abstraction layer over MetalPerformanceShaders useful for working with neural networks.

Contents

The documentation can be found under the Documentation folder:

  • API contains the most important information to get started.
  • Supported Layers explains which layers are supported and how they map to TensorFlow ops.
  • Importing explains how to import models from other frameworks such as TensorFlow. You can also find information on how to enhance this functionality for custom implementations.

Introduction

Bender is an abstraction layer over MetalPerformanceShaders used to work with neural networks. There is growing interest in running neural networks on mobile devices, even when training has been done elsewhere. We want to make it easy for everyone to run pretrained networks on iOS.

Bender allows you to easily define and run neural networks using the most common layers like Convolution, Pooling, FullyConnected and some normalizations among others. It is also flexible in the way it receives the parameters for these layers.

We also want to support loading models trained on other frameworks such as TensorFlow or Caffe2. Currently Bender includes an adapter for TensorFlow that loads a graph with variables and "translates" it to Bender layers. This feature supports a subset of TensorFlow's operations but we plan to enhance it to cover more cases.
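The op-to-layer translation the adapter performs can be pictured with a small, self-contained sketch. This is illustrative only: the mapping below is hypothetical (see Supported Layers for the real list), and the helper simply reports which ops in a graph would be left untranslated.

```swift
// Hypothetical subset of TF ops the adapter might translate into Bender layers.
let supportedOps: [String: String] = [
    "Conv2D": "Convolution",
    "MaxPool": "Pooling (max)",
    "AvgPool": "Pooling (average)",
    "Relu": "Neuron (ReLU)",
    "MatMul": "FullyConnected",
    "Softmax": "Softmax",
]

// Returns the ops in a graph that have no Bender translation.
func unsupportedOps(in graphOps: [String]) -> [String] {
    graphOps.filter { supportedOps[$0] == nil }
}
```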

Why did we need Bender?

At Xmartlabs we were about to start a Machine Learning project and investigated frameworks to use on iOS. We found MetalPerformanceShaders useful but not very user friendly, and we found ourselves repeating a lot of code and information. That is why we started building a framework to handle that repetition.

We also found ourselves creating scripts to translate the models we had trained with TensorFlow to iOS. This means transposing the weights to the MPSCNN format and also mapping the parameters of the different kinds of layers in TensorFlow to the parameters used by the MPSCNN kernels. TensorFlow can be compiled for iOS, but currently it does not support running on the GPU, which we wanted to do. We also did not want to include TensorFlow's static library in our project. This is why we also started to work on an adapter that would parse a TF graph and translate it to our Bender layers.
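The weight transposition mentioned above can be sketched as follows. This is a minimal illustration, not Bender's actual converter: TensorFlow stores convolution weights as [height][width][inputChannels][outputChannels] (HWIO), while MPSCNNConvolution expects [outputChannels][height][width][inputChannels].

```swift
// Reorders a flat HWIO weight buffer into the OHWI layout MPSCNN expects.
func hwioToOhwi(_ weights: [Float], h: Int, w: Int, cin: Int, cout: Int) -> [Float] {
    var out = [Float](repeating: 0, count: weights.count)
    for y in 0..<h {
        for x in 0..<w {
            for i in 0..<cin {
                for o in 0..<cout {
                    let src = ((y * w + x) * cin + i) * cout + o  // HWIO index
                    let dst = ((o * h + y) * w + x) * cin + i     // OHWI index
                    out[dst] = weights[src]
                }
            }
        }
    }
    return out
}
```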

Usage

You can define your own network in Bender using our custom operator, or you can load a model exported from TensorFlow. Loading a TensorFlow model can be done like this:

import MetalBender

let url = Bundle.main.url(forResource: "myModel", withExtension: "pb")! // A TensorFlow model.
let network = Network.load(url: url, inputSize: LayerSize(h: 256, w: 256, f: 3))

network.run(input: /* ... */) { output in
    // ...
}

You can read more information about this in Importing.

If you want to define your network yourself you can do it like this:

let network = Network(inputSize: LayerSize(h: 256, w: 256, f: 3))

network.start
    ->> Convolution(convSize: ConvSize(outputChannels: 16, kernelSize: 3, stride: 2))
    ->> InstanceNorm()
    ->> Convolution(convSize: ConvSize(outputChannels: 32, kernelSize: 3, stride: 2), neuronType: .relu)
    ->> InstanceNorm()
    ->> FullyConnected(neurons: 128)
    ->> Neuron(type: .tanh)
    ->> FullyConnected(neurons: 10)
    ->> Softmax()
// ...

and once you're done with all your layers:

network.initialize()

To know more about this have a look at API.
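As an aside, the ->> chaining style above can be illustrated with a toy, self-contained sketch. This is not Bender's implementation; it only shows the pattern: each application of the operator wires the previous layer to the next and returns the new tail, which is what lets the calls read top to bottom.

```swift
precedencegroup ChainPrecedence { associativity: left }
infix operator ->>: ChainPrecedence

// A stand-in for a network layer, holding only a name and a link to the next layer.
final class ToyLayer {
    let name: String
    var next: ToyLayer?
    init(_ name: String) { self.name = name }
}

@discardableResult
func ->> (lhs: ToyLayer, rhs: ToyLayer) -> ToyLayer {
    lhs.next = rhs   // wire the graph
    return rhs       // return the tail so chaining continues
}

let start = ToyLayer("start")
start ->> ToyLayer("conv") ->> ToyLayer("softmax")
```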

Requirements

  • Xcode 9
  • iOS 11.0+ SDK (the deployment target is iOS 10.0, so iOS 10 devices are supported)

Getting involved

  • If you want to contribute please feel free to submit pull requests.
  • If you have a feature request please open an issue.
  • If you found a bug or need help, please check older issues, the FAQ, and threads on StackOverflow before submitting an issue.

Before contributing, please check the CONTRIBUTING file for more info.

If you use Bender in your app, we would love to hear about it! Drop us a line on Twitter.

Examples

Follow these steps to run the examples:

  • Clone Bender repository (or download it).
  • Run carthage update --platform iOS in the downloaded folder.
  • Open Bender workspace and run the Example project.

There is an Image recognition example which includes a MobileNet model in Bender and one in CoreML. It is also set up to run an Inception model but you will have to download it separately as it is almost 100 MB in size. You can download it from http://download.tensorflow.org/models/inception_v3_2016_08_28.tar.gz but then you have to freeze it and add it to the 'Example' Xcode project as 'inception_v3.pb'.
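Downloading the archive and freezing the checkpoint with TensorFlow's stock freeze_graph tool might look like the sketch below. Treat it as a sketch rather than a recipe: the input graph-def file and the output node name are assumptions that depend on how the graph was exported, so adjust them to your setup.

```shell
# Fetch and unpack the Inception v3 checkpoint (~100 MB).
curl -LO http://download.tensorflow.org/models/inception_v3_2016_08_28.tar.gz
tar -xzf inception_v3_2016_08_28.tar.gz

# Freeze the variables into constants. The --input_graph file and the
# --output_node_names value are assumptions; they depend on how the
# GraphDef was exported from your training setup.
python -m tensorflow.python.tools.freeze_graph \
    --input_graph=inception_v3_graph_def.pb \
    --input_checkpoint=inception_v3.ckpt \
    --output_graph=inception_v3.pb \
    --output_node_names=InceptionV3/Predictions/Reshape_1
```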

Installation

CocoaPods

To install Bender, simply add the following line to your Podfile:

pod 'MetalBender', '~> 0.5'

Remember that Bender compiles for iOS 10, so you must add `platform :ios, '10.0'` to your Podfile.
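A minimal Podfile might look like the following (the target name is a placeholder):

```ruby
# Minimal Podfile sketch; 'YourApp' is a placeholder target name.
platform :ios, '10.0'
use_frameworks!

target 'YourApp' do
  pod 'MetalBender', '~> 0.5'
end
```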

Carthage

Carthage is a simple, decentralized dependency manager for Cocoa.

To install Bender, add the following line to your Cartfile:

github "xmartlabs/Bender" ~> 0.5

Then run:

carthage update --platform iOS

Finally, drag the built .framework binaries for MetalBender, MetalPerformanceShadersProxy and SwiftProtobuf to your application's Xcode project.

Author

Change Log

This can be found in the CHANGELOG.md file.

License

FOSSA Status

Comments
  • Where were the style transfer example models taken from?

    Hi, if possible, do you guys know from what example the q_and_w.pb and g_and_w2.pb files were taken from. Im trying to get similar style transfer models to work with bender but the list of unsupported operations varies widely.

    If you trained them yourself it would be helpful to know more information about those models.

    PS:i would have posted this on stack-overflow but i don't have the rep to create the bender tag. Thanks in advance!

    opened by AndreJFBico 26
  • Iphone 6 plus style transfer example not working on iOS 11

    (SOLVED:) updating to latest xcode beta version solved the issue.

    (UPDATE:) I just realised i was testing on iOS 11, the example runs ok with iOS 10 with the latest pull request branch. https://github.com/xmartlabs/Bender/pull/39

    However Im still posting this as an issue for iOS 11.

    Xcode version: 8.3.2

    This is the console output: ALL TEST PASSED "Unknown input: ^moments/sufficient_statistics/mean_ss" "Unknown input: ^moments/sufficient_statistics/var_ss" "Unknown input: ^moments_1/sufficient_statistics/mean_ss" "Unknown input: ^moments_1/sufficient_statistics/var_ss" "Unknown input: ^moments_2/sufficient_statistics/mean_ss" "Unknown input: ^moments_2/sufficient_statistics/var_ss" "Unknown input: ^moments_3/sufficient_statistics/mean_ss" "Unknown input: ^moments_3/sufficient_statistics/var_ss" "Unknown input: ^moments_4/sufficient_statistics/mean_ss" "Unknown input: ^moments_4/sufficient_statistics/var_ss" "Unknown input: ^moments_5/sufficient_statistics/mean_ss" "Unknown input: ^moments_5/sufficient_statistics/var_ss" "Unknown input: ^moments_6/sufficient_statistics/mean_ss" "Unknown input: ^moments_6/sufficient_statistics/var_ss" "Unknown input: ^moments_7/sufficient_statistics/mean_ss" "Unknown input: ^moments_7/sufficient_statistics/var_ss" "Unknown input: ^moments_8/sufficient_statistics/mean_ss" "Unknown input: ^moments_8/sufficient_statistics/var_ss" "Unknown input: ^moments_9/sufficient_statistics/mean_ss" "Unknown input: ^moments_9/sufficient_statistics/var_ss" "Unknown input: ^moments_10/sufficient_statistics/mean_ss" "Unknown input: ^moments_10/sufficient_statistics/var_ss" "Unknown input: ^moments_11/sufficient_statistics/mean_ss" "Unknown input: ^moments_11/sufficient_statistics/var_ss" "Unknown input: ^moments_12/sufficient_statistics/mean_ss" "Unknown input: ^moments_12/sufficient_statistics/var_ss" "Unknown input: ^moments_13/sufficient_statistics/mean_ss" "Unknown input: ^moments_13/sufficient_statistics/var_ss" "Set up takes:: 1.77025699615479 (0.564889731927128 per second)"

    Screenshot of the example after clicking run of course: http://imgur.com/a/WIUHE

    opened by AndreJFBico 14
  • Example.ConcatTest: Compiler failed to build request fatal error: Unable to create pipeline state, check metal shaders: file

    Environment: Xcode: Version 8.3.3 (8E3004b) iOS: 10.3 Device: iPhone 6 Plus

    Stacktrace:

    2017-07-10 18:54:19.618344-0400 Example[1174:306198] [DYMTLInitPlatform] platform initialization successful 2017-07-10 18:54:19.753218-0400 Example[1174:305990] Metal GPU Frame Capture Enabled 2017-07-10 18:54:19.754074-0400 Example[1174:305990] Metal API Validation Enabled 2017-07-10 18:54:19.808660-0400 Example[1174:305990] libMobileGestalt MobileGestaltSupport.m:153: pid 1174 (Example) does not have sandbox access for frZQaeyWLUvLjeuEK43hmg and IS NOT appropriately entitled 2017-07-10 18:54:19.808771-0400 Example[1174:305990] libMobileGestalt MobileGestalt.c:550: no access to InverseDeviceID (see rdar://problem/11744455) Running test: Example.TextureConversionTest Running test: Example.LocalResponseNormTest Running test: Example.InstanceNormTest Running test: Example.ConcatTest 2017-07-10 18:54:20.203515-0400 Example[1174:306178] Compiler failed to build request fatal error: Unable to create pipeline state, check metal shaders: file /LATEST_RESEARCH/LATEST/Bender/Sources/Core/MetalShaderManager.swift, line 61 2017-07-10 18:54:20.205110-0400 Example[1174:305990] fatal error: Unable to create pipeline state, check metal shaders: file /LATEST_RESEARCH/LATEST/Bender/Sources/Core/MetalShaderManager.swift, line 61

    opened by samkohli 10
  • clarification tensorflow + magenta .mag file / cyclic redundancy problem

    Awesome job on this project.

    I'm looking to extend functionality over to support magenta. Yes, granted current library isn't rigged up to do this. https://github.com/tensorflow/magenta

    I added a .mag file from google's tensorflow / magenta project.

    They are using a metagraph file - this contains a few extra protobuf files needed from tensorflow. Tensorflow_MetaGraphDef /MetaInfoDef and others etc

    (Side Note - feel free to cherry pick this stash - my branch includes all files https://github.com/johndpope/swift-grpc-tensorflow/tree/0.0.2 If you ever find yourself wanting to programmatically convert proto files - you may also want to checkout https://github.com/nubbel/swift-tensorflow/blob/master/RUNME.sh )

    so I rigged up a new wrapper to extract the graph def from a metagraphdef https://github.com/johndpope/Bender/

    if let path = Bundle.main.path(forResource: "attention_rnn", ofType: "mag"){

            let model = try? Data(contentsOf:URL(fileURLWithPath: path))
            
            // pull apart inception model.
            let myGraphProto = try? Tensorflow_Magenta_GeneratorBundle.init(serializedData:model!)
            
            let metaGraphData = myGraphProto!.metagraphFile
            
            let graph1 = try? Tensorflow_MetaGraphDef.init(serializedData:metaGraphData)
            print("graph1:",graph1)
            https://gist.github.com/johndpope/7be8b9b365050ad9615938ae15c36ac4
           
            // this errors.
              var graph =  TFGraph(graphDef: graph1.graphDef)
        }
    

    could there be a specific nip and tuck that could be done to get this to work? https://gist.github.com/johndpope/7be8b9b365050ad9615938ae15c36ac4

    related https://github.com/tensorflow/magenta/issues/710

    opened by johndpope 7
  • Assertion error with Mobilenet on iOS 10 device

    I am using latest master branch code and testing the new added Mobilenet sample code in our own project.

    • Environment: Xcode 9.2, iOS 11.2 SDK, iPhone 7 running iOS 10.3.3.

    • Assertion error: Number of network inputs xx and input sizes xx are not equal. Line 109, Network.swift.

    We also tested in iOS 11 without meeting the assertion error, since this framework is supposed to be working on iOS 10+, I treat this issue as a defect.

    opened by haoxi911 6
  • Can't Build on xcode 8 [Macpods]

Hey, I can't build the project in Xcode 8. I can build in Xcode 9 beta, BUT when I try to deploy to an actual device I get the same error. This is the error message:

    Undefined symbols for architecture arm64: "OBJC_CLASS$_MPSCNNSpatialNormalization", referenced from: objc-class-ref in SpatialNorm.o "OBJC_CLASS$_MPSCNNSoftMax", referenced from: objc-class-ref in Softmax.o "OBJC_CLASS$_MPSImageLanczosScale", referenced from: objc-class-ref in Scale.o objc-class-ref in Start.o "OBJC_CLASS$_MPSCNNPoolingMax", referenced from: objc-class-ref in Pooling.o "OBJC_CLASS$_MPSCNNPoolingAverage", referenced from: objc-class-ref in Pooling.o "OBJC_CLASS$_MPSTemporaryImage", referenced from: objc-class-ref in ConvTranspose.o objc-class-ref in Start.o "OBJC_CLASS$_MPSImageDescriptor", referenced from: objc-class-ref in Add.o objc-class-ref in BGRAtoRGBA.o objc-class-ref in Concat-2F047EC0A4EC20B2.o objc-class-ref in Convolution.o objc-class-ref in ConvTranspose.o objc-class-ref in Crop.o objc-class-ref in FullyConnected.o ... "OBJC_CLASS$_MPSCNNConvolution", referenced from: objc-class-ref in Convolution.o "OBJC_CLASS$_MPSImage", referenced from: objc-class-ref in Add.o objc-class-ref in BGRAtoRGBA.o objc-class-ref in Concat-2F047EC0A4EC20B2.o objc-class-ref in Convolution.o objc-class-ref in ConvTranspose.o objc-class-ref in Crop.o objc-class-ref in FullyConnected.o ... 
"OBJC_CLASS$_MPSCNNNeuronSigmoid", referenced from: objc-class-ref in ActivationNeuronType.o "OBJC_CLASS$_MPSCNNConvolutionDescriptor", referenced from: objc-class-ref in Convolution.o objc-class-ref in FullyConnected.o "OBJC_CLASS$_MPSCNNNeuron", referenced from: objc-class-ref in ActivationNeuronType.o objc-class-ref in Neuron.o "OBJC_CLASS$_MPSCNNFullyConnected", referenced from: objc-class-ref in FullyConnected.o "OBJC_CLASS$_MPSCNNNeuronReLU", referenced from: objc-class-ref in ActivationNeuronType.o "OBJC_CLASS$_MPSCNNNeuronLinear", referenced from: objc-class-ref in ActivationNeuronType.o "OBJC_CLASS$_MPSCNNPooling", referenced from: objc-class-ref in Pooling.o "OBJC_CLASS$_MPSCNNNeuronTanH", referenced from: objc-class-ref in ActivationNeuronType.o ld: symbol(s) not found for architecture arm64 clang: error: linker command failed with exit code 1 (use -v to see invocation)

    opened by geoeo 6
  • Correct Installation?

    Noob question: Downloading the git and installing the example project using the cartfile works, but when incorporating Bender into my own project the import clearly fails. I added the lines

    platform :ios, '10.0'
    pod 'MetalBender', :git => 'https://github.com/xmartlabs/Bender.git'
    

    to my Podfile, but importing 'Bender' fails with the "No such module". Sorry if this is non-bender specific as I don't really know Swift :)

    opened by ackdav 6
  • Tests fail in iOS 11.3.1 on iPhone 8+

    In iOS 11.3.1 the tests for the Concat and LocalResponseNorm layers fail in iPhone 8 and iPhone X. They work in iOS 11.3 (to my understanding) and also if run on iPhone 7 or below.

    I am not sure this is a Bender or an iOS issue.

    bug 
    opened by mats-claassen 5
  • Can't compile with Xcode 9.3, iOS 11.3

    After installing from the cocopod, I get this error on iOS 11.3, Xcode 9.3. Bender version 0.4.1

    Type of expression is ambiguous without more context

    It appears in the Convolution layer, as well as the DepthwiseConvolution.

    Convolution.swift: line 114

    screen shot 2018-04-07 at 4 42 39 pm

    and again in DepthwiseConvolution.swift: line 41

    screen shot 2018-04-07 at 4 43 25 pm

    Any ideas on how to get this working?

    opened by mdlockyer 4
  • pod install, unable to satisfy the following requirements, `MetalBender (~> 0.4)` required by `Podfile`

Also, when trying on the simulator there is no warning, but when connecting my iPhone I have multiple errors, including "could not find MetalBender" (import) and multiple "already declared" errors on the proxy.

    opened by JeremyGozlan 4
  • Unable to find a specification for `MetalPerformanceShadersProxy`

    When running pod install with pod 'MetalBender', :git => 'https://github.com/xmartlabs/Bender.git' I get the following error in the terminal.

    [!] Unable to find a specification for MetalPerformanceShadersProxy

    Looking at the cocoa pods specs it seams the MetalPerformanceShadersProxy were removed a few hours ago. Did I miss something?

    screen shot 2017-08-09 at 13 36 30

    Cocoa Pods 1.3.1, Xcode 8.3.3, ios 10 with iphone 6

    opened by simonbengtsson 4
  • macOS support

    Seems like a great AI framework for iOS. I wish to use your framework on macOS.

    My podfile looks something like this:

    swift_version = "5.0"
    platform :osx, '10.14'
    pod 'MetalBender', :git => 'https://github.com/xmartlabs/Bender.git', :commit => '512ea171950d1ab997b05cf469908fbfc48060e6'
    

    Doing install, says that MetalBender is not available for macOS.

    PROMPT> pod install
    Analyzing dependencies
    Pre-downloading: `MetalBender` from `https://github.com/xmartlabs/Bender.git`, commit `512ea171950d1ab997b05cf469908fbfc48060e6`
    [!] The platform of the target `FreeHongKong` (macOS 10.14) is not compatible with `MetalBender (0.5.0)`, which does not support `macOS`.
    

    Best of luck. Thank you for open sourcing your project.

    opened by neoneye 1
  • Get the weight of each layer of neural network

    styleNet = Network(device: device, inputSize: inputSize, parameterLoader: loader)

    styleNet.start ->> Convolution(size: ConvSize(outputChannels: 32, kernelSize: 9, stride: 1), id: “conv1”) ->> Convolution(size: ConvSize(outputChannels: 64, kernelSize: 3, stride: 2), id: “conv2”) ->> Convolution(size: ConvSize(outputChannels: 128, kernelSize: 3, stride: 2), id: “conv3”) ->> Residual(size: ConvSize(outputChannels: 128, kernelSize: 3, stride: 1), id: “res_block1”) ->> Residual(size: ConvSize(outputChannels: 128, kernelSize: 3, stride: 1), id: “res_block2”) ->> Residual(size: ConvSize(outputChannels: 128, kernelSize: 3, stride: 1), id: “res_block3”) ->> Residual(size: ConvSize(outputChannels: 128, kernelSize: 3, stride: 1), id: “res_block4”) —>> ConvTranspose(size: ConvSize(outputChannles: 64, kernelSize: 3, stride: 2), id: “convt1”) —>> ConvTranspose(size: ConvSize(outputChannles: 32, kernelSize: 3, stride: 2), id: “convt2”) ->> Convolution(size: ConvSize(outputChannels: 3, kernelSize: 9, stride: 1), neuron: .tanh, id: “convFinal”)

    1. I want to do two pool layers for the add operation. How to get the weight of each layer of the neural network?
    2. Model.run: the self?.network.run is going to be returned to tensor, I need to take care of it, don't return the label?

    opened by baihualinxin 0
  • Example does not compile

Cloning the repo with checkout on tag 0.4.0, the Example has seven errors after pod installation (0.4.0). The frameworks seem to be removed at compile: all three frameworks installed by CocoaPods are not found at compile time.

    "PBXCp /Users/thibautnoah/Library/Developer/Xcode/DerivedData/Example-fheatxxixelckdgltxeulpnuyhru/Build/Products/Debug-iphoneos/MetalBender.framework /Users/thibautnoah/Library/Developer/Xcode/DerivedData/Example-fheatxxixelckdgltxeulpnuyhru/Build/Products/Debug-iphoneos/Example.app/Frameworks/MetalBender.framework cd /Users/thibautnoah/testing/Bender/Example export PATH="/Applications/Xcode.app/Contents/Developer/Platforms/iPhoneOS.platform/Developer/usr/bin:/Applications/Xcode.app/Contents/Developer/usr/bin:/usr/local/bin:/usr/bin:/bin:/usr/sbin:/sbin" builtin-copy -exclude .DS_Store -exclude CVS -exclude .svn -exclude .git -exclude .hg -exclude Headers -exclude PrivateHeaders -exclude Modules -exclude *.tbd -bitcode-strip replace-with-marker -bitcode-strip-tool /Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/bitcode_strip -resolve-src-symlinks /Users/thibautnoah/Library/Developer/Xcode/DerivedData/Example-fheatxxixelckdgltxeulpnuyhru/Build/Products/Debug-iphoneos/MetalBender.framework /Users/thibautnoah/Library/Developer/Xcode/DerivedData/Example-fheatxxixelckdgltxeulpnuyhru/Build/Products/Debug-iphoneos/Example.app/Frameworks"

    "error: /Users/thibautnoah/Library/Developer/Xcode/DerivedData/Example-fheatxxixelckdgltxeulpnuyhru/Build/Products/Debug-iphoneos/MetalBender.framework: No such file or directory"

    opened by tirrorex 8
  • Batch inference API

    Is there a batch inference API available? In our use case, we need to perform inference on several (up to 100+) cropped parts of an image. A batch inference API would make this operation efficient. Thanks.

    opened by karthikv2k 1
  • how to make bender data loader much fast?

    I have met a problem using Bender to load some model's data from a training framework like PyTorch. I need to convert the data from float32 into float16 (half) in a Metal kernel for processing, but I found it is very slow, so is there any way to speed this up?

    opened by erickingxu 1