Flutter Piano Audio Detection implemented with Tensorflow Lite Model (Google Magenta)

Overview

FlutterPianoAudioDetection Plugin




  • Android Implementation
  • iOS/iPadOS Implementation

To keep this project alive, consider giving a star or a like. Contributors are also welcome.


Example Demo

(demo GIF)


Setting up a Flutter app with flutter_piano_audio_detection

1. Add the TensorFlow Lite model file to your project

First, add the TensorFlow Lite model file to your project by copying the downloaded onsets_frames_wavinput.tflite:

Android : Copy the downloaded file into YourApp/android/app/src/main/assets
iOS : Xcode Navigator -> Build Phases -> Copy Bundle Resources

If you have experience installing other plugins, it should be very simple.

2. iOS Installation & Permissions

  1. Add the permission below to your Info.plist, which can be found in the /ios/Runner folder. For example:
    <key>NSMicrophoneUsageDescription</key>
    <string>Your Text</string>

2. Add the following to your Podfile. Since the AudioModule library is sensitive to the iOS version, set the platform in your Podfile to iOS 12.1 or higher. This plugin also depends on the [permission_handler](https://pub.dev/packages/permission_handler) Flutter plugin.
    platform :ios, '12.1' # or a higher version

    # ...

    post_install do |installer|
      installer.pods_project.targets.each do |target|
        target.build_configurations.each do |config|
          config.build_settings['GCC_PREPROCESSOR_DEFINITIONS'] ||= [
            '$(inherited)',
            ## dart: PermissionGroup.microphone
            'PERMISSION_MICROPHONE=1',
          ]
        end
      end
    end
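Because the plugin depends on permission_handler, your app also needs to request the microphone permission at runtime before starting recognition. A minimal sketch using permission_handler's `Permission.microphone` (the helper name `ensureMicrophonePermission` is just an illustration, not part of this plugin):

```dart
import 'package:permission_handler/permission_handler.dart';

// Ask the user for microphone access before starting the engine.
// Returns true when the permission has been granted.
Future<bool> ensureMicrophonePermission() async {
  final status = await Permission.microphone.request();
  return status.isGranted;
}
```

Call this (and check its result) before invoking `fpad.start()`, so recognition never runs without microphone access.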

3. Android Installation & Permissions

  1. Add the permission below to your AndroidManifest.xml, which can be found in the /android/app/src/main folder. For example:
    <uses-permission android:name="android.permission.RECORD_AUDIO" />

  2. Add the following to your build.gradle, which can be found in YourApp/android/app. For example:
    aaptOptions {
        noCompress 'tflite'
    }
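For context, `aaptOptions` belongs inside the existing `android` block of that build.gradle; keeping the .tflite file uncompressed allows it to be memory-mapped at runtime. A sketch of where the snippet sits (surrounding entries are placeholders for whatever your project already has):

```gradle
android {
    // ... your existing compileSdkVersion, defaultConfig, etc.

    // Prevent the build from compressing the model file so the
    // TensorFlow Lite interpreter can memory-map it.
    aaptOptions {
        noCompress 'tflite'
    }
}
```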

How to use this plugin

Please look at the example to see how to implement these features.

  1. Add the dependency to your pubspec.yaml
  dependencies:
    flutter_piano_audio_detection: ${version}
  2. Usage in Flutter code
  import 'package:flutter_piano_audio_detection/flutter_piano_audio_detection.dart';
  // ...
  
  class _YourAppState extends State<YourApp> {
    FlutterPianoAudioDetection fpad = FlutterPianoAudioDetection();
  
    Stream<List<dynamic>>? result;
    List<String> notes = [];
    
    // ...
    
    @override
    void initState() {
      super.initState();
      fpad.prepare();
    }
  
    void start() {
      fpad.start(); // Start Engine 
      getResult();  // Event Subscription
    }

    void stop() {
      fpad.stop();  // Stop Engine
    }

    void getResult() {
      result = fpad.startAudioRecognition();
      result!.listen((event) {
        setState(() {
          notes = fpad.getNotes(event); // notes = [C3, D3]
        });
      });
    }
    // ...
  }
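Besides getNotes, the plugin also exposes getNotesDetail, which returns [key, frame, onset, offset, velocity] details for each recognized note. A hedged sketch of consuming it inside the same state class (the exact element types may differ by plugin version, so verify against the version you install):

```dart
void getDetailedResult() {
  result = fpad.startAudioRecognition();
  result!.listen((event) {
    // Each entry describes one recognized note as
    // [key, frame, onset, offset, velocity].
    final details = fpad.getNotesDetail(event);
    for (final detail in details) {
      print('note detail: $detail');
    }
  });
}
```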

License

MIT


Owner
WonyJeong