Overview

AudioKit

AudioKit is an audio synthesis, processing, and analysis platform for iOS, macOS (including Catalyst), and tvOS.

Installation

To add AudioKit to your Xcode project, select File -> Swift Packages -> Add Package Dependency. Enter https://github.com/AudioKit/AudioKit for the URL. You can define which version range you want, which branch to use, or even which exact commit you would like to use.
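
If you are instead adding AudioKit as a dependency of your own Swift package, a minimal Package.swift sketch might look like this (the package and target names are illustrative, and the version range is just an example):

    // swift-tools-version:5.5
    import PackageDescription

    let package = Package(
        name: "MyAudioApp", // illustrative name
        dependencies: [
            // Pin to whatever AudioKit release range you need.
            .package(url: "https://github.com/AudioKit/AudioKit", from: "5.0.0"),
        ],
        targets: [
            .target(
                name: "MyAudioApp",
                dependencies: [.product(name: "AudioKit", package: "AudioKit")]
            ),
        ]
    )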

Documentation

In addition to the Migration Guide, our documentation is now automatically generated on the GitHub wiki.

Examples

The AudioKit Cookbook contains many recipes for simple uses of AudioKit components. More examples are here.

Getting help

  1. Post your problem to Stack Overflow with the #AudioKit hashtag.

  2. Once you are sure the problem is not in your implementation but in AudioKit itself, you can open a GitHub issue.

  3. If you, your team, or your company is using AudioKit, please consider sponsoring Aure on GitHub Sponsors.

Contributing Code

When you want to modify AudioKit, check out the develop branch (as opposed to main), make your changes, and send us a pull request.

About Us

AudioKit was created by Aurelius Prochazka, who is your lifeline if you need help! Matthew Fecher, Jeff Cooper, and Aure create AudioKitPro apps together, and Stephane Peter is Aure's co-admin and manages AudioKit's releases. Taylor Holliday has been instrumental in AudioKit 5 improvements.

But, there are many other important people in our family:

Group         Description
Core Team     The biggest contributors to AudioKit!
Slack         Pro-level developer chat group; contact a core team member for an invitation.
Contributors  A list of all people who have submitted code to AudioKit.
Comments
  • Xcode 12 + AudioKit 4.11 Problems

    Hi. I just wanted to ask whether there is planned support for AudioKit v4 in Xcode 12, or whether there is documentation for the v5 beta, which works in Xcode 12.

    I cannot compile my app in Xcode 12 and hence cannot prepare it for iOS 14, which is already released. I have gone into detail in this Stack Overflow post: https://stackoverflow.com/questions/63860545/implementing-microphone-analysis-with-audiokit-v5

    Thank you.
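
    For reference, a minimal microphone setup for the AudioKit v5 beta might look like the sketch below, assuming the v5 AudioEngine and Mixer API (adjust names to the exact beta you are on):

        import AudioKit

        // Sketch: route the mic through a muted mixer so the engine runs
        // while analysis taps can be attached to `input` upstream.
        func startMicrophone() throws -> AudioEngine {
            let engine = AudioEngine()
            guard let input = engine.input else {
                fatalError("No audio input available")
            }
            let silence = Mixer(input)
            silence.volume = 0
            engine.output = silence
            try engine.start()
            return engine
        }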

    opened by vojtabohm 82
  • Add OSC instrument communication

    We have full MIDI support, but no OSC support yet. There seem to be a few options listed on Charles Martin's blog:

    http://charlesmartin.com.au/blog/2013/3/26/finding-a-good-osc-library-for-ios

    but so far it looks like his Metatone might be best

    https://github.com/cpmpercussion/MetatoneOSC

    and it comes with this example

    https://github.com/cpmpercussion/ExampleOSC

    If you are an OSC wiz and want to knock this out, that would be great; otherwise, you can just comment that you would also like this feature, and that will motivate us to prioritize it higher.

    enhancement 
    opened by aure 43
  • Ableton Link Integration

    AudioKit and Ableton Link

    Just writing down a quick list of methods that get called by Ableton Link's 'LinkHut' example app to execute a tempo update. Since Ableton Link is meant to keep different music apps in tempo/sync, updating the AKSequencer / tempo backend would probably be enough to integrate Link with AudioKit.

    Integration

    Note:

    • Download Ableton Link / LinkHut from the LinkKit releases page: https://github.com/Ableton/LinkKit/releases
    • LinkHut is in the examples folder inside the release zip

    Here are the steps taken in LinkHut from Ableton:

    Below is a brief breakdown of the steps taken to set up Ableton Link with an audio engine to update the tempo.

    All steps described below are based on the Ableton Link LinkHut example app.

    The process of updating the tempo (when another user sets a new tempo):

    • First, the audio engine and Ableton Link objects are set up in ViewController -> viewDidLoad

      • _audioEngine = [[AudioEngine alloc] initWithTempo:_bpm];
      • This is where the AVAudioSession shared instance and callbacks get set up
      • ABLLinkSetSessionTempoCallback(_audioEngine.linkRef, onSessionTempoChanged, (__bridge void *)self);
        • This is where the Ableton Link tempo change gets passed along to a user callback to update the UI
      • _linkSettings = [ABLLinkSettingsViewController instance:_audioEngine.linkRef];
      • [_audioEngine setQuantum:_quanta];
    • Ableton Link will receive the tempo change from the other user (over the network, for example)

    • Ableton Link will send an event to the AVAudioSession audio engine route

    • User code has a callback that gets called when the route event happens

    • See 'AudioEngine.m' in the Ableton LinkHut app example:

      • callback in AudioEngine.m -> audioCallback
      • This is set up in the 'setupAudioEngine' method, which gets called during init
      • This is where the _linkData object is set up to connect to the shared AVAudioSession instance's properties
      • 'AURenderCallbackStruct ioRemoteInput;'
        • gets called to set up the i/o callback using the 'audioCallback' function
        • audioCallback checks for a tempo update and saves the timestamp and new tempo
        • See the 'static OSStatus audioCallback' method in AudioEngine.m
        • This is where the tempo actually gets updated when we receive a tempo event from Ableton Link
        • If the new tempo is valid, it passes the new tempo event along to another callback that informs the UI to display the new tempo to the user
        • This happens in 'ABLLinkSetSessionTempoCallback', defined in ViewController.swift
        • This callback is set up in ViewController.swift -> viewDidLoad -> ABLLinkSetSessionTempoCallback (a sketch of the AudioKit-side glue follows below)
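
    On the AudioKit side, the glue could be as small as the sketch below (onSessionTempoChanged comes from the steps above; the sequencer call assumes the AudioKit 4 AKSequencer API):

        import AudioKit

        // Sketch: invoked from the Link session-tempo callback registered
        // above; pushes the new session tempo into the AudioKit sequencer.
        func sessionTempoChanged(to bpm: Double, sequencer: AKSequencer) {
            sequencer.setTempo(bpm)
        }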
    enhancement 
    opened by JoshuaBThompson 36
  • Notes not stopping properly in MIDI sequencer

    Something strange has started happening with the Sequencer. I'm not sure exactly when, but recently.

    The "stop playing notes" functionality has essentially stopped working. I have confirmed that my code is calling the stop method on each track, but the notes continue sustaining, which is a real problem in my app.

    Here's what the code looks like that I execute in direct response to the user pressing the "stop" button:

        public func stop() {
            if let sequencer = sequencer {
                internalPlaybackState = .stopped
                sequencer.stop()
                for track in sequencer.tracks {
                    track.stopPlayingNotes()
                }
                sequencer.rewind()
            }
        }
    

    In that code, "sequencer" is just an AudioKit Sequencer. The "internalPlaybackState" is something related to my app that should have no bearing here.

    I'd be happy to help you reproduce it. All the source code that I'm using for playback is in the SongSprout Swift Package located here: https://github.com/dunesailer/SongSprout

    The main playback class is Orchestrion.

    opened by btfranklin 33
  • AKMicrophone init crash in Objective-C

    I am trying to use AKMicrophone in an Objective-C sample app, and it crashes on the init step for all pod versions above 4.5.1. iOS version is 12.1.2, iPhone 7 Plus, Xcode version 10.1. It works fine in the Swift sample app MicrophoneAnalysis, so I suspect maybe a missing @objc somewhere?

        @property (strong, nonatomic) AKMicrophone *mic;

    Then in viewDidLoad:

        _mic = [[AKMicrophone alloc] init]; // Crashes here

    I am also getting the following error in the console:

        2019-01-15 12:56:30.809227-0500 VoiceWavePoc[2856:1473174] [avas] AVAudioSessionPortImpl.mm:56:ValidateRequiredFields: Unknown selected data source for Port Receiver (type: Receiver)

    opened by KozDan 32
  • Setting Input/Output for external device

    When connecting an external I/O device, it would be better to have a setting where the user can select which channels to use. E.g. an Apogee Quartet has 4 analog inputs and 6 analog outputs; you should be able to select specific I/O channels for recording and monitoring, such as inputs 1 & 2 for stereo recording or just input 1 for mono.

    In the current version, tested with a Quartet, using the default input 1 still works, but if your audio comes in via other channels, nothing is recorded.

    A settings view with a real-time chart UI for measuring input/output volume would be great.
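
    As a starting point, plain AVAudioSession already lets an app request more input channels when an external interface is attached; a sketch using standard AVFoundation API (the channel count here is just an example):

        import AVFoundation

        // Sketch: ask for up to 4 input channels when a multi-channel
        // interface such as a Quartet is connected.
        func requestInputChannels() throws {
            let session = AVAudioSession.sharedInstance()
            try session.setCategory(.playAndRecord, mode: .default, options: [])
            try session.setActive(true)
            let available = session.maximumInputNumberOfChannels
            try session.setPreferredInputNumberOfChannels(min(4, available))
        }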

    enhancement tips and tricks 
    opened by WebberLai 31
  • AKMicrophone crash init in 4.5.5 version

    AudioKit version 4.5.5: crash in this code:

        AKSettings.audioInputEnabled = true
        mic = AKMicrophone()
        tracker = AKFrequencyTracker(mic)
        silence = AKBooster(tracker, gain: 0)

    _AVAE_Check: required condition is false: [AVAudioIONodeImpl.mm:911:SetOutputFormat: (format.sampleRate == hwFormat.sampleRate)] *** Terminating app due to uncaught exception 'com.apple.coreaudio.avfaudio', reason: 'required condition is false: format.sampleRate == hwFormat.sampleRate'

    Test devices: iPhone 8 and simulators. AudioKit.engine.inputNode.inputFormat(forBus: 0).sampleRate periodically returns 44100.0 or 48000.0 in the application.

    On the iPhone SE, AKMicrophone() works stably with no crash.
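
    A common workaround sketch, using the same identifiers as this report (AudioKit 4): read the hardware rate and hand it to AKSettings before building the chain:

        import AudioKit

        // Sketch: align AudioKit's session sample rate with the hardware
        // input rate so AVAudioEngine's format check doesn't trip.
        AKSettings.audioInputEnabled = true
        AKSettings.sampleRate = AudioKit.engine.inputNode.inputFormat(forBus: 0).sampleRate

        let mic = AKMicrophone()
        let tracker = AKFrequencyTracker(mic)
        let silence = AKBooster(tracker, gain: 0)
        AudioKit.output = silence
        try AudioKit.start()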

    opened by extnous 30
  • AKOfflineRenderNode doesn't render any sound.

    I used AKOfflineRenderNode to process recorded voice. The idea is to record voice and save it to a file, then apply some effects to this file and save the combined voice + effects to a file.

    I get a successful result on my iPhone 6s running iOS 11.0.3, but on other devices running iOS 11+ (for example, the latest-generation iPad on 11.0.3) I always get a "silent" file of the same size, about 60 KB. I attached one for reference (rec.zip). The issue is also 100% reproducible on the iOS 11 simulator.

    Here is my setup (I removed the effects setup, since offline render doesn't work even on a clean recording):

    fileprivate func setupAudioKit() {
            AKSettings.enableLogging = true
            AKAudioFile.cleanTempDirectory()
            
            AKSettings.bufferLength = .medium
            AKSettings.numberOfChannels = 2
            AKSettings.sampleRate = 44100
            AKSettings.defaultToSpeaker = true
            
            do {
                try AKSettings.setSession(category: .playAndRecord, with: .allowBluetoothA2DP)
            } catch {
                AKLog("Could not set session category")
            }
            
            mic = AKMicrophone()
            micMixer = AKMixer(mic)
            
            recorder = try? AKNodeRecorder(node: micMixer)
            
            if let file = recorder.audioFile {
                player = try? AKAudioPlayer(file: file)
                player.looping = true
            }
            playerMixer = AKMixer(player)
        // Effects setup
        offlineRenderer = AKOfflineRenderNode(playerMixer)
        AudioKit.output = offlineRenderer

        AudioKit.start()
    }
    

    Here is export method:

    fileprivate func render() {
            offlineRenderer.internalRenderEnabled = false
            player.schedule(from: 0, to: player.duration, avTime: nil)
            
            let renderURL = URL(fileURLWithPath: FileHelper.documentsDirectory() + "rec.m4a")
            
            let sampleTimeZero = AVAudioTime(sampleTime: 0, atRate: AudioKit.format.sampleRate)
            player.play(at: sampleTimeZero)
            do {
                try offlineRenderer.renderToURL(renderURL, seconds: player.duration)
            } catch {
                print(error)
            }
            player.stop()
            offlineRenderer.internalRenderEnabled = true
        }
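
    Worth noting: later 4.x releases deprecated AKOfflineRenderNode in favor of an offline-render helper built on AVAudioEngine's manual rendering mode (iOS 11+). A hedged sketch; the exact parameter labels may differ between releases:

        import AudioKit
        import AVFoundation

        // Hedged sketch of the newer offline-render path; verify the
        // AudioKit.renderToFile labels against your AudioKit version.
        func renderOffline(player: AKAudioPlayer, to url: URL) throws {
            let file = try AVAudioFile(forWriting: url,
                                       settings: AudioKit.format.settings)
            try AudioKit.renderToFile(file, duration: player.duration) {
                player.play()
            }
        }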
    
    opened by ygoncharov-ideas 30
  • Swift 5 support

    I'm sure you're all working on this, just making the issue to add some visibility for others who are wondering as well.

    Xcode 10.2 with Swift 5 support was just released today. Hoping Swift 5 support is coming soon to AudioKit!

    opened by colinhumber 29
  • Audiobus 'hwFormat' crash

    I'm integrating Audiobus into my AudioKit-based app, and have followed the instructions here, but when I call Audiobus.start(), the app will continue running for about a second, then I get some error output, followed by a crash:

    2017-03-22 23:25:45.620918 Vulse[20369:7115604] [central] 54: ERROR: [0x16e167000] >avae> AVAudioIONodeImpl.mm:365: _GetHWFormat: required condition is false: hwFormat framerate: 23 2017-03-22 23:26:03.917219 Vulse[20369:7115604] *** Terminating app due to uncaught exception 'com.apple.coreaudio.avfaudio', reason: 'required condition is false: hwFormat'

    So, I got the FilterEffects example up and running to see if the issue was just in my app. (First I had to change the input variable in viewDidLoad to a property, because it was deiniting too early and causing a crash.) But after fixing that, I unfortunately got the same error above.

    Interestingly, for the second or two before the app crashes, everything seems to work properly: the app takes audio input from audiobus and I can hear it output with the reverb applied. Also (at least in my app) if I remove the Audiobus.start(), but still try to instantiate an AKStereoInput, I get a similar message: SetOutputFormat: required condition is false: format.sampleRate == hwFormat.sampleRate

    Do you get the same crash when trying this yourself, with the latest AudioKit release? I'm using Audiobus 2.3.3, on an iPhone 6s running iOS 10.2.1.

    opened by jconst 29
  • AKPlayer.load() on a file crashes after upgrading to 4.10 (from 4.9.4)

    AudioKit.AKFaderAudioUnit input format must match output format

    2020-06-02 12:15:08.494929+0530 [avae] AVAEInternal.h:109 [AVAudioEngineGraph.mm:4170:UpdateGraphAfterReconfig: (AUGraphParser::InitializeActiveNodesInOutputChain(ThisGraph, kOutputChainFullTraversal, *conn.srcNode, isChainActive)): error -10868 2020-06-02 12:15:08.497253+0530 *** Terminating app due to uncaught exception 'com.apple.coreaudio.avfaudio', reason: 'error -10868'

    opened by ghost 28
  • TSAN data race

    macOS Version(s) Used to Build

    macOS 13 Ventura

    Xcode Version(s)

    Xcode 14

    Description

    [screenshot: TSAN data race report]

    To reproduce, run tests with TSAN enabled

    @josi

    Crash Logs, Screenshots or Other Attachments (if applicable)

    No response

    bug 
    opened by wtholliday 0
  • Add a test for start and pause with empty mixer

    • This currently fails, so it is skipped
    • Add a reference to the bug report

    I don't have a clear idea of how to fix this. I was thinking about possibly maintaining separate state within AudioEngine, but we don't have a reference to it from Mixer. Adding an associated object to AVAudioEngine would probably work, but I am not sure it is worth it. It is not that difficult to work around at the call site.
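
    A sketch of what the skipped test could look like (AudioKit 5 names; XCTSkipIf marks the known failure):

        import AudioKit
        import XCTest

        final class EmptyMixerTests: XCTestCase {
            func testStartPauseWithEmptyMixer() throws {
                // Known failure: skip until the underlying bug is fixed.
                try XCTSkipIf(true, "start/pause with an empty Mixer currently fails")

                let engine = AudioEngine()
                engine.output = Mixer() // empty mixer, no inputs
                try engine.start()
                engine.pause()
                engine.stop()
            }
        }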

    opened by jcavar 0
  • Approach to disable input bus

    Description

    AUAudioUnitBus has an isEnabled property. If this property is disabled, pullBlock will not be called on that input bus, which saves processing time.

    This works very well in Apple's Audio Units (e.g. AVAudioMixerNode), and depending on where the bus sits in the chain, you might see significant performance improvements, since none of the upstream nodes will be called.

    Unfortunately, this is not implemented in AudioKit units (e.g. DryWetMixer).

    Proposed Solution

    This check would need to happen here and here, I believe.

    I am not sure how to approach this, since the property lives on AUAudioUnitBusArray -> AUAudioUnitBus, which is not available within the DSP.

    Describe Alternatives You've Considered

    I guess one approach would be to duplicate these properties on the DSP, but that doesn't seem very elegant.

    Additional Context

    Please note that this is different from bypassing an effect, which just bypasses processing in one node but doesn't prevent further signal propagation.
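
    For illustration, on a hosted unit the standard API route looks like the sketch below (the bus index is an assumption; AudioKit's DSP-backed units are exactly where this currently has no effect):

        import AVFoundation

        // Sketch: disable an input bus on a hosted audio unit so its
        // pull block stops being called (per AUAudioUnitBus.isEnabled).
        func disableInputBus(of unit: AVAudioUnit, bus: Int = 0) {
            let busses = unit.auAudioUnit.inputBusses
            guard bus < busses.count else { return }
            busses[bus].isEnabled = false
        }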

    opened by jcavar 0
  • (Mostly) Eliminate AVAudioEngine

    Description

    AVAudioEngine is:

    • relatively buggy (still not handling dynamic connections well)
    • opaque (we don't have access to its code)
    • annoying (mapping AK's connections onto AVAudioEngine is tricky)
    • hard to use (terrible error messages)

    Proposed Solution

    Mostly replace AVAudioEngine usage in AK with our own code. AVAudioEngine would still be used to handle input and output, but we would evaluate our audio graph within a single AU. Thus the AVAudioEngine graph would always be Input -> AK AU -> Output.

    • Instead of the recursive pull-based style of typical AUs, sort the graph and call the AUs in sequence with dummy input blocks; this makes for simpler stack traces and easier profiling (see the sketch below)
    • Write everything in Swift (AK needs to run in Playgrounds; make use of @noAllocations and @noLocks)
    • Host AUs (actually the same AUs as before!)
    • Don't bother trying to share buffers and update in-place (this seems to be of marginal benefit on modern hardware)
    • Preserve almost exactly the same API
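
    A sketch of the sorted, sequential evaluation idea (all types and names here are hypothetical, not AudioKit API):

        // Hypothetical node type for the sketch.
        final class DSPNode {
            let name: String
            var inputs: [DSPNode] = []
            init(_ name: String) { self.name = name }
        }

        // Depth-first topological sort: every node appears after all of
        // its inputs, so the engine can render the array front to back
        // instead of recursively pulling.
        func renderOrder(from output: DSPNode) -> [DSPNode] {
            var visited = Set<ObjectIdentifier>()
            var order: [DSPNode] = []
            func visit(_ node: DSPNode) {
                guard visited.insert(ObjectIdentifier(node)).inserted else { return }
                node.inputs.forEach(visit)
                order.append(node)
            }
            visit(output)
            return order
        }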

    Describe Alternatives You've Considered

    Continue to file bugs with Apple and hope they fix them. This is the status quo. It's not good enough.

    Additional Context

    The devil's in the details, and this could prove to be too much work after we investigate further.

    opened by wtholliday 0
  • Solution for #298

    Description

    Re discussion in https://github.com/AudioKit/AudioKit/issues/298

    From Logic, if you save your EXS24 instrument into the Samples/ folder in your Xcode project and also pick "include audio data" — the saved EXS24 instrument will contain the correct relative path to be able to load!
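
    For context, loading such an instrument with AudioKit 4's AKAppleSampler looks roughly like the sketch below (the instrument path is illustrative):

        import AudioKit

        let sampler = AKAppleSampler()
        // The .exs file's internal sample paths are relative, so it must
        // sit alongside the Samples/ folder saved out of Logic.
        try sampler.loadEXS24("Sampler Instruments/MyInstrument")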

    Proposed Solution

    Since this is something a few people have run into, it might be worth adding something in the docs or FAQ about it?

    Describe Alternatives You've Considered

    This seems like the best thing

    Additional Context

    Happy to write up the FAQ post if you like

    opened by charliewilliams 1
  • AudioPlayer buffer samplerate mismatch

    macOS Version(s) Used to Build

    macOS 12 Monterey

    Xcode Version(s)

    Xcode 14

    Description

    The sample rate of the outputFormat of the playerNode in an AudioPlayer does not seem to match the buffer that is being used. I don't have any issues when using a non-buffered AudioPlayer.

    func load(_ audioFile: AVAudioFile) {
        try! audioPlayer.load(file: audioFile, buffered: true)
    }
    

    This results in slowed-down audio, because the outputFormat of the playerNode is 44100 while the file has a sample rate of 48000 (on my device; in the simulator it works fine).

    In AVFAudio.AVAudioPlayerNode there's a comment mentioning: "When playing buffers, there is an implicit assumption that the buffers are at the same sample rate as the node's output format."

    I've read somewhere that setting the audioFormat fixes these kinds of issues, but it does not work.

    Settings.audioFormat = AVAudioFormat(standardFormatWithSampleRate: 48000, channels: 2) ?? AVAudioFormat()
    

    I noticed there's a check on format in the AudioPlayer: https://github.com/AudioKit/AudioKit/blob/main/Sources/AudioKit/Nodes/Playback/AudioPlayer/AudioPlayer.swift#L257

    But when loading the first file, this part will not be checked. Therefore, the makeInternalConnections function on line 273 will not be called.

    I've created an extension function of the AudioPlayer where I choose to reconnect the AudioPlayer. This solves the issue.

    extension AudioPlayer {
        public func reconnectPlayerNode() {
            let engine = mixerNode.engine!

            // Detach the player node's current input connection, then
            // reconnect it using the loaded file's processing format.
            engine.disconnectNodeInput(playerNode)

            if playerNode.engine == nil {
                engine.attach(playerNode)
            }

            engine.connect(playerNode, to: mixerNode, format: file?.processingFormat)
        }
    }
    

    Is there something I'm missing so I don't have to reconnect? Or is this the way to go for now?

    Crash Logs, Screenshots or Other Attachments (if applicable)

    No response

    bug 
    opened by renezuidhof 3