An iOS and macOS audio visualization framework built upon Core Audio useful for anyone doing real-time, low-latency audio processing and visualizations.

Overview

A simple, intuitive audio framework for iOS and OSX.

Deprecated

EZAudio has recently been deprecated in favor of AudioKit. However, since some people are still forking and using EZAudio I've decided to restore the README as it was. Check out the note below.

Apps Using EZAudio

I'd really like to start creating a list of projects made using EZAudio. If you've used EZAudio to make something cool, whether it's an app or open source visualization or whatever, please email me at syedhali07[at]gmail.com and I'll add it to our wall of fame! To start it off:

  • Detour - Gorgeous location-aware audio walks
  • Jumpshare - Incredibly fast, real-time file sharing

Features

Awesome Components

I've designed six audio components and two interface components to allow you to immediately get your hands dirty recording, playing, and visualizing audio data. These components simply plug into each other and build on top of the high-performance, low-latency AudioUnits API, giving you an easy-to-use API written in Objective-C instead of pure C.

EZAudioDevice

A useful class for getting all the current and available inputs/outputs on any Apple device. The EZMicrophone and EZOutput use this to direct sound in/out from different hardware components.

EZMicrophone

A microphone class that provides its delegate audio data from the default device microphone with one line of code.

EZOutput

An output class that will play back any audio it is provided by its datasource.

EZAudioFile

An audio file class that reads/seeks through audio files and provides useful delegate callbacks.

EZAudioPlayer

A replacement for AVAudioPlayer that combines an EZAudioFile and an EZOutput to perform robust playback of any file on any piece of hardware.

EZRecorder

A recorder class that provides a quick and easy way to write audio files from any datasource.

EZAudioPlot

A Core Graphics-based audio waveform plot capable of visualizing any float array as a buffer or rolling plot.

EZAudioPlotGL

An OpenGL-based, GPU-accelerated audio waveform plot capable of visualizing any float array as a buffer or rolling plot.

Cross Platform

EZAudio was designed to work transparently across all iOS and OSX devices. This means one universal API whether you're building for Mac or iOS. For instance, under the hood an EZAudioPlot knows that it will subclass a UIView for iOS or an NSView for OSX and the EZMicrophone knows to build on top of the RemoteIO AudioUnit for iOS, but defaults to the system defaults for input and output for OSX.
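
For a sense of how this kind of transparency is typically achieved, here's a minimal sketch of the conditional-compilation pattern (MyWaveformView is a hypothetical name for illustration; this is not EZAudio's literal source):

// Sketch: pick the platform's view base class at compile time so one
// interface compiles for both iOS and OSX.
#import <TargetConditionals.h>
#if TARGET_OS_IPHONE
    #import <UIKit/UIKit.h>
    @interface MyWaveformView : UIView
#else
    #import <Cocoa/Cocoa.h>
    @interface MyWaveformView : NSView
#endif

// The shared drawing API is identical on both platforms
- (void)updateBuffer:(float *)buffer withBufferSize:(UInt32)bufferSize;

@end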

Examples & Docs

Within this repo you'll find the examples for iOS and OSX to get you up to speed using each component and plugging them into each other. With just a few lines of code you'll be recording from the microphone, generating audio waveforms, and playing audio files like a boss. See the full Getting Started guide for an interactive look into each of the components.

Example Projects

EZAudioCoreGraphicsWaveformExample

Shows how to use the EZMicrophone and EZAudioPlot to visualize the audio data from the microphone in real-time. The waveform can be displayed as a buffer or a rolling waveform plot (traditional waveform look).

EZAudioOpenGLWaveformExample

Shows how to use the EZMicrophone and EZAudioPlotGL to visualize the audio data from the microphone in real-time. The drawing uses OpenGL, so the performance is much better for plots needing a lot of points.

EZAudioPlayFileExample

Shows how to use the EZAudioPlayer and EZAudioPlotGL to playback, pause, and seek through an audio file while displaying its waveform as a buffer or a rolling waveform plot.

EZAudioRecordWaveformExample

Shows how to use the EZMicrophone, EZRecorder, and EZAudioPlotGL to record the audio from the microphone input to a file while displaying the audio waveform of the incoming data. You can then playback the newly recorded audio file using AVFoundation and keep adding more audio data to the tail of the file.

EZAudioWaveformFromFileExample

Shows how to use the EZAudioFile and EZAudioPlot to animate in an audio waveform for an entire audio file.

EZAudioPassThroughExample

Shows how to use the EZMicrophone, EZOutput, and the EZAudioPlotGL to pass the microphone input to the output for playback while displaying the audio waveform (as a buffer or rolling plot) in real-time.

EZAudioFFTExample

Shows how to calculate the real-time FFT of the audio data coming from the EZMicrophone using the Accelerate framework. The audio data is plotted using two EZAudioPlots for the time and frequency displays.

Documentation

The official documentation for EZAudio can be found here: http://cocoadocs.org/docsets/EZAudio/1.1.4/
You can also generate the docset yourself using appledoc by running it on the EZAudio source folder.

Getting Started

To begin using EZAudio you must first make sure you have the proper build requirements and frameworks. Below you'll find explanations of each component and code snippets to show how to use each to perform common tasks like getting microphone data, updating audio waveform plots, reading/seeking through audio files, and performing playback.

Build Requirements

iOS

  • 6.0+

OSX

  • 10.8+

Frameworks

iOS

  • Accelerate
  • AudioToolbox
  • AVFoundation
  • GLKit

OSX

  • Accelerate
  • AudioToolbox
  • AudioUnit
  • CoreAudio
  • QuartzCore
  • OpenGL
  • GLKit

Adding To Project

You can add EZAudio to your project in a few ways:

1.) The easiest way to use EZAudio is via CocoaPods (http://cocoapods.org/). Simply add EZAudio to your Podfile (http://guides.cocoapods.org/using/the-podfile.html) like so:

pod 'EZAudio', '~> 1.1.4'

Using EZAudio & The Amazing Audio Engine

If you're also using the Amazing Audio Engine then use the EZAudio/Core subspec like so:

pod 'EZAudio/Core', '~> 1.1.4'

2.) EZAudio now supports Carthage (thanks Andrew and Tommaso!). You can refer to Carthage's installation instructions for a how-to guide: https://github.com/Carthage/Carthage
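
If you're using Carthage, a Cartfile entry along these lines should work (shown with the same version constraint as the CocoaPods example above; adjust as needed):

github "syedhali/EZAudio" ~> 1.1.4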

3.) Alternatively, you can check out the iOS/Mac examples for how to setup a project using the EZAudio project as an embedded project and utilizing the frameworks. Be sure to set your header search path to the folder containing the EZAudio source.

Core Components

EZAudio currently offers six audio components that encompass a wide range of functionality. In addition to the functional aspects of these components, such as pulling audio data, reading/writing from files, and performing playback, they also take special care to hook into the interface components to allow developers to display visual feedback (see the Interface Components below).

EZAudioDevice

Provides a simple interface for obtaining the current and all available inputs and outputs for any Apple device. For instance, the iPhone 6 has three microphones available for input, while on OSX you can choose the Built-In Microphone or any available HAL device on your system. Similarly, on iOS you can choose between connected headphones or the speaker, while on OSX you can choose from the Built-In Output, any available HAL device, or Airplay.

Getting Input Devices

To get all the available input devices use the inputDevices class method:

NSArray *inputDevices = [EZAudioDevice inputDevices];

or to just get the currently selected input device use the currentInputDevice method:

// On iOS this will default to the headset device or bottom microphone, while on OSX this will
// be your selected input device from the Sound preferences
EZAudioDevice *currentInputDevice = [EZAudioDevice currentInputDevice];

Getting Output Devices

Similarly, to get all the available output devices use the outputDevices class method:

NSArray *outputDevices = [EZAudioDevice outputDevices];

or to just get the currently selected output device use the currentOutputDevice method:

// On iOS this will default to the headset speaker, while on OSX this will be your selected
// output device from the Sound preferences
EZAudioDevice *currentOutputDevice = [EZAudioDevice currentOutputDevice];

EZMicrophone

Provides access to the default device microphone in one line of code and provides delegate callbacks to receive the audio data as an AudioBufferList and float arrays.

Relevant Example Projects

  • EZAudioCoreGraphicsWaveformExample (iOS)
  • EZAudioCoreGraphicsWaveformExample (OSX)
  • EZAudioOpenGLWaveformExample (iOS)
  • EZAudioOpenGLWaveformExample (OSX)
  • EZAudioRecordExample (iOS)
  • EZAudioRecordExample (OSX)
  • EZAudioPassThroughExample (iOS)
  • EZAudioPassThroughExample (OSX)
  • EZAudioFFTExample (iOS)
  • EZAudioFFTExample (OSX)

Creating A Microphone

Create an EZMicrophone instance by declaring a property and initializing it like so:

// Declare the EZMicrophone as a strong property
@property (nonatomic, strong) EZMicrophone *microphone;

...

// Initialize the microphone instance and assign it a delegate to receive the audio data
// callbacks
self.microphone = [EZMicrophone microphoneWithDelegate:self];

Alternatively, you could also use the shared EZMicrophone instance and just assign its EZMicrophoneDelegate.

// Assign a delegate to the shared instance of the microphone to receive the audio data
// callbacks
[EZMicrophone sharedMicrophone].delegate = self;

Setting The Device

The EZMicrophone uses an EZAudioDevice instance to select the specific hardware source it will use to pull audio data. You'd use this if you wanted to change the input device like in the EZAudioCoreGraphicsWaveformExample for iOS or OSX. At any time you can change which input device is used by setting the device property:

NSArray *inputs = [EZAudioDevice inputDevices];
[self.microphone setDevice:[inputs lastObject]];

Anytime the EZMicrophone changes its device it will trigger the EZMicrophoneDelegate event:

- (void)microphone:(EZMicrophone *)microphone changedDevice:(EZAudioDevice *)device
{
    // This is not always guaranteed to occur on the main thread so make sure you
    // wrap it in a GCD block
    dispatch_async(dispatch_get_main_queue(), ^{
        // Update UI here
	    NSLog(@"Changed input device: %@", device);
    });
}

Note: For iOS this can happen automatically if the AVAudioSession changes the current device.

Getting Microphone Data

To tell the microphone to start fetching audio use the startFetchingAudio function.

// Starts fetching audio from the default device microphone and sends data to EZMicrophoneDelegate
[self.microphone startFetchingAudio];

Once the EZMicrophone has started it will send the EZMicrophoneDelegate the audio back in two ways. An array of float arrays:

/**
 The microphone data represented as non-interleaved float arrays useful for:
    - Creating real-time waveforms using EZAudioPlot or EZAudioPlotGL
    - Creating any number of custom visualizations that utilize audio!
 */
-(void)   microphone:(EZMicrophone *)microphone
    hasAudioReceived:(float **)buffer
      withBufferSize:(UInt32)bufferSize
withNumberOfChannels:(UInt32)numberOfChannels
{
    __weak typeof (self) weakSelf = self;
	// Getting audio data as an array of float buffer arrays that can be fed into the
	// EZAudioPlot, EZAudioPlotGL, or whatever visualization you would like to do with
	// the microphone data.
	dispatch_async(dispatch_get_main_queue(),^{
		// Visualize this data brah, buffer[0] = left channel, buffer[1] = right channel
		[weakSelf.audioPlot updateBuffer:buffer[0] withBufferSize:bufferSize];
    });
}

or the AudioBufferList representation:

/**
 The microphone data represented as CoreAudio's AudioBufferList useful for:
    - Appending data to an audio file via the EZRecorder
    - Playback via the EZOutput

 */
-(void)    microphone:(EZMicrophone *)microphone
        hasBufferList:(AudioBufferList *)bufferList
       withBufferSize:(UInt32)bufferSize
 withNumberOfChannels:(UInt32)numberOfChannels
{
	// Getting audio data as an AudioBufferList that can be directly fed into the EZRecorder
	// or EZOutput. Say whattt...
}

Pausing/Resuming The Microphone

Pause or resume fetching audio at any time like so:

// Stop fetching audio
[self.microphone stopFetchingAudio];

// Resume fetching audio
[self.microphone startFetchingAudio];

Alternatively, you could also toggle the microphoneOn property (safe to use with Cocoa Bindings):

// Stop fetching audio
self.microphone.microphoneOn = NO;

// Start fetching audio
self.microphone.microphoneOn = YES;

EZOutput

Provides flexible playback to the default output device by asking the EZOutputDataSource for audio data to play. Doesn't care where the buffers come from (microphone, audio file, streaming audio, etc). As of 1.0.0 the EZOutputDataSource has been simplified to have only one method to provide audio data to your EZOutput instance.

// The EZOutputDataSource should fill out the audioBufferList with the given frame count.
// The timestamp is provided for sample accurate calculation, but for basic use cases can
// be ignored.
- (OSStatus)        output:(EZOutput *)output
 shouldFillAudioBufferList:(AudioBufferList *)audioBufferList
        withNumberOfFrames:(UInt32)frames
                 timestamp:(const AudioTimeStamp *)timestamp;

Relevant Example Projects

  • EZAudioPlayFileExample (iOS)
  • EZAudioPlayFileExample (OSX)
  • EZAudioPassThroughExample (iOS)
  • EZAudioPassThroughExample (OSX)

Creating An Output

Create an EZOutput by declaring a property and initializing it like so:

// Declare the EZOutput as a strong property
@property (nonatomic, strong) EZOutput *output;
...

// Initialize the EZOutput instance and assign it a delegate to provide the output audio data
self.output = [EZOutput outputWithDataSource:self];

Alternatively, you could also use the shared output instance and just assign it an EZOutputDataSource if you will only have one EZOutput instance for your application.

// Assign a data source to the shared instance of the output to provide the output audio data
[EZOutput sharedOutput].dataSource = self;

Setting The Device

The EZOutput uses an EZAudioDevice instance to select what specific hardware destination it will output audio to. You'd use this if you wanted to change the output device like in the EZAudioPlayFileExample for OSX. At any time you can change which output device is used by setting the device property:

// By default the EZOutput uses the default output device, but you can change this at any time
EZAudioDevice *currentOutputDevice = [EZAudioDevice currentOutputDevice];
[self.output setDevice:currentOutputDevice];

Anytime the EZOutput changes its device it will trigger the EZOutputDelegate event:

- (void)output:(EZOutput *)output changedDevice:(EZAudioDevice *)device
{
    NSLog(@"Change output device to: %@", device);
}

Playing Audio

Setting The Input Format

When providing audio data the EZOutputDataSource will expect you to fill out the AudioBufferList provided with whatever inputFormat is set on the EZOutput. By default the input format is a stereo, non-interleaved, float format (see defaultInputFormat for more information). If you're dealing with a different input format (which is typically the case), just set the inputFormat property. For instance:

// Set a mono, float format with a sample rate of 44.1 kHz
AudioStreamBasicDescription monoFloatFormat = [EZAudioUtilities monoFloatFormatWithSampleRate:44100.0f];
[self.output setInputFormat:monoFloatFormat];

Implementing The EZOutputDataSource

The EZAudioPlayer implements the EZOutputDataSource internally, using an EZAudioFile to read audio from an audio file on disk like so:

- (OSStatus)        output:(EZOutput *)output
 shouldFillAudioBufferList:(AudioBufferList *)audioBufferList
        withNumberOfFrames:(UInt32)frames
                 timestamp:(const AudioTimeStamp *)timestamp
{
    if (self.audioFile)
    {
        UInt32 bufferSize; // amount of frames actually read
        BOOL eof; // end of file
        [self.audioFile readFrames:frames
                   audioBufferList:audioBufferList
                        bufferSize:&bufferSize
                               eof:&eof];
        if (eof && [self.delegate respondsToSelector:@selector(audioPlayer:reachedEndOfAudioFile:)])
        {
            [self.delegate audioPlayer:self reachedEndOfAudioFile:self.audioFile];
        }
        if (eof && self.shouldLoop)
        {
            [self seekToFrame:0];
        }
        else if (eof)
        {
            [self pause];
            [self seekToFrame:0];
            [[NSNotificationCenter defaultCenter] postNotificationName:EZAudioPlayerDidReachEndOfFileNotification
                                                                object:self];
        }
    }
    return noErr;
}

I created a sample project that uses the EZOutput to act as a signal generator to play sine, square, triangle, sawtooth, and noise waveforms. Here's a snippet of code to generate a sine tone:

...
double const SAMPLE_RATE = 44100.0;

- (void)awakeFromNib
{
    //
    // Create EZOutput to play audio data with mono format (EZOutput will convert
    // this mono, float "inputFormat" to a clientFormat, i.e. the stereo output format).
    //
    AudioStreamBasicDescription inputFormat = [EZAudioUtilities monoFloatFormatWithSampleRate:SAMPLE_RATE];
    self.output = [EZOutput outputWithDataSource:self inputFormat:inputFormat];
    [self.output setDelegate:self];
    self.frequency = 200.0;
    self.sampleRate = SAMPLE_RATE;
    self.amplitude = 0.80;
}

- (OSStatus)        output:(EZOutput *)output
 shouldFillAudioBufferList:(AudioBufferList *)audioBufferList
        withNumberOfFrames:(UInt32)frames
                 timestamp:(const AudioTimeStamp *)timestamp
{
    Float32 *buffer = (Float32 *)audioBufferList->mBuffers[0].mData;
    size_t bufferByteSize = (size_t)audioBufferList->mBuffers[0].mDataByteSize;
    double theta = self.theta;
    double frequency = self.frequency;
    double thetaIncrement = 2.0 * M_PI * frequency / SAMPLE_RATE;
    if (self.type == GeneratorTypeSine)
    {
        for (UInt32 frame = 0; frame < frames; frame++)
        {
            buffer[frame] = self.amplitude * sin(theta);
            theta += thetaIncrement;
            if (theta > 2.0 * M_PI)
            {
                theta -= 2.0 * M_PI;
            }
        }
        self.theta = theta;
    }
    else
    {
        // ... the square, triangle, sawtooth, and noise branches are
        // implemented in the full source linked below
    }
    return noErr;
}

For the full implementation of the square, triangle, sawtooth, and noise functions see: https://github.com/syedhali/SineExample/blob/master/SineExample/GeneratorViewController.m#L220-L305
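
For illustration, here's roughly what the square branch could look like (a hedged sketch only; GeneratorTypeSquare is assumed from the linked project, and the authoritative implementation is at the link above):

// Hypothetical square wave branch for the data source callback above. A square
// wave outputs +amplitude for the first half of each phase cycle and -amplitude
// for the second half, using the same theta bookkeeping as the sine branch.
if (self.type == GeneratorTypeSquare)
{
    for (UInt32 frame = 0; frame < frames; frame++)
    {
        buffer[frame] = self.amplitude * (theta < M_PI ? 1.0 : -1.0);
        theta += thetaIncrement;
        if (theta > 2.0 * M_PI)
        {
            theta -= 2.0 * M_PI;
        }
    }
    self.theta = theta;
}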

Once the EZOutput has started it will send the EZOutputDelegate the audio back as float arrays for visualizing. These are converted inside the EZOutput component from whatever input format you may have provided. For instance, if you provide an interleaved, signed integer AudioStreamBasicDescription for the inputFormat property, then that will be automatically converted to a stereo, non-interleaved, float format when sent back in the playedAudio:... delegate method below as an array of float arrays:

/**
 The output data represented as non-interleaved float arrays useful for:
    - Creating real-time waveforms using EZAudioPlot or EZAudioPlotGL
    - Creating any number of custom visualizations that utilize audio!
 */
- (void)       output:(EZOutput *)output
          playedAudio:(float **)buffer
       withBufferSize:(UInt32)bufferSize
 withNumberOfChannels:(UInt32)numberOfChannels
{
    __weak typeof (self) weakSelf = self;
    dispatch_async(dispatch_get_main_queue(), ^{
	// Update plot, buffer[0] = left channel, buffer[1] = right channel
    });
}

Pausing/Resuming The Output

Pause or resume the output component at any time like so:

// Stop playback
[self.output stopPlayback];

// Resume playback
[self.output startPlayback];

Chaining Audio Unit Effects

Internally the EZOutput is using an AUGraph to chain together a converter, mixer, and output audio units. You can hook into this graph by subclassing EZOutput and implementing the method:

// By default this method connects the AUNode representing the input format converter to
// the mixer node. In subclasses you can add effects in the chain between the converter
// and mixer by creating additional AUNodes, adding them to the AUGraph provided below,
// and then connecting them together.
- (OSStatus)connectOutputOfSourceNode:(AUNode)sourceNode
                  sourceNodeOutputBus:(UInt32)sourceNodeOutputBus
                    toDestinationNode:(AUNode)destinationNode
              destinationNodeInputBus:(UInt32)destinationNodeInputBus
                              inGraph:(AUGraph)graph;

This was inspired by the audio processing graph from CocoaLibSpotify (Daniel Kennett of Spotify has an excellent blog post explaining how to add an EQ to the CocoaLibSpotify AUGraph).

Here's an example of how to add a delay audio unit (kAudioUnitSubType_Delay):

// In interface, declare delay node info property
@property (nonatomic, assign) EZAudioNodeInfo *delayNodeInfo;

// In implementation, override the connection method
- (OSStatus)connectOutputOfSourceNode:(AUNode)sourceNode
                  sourceNodeOutputBus:(UInt32)sourceNodeOutputBus
                    toDestinationNode:(AUNode)destinationNode
              destinationNodeInputBus:(UInt32)destinationNodeInputBus
                              inGraph:(AUGraph)graph
{
    self.delayNodeInfo = (EZAudioNodeInfo *)malloc(sizeof(EZAudioNodeInfo));

    // A description for the delay effect audio unit
    AudioComponentDescription delayComponentDescription;
    delayComponentDescription.componentType = kAudioUnitType_Effect;
    delayComponentDescription.componentSubType = kAudioUnitSubType_Delay;
    delayComponentDescription.componentManufacturer = kAudioUnitManufacturer_Apple;
    delayComponentDescription.componentFlags = 0;
    delayComponentDescription.componentFlagsMask = 0;

    [EZAudioUtilities checkResult:AUGraphAddNode(graph,
                                                 &delayComponentDescription,
                                                 &self.delayNodeInfo->node)
                        operation:"Failed to add node for time shift"];

    // Get the delay Audio Unit from the node
    [EZAudioUtilities checkResult:AUGraphNodeInfo(graph,
                                                  self.delayNodeInfo->node,
                                                  NULL,
                                                  &self.delayNodeInfo->audioUnit)
                        operation:"Failed to get audio unit for delay node"];

    // Connect the output of the source node to the input of the delay node
    [EZAudioUtilities checkResult:AUGraphConnectNodeInput(graph,
                                                          sourceNode,
                                                          sourceNodeOutputBus,
                                                          self.delayNodeInfo->node,
                                                          0)
                        operation:"Failed to connect source node into delay node"];

    // Connect the output of the delay node to the input of the destination node, thus completing the chain.
    [EZAudioUtilities checkResult:AUGraphConnectNodeInput(graph,
                                                          self.delayNodeInfo->node,
                                                          0,
                                                          destinationNode,
                                                          destinationNodeInputBus)
                        operation:"Failed to connect delay to destination node"];
    return noErr;
}

// Clean up
- (void)dealloc
{
    free(self.delayNodeInfo);
}

EZAudioFile

Provides simple read/seek operations, pulls waveform amplitude data, and provides the EZAudioFileDelegate to notify of any read/seek action occurring on the EZAudioFile. This can be thought of as the NSImage/UIImage equivalent of the audio world.

Relevant Example Projects

  • EZAudioWaveformFromFileExample (iOS)
  • EZAudioWaveformFromFileExample (OSX)

Opening An Audio File

To open an audio file create a new instance of the EZAudioFile class.

// Declare the EZAudioFile as a strong property
@property (nonatomic, strong) EZAudioFile *audioFile;

...

// Initialize the EZAudioFile instance and assign it a delegate to receive the read/seek callbacks
self.audioFile = [EZAudioFile audioFileWithURL:[NSURL fileURLWithPath:@"/path/to/your/file"] delegate:self];

Getting Waveform Data

The EZAudioFile allows you to quickly fetch waveform data from an audio file with as much or as little detail as you'd like.

__weak typeof (self) weakSelf = self;
// Get a waveform with 1024 points of data. We can adjust the number of points to whatever level
// of detail is needed by the application
[self.audioFile getWaveformDataWithNumberOfPoints:1024
                                  completionBlock:^(float **waveformData,
                                                    int length)
{
     [weakSelf.audioPlot updateBuffer:waveformData[0]
                       withBufferSize:length];
}];

Reading From An Audio File

Reading audio data from a file requires you to create an AudioBufferList to hold the data. The EZAudio utility function, audioBufferList, provides a convenient way to get an allocated AudioBufferList to use. There is also a utility function, freeBufferList:, to free (or release) the AudioBufferList when you are done using that audio data.

Note: You have to free the AudioBufferList, even in ARC.

// Allocate an AudioBufferList to hold the audio data (the client format is the non-compressed
// in-app format that is used for reading, it's different than the file format which is usually
// something compressed like an mp3 or m4a)
AudioStreamBasicDescription clientFormat = [self.audioFile clientFormat];
UInt32 numberOfFramesToRead = 512;
UInt32 channels = clientFormat.mChannelsPerFrame;
BOOL isInterleaved = [EZAudioUtilities isInterleaved:clientFormat];
AudioBufferList *bufferList = [EZAudioUtilities audioBufferListWithNumberOfFrames:numberOfFramesToRead
                                                                 numberOfChannels:channels
                                                                      interleaved:isInterleaved];

// Read the frames from the EZAudioFile into the AudioBufferList
UInt32 framesRead;
BOOL isEndOfFile;
[self.audioFile readFrames:numberOfFramesToRead
           audioBufferList:bufferList
                bufferSize:&framesRead
                       eof:&isEndOfFile];
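
When you're done with the audio data, free the AudioBufferList using the freeBufferList: utility mentioned above (required even under ARC, since the list was heap-allocated by the utility function):

// Free the AudioBufferList allocated above once you're finished with the audio data
[EZAudioUtilities freeBufferList:bufferList];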

When a read occurs the EZAudioFileDelegate receives two events.

An event notifying the delegate of the read audio data as float arrays:

-(void)     audioFile:(EZAudioFile *)audioFile
            readAudio:(float **)buffer
       withBufferSize:(UInt32)bufferSize
 withNumberOfChannels:(UInt32)numberOfChannels
{
    __weak typeof (self) weakSelf = self;
    dispatch_async(dispatch_get_main_queue(), ^{
        [weakSelf.audioPlot updateBuffer:buffer[0]
                          withBufferSize:bufferSize];
    });
}

and an event notifying the delegate of the new frame position within the EZAudioFile:

-(void)audioFile:(EZAudioFile *)audioFile updatedPosition:(SInt64)framePosition
{
    __weak typeof (self) weakSelf = self;
    dispatch_async(dispatch_get_main_queue(), ^{
		// Update UI
    });
}

Seeking Through An Audio File

You can seek very easily through an audio file using the EZAudioFile's seekToFrame: method. The EZAudioFile provides a totalFrames method to provide you the total number of frames in an audio file so you can calculate a proper offset.

// Get the total number of frames for the audio file
SInt64 totalFrames = [self.audioFile totalFrames];

// Seeks halfway through the audio file
[self.audioFile seekToFrame:(totalFrames/2)];

// Alternatively, you can seek using seconds
NSTimeInterval duration = [self.audioFile duration];
[self.audioFile setCurrentTime:duration/2.0];

When a seek occurs the EZAudioFileDelegate receives the seek event:

-(void)audioFile:(EZAudioFile *)audioFile updatedPosition:(SInt64)framePosition
{
    __weak typeof (self) weakSelf = self;
    dispatch_async(dispatch_get_main_queue(), ^{
		// Update UI
    });
}

EZAudioPlayer

Provides a class that combines the EZAudioFile and EZOutput for file playback of all Core Audio supported formats to any hardware device. Because the EZAudioPlayer internally hooks into the EZAudioFileDelegate and EZOutputDelegate, you should implement the EZAudioPlayerDelegate to receive the playedAudio:... and updatedPosition: events. The EZAudioPlayFileExample projects for iOS and OSX show how to use the EZAudioPlayer to play audio files, visualize the samples with an audio plot, adjust the volume, and change the output device using the EZAudioDevice class. The EZAudioPlayer primarily uses NSNotificationCenter to post notifications because oftentimes you have one audio player and multiple UI elements that need to listen for player events to properly update.

Creating An Audio Player

// Declare the EZAudioPlayer as a strong property
@property (nonatomic, strong) EZAudioPlayer *player;

...

// Create an EZAudioPlayer with a delegate that conforms to EZAudioPlayerDelegate
self.player = [EZAudioPlayer audioPlayerWithDelegate:self];

Playing An Audio File

The EZAudioPlayer uses an internal EZAudioFile to provide data to its EZOutput for output via the EZOutputDataSource. You can provide an EZAudioFile by just setting the audioFile property on the EZAudioPlayer, which will make a copy of the EZAudioFile at that file path url for its own use.

// Set the EZAudioFile for playback by setting the `audioFile` property
EZAudioFile *audioFile = [EZAudioFile audioFileWithURL:[NSURL fileURLWithPath:@"/path/to/your/file"]];
[self.player setAudioFile:audioFile];

// This, however, will not pause playback if a current file is playing. Instead,
// it's encouraged to use `playAudioFile:` if you're swapping in a new
// audio file while playback is already running
EZAudioFile *audioFile = [EZAudioFile audioFileWithURL:[NSURL fileURLWithPath:@"/path/to/your/file"]];
[self.player playAudioFile:audioFile];

As audio is played the EZAudioPlayerDelegate will receive the playedAudio:..., updatedPosition:..., and, if the audio file reaches the end of the file, the reachedEndOfAudioFile: events. A typical implementation of the EZAudioPlayerDelegate would be something like:

- (void)  audioPlayer:(EZAudioPlayer *)audioPlayer
          playedAudio:(float **)buffer
       withBufferSize:(UInt32)bufferSize
 withNumberOfChannels:(UInt32)numberOfChannels
          inAudioFile:(EZAudioFile *)audioFile
{
    __weak typeof (self) weakSelf = self;
    // Update an EZAudioPlot or EZAudioPlotGL to reflect the audio data coming out
    // of the EZAudioPlayer (post volume and pan)
    dispatch_async(dispatch_get_main_queue(), ^{
        [weakSelf.audioPlot updateBuffer:buffer[0]
                          withBufferSize:bufferSize];
    });
}

//------------------------------------------------------------------------------

- (void)audioPlayer:(EZAudioPlayer *)audioPlayer
    updatedPosition:(SInt64)framePosition
        inAudioFile:(EZAudioFile *)audioFile
{
    __weak typeof (self) weakSelf = self;
    // Update any UI controls including sliders and labels
    // display current time/duration
    dispatch_async(dispatch_get_main_queue(), ^{
        if (!weakSelf.positionSlider.highlighted)
        {
            weakSelf.positionSlider.floatValue = (float)framePosition;
            weakSelf.positionLabel.integerValue = framePosition;
        }
    });
}

Seeking

You can seek through the audio file in a similar fashion as with the EZAudioFile. That is, using the seekToFrame: method or the currentTime property.

// Get the total number of frames and seek halfway
SInt64 totalFrames = [self.player totalFrames];
[self.player seekToFrame:(totalFrames/2)];

// Alternatively, you can seek using seconds
NSTimeInterval duration = [self.player duration];
[self.player setCurrentTime:duration/2.0];

Setting Playback Parameters

Because the EZAudioPlayer wraps the EZOutput you can adjust the volume and pan parameters for playback.

// Make it half as loud, 0 = silence, 1 = full volume. Default is 1.
[self.player setVolume:0.5];

// Make it only play on the left, -1 = left, 1 = right. Default is 0.0 (center)
[self.player setPan:-1.0];

Getting Audio File Parameters

The EZAudioPlayer wraps the EZAudioFile and provides a high level interface for pulling values like current time, duration, the frame index, total frames, etc.

NSTimeInterval  currentTime          = [self.player currentTime];
NSTimeInterval  duration             = [self.player duration];
NSString       *formattedCurrentTime = [self.player formattedCurrentTime]; // MM:SS formatted
NSString       *formattedDuration    = [self.player formattedDuration];    // MM:SS formatted
SInt64          frameIndex           = [self.player frameIndex];
SInt64          totalFrames          = [self.player totalFrames];

In addition, the EZOutput properties are also offered at a high level as well:

EZAudioDevice *outputDevice = [self.player device];
BOOL 	       isPlaying    = [self.player isPlaying];
float          pan          = [self.player pan];
float          volume       = [self.player volume];

Notifications

The EZAudioPlayer provides the following notifications (as of 1.1.2):

/**
 Notification that occurs whenever the EZAudioPlayer changes its `audioFile` property. Check the new value using the EZAudioPlayer's `audioFile` property.
 */
FOUNDATION_EXPORT NSString * const EZAudioPlayerDidChangeAudioFileNotification;

/**
 Notification that occurs whenever the EZAudioPlayer changes its `device` property. Check the new value using the EZAudioPlayer's `device` property.
 */
FOUNDATION_EXPORT NSString * const EZAudioPlayerDidChangeOutputDeviceNotification;

/**
 Notification that occurs whenever the EZAudioPlayer changes its `output` component's `pan` property. Check the new value using the EZAudioPlayer's `pan` property.
 */
FOUNDATION_EXPORT NSString * const EZAudioPlayerDidChangePanNotification;

/**
 Notification that occurs whenever the EZAudioPlayer changes its `output` component's play state. Check the new value using the EZAudioPlayer's `isPlaying` property.
 */
FOUNDATION_EXPORT NSString * const EZAudioPlayerDidChangePlayStateNotification;

/**
 Notification that occurs whenever the EZAudioPlayer changes its `output` component's `volume` property. Check the new value using the EZAudioPlayer's `volume` property.
 */
FOUNDATION_EXPORT NSString * const EZAudioPlayerDidChangeVolumeNotification;

/**
 Notification that occurs whenever the EZAudioPlayer has reached the end of a file and its `shouldLoop` property has been set to NO.
 */
FOUNDATION_EXPORT NSString * const EZAudioPlayerDidReachEndOfFileNotification;

/**
 Notification that occurs whenever the EZAudioPlayer performs a seek via the `seekToFrame` method or `setCurrentTime:` property setter. Check the new `currentTime` or `frameIndex` value using the EZAudioPlayer's `currentTime` or `frameIndex` property, respectively.
 */
FOUNDATION_EXPORT NSString * const EZAudioPlayerDidSeekNotification;
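
Since these are ordinary NSNotificationCenter notifications, any number of UI elements can observe them. For example, here's a minimal sketch of listening for play state changes (playStateDidChange: is just an illustrative selector name):

// Register for play state change notifications from a specific player instance
[[NSNotificationCenter defaultCenter] addObserver:self
                                         selector:@selector(playStateDidChange:)
                                             name:EZAudioPlayerDidChangePlayStateNotification
                                           object:self.player];

- (void)playStateDidChange:(NSNotification *)notification
{
    // Check the new value using the EZAudioPlayer's `isPlaying` property
    EZAudioPlayer *player = (EZAudioPlayer *)notification.object;
    NSLog(@"isPlaying: %d", player.isPlaying);
}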

EZRecorder

Provides a way to record any audio source to an audio file. This hooks into the other components quite nicely to do something like plot the audio waveform while recording to give visual feedback as to what is happening. The EZRecorderDelegate provides methods to listen to write events and a final close event on the EZRecorder (explained below).

Relevant Example Projects

  • EZAudioRecordExample (iOS)
  • EZAudioRecordExample (OSX)

Creating A Recorder

To create an EZRecorder you must provide at least 3 things: an NSURL representing the file path of where the audio file should be written to (an existing file will be overwritten), a clientFormat representing the format in which you will be providing the audio data, and either an EZRecorderFileType or an AudioStreamBasicDescription representing the file format of the audio data on disk.

// Provide a file path url to write to, a client format (always linear PCM, this is the format
// coming from another component like the EZMicrophone's audioStreamBasicDescription property),
// and an EZRecorderFileType constant representing either a wav (EZRecorderFileTypeWAV),
// aiff (EZRecorderFileTypeAIFF), or m4a (EZRecorderFileTypeM4A) file format. The advantage of
// this is that the `fileFormat` property will be automatically filled out for you.
+ (instancetype)recorderWithURL:(NSURL *)url
                   clientFormat:(AudioStreamBasicDescription)clientFormat
                       fileType:(EZRecorderFileType)fileType;

// Alternatively, you can provide a file path url to write to, a client format (always linear
// PCM, this is the format coming from another component like the EZMicrophone's
// audioStreamBasicDescription property), a `fileFormat` representing your custom
// AudioStreamBasicDescription, and an AudioFileTypeID that corresponds with your `fileFormat`.
+ (instancetype)recorderWithURL:(NSURL *)url
                   clientFormat:(AudioStreamBasicDescription)clientFormat
                     fileFormat:(AudioStreamBasicDescription)fileFormat
                audioFileTypeID:(AudioFileTypeID)audioFileTypeID;

Start by declaring an instance of the EZRecorder (you will have one of these per audio file written out)

// Declare the EZRecorder as a strong property
@property (nonatomic, strong) EZRecorder *recorder;

and initialize it using one of the two initializers from above. For instance, using the EZRecorderFileType shortcut initializer you could create an instance like so:

// Example using an EZMicrophone and a file path location on your computer to
// write out an M4A file.
self.recorder = [EZRecorder recorderWithURL:[NSURL fileURLWithPath:@"/path/to/your/file.m4a"]
                               clientFormat:[self.microphone audioStreamBasicDescription]
                                   fileType:EZRecorderFileTypeM4A];

or to configure your own custom file format, say to write out an 8000 Hz iLBC file:

// Example using an EZMicrophone, a file path location on your computer,
// and an iLBC file format.
AudioStreamBasicDescription iLBCFormat = [EZAudioUtilities iLBCFormatWithSampleRate:8000];
self.recorder = [EZRecorder recorderWithURL:[NSURL fileURLWithPath:@"/path/to/your/file.caf"]
                               clientFormat:[self.microphone audioStreamBasicDescription]
                                 fileFormat:iLBCFormat
                            audioFileTypeID:kAudioFileCAFType];

Recording Some Audio

Once you've initialized your EZRecorder you can append data by passing in an AudioBufferList and its buffer size like so:

// Append the microphone data coming as an AudioBufferList with the specified buffer size
// to the recorder
-(void)    microphone:(EZMicrophone *)microphone
        hasBufferList:(AudioBufferList *)bufferList
       withBufferSize:(UInt32)bufferSize
 withNumberOfChannels:(UInt32)numberOfChannels
{
    // Getting audio data as a buffer list that can be directly fed into the EZRecorder. This is
    // happening on the audio thread - any UI updating needs a GCD main queue block.
    if (self.isRecording)
    {
        // Since we set the recorder's client format to be that of the EZMicrophone instance,
        // the audio data coming in represented by the AudioBufferList can directly be provided
        // to the EZRecorder. The EZRecorder will internally convert the audio data from the
        // `clientFormat` to `fileFormat`.
        [self.recorder appendDataFromBufferList:bufferList
                                 withBufferSize:bufferSize];
    }
}

Responding To An EZRecorder After It Has Written Audio Data

Once audio data has been successfully written with the EZRecorder it will notify the EZRecorderDelegate of the event so it can respond via:

// Triggers after the EZRecorder's `appendDataFromBufferList:withBufferSize:` method is called
// so you can update your interface accordingly.
- (void)recorderUpdatedCurrentTime:(EZRecorder *)recorder
{
    __weak typeof (self) weakSelf = self;
    // This will get triggered on the thread that the write occurred on so be sure to wrap your UI
    // updates in a GCD main queue block! However, I highly recommend you first pull the values
    // you'd like to update the interface with before entering the GCD block to avoid trying to
    // fetch a value after the audio file has been closed.
    NSString *formattedCurrentTime = [recorder formattedCurrentTime]; // MM:SS formatted
    dispatch_async(dispatch_get_main_queue(), ^{
    	// Update label
        weakSelf.currentTimeLabel.stringValue = formattedCurrentTime;
    });
}

Closing An Audio File

When your recording is done, be sure to call the closeAudioFile method to make sure the audio file written to disk is properly closed before you attempt to read it again.

// Close the EZRecorder's audio file BEFORE reading
[self.recorder closeAudioFile];

This will trigger the EZRecorder's delegate method:

- (void)recorderDidClose:(EZRecorder *)recorder
{
    recorder.delegate = nil;
}

Interface Components

EZAudio currently offers two drop-in audio waveform components that help simplify the process of visualizing audio.

EZAudioPlot

Provides an audio waveform plot that uses CoreGraphics to perform the drawing. On iOS this is a subclass of UIView while on OSX this is a subclass of NSView. As of the 1.0.0 release, the waveforms are drawn using CALayers where compositing is done on the GPU. As a result, there have been some huge performance gains and CPU usage per real-time plot (i.e. redrawing at 60 frames per second) is now about 2-3% as opposed to the 20-30% we were experiencing before.

Relevant Example Projects

  • EZAudioCoreGraphicsWaveformExample (iOS)
  • EZAudioCoreGraphicsWaveformExample (OSX)
  • EZAudioRecordExample (iOS)
  • EZAudioRecordExample (OSX)
  • EZAudioWaveformFromFileExample (iOS)
  • EZAudioWaveformFromFileExample (OSX)
  • EZAudioFFTExample (iOS)
  • EZAudioFFTExample (OSX)

Creating An Audio Plot

You can create an audio plot in the interface builder by dragging in a UIView on iOS or an NSView on OSX onto your content area. Then change the custom class of the UIView/NSView to EZAudioPlot.

Alternatively, you could create the audio plot programmatically:

// Programmatically create an audio plot
EZAudioPlot *audioPlot = [[EZAudioPlot alloc] initWithFrame:self.view.frame];
[self.view addSubview:audioPlot];

Customizing The Audio Plot

All plots offer the ability to change the background color, waveform color, plot type (buffer or rolling), toggle between filled and stroked, and toggle between mirrored and unmirrored (about the x-axis). For iOS colors are of the type UIColor while on OSX colors are of the type NSColor.

// Background color (use UIColor for iOS)
audioPlot.backgroundColor = [NSColor colorWithCalibratedRed:0.816
                                                      green:0.349
                                                       blue:0.255
                                                      alpha:1];
// Waveform color (use UIColor for iOS)
audioPlot.color = [NSColor colorWithCalibratedRed:1.000
                                            green:1.000
                                             blue:1.000
                                            alpha:1];
// Plot type
audioPlot.plotType = EZPlotTypeBuffer;
// Fill
audioPlot.shouldFill = YES;
// Mirror
audioPlot.shouldMirror = YES;

IBInspectable Attributes

Also, as of iOS 8 you can adjust the background color, color, gain, shouldFill, and shouldMirror parameters directly in the Interface Builder via the IBInspectable attributes:

[Screenshot: EZAudioPlot IBInspectable attributes in Interface Builder]

Updating The Audio Plot

All plots have only one update function, updateBuffer:withBufferSize:, which expects a float array and its length.

// The microphone component provides audio data to its delegate as an array of float buffer arrays.
- (void)   microphone:(EZMicrophone *)microphone
     hasAudioReceived:(float **)buffer
       withBufferSize:(UInt32)bufferSize
 withNumberOfChannels:(UInt32)numberOfChannels
{
    /**
     Update the audio plot using the float array provided by the microphone:
       buffer[0] = left channel
       buffer[1] = right channel
     Note: Audio updates happen asynchronously so we need to make sure
         to update the plot on the main thread
     */
    __weak typeof (self) weakSelf = self;
    dispatch_async(dispatch_get_main_queue(), ^{
        [weakSelf.audioPlot updateBuffer:buffer[0] withBufferSize:bufferSize];
    });
}

EZAudioPlotGL

Provides an audio waveform plot that uses OpenGL to perform the drawing. The API for this class is exactly the same as that of the EZAudioPlot above. On iOS this is a subclass of the GLKView while on OSX this is a subclass of the NSOpenGLView. In most cases this is the plot you want to use: it's GPU-accelerated, can handle lots of points while displaying 60 frames per second (the EZAudioPlot starts to choke on anything greater than 1024), and performs amazingly on all devices. The only downside is that you can only have one OpenGL plot onscreen at a time. However, you can combine OpenGL plots with Core Graphics plots in the view hierarchy (see the EZAudioRecordExample for an example of how to do this).

Relevant Example Projects

  • EZAudioOpenGLWaveformExample (iOS)
  • EZAudioOpenGLWaveformExample (OSX)
  • EZAudioPlayFileExample (iOS)
  • EZAudioPlayFileExample (OSX)
  • EZAudioRecordExample (iOS)
  • EZAudioRecordExample (OSX)
  • EZAudioPassThroughExample (iOS)
  • EZAudioPassThroughExample (OSX)

Creating An OpenGL Audio Plot

You can create an audio plot in the interface builder by dragging in a UIView on iOS or an NSView on OSX onto your content area. Then change the custom class of the UIView/NSView to EZAudioPlotGL.

Alternatively, you could create the EZAudioPlotGL programmatically:

// Programmatically create an audio plot
EZAudioPlotGL *audioPlotGL = [[EZAudioPlotGL alloc] initWithFrame:self.view.frame];
[self.view addSubview:audioPlotGL];

Customizing The OpenGL Audio Plot

All plots offer the ability to change the background color, waveform color, plot type (buffer or rolling), toggle between filled and stroked, and toggle between mirrored and unmirrored (about the x-axis). For iOS colors are of the type UIColor while on OSX colors are of the type NSColor.

// Background color (use UIColor for iOS)
audioPlotGL.backgroundColor = [NSColor colorWithCalibratedRed:0.816
                                                        green:0.349
                                                         blue:0.255
                                                        alpha:1];
// Waveform color (use UIColor for iOS)
audioPlotGL.color = [NSColor colorWithCalibratedRed:1.000
                                              green:1.000
                                               blue:1.000
                                              alpha:1];
// Plot type
audioPlotGL.plotType = EZPlotTypeBuffer;
// Fill
audioPlotGL.shouldFill = YES;
// Mirror
audioPlotGL.shouldMirror = YES;

IBInspectable Attributes

Also, as of iOS 8 you can adjust the background color, color, gain, shouldFill, and shouldMirror parameters directly in the Interface Builder via the IBInspectable attributes:

[Screenshot: EZAudioPlotGL IBInspectable attributes in Interface Builder]

Updating The OpenGL Audio Plot

All plots have only one update function, updateBuffer:withBufferSize:, which expects a float array and its length.

// The microphone component provides audio data to its delegate as an array of float buffer arrays.
- (void)   microphone:(EZMicrophone *)microphone
     hasAudioReceived:(float **)buffer
       withBufferSize:(UInt32)bufferSize
 withNumberOfChannels:(UInt32)numberOfChannels
{
    /**
     Update the audio plot using the float array provided by the microphone:
       buffer[0] = left channel
       buffer[1] = right channel
     Note: Audio updates happen asynchronously so we need to make sure
         to update the plot on the main thread
     */
    __weak typeof (self) weakSelf = self;
    dispatch_async(dispatch_get_main_queue(), ^{
        [weakSelf.audioPlotGL updateBuffer:buffer[0] withBufferSize:bufferSize];
    });
}

License

EZAudio is available under the MIT license. See the LICENSE file for more info.

Contact & Contributors

Syed Haris Ali
www.syedharisali.com
syedhali07[at]gmail.com

Acknowledgements

The following people rock:

  • My brother, Reza Ali, for walking me through all the gritty details of OpenGL and his constant encouragement through this journey to 1.0.0.
  • Aure Prochazka for his amazing work on AudioKit and his encouragement to bring EZAudio to 1.0.0.
  • Daniel Kennett for writing this great blog post that inspired the rewrite of the EZOutput in 1.0.0.
  • Michael Tyson for creating the TPCircularBuffer and all his contributions to the community including the Amazing Audio Engine, Audiobus, and all the tasty pixel blog posts.
  • Chris Adamson and Kevin Avila for writing the amazing Learning Core Audio book.

Deprecated

As of today, June 13, 2016, I’m officially deprecating EZAudio. I’d like to thank everyone for the support over the last few years I’ve been hacking on EZAudio and working to make it better.

Alternatives

The best alternative to EZAudio is now AudioKit. Note that The Amazing Audio Engine and The Amazing Audio Engine 2 have now both been retired as well. Any further contributions I make to iOS/macOS/tvOS audio programming will be to AudioKit.

Why?

EZAudio started as a pet project of mine in 2013 in an attempt to reduce the amount of duplicate code I was writing for my own iOS/Mac audio projects. I originally just wanted to record audio from an iPhone’s mic and plot a waveform. This over time quickly grew into the collection of classes you may know today (EZMicrophone, EZAudioPlot, etc).

I apologize for not being more active in addressing the issues and pull requests, but I’m hoping you all understand I’m only one person. EZAudio was solely written and maintained by me out of love during weeks of time I wasn’t making any money, all while living in one of the most expensive cities in the world. Like many of you, I spend the majority of my time working full-time to sustain myself (I have bills and rent to pay too!). I’m transitioning to non-audio ventures and will hopefully be able to share another cool open source project soon enough.

Getting the opportunity to work on EZAudio with all of you has been an incredibly insightful experience and I’m incredibly grateful to have gotten a chance to share it with all of you. Thank you.

Is all of it broken?

As I’m writing this I’m a little hesitant to say it’s all broken (despite all the issues filed), but I’d recommend forking this repo from this point forward and using at your own risk. I’ll probably be continuing to use EZAudio’s EZAudioPlot for rendering waveforms in my future projects, however, please don’t expect any updates to this repo. Deprecated means I won’t be responding to issues, EZAudio-related emails, or pushing any new changes to this repo.

Comments
  • Error: Failed to fill complex buffer in float converter ('insz') on iPhone 6S and 6S Plus

    Error: Failed to fill complex buffer in float converter ('insz') on iPhone 6S and 6S Plus

    Everything is working great on iPhone 4S, 5 / 5S, 6 and 6 Plus , but when trying to use audioplot on iPhone 6S and 6S Plus, following error is coming:

    Error: Failed to fill complex buffer in float converter ('insz')

    I have also implemented https://github.com/syedhali/EZAudio/issues/173 solution, but no luck

    Please help!

    opened by hyd00 69
  • Wont compile - Xcode 6

    Wont compile - Xcode 6

    Trying to build this in Xcode 6 - and get the following:

    Implicit declaration of function glPushMatrix/glPopMatrix is invalid in c99.

    I have verified that open gl framework is set ... this build worked in Xcode 5 ... what has to be done to get it to work in 6 ?

    opened by pkasson 22
  • Opening a file from the iPod library makes EZAudioFile crash

    Opening a file from the iPod library makes EZAudioFile crash

    Hey!

    I tried to open an MPMediaItem file using this code:

    // Get asset url
    //
    NSURL *assetURL = [item valueForProperty:MPMediaItemPropertyAssetURL];
    self.asset = [AVAsset assetWithURL:assetURL];
    self.audioFile = [EZAudioFile audioFileWithURL:assetURL];
    
    
    // Set up waveform
    //
    [self.audioFile getWaveformDataWithCompletionBlock:^(float *waveformData, UInt32 length) {
    
        // Update the audio plot with the waveform data
        //
        self.audioPlot.plotType = EZPlotTypeBuffer;
        [self.audioPlot updateBuffer:waveformData withBufferSize:length];
    }];
    

    I also tried using to copy the song in a temporary folder using AVAssetExportSession but it didn't work.

    file:///private/var/mobile/Applications/B0746B0F-46A6-4292-94D3-49F3A6E22DD8/tmp/tempBase.m4a
    
    opened by jbouaziz 18
  • Recording in Fast-Forward mode

    Recording in Fast-Forward mode

    Hello @syedhali ,

    Thanks for writing this library! I am seeing interesting behavior in my app. For the most part the library works like a charm when I am recording audio using the library. But at times (may be 1/10) it records audio in fast forward mode. So if I think I have recorded 30 secs, it actually records it in ~2x speed, so my output audio is ~15 secs.

    I tried a whole bunch of debugging but am not able to figure out what the issue could be. Have you ever seen such behavior? Thoughts?

    opened by swaroopbutala 12
  • Custom ABSD ignored when using EZMicrophone init methods

    Custom ABSD ignored when using EZMicrophone init methods

    initWithMicrophoneDelegate:withAudioStreamBasicDescription: and its "instant" counterpart ignore the ABSD passed into the method because _createInputUnit is called in the call to initWithMicrophoneDelegate:.

    When trying to work around the issue doesn't seem to work either because no audio frames are delivered.

    int preferredSampleRate = 48000;
    AVAudioSession *audioSession = [AVAudioSession sharedInstance];
    [audioSession setPreferredSampleRate:preferredSampleRate error:&error];
    AudioStreamBasicDescription absd = [EZAudio monoCanonicalFormatWithSampleRate: preferredSampleRate];
    self.microphone = [[EZMicrophone alloc] initWithMicrophoneDelegate:self];
    [self.microphone setAudioStreamBasicDescription:absd];
    [self.microphone startFetchingAudio];
    
    opened by chrisballinger 12
  • App Crashes with AVPlayer Streaming

    App Crashes with AVPlayer Streaming

    When I run the app to stream a radio link I get a crash.

    here is my code to start the stream:

    AVPlayerItem* playerItem = [AVPlayerItem playerItemWithURL:[NSURL URLWithString:streamURL]]; [playerItem addObserver:self forKeyPath:@"timedMetadata" options:NSKeyValueObservingOptionNew context:nil]; music = [AVPlayer playerWithPlayerItem:playerItem]; [music play];

    How can I fix this problem?

    opened by AndrexOfficial 12
  • External Audio hardware?

    External Audio hardware?

    Hey Everyone,

    EZAudioDevice *currentInputDevice = [EZAudioDevice currentInputDevice]; NSLog(@"currentInputDevice: %@", currentInputDevice);

    and

    NSArray *inputDevices = [EZAudioDevice inputDevices]; NSLog(@"inputDevices: %@", inputDevices);

    work as expected WITHOUT connecting my RME babyface USB or MotU 828mk2 Firewire hardware, BUT if I do so, I get the following error (on exit):

    Error: Failed to get frame size ('!siz')

    Thanks alot!

    opened by sarrass 11
  • "Error: Failed to create new audio converter (-50)"

    No matter what I pass into [EZRecorder recorderWithDestinationURL:andSourceFormat:], it always prints this error message:

    Error: Failed to create new audio converter (-50)
    

    Are there limitations on the AudioStreamBasicDescription that can be used? Do the descriptions for EZMicrophone and EZRecorder have to match?

    I'm trying to output to aac but this AudioStreamBasicDescription doesn't work:

    struct AudioStreamBasicDescription desc;
    
    desc.mSampleRate = 44100;
    desc.mFormatID = kAudioFormatMPEG4AAC;
    desc.mFormatFlags = 0;
    desc.mBytesPerPacket = 2; // must have a value or won't write apparently
    desc.mFramesPerPacket = 0;
    desc.mBytesPerFrame = 0;
    desc.mChannelsPerFrame = 1;
    desc.mBitsPerChannel = 0;
    desc.mReserved = 0;
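
    For reference, compressed formats like AAC normally leave the byte-level fields at zero and let the converter choose packet sizes; setting mBytesPerPacket while mFramesPerPacket is 0, as above, is the kind of inconsistency that typically produces -50 (paramErr). A hedged, non-EZAudio-specific sketch of a canonical AAC ASBD:

    AudioStreamBasicDescription aac = {0};
    aac.mSampleRate       = 44100;
    aac.mFormatID         = kAudioFormatMPEG4AAC;
    aac.mChannelsPerFrame = 1;
    aac.mFramesPerPacket  = 1024; // AAC always encodes 1024 frames per packet
    // mBytesPerPacket, mBytesPerFrame, mBitsPerChannel, and mFormatFlags
    // stay 0 for a variable-bitrate format; the encoder fills in the rest.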
    
    opened by alecgorge 9
  • Problem With EZAudioPlayFile Example IOS7

    Hi there, this is a great library. However, when I run the example on an iPad, it regularly crashes. Often it shows: CrashIfClientProvidedBogusAudioBufferList

    Any ideas would be great :)

    Also, how would one reset the audio plot?

    Keep up the good work :)

    Josh
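
    On the plot-reset question: EZPlot, the base class of EZAudioPlot, declares a clear method, so something like the following should zero out the waveform (assuming a reasonably recent EZAudio version):

    [self.audioPlot clear];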

    opened by ghost 9
  • Changing Buffer Size

    I have been searching for a way to change the buffer size to increase the number of data points received per second.

    This issue (https://github.com/syedhali/EZAudio/issues/50) is similar to what I am trying to do, but there does not appear to be an answer. How do I decrease the buffer size?

    Thanks!
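
    On iOS the render buffer size is ultimately a property of the audio session rather than of EZAudio itself, so one approach (a sketch, not an EZAudio API) is to request a shorter preferred I/O buffer duration before starting the microphone:

    NSError *error = nil;
    AVAudioSession *session = [AVAudioSession sharedInstance];
    // 256 frames at 44.1 kHz is roughly 5.8 ms per callback.
    [session setPreferredIOBufferDuration:(256.0 / 44100.0) error:&error];
    // The value is only a hint; read session.IOBufferDuration afterwards
    // to see what the system actually granted.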

    opened by LukasJoswiak 7
  • Having a scrollview instead of erasing and redrawing the content

    Hey!

    Great job on the library! Really. Would there be a way to have a scroll view instead of erasing what's already been drawn? You can try recording in the SoundCloud app to see what I'm talking about.
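
    The closest built-in approximation (a sketch, not a true scroll view) is the rolling plot type with a longer history buffer:

    // Keep more points on screen so older audio scrolls across the plot
    // instead of being erased outright.
    self.audioPlot.plotType = EZPlotTypeRolling;
    [self.audioPlot setRollingHistoryLength:8192];

    A real scroll-back view would mean rendering finished waveform segments into subviews of a UIScrollView as the recording progresses.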

    opened by jbouaziz 7
  • Devices with no inputs crash EZAudio; the fix is to extend the framework to handle aggregate devices

    As of 2021, aggregate devices (or "combined devices", as they're sometimes called) exist: devices that can be set up to use only inputs or only outputs, just as some physical devices simply have no inputs or outputs. When I try to instantiate EZAudioDevice or enumerate devices, it crashes in EZAudioDevice.m:

    + (NSInteger)channelCountForScope:(AudioObjectPropertyScope)scope
                          forDeviceID:(AudioDeviceID)deviceID
    {
        AudioObjectPropertyAddress address;
        address.mScope = scope;
        address.mElement = kAudioObjectPropertyElementMaster;
        address.mSelector = kAudioDevicePropertyStreamConfiguration;

        // - - - - fetch the stream configuration - - - -
        AudioBufferList streamConfiguration;
        UInt32 propSize = sizeof(streamConfiguration);
        [EZAudioUtilities checkResult:AudioObjectGetPropertyData(deviceID,
                                                                 &address,
                                                                 0,
                                                                 NULL,
                                                                 &propSize,
                                                                 &streamConfiguration)
                            operation:"Failed to get frame size"];

        // - - - - sum the channels of each buffer - - - -
        NSInteger channelCount = 0;
        for (NSInteger i = 0; i < streamConfiguration.mNumberBuffers; i++)
        {
            channelCount += streamConfiguration.mBuffers[i].mNumberChannels;
            // FIXME: ^ crash! channelCount is a ridiculously high number here
            // and mNumberChannels is empty
        }

        return channelCount;
    }
    

    msg: [AudioHAL_Client] HALC_ProxyIOContext.cpp:1984:GetPropertyData: HALC_ProxyIOContext::_GetPropertyData: bad property data size for kAudioDevicePropertyStreamConfiguration

    [AudioHAL_Client] HALC_ShellObject.cpp:401:GetPropertyData: HALC_ShellObject::GetPropertyData: call to the proxy failed, Error: 561211770 (!siz)

    [AudioHAL_Client] HALPlugIn.cpp:292:ObjectGetPropertyData: HALPlugIn::ObjectGetPropertyData: got an error from the plug-in routine, Error: 561211770 (!siz)

    `Error: Failed to get frame size ('!siz')`
    


    I guess this is because EZAudio is a fairly old framework that didn't know about aggregate devices.

    But we could improve it. After googling, I found some hints on how to do it in this gist; a sketch of the usual fix is below.
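
    A minimal sketch of that fix (plain Core Audio, untested against EZAudio itself): query the property's size first, heap-allocate the AudioBufferList, and bail out gracefully when the query fails.

    + (NSInteger)channelCountForScope:(AudioObjectPropertyScope)scope
                          forDeviceID:(AudioDeviceID)deviceID
    {
        AudioObjectPropertyAddress address;
        address.mScope = scope;
        address.mElement = kAudioObjectPropertyElementMaster;
        address.mSelector = kAudioDevicePropertyStreamConfiguration;

        UInt32 propSize = 0;
        if (AudioObjectGetPropertyDataSize(deviceID, &address, 0, NULL, &propSize) != noErr)
        {
            return 0; // e.g. an aggregate device with no streams in this scope
        }

        // The stream configuration is variable-length, so it must be
        // heap-allocated with the size the HAL reports.
        AudioBufferList *bufferList = (AudioBufferList *)malloc(propSize);
        NSInteger channelCount = 0;
        if (AudioObjectGetPropertyData(deviceID, &address, 0, NULL, &propSize, bufferList) == noErr)
        {
            for (UInt32 i = 0; i < bufferList->mNumberBuffers; i++)
            {
                channelCount += bufferList->mBuffers[i].mNumberChannels;
            }
        }
        free(bufferList);
        return channelCount;
    }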

    Maybe someone wants to join in solving this issue.

    PS: here is the list of my audio devices:

    device.name Built-in Microphone -- device.UID AppleHDAEngineInput:1B,0,1,0:1
    device.name Built-in Output -- device.UID AppleHDAEngineOutput:1B,0,1,1:0
    device.name QU-16 Audio -- device.UID AppleUSBAudioEngine:Allen&Heath Ltd:QU-16:14130000:2,3
    device.name BlackHole 16ch -- device.UID BlackHole16ch_UID
    device.name Combi -- device.UID ~:AMS2_Aggregate:0

    As you can see, it is also possible to create an aggregate device with no inputs and outputs at all.

    When I change the system setup back to a normal audio device and delete the aggregate device, my app does not crash and works flawlessly, showing the channels properly. It is not a BlackHole issue either, as BlackHole works perfectly with EZAudio.

    opened by designerfuzzi 1
  • EZAudioPlot

    Hello,

    I am using your awesome library for EZAudioPlot. (AudioKit is very, very slow to draw a plot!) But I wonder why there is a gap (a blank) at the end of the plot, even though there is sound in that part. If you have an idea, I'd love to hear it.

    The audio plot is configured like that:

        @IBOutlet weak var audioPlot: EZAudioPlot! {
            didSet {
                EZAudioUtilities.setShouldExitOnCheckResultFail(false)
                audioPlot.plotType = EZPlotType.buffer
                audioPlot.isOpaque = false
                audioPlot.backgroundColor = appDelegate.secondaryBackgroundColor()
                audioPlot.color = appDelegate.labelColor()
                audioPlot.shouldFill   = true
                audioPlot.shouldMirror = true
            }
        }
    

    And displayed in a scroll view.

    (Screenshot: Simulator, iPod touch 7th generation, 2021-05-19 — the plot shows a blank gap at the end.)

    Same behavior with a .caf and a .m4a file.

    Thanks!

    opened by iDevelopper 1
  • open iTunes music crash?

    How should I play iTunes music? I query the MPMediaItem through an MPMediaQuery to get the URL. However, if I open it with EZAudio, it crashes each time. The crash message is: failed to dispose of ext audio file (-50)

    opened by Veryed-VS 0
  • iOS 13, iPhone 11 Pro Max, crash while recording

    Hi, I am experiencing some problems, but only on the iPhone 11 Pro Max; other iPhones, even on iOS 13, work perfectly.

    The stack trace is:

    Crashed: AURemoteIO::IOThread
    0  AudioToolboxCore    0x1b15955d0 CABufferList::BytesConsumed(unsigned int) + 296
    1  AudioToolboxCore    0x1b15c26b8 ExtAudioFile::WriteInputProc(OpaqueAudioConverter*, unsigned int*, AudioBufferList*, AudioStreamPacketDescription**, void*) + 120
    2  AudioToolboxCore    0x1b15b0098 AudioConverterChain::CallInputProc(unsigned int) + 440
    3  AudioToolboxCore    0x1b15b3924 AudioConverterChain::FillBufferFromInputProc(unsigned int*, CABufferList*) + 268
    4  AudioToolboxCore    0x1b1594fc4 BufferedAudioConverter::GetInputBytes(unsigned int, unsigned int&, CABufferList const*&) + 200
    5  AudioToolboxCore    0x1b171f148 CBRConverter::RenderOutput(CABufferList*, unsigned int, unsigned int&, AudioStreamPacketDescription*) + 120
    6  AudioToolboxCore    0x1b15952f0 BufferedAudioConverter::FillBuffer(unsigned int&, AudioBufferList&, AudioStreamPacketDescription*) + 396
    7  AudioToolboxCore    0x1b1594f94 BufferedAudioConverter::GetInputBytes(unsigned int, unsigned int&, CABufferList const*&) + 152
    8  AudioToolboxCore    0x1b16bcc04 CodecConverter::GetCodecInput(unsigned int&) + 680
    9  AudioToolboxCore    0x1b16bbc5c CodecConverter::EncoderFillBuffer(unsigned int&, AudioBufferList&, AudioStreamPacketDescription*) + 740
    10 AudioToolboxCore    0x1b16be9e4 CodecConverter::FillBuffer(unsigned int&, AudioBufferList&, AudioStreamPacketDescription*) + 60
    11 AudioToolboxCore    0x1b15b03f8 AudioConverterChain::RenderOutput(CABufferList*, unsigned int, unsigned int&, AudioStreamPacketDescription*) + 128
    12 AudioToolboxCore    0x1b15952f0 BufferedAudioConverter::FillBuffer(unsigned int&, AudioBufferList&, AudioStreamPacketDescription*) + 396
    13 AudioToolboxCore    0x1b163f080 AudioConverterFillComplexBuffer + 352
    14 AudioToolboxCore    0x1b15c1e2c ExtAudioFile::WritePacketsFromCallback(int (*)(OpaqueAudioConverter*, unsigned int*, AudioBufferList*, AudioStreamPacketDescription**, void*), void*) + 132
    15 AudioToolboxCore    0x1b164bff4 ExtAudioFileWrite + 84
    16 EZAudio             0x102a96604 -[EZRecorder appendDataFromBufferList:withBufferSize:] + 68
    17 mpscribe            0x1027260b8 specialized GravadorViewController.microphone(_:hasBufferList:withBufferSize:withNumberOfChannels:) + 506 (GravadorViewController.swift:506)
    18 mpscribe            0x102724508 @objc GravadorViewController.microphone(_:hasBufferList:withBufferSize:withNumberOfChannels:) + 4366796040 (<compiler-generated>:4366796040)
    19 EZAudio             0x102a936a0 EZAudioMicrophoneCallback + 224
    20 libEmbeddedSystemAUs.dylib   0x1c25892bc AURemoteIO::PerformIO(unsigned int, unsigned int, unsigned int, AudioTimeStamp const&, AudioTimeStamp const&, AudioBufferList const*, AudioBufferList*, int&)
    21 libEmbeddedSystemAUs.dylib   0x1c25cf76c _XPerformIO
    22 libAudioToolboxUtility.dylib 0x1af7cd6c4 mshMIGPerform + 268
    23 libAudioToolboxUtility.dylib 0x1af7cdae4 MSHMIGDispatchMessage + 40
    24 libEmbeddedSystemAUs.dylib   0x1c257f568 AURemoteIO::IOThread::Entry(void*)
    25 libAudioToolboxUtility.dylib 0x1af7cb828 CAPThread::Entry(CAPThread*) + 92
    26 libsystem_pthread.dylib      0x1a3fc1840 _pthread_start + 168
    27 libsystem_pthread.dylib      0x1a3fc99f4 thread_start + 8

    opened by manueljmgomes 1
Releases(1.1.3)
  • 1.1.3(Jan 24, 2016)

    • Updated projects to get rid of warnings with Xcode 7.2
    • Fixed #246 & #236 caused by invalid bufferByteSize values on scratch buffer lists.
    • Updated examples for iOS (fixed interface builder issues causing black lines on top and bottom of iPhone 6 and newer devices)
    • Updated examples for OSX

    #265

    Source code(tar.gz)
    Source code(zip)
  • 1.1.0(Jul 13, 2015)

    • Added classes to simplify calculating the FFT of incoming audio data in the same design as the other EZAudio components.
    • Updated EZAudioFFT examples to pitch detector example
    Source code(tar.gz)
    Source code(zip)
  • 1.0.0(Jul 7, 2015)

  • 0.9.0(Jul 6, 2015)

    • Added fix from #178
    • Added currentInputDevice and currentOutputDevice methods to EZAudioDevice
    • Updated OSX examples to look more similar to iOS examples
    Source code(tar.gz)
    Source code(zip)
  • 0.8.0(Jul 4, 2015)

    • Updated recorder to use the same kind of API as EZAudioFile
    • Added EZRecorder delegate to get a callback whenever a write occurs
    • Made writes synchronous
    • Updated EZRecorder documentation
    • Added play state delegate method to EZMicrophoneDelegate
    Source code(tar.gz)
    Source code(zip)
  • 0.7.2(Jul 3, 2015)

  • 0.6.0(Jul 2, 2015)

    • Rewrote EZAudioPlotGL using GLKView for iOS and NSOpenGLView for OSX.
    • Removed EZAudioPlotGLKViewController (no embedding in EZAudioPlotGL needed)
    • Made EZAudioPlotGL layer-backed for OSX so you can add cocoa controls on top of it
    • Merged similar OpenGL drawing calls and abstracted them into a draw method available for subclasses to easily implement their own geometries
    Source code(tar.gz)
    Source code(zip)
  • 0.5.0(Jun 30, 2015)

    • Rewrote EZAudioPlayer to use the new EZOutput
    • Added notifications for hooking into EZAudioPlayer playback state
    • Fixed bugs with iOS FFT example project not setting AVAudioSession
    • Fixed bugs with iOS PlayFile example project not overriding the speaker
    Source code(tar.gz)
    Source code(zip)
  • 0.4.0(Jun 30, 2015)

    Now uses an AUGraph to chain together AUConverter + Mixer + Output Audio Units for a much more robust and customizable playback engine; a sketch of such a graph follows the list below.

    • Added volume and pan properties
    • Added EZAudioDevice property to allow switching playback to any output hardware device
    • Simplified EZOutputDataSource to one method instead of 3
    • Added EZOutputDelegate to handle EZOutput audio received, device change, and playback state change calls.
    • Added outputDevices to EZAudioDevice for iOS and Mac to allow enumerating output devices for EZOutput to use
    • Added subclass method to allow adding in additional nodes to the graph
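
    For readers unfamiliar with AUGraph, here is a minimal sketch of the converter → mixer → output chain described above (plain Core Audio, not EZOutput's exact code; error checking omitted for brevity):

    AUGraph graph;
    NewAUGraph(&graph);

    AudioComponentDescription converterDesc = {
        .componentType = kAudioUnitType_FormatConverter,
        .componentSubType = kAudioUnitSubType_AUConverter,
        .componentManufacturer = kAudioUnitManufacturer_Apple
    };
    AudioComponentDescription mixerDesc = {
        .componentType = kAudioUnitType_Mixer,
        .componentSubType = kAudioUnitSubType_MultiChannelMixer,
        .componentManufacturer = kAudioUnitManufacturer_Apple
    };
    AudioComponentDescription outputDesc = {
        .componentType = kAudioUnitType_Output,
        .componentSubType = kAudioUnitSubType_RemoteIO, // DefaultOutput on OSX
        .componentManufacturer = kAudioUnitManufacturer_Apple
    };

    AUNode converterNode, mixerNode, outputNode;
    AUGraphAddNode(graph, &converterDesc, &converterNode);
    AUGraphAddNode(graph, &mixerDesc, &mixerNode);
    AUGraphAddNode(graph, &outputDesc, &outputNode);
    AUGraphOpen(graph);
    AUGraphConnectNodeInput(graph, converterNode, 0, mixerNode, 0);
    AUGraphConnectNodeInput(graph, mixerNode, 0, outputNode, 0);
    AUGraphInitialize(graph);
    AUGraphStart(graph);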
    Source code(tar.gz)
    Source code(zip)
  • 0.3.0(Jun 29, 2015)

    • Added mutex lock for seek/read operations
    • Added multi-channel support for getting waveform data
    • Removed dependency on AEFloatConverter (replaced with EZAudioFloatConverter)
    • Updated examples
    Source code(tar.gz)
    Source code(zip)
  • 0.2.0(Jun 26, 2015)

    • Completely rewrote EZAudioPlot using Core Animation layers for much better performance and customization.
    • Made EZAudioPlot easier to subclass.
    • Added EZAudioDisplayLink to optimize real-time plots
    • Added EZPlotHistoryInfo to better manage rolling plots using internal circular buffer implementation.
    Source code(tar.gz)
    Source code(zip)
Owner
Syed Haris Ali
All things mobile and audio. Currently Director of Software @ Happy Health.
MuVis is a macOS, iOS, iPadOS app for real-time music visualization.

MuVis is an open-source multi-platform app (using SwiftUI, Swift, and Xcode) for music visualization.

Keith Bromley 4 Dec 24, 2022
AudioKit is an audio synthesis, processing, and analysis platform for iOS, macOS, and tvOS.

AudioKit is an audio synthesis, processing, and analysis platform for iOS, macOS (including Catalyst), and tvOS.

AudioKit 9.5k Dec 31, 2022
The Amazing Audio Engine is a sophisticated framework for iOS audio applications, built so you don't have to.

Important Notice: The Amazing Audio Engine has been retired. See the announcement here.

null 523 Nov 12, 2022
Beethoven is an audio processing Swift library

Beethoven is an audio processing Swift library that provides an easy-to-use interface to solve an age-old problem of pitch detection of musical signals.

Vadym Markov 735 Dec 24, 2022
A real-time, votable, democratized music queue on iPad and iPhone using Spotify

Queue'd is the best way to enjoy music with your friends. Add your favorite songs to a shared music queue at your favorite bars and restaurants.

Ryan Daulton 88 Dec 2, 2022
AudiosPlugin is a Godot iOS Audio Plugin that resolves the audio recording issue in iOS for Godot Engine.

This plugin solves the Godot game engine audio recording and playback issue in iOS devices. Please open the Audios Plugin XCode Project and compile the project. You can also use the libaudios_plugin.a binary in your project.

null 3 Dec 22, 2022
Extensions and classes in Swift that make it easy to get an iOS device reading and processing MIDI data

A really thin Swift layer on top of CoreMIDI that opens a virtual MIDI destination and port and connects to any MIDI endpoints that appear.

Brad Howes 11 Nov 5, 2022
TuningFork is a simple utility for processing microphone input and interpreting pitch, frequency, amplitude, etc.


Comyar Zaheri 419 Dec 23, 2022
AudioPlayer is a simple class for playing audio in iOS, macOS and tvOS apps.


Tom Baranes 260 Nov 27, 2022
Simple command line utility for switching audio inputs and outputs on macOS


Daniel Hladík 3 Nov 22, 2022
Functional DSP / Audio Framework for Swift

Lullaby is an audio synthesis framework for Swift that supports both macOS and Linux! It was inspired by other audio environments like FAUST.

Jae 16 Nov 5, 2022
Voice Memos is an audio recorder App for iPhone and iPad that covers some of the new technologies and APIs introduced in iOS 8 written in Swift.


Zhouqi Mo 322 Aug 4, 2022
Painless high-performance audio on iOS and Mac OS X

An analgesic for high-performance audio on iOS and OSX. Really fast audio in iOS and Mac OS X using Audio Units is hard, and will leave you scarred and bloody.

Alex Wiltschko 2.2k Nov 23, 2022
Audio Filters on iOS and OSX

Implement high quality audio filters with just a few lines of code and Novocaine, or your own audio library of choice.

Bart Olsthoorn 411 Dec 16, 2022
AudioPlayer is syntax and feature sugar over AVPlayer. It plays your audio files (local & remote).

AudioPlayer is a wrapper around AVPlayer. It also offers cool features such as quality control based on the number of interruptions (buffering).

Kevin Delannoy 676 Dec 25, 2022
YiVideoEditor is a library for rotating, cropping, adding layers (watermark), as well as adding audio (music) to videos.


coderyi 97 Dec 14, 2022
App for adding and listening to audio files

SomeSa (samsa) is an app that lets you load and play arbitrary audio files. Tested with .wav and .mp3 file formats.

Yegor Dobrodeyev 0 Nov 7, 2021
This app demonstrates how to use the Google Cloud Speech API and Apple on-device Speech library to recognize speech in live recorded audio.


Josh Uvi 0 Mar 11, 2022