A speech recognition framework designed for SwiftUI.

Overview

SwiftSpeech

Speech Recognition Made Simple

A few lines of code to do this!

Recognize your user's voice elegantly without having to figure out authorization and audio engines.

SwiftSpeech Examples

Aside from the readme, the best way to learn more about SwiftSpeech and how speech recognition capabilities are implemented in apps like WeChat is to check out my new project SwiftSpeech Examples. For now, it contains a WeChat voice message interface mock and the three demos in SwiftSpeech.

WeChat

Features

SwiftSpeech is a wrapper for Apple's Speech framework with deep SwiftUI and Combine integration.

  • UI control + speech recognition functionality in just a few lines of code.
  • Customizable cancelling.
  • SwiftUI style reactive APIs and Combine support.
  • Highly customizable, while keeping your code highly reusable via a composable structure.
  • Fully open low-level APIs.

Installation

Swift Package Manager (Recommended)

In Xcode, select Add Packages... from the File menu and enter the following package URL:

https://github.com/Cay-Zhang/SwiftSpeech

CocoaPods

pod 'SwiftSpeech'

Getting Started

1. Authorization

Although SwiftSpeech takes care of the verbose authorization process for you, you still have to provide the usage descriptions and specify where you want the authorization request to happen before you start using it.

Usage Descriptions in Info.plist

If you haven't already, add these two keys to your Info.plist: NSSpeechRecognitionUsageDescription and NSMicrophoneUsageDescription.

These are the messages your users will see on their first use, in the alerts that ask them for permission to use speech recognition and to access the microphone.

Here's an example:

<key>NSSpeechRecognitionUsageDescription</key>
<string>This app uses speech recognition to convert your speech into text.</string>
<key>NSMicrophoneUsageDescription</key>
<string>This app uses the microphone to record audio for speech recognition.</string>

Request Authorization

Place SwiftSpeech.requestSpeechRecognitionAuthorization() where you want the request to happen. A common location is inside an onAppear modifier. Common enough that there is a snippet called Request Speech Recognition Authorization on Appear exposed in the Xcode Modifiers library.

.onAppear {
    SwiftSpeech.requestSpeechRecognitionAuthorization()
}

2. Try some demos

You can now start to try out some lightweight demos bundled with the framework using Xcode preview. Click the "Preview on Device" button to try the demo on your device.

static var previews: some View {
    // Two of the demo views below can take a `localeIdentifier: String` as an argument.
    // Example locale identifiers:
    // 简体中文(中国)= "zh_Hans_CN"
    // English (US) = "en_US"
    // 日本語(日本)= "ja_JP"
    
    Group {
        SwiftSpeech.Demos.Basic(localeIdentifier: yourLocaleString)
        SwiftSpeech.Demos.Colors()
        SwiftSpeech.Demos.List(localeIdentifier: yourLocaleString)
    }
}

Here are the "previews" of your previews:

Demos

3. Build it yourself

Knowing what this framework can do, you can now start to learn about the concepts in SwiftSpeech.

Inspect the source code of SwiftSpeech.Demos.Basic. The only new thing here is this:

SwiftSpeech.RecordButton()                                        // 1. The View Component
    .swiftSpeechRecordOnHold(sessionConfiguration:animation:distanceToCancel:)  // 2. The Functional Component
    .onRecognizeLatest(update: $text)                             // 3. SwiftSpeech Modifier(s)

There are three parts here (and luckily, you can customize every one of them!):

  1. The View Component: A View that is only responsible for UI.
  2. The Functional Component: A component that handles user interaction and provides the essential functionality of speech recognition. In the built-in one here, the first two arguments let you specify the configuration for the recording session (locales and more) and an animation used when the user interacts with the View Component. The third argument sets the distance the user has to swipe up in order to cancel the recording (see the sketch after this list). The framework also provides another Functional Component: .swiftSpeechToggleRecordingOnTap(sessionConfiguration:animation:).
  3. SwiftSpeech Modifier(s): One or more components allowing you to receive and manipulate the recognition results. They can be stacked together to create powerful effects.
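
For example, here's a minimal sketch of a fully configured record-on-hold button (assuming text is a @State String in the enclosing view and that Configuration's initializer provides defaults for the parameters you omit):

SwiftSpeech.RecordButton()                                        // View Component
    .swiftSpeechRecordOnHold(                                     // Functional Component
        sessionConfiguration: SwiftSpeech.Session.Configuration(locale: Locale(identifier: "en-US")),
        animation: .spring(),
        distanceToCancel: 100.0
    )
    .onRecognizeLatest(update: $text)                             // SwiftSpeech Modifier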

For now, you can just use the built-in View Component and Functional Component. Let's explore some SwiftSpeech Modifiers first since every app handles its data differently:

Important: Chaining multiple (even identical) SwiftSpeech Modifiers together doesn't override any behavior. All of the modifiers' actions are executed, in order: the modifier closest to the Functional Component executes first, and the farthest executes last.

// 1
// All three demos use these modifiers.
// Inspect the source code of them if you want examples!
.onRecognizeLatest(
    includePartialResults: Bool = true,
    handleResult: (SwiftSpeech.Session, SFSpeechRecognitionResult) -> Void,
    handleError: (SwiftSpeech.Session, Error) -> Void
)

.onRecognize(
    includePartialResults: Bool = true,
    handleResult: (SwiftSpeech.Session, SFSpeechRecognitionResult) -> Void,
    handleError: (SwiftSpeech.Session, Error) -> Void
)

// This one simply assigns the recognized text to the binding in `handleResult` and ignores errors.
.onRecognizeLatest(
    includePartialResults: Bool = true,
    update: Binding<String>
)

// This one prints the recognized text and ignores errors.
.printRecognizedText(includePartialResults: Bool = true)

The first group of modifiers encapsulates the core value of SwiftSpeech. It does all the publisher transformation and subscription for you and calls the closures with enough information to facilitate a sophisticated task when a recognition result is yielded.

onRecognizeLatest ignores recognition results from the last recording session (if any) when a new session is started, while onRecognize subscribes to results from every recording session.

In handleResult, the first closure parameter is a SwiftSpeech.Session, which has a unique id for every recording. Use it to distinguish the recognition results of one recording from those of another.

The second is an SFSpeechRecognitionResult, which contains rich information about the recognition: not only the recognized text (result.bestTranscription.formattedString), but also interesting details like speaking rate and pitch.

In handleError, you handle the errors produced during the recognition process as well as during the initialization of the recording session (such as a microphone activation failure).
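
Putting these together, a minimal sketch of the closure-based variant could look like this (text is assumed to be a @State String; the printed message is just a placeholder):

SwiftSpeech.RecordButton()
    .swiftSpeechRecordOnHold()
    .onRecognizeLatest(includePartialResults: true) { session, result in
        // Called for every (partial or final) result of the latest recording session.
        text = result.bestTranscription.formattedString
    } handleError: { session, error in
        // Called when recognition or the initialization of the recording session fails.
        print("Session \(session.id) failed: \(error)")
    }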

// 2
.onStartRecording(appendAction: (SwiftSpeech.Session) -> Void)
.onStopRecording(appendAction: (SwiftSpeech.Session) -> Void)
.onCancelRecording(appendAction: (SwiftSpeech.Session) -> Void)

The second group gives you full control over the whole lifespan of a SwiftSpeech.Session. It runs the provided closures after a recording is started/stopped/cancelled. Inside the closures, you have access to the corresponding SwiftSpeech.Session, which is discussed below.
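
For instance, here's a minimal sketch of stacking a few of them; following the note above, the modifier closest to the Functional Component runs first:

SwiftSpeech.RecordButton()
    .swiftSpeechRecordOnHold()
    .onStartRecording { session in
        print("recording started (executed first), session: \(session.id)")
    }
    .onStartRecording { session in
        print("recording started (executed second)")
    }
    .onCancelRecording { session in
        print("recording cancelled, session: \(session.id)")
    }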

// 3
// `SwiftSpeech.ViewModifiers.OnRecognize` uses these modifiers.
// Inspect the source code of it if you want examples!
.onStartRecording(sendSessionTo: Subject)
.onStopRecording(sendSessionTo: Subject)
.onCancelRecording(sendSessionTo: Subject)

The third group might be useful if you prefer a reactive programming style. The only new argument here is a Combine.Subject (e.g. CurrentValueSubject and PassthroughSubject) and the modifier will send the corresponding SwiftSpeech.Session to the Subject after a recording is started/stopped/cancelled.
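
A minimal sketch, assuming the Subject's Output is SwiftSpeech.Session and its Failure is Never (the view model and property names here are made up for illustration):

import Combine
import SwiftSpeech

final class SpeechViewModel: ObservableObject {
    // Receives a session every time a recording starts.
    let startedSessions = PassthroughSubject<SwiftSpeech.Session, Never>()
    private var cancellables = Set<AnyCancellable>()

    init() {
        startedSessions
            .sink { session in print("New recording started: \(session.id)") }
            .store(in: &cancellables)
    }
}

// In the view body:
// SwiftSpeech.RecordButton()
//     .swiftSpeechRecordOnHold()
//     .onStartRecording(sendSessionTo: viewModel.startedSessions)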

SwiftSpeech.Session

Configuration

A session can be configured using a SwiftSpeech.Session.Configuration struct. A configuration contains information such as the locale, the task hint, custom phrases to recognize, options for on-device recognition, and audio session configurations. Inspect SwiftSpeech.Session.Configuration for more details.
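
Here's a sketch of building one; locale, contextualStrings, and audioSessionConfiguration all appear elsewhere in this readme, but treat the exact parameter list and ordering as an assumption and check the source for the full initializer (text is assumed to be a @State String):

let configuration = SwiftSpeech.Session.Configuration(
    locale: Locale(identifier: "en-US"),
    contextualStrings: ["SwiftSpeech"],           // custom phrases to recognize
    audioSessionConfiguration: .playAndRecord     // keep playback available while/after recording
)

SwiftSpeech.RecordButton()
    .swiftSpeechRecordOnHold(sessionConfiguration: configuration)
    .onRecognizeLatest(update: $text)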

Customized Subscription to Recognition Results

If the built-in onRecognize(Latest) modifiers do not satisfy your needs, you can subscribe to recognition results via onStart/Stop/CancelRecording.

A Session publishes its recognition results via its resultPublisher. It has an Output type of SFSpeechRecognitionResult and a Failure type of Error.

You will receive a completion event when the Session finishes processing the user's voice (i.e. result.isFinal == true), an error occurs, or you have explicitly called cancelRecording() on the session.

A Session also has a convenient publisher called stringPublisher that maps the results to the recognized string.
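
As a rough sketch, such a custom subscription could be set up inside onStartRecording (cancelBag here is a Set<AnyCancellable> you own, e.g. in a view model):

SwiftSpeech.RecordButton()
    .swiftSpeechRecordOnHold()
    .onStartRecording { session in
        // Subscribe to this session's results as soon as the recording starts.
        session.stringPublisher?
            .sink(receiveCompletion: { completion in
                print("Session \(session.id) completed: \(completion)")
            }, receiveValue: { text in
                print("Recognized so far: \(text)")
            })
            .store(in: &cancelBag)
    }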

Independent Use

Here's an example of using a Session to recognize your user's voice and receive updates.

let session = SwiftSpeech.Session(configuration: SwiftSpeech.Session.Configuration(locale: Locale(identifier: "en-US"), contextualStrings: ["SwiftSpeech"]))
try session.startRecording()
session.stringPublisher?
    .sink(receiveCompletion: { completion in
        // Handle the completion event (finished or failure).
    }, receiveValue: { text in
        // Do something with the recognized text.
    })
    .store(in: &cancelBag)   // `cancelBag` is a Set<AnyCancellable> you own

For more, please refer to the documentation of SwiftSpeech.Session.

Customized View Components

A View Component is a dedicated View for design. It does not react to user interaction directly, but instead reacts to its environments, allowing developers to only focus on the view design and making the view more composable. User interactions are handled by the Functional Component.

Inspect the source code of SwiftSpeech.RecordButton (again, it's not a Button since it doesn't respond to user interaction). You will notice that it doesn't own any state or apply any gestures. It only responds to the two variables below.

@Environment(\.swiftSpeechState) var state: SwiftSpeech.State
@SpeechRecognitionAuthStatus var authStatus

Both are pretty self-explanatory: the first one represents its current state of recording, and the second one indicates the authorization status of speech recognition.

Here are more details of SwiftSpeech.State:

enum SwiftSpeech.State {
    /// Indicating there is no recording in progress.
    /// - Note: It's the default value for `@Environment(\.swiftSpeechState)`.
    case pending
    /// Indicating there is a recording in progress and the user does not intend to cancel it.
    case recording
    /// Indicating there is a recording in progress and the user intends to cancel it.
    case cancelling
}

authStatus here is an SFSpeechRecognizerAuthorizationStatus. You can also use $authStatus as a shorthand for authStatus == .authorized.
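
As an illustration, here's a hedged sketch of a custom View Component that reads only these two values (the name, shape, and colors are arbitrary):

import SwiftUI
import SwiftSpeech

struct MyRecordIndicator: View {
    @Environment(\.swiftSpeechState) var state: SwiftSpeech.State
    @SpeechRecognitionAuthStatus var authStatus

    var body: some View {
        Circle()
            .fill(color)
            .frame(width: 60, height: 60)
            .scaleEffect(state == .recording ? 1.2 : 1.0)
            .opacity($authStatus ? 1 : 0.3)  // dim the control if speech recognition isn't authorized
    }

    private var color: Color {
        switch state {
        case .pending: return .accentColor
        case .recording: return .red
        case .cancelling: return .gray
        }
    }
}

// Use it just like SwiftSpeech.RecordButton():
// MyRecordIndicator()
//     .swiftSpeechRecordOnHold()
//     .onRecognizeLatest(update: $text)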

Combined with a Functional Component and some SwiftSpeech Modifiers, hopefully, you can build your own fancy record systems now!

Support SwiftSpeech Modifiers

The library provides two general Functional Components that add a gesture to the view they modify and perform speech recognition for you:

// They already support SwiftSpeech Modifiers.
func swiftSpeechRecordOnHold(
    sessionConfiguration: SwiftSpeech.Session.Configuration = SwiftSpeech.Session.Configuration(),
    animation: Animation = SwiftSpeech.defaultAnimation,
    distanceToCancel: CGFloat = 50.0
) -> some View

func swiftSpeechToggleRecordingOnTap(
    sessionConfiguration: SwiftSpeech.Session.Configuration = SwiftSpeech.Session.Configuration(),
    animation: Animation = SwiftSpeech.defaultAnimation
) -> some View

If you decide to implement a view that involves a custom gesture other than a hold or a tap, you can also support SwiftSpeech Modifiers by adding a delegate and calling its methods at the appropriate time:

var delegate = SwiftSpeech.FunctionalComponentDelegate()

For guidance on how to implement a custom view for speech recognition, refer to ViewModifiers.swift and SwiftSpeechExamples. It is not that hard, really.
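
As a rough, non-authoritative sketch only: the delegate method names below (onStartRecording/onStopRecording) are assumptions that mirror the modifiers, so check ViewModifiers.swift for the actual API before relying on them.

import SwiftUI
import SwiftSpeech

struct SwipeToRecord: ViewModifier {
    var sessionConfiguration = SwiftSpeech.Session.Configuration()
    var delegate = SwiftSpeech.FunctionalComponentDelegate()
    @State private var recordingSession: SwiftSpeech.Session? = nil

    func body(content: Content) -> some View {
        content.gesture(
            DragGesture(minimumDistance: 30)
                .onChanged { _ in
                    guard recordingSession == nil else { return }
                    let session = SwiftSpeech.Session(configuration: sessionConfiguration)
                    recordingSession = session
                    try? session.startRecording()
                    delegate.onStartRecording(session: session)  // assumed delegate method
                }
                .onEnded { _ in
                    guard let session = recordingSession else { return }
                    session.stopRecording()
                    delegate.onStopRecording(session: session)   // assumed delegate method
                    recordingSession = nil
                }
        )
    }
}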

License

SwiftSpeech is available under the MIT license.

Comments
  • Unable to use with TextToSpeech

    Firstly, the library is awesome.

    But I bumped into an issue when trying to use it together with text to speech.

    You can reproduce the issue with the following code:

    import AVFoundation
    func onSpeechToTextEnded() {
       let utterance = AVSpeechUtterance(string: "Hello world")
       utterance.voice = AVSpeechSynthesisVoice(language: "en-GB") 
    
       let synthesizer = AVSpeechSynthesizer()
       synthesizer.speak(utterance)
    }
    

    If I call this function (onSpeechToTextEnded) before actually using this library, I can hear the voice. But when I call it after using the library, it is not working.

    Can you investigate the issue, please?

    opened by EnesKaraosman 5
  • Graceful notification of microphone activation failure

    Hi!

    I've encountered an exception due to a bug in iOS 14 beta 4 + AirPods that breaks (at the software level, hopefully) the mic on AirPods. In system apps (iMessage, Recorder ...) the issue prevents voice recording/recognition from working, but it does not crash the app. In the case of SwiftSpeech, the app crashes with an uncaught exception.

    Is it possible to catch such a failure and gracefully pass a notification with the error? Or at least prevent the crash?

    The log message is: Terminating app due to uncaught exception 'com.apple.coreaudio.avfaudio', reason: 'required condition is false: IsFormatSampleRateAndChannelCountValid(format)'

    opened by shengchalover 4
  • Possible volume conflict between SwiftSpeech and AVAudioPlayer?

    My app involves both SwiftSpeech's features and sound effects through SwiftySound. All sound effects work fine in the simulator, but on the device all sound effects stop working once the SwiftSpeech button is pressed. I have one button that makes a "click" noise when pressed. It works until I press the SwiftSpeech button.

    If I press the SwiftSpeech button first, I get a sound effect the first time, but then not for subsequent presses.

    I made a new, simple project without SwiftSpeech just to test the sound, and everything worked fine on the device. I also switched out SwiftySound and used the normal AVAudioPlayer procedure, and the sound works that way, too.

    So the only thing I can think of is that there is a conflict between SwiftSpeech and the sound effects. Is it possible that SwiftSpeech is turning off my sound effects? If so, how do I turn them back on? My code appears below:

    import SwiftUI
    import SwiftSpeech
    import SwiftySound
    import AVFoundation
    import AudioToolbox

    // VIEW MODEL
    struct ContentView: View {

    let emojiArray = ["🐵","🦍","🐶","🐺","🦊","🦝","🐱","🦁","🐅","🐴","🦓","🦌","🐮","🐷","🐐","🐪","🦙","🦒","🐘","🦏","🦛","🐁","🐀","🐰","🦇","🐻","🐨","🐼","🦘","🦃","🐔","🐧","🦅","🦆","🦢","🦉","🦚","🐸","🐊","🐢","🦎","🐍","🐳","🐬","🐟","🐙","🐌","🦋","🐜","🐝","🐞","🦗","🕷","🦂","🦟"]
    
    @State private var emoji = ""
    @State private var nextEmoji = ""
    @State private var text = "What is this? (Press and Hold)"
    @State private var theDescription = ""
    @State var isCorrect:Bool
    @State var player = AVAudioPlayer()
    
    var body: some View {
        
        ZStack(alignment:.top) {
        VStack(alignment: .center) {
            
            Text (emoji).font(.system(size: 200, weight: .bold, design: .default))
                .onAppear() {
                    emoji = emojiArray.randomElement() ?? "none"
                   
                    theDescription = emoji.applyingTransform(.toUnicodeName, reverse: false) ?? "None"
                    print (theDescription) // get the emoji's unicode name
                }
    
        Text (text)
                .onAppear {
                    SwiftSpeech.requestSpeechRecognitionAuthorization()
                }
          .padding()
            
            SwiftSpeech.RecordButton()
            }
                .swiftSpeechRecordOnHold()
                .onRecognize { _, result in
                    text = result.bestTranscription.formattedString
                    print (text)
                    self.text = text
                    if theDescription.contains(self.text.uppercased()) == true {
                        
                        print ("That's right")
                        text = "That's right!"
                        isCorrect = true
                        playRightSound() 
                     
                    }
                    else {print ("That's wrong")
                        text = "Try again!"
                        isCorrect = false
                        playWrongSound()
                    }
                        
                } handleError: { _, _ in }
        
        Spacer()
            
        Button("Change Animal") {
            nextEmoji = emojiArray.randomElement() ?? "none"
            while nextEmoji == emoji {
                
                nextEmoji = emojiArray.randomElement() ?? "none"
                
            }
            
            playClickSound()
            emoji = nextEmoji
            text = "What is this? (Press and hold)"
            theDescription = emoji.applyingTransform(.toUnicodeName, reverse: false) ?? "None"
            print (theDescription)
                }
      
            }
            
        }
    
    func playRightSound(){
    
    print ("Playing right sound")
    
    Sound.play(file:"yay.wav")
    
           }
    
     func playWrongSound() {
        
     Sound.play(file:"raspberry.wav")
        
    }
    
    func playClickSound() {
        
        Sound.play(file:"click.wav")
       
    }
    
    struct ContentView_Previews: PreviewProvider {
    static var previews: some View {
        
        ContentView(isCorrect:true)
        
    }
    

    }

    }


    opened by rabbigarfinkel 2
  • Swift Playgrounds Compatibility

    @Cay-Zhang Adding this to Swift Playgrounds on iPadOS results in an error message: “package doesn't have version tags”. I fixed that in my fork by adding a new tag without the “v” prefix.

    You can also have a look at this: https://github.com/erikdoe/ocmock/issues/496

    opened by JanX2 2
  • Is it possible to save the audio during speech to text?

    Maybe like,

    SwiftSpeech.RecordButton()                                      
        .swiftSpeechRecordOnHold(sessionConfiguration:animation:distanceToCancel:)
        .onRecognizeLatest(update: $text)                           
        .onFinal(saveTo: url)
    
    opened by metrue 2
  • SwiftSpeech and Other Languages

    I know from the examples that SwiftSpeech can handle all supported languages, but I don't see how to implement this functionality. I gather from the example that I must add something like this for Hebrew:

    public init(locale: Locale = .autoupdatingCurrent) {
        self.locale = locale
    }

    public init(localeIdentifier: String) {
        self.locale = Locale(identifier: "he-IL")
    }

    but I don't understand how to use this setting in SwiftSpeech:

    Text(text)
        .onAppear {
            SwiftSpeech.requestSpeechRecognitionAuthorization()
        }

    SwiftSpeech.RecordButton()
        .swiftSpeechRecordOnHold(sessionConfiguration: .init(audioSessionConfiguration: .playAndRecord))
        .onRecognize { _, result in
            text = result.bestTranscription.formattedString
            self.text = text
            if text == word {   // word from array, checking pronunciation
                playRightSound()
            } else {
                playWrongSound()
            }
        } handleError: { _, _ in }
    

    I appreciate your help!

    opened by rabbigarfinkel 1
  • Speech Recognition string does not match a (hard coded) string

    I assign the speech recognition string to a @State var speechRecogText and check whether the other, hard-coded string private var text contains the string from the speech recognition. This works properly and prints contains... in English, but with the Arabic language it does not work.

    @State var speechRecogText: String = ""
    private var text: String = "قل"
    
    if textFieldText.contains(speechRecogText) {
                print("contaions voice text")
            } else {
                print("doesn't contain voice text")
            }
    
    Console
    // doesn't contain voice text
    

    However when I try to swap the variables like this:

    if speechRecogText.contains(textFieldText) {
                print("contains text")
            } else {
                print("doesnt contain text")
            }
    
    Console
    // contains text
    

    What might be the reason for this? Does it have anything to do with the language or how Strings actually behave?

    opened by ItsmeKY 1
  • install issues

    When I use this install URL, I get the error “Unable to find a specification for 'SwiftSpeech'.”

    Could you please tell me how I can install it through CocoaPods?

    opened by Crabbit-F 1
  • iOS 16 beta 2

    Worked fine up to iOS 15.6. I tested it on iOS 16 beta 2 on an iPhone 12 Pro Max and I always get Thread 1: Fatal error: recordingSession is nil in endRecording() in the following function when I release the speech button.

    fileprivate func endRecording() {
        guard let session = recordingSession else { preconditionFailure("recordingSession is nil in \(#function)") }
        recordingSession?.stopRecording()
        delegate.onStopRecording(session: session)
        self.viewComponentState = .pending
        self.recordingSession = nil
    }

    opened by buenasuerte 2
Releases(v0.9.3)
  • v0.9.3(Jul 27, 2021)

  • v0.9.2(Sep 9, 2020)

    • Added: SwiftSpeech.Session.Configuration now has a new property audioSessionConfiguration of type SwiftSpeech.Session.AudioSessionConfiguration. Use it to customize the audio sessions in your app. For example, if you want your app to play some audio while/after recordings, you can set the configuration to .playAndRecord.
    • Changed Behavior: SwiftSpeech now deactivates your app's audio session by default when a recording session stops, meaning other apps can resume their audio playback when your user finishes speaking. If you would like to change this behavior, see SwiftSpeech.Session.AudioSessionConfiguration.
  • v0.9.1(Aug 26, 2020)

    • Errors that happen during the initialization of the recording session (such as a microphone activation failure) are now sent through resultSubject, so you can catch them in the error handlers of onRecognize.
  • v0.9.0(Aug 4, 2020)

  • v0.8.0(Jul 27, 2020)

    • Added: The companion project for SwiftSpeech: SwiftSpeech Examples.
    • Changed: A more powerful set of onRecognize modifiers that is ready for complex tasks. Check README for more information.
    • Added: FunctionalComponentDelegate makes it easier to support SwiftSpeech Modifiers.
    • Added: Xcode library contents for Xcode 12.
    • Changed: The new authorization system gives you more control over when to request an authorization.
  • v0.5.0(Mar 30, 2020)

    We've come a long way!

    Features and changes since v0.2.0:

    • SwiftSpeech.Session was implemented to avoid strong references to SpeechRecognizer.
    • A couple of demos!
    • Utilized SwiftUI Environments to build a composable structure: stack your SwiftSpeech Modifiers together to create powerful effects!
    • Added SwiftSpeech.State and thus supported reacting to a cancelling state in a View Component.
    • Updated SwiftSpeech.RecordButton to react to a cancelling state.
    • New Functional Component: ToggleRecordingOnTap.
    • A rich Readme with GIFs!
  • v0.2.0(Feb 17, 2020)

    Implemented foundations for SwiftUI integration (RecordOnHold). Added several View extensions: now you can make any View a record button with just one line of code! SwiftSpeech namespace overhaul.

    ⚠️Warning: Readme is not yet updated and there's barely any documentation in the code.
