Microbenchmarking app for Swift with nice log-log plots

Attabench

⚠️ WARNING
This package has been largely superseded by the Swift Collections Benchmark package. That package provides a portable benchmarking solution that works on all platforms that Swift supports, and it is being maintained by the Swift Standard Library team.

Attabench is a microbenchmarking app for macOS, designed to measure and visualize the performance of Swift code.

Screenshot of Attabench app


Background

This app is for microbenchmarking low-level algorithms with one degree of freedom (usually size). It works by repeatedly performing the same operation on random data of various sizes, while continuously charting the results in nice plots. Attabench's default log-log plots are ideal for seeing your algorithm's performance at a glance.

Attabench was originally created to supply nice log-log charts for my dotSwift 2017 talk and Optimizing Collections book. At the time, it seemed easier to build a custom chart renderer from scratch using Core Graphics than to mess with a bunch of CSV files and pivot tables in Excel. (Admittedly, the implementation process weakened that opinion somewhat.)

Attabench was made in a hurry for a single use case, so its code is what polite people might call a little messy. But it's shockingly fun to play with, and the graphs it produces are chock full of strange and wonderful little mysteries.

If you find Attabench useful in your own project, please consider buying a copy of my book! It contains a lot of benchmarks made with Attabench; I'm positive you'll find it entertaining and informative.

Installation

Follow these steps to compile Attabench on your own:

  1. Clone this repo to your Mac.

    git clone https://github.com/attaswift/Attabench.git Attabench --recursive
    cd Attabench
    
  2. Install Carthage if you don't already have it. (This assumes you have Homebrew installed.)

    brew install carthage
    
  3. Retrieve and build dependencies (SipHash, BTree and GlueKit).

    carthage bootstrap --platform Mac
    
  4. Open the project file in Xcode 9, then build and run the Attabench target.

    open Attabench.xcodeproj
    

Usage

Attabench has two document formats: a benchmark document defining what to test (with the extension .attabench), and a results document containing benchmark timings (extension .attaresult). Results documents remember which benchmark they came from, so you can stop Attabench and restart any particular benchmark run at any point. You may create many result documents for any benchmark.

When the app starts up, it prompts you to open a benchmark document. Each benchmark contains executable code for one or more individually measurable tasks that can take some variable input. The repository contains two examples, so you don't need to start from scratch:

  • SampleBenchmark.attabench is a simple example benchmark with just three tasks. It is a useful starting point for creating your own benchmarks.

  • OptimizingCollections.attabench is an example of a real-life benchmark definition. It was used to generate the charts in the Optimizing Collections book. (See if you can reproduce my results!)

We are going to look at how to define your own benchmark files later; let's just play with the app first.

Once you load a benchmark, you can press ⌘-R to start running benchmarks with the parameters displayed in the toolbar and the left panel. The chart gets updated in real time as new measurements are made.

Screenshot of Attabench app

You can follow the benchmarking progress by looking at the status bar in the middle panel. Below it there is a console area that includes Attabench status messages. If the benchmark prints anything on standard output or standard error during its run, that too will get included in the console area.

Control What Gets Measured

You can use the checkboxes in the list inside the left panel to control which tasks get executed. If you have many tasks, you can filter them by name using the search field at the bottom. (You can build simple expressions using negation, conjunction (AND) and disjunction (OR) -- for example, typing dog !brown, cat in the search field matches all tasks whose name either includes dog but not brown, or includes cat.) To check/uncheck many tasks at once, select them all and press any of their checkboxes.

The two pop-up buttons in the toolbar let you select the size interval on which you want to run your tasks. Attabench samples the interval logarithmically, so the measured sizes are spaced evenly on the chart's logarithmic size axis.

While a benchmark is running, changing anything on the left side of the window immediately stops and restarts it with the new parameters. This includes typing in the search bar, since only visible tasks get run. Be careful not to interrupt long-running measurements.

The run options panel at the bottom controls how many times a task is executed before a measurement is reported. As a general rule, the task is repeated Iterations times, and the fastest result is used as the measurement. However, when the Min Duration field is set, each task keeps repeating until the specified time has elapsed; this smooths out the charts of extremely quick tasks that would otherwise produce noisy results. On the other hand, a task is not repeated again once it has cumulatively taken more time than the Max Duration field specifies. (This gets you results quicker for long-running tasks.) So with the Duration fields set, any particular task may run either more or fewer times than the Iterations field specifies.
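Attabench's exact scheduling is internal to the app, but the rules above combine roughly as in the following sketch (my reconstruction from the description above, not the app's actual source; task is a hypothetical closure that runs the task body once and returns the elapsed seconds):

import Foundation

// Rough sketch of how Iterations, Min Duration and Max Duration interact
// to produce a single reported measurement.
func measurement(of task: () -> TimeInterval,
                 iterations: Int,
                 minDuration: TimeInterval?,
                 maxDuration: TimeInterval?) -> TimeInterval {
    var best = TimeInterval.infinity
    var totalElapsed: TimeInterval = 0
    var count = 0
    while true {
        let elapsed = task()             // one execution of the task body
        best = Swift.min(best, elapsed)  // the fastest run becomes the measurement
        totalElapsed += elapsed
        count += 1
        // Max Duration: stop repeating once the cumulative time is spent.
        if let maxDuration = maxDuration, totalElapsed >= maxDuration { break }
        // Min Duration: keep repeating until the specified time has elapsed.
        if let minDuration = minDuration, totalElapsed < minDuration { continue }
        // Otherwise, stop after the requested number of iterations.
        if count >= iterations { break }
    }
    return best
}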

Change How Data Gets Displayed

The right panel configures how the chart is rendered. Feel free to experiment by tweaking these controls; they only change how the results are displayed and do not affect any currently running benchmark.

The pop-up button at the top lets you select one of a handful of built-in visual themes that radically change the chart's appearance. For example, the Presentation theme is nice for white-on-black presentation decks. (You currently need to modify the source of the app if you want to change these themes or create your own.)

The three Scales checkboxes let you enable/disable amortized time display or switch either axis to a linear scale. These are occasionally useful.

The Curves panel includes two pop up buttons for selecting what data to display. For the actual curves, you can choose from the following kinds of data:

  • None to disable the curve altogether
  • Minimum to show the minimum of all collected measurements.
  • Average selects the arithmetic mean of collected samples. This is often the most informative, so this is the default choice.
  • Maximum displays the slowest measurement only. This is probably not that useful on its own, but it was really cheap to implement! (And it can be interesting to combine it with the stddev-based error bands.)
  • Sample Count is the odd one out: it displays the count of measurements made, not their value. This is a bit of a hack, but it is very useful for determining if you have taken enough measurements. (To get the best view, switch to a linear "time" scale.)

There is also an optional Error Band that you can display around each curve. Here are the available options for these bands:

  • None disables them. This is the minimalist choice.
  • Maximum paints a faintly colored band between the minimum and maximum measured values.
  • The μ + σ option replaces the maximum value with the sum of the average and the standard deviation. (This is the 68% in the 68-95-99.7 rule.)
  • μ + 2σ doubles the standard deviation from the previous option. ("95%")
  • μ + 3σ goes triple. ("99.7%")

In all cases except None, the bottom of the band is the minimum measured value. (A lower bound like μ - σ can easily go below zero, which looks really bad on a log scale.)
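For concreteness, here is a small sketch (my own, directly applying the definitions above, not code from the app) of how the band bounds for a single size could be computed from its samples:

import Foundation

// The top of the band is μ + kσ; the bottom is the minimum sample, so the
// band never dips below zero on a log scale.
func errorBand(samples: [Double], sigmas k: Double) -> (bottom: Double, top: Double) {
    precondition(!samples.isEmpty)
    let mu = samples.reduce(0, +) / Double(samples.count)
    let variance = samples.reduce(0) { $0 + ($1 - mu) * ($1 - mu) } / Double(samples.count)
    return (bottom: samples.min()!, top: mu + k * variance.squareRoot())
}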

A word of warning: I know nothing about statistics, and I'm not qualified to do proper statistical analysis. I chose these options because they produced cool-looking charts that seemed to tell me something meaningful about the spread of the data. These sigma expressions look suitably scientific, but they are likely not the greatest choice for benchmarking. (I'm pretty sure benchmark measurements don't follow a normal distribution.) If you do know this sort of thing, please submit a PR to fix things!

The Visible Ranges panel lets you select what ranges of values to display on the chart. By default, the chart is automatically scaled to fit all existing measurements for the active tasks and the entire active size range. Setting specific ranges is useful if you need to zoom into a part of the chart; sorry, you can't do this directly on the chart view.

Finally, the two Performance fields let you control how often Attabench updates the UI status and how often it redraws the chart. If results arrive too quickly, the CPU time spent on Attabench's own UI updates can easily distort measurements.

Get Chart Images Out of Attabench

To get a PNG version of the current chart, simply use the mouse to drag the chart into Finder or another app. Attabench also includes a command-line tool to automate the rendering of charts -- check out the attachart executable target in the Swift package. You can use it to prevent wrist fatigue when you need to generate more than a handful of images. (Saving the command line invocations into a script will also let you regenerate the whole batch later.)

Create Your Own Benchmarks

90% of the fun of Attabench is in defining and running your own benchmarks. The easiest way to do that is to make a copy of the included SampleBenchmark.attabench benchmark and then modify the code in it.

Anatomy of an Attabench Benchmark Document

An .attabench document is actually a folder containing the files needed to run the benchmark tasks. The only required file is run.sh; it gets executed every time Attabench needs to run a new measurement. The one in SampleBenchmark uses the Swift Package Manager to build and run the Swift package that's included in the folder. (You can define benchmarks in other languages, too; however, you'll need to implement the Attabench IPC protocol on your own. Attabench only provides a client implementation in Swift.)

Defining Tasks in Swift

SampleBenchmark contains a Swift package that is already set up to run benchmarks in Attabench; you only need to replace the example tasks with your own.

(To help you debug things, you may want to build the package in Terminal rather than inside Attabench. It is a normal Swift package, so you can build and run it on its own. It even supports a set of command-line options for running benchmarks directly from the command line -- this is extremely useful when you need to debug something about a task.)

To help you get started, let's describe the tasks that SampleBenchmark gives you by default.

To define a new benchmark, you need to create a new instance of the Benchmark<Input> generic class and add some tasks to it.

public class Benchmark<Input>: BenchmarkProtocol {
    public let title: String
    public var descriptiveTitle: String? = nil
    public var descriptiveAmortizedTitle: String? = nil

    public init(title: String, inputGenerator: @escaping (Int) -> Input)
    public func addTask(title: String, _ body: @escaping (Input) -> ((BenchmarkTimer) -> Void)?)    
}

Each benchmark has an Input type parameter that defines the shared input type that all tasks in that benchmark take. To create a benchmark, you also need to supply a function that takes a size (a positive integer) and returns an Input value of that size, typically using some sort of random number generator.

For example, let's create a simple benchmark that measures raw lookup performance in some standard collection types. To do that, we need to generate two things as input: a list of elements that the collection should contain, and a sequence of lookup operations to perform. We can represent both parts by randomly shuffling integers from 0 to size - 1, so that the order in which we insert elements into the collection will have no relation to the order we look them up:

let inputGenerator: (Int) -> (input: [Int], lookups: [Int]) = { size in
    return ((0 ..< size).shuffled(), (0 ..< size).shuffled())
}

Now that we have an input generator, we can start defining our benchmark:

let benchmark = Benchmark(title: "Sample", inputGenerator: inputGenerator)
benchmark.descriptiveTitle = "Time spent on all elements"
benchmark.descriptiveAmortizedTitle = "Average time spent on a single element"

We can add tasks to a benchmark by calling its addTask method. Let's start with a task that measures linear search by calling Array.contains on the input array:

benchmark.addTask(title: "Array.contains") { (input, lookups) in
    guard input.count <= 16384 else { return nil }
    return { timer in
        for value in lookups {
            guard input.contains(value) else { fatalError() }
        }
    }
}

The syntax may look strange at first, because we're returning a closure from within a closure, with the returned closure doing the actual measurement. This looks complicated, but it allows for extra functionality that's often important. In this case, we expect that the simple linear search implemented by Array.contains will be kind of slow, so to keep measurements fast, we limit the size of the input to about 16 thousand elements. Returning nil means that the task does not want to run on a particular input value, so its curve will have a gap on the chart corresponding to that particular size.

The inner closure receives a timer parameter that can be used to narrow the measurement to the section of the code we're actually interested in. For example, when we're measuring Set.contains, we aren't interested in the time needed to construct the set, so we need to exclude it from the measurement:

benchmark.addTask(title: "Set.contains") { (input, lookups) in
    return { timer in
        let set = Set(input)
        timer.measure {
            for i in lookups {
                guard set.contains(i) else { fatalError() }
            }
        }
    }
}

But preprocessing input data like this is actually better done in the outer closure, so that repeated runs of the task will not waste time on setting up the environment again:

benchmark.addTask(title: "Set.contains") { (input, lookups) in
    let set = Set(input)
    return { timer in
        for value in lookups {
            guard set.contains(value) else { fatalError() }
        }
    }
}

This variant will run much faster the second and subsequent times the app executes it.

To make things a little more interesting, let's add a third task that measures binary search in a sorted array:

benchmark.addTask(title: "Array.binarySearch") { input, lookups in
    let data = input.sorted()
    return { timer in 
        for value in lookups {
            var i = 0
            var j = array.count
            while i < j {
                let middle = i + (j - i) / 2
                if value > array[middle] {
                    i = middle + 1
                }
                else {
                    j = middle
                }
            }
            guard i < array.count && array[i] == value else { fatalError() }
        }
    }
}

That's it! To finish things off, we just need to start the benchmark. The start() method parses command line arguments and starts running tasks based on the options it receives.

benchmark.start()

Get Surprised by Results

To run the new benchmark, just open it in Attabench, and press play. This gets us a chart like this one:

Sample benchmark results

The chart uses logarithmic scale on both axes, and displays amortized per-element execution time, where the elapsed time of each measurement is divided by its size.

We can often gain surprisingly deep insights into the behavior of our algorithms by just looking at the log-log charts generated by Attabench. For example, let's try explaining some of the more obvious features of the chart above:

  1. The curves start high. Looking up just a few members is relatively expensive compared to looking up many of them in a loop. Evidently there is some overhead (initializing iteration state, warming up the instruction cache etc.) that is a significant contributor to execution time at small sizes, but is gradually eclipsed by our algorithmic costs as we add more elements.

  2. After the initial warmup, the cost of looking up an element using Array.contains seems to be proportional to the size of the array. This is exactly what we expect, because linear search is supposed to be, well, linear. Still, it's nice to see this confirmed.

  3. The chart of Set.contains has a striking sawtooth pattern. This must be a side-effect of the particular way the set resizes itself to prevent an overly full hash table. At the peak of a sawtooth, the hash table is at full capacity (75% of its allocated space), leading to relatively frequent hash collisions, which slow down lookup operations. However, these collisions mostly disappear at the next size step, when the table is grown to double its previous size. So increasing the size of a Set sometimes makes it faster. Neat!

  4. In theory, Set.contains should be an O(1) operation, i.e., the time it takes should not depend on the size of the set. However, our benchmark indicates that's only true in practice when the set is small.

    Starting at about half a million elements, contains seems to switch gears to a non-constant curve: from then onwards, lookup costs consistently increase by a tiny amount whenever we double the size of the set. I believe this is because at 500,000 elements, our benchmark's random access patterns overwhelm the translation lookaside buffer that makes our computers' virtual memory abstraction efficient. Even though the data still fits entirely in physical memory, it takes extra time to find the physical address of individual elements.

    So when we have lots of data, randomly scattered memory accesses get really slow---and this can actually break the complexity analysis of our algorithms. Scandalous!

  5. Array.binarySearch is supposed to take O(log(n)) time to complete, but this is again proven incorrect for large arrays. At half a million elements, the curve for binary search bends upward exactly like Set.contains did. It looks like the curve's slope roughly doubles after the bend. Doubling the slope of a line on a log-log chart squares the original function, i.e., the time complexity seems to have become O(log(n)*log(n)) instead of O(log(n)).

    By simply looking at a chart, we've learned that at large scales, scattered memory access costs logarithmic time. Isn't that remarkable?

  6. Finally, Array.binarySearch has highly prominent spikes at powers-of-two sizes. This isn't some random benchmarking artifact: the spikes are in fact due to cache line aliasing, an interesting (if unfortunate) interaction between the processor's L2 cache and our binary search algorithm. The series of memory accesses performed by binary search on a large enough contiguous array with a power-of-two size tends to fall into the same L2 cache line, quickly overwhelming its associative capacity. Try changing the algorithm so that you optimize away the spikes without affecting the overall shape and position of the curve! (One possible tweak is sketched below.)
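For instance, here is a hypothetical variant (my sketch, not the solution from the book) that probes off-center on wide ranges, breaking the power-of-two access pattern; any midpoint chosen within i ..< j keeps the search correct, so only the memory access pattern changes:

benchmark.addTask(title: "Array.binarySearch.offCenter") { input, lookups in
    let array = input.sorted()
    return { timer in
        for value in lookups {
            var i = 0
            var j = array.count
            while i < j {
                var middle = i + (j - i) / 2
                // On wide ranges, shift the probe away from the exact center
                // so successive accesses stop falling into the same cache line.
                if j - i > 64 { middle += 16 }
                if value > array[middle] {
                    i = middle + 1
                }
                else {
                    j = middle
                }
            }
            guard i < array.count && array[i] == value else { fatalError() }
        }
    }
}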

Internal Details: The Attabench Protocol

(In most cases, you don't need to know about the info in this section; however, you'll need to know it if you want to create benchmarks in languages other than Swift.)

Attabench runs run.sh with two parameters: the first is the constant string attabench, identifying the protocol version, and the second is a path to a named FIFO file that will serve as the report channel for the benchmark. (Benchmarking progress is not written to stdout/stderr to make sure you can still use print in your benchmarking code without worrying about the output getting interleaved with progress reports.)

The command to run is fed to run.sh via its standard input. It consists of a single JSON-encoded BenchmarkIPC.Command value; the type definition contains some documentation describing what each command is supposed to do. Only a single command is sent to stdin, and the pipe is then immediately closed. When Attabench wants to run multiple commands, it simply executes run.sh multiple times.

When the run command is given, Attabench expects run.sh to keep running indefinitely, constantly making new measurements, in an infinite loop over the specified sizes and tasks. Measurements are to be reported through the report FIFO, in JSON-encoded BenchmarkIPC.Report values. Each report must be written as a single line, including the terminating newline character.
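To illustrate the whole exchange, here is a rough Swift sketch of a runner's side of the protocol. The Command and Report shapes below are my assumptions for illustration only -- consult the actual BenchmarkIPC type definitions for the real fields -- and runTask is a hypothetical stand-in for your measurement code:

import Foundation

struct Command: Decodable {   // hypothetical shape of BenchmarkIPC.Command
    let command: String       // e.g. "run"
    let tasks: [String]
    let sizes: [Int]
}

struct Report: Encodable {    // hypothetical shape of BenchmarkIPC.Report
    let task: String
    let size: Int
    let time: TimeInterval
}

func runTask(_ name: String, size: Int) -> TimeInterval {
    return 0 // placeholder: run the named task on input of the given size
}

// argv[1] is the protocol version string "attabench"; argv[2] is the FIFO path.
let fifoPath = CommandLine.arguments[2]
let command = try! JSONDecoder().decode(
    Command.self,
    from: FileHandle.standardInput.readDataToEndOfFile())

let report = FileHandle(forWritingAtPath: fifoPath)!
while true {                  // keep measuring until Attabench terminates us
    for size in command.sizes {
        for task in command.tasks {
            let time = runTask(task, size: size)
            var line = try! JSONEncoder().encode(Report(task: task, size: size, time: time))
            line.append(0x0A) // each report is a single newline-terminated line
            report.write(line)
        }
    }
}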

When Attabench needs to stop a running benchmark, it sends SIGTERM (signal 15) to the process. The process is expected to exit within 2 seconds; if it doesn't, Attabench kills it with SIGKILL (signal 9). Normally you don't need to do anything to make this work -- but the benchmark may get terminated at any time, so be sure to install a signal handler for SIGTERM if you need to do any cleanup before exiting.
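If you do need cleanup, a GCD signal source is an easy way to handle SIGTERM from Swift. A minimal sketch (cleanup is a hypothetical stand-in for whatever teardown you need):

import Foundation

func cleanup() {
    // Hypothetical: flush pending reports, remove temporary files, etc.
}

// Replace the default disposition so the dispatch source receives the signal.
signal(SIGTERM, SIG_IGN)
let sigterm = DispatchSource.makeSignalSource(signal: SIGTERM, queue: .global())
sigterm.setEventHandler {
    cleanup()
    exit(0) // well within Attabench's 2-second deadline
}
sigterm.resume()

// ... the benchmark's measurement loop continues on the main thread ...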

Comments
  • The Info.plist can't be found, build failed. Is there any way to solve it?

    could not read data from '/Users/ly/Attabench/Benchmarking/BenchmarkIPC/Info.plist': The file “Info.plist” couldn’t be opened because there is no such file. All of the files in the BenchmarkIPC and Benchmarking folders are missing. Is there any way to solve it?

    opened by lmyl 4
  • Benchmarking git submodule not cloning

    Hi, after cloning the Attabench repository (following installation step 1) I was not able to build the project. I had to manually add the files from the Benchmarking submodule.

    Am I the only one with this issue?

    opened by cserban 2
  • Create an inspector panel to configure stuff

    Putting a bunch of toggles in the View menu works, but it would be better to build an inspector panel to hold these options:

    • Toggling between linear/logarithmic scale on either axis
    • Toggling between displaying amortized/raw measurements
    • Selecting the time and size scale shown on the chart (Issue #2)
    • Selecting the current theme
    • Controlling the position of the legend (also, show/hide it) (Issue #5)
    • Setting the order of benchmark tasks
    • Changing the curve color/thickness/dashing of an individual benchmark task
    • etc.

    There should be a button on the toolbar to show/hide the inspector.

    enhancement 
    opened by lorentey 1
  • Load benchmarks dynamically

    Benchmarks are currently hardwired into the app, which is just terrible:

    • UI code has to be compiled with the same build configuration as the code being benchmarked.
    • We can't directly compare performance of code built with and without optimization.
    • We can't compare performance of code compiled with different Swift toolchains.
    • Benchmarking code not written in Swift is a pain.
    • You need to recompile the app to modify benchmarks.

    Yuck. Fixing this is not easy, though.

    We can't load code compiled with two different Swift toolchains into the same process, so dlopen is not the answer—we need to run each job inside its own process. This means we need some sort of two-way IPC mechanism to let the app control the job processes.

    • [ ] Define the IPC mechanism and protocol. Unfortunately XPC is unlikely to be an option: we want to be able to run jobs written in Python or Ruby, and even Swift miscompiles code that uses XPC's high-level API. Standard I/O over named pipes (or UNIX domain sockets) and manual process management seems the easiest way to go. Job processes should be kept alive during a benchmark run, with the app starting individual benchmark instances and getting execution times over IPC. Note that we need to ensure that each instance runs on the same input data; either the app should know about the input type and transmit data over IPC, or each process should generate the same data using the same random seed.

    • [ ] Define a package format for benchmark bundles. Besides metadata, a benchmark bundle would consist of a list of runner executables, either supplied directly as an executable file or as a Swift package that gets compiled into an executable.

    • [ ] Implement language-specific modules for writing jobs. The user should not need to care about the details of the IPC protocol in order to create a new job.

    • [ ] Implement the new benchmark runner. Bonus: Interrupting a benchmark will be as simple as killing the process that runs it.

    • [ ] Convert Attabench to a document-based app. Benchmarks contain executable code, so the app should pop up a quarantine dialog when users first open benchmarks they downloaded from the net.

    • [ ] Implement some sort of UI for creating/modifying a benchmark bundle.

    enhancement hard 
    opened by lorentey 1
  • Carthage bootstrap Failing

    First up: big fan of your book!

    I wanted to run Attabench to check the performance of my own data structures. However, while following the Carthage step, I'm running into the following error:

    [screenshot of the Carthage error]
    opened by n0shake 6
  • Custom position for the legend

    • [ ] The chart should be improved so that the legend is automatically placed in a corner where it doesn't intersect any of the curves.

    • [ ] I should also be able to click and drag the legend to move it somewhere else, in case it happens to cover some interesting area.

    • [ ] There should be a command to forget the custom position and restore automatic layout.

    The chart renderer already includes support for drawing the legend at a custom position. The position is given as a relative offset vector to one of the corners of the chart. It is important to anchor the legend to one of the corners, so that the user does not need to keep repositioning the legend after resizing the chart.

    enhancement 
    opened by lorentey 0
  • Better UI for selecting the active size range

    There should be a more intuitive way to select the size range on which benchmarks are run.

    The chart has an option to highlight the active range. We could make that highlight interactive; for example, let me drag the endpoints of the highlighted range to change it.

    We should still display the exact numerical value of the current size range somewhere on the toolbar, but the current popup buttons could (and probably should) be replaced by something else.

    enhancement 
    opened by lorentey 0
  • Display the exact measurement values for the size under the cursor

    I want to be able to easily access measurement values for any size.

    • [ ] Make the chart interactive by automatically labeling curves with the measurement values corresponding to the horizontal position of the cursor.
    • [ ] Allow me to select a particular size by e.g. option-clicking on the chart. This should toggle between the labels following the cursor and staying in place at the size I set.
    • [ ] Allow me to switch between the labels displaying exact measurement values or speedup/slowdown factors relative to a particular task. For example, this could be toggled by option-clicking on a particular curve (or a line in the legend).
    enhancement 
    opened by lorentey 0
  • Let the user zoom in/out on a specific area on the chart

    The chart is currently always autoscaled to fit the active size range and all existing measurements for the active benchmark tasks.

    It would be nice to have a toggle to disable this auto-fitting, and to allow the user to select their own size/time range to display instead.

    enhancement 
    opened by lorentey 0