Combine SnapshotTesting images into a single asset

Overview

SnapshotTesting Stitch

Compatible with the latest Swift versions · Compatible with iOS · Contact @JamesSherlouk on Twitter

An extension to SnapshotTesting which allows you to create images combining the output of multiple snapshot strategies, provided they all output a UIImage.

In essence, this allows you to have a single image which represents a single view in your application, shown in multiple different configurations. This might be useful, for example, where you want to visualise the same UIViewController on multiple devices or in light and dark mode.

Images may also have titles, allowing you to easily identify each configuration within the image.

[Example output: seven views coloured blue with red borders, each with a title above naming the view.]

Usage

Once installed, no additional configuration is required. Import the SnapshotTestingStitch module, use SnapshotTesting as described in its usage guide, and simply provide the stitch strategy as shown below.

import SnapshotTesting
import SnapshotTestingStitch
import XCTest

class MyViewControllerTests: XCTestCase {
  func testMyViewController() {
    let vc = MyViewController()

    assertSnapshot(matching: vc, as: .stitch(strategies: [
      .image(on: .iPhone8),
      .image(on: .iPhone8Plus),
    ]))
  }
}

Titles

By default, if you provide a plain array of strategies, the images will be untitled. If you instead provide tuples of a string and a strategy, each string is rendered as a title above the corresponding image in the snapshot.

assertSnapshot(matching: vc, as: .stitch(strategies: [
  ("iPhone 8", .image(on: .iPhone8)),
  ("iPhone 8 Plus", .image(on: .iPhone8Plus)),
]))
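
The same pattern works for the other configurations mentioned in the overview, such as light and dark mode. A rough sketch, assuming a SnapshotTesting version whose image strategy accepts a traits parameter and an iOS 13+ target for userInterfaceStyle:

assertSnapshot(matching: vc, as: .stitch(strategies: [
  ("Light", .image(on: .iPhone8, traits: .init(userInterfaceStyle: .light))),
  ("Dark", .image(on: .iPhone8, traits: .init(userInterfaceStyle: .dark))),
]))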

Customisation

The stitch strategy also accepts an optional style parameter, which lets you customise certain parts of the rendered snapshot generated by the package.

This includes the spacing around the images, the colors used, and an optional border which can surround each image. The border can be useful for clearly identifying the bounds of each image - especially if the image background is the same as the snapshot background.

Opinionated defaults have already been provided for you.

assertSnapshot(matching: vc, as: .stitch(strategies: [
  ("iPhone 8", .image(on: .iPhone8)),
  ("iPhone 8 Plus", .image(on: .iPhone8Plus)),
], style: .init(
  fontSize: 20,
  titleColor: .white,
  borderColor: .red,
  borderWidth: 5,
  itemSpacing: 32,
  framePadding: 32,
  titleSpacing: 32,
  backgroundColor: .black
)))

Installation

Xcode 11

⚠️ Warning: By default, Xcode will try to add the SnapshotTestingStitch package to your project's main application/framework target. Please ensure that SnapshotTestingStitch is added to a test target instead, as documented in the last step, below.

  1. From the File menu, navigate through Swift Packages and select Add Package Dependency….
  2. Enter package repository URL: https://github.com/Sherlouk/swift-snapshot-testing-stitch
  3. Confirm the version and let Xcode resolve the package
  4. On the final dialog, update SnapshotTestingStitch's Add to Target column to a test target that will contain your snapshot tests (if you have more than one test target, you can add SnapshotTestingStitch to the others later by manually linking the library in each target's build phases)

Swift Package Manager

If you want to use SnapshotTestingStitch in any other project that uses Swift Package Manager, add the package as a dependency in Package.swift:

dependencies: [
  .package(name: "SnapshotTestingStitch", url: "https://github.com/Sherlouk/swift-snapshot-testing-stitch.git", from: "1.0.0"),
]

Next, add SnapshotTestingStitch as a dependency of your test target:

targets: [
  .target(
    name: "MyApp"
  ),
  
  .testTarget(
    name: "MyAppTests", 
    dependencies: [
      .target(name: "MyApp"),
      .product(name: "SnapshotTestingStitch", package: "SnapshotTestingStitch"),
    ]
  ),
]

Other

We do not currently support distribution through CocoaPods or Carthage.

License

This library is released under the MIT license. See LICENSE for details.

Comments
  • Adding perceptual precision support

    This PR adds support for the great work that Eric has completed: https://github.com/pointfreeco/swift-snapshot-testing/pull/628

    PNG support only for now. I'll be working on updating the HEIC library next.

    opened by sprejjs 1
  • Add ability to provide `configure` block to each stitch asset

    This allows users to make value-level changes before each snapshot is taken. This increases the amount of flexibility and control we are providing to the end user.

    Closes #4

    opened by Sherlouk 0
  • Add `configure` block to allow input customisation

    The idea of the library is to allow users to stitch multiple variants of the same value into a single image. This is possible using the different traits (such as dynamic text size or user interface style) or image strategies (such as device size).

    However, it doesn't allow you to apply customisations to the view which are done at a custom class level. To be clearer, we should add the ability to change variables on the view itself for each test in the stitch.

    assertSnapshot(matching: vc, as: .stitch(strategies: [
      ("iPhone 8", .image(on: .iPhone8), configure: { $0.theme = .light }),
      ("iPhone 8 Plus", .image(on: .iPhone8Plus), configure: { $0.theme = .dark }),
    ]))
    
    opened by Sherlouk 0
  • Add CocoaPods Support

    I'm fortunate that all of my personal and work projects support Swift Package Manager in some part which enables us to use this project.

    I understand not everyone is in the same boat, so if anybody has the need for CocoaPods then feel free to let me know here and I can add compatibility for you.

    I won't do it unless someone asks though, always preferable to keep things light and easy!

    opened by Sherlouk 0
  • Increase Test Coverage

    We should increase the test coverage to demonstrate more real-world examples as opposed to flat blue blobs (which I'm pretty sure are against Apple's HIGs and wouldn't be allowed in production).

    I want to demonstrate (and then document with images in the README) various different use cases including:

    • Different Themes (A view controller which responds to light/dark mode)
    • Different Localisations (A view controller which changes based on the user's locale)
    • Different Accessibility Modes (A view controller which supports dynamic text sizes, smart invert, etc)
    • Different Devices (A view controller which changes depending on the device or screen size)
    • Different Sizes (A view/cell which changes based on the rendered size)

    Other ideas welcome.

    I also want to add some extra tests including:

    • A performance test covering the entire stitch strategy
    • A test which uses a custom strategy which adds random timeouts to the snapshots (and as such will return out of order - testing our sorting solution)
    • A test which checks the title trimming approach
    • Tests to cover independent units of code throughout the package (especially the calculateImageSize function)
    opened by Sherlouk 0
  • Custom Snapshot Diffing

    ⚠️ I'm writing this down for my own sanity. It's just an idea, and not something I think somebody should actually pick up. Though I'm always interested if people have their own opinions.

    Currently, as part of the initialisation of the Snapshotting type, we use the same diffing engine as SnapshotTesting's own .image strategy.

    This diffing solution compares pixel by pixel across the entire output image... this works as expected, but it could be better with our own solution.

    Specifically, if we had the ability to diff each 'nested image' independently, then this would enable us to provide considerably better feedback to the user - aiding in their work to fix the error!

    How would we do this?

    Well a diffing engine has two requirements.

    First of all, the ability to losslessly convert the 'Value' (in our case an instance of UIImage) to Data and back to a UIImage. This is in order to save the file to disk. For our use case we can copy what SnapshotTesting does and simply convert to/from PNG.
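
    As a minimal sketch of that round trip, assuming the 'Value' is a plain UIImage (helper names here are purely illustrative):

    import UIKit

    // PNG is lossless, so pixel values survive the save-to-disk round trip.
    func serialize(_ image: UIImage) -> Data? {
        image.pngData()
    }

    func deserialize(_ data: Data) -> UIImage? {
        UIImage(data: data)
    }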

    The second is more complex: we have to compare two versions of 'Value' to check whether they're the same. We could start with a crude precision check; this would be our "happy path" and would allow the test to quickly succeed if the images were identical.

    If they were different, however, we would need to break each image down into the separate 'nested images' and then run the comparison on those: first checking that both sets contain the same images, and then checking each pair for likeness (with an optional precision). This would need access to the original metadata (title/strategy information), which we could pass through.

    The key challenge will be trying to break a UIImage down into this object with metadata. Ideally we would store a JSON file alongside the image file in order to safely serve this data - I'm not sure this is feasible without substantial API changes in the SnapshotTesting library. Instead, I think we're going to need to encode this data directly into the image itself 😱

    Essentially the first pixel would denote how many lines of pixels are used to encode the data. The rest of those pixels would then be parsed into a Data object, which would in turn be decoded into a Codable struct. The exact science here is to be explored, but it is technically viable.
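
    A very rough sketch of that sizing step, assuming RGBA pixels (four bytes each) and hypothetical type names - the reserved first pixel is ignored for simplicity:

    import UIKit

    // Hypothetical metadata we would want to recover for each nested image.
    struct StitchItemMetadata: Codable {
        let title: String?
        let frame: CGRect
    }

    // How many rows of pixels are needed to hold the encoded metadata,
    // assuming four bytes (RGBA) per pixel.
    func pixelRowsNeeded(for items: [StitchItemMetadata], imageWidth: Int) throws -> Int {
        let payload = try JSONEncoder().encode(items)
        let bytesPerRow = imageWidth * 4
        return (payload.count + bytesPerRow - 1) / bytesPerRow // round up to whole rows
    }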

    One of the downsides of this approach (besides being awfully complex, with the benefits potentially not outweighing the cost - more on this later) is that tools which do external diffing might pick up on the subtle changes to our data pixels. I don't think this is likely to cause problems, though, as the data we would encode would simply record the coordinates used to store each image and the title attached to it. Both of these pieces of information would already cause a significant enough change to be caught by those external tools.

    Is this worth it?

    From a nerdy perspective, I'd love to explore it. Though it's very easy to win an argument where we say this is completely unnecessary. I think, if this does get added, we should definitely make it optional (at least initially) and it should be packaged separately requiring users to explicitly opt-in.

    opened by Sherlouk 0
Releases(1.2.0)
  • 1.2.0(Jan 18, 2022)

    With thanks to @alexey1312, we now support creating and storing images in the HEIC format. This is functionally the same as PNG but allows for much smaller file sizes.

    To use this new capability, simply add format: .heic() to your stitch function call. See tests for more complete examples.
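
    A sketch of what that might look like, based on the description above (the exact parameter placement is an assumption - see the tests for the definitive usage):

    assertSnapshot(matching: vc, as: .stitch(strategies: [
      .image(on: .iPhone8),
      .image(on: .iPhone8Plus),
    ], format: .heic()))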

  • 1.1.0(Jul 23, 2021)

    This version adds a new StitchTask struct which can be used in place of the tuples that were required previously. The old method is still supported, so this is entirely backwards compatible.

    The new StitchTask struct also supports a new, optional, configuration block. If used, it allows you to mutate the underlying input value before the snapshot is taken for that task. This can be handy if you have functionality not controlled by traits and still want to visualise multiple variants.

    Beware though: future stitches using the same value object will use the now-mutated version. We would recommend that if you configure one task, you reset or configure every task that uses the same object, to prevent inconsistencies and inaccuracies.
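
    A hypothetical sketch of the new style - the StitchTask parameter names are assumptions, and `theme` is an illustrative property on the value being snapshotted:

    // Every task that shares the value configures it, so no stale state leaks between snapshots.
    assertSnapshot(matching: vc, as: .stitch(strategies: [
      StitchTask(name: "Light", strategy: .image(on: .iPhone8), configure: { $0.theme = .light }),
      StitchTask(name: "Dark", strategy: .image(on: .iPhone8), configure: { $0.theme = .dark }),
    ]))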

  • 1.0.0(Jun 27, 2021)

  • 0.0.3(Jun 27, 2021)

    • Fixed an issue where unnamed tasks would output the wrong image
    • Increased the amount of customisation options in StitchStyle
    • Allow for precision to be set in the public interface
    • Reduced Swift build tools requirement to 5.2 (was 5.3)
    • Improved documentation
  • 0.0.2(Jun 26, 2021)

    Removes the tie to UIViewController which allows this feature to be used by any other SnapshotTesting strategy as long as the output is a UIImage.
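
    For example, a sketch stitching a plain UIView using SnapshotTesting's size-based image strategy (the view type and sizes here are illustrative):

    let view = MyBannerView()

    assertSnapshot(matching: view, as: .stitch(strategies: [
      ("Narrow", .image(size: CGSize(width: 320, height: 100))),
      ("Wide", .image(size: CGSize(width: 414, height: 100))),
    ]))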
