A simple but flexible cache

Overview

Carlos


A simple but flexible cache, written in Swift for iOS 13+ and watchOS 6 apps.

Breaking Changes

Carlos 1.0.0 migrated from the PiedPiper dependency to Combine; the minimum supported platform versions are therefore the same as Combine's. See the releases page for more information.


What is Carlos?

Carlos is a small set of classes and functions to realize custom, flexible and powerful cache layers in your application.

In Functional Programming terms, Carlos makes for a monoidal cache system. You can check the best explanation of how that is realized here or in this video, thanks to @bkase for the slides.

By default, Carlos ships with an in-memory cache, a disk cache, a simple network fetcher and a NSUserDefaults cache (the disk cache is inspired by HanekeSwift).

With Carlos you can:

  • create levels and fetchers depending on your needs
  • combine levels
  • cancel pending requests
  • transform the key each level will get, or the values each level will output (this means you're free to implement every level independently of how it will be used later on). Some common value transformers are already provided with Carlos
  • apply post-processing steps to a cache level, for example sanitizing the output or resizing images
  • apply post-processing steps and value transformations conditionally on the key used to fetch the value
  • react to memory pressure events in your app
  • automatically populate upper levels when one of the lower levels fetches a value for a key, so that the next time the first level will already have it cached
  • enable or disable specific levels of your composed cache depending on boolean conditions
  • easily pool requests so you don't have to care whether 5 requests with the same key hit an expensive cache level before even one of them is done. Carlos can take care of that for you
  • batch get requests so you're only notified when all of them are done
  • set up multiple lanes for complex scenarios where, depending on certain keys or conditions, different caches should be used
  • have a type-safe complex cache that won't even compile if the code doesn't satisfy the type requirements
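As a taste of the API, most of these features compose as chained calls on a cache level. The following is a sketch using functions described later in this README; `isOnline` is a hypothetical app-defined flag:

```swift
import Carlos
import Combine
import Foundation

let isOnline = true // hypothetical app-defined flag

// Memory, then disk, then network; the network level is only queried
// when the app is online, and identical in-flight requests are pooled.
let cache = MemoryCacheLevel<URL, NSData>()
  .compose(DiskCacheLevel())
  .compose(NetworkFetcher().conditioned { _ in
    Just(isOnline).setFailureType(to: Error.self).eraseToAnyPublisher()
  })
  .pooled()
```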

Installation

Swift Package Manager (Preferred)

Add Carlos to your project through Xcode, or add the following line to your package dependencies:

.package(url: "https://github.com/spring-media/Carlos", from: "1.0.0")

CocoaPods

Carlos is available through CocoaPods. To install it, simply add the following line to your Podfile:

pod "Carlos", :git => "https://github.com/spring-media/Carlos"

Carthage

Carthage is also supported.
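A Cartfile entry would look like this (pin a version or tag as appropriate for your project):

```
github "spring-media/Carlos"
```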

Requirements

  • iOS 13.0+
  • WatchOS 6+
  • Xcode 12+

Usage

To run the example project, clone the repo.

Usage examples

let cache = MemoryCacheLevel<String, NSData>().compose(DiskCacheLevel())

This line generates a cache that takes String keys and returns NSData values. Setting a value for a given key on this cache sets it on both levels. Getting a value for a given key first tries the memory level and, if nothing is found, asks the disk level. If neither level has a value, the request fails. If the disk level can fetch a value, it is also set on the memory level so that the next fetch will be faster.

Carlos comes with a CacheProvider class so that standard caches are easily accessible.

  • CacheProvider.dataCache() to create a cache that takes URL keys and returns NSData values
  • CacheProvider.imageCache() to create a cache that takes URL keys and returns UIImage values
  • CacheProvider.JSONCache() to create a cache that takes URL keys and returns AnyObject values (that should then be safely cast to arrays or dictionaries depending on your application)

The above methods always create new instances: calling CacheProvider.imageCache() twice doesn't return the same instance (even though the disk level is effectively shared because it uses the same folder on disk, this is a side effect and should not be relied upon), so you should take care of retaining the result in your application layer. If you want to always get the same instance, use the following accessors instead:

  • CacheProvider.sharedDataCache to retrieve a shared instance of a data cache
  • CacheProvider.sharedImageCache to retrieve a shared instance of an image cache
  • CacheProvider.sharedJSONCache to retrieve a shared instance of a JSON cache
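For example, the shared image cache can be used like any other cache level (a minimal sketch with a placeholder URL; remember to retain the cancellable):

```swift
import Carlos
import Combine
import Foundation

// Fetch an image through the shared image cache instance.
let cancellable = CacheProvider.sharedImageCache
  .get(URL(string: "https://example.com/image.png")!)
  .sink(receiveCompletion: { _ in }, receiveValue: { image in
    print("Fetched an image of size \(image.size)")
  })
```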

Creating requests

To fetch a value from a cache, use the get method. Remember to retain the returned AnyCancellable, otherwise the subscription is torn down immediately:

let cancellable = cache.get("key")
  .sink(
    receiveCompletion: { completion in
      if case let .failure(error) = completion {
        print("An error occurred :( \(error)")
      }
    },
    receiveValue: { value in
      print("I found \(value)!")
    }
  )

A request can also be canceled by calling cancel() on the returned AnyCancellable, and you can be notified of this event with Combine's handleEvents(receiveCancel:) operator:

let cancellable = cache.get(key)
                    .handleEvents(receiveCancel: { 
                      print("Looks like somebody canceled this request!")
                    })
                    .sink(...)
[... somewhere else]
cancellable.cancel()

This cache is not very useful, though. It will never actively fetch values, just store them for later use. Let's try to make it more interesting:

let cache = MemoryCacheLevel()
              .compose(DiskCacheLevel())
              .compose(NetworkFetcher())

This will create a cache level that takes URL keys and stores NSData values (the type is inferred from the NetworkFetcher hard-requirement of URL keys and NSData values, while MemoryCacheLevel and DiskCacheLevel are much more flexible as described later).

Key transformations

Key transformations are meant to make it possible to plug cache levels in whatever cache you're building.

Let's see how they work:

// Define your custom ErrorType values
enum URLTransformationError: Error {
    case invalidURLString
}

let transformedCache = NetworkFetcher().transformKeys(
  OneWayTransformationBox(
    transform: {
      Future { promise in
        if let url = URL(string: $0) {
          promise(.success(url))
        } else {
          promise(.failure(URLTransformationError.invalidURLString))
        }
      }.eraseToAnyPublisher()
    }
  )
)

With the line above, we're saying that all the keys coming into the NetworkFetcher level have to be transformed to URL values first. We can now plug this cache into a previously defined cache level that takes String keys:

let cache = MemoryCacheLevel<String, NSData>().compose(transformedCache)

If this doesn't look very safe (one could always pass garbage strings as keys that won't magically translate to a URL, causing the NetworkFetcher to silently fail), we can instead use a domain-specific structure as a key, assuming it contains both String and URL values:

struct Image {
  let identifier: String
  let URL: Foundation.URL
}

let imageToString = OneWayTransformationBox(transform: { (image: Image) -> AnyPublisher<String, Error> in
    Just(image.identifier).setFailureType(to: Error.self).eraseToAnyPublisher()
})

let imageToURL = OneWayTransformationBox(transform: { (image: Image) -> AnyPublisher<URL, Error> in
    Just(image.URL).setFailureType(to: Error.self).eraseToAnyPublisher()
})

let memoryLevel = MemoryCacheLevel<String, NSData>().transformKeys(imageToString)
let diskLevel = DiskCacheLevel<String, NSData>().transformKeys(imageToString)
let networkLevel = NetworkFetcher().transformKeys(imageToURL)

let cache = memoryLevel.compose(diskLevel).compose(networkLevel)

Now we can perform safe requests like this:

let image = Image(identifier: "550e8400-e29b-41d4-a716-446655440000", URL: URL(string: "http://goo.gl/KcGz8T")!)

let cancellable = cache.get(image).sink(
  receiveCompletion: { _ in },
  receiveValue: { value in
    print("Found \(value)!")
  }
)

Since Carlos 0.5 you can also apply conditions to OneWayTransformers used for key transformations. Just call the conditioned function on the transformer and pass your condition. The condition can also be asynchronous and has to return an AnyPublisher<Bool, Error>, with the chance to return a specific error when the transformation fails.

let transformer = OneWayTransformationBox<String, URL>(transform: { key in
  Future { promise in 
    if let value = URL(string: key) {
      promise(.success(value))
    } else {
      promise(.failure(MyError.stringIsNotURL))
    }
  }.eraseToAnyPublisher()
}).conditioned { key in
  Just(key.contains("http"))
    .setFailureType(to: Error.self)
    .eraseToAnyPublisher()
}

let cache = CacheProvider.imageCache().transformKeys(transformer)

That's not all, though.

What if our disk cache only stores Data, but we want our memory cache to conveniently store UIImage instances instead?

Value transformations

Value transformers let you have a cache that (let's say) stores Data and mutate it to a cache that stores UIImage values. Let's see how:

let dataTransformer = TwoWayTransformationBox(transform: { (image: UIImage) -> AnyPublisher<Data, Error> in
    Just(image.pngData()!).setFailureType(to: Error.self).eraseToAnyPublisher()
}, inverseTransform: { (data: Data) -> AnyPublisher<UIImage, Error> in
    Just(UIImage(data: data)!).setFailureType(to: Error.self).eraseToAnyPublisher()
})

let memoryLevel = MemoryCacheLevel<String, UIImage>().transformKeys(imageToString).transformValues(dataTransformer)

This memory level can now replace the one we had before, with the difference that it will internally store UIImage values!

Keep in mind that, as with key transformations, if your transformation closure fails (either the forward or the inverse transformation), the cache level will be skipped, as if the fetch had failed. The same considerations apply to set calls.

Carlos comes with some value transformers out of the box, for example:

  • JSONTransformer to serialize NSData instances into JSON
  • ImageTransformer to serialize NSData instances into UIImage values (not available on the Mac OS X framework)
  • StringTransformer to serialize NSData instances into String values with a given encoding
  • Extensions for some Cocoa classes (DateFormatter, NumberFormatter, MKDistanceFormatter) so that you can use customized instances depending on your needs.
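For example, a built-in transformer can decorate the standard data cache so it exposes String values instead of NSData. This is a sketch only; the exact StringTransformer initializer signature (the encoding parameter) is an assumption:

```swift
import Carlos

// Decorate the data cache with the built-in StringTransformer so that
// get returns String values decoded from the fetched data (UTF-8 assumed).
let stringCache = CacheProvider.dataCache()
  .transformValues(StringTransformer(encoding: .utf8))
```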

As of Carlos 0.4, it's possible to transform values coming out of Fetcher instances with just a OneWayTransformer (as opposed to the TwoWayTransformer required for normal CacheLevel instances; this is because the Fetcher protocol doesn't require set). This means you can easily chain Fetchers that get a JSON from the internet and transform their output into a model object (for example a struct) inside a complex cache pipeline, without having to create a dummy inverse transformation just to satisfy the requirements of the TwoWayTransformer protocol.

As of Carlos 0.5, all transformers natively support asynchronous computation, so you can have expensive transformations in your custom transformers without blocking other operations. In fact, the ImageTransformer that comes out of the box processes image transformations on a background queue.

As of Carlos 0.5 you can also apply conditions to TwoWayTransformers used for value transformations. Just call the conditioned function on the transformer and pass your conditions (one for the forward transformation, one for the inverse transformation). The conditions can also be asynchronous and have to return an AnyPublisher<Bool, Error>, with the chance to return a specific error when the transformation fails.

let transformer = JSONTransformer().conditioned({ input in
  Just(myCondition).setFailureType(to: Error.self).eraseToAnyPublisher()
}, inverseCondition: { input in
  Just(myCondition).setFailureType(to: Error.self).eraseToAnyPublisher()
})

let cache = CacheProvider.dataCache().transformValues(transformer)

Post-processing output

In some cases your cache level could return the right value, but in a sub-optimal format. For example, you would like to sanitize the output you're getting from the Cache as a whole, independently of the exact layer that returned it.

For these cases, the postProcess function introduced with Carlos 0.4 can come in handy. The function is available as a protocol extension of the CacheLevel protocol.

The postProcess function takes a CacheLevel and a OneWayTransformer with TypeIn == TypeOut as parameters and outputs a decorated BasicCache with the post-processing step embedded in.

// Let's create a simple "to uppercase" transformer
let transformer = OneWayTransformationBox<NSString, NSString>(transform: { Just($0.uppercased as NSString).setFailureType(to: Error.self).eraseToAnyPublisher() })

// Our memory cache
let memoryCache = MemoryCacheLevel<String, NSString>()

// Our decorated cache
let transformedCache = memoryCache.postProcess(transformer)

// Lowercase value set on the memory layer
memoryCache.set("test String", forKey: "key")

// We get the lowercase value from the undecorated memory layer
memoryCache.get("key").sink(receiveCompletion: { _ in }, receiveValue: { value in
  print(value) // "test String"
})

// We get the uppercase value from the decorated cache, though
transformedCache.get("key").sink(receiveCompletion: { _ in }, receiveValue: { value in
  print(value) // "TEST STRING"
})

Since Carlos 0.5 you can also apply conditions to OneWayTransformers used for post-processing transformations. Just call the conditioned function on the transformer and pass your condition. The condition can also be asynchronous and has to return an AnyPublisher<Bool, Error>, with the chance to return a specific error when the transformation fails. Keep in mind that the condition takes the output of the cache as its input, not the key used to fetch that value! If you want to apply conditions based on the key, use conditionedPostProcess instead, but keep in mind this doesn't support using OneWayTransformer instances yet.

let processor = OneWayTransformationBox<NSData, NSData>(transform: { value in
      Future { promise in
        if let value = String(data: value as Data, encoding: .utf8)?.uppercased().data(using: .utf8) as NSData? {
          promise(.success(value))
        } else {
          promise(.failure(FetchError.conditionNotSatisfied))
        }
      }.eraseToAnyPublisher()
    }).conditioned { value in
      Just(value.length < 1000).setFailureType(to: Error.self).eraseToAnyPublisher()
    }

let cache = CacheProvider.dataCache().postProcess(processor)

Conditioned output post-processing

Extending the case for simple output post-processing, you can also apply conditional transformations based on the key used to fetch the value.

For these cases, the conditionedPostProcess function introduced with Carlos 0.6 can come in handy. The function is available as a protocol extension of the CacheLevel protocol.

The conditionedPostProcess function takes a CacheLevel and a conditioned transformer conforming to ConditionedOneWayTransformer as parameters and outputs a decorated CacheLevel with the conditional post-processing step embedded in.

// Our memory cache
let memoryCache = MemoryCacheLevel<String, NSString>()

// Our decorated cache
let transformedCache = memoryCache.conditionedPostProcess(ConditionedOneWayTransformationBox(conditionalTransformClosure: { (key, value) in
  if key == "some sentinel value" {
    return Just(value.uppercased as NSString).setFailureType(to: Error.self).eraseToAnyPublisher()
  } else {
    return Just(value).setFailureType(to: Error.self).eraseToAnyPublisher()
  }
}))

// Lowercase value set on the memory layer
memoryCache.set("test String", forKey: "some sentinel value")

// We get the lowercase value from the undecorated memory layer
memoryCache.get("some sentinel value").sink(receiveCompletion: { _ in }, receiveValue: { value in
  print(value) // "test String"
})

// We get the uppercase value from the decorated cache, though
transformedCache.get("some sentinel value").sink(receiveCompletion: { _ in }, receiveValue: { value in
  print(value) // "TEST STRING"
})

Conditioned value transformation

Extending the case for simple value transformation, you can also apply conditional transformations based on the key used to fetch or set the value.

For these cases, the conditionedValueTransformation function introduced with Carlos 0.6 can come in handy. The function is available as a protocol extension of the CacheLevel protocol.

The conditionedValueTransformation function takes a CacheLevel and a conditioned transformer conforming to ConditionedTwoWayTransformer as parameters and outputs a decorated CacheLevel with a modified OutputType (equal to the transformer's TypeOut, as in the normal value transformation case) with the conditional value transformation step embedded in.

// Our memory cache
let memoryCache = MemoryCacheLevel<String, NSString>()

// Our decorated cache
let transformedCache = memoryCache.conditionedValueTransformation(ConditionedTwoWayTransformationBox(conditionalTransformClosure: { (key, value) in
  if key == "some sentinel value" {
    return Just(1).setFailureType(to: Error.self).eraseToAnyPublisher()
  } else {
    return Just(0).setFailureType(to: Error.self).eraseToAnyPublisher()
  }
}, conditionalInverseTransformClosure: { (key, value) in
  if value > 0 {
    return Just("Positive" as NSString).setFailureType(to: Error.self).eraseToAnyPublisher()
  } else {
    return Just("Null or negative" as NSString).setFailureType(to: Error.self).eraseToAnyPublisher()
  }
}))

// Value set on the memory layer
memoryCache.set("test String", forKey: "some sentinel value")

// We get the same value from the undecorated memory layer
memoryCache.get("some sentinel value").sink(receiveCompletion: { _ in }, receiveValue: { value in
  print(value) // "test String"
})

// We get 1 from the decorated cache, though
transformedCache.get("some sentinel value").sink(receiveCompletion: { _ in }, receiveValue: { value in
  print(value) // 1
})

// We set "Positive" on the decorated cache
transformedCache.set(5, forKey: "test")

Composing transformers

As of Carlos 0.4, it's possible to compose multiple OneWayTransformer objects. This way, one can create several transformer modules to build a small library and then combine them as more convenient depending on the application.

You can compose the transformers in the same way you do with normal CacheLevels: with the compose protocol extension:

let firstTransformer = ImageTransformer() // NSData -> UIImage
let secondTransformer = ImageTransformer().invert() // Trivial UIImage -> NSData

let identityTransformer = firstTransformer.compose(secondTransformer)

The same approach can be applied to TwoWayTransformer objects (that by the way are already OneWayTransformer as well).

Many transformer modules will be provided by default with Carlos.

Pooling requests

When you have a working cache, but some of your levels are expensive (say a Network fetcher or a database fetcher), you may want to pool requests in a way that multiple requests for the same key, coming together before one of them completes, are grouped so that when one completes all of the other complete as well without having to actually perform the expensive operation multiple times.

This functionality comes with Carlos.

let cache = (memoryLevel.compose(diskLevel).compose(networkLevel)).pooled()

Keep in mind that the key must conform to the Hashable protocol for the pooled function to work:

extension Image: Hashable {
  func hash(into hasher: inout Hasher) {
    hasher.combine(identifier)
  }

  static func == (lhs: Image, rhs: Image) -> Bool {
    lhs.identifier == rhs.identifier && lhs.URL == rhs.URL
  }
}

Now we can execute multiple fetches for the same Image value and be sure that only one network request will be started.

Batching get requests

Since Carlos 0.7 you can pass a list of keys to your CacheLevel through batchGetSome. This returns an AnyPublisher that succeeds when all the requests for the specified keys have completed, whether or not they succeeded individually. Only the successful values are delivered to the success callback, though.

Since Carlos 0.9 you can transform your CacheLevel into one that takes a list of keys through allBatch. Calling get on such a CacheLevel returns an AnyPublisher that succeeds only when the requests for all of the specified keys succeed, and fails as soon as one of the requests fails. If you cancel the AnyPublisher returned by this CacheLevel, all of the pending requests are canceled, too.

An example of the usage:

let cache = MemoryCacheLevel<String, Int>()

for iter in 0..<99 {
  cache.set(iter, forKey: "key_\(iter)")
}

let keysToBatch = (0..<100).map { "key_\($0)" }

cache.batchGetSome(keysToBatch).sink(
    receiveCompletion: { completion in
        if case let .failure(error) = completion {
            print("Failed because \(error)")
        }
    },
    receiveValue: { values in
        print("Got \(values.count) values in total")
    }
)

In this case an allBatch().get call would fail, because only 99 keys are set and the last request makes the whole batch fail with a valueNotInCache error. The batchGetSome call succeeds instead, printing Got 99 values in total.

Since allBatch returns a new CacheLevel instance, it can be composed or transformed just like any other cache:
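A minimal sketch of this, reusing the MemoryCacheLevel from the snippet above:

```swift
import Carlos
import Combine

// allBatch() wraps the level in a new CacheLevel whose key type is a
// sequence of the original key type, so it composes like any other cache.
let batched = MemoryCacheLevel<String, Int>().allBatch()

// get now takes a list of keys and publishes a list of values.
let publisher = batched.get(["key_1", "key_2"])
```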

In this case the resulting cache takes a sequence of String keys and returns an AnyPublisher of a list of Int values.

Conditioning caches

Sometimes we may have levels that should only be queried under some conditions. Let's say we have a DatabaseLevel that should only be triggered when users enable a given setting in the app that actually starts storing data in the database. We may want to avoid accessing the database if the setting is disabled in the first place.

let conditionedCache = cache.conditioned { key in
  Just(appSettingIsEnabled).setFailureType(to: Error.self).eraseToAnyPublisher()
}

The closure gets the key the cache was asked to fetch and has to return an AnyPublisher<Bool, Error> indicating whether the request can proceed or should skip the level, with the possibility of failing with a specific Error to communicate the problem to the caller.

At runtime, if the variable appSettingIsEnabled is false, the get request will skip the level (or fail if this was the only or last level in the cache). If true, the get request will be executed.

Multiple cache lanes

If you have a complex scenario where, depending on the key or some other external condition, one cache or another should be used, the switchLevels function can come in useful.

Usage:

let lane1 = MemoryCacheLevel<URL, NSData>() // The two lanes have to be equivalent (same key type, same value type).
let lane2 = CacheProvider.dataCache() // Keep in mind that you can always use key transformation or value transformations if two lanes don't match by default

let switched = switchLevels(lane1, lane2) { key in
  if key.scheme == "http" {
    return .cacheA
  } else {
    return .cacheB // The example is just meant to show how to return different lanes
  }
}

Now depending on the scheme of the key URL, either the first lane or the second will be used.

Listening to memory warnings

If we store big objects in memory in our cache levels, we may want to be notified of memory warning events. This is where the listenToMemoryWarnings and unsubscribeToMemoryWarnings functions come in handy:

let token = cache.listenToMemoryWarnings()

and later

unsubscribeToMemoryWarnings(token)

With the first call, the cache level and all its composing levels will get a call to onMemoryWarning when a memory warning comes.

With the second call, the behavior will stop.

Keep in mind that this functionality is not yet supported by the WatchOS 2 framework CarlosWatch.framework.

Normalization

In case you need to store the result of multiple Carlos composition calls in a property, it may be troublesome to set the type of the property to BasicCache as some calls return different types (e.g. PoolCache). In this case, you can normalize the cache level before assigning it to the property and it will be converted to a BasicCache value.

import Carlos

class CacheManager {
  let cache: BasicCache<URL, NSData>

  init(injectedCache: BasicCache<URL, NSData>) {
	self.cache = injectedCache
  }
}

[...]

let manager = CacheManager(injectedCache: CacheProvider.dataCache().pooled()) // This won't compile

let manager = CacheManager(injectedCache: CacheProvider.dataCache().pooled().normalize()) // This will

As a tip, always use normalize if you need to assign the result of multiple composition calls to a property. The call is a no-op if the value is already a BasicCache, so there will be no performance loss in that case.

Creating custom levels

Creating custom levels is easy and encouraged (after all, there are multiple cache libraries already available if you only need memory, disk and network functionalities!).

Let's see how to do it:

class MyLevel: CacheLevel {
  typealias KeyType = Int
  typealias OutputType = Float

  func get(_ key: KeyType) -> AnyPublisher<OutputType, Error> {
    Future { promise in
      // Perform the fetch and either succeed or fail by calling `promise`
    }.eraseToAnyPublisher()
  }

  func set(_ value: OutputType, forKey key: KeyType) -> AnyPublisher<Void, Error> {
    Future { promise in
      // Store the value (db, memory, file, etc.) and call `promise` on completion
    }.eraseToAnyPublisher()
  }

  func clear() {
    // Clear the stored values
  }

  func onMemoryWarning() {
    // A memory warning event came. React appropriately
  }
}

The above class conforms to the CacheLevel protocol. The first thing we need to do is declare what key types we accept and what output types we return. In this example, we have Int keys and Float output values.

There are 4 required methods: get, set, clear and onMemoryWarning. This sample cache can now be pipelined into a list of other caches, transforming its keys or values if needed, as we saw in the earlier paragraphs.

Creating custom fetchers

With Carlos 0.4, the Fetcher protocol was introduced to make it easier for users of the library to create custom fetchers that can be used as read-only levels in the cache. An example of a "Fetcher in disguise" that has always been included in Carlos is NetworkFetcher: you can only use it to read from the network, not to write (set, clear and onMemoryWarning are no-ops).

This is how easy it is now to implement your custom fetcher:

class CustomFetcher: Fetcher {
  typealias KeyType = String
  typealias OutputType = String

  func get(_ key: KeyType) -> AnyPublisher<OutputType, Error> {
    return Just("Found a hardcoded value :)").setFailureType(to: Error.self).eraseToAnyPublisher()
  }
  }
}

You still need to declare what KeyType and OutputType your CacheLevel deals with, of course, but then you're only required to implement get. Less boilerplate for you!

Built-in levels

Carlos comes with the following cache levels out of the box:

  • MemoryCacheLevel
  • DiskCacheLevel
  • NetworkFetcher
  • Since the 0.5 release, a UserDefaultsCacheLevel

MemoryCacheLevel is a volatile cache that internally stores its values in an NSCache instance. The capacity can be specified through the initializer, and it supports clearing under memory pressure (if the level is subscribed to memory warning notifications). It accepts keys of any type conforming to the StringConvertible protocol and can store values of any type conforming to the ExpensiveObject protocol. Data, NSData, String, NSString, UIImage and URL already conform to the latter protocol out of the box, while String, NSString and URL conform to the StringConvertible protocol. This cache level is thread-safe.

DiskCacheLevel is a persistent cache that asynchronously stores its values on disk. The capacity can be specified through the initializer, so that the disk size will never get too big. It accepts keys of any given type that conforms to the StringConvertible protocol and can store values of any given type that conforms to the NSCoding protocol. This cache level is thread-safe, and currently the only CacheLevel that can fail when calling set, with a DiskCacheLevelError.diskArchiveWriteFailed error.

NetworkFetcher is a cache level that asynchronously fetches values over the network. It accepts URL keys and returns NSData values. This cache level is thread-safe.

NSUserDefaultsCacheLevel is a persistent cache that stores its values on a UserDefaults persistent domain with a specific name. It accepts keys of any given type that conforms to the StringConvertible protocol and can store values of any given type that conforms to the NSCoding protocol. It has an internal soft cache used to avoid hitting the persistent storage too often, and can be cleared without affecting other values saved on the standardUserDefaults or on other persistent domains. This cache level is thread-safe.

Logging

When deciding how to handle logging in Carlos, we went for the most flexible approach that didn't require us to write a complete logging framework: the ability to plug in your own logging library. Whether you want the output of Carlos printed only above a given level, silenced in release builds, routed to a file, or anything else, just assign your logging closure to Carlos.Logger.output:

Carlos.Logger.output = { message, level in
   myLibrary.log(message) // Plug in your logging library here
}

Tests

Carlos is thoroughly tested so that the features it's designed to provide are safe to refactor and as bug-free as possible.

We use Quick and Nimble instead of XCTest in order to have a good BDD test layout.

As of today, there are around 1000 tests for Carlos (see the folder Tests), and overall the tests codebase is double the size of the production codebase.

Future development

Carlos is under development and here you can see all the open issues. They are assigned to milestones so that you can have an idea of when a given feature will be shipped.

If you want to contribute to this repo, please:

  • Create an issue explaining your problem and your solution
  • Clone the repo on your local machine
  • Create a branch with the issue number and a short abstract of the feature name
  • Implement your solution
  • Write tests (untested features won't be merged)
  • When all the tests are written and green, create a pull request, with a short description of the approach taken

Apps using Carlos

Using Carlos? Please let us know through a Pull request, we'll be happy to mention your app!

Authors

Carlos was made in-house by WeltN24

Contributors:

Vittorio Monaco, [email protected], @vittoriom on Github, @Vittorio_Monaco on Twitter

Esad Hajdarevic, @esad

License

Carlos is available under the MIT license. See the LICENSE file for more info.

Acknowledgements

Carlos internally uses:

The DiskCacheLevel class is inspired by Haneke. The source code has been heavily modified, but adapting the original file has proven valuable for Carlos development.

Comments
  • Using Carlos to cache videos

    Using Carlos to cache videos

    Is there a way to play cached videos (which initially came from a server) using AVPlayer or plugins like https://github.com/piemonte/Player (similar to the way one can display cached images: .onSuccess { value in and then next line: UIImageView(image: value)) or does one have to find out where exactly the (video) file is stored in order to provide the URL and play it?

    feature nice to have 
    opened by blurtime 18
  • Batching set requests

    Batching set requests

    In preparation for #65, #66 and other web CacheLevels to include in Carlos, a function batch, together with a protocol extension shoud be added, that takes an integer N from 0 to +inf, builds a wrapper that batches all set requests before sending them to the underlying cache, and passes through clear and onMemoryWarning calls. get requests will go through a soft internal cache and in case of failure will be dispatched to the underlying cache. Calling onMemoryWarning will also force flush the soft cache. The soft cache will be implemented as a memory cache. Biggest question until now: how to properly handle when the app closes on multiple targets (iOS, Mac OS X, watchOS, tvOS?)

    • [ ] Investigate whether it's possible to react on app termination on iOS/tvOS/MacOS/watchOS 2
    • [ ] Investigate whether it's possible to have a soft cache of method calls to the decorated cache
    • [ ] Investigate whether it's possible to have a soft cache for the values to set on the decorated cache
    • [ ] Basic API
    • [ ] Implement get: check the soft cache (?) or forward the call to the decorated cache
    • [ ] Implement set: enqueue the call (and save the value in the soft cache?). When the queue has a determined number of elements, flush it (with the right order of calls).
    • [ ] Consider whether it's possible to group calls by key and only call the latest one for the same key when flushing the queue
    • [ ] Implement clear: forward the call to the decorated cache, clear the soft cache
    • [ ] Implement onMemoryWarning: forward the call to the decorated cache, flush the soft cache
    • [ ] Write tests
    • [ ] Write code documentation
    • [ ] Add sample code
    • [ ] Update README.md
    • [ ] Update Wiki
    • [ ] Update CHANGELOG.md
    • [ ] Update Github milestone
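
    The buffering idea described above can be sketched roughly like this (a stdlib-only sketch; BatchingStore and all names here are illustrative, not part of Carlos):

    ```swift
    // Buffer set calls and flush them to the underlying store, in order,
    // once the buffer reaches a threshold (illustrative sketch).
    final class BatchingStore<Key: Hashable, Value> {
        private var pending: [(Key, Value)] = []
        private let threshold: Int
        private let onFlush: ([(Key, Value)]) -> Void

        init(threshold: Int, onFlush: @escaping ([(Key, Value)]) -> Void) {
            self.threshold = threshold
            self.onFlush = onFlush
        }

        // Buffer the write; flush in order once the threshold is reached.
        func set(_ value: Value, forKey key: Key) {
            pending.append((key, value))
            if pending.count >= threshold { flush() }
        }

        // A real implementation would also call this from the clear and
        // onMemoryWarning passthroughs mentioned in the checklist.
        func flush() {
            guard !pending.isEmpty else { return }
            onFlush(pending)
            pending.removeAll()
        }
    }
    ```

    A complete version would still need the soft get cache and the app-termination handling from the checklist above.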
    feature 
    opened by vittoriom 12
  • #121: changed `set` to return Future<()> in CacheLevel and its implementations

    Hi. This is an implementation of a fix that I described before. Surprisingly enough, it doesn't break any sample code.

    Please let me know if any additional changes needed for this be merged.

    enhancement question 
    opened by MaxDesiatov 10
  • Image resize post processing step

    Thanks for this great component - using it in a Watch project.

    Could you please post an example of a post processing image cache step? In my case I'd like to resize the images after downloading. I'd also like to understand how I could conditionally do that based on the app state (in my case I only want to resize if the JSON property has a certain value).

    opened by falkobuttler 8
  • Custom Fetcher with high memory usage

    Hey 👋 I am currently trying to implement a custom cache fetcher for downloading images from AWS S3 for a SwiftUI image grid like the iOS photo app.

    My fetcher looks like the following:

    class S3CacheFetcher: Fetcher {
        typealias KeyType = MediaItemCacheInfo
        typealias OutputType = NSData

        func get(_ key: KeyType) -> AnyPublisher<OutputType, Error> {
            download(mediaItem: key).eraseToAnyPublisher()
        }

        private func download(mediaItem: KeyType) -> AnyPublisher<OutputType, Error> {
            let BUCKET = "someBucket"

            return Deferred {
                Future { promise in
                    guard let key: String = S3CacheFetcher.getItemKey(mediaItem: mediaItem) else {
                        fatalError("UserPoolID Error")
                    }
                    print("Downloading image with key: \(key)")
                    AWSS3TransferUtility.default().downloadData(fromBucket: BUCKET,
                                                                key: key,
                                                                expression: nil) { (task, url, data, error) in
                        if let error = error {
                            print(error)
                            promise(.failure(error))
                        } else if let data = data {
                            if let imageData: Data = S3CacheFetcher.decryptImage(with: data, mediaItem: mediaItem) {
                                promise(.success(imageData as NSData))
                            }
                        }
                    }
                }
            }
            .eraseToAnyPublisher()
        }
    ....
    }
    

    My problem is that when I use only this fetcher, without any memory or disk cache, memory keeps increasing until the app crashes. I narrowed the problem down to this fetcher, because when I use the included disk cache without my fetcher everything works as expected: memory usage stays around 40-50 MB, not in the GB range.

    I am fairly new to Combine, so any help is greatly appreciated.

    EDIT

    My ImageGrid View

    struct AllPhotos: View {
        @StateObject var mediaManager = MediaManager()

        var body: some View {
            ScrollView {
                LazyVGrid(columns: columns, spacing: 3) {
                    ForEach(mediaManager.mediaItems) { item in
                        VStack {
                            ImageView(downloader: ImageLoader(mediaItem: item, size: .large, parentAlbum: nil))
                        }
                    }
                }
            }
        }
    }
    

    My ImageView that is shown in each grid item and is responsible for showing the image itself:

    struct ImageView: View {
        @StateObject var downloader: ImageLoader

        var body: some View {
            Image(uiImage: downloader.image ?? UIImage(systemName: "photo")!)
                .resizable()
                .aspectRatio(contentMode: .fill)
                .onAppear(perform: {
                    downloader.load()
                })
                .onDisappear {
                    downloader.cancel()
                }
        }
    }
    

    And the ImageLoader itself, which interacts with the cache:

    class ImageLoader: ObservableObject {
        @Published var image: UIImage?
        
        private(set) var isLoading = false
        
        private var cancellable: AnyCancellable?
        private(set) var mediaItem: MediaItem
        private(set) var size: ThumbnailSizes
        private(set) var parentAlbum: GetAlbum?
        
        init(mediaItem: MediaItem, size: ThumbnailSizes, parentAlbum: GetAlbum?) {
            self.mediaItem = mediaItem
            self.size = size
            self.parentAlbum = parentAlbum
        }
        
        deinit {
            cancel()
            self.image = nil
        }
        
        func load() {
            guard !isLoading else { return }
            
            cancellable = S3CacheFetcher().get(.init(parentAlbum: self.parentAlbum, size: self.size, cipher: self.mediaItem.cipher, ivNonce: self.mediaItem.ivNonce, mid: self.mediaItem.mid))
            .map { UIImage(data: $0 as Data) }
                .replaceError(with: nil)
                .handleEvents(receiveSubscription: { [weak self] _ in self?.onStart() },
                              receiveCompletion: { [weak self] _ in self?.onFinish() },
                              receiveCancel: { [weak self] in self?.onFinish() })
                .receive(on: DispatchQueue.main)
                .sink { [weak self] in self?.image = $0 }
        }
        
        func cancel() {
            cancellable?.cancel()
            self.image = nil
        }
        
        private func onStart() {
            isLoading = true
        }
        
        private func onFinish() {
            isLoading = false
        }
    }
    
    question 
    opened by mufumade 7
  • Consider simplifying usage of 3rd-party Futures libraries with Carlos

    I've started using https://github.com/Thomvis/BrightFutures in my project as Carlos' Future implementation is not powerful enough: no possibility of chaining and thus no easy interaction with other async code compared to what 3rd-party libraries provide.

    Currently the most popular and clean library, in my opinion, is BrightFutures, which I've picked for my project. The main problem is that both Carlos and BrightFutures provide a public Future symbol, which doesn't exactly clash when imported in the same file, but doesn't resolve smartly either. BrightFutures' Future has two generic parameters, while Carlos' Future has only one. Still, when writing a type declaration like Future<[CKRecordID], NSError>, the Swift compiler for some reason expects it to be the one-parameter generic version from Carlos. This leads to redundant type specifications in my code, like BrightFutures.Future<[CKRecordID], NSError>, in files where BrightFutures and Carlos are both imported.

    Thus I see three possible solutions to this:

    1. Future class in Carlos is renamed to something more specific like CFuture or whatever.
    2. Carlos public APIs could define a protocol with an unambiguous name like FutureType and by default return its own implementation of the protocol, which would be hidden from the user though. If the user wants to override the return type, there would be an override point provided as a metatype property or a block property, whichever is considered more convenient.
    3. Carlos uses a 3rd-party Futures implementation, which would be more powerful than the current one, or the internal implementation could be extended to be more powerful, requiring more maintenance though.

    I hope that @vittoriom considers option 3 with BrightFutures as it seems to me so far the most stable, powerful and thoroughly documented library. Also, cases like batch get/set could be generalised with code like this:

    ["a", "b"].traverse {
        cache.get($0)
    }.onSuccess { result in
        // result is an array of OutputTypes
    }
    

    Obviously, I don't know the original motivation for Carlos implementing its own Futures library, so please clarify if some of the suggestions don't make sense.

    infra nice to have 
    opened by MaxDesiatov 7
  • Fix #114, Disable Optimization for Carthage/xcodebuild

    Fix #114 Build error by Carthage

    This happened when trying to build Carlos via xcodebuild (which is what Carthage uses). I don't know why this fix works, but it looks like a tiny bitcode glitch between NS* (Foundation) and Swift.

    opened by minhoryang 7
  • #154: Reifies batchAllGet into a CacheLevel

    See #154

    For some reason I couldn't get the test setup working properly in a new file, so I just stuck the tests in the BatchTests.swift file.

    Since this is just a reification of batchGetAll, I reused the same tests from BatchTests, tweaking them slightly to fit the new API.

    It probably also makes sense to deprecate batchGetAll if you agree this is a useful change.

    enhancement feature 
    opened by bkase 5
  • Can't build with Carthage

    I tried building with Carthage, but error occurred.

    xcodebuild: error: Scheme PiedPiper is not currently configured for the build action.
    

    How can I build?

    bug 
    opened by rizumita 5
  • Build error by Carthage

    I tried building by Carthage and received the following error.

    Undefined symbols for architecture x86_64:
      "direct type metadata for ext.Carlos.__ObjC.NSDateFormatter.Error", referenced from:
          protocol witness for Swift.ErrorType._domain.getter : Swift.String in conformance ext.Carlos.__ObjC.NSDateFormatter.Error : Swift.ErrorType in Carlos in ConcurrentOperation.o
      "direct type metadata for ext.Carlos.__ObjC.NSNumberFormatter.Error", referenced from:
          protocol witness for Swift.ErrorType._domain.getter : Swift.String in conformance ext.Carlos.__ObjC.NSNumberFormatter.Error : Swift.ErrorType in Carlos in ConcurrentOperation.o
    ld: symbol(s) not found for architecture x86_64
    

    With CocoaPods, the build succeeded.

    bug infra 
    opened by rizumita 5
  • provide an ability to disable logging

    Currently, there is no way to disable log output emitted by Carlos.

    While log levels are specified, there is no way to restrict Carlos to emitting output only when it exceeds a certain threshold level.
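
    The threshold behavior being requested can be sketched like this (an illustrative, stdlib-only sketch; LogLevel and Logger here are not the Carlos logging API):

    ```swift
    // Gate log output behind a minimum level (illustrative sketch).
    enum LogLevel: Int, Comparable {
        case debug = 0, info, warning, error

        static func < (lhs: LogLevel, rhs: LogLevel) -> Bool {
            lhs.rawValue < rhs.rawValue
        }
    }

    struct Logger {
        var minimumLevel: LogLevel

        // Returns the message if it passes the threshold, nil if suppressed.
        func log(_ message: String, level: LogLevel) -> String? {
            level >= minimumLevel ? message : nil
        }
    }
    ```

    Setting minimumLevel above the highest level would effectively disable logging altogether.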

    question 
    opened by MaxDesiatov 5
  • Bump addressable from 2.7.0 to 2.8.1

    Bumps addressable from 2.7.0 to 2.8.1.

    Changelog

    Sourced from addressable's changelog.

    Addressable 2.8.1

    • refactor Addressable::URI.normalize_path to address linter offenses (#430)
    • remove redundant colon in Addressable::URI::CharacterClasses::AUTHORITY regex (#438)
    • update gemspec to reflect supported Ruby versions (#466, #464, #463)
    • compatibility w/ public_suffix 5.x (#466, #465, #460)
    • fixes "invalid byte sequence in UTF-8" exception when unencoding URLs containing non UTF-8 characters (#459)
    • Ractor compatibility (#449)
    • use the whole string instead of a single line for template match (#431)
    • force UTF-8 encoding only if needed (#341)

    #460: sporkmonger/addressable#460 #463: sporkmonger/addressable#463 #464: sporkmonger/addressable#464 #465: sporkmonger/addressable#465 #466: sporkmonger/addressable#466

    Addressable 2.8.0

    • fixes ReDoS vulnerability in Addressable::Template#match
    • no longer replaces + with spaces in queries for non-http(s) schemes
    • fixed encoding ipv6 literals
    • the :compacted flag for normalized_query now dedupes parameters
    • fix broken escape_component alias
    • dropping support for Ruby 2.0 and 2.1
    • adding Ruby 3.0 compatibility for development tasks
    • drop support for rack-mount and remove Addressable::Template#generate
    • performance improvements
    • switch CI/CD to GitHub Actions
    Commits
    • 8657465 Update version, gemspec, and CHANGELOG for 2.8.1 (#474)
    • 4fc5bb6 CI: remove Ubuntu 18.04 job (#473)
    • 860fede Force UTF-8 encoding only if needed (#341)
    • 99810af Merge pull request #431 from ojab/ct-_do_not_parse_multiline_strings
    • 7ce0f48 Merge branch 'main' into ct-_do_not_parse_multiline_strings
    • 7ecf751 Merge pull request #449 from okeeblow/freeze_concatenated_strings
    • 41f12dd Merge branch 'main' into freeze_concatenated_strings
    • 068f673 Merge pull request #459 from jarthod/iso-encoding-problem
    • b4c9882 Merge branch 'main' into iso-encoding-problem
    • 08d27e8 Merge pull request #471 from sporkmonger/sporkmonger-enable-codeql
    • Additional commits viewable in compare view

    Dependabot compatibility score

    Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting @dependabot rebase.


    Dependabot commands and options

    You can trigger Dependabot actions by commenting on this PR:

    • @dependabot rebase will rebase this PR
    • @dependabot recreate will recreate this PR, overwriting any edits that have been made to it
    • @dependabot merge will merge this PR after your CI passes on it
    • @dependabot squash and merge will squash and merge this PR after your CI passes on it
    • @dependabot cancel merge will cancel a previously requested merge and block automerging
    • @dependabot reopen will reopen this PR if it is closed
    • @dependabot close will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
    • @dependabot ignore this major version will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
    • @dependabot ignore this minor version will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
    • @dependabot ignore this dependency will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)
    • @dependabot use these labels will set the current labels as the default for future PRs for this repo and language
    • @dependabot use these reviewers will set the current reviewers as the default for future PRs for this repo and language
    • @dependabot use these assignees will set the current assignees as the default for future PRs for this repo and language
    • @dependabot use this milestone will set the current milestone as the default for future PRs for this repo and language

    You can disable automated security fix PRs for this repo from the Security Alerts page.

    dependencies 
    opened by dependabot[bot] 1
  • Configure Renovate

    Configure Renovate

    Mend Renovate

    Welcome to Renovate! This is an onboarding PR to help you understand and configure settings before regular Pull Requests begin.

    🚦 To activate Renovate, merge this Pull Request. To disable Renovate, simply close this Pull Request unmerged.


    Detected Package Files

    • Gemfile (bundler)
    • .github/workflows/pr.yml (github-actions)
    • Package.swift (swift)

    Configuration

    🔡 Renovate has detected a custom config for this PR. Feel free to ask for help if you have any doubts and would like it reviewed.

    Important: Now that this branch is edited, Renovate can't rebase it from the base branch any more. If you make changes to the base branch that could impact this onboarding PR, please merge them manually.

    What to Expect

    With your current configuration, Renovate will create 4 Pull Requests:

    Update dependency Quick/Nimble to from: "v9.2.1"
    • Schedule: ["at any time"]
    • Branch name: renovate/quick-nimble-9.x
    • Merge into: master
    • Upgrade Quick/Nimble to f141f9151c0aad2a372528d0f9d6cbe52bc32b92
    Update actions/checkout action to v3
    • Schedule: ["at any time"]
    • Branch name: renovate/actions-checkout-3.x
    • Merge into: master
    • Upgrade actions/checkout to v3
    Update dependency Quick/Nimble to v10
    • Schedule: ["at any time"]
    • Branch name: renovate/quick-nimble-10.x
    • Merge into: master
    • Upgrade Quick/Nimble to 1f3bde57bde12f5e7b07909848c071e9b73d6edc
    Update dependency Quick/Quick to v5
    • Schedule: ["at any time"]
    • Branch name: renovate/quick-quick-5.x
    • Merge into: master
    • Upgrade Quick/Quick to f9d519828bb03dfc8125467d8f7b93131951124c

    🚸 Branch creation will be limited to maximum 2 per hour, so it doesn't swamp any CI resources or spam the project. See docs for prhourlylimit for details.


    ❓ Got questions? Check out Renovate's Docs, particularly the Getting Started section. If you need any further assistance then you can also request help here.


    This PR has been generated by Mend Renovate. View repository job log here.

    opened by renovate[bot] 1
  • Bump jmespath from 1.4.0 to 1.6.1

    Bumps jmespath from 1.4.0 to 1.6.1.

    Release notes

    Sourced from jmespath's releases.

    Release v1.6.1 - 2022-03-07

    • Issue - Use JSON.parse instead of JSON.load.

    Release v1.6.0 - 2022-02-14

    • Feature - Add support for string comparisons.

    Release v1.5.0 - 2022-01-10

    • Support implicitly convertible objects/duck-type values responding to to_hash and to_ary.

      See the related GitHub pull request jmespath/jmespath.rb#51.

    Changelog

    Sourced from jmespath's changelog.

    1.6.1 (2022-03-07)

    • Issue - Use JSON.parse instead of JSON.load.

    1.6.0 (2022-02-14)

    • Feature - Add support for string comparisons.

    1.5.0 (2022-01-10)

    • Support implicitly convertible objects/duck-type values responding to to_hash and to_ary.

      See the related GitHub pull request jmespath/jmespath.rb#51.

    Commits


    dependencies 
    opened by dependabot[bot] 1
  • [Snyk] Security upgrade fastlane from 2.156.1 to 2.156.1

    This PR was automatically created by Snyk using the credentials of a real user.


    Snyk has created this PR to fix one or more vulnerable packages in the `rubygems` dependencies of this project.

    Changes included in this PR

    • Changes to the following files to upgrade the vulnerable dependencies to a fixed version:
      • Gemfile
      • Gemfile.lock

    Vulnerabilities that will be fixed

    With an upgrade:

    Severity: high (Priority Score 691/1000; recently disclosed, has a fix available, CVSS 8.1)
    Issue: Deserialization of Untrusted Data (SNYK-RUBY-JMESPATH-2859799)
    Breaking Change: No
    Exploit Maturity: No Known Exploit

    (*) Note that the real score may have changed since the PR was raised.

    Check the changes in this PR to ensure they won't cause issues with your project.


    Note: You are seeing this because you or someone else with access to this repository has authorized Snyk to open fix PRs.

    For more information: 🧐 View latest project report

    🛠 Adjust project settings

    📚 Read more about Snyk's upgrade and patch logic


    Learn how to fix vulnerabilities with free interactive lessons:

    🦉 Deserialization of Untrusted Data

    opened by HerrMichaelSteiner 1
  • setDataAsync assigns a negative value to an UInt causing runtime crash

    For some reason, only in my tests, the framework fails in the DiskCacheLevel.swift method setDataAsync.

    The crash seems to be caused by the assignment of a negative value to the ivar size, which is a UInt.

    We are specifically talking about this part of the code

    if newSize > previousSize {
        size += newSize - previousSize
        controlCapacity()
    } else {
        size -= previousSize - newSize
    }
    

    In case it helps: at the first call of setDataAsync, size = 0, previousSize = 0, newSize = 223. At the second call, size = 223, previousSize = 27986, newSize = 223; as a result it gets to:

    size -= previousSize - newSize

    And because size is a UInt ivar, it crashes.

    So the previous size was much larger than the new size, and at the same time the size ivar is not big enough for the subtraction. I'm not sure why the previous size is that big if the first call of the method determines an initial change in size of 223.

    More details: The cache I am using is as follows:

    MemoryCacheLevel<URL, NSDate>().compose(DiskCacheLevel<URL, NSDate>().pooled())

    And the ExpensiveObject extension is specified like this:

    extension NSDate: ExpensiveObject { public var cost: Int { return 1 } }

    Maybe I'm doing something wrong with the cost (I was not sure what the correct way to determine its cost is, so I just assumed we can reduce it to the TimeInterval cost, which is a Double), but even in this case I guess the library should be crash-free and in these kinds of situations just return an error or fail silently with an error log.

    Also please note that my tests set the value for exactly the same key 3 times consecutively.
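
    An underflow-safe version of that size update could look something like this (a minimal sketch of the crash-free behavior the reporter suggests, not the actual Carlos code):

    ```swift
    // Clamp at zero instead of trapping when the UInt subtraction would underflow.
    func updatedSize(current: UInt, previous: UInt, new: UInt) -> UInt {
        if new >= previous {
            return current + (new - previous)
        }
        let delta = previous - new
        return current >= delta ? current - delta : 0
    }
    ```

    With the numbers from the report (current 223, previous 27986, new 223), this clamps to 0 instead of crashing.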

    bug 
    opened by Carpemeid 1
Releases(1.2.1)
  • 1.2.1(Jul 9, 2021)

  • 1.2.0(Jul 8, 2021)

  • 1.1.1(May 27, 2021)

  • 1.1.0(Nov 9, 2020)

    Changes

    • Made Carlos less opinionated about which queue it runs its operations on.
    • Wrapped Carlos Futures in Deferred to make sure that Carlos doesn't execute its operations unless there are subscribers.
    Source code(tar.gz)
    Source code(zip)
  • 1.0.0(Oct 19, 2020)

    Breaking Changes

    • Swift 5.3 and Xcode 12 support
    • The codebase has been migrated from PiedPiper to Combine
    • The minimum supported OS versions are set to Combine's minimum supported versions: iOS 13, macOS 10.15, watchOS 6, tvOS 13.
    • Removed Dispatched.swift and RequestCapperCache.swift because the functionality they provided could be easily re-implemented using Combine operators.

    New Features

    • Carlos is now powered by Combine which means you can use awesome Combine provided operators on the Carlos cached values!
    Source code(tar.gz)
    Source code(zip)
  • 0.10.1(Sep 22, 2020)

  • 0.10.0(Jul 9, 2020)

  • 0.9.1(Dec 14, 2016)

    Breaking changes

    • Swift 3.0 support (for Swift 2.3 use specific commit 5d354c829d766568f164c386c59de21357b5ccff instead)
    • batchGetAll has been removed and replaced with a reified allBatch (see New features)
    • All deprecated functions have been removed
    • All custom operators have been removed in favor of their function counterparts
    • macOS and tvOS support has been temporarily dropped and will probably be re-added in the future
    • set method on CacheLevel now returns a Future enabling error-handling and progress-tracking of set calls.

    New Features

    • It's now possible to lift a CacheLevel into one that operates on a sequence of keys and returns a sequence of values. You can use allBatch to create a concrete BatchAllCache. You can use get on this cache if you want to pass a list of keys and get the success callback when all of them succeed and the failure callback as soon as one of them fails (the old behavior of batchGetAll), or you can compose or transform an allBatch cache just like any other CacheLevel. Consult the README.md for an example.
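
    The fail-fast semantics of the batched get can be sketched synchronously like this (an illustrative stand-in, not the actual Carlos API, which returns Futures):

    ```swift
    // Succeed only if every key succeeds; fail as soon as one lookup fails.
    func batchAll<K, V>(keys: [K], get: (K) -> Result<V, Error>) -> Result<[V], Error> {
        var values: [V] = []
        for key in keys {
            switch get(key) {
            case .success(let value): values.append(value)
            case .failure(let error): return .failure(error)
            }
        }
        return .success(values)
    }
    ```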
    Source code(tar.gz)
    Source code(zip)
  • 0.8(May 16, 2016)

    Breaking changes

    • The codebase has been migrated to Swift 2.2
    • Promise now has only an empty init. If you used one of the convenience inits (with value:, with error: or with value:error:), they have now moved to Future.

    New features

    • Adds value and error properties to Result
    • Added a way to initialize Futures through closures
    • It's now possible to map Futures through:
      • a simple transformation closure
      • a closure that throws
    • It's now possible to flatMap Futures through:
      • a closure that returns an Optional
      • a closure that returns another Future
      • a closure that returns a Result
    • It's now possible to filter Futures through:
      • a simple condition closure
      • a closure that returns a Future<Bool>
    • It's now possible to reduce a SequenceType of Futures into a new Future through a combine closure
    • It's now possible to zip a Future with either another Future or with a Result
    • Added merge to a SequenceType of Futures to collapse a list of Futures into a single one
    • Added traverse to SequenceType to generate a list of Futures through a given closure and merge them together
    • Added recover to Future so that it's possible to provide a default value the Future can use instead of failing
    • It's now possible to map Results through:
      • a simple transformation closure
      • a closure that throws
    • It's now possible to flatMap Results through:
      • a closure that returns an Optional
      • a closure that returns a Future
      • a closure that returns another Result
    • It's now possible to filter Results through a simple condition closure
    • Added mimic to Result
    Source code(tar.gz)
    Source code(zip)
  • PiedPiper-0.7(Mar 27, 2016)

    First release of Pied Piper as a separate framework.

    Breaking changes

    • As documented in the MIGRATING.md file, you will have to add a import PiedPiper line everywhere you make use of Carlos' Futures or Promises.

    New features

    • It's now possible to compose async functions and Futures through the >>> operator.
    • The implementation of ReadWriteLock taken from Deferred is now exposed as public.
    • It's now possible to take advantage of the GCD struct to execute asynchronous computation through the functions main and background for GCD built-in queues and async for GCD serial or custom queues.

    Improvements

    • Promises are now safer to use with GCD and in multi-thread scenarios.

    Fixes

    • Fixes a bug where calling succeed, fail or cancel on a Promise or a Future didn't correctly release all the attached listeners.
    • Fixes a retain cycle between Promise and Future objects.
    Source code(tar.gz)
    Source code(zip)
  • Carlos-0.7(Mar 27, 2016)

    Breaking changes

    • The onCompletion argument is now a closure accepting a Result<T> as a parameter instead of a tuple (value: T?, error: ErrorType?). Result<T> is the usual enum (aka Either) that can be .Success(T), .Error(ErrorType) or .Cancelled in case of canceled computations.
    • Please add a import PiedPiper line everywhere you make use of Carlos' Futures or Promises, since with 0.7 we now ship a separate Pied Piper framework.
    • AsyncComputation has been removed from the public API. Please use OneWayTransformer (or CacheLevel) instead now.

    Deprecated

    • APIs using closures instead of Fetcher, CacheLevel or OneWayTransformer parameters are now deprecated in favor of their counterparts. They will be removed from Carlos with the 1.0 release.

    New features

    • It's now possible to batch a set of fetch requests. You can use batchGetAll if you want to pass a list of keys and get the success callback when all of them succeed and the failure callback as soon as one of them fails, or batchGetSome if you want to pass a list of keys and get the success callback when all of them completed (successfully or not) but only get the list of successful responses back.
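
    The difference between the two variants can be sketched with a synchronous stand-in for batchGetSome (illustrative only; the real API works with Fetchers and Futures):

    ```swift
    // batchGetSome-style: wait for every lookup to complete, keep only
    // the successful responses.
    func batchSome<K, V>(keys: [K], get: (K) -> Result<V, Error>) -> [V] {
        keys.compactMap { key -> V? in
            if case .success(let value) = get(key) { return value }
            return nil
        }
    }
    ```

    batchGetAll, by contrast, fails the whole batch as soon as one key fails.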

    Fixes

    • Correctly updates access date on the disk cache when calling set on a DiskCacheLevel
    Source code(tar.gz)
    Source code(zip)
  • 0.6(Jan 22, 2016)

    New features

    • It's now possible to conditionally post-process values fetched from CacheLevels (or fetch closures) based on the key used to fetch the value. Use the function conditionedPostProcess or consult the README.md for more information
    • It's now possible to conditionally transform values fetched from (or set on) CacheLevels based on the key used to fetch (or set) the value. Use the function conditionedValueTransformation or consult the README.md for more information
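
    The shape of a key-conditioned step can be sketched like this (an illustrative, synchronous sketch; the Carlos versions are asynchronous and operate on CacheLevels):

    ```swift
    // Run the post-processing step only for keys satisfying the condition;
    // otherwise pass the fetched value through unchanged.
    func conditionedPostProcess<K, V>(
        _ process: @escaping (V) -> V,
        when condition: @escaping (K) -> Bool
    ) -> (K, V) -> V {
        { key, value in condition(key) ? process(value) : value }
    }
    ```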

    Fixes

    • Carthage works again

    Minor improvements

    • CacheProvider now has accessors to retrieve shared instances of the built-in caches (sharedImageCache, sharedDataCache and sharedJSONCache)
    Source code(tar.gz)
    Source code(zip)
  • 0.5(Nov 6, 2015)

    New features

    • Promise can now be canceled. Call cancel() to cancel a Promise. Be notified of a canceled operation with the onCancel function. Use onCancel to setup the cancel behavior of your custom operation. Remember that an operation can only be canceled once, and can only be executing, canceled, failed or succeeded at any given time.
    • It's now possible to apply a condition to a OneWayTransformer. You can call conditioned on the instance of OneWayTransformer to decorate and pass a condition on the input. This means you can effectively implement conditioned key transformations on CacheLevels. Moreover, you can implement conditioned post-processing transformations as well. For those, though, keep in mind that the input of the OneWayTransformer will be the output of the cache, not the key.
    • It's now possible to apply a condition to a TwoWayTransformer. You can call conditioned on the instance of TwoWayTransformer to decorate and pass two conditions: one for the forward transformation and one for the inverse transformation, which naturally take different input types. This means you can effectively implement conditioned value transformations on CacheLevels.
    • A new NSUserDefaultsCacheLevel is now included in Carlos. You can use this CacheLevel to persist values on NSUserDefaults, and you can even use multiple instances of this level to persist sandboxed sets of values
    • It's now possible to dispatch a CacheLevel or a fetch closure on a given GCD queue. Use the dispatch protocol extension or the ~>> operator and pass the specific dispatch_queue_t. Global functions are not provided since we're moving towards a global-functions-free API for Carlos 1.0
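The cancellation semantics described in the first bullet above can be sketched with a minimal state machine: a promise is executing, succeeded, failed or canceled; terminal states are final; and cancellation runs the registered handlers at most once. This is an illustration of the described behaviour, not Carlos's Promise implementation:

```swift
// Minimal sketch of the described cancellation semantics.
final class MiniPromise<T> {
    private enum State {
        case executing
        case succeeded(T)
        case failed(Error)
        case canceled
    }

    private var state: State = .executing
    private var cancelHandlers: [() -> Void] = []

    /// Registers the cancel behavior, like the onCancel described above.
    func onCancel(_ handler: @escaping () -> Void) {
        if case .canceled = state {
            handler()
        } else {
            cancelHandlers.append(handler)
        }
    }

    func succeed(_ value: T) {
        guard case .executing = state else { return } // terminal states are final
        state = .succeeded(value)
    }

    func fail(_ error: Error) {
        guard case .executing = state else { return }
        state = .failed(error)
    }

    func cancel() {
        guard case .executing = state else { return } // cancel at most once
        state = .canceled
        cancelHandlers.forEach { $0() }
        cancelHandlers.removeAll()
    }

    var isCanceled: Bool {
        if case .canceled = state { return true }
        return false
    }
}
```

Once canceled, a later succeed is ignored; conversely, cancel on an already-succeeded promise is a no-op, matching the "only executing, canceled, failed or succeeded at any given time" rule.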

    Major changes

    • API Breaking: CacheRequest is now renamed to Future. All the public API return Future instances now, and you can use Promise for your custom cache levels and fetchers
    • API Breaking: OneWayTransformer and TwoWayTransformer are now asynchronous, i.e. they return a Future<T> instead of a T directly
    • API Breaking: all the conditioned variants now take an asynchronous condition closure, i.e. the closure has to return a Future<Bool> instead of a (Bool, ErrorType) tuple
    • All the global functions are now deprecated. They will be removed from the public API with the release of Carlos 1.0

    Minor improvements

    • Promise can now be initialized with an Optional<T> and an ErrorType, succeeding or failing depending on whether the optional contains a value
    • Promise now has a mimic function that takes a Future<T> and succeeds or fails when the given Future does so
    • ImageTransformer now applies its transformations on a background queue
    • JSONTransformer now passes the right error when the transformations fail
    • CacheProvider.dataCache now pools requests on the network and disk levels, so pooled requests don't result in multiple set calls on the disk level
    • It's now possible to cancel operations coming from a NetworkFetcher
    • Int, Float, Double and Character now conform to ExpensiveObject with a unit (1) cost
    • Added a MIGRATING.md to the repo and to the Wiki that explains how to migrate to new versions of Carlos (only for breaking changes)
  • 0.4(Oct 4, 2015)

    Major changes

    • Adds a Fetcher protocol that you can use to create your custom fetchers.
    • Adds the possibility to transform values coming out of Fetcher instances through OneWayTransformer objects, without requiring a TwoWayTransformer as is the case when transforming values of CacheLevel instances
    • Adds a JSONCache function to CacheProvider
    • Adds output processors to process/sanitize values coming out of CacheLevels (see postProcess)
    • Adds a way to compose multiple OneWayTransformers through functions, operators and protocol extensions
    • Adds a way to compose multiple TwoWayTransformers through functions, operators and protocol extensions
    • Adds a normalize function and protocol extension that transform CacheLevel instances into BasicCache ones, making it easier to store them as instance properties
    • Adds a JSONTransformer class conforming to TwoWayTransformer
    • Adds an ImageTransformer class for the iOS and WatchOS 2 frameworks conforming to TwoWayTransformer
    • Adds a StringTransformer class conforming to TwoWayTransformer
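The transformer-composition additions above can be illustrated with a simplified synchronous model. The real Carlos protocols are asynchronous and Future-based; this sketch only shows the composition idea, with illustrative names:

```swift
// Hypothetical sketch of one-way transformer composition: failures
// (nil results) short-circuit through the pipeline.
struct OneWay<A, B> {
    let transform: (A) -> B?

    // Compose two transformers: A -> B, then B -> C.
    func compose<C>(_ next: OneWay<B, C>) -> OneWay<A, C> {
        OneWay<A, C> { a in
            self.transform(a).flatMap(next.transform)
        }
    }
}
```

Composing a String-to-Int parser with an Int-doubling step yields a single String-to-Int transformer; if the first step fails, the whole pipeline fails.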

    Minor improvements

    • invert is now available as a protocol extension to the TwoWayTransformer protocol

    WatchOS 2

    • Adds WatchOS 2 support through CocoaPods

    tvOS

    • Adds framework support for tvOS
  • 0.3.0(Sep 22, 2015)

    Major notes

    • Codebase converted to Swift 2.0
    • Adds WatchOS 2 support
    • Adds Mac OS X support

    API-Breaking changes

    • CacheRequest.onFailure now passes an ErrorType instead of an NSError

    Enhancements

    • Adds an onCompletion method to the CacheRequest class, that will be called in both success and failure cases
  • 0.2(Aug 13, 2015)

    New features

    • Includes a CacheProvider class to create commonly used caches
    • Includes a Playground to quickly test Carlos and custom cache architectures
    • Includes a new switchLevels function to have multiple cache lanes
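The idea of cache lanes can be sketched with caches modelled as plain lookup closures. The name switchLevels matches the release notes, but the signature here is illustrative, not Carlos's actual API:

```swift
// Hypothetical sketch: route each request to one of two cache lanes
// based on a predicate over the key.
func switchLevels<K, V>(
    _ laneA: @escaping (K) -> V?,
    _ laneB: @escaping (K) -> V?,
    switchClosure: @escaping (K) -> Bool // true routes to laneA, false to laneB
) -> (K) -> V? {
    { key in switchClosure(key) ? laneA(key) : laneB(key) }
}
```

A typical use is routing some keys to a memory lane and everything else to a disk or network lane.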

    Enhancements

    • Improves DiskCacheLevel and MemoryCacheLevel by having protocol-based keys
    • Defines safer Transformers (either OneWayTransformer or TwoWayTransformer) that return Optionals. If a conversion fails, set operations silently fail and get operations fail with a meaningful error.
    • Extends the conditioned function and the <?> operator to support fetch closures
    • Improves the code documentation
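The "safer transformers" behaviour above (Optional-returning conversions, silent set failures, meaningful get errors) can be sketched with a tiny cache over a concrete type. Types and names are illustrative, not Carlos's actual API:

```swift
// Hypothetical sketch: a failed forward transformation makes set a silent
// no-op; a failed inverse transformation makes get throw a meaningful error.
enum TransformerError: Error {
    case conversionFailed
}

struct SafeCache<Key: Hashable> {
    private var storage: [Key: String] = [:]
    private let serialize: (Int) -> String?
    private let deserialize: (String) -> Int?

    init(serialize: @escaping (Int) -> String?,
         deserialize: @escaping (String) -> Int?) {
        self.serialize = serialize
        self.deserialize = deserialize
    }

    mutating func set(_ value: Int, forKey key: Key) {
        // A failed forward transformation silently skips the write.
        guard let raw = serialize(value) else { return }
        storage[key] = raw
    }

    func get(_ key: Key) throws -> Int? {
        guard let raw = storage[key] else { return nil } // cache miss
        // A failed inverse transformation surfaces a meaningful error.
        guard let value = deserialize(raw) else {
            throw TransformerError.conversionFailed
        }
        return value
    }
}
```

With a serializer that rejects negative numbers, setting a negative value is silently dropped and a later get simply misses, while a corrupted stored value would surface as a conversion error.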

    Bugfixes

    • Fixes an issue where the NetworkFetcher would not correctly handle multiple get requests for the same URL
  • 0.1.0(Aug 13, 2015)

Owner
National Media & Tech
