iOS image caching. Libraries benchmark (SDWebImage vs FastImageCache)

1. Introduction

In the past years, iOS apps have become more and more visually appealing, and displaying images is a key part of that: most apps show images that need to be downloaded and rendered. Most developers have faced the need to populate table views or collection views with images. Downloading those images is resource consuming (cellular data, battery, CPU, …), so the caching model appeared in order to minimize that cost.

To achieve a great user experience, it’s important to understand what is going on under the iOS hood when we cache and load images.

Also, the benchmarks on the most used image caching open source libraries can be of great help when choosing your solution.

2. Classical approach

  • download the images asynchronously
  • process images (scale, remove red eyes, remove borders, …) so they are ready to be displayed
  • write them on disk
  • read from disk and display them when needed
// assuming we have an NSURL *imageUrl and UIImageView *imageView, we need to load the image from the URL and display it in the imageView
if ([self hasImageDataForURL:imageUrl]) {
  // already cached on disk: read the data, build the image and display it
  NSData *imageData = [self imageDataForUrl:imageUrl];
  UIImage *image = [UIImage imageWithData:imageData];
  dispatch_async(dispatch_get_main_queue(), ^{
    imageView.image = image;
  });
} else {
  // not cached yet: download, store on disk, then display
  [self downloadImageFromURL:imageUrl withCompletion:^(NSData *imageData, …) {
    [self storeImageData:imageData …];
    UIImage *image = [UIImage imageWithData:imageData];
    dispatch_async(dispatch_get_main_queue(), ^{
      imageView.image = image;
    });
  }];
}
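
The helper methods used above are left abstract on purpose. If you want something concrete, a rough sketch could look like the following; the names, signatures and the Caches-directory layout are assumptions for illustration only, with NSURLSession handling the download:

// Hypothetical helpers for the snippet above, living in the same class (sketch only).
// The cache key is simply the URL's last path component; real code should use
// something collision-free, e.g. a hash of the absolute URL string.
- (NSString *)cachePathForURL:(NSURL *)url {
  NSString *cachesDir = [NSSearchPathForDirectoriesInDomains(NSCachesDirectory, NSUserDomainMask, YES) firstObject];
  return [cachesDir stringByAppendingPathComponent:[url lastPathComponent]];
}

- (BOOL)hasImageDataForURL:(NSURL *)url {
  return [[NSFileManager defaultManager] fileExistsAtPath:[self cachePathForURL:url]];
}

- (NSData *)imageDataForUrl:(NSURL *)url {
  return [NSData dataWithContentsOfFile:[self cachePathForURL:url]];
}

- (void)storeImageData:(NSData *)imageData forURL:(NSURL *)url {
  [imageData writeToFile:[self cachePathForURL:url] atomically:YES];
}

- (void)downloadImageFromURL:(NSURL *)url withCompletion:(void (^)(NSData *imageData, NSError *error))completion {
  NSURLSessionDataTask *task = [[NSURLSession sharedSession] dataTaskWithURL:url
      completionHandler:^(NSData *data, NSURLResponse *response, NSError *error) {
        completion(data, error);
      }];
  [task resume];
}
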
FPS simple math:
  • 60 FPS is our ideal for any UI update, so the experience is flawless
  • 60 FPS means roughly 16.7 ms per frame (1000 ms / 60). This means that if any main-queue operation takes longer than 16.7 ms, the scrolling FPS will drop, since the CPU will be busy doing something other than rendering the UI. A rough way to measure this is sketched below.
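
To actually see these numbers while scrolling, a rough CADisplayLink-based FPS meter can be dropped into the app. This is only a sketch (BPFPSMonitor is a made-up class, not part of any of the libraries tested):

#import <QuartzCore/QuartzCore.h>

// Rough FPS meter: logs the average frame rate once per second. Any main-queue
// work longer than ~16.7 ms shows up as a drop below 60.
// Note: the display link retains its target; invalidate it to stop measuring.
@interface BPFPSMonitor : NSObject
- (void)start;
@end

@implementation BPFPSMonitor {
  CADisplayLink *_link;
  CFTimeInterval _lastTimestamp;
  NSUInteger _frameCount;
}

- (void)start {
  _link = [CADisplayLink displayLinkWithTarget:self selector:@selector(tick:)];
  [_link addToRunLoop:[NSRunLoop mainRunLoop] forMode:NSRunLoopCommonModes];
}

- (void)tick:(CADisplayLink *)link {
  if (_lastTimestamp == 0) { _lastTimestamp = link.timestamp; return; }
  _frameCount++;
  CFTimeInterval elapsed = link.timestamp - _lastTimestamp;
  if (elapsed >= 1.0) {
    NSLog(@"FPS: %.1f", _frameCount / elapsed);
    _frameCount = 0;
    _lastTimestamp = link.timestamp;
  }
}
@end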

3. Downsides of the classical variant:

  • loading images, or any file, from disk is expensive (disk access is usually 10,000 to 1,000,000 times slower than memory access. See comparison here). If we refer to SSD disks, those can come closer to RAM speeds (like 10 times slower), but at this point no smartphone or tablet is equipped with an SSD unit.
  • creating the UIImage instance will result in a compressed version of the image mapped to a memory section. The compressed image is small, but it cannot be rendered as-is. If created from a file on disk, the image data is not even loaded into memory yet. Decompressing an image is also expensive.
  • setting the image property of the imageView in this case will create a CATransaction that will be committed on the run loop. On the next run loop iteration, the CATransaction involves (depending on the images) creating a copy of any images which have been set as layer contents. Copying images includes:
    • allocating buffers for file IO and decompression
    • reading disk data into memory
    • decompressing the image data (resulting in the raw bitmap) – a high CPU consumer; a sketch of doing this off the main queue follows this list
    • CoreAnimation uses the decompressed data and renders it
  • improperly byte-aligned images are copied by CoreAnimation so that their byte-alignment is fixed and they can be rendered. This isn’t stated in the Apple docs, but profiling apps with Instruments shows CA::Render::copy_image even when the Core Animation instrument shows no images copied
  • starting with iOS 7, the JPEG hardware decoder is no longer accessible to 3rd party apps. This means our apps rely on a software decoder, which is significantly slower. This was noticed by the FastImageCache team on their Github page and also by Nick Lockwood in a Twitter post.
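
A common way to pay the decompression cost off the main queue is to draw the image once into a bitmap context on a background queue and hand the rendered copy to the image view. A minimal sketch follows (the helper name is invented; real libraries also deal with scale, orientation, alpha and EXIF):

// Force decompression by rendering the image once, off the main queue (sketch).
static UIImage *BPDecompressedImage(UIImage *image) {
  UIGraphicsBeginImageContextWithOptions(image.size, NO, image.scale);
  [image drawInRect:(CGRect){CGPointZero, image.size}];
  UIImage *decompressed = UIGraphicsGetImageFromCurrentImageContext();
  UIGraphicsEndImageContext();
  return decompressed ?: image;
}

// Usage: decompress in the background, touch UIKit views only on the main queue.
dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
  UIImage *ready = BPDecompressedImage([UIImage imageWithData:imageData]); // imageData as in the snippet from section 2
  dispatch_async(dispatch_get_main_queue(), ^{
    imageView.image = ready;
  });
});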

4. A strong iOS image cache component must:

  • download images asynchronously, so the main queue is used as little as possible
  • decompress images on a background queue. This is far from being trivial. See a strong article about background decompression
  • cache images into memory and on disk. Caching on disk is important because the app might be closed or need to purge memory under low-memory conditions. In this case, reloading the images from disk is a lot faster than downloading them. Note: if you use NSCache for the memory cache, this class will purge all its contents when a memory warning is issued. Details about NSCache here: http://nshipster.com/nscache/ (a bare-bones two-level cache is sketched after this list)
  • store the decompressed image on disk and in memory to avoid redoing the decompression
  • use GCD and blocks. This makes the code more performant and easier to read and write. Nowadays, GCD and blocks are a must for async operations
  • nice to have: category over UIImageView for trivial integration.
  • nice to have: ability to process the image after download and before storing it into the cache.
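
Putting the memory and disk pieces together, a bare-bones version of such a component could look like the sketch below. BPImageCache and its methods are invented names; eviction, error handling and the actual decompression are left out or only hinted at:

// Bare-bones two-level cache (sketch): NSCache in front, plain files behind it,
// with all disk access on a serial background queue.
@interface BPImageCache : NSObject
- (void)imageForKey:(NSString *)key completion:(void (^)(UIImage *image))completion;
- (void)storeImage:(UIImage *)image data:(NSData *)imageData forKey:(NSString *)key;
@end

@implementation BPImageCache {
  NSCache *_memoryCache;      // purges its contents automatically under memory pressure
  dispatch_queue_t _ioQueue;  // serial queue so disk access never blocks the main queue
  NSString *_diskDirectory;
}

- (instancetype)init {
  if ((self = [super init])) {
    _memoryCache = [NSCache new];
    _ioQueue = dispatch_queue_create("com.example.imagecache.io", DISPATCH_QUEUE_SERIAL);
    _diskDirectory = [NSTemporaryDirectory() stringByAppendingPathComponent:@"images"];
    [[NSFileManager defaultManager] createDirectoryAtPath:_diskDirectory
                                withIntermediateDirectories:YES attributes:nil error:NULL];
  }
  return self;
}

// Keys are assumed to be filesystem-safe (e.g. a hash of the image URL).
- (NSString *)diskPathForKey:(NSString *)key {
  return [_diskDirectory stringByAppendingPathComponent:key];
}

- (void)imageForKey:(NSString *)key completion:(void (^)(UIImage *image))completion {
  UIImage *cached = [_memoryCache objectForKey:key];
  if (cached) { completion(cached); return; }
  dispatch_async(_ioQueue, ^{
    NSData *data = [NSData dataWithContentsOfFile:[self diskPathForKey:key]];
    UIImage *image = data ? [UIImage imageWithData:data] : nil;
    // A real component would force decompression here, while still off the main queue.
    if (image) { [self->_memoryCache setObject:image forKey:key]; }
    dispatch_async(dispatch_get_main_queue(), ^{ completion(image); });
  });
}

- (void)storeImage:(UIImage *)image data:(NSData *)imageData forKey:(NSString *)key {
  [_memoryCache setObject:image forKey:key];
  dispatch_async(_ioQueue, ^{
    [imageData writeToFile:[self diskPathForKey:key] atomically:YES];
  });
}
@end
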
Advanced imaging on iOS

To find out more about imaging on iOS, how the SDK frameworks work (CoreGraphics, Image IO, CoreAnimation, CoreImage), CPU vs GPU and more, go through this great article by @rsebbe.

Is Core Data a good candidate?

Here is a benchmark of image caching using Core Data versus the file system; the results recommend the file system (as we are already accustomed to).

Just looking at the concepts listed above makes it clear that writing such a component on your own is hard, time consuming and painful. That’s why we turn to open source image caching solutions. Most of you have heard of SDWebImage or the new FastImageCache. In order to decide which one fits you best, I’ve benchmarked them and analysed how they match our list of requirements.

5. Benchmarks

Libraries tested:

  • SDWebImage
  • FastImageCache
  • AFNetworking
  • TMCache
  • Haneke

Note: AFNetworking was added to the comparison because it benefits from disk caching on iOS 7 (due to NSURLCache).
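
The size of that NSURLCache layer is controlled by the host app, typically at launch. A hedged sketch (the capacities below are arbitrary example values, and responses are only stored if the server sends cache-friendly headers):

// Sizing the shared NSURLCache, e.g. in application:didFinishLaunchingWithOptions:.
NSURLCache *urlCache = [[NSURLCache alloc] initWithMemoryCapacity:4 * 1024 * 1024    // 4 MB in memory
                                                     diskCapacity:100 * 1024 * 1024  // 100 MB on disk
                                                         diskPath:@"image_url_cache"];
[NSURLCache setSharedURLCache:urlCache];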

Scenario:

– for each library, I made a clean install of the benchmark app, then started the app, scrolled gently while all the images loaded, then scrolled back and forth with different intensities (from slow to fast). I closed the app to force loading from the disk cache (where available), then ran the same scrolling scenario.

Benchmark app – project:

– the demo project source can be found on Github under the name ImageCachingBenchmark, together with the charts, collected data tables and more.
– please note that the project from Github had to be modified, as well as the image caching libraries, so that we know the cache source of each image loaded. Because I didn’t want to check in the Cocoapods source files (not a good practice) and the project code must compile after a clean install of the Cocoapods, the current version of the Github project is slightly different from the one I used for the benchmarks.
– if some of you want to rerun the benchmarks, you need to give all the libraries a completion block for image loading similar to the default one in SDWebImage, which returns the SDImageCacheType (roughly as sketched below).
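
For reference, this is roughly the shape of the SDWebImage-style completion block in question (the exact selector and block signature vary between SDWebImage versions, so treat this as a sketch):

// Logging the cache source per image load. Older SDWebImage versions expose
// setImageWithURL:completed:; newer ones prefix it with sd_ and add an imageURL
// parameter to the block.
[imageView setImageWithURL:imageUrl
                 completed:^(UIImage *image, NSError *error, SDImageCacheType cacheType) {
  switch (cacheType) {
    case SDImageCacheTypeNone:   NSLog(@"image downloaded from the network"); break;
    case SDImageCacheTypeMemory: NSLog(@"image served from the memory cache"); break;
    case SDImageCacheTypeDisk:   NSLog(@"image served from the disk cache"); break;
  }
}];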

Fastest vs slowest device results

Complete benchmark results can be found on the Github project. Since those tables are big, I decided to create charts using the fastest device data (iPhone 5s) and the slowest (iPhone 4).

iPhone 5s results

(Chart slideshow: the iPhone 5s charts are included with the collected data tables on the Github project.)

iPhone 4 results

(Chart slideshow: the iPhone 4 charts are included with the collected data tables on the Github project.)

Summary
Results            SDWebImage   FastImageCache   AFNetworking   TMCache   Haneke
async download
backgr decompr
store decompr
memory cache
disk cache                                       NSURLCache
GCD and blocks
easy to use
UIImageView categ
from memory
from disk
lowest CPU
lowest mem
high FPS
License            MIT          MIT              MIT            Apache    Apache
Table legend:

– async download = support for asynchronous downloads directly into the library
– backgr decompr = image decompression executed on a background queue/thread
– store decompr = images are stored in their decompressed version
– memory/disk cache = support for memory/disk cache
– UIImageView categ = category for UIImageView directly into the library
– from memory/disk = top results for the average retrieve times from memory/disk cache

6. Conclusions

– writing an iOS image caching component from scratch is hard
– SDWebImage and AFNetworking are solid, properly maintained projects with many contributors. FastImageCache is catching up pretty fast on that front.
– looking at all the data provided above, I think we can all agree SDWebImage is the best solution at this time, even if for some projects AFNetworking or FastImageCache might fit better. It all depends on the project’s requirements.

49 thoughts on “iOS image caching. Libraries benchmark (SDWebImage vs FastImageCache)”

  1. Thanks for this immensely useful benchmark Bogdan! I will be adding better image decompression to Haneke shortly (currently it only decompresses the image the first time), which hopefully should make it perform better.

    I didn’t understand what you meant by “from disk”. Would you mind clarifying this?

      1. I see. Is this what BPCacheType does? I noticed that both FastImageCache and Haneke only log BPCacheTypeNone in the benchmark code.

  2. No, this is a bit tricky. I tried explaining in the “Benchmark app – project” section. Libraries like FastImageCache or Haneke didn’t have a way to return the cache source like SDWebImage does. SDWebImage setImageWithUrl:completed: works with a completion block like this: ^(UIImage *image, NSError *error, SDImageCacheType cacheType). I modified the code of FastImageCache, Haneke and TMCache to make their completion blocks return a cache type. But I wouldn’t/shouldn’t commit that to each project. And since I didn’t want to add the Pods sources to the repo, I was left with the only option of removing those parts of the code so the project compiles. Basically, in order to re-run the benchmarks one must make the same changes as I did.

  3. I don’t get it. From what I can see, Haneke does better than most of the others. Why is SDWebImage the best solution of all time? Please clarify, thanks.

  4. I agree Haneke performs very well, but there are some advantages of SDWebImage that win the fight for me:
    – background image decompression
    – supports iOS 6
    – is a mature project that is used by a lot of developers and is very robust (which can happen with Haneke if we give it support and time)

    1. LRImageManager could be a competitive candidate, but I didn’t include it since I was looking at widespread projects (500+ stars) with a good contributors list.

  5. hello
    I am making a project in Xcode 4.4 and using SDWebImage.
    I have followed all the steps you mentioned above, but I am still getting an error: “Apple LLVM 4.0 compile error”.
    So please help me resolve this issue.

    Thank you

    1. Hello. I think the best way to get help is to create an issue on the SDWebImage GitHub page, make sure you write the exact error causing your issue (because that “Apple LLVM …” error is too generic and doesn’t help). Also, I suggest you upgrade to at least Xcode 5.

  6. Great article! I have a collectionview project where I’m also using cloudkit as the backend. Since CK is querying and pulling back an array of images, could SDWebImage still be used in this scenario? Thanks!

      1. Hi again. I finally got around to implementing the caching for SDWebImage, but I realized that I’m not sure how to now populate the collectionview from the cache. Currently, I’m using the index path of the cell to index into the array containing the images, which is obviously the typical way of doing it. But how do you do that when using the cache? Is there a way to use the index path with it? Thanks!

      2. You need to write your own structure that maps the index path to the image URL. SDWebImage has no capability to do that and doesn’t need one 🙂

  7. Great article! One question with SDWebImage vs FastImageCache: Why is SDWebimage _so_ much faster reading from memory but FastImageCache _so_ much faster reading from disk?

    1. There is not a very big difference between the speed of memory reads across the libraries; SDWebImage got a better score than FastImageCache, but they both perform very well. The significant difference is at disk retrieval, where SDWebImage uses standard image files stored on disk vs FastImageCache using byte-aligned image table files, plus mapped memory when reading from those files. For more details, see their project description; it explains in detail how they managed to achieve such performance: https://github.com/path/FastImageCache
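
      To illustrate just the memory-mapping part of that answer (this is the general Foundation technique, not FastImageCache’s actual code, and imageTablePath is a made-up variable):

      // Memory-mapped read: the file's pages are faulted in lazily, so opening a big
      // image table is cheap and only the byte ranges actually touched hit the disk.
      NSError *error = nil;
      NSData *mappedData = [NSData dataWithContentsOfFile:imageTablePath
                                                  options:NSDataReadingMappedIfSafe
                                                    error:&error];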

  8. I am a bit shocked to see this. We’ve been using SDWebImage, but unfortunately the library hasn’t evolved properly, IMHO. There was also once a very nasty crash for a month or so. Overall, SDWebImage served us really well, and I still have it integrated in other simple apps.

    Now, we decided to move to FastImageCache for our main app. Other than the performance it promises for large image sets, the way it forces you to organize your image source is a huge gain for us (think prefetching images). We’re in the middle of the migration, but I’ll hopefully find time to do our own benchmarks and shed some light at the situation.

    For example, library A might be sacrificing a bit of performance by doing more work in a background, low priority thread, which typically takes longer to execute, while library B lags the main thread a bit hoping to finish work quickly. This is just one of the many dimensions that would be useful to gauge. Per the FIC readme, though, the library sacrifices disk space/access for buttery scrolling of lots of images, if done properly. So, I’m guessing the performance impact of FIC scales better than that of other libraries.

    Ultimately, of course, each use case and its requirements.

    1. Kingfisher looks interesting, but I haven’t had the chance to try it out. I think it would be worth it to redo the test on a Swift app, with Swift libraries vs Obj-C libraries. I don’t have the time to do it now, but maybe someone else will.

  9. Please include KFSwiftImageLoader as well: https://github.com/kiavashfaisali/KFSwiftImageLoader

    Would love to see Kingfisher pitted against that. Both are extremely high performance, but KFSwiftImageLoader is not just for UIImageView, UIButton, and MKAnnotationView; it’s also the first to support Apple Watch, with an extension for WKInterfaceImage and leveraging WKInterfaceDevice for device-level caching with automatic management.

  10. From the article:
    “If we refer to SSD disks, those can come closer to RAM speeds (like 10 times slower), but at this point no smartphone or tablet is equipped with an SSD unit.”

    Do you understand what SSD stands for? Solid state drive. i.e., not a spinning magnetic disk. ALL SMARTPHONES AND TABLETS USE THIS TYPE OF “DISK”.

    Erodes my trust in your entire article…

    1. At the time I wrote the article, there was a difference in speed between SSD disks and the regular flash storage that iOS devices were using (SSDs were significantly faster). The SSD term (even though it’s very similar to flash storage) wasn’t used to describe the storage unit of iOS devices.
      I agree I didn’t make this very clear, but I don’t think your aggressive tone was necessary.

  11. Excellent post, I enjoyed reading it, and thanks for the great comparison results, which will help me decide which one I should go for.

  12. Have been using SDWebImage for 1+ year now; it’s very convenient, especially with the progressive image download. But it’s based on NSURLConnection, and we know that is deprecated starting with iOS 9. NSURLSession has been around for 2+ years now, and I think the future winner will be the one that implements NSURLSession nicely and properly.

  13. I wrote an image library, FlyImage, which takes advantage of the strengths of SDWebImage, FastImageCache and AFNetworking; it is a simple and high performance image library.

    Features:

    – High performance: reduces memory operations while rendering, avoids memory warnings caused by images
    – Stores and retrieves different sizes of small images in one memory file, for smooth scrolling
    – Simple: supports UIImageView and CALayer categories
    – An asynchronous image downloader
    – Supports the WebP format
    – Supports mmap to improve I/O performance
