Best practices for using Images in iOS applications

Kumar Reddy
3 min read · Jun 27, 2021

UIImage is one of the basic UI elements used in virtually every application. Even though it is such a fundamental component, the memory footprint that images take internally is still not clear to most of us.

If we do not manage images carefully, the app ends up with a large memory footprint, which in turn hurts battery life, responsiveness, and CPU usage. In this article, I cover best practices for using images in iOS applications.

For creating images, use UIGraphicsImageRenderer instead of UIGraphicsBeginImageContextWithOptions.

To get more insight, let's deep-dive into image rendering formats. There are broadly four different rendering formats: Alpha 8 (1 byte per pixel), Luminance and Alpha 8 (2 bytes per pixel), SRGB (4 bytes per pixel), and Wide format (8 bytes per pixel).

UIGraphicsBeginImageContextWithOptions always uses the SRGB format, which takes 4 bytes per pixel for a full-color image. If we use UIGraphicsImageRenderer instead, the system picks the right format depending on the content being drawn, rather than always paying 4 bytes per pixel.

The way we create the image barely changes, but it has a significant benefit for the application's memory usage.

Using UIGraphicsBeginImageContextWithOptions to create the image
Using UIGraphicsImageRenderer to create the image
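In code, the difference between the two approaches is small. Below is a minimal sketch of both; the circle-drawing content and the function names are illustrative assumptions, not the original sample's code.

```swift
import UIKit

// Older API: the bitmap context always uses the SRGB format (4 bytes per pixel),
// regardless of what the drawing actually needs.
func makeCircleLegacy(diameter: CGFloat, color: UIColor) -> UIImage? {
    let size = CGSize(width: diameter, height: diameter)
    UIGraphicsBeginImageContextWithOptions(size, false, 0)
    defer { UIGraphicsEndImageContext() }
    color.setFill()
    UIBezierPath(ovalIn: CGRect(origin: .zero, size: size)).fill()
    return UIGraphicsGetImageFromCurrentImageContext()
}

// Newer API (iOS 10+): the renderer lets the system choose the most compact
// backing format for the image that is actually drawn.
func makeCircle(diameter: CGFloat, color: UIColor) -> UIImage {
    let size = CGSize(width: diameter, height: diameter)
    let renderer = UIGraphicsImageRenderer(size: size)
    return renderer.image { _ in
        color.setFill()
        UIBezierPath(ovalIn: CGRect(origin: .zero, size: size)).fill()
    }
}
```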

Do not apply a full-size image to a UIImageView directly; downsample the image before assigning it to the UIImageView.

Rendering an image on screen is typically a three-step process, even though it looks like a one-liner in code: the image file is loaded into memory, decoded, and then rendered on screen.

Image Pipeline, Image from WWDC Presentations.

Before we get into the memory details, let's understand buffers.

A buffer is a contiguous region of memory, viewed as a sequence of elements.

A data buffer stores the contents of an image file in memory; its bytes do not describe pixels. The data is in compressed form and can be in any supported format (PNG, JPEG, etc.).

An image buffer is an in-memory representation of an image. Each element represents the color of a single pixel, which means the size of the image buffer is directly proportional to the image's dimensions.

image from WWDC Presentations

In general, if you load a 1000 px × 1000 px image from the gallery, or download one, to render a 50 × 50 thumbnail or a 250 × 250 banner, the system still decodes the full 1000 px image into memory before rendering it.

However, if you downsample the image first, the system decodes only the pixels that are actually needed instead of the full-sized image. That is a huge saving on the memory footprint, and if you are rendering tens of images on a screen, the savings multiply.
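A common way to downsample, and the approach shown in the WWDC sessions, is to use ImageIO's thumbnail API. Here is a minimal sketch of such a helper; the function name downsampledImage(at:to:) is my own, not from the original sample.

```swift
import UIKit
import ImageIO

// Decodes only a thumbnail-sized version of the image on disk instead of the
// full-resolution bitmap, so the image buffer matches the size actually shown.
func downsampledImage(at url: URL,
                      to pointSize: CGSize,
                      scale: CGFloat = UIScreen.main.scale) -> UIImage? {
    // Don't decode the full image just to create the source.
    let sourceOptions = [kCGImageSourceShouldCache: false] as CFDictionary
    guard let source = CGImageSourceCreateWithURL(url as CFURL, sourceOptions) else {
        return nil
    }

    let maxDimensionInPixels = max(pointSize.width, pointSize.height) * scale
    let downsampleOptions = [
        kCGImageSourceCreateThumbnailFromImageAlways: true,
        kCGImageSourceShouldCacheImmediately: true,        // decode now, at the small size
        kCGImageSourceCreateThumbnailWithTransform: true,  // respect EXIF orientation
        kCGImageSourceThumbnailMaxPixelSize: maxDimensionInPixels
    ] as CFDictionary

    guard let cgImage = CGImageSourceCreateThumbnailAtIndex(source, 0, downsampleOptions) else {
        return nil
    }
    return UIImage(cgImage: cgImage)
}
```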

Let's try an example. I created a sample application that loads five images from the gallery, once without downsampling and once with it, so you can see the impact on the memory footprint.

Applying the gallery image without downsampling
Applying the gallery image with downsampling
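The two variants correspond to code along these lines; thumbnailImageView and imageURL are placeholder names for the sample app's view and photo URL, and downsampledImage(at:to:) is the helper sketched earlier.

```swift
// Without downsampling: the full-resolution bitmap is decoded into memory,
// even though the view only shows a small thumbnail.
thumbnailImageView.image = UIImage(contentsOfFile: imageURL.path)

// With downsampling: only a thumbnail-sized buffer is decoded.
thumbnailImageView.image = downsampledImage(at: imageURL,
                                            to: thumbnailImageView.bounds.size)
```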

References:

I have a separate article with recommended videos for understanding iOS memory. Please go through the WWDC videos listed there.

Thank you for reading. Please share your feedback and comments to help improve the article.
