In the last episode, we covered MTLBuffer objects. Since Metal revolves around the GPU, and GPUs are fundamentally tied to graphics, the next essential topic is textures, which we'll explore in this episode.
MTLTexture

At its core, a texture is similar to a buffer: it's a block of memory. However, it carries additional metadata that allows it to be accessed as an image, enabling optimizations that a plain buffer can't offer. Textures also provide features like normalized values and sampling without any extra implementation on your part. Unlike with buffers, Metal validates the type mappings between the CPU and GPU automatically, raising errors if a mismatch occurs.
You can use Xcode's Frame Capture tool to inspect the contents of your buffers. I provided a brief explanation on how to use it in the previous episode, so in this episode, we'll focus on textures specifically.

By double-clicking on a texture, you can view its preview, where you can:
- Inspect the value of individual pixels
- Check the texture's pixel format and dimensions
- Adjust preview settings, such as remapping channels or changing the color space

You can also inspect a texture's content in the debugger, but be aware that the texture may contain outdated data if it hasn't been properly synchronized between the CPU and GPU.

To create a basic texture, you only need a single call:
let texture = device.makeTexture(                         // (1)
    descriptor: MTLTextureDescriptor.texture2DDescriptor( // (2)
        pixelFormat: .rgba8Unorm,                         // (3)
        width: 640,                                       // (4)
        height: 480,                                      // (5)
        mipmapped: false))                                // (6)
A mipmap is a pyramid of images where the first (zero) level contains the original image, and each subsequent level is scaled down to half the size of the previous one, with the final level being a 1x1 pixel. While some standards and platforms require textures to have power-of-two dimensions, Metal allows textures of any size.
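The level count follows directly from this definition: each level halves the largest dimension of the previous one until it reaches 1x1. A quick sketch of the arithmetic (pure Swift, no Metal required):

```swift
import Foundation

// Number of levels in a full mipmap pyramid: halve the largest
// dimension until it reaches 1, counting the original level zero too.
func mipLevelCount(width: Int, height: Int) -> Int {
    var size = max(width, height)
    var levels = 1
    while size > 1 {
        size /= 2
        levels += 1
    }
    return levels
}

print(mipLevelCount(width: 640, height: 480)) // 10: 640, 320, 160, 80, 40, 20, 10, 5, 2, 1
```

This is the same value Metal computes for you when you pass `mipmapped: true` to the convenience descriptor above.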

[Image: By en:User:Mulad, based on a NASA image - Created by en:User:Mulad based on File:ISS from Atlantis - Sts101-714-016.jpg, CC BY-SA 3.0, https://commons.wikimedia.org/w/index.php?curid=1140741]
Mipmaps are commonly used for trilinear interpolation, but fundamentally, they are just a pyramid. You can also use them for techniques like pyramid blending, dynamic blurring, and more.

[Image: By BillyBob CornCob - Own work, CC0, https://commons.wikimedia.org/w/index.php?curid=76760547]
Note that mipmaps aren't generated automatically - you need to create or populate them yourself using one of the blit operations (we'll cover blit operations in a later episode; until then, you can refer to the documentation).
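For the common case of regenerating a full pyramid from level zero, the blit encoder has a dedicated call. A minimal sketch, assuming you already have a command queue and a texture created with mipmapping enabled:

```swift
import Metal

// Generates all mip levels of `texture` from its level-zero contents.
// Assumes the texture was created with mipmapLevelCount > 1.
func generateMipmaps(for texture: MTLTexture, queue: MTLCommandQueue) {
    guard let commandBuffer = queue.makeCommandBuffer(),
          let blitEncoder = commandBuffer.makeBlitCommandEncoder() else { return }
    blitEncoder.generateMipmaps(for: texture)
    blitEncoder.endEncoding()
    commandBuffer.commit()
}
```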
You can also bind a texture to an IOSurface, which can be useful for alternative rendering of UI elements, video processing, and other tasks. It's your responsibility to configure the descriptor so that it's compatible with the surface plane.
device.makeTexture(
    descriptor: MTLTextureDescriptor.texture2DDescriptor(
        pixelFormat: .rgba8Unorm,
        width: 640, height: 480,
        mipmapped: false),
    iosurface: surface,
    plane: 0)
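For completeness, here is a hedged sketch of how a compatible IOSurface might be created. The property values are assumptions: they must match the texture descriptor in size and bytes per pixel, and the pixel format is expressed as a FourCC code:

```swift
import IOSurface

// A 640x480 surface with 4 bytes per pixel; 'BGRA' as a FourCC code.
// The failable initializer returns nil if the properties are invalid.
let surface = IOSurface(properties: [
    .width: 640,
    .height: 480,
    .bytesPerElement: 4,
    .pixelFormat: UInt32(0x42475241) // 'BGRA'
])
```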
If you need a more customized texture, such as for writing or rendering into it (and yes, these are distinct operations), you'll need to manually configure the texture descriptor. For example, let's say we need a lookup table for color mapping. A 3D texture (a 64x64x64 cube of values) is ideal for this purpose, as it can be initialized and used exclusively on the GPU.
let descriptor = MTLTextureDescriptor()                       // (1)
descriptor.textureType = .type3D                              // (2)
descriptor.pixelFormat = .rgba32Float                         // (3)
descriptor.width = 64                                         // \
descriptor.height = 64                                        // (4)
descriptor.depth = 64                                         // /
descriptor.mipmapLevelCount = 1 // (5) - must be at least 1
descriptor.sampleCount = 1                                    // (6)
descriptor.storageMode = .private                             // (7)
descriptor.usage = [.shaderRead, .shaderWrite, .renderTarget] // (8)
There are additional fields available on MTLTextureDescriptor, but they are more specialized and will be covered in future episodes.
The most common use of textures is loading an image and applying it to a surface with shaders. But how do you upload an image if makeTexture doesn't handle that directly?
The easiest approach is to let MetalKit handle this by using MTKTextureLoader.
func loadTexture(device: MTLDevice, image: CGImage) -> MTLTexture? {
    let textureLoader = MTKTextureLoader(device: device) // (1)
    let usage: MTLTextureUsage = [
        .shaderRead,
        .shaderWrite,
        .renderTarget]
    let textureLoaderOptions: [MTKTextureLoader.Option: Any] = [
        .origin: MTKTextureLoader.Origin.bottomLeft,                          // (2)
        .textureUsage: NSNumber(value: usage.rawValue),                       // (3)
        .textureStorageMode: NSNumber(value: MTLStorageMode.shared.rawValue), // (4)
        .SRGB: false                                                          // (5)
    ]
    var texture: MTLTexture?
    do {
        texture = try textureLoader.newTexture(cgImage: image, options: textureLoaderOptions) // (6)
    } catch {
        print("Unable to load texture. Error info: \(error)")
    }
    return texture
}
MTKTextureLoader creates and fills a texture from a CGImage with the specified options. Alternatively, you can upload your raw image buffer to an MTLBuffer and then blit it to your texture:
func upload(imageBuffer: vImage_Buffer, to texture: MTLTexture, commandBuffer: MTLCommandBuffer) throws {
    struct UploadError: Error {}
    guard let buffer = texture.device.makeBuffer( // (1)
            bytes: imageBuffer.data,
            length: imageBuffer.rowBytes * Int(imageBuffer.height)),
          let blitEncoder = commandBuffer.makeBlitCommandEncoder() // (2)
    else { throw UploadError() } // unable to create the buffer or blit encoder
    blitEncoder.copy( // (3)
        from: buffer, // (4)
        sourceOffset: 0, // (5)
        sourceBytesPerRow: imageBuffer.rowBytes, // (6)
        sourceBytesPerImage: imageBuffer.rowBytes * Int(imageBuffer.height), // (7)
        sourceSize: MTLSize(width: Int(imageBuffer.width), // (8)
                            height: Int(imageBuffer.height),
                            depth: 1),
        to: texture, // (9)
        destinationSlice: 0, // (10)
        destinationLevel: 0, // (11)
        destinationOrigin: MTLOrigin(x: 0, y: 0, z: 0)) // (12)
    blitEncoder.endEncoding() // (13)
}
In this function, we assume the upload happens in an existing command buffer, from a vImage_Buffer (not necessary, but convenient) to a given existing texture.
This approach is the only way to upload data to a private texture from the CPU. Alternatively, you can copy from a shared texture, depending on what best suits your task.
If your texture uses shared storage mode and is accessible from the CPU, you can directly copy data to it using the replace method:
texture.replace(
    region: MTLRegion(
        origin: .init(x: 0, y: 0, z: 0),
        size: .init(
            width: Int(imageBuffer.width),
            height: Int(imageBuffer.height),
            depth: 1)),
    mipmapLevel: 0,
    withBytes: imageBuffer.data,
    bytesPerRow: imageBuffer.rowBytes)
As you can see, this method doesn't require a device instance or command buffers - everything happens directly in the texture's memory. This is possible because of the shared storage mode.
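As an illustration, here is a sketch that fills a shared .rgba8Unorm texture with a solid color using a single replace call (the texture is assumed to already exist with shared storage):

```swift
import Metal

// Fill a shared .rgba8Unorm texture with opaque red via one replace call.
func fill(texture: MTLTexture) {
    let bytesPerRow = texture.width * 4 // 4 bytes per .rgba8Unorm pixel
    var pixels = [UInt8](repeating: 0, count: bytesPerRow * texture.height)
    for i in stride(from: 0, to: pixels.count, by: 4) {
        pixels[i] = 255     // R
        pixels[i + 3] = 255 // A
    }
    pixels.withUnsafeBytes { raw in
        texture.replace(
            region: MTLRegionMake2D(0, 0, texture.width, texture.height),
            mipmapLevel: 0,
            withBytes: raw.baseAddress!,
            bytesPerRow: bytesPerRow)
    }
}
```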
Sometimes, such as when building an image or video editing app, you may need to download the result of your GPU processing back to the CPU. In a previous episode, we discussed how to do this with buffers. While the process for textures is quite similar, there are some important nuances to consider.
If you’re working with a private texture, you’ll need to first blit it to a shared buffer before downloading the buffer’s contents to the CPU. You can find the details on how to do this in the episode about buffers.
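A sketch of that texture-to-buffer blit, mirroring the upload function from earlier (the function name and the 4-byte pixel format are assumptions):

```swift
import Metal

// Copies a private texture's level zero into a shared buffer so the
// CPU can read it once the command buffer completes.
func download(texture: MTLTexture, into buffer: MTLBuffer, commandBuffer: MTLCommandBuffer) {
    guard let blitEncoder = commandBuffer.makeBlitCommandEncoder() else { return }
    let bytesPerRow = texture.width * 4 // assuming a 4-byte pixel format
    blitEncoder.copy(
        from: texture,
        sourceSlice: 0,
        sourceLevel: 0,
        sourceOrigin: MTLOrigin(x: 0, y: 0, z: 0),
        sourceSize: MTLSize(width: texture.width, height: texture.height, depth: 1),
        to: buffer,
        destinationOffset: 0,
        destinationBytesPerRow: bytesPerRow,
        destinationBytesPerImage: bytesPerRow * texture.height)
    blitEncoder.endEncoding()
}
```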
In some cases, if your texture uses shared storage and grants access to its buffer, you can retrieve the content as you would with a normal buffer:
let bytes = texture.buffer?.contents()
Alternatively, you can use the getBytes method (similar to replace, but with the data flow reversed):
texture.getBytes(
    imageBytes,
    bytesPerRow: imageBytesPerRow,
    from: MTLRegionMake2D(0, 0, width, height),
    mipmapLevel: 0)
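For context, a sketch showing where the destination pointer and bytes-per-row might come from; the 4-byte pixel format is an assumption:

```swift
import Metal

// Read back a 4-byte-per-pixel texture into a plain Swift array.
func readPixels(from texture: MTLTexture) -> [UInt8] {
    let bytesPerRow = texture.width * 4 // assuming .rgba8Unorm or similar
    var pixels = [UInt8](repeating: 0, count: bytesPerRow * texture.height)
    pixels.withUnsafeMutableBytes { raw in
        texture.getBytes(
            raw.baseAddress!,
            bytesPerRow: bytesPerRow,
            from: MTLRegionMake2D(0, 0, texture.width, texture.height),
            mipmapLevel: 0)
    }
    return pixels
}
```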
We’ve now created a texture, uploaded content, and even downloaded it back. But how do we actually use it in GPU functions like shaders or compute kernels? This depends on the usage type and the function type. To bind your texture object to a GPU function index, you can use one of the following options:
computeEncoder.setTexture(texture, index: 0) // (1)
...
renderEncoder.setVertexTexture(texture, index: 0) // (2)
renderEncoder.setFragmentTexture(texture, index: 0) // (3)
If you need to bind a target texture (for writing into it) in a compute kernel, you can bind it as shown above. However, for rendering, you need to set it as a render target by attaching it to a color attachment in the render encoder (this could also be stencil or depth depending on your task):
let encoderDescriptor = MTLRenderPassDescriptor()
encoderDescriptor.colorAttachments[0].texture = texture
Note that there is always a texture attached - for instance, when using an encoder descriptor from MTKView, the texture bound to the view's IOSurface is attached by default. One important point: the pixel format of the texture must match the pixel format of the render pipeline's attachment.
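A slightly fuller sketch of configuring a render pass around that attachment; the clear color and load/store actions are illustrative choices, not requirements:

```swift
import Metal

// Configure a render pass that clears the target to opaque black,
// renders into it, and keeps the result in memory afterwards.
func makeRenderPass(target texture: MTLTexture) -> MTLRenderPassDescriptor {
    let descriptor = MTLRenderPassDescriptor()
    descriptor.colorAttachments[0].texture = texture
    descriptor.colorAttachments[0].loadAction = .clear
    descriptor.colorAttachments[0].clearColor = MTLClearColor(red: 0, green: 0, blue: 0, alpha: 1)
    descriptor.colorAttachments[0].storeAction = .store
    return descriptor
}
```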
In this episode, we explored how to create, upload, and use textures in Apple Metal. From binding textures to shaders to handling render targets, understanding these concepts will allow you to effectively integrate textures into your GPU workflows.