
NIELIT Ropar

Deep Learning Techniques · DOAI250006


// Practical — 06

Image Data Augmentation
using Keras Preprocessing Layers


Aim

To understand and implement image data augmentation using Keras preprocessing layers as a regularization technique to artificially expand the training dataset. The practical demonstrates four randomized geometric and photometric transformations — horizontal/vertical flipping, rotation, zoom, and contrast adjustment — and shows how to embed the augmentation pipeline directly into a neural network model for GPU-accelerated on-the-fly augmentation during training.

Prerequisites

Python & NumPy
TensorFlow / Keras
CIFAR-10 Dataset
Image Tensors (4D)
Overfitting Concepts
Regularization
Matplotlib Visualization
GPU Computation

Theory

Data augmentation is a regularization technique that artificially increases the effective size and diversity of a training dataset by applying random, label-preserving transformations to existing samples. In the context of images, these transformations include geometric operations (flips, rotations, translations, zooms, crops) and photometric operations (brightness, contrast, hue, saturation shifts). The fundamental insight is that a slightly rotated or flipped version of a cat image is still a cat — the semantic label remains unchanged while the pixel-level representation varies. By exposing the model to these variations during training, augmentation reduces overfitting, improves generalization to unseen data, and decreases the model's reliance on memorizing exact training examples.

Mathematically, augmentation expands the empirical risk minimization objective. Without augmentation, the training loss is L = (1/N) Σ L(f(x_i), y_i) over N fixed training samples. With augmentation, each epoch sees a different stochastic transformation T applied to each sample: L = (1/N) Σ L(f(T(x_i)), y_i), where T is drawn from a distribution of transformations. This effectively creates an infinite stream of unique training samples, preventing the model from memorizing the exact training set and forcing it to learn invariant features — properties of the input that remain constant under the applied transformations.
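The augmented objective can be made concrete with a toy sketch (the data, transform T, and model f below are all hypothetical stand-ins): each epoch draws a fresh random flip per sample, so the loss is computed over different views of the same fixed training set.

```python
import random

random.seed(0)

xs = [[1.0, 2.0], [3.0, 4.0]]   # toy 1-D "images"
ys = [0, 1]                     # fixed labels (label-preserving transforms)

def T(x):
    """Random label-preserving transform: mirror with 50% probability."""
    return list(reversed(x)) if random.random() < 0.5 else list(x)

def f(x):
    """Toy stand-in model: predicts class 1 if the first element is large."""
    return 1 if x[0] > 2.0 else 0

def zero_one_loss(pred, label):
    return 0.0 if pred == label else 1.0

# Same fixed dataset, but each epoch sees freshly transformed views
for epoch in range(3):
    L = sum(zero_one_loss(f(T(x)), y) for x, y in zip(xs, ys)) / len(xs)
    print(f"epoch {epoch}: empirical loss {L}")
```

The per-epoch loss varies with the random draws of T, which is exactly the stochastic objective L = (1/N) Σ L(f(T(x_i)), y_i) described above.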

  • RandomFlip: mirrors the image horizontally, vertically, or both with 50% probability. Example argument: "horizontal_and_vertical"
  • RandomRotation: rotates the image by a random angle within a specified fraction of 2π. Example: 0.2 → ±72°
  • RandomZoom: randomly zooms in or out, cropping or padding as needed. Example: 0.2 → ±20%
  • RandomContrast: randomly adjusts the contrast of the image by a specified factor. Example: 0.2 → ±20%

RandomFlip("horizontal_and_vertical") mirrors the image along the horizontal axis, the vertical axis, or both, each with 50% probability. Horizontal flips are particularly effective for natural images, since most subjects (animals, vehicles, everyday objects) remain semantically identical when mirrored. RandomRotation(0.2) rotates the image by a random angle in the range [−0.2 × 2π, +0.2 × 2π] = [−72°, +72°], using bilinear interpolation to resample pixels and the layer's fill_mode to populate newly exposed regions. RandomZoom(0.2) randomly scales the image by a factor in [1 − 0.2, 1 + 0.2] = [0.8, 1.2], either zooming in (cropping) or zooming out (padding). RandomContrast(0.2) scales each pixel's distance from the per-channel mean by a random factor in [0.8, 1.2], making the model robust to varying lighting conditions.

The Keras preprocessing layers approach offers a critical advantage over the legacy ImageDataGenerator: the augmentation pipeline can run on the GPU as part of the model graph. In the old ImageDataGenerator workflow, images were augmented on the CPU using Python/PIL operations, creating a CPU-to-GPU data transfer bottleneck. With preprocessing layers, transformations execute as TensorFlow operations, either inside a tf.data input pipeline or directly inside the model. Augmentation can therefore overlap with the forward and backward passes, making better use of GPU parallelism and reducing input-pipeline bottlenecks. Furthermore, because the augmentation layers are part of the model, they are saved and restored together with the model.

A crucial detail is the training=True flag when calling augmentation layers. Keras preprocessing layers have two behaviors: during training (training=True), they apply random transformations; during inference (training=False), they pass data through unchanged. When augmentation layers are embedded inside a model, Keras automatically sets this flag correctly during model.fit() and model.predict(). When calling the augmentation pipeline directly (as in visualization), you must explicitly pass training=True to activate randomness.
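This two-mode behavior is easy to verify standalone; the minimal check below (a flip-only pipeline on a dummy batch, with hypothetical shapes) confirms that training=False passes data through unchanged:

```python
import tensorflow as tf
from tensorflow.keras import layers

augment = tf.keras.Sequential([layers.RandomFlip("horizontal_and_vertical")])

x = tf.random.uniform((1, 8, 8, 3))      # dummy float32 image batch
y_infer = augment(x, training=False)     # inference mode: identity pass-through
y_train = augment(x, training=True)      # training mode: may flip

print(bool(tf.reduce_all(x == y_infer)))  # True: input returned unchanged
```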

Algorithm / Step-by-Step

  1. Import tensorflow, matplotlib.pyplot, and Keras layers and datasets modules.
  2. Load the CIFAR-10 dataset using datasets.cifar10.load_data() and extract only the training portion (test data is not needed for this demonstration).
  3. Select a single sample image from the training set (e.g., index 4) and display it using Matplotlib as the reference "Original Image."
  4. Define a tf.keras.Sequential augmentation pipeline containing, in order:
    • layers.RandomFlip("horizontal_and_vertical")
    • layers.RandomRotation(0.2)
    • layers.RandomZoom(0.2)
    • layers.RandomContrast(0.2)
  5. Add a batch dimension to the sample image using tf.expand_dims(sample_image, 0), changing shape from (32, 32, 3) to (1, 32, 32, 3), since preprocessing layers expect 4D input tensors of shape (batch, height, width, channels).
  6. Set up a 3×3 Matplotlib subplot grid (9 panels) and iterate 9 times. On each iteration, call the augmentation pipeline with training=True and display the result in the next subplot. Cast the output tensor back to tf.uint8 for correct visualization.
  7. Add a super-title to the figure summarizing the visualization.
  8. Integration note: In a production model, the augmentation pipeline is placed as the first layer(s) of the classification model, so augmentation happens automatically during training and is bypassed during inference.

Key Code Concepts

Snippet 1 — Loading CIFAR-10 and Selecting a Sample Image

from tensorflow.keras import datasets
import matplotlib.pyplot as plt

# Load CIFAR-10 — 60,000 color images, 32×32, 10 classes
(train_images, train_labels), _ = datasets.cifar10.load_data()

# Extract a single sample image
sample_image = train_images[4]  # shape: (32, 32, 3)

# Display original
plt.figure(figsize=(3, 3))
plt.imshow(sample_image)
plt.title("Original Image")
plt.axis('off')
plt.show()

CIFAR-10 consists of 60,000 32×32 RGB images across 10 classes (airplane, automobile, bird, cat, deer, dog, frog, horse, ship, truck). Each image is a uint8 tensor of shape (32, 32, 3). We extract a single image at index 4 to serve as the seed for generating augmented variations. The underscore _ discards the test split, since this practical focuses purely on augmentation visualization, not model training.
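Since load_data() returns integer labels rather than names, a small lookup list is convenient for titling plots; class_names below is our own helper (ordered as in the CIFAR-10 documentation), not part of the Keras API:

```python
# CIFAR-10 class names in label order (0-9)
class_names = ["airplane", "automobile", "bird", "cat", "deer",
               "dog", "frog", "horse", "ship", "truck"]

label = 1  # e.g., the integer stored at train_labels[i][0]
print(class_names[label])  # automobile
```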

Snippet 2 — Building the Augmentation Pipeline

import tensorflow as tf
from tensorflow.keras import layers

# Compose augmentation layers into a Sequential pipeline
data_augmentation = tf.keras.Sequential([
    layers.RandomFlip("horizontal_and_vertical"),
    layers.RandomRotation(0.2),
    layers.RandomZoom(0.2),
    layers.RandomContrast(0.2)
])
print("Data augmentation pipeline ready.")

# Flip: horizontal/vertical mirroring, each with 50% probability
# Rotation: 0.2 × 360° = ±72° max rotation
# Zoom: ±20% scale factor
# Contrast: ±20% contrast adjustment

The augmentation pipeline is itself a Keras Sequential model composed entirely of preprocessing layers. Each layer applies its transformation stochastically during training and acts as an identity function during inference. The order matters: flipping is applied first, then rotation, zoom, and contrast. The composition executes as TensorFlow operations inside the compute graph, enabling GPU acceleration. The parameter 0.2 for rotation represents a fraction of a full circle (2π radians), equivalent to ±72 degrees.
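A quick sanity check of the pipeline: pushing a dummy uint8 batch through it (CIFAR-10-sized shapes, chosen here for the demo) shows that the spatial shape is preserved while computation happens in floating point:

```python
import tensorflow as tf
from tensorflow.keras import layers

pipeline = tf.keras.Sequential([
    layers.RandomFlip("horizontal_and_vertical"),
    layers.RandomRotation(0.2),
    layers.RandomZoom(0.2),
    layers.RandomContrast(0.2)
])

batch = tf.zeros((2, 32, 32, 3), dtype=tf.uint8)  # dummy CIFAR-10-sized batch
out = pipeline(batch, training=True)

print(out.shape)              # (2, 32, 32, 3): shape is preserved
print(out.dtype.is_floating)  # True: layers compute in floating point
```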

Snippet 3 — Visualizing Augmented Variations

# Add batch dimension: (32, 32, 3) → (1, 32, 32, 3)
image_batch = tf.expand_dims(sample_image, 0)

# Generate and display 9 augmented variants
plt.figure(figsize=(10, 10))
for i in range(9):
    augmented = data_augmentation(image_batch, training=True)
    plt.subplot(3, 3, i + 1)
    plt.imshow(tf.cast(augmented[0], tf.uint8))
    plt.axis("off")
plt.tight_layout()
plt.suptitle("Augmented Variations of the Original Image", y=1.02, fontsize=16)
plt.show()

Preprocessing layers expect a 4D input tensor of shape (batch_size, height, width, channels). The tf.expand_dims() operation inserts a new dimension at axis 0, converting a single image into a batch of one. Inside the loop, training=True is essential: without it, the layers skip randomization and return the original image unchanged. Each iteration produces a different result because fresh random values are drawn on every call. The output of preprocessing layers is float32, so tf.cast(..., tf.uint8) converts back to the original data type for correct color rendering in Matplotlib.
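The shape change performed by tf.expand_dims() can be confirmed in isolation:

```python
import tensorflow as tf

img = tf.zeros((32, 32, 3), dtype=tf.uint8)  # single CIFAR-10-sized image
batch = tf.expand_dims(img, 0)               # insert batch axis at position 0

print(tuple(img.shape))    # (32, 32, 3)
print(tuple(batch.shape))  # (1, 32, 32, 3)
```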

Snippet 4 — Model Integration

# Augmentation is the FIRST layer of the model
model = tf.keras.Sequential([
    tf.keras.Input(shape=(32, 32, 3)),
    data_augmentation,                 # ← runs on GPU during training
    layers.Rescaling(1.0 / 255),       # scale uint8 pixels to [0, 1]
    layers.Conv2D(32, 3, padding='same', activation='relu'),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, padding='same', activation='relu'),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(10, activation='softmax')
])

Embedding the augmentation pipeline as the first layer(s) of the model is the recommended production pattern. When model.fit() is called, Keras automatically passes training=True to the augmentation layers, activating random transforms. When model.predict() or model.evaluate() is called, Keras passes training=False, disabling augmentation for consistent inference. This tight integration means the augmentation logic travels with the model — saving the model with model.save() preserves the entire preprocessing pipeline, and loading it with tf.keras.models.load_model() restores augmentation behavior without any additional code.
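As a sanity check of this pattern, the sketch below trains an augmented model for one epoch on a small synthetic batch (random arrays standing in for CIFAR-10; the tiny architecture and the Rescaling normalization layer are illustrative choices, not the practical's exact model):

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers

# Synthetic stand-in data: 16 random 32×32 RGB "images" with labels 0-9
x = np.random.randint(0, 256, size=(16, 32, 32, 3)).astype("float32")
y = np.random.randint(0, 10, size=(16,))

augment = tf.keras.Sequential([layers.RandomFlip("horizontal")])

model = tf.keras.Sequential([
    tf.keras.Input(shape=(32, 32, 3)),
    augment,                          # active only when training=True
    layers.Rescaling(1.0 / 255),      # scale pixels to [0, 1]
    layers.Conv2D(8, 3, activation="relu"),
    layers.GlobalAveragePooling2D(),
    layers.Dense(10, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")

model.fit(x, y, epochs=1, batch_size=8, verbose=0)  # Keras sets training=True
preds = model.predict(x, verbose=0)                 # augmentation bypassed
print(preds.shape)  # (16, 10)
```

During fit() the flip layer perturbs each batch; during predict() it is an identity, so repeated predictions on the same inputs are deterministic.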

Expected Output

Original Image Plot: A single 3×3 inch figure displaying one CIFAR-10 image (32×32 RGB). The image should be clearly recognizable (e.g., a red automobile or a green frog) with axis labels hidden. This establishes the baseline for comparison.

Augmentation Pipeline Confirmation: Console prints "Data augmentation pipeline ready." confirming successful instantiation of the tf.keras.Sequential containing the four preprocessing layers.

Augmented 3×3 Grid: A 10×10 inch figure containing 9 subplot panels arranged in a 3×3 grid, each showing a unique augmented version of the original image. Because transformations are stochastic, every panel will differ:

  • Some images will be horizontally flipped, others vertically, some unchanged.
  • Rotation angles will vary between −72° and +72°. Regions exposed by rotation are filled according to the layer's fill_mode: reflected edge pixels by default, or a constant color (producing dark triangular corners) if configured.
  • Zoom levels will vary: zoomed-in images show cropped portions at larger scale, while zoomed-out images show the content at smaller scale surrounded by fill_mode borders.
  • Contrast will vary subtly: some images appear slightly washed out, others more vivid.

The super-title "Augmented Variations of the Original Image" appears above the grid. Despite all transformations, the semantic content of the image (the object class) remains identifiable in every panel — this is the key property that makes augmentation a valid regularization strategy.

Viva Questions & Answers

Q1. Why is data augmentation considered a regularization technique?
Data augmentation is a form of regularization because it reduces overfitting by artificially expanding the effective training set and preventing the model from memorizing exact training examples. By applying random, label-preserving transformations, augmentation forces the model to learn invariant features — properties that remain constant regardless of orientation, scale, or lighting. This is conceptually similar to dropout or L2 regularization, all of which constrain the model to learn more generalizable representations rather than fitting noise in the training data.
Q2. What is the difference between Keras preprocessing layers and the legacy ImageDataGenerator?
There are three key differences: (1) Execution location: preprocessing layers run as TensorFlow operations, on the GPU when embedded in the model graph, while ImageDataGenerator runs on the CPU using Python/PIL, creating a CPU-to-GPU data transfer bottleneck. (2) Model integration: preprocessing layers can be embedded directly inside a Keras model (they are just layers), so augmentation is saved and restored with the model; ImageDataGenerator is external to the model and must be managed separately. (3) API consistency: preprocessing layers follow the standard Keras layer API and compose cleanly with tf.data pipelines and model saving, whereas ImageDataGenerator requires its own flow()/flow_from_directory() API, which is less flexible and harder to combine with modern tf.data pipelines.
Q3. Why must we pass training=True when calling the augmentation pipeline directly?
Keras preprocessing layers are designed to behave differently during training and inference. During training (training=True), layers like RandomFlip, RandomRotation, and RandomZoom apply their stochastic transformations. During inference (training=False), they act as identity functions and pass data through unchanged. When augmentation layers are embedded in a model, Keras automatically manages this flag based on whether model.fit() or model.predict() is called. However, when calling the augmentation pipeline standalone (as in visualization code), you must explicitly set training=True to activate the random behavior — otherwise, the original image is returned unchanged 9 times.
Q4. Why do we add a batch dimension with tf.expand_dims before passing the image to the augmentation pipeline?
All Keras layers, including preprocessing layers, expect input tensors with a batch dimension. A single CIFAR-10 image has shape (32, 32, 3) representing (height, width, channels). Preprocessing layers expect (batch_size, height, width, channels), i.e., a 4D tensor. tf.expand_dims(sample_image, 0) inserts a dimension of size 1 at axis 0, converting the shape to (1, 32, 32, 3) — a batch containing a single image. Without this, the layer would raise a shape mismatch error.
Q5. What would happen if you applied augmentation to the test/validation set?
Applying random augmentation to test or validation data would make evaluation non-deterministic and incomparable across runs. Since the transformations are stochastic, the same test image could yield different predictions on each evaluation, making it impossible to reliably measure model performance or compare hyperparameter settings. The test set must reflect the true, original distribution that the model will encounter in production. Augmentation is exclusively a training-time technique. During inference, preprocessing layers automatically disable randomness when training=False.