In this tutorial, we explore the power of self-supervised learning using the Lightly AI framework. We begin by building a SimCLR model to learn meaningful image representations without labels, then generate and visualize embeddings using UMAP and t-SNE. We then dive into coreset selection techniques to curate data intelligently, simulate an active learning workflow, and finally assess the benefits of transfer learning through a linear probe evaluation. Throughout this hands-on guide, we work step by step in Google Colab, training, visualizing, and comparing coreset-based and random sampling to understand how self-supervised learning can significantly improve data efficiency and model performance. Check out the FULL CODES here.

!pip uninstall -y numpy
!pip install numpy==1.26.4
!pip install -q lightly torch torchvision matplotlib scikit-learn umap-learn

import torch
import torch.nn as nn
import torchvision
from torch.utils.data import DataLoader, Subset
from torchvision import transforms
import numpy as np
import matplotlib.pyplot as plt
from sklearn.manifold import TSNE
from sklearn.neighbors import NearestNeighbors
import umap

from lightly.loss import NTXentLoss
from lightly.models.modules import SimCLRProjectionHead
from lightly.transforms import SimCLRTransform
from lightly.data import LightlyDataset

print(f"PyTorch version: {torch.__version__}")
print(f"CUDA available: {torch.cuda.is_available()}")

We begin by setting up the environment, ensuring compatibility by fixing the NumPy version and installing essential libraries like Lightly, PyTorch, and UMAP. We then import all necessary modules for building, training, and visualizing our self-supervised learning model, confirming that PyTorch and CUDA are ready for GPU acceleration. Check out the FULL CODES here.

class SimCLRModel(nn.Module):
    """SimCLR model with ResNet backbone"""
    def __init__(self, backbone, hidden_dim=512, out_dim=128):
        super().__init__()
        self.backbone = backbone
        self.backbone.fc = nn.Identity()
        self.projection_head = SimCLRProjectionHead(
            input_dim=512, hidden_dim=hidden_dim, output_dim=out_dim
        )

    def forward(self, x):
        features = self.backbone(x).flatten(start_dim=1)
        z = self.projection_head(features)
        return z

    def extract_features(self, x):
        """Extract backbone features without projection"""
        with torch.no_grad():
            return self.backbone(x).flatten(start_dim=1)

We define our SimCLRModel, which uses a ResNet backbone to learn visual representations without labels. We remove the classification head and add a projection head to map features into a contrastive embedding space. The model's extract_features method allows us to obtain raw feature embeddings directly from the backbone for downstream analysis. Check out the FULL CODES here.
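To make the model definition concrete, here is a minimal usage sketch; the torchvision ResNet-18 backbone and the dummy CIFAR-10-sized batch are assumptions chosen for illustration, not values taken from the article's later code.

# Sketch: instantiating SimCLRModel (assumes a torchvision ResNet-18 backbone,
# whose 512-dim pre-fc feature size matches input_dim in the projection head).
backbone = torchvision.models.resnet18(weights=None)
model = SimCLRModel(backbone, hidden_dim=512, out_dim=128)

dummy = torch.randn(4, 3, 32, 32)       # fake batch of CIFAR-10-sized images
z = model(dummy)                         # projected embeddings used by the contrastive loss
feats = model.extract_features(dummy)    # raw backbone features for downstream analysis
print(z.shape, feats.shape)              # torch.Size([4, 128]) torch.Size([4, 512])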
def load_dataset(train=True):
    """Load CIFAR-10 dataset"""
    ssl_transform = SimCLRTransform(input_size=32, cj_prob=0.8)
    eval_transform = transforms.Compose([
        transforms.ToTensor(),
        transforms.Normalize((0.4914, 0.4822, 0.4465), (0.2023, 0.1994, 0.2010))
    ])

    base_dataset = torchvision.datasets.CIFAR10(
        root='./data', train=train, download=True
    )

    class SSLDataset(torch.utils.data.Dataset):
        def __init__(self, dataset, transform):
            self.dataset = dataset
            self.transform = transform

        def __len__(self):
            return len(self.dataset)

        def __getitem__(self, idx):
            img, label = self.dataset[idx]
            return self.transform(img), label

    ssl_dataset = SSLDataset(base_dataset, ssl_transform)
    eval_dataset = torchvision.datasets.CIFAR10(
        root='./data', train=train, download=True, transform=eval_transform
    )
    return ssl_dataset, eval_dataset

In this step, we load the CIFAR-10 dataset and apply separate transformations for self-supervised and evaluation phases. We create a custom SSLDataset class that generates multiple augmented views of each image for contrastive learning, while the evaluation dataset uses normalized images for downstream tasks. This setup helps the model learn robust representations invariant to visual changes. Check out the FULL CODES here.

def train_ssl_model(model, dataloader, epochs=5, device='cuda'):
    """Train SimCLR model"""
    model.to(device)
    criterion = NTXentLoss(temperature=0.5)
    optimizer = torch.optim.SGD(model.parameters(), lr=0.06,
                                momentum=0.9, weight_decay=5e-4)

    print("\n=== Self-Supervised Training ===")
    for epoch in range(epochs):
        model.train()
        total_loss = 0
        for batch_idx, batch in enumerate(dataloader):
            views = batch[0]
            view1, view2 = views[0].to(device), views[1].to(device)
            z1 = model(view1)
            z2 = model(view2)
            loss = criterion(z1, z2)
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
            total_loss += loss.item()
            if batch_idx % 50 == 0:
                print(f"Epoch {epoch+1}/{epochs} | Batch {batch_idx} | Loss: {loss.item():.4f}")
        avg_loss = total_loss / len(dataloader)
        print(f"Epoch {epoch+1} Complete | Avg Loss: {avg_loss:.4f}")
    return model

Here, we train our SimCLR model in a self-supervised manner using the NT-Xent contrastive loss, which encourages similar representations for augmented views of the same image. We optimize the model with stochastic gradient descent (SGD) and track the loss across epochs to monitor learning progress. This stage teaches the model to extract meaningful visual features without relying on labeled data. Check out the FULL CODES here.
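Putting the previous steps together, the sketch below shows one way the SSL dataset, dataloader, and training call might be wired up; the batch size, drop_last flag, and ResNet-18 backbone are illustrative assumptions rather than settings prescribed by the tutorial.

# Sketch: wiring the dataset, dataloader, and SSL training loop together
# (batch size, drop_last, and backbone choice are assumptions for illustration).
ssl_dataset, eval_dataset = load_dataset(train=True)

ssl_loader = DataLoader(ssl_dataset, batch_size=256, shuffle=True,
                        num_workers=2, drop_last=True)

backbone = torchvision.models.resnet18(weights=None)
model = SimCLRModel(backbone)

device = 'cuda' if torch.cuda.is_available() else 'cpu'
model = train_ssl_model(model, ssl_loader, epochs=5, device=device)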
def generate_embeddings(model, dataset, device='cuda', batch_size=256):
    """Generate embeddings for the entire dataset"""
    model.eval()
    model.to(device)
    dataloader = DataLoader(dataset, batch_size=batch_size,
                            shuffle=False, num_workers=2)
    embeddings = []
    labels = []

    print("\n=== Generating Embeddings ===")
    with torch.no_grad():
        for images, targets in dataloader:
            images = images.to(device)
            features = model.extract_features(images)
            embeddings.append(features.cpu().numpy())
            labels.append(targets.numpy())

    embeddings = np.vstack(embeddings)
    labels = np.concatenate(labels)
    print(f"Generated {embeddings.shape[0]} embeddings with dimension {embeddings.shape[1]}")
    return embeddings, labels

def visualize_embeddings(embeddings, labels, method='umap', n_samples=5000):
    """Visualize embeddings using UMAP or t-SNE"""
    print(f"\n=== Visualizing Embeddings with {method.upper()} ===")

    if len(embeddings) > n_samples:
        indices = np.random.choice(len(embeddings), n_samples, replace=False)
        embeddings = embeddings[indices]
        labels = labels[indices]

    if method == 'umap':
        reducer = umap.UMAP(n_neighbors=15, min_dist=0.1, metric='cosine')
    else:
        reducer = TSNE(n_components=2, perplexity=30, metric='cosine')

    embeddings_2d = reducer.fit_transform(embeddings)

    plt.figure(figsize=(12, 10))
    scatter = plt.scatter(embeddings_2d[:, 0], embeddings_2d[:, 1],
                          c=labels, cmap='tab10', s=5, alpha=0.6)
    plt.colorbar(scatter)
    plt.title(f'CIFAR-10 Embeddings ({method.upper()})')
    plt.xlabel('Component 1')
    plt.ylabel('Component 2')
    plt.tight_layout()
    plt.savefig(f'embeddings_{method}.png', dpi=150)
    print(f"Saved visualization to embeddings_{method}.png")
    plt.show()

def select_coreset(embeddings, labels, budget=1000, method='diversity'):
    """
    Select a coreset using different strategies:
    - diversity: Maximum diversity using k-center greedy
    - balanced: Class-balanced selection
    """
    print(f"\n=== Coreset Selection ({method}) ===")

    if method == 'balanced':
        selected_indices = []
        n_classes = len(np.unique(labels))
        per_class = budget // n_classes
        for cls in range(n_classes):
            cls_indices = np.where(labels == cls)[0]
            selected = np.random.choice(cls_indices,
                                        min(per_class, len(cls_indices)),
                                        replace=False)
            selected_indices.extend(selected)
        return np.array(selected_indices)

    elif method == 'diversity':
        selected_indices = []
        remaining_indices = set(range(len(embeddings)))
        first_idx = np.random.randint(len(embeddings))
        selected_indices.append(first_idx)
        remaining_indices.remove(first_idx)

        for _ in range(budget - 1):
            if not remaining_indices:
                break
            remaining = list(remaining_indices)
            selected_emb = embeddings[selected_indices]
            remaining_emb = embeddings[remaining]
            distances = np.min(
                np.linalg.norm(remaining_emb[:, None] - selected_emb, axis=2),
                axis=1
            )
            max_dist_idx = np.argmax(distances)
            selected_idx = remaining[max_dist_idx]
            selected_indices.append(selected_idx)
            remaining_indices.remove(selected_idx)

        print(f"Selected {len(selected_indices)} samples")
        return np.array(selected_indices)

We extract high-quality feature embeddings from our trained backbone, cache them with labels, and project them to 2D using UMAP or t-SNE so we can see the cluster structure emerge. Next, we curate data using a coreset selector, either class-balanced or diversity-driven (k-center greedy), to prioritize the most informative, non-redundant samples for downstream training. This pipeline helps us both see what the model learns and select what matters most. Check out the FULL CODES here.
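Continuing from the wiring sketch above, the snippet below illustrates how these utilities could fit together for the coreset-versus-random comparison mentioned in the introduction; the budget of 1,000 samples and the choice of UMAP are assumptions for this sketch, not values taken from the article.

# Sketch: embeddings -> visualization -> coreset vs. random subsets
# (the budget of 1000 and the use of UMAP are illustrative assumptions).
embeddings, labels = generate_embeddings(model, eval_dataset, device=device)

visualize_embeddings(embeddings, labels, method='umap')

coreset_idx = select_coreset(embeddings, labels, budget=1000, method='diversity')
random_idx = np.random.choice(len(eval_dataset), size=1000, replace=False)

coreset_subset = Subset(eval_dataset, coreset_idx.tolist())
random_subset = Subset(eval_dataset, random_idx.tolist())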
def evaluate_linear_probe(model, train_subset, test_dataset, device='cuda'):
    """Train linear classifier on frozen features"""
    model.eval()
    train_loader = DataLoader(train_subset, batch_size=128, shuffle=True, num_workers=2)
    test_loader = DataLoader(test_dataset, batch_size=256, shuffle=False,