Integrating with PyTorch

mlop provides best-in-class support for PyTorch, including automatic logging of model parameters and gradients, visualization of the model graph and data flow, and code profiling.

Migrating from Weights & Biases

See the Migrating from Weights & Biases guide for a quickstart.

Logging Model Details

Currently, mlop supports logging model details by calling mlop.watch() directly on a PyTorch model. After initializing the logger, you can equivalently call run.watch() on the run object to track the evolution of the model graph.

mlop.watch(
    model,                # the PyTorch nn.Module to instrument
    disable_graph=False,  # set True to skip logging the model graph
    disable_grad=False,   # set True to skip gradient histograms
    disable_param=False,  # set True to skip parameter histograms
    freq=1000,            # how often (in steps) histograms are logged
    bins=64,              # resolution of the histograms
    **kwargs,
)
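
For example, assuming an init entry point in the style of wandb.init (as the Weights & Biases migration guide suggests), the run-scoped equivalent might look like this:

import mlop

# `mlop.init` is assumed here to mirror wandb.init, per the migration guide;
# check the mlop source for the exact signature.
run = mlop.init(project="my-project")
run.watch(model, freq=1000)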

Source

See the Python source code for more details.

As training progresses, histograms are created from the flattened gradients and parameters of the model. You can optionally control the logging frequency and the resolution of the histograms by passing freq and bins to mlop.watch().
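
Conceptually, each hook flattens a tensor and buckets its values, roughly like the sketch below (the function name and logic here are illustrative, not mlop internals):

import torch

def histogram_of(tensor: torch.Tensor, bins: int = 64) -> torch.Tensor:
    # Flatten the parameter or gradient and bucket its values into
    # `bins` equal-width bins, analogous to what mlop logs every `freq` steps.
    flat = tensor.detach().flatten().float()
    return torch.histc(flat, bins=bins, min=flat.min().item(), max=flat.max().item())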

Example

import torch
import torch.nn as nn
import mlop
 
class ConvNet(nn.Module):
    def __init__(self, kernels, classes=10):
        super().__init__()
 
        self.layer1 = nn.Sequential(
            nn.Conv2d(1, kernels[0], kernel_size=5, stride=1, padding=2),
            nn.ReLU(),
            nn.MaxPool2d(kernel_size=2, stride=2),
        )
        self.layer2 = nn.Sequential(
            nn.Conv2d(kernels[0], kernels[1], kernel_size=5, stride=1, padding=2),
            nn.ReLU(),
            nn.MaxPool2d(kernel_size=2, stride=2),
        )
        self.fc = nn.Linear(7 * 7 * kernels[-1], classes)
 
    def forward(self, x):
        out = self.layer1(x)
        out = self.layer2(out)
        out = out.reshape(out.size(0), -1)
        out = self.fc(out)
        return out
 
model = ConvNet([16, 32], 10)
mlop.watch(model, freq=100)
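
To actually populate the gradient histograms, gradients must flow through a backward pass. Continuing the example above, a minimal loop might look like the following sketch (the mlop.log call is assumed to mirror the wandb.log API, per the migration guide; the batch here is dummy data):

optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

# Dummy MNIST-shaped batch; swap in a real DataLoader for actual training.
x = torch.randn(8, 1, 28, 28)
y = torch.randint(0, 10, (8,))

for step in range(200):
    optimizer.zero_grad()
    loss = criterion(model(x), y)
    loss.backward()   # hooks see the gradients; histograms are logged every `freq` steps
    optimizer.step()
    mlop.log({"loss": loss.item()})  # assumed wandb-style logging call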

This gives you rich visualizations, including the model graph, as soon as training starts.
