This documentation accompanies the video tutorial: [YouTube link]


Tutorial 1: Curate Pizza Images

Warning

Tutorial is outdated

This tutorial uses a deprecated workflow of the Lightly Solution and will be removed in the future. Please refer to the new documentation and tutorials instead.

In this tutorial, you will learn how to upload a dataset to the Lightly platform, curate the data, and finally use the curated data to train a model.

What you will learn

  • Create and upload a new dataset

  • Curate a dataset using simple image metrics such as Width, Height, Sharpness, Signal-to-Noise Ratio, and File Size

  • Download images based on a tag from a dataset

  • Train an image classifier with the filtered dataset

Requirements

You can use your own dataset or the one we provide with this tutorial: pizzas.zip. If you use your own dataset, please make sure the images are smaller than 2048 pixels in both width and height, and that you use fewer than 1,000 images.

Note

For this tutorial, we provide you with a small dataset of pizza images. We chose a small dataset because it is easy to ship and quick to train on.
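If you bring your own data, a quick script can verify that it meets these constraints before uploading. Below is a minimal sketch, assuming Pillow is installed; the 'DATA/' folder matches the upload directory used in the CLI example below.

from pathlib import Path

from PIL import Image

image_paths = [
    p for p in Path("DATA").rglob("*") if p.suffix.lower() in {".jpg", ".jpeg", ".png"}
]
assert len(image_paths) < 1000, f"too many images: {len(image_paths)}"

for path in image_paths:
    with Image.open(path) as img:
        w, h = img.size
        assert w < 2048 and h < 2048, f"{path} is too large: {w}x{h}"

print(f"all {len(image_paths)} images are within the limits")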

Upload the data

We start by uploading the dataset to the Lightly Platform.

Create a new account if you do not have one yet. Go to your user Preferences and copy your API token.

Now install lightly if you haven’t already, and upload your dataset.

# install Lightly
pip3 install lightly

# upload your DATA directory
lightly-upload token=MY_TOKEN new_dataset_name='NEW_DATASET_NAME' input_dir='DATA/'

Filter the dataset using metadata

Once the dataset is created and the images uploaded, you can head to ‘Metadata’ under the ‘Analyze & Filter’ menu.

Move the sliders below the histograms to define filter rules for the dataset. Once you are satisfied with the filtered dataset, create a new tag using the tag menu on the left side.
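To build an intuition for these metrics, the following sketch computes two of them locally: sharpness as the variance of the Laplacian filter response, and a simple signal-to-noise ratio as the mean divided by the standard deviation of the pixel values. These are common textbook definitions and serve only as an illustration; the platform's exact formulas may differ. The sketch assumes numpy, Pillow, and SciPy are installed, and the filename is a hypothetical placeholder.

import numpy as np
from PIL import Image
from scipy.ndimage import laplace

def image_metrics(path):
    # work on a grayscale float image
    img = np.asarray(Image.open(path).convert("L"), dtype=np.float64)
    sharpness = laplace(img).var()  # variance of the Laplacian response
    snr = img.mean() / img.std()    # simple signal-to-noise estimate
    return sharpness, snr

sharpness, snr = image_metrics("pizzas/some_pizza.jpg")  # hypothetical path
print(f"sharpness: {sharpness:.1f}, snr: {snr:.2f}")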

Download the curated dataset

Now that we have filtered the dataset, we want to download it to train a model. Click on the download menu on the left.

We can now download the filtered images by clicking on the ‘DOWNLOAD IMAGES’ button. In our case, the images are stored in the ‘pizzas’ folder.

We now have to annotate the images. We can do this by moving each image into a subfolder corresponding to its class, e.g. salami pizza images into the ‘salami’ folder and Margherita pizza images into the ‘margherita’ folder; a scripted sketch of this step follows below.
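If the downloaded filenames happen to contain the class name (an assumption; adapt the matching rule to your data), this sorting step can be scripted. A minimal sketch:

import shutil
from pathlib import Path

classes = ["salami", "margherita"]
for cls in classes:
    (Path("pizzas") / cls).mkdir(exist_ok=True)

for path in Path("pizzas").glob("*.jpg"):
    # assumes the class name appears somewhere in the filename
    for cls in classes:
        if cls in path.name.lower():
            shutil.move(str(path), str(Path("pizzas") / cls / path.name))
            break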


Training a model using the curated data

Now we can start training our model using PyTorch Lightning. We start by importing the necessary dependencies:

import os

import pytorch_lightning as pl
import torch
import torchmetrics
from torch.utils.data import DataLoader, random_split
from torchvision import transforms
from torchvision.datasets import ImageFolder
from torchvision.models import ResNet18_Weights, resnet18

We use a small batch size so the training runs on all kinds of machines. Feel free to adjust the value to one that works on your hardware.

batch_size = 8
seed = 42

Set the seed to make the experiment reproducible

pl.seed_everything(seed)
42

Let’s set up the augmentations for the train and the test data.

train_transform = transforms.Compose(
    [
        transforms.RandomResizedCrop((224, 224), scale=(0.7, 1.0)),
        transforms.RandomHorizontalFlip(),
        transforms.ToTensor(),
        transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
    ]
)

# no random cropping or flipping for the test data; we only resize deterministically
test_transform = transforms.Compose(
    [
        transforms.Resize((224, 224)),
        transforms.ToTensor(),
        transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
    ]
)

We load our data and split it into train/test with a 70/30 ratio.

# Please make sure the data folder contains subfolders for each class
#
# pizzas
#  L salami
#  L margherita
dset = ImageFolder("pizzas", transform=train_transform)

# to use the random_split method we need to obtain the length
# of the train and test set
full_len = len(dset)
train_len = int(full_len * 0.7)
test_len = full_len - train_len
dataset_train, dataset_test = random_split(dset, [train_len, test_len])

# random_split returns Subset objects that share the underlying dataset
# (and thus its transform), so assigning a new transform to the subset
# itself would have no effect; instead we point the test subset at a
# second ImageFolder view of the same folder with the test transform
dataset_test.dataset = ImageFolder("pizzas", transform=test_transform)

print("Training set consists of {} images".format(len(dataset_train)))
print("Test set consists of {} images".format(len(dataset_test)))
Training set consists of 118 images
Test set consists of 52 images
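Before training, it can help to sanity-check the augmentations by displaying a few training samples. A minimal sketch, assuming matplotlib is installed:

import matplotlib.pyplot as plt

# undo the ImageNet normalization so the images display with natural colors
mean = torch.tensor([0.485, 0.456, 0.406]).view(3, 1, 1)
std = torch.tensor([0.229, 0.224, 0.225]).view(3, 1, 1)

fig, axs = plt.subplots(1, 4, figsize=(12, 3))
for idx, ax in enumerate(axs):
    img, label = dataset_train[idx]
    img = (img * std + mean).clamp(0, 1)  # de-normalize back to [0, 1]
    ax.imshow(img.permute(1, 2, 0))  # CHW -> HWC for matplotlib
    ax.set_title(dset.classes[label])
    ax.axis("off")
plt.show()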

We can create our data loaders to fetch the data from the training and test set and pack them into batches.

dataloader_train = DataLoader(dataset_train, batch_size=batch_size, shuffle=True)
dataloader_test = DataLoader(dataset_test, batch_size=batch_size)
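With these settings, all data loading happens in the main process, which can become a bottleneck. If your machine has several CPU cores, you can optionally spawn loader workers; a possible tweak:

# spawn worker processes for data loading; tune the value to your CPU count
dataloader_train = DataLoader(
    dataset_train, batch_size=batch_size, shuffle=True, num_workers=4
)
dataloader_test = DataLoader(dataset_test, batch_size=batch_size, num_workers=4)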

PyTorch Lightning allows us to pack the model, the loss, and the optimizer into a single module.

class MyModel(pl.LightningModule):
    def __init__(self, num_classes=2):
        super().__init__()
        self.save_hyperparameters()

        # load an ImageNet-pretrained resnet from torchvision
        self.model = resnet18(weights=ResNet18_Weights.IMAGENET1K_V1)

        # add a new linear output layer (transfer learning)
        num_ftrs = self.model.fc.in_features
        self.model.fc = torch.nn.Linear(num_ftrs, num_classes)

        self.accuracy = torchmetrics.Accuracy(task="multiclass", num_classes=num_classes)

    def forward(self, x):
        return self.model(x)

    def training_step(self, batch, batch_idx):
        x, y = batch
        y_hat = self(x)
        loss = torch.nn.functional.cross_entropy(y_hat, y)
        self.log("train_loss", loss, prog_bar=True)
        return loss

    def validation_step(self, batch, batch_idx):
        x, y = batch
        y_hat = self(x)
        loss = torch.nn.functional.cross_entropy(y_hat, y)
        y_hat = torch.nn.functional.softmax(y_hat, dim=1)
        self.accuracy(y_hat, y)
        self.log("val_loss", loss, on_epoch=True, prog_bar=True)
        # logging the metric object lets Lightning aggregate it per epoch
        self.log("val_acc", self.accuracy, on_epoch=True, prog_bar=True)

    def configure_optimizers(self):
        # only the parameters of the new output layer are optimized here
        return torch.optim.SGD(self.model.fc.parameters(), lr=0.001, momentum=0.9)
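Note that the optimizer above only updates the new output layer. If you would rather fine-tune the whole network, one possible variant of configure_optimizers inside MyModel (a sketch, using a smaller learning rate as is common when fine-tuning pretrained weights) is:

    def configure_optimizers(self):
        # fine-tune all parameters with a smaller learning rate
        return torch.optim.SGD(self.model.parameters(), lr=0.0001, momentum=0.9)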

Finally, we can create the model and use the Trainer to train it.

model = MyModel()
trainer = pl.Trainer(max_epochs=4, devices=1)

# for simplicity we reuse the test loader as the validation loader
trainer.fit(model, dataloader_train, dataloader_test)
Epoch 0: 100%|##########| 22/22 [00:03<00:00,  6.96it/s, loss=0.703, v_num=0, train_loss=0.511, val_loss=1.180, val_acc=0.306]
Epoch 1: 100%|##########| 22/22 [00:03<00:00,  6.77it/s, loss=0.604, v_num=0, train_loss=0.540, val_loss=0.600, val_acc=0.454]
Epoch 2: 100%|##########| 22/22 [00:03<00:00,  6.83it/s, loss=0.44, v_num=0, train_loss=0.519, val_loss=0.332, val_acc=0.575]
Epoch 3: 100%|##########| 22/22 [00:03<00:00,  6.82it/s, loss=0.475, v_num=0, train_loss=0.474, val_loss=0.257, val_acc=0.659]
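Once training has finished, the model can be used to classify new images. Below is a minimal inference sketch; the filename my_pizza.jpg is a hypothetical placeholder.

from PIL import Image

model.eval()
image = Image.open("my_pizza.jpg").convert("RGB")  # hypothetical input image
x = test_transform(image).unsqueeze(0)  # add a batch dimension

with torch.no_grad():
    probs = torch.nn.functional.softmax(model(x), dim=1)[0]

# ImageFolder assigns class indices alphabetically, e.g. margherita=0, salami=1
for cls, p in zip(dset.classes, probs):
    print(f"{cls}: {p.item():.2%}")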

Total running time of the script: (0 minutes 13.580 seconds)
