This documentation accompanies the video tutorial: youtube link


Tutorial 1: Curate Pizza Images

In this tutorial, you will learn how to upload a dataset to the Lightly platform, curate the data, and finally use the curated data to train a model.

What you will learn

  • Create and upload a new dataset via the web frontend

  • Curate a dataset using simple image metrics such as Width, Height, Sharpness, Signal-to-Noise ratio, and File Size

  • Download images based on a tag from a dataset

  • Train an image classifier with the filtered dataset

Requirements

You can use your own dataset or the one we provide with this tutorial: pizzas.zip. If you use your own dataset, please make sure the images are smaller than 2048 pixels in width and height and that you use fewer than 1000 images.
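If you bring your own images, a quick check such as the one below can verify both constraints before uploading. This is a minimal sketch; the folder name my_dataset is a placeholder for your own image folder, and it uses Pillow, which is installed alongside torchvision.

import os
from PIL import Image

image_dir = 'my_dataset'  # placeholder: path to your image folder
files = [f for f in os.listdir(image_dir)
         if f.lower().endswith(('.jpg', '.jpeg', '.png'))]

assert len(files) < 1000, 'please use fewer than 1000 images'

for fname in files:
    with Image.open(os.path.join(image_dir, fname)) as img:
        width, height = img.size
        assert width < 2048 and height < 2048, \
            '{} is {}x{} pixels, please resize it'.format(fname, width, height)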

Note

For this tutorial, we provide you with a small dataset of pizza images. We chose a small dataset because it’s easy to ship and train.

Upload the data

We start by uploading the dataset to the Lightly Platform.

If you do not have an account yet, create one, then create a new dataset. You can upload images using drag and drop from your local machine.

Filter the dataset using metadata

Once the dataset is created and the images uploaded, you can head to ‘Histogram’ under the ‘Analyze & Filter’ menu.

Move the sliders below the histograms to define filter rules for the dataset. Once you are satisfied with the filtered dataset, create a new tag using the tag menu on the left side.
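The platform computes these metrics for you. To build an intuition for what the sliders control, the sketch below approximates two of the metrics locally. These are rough illustrative definitions (sharpness as the spread of an edge-filtered image, signal-to-noise ratio as mean over standard deviation of the pixel values), not necessarily the exact formulas Lightly uses.

import numpy as np
from PIL import Image, ImageFilter

def image_metrics(path):
    img = Image.open(path).convert('L')  # work on the grayscale image
    pixels = np.asarray(img, dtype=np.float64)

    # sharpness: spread of the response of an edge-detection filter
    edges = np.asarray(img.filter(ImageFilter.FIND_EDGES), dtype=np.float64)
    sharpness = edges.std()

    # signal-to-noise ratio: mean pixel value over its standard deviation
    snr = pixels.mean() / pixels.std()

    return {'width': img.width, 'height': img.height,
            'sharpness': sharpness, 'snr': snr}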

Download the curated dataset

Now that we have filtered the dataset, we want to download it and train a model. To do so, click on the download menu on the left.

We can now download the filtered images by clicking on the ‘DOWNLOAD IMAGES’ button. In our case, the images are stored in the ‘pizzas’ folder. We now have to annotate the images, which we can do by moving each image into the subfolder corresponding to its class. For example, we move salami pizza images to the ‘salami’ folder and margherita pizza images to the ‘margherita’ folder.
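If you prefer to script the annotation step, a small helper like the one below moves images into class subfolders. This is a sketch; the filename-to-label mapping is hypothetical and depends on how you label your images.

import os
import shutil

# hypothetical mapping from filename to class label
labels = {
    'pizza_001.jpg': 'salami',
    'pizza_002.jpg': 'margherita',
}

for fname, label in labels.items():
    os.makedirs(os.path.join('pizzas', label), exist_ok=True)
    shutil.move(os.path.join('pizzas', fname),
                os.path.join('pizzas', label, fname))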


Training a model using the curated data

Now we can start training our model using PyTorch Lightning. We start by importing the necessary dependencies:

import os
import torch
import pytorch_lightning as pl
from torchvision.datasets import ImageFolder
from torch.utils.data import DataLoader, random_split
from torchvision import transforms
from torchvision.models import resnet18

We use a small batch size to make sure we can run the training on all kinds of machines. Feel free to adjust the value to one that works on your machine.

batch_size = 8
seed = 42

Set the seed to make the experiment reproducible:

pl.seed_everything(seed)

Out:

42

Let’s set up the augmentations for the train and the test data.

train_transform = transforms.Compose([
    transforms.RandomResizedCrop((224, 224), scale=(0.7, 1.0)),
    transforms.RandomHorizontalFlip(),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])
])

# we don't do any resizing or mirroring for the test data
test_transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])
])

We load our data and split it into train/test with a 70/30 ratio.

# Please make sure the data folder contains subfolders for each class
#
# pizzas
#  L salami
#  L margherita
dset = ImageFolder('pizzas', transform=train_transform)

# to use the random_split method we need to obtain the lengths
# of the train and test set
full_len = len(dset)
train_len = int(full_len * 0.7)
test_len = full_len - train_len
dataset_train, dataset_test = random_split(dset, [train_len, test_len])

# random_split returns Subset objects that share the underlying dataset,
# so assigning a new transform to the subset has no effect; instead we
# point the test subset at a second ImageFolder with the test transforms
# (ImageFolder sorts files deterministically, so the indices still match)
dataset_test.dataset = ImageFolder('pizzas', transform=test_transform)

print('Training set consists of {} images'.format(len(dataset_train)))
print('Test set consists of {} images'.format(len(dataset_test)))

Out:

Training set consists of 118 images
Test set consists of 52 images

We can create our data loaders to fetch the data from the training and test set and pack them into batches.

# increase num_workers to speed up data loading if your machine
# has multiple CPU cores
dataloader_train = DataLoader(dataset_train, batch_size=batch_size, shuffle=True)
dataloader_test = DataLoader(dataset_test, batch_size=batch_size)

PyTorch Lightning allows us to pack the loss as well as the optimizer into a single module.

class MyModel(pl.LightningModule):
    def __init__(self, num_classes=2):
        super().__init__()
        self.save_hyperparameters()

        # load a pretrained resnet from torchvision
        self.model = resnet18(pretrained=True)

        # add a new linear output layer (transfer learning)
        num_ftrs = self.model.fc.in_features
        self.model.fc = torch.nn.Linear(num_ftrs, num_classes)

        # metric to track the classification accuracy (newer versions of
        # PyTorch Lightning moved this into the separate torchmetrics package)
        self.accuracy = pl.metrics.Accuracy()

    def forward(self, x):
        return self.model(x)

    def training_step(self, batch, batch_idx):
        x, y = batch
        y_hat = self(x)
        loss = torch.nn.functional.cross_entropy(y_hat, y)
        self.log('train_loss', loss, prog_bar=True)
        return loss

    def validation_step(self, batch, batch_idx):
        x, y = batch
        y_hat = self(x)
        loss = torch.nn.functional.cross_entropy(y_hat, y)
        self.accuracy(y_hat, y)
        self.log('val_loss', loss, on_epoch=True, prog_bar=True)
        self.log('val_acc', self.accuracy.compute(), on_epoch=True, prog_bar=True)

    def configure_optimizers(self):
        # we only optimize the parameters of the new output layer
        return torch.optim.SGD(self.model.fc.parameters(), lr=0.001, momentum=0.9)

Finally, we can create the model and use the Trainer to train it.

model = MyModel()
trainer = pl.Trainer(max_epochs=4)

# for simplicity, we use the test set as the validation set here
trainer.fit(model, dataloader_train, dataloader_test)

Out:

GPU available: True, used: False
TPU available: None, using: 0 TPU cores
/opt/conda/envs/lightly/lib/python3.7/site-packages/pytorch_lightning/utilities/distributed.py:49: UserWarning: GPU available but not used. Set the --gpus flag when calling the script.
  warnings.warn(*args, **kwargs)

  | Name     | Type     | Params
--------------------------------------
0 | model    | ResNet   | 11.2 M
1 | accuracy | Accuracy | 0
--------------------------------------
11.2 M    Trainable params
0         Non-trainable params
11.2 M    Total params
/opt/conda/envs/lightly/lib/python3.7/site-packages/pytorch_lightning/utilities/distributed.py:49: UserWarning: The dataloader, val dataloader 0, does not have many workers which may be a bottleneck. Consider increasing the value of the `num_workers` argument` (try 12 which is the number of cpus on this machine) in the `DataLoader` init to improve performance.
  warnings.warn(*args, **kwargs)

Validation sanity check: 100%|##########| 2/2 [00:01<00:00,  1.79it/s]

/opt/conda/envs/lightly/lib/python3.7/site-packages/pytorch_lightning/utilities/distributed.py:49: UserWarning: The dataloader, train dataloader, does not have many workers which may be a bottleneck. Consider increasing the value of the `num_workers` argument` (try 12 which is the number of cpus on this machine) in the `DataLoader` init to improve performance.
  warnings.warn(*args, **kwargs)

Epoch 0: 100%|##########| 22/22 [00:16<00:00,  1.37it/s, loss=0.742, v_num=4, val_loss=0.567, val_acc=0.654, train_loss=0.621]
Epoch 1: 100%|##########| 22/22 [00:15<00:00,  1.44it/s, loss=0.621, v_num=4, val_loss=0.379, val_acc=0.865, train_loss=0.219]
Epoch 2: 100%|##########| 22/22 [00:15<00:00,  1.39it/s, loss=0.634, v_num=4, val_loss=0.325, val_acc=0.885, train_loss=0.917]
Epoch 3: 100%|##########| 22/22 [00:15<00:00,  1.38it/s, loss=0.628, v_num=4, val_loss=0.565, val_acc=0.673, train_loss=0.417]
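Once training has finished, we can use the model to classify a new image. Below is a minimal inference sketch; the file name my_pizza.jpg is a placeholder for one of your own images.

from PIL import Image

# switch to evaluation mode and disable gradient tracking
model.eval()

image = Image.open('my_pizza.jpg').convert('RGB')  # placeholder image
x = test_transform(image).unsqueeze(0)  # add a batch dimension

with torch.no_grad():
    logits = model(x)
    prediction = logits.argmax(dim=1).item()

print('Predicted class: {}'.format(dset.classes[prediction]))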

Total running time of the script: 1 minute 6.123 seconds
