[Quick learning] Introduction to PyTorch: Implementing MLP regression and learning the basics of PyTorch
Last time, we reviewed how to handle torch.Tensor, the data structure at the base of PyTorch.
This time, we will implement multilayer perceptron (MLP) regression in PyTorch and look at the main PyTorch packages along the way.
Overall flow
The overall flow of model implementation is as follows. The names in parentheses are the PyTorch packages used at each step.
- Create the input dataset and iterate over it (Dataset, DataLoader)
- Define the neural network (nn.Module)
- Compute the loss and propagate gradients to the network parameters (nn.Module, autograd)
- Update the network weights (torch.optim)
Major PyTorch packages
Not only PyTorch: most deep learning frameworks offer packages that simplify implementation. The main ones in PyTorch are:
| Package | Description |
| --- | --- |
| torch.Tensor | Multidimensional array; the core data structure used in PyTorch. |
| torch.autograd | Implements forward/backward propagation. Provides automatic differentiation for Tensors, such as back propagation with backward(). |
| torch.utils.data | Utilities such as Dataset, which bundles input data with its labels as a set, and DataLoader, which draws mini-batches from a Dataset and passes them to the model. |
| torch.nn.Module | Used for building neural networks. Encapsulates parameters and handles saving/loading models and moving them to the GPU. |
| torch.optim | Provides parameter optimization algorithms such as SGD and Adam. |
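As a quick illustration of torch.autograd (a minimal sketch, not from the original article): a Tensor created with requires_grad=True records the operations applied to it, so backward() can compute gradients automatically.

```python
import torch

# torch.autograd in action: a Tensor with requires_grad=True tracks
# operations so gradients can be computed with backward().
x = torch.tensor(2.0, requires_grad=True)
y = x ** 2 + 3 * x   # y = x^2 + 3x
y.backward()         # computes dy/dx = 2x + 3
print(x.grad)        # tensor(7.) at x = 2
```

The same mechanism is what later lets us call loss.backward() during training without writing any gradient code ourselves.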
Model implementation
Creating a dataset
This time, we will prepare sin(5x) plus random noise as practice data with NumPy, and convert it to torch.Tensor with torch.from_numpy().
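A minimal sketch of this data preparation step (the sample count, input range, and noise scale are my assumptions; the article does not specify them):

```python
import numpy as np
import torch

# Practice data: y = sin(5x) plus Gaussian noise, created with NumPy.
np.random.seed(0)                                # for reproducibility (my addition)
x = np.random.uniform(0, np.pi, 100)             # 100 sample points (assumed range)
y = np.sin(5 * x) + 0.25 * np.random.randn(100)  # assumed noise scale

# Convert the NumPy arrays to torch.Tensor with torch.from_numpy().
# float() casts float64 -> float32, the default dtype of nn layers;
# unsqueeze(1) adds a feature dimension, giving shape (100, 1).
x_t = torch.from_numpy(x).float().unsqueeze(1)
y_t = torch.from_numpy(y).float().unsqueeze(1)
print(x_t.shape, x_t.dtype)
```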
Model definition
In PyTorch, a model is defined as a Python class that inherits from the nn.Module class.
- class MLP(nn.Module): the MLP class inherits from the parent class nn.Module
- def __init__(self, ...): receives arguments and initializes the instance
- super(MLP, self).__init__(): initializes the parent class via the super function
- def forward(self, x): runs when the instance is called on an input. Once you define the forward function, the backward function (gradient calculation) is defined automatically by autograd.
You can inspect the structure and parameters of the network with .parameters().
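Putting these pieces together, a three-layer MLP for this regression task might look like the sketch below (the hidden size and activation are my assumptions):

```python
import torch
import torch.nn as nn

# A 3-layer MLP for 1D regression. Layer sizes are illustrative
# assumptions; the article does not specify them.
class MLP(nn.Module):
    def __init__(self, hidden=10):
        super(MLP, self).__init__()       # initialize the parent nn.Module
        self.fc1 = nn.Linear(1, hidden)
        self.fc2 = nn.Linear(hidden, hidden)
        self.fc3 = nn.Linear(hidden, 1)

    def forward(self, x):                 # backward() is derived automatically
        x = torch.relu(self.fc1(x))
        x = torch.relu(self.fc2(x))
        return self.fc3(x)

model = MLP()
# .parameters() yields each layer's weight and bias tensors.
for p in model.parameters():
    print(p.shape)
```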
Loss calculation / backpropagation / weight update
To understand the individual behavior, we will take a piece of data from x and input it into the neural network to see how the parameters change due to error calculations and weight updates.
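The single-sample experiment can be sketched as follows. For brevity I use a single linear layer as a stand-in model, and MSELoss with SGD as assumed choices:

```python
import torch
import torch.nn as nn
import torch.optim as optim

torch.manual_seed(0)           # reproducibility (my addition)

# One piece of data through the network: forward, loss, backward, update.
model = nn.Linear(1, 1)        # stand-in model (assumption)
criterion = nn.MSELoss()
optimizer = optim.SGD(model.parameters(), lr=0.1)

x = torch.tensor([[0.5]])
target = torch.tensor([[1.0]])

before = model.weight.clone()
optimizer.zero_grad()          # clear any old gradients
loss = criterion(model(x), target)
loss.backward()                # gradients propagated to the parameters
optimizer.step()               # weights updated by SGD

print(model.weight.grad)       # gradient is now populated
print(model.weight - before)   # the weight has changed
```

Running this once shows the whole cycle in miniature: the gradient appears after backward(), and step() moves the weight against it.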
Try turning the learning loop
Perform the above flow for each mini-batch to train the neural network. Dataset returns a pair of data and its corresponding label, and DataLoader is a class that returns data in batches of the chosen size.
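A full training loop over the sin(5x) data might look like the sketch below. I use TensorDataset (a ready-made Dataset from torch.utils.data) and nn.Sequential for brevity; batch size, epochs, and learning rate are assumed values:

```python
import numpy as np
import torch
import torch.nn as nn
from torch.utils.data import TensorDataset, DataLoader

torch.manual_seed(0)
np.random.seed(0)

# sin(5x) regression data (same setup as before; ranges are assumptions).
x = np.random.uniform(0, np.pi, 128).astype(np.float32).reshape(-1, 1)
y = (np.sin(5 * x) + 0.25 * np.random.randn(*x.shape)).astype(np.float32)

# TensorDataset pairs each input with its label; DataLoader serves
# shuffled mini-batches of the chosen batch size.
dataset = TensorDataset(torch.from_numpy(x), torch.from_numpy(y))
loader = DataLoader(dataset, batch_size=16, shuffle=True)

model = nn.Sequential(nn.Linear(1, 32), nn.ReLU(), nn.Linear(32, 1))
criterion = nn.MSELoss()
optimizer = torch.optim.Adam(model.parameters(), lr=0.01)

for epoch in range(50):
    for xb, yb in loader:      # one mini-batch per iteration
        optimizer.zero_grad()
        loss = criterion(model(xb), yb)
        loss.backward()
        optimizer.step()

print(f"final batch loss: {loss.item():.4f}")
```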
Visualization of calculation graph
The structure of the three-layer MLP created this time can be visualized with a Python package called torchviz. Try it when the output of .parameters() alone is not enough to see the network structure.
So far, we have covered PyTorch and its major packages through the implementation of MLP regression.