What is translation equivariance, and why do we use convolutions to get it

By Medium - 2020-10-05

Description

Multi-layer Perceptrons (MLPs) are standard neural networks with fully connected layers, where each input unit is connected to each output unit. We rarely use this kind of layer when working with…
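To make "each input unit is connected to each output unit" concrete, here is a minimal NumPy sketch of a fully connected layer; the sizes (a flattened 28×28 image, 128 outputs) and names are my own illustrative assumptions, not from the article:

```python
import numpy as np

rng = np.random.default_rng(0)

def fully_connected(x, W, b):
    """One weight per (input, output) pair: every input connects to every output."""
    return W @ x + b

n_in, n_out = 784, 128  # assumed sizes: a flattened 28x28 image, 128 hidden units
W = rng.standard_normal((n_out, n_in)) * 0.01
b = np.zeros(n_out)

y = fully_connected(rng.standard_normal(n_in), W, b)
print(W.size)   # 100352 weights -- the parameter count is tied to the input size
print(y.shape)  # (128,)
```

Note how the weight matrix grows with the input size, which is one reason such layers are rarely used directly on images.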

Summary

  • It is interesting to note that this does not require any change to the convolutional part of the model, since convolutions are independent of the input image size.
  • As we can see, a stack of convolutional layers produces an equivariant mapping.
  • Then, ϕ(0,0 p) is nothing other than the “impulse response”: the output of our operator when applied to a Dirac at the origin, which we defined as “h”.
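The two claims above — that a stack of convolutions is translation-equivariant, and that applying the operator to a Dirac at the origin yields the impulse response — can be checked numerically. Below is a minimal NumPy sketch (sizes, kernel, and the circular-padding choice are my own assumptions; wrap-around padding makes the equivariance exact at the borders):

```python
import numpy as np

def conv2d_circular(image, kernel):
    """Cross-correlation with circular (wrap-around) padding.
    Built from shifts of the input, so it visibly commutes with translation."""
    out = np.zeros_like(image, dtype=float)
    kh, kw = kernel.shape
    for i in range(kh):
        for j in range(kw):
            out += kernel[i, j] * np.roll(image, shift=(-i, -j), axis=(0, 1))
    return out

rng = np.random.default_rng(0)
img = rng.standard_normal((8, 8))
k = rng.standard_normal((3, 3))

# Equivariance: convolving a shifted image equals shifting the convolved image.
shift = (2, 3)
lhs = conv2d_circular(np.roll(img, shift, axis=(0, 1)), k)
rhs = np.roll(conv2d_circular(img, k), shift, axis=(0, 1))
assert np.allclose(lhs, rhs)

# Impulse response: apply the operator to a Dirac at the origin.
dirac = np.zeros((8, 8))
dirac[0, 0] = 1.0
h = conv2d_circular(dirac, k)
# For cross-correlation the response is the spatially flipped kernel
# (a true convolution would return the kernel itself); at the origin they agree.
assert np.isclose(h[0, 0], k[0, 0])
```

Because the operator is a sum of shifted copies of the input, it commutes with any translation, which is exactly the equivariance property the article describes.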


Topics

  1. Machine_Learning (0.42)
  2. NLP (0.18)
  3. Management (0.02)

Similar Articles

10 Deep Learning Terms Explained in Simple English

By datasciencecentral - 2020-12-27

  Deep Learning is a new area of Machine Learning research that has been gaining significant media interest owing to the role it is playing in artificial intel…