Crash Course On Multi-Layer Perceptron Neural Networks

Last Updated on August 15, 2020
Artificial neural networks are a fascinating area of study, although they can be intimidating when you are just getting started.
There is a lot of specialized terminology used to describe the data structures and algorithms used in the field.
In this post you will get a crash course in the terminology and processes used in the field of multi-layer perceptron artificial neural networks. After reading this post you will know:
- The building blocks of neural networks including neurons, weights and activation functions.
- How the building blocks are used in layers to create networks.
- How networks are trained from example data.
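To make these three ideas concrete, here is a minimal sketch, assuming the Keras library and using made-up data, layer sizes, and activations chosen purely for illustration (this code is not taken from the post itself). It shows neurons with weights and activation functions, arranged in layers, and a network trained from example data.

```python
# A minimal sketch, assuming the Keras API and hypothetical data,
# to illustrate neurons/weights/activations, layers, and training.
import numpy as np
from tensorflow import keras

# Hypothetical example data: 100 samples, 8 input features, binary labels.
X = np.random.rand(100, 8)
y = np.random.randint(0, 2, size=(100,))

# Layers of neurons: each Dense layer holds a weight matrix, a bias vector,
# and an activation function applied to the weighted sums.
model = keras.Sequential([
    keras.Input(shape=(8,)),
    keras.layers.Dense(12, activation="relu"),    # hidden layer of 12 neurons
    keras.layers.Dense(1, activation="sigmoid"),  # single output neuron
])

# Training from example data: the optimizer adjusts the weights
# to reduce the loss on the training samples.
model.compile(loss="binary_crossentropy", optimizer="adam", metrics=["accuracy"])
model.fit(X, y, epochs=10, batch_size=10, verbose=0)
```

The layer sizes, activations, and random data above are assumptions made for the sketch; the rest of this post walks through the underlying terminology and processes first.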
Kick-start your project with my new book Deep Learning With Python, including step-by-step tutorials and the Python source code files for all examples.
Let’s get started.

Crash Course In Neural Networks. Photo by Joe Stump, some rights reserved.
Crash Course Overview
We are going to cover a lot of ground very quickly in this post. Here is an idea of what is ahead:
- Multi-Layer Perceptrons.
- Neurons, Weights, and Activations.
- Networks of Neurons.
- Training Networks.