How to Improve Performance With Transfer Learning for Deep Learning Neural Networks

Last Updated on August 25, 2020 An interesting benefit of deep learning neural networks is that they can be reused on related problems. Transfer learning refers to a technique in which a model developed for one predictive modeling problem is reused, partly or wholly, on a different but related problem to accelerate training and improve performance on the problem of interest. In deep learning, this means reusing the weights in one or more layers from a pre-trained network […]
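For a flavor of the idea, here is a minimal sketch of weight transfer in Keras (assuming TensorFlow 2.x; the source model, layer sizes, and task shapes are illustrative assumptions, not taken from the post):

```python
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense

# Pretend this model was already trained on a related source problem.
source_model = Sequential([
    Dense(16, activation="relu", input_shape=(10,)),
    Dense(16, activation="relu"),
    Dense(3, activation="softmax"),  # source task: 3 classes
])

# Reuse the hidden layers (and their weights) in a model for the target task.
target_model = Sequential()
for layer in source_model.layers[:-1]:
    layer.trainable = False  # freeze transferred weights, or leave True to fine-tune
    target_model.add(layer)
target_model.add(Dense(1, activation="sigmoid"))  # new output for the target task

target_model.compile(optimizer="adam", loss="binary_crossentropy")
```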

Read more

Your First Machine Learning Project in Python Step-By-Step

Last Updated on August 19, 2020 Do you want to do machine learning using Python, but you’re having trouble getting started? In this post, you will complete your first machine learning project using Python. In this step-by-step tutorial you will: Download and install Python SciPy and get the most useful package for machine learning in Python. Load a dataset and understand its structure using statistical summaries and data visualization. Create 6 machine learning models, pick the best and build confidence […]
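As a taste of what the tutorial covers, here is a condensed sketch with scikit-learn; the dataset URL and column names follow the classic iris CSV and are assumptions for illustration:

```python
from pandas import read_csv
from sklearn.model_selection import cross_val_score
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier

url = "https://raw.githubusercontent.com/jbrownlee/Datasets/master/iris.csv"
names = ["sepal-length", "sepal-width", "petal-length", "petal-width", "class"]
data = read_csv(url, names=names)
print(data.describe())  # statistical summary of the dataset

X, y = data.values[:, :4].astype(float), data.values[:, 4]
# Evaluate a few candidate models with 10-fold cross-validation.
for name, model in [("LR", LogisticRegression(max_iter=1000)),
                    ("KNN", KNeighborsClassifier()),
                    ("CART", DecisionTreeClassifier())]:
    scores = cross_val_score(model, X, y, cv=10, scoring="accuracy")
    print(name, scores.mean())
```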

Read more

Framework for Better Deep Learning

Last Updated on August 6, 2019 Modern deep learning libraries such as Keras allow you to define and start fitting a wide range of neural network models in minutes with just a few lines of code. Nevertheless, it is still challenging to configure a neural network to get good performance on a new predictive modeling problem. The challenge of getting good performance can be broken down into three main areas: problems with learning, problems with generalization, and problems with predictions. […]
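As the excerpt suggests, a small Keras model really can be defined and fit in a few lines; this sketch uses synthetic data and an arbitrary architecture as illustrative assumptions:

```python
from numpy.random import rand, randint
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense

X, y = rand(100, 8), randint(0, 2, 100)  # synthetic binary classification data

# Define, compile, and fit a small MLP.
model = Sequential([
    Dense(10, activation="relu", input_shape=(8,)),
    Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(X, y, epochs=10, batch_size=16, verbose=0)
```

Getting such a model to perform well on a new problem is the hard part, which is what the framework in the post addresses.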

Read more

How to Control Neural Network Model Capacity With Nodes and Layers

Last Updated on August 25, 2020 The capacity of a deep learning neural network model controls the scope of the types of mapping functions that it is able to learn. A model with too little capacity cannot learn the training dataset, meaning it will underfit, whereas a model with too much capacity may memorize the training dataset, meaning it will overfit or may get stuck or lost during the optimization process. The capacity of a neural network model is defined […]
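A rough sketch of how width (nodes per layer) and depth (number of layers) change capacity in Keras; the particular grid of sizes is an illustrative assumption:

```python
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense

def make_model(n_layers, n_nodes, n_inputs=10):
    """Build an MLP with the given depth (layers) and width (nodes)."""
    model = Sequential()
    model.add(Dense(n_nodes, activation="relu", input_shape=(n_inputs,)))
    for _ in range(n_layers - 1):
        model.add(Dense(n_nodes, activation="relu"))  # more layers -> more depth
    model.add(Dense(1, activation="sigmoid"))
    model.compile(optimizer="adam", loss="binary_crossentropy")
    return model

# More nodes or more layers means more parameters, hence more capacity.
for n_layers, n_nodes in [(1, 5), (3, 5), (1, 50), (3, 50)]:
    model = make_model(n_layers, n_nodes)
    print(n_layers, "layers x", n_nodes, "nodes ->", model.count_params(), "parameters")
```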

Read more

A Gentle Introduction to the Challenge of Training Deep Learning Neural Network Models

Last Updated on August 6, 2019 Deep learning neural networks learn a mapping function from inputs to outputs. This is achieved by updating the weights of the network in response to the errors the model makes on the training dataset. Updates are made to continually reduce this error until either a good enough model is found or the learning process gets stuck and stops. The process of training neural networks is the most challenging part of using the technique in […]
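A toy illustration of that update loop, using plain gradient descent on a one-weight model (the data, learning rate, and step count are assumptions for illustration):

```python
x, y_true = 2.0, 8.0   # single training example: we want w * x == y_true
w, lr = 0.0, 0.05      # initial weight and step size

for step in range(20):
    y_pred = w * x
    error = y_pred - y_true   # model error on the training data
    grad = 2 * error * x      # gradient of squared error with respect to w
    w -= lr * grad            # update the weight to reduce the error

print(round(w, 3))            # approaches 4.0 as the error shrinks
```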

Read more

How to Get Better Deep Learning Results (7-Day Mini-Course)

Last Updated on January 8, 2020 Better Deep Learning Neural Networks Crash Course. Get Better Performance From Your Deep Learning Models in 7 Days. Configuring neural network models is often referred to as a “dark art.” This is because there are no hard and fast rules for configuring a network for a given problem. We cannot analytically calculate the optimal model type or model configuration for a given dataset. Fortunately, there are techniques that are known to address specific issues […]

Read more

Neural Networks: Tricks of the Trade Review

Last Updated on August 6, 2019 Deep learning neural networks are challenging to configure and train. There are decades of tips and tricks spread across hundreds of research papers, source code, and in the heads of academics and practitioners. The book “Neural Networks: Tricks of the Trade,” originally published in 1998 and updated in 2012 at the cusp of the deep learning renaissance, ties together the disparate tips and tricks into a single volume. It includes advice that is required […]

Read more

8 Tricks for Configuring Backpropagation to Train Better Neural Networks

Last Updated on August 6, 2019 Neural network models are trained using stochastic gradient descent and model weights are updated using the backpropagation algorithm. The optimization problem solved by training a neural network model is very challenging, and although these algorithms are widely used because they perform so well in practice, there are no guarantees that they will converge to a good model in a timely manner. The challenge of training neural networks really comes down to the challenge of configuring […]
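For context, here is a sketch of where such configuration choices live in Keras; the learning rate, momentum, and batch size shown are arbitrary illustrative values, not the article's recommendations:

```python
from numpy.random import rand, randint
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense
from tensorflow.keras.optimizers import SGD

X, y = rand(200, 5), randint(0, 2, 200)  # synthetic data

model = Sequential([
    Dense(10, activation="relu", input_shape=(5,)),
    Dense(1, activation="sigmoid"),
])
opt = SGD(learning_rate=0.01, momentum=0.9)  # two of the key knobs to tune
model.compile(optimizer=opt, loss="binary_crossentropy")
model.fit(X, y, epochs=5, batch_size=32, verbose=0)  # batch size matters too
```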

Read more

Recommendations for Deep Learning Neural Network Practitioners

Last Updated on August 6, 2019 Deep learning neural networks are relatively straightforward to define and train given the wide adoption of open source libraries. Nevertheless, neural networks remain challenging to configure and train. In his 2012 paper titled “Practical Recommendations for Gradient-Based Training of Deep Architectures,” published as a preprint and as a chapter of the popular 2012 book “Neural Networks: Tricks of the Trade,” Yoshua Bengio, one of the fathers of the field of deep learning, provides practical recommendations […]

Read more

How to Fix FutureWarning Messages in scikit-learn

Last Updated on August 21, 2019 Upcoming changes to the scikit-learn library for machine learning are reported through the use of FutureWarning messages when the code is run. Warning messages can be confusing to beginners, as they make it look like there is a problem with the code or that they have done something wrong. Warning messages are also not good for operational code as they can obscure errors and program output. There are many ways to handle a warning message, including […]
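One common approach, sketched below with the standard-library warnings module, is to suppress the messages for a specific block of code (whether to silence or act on the warning is, of course, context dependent):

```python
import warnings

# Suppress FutureWarning messages only within this block.
with warnings.catch_warnings():
    warnings.simplefilter(action="ignore", category=FutureWarning)
    # Run the scikit-learn code that triggers the warning here, e.g.:
    from sklearn.linear_model import LogisticRegression
    model = LogisticRegression()
```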

Read more