What is Teacher Forcing for Recurrent Neural Networks?

Last Updated on August 14, 2019 Teacher forcing is a method for quickly and efficiently training recurrent neural network models that use the ground truth from a prior time step as input. It is a network training method critical to the development of deep learning language models used in machine translation, text summarization, and image captioning, among many other applications. In this post, you will discover teacher forcing as a method for training recurrent neural networks. After reading this […]
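A minimal sketch of the idea described above: during training, the decoder input at each time step is the ground-truth token from the previous step rather than the model's own prediction. The "startseq" start-of-sequence marker and the helper name are illustrative assumptions, not code from the post.

# Build (decoder input, decoder output) pairs by shifting the ground truth.
def teacher_forcing_pairs(target_tokens):
    decoder_input = ['startseq'] + target_tokens[:-1]  # ground truth shifted right by one step
    decoder_output = target_tokens                      # token the model must predict at each step
    return decoder_input, decoder_output

# Example: training a decoder on the target sequence "the cat sat"
dec_in, dec_out = teacher_forcing_pairs(['the', 'cat', 'sat'])
print(dec_in)   # ['startseq', 'the', 'cat']
print(dec_out)  # ['the', 'cat', 'sat']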

Read more

Encoder-Decoder Models for Text Summarization in Keras

Last Updated on August 7, 2019 Text summarization is a problem in natural language processing of creating a short, accurate, and fluent summary of a source document. The Encoder-Decoder recurrent neural network architecture developed for machine translation has proven effective when applied to the problem of text summarization. It can be difficult to apply this architecture in the Keras deep learning library, given some of the flexibility sacrificed to make the library clean, simple, and easy to use. In this […]
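As a rough illustration of one simple encoder-decoder layout in Keras (a sketch only; vocabulary sizes, sequence lengths, and layer sizes below are assumptions, not the post's exact model):

from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Embedding, LSTM, RepeatVector, TimeDistributed, Dense

src_vocab, tgt_vocab = 10000, 5000     # assumed source and summary vocabulary sizes
src_len, summary_len = 200, 30         # assumed source document and summary lengths

model = Sequential([
    Embedding(src_vocab, 128, input_length=src_len),          # embed the source document
    LSTM(256),                                                 # encoder: compress the source to a fixed vector
    RepeatVector(summary_len),                                 # repeat the encoding for each output step
    LSTM(256, return_sequences=True),                          # decoder: generate the summary sequence
    TimeDistributed(Dense(tgt_vocab, activation='softmax')),   # word distribution at each output step
])
model.compile(optimizer='adam', loss='categorical_crossentropy')
model.summary()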

Read more

Difference Between Classification and Regression in Machine Learning

Last Updated on May 22, 2019 There is an important difference between classification and regression problems. Fundamentally, classification is about predicting a label and regression is about predicting a quantity. I often see questions such as: How do I calculate accuracy for my regression problem? Questions like this are a symptom of not truly understanding the difference between classification and regression and what accuracy is trying to measure. In this tutorial, you will discover the differences between classification and regression. […]
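A small sketch of the metric distinction: accuracy applies to predicted labels, while error measures such as mean squared error apply to predicted quantities. The values below are made up for illustration.

from sklearn.metrics import accuracy_score, mean_squared_error

# Classification: predicting a label
y_true_labels = ['spam', 'ham', 'spam', 'ham']
y_pred_labels = ['spam', 'spam', 'spam', 'ham']
print('accuracy:', accuracy_score(y_true_labels, y_pred_labels))   # fraction of correct labels

# Regression: predicting a quantity
y_true_values = [102.5, 99.0, 110.2]
y_pred_values = [100.0, 101.5, 108.0]
print('mse:', mean_squared_error(y_true_values, y_pred_values))     # average squared error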

Read more

How to Visualize a Deep Learning Neural Network Model in Keras

Last Updated on September 11, 2019 The Keras Python deep learning library provides tools to visualize and better understand your neural network models. In this tutorial, you will discover exactly how to summarize and visualize your deep learning models in Keras. After completing this tutorial, you will know: How to create a textual summary of your deep learning model. How to create a graph plot of your deep learning model. Best practice tips when developing deep learning models in Keras. […]
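A short sketch of the two visualizations mentioned above, using a tiny made-up model; plot_model additionally requires the pydot and graphviz packages to be installed.

from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense
from tensorflow.keras.utils import plot_model

model = Sequential([
    Dense(8, activation='relu', input_shape=(10,)),
    Dense(1, activation='sigmoid'),
])

model.summary()                                             # textual summary of layers and parameters
plot_model(model, to_file='model.png', show_shapes=True)    # graph plot saved as an image file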

Read more

A Gentle Introduction to Concept Drift in Machine Learning

Last Updated on August 12, 2019 Data can change over time. This can result in poor and degrading predictive performance in predictive models that assume a static relationship between input and output variables. This problem of the changing underlying relationships in the data is called concept drift in the field of machine learning. In this post, you will discover the problem of concept drift and ways you may be able to address it in your own predictive modeling problems. […]

Read more

A Gentle Introduction to Exploding Gradients in Neural Networks

Last Updated on August 14, 2019 Exploding gradients are a problem where large error gradients accumulate and result in very large updates to neural network model weights during training. This has the effect of making your model unstable and unable to learn from your training data. In this post, you will discover the problem of exploding gradients with deep artificial neural networks. After completing this post, you will know: What exploding gradients are and the problems they cause during training. […]
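One widely used mitigation, sketched below for illustration (not necessarily the full coverage of the post), is gradient clipping, which Keras optimizers support via the clipnorm argument.

from tensorflow.keras.optimizers import SGD

# Clip the gradient vector norm to 1.0 before each weight update,
# so a single large error gradient cannot produce a huge update.
opt = SGD(learning_rate=0.01, clipnorm=1.0)
# model.compile(optimizer=opt, loss='mean_squared_error')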

Read more

A Gentle Introduction to Transfer Learning for Deep Learning

Last Updated on September 16, 2019 Transfer learning is a machine learning method where a model developed for a task is reused as the starting point for a model on a second task. It is a popular approach in deep learning where pre-trained models are used as the starting point on computer vision and natural language processing tasks given the vast compute and time resources required to develop neural network models on these problems and from the huge jumps in […]
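A minimal sketch of the pre-trained-model pattern in Keras: reuse VGG16 trained on ImageNet as a frozen feature extractor and add a new classifier head. The 10-class output layer and layer sizes are illustrative assumptions.

from tensorflow.keras.applications import VGG16
from tensorflow.keras.layers import Flatten, Dense
from tensorflow.keras.models import Model

base = VGG16(weights='imagenet', include_top=False, input_shape=(224, 224, 3))
for layer in base.layers:
    layer.trainable = False      # freeze the pre-trained convolutional layers

x = Flatten()(base.output)
x = Dense(256, activation='relu')(x)
output = Dense(10, activation='softmax')(x)   # new task-specific output layer

model = Model(inputs=base.input, outputs=output)
model.compile(optimizer='adam', loss='categorical_crossentropy')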

Read more

Why Applied Machine Learning Is Hard

How to Handle the Intractability of Applied Machine Learning. Applied machine learning is challenging. You must make many decisions where there is no known “right answer” for your specific problem, such as: What framing of the problem to use? What input and output data to use? What learning algorithm to use? What algorithm configuration to use? This is challenging for beginners who expect that you can calculate or be told what data to use or how to best configure an […]

Read more

A Gentle Introduction to Applied Machine Learning as a Search Problem

Last Updated on September 28, 2020 Applied machine learning is challenging because designing a perfect learning system for a given problem is intractable. There is no best training data or best algorithm for your problem, only the best that you can discover. The application of machine learning is best thought of as a search problem for the best mapping of inputs to outputs given the knowledge and resources available to you for a given project. In this post, you […]

Read more

Caption Generation with the Inject and Merge Encoder-Decoder Models

Last Updated on August 7, 2019 Caption generation is a challenging artificial intelligence problem that draws on both computer vision and natural language processing. The encoder-decoder recurrent neural network architecture has been shown to be effective at this problem. The implementation of this architecture can be distilled into inject and merge based models, and both make different assumptions about the role of the recurrent neural network in addressing the problem. In this post, you will discover the inject and merge […]
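A rough sketch of the "merge" variant described above: the recurrent network encodes only the caption prefix, and the image features are combined with that encoding outside the RNN before the next word is predicted. Feature sizes, vocabulary size, and caption length are illustrative assumptions.

from tensorflow.keras.layers import Input, Dense, Embedding, LSTM, add
from tensorflow.keras.models import Model

vocab_size, max_len = 5000, 34   # assumed vocabulary size and maximum caption length

# Image feature branch (e.g. features extracted by a pre-trained CNN)
img_input = Input(shape=(4096,))
img_dense = Dense(256, activation='relu')(img_input)

# Caption prefix branch, encoded by the LSTM
cap_input = Input(shape=(max_len,))
cap_embed = Embedding(vocab_size, 256, mask_zero=True)(cap_input)
cap_lstm = LSTM(256)(cap_embed)

# Merge the two representations outside the RNN, then predict the next word
merged = add([img_dense, cap_lstm])
hidden = Dense(256, activation='relu')(merged)
out = Dense(vocab_size, activation='softmax')(hidden)

model = Model(inputs=[img_input, cap_input], outputs=out)
model.compile(optimizer='adam', loss='categorical_crossentropy')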

Read more