A Gentle Introduction to Transfer Learning for Deep Learning

Last Updated on September 16, 2019 Transfer learning is a machine learning method where a model developed for one task is reused as the starting point for a model on a second task. It is a popular approach in deep learning, where pre-trained models are used as the starting point for computer vision and natural language processing tasks, given the vast compute and time resources required to develop neural network models for these problems and the huge jumps in […]
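
To make the idea concrete, here is a rough sketch (not code from the post) of reusing a pre-trained Keras image model as the starting point for a new task; the frozen VGG16 base, 10-class head, and 224x224 input are illustrative assumptions.

```python
# Minimal sketch: reuse a pre-trained image model as the starting point
# for a new task (assumes Keras with the bundled VGG16 ImageNet weights).
from tensorflow.keras.applications import VGG16
from tensorflow.keras.layers import Dense, Flatten
from tensorflow.keras.models import Model

# Load the convolutional base trained on ImageNet, without its classifier head.
base = VGG16(weights='imagenet', include_top=False, input_shape=(224, 224, 3))
base.trainable = False  # freeze the transferred weights for the new task

# Add a small task-specific head (10 classes is an arbitrary assumption).
x = Flatten()(base.output)
x = Dense(256, activation='relu')(x)
outputs = Dense(10, activation='softmax')(x)

model = Model(inputs=base.input, outputs=outputs)
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])
model.summary()
```

Only the new head is trained at first; the frozen base can optionally be unfrozen later and fine-tuned at a lower learning rate.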

Read more

Why Applied Machine Learning Is Hard

How to Handle the Intractability of Applied Machine Learning. Applied machine learning is challenging. You must make many decisions where there is no known “right answer” for your specific problem, such as: What framing of the problem to use? What input and output data to use? What learning algorithm to use? What algorithm configuration to use? This is challenging for beginners who expect that you can calculate or be told what data to use or how to best configure an […]

Read more

A Gentle Introduction to Applied Machine Learning as a Search Problem

Last Updated on September 28, 2020 Applied machine learning is challenging because designing a perfect learning system for a given problem is intractable. There is no best training data or best algorithm for your problem, only the best that you can discover. The application of machine learning is best thought of as a search problem for the best mapping of inputs to outputs given the knowledge and resources available to you for a given project. In this post, you […]

Read more

Caption Generation with the Inject and Merge Encoder-Decoder Models

Last Updated on August 7, 2019 Caption generation is a challenging artificial intelligence problem that draws on both computer vision and natural language processing. The encoder-decoder recurrent neural network architecture has been shown to be effective at this problem. The implementation of this architecture can be distilled into inject-based and merge-based models, which make different assumptions about the role of the recurrent neural network in addressing the problem. In this post, you will discover the inject and merge […]
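
As a rough sketch of the merge flavour of this architecture (the 4096-element photo feature vector, vocabulary size, and caption length below are illustrative assumptions, not values from the post), the image encoding and the RNN's language encoding are combined late, just before predicting the next word:

```python
# Minimal sketch of a "merge" caption model: the RNN encodes language only,
# and the image features are merged in just before the output layer.
from tensorflow.keras.layers import Input, Dense, Dropout, Embedding, LSTM, add
from tensorflow.keras.models import Model

vocab_size, max_length = 5000, 34  # illustrative values

# Photo feature branch (e.g. features extracted by a pre-trained CNN)
inputs1 = Input(shape=(4096,))
fe = Dense(256, activation='relu')(Dropout(0.5)(inputs1))

# Text sequence branch
inputs2 = Input(shape=(max_length,))
se = Embedding(vocab_size, 256, mask_zero=True)(inputs2)
se = LSTM(256)(Dropout(0.5)(se))

# Merge the two encodings and predict the next word in the caption
decoder = Dense(256, activation='relu')(add([fe, se]))
outputs = Dense(vocab_size, activation='softmax')(decoder)

model = Model(inputs=[inputs1, inputs2], outputs=outputs)
model.compile(loss='categorical_crossentropy', optimizer='adam')
```

An inject model would instead feed the image features into the RNN itself, for example as the first "word" of the sequence.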

Read more

A Gentle Introduction to Neural Machine Translation

Last Updated on August 7, 2019 One of the earliest goals for computers was the automatic translation of text from one language to another. Automatic or machine translation is perhaps one of the most challenging artificial intelligence tasks given the fluidity of human language. Classically, rule-based systems were used for this task, which were replaced in the 1990s with statistical methods. More recently, deep neural network models achieve state-of-the-art results in a field that is aptly named neural machine translation. […]

Read more

Encoder-Decoder Recurrent Neural Network Models for Neural Machine Translation

Last Updated on August 7, 2019 The encoder-decoder architecture for recurrent neural networks is the standard neural machine translation method that rivals and in some cases outperforms classical statistical machine translation methods. This architecture is very new, having only been pioneered in 2014, yet it has already been adopted as the core technology inside Google’s translate service. In this post, you will discover the two seminal examples of the encoder-decoder model for neural machine translation. After reading this post, you will know: […]
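
For orientation, a minimal Keras-style sketch of the encoder-decoder idea is shown below (teacher-forcing style); the vocabulary sizes and hidden width are assumptions, and this is not the exact configuration of either seminal model covered in the post.

```python
# Minimal sketch of an encoder-decoder for translation: the encoder reads the
# source sentence and its final state conditions the decoder's generation.
from tensorflow.keras.layers import Input, Embedding, LSTM, Dense
from tensorflow.keras.models import Model

src_vocab, tgt_vocab, units = 8000, 6000, 256  # illustrative values

# Encoder: keep only the final internal state of the LSTM
enc_in = Input(shape=(None,))
enc_emb = Embedding(src_vocab, units)(enc_in)
_, state_h, state_c = LSTM(units, return_state=True)(enc_emb)

# Decoder: generate the target sentence conditioned on the encoder state
dec_in = Input(shape=(None,))
dec_emb = Embedding(tgt_vocab, units)(dec_in)
dec_out, _, _ = LSTM(units, return_sequences=True, return_state=True)(
    dec_emb, initial_state=[state_h, state_c])
outputs = Dense(tgt_vocab, activation='softmax')(dec_out)

model = Model([enc_in, dec_in], outputs)
model.compile(optimizer='adam', loss='sparse_categorical_crossentropy')
```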

Read more

How to Configure an Encoder-Decoder Model for Neural Machine Translation

Last Updated on August 7, 2019 The encoder-decoder architecture for recurrent neural networks is achieving state-of-the-art results on standard machine translation benchmarks and is being used in the heart of industrial translation services. The model is simple, but given the large amount of data required to train it, tuning the myriad design decisions in the model in order to get top performance on your problem can be practically intractable. Thankfully, research scientists have used Google-scale hardware to do this work […]

Read more

How to Implement a Beam Search Decoder for Natural Language Processing

Last Updated on June 3, 2020 Natural language processing tasks, such as caption generation and machine translation, involve generating sequences of words. Models developed for these problems often operate by generating probability distributions across the vocabulary of output words and it is up to decoding algorithms to sample the probability distributions to generate the most likely sequences of words. In this tutorial, you will discover the greedy search and beam search decoding algorithms that can be used on text generation […]
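
As a flavour of what the tutorial covers, here is a minimal beam search sketch over a toy matrix of per-step word probabilities; the toy data and beam width of 3 are illustrative assumptions.

```python
# Minimal sketch of a beam search decoder. `data` is a sequence of probability
# distributions over the vocabulary, one per output time step; candidate scores
# are summed negative log probabilities, so smaller is better.
from math import log

def beam_search_decoder(data, k):
    sequences = [([], 0.0)]  # (token indices so far, cumulative score)
    for row in data:
        all_candidates = []
        for seq, score in sequences:
            for j, p in enumerate(row):
                all_candidates.append((seq + [j], score - log(p)))
        # keep only the k best (lowest-score) partial sequences
        sequences = sorted(all_candidates, key=lambda tup: tup[1])[:k]
    return sequences

# toy example: 5 time steps over a 5-word vocabulary
data = [[0.1, 0.15, 0.2, 0.25, 0.3]] * 5
for seq, score in beam_search_decoder(data, k=3):
    print(seq, round(score, 3))
```

A greedy decoder is the special case k=1: it keeps only the single most probable word at each step.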

Read more

How to Prepare a French-to-English Dataset for Machine Translation

Last Updated on April 30, 2020 Machine translation is the challenging task of converting text from a source language into coherent and matching text in a target language. Neural machine translation systems such as encoder-decoder recurrent neural networks are achieving state-of-the-art results for machine translation with a single end-to-end system trained directly on source and target language. Standard datasets are required to develop, explore, and familiarize yourself with how to develop neural machine translation systems. In this tutorial, you will […]
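
As a taste of the kind of preparation involved, the sketch below cleans tab-separated sentence pairs; the fra.txt filename and two-column English/French layout are assumptions about a Tatoeba-style download, not necessarily the exact file used in the tutorial.

```python
# Minimal sketch: load tab-separated sentence pairs and normalize the text
# (lowercase, ASCII-fold accents, strip punctuation and non-alphabetic chars).
import re
import string
import unicodedata

def clean_pairs(lines):
    table = str.maketrans('', '', string.punctuation)
    cleaned = []
    for line in lines:
        pair = line.strip().split('\t')[:2]  # assumed: English, French columns
        clean_pair = []
        for sentence in pair:
            sentence = unicodedata.normalize('NFD', sentence)
            sentence = sentence.encode('ascii', 'ignore').decode('utf-8')
            words = sentence.lower().translate(table).split()
            words = [re.sub(r'[^a-z]', '', w) for w in words]  # letters only
            clean_pair.append(' '.join(w for w in words if w))
        cleaned.append(clean_pair)
    return cleaned

with open('fra.txt', encoding='utf-8') as f:  # assumed local dataset file
    pairs = clean_pairs(f.readlines())
print(pairs[:3])
```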

Read more

How to Develop a Neural Machine Translation System from Scratch

Last Updated on September 3, 2020 Develop a Deep Learning Model to Automatically Translate from German to English in Python with Keras, Step-by-Step. Machine translation is a challenging task that traditionally involves large statistical models developed using highly sophisticated linguistic knowledge. Neural machine translation is the use of deep neural networks for the problem of machine translation. In this tutorial, you will discover how to develop a neural machine translation system for translating German phrases to English. After completing this tutorial, […]
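
For a sense of the model involved, here is a minimal sketch of a simple encoder-decoder built from standard Keras layers; the vocabulary sizes and phrase lengths are placeholder assumptions rather than values from the tutorial.

```python
# Minimal sketch: encode the source phrase to a fixed-length vector, repeat it
# once per output time step, and decode one target word per step.
from tensorflow.keras.layers import (Input, Embedding, LSTM, RepeatVector,
                                     TimeDistributed, Dense)
from tensorflow.keras.models import Sequential

src_vocab, tgt_vocab = 3000, 2500     # illustrative vocabulary sizes
src_len, tgt_len, units = 10, 8, 256  # illustrative phrase lengths and width

model = Sequential([
    Input(shape=(src_len,)),
    Embedding(src_vocab, units, mask_zero=True),
    LSTM(units),                         # encoder: source phrase -> fixed vector
    RepeatVector(tgt_len),               # repeat the vector for each output step
    LSTM(units, return_sequences=True),  # decoder: unroll over the target length
    TimeDistributed(Dense(tgt_vocab, activation='softmax')),  # one word per step
])
model.compile(optimizer='adam', loss='sparse_categorical_crossentropy')
model.summary()
```

Training then only needs integer-encoded, padded source and target phrase arrays, since the sparse loss works directly on word indices.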

Read more