What is an RNN?

A recurrent neural network, or RNN, is a deep neural network trained on sequential or time series data to create a machine learning model that can make sequential predictions or conclusions based on sequential inputs.

An RNN might be used to predict daily flood levels based on past daily flood, tide and meteorological data. But RNNs can also be used to solve ordinal or temporal problems such as language translation, natural language processing (NLP), speech recognition, and image captioning. RNNs are incorporated into popular applications such as Siri, voice search, and Google Translate.


How RNNs work

Like feedforward and convolutional neural networks (CNNs), recurrent neural networks utilize training data to learn. They are distinguished by their “memory”, as they take information from prior inputs to influence the current input and output. While traditional deep neural networks assume that inputs and outputs are independent of each other, the output of a recurrent neural network depends on the prior elements within the sequence. While future events would also be helpful in determining the output of a given sequence, unidirectional recurrent neural networks cannot account for these events in their predictions.

Let’s take an idiom, such as “feeling under the weather”, which is commonly used when someone is ill, to aid us in the explanation of RNNs. In order for the idiom to make sense, it needs to be expressed in that specific order. As a result, recurrent networks need to account for the position of each word in the idiom and they use that information to predict the next word in the sequence.

Another distinguishing characteristic of recurrent networks is that they share parameters across each layer of the network. While feedforward networks have different weights across each node, recurrent neural networks share the same weight parameter within each layer of the network. That said, these weights are still adjusted through the processes of backpropagation and gradient descent to facilitate learning.
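
To make parameter sharing concrete, here is a minimal sketch of a vanilla RNN step in Python with NumPy. The matrix names (W_xh, W_hh, W_hy) and the sizes are illustrative assumptions, not a prescribed implementation; the point is that the same weights are reused at every position in the sequence.

```python
import numpy as np

# Illustrative sizes: 3 input features, 4 hidden units, 2 outputs.
rng = np.random.default_rng(0)
W_xh = rng.standard_normal((4, 3))    # input-to-hidden weights
W_hh = rng.standard_normal((4, 4))    # hidden-to-hidden weights (the "memory")
W_hy = rng.standard_normal((2, 4))    # hidden-to-output weights

h = np.zeros(4)                        # hidden state starts empty
for x in rng.standard_normal((5, 3)):  # a toy sequence of 5 inputs
    h = np.tanh(W_xh @ x + W_hh @ h)   # current state depends on the prior state
    y = W_hy @ h                       # output at this time step
    # W_xh, W_hh and W_hy are the same arrays at every step: shared parameters
```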

Recurrent neural networks leverage backpropagation through time (BPTT) algorithms to determine the gradients, which is slightly different from traditional backpropagation as it is specific to sequence data. The principles of BPTT are the same as traditional backpropagation, where the model trains itself by calculating errors from its output layer to its input layer. These calculations allow us to adjust and fit the parameters of the model appropriately. BPTT differs from the traditional approach in that BPTT sums errors at each time step whereas feedforward networks do not need to sum errors as they do not share parameters across each layer.
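
A hedged sketch of that summing behavior, using PyTorch autograd rather than a hand-written BPTT loop; the scalar weight and the toy per-step loss are assumptions chosen purely for illustration.

```python
import torch

W = torch.tensor(0.5, requires_grad=True)   # one shared recurrent weight
h = torch.tensor(0.0)
loss = torch.tensor(0.0)

for x in (1.0, 2.0, 3.0):                   # unroll through time
    h = torch.tanh(W * h + x)               # the same W is used at every step
    loss = loss + (h - 1.0) ** 2            # an error term at each time step

loss.backward()                             # backpropagation through time
print(W.grad)                               # one gradient, summed over all time steps
```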

Through this process, RNNs tend to run into two problems, known as exploding gradients and vanishing gradients. These issues are defined by the size of the gradient, which is the slope of the loss function along the error curve. When the gradient is too small, it continues to become smaller, updating the weight parameters until they become insignificant—i.e. 0. When that occurs, the algorithm is no longer learning. Exploding gradients occur when the gradient is too large, creating an unstable model. In this case, the model weights will grow too large, and they will eventually be represented as NaN. One solution to these issues is to reduce the number of hidden layers within the neural network, eliminating some of the complexity in the RNN model.
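
A tiny numerical illustration of why this happens: gradients that pass through many time steps are repeatedly multiplied by terms tied to the recurrent weights, so values below 1 shrink toward zero while values above 1 blow up. The weights 0.5 and 1.5 below are arbitrary.

```python
for w in (0.5, 1.5):
    grad = 1.0
    for _ in range(50):      # 50 time steps
        grad *= w            # each step multiplies the gradient again
    print(w, grad)           # 0.5 -> ~8.9e-16 (vanishes), 1.5 -> ~6.4e+08 (explodes)
```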


Types of RNNs

Feedforward networks map one input to one output, but recurrent neural networks do not actually have this constraint. Instead, their inputs and outputs can vary in length, and different types of RNNs are used for different use cases, such as music generation, sentiment classification, and machine translation, as the sketch below illustrates.
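
A minimal sketch of the distinction, reusing the toy NumPy recurrence from above: the same recurrent core can emit an output at every step (many-to-many, as in machine translation) or only after the final step (many-to-one, as in sentiment classification). The sizes and terms one-to-many, many-to-one and many-to-many are illustrative conventions.

```python
import numpy as np

rng = np.random.default_rng(1)
W_xh = rng.standard_normal((4, 3))
W_hh = rng.standard_normal((4, 4))
W_hy = rng.standard_normal((2, 4))

h, outputs = np.zeros(4), []
for x in rng.standard_normal((5, 3)):     # a sequence of 5 inputs
    h = np.tanh(W_xh @ x + W_hh @ h)
    outputs.append(W_hy @ h)              # keep an output at every step: "many-to-many"
final_output = outputs[-1]                # or read out only the last step: "many-to-one"
```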

Common activation functions

As discussed in the Learn article on Neural Networks, an activation function determines whether a neuron should be activated. The nonlinear functions typically convert the output of a given neuron to a value between 0 and 1 or -1 and 1. 
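
For reference, a minimal sketch of the two nonlinearities described above, written as plain NumPy functions.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))   # squashes values into (0, 1)

def tanh(x):
    return np.tanh(x)                 # squashes values into (-1, 1)
```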

Variant RNN architectures

Popular RNN architecture variants include:

  • Bidirectional recurrent neural networks (BRNNs)
  • Long short-term memory (LSTM)
  • Gated recurrent units (GRUs)

Bidirectional recurrent neural networks (BRNNs)

While unidirectional RNNs can only draw from previous inputs to make predictions about the current state, bidirectional RNNs, or BRNNs, also pull in future data to improve the accuracy of their predictions. Returning to the example of “feeling under the weather”, a model based on a BRNN can better predict that the second word in that phrase is “under” if it knows that the last word in the sequence is “weather.”
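
As a sketch of how this can look in practice, PyTorch's built-in RNN module accepts a bidirectional flag; the sizes below are arbitrary placeholders.

```python
import torch
import torch.nn as nn

# A bidirectional RNN reads the sequence left-to-right and right-to-left,
# then concatenates the two hidden states at every position.
brnn = nn.RNN(input_size=8, hidden_size=16, bidirectional=True, batch_first=True)
x = torch.randn(1, 5, 8)      # batch of 1, sequence length 5, 8 features per step
output, h_n = brnn(x)
print(output.shape)           # torch.Size([1, 5, 32]): 16 forward + 16 backward units
```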

Long short-term memory (LSTM)

LSTM is a popular RNN architecture, introduced by Sepp Hochreiter and Juergen Schmidhuber as a solution to the vanishing gradient problem. In their paper, they work to address the problem of long-term dependencies. That is, if the previous state that is influencing the current prediction is not in the recent past, the RNN model may not be able to accurately predict the current state.

As an example, let’s say we wanted to predict the last words in the following: “Alice is allergic to nuts. She can’t eat peanut butter.” The context of a nut allergy can help us anticipate that the food that cannot be eaten contains nuts. However, if that context came a few sentences prior, it would be difficult, or even impossible, for the RNN to connect the information.

To remedy this, LSTMs have “cells” in the hidden layers of the neural network, which have three gates: an input gate, an output gate, and a forget gate. These gates control the flow of information that is needed to predict the output in the network. For example, if a gender pronoun, such as “she”, was repeated multiple times in prior sentences, you may exclude that from the cell state.
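
A minimal single-step LSTM cell in NumPy, to show how the forget, input and output gates interact with the cell state. The stacked-parameter layout and shapes are assumptions made for brevity, not the exact formulation used by any particular library.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_cell(x, h_prev, c_prev, W, U, b):
    # W: (4*hidden, inputs), U: (4*hidden, hidden), b: (4*hidden,)
    z = W @ x + U @ h_prev + b
    f, i, o, g = np.split(z, 4)
    f, i, o = sigmoid(f), sigmoid(i), sigmoid(o)  # forget, input and output gates
    g = np.tanh(g)                                # candidate cell values
    c = f * c_prev + i * g                        # forget some memory, write some new
    h = o * np.tanh(c)                            # expose part of the cell as output
    return h, c
```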

Gated recurrent units (GRUs)


A GRU is similar to an LSTM as it also works to address the short-term memory problem of RNN models. Instead of using a “cell state” to regulate information, it uses hidden states, and instead of three gates, it has two: a reset gate and an update gate. Similar to the gates within LSTMs, the reset and update gates control how much and which information to retain.
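
For comparison, a similar single-step sketch of a GRU, again with illustrative shapes; note there is no separate cell state, only a hidden state gated by the reset and update gates.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gru_cell(x, h_prev, W, U, b, W_h, U_h, b_h):
    # W: (2*hidden, inputs), U: (2*hidden, hidden), b: (2*hidden,)
    z, r = np.split(sigmoid(W @ x + U @ h_prev + b), 2)    # update and reset gates
    h_cand = np.tanh(W_h @ x + U_h @ (r * h_prev) + b_h)   # candidate hidden state
    return (1 - z) * h_prev + z * h_cand                   # blend old and new state
```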

Recurrent neural networks and IBM Cloud

For decades now, IBM has been a pioneer in the development of AI technologies and neural networks, highlighted by the development and evolution of IBM Watson. Watson is now a trusted solution for enterprises looking to apply advanced natural language processing and deep learning techniques to their systems using a proven tiered approach to AI adoption and implementation.

IBM products, such as IBM Watson Machine Learning, also support popular Python libraries, such as TensorFlow, Keras, and PyTorch, which are commonly used in recurrent neural networks. Utilizing tools like IBM Watson Studio and Watson Machine Learning, your enterprise can seamlessly bring your open-source AI projects into production while deploying and running your models on any cloud.

For more information on how to get started with artificial intelligence technology, explore IBM Watson Studio.
