Convolutional LSTM for spatial forecasting

This post is the first in a loose series exploring forecasting of spatially-determined data over time. By spatially-determined I mean that whatever the quantities we're trying to predict – be they univariate or multivariate time series, of spatial dimensionality or not – the input data are given on a spatial grid.

For example, the input could be atmospheric measurements, such as sea surface temperature or pressure, given at some set of latitudes and longitudes. The target to be predicted could then span that same (or another) grid. Alternatively, it could be a univariate time series, like a meteorological index.

But wait a second, you may be thinking. For time-series prediction, we have that time-honored set of recurrent architectures (e.g., LSTM, GRU), right? Right. We do; but, once we feed spatial data to an RNN, treating different locations as different input features, we lose an essential structural relationship. Importantly, we need to operate in both space and time. We want both: recurrence relations and convolutional filters. Enter convolutional RNNs.

What to expect from this post

Today, we won't jump into real-world applications just yet. Instead, we'll take our time to build a convolutional LSTM (henceforth: convLSTM) in torch. For one, we have to – there is no official PyTorch implementation.

What's more, this post can serve as an introduction to building your own modules. This is something you may be familiar with from Keras or not – depending on whether you've used custom models or rather, preferred the declarative define -> compile -> fit style. (Yes, I'm implying there's some transfer going on if one comes to torch from Keras custom training. Syntactic and semantic details may be different, but both share the object-oriented style that allows for great flexibility and control.)

Last but not least, we'll also use this as a hands-on experience with RNN architectures (the LSTM, specifically). While the general concept of recurrence may be easy to grasp, it is not necessarily self-evident how these architectures should, or could, be coded. Personally, I find that independent of the framework used, RNN-related documentation leaves me confused. What exactly is being returned from calling an LSTM, or a GRU? (In Keras this depends on how you've defined the layer in question.) I suspect that once we've decided what we want to return, the actual code won't be that complicated. Consequently, we'll take a detour clarifying what it is that torch and Keras are giving us. Implementing our convLSTM will be a lot more straightforward thereafter.

A torch convLSTM

The code discussed here may be found on GitHub. (Depending on when you're reading this, the code in that repository may have evolved though.)

My starting point was one of the PyTorch implementations found on the net, namely, this one. If you search for "PyTorch convGRU" or "PyTorch convLSTM", you will find striking discrepancies in how these are realized – discrepancies not just in syntax and/or engineering ambition, but at the semantic level, right at the center of what the architectures may be expected to do. As they say, let the buyer beware. (Regarding the implementation I ended up porting, I am confident that while numerous optimizations would be possible, the basic mechanism matches my expectations.)

What do I expect? Let's approach this task in a top-down way.

Input and output

The convLSTM's input will be a time series of spatial data, each observation being of size (time steps, channels, height, width).

Compare this with the usual RNN input format, be it in torch or Keras. In both frameworks, RNNs expect tensors of size (timesteps, input_dim). input_dim is 1 for univariate time series and greater than 1 for multivariate ones. Conceptually, we may match this to convLSTM's channels dimension: There could be a single channel, for temperature, say – or there could be several, such as for pressure, temperature, and humidity. The two additional dimensions found in convLSTM, height and width, are spatial indexes into the data.

In sum, we want to be able to pass data that:

  • consist of one or more features,

  • evolve in time, and

  • are indexed in two spatial dimensions.

How about the output? We want to be able to return forecasts for as many time steps as we have in the input sequence. This is something that torch RNNs do by default, while Keras equivalents do not. (You have to pass return_sequences = TRUE to obtain that effect.) If we're interested in predictions for just a single point in time, we can always pick the last time step in the output tensor, as sketched below.
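For instance, in torch (a minimal sketch; the tensor here is just a stand-in for actual RNN output of shape (batch_size, timesteps, features)):

library(torch)

# hypothetical per-time-step outputs: (batch_size, timesteps, features)
outputs <- torch_randn(c(8, 10, 1))

# keep just the forecast for the final time step
last_step <- outputs[ , dim(outputs)[2], ]
dim(last_step)
# [1] 8 1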

However, with RNNs, it is not all about outputs. RNN architectures also carry through hidden states.

What are hidden states? I carefully phrased that sentence to be as general as possible – deliberately circling around the confusion that, in my opinion, often arises at this point. We'll attempt to clear up some of that confusion in a second, but let's first finish our high-level requirements specification.

We want our convLSTM to be usable in different contexts and applications. Various architectures exist that make use of hidden states, most prominently perhaps, encoder-decoder architectures. Thus, we want our convLSTM to return those as well. Again, this is something a torch LSTM does by default, while in Keras it is achieved using return_state = TRUE.

Now though, it really is time for that interlude. We'll sort out the ways things are called by both torch and Keras, and inspect what you get back from their respective GRUs and LSTMs.

Interlude: Outputs, states, hidden values … what’s what?

For this to remain an interlude, I summarize findings on a high level. The code snippets in the appendix show how to arrive at these results. Heavily commented, they probe return values from both Keras and torch GRUs and LSTMs. Running these will make the upcoming summaries seem a lot less abstract.

First, let's look at the ways you create an LSTM in both frameworks. (I'll generally use the LSTM as the "prototypical RNN example", and just mention GRUs when there are differences significant in the context in question.)

In Keras, to create an LSTM you may write something like this:

lstm <- layer_lstm(units = 1)

The torch equivalent would be:

lstm <- nn_lstm(
  input_size = 2, # number of input features
  hidden_size = 1 # number of hidden (and output!) features
)

Don't focus on torch's input_size parameter for this discussion. (It's the number of features in the input tensor.) The parallel occurs between Keras' units and torch's hidden_size. If you've been using Keras, you're probably thinking of units as the thing that determines output size (equivalently, the number of features in the output). So when torch lets us arrive at the same result using hidden_size, what does that mean? It means that somehow we're specifying the same thing, using different terminology. And it does make sense, since at every time step, current input and previous hidden state are combined:

\[
\mathbf{h}_t = \mathbf{W}_{x}\mathbf{x}_t + \mathbf{W}_{h}\mathbf{h}_{t-1}
\]
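To make the parallel concrete, here is a quick shape check (a sketch; values are random, and batch_first = TRUE is passed on the torch side to mirror Keras' layout):

library(torch)

x <- torch_randn(c(3, 4, 2)) # (batch, timesteps, input features)
lstm <- nn_lstm(input_size = 2, hidden_size = 1, batch_first = TRUE)
dim(lstm(x)[[1]])
# [1] 3 4 1 - the number of output features equals hidden_size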

Now, about these hidden states.

When a Keras LSTM is defined with return_state = TRUE, its return value is a structure of three entities called output, memory state, and carry state. In torch, the same entities are referred to as output, hidden state, and cell state. (In torch, we always get all of them.)

So are we dealing with three different types of entities? We are not.

The cell, or carry state is that special thing that sets LSTMs apart from GRUs, deemed responsible for the "long" in "long short-term memory". Technically, it could be reported to the user at all points in time; as we'll see shortly though, it is not.

What about outputs and hidden, or memory states? Confusingly, these really are the same thing. Recall that for each item in the input sequence, we're combining it with the previous state, resulting in a new state, to be made use of in the next step:

\[
\mathbf{h}_t = \mathbf{W}_{x}\mathbf{x}_t + \mathbf{W}_{h}\mathbf{h}_{t-1}
\]

Now, say we're interested in looking at just the final time step – that is, the default output of a Keras LSTM. From that perspective, we can consider those intermediate computations as "hidden". Seen like that, output and hidden states feel different.

However, we can also request to see the outputs for every time step. If we do so, there is no difference – the outputs (plural) equal the hidden states. This can be verified using the code in the appendix, or the short check below.
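In torch, for instance, the check boils down to a few lines (a minimal sketch; the appendix has the fully commented version):

library(torch)

lstm <- nn_lstm(input_size = 1, hidden_size = 1, batch_first = TRUE)
res <- lstm(torch_randn(c(3, 4, 1)))

out <- res[[1]]      # per-time-step outputs: (batch, timesteps, hidden_size)
h <- res[[2]][[1]]   # final hidden state: (num_layers, batch, hidden_size)

torch_allclose(out[ , 4, ], h[1, , ])
# [1] TRUE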

Thus, of the three things returned by an LSTM, two are really the same. How about the GRU, then? As there is no "cell state", we really have just one type of thing left over – call it outputs or hidden states.

Let's summarize this in a table.

Table 1: RNN terminology. Comparing torch-speak and Keras-speak. In row 1, the terms are parameter names. In rows 2 and 3, they are quoted from current documentation.

(1) Number of features in the output. This determines both how many output features there are and the dimensionality of the hidden states.
    torch: hidden_size | Keras: units

(2) Per-time-step output; latent state; intermediate state … Could be called "public state" in the sense that we, the users, are able to obtain all values.
    torch: hidden state | Keras: memory state

(3) Cell state; inner state … (LSTM only). Could be called "private state" in that we are able to obtain a value only for the last time step. More on that in a second.
    torch: cell state | Keras: carry state

Now, about that public vs. private distinction. In both frameworks, we can obtain outputs (hidden states) for every time step. The cell state, however, we can access only for the last time step. This is purely an implementation decision. As we'll see when building our own recurrent module, there are no obstacles inherent in keeping track of cell states and passing them back to the user.
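To illustrate, here is what such bookkeeping could look like with torch's built-in nn_lstm_cell (a sketch under the assumption of a single layer with a single feature; all names are mine):

library(torch)
library(zeallot)

cell <- nn_lstm_cell(input_size = 1, hidden_size = 1)

x <- torch_randn(c(3, 4, 1)) # (batch, timesteps, features)
h <- torch_zeros(3, 1)
c_state <- torch_zeros(3, 1)

cell_states <- vector(mode = "list", length = 4)
for (t in 1:4) {
  c(h, c_state) %<-% cell(x[ , t, ], list(h, c_state))
  cell_states[[t]] <- c_state # the "private" state, captured at every step
}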

If you dislike the pragmatism of this distinction, you can always go with the math. When a new cell state has been computed (based on prior cell state, input, forget, and cell gates – the specifics of which we are not going to get into here), it is transformed to the hidden (a.k.a. output) state by applying yet another gate, namely, the output gate:

\[
h_t = o_t \odot \tanh(c_t)
\]

Undoubtedly then, hidden state (output, resp.) builds on cell state, adding additional modeling power.

Now it is time to get back to our original goal and build that convLSTM. First though, let's summarize the return values obtainable from torch and Keras.

Table 2: Contrasting ways of obtaining various return values in torch vs. Keras. Cf. the appendix for complete examples.

(1) Access all intermediate outputs (= per-time-step outputs)
    torch: ret[[1]] | Keras: return_sequences = TRUE

(2) Access both "hidden state" (output) and "cell state" from the final time step (only!)
    torch: ret[[2]] | Keras: return_state = TRUE

(3) Access all intermediate outputs, plus the final "cell state"
    torch: both of the above | Keras: return_sequences = TRUE, return_state = TRUE

(4) Access all intermediate outputs, plus "cell states" from all time steps
    torch: no way | Keras: no way

convLSTM, the plan

In both torch and Keras RNN architectures, single time steps are processed by corresponding Cell classes: There is an LSTM Cell matching the LSTM, a GRU Cell matching the GRU, and so on. We do the same for ConvLSTM. In convlstm_cell(), we first define what should happen to a single observation; then in convlstm(), we build up the recurrence logic.

Once we're done, we create a dummy dataset, as reduced-to-the-essentials as can be. With more complex datasets, even artificial ones, chances are that if we don't see any training progress, there are hundreds of possible explanations. We want a sanity check that, if failed, leaves no excuses. Realistic applications are left to future posts.

A single step: convlstm_cell

Our convlstm_cell's constructor takes arguments input_dim, hidden_dim, and bias, just like a torch LSTM Cell.

But we're processing two-dimensional input data. Instead of the usual affine combination of new input and previous state, we use a convolution of kernel size kernel_size. Inside convlstm_cell, it is self$conv that takes care of this.

Note how the channels dimension, which in the original input data would correspond to different variables, is creatively used to consolidate four convolutions into one: Each channel output will be passed to just one of the four cell gates. Once in possession of the convolution output, forward() applies the gate logic, resulting in the two types of states it needs to send back to the caller.

library(torch)
library(zeallot)

convlstm_cell <- nn_module(
  
  initialize = function(input_dim, hidden_dim, kernel_size, bias) {
    
    self$hidden_dim <- hidden_dim
    
    padding <- kernel_size %/% 2
    
    self$conv <- nn_conv2d(
      in_channels = input_dim + self$hidden_dim,
      # for each of input, forget, output, and cell gates
      out_channels = 4 * self$hidden_dim,
      kernel_size = kernel_size,
      padding = padding,
      bias = bias
    )
  },
  
  forward = function(x, prev_states) {

    c(h_prev, c_prev) %<-% prev_states
    
    combined <- torch_cat(list(x, h_prev), dim = 2)  # concatenate along channel axis
    combined_conv <- self$conv(combined)
    c(cc_i, cc_f, cc_o, cc_g) %<-% torch_split(combined_conv, self$hidden_dim, dim = 2)
    
    # input, forget, output, and cell gates (corresponding to torch's LSTM)
    i <- torch_sigmoid(cc_i)
    f <- torch_sigmoid(cc_f)
    o <- torch_sigmoid(cc_o)
    g <- torch_tanh(cc_g)
    
    # cell state
    c_next <- f * c_prev + i * g
    # hidden state
    h_next <- o * torch_tanh(c_next)
    
    list(h_next, c_next)
  },
  
  init_hidden = function(batch_size, height, width) {
    
    list(
      torch_zeros(batch_size, self$hidden_dim, height, width, device = self$conv$weight$device),
      torch_zeros(batch_size, self$hidden_dim, height, width, device = self$conv$weight$device))
  }
)

Now convlstm_cell has to be called for every time step. This is done by convlstm.
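Before wiring up that loop, we can quickly exercise the cell in isolation (a sketch; shapes are arbitrary):

cell <- convlstm_cell(input_dim = 3, hidden_dim = 5, kernel_size = 3, bias = TRUE)

x_t <- torch_randn(c(2, 3, 16, 16)) # a single time step: (batch, channels, height, width)
state <- cell$init_hidden(batch_size = 2, height = 16, width = 16)

c(h, c_state) %<-% cell(x_t, state)
dim(h)
# [1]  2  5 16 16 - one feature map per hidden dimension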

Iteration over time steps: convlstm

A convlstm may consist of several layers, just like a torch LSTM. For each layer, we are able to specify hidden and kernel sizes individually.

During initialization, each layer gets its own convlstm_cell. On call, convlstm executes two loops. The outer one iterates over layers. At the end of each iteration, we store the final pair (hidden state, cell state) for later reporting. The inner loop runs over the input sequence, calling convlstm_cell at each time step.

We also keep track of intermediate outputs, so we'll be able to return the complete list of hidden_states seen during the process. Unlike a torch LSTM, we do this for every layer.

convlstm <- nn_module(
  
  # hidden_dims and kernel_sizes are vectors, with one element for each layer in n_layers
  initialize = function(input_dim, hidden_dims, kernel_sizes, n_layers, bias = TRUE) {
 
    self$n_layers <- n_layers
    
    self$cell_list <- nn_module_list()
    
    for (i in 1:n_layers) {
      cur_input_dim <- if (i == 1) input_dim else hidden_dims[i - 1]
      self$cell_list$append(convlstm_cell(cur_input_dim, hidden_dims[i], kernel_sizes[i], bias))
    }
  },
  
  # we always assume batch-first
  forward = function(x) {
    
    c(batch_size, seq_len, num_channels, height, width) %<-% x$size()
   
    # initialize hidden states
    init_hidden <- vector(mode = "list", length = self$n_layers)
    for (i in 1:self$n_layers) {
      init_hidden[[i]] <- self$cell_list[[i]]$init_hidden(batch_size, height, width)
    }
    
    # list containing the outputs, of length seq_len, for each layer
    # this is the same as h, at each step in the sequence
    layer_output_list <- vector(mode = "list", length = self$n_layers)
    
    # list containing the last states (h, c) for each layer
    layer_state_list <- vector(mode = "list", length = self$n_layers)

    cur_layer_input <- x
    hidden_states <- init_hidden
    
    # loop over layers
    for (i in 1:self$n_layers) {
      
      # every layer's hidden state starts from 0 (non-stateful)
      c(h, c) %<-% hidden_states[[i]]
      # outputs, of length seq_len, for this layer
      # equivalently, list of h states for each time step
      output_sequence <- vector(mode = "list", length = seq_len)
      
      # loop over time steps
      for (t in 1:seq_len) {
        c(h, c) %<-% self$cell_list[[i]](cur_layer_input[ , t, , , ], list(h, c))
        # keep track of the output (h) for every time step
        # h has dim (batch_size, hidden_size, height, width)
        output_sequence[[t]] <- h
      }

      # stack hs for all time steps over the seq_len dimension
      # stacked_outputs has dim (batch_size, seq_len, hidden_size, height, width)
      # same as the input to forward (x)
      stacked_outputs <- torch_stack(output_sequence, dim = 2)
      
      # pass the list of outputs (hs) to the next layer
      cur_layer_input <- stacked_outputs
      
      # keep track of the list of outputs for this layer
      layer_output_list[[i]] <- stacked_outputs
      # keep track of the last state for this layer
      layer_state_list[[i]] <- list(h, c)
    }
 
    list(layer_output_list, layer_state_list)
  }
    
)

Calling the convlstm

Let's see the input format expected by convlstm, and how to access its different outputs.

Here is a suitable input tensor.

# batch_size, seq_len, channels, height, width
x <- torch_rand(c(2, 4, 3, 16, 16))

First we make use of a single layer.

model <- convlstm(input_dim = 3, hidden_dims = 5, kernel_sizes = 3, n_layers = 1)

c(layer_outputs, layer_last_states) %<-% model(x)

We get back a list of length two, which we immediately split up into the two types of output returned: intermediate outputs from all layers, and final states (of both types) for the last layer.

With just a single layer, layer_outputs[[1]] holds all of the layer's intermediate outputs, stacked on dimension two.

dim(layer_outputs[[1]])
# [1]  2  4  5 16 16

layer_last_states[[1]] is a list of tensors, the first of which holds the single layer's final hidden state, and the second, its final cell state.

dim(layer_last_states[[1]][[1]])
# [1]  2  5 16 16
dim(layer_last_states[[1]][[2]])
# [1]  2  5 16 16

For comparison, this is how the return values look for a multi-layer architecture.

model <- convlstm(input_dim = 3, hidden_dims = c(5, 5, 1), kernel_sizes = rep(3, 3), n_layers = 3)
c(layer_outputs, layer_last_states) %<-% model(x)

# for each layer, a tensor of size (batch_size, seq_len, hidden_size, height, width)
dim(layer_outputs[[1]])
# 2  4  5 16 16
dim(layer_outputs[[3]])
# 2  4  1 16 16

# list of 2 tensors for each layer
str(layer_last_states)
# List of 3
#  $ :List of 2
#   ..$ :Float [1:2, 1:5, 1:16, 1:16]
#   ..$ :Float [1:2, 1:5, 1:16, 1:16]
#  $ :List of 2
#   ..$ :Float [1:2, 1:5, 1:16, 1:16]
#   ..$ :Float [1:2, 1:5, 1:16, 1:16]
#  $ :List of 2
#   ..$ :Float [1:2, 1:1, 1:16, 1:16]
#   ..$ :Float [1:2, 1:1, 1:16, 1:16]

# h, of size (batch_size, hidden_size, height, width)
dim(layer_last_states[[3]][[1]])
# 2  1 16 16

# c, of size (batch_size, hidden_size, height, width)
dim(layer_last_states[[3]][[2]])
# 2  1 16 16

Now we want to sanity-check this module with the simplest-possible dummy data.

Sanity-checking the convlstm

We generate black-and-white "movies" of diagonal beams successively translated in space.

Each sequence consists of six time steps, and each beam of six pixels. Just a single sequence is created manually. To create that one sequence, we start from a single beam:

library(torchvision)

beams <- vector(mode = "list", length = 6)
beam <- torch_eye(6) %>% nnf_pad(c(6, 12, 12, 6)) # left, right, top, bottom
beams[[1]] <- beam

Using torch_roll(), we create a pattern where this beam moves up diagonally, and stack the individual tensors along the timesteps dimension.

for (i in 2:6) {
  beams[[i]] <- torch_roll(beam, c(-(i-1),i-1), c(1, 2))
}

init_sequence <- torch_stack(beams, dim = 1)

That's a single sequence. Thanks to torchvision::transform_random_affine(), we almost effortlessly produce a dataset of a hundred sequences. Moving beams start at random points in the spatial frame, but they all share that upward-diagonal motion.

sequences <- vector(mode = "list", length = 100)
sequences[[1]] <- init_sequence

for (i in 2:100) {
  sequences[[i]] <- transform_random_affine(init_sequence, degrees = 0, translate = c(0.5, 0.5))
}

input <- torch_stack(sequences, dim = 1)

# add channels dimension
input <- input$unsqueeze(3)
dim(input)
# [1] 100   6  1  24  24

That's it for the raw data. Now we still need a dataset and a dataloader. Of the six time steps, we use the first five as input and try to predict the last one.

dummy_ds <- dataset(
  
  initialize = function(data) {
    self$data <- data
  },
  
  .getitem = function(i) {
    list(x = self$data[i, 1:5, ..], y = self$data[i, 6, ..])
  },
  
  .length = function() {
    nrow(self$data)
  }
)

ds <- dummy_ds(input)
dl <- dataloader(ds, batch_size = 100)
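To verify shapes before training, we can peek at a single batch (a sketch using torch's iteration helpers):

b <- dl %>% dataloader_make_iter() %>% dataloader_next()
dim(b$x)
# [1] 100   5   1  24  24
dim(b$y)
# [1] 100   1  24  24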

Here is a tiny-ish convLSTM, trained for motion prediction:

model <- convlstm(input_dim = 1, hidden_dims = c(64, 1), kernel_sizes = c(3, 3), n_layers = 2)

optimizer <- optim_adam(model$parameters)

num_epochs <- 100

for (epoch in 1:num_epochs) {
  
  model$train()
  batch_losses <- c()
  
  for (b in enumerate(dl)) {
    
    optimizer$zero_grad()
    
    # final hidden state (= last-time-step output) from the last layer
    preds <- model(b$x)[[2]][[2]][[1]]
  
    loss <- nnf_mse_loss(preds, b$y)
    batch_losses <- c(batch_losses, loss$item())
    
    loss$backward()
    optimizer$step()
  }
  
  if (epoch %% 10 == 0)
    cat(sprintf("\nEpoch %d, training loss: %3f\n", epoch, mean(batch_losses)))
}
Epoch 10, training loss: 0.008522

Epoch 20, training loss: 0.008079

Epoch 30, training loss: 0.006187

Epoch 40, training loss: 0.003828

Epoch 50, training loss: 0.002322

Epoch 60, training loss: 0.001594

Epoch 70, training loss: 0.001376

Epoch 80, training loss: 0.001258

Epoch 90, training loss: 0.001218

Epoch 100, training loss: 0.001171

Loss decreases, but that in itself is no guarantee the model has learned anything. Has it? Let's inspect its forecast for the very first sequence and see.
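Here is how such a forecast could be obtained (a sketch; the 10 x 10 zoom window used below for printing is chosen by hand):

model$eval()

# first sequence, first five time steps; restore the batch dimension
x1 <- input[1, 1:5, ..]$unsqueeze(1)

# final hidden state of the last layer: shape (1, 1, 24, 24)
pred <- with_no_grad(model(x1)[[2]][[2]][[1]])

round(as.array(pred[1, 1, 6:15, 6:15]), 2)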

For printing, I'm zooming in on the relevant region in the 24x24-pixel frame. Here is the ground truth for time step six:

0  0  0  0  0  0  0  0  0  0
0  0  0  0  0  0  0  0  0  0
0  0  1  0  0  0  0  0  0  0
0  0  0  1  0  0  0  0  0  0
0  0  0  0  1  0  0  0  0  0
0  0  0  0  0  1  0  0  0  0
0  0  0  0  0  0  1  0  0  0
0  0  0  0  0  0  0  1  0  0
0  0  0  0  0  0  0  0  0  0
0  0  0  0  0  0  0  0  0  0

And here is the forecast. This does not look bad at all, given there was neither experimentation nor tuning involved.

       [,1]  [,2]  [,3]  [,4]  [,5]  [,6]  [,7]  [,8]  [,9] [,10]
 [1,]  0.00  0.00  0.00  0.00  0.00  0.00  0.00  0.00  0.00     0
 [2,] -0.02  0.36  0.01  0.06  0.00  0.00  0.00  0.00  0.00     0
 [3,]  0.00 -0.01  0.71  0.01  0.06  0.00  0.00  0.00  0.00     0
 [4,] -0.01  0.04  0.00  0.75  0.01  0.06  0.00  0.00  0.00     0
 [5,]  0.00 -0.01 -0.01 -0.01  0.75  0.01  0.06  0.00  0.00     0
 [6,]  0.00  0.01  0.00 -0.07 -0.01  0.75  0.01  0.06  0.00     0
 [7,]  0.00  0.01 -0.01 -0.01 -0.07 -0.01  0.75  0.01  0.06     0
 [8,]  0.00  0.00  0.01  0.00  0.00 -0.01  0.00  0.71  0.00     0
 [9,]  0.00  0.00  0.00  0.01  0.01  0.00  0.03 -0.01  0.37     0
[10,]  0.00  0.00  0.00  0.00  0.00  0.00 -0.01 -0.01 -0.01     0

This should suffice for a sanity check. If you made it till the end, thanks for your patience! In the best case, you'll be able to apply this architecture (or a similar one) to your own data – but even if not, I hope you've enjoyed learning about torch model coding and/or RNN weirdness 😉

I, for one, am certainly looking forward to exploring convLSTMs on real-world problems in the near future. Thanks for reading!

Appendix

This appendix contains the code used to create tables 1 and 2 above.

Keras

LSTM

library(keras)

# batch of 3, with 4 time steps each and a single feature
input <- k_random_normal(shape = c(3L, 4L, 1L))
input

# default args
# return shape = (batch_size, units)
lstm <- layer_lstm(
  units = 1,
  kernel_initializer = initializer_constant(value = 1),
  recurrent_initializer = initializer_constant(value = 1)
)
lstm(input)

# return_sequences = TRUE
# return shape = (batch_size, timesteps, units)
#
# note how for each item in the batch, the value for time step 4 equals that obtained above
lstm <- layer_lstm(
  units = 1,
  return_sequences = TRUE,
  kernel_initializer = initializer_constant(value = 1),
  recurrent_initializer = initializer_constant(value = 1)
  # bias is by default initialized to 0
)
lstm(input)

# return_state = TRUE
# return shape = list of:
#                - outputs, of shape: (batch_size, units)
#                - "memory states" for the last time step, of shape: (batch_size, units)
#                - "carry states" for the last time step, of shape: (batch_size, units)
#
# note how the first and second list items are identical!
lstm <- layer_lstm(
  units = 1,
  return_state = TRUE,
  kernel_initializer = initializer_constant(value = 1),
  recurrent_initializer = initializer_constant(value = 1)
)
lstm(input)

# return_state = TRUE, return_sequences = TRUE
# return shape = list of:
#                - outputs, of shape: (batch_size, timesteps, units)
#                - "memory states" for the last time step, of shape: (batch_size, units)
#                - "carry states" for the last time step, of shape: (batch_size, units)
#
# note how again, the "memory state" found in list item 2 matches the final-time-step outputs reported in item 1
lstm <- layer_lstm(
  units = 1,
  return_sequences = TRUE,
  return_state = TRUE,
  kernel_initializer = initializer_constant(value = 1),
  recurrent_initializer = initializer_constant(value = 1)
)
lstm(input)

GRU

# default args
# return shape = (batch_size, units)
gru <- layer_gru(
  units = 1,
  kernel_initializer = initializer_constant(value = 1),
  recurrent_initializer = initializer_constant(value = 1)
)
gru(input)

# return_sequences = TRUE
# return shape = (batch_size, timesteps, units)
#
# note how for each item in the batch, the value for time step 4 equals that obtained above
gru <- layer_gru(
  units = 1,
  return_sequences = TRUE,
  kernel_initializer = initializer_constant(value = 1),
  recurrent_initializer = initializer_constant(value = 1)
)
gru(input)

# return_state = TRUE
# return shape = list of:
#    - outputs, of shape: (batch_size, units)
#    - "memory states" for the last time step, of shape: (batch_size, units)
#
# note how the list items are identical!
gru <- layer_gru(
  units = 1,
  return_state = TRUE,
  kernel_initializer = initializer_constant(value = 1),
  recurrent_initializer = initializer_constant(value = 1)
)
gru(input)

# return_state = TRUE, return_sequences = TRUE
# return shape = list of:
#    - outputs, of shape: (batch_size, timesteps, units)
#    - "memory states" for the last time step, of shape: (batch_size, units)
#
# note how again, the "memory state" found in list item 2 matches the final-time-step outputs reported in item 1
gru <- layer_gru(
  units = 1,
  return_sequences = TRUE,
  return_state = TRUE,
  kernel_initializer = initializer_constant(value = 1),
  recurrent_initializer = initializer_constant(value = 1)
)
gru(input)

torch

LSTM (non-stacked architecture)

library(torch)

# batch of 3, with 4 time steps each and a single feature
# we will specify batch_first = TRUE when creating the LSTM
input <- torch_randn(c(3, 4, 1))
input

# default args
#
# note: there is an additional argument num_layers that we could use to specify a stacked LSTM - effectively composing two LSTM modules
# default for num_layers is 1 though 
lstm <- nn_lstm(
  input_size = 1, # number of input features
  hidden_size = 1, # number of hidden (and output!) features
  batch_first = TRUE # for easy comparability with Keras
)

nn_init_constant_(lstm$weight_ih_l1, 1)
nn_init_constant_(lstm$weight_hh_l1, 1)
nn_init_constant_(lstm$bias_ih_l1, 0)
nn_init_constant_(lstm$bias_hh_l1, 0)

# returns a list of length 2, namely
#   - outputs, of shape (batch_size, timesteps, hidden_size) - given we specified batch_first
#       Note 1: If this were a stacked LSTM, these would be the outputs from the last layer only.
#               For our current purpose, this is irrelevant, as we're restricting ourselves to single-layer LSTMs.
#       Note 2: hidden_size here is equivalent to units in Keras - both specify the number of features
#   - list of:
#     - hidden state for the last time step, of shape (num_layers, batch_size, hidden_size)
#     - cell state for the last time step, of shape (num_layers, batch_size, hidden_size)
#       Note 3: For a single-layer LSTM, the hidden states are already present in the first list item.

lstm(input)

GRU (non-stacked architecture)

# default args
#
# note: there is an additional argument num_layers that we could use to specify a stacked GRU - effectively composing two GRU modules
# default for num_layers is 1 though 
gru <- nn_gru(
  input_size = 1, # number of input features
  hidden_size = 1, # number of hidden (and output!) features
  batch_first = TRUE # for easy comparability with Keras
)

nn_init_constant_(gru$weight_ih_l1, 1)
nn_init_constant_(gru$weight_hh_l1, 1)
nn_init_constant_(gru$bias_ih_l1, 0)
nn_init_constant_(gru$bias_hh_l1, 0)

# returns a list of length 2, namely
#   - outputs, of shape (batch_size, timesteps, hidden_size) - given we specified batch_first
#       Note 1: If this were a stacked GRU, these would be the outputs from the last layer only.
#               For our current purpose, this is irrelevant, as we're restricting ourselves to single-layer GRUs.
#       Note 2: hidden_size here is equivalent to units in Keras - both specify the number of features
#   - hidden state for the last time step, of shape (num_layers, batch_size, hidden_size)
#       Note 3: There is no cell state - for a GRU, the hidden state is all there is, and its per-step values are already present in the first list item.
gru(input)
