# theanets.layers.recurrent.RRNN¶

class theanets.layers.recurrent.RRNN(rate='matrix', **kwargs)[source]

An RNN with an update rate for each unit.

Parameters

- rate : str, optional. Controls how rates are represented in the layer:
  - 'matrix' (the default): rates are computed as a function of the input at each time step.
  - 'vector': rates are represented as a single vector of learnable rates.
  - 'uniform': rates are chosen uniformly at random from the open interval (0, 1).
  - 'log': rates are chosen randomly from a log-uniform distribution, so that few rates are near 0 and many rates are near 1.

Notes

In a normal RNN, a hidden unit is updated completely at each time step: $h_t = f(x_t, h_{t-1})$. With an explicit update rate, the state of a hidden unit is computed as a mixture of the new and old values,

$h_t = (1 - z_t) \odot h_{t-1} + z_t \odot f(x_t, h_{t-1})$

where $\odot$ indicates elementwise multiplication.

Rates can be defined in a number of ways, spanning a continuum: vanilla RNNs (all rate parameters effectively fixed at 1), fixed but non-uniform rates for each hidden unit [Ben12], parametric rates that depend only on the input, and parametric rates computed as a function of both the inputs and the hidden state at each time step (something more like a gated recurrent unit).
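The rate-mixed update above can be sketched in plain NumPy (a minimal sketch with a fixed per-unit rate vector, in the spirit of the 'vector' setting; the function and variable names are illustrative, not theanets internals):

```python
import numpy as np

def rrnn_step(x_t, h_prev, W_xh, W_hh, b, z):
    """One rate-mixed update: h_t = (1 - z) * h_{t-1} + z * f(x_t, h_{t-1}).

    Here f is tanh of the usual affine recurrence, and z is a per-unit
    rate in (0, 1). z = 1 recovers a vanilla RNN; z = 0 freezes the state.
    """
    h_new = np.tanh(x_t @ W_xh + h_prev @ W_hh + b)  # candidate state f(x_t, h_{t-1})
    return (1.0 - z) * h_prev + z * h_new            # elementwise mixture

rng = np.random.default_rng(0)
n_in, n_hid = 3, 4
x_t = rng.normal(size=n_in)
h_prev = rng.normal(size=n_hid)
W_xh = rng.normal(size=(n_in, n_hid))
W_hh = rng.normal(size=(n_hid, n_hid))
b = np.zeros(n_hid)

# Rates of 1 reduce to a plain RNN step; rates of 0 leave the state untouched.
h_vanilla = rrnn_step(x_t, h_prev, W_xh, W_hh, b, z=np.ones(n_hid))
h_frozen = rrnn_step(x_t, h_prev, W_xh, W_hh, b, z=np.zeros(n_hid))
```

Intermediate rates interpolate between these two extremes, which is what lets slow-changing units act as longer-term memory.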

This class represents rates in different ways depending on the value of the rate parameter at initialization.

Parameters

• b — vector of bias values for each hidden unit
• xh — matrix connecting inputs to hidden units
• hh — matrix connecting hiddens to hiddens

If rate is initialized to the string 'vector', we define:

• r — vector of rates for each hidden unit

If rate is initialized to 'matrix' (the default), we define:

• r — vector of rate bias values for each hidden unit
• xr — matrix connecting inputs to rate values for each hidden unit
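In this input-dependent mode, the rate for each unit at each time step is a learned function of the current input. A minimal NumPy sketch, where xr and r mirror the parameter names above and the logistic sigmoid is an assumption used here to keep every rate inside the open interval (0, 1):

```python
import numpy as np

def input_dependent_rates(x_t, xr, r):
    """Per-unit rates computed from the current input: z_t = sigmoid(x_t @ xr + r).

    xr connects inputs to rate values and r is the rate bias, mirroring the
    parameters named above; the sigmoid squashing is an illustrative assumption.
    """
    return 1.0 / (1.0 + np.exp(-(x_t @ xr + r)))

rng = np.random.default_rng(1)
x_t = rng.normal(size=3)
xr = rng.normal(size=(3, 4))
r = np.zeros(4)
z_t = input_dependent_rates(x_t, xr, r)  # one rate per hidden unit, each in (0, 1)
```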

Outputs

• out — the post-activation state of the layer
• pre — the pre-activation state of the layer
• hid — the pre-rate-mixing hidden state
• rate — the rate values
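The relationship among these four outputs can be seen by unrolling the recurrence over a sequence (a simplified NumPy sketch with a fixed rate vector, not the Theano implementation):

```python
import numpy as np

def rrnn_unroll(xs, W_xh, W_hh, b, z):
    """Collect 'pre', 'hid', 'rate', and 'out' at each step of a sequence."""
    h = np.zeros(W_hh.shape[0])
    outs = {'pre': [], 'hid': [], 'rate': [], 'out': []}
    for x_t in xs:
        pre = x_t @ W_xh + h @ W_hh + b   # pre-activation state
        hid = np.tanh(pre)                # pre-rate-mixing hidden state
        h = (1.0 - z) * h + z * hid       # post-activation, rate-mixed state
        outs['pre'].append(pre)
        outs['hid'].append(hid)
        outs['rate'].append(z)
        outs['out'].append(h)
    return {k: np.array(v) for k, v in outs.items()}

rng = np.random.default_rng(2)
xs = rng.normal(size=(5, 3))        # 5 time steps of 3-dimensional input
W_xh = rng.normal(size=(3, 4))
W_hh = rng.normal(size=(4, 4))
b = np.zeros(4)
outs = rrnn_unroll(xs, W_xh, W_hh, b, z=np.full(4, 0.5))
```

Each collected array has one row per time step; 'hid' is always the activation of 'pre', and 'out' is the rate-weighted mixture of 'hid' with the previous state.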

References

[Ben12] Y. Bengio, N. Boulanger-Lewandowski, & R. Pascanu. (2012) “Advances in Optimizing Recurrent Networks.” http://arxiv.org/abs/1212.0901
[Jag07] H. Jaeger, M. Lukoševičius, D. Popovici, & U. Siewert. (2007) “Optimization and applications of echo state networks with leaky-integrator neurons.” Neural Networks, 20(3):335–352.
__init__(rate='matrix', **kwargs)[source]

x.__init__(…) initializes x; see help(type(x)) for signature

Methods

| Method | Description |
| --- | --- |
| `__init__([rate])` | x.__init__(…) initializes x; see help(type(x)) for signature |
| `add_bias(name, size[, mean, std])` | Helper method to create a new bias vector. |
| `add_weights(name, nin, nout[, mean, std, …])` | Helper method to create a new weight matrix. |
| `bind(graph[, reset, initialize])` | Bind this layer into a computation graph. |
| `connect(inputs)` | Create Theano variables representing the outputs of this layer. |
| `find(key)` | Get a shared variable for a parameter by name. |
| `full_name(name)` | Return a fully-scoped name for the given layer output. |
| `log()` | Log some information about this layer. |
| `log_params()` | Log information about this layer’s parameters. |
| `resolve_inputs(layers)` | Resolve the names of inputs for this layer into shape tuples. |
| `resolve_outputs()` | Resolve the names of outputs for this layer into shape tuples. |
| `setup()` | Set up the parameters and initial values for this layer. |
| `to_spec()` | Create a specification dictionary for this layer. |
| `transform(inputs)` | Transform the inputs for this layer into an output for the layer. |

Attributes

| Attribute | Description |
| --- | --- |
| `input_name` | Name of layer input (for layers with one input). |
| `input_shape` | Shape of layer input (for layers with one input). |
| `input_size` | Size of layer input (for layers with one input). |
| `output_name` | Full name of the default output for this layer. |
| `output_shape` | Shape of default output from this layer. |
| `output_size` | Number of “neurons” in this layer’s default output. |
| `params` | A list of all parameters in this layer. |
setup()[source]

Set up the parameters and initial values for this layer.

transform(inputs)[source]

Transform the inputs for this layer into an output for the layer.

Parameters

- inputs : dict of Theano expressions. Symbolic inputs to this layer, given as a dictionary mapping string names to Theano expressions. See Layer.connect().

Returns

- output : Theano expression. The output for this layer is the same as the input.
- updates : list. An empty updates list.