# theanets.layers.recurrent.SCRN¶

class `theanets.layers.recurrent.SCRN(rate='vector', s_0=None, context_size=None, **kwargs)` [source]

Structurally Constrained Recurrent Network layer.

Notes

A Structurally Constrained Recurrent Network combines a simple recurrent network with an explicitly slow-moving hidden context layer.

The update equations in this layer are largely those given by [Mik15], pages 4 and 5, but this implementation adds a bias term for the output of the layer. The update equations are thus:

$$
\begin{eqnarray}
s_t &=& r \odot x_t W_{xs} + (1 - r) \odot s_{t-1} \\
h_t &=& \sigma(x_t W_{xh} + h_{t-1} W_{hh} + s_t W_{sh}) \\
o_t &=& g\left(h_t W_{ho} + s_t W_{so} + b\right)
\end{eqnarray}
$$

Here, $$g(\cdot)$$ is the activation function for the layer and $$\odot$$ denotes elementwise multiplication. The rate values $$r$$ are computed as $$r = \sigma(\hat{r})$$, where $$\sigma(\cdot)$$ is the logistic sigmoid, so that the rates are confined to the open interval (0, 1).
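A single time step of these update equations can be sketched in plain NumPy. This is a minimal illustration of the math, not the theanets implementation; the parameter names (`W_xs`, `W_xh`, `W_hh`, `W_sh`, `W_ho`, `W_so`, `b`, `r_hat`) are assumptions chosen to match the equations above.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def scrn_step(x_t, h_prev, s_prev, params, g=np.tanh):
    """One SCRN time step following the update equations above.

    `params` maps illustrative names (assumptions for this sketch,
    not theanets internals) to weight arrays.
    """
    # Rate values are squashed into (0, 1) with the logistic sigmoid.
    r = sigmoid(params['r_hat'])
    # Slow-moving context state: a convex combination of the input
    # projection and the previous state, mixed elementwise by the rates.
    s_t = r * (x_t @ params['W_xs']) + (1 - r) * s_prev
    # Simple recurrent hidden state, conditioned on the context state.
    h_t = sigmoid(x_t @ params['W_xh'] + h_prev @ params['W_hh']
                  + s_t @ params['W_sh'])
    # Layer output, including the bias term added by this implementation.
    o_t = g(h_t @ params['W_ho'] + s_t @ params['W_so'] + params['b'])
    return o_t, h_t, s_t
```

Iterating `scrn_step` over the time axis of an input sequence, carrying `h_t` and `s_t` forward, reproduces the layer's recurrence.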

Parameters

• w — matrix connecting inputs to [hidden, state] units (this is a concatenation of parameters A and B in the paper)
• sh — matrix connecting state to hiddens (P)
• hh — matrix connecting hiddens to hiddens (R)
• ho — matrix connecting hiddens to output (U)
• so — matrix connecting state to output (V)
• b — vector of output bias values (not in original paper)

Additionally, if rate is specified as 'vector' (the default), then we also have:

• r — vector of learned rate values for the state units
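The combined matrix `w` above can be pictured as a horizontal concatenation of the paper's A and B matrices. The following shape-only sketch shows one way to split such a matrix back into its two logical parts; the sizes and the column ordering of the split are assumptions for illustration, not the theanets storage convention.

```python
import numpy as np

n_in, n_hid, n_ctx = 6, 8, 3  # illustrative sizes

# One combined matrix maps inputs to the [hidden, state] units,
# i.e. the paper's A and B matrices stored side by side.
w = np.zeros((n_in, n_hid + n_ctx))

# Splitting the columns recovers the two logical matrices.
# (Column order is an assumption for this sketch.)
W_xh = w[:, :n_hid]   # inputs -> hidden units
W_xs = w[:, n_hid:]   # inputs -> state (context) units
```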

Outputs

• out — the overall output of the layer
• hid — the output from the layer’s hidden units
• state — the output from the layer’s state units
• rate — the rate values of the state units

References

[Mik15] T. Mikolov, A. Joulin, S. Chopra, M. Mathieu, & M. Ranzato (ICLR 2015). "Learning Longer Memory in Recurrent Neural Networks." http://arxiv.org/abs/1412.7753
__init__(rate='vector', s_0=None, context_size=None, **kwargs)[source]

x.__init__(…) initializes x; see help(type(x)) for signature

Methods

| Method | Description |
| --- | --- |
| `__init__([rate, s_0, context_size])` | x.__init__(…) initializes x; see help(type(x)) for signature |
| `add_bias(name, size[, mean, std])` | Helper method to create a new bias vector. |
| `add_weights(name, nin, nout[, mean, std, …])` | Helper method to create a new weight matrix. |
| `bind(graph[, reset, initialize])` | Bind this layer into a computation graph. |
| `connect(inputs)` | Create Theano variables representing the outputs of this layer. |
| `find(key)` | Get a shared variable for a parameter by name. |
| `full_name(name)` | Return a fully-scoped name for the given layer output. |
| `log()` | Log some information about this layer. |
| `log_params()` | Log information about this layer's parameters. |
| `resolve_inputs(layers)` | Resolve the names of inputs for this layer into shape tuples. |
| `resolve_outputs()` | Resolve the names of outputs for this layer into shape tuples. |
| `setup()` | Set up the parameters and initial values for this layer. |
| `to_spec()` | Create a specification dictionary for this layer. |
| `transform(inputs)` | Transform the inputs for this layer into an output for the layer. |

Attributes

| Attribute | Description |
| --- | --- |
| `input_name` | Name of layer input (for layers with one input). |
| `input_shape` | Shape of layer input (for layers with one input). |
| `input_size` | Size of layer input (for layers with one input). |
| `output_name` | Full name of the default output for this layer. |
| `output_shape` | Shape of default output from this layer. |
| `output_size` | Number of "neurons" in this layer's default output. |
| `params` | A list of all parameters in this layer. |
resolve_inputs(layers)[source]

Resolve the names of inputs for this layer into shape tuples.

Parameters

• layers (list of Layer): A list of the layers that are available for resolving inputs.

Raises

• theanets.util.ConfigurationError: If an input cannot be resolved.
setup()[source]

Set up the parameters and initial values for this layer.

to_spec()[source]

Create a specification dictionary for this layer.

Returns

• spec (dict): A dictionary specifying the configuration of this layer.
transform(inputs)[source]

Transform the inputs for this layer into an output for the layer.

Parameters

• inputs (dict of Theano expressions): Symbolic inputs to this layer, given as a dictionary mapping string names to Theano expressions. See Layer.connect().

Returns

• output (Theano expression): The output expression for this layer.
• updates (list): An empty updates list.