theanets.regularizers.HiddenL1

class theanets.regularizers.HiddenL1(pattern='*', weight=0.0)[source]

Penalize the activation of hidden layers under an L1 norm.

Notes

This regularizer implements the loss() method to add the following term to the network’s loss function:

\[\frac{1}{|\Omega|} \sum_{i \in \Omega} \|Z_i\|_1\]

where \(\Omega\) is a set of “matching” graph output indices, and the L1 norm \(\|\cdot\|_1\) is the sum of the absolute values of the elements in the corresponding array.
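
For concreteness, the penalty term can be computed with plain NumPy; here hiddens stands in for the matched hidden-layer activation arrays \(Z_i\) (the names and shapes are illustrative, not part of the theanets API):

>>> import numpy as np
>>> hiddens = [np.random.randn(64, 100), np.random.randn(64, 50)]
>>> penalty = sum(abs(z).sum() for z in hiddens) / len(hiddens)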

This regularizer encourages the hidden unit activations in a model to be zero. Nonzero activations are used only when they reduce other components of the loss (e.g., the squared reconstruction error).

This regularizer acts indirectly to force a model to cover the space of its input dataset using as few features as possible; this pressure often causes features to be duplicated with slight variations, “tiling” the input space very differently than a non-regularized model would.
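
For example, adding an L1 penalty to the hidden layer of an overcomplete autoencoder yields a sparse-coding-style model (the layer sizes and penalty weight below are illustrative):

>>> net = theanets.Autoencoder([784, 1000, 784])
>>> net.train(data, hidden_l1=0.01)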

References

[Ng11] A. Ng. (2011). “Sparse Autoencoder.” Stanford CS294A Lecture Notes. http://web.stanford.edu/class/cs294a/sae/sparseAutoencoderNotes.pdf

Examples

This regularizer can be specified at training or test time by providing the hidden_l1 or hidden_sparsity keyword arguments:

>>> net = theanets.Regression(...)

To use this regularizer at training time:

>>> net.train(..., hidden_sparsity=0.1)

By default, all hidden layer outputs are penalized. To penalize only some graph outputs:

>>> net.train(..., hidden_sparsity=dict(weight=0.1, pattern='hid3:out'))
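
Patterns are matched against the fully-scoped output names in the computation graph, so a glob wildcard can select several layers at once (this example assumes layers named hid1, hid2, and so on):

>>> net.train(..., hidden_sparsity=dict(weight=0.1, pattern='hid*'))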

To use this regularizer when running the model forward to generate a prediction:

>>> net.predict(..., hidden_sparsity=0.1)

The value associated with the keyword argument can be a scalar, in which case it provides the weight for the regularizer, or a dictionary, in which case it is passed as keyword arguments directly to the constructor.
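
Because the dictionary is forwarded to the constructor, the pattern example above is equivalent to building the regularizer directly:

>>> reg = theanets.regularizers.HiddenL1(pattern='hid3:out', weight=0.1)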

__init__(pattern='*', weight=0.0)

Initialize this regularizer with a pattern of graph outputs to penalize and a weight for the penalty term.

Methods

__init__([pattern, weight]) Initialize this regularizer with a pattern and a weight.
log() Log some diagnostic info about this regularizer.
loss(layers, outputs) Compute a scalar term to add to the loss function for a model.
modify_graph(outputs) Modify the outputs of a particular layer in the computation graph.
loss(layers, outputs)[source]

Compute a scalar term to add to the loss function for a model.

Parameters:
layers : list of theanets.layers.Layer

A list of the layers in the model being regularized.

outputs : dict of Theano expressions

A dictionary mapping string expression names to their corresponding Theano expressions in the computation graph. This dictionary contains the fully-scoped name of every layer output in the graph.
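
A rough sketch of this computation, assuming glob-style matching of output names against the regularizer’s pattern (illustrative only; the library’s actual source may differ):

import fnmatch

def loss(self, layers, outputs):
    # Select the graph outputs whose fully-scoped names match the pattern.
    matched = [expr for name, expr in outputs.items()
               if fnmatch.fnmatch(name, self.pattern)]
    # Mean L1 norm over the matched activation arrays, per the formula above.
    return sum(abs(z).sum() for z in matched) / len(matched)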