theanets.regularizers.WeightL1

class theanets.regularizers.WeightL1(pattern=None, weight=0.0)

Decay the weights in a model using an L1 norm penalty.
Notes

This regularizer implements the loss() method to add the following term to the network's loss function:

\[\frac{1}{|\Omega|} \sum_{i \in \Omega} \|W_i\|_1\]

where \(\Omega\) is a set of “matching” weight parameters, and the L1 norm \(\|\cdot\|_1\) is the sum of the absolute values of the elements in the matrix.
This regularizer tends to encourage the weights in a model to be zero. Nonzero weights are used only when they are able to reduce the other components of the loss (e.g., the squared reconstruction error).
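To make the formula concrete, here is a minimal NumPy sketch of the penalty term; the weight matrices are illustrative stand-ins for the matched parameters \(\Omega\):

>>> import numpy as np
>>> weights = [np.random.randn(4, 3), np.random.randn(3, 2)]  # illustrative matches
>>> penalty = np.mean([np.abs(w).sum() for w in weights])

Each term sums the absolute values of one matrix, and averaging over the matches supplies the \(\frac{1}{|\Omega|}\) factor in the formula above.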
References

[Qiu11] Q. Qiu, Z. Jiang, & R. Chellappa. (ICCV 2011). “Sparse dictionary-based representation and recognition of action attributes.”

Examples
This regularizer can be specified at training or test time by providing the weight_l1 or weight_sparsity keyword arguments:

>>> net = theanets.Regression(...)
To use this regularizer at training time:
>>> net.train(..., weight_sparsity=0.1)
By default, all (2-dimensional) weights in the model are penalized. To include only some weights:
>>> net.train(..., weight_sparsity=dict(weight=0.1, pattern='hid[23].w'))
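The pattern is a glob-style string matched against parameter names such as 'hid2.w'. A hedged sketch of that matching using Python's fnmatch (the library's actual matching logic may differ, and the parameter names here are hypothetical):

>>> from fnmatch import fnmatch
>>> params = ['hid1.w', 'hid2.w', 'hid3.w', 'out.w']  # hypothetical names
>>> [p for p in params if fnmatch(p, 'hid[23].w')]
['hid2.w', 'hid3.w']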
To use this regularizer when running the model forward to generate a prediction:
>>> net.predict(..., weight_sparsity=0.1)
The value associated with the keyword argument can be a scalar—in which case it provides the weight for the regularizer—or a dictionary, in which case it will be passed as keyword arguments directly to the constructor.
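Putting the pieces together, a minimal end-to-end sketch; the layer sizes and data are illustrative, and any regression dataset would do:

>>> import numpy as np
>>> import theanets
>>> X = np.random.randn(100, 10).astype('f')  # illustrative inputs
>>> Y = np.random.randn(100, 1).astype('f')   # illustrative targets
>>> net = theanets.Regression(layers=[10, 5, 1])
>>> net.train([X, Y], weight_l1=0.01)  # scalar form: just the regularizer weight
>>> net.train([X, Y], weight_l1=dict(weight=0.01, pattern='hid1.w'))  # dict form: constructor kwargs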

__init__(pattern=None, weight=0.0)
Methods

__init__([pattern, weight])
log()  Log some diagnostic info about this regularizer.
loss(layers, outputs)
modify_graph(outputs)  Modify the outputs of a particular layer in the computation graph.
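Because the dict form of the keyword argument is forwarded to the constructor, building the regularizer directly is equivalent; a short sketch using the documented signature:

>>> from theanets.regularizers import WeightL1
>>> reg = WeightL1(pattern='hid[23].w', weight=0.1)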