theanets.activations.LGrelu

class theanets.activations.LGrelu(*args, **kwargs)

Rectified linear activation with learnable leak rate and gain.

This activation is characterized by two linear pieces joined at the origin. For negative inputs, the unit response is a linear function of the input with slope $r$ (the “leak rate”). For positive inputs, the unit response is a different linear function of the input with slope $g$ (the “gain”):

$f(x) = \begin{cases} rx & \text{if } x < 0 \\ gx & \text{otherwise} \end{cases}$

This activation allocates a separate leak and gain rate for each unit in its layer.
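The piecewise-linear response above can be sketched with NumPy. This is a minimal illustration of the math, not theanets' internal implementation; the function name `lg_relu` and the explicit `leak`/`gain` arrays are assumptions made here for clarity. Each unit (column) gets its own leak and gain parameter, matching the per-unit allocation described above.

```python
import numpy as np

def lg_relu(x, leak, gain):
    """Leaky ReLU with a learnable per-unit leak rate and gain (sketch).

    x    : array of layer pre-activations, shape (batch, units)
    leak : slope r applied where x < 0, shape (units,)
    gain : slope g applied where x >= 0, shape (units,)
    """
    # f(x) = r*x for x < 0, g*x otherwise; broadcasting applies the
    # per-unit parameters across the batch dimension.
    return np.where(x < 0, leak * x, gain * x)

# Two units: the first input is negative (scaled by its leak rate),
# the second is positive (scaled by its gain).
x = np.array([[-2.0, 3.0]])
leak = np.array([0.1, 0.2])
gain = np.array([1.5, 2.0])
print(lg_relu(x, leak, gain))  # [[-0.2  6. ]]
```

In theanets itself, `leak` and `gain` would be trainable parameters of the layer, updated by gradient descent along with the weights.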

__init__(*args, **kwargs)
