Reference

This package groups together Theano code for building and training neural networks. The entries below are organized by module.
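
A minimal end-to-end sketch (the data here are random placeholders; theanets expects floating-point inputs and, for classifiers, integer class labels):

    import numpy as np
    import theanets

    # Toy data: 100 samples, 784 features, 10 classes (placeholders).
    X = np.random.randn(100, 784).astype('f')
    y = np.random.randint(0, 10, 100).astype('i')

    # A classifier with one hidden layer of 100 units.
    net = theanets.Classifier(layers=[784, 100, 10])

    # Train with a downhill optimizer; extra keywords reach the trainer.
    net.train([X, y], algo='nag', learning_rate=1e-3, momentum=0.9)

    print(net.predict(X[:5]))  # most likely class for the first five samples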

theanets.activations

theanets.activations.Activation(name, layer, …) An activation function for a neural network layer.
theanets.activations.LGrelu(*args, **kwargs) Rectified linear activation with learnable leak rate and gain.
theanets.activations.Maxout(*args, **kwargs) Arbitrary piecewise linear activation.
theanets.activations.Prelu(*args, **kwargs) Parametric rectified linear activation with learnable leak rate.
theanets.activations.build(name, layer, **kwargs) Construct an activation function by name.
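
Activations are normally chosen by name inside a layer specification rather than instantiated directly; build() resolves the names, and parametric forms such as 'prelu' add learnable per-unit parameters. A sketch, assuming the registered names shown (composite names such as 'relu+norm:z' chain an activation with an output normalization):

    import theanets

    net = theanets.Regressor([
        10,
        (20, 'relu'),                       # tuple form: (size, activation)
        dict(size=20, activation='prelu'),  # dict form is equivalent
        1,
    ])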

theanets.feedforward

theanets.feedforward.Autoencoder(layers[, …]) An autoencoder network attempts to reproduce its input.
theanets.feedforward.Classifier(layers[, …]) A classifier computes a distribution over labels, given an input.
theanets.feedforward.Regressor([layers, …]) A regressor attempts to produce a target output given some inputs.
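
The three models differ mainly in the training data they expect: an Autoencoder trains on inputs alone, while Regressor and Classifier also need targets. A hedged sketch with placeholder data:

    import numpy as np
    import theanets

    X = np.random.randn(64, 30).astype('f')  # inputs (placeholder data)
    T = np.random.randn(64, 3).astype('f')   # real-valued targets

    ae = theanets.Autoencoder([30, 10, 30])  # reproduce the input
    ae.train([X], algo='rmsprop')            # unsupervised: inputs only

    reg = theanets.Regressor([30, 20, 3])    # predict real-valued targets
    reg.train([X, T], algo='rmsprop')        # supervised: inputs and targets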

theanets.graph

theanets.graph.Network([layers, loss, …]) The network class encapsulates a network computation graph.
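
Network is the base class of every model above and provides parameter lookup and persistence. The layer and parameter names below ('hid1', 'w') follow theanets' default naming scheme, so treat the details as an assumption:

    import theanets

    net = theanets.Regressor([30, 20, 3])  # any model; Network is its base

    # Look up a parameter by layer name and parameter name.
    w = net.find('hid1', 'w').get_value()
    print(w.shape)                         # (30, 20)

    net.save('model.pkl')                  # pickle the model to disk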

theanets.layers.base

theanets.layers.base.Concatenate([name]) Concatenate multiple inputs along the last axis.
theanets.layers.base.Flatten([name]) Flatten all but the batch index of the input.
theanets.layers.base.Input([name, ndim, sparse]) A layer that receives external input data.
theanets.layers.base.Layer([name]) Base class for network layers.
theanets.layers.base.Product([name]) Multiply several inputs together elementwise.
theanets.layers.base.Reshape([name]) Reshape an input to have different numbers of dimensions.

theanets.layers.convolution

theanets.layers.convolution.Conv1(filter_size) 1-dimensional convolutions run over one data axis.

theanets.layers.feedforward

theanets.layers.feedforward.Classifier(**kwargs) A classifier layer performs a softmax over a linear input transform.
theanets.layers.feedforward.Feedforward([name]) A feedforward neural network layer performs a transform of its input.
theanets.layers.feedforward.Tied(partner, …) A tied-weights feedforward layer shadows weights from another layer.
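
Like activations, layers are usually selected by a 'form' name (the lowercased class name) inside a model's layer list; utility and convolution forms such as 'flatten' or 'conv1' are specified the same way. A sketch of a tied-weights autoencoder, with the partner-by-name wiring assumed from the Tied signature above:

    import theanets

    net = theanets.Autoencoder([
        100,
        dict(size=50, name='hid1'),         # encoder layer, named explicitly
        dict(form='tied', partner='hid1'),  # decoder reuses 'hid1' weights
    ])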

theanets.layers.recurrent

theanets.layers.recurrent.Bidirectional([worker]) A bidirectional recurrent layer runs worker models forward and backward.
theanets.layers.recurrent.Clockwork(periods, …) A Clockwork RNN layer updates “modules” of neurons at specific rates.
theanets.layers.recurrent.GRU([h_0]) Gated Recurrent Unit layer.
theanets.layers.recurrent.LSTM([c_0]) Long Short-Term Memory (LSTM) layer.
theanets.layers.recurrent.MRNN([factors]) A recurrent network layer with multiplicative dynamics.
theanets.layers.recurrent.MUT1([h_0]) “MUT1” evolved recurrent layer.
theanets.layers.recurrent.RNN([h_0]) Standard recurrent network layer.
theanets.layers.recurrent.RRNN([rate]) An RNN with an update rate for each unit.
theanets.layers.recurrent.SCRN([rate, s_0, …]) Structurally Constrained Recurrent Network layer.
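
Recurrent layer forms are selected the same way; a sketch, assuming that a bidirectional layer names its worker form as a string:

    import theanets

    net = theanets.recurrent.Regressor([
        10,
        (50, 'lstm'),                                    # LSTM hidden layer
        dict(form='bidirectional', worker='gru', size=50),
        3,
    ])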

theanets.losses

theanets.losses.CrossEntropy(target[, …]) Cross-entropy (XE) loss function for classifiers.
theanets.losses.GaussianLogLikelihood([…]) Gaussian Log Likelihood (GLL) loss function.
theanets.losses.Hinge(target[, weight, …]) Hinge loss function for classifiers.
theanets.losses.KullbackLeiblerDivergence(target) The KL divergence loss is computed over probability distributions.
theanets.losses.Loss(target[, weight, …]) A loss function base class.
theanets.losses.MaximumMeanDiscrepancy([kernel]) Maximum Mean Discrepancy (MMD) loss function.
theanets.losses.MeanAbsoluteError(target[, …]) Mean-absolute-error (MAE) loss function.
theanets.losses.MeanSquaredError(target[, …]) Mean-squared-error (MSE) loss function.
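
The loss is chosen at model construction time by its registered name; 'mse' is the Regressor default, and the name 'mae' below is assumed to map to MeanAbsoluteError:

    import theanets

    net = theanets.Regressor([30, 20, 3], loss='mae')

Losses that take a weight input support per-target weighting: models built with weighted=True expect an additional weight array alongside each training batch.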

theanets.recurrent

theanets.recurrent.Autoencoder(layers[, …]) An autoencoder network attempts to reproduce its input.
theanets.recurrent.Classifier(layers[, …]) A classifier computes a distribution over labels, given an input.
theanets.recurrent.Regressor([layers, loss, …]) A regressor attempts to produce a target output given some inputs.
theanets.recurrent.Text(text[, alpha, …]) A class for handling sequential text data.
theanets.recurrent.batches(arrays[, steps, …]) Create a callable that generates samples from a dataset.
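
The recurrent models mirror their feedforward counterparts but consume three-dimensional arrays; the (samples, time steps, variables) axis order is assumed here, so check it against your version:

    import numpy as np
    import theanets

    X = np.random.randn(16, 20, 10).astype('f')  # 16 sequences of 20 steps
    T = np.random.randn(16, 20, 3).astype('f')   # per-step targets

    net = theanets.recurrent.Regressor([10, (50, 'lstm'), 3])
    net.train([X, T], algo='rmsprop')

For character-level models, Text builds an alphabet from a corpus and its batch helpers plug directly into train(), while batches() wraps plain arrays in the same way.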

theanets.regularizers

theanets.regularizers.BernoulliDropout([…]) Randomly set activations of a layer output to zero.
theanets.regularizers.Contractive([pattern, …]) Penalize the derivative of hidden layers with respect to their inputs.
theanets.regularizers.GaussianNoise([…]) Add isotropic Gaussian noise to one or more graph outputs.
theanets.regularizers.HiddenL1([pattern, weight]) Penalize the activation of hidden layers under an L1 norm.
theanets.regularizers.Regularizer([pattern, …]) A regularizer for a neural network model.
theanets.regularizers.RecurrentNorm([…]) Penalize successive activation norms of recurrent layers.
theanets.regularizers.RecurrentState([…]) Penalize state changes of recurrent layers.
theanets.regularizers.WeightL1([pattern, weight]) Decay the weights in a model using an L1 norm penalty.
theanets.regularizers.WeightL2([pattern, weight]) Decay the weights in a model using an L2 norm penalty.
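
Regularizers are switched on with keyword arguments at training time; each keyword below maps onto one of the classes above (the names follow the training documentation, hedged here as assumptions):

    import numpy as np
    import theanets

    X = np.random.randn(100, 30).astype('f')

    net = theanets.Autoencoder([30, 10, 30])
    net.train(
        [X],
        algo='rmsprop',
        weight_l2=1e-4,      # WeightL2: decay all model weights
        hidden_l1=0.01,      # HiddenL1: encourage sparse hidden activity
        hidden_dropout=0.5,  # BernoulliDropout on hidden outputs
        input_noise=0.1,     # GaussianNoise added to the input
    )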

theanets.trainer

theanets.trainer.DownhillTrainer(algo, network) Wrapper for using trainers from downhill.
theanets.trainer.SampleTrainer(network) This trainer replaces network weights with samples from the input.
theanets.trainer.SupervisedPretrainer(algo, …) This trainer adapts parameters using a supervised pretraining approach.
theanets.trainer.UnsupervisedPretrainer(…) Train a classification model using an unsupervised pre-training step.
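
Trainers are selected through the algo argument to train(): any downhill optimizer name ('sgd', 'nag', 'rmsprop', 'adam', ...) selects DownhillTrainer, while 'layerwise' and 'pretrain' are assumed to name the supervised and unsupervised pretrainers:

    import numpy as np
    import theanets

    X = np.random.randn(100, 784).astype('f')
    y = np.random.randint(0, 10, 100).astype('i')

    net = theanets.Classifier([784, 100, 100, 10])

    net.train([X, y], algo='layerwise')  # greedy supervised pretraining
    net.train([X, y], algo='adadelta')   # then fine-tune with downhill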