theanets.recurrent.Regressor

class theanets.recurrent.Regressor(layers=(), loss='mse', weighted=False, rng=13)

A regressor attempts to produce a target output given some inputs.

Notes

Regressor models default to an MSE loss. To use a different loss, pass a non-default value for the loss keyword argument when constructing your model.
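
For example, to train with mean absolute error instead, you could pass the corresponding loss key (assuming the 'mae' key is registered in your theanets installation):

>>> import theanets
>>> model = theanets.recurrent.Regressor([10, (20, 'rnn'), 3], loss='mae')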

Examples

To create a recurrent regression model, instantiate the class directly. Often you’ll provide the layer configuration at this time:

>>> model = theanets.recurrent.Regressor([10, (20, 'rnn'), 3])

See Creating a Model for more information.

Data

Training data for a recurrent regression model takes the form of two three-dimensional arrays. The shapes of these arrays are (num-examples, num-time-steps, num-variables): the first axis enumerates data points in a batch, the second enumerates time steps, and the third enumerates the variables (input variables for the input array, and output variables for the output array) in the model.

For instance, to create a training dataset containing 1000 examples, each with 100 time steps:

>>> import numpy as np
>>> inputs = np.random.randn(1000, 100, 10).astype('f')
>>> outputs = np.random.randn(1000, 100, 3).astype('f')
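
If the model was constructed with weighted=True, training also expects a third array of per-output weights. A minimal sketch, assuming the weights share the shape of the targets above:

>>> weights = np.ones((1000, 100, 3), 'f')

The weights would then be passed as a third element of the training data, e.g. model.train([inputs, outputs, weights]).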

Training

Training the model can be as simple as calling the train() method:

>>> model.train([inputs, outputs])
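
The training algorithm and its hyperparameters can also be chosen explicitly via keyword arguments. A sketch, assuming the 'rmsprop' algorithm key and these hyperparameter names are supported by your installed trainer:

>>> model.train([inputs, outputs], algo='rmsprop', learning_rate=1e-3, momentum=0.9)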

See Training a Model for more information.

Use

A model can be used to predict() the output of some input data points:

>>> test = np.random.randn(3, 200, 10).astype('f')
>>> print(model.predict(test))

Note that the test data does not need to have the same number of time steps as the training data.
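
A model can also be scored against known targets with score(), which computes the R^2 coefficient of determination. A minimal sketch, with randomly generated targets standing in for real labels:

>>> targets = np.random.randn(3, 200, 3).astype('f')
>>> print(model.score(test, targets))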

See Using a Model for more information.

__init__(layers=(), loss='mse', weighted=False, rng=13)

x.__init__(…) initializes x; see help(type(x)) for signature

Methods

__init__([layers, loss, weighted, rng])
    x.__init__(…) initializes x; see help(type(x)) for signature
add_layer([layer])
    Add a layer to our network graph.
add_loss([loss])
    Add a loss function to the model.
build_graph([regularizers])
    Connect the layers in this network to form a computation graph.
feed_forward(x, **kwargs)
    Compute a forward pass of all layers from the given input.
find(which, param)
    Get a parameter from a layer in the network.
itertrain(train[, valid, algo, subalgo, …])
    Train our network, one batch at a time.
load(filename_or_handle)
    Load a saved network from disk.
loss(**kwargs)
    Return a variable representing the regularized loss for this network.
monitors(**kwargs)
    Return expressions that should be computed to monitor training.
predict(x, **kwargs)
    Compute a forward pass of the inputs, returning the network output.
save(filename_or_handle)
    Save the state of this network to a pickle file on disk.
score(x, y[, w])
    Compute R^2 coefficient of determination for a given labeled input.
set_loss(*args, **kwargs)
    Clear the current loss functions from the network and add a new one.
train(*args, **kwargs)
    Train the network until the trainer converges.
updates(**kwargs)
    Return expressions to run as updates during network training.
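
For example, a trained model can be checkpointed to disk with save() and restored later with load(). A minimal sketch; the filename is hypothetical:

>>> model.save('regressor.pkl')
>>> model = theanets.recurrent.Regressor.load('regressor.pkl')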

Attributes

DEFAULT_OUTPUT_ACTIVATION
INPUT_NDIM
    Number of dimensions for holding input data arrays.
OUTPUT_NDIM
    Number of dimensions for holding output data arrays.
inputs
    A list of Theano variables for feedforward computations.
params
    A list of the learnable Theano parameters for this network.
variables
    A list of Theano variables for loss computations.
INPUT_NDIM = 3

Number of dimensions for holding input data arrays.

OUTPUT_NDIM = 3

Number of dimensions for holding output data arrays.