nerva_numpy.activation_functions
Activation functions and utilities used by the MLP implementation.
This module provides simple callable classes for common activations and a parser that turns textual specifications into activation instances (e.g. “ReLU”, “LeakyReLU(alpha=0.1)”, “SReLU(al=0, tl=0, ar=0, tr=1)”).
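As a quick orientation, the sketch below shows how the pieces are typically combined: construct an activation directly or via the parser, then apply it and its derivative to a batch. The value and gradient method names follow the interface documented below; treat the exact signatures as an assumption to verify against the installed version:

    import numpy as np
    from nerva_numpy.activation_functions import LeakyReLUActivation, parse_activation

    X = np.array([[-2.0, -0.5, 0.0, 1.5]])

    # Construct an activation directly ...
    act = LeakyReLUActivation(alpha=0.1)
    # ... or from a textual specification (assumed to be equivalent).
    act = parse_activation("LeakyReLU(alpha=0.1)")

    # Apply the activation and its gradient element-wise to the batch.
    Y = act.value(X)       # [[-0.2, -0.05, 0.0, 1.5]]
    dY = act.gradient(X)   # 0.1 where X < 0, 1.0 where X > 0 (X == 0 is implementation-defined)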
Functions
- AllReLU factory.
- Gradient factory for AllReLU.
- Hyperbolic tangent activation.
- Gradient of tanh: 1 - tanh²(X).
- Leaky ReLU factory: max(X, alpha * X).
- Gradient factory for leaky ReLU.
- Rectified linear unit activation: max(0, X).
- Gradient of ReLU: 1 where X > 0, 0 elsewhere.
- Sigmoid activation: 1 / (1 + exp(-X)).
- Gradient of sigmoid: σ(X) * (1 - σ(X)).
- SReLU factory: smooth rectified linear with learnable parameters.
- Gradient factory for SReLU.
- Parse a textual activation specification into an ActivationFunction.
Classes
- Interface for activation functions with value and gradient methods.
- AllReLU activation (alternative parameterization of leaky ReLU).
- Hyperbolic tangent activation function.
- Leaky ReLU activation: max(x, alpha * x).
- ReLU activation function: max(0, x).
- Smooth rectified linear activation with learnable parameters.
- Sigmoid activation function: 1 / (1 + exp(-x)).
- nerva_numpy.activation_functions.Relu(X: numpy.ndarray) → numpy.ndarray
Rectified linear unit activation: max(0, X).
- nerva_numpy.activation_functions.Relu_gradient(X: numpy.ndarray) → numpy.ndarray
Gradient of ReLU: 1 where X > 0, 0 elsewhere.
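For reference, a minimal NumPy sketch of these two formulas (illustrative only, not the library's own implementation):

    import numpy as np

    def relu(X: np.ndarray) -> np.ndarray:
        # max(0, X), element-wise
        return np.maximum(0, X)

    def relu_gradient(X: np.ndarray) -> np.ndarray:
        # 1 where X > 0, 0 elsewhere
        return (X > 0).astype(X.dtype)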
- nerva_numpy.activation_functions.Leaky_relu_gradient(alpha)
Gradient factory for leaky ReLU.
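The "factory" naming suggests a function that fixes alpha and returns the actual gradient callable. A minimal sketch under that assumption (illustrative, not the library's code):

    import numpy as np

    def leaky_relu_gradient(alpha):
        # Derivative of max(X, alpha * X) for 0 < alpha < 1:
        # alpha where X < 0, 1 elsewhere.
        def gradient(X: np.ndarray) -> np.ndarray:
            return np.where(X < 0, alpha, 1.0)
        return gradient

    grad = leaky_relu_gradient(0.1)
    grad(np.array([-2.0, 3.0]))   # array([0.1, 1. ])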
- nerva_numpy.activation_functions.Hyperbolic_tangent(X: numpy.ndarray) → numpy.ndarray
Hyperbolic tangent activation.
- nerva_numpy.activation_functions.Hyperbolic_tangent_gradient(X: numpy.ndarray) → numpy.ndarray
Gradient of tanh: 1 - tanh²(X).
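A minimal NumPy sketch of the pair (illustrative only):

    import numpy as np

    def hyperbolic_tangent(X: np.ndarray) -> np.ndarray:
        return np.tanh(X)

    def hyperbolic_tangent_gradient(X: np.ndarray) -> np.ndarray:
        # d/dX tanh(X) = 1 - tanh(X)**2
        return 1.0 - np.tanh(X) ** 2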
- nerva_numpy.activation_functions.Sigmoid(X: numpy.ndarray) → numpy.ndarray
Sigmoid activation: 1 / (1 + exp(-X)).
- nerva_numpy.activation_functions.Sigmoid_gradient(X: numpy.ndarray) → numpy.ndarray
Gradient of sigmoid: σ(X) * (1 - σ(X)).
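A minimal NumPy sketch of the pair (illustrative only; a production version might prefer scipy.special.expit for better behavior at large negative inputs):

    import numpy as np

    def sigmoid(X: np.ndarray) -> np.ndarray:
        return 1.0 / (1.0 + np.exp(-X))

    def sigmoid_gradient(X: np.ndarray) -> np.ndarray:
        # sigma(X) * (1 - sigma(X))
        s = sigmoid(X)
        return s * (1.0 - s)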
- nerva_numpy.activation_functions.Srelu(al, tl, ar, tr)
SReLU factory: smooth rectified linear with learnable parameters.
- nerva_numpy.activation_functions.Srelu_gradient(al, tl, ar, tr)
Gradient factory for SReLU.
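The parameters al, tl, ar and tr are consistent with the common S-shaped piecewise-linear SReLU, with left/right slopes (al, ar) and thresholds (tl, tr). The sketch below uses that formulation, as an assumption rather than a statement of the library's exact definition:

    import numpy as np

    def srelu(al, tl, ar, tr):
        # Assumed SReLU:  tl + al*(x - tl)  for x <= tl,
        #                 x                 for tl < x < tr,
        #                 tr + ar*(x - tr)  for x >= tr.
        def value(X: np.ndarray) -> np.ndarray:
            return np.where(X <= tl, tl + al * (X - tl),
                   np.where(X >= tr, tr + ar * (X - tr), X))
        return value

    def srelu_gradient(al, tl, ar, tr):
        # Piecewise derivative: al left of tl, 1 in between, ar right of tr.
        def gradient(X: np.ndarray) -> np.ndarray:
            return np.where(X <= tl, al, np.where(X >= tr, ar, 1.0))
        return gradient

Under this reading, the default parameters al=0, tl=0, ar=0, tr=1 clamp the output to the interval [0, 1].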
- class nerva_numpy.activation_functions.ActivationFunction
  Bases: object
Interface for activation functions with value and gradient methods.
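A subclass would plausibly look like the following; the value/gradient method names come from the interface description above, while ClippedReLUActivation itself is a hypothetical example, not part of the library:

    import numpy as np
    from nerva_numpy.activation_functions import ActivationFunction

    class ClippedReLUActivation(ActivationFunction):
        # Hypothetical example: ReLU clipped at a maximum value.
        def __init__(self, maximum=6.0):
            self.maximum = maximum

        def value(self, X: np.ndarray) -> np.ndarray:
            return np.clip(X, 0.0, self.maximum)

        def gradient(self, X: np.ndarray) -> np.ndarray:
            return ((X > 0) & (X < self.maximum)).astype(X.dtype)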
- class nerva_numpy.activation_functions.ReLUActivation
  Bases: ActivationFunction
ReLU activation function: max(0, x).
- class nerva_numpy.activation_functions.LeakyReLUActivation(alpha)
  Bases: ActivationFunction
Leaky ReLU activation: max(x, alpha * x).
- class nerva_numpy.activation_functions.AllReLUActivation(alpha)
  Bases: ActivationFunction
AllReLU activation (alternative parameterization of leaky ReLU).
- class nerva_numpy.activation_functions.HyperbolicTangentActivation
  Bases: ActivationFunction
Hyperbolic tangent activation function.
- class nerva_numpy.activation_functions.SigmoidActivation
  Bases: ActivationFunction
Sigmoid activation function: 1 / (1 + exp(-x)).
- class nerva_numpy.activation_functions.SReLUActivation(al=0.0, tl=0.0, ar=0.0, tr=1.0)
  Bases: ActivationFunction
Smooth rectified linear activation with learnable parameters.
- nerva_numpy.activation_functions.parse_activation(text: str) → ActivationFunction
Parse a textual activation specification into an ActivationFunction.
Examples include “ReLU”, “Sigmoid”, “HyperbolicTangent”, “AllReLU(alpha=0.1)”, “LeakyReLU(alpha=0.1)”, and “SReLU(al=0, tl=0, ar=0, tr=1)”.
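A short usage sketch, using one of the specification strings listed above (the exact behavior for unknown or malformed specifications is an assumption to verify):

    from nerva_numpy.activation_functions import parse_activation

    act = parse_activation("SReLU(al=0, tl=0, ar=0, tr=1)")
    # `act` is an ActivationFunction instance (here an SReLU activation);
    # an unrecognised specification presumably raises an error.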