rate

Built-in learning rate policies, which allow for a dynamic learning rate during neural network training.


Example

let { methods, Network } = require("@liquid-carrot/carrot");

let network = new Network(5, 10, 5);

// OPTION #1: methods.rate.FIXED
network.train(dataset, { rate_policy: methods.rate.FIXED });

// OPTION #2: methods.rate.STEP
network.train(dataset, { rate_policy: methods.rate.STEP });

// OPTION #3: methods.rate.EXP
network.train(dataset, { rate_policy: methods.rate.EXP });

// OPTION #4: methods.rate.INV
network.train(dataset, { rate_policy: methods.rate.INV });

// more coming soon...

Methods

(static) EXP (base_rate, iteration, options (optional)) → {function}

Exponential Learning Rate

The learning rate will exponentially decrease.

The rate at iteration is calculated as: rate = base_rate * Math.pow(options.gamma, iteration)
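The formula can be checked with a small standalone sketch in plain JavaScript that does not use the library itself (expRate is an illustrative name, not a Carrot identifier):

```javascript
// Standalone sketch of the exponential schedule:
// rate = base_rate * gamma^iteration
// (expRate is an illustrative name, not part of the Carrot API)
const expRate = (baseRate, iteration, { gamma = 0.999 } = {}) =>
  baseRate * Math.pow(gamma, iteration);

// At iteration 0 the rate equals the base rate; with gamma = 0.5
// the rate halves on every iteration.
console.log(expRate(0.3, 0));                 // 0.3
console.log(expRate(0.5, 3, { gamma: 0.5 })); // 0.0625
```

With the default gamma of 0.999 the decay is gradual: roughly 37% of the base rate remains after 1000 iterations.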

Parameters:

base_rate (number): A base learning rate; 0 < base_rate < 1

iteration (number): The current training iteration; iteration > 0

options (Object, optional):

gamma (number, optional, default 0.999): Learning rate retention per iteration; 0 < options.gamma < 1. A large options.gamma can cause networks to never converge, while a low options.gamma can cause them to converge too quickly.

Example
let { Network, methods } = require("@liquid-carrot/carrot");

let network = new Network(10, 1);

network.train(dataset, { rate_policy: methods.rate.EXP });

(static) FIXED (base_rate) → {function}

Fixed Learning Rate

The default rate policy. The learning rate stays constant throughout training (no change). Also useful for explicitly overriding a previously set rate policy.
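To make "static" concrete, here is a standalone sketch (fixedRate is an illustrative name, not a Carrot identifier): a fixed policy simply ignores the iteration count.

```javascript
// Standalone sketch: a fixed policy returns the base rate unchanged,
// whatever the iteration. (fixedRate is illustrative, not a Carrot identifier)
const fixedRate = (baseRate, iteration) => baseRate;

console.log(fixedRate(0.3, 1));      // 0.3
console.log(fixedRate(0.3, 100000)); // 0.3
```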

Parameters:

base_rate (number): A base learning rate; 0 < base_rate < 1

Example
let { Network, methods } = require("@liquid-carrot/carrot");

let network = new Network(10, 1);

network.train(dataset, { rate_policy: methods.rate.FIXED });

(static) INV (base_rate, iteration, options (optional)) → {function}

Inverse Exponential Learning Rate

The learning rate will decrease toward zero as an inverse power of the iteration count.

The rate at iteration is calculated as: rate = base_rate * Math.pow(1 + options.gamma * iteration, -options.power)
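As with the other policies, the formula can be checked with a standalone sketch in plain JavaScript (invRate is an illustrative name, not a Carrot identifier):

```javascript
// Standalone sketch of the inverse schedule:
// rate = base_rate * (1 + gamma * iteration)^(-power)
// (invRate is an illustrative name, not part of the Carrot API)
const invRate = (baseRate, iteration, { gamma = 0.001, power = 2 } = {}) =>
  baseRate * Math.pow(1 + gamma * iteration, -power);

// At iteration 0 there is no decay yet.
console.log(invRate(0.3, 0));                           // 0.3
console.log(invRate(0.2, 2, { gamma: 0.5, power: 2 })); // 0.05
```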

Parameters:

base_rate (number): A base learning rate; 0 < base_rate < 1

iteration (number): The current training iteration; iteration > 0

options (Object, optional):

gamma (number, optional, default 0.001): Learning rate decay per iteration; 0 < options.gamma < 1. A large options.gamma can cause networks to converge too quickly and stop learning; a low options.gamma can cause networks to learn very slowly.

power (number, optional, default 2): Decay exponent; 0 < options.power. A large options.power can cause networks to stop learning quickly; a low options.power can cause networks to learn very slowly.

Example
let { Network, methods } = require("@liquid-carrot/carrot");

let network = new Network(10, 1);

network.train(dataset, { rate_policy: methods.rate.INV });

(static) STEP (base_rate, iteration, options (optional)) → {function}

Step Learning Rate

The learning rate will decrease (i.e. 'step down') every options.step_size iterations.
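This section does not state the formula, but a conventional step-decay schedule multiplies the rate by gamma once every options.step_size iterations; the sketch below assumes that form (stepRate is an illustrative name, not a Carrot identifier):

```javascript
// Hedged sketch of a conventional step-decay schedule (assumed form,
// not taken verbatim from the library):
// rate = base_rate * gamma^floor(iteration / step_size)
const stepRate = (baseRate, iteration, { gamma = 0.9, step_size = 100 } = {}) =>
  baseRate * Math.pow(gamma, Math.floor(iteration / step_size));

// The rate is constant within a step and drops at each step boundary.
console.log(stepRate(0.5, 99, { gamma: 0.5 }));  // 0.5  (still in step 0)
console.log(stepRate(0.5, 100, { gamma: 0.5 })); // 0.25 (first step down)
```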

Parameters:

base_rate (number): A base learning rate; 0 < base_rate < 1

iteration (number): The current training iteration; iteration > 0

options (Object, optional):

gamma (number, optional, default 0.9): Learning rate retention per step; 0 < options.gamma < 1. A large options.gamma can cause networks to never converge, while a low options.gamma can cause them to converge too quickly.

step_size (number, optional, default 100): The learning rate is updated every options.step_size iterations.

Example
let { Network, methods } = require("@liquid-carrot/carrot");

let network = new Network(10, 1);

network.train(dataset, { rate_policy: methods.rate.STEP });