
Table 5 Deep learning optimizers' hyperparameter specification

From: Early prediction of chronic kidney disease based on ensemble of deep learning models and optimizers

| Optimizer | Specification |
|-----------|---------------|
| Adamax | Learning rate = 0.0009, beta1 = 0.9, beta2 = 0.99, epsilon = 1 × 10⁻⁸ |
| Adam | Learning rate = 0.0009, beta1 = 0.9, beta2 = 0.99, epsilon = 1 × 10⁻⁸ |
| SGD | Learning rate = 0.0009, momentum = 0.9, nesterov = False |
| Adadelta | Learning rate = 0.0009, rho = 0.95, epsilon = 1 × 10⁻⁶ |
| Adagrad | Learning rate = 0.0009, epsilon = 1 × 10⁻⁷ |
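For reference, a minimal sketch of how the configurations in Table 5 could be instantiated. The table does not name the framework, so the use of the `tf.keras.optimizers` API here is an assumption, suggested by the parameter names (beta1/beta2, rho, nesterov) matching Keras conventions:

```python
# Hypothetical instantiation of the Table 5 optimizers, assuming the
# tf.keras.optimizers API; the table itself does not name a framework.
import tensorflow as tf

LR = 0.0009  # learning rate shared by all five optimizers in Table 5

optimizers = {
    "Adamax": tf.keras.optimizers.Adamax(
        learning_rate=LR, beta_1=0.9, beta_2=0.99, epsilon=1e-8),
    "Adam": tf.keras.optimizers.Adam(
        learning_rate=LR, beta_1=0.9, beta_2=0.99, epsilon=1e-8),
    "SGD": tf.keras.optimizers.SGD(
        learning_rate=LR, momentum=0.9, nesterov=False),
    "Adadelta": tf.keras.optimizers.Adadelta(
        learning_rate=LR, rho=0.95, epsilon=1e-6),
    "Adagrad": tf.keras.optimizers.Adagrad(
        learning_rate=LR, epsilon=1e-7),
}

# Each optimizer could then be passed to model.compile(optimizer=...)
# when training the corresponding deep learning model.
```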