Two-Layer Neural Network Training

Deep Learning with Multi-Layer Gradient Descent

Two-Layer Neural Network Model
Layer 1: $h_1 = \sigma(w_{11} \cdot x + b_1)$, $h_2 = \sigma(w_{21} \cdot x + b_2)$
Layer 2: $y = \sigma(w_{12} \cdot h_1 + w_{22} \cdot h_2 + b_3)$
Loss Function: $L = \frac{1}{n} \sum_{i=1}^{n} (y_i - \hat{y}_i)^2$ (Mean Squared Error)
Total Parameters: 4 weights + 3 biases = 7 parameters
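The forward pass and MSE loss above can be sketched in plain Python. This is a minimal illustration, not the demo's actual implementation; the function and dictionary key names are mine, and the initial values match the parameter controls below.

```python
import math

def sigmoid(z):
    """Logistic activation sigma(z) = 1 / (1 + e^(-z))."""
    return 1.0 / (1.0 + math.exp(-z))

def forward(x, p):
    """Forward pass through the two-layer network."""
    h1 = sigmoid(p["w11"] * x + p["b1"])                  # hidden unit 1
    h2 = sigmoid(p["w21"] * x + p["b2"])                  # hidden unit 2
    y = sigmoid(p["w12"] * h1 + p["w22"] * h2 + p["b3"])  # output unit
    return h1, h2, y

def mse(p, xs, ys):
    """Mean squared error over the dataset."""
    return sum((y - forward(x, p)[2]) ** 2 for x, y in zip(xs, ys)) / len(xs)

# Initial values from the parameter controls
params = {"w11": 0.5, "w21": -0.5, "b1": 0.0, "b2": 0.0,
          "w12": 1.0, "w22": 1.0, "b3": 0.0}

# With these values, for x = 1.0: h1 + h2 = 1.0 exactly
# (sigmoid(0.5) + sigmoid(-0.5)), so y = sigmoid(1.0) ~ 0.731
```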
Network Architecture Visualization
[Diagram: input x → hidden layer (Layer 1) with units h₁ (bias b₁, σ) and h₂ (bias b₂, σ) → output layer (Layer 2) with unit y (bias b₃, σ)]
🤖 Neural Network Parameter Controls

Adjust all 7 parameters of the two-layer neural network:

🔧 Layer 1 Parameters (Hidden Layer): w₁₁ = 0.50, w₂₁ = −0.50, b₁ = 0.00, b₂ = 0.00
⚡ Layer 2 Parameters (Output Layer): w₁₂ = 1.00, w₂₂ = 1.00, b₃ = 0.00, learning rate = 0.10
🎛️ Training Controls
Current Loss (MSE): 0.000
Ready to start the gradient descent algorithm
Training Progress: Loss vs. Epoch
Neural Network Status
Current Network Output: 0.000
Hidden Layer (Layer 1): h₁ = 0.000, h₂ = 0.000
Sample Prediction: For x=1.0, y = 0.000
Neural Network Architecture
Forward Pass:
$h_1 = \sigma(w_{11} \cdot x + b_1)$, $h_2 = \sigma(w_{21} \cdot x + b_2)$
$y = \sigma(w_{12} \cdot h_1 + w_{22} \cdot h_2 + b_3)$

Backpropagation Gradients:
Layer 2: $\frac{\partial L}{\partial w_{12}} = \delta_3 \cdot h_1$, $\frac{\partial L}{\partial w_{22}} = \delta_3 \cdot h_2$, $\frac{\partial L}{\partial b_3} = \delta_3$
Layer 1: $\frac{\partial L}{\partial w_{11}} = \delta_1 \cdot x$, $\frac{\partial L}{\partial w_{21}} = \delta_2 \cdot x$, $\frac{\partial L}{\partial b_1} = \delta_1$, $\frac{\partial L}{\partial b_2} = \delta_2$
Error terms (single sample with target $t$): $\delta_3 = 2(y - t) \cdot \sigma'(z_3)$, $\delta_1 = \delta_3 \cdot w_{12} \cdot \sigma'(z_1)$, $\delta_2 = \delta_3 \cdot w_{22} \cdot \sigma'(z_2)$, where $z$ denotes each unit's pre-activation and $\sigma'(z) = \sigma(z)(1 - \sigma(z))$.
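The error terms δ follow from the chain rule and the sigmoid derivative σ′(z) = σ(z)(1 − σ(z)). A sketch of backpropagation for one training sample with squared-error loss (names are illustrative, not from the demo's code):

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def gradients(x, y_true, p):
    """Backpropagation for a single sample with loss (y - y_true)^2."""
    # Forward pass, caching the activations needed for the backward pass
    h1 = sigmoid(p["w11"] * x + p["b1"])
    h2 = sigmoid(p["w21"] * x + p["b2"])
    y = sigmoid(p["w12"] * h1 + p["w22"] * h2 + p["b3"])
    # Error terms, using sigma'(z) = sigma(z) * (1 - sigma(z))
    d3 = 2.0 * (y - y_true) * y * (1.0 - y)   # output unit
    d1 = d3 * p["w12"] * h1 * (1.0 - h1)      # hidden unit 1
    d2 = d3 * p["w22"] * h2 * (1.0 - h2)      # hidden unit 2
    # Gradient of the loss with respect to every parameter
    return {"w11": d1 * x, "b1": d1,
            "w21": d2 * x, "b2": d2,
            "w12": d3 * h1, "w22": d3 * h2, "b3": d3}
```

A useful sanity check on gradients like these is to compare them against finite differences of the loss.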

Current Iteration: 0
Parameter Count: 7 total parameters
Gradient Magnitude: 0.000
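The status fields above (iteration, loss, gradient magnitude) come together in a batch gradient-descent loop. A self-contained sketch, with all names illustrative and the learning rate defaulting to the 0.10 shown in the controls:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train(xs, ys, p, lr=0.1, epochs=500):
    """Batch gradient descent; returns final (mean loss, gradient magnitude)."""
    for epoch in range(epochs):
        grads = {k: 0.0 for k in p}
        total_loss = 0.0
        for x, y_true in zip(xs, ys):
            # Forward pass
            h1 = sigmoid(p["w11"] * x + p["b1"])
            h2 = sigmoid(p["w21"] * x + p["b2"])
            y = sigmoid(p["w12"] * h1 + p["w22"] * h2 + p["b3"])
            total_loss += (y - y_true) ** 2
            # Backward pass: accumulate per-sample gradients
            d3 = 2.0 * (y - y_true) * y * (1.0 - y)
            d1 = d3 * p["w12"] * h1 * (1.0 - h1)
            d2 = d3 * p["w22"] * h2 * (1.0 - h2)
            for k, g in (("w11", d1 * x), ("b1", d1), ("w21", d2 * x),
                         ("b2", d2), ("w12", d3 * h1), ("w22", d3 * h2),
                         ("b3", d3)):
                grads[k] += g
        n = len(xs)
        # Average over the batch, then take one descent step
        for k in p:
            p[k] -= lr * grads[k] / n
        grad_mag = math.sqrt(sum((g / n) ** 2 for g in grads.values()))
    return total_loss / n, grad_mag
```

Plotting the returned loss per epoch reproduces the loss-vs-epoch curve tracked in the training panel.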
# | X Value | Y Value (True) | Y Predicted | Loss (Squared Error)