author	Raghuram Subramani <raghus2247@gmail.com>	2024-10-07 17:52:22 +0530
committer	GitHub <noreply@github.com>	2024-10-07 17:52:22 +0530
commit	205be32d664aacbc4920ec38230813be18dc7a37 (patch)
tree	c3ab3b0d60e5c050833f49a86c39c4801f7974ee
parent	93dcb1f4265407c8cbf322a0683eba4d4a5b483a (diff)
Update README.md (HEAD, main)
-rw-r--r--	README.md	30
1 file changed, 29 insertions(+), 1 deletion(-)
diff --git a/README.md b/README.md
index cff84f5..bffcc35 100644
--- a/README.md
+++ b/README.md
@@ -1,2 +1,30 @@
# autograd
-An implementation of autograd / backpropagation.
+A simple implementation of autograd / backpropagation.
+
+All you need to train a simple neural network with autograd is the code below.
+
+The code defines a dataset `X` and the expected output (or ground truth) `y`, then builds an `MLP`. Each training step zeroes the stored gradients (`.zero_grad()`), computes fresh gradients by backpropagation (`.backward()`), and applies them through `.optimise()` with a learning rate of `0.01`:
+
+```py
+from src.nn import MLP
+from src.loss import mse
+
+X = [
+ [ 0.0, 1.0, 2.0 ],
+ [ 2.0, 1.0, 0.0 ],
+ [ 2.0, 2.0, 2.0 ],
+ [ 3.0, 3.0, 3.0 ]
+]
+
+y = [ 1.0, -1.0, 1.0, -1.0 ]
+n = MLP(3, [ 4, 4, 1 ])
+
+for i in range(400):
+ pred = [ n(x) for x in X ]
+ loss = mse(y, pred)
+ loss.zero_grad()
+ loss.backward()
+ n.optimise(0.01)
+
+print(pred)
+```
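
For readers curious what `.optimise()` does with those gradients: the `src.nn` code is not part of this diff, so the sketch below is only an assumption. It models the step as plain gradient descent over the network's parameters, using hypothetical `Param` and `SketchMLP` names rather than the real API.

```py
# Assumption: `.optimise(lr)` performs a vanilla gradient-descent step.
# `Param` and `SketchMLP` are hypothetical stand-ins, not the actual src.nn classes.

class Param:
    """A single trainable scalar whose .grad is filled in by a backward pass."""
    def __init__(self, data):
        self.data = data
        self.grad = 0.0

class SketchMLP:
    def __init__(self, params):
        self.params = params  # flat list of Param objects

    def optimise(self, lr):
        # Step each parameter against its gradient.
        for p in self.params:
            p.data -= lr * p.grad
```

Under that assumption, each iteration of the README's loop nudges every weight by `-0.01 * grad`, which is why the predictions and loss are recomputed before every `backward()` call.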