Hi there! I’m Shrijith Venkatrama, founder of Hexmos. Right now, I’m building LiveAPI, a tool that makes generating API docs from your code ridiculously easy.
Replicating The micrograd Program in PyTorch
In PyTorch we can replicate our micrograd expression using the following code. Here, instead of the Value class, we use PyTorch's Tensor class.
```python
import torch

x1 = torch.Tensor([2.0]).double() ; x1.requires_grad = True
x2 = torch.Tensor([0.0]).double() ; x2.requires_grad = True
w1 = torch.Tensor([-3.0]).double() ; w1.requires_grad = True
w2 = torch.Tensor([1.0]).double() ; w2.requires_grad = True
b = torch.Tensor([6.8813735870195432]).double() ; b.requires_grad = True

n = x1*w1 + x2*w2 + b
o = torch.tanh(n)

print(o.data.item())
o.backward()

print('----')
print('x2', x2.grad.item())
print('w2', w2.grad.item())
print('x1', x1.grad.item())
print('w1', w1.grad.item())
```
The output is shown below, and it agrees with the output from the previous post:
```
0.7071066904050358
----
x2 0.5000001283844369
w2 0.0
x1 -1.5000003851533106
w1 1.0000002567688737
```
In a typical real-world project, instead of scalars, we’d use larger tensors.
For instance, we can define a 2×3 tensor as follows:
```python
torch.Tensor([[1,2,3],[4,5,6]])
```
Result:
```
tensor([[1., 2., 3.],
        [4., 5., 6.]])
```
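The same autograd machinery works on these larger tensors as well. Here is a minimal sketch (not from the original post) that reduces a 2×3 tensor to a scalar and backpropagates through it:

```python
import torch

# A 2x3 tensor; requires_grad=True tells autograd to track operations on it.
a = torch.tensor([[1., 2., 3.], [4., 5., 6.]], requires_grad=True)

# Reduce to a scalar so we can call backward() directly.
loss = (a * a).sum()
loss.backward()

print(a.grad)  # d(loss)/da = 2*a for every element
```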
By default, PyTorch stores numbers as float32. That is why the snippets above call .double(): it casts the tensors to float64, matching the Python floats that micrograd uses. For example, the following creates a float32 tensor:
```python
torch.Tensor([2.0])
```
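A quick way to confirm the dtypes (a minimal sketch, not from the original post):

```python
import torch

t = torch.Tensor([2.0])
print(t.dtype)           # torch.float32 -- PyTorch's default
print(t.double().dtype)  # torch.float64 -- matches Python floats, like micrograd
```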
Also, in PyTorch tensors do not require gradients by default. This is for efficiency: many nodes, such as input leaf nodes, usually do not need their gradients computed. For the tensors where we do want gradients, we must enable them explicitly:
```python
x1.requires_grad = True
```
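A small sketch of what this looks like in practice; passing requires_grad at construction time also works:

```python
import torch

a = torch.Tensor([2.0]).double()
print(a.requires_grad)   # False by default

a.requires_grad = True   # enable gradient tracking on this leaf tensor

# Alternatively, request gradients at construction time:
b = torch.tensor([2.0], dtype=torch.float64, requires_grad=True)
```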
Start Building Neural Networks on the Value Class
The goal is to build a two-layer MLP (Multi-Layer Perceptron).
```python
import random

class Neuron:
    def __init__(self, nin):
        # nin weights and one bias, each initialized to a random value in [-1, 1]
        # (Value is the autograd class built earlier in this series)
        self.w = [Value(random.uniform(-1, 1)) for _ in range(nin)]
        self.b = Value(random.uniform(-1, 1))
```
In the above code, random.uniform(-1, 1) generates a random float between -1 and 1, and nin is the number of inputs. So if we want, say, 10 inputs to our Neuron, we set nin=10.
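As a quick check (a hypothetical snippet, assuming the Neuron and Value classes above are defined):

```python
# Hypothetical check: a neuron with 10 inputs holds 10 weights and one bias.
n = Neuron(10)
print(len(n.w))  # 10 -- one weight per input
print(n.b)       # a single Value holding the bias
```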
For reference, a neuron multiplies each input by a weight, sums the products together with a bias, and passes the result through an activation function (tanh in our case).
The __call__ Mechanism in Python Classes
Consider this code:
```python
import random

random.uniform(-1, 1)  # a random float between -1 and 1

class Neuron:
    def __init__(self, nin):
        self.w = [Value(random.uniform(-1, 1)) for _ in range(nin)]
        self.b = Value(random.uniform(-1, 1))

    def __call__(self, x):
        print(x)
        return 0.0
```
We have a __call__ method, which lets objects of type Neuron be used as though they were functions. So we can call one like this:
```python
x = [1.0, 2.0]
N = Neuron(2)
N(x)
```
Result:
```
[1.0, 2.0]
0.0
```
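To see the mechanism in isolation, here is a minimal sketch of __call__ on a toy class that has nothing to do with neurons:

```python
class Adder:
    def __init__(self, amount):
        self.amount = amount

    def __call__(self, x):
        # add5(3) is translated by Python into add5.__call__(3)
        return x + self.amount

add5 = Adder(5)
print(add5(3))  # 8
```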
Implementing tanh(wx + b) on a Neuron
```python
import random

class Neuron:
    def __init__(self, nin):
        self.w = [Value(random.uniform(-1, 1)) for _ in range(nin)]
        self.b = Value(random.uniform(-1, 1))

    def __call__(self, x):
        # w * x + b
        # sum is initialized with the bias value (the weighted inputs are then added to it)
        act = sum((wi * xi for wi, xi in zip(self.w, x)), self.b)
        out = act.tanh()
        return out

x = [2.0, 3.0]
n = Neuron(2)
n(x)
```
I get an output like this:
```
Value(data=-0.6963855451596829, grad=0, label='')
```
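Note that sum() takes a start value as its second argument, so the accumulation begins at the bias. A plain-loop sketch of the same computation (weighted_sum is a hypothetical helper, not part of micrograd):

```python
def weighted_sum(ws, xs, b):
    # Equivalent to sum((wi * xi for wi, xi in zip(ws, xs)), b):
    # start from the bias, then add each weight * input product.
    act = b
    for wi, xi in zip(ws, xs):
        act = act + wi * xi
    return act
```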
As of now, the value will be different on every run, since the weights and bias are initialized with random values.
Defining a Layer of Neurons
The code for defining a layer of neurons is as follows:
```python
class Layer:
    def __init__(self, nin, nout):
        self.neurons = [Neuron(nin) for _ in range(nout)]

    def __call__(self, x):
        outs = [n(x) for n in self.neurons]
        return outs

x = [2.0, 3.0]
n = Layer(2, 3)
n(x)
```
For a layer, we take in the number of inputs and the number of outputs, and simply create a list of Neuron objects.
When the Layer is called, we just call each neuron object with the given input values.
The above code gives a result like this:
```
[Value(data=0.8813774949215492, grad=0, label=''),
 Value(data=0.9418974314812039, grad=0, label=''),
 Value(data=0.3765244335798038, grad=0, label='')]
```
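A quick sanity check (a hypothetical snippet, assuming the classes above): Layer(2, 3) holds three Neuron(2) objects, so calling it returns a list of three Values.

```python
layer = Layer(2, 3)
print(len(layer.neurons))       # 3 neurons in the layer
print(len(layer.neurons[0].w))  # each neuron holds 2 weights
```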
Defining a full MLP
Code:
```python
class MLP:
    def __init__(self, nin, nouts):
        sz = [nin] + nouts
        self.layers = [Layer(sz[i], sz[i+1]) for i in range(len(nouts))]

    def __call__(self, x):
        for layer in self.layers:
            x = layer(x)
        return x
```
You can see how the above turns nin and nouts into a list of layers with the right numbers of input and output neurons: for MLP(3, [4, 4, 1]), sz becomes [3, 4, 4, 1], producing Layer(3, 4), Layer(4, 4), and Layer(4, 1).
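A hypothetical sanity check of the resulting layer sizes (assuming the classes above are defined):

```python
m = MLP(3, [4, 4, 1])
print(len(m.layers))                               # 3 layers
print([len(layer.neurons) for layer in m.layers])  # [4, 4, 1]
```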
Using the MLP Class
```python
x = [2.0, 3.0, -1.0]
n = MLP(3, [4, 4, 1])
n(x)
```
Calling the MLP gives a single Value object: the final output of the forward pass.
We can visualize the whole expression graph with following:
```python
draw_dot(n(x))
```
The result is a huge expression graph – representing the whole expression with a single output node.
Reference
The spelled-out intro to neural networks and backpropagation: building micrograd – YouTube