The training data used here consists of the inputs x1 and x2. If either or both of x1 and x2 is 0, the target result is 0; if both x1 and x2 are 1, the target result is 1. This is the AND truth table, and these target results are what we want the perceptron to learn.

We will start by randomly initializing the two weights, w1 and w2, and then go into a loop. First, we calculate the weighted sum: ws = w1 * x1 + w2 * x2. Second, we apply an activation function, in this case a step function: if the resulting ws is greater than 1, the result is 1; otherwise, if ws is less than or equal to 1, the result is 0. Third, we determine the error: error = target result - result. Fourth, we adjust the weights: the adjusted w1 equals the learning rate times the error times x1, plus the original w1, and the same goes for w2. The learning rate is the rate at which the neural network learns, and it ranges from 0 to 1.

We repeat steps 1 through 4 for all vectors in the training data; one pass through all training vectors is one epoch. We also have an outer loop that runs additional epochs until the error is 0 for all training vectors in the final epoch. The objective is to determine which weights, in this case which w1 and w2, make the result equal the target result for all training vectors.
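As a sketch, the four steps above can be put together in a single self-contained Java program for the AND data. The class, method, and variable names here are my own, not necessarily the ones used in the video:

```java
import java.util.Random;

public class AndPerceptronSketch {
    static final double LEARNING_RATE = 0.05; // in (0, 1], as described

    // Repeats steps 1-4 over epochs until a full epoch has zero error.
    static double[] train(double[][][] trainingData) {
        Random random = new Random();
        // Randomly initialize w1 and w2 to numbers between 0 and 1.
        double[] weights = {random.nextDouble(), random.nextDouble()};
        boolean errorInEpoch = true;
        while (errorInEpoch) {              // outer loop: additional epochs
            errorInEpoch = false;
            for (double[][] vector : trainingData) {
                double[] x = vector[0];
                int target = (int) vector[1][0];
                // Step 1: weighted sum ws = w1*x1 + w2*x2.
                double ws = weights[0] * x[0] + weights[1] * x[1];
                // Step 2: step activation with a threshold of 1.
                int result = ws > 1 ? 1 : 0;
                // Step 3: error = target result - result.
                int error = target - result;
                if (error != 0) errorInEpoch = true;
                // Step 4: adjusted w = learningRate * error * x + original w.
                weights[0] = LEARNING_RATE * error * x[0] + weights[0];
                weights[1] = LEARNING_RATE * error * x[1] + weights[1];
            }
        }
        return weights;
    }

    public static void main(String[] args) {
        // AND training data: {inputs, target result} per vector.
        double[][][] andData = {
            {{0, 0}, {0}}, {{0, 1}, {0}}, {{1, 0}, {0}}, {{1, 1}, {1}}
        };
        double[] w = train(andData);
        System.out.println("Converged: w1=" + w[0] + ", w2=" + w[1]);
    }
}
```

At convergence the weights must satisfy w1 + w2 > 1 while w1 <= 1 and w2 <= 1, so the step threshold of 1 separates (1, 1) from the other three vectors; because AND is linearly separable under this fixed threshold, the loop terminates.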

Next, I'll demo the application before writing it step by step. We go from epoch 0, where the target result in this vector is 1 while the result is 0, giving an error of 1, to epoch 4, where the target result equals the result for all training vectors, so the error is 0 everywhere. If we run it again, we might instead go from epoch 0 to epoch 9

with the same outcome. Next, I'll start by creating a new Java project with two classes: a Perceptron, which is an artificial neuron modeling a biological one, placed in its own package, and a Driver class with a main method.

In the Perceptron class I'll define a three-dimensional array for the AND training data, a learning rate set to 0.05 for the purpose of this tutorial, and an initial-weights array where w1 and w2 are randomly initialized to numbers between 0 and 1. I'll also need three methods: one for calculating the weighted sum and one for applying the activation function, both returning 0 for now, and a third for adjusting the weights, returning null for now.

In the Driver class's main method, I'll pick up the three-dimensional training-data array and the weights array via perceptron.initialWeights, and I'll instantiate both the Driver class and the Perceptron class.

Coming back to the Perceptron class: in the calculate-weighted-sum method I define a weighted sum and return it, but before that I add a loop that accumulates the weighted sum as x1 * w1 + x2 * w2. For the apply-activation-function method we have a step function, so I define a result and return it: if the weighted sum is greater than 1, the result is 1; otherwise the result is 0. For the adjust-weights method, I have an adjustedWeights array that holds the adjusted w1 and w2 and return it, after calculating the adjusted values as learningRate * error * x1 + w1 and learningRate * error * x2 + w2. That's all we need for this class.

Now to the Driver class. Here I define an error flag and a while loop that keeps training the neural network until no errors remain. I have a printHeading method that prints a heading with the epoch number passed in; I define an epoch number starting from 0 and call printHeading for each epoch. For that epoch I loop through all vectors in the training data: I calculate the weighted sum, apply the activation function to it, define and calculate the error, and, with the error flag initially set to false, set the flag to true whenever the error is different from zero. I then define an adjustedWeights array and adjust the weights, set the error back to zero, call a printVector method, and, as the last thing in the loop, set the weights to the adjusted weights.

That should do it. Lastly, let me test-run the application: we go from epoch 0 to epoch 4.
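Here is a sketch of the finished application collapsed into one self-contained class. printHeading and printVector are reduced to plain printlns, and the identifiers are my guesses from the walkthrough above rather than the video's exact code:

```java
import java.util.Arrays;
import java.util.Random;

public class Driver {
    public static void main(String[] args) {
        // AND training data: each vector holds {inputs, target result}.
        double[][][] trainingData = {
            {{0, 0}, {0}}, {{0, 1}, {0}}, {{1, 0}, {0}}, {{1, 1}, {1}}
        };
        double learningRate = 0.05;
        Random random = new Random();
        double[] weights = {random.nextDouble(), random.nextDouble()};

        boolean errorFlag = true;
        int epochNumber = 0;
        while (errorFlag) {                   // train until an error-free epoch
            System.out.println("=== epoch " + epochNumber + " ===");    // printHeading
            errorFlag = false;
            for (double[][] vector : trainingData) {
                double[] x = vector[0];
                int target = (int) vector[1][0];
                double ws = x[0] * weights[0] + x[1] * weights[1];      // weighted sum
                int result = ws > 1 ? 1 : 0;                            // step activation
                int error = target - result;
                if (error != 0) errorFlag = true;
                System.out.println(Arrays.toString(x) + " target=" + target
                        + " result=" + result + " error=" + error);     // printVector
                double[] adjustedWeights = {
                    learningRate * error * x[0] + weights[0],
                    learningRate * error * x[1] + weights[1]
                };
                weights = adjustedWeights;    // last step: adopt adjusted weights
            }
            epochNumber++;
        }
    }
}
```

How many epochs get printed depends on the random initial weights, which is consistent with the epoch-4 and epoch-9 runs seen in the demo.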

Nice tutorial, thank you very much.

Excellent content! Do put some sort of sound-absorbing sponge under your mic (or laptop) if you're going to have it on your desk. The key strokes are pretty thumpy and distracting.

Can't find where to download the source code? On your site there's just the YouTube video.

Excellent!!! Can non-binary input be given?

I did not find the source code on your website. The only thing that's on your website is a reference to your YouTube video.

Does this have backwards propagation? It can't solve an XOR truth table.

Your code doesn't breathe!

This is perfect, thank you! I would love to see an XOR solution too

Where's the bias?

Hey thanks! I was looking for a video like this for a long time.

That's very good, thank you.

Neural Networks w/ JAVA – Tutorial 02 @ https://youtu.be/ZUFdrvQFlwE

Neural Networks w/ JAVA – Tutorial 03 @ https://youtu.be/-8Fd68XRxCY

Neural Networks w/ JAVA – Tutorial 04 @ https://youtu.be/UasXoLeMi10

Neural Networks w/ JAVA – Tutorial 05 @ https://youtu.be/fN_ZLtAjVqA

Neural Networks w/ JAVA (Tutorial 06) – Solve XOR w/ Hill Climbing @ https://youtu.be/I5eXGPYLrKU

Neural Networks w/ JAVA (Solve XOR w/ Simulated Annealing) – Tutorial 07 @ https://youtu.be/QNbLfMJ0598

Neural Networks w/ JAVA (Hopfield Network) – Tutorial 08 @ https://youtu.be/7d-O3ZcGnAo

Neural Networks w/ JAVA (Backpropagation 01) – Tutorial 09 @ https://youtu.be/qWLjKsgo3sE

Neural Networks w/ JAVA (Backpropagation 02) – Tutorial 10 @ https://youtu.be/PX0j1Txl8Gs

Awesome explanation! Simple and clean. Recommended.

Thank you very much… The explanation is very good 👍