100 thoughts on “Neural Networks (E01: introduction)”

  1. Sweet and simple, with a mathematical and geometrical representation of the function of the variables. Great video!

  2. Wait, why would you use a sigmoid, and why would you add a layer? (And why would you add a constant term b? That seems weird.)

  3. As a practitioner, I never had the privilege of being able to explain neural networks with so much ease to audiences who have little or no idea of how activation functions, biases and hidden layers work. You, my friend, are at the top of my list – a big thank you! I look forward to more content like this, keep up the excellent work!

  4. Best Neural Network explanation I have ever seen or read! 👏
    With your teaching skills, if you're not already, you should become a teacher!
    I'm sure you'd help a lot of students and raise very competent professionals!

  5. Sir, you are the first one I found who explains this. You revived my interest in learning neural networks and AI, thank you.

  6. What graphing software did you use there? And how do you visualize it like that? I really need this for my research.

  7. Wow… I never knew why the activation function was necessary, but only knew it needed to be there. Now I know! Thanks so much for making this very easy to understand!

  8. I mean, 3B1B's videos on neural networks are great, and they have some great explanations, but in this video, you explained the "network" part of a neural network in just the right way… in five minutes!!

  9. This is probably the best explanation I’ve ever seen of a neural network’s inner workings, and in only a few minutes.

  10. One of the best, if not the best, YouTube videos that visually represents what is happening to the neural network and how the activation function splits the data. Thank you.

  11. Seeing everyone comment on how well you explained this, I realized I am not the only one who thinks you did an amazing job!

  12. Excuse my French, but holy FUCK this is well done. The way you skip inessential bits so that the viewer can grasp the whole before going into details is great. I personally prefer this mode of education; even when I'm learning seriously, it becomes much faster to learn the details after understanding the goal.

  13. This particular visualization of the mathematical parts I've been studying for so long caused more than one ah-ha moment for me. I'm so grateful for your work. 3Blue1Brown had a great primer a while back; you two guys should totally collaborate on one. Thank you, liked, subscribed, and getting notified 😉

  14. I have watched so many NN teaching videos, but this is by far the most helpful and easiest-to-understand video on neural networks. Finally I understand how the weights and the bias work.
    Thank you so much.

  15. I love this video so much. The way you explained it so simply, using pure linear equations, is beautiful. Thank you so much for this video.

  16. This is great. I've seen a lot on neural networks from other people (like 3Blue1Brown) but this gave me some serious insight into the sigmoid function! Thanks 🙂

  17. Dude, make more videos please. I've seen many videos about neural networks and this one IS THE BEST. Great job.

  18. This visual example is actually the best. I've watched a learning course where this was explained, but I didn't understand how we build more complex equations for the decision at this point. The animations help me a lot!

  19. I never subscribed to anyone this fast!

    I've been self-studying ML and NNs for over 4 months, and only now do I have a clearer understanding of why biases and activation functions (except ReLU) are used!
    This video is gold.

  20. I can't thank you enough for this, brother. Finally I can go into neural networks. I am a Python coder and never looked forward to neural networks, because I didn't understand them from the core, and I thought learning them without knowing the core is just useless. But finally I can give it a try and go into the machine learning and neural network field. Thank you so much.

  21. I kept seeing graphs like these in AI videos and didn't know what they were until watching this video. Very nice. 🙂

  22. The necessity of the hidden layer remained unexplained,
    yet for those interested, any expansion between input and output relates to the degrees of freedom between elements.
    Too much depth degrades the machine's ability to learn,
    since ineffective back-propagation effectively leads to forgetting.
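
One rough way to see the back-propagation point above: the sigmoid's derivative never exceeds 0.25, and back-propagation multiplies roughly one such factor per layer, so a very deep stack of sigmoid layers passes vanishingly small gradients back to its earliest layers. A toy Python sketch (ignoring the weight terms that also enter the product):

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def sigmoid_derivative(x):
    s = sigmoid(x)
    return s * (1.0 - s)  # never larger than 0.25

# Back-propagation multiplies roughly one such factor per layer, so the
# gradient reaching the earliest layers shrinks quickly with depth.
for depth in (2, 5, 10, 20):
    grad = 1.0
    for _ in range(depth):
        grad *= sigmoid_derivative(0.0)  # 0.25 at the sigmoid's steepest point
    print(depth, grad)
```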

  23. No continuation of this series? I really wanted to complete the whole picture in my head, and you were the only one doing it right for me 🙁 Can I get paid private tutoring?

  24. Thank you for including these evolving decision boundary animations, it's something I haven't seen in explanations elsewhere. I'm still trying to understand how to adjust the complexity of the neural network to the problem, e.g. how many layers to use, but now I can see HOW more layers can increase the flexibility of decision boundaries.

  25. Yeah, this is by far the best visual example of some of the basic math and its effects in NNs. Great job, I will be watching the rest of the series and have subbed!

  26. I'm trying to understand the math here. Before you added the hidden layer, the math for each output was essentially r = X1*W1 + X2*W2 + b1 and p = X1*W3 + X2*W4 + b2. Then you added the hidden layer math (the a's; I don't know how long a YouTube comment can be, so I'll refer to them as a's), which took the place of the previous r/p outputs. My main question is about the part where the decision boundary starts to curve, the sigmoid function f(x) = 1/(1+e^-x). How does this play into the previously established math? I'm not the best at understanding things the first time, but can anyone (unlikely, as this is a year-old video) help me understand where he put this function? If I'm understanding the video correctly, the a's and the outputs r/p now go through that function, but how does the math work out? Is it f(a1*w7 + a2*w8 + a3*w9 + b4), with f(x) = 1/(1+e^-x) applied to that whole weighted sum? I understand functions in general, but I'd love a simple breakdown or explanation of this specific one. Thanks!
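
For anyone stuck on the same step: in a standard fully connected layer, the sigmoid wraps every weighted sum, once per hidden neuron and once more at the output, so the expression at the end of the comment is the usual form. A minimal Python sketch, assuming three hidden neurons as in the comment and using placeholder names (w1..w6, b1..b3) for the hidden-layer weights and biases:

```python
import math

def sigmoid(x):
    # f(x) = 1 / (1 + e^-x): squashes any weighted sum into the range (0, 1)
    return 1.0 / (1.0 + math.exp(-x))

def forward(x1, x2, W, b):
    # Hidden layer: each a_i is the sigmoid of its own weighted sum of the inputs.
    a1 = sigmoid(x1 * W["w1"] + x2 * W["w2"] + b["b1"])
    a2 = sigmoid(x1 * W["w3"] + x2 * W["w4"] + b["b2"])
    a3 = sigmoid(x1 * W["w5"] + x2 * W["w6"] + b["b3"])
    # Output: the same pattern applied to the hidden activations,
    # i.e. f(a1*w7 + a2*w8 + a3*w9 + b4) with f being the sigmoid above.
    return sigmoid(a1 * W["w7"] + a2 * W["w8"] + a3 * W["w9"] + b["b4"])

# Placeholder weights and biases, purely for illustration.
W = {f"w{i}": 0.5 for i in range(1, 10)}
b = {"b1": 0.1, "b2": -0.2, "b3": 0.0, "b4": 0.3}
print(forward(1.0, 2.0, W, b))
```

So the x in f(x) = 1/(1+e^-x) is simply whatever weighted sum is fed in: inputs times weights plus a bias for each hidden neuron, then hidden activations times weights plus a bias for the output.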

  27. I didn't find this introduction particularly easy or useful for a beginner. However, I did like it, because it showed a new "viewing angle" on the same topic, thus broadening my understanding.

  28. Never saw a better explanation of this. Even understanding it before, it was super cool to see it adjusted by hand with sliders, and how the activation function changes it.

  29. Being a deep learning researcher and having taken courses both in college and in MOOCs, I find this the best visual explanation of neural networks ever. Huge fan of your channel; I've been watching it for Unity content for a while now, and will now start watching your channel for everything. Keep up the great work, Sebastian.

  30. Is there a way to measure how "complicated" an output graph we can get? Initially using only linear combinations, we could only split it in half. I'd imagine that with more neurons, we could capture more and more complicated distributions. Is there anyone studying the "topology" so to speak of the output distribution, as a function of the neural network's shape?
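
On the "only split it in half" point above: stacking linear layers without an activation collapses into a single linear map, which is why the boundary stays one straight cut no matter how many neurons are added, until a non-linearity like the sigmoid appears. A small numpy check with hypothetical random weights:

```python
import numpy as np

rng = np.random.default_rng(0)

# Two "layers" with no activation function between them...
W1, b1 = rng.normal(size=(3, 2)), rng.normal(size=3)
W2, b2 = rng.normal(size=(2, 3)), rng.normal(size=2)

def two_linear_layers(x):
    return W2 @ (W1 @ x + b1) + b2

# ...collapse into one linear map, so the decision boundary is still a straight line.
W_combined = W2 @ W1
b_combined = W2 @ b1 + b2

x = rng.normal(size=2)
print(np.allclose(two_linear_layers(x), W_combined @ x + b_combined))  # True
```

Once a sigmoid sits between the layers, this collapse no longer happens and the boundary can bend; how far it can bend for a given network shape is roughly what gets studied under the name of a network's capacity or expressive power.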

  31. Hello, Sebastian,
    thank you for the nice example and the interesting videos. I want to ask which Python library you use to make such interesting animated graph visualizations?
    The plots look similar to https://www.desmos.com/, though.
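
For anyone who wants to reproduce a similar picture in Python (whatever tool the video itself uses), numpy plus matplotlib's contour functions are enough to shade the two output regions of a tiny network. A minimal sketch with made-up weights:

```python
import numpy as np
import matplotlib.pyplot as plt

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Made-up weights for a 2-input, 3-hidden-neuron, 1-output network.
W1 = np.array([[ 4.0, -3.0],
               [-2.0,  5.0],
               [ 3.0,  3.0]])
b1 = np.array([-1.0, 0.5, -2.0])
W2 = np.array([3.0, -4.0, 2.0])
b2 = 0.5

# Evaluate the network on a grid of (x1, x2) points.
xx, yy = np.meshgrid(np.linspace(-3, 3, 300), np.linspace(-3, 3, 300))
grid = np.stack([xx.ravel(), yy.ravel()], axis=1)
hidden = sigmoid(grid @ W1.T + b1)
output = sigmoid(hidden @ W2 + b2).reshape(xx.shape)

# Shade the two predicted regions; the 0.5 contour is the decision boundary.
plt.contourf(xx, yy, output, levels=[0.0, 0.5, 1.0], alpha=0.5)
plt.contour(xx, yy, output, levels=[0.5])
plt.xlabel("x1")
plt.ylabel("x2")
plt.show()
```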
