Build a deep neural network in 4 mins with TensorFlow in Colab

Author: Kevin Mason

52 thoughts on “Build a deep neural network in 4 mins with TensorFlow in Colab”

  1. When we are testing/verifying, it is easier for us to get everything right due to the relatively small testing size. When the testing size increases, variation also increases. As a result, our accuracy will go down.

  2. Your testing samples can also have different accuracies due to sampling error. The sample proportion doesn’t always reflect the true proportion (population proportion or the overall accuracy of the model)
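
    The sampling-error point above can be made concrete with the standard error of a sample proportion (the 0.96 accuracy and the 114-sample test size come from the discussion in this thread; the formula itself is standard):

```python
import math

# observed accuracy (sample proportion) and test-set size from the discussion
p_hat, n = 0.96, 114

# standard error of a sample proportion: sqrt(p * (1 - p) / n)
se = math.sqrt(p_hat * (1 - p_hat) / n)

# a rough 95% interval for the true accuracy: p_hat +/- 1.96 * se
low, high = p_hat - 1.96 * se, p_hat + 1.96 * se
print(f"standard error ~ {se:.3f}, 95% interval ~ ({low:.3f}, {high:.3f})")
```

    With only 114 test samples the interval is wide enough that a perfect-looking score is not strong evidence the true accuracy is 100%.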

  3. Great video 👍 Will use this to build a net.

    Because on the test data it was using some probability criteria other than ><0.5 ?

  4. I'd say the 114 samples on the test dataset are just fully separable by the model obtained.
    Prototyping a reduction using the eigenvectors of the Gram matrix showed that with 3 features the train dataset is already pretty separable, also valid for the test dataset (even with 2 features in this set). Based on that I expect that the first three layers of the neural network obtained are capable of transforming the space in such a way that the convex hulls of the two classes (in the test dataset) do not overlap in this space, making the data separable by the final layer.
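
    A minimal numpy sketch of the reduction described above, using random data as a stand-in for the actual 30-feature dataset (the eigendecomposition of the Gram matrix is the standard construction; the shapes here are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(20, 5))        # stand-in for the (centered) feature matrix

G = X @ X.T                         # Gram matrix over the samples
vals, vecs = np.linalg.eigh(G)      # eigenvalues in ascending order

# embed each sample using the top-3 eigenvectors, scaled by sqrt(eigenvalue)
X3 = vecs[:, -3:] * np.sqrt(np.clip(vals[-3:], 0.0, None))
print(X3.shape)
```

    Plotting the resulting 3-feature embedding colored by class is one quick way to eyeball how separable the two classes already are.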

  5. running tensorflow on the browser without installing anything, and even using a gpu version of it… man I feel like someone punched me so hard that I ended up in the future! Can I install cirq on this fella?

  6. 96% accuracy is seen when there is no additional loose condition like the 0.5 boundary. It is much bigger than the width of the possible delta of 0.04; that’s why the result is very accurate.

  7. Very clear and powerful tutorial. If you want to run this notebook locally on your machine to compare performance do not forget to comment or delete these two lines:
    from google.colab import files

    file = files.upload()
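
    One way to keep a single notebook working both in Colab and locally is to guard the Colab-only import, as sketched below (the local-loading branch is left as a placeholder you would fill in yourself):

```python
try:
    # this module only exists inside Colab
    from google.colab import files
    uploaded = files.upload()
    IN_COLAB = True
except ImportError:
    # running locally: load the dataset from disk here instead
    IN_COLAB = False

print("running in Colab:", IN_COLAB)
```
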

  8. I assume y_train and y_test are 0 or 1. The network learns to classify breast cancer by converging towards 0.0 or 1.0, but it may not be able to reach these two values exactly for every sample in the training data. In some cases it may only predict 0.96 instead of 1.0. These numerical differences still accumulate in the loss output, but don't make a difference in the final classification. Thanks a lot for the great educational videos on this channel.
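
    The point above can be sketched in plain Python: a prediction of 0.96 still incurs a small cross-entropy loss even though it thresholds to the correct class (the probabilities below are made up for illustration):

```python
import math

y_true = [1, 0, 1, 1]
y_prob = [0.96, 0.03, 0.88, 0.99]   # sigmoid outputs that never reach exactly 0/1

# binary cross-entropy accumulates the small residuals...
bce = -sum(t * math.log(p) + (1 - t) * math.log(1 - p)
           for t, p in zip(y_true, y_prob)) / len(y_true)

# ...but thresholding at 0.5 still classifies every sample correctly
y_pred = [1 if p >= 0.5 else 0 for p in y_prob]
print(f"loss = {bce:.4f}, predictions = {y_pred}")
```
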

  9. You're iterating over values instead of indices in the last cell!

    Replace line 4 with "for i in range(len(Y_pred)):" and the result should be more accurate… 😁
    Thanks for sharing your knowledge, BTW!
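
    A minimal sketch of the bug and the fix described above (toy values; the notebook's variable names are assumed):

```python
Y_pred = [1, 0, 1, 1]     # thresholded predictions
Y_test = [1, 0, 0, 1]     # ground-truth labels

# buggy version: iterates over the *values* of Y_pred, so Y_test[i] is
# indexed by a prediction (0 or 1), not by the sample's position
buggy = sum(1 for i in Y_pred if Y_test[i] == i)

# fixed version: iterate over indices so predictions line up with labels
correct = sum(1 for i in range(len(Y_pred)) if Y_test[i] == Y_pred[i])
print(buggy, correct)
```
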

  10. The test set is much smaller than the training set, so the training set has more room for error; the model is only 6% wrong there, so the probability that it gets 100% correct on the test set is high.

  11. Thank you for the video and notebook. I added the accuracy metric to the Keras compile method and found that the training accuracy reaches above 99% after 100 epochs. If you train it for a few more epochs it will reach 100%. Considering that the test dataset is considerably smaller it is no surprise that the test accuracy reaches 100%.

  12. and how do we implement tensorboard here to see how the networks is working and also see the training graphs to see if it is really overfitting or not ?

  13. 100% accuracy: is it because of the activation function (sigmoid) we used? What if we use leaky ReLU or tanh? Have to test that!!!

  14. Is it heavily overfitted? or exceptionally near "perfect" model? Maybe more data samples in the test set might result in a 100 per cent accuracy! Or adding the appropriate penalties.

  15. There could be two reasons :
    1. Either model is over-fitted.
    2. Or while testing also you are running with training data.

  16. For those of you wondering — the reason why I got 100% of the answers right at the end is because of a bug in the code — spotted by Bruno Fergani (thanks, Bruno!)

    Change the loop to for i in range(len(Y_pred)): instead of what it presently is, and you'll see the correct answers here.

    This, to me, is a great example of how we can, with the code surrounding a neural network, either train it poorly, or misinterpret its results.

    Typically, if you get close to 100% on anything with a NN you should instantly get suspicious. It's usually overfitting (as many of you guessed), but it can often be something else too.

  17. Test accuracy is 100% as the model has generalized from the training data and has not overfitted it. It can also be the case that it has not considered the outliers, as they reduce the model accuracy later.

  18. I did not understand. If I want to run it on my own machine, e.g. with Anaconda, it does not work, and it seems it has to be done in Colab… To use TensorFlow, are we obliged to use Colab?

  19. Thank you for this amazing tutorial. I just want to know what represents the benign cell is it 0 or 1?

  20. Hi there,
    I am wondering whether it is normal to have much slower performance in Colab (I am using the GPU version) than on my personal PC (using the CPU of an Intel Core i5-6300HQ @2.30GHz)?
    I expected colab to be much faster.
    Or am I missing some tricks?

  21. You know what Google Colab is missing? The ability to edit the file directly in the browser without having to download it, edit it, and re-upload it… Also, a better interface and user-friendly interaction is a must-have. Either way, Google Colab is still an amazing app to use.

  22. Apparently, I'm the only one that can't get this example to work. Specifically, on the line "Y_pred = [1 if y >= 0.5 else 0 for y in Y_pred]" I get the error "The truth value of an array with more than one element is ambiguous. Use a.any() or a.all()". I also get this same error on the code at the end of the file that is totaling up the results. I haven't a clue how to fix it, although I've spent the last few hours trying to. Why is it that no one else apparently encounters this error?
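
    A guess at the cause, sketched with numpy: that error is what numpy raises when a whole row (an array) lands in a boolean context. If Y_pred came back from Keras' predict() with shape (n, 1), flattening it first avoids the problem:

```python
import numpy as np

Y_pred = np.array([[0.97], [0.02], [0.51]])  # predict() often returns shape (n, 1)

# iterating over a 2-D array yields 1-D rows, and `row >= 0.5` inside an
# `if` raises "The truth value of an array ... is ambiguous"; ravel() fixes it
labels = [1 if y >= 0.5 else 0 for y in Y_pred.ravel()]
print(labels)
```
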

  23. Apart from overfitting, there can be an attribute in the dataset which influences the results of the NN; in other words, it may guide the results to be perfect. Will this be counted under overfitting too???

  24. Because there is less data in the test set. If you go with more data, accuracy will come down; if you test with triple or quadruple the amount of the training data, the accuracy will go down.

  25. Can anyone help me with the error "Unexpected token � in JSON at position 0" while I am uploading the Breastcancer.ipynb file?

  26. I'm getting a value error at
    if(y_test.at[i,0] == y_pred[i]):
    can someone help? Thanks

    I changed the line "for i in range(len(y_pred)):" still the error persists

  27. Hmm… let me try.
    The 96% accuracy is on the training data, NOT on the test data; the model was trained on 455 samples and tested on 114. Overfitting means it trains well on the training data but doesn't do well on the test data. So if the training accuracy was, say, 96% and during testing it was much worse, say 85%, then we could say the model overfits. This is not the case. It seems the opposite, which would be underfitting? But underfitting means it does not do very well during training, which is NOT the case either. So it is neither overfit nor underfit. So why? One reason I could think of is luck: it just so happens that the distribution of those 114 Y_test samples is just right. BUT this is also unlikely. The other thing I could think of is the selection of the threshold making Y_pred 1 or 0; in the example we are using 0.5, so anything above 0.5 becomes 1, otherwise 0. This 'kind of' zeroes out the error because of the 0.5 threshold. If I lower the threshold to, say, 0.1, it makes 1 error out of 113!
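
    The threshold experiment described above can be sketched like this (made-up probabilities; the point is only that the error count depends on where the cutoff sits, mirroring the observation that lowering it to 0.1 introduces an error):

```python
y_true = [1, 1, 0, 0, 1]
y_prob = [0.9, 0.6, 0.3, 0.02, 0.7]   # hypothetical sigmoid outputs

errors_at = {}
for threshold in (0.1, 0.5):
    y_pred = [1 if p >= threshold else 0 for p in y_prob]
    errors_at[threshold] = sum(t != p for t, p in zip(y_true, y_pred))
print(errors_at)
```

    Here the true-0 sample with probability 0.3 is classified correctly at the 0.5 cutoff but misclassified once the cutoff drops to 0.1.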

  28. When trying to run "classifier = Sequential()" myself, I got an error when using keras:

    "module 'tensorflow' has no attribute 'get_default_graph'"

    I fixed it by changing the keras imports to "tensorflow.keras":

    from tensorflow.keras.models import Sequential
    from tensorflow.keras.layers import Dense

    I believe it's because my version of tensorflow was 2.0.0
