This was the first Slice of Data presentation I have seen. Hearing how predictions for roads or images were created from different models and datasets was extremely interesting. I am not planning on becoming a statistics or computer science major, so I am not very knowledgeable in the field, and, to be completely honest, a lot of the details of the presentation were lost on me. Regardless, the presentation connected with several of the topics we learned in class, which made some points easier to understand.

Some aspects of the presentation that I didn't really understand were "freezing the layers" and convolutional neural networks. I'm sure she did a great job explaining those topics with examples, but I think they were simply too advanced for me to follow at the time.
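After looking "freezing the layers" up afterward, here is my own rough sketch of what it might mean in practice. This is a Keras example I put together myself, not her actual code, and the pre-trained network and the road-classification task are just illustrations:

```python
# A rough sketch (not the presenter's code) of "freezing the layers"
# of a convolutional neural network, using Keras.
from tensorflow import keras

# Start from a network already trained on ImageNet (illustrative choice).
base = keras.applications.MobileNetV2(
    input_shape=(224, 224, 3), include_top=False, weights="imagenet"
)

# "Freezing" means these pre-trained weights stop updating during training.
base.trainable = False

# Only this new classification head learns from the new dataset
# (e.g., good roads vs. bad roads).
model = keras.Sequential([
    base,
    keras.layers.GlobalAveragePooling2D(),
    keras.layers.Dense(1, activation="sigmoid"),  # binary: good vs. bad road
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
```

As I understand it, the frozen layers keep what they already learned, while only the new layers train on the new data.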

One part of the presentation where I could catch my breath was the beginning, where she discussed neural networks. It was really cool listening to her examples of model fitting and hearing how she struggled to create a strong model across the numerous projects she's done. I think that was one of the strongest attributes of her presentation. Not only did she thoroughly explain topics such as freezing layers and adjusting weights, but she also grounded them in her own projects: one classifying good roads vs. bad roads, and another predicting genres from numerous sonic variables. This was super cool because, as an aspiring Marketing major, I haven't worked on any data science or computer science projects and haven't been able to see projects like these behind the scenes.
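To make "adjusting weights" concrete for myself, I later wrote this toy example (entirely my own, not from the talk) of a single weight being nudged by gradient descent until the model's predictions match the data:

```python
# Toy illustration of "adjusting weights": repeated gradient-descent
# steps for a single weight on squared error.
import numpy as np

x = np.array([1.0, 2.0, 3.0])  # inputs
y = np.array([2.0, 4.0, 6.0])  # targets (true relationship: y = 2x)
w = 0.0                        # start with a bad weight
lr = 0.1                       # learning rate

for step in range(50):
    pred = w * x                        # model's current predictions
    grad = 2 * np.mean((pred - y) * x)  # gradient of mean squared error w.r.t. w
    w -= lr * grad                      # adjust the weight downhill

print(w)  # converges toward 2.0
```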

I also enjoyed the discussion of stacked generalization and how different models can be combined to "vote" and predict more accurately. It seems like a lot of work to implement multiple models, but it is really interesting to think about the possible outcomes. She also talked about Model Inception, which is an even cooler concept: feeding one model's predictions into another model for yet another prediction, which leads down a seemingly endless rabbit hole of answers.
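Curious, I looked up stacked generalization afterward. Here is a minimal sketch of the idea using scikit-learn; it is my own illustration with made-up data, and her actual models and datasets were different:

```python
# A minimal sketch of stacked generalization with scikit-learn.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

X, y = make_classification(n_samples=500, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Several base models each make their own predictions...
base_models = [
    ("forest", RandomForestClassifier(random_state=0)),
    ("svm", SVC(probability=True, random_state=0)),
]

# ...and a final model learns how to combine their outputs, which is the
# "predictions fed into another model" idea she described.
stack = StackingClassifier(estimators=base_models, final_estimator=LogisticRegression())
stack.fit(X_train, y_train)
print(stack.score(X_test, y_test))
```

The final LogisticRegression is the part that "votes": it learns how much to trust each base model's predictions.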

One final aspect I liked about her presentation was the Q&A section. While answering questions about her lack of dropout (which I hadn't heard of before) and data augmentation, she admitted that adding dropout would have improved her model. She finished her project but knows there are still many ways to improve it, which was cool to see. She explained that data augmentation didn't help in her case, but that it is still a very valuable technique for helping a model generalize.
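After the talk I looked up what dropout actually is. Here is a small sketch of what adding it might look like; this is my guess at the kind of change she meant, written in Keras, not her actual code:

```python
# A small sketch of adding dropout layers to a simple network in Keras.
from tensorflow import keras

model = keras.Sequential([
    keras.Input(shape=(20,)),            # 20 input features, just as an example
    keras.layers.Dense(128, activation="relu"),
    keras.layers.Dropout(0.5),           # randomly zero half the units each training step
    keras.layers.Dense(64, activation="relu"),
    keras.layers.Dropout(0.5),           # helps keep the model from overfitting
    keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
```

Randomly turning off units during training forces the network not to rely too heavily on any single weight, which, from what I read, is why it helps against overfitting.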