What papers or textbooks do I need to read to have all the basics / background knowledge to use PyTorch and understand what I am doing based solely on the documentation PyTorch provides?
I am trying to understand how an example I just coded using PyTorch actually works: how the convolutional network works, what the arguments of conv2d (or whatever it's called) mean, and what ReLU is. I am digging through the documentation, but it seems I am missing a lot of basics.
My best bet was to read papers, but since this is already a couple of years into the whole deep learning thing, it is quite a challenge to identify the foundational papers among the many that just repeat them.
If you haven't found them yet, I think some 3Blue1Brown videos might be helpful.
It's important to get an understanding of how the big-picture stuff works conceptually, but realistically you will probably just be making minor modifications to existing frameworks. The frameworks have really ended up being almost more important in these most recent vintages of models, whereas the previous generations of models were very much architecture solutions.
So in that regard, it's more important to focus on understanding the frameworks around self-supervised learning, attention, generative and discriminative approaches, etc.
After that, maybe you could answer a question for me.
What is it you want to do? Do you want to build models? Do you want to develop frameworks? Do you want to work on algorithms?
Because each of these really requires its own skill set, and while they have some overlap, most people don't do everything.
I am trying to understand a new competitor to PyTorch. My goal is to contribute, and to be able to build my own some day in the future.
I mean that’s a pretty massive undertaking.
If that’s your goal, don’t bother with pytorch at all.
Start by implementing the individual pieces required for a simple machine learning model from scratch (numpy only).
You need to learn and be able to code backpropagation, Adam, sigmoid, etc. I can't remember them all offhand, but it's maybe 4 or 5 different functions in total.
There are many tutorials for this. If you need me to, I can link you to some.
This is a great way to get the basics down. However, be aware that things like PyTorch are ultimately collaborative projects involving thousands of contributors incorporating advancements and research from all kinds of sources.
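To make that concrete, here is one possible numpy-only sketch of the pieces mentioned above: a sigmoid activation, a forward pass, backpropagation, and a plain gradient-descent update (Adam is essentially a fancier rescaling of these same gradients). The network shape, learning rate, and XOR toy problem are my own picks for illustration, not taken from any particular tutorial:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def mse(pred, target):
    return float(np.mean((pred - target) ** 2))

# Toy problem: learn XOR with a tiny 2-4-1 network.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(size=(2, 4)); b1 = np.zeros(4)
W2 = rng.normal(size=(4, 1)); b2 = np.zeros(1)

initial_loss = mse(sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2), y)

lr = 1.0
for _ in range(5000):
    # Forward pass.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # Backward pass (chain rule), using sigmoid'(x) = s * (1 - s).
    # Constant factors of the MSE gradient are absorbed into the learning rate.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    # Plain gradient-descent update (Adam would adaptively rescale these).
    W2 -= lr * h.T @ d_out; b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h;   b1 -= lr * d_h.sum(axis=0)

final_loss = mse(sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2), y)
print(f"loss: {initial_loss:.3f} -> {final_loss:.3f}")
```

The whole exercise is maybe 30 lines, which is exactly why it's such a good way to internalize what frameworks like PyTorch automate for you.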
Thanks, your comments were very helpful! Yeah, I had no idea, as I am still ignorant of the scope of things.
Let me know if you need links or support. Most of it you can Google.
Honestly, I don’t think that there’s room for a competitor until a whole new paradigm is found. PyTorch’s community is the biggest and still growing. With their recent focus on compilation, not only are TF and Jax losing any chance at having an advantage, but the barrier to entry for new competitors is becoming much higher. Compilation takes a LOT of development time to implement, and it’s hard to ignore 50-200% performance boosts.
Community size tends to ultimately drive open source software adoption. You can see the same with the web frameworks - in the end, most people didn’t learn React because it was the best available library, they learned it because the massive community had published so many tutorials and driven so many job adverts that it was a no-brainer to choose it over Angular, Vue, etc. Only the paradigm-shift libraries like Svelte and Htmx have had a chance at chipping away at React’s dominance.
I wouldn't focus on foundational papers; the current phase of deep learning is far enough along that there are tutorials and resources that distill how these models work much better.
I would actually recommend you look into books on deep learning, or something like a Udemy course (Harvard and Stanford may also have free courses online, but I've never been a fan of their pacing). I can send you some recommendations if you want, but that's probably the best/fastest way.
I know you said you couldn’t find what you were looking for in the docs, but just in case you were looking in the wrong place:
Besides the convolution operator, I believe all the math should have been covered in high school (summation, max, and basic arithmetic). And convolution is also just defined in terms of these same operations, so you should be able to understand the definition (see the discrete definition on the Wikipedia page, under the "Cross-correlation of deterministic signals" section).
The math does look daunting the first time you encounter it (I've been there), and sometimes all you really need is confirmation that you already have all the requisite knowledge.
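To illustrate the point that it really is just multiplication and summation, here is a naive numpy sketch of the discrete 2D operation (technically cross-correlation, which is what deep-learning libraries compute under the name "convolution"). The example image and kernel values are mine, chosen so the result is easy to check by hand:

```python
import numpy as np

def conv2d_naive(image, kernel):
    """'Valid' 2D cross-correlation: slide the kernel, multiply, and sum."""
    H, W = image.shape
    kH, kW = kernel.shape
    out = np.zeros((H - kH + 1, W - kW + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            # Elementwise product of the window with the kernel, then a sum.
            out[i, j] = np.sum(image[i:i + kH, j:j + kW] * kernel)
    return out

image = np.arange(16.0).reshape(4, 4)   # a 4x4 "image" with values 0..15
kernel = np.array([[1.0, 0.0],
                   [0.0, -1.0]])        # difference along the diagonal
result = conv2d_naive(image, kernel)
print(result)   # every entry is -5: each pixel minus its lower-right neighbor
```

Real libraries replace the Python loops with highly optimized kernels, but the arithmetic is exactly this.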
Thank you! Yeah, I found Conv2d and ReLU on PyTorch's home page. I am struggling with the arguments that Conv2d accepts, and I just realized I need to refresh my linear algebra first.
I'm relearning about Hermitian, transposed, and inverted matrices. Tbh, I remembered how to multiply and about the determinant and all that, but there is a lot that I forgot. So I am digging through the Matrix Cookbook currently, while also reading a book on deep learning in parallel.
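For what it's worth, the numeric arguments of Conv2d (kernel_size, stride, padding) are mostly shape bookkeeping rather than deep linear algebra; PyTorch's docs give a one-line output-size formula, sketched here in plain Python (the layer sizes in the comments are hypothetical examples of mine). And ReLU, the other op you mentioned, is just max(0, x) applied elementwise:

```python
# Output spatial size along one dimension, per the formula in PyTorch's
# Conv2d documentation (ignoring dilation): floor((n + 2p - k) / s) + 1.
def conv_output_size(n, kernel_size, stride=1, padding=0):
    return (n + 2 * padding - kernel_size) // stride + 1

# ReLU is just max(0, x), applied to each element independently.
def relu(x):
    return max(0.0, x)

# A hypothetical layer like nn.Conv2d(3, 16, kernel_size=3, padding=1)
# keeps a 32x32 input at 32x32:
print(conv_output_size(32, 3, stride=1, padding=1))   # 32
# while stride=2 halves the spatial size:
print(conv_output_size(32, 3, stride=2, padding=1))   # 16
```

The remaining Conv2d arguments (in_channels, out_channels) just say how many feature maps go in and come out, so you may need less linear algebra refresher than you think.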
I am trying to find a way to get hold of and read this paper: Krizhevsky, A., Sutskever, I. & Hinton, G. ImageNet classification with deep convolutional neural networks. In Proc. Advances in Neural Information Processing Systems 25, 1090-1098 (2012)
but have failed so far…
Paper:
https://sci-hub.se/https://doi.org/10.1145/3065386