Machine Learning

This node captures miscellaneous information about the sprawling root topic that is ML. I plan to accumulate and assimilate what I learn from various domains here in a uniform manner: notebooks from completed Kaggle competitions, blogs, papers, courses - anything worthwhile goes in. I don't plan on incorporating complete book walkthroughs here - I'll reference books rather than mapping what I read verbatim. Wikipedia is a good quick reference, so I'll link to it too rather than retyping what's within convenient reach.

1. Misc nodes

1.1. Production

1.2. No Free Lunch

  • no single model outperforms all other models across all possible problems; averaged over every problem, all learning algorithms perform equally, so a good model choice has to exploit the structure of the problem at hand
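A toy illustration of the idea (my own sketch, not from any particular source): a linear classifier handles linearly separable data but is stuck at chance level on an XOR-style problem, where a shallow decision tree succeeds - neither model dominates everywhere.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier

# XOR-style problem: no linear decision boundary exists.
X_xor = np.array([[0, 0], [0, 1], [1, 0], [1, 1]] * 25, dtype=float)
y_xor = np.array([0, 1, 1, 0] * 25)

linear = LogisticRegression().fit(X_xor, y_xor)
tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X_xor, y_xor)

acc_linear = linear.score(X_xor, y_xor)  # near chance: no linear boundary fits
acc_tree = tree.score(X_xor, y_xor)      # two axis-aligned splits separate XOR
```

Swap the dataset for a linearly separable one and the linear model does fine; the point is that the ranking of models depends on the problem.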

1.3. Occam's Razor

  • the preference for the simplest model that explains the data adequately; part of why traditional machine learning methods are still relevant with Deep Learning on the rise.

1.4. Interpretable ML

  • ML models are generally black boxes: inexplicable representations of the data they were trained on.
  • understanding why a model favors certain outputs for a given set of input features is a domain in itself.
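One simple, model-agnostic technique from this domain is permutation feature importance: shuffle one feature at a time and measure how much the model's score drops. A minimal sketch on synthetic data (my own example, not from any source) - feature 0 carries the signal and feature 1 is pure noise:

```python
import numpy as np
from sklearn.inspection import permutation_importance
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 2))      # feature 0 determines the label,
y = (X[:, 0] > 0).astype(int)      # feature 1 is unrelated noise

model = LogisticRegression().fit(X, y)

# Shuffle each feature in turn; a large drop in accuracy means
# the model's predictions depend on that feature.
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
imp = result.importances_mean
```

Here `imp[0]` should dwarf `imp[1]`, recovering what the model actually relies on without opening it up.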

1.6. Universal Approximation Theorem

  • any continuous function (on a compact domain) can be approximated to the desired level of accuracy by a feed-forward network with a single hidden layer, given enough hidden units and suitable weights
  • a foundational theorem behind Deep Learning's rise; note it guarantees such a network exists, not that training will find it
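A minimal sketch of the theorem in action, assuming nothing beyond NumPy: freeze a random single hidden tanh layer and fit only the output weights by least squares. With enough hidden units, the network reproduces sin(x) closely on the sampled points - the random weights and scales below are arbitrary choices of mine.

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(-np.pi, np.pi, 200).reshape(-1, 1)
y = np.sin(x).ravel()

# One hidden tanh layer with fixed random weights; only the
# output layer is fit, by ordinary least squares.
n_hidden = 200
W = rng.normal(scale=2.0, size=(1, n_hidden))
b = rng.normal(scale=2.0, size=n_hidden)
H = np.tanh(x @ W + b)                        # hidden activations
w_out, *_ = np.linalg.lstsq(H, y, rcond=None)

mse = np.mean((H @ w_out - y) ** 2)           # fit error on the sample grid
```

This is the "existence" half of the theorem made concrete: a wide enough single hidden layer has the capacity to represent the target closely, even before any gradient-based training.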

2. Foundational Papers

A series oriented towards summarizing some important foundational reads in the machine learning community. I'll write about them formally on Bits2Nats once I incorporate them into the zettelkasten here. Do note that these are papers, not textbooks - I won't be summarizing textbooks, if that wasn't obvious.

3. Foundational Texts

A collection of foundational textbooks I'd like to read and incorporate into the notes here slowly, over a long duration. Reading them all at once might not be the best strategy - the burnout from too many technical reads packed close together is quite real. Do note that I'm collating the foundational texts of Deep Learning and Reinforcement Learning here as well.

3.2. Introduction to Statistical Learning (book)

3.3. Elements of Statistical Learning (book)

3.4. AI: A Modern Approach (book)

3.5. Deep Learning (book)

3.6. RL: An Introduction (Sutton and Barto) (book)

4. Sentinels

4.1. spaCy

  • industrial-grade NLP tooling
  • exploring the library via the spaCy 101 guide
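A first-steps sketch in the spirit of spaCy 101: a blank English pipeline gives you the rule-based tokenizer without downloading a trained model (no tagger or parser here - those need a pretrained pipeline like `en_core_web_sm`).

```python
import spacy

# A blank English pipeline: tokenizer only, no trained components,
# so no model download is required.
nlp = spacy.blank("en")
doc = nlp("Let's tokenize this!")
tokens = [t.text for t in doc]
```

The tokenizer already applies English-specific rules, e.g. splitting the contraction "Let's" into two tokens.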

4.2. TextaCy

4.3. DisplaCy

4.4. sklearn

  • traditional ML algorithms and preprocessing utilities
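A minimal sketch of the usual sklearn workflow on the bundled iris dataset: chain the preprocessing and the model in a pipeline so the scaler is fit only on training data, avoiding leakage into the test split.

```python
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_iris(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Scaler and classifier chained: fit() fits both in order,
# score() applies the same scaling to the held-out split.
clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
clf.fit(X_tr, y_tr)
acc = clf.score(X_te, y_te)
```

The same pipeline object can be dropped into `cross_val_score` or `GridSearchCV` unchanged, which is most of the reason to use pipelines at all.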

4.5. Duckling

  • Language, engine, and tooling for expressing, testing, and evaluating composable language rules on input strings.
  • https://github.com/facebook/duckling
    • a Haskell library, but wrappers exist in other languages.

4.6. sense2vec

4.7. The Hundred Page Machine Learning Book

  • supposedly a dense reference for most of the ML essentials I need to know
    • reading it and documenting it densely into nodes of this web
  • this is a sentinel ref for those nodes to refer back to as an index
  • notes from this book are rooted in the node: the100pagemlbook
Tags::root:transient: