Udacity. Intro to Machine Learning: Pattern Recognition for Fun and Profit
MIT 6.S099: Artificial General Intelligence
Lecture Series on Artificial Intelligence by Prof. P. Dasgupta, Department of Computer Science & Engineering, Indian Institute of Technology, Kharagpur
DeepMind. Reinforcement Learning Course by David Silver
Stanford Winter Quarter 2016 class: CS231n: Convolutional Neural Networks for Visual Recognition.
Deep learning addresses feature engineering to some extent, but it isn’t always the best choice if you don’t have image or text data (e.g., tabular datasets from databases or log files) or many training examples.
I’m the developer of a library called Featuretools (https://github.com/Featuretools/featuretools), which is a good tool to know for automated feature engineering. Our demos are also a useful way to learn, using some interesting datasets and problems: https://www.featuretools.com/demos
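To make the idea concrete, here is a minimal stdlib-only sketch of the kind of per-entity aggregation features that automated feature engineering produces over relational/tabular data. The feature names mimic the aggregation-primitive style, but this is an illustration, not the Featuretools API:

```python
from collections import defaultdict
from statistics import mean

# Toy transaction log: (customer_id, amount) rows, as you might
# pull from a database table or a log file.
transactions = [
    ("c1", 10.0), ("c1", 25.0), ("c2", 5.0),
    ("c2", 7.5), ("c2", 12.0), ("c3", 40.0),
]

def aggregate_features(rows):
    """Build per-customer aggregate features (count, sum, mean, max)
    from a flat transaction table -- the kind of features an automated
    feature-engineering tool generates across related tables."""
    by_customer = defaultdict(list)
    for cid, amount in rows:
        by_customer[cid].append(amount)
    return {
        cid: {
            "COUNT(transactions)": len(amounts),
            "SUM(transactions.amount)": sum(amounts),
            "MEAN(transactions.amount)": mean(amounts),
            "MAX(transactions.amount)": max(amounts),
        }
        for cid, amounts in by_customer.items()
    }

features = aggregate_features(transactions)
```

An automated tool applies many such primitives across every relationship in the data and stacks them, which is what makes it useful when you have relational tables rather than images or text.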
Fast.ai was fine, but I felt like most of my learning for the things I cared about came from reading research papers, watching Karpathy's CS231n lectures, and blog posts that went into detail on particular concepts.
But when I felt confused about certain concepts, Geron's book did a pretty good job of explaining things slowly and in great detail, especially with respect to the code he wrote. It's still a book I'll pick up for 20-40 minutes every other day to refresh my memory of how something works.
Funnily enough, I've spent the last few months reading Sutton and Barto's Reinforcement Learning: An Introduction (along with Silver's lectures on DeepMind's YouTube channel) and only just realized Geron touches on RL a little in the latter part of the ML book.
I went through the first phase of the course as an intro to AI/DL and thought it was really great from a high-level perspective. If you have a decent understanding of Python you'll have a working model running on AWS within the first few hours of the course which is very rewarding.
It does a better job than I expected of explaining the underlying intuition of the math, but it doesn't dive deep into the actual formulas. There are obviously tradeoffs to this approach, and if you want to continue in the field you'll need to fill in this background somehow, but as far as getting your hands dirty and understanding the basics, I really liked the fast.ai approach.
andrew ng's machine learning course: https://www.coursera.org/learn/machine-learning
to get up to date on convnet architectures
Fei-Fei Li and Karpathy's cs231n: https://cs231n.github.io/
if you want to go deep
geoff hinton's neural networks for machine learning coursera: https://www.coursera.org/learn/neural-networks
it is not a full course, but more an introduction.