Chris: I spent last week in San Francisco on a short project that included a trip to GTC, NVIDIA's conference on GPUs and machine learning. A few quick takeaways:
- Potrero Hill is beautiful. I can't figure out why the Mission is considered more hip.
- NVIDIA has brilliantly cornered this market. When you need machine learning hardware, you buy NVIDIA, without exception. I'd estimate they're 2-5 years ahead of the competition (Intel, AMD?), given their DGX-1 machine learning supercomputer and other feats. I'm really curious what signals they saw that justified going all-in (to the tune of multiple billions of dollars) on machine learning 5-10 years ago.
- It makes sense to go all-in on machine learning. Most people are simply unaware of how fast this field is progressing, and even the experts are continually being surprised by the pace of innovation. For instance, NVIDIA's DAVE-2 is a deep neural net that can drive a car (fairly well, I believe). It isn't a collection of hand-designed classic robotics components like PID controllers, inverse kinematics, computer vision modules, pedestrian detectors, and so on. It's a single network that learned how to drive; there's a rough sketch of the idea below.
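To make "end to end" concrete, here's a minimal sketch, in PyTorch, of a DAVE-2 / PilotNet-style network. The layer shapes follow NVIDIA's "End to End Learning for Self-Driving Cars" paper; the class name, the random frame, and the toy training step are all illustrative, not NVIDIA's actual code.

```python
# A minimal sketch (not NVIDIA's code) of an end-to-end steering
# network in the spirit of DAVE-2 / PilotNet: raw camera pixels in,
# a single steering command out.
import torch
import torch.nn as nn

class SteeringNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            # Input: a 3 x 66 x 200 camera frame (YUV planes in the paper)
            nn.Conv2d(3, 24, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(24, 36, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(36, 48, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(48, 64, kernel_size=3), nn.ReLU(),
            nn.Conv2d(64, 64, kernel_size=3), nn.ReLU(),
        )
        self.regressor = nn.Sequential(
            nn.Flatten(),
            nn.Linear(64 * 1 * 18, 100), nn.ReLU(),
            nn.Linear(100, 50), nn.ReLU(),
            nn.Linear(50, 10), nn.ReLU(),
            nn.Linear(10, 1),  # predicted steering angle
        )

    def forward(self, frame):
        return self.regressor(self.features(frame))

# Training is plain supervised regression: imitate the human driver.
net = SteeringNet()
frame = torch.randn(1, 3, 66, 200)    # one dashboard-camera frame
human_angle = torch.tensor([[0.1]])   # the steering angle the human used
loss = nn.functional.mse_loss(net(frame), human_angle)
```

The striking part is what's absent: no lane detector, no planner, no controller. The network is shown camera frames paired with a human's steering angles and left to figure out the rest.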