In 2016, almost all machine learning based on artificial neural networks (ANNs) ran on a combination of standard graphics processing unit (GPU) chips and central processing unit (CPU) chips in large data centres.
The expected growth in the use of field-programmable gate arrays (FPGAs) and application-specific integrated circuits (ASICs) is likely to dramatically increase the use of machine learning, as these new kinds of chips enable applications to use less power while becoming more responsive, flexible and capable. This, in turn, is likely to expand the addressable market.
Growth should continue beyond 2018. The current leader in GPUs for machine learning in the data centre has publicly stated that it anticipates the total available market for both training and inference acceleration to reach $26 billion by 2020, which would translate into many millions of chips of various kinds per year, though probably not tens of millions.
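The step from a $26 billion market to a rough chip count is simple division by an assumed average selling price. The prices below are illustrative assumptions for the sake of the arithmetic, not figures from the prediction:

```python
# Back-of-envelope: chip volumes implied by a $26 billion accelerator market.
# The average selling prices (ASPs) are purely illustrative assumptions.
market_usd = 26e9  # stated total available market for training + inference, 2020

for asp in (3_000, 5_000, 10_000):  # assumed ASP per accelerator chip, in USD
    units = market_usd / asp
    print(f"ASP ${asp:>6,}: ~{units / 1e6:.1f} million chips per year")
```

At these assumed prices the implied volume falls between roughly 2.6 and 8.7 million units a year, consistent with "many millions, though probably not tens of millions".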
When it comes to machine learning, big changes to the machine (in this case, the chips) are likely to cause big changes in the industry. After moving from CPU-only to CPU-plus-GPU solutions, the industry exploded in usefulness and ubiquity; using chips that are 10 to 50 times better will do that.
If the various FPGA and ASIC solutions offer similar order-of-magnitude improvements in processing speed, efficiency, price or any combination thereof, a similar explosion in utility and adoption seems probable.
That said, machine learning is good at certain tasks and limited at others. These new chips are likely to allow companies to perform a given level of ML using less power at lower cost, but on their own they are unlikely to deliver better or more accurate results.
It is not just the chips that are getting better: multiple vectors of progress promise to unlock more intensive use of machine learning in the enterprise.
The key improvements, discussed in the companion prediction Machine learning: things are getting intense, include automating data science, reducing the need for training data, better explaining the results of machine learning, and deploying machine learning locally. Some of these advances make machine learning easier, cheaper or faster (or a combination of all three).