The specialized hardware that accelerates machine learning algorithms is moving out of the data center and into high-end mobile phones. Soon it could be in everyone's pocket, if chip designer Imagination Technologies has its way.
Imagination made a name for itself as the designer of the graphics accelerators in Apple's smartphones and tablets. Now it's designed a new range of processing cores that chip makers can use to accelerate artificial intelligence algorithms in their own hardware.
That, in turn, means app developers will have access to powerful local processing capabilities without the need for network access -- potentially allowing the use of artificial intelligence-based image recognition and diagnostic tools on industrial sites, in remote areas, or after natural disasters.
The renaissance in AI research is built on the use of neural networks to draw inferences from data, and then to apply that learning to new situations. The process is computationally intensive, especially in the learning phase, but the repetitive calculations involved can be significantly sped up by specialized hardware.
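The repetitive work in question is mostly multiply-accumulate arithmetic: each layer of a neural network is, at its core, a matrix product followed by a simple nonlinearity. A minimal sketch of a single (hypothetical, toy-sized) layer's inference step, using NumPy, shows the kind of operation these accelerators parallelize in hardware:

```python
import numpy as np

# Hypothetical toy example: one dense layer of a neural network.
# Real networks chain many such layers; the dominant cost is the
# multiply-accumulate work inside the matrix product, which is
# exactly what dedicated neural hardware speeds up.

rng = np.random.default_rng(0)
weights = rng.standard_normal((128, 64))   # learned parameters (illustrative)
inputs = rng.standard_normal(128)          # e.g. features from an image

def dense_layer(x, w):
    # Matrix-vector product followed by a ReLU nonlinearity.
    return np.maximum(w.T @ x, 0.0)

activations = dense_layer(inputs, weights)
print(activations.shape)
```

On a CPU this runs sequentially or with modest vectorization; a neural accelerator performs thousands of these multiply-accumulates in parallel, often at reduced numeric precision to save power.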
Until recently, that hardware has been confined to power-hungry racks in air-conditioned data centers. That poses a dilemma for mobile app developers and embedded device manufacturers wanting to apply machine learning algorithms in the field: Wait for an unaccelerated mobile processor to come up with the answer, or ship the raw data to a remote server for faster processing and wait for the answer to come back.
That's fine for some applications, but for others (self-driving cars, say, or on-the-fly video processing) the latency or cost of sending the data back to the server may be unacceptable. There's also the issue of privacy or security: local processing leaves users in control of their data.
That's prompted a couple of smartphone manufacturers to dedicate hardware to the processing of neural networks in their latest smartphones.
Huawei Technologies jumped first, revealing that the Kirin 970 chipset that will power its forthcoming Huawei Mate 10 phone includes a dedicated neural processing unit.
And at the big reveal of the iPhone X, Apple announced that the A11 chip at the heart of it includes silicon dedicated to machine learning applications, a feature it calls the Neural Engine.
If developers have to rely on customers having the most expensive phones out there to run their apps, though, they're not going to build much of a following.
Imagination hopes that its new PowerVR Series 2NX Neural Network Accelerator will make neural processing capabilities available to a much larger slice of the Android smartphone market than just flagship phones. In addition to smartphones, Imagination is also targeting other mobile and embedded devices.