It has gone out of its way to lower power consumption. One of the ways it does this is by allowing variable bit-depth processing. When training, or tuning, a neural network, it's important to perform the calculations at high precision, but when using the trained network to make decisions based on live data, it's often possible to make correct decisions with coarser calculations, using as few as 4 or 5 bits of precision instead of 16.
Less precision requires less power, so with the 2NX developers can choose to perform calculations with 16, 12, 10, 8, 7, 6, 5 or even 4-bit precision. According to Imagination, switching from 8-bit precision to 4 bits increases speed by 60 percent and reduces bandwidth by 46 percent, yet only has a 1 percent effect on the accuracy of inferences.
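To illustrate the trade-off described above, here is a minimal sketch of uniform weight quantization in NumPy. This is a generic, illustrative technique, not Imagination's actual hardware pipeline or API; the weight distribution and the symmetric quantization scheme are assumptions for the example. It shows how reconstruction error grows as the bit width shrinks, while staying small in absolute terms.

```python
import numpy as np

def quantize(x, bits):
    """Symmetric uniform quantization: round floats to signed integers
    of the given bit width, then map back to floats (dequantize)."""
    levels = 2 ** (bits - 1) - 1          # e.g. 127 for 8-bit, 7 for 4-bit
    scale = np.max(np.abs(x)) / levels    # step size covering the value range
    q = np.round(x / scale)               # integer codes
    return q * scale                      # dequantized approximation

rng = np.random.default_rng(0)
weights = rng.normal(0, 0.5, size=10_000)  # stand-in for trained weights

for bits in (16, 8, 4):
    err = np.mean(np.abs(weights - quantize(weights, bits)))
    print(f"{bits:2d}-bit mean absolute error: {err:.6f}")
```

Running this shows the error rising only modestly as precision drops from 16 to 4 bits, which is the intuition behind Imagination's claim that 4-bit inference costs little accuracy while saving power and bandwidth.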
To help Android developers prepare for the new capabilities, Imagination is offering a combined API for the 2NX and its existing graphics accelerators. Developers will be able to write to the API, getting some benefits from existing hardware and, "as the new hardware becomes available, people will be able to take advantage of the increase in power," Chris Longstaff, the company's senior director of product and technology marketing, said.
That won't be for a while yet: Imagination sells designs, not devices, so it will be late next year before phones containing the 2NX core are on the market, Longstaff said.