Khronos Group Enters Partnership With Au-Zone Technologies
May 21, 2018

BEAVERTON, OR — The Khronos Group (Khronos.org), an open consortium of leading hardware and software companies creating advanced acceleration standards, is working with Au-Zone Technologies to enable NNEF (Neural Network Exchange Format) files to be easily used with leading machine learning training frameworks. NNEF enables the optimized ingestion of trained neural networks into hardware inference engines on a diverse range of devices and platforms. Au-Zone is working with the Khronos NNEF Working Group to implement two purpose-built bidirectional converters: one between TensorFlow and NNEF, and one between Caffe2 and NNEF. Both converters are expected to be released to the development community as open source projects under the Apache 2.0 license in Q3 2018.
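For illustration, a flat NNEF description is a small, human-readable text file that such converters would emit and consume. A minimal hypothetical graph, modeled loosely on the examples in the provisional specification (names and shapes below are placeholders, not details from this announcement), looks roughly like this:

    version 1.0;

    graph simple_net( input ) -> ( output )
    {
        input  = external(shape = [1, 3, 224, 224]);
        filter = variable(shape = [32, 3, 5, 5], label = 'conv1/filter');
        output = conv(input, filter);
    }

A converter's job is then to map a trained TensorFlow or Caffe2 model into, and back out of, this kind of structural description, with the trained weights carried in the accompanying NNEF data files.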
“We are very excited to be working with the Khronos Group on the NNEF converter project and for the opportunity to contribute back to the community,” says Brad Scott, president of Au-Zone. “By providing the NNEF converters as open source projects, we expect there will be strong adoption, additional contributors, and greatly improved portability for CNN models. To meet our customers' needs, we are also adding NNEF import/export capabilities to the DeepView ML Toolkit. This will allow developers to work in their preferred training framework and provide a direct path to deploy, profile, and optimize their trained models on a full range of embedded processors, including x86- and Cortex-A-based CPUs, Cortex-M MCUs, GPUs with OpenCL support, and proprietary NN compute engines.”
 
“The NNEF working group at Khronos is delighted to be working closely with Au-Zone,” adds Peter McGuinness, NNEF Working Group chair. “Growing the range of NNEF exporters for the key machine learning training frameworks widens the choices for training neural networks for embedded inference engines, all as part of our ongoing work to reduce machine learning deployment fragmentation.” 
 
Additionally, the NNEF and OpenVX Working Groups within Khronos are working closely to develop open source importers, using the OpenVX Kernel Import extension, to enable the ingestion and execution of NNEF files. The OpenVX Neural Network extension enables OpenVX to act as a cross-platform inference engine, combining computer vision and deep learning operations in a single graph description for highly optimized hardware acceleration. Finally, once the NNEF 1.0 specification is finalized later this year, Khronos will also provide open source software for ingesting NNEF into the Android NNAPI inference stack.
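As a rough sketch of what that ingestion path could look like from an application's point of view, the following C fragment assumes the vx_khr_import_kernel extension's vxImportKernelFromURL entry point; the extension header name, the "vx_khr_nnef" type string, the model path, and the tensor shapes are illustrative placeholders rather than details taken from this announcement:

    /* Hypothetical sketch: importing an NNEF model into an OpenVX graph via
     * the Kernel Import extension. Type string, path, and shapes are placeholders. */
    #include <VX/vx.h>
    #include <VX/vx_khr_import_kernel.h>   /* assumed header name for the extension */

    int main(void)
    {
        vx_context context = vxCreateContext();

        /* Import the NNEF container as an OpenVX kernel. */
        vx_kernel nn_kernel = vxImportKernelFromURL(context, "vx_khr_nnef",
                                                    "file:///models/example.nnef");

        /* Input/output tensors; shapes must match the imported network. */
        vx_size in_dims[4]  = { 224, 224, 3, 1 };
        vx_size out_dims[2] = { 1000, 1 };
        vx_tensor input  = vxCreateTensor(context, 4, in_dims,  VX_TYPE_FLOAT32, 0);
        vx_tensor output = vxCreateTensor(context, 2, out_dims, VX_TYPE_FLOAT32, 0);

        /* Wrap the imported kernel in a generic node and bind its parameters. */
        vx_graph graph = vxCreateGraph(context);
        vx_node  node  = vxCreateGenericNode(graph, nn_kernel);
        vxSetParameterByIndex(node, 0, (vx_reference)input);
        vxSetParameterByIndex(node, 1, (vx_reference)output);

        /* Verify and execute; the OpenVX implementation supplies the acceleration. */
        if (vxVerifyGraph(graph) == VX_SUCCESS)
            vxProcessGraph(graph);

        vxReleaseNode(&node);
        vxReleaseKernel(&nn_kernel);
        vxReleaseTensor(&input);
        vxReleaseTensor(&output);
        vxReleaseGraph(&graph);
        vxReleaseContext(&context);
        return 0;
    }

Once the imported kernel is wrapped in a generic node, the standard OpenVX graph machinery (verification, scheduling, vendor acceleration) applies unchanged, which is what allows a single NNEF file to target a range of inference hardware.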