
Intel Labs Presents Innovative Software Frameworks at 2022 AutoML Conference


  • The Automated Machine Learning Conference will take place in Baltimore, MD, from July 25-27, 2022.

  • Intel Labs details two novel software frameworks for automating the generation of super-networks and finding diverse sets of sub-networks optimized for various performance metrics and hardware configurations.



The first international Automated Machine Learning Conference (AutoML) is co-located with ICML in Baltimore, MD, and runs July 25-27, 2022. For those unable to travel, there will be virtual attendance options as well. Promoting collaboration within the field, the event encourages open code sharing. Intel is excited to share two novel software frameworks to further advance ML methods by automating the generation and training of super-networks and finding diverse sets of sub-networks optimized for various performance metrics and hardware configurations.

Intel Labs first presents BootstrapNAS, a framework for Automated Super-Network Generation for Scalable Neural Architecture Search. Weight-sharing Neural Architecture Search (NAS) solutions often discover neural network architectures that outperform their human-crafted counterparts. Weight-sharing facilitates the creation and training of super-networks that contain many smaller and more efficient sub-networks. For the average deep learning practitioner, however, generating and training one of these super-networks for an arbitrary neural network architecture design space can be a daunting experience. BootstrapNAS addresses this challenge by automating the generation and training of super-networks and then searching for high-performing sub-networks through a simple and extensible API. The solution allows developers to convert a pre-trained model into a super-network, with support for custom implementations. The framework then trains the super-network using a weight-sharing NAS technique available in BootstrapNAS or provided by the user. Finally, a search component discovers high-performing sub-networks, which are returned to the end user.

In their paper, the researchers demonstrate BootstrapNAS by automatically generating super-networks from popular pre-trained models available from Torchvision and other repositories. BootstrapNAS achieves up to a 9.87× improvement in throughput compared to the pre-trained Torchvision ResNet-50 (FP32) on an Intel Xeon platform. It can also produce sub-networks with substantial improvements in the performance-accuracy trade-off for lightweight models such as MobileNet and EfficientNet. Access the code here.
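To make the weight-sharing idea concrete, the toy sketch below shows how many sub-networks can reuse slices of a single shared weight matrix. All names and the width-based encoding here are invented for illustration; this is not the BootstrapNAS API.

```python
import random

random.seed(0)

MAX_WIDTH = 8
# The super-network stores one shared weight matrix for this layer.
# Sub-networks do not own weights; each uses a slice of this matrix.
shared_w = [[random.uniform(-1, 1) for _ in range(MAX_WIDTH)]
            for _ in range(MAX_WIDTH)]

def subnet_forward(x, width):
    """Run a sub-network that keeps only the first `width` units.

    The sub-network reuses the top-left width x width block of the
    shared super-layer matrix -- this is the weight sharing.
    """
    return [sum(shared_w[i][j] * x[j] for j in range(width))
            for i in range(width)]

x = [1.0] * MAX_WIDTH
full = subnet_forward(x, MAX_WIDTH)   # the full super-network
small = subnet_forward(x, 4)          # a cheaper sub-network, same weights
```

Because every sub-network shares the same trained parameters, evaluating a candidate architecture only requires slicing, not retraining, which is what makes searching a large design space tractable.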

Other advances in NAS, such as one-shot NAS, offer the ability to extract specialized hardware-aware sub-network configurations from a task-specific super-network. While previous works have devoted considerable effort to improving the first stage, namely the training of the super-network, the search for derivative high-performing sub-networks remains under-explored. To this end, Intel Labs researchers detail their Hardware-Aware Framework for Accelerating Neural Architecture Search Across Modalities.

In many cases, the deep neural network (DNN) design and evaluation process is tied to whatever hardware platform is available to the researcher at the time (e.g., a GPU). Furthermore, the researcher may have been interested in only a single performance objective, such as accuracy, when evaluating the network. The network is therefore inherently optimized for a specific hardware platform and a specific objective. However, users wanting to solve the same problem for which the network was designed may have different hardware platforms available and may be interested in multiple performance metrics (e.g., accuracy and latency).

This flexible search framework automatically and efficiently finds diverse sets of sub-networks optimized for different performance metrics and hardware configurations. A primary goal of the framework is to reduce the number of validation measurements required to find optimal DNN architectures given a set of performance objectives and a hardware platform. To that end, the framework also offers a one-shot predictor approach, described in existing work, to reduce the validation cost overhead. In their paper, the researchers demonstrate how pairing evolutionary algorithms in an iterative fashion with lightly trained performance predictors can yield an accelerated and more cost-effective exploration of a DNN architectural design space across the modalities of machine translation, recommendation, and image classification.
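The iterative pairing of an evolutionary search with a lightly trained predictor can be sketched as follows. Everything here is a synthetic stand-in for illustration, not the paper's implementation: the architecture encoding is a vector of floats, the "expensive validation measurement" is a cheap analytic function, and the predictor is a simple nearest-neighbour lookup over previously measured architectures.

```python
import random

random.seed(1)
DIM = 5  # length of the toy architecture encoding

def true_score(arch):
    """Stand-in for an expensive validation measurement (higher is better)."""
    return -sum((g - 0.5 * i) ** 2 for i, g in enumerate(arch))

def dist(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b))

def predictor(measured, arch):
    """'Lightly trained' predictor: score of the nearest measured architecture."""
    _, score = min(measured, key=lambda pair: dist(pair[0], arch))
    return score

def mutate(arch):
    return tuple(g + random.gauss(0, 0.3) for g in arch)

# Seed the search with a few fully validated architectures.
population = [tuple(random.uniform(-2, 2) for _ in range(DIM)) for _ in range(8)]
measured = [(a, true_score(a)) for a in population]

for _ in range(20):
    # Evolve candidates, rank them cheaply with the predictor, and spend
    # only one expensive validation call per round on the top candidate.
    children = [mutate(random.choice(population)) for _ in range(40)]
    children.sort(key=lambda a: predictor(measured, a), reverse=True)
    top = children[0]
    measured.append((top, true_score(top)))
    # Keep the best measured architectures as the next population.
    population = [a for a, _ in
                  sorted(measured, key=lambda p: p[1], reverse=True)[:8]]

best_arch, best_score = max(measured, key=lambda p: p[1])
```

The design point is the budget split: the predictor ranks many candidates per round while the expensive measurement is invoked only once per round, which is how the framework reduces the number of validation measurements needed to explore the design space.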
As NAS continues to grow, this Labs work highlights the need for continued investigation on the generalizability of NAS approaches in broader tasks and modalities.

To learn more about these works and other AutoML advancements, register for the event here.