21 October 2020, 11:00 am
Public Event

Speed and Scale AI Inference Operations Across Multiple Architectures

Sponsored By
Intel
Chicago

Successful inference platforms should deliver acceptable latency for demanding workloads, easy integration between training and deployment systems, scalability, and a standard client interface. Easy, right? Actually … yes. Find out how a new OpenVINO™ toolkit AI extension delivers.

When executing inference operations, developers need an efficient way to integrate components that deliver great performance at scale while providing a simple interface between the application and execution engine.

Thus far, TensorFlow Serving has been the serving system of choice. But it brings challenges, including the lack of cross-architecture inference execution on GPUs, VPUs, and FPGAs.

The 2021.1 release of the Intel® Distribution of OpenVINO™ toolkit solves these challenges with its improved Model Server, a Docker container capable of hosting machine-learning models for high-performance inference.
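For a concrete picture, here is a minimal sketch of serving a model this way, assuming the publicly available openvino/model_server Docker image and a model stored locally under /models/resnet (the model name, paths, and port are illustrative, not taken from this announcement):

    # Pull the Model Server image (assumed tag; adjust to the release you use)
    docker pull openvino/model_server:latest

    # Start the server, mounting the model directory and exposing the gRPC port.
    # The mounted directory is expected to contain numbered version subfolders,
    # e.g. /models/resnet/1/model.xml and /models/resnet/1/model.bin
    docker run -d --rm -p 9000:9000 \
        -v /models/resnet:/opt/model \
        openvino/model_server:latest \
        --model_name resnet --model_path /opt/model --port 9000

Because the server exposes the same gRPC and REST APIs as TensorFlow Serving, existing client code can typically point at the new endpoint without changes, which is what makes the container straightforward to slot into a wide range of platforms and solutions.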

Join Principal Engineer and AI Solution Architect Adam Marek and AI Developer Tools Product Manager Zoe Cayetano to learn about this serving system for production environments, including how to:

  • Deploy new algorithms and experiments for your AI models more easily
  • Take advantage of a write-once, deploy-anywhere programming paradigm, from edge to cloud
  • Leverage Docker containers to simplify the integration of AI inference with a wide range of platforms and solutions

Save your spot now.

Get the software

  • Download the OpenVINO™ toolkit—includes nearly 20 dev tools and libraries for creating cross-architecture applications.
  • Sign up for an Intel® DevCloud for oneAPI account—a free development sandbox with access to the latest Intel® hardware and oneAPI software.