IBM Research Technology Enables Scalable Edge AI Applications

Edge computing is one of the more intriguing topics driving the technology landscape. Distributing computing tasks across multiple locations and then coordinating those disparate efforts into a cohesive, meaningful whole is much harder than it first appears, especially when scaling small proof-of-concept projects into full-scale production.

For several years, IBM Research groups have been working to help overcome some of these challenges. More recently, they have begun to see success in industrial settings such as automotive manufacturing by taking a different approach to the problem. In particular, the company has been rethinking how it analyzes data at individual edge locations and how it shares AI models across sites.

At automotive manufacturing plants, for example, most companies have begun using AI-driven visual inspection models to help find manufacturing defects that would be difficult or too costly for humans to identify. Tools like the visual inspection solutions in the IBM Maximo suite of applications can help automakers save significant amounts of money through defect avoidance while keeping manufacturing lines running as fast as possible. This has become especially critical given the supply chain constraints many automotive companies have faced recently.
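To make the idea concrete, the sketch below shows what a minimal visual-inspection inference step might look like with a generic PyTorch image classifier. The model architecture, checkpoint path, class labels, and file names are hypothetical illustrations; IBM Maximo Visual Inspection provides its own tooling and APIs.

```python
# Illustrative sketch only: a generic two-class (ok / defect) image classifier,
# not the IBM Maximo Visual Inspection API.
import torch
from torchvision import models, transforms
from PIL import Image

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

# Assume a classifier fine-tuned on plant imagery: class 0 = "ok", class 1 = "defect".
model = models.resnet18(num_classes=2)
model.load_state_dict(torch.load("defect_classifier.pt", map_location="cpu"))  # hypothetical checkpoint
model.eval()

def inspect(image_path: str) -> tuple[str, float]:
    """Classify a single part image and return the predicted label with its confidence."""
    x = preprocess(Image.open(image_path).convert("RGB")).unsqueeze(0)
    with torch.no_grad():
        probs = torch.softmax(model(x), dim=1)[0]
    label = "defect" if probs[1] > probs[0] else "ok"
    return label, float(probs.max())

print(inspect("camera_frame_0042.png"))  # e.g. ("ok", 0.97)
```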

The real trick, however, is reaching the zero-defect goal of the solution: inconsistent results based on incorrectly interpreted data can have the opposite effect, especially if such erroneous data is propagated to multiple manufacturing sites via inaccurate AI models. To avoid costly and unnecessary production-line downtime, it is critical to ensure that only appropriate data is used to build AI models and that the models themselves are regularly checked for accuracy, so that mislabeled data does not introduce defects of its own.
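One common way to operationalize that kind of accuracy check is to gate any retrained model on a human-verified holdout set before it is shared with other plants. The sketch below assumes such a holdout loader exists; the 0.98 threshold and function names are hypothetical, illustrating the gating idea rather than IBM's process.

```python
# Minimal sketch: only promote a candidate model if it clears an accuracy gate
# on a holdout set whose labels have been verified by human inspectors.
import torch

def holdout_accuracy(model, loader) -> float:
    """Fraction of verified holdout samples the candidate model classifies correctly."""
    model.eval()
    correct = total = 0
    with torch.no_grad():
        for images, labels in loader:
            preds = model(images).argmax(dim=1)
            correct += (preds == labels).sum().item()
            total += labels.numel()
    return correct / max(total, 1)

def promote_if_accurate(model, holdout_loader, threshold: float = 0.98) -> bool:
    """Gate distribution of the model to other sites on holdout accuracy."""
    return holdout_accuracy(model, holdout_loader) >= threshold
```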

This “recalibration” of AI models is where IBM Research comes in. The group is working on an “out-of-distribution” (OOD) detection algorithm that can determine whether the data used to refine vision models falls outside the acceptable range, preventing models from making inaccurate inferences about incoming data. Most importantly, it does this automatically, avoiding the slowdowns associated with manual labeling efforts and allowing the work to scale across multiple production sites.
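To illustrate the general idea, the sketch below uses one standard, well-known OOD check, thresholding on maximum softmax probability, to flag inputs the model should not silently infer on. This is not IBM Research's actual algorithm, and the 0.7 threshold is a hypothetical, per-deployment calibration choice.

```python
# Illustrative sketch of a simple out-of-distribution check (maximum softmax
# probability), standing in for IBM's OOD algorithm, which is not public here.
import torch

def flag_out_of_distribution(model, batch: torch.Tensor, threshold: float = 0.7) -> torch.Tensor:
    """Return a boolean mask marking samples that fall outside the acceptable range."""
    model.eval()
    with torch.no_grad():
        probs = torch.softmax(model(batch), dim=1)
    confidence, _ = probs.max(dim=1)
    return confidence < threshold  # True = do not trust the inference, route to review

# Frames that pass stay in the automated pipeline; flagged frames are withheld
# from inference and model updates until a person has looked at them.
```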

A byproduct of OOD detection, called data aggregation, is the ability to select which data is sent on for manual inspection, labeling, and model updates. IBM is working to reduce data traffic by a factor of 10 to 100 compared with many of today's early edge computing deployments. By eliminating redundant data, the approach also makes manual inspection and labeling roughly 10 times more time-efficient. Combined with state-of-the-art techniques such as Once-for-All (OFA) model architecture exploration, the company hopes to reduce model size by as much as 100x as well, allowing for more efficient edge deployments.
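The data-selection step can be pictured as follows: only frames flagged as unfamiliar, and not near-duplicates of frames already queued, are shipped off the edge device for labeling. The cosine-similarity duplicate check and the 0.95 threshold below are hypothetical illustrations of the redundancy-elimination idea, not IBM's implementation.

```python
# Minimal sketch of selecting a small, non-redundant subset of flagged frames
# for manual labeling, so most captured data never leaves the edge device.
import torch

def select_for_labeling(features: torch.Tensor, ood_mask: torch.Tensor,
                        similarity_threshold: float = 0.95) -> list[int]:
    """Pick OOD-flagged samples, skipping ones nearly identical to prior picks."""
    selected: list[int] = []
    kept: list[torch.Tensor] = []
    normed = torch.nn.functional.normalize(features, dim=1)
    for i in range(features.size(0)):
        if not ood_mask[i]:
            continue  # in-distribution: no need to upload or relabel
        if kept and max(float(v @ normed[i]) for v in kept) > similarity_threshold:
            continue  # redundant with a frame already queued
        selected.append(i)
        kept.append(normed[i])
    return selected  # typically a small fraction of all captured frames
```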
