AI-Enabled Cameras at the Network Edge for Next-Gen IoT Solutions

In a fast-moving, high-tech market, product developers are looking for building blocks and frameworks that enable niche, cutting-edge solutions. Device manufacturers do not want to reinvent the wheel while designing their products. At the same time, they are looking to design products that are affordable, scalable, and compliant with the latest standards.

So far, artificial intelligence (AI) techniques have mostly been deployed in data centers, to leverage the available compute power for processor-demanding tasks. With cloud adoption, AI made its way into software, and it has since moved to the outer edges of networks. To act quickly and reduce the data load on the cloud, IoT solution providers are deploying AI techniques at endpoints, gateways, and other devices at the point of use. Training of AI models happens at the backend, and the trained models are deployed on the edge nodes for inferencing.

In the past, AI/ML and computer vision systems were perceived as highly complex, requiring powerful processors and large amounts of memory. Deploying these technologies to high-volume, cost-sensitive IoT edge devices was limited by stringent BOM cost constraints. However, cloud-based tooling and low-power multi-core SoCs have since made it practical. For vision-based connected solutions, smart cameras are one of the key components. Smart cameras with IP connectivity, advanced data analytics, and artificial intelligence will drive innovation in the Internet of Things (IoT) and its applications.

A few use cases

  • Driver behavior analysis for fleet management: detecting instances of distraction from anomalous behavior such as texting, eating and drinking, talking on the phone, or reaching behind; alerting on recurring instances and generating driver scorecards.
  • Intelligent checkout solution for retail stores by identifying cart items; Inventory management and monitoring for retail stores
  • Smart city: estimating garbage bin contamination onboard the truck; identifying items (paper, cardboard, Styrofoam, plastic, metal scrap) to improve garbage classification and disposal productivity
  • Vision based defect monitoring solution for manufacturing plants
  • Smart HMI for kitchen appliances – computer vision for food detection, recognition, and activity tracking; Natural Language Processing for a voice assistant that issues commands to appliances.
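To make the fleet-management use case above concrete, here is a minimal sketch of how per-trip distraction detections could be rolled up into a driver scorecard. The penalty weights, event labels, and scoring formula are hypothetical, illustrative choices, not eInfochips' actual scoring model.

```python
from collections import Counter

# Hypothetical penalty weights per detected distraction event (illustrative only)
PENALTY = {"texting": 10, "phone_call": 8, "eating": 3, "reaching_behind": 5}

def driver_score(events, base=100):
    """Aggregate detected distraction events into a 0-100 score plus a breakdown."""
    counts = Counter(e for e in events if e in PENALTY)
    penalty = sum(PENALTY[e] * n for e, n in counts.items())
    return max(0, base - penalty), dict(counts)

score, breakdown = driver_score(["texting", "eating", "texting"])
# score: 100 - (10*2 + 3) = 77; breakdown: {"texting": 2, "eating": 1}
```

A real system would weight events by duration and vehicle speed as well, but the aggregation pattern is the same.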

Key technical considerations when designing AI-enabled, vision-based IoT solutions

  • Embedded platforms with high computing power; Trade-offs between power consumption, cost, accuracy, and flexibility
  • Camera sensor, High-end video processing, video analytics on the edge
  • Multiple connectivity options for remote monitoring, control, and upgrades
  • Balance compute, networking, and storage to deliver optimal performance for AI workloads
  • Security: addressing vulnerabilities and protecting sensitive data

To enable AI-based IoT solutions, eInfochips has launched a development kit based on the Qualcomm® QCS610 system-on-chip (SoC). The QCS610 SoC is part of the Vision Intelligence Platform series, targeted at smart-camera applications in the consumer and enterprise IoT spaces. This octa-core SoC provides the heterogeneous computing needed to process high-quality images and video and run multiple machine learning models simultaneously at low power. The Qualcomm Neural Processing SDK helps developers save time and effort in optimizing the performance of trained neural networks on devices with Snapdragon. It provides tools for model conversion and execution, as well as APIs for targeting the core whose power and performance profile matches the desired user experience.
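The "targeting the core" idea above follows a common pattern: prefer the most power-efficient accelerator available and fall back gracefully. The sketch below mimics that pattern in plain Python; the function and the availability set are hypothetical stand-ins, not the actual Qualcomm Neural Processing SDK API.

```python
# Hedged sketch: runtime selection with fallback, preferring the DSP for
# low-power inference, then GPU, then CPU. Names here are illustrative,
# not the real SDK interface.
PREFERRED_RUNTIMES = ["DSP", "GPU", "CPU"]  # most to least power-efficient

def pick_runtime(available):
    """Return the first preferred runtime the device actually supports."""
    for rt in PREFERRED_RUNTIMES:
        if rt in available:
            return rt
    raise RuntimeError("no supported inference runtime on this device")

print(pick_runtime({"GPU", "CPU"}))  # falls back to GPU when the DSP is busy/absent
```

In the real SDK workflow, the trained model is first converted to the SDK's deployable format at the backend, matching the train-at-backend, infer-at-edge split described earlier.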

eInfochips has also developed a reference design based on the QCS610 development kit. This reference design combines the development kit with a customizable camera framework and a proven, tested feature set. It enables original design manufacturers (ODMs) and application developers to prototype and benchmark rapidly. The framework supports multiple streams, resolutions, frame rates, and bit rates in varied combinations, enabling customization for specific deployment scenarios.
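A multi-stream camera framework like the one described typically validates each requested combination against platform limits before configuring the pipeline. The sketch below shows that idea; the stream count, resolution, and frame-rate limits are made-up illustrative numbers, not the QCS610's actual capabilities.

```python
from dataclasses import dataclass

# Illustrative platform limits (assumptions, not QCS610 specifications)
MAX_STREAMS = 4
MAX_FPS = 60
MAX_PIXELS = 3840 * 2160  # 4K

@dataclass
class StreamProfile:
    width: int
    height: int
    fps: int
    bitrate_kbps: int

def validate(profiles):
    """Reject stream combinations the (hypothetical) platform cannot serve."""
    if len(profiles) > MAX_STREAMS:
        raise ValueError("too many concurrent streams")
    for p in profiles:
        if p.fps > MAX_FPS or p.width * p.height > MAX_PIXELS:
            raise ValueError(f"unsupported profile: {p}")
    return True

# e.g. a full-HD recording stream alongside a low-res analytics stream
validate([StreamProfile(1920, 1080, 30, 4000), StreamProfile(640, 480, 15, 512)])
```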

eInfochips also supports a Vision AI kit that combines the edge computing power of the Qualcomm Vision Intelligence Platform with Azure Machine Learning and Azure IoT Edge services from Microsoft. Powered by the Qualcomm Artificial Intelligence (AI) Engine, the kit enables on-device inferencing through integration with Azure Machine Learning and Azure IoT Edge. Use cases such as visual defect detection, object recognition, object classification, and motion detection can be easily developed and deployed on the Vision AI Developer Kit using models trained in Microsoft Azure, allowing uninterrupted 24×7 offline operation.
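The 24×7 offline operation rests on a store-and-forward pattern: inference always runs locally on the device, and results queue up whenever the cloud link is down, flushing once connectivity returns. The class below is a hedged, generic sketch of that pattern; the names and interfaces are hypothetical, not the Azure IoT Edge SDK.

```python
from collections import deque

class EdgeNode:
    """Hypothetical edge node: local inference plus store-and-forward uplink."""

    def __init__(self, infer, send):
        self.infer = infer          # local model callable; works fully offline
        self.send = send            # cloud uplink; may raise ConnectionError
        self.pending = deque()      # results awaiting upload

    def process(self, frame):
        result = self.infer(frame)  # on-device inference, no cloud round-trip
        self.pending.append(result)
        self.flush()                # opportunistically upload
        return result

    def flush(self):
        while self.pending:
            try:
                self.send(self.pending[0])
                self.pending.popleft()
            except ConnectionError:
                break               # link is down; keep results queued
```

Frames keep being analyzed while offline; only the telemetry upload is deferred.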

eInfochips has developed an Intelligent Checkout Assistant use case for retail. We deployed an object recognition model using Azure ML that recognizes the cart contents, and all items are automatically populated in the POS terminal via cloud applications.
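The last step of that pipeline, turning recognized labels into POS line items, can be sketched as a simple catalog lookup. The labels, catalog, and prices below are invented for illustration; the deployed solution's data model is not described in this article.

```python
from collections import Counter

# Hypothetical price catalog keyed by the model's output labels
CATALOG = {"milk_1l": 1.20, "bread": 0.90, "eggs_12": 2.50}

def to_pos_lines(detected_labels):
    """Turn recognized cart items into POS line items with quantities and totals."""
    lines = []
    for label, qty in Counter(detected_labels).items():
        price = CATALOG.get(label)
        if price is None:
            continue  # unrecognized item; a real system would flag it for review
        lines.append({"sku": label, "qty": qty, "total": round(price * qty, 2)})
    return lines
```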

In addition to kits, eInfochips offers design, development, and manufacturing services, leveraging more than two decades of experience in product design. eInfochips has delivered 30+ camera designs, backed by an in-house, state-of-the-art image tuning lab, and has experience with the latest AI frameworks and tools (TensorFlow, OpenCV, Python, Caffe, Keras). Please reach out to marketing@einfochips.com for additional details on our capabilities.