Nothing major, just a couple of exposure articles from this month on Gen 2: one in a biometrics publication with a blurb on a Vivotek facial recognition camera and Hailo, and one in Forbes, also in conjunction with a Hailo blurb.
www.biometricupdate.com
www.forbes.com
Vivotek unveils edge computing facial recognition camera
Mar 22, 2023, 4:18 pm EDT | Larisa Redins
Categories: Biometric R&D | Biometrics News | Facial Recognition
A trio of edge AI developers have revealed new technologies that bring biometric storage and processing close to the place of application. They include a new camera from Vivotek, a family of vision-processing chips from Hailo, and a real-time data processing platform from BrainChip.
Vivotek has launched its first facial recognition camera, the FD9387-FR-v2, which uses edge computing to identify gender and age from video footage even when people wear masks. The company says it can store up to 10,000 profiles with a 99 percent accuracy rate and is compliant with the U.S. National Defense Authorization Act.
The FD9387-FR-v2 Facial Recognition Camera from Vivotek integrates the SAFR Inside AI facial recognition platform from RealNetworks, Inc. Features include real-time facial detection and tracking, early warning of unfamiliar faces, an image privacy mode for sensitive areas, and strong cybersecurity protection with encrypted data storage and transmission.
The Vivotek FD9387-FR-v2 Facial Recognition Camera suits banks, retailers, and building automation/access control systems. For example, it can integrate with business intelligence (BI) services to send real-time notifications when VIP customers enter a store.
Additionally, it helps track traffic in and out of smart buildings, adding an extra layer of security. Unauthorized visitors can be reported and recorded for future reference.
Hailo introduces new vision processor for edge-based smart cameras
Hailo, a chipmaker for edge AI processors, has released its Hailo-15 family of vision processors. These high-performance chips can be integrated into cameras to provide advanced video processing and analytics at the edge.
According to the company, Hailo-15 offers deep learning video processing and AI performance, allowing it to be used by city operators, manufacturers, retailers and transportation authorities for various applications. It can help improve safety and security, increase productivity and machine uptime, protect supply chains, and detect incidents quickly.
"With this launch, we are leveraging our leadership in edge solutions, which are already deployed by hundreds of customers worldwide; the maturity of our AI technology; and our comprehensive software suite to enable high-performance AI in a camera form factor," states Orr Danon, the CEO of Hailo.
The Hailo-15 VPU family has three variants (Hailo-15H, Hailo-15M, and Hailo-15L) to meet different processing requirements. With performance of up to 20 TOPS, the company says the family enables over 5x higher performance than existing solutions at a comparable price point.
According to the company, this AI-based video analytic solution provides enhanced safety, privacy and cost-efficiency to organizations while reducing the complexity of network infrastructure.
Hailo also claims that its low-power, fanless processors are well-suited to industrial and outdoor applications where dirt or dust may otherwise impact reliability.
"With a single software stack for all our product families, camera designers, application developers, and integrators can now benefit from an easy and cost-effective deployment supporting more AI, more video analytics, higher accuracy, and faster inference time, exactly where they're needed," adds Danon.
Hailo will demonstrate its Hailo-15 AI vision processor at ISC West in Las Vegas, Nevada, from March 28-31.
The company began a collaboration with Innovatrics on facial recognition late last year.
BrainChip unveils second-generation Akida platform
BrainChip Holdings Ltd. has announced the second generation of its Akida platform. It drives hyper-efficient and intelligent edge AIoT devices with advanced capabilities such as 8-bit processing, time domain convolutions and vision transformer acceleration.
"We see an increasing demand for real-time, on-device intelligence in AI applications powered by our MCUs and the need to make sensors smarter for industrial and IoT devices," says Roger Wendelken, the senior vice president in Renesas' IoT and Infrastructure Business Unit.
Applications that can benefit from fast Vision Transformer processing include facial recognition and other biometrics.
Akida's second-generation technology features Temporal Event Based Neural Nets (TENN) spatial-temporal convolutions, which can efficiently process raw time-continuous streaming data. The technology enables more straightforward implementations while providing high accuracy and a lower development cost, and is suitable for use in industrial, automotive, digital health, smart home, and smart city applications.
Akida's second generation also offers Vision Transformer (ViT) acceleration. This neural network excels in computer vision tasks such as object detection and semantic segmentation, BrainChip says. Akida's ability to process multiple layers simultaneously and its hardware support for skip connections enable it to self-manage complex networks like ResNet-50 entirely within the neural processor without CPU intervention, which reduces system load.
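Hardware support for skip connections matters because a skip connection is simply the addition of a layer's input back to its output, and residual networks like ResNet-50 use it at every block. A minimal sketch in plain NumPy (dense layers stand in for convolutions; all names and sizes here are illustrative, not BrainChip's implementation):

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def residual_block(x, w1, w2):
    """Toy residual block: the untouched input is added back to the
    transformed signal, so information can 'skip' past the layers."""
    h = relu(x @ w1)       # first transform (stand-in for a conv layer)
    h = h @ w2             # second transform
    return relu(h + x)     # skip connection: add the original input

rng = np.random.default_rng(0)
x = rng.standard_normal((1, 64))
w1 = rng.standard_normal((64, 64)) * 0.1
w2 = rng.standard_normal((64, 64)) * 0.1
y = residual_block(x, w1, w2)
print(y.shape)  # (1, 64)
```

If the processor can perform that elementwise addition in hardware, the whole chain of residual blocks can run back-to-back without handing intermediate results to a CPU.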
"With the addition of advanced temporal convolution and vision transformers, we can see how low-power MCUs can revolutionize vision, perception, and predictive applications in a wide variety of markets like industrial and consumer IoT and personalized healthcare, just to name a few," says Wendelken.
AI Inference Processes Data And Augments Human Abilities
Tom Coughlin
Contributor
Mar 11, 2023, 02:11 pm EST
Artificial Intelligence
GETTY
Artificial Intelligence (AI) has been making headlines over the past few months with the widespread use of, and speculation about, generative AI such as ChatGPT. However, AI is a broad topic covering many algorithmic approaches that mimic some of the capabilities of human beings, and much work is going on to use various types of AI to assist humans in their activities. Note that all AI has limitations. It generally doesn't reason as we do; it is best at recognizing patterns and using that information to draw conclusions or make recommendations. Thus, AI must be used with caution and not relied on for conclusions beyond what it is capable of analyzing. AI must also be tested to make sure it is not biased by the training sets used to create its algorithms.
AI comes in big and small packages. Much of the training is done in data centers, where vast stored information and sophisticated processing of that information can be used to train AI models. However, once a model is created it can be used to interpret information collected locally via sensors, cameras, microphones and other data sources to make inferences that can help people to make informed decisions. One important application for such AI inference engines is for mobile and consumer devices that may be used for biometric monitoring, voice and image recognition and many other purposes. Inference engines can also be used to improve manufacturing and various office tasks. AI inference engines are also an important component in driving assistance and autonomous vehicles, enabling lane departure detection and collision avoidance, and other capabilities.
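The division of labor described above, heavy training done centrally on logged data and only the small trained model shipped to the device for local inference, can be illustrated with a deliberately tiny model. Everything below is a generic sketch, not any vendor's API:

```python
import numpy as np

# "Data center" side: fit a tiny model offline on logged data
# (here, ordinary least squares; the underlying rule is y = 2x + 1).
logged_x = np.array([[0.0], [1.0], [2.0], [3.0]])
logged_y = np.array([1.0, 3.0, 5.0, 7.0])
X = np.hstack([logged_x, np.ones((4, 1))])          # add a bias column
weights, *_ = np.linalg.lstsq(X, logged_y, rcond=None)

# "Edge device" side: only the small weight vector ships to the device,
# which then turns local sensor readings into answers on its own.
def edge_infer(sensor_value, w=weights):
    return w[0] * sensor_value + w[1]

print(round(edge_infer(4.0), 2))  # 9.0
```

The point of the split is in the sizes: the training data can be arbitrarily large and stays in the data center, while the deployed model is just the fitted parameters.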
There have been some recent announcements of AI inference engine chips from Brainchip and Hailo. Let's look at these announcements and their implications for processing and interpreting vast amounts of stored and sensed data. Slides from a Brainchip briefing provided some economics for making, and potential applications of, AI. It said that the cost of training a single high-end model is about $6M, and annual losses in manufacturing due to unplanned downtime are about $50B. In terms of the opportunity, about 1TB of data is generated by an autonomous car per day, and preventable chronic disease carries about $1.1T in healthcare costs and lost productivity. A PwC report projects $15.7T in global GDP benefit from AI in 2030, and Forbes Business Insights projects $1.2T in AI internet of things (AIoT) revenue by that year.
To help enable this opportunity, Brainchip announced the second generation of its Akida IP platform. The figure below from the briefing seems to show that this platform may use chiplet technology that integrates the Akida neuron fabric chiplet to perform various functions.
Brainchip akida neural network processor
BRAINCHIP PRODUCT BRIEFING
The Akida IP platform is a digital neuromorphic event-based AI device that is capable of some learning for continuous improvement. The company says that the second generation of Akida now includes Temporal Event Based Neural Nets (TENN) spatial-temporal convolutions that supercharge the processing of raw time-continuous streaming data, such as video analytics, target tracking, audio classification, analysis of MRI and CT scans for vital-signs prediction, and time series analytics used in forecasting and predictive maintenance. The second-generation device also supports Vision Transformer (ViT) acceleration, a neural network that is capable of many computer vision tasks, such as image classification, object detection, and semantic segmentation.
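TENNs are BrainChip's proprietary design, but the general idea behind processing raw time-continuous streams, a causal temporal convolution that combines each new sample with a short window of recent ones so the stream is handled one step at a time, can be sketched as follows (NumPy; the kernel values are arbitrary):

```python
from collections import deque
import numpy as np

class StreamingTemporalConv:
    """Toy causal 1-D temporal convolution: each new sample is combined
    with the previous k-1 samples, so a continuous stream is processed
    step by step without buffering the whole sequence."""
    def __init__(self, kernel):
        self.kernel = np.asarray(kernel, dtype=float)
        # window holds the last k samples, oldest first
        self.window = deque([0.0] * len(kernel), maxlen=len(kernel))

    def step(self, sample):
        self.window.append(float(sample))
        return float(np.dot(self.kernel, list(self.window)))

conv = StreamingTemporalConv([0.25, 0.5, 0.25])  # simple smoothing kernel
out = [conv.step(s) for s in [0, 0, 4, 0, 0]]
print(out)  # [0.0, 0.0, 1.0, 2.0, 1.0]
```

The streaming formulation is what makes this attractive at the edge: memory use is fixed by the kernel length, not by how long the sensor has been running.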
Brainchip says that these devices are extremely energy-efficient in running complex models. For instance, they can run ResNet-50 on the neural processor, perform spatial-temporal convolutions for handling 3D data, enable advanced video analytics and predictive analysis of raw time series data, and provide low-power support for vision transformation in edge AIoT.
The Akida product is offered in three types of packages. The Akida-E is the smallest and lowest-power option, with 1-4 nodes; it is designed for sensor inference and is used for detection and classification. The Akida-S, with 2-8 nodes, includes a microprocessor with sensor fusion in application systems on chips (SoCs) and is used for detection and classification working with a system CPU. The Akida-P, with 8-128 nodes, is the company's maximum-performance package; it is designed for network edge inference and can be used for classification, detection, segmentation and prediction with neural network hardware accelerators.
Brainchip akida products
BRAINCHIP PRODUCT BRIEFING
Brainchip believes its Akida IP packages can serve many applications, including industrial uses such as predictive maintenance, manufacturing management, robotics and automation, and security management. In vehicles, it can enhance the in-cabin experience, provide real-time sensing, enhance the electronic control unit (ECU) and provide an intuitive human-machine interface (HMI). For health and wellness applications, it can provide vital-signs prediction, sensory augmentation, chronic disease monitoring, and fitness and training augmentation. For home consumers, it can augment security and surveillance, intelligent home automation, and personalization and privacy, and provide proactive maintenance.
Hailo announced its Hailo-15 family of high-performance vision processors, designed for integration into intelligent cameras for video processing and analytics at the edge. The image below shows the Hailo VPU package. The company says that with Hailo-15, smart city operators can more quickly detect and respond to incidents; manufacturers can increase productivity and machine uptime; retailers can protect supply chains and improve customer satisfaction; and transportation authorities can recognize everything from lost children, to accidents, to misplaced luggage.
Image of Hailo Visual Processing Unit
HAILO PRODUCT ANNOUNCEMENT
The Hailo-15 visual processing unit (VPU) family includes three variants, the Hailo-15H, Hailo-15M, and Hailo-15L, each of which also includes neural networking cores. These devices are designed to meet the varying processing needs and price points of smart camera makers and AI application providers, with products supporting from 7 TOPS (tera operations per second) up to 20 TOPS. The company says that all Hailo-15 VPUs support multiple input streams at 4K resolution and combine powerful CPU and DSP subsystems with Hailo's AI core. The Hailo-15H is capable of running the state-of-the-art object detection model YoloV5M6 at high input resolution (1280x1280) at real-time sensor rate, or the industry classification model benchmark, ResNet-50, at an extraordinary 700 FPS. The figure below shows the block diagram for the Hailo device.
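As a sanity check on the quoted figures, a back-of-envelope calculation using an assumed (not vendor-supplied) cost for ResNet-50:

```python
# ResNet-50 at 224x224 is commonly estimated at roughly 4 GMACs per image,
# i.e. about 8 G ops if a multiply-accumulate counts as two operations.
# These model-cost numbers are assumptions for illustration only.
ops_per_image = 8e9          # assumed ResNet-50 cost in operations
fps = 700                    # throughput quoted for the Hailo-15H
sustained_tops = ops_per_image * fps / 1e12
print(round(sustained_tops, 1))  # 5.6
```

Roughly 5.6 TOPS sustained, comfortably within the 20 TOPS peak figure, so the quoted benchmark and peak numbers are at least mutually consistent.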
Hailo VPU Block Diagram
HAILO DATA SHEET
This is an industrial SoC with interfaces for image sensors, data and memory. It runs a Yocto-based Linux distribution and provides secure boot and debug with a hardware-accelerated crypto library, TrustZone, true random number generation (TRNG) and a firewall. Hailo says that the low power consumption of these devices enables implementation without an active cooling system (e.g., a fan), making them useful in industrial and outdoor applications.
AI provides ways to process the vast amounts of stored and generated data by creating models and running them on inference engines in devices and at the network edge. Brainchip and Hailo recently announced neural network-based AI inference devices for industrial, medical, consumer, smart cities and other applications that could augment human abilities.