BRN Discussion Ongoing

Kachoo

Regular
  • Like
Reactions: 5 users

Frangipani

Regular
It is just the second version, called the 'Ultra', which is an improvement on the original M2 chip. It has beefed-up memory and GPU power but no enhanced CPU capabilities that would imply any kind of Akida inside, as far as I can tell.

An app on Apple platforms called Octane X uses a decentralised system called Render, a cryptocurrency-based network that shares GPU power over the blockchain for a fee, boosting GPU capacity for whatever processing is needed, such as video, images and eventually AI processing.

The Vision Pro is damned expensive: around $5,000 AUD, and it will be released in 2024. I am not an Apple fanboy, but the XR (mix of VR and AR) technology seems interesting. Over time it may replace laptops and TVs, but that is much further down the track. I am disappointed in its price, since in theory it should save money by not using large LCD screens. Meta's equivalent is only $500 or so.

Methinks you are confusing different Apple products announced yesterday:
The M2 Ultra will not be featured in the Vision Pro mixed-reality headset (which will come with the original M2 chip), but instead will be premiering in the new Mac Studio and Mac Pro.

What seems intriguing to me from a neuromorphic point of view, however, is Vision Pro’s second chip alongside the M2, the brand-new R1! The lofty price tag and the limited (external) battery power of the VR/AR headset may not speak for Akida being utilised (yet?). But given the lengthy product cycle timelines - who knows what secret sauce future versions will contain, and a rumoured more affordable model is also on the cards, even though there was no mention of it yesterday. But why call the upcoming model “Pro” if there is no “Non-Pro” in the pipeline? Apple sees what they call “spatial computing” (they obviously avoided the term “metaverse”) as revolutionary.

The fact that Apple is partnering with Disney for Vision Pro also reminded me of that recent Brainchip podcast with Geoffrey Moore who coincidentally (? 🤔) used “Disney” as an example of a world class company…

Don’t wake me from my daydream just yet!



To reduce motion sickness in VR, Apple developed the new R1 chip
Haje Jan Kamps @Haje / 9:03 PM GMT+2 • June 5, 2023
[Image: Apple Vision Pro. Image Credits: Apple]
At WWDC today, Apple showed off its Vision Pro VR/AR headset. The device is powered by the company’s own M2 chip, but in order to do the real-time processing from the company’s wall of sensors, Apple had to develop a brand-new processor, too — which it dubs the R1.
The R1 chip takes input from all the sensors embedded in the headset to deliver precise head and hand tracking, along with real-time 3D mapping and eye tracking.
The specialized chip was designed specifically for the challenging task of real-time sensor processing, taking the input from 12 cameras, five sensors (including a lidar sensor!) and six microphones. The company claims it can process the sensor data within 12 milliseconds — eight times faster than the blink of an eye — and says this will dramatically reduce the motion sickness plaguing many other AR/VR systems.


Apple says that using all this vision and sensor data means it doesn’t need to use controllers and instead uses hand gesture tracking and eye tracking to control the experience.
With the combination of an M2 chip to pack a ton of computing power and an R1 chip to deal with inputs, Apple describes its device as the most advanced device ever and claims it filed 5,000 patents to make it all happen.
Apple was short on details regarding the R1 processor, but we’ll keep an eye out for more technical specs.
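A quick back-of-the-envelope check on the "eight times faster than the blink of an eye" line, assuming a typical blink of roughly 100 ms (the blink figure is my assumption, not Apple's):

```python
# Rough check of the "eight times faster than a blink" claim.
blink_ms = 100.0         # assumed typical blink duration (not stated by Apple)
r1_latency_ms = 12.0     # sensor-to-display latency Apple claims for the R1

speedup = blink_ms / r1_latency_ms
print(f"{speedup:.1f}x faster than a blink")   # ~8.3x, consistent with the claim
```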

————————————————————————————————————

Here is also a more in-depth article on the Vision Pro specifications, in case you are interested:

 
  • Like
  • Fire
Reactions: 13 users

Learning

Learning to the Top 🕵‍♂️
BRN short position dropped to 5.76 per cent now, funny how they report.

Let's see what it will go up to after these days, but it's clearly dropping in my view.
How did they close 15+ million shares on the 31st of May, when the daily volume was 11+ million shares?
[Screenshots: short-position report (Chrome) and CommSec volume data]


At the end of the day, it's just a game that they play.

I have plenty of time before early retirement 😎

Learning 🏖
 
  • Like
  • Fire
Reactions: 13 users

Jchandel

Regular
This article is in Japanese, but you can translate the page by clicking on ‘English’ at the top.

[Screenshot of the Japanese article]
 
  • Like
  • Fire
  • Love
Reactions: 74 users

Damo4

Regular
Not much said here about yesterday's licensing/partnership f/up. How many times, seriously? Cost me 14k yesterday. Seriously, WTF is going on with this shitshow? Management tell us to follow their social media sites in lieu of ASX announcements. As investors we do, then they put an article on their forum sites and get one word wrong. OH SORRY, that was a mistake. Useless management is an understatement. Tony, get your shit together or go!

14k hahaha
Best post this year.
Let's ignore it's an article written by someone else and not IR.



Edit: have I fallen for some ultra-low quality bait? Is this shitpost good or is it actually shit??
 
  • Haha
  • Fire
Reactions: 8 users

Diogenese

Top 20
Methinks you are confusing different Apple products announced yesterday:
The M2 Ultra will not be featured in the Vision Pro mixed-reality headset (which will come with the original M2 chip), but instead will be premiering in the new Mac Studio and Mac Pro.

What seems intriguing to me from a neuromorphic point of view, however, is Vision Pro’s second chip alongside the M2, the brand-new R1! The lofty price tag and the limited (external) battery power of the VR/AR headset may not speak for Akida being utilised (yet?). But given the lengthy product cycle timelines - who knows what secret sauce future versions will contain, and a rumoured more affordable model is also on the cards, even though there was no mention of it yesterday. But why call the upcoming model “Pro” if there is no “Non-Pro” in the pipeline? Apple sees what they call “spatial computing” (they obviously avoided the term “metaverse”) as revolutionary.

The fact that Apple is partnering with Disney for Vision Pro also reminded me of that recent Brainchip podcast with Geoffrey Moore who coincidentally (? 🤔) used “Disney” as an example of a world class company…

Don’t wake me from my daydream just yet!




Here is also a more in-depth article on the Vision Pro specifications, in case you are interested:

From the last link:

"Finally, eye tracking combines with the downwards cameras to track your facial expressions in real time to drive your FaceTime Persona, Apple's take on photorealistic avatars. Meta has been showing off research towards this for over four years now, but it looks like Apple will be the first to ship – albeit not to the same quality of Meta's research."

As late as September 2020, Apple seemed to believe NNs were software in this eye-tracking patent application:

WO2021050329A1 GLINT-BASED GAZE TRACKING USING DIRECTIONAL LIGHT SOURCES 20190909

[Figure from patent application WO2021050329A1]


[0040] In some implementations, the gaze tracker 246 is configured to determine a gaze direction of a user via one or more of the techniques disclosed herein. To that end, in various implementations, the gaze tracker 246 includes instructions and/or logic therefor, configured neural networks, and heuristics and metadata therefor.
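For intuition only, here is a minimal pupil-centre/corneal-reflection style sketch of the kind of quantity such a gaze tracker estimates. The calibration matrix and the linear mapping are invented for illustration; this is not Apple's patented method or any shipping implementation:

```python
import numpy as np

def estimate_gaze(pupil_px, glint_px, calib):
    """Very rough PCCR-style gaze estimate (illustrative only).

    pupil_px, glint_px: (x, y) image coordinates of the pupil centre and of the
    corneal glint produced by a known directional light source.
    calib: 2x3 matrix of per-user calibration coefficients (hypothetical).
    Returns (yaw, pitch) gaze angles in degrees.
    """
    dx, dy = np.subtract(pupil_px, glint_px)   # pupil-to-glint offset vector
    features = np.array([dx, dy, 1.0])         # affine feature vector
    yaw, pitch = calib @ features              # simple linear calibration mapping
    return yaw, pitch

# Toy usage with arbitrary numbers:
calib = np.array([[0.12, 0.00, -1.0],
                  [0.00, 0.10, -0.5]])
print(estimate_gaze((320, 240), (310, 250), calib))
```

A real tracker would replace that linear mapping with the patent's "configured neural networks" and heuristics, but the input it works from is the same kind of glint/pupil geometry.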

So is it probable that the scales have fallen from Apple's eyes in the last 32 months ... and just think how much they could reduce the size of that external battery ...
 
Last edited:
  • Like
  • Love
Reactions: 17 users

AusEire

Founding Member. It's ok to say No to Dot Joining
14k hahaha
Best post this year.
Let's ignore it's an article written by someone else and not IR.


Edit: have I fallen for some ultra-low quality bait? Is this shitpost good or is it actually shit??
I had him on ignore and suddenly I can see his posts again? 🤷🏼‍♂️

It's a shit shit post though 😂
 
  • Haha
  • Like
  • Fire
Reactions: 7 users
So we know from the blurb....

"Lorser serves clients in communications, industrial, medical, automotive, multimedia, defense and aerospace, computing and storage, and oil and gas."

Who else plays in this space, and what is their interest?

Who might Lorser or BRN find could be interested in an SNN SDR?

I believe SDR is also a relative of NECR, which we are involved in developing further with Intellisense.

Excerpt from Verdict.


Software-defined radio is a key innovation area in technology

Software-defined radio (SDR) is a communication system in which traditionally hardware-based components such as mixers, filters, amplifiers, and modulators/demodulators are replaced by software running on general-purpose computers or embedded computing devices. This software-based approach allows for flexible and reconfigurable radio functionalities, eliminating the need for dedicated hardware components.
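As a toy illustration of that replacement, the sketch below stands in a software mixer, low-pass filter and FM demodulator with a few lines of NumPy/SciPy. It is a simplified example of my own, not code from any SDR framework:

```python
import numpy as np
from scipy.signal import butter, lfilter

fs = 1_000_000             # sample rate of the digitised RF stream (Hz)
f_carrier = 200_000        # carrier we want to "tune" to (Hz)
k_f = 10_000               # FM frequency deviation of the toy transmitter (Hz)
t = np.arange(200_000) / fs

# Fake received signal: an FM carrier at 200 kHz (stand-in for the ADC output).
msg = np.sin(2 * np.pi * 1_000 * t)                         # 1 kHz message tone
rx = np.cos(2 * np.pi * f_carrier * t + 2 * np.pi * k_f * np.cumsum(msg) / fs)

# Software "mixer": multiply by a numerically controlled oscillator (NCO)
# to shift the carrier down to baseband, the job a hardware mixer would do.
baseband = rx * np.exp(-2j * np.pi * f_carrier * t)

# Software low-pass filter: keep the baseband signal, reject the mixing image.
b, a = butter(5, 20_000 / (fs / 2))
baseband = lfilter(b, a, baseband)

# Software FM demodulator: differentiate the instantaneous phase.
inst_freq = np.diff(np.unwrap(np.angle(baseband))) * fs / (2 * np.pi)
print("recovered frequency swing (Hz):", inst_freq[5_000:5_005].round(1))
```

Changing the filter cut-off, the NCO frequency or the demodulator is a code change rather than a board respin, which is the whole point of the SDR approach.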

GlobalData’s analysis also uncovers the companies at the forefront of each innovation area and assesses the potential reach and impact of their patenting activity across different applications and geographies. According to GlobalData, there are 10+ companies, spanning technology vendors, established technology companies, and up-and-coming start-ups engaged in the development and application of software-defined radio.

Key players in software-defined radio – a disruptive innovation in the technology industry

[Chart: key players in software-defined radio, plotted by application diversity and geographic reach]



‘Application diversity’ measures the number of different applications identified for each relevant patent and broadly splits companies into either ‘niche’ or ‘diversified’ innovators.

‘Geographic reach’ refers to the number of different countries each relevant patent is registered in and reflects the breadth of geographic application intended, ranging from ‘global’ to ‘local’.

For a pretty good outline of SDR in mobile networks and some use cases, see this article from Military Embedded from the middle of last year.



Big data on mobile networks: the role of software-defined radio (SDR)​

Story
July 22, 2022




The main structure of a 5G network is the radio access network (RAN), which can be implemented in several architectures. Regardless of the architecture, software-defined radios (SDRs) play a major role in every step of the RAN chain, including backhaul, midhaul, and fronthaul. SDRs – whether on the battlefield or the urban jungle – provide important technological features that handle the “big data” glut, including fast 10-100 Gb/sec fiber communication, wide tuning range, several multiple-input/multiple-output (MIMO) channels, high phase coherency, and a software-based backend that can be programmed to fit any applications.​

The term big data is used to designate a massive amount of data collected from many different sources, along with the statistical tools and techniques designed to analyze these data sets, typically based on cloud/edge computing, machine learning (ML), and artificial intelligence (AI). Collecting the massive amounts of information required to make use of big data is no easy task.

A wide variety of wireless devices – often called user equipment (UE) – operate on large 5G networks, including autonomous vehicles, smartphones, and IoT [Internet of Things] devices. Extraction of big data from the multiple UEs and the further information processing through statistical analysis and ML/AI algorithms provide the perfect framework for several useful network operations, including optimization of device gateways, spectrum-sharing and dynamic spectrum access in networks, and real-time performance diagnostics and analytics on IoT networks, including the evaluation of key performance indicators (KPI). However, to properly enable big data, the UEs and the network must provide very high data throughput, low-latency backhaul, and optimized data storage.

Typically, the communication modules in baseband units (BBUs) and remote radio units (RRUs) are based on software-defined radios (SDRs); therefore, the performance of these SDRs is one of the main bottlenecks in wireless big data collection. In this context, ultra-low latency and very high-performance SDRs are increasingly being adopted in 5G base stations, particularly due to the truly parallel signal processing (powered by FPGA [field-programmable gate array] technology) and their very high data throughput capability over Ethernet fiber, enabling extraction, processing, and packetization of vast amounts of data.

The FPGA-based digital backend can also easily implement big data functions and analytics inside the SDR. Furthermore, as the number of UEs connected on LTE/5G increases, these frequency bands become more and more congested and scarce, so smart radio resource allocation and spectrum-sharing policies can significantly improve the electromagnetic (EM) performance of a network. Both techniques require wideband spectrum monitoring, which can only be implemented using high-performance SDRs, with MIMO capabilities, ultra-low latency, wide bandwidths, and high tuning ranges.
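A minimal sketch of the wideband occupancy check that spectrum monitoring implies: an FFT-based power estimate per sub-channel with a fixed threshold. It assumes complex IQ samples are already available from the SDR front end, and the channelisation and threshold are arbitrary choices of mine:

```python
import numpy as np

def channel_occupancy(iq, fs, n_channels=16, threshold_db=-30.0):
    """Split a captured band into equal channels and flag which look occupied."""
    spectrum = np.fft.fftshift(np.fft.fft(iq))
    psd_db = 20 * np.log10(np.abs(spectrum) / len(iq) + 1e-12)
    bands = np.array_split(psd_db, n_channels)
    centres = np.linspace(-fs / 2, fs / 2, n_channels, endpoint=False) + fs / (2 * n_channels)
    return [(f0, band.max(), band.max() > threshold_db)
            for f0, band in zip(centres, bands)]

# Toy capture: low-level noise plus one strong tone partway up the band.
fs = 1_000_000
t = np.arange(65_536) / fs
iq = 0.001 * (np.random.randn(len(t)) + 1j * np.random.randn(len(t)))
iq += np.exp(2j * np.pi * 250_000 * t)

for f0, power_db, busy in channel_occupancy(iq, fs):
    if busy:
        print(f"occupied channel near {f0 / 1e3:+.0f} kHz ({power_db:.1f} dB)")
```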

The 5G future

Differently from the current 4G/LTE technology, 5G is intended to be much more than a simple data pipe. In fact, 5G networks can be seen as purpose-built networks designed to facilitate the connection between many devices, sensors, and automation systems. By providing more than ten times the capacity of 4G, 5G can ensure high levels of interconnectivity for the needs of military, government, and commercial devices, transmitting massive amounts of data with a high bit rate and ultra-low latency. This connectivity power is crucial for a variety of purposes, such as augmented reality (AR), virtual reality (VR), autonomous systems, tactile internet, and automation.

Despite the technological revolution that 5G is ushering in, there are several challenges that must be addressed and solved to unleash its full potential. The main bottleneck of 5G implementation is the network infrastructure. Although part of the backbone of the network can be implemented with the telecommunication infrastructure already in use, true 5G requires large numbers of small cells in densely populated areas to support the massive traffic of data, each working with wireless and fiber links with speeds greater than 10 Gb/sec and latencies smaller than 1 ms. Furthermore, the high frequencies necessary to provide enough bandwidth for high data rate users (> 6 GHz) is significantly problematic in terms of RF signal quality. For instance, high frequencies have shorter range due to the signal loss and can be easily obstructed by obstacles (including buildings, walls, and trees), so they require large numbers of small cells to increase the coverage.

Fortunately, the use of multiple-input/multiple-output (MIMO) SDRs can simplify the implementation of highly dense small-cell networks, addressing both bandwidth and coverage. Finally, network security is much more concerning in 5G than 4G, due to the tight integration with critical systems including autonomous systems and vehicles: Therefore, dedicated security schemes are required to ensure the robustness of 5G networks, particularly considering the privacy issues in big data applications.

The high bandwidth and low latency of 5G networks, combined with the implementation of multi-access edge computing (MEC) architecture, create the perfect environment for collecting and processing RF big data. The main idea consists of extracting as much data as possible from the densely packed small cells, applied in massive machine type communication (mMTC) and machine-to-machine communications (M2M) in IoT, and converting this big data into real-time insights for intelligent decision-making, using distributed computing architectures and high-performance network links, which can be further applied in quality of service (QoS) evaluation and optimization. For instance, big data can be used as a tool for operators to forecast the curve of demand, coordinate on-the-fly resource allocation and network slicing, and solve interference and coverage limitations by optimizing network capacity. Furthermore, 5G's improved interconnectivity will enable distributed edge computing on a completely new level, with all the heavy big data computation performed in the cloud. (Figure 1.)



[Figure 1 | Overall 5G architecture on a service, network, and functional level.]

SDRs in 5G networks


As the name suggests, software-defined radios are RF units that implement most of the signal processing and communication functions in the digital domain, leaving only the essentials to the analog circuit. The general architecture of an SDR consists of an analog front end (AFE) and a digital back end. The AFE contains both the receive and transmit functionalities, and can be composed of several channels in MIMO operations. Each AFE channel can be tuned over a wide range of frequencies, including the 5G tuning range. The analog signals amplified and filtered by the AFE are digitized using dedicated ADCs and DACs with high and stable phase coherency. However, the real crux of the SDR is the digital backend, which is typically implemented using high-end FPGAs. The FPGA, with onboard DSP capabilities, is responsible for basic radio functions, such as modulation/demodulation, upconverting/downconverting, and filtering, but it can also perform complex communication tasks, including the latest 5G communication protocols and DSP algorithms. Moreover, it can packetize and transport Ethernet packets over 10 to 100 Gb/sec via SFP+/qSFP+ links. The FPGA-based backend enables the SDR to be designed for a wide range of SWaP (size, weight and power) requirements.
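To make the "packetize and transport" step concrete, here is a toy framing routine of my own. The 12-byte header layout is invented for illustration and is not VITA-49 or any vendor's actual wire format:

```python
import struct
import numpy as np

def packetize_iq(iq, samples_per_packet=1024, stream_id=7):
    """Chop complex IQ samples into simple length-prefixed frames.

    Invented header for this sketch: stream id, sequence number, sample count,
    each an unsigned 32-bit big-endian integer. Payload: interleaved I/Q as
    16-bit signed integers.
    """
    packets = []
    for seq, start in enumerate(range(0, len(iq), samples_per_packet)):
        chunk = iq[start:start + samples_per_packet]
        payload = np.empty(2 * len(chunk), dtype=">i2")
        payload[0::2] = np.round(chunk.real * 32767).astype(">i2")
        payload[1::2] = np.round(chunk.imag * 32767).astype(">i2")
        header = struct.pack(">III", stream_id, seq, len(chunk))
        packets.append(header + payload.tobytes())
    return packets

# Toy usage: 4096 samples of a complex tone become four frames.
t = np.arange(4096) / 1e6
frames = packetize_iq(0.5 * np.exp(2j * np.pi * 100e3 * t))
print(len(frames), "frames,", len(frames[0]), "bytes each")
```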

SDRs are the main building blocks of the general 5G RF network. They can be implemented as the fronthaul network in RRUs to receive and transmit data from the user equipment, or as the BBUs, particularly the distributed units (DUs) and central units (CUs) in the O-RAN network standard. In fact, SDRs can be applied in any step of the network chain, playing important roles in the midhaul and backhaul as well. Furthermore, they are the ideal technology to combine RRU and BBU functionalities in femto cells (small, low-power cellular base stations) by providing an off-the-shelf solution with high flexibility, low power consumption, and small form factor. An SDR-based femto cell can be used in a variety of applications, including wireless networks for tactical use or embedded communications system for unmanned aerial systems (UASs).

Big data-driven AI/ML and SDR

Because of their affinity with software-based technology, SDRs can facilitate several technologies related to big data and artificial intelligence/machine learning (AI/ML) algorithms. For instance, SDRs can be used to test key performance indicators (KPI) of the 5G network. The high speed of reconfigurability required to rapidly manage network slicing (NS) can easily degrade the KPIs of the network. In this context, KPI monitoring and evaluation techniques, such as Autonomous Anomaly Detection (AAD) and real-time analysis, are fundamental to maintain the QoS. These techniques can be easily implemented using SDRs, by taking advantage of the digital backend and using the FPGA to run sophisticated KPI analysis algorithms in loco.
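As a deliberately small stand-in for the KPI monitoring described above, the sketch below flags anomalies in a latency KPI with a rolling z-score. The window size and threshold are arbitrary choices of mine, not an AAD algorithm from any vendor:

```python
import math
from collections import deque

def kpi_anomalies(samples, window=50, z_threshold=4.0):
    """Flag KPI samples that deviate sharply from the recent rolling statistics."""
    history = deque(maxlen=window)
    flagged = []
    for i, value in enumerate(samples):
        if len(history) == window:
            mean = sum(history) / window
            var = sum((x - mean) ** 2 for x in history) / window
            std = math.sqrt(var) or 1e-9        # guard against a flat window
            if abs(value - mean) / std > z_threshold:
                flagged.append((i, value))
        history.append(value)
    return flagged

# Toy latency trace (ms): steady around 5 ms with one spike injected.
trace = [5.0 + 0.1 * ((i * 37) % 7 - 3) for i in range(200)]
trace[120] = 42.0
print(kpi_anomalies(trace))   # -> [(120, 42.0)]
```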

Due to the high number of different devices and services that a 5G RAN [radio access network] must handle, network slicing is mandatory. To create these self-contained slices, SDN/NFV techniques are required, providing enough flexibility and reconfigurability over the physical layer of the network. SDRs play a major role in the softwareization of the network, enabling the implementation of SD-RAN algorithms, such as real-time RAN intelligent controllers (RIC). Furthermore, big data-driven dynamic slicing can take real-time information about the network state and automatically reallocate resources based on traffic predictors and dedicated cost functions. (Figure 2.)



[Figure 2 | A diagram shows how data-driven AI/ML can help a radio access network (RAN).]
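A deliberately simplified illustration of that data-driven slicing idea: forecast each slice's demand from recent traffic and reallocate a fixed pool of resource blocks proportionally. The forecasting rule and the allocation policy are stand-ins of mine, not an O-RAN algorithm:

```python
def reallocate_slices(traffic_history, total_prbs=100, min_prbs=5):
    """Share a fixed pool of physical resource blocks across network slices.

    traffic_history: {slice_name: [recent demand samples]}.
    Forecast = mean of the recent samples; allocation is proportional to the
    forecast, with a small guaranteed minimum per slice.
    """
    forecasts = {s: sum(h) / len(h) for s, h in traffic_history.items()}
    total = sum(forecasts.values()) or 1.0
    spare = total_prbs - min_prbs * len(forecasts)
    return {s: min_prbs + round(spare * f / total) for s, f in forecasts.items()}

history = {
    "eMBB": [420, 480, 510],    # video-heavy broadband traffic (Mb/s)
    "URLLC": [12, 15, 11],      # low-latency control traffic
    "mMTC": [60, 55, 70],       # massive IoT telemetry
}
print(reallocate_slices(history))   # e.g. {'eMBB': 78, 'URLLC': 7, 'mMTC': 15}
```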

Data-driven ML/AI algorithms can also be used in beamforming optimization, by assisting the RRUs to calculate and select the best beams to maximize the reference signal received power (RSRP). In this case, the beamforming process will take the form of a data-driven feedback system, where each UE informs the serving cell about several beam parameters, including beam index (BI) and beam reference signal received power (BRSRP), which then makes a decision about which beams must be selected to serve that unit. With the massive number of UEs in use, this becomes a big data problem. By promoting softwareization of the RRU and providing MIMO capabilities to drive the antenna arrays, SDRs actually enable beamforming optimization via big data. Furthermore, in massive MIMO applications, each beamforming antenna receives a weight that must be optimized to obtain the best beam. ML/AI algorithms can be used to dynamically optimize the weight of the antennas based on forecast models, historical data, interference data, and user specifications.
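In the spirit of that feedback loop, a toy beam-selection step: each UE reports beam indices with their BRSRP, and the serving cell simply keeps the strongest reported beam per UE. A real deployment would feed these reports into the learned predictor the article describes; the rule below is only a placeholder:

```python
def select_beams(reports):
    """Pick the serving beam per UE from reported (beam_index, BRSRP_dBm) pairs."""
    return {ue: max(measurements, key=lambda m: m[1])[0]
            for ue, measurements in reports.items()}

# Toy reports: two UEs, each measuring a few candidate beams.
reports = {
    "ue-17": [(0, -92.5), (3, -81.2), (7, -88.0)],
    "ue-42": [(2, -79.4), (3, -95.1)],
}
print(select_beams(reports))   # {'ue-17': 3, 'ue-42': 2}
```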

Brendon McHugh is a field application engineer and technical writer at Per Vices, a company that develops, builds, and integrates software-defined radios (SDRs). Brendon is responsible for assisting current and prospective clients in configuring the right SDR solutions for their unique needs. He holds a degree in theoretical and mathematical physics from the University of Toronto. Readers may contact Brendon at solutions@pervices.com.

Kaue Morcelles is a technical writer and is a Ph.D. student in electrical engineering.


Per Vices • https://www.pervices.com
 
  • Like
  • Fire
  • Love
Reactions: 30 users
Think I figured out the SP issue.

BRN are testing Akida in edge HFT and it's gone rogue :ROFLMAO:



The Crucial Role of Edge Computing in High-Frequency Trading: Unlocking the Power of Proximity and Latency​


Tom Reyes​

Cloud Exec | Data Geek | Transformer​

Published Jun 1, 2023

In the fast-paced world of #highfrequencytrading (HFT) or #algotrading, where every millisecond counts, gaining a competitive edge is of paramount importance. As financial markets become increasingly digitized and complex, high-frequency trading companies are embracing innovative technologies to stay ahead. Among these advancements, #edgecomputing has emerged as a game-changer, offering substantial benefits for #hft firms. In this article, we will explore how edge computing revolutionizes high-frequency trading by leveraging its proximity to major markets, such as the New York Stock Exchange (#nyse) and the Chicago Mercantile Exchange (CME), and ensuring ultra-low latency for lightning-fast trade executions.
Proximity to Major Markets:
Location plays a critical role in high-frequency trading, where even minor disparities in execution speed can result in significant competitive advantages or disadvantages. By deploying edge computing infrastructure in close proximity to major financial markets like the NYSE and CME, HFT firms can significantly reduce network latency and gain an edge over competitors. The physical proximity minimizes the time required for data transmission between trading systems and the exchanges, enabling faster access to market data and execution of trades.
Reducing Latency:
Latency, measured in milliseconds or even microseconds, is the lifeblood of high-frequency trading. Edge computing empowers HFT firms to execute trades with lightning speed, thereby enhancing their ability to capture fleeting market opportunities. By processing and analyzing data closer to the source, edge computing minimizes the latency introduced by long network routes and centralized data centers. This ultra-low latency enables HFT firms to react swiftly to market changes, implement complex trading algorithms, and execute trades with remarkable precision.
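To put rough numbers on the proximity argument: propagation in optical fibre runs at roughly two-thirds of the speed of light, so distance alone sets a hard floor on round-trip time. The distances below are illustrative round numbers, not actual exchange or data-centre locations:

```python
C_KM_PER_S = 299_792.458   # speed of light in vacuum
FIBRE_FACTOR = 0.67        # approximate propagation speed in fibre vs vacuum

def round_trip_ms(distance_km):
    """Round-trip propagation delay over fibre, ignoring switching and processing."""
    return 2 * distance_km / (C_KM_PER_S * FIBRE_FACTOR) * 1000

for label, km in [("co-located edge node", 1),
                  ("same metro area", 50),
                  ("cross-country data centre", 4000)]:
    print(f"{label:>26}: {round_trip_ms(km):7.3f} ms")   # ~0.01, ~0.5, ~40 ms
```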
Enhanced Scalability and Reliability:
In addition to proximity and low latency, edge computing provides high-frequency trading companies with enhanced scalability and reliability. Edge nodes placed near major markets can handle massive volumes of data and processing demands, enabling real-time analytics and rapid decision-making. This distributed approach ensures that even during peak trading hours or when faced with sudden market volatility, HFT firms can maintain uninterrupted trading operations without compromising performance or reliability.
Risk Mitigation and Regulatory Compliance:
Edge computing also addresses the risk and compliance challenges faced by high-frequency trading firms. By processing critical trading data at the edge, firms can reduce exposure to potential security breaches, data loss, or compliance violations that could arise from relying solely on remote data centers. The ability to process data locally and transmit only the necessary information to centralized systems provides an added layer of security and mitigates potential risks associated with sensitive financial data.

Conclusion:
Edge computing is revolutionizing high-frequency trading by leveraging proximity to major markets and delivering ultra-low latency for lightning-fast trade executions. Edge computing companies, like StackPath, have edge locations within the metropolitan areas of major exchanges, such as NYSE, #nasdaq, CME, XJPX, HKEX , TSE, LSE, TSX, ENX, and BSE. By embracing this deterministic technology, HFT firms gain a competitive edge, enabling them to seize market opportunities with exceptional speed and precision. The benefits of edge computing, including enhanced scalability, reliability, risk mitigation, and regulatory compliance, make it a vital tool for high-frequency trading companies in today's fast-paced financial landscape. As HFT continues to evolve, leveraging edge computing will undoubtedly remain an essential strategy for staying ahead of the competition and capitalizing on market dynamics.
 
  • Like
  • Love
  • Fire
Reactions: 11 users

TECH

Regular
How are we all feeling?

A mini pump and dump: create a false range, create some panic on the downside, gobble up some fresh selling volume? This pattern simply repeats. "Hey Frank, get the vacuum cleaner out, I smell some fresh blood (panic). I wonder how many dead fish we'll pull in this run." "It's like taking candy from a baby, too easy mate."

Honestly, no genuine shareholder would be selling their stock unless they were holding stock they couldn't afford; fair enough, they over-committed this month. Brainchip is still one of the most solid Australian stocks with the best potential upside we have come across. Losing faith now reflects a total lack of research and a total lack of understanding of the space we currently occupy.

Learn the design cycle, learn the IP License model, learn what timeframes are involved, learn what can and can't be controlled, learn how royalties are negotiated, learn how IP License fees are negotiated, learn how trust is formed, learn how selling a totally new computing architecture takes time, learn that new design concepts take time, learn that the whole company has to be aboard to succeed, learn that revolutionary technology takes time to understand, learn that change of any kind takes trust and a leap of faith, learn that to succeed in this business you must have some degree of patience.

I believe in what I have invested in, but do you ?

Tech 2023 🎯
 
  • Like
  • Love
  • Fire
Reactions: 67 users

Boab

I wish I could paint like Vincent
A UK site covering the Brainchip Lorser news.

 
  • Like
  • Love
  • Fire
Reactions: 24 users

cosors

👀

"Advancing Intelligence at the Edge with AI Vision Processors​

June 5, 2023
Sponsored by Texas Instruments. A neural network has an extensive set of parameters that are trained using a set of input images—the network "learns" the rules used to perform tasks like object detection or facial recognition on future images.

This year is giving every indication of becoming a watershed period in the development of AI-based vision processing. And, if things happen as expected, the results could be as big as, or bigger than, consumer PCs were in the 1970s, the web was in the 1990s and smartphones became during this century. The artificial-intelligence (AI) vision market is expected to be valued at $17.2 billion in 2023, growing at a CAGR of 21.5% from 2023 to 2028 (Source: MarketsandMarkets).

The question is not whether it will happen, but rather how do we want to do it? How do we want to develop vision-based AI for collision avoidance, hazard detection, route planning, and warehouse and factory efficiency, to name just a few use cases?

We know a surveillance camera can be smarter with edge AI functionality. And, when we say smarter, we mean the ability to identify objects and respond accordingly in real-time.

Traditional vision analytics uses predefined rules to solve tasks such as object detection, facial recognition, or red-eye detection. Deep learning employs neural networks to process the images. A neural network has a set of parameters that are trained using input images so that the network "learns" the rules, which are then applied to perform tasks like object detection or facial recognition on future images.

AI at the edge happens when AI algorithms are processed on local devices instead of in the cloud and where deep neural networks (DNNs) are the main algorithm component. Edge AI applications require high-speed and low-power processing, along with advanced integration unique to the application and its tasks.

TI’s vision processors make it possible to execute facial recognition, object detection, pose estimation, and other AI features in real-time using the same software. With scalable performance for up to 12 cameras, you can build smart security cameras, autonomous mobile robots, and everything in between (Fig. 1).


1. Machine vision and professional surveillance systems need vision processors with high performance.


The scalable and efficient family of vision processors enables higher system performance with hardware accelerators and faster development with hardware-agnostic programming for vision and multimedia analytics.



Deep-Learning Accelerator​


TI’s AM6xA family pairs Arm Cortex-A MPUs with dedicated accelerators that offload computationally intense tasks such as deep-learning inference, imaging, vision, video, and graphics processing. An accelerator called MMA—Matrix Multiply Accelerator—is employed for deep-learning computations. The MMA along with TI's C7x digital signal processor can perform efficient tensor, vector, and scalar processing. The accelerator is self-contained and doesn’t depend on the host Arm CPU.
Each edge AI device in the processor family, such as the AM62A, AM68A, etc. (the A at the end means it is an AI accelerated series of processors), has a different version of the C7xMMA deep-learning accelerator.
For instance, the AM68A (for up to eight cameras) and AM69A (up to 12 cameras) use a 256-bit variant of the C7xMMA, which can compute 1024 MAC operations per cycle, resulting in a maximum 2 TOPS capability. Training a deep-learning model typically requires a very high teraoperations-per-second processing engine, but most edge AI inference applications require performance in the range of 2 to 8 TOPS.
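The 2-TOPS figure follows from simple arithmetic if each MAC is counted as two operations and the accelerator clocks at around 1 GHz; the clock value is my assumption for illustration, as the article does not state it:

```python
macs_per_cycle = 1024     # C7xMMA MACs per clock cycle (256-bit variant, per the article)
ops_per_mac = 2           # one multiply plus one accumulate
clock_hz = 1.0e9          # assumed ~1 GHz clock (not given in the article)

tops = macs_per_cycle * ops_per_mac * clock_hz / 1e12
print(f"~{tops:.1f} TOPS")   # ~2.0 TOPS, matching the quoted figure
```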
The AM62A enables one to two cameras. It can extend to four cameras, though, and is designed to operate at 2 to 3 W in a form factor small enough for use in power-efficient battery-operated applications. The processor can handle up to 5-Mpixel cameras. These are more than enough for in-house indoor usage, ranging from video doorbells to smart retail apps (Fig. 2).

[Image credit: Texas Instruments]

2. This camera uses an AI model to monitor customer activity in a smart retail store.

AM6xA edge AI software architecture makes it possible for developers to develop applications completely in Python or C++ language. There’s no need to learn any special language to take advantage of the performance and energy efficiency of the deep-learning accelerator.
The SK-AM62A-LP starter kit (SK) evaluation module (EVM) is built around the AM62A AI vision processor, which includes an image signal processor (ISP) supporting up to 5 Mpixels at 60 fps, a 2-TOPS AI accelerator, a quad-core 64-bit Arm Cortex-A53 microprocessor, a single-core Arm Cortex-R5F, and an H.264/H.265 video encode/decode. Similarly, the SK-AM68 Starter Kit/EVM is based on the AM68x vision SoC.
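For flavour, here is a generic Python capture-and-infer loop of the kind such kits enable, written against OpenCV and ONNX Runtime as stand-ins rather than TI's actual edge-AI libraries, and with a hypothetical model file name:

```python
import cv2                  # generic camera capture, stand-in for the platform pipeline
import numpy as np
import onnxruntime as ort   # generic inference runtime, not TI's accelerated delegate

session = ort.InferenceSession("detector.onnx")   # hypothetical pre-compiled model
input_name = session.get_inputs()[0].name

cap = cv2.VideoCapture(0)   # first attached camera
for _ in range(30):         # grab a short burst of frames for this sketch
    ok, frame = cap.read()
    if not ok:
        break
    # Resize and normalise to the NCHW float tensor most detectors expect.
    blob = cv2.resize(frame, (320, 320)).astype(np.float32) / 255.0
    blob = blob.transpose(2, 0, 1)[np.newaxis, ...]
    detections = session.run(None, {input_name: blob})[0]
    print("raw detections shape:", detections.shape)
cap.release()
```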

Easier Design Using Package Videos and ModelZoo​

TI's vision AI processors, with accelerated deep learning, vision and video processing, purpose-built system integration, and advanced component integration, enable commercially viable edge AI systems optimized for performance, power, size, weight, and system costs. And such AI processors help in the process of designing efficient edge AI systems thanks to their heterogeneous architecture and scalable AI execution.

Supplied along with the camera options discussed in this article are package videos—choose one to view performance of the various models.

In addition, TI continues to extend its “ModelZoo” to support the latest AI models on its embedded processors. ModelZoo is a large collection of pre-compiled models trained on industry-standard datasets; the models are optimized for inference, speed, and low power consumption. These runtime libraries can be used both for deep-learning model compilation and deployment to TI’s edge AI SoCs."
https://www.electronicdesign.com/to...ligence-at-the-edge-with-ai-vision-processors

Would be nice, wouldn't it?
 

Last edited:
  • Like
  • Fire
Reactions: 28 users

Tothemoon24

Top 20
  • Like
  • Fire
  • Wow
Reactions: 30 users

MDhere

Regular
Hewlett-Packard, as we know Sean’s past employer, is very much talking all things machine learning and the “intelligent edge”.

Running time 3 minutes

I like it, thanks Tothemoon, and I really like his voice and the pivotal emphasis in his wording. Good video.
 
  • Like
  • Fire
Reactions: 11 users
D

Deleted member 118

Guest
And still no one thinks the company should hire an experienced PR professional to review any and all communications and announcements?!
Completely agree, or maybe they should have hired one more salesperson instead.

 
  • Haha
Reactions: 4 users

Kachoo

Regular
How did they close 15+ million shares on the 31st of May, when the daily volume was 11+ million shares?

At the end of the day, it's just a game that they play.

I have plenty of time before early retirement 😎

Learning 🏖
You're right, and many times it's happened in the past where the volume covered was higher than the volume traded lol. This is based on what is reported, really a measure of trend, nothing more. Just pointing out this is the lowest I have seen BRN on this short list, so I would assume, in my opinion, covering is happening while there is negative talk. IMO
 
  • Like
Reactions: 5 users

Kachoo

Regular
This article is in Japanese, but you can translate the page by clicking on ‘English’ at the top.

Nice find, this is what needs to be advertised by BRN a little more.
 
  • Like
Reactions: 8 users

Dhm

Regular
Renesas webinar will hopefully reveal our participation in their new product.


[Screenshot of the Renesas webinar announcement]
 
Last edited:
  • Like
  • Fire
  • Love
Reactions: 37 users