BRN Discussion Ongoing

It has mycoleather seats, but they taste like shitake to stop the passengers licking the seats.
Why would you want to stop that?
😂🤡🤣😂
 
  • Haha
Reactions: 6 users

Quatrojos

Regular
Intel still think NNs use MACs (well at least they did in October 2022). Could they have learnt anything since?


US2023059976A1 DEEP NEURAL NETWORK (DNN) ACCELERATOR FACILITATING QUANTIZED INFERENCE US202218047415A·2022-10-18

View attachment 31958

PE = processing element.
[0008] FIG. 4 illustrates a PE (processing element) array, in accordance with various embodiments.

A DNN accelerator may include a PE (processing element) array performing MAC operations. The PE array may include PEs capable of MAC operations on quantized values. A PE may include subtractors for subtracting zero-points from quantized activations and quantized weights to generate intermediate activations and intermediate weights. The intermediate activations and intermediate weights may be stored in data storage units in the PE and may be used by a MAC unit in the PE. The subtractors may be placed outside the MAC unit but inside the PE. The MAC unit may perform sequential cycles of MAC operations. The MAC unit may include a plurality of multipliers. The intermediate activations and intermediate weights stored in the data storage units may be reused by different multipliers in different cycles of MAC operations. An output of the MAC unit or of the PE may be multiplied with a quantization scale to produce a floating-point value.
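For anyone not used to patent-speak, what's being described is essentially textbook quantized inference: subtract the zero-points, multiply-accumulate in integer arithmetic, then rescale the accumulator back to floating point. A minimal NumPy sketch of that flow (array shapes, names and example numbers are my own assumptions, not the patent's):

```python
import numpy as np

def quantized_mac(q_act, q_wgt, act_zp, wgt_zp, scale):
    """Toy version of the PE flow described in the patent abstract.

    q_act, q_wgt   : uint8 arrays of quantized activations and weights
    act_zp, wgt_zp : integer zero-points subtracted before the MAC
    scale          : quantization scale applied to the integer accumulator
    """
    # Subtractors: remove zero-points to get signed intermediate values
    inter_act = q_act.astype(np.int32) - act_zp
    inter_wgt = q_wgt.astype(np.int32) - wgt_zp

    # MAC unit: multiply-accumulate entirely in integer arithmetic
    acc = np.dot(inter_act, inter_wgt)

    # Rescale the accumulator to a floating-point output
    return acc * scale

# Example: one output of a tiny quantized dot product
q_act = np.array([12, 200, 37], dtype=np.uint8)
q_wgt = np.array([130, 90, 255], dtype=np.uint8)
print(quantized_mac(q_act, q_wgt, act_zp=128, wgt_zp=128, scale=0.02))
```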


Intel has cancelled its Rialto Bridge processor, announced mid-2022 as an evolutionary change over its predecessor, and replaced it with the Falcon Shores GPU, due in 2025, whose "flexible chiplet-based architecture will address the exponential growth of computing needs for HPC and AI," McVeigh wrote.

... but does the left hand know which mouth the right foot is in?



'... but does the left hand know which mouth the right foot is in?'

Agreed. I think that it's such a huge co that their research community doesn't communicate with their ecosystem community. They may, I'd say, be competing for attention$...
 
  • Like
  • Haha
Reactions: 2 users
Agree, RF could be an interesting one.

Not just military but also Lidar amongst others.

I saw @Fact Finder's earlier post discussing 16nm as well as Xilinx.

From Nandan's tweet I decided to have a closer look at Ipsolon Research, as he provided a web link. It's one of the companies he thanked re Akida Gen 2.

View attachment 31947

Was curious where they fit into the picture.

So, snip of Ipsolon.


Ipsolon Research is an innovator in software defined radio technology.​


We launched Ipsolon Research in 2017 to develop and improve the analog and digital signal processing system for an airborne obstruction penetrating LIDAR system. Since then, we’ve engineered hardware, software, and FPGA design for a range of Software Defined Radios (SDR) and advanced wireless systems.

In addition to work for commercial wireless ICs, we also manufacture and sell Chameleon and Cerberus Radios, both wideband, small-form-factor SDRs for military communications, anti-jamming, and other advanced wireless signal processing applications. Our goal is to provide the wireless community with advanced radio system software and hardware development, early SDR conceptual design, advanced signal processing, and rapid prototyping.

From Ipsolon's product page... 16nm Xilinx.

They have a new product due soon, but it doesn't give specs yet.

ARAGMA​

Coming soon!


DIGITAL SIGNAL PROCESSING HORSEPOWER

16 TX/RX Channels, Smallest Footprint Software Defined Radio/RADAR

Multi-processor Subsystem

  • Xilinx 16nm FinFET RFSoC XCZU29
  • Quad-core 1.2 GHz ARM Cortex-A53 64b with L1/L2, MMU, DMA
  • Dual 500MHz R5 real-time processor, power manager & secure boot
  • 4GB DDR4, Micro SD memory
  • Range of IO including UARTs, 1G Ethernet, USB3.0
  • Massive parallel AXI IO between PS and FPGA fabric
FPGA Resources

  • 30+ system logic cells (K), 16 × 32G SERDES
  • 4,272 DSP slices, 2.13 TMACs @ 500 MHz
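As a sanity check, the headline TMACs figure is just the DSP slice count multiplied by the clock, assuming one MAC per slice per cycle (my assumption):

```python
dsp_slices = 4272
clock_hz = 500e6                          # 500 MHz
macs_per_s = dsp_slices * clock_hz        # one MAC per slice per cycle
print(f"{macs_per_s / 1e12:.2f} TMAC/s")  # ~2.14, matching the quoted 2.13
```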
Ipsolon also connected with the US Navy via an SBIR in 2021.


Ipsolon Research wins follow-on contract with Navy​

September 9, 2021

Frederick, MD – The United States Navy has selected Ipsolon Research's SBIR for a phase II follow-on contract after a successful phase I demonstration of its Deep Learning (DL) techniques.

"We used DL techniques to replace or extend the PDW-based approach to radar signal detection," explained Ipsolon Research CEO John Shanton.

Deep learning (DL) algorithms are a form of machine learning (ML) that use neural network (NN) architectures to process data and predict the presence of data patterns, a process often called prediction or inference.

The specific goal of this project is to develop a deployable electronic sensor optimized to use ML techniques for real-time detection of radar signals and other signals of interest.

“The phase I results demonstrated that ML is a viable method for detection of a known radar signal type in power- and space-efficient FPGA devices,” Shanton said.
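The release doesn't say what network Ipsolon actually used, but the usual recipe for this kind of task is a small 1D CNN run over windows of raw I/Q samples before it gets squeezed into an FPGA. A purely hypothetical sketch for illustration (layer sizes and class count are mine, not Ipsolon's):

```python
import torch
import torch.nn as nn

class RadarSignalClassifier(nn.Module):
    """Hypothetical 1D CNN over raw I/Q sample windows (2 input channels)."""
    def __init__(self, num_classes=4):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(2, 16, kernel_size=7, stride=2, padding=3), nn.ReLU(),
            nn.Conv1d(16, 32, kernel_size=5, stride=2, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),            # pool over the time axis
        )
        self.classifier = nn.Linear(32, num_classes)

    def forward(self, x):            # x: (batch, 2, window) of I/Q samples
        return self.classifier(self.features(x).squeeze(-1))

model = RadarSignalClassifier()
iq_batch = torch.randn(8, 2, 1024)   # 8 windows of synthetic I/Q data
print(model(iq_batch).shape)         # torch.Size([8, 4]) -> per-class scores
```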


Was reading an article from 2020 about..

Artificial Intelligence in Radio Frequencies​



Excerpt.

Application of Machine Learning in Cognitive Radio​

The concept of CR was first proposed by Joseph Mitola [1] to mitigate the scarcity of limited radio spectrum by improving spectrum allocation efficiency: unlicensed users (cognitive radio users) identify and transmit over frequency bands that are already assigned to licensed users (primary users) but idle over a specific time/space (spectrum holes). Spectrum hole identification is usually achieved by sensing the radio frequency (RF) environment through a process called spectrum sensing. It should be noted that the term CR is not limited to unlicensed wireless users; it applies to any wireless user that adds cognition capability, along with reconfigurability, to its system function.
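The spectrum sensing step the article mentions is, in its simplest form, just an energy detector per frequency bin: estimate the power across the band and treat bins below a threshold as spectrum holes. A rough sketch of that idea (FFT size, threshold and the synthetic signal are arbitrary assumptions, not from the article):

```python
import numpy as np

def find_spectrum_holes(iq_samples, fs, nfft=1024, threshold_db=-40):
    """Flag frequency bins whose average power falls below a threshold.

    iq_samples : complex baseband samples from the RF front end
    fs         : sample rate in Hz
    Returns the centre frequency (Hz, relative to baseband) of each idle bin.
    """
    # Average the power spectrum over consecutive FFT frames (Welch-style)
    frames = iq_samples[: len(iq_samples) // nfft * nfft].reshape(-1, nfft)
    psd = np.mean(np.abs(np.fft.fft(frames, axis=1)) ** 2, axis=0) / nfft
    psd_db = 10 * np.log10(psd + 1e-20)

    freqs = np.fft.fftfreq(nfft, d=1 / fs)
    return freqs[psd_db < threshold_db]   # bins treated as spectrum holes

# Example with synthetic noise plus one strong carrier
fs = 1e6
t = np.arange(200_000) / fs
noise = 0.001 * (np.random.randn(len(t)) + 1j * np.random.randn(len(t)))
f_tone = 100 * fs / 1024                  # carrier centred exactly on bin 100
sig = noise + np.exp(2j * np.pi * f_tone * t)   # one occupied channel
holes = find_spectrum_holes(sig, fs)
print(f"{len(holes)} of 1024 bins look idle")
```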

So, where had I heard of "cognitive radio"?...oh that's right...from the Ph I and Ph II NASA solicitation with Intellisense using Akida.

So, some prev posts on NECR and related SBIRs as a refresher.



Speaking of Intellisense ;)


Principal Machine Learning Engineer​

Intellisense Systems, Inc. Torrance, CA
4 days ago​


Intellisense Systems innovates what seemed impossible. We are a fast-growing Southern California technology innovator that solves tough, mission-critical challenges for our customers in advanced military, law enforcement, and commercial markets. We design, develop, and manufacture novel technology solutions for ground, vehicle, maritime, and airborne applications. Our products have been deployed in every extreme environment on Earth!

We are looking for an exceptional Principal Machine Learning Engineer skilled in development, algorithms, SQL, Python, and C/C++ to join our Artificial Intelligence (AI) and Radio Frequency (RF) Systems team. This person will code everything in Python, then convert the Python into C/C++ for optimization in the final production system. The team works on cutting-edge technologies for government customers and DoD applications.

As part of the team, you will work alongside other experienced scientists and engineers to develop novel cutting-edge solutions to several challenging problems. From creating experiments and prototyping implementations to designing new machine learning algorithms, you will contribute to algorithmic and system modeling and simulation, transition your developments to software and hardware implementations, and test your integrated solutions in accordance with project objectives, requirements, and schedules.

Projects You May Work On:

  • Real-time object detection, classification, and tracking
  • RF signal detection, classification, tracking, and identification
  • Fully integrated object detection systems featuring edge processing of modern deep learning algorithms
  • Optimization of cutting-edge neural network architectures for deployment on neuromorphic processors
 
  • Like
  • Fire
Reactions: 21 users
Just having a flick through Socionext's report for Q3 2023/3 that came out not long ago.

Attached as well fwiw.

Liked a couple slides re their direction that we know of so far. Definitely think there will be some growth here though probs a slower burn.

Initiatives like closer partnerships with SoC ecosystem EDA, IP and suppliers.

Auto & datacentre in particular appear to have NPU requirements / targets...hopefully Akida in the mix.

Screenshot_2023-03-12-20-26-33-72_e2d5b3f32b79de1d45acd1fad96fbb0f.jpg


IMG_20230312_204444.jpg


IMG_20230312_204514.jpg
 

Attachments

  • sn_ir20230216_02e.pdf
    2.1 MB · Views: 144
  • Like
Reactions: 15 users

Learning

Learning to the Top 🕵‍♂️
Thanks for that, Top Bloke.

Very impressive CV.
It's amazing how BrainChip attracts experienced and talented individuals to join BrainChip.

What is it that these individuals know that the WANCAs (What's A Neuromorphic Chip Anyway) don't? 😂😂😂

Oh that's right, BrainChip's new staff must know more about BrainChip's Akida than the WANCAs.

Learning 🏖
 
  • Like
  • Haha
  • Love
Reactions: 25 users
Screenshot_20230313-031018-871.png
Screenshot_20230313-030950.png


 
  • Like
  • Fire
  • Love
Reactions: 34 users

goodvibes

Regular

AI Inference Processes Data And Augments Human Abilities​




Artificial Intelligence (AI) has been making headlines over the past few months with the widespread use of, and speculation about, generative AI such as ChatGPT. However, AI is a broad topic covering many algorithmic approaches to mimic some of the capabilities of human beings. There is a lot of work going on to use various types of AI to assist humans in their activities. Note that all AI has limitations. It generally doesn't reason like we do, and it is generally best at recognizing patterns and using that information to create conclusions or recommendations. Thus, AI must be used with caution and not used to form conclusions beyond what it is capable of analyzing. Also, AI must be tested to make sure it is not biased by the training sets used to create the AI algorithms.



AI comes in big and small packages. Much of the training is done in data centers, where vast stored information and sophisticated processing of that information can be used to train AI models. However, once a model is created it can be used to interpret information collected locally via sensors, cameras, microphones and other data sources to make inferences that can help people make informed decisions. One important application for such AI inference engines is mobile and consumer devices that may be used for biometric monitoring, voice and image recognition and many other purposes. Inference engines can also be used to improve manufacturing and various office tasks. AI inference engines are also an important component in driving assistance and autonomous vehicles, enabling lane-departure detection, collision avoidance, and other capabilities.


There have been some recent announcements for AI inference engine chips from Brainchip and Hailo. Let's look at these announcements and their implications for processing and interpreting vast amounts of stored and sensed data. Slides from a Brainchip briefing provided some economics for making AI and potential applications for it. It said that the cost of training a single high-end model is about $6M and that annual losses in manufacturing due to unplanned downtime are about $50B. In terms of the opportunity, about 1TB is generated by an autonomous car per day and there is about a $1.1T cost of healthcare and lost productivity due to preventable chronic disease. A PwC report projects $15.7T in global GDP benefit from AI in 2030 and Forbes Business Insights projects $1.2T in AI internet of things (AIoT) revenue by that year.



To help enable this opportunity, Brainchip announced the 2nd generation of its Akida IP platform. The figure below from the briefing seems to show that this platform may be using chiplet technology that integrates the Akida Neuron Fabric chiplet to perform various functions.


Akida IP Platform

Brainchip akida neural network processor

BRAINCHIP PRODUCT BRIEFING

The Akida IP platform is a digital neuromorphic event-based AI device that is capable of some learning for continuous improvement. The company says that its second generation of Akida now includes Temporal Event Based Neural Net (TENN) spatial-temporal convolutions that supercharge the processing of raw time-continuous streaming data, such as video analytics, target tracking, audio classification, analysis of MRI and CT scans for vital-signs prediction, and time series analytics used in forecasting and predictive maintenance. The second-generation device also supports Vision Transformer (ViT) acceleration, a neural network architecture capable of many computer vision tasks, such as image classification, object detection, and semantic segmentation.
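BrainChip hasn't published the TENN internals in this article, so purely to illustrate the kind of operation being described, here is a generic factorised spatial-temporal convolution (a causal convolution along time followed by a 2D spatial convolution). This is not BrainChip's actual formulation; all names and sizes are my own:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SpatioTemporalBlock(nn.Module):
    """Generic factorised spatial-temporal convolution: a causal temporal
    convolution along the time axis followed by a 2D spatial convolution.
    Illustration only -- not BrainChip's TENN formulation."""
    def __init__(self, channels=8, t_kernel=5):
        super().__init__()
        self.t_pad = t_kernel - 1   # left-pad so the block is causal in time
        self.temporal = nn.Conv3d(channels, channels, (t_kernel, 1, 1))
        self.spatial = nn.Conv3d(channels, channels, (1, 3, 3), padding=(0, 1, 1))

    def forward(self, x):           # x: (batch, channels, time, height, width)
        x = F.pad(x, (0, 0, 0, 0, self.t_pad, 0))   # pad only the time axis
        return torch.relu(self.spatial(torch.relu(self.temporal(x))))

clip = torch.randn(1, 8, 16, 32, 32)        # 16 frames of 32x32 feature maps
print(SpatioTemporalBlock()(clip).shape)    # shape preserved: (1, 8, 16, 32, 32)
```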


Brainchip says that these devices are extremely energy-efficient in running complex models. For instance, they can run ResNet-50 on the neural processor. They are capable of spatial-temporal convolutions for handling 3D data, enable advanced video analytics and predictive analysis of raw time series data, and allow low-power support for vision transformers for edge AIoT.





The Akida product is being offered in three types of packages. The Akida-E is the smallest and lowest power, with 1-4 nodes; it is designed for sensor inference and is used for detection and classification. The Akida-S, with 2-8 nodes, includes a microprocessor with sensor fusion in application system-on-chips (SoCs) and is used for detection and classification working with a system CPU. The Akida-P, with 8-128 nodes, is the company's maximum-performance package; it is designed for network edge inference and neural network acceleration and can be used for classification, detection, segmentation and prediction with hardware accelerators.

Akida Platforms

Brainchip akida products

BRAINCHIP PRODUCT BRIEFING
Brainchip believes its Akida IP packages can serve many applications including industrial uses such as predictive maintenance, manufacturing management, robotics and automation and security management. In vehicles it can enhance the in-cabin experience, provide real-time sensing, enhance the electronic control unit (ECU) and provide an intuitive human machine interface (HMI). For health and wellness applications it can provide vital-signs prediction, sensory augmentation, chronic disease monitoring and fitness and training augmentation. For home consumers it can augment security and surveillance, intelligent home automation, personalization and privacy and provide proactive maintenance.

Hailo announced its Hailo-15 family of high-performance vision processors, designed for integration into intelligent cameras for video processing and analytics at the edge. The image below shows the Hailo VPU package. The company says that with Hailo-15, smart city operators can more quickly detect and respond to incidents; manufacturers can increase productivity and machine uptime; retailers can protect supply chains and improve customer satisfaction; and transportation authorities can recognize everything from lost children, to accidents, to misplaced luggage.

Hailo

Image of Hailo Visual Processing Unit

HAILO PRODUCT ANNOUNCEMENT
The Hailo-15 visual processing unit (VPU) family includes three variants — the Hailo-15H, Hailo-15M, and Hailo-15L. Hailo devices also include neural networking cores. These devices are designed to meet the varying processing needs and price points of smart camera makers and AI application providers. The products support from 7 TOPS (tera-operations per second) up to 20 TOPS. The company says that all Hailo-15 VPUs support multiple input streams at 4K resolution and combine powerful CPU and DSP subsystems with Hailo's AI core. The Hailo-15H is capable of running the object detection model YoloV5M6 with high input resolution (1280x1280) at real-time sensor rate, or the industry classification model benchmark, ResNet-50, at 700 FPS.

The figure below shows the block diagram for the Hailo device.

Hailo Block Diagram

Hailo VPU Block Diagram

HAILO DATA SHEET
This is an industrial SoC with interfaces for image sensors, data and memory. It utilizes a Yocto-based Linux distribution and provides secure boot and debug with a hardware-accelerated crypto library, TrustZone, a true random number generator (TRNG) and a firewall. Hailo says that the low power consumption of these devices enables implementation without an active cooling system (e.g. a fan), making them useful in industrial and outdoor applications.

AI provides ways to process the vast amounts of stored and generated data by creating models and running them on inference engines in devices and at the network edge. Brainchip and Hailo recently announced neural network-based AI inference devices for industrial, medical, consumer, smart cities and other applications that could augment human abilities.
 
Last edited:
  • Like
  • Fire
  • Love
Reactions: 31 users
D

Deleted member 118

Guest
 
  • Haha
  • Fire
Reactions: 3 users

Dhm

Regular
I just emailed Jason at UCSC asking him about the specific SNN chip his research entailed, together with a few links to Brainchip. If he responds I will let you know. I did let him know I am a shareholder.
I received this reply from Jason. Maybe I should pass this on to TD.

CDDB5A85-8A2A-462A-93FD-3C442E600D87.jpeg


Edit: I emailed Jason’s reply to Tony. Who knows, perhaps Akida could revolutionise the revolution!
 
Last edited:
  • Like
  • Fire
  • Love
Reactions: 51 users

Esq.111

Fascinatingly Intuitive.
Morning Chippers,

Wondering if Brainchip management will issue an ASX announcement today to alleviate investors' concerns regarding the SVB collapse.

I see two companies have put out statements to their shareholders on the ASX this morning.

VGL & SKO

Regards,
Esq.
 
  • Like
  • Fire
Reactions: 35 users

Dhm

Regular
F90C4108-1717-4D58-863C-ACE0456C4BD6.jpeg
 
  • Like
  • Love
  • Fire
Reactions: 29 users

Lex555

Regular
Morning Chippers,

Wondering if Brainchip management will issue an ASX announcement today to alleviate investors' concerns regarding the SVB collapse.

I see two companies have put out statements to their shareholders on the ASX this morning.

VGL & SKO

Regards,
Esq.
100% agree @Esq.111
Clarifying our liquidity today will be far more important than any demo news. Yellen just announced the gov won't bail out SVB. Hopefully it's just worry, but I fear that a press release by BRN would have already been made over the weekend to clarify the issue if we weren't involved. Reports are that around 50% of startups in SV had an account with them. My fingers and toes are crossed.
 
  • Like
  • Fire
  • Love
Reactions: 14 users
LAGUNA HILLS, CA / ACCESSWIRE / March 12, 2023 / BrainChip Holdings Ltd (ASX:BRN)(OTCQX:BRCHF)(ADR:BCHPY), the world's first commercial producer of ultra-low power, fully digital, event-based, neuromorphic AI IP, today announced it has validated that its Akida™ processor family integrates with the Arm® Cortex®-M85 processor, unlocking new levels of performance and efficiency for next-generation intelligent edge devices.

Arm Cortex-M85 delivers the highest levels of performance in the Arm Cortex-M family. It enables system developers to build the most sophisticated variety of MCUs and embedded SoCs for IoT and embedded applications that require enhanced digital signal processing (DSP) and machine learning (ML) capabilities. Use cases include smart home devices, robotics, and drone control, secured system controllers and sensor hubs. By successfully demonstrating Akida's capabilities with the Arm Cortex-M85 in a fully functioning environment, BrainChip paves the way for a new generation of intelligent edge devices that are capable of delivering unprecedented levels of performance and functionality, built on leading-edge technology from Arm.

"In order to serve the diverse and growing IoT market, developers require a new standard of secure, high-performance microcontrollers, combined with endpoint ML capabilities," said Paul Williamson, SVP and GM, IoT Line of Business at Arm. "The integration of Arm's highest performance microcontroller with the Akida portfolio enables our partners to deliver on this potential and efficiently handle advanced machine learning workloads."

The integration of Akida and the Arm Cortex-M85 processor is an important step forward for BrainChip, demonstrating the company's commitment to developing cutting-edge AI solutions that deliver exceptional performance and efficiency.

"We are delighted Akida can now seamlessly integrate with the Arm Cortex-M85 processor, which is one of the most advanced and efficient MCU platforms available for intelligent IoT," said Nandan Nayampally, CMO of BrainChip. "This is a significant milestone for BrainChip, as it drives new possibilities for high-performance, low-power AI processing at the edge. We are excited to offer our mutual customers the benefits of this powerful combination of technologies."
 
  • Like
  • Fire
  • Love
Reactions: 116 users

AARONASX

Holding onto what I've got
F**K Yeah!!!!!!!😂

 
  • Like
  • Fire
  • Love
Reactions: 90 users
“In order to serve the diverse and growing IoT market, developers require a new standard of secure, high-performance microcontrollers, combined with endpoint ML capabilities,” said Paul Williamson, SVP and GM, IoT Line of Business at Arm. “The integration of Arm’s highest performance microcontroller with the Akida portfolio enables our partners to deliver on this potential and efficiently handle advanced machine learning workloads.”

* @AARONASX deleted mine to prevent a double up - you beat me by a nose 🤣
 
Last edited:
  • Like
  • Fire
  • Love
Reactions: 43 users

FJ-215

Regular
Morning Chippers,

Wondering if Brainchip management will issue an ASX announcement today to alleviate investors' concerns regarding the SVB collapse.

I see two companies have put out statements to their shareholders on the ASX this morning.

VGL & SKO

Regards,
Esq.
Morning @Esq.111
Definitely a good idea, especially considering we are in the middle of a capital raise that is dependent on share price. Also a chance to throw in a cheeky comment along the lines of 'Brainchip are not affected by the fallout of SVB as our EAP customers are all Fortune 500 companies'.
 
  • Like
  • Love
  • Fire
Reactions: 21 users
100% agree @Esq.111
Clarifying our liquidity today will be far more important than any demo news. Yellen just announced the gov won't bail out SVB. Hopefully it's just worry, but I fear that a press release by BRN would have already been made over the weekend to clarify the issue if we weren't involved. Reports are that around 50% of startups in SV had an account with them. My fingers and toes are crossed.
Do not forget that if Brainchip was invested in Silicon Valley Bank it would have lost a commercially price-sensitive sum of money and been obligated to make an ASX announcement.

The correct and logical conclusion if there is no announcement is that there is nothing to announce.

While I would prefer an announcement to fend off the attacks by WANCAs, the absence of any announcement means everything is AOK at Brainchip.

My opinion only DYOR
FF

AKIDA BALLISTA
 
  • Like
  • Love
  • Fire
Reactions: 75 users

wilzy123

Founding Member
 
  • Like
  • Love
  • Fire
Reactions: 37 users

TopCat

Regular

Arm announced the Arm® Cortex®-M85 core this week, which will enable breakthrough performance and fully deterministic, low-latency, real-time operation for our customers' most demanding application needs. It is a technology game changer, allowing designers to continue development in the MCU space whilst enjoying performance levels that have until now only been available from MPU products. Read the Arm blog for more about how the Cortex-M85 will push the boundaries of performance and security for microcontrollers.
D51623A5-4C7A-456C-80B7-CE2924641C00.jpeg
 
  • Like
  • Fire
  • Love
Reactions: 68 users

TheDrooben

Pretty Pretty Pretty Pretty Good
LAGUNA HILLS, CA / ACCESSWIRE / March 12, 2023 / BrainChip Holdings Ltd (ASX:BRN)(OTCQX:BRCHF)(ADR:BCHPY), the world's first commercial producer of ultra-low power, fully digital, event-based, neuromorphic AI IP, today announced it has validated that its Akida™ processor family integrates with the Arm® Cortex®-M85 processor, unlocking new levels of performance and efficiency for next-generation intelligent edge devices.

Arm Cortex-M85 delivers the highest levels of performance in the Arm Cortex-M family. It enables system developers to build the most sophisticated variety of MCUs and embedded SoCs for IoT and embedded applications that require enhanced digital signal processing (DSP) and machine learning (ML) capabilities. Use cases include smart home devices, robotics, and drone control, secured system controllers and sensor hubs. By successfully demonstrating Akida's capabilities with the Arm Cortex-M85 in a fully functioning environment, BrainChip paves the way for a new generation of intelligent edge devices that are capable of delivering unprecedented levels of performance and functionality, built on leading-edge technology from Arm.

"In order to serve the diverse and growing IoT market, developers require a new standard of secure, high-performance microcontrollers, combined with endpoint ML capabilities," said Paul Williamson, SVP and GM, IoT Line of Business at Arm. "The integration of Arm's highest performance microcontroller with the Akida portfolio enables our partners to deliver on this potential and efficiently handle advanced machine learning workloads."

The integration of Akida and the Arm Cortex-M85 processor is an important step forward for BrainChip, demonstrating the company's commitment to developing cutting-edge AI solutions that deliver exceptional performance and efficiency.

"We are delighted Akida can now seamlessly integrate with the Arm Cortex-M85 processor, which is one of the most advanced and efficient MCU platforms available for intelligent IoT," said Nandan Nayampally, CMO of BrainChip. "This is a significant milestone for BrainChip, as it drives new possibilities for high-performance, low-power AI processing at the edge. We are excited to offer our mutual customers the benefits of this powerful combination of technologies."
200w.gif
 
  • Like
  • Haha
  • Fire
Reactions: 22 users