BRN Discussion Ongoing

Cardpro

Regular
That's weird, the reports I've seen definitely have revenue and cash receipts...

Also when the company went public at 25c, what revenue did they have then?
Again, how is a 30% discount to day-0 BRN not an insane buying opportunity?

Another question to ponder: what makes a shareholder (who definitely didn't get sucked into the Mercedes hype and buy over $1) decide the company is so bad they want to trash it on a forum, yet still hold it rather than sell?

Imo there are reasons outside the company's control for that kind of behaviour.
If you go out and earn 1 dollar after working for years, would you be happy? For a company whose market cap was once over a billion, earning a few mill over a few years means nothing, IMO; they might have earned more if they'd opened a Maccas and asked all the employees to work there lol

Well, they had a goal of earning revenue, didn't they? Why shouldn't we trash the company when they've failed to land contracts or earn more revenue, after they implied there would be an explosion of sales or things to watch on the financials?

What makes a shareholder go nuts when they see a negative post, thinking the share price will somehow drop further because of it?

Why am I not selling? Because I know that a single announcement will change the sentiment (e.g. if Valeo confirms that we are in Scala 3, this will fly and completely change my sentiment)...

To be frank, I've been hoping to see that announcement for years, but we haven't got a single update on it (I hope it's because they are somehow connected to MegaChips, but my hope is dying as time goes by without any significant numbers on our statements)...
 
  • Like
  • Fire
Reactions: 10 users
"I can't help but relate Tony Seba's this latest presentation to the evolution of Brainchip/AKIDA and how our little company is part of this phase disruption.

1698054271374.png


In the above presentation, “The Great Transformation” Seba discusses the concept of disruption, emphasizing the speed at which it occurs and how the convergence of technologies contributes to its cost-effectiveness and widespread adoption. Based on everything I've seen, heard, and read over the past few years, I believe this next statement holds true to some extent:

'Neuromorphic computing is one of the keys to global multi-sector disruption.' (FallingKnife 2023)

This belief is supported by the comments made by CEOs of companies with which we have partnered and those we have as ecosystem partners.

The BRN ecosystem partners are actively exploring how AKIDA can be integrated into their processors and products, and they all align with some of the foundational sectors that Tony Seba believes underpin humanity (the Energy, Transport, Information, Food, and Materials sectors). These sectors have far-reaching implications for society as a whole.

1698054313709.png

The image above is taken from one of Tony Seba's other recent presentations, highlighting the 'phase change' we are currently undergoing as a result of the convergence of key technologies. The section from 6 min (Technology Convergence) is exactly where BRN sits in this big picture. AKIDA will enable mass adoption of many opportunities and innovations.

The vision that many are sharing, 1000 eyes included, is that AKIDA is among the key technologies driving disruptions well into the future, even in sectors that the BRN team have not yet explored.

To quote some key moments from “The Great Transformation” presentation:

  • At 1:53, Seba speaks of a great transformation in humanity, citing years of analysis. He predicts that the next 15-20 years will be the most disruptive in history.
  • At 3:35, he distinguishes between regular disruptive technologies and the more profound 'phase change' disruptions, which occur when various technologies converge. (10 times disruptors)
  • At 11:09, he emphasizes how phase change disruptions can be compared to a different life form, causing ripple effects across society. He provides the example of the automobile's impact on the 20th century, which affected geopolitics and urban planning. (Automotive industry, Home, Industrial and Health and Wellness industries)
We're eagerly awaiting BRN's next 4C report, hoping for positive financial results. However, even if the 4C report disappoints, AKIDA remains one of the technologies poised to bring multifaceted disruptions to various aspects of our lives and will undoubtedly play a significant role in our future. Companies have little choice but to embrace neuromorphic computing, and those who fail to do so may find themselves exposed as technology evolves.



The second generation of AKIDA could serve as the tipping point for mass adoption.

All in my ‘locked on like a Barnacle’ mind of course.
 
  • Like
  • Fire
  • Love
Reactions: 40 users

M_C

Founding Member
Shit it's a full muppet house tonight. Must be getting close huh boys?

What's the matter, no liquidity? Pay up.
 
Last edited:
  • Haha
  • Like
  • Fire
Reactions: 23 users

Diogenese

Top 20

"7 Analog Computing Companies Powering Next-Generation Applications​

October 16, 2023
by Quantum Business Intelligence
View attachment 47822

Analog computing, a method rooted in continuous physical phenomena like electrical voltages or mechanical movement, stands in contrast to the discrete 0s and 1s of digital computing. Historically, tools like slide rules served as rudimentary analog computers, and even water was once employed for complex economic calculations. However, the modern analog revolution is chip-based, with numerous companies delving into its potential, especially in neuromorphic computing. This approach seeks to emulate the human brain’s structure and function, using circuits to mimic neurons and synapses, offering a more efficient and parallel processing alternative to traditional digital methods.

...

BrainChip

...
In conclusion, while analog computing might seem like a relic of the past, its principles are finding new life in modern applications, especially in neuromorphic computing. The companies listed above are just a few pioneers leading the charge in this exciting field. Read another Article about Analog Computing, what it is, and how it relates to Quantum Computing*."
https://quantumzeitgeist.com/7-analog-computing-companies-powering-next-generation-application/

* =>

"Is Analog Computing the Quiet Computing Revolution that you haven’t heard of​

October 12, 2023 by Schrödinger

Though not used as commonly today due to digital calculators, slide rules are a form of analog computer that can perform multiplication, division, and other functions by sliding scales against one another. In the past, researchers have even used water to compute complex calculations for use in economics. But today’s revolution is fully chip-based, and numerous companies are exploring Analog Computing.

What is Analog Computing?

Analog computing is a method of computation that utilizes continuous physical phenomena, such as electrical voltages or mechanical movement, to represent and process information. Unlike digital computers, which use discrete values like 0s and 1s, analog computers work with continuous signals. They can solve complex mathematical equations, especially differential ones, in real time. Historically, analog computers were widely used in scientific and industrial applications, such as predicting tides, controlling machinery, and simulating flight dynamics, before the rise of digital computing. While they have been largely overshadowed by their digital counterparts, analog computing principles are still explored today in specific applications and research areas.
One of the most intriguing developments in analog computing in recent years is neuromorphic computing. Neuromorphic engineering, or neuromorphic computing, seeks to design systems that mimic the structure and function of the human brain. These systems use circuits to emulate the behavior of neurons and synapses, allowing for more efficient and parallel processing than traditional digital methods.

Use-cases for Analog Computing

Neuromorphic systems, when related to analog computing, refer to hardware architectures specifically designed to mimic the neural structures and functionalities of the brain. These systems leverage the inherent strengths of analog computing to emulate the continuous and parallel nature of biological neural networks.
In artificial intelligence (AI), neuromorphic systems offer a more efficient way to train neural networks. Their adaptive nature allows them to learn from data more effectively, potentially leading to faster and more accurate AI models.
An application of neuromorphic computing is in robotics. Robots equipped with neuromorphic chips can process sensory data in real time, allowing them to adapt to their environment more effectively. This is crucial for navigation, object recognition, and interaction with humans or other robots.

Neural Network Simulations and Machine Learning

Analog computers, especially neuromorphic systems, are designed to emulate the behavior of neurons and synapses in the brain. This makes them particularly suitable for running neural network simulations and machine learning algorithms. Their inherent parallelism and energy efficiency can lead to faster processing times and lower power consumption than traditional digital systems when executing these tasks.

Signal Processing

Analog computers can be used for real-time signal-processing tasks. For instance, they can be employed in filtering noise from signals, amplifying specific frequencies, or modulating signals. Their ability to process continuous data streams in real time makes them ideal for applications in audio processing, telecommunications, and radar systems.

Control Systems

Analog computers have historically been used in control system applications, such as guiding rockets or controlling industrial processes. They can simulate and predict system behaviors and provide real-time feedback to adjust parameters, ensuring optimal performance and stability.

Differential Equations Solving

Analog computers can solve differential equations much faster than their digital counterparts by directly modeling the continuous changes described by the equations. This capability is valuable in fields like physics, engineering, and economics, where differential equations are commonly used to define system dynamics.
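As a concrete illustration (my own sketch, with made-up values): a digital machine has to approximate the continuous solution step by step, as in the Euler loop below, whereas an analog integrator's output voltage simply is the solution, evolving in real time.

```python
# Digital approximation of dy/dt = -k*y via Euler steps; an analog computer
# would instead realize this with an RC integrator whose capacitor voltage
# follows the exact continuous solution y(t) = exp(-k*t).
k = 2.0        # decay constant (illustrative)
y = 1.0        # initial condition y(0)
dt = 0.01      # digital time step

for step in range(1, 101):
    y += -k * y * dt                  # one Euler update
    if step % 25 == 0:
        print(f"t={step * dt:.2f}  y~{y:.4f}")

# At t=1.0 the exact (analog) answer is exp(-2) = 0.1353...; Euler gives ~0.1326.
```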

Optimization Problems

Analog computers can be deployed to solve optimization problems by finding the minimum or maximum values of functions. For example, they can be used in network design to find the most efficient path or in financial modeling to optimize investment portfolios.


Companies Involved in Analog Computing

Intel and Analog Computing

Intel is one of the leading companies delving into neuromorphic computing. Their research in this area has led to the development of “Loihi,” a neuromorphic research chip. Loihi mimics the brain’s basic computational unit, the neuron, allowing it to process information more efficiently. The chip can adapt quickly, making it suitable for complex tasks like pattern recognition and sensory data processing.
The chip is equipped with 128 cores, each containing 1,024 artificial neurons, resulting in a total of over 130,000 neurons. These neurons can be interconnected with approximately 130 million synapses. The adaptive nature of Loihi allows it to learn in real time, making it particularly effective for tasks like pattern recognition, sensory data processing, and even robotics.


Aspinity and Analog Computing

Another notable name in the field is Aspinity, which focuses on analog processing for edge devices. Their approach aims to reduce the power consumption of always-on sensing devices, like voice-activated assistants, by directly analyzing raw, analog sensor data. By processing data in its analog form before converting it to digital, Aspinity’s technology can drastically reduce power consumption, making it a game-changer for battery-operated devices.
The AML100 is the first product in Aspinity’s AnalogML™ (analog machine learning) family. The AML100 detects and classifies sensor-driven events from raw, analog sensor data, allowing developers to design significantly lower-power, always-on edge processing devices. It is based on the unique Reconfigurable Analog Modular Processor (RAMP™) technology platform.


IBM and Analog Computing

IBM has been a pioneer in neuromorphic computing with its TrueNorth chip. TrueNorth emulates the structure and scale of the brain’s neurons and synapses but uses significantly less power. It’s designed for various applications, including real-time processing in sensors and mobile devices.


BrainChip and Analog Computing

BrainChip is a company known for its work in neuromorphic computing. They have developed the Akida Neuromorphic System-on-Chip, designed to provide advanced neural networking capabilities. Akida is designed for edge and enterprise applications, including advanced driver assistance systems, drones, and IoT devices. The chip’s architecture allows for low-power and low-latency processing, making it suitable for real-time applications. BrainChip’s endeavors in neuromorphic computing showcase the potential of analog computing in modern applications. Their technology aims to bridge the gap between artificial neural networks and the human brain’s functionality.

HPE and Analog Computing

HPE has been exploring the realm of neuromorphic computing through their project called the “Dot Product Engine.” This project focuses on developing hardware that can accelerate deep-learning tasks.
The Dot Product Engine uses analog circuits to perform matrix multiplications, a fundamental operation in deep learning. This approach aims to reduce power consumption and increase the speed of deep learning computations.
HPE’s exploration into analog computing signifies its commitment to finding innovative solutions to modern computational challenges. Their research could pave the way for more efficient AI hardware in the future.
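For intuition, here is a rough numerical sketch of the crossbar idea (my own illustration, not HPE's actual design): weights are stored as conductances, Ohm's law multiplies each by an input voltage, and Kirchhoff's current law sums the products on each output wire; a small random factor stands in for analog device variation.

```python
import numpy as np

rng = np.random.default_rng(0)

G = rng.uniform(0.1, 1.0, size=(4, 8))   # programmed conductances = weight matrix
v = rng.uniform(0.0, 0.5, size=8)        # applied input voltages = activations

ideal = G @ v                            # dot products "computed" by physics
noisy_G = G * (1 + rng.normal(0.0, 0.02, G.shape))  # ~2% device-to-device variation
measured = noisy_G @ v                   # what the analog array actually delivers

print("ideal column currents:   ", np.round(ideal, 4))
print("measured column currents:", np.round(measured, 4))
```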

MemComputing and Analog Computing

MemComputing is a company that has developed a novel computing architecture inspired by the human brain’s neurons and synapses. Their technology is designed to solve complex optimization problems.
Their approach involves using memory elements for computation and storage, which can lead to significant speed-ups for specific computational tasks. This is particularly beneficial for industries that require real-time decision-making based on large datasets. MemComputing’s technology showcases the potential of neuromorphic architectures in addressing some of the most challenging problems in computing. Their solutions aim to provide a competitive edge to businesses across various sectors.

Applied Brain Research (ABR) and Analog Computing

Applied Brain Research (ABR) is a company that specializes in neuromorphic engineering and software. They have developed tools and software for building brain-like computing systems.
Their software, called Nengo, is a neural simulator for designing and testing large-scale brain models. It’s used in various applications, from robotics to AI. ABR’s work in neuromorphic computing is centered around creating efficient, brain-like systems that can process information in real time, making them suitable for a range of applications where traditional computing architectures might fall short.

Knowm and Analog Computing

Knowm is a company that focuses on developing memristor-based solutions for neuromorphic computing. Memristors are electronic components that can change resistance based on the amount and direction of voltage applied, making them suitable for brain-like computing.
Knowm’s technology aims to provide a platform for building adaptive learning systems that can evolve and learn over time. Their approach to neuromorphic computing is hardware-centric, focusing on creating components that can support brain-like computation at the chip level.

CogniMem and Analog Computing

CogniMem specializes in pattern recognition and has developed technologies based on neuromorphic computing principles. Their products are designed to mimic the human brain’s ability to recognize patterns and learn from experience. This makes them suitable for applications like image and speech recognition. CogniMem’s vision is to create computing systems that can learn and adapt in real time, providing more natural and intuitive interactions between humans and machines.

Neurala

Neurala is a company that develops deep-learning neural network software for drones, cameras, and other devices. Their Neurala Brain technology is designed to make devices like drones more autonomous and capable of learning and adapting in real time. Neurala’s approach to neuromorphic computing is to create software that can be integrated into various devices, making them smarter and more responsive to their environments.

Analog Computing and Quantum Computing?

Analog computers operate based on continuous variables and physical phenomena. They represent data as varying physical quantities, such as electrical voltage or fluid pressure. For instance, in an electronic analog computer, the magnitude of an electrical voltage might represent a specific value. Calculations are performed by manipulating these continuous signals through components like operational amplifiers and integrators. In contrast, quantum computers operate on the principles of quantum mechanics. They utilize qubits, which can exist in a superposition of states, allowing them to represent both 0 and 1 simultaneously. Quantum operations involve entanglement and superposition, which analog systems cannot achieve. One commonality, of course, is that the states are not digital, although they will typically be converted back to digital states for interpretation – for example, as the words in a document or as images.

Modern analog computers, such as neuromorphic chips, are designed to mimic the brain’s neural structures, offering efficient solutions for tasks like pattern recognition and sensory processing. Quantum computers, on the other hand, have the potential to revolutionize fields like cryptography, optimization, and drug discovery. They can solve problems currently intractable for classical computers, such as factoring large numbers or simulating complex quantum systems."
https://quantumzeitgeist.com/analog-computing-computing-revolution/

@Diogenese, can you (or someone else) explain to me, an amateur, why exactly BrainChip speaks of being fully digital, in a way I can understand despite my lack of knowledge? If this has already been clarified, please excuse me and point me to the corresponding post.
Hi cosors,

QBI certainly seem to have got their wires crossed.

Analog neuron - here's one I prepared earlier:

1698057004565.png







Vs = supply voltage;
zig-zag lines = resistors;
capacitor C is the two black lines; a capacitor charges at a rate determined by the input series resistance and the supply voltage Vs;
the capacitor discharges at a rate determined by the parallel leakage resistance RL;
R1 to Rn are input resistors which connect Vs to C when the corresponding switch is closed by the associated input spike;
RL is the leakage resistor through which C discharges;
zener diode ZD is a diode which blocks current until the applied voltage reaches a threshold voltage Vz;
for the neuron to fire, the voltage across C must be greater than Vz;
the switches are controlled by the incoming spikes on the dashed lines from previous neurons;
the input spike time is short, so C does not fully charge on a single input spike. It needs several input spikes (X). In addition, because C simultaneously begins to discharge through RL, the X input spikes must arrive within a time window so that the sum of the spike currents can charge the capacitor voltage to Vz.
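A minimal Python sketch of that behaviour (my own toy model with made-up values, not taken from any BrainChip patent): each input spike charges the "capacitor", the charge leaks away between spikes, and the neuron only fires when enough spikes arrive within a short window.

```python
# Toy leaky integrate-and-fire neuron mirroring the circuit description above.
v = 0.0               # capacitor voltage
v_threshold = 1.0     # zener threshold Vz (illustrative)
leak = 0.9            # per-step decay through RL (illustrative)
weight = 0.3          # voltage gained per input spike, set by R1..Rn (illustrative)

input_spikes = [1, 1, 0, 1, 1, 0, 0, 0, 1, 1, 1, 1]   # spikes from previous neurons

for t, s in enumerate(input_spikes):
    v = v * leak + weight * s      # leak through RL, charge on a spike
    if v >= v_threshold:           # voltage across C exceeds Vz: the neuron fires
        print(f"neuron fired at t={t}")
        v = 0.0                    # capacitor discharged after firing
```

Running it, the isolated spikes early on decay away before the threshold is reached; only the burst of four spikes arriving close together pushes v past Vz.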


On the other hand, Akida's neuron is shown in WO2020092691A1 AN IMPROVED SPIKING NEURAL NETWORK



1698055076691.png



The analog neuron adds the values of the incoming spike currents in the capacitor, so the capacitor voltage is a summation of the input spikes, decreased by the discharge through RL. Because of manufacturing variance, when hundreds of thousands of such circuits are combined, the accumulated error can significantly affect accuracy.

Akida's digital neuron uses normal binary circuitry of ones and zeros. Because a signal only needs to be read as a one or a zero, rather than measured as a precise analog value, digital circuitry is much more tolerant of manufacturing variances.
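A toy illustration of that tolerance argument (again my own sketch, illustrative numbers only): give each simulated "chip" a slightly different analog gain and the accumulated potentials drift apart from copy to copy, while the digital integer count is identical on every chip.

```python
import random

spikes = [1, 0, 1, 1, 0, 1, 1, 1]
nominal_weight = 0.25

for chip in range(3):
    gain = nominal_weight * random.uniform(0.95, 1.05)  # +/-5% component mismatch
    analog_potential = sum(gain * s for s in spikes)    # varies chip to chip
    digital_count = sum(spikes)                         # exact on every chip
    print(f"chip {chip}: analog={analog_potential:.4f}, digital={digital_count}")
```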
 
Last edited:
  • Like
  • Love
  • Fire
Reactions: 23 users
Shit it's a full muppet house tonight. Must be getting close huh boys?
fozzie-fuck.gif
 
  • Haha
  • Love
  • Like
Reactions: 14 users

GStocks123

Regular
⏱….
 

Attachments

  • IMG_3973.png (8.6 MB)
  • Haha
  • Like
  • Fire
Reactions: 5 users

wilzy123

Founding Member
If you go out and earn 1 dollar after working for years, would you be happy? For a company whose market cap was once over a billion, earning a few mill over a few years means nothing, IMO; they might have earned more if they'd opened a Maccas and asked all the employees to work there lol

Well, they had a goal of earning revenue, didn't they? Why shouldn't we trash the company when they've failed to land contracts or earn more revenue, after they implied there would be an explosion of sales or things to watch on the financials?

What makes a shareholder go nuts when they see a negative post, thinking the share price will somehow drop further because of it?

Why am I not selling? Because I know that a single announcement will change the sentiment (e.g. if Valeo confirms that we are in Scala 3, this will fly and completely change my sentiment)...

To be frank, I've been hoping to see that announcement for years, but we haven't got a single update on it (I hope it's because they are somehow connected to MegaChips, but my hope is dying as time goes by without any significant numbers on our statements)...

The 4C is about to be released... so we have been waiting with so much bated anxiety and enthusiasm for your usual supreme quality analysis.

Are you sure you're a card pro? Do you sell books?

F52jSLRb0AAJXUU.jpg
 
  • Haha
  • Like
Reactions: 12 users

IloveLamp

Top 20


1000006953.png
 
  • Like
  • Thinking
Reactions: 6 users

Tothemoon24

Top 20
Be nice if there was a place for a little magic sauce amongst TDK’s Qeexo AutoML cloud / edge adventure


The purpose of edge AI is to process data in real time so the system can respond more quickly than it would with a trip to the cloud.
Rob Spiegel | Oct 22, 2023


Edge AI and machine learning have become increasingly important in industrial technology settings. With the rise of IoT sensors and smart devices, there is a growing need for edge AI solutions to process data locally and make real-time decisions without relying on cloud services.
TDK’s Qeexo AutoML platform was designed to help developers and companies easily implement AI solutions at the edge without the need for extensive machine learning expertise. The goal is to provide a user-friendly interface and automated feature engineering. According to TDK, Qeexo AutoML can significantly reduce the time and resources needed to develop and deploy edge AI applications.

The need for privacy and security is also important for data processing in industrial applications. With edge AI, data is processed locally without being transmitted to a remote server, ensuring better protection for sensitive data.
Edge AI can improve efficiency and reduce latency in various industries such as manufacturing, healthcare, and transportation. By bringing intelligence closer to the source of data, edge AI enables faster decision-making and real-time monitoring. Altogether, edge AI solutions can help reduce costs and improve performance.
We caught up with Sang Lee, CEO of TDK Qeexo, to dig deeper into the issues and advantages of using edge AI.

Design News: What are the most pressing issues when your customers deploy edge AI and machine learning?

Sang Lee: The adoption of Edge AI faces two primary challenges: first, creating high-performance models that can run efficiently on edge hardware, and second, addressing the mass-deployment challenges associated with customizing and localizing models for optimal performance in diverse environments.

DN: Explain the importance of artificial intelligence in IoT and smart devices.


Sang Lee: Artificial intelligence is vital for IoT and smart devices because it transforms data into actionable insights, enabling real-time decision-making, predictive maintenance, enhanced security, energy efficiency, and personalized user experiences. AI's automation and scalability benefits make IoT devices smarter, more efficient, and adaptable, driving cost savings and reducing environmental impact. It's the cornerstone for realizing the full potential of the IoT ecosystem, revolutionizing industries, and improving the way we interact with our connected environments.

DN: What is “Edge AI,” and how does it help in making real-time decisions?

Sang Lee: “Edge AI” refers to the deployment of AI algorithms and models directly on edge devices, such as IoT devices, rather than relying on a centralized cloud server for processing. It helps in making real-time decisions by bringing AI capabilities closer to the data source, where data is generated, and decisions need to be made.

DN: How does the Qeexo AutoML platform help developers and companies implement AI solutions?

Sang Lee: Qeexo AutoML streamlines machine learning workflows by automating the most labor-intensive tasks, including data cleaning, sensor and feature selection, model choice, hyperparameter optimization, model validation, conversion, and deployment. This automated process enhances efficiency and saves valuable time, as it evaluates numerous options behind the scenes, identifying the most suitable models for your data.
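The shape of that automation is easy to picture. Here is a deliberately simplified sketch (my own, using scikit-learn; it is not Qeexo's actual API or algorithm list): fit several candidate models on the same dataset, cross-validate each, and keep the best scorer.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

# Stand-in sensor dataset; a real pipeline would start from collected, cleaned data.
X, y = make_classification(n_samples=500, n_features=20, random_state=0)

candidates = {
    "logistic_regression": LogisticRegression(max_iter=1000),
    "decision_tree": DecisionTreeClassifier(max_depth=5),
    "random_forest": RandomForestClassifier(n_estimators=100),
}

# Evaluate every candidate the same way and pick the best performer.
scores = {name: cross_val_score(model, X, y, cv=5).mean()
          for name, model in candidates.items()}
best = max(scores, key=scores.get)
print(scores)
print(f"selected model: {best}")
```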

DN: How does Edge AI improve efficiency and reduce latency?

Sang Lee: Processing AI on the edge device allows for immediate handling of all raw sensor data at the source, eliminating the need to transmit raw data to a remote server for processing. This results in enhanced power efficiency and reduced latency.

DN: What are the privacy and security implications?

Sang Lee: Edge AI can process sensitive data locally, without sending it to external servers. This enhances data privacy and security, which is crucial for applications like surveillance and healthcare.







TDK to Acquire Qeexo to Enable Complete Smart Edge Platforms

SAN JOSE, Calif., Jan. 4, 2023 — TDK Corporation has announced its acquisition of Qeexo, a U.S.-based venture-backed company engaged in the automation of end-to-end machine learning for edge devices. As a result of the acquisition, Qeexo will become a wholly owned subsidiary of TDK, subject to customary closing conditions, including approval of the Committee on Foreign Investment in the US (CFIUS).
Qeexo, based in Mountain View, California, is the first company to automate end-to-end machine learning for edge devices. Qeexo AutoML provides a no-code environment for data collection and for training 18 (and expanding) different machine learning algorithms, both neural-network and non-neural-network, on the same dataset, while generating metrics for each so that users can pick the model that best fits their unique requirements.
A cloud-based, easy-to-use solution, it provides an intuitive UI platform that allows users to collect, annotate, cleanse, and visualize sensor data and automatically build “tinyML” models using different algorithms. Qeexo’s AutoML platform allows customers to leverage sensor data to rapidly build machine learning solutions optimized for ultra-low latency and power consumption, with an incredibly small memory footprint for highly constrained environments, with applications in industrial, IoT, wearables, automotive, mobile, and more. Through streamlined, intuitive process automation, Qeexo’s AutoML enables customers without precious ML resources to greatly accelerate the design of Edge AI capabilities for their own specific applications.
“Qeexo brings together a unique combination of expertise in automating machine learning application development and deployment for those without ML expertise, high volume shipment of ML applications and understanding of sensors to accelerate the deployment of smart edge solutions,” stated Jim Tran, CEO, TDK USA Corporation. “Their expertise combined with TDK’s leadership positions in sensors, batteries and other critical components will enable the creation of system level solutions addressing a broad range of applications and industries.”
“Our platform is an outgrowth of our own history of high-volume ML application development and deployment enabling those with domain expertise but not ML expertise to solve real world problems quickly and efficiently,” continued Sang Lee, CEO, Qeexo. “We see our AutoML tool as a natural partner to the smarter sensor systems that TDK is building.”
 
Last edited:
  • Like
  • Fire
  • Love
Reactions: 14 users

Slade

Top 20
We know this from Jan 2023 CES:
“Advanced AI Solutions for Automotive
Socionext has partnered with artificial intelligence provider BrainChip to develop optimized, intelligent sensor data solutions based on Brainchip’s Akida® processor IP.”

Given we know that, what do we think of this:
Yokohama/Japan -- October 23, 2023 --- Socionext today announced it is developing custom SoCs for advanced ADAS (Advanced Driver Assistance Systems) and AD (Autonomous Driving) using N3A, TSMC's latest 3nm automotive process technology. Target production of the SoCs will be in 2026.

TSMC’s 3nm technology enables high volume production with significant improvements in power, performance, and area (PPA) when compared to previous technology nodes. TSMC’s N3E process offers up to 18% speed improvement at the same power, or 32% power reduction at the same speed, with up to 60% increase in logic density compared to N5. These PPA improvements are a key ingredient for future EV and vehicle deployments, where computing and workload requirements for next generation ADAS and AD applications often compete with the need for longer battery life and driving range.

The SoCs being developed are designed specifically for advanced driving assistance systems and autonomous driving applications, achieving exceptional performance and low-power consumption. By combining TSMC’s N3A process technology with Socionext’s experience to support ISO26262 functional safety product development, AEC-Q100 and IATF-16949 automotive quality and reliability requirements, Socionext is addressing the performance and safety demands of the rapidly evolving automotive electronic systems required by Automotive OEMs.

"We are excited to announce that we are continuing our history of adopting TSMC's leading-edge process technologies for automotive chip development" said Hisato Yoshida, Corporate Executive Vice President and the Head of Global Development Group at Socionext. “As the advanced ADAS and AD segment continues to grow, our customers are investing in custom SoC solutions to provide product differentiation and optimization of their hardware compute and software platforms. With customers’ requirements for high levels of integration with multi-core CPU clusters, AI acceleration, image and video processing, high-speed interfaces, as well as security and functional safety support, Socionext’s experience enables our customers to deliver next-generation automotive platforms. TSMC is a key silicon manufacturing partner for Socionext, not only for automotive developments, but for a wide range of applications ranging from consumer and industrial to datacenter and networking"

“Socionext has been an early adopter of TSMC’s leading-edge technologies for automotive applications,” said Dr. Cliff Hou, Senior Vice President of Europe and Asia Sales at TSMC. “With our comprehensive automotive technology platform, Socionext can quickly harness the power of 3nm for the computing needed for ADAS and AD without compromising safety and reliability, and we are excited to see the innovations they will bring to life.”

Socionext will start designing with the early release of the N3A process, known as N3AE, to enable an accelerated schedule to mass production of automotive-grade products. Through early and close collaboration with TSMC, Socionext aims to be one of the first suppliers of high-performance and energy-efficient automotive products built on the most advanced N3A technology.”


 
  • Like
  • Fire
  • Thinking
Reactions: 18 users

rgupta

Regular
This post is all purely "unsubstantiated material"... where's the BRN Co ASX announcement?
There need not be any announcement if Nintendo takes the technology through MegaChips. MegaChips already holds our license and can sell the product to anyone. We would only see royalty payments in receipts, and those won't distinguish who paid how much.
 
  • Like
Reactions: 8 users
Well....it would be very handy if the "NPU" just happened to be ours...or is that wishful thinking at the mo...or are we included in their "underscoring" of active collaborations with ecosystem partners :unsure:


SiFive Unveils Advanced RISC-V Computing Solutions for Data-Intensive AI Applications

SiFive

Leading the charge in RISC-V computing, SiFive, Inc. has introduced two groundbreaking products aimed at addressing the escalating demands of high-performance computing. The newly revealed SiFive Performance P870 and SiFive Intelligence X390 promise enhanced low power consumption, high compute density, and vector compute capability, ushering in a new era of innovation in the world of RISC-V.

Unveiled during an in-person press and analyst event in Santa Clara, SiFive’s latest offerings provide a significant performance boost for data-intensive compute tasks, vital for modern artificial intelligence applications in consumer electronics, automotive systems, and infrastructure markets. The P870 core, designed for high-performance consumer applications and data center integration, boasts a 50% peak single-thread performance upgrade over its predecessor, incorporating a six-wide out-of-order core and a shared cluster cache accommodating up to 32 cores. The P870 is fully compatible with Google’s Android platform requirements for RISC-V, solidifying its applicability in the diverse landscape of computing needs.

In parallel, the SiFive Intelligence X390 builds on the success of the previous X280 model, enhancing vector computation capabilities with its single-core configuration, doubled vector length, and dual vector ALUs. This configuration quadruples sustained data bandwidth, providing a robust solution for artificial intelligence and machine learning applications in mobile, infrastructure, and automotive sectors. The Vector Coprocessor Interface eXtension (VCIX) empowers users to integrate custom vector instructions and acceleration hardware, offering unparalleled flexibility in performance optimization.

When combined, the SiFive Performance P870 and SiFive Intelligence X390, along with a high-performance Neural Processing Unit (NPU) cluster, deliver a versatile, low-power, and programmable solution. Designed specifically for generative AI applications, this agile hardware solution offers superior compute density for complex workloads, addressing the evolving needs of the tech industry.

SiFive’s CEO, Patrick Little, emphasized the company’s commitment to bridging the performance gap in the semiconductor industry, stating, “SiFive is leading the industry into a new era of high-performance RISC-V innovation, and closing the gap with other instruction set architectures with our unparalleled portfolio. The flexibility of SiFive’s RISC-V solutions allows companies to address the unique computing requirements of these segments and capitalize on the momentum around generative AI, where we have seen double-digit design wins, and for other cutting-edge applications.”

SiFive also underscored their active collaboration with partners across the ecosystem to ensure seamless integration and commercialization of SiFive-powered products. As the demand for high-performance computing solutions continues to surge, SiFive’s latest offerings mark a significant milestone in the evolution of RISC-V technology, positioning the company at the forefront of the rapidly expanding semiconductor landscape.
 
  • Like
  • Fire
  • Wow
Reactions: 37 users

cosors

👀
Plus, the analogy wouldn’t work anyway, even if you were to think of Brainchip as one of the Wise Men (whose number is actually unspecified in the Bible). Why? Because while Fabrizio Cardinali compares those AI companies to runners in a race to “get to the star first”, we are in fact talking about two completely different stars here.

The only information the Gospel of Matthew provides about the region the Magi came from is the phrase ἀπὸ ἀνατολῶν, literally "from the rising [of the sun]", which is usually translated as "from the east".

Most theologians today agree that the Biblical Magi (provided we assume their historicity) were probably Zoroastrian priests and astronomers/astrologers (a mixture of both from our modern point of view) from either Persia or Mesopotamia, who most likely would have travelled along established trade routes. So they would have journeyed west to Damascus and then south to Jerusalem (to King Herod’s court) and on to Bethlehem, following the wondrous star they had seen rising back home (possibly a conjunction of Jupiter and Saturn or other celestial events a couple of years BC) and had interpreted as “a sign in the heavens” signifying the birth of a new Jewish king.

So it was not the North Star aka Pole Star that was their guiding light leading them to their destination (even though they would have used it for navigation). Rather than heading towards the “True North”, it must have been a “Journey to the West” (many Chinese Christians apparently even believe one of the Magi came from China) and then southwards, in the opposite direction from the North Pole. Which only confirms our belief that IBM and Brainchip’s destinations are not the same and that hence their itineraries are quite different… 😉

Fun fact: While we don’t know for sure where the Magi came from, they somehow ended up near @cosors… 😂
https://en.m.wikipedia.org/wiki/Shrine_of_the_Three_Kings

By then, Western Christian tradition had immortalised them as the Three Kings Caspar, Melchior and Balthasar/Balthazar that are so familiar to us today. They are depicted in innumerable nativity scenes and feature in carols, bring Christmas gifts to children in the Spanish-speaking parts of the world on January 6th, and then there is also the Catholic Sternsinger tradition in German-speaking countries: Children and teenagers dress up as The Three Kings, go from door to door singing carols and asking for donations benefiting children in need worldwide and - if desired - write a chalk blessing onto the door frame.
off topic
but a fun fact too: Did you know that in this very elaborate shrine lie not only the Magi, venerated by Christians, but that worked into its magnificent casing is the personal signet ring of Nero, who is considered a great persecutor of Christians and had those very Christians executed?
Even the theologians who once commissioned it did not know this.
Which shows that sometimes we should question even the knowledgeable.
 
  • Like
  • Wow
Reactions: 3 users

Frangipani

Regular
This afternoon, I came across a presentation uploaded to YouTube earlier today by a young lady with an Indian English accent - presumably a university student (Google came up with a LinkedIn account by someone with that same name enrolled at Charotar University of Science in Gujarat state, which could be her; I can’t open the link without signing in as a LinkedIn member, though).



Her lacklustre presentation as such is nothing to shout about, on the contrary; everything is read verbatim and uninspiringly, and the slides are in fact shockingly shoddy with lots of mistakes (especially doubling of words all over the place). Despite applauding her for her choice of topic 👍🏻, I honestly doubt she will get a satisfactory grade for this poor piece of workmanship at (presumably) tertiary (!) level. 😱 At the very least she could have proofread the slides before presenting (and uploading them to a public YouTube channel) in order to delete all those duplicate words… (Please note that this has nothing to do with not being a native English speaker.)

So why am I drawing your attention to this presentation, then?

05706F0C-D9CF-4799-A150-90F9ED80A9AB.jpeg


Well, have a look at the penultimate slide!

DE463A84-DF68-44E7-A479-53A7758D83A1.jpeg


I assume this young lady would have learnt about Brainchip in class? Looks like more and more universities are becoming aware of our company.

I am actually a bit puzzled as to why Brainchip has not yet extended its University AI Accelerator Program to any of the excellent Indian technological universities (especially to the IITs, the spearhead of technical education in India) - there is such a huge talent pool of gifted IT students there, who could experiment with Akida, and after graduating sing its praises to their prospective employers.

Lucky for us, India’s largest conglomerate no longer needs any convincing… 😊
 
  • Like
  • Love
  • Fire
Reactions: 43 users

Ian

Founding Member
 
  • Like
  • Love
  • Fire
Reactions: 39 users

Tothemoon24

Top 20

I wonder why Brainchip have set up an office in South Korea ✅

Leading-edge adopters aren’t waiting. In South Korea’s Pyeongtaek City, city leaders are planning to build a test bed using smart city technologies such as artificial intelligence and autonomous driving to be completed in 2025 and spread throughout the city.

Transforming industries and enhancing efficiency with cutting-edge AI vision technology.

One of the most promising applications of IoT AI vision technology is to capture consumer data inside stores so retailers can more quickly and efficiently optimize product placement, store layout and customer experience based on video data.

But there are two major hurdles to overcome: Cost and complexity. A large grocery store that wants to harvest foot-traffic, purchase and other data would need about 15,000 cameras in store. At 30 frames per second of 4K video, those 15,000 cameras would produce 225 gigabits of data per second.

That happens because video data is enormous, compared with other forms of data, and intricate processing is required, including image recognition, object detection, and scene analysis. These AI vision tasks often require advanced algorithms and models, contributing to the computational complexity. On top of that, big data like that needs to be sent to the cloud for efficient computation and then back out for decision-making.

Clearly, 225 gigabits per second is uneconomical.
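For scale, the arithmetic behind that figure works out if each camera streams compressed 4K at roughly 15 Mbit/s (my assumption; the article doesn't state a per-camera bitrate):

```python
cameras = 15_000
mbit_per_camera = 15                          # assumed compressed 4K @ 30 fps bitrate
total_gbit = cameras * mbit_per_camera / 1_000
print(f"{total_gbit:.0f} Gbit/s aggregate")   # -> 225 Gbit/s
```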

But that’s only if you think it’s still 2018, not 2023. Much has changed in the past five years. The combination of improved and more-efficient processing at the edge, coupled with AI and machine learning, now chips away at that big uneconomic roadblock in front of many promising vision applications.

Unlocking edge AI vision innovation

Back then, too many vital technologies sat siloed, each either difficult or impossible to integrate with other important puzzle pieces to enable a frictionless innovation ecosystem. In a homogenous processing world, it was difficult to customize solutions for different vision workloads when one size had to fit all. What’s different about today?

Engineers and developers have attacked cost and complexity, as well as other challenges. Take the processing complexity challenge for example. One pathway to driving down the cost and complexity of vision solutions is to offer developers more flexibility in how they implement edge solutions – heterogeneous compute.

Designers are producing increasingly powerful processors that offer higher computational capacity while remaining energy efficient. These processors include CPUs, GPUs, ISPs and accelerators designed to handle complex tasks like AI and machine learning in sometimes resource-constrained environments. In addition, AI accelerators – either as a core on an SoC or as a stand-alone SoC – enable efficient execution of AI algorithms at the edge.

Tackling complexity

Let’s take one piece of the complexity puzzle. Arm in 2022 introduced the Arm Mali-C55, its smallest and highest-performance Image Signal Processor (ISP) to date. It features a blend of image quality, throughput, power efficiency, and silicon area for applications like endpoint AI, smart home cameras, AR/VR, and smart displays. It achieves higher performance with throughput of up to 1.2 Gpix/sec, making it a compelling choice for demanding visual processing tasks. And, when it comes to the push toward heterogeneous compute, the Mali-C55 is designed for seamless integration in SoC designs with Cortex-A or Cortex-M CPUs.

That’s key because in SoCs ISP output is often linked directly to a machine learning accelerator for further processing using neural networks or similar algorithms. This involves providing scaled-down images to the machine learning models for tasks like object detection and pose estimation.

This synergy, in turn, has given rise to ML-enabled cameras and the concept of “Software Defined Cameras.” And this allows OEMs and service providers to deploy cameras globally with evolving capabilities and revenue models tied to dynamic feature enhancements.

Think for example, of a car parking garage with cameras dangling above every slot, determining whether the slot is filled or not. That was great in 2018 for drivers entering the garage needing to know where the open slots are at a glance, but not an economical solution in 2023. How about leveraging the notion of edge AI and positioning only one or two cameras at the entrance and exit or on each floor and having AI algorithms figure out the rest? That’s 2023 thinking.

That brings us back to the retailer’s problem: 15,000 cameras producing 225 gigabits per second of data. You get the picture, right?

Amazon has recognized this problem and in the latest version of its Just Walk Out store technology, it increased the compute capability in the camera module itself. That’s moved the power of AI to the edge, where it can be more efficiently and quickly computed.

With this powerful, cost-effective vision opportunity, a grocery retailer might, for example, analyze video data from in-store cameras and determine that it needs to restock oranges around noon every day because most people buy them between 9 and 11 a.m. Upon further analysis, the retailer realizes that a lot of its customers – anonymized in the video data for privacy reasons – also buy peanuts during the same shopping trip. It can then use this video data to change product placement.

Right place, right compute

This kind of compute optimization – putting the right type of edge AI computing much closer to the sensors – reduces latency, can tighten security, and reduces costs. It can also spark new business models.

One such business model is video surveillance-as-a-service (VSaaS). VSaaS is the provision of video recording, storage, remote management and cybersecurity in the mix of on-premises cameras and cloud-based video-management systems. The VSaaS market is expected to reach $132 billion by 2027, according to Transparency Market Research.

At a broader level, however, immense opportunity awaits because so many powerful potential applications have been waiting in the wings because of economics, processing limitations or sheer complexity.
 
  • Like
  • Wow
  • Thinking
Reactions: 26 users

equanimous

Norse clairvoyant shapeshifter goddess
I am actually a bit puzzled as to why Brainchip has not yet extended its University AI Accelerator Program to any of the excellent Indian technological universities (especially to the IITs, the spearhead of technical education in India) - there is such a huge talent pool of gifted IT students there, who could experiment with Akida, and after graduating sing its praises to their prospective employers.

Lucky for us, India’s largest conglomerate no longer needs any convincing… 😊
Hi Frangipani,

I wouldn't rule that out.

A quick search found this at one of their top universities. I cannot access their papers, though...

1698090464989.png
 
Last edited:
  • Like
Reactions: 5 users

charles2

Regular
  • Like
Reactions: 6 users
  • Like
  • Fire
  • Love
Reactions: 15 users

TECH

Regular
Time or Timing...it just happens, that's our reality.

Brainchip....at the right place at the right time, intersecting our future.

Would you prefer to have your technology still parked up on the research bench, not even 100% sure that it's silicon proof....OR

Would you prefer to have your 1st and 2nd generation NSoC, 100% fab proof, in the hands of numerous tier 1 companies out in the field now?

That's what having a leading-edge over competitors really means....NOW not in the imaginable FUTURE.

Love Akida 💘
 
  • Like
  • Love
  • Fire
Reactions: 44 users