BRN Discussion Ongoing

jtardif999

Regular
On a serious note.

While newer shareholders, say those of two years' standing or less, have witnessed an increase in third-party writing about neuromorphic SNN computing, this has not always been the case.

Indeed, for shareholders who managed to board back in 2015, it was very much an information desert as far as SNN technology was concerned, with Brainchip and Peter van der Made best described as lone voices in this intellectual wasteland where passing ships were mere lights on the horizon, a few of them cruising under the Intel Loihi flag.

Despite letting off flares and signal lanterns, Brainchip and Peter van der Made were so far from the normal shipping lanes that they were ignored.

I came on board in 2017 after the best part of 12 months of research. I started with absolutely no knowledge of the science, and with the release of AKIDA 2 I now know even less.

Back in 2016 it was almost impossible to find anything about feedforward SNNs or SCNNs, except for a few theoretical papers that allowed for the scientific possibility that what Peter van der Made was doing could end in success; even then, the preponderance of opinion dismissed his ideas in favour of analogue approaches.

Why am I writing this now? Because, as at today, the circumstances are entirely different.

The commercial, scientific and engineering worlds have woken up; they are seeing Brainchip and Peter van der Made's signal and changing course to intercept them, and even the ships flying the Intel Loihi flag are now also flying the AKIDA pennant.

The amount of information, both scientific and commercial, now being produced about Brainchip and AKIDA technology is unprecedented, and it closely aligns with statements by CEO Sean Hehir that the number of sales enquiries is greater than at any time in Brainchip's history.

The fact is that it is very clear to me, and I expect to any long-term shareholder, that change is in the air.

Brainchip and Peter van der Made have been intercepted and escorted into the mainstream by an array of significant ecosystem partners including SiFive, Edge Impulse, ARM, Intel, Prophesee, VVDN, Mercedes-Benz, Valeo, MegaChips, Renesas, NASA and others.

Rob Telson’s “exciting times” now has a solid gold ring of truth about it.

My opinion only. DYOR.
FF

AKIDA BALLISTA
This post should go straight to the pool room, or at least to that thread that @MaxwellAnne is proposing 🤓
 

Diogenese

Top 20
This is the latest lecture presented by Katina Michael of ASU.

BrainChip Inc: AI Accelerator Program - Introduction to MetaTF​

With Brainchip's Todd Vierra and Nikunj Kotecha.



Happy viewing.

As of November 2022, 30 PhD students had already completed this program.

PhD students already have their bachelor's degrees, so they have a sound understanding of programming principles. Apart from the rocket science of the Akida SoC, there is no real rocket science in building model libraries or programming Akida, but there may be a bit of secret sauce (not the original recipe) in MetaTF to convert NN models to SNN.
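For anyone curious what that conversion step looks like in practice, here is a minimal sketch of the quantise-then-convert flow. The function names follow the publicly documented cnn2snn package, but MetaTF's API has changed between releases, so treat the exact names and arguments as assumptions to check against the current docs.

```python
# Hedged sketch of the Keras -> Akida conversion flow exposed by MetaTF.
# Exact names/signatures vary between MetaTF releases; verify against the docs.
from tensorflow import keras
from cnn2snn import quantize, convert  # MetaTF's CNN-to-SNN conversion package

# Start from an ordinary trained Keras CNN.
keras_model = keras.models.load_model("my_trained_cnn.h5")  # hypothetical file

# 1. Quantize weights and activations to the low bit-widths Akida expects.
quantized_model = quantize(keras_model,
                           weight_quantization=4,
                           activ_quantization=4)

# 2. Convert the quantized network into an event-based (spiking) Akida model.
akida_model = convert(quantized_model)

# The result can run in software simulation or be mapped to AKD1000 hardware.
akida_model.summary()
```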
 

jtardif999

Regular
Too right!


Neuromorphic Computing: A More Efficient Technology for Activities That Require a Human Touch​


Morgan Lewis - Tech & Sourcing

ChatGPT and subsequent artificial intelligence (AI) programs have been in the headlines recently. Not as common is the discussion of the cost associated with developing and operating such AI tools or if such AI is right for every job.
It is estimated that ChatGPT can cost millions of dollars per day to operate. Given the potentially large price tag, consumers may ask how users can harness the benefit of AI without the high operating cost and what the best technology is in applications where precise decision-making and outcomes are desired. Some believe that the answer to both of these questions is neuromorphic computing.
What Is Neuromorphic Computing?
Neuromorphic computing is designed to mimic the human brain, operating in a manner that allows the technology to solve problems in ways our brains would. The chips that power neuromorphic computing are specially designed with the same structure as our brain’s neurons and synapses, allowing them to make decisions and judgments in a way that typical computer systems or algorithms cannot.
Neuromorphic computing is intended to be more efficient, powerful, and cost-effective than other AI technologies. Although still in development and not widely deployed, it is being evaluated in various settings, including cloud computing, robotics, and autonomous vehicle technology.
The End of the Algorithm in AI?
Rather than processing all the data to follow an algorithm to an answer, the goal of neuromorphic computing is to decipher the necessary information to determine the correct solution. Leveraging this would allow companies and consumers to implement technology into everyday life wherever a human touch is required—rather than utilizing answers based solely on an algorithm.
AI is effective at providing large amounts of computing power, responding to queries that may take a human or even a standard computer a long time to answer. Neuromorphic computing, on the other hand, takes a more active approach, giving the correct response or action to a scenario.
Key Takeaway
As technology and society integrate on a deeper level, there will be an increased demand on our computers and technology to interact with us as a human would with speech, movement, and reason. Neuromorphic computing’s deployment is no easy feat, and we will be on the lookout for how companies bring humanity into future computers and technologies.

This IS gold - what BRN LTHs have known for a long time. It seems the world is starting to catch on to the new SNN paradigm.

The End of the Algorithm in AI?
Rather than processing all the data to follow an algorithm to an answer, the goal of neuromorphic computing is to decipher the necessary information to determine the correct solution. Leveraging this would allow companies and consumers to implement technology into everyday life wherever a human touch is required—rather than utilizing answers based solely on an algorithm.
AI is effective at providing large amounts of computing power, responding to queries that may take a human or even a standard computer a long time to answer. Neuromorphic computing, on the other hand, takes a more active approach, giving the correct response or action to a scenario.
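For anyone wondering what "not just following an algorithm over all the data" means at the neuron level, here is a toy illustration (mine, not the article's, and certainly not Akida's actual implementation) of a leaky integrate-and-fire neuron, the kind of event-driven unit that neuromorphic chips realise in silicon. The point is that work only happens when spikes arrive, which is where the efficiency comes from.

```python
# Toy leaky integrate-and-fire (LIF) neuron in plain Python/NumPy.
# Illustrative only: real neuromorphic hardware implements this kind of
# dynamic in silicon and stays idle between input events.
import numpy as np

def lif_neuron(input_spikes, weight=0.5, leak=0.95, threshold=1.0):
    """Return the output spike train of one neuron driven by an input spike train."""
    potential = 0.0
    output = []
    for spike in input_spikes:
        potential = potential * leak + weight * spike  # integrate incoming events
        if potential >= threshold:
            output.append(1)   # fire an output spike
            potential = 0.0    # reset the membrane potential
        else:
            output.append(0)
    return np.array(output)

# Sparse, event-like input: mostly zeros, so most time steps carry no work.
spikes_in = np.array([0, 1, 0, 0, 1, 1, 0, 0, 0, 1])
print(lif_neuron(spikes_in))
```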
 

Bravo

If ARM were an arm, BRN would be its biceps 💪!

JDelekto

Regular
As of November 2022, 30 PhD students had already completed this program.

PhD students already have their bachelor's degrees, so they have a sound understanding of programming principles. Apart from the rocket science of the Akida SoC, there is no real rocket science in building model libraries or programming Akida, but there may be a bit of secret sauce (not the original recipe) in MetaTF to convert NN models to SNN.
I would love for them to make the course material available to some of the other premium online educational platforms, such as Udacity. I have a background in programming, but wrapping my head around the libraries used in many of these samples is difficult for me.

I want to understand the foundation and then get hands-on with the code. I think having access to course materials or even having access to an instructor or engineer with the knowledge would probably help clear things up.
 

Diogenese

Top 20
I would love for them to make the course material available to some of the other premium online educational platforms, such as Udacity. I have a background in programming, but wrapping my head around the libraries used in many of these samples is difficult for me.

I want to understand the foundation and then get hands-on with the code. I think having access to course materials or even having access to an instructor or engineer with the knowledge would probably help clear things up.
Hi JD,

It's early days yet. We'll have maybe 80-odd engineers who have done the course by the end of this year, and their new employers will be potential customers. The original 30 may already have been turned loose.

I missed the first session and only sampled the second. I haven't done any serious programming since Fortran IV.

What did you think of the second session?

From what I saw, it seemed to be fairly straightforward if you were up to speed with MetaTF.

You can download MetaTF and model libraries and use the simulator.
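For anyone who wants a feel for what that looks like before committing to the course, here is a minimal sketch of pulling a pretrained model from the model zoo and running it in the software simulator. The package and helper names (akida, akida_models, ds_cnn_kws_pretrained, forward) are taken from the public MetaTF documentation as I read it, so treat them as assumptions that may differ in the current release.

```python
# Minimal sketch: load a pretrained zoo model and run it in the MetaTF
# software simulator (no Akida hardware required). Names are assumptions
# based on the public docs; check the current MetaTF release.
import numpy as np
import akida
from akida_models import ds_cnn_kws_pretrained  # keyword-spotting example model

model = ds_cnn_kws_pretrained()   # returns an akida.Model
model.summary()

# With no device mapped, inference runs in the bundled software simulator.
dummy_input = np.zeros((1, 49, 10, 1), dtype=np.uint8)  # shape is illustrative
outputs = model.forward(dummy_input)
print(outputs.shape)

# If an AKD1000 board were present, the same model could be mapped onto it:
# device = akida.devices()[0]
# model.map(device)
```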

We had a programmer here, Uiux, who was able to do a couple of implementations using MetaTF's predecessor ADE a couple of years ago, but I think he is in self-imposed exile.
 

charles2

Regular

charles2

Regular

Roku says it could lose 25 percent of its cash after Silicon Valley Bank fails​


Motley Fool ‘lovers’ will enjoy this. They recommended ROKU on March 10 as a no-brainer buy. It turns out that ROKU has nearly US$500m uninsured at SVB.

Not quite a Black Swan event but close.
 

Andy38

The hope of potential generational wealth is real
This IS gold - what BRN LTHs have known for a long time. It seems the world is starting to catch on to the new SNN paradigm.

The End of the Algorithm in AI?
Rather than processing all the data to follow an algorithm to an answer, the goal of neuromorphic computing is to decipher the necessary information to determine the correct solution. Leveraging this would allow companies and consumers to implement technology into everyday life wherever a human touch is required—rather than utilizing answers based solely on an algorithm.
AI is effective at providing large amounts of computing power, responding to queries that may take a human or even a standard computer a long time to answer. Neuromorphic computing, on the other hand, takes a more active approach, giving the correct response or action to a scenario.
And our mate Tim from NVISO loves both ChatGPT and 🧠 🍟. Let’s see how all this plays out!
 

manny100

Regular
I see Louis Di Nardo still holds 0.34% of BRN. Obviously still has a lot of faith in the business.
He would know where it is heading.
 
A mention in Forbes:
 

Makeme 2020

Regular

Tweet from Edge Impulse (@EdgeImpulse):

"Don’t miss a demo from @BrainChip_inc at our #EW23 booth! We’ll be showing off our FOMO object detection algorithm running on the Akida AKD1000 neuromorphic processor."

Quoting BrainChip (@BrainChip_inc), 20h:

"Media Alert: BrainChip Demonstrates Edge AI Processing at Embedded World 2023 in Nuremberg https://brainchip.com/media-alert-brainchip-demonstrates-edge-ai-processing-at-embedded-world-2023-in-nuremberg/"

1:14 AM · Mar 12, 2023 · 1,054 Views · 7 Retweets · 14 Likes
 

Deleted member 118

Guest
I see Louis Di Nardo still holds 0.34% of BRN. Obviously still has a lot of faith in the business.
He would know where it is heading.
Yep and not much selling by any, but would like to see some buying.

 

Deleted member 118

Guest
As it vanished yesterday, here it is again

 

BaconLover

Founding Member
Yep and not much selling by any, but would like to see some buying.


I see Louis Di Nardo still holds 0.34% of BRN. Obviously still has a lot of faith in the business.
He would know where it is heading.

Lou did sell nearly half of his holdings; he had about 11 million shares a couple of years ago, and now it's about 5.9 million shares.

Sure, that's a lot of shares he's still holding, but he has taken profits off the table (nothing wrong with that, just adding some context).
 

Steve10

Regular

Shifting to an FPGA Data Center Future: How are FPGAs a Potential Solution?​

April 04, 2022 by Jake Hertz

As data centers are put under more pressure, EEs are looking at field-programmable gate arrays (FPGAs) as a potential solution. However, how could they be useful, and who is ramping up their research efforts?​


Today more than ever, the data center is being put under enormous strain. Between the increasing popularity of cloud computing, the high rate of data creation, and new compute-intensive applications like machine learning, our current data center infrastructures are being pushed to their limits.

To help ensure that the data center of the future will be able to keep up with these trends and continually improve performance, engineers are reimagining data center computing hardware altogether.

From this, one of the most important pieces of hardware for the data center is the FPGA.


A high-level overview of an FPGA. Image used courtesy of Stemmer Imaging



A recently announced center, the Intel/VMware Crossroads 3D-FPGA Academic Research Center, is hoping to spur the improvement of FPGA technology explicitly for data centers.

In this article, we’ll talk about the benefits of FPGAs for the data center and how the new research center plans to improve the technology even further.

A Shift to Accelerators​

There are currently two major trends in the data center that are driving the future of the field: an increase in data traffic and an increase in computationally-intensive applications.

The challenge here is that, not only must the data centers be able to handle increased data and tougher computations, but there is a greater demand to do this at lower power and higher performance than ever before.

To achieve this, engineers have shifted away from more general-purpose computing hardware, such as central processing units (CPUs) and graphics processing units (GPUs), and instead, employ hardware accelerators.


An example of heterogeneous architecture, which is becoming the norm in the data center. Image used courtesy of Zhang et al


With application-specific computing blocks, engineers can achieve higher performance and lower-power computation than previously possible. To many, a heterogeneous computing architecture consisting of accelerators, GPUs, and CPUs is the widely accepted path forward for future data centers.


Benefits of FPGAs for the Data Center​

FPGAs are uniquely positioned to benefit the data center for several reasons.

First off, FPGAs are highly customizable, meaning that they can be configured for use as an application-specific hardware accelerator.

In the context of the data center, engineers can configure FPGAs for applications like machine learning, networking, or security. Due to their software-defined nature, FPGAs offer easier design flows and a shorter time to market for accelerators than an application-specific integrated circuit (ASIC).


An example diagram showing how FPGAs can be dynamically reconfigured. Image used courtesy of Wang et al


Secondly, FPGAs can offer the benefits of versatility. Since an FPGA's functionality can be defined purely by HDL code, a single FPGA can serve many purposes. This functionality could help reduce complexity and create uniformity in a system.

Instead of needing a variety of different hardened ASICs, a single FPGA can be configured and reconfigured for various applications, opening the door to further optimization of hardware resources.

Thus, some FPGAs can be reconfigured in real-time based on the application being run, meaning a single FPGA can serve as many roles as needed.
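(An aside from me, not part of the article: to make "reconfigured in real time" concrete, here is a hedged Python sketch of how that looks from software on a PYNQ-supported Zynq board, where loading a different overlay repurposes the same fabric. The bitstream file names are hypothetical.)

```python
# Illustration only: runtime FPGA reconfiguration via PYNQ overlays.
# Bitstream file names are hypothetical; each Overlay() call downloads a new
# configuration into the same FPGA fabric, changing its hardware role.
from pynq import Overlay

# Configure the fabric as, say, an image-filtering accelerator...
vision_overlay = Overlay("vision_filter.bit")   # hypothetical bitstream
# ... use the IP blocks exposed by vision_overlay here ...

# ...then repurpose the very same FPGA as a crypto accelerator.
crypto_overlay = Overlay("aes_engine.bit")      # hypothetical bitstream
```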



A 3D-FPGA Academic Research Center​

Recently, the Intel/VMware Crossroads 3D-FPGA Academic Research Center was announced as a multi-university effort to improve the future of FPGA technology.

The team, which consists of researchers from the University of Toronto, UT Austin, Carnegie Mellon, and more, focuses their efforts directly on the role of FPGAs in the data center. More specifically, the group will be investigating ways to achieve 3D integration within the framework of an FPGA.

The idea is that, by being able to stack multiple FPGA dies vertically, researchers should be able to achieve a higher transistor density while also balancing performance, power, and manufacturing costs.

Overall, the group hopes to use 3D-integration technology to create heterogeneous systems consisting of FPGAs and hardened logic accelerators, all within a single package. The technology will seek to combine a Network-on-Chip (NoC) in a layer beneath the traditional FPGA fabric such that the NoC can control data routing while the FPGA can provide the computation needed.

Overall, the group hopes to extend the rise of in-network computing into the server with their new technologies.



FPGAs for Future Data Centers​

The FPGA will undoubtedly become a key player as the data center trends towards more data and more intensive computation.

With a new research group hoping to bolster the technology, it seems even more apparent now than ever that FPGAs are becoming a mainstay in the data center industry.






Amazon’s Xilinx FPGA Cloud: Why This May Be A Significant Milestone​

By Karl Freund (AI and Machine Learning, CPU GPU DSP FPGA, Semiconductor)

Datacenters, especially the really big guys known as the Super 7 (Alibaba, Amazon, Baidu, Facebook, Google, Microsoft and Tencent), are experiencing significant growth in key workloads that require more performance than can be squeezed out of even the fastest CPUs. Applications such as Deep Neural Networks (DNN) for Artificial Intelligences (AIs), complex data analytics, 4K live streaming video and advanced networking and security features are increasingly being offloaded to super-fast accelerators that can provide 10X or more the performance of a CPU. NVIDIA GPUs in particular have benefited enormously from the training portion of machine learning, reporting 193% Y/Y growth last quarter in their datacenter segment, which is now approaching a $1B run-rate business.


But GPUs aren’t the only acceleration game in town. Microsoft has recently announced that Field Programmable Gate Array (FPGA) accelerators have become pervasive in their datacenters. Soon after, Xilinx announced that Baidu is using their devices for acceleration of machine learning applied to speech processing and autonomous vehicles. And Xilinx announced last month a ‘reconfigurable acceleration stack’ that reduces the time to market for FPGA solutions with libraries, tools, frameworks and OpenStack support for several datacenter workloads. Now Amazon has announced the addition of Xilinx FPGAs to their cloud services, signaling that the company may be seeing market demand for access to this once-obscure style of chip for parallel processing. This announcement may be a significant milestone for FPGAs in general, and Xilinx in particular.

What did Amazon Announce?

Amazon is not the first company to offer FPGA cloud services, but they are one of the largest. Microsoft uses them internally but does not yet offer them as a service to their Azure customers. Amazon, on the other hand, built custom servers to enable them to offer new public F1 Elastic Cloud instances supporting up to eight 16nm Xilinx Ultrascale+ FPGAs per instance. Initially offered as a developer’s platform, these instances can target the experienced FPGA community. Amazon did not discuss the availability of high-level tools such as OpenCL or the Xilinx reconfigurable acceleration stack. Adding these capabilities could open up a larger market for early adopters and developers. However, I would expect Amazon to expand their offering in the future, otherwise I doubt they would have gone to all the expense and effort to design and build their own customized, scalable servers.
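(Again my aside, not the author's: where high-level tools such as OpenCL are supported, an application typically discovers the FPGA as an "accelerator" class device. A generic sketch using pyopencl is below; the real F1 development flow goes through Xilinx's own tooling, so this is only meant to show the shape of the idea.)

```python
# Generic OpenCL device discovery from Python (pyopencl). Under a vendor's
# OpenCL runtime, FPGA boards usually appear as ACCELERATOR-type devices.
import pyopencl as cl

for platform in cl.get_platforms():
    try:
        devices = platform.get_devices(device_type=cl.device_type.ACCELERATOR)
    except cl.Error:
        devices = []  # no accelerator-class devices under this platform
    for dev in devices:
        print(f"{platform.name}: {dev.name}")
```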

Why this announcement may be significant

First and foremost, this deal with the world’s largest cloud provider is a major design win for Xilinx over their archrival Altera, acquired last year by Intel, as Altera was named as Microsoft’s supplier for their FPGA enhanced servers. At the time of the Altera acquisition, Intel had predicted that over one third of cloud compute nodes would deploy FPGA accelerators by 2020. Now it looks like Xilinx is poised to benefit from the market’s expected growth, in part since Xilinx appears to enjoy at least a year lead in manufacturing technology over Altera with Xilinx’s new 16nm FinFET generation silicon, which is now shipping in volume production. Xilinx has also focused on providing highly scalable solutions, with support for PCIe and other capabilities such as the CCIX interconnect. Altera, on the other hand, has been focusing on integration into Intel, including the development of an integrated multichip module pairing up one low-end FPGA with a Xeon processor. Surely, Intel wants to drag as much Xeon revenue along with each FPGA as possible. While this approach has distinct advantages for some lower end applications (primarily through faster communications and lower costs), it is not ideal for applications requiring accelerator pooling, where multiple accelerators are attached to a single CPU.

Second, as I mentioned above, Amazon didn’t just throw a bunch of FPGA cards into PCIe servers and call it a day; they designed a custom server with a fabric of pooled accelerators that interconnects up to 8 FPGAs. This allows the chips to share memory and improves bandwidth and latency for inter-chip communication. That tells us that Amazon may be seeing customer demand for significant scaling for applications such as inference engines for Deep Learning and other workloads.

Finally, Amazon must be seeing demand from developers across a broader market than the typical suspects on the list of the Super 7. After all, those massive companies possess the bench strength and wherewithal to buy and build their own FPGA equipped servers and would be unlikely to come to their competitor for services. Amazon named an impressive list of companies endorsing the new F1 instance, spanning a surprising breadth of applications and workloads.

Where do we go from here?

The growing market for datacenter accelerators will be large enough to lift a lot of boats, not just GPUs, and Xilinx appears to be well positioned to benefit from this trend. It will now be important to see more specific customer examples and quantified benefits in order to gauge whether the FPGA is going mainstream or remains a relatively small niche. We also hope to see more support from Amazon for the toolsets needed to make these fast chips easier to use by a larger market. This includes support for application developers to use their framework of choice (e.g., Caffe, FFMPEG) with a simple compile option to target the FPGA, a goal of the recently introduced Xilinx acceleration stack.


There is a link to BRN in this article via Carnegie Mellon.

The team, which consists of researchers from the University of Toronto, UT Austin, Carnegie Mellon, and more, focuses their efforts directly on the role of FPGAs in the data center.

University AI Accelerator Program​



 

IloveLamp

Top 20
[LinkedIn screenshot attachment]
 

Mazewolf

Regular