BRN Discussion Ongoing

Kachoo

Regular
  • Haha
  • Like
  • Love
Reactions: 13 users

McHale

Regular

Geoffrey Carrick
Non-Executive Director
Chair of the Audit & Governance Committee



By the look of that grin, I'm pretty sure Geoffrey keeps it under his bed. 🤣🤣🤣
Maybe he's got big socks too, but scrub the SVB, and try GC eh.

I don't know about this being a black swan scenario, but Janet Yellen has been trying to calm things down because, like SVB, a lot of other US banks have big exposure to crypto. The FTX story also went down just recently.

This SVB story has definitely weighed on the US financial sector over the last couple of days, and SVB's share price has been absolutely smashed, so it may be an opportunity for some smart people; maybe try naked puts (probably too late for that). However, if you were one of the unfortunates who had any money in SVB, you would be one fucked up duck right now.

So given it's a local bank that supports start-ups, I would not like to think that BRN does their banking with them.
 
  • Like
  • Fire
  • Love
Reactions: 10 users

HopalongPetrovski

I'm Spartacus!
Maybe he's got big socks too, but scrub the SVB, and try GC eh.

I don't know about this being a black swan scenario, but Janet Yellen has been trying to calm things down because, like SVB, a lot of other US banks have big exposure to crypto. The FTX story also went down just recently.

This SVB story has definitely weighed on the US financial sector over the last couple of days, and SVB's share price has been absolutely smashed, so it may be an opportunity for some smart people; maybe try naked puts (probably too late for that). However, if you were one of the unfortunates who had any money in SVB, you would be one fucked up duck right now.

So given it's a local bank that supports start-ups, I would not like to think that BRN does their banking with them.
Geoffrey actually seems to be a lovely chap. Had a drink and a chat with him last AGM and is impressive like the rest of our Directors.

I just saw on the news that deposit holders of SVB can get access to their first $250k on Monday under the federal government's deposit guarantee scheme, so that's something, and at least the little people have some protection.
As far as the black swan scenario goes, that was a Fact Finder post a page or two back. 😁
 
  • Like
  • Fire
Reactions: 12 users

rgupta

Regular
Just my opinion of course but we know:

1. Vorago successfully provided a design to harden AKD1000 for deep-space applications.

2. We know Anil Mankar said in an interview with Anastasia that AKIDA would likely be produced at 90 nm for NASA.

3. We know ISL, working with Brainchip and the US Air Force Research Laboratory, proved out their radar simulation SBIR.

4. We know Anil Mankar said AKIDA was being benchmarked against a GPU and that the results were coming up in AKIDA's favour.

5. We know Edge Impulse described AKD1000 as science fiction, able to compete at 300 MHz with a GPU running at 900 MHz.

6. We know researchers found that AKIDA in USB form, at US$50.00, was a match for an Nvidia GPU at US$30,000.00.

So I would say pretty well probably.

My opinion only DYOR
FF

AKIDA BALLISTA

6. We know researchers found that AKIDA in USB form, at US$50.00, was a match for an Nvidia GPU at US$30,000.00.
That means a 600-fold cost saving per unit, and at least another 600-fold saving on energy costs.
Which means that if someone replaced 600 GPUs with Akida, it would save about $17.97 million on hardware, plus a combined 360,000-fold saving on energy costs.
Wohooooo......
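(A minimal sketch of that arithmetic, using only the figures quoted in the post; the inputs are the poster's assumptions, not published pricing.)

```python
# Minimal sketch of the arithmetic above, using only the figures quoted in the post
# (US$50 per Akida USB unit, US$30,000 per GPU, 600 units replaced, a claimed ~600x
# per-unit energy advantage). These are the poster's assumptions, not published pricing.

akida_unit_cost = 50.0       # USD, Akida in USB form (as quoted)
gpu_unit_cost = 30_000.0     # USD, Nvidia GPU (as quoted)
units_replaced = 600
energy_advantage = 600       # claimed per-unit energy saving factor

cost_ratio = gpu_unit_cost / akida_unit_cost                      # 600x per unit
hardware_saving = units_replaced * (gpu_unit_cost - akida_unit_cost)

print(f"Per-unit cost ratio: {cost_ratio:.0f}x")
print(f"Hardware saving for {units_replaced} units: ${hardware_saving:,.0f}")     # $17,970,000
print(f"Claimed combined energy factor: {units_replaced * energy_advantage:,}x")  # 360,000x
```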
 
  • Like
  • Fire
  • Love
Reactions: 29 users
6. We know researchers found that AKIDA in USB form, at US$50.00, was a match for an Nvidia GPU at US$30,000.00.
That means a 600-fold cost saving per unit, and at least another 600-fold saving on energy costs.
Which means that if someone replaced 600 GPUs with Akida, it would save about $17.97 million on hardware, plus a combined 360,000-fold saving on energy costs.
Wohooooo......
Yes, it is amazing, but it appears in black and white in a research paper commissioned by US Homeland Security for the development of a hand-held detector for its agents.

It has been posted multiple times.

My opinion only DYOR
FF

AKIDA BALLISTA
 
  • Like
  • Fire
  • Wow
Reactions: 19 users

Taproot

Regular
Impressive: AKIDA 2nd Gen at $10 compared to $1,000 for a GPU.
I guess the question is: how big is the potential GPU-served market that we can capture, and what does that market do / consist of?
Cheers
 
  • Like
  • Fire
  • Thinking
Reactions: 9 users

White Horse

Regular
  • Like
  • Love
  • Fire
Reactions: 27 users

Boab

I wish I could paint like Vincent
  • Haha
  • Like
  • Fire
Reactions: 13 users

HopalongPetrovski

I'm Spartacus!
6. We know researchers found that AKIDA in USB form, at US$50.00, was a match for an Nvidia GPU at US$30,000.00.
That means a 600-fold cost saving per unit, and at least another 600-fold saving on energy costs.
Which means that if someone replaced 600 GPUs with Akida, it would save about $17.97 million on hardware, plus a combined 360,000-fold saving on energy costs.
Wohooooo......
I remember PVDM talking about the astonishing energy savings that could be achieved by Akida in data centres a number of years ago and thinking at the time "that's nice, but ho hum". 🤣
Once again, another prescient, pioneering and currently hot button attribute that can soon be realised and dominated by Brainchip.
The world is rapidly catching up to the technology envisioned by Peter and brought to fruition by Anil and crew.
It's nice to be on the right side of history as well as commercial viability.
We tick all the boxes: green, techie and beancounters'.
 
  • Like
  • Love
  • Fire
Reactions: 29 users

rgupta

Regular
I 100% believe in you. The reason I mentioned the calculations is that this is how people in the real world get motivated.
Don't you think Mercedes got motivated by exactly that?
When I first purchased my BRN stock, the main reason was power saving. I am aware the world is looking for greener technologies, and what better technology can you quote me than one that saves the IT industry billions of dollars on power consumption?
I am also aware that one solution does not fit all, but when one person gets a good outcome, it motivates others to try the same for their own cause.
I know it is taking time for things to materialise, but to me it is worth taking the risk, because the risk-reward ratio here is exponential.
DYOR
Yes it is amazing but it appears in black and white in a research paper commissioned by US Homeland Security for the development of a hand held detector for its agents.

It has been posted multiple times.

My opinion only DYOR
FF

AKIDA BALLISTA
 
  • Like
  • Love
  • Fire
Reactions: 14 users
This extracted paragraph is telling the semiconductor world that Brainchip is unleashing a Black Swan event and they need to be on the right side of history:

“In Tirias Research’s opinion, it’s not the path taken to the result that’s important, it’s the result that counts. If Brainchip’s Akida event-based platform succeeds, it won’t be the first time that a radical new silicon technology has swept the field. Consider DRAMs (dynamic random access memories), microprocessors, microcontrollers, and FPGAs (field programmable gate arrays), for example. When those devices first appeared, there were many who expressed doubts. No longer. It’s possible that Brainchip has developed yet another breakthrough that could rank with those previous innovations. Time will tell.”

My opinion only DYOR
FF

AKIDA BALLISTA
 
  • Like
  • Love
  • Fire
Reactions: 46 users

equanimous

Norse clairvoyant shapeshifter goddess
This extracted paragraph is telling the semiconductor world that Brainchip is unleashing a Black Swan event and they need to be on the right side of history:

“In Tirias Research’s opinion, it’s not the path taken to the result that’s important, it’s the result that counts. If Brainchip’s Akida event-based platform succeeds, it won’t be the first time that a radical new silicon technology has swept the field. Consider DRAMs (dynamic random access memories), microprocessors, microcontrollers, and FPGAs (field programmable gate arrays), for example. When those devices first appeared, there were many who expressed doubts. No longer. It’s possible that Brainchip has developed yet another breakthrough that could rank with those previous innovations. Time will tell.”

My opinion only DYOR
FF

AKIDA BALLISTA
Talking about black swans, how does this person get involved with 2? Keep an eye out for where he goes next.

Screenshot_20230311_210725_Brave.jpg
 
  • Haha
  • Wow
  • Like
Reactions: 20 users

Taproot

Regular
All I saw was products that needed big-arse fans attached to them to keep them cool. 😂😂
GPT-3 used 10,000 Nvidia V100 GPUs!
ChatGPT - the estimate is 4,480 Nvidia A100 GPUs (at $200,000 each, that's $896,000,000).
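(A quick check of those numbers; the GPU count and unit price are the poster's estimates, not official figures.)

```python
# Quick check of the hardware estimate quoted above: 4,480 Nvidia A100 GPUs at USD 200,000
# each. Both figures are the poster's estimates, not official numbers.

a100_count = 4_480
cost_per_a100_usd = 200_000

total_usd = a100_count * cost_per_a100_usd
print(f"Estimated GPU spend: ${total_usd:,}")   # $896,000,000
```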

 
  • Wow
  • Haha
  • Like
Reactions: 13 users

chapman89

Founding Member
This extracted paragraph is telling the semiconductor world that Brainchip is unleashing a Black Swan event and they need to be on the right side of history:

“In Tirias Research’s opinion, it’s not the path taken to the result that’s important, it’s the result that counts. If Brainchip’s Akida event-based platform succeeds, it won’t be the first time that a radical new silicon technology has swept the field. Consider DRAMs (dynamic random access memories), microprocessors, microcontrollers, and FPGAs (field programmable gate arrays), for example. When those devices first appeared, there were many who expressed doubts. No longer. It’s possible that Brainchip has developed yet another breakthrough that could rank with those previous innovations. Time will tell.”

My opinion only DYOR
FF

AKIDA BALLISTA
I’ll never get over that quote.
 
  • Like
  • Love
  • Fire
Reactions: 24 users
It’s been noted before but TATA and Renesas!

Both have more than a passing interest in Brainchip.

Not a small market either!

1678532534461.png
 
  • Like
  • Fire
  • Love
Reactions: 26 users

Steve10

Regular

Shifting to an FPGA Data Center Future: How are FPGAs a Potential Solution?​

April 04, 2022 by Jake Hertz

As data centers are put under more pressure, EEs are looking at field-programmable gate arrays (FPGAs) as a potential solution. However, how could they be useful, and who is ramping up their research efforts?​


Today more than ever, the data center is being put under enormous strain. Between the increasing popularity of cloud computing, the high rate of data creation, and new compute-intensive applications like machine learning, our current data center infrastructures are being pushed to their limits.

To help ensure that the data center of the future will be able to keep up with these trends and continually improve performance, engineers are reimagining data center computing hardware altogether.

From this, one of the most important pieces of hardware for the data center is the FPGA.


A high-level overview of an FPGA. Image used courtesy of Stemmer Imaging



A recently announced center, the Intel/VMware Crossroads 3D-FPGA Academic Research Center, is hoping to spur the improvement of FPGA technology explicitly for data centers.

In this article, we’ll talk about the benefits of FPGAs for the data center and how the new research center plans to improve the technology even further.

A Shift to Accelerators​

There are currently two major trends in the data center that are driving the future of the field: an increase in data traffic and an increase in computationally-intensive applications.

The challenge here is that, not only must the data centers be able to handle increased data and tougher computations, but there is a greater demand to do this at lower power and higher performance than ever before.

To achieve this, engineers have shifted away from more general-purpose computing hardware, such as central processing units (CPUs) and graphics processing units (GPUs), and instead, employ hardware accelerators.



An example of heterogeneous architecture, which is becoming the norm in the data center. Image used courtesy of Zhang et al


Engineers can achieve higher performance and lower-power computation with application-specific computing blocks than was previously possible. To many, a heterogeneous computing architecture consisting of accelerators, GPUs, and CPUs is the widely accepted path forward for future data centers.


Benefits of FPGAs for the Data Center​

FPGAs are uniquely positioned to benefit the data center for several reasons.

First off, FPGAs are highly customizable, meaning that they can be configured for use as an application-specific hardware accelerator.

In the context of the data center, engineers can configure FPGAs for applications like machine learning, networking, or security. Due to their software-defined nature, FPGAs offer easier design flows and a shorter time to market for accelerators than an application-specific integrated circuit (ASIC).



An example diagram showing how FPGAs can be dynamically reconfigured. Image used courtesy of Wang et al


Secondly, FPGAs can offer the benefits of versatility. Since an FPGA's functionality can be defined purely by HDL code, a single FPGA can serve many purposes. This functionality could help reduce complexity and create uniformity in a system.

Instead of needing a variety of different hardened ASICs, a single FPGA can be configured and reconfigured for various applications, opening the door to further optimization of hardware resources.

Thus, some FPGAs can be reconfigured in real-time based on the application being run, meaning a single FPGA can serve as many roles as needed.
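As a purely conceptual illustration of that "one device, many roles" idea, the toy sketch below swaps different "bitstreams" (here just Python functions) into a single device object; no real FPGA tooling or vendor API is involved.

```python
# Toy model of the "one device, many roles" idea: a single device object is loaded with
# different "bitstreams" (here just Python callables) depending on the workload, standing
# in for the way one FPGA can be reprogrammed for networking, security or ML duties.
# Purely conceptual -- no real FPGA tooling or vendor API is involved.

from typing import Callable, Optional

class ReconfigurableDevice:
    def __init__(self) -> None:
        self._logic: Optional[Callable[[bytes], bytes]] = None

    def load_bitstream(self, logic: Callable[[bytes], bytes]) -> None:
        """Swap in a new configuration; the previous one is simply replaced."""
        self._logic = logic

    def run(self, data: bytes) -> bytes:
        if self._logic is None:
            raise RuntimeError("device not configured")
        return self._logic(data)

def packet_filter(data: bytes) -> bytes:
    """Stand-in 'networking' configuration: drop packets flagged DROP."""
    return b"" if data.startswith(b"DROP") else data

def checksum(data: bytes) -> bytes:
    """Stand-in 'security' configuration: 4-byte big-endian byte sum."""
    return (sum(data) & 0xFFFFFFFF).to_bytes(4, "big")

dev = ReconfigurableDevice()
dev.load_bitstream(packet_filter)      # configured as a packet filter
print(dev.run(b"DROP: noisy packet"))  # b''
dev.load_bitstream(checksum)           # reconfigured at "runtime"
print(dev.run(b"hello"))               # b'\x00\x00\x02\x14'
```

The point is only that the same hardware slot can take on different duties over time, which is what the dynamic reconfiguration described above provides in silicon.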



A 3D-FPGA Academic Research Center​

Recently, the Intel/VMware Crossroads 3D-FPGA Academic Research Center was announced as a multi-university effort to improve the future of FPGA technology.

The team, which consists of researchers from the University of Toronto, UT Austin, Carnegie Mellon, and more, focuses their efforts directly on the role of FPGAs in the data center. More specifically, the group will be investigating ways to achieve 3D integration within the framework of an FPGA.

The idea is that, by being able to stack multiple FPGA dies vertically, researchers should be able to achieve a higher transistor density while also balancing performance, power, and manufacturing costs.

Overall, the group hopes to use 3D-integration technology to create heterogeneous systems consisting of FPGAs and hardened logic accelerators, all within a single package. The technology will seek to combine a Network-on-Chip (NoC) in a layer beneath the traditional FPGA fabric such that the NoC can handle data routing while the FPGA provides the computation.

Overall, the group hopes to extend the rise of in-network computing into the server with their new technologies.
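As a purely conceptual sketch of that division of labour, the toy model below has a routing layer that only forwards packets to named compute "tiles" (plain Python functions standing in for regions of FPGA fabric); it does not model any real 3D-FPGA hardware.

```python
# Conceptual sketch of the split described above: a NoC layer that only routes packets,
# sitting beneath compute tiles that only do the work. The tiles here are plain Python
# functions; nothing about real 3D-FPGA hardware or any vendor toolchain is modelled.

from typing import Callable, Dict, Tuple

ComputeTile = Callable[[bytes], bytes]

class NoC:
    """Routes each packet to the tile named in its header; does no computation itself."""
    def __init__(self, tiles: Dict[str, ComputeTile]) -> None:
        self.tiles = tiles

    def route(self, packet: Tuple[str, bytes]) -> bytes:
        dest, payload = packet
        return self.tiles[dest](payload)

tiles = {
    "crypto": lambda data: bytes(b ^ 0x5A for b in data),  # stand-in "encryption" tile
    "filter": lambda data: data.replace(b"\x00", b""),     # stand-in packet-scrubbing tile
}

noc = NoC(tiles)
print(noc.route(("crypto", b"hello")))
print(noc.route(("filter", b"a\x00b\x00c")))
```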



FPGAs for Future Data Centers​

The FPGA will undoubtedly become a key player as the data center trends towards more data and more intensive computation.

With a new research group hoping to bolster the technology, it seems even more apparent now than ever that FPGAs are becoming a mainstay in the data center industry.






Amazon’s Xilinx FPGA Cloud: Why This May Be A Significant Milestone​

/ AI and Machine Learning, CPU GPU DSP FPGA, Semiconductor / By Karl Freund

Datacenters, especially the really big guys known as the Super 7 (Alibaba, Amazon, Baidu, Facebook, Google, Microsoft and Tencent), are experiencing significant growth in key workloads that require more performance than can be squeezed out of even the fastest CPUs. Applications such as Deep Neural Networks (DNNs) for Artificial Intelligence (AI), complex data analytics, 4K live streaming video and advanced networking and security features are increasingly being offloaded to super-fast accelerators that can provide 10X or more the performance of a CPU. NVIDIA GPUs in particular have benefited enormously from the training portion of machine learning, reporting 193% Y/Y growth last quarter in their datacenter segment, which is now approaching a $1B run-rate business.



But GPUs aren’t the only acceleration game in town. Microsoft has recently announced that Field Programmable Gate Array (FPGA) accelerators have become pervasive in their datacenters. Soon after, Xilinx announced that Baidu is using their devices to accelerate machine learning applied to speech processing and autonomous vehicles. And Xilinx announced last month a ‘reconfigurable acceleration stack’ that reduces the time to market for FPGA solutions with libraries, tools, frameworks and OpenStack support for several datacenter workloads. Now Amazon has announced the addition of Xilinx FPGAs to their cloud services, signaling that the company may be seeing market demand for access to this once-obscure style of chip for parallel processing. This announcement may be a significant milestone for FPGAs in general, and Xilinx in particular.

What did Amazon Announce?

Amazon is not the first company to offer FPGA cloud services, but they are one of the largest. Microsoft uses them internally but does not yet offer them as a service to their Azure customers. Amazon, on the other hand, built custom servers to enable them to offer new public F1 Elastic Cloud instances supporting up to eight 16nm Xilinx Ultrascale+ FPGAs per instance. Initially offered as a developer’s platform, these instances can target the experienced FPGA community. Amazon did not discuss the availability of high-level tools such as OpenCL or the Xilinx reconfigurable acceleration stack. Adding these capabilities could open up a larger market for early adopters and developers. However, I would expect Amazon to expand their offering in the future, otherwise I doubt they would have gone to all the expense and effort to design and build their own customized, scalable servers.

Why this announcement may be significant

First and foremost, this deal with the world’s largest cloud provider is a major design win for Xilinx over their archrival Altera, acquired last year by Intel, as Altera was named as Microsoft’s supplier for their FPGA enhanced servers. At the time of the Altera acquisition, Intel had predicted that over one third of cloud compute nodes would deploy FPGA accelerators by 2020. Now it looks like Xilinx is poised to benefit from the market’s expected growth, in part since Xilinx appears to enjoy at least a year lead in manufacturing technology over Altera with Xilinx’s new 16nm FinFET generation silicon, which is now shipping in volume production. Xilinx has also focused on providing highly scalable solutions, with support for PCIe and other capabilities such as the CCIX interconnect. Altera, on the other hand, has been focusing on integration into Intel, including the development of an integrated multichip module pairing up one low-end FPGA with a Xeon processor. Surely, Intel wants to drag as much Xeon revenue along with each FPGA as possible. While this approach has distinct advantages for some lower end applications (primarily through faster communications and lower costs), it is not ideal for applications requiring accelerator pooling, where multiple accelerators are attached to a single CPU.

Second, as I mentioned above, Amazon didn’t just throw a bunch of FPGA cards into PCIe servers and call it a day; they designed a custom server with a fabric of pooled accelerators that interconnects up to 8 FPGAs. This allows the chips to share memory and improves bandwidth and latency for inter-chip communication. That tells us that Amazon may be seeing customer demand for significant scaling for applications such as inference engines for Deep Learning and other workloads.

Finally, Amazon must be seeing demand from developers across a broader market than the typical suspects on the list of the Super 7. After all, those massive companies possess the bench strength and wherewithal to buy and build their own FPGA equipped servers and would be unlikely to come to their competitor for services. Amazon named an impressive list of companies endorsing the new F1 instance, spanning a surprising breadth of applications and workloads.

Where do we go from here?

The growing market for datacenter accelerators will be large enough to lift a lot of boats, not just GPUs, and Xilinx appears to be well positioned to benefit from this trend. It will now be important to see more specific customer examples and quantified benefits in order to gauge whether the FPGA is going mainstream or remains a relatively small niche. We also hope to see more support from Amazon for the toolsets needed to make these fast chips easier to use by a larger market. This includes support for application developers to use their framework of choice (e.g, Caffe, FFMPEG) with a simple compile option to target the FPGA, a goal of the recently introduced Xilinx acceleration stack.

 
  • Like
  • Love
  • Wow
Reactions: 18 users

White Horse

Regular
This is the latest lecture presented by Katina Michael of ASU.

BrainChip Inc: AI Accelerator Program - Introduction to MetaTF​

With Brainchip's Todd Vierra and Nikunj Kotecha.



Happy viewing.
 
  • Like
  • Love
  • Fire
Reactions: 34 users

[Quoting the "Shifting to an FPGA Data Center Future" and "Amazon's Xilinx FPGA Cloud" articles posted above.]

From an article posted by @FrederickSchack

“A benchmark test using another video object recognition test resulted in a system that could process a 1382x512p video at 30 fps using less than 75 mW of power, in a 16 nm silicon design. This needed 50x fewer parameters and 5x fewer operations than the Resnet50 reference design.”
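As a rough sanity check on that figure, dividing the quoted power budget across the quoted frame rate and resolution gives the per-frame and per-pixel energy (only the 75 mW, 30 fps and 1382x512 numbers come from the quote; the rest is simple arithmetic):

```python
# Rough sanity check on the quoted benchmark: under 75 mW while processing 1382x512 video
# at 30 fps (both figures taken from the quote above; everything else is arithmetic).

power_w = 0.075            # 75 mW upper bound
fps = 30
pixels_per_frame = 1382 * 512

energy_per_frame_mj = power_w / fps * 1_000                        # millijoules per frame
energy_per_pixel_nj = power_w / (fps * pixels_per_frame) * 1e9     # nanojoules per pixel

print(f"~{energy_per_frame_mj:.1f} mJ per frame")   # ~2.5 mJ
print(f"~{energy_per_pixel_nj:.1f} nJ per pixel")   # ~3.5 nJ
```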

Then from the above:

“Xilinx appears to enjoy at least a year lead in manufacturing technology over Altera with Xilinx’s new 16nm FinFET generation silicon, which is now shipping in volume production. Xilinx has also focused on providing highly scalable solutions”

It is a fact that Brainchip has never mentioned 16nm at any point until the above interview.

It is a fact that Brainchip has had a long-standing association with Xilinx.

My opinion only DYOR
FF

AKIDA BALLISTA
 
  • Like
  • Love
  • Thinking
Reactions: 36 users

Diogenese

Top 20

[Quoting the "Shifting to an FPGA Data Center Future" and "Amazon's Xilinx FPGA Cloud" articles posted above.]

Funnily enough, Akida is field programmable - it could be described as an FPNN.

"FPGAs are highly customizable, meaning that they can be configured for use as an application-specific hardware accelerator.'

I doubt that you could efficiently replicate Akida with FPGAs. I'm not up to speed on FPGAs, but I'm guessing that communication between the individual logic blocks of an FPGA would be synchronous (to avoid data collisions on the bus), whereas Akida is asynchronous in its inter-NPU communication. From the data transport point of view, synchronous is sort of like waiting for the lift instead of taking the escalator. Compared with Akida, this brings penalties in terms of time and power consumption.
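To make the lift-versus-escalator analogy concrete, here is a deliberately simplified toy model: events arriving at random times either wait for the next clock edge (a synchronous bus, assumed here to run at 100 MHz purely for illustration) or are delivered the moment they occur (asynchronous, event-driven). It is not a model of any real FPGA or Akida interconnect.

```python
# Deliberately simplified latency model of the lift-vs-escalator analogy: messages arriving
# at random times either wait for the next clock edge (synchronous bus) or are delivered
# immediately (asynchronous, event-driven). The 100 MHz clock is an illustrative assumption,
# and nothing here models a real FPGA or Akida interconnect.

import random

random.seed(0)
CLOCK_PERIOD_NS = 10.0   # assumed 100 MHz synchronous bus clock
N_EVENTS = 100_000

def wait_for_clock_edge(t_ns: float) -> float:
    """Extra time an event arriving at t_ns spends waiting for the next clock edge."""
    return (CLOCK_PERIOD_NS - t_ns % CLOCK_PERIOD_NS) % CLOCK_PERIOD_NS

arrivals = [random.uniform(0.0, 1_000_000.0) for _ in range(N_EVENTS)]
avg_sync_wait = sum(wait_for_clock_edge(t) for t in arrivals) / N_EVENTS

print(f"Synchronous bus: ~{avg_sync_wait:.2f} ns added wait per event (about half a clock period)")
print("Asynchronous delivery: ~0 ns added wait (the event itself triggers the transfer)")
```

On average the synchronous case adds roughly half a clock period of latency per transfer, which is the flavour of the time penalty referred to above.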

"Amazon didn’t just throw a bunch of FPGA cards into PCIe servers and call it a day; they designed a custom server with a fabric of pooled accelerators that interconnects up to 8 FPGAs. This allows the chips to share memory and improves bandwidth and latency for inter-chip communication."

If I had Xilinx shares, longer term I'd be looking at having a saver on BrainChip.
 
  • Like
  • Fire
Reactions: 20 users

Hrdwk

Regular
I have a brother whom I got to invest in BrainChip, and a few months later he tells me that he wished he had never invested in BrainChip. When I asked him why, the reply was that someone told him that the BrainChip was to be implanted in the brain, and he didn't want to know about that. So I had to explain to him that this was not the BrainChip that does that, and that whoever told him so was getting confused with Mr Elon Musk. I must admit my brother is completely illiterate when it comes to computers and anything electronic.
He once asked a friend to fax some plans to a client; when the friend put the plans into the fax machine and it started to swallow them to make the copy, my brother ran over and started trying to rip the paper out, yelling "that's the only copy I have, I don't have another one, stop the machine, stop the machine", thinking he was not going to get the plans back.
Anyway, hope I have not bored you with this little story.
I have a similar story! I was talking to a guy at work last week, trying to explain why I had invested in BrainChip, and he also goes "ah yeah, Elon is putting those chips in people's heads" 🤦🏻‍♂️🤦🏻‍♂️🤦🏻‍♂️. I quickly said no WANCA, it's a nuromophic (yes, I know the correct spelling) chip, and politely changed topics.

Maybe the company name should be changed to Akida or Ubiquitous.Ai, as it seems every second person thinks of Neuralink.
 
  • Haha
  • Like
Reactions: 14 users