BRN Discussion Ongoing

Tuliptrader

Regular
Hi TT,

I know very little about transformers, but one thing stands out: the size of the "model library". That would be too large to implement on an edge device like Akida. SWF's article refers to Cerebras' wafer-scale SoC, which could be up to 30 cm in diameter.


However, Wajahat Qadeer (Kinara - https://kinara.ai/ ) apparently believes the models can be compressed enough to make them practicable at the edge, even in mobile phones.

"There are ways to reduce the size of transformers so that inference can be run in edge devices, Qadeer said. “For deployment on the edge, large models can be reduced in size through techniques such as student-teacher training to create lightweight transformers optimized for edge devices,” he said, citing MobileBert as an example. “Further size reductions are possible by isolating the functionality that pertains to the deployment use cases and only training students for that use case.”

In the student-teacher method for training neural networks, a smaller student network is trained to reproduce the outputs of the teacher network.
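The student-teacher idea (knowledge distillation) can be sketched in a few lines. The temperature value and the three-class logits below are illustrative assumptions, not anything from Kinara or MobileBert:

```python
import math

def softmax(logits, temperature=1.0):
    # Soften the logits: a higher temperature spreads probability mass
    # across classes, exposing the teacher's relative preferences.
    scaled = [z / temperature for z in logits]
    m = max(scaled)
    exps = [math.exp(z - m) for z in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(student_logits, teacher_logits, temperature=4.0):
    # KL divergence between the teacher's and student's softened
    # distributions; minimizing it trains the student to reproduce
    # the teacher's outputs.
    p = softmax(teacher_logits, temperature)
    q = softmax(student_logits, temperature)
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

# Identical logits give zero loss; mismatched logits give a positive loss.
print(distillation_loss([2.0, 1.0, 0.1], [2.0, 1.0, 0.1]))  # 0.0
print(distillation_loss([0.0, 0.0, 3.0], [3.0, 0.0, 0.0]) > 0)  # True
```

In practice the student is also trained on the true labels, with this distillation term added so the small network inherits the large one's behaviour.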

Techniques like this can bring transformer-powered NLP to applications like smart-home assistants, where consumer privacy dictates that data doesn’t enter the cloud. Smartphones are another key application, Qadeer said.

“In the second generation of our chips, we have specially enhanced our efficiency for pure matrix-matrix multiplications; have significantly increased our memory bandwidth, both internal and external; and have also added extensive vector support for floating-point operations to accelerate activations and operations that may require higher precision,” he added."

Given that Kinara uses matrix multiplication and floating-point operations, I think that N-of-M coding could improve on the Kinara model.
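For anyone unfamiliar, N-of-M coding keeps only the N strongest of M values, so downstream layers can skip most of the multiply-accumulate work. A minimal sketch (the activation values are made up for illustration):

```python
def n_of_m_code(activations, n):
    # Keep only the indices of the N largest of M activations;
    # everything else is treated as zero, so downstream layers
    # never touch those multiply-accumulates at all.
    ranked = sorted(range(len(activations)),
                    key=lambda i: activations[i], reverse=True)
    return sorted(ranked[:n])

acts = [0.1, 3.2, 0.0, 1.7, 0.4, 2.9, 0.2, 0.05]  # M = 8 activations
print(n_of_m_code(acts, n=2))  # indices of the 2 strongest: [1, 5]
```

With N much smaller than M, the code transmits a handful of indices instead of a dense vector, which is where the compute and bandwidth savings come from.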

This is a patent application by Qadeer:

US2021174172A1 METHOD FOR AUTOMATIC HYBRID QUANTIZATION OF DEEP ARTIFICIAL NEURAL NETWORKS



The method includes, for each floating-point layer in a set of floating-point layers: calculating a set of input activations and a set of output activations of the floating-point layer; converting the floating-point layer to a low-bit-width layer; calculating a set of low-bit-width output activations based on the set of input activations; and calculating a per-layer deviation statistic of the low-bit-width layer. The method also includes ordering the set of low-bit-width layers based on the per-layer deviation statistic of each low-bit-width layer. The method additionally includes, while a loss-of-accuracy threshold exceeds the accuracy of the quantized network: converting a floating-point layer represented by the low-bit-width layer to a high-bit-width layer; replacing the low-bit-width layer with the high-bit-width layer in the quantized network; updating the accuracy of the quantized network; and, in response to the accuracy of the quantized network exceeding the loss-of-accuracy threshold, returning the quantized network.
Thank you, @Diogenese, for your response and for all the other responses you give so often and so selflessly.

TT
 
  • Like
  • Love
  • Fire
Reactions: 19 users

ndefries

Regular
On the addition of LSTM and Transformer neural network solutions:

Very likely these flexible solutions, which BrainChip can produce, will make "sealing the deals" a whole lot easier once it becomes official.

I would assume any company innovating in edge SNN solutions, spending millions, would take a lot of comfort in knowing that BrainChip can handle many solutions on an ongoing basis. They would see that BrainChip's innovation never stops!

As Mercedes said, the new cutting-edge AI/ML solutions will roll out in phases over the next few years.

Oh my, the number of use cases from adding on LSTM and Transformer neural networks will be even more endless. 🌪️
The great thing is that just one kicking off is huge, and just the rocket needed to never see below $1 ever again.
 
  • Like
  • Fire
  • Love
Reactions: 14 users

Diogenese

Top 20
Love it
“@Neuromorphic technologies make efficient onboard AI possible. In a recent collaboration with an automotive client, we demonstrated that spiking neural networks running on a neuromorphic processor can recognize simple voice commands up to 0.2 seconds faster than a commonly used embedded GPU accelerator, while using up to a thousand times less power. This brings truly intelligent, low latency interactions into play, at the edge, even within the power-limited constraints of a parked vehicle.”
Hi MA,

That quote is based on Intel Loihi -
Edit ... no! no! no! not Loihi, what's that thing that spells Kapoho Bay?
 
Last edited:
  • Like
Reactions: 4 users

BaconLover

Founding Member
Transcript of today's podcast by Sean with Jean.

Disclosure: At times I found it hard to understand a few words by Jean, so there will be some errors in the transcript, but I have made every effort to stay aligned with the message and context.
I have omitted a few filler words to make typing easier.
Some parts of the introduction and conclusion are omitted, because I couldn't care less :cautious:😂🤣
Hopefully I haven't missed much in between; if you find any grave errors, feel free to speak up.

Anyways, here's Sean and Jean.





Sean: Why don’t you start by taking a moment to position Accenture for viewers who may not be familiar with the firm, and tell us a little bit about yourself?

Jean: Sure. As you mentioned, we are the largest systems integrator out there; we are now over 700,000 people strong, which is the size of a small city, and we focus on helping our customers through their digital transformation journey. Of course, they are not all at the same stage: some are beginning, some are in the middle of it, some are tuning it. But what we do is really help people harness the best that technology can give in order to transform their business and make them more efficient.

We like to say, and it is very true, that we love to find net new value in our customers' business. So it is all about what else I can do to keep the customers I have happy and capture new customers, but all of that not by selling tech for tech but really selling tech for value, and AI is a good example of the type of thing that we do. As far as AI is concerned, we are a group of about 25k people now, and our extended network is much bigger than that, probably around 60 to 80k people we can draw upon, because not everything in AI is about algorithms; there is all the change management associated with AI, of course. But that is about the size of our group. We do everything in AI: the classic, I would say advanced analytic, type of work, and the more modern algorithmic approaches that AI has brought to light, and we do a lot of automation also, so all of that is under that umbrella. And the blood of AI is data, so of course we do a lot of work around data, and we in fact have a sister group whose whole focus is dealing with data in order to make sure AI is going to do the right thing. Last thing, from a customer point of view: we serve the Global 2000. This is really where we are focused, and we do that with a very strong industry angle, meaning we are very proud of the fact that we serve about 20 large industries, and we have groups that really understand the business of our customers. That is how you do good business: when you understand your customers' business, you can really empathize with their challenges and help them get where they want to go.



Sean: That’s great. With that kind of customer base, we know the requirements and demands are very high and very unrelenting, so we know the quality of work Accenture does to meet that customer base. Having said that, share a little bit about your current thinking and your view of AI in general and the state of AI.

Jean: Well, AI is finally out of those dreadful winters. I would say that started many moons ago; when I got out of school, AI existed, but it was just very hard to do. Now it has come out to play for real within business, starting about five years ago, really, when we were able to find applications of AI that were what I call pragmatic AI. This was not about the sexy side of AI, or, you know, finding cats on YouTube and that kind of stuff, but about how we can use the various techniques AI has in its toolbox to transform the business in ways that are very tangible right now. I sometimes use the example of chatbots. Chatbots have been around for a long time; in our common days at HP I think we met a few of those, but they tended to be fairly dumb to a certain extent: behind the scenes, the system would try to match a few words in your question against some document and give whatever it would call an answer, without really understanding what the intent of your question was and what the right thing to answer was. But today, chatbots are, I am not going to say human-like yet, but they are getting there, and it is really a pleasant experience, as opposed to something where you knew you were not going to get an answer anyway and were just going through the motions. So that is what I call pragmatic AI, and there are other examples, hyper-personalisation for instance, which is very important in the market. So we are in an era of pragmatic AI, where we see significant improvement every day that can really be used by clients in those digital transformations that I spoke about.



Sean: How about some comments on the state of edge AI, where BRN is focused? What is going on from your point of view on edge AI?

Jean: I think you are in the right place, because edge AI is now sort of emerging from darkness, starting to be used in some very specific domains. It is not yet everywhere, but it is certainly getting some action today, and it is where, in the long term, most of the AI will probably sit. In fact, a lot of people think AI demands massive infrastructure behind the scenes, which is not untrue; there are still some aspects of AI that are very demanding in terms of computing power and will require that customers, through partners like cloud providers or others, have significant infrastructure to address the problem. But thanks to the progress being made on the silicon side, AI can now move to the edge. There are still constraints at the edge, such as power, which is always a challenge, and space, in terms of the real estate you have available. However, I think companies like yours are really taking that into account, because they have done the hard work and are now bringing the power of AI with a full understanding of those constraints. If you put something in a satellite, for example, you have to make sure you know every bit of power being used, because it is limited. So edge AI is certainly a very exciting domain, and 2023-2024 is going to be really the time when we see a lot more of it. I mean, you are already successful, others are successful, but I think we are going to see the next wave of deployment of this technology on the edge.



Sean: That’s great, and it's certainly consistent with our interactions with our customers: in '23 and '24, the interest level is very high right now. Your comments were also a great set-up for the next question I was going to ask you. We talked about the power element at the edge; what are your thoughts on neuromorphic, which of course is the basis of our tech? How do you see neuromorphic, and what do you see in the future? Do you see the direction for a lot of AI being neuromorphic?


Jean: Yes. The reason is that the approach of neuromorphic AI is not to try to bend the business of AI to what the classic techniques are doing. I never believed you should try to mould your customer around the technology you have; rather, you should have technology that can embrace the needs of customers. In this case, when we talk about AI, we are talking about some form of emulation of, or aspiring to emulate, how the human brain functions. A lot of these exercises to build a digital human brain have been going on for at least ten years; they were all about trying to make a mirror copy of it, which is kind of an engineering approach, right: I am going to break down how the brain functions and try to rebuild the pieces that make up the brain, the neurons of course. But those techniques have proven to be unsuccessful and not scalable, which is really important, and that to me was about trying to make literally a computer version of your brain. I don't think that is what neuromorphic is doing. The effort I see in the neuromorphic world is to understand how the brain functions, so take a cognitive view of the world, and then mould the silicon and the techniques you are going to use to try to achieve the same goal the brain achieves when it makes a decision, for instance. So I am a big believer that neuromorphic tech in general is going to be a significant part of the future of AI.



Sean: I couldn’t agree with you more on that, because that's exactly how we view it. We are certainly not trying to copy the brain, and usually when I run into people who don't understand the tech, I explain that what we do is take the best part of the brain, which is a very efficient computation machine, and use that. That's all we are doing: inspired by the brain, taking those principles to get a lot more done, break through the von Neumann bottleneck, and just get things done with less power. It's just that simple, and it allows you to apply that to your use cases: not mimicking the brain, but using the principles of the brain. So I agree 100 percent with that. Well, let's shift gears a little bit and talk about what's going on with models. I know you think a lot about that. What are the latest trends in models, and what's new with transformers? Give us a couple of comments about that kind of stuff.

Jean: Yes, it's a very interesting domain we are in now. For full disclosure, I am a big fan of Chris Ré at Stanford, who to me really has one of the best visions of the evolution of AI today, and of the elimination of what we might call model sprawl. In the world of AI, it is very tempting for data scientists to create a model for everything: every problem they have, they want to create a model, right? And there is also a lot of duplication, because there is not enough sharing happening in this domain, but that is another subject. So we create way too many models in industry, and these models are all highly tuned to very specific use-case scenarios. But this is dramatically changing: there has been an emergence of new techniques and models called transformers. Of course, we call them supermodels; we are engineers, so we are going to have some form of humour. And the supermodels are just better at doing the work than a bunch of bespoke little ones, which are very focused on the type of information they process. So we see a lot of transformer activity in the tech space, because the first barrier to break down has been natural language processing, and that is where transformers have made literally significant progress. I think, to quote Dr Ré, Google Translate by itself was 500,000 lines of code five years ago; today it is 500 lines of code by using transformers. So these transformers were born in the tech space; you will hear names like GPT-3, which is very popular around here. What these supermodels do is have the capability to understand pretty much everything in their domain, and they just need to be tuned when there is specificity. Say they speak English, but you can tune them for an industry that has a specific dialect, and that is all you have to do; once you tune in the dialect, it will understand it is about interpreting text in that industry. We see that happening in video and in voice as well.
So the future is fewer bespoke models and more transformers, and then a shift in the personality of the data scientist using AI. Instead of being people you picture in a white lab coat, writing mathematical formulas on a blackboard, data scientists will wear a hard hat, if they are in the gas industry, meaning they will be very industry-savvy. They will be able to tune these transformers and select the right data in order to solve the problems in their industry, and we are going to see that across all industries, so there will be a shift in the profile of the data scientist. We are also seeing it today, because data is the fuel for AI: we will see an emergence of a massive number of data engineers, people who can prep data to have it ready to feed to the transformers to then solve the problem of the client. It is a fascinating domain, in full evolution, and I think we are still at the beginning, so more innovation is going to happen as the days go by.



Sean: That's great; we might have to come back and talk about that at a later date. But let's get ready to close. I want to ask you: I know you usually put out predictions for the year, and I would love to hear your thoughts. Have you got a prediction or two?

Jean: I will give you one or two; I have to write 20, and I am not there yet. But on the side of data, I really want to send the message that if you don't have good data, you won't get good AI. It is just really key, and unfortunately for a lot of our clients, or all clients in general, not just ours, their number one problem is data. And that is just because of the accumulation over the years of systems of record, different legacy systems essentially, that have created massive data silos. So there is an approach to managing data, which is a strategy, no longer a technology, which is very important: data mesh. Data mesh is a way you can break down the information silos you have in your enterprise without removing ownership from the owners of the silos, because they are the business owners and understand the application, while making that data available to whoever needs it within the enterprise itself. A data mesh strategy, and we cannot say it enough to our customers, is not a technology; it is an operating strategy. Data mesh, supported by things called data products and metadata stores and governance, is going to catch on fire in 2023, '24, '25.

Because we have to break down the silos and really enable AI to turn human knowledge into something that is understandable by computers. Knowledge is absolutely necessary in every single type of business application in today's world, and what I said five years ago, that every application would have not just a database and a UI but a database, a knowledge graph and a UI, is proving true. I can see that being deployed very well, so that is my prediction; it is going to keep on going. I promise I will find some more; 18 are missing.
 
Last edited:
  • Like
  • Love
  • Fire
Reactions: 92 users

Taproot

Regular
Jean: yes, it’s a very interesting domain we are in now for full disclosure I am a big fan of Chris Ray at Stanford which to me is really one of the best vision of the evolution of AI today



 
  • Like
  • Love
  • Fire
Reactions: 14 users

Diogenese

Top 20
Love it
“@Neuromorphic technologies make efficient onboard AI possible. In a recent collaboration with an automotive client, we demonstrated that spiking neural networks running on a neuromorphic processor can recognize simple voice commands up to 0.2 seconds faster than a commonly used embedded GPU accelerator, while using up to a thousand times less power. This brings truly intelligent, low latency interactions into play, at the edge, even within the power-limited constraints of a parked vehicle.”
Hi MA,

That quote is based on Intel Loihi -
 
  • Like
Reactions: 1 users

Moonshot

Regular
Liked by Anil


Imagine getting the performance of today’s supercomputers but drawing just a few hundred watts instead of megawatts. Or computer hardware that can run models of neurons, synapses, and high-level functions of the human brain. Or a flexible patch that could be worn on the skin that could detect serious health disorders before symptoms develop. Those are a few applications that could be enabled by neuromorphic computing.
Today’s high-performance computers have a von Neumann architecture, in which the central processing or graphics processing units (CPUs and GPUs) are separate from memory units, with the data and instructions kept in memory. That separation creates a bottleneck that slows throughput. Accessing data from main memory also consumes a considerable amount of energy.
In so-called neuromorphic systems, units known as neurons and synapses operate as both processors and memory. Just like neurons in the brain, artificial neurons only perform work when there is an input, or spike, to process. Neuromorphic systems are most often associated with machine learning and neural networks, but they can perform a variety of other computing applications.
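As a rough sketch of the event-driven behaviour described above (my own illustration, not from the article), here is a minimal leaky integrate-and-fire neuron in Python; all parameter values are made up for demonstration:

```python
# Illustrative sketch only: a minimal leaky integrate-and-fire (LIF) neuron,
# the basic unit most spiking neuromorphic chips implement in some form.
# The leak and threshold values here are arbitrary.

def lif_step(v, input_current, leak=0.9, threshold=1.0):
    """Advance one timestep; return (new_potential, spiked)."""
    v = v * leak + input_current   # integrate input, leak toward rest
    if v >= threshold:             # fire and reset when threshold crossed
        return 0.0, True
    return v, False

# The neuron does work only when driven: with zero input it decays silently.
v = 0.0
spikes = []
for t, i_in in enumerate([0.5, 0.5, 0.5, 0.0, 0.0]):
    v, fired = lif_step(v, i_in)
    if fired:
        spikes.append(t)
print(spikes)  # fires once, after enough input has accumulated
```

With zero input the neuron simply decays and never fires, which is the property that lets neuromorphic hardware sit idle, and burn almost nothing, when no spikes arrive.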
Just a handful of large-scale neuromorphic computers are in operation today. The Spiking Neural Network Architecture (SpiNNaker) system located at the University of Manchester in the UK has been operating since 2011 and now has 450 registered users, says Stephen Furber, the Manchester computer engineer who led the computer’s construction. The UK-government-funded 1-million-core platform was optimized to simulate neural networks.
A next-generation machine, dubbed SpiNNaker2, is under construction in Dresden, Germany. It’s being supported by the state of Saxony and by the Human Brain Project, the European Union’s decade-long flagship program, whose goal is advancing neuroscience, computing, and medicine. That program was initiated in 2013 and will end in March. (See Physics Today, December 2013, page 20.) Based on a more powerful SpiNNaker2 chip, the eponymous computer will consist of 10 million processors, each of which has 10 times the processing and storage capacity of the SpiNNaker chip, Furber says.
[Figure: Circuit boards containing 48 SpiNNaker neuromorphic chips are at the heart of a 1-million-core computer built at the University of Manchester for the Human Brain Project. The Technical University of Dresden, in collaboration with Manchester, is developing a 10-million-core machine built around the more powerful SpiNNaker2 chip. Credit: Stephen Furber/University of Manchester]
Targeted applications for SpiNNaker2 include remote learning, robotics interaction, autonomous driving, and real-time predictive maintenance for industry, says Christian Mayr, an electrical engineering professor at the Technical University of Dresden. Mayr coleads SpiNNaker2 with Furber. SpiNNcloud Systems, a spin-off company Mayr cofounded to commercialize neuromorphic technology, is in discussions to supply a neuromorphic system to a “large smart city” customer that he declined to identify.
Germany is host to another large-scale neuromorphic platform, BrainScaleS (brain-inspired multiscale computation in neuromorphic hybrid systems) at Heidelberg University. That project also began as a component of the Human Brain Project.
Working with Intel, Sandia National Laboratories plans to complete assembly this spring of a neuromorphic computer consisting of 1 billion neurons. The human brain is estimated to contain 80 billion neurons. “There’s a lot of reason to expect that we’ll be able to achieve more biological-like capabilities as we get to that scale,” says James Bradley Aimone, a Sandia computational neuroscientist.
Sandia has built a 128-million-neuron neuromorphic system, based on Intel’s Loihi chip. Each Loihi chip houses 131 000 neurons. The billion-neuron machine will be based on Intel’s Loihi 2 chips, which contain 1 million neurons each. (Loihi is named for an active underwater volcano in Hawaii.) The new machine is expected to draw less energy than a high-end workstation typically used for applications such as three-dimensional graphics, engineering design, and data science visualization, says Craig Vineyard, a Sandia researcher.
For Sandia, a nuclear weapons lab that hosts some of the world’s largest high-performance computer (HPC) assets, energy savings and Moore’s-law limitations are the main attractions of the neuromorphic approach. Conventional supercomputers are power hungry, and the potential to further scale their computational capacity is expected to be held in check by that growing appetite and the inability to further increase processor density, says Vineyard. The world’s first exascale HPC, for example, is expected to draw 40 MW, enough power to supply 30 000 homes and businesses, when it begins full operation at Oak Ridge National Laboratory this year. Exascale is at least 10^18 floating-point operations per second (FLOPS). “Things like neuromorphic offer a viable path forward, because we can’t just keep building larger and larger systems,” Vineyard says.
Energy-use comparisons between neuromorphic and classic supercomputing will vary depending on the application. But in some cases, a billion-neuron Loihi system should perform a petascale-equivalent calculation in the same amount of time for as little as 200 W. (Petascale is at least 10^15 FLOPS.) That’s a job for which the most power-efficient supercomputers require 20 kW, says Sandia’s Aimone.
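A quick back-of-envelope check of those figures (the numbers are the article's, the script is just illustrative): because both machines take the same wall-clock time on the job, the energy ratio equals the power ratio:

```python
# Back-of-envelope check of the article's energy comparison: a petascale-
# equivalent job taking the same wall-clock time on both machines.
JOB_HOURS = 1.0                # arbitrary duration; it cancels in the ratio

neuromorphic_watts = 200       # billion-neuron Loihi system (article's figure)
supercomputer_watts = 20_000   # most power-efficient HPC (article's figure)

e_neuro = neuromorphic_watts * JOB_HOURS / 1000   # energy in kWh
e_hpc = supercomputer_watts * JOB_HOURS / 1000    # energy in kWh

# Same runtime, so the energy ratio equals the power ratio: a 100x saving.
print(e_neuro, e_hpc, supercomputer_watts / neuromorphic_watts)
```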
Some of today’s very large deep-learning neural networks require hours just to train. Deep learning is a subfield of artificial intelligence (AI) that uses brain-inspired algorithms to help computers develop intelligence without explicit programming. The Generative Pre-trained Transformer 3 (GPT-3) language model, for example, can generate text that is difficult to distinguish from that of a human. Training it is estimated to require more than 1 GWh. Equal or greater amounts of energy are consumed when deep-learning models are put to use. The human brain, with its vastly greater computing capacity, operates on 20–30 W, says Mayr.
Unparalleled parallelism
Apart from energy savings, proponents of neuromorphic computing say it can offer equal or faster performance over classical HPCs for some applications. The number of processors in even the most powerful HPC machines pales in comparison with hundreds of millions of simulated neurons, though each neuron is far less computationally powerful than a GPU or CPU. Neuromorphic’s unrivaled level of parallelism is well suited for calculating certain kinds of algorithms, such as Monte Carlo random-walk simulations, says Aimone. Those algorithms are used in modeling molecular dynamics in drug discovery, stock-market predictions, weather forecasts, and a host of other applications.
“Is it possible to spread out this exploration of where a stock price may go over the large population of neurons? It turns out that you can,” Aimone says. Sandia demonstrated that a neuromorphic simulation of how radiation diffuses through materials performed on a Loihi system was nearly as fast as one accomplished on a CPU or GPU platform and at far lower energy cost.
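To illustrate the kind of workload Aimone describes (a toy sketch of the idea, not Sandia's actual algorithm): an ensemble of trivial random walkers, one per simple processing unit, whose aggregate statistics recover the diffusion behaviour:

```python
import random

# Illustrative sketch: a Monte Carlo random-walk ensemble, where each
# "walker" is a trivial unit taking +/-1 steps -- the massively parallel,
# simple-per-unit work the article describes spreading over many neurons.
random.seed(0)

def walk(steps):
    pos = 0
    for _ in range(steps):
        pos += random.choice((-1, 1))
    return pos

walkers = [walk(100) for _ in range(10_000)]  # one walker per "neuron"
mean_sq = sum(p * p for p in walkers) / len(walkers)
# For an unbiased walk, mean squared displacement ~ number of steps (100).
print(round(mean_sq))
```

Each unit's work is trivial; the value comes from running an enormous number of them at once, which is exactly where neuromorphic parallelism is claimed to pay off.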
“From the algorithm side, we’ve recognized that neuromorphic systems provide computational advantages, but that only becomes apparent at large scale,” says Aimone. That’s partly due to the overhead associated with setting up the machine to solve a new problem, adds Vineyard. “It’s not a magic solution, so research is identifying where those advantages are.”
Neuromorphic computers won’t pose a threat to HPCs, says Rick Stevens, associate director for computing, environmental, and life sciences at Argonne National Laboratory. “There are a handful of examples that have been demonstrated where you can do interesting problems. But it’s nowhere near a general-purpose platform that can replace a conventional supercomputer.” Neuromorphic hardware is particularly well suited to simulate computational neuroscience problems, he says, “because that’s the computational model it’s directly implementing.”
[Figure: Intel’s Loihi second-generation neuromorphic chip was unveiled in September 2021. With up to 1 million neurons per chip, it supports new classes of neuro-inspired algorithms and has 15 times the storage density, faster processing speed, and improved energy efficiency compared with the predecessor Loihi chip. Sandia National Laboratories is building a 1-billion-neuron neuromorphic computer based on the Loihi 2 architecture. Credit: Walden Kirsch/Intel Corp]
But for the hundreds of applications that general-purpose supercomputers can perform, the alternative hardware has yet to show it offers advantages or even works well, Stevens says. And there are companies building specialized accelerators for deep learning that can implement abstractions of neurons without making any claims about being neuromorphic.
Comparing neuromorphic computing power with that of HPCs is not straightforward. “They are different animals,” says Furber. Each of SpiNNaker’s processors, for instance, is capable of delivering 200 million instructions per second, so the million-core machine can deliver 200 trillion instructions per second. HPCs are measured in FLOPS, but the Manchester machine has no floating-point hardware, he says.
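Furber's figure is simply per-core throughput times core count; a trivial check (illustrative only):

```python
# Checking Furber's arithmetic: per-core instruction rate times core count.
mips_per_core = 200e6    # 200 million instructions per second per processor
cores = 1_000_000        # SpiNNaker's million-core machine

total = mips_per_core * cores
print(f"{total:.0e} instructions per second")  # 2e+14 = 200 trillion
```

Note the unit: instructions per second, not FLOPS, which is why the comparison with conventional HPC benchmarks is apples to oranges, as Furber says.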
“[Neuromorphic] is kind of similar to quantum in that it’s a technology waiting to prove itself at scale,” Stevens says. “Loihi is a great research project, but it’s not at the point where commercial groups are going to deploy large-scale versions to replace existing computing.”
At the opposite extreme from supercomputers, neuromorphic computers may benefit so-called edge-computing applications where energy conservation is a must. They include satellites, remote sensing stations, weather buoys, and visual monitors for intrusion detection. Instead of sending data on a regular clock cycle, spiking neuromorphic smart sensors would transmit only when something is detected or when a threshold value is crossed. “It should be smart about what it collects, what it transmits, and wake up if something is going on,” says Aimone. “That requires computation.” Adds Stevens, “You’re trying to go from a sensor input to some digital compact classification or representation of what that sensor is doing.”
[Figure: A skin-like sensor developed by Argonne National Laboratory and the University of Chicago features stretchable neuromorphic electronics. The technology could lead to precision medical sensors that would attach to the skin and perform health monitoring and diagnosis. Holding the device is the project’s principal investigator, Sihong Wang. Credit: UChicago’s PME]
In November, Argonne and the University of Chicago’s Pritzker School of Molecular Engineering announced the development of a skin-like wearable patch featuring flexible and stretchable neuromorphic circuitry. If developed further, such wearable electronics hold promise for detecting possible emerging health problems, such as heart disease, cancer, or multiple sclerosis, according to a lab press release. Devices might also perform a personalized analysis of tracked health data while minimizing the need for their wireless transmission.
In one test, the research team built an AI device and trained it to distinguish healthy electrocardiogram signals from four different signals indicating health problems. After training, the device was more than 95% effective at correctly identifying the electrocardiogram signals.
A tall order
In Europe, a major motivation for neuromorphic R&D has been to improve understanding of how the brain works, and that’s no small task. “First and foremost the goal is fundamental research to see if we can learn from biology a different way of computing, and if this alternative way of computing can help in neuroscience as a research platform,” says Johannes Schemmel, a Heidelberg University researcher who heads BrainScaleS.
“We have a very massive neural network in the brain, but it’s on the scale of 99% idle,” says John Paul Strachan, who leads the neuromorphic compute nodes subinstitute at the Peter Grünberg Institute at the Jülich Research Center in Germany.
SpiNNaker and Loihi systems are fully digital. But BrainScaleS is a hybrid: It has analog signals for emulating individual neurons and digital ones for communications among neurons. “We’ve developed electronic circuits from transistors that behave similar to neuron synapses in the biological brain,” says Schemmel. “They are all continuous analog quantities.”
At higher levels, however, “we use digital communication between the neurons, because in principle there is no real analog communication possible,” Schemmel says.
The brain is highly sparse: When a neuron fires in response to a stimulus, its signal is transmitted only to the thousands of other neurons it connects to, not to the billions of others in the brain. Sparsity is critical to brain function. “If all our neurons were firing and communicating, we’d heat up and die,” says Strachan. But working with sparse data isn’t an ideal fit for HPCs. “If the hardware has been designed to optimize for dense computations, it will be idle or doing a bunch of multiply-by-zero operations,” he says. That means that simulating one second of just a tiny portion of the brain on an HPC today requires minutes of processing time.
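A toy contrast of the two approaches Strachan describes (illustrative, with made-up numbers): dense hardware pays for every multiply-by-zero, while an event-driven scheme touches only the inputs that actually fired:

```python
# Illustrative contrast: dense dot product vs. event-driven accumulation
# that touches only the active ("spiking") inputs. Values are made up.

weights = [0.5, -1.0, 2.0, 0.25, 1.5, -0.5, 3.0, 0.75]
activity = [0, 0, 1, 0, 0, 0, 1, 0]   # only 2 of 8 units fired

# Dense hardware computes every product, mostly multiply-by-zero:
dense = sum(w * a for w, a in zip(weights, activity))

# Event-driven: only the indices that actually fired do any work:
events = [i for i, a in enumerate(activity) if a]   # the spike list
sparse = sum(weights[i] for i in events)

print(dense, sparse, len(events))   # same answer, 2 operations instead of 8
```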
The brain has various mechanisms to keep processing to the absolute minimum required, says Mayr, “but AI networks do a lot of irrelevant stuff. Take a video task, where every new frame of a video only contains maybe 2–3% new information, and even that can be compressed. All the rest is rubbish; you don’t need it. But with a conventional AI HPC chip-based approach, you have to compute all of it.”
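Mayr's video example can be sketched the same way (toy illustration, made-up pixel values): diff consecutive frames and hand downstream processing only the changes:

```python
# Toy sketch of the video point above: between consecutive frames only a
# few pixels change, so an event-driven system processes the difference.

frame_a = [10, 10, 10, 10, 50, 50, 50, 50]
frame_b = [10, 10, 10, 10, 50, 55, 50, 50]   # one pixel changed

events = [(i, b - a)
          for i, (a, b) in enumerate(zip(frame_a, frame_b))
          if a != b]
fraction_new = len(events) / len(frame_a)

print(events, fraction_new)   # one (index, delta) event; 7 of 8 pixels skipped
```

This is the temporal analogue of the spatial sparsity above, and it is what event-based vision sensors exploit in hardware.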
“One of the biggest research tasks in computational neuroscience is to merge the function of the brain with what works in AI. This is the holy grail,” says Schemmel. Ideally, AI training would use localized learning rules that work at the level of neurons and synapses. Such rules could also permit robots to learn without needing to be uplinked to computers.
Limited funding is a restraint on the mostly academic field. Hardware is expensive to build, and it’s not surprising that Intel has the most advanced neuromorphic chips, says Schemmel. Algorithms are needed, and their development lags the state of hardware. “We now have large-scale platforms that can support spiking networks at bigger scales than most people can work out what’s useful to do with them,” says Furber.
Interchangeability is another issue. “We haven’t come up yet with a neat software framework that the neuromorphic guys can all subscribe to,” says Mayr. “We need to standardize a lot more.”
Yet perhaps the greatest limitation on R&D is a paucity of trained scientists, particularly computational neuroscientists. “Students have to have knowledge on a lot of different levels. You need longer to train; you can’t compartmentalize problems like you can do in software engineering nowadays,” says Schemmel.

© 2023 American Institute of Physics.
 
  • Like
  • Fire
  • Love
Reactions: 22 users

Taproot

Regular
Transcript of today's podcast by Sean with Jean.

Disclosure: At times I found it hard to understand a few words by Jean, so there will be some errors in the transcript, but I have made every effort to stay aligned with the message and context.
I have omitted a few filler words, to make it easier while I was typing.
Some parts of introduction and conclusion are omitted, because I couldn't care less :cautious:😂🤣
Hopefully I haven't missed much in between, if you find any grave errors, feel free to speak up.

Anyways, here's Sean and Jean.





Sean: Why don’t you start by taking a moment to position Accenture for the viewers who may not be familiar with their firm and a little bit about yourself?

Jean: Sure, so as you mentioned we are the largest systems integrator out there. We are now over 700,000 people strong, which is the size of a small city, and we focus on helping our customers through their digital transformation journey. Of course, they are not all at the same stage: some are beginning, some are in the middle of it, some are tuning it. But what we do is really help our customers harness the best that technology can give in order to transform their business and make them more efficient.

We like to say, and it is very true, that we love to find net new value in our customers' business. So it is all about what else can I do to keep the customers I have happy and capture new customers, but all of that not by selling tech for tech's sake but really selling tech for value, and AI is a good example of the type of thing that we do. As far as AI is concerned, we are a group of about 25k people now, and the extended network is much bigger than that, probably around 60 to 80k people that we can draw upon, because not everything in AI is about algorithms; there is all the change management associated with AI, of course. But that is about the size of our group, and we do a lot in AI: the classic, I would say advanced analytics type of work, the more modern type of algorithmic approach that AI has brought to light, and we do a lot of automation also, so all of that is under that umbrella. And the lifeblood of AI is data, so of course we do a lot of work around data, and we in fact have a sister group whose whole focus is dealing with data in order to make sure AI is going to do the right thing. The last thing I want to say, from a customer point of view, is that we serve the global 2000. This is really where we are focused, and we do that with a very strong industry angle, meaning that we are very proud of the fact that we serve about 20 large industries and we have groups that really understand the business of our customers. That is how you do good business: when you understand your customer's business you can really empathize with their challenges and help them get where they want to go.



Sean: That’s great, with that kind of customer base we know that the requirements and demands are very high and very unrelenting so we know the quality work that Accenture does to meet that customer base. Having said that share a little bit about your current thinking and your view about AI in general in the state of AI.

Jean: Well, AI is finally out of those dreadful winters. I would say that started many moons ago; when I got out of school AI existed but it was just very hard to do. Now it has come out to play for real within business, starting about five years ago really, when we were able to find applications of AI that were what I am calling pragmatic AI. This was not about the sexy, flashy side of AI, you know, finding cats on YouTube and all that kind of stuff, but more about how can I use the various techniques AI has in its toolbox to transform the business in ways that are very tangible right now. I sometimes use the example of chatbots. Chatbots have been around for a long time; in our common days at HP I think we met a few of those, but they tended to be, to a certain extent, very dumb. Behind the scenes the system would try to find the few words that matched some document and give whatever it would call an answer, but really not understanding what the intent of your question was and what the right thing to answer was. Today, chatbots are, I am not going to say human-like yet, but they are getting there, and it really is a pleasant experience, as opposed to something where you knew you were not going to get an answer anyway and were just going through the motions. So that is what I call pragmatic AI, and there are other examples, hyper-personalisation for instance, which is very important in marketing. So we are in an era of pragmatic AI where we see every day significant improvements that can really be used by clients in those digital transformations that I spoke about.



Sean: How about some comments on the state of edge AI, where BRN is focused? What is going on from your point of view on edge AI?

Jean: I think you are in the right place, because edge AI is now sort of emerging from darkness, starting to be used in some very specific domains. It is not yet everywhere, but it certainly is getting some action today, and in the long term it is where most of the AI will probably sit. In fact, a lot of people think AI demands massive infrastructure behind the scenes, which is not untrue; there are still aspects of AI that are very demanding in terms of computing power and will require customers to go through partners, like cloud providers or others who have significant infrastructure, to address the problem. But thanks to the progress being made on the silicon side, and other semiconductors you can use, not just generic ones, AI is now able to move to the edge. There are still constraints on the edge, such as power, which is always a challenge, and space, in terms of the real estate you have available. However, I think companies like yours really take that into account, bringing the power of AI with a full understanding of those constraints. If you put something in a satellite, for example, you have got to make sure you know every watt being used, because it is limited. So edge AI is certainly a very exciting domain, and 2023-2024 is going to be really the time when we see a lot more of it. Of course you are already successful and others are successful, but I think we are going to see the next stage of deployment of this technology on the edge.



Sean: That's great, and it is certainly consistent with our interactions with our customers: in '23 and '24 the interest level is very high right now. Your comments were also a great setup for the next question I was going to ask. We talked about power constraints on the edge; what are your thoughts on neuromorphic, which of course is the basis of our tech? How do you see neuromorphic, what do you see in its future, and do you see the direction for a lot of AI being neuromorphic?


Jean: Yes. The reason is that the approach of neuromorphic AI is not to try to bend the business of AI to what the classic techniques are doing. I never believed you should try to mould your customer around the technology you have; rather, you should have technology that can embrace the needs of customers. In this case, when you talk about AI, we are talking about some form of emulation, or aspiring to emulate, how the human brain functions, and a lot of these exercises to build a digital human brain have been going on for at least ten years. They were all about trying to make a mirror copy of it, which is kind of an engineering approach, right: I am going to break down how the brain functions and try to rebuild the pieces that make up the brain, the neurons of course. But these techniques have proven to be unsuccessful and not scalable, which is really important, and that to me was trying to make literally a computer version of your brain. I don't think that is what neuromorphic is doing. The effort that I see in the neuromorphic world is to understand how the brain functions, so take a cognitive view of the world, and then mould the silicon and the techniques you are going to use to try to achieve the same goal the brain achieves when it makes a decision. So I am a big believer that neuromorphic tech in general is going to be a significant part of the future of AI.



Sean: I couldn't agree with you more on that, because that is exactly how we view it. We are certainly not trying to copy the brain, and I usually run into people who don't understand the tech well. What we do, obviously, is take the best part of the brain, which is a very efficient computation machine, and use that. That is all we are doing: inspired by the brain, taking those principles to get a lot more done, break through the von Neumann bottleneck, and just get things done with less power. It is that simple, and it allows you to apply that to your use cases: not mimic the brain, but use the principles of the brain. So I agree 100 percent with that. Well, let's shift gears a little bit and talk about what is going on with models. I know you think a lot about that. What is going on with the latest trends in models, and what is new with transformers? Give us a couple of comments about that kind of stuff.

Jean: Yes, it is a very interesting domain we are in now. For full disclosure, I am a big fan of Chris Re at Stanford, who to me really has one of the best visions of the evolution of AI today, including the elimination of model proliferation. In the world of AI it is very tempting for data scientists to create a model for everything; every problem they have, they want to create a model, right? And there is also a lot of duplication, because there is not enough sharing happening in this domain, but that is another subject. So we create far too many models in industry, and these models are all highly tuned to very specific use-case scenarios. But this is dramatically changing: there has been an emergence of new techniques and models called transformers. Of course we call them supermodels, right, we are engineers so we have some form of humour, and these supermodels are just better at doing the work than a bunch of little bespoke ones. They are function-defined, or better said, very focused on the type of information you are processing. So we see a lot of transformer activity in the tech space, because the first barrier to break down has been natural language processing, and that is where transformers have made literally significant progress. To quote Dr Re, Google Translate by itself was 500,000 lines of code five years ago; today it is 500 lines of code by using transformers. So these transformers were born in the tech space. You will hear names like GPT-3, which is very popular around here. What these supermodels do is have the capability to understand pretty much everything in their domain, and they just need to be tuned when there is specificity. Say they speak English, but you can tune them for an industry that has a specific dialect, and that is all you have to do: once you tune in the dialect, it will understand how to interpret text in that industry. We see that happening in video and in voice as well.
So the future is fewer bespoke models and more transformers, and then a shift in the personality of the data scientist using AI. Instead of the people you picture in a white lab coat writing mathematical formulas on a blackboard, data scientists will wear a hard hat if they are in the gas industry, meaning they will be very industry savvy, and they will be able to tune these transformers and select the right data to solve the problems in their industry. We are going to see that across all industries, and there will be a shift in the profile of the data scientist. We are also seeing it today, because data is the fuel for AI: we will see the emergence of a massive number of data engineers, people who can prep data so it is ready to be fed to the transformers to then solve the client's problem. It is a fascinating domain, in full evolution, and I think we are still at the beginning, so more innovation is going to happen as the days go by.



Sean: That's great; we might have to come back and talk about that at a later date. But let's get ready to close. I want to ask you: I know you usually put out predictions for the year, and I would love to hear your thoughts. Have you got a prediction or two?

Jean: I will give you one or two; I have to write 20 and I am not there yet. But on the side of data, I really want to send the message that if you don't have good data, you won't get good AI. It is really key, and unfortunately for a lot of our clients, or all clients in general, not just ours, their number one problem is data. And that is just because of the emergence over the years of systems of record, different legacy systems essentially, that have created massive data silos. So there is an approach to managing data, which is a strategy, no longer a technology, which is very important: data mesh. Data mesh is a way you can break down the information silos in your enterprise without removing ownership from the owners of the silos, because they are the business owners and they understand the application, while making that data available to whoever needs it within the enterprise itself. A data mesh strategy, and again we cannot say it enough to our customers, is not a technology; it is an operating model. Data mesh strategy, supported by things called data products, metadata stores, and governance, is going to catch on fire in 2023, '24, '25.

Because we have to break down the silos to really enable AI, turning human knowledge into something that is understandable by computers. Knowledge is absolutely necessary in every single type of business application in today's world, and my point from five years ago, that every application would have not just a database and a UI but a database, a knowledge graph, and a UI, has proven true. I can see that being deployed very well, so that is my prediction: it is going to keep on going. I promise I will find some more; 18 are missing.



 
  • Like
  • Fire
  • Love
Reactions: 32 users

Calsco

Regular
I agree, the segue into talking about transformers makes it seem pretty likely we will hear upcoming news about BrainChip and this tech.
 
  • Like
  • Fire
Reactions: 18 users
Hi FF,

First, thanks for your fact finding and needlelike focus on understanding the technology and documenting the progress of BRN. It has been a great help to those of us less technology aware and a wonderful resource for our own research, confidence and understanding.


Second, please ignore the wannabe CEOs of BRN who offer nothing but self-interest in their commentary. Pitiful advice to sign contracts and generate income means nothing if there is no strategy in place to achieve commercialisation. The current Board and Management have implemented many concrete steps towards the promised revenue. As a former CEO I applaud their strategic direction and implementation.

Third, apart from shareholders (such as FF) who contributed money and capital in a BRN share raising, the remaining shareholders have simply purchased a certificate of shareholding from another shareholder. If they have lost money on their share purchase, they have only themselves to blame for the decision to purchase at the time they did. The company does not owe it to them to generate a profit on their share purchase in the short term, only to grow the company over time.

Fourth, as a long-term shareholder I have been able to tick off significant milestones achieved over the years towards revenue generation. The technology works, strategic partnerships are in place, patent protection, use-case markets identified and tapped, great Board and Management, etc. A grateful thanks to all the 1000 eyes for giving us a "heads up" on these achievements.


Finally FF, can I ask you to resume posting your factual insights and just ignore those who only offer opinions.


PS don’t stop posting your gems of wisdom about your time in the police force, courtroom and conversations with Blind Freddy etc etc.
Well put @LC200Explorer 💚
 
  • Like
Reactions: 10 users

Bravo

If ARM was an arm, BRN would be its biceps💪!
Latest statement from SiFive confirming their commitment to Brainchip.

I thought this part very interesting:

“Additionally, the X280 processor with the new SiFive Vector Coprocessor Interface Extension (VCIX) is being used as the AI Compute Host to provide flexible programming in a leading datacenter”

Given we know that the X280 is the Intelligence Series, and SiFive said in the original press release with BrainChip that this is where they will be finding a home for AKIDA IP.

From the Edge to the Data Centre is Peter van der Made’s vision/mission.




SiFive — Dec 13, 2022

SiFive Delivers Record Growth in 2022 with Fast-Growing Roster of New Customers and Products

Highlights leadership in RISC-V and proven performance and power density benefits
San Jose, Calif., Dec. 13, 2022
– At the RISC-V Summit today, SiFive, Inc., the founder and leader of RISC-V computing, celebrated its impressive year of growth and technical achievements. In 2022 SiFive announced collaborations with some of the world’s largest chip companies and hyperscale datacenters, as the company has been laser-focused on expanding growth. Today SiFive has design wins with more than 100 customers, including 8 of the top 10 semiconductor companies, in applications including automotive, AR/VR, client computing, datacenter, and the intelligent edge. This year the company rolled out new products for a range of fast-growing and high-volume markets, including a comprehensive automotive portfolio, and expanded its presence globally. This momentum was recognized last week when SiFive was awarded the prestigious 2022 Most Respected Private Semiconductor Company Award by the Global Semiconductor Alliance (GSA).
“This was a standout year for SiFive, as we collaborated with some of the biggest companies on the planet to tackle their unmet needs, shifted our portfolio and revenues from embedded to high performance RISC-V products that are shaking up the industry, and expanded our global footprint,” said Patrick Little, CEO and Chairman at SiFive. “With the fast-paced growth of SiFive and rapidly increasing demand for our products, and the overall growth of the RISC-V ecosystem, as we’ve said before, the future of RISC-V ‘has no limits’ as we take the company to new heights.”
SiFive has made incredible technical progress over the last year, rolling out several products with unparalleled compute performance and efficiency. The new SiFive Performance™ P670 and P470 RISC-V processors raise the bar for innovative designs in high volume applications like wearables, smart home, industrial automation, AR/VR, and other consumer devices. The company introduced its SiFive Automotive™ E6-A, X280-A, and S7-A solutions to address critical needs for current and future applications like infotainment, cockpit, connectivity, ADAS, and electrification. Plus, SiFive enhanced its popular SiFive Intelligence™ X280 processor IP to meet the accelerated demand for vector processing, especially for AI and ML applications.
The company has continued to deepen its collaborations and partnerships as it works to transform the future of compute and define what comes next. Through partnership with Microchip, SiFive is a part of NASA’s next generation High-Performance Spaceflight Computing (HPSC) processor, which delivers a 100x increase in computational capability to help propel next-generation planetary and surface missions. Additionally, the X280 processor with the new SiFive Vector Coprocessor Interface Extension (VCIX) is being used as the AI Compute Host to provide flexible programming in a leading datacenter. SiFive also announced its work with companies including BrainChip, Kinara (Deep Vision), Synopsys, and ProvenRun, as well as a broad set of OS, software tools, and EDA ecosystem partners for the SiFive Automotive family of processors.
Another big milestone for SiFive is the company’s partnership with Intel to spark innovation in high-performance RISC-V platforms. SiFive is supporting Intel Foundry Services (IFS) Innovation Fund’s goal to build innovative new multi-ISA computing platforms including RISC-V platforms optimized for Intel process technology. The IFS Innovation fund will support the creation of disruptive technologies to address modern computing challenges, with the Intel-SiFive collaboration aiming to extend the RISC-V ecosystem. At the show, the companies unveiled more details about the HiFive Pro P550 Development System (code named Horse Creek); this high-performance development system will enable the RISC-V ecosystem to productively create software when it is commercially available later in 2023.
Patrick Little and Bob Brennan of Intel show off the new Hi-Five Pro P550 development board

SiFive’s stellar growth, technical achievements, and partnerships have been recognized by prestigious organizations. In addition to the recent GSA Awards recognition, SiFive was awarded a 2022 TSMC Open Innovation Platform® (OIP) Partner of the Year award. Additionally, SiFive ranked in the top 10 percent of Inc.’s fastest growing private companies in America list.
To meet the strong customer demand for SiFive’s innovative RISC-V IP, SiFive has expanded its headcount to more than 550 employees and has opened new offices around the world, including a Research & Development (R&D) Center in Cambridge, United Kingdom, a design center in Bangalore, India, and a new office in Hyderabad, India.
At the RISC-V Summit, taking place from Dec. 12-15 in San Jose, Calif. and virtually, SiFive is presenting in more than 10 sessions, including a keynote on Dec. 13 at 10:35 a.m. PT with SiFive’s CEO Patrick Little: “RISC-V Spotlight: Delivering on Real-World Customer Challenges.” To learn more about SiFive’s business, stop by the SiFive booth in the RISC-V Summit Expo Hall”


My opinion only DYOR
FF

AKIDA BALLISTA


MicroChip's PolarFire 2 SoC FPGAs

Here's something interesting to ponder. After reading this recent article, I'm wondering whether it may have uncovered a hint that we will be incorporated in Microchip's formal rollout of the new mid-range FPGA and SoC family at the Mi-V conference later this year.

Looking back on the history, we know that BrainChip has a relationship with SiFive. In a press release from SiFive, they said they'll be working with us on the X280 in the Intelligence Series (see below).

Screen Shot 2023-01-12 at 12.46.19 pm.png



And Microchip has been working with SiFive since 2015, and Microchip, SiFive and NASA have also been working together. A couple of snippets from this article (below) make me wonder if AKIDA could be what SiFive is referring to as their “Intelligence Extensions,” which they say “are custom instructions that SiFive developed to accelerate AI/ML operations”. I wouldn't be at all surprised if we're involved in some way, shape or form in the rollout of PolarFire 2 SoC FPGAs later this year. Anyway, it's all very interconnected and exciting IMO, but my cousin is coming to lunch in a few minutes, so I'm typing this as fast as my little fingers will allow. Those who have a penchant for dot collecting can do your thang!

ttttpm.png



Screen Shot 2023-01-12 at 12.32.19 pm.png

 

Attachments

  • Screen Shot 2023-01-12 at 12.32.12 pm.png
    Screen Shot 2023-01-12 at 12.32.12 pm.png
    23.5 KB · Views: 76
Last edited:
  • Like
  • Love
  • Fire
Reactions: 27 users

Mccabe84

Regular
Lots of small trades going on today, possibly LDA selling their shares?
7E60273A-65BF-40BA-A7FC-945D272F5AB5.jpeg
 
  • Like
  • Sad
Reactions: 4 users

Labsy

Regular
I know that, but if MERC stayed with the Akida chip rather than using Akida IP, I'm sure BRN would oblige.
I believe if Merc needed chips, BRN would pass them on as a goodwill gesture via a 3rd-party chip maker currently in partnership for this. But I am still adamant Bosch are supplying our chips to Merc under NDA. Time will tell.
 
  • Like
  • Fire
  • Thinking
Reactions: 11 users

Labsy

Regular
My first parcel was 40 cents, then followed with 50c, 60, 70, up to $1.50. Now my average is 87 cents, but I don't regret buying this stock. It was my decision, and if I want to blame someone it would be myself only. The BRN team did not ask anybody here to invest with them. Some here are like kids who throw a tantrum at the BRN team if it didn't go their way. Which is wrong! I'm very happy investing in BRN coz in 3 to 5 years I can say I'm a millionaire 😆😆😆

TheDon
My buy history is similar... hanging on now. Unlike many other holders, my BRN shares are limited edition, worth at least $35 a share...
Waiting until someone is happy to pay this to unload a third...
 
  • Like
  • Fire
  • Love
Reactions: 19 users

VictorG

Member
I believe if Merc needed chips, BRN would pass them on as a goodwill gesture via a 3rd-party chip maker currently in partnership for this. But I am still adamant Bosch are supplying our chips to Merc under NDA. Time will tell.
If only one of us is wrong, that means the other is correct. I'd be very happy with that 😎
 
  • Haha
  • Like
  • Love
Reactions: 9 users

HarryCool1

Regular
giphy.gif
 
  • Haha
  • Like
  • Fire
Reactions: 16 users

ndefries

Regular
  • Like
  • Love
  • Fire
Reactions: 15 users

Diogenese

Top 20
Hi MA,

That quote is based on Intel Loihi -
Correction: The burning finger has writ, and having writ, it is Intel's Kapoho Bay.

If you can't find the facts, someone will.
 
Last edited:
  • Like
Reactions: 4 users

dippY22

Regular
Not sure what to think here. Not sure I even need to listen to Sean and Jean, because others have chosen to tell me what to think before I even had a chance to listen. Bummer. I love a human's voice (i.e. Sean's or Jean's), as opposed to a critic's written voice, regardless of their point of view.

Doesn't matter. Nothing matters except the current stock price? Right? .... Right?

This interview was such a coup for BrainChip. In the world today it is safe to say Accenture has a seat at the table of whatever important things are going on.

I beg you, let people listen and don't tell them what you think happened.
 
  • Like
  • Fire
  • Love
Reactions: 5 users