BRN Discussion Ongoing

Moonshot

Regular
An old interview, but good snapshot summary of CNN vs SNN (GPU v Akida) and fintech applications…





AI tech: are your neural networks convoluted? Try spikes.​

October 26, 2018
Anna Reitman
The application of artificial intelligence across industries depends on hardware acceleration, and some of the ways that’s being done was detailed at a recent Nvidia conference for GPU (graphics processing unit) developers, held in Tel Aviv.
But the advance of neural networks is itself branching out, from the computationally intensive convolutional neural networks (CNNs) to the emerging spiking neural networks (SNNs), also known as neuromorphic computing.
Nvidia’s chief scientist, Bill Dally, was dismissive of the challenge presented by SNNs, saying that because the design is inspired by biological systems, computation is analog, and therefore inefficient. He compared it to “making airplanes that flap wings.”

Robert Beachler, SVP, BrainChip
ASX-listed firm BrainChip recently announced that it’s going to market with the Akida Neuromorphic System-on-Chip (NSoC) for embedded vision, cybersecurity, and fintech, targeting a $10 price point for the chip.
We asked Robert Beachler, SVP of marketing and business development, to explain how SNNs are developing for fintech applications.

Fintech Capital Markets: How would you respond to Bill Dally’s description of SNNs?

Robert Beachler:
His thesis is that SNNs need to use analog computational elements to be efficient. This is an incorrect statement. BrainChip’s neuron model is purely digital, and BrainChip accelerates spiking neural networks in a digital manner.
On representing numbers as spikes: This is a common misperception that spokespeople working with CNNs have of SNNs. In CNNs they use numbers to represent data and therefore think that SNNs need to use numbers as well – which is incorrect.
Neurons don’t think in mathematical and floating point values, which is why humans invented calculators and computers – we recognized that the human mind is not good at complex math.
BrainChip’s data-to-spike converters take the external information (whether it is pixels, audio, fintech data, etc.) and translate it into a series of spikes. This translation does not encode the values of numbers.
Rather, the spike series represent pertinent information such as frequency, magnitude change, or other characteristics. This is what sensory organs do, like the retina, cochlea, etc.; they don’t send numbers – they send spike series that represent the pertinent information.
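The data-to-spike idea described above can be sketched with a simple delta-modulation encoder. This is purely illustrative (BrainChip's actual converters are proprietary, and the function and threshold here are assumptions), but it shows how a spike train can carry magnitude change rather than raw numeric values:

```python
# Illustrative delta-modulation data-to-spike encoder (a sketch only).
# A spike is emitted whenever the signal changes by more than a threshold,
# so the spike train encodes magnitude change, not the numbers themselves.

def encode_delta(samples, threshold=0.5):
    """Return (time, polarity) spike events for a 1-D signal."""
    spikes = []
    ref = samples[0]                 # last level that triggered a spike
    for t, x in enumerate(samples[1:], start=1):
        while x - ref >= threshold:  # rising change -> positive spikes
            ref += threshold
            spikes.append((t, +1))
        while ref - x >= threshold:  # falling change -> negative spikes
            ref -= threshold
            spikes.append((t, -1))
    return spikes

signal = [0.0, 0.2, 1.1, 1.0, 0.1]
print(encode_delta(signal))  # → [(2, 1), (2, 1), (4, -1)]
```

Flat stretches of the signal produce no spikes at all, which is one reason event-driven processing can be so power-efficient.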

Source: BrainChip

The reason CNNs take so much power and silicon real estate is that they try to emulate what neurons perform using complex math functions – taking hundreds of watts. SNNs, by contrast, are more efficient as they more closely emulate the most efficient thinking machine – the human mind, which only uses 20 watts.
And regarding planes that flap and have feathers: This is a rather elementary attack that shows a lack of understanding of how state-of-the-art SNNs operate. If you tried to exactly model the complex bio-electrical process of the human neuron, like electron potentials, ion channels, dopamine reactions, etc., you would indeed have a very complex and inefficient neuron model.
But if you emulate only the most important neuron characteristics, synaptic connections and firing thresholds, in a purely digital manner, you end up with a very efficient neuron model, which BrainChip has been researching for many years. That is why the BrainChip neuromorphic system-on-chip is an order of magnitude more efficient on an images / second / watt basis compared to GPUs and FPGAs (field-programmable gate arrays).
FTCM: Can you describe the difference between CNNs and SNNs?
RB:
Traditional computer architecture is really set up for sequential execution, whereas an artificial neural network is a distributed parallel feed-forward architecture that doesn’t lend itself to sequential execution; it really wants to be distributed, it wants to be a lot of compute in parallel. And the primary difference between a CNN and an SNN is what is being done in the neuron itself.
With CNNs, it’s essentially a filtering operation. At its base, it is matrix multiplication, so all of the CNN acceleration chips are really just looking at ways to do more multiply-accumulates with less silicon and lower power consumption.
For SNNs, instead of using this math function, the actual neuron that we use is modeled after biology where you have synapses and neurons, and the data between the synapses are spikes in the brain. The way that it learns is that you reinforce or inhibit these synaptic connections and then you also can change the firing threshold of the neuron itself. The chip learns through reinforcement and inhibition of synaptic connections and the thresholds.
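The learning rule described above, reinforcing or inhibiting synapses and adjusting a firing threshold, can be sketched as a toy all-digital neuron. This is not BrainChip's neuron model; the class, weight caps, and threshold value are all illustrative assumptions:

```python
# Minimal digital spiking neuron, in the spirit described above (a toy
# sketch, not BrainChip's design). Integer synaptic weights are summed
# when input spikes arrive; the neuron fires when the potential crosses
# a threshold, and learning reinforces the synapses that contributed to
# a firing while inhibiting the ones that stayed silent.

class DigitalNeuron:
    def __init__(self, n_synapses, threshold=4):
        self.weights = [1] * n_synapses   # all-digital integer weights
        self.threshold = threshold        # adjustable firing threshold

    def step(self, input_spikes, learn=False):
        """input_spikes: 0/1 per synapse. Returns 1 if the neuron fires."""
        potential = sum(w * s for w, s in zip(self.weights, input_spikes))
        fired = int(potential >= self.threshold)
        if learn and fired:
            # reinforce active synapses, inhibit the silent ones
            self.weights = [min(w + 1, 8) if s else max(w - 1, 0)
                            for w, s in zip(self.weights, input_spikes)]
        return fired

n = DigitalNeuron(n_synapses=4)
print(n.step([1, 1, 1, 1], learn=True))  # → 1 (potential 4 >= threshold 4)
print(n.weights)                         # → [2, 2, 2, 2], active synapses reinforced
```

Note that nothing here is analog and there is no floating-point math, which is the point of Beachler's rebuttal.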


Source: BrainChip

FTCM: Can you give me an example that makes sense for finance?
RB:
In the case of financial technology, there’s a lot of unlabeled data. What these neural networks will do is cluster things that are similar together, and that could be successful trading patterns, economic indicators, it could be basically just about any type of data that financial traders or economists want to look at.
There’s an entire science of converting data into spikes. We have embedded in our chips certain data-to-spike conversions for specific applications, but for fintech it’s more of a general data-to-spike conversion that takes generic data and puts it into a time series of spikes. Inside, on a neuron fabric, a spiking neural network model is doing pattern recognition in an unsupervised mode using unlabeled data.
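The unsupervised clustering Beachler describes can be sketched as a tiny winner-take-all layer: competing neurons each claim the spike patterns they respond to most, and only the winner's synapses are reinforced, so similar patterns settle onto the same neuron. Everything here (function names, weight caps, starting weights) is an illustrative assumption, not BrainChip's method:

```python
# Toy unsupervised winner-take-all clustering of spike patterns (a sketch
# of the idea only). Similar unlabeled patterns end up "owned" by the
# same neuron through reinforcement of the winner's active synapses.

def train_wta(patterns, n_neurons=2, epochs=5):
    n_inputs = len(patterns[0])
    # slightly asymmetric starting weights so the neurons can differentiate
    weights = [[1 + (i + j) % 2 for j in range(n_inputs)]
               for i in range(n_neurons)]
    for _ in range(epochs):
        for p in patterns:
            scores = [sum(w * s for w, s in zip(ws, p)) for ws in weights]
            winner = scores.index(max(scores))
            # reinforce the winner's active synapses, inhibit its silent ones
            weights[winner] = [min(w + 1, 8) if s else max(w - 1, 0)
                               for w, s in zip(weights[winner], p)]
    return weights

# two repeating "families" of spike patterns, deliberately unlabeled
patterns = [[1, 1, 0, 0], [0, 0, 1, 1]] * 3
w = train_wta(patterns)
winners = [max(range(len(w)), key=lambda i: sum(a * b for a, b in zip(w[i], p)))
           for p in ([1, 1, 0, 0], [0, 0, 1, 1])]
print(winners)  # → [0, 1]: each pattern family settles on its own neuron
```

No labels were supplied at any point; the structure in the data alone drives the clustering, which is what makes the approach attractive for unlabeled financial data.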
FTCM: One of the first things I think of is the “black box” problem. Can you see how decisions get made?
RB:
That’s really a question that hasn’t been solved, because it’s really hard to understand: you have these deep neural networks with multiple layers, and looking inside of them to see what they’re doing is very hard to visualize; it’s not something you can really do very well.
It’s really a self-organizing, self-training type of system, and you’re right, it’s very difficult to explain the pattern.
FTCM: This is commercially available now, right? Can you tell us about your financial services clients?
RB: We have the development environment that’s available now for our customers to start creating the SNNs; we anticipate sampling the silicon mid-2019. We are focusing on three areas: vision systems, cybersecurity and fintech.
The financial sector is a rarefied environment; it’s not going to be thousands of customers, whereas there can be thousands of drones and vision-guided robotic systems. Our fintech and cybersecurity will be a smaller percentage in terms of number of customers, but it might be higher in terms of the potential revenue size for us because they would be buying lots of chips in big groups for servers. I can’t say anything more specific at the moment.
In the fintech community, there’s not a lot of public domain information about what are the neural networks that these companies are using. They are in the business of making money and they are really protecting that intellectual property.
FTCM: And what does it actually look like to set up?
RB:
In most fintech applications, it would be a PCI Express card that may have between 20 and 50 of these chips in order to be able to execute very large neural networks very quickly.
We measure things in terms of neurons, and the neuron fabric has 1.2 million neurons and 10 billion synapses. It has a lot of compute capability. When we talk about replacing the math-intensive convolutions and backpropagation methods, we are talking about replacing those matrices with the neurons and the synapses.
Each neuron-processing core can handle a certain number of neurons and synapses. We designed their functionality to accelerate this computation but also to be able to emulate a lot of the convolutional neural network styles.
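The quoted fabric capacity supports a quick back-of-envelope check on connectivity, an average fan-in per neuron (an inference from the stated figures, not a number from the interview):

```python
# Back-of-envelope on the quoted fabric capacity: 10 billion synapses
# spread across 1.2 million neurons implies the average fan-in per neuron.
neurons = 1_200_000
synapses = 10_000_000_000
fan_in = synapses / neurons
print(round(fan_in))  # → 8333 synapses per neuron on average
```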

Source: BrainChip
FTCM: How do you see the future unfolding for the different kinds of hardware acceleration architecture?
RB:
You’re seeing a melding of the different technologies on any given chip. If you were to look into the future, you could have an ARM processor (architecture for systems-on-chips) and an embedded FPGA for data manipulation and scaling, then you have a neuron fabric accelerating SNNs and CNNs, and that all comes together in a monolithic piece of silicon.
 
  • Like
  • Love
  • Fire
Reactions: 17 users

Zedjack33

Regular

Attachments

  • C150CA6F-59F7-4C28-8A2D-FCEF9E06A225.jpeg · 109.1 KB · Views: 166
  • Haha
  • Like
  • Love
Reactions: 23 users

TopCat

Regular
An article on Sifive. Apologies if already discussed.

Future Automobiles will be Powered by RISC-V

SiFive is creating a complete lineup of compute IP for MCUs, MPUs, and high-performance SoCs, as well as vector processing solutions tailored for automotive applications, with the first high-performance, out-of-order Automotive family cores planned for late 2023.

With several lead customers already, the SiFive Automotive E6 products will ship in Q4 of this year and the S7A and X280A are expected to be available shortly after.
 
  • Like
  • Fire
  • Love
Reactions: 19 users
Hi @Zedjack33
Just have to say that cartoon has made my day.
Regards
FF

AKIDA BALLISTA
 
  • Like
  • Haha
  • Love
Reactions: 8 users
“The financial sector is a rarefied environment; it’s not going to be thousands of customers, whereas there can be thousands of drones and vision-guided robotic systems. Our fintech and cybersecurity will be a smaller percentage in terms of number of customers, but it might be higher in terms of the potential revenue size for us because they would be buying lots of chips in big groups for servers. I can’t say anything more specific at the moment.
In the fintech community, there’s not a lot of public domain information about what are the neural networks that these companies are using. They are in the business of making money and they are really protecting that intellectual property”

Additionally, what we do know from the former CEO Mr. Dinardo is that Peter van der Made flew to Europe for the sole purpose of face-to-face meetings with fintech reps.

My opinion only DYOR
FF

AKIDA BALLISTA
 
  • Like
  • Fire
  • Love
Reactions: 13 users

Fox151

Regular
Just looking for creases in the tablecloth and I have found something strange. Remember last time the secret clue was the toy stag? Well, this time, in front of the screen to the right of the photo, there is a small brown block.

Could this be a clue?

What could the hidden meaning of a block possibly be???

Are there other clues to be found in the chain of lights around the table?

A block and a chain what can it mean???

😂🤣😂🤣😂🤡🤡🤡🤡

No opinion just a bit of fun so DYOR
FF


AKIDA BALLISTA
LG monitors! There's our big South Korean company!
 
  • Like
  • Haha
  • Fire
Reactions: 12 users

alwaysgreen

Top 20
LG monitors! There's our big south Korean company!
Might not actually be a coincidence. I know I support other businesses that support mine.
 
  • Like
  • Love
  • Fire
Reactions: 12 users

BaconLover

Founding Member
Those of us who hold through these tough times, definitely do too....



 
  • Haha
  • Like
  • Love
Reactions: 28 users

alwaysgreen

Top 20
We should be in the Top 5 gainers on the ASX website for most of today.

I contacted them about it again, as I haven't heard back since I made an official complaint on the 10th of August.

They said that the Digital Services Team can't work it out and the problem has been transferred to the external vendors of the system.

I suggested that since the system is incorrect and lying to the market about the goings-on of Australia's most important index, they shouldn't publish any Top 5 information until the problem is resolved, or should do it manually.

That, to me, is the only right thing to do.

Anything else, namely continuing to publish daily 20-minute updated data which they know is incorrect, is lying to the Australian investing public and in fact the world, as the S&P/ASX 200 is our biggest index and is of global importance.

Isn't that a reasonable conclusion??
Thankfully, they haven't fixed this issue overnight!
 
  • Haha
  • Like
Reactions: 8 users
D

Deleted member 118

Guest
Why did I



 
  • Haha
  • Like
Reactions: 4 users

H2 goes up

Emerged
Hello Chippers,

I have extreme concern for my liver. I have been having a celebratory drink (or two) every time BRN hits $1 on the way up and a 'commiseratory' drink (or two) every time BRN goes through $1 on the way down. It seems to be a habit that I can't kick so I have a plan to only have one more $1 party (given that it is currently under $1, we must pass through it one more time).

The 500 of us (1,000 eyes) get together and invent something, anything, that uses Akida. We then get a contract with BRN to supply us the chips for the initial run. We WILL NOT have a non-disclosure agreement. The contract gets announced to the public, the share price goes up through $1 never to return, and my liver can start to regenerate.

Who wants to invent something? Don't make it too good as some liver generation needs to happen before this problem begins again at $2.

Disclaimer: An economic evaluation of this 'good' idea has not been carried out (hung over from yesterday's celebratory and lining up for today's commiseratory).
 
  • Haha
  • Like
  • Love
Reactions: 37 users

VictorG

Member
Well, wouldn't it be just great if BRN closes above $1 today. I'm thinking we've seen the bottom for the day and the only way is up, up and away!
justice league vintage GIF
 
  • Like
  • Fire
  • Haha
Reactions: 10 users
That’s a cracking idea you have.
I know how your liver feels; it’s worse when you also have to put work into the mix.
 
  • Haha
  • Fire
Reactions: 4 users

KMuzza

Mad Scientist
Wow- this is incredible😎

1663116567311.jpeg
 
  • Like
  • Love
  • Fire
Reactions: 27 users

KMuzza

Mad Scientist
Great article Moonshot-👍
 
  • Like
  • Fire
  • Love
Reactions: 6 users

mrgds

Regular
Well FF, we have all spoken at length with regards to Akida's power-saving abilities.
Blockchain technology is something a lot of the world's "thinkers" believe will eventuate in some form or another with cryptocurrency.
If indeed this is a "clue" (unofficial announcement) .................... then I'm off to the Hot Tub (yes, I'm feelin' it).
I do think the s/p should rise at least 3% based on just the tablecloth ....................... how professional is that!!!!

Akida Ballista.
 
  • Like
  • Haha
  • Love
Reactions: 7 users

robsmark

Regular
Not much, but snagged another 1,000 this morning.
 
  • Like
  • Fire
  • Love
Reactions: 12 users

Deadpool

hyper-efficient Ai
OMG. The phrase "when Wall Street sneezes, everyone catches a cold" literally translates to this.
 
Last edited:
  • Haha
  • Like
  • Wow
Reactions: 25 users

Xhosa12345

Regular
a whole 0.4% of SOI traded......
definitely would not be panicking.....
 
  • Like
  • Love
Reactions: 7 users

JK200SX

Regular
Which block are you referring to?

The LEDs on the table could change colour based on facial expressions of people visiting the stand :)

Also, can anyone make out the name/brand of the tablet?
 
Last edited:
  • Like
Reactions: 2 users
Top Bottom