BRN Discussion Ongoing

stan9614

Regular
Looking forward to seeing Tony's feedback on this.
 
  • Like
Reactions: 4 users

Slade

Top 20
Anyone who took my sure-bet tip the other day and put a place bet on Mishani in race one at Eagle Farm yesterday, well done. Hope you enjoy your winnings.

 
  • Haha
  • Like
  • Fire
Reactions: 22 users
Respectfully, today's drop had nothing to do with the broader market and everything to do with us making zero sales for the quarter when Sean Hehir clearly stated that revenue growth would exceed expense growth by the end of the year. He's got 12 weeks left to make this happen.
I think the comments in the quarterly mean that this no longer applies.
 
  • Like
Reactions: 1 users
... but the ABS managed to avoid locking up the wheels ...
The suspension dynamics handled the heavy braking perhaps too well, in my opinion, particularly when the momentum of the trailer came into play.

Watching this video, I am pretty sure they have fiddled with the frame rate to speed it up and improve the dramatic effect. Speeding up frame rates can dramatically affect perception. If the truck were travelling as fast as it appears in this video, the suspension would have been working a lot harder.

I won an assault case for a police officer because I noticed that the speed of movement was slightly off; after having the frame rate brought back to real time, it was clear he was not using excessive force. The security cameras monitoring the police charge room used compression to ensure one tape could cover the full 8-hour shift.
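To make the frame-rate point concrete, here is a minimal sketch with assumed numbers (they are not figures from the truck video or the charge-room tapes) of how the ratio between recording and playback frame rates changes apparent speed:

```python
# Illustrative only: assumed frame rates, not measurements from any real footage.

def apparent_speed_multiplier(recorded_fps: float, playback_fps: float) -> float:
    """Footage captured at recorded_fps but replayed at playback_fps looks
    sped up (or slowed down) by this factor."""
    return playback_fps / recorded_fps

# A charge-room camera recording ~3 frames per second, replayed at the usual 25 fps,
# makes every movement look roughly 8x faster than real time.
print(apparent_speed_multiplier(recorded_fps=3, playback_fps=25))  # ~8.3

# The same arithmetic shows how one tape covers an 8-hour shift:
# 8 hours recorded at 3 fps plays back in under an hour at 25 fps.
print(8 * 3 / 25)  # ~0.96 hours of tape
```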

The idea that ‘one picture is worth a thousand words’ now has little value in this modern world as every image is being manipulated and adjusted even before human interference comes into play.

Regards
FF

AKIDA BALLISTA
 
  • Like
  • Fire
  • Wow
Reactions: 10 users
Looking forward to seeing Tony's feedback on this.
Hi Stan
The only response I received from Tony Dawe was that "Peter is aware".

The email contained a series of questions drafted by @Diogenese but alas silence was the stern reply to those questions.😞😂🤣😂

I also sent a follow up to equally deaf ears.😞😞😂🤣😂

My opinion only DYOR
FF

AKIDA BALLISTA
 
  • Like
  • Sad
Reactions: 8 users

HopalongPetrovski

I'm Spartacus!
I personally want to thank all of the positive, proactive people on this site for being here. Not only do you help me stay grounded, but you fill me with hope and positivity for the future.
Thank you guys.
It means a lot to me, as this roller coaster has been quite wild, from the highs of 350k plus to the lows of -15k.
Agreed. But don't forget Zeeb0t who set it up and runs it for us all.
Before we had the facility of this site, the positive people were under constant and sustained attack from orchestrated and likely paid downrampers whose aim was to undermine everything they said, as well as to instil fear and doubt in the minds and hearts of holders and those evaluating our company.
For those who haven't yet, please consider subscribing to help Zeeb0t keep this invaluable resource available for us all.
I pay $50 a year and certainly get my money's worth.
 
  • Like
  • Love
  • Fire
Reactions: 28 users
Afternoon Diogenese,

Stop it, A .... a, great sense of humour 😄.

Thank you for your expertise in breaking down all of the technical articles & questions of late for us mere mortals.

Extremely grateful.

Roughly two days ago, Anastasi In Tech did a short video on IBM's new chip. I briefly watched her video and a few things rang a bell on the technical jargon side of things.

Only if you are bored, and I'm sure you are not, it might be worth having a look at.
Anastasi's post/video popped up in my mailbox two days ago, so I presume it is her latest release.

Regards,
Esq.
Hi ya Esq

Now did I hear it right
That the chip is partly Dell hardware???
Don’t Dell and BRN have a connection?
Mmmmm it’s got me thinking 🤔
 
  • Like
  • Love
Reactions: 5 users

Esq.111

Fascinatingly Intuitive.
Hi ya Esq

Now did I hear it right
That the chip is partly Dell hardware???
Don’t Dell and BRN have a connection?
Mmmmm it’s got me thinking 🤔
Morning Food4 thought,

Think she said this tech was derived from IBM's AI accelerator, the IBM Down Processor.

From the IBM Artificial Intelligence Unit:
5 nm chip,
ASIC,
32-bit down to 8- & 4-bit (see the rough sketch below).

One for Diogenese to hopefully pull apart for us all.
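For anyone curious what "32 bit down to 8 & 4 bit" means in practice, the sketch below shows simple symmetric integer quantisation. It is purely illustrative and assumes nothing about IBM's actual scheme.

```python
import numpy as np

def quantize_symmetric(weights: np.ndarray, bits: int):
    """Map float32 weights onto signed integers of the given bit width,
    using one per-tensor scale (a common, simple scheme)."""
    qmax = 2 ** (bits - 1) - 1            # 127 for 8-bit, 7 for 4-bit
    scale = np.abs(weights).max() / qmax  # one scale for the whole tensor
    q = np.clip(np.round(weights / scale), -qmax, qmax).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    return q.astype(np.float32) * scale

w = np.random.randn(6).astype(np.float32)
q8, s8 = quantize_symmetric(w, bits=8)  # small rounding error, 4x less storage
q4, s4 = quantize_symmetric(w, bits=4)  # larger error, even less storage
print(w)
print(dequantize(q8, s8))
print(dequantize(q4, s4))
```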

Regards,
Esq.
 
  • Like
  • Love
Reactions: 7 users

Steve10

Regular
Boston Dynamics have developed an AI neural network with onboard compute for their Atlas robot. DARPA is also involved.

At the 8:10 mark of the video, a paradigm-shifting AI neural network is mentioned, with onboard compute for control, perception & estimation. It has enabled the Atlas robot to become self-learning.




BRN selling on Friday has resulted in a highly oversold RSI, the same as at the June bottom. The SP then bounced from 78c to 127c in about 6 weeks.

The BRN daily chart is not looking good; however, the weekly chart still appears OK, with the recent sell-off breaking through the 100MA support at 75.5c. With the market rising, it may bounce to the 100MA at 75.5c, and the next target will be the 50MA at $1 on the weekly chart. There is a gap at 77.5c to 85c that needs to be filled.

The S&P 500 MA ribbons on the hourly chart are aligned above each other, similar to the June/July rally: 20MA (green) above 50MA (blue) above 100MA (red) above 200MA (purple).


[Chart attached: S&P 500 (^GSPC) - Yahoo Finance]


The 100MA has also just crossed the 200MA, and this has not happened since July during the last rally.

Based on RSI & timeframe, it appears the rally has legs to get to the August peak by the start of December. It will be above the 200MA on the daily chart if it can rally similarly to the June/July rally.

The ASX XTX index is currently at 2,030 with a minimum target of 2,250; based on RSI & timeframe it will be above the 200MA by around 11/11/2022. This could potentially be a trend change, as it has not been above the 200MA this bear market.

The ASX XTX index 100MA has started rising for the first time since the bear market commenced. It appears the 100MA will cross over the 200MA on the daily chart at the start of December 2022.

The crossover could fail, similar to late 2008, resulting in another leg lower for about 3 months = an early March 2023 final bottom. In 2002 the crossover was successful. The difference between the 2008 crossover failure & the 2002 crossover success was the alignment of the MAs.

In 2008 the MAs were not aligned 20/50/100 above each other when the crossover of the 200MA failed, whereas in 2002 the MAs were aligned 20/50/100 above each other when they crossed above the 200MA, resulting in a successful market trend change.

Keep an eye on the MAs' alignment at the start of December for an indication of whether or not the crossover will be successful.

This could be an opportunity to buy discounted, oversold BRN shares, similar to the 37c shares in October 2021. $2.34 intraday peak / 37c = x6.32 upside within 3 months last time. 67c x 6.32 = $4.23, which is within the Elliott wave target range for the next peak.

Will history rhyme and/or repeat this time?
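For anyone who wants to check the mechanics rather than take my word for it, here is a rough Python sketch of the two calculations above: the MA alignment test and the upside multiple. The price series is made up purely for illustration; it is not real BRN or index data, and none of this is advice.

```python
import pandas as pd

# Made-up daily closes purely to show the mechanics; not real market data.
close = pd.Series([0.78, 0.75, 0.72, 0.70, 0.67, 0.69, 0.72, 0.76, 0.80, 0.85] * 25)

# Simple moving averages of the kind referred to above.
ma = {n: close.rolling(n).mean() for n in (20, 50, 100, 200)}
latest = {n: series.iloc[-1] for n, series in ma.items()}

# "Aligned above each other" = 20MA > 50MA > 100MA > 200MA on the latest bar.
aligned = latest[20] > latest[50] > latest[100] > latest[200]
print("MAs aligned:", aligned)

# The upside arithmetic from the October 2021 move, applied to a 67c base.
multiple = round(2.34 / 0.37, 2)            # ~6.32x trough to intraday peak last time
print(multiple, round(0.67 * multiple, 2))  # 6.32 and 4.23, the target mentioned above
```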
 
  • Like
  • Fire
  • Wow
Reactions: 44 users
Hi to all the Quantum Professors out there who read this forum. This is the email to Tony Dawe that met with a brick wall response regarding Quantum Annealing:

Hi Tony
While doing some of my rabbit hole research I came across this NASA funded research;

https://ui.adsabs.harvard.edu/abs/2021QuIP...20...70Z/abstract

Which leads me to ask the following questions:

1. Was it an AKIDA software simulation of a spiking neural network that they used?

2. If not, would their findings regarding the compatibility of their Quantum annealing algorithm likely still apply to AKIDA technology?

3. If so, would this significantly advance AKIDA's capabilities to solve route identification for industrial picking robots in factories, drone delivery vehicles and/or air traffic control?

4. Overall what does Quantum computing have to offer AKIDA technology?

5. What does AKIDA technology have to offer Quantum computing?

Please remember I am a retired lawyer not a quantum physics professor so answers of one syllable are required if it is thought reasonable to provide answers.

Kind regards

Though I wish I could take credit for the questions, these were provided by my expert @Diogenese, as is the practice in my former career.

My opinion only DYOR
FF

AKIDA BALLISTA
 
  • Like
  • Fire
  • Love
Reactions: 35 users
Great price to purchase shares ATM. Look at our partnerships, Akida 2 in the works; we are in a great position. I'm not worried, wait till revenue comes in and new partnerships are formed. We're all impatient, I'm afraid. Wait till 10 products hit the market, then 100, then 1,000, etc. Never ending.
 
  • Like
  • Love
Reactions: 5 users

wilzy123

Founding Member
Hi to all the Quantum Professors out there who read this forum. This is the email to Tony Dawe that met with a brick wall response regarding Quantum Annealing:

Hi Tony
While doing some of my rabbit hole research I came across this NASA funded research;

https://ui.adsabs.harvard.edu/abs/2021QuIP...20...70Z/abstract

Which leads me to ask the following questions:

1. Was it an AKIDA software simulation of a spiking neural network that they used?

2. If not, would their findings regarding the compatibility of their Quantum annealing algorithm likely still apply to AKIDA technology?

3. If so, would this significantly advance AKIDA's capabilities to solve route identification for industrial picking robots in factories, drone delivery vehicles and/or air traffic control?

4. Overall what does Quantum computing have to offer AKIDA technology?

5. What does AKIDA technology have to offer Quantum computing?

Please remember I am a retired lawyer not a quantum physics professor so answers of one syllable are required if it is thought reasonable to provide answers.


Kind regards

Though I wish I could take credit for the questions, these were provided by my expert @Diogenese, as is the practice in my former career.

My opinion only DYOR
FF

AKIDA BALLISTA

The brick wall was perhaps due to the complexity you introduced in your last sentence ;)🤣:cool:
 
  • Like
  • Haha
Reactions: 9 users
I really think this is an article worth reading.

Probably because ever since the wombats over at HC back in 2019 started using the China technology argument to support the view that Brainchip had no chance (China would out-compete them, and they were so advanced technologically anyway that they probably already had something more advanced even if AKIDA was real), Blind Freddie has responded to them with the same counter-argument on HC and here. Why? Is Blind Freddie a genius? No. It's just that history is there for everyone to learn from, not just historians, and Blind Freddie is nothing if not someone who listens to and learns from history:

MarketWatch

China’s economy is rotting from the head​

Daron Acemoglu - 8h ago


PROJECT SYNDICATE


BOSTON (Project Syndicate)—China highlights a long-debated question about economic development: Can a top-down autocracy outperform liberal market economies in terms of innovation and growth?
Between 1980 and 2019, China’s average annual GDP growth rate was over 8%—faster than any Western economy—and in the 2000s, its economic trajectory exceeded mere catch-up growth (using Western technologies). China started making its own technology investments, producing patents and academic publications, and spawning innovative companies such as Alibaba, Tencent, Baidu, and Huawei.

Even more and greater blunders are likely to follow now that Xi wields unchecked power and is surrounded by yes-men who will avoid telling him what he needs to hear.

Science withers without freedom of thought​

Some naysayers had thought this unlikely. While plenty of autocrats had presided over rapid economic expansions, never before had a non-democratic regime generated sustained, innovation-based growth. Some Westerners were mesmerized by Soviet scientific prowess in the 1950s and 1960s, but often they were channeling their own biases. By the 1970s, the Soviet Union was clearly falling behind and stagnating, owing to its inability to innovate across a broad range of sectors.

True, some astute China observers pointed out that the Communist Party of China’s iron grip did not bode well for the country’s prospects. But the more common view was that China would sustain its astonishing growth. While there were debates over whether China would be a benign or malign force globally, there was little disagreement that its growth was unstoppable. The International Monetary Fund and the World Bank made a habit of projecting past Chinese growth rates into the future, and books with titles like “When China Rules the World” proliferated.
For years, one also heard arguments that China had achieved “accountability without democracy,” or that the CPC leadership was at least constrained by term limits, a balance of powers, and other good-governance stopgaps. China won praise for demonstrating the virtue of government planning and offering an alternative to the neoliberal Washington Consensus.
Even those who recognized China’s model as a form of “state capitalism”—with all the contradictions that entails—projected that its growth would continue largely unabated.

Dominance in artificial intelligence​

Perhaps the most potent argument was that China would control the world by dint of its ability to achieve global dominance in artificial intelligence. With access to so much data from its massive population, with fewer ethical and privacy restrictions than those faced by researchers in the West, and with so much state investment in AI, China was said to have an obvious advantage in this domain.
But this argument was always suspect. One cannot simply assume that advances in AI will be the main source of economic advantage in the future; that the Chinese government will allow for ongoing high-quality research in the sector; or that Western companies are significantly hampered by privacy and other data regulations.
China’s prospects today look far less rosy than they once did. Having already eliminated many internal checks, President Xi Jinping used the CPC’s 20th National Congress to secure an unprecedented third term (with no future term limits in sight), and stacked the all-powerful Politburo Standing Committee with loyal supporters.

Unforced errors​

This consolidation of power comes despite major unforced errors by Xi that are dragging down the economy and sapping China’s innovative potential. Xi’s “zero-COVID” policy was largely avoidable and has come at significant cost, as has his support for Russia’s war in Ukraine.
Even more and greater blunders are likely to follow now that Xi wields unchecked power and is surrounded by yes-men who will avoid telling him what he needs to hear.
But it would be a mistake to conclude that China’s growth model is crumbling just because the wrong person ascended the throne. The turn toward a harder line of control that started during Xi’s first term (after 2012) may have been inevitable.
China’s rapid industrial growth in the 1990s and 2000s was built on huge investments, technology transfers from the West, production for export, and financial and wage repression.
But such export-led growth can go only so far. As Xi’s predecessor, Hu Jintao, recognized in 2012, China’s growth would have to become “much more balanced, coordinated, and sustainable,” with far less reliance on external demand and much greater reliance on domestic consumption.

Maintain its political monopoly​

At the time, many experts believed that Xi would respond to the challenge with an “ambitious reform agenda” to introduce more market-based incentives. But these interpretations overlooked a key question that China’s regime was already grappling with: how to maintain the CPC’s political monopoly in the face of a rapidly expanding, economically empowered middle-class. The most obvious answer—and perhaps the only answer—was greater repression and censorship, which is exactly the path Xi took.
For a while, Xi, his entourage, and even many outside experts believed that the economy could still flourish under conditions of tightening central control, censorship, indoctrination, and repression. Again, many looked to AI as an unprecedentedly powerful tool for monitoring and controlling society.
Yet there is mounting evidence to suggest that Xi and advisers misread the situation, and that China is poised to pay a hefty economic price for the regime’s intensifying control. Following sweeping regulatory crackdowns on Alibaba, Tencent, and others in 2021, Chinese companies are increasingly focused on remaining in the political authorities’ good graces, rather than on innovating.
The inefficiencies and other problems created by the politically motivated allocation of credit are also piling up, and state-led innovation is starting to reach its limits. Despite a large increase in government support since 2013, the quality of Chinese academic research is improving only slowly. Even in AI, the government’s top scientific priority, advances are lagging behind the global tech leaders—most of them in the United States.

Currying favor instead of seeking truth​

My own recent research with Jie Zhou of MIT and David Yang of Harvard University shows that the top-down control in Chinese academia is distorting the direction of research, too. Many faculty members are choosing their research areas to curry favor with heads of departments or deans, who have considerable power over their careers. As they shift their priorities, the evidence suggests that the overall quality of research is suffering.
Xi’s tightening grip over science and the economy means that these problems will intensify. And as is true in all autocracies, no independent experts or domestic media will speak up about the train wreck he has set in motion.
Daron Acemoglu, professor of economics at MIT, is co-author (with James A. Robinson) of “Why Nations Fail: The Origins of Power, Prosperity and Poverty” (Profile, 2019) and “The Narrow Corridor: States, Societies, and the Fate of Liberty “(Penguin, 2020).
 

  • Like
  • Fire
  • Love
Reactions: 18 users

BaconLover

Founding Member
Boston Dynamics have developed an AI neural network with onboard compute for their Atlas robot. DARPA is also involved.

At the 8:10 mark of the video, a paradigm-shifting AI neural network is mentioned, with onboard compute for control, perception & estimation. It has enabled the Atlas robot to become self-learning.




BRN selling on Friday has resulted in a highly oversold RSI, the same as at the June bottom. The SP then bounced from 78c to 127c in about 6 weeks.

The BRN daily chart is not looking good; however, the weekly chart still appears OK, with the recent sell-off breaking through the 100MA support at 75.5c. With the market rising, it may bounce to the 100MA at 75.5c, and the next target will be the 50MA at $1 on the weekly chart. There is a gap at 77.5c to 85c that needs to be filled.

The S&P 500 MA ribbons on the hourly chart are aligned above each other, similar to the June/July rally: 20MA (green) above 50MA (blue) above 100MA (red) above 200MA (purple).



The 100MA has also just crossed the 200MA, and this has not happened since July during the last rally.

Based on RSI & timeframe, it appears the rally has legs to get to the August peak by the start of December. It will be above the 200MA on the daily chart if it can rally similarly to the June/July rally.

The ASX XTX index is currently at 2,030 with a minimum target of 2,250; based on RSI & timeframe it will be above the 200MA by around 11/11/2022. This could potentially be a trend change, as it has not been above the 200MA this bear market.

The ASX XTX index 100MA has started rising for the first time since the bear market commenced. It appears the 100MA will cross over the 200MA on the daily chart at the start of December 2022.

The crossover could fail, similar to late 2008, resulting in another leg lower for about 3 months = an early March 2023 final bottom. In 2002 the crossover was successful. The difference between the 2008 crossover failure & the 2002 crossover success was the alignment of the MAs.

In 2008 the MAs were not aligned 20/50/100 above each other when the crossover of the 200MA failed, whereas in 2002 the MAs were aligned 20/50/100 above each other when they crossed above the 200MA, resulting in a successful market trend change.

Keep an eye on the MAs' alignment at the start of December for an indication of whether or not the crossover will be successful.

This could be an opportunity to buy discounted, oversold BRN shares, similar to the 37c shares in October 2021. $2.34 intraday peak / 37c = x6.32 upside within 3 months last time. 67c x 6.32 = $4.23, which is within the Elliott wave target range for the next peak.

Will history rhyme and/or repeat this time?

[Chart attached: SPX 30 Oct]


Thanks Steve.
I personally still think this will take some heavy buying to get the whole market out of trouble. We have been making lower lows for a while now on both the SPX and the ASX.

Funny how you mention 11/11/2022 being a key date as a step to escape this; I saw something similar and shared it with a few mates (7th Nov), but let us not forget that 8 November is the midterm election in the US. So some crazy volatility is coming up.

The bear market will test the patience of many, and chew up and spit out the retail investors, so take care everyone.
 
  • Like
  • Love
  • Wow
Reactions: 20 users

Milo

Member
As I said, I only know what I am told and read.

Therefore I can tell you that you are wrong as to your view at point 2. I was present at the 2019 AGM, as were many who post here, when Peter van der Made made it perfectly clear that he could throw away the GPUs and CPUs in current cars and do all the compute with 100 AKD1000 chips, as they were then called. He further stated that with nine AKD1000s, including redundancy, he could cover all the sensors necessary for ADAS.

(I also intended to mention that, as well as the throwing-out statement, he referenced the fact that the current compute was costing a minimum of three thousand plus dollars and that AKIDA was likely to be $10 a chip in bulk, so the total compute cost per vehicle would be about one third, or roughly $1,000; he could not have been any clearer. Following this, in early 2020 the then CEO Mr Dinardo, in one of his webinars, went to some length to hose down shareholder enthusiasm about this use of AKIDA, saying that while Peter van der Made was correct, their sole focus was to target the Edge.)

This debate about AKIDA only being suitable for Edge compute is a long-dead, smelly red herring. It has been stated by the company many times that they have chosen to target the Edge because there is no incumbent player, and so they do not have any true competitor in that market.

The intention has always been to move back up the supply chain into the data centre as the company becomes established and the technology is recognised and understood.

You will notice that up to this point @Diogenese has not taken up my challenge to correct my viewpoint, as he has on many prior occasions.

I accept you are genuine, but you really do need to do a lot more research around the AKIDA technology value proposition.

For example, the following research from Sandia makes clear that the full power of SNN computing is not yet understood, with the reservation that this is not the case where Peter van der Made and his team are concerned:


Sandia Researchers Show Neuromorphic Computing Widely Applicable​

March 10, 2022
ALBUQUERQUE, N.M., March 10, 2022 — With the insertion of a little math, Sandia National Laboratories researchers have shown that neuromorphic computers, which synthetically replicate the brain’s logic, can solve more complex problems than those posed by artificial intelligence and may even earn a place in high-performance computing.
The findings, detailed in a recent article in the journal Nature Electronics, show that neuromorphic simulations using the statistical method called random walks can track X-rays passing through bone and soft tissue, disease passing through a population, information flowing through social networks and the movements of financial markets, among other uses, said Sandia theoretical neuroscientist and lead researcher James Bradley Aimone.
Showing a neuromorphic advantage, both the IBM TrueNorth and Intel Loihi neuromorphic chips observed by Sandia National Laboratories researchers were significantly more energy efficient than conventional computing hardware. The graph shows Loihi can perform about 10 times more calculations per unit of energy than a conventional processor.
“Basically, we have shown that neuromorphic hardware can yield computational advantages relevant to many applications, not just artificial intelligence to which it’s obviously kin,” said Aimone. “Newly discovered applications range from radiation transport and molecular simulations to computational finance, biology modeling and particle physics.”
In optimal cases, neuromorphic computers will solve problems faster and use less energy than conventional computing, he said.
The bold assertions should be of interest to the high-performance computing community because finding capabilities to solve statistical problems is of increasing concern, Aimone said.
“These problems aren’t really well-suited for GPUs [graphics processing units], which is what future exascale systems are likely going to rely on,” Aimone said. “What’s exciting is that no one really has looked at neuromorphic computing for these types of applications before.”
Sandia engineer and paper author Brian Franke said, “The natural randomness of the processes you list will make them inefficient when directly mapped onto vector processors like GPUs on next-generation computational efforts. Meanwhile, neuromorphic architectures are an intriguing and radically different alternative for particle simulation that may lead to a scalable and energy-efficient approach for solving problems of interest to us.”
Franke models photon and electron radiation to understand their effects on components.
The team successfully applied neuromorphic-computing algorithms to model random walks of gaseous molecules diffusing through a barrier, a basic chemistry problem, using the 50-million-neuron Loihi platform Sandia received approximately a year and a half ago from Intel Corp., said Aimone. “Then we showed that our algorithm can be extended to more sophisticated diffusion processes useful in a range of applications.”
The claims are not meant to challenge the primacy of standard computing methods used to run utilities, desktops and phones. “There are, however, areas in which the combination of computing speed and lower energy costs may make neuromorphic computing the ultimately desirable choice,” he said.
Unlike the difficulties posed by adding qubits to quantum computers — another interesting method of moving beyond the limitations of conventional computing — chips containing artificial neurons are cheap and easy to install, Aimone said.
There can still be a high cost for moving data on or off the neurochip processor. “As you collect more, it slows down the system, and eventually it won’t run at all,” said Sandia mathematician and paper author William Severa. “But we overcame this by configuring a small group of neurons that effectively computed summary statistics, and we output those summaries instead of the raw data.”
Severa wrote several of the experiment’s algorithms.
Like the brain, neuromorphic computing works by electrifying small pin-like structures, adding tiny charges emitted from surrounding sensors until a certain electrical level is reached. Then the pin, like a biological neuron, flashes a tiny electrical burst, an action known as spiking. Unlike the metronomical regularity with which information is passed along in conventional computers, said Aimone, the artificial neurons of neuromorphic computing flash irregularly, as biological ones do in the brain, and so may take longer to transmit information. But because the process only depletes energies from sensors and neurons if they contribute data, it requires less energy than formal computing, which must poll every processor whether contributing or not. The conceptually bio-based process has another advantage: Its computing and memory components exist in the same structure, while conventional computing uses up energy by distant transfer between these two functions. The slow reaction time of the artificial neurons initially may slow down its solutions, but this factor disappears as the number of neurons is increased so more information is available in the same time period to be totaled, said Aimone.
The process begins by using a Markov chain — a mathematical construct where, like a Monopoly gameboard, the next outcome depends only on the current state and not the history of all previous states. That randomness contrasts, said Sandia mathematician and paper author Darby Smith, with most linked events. For example, he said, the number of days a patient must remain in the hospital are at least partially determined by the preceding length of stay.
Beginning with the Markov random basis, the researchers used Monte Carlo simulations, a fundamental computational tool, to run a series of random walks that attempt to cover as many routes as possible.
“Monte Carlo algorithms are a natural solution method for radiation transport problems,” said Franke. “Particles are simulated in a process that mirrors the physical process.”
The energy of each walk was recorded as a single energy spike by an artificial neuron reading the result of each walk in turn. “This neural net is more energy efficient in sum than recording each moment of each walk, as ordinary computing must do. This partially accounts for the speed and efficiency of the neuromorphic process,” said Aimone. More chips will help the process move faster using the same amount of energy, he said.
The next version of Loihi, said Sandia researcher Craig Vineyard, will increase its current chip scale from 128,000 neurons per chip to up to one million. Larger scale systems then combine multiple chips to a board.
“Perhaps it makes sense that a technology like Loihi may find its way into a future high-performance computing platform,” said Aimone. “This could help make HPC much more energy efficient, climate-friendly and just all around more affordable.”
The work was funded under the NNSA Advanced Simulation and Computing program and Sandia’s Laboratory Directed Research and Development program.

A random walk diffusion model based on data from Sandia National Laboratories algorithms running on an Intel Loihi neuromorphic platform. Video courtesy of Sandia National Laboratories.
About Sandia National Laboratories
Sandia National Laboratories is a multimission laboratory operated by National Technology and Engineering Solutions of Sandia LLC, a wholly owned subsidiary of Honeywell International Inc., for the U.S. Department of Energy’s National Nuclear Security Administration. Sandia Labs has major research and development responsibilities in nuclear deterrence, global security, defense, energy technologies and economic competitiveness, with main facilities in Albuquerque, New Mexico, and Livermore, California.

Source: Sandia National Laboratories

*********************************************************************************************************************************************************
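Before moving on, for readers who want a concrete picture of the "random walks" and "Monte Carlo" runs described in the Sandia article, the toy sketch below runs a few thousand one-dimensional walks on an ordinary CPU. It only illustrates the class of problem Sandia mapped onto Loihi; it says nothing about their actual neuromorphic implementation.

```python
import random

def steps_to_cross_barrier(barrier: int = 10, max_steps: int = 10_000) -> int:
    """One Markov-chain walk: each +/-1 step depends only on the current position.
    Returns how many steps a particle takes to diffuse past the barrier."""
    position = 0
    for step in range(1, max_steps + 1):
        position += random.choice((-1, 1))
        if abs(position) >= barrier:
            return step
    return max_steps

# Monte Carlo: repeat many independent walks and report only a summary statistic,
# analogous to reading out summaries rather than every step of every walk.
walks = [steps_to_cross_barrier() for _ in range(5_000)]
print("mean steps to cross the barrier:", sum(walks) / len(walks))
```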

Moving on from this, in the final report to NASA on the outcome of its Phase 1 project to provide a design for a HARDSIL AKIDA 1000 for unconnected autonomous space applications, Vorago stated that AKIDA would allow the Rover to achieve full autonomy and the NASA goal of speeds of up to 20 kph.

AKIDA technology is not just about processing sensors, and again, accepting that you are genuine, why do you not contact Edge Impulse or Brainchip and ask them for the details of the benchmarking they engaged in?

I will say this: at the 2021 AI Field Day, Anil Mankar said this about GPUs and AKIDA:


"And that's why we are able to do low power analysis.. The same Mobilenet V1 that you can run on a GPU I can do inference on it. I'll be doing exactly the same level of computation that you want to do because depending on the number of parameters in your CNN, what your input resolutions, you have to do certain calculations to find object classification. We do something similar, but because we do an event domain, I will not be doing, I will be avoiding operations where they are zero value events."

Your statement was that if AKIDA could do these things then our valuation would not be where it is, and that is the whole point: AKIDA technology is beyond anything you are imagining its limits to be.

Finally, these researchers do not share your view regarding being able to do regression analysis using SNN technology; in fact, they think they are on to something. However, Peter van der Made and team beat them to it by a significant margin:


Spiking Neural Networks for Nonlinear Regression
Alexander Henkes, Jason K. Eshraghian, Member, IEEE, Henning Wessels
Abstract—Spiking neural networks, also often referred to as the third generation of neural networks, carry the potential for a massive reduction in memory and energy consumption over traditional, second-generation neural networks. Inspired by the undisputed efficiency of the human brain, they introduce temporal and neuronal sparsity, which can be exploited by next-generation neuromorphic hardware. To broaden the pathway toward engineering applications, where regression tasks are omnipresent, we introduce this exciting technology in the context of continuum mechanics. However, the nature of spiking neural networks poses a challenge for regression problems, which frequently arise in the modeling of engineering sciences. To overcome this problem, a framework for regression using spiking neural networks is proposed. In particular, a network topology for decoding binary spike trains to real numbers is introduced, utilizing the membrane potential of spiking neurons. Several different spiking neural architectures, ranging from simple spiking feed-forward to complex spiking long short-term memory neural networks, are derived. Numerical experiments directed towards regression of linear and nonlinear, history-dependent material models are carried out. As SNNs exhibit memory-dependent dynamics, they are a natural fit for modelling history-dependent materials which are prevalent through all of engineering sciences. For example, we show that SNNs can accurately model materials that are stressed beyond reversibility, which is a challenging type of non-linearity. A direct comparison with counterparts of traditional neural networks shows that the proposed framework is much more efficient while retaining precision and generalizability. All code has been made publicly available in the interest of reproducibility and to promote continued enhancement in this new domain.

(Sorry, I left out that this paper was published in October 2022.)
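(For anyone who wants a feel for what "decoding binary spike trains to real numbers, utilizing the membrane potential" can look like, here is a bare-bones illustration of a leaky integrate-and-fire style readout. It follows the general idea in the abstract only; it is not the authors' architecture or code.)

```python
def membrane_readout(spike_train, weight=0.3, beta=0.9):
    """A leaky readout neuron integrates incoming binary spikes into a continuous
    membrane potential; that potential, not the spikes, is the regression output.
    beta is the leak (decay) factor per time step."""
    mem = 0.0
    for s in spike_train:
        mem = beta * mem + weight * s  # leaky integration, no firing/reset
    return mem

# Two rate-coded inputs: the denser spike train decodes to a larger real number.
print(membrane_readout([1, 0, 1, 1, 0, 1, 1, 1]))  # higher value
print(membrane_readout([0, 0, 1, 0, 0, 0, 1, 0]))  # lower value
```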


My opinion only so DYOR
FF

AKIDA BALLISTA


Well I just finished reading many comments on these and thanks @Diogenese, @JDelekto and @alwaysgreen for your thoughts.
@Fact Finder I saw you had made fun of me again due to the delayed reply but apologies for having a life outside TSEx and BRN. We had some visitors for dinner last night.

I see that you haven't answered my question in point 2 but are referring to a statement from Peter. I acknowledge that he was speaking to shareholders and not to a tech audience, so he may have used certain examples or words that meant something similar to make the message more comprehensible, or maybe he just got carried away; but I wasn't there, so I don't know in what context he said it.
But I don't believe that and think GPUs are still required in cars in addition to Akida. If I'm incorrect, I would like to be pointed to at least one presentation slide or other company source that claims Akida can replace GPUs in cars or other graphics-heavy applications like gaming. My opinion on this is actually perfectly summed up by @JDelekto.

I never said that Akida is only suitable for Edge applications. What I said was it would outperform GPUs in edge applications when it comes to power consumption and cost. I believe that is why the company is targeting edge use cases.

Regarding the use of Akida in data centres, from memory what the company said was they can take some load off data centres by processing data at the edge device and thereby eliminating the need to send data to the data centre for processing. Would love to be corrected here as well and learn how Akida can fit into a data centre (To be used for general data processing like streaming, banking transactions, Office365 data, etc and not for HPC use cases as described in Sandia report because those are specific use cases)

With that in mind I would like to point you to a couple of statements from the same Sandia research you quoted above;

"In optimal cases, neuromorphic computers will solve problems faster and use less energy than conventional computing, he said."
"The bold assertions should be of interest to the high-performance computing community because finding capabilities to solve statistical problems is of increasing concern, Brad said." - So this is not one size fits all and there are specific use cases where SNN may outperform GPUs.

"The claims are not meant to challenge the primacy of standard computing methods used to run utilities, desktops and phones. “There are, however, areas in which the combination of computing speed and lower energy costs may make neuromorphic computing the ultimately desirable choice,” he said." - So we can't just dismiss GPUs and CPUs just because SNN can perform better in specific use cases.

Therefore, you still can't say "AKIDA 1.0 outperforms GPUs and CPUs" in a general way without mentioning in what way or in what use case.
Anil's quote you have given is actually how you should have made your claim as well. There he is explaining that he is comparing the power consumption of Akida and a GPU when performing the same task.

On another note, if you look at the Akida Shuttle PC system information, it has an Intel Core i5, i7 or i9 processor to run the OS. It also includes a heat-pipe cooling system with two fans. Now, given that Peter and the team understand the full power of SNN computing (which I don't challenge), why would they go to all that trouble to include a crappy Intel CPU in there when it could have been replaced by a superior Akida 1.0?
 
  • Like
  • Fire
Reactions: 6 users
Well I just finished reading many comments on these and thanks @Diogenese, @JDelekto and @alwaysgreen for your thoughts.
@Fact Finder I saw you had made fun of me again due to the delayed reply but apologies for having a life outside TSEx and BRN. We had some visitors for dinner last night.

I see that you haven't answered my question in point 2 but are referring to a statement from Peter. I acknowledge that he was speaking to shareholders and not to a tech audience, so he may have used certain examples or words that meant something similar to make the message more comprehensible, or maybe he just got carried away; but I wasn't there, so I don't know in what context he said it.
But I don't believe that and think GPUs are still required in cars in addition to Akida. If I'm incorrect, I would like to be pointed to at least one presentation slide or other company source that claims Akida can replace GPUs in cars or other graphics-heavy applications like gaming. My opinion on this is actually perfectly summed up by @JDelekto.

I never said that Akida is only suitable for Edge applications. What I said was it would outperform GPUs in edge applications when it comes to power consumption and cost. I believe that is why the company is targeting edge use cases.

Regarding the use of Akida in data centres, from memory what the company said was they can take some load off data centres by processing data at the edge device and thereby eliminating the need to send data to the data centre for processing. Would love to be corrected here as well and learn how Akida can fit into a data centre (To be used for general data processing like streaming, banking transactions, Office365 data, etc and not for HPC use cases as described in Sandia report because those are specific use cases)

With that in mind I would like to point you to a couple of statements from the same Sandia research you quoted above;

"In optimal cases, neuromorphic computers will solve problems faster and use less energy than conventional computing, he said."
"The bold assertions should be of interest to the high-performance computing community because finding capabilities to solve statistical problems is of increasing concern, Brad said." - So this is not one size fits all and there are specific use cases where SNN may outperform GPUs.

"The claims are not meant to challenge the primacy of standard computing methods used to run utilities, desktops and phones. “There are, however, areas in which the combination of computing speed and lower energy costs may make neuromorphic computing the ultimately desirable choice,” he said." - So we can't just dismiss GPUs and CPUs just because SNN can perform better in specific use cases.

Therefore, you still can't say "AKIDA 1.0 outperforms GPUs and CPUs" in a general way without mentioning in what way or in what use case.
Anil's quote you have given is actually how you should have made your claim as well. There he is explaining that he is comparing the power consumption of Akida and a GPU when performing the same task.

On another note, if you look at the Akida Shuttle PC system information, it has an Intel Core i5, i7 or i9 processor to run the OS. It also includes a heat-pipe cooling system with two fans. Now, given that Peter and the team understand the full power of SNN computing (which I don't challenge), why would they go to all that trouble to include a crappy Intel CPU in there when it could have been replaced by a superior Akida 1.0?
I am a shareholder, nothing more, nothing less. I believe what the Chief Technology Officer of the company states publicly at an AGM in a presentation to shareholders, as he has obligations at law not to mislead shareholders.

It is not my job to convince you of the truth of his statements; they are true till proven otherwise.

This boils down to your statement:

“But I don't believe that and think GPUs are still required in cars in addition to Akida.”

I don't believe you; I prefer to believe the CTO. So prove you are right. That is not my job, and, using your own words, I also have a life, and unbelievable as it may be to you, I only post from memory, so my posting is a very small part of my daily activities.

Finally, if I made fun of you I apologise, and if you point out where the offending comment is I will remove it.

As for your cherry-picked portions of the Sandia article: I posted the article.

I found it originally.

I posted it then and I reposted it for your reference. I am a technophobe, so I am not going to debate a scientific paper with an electrical engineer. What is the point?

My point is that until this paper was published, the whole world, including highly trained electrical engineers, did not believe this was something SNNs could manage. WRONG.

Peter van der Made, the CTO, did believe it way back in 2019, and he said so to shareholders.

Before the paper showing Quantum Annealing algorithms running on SNNs, even highly trained electrical engineers did not think it was possible.

Before Luca Verre, CEO at Prophesee, was introduced to AKIDA technology, having tried Loihi and SynSense, he believed he and his co-inventor had built a house of straw.

So I have put up what I have heard and read, and I have constantly disclaimed any technical expertise since day one at HC, so as you are a highly trained electrical engineer, do your own research and prove Peter van der Made and all the others wrong.

Proving me wrong takes no one anywhere, as I am not asserting anything other than what I said at the outset: that a GPU costing hundreds to thousands of dollars has fps processing rates that fall well short of a $25.00 AKIDA chip.

But I will give you this for free: the original notice from Brainchip announcing the agreement with MegaChips states:

https://***************.com.au/brainchip-holdings-asxbrn-partners-with-megachips-2021-11-22/amp/
  • BrainChip Holdings (BRN) partners with MegaChips to develop the BrainChip Akida IP through a multi-year license agreement
  • The agreement will give MegaChip an intellectual property license for designing and manufacturing BrainChip’s Akida tech into external customer’s system on chip designs
  • In exchange, BrainChip will receive an upfront license fee and additional payments over the term of the agreement
  • MegaChips says the agreement will enable it to supply the automotive, camera, gaming and industrial robotics markets
  • BrainChip Holdings is up 9.26 per cent to 59 cents at 11:08 AEDT
I draw your attention to the word ‘gaming’, a use case which you suggest to be impossible.

Anyway, as I said, if you point out where exactly I made fun of you I will remove it, as this was not my intent. After all, why would I want to make fun of you or anyone else who is a genuine shareholder in Brainchip?

My opinion only DYOR
FF

AKIDA BALLISTA
 
Last edited:
  • Like
  • Love
  • Fire
Reactions: 64 users

TopCat

Regular
With some discussion just recently regarding Ford, has BlueCruise been looked at lately? It has just had some improvements added. All these ADAS systems start to sound the same when researching them. The thing which stood out for me here was the driver-facing camera to detect eyes on the road.

Ford Beefs Up BlueCruise AI​


The BlueCruise technology builds on the Intelligent Adaptive Cruise Control with Stop-and-Go and Lane Centering and Speed Sign Recognition. The vehicle then uses its driver-facing camera to make sure you’re keeping your eyes on the road.
 
  • Like
  • Fire
  • Love
Reactions: 6 users

Diogenese

Top 20
Well I just finished reading many comments on these and thanks @Diogenese, @JDelekto and @alwaysgreen for your thoughts.
@Fact Finder I saw you had made fun of me again due to the delayed reply but apologies for having a life outside TSEx and BRN. We had some visitors for dinner last night.

I see that you haven't answered my question in point 2 but are referring to a statement from Peter. I acknowledge that he was speaking to shareholders and not to a tech audience, so he may have used certain examples or words that meant something similar to make the message more comprehensible, or maybe he just got carried away; but I wasn't there, so I don't know in what context he said it.
But I don't believe that and think GPUs are still required in cars in addition to Akida. If I'm incorrect, I would like to be pointed to at least one presentation slide or other company source that claims Akida can replace GPUs in cars or other graphics-heavy applications like gaming. My opinion on this is actually perfectly summed up by @JDelekto.

I never said that Akida is only suitable for Edge applications. What I said was it would outperform GPUs in edge applications when it comes to power consumption and cost. I believe that is why the company is targeting edge use cases.

Regarding the use of Akida in data centres, from memory what the company said was they can take some load off data centres by processing data at the edge device and thereby eliminating the need to send data to the data centre for processing. Would love to be corrected here as well and learn how Akida can fit into a data centre (To be used for general data processing like streaming, banking transactions, Office365 data, etc and not for HPC use cases as described in Sandia report because those are specific use cases)

With that in mind I would like to point you to a couple of statements from the same Sandia research you quoted above;

"In optimal cases, neuromorphic computers will solve problems faster and use less energy than conventional computing, he said."
"The bold assertions should be of interest to the high-performance computing community because finding capabilities to solve statistical problems is of increasing concern, Brad said." - So this is not one size fits all and there are specific use cases where SNN may outperform GPUs.

"The claims are not meant to challenge the primacy of standard computing methods used to run utilities, desktops and phones. “There are, however, areas in which the combination of computing speed and lower energy costs may make neuromorphic computing the ultimately desirable choice,” he said." - So we can't just dismiss GPUs and CPUs just because SNN can perform better in specific use cases.

Therefore, you still can't say "AKIDA 1.0 outperforms GPUs and CPUs" in a general way without mentioning in what way or in what use case.
Anil's quote you have given is actually how you should have made your claim as well. There he is explaining that he is comparing the power consumption of Akida and a GPU when performing the same task.

On another note, if you look at the Akida Shuttle PC system information, it has an Intel Core i5, i7 or i9 processor to run the OS. It also includes a heat-pipe cooling system with two fans. Now, given that Peter and the team understand the full power of SNN computing (which I don't challenge), why would they go to all that trouble to include a crappy Intel CPU in there when it could have been replaced by a superior Akida 1.0?
I'm not sure of the relevance of the plumbing of the Shuttle to Akida. The Shuttle DH410 is a COTS product into which the Akida PCIe board is designed to plug. Peter and team did not include the cooling system; it was already included as part of the Shuttle DH410.
https://us.shuttle.com/products/dh410/

The Akida Shuttle PCIe development kit is intended for demonstration/familiarization:
https://shop.brainchipinc.com/products/akida-enablement-platform-shuttle-pc
" DEVELOPMENT KITS ARE NOT INTENDED TO BE USED FOR PRODUCTION PURPOSES"
 
Last edited:
  • Like
  • Fire
  • Love
Reactions: 21 users