BRN Discussion Ongoing

Bravo

If ARM was an arm, BRN would be its biceps💪!
I hope you don't mind a frivolous post, I just want to be on the iconic 1000th page of the 1000 eyes 🤗

Hi @Quercuskid, I just landed the #20,000th post on the iconic 1000th page of the 1000 eyes, thanks to your heads-up! :love:
 
  • Like
  • Haha
  • Love
Reactions: 41 users
Oh dear, please don't shoot the messenger!

PS: Do you think we should email this ding-bat and let him know how much faster the Rover is going to go because of BrainChip?





Why did the BrainChip share price crash 30% in June?​

BrainChip’s shares were sold off in June. Here’s why…


James Mickleboro
Published July 3, 9:45 am AEST

BRN
A man in a suit face palms at the downturn happening with shares today.

Image source: Getty Images




The BrainChip Holdings Ltd (ASX: BRN) share price had a disappointing month in June.

The semiconductor company’s shares ended the month 30% lower than where they started it.

This was despite BrainChip’s shares being added to the illustrious ASX 200 index during the month.

What happened to the BrainChip share price?​


Investors were selling down the BrainChip share price in June amid broad market weakness. Interest rates were increasing to combat rising inflation, and this put pressure on equities.

This was particularly the case at the higher risk side of the market, where BrainChip certainly sits.

For example, even after June’s decline, the company has a market capitalisation of over $1.4 billion despite its revenue year to date being just $205,000.

When annualised to $820,000, this means its shares are changing hands for a ridiculous 1700 times revenue. And this is before the company has even proven that it has a market for its Akida neuromorphic processor.

In light of this, it is no surprise that when the market wobbles, the BrainChip share price tumbles.

What’s next?​


The next 12 months will be very interesting for the BrainChip share price. With the company now commercialising its technology, it will have to let its sales do the talking rather than its press releases or podcasts.

That may not be as easy as many first thought, especially given that some of the hyped-up partnerships from the last 2-3 years appear to have amounted to nothing.

For example, its partnership with NASA was big news back in 2020 and is still talked about today as a reason to invest in BrainChip. But this seems to have ended after just three weeks on 18 January 2021 based on NASA data. It’s also worth noting that there was no mention of NASA in its most recent annual report.

So, should sales fail to materialise in a market dominated by some huge tech behemoths such as AMD, Intel, and Nvidia, then there’s a distinct danger that its days as a billion dollar plus company could be numbered.




PS: This was my reaction!


View attachment 10633
There are so many errors in this ‘A Story’ by James Mickleboro, aged twelve and a half, that it would crash the web if you tried to send a comprehensive reply.

I try not to judge by appearance, but it does come to something when Mr. Bean looks more intelligent.

My opinion only DYOR
FF

AKIDA BALLISTA
 
  • Haha
  • Like
  • Fire
Reactions: 37 users
There are so many errors in this ‘A Story’ by James Mickleboro, aged twelve and a half, that it would crash the web if you tried to send a comprehensive reply.

I try not to judge by appearance, but it does come to something when Mr. Bean looks more intelligent.

My opinion only DYOR
FF

AKIDA BALLISTA
texturised crop



“Thomas Shelby's texturised crop is short on the back and sides of the head with a slightly longer length on top. Ask the barber: Ask for a "crop" but indicate you would like to have the fringe sweeping to the right or left and short around the back and sides with extra length left on top. 20 Aug 2019”

It comes to something when someone in the Australian Financial Services industry chooses to model his hairstyle on that of Thomas Shelby, a fictional TV character who is a gangster, murderer, racketeer and drunk who has visions and speaks to the dead.

My opinion only DYOR
FF

AKIDA BALLISTA
 
  • Haha
Reactions: 10 users
From Brainchip’s Home page at 4.07 pm 3 July, 2022.

I suppose attempting to manipulate the share price of an ASX listed company by publishing a falsehood is a kind of racketeering:

Trusted by:​



[Partner logos: MegaChips, Renesas, NASA, Valeo, Mercedes-Benz]

The NASA Seal is not permitted on merchandise and is only permitted to be used by the NASA Administrator or Administrator's office. The names, logos, devices or graphics of NASA programs may be used on merchandise subject to review and approval by NASA, and subject to the prohibitions on co-branding noted above.23 Mar 2022

https://www.nasa.gov › features › M...

NASA Regulations for Merchandising Requests



My opinion only DYOR
FF

AKIDA BALLISTA
 

  • Like
  • Fire
  • Love
Reactions: 41 users
From Brainchip’s Home page at 4.07 pm 3 July, 2022.

I suppose attempting to manipulate the share price of an ASX listed company by publishing a falsehood is a kind of racketeering:

Trusted by:​



[Partner logos: MegaChips, Renesas, NASA, Valeo, Mercedes-Benz]

The NASA Seal is not permitted on merchandise and is only permitted to be used by the NASA Administrator or Administrator's office. The names, logos, devices or graphics of NASA programs may be used on merchandise subject to review and approval by NASA, and subject to the prohibitions on co-branding noted above.23 Mar 2022
View attachment 10636
https://www.nasa.gov › features › M...

NASA Regulations for Merchandising Requests



My opinion only DYOR
FF

AKIDA BALLISTA
To further amplify the ignorance of this ‘Story’, read the following extract on how NASA partnerships work:

Frequently Asked Questions About NASA Partnerships​


1. What is a NASA “partnership?”
NASA uses the term “partnership” to describe a wide variety of relationships with various external entities (e.g., contractors, academia, the public, other stakeholders). For the purpose of these FAQs, a “partnership” is a distinct type of non-procurement business relationship that does not involve the acquisition of goods and services for the direct benefit of the Agency.

2. Why does NASA engage in Partnerships?
Partnerships help the Agency accomplish its mission objectives in several ways, including:
  • Facilitating collaborative opportunities with domestic and international partners
  • Helping NASA resolve gaps in technical capabilities that are important to meeting our mission objectives
  • Supporting U.S. economic innovation and industrial competitiveness
  • Serving as a tool for meeting NASA’s mandate under the Space Act of encouraging the “fullest commercial use of space”
  • Helping to maintain essential NASA expertise and facilities
  • Advancing NASA’s STEM education and outreach goals

3. What is the difference between Partnership Agreements and Procurement Contracts?
  • Partnership agreements are generally used to: (1) support the needs of the external partner where the partner reimburses government expenses (reimbursable partnership), or (2) achieve a mutual goal when working collaboratively on a no-exchange-of-funds basis (nonreimbursable partnership).
  • Procurement contracts, which are subject to the Federal Acquisition Regulations (FAR) and procurement statutes, are required when the principal purpose of the transaction is to acquire property or services for the direct benefit or use of the Federal Government.
  • Both procurements and partnerships are important tools used by NASA in meeting its missions.

4. What is a Space Act Agreement (SAA)?
The most common legal instrument used to formulate partnerships at NASA is called the Space Act Agreement (SAA). NASA is authorized by Congress to enter into these kinds of agreement per its “Other Transactions Authority (OTA)” under the National Aeronautics and Space Act (51 U.S.C. § 20113(e)). These agreements are similar to Cooperative Research and Development Agreements (CRADAs) that some other Federal agencies use when partnering with industry. SAAs can be nonreimbursable, reimbursable, funded, or unfunded.
  • Nonreimbursable SAAs are collaborative agreements in which NASA and another party each contribute resources – which can include personnel, facilities, expertise, equipment or technology – with no transfer of funds between the parties. Each party agrees to fund its own participation in the activity for their mutual benefit.
  • Reimbursable SAAs involve the payment of funds to NASA in exchange for the use of unique NASA resources – personnel, facilities, expertise, equipment or technology. The terms, conditions and schedules are negotiable, but NASA must be paid in advance for each stage of the effort. NASA is prohibited from competing with the U.S. private sector, so NASA may not provide services which are reasonably available from the U.S. private sector.
  • Funded SAAs are agreements where NASA provides funding to a domestic Partner to accomplish an Agency objective where there is no direct benefit to NASA.
  • Unfunded SAAs are agreements in which the Agency provides goods, services, facilities, or equipment on a no-exchange-of-funds basis to a domestic Partner to accomplish an Agency objective where there is no direct benefit to NASA.

5. With whom does NASA partner?
NASA partners with a wide variety of entities, including:
  • U.S. Industry (large and small)
  • Other Federal agencies
  • Research institutions
  • Public outreach organizations (e.g., museums)
  • State and local governments
  • Colleges and universities
  • Foreign entities (businesses, academia, research institutions, governments)
  • Professional associations and non-profits

6. How do I partner with NASA?
There are multiple ways to initiate a partnership with NASA—
  • In response to a Public Announcement: NASA uses various types of public announcements to communicate information about available assets. These formal communications, including Announcement for Proposal (AFP), Request for Information (RFI) and Notice of Availability (NOA) can be found on the Contract Opportunities website (beta.SAM.gov), and the NASA Acquisition Internet Service website (https://prod.nais.nasa.gov/cgibin/nais/index.cgi).
  • Other inquiries: Inquiries, questions, or request for information can be sent to the applicable NASA partnerships points of contact listed here: NASA Locations, Capabilities and Points of Contact | NASA

Having read this, you will understand that when NASA wants to keep something secret it will enter a partnership. No money need change hands, which defeats stock exchange requirements for full disclosure here and in the USA.

My opinion only DYOR
FF

AKIDA BALLISTA
 
  • Like
  • Love
  • Fire
Reactions: 38 users

Quercuskid

Regular
texturised crop

View attachment 10634

“Thomas Shelby's texturised crop is short on the back and sides of the head with a slightly longer length on top. Ask the barber: Ask for a "crop" but indicate you would like to have the fringe sweeping to the right or left and short around the back and sides with extra length left on top. 20 Aug 2019”

It comes to something when someone in the Australian Financial Services industry chooses to model his hairstyle on that of Thomas Shelby, a fictional TV character who is a gangster, murderer, racketeer and drunk who has visions and speaks to the dead.

My opinion only DYOR
FF

AKIDA BALLISTA
But Shelby does have incredibly beautiful blue eyes
 
  • Haha
  • Love
Reactions: 3 users
The following covers the AIoT market and does not mention BrainChip by name, but there are two very interesting paragraphs which I have emboldened and partitioned to make them easy to locate:

What’s a Neural microcontroller?​

MAY 30, 2022 BY JEFF SHEPARD

FacebookTwitterLinkedInEmail
The ability to run neural networks (NNs) on MCUs is growing in importance to support artificial intelligence (AI) and machine learning (ML) in the Internet of Things (IoT) nodes and other embedded edge applications. Unfortunately, running NNs on MCUs is challenging due to the relatively small memory capacities of most MCUs. This FAQ details the memory challenges of running NNs on MCUs and looks at possible system-level solutions. It then presents recently announced MCUs with embedded NN accelerators. It closes by looking at how the Glow machine learning compiler for NNs can help reduce memory requirements.
Running NNs on MCUs (sometimes called tinyML) offers advantages over sending raw data to the cloud for analysis and action. Those advantages include the ability to tolerate poor or even no network connectivity and safeguard data privacy and security. MCU memory capacities are often limited to the main memory of hundreds of KB of SRAM, often less, and byte-addressable Flash of no more than a few MBs for read-only data.
To achieve high accuracy, most NNs require larger memory capacities. The memory needed by a NN includes read-only parameters and so-called feature maps that contain intermediate and final results. It can be tempting to process an NN layer on an MCU in the embedded memory before loading the next layer, but it’s often impractical. A single NN layer’s parameters and feature maps can require up to 100 MB of storage, exceeding the MCU memory size by as much as two orders of magnitude. Recently developed NNs with higher accuracies require even more memory, resulting in a widening gap between the available memory on most MCUs and the memory requirements of NNs (Figure 1).
Figure 1: The available memory on most MCUs is much too small to support the needs of the majority of NNs. (Image: Arxiv)
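To make that gap concrete, here is a minimal back-of-the-envelope sketch in Python (the layer dimensions and the 512 KB SRAM budget are invented for illustration, not taken from the article) showing how quickly a single convolutional layer's weights and output feature map outgrow a typical MCU's SRAM:

# Rough memory estimate for one convolutional layer vs. a typical MCU SRAM budget.
# All dimensions below are illustrative only.
def conv_layer_memory_bytes(in_ch, out_ch, k, out_h, out_w, bytes_per_value=4):
    """Return (weight bytes, output feature-map bytes), assuming float32 values by default."""
    weights = in_ch * out_ch * k * k * bytes_per_value
    feature_map = out_ch * out_h * out_w * bytes_per_value
    return weights, feature_map

weights, fmap = conv_layer_memory_bytes(in_ch=256, out_ch=256, k=3, out_h=56, out_w=56)
mcu_sram = 512 * 1024  # 512 KB, already generous for many MCUs
total = weights + fmap
print(f"weights: {weights / 1e6:.1f} MB, feature map: {fmap / 1e6:.1f} MB, total: {total / 1e6:.1f} MB")
print(f"MCU SRAM: {mcu_sram / 1e6:.2f} MB -> this one layer exceeds SRAM by ~{total / mcu_sram:.0f}x")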
One solution to address MCU memory limitations is to dynamically swap NN data blocks between the MCU SRAM and a larger external (out-of-core) cache memory. Out-of-core NN implementations can suffer from several limitations, including execution slowdown, storage wear-out, higher energy consumption, and data security concerns. If these concerns can be adequately addressed in a specific application, an MCU can be used to run large NNs with full accuracy and generality.
One approach to out-of-core NN implementation is to split one NN layer into a series of tiles small enough to fit into the MCU memory. This approach has been successfully applied to NN systems on servers, where the NN tiles are swapped between the CPU/GPU memory and the server's main memory. Most embedded systems don't have access to the large memory spaces available on servers, and memory-swapping approaches that rely on a relatively small external SRAM or an SD card can run into problems such as lower SD card durability and reliability, slower execution due to I/O operations, higher energy consumption, and the safety and security of out-of-core NN data storage.
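As a toy illustration of the tiling idea described above (not the actual out-of-core systems the article refers to), the following Python/NumPy sketch streams a fully connected layer's weight matrix through a pretend SRAM budget one row-tile at a time; all sizes are made up:

import numpy as np

# Simulated out-of-core execution of one fully connected layer: the full weight
# matrix lives in "external" storage and is streamed through SRAM tile by tile.
rng = np.random.default_rng(0)
in_dim, out_dim = 1024, 512
W_external = rng.standard_normal((out_dim, in_dim)).astype(np.float32)  # stands in for flash/SD card
x = rng.standard_normal(in_dim).astype(np.float32)

sram_budget_bytes = 64 * 1024                                # pretend 64 KB is free for weights
rows_per_tile = max(1, sram_budget_bytes // (in_dim * 4))    # float32 rows that fit at once

y = np.empty(out_dim, dtype=np.float32)
for start in range(0, out_dim, rows_per_tile):
    tile = W_external[start:start + rows_per_tile]           # "load tile into SRAM"
    y[start:start + rows_per_tile] = tile @ x                # compute this slice of the output

assert np.allclose(y, W_external @ x, atol=1e-4)             # identical result to the full layer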
Another approach to overcoming MCU memory limitations is optimizing the NN more aggressively using techniques such as model compression, parameter quantization, and designing tiny NNs from scratch (a quantization sketch follows the list below). These approaches trade away model accuracy, generality, or both. In most cases, the techniques used to fit an NN into the memory space of an MCU leave the NN either too inaccurate (< 60% accuracy) or too specialized and not generalized enough (the NN can only detect a few object classes). These challenges can disqualify the use of MCUs where NNs with high accuracy and generality are needed, even if inference delays can be tolerated, such as:
  • NN inference on slowly changing signals such as monitoring crop health by analyzing hourly photos or traffic patterns by analyzing video frames taken every 20-30 minutes
  • Profiling NNs on the device by occasionally running a full-blown NN to estimate the accuracy of long-running smaller NNs
  • Transfer learning includes retraining NNs on MCUs with data collected from deployment every hour or day
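For the parameter-quantization technique mentioned above, here is a minimal PyTorch sketch using dynamic INT8 quantization of the linear layers in a placeholder model; real tinyML deployments would normally use static, fully integer quantization plus a vendor conversion tool for the target MCU, so treat this only as a size illustration:

import io
import torch
import torch.nn as nn

# Placeholder model; the exact architecture is irrelevant to the size comparison.
model = nn.Sequential(nn.Linear(256, 128), nn.ReLU(), nn.Linear(128, 10))

# Dynamic quantization packs the Linear weights as int8 (activations stay float).
quantized = torch.quantization.quantize_dynamic(model, {nn.Linear}, dtype=torch.qint8)

def checkpoint_kb(m):
    """Serialized state_dict size, a rough proxy for on-device weight storage."""
    buf = io.BytesIO()
    torch.save(m.state_dict(), buf)
    return buf.getbuffer().nbytes / 1024

print(f"float32 checkpoint: {checkpoint_kb(model):.1f} KB")
print(f"int8 checkpoint:    {checkpoint_kb(quantized):.1f} KB  (roughly 4x smaller weights)")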
NN accelerators embedded in MCUs
Many of the challenges of implementing NNs on MCU are being addressed by MCUs with embedded NN accelerators. These advanced MCUs are an emerging device category that promises to provide designers with new opportunities to develop IoT node and edge ML solutions. For example, an MCU with a hardware-based embedded convolutional neural network (CNN) accelerator enables battery-powered applications to execute AI inferences while spending only microjoules of energy (Figure 2).
Figure 2: Neural MCU block diagram showing the basic MCU blocks (upper left) and the CNN accelerator section (right). (Image: Maxim)
*******************************************************************************************************************************************************
The MCU with an embedded CNN accelerator is a system on chip combining an Arm Cortex-M4 with a RISC-V core that can execute application and control code as well as drive the CNN accelerator. The CNN engine has a weight storage memory of 442KB and can support 1-, 2-, 4-, and 8-bit weights (supporting networks of up to 3.5 million weights). On-the-fly AI network updates are supported by the SRAM-based CNN weight memory structure. The architecture is flexible and allows CNNs to be trained using conventional toolsets such as PyTorch and TensorFlow.
*********************************************************************************************************************************************************
Another MCU supplier has pre-announced a neural processing unit integrated with an Arm Cortex core. The new neural MCU is scheduled to ship later this year and will provide the same level of AI performance as a quad-core processor with an AI accelerator but at one-tenth the cost and one-twelfth the power consumption.
*********************************************************************************************************************************************************
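A quick sanity check of the numbers in the first of the emboldened paragraphs above (assuming the 442KB figure is purely packed weight storage, with control overhead ignored):

# How many weights fit in a 442 KB weight memory at each supported bit width?
WEIGHT_MEMORY_BYTES = 442 * 1024   # taking "442KB" as 442 x 1024 bytes

def max_weights(bits):
    return WEIGHT_MEMORY_BYTES * 8 // bits

for bits in (1, 2, 4, 8):
    print(f"{bits}-bit weights: up to {max_weights(bits) / 1e6:.2f} million weights")
# 1-bit weights -> ~3.62 million, in line with the quoted 3.5-million-weight ceiling;
# at 8 bits the same memory holds only ~0.45 million weights.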

Additional neural MCUs are expected to emerge in the near future.

Glow for smaller NN memories
Glow (graph lowering) is a machine learning compiler for neural network graphs. It’s available on Github and is designed to optimize the neural network graphs and generate code for various hardware devices. Two versions of Glow are available, one for Ahead of Time (AOT) and one for Just in Time (JIT) compilations. As the names suggest, AOT compilation is performed offline (ahead of time) and generates an object file (bundle) which is later linked with the application code, while JIT compilation is performed at runtime just before the model is executed.
MCUs are available that support AOT compilation using Glow. The compiler converts the neural networks into object files, which the user converts into a binary image for increased performance and a smaller memory footprint than a JIT (runtime) inference engine. In this case, Glow is used as a software back-end for the PyTorch machine learning framework and the ONNX model format (Figure 3).
Figure 3: Example of an AOT compilation flow diagram using Glow. (Image: NXP)
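The AOT flow above starts from a trained model in PyTorch or ONNX form, so a minimal first step looks like the sketch below: exporting a placeholder PyTorch model to ONNX, which a Glow-based tool chain can then compile into a bundle for the target. The actual Glow/bundle invocation is tool- and vendor-specific and is deliberately not shown here:

import torch
import torch.nn as nn

# Placeholder network standing in for a trained model; the exported .onnx file is
# what a Glow-based AOT tool chain takes as input for bundle generation.
model = nn.Sequential(
    nn.Conv2d(1, 8, kernel_size=3), nn.ReLU(),
    nn.Flatten(), nn.Linear(8 * 26 * 26, 10),
)
model.eval()

dummy_input = torch.randn(1, 1, 28, 28)   # AOT compilation needs a fixed input shape
torch.onnx.export(
    model, dummy_input, "model.onnx",
    input_names=["input"], output_names=["output"],
    opset_version=13,
)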
The Glow NN compiler lowers an NN into a two-phase, strongly typed intermediate representation. Domain-specific optimizations are performed in the first phase, while the second phase performs optimizations focused on specialized back-end hardware features. MCUs are available that combine support for Arm Cortex-M cores and Cadence Tensilica HiFi 4 DSPs, accelerating NN performance by utilizing the Arm CMSIS-NN and HiFi NN libraries, respectively. Their features include:
  • Lower latency and smaller solution size for edge inference NNs.
  • Accelerate NN applications with CMSIS-NN and Cadence HiFi NN Library
  • Speed time to market using the available software development kit
  • Flexible implementation since Glow is open source with Apache License 2.0
Summary
Running NNs on MCUs is important for IoT nodes and other embedded edge applications, but it can be challenging due to MCU memory limitations. Several approaches have been developed to address memory limitations, including out-of-core designs that swap blocks of NN data between the MCU memory and an external memory and various NN software ‘optimization’ techniques. Unfortunately, these approaches involve tradeoffs between model accuracy and generality, which result in the NN becoming too inaccurate and/or too specialized to be of use in practical applications. The emergence of MCUs with integrated NN accelerators is beginning to address those concerns and enables the development of practical NN implementations for IoT and edge applications. Finally, the availability of the Glow NN compiler gives designers an additional tool for optimizing NN for smaller applications”
I have been looking for the following which I have copied from the Brainchip website:

“Akida IP​

BrainChip’s first-to-market neuromorphic processor IP, Akida™, mimics the human brain to analyze only essential sensor inputs at the point of acquisition—processing data with unparalleled efficiency, precision, and economy of energy.

Keeping AI/ML local to the chip and independent of the cloud dramatically reduces latency while improving privacy and data security.

Infer and learn at the edge with Akida’s fully customizable event-based AI neural processor.

Akida’s scalable architecture and small footprint boosts efficiency by orders of magnitude – supporting up to 1024 nodes that connect over a mesh network.

Every node consists of four Neural Processing Units (NPUs), each with scalable and configurable SRAM.

Within each node, the NPUs can be configured as either convolutional or fully connected.

The Akida neural processor is event based – leveraging data sparsity, activations, and weights to reduce the number of operations by at least 2X”
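To give a rough feel for the "leveraging data sparsity, activations, and weights" claim, here is a toy NumPy sketch comparing a dense multiply-accumulate count with an event-based count that skips zero activations and zero weights. The sparsity levels are invented and say nothing about Akida's actual internals:

import numpy as np

# Toy comparison: dense MAC count vs. an event-based count that only "fires"
# where both the input activation and the weight are non-zero.
rng = np.random.default_rng(1)
in_dim, out_dim = 512, 256

activations = rng.standard_normal(in_dim) * (rng.random(in_dim) > 0.7)                    # ~70% zeros
weights = rng.standard_normal((out_dim, in_dim)) * (rng.random((out_dim, in_dim)) > 0.5)  # ~50% zeros

dense_macs = in_dim * out_dim
event_macs = int(np.count_nonzero(weights[:, activations != 0]))

print(f"dense MACs:       {dense_macs}")
print(f"event-based MACs: {event_macs}  (~{dense_macs / event_macs:.1f}x fewer)")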

Now, remembering the partnership with SiFive, consider the first of the emboldened paragraphs again:

“The MCU with an embedded CNN accelerator is a system on chip combining an Arm Cortex-M4 with a RISC-V core that can execute application and control code as well as drive the CNN accelerator. The CNN engine has a weight storage memory of 442KB and can support 1-, 2-, 4-, and 8-bit weights (supporting networks of up to 3.5 million weights). On-the-fly AI network updates are supported by the SRAM-based CNN weight memory structure. The architecture is flexible and allows CNNs to be trained using conventional toolsets such as PyTorch and TensorFlow”

My opinion only DYOR
FF

AKIDA BALLISTA

Unfortunately the thought that the first emboldened paragraph was AKIDA has been well Ogred by @Diogenese but the second emboldened paragraph remains dent free. FF
 
  • Like
  • Love
  • Fire
Reactions: 39 users
But Shelby does have incredibly beautiful blue eyes
You sound like my wife and he also is intelligent and makes lots of money. 😂🤣😂 FF
 
  • Haha
  • Like
Reactions: 9 users

Rskiff

Regular
  • Haha
  • Like
Reactions: 7 users
D

Deleted member 118

Guest
Not sure I agree; it depends on what Akida can do and enables. If a rover can travel 200x faster, cover more distance, and collect more data, it would be like having 200 rovers. A business would expect to get a good slice of those savings, as it would be beneficial to both. I would think there is a lot of scope for BrainChip to make huge money from NASA, depending on what it enables.


It would be the company that owns the rover that makes the extra $, not the company supplying the product.
 
  • Like
Reactions: 2 users

Quatrojos

Regular
I was just reviewing the NVISO slides and trying to work out which company had the most potential upside and growth?

The prices next to the companies are listed in US dollars.

Any suggestions on which company out of that group has the only commercial neuromorphic AI chip heading into this revolutionary and game changing era?

View attachment 10631

Exciting times to be a Brainchip shareholder!
This is a clever comparison. Thanks, SG.
 
  • Like
  • Fire
  • Love
Reactions: 7 users

jtardif999

Regular
"Remember when i talked building up to 100 people or so this year, we think at that point we actually at a pretty good size scale and were are going to say after that the revenue growth will start to outgrow the expense growth pretty rapidly, thats the model that were going to start to see in leverage"

I think you're interpreting what he said incorrectly.

The "expense growth" is the increase in costs of the new employees, not previous expenses, of 7 to 8 million dollars per quarter.

My interpretation is that he's saying the "revenue growth" will begin to outgrow these "increased" expenses.

Expecting around 10 million US dollars in the December quarterly is entirely unrealistic, in my opinion.
Hey Dingo, I think Sean also said that the 7-8 million was staff costs per year, not per quarter (about 2 million per quarter), and that when they ramp up to 100 staff the cost would be about 12 million, or 1 million per month (3 million per quarter), and that's where they see their requirements for the foreseeable future.
 
  • Like
  • Fire
  • Love
Reactions: 13 users

Sirod69

bavarian girl ;-)
  • Like
  • Fire
  • Love
Reactions: 8 users

Sirod69

bavarian girl ;-)
  • Partnership to transform the manufacturing industry with immersive experiences across the lifecycle from design through operation
  • Companies will connect NVIDIA Omniverse and Siemens Xcelerator platforms to enable full-fidelity digital twins and connect software-defined AI systems from edge to cloud
 
  • Like
  • Thinking
  • Love
Reactions: 9 users

Fenris78

Regular
Has anyone heard news on Nanose and Akida? I thought clinical trials were completed in May or June... but I have lost track. I thought that the relationship between BrainChip and Nanose was a promising one.
 
  • Like
Reactions: 7 users

Sirod69

bavarian girl ;-)
Has anyone heard news on Nanose and Akida? I thought clinical trials were completed in May or June... but I have lost track. I thought that the relationship between BrainChip and Nanose was a promising one.
That's a good question. I wrote to Hossam Haick with this question; perhaps I'll get an answer, we will see.
 
  • Like
  • Fire
Reactions: 7 users

hamilton66

Regular
There are so many errors in this ‘A Story’ by James Mickleboro, aged twelve and a half, that it would crash the web if you tried to send a comprehensive reply.

I try not to judge by appearance, but it does come to something when Mr. Bean looks more intelligent.

My opinion only DYOR
FF

AKIDA BALLISTA
F/F, not only does Mr Bean look more intelligent, it appears that he is. GLTA
 
  • Like
  • Love
  • Fire
Reactions: 6 users

Originally posted in a NASA thread by @uiux but now has greater significance:

Proposal Summary​

Proposal Information


Proposal Number:
21-2- H6.22-1743

Phase 1 Contract #:
80NSSC21C0233

Subtopic Title:
Deep Neural Net and Neuromorphic Processors for In-Space Autonomy and Cognition

Proposal Title:
Neuromorphic Enhanced Cognitive Radio

Small Business Concern


Firm: Intellisense Systems, Inc.

Address:

21041 South Western Avenue, Torrance, CA 90501

Phone:
(310) 320-1827

Principal Investigator:

Name:
Mr. Wenjian Wang Ph.D.

E-mail:
wwang@intellisenseinc.com


Address:

21041 South Western Avenue, CA 90501 - 1727

Phone: (310) 320-1827

Business Official:


Name: Selvy Utama

E-mail:
notify@intellisenseinc.com

Address:
21041 South Western Avenue, CA 90501 - 1727

Phone: (310) 320-1827

Summary Details:

Estimated Technology Readiness Level (TRL) :
Begin: 3
End: 4


Technical Abstract (Limit 2000 characters, approximately 200 words):
Intellisense Systems, Inc. proposes in Phase II to advance development of a Neuromorphic Enhanced Cognitive Radio (NECR) device to enable autonomous space operations on platforms constrained by size, weight, and power (SWaP). NECR is a low-size, -weight, and -power (-SWaP) cognitive radio built on the open-source framework, i.e., GNU Radio and RFNoC™, with new enhancements in environment learning and improvements in transmission quality and data processing. Due to the high efficiency of spiking neural networks and their low-latency, energy-efficient implementation on neuromorphic computing hardware, NECR can be integrated into SWaP-constrained platforms in spacecraft and robotics, to provide reliable communication in unknown and uncharacterized space environments such as the Moon and Mars. In Phase II, Intellisense will improve the NECR system for cognitive communication capabilities accelerated by neuromorphic hardware. We will refine the overall NECR system architecture to achieve cognitive communication capabilities accelerated by neuromorphic hardware, on which a special focus will be the mapping, optimization, and implementation of smart sensing algorithms on the neuromorphic hardware. The Phase II smart sensing algorithm library will include Kalman filter, Carrier Frequency Offset estimation, symbol rate estimation, energy detection- and matched filter-based spectrum sensing, signal-to-noise ratio estimation, and automatic modulation identification.

These algorithms will be implemented on COTS neuromorphic computing hardware such as Akida processor from BrainChip, and then integrated with radio frequency modules and radiation-hardened packaging into a Phase II prototype.

At the end of Phase II, the prototype will be delivered to NASA for testing and evaluation, along with a plan describing a path to meeting fault and tolerance requirements for mission deployment and API documents for integration with CubeSat, SmallSat, and 'ROVER' for flight demonstration.
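Of the smart-sensing algorithms listed in the abstract, energy-detection spectrum sensing is the easiest to sketch. The NumPy toy below compares the average power of a band against a threshold calibrated from the noise floor; it is a generic textbook detector with invented SNR and threshold values, not Intellisense's or BrainChip's implementation:

import numpy as np

# Toy energy-detection spectrum sensing: declare a channel occupied when its
# measured average power exceeds a threshold derived from the noise floor.
rng = np.random.default_rng(42)
n_samples = 4096
noise_power = 1.0

def average_power(x):
    return np.mean(np.abs(x) ** 2)

def sense(samples, threshold):
    return average_power(samples) > threshold          # True -> "signal present"

def complex_noise(n):
    return (rng.normal(size=n) + 1j * rng.normal(size=n)) * np.sqrt(noise_power / 2)

# Calibrate the threshold from a noise-only observation (margin chosen arbitrarily).
noise_only = complex_noise(n_samples)
threshold = 1.2 * average_power(noise_only)

# Simulate a tone at roughly 0 dB SNR buried in fresh noise.
t = np.arange(n_samples)
occupied = complex_noise(n_samples) + np.sqrt(noise_power) * np.exp(2j * np.pi * 0.1 * t)

print("noise-only band occupied?", sense(noise_only, threshold))   # expected: False
print("signal band occupied?    ", sense(occupied, threshold))     # expected: True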

Potential NASA Applications (Limit 1500 characters, approximately 150 words):
NECR technology will have many NASA applications due to its low-SWaP and low-cost cognitive sensing capability. It can be used to enhance the robustness and reliability of space communication and networking, especially cognitive radio devices. NECR can be directly transitioned to the Human Exploration and Operations Mission Directorate (HEOMD) Space Communications and Navigation (SCaN) Program, CubeSat, SmallSat, and 'ROVER' to address the needs of the Cognitive Communications project.

Potential Non-NASA Applications (Limit 1500 characters, approximately 150 words):
NECR technology’s low-SWaP and low-cost cognitive sensing capability will have many non-NASA applications. The NECR technology can be integrated into commercial communication systems to enhance cognitive sensing and communication capability. Automakers can integrate the NECR technology into automobiles for cognitive sensing and communication.

Duration: 24
It mentions the Cognitive Communications project. Here is what I have found so far. Haven't had time to read it all, but sharing anyway.

https://www1.grc.nasa.gov/space/scan/acs/cognitive-communications/

SC
 
  • Like
  • Fire
  • Love
Reactions: 5 users

jtardif999

Regular
According to my calculations, 4 m/sec is 14.4 km an hour, whereas 4 cm/sec works out to 144 metres/hour: 4 cm × 3,600 secs = 14,400 cm/hour, and 14,400 / 100 (cm in a metre) = 144 metres/hour…
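A one-liner check of the same conversion (assuming the speed under discussion is 4 cm/sec):

# Unit-conversion check: 4 cm/s expressed in metres per hour.
speed_cm_per_s = 4
cm_per_hour = speed_cm_per_s * 3600        # 14,400 cm/h
metres_per_hour = cm_per_hour / 100        # 144.0 m/h
print(metres_per_hour)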
 
  • Like
  • Fire
  • Love
Reactions: 6 users

Pepsin

Regular
I was just reviewing the NVISO slides and trying to work out which company had the most potential upside and growth?

The prices next to the companies are listed in US dollars.

Any suggestions on which company out of that group has the only commercial neuromorphic AI chip heading into this revolutionary and game changing era?

View attachment 10631

Exciting times to be a Brainchip shareholder!
Please remember that these numbers just represent the price per share and have nothing to do with market cap or potential growth. The number of shares on issue differs widely between these companies!
It would have been better to indicate the slice of the overall AI-hardware market that each company captures, or something like that.

MegaChips is often seen as a big player here in the forum, but its market cap is only about 400-500 million USD, smaller than BRN's. Despite that, it is a great customer, of course!
 
  • Like
Reactions: 8 users