BRN Discussion Ongoing

Just received the following email from BRN regarding MosChip. My registration does seem to be working:

New Press Release From Brainchip: Media Alert: BrainChip and MosChip to Demonstrate Capabilities of Neural Processor IP and ASICs for Smart Edge Devices

From: BrainChip Inc <noreply@brainchip.com>
11:01 AM (34 minutes ago)
Laguna Hills, Calif. – May 9, 2022
BrainChip Holdings Ltd (ASX: BRN, OTCQX: BRCHF, ADR: BCHPY), the world’s first commercial producer of ultra-low power neuromorphic AI IP, and MosChip Technologies Limited (BSE: MOSCHIP), a semiconductor and system design services company, are jointly presenting a session at the India Electronics & Semiconductor Association (IESA) AI Summit discussing how the companies are working collaboratively to enable neural network IP for edge applications.
With the advent of new emerging technologies in the Intelligent Electronics & Semiconductor ecosystem, IESA is looking at tapping the opportunities brought forward by AI in the hardware space. The objective of this Global Summit is to share insights on how AI is driving the hardware and software ecosystem in the country.
BrainChip and MosChip are co-presenting at the May 11 IESA AI Summit session, with comments by Murty Gonella, Site Director at BrainChip India, followed by Swamy Irrinki, VP of Marketing and Business Development at MosChip. The presentation ends with a demonstration of BrainChip’s Akida™ neural processor IP, enabling high-performance and ultra-low-power on-chip inference and learning, and MosChip’s ASIC platform for smart edge devices.
The IESA AI Summit is a two-day conference, May 11 and 12, showcasing how AI is driving the hardware and software ecosystem in India. It features panel discussions, keynote addresses, sessions and showcases from top India policy makers and global thought leaders. Additional information about the event is available at https://iesaaisummit.org/.

About BrainChip Holdings Ltd (ASX: BRN, OTCQX: BRCHF, ADR: BCHPY)
BrainChip is the worldwide leader in edge AI on-chip processing and learning. The company’s first-to-market neuromorphic processor, Akida™, mimics the human brain to analyze only essential sensor inputs at the point of acquisition, processing data with unparalleled efficiency, precision, and economy of energy. Keeping machine learning local to the chip, independent of the cloud, also dramatically reduces latency while improving privacy and data security. In enabling effective edge compute to be universally deployable across real world applications such as connected cars, consumer electronics, and industrial IoT, BrainChip is proving that on-chip AI, close to the sensor, is the future, for its customers’ products, as well as the planet. Explore the benefits of Essential AI at brainchip.com.
 
  • Like
  • Fire
  • Love
Reactions: 26 users

Yak52

Regular
We are currently well off the lows of the day ($1.00) and at $1.11.

That dump was definitely mostly done by one brokerage (insto), because as soon as they had finished unloading the shares, the selling dried up and we rebounded off the $1.00 mark. Stan would be able to show this with a (one-day) broker report.

Our SP is now the same as at midday yesterday (Monday), and we will see what the rest of today brings. Frustrating!

yak52
 
  • Like
  • Fire
  • Wow
Reactions: 32 users

Hadn't heard of these guys before, for whatever reason.

Thought I'd take a look; the following is from their site in late March.



The Present and Future of AI Semiconductor (2): Various Applications of AI Technology

March 24, 2022
Artificial intelligence (AI), regarded as ‘the most significant paradigm shift in history,’ is becoming the center of our lives at remarkable speed. From autonomous vehicles and AI assistants to neuromorphic semiconductors that mimic the human brain, artificial intelligence has already exceeded human intelligence and learning speed, and is now quickly being applied across various areas, affecting many aspects of our lives. What are the key applications of AI technology and how is it realized?
(Check here to discover more insights from SNU professor Deog-Kyoon Jeong about AI semiconductor!)

Cloud Computing vs. Edge Computing

Figure 1. Cloud Computing vs. Edge Computing
One AI application, which is an antipode to cloud services, is edge computing. Applications that require processing massive amounts of input data, such as video or image data, must either process the data with edge computing or transfer it to a cloud service over wired or wireless communication, preferably after reducing the amount of data. Accelerators specifically designed for edge computing take up a huge part of AI chip design. AI chips used in autonomous driving are a good example: these chips perform image classification and object detection by processing images that contain massive amounts of data using a CNN (convolutional neural network) and a series of neural operations.
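As a rough order-of-magnitude illustration of the data reduction the article is describing (my own assumed numbers, not from the article): compare the raw bytes a camera produces with the bytes an edge detector actually needs to send upstream.

```python
# Back-of-the-envelope sketch (my own assumptions, not from the article) of why
# running detection at the edge slashes the data that would otherwise go to the cloud.
def raw_camera_bytes_per_s(width=1920, height=1080, channels=3, fps=30):
    return width * height * channels * fps          # uncompressed frames

def detections_bytes_per_s(objects_per_frame=20, bytes_per_box=16, fps=30):
    return objects_per_frame * bytes_per_box * fps  # class id + box coordinates

raw = raw_camera_bytes_per_s()
det = detections_bytes_per_s()
print(f"raw video : {raw / 1e6:8.1f} MB/s")
print(f"detections: {det / 1e3:8.1f} KB/s  (~{raw / det:,.0f}x reduction)")
```

With these assumed numbers the edge device sends kilobytes per second instead of hundreds of megabytes per second, which is the whole point of doing the CNN inference next to the sensor.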

AI and the Issue of Privacy

Figure 2. SK Telecom’s NUGU (Source: SKT NUGU) and Amazon’s Alexa (Source: NY Times)



Another area of AI application is conversational services like Amazon’s Alexa or SK Telecom’s NUGU. However, such services cannot be used widely if privacy is not protected. Conversational AI services, in which conversations at home are continuously picked up by a microphone, by nature cannot develop beyond a simple recreational service, and therefore many efforts are being made to resolve these privacy issues.
The latest research trend in solving the privacy issue is homomorphic encryption. Homomorphic encryption does not transmit users’ voice or other sensitive information, such as medical data, as is. It is a form of encryption that allows a cloud service to perform additions and multiplications on encrypted data, in the form of ciphertext that only the user can decrypt, without first decrypting it. The outcome is sent back to the user in encrypted form and only the user can decrypt it to see the results. Therefore, no one other than the individual user, including the server, can see the original data. A homomorphic service requires an immense amount of computation, up to several thousand or tens of thousands of times more than a conventional plaintext DNN (deep neural network) service. The key area of research in the future will be reducing the service time by dramatically enhancing computation performance through specially designed homomorphic accelerators.
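The homomorphic property is easiest to see with a tiny worked example. Below is a minimal sketch of the Paillier scheme with toy, hard-coded primes (my own illustration, not from the article, and in no way secure): multiplying two ciphertexts yields a ciphertext of the sum of the plaintexts, so a server can add numbers it never sees.

```python
# Minimal additively homomorphic encryption (Paillier) with toy-sized primes.
# Illustration only: real deployments use ~2048-bit primes and random r per message.
from math import gcd

p, q = 61, 53
n = p * q                                        # public modulus
n2 = n * n
g = n + 1                                        # standard generator choice
lam = (p - 1) * (q - 1) // gcd(p - 1, q - 1)     # lcm(p-1, q-1)

def L(x):
    return (x - 1) // n

mu = pow(L(pow(g, lam, n2)), -1, n)              # modular inverse (Python 3.8+)

def encrypt(m, r):
    # ciphertext = g^m * r^n mod n^2, with r coprime to n
    assert 0 <= m < n and gcd(r, n) == 1
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c):
    return (L(pow(c, lam, n2)) * mu) % n

c1 = encrypt(42, 23)
c2 = encrypt(99, 45)
# multiplying ciphertexts adds the plaintexts -- the server never saw 42 or 99
assert decrypt((c1 * c2) % n2) == 42 + 99
print("decrypted sum:", decrypt((c1 * c2) % n2))
```

The huge overhead the article mentions comes from doing every useful operation through big-integer modular arithmetic like this, which is why dedicated homomorphic accelerators are a research target.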

AI Chip and Memory

In a large-scale DNN, the number of weights is too high to hold all of them in a processor. As a result, the processor has to issue a read access to an external large-capacity DRAM whenever it needs a weight, and bring it on chip. If a weight is used only once and cannot be reused after being fetched, data that was pulled in at a considerable cost in energy and time is wasted. This is extremely inefficient compared with storing and reusing all weights inside the processor. Therefore, processing an intense amount of data with an enormous number of weights in a large-scale DNN requires a parallel connection and/or a batch operation that reuses the same weights several times. In other words, there is a need to connect several processors, each with its own DRAM, in parallel, dispersing the weights or intermediate data across several DRAMs so they can be reused. High-speed connection among the processors is essential in this structure, which is more efficient than having all processors access memory through one route, and only this structure can deliver the maximum performance.
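A quick back-of-the-envelope sketch of the reuse argument (layer size, batch sizes and fp16 storage are my own assumptions, not the article's): with batch size 1 every weight fetched from DRAM is used once, while batching lets the same weight serve many samples before it is evicted.

```python
# Sketch: arithmetic intensity of a fully connected layer vs. batch size,
# assuming weights are streamed from DRAM once per batch (fp16, 2 bytes/weight).
def arithmetic_intensity(in_dim, out_dim, batch, bytes_per_weight=2):
    flops = 2 * in_dim * out_dim * batch                 # one MAC = 2 FLOPs per sample
    weight_bytes = in_dim * out_dim * bytes_per_weight   # weight traffic is batch-independent
    return flops / weight_bytes                          # FLOPs per byte of weight traffic

for b in (1, 8, 64):
    print(f"batch={b:3d}  FLOPs/byte = {arithmetic_intensity(4096, 4096, b):.1f}")
# batch=1 gives ~1 FLOP per byte moved, so the processor stalls on DRAM;
# larger batches reuse each weight many times, which is the reuse described above.
```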

Interconnection of AI Chips

Figure 3. Interconnection Network of AI Chips
The performance bottleneck that occurs when connecting numerous processors depends on the provided bandwidth and latency as well as the form of interconnection. These elements define the size and performance of the DNN. In other words, if one tries to deliver N-times higher performance by connecting N accelerators in parallel, a bottleneck occurs in the latency and bandwidth provided by the interconnections, and the system will not deliver the desired performance.
Therefore, the interconnection structure between a processor and another is crucial in efficiently providing the scalability of performance. In the case of NVIDIA A100 GPU, NVLink 3.0 plays that role. There are 12 NVLink channels in this GPU and each provides 50 GBps in bandwidth. Connecting 4 GPUs together can be done by direct connections using 4 channels each in the form of a clique. But to connect 16 GPUs, an NVSwitch, which is an external chip dedicated just for interconnection, is required. In the case of Google TPU v2, it is designed to enable a connection of a 2D torus structure using Inter-Core Interconnect (ICI) with an aggregate bandwidth of 496 GBps.
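A rough illustration of why the interconnect bounds scaling (my own model and numbers, not from the article): in data-parallel training, a ring all-reduce pushes about 2(N−1)/N of the gradient volume through each device's links, so the synchronization time barely shrinks as more GPUs are added, while per-step compute does.

```python
# Sketch: estimated ring all-reduce time over one NVLink-class 50 GB/s channel.
# Model size and link choice are illustrative assumptions, not measured figures.
def ring_allreduce_seconds(grad_bytes, n_gpus, link_gbytes_per_s):
    # each GPU sends/receives roughly 2*(N-1)/N of the gradient over its links
    traffic = 2 * (n_gpus - 1) / n_gpus * grad_bytes
    return traffic / (link_gbytes_per_s * 1e9)

grad_bytes = 350e6 * 2                 # e.g. a 350M-parameter model in fp16
for n in (4, 16):
    t = ring_allreduce_seconds(grad_bytes, n, 50)
    print(f"{n:2d} GPUs over one 50 GB/s link: ~{t * 1e3:.1f} ms per all-reduce")
# Compute per step shrinks roughly as 1/N, but this communication time does not,
# so the interconnect becomes the bottleneck the article describes.
```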
Figure 4. NVIDIA’s GPU Accelerator A100 using 6 HBMs (Source: The Verge)
The way in which processors are interconnected has a huge impact on the whole system. For example, if they are interconnected in a mesh or torus structure, the structure is easy to build because the physical connection between chips is simple, but latency increases in proportion to distance, since reaching a far-away node requires hopping over several processors. The most extreme method would be a clique that interconnects all processors one to one, but the number of links then grows as N(N−1)/2, with N−1 ports per chip, causing PCB congestion beyond what is allowable, so that in an actual design connecting only about four processors is the practical limit.
Using a crossbar switch such as an NVSwitch is another attractive option, but this method converges all connections on the switch. Therefore, the more processors you want to interconnect, the more difficult the PCB layout becomes, as transmission lines concentrate around the switch. The best method is to structure the whole network as a binary tree, connecting processors at the bottom and allocating the most bandwidth toward the top of the tree. A binary fat tree is therefore the most ideal, able to deliver maximum performance with scalability.
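A small counting sketch (my own arithmetic, consistent with the qualitative claims above) comparing the topologies mentioned: a clique's wiring grows quadratically, a torus keeps wiring cheap but hop count grows with distance, and a fat tree keeps both moderate.

```python
# Sketch: (total links, worst-case hops) for N processors under three topologies.
import math

def clique(n):           # every pair wired point-to-point
    return n * (n - 1) // 2, 1               # links ~ N^2/2, always a single hop

def torus2d(side):       # side x side 2-D torus with wrap-around links
    n = side * side
    return 2 * n, 2 * (side // 2)             # ~2N links, hops grow with distance

def fat_tree(n):         # binary fat tree over n leaf processors
    return 2 * (n - 1), 2 * int(math.log2(n)) # ~2N links, log-depth worst path

for side in (2, 4, 8):
    n = side * side
    print(f"N={n:3d}  clique={clique(n)}  torus={torus2d(side)}  fat_tree={fat_tree(n)}")
# The clique's quadratic wiring is why direct all-to-all stops at ~4 devices,
# while a fat tree keeps both link count and hop count manageable at scale.
```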

Neuromorphic AI Chip

Figure 5. Cloud Server Processor vs. Neuromorphic AI Processor
The data representation and processing method of cloud-server processors that serve as DNN accelerators are digital, since the computational structure is fundamentally a simulation of a neural network in software running on hardware. Recently, there has been an increase in research on neuromorphic AI chips which, unlike this simulation approach, directly mimic the neural network of a living organism and its signals, mapping them onto an analog electronic circuit that behaves in the same manner. In actual applications this approach represents the original data in analog form: one signal is represented by one node, the interconnection is hardwired rather than defined by software, and the weights are stored in analog form.
Figure 6. Previous semiconductor vs. Neuromorphic semiconductor
The advantage of such a structure is maximum parallelism with minimum energy, and neuromorphic chips can secure a great advantage in certain applications. Because the structure is fixed, it lacks programmability, but it can offer a great advantage in certain small-scale edge computing applications. In fact, a neuromorphic processor is significant in applications such as processing AI signals from IoT sensors with high energy efficiency, or image classification that processes large amounts of video data using a CNN with fixed weights. However, because the weights are fixed, it will be difficult to use in applications that require continued learning. It is also difficult to exploit parallelism by interconnecting several chips for large-scale computations, due to structural limitations, so its practical area of application is restricted to edge computing. The neuromorphic structure can also be realized in digital form, and IBM’s TrueNorth is an example; it is known, however, that its scalability is limited, making it difficult to find wide practical applications.

Current Status of AI Chip Development

To create a smart digital assistant that can converse with humans, Meta (formerly known as Facebook), which needs to process massive amounts of user data, is designing an AI chip specialized to have basic knowledge about the world. The company is also internally developing AI chips that will perform moderation to decide whether to post real-time videos that are uploaded to Facebook.
Amazon, a technology company that mainly focuses on e-commerce and cloud computing, has already developed its own AI accelerator called AWS Inferentia to power its digital assistant Alexa, and uses it to recognize audio signals. Cloud service provider AWS has developed an infrastructure that uses the Inferentia chip and provides services for cloud users that can accelerate deep learning workloads, like Google’s TPU.
Microsoft, on the other hand, uses field programmable gate arrays (FPGAs) in its data centers and has introduced a method of securing the best performance by reconfiguring precision and DNN structure according to the application algorithm, in order to create AI chips optimized not only for current applications but also for future ones. This method, however, creates a lot of overhead to reconfigure the structure and logic circuit even after an optimal structure has been identified. As a result, it is unclear whether it will have an actual benefit, because it is inevitably at a disadvantage in energy and performance compared to ASIC chips designed for a specific purpose.
A number of fabless startups are competing against NVIDIA by developing general-purpose programmable accelerators that are not specialized to certain areas of application. Many companies, including Cerebras Systems, Graphcore, and Groq, are joining the fierce competition. In Korea, SK Telecom, in collaboration with SK hynix, has developed SAPEON, which will soon be used as the AI chip in data centers, and Furiosa AI is preparing to commercialize its silicon chip, Warboy, as well.
Figure 7. SAPEON X220 (Source: SK Telecom Press Release)

The Importance of the Compiler

The performance of such AI hardware depends greatly on how well optimized its software is. Operating thousands or tens of thousands of computational circuits at the same time through a systolic array, and gathering the results efficiently, requires highly advanced coordination. Setting up the order of the input data to feed the numerous computational circuits in the AI chip, making them work continuously in lockstep, and then transmitting the output to the next stage can only be done through a specialized library. This means that developing an efficient library, and the compiler to use it, is as important as designing the hardware.
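To make the "lockstep" point concrete, here is a small cycle-by-cycle toy simulation (my own illustration, not any vendor's library) of an output-stationary systolic array doing a matrix multiply; the skewed injection schedule is exactly the kind of data ordering the paragraph says a library and compiler must generate.

```python
# Toy simulation of an output-stationary systolic array:
# A flows right, B flows down, each PE keeps one C[i,j] and does one MAC per cycle.
import numpy as np

def systolic_matmul(A, B):
    M, K = A.shape
    K2, N = B.shape
    assert K == K2
    C = np.zeros((M, N))
    a_reg = np.zeros((M, N))   # value each PE forwards to its right neighbour
    b_reg = np.zeros((M, N))   # value each PE forwards to the PE below
    for t in range(M + N + K - 2):             # cycles until the last PE finishes
        a_new, b_new = np.zeros((M, N)), np.zeros((M, N))
        for i in range(M):
            for j in range(N):
                if j == 0:                     # row i of A enters at the left, skewed by i
                    k = t - i
                    a_in = A[i, k] if 0 <= k < K else 0.0
                else:
                    a_in = a_reg[i, j - 1]
                if i == 0:                     # column j of B enters at the top, skewed by j
                    k = t - j
                    b_in = B[k, j] if 0 <= k < K else 0.0
                else:
                    b_in = b_reg[i - 1, j]
                C[i, j] += a_in * b_in         # one multiply-accumulate per PE per cycle
                a_new[i, j], b_new[i, j] = a_in, b_in
        a_reg, b_reg = a_new, b_new            # all PEs advance in lockstep
    return C

rng = np.random.default_rng(0)
A, B = rng.standard_normal((3, 4)), rng.standard_normal((4, 5))
assert np.allclose(systolic_matmul(A, B), A @ B)
print("systolic result matches A @ B")
```

Getting the skew and timing right in software like this, for thousands of processing elements, is the coordination job that the compiler and library carry in a real accelerator.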
The NVIDIA GPU started as a graphics engine, but NVIDIA provided a development environment, CUDA, to let users write programs easily and run them efficiently on the GPU, which made it popular and commonly used across the AI community. Google also provides its own development environment, TensorFlow, to help develop software using TPUs, so users can utilize TPUs easily. More and more diverse development environments must be provided in the future, which will increase the applicability of AI chips.

AI Chip and its Energy Consumption

The direction of AI services in the future must absolutely focus on enhancing the quality of service while reducing the required energy consumption. Therefore, it is expected that efforts will focus on reducing the power consumption of AI chips and accelerating the development of energy-saving DNN structures. In fact, it is known that training on ImageNet to reduce the error rate to less than 5% takes 10^19 floating-point operations, equivalent to the amount of energy consumed by New York City’s citizens in a month. In the example of AlphaGo, used in the game of Go against 9-dan professional player Lee Sedol in 2016, a total of 1,202 CPUs and 176 GPUs were used for inference, with an estimated power consumption of 1 MW, which is tremendous compared with the human brain’s 20 W.
AlphaGo Zero, which was developed later, exceeded AlphaGo’s performance after merely 72 hours of training using self-play reinforcement learning on only 4 TPUs. This case proves that there is potential to reduce energy consumption through new neural network structures and learning methods, and we must continue to pursue research and development on energy-saving DNN structures.

The Future of the AI Semiconductor Market

Figure 8. AI Chip Market Outlook (Source: Statista)
The successful accomplishments made in the field of AI will expand its scope of application, triggering stunning market growth as well. For example, SK hynix recently developed a next-generation intelligent semiconductor memory, or processing-in-memory (PIM), to resolve the bottleneck in data access in AI and big data processing. SK hynix unveiled the ‘GDDR6-AiM (Accelerator in Memory)’ sample as the first product to apply PIM, and announced its PIM development achievement at the International Solid-State Circuits Conference (ISSCC 2022), the most authoritative international conference in the semiconductor field, held in San Francisco at the end of February this year.
Figure 9. GDDR6-AiM developed by SK hynix
Application systems will further drive a wider AI market and continuously create new areas, enabling differentiated service quality backed by the quality of inference a neural network structure can deliver. AI semiconductors, the backbone of the AI system, will be differentiated by how quickly and accurately they can perform inference and training while using little energy. The latest research findings show that energy efficiency is currently extremely poor, so there is an increasing need for research on new neural network structures focused not only on function but also on energy efficiency. In terms of hardware, the core element that defines energy efficiency is improving memory access. As such, processing-in-memory (PIM), which processes data inside the memory rather than accessing memory separately, and neuromorphic computing, which mimics the neural network by storing synapse weights in analog memory, will become important fields of research.
 
  • Like
  • Fire
Reactions: 18 users

Earlyrelease

Regular
  • Haha
  • Like
Reactions: 7 users

Yak52

Regular


SO...........an engineer (Samar Shekhar) from INTEL has liked our joint Demo with MOSCHIP at the Summit.

More and more INTEL dots keep popping up, to the point where we can't ignore them anymore.

To say that INTEL knows how good we are is an understatement, and it would seem they are watching us with interest!

Yak52
 
  • Like
  • Fire
  • Love
Reactions: 32 users

HopalongPetrovski

I'm Spartacus!
Since starting his podcast in 2014, bestselling author Tim Ferriss has interviewed well over 100 highly successful people, from Navy SEALs to billionaire entrepreneurs.

He uses his interviews to pick apart the, as he puts it, "tactics, routines, and habits," that have brought these subjects to the tops of their fields. He's collected his favorite lessons from these discussions, along with a few new ones, in his book "Tools of Titans."

..... there is a passage in Herman Hesse's 1922 novel "Siddhartha" that offers a suitable lens for all of the "tools" he shares in his book.

Siddhartha tells the merchant that, "Everyone gives what he has," and the merchant replies, "Very well, and what can you give? What have you learned that you can give?"
"I can think, I can wait, I can fast," Siddhartha says.


"I can think: Having good rules for decision-making, and having good questions you can ask yourself and others.

"I can wait: Being able to plan long-term, play the long game, and not mis-allocate your resources.

"I can fast: Being able to withstand difficulties and disaster. Training yourself to be uncommonly resilient and have a high pain tolerance."
 
  • Like
  • Fire
  • Love
Reactions: 20 users
We know that Quantum Ventura mentioned Akida as a possibility for Cyber-NeuroRT, and I wonder if we will get a crack at this one as well.

They want AI/ML neuromorphic.

They make it clear that the offeror can't disclose specific platforms through unclassified channels.

The following dates apply to this DARPA Topic release:
  • April 26, 2022: Topics issued for pre-release
  • May 11, 2022: Topics open; DARPA begins accepting proposals via DSIP
  • June 07, 2022: Deadline for technical question submission
  • June 14, 2022: Deadline for receipt of proposals, no later than 12:00 pm ET


HR0011SB20224-06 TITLE: Hardening Aircraft Systems through Hardware (HASH)
OUSD (R&E) MODERNIZATION PRIORITY: Cybersecurity
TECHNOLOGY AREA(S): Air Platform,Information Systems
OBJECTIVE: The effort will develop, validate and harden aircraft systems against errors, failures, and
cyber-attacks arising from the introduction of electronic pilot kneeboards and maintenance connections
into the cockpit.
DESCRIPTION: Electronic pilot kneeboards and the cost advantages of condition- and network-based
maintenance processes offer new potential mission benefits and new requirements for connectivity in the
cockpit of DoD aircraft systems. At the same time, these open new concerns associated with pilot and
operator errors, system failures, and cyber-vulnerabilities. Hardware hardening capabilities are required
that are impervious to malicious software yet mindful of Size, Weight, and Power (SWaP) constraints.
Unlike most ground-based installations, DoD aircraft defenses must respond in real-time, provide alerts to
the pilot, prevent undesirable outcomes, and instantly adapt to the level of threat.
The last five years have seen a quiet revolution in the underlying fabric of systems engineering with the
coming of age of many enabling technologies: open standards for system and sensor busses have emerged
that enable competitive acquisition processes; System-on-Chip and Field Programmable Gate Array
(FPGA) devices offer new levels of integration and performance; High-Level Synthesis accelerates circuit
design; Partial Reconfiguration allows real-time circuit adaptivity; formally verified software subsystems
offer new levels of system assurance. These advances are revolutionizing commercial networking and
systems design, but have yet to have a significant presence in the cockpit, especially on DoD legacy
platforms.
This SBIR topic will develop, harden and validate system design, software, and hardware innovations that
improve aircraft resilience while reducing SWaP. Approaches should address the hardware to be
developed, expected path of integration, metrics of success, assessment methods, and integration of
solutions into robust, real-time cyber defenses of interest to the DoD.
PHASE I: The Phase I feasibility study shall include the documentation of a basic prototype consisting of
the co-designed software code and hardware capabilities that are demonstrably impervious to advanced
cyber-attacks and malicious software infiltrations of the supply chain yet mindful of Size, Weight, and
Power (SWaP) constraints for connectivity in the cockpit of DoD aircraft systems.
Proposers interested in submitting a Direct to Phase II (DP2) proposal must provide documentation to
substantiate that the scientific and technical merit and feasibility described above has been met and
describes the potential military and/or commercial applications. Documentation should include all
relevant information including, but not limited to: technical reports, test data, prototype designs/models,
and performance goals/results. For detailed information on DP2 requirements and eligibility, please refer
to Section 4.2, Direct to Phase II (DP2) Requirements, and Appendix B of the DARPA Instructions for
DoD BAA 2022.4
PHASE II: Phase II shall produce system design, implementation, and maintenance capabilities to
significantly advance the state of the art in security and resilience of cockpit connectivity and integration
of modern computational architectures and user interfaces. These integrated systems of co-designed
software and hardware architectures will support Artificial Intelligence (AI)-based or Neuromorphic-based capabilities, including a cyber-attack detection capability. This capability will detect anomalous
sequences of instructions, using strategies for tight integration of CPUs and Artificial Intelligence
(AI)/Machine Learning (ML)/neuromorphic fabrics. It will provide for effective cyber warning with an
acceptable false alarm rate in a SWaP-constrained environment for efficient runtime cyber warning.
Strong technical approaches will provide innovative concepts for coupling AI/ML or neuromorphic logic
with conventional CPU cores. Thus, it will provide the ability to monitor an instruction queue of the
frontside bus of CPU cores to mitigate cyber vulnerabilities. The AI or ML techniques shall capture an
understanding of a system design and determine vulnerabilities.
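(Not part of the solicitation text, just a toy illustration of the instruction-stream anomaly detection idea described above; the mnemonics, traces and threshold are all invented for the sketch.)

```python
# Sketch: flag anomalous instruction sequences with an n-gram frequency model
# trained on a known-good trace. Everything here is illustrative, not operational.
from collections import Counter

def ngrams(trace, n=3):
    return [tuple(trace[i:i + n]) for i in range(len(trace) - n + 1)]

def train(baseline_trace, n=3):
    counts = Counter(ngrams(baseline_trace, n))
    total = sum(counts.values())
    return {g: c / total for g, c in counts.items()}

def anomaly_score(window, model, n=3, floor=1e-6):
    # average "surprise": rare or unseen n-grams push the score toward 1.0
    grams = ngrams(window, n)
    return sum(1.0 - model.get(g, floor) for g in grams) / max(len(grams), 1)

baseline = ["load", "cmp", "branch", "load", "add", "store"] * 200
model = train(baseline)

normal  = ["load", "cmp", "branch", "load", "add", "store"] * 4
suspect = ["load", "jump_indirect", "write_cr3", "jump_indirect"] * 6

for name, window in (("normal", normal), ("suspect", suspect)):
    score = anomaly_score(window, model)
    print(f"{name:8s} score={score:.3f} -> {'ALERT' if score > 0.9 else 'ok'}")
```

A real solution would sit in hardware or FPGA fabric watching the instruction queue rather than in Python, but the detection principle, scoring observed sequences against a learned baseline, is the same.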
The DoD has requirements for implementing cyber resiliency and tamper resistance in its aircraft
platforms, ordnance systems and associated support systems. The DoD has significant interest in
advanced software engineering and digital design technologies that implement robust security related to
Platform IT (PIT), programmable logic, and physical digital electronics hardware involving, but not
limited to, the following:
• Software, hardware and/or programmable logic implementing security that significantly advances the
state of the art while simultaneously supporting performance and SWaP in areas regarding:
1. Protocol checking logic for detection of maliciously formed packets with advanced secure parsing
and input validation logic residing on hardware or FPGA fabric, to implement a vetting function
prior to reaching an objective network stack process residing on the objective CPU core. Said
capability shall provide minimal impact on performance, latency and throughput.
2. Packet inspection logic supporting high throughput and minimal latency for detection of
malicious payloads prior to reaching an objective network stack process residing on the objective
CPU core.
3. Avionics networking defensive logic, especially targeting MIL-STD-1553, ARINC-429, ARINC-629, ARINC-664, Fibre Channel and Ethernet. Said approaches shall be retrofittable with
minimal impact on target platforms.
4. Advanced approaches to implement secure loader and secure monitor functionality on a SoC type
implementation with security core residing on fabric interacting with processes running on
contained CPU cores for robust detection of malicious activity on protected CPU cores.
5. Innovative methods to improve the capability of standard FPGA security cores, regarding
performance and resource utilization.
a. Methods to detect and/or prevent the adversary utilizing undefined semantics for malicious
purposes.
b. Methods to detect and/or prevent the adversary from utilizing emergent behaviors of existing
implementations for malicious purposes.
c. Methods to implement Root of Trust (RoT), secure boot (cold boot), and secure restart (warm
boot).
d. Methods to advance the secure loading of FPGA configuration files over existing approaches.
e. Methods in volume protection that increase security while simultaneously supporting high
heat dissipation.
6. Methods to implement security in a powered-off state with only limited battery powered
functionality available for sensors and defensive logic.
a. Methods that address known computer processor hardware vulnerabilities that are
retrofittable into existing systems. [IMPORTANT: Offeror in an UNCLASSIFIED proposal
should not explicitly mention specific platform subject to said vulnerability.]
b. Methods that address known crypto implementational security issues (not basic cryptological
algorithm issues) in embedded crypto systems that are retrofittable into existing systems.
[IMPORTANT: Offeror in an UNCLASSIFIED proposal should not explicitly mention
specific platform subject to said vulnerability.]
c. Methods to thwart Reverse Engineering (RE) of sensitive software, hardware and/or
programmable logic that strongly obscures the functionality, effectively denying the ability to
perform RE but provides for the ability to operate in a hidden/obfuscated/encrypted state with
minimal and/or acceptable impact on performance and/or latency.
d. Methods for implementing a covert communication channel (intended to be unknown to the
attacker) between various avionics components or subsystems to support alerting, logging or
a coordinated response to a RE attack or a cyber attack.
• Techniques to provide for provability and traceability of software, hardware and programmable logic
regarding:
1. Innovative approaches to formal methods that in addition to proof of correctness, provide proofs
of Confidentiality, Integrity, and Availability (CIA):
a. Approaches to supporting scalability of formal methods to support large scale software
packages and large circuit design Hardware Description Language (HDL) code bases.
b. Robust approaches to dealing with covert channels, timing channels and side channels.
c. Provability regarding software targeting multiprocessing implementations including
Symmetric Multi-Processing (SMP) and other multiprocessing arrangements such as
Asymmetric Multi-Processing (AMP) (in part, related to the previous bullet).
d. Techniques to support verification for mixed implementations involving both software with
hardware and/or programmable logic, where the software is tightly coupled to
hardware/programmable logic in a target such as a System on a Chip (SoC).
e. Techniques to provide for formal verification of Machine Learning (ML) and neuromorphic
hardware and use cases where software is coupled to a ML/neuromorphic system in support
of some Naval Aviation Enterprise (NAE) application such as sensor data processing,
tracking or autonomy.
• Technologies that provide the ability to rapidly and effectively assess the provenance of software,
programmable logic and hardware in a manner significantly more robust than code signing (cf. the
recent SolarWinds attack subverting the software build environment to bypass code signing). These
technologies must provide the capability to prove that no unauthorized and potentially malicious
modification has been made anywhere in the supply chain or development system. They shall have
traceability back to the software/hardware development system and relate to the software
module/hardware cell level. They shall provide the ability to vet the individual software/IP
blocks/hardware cells at the target or at the software loader/device programmer, accessing artifacts
providing proof such as:
1. Software/hardware/programmable logic fully conforms to the system program office approved design
specification with no additional functionality.
2. Software/hardware/programmable logic was only developed and/or modified by authorized
developer personnel.
3. Software/hardware/programmable logic was only developed and/or modified using approved
toolchains.
4. Software/hardware/programmable logic was only developed and/or modified on approved
development systems.
5. Software/hardware/programmable logic was only developed and/or modified during an approved
period.
Successful offerors in their proposals will demonstrate a strong understanding of the technology areas that
they respond to and they will articulate a compelling necessity for S&T funding to support their
respective proposed technology approaches over existing capabilities.
Schedule/Milestones/Deliverables Phase II fixed payable milestones for this program shall include:
• Month 2: New Capabilities Report, that identifies additions and modifications that will be researched,
developed, and customized for incorporation in the pilot demonstration.
• Month 4: PI meeting presentation material, including demonstration of progress to date, PowerPoint
presentations of accomplishments and plans.
• Month 6: Demonstration Plan that identifies schedule, location, computing resources, and any other
requirements for the pilot demonstration.
• Month 9: Initial demonstration of stand-alone pilot application to DARPA; identification of military
transition partner(s) and other interested DoD organizations
• Month 12: PI meeting presentation material, including demonstration of progress to date, PowerPoint
presentations of accomplishments and plans.
• Month 15: Demonstration to military transition partner(s) and other DoD organizations.
• Month 18: PI meeting presentation material, including demonstration of progress to date, PowerPoint
presentations of accomplishments and plans.
• Month 21: PI meeting presentation material, including demonstration of progress to date, PowerPoint
presentations of accomplishments and plans.
• Month 24: Final software and hardware delivery, both object and source code, for operation by
DARPA or other Government personnel for additional demonstrations, with suitable documentation
in a contractor proposed format. Deliver a Final Report, including quantitative metrics on decision
making benefits, costs, risks, and schedule for implementation of a full prototype capability based on
the pilot demonstration. This report shall include an identification of estimated level of effort to
integrate the pilot capability into an operational environment, addressing computing infrastructure
and environment, decision making processes, real-time and archival data sources, maintenance and
updating needs; reliability, sensitivity, and uncertainty quantification; and transferability to other
military users and problems. The report shall also document any scientific advances that have been
achieved under the program. (A brief statement of claims supplemented by publication material will
meet this requirement.) Provide Final PI meeting presentation material.
Phase II Option: The option shall address preliminary steps toward the certification, accreditation and/or
verification of the resulting base effort's hardening capability.
Schedule/Milestones/Deliverables for Phase II Option Phase II fixed payable milestones for this
program option shall include:
• Month 2: Plan that identifies the schedule, location, computing resources and/or any other
requirements for the hardening capability's certification, accreditation, and/or verification.
• Month 4: Presentation on the detailed software and hardware plan for the technical capability.
• Month 7: Interim report on progress toward certification, accreditation and/or verification of the
technical capability.
• Month 10: Review and/or demonstration of the prototype capability with the documentation
supporting certification, accreditation and/or verification.
• Month 12: Final Phase II option report summarizing the certification, accreditation and/or verification
approach, architecture and algorithms; data sets; results; performance characterization and
quantification of robustness.
PHASE III DUAL USE APPLICATIONS: (U) The DoD and the commercial world have similar
challenges with respect to maintaining the cyber integrity of their computing and communications
infrastructure. The Phase III effort will see the developed technical capability transitioned into a DoD
enterprise aircraft system that can be used to discover, analyze, and mitigate cyber threats. Government
and commercial aircraft systems have similar challenges in tracking, understanding, and mitigating the
varied cyber threats facing them in the cockpit of aircraft systems. Thus, the resulting hardening
capability is directly transitionable to both the DoD and the commercial sectors: military and commercial
air, sea, space and ground vehicles; commercial hardening of critical industrial plant (i.e. control systems,
manufacturing lines, chemical processes, etc.) through secure programmable logic controllers; securing
cloud infrastructure associated with optimization of industrial processes and condition-based maintenance
of air, sea, space and ground vehicles.
As part of Phase III, the developed capability should be transitioned into an enterprise level system that
can be used to detect heavily obfuscated or anti-debugging and integrity checking techniques employed
by a cyber intruder. The resulting hardening capability is directly transitionable to the DoD for use by the
services (e.g., Naval Aviation Enterprise (NAE), etc.) that have requirements for implementing cyber
resiliency and tamper resistance in its aircraft platforms. This is a dual-use technology that applies to both
military and commercial aviation environments affected by cyber adversaries.
REFERENCES:
1. C. Adams, “HUMS Technology”, Aviation Today, May 2012.
2. https://www.aviationtoday.com/2012/05/01/hums-technology/
3. Shanthakumaran, P. (2010) “Usage Based Fatigue Damage Calculation for AH-64 Apache
Dynamic Components”, The American Helicopter Society 66th Annual Forum, Phoenix, Arizona.
4. P. Murvay and B. Groza, "Security Shortcomings and Countermeasures for the SAE J1939
Commercial Vehicle Bus Protocol," in IEEE Transactions on Vehicular Technology, vol. 67, no.
5, pp. 4325-4339, May 2018, doi: 10.1109/TVT.2018.2795384.
KEYWORDS: aircraft systems, cyber attacks, operator errors, cyber vulnerabilities, hardware hardening
TPOC-1: DARPA SBIR/STTR Program Office
 

Attachments

  • DARPA_SBIR_224_R3.PDF
    649.5 KB · Views: 190
  • Like
  • Fire
  • Love
Reactions: 28 users
I agree with you; the confusing part for me is that the ARM statement is bundled with the SiFive statement. We know SiFive

Good Morning Chippers,

An interesting website to keep an eye on regarding our company and where she sits on the ASX by market capitalisation:

MarketIndex.com.au.

Brainchip,

Presently ranked 194 out of 2,416 market participants.

And number 11 out of 234 tech sector participants.

WooHoo.

Regards,
Esq.
As far as I know it has to make it to 179 to force its way into the ASX 200. This info is available on Market Index.

SC
 
  • Like
  • Fire
Reactions: 7 users

Bravo

If ARM was an arm, BRN would be its biceps💪!
[Quoting HopalongPetrovski's Tim Ferriss / "Siddhartha" post above.]


So, after you've done an appropriate amount of 'thinking, waiting and fasting' are you then allowed to go out and buy yourself a tropical island and place a large cash deposit on a castle masquerading as a house?😝🤣



 
  • Haha
  • Like
  • Love
Reactions: 19 users

Clipclop

Emerged
A question; it has probably been raised before: has anybody queried the Aust Future Fund as to (a) whether they have invested in BRN, or (b) why not?
With all the talk about AI and how Australia wants to be at the forefront of this research, surely BRN should be front and centre. A large holding in BRN would not just be a wealth generator, but would give us a stake in something at the front and centre of world tech.
 
  • Like
  • Love
  • Thinking
Reactions: 16 users

TECH

Regular
Interesting isn't it....The ASX dishes out a please explain and the answers are generally always
No No No No.....thanks for coming, can we carry on now, undisturbed.

Today we went from $1.20 down as low as $1.00; that's a 16.5% move to the downside. Where's the please explain now?

The whole thing is a sham in my opinion. :unsure::geek:😜
 
  • Like
  • Haha
  • Love
Reactions: 52 users

Dang Son

Regular
The game is not fair. Do not play as you cannot win day to day. It will eat you up and spit you out.

I have mentioned my son previously. At one point after going to the UK he worked at the Financial Conduct Authority.
After the Lehman affair collapsed the world financial system, the big players all signed up to a voluntary code to play nice in future. Barclays signed this code. Two weeks later a retail investor complained that his account had been mismanaged, or worse still fraudulently dealt with, and that rather than having lost his 1.5 million pounds he should have made 4.5 million pounds. Barclays said "we will investigate", came back and said "no, nothing to see here, you just made a bad trade", kept their million-pound-plus commission and left their employee's 200 thousand pound commission in his account. This retail investor rang the FCA and was referred to my son, who carried out the investigation and called for all the files, including emails, which were supplied. After analysing these documents he showed that the trader at Barclays had in fact defrauded this retail investor of his investment, for his and Barclays' benefit, with the assistance of all traders, not just those at Barclays. The wink, wink, say no more brigade at work across all the desks. The retail investor received his money, about 4.5 million pounds, fines and disqualifications were imposed on Barclays and the trader, and there was a lot of press coverage in the UK and around the world. Barclays once again promised to be good in future. I was greatly reassured by this promise.

The only leverage retail have is Doing Their Own Research and having a Plan.

My opinion only DYOR
FF

AKIDA BALLISTA
Hi FF, what's a good example of a plan that management and yourself refer to?
 
  • Like
  • Love
Reactions: 3 users

Slade

Top 20
Share Price up, Share Price down
Traders running round and round
But for me, I don’t care
I got WANCA proof underwear
 
  • Like
  • Haha
  • Love
Reactions: 36 users

HopalongPetrovski

I'm Spartacus!
So, after you've done an appropriate amount of 'thinking, waiting and fasting' are you then allowed to go out and buy yourself a tropical island and place a large cash deposit on a castle masquerading as a house?😝🤣



Absolutely, if that is part of your plan. ;)
 
  • Like
  • Love
Reactions: 4 users

Bravo

If ARM was an arm, BRN would be its biceps💪!
 
  • Haha
  • Like
Reactions: 15 users

equanimous

Norse clairvoyant shapeshifter goddess
ARMed and dangerous.png
 
  • Like
  • Love
Reactions: 30 users
Hi FF, what's a good example of a plan that management and yourself refer to?
Hi Dang Son

I cannot speak for management as I am an independent retail investor.

Can you clarify what your question is please?

Regards
FF
 
  • Like
Reactions: 3 users

Mercfan

Member
  • Haha
  • Fire
Reactions: 3 users