BRN - NASA

Stockbob

New? As far as I know, they worked on the rad hardening in 2020, and it was one of our first announcements from NASA.

Scroll down, as this was January 2022.



Silicon Space Technology Corporation
1501 South MoPac Expressway, Suite 350
Austin, TX 78746-6966
United States
Hubzone Owned: No
Socially and Economically Disadvantaged: No
Woman Owned: No
Duns: 147671957
Principal Investigator
Name: Patrice Parris
Phone: (602) 463-5757
Email: pparris@voragotech.com
Business Contact
Name: Garry Nash
Phone: (631) 559-1550
Email: gnash@siliconspacetech.com
Research Institution
N/A
Abstract
The goal of this project is the creation of a radiation-hardened Spiking Neural Network (SNN) SoC based on the BrainChip Akida Neuron Fabric IP. Akida is a member of a small set of existing SNN architectures structured to more closely emulate computation in a human brain. The rationale for using an SNN for Edge AI computing is its efficiency. The neuromorphic approach used in the Akida architecture takes fewer MACs per operation since it creates and exploits sparsity of both weights and activations through its event-based model. BrainChip's studies have shown that, for the small models studied, the sparsity on Akida averaged 52.3%. For medium and large image recognition models, the Akida model sparsity averaged 53.8%. This means that the Akida model needs roughly half the MACs to solve the same problem. In addition, Akida reduces memory consumption by quantizing and compressing network parameters. This helps to reduce power consumption and die size while maintaining performance.

The Akida fabric is built from a collection of Neural Processing Units (NPUs) which are connected by, and communicate over, a mesh network. This allows the layers of a neural network to be distributed across NPUs. The NPUs are arranged in groups of four called Nodes. Each NPU has 8 Compute Engines and 100 KB of local SRAM. The SRAM stores internal events, network parameters, and activations. Having SRAM local to the nodes saves energy since node data is not constantly moved around the network. Packets on the mesh network are filtered locally so that each NPU only has to process packets addressed to it. The Compute Engines generate Output Events which are packetized and placed on the mesh network. Communication over the network is routed without intervention from the supervisory CPU, preventing the CPU from limiting the communication bandwidth and reducing the energy needed to transfer data between nodes.
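The arithmetic behind the "roughly half the MACs" claim can be sketched quickly. This is illustrative Python only, not BrainChip code; the dense MAC count is an arbitrary assumption:

```python
# Back-of-envelope sketch: how event sparsity reduces the
# multiply-accumulate (MAC) work of a layer, using the average
# sparsity figures quoted in the abstract (~52-53%).

def effective_macs(dense_macs: int, sparsity: float) -> int:
    """MACs actually executed when a fraction `sparsity` of
    activations/weights are zero and can be skipped."""
    return round(dense_macs * (1.0 - sparsity))

dense = 1_000_000            # MACs a dense accelerator would run (assumed)
for s in (0.523, 0.538):     # small-model and medium/large-model averages
    print(f"sparsity {s:.1%}: {effective_macs(dense, s):,} MACs")
```

At ~52-53% sparsity the executed MAC count falls to roughly 477,000 and 462,000 out of the assumed million, i.e. about half, which matches the abstract's claim.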
Was about to say this might get lost in the main thread if not posted here. Like your thinking, Pom down under. Had not seen this before, so another company added to the list. Great work!!!
 
Same company that did the original rad hardening, I think they changed names
 
No mention of Akida, but it’s either Akida or Loihi


KrishCorp
1201 Connecticut Ave NW Suite 600
Washington, DC 20036-0282
United States
Hubzone Owned: Yes
Socially and Economically Disadvantaged: Yes
Woman Owned: No
Duns: 079295508
Principal Investigator
Name: Shivkumar Krishnamoorthy
Phone: (301) 213-6850
Email: raja@krishcorp.com
Business Contact
Name: Shivkumar Krishnamoorthy
Phone: (301) 213-6850
Email: raja@krishcorp.com
Research Institution
N/A
Abstract
KrishCorp is proposing a solution that simulates cognitive data capture, mimicking the way a human brain detects, contextualizes, and classifies data, through an image classification engine. This engine will identify and classify different objects and artifacts in an image or video feed and present an output of tagged metadata as well as a descriptive text transcript of the objects, artifacts, and/or pertinent relational data. The engine will consist of two models: one trained using conventional image classification machine learning, and a second which will be a complementary contextual layer relating the objects to one another in a nuanced and innovative way.

The engine will be able to identify patterns and make associations while processing multisensory inputs in the same manner as neuro-biological systems. To this end, the contextual layer of the engine, trained by a second model, can be extended to incorporate, analyze, and correlate multiple types of data from various sensors, such as air pressure, temperature, and other complex telemetry available in space-based applications. Using our solution, NASA systems will ultimately be able to perform autonomous decision making based on the multi-dimensional cognitive awareness our system will provide. The engine we will build will not only classify and tag objects contained in the data sources fed to it but will also build the context around them to give a more robust and nuanced representation. The model will employ a unique combination of a Spiking Neural Network (SNN) for image recognition and Long Short-Term Memory (LSTM) to aid in unsupervised learning. The processed data will then be fed to the context-building model, where the relational elements of the classified objects will be interpreted to provide cognitive context. This model will in turn provide an accurate textual description of the original data source.
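The two-stage flow the abstract describes (a classification model feeding a context-building model that emits a textual description) can be sketched roughly as below. Everything here, including function names, the Tag record, and the stubbed detections, is an assumption for illustration, since KrishCorp's models are not public:

```python
# Illustrative data flow only -- not KrishCorp's actual engine.
from dataclasses import dataclass

@dataclass
class Tag:
    label: str        # classified object, e.g. "solar panel" (assumed example)
    confidence: float

def classify(frame) -> list[Tag]:
    """Stage 1: conventional image-classification model (stubbed here)."""
    return [Tag("antenna", 0.91), Tag("solar panel", 0.88)]

def contextualize(tags: list[Tag]) -> str:
    """Stage 2: contextual layer relating objects to one another and
    producing the descriptive transcript the abstract describes."""
    labels = [t.label for t in tags if t.confidence > 0.5]
    return "Detected " + ", ".join(labels) + " in the scene."

print(contextualize(classify(frame=None)))
```

The real context layer would correlate objects with telemetry (pressure, temperature, etc.); the stub only concatenates labels to show where that stage sits in the pipeline.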
 

Proposal Summary

Proposal Information
Proposal Number: 23-1-H6.22-2297
Subtopic Title: Deep Neural Net and Neuromorphic Processors for In-Space Autonomy and Cognition
Proposal Title: Radiation Hardened Programmable Deep Neural Processor in 22nm FDSOI CMOS process

Small Business Concern
Firm: Alphacore, Inc.
Address: 304 South Rockford Drive, Tempe, AZ 85288
Phone: (480) 494-5618

Principal Investigator
Name: Dr. Chandarasekaran Ramamurthy
E-mail: chandru.ramamurthy@alphacoreinc.com
Address: 304 South Rockford Drive, AZ 85288-3052
Phone: (214) 960-7889

Business Official
Name: Esko Mikkola
E-mail: engineering@alphacoreinc.com
Address: 304 S Rockford Dr, AZ 85288-3052
Phone: (480) 494-5618

Summary Details:
Estimated Technology Readiness Level (TRL): Begin: 2, End: 3

Technical Abstract (Limit 2000 characters):
The need for extreme radiation-hard neuromorphic hardware is overwhelming for NASA, other government agencies, and private industry. Neuromorphic computing is recognized by the electronics and aerospace industries as a promising tool for enabling high-performance computing at ultra-low power consumption. Satellites, rovers, and other key assets impose limits on size, weight, and power consumption, as well as the need for radiation tolerance. We propose to radiation harden a programmable in-memory compute neural network processor for deep neural networks by circuit, microarchitectural, and architectural means. This processing paradigm has the potential to provide a full-stack solution in the fields of in-situ cognition and autonomous decision making in extreme environments while bridging the gap between the commercial state of the art and research efforts in neuromorphic space computing. Our solution can provide tens of TOPS/W in inference performance when fully developed, with comprehensive radiation assurance. Alphacore's proposed library includes blocks designed in a 22nm FDSOI process which have gone through multiple development cycles. These will be suitable for operation under the high radiation and wide temperature ranges of planets, asteroids, and comets in deep space. With Alphacore's solution, designers can develop technologies that are lightweight, highly efficient, and able to deliver advanced capabilities for next-generation missions, all without the need for heavy protective housing to ensure functionality in deep space.




Potential NASA Applications (Limit 550 characters):
Alphacore's cost-effective, energy-efficient, rad-hard neuromorphic processor solution will enhance future lunar, Martian, and other deep space missions in applications such as in-situ cognition and autonomous decision making during critical phases like entry, descent, and landing, in the presence of solar flares as well as the radiation environments of the outer planets.




Potential Non-NASA Applications (Limit 400 characters):
Neuromorphic computing is recognized by the electronics and aerospace industries as a promising tool for enabling high-performance computing and ultra-low power consumption to achieve autonomy and machine cognition. Satellites, rovers, rockets, and other key assets require radiation-hardened processors for critical deep space missions.
 
Can’t find any information on this one


(attached image: IMG_5245.png)
 
I wouldn't be surprised if Akida is involved somewhere down the line.




LeNgineer
1313 S Washington Ave
Titusville, FL 32780-4292
United States
Hubzone Owned: No
Socially and Economically Disadvantaged: Yes
Woman Owned: Yes
Duns: 080506457
Principal Investigator
Name: Tuan Le
Phone: (407) 766-6677
Email: tuan.le@lengineer.com
Business Contact
Name: Al Rahrooh
Phone: (407) 733-1714
Email: al.rahrooh@lengineer.com
Research Institution
N/A
Abstract
NASA and the aviation industry are in need of innovative solutions to fulfill requirements, close capability gaps, and provide technological advancements for NASA science and engineering in the use of artificial intelligence (AI) and machine learning (ML) at the extreme edge. There has been a rapid increase in data rates for instruments, which leads to an increasing need for computing at the edge. This computing is often seen in constrained environments that require reliable software platforms to provide the necessary information. However, in the aerospace industry there is a lack of AI and ML at the extreme edge to enable the rapid detection of changes in a flight vehicle in order to sense anomalies across multiple data sets. Without the proper technology for rapid data processing, there is a decrease in efficient data capture and analysis that prevents a system from making accurate predictions. The current methods of data monitoring are highly inefficient, which has increased the demand for research on the application of AI/ML on spacecraft, rovers, within a constellation of SmallSats, or other remote sensing platforms where the latency and bandwidth between the remote platform and the ground station are not sufficient to adequately download all the data. Currently, there isn't a tool on the market with the capabilities to address these demands.

To address these issues, LeNgineer is developing PICA (Prediction and Innovative Computational Analysis), which has been designed for integrated use at NASA. PICA seeks to improve the use of AI/ML at the extreme edge for rapid detection, classification, segmentation, and multi-data analysis of in-flight test conditions, and to expand measurement and analysis methodologies to improve test data acquisition and management. PICA can detect issues in real time and has the capability to present solutions or resolve the problem entirely.
* information listed above is at the time of submission.
Proposal Information
Agency: National Aeronautics and Space Administration
Branch: N/A
Program:
Phase:
Proposal Year: 2021
Proposal Number: S5.05-3167
Solicitation Year: 2021
Agency Tracking Number: 213167
Solicitation Number: SBIR_21_P1
Solicitation Topic Code: S5
Related Award: N/A
Abstract
NASA and the aviation industry are in need of innovative solutions for NASA science and engineering in the use of Fault Management (FM) as a key component of system autonomy, to mitigate failures that threaten mission success. A system addressing these needs must also consider failure of sensors or of the flow of sensor data, harmful or unexpected system interaction with the environment, and problems due to faults in software or incorrect control inputs, including failure of autonomy components themselves. Currently, there isn't a tool on the market with the capabilities to address these demands.

LeNgineer's approach to Fault Management operations utilizes our developing technology, PICA (Prediction and Innovative Computational Analysis), which has been designed for integrated use in NASA projects. PICA seeks to improve the use of AI/ML as an FM system for rapid detection and multi-data analysis of in-flight test conditions. PICA can detect issues in real time and has the capability to present solutions or resolve the problem entirely. There has been a rapid increase in data rates for instruments, which leads to an increasing need for computing at the edge. This computing is often seen in constrained environments that require reliable software platforms to provide the necessary information. However, in the aerospace industry there is a lack of AI and ML at the extreme edge to enable the rapid detection of changes in a flight vehicle in order to sense anomalies across multiple data sets. Without the proper technology for rapid data processing, there is a decrease in efficient data capture and analysis that prevents a system from making accurate fault predictions. The current methods of data monitoring are highly inefficient, which has increased the demand for research on the application of AI/ML on spacecraft, UAVs, rovers, SmallSats, landers, or other remote sensing platforms where the detection of faults is of extreme importance.
 

Intellisense Systems, Inc.
21041 S. Western Ave.
Torrance, CA 90501-1727
United States
Hubzone Owned: No
Socially and Economically Disadvantaged: No
Woman Owned: No
Duns: 080921977
Principal Investigator
Name: Marc SeGall
Phone: (310) 320-1827
Email: msegall@intellisenseinc.com
Business Contact
Name: Selvy Utama
Phone: (310) 320-1827
Email: notify@intellisenseinc.com
Research Institution
N/A
Abstract
To address the United States Space Force’s need for an automated process for detecting threat events directly at the sensor in order to support course of action (COA) selection and execution in relevant timelines, Intellisense Systems, Inc. (Intellisense) proposes to develop a new Edge-based, Real-time Detector for Space Object Threats (ERDSPOT), which uses an innovative artificial intelligence (AI) approach running on embedded hardware to detect satellites and space debris in real time with >95% accuracy from non-resolved imagery without requiring a known object state estimate catalog or any other downstream information. ERDSPOT then utilizes a novel, AI-assisted multi-frame analysis to identify threat events or potential threat events such as closely spaced objects, breakups, and collisions in as little as two frames (at <1 s per frame), thereby providing alerts to operators for COA selection with <10 s of image collection. To ensure that all relevant information is relayed to the operator, ERDSPOT provides the type of event detected, threat level, and, in the case of potential threats, a probability of the event occurring and the expected timeframe for it to occur. ERDSPOT supports all telescopes and telescope tracking methodologies, including fixed staring, single object tracking, and general scanning, can monitor objects in low Earth orbit, medium Earth orbit, and geosynchronous orbit, and is sensitive enough to monitor objects as small as 1 cm in low Earth orbit. In Phase I, Intellisense will identify the types of events that can be visually categorized in non-resolved imagery and develop and demonstrate a proof-of-concept system that can automatically detect those events in simulated images. From this, Intellisense will identify the remaining key technical challenges and the technology readiness level of the system. This will be used to generate a technology maturation plan to mature ERDSPOT in later phases. 
In Phase II, Intellisense will develop a software package based on the Phase I demonstrator software to generate alerts that enable operator COA selection and execution in <10 s of imagery collection without access to a known object state estimate catalog. Intellisense will demonstrate the capabilities of the software package by comparing its performance to traditional processing approaches and by demonstrating that ERDSPOT has a minimal number of false positives, false negatives, and minimal alert latency. We will also work with the USSF to establish an integration path under a program of record and establish preliminary requirements for third party integration of ERDSPOT.
 
So many use cases for Akida in the link below, so I've decided just to put them all up instead. Scroll down and read them all and you will see.

 

Adaptive Deep Onboard Reinforcement Bidirectional Learning System

Award Information
Agency: National Aeronautics and Space Administration
Branch: N/A
Contract: 80NSSC22PB053
Agency Tracking Number: 221780
Amount: $149,996.00
Phase: Phase I
Program: SBIR
Solicitation Topic Code: H6
Solicitation Number: SBIR_22_P1
Timeline
Solicitation Year: 2022
Award Year: 2022
Award Start Date (Proposal Award Date): 2022-07-22
Award End Date (Contract End Date): 2023-01-25
Small Business Information
INTELLISENSE SYSTEMS INC
21041 South Western Avenue
Torrance, CA 90501-1727
United States
DUNS: 080921977
HUBZone Owned: No
Woman Owned: No
Socially and Economically Disadvantaged: No
Principal Investigator
Name: Fang Zhang
Phone: (310) 320-1827
Email: notify@intellisenseinc.com
Business Contact
Name: Jeanine Newcomb
Phone: (424) 319-7748
Email: jnewcomb@intellisenseinc.com
Research Institution
N/A
Abstract
NASA is seeking innovative neuromorphic processing methods and tools to enable autonomous space operations on platforms constrained by size, weight, and power (SWaP). To address this need, Intellisense Systems, Inc. (Intellisense) proposes to develop an Adaptive Deep Onboard Reinforcement Bidirectional Learning (ADORBL) processor based on neuromorphic processing and its efficient implementation on neuromorphic computing hardware. Neuromorphic processors are a key enabler to the cognitive radio and image processing system architecture, which play a larger role in mitigating complexity and reducing autonomous operations costs as communications and control become complex. ADORBL is a low-SWaP neuromorphic processing solution consisting of multispectral and/or synthetic aperture radar (SAR) data acquisition and an onboard computer running the neural network algorithms. The implementation of artificial intelligence and machine learning enables ADORBL to choose processing configurations and adjust for impairments and failures. Due to its speed, energy efficiency, and higher performance for processing, ADORBL processes raw images, finds potential targets and thus allows for autonomous missions and can easily integrate into SWaP-constrained platforms in spacecraft and robotics to support NASA missions to establish a lunar presence, to visit asteroids, and to extend human reach to Mars. In Phase I, we will develop the CONOPS and key algorithms, integrate a Phase I ADORBL processing prototype to demonstrate its feasibility, and develop a Phase II plan with a path forward. In Phase II, ADORBL will be further matured, implemented on available commercial neuromorphic computing chips, and then integrated into a Phase II working prototype along with documentation and tools necessary for NASA to use the product and modify and use the software. The Phase II prototype will be tested and delivered to NASA to demonstrate for applications to CubeSat, SmallSat, and rover flights.
 

Smart Image Recognition Sensor with Ultralow System Latency and Power Consumption

Agency: Department of Defense
Branch: Navy
Program | Phase | Year: STTR | BOTH | 2022
Solicitation: DoD STTR 22.A
Topic Number: N22A-T008
NOTE: The Solicitations and topics listed on this site are copies from the various SBIR agency solicitations and are not necessarily the latest and most up-to-date. For this reason, you should use the agency link listed below which will take you directly to the appropriate agency server where you can read the official version of this solicitation and download the appropriate forms and rules.
The official link for this solicitation is: https://rt.cto.mil/rtl-small-business-resources/sbir-sttr
Release Date: December 01, 2021
Open Date: January 12, 2022
Application Due Date: February 10, 2022
Close Date: February 10, 2022
Description:
OUSD (R&E) MODERNIZATION PRIORITY: General Warfighting Requirements (GWR); Microelectronics; Quantum Science

TECHNOLOGY AREA(S): Electronics

OBJECTIVE: Develop a novel smart visual image recognition system that has intrinsic ultralow power consumption and system latency, and physics-based security and privacy.

DESCRIPTION: Image-based recognition in general requires a complicated technology stack, including lenses to form images, optical sensors for optical-to-electrical conversion, and computer chips to implement the necessary digital computation. This process is serial in nature and hence is slow and burdened by high power consumption: it can take as long as milliseconds, and require milliwatts of power, to process and recognize an image. An image digitized into the digital domain is also vulnerable to cyber-attacks, putting users' security and privacy at risk. Furthermore, as the information content of images to be surveilled and reconnoitered continues to grow more complex over time, systems built on existing digital technologies will soon face bottlenecks in energy efficiency, latency, and security, because of the required sequential chain of analog sensing, analog-to-digital conversion, and digital computing.

It is the focus of this STTR topic to explore a much more promising solution that mitigates the latency and power consumption issues of legacy digital image recognition by processing visual data in the optical domain at the edge. This proposed technology shifts the paradigm of conventional digital image processing by using analog instead of digital computing, and thus can merge analog sensing and computing into a single physical hardware. In this methodology, the original images do not need to be digitized as an intermediate pre-processing step. Instead, incident light is directly processed by a physical medium. Examples include image recognition [Ref 1] and signal processing [Ref 2] using the physics of wave dynamics. For example, the smart image sensors of [Ref 1] have judiciously designed internal structures made of air bubbles. These bubbles scatter the incident light to perform deep-learning-based neuromorphic computing. Without any digital processing, this passive sensor can guide the optical field to different locations depending on the identity of the object. The visual information of the scene is never converted to a digitized image, and yet the object can be identified in this unique computation process. These novel image sensors are extremely energy efficient (a fraction of a microwatt) because the computing is performed passively without active use of energy. Combined with photovoltaic cells, in theory, they can compute without any net energy consumption; a small amount of energy is expended only upon successful image recognition, when an electronic signal must be delivered across the optical-digital interface. They are also extremely fast, with extremely low latency, because the computing is done in the optical domain. The latency is determined by the propagation time of light in the device, which is on the order of no more than hundreds of nanoseconds.
Therefore, its performance metrics in terms of energy consumption and latency are projected to exceed those of conventional digital image processing and recognition by up to six orders of magnitude (i.e., a 1,000,000-fold improvement). Furthermore, it has intrinsic physics-based security and privacy because the coherent properties of light are exploited for image recognition. When these standalone devices are connected to system networks, cyber hackers cannot gain access to original images because such images are never created in the digital domain at any point in the computation process. Hence, this low-energy, low-latency image sensor system is well suited to a 24/7 persistent target recognition surveillance system for any intended targets.
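The "hundreds of nanoseconds" latency figure follows from light's transit time through the medium, t = nL/c. A rough check is below; the 10 m effective path length and the refractive index of 1.5 are assumptions standing in for the lengthened path that multiple scattering inside the device produces:

```python
# Lower bound on an optical computer's latency: the transit time of
# light through the medium, t = n * L / c.

C = 299_792_458.0            # speed of light in vacuum, m/s

def transit_time_ns(path_m: float, index: float = 1.5) -> float:
    """Light transit time in nanoseconds through `path_m` metres of a
    medium with refractive index `index` (both values are assumptions)."""
    return index * path_m / C * 1e9

print(f"{transit_time_ns(10.0):.1f} ns")   # ~50 ns for a 10 m folded path
```

A few tens of metres of scattered optical path would put the transit time into the hundreds-of-nanoseconds range the solicitation quotes.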

In summary, these novel image recognition sensors, which use the nature of wave physics to perform passive computing exploiting the coherent properties of light, are a game changer for future image recognition. They could improve target recognition and identification in degraded vision environments accompanied by heavy rain, smoke, and fog. This smart image recognition sensor, coupled with analog computing capability, is an unparalleled alternative to traditional imaging sensors and digital computing systems when ultralow power dissipation, ultralow system latency, and the higher system security and reliability provided by the analog domain are the most critical key performance metrics of the system.

PHASE I: Develop, design, and demonstrate the feasibility of an image recognition device based on a structured optical medium. Proof of concept demonstration should reach over 90% accuracy for arbitrary monochrome images under both coherent and incoherent illumination. The computing time should be less than 10 µs. The throughput of the computing is over 100,000 pictures per second. The projected energy consumption is less than 1 mW. The Phase I effort will include prototype plans to be developed under Phase II.

PHASE II: Design image recognition devices for general images, including color images in the visible or multiband images in the near-infrared (near-IR). The accuracy should reach 90% for objects in ImageNet. The throughput reaches over 10 million pictures per second with computation time of 100 ns and with an energy consumption less than 0.1 mW. Experimentally demonstrate working prototype of devices to recognize barcodes, handwritten digits, and other general symbolic characters. The device size should be no larger than the current digital camera-based imaging system.
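As a sanity check (simple arithmetic, not taken from the solicitation), the per-image compute-time and throughput targets in Phases I and II are mutually consistent for a fully pipelined device:

```python
# Phase I:  <= 10 microseconds per image  ->  >= 100,000 images/s
# Phase II: ~100 nanoseconds per image    ->  ~10,000,000 images/s

def throughput(compute_time_s: float) -> int:
    """Images per second implied by a per-image compute time."""
    return round(1.0 / compute_time_s)

assert throughput(10e-6) == 100_000        # Phase I targets agree
assert throughput(100e-9) == 10_000_000    # Phase II targets agree
print("Phase I and Phase II targets are internally consistent")
```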

PHASE III DUAL USE APPLICATIONS: Fabricate, test, and finalize the technology based on the design and demonstration results developed during Phase II, and transition the technology with finalized specifications for DoD applications in the areas of persistent target recognition surveillance and image recognition in the future for improved target recognition and identification in degraded vision environment accompanied by heavy rain, smoke, and fog.

The commercial sector can also benefit from this crucial, game-changing technology development in the areas of high-speed image and facial recognition. Commercialize the hardware and the deep-learning-based image recognition sensor for law enforcement, marine navigation, commercial aviation enhanced vision, medical applications, and industrial manufacturing processing.
 
American GNC Corporation
888 Easy Street
Simi Valley, CA 93065-1812
United States
Hubzone Owned: No
Socially and Economically Disadvantaged: Yes
Woman Owned: Yes
Duns: 611466855
Principal Investigator
Name: Francisco Maldonado
Phone: (805) 582-0582
Email: fmald@americangnc.com
Business Contact
Name: Emily Melgarejo
Phone: (805) 582-0582
Email: emelgarejo@americangnc.com
Research Institution
N/A
Abstract
The “On-Board Distributed Autonomous LearnIng for Satellite (ODALIS) Communication System” provides a cognitive approach that senses, detects, adapts, and learns from both experiences and the environment to optimize communications while addressing NASA’s needs to leverage artificial intelligence and machine learning technologies to optimize space communication links, networks, and systems. While CubeSats and Software Defined Radio (SDR) support are the initial target focus, the ODALIS system can also be applied to lunar surface assets (e.g. surface relays, science stations, astronaut communications, and rovers) and relay satellites, Earth ground stations, the International Space Station, and spacecraft communications. The ODALIS system provides an innovative on-board embedded implementation and spectrum availability prognostics based upon an ensemble of learning paradigms that involves: (i) Federated Learning; (ii) local learning by a Long Short Term Memory (LSTM) based recurrent neural network, which is applied to spectrum prediction; and (iii) State-of-the-Art (SOTA) software defined radio with embedded distributed learning. Innovations are aimed at improvement and enhancement of cognitive communications by leveraging machine learning technology and SDR with optimized size, weight, and power (SWaP) to conduct RF spectrum availability detection and prognostics and include: (1) Over-the-Air Federated Learning implementation (a Phase II result); (2) novel hardware implementation based upon a SOTA RFSoC (FPGA) and Machine Learning toolboxes; (3) fully embedded Machine Learning within the hardware core for achieving real time operation in cognitive communications; and (4) design for automated configuration support within Software Defined Radio and Cognitive Radio.

 

SBIR Phase I: A Wearable, Independent, Braille-Assistive Learning Device

Award Information
Agency: National Science Foundation
Branch: N/A
Contract: 2236574
Agency Tracking Number: 2236574
Amount: $274,999.00
Phase: Phase I
Program: SBIR
Solicitation Topic Code: HC
Solicitation Number: NSF 22-551
Timeline
Solicitation Year: 2022
Award Year: 2023
Award Start Date (Proposal Award Date): 2023-04-01
Award End Date (Contract End Date): 2024-03-31
Small Business Information
BRAILLEWEAR
611 South DuPont Highway, Suite 102
Dover, DE 19901
United States
DUNS: N/A
HUBZone Owned: No
Woman Owned: No
Socially and Economically Disadvantaged: No
Principal Investigator
Name: Kushagra Jain
Phone: (609) 373-3437
Email: kj228@cornell.edu
Business Contact
Name: Kushagra Jain
Phone: (609) 373-3437
Email: kj228@cornell.edu
Research Institution
N/A
Abstract
The broader/commercial impact of this Small Business Innovation Research (SBIR) Phase I project is in creating an independent. assistive Braille learning device for blind people. The ability to read Braille is highly correlated with improved independence and quality of life. An estimated 70% of the blind are unemployed yet, of that subpopulation that is Braille literate, only 10% are unemployed. There is a Braille literacy crisis - only 8.5% of the blind population in the US can read Braille today, compared to 50% in the 1960s. There are several factors theorized to contribute to increasing Braille illiteracy including: 1) a shortage of teachers qualified to teach Braille, 2) negative outlooks on the difficulty and cost of Braille learning, and 3) and difficulties integrating blind students into mainstream schools that don’t have the specialized resources for this population. The results of this project will assist students of all ages in learning how to read Braille, including secondary Braille learners who become blind later in life. Aiming at inhibiting the Braille literacy crisis, the technology enables the blind to be given the same opportunities as their sighted peers, including better chances at graduating from high school and college, obtaining employment, and having high independence levels._x000D_
The intellectual merit of this project is in the development of a wearable, computer vision-based, real-time Braille-to-speech learning device. While the primary mission of the project is to unlock the full potential of blind individuals through Braille literacy, the overall goal for the technology is to unlock the full potential of human touch with computer-assisted augmentation cues in response to intricate textural patterns. The proposed technology will detect such patterns in a contactless approach, preserving the integrity of the material, and provide auditory feedback in real-time to allow for mechanosensory-augmented feedback. This project focuses on establishing the technical feasibility of such an approach by: 1) determining if the device and interpreting algorithms can be made robust to environmental and user postural variations, 2) developing capabilities to perform well on textured and/or patterned surfaces, and 3) conducting usability testing to identify areas of the user experience that must be enhanced in the future to be viable in the market with two vital stakeholders - Braille tutors and Braille students. These goals, if completed successfully, will not only impact Braille learners but also open up other market applications for this technology such as manufacturing and medicine.
This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.
 

A good chance they could be adapting Akida in some way​

Green Mountain Semiconductor Inc. awarded a contract from NASA to Pave the Way for Space-Ready Neural Processors​

BURLINGTON, VT– October 23, 2023 -- Green Mountain Semiconductor Inc. (GMS), a custom circuit design provider, is proud to announce its recent achievement of a Phase I Small Business Innovation Research (SBIR) contract. This significant milestone underscores GMS' commitment to pushing the boundaries of technology in support of space applications. The contract was granted under NASA's subtopic "Deep Neural Net and Neuromorphic Processors for In-Space Autonomy and Cognition."
Under the Phase I SBIR, GMS is embarking on a groundbreaking initiative, "A Radiation Hard Neural Processor With Embedded MRAM," aimed at tackling the critical challenge of radiation resilience in electronic components intended for space missions. GMS's approach involves the development of an analysis tool to optimize circuit designs for reliability and radiation resistance. This advancement will enable the use of off-the-shelf technology in space applications without the need for excessive design modifications to ensure radiation resistance.
As a pivotal initial step, GMS will concentrate on enhancing its in-house designed memory neural processor, specially tailored for cube-sat format satellite payloads. By harnessing GMS's extensive expertise in emerging memory technologies and incorporating radiation resistance into the design process, this endeavor promises to revolutionize the accessibility and dependability of neural processors for space missions.
"Through this project, GMS aims to empower companies seeking radiation-hardened solutions by creating a predictive tool for assessing radiation exposure on custom silicon designs," stated Ryan Jurasek, VP Research & Development at GMS. "Our inaugural application of this tool will focus on optimizing our in-memory neural processor, designed using SOI process with MRAM integration, and intended for testing in a cube-satellite. GMS's experience will shape the design and optimization of future products, both in space and beyond."
In an era where the economics of space technology are evolving, GMS recognizes the increasing demand for advanced silicon in space missions. However, the space environment presents inherent challenges, including exposure to cosmic rays, solar flares, and other ionizing radiation sources, which can disrupt electronics and degrade silicon. Radiation testing of commercial products is often costly and raises questions about circuitry utilization and failure statistics. Moreover, off-the-shelf components are not inherently optimized for space applications in terms of size, weight, and power considerations.
GMS is poised to address these challenges by leveraging its in-depth knowledge of emerging memory technologies and developing radiation-resistant solutions by design. This strategic approach positions GMS to meet the growing demand for in-memory neuromorphic design.
GMS' commitment to innovation, reliability, and technological advancement continues to drive its goal to enhance the efficiency and accessibility of space exploration. The Phase I SBIR award underscores GMS's dedication to meeting the evolving needs of the aerospace industry and fostering a new era of space-ready neural processors.
About Green Mountain Semiconductor
Green Mountain Semiconductor is a pioneering semiconductor design provider headquartered in Burlington, Vermont. With a strong focus on emerging technologies and a dedication to excellence, GMS specializes in creating innovative solutions for a wide range of applications, including space technology. The business offers turnkey custom circuit design, characterization, and testing services, with a focus on innovative memory concepts and processing-in-memory architectures. Learn more at greenmountainsemi.com.
 
No mention of BrainChip, but it looks extremely promising



An Energy-efficient and Self-diagnostic Portable Edge-Computing Platform for Traffic Monitoring and Safety​

Award Information
Agency:Department of Transportation
Branch:N/A
Contract:6913G623P800056
Agency Tracking Number:DOT-23-FH2-015
Amount:$149,995.68
Phase:phase I
Program:SBIR
Solicitation Topic Code:23-FH2
Solicitation Number:6913G623QSBIR1
Timeline
Solicitation Year:2023
Award Year:2023
Award Start Date (Proposal Award Date):2023-07-13
Award End Date (Contract End Date):2024-01-12
Small Business Information
CLR ANALYTICS INC
52 Gardenhouse Way
Irvine, CA 92620
United States
DUNS:N/A
HUBZone Owned:No
Woman Owned:No
Socially and Economically Disadvantaged:Yes
Principal Investigator
Name: Lianyu Chu
Phone: (949) 864-6696
Email: lchu@clr-analytics.com
Business Contact
Name: Lianyu Chu
Title: Lianyu Chu
Phone: (949) 864-6696
Email: lchu@clr-analytics.com
Research Institution
N/A
Abstract
Recent advances in technologies have shown great potential for widespread use of Artificial Intelligence (AI) techniques in real-time Intelligent Transportation Systems (ITS) applications. However, the massive amounts of data collected and generated from ITS sensors pose a major challenge in data processing and transmission. This requires a shift from centralized repositories and cloud computing to edge computing. This project proposes an integrated low-power edge-computing system to work with computation-intensive traffic sensors (e.g., video, high-resolution radar, and Lidar) and weather sensors. The system will be designed to be portable, have self-diagnostic capabilities through monitoring sensors and system operations, and send out alerts and data when necessary. The proposed system will include an edge server, which will be developed based on a System-on-Module (SoM) using the latest AI chip, and an innovative hybrid camera that integrates a regular video camera and a FLIR thermal image camera. The project will identify and implement in-situ information processing and extraction algorithms based on machine learning and deep learning techniques to classify vehicles and detect events such as vehicle crashes, the presence of stopped vehicles, pavement and environmental conditions, and wildlife. The prototype will be demonstrated at a California test site in collaboration with Caltrans.
 
Not much more info on this new award


Intelligent Climate Edge Classification Platform​

Award Information
Agency:Department of Defense
Branch:Air Force
Contract:FA2330-23-C-B002
Agency Tracking Number:F2D-8346
Amount:$1,799,992.00
Phase:phase II
Program:SBIR
Solicitation Topic Code:AF231-D020
Solicitation Number:23.1
Timeline
Solicitation Year:2023
Award Year:2023
Award Start Date (Proposal Award Date):2023-07-29
Award End Date (Contract End Date):2025-06-29
Small Business Information
INTELLISENSE SYSTEMS INC
21041 S. Western Ave.
Torrance, CA 90501-1727
United States
DUNS:080921977
HUBZone Owned:No
Woman Owned:No
Socially and Economically Disadvantaged:No
Principal Investigator
Name: Jeremy Frank
Phone: (310) 320-1827
Email: jfrank@intellisenseinc.com
Business Contact
Name: Selvy Utama
Phone: (310) 320-1827
Email: notify@intellisenseinc.com
Research Institution
N/A
Abstract
To address the Air Force need for systems that employ machine learning to augment intelligence gathered by weather sensors and/or streamline analysis of local area environmental intelligence to increase fidelity of understanding, Intellisense Systems, Inc
 

Advanced Real-Time Seismic Monitoring Platform​

Award Information
Agency:Department of Energy
Branch:N/A
Contract:DE-SC0023750
Agency Tracking Number:0000272557
Amount:$206,493.00
Phase:phase I
Program:SBIR
Solicitation Topic Code:C56-06a
Solicitation Number:DE-FOA-0002903
Timeline
Solicitation Year:2023
Award Year:2023
Award Start Date (Proposal Award Date):2023-07-10
Award End Date (Contract End Date):2024-07-09
Small Business Information
SEQUENT LOGIC LLC
1300 N 200 E STE 118
Logan, UT 84341
United States
DUNS:087443253
HUBZone Owned:Yes
Woman Owned:No
Socially and Economically Disadvantaged:No
Principal Investigator
Name: Ryan Seeley
Phone: (435) 994-8044
Email: seeleyr@sequentlogic.com
Business Contact
Name: Ryan Seeley
Phone: (435) 994-8044
Email: seeleyr@sequentlogic.com
Research Institution
N/A
Abstract
Distributed acoustic sensing (DAS) systems produce data at very high throughputs (~100 MB/s) that quickly accumulate to unmanageable volumes (~10 TB/day). These throughputs make it difficult to establish a distributed seismic observatory integrating DAS nodes at the network edge, because the data cannot be transferred and interpreted in real time. The problem can be addressed by compressing data and/or performing additional real-time signal processing at the network edge before sending data to a centralized controller of a distributed seismic observatory. The proposed solution creates and provisions a flexible set of hardware at the edge and in the cloud, respectively, allowing compression and/or seismic event detection algorithms to target CPU, GPU, and/or FPGA hardware both within edge-based nodes and the cloud-based controller. The Phase I project is devoted to implementing DAS data preprocessing and compression with the intent to achieve optimum compression of DAS data at the edge. State-of-the-art, open-source compressors will be selected and employed to compress publicly available DAS data. Seismic event detection algorithms will be used to assess and optimize compressor settings. Compression algorithms will be implemented and tested in CPU, GPU, and FPGA hardware. This project enables a distributed seismic observatory comprising many DAS-based edge nodes, including nodes deployed in rural and/or remote locations. Real-time DAS data compression will make it easier to store and share DAS datasets, facilitating analysis and scientific discovery. The project will also likely lead to additional publicly available DAS data.
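The data rates quoted in the abstract can be sanity-checked in a few lines. The ~100 MB/s figure is from the abstract; the 5 MB/s backhaul rate below is a hypothetical assumption for illustration only:

```python
# Back-of-envelope check of the DAS data volumes quoted in the abstract.
# The 100 MB/s interrogator rate comes from the abstract; the 5 MB/s
# uplink is a hypothetical assumption to show the needed compression ratio.

SECONDS_PER_DAY = 86_400

def daily_volume_tb(throughput_mb_s: float) -> float:
    """Raw data volume per day, in terabytes, for a given MB/s rate."""
    return throughput_mb_s * SECONDS_PER_DAY / 1e6  # MB -> TB

def required_ratio(throughput_mb_s: float, uplink_mb_s: float) -> float:
    """Compression ratio needed to fit the sensor stream into the uplink."""
    return throughput_mb_s / uplink_mb_s

vol = daily_volume_tb(100)       # 8.64 TB/day, matching the ~10 TB/day claim
ratio = required_ratio(100, 5)   # hypothetical 5 MB/s backhaul link
print(f"{vol:.2f} TB/day, need ~{ratio:.0f}x compression")
```

At 100 MB/s a node really does accumulate close to 10 TB per day, which is why the project pushes compression and event detection out to the edge nodes rather than shipping raw data back.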
 
Here goes and at least one should contain Akida lol


Modeling Neuromorphic and Advanced Computing Architectures​

Agency:
Department of Defense
Branch:
Navy
Program | Phase | Year:
N/A | N/A |
Solicitation:

Topic Number:
N202-108
Release Date:
N/A
Open Date:
N/A
Application Due Date:
N/A
Close Date:
N/A
Description:
OBJECTIVE: Develop a software tool to optimize the signal processing chain across various sensors and systems, e.g., radar, electronic warfare (EW), electro-optical/infrared (EO/IR), and communications, that consists of functional models that can be assembled to produce an integrated network model used to predict overall detection/classification, power, and throughput performance to make design trade-off decisions.
DESCRIPTION: Conventional computing architectures are running up against a quantum limit in terms of transistor size and efficiency, sometimes referred to as the end of Moore’s Law. To regain our competitive edge, we need to find a way around this limit. This is especially relevant for small size, weight, and power (SWaP)-constrained platforms. For these systems, scaling Von Neumann computing becomes prohibitively expensive in terms of power and/or SWaP. Biologically inspired neural networks provide the basis for modern signal processing and classification algorithms. Implementation of these algorithms on conventional computing hardware requires significant compromises in efficiency and latency due to fundamental design differences. A new class of hardware is emerging that more closely resembles the biological neuron model, also known as the spiking neuron model, which mathematically describes systems found in nature and may resolve some of these limitations and bottlenecks. Recent work has demonstrated performance gains using these new hardware architectures and has shown that they converge on solutions of equivalent accuracy [Ref 1]. The most promising of the new class are based on Spiking Neural Networks (SNN) and analog Processing in Memory (PiM), where information is spatially and temporally encoded onto the network. It can be shown that a simple spiking network can reproduce the complex behavior found in the neural cortex with a significant reduction in complexity and power requirements [Ref 2]. Fundamentally, there should be no difference in algorithms based on neural networks; in fact, they can easily be transferred between hardware architectures [Ref 4]. These performance gains, and the relative ease of transitioning current algorithms to the new hardware, motivate consideration of this SBIR topic. Hardware based on SNNs is currently under development at various stages of maturity. Two prominent examples are the IBM TrueNorth and the Intel Loihi chips.
The IBM approach uses conventional Complementary Metal-Oxide Semiconductor (CMOS) technology, while the Intel approach uses a less mature memristor architecture. The estimated efficiency increase is greater than three orders of magnitude over state-of-the-art graphics processing units (GPU) or field-programmable gate arrays (FPGA). More advanced architectures based on an all-optical or photonic SNN show even more promise: nano-photonic systems are estimated to achieve a six-order-of-magnitude increase in efficiency and computational density, approaching the performance of a human neural cortex. Modeling these systems to make design and acquisition decisions is of great interest and importance. Validating these performance estimates and providing a modeling tool is the basis for this SBIR topic. The primary goal of this effort is to create a software tool that captures the non-linear physics of these SNNs, and possibly other neuromorphic and related low-SWaP architectures, as well as functionally models their behavior. It is recommended to use open-source languages, software, and hardware when possible. A similar approach [Ref 6] should be considered as a starting point, with the ultimate goal of producing a viable and flexible product for capturing, modeling, and understanding the behaviors of a composite system constructed to employ these adaptive learning systems, covering architectures ranging from CMOS to photonics. Additionally, the model should be able to take an algorithm developed in a conventional neural network framework such as Caffe, PyTorch, or TensorFlow and run it through the functional model to predict performance criteria like latency and throughput. The secondary goal is to build up a network framework to model multi-step processing chains.
For example, a hypothetical processing chain for a communications system might be filter, in-phase/quadrature (IQ) demodulation, frequency decomposition, symbol detection, interference mitigation, filter, and decryption. Work produced in Phase II may become classified. Note: The prospective contractor(s) must be U.S. owned and operated with no foreign influence as defined by DoD 5220.22-M, National Industrial Security Program Operating Manual, unless acceptable mitigating procedures can and have been implemented and approved by the Defense Counterintelligence and Security Agency (DCSA). The selected contractor and/or subcontractor must be able to acquire and maintain a Secret-level facility clearance and Personnel Security Clearances. This will allow contractor personnel to perform on advanced phases of this project as set forth by DCSA and NAVAIR in order to gain access to classified information pertaining to the national defense of the United States and its allies; this will be an inherent requirement. The selected company will be required to safeguard classified material IAW DoD 5220.22-M during the advanced phases of this contract.
PHASE I: Design and develop the modeling approach and demonstrate feasibility to capture the relevant physics and computational complexity. Demonstrate a functional model of a SNN. The Phase I effort will include prototype plans to be developed under Phase II.
PHASE II: Validate the functional model using test cases from literature. Model validation with hardware is strongly encouraged, however, due to the limited availability of hardware this is not a requirement. The model will need to contain a network framework for various processing steps across multiple sensor areas using lower level functional models. Priorities sensor/functional areas are EW, radar, communications, and EO/IR.Work in Phase II may become classified. Please see note in Description section.
PHASE III: Refine algorithms and test with hardware. Validate models with data provided by Naval Air Warfare Center (NAWC) Aircraft Division (AD)/Weapons Division (WD). Transition the model to the warfare centers. Development of documentation, training manuals, and software maintenance may be required. Heavy commercial investments in machine learning and artificial intelligence will likely continue for the foreseeable future. Adoption of hardware that can deliver orders-of-magnitude SWaP performance gains for intelligent mobile machine applications is estimated to be worth $10^9-10^12 globally per year. Providing the software tools needed to optimize the algorithms and hardware integration would be a significant contribution to this requirement. Industries that would benefit from successful technology development include automotive (self-driving vehicles), personal robots, and a variety of intelligent sensors.
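As a rough illustration of the secondary goal, a functional chain model can be sketched as a list of stages, each with a per-stage latency and a maximum throughput: end-to-end latency is the sum of the stages, while the chain's throughput is set by the bottleneck stage. The stage names below follow the hypothetical communications chain in the solicitation, but all the numbers are invented for illustration:

```python
# Minimal sketch of a functional processing-chain model of the kind the
# topic asks for. Stage latencies and throughputs are made-up placeholder
# values, not figures from the solicitation or any real hardware.
from dataclasses import dataclass

@dataclass
class Stage:
    name: str
    latency_us: float        # per-sample processing delay, microseconds
    throughput_msps: float   # max sustained rate, megasamples/second

def chain_latency_us(stages):
    """End-to-end latency: stages run in sequence, so delays add."""
    return sum(s.latency_us for s in stages)

def chain_throughput_msps(stages):
    """Sustained throughput is limited by the slowest (bottleneck) stage."""
    return min(s.throughput_msps for s in stages)

comms_chain = [
    Stage("filter", 2.0, 500),
    Stage("IQ demodulation", 5.0, 200),
    Stage("frequency decomposition", 20.0, 80),
    Stage("symbol detection", 10.0, 120),
    Stage("interference mitigation", 15.0, 60),
    Stage("decryption", 8.0, 150),
]

print(chain_latency_us(comms_chain))      # 60.0 us end-to-end
print(chain_throughput_msps(comms_chain)) # 60 Msps, set by interference mitigation
```

A real tool would replace each `Stage` with a physics-level functional model of the SNN or photonic hardware implementing it, but the composition logic, sum the latencies, take the bottleneck throughput, is the same.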
 