BRN Discussion Ongoing

TECH

Regular
Considering the main two GPU companies are NVIDIA and AMD... Would more than likely go with NVIDIA, in my slight opinion. ;)

A number of interesting deductions can be drawn. First off, I and many others must be quietly thinking, "Isn't AKD 1000 doing us all proud?" I fully understand what Sean was implying when he said that AKD 1000 was too narrow in its offering, and that to really put a stamp on our leadership in this Edge AI space we had to keep driving forward with more iterations, and fast, which the BrainChip team has delivered in spades and is still doing. Both Peter and Anil were under the pump to get AKD II over the line and nearly had to be wheelchaired out of Sean's office from exhaustion (joke) :ROFLMAO:

Second point: what company would choose to go with technology that was 300x worse off? I'm obviously referring to Jensen and his mob.

Final deduction: as I have suggested for years, IF a suitor appeared on the horizon my pick would be Jensen, but because of his stubbornness in not making an honest play for us, say in the USD $20+ range, our market value (despite what the share price indicates) will be steadily rising over the coming years.

The above is a mixture of fact and tongue in cheek.

Tech (y)
 
  • Like
  • Thinking
  • Fire
Reactions: 22 users

Guzzi62

Regular
I can’t help it if some people haven’t managed to get anything going and are now putting everything on one card… Like I said, if it shoots up, I’ll be happy for you and for me… . If not, then it just wasn’t meant to be. I don’t see what the problem is. And again.. we can not change anything anyway
You just can't keep your mouth shut, huh!!

Did I write anything about putting everything on one card??

You are speaking like a wise elder looking down on his flock, arrogant as hell!

It's about age and how long it's been taking so far, and I bet you I am not the only one feeling this way.

Just for your info, I got other investments, thank you very much.
 
  • Like
  • Fire
  • Haha
Reactions: 13 users

Tothemoon24

Top 20

Mobile AI Features Evolve: Training LLM Models Directly on Smartphones​

Article By : Anthea Chuang, EE Times Taiwan​


MediaTek's Dimensity 9400 chipset enhances smartphones with advanced Edge AI, immersive gaming, and superior imaging, while offering improved energy efficiency and performance for a smarter mobile experience...
What if users could train Large Language Models (LLMs) directly on their smartphones, incorporating their personal characteristics? Could this spark a new “golden age” for smartphones, more than a decade after their initial debut?

The shift of artificial intelligence (AI) from cloud systems to edge devices has accelerated the growth of Edge AI. The arrival of generative AI models like ChatGPT and LLMs has ignited a wave of new AI-powered applications. However, the same challenges persist: edge devices, including smartphones and PCs, are working to overcome issues related to computing power and energy consumption, enabling LLMs to function efficiently on mobile devices.

Since AI-powered PCs made their debut, smartphones with integrated AI features have increasingly captured consumer interest. But is simply supporting LLMs on smartphones enough to meet user expectations? Could the ability to train LLMs directly on phones—embedding them with individual traits—usher in a new era for smartphones, similar to the one that followed their first release over ten years ago?

MediaTek has unveiled its next-generation Dimensity 9400 flagship chipset, designed to enhance AI experiences on smartphones by improving both performance and efficiency. JC Hsu, Senior Vice President of MediaTek, explained that the Dimensity 9400 uses a second-generation big-core architecture, combining an Arm v9.2 CPU, GPU, and NPU. The chipset is purpose-built for Edge AI, immersive gaming, and superior imaging, positioning it as a 5G Agentic AI flagship product.

Energy Efficiency and Performance​

Built on TSMC’s second-generation 3nm process, the Dimensity 9400 delivers 40% lower power consumption compared to its predecessor. Hsu detailed that the second-generation big-core CPU architecture integrates one Arm Cortex-X925 core running at up to 3.62GHz, three Cortex-X4 cores, and four Cortex-A720 cores. This results in a 35% increase in single-core performance and a 28% increase in multi-core performance over the Dimensity 9300. Additionally, the chipset includes MediaTek’s 8th-generation AI processor (NPU 890) and the Dimensity Agentic AI engine. The NPU supports device-side LoRA training and the generation of high-quality images, enabling the Dimensity 9400 to enhance generative AI performance and provide developers with Agentic AI capabilities. This allows AI applications to evolve into autonomous, reasoning-driven, and action-oriented experiences.

These advanced features enable users to train AI models directly on their smartphones. Agentic AI applications learn from user habits, proactively suggesting responses and improving overall user experiences. To further develop a rich AI ecosystem, Hsu emphasized MediaTek’s collaboration with developers to create a unified interface for connecting AI agents, third-party apps, and models. This initiative streamlines AI operations between edge devices and cloud services, while reducing product development cycles.
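
For anyone wondering what the "device-side LoRA training" mentioned above actually involves, here is a minimal sketch of a LoRA adapter in PyTorch. MediaTek's actual on-device training stack is not described in the article, so the layer size, rank and names below are illustrative assumptions only; the point is that only the two small low-rank matrices are trained, which is what makes fine-tuning an LLM on a phone plausible at all.

```python
# Minimal LoRA adapter sketch (illustrative assumptions only; not MediaTek's API).
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Wraps a frozen linear layer with a trainable low-rank update: W x + (alpha/r) * B A x."""
    def __init__(self, base: nn.Linear, r: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False                                     # pretrained weights stay frozen
        self.A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)  # small random init
        self.B = nn.Parameter(torch.zeros(base.out_features, r))        # zero init => no change at start
        self.scale = alpha / r

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.base(x) + (x @ self.A.T @ self.B.T) * self.scale

# Only A and B are updated during fine-tuning: for a hypothetical 4096x4096 layer
# that is 65,536 trainable parameters instead of roughly 16.8 million.
layer = LoRALinear(nn.Linear(4096, 4096), r=8)
trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
print(f"trainable params per layer: {trainable}")
```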



MediaTek Dimensity 9400 chipset delivers innovative Edge AI, immersive gaming, and exceptional imaging experiences for users.
(Source: MediaTek)


Enhanced Edge AI and More​

Beyond enhancing Edge AI, the Dimensity 9400 also offers significant upgrades in gaming, photography, and wireless connectivity. Hsu highlighted the integration of a 12-core Arm Immortalis-G925 GPU and a PC-grade Dimensity OMM ray-tracing engine, delivering an immersive gaming experience with realistic lighting effects. The flagship ISP, Imagiq 1090, supports full-range HDR, enabling smooth zooming and clear tracking of moving subjects. For wireless communication, the chipset’s 5G modem, based on 3GPP Release 17, supports dual SIM and dual data functionalities. Its 4nm Wi-Fi/Bluetooth combo chip boosts Wi-Fi 7 multi-link operation (MLO) to 7.3Gbps and extends coverage by up to 30 meters.

The Dimensity 9400 chipset supports foldable smartphones, offering manufacturers more design flexibility while bringing innovative Edge AI, immersive gaming, and enhanced imaging to users.

Balancing Performance and Battery Life​

With generative AI requiring significant power, the enhanced AI features of the Dimensity 9400 raise concerns about battery life. Hsu reassured that despite the major performance boosts, the new chipset’s advanced manufacturing techniques and second-generation big-core architecture lead to improved energy efficiency. For example, LLM prompt processing performance improves by 80%, while power consumption is reduced by 35%. GPU performance increases by 41%, while power consumption is cut by 44%. Additionally, optimized photography and video processing reduces power consumption by 14%, ensuring a balance between performance and battery life.
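
To put those percentages in perspective, here is a rough back-of-the-envelope check. It assumes the figures are measured against the Dimensity 9300 and that energy per task scales as power divided by throughput, which the article does not state explicitly:

```python
# Rough energy-per-task estimate from the quoted figures (assumptions noted above).
def energy_per_task_ratio(perf_gain: float, power_change: float) -> float:
    """New energy per task relative to the previous generation: (power ratio) / (throughput ratio)."""
    return (1.0 + power_change) / (1.0 + perf_gain)

print(f"LLM prompt processing: {energy_per_task_ratio(0.80, -0.35):.2f}x energy per task")  # ~0.36x
print(f"GPU workloads:         {energy_per_task_ratio(0.41, -0.44):.2f}x energy per task")  # ~0.40x
```

In other words, if the numbers hold up, each LLM prompt would cost roughly a third of the energy it did on the previous generation.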
 
  • Wow
  • Like
Reactions: 4 users
LLMs at the edge on mobile phones from our competition; where is our mobile phone deal? Is a three-year lead where we are commercially now?
 
  • Like
Reactions: 8 users
Whilst we'd all like to see additional IP licences etc, I take some positives that there are now at least 3 companies we know of that have done the groundwork, development etc and passed the POC stage to offer end products.

That does reveal some progress imo.

Like any business, they obviously need to go to mkt and see what traction they get and if demand / contracts are there, then ramp up for production which I suspect would see supply of Akida through someone like MegaChips or maybe a direct licence with BRN at that point.

That is what I am envisioning anyway.

The 3 products are in 3 different mkts as well which is good.

Bascom Hunter Snap Card

VVDN Edge Box

Quantum Ventura CyberNeuro-RT
 
  • Like
  • Fire
  • Love
Reactions: 52 users
Whilst we'd all like to see additional IP licences etc, I take some positives that there are now at least 3 companies we know of that have done the groundwork, development etc and passed the POC stage to offer end products.

That does reveal some progress imo.

Like any business, they obviously need to go to mkt and see what traction they get and if demand / contracts are there, then ramp up for production which I suspect would see supply of Akida through someone like MegaChips or maybe a direct licence with BRN at that point.

That is what I am envisioning anyway.

The 3 products are in 3 different mkts as well which is good.

Bascom Hunter Snap Card

VVDN Edge Box

Quantum Ventura CyberNeuro-RT
Though one could argue that Unigen have their Cupcake server with an Akida configuration available, and BeEmotion.Ai have their Smart Edge products running on Akida available too.


 
  • Like
  • Fire
Reactions: 28 users

ndefries

Regular
Though one could argue that Unigen have their Cupcake server with an Akida configuration available, and BeEmotion.Ai have their Smart Edge products running on Akida available too.


Don't forget there is an akida floating around in space after it was lost.
 
  • Like
  • Haha
Reactions: 19 users
Don't forget there is an akida floating around in space after it was lost.
Too true....Ant61.

Shame they lost contact before they could prove up Akida....at least it wasn't an Akida issue.
 
  • Like
Reactions: 9 users

Taproot

Regular
These things obviously take time to develop. A lot longer than any of us anticipated.
Here is the original SBIR award for N202-099.
May 6 2020
Mentions IBM and Intel. No mention of BrainChip at this point.
Interesting little spiel highlighted below. I wonder if this research/work ended up having anything to do with the Perth office getting closed down and certain people retiring or removing themselves from BrainChip?



Implementing Neural Network Algorithms on Neuromorphic Processors

Navy SBIR 20.2 - Topic N202-099

Naval Air Systems Command (NAVAIR) - Ms. Donna Attick navairsbir@navy.mil

Opens: June 3, 2020 - Closes: July 2, 2020 (12:00 pm ET)





N202-099 TITLE: Implementing Neural Network Algorithms on Neuromorphic Processors



RT&L FOCUS AREA(S): Artificial Intelligence/ Machine Learning, General Warfighting Requirements (GWR)

TECHNOLOGY AREA(S): Air Platform



OBJECTIVE: Deploy Deep Neural Network algorithms on near-commercially available Neuromorphic or equivalent Spiking Neural Network processing hardware.



DESCRIPTION: Biological inspired Neural Networks provide the basis for modern signal processing and classification algorithms. Implementation of these algorithms on conventional computing hardware requires significant compromises in efficiency and latency due to fundamental design differences. A new class of hardware is emerging that more closely resembles the biological Neuron/Synapse model found in Nature and may solve some of these limitations and bottlenecks. Recent work has demonstrated significant performance gains using these new hardware architectures and have shown equivalence to converge on a solution with the same accuracy [Ref 1].



The most promising of the new class are based on Spiking Neural Networks (SNN) and analog Processing in Memory (PiM), where information is spatially and temporally encoded onto the network. A simple spiking network can reproduce the complex behavior found in the Neural Cortex with significant reduction in complexity and power requirements [Ref 2]. Fundamentally, there should be no difference between algorithms based on Neural Network and current processing hardware. In fact, the algorithms can easily be transferred between hardware architectures [Ref 4]. The performance gains, application of neural networks and the relative ease of transitioning current algorithms over to the new hardware motivates the consideration of this topic.


Hardware based on Spiking Neural Networks (SNN) are currently under development at various stages of maturity. Two prominent examples are the IBM True North and the INTEL Loihi Chips, respectively. The IBM approach uses conventional CMOS technology and the INTEL approach uses a less mature memrisistor architecture. Estimated efficiency performance increase is greater than 3 orders of magnitude better than state of the art Graphic Processing Unit (GPUs) or Field-programmable gate array (FPGAs). More advanced architectures based on an all-optical or photonic based SNN show even more promise. Nano-Photonic based systems are estimated to achieve 6 orders of magnitude increase in efficiency and computational density; approaching the performance of a Human Neural Cortex. The primary goal of this effort is to deploy Deep Neural Network algorithms on near-commercially available Neuromorphic or equivalent Spiking Neural Network processing hardware. Benchmark the performance gains and validate the suitability to warfighter application.
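
As an aside, for readers unfamiliar with the "simple spiking neurons" of Ref 2, below is a minimal Python sketch of the Izhikevich (2003) model the solicitation alludes to. It is an illustration added here for context, not part of the solicitation text; it uses a coarse Euler integration and the paper's "regular spiking" parameter values.

```python
# Minimal Izhikevich spiking-neuron simulation (Ref 2; regular-spiking parameters).
import numpy as np

def izhikevich(I: float, a=0.02, b=0.2, c=-65.0, d=8.0, dt=0.5, T=1000.0):
    """Simulate membrane potential v (mV) under a constant input current I, over T ms."""
    steps = int(T / dt)
    v, u = -65.0, b * -65.0            # initial membrane potential and recovery variable
    spike_times, trace = [], []
    for step in range(steps):
        v += dt * (0.04 * v * v + 5 * v + 140 - u + I)
        u += dt * a * (b * v - u)
        if v >= 30.0:                  # spike threshold: record spike, then reset
            spike_times.append(step * dt)
            v, u = c, u + d
        trace.append(v)
    return np.array(trace), spike_times

trace, spikes = izhikevich(I=10.0)
print(f"{len(spikes)} spikes in 1 second of simulated time")
```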



Work produced in Phase II may become classified. Note: The prospective contractor(s) must be U.S. owned and operated with no foreign influence as defined by DoD 5220.22-M, National Industrial Security Program Operating Manual, unless acceptable mitigating procedures can and have been implemented and approved by the Defense Counterintelligence and Security Agency (DCSA). The selected contractor and/or subcontractor must be able to acquire and maintain a secret level facility and Personnel Security Clearances. This will allow contractor personnel to perform on advanced phases of this project as set forth by DCSA and NAVAIR in order to gain access to classified information pertaining to the national defense of the United States and its allies; this will be an inherent requirement. The selected company will be required to safeguard classified material IAW DoD 5220.22-M during the advanced phases of this contract.



PHASE I: Develop an approach for deploying Neural Network algorithms and identify suitable hardware, learning algorithm framework and benchmark testing and validation methodology plan. Demonstrate performance enhancements and integration of technology as described in the description above. The Phase I effort will include plans to be developed under Phase II.



PHASE II: Transfer government furnished algorithms and training data running on a desktop computing environment to the new hardware environment. An example algorithm development frame for this work would be TensorFlow. Some modification of the framework and/or algorithms may be required to facilitate transfer. Some optimization will be required and is expected to maximize the performance of the algorithms on the new hardware. This optimization should focus on throughput, latency, and power draw/dissipation. Benchmark testing should be conducted against these metrics. Develop a transition plan for Phase III.



It is probable that the work under this effort will be classified under Phase II (see Description section for details).



PHASE III DUAL USE APPLICATIONS: Optimize algorithm and conduct benchmark testing. Adjust algorithms as needed and transition to final hardware environment. Successful technology development could benefit industries that conduct data mining and high-end processing, computer modeling and machine learning such as manufacturing, automotive, and aerospace industries.



REFERENCES:

1. Ambrogio, S., Narayanan, P., Tsai, H., Shelby, R., Boybat, I., Nolfo, C., . . . Burr, G. "Equivalent-Accuracy Accelerated Neural-Network Training Using Analogue Memory." Nature, June 6, 2018, pp. 60-67. https://www.nature.com/articles/s41586-018-0180-5



2. Izhikevich, E. "Simple Model of Spiking Neurons." IEEE Transactions on Neural Networks, 2003, pp. 1569-1572. https://ieeexplore.ieee.org/document/1257420



3. Diehl, P., Zarrella, G., Cassidy, A., Pedroni, B. & Neftci, E. "Conversion of Artificial Recurrent Neural Networks to Spiking Neural Networks for Low-Power Neuromorphic Hardware." Cornell University, 2016. https://arxiv.org/abs/1601.04187



4. Esser, S., Merolla, P., Arthur, J., Cassidy, A., Appuswamy, R., Andreopoulos, A., . . . Modha, D. "Convolutional Networks for Fast, Energy-Efficient Neuromorphic Computing." IBM Research: Almaden, May 24, 2016. https://arxiv.org/pdf/1603.08270.pdf



5. Department of Defense. National Defense Strategy 2018. United States Congress. https://dod.defense.gov/Portals/1/Documents/pubs/2018-National-Defense-Strategy-Summary.pdf



KEYWORDS: Neural Networks, Neuromorphic, Processor, Algorithm, Spiking Neurons, Machine Learning



<="" a="" style="color: rgb(0, 0, 0); font-family: "Times New Roman"; font-size: medium; font-style: normal; font-variant-ligatures: normal; font-variant-caps: normal; font-weight: 400; letter-spacing: normal; orphans: 2; text-align: start; text-indent: 0px; text-transform: none; widows: 2; word-spacing: 0px; -webkit-text-stroke-width: 0px; white-space: normal; background-color: rgb(255, 255, 255); text-decoration-thickness: initial; text-decoration-style: initial; text-decoration-color: initial;">

** TOPIC NOTICE **
The Navy Topic above is an "unofficial" copy from the overall DoD 20.2 SBIR BAA. Please see the official DoD DSIP Topic website at rt.cto.mil/rtl-small-business-resources/sbir-sttr/ for any updates. The DoD issued its 20.2 SBIR BAA on May 6, 2020, which opens to receive proposals on June 3, 2020, and closes July 2, 2020 at 12:00 noon ET.

Direct Contact with Topic Authors. During the pre-release period (May 6 to June 2, 2020) proposing firms have an opportunity to directly contact the Technical Point of Contact (TPOC) to ask technical questions about the specific BAA topic.

Questions should be limited to specific information related to improving the understanding of a particular topic's requirements. Proposing firms may not ask for advice or guidance on solution approach and you may not submit additional material to the topic author. If information provided during an exchange with the topic author is deemed necessary for proposal preparation, that information will be made available to all parties through SITIS (SBIR/STTR Interactive Topic Information System). After the pre-release period, questions must be asked through the SITIS on-line system as described below.
SITIS Q&A System. Once DoD begins accepting proposals on June 3, 2020 no further direct contact between proposers and topic authors is allowed unless the Topic Author is responding to a question submitted during the Pre-release period. However, proposers may submit written questions through SITIS at www.dodsbirsttr.mil/submissions/login, login and follow instructions. In SITIS, the questioner and respondent remain anonymous but all questions and answers are posted for general viewing.
Topics Search Engine: Visit the DoD Topic Search Tool at www.dodsbirsttr.mil/topics-app/ to find topics by keyword across all DoD Components participating in this BAA.

Help: If you have general questions about DoD SBIR program, please contact the DoD SBIR Help Desk at 703-214-1333 or via email at DoDSBIRSupport@reisystems.com
 
  • Like
  • Wow
  • Fire
Reactions: 9 users

JB49

Regular
  • Like
  • Fire
Reactions: 4 users

Diogenese

Top 20
Hi Taproot,

The bit about optical/photonic SNNs squares the circle with our friends Bascom Hunter who have been working on this tech for 15 years.

Hardware based on Spiking Neural Networks (SNN) are currently under development at various stages of maturity. Two prominent examples are the IBM True North and the INTEL Loihi Chips, respectively. The IBM approach uses conventional CMOS technology and the INTEL approach uses a less mature memrisistor architecture. Estimated efficiency performance increase is greater than 3 orders of magnitude better than state of the art Graphic Processing Unit (GPUs) or Field-programmable gate array (FPGAs). More advanced architectures based on an all-optical or photonic based SNN show even more promise. Nano-Photonic based systems are estimated to achieve 6 orders of magnitude increase in efficiency and computational density; approaching the performance of a Human Neural Cortex. The primary goal of this effort is to deploy Deep Neural Network algorithms on near-commercially available Neuromorphic or equivalent Spiking Neural Network processing hardware. Benchmark the performance gains and validate the suitability to warfighter application.

What I find significant is that they have gone with a processor board with 5 Akida 1000s.

The problem is that, after Phase I, the security gets more and more restrictive = NDA to the power of CIA/National Security Council so:-


https://au.video.search.yahoo.com/s...=251d01e90e59e62f4b44a6ab7e455acd&action=view
 
  • Like
  • Fire
  • Love
Reactions: 26 users

JB49

Regular
  • Like
  • Fire
Reactions: 6 users

Taproot

Regular
Phase 2.
End date = 18/08/2025




This bloke founded BH in 2010.
Paul Prucnal

Paul R. Prucnal (born 1953) is an American electrical engineer. He is a professor of electrical engineering at Princeton University. He is best known for his seminal work in Neuromorphic Photonics,[1] optical code division multiple access (OCDMA) and the invention of the terahertz optical asymmetric demultiplexor (TOAD).[2] He is currently a fellow of IEEE for contributions to photonic switching and fiber-optic networks,[3] Optical Society of America and National Academy of Inventors.[4][5][6]

Life and career​

Prucnal received his A.B. in mathematics and physics from Bowdoin College in 1974, graduating summa cum laude, where he also studied piano with William Eves, a pupil of Robert Casadesus. He then earned M.S., M.Phil. and Ph.D. degrees in electrical engineering from Columbia University in 1976, 1978 and 1979, respectively,[4] where he did his doctoral work with Malvin Carl Teich.[7] After his doctorate, Prucnal joined the faculty at Columbia University in 1979. As a member of the Columbia Radiation Laboratory, he performed groundbreaking work in OCDMA[8] and self-routed photonic switching. In 1988, he joined the faculty at Princeton University.

His developmental research on optical CDMA initiated a new research field[9] in which more than 1000 papers have since been published, exploring applications ranging from information security[10] to communication speed and bandwidth.[11] In 1993, he invented the "Terahertz Optical Asymmetric Demultiplexer,"[12][13] the first optical switch capable of processing terabit per second (Tb/s) pulse trains.[14][15] With support from DARPA in the 1990s, his group was the first to demonstrate an all-optical 100 gigabit/sec photonic packet switching node and optical multiprocessor interconnect.[16]

Prucnal is author of the book, Neuromorphic Photonics,[1] and editor of the book, Optical Code Division Multiple Access: Fundamentals and Applications.[17] He was an Area Editor of IEEE Transactions on Communications. He has authored or co-authored more than 350 journal articles and book chapters and holds 28 U.S. patents. He is a fellow of the Institute of Electrical and Electronics Engineers (IEEE), the Optical Society of America (OSA) and the National Academy of Inventors (NAI), and a member of honor societies including Phi Beta Kappa and Sigma Xi. He was the recipient of the 1990 Rudolf Kingslake Medal[18] for his paper entitled "Self-routing photonic switching with optically-processed control" and has won multiple teaching awards at Princeton.[4]

He has been instrumental in founding the field of Neuromorphic Photonics[1] and developing the "photonic neuron", a high speed optical computing device modeled on neural networks,[19] as well as integrated optical circuits to improve wireless signal quality by cancelling radio interference. [20][21]
 
  • Like
  • Fire
Reactions: 6 users

Taproot

Regular
Hi Taproot,

The bit about optical/photonic SNNs squares the circle with our friends Bascom Hunter who have been working on this tech for 15 years.

Hardware based on Spiking Neural Networks (SNN) are currently under development at various stages of maturity. Two prominent examples are the IBM True North and the INTEL Loihi Chips, respectively. The IBM approach uses conventional CMOS technology and the INTEL approach uses a less mature memrisistor architecture. Estimated efficiency performance increase is greater than 3 orders of magnitude better than state of the art Graphic Processing Unit (GPUs) or Field-programmable gate array (FPGAs). More advanced architectures based on an all-optical or photonic based SNN show even more promise. Nano-Photonic based systems are estimated to achieve 6 orders of magnitude increase in efficiency and computational density; approaching the performance of a Human Neural Cortex. The primary goal of this effort is to deploy Deep Neural Network algorithms on near-commercially available Neuromorphic or equivalent Spiking Neural Network processing hardware. Benchmark the performance gains and validate the suitability to warfighter application.

What I find significant is that they have gone with a processor board with 5 Akida 1000s.

The problem is that, after Phase I, the security gets more and more restrictive = NDA to the power of CIA/National Security Council so:-


https://au.video.search.yahoo.com/s...=251d01e90e59e62f4b44a6ab7e455acd&action=view
Yep, add them to the Secret Squirrel Club.

The phrase "Secret Squirrel stuff" is used by people working in U.S. intelligence to lightheartedly describe material that is highly classified, usually as a non-answer to a question.

At least we know now, a previously unknown known.
Exciting stuff and a much needed shot in the arm.
 
  • Like
  • Fire
  • Love
Reactions: 13 users

ndefries

Regular
Too true....Ant61.

Shame they lost contact before they could prove up Akida....at least it wasn't an Akida issue.
It does mean aliens can steal akida now without an IP licence.
 
  • Haha
  • Thinking
Reactions: 12 users
It does mean aliens can steal akida now without an IP licence.
Yes that's a point ...but then they have to reverse engineer it first :LOL:
 
  • Like
  • Haha
Reactions: 5 users

Diogenese

Top 20
  • Like
  • Haha
Reactions: 8 users
Whilst we'd all like to see additional IP licences etc, I take some positives that there are now at least 3 companies we know of that have done the groundwork, development etc and passed the POC stage to offer end products.

That does reveal some progress imo.

Like any business, they obviously need to go to mkt and see what traction they get and if demand / contracts are there, then ramp up for production which I suspect would see supply of Akida through someone like MegaChips or maybe a direct licence with BRN at that point.

That is what I am envisioning anyway.

The 3 products are in 3 different mkts as well which is good.

Bascom Hunter Snap Card

VVDN Edge Box

Quantum Ventura CyberNeuro-RT
Hey this Bascom Hunter Snap card is news to me..
When did we find out about that?

With 5 AKIDA 1000 chips per unit, a production run is surely on the cards at some point?

There can't be "that" many of them floating around, with them also going into the VVDN Edge boxes (at 2 apiece)..
 
  • Like
  • Love
Reactions: 7 users

manny100

Regular
Hi Taproot,

The bit about optical/photonic SNNs squares the circle with our friends Bascom Hunter who have been working on this tech for 15 years.

Hardware based on Spiking Neural Networks (SNN) are currently under development at various stages of maturity. Two prominent examples are the IBM True North and the INTEL Loihi Chips, respectively. The IBM approach uses conventional CMOS technology and the INTEL approach uses a less mature memrisistor architecture. Estimated efficiency performance increase is greater than 3 orders of magnitude better than state of the art Graphic Processing Unit (GPUs) or Field-programmable gate array (FPGAs). More advanced architectures based on an all-optical or photonic based SNN show even more promise. Nano-Photonic based systems are estimated to achieve 6 orders of magnitude increase in efficiency and computational density; approaching the performance of a Human Neural Cortex. The primary goal of this effort is to deploy Deep Neural Network algorithms on near-commercially available Neuromorphic or equivalent Spiking Neural Network processing hardware. Benchmark the performance gains and validate the suitability to warfighter application.

What I find significant is that they have gone with a processor board with 5 Akida 1000s.

The problem is that, after Phase I, the security gets more and more restrictive = NDA to the power of CIA/National Security Council so:-


https://au.video.search.yahoo.com/s...=251d01e90e59e62f4b44a6ab7e455acd&action=view
There is a poster on the crapper peddling the idea that a DoD contract would preclude BRN from other non-defence contracts due to secrecy, etc.
That is why they are calling it the crapper.
 
  • Like
  • Haha
Reactions: 9 users

Guzzi62

Regular
Hey this Bascom Hunter Snap card is news to me..
When did we find out about that?

With 5 AKIDA 1000 chips per unit, a production run is surely on the cards at some point?

There can't be "that" many of them floating around, with them also going into the VVDN Edge boxes (at 2 a piece)..
I think it's a low-volume card for a limited purpose, hardened and all, so likely not cheap either.

The most important thing, IMO, is that they chose the chip in the first place.
 
  • Like
  • Fire
Reactions: 9 users