It is, in my opinion, just going back to fair value, based on customer engagements, patents, and the general investment population finally becoming aware of BrainChip's technology.

I really have to ask you all now: what is the reason that BRN is rising so much?
Can someone please tag @Dave Evans for me, as he blocked me.

What do you say we make "Dave" the new shorthand, or nickname, for shorters and the manipulators?
It rolls off the tongue better, and it would be a fitting honour for the guy who graced this forum with so much knowledge and insight.
I can really only see other "Daves" protesting...
We haven't seen him at all.
Dingo - Really like your input - BUT FARK OFF for using my Rodin thinking logo, whilst DUI, at an ungodly hour in the morning. Bollocks!
How dare you question my Golden Rule on upward gaps!
What you need to understand is, I mean, what you're not seeing is...
...is that my rule applies specifically to BRN, so there!!
Don't believe everything you read in outdated books and resources.
Knowledge is fixed in Time, whereas knowing is a "movement".
Not sure if it's been posted here today, but did anyone see what NimbleAI are up to with our Akida 1500 and the Hailo-8? Courtesy of @Rayz on the other site.
Full credit to Rayz, who is a great poster over there for finding info, like many others over here. If you still frequent over there, worth giving him a like and a follow.
Perceiving a 3D world from a 3D silicon architecture
100x energy-efficiency improvement · 50x latency reduction · ≈10s of mW energy budget
Expected outcomes
- World's first light-field dynamic vision sensor and SDK for monocular-image-based depth perception.
- Silicon-proven implementations for use in next-generation commercial neuromorphic chips.
- EDA tools to advance 3D silicon integration and exceed the pace of Moore's Law.
- World's first event-driven full perception stack that runs industry-standard convolutional neural networks.
- Prototype platform and programming tools to test new AI and computer vision algorithms.
- Applications that showcase the competitive advantage of NimbleAI technology.
World's first Light-field Dynamic Vision Sensor Prototype

In NimbleAI, we are designing a 3D integrated sensing-processing neuromorphic chip that mimics the efficient way our eyes and brains capture and process visual information. NimbleAI also advances towards new vision modalities not present in humans, such as insect-inspired light-field vision, for instantaneous 3D perception.
Key features of our chip are:
The top layer in the architecture senses light and delivers meaningful visual information to processing and inference engines in the interior layers to achieve efficient end-to-end perception. NimbleAI adopts the biological data economy principle systematically across the chip layers, starting in the light-electrical sensing interface.
Sense light and depth · Ignore or recognise · Process efficiently · Adaptive visual pathways · 3D integrated silicon
- Sensing, memory, and processing components are physically fused in a 3D silicon volume to boost the communication bandwidth.
- ONLY changing light is sensed, inspired by the retina. Depth perception is inspired by the insect compound eye.
- Our chip ONLY processes feature-rich and/or critical sensor regions.
- ONLY significant neuron state changes are propagated and processed by other neurons.
- Sensing and processing are adjusted at runtime to operate jointly at the optimal temporal and data resolution.
How it works
Sensing:
Sensor pixels generate visual events ONLY if/when significant light changes are detected. Pixels can be dynamically grouped and ungrouped to allocate different resolution levels across sensor regions. This mimics the foveation mechanism in eyes, which allows foveated regions to be seen in greater detail than peripheral regions.
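To make the "ONLY changing light" idea concrete, here is a rough Python sketch (my own toy illustration, not NimbleAI's actual circuit): a frame-differencing model that emits ON/OFF events where the log-intensity change crosses a contrast threshold, plus a crude foveation helper whose block size and ROI arguments are made up for the example.

```python
import numpy as np

def dvs_events(prev_log, curr_log, threshold=0.15):
    """Emit +1/-1 events only where the log-intensity change exceeds a
    contrast threshold; static pixels produce nothing. Toy model of a
    DVS pixel array (real pixels are asynchronous analog circuits)."""
    diff = curr_log - prev_log
    events = np.zeros_like(diff, dtype=np.int8)
    events[diff > threshold] = 1     # ON event: got brighter
    events[diff < -threshold] = -1   # OFF event: got darker
    return events

def foveate(frame, roi_rows, roi_cols, block=4):
    """Crude foveation: full resolution inside the ROI, pooled
    super-pixels (one sample per block) in the periphery."""
    periphery = frame[::block, ::block]
    fovea = frame[roi_rows, roi_cols]
    return fovea, periphery
```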
The NimbleAI sensing layer enables depth perception in the sub-ms range by capturing directional information of incoming light by means of light-field micro-lenses by Raytrix. This is the world's first light-field DVS sensor, which estimates the origin of light rays by triangulating disparities from neighbour views formed by the micro-lenses. 3D visual scenes are thus encoded in the form of sparse visual event flows.
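The depth part comes down to classic triangulation. A minimal sketch with made-up numbers (the real sensor does this per event, across neighbouring micro-lens views):

```python
def depth_from_disparity(disparity_px, focal_px, baseline_m):
    """Triangulation: depth = focal_length * baseline / disparity.
    In a light-field DVS the 'baseline' is the spacing between
    neighbouring micro-lens views; all numbers here are made up."""
    return focal_px * baseline_m / disparity_px

# e.g. 4 px disparity, 800 px focal length, 2 mm baseline -> 0.4 m
print(depth_from_disparity(4.0, 800.0, 0.002))
```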
Early Perception:
Our always-on early perception engine continuously analyzes the sensed visual events in a spatio-temporal mode to extract the optical flow and identify and select ONLY salient regions of interest (ROIs) for further processing in high resolution (foveated regions). This engine is powered by Spiking Neural Networks (SNNs), which process incoming visual events and adjust foveation settings in the DVS sensor with ultra-low latency and minimal energy consumption.
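As a toy stand-in for what that ROI selection achieves (the real engine uses SNNs; this simply ranks fixed grid cells by event density):

```python
import numpy as np

def select_rois(events, grid=8, top_k=3):
    """Rank grid cells by event density and keep the top-k as salient
    ROIs that deserve foveation / high-resolution processing. A crude
    proxy for SNN-based saliency, for illustration only."""
    h, w = events.shape
    ch, cw = h // grid, w // grid
    density = (np.abs(events[:ch * grid, :cw * grid])
               .reshape(grid, ch, grid, cw).sum(axis=(1, 3)))
    top = np.argsort(density, axis=None)[::-1][:top_k]
    return [divmod(int(i), grid) for i in top]   # (row, col) grid cells

# e.g. select_rois(dvs_events(prev, curr)) -> [(2, 5), (2, 6), (3, 5)]
```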
Processing:
Format and properties of visual event flows from salient regions are adapted in the processing engine to match the data structures of user AI models (e.g., Convolutional Neural Networks, CNNs) and to best exploit optimization mechanisms implemented in the inference engine (e.g., sparsity). Processing kernels are tailored to each salient region's properties, including the size, shape, and movement patterns of objects in those regions. The processing engine uses in-memory computing blocks by CEA and a Menta eFPGA fabric, both tightly coupled to a Codasip RISC-V CPU.
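One common way to do that format adaptation, sketched under the assumption of a simple voxel-grid representation (my illustration, not the actual NimbleAI kernel), is to bin the sparse event stream into a dense tensor a CNN can ingest:

```python
import numpy as np

def events_to_voxel_grid(xs, ys, ts, ps, bins, height, width):
    """Pack a sparse (x, y, t, polarity) event stream into a dense
    (bins, H, W) tensor, one common way of adapting event data to
    the input format a standard CNN expects."""
    grid = np.zeros((bins, height, width), dtype=np.float32)
    t0, t1 = ts.min(), ts.max()
    bin_idx = ((ts - t0) / max(t1 - t0, 1e-9) * (bins - 1)).astype(int)
    np.add.at(grid, (bin_idx, ys, xs), ps)  # accumulate polarity per voxel
    return grid
```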
Inference with user AI models:
We are exploring the use of event-driven dataflow architectures that exploit sparsity properties of incoming visual data. For practical use in real-world applications, size-limited CNNs can be run on-chip using the NimbleAI processing engine above, while industry standard AI models can be run in mainstream commercial architectures, including GPUs and NPUs.
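Why sparsity matters, in back-of-envelope form (a toy count, not a measured benchmark):

```python
import numpy as np

def mac_savings(activations, kernel_size=3):
    """Rough win from activation sparsity: an event-driven dataflow
    spends multiply-accumulates only on non-zero inputs."""
    dense = activations.size * kernel_size ** 2
    sparse = np.count_nonzero(activations) * kernel_size ** 2
    return sparse, dense

acts = (np.random.rand(64, 64) < 0.1).astype(np.float32)  # ~90% zeros
sparse, dense = mac_savings(acts)
print(f"{sparse}/{dense} MACs ({sparse / dense:.0%} of dense)")
```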
[Prototype diagram labels: light-field DVS using Prophesee IMX 636, foveated DVS testchip, prototyping MPSoC XCZU15EG, HAILO-8 / Akida 1500 (ROI inference), SNN testchip (ROI selection), digital foveation settings.]
Harness the biological advantage in your vision pipelines
NimbleAI will deliver a functional prototype of the 3D integrated sensing-processing neuromorphic chip, along with the corresponding programming tools and OS drivers (i.e., Linux/ROS), to enable users to run their AI models on it. The prototype will be flexible enough to accommodate user RTL IP in a Xilinx MPSoC, and combines commercial neuromorphic and AI chips (e.g., HAILO, BrainChip, Prophesee) with NimbleAI 2D testchips (e.g., the foveated DVS sensor and SNN engine).
Raytrix is advancing its light-field SDK to support event-based inputs, making it easy for researchers and early adopters to seamlessly integrate NimbleAI's groundbreaking vision modality – 3D perception DVS – and evolve this technology with their projects, prior to deployment on the NimbleAI functional prototype. The NimbleAI light-field SDK by Raytrix will be compatible with Prophesee's Metavision DVS SDK.
[Architecture diagram labels: Sensing, Early perception (SNN models), Processing, Inference (user CNN models), User RTL IP, NimbleAI RTL IP.]
Reach out to test combined use of your vision pipelines and NimbleAI technology.
[Form factor: PCIe M.2 modules.]
Use cases

- Hand-held medical imaging device by ULMA
- Smart monitors with 3D perception for highly automated and autonomous cars by AVL
- Human attention for worm-inspired neural networks by TU Wien
- Eye-tracking sensors for smart glasses by Viewpointsystem

Follow our journey! @NimbleAI_EU · NimbleAI.eu
Partners
NimbleAI coordinator: Xabier Iturbe (xiturbe@ikerlan.es)
nimbleai.eu
Yes, you're right... that's why I wrote this.

I've never read a charting book.
But I'd have to say today's price movement would have to be textbook strength, with all the "tests" having been completed, confirming that this is a real move and not just a pump.
But hey, I really don't know, and I think we still need to find out what's been the catalyst, to hold or improve on these gains.
Hey sorry K

Dingo - Really like your input - BUT FARK OFF for using my Rodin thinking logo, whilst DUI, at an ungodly hour in the morning.
Can the list please be updated with @Rayz's update for the Akida 1500?
2025 is going to be -
AKIDA BALLISTA UBQTS
Dingo - no worries mate - all in good jest.

Hey sorry K
I didn't realise that was your logo...
I actually have an affinity for Rodin's "The Thinker" myself and was introduced to that particular piece of art at a young age. It was attributed to my Father, along with the lavatorial reference...
I have a very small statue, but plan to get a decent-sized one, in bronze.
And I don't PUI, but do keep odd hours...
Maybe I "should" be saying I PUI...
2025 is just around the corner, and I can’t believe how quickly time has flown!
It feels like just yesterday we were starting our journey as Data Science UA. But this year, we've celebrated our 8th anniversary already!
In 2024, we've also...
Delivered over 100 AI models, including one with an impressive 1 billion parameters.
Worked on dozens of exciting AI projects, particularly in fintech, pharma, and green energy.
Built strong partnerships and welcomed nearly 4,000 new members to our growing community.
Wrapped up our own R&D project featuring BrainChip Akida neuromorphic chips.
Hosted three grand offline team meetings and nine online meetups with leading industry experts.
Welcomed 29 talented new members to our team.
None of this would have been possible without the dedication and unmatched expertise of the Data Science UA team and the incredible support of our clients, partners, and community.
Now, as we’re stepping confidently into the next chapter of our story, I can’t wait to see what the future holds.
Wishing you a joyful and successful New Year!
I'm

Hey sorry K
I didn't realise that was your logo...
I actually have an affinity for Rodin's "The Thinker" myself and was introduced to that particular piece of art at a young age. It was attributed to my Father, along with the lavatorial reference...
I have a very small statue, but plan to get a decent-sized one, in bronze.
And I don't PUI, but do keep odd hours...
Maybe I "should" be saying I PUI...
You have to fill in the gaps more, with country folk..
The above was before the close. Update below: 435% up on average volume.

The US market has followed us with a nice gain and almost 4 times daily volume.