BRN Discussion Ongoing

I have it on good authority from someone who lives in a barrel that for the following patent to be actioned they would need to already have access to a convolutional spiking neural network processor.

Now some might completely discount the fact that Arijit Mukherjee from the above article, who is one of the inventors of the following patent, was a member of the Brainchip-Tata team that presented a joint demonstration on 14.12.19 of AKIDA technology performing live gesture recognition, and the fact that Brainchip has the only commercially available, patent-protected convolutional spiking neural network chip in the world, 3 years ahead of anyone else, as proving or even pointing to Brainchip as the provider of this chip to Tata, but I am not in that camp.

This is one huge statement for TATA to make in my opinion: "Neuromorphic Computing Brings AI to the Edge How conventional processor architecture is becoming a thing of the past".

My opinion only DYOR
FF

AKIDA BALLISTA


System and method of gesture recognition using a reservoir based convolutional spiking neural network​

Dec 17, 2020
This disclosure relates to a method of identifying a gesture from a plurality of gestures using a reservoir-based convolutional spiking neural network (CSNN). Two-dimensional spike streams are received from a neuromorphic event camera as input. The spike streams associated with at least one of the gestures are preprocessed to obtain a plurality of spike frames. The spike frames are processed by a multi-layered convolutional spiking neural network to learn a plurality of spatial features of the gesture. Filter blocks corresponding to gestures that are not currently being learnt are deactivated. Spatio-temporal features are obtained by allowing the spike activations from the CSNN layer to flow through the reservoir. A classifier then classifies the spatial features from the CSNN layer together with the spatio-temporal features from the reservoir to obtain a set of prioritized gestures.
Description
PRIORITY CLAIM

This U.S. patent application claims priority under 35 U.S.C. § 119 to: India Application No. 202021025784, filed on Jun. 18, 2020. The entire contents of the aforementioned application are incorporated herein by reference.
TECHNICAL FIELD
This disclosure relates generally to gesture recognition, and, more particularly, to system and method of gesture recognition using a reservoir based convolutional spiking neural network.
BACKGROUND
In an age of artificial intelligence, robots and drones are key enablers of task automation, and they are being used in various domains such as manufacturing, healthcare, warehouses, disaster management, etc. As a consequence, they often need to share work-space and interact with human workers, which has given rise to the area of research named Human Robot Interaction (HRI). Problems in this domain are mainly centered around learning and identifying the gestures/speech/intention of human coworkers, along with the classical problems of learning and identifying the surrounding environment (and the obstacles, objects, etc. therein). All of this essentially needs to be done in a dynamic and noisy practical work environment. State-of-the-art vision-based solutions using artificial neural networks (including deep neural networks) have high accuracy; however, they are not the most efficient solutions, as the learning methods and inference frameworks of conventional deep neural networks require huge amounts of training data and are typically compute and energy intensive. They are also bound by conventional architectures that lead to data-transfer bottlenecks between memory and processing units and related power-consumption issues. Hence, this genre of solutions does not really help robots and drones do their jobs, as they are classically constrained by their battery life.
SUMMARY
Embodiments of the present disclosure present technological improvements as solutions to one or more of the above-mentioned technical problems recognized by the inventors in conventional systems. For example, in one aspect, a processor implemented method of identifying a gesture from a plurality of gestures using a reservoir based convolutional spiking neural network is provided. The processor implemented method includes at least one of: receiving, from a neuromorphic event camera, two-dimensional spike streams as an input; preprocessing, via one or more hardware processors, the address event representation (AER) record associated with at least one gestures from a plurality of gestures to obtain a plurality of spike frames; processing, by a multi layered convolutional spiking neural network, the plurality of spike frames to learn a plurality of spatial features from the at least one gesture; deactivating, via the one or more hardware processors, at least one filter block from the plurality of filter blocks corresponds to at least one gesture which are not currently being learnt; obtaining, via the one or more hardware processors, spatio-temporal features by allowing the spike activations from a CSNN layer to flow through the reservoir; and classifying, by a classifier, the at least one of spatial feature from the CSNN layer and the spatio-temporal features from the reservoir to obtain a set of prioritized gestures. In an embodiment, the two-dimensional spike streams are represented as an address event representation (AER) record. In an embodiment, each sliding convolutional window in the plurality of spike frames are connected to a neuron corresponding to a filter among plurality of filters corresponding to a filter block among plurality of filter blocks in each convolutional layer from plurality of convolutional layers. In an embodiment, the plurality of filter blocks are configured to concentrate a plurality of class-wise spatial features to the filter block for learning associated patterns based on a long-term lateral inhibition mechanism. In an embodiment, the CSNN layer is stacked to provide at least one of: (i) a low-level spatial features, (ii) a high-level spatial features, or combination thereof.
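To make the preprocessing step concrete, here is a rough Python sketch (my own, not from the patent or any Akida/TCS code) of how AER events from an event camera might be binned into 2-D spike frames. The function name, window size and binary frames are assumptions, and the patent describes a sliding window rather than the fixed, non-overlapping bins used here for brevity:

```python
# Illustrative only: bin AER events (timestamp_us, x, y, polarity) into binary
# 2-D spike frames by accumulating spikes over fixed time windows.
import numpy as np

def events_to_spike_frames(events, window_us=10_000, sensor_shape=(128, 128)):
    """events: array-like of shape (N, 4) with columns (timestamp_us, x, y, polarity)."""
    events = np.asarray(events, dtype=np.int64)
    if events.size == 0:
        return np.zeros((0, *sensor_shape), dtype=np.uint8)
    t0 = events[:, 0].min()
    frame_idx = (events[:, 0] - t0) // window_us
    frames = np.zeros((int(frame_idx.max()) + 1, *sensor_shape), dtype=np.uint8)
    for f, x, y in zip(frame_idx, events[:, 1], events[:, 2]):
        frames[f, y, x] = 1            # a spike occurred at this pixel in this window
    return frames

# Three synthetic events spread over ~25 ms become three 10 ms frames.
demo = [(0, 5, 7, 1), (12_000, 5, 8, 1), (24_500, 6, 7, 0)]
print(events_to_spike_frames(demo, sensor_shape=(16, 16)).shape)  # (3, 16, 16)
```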
In an embodiment, the spike streams may be compressed per neuronal level by accumulating spikes at a sliding window of time, to obtain a plurality of output frames with reduced time granularity. In an embodiment, plurality of learned different spatially co-located features may be distributed on the plurality of filters from the plurality of filter blocks. In an embodiment, a special node between filters of the filter block may be configured to switch between different filters based on an associated decay constant to distribute learning of different spatially co-located features on the different filters. In an embodiment, a plurality of weights of a synapse between input and the CSNN layer may be learned using an unsupervised two trace STDP learning rule upon at least one spiking activity of the input layer. In an embodiment, the reservoir may include a sparse random cyclic connectivity which acts as a random projection of the input spikes to an expanded spatio-temporal embedding.
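The summary names an unsupervised two-trace STDP rule for the input-to-CSNN synapses without giving its equations here, so the sketch below is only a generic two-trace STDP update written from textbook STDP; the constants, array shapes and variable names are my assumptions, not the patent's:

```python
# Generic two-trace STDP weight update (illustrative; not the patent's exact rule).
# w has shape (n_post, n_pre); pre/post spike vectors are 0/1 per time step.
import numpy as np

def stdp_step(w, pre_spikes, post_spikes, pre_trace, post_trace,
              a_plus=0.01, a_minus=0.012, tau_pre=20.0, tau_post=20.0, dt=1.0):
    # Decay both eligibility traces, then add this step's spikes.
    pre_trace *= np.exp(-dt / tau_pre)
    post_trace *= np.exp(-dt / tau_post)
    pre_trace += pre_spikes
    post_trace += post_spikes
    # Potentiate w[j, i] when post neuron j fires while pre trace i is high;
    # depress w[j, i] when pre neuron i fires while post trace j is high.
    w += a_plus * np.outer(post_spikes, pre_trace)
    w -= a_minus * np.outer(post_trace, pre_spikes)
    return np.clip(w, 0.0, 1.0), pre_trace, post_trace

# One toy step: 3 input (pre) neurons, 2 CSNN (post) neurons.
w = np.full((2, 3), 0.5)
pre, post = np.array([1.0, 0.0, 1.0]), np.array([0.0, 1.0])
w, *_ = stdp_step(w, pre, post, np.zeros(3), np.zeros(2))
print(np.round(w, 3))
```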
In another aspect, there is provided a system to identify a gesture from a plurality of gestures using a reservoir based convolutional spiking neural network. The system comprises a memory storing instructions; one or more communication interfaces; and one or more hardware processors coupled to the memory via the one or more communication interfaces. The one or more hardware processors are configured by the instructions to: receive, from a neuromorphic event camera, two-dimensional spike streams as an input; preprocess, the address event representation (AER) record associated with at least one gestures from a plurality of gestures to obtain a plurality of spike frames; process, by a multi layered convolutional spiking neural network, the plurality of spike frames to learn a plurality of spatial features from the at least one gesture; deactivate, at least one filter block from the plurality of filter blocks corresponds to at least one gesture which are not currently being learnt; obtain, spatiotemporal features by allowing the spike activations from a CSNN layer to flow through the reservoir; and classify, by a classifier, the at least one of spatial feature from the CSNN layer and the spatiotemporal features from the reservoir to obtain a set of prioritized gestures. In an embodiment, the two-dimensional spike streams is represented as an address event representation (AER) record. In an embodiment, each sliding convolutional window in the plurality of spike frames are connected to a neuron corresponding to a filter among plurality of filters corresponding to a filter block among plurality of filter blocks in each convolutional layer from plurality of convolutional layers. In an embodiment, the plurality of filter blocks are configured to concentrate a plurality of class-wise spatial features to the filter block for learning associated patterns based on a long-term lateral inhibition mechanism. In an embodiment, the CSNN layer is stacked to provide at least one of: (i) a low-level spatial features, (ii) a high-level spatial features, or combination thereof.
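The "deactivating a filter block" step can be pictured as a per-class gate on the filter-block outputs driven by the long-term lateral inhibition the text mentions. The sketch below is a loose illustration of that idea only; the class-to-block assignment, the gate class and the inhibition constant are my assumptions rather than anything specified in the patent:

```python
# Illustrative only: suppress filter blocks for classes not currently being learnt,
# modelled here as a simple per-block multiplicative mask on spike activations.
import numpy as np

class FilterBlockGate:
    def __init__(self, n_blocks, inhibition=1.0):
        self.inhibition = inhibition          # 1.0 = fully suppress inactive blocks
        self.mask = np.ones(n_blocks)

    def set_active_class(self, class_idx):
        # Only the block assigned to the class being learnt passes spikes through.
        self.mask[:] = 1.0 - self.inhibition
        self.mask[class_idx] = 1.0

    def __call__(self, block_spikes):
        # block_spikes: (n_blocks, n_filters, H, W) spike activations per block
        return block_spikes * self.mask[:, None, None, None]

gate = FilterBlockGate(n_blocks=10)
gate.set_active_class(3)                      # currently learning gesture class 3
spikes = np.random.binomial(1, 0.1, size=(10, 4, 16, 16)).astype(float)
print(gate(spikes).sum(axis=(1, 2, 3)))       # only block 3 keeps its spikes
```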
In an embodiment, the spike streams may be compressed per neuronal level by accumulating spikes at a sliding window of time, to obtain a plurality of output frames with reduced time granularity. In an embodiment, plurality of learned different spatially co-located features may be distributed on the plurality of filters from the plurality of filter blocks. In an embodiment, a special node between filters of the filter block may be configured to switch between different filters based on an associated decay constant to distribute learning of different spatially co-located features on the different filters. In an embodiment, a plurality of weights of a synapse between input and the CSNN layer may be learned using an unsupervised two trace STDP learning rule upon at least one spiking activity of the input layer. In an embodiment, the reservoir may include a sparse random cyclic connectivity which acts as a random projection of the input spikes to an expanded spatio-temporal embedding.
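For the reservoir, the text says its sparse random cyclic connectivity acts as a fixed random projection of the input spikes into an expanded spatio-temporal embedding. Below is a rough rate-based (echo-state style) sketch of that idea; the patent's reservoir is spiking, and none of these sizes or constants come from it:

```python
# Illustrative reservoir: fixed sparse random recurrent weights project input
# activity into a larger state whose trajectory carries spatio-temporal information.
import numpy as np

rng = np.random.default_rng(0)

def make_reservoir(n_in, n_res, density=0.05, spectral_radius=0.9):
    w_in = rng.normal(0.0, 1.0, size=(n_res, n_in))
    w_res = rng.normal(0.0, 1.0, size=(n_res, n_res))
    w_res *= rng.random((n_res, n_res)) < density          # sparse random connectivity
    scale = np.max(np.abs(np.linalg.eigvals(w_res)))
    w_res *= spectral_radius / max(scale, 1e-9)             # keep the dynamics stable
    return w_in, w_res

def run_reservoir(inputs, w_in, w_res, leak=0.3):
    state = np.zeros(w_res.shape[0])
    states = []
    for x in inputs:                                         # x: one input vector per step
        state = (1 - leak) * state + leak * np.tanh(w_in @ x + w_res @ state)
        states.append(state.copy())
    return np.array(states)                                  # (T, n_res) embedding

w_in, w_res = make_reservoir(n_in=64, n_res=500)
embedding = run_reservoir(np.random.rand(30, 64), w_in, w_res)
print(embedding.shape)  # (30, 500)
```

The recurrent weights stay fixed, which is the point: only the readout needs training, and the recurrence is what turns a sequence of spike frames into a state that remembers recent history.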
In yet another aspect, there are provided one or more non-transitory machine readable information storage mediums comprising one or more instructions which when executed by one or more hardware processors causes at least one of: receiving, from a neuromorphic event camera, two-dimensional spike streams as an input; preprocessing, the address event representation (AER) record associated with at least one gestures from a plurality of gestures to obtain a plurality of spike frames; processing, by a multi layered convolutional spiking neural network, the plurality of spike frames to learn a plurality of spatial features from the at least one gesture; deactivating, at least one filter block from the plurality of filter blocks corresponds to at least one gesture which are not currently being learnt; obtaining, spatio-temporal features by allowing the spike activations from a CSNN layer to flow through the reservoir; and classifying, by a classifier, the at least one of spatial feature from the CSNN layer and the spatio-temporal features from the reservoir to obtain a set of prioritized gestures. In an embodiment, the two-dimensional spike streams are represented as an address event representation (AER) record. In an embodiment, each sliding convolutional window in the plurality of spike frames are connected to a neuron corresponding to a filter among plurality of filters corresponding to a filter block among plurality of filter blocks in each convolutional layer from plurality of convolutional layers. In an embodiment, the plurality of filter blocks are configured to concentrate a plurality of class-wise spatial features to the filter block for learning associated patterns based on a long-term lateral inhibition mechanism. In an embodiment, the CSNN layer is stacked to provide at least one of: (i) a low-level spatial features, (ii) a high-level spatial features, or combination thereof.
In an embodiment, the spike streams may be compressed per neuronal level by accumulating spikes at a sliding window of time, to obtain a plurality of output frames with reduced time granularity. In an embodiment, plurality of learned different spatially co-located features may be distributed on the plurality of filters from the plurality of filter blocks. In an embodiment, a special node between filters of the filter block may be configured to switch between different filters based on an associated decay constant to distribute learning of different spatially co-located features on the different filters. In an embodiment, a plurality of weights of a synapse between input and the CSNN layer may be learned using an unsupervised two trace STDP learning rule upon at least one spiking activity of the input layer. In an embodiment, the reservoir may include a sparse random cyclic connectivity which acts as a random projection of the input spikes to an expanded spatio-temporal embedding.
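Finally, the "set of prioritized gestures" amounts to a readout over the pooled CSNN spatial features concatenated with the reservoir embedding. A minimal sketch follows, assuming a plain linear readout (scikit-learn is my choice for illustration only; the patent does not name a classifier implementation at this point):

```python
# Illustrative readout: pool the CSNN spike features over time, append the final
# reservoir state, and rank gestures by classifier confidence.
import numpy as np
from sklearn.linear_model import LogisticRegression

def feature_vector(spatial_spikes, reservoir_states):
    # spatial_spikes: (T, n_spatial) spike counts from the CSNN layer
    # reservoir_states: (T, n_res) states from the reservoir
    return np.concatenate([spatial_spikes.mean(axis=0), reservoir_states[-1]])

def prioritized_gestures(clf, spatial_spikes, reservoir_states):
    x = feature_vector(spatial_spikes, reservoir_states).reshape(1, -1)
    scores = clf.predict_proba(x)[0]
    return clf.classes_[np.argsort(scores)[::-1]]   # gestures ranked most-to-least likely

# Toy end-to-end check with random data for 4 gesture classes.
rng = np.random.default_rng(1)
X = rng.random((40, 20 + 50))                # 20 pooled spatial + 50 reservoir features
y = rng.integers(0, 4, size=40)
clf = LogisticRegression(max_iter=1000).fit(X, y)
print(prioritized_gestures(clf, rng.random((30, 20)), rng.random((30, 50))))
```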
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the invention, as claimed....

Claims​

1. A processor implemented method of identifying a gesture from a plurality of gestures using a reservoir based convolutional spiking neural network, comprising:
receiving, from a neuromorphic event camera, two-dimensional spike streams as an input, wherein the two-dimensional spike streams are represented as an address event representation (AER) record;
preprocessing, via one or more hardware processors, the address event representation (AER) record associated with at least one gestures from a plurality of gestures to obtain a plurality of spike frames;
processing, by a multi layered convolutional spiking neural network, the plurality of spike frames to learn a plurality of spatial features from the at least one gesture, wherein each sliding convolutional window in the plurality of spike frames are connected to a neuron corresponding to a filter among plurality of filters corresponding to a filter block among plurality of filter blocks in each convolutional layer from plurality of convolutional layers;
deactivating, via the one or more hardware processors, at least one filter block from the plurality of filter blocks corresponds to at least one gesture which are not currently being learnt, wherein the plurality of filter blocks are configured to concentrate a plurality of class-wise spatial features to the filter block for learning associated patterns based on a long-term lateral inhibition mechanism;
obtaining, via the one or more hardware processors, spatio-temporal features by allowing the spike activations from a CSNN layer to flow through the reservoir, wherein the CSNN layer is stacked to provide at least one of: (i) a low-level spatial features, (ii) a high-level spatial features, or combination thereof; and
classifying, by a classifier, the at least one of spatial feature from the CSNN layer and the spatio-temporal features from the reservoir to obtain a set of prioritized gestures.
2. The processor implemented method of claim 1, wherein the spike streams are compressed per neuronal level by accumulating spikes at a sliding window of time, to obtain a plurality of output frames with reduced time granularity.
3. The processor implemented method of claim 1, wherein a plurality of learned different spatially co-located features are distributed on the plurality of filters from the plurality of filter blocks.
4. The processor implemented method of claim 1, wherein a special node between filters of the filter block is configured to switch between different filters based on an associated decay constant to distribute learning of different spatially co-located features on the different filters.
5. The processor implemented method of claim 1, wherein a plurality of weights of a synapse between input and the CSNN layer are learned using an unsupervised two trace STDP learning rule upon at least one spiking activity of the input layer.
6. The processor implemented method of claim 1, wherein the reservoir comprises a sparse random cyclic connectivity which acts as a random projection of the input spikes to an expanded spatio-temporal embedding.
7. A system (100) to identify a gesture from a plurality of gestures using a reservoir based convolutional spiking neural network, comprising:
a memory (102) storing instructions;
one or more communication interfaces (106); and
one or more hardware processors (104) coupled to the memory (102) via the one or more communication interfaces (106), wherein the one or more hardware processors (104) are configured by the instructions to:
receive, from a neuromorphic event camera, two-dimensional spike streams as an input, wherein the two-dimensional spike streams are represented as an address event representation (AER) record;
preprocess the address event representation (AER) record associated with at least one gestures from a plurality of gestures to obtain a plurality of spike frames;
process, by a multi layered convolutional spiking neural network, the plurality of spike frames to learn a plurality of spatial features from the at least one gesture, wherein each sliding convolutional window in the plurality of spike frames are connected to a neuron corresponding to a filter among plurality of filters corresponding to a filter block among plurality of filter blocks in each convolutional layer from plurality of convolutional layers;
deactivate at least one filter block from the plurality of filter blocks corresponds to at least one gesture which are not currently being learnt, wherein the plurality of filter blocks are configured to concentrate a plurality of class-wise spatial features to the filter block for learning associated patterns based on a long-term lateral inhibition mechanism;
obtain spatiotemporal features by allowing the spike activations from a CSNN layer to flow through the reservoir, wherein the CSNN layer is stacked to provide at least one of: (i) a low-level spatial features, (ii) a high-level spatial features, or combination thereof; and
classify, by a classifier, the at least one of spatial feature from the CSNN layer and the spatiotemporal features from the reservoir to obtain a set of prioritized gestures.
8. The system (100) of claim 7, wherein the spike streams are compressed per neuronal level by accumulating spikes at a sliding window of time, to obtain a plurality of output frames with reduced time granularity.
9. The system (100) of claim 7, wherein plurality of learned different spatially co-located features are distributed on the plurality of filters from the plurality of filter blocks.
10. The system (100) of claim 7, wherein a special node between filters of the filter block is configured to switch between different filters based on an associated decay constant to distribute learning of different spatially co-located features on the different filters.
11. The system (100) of claim 7, wherein a plurality of weights of a synapse between input and the CSNN layer are learned using an unsupervised two trace STDP learning rule upon at least one spiking activity of the input layer.
12. The system (100) of claim 7, wherein the reservoir comprises a sparse random cyclic connectivity which acts as a random projection of the input spikes to an expanded spatio-temporal embedding.
13. One or more non-transitory machine-readable information storage mediums comprising one or more instructions which when executed by one or more hardware processors perform actions comprising:
receiving, from a neuromorphic event camera, two-dimensional spike streams as an input, wherein the two-dimensional spike streams are represented as an address event representation (AER) record;
preprocessing the address event representation (AER) record associated with at least one gestures from a plurality of gestures to obtain a plurality of spike frames;
processing, by a multi layered convolutional spiking neural network, the plurality of spike frames to learn a plurality of spatial features from the at least one gesture, wherein each sliding convolutional window in the plurality of spike frames are connected to a neuron corresponding to a filter among plurality of filters corresponding to a filter block among plurality of filter blocks in each convolutional layer from plurality of convolutional layers;
deactivating at least one filter block from the plurality of filter blocks corresponds to at least one gesture which are not currently being learnt, wherein the plurality of filter blocks are configured to concentrate a plurality of class-wise spatial features to the filter block for learning associated patterns based on a long-term lateral inhibition mechanism;
obtaining spatio-temporal features by allowing the spike activations from a CSNN layer to flow through the reservoir, wherein the CSNN layer is stacked to provide at least one of: (i) a low-level spatial features, (ii) a high-level spatial features, or combination thereof; and
classifying, by a classifier, the at least one of spatial feature from the CSNN layer and the spatio-temporal features from the reservoir to obtain a set of prioritized gestures.
14. The one or more non-transitory machine-readable information storage mediums of claim 13, wherein the spike streams are compressed per neuronal level by accumulating spikes at a sliding window of time, to obtain a plurality of output frames with reduced time granularity.
15. The one or more non-transitory machine-readable information storage mediums of claim 13, wherein a plurality of learned different spatially co-located features are distributed on the plurality of filters from the plurality of filter blocks.
16. The one or more non-transitory machine-readable information storage mediums of claim 13, wherein a special node between filters of the filter block is configured to switch between different filters based on an associated decay constant to distribute learning of different spatially co-located features on the different filters.
17. The one or more non-transitory machine-readable information storage mediums of claim 13, wherein a plurality of weights of a synapse between input and the CSNN layer are learned using an unsupervised two trace STDP learning rule upon at least one spiking activity of the input layer.
18. The one or more non-transitory machine-readable information storage mediums of claim 13, wherein the reservoir comprises a sparse random cyclic connectivity which acts as a random projection of the input spikes to an expanded spatio-temporal embedding.
Referenced Cited
U.S. Patent Documents

6028626 (February 22, 2000) Aviv
6236736 (May 22, 2001) Crabtree
6701016 (March 2, 2004) Jojic
7152051 (December 19, 2006) Commons
7280697 (October 9, 2007) Perona
8504361 (August 6, 2013) Collobert
8811726 (August 19, 2014) Belhumeur
8942466 (January 27, 2015) Petre et al.
9015093 (April 21, 2015) Commons
9299022 (March 29, 2016) Buibas et al.
Foreign Patent Documents
109144260 (January 2019) CN
WO2019074532 (April 2019) WO
Other references
  • Panda, Priyadarshini et al., “Learning to Recognize Actions from Limited Training Examples Using a Recurrent Spiking Neural Model,” Frontiers in Neuroscience, Oct. 2017; arXiv: https://arxiv.org/pdf/1710.07354.pdf.
Patent History
Patent number: 11256954
Type: Grant
Filed: Dec 17, 2020
Date of Patent: Feb 22, 2022
Patent Publication Number: 20210397878
Assignee: Tata Consultancy Services Limited (Mumbai)
Inventors: Arun George (Bangalore), Dighanchal Banerjee (Kolkata), Sounak Dey (Kolkata), Arijit Mukherjee (Kolkata)
Primary Examiner: Yosef Kassa
Application Number: 17/124,584
Classifications
Current U.S. Class: Intrusion Detection (348/152)
International Classification: G06K 9/62 (20060101); G06K 9/00 (20060101); G06N 3/04 (20060101)

Haven't looked much into TATA & your post prompted a look.

On Arijit Mukherjee, the below may have been posted previously(?) I see he also co-authored a white paper, as per the snip below.

It's on the TCS website and, whilst it doesn't appear to be dated, the links at the bottom of the pages to some of the reference material they quote have access dates of late Feb 22, which suggests maybe a Mar 22 release at the earliest?





Learning

Learning to the Top 🕵‍♂️

Attachments

  • x280-datasheet-21G3.pdf (382.8 KB)
Thanks Davidfitz,

Dio, when you have time;

Can Akida IP easily be implemented in this?
View attachment 10564



Post in thread 'BrainChip + SiFive' https://thestockexchange.com.au/threads/brainchip-sifive.27097/post-89264

Thank you in advance.
Learning.
It's great to be a shareholder.

Wild stab in the dark answer: Yes.

Why: Because original press release by BRN and SiFive said they had proven interoperability between RISC-V and AKIDA technology.

My opinion only DYOR
FF

AKIDA BALLISTA
 

Learning

Learning to the Top 🕵‍♂️
Wild stab in the dark answer: Yes.

Why: Because original press release by BRN and SiFive said they had proven interoperability between RISC-V and AKIDA technology.

My opinion only DYOR
FF

AKIDA BALLISTA
❤ the answer FF,

Learning.
 

another beauty comin in HOT 🔥


The taking back of Snake Island, by Ukraine in the last 24 hours, is a big strategic win for them.

Putin, the bullshit artist, claimed it was given back as a token of "good will"..

It will go some way to calming markets, as grain from Ukraine, will soon be able to be transported, to countries that desperately need it.
And it shows some positive progress in the conflict (despite things in the East).

Everything that is wrong at the moment stems from the Ukraine conflict (without considering the unsustainable path the world was already on).

A new financial year and the next week, will show whether we have a change in trend here..

I think things will be looking up.

US futures, not looking good at the moment, but I wouldn't be surprised, if their markets end in the green.

Good fortune to all Holders!
 

Boab

I wish I could paint like Vincent
There are interesting things happening with Tata. The first is that they appear to be in line to produce components and electronics for the Apple iPhone in India. The second is what their company Titan is planning to bring to market in India in the wearables area:


My opinion only DYOR
FF

AKIDA BALLISTA
Personally I love my wearable (Apple Watch) synced to my iPhone. Through my life I have done a lot of amateur sport, and during training I was always "watching the clock". As I am still active (mainly brisk walking) I still like to keep notes of my time taken, heart rate (average, high and low), estimated VO2 max, double step time (the amount of time you spend with only one foot on the ground, an indication of balance), and the list goes on.
So as the "weekend warriors" age I feel they will still have the desire to measure the efforts.
I have referred some of the info gathered to my doctor at the annual checkup, and this is where I see the biggest area of growth: health apps in wearables.
Busy doctors are the ones that will be suggesting wearables as the information they can provide will grow and grow.
Let's hope we can have Akida inside.....
For those that have never used a wearable here is a screen shot of my apps on my Apple watch.
And no, I don't know what all of them are for😁😁

Dang Son

Regular
There are interesting things happening with Tata. The first is that they appear to be in line to produce components and electronics for the Apple iPhone in India. The second is what their company Titan is planning to bring to market in India in the wearables area:


My opinion only DYOR
FF

AKIDA BALLISTA
Hi FF
Quote from your Tata article

"The future of wearables

Wearables are seeing a huge transformation powered by continuous innovation, and every component of wearables is being disrupted in this process. Some predictions:

  • Edge computing: Deep learning and edge processing capabilities on the wearable without the need to push data to mobile app/cloud.
  • Preventive care to clinical care: Sensors, approved by the Food and Drug Administration, to assist in clinical diagnosis (ECG, BP, non-invasive glucose monitoring).
  • Remote patient monitoring as a service: Evolving platforms, supported by wearables, to enable remote patient monitoring for senior citizens.
  • Energy harvesting: Using RFID (radio frequency identification) for converting kinetic energy to charge the battery, thereby extending the battery life of wearables.
Observing the change in market dynamics, Titan has planned an aggressive roadmap for the next 24 months, including both device and platform strategy. The company will be launching products at various price points, hosting a range of features that its consumers, distributed across age groups, are seeking.

Titan has invested in technology that includes daily activities, challenges, rewards, rich content, music and more, to personalise consumer experiences and drive better engagement with them. Compared to the past, Titan has put a plan in place to reduce the gap between product launches. This strategy will help the company to gauge the consumer pulse and launch relevant products that fit their needs in a timely manner."

The author, Raj Neravati, is the Head of Technology, Watches and Wearables, Titan Company Ltd.
 

Bravo

If ARM was an arm, BRN would be its biceps💪!
Howdy All,

What's the extent of our current relationship with CISCO? Wasn't there some sort of chatter previously about Cisco, LSTM and AKIDA 2000?

It says here: "Ford is serious about its electric autonomous car efforts. Ford's electric offerings with the Mustang Mach-e and Ford F-150 Lightning are impressive, but they will be quickly eclipsed by the next generation of increasingly autonomous cars networked by Cisco and due in the 2025-2026 timeframe."

Does this mean the next gen includes AKIDA?


Ford, Cisco, and the Future of Autonomous Vehicles​


By Rob Enderle
June 30, 2022


One of the more interesting sessions at the recent Cisco Live featured Ford CEO Jim Farley on stage talking about the future of electric and autonomous cars. Ford is one of the few automotive companies that has seriously considered what the emergence of autonomous cars means for the industry and has begun exploring a potential future where cars are provided as a service and not for purchase.
Given cars are an underutilized but expensive resource, Ford is likely on the right track and making its appearance at Cisco’s event potentially prophetic.
Let’s talk about autonomous electric cars and why Ford’s use of Cisco technology could be a game-changer:

V2X

One of the most interesting dynamics to watch as autonomous vehicles move toward market has been how the automotive industry flip-flopped on V2X technology.
Initially the very thought of vehicles from different makers being able to share information about traffic and road conditions was anything but popular. But over time, the industry came to realize that it was through this communications medium that the cars could be made safer and any related navigation far more efficient. This coming wave of autonomous cars will be more connected to cities, other cars, third-party services, and the companies that made them than ever before.
But with that level of connectivity, there is a huge potential for these autonomous machines to become compromised, and the auto industry isn’t exactly on the cutting edge when it comes to computing technology, let alone security. Yet the risk of getting either wrong when the car is operating without a driver is extremely high, and there could be dire consequences.

Enter Cisco

Using an enterprise-class networking vendor is certainly more expensive initially than doing it yourself, but it comes with a series of critical benefits that make the extra cost a bargain.
Cisco could better assure both the security and throughput of the networking elements Ford plans to put in the car. Rather than learn the hard way, which most automakers will likely do, Ford is learning from the market leader in enterprise networking that knows, institutionally, far more about securing wired and wireless networks than anyone in the automotive industry.
Cisco is well penetrated in both the operations side of companies like Ford and in most cities worldwide. This should allow far better V2X performance from the resulting future Ford vehicles to both other vehicles and the cities and countries in which they operate.
Using a networking and security expert in Ford’s autonomous electric cars should avoid the problems some of the competition has experienced. And Ford is less likely to overpromise and under-deliver on autonomous offerings.
Because this is Cisco’s area of expertise, it will be better able to assure both the quality and the speed of the resulting network, while also ensuring it remains secure.

Wrapping up

Ford is serious about its electric autonomous car efforts. Ford’s electric offerings with the Mustang Mach-e and Ford F-150 Lightning are impressive, but they will be quickly eclipsed by the next generation of increasingly autonomous cars networked by Cisco and due in the 2025-2026 timeframe.
Ford’s use of Cisco for the critical networking component of these coming vehicles was inspired and should result in far fewer related problems for their cars over competitive offerings. While we watch some of the competition and their drivers experience the pain of doing it yourself in a technology area where you have yet to prove competence, Ford should be able to showcase the far better path of using subject-matter experts like Cisco to solve the networking and security exposures that will surround this new class of increasingly autonomous vehicles.
In short, by using Cisco, Ford has better assured that its electric autonomous future will be a successful one.

 

AnotherBrain develops a new generation of artificial intelligence: Organic AI™. It is bio-inspired and close to how cortex works. Our technology is frugal in energy and data, it learns autonomously, and can explain its decisions. Embedded on an ASIC chip, this human-friendly technology works in real time without using the cloud and offers new possibilities for industries such as Supply Chain/Manufacturing, Automotive, Defense or IoT.
Our goal is to bring a true (or so called “general”) intelligence for people and companies that will be able to adapt and react appropriately in unexpected situations. With this Organic AI™, we hope to make life easier for people, but also to respond to the major societal stakes (healthcare, defense, environment, security) and to expand the field of possibilities for humankind.

Are they breaching our patents or using our IP? Or have they found another way to do learning on chip?
 

TECH

Regular
After the most recent AGM, I would have expected to see some sizable increase in the upcoming BRN company 4C revenue report, especially given the fact that Sean H was so adamant at that meeting that we should now start watching the quarterly results for anticipated financial progress.

The AGM was held basically 2/3's through the 2nd quarter.

I believe that he was referring to future quarters, meaning, being disclosed in 4C's in late October 2022 and late January 2023 for example, for the next 2 quarters.

If any material contract had taken place in April, May or June 2022, we would have been informed, maybe I'm wrong, maybe there will be
an explosion in revenue, which would be fantastic, but in my opinion, it clearly isn't coming in the reported 4C in late July 2022.

I respect your view, I'm wrong about plenty of things, and always happy to admit it.

Regards....Tech :cool:
 
D

Deleted member 118

Guest
The AGM was held basically 2/3's through the 2nd quarter.

I believe that he was referring to future quarters, meaning, being disclosed in 4C's in late October 2022 and late January 2023 for example, for the next 2 quarters.

If any material contract had taken place in April, May or June 2022, we would have been informed, maybe I'm wrong, maybe there will be
an explosion in revenue, which would be fantastic, but in my opinion, it clearly isn't coming in the reported 4C in late July 2022.

I respect your view, I'm wrong about plenty of things, and always happy to admit it.

Regards....Tech :cool:


Good time to save in the coming weeks to buy more in 4 weeks' time I reckon, plus I might have my tax rebate as well 🙏
 
That’s right, look at lithium. Unfortunately this is how it is for now; nearly all companies have dropped up to 80%. I’m just hoping our CEO's statement from earlier this year is factual and we get to break even and pay for all of our own shit. That will make me so fking happy. Remember, always have a plan before investing. 2025 is when I will legit be upset if we haven’t done anything by then. For now BRN is in the right place at the right time, educating our future clients and working on projects to prove this, e.g. NASA and most recently nViso.
Hey Izzy and @Dozzaman1977, Sean, definitely did not say, that we would be breakeven by the end of the year, with the increase to 100 employees.

We have to keep it real here, to avoid unrealistic expectations, in regards to near term revenue.

I've got no idea, what the increase in employees will cost the Company, but it's likely to increase our quarterly costs, to around US10 million, or more (from around 7 to 8 now).

It's this increase in costs, that he said "could" or "should" be covered, by revenue.
So ongoing costs, for the time being, would remain the same (7 to 8 million per quarter)..
For now..

He definitely, wouldn't have made a definitive "statement" about what revenue, was to be expected, he chooses his words carefully.

As we are an IP Company, with margins over 90%, revenues will eventually be and continue to grow, to possibly magnitudes greater than expenses.

It would be good, if someone could post the actual transcript, of that part of the interview, as there has been a lot of confusion, over what he said..
 
D

Deleted member 118

Guest
Hey Izzy and @Dozzaman1977, Sean, definitely did not say, that we would be breakeven by the end of the year, with the increase to 100 employees.

We have to keep it real here, to avoid unrealistic expectations, in regards to near term revenue.

I've got no idea, what the increase in employees will cost the Company, but it's likely to increase our quarterly costs, to around US10 million, or more (from around 7 to 8 now).

It's this increase in costs, that he said "could" or "should" be covered, by revenue.
So ongoing costs, for the time being, would remain the same (7 to 8 million per quarter)..
For now..

He definitely, wouldn't have made a definitive "statement" about what revenue, was to be expected, he chooses his words carefully.

It would be good, if someone could post the actual transcript, of that part of the interview, as there has been a lot of confusion, over what he said..
The company is really struggling to fill the last 5 positions, which I hope doesn't cause any delays for our customers, but at least it will keep our costs down. Each position has been advertised for over 30 days and not one has been filled, so that does ring some alarm bells with me; maybe they need someone dedicated to recruitment.

PS: that's all my negativity off my chest now.

 

robsmark

Regular
Hey Izzy and @Dozzaman1977, Sean, definitely did not say, that we would be breakeven by the end of the year, with the increase to 100 employees.

We have to keep it real here, to avoid unrealistic expectations, in regards to near term revenue.

I've got no idea, what the increase in employees will cost the Company, but it's likely to increase our quarterly costs, to around US10 million, or more (from around 7 to 8 now).

It's this increase in costs, that he said "could" or "should" be covered, by revenue.
So ongoing costs, for the time being, would remain the same (7 to 8 million per quarter)..
For now..

He definitely, wouldn't have made a definitive "statement" about what revenue, was to be expected, he chooses his words carefully.

It would be good, if someone could post the actual transcript, of that part of the interview, as there has been a lot of confusion, over what he said..
I think we need to find a source for this, because there are conflicting opinions on what our CEO said. I remember it differently, but will happily retract my post in the previous thread if proven wrong.
 

hotty4040

Regular
Just made a chocolate and walnut brownie cake for dessert later as well; it just needs popping in the oven and serving with either custard or ice cream View attachment 10557

And one man-size pot of stew that has been slow cooking for 5 hours; I'm about to add the potatoes, ready for my poker-playing friends later.

View attachment 10558
Party time, in the extreme, R577. Just wherever you are, please invite me over for a round or 2 (hoping of course that you're playing the short-hand game, which I love) and a serve of what's on offer, with some nice hot chillies, and a can or 3 of my favorite Guinness, and I'd be in heaven again, absolutely. The aces will surely appear for you often, I reckon. Enjoy everything. You're living like a king IMHO.

Akida Ballista >>>>>> Royal Flush in the making, and 4oak already <<<<<<


hotty...
 
Haven't looked much into TATA & your post prompted a look.

On Arijit Mukherjee, the below may have been posted previously(?) I see he also co-authored a white paper, as per the snip below.

It's on the TCS website and, whilst it doesn't appear to be dated, the links at the bottom of the pages to some of the reference material they quote have access dates of late Feb 22, which suggests maybe a Mar 22 release at the earliest?




View attachment 10561
The following is so new it is not even on Tata Power's website. I have read at least ten articles and not one explains what the Ai component of the Ai smart sensor is beyond the letters “Ai”.


In the process of trying to find details I did see a lot of articles publicising the Tata Renesas partnership.

My opinion only DYOR
FF

AKIDA BALLISTA
 

mrgds

Regular
Time for a "GREEN BABY ROCKETS YEAH" @wilzy12 .................
Here's my
"GREEN BABY, ............ YEAH" :alien::alien::alien::alien:


AnotherBrain develops a new generation of artificial intelligence: Organic AI™. It is bio-inspired and close to how cortex works. Our technology is frugal in energy and data, it learns autonomously, and can explain its decisions. Embedded on an ASIC chip, this human-friendly technology works in real time without using the cloud and offers new possibilities for industries such as Supply Chain/Manufacturing, Automotive, Defense or IoT.
Our goal is to bring a true (or so called “general”) intelligence for people and companies that will be able to adapt and react appropriately in unexpected situations. With this Organic AI™, we hope to make life easier for people, but also to respond to the major societal stakes (healthcare, defense, environment, security) and to expand the field of possibilities for humankind.

Are they breaching our patents or using our IP? Or have they found another way to do learning on chip?
When you dig into their website you discover:

“To overcome productivity problems manufacturers experienced due to visual quality control, AnotherBrain has developed a solution embedding AI algorithms that can imitate an operator’s thinking methods.
Our solution allows to address all types of defects on all types of products.”

They are software.

My opinion only DYOR
FF

AKIDA BALLISTA
 
The company is really struggling to fill the last 5 positions, which I hope doesn't cause any delays for our customers, but at least it will keep our costs down. Each position has been advertised for over 30 days and not one has been filled, so that does ring some alarm bells with me; maybe they need someone dedicated to recruitment.

PS: that's all my negativity off my chest now.


Well, those who are holding Nvidia, Google and Amazon need to panic as well, as they too are struggling to recruit.

There is a world wide shortage of engineers and data scientists.

By the way there is a salmonella alert for European chocolate out today.
 