BRN Discussion Ongoing

Diogenese

Top 20
While it is perfectly conceivable that Meta could be exploring BrainChip’s offerings behind an NDA wall of silence, I don’t see any conclusive evidence at present that we are indeed engaged with them.

View attachment 65036


In my opinion, FF is once again jumping to conclusions.
He is basing his reasoning for adding Meta to his personal list of BrainChip’s engagements on a supposed fact, namely that Chris Jones was introduced to BrainChip and TENNs while working at Meta, even though this premise has not been verified - it is merely FF’s interpretation of the following quote (from 3:18 min, my transcript):

“So, about a year ago, uh, I was at Meta, I was in their AI Infrastructure Group, and on an almost daily basis I would see new neural network architectures.

So, when I was introduced to BrainChip, I didn’t think I would really be impressed by anything a small team was gonna develop, erm. They told me about TENNs, I was a little bit skeptical to be honest at first. As I started getting to understand the benchmarks and a little bit more of the math and how it worked, I started to get pretty excited by what they had.”


It is a hasty judgement to conclude that the above quote necessarily expresses simultaneity, without considering the alternative - posteriority.

Chris Jones himself does not explicitly state that he was introduced to BrainChip and TENNs while working for Meta (picture BrainChip staff giving a PowerPoint presentation at Meta’s premises).
Rather, this is what FF read into his words.

But there is another way to interpret that quote, one which makes more sense to me:

When I started watching the video of Chris Jones’s presentation on TENNs, my immediate thoughts were a) that BrainChip had sent an excellent speaker to the 2024 Embedded Vision Summit in May to replace Nandan Nayampally (who had originally been scheduled to give that talk), and b) that job loss can turn out to be a blessing in disguise.

What FF doesn’t seem to be aware of is that Chris Jones had been affected by one of Meta’s 2023 mass layoffs. It must have been the April 2023 round, as in his LinkedIn post below he mentions over 4000 other employees sharing his fate as well as the fact that he had been with the company for only 11 months. This aligns with his employment history on LinkedIn, according to which he started as a Senior Product Technical Program Manager at Meta in June 2022.

View attachment 65030


View attachment 65031


Under California law (the so-called WARN Act), companies over a certain size need to give affected employees at least 60 days’ advance notice of significant workforce reductions, which, in combination with a severance pay package, would account for Chris Jones’s LinkedIn profile stating he was with Meta until July 2023, even though he appears to have been laid off in April 2023.


View attachment 65033


Judging from the practice observed with other US tech giants handling domestic layoffs, it is, however, highly likely that from the day the layoff was communicated, the affected employees would, with immediate effect, no longer have had any access to their company emails, computer files and confidential information, despite remaining on the company payroll for another two to four months (depending on their respective severance pay packages).

And unless Meta was an EAP customer of BrainChip at the time (for which there has never been any indication whatsoever), Meta employees would not have been privy to any details about TENNs prior to BrainChip’s white paper publication on June 9, 2023 - weeks after Chris Jones had found out about being laid off and had since presumably been cut off from the company’s internal communication channels and flow of information.


View attachment 65034


So chances are he did not develop his enthusiasm for BrainChip and TENNs in his role as Senior Product Technical Program Manager at Meta - but rather while job hunting post-layoff!

Luckily for both Chris Jones and BrainChip, getting introduced to each other seems to have turned out to be a win-win situation.
Chris Jones is clearly an asset to our company, from what I can tell from his public presentations, and hopefully he will be able to proudly tell his daughter one day that when she was a toddler, he turned a job loss into an immense gain by serendipitously discovering what BrainChip had to offer.

And who knows - maybe Chris Jones has been/is/will be the one introducing BrainChip to his former Meta colleagues. But as for now, Meta stays off my personal (virtual) list of BrainChip’s engagements.

DYOR.
That is a tendentious reading of Mr Jones' comments about when he found out about Akida. The normal reading of the following is that he saw several NNs while he worked at Meta and, following on from that statement, he says "So, when I was introduced to BrainChip ... they told me about TeNNs."

"So, about a year ago, uh, I was at Meta, I was in their AI Infrastructure Group, and on an almost daily basis I would see new neural network architectures.

So, when I was introduced to BrainChip, I didn’t think I would really be impressed by anything a small team was gonna develop, erm. They told me about TENNs, I was a little bit skeptical to be honest at first. As I started getting to understand the benchmarks and a little bit more of the math and how it worked, I started to get pretty excited by what they had
.”

It's about Attention and LSTM. You need to take the whole context into consideration. The statements would normally be linked by the man on the Clapham omnibus. This is in normal English speech, not a statutory declaration or Kantian transcendentalism.

The following argument is a case of pulling the trigger before removing the pistol from its holster:

What FF doesn’t seem to be aware of is that Chris Jones had been affected by one of Meta’s 2023 mass layoffs. It must have been the April 2023 round, as in his LinkedIn post below he mentions over 4000 other employees sharing his fate as well as the fact that he had been with the company for only 11 months. This aligns with his employment history on LinkedIn, according to which he started as a Senior Product Technical Program Manager at Meta in June 2022.

The interesting thing is that the TeNNs patent was filed in mid-2022, so BrainChip would have only started talking to EAPs about TeNNs after the patent was filed, although public discussion of TeNNs did not take place until much later. He worked for Meta for 11 months from April 2022. Given that the patent filing preceded and coincided with the period of Mr Jones' employment with Meta, the inference is open that Meta was an EAP or in discussion with BrainChip at least before Mr Jones was outplaced/right-sized from Meta. There was a nine-month period when Mr Jones was working with Meta during which BrainChip was free to discuss TeNNs with EAPs under NDA.
 
  • Like
  • Love
  • Fire
Reactions: 57 users

Frangipani

Regular
Those of you who have taken a closer look at the global neuromorphic research community will likely have come across the annual Telluride Neuromorphic Cognition Engineering Workshop, a three-week project-based meeting in the eponymous Telluride, a charming former Victorian mining town in the Rocky Mountain high country of southwestern Colorado. Nestled in a deep glacial valley, Telluride sits at an elevation of 8750 ft (2667 m) and is surrounded by majestic rugged peaks. Truly a scenic location for a workshop.

The National Science Foundation (NSF), which has continuously supported the Telluride Workshop since its beginnings in the 1990s, described it in a 2023 announcement as follows: It “will bring together an interdisciplinary group of researchers from academia and industry, including engineers, computer scientists, neuroscientists, behavioral and cognitive scientists (…) The annual three-week hands-on, project-based meeting is organized around specific topic areas to explore organizing principles of neural cognition that can inspire implementation in artificial systems. Each topic area is guided by a group of experts who will provide tutorials, lectures and hands-on project guidance.”

https://new.nsf.gov/funding/opportu...ng-augmented-intelligence/announcements/95341

View attachment 59073

View attachment 59075



The topic areas for the 2024 Telluride Neuromorphic Workshop are now online. As every year, the list of topic leaders and invited speakers includes the crème de la crème of neuromorphic researchers from all over the world. While no one from Brainchip has made the invited speakers’ list (at least not to date), I was extremely pleased to notice that Akida will be featured nevertheless! It has taken the academic neuromorphic community ages to take Brainchip seriously (cf my previous post on Open Neuromorphic: https://thestockexchange.com.au/threads/brn-discussion-ongoing.1/post-404235), but here we are, finally getting acknowledged alongside the usual suspects:

View attachment 59076
View attachment 59077
Some readers will now presumably shrug their shoulders and consider this mention of Brainchip in a workshop programme insignificant compared to those coveted commercial announcements. To me, however, the inclusion of Brainchip at Telluride marks a milestone.

Also keep in mind what NSF Program Director Soo-Siang Lim said about Telluride (see link above): “This workshop has a long and successful track-record of advancing and integrating our understanding of biological and artificial systems of learning. Many collaborations catalyzed by the workshop have led to significant technology innovations, and the training of future industry and academic leaders.”

I’d just love to know which of the four topic leaders and/or co-organisers suggested including Brainchip in their hands-on project “Processing space-based data using neuromorphic computing hardware” (and whether this was readily agreed on or not):

Was it one of the two colleagues from Western Sydney University’s International Centre for Neuromorphic Systems (ICNS)? Gregory Cohen (who is responsible for Astrosite, WSU’s containerised neuromorphic inspired mobile telescope observatory as well as for the modification of the two neuromorphic cameras on the ISS as part of the USAFA Falcon Neuro project) or Alexandre Marcireau?

Or was it Gregor Lenz, who left Synsense in mid-2023 to co-found Neurobus (“At Neurobus we’re harnessing the power of neuromorphic computing to transform space technology”) and is also one of the co-founders of the Open Neuromorphic community? He was one of the few live viewers of Cristian Axenie’s January 15 online presentation on the TinyML Vision Zero San Jose Competition (where his TH Nürnberg team, utilising Akida for their event-based visual motion detection and tracking of pedestrians, had come runner-up), and asked a number of intriguing questions about Akida during the live broadcast.

Or was it possibly Jens Egholm Pedersen, the Danish doctoral student at Stockholm’s KTH Royal Institute of Technology, Sweden’s largest technical university, who hosted said presentation by Cristian Axenie on the Open Neuromorphic YouTube channel and appeared to be genuinely impressed about Akida (and the Edge Impulse platform), too?

Oh, and last, but not least:
Our CTO Anthony M Lewis aka Tony Lewis has been to Telluride numerous times: the workshop website lists him as one of the early participants back in 1996 (when he was with UCLA’s Computer Science Department). Tony Lewis is subsequently listed as a guest speaker for the 1999, 2000, 2001, 2002, 2003 and 2004 workshops (in his then capacity as the founder of Iguana Robotics) - information on the participants between 2006 and 2009 as well as for 2011 is marked as “lost”. In 2019, Tony Lewis had once again been invited as either topic leader or guest speaker, but according to the website could not attend.

So I guess there is a good chance we will see him return to Telluride one day, this time as CTO of Brainchip, catching up with a lot of old friends and acquaintances, many of whom he also keeps in touch with via his extensive LinkedIn network, so they’d definitely know what he’s been up to.

As I said in another post six weeks ago:

Anyone interested in registering online for remote participation in the upcoming hybrid Telluride Neuromorphic Workshop (June 30 - July 19)?




0FAEC2C7-79D6-4F18-B1AC-0E98365B2C2A.jpeg

Meanwhile, two more speakers have been invited by the organisers of the topic area SPA24: Neuromorphic systems for space applications, which is the one that will provide on-site participants with the opportunity to get hands-on experience with neuromorphic hardware including Akida:

874DD3F7-6E62-4A50-BAE0-188318ADCB8A.jpeg


Dr Damien Joubert from Prophesee and … 🥁 🥁 🥁 Laurent Hili, our friend from ESA!



2A9DA16A-94B7-43D4-9C9C-54BC42EA85CA.jpeg
 
  • Like
  • Love
  • Fire
Reactions: 16 users
Latest paper just released today from Gregor Lenz at Neurobus and Doug McLelland at Brainchip.


Low-power Ship Detection in Satellite Images Using Neuromorphic Hardware​

Gregor Lenz (corresponding author, e-mail: gregor@neurobus.space), Neurobus, Toulouse, France; Douglas McLelland, BrainChip, Toulouse, France

Abstract​

Transmitting Earth observation image data from satellites to ground stations incurs significant costs in terms of power and bandwidth. For maritime ship detection, on-board data processing can identify ships and reduce the amount of data sent to the ground. However, most images captured on board contain only bodies of water or land, with the Airbus Ship Detection dataset showing only 22.1% of images containing ships. We designed a low-power, two-stage system to optimize performance instead of relying on a single complex model. The first stage is a lightweight binary classifier that acts as a gating mechanism to detect the presence of ships. This stage runs on Brainchip’s Akida 1.0, which leverages activation sparsity to minimize dynamic power consumption. The second stage employs a YOLOv5 object detection model to identify the location and size of ships. This approach achieves a mean Average Precision (mAP) of 76.9%, which increases to 79.3% when evaluated solely on images containing ships, by reducing false positives. Additionally, we calculated that evaluating the full validation set on a NVIDIA Jetson Nano device requires 111.4 kJ of energy. Our two-stage system reduces this energy consumption to 27.3 kJ, which is less than a fourth, demonstrating the efficiency of a heterogeneous computing system.

1 Introduction

Ship detection from satellite imagery is a critical application within the field of remote sensing, offering significant benefits for maritime safety, traffic monitoring, and environmental protection. The vast amount of data generated by satellite imagery cannot all be processed on the ground in data centers, as the downlinking of image data from a satellite is a costly process in terms of power and bandwidth.
Figure 1: Data flow chart of our system.
To help satellites identify the most relevant data to downlink and alleviate processing on the ground, recent years have seen the emergence of edge artificial intelligence (AI) applications for Earth observation [xu2022lite, zhang2020ls, xu2021board, ghosh2021board, yao2019board, alghazo2021maritime, vstepec2019automated]. By sifting through the data on-board the satellite, we can discard a large number of irrelevant images and focus on the relevant information. Because satellites are subject to extreme constraints in size, weight and power, energy-efficient AI systems are crucial. In response to these demands, our research focuses on using low-power neuromorphic chips for ship detection tasks in satellite images. Neuromorphic computing, inspired by the neural structure of the human brain, offers a promising avenue for processing data with remarkable energy efficiency. The Airbus Ship Detection challenge [al2021airbus] on Kaggle aimed to identify the best object detection models. A post-challenge analysis [faudi2023detecting] revealed that a binary classification pre-processing stage was crucial in winning the challenge, as it reduced the rates of false positives and therefore boosted the relevant segmentation score. We introduce a ship detection system that combines a binary classifier with a powerful downstream object detection model. The first stage is implemented on a state-of-the-art neuromorphic chip and determines the presence of ships. Images identified as containing ships are then processed by a more complex detection model in the second stage, which can be run on more flexible hardware. Our work showcases a heterogeneous computing pipeline for a complex real-world task, combining the low-power efficiency of neuromorphic computing with the increased accuracy of a more complex model.
Figure 2: The first two images are examples of 22% of annotated samples. The second two images are examples of the majority of images that do not contain ships but only clouds, water or land.

2 Dataset

The Airbus Ship Detection dataset [airbus_ship_detection_2018] contains 192k satellite images, of which 22.1% contain annotated bounding boxes for a single ship class. Key metrics of the dataset are described in Table 1. As can be seen in the sample images in Figure 2, a large part of the overall pixel space captures relatively homogeneous areas such as open water or clouds. We chose this dataset as it is part of the European Space Agency’s (ESA) On-Board Processing Benchmark suite for machine learning applications [obpmark], with the goal of testing and comparing a variety of edge computing hardware platforms for the most common ML tasks related to space applications. The annotated ship bounding boxes have diagonals that vary from 1 to 380 pixels in length, and 48.3% of bounding boxes have diagonals of 40 pixels or shorter. Given that the images are 768×768px in size, this makes it a challenging dataset, as the model needs to be able to detect ships of a large variety of sizes. Since only annotations for the training set are available on Kaggle, we used a random 80/20 split for training and validation, similarly to Huang et al [huang2020fast]. For our binary classifier, we downsized all images to 256×256px, to be compatible with the input resolution of Akida 1.0, and labeled the images as 1 if they contained at least one bounding box of any size, otherwise 0. For our detection model, we downsized all images to 640×640px.
RGB image size | 768×768
Total number of images | 192,555
Number of training images | 154,044
Percentage of images that contain ships | 22.1%
Total number of bounding boxes | 81,723
Median diagonal of all bounding boxes | 43.19 px
Ratio of bounding box to image area | 0.3%
Table 1: Summary of image and bounding box data for the Airbus Ship Detection training dataset.
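A minimal sketch of the labelling and split logic described above, under the assumption of a hypothetical `annotations` dict mapping image IDs to their bounding boxes (the actual Kaggle CSV stores run-length-encoded masks, which are not parsed here):

```python
# Hedged sketch: binary label = "contains at least one ship", random 80/20 split.
# `annotations` is a hypothetical {image_id: [bounding_boxes]} mapping.
import random
from typing import Dict, List, Tuple

def make_binary_labels(annotations: Dict[str, list]) -> Dict[str, int]:
    """1 if the image has at least one annotated ship, else 0."""
    return {image_id: int(len(boxes) > 0) for image_id, boxes in annotations.items()}

def train_val_split(image_ids: List[str], val_fraction: float = 0.2,
                    seed: int = 42) -> Tuple[List[str], List[str]]:
    """Random 80/20 split of the annotated training set, as in the paper."""
    ids = sorted(image_ids)
    random.Random(seed).shuffle(ids)
    n_val = int(len(ids) * val_fraction)
    return ids[n_val:], ids[:n_val]  # (train, validation)
```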

3 Models

For our binary classifier, we used an 866k-parameter model named AkidaNet 0.5, which is loosely inspired by MobileNet [howard2017mobilenets] with alpha = 0.5. It consists of standard convolutional, separable convolutional and linear layers, to reduce the number of parameters and to be compatible with Akida 1.0 hardware. To train the network, we used binary cross-entropy loss, the Adam optimizer, a cosine decay learning rate scheduler with an initial rate of 0.001 and lightweight L1 regularization on all model parameters over 10 epochs. For our detection model, we trained a YOLOv5 medium [ge2021yolox] model with 25M parameters using stochastic gradient descent, a learning rate of 0.01 and 0.9 momentum, plus blurring and contrast augmentations over 25 epochs.
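A hedged Keras sketch of that training recipe (binary cross-entropy, Adam, cosine-decay learning rate starting at 0.001, light L1 regularisation, 10 epochs). The small stand-in network below is not AkidaNet 0.5, which comes from BrainChip's own model zoo; the batch size and regularisation strength are assumptions.

```python
# Sketch of the binary-classifier training setup; the tiny CNN merely stands in
# for AkidaNet 0.5 and is not the architecture used in the paper.
import tensorflow as tf
from tensorflow.keras import layers, regularizers

def build_gating_classifier(input_shape=(256, 256, 3), alpha=0.5, l1=1e-6):
    """Conv / separable-conv / dense classifier: ship present (1) vs absent (0)."""
    reg = regularizers.l1(l1)
    inputs = tf.keras.Input(shape=input_shape)
    x = layers.Conv2D(int(32 * alpha), 3, strides=2, padding="same",
                      activation="relu", kernel_regularizer=reg)(inputs)
    for filters in (64, 128, 256):
        x = layers.SeparableConv2D(int(filters * alpha), 3, strides=2, padding="same",
                                   activation="relu", depthwise_regularizer=reg,
                                   pointwise_regularizer=reg)(x)
    x = layers.GlobalAveragePooling2D()(x)
    outputs = layers.Dense(1, activation="sigmoid", kernel_regularizer=reg)(x)
    return tf.keras.Model(inputs, outputs)

steps_per_epoch = 154_044 // 32  # training images / assumed batch size
lr_schedule = tf.keras.optimizers.schedules.CosineDecay(
    initial_learning_rate=1e-3, decay_steps=10 * steps_per_epoch)

model = build_gating_classifier()
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=lr_schedule),
              loss=tf.keras.losses.BinaryCrossentropy(),
              metrics=[tf.keras.metrics.Recall(), tf.keras.metrics.Precision()])
# model.fit(train_ds, validation_data=val_ds, epochs=10)
```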

4 Akida hardware

Akida by Brainchip is an advanced artificial intelligence processor inspired by the neural architecture of the human brain, designed to provide high-performance AI capabilities at the edge with exceptional energy efficiency. Version 1.0 is available for purchase in a PCIe x1 form factor as shown in Figure 3, and supports convolutional neural network architectures. Version 2.0 adds support for a variety of neural network types including RNNs and transformer architectures, but is currently only available in simulation. The Akida processor operates in an event-based mode for intermediate layer activations, which only performs computations for non-zero inputs, significantly reducing operation counts and allowing direct, CPU-free communication between nodes. Akida 1.0 supports flexible activation and weight quantization schemes of 1, 2, or 4 bit. Models are trained in Brainchip’s MetaTF, which is a lightweight wrapper around TensorFlow. In March 2024, Akida was also sent to space for the first time [brainchip2024launch].
Figure 3: AKD1000 chip on a PCB with PCIe x1 connector.
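As an illustration of what 4-bit weight quantisation means in practice, here is a small fake-quantisation sketch in plain NumPy. The real flow uses BrainChip's MetaTF tooling and is not reproduced here; the symmetric per-tensor scheme below is an assumption for illustration only.

```python
# Hedged illustration of symmetric 4-bit fake quantisation of a weight tensor.
import numpy as np

def quantize_symmetric(w: np.ndarray, bits: int = 4) -> np.ndarray:
    """Map float weights to a symmetric integer grid and back again."""
    levels = 2 ** (bits - 1) - 1                 # 7 signed levels for 4-bit
    scale = np.max(np.abs(w)) / levels           # per-tensor scale
    return np.round(w / scale).clip(-levels, levels) * scale

w = np.random.default_rng(0).normal(0.0, 0.05, size=(3, 3, 16, 32))
w_q = quantize_symmetric(w, bits=4)
print("max abs quantisation error:", float(np.max(np.abs(w - w_q))))
```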

5 Results

5.1 Classification accuracy

The key metrics for our binary classification model are provided in Table 2. The trained floating point model reaches an accuracy of 97.91%, which drops to 95.75% after quantizing the weights and activations to 4 bit and the input layer weights to 8 bit. After one epoch of quantization-aware training with a tenth of the normal learning rate, the model recovers nearly its floating point accuracy, at 97.67%. Work by Alghazo et al [alghazo2021maritime] reaches an accuracy of 89.7% in the same binary classification setting, albeit on a subset of the dataset and on images that are downscaled to 100 pixels. In addition, the corresponding recall and precision metrics for our model are shown in the table. In our system we prioritize recall, because false negatives (missing a ship) have a higher cost than false positives (detecting ships where there are none), as the downstream detection model can correct for mistakes of the classifier stage. By default we obtain a recall of 94.40% and a precision of 95.07%, but by adjusting the decision threshold on the output, we bias the model to include more images at the cost of precision, obtaining a recall of 97.64% for a precision of 89.73%.
Table 2: Model performance comparison in percent. FP is floating point; 4 bit is the quantized model with 8 bit inputs and 4 bit activations and weights; QAT is quantization-aware training for 1 epoch with reduced learning rate. Precision and recall values are given for decision thresholds of 0.5 and 0.1.
Metric | FP | 4 bit | After QAT
Accuracy | 97.91 | 95.75 | 97.67
Accuracy [alghazo2021maritime] | 89.70 | - | -
Recall | 95.23 | 85.12 | 94.40 / 97.64
Precision | 95.38 | 95.32 | 95.07 / 89.73
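The recall/precision trade-off from moving the decision threshold can be reproduced on synthetic scores; the sketch below is illustrative only and uses made-up score distributions, not the model's actual outputs.

```python
# Hedged sketch: lowering the decision threshold trades precision for recall.
import numpy as np

def precision_recall(scores, labels, threshold):
    preds = scores >= threshold
    tp = np.sum(preds & (labels == 1))
    fp = np.sum(preds & (labels == 0))
    fn = np.sum(~preds & (labels == 1))
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    return precision, recall

rng = np.random.default_rng(0)
n = 10_000
labels = (rng.random(n) < 0.221).astype(int)                          # ~22.1% ships
scores = np.where(labels == 1, rng.beta(5, 2, n), rng.beta(1, 8, n))  # toy classifier scores

for t in (0.5, 0.1):   # the two operating points reported in Table 2
    p, r = precision_recall(scores, labels, t)
    print(f"threshold={t:.1f}  precision={p:.2%}  recall={r:.2%}")
```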

5.2 Performance on Akida 1.0

The model that underwent QAT is then deployed to the Akida 1.0 reference chip, AKD1000, where the same accuracy, recall and precision are observed as in simulation. As detailed in Table 3, feeding a batch of 100 input images takes 1.168 s and consumes 440 mW of dynamic power. The dynamic energy used to process the whole batch is therefore 515 mJ, which translates to 5.15 mJ per image. The network is distributed across 51 of the 78 available neuromorphic processing cores. During our experiments, we measured 921 mW of static power usage on the AKD1000 reference chip. We note that this value is considerably reduced in later chip generations.

Table 3: Summary of performance metrics on Akida 1.0 for a batch size of 100.
Total duration (ms) | 1167.95
Duration per sample (ms) | 11.7
Throughput (fps) | 85.7
Total dynamic power (mW) | 440.8
Energy per batch (mJ) | 514.84
Energy per sample (mJ) | 5.15
Total neuromorphic processing cores | 51
We can further break down performance across the different layers in the model. The top plot in Figure 4 shows the latency per frame: it increases as layers are added up to layer 7, but beyond that, the later layers make almost no difference. As each layer is added, we can measure energy consumption, and estimate the per-layer contribution as the difference from the previous measurement, shown in the middle plot of Figure 4. We observe that most of the energy during inference is spent on earlier layers, even though the work required per layer is expected to be relatively constant, since spatial input sizes are halved while the number of filters is doubled throughout the model. The very low energy measurements of later layers are explained by the fact that Akida is an event-based processor that exploits sparsity. When measuring the density of input events per layer, as shown in the bottom plot of Figure 4, we observe that energy per layer correlates well with the event density. The very first layer receives dense images, but subsequent layers have much sparser inputs, presumably because many input pixels are not ships, which in turn reduces the activation of filters that encode ship features. We observe an average event density of 29.3% over all layers including input, reaching less than 5% in later layers. This level of sparsity is achieved through the combination of ReLU activation functions and L1 regularization on activations during training.
Figure 4: Layer-wise statistics per sample image for inference of binary classifier on Akida v1, measured for a batch of 100 images over 10 repeats.
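The "event density" the authors correlate with per-layer energy is simply the fraction of non-zero activations a layer emits. A toy sketch, using made-up post-ReLU feature maps rather than real Akida measurements:

```python
# Hedged sketch of the event-density measurement: on event-based hardware,
# only non-zero activations trigger computation, so sparser layers cost less.
import numpy as np

def event_density(activations: np.ndarray) -> float:
    """Fraction of non-zero values in a layer's output tensor."""
    return float(np.count_nonzero(activations) / activations.size)

rng = np.random.default_rng(0)
# Toy post-ReLU feature maps: an "early" (dense) and a "late" (sparse) layer.
early = np.maximum(rng.normal(0.2, 1.0, size=(10, 64, 64, 16)), 0.0)
late = np.maximum(rng.normal(-1.5, 1.0, size=(10, 8, 8, 128)), 0.0)

print(f"early-layer density: {event_density(early):.1%}")  # relatively dense, more energy
print(f"late-layer density:  {event_density(late):.1%}")   # sparse, little dynamic energy
```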

5.3 Detection model performance

For our subsequent stage, our YOLOv5 model with 25M parameters achieves 76.9% mAP when evaluated on the full validation set containing both ship and non-ship images. When we evaluate the same model on the subset of the validation set that just contains ships, the mAP jumps to 79.3%, as the false positives are reduced considerably. That means that our classifier stage already has a beneficial influence on the detection performance of the downstream model. Table 4 provides an overview of detection performance in the literature. Machado et al. [machado2022estimating] provide measurements for different YOLOv5 models on the NVIDIA Jetson Nano series, a hardware platform designed for edge computing. For the YOLOv5 medium model, the authors report an energy consumption of 0.804 mWh per frame and a throughput of 2.7 frames per second at an input resolution of 640 pixels, which translates to a power consumption of 7.81 W. The energy necessary to process the full validation dataset of 38,511 images on a Jetson is therefore 38,511 × 7.81 / 2.7 = 111.4 kJ. For our proposed two-stage system, we calculate the total energy as the sum of processing the full validation set on Akida plus processing the identified ship images on the Jetson device. Akida has a power consumption of 0.921 + 0.44 = 1.361 W at a throughput of 85.7 images/s. With a recall of 97.64% and a precision of 89.73%, 9,243 images, equal to 24.03% of the validation data, are classified to contain ships, in contrast to the actual 22.1%. We therefore obtain an overall energy consumption of 38,511 × 1.361 / 85.7 + 9,243 × 7.81 / 2.7 = 27.3 kJ. Our proposed system uses 4.07 times less energy to evaluate this specific dataset.
Model | mAP (%) | Energy (kJ)
YOLOv3 [patel2022deep] | 49 | -
YOLOv4 [patel2022deep] | 61 | -
YOLOv5 [patel2022deep] | 65 | -
Faster RCNN [al2021airbus] | 80 | -
YOLOv5 | 76.9 | 111.4
AkidaNet + YOLOv5 | 79.3 | 27.3
Table 4: Mean average precision and energy consumption evaluated on the Airbus Ship Detection dataset.
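The headline energy numbers follow directly from the throughput and power figures quoted above; here is a small re-derivation using only those published values:

```python
# Re-derivation of the 111.4 kJ vs 27.3 kJ comparison from the quoted figures.
N_VAL = 38_511                  # images in the validation split
N_FLAGGED = 9_243               # images the gating classifier passes downstream

JETSON_POWER_W = 7.81           # YOLOv5m on Jetson Nano [machado2022estimating]
JETSON_FPS = 2.7
AKIDA_POWER_W = 0.921 + 0.440   # static + dynamic power on AKD1000
AKIDA_FPS = 85.7

# Single-stage baseline: YOLOv5m runs on every validation image.
baseline_kj = N_VAL * JETSON_POWER_W / JETSON_FPS / 1e3

# Two-stage system: Akida screens everything, the Jetson only sees flagged images.
two_stage_kj = (N_VAL * AKIDA_POWER_W / AKIDA_FPS
                + N_FLAGGED * JETSON_POWER_W / JETSON_FPS) / 1e3

print(f"baseline:  {baseline_kj:.1f} kJ")                 # ~111.4 kJ
print(f"two-stage: {two_stage_kj:.1f} kJ")                # ~27.3 kJ
print(f"reduction: {baseline_kj / two_stage_kj:.2f}x")    # ~4.07x
```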

6 Discussion

For edge computing tasks, it is common to have a small gating model which activates more costly downstream processing whenever necessary. As only 22.1% of images in the Airbus detection dataset contain ships, a two-stage processing pipeline can leverage different model and hardware architectures to optimize the overall system. We show that our classifier stage running on Akida benefits from a high degree of sparsity when processing the vast amounts of homogeneous bodies of water, clouds or land in satellite images, where only 0.3% of the pixels are objects of interest. We hypothesise that many filter maps that encode ship features are not activated most of the time. This has a direct impact on the dynamic power consumption and latency during inference due to Akida’s event-based processing nature. In addition, we show that a two-stage system actually increases the mAP of the downstream model by reducing false positive rates, as is also mentioned in the post-challenge analysis of the Airbus Kaggle challenge [faudi2023detecting]. The energy consumption of the hybrid system is less than a fourth compared to running the detection model on the full dataset, with more room for improvement when using Akida v2, which is going to reduce both static and dynamic power consumption and allow the deployment of more complex models that likely achieve higher recall rates. The limitation of our system is its increased size, as it requires fitting two different accelerators instead of a single one. But by combining the strengths of different hardware platforms, we can optimize the overall performance, which is critical for edge computing applications in space.
 
  • Like
  • Fire
  • Love
Reactions: 69 users

Frangipani

Regular
The interesting thing is that the TeNNs patent was filed in mid-2022, so BrainChip would have only started talking to EAPs about TeNNs after the patent was filed, although public discussion of TeNNs did not take place until much later. He worked for Meta for 11 months from April 2022.

No, Chris Jones started working for Meta in June 2022, see my post and his LinkedIn profile.

Given that the patent filing preceded and coincided with the period of Mr Jones' employment with Meta, the inference is open that Meta was an EAP or in discussion with BrainChip at least before Mr Jones was outplaced/right-sized from Meta.


That’s exactly why I wrote the following:

And unless Meta was an EAP customer of BrainChip at the time (for which there has never been any indication whatsoever), Meta employees would not have been privy to any details about TENNs prior to BrainChip’s white paper publication on June 9, 2023 - weeks after Chris Jones had found out about being laid off and had since presumably been cut off from the company’s internal communication channels and flow of information.


View attachment 65034


So chances are he did not develop his enthusiasm for BrainChip and TENNs in his role as Senior Product Technical Program Manager at Meta - but rather while job hunting post-layoff!

But has there ever been any indication that Meta was indeed an EAP customer at the time? Any announcement as with other EAP customers? If not, we shouldn’t simply assume so.

But for the sake of the discussion, let’s assume for a minute it was indeed the case.
Given that the patent filing preceded and coincided with the period of Mr Jones' employment with Meta, the inference is open that Meta was an EAP or in discussion with BrainChip at least before Mr Jones was outplaced/right-sized from Meta. There was a nine-month period when Mr Jones was working with Meta during which BrainChip was free to discuss TeNNs with EAPs under NDA.

In practice, though, the overlap between Chris Jones working for Meta and his introduction to BrainChip and TENNs would have been much shorter, given that Chris Jones said on May 23, 2024:

So, about a year ago, uh, I was at Meta, I was in their AI Infrastructure Group, and on an almost daily basis I would see new neural network architectures.

So, when I was introduced to BrainChip, I didn’t think I would really be impressed by anything a small team was gonna develop, erm. They told me about TENNs, I was a little bit skeptical to be honest at first. As I started getting to understand the benchmarks and a little bit more of the math and how it worked, I started to get pretty excited by what they had.”



Saying “about a year ago” on May 23, 2024 could mean July, June, May, April, possibly even March 2023. He certainly wouldn’t have put it that way if he and his colleagues had already been introduced to BrainChip in let’s say October 2022. And since he appears to have been laid off in mid-April, the potential time window shrinks to a maximum of six weeks, I’d say. That’s far from the nine months you claimed.

Your use of the participle “outplaced” instead of “laid off” implies that Meta would have helped him to find his current job? Again, there is no indication of that at all when you read his LinkedIn post, especially the last paragraph:

5069E8E9-DB90-439B-A5B2-A60C00F3E355.jpeg



Mind you, I did not say there is no way that Chris Jones could have found out about TENNs while still working for Meta, but to me his words are certainly not conclusive evidence, the way FF presented them. They can very well be interpreted differently, especially with the background knowledge that he was laid off more than a year ago (which he didn’t mention in the video). I had already taken notice of that a while ago, when I had had a look at his LinkedIn profile after learning that he would be the one giving the talk Nandan Nayampally was supposed to have presented. (This was even before we found out from the Quarterly Investor podcast that Nandan and Rob had all of a sudden left the company.)

So no, mine is not a tendentious reading and I am not shooting myself in the foot either, if that is what you meant to say. My argument is well-founded. I don’t exclude the possibility that Chris Jones got introduced to BrainChip while still working for Meta, but I believe it is the unlikelier sequence of events for the reasons stated.

Also: Why would he have asked his LinkedIn network for assistance in finding a new job in his April 2023 LinkedIn post and only started working for BrainChip in October 2023? If he had already been that excited about our company prior to being laid off at Meta, they might even have been able to offer him a new position from August onwards, a smooth transition from Meta to BrainChip without a paycheck missing. Of course I have no idea whether it was possibly a deliberate decision of Chris Jones to pick October as the start date for his new job (maybe he wanted to spend quality time with his family between jobs, go on a long vacation, rest and recharge, renovate the house or perhaps he was suffering from an illness, was taking care of elderly relatives or was grieving for a loved one etc) or whether there was simply no earlier job vacancy for his position at BrainChip, but the two month gap between jobs could just as well signify that he didn’t yet know about BrainChip’s offerings by the time he started looking for a new job and that they were possibly not even his first choice.

Ultimately, everything - and that includes FF’s reading - is speculation, unless we hear it from the horse’s mouth. Can we at least agree on that?
 
  • Like
  • Fire
Reactions: 14 users


Frangipani

Regular
7D04C7C9-F891-4614-886A-C06DD06A9D8C.jpeg
 
  • Like
  • Fire
  • Love
Reactions: 26 users

Frangipani

Regular
Fast forward to April 20, 2024 when @Pmel shared a great find, namely a LinkedIn post by SnT researcher Geoffrey Eappen, in which Flor Ortiz is named as part of a team that successfully demonstrated keyword spotting implemented on Akida. (You won’t get this post as a search result for “Flor Ortiz” on TSE, though, as her name appears in an image, not in a text.)


View attachment 64489

While it is heartwarming for us BRN shareholders to read about the Uni Luxembourg researchers’ enthusiasm and catch a glimpse of the Akida Shuttle PC in action, this reveal about the SnT SIGCOM researchers playing with AKD1000 didn’t really come as a surprise, given we had previously spotted SnT colleagues Jorge Querol and Swetha Varadarajulu liking BrainChip posts on LinkedIn:

https://thestockexchange.com.au/threads/brn-discussion-ongoing.1/post-408941

Nevertheless, it is another exciting validation of Akida technology for the researchers’ whole professional network to see!




Today, the University of Luxembourg posted the following:

View attachment 64490



View attachment 64491
View attachment 64492




I also discovered an interview with Flor Ortiz (published on Feb 1, 2024) on the Uni Luxembourg website:



[Series: SnT Young Researchers] Flor Ortiz on Artificial Intelligence for Satellite Communication​

Flora_Ortiz.png

  • Interdisciplinary Centre for Security, Reliability and Trust (SnT)
    01 February 2024
  • Category
    Research
  • Topic
    Space

Young researchers shape our future. Bringing their innovative ideas into our projects, they contribute not only to the excellence of research of the University of Luxembourg’s Interdisciplinary Centre for Security, Reliability and Trust (SnT) but also to our impact in society. They take our research to the next generation.
In this edition of the series, we feature Dr. Flor Ortiz and her research on artificial intelligence and machine learning for satellite communication.

Dr. Flor Ortiz, research associate at the Signal Processing and Communication research group (SIGCOM), gave us some insights into the research project she is working on, reflected on how this project will shape the future, and shared her future plans with us.

Flor, what are you working on in your research?
We are working in collaboration with SES to utilise artificial intelligence (AI) to empower the next generation of satellite communications. In our SmartSpace project, we study and evaluate how and where we can implement AI techniques to improve satellite communication systems.
We are exploring several use cases, including AI for resource allocation, AI for interference management, and a broader connectivity ecosystem using big data, for example, for satellite data traffic forecasting. In addition, we focus on training the data for machine learning.

What is the motivation of the project?
In the future, we may have access to the Internet and communication anytime and anywhere in the world. This is expected for the next generation of satellite communication. This will be possible due to non-terrestrial networks and satellite communication. Our vision for the future is that by integrating terrestrial and non-terrestrial networks, we can provide ubiquitous coverage of communication services.

How does this project shape the future?

Non-terrestrial networks comprise satellites providing connectivity to deliver equal service quality across the world. By introducing such networks, we can address different problems, and the most prominent example is the digital divide. At the same time, we must tackle new challenges which we want to solve utilising AI: For instance, traditional techniques will not be enough to guarantee full reconfigurability of our satellites. Machine learning models give us the opportunity to obtain this reconfigurability.

What are the solutions in the project?
For example, as satellite data traffic changes over time, we can see that there are some specific areas with wasted resources, and some with insufficient resources. AI can help us in these cases to develop a resource allocation model based on machine learning. With AI, we can modify resources, such as power and bandwidth. This increases the capacity during very high traffic demand and reduces it during low traffic demand. To achieve this in the case of a network of many satellites, we need to first know where and when congestion is expected to occur. AI provides us with a tool for traffic forecasting models. As another use case, if demand increases in a service area on earth, we will have to launch more and more satellites. In this case, we have more capacity, but the satellites may interfere with each other. And again, AI gives us the opportunity to manage these interferences.

What inspired you to work in research at SnT?
What inspired me to work in research at SnT was its dynamic and multicultural environment, which has significantly influenced the research landscape, especially in satellite communications. The presence of esteemed researchers such as Dr. Eva Lagunas and Dr. Symeon Chatzinotas, who are not only pioneers in their field, but were also pivotal figures in my doctoral thesis, profoundly influenced my decision. The opportunity to join SnT represented more than just a professional change; it was a chance to be part of a community at the forefront. Since joining SnT, I have experienced tremendous personal and professional growth, which has allowed me to deepen my knowledge, participate in cutting-edge projects, and contribute to ground-breaking research.

What are your future plans?
My future plans are firmly rooted in the advancement of the field of artificial intelligence applications within satellite communications. I am particularly excited about the potential of integrating neuromorphic computing to design communication systems that are not only more efficient but also sustainable. Regarding my career path, I am committed to academia. My goal is to become a leading researcher in my field, known for pioneering work that bridges theoretical research and practical applications. I aim to create impactful knowledge that not only benefits the scientific community, but also has tangible implications for industry and society, especially in Luxembourg.

About Flor: Flor Ortiz received the B.S. degree in telecommunications engineering and the M.S. degree in electrical engineering-telecommunications from the Universidad Nacional Autónoma de México (UNAM), Mexico City, Mexico, in 2015 and 2016, respectively, and the Ph.D. degree in telecommunication engineering from Universidad Politécnica de Madrid (UPM), Madrid, Spain, in September 2021. In 2021, she joined as a Research Associate with the Interdisciplinary Centre for Security, Reliability, and Trust (SnT), University of Luxembourg. Her research interests include implementing cutting-edge machine learning techniques, including continual learning and neuromorphic computing for operations in satellite communications systems.




While there is no 100% guarantee that future neuromorphic research at Uni Luxembourg will continue to involve Akida, I doubt the SnT SIGCOM research group would have splurged US$ 9,995 on a Brainchip Shuttle PC Dev Kit, if they hadn’t been serious about utilising it intensively… 🇱🇺 🛰

New job listing shared on LinkedIn by Flor Ortiz from Uni Luxembourg’s SnT SigCom, seeking “a highly motivated and skilled Research and Development Specialist focused on software development to join the BrainSat project. This project aims to revolutionize satellite communications by leveraging brain-inspired neuromorphic computing techniques. The successful candidate will be responsible for developing and optimizing software solutions, including spiking neural network (SNN) models, for on-board satellite communication systems.”

450B959C-E4CA-410A-B0E7-4B90ECF4FB5D.jpeg




AB8B4972-CEA3-4952-B774-38BD15356589.jpeg
 
  • Like
  • Fire
Reactions: 17 users

IloveLamp

Top 20
Latest paper just released atoday from Gregor Lenz at Neurobus and Doug McLelland at Brainchip.


Low-power Ship Detection in Satellite Images Using Neuromorphic Hardware​

Gregor LenzCorresponding author. E-Mail: gregor@neurobus.space Neurobus, Toulouse, FranceDouglas McLellandBrainChip, Toulouse, France

Abstract​

Transmitting Earth observation image data from satellites to ground stations incurs significant costs in terms of power and bandwidth. For maritime ship detection, on-board data processing can identify ships and reduce the amount of data sent to the ground. However, most images captured on board contain only bodies of water or land, with the Airbus Ship Detection dataset showing only 22.1% of images containing ships. We designed a low-power, two-stage system to optimize performance instead of relying on a single complex model. The first stage is a lightweight binary classifier that acts as a gating mechanism to detect the presence of ships. This stage runs on Brainchip’s Akida 1.0, which leverages activation sparsity to minimize dynamic power consumption. The second stage employs a YOLOv5 object detection model to identify the location and size of ships. This approach achieves a mean Average Precision (mAP) of 76.9%, which increases to 79.3% when evaluated solely on images containing ships, by reducing false positives. Additionally, we calculated that evaluating the full validation set on a NVIDIA Jetson Nano device requires 111.4 kJ of energy. Our two-stage system reduces this energy consumption to 27.3 kJ, which is less than a fourth, demonstrating the efficiency of a heterogeneous computing system.
\makeCustomtitle

1Introduction​

Ship detection from satellite imagery is a critical application within the field of remote sensing, offering significant benefits for maritime safety, traffic monitoring, and environmental protection. The vast amount of data generated by satellite imagery cannot all be treated on the ground in data centers, as the downlinking of image data from a satellite is a costly process in terms of power and bandwidth.
Refer to caption
Figure 1:Data flow chart of our system.
To help satellites identify the most relevant data to downlink and alleviate processing on the ground, recent years have seen the emergence of edge artificial intelligence (AI) applications for Earth observation [xu2022lite, zhang2020ls, xu2021board, ghosh2021board, yao2019board, alghazo2021maritime, vstepec2019automated]. By sifting through the data on-board the satellite, we can discard a large number of irrelevant images and focus on the relevant information. Because satellites are subject to extreme constraints in size, weight and power, energy-efficient AI systems are crucial. In response to these demands, our research focuses on using low-power neuromorphic chips for ship detection tasks in satellite images. Neuromorphic computing, inspired by the neural structure of the human brain, offers a promising avenue for processing data with remarkable energy efficiency. The Airbus Ship Detection challenge [al2021airbus] on Kaggle aimed to identify the best object detection models. A post-challenge analysis [faudi2023detecting] revealed that a binary classification pre-processing stage was crucial in winning the challenge, as it reduced the rates of false positives and therefore boosted the relevant segmentation score. We introduce a ship detection system that combines a binary classifier with a powerful downstream object detection model. The first stage is implemented on a state-of-the-art neuromorphic chip and determines the presence of ships. Images identified as containing ships are then processed by a more complex detection model in the second stage, which can be run on more flexible hardware. Our work showcases a heterogeneous computing pipeline for a complex real-world task, combining the low-power efficiency of neuromorphic computing with the increased accuracy of a more complex model.
Refer to caption
Figure 2:The first two images are examples of 22% of annotated samples. The second two images are examples of the majority of images that do not contain ships but only clouds, water or land.

2Dataset​

The Airbus Ship Detection dataset [airbus_ship_detection_2018] contains 192k satellite images, of which 22.1% contain annotated bounding boxes for a single ship class. Key metrics of the dataset are described in Table 1. As can be seen in the sample images in Figure 2, a large part of the overall pixel space captures relatively homogenuous parts such as open water or clouds. We chose this dataset as it is part of the European Space Agency’s (ESA) On-Board Processing Benchmark suite for machine learning applications [obpmark], with the goal in mind to test and compare a variety of edge computing hardware platforms for the most common ML tasks related to space applications. The annotated ship bounding boxes have diagonals that vary from 1 to 380 pixels in length, and 48.3% of bounding boxes have diagonals of 40 pixels or shorter. Given that the images are 768×768px in size, this makes it a challenging dataset, as the model needs to be able to detect ships of a large variety of sizes. Since on Kaggle there are only annotations for the training set available, we used a random 80/20 split for training and validation, similarly to Huang et al [huang2020fast]. For our binary classifier, we downsized all images to 256×256px, to be compatible with the input resolution of Akida 1.0, and labeled the images as 1 if they contained at least one bounding box of any size, otherwise 0. For our detection model, we downsized all images to 640×640px in size.
RGB image size768×768
Total number of images192,555
Number of training images154,044
Percentage of images that contain ships22.1%
Total number of bounding boxes81,723
Median diagonal of all bounding boxes43.19px
Ratio of bounding box to image area0.3%
Table 1:Summary of image and bounding box data for the Airbus Ship Detection Training dataset.

3Models​

For our binary classifier, we used a 866k parameter model named AkidaNet 0.5, which is loosely inspired from MobileNet [howard2017mobilenets] with alpha = 0.5. It consists of standard convolutional, separable convolutional and linear layers, to reduce the number of parameters and to be compatible with Akida 1.0 hardware. To train the network, we used binary crossentropy loss, the Adam optimizer, a cosine decay learning rate scheduler with initial rate of 0.001 and lightweight L1 regularization on all model parameters over 10 epochs. For our detection model, we trained a YOLOv5 medium [ge2021yolox] model of 25m parameters with stochastic gradient descent, a learning rate of 0.01 and 0.9 momentum, plus blurring and contrast augmentations over 25 epochs.

4Akida hardware​

Akida by Brainchip is an advanced artificial intelligence processor inspired by the neural architecture of the human brain, designed to provide high-performance AI capabilities at the edge with exceptional energy efficiency. Version 1.0 is available for purchase in the form factor of PCIe x1 as shown in Figure 3, and supports convolutional neural network architectures. Version 2.0 adds support for a variety of neural network types including RNNs and transformer architectures, but is currently only available in simulation. The Akida processor operates in an event-based mode for intermediate layer activations, which only performs computations for non-zero inputs, significantly reducing operation counts and allowing direct, CPU-free communication between nodes. Akida 1.0 supports flexible activation and weight quantization schemes of 1, 2, or 4 bit. Models are trained in Brainchip’s MetaTF, which is a lightweight wrapper around Tensorflow. In March 2024, Akida has also been sent to space for the first time [brainchip2024launch].
Refer to caption
Figure 3:AKD1000 chip on a PCB with PCIe x1 connector.

5Results​

5.1Classification accuracy​

The key metrics for our binary classification model are provided in Table 2. The trained floating point model reaches an accuracy of 97.91%, which drops to 95.75% after quantizing the weights and activations to 4 bit and the input layer weights to 8 bit. After one epoch of quantization-aware training with a tenth of the normal learning rate, the model recovers nearly its floating point accuracy, at 97.67%. Work by Alghazo et al [alghazo2021maritime] reaches an accuracy of 89.7% in the same binary classification setting, albeit on a subset of the dataset and on images that are downscaled to 100 pixels. In addition, the corresponding recall and precision metrics for our model are shown in the table. In our system we prioritize recall, because false negatives (missing a ship) have a higher cost than false positives (detecting ships where there are none), as the downstream detection model can correct for mistakes of the classifier stage. By default we obtain a recall of 94.4 and a precision of 95.07%, but by adjusting the decision threshold on the output, we bias the model to include more images at the cost of precision, obtaining a recall of 97.64% for a precision of 89.73%.
Table 2:Model performance comparison in percent. FP is floating point, 4 bit is the quantized model of 8 bit inputs, and 4 bit activations and weights. QAT is quantization-aware training for 1 epoch with reduced learning rate. Precision and recall values are given for a decision threshold of 0.5 and 0.1.
FP4 bitAfter QAT
Accuracy97.9195.7597.67
Accuracy [alghazo2021maritime]89.70--
Recall95.2385.1294.40/97.64
Precision95.3895.3295.07/89.73

5.2Performance on Akida 1.0​

The model that underwent QAT is then deployed to the Akida 1.0 reference chip, AKD1000, where the same accuracy, recall and precision are observed as in simulation. As detailed in Table 3, feeding a batch of 100 input images takes 1.168 s and consumes 440 mW of dynamic power. The dynamic energy used to process the whole batch is therefore 515 mJ, which translates to 5.15 mJ per image. The network is distributed across 51 of the 78 available neuromorphic processing cores. During our experiments, we measured 921 mW of static power usage on the AKD1000 reference chip. We note that this value is considerably reduced in later chip generations.

Table 3:Summary of performance metrics on Akida 1.0 for a batch size of 100.
Total duration (ms)1167.95
Duration per sample (ms)11.7
Throughput (fps)85.7
Total dynamic power (mW)440.8
Energy per batch (mJ)514.84
Energy per sample (mJ)5.15
Total neuromorphic processing cores51
We can further break down performance across the different layers in the model. The top plot in Figure 4 shows the latency per frame: it increases as layers are added up to layer 7, but beyond that, the later layers make almost no difference. As each layer is added, we can measure energy consumption, and estimate the per-layer contribution as the difference from the previous measurement, shown in the middle plot of Figure 4. We observe that most of the energy during inference is spent on earlier layers, even though the work required per layer is expected to be relatively constant as spatial input sizes are halved, but the number of filters doubled throughout the model. The very low energy measurements of later layers are explained by the fact that Akida is an event-based processor that exploits sparsity. When measuring the density of input events per layer as shown in the bottom plot of Figure 4, we observe that energy per layer correlates well with the event density. The very first layer receives dense images, but subsequent layers have much sparser inputs, presumably due to a lot of input pixels that are not ships, which in turn reduces the activation of filters that encode ship features. We observe an average event density of 29.3% over all layers including input, reaching less than 5% in later layers. This level of sparsity is achieved through the combination of ReLU activation functions and L1 regularization on activations during training.
Refer to caption
Figure 4:Layer-wise statistics per sample image for inference of binary classifier on Akida v1, measured for a batch of 100 images over 10 repeats.

5.3Detection model performance​

For our subsequent stage, our YOLOv5 model of 25m parameters achieves 76.9% mAP when evaluated on the full validation set containing both ship and non-ship images. When we evaluate the same model on the subset of the validation set that just contains ships, the mAP jumps to 79.3%, as the false positives are reduced considerably. That means that our classifier stage already has a beneficial influence on the detection performance of the downstream model. Table 4 provides an overview of detection performance in the literature. Machado et al. [machado2022estimating] provide measurements for different YOLOv5 models on the NVIDIA Jetson Nano series, a hardware platform designed for edge computing. For the YOLOv5 medium model, the authors report an energy consumption of 0.804 mWh per frame and a throughput of 2.7 frames per second at input resolution of 640 pixels, which translates to a power consumption of 7.81 W. The energy necessary to process the full validation dataset of 38,511 images on a Jetson is therefore 38511×7.81/2.7=111.4 kJ. For our proposed two-stage system, we calculate the total energy as the sum of processing the full validation set on Akida plus processing the identified ship images on the Jetson device. Akida has a power consumption of 0.921+0.44=1.361 W at a throughput of 85.7 images/s. With a recall of 97.64% and a precision of 89.73%, 9243 images, equal to 24.03% of the validation data, are classified to contain ships, in contrast to the actual 22.1%. We therefore obtain an overall energy consumption of 38511×1.361/85.7+9243×7.81/2.7=27.3 kJ. Our proposed system uses 4.07 times less energy to evaluate this specific dataset.
Model | mAP (%) | Energy (kJ)
YOLOv3 [patel2022deep] | 49 | -
YOLOv4 [patel2022deep] | 61 | -
YOLOv5 [patel2022deep] | 65 | -
Faster RCNN [al2021airbus] | 80 | -
YOLOv5 | 76.9 | 111.4
AkidaNet + YOLOv5 | 79.3 | 27.3
Table 4: Mean average precision and energy consumption evaluated on the Airbus ship detection dataset.
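
For anyone who wants to check the arithmetic behind Table 4, here is a minimal sketch that reproduces the energy comparison from the quoted figures; the variable names are mine, only the numbers come from the text:

# Energy comparison using the figures quoted above.
n_images     = 38511        # full validation set
jetson_fps   = 2.7          # YOLOv5m throughput reported for the Jetson
jetson_power = 7.81         # W, derived in the text from 0.804 mWh/frame at 2.7 fps
akida_power  = 0.921 + 0.44 # W, static + dynamic
akida_fps    = 85.7
ship_images  = 9243         # tiles the classifier flags as containing ships

jetson_only_kj = n_images * jetson_power / jetson_fps / 1000
two_stage_kj   = (n_images * akida_power / akida_fps
                  + ship_images * jetson_power / jetson_fps) / 1000

print(f"Jetson only:    {jetson_only_kj:.1f} kJ")               # ~111.4 kJ
print(f"Akida + Jetson: {two_stage_kj:.1f} kJ")                 # ~27.3 kJ
print(f"Saving factor:  {jetson_only_kj / two_stage_kj:.2f}x")  # ~4.07x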

6 Discussion

For edge computing tasks, it is common to have a small gating model which activates more costly downstream processing only when necessary. As only 22.1% of images in the Airbus detection dataset contain ships, a two-stage processing pipeline can leverage different model and hardware architectures to optimize the overall system. We show that our classifier stage running on Akida benefits from a high degree of sparsity when processing the vast amounts of homogeneous bodies of water, clouds or land in satellite images, where only 0.3% of the pixels are objects of interest. We hypothesise that many filter maps that encode ship features are not activated most of the time. This has a direct impact on dynamic power consumption and latency during inference, due to Akida's event-based processing nature. In addition, we show that a two-stage system actually increases the mAP of the downstream model by reducing false positive rates, as is also mentioned in the post-challenge analysis of the Airbus Kaggle challenge [faudi2023detecting]. The energy consumption of the hybrid system is less than a fourth of that of running the detection model on the full dataset, with more room for improvement when using Akida v2, which will reduce both static and dynamic power consumption and allow the deployment of more complex models that are likely to achieve higher recall rates. The main limitation of our system is its increased size, as it has to fit two different accelerators instead of a single one. But by combining the strengths of different hardware platforms, we can optimize overall performance, which is critical for edge computing applications in space.
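
The gating pattern described in this discussion boils down to something like the following sketch; classify() and detect() are placeholder callables standing in for the Akida classifier and the Jetson-hosted YOLOv5 model, not a real API:

def process_tile(image, classify, detect, threshold=0.5):
    """Two-stage gating: run the cheap classifier first and only invoke
    the costly detector when a ship is likely present in the tile."""
    ship_probability = classify(image)   # low-power first stage (e.g. AkidaNet)
    if ship_probability < threshold:
        return []                        # tiles without ships stop here
    return detect(image)                 # bounding boxes from the second stage

Since only about 22% of tiles contain ships, roughly three quarters of the calls return after the cheap first stage, which is where the energy saving comes from.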
 
  • Like
  • Fire
  • Love
Reactions: 33 users

davidfitz

Regular
With a current buy/sell ratio of 2:1 and the 'buzz' around TENNs you would have to expect that we are poised for a good run. Add a price sensitive announcement and who knows what could happen ;)
 
  • Like
  • Fire
  • Thinking
Reactions: 28 users

Iseki

Regular
No, Chris Jones started working for Meta in June 2022; see my post and his LinkedIn profile.




That’s exactly why I wrote the following:



But has there ever been any indication that Meta was indeed an EAP customer at the time? Any announcement as with other EAP customers? If not, we shouldn’t simply assume so.

But for the sake of the discussion, let’s assume for a minute it was indeed the case.


In practice, though, the overlap between Chris Jones's time at Meta and his introduction to BrainChip and TENNs would have been much shorter, given what he said on May 23, 2024:

“So, about a year ago, uh, I was at Meta, I was in their AI Infrastructure Group, and on an almost daily basis I would see new neural network architectures.

So, when I was introduced to BrainChip, I didn’t think I would really be impressed by anything a small team was gonna develop, erm. They told me about TENNs, I was a little bit skeptical to be honest at first. As I started getting to understand the benchmarks and a little bit more of the math and how it worked, I started to get pretty excited by what they had.”



Saying “about a year ago” on May 23, 2024 could mean July, June, May, April, possibly even March 2023. He certainly wouldn’t have put it that way if he and his colleagues had already been introduced to BrainChip in let’s say October 2022. And since he appears to have been laid off in mid-April, the potential time window shrinks to a maximum of six weeks, I’d say. That’s far from the nine months you claimed.

Your use of the participle “outplaced” instead of “laid off” implies that Meta helped him find his current job? Again, there is no indication of that at all when you read his LinkedIn post, especially the last paragraph:

View attachment 65061


Mind you, I did not say there is no way that Chris Jones could have found out about TENNs while still working for Meta, but to me his words are certainly not conclusive evidence, the way FF presented them. They can very well be interpreted differently, especially with the background knowledge that he was laid off more than a year ago (which he didn’t mention in the video). I had already taken notice of that a while ago, when I had had a look at his LinkedIn profile after learning that he would be the one giving the talk Nandan Nayampally was supposed to have presented. (This was even before we found out from the Quarterly Investor podcast that Nandan and Rob had all of a sudden left the company.)

So no, mine is not a tendentious reading and I am not shooting myself in the foot either, if that is what you meant to say. My argument is well-founded. I don’t exclude the possibility that Chris Jones got introduced to BrainChip while still working for Meta, but I believe it is the unlikelier sequence of events for the reasons stated.

Also: Why would he have asked his LinkedIn network for assistance in finding a new job in his April 2023 LinkedIn post, yet only started working for BrainChip in October 2023? If he had already been that excited about our company prior to being laid off at Meta, they might even have been able to offer him a new position from August onwards, a smooth transition from Meta to BrainChip without missing a paycheck. Of course I have no idea whether it was a deliberate decision by Chris Jones to pick October as the start date for his new job (maybe he wanted to spend quality time with his family between jobs, go on a long vacation, rest and recharge, renovate the house, or perhaps he was suffering from an illness, was taking care of elderly relatives or was grieving for a loved one, etc.), or whether there was simply no earlier vacancy for his position at BrainChip. But the two-month gap between jobs could just as well signify that he didn't yet know about BrainChip's offerings by the time he started looking for a new job, and that they were possibly not even his first choice.

Ultimately, everything - and that includes FF’s reading - is speculation, unless we hear it from the horse’s mouth. Can we at least agree on that?
I definitely agree. The notion that Meta introduced ML staff to TENNs only to terminate them shortly after would not be a good look for us.
 

Guzzi62

Regular
With a current buy/sell ratio of 2:1 and the 'buzz' around TENNs you would have to expect that we are poised for a good run. Add a price sensitive announcement and who knows what could happen ;)
Akida by Brainchip is an advanced artificial intelligence processor inspired by the neural architecture of the human brain, designed to provide high-performance AI capabilities at the edge with exceptional energy efficiency. Version 1.0 is available for purchase in the form factor of PCIe x1 as shown in Figure 3, and supports convolutional neural network architectures. Version 2.0 adds support for a variety of neural network types including RNNs and transformer architectures, but is currently only available in simulation.

You apparently still can't get your hands on the 2.0, launched in March 2023! Hmm, hopefully one day, eh!

The 6-month price chart looks very depressing: a very short spike to 49 cents and then a slow bleed from then on.

I'm not sure the TENNs whitepaper will move the needle (I doubt it), but hopefully people in the industry will be very impressed.

It's above my pay grade, I must admit.

We need an IP deal or two to move the needle, that I know.

https://brainchip.com/wp-content/uploads/2023/06/TENNs_Whitepaper_Final.pdf
 
  • Fire
  • Like
  • Thinking
Reactions: 6 users

7für7

Regular
Hey @DJM263, do you recall what press release this was from?
Cheers
Don't be fooled, it's a simple saying... a kind of quote... nothing to do with Apple Inc...
 
  • Like
Reactions: 1 users