BRN Discussion Ongoing

Could someone tell me, are we still partners with Socionext?

BrainChip and Socionext Provide a New Low-Power Artificial Intelligence Platform for AI Edge Applications.

It's just that Socionext was down nearly 23% on the Tokyo Stock Exchange today, and I was wondering why.

View attachment 39360
Here is an article explaining why:


Haven't looked into why three of the top shareholders have sold, but it looks like it was planned and doesn't seem to be a real issue for the long term.
 
  • Like
  • Thinking
  • Sad
Reactions: 8 users
Here is an article explaining why:


Haven't looked into why three of the top shareholders have sold, but it looks like it was planned and doesn't seem to be a real issue for the long term.
Looks like Socionext has also had an impact on other major chip-related firms, such as Intel, Renesas and Texas Instruments, which have also declined:

 
  • Like
  • Fire
Reactions: 10 users

Tothemoon24

Top 20





Written by BIS Research | Jul 4, 2023 8:23:45 AM
The global semiconductor industry plays a pivotal role in powering modern technology, ranging from smartphones and computers to automobiles and advanced medical devices.
However, recent disruptions and challenges in the semiconductor supply chain have highlighted vulnerabilities in the global electronics ecosystem. In response, global economies are intensifying their efforts to innovate and strengthen semiconductor manufacturing technologies, aiming to counter supply chain challenges and ensure a reliable and resilient electronics industry.
This article explores the recent global initiatives boosting semiconductor manufacturing and supply chain resilience.

1. Successful Implementation of IoT in Chip Manufacturing
Semiconductor manufacturers are actively addressing the unique requirements of Internet of Things (IoT) devices, such as smaller sizes, diverse connectivity options, and lower power consumption. They are focusing on the development of sensors and integrated circuits to meet these demands. Flexible multifunctional chipsets are being developed that incorporate more circuits. These chipsets combine microcontrollers and analytics, enhancing the resilience of IoT devices and bringing computing closer to the source. The implementation of IoT in chip manufacturing brings financial benefits through continuous process and asset monitoring and also improves visibility into production operations.
For instance, Taiwanese startup IMOSTAR offers multi-band IoT chips that integrate multiple low-power IoT radios into a single chip, resulting in space and cost savings. These chips feature compact and versatile monolithic antennas, expanding the application range of IoT devices and simplifying their manufacturing process. Similarly, Chinese startup Nano-Core Chip specializes in artificial intelligence of things (AIoT) chips. Their chips leverage event-driven architecture, dynamic charge domain signal chains, closed-loop circuit topology, and memory-computing fusion simulation. These features enable high energy efficiency and a small chip area, supporting AI computations with low latency and high storage density.

2. Integration of AI in Manufacturing Workflows
Semiconductor companies are integrating AI into manufacturing workflows to optimize operations and enhance product quality.

For instance, South Korean startup Rebellions specializes in domain-specific AI processors that bridge silicon architectures and deep learning algorithms. By modifying processor architecture using silicon kernels, they accelerate machine learning computations, improve performance, and reduce deployment costs. Meanwhile, US-based startup Gauss Labs offers AI-based solutions for semiconductor manufacturing. Its solutions utilize machine sensor measurements and metrology data to predict factory anomalies and provide guidance to engineers, enabling AI-driven precision manufacturing and minimizing disruptions in the process.

3. Development of Neuromorphic Chips
Researchers from RMIT University (the Royal Melbourne Institute of Technology) in Australia have achieved significant breakthroughs in the development of neuromorphic chips. They developed a single-chip device using doped indium oxide. This device mimics human vision and memory, capturing, processing, and storing visual information akin to the human eye, optic nerve, and memory system.
The neuromorphic chip enables ultra-fast decision-making, eliminates the need for energy-intensive computation, and facilitates real-time processing. Through longer memory retention without frequent electrical signals, this advancement reduces energy consumption while enhancing performance. This chip can find applications in bionic vision, autonomous operations, food shelf-life assessment, and advanced forensics.

4. International Collaborations and Partnerships Countering Supply Chain Challenges
The complexity and global nature of the industry require close cooperation among nations, companies, and research institutions. Through strategic alliances and information sharing, countries can pool resources, expertise, and technology to address issues such as raw material shortages, production bottlenecks, and logistics disruptions.
For instance, in March 2023, India and the U.S. joined forces through the India-U.S. initiative on Critical and Emerging Technologies (iCET) to reshape global semiconductor supply chains. The iCET focuses on collaboration in areas such as AI, quantum computing, semiconductors, telecommunications, defense, and space, aiming to address regulatory and supply chain barriers as well as export control issues.

Conclusion

The future of the semiconductor industry holds great promise as advancements in technology address current vulnerabilities, paving the way for transformative breakthroughs and driving economic growth.
The advent of artificial intelligence, 5G, the Internet of Things, and autonomous vehicles will drive increased demand for semiconductors. Emerging areas such as quantum computing and neuromorphic engineering promise to revolutionize the industry further, shaping a world of limitless possibilities.
Interested to know more about the growing technologies in your industry vertical? Get the latest market studies and insights from BIS Research. Connect with us at
 
  • Like
  • Fire
Reactions: 8 users

Jchandel

Regular
A new video from Edge AI and Vision Alliance

Related to the video posted earlier but now posted by Edge Impulse on their LinkedIn page:
 

  • Like
  • Fire
  • Love
Reactions: 55 users
Another item that may or may not have been posted already, but I haven't seen it myself.

There was a conference last November in Sydney and I see a team from BRN did a presso, as below.

Haven't tried to search the paper yet.

Appears to be based on AkidaNet models for agri crop/weed ID at the edge.



View attachment 27769

View attachment 27770
Just googling for dots and this presso / paper I posted about back in Jan popped up.

Couldn't find it back then but it's HERE if anyone wanted a read.

@Diogenese thoughts whenever if you have time or anything of interest in it?

TIA
 
  • Like
  • Love
Reactions: 9 users

Diogenese

Top 20
Just googling for dots and this presso / paper I posted about back in Jan popped up.

Couldn't find it back then but it's HERE if anyone wanted a read.

@Diogenese thoughts whenever if you have time or anything of interest in it?

TIA
Hi Fmf,

The paper discusses a model (AkidaNet) used for distinguishing weed from crop and describes how it was adapted to run on Akida 1000 using CNN2SNN.
The model has several layers, the first using 8-bit weights and the remaining layers running on 4-bit weights and activations. This is interesting in that the recently published BRN patent application added ALUs to the NPU, presumably to handle 8 bits more efficiently, so these results would not reflect the performance of Akida 2, or whatever version has the ALUs.


C. AkidaNet

In this paper, we propose a new lightweight and energy-efficient model and convert it to a Spiking Neural Network (SNN) for implementation on a NSoC hardware platform for weed identification. AkidaNet is built with reference to the well-known MobileNetV1 and VGG-11 CNN architectures. The full architecture of AkidaNet has been developed to enhance power efficiency and reduce computational latency, as shown in Fig. 3. Specifically, the model begins with four 3×3 regular convolutional layers (Conv2D), followed by eleven depthwise separable convolutional layers (SeparableConv2D), and ends with a softmax activation with 4 outputs for crop/weed classification. As can be observed in Fig. 4, the differences among the standard convolution block, the MobileNetV1 DSC block, and the AkidaNet DSC block are illustrated in detail. The AkidaNet structure allows the NSoC platform to process the AkidaNet DSC blocks more efficiently with less memory. Furthermore, it is important to note that instead of using max pooling operations in this model, a stride of 2 is used for specific layers, including Conv2D 0, Conv2D 2, SeparableConv2D 4, SeparableConv2D 6 and SeparableConv2D 12, to reduce the resolution of the outputs of all standard convolution and AkidaNet DSC blocks. In addition, all standard convolution and depthwise separable convolution operations have a kernel size of 3×3 and zero padding. Finally, the classification top layer is kept as a separable convolutional layer instead of a dense layer to reduce memory usage and allow AkidaNet's architecture to utilize the NSoC hardware more efficiently. With regard to the AKD1000 NSoC hardware version, it currently supports most layers found in feed-forward network architectures, such as Dense, Standard Convolutional, Depthwise Separable Convolutional, Batch Normalization, and MaxPooling layers. AkidaNet's optimized architecture came from a limited architecture search that measured the accuracy, power, and latency of each model variation deployed on the NSoC. Additionally, we also investigated the impacts of max pooling and stride-2 convolution on reducing spatial dimensions. As a result, the stride-2 option gave better performance on power and accuracy.

The AkidaNet model can be adjusted to further reduce computational cost by scaling the number of output channels in each layer with a hyper-parameter α, or the width multiplier, inspired by the original MobileNet paper [43]. Specifically, the range of α is from 0 to 1, with typical settings of 0.25, 0.5, 0.75 and 1. For example, if α = 1, it will be the baseline AkidaNet. If α = 0.5, it will generate a model with only half the output channels used in each layer. Decreasing the output channels results in a decrease in the number of model parameters, which reduces the final size of the model. Therefore, changing the width multiplier α in AkidaNet allows one to meet the resource constraints of the NSoC platform for real-time weed recognition in practice, with a trade-off in accuracy and model size.
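(For anyone who wants to poke at this, here's a rough Keras sketch of the architecture as described above. The layer counts, strides and the α multiplier come from the paper; the channel widths aren't given in the excerpt, so the MobileNetV1-style progression below is purely my assumption.)

```python
from tensorflow.keras import layers, models

def akidanet(alpha=1.0, input_shape=(224, 224, 3), num_classes=4):
    def c(ch):
        # Apply the width multiplier alpha to a base channel count.
        return max(8, int(ch * alpha))

    inp = layers.Input(shape=input_shape)
    x = inp

    # Four standard 3x3 convolutions (layers 0-3); layers 0 and 2 use stride 2.
    for i, ch in enumerate([32, 32, 64, 64]):  # widths assumed, not from the paper
        stride = 2 if i in (0, 2) else 1
        x = layers.Conv2D(c(ch), 3, strides=stride, padding="same")(x)
        x = layers.BatchNormalization()(x)
        x = layers.ReLU()(x)

    # Eleven 3x3 depthwise-separable convolutions (layers 4-14);
    # layers 4, 6 and 12 use stride 2, per the paper's numbering.
    sep_channels = [128, 128, 256, 256, 512, 512, 512, 512, 512, 1024, 1024]  # assumed
    for i, ch in enumerate(sep_channels, start=4):
        stride = 2 if i in (4, 6, 12) else 1
        x = layers.SeparableConv2D(c(ch), 3, strides=stride, padding="same")(x)
        x = layers.BatchNormalization()(x)
        x = layers.ReLU()(x)

    # Classification top kept as a separable convolution rather than a Dense layer.
    x = layers.GlobalAveragePooling2D(keepdims=True)(x)
    x = layers.SeparableConv2D(num_classes, 3, padding="same")(x)
    x = layers.Flatten()(x)
    out = layers.Softmax()(x)
    return models.Model(inp, out)

model = akidanet(alpha=0.5)  # half the output channels in every layer
model.summary()
```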



D. AkidaNet to Spiking Neural Network Conversion

The goal of the AkidaNet design is to allow target applications to achieve low latency and power consumption when implemented on the AKD1000 NSoC hardware platform. To optimize the performance of this model on edge devices, a potential solution mentioned in this paper is the use of Spiking Neural Networks (SNNs). According to [50, 51, 52], before converting the architecture of AkidaNet to an SNN for execution on the AKD1000 NSoC, the model needs to be quantized to use 4-bit activations and 4-bit parameters. The quantization process plays an indispensable role in significantly reducing model size and power consumption when compared with CPU or GPU implementations. After that, the quantized AkidaNet model is converted to an SNN in a process called CNN2SNN, as shown in Fig. 5. In particular, all the communication between neurons in the SNN takes the form of "spikes" or "events" corresponding to binary impulses that are generated when a neuron crosses a threshold level of activation. If none of the neurons on a Neural Processing Unit (NPU) cross the threshold, it generates no output. Hence, this feature is the key to the efficiency of SNNs and contributes to further minimizing computational cost.
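(The threshold behaviour in that last paragraph is easy to picture with a toy example. This is just a conceptual NumPy illustration with made-up numbers, not how Akida actually implements events:)

```python
import numpy as np

# A neuron only emits a binary event when its integrated input crosses a
# threshold; neurons that stay below it produce no output at all, so they
# trigger no work in the next layer.
threshold = 1.0                                    # made-up value
potentials = np.array([0.2, 1.7, 0.0, 3.4, 0.9])   # made-up neuron inputs
events = potentials >= threshold
print(events)                                      # [False  True False  True False]
print(events.sum(), "of", events.size, "neurons fire")  # 2 of 5
```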



E. Evaluation of AkidaNet and MobileNetV1 models on ImageNet-1K dataset

We trained both MobileNetV1 and AkidaNet models from scratch on the ImageNet-1K classification dataset [54] using TensorFlow/Keras for 90 epochs on a single NVIDIA GeForce RTX 2080 Ti GPU with a batch size of 128 images. We used Stochastic Gradient Descent (SGD) with a momentum optimizer, and the learning rate was set to 0.1 for the first 10 epochs and then decreased to 0.0001 with an exponential decay. We also used the L2 weight regularizer in Conv2D kernels and in pointwise kernels of the SeparableConv2D layers. In addition, we applied basic data augmentation (i.e., random resized cropping and horizontal flipping). We trained at a resolution of 160x160 and evaluated at a resolution of 224x224, based on [55]. Finally, the last step is Akida conversion and evaluation. The model was quantized to 4-bit weights and activations, except for the first convolutional layer's weights, which were quantized to 8 bits. After that, the model was tuned for an additional 10 epochs with an initial learning rate of 0.0001 that was kept constant for 2 epochs and then decreased to 1e-8.
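(A minimal Keras sketch of that training schedule, for the curious. The paper doesn't give the exact decay rate or the momentum value, so both are assumptions here; the decay is set so the rate falls from 0.1 towards 1e-4 by the final epoch:)

```python
import tensorflow as tf

WARM_EPOCHS, TOTAL_EPOCHS = 10, 90
LR0, LR_FINAL = 0.1, 1e-4

def lr_schedule(epoch, lr=None):
    # 0.1 for the first 10 epochs, then exponential decay towards 1e-4.
    if epoch < WARM_EPOCHS:
        return LR0
    frac = (epoch - WARM_EPOCHS) / (TOTAL_EPOCHS - WARM_EPOCHS)
    return LR0 * (LR_FINAL / LR0) ** frac

optimizer = tf.keras.optimizers.SGD(learning_rate=LR0, momentum=0.9)  # momentum assumed
scheduler = tf.keras.callbacks.LearningRateScheduler(lr_schedule)
# model.compile(optimizer=optimizer, loss="categorical_crossentropy")
# model.fit(train_ds, epochs=TOTAL_EPOCHS, batch_size=128, callbacks=[scheduler])
```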



F. Evaluation of the AkidaNet model on the bccr-segset crop/weed image dataset

We performed extensive experiments to evaluate the performance of our proposed method and some baseline approaches. All experiments were executed on an Ubuntu machine with an Intel Core 3.7 GHz i7 CPU, 32 GB RAM and a GeForce GTX 1080 Ti GPU. Our network was implemented in Python using TensorFlow and the MetaTF ML framework [50]. We compared our model against the following models: ResNet-50, VGG-16, and InceptionV3. For all experiments, we used 80% (24,000 images) of the dataset for training the model and 20% (6,000 images) for testing. In order to have a fair comparison, all models were trained with an Adam optimizer, 10 epochs, a batch size of 32, and a learning rate of 0.0001. The input size of VGG-16, ResNet-50, AkidaNet α = 0.25, and AkidaNet α = 0.5 was 224x224, while the input size of InceptionV3 was 299x299. Moreover, all CNN models, including the AkidaNet variants, were pre-trained on the ImageNet-1K dataset and then trained on our bccr-segset dataset. In the next steps, we quantized the Keras AkidaNet model and then converted the resulting quantized model to an SNN for execution on the AKD1000 NSoC. Specifically, AkidaNet models with α = 0.25 and α = 0.5 were quantized by using the cnn2snn.quantize function in the MetaTF framework [50]. The first layer's weights were quantized to 8 bits, while the remaining model weights and activations were quantized to 4 bits in order to meet the requirements of the NSoC hardware. After having obtained a quantized model with satisfactory performance, the Keras quantized model was converted into the SNN model by using the cnn2snn.convert function [50], which returns a model in an Akida-compatible format. This model file format can be run efficiently on the NSoC hardware device in inference mode.
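(The quantize-then-convert flow in that last paragraph looks roughly like this. The cnn2snn.quantize and cnn2snn.convert names come straight from the paper; the keyword arguments are my recollection of the MetaTF API, so check the current docs before relying on them:)

```python
from cnn2snn import quantize, convert  # BrainChip's MetaTF framework

# keras_model is a trained float AkidaNet, e.g. from the earlier sketch.
# Quantize: 8-bit weights for the first layer, 4-bit weights and
# activations everywhere else, as required by the AKD1000 NSoC.
quantized_model = quantize(
    keras_model,
    input_weight_quantization=8,  # first layer's weights
    weight_quantization=4,        # remaining weights
    activ_quantization=4,         # activations
)

# (Optionally fine-tune quantized_model for a few epochs here to recover
# any accuracy lost in quantization, as section E describes.)

# Convert the quantized Keras model to an Akida-compatible SNN that can
# run on the AKD1000 in inference mode.
akida_model = convert(quantized_model)
```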
 
  • Like
  • Love
  • Wow
Reactions: 36 users

cosors

👀
Totally off topic.
Who opens the 3000th page with her/his post?


"Hello Fellow Chippers,

Thanks to ZeeBot for creating this amazing new forum so we can all have civilized thoughtful BrainChip discussions.

I'll get us started, 2022 has already exceeded my expectations & we are only in the first few days of February.

Can't wait to see what the year brings.

Love all you guys & Girls xxxx"
 
Last edited:
  • Like
  • Love
  • Fire
Reactions: 16 users

cosors

👀
Is this finally heading in the direction of MosChip?

ASIC Verification Engineer

BrainChip Hyderabad, Telangana, India

Responsibilities

- Test plan, Test bench development, execution, and debugging
- Block and Chip level- IP/ASIC/SOC/CPU/AMS Verification
- SystemVerilog with Testbench methodologies UVM
- Verilog, VHDL, C++, Vera, e, System C
- Protocols: PCIe / USB / DDR / Ethernet / ARM 💪/CNN/RNN
- Scripting: Perl, Tcl, Unix scripting

https://www.linkedin.com/jobs/view/...at-brainchip-3650966374/?originalSubdomain=in
 
Last edited:
  • Like
  • Fire
  • Love
Reactions: 18 users

Esq.111

Fascinatingly Intuitive.
  • Like
  • Thinking
  • Fire
Reactions: 14 users

cosors

👀
@TechGirl, where are you? It's your turn! ;)
 
  • Love
  • Like
  • Fire
Reactions: 5 users

AARONASX

Holding onto what I've got
 
  • Like
  • Love
  • Fire
Reactions: 32 users

Tothemoon24

Top 20
Click on the above for the full read & an insightful interview

The Air Force Research Laboratory (AFRL) posted the new video covering aspects of the Autonomous Aircraft Experimentation (AAx) initiative on the Defense Visual Information Distribution Service (DVIDS) website earlier today. AAx's main focus is on testing and refining artificial intelligence and machine learning-driven autonomous capabilities for use on future advanced uncrewed aircraft and helping to move those technologies out of the laboratory and onto actual operational platforms.

"We are trying to figure out how to integrate artificially trained neural networks, trained in a simulation... into the real world," Bill "Evil" Gray, the chief test pilot at the Air Force's Test Pilot School, explains in the newly released video. "In this case [through AAx], integrate them into controlling an airplane."
"We need to recognize that AI [artificial intelligence] is here. It's here to stay. It's a powerful tool," Air Force Col. Tucker "Cinco" Hamilton, the service's chief of AI Test and Operations, says at another point in the footage. "Collaborative Combat Aircraft and that type of autonomy is revolutionary. And will be the future battle space."
 
  • Like
  • Fire
Reactions: 16 users

manny100

Regular
  • Like
  • Love
Reactions: 13 users

Draed

Regular
3000? Just taking a pot shot at it....🤣
 
  • Haha
  • Like
Reactions: 3 users

Worker122

Regular
  • Haha
  • Like
Reactions: 11 users

IloveLamp

Top 20
Last edited:
  • Like
  • Thinking
  • Love
Reactions: 22 users

AARONASX

Holding onto what I've got
 
  • Like
  • Love
Reactions: 24 users

TECH

Regular
Good morning,

Brainchip is once again getting "our" story out there for all interested parties to engage with now or into the future. It's a case of
"the early bird catches the worm", so don't hesitate; spread the word, believers, and check out the latest marketing video presented
by Nandan in the link below.



Cheers Tech :coffee:(y)
 
  • Like
  • Love
  • Fire
Reactions: 52 users

Damo4

Regular
But it seems you were closer 😉

Not if we all report a post on page 2999 and have it bump back ;)
*no posts were harmed (reported) in the making of this milestone*

Edit: This 3000 is mine!
 
Last edited:
  • Haha
  • Like
Reactions: 24 users