BRN Discussion Ongoing

Damo4

Regular
Yea :(

Partnerships when we have almost zero cash inflows suck; it's like making/committing future investments when you can't even pay your monthly bills lol

You're welcome to have your opinion, but you gotta spend money to make money. This is a business that can raise capital for advertising, operating expenses, technical support etc.
Strong patents, strong marketing, strong engagements and a strong ecosystem are part of the foundations.

I'm glad BrainChip cares more about the customers' opinions than those of a few vocal shareholders.
Makes it easy for some of us to just tag along for the ride.
 
  • Like
  • Fire
Reactions: 13 users

Perhaps

Regular
Ok, I know when it's time to write something positive. I'm notorious for shooting down speculation, and that can hurt sometimes.

So let's talk about Socionext.

BrainChip partnering with Socionext on ADAS solutions is a well-known fact:

[attached screenshot]



A perhaps lesser-known fact is the origin of Socionext: it's not an independent company, it belongs to Fujitsu and Panasonic.

[attached screenshot]



Both Fujitsu and Panasonic offer ADAS solutions to the car industry, so it would be only natural for solutions developed by Socionext to find their way into the portfolios of Fujitsu and Panasonic. But... don't forget about the timelines.

[attached screenshot]

[attached screenshot]

 
  • Like
  • Love
  • Fire
Reactions: 38 users

Bravo

If ARM was an arm, BRN would be its biceps💪!
Damo4 said:
Yea :(
Partnerships when we have almost 0 cash inflows sucks it's like making/committing future investments when you cant even pay ur monthly bills lol
Yes, and Apple was once 90 days from going broke and laying off its entire workforce. Look at where Apple is today.

No-one is forcing you to stick around. So if you don't like it as much as you profess, then why not shuffle on the buffalo outa here? We certainly won't be wringing our hands and begging you to stay, since you seem to trot out the same repetitive gloom-and-doom scroll ad nauseam, which can get a tad boring, babe, in case you didn't know. (y)🤭
 
  • Like
  • Fire
  • Love
Reactions: 48 users

Diogenese

Top 20
Perhaps said:
[quoted post above on Socionext's Fujitsu/Panasonic origins, with attachments]

Thanks Perhaps,

The 1000 eyes had dug out Socionext's pedigree when the "partnership?" was announced back in early 2020.

https://brainchip.com/brainchip-and...rm-for-ai-edge-applications-brainchip-230320/

While the announcement does talk about Akida on its own and in combination with Synquacer, ...

In addition to integrating BrainChip’s AI technology in an SoC, system developers and OEMs may combine BrainChip’s proprietary Akida device and Socionext’s processor to create high-speed, high-density, low-power systems to perform image and video analysis, recognition and segmentation in surveillance systems, live-streaming and other video applications.

... the announcement leaves other options open ended:

"The combination of BrainChip’s technology and Socionext’s ASIC expertise fulfills the requirements of edge applications. We look forward to working with the Socionext in commercial engagements.” (LDN)

“As a leading provider of ASICs worldwide, we are pleased to offer our customers advanced technologies driving new innovations,” said Noriaki Kubo, Corporate Executive Vice President of Socionext Inc. “The Akida family of products allows us to stay at the forefront of the burgeoning AI market. BrainChip and Socionext have successfully collaborated on the Akida IC development and together, we aim to commercialize this product family and support our increasingly diverse customer base.”

On the automotive front Socionext lists some of their achievements, many of which could use Akida:

http://www.socionext.com/en/products/customsoc/automotive/

Socionext's automotive custom SoCs are used in the following products:
LiDAR
V2X (Vehicle-to-everything) system
HMI (Human Machine Interface)
Camera (Viewing/Sensing)
Car Navigation System
OFDM (Orthogonal Frequency Division Multiplexing)
Car TV




Unfortunately, we don't get a credit on their automotive page:

[attached screenshot]


... but what are NDAs for?
 
  • Like
  • Fire
  • Haha
Reactions: 39 users

jtardif999

Regular
TECHNOLOGY

November 1, 2023 6:42 PM UTC

PC market recovery gathers pace as Intel, AMD tout potential of 'AI PC'

The earnings of Intel (INTC.O) and Advanced Micro Devices (AMD.O) have offered more evidence a recovery is gathering pace in the personal computer market, boding well for an industry that had been grappling with a supply glut after the pandemic.

Executives at both the companies talked up the stabilizing PC market on earnings calls this week and said they expected the integration of artificial intelligence to boost growth.

"The arrival of the AI PC represents an inflection point in the PC industry," said Intel CEO Pat Gelsinger. AMD boss Lisa Su said she "expected some growth going into 2024 as we think about sort of the AI PC cycle and some of the (Microsoft) Windows refresh cycles".

AI-enabled PCs refer to machines that come with advanced chips capable of running large-language models and apps powered by the technology directly on the device, instead of the cloud.

In the September quarter, AMD's PC-focused business posted its strongest growth in two years. Revenue decline at Intel's PC unit was the slowest in eight quarters.

"The PC market saw a significant pull in of demand due to all kinds of impacts of the pandemic (such as remote work)," said Justin Sumner, senior portfolio manager at Voya Investment Management, an investor in both AMD and Intel.

"We are finally starting to see a bottoming of this trend. This should lead to a typical inventory refresh and an improvement in the market."

PC makers have been trying to clear their inventory as they expect a boost in demand during the holiday season and ahead of an expected Windows update next year from Microsoft (MSFT.O).

Data from research firms such as Canalys has fanned those expectations. After a slower decline in industry-wide PC shipments in the third quarter, Canalys said it expected the market to return to growth during the highly anticipated holiday season.

It expects the adoption of AI-capable PCs to speed up from 2025 onwards, and make up around 60% of all PCs shipped in 2027.

Still, some investors see the lack of AI apps as a potential hurdle for their adoption. Microsoft is so far the only major firm to create such offerings with its genAI-powered Copilot software that became available for its Microsoft 365 enterprise customers on Wednesday.

"It is still unclear to us that there is a "killer app" which will spur this upgrade cycle," said Dave Egan, senior research analyst at Columbia Threadneedle, an investor in both AMD and Intel.


Reporting by Arsheeya Bajwa, Akash Sriram and Chavi Mehta in Bengaluru; Editing by Krishna Chandra Eluri
 
  • Like
  • Fire
  • Love
Reactions: 16 users
Wonder if these guys may become a reseller or at least stock the upcoming VVDN Akida Edge AI Box?

Recent blog on their site indicating Akida as a possible neuromorphic option in edge boxes.

They have a small online shop with several Aaeon AI Edge Boxes using Jetson at the mo.

I still feel BRN "engaged" VVDN to create the box for the market as a real-world POC.

From what I've seen, VVDN create products or partner with companies like Ambarella, NVIDIA, TI, Intel, NXP, Kinara, AMD and even Renesas (camera).




Edge AI Nodes

Edge AI Box Inference Options

An important element to research and evaluate in Edge AI projects is inference workload performance, which consists of model size, speed of each inference (usually measured in milliseconds), or frames per second for video applications. There are various model frameworks such as YOLO, MobileNet, and many others that underpin a model, and of course the model parameters and features, number of items being classified, and other factors can increase the model size. But given a static or constant neural network, the speed at which it runs is dependent upon the hardware being used. There is slower, cheaper hardware, and there is faster, more expensive hardware.
However, where the model is run on the hardware, can vary. Inference can be performed directly on the device’s CPU cores, can be run on a GPU for parallel processing, or could be offloaded to a custom AI accelerator chip such as a Tensor Processing Unit (TPU) like the one in a Google Coral, neuromorphic processors like Brainchip Akida, or other dedicated math and matrix multipliers designed for loading and processing AI models. Most Edge AI boxes have one or more options available, or could be upgraded or configured with added GPU or Accelerator options if there is enough expansion capability.
[product image: Karbon 804 with vibration mount]

Here is some general information on each option, and a bit of guidance to help you choose the right solution for your AI project.
CPU – Using the device's native processor is usually the easiest and simplest way of running an AI model, though it might also be the slowest. This is useful for beginning to explore Edge AI, as it is typically quick and easy to load a model and begin to understand sensor data, image classification, object detection, or audio classification. If you are using small devices like single-board computers (such as the Raspberry Pi or similar), this may be your only option for inferencing, as expansion, size and power constraints, or processing power may be your limiting factor. Once deployed and tested, if the performance is adequate and the project's use case or needs are being met, there may be no reason to chase the gains that can be achieved with GPUs or custom accelerators. To keep power consumption down and save on costs in smaller projects, CPU inferencing could be the best solution.
GPU – Offloading machine learning models to graphics processing units (GPUs) will in most cases speed up inferencing of complex algorithms, but exact performance should be tested and benchmarked. GPUs come in many shapes and sizes. Some are integrated into the CPU, like the Iris or Xe graphics cores on Intel processors, or Mali on many Arm SoCs, while Jetson SoCs contain Nvidia GPU cores. Edge AI boxes could also contain a standalone GPU connected via PCIe or MXM, with Nvidia GeForce or A-class GPUs, or perhaps AMD Radeon devices. GPUs come in a wide range of cost, performance, and power options. Keep in mind that large GPUs can require several hundred watts, which may or may not be possible depending upon the end destination/location of the device. High-power GPUs can also be expensive, so a thorough analysis of cost versus actual performance gains should be considered.
AI Accelerator – Custom silicon dedicated to running machine learning models also exists and can be used in Edge AI projects. This type of hardware is generally added by plugging a device into a USB port or PCIe slot, and again comes in various performance and price points, though in some cases it can be integrated directly into a PCB. Each accelerator is best suited to particular model frameworks, so you'll need to make sure your model is compatible and can actually benefit from the device, i.e. that your inference performance will indeed increase. This might require some testing using a development kit or sample unit. Here again, because the performance options and price points vary, you will need to evaluate the performance gain and do a cost/benefit analysis to determine whether it's worthwhile. However, dedicated AI accelerators might also enable features or functionality that would not otherwise be possible, such as event-based tracking, where objects are tracked through space and time (for example a golf swing or the path of a ball), aggregated video streams and camera frames, or other capabilities not possible (or not feasible at useful performance) on CPUs and GPUs.
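The blog's advice to "test and benchmark" latency boils down to timing the same model on each candidate backend. Here's a minimal Python sketch of that idea (my own, not from the blog): `run_inference` is a hypothetical stand-in for a real model call, and `benchmark` reports mean per-inference latency in milliseconds and the effective frames per second.

```python
import time
import statistics

def run_inference(frame):
    # Hypothetical stand-in for a real model call on a CPU, GPU, or
    # accelerator backend; here we just burn a little CPU on the "frame".
    return sum(x * x for x in frame)

def benchmark(infer, frames, warmup=5):
    """Time each inference; return (mean latency in ms, effective FPS)."""
    for f in frames[:warmup]:          # warm-up runs, excluded from timing
        infer(f)
    latencies = []
    for f in frames:
        t0 = time.perf_counter()
        infer(f)
        latencies.append((time.perf_counter() - t0) * 1000.0)  # ms
    mean_ms = statistics.mean(latencies)
    fps = 1000.0 / mean_ms if mean_ms > 0 else float("inf")
    return mean_ms, fps

if __name__ == "__main__":
    frames = [list(range(10_000)) for _ in range(50)]  # fake "video frames"
    mean_ms, fps = benchmark(run_inference, frames)
    print(f"mean latency: {mean_ms:.2f} ms  ->  ~{fps:.0f} FPS")
```

Swapping `run_inference` for a CPU-, GPU-, or accelerator-backed call to the same model lets you compare backends like-for-like before committing to hardware.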


Posted October 14, 2023 in Blog by admin
 
  • Like
  • Fire
  • Love
Reactions: 42 users

cosors

👀
just for info
The US Space Force has announced its first non-domestic Cooperative Research and Development Agreement with two Indian start-ups, 3rd ITECH and 114AI.
https://www.spaceconnectonline.com.au/r-d/6035-us-space-force-announce-first-non-domestic-cooperative-deal-with-indian-tech-companies

...
“It is exciting when mutually beneficial collaborations, such as this agreement with 114AI and 3rd ITECH, are signed to advance the state-of-the-art in space domain awareness and Earth observation sensor technologies.”

The signing of the agreement was followed by a joint leader’s statement from President Joe Biden and Indian Prime Minister Narendra Modi during his visit to the White House on 22 June 2023.

That statement also emphasised the establishment and launch of the India–US Defense Acceleration Ecosystem (INDUS-X) network, fostering joint defence technology innovation between the two countries' universities, start-ups, industry, and think tanks as part of the US–India initiative on critical and emerging technology.

...
 
  • Like
Reactions: 7 users
  • Like
  • Fire
  • Love
Reactions: 10 users
[nine attached images]


 
  • Like
  • Fire
  • Thinking
Reactions: 27 users

Perhaps

Regular
Can't find any info on this one, apart from it being Intellisense Systems carrying this out.


[attached screenshot]

Seems this project is finished and didn't make it to Phase II.

 
  • Like
Reactions: 5 users

Sam

Nothing changes if nothing changes
  • Like
Reactions: 3 users

IloveLamp

Top 20
Does anyone else find it strange that both Qualcomm and Samsung have never been referred to as competitors in any of BrainChip's investor presentations...?

🤔🤫
 
  • Like
  • Fire
  • Thinking
Reactions: 18 users

Esq.111

Fascinatingly Intuitive.
Morning IloveLamp,

Yep, heaps of strange goings-on....

Mulder & Scully would have a field day.

Top job on all the sleuthing. Appreciated.

😃.

Regards,
Esq.
 
  • Like
  • Haha
  • Love
Reactions: 12 users

Perhaps

Regular
IloveLamp said:
Does anyone else find it strange that both Qualcomm and Samsung have never been referred to as competitors in any of BrainChip's investor presentations...? 🤔🤫
Common sense; the same happened with SynSense, GrAI Matter Labs, and Innatera.
 
  • Like
  • Haha
Reactions: 3 users

rgupta

Regular
IloveLamp said:
Does anyone else find it strange that both Qualcomm and Samsung have never been referred to as competitors in any of BrainChip's investor presentations...? 🤔🤫
Did BrainChip ever release a list of competitors?
BrainChip only compares other SNN technologies with Akida, and unless Qualcomm and Samsung officially claim an SNN technology, BrainChip is not interested in comparing against them.
Now, Antonio told us in the last investor update that similar technologies can do whatever Akida 1000 can do. That means there are competing technologies in the marketplace, and BrainChip never tries to tell investors about them, or what the roadmap is to beat them, except that Akida 2000 is going to be much faster and more versatile than Akida 1000.
 
  • Haha
  • Like
  • Thinking
Reactions: 6 users
Get on it

[attached screenshot]
Excuse the pun, but if everyone on here votes "other- neuromorphic/AKIDA", we may tip this vote over the edge.
 
  • Like
  • Fire
Reactions: 7 users