BRN Discussion Ongoing

Sirod69

bavarian girl ;-)
Magnus Östberg (Mercedes):

It feels like #AI is everywhere, right? 🤯 And rightly so, because it can simplify existing processes – including #SoftwareDevelopment. In fact, our team at Mercedes-Benz uses AI to improve the efficiency of MB.OS development! 🚀

We’ve recently integrated a terrific new AI solution: As part of our #MBOS software development process, we have created a private marketplace in Visual Studio Code to provide software tools for our developers. Our latest extension, GitHub Copilot, uses the OpenAI Codex to recommend code and complete functions in real-time. This is the same technology that powers the #ChatGPT beta we are currently testing for our “Hey Mercedes” voice assistant for U.S. customers (https://lnkd.in/e48Tharn).

Copilot can vastly improve our team’s efficiency by suggesting lines of code, allowing us to produce code faster, and it helps us with the associated documentation. Copilot code completion isn’t a perfect code-writing solution; it still requires user input and review, but it gives our programmers more time to focus on complex problems and the overall software architecture.
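
As a rough illustration of what that kind of completion looks like in practice (the function, its behaviour, and the suggested body below are hypothetical, not taken from the post): the developer writes a signature and a docstring, and a Copilot-style assistant proposes a body that the developer then reviews.

```python
# What the developer types: a signature and a short docstring.
def debounce_signal(samples: list[float], threshold: float) -> list[float]:
    """Suppress readings that differ from the previous kept value by less than threshold."""
    # What a Copilot-style assistant might suggest from here on;
    # the suggestion still needs human review before it is accepted.
    if not samples:
        return []
    filtered = [samples[0]]
    for value in samples[1:]:
        if abs(value - filtered[-1]) >= threshold:
            filtered.append(value)
    return filtered


print(debounce_signal([0.0, 0.01, 0.5, 0.52, 1.2], threshold=0.1))
# [0.0, 0.5, 1.2]
```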

And Copilot is only the beginning, as our technology partners continue to offer new AI solutions. Our goal is to leverage AI throughout the entire MB.OS development process, so our programmers can seamlessly and simultaneously engage them all for the best possible developer experience.

I want to give a huge shout-out to Christian Braunagel, Julian Harfmann, Jasmine Ramos, Andy Krieger, Shweta Kaushik, Holger Fahner, Dionysios Satikidis, Bastian Stahmer, Daniel Rheinbay, Graham Page, Ajith Damodaran and all the other team members who identified and integrated the useful Copilot tool for our team.
1690360170924.png
 
  • Like
  • Fire
  • Thinking
Reactions: 34 users

Sirod69

bavarian girl ;-)
Teksun Inc

26 July 2023

What is the Role of AI in Human Activity Tracking?​


In recent years, the advancement of artificial intelligence (AI) has revolutionized various industries, and one area where it has made a significant impact is human activity tracking. From fitness enthusiasts to healthcare professionals, AI-based human activity tracking systems have become invaluable tools for monitoring, analyzing, and improving human performance.

.......

Bottom Line

The role of AI in human activity tracking is transformative, offering personalized insights, real-time monitoring, and integration with smart devices. It has the potential to empower individuals to achieve their fitness goals, optimize their well-being, and make informed decisions about their activities.

Moreover, AI-based human activity tracking holds promise in advancing medical research and revolutionizing healthcare practices. As AI continues to evolve, we can expect even more sophisticated and accurate activity-tracking systems that further enhance our understanding of human performance and contribute to a healthier and more productive society.
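
As a concrete (and deliberately simplified) sketch of what such a tracking system does under the hood, here is a minimal example, assuming windowed accelerometer magnitudes as input and two made-up activity labels; it is not from the Teksun article, and real trackers use far richer sensors, features, and models.

```python
# Minimal sketch: classify "walking" vs "resting" from accelerometer windows.
# The data here is synthetic; a real tracker would use labelled sensor recordings.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

def make_window(active: bool, n: int = 50) -> np.ndarray:
    """Simulate one window of accelerometer magnitude samples (in g)."""
    base = 1.0  # gravity
    noise = 0.4 if active else 0.02
    return base + noise * rng.standard_normal(n)

def features(window: np.ndarray) -> np.ndarray:
    """Simple hand-crafted features: mean, standard deviation, peak-to-peak."""
    return np.array([window.mean(), window.std(), window.max() - window.min()])

X = np.array([features(make_window(active=i % 2 == 0)) for i in range(200)])
y = np.array(["walking" if i % 2 == 0 else "resting" for i in range(200)])

clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)
print(clf.predict([features(make_window(active=True))]))   # likely ['walking']
print(clf.predict([features(make_window(active=False))]))  # likely ['resting']
```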

 
  • Like
  • Fire
Reactions: 14 users
Referring to the recent graph posted here showing the decreasing share price, and the bar chart showing an increase in institutional ownership, can somebody explain how this can occur without collusion and fraud? If they are buying and selling to each other on a daily basis, there must be some coordination between the parties to enable them to keep the price down, accumulate vast amounts of stock, and make extra money the whole time by taking small margins on hundreds of trades. It doesn't make sense that this can happen in a legitimate (lawful) manner.
It happens unfortunately and there can be 100s of reasons why it does.

The only things that can disrupt it are big market-moving announcements or earnings growth.
 
  • Like
  • Sad
Reactions: 7 users

JB49

Regular
[Quoted in full: Sirod69's Magnus Östberg (Mercedes) post above.]
Is it just me that thinks the AI they are referring to is from NVIDIA and not BrainChip? We will find out soon enough.
 
  • Like
  • Thinking
Reactions: 7 users
Maybe / maybe not posted prev but I've not seen this one yet.

I have a couple of "huhs :unsure:" from it and further context would be great.

Given the ASIC comment, MegaChips makes sense.



9 March 2023

BrainChip’s second-gen neuromorphic silicon gunning for the big boys​


By Alex Davies

Neuromorphic chip designer BrainChip has unveiled the second generation of its Akida platform. Taking inspiration from the human brain, these chips promise to slash the compute costs burdening enterprises and operators, by moving those workloads into bespoke edge silicon and out of expensive centralized cloud environments.

Nandan Nayampally, Chief Marketing Officer at BrainChip and formerly a VP at Arm, pointed to the scale of this pricing problem, in conversation with Faultline this week. “It took $6 million to train one model, with ChatGPT. There is something like $50 billion in productivity losses from unplanned downtime in manufacturing alone, and $1.1 trillion in losses from people missing work due to preventable chronic illness in the US. With 1 TB of data per connected car per day too, there are big opportunities for AI, but big challenges.”

The industry is quite used to huge numbers getting hurled around. For context, the World Bank estimated global GDP to be around $96.5 trillion in 2021, so even a fraction of a percentage point increase in productivity could have a profound impact.

With billions more connected devices coming online, all that data has to be harvested and fed into the right processing systems, in order to achieve the $15 trillion global benefit of AI that Nayampally anticipates.

“We’re aiming to reduce the amount of compute done in the cloud, by doing intelligent compute on the device. This saves on cloud training, storage, and compute costs, and allows for real-time responses in the devices. This includes portable devices, and there is also the data security angle to consider too. So, that’s the background,” said Nayampally.

This brings us to the Akida platform: the designs that BrainChip licenses to companies that want to add AI-powered silicon functions to their designs.

BrainChip’s business model is quite similar to Arm, which does not manufacture chips itself. BrainChip provides much of the software and code necessary for developers to put these designs to work.

There are three tiers to the platform. The energy-sipping Akida-E series is the smallest, and intended for use in sensors. The Akida-S is a general-purpose design, intended for use in microcontrollers, while the Akida-P is the maximum performance variant, which is most applicable to the Faultline ecosystem.

“These are effectively ASICs,” said Nayampally, of the chips that include the BrainChip IP cores, “and this is where building ASICs is very cost-effective, as you are replacing a thousand-dollar GPU or card with a sub-$10 part.”

These Akida-P designs are intended for advanced speech recognition, object detection and classification, video object detection and tracking, and vision transformer networks.

The second generation adds support for the Vision Transformer (ViT) core, which can be combined with the new Temporal Event-based Neural Network (TENN) cores, which claim benefits over RNNs (recurrent neural networks) and CNNs (convolutional neural networks) thanks to their ability to process both spatial and temporal data.



A company called Prophesee has built an object detection system for road vehicles, which claims 30% better precision using the new technology.

More importantly, this uses 50x fewer parameters (inputs) and 30x fewer operations (processing cycles) than the old system. This should mean lower power consumption and faster response times, as well as a cheaper design, as the system requires less on-device memory and storage.

Another benchmark, using a video object recognition test, resulted in a system that could process 1382x512-pixel video at 30 fps using less than 75 mW of power, in a 16 nm silicon design. This needed 50x fewer parameters and 5x fewer operations than the ResNet50 reference design.
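
For a sense of scale, here is a quick back-of-the-envelope calculation from the quoted figures; the ResNet50 parameter count (~25.6 million) is an outside assumption, not a number from the article.

```python
# Rough arithmetic from the figures quoted above (assumptions noted inline).
power_w = 0.075          # "less than 75 mW" for the whole pipeline
fps = 30                 # 30 frames per second on 1382x512 video
energy_per_frame_mj = power_w / fps * 1000
print(f"~{energy_per_frame_mj:.1f} mJ per frame")          # ~2.5 mJ per frame

resnet50_params = 25.6e6  # assumption: typical ResNet50 parameter count
akida_params = resnet50_params / 50                         # "50x fewer parameters"
print(f"~{akida_params / 1e6:.2f} M parameters")            # ~0.51 M parameters
```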

Notably, the Akida platform claims to improve with time, thanks to the on-device learning capabilities of the silicon.

“This is the most important thing. We don’t train models on the devices. We extract the model’s features, and then port the object classes. You can design for today and then be able to upgrade to more complex models in time,” said Nayampally.
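
A minimal sketch of the general idea Nayampally describes, learning new object classes on top of features from a frozen, pre-trained extractor, here approximated with a nearest-prototype classifier on synthetic feature vectors; this illustrates the concept only, and is not BrainChip's actual on-chip mechanism or API.

```python
# Sketch: add new classes at the edge without retraining the feature extractor.
# Feature vectors here are random stand-ins for the output of a frozen backbone.
import numpy as np

rng = np.random.default_rng(1)
DIM = 64  # hypothetical feature dimension

def extract_features(class_center: np.ndarray) -> np.ndarray:
    """Stand-in for a frozen, pre-trained feature extractor."""
    return class_center + 0.1 * rng.standard_normal(DIM)

class PrototypeClassifier:
    """'Learns' a new class on-device by storing its mean feature vector."""
    def __init__(self):
        self.prototypes: dict[str, np.ndarray] = {}

    def learn_class(self, label: str, feature_vectors: list) -> None:
        self.prototypes[label] = np.mean(feature_vectors, axis=0)

    def predict(self, feats: np.ndarray) -> str:
        return min(self.prototypes,
                   key=lambda label: np.linalg.norm(feats - self.prototypes[label]))

# Enroll two new classes from a handful of examples each.
centers = {"mug": rng.standard_normal(DIM), "keys": rng.standard_normal(DIM)}
clf = PrototypeClassifier()
for label, center in centers.items():
    clf.learn_class(label, [extract_features(center) for _ in range(5)])

print(clf.predict(extract_features(centers["mug"])))   # expected: 'mug'
```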

There are no public second generation customers yet, but Renesas and MegaChips have both licensed the first-generation designs. For video capture, production, and initial distribution, there are plenty of applications for this neuromorphic silicon, which should mature in time.

However, “I think the pay TV operators are at the very tail end of the technical spectrum, and I say that in a nice way,” said Nayampally. “They are a very margin-driven business, so they work on chips they can build with – the lowest common denominators. So, they will require a silicon partner to promote these new capabilities, and we are working on it.”

As BrainChip’s targets include many Internet of Things (IoT) applications, it was only a matter of time until the smart home opportunity came up. Asked whether operators were showing interest in the Matter-spurred second wave of smart home, Nayampally said “we’re not seeing it yet, to be brutally honest.”

“It’s mostly because we’re not in the phase where we are pushing a proven solution. Most want a complete reference design for their stack, and as the IP model, in our first-generation designs, we were essentially proving the silicon. With the second, we can start to build these reference designs, and help them scale it.”

Founded in Australia in 2004, BrainChip went public in 2015. Its share price has been quite volatile, leaping from $0.41 AUD in October 2021 to $1.76 AUD in January 2022, before declining steadily to reach around $0.56 AUD today. In 2016, it acquired SpikeNet Technologies, a French firm specializing in computer vision, for around US$1.45 million.
 
  • Like
  • Fire
  • Love
Reactions: 49 users

TheDon

Regular
The logic would be that we can grow faster and be more dominantly introduced across data centers and edge. Our IP could be offered for free with certain levels of revenue spend on Nvidia products.

So as a BRN shareholder the value would be a premium valuation of say $1.5 converted into Nvidia shares and then we have the option to keep seeing the value grow or sell down.

This will stop any need for capital injections, allow for sharing of skills and will freak out some large chip makers.
Only shareman would accept $1.50
Maybe $150 or $1500.
Now that's more like it.
 
  • Like
  • Fire
Reactions: 16 users

Sirod69

bavarian girl ;-)
Is it just me that thinks the AI they are referring to is from NVIDIA and not Brainchip. We will find out soon enough.
I can't really say either; I just think we don't really know yet whether Nvidia is working with BrainChip. I think they are.
 
  • Like
  • Thinking
Reactions: 6 users

HopalongPetrovski

I'm Spartacus!
Just had a better look.
One image looks like a door knocker and the other is obviously a ring, so combined I believe you have a knock ring 🤔
A knock ring? Is that like a bratwurst?:p


maxresdefault.jpg
 
  • Haha
Reactions: 8 users
Another new job at BrainChip... go get 'em, tigers... :)




Senior Product Manager (AI, IP, and Semiconductor Experience Required)​


BrainChip · Laguna Hills, CA (On-site) 11 hours ago · 22 applicants



$180,000/yr - $200,000/yr · Full-time
11-50 employees

About the job​


As a Sr. Product Manager, you will drive the product roadmap, develop requirements and the value proposition, manage the product lifecycle, and create the materials and collateral needed to communicate the capabilities of the product, including competitive positioning. Responsibilities will span all BrainChip product initiatives in Essential AI solutions, which may include delivery of silicon, modules, evaluation kits and/or IP.

The position requires a keen focus on market trends, customer needs, application architectures, and technology trends. Understanding of semiconductor AI technologies will help analyze product choices and tradeoffs.

BrainChip’s Akida fully digital neuromorphic AI processor is a first-of-its-kind licensable product that delivers great efficiency in Edge AI devices along with unique capabilities like learning on the edge. Akida’s success depends on finding the best product-market fit in large markets, supported by a product roadmap with the right combination of features and capabilities to deliver the best differentiated value, working with many internal functions and external entities.

**This is an ON-SITE ROLE however we will consider a flexible work schedule for the right candidate.**

Essential Job Functions:

· Ownership of the product lifecycle process from concept to end of life for the product portfolio, including the timely preparation and presentation of stage gates and key product decisions.
· Partner with business development and technical colleagues to explore product choices that best serve BrainChip’s financial success.
· Develop and optimize a business case for each initiative considering key technology trends, ecosystem inflection points, product differentiation and best chance for success.
· Refine, quantify, and articulate the key value propositions of our products.
· Support the product development process as the key product owner - providing product targets, priority guidance, and resolving trade-off decisions.
· Evaluate business opportunities to partner with external technology providers to enhance the product portfolio.
· Participate in standardization efforts that will inform product strategy.


Job Requirements:
· 10+ years of experience in AI IP for semiconductor technology, including both technical and business roles.
· Understanding of Edge AI/ML infrastructure, and enterprise markets and ecosystem
· Experience in product management roles, working with cross-functional teams across geographies.
· Deep knowledge of interconnect technologies (PCI Express, CXL, Ethernet, Memory interfaces, etc.), and relevant industry consortia specifications is preferred.
· Excellent communication and presentation skills.
· Proven track record of strategy development.
· Demonstrated business acumen with semiconductor products and/or
· Bachelor’s Degree or equivalent in Electrical Engineering, Computer Engineering, or a related field.
· MBA or alternative master’s degree in management is preferred.


Who said there's no free lunch????
 
  • Like
  • Fire
  • Wow
Reactions: 28 users

manny100

Regular
Respect and understand your logic, though I don't entirely agree.

I do agree though that a NVIDIA confirmed and stated connection would be ridiculously massive haha.

M&A activity is commonplace in most industries though.

A behemoth like NVIDIA would just buy us if we were seen as that much of a threat to their bottom line whilst we are still a drop in the ocean $ wise for them.

Eventually we may not be.

There are multiple start-ups, research entities etc playing with neuromorphic in different variations, and it's not impossible that one of them comes out of left field and snares a major contract.

That's why Megachips and Renesas are critical for mine. They committed as early adopters seeing our benefits and it could just give them an edge (no pun) in the market if they can secure contracts using our tech.

NVIDIA plays in the GPU space and traditionally in different markets to us. Below is where they make their money, as at Dec '22.


The chip company serves five primary markets—gaming, data center, professional visualization, automotive, original equipment manufacturer (OEM) and other—and provides a revenue breakdown for each of those markets: Gaming revenue, comprising 45% of total revenue, rose 41.8% YOY in the third quarter; data center revenue (41% of total) grew 54.5% YOY; professional visualization revenue (8%) was up 144.5% YOY; automotive revenue (2%) increased 8% YOY; and OEM and other revenue (3%) expanded 20.6% YOY.

Yes, the Edge Impulse hook-up is moving into our area, though again I see us as complementary to their GPUs.

I go back to that Nikunj webinar, at around the 37 min mark, where he speaks about Akida doing its thing, but also that if the dev needs a GPU for certain functions then we can dovetail in there for some of the offloading as well.


I'm not sure what the Qualcomm / ARM / NVIDIA statements are about.

None of them really have any direct input / influence over us. We are merely part of the ARM ecosystem.

You could essentially say Intel as well given we are part of the IFS?

We have multiple high-level partnerships as we are building an ecosystem, and as per my prev thoughts, we need end-user devs onboard, as they are the people who build and push through models and products to assist in eventual uptake, which ultimately validates Akida, and then we should start the snowball imo.

The SP gets messed with by accumulators (big, small & poss other tech companies), shorters, etc etc.

Can't be pinned to a NVIDIA generalisation for mine.

Tech players post about the evolving AI mkt and those involved in it... it's like Rob liking diff posts... some may be very pertinent and reflect a genuine relationship somewhere (we may or may not know of), and some may just be an acknowledgement of an achievement.

Anyway, just my thoughts and trust we will keep chipping away into the market and secure more signatures on the dotted line in due course.
RE: Is NVIDIA a competitor?
I can only add what Sean has said and made public via slides at Investor presentations in April this year.
Vimeo video:
Slide 12 Titled Distributed AI from Cloud to Edge
BRN gets to the Edge bypassing the Cloud via a neuromorphic approach, whereas NVIDIA gets there via the Cloud.
Sean said that AI on the chip and in the cloud will complement each other. As data grows exponentially, some will be more suited to the cloud and other data to the chip via the neuromorphic approach.
In this sense NVIDIA are not competitors at this moment in time.
Slide 13 then addresses a competitive analysis.
This has a table listing 5 criteria. BRN ticks off all 5: "Brainchip stands alone with the most performant Commercially available Edge AI solution".
The other 4 companies listed in the table (including NVIDIA) get only one tick out of the 5 criteria.
So according to BrainChip, NVIDIA is not a competitor, very basically because it specialises in getting to the Edge via the Cloud, whereas we go straight to the Edge, and the 2 approaches complement each other. This may change over time of course.
 
  • Like
  • Love
  • Fire
Reactions: 36 users

Cartagena

Regular
[Quoted in full: the same post quoted by manny100 above.]


ST Micro is launching their low-power thermal motion sensor. Considering Akida Gen 2 will be released in late Q3, which is around September, I'm wondering if this has any relevance to our tech? We do have a partnership with ST Micro, from my understanding.
 

Attachments

  • Screenshot_20230726-222721.png
Last edited:
  • Like
  • Thinking
  • Fire
Reactions: 7 users

cosors

👀
Yesterday's post with the screenshot of an ex-BrainChip employee's LinkedIn comment, saying Tim Llewellynn was/is no longer at Nviso, has been deleted here.
 
Last edited:
  • Thinking
  • Love
Reactions: 4 users

miaeffect

Oat latte lover
Yesterday's post with the screenshot of an ex-BrainChip employee's LinkedIn comment, saying Tim Llewellynn was/is no longer at Nviso, has been deleted here, and also the comment on LinkedIn as far as I can see.
Still on LinkedIn
 
  • Like
Reactions: 3 users
Nice bit of overflow exposure for us from the US Tech & Business VP at Sony Semi.

With them also involved in the hackathon, the post link obviously takes you to the rego and webinar pages, incl Infineon and BRN.

Hopefully they find our cutting-edge AI tech cool as well :)



Screenshot_2023-07-26-21-19-01-11_4641ebc0df1485bf6b47ebd018b5ee76.jpg
 
Last edited:
  • Like
  • Fire
  • Thinking
Reactions: 16 users

cosors

👀
  • Like
Reactions: 2 users

Beebo

Regular
[Quoted in full: Fullmoonfever's post above, including the 9 March 2023 Faultline article “BrainChip’s second-gen neuromorphic silicon gunning for the big boys”.]
Hi Fullmoonfever,
How do you interpret the following paragraph from the article you shared above?

…However, “I think the pay TV operators are at the very tail end of the technical spectrum, and I say that in a nice way,” said Nayampally. “They are a very margin-driven business, so they work on chips they can build with – the lowest common denominators. So, they will require a silicon partner to promote these new capabilities, and we are working on it.”…
 
  • Like
  • Fire
Reactions: 8 users

FKE

Regular
  • Like
  • Love
  • Fire
Reactions: 43 users

IloveLamp

Top 20
I am of the opinion we are involved with each and every one of the companies mentioned here (not based on this, but dyor)

Screenshot_20230727_055822_LinkedIn.jpg
 
Last edited:
  • Like
  • Fire
  • Love
Reactions: 25 users