BRN Discussion Ongoing


LG is gearing up to unleash its most advanced processor to date, poised to revolutionize what their OLED TVs can deliver. The much-anticipated successor to the Alpha 9, tentatively named the Alpha 10, holds significant promise in enhancing picture quality and AI capabilities.

According to reports emerging from the Korean newspaper Etnews (via FlatpanelsHD), LG Electronics has been diligently crafting the Alpha 10 processor at its System Integrated Circuit (SIC) Center. The standout feature of this new processor is its Neural Processing Unit (NPU), a development that is expected to significantly boost on-device performance compared to previous AI-based chips.

Essentially, this means that complex processing tasks will be handled locally within the TV itself, akin to the capabilities witnessed in high-end smartphones, eliminating the need for off-site data centers.

One of the key improvements with the Alpha 10 processor is its capacity to analyze images more effectively, reduce noise, identify and prioritize objects, and potentially enhance audio quality. These enhancements promise not only a visually striking experience but also an audible one across LG's 2024 TV range.

One intriguing aspect hinted at in the reports is the potential for motion-based services enabled by the Alpha 10. While specifics remain undisclosed, the applications could span areas like gaming, gesture control, object-based audio positioning, and possibly video calling.

However, it is unclear whether LG is actively pursuing these features or simply contemplating their potential. Moreover, it's worth noting that the Alpha 10 isn't limited to TVs alone, hinting at broader possibilities for interactive devices beyond Smart TVs.

LG typically unveils its upcoming TV models for the year ahead in January, which presents an opportunity to witness firsthand the performance of this new Alpha 10 processor.

Back in 2018, LG introduced the Alpha 9 processor, marking the first step towards incorporating machine learning-based picture enhancement algorithms. Since then, LG has introduced incremental enhancements to the Alpha 9 processor, with the 2023 LG OLED models, such as the C3, G3, M3, and Z3, featuring the sixth generation of this processor. For its more budget-friendly TV models, LG has employed the Alpha 7 or Alpha 5 processors.

The Alpha 10 processor represents a leap forward in processing power, with the incorporation of an advanced NPU and a focus on executing AI algorithms locally. This shift aligns with the trend observed in smartphones, such as the iPhone, where local hardware processing of AI models has become increasingly prevalent.

The Alpha 10's local AI processing capabilities are expected to yield improved image analysis, noise reduction, object recognition, and AI-driven audio enhancements. These advancements promise not only sharper and clearer visuals but also a more immersive and engaging audio experience for viewers.

Additionally, the Alpha 10's potential for motion detection opens the door to innovative services and applications. While specific use cases remain undisclosed, the possibilities include fitness tracking and motion-controlled gaming within the confines of one's living room. LG also plans to make software tools available to app developers, which could further enrich the ecosystem of applications designed to leverage the Alpha 10's capabilities.

While the Alpha 10 processor is initially intended for Smart TVs, LG has expressed interest in extending the reach of its webOS platform to a wider array of devices, potentially beyond the realm of television. Enthusiasts and tech aficionados eager to witness the potential of the Alpha 10 processor will likely get their chance at the Consumer Electronics Show (CES) scheduled for January 2024, LG's traditional platform for unveiling its new lineup of TVs and innovations.
Let's see what the CES event brings. If a big name mentions BrainChip's Akida, look out; otherwise it'll be the standard 'watch the financials to see how BRN are tracking'. At some stage royalties will kick in, and then everybody will join the dots again to find where the secret sauce is.
 
  • Thinking
  • Like
Reactions: 4 users

wilzy123

Founding Member
Let's see what the CES event brings. If a big name mentions BrainChip's Akida, look out; otherwise it'll be the standard 'watch the financials to see how BRN are tracking'. At some stage royalties will kick in, and then everybody will join the dots again to find where the secret sauce is.

wat
 
  • Haha
  • Like
Reactions: 14 users
Let's see what the CES event brings. If a big name mentions BrainChip's Akida, look out; otherwise it'll be the standard 'watch the financials to see how BRN are tracking'. At some stage royalties will kick in, and then everybody will join the dots again to find where the secret sauce is.
 
  • Haha
  • Like
  • Fire
Reactions: 8 users

Esq.111

Fascinatingly Intuitive.
Afternoon Chippers & Season's Greetings,

Bit baffled today, what with a new patent granted (No. 19, covering America).

Each and every new patent granted to our company adds, in my view, an extra $25 to $30 million to our market cap.


Patience is a virtue; no matter how good the secret sauce is, until the masses get addicted, one has to endure.

Regards,
Esq.
 
  • Like
  • Fire
  • Love
Reactions: 41 users

Diogenese

Top 20
LG is gearing up to unleash its most advanced processor to date, poised to revolutionize what their OLED TVs can deliver. The much-anticipated successor to the Alpha 9, tentatively named the Alpha 10, holds significant promise in enhancing picture quality and AI capabilities. …
LG has been playing with NNs since 2010 or earlier.

This is one of the few of their numerous patents which mention machine learning SNNs:

US2022278755A1 METHOD FOR TRANSMITTING OR RECEIVING SIGNAL IN LOW-BIT QUANTIZATION SYSTEM AND DEVICE THEREFOR 20190820




[Patent drawings: Fig. 12 and Fig. 14]



Note that Fig 12 relates to rate-based signal coding, while Fig 14 uses time (temporal) based coding.

Akida uses rank/temporal based coding.

What is interesting about that is that the original Korean application only referred to rate-based coding.

That means that the temporal-based coding was added in the 12 months after the original Korean application was filed on 20190820. What a coincidence!
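For anyone who hasn't met the two schemes, here's a toy illustration of the difference (my own sketch, nothing to do with the patent's or Akida's actual implementation):

```python
import numpy as np

rng = np.random.default_rng(0)

def rate_code(intensity, n_steps=20):
    # Rate coding: the input intensity sets the probability of a spike
    # at each time step, so information is carried by the spike *count*.
    return (rng.random(n_steps) < intensity).astype(int)

def rank_code(intensities):
    # Rank-order (temporal) coding: each input spikes once, and the
    # information is carried by the *order* of first spikes --
    # stronger inputs fire earlier.
    return np.argsort(-np.asarray(intensities))

print(rate_code(0.9).sum(), rate_code(0.2).sum())  # many spikes vs few
print(rank_code([0.9, 0.2, 0.6]))                  # [0 2 1]: input 0 fires first
```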



CORRECTION, 19:18:

I tried to correct this post a while ago, but I must have forgotten to save changes.

In my exuberance, I missed a figure on sheet 19 of the drawings showing a "sequence-based" coding system, which looks equivalent to the time-based coding system in the PCT application from 2019.

This predates the announcement of the EAP in June 2020. The EAP announcement did refer to the partners as having already had use of the Akida simulation software, and the drawing is late in the sequence (tacked on?), so this does not rule out Akida, but it's not as convincing as I first imagined.
 
Last edited:
  • Like
  • Love
  • Thinking
Reactions: 39 users
Not all over CES, but is there any info out there on where we're expecting public displays? All I see is bookings in private suites, which says to me more secret squirrel. The company did say recently that more info is coming over the next few weeks, so maybe some companies are willing to loosen the NDAs? One can only hope. Just need one big name to mention us really.

Hoping but not expecting.

SC
 
  • Like
  • Fire
Reactions: 19 users
Not all over CES, but is there any info out there on where we're expecting public displays? All I see is bookings in private suites, which says to me more secret squirrel. The company did say recently that more info is coming over the next few weeks, so maybe some companies are willing to loosen the NDAs? One can only hope. Just need one big name to mention us really.

Hoping but not expecting.

SC
It'll always be a secret, but as holders, eventually I'm hoping it becomes the norm for every company that can utilise the product, or they're left behind.
 
  • Like
Reactions: 3 users

Cyw

Regular
Can't catch a break. New patent granted and we're still here 😫.
2023 can't end sooner for some.
2021, 2022, 2023, 2024 is our year!
I sincerely hope that is the absolute last time a year is crossed.
 
  • Like
  • Thinking
  • Haha
Reactions: 13 users
Not all over CES, but is there any info out there on where we're expecting public displays? All I see is bookings in private suites, which says to me more secret squirrel. The company did say recently that more info is coming over the next few weeks, so maybe some companies are willing to loosen the NDAs? One can only hope. Just need one big name to mention us really.

Hoping but not expecting.

SC
The Company's Tweet (or whatever it was 🙄) in relation to CES 2024 mentioned "New partners" and "New demos", so you'd assume these will be public?

Are the "New partners" just emphasis on new partners made since CES 2023?
Or will previously unannounced partnerships be revealed?

There's a strong possibility that some of the "New demos" may be directly related to the "New partners" and may involve "New products".

The whole event may end up being "Much Ado About Nothing" too...


Solidifying IP contracts and revenue-producing arrangements is the only thing that will really give us the re-rate we all know is warranted
 
  • Like
  • Love
  • Fire
Reactions: 28 users

wilzy123

Founding Member
I sincerely hope that is the absolute last time a year is crossed.

Same. I really don't want to see another low-value post like that again either.
 
  • Haha
  • Like
Reactions: 6 users

IloveLamp

Top 20
One for the Qualcomm believers.......


[LinkedIn screenshot attachment]
 
  • Like
  • Fire
  • Love
Reactions: 12 users
Not all over CES, but is there any info out there on where we're expecting public displays? All I see is bookings in private suites, which says to me more secret squirrel. The company did say recently that more info is coming over the next few weeks, so maybe some companies are willing to loosen the NDAs? One can only hope. Just need one big name to mention us really.

Hoping but not expecting.

SC
I’m not expecting anything apart from VVDN, but hoping.
 
  • Like
  • Love
Reactions: 5 users

CHIPS

Regular
Very strange: some charts are shown when I try to edit this post, but after saving they disappear here. Better to open the link.



“Inception” — Tiny Stock teased as “Next NVIDIA” by Tim Bohen?​

Here’s the intro to a recent pitch from Tim Bohen:

“The Largest Economic Disruption Since Ford’s Model T

“Inception
“The world’s most powerful tech companies are all quietly racing to be at the forefront of this new phenomenon… but only one small company holds the master key to disrupting this $7 trillion industry.”
Bohen has been working with penny stock trader Tim Sykes for years now at StocksToTrade, and this ad is selling access to the StocksToTrade Advisory ($49/yr), which seems to offer some trading suggestions based on the algorithms and trading software that they sell as the StocksToTrade system (which is more like $2,000/yr, so odds are pretty good that this is the “entry level” pitch, and that Advisory subscribers get the hard sell on the full StocksToTrade platform later).

But what we’re interested in, of course, is what “next NVIDIA” stock they’re pitching. This ad has actually been rolling for several months now (the oldest copy I can find in our archive is from late August), but I just got asked about it again today, and I’ve not written about it before, so we’ll dig in and see what this secret “Inception” stock might be. Let’s just jump right into the tease and get started…

“… right now, the most important thing to know is that Inception has already started making a small group of savvy people very rich…
“And as we speak, one unassuming company listed under $2 a share is leading the way.
“You see, despite being under the radar to most investors, this small company has recently received approval on 16 patents for its unique technology…
“Ensuring that no one else can replicate their smart microchip… no matter how hard they try.
“And if anyone wants to partner with them…
“This inception company holds the keys and the patent to make it happen.
“And because of this, they’ll receive significant royalties every time this technology is used.
“Just like when Intel disrupted the computer industry with its patented DRAM technology…
“And saw a meteoric rise to a $151 billion valuation…”
The big picture spiel is all about AI in general, as you might imagine… but when he drills down into some detail, most of the pitch is really about autonomous driving — which hasn’t been the focus of AI pitches much this year, thanks to the new AI models and products that have captured all our attention, like ChatGPT… but was arguably the primary driver of interest in AI a few years ago.

(And as coincidence would have it, “autonomous cars” was the driving theme behind the first pitch we ever saw for NVIDIA — that was from David Gardner at the Motley Fool back in May of 2014… and yes, NVDA is up more than 10,000% since then, so of course I wish I had bought it back then instead of several years later. Right now I think that’s still the second-best teaser pick in Stock Gumshoe history, behind only David’s brother, Tom Gardner, and his pick of Netflix in 2007, currently showing a gain of about 18,000%. No, I didn’t buy that one, either.)

Other clues about this “next NVIDIA” story?

“By making car ownership attainable for 122 million Americans, Ford’s assembly line revolutionized the car industry practically overnight…
“I’m going on the record right here to say:
“Inception will be just as revolutionary…
“By finally making Autonomous Driving Vehicles a REALITY.”
And for some reason he compares this to LiDAR, which seems like a bit of a straw man here…

“… all the major tech companies and auto companies have put false hope in the belief that Lidar would be the answer to these problems….
“Many experts have failed to recognize the true downfalls of this technology.
“First, Lidar fails in bad weather conditions. Since lidar uses visible lasers to measure distance, it doesn’t work in bad weather conditions like heavy rain, snow, and fog. As one expert recently put it: “Lidar is essentially blind in bad weather.”
“It also doesn’t adjust fast enough when another car or a pedestrian makes an unexpected change.
“Secondly, Lidar is extremely power-hungry and expensive. It requires the most power out of all sensors, which ultimately decreases the car’s already low driving range…
“More frequent recharging and charging costs would be a major turnoff for consumers…
“Not to mention, the 360-degree Lidar system would cost, at minimum, an extra $23,000 per car….
“And on top of that…
“Let’s face it: Lidar is just plain ugly. Who wants a car with a massive camera on top?”
I can’t say I’m an engineering expert, but most of that sounds awfully nonsensical to me. LiDAR sensors are now getting very small, and presumably they’re getting more efficient. They no longer require a massive $1,000 tripod on the roof of the car, and one assumes that they no longer cause a 10%+ drag on EV range in very “feature rich” areas like cities where the scanner is extremely active.

But more importantly, of course, comparing a sensor to a processor doesn’t make much sense. Whether you use LiDAR, or radar, or cameras, or whatever else to sense the surrounding environment, something has to do that sensing… and you need a processing unit that deals with all that sensor data, figures out what it means, and sends instructions to the engine and the brakes and the steering wheel and whatever else. You can’t do it without “eyes,” but you also can’t do it with just eyes. Here’s how he continues with that strange straw man argument…

“For self-driving technology to truly be rolled out to the masses…
“A fully functional self-driving car requires not just an excellent set of sonar “eyes” …
“But it requires an excellent ‘brain’ that can quickly decipher the right movements and quick actions that replicate a human’s instinct.
“And after multiple accidents and even some fatalities stemming from Lidar inefficiencies…
“We’ve learned the hard way that…
“Lidar is NOT the answer…
“In fact, Elon Musk said it best when he stated, ‘Lidar is a fool’s errand. Anyone relying on Lidar is doomed.'”
“Unlike Lidar, the technology behind Inception’s microchip has the potential to be the greatest health achievement of our lifetime…
“By saving lives and preventing injuries.”
And the hype gets pretty strong with this one…

“At its core, Inception contains the most disruptive innovation the world has ever seen.
“More powerful than combustion engines, blockchain technology, and the personal computer.
“If you don’t already know… I’m referring to the massive adoption of Artificial Intelligence.
“These split-second decisions we make every day on the road have been impossible for self-driving car companies to address…
“But thanks to its unique intellectual power, when a vehicle equipped with Inception’s microchip system detects an object or obstacle on the road such as other cars, pedestrians, animals, or rainy weather…
“The data from these sensors is then processed by the artificial neurons in the system, which can make instant decisions based on the information available.
“And this is what really gives Inception the potential to take Autonomous Driving to the next level – the combination of artificial intelligence and machine learning.
“Its ability to simultaneously store and process information, just like the neurons in the human brain, is something a glorified camera, like LiDar, would never be able to accomplish.”
That’s an odd statement, saying that a LiDAR sensor will never be as powerful as an AI supercomputer. It’s like saying that your eyes will never be able to do the same thing your brain does. You need both, you need sensors that can take in and process and digitize the current environment… and you need a “brain” that can make decisions based on changes in that environment.

And there are, of course, plenty of autonomous cars on the road right now. There have been some challenges, and nobody has really figured out how to make any money from this progress so far, but there are lots of different chips and systems and different sensor configurations being used by robo taxis in a few cities… and plenty of less-advanced “driver assistance” systems that don’t really count as autonomous driving. Some of those use NVIDIA’s chips, some use other designs, but there are several platforms which are making it work, even if it’s not ready for mass commercialization yet.

More from the ad:

“According to venture investor and serial entrepreneur Tony Seba, once Inception goes mainstream, most people will stop owning cars and instead pay by the ride for Uber on demand Autonomous Vehicles…
“It’s estimated that this could put an extra $5,600 into the average American’s pocket every year.
“That’s equivalent to a 10% raise for American households across the board!
“And not only will this breakthrough technology save you money…but it will also save millions of lives.
“95% of car accidents are caused by human error. Inception has the power to not only make self-driving cars a part of our everyday lives, but it will accomplish autonomous transportation in an incredibly safe way.
“One expert claims that using driverless cars can decrease traffic accidents by 90%.”
This is the same argument that Elon Musk and NVIDIA and many investors have been making about autonomous vehicles for a decade now — that they’ll save us from needing to own cars, will reduce accidents, and will make the world far more efficient, getting rid of all those capital-intensive automobiles that spend most of their time parked. Whitney Tilson was full-throated in his bold predictions about “Transportation as a Service” (TaaS) a few years ago, and many others have pitched similar ideas — the self-driving car technology companies who build the machines or develop the software, the sensor companies who make the LiDAR and other systems that allow cars to map the world, the chipmakers who are capitalizing on the coming ability for autonomous cars to talk to each other, making traffic a thing of the past, and the companies who might benefit from autonomous cars and the fact that we no longer need drivers, whether that’s DoorDash or Uber or whoever else. Get-rich daydreams are everywhere… and who knows, maybe some of them will even come true.

Not so much yet, though, and most of the LiDAR and “autonomous driving” stocks have had a tough couple of years as we wait for a real “mass market” or opportunity for profit to emerge in this area.

So which stock is Bohen revved up about? He says they’ve partnered with an $80 billion automaker to build something impressive…

“they recently stated that this new model is: ‘The most efficient car we’ve ever built.'”
And some other hints about folks who are using “Inception” ….

“Tech giants like Megachips, ARM, and Magic Eye have also already made huge deals with the maker of Inception.
“Even NASA jumped on board to use this incredible technology.
“While Google has already invested nearly $2.8 billion into Waymo, its self-driving car project…
“A tiny company trading for under $2 holds the REAL key to the future of this industry that’s ready to explode.”
And some other clues about our “Inception” stock…

“You see, not only has the tiny company that invented and holds the patents to this revolutionary technology seen a 500% increase in revenue over the last year…
“It has also been on a partnership spree over the last handful of months.
“In fact, their recent partnerships with companies valued at over $242 Billion have them prepared for growth on a grand scale…
“And experts have taken notice…predicting that in the years to come…
“Inception will be partnered with dozens… if not hundreds more household names, potentially generating tens of millions in cash flow.”
And some hinting about the CEO…

“This CEO is not a household name like Musk, Bezos, or Thiel…
“And you won’t recognize his face.
“But once you see all that he’s accomplished… you’ll understand my excitement.
“This tech savant has unmatched experience in driving explosive growth for the world’s largest tech companies.
“His strategic hiring shows the dynamic plan of this company and the potential they hold.
“This leader’s track record includes some of the biggest names in tech…
“He led the way in growing Fusion-io’s annual revenue to $432.4 million…
“During his tenure, HP Inc.’s revenue grew by 50%, causing its stock to rise 313%…
“And by leveraging his expertise, Compaq Corp. beat earnings after earnings, boosting its stock valuation to an appealing $25 billion…”
Compaq and HP merged more than 20 years ago, so I guess we’re at least dealing with someone who’s a grown-up. And we’re told that the stock has already soared…

“… this tiny company has already seen a 988% stock jump.
“And I believe that’s just the beginning.”
Here’s a screenshot of the chart he shows in the ad, which, if you look closely, will also call attention to the fact that the $2 share price was back in the Spring of 2022… more than 15 months before the earliest version of this ad that I can find. Is this just a retread idea? All I know is that we’ve been getting the ad pretty incessantly for a couple months now, including this week, so apparently it’s still a big focus of Tim Bohen. Or, at least, it’s still bringing in new subscribers.


So… a tiny stock that had a 500% revenue increase and a huge surge in the stock price (988% growth in two years) is the best idea for “Inception” and the brain behind self-driving cars, you say? Who might that be?

Well, the good news, assuming you don’t own the stock, is that the share price is a lot lower than $2 now. This is a little Australian company called Brainchip (BRN.AX in Sydney, BRCHF OTC in the US), and here’s my version of the stock chart from those two years, from early 2020 through to early 2022… it’s a clear match, though since this was often a wildly-trading penny stock in 2020 and 2021, you can see that Bohen’s chart must be smoothed out a bit, maybe it’s just a weekly or monthly chart.


And the full picture, up through today, as Tim Bohen keeps pitching this as the “Inception” chipmaker and the key to self-driving cars? Here’s what the five-year chart looks like:


We’ve actually looked at BrainChip a few times over the years, though I think I’ve only written one article about it — that was back in 2017, covering a teaser ad for an Australian investment newsletter. BrainChip was really a startup back then, though a well-funded one, and was just shipping its first test equipment (they were calling them “accelerator boards” at the time) to automakers to try to get some customers in the autonomous vehicle business… though they were being pitched because of the potential use of their new Akida processors in blockchain projects, because, well, everything was about blockchain in 2017.

Now, no surprise, everything is about AI, so it’s an “AI” company.
Here’s how they describe themselves:

“BrainChip is the worldwide leader in edge AI on-chip processing and learning. The company’s first-to-market neuromorphic processor, Akida™, mimics the human brain to analyze only essential sensor inputs at the point of acquisition, processing data with unparalleled efficiency, precision, and economy of energy. Keeping machine learning local to the chip, independent of the cloud, also dramatically reduces latency while improving privacy and data security.
“BrainChip enables effective edge compute to be universally deployable across real-world applications such as connected cars, consumer electronics, and industrial IoT.
“BrainChip is proving that on-chip AI, close to the sensor, is the future for its customers’ products and the planet.”
And yes, they did bring in a new CEO, Sean Hehir, and he did come from HP, Compaq and Fusion-io… though that was a little over two years ago, when their stock was still soaring. Here’s what Hehir said when he was hired:

“I’m excited to join BrainChip at this pivotal time in their history. It is a time when the company is heavily invested in working with partners that will benefit from AI at the edge. I believe my experience in building strong relationships with top-tier global clients will prove beneficial to BrainChip as they enter this next phase of growth. Preparing for the broad commercial launch of Akida silicon and intellectual property will be my top priority, so BrainChip can begin its transformation from a company developing groundbreaking AI technology to a company supplying cutting-edge markets like automotive, transportation, consumer, aerospace, medical, and industrial IoT with the best AI technology available.”
The challenge, at least for investors, is that on the financial side BrainChip still looks like an R&D company, without any meaningful revenue. They have had some R&D income, presumably from funded pilot projects over the years, but it has never reached the level of covering even 20% of their operating expenses (and that was in their best year, 2022, when they had some kind of $5 million revenue bump that hasn’t been repeated, which is also what created that one-time 500%+ surge in revenue that Bohen hints at — most years, the revenue covers less than 1% of expenses).

They do still have $20 million or so in cash on the books, mostly because they’ve steadily sold more shares in most years — the good news is that their largest share sale was back in 2022, so they at least got an elevated price for that dilution, but the share count has gone up by 60% in five years. And it’s about to go up again, with a big share sale at a MUCH lower price (more on that in a minute).

Is this going to turn into a commercial enterprise sometime soon? Will the need for customizable edge AI chips drive some more partnerships that move into real products, and some actual volume production of their Akida chips? I have no idea. They say lots of positive things, but their Investor Presentation has no financial information in it, it’s all about their product development and their early adopter customers, including Mercedes (that teased “$80 billion partner”), who are apparently still working with them but not spending much money on that work. Yet, at least.

It sounds like a cool idea. There are a lot of silicon-based AI products trying to build and take advantage of this current level of AI enthusiasm, including a few companies who sound somewhat similar to BrainChip in trying to build smaller, much more efficient AI processors that can handle specific tasks, rather than the big generalist AI chips, like NVIDIA’s GPUs, which can handle almost any task but are overkill for some specific processing needs and have to be housed in huge data centers rather than at the Edge, or in a specific product (like a car). I don’t know enough to know what kind of tasks BrainChip’s Akida designs are best for in this idea of “distributed AI,” whether that’s inference or training or some very specific AI task that could be done by a custom-built chip, but the idea seems to be having AI processing done closer to the real-world action, and much more efficiently.

It’s a little similar to the evolution of Bitcoin hardware, frankly — originally Bitcoin was mined with regular ol’ CPUs, then some people realized that GPUs like NVIDIA’s would be far faster if the mining was programmed to use them properly, and those took over… and then Bitcoin got big enough, and established enough, that people could justify spending a big chunk of money on developing customized ASICs (application-specific integrated circuits) that would handle just one task, Bitcoin mining, much more efficiently than a high-end and expensive GPU chipset, and a few companies started making those chips and building that hardware, and before you know it almost all Bitcoin mining was done with ASIC miners from Bitmain, Canaan and others, and ASICs were being designed for other popular cryptocurrencies, too… though GPUs still get a lot of use for the thousands of tinier cryptos that don’t yet justify the development of specific hardware.

Maybe we’ll see the same thing happen with AI… that’s certainly the bet of a lot of people, including the folks behind Etched.ai and many others. And maybe that rising interest in a broader variety of AI hardware will help out BrainChip, too. I do not have the market expertise to place a bet on which startup chip hopeful might take the world by storm as AI moves out of this dominance by NVIDIA, or whether there’s a place for BrainChip’s Akida project. All I know is that it’s not there, yet, and they’re now facing a wild storm of funding going into hundreds of other application-specific or otherwise customized AI silicon projects, so the competition should be wild. Whether BrainChip has an advantage because they’ve been working on this for six or seven years, or a disadvantage because they didn’t design for what the AI customers are lusting for right this second, I have no idea… all I know is that semiconductor changes in big industrial projects like cars and machinery often come very slowly, and they’re not booking sales now… so if there’s good news to come, it’s not here yet, and it looks like investors have largely moved on to the next shiny object. For now, at least.

As of today, BrainChip is valued at about $200 million… so it’s very small, but not minuscule. As of the June semiannual report they had about $22 million in cash, which meant that they sold enough stock ($9 million) in those six months to almost cover the cash they consumed (about $10.5 million). They have a funding agreement with a firm called LDA Capital, and they exercised their option on that again this month to notify the market that they’re selling up to 25 million more shares — LDA Capital gets an 8.5% discount to the share price, so that money will be somewhat dilutive, but not dramatically so. They say they have about A$16 million in capacity still available from LDA, under their funding agreement, so that and their current cash balance are probably enough to get them through 2024, at least at their current cash burn rate.
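For a rough sense of what that discount means in dollars, here's the back-of-the-envelope arithmetic; the A$0.70 share price below is purely illustrative, not the actual placement price:

```python
shares = 25_000_000   # shares notified under the LDA agreement
price = 0.70          # hypothetical market price in A$ (illustrative only)
discount = 0.085      # LDA Capital's discount to the share price

proceeds = shares * price * (1 - discount)
print(f"Gross proceeds ~ A${proceeds / 1e6:.1f}m")  # ~ A$16.0m at this price
```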

Will the cash burn continue? I don’t know. We see this with early-stage biotechs pretty regularly, and it’s often very hard to break that cycle once you start selling millions of shares at lower and lower share prices. They say they’re trying to “aggressively expand our go-to-market capabilities” and are “embarking on numerous initiatives to accelerate market adoption of our groundbreaking Akida technology.” So they’ve still got ambition and plans, just no real numbers for me to work with.

Given my lack of expertise about their deals and their technology, I’ll leave it there — personally, I’m much more likely to wait until there’s something I can sink my teeth into — some proof of commercial viability with their product, in the form of real orders from their partner companies at least, if not actual revenue.


But it’s also true that you’ve gotta start somewhere. Investing in these kinds of R&D projects that are early in partnering and not yet commercial is really more like venture capital investing than stock investing, this is inherently a binary bet on the future — either the product will gain acceptance, or not — and if you’re not going to be patient in the usually very slow process of trying to build a commercial market for a new kind of chip, there’s not much point in getting involved at all.

Unless you’re a penny stock trader, of course, and want to try to ride the wild moves that tiny little stocks can sometimes make when they get a little attention. Maybe that’s what Tim Bohen is doing here, I have no idea, otherwise I’m not sure why anyone would put out such a vociferous pitch for a penny stock, using year+ old numbers.

So I’ll turn it back to you, dear friends — want another early-stage speculation in the AI world, with a different kind of chip designer? Think BrainChip has what it takes to really become a commercial business and succeed as a “next NVIDIA” idea? Am I being too cruel or too kind? Let us know with a comment below.
 
Last edited:
  • Like
  • Thinking
  • Love
Reactions: 35 users

RobjHunt

Regular
Afternoon Chippers & Season's Greetings,

Bit baffled today, what with a new patent granted (No. 19, covering America).

Each and every new patent granted to our company adds, in my view, an extra $25 to $30 million to our market cap.


Patience is a virtue; no matter how good the secret sauce is, until the masses get addicted, one has to endure.

Regards,
Esq.
Spot on Esqy. A little more Pantene applied liberally is recommended 😉
 
  • Haha
  • Fire
Reactions: 4 users

Diogenese

Top 20
  • Like
  • Sad
  • Fire
Reactions: 8 users

Jasonk

Regular

Drawing a longbow here and thought it made for a good Christmas story.

Reminds me of the CES 2019 KiKi AI robot, which seemed to have vanished in 2021 after raising over 100,000 dollars in 24 hours. I have no idea if it was ever mass-produced, which was the intention. One can only dream that Zoetic was purchased by a larger company and further developed... the CEO previously worked for Google as a software engineer and then returned as a senior engineering manager after 5 years developing this (Jun 2022), which was a month after the below article was written. Maybe it was placed back into development for 18 months under new ownership :)


[Screenshots of the referenced article]
 

Last edited:
  • Like
  • Fire
  • Thinking
Reactions: 19 users
  • Haha
  • Like
  • Fire
Reactions: 9 users
Was just checking in on Quantum Ventura and Quantum X to see if I could find anything new.

Nothing outright, but I found this list on their site.

Sure I recall we are in on the DHS and DOE projects, but you gotta think we'd maybe be in the mix with a couple of these others too :unsure:



Our Current Federally-Funded Research Projects as the Prime Contractor:


AI / ML / Hyperspectral Imaging / Neuromorphic / Cybersecurity-related topics:


DARPA:
"AI Verification with provable guarantees" using advanced AI verification tools.

Partner: NC State University

Navy Air Warfare: "Certification of AI Systems - CORSI" using Advanced AI to certify AI/ML applications. (Phase 1 and Phase 2 SBIRs)

Partner: Lockheed Martin.

Missile Defense Agency: "Hypersonic Threat Detection" using bio-inspired processing, neuromorphic computing and Advanced AI. (Phase 1 STTR)

Partners: University of Florida and Lockheed Martin.

Navy Air Warfare: "Detection of UAVs and rogue drones using hyperspectral Imaging" (SBIR Phase 1).

Partners: Bodkin Imaging and Lockheed

Navy Air Warfare: "Vulnerability detection of source code using advanced AI/ML" - SBIR Phase 1

Homeland Security: "Opioid/contraband detection using hyperspectral imaging" - SBIR Phase 1

Department of Energy: "Cyber threat-detection using neuromorphic computing" - SBIR Phase 1

Also found this article / write-up on the QV partnership from May this year, which I hadn't seen. It's by the IMC (IoT M2M Council), which appears to be a group of various companies and members.


BrainChip AI detects cyber threats for US government​

  • May 17, 2023
  • Steve Rogerson

Quantum Ventura, a San Jose-based provider of AI and ML research and technologies, is using BrainChip’s Akida technology to develop cyber threat-detection tools.

Australian firm BrainChip is a commercial producer of low power, digital, event-based, neuromorphic AI IP.

In a federally funded phase-two programme, Quantum Ventura is creating cyber-security applications for the US Department of Energy under the Small Business Innovation Research (SBIR) programme. The programme is focused on cyber threat-detection using neuromorphic computing, which aims to develop an approach to detect and prevent cyber attacks on computer networks and critical infrastructure using brain-inspired artificial intelligence.

“Neuromorphic computing is an ideal technology for threat detection because of its small size and power, accuracy, and, in particular, its ability to learn and adapt, since attackers are constantly changing their tactics,” said Srini Vasan, CEO of Quantum Ventura. “We believe that our solution incorporating BrainChip’s Akida will be a breakthrough for defending against cyber threats and address additional applications as well.”
Rob Telson, vice president at BrainChip, added: “This project with the Department of Energy offers an ideal opportunity to demonstrate how Akida opens up new possibilities in cyber security, including the ability to run complex AI algorithms at the edge, reducing the dependency on the cloud. We are excited about the progress that Quantum Ventura is making with BrainChip in this project which is extremely vital to the safety of the nation’s infrastructure.”

The Akida neural processor and AI IP can find unknown, repeating patterns in vast amounts of noisy data, which is an asset in cyber threat detection. Once Akida learns what normal network traffic patterns look like, it can detect malware, attack signatures and other types of malicious activity. Because of Akida’s ability to learn on device in a secure fashion, without need for cloud retraining, it can quickly learn new attack patterns, enabling it to adapt to emerging threats.
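The article doesn't spell out how the detection is wired up, but the "learn normal, flag deviations" idea it describes is classic anomaly detection. Here's a minimal statistical sketch of that intuition; it's plain NumPy with made-up traffic features and threshold, nothing Akida-specific:

```python
import numpy as np

# Toy feature vectors per traffic window: [packets/s, mean size, unique ports]
normal = np.random.default_rng(1).normal([500, 800, 12], [50, 80, 2], (1000, 3))

# "Learning normal": summarise baseline traffic statistics
mu, sigma = normal.mean(axis=0), normal.std(axis=0)

def is_anomalous(window, z_threshold=4.0):
    # Flag a window whose features deviate far from the learned baseline --
    # the same intuition as learning normal patterns on-device and
    # flagging whatever doesn't fit.
    z = np.abs((np.asarray(window) - mu) / sigma)
    return bool((z > z_threshold).any())

print(is_anomalous([510, 790, 13]))    # False: looks like the baseline
print(is_anomalous([9000, 60, 250]))   # True: port-scan-like burst
```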

BrainChip IP supports incremental learning, on-chip learning and high-speed inference with performance in microwatt to milliwatt power budgets, suitable for AI and ML devices such as intelligent sensors, medical devices, high-end video-object detection, and ADAS and autonomous systems.

Akida is an event-based technology that is lower power than conventional neural network accelerators, providing energy efficiency with high performance for partners to deliver AI products previously not possible on even battery-operated or fan-less embedded edge devices.

Headquartered in Silicon Valley, Quantum Ventura is creating technologies in the areas of AI and ML verification and validation, cyber security, secure mobile technology and HPC-driven big data analytics. Its R&D division, QuantumX Research Labs, provides technology to federal agencies and corporations throughout the USA.

BrainChip specialises in edge AI on-chip processing and learning. The company’s first-to-market, fully digital, event-based AI processor, Akida, uses neuromorphic principles to mimic the human brain, analysing only essential sensor inputs at the point of acquisition, processing data with efficiency, precision and economy of energy. It enables edge learning local to the chip, independent of the cloud, reducing latency while improving privacy and data security.

Akida neural processor IP, which can be integrated into SoCs on any process technology, has shown benefits on today’s workloads and networks, and offers a platform for developers to create, tune and run their models using standard AI workflows such as Tensorflow and Keras. In enabling effective edge compute to be universally deployable across real world applications such as connected cars, consumer electronics and industrial IoT, BrainChip says on-chip AI, close to the sensor, is the future.
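As I recall BrainChip's MetaTF docs, that Keras-to-Akida flow looks roughly like the sketch below. The quantize/convert calls are from the cnn2snn package, but the exact parameter names vary between MetaTF versions, so treat this as an assumption rather than the definitive API:

```python
import tensorflow as tf
# cnn2snn is BrainChip's MetaTF conversion package; signatures here are
# from memory and may differ between versions (assumption).
from cnn2snn import quantize, convert

# An ordinary Keras model, built with the standard workflow
model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(32, 3, activation="relu", input_shape=(28, 28, 1)),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(10, activation="softmax"),
])

# Quantize to low-bit weights/activations, then convert the quantized
# network into an event-based Akida model for the neural processor.
model_q = quantize(model, weight_quantization=4, activ_quantization=4)
model_akida = convert(model_q)
model_akida.summary()
```

The point is just that the entry path is an ordinary Keras workflow, with the event-based conversion handled at the end.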
 
  • Like
  • Love
  • Fire
Reactions: 46 users
Surfing some scholar articles and saw this one with some Akida keywords.

Appears their tweaking got a 5.7× data-augmentation boost.

I can't get access, but I copied what was available, as it was the authors who interested me... not the US arm but the Japanese division :)


Data Augmentation for Edge-AI on-chip Learning

N Yoshida, H Miura, T Matsutani… - … on Internet of Things …, 2022 - ieeexplore.ieee.org
evaluation chip AKD1000 [1] to execute finetuning with just one-shot input. Subsequent investigations revealed that it realized a neuromorphic … The Akida neuromorphic ML Framework

Data Augmentation for Edge-AI on-chip Learning​

Naoto Yoshida, Hina Miura, Takashi Matsutani, Hideto Motomura
2022 IEEE 8th World Forum on Internet of Things (WF-IoT), 1-6, 2022
This study applied data augmentation to improve the inference accuracy of edge-artificial intelligence on-chip learning, which uses the fine-tuning technique with limited knowledge and without a cloud server. Subsequently, keyword spotting was adopted as an example of the edge-AI application to evaluate inference accuracy. Investigations revealed that all four data augmentation types contributed to inference accuracy improvements, boosting data augmentation by 5.7 times rather than the one-shot boost without data augmentation recorded previously.

Naoto Yoshida
AI Division, MegaChips Corporation, Osaka, Japan

Hina Miura
AI Division, MegaChips Corporation, Osaka, Japan

Takashi Matsutani
AI Division, MegaChips Corporation, Osaka, Japan

Hideto Motomura
AI Division, MegaChips Corporation, Osaka, Japan


Introduction​

Edge-artificial intelligence (edge-AI) operations are useful to promote mobility services from the viewpoint of personal optimizations. However, major operations required to control vehicles must be identical to ensure safety. Furthermore, since drivers' skills and experience differ, personal optimizations are required to make them comfortable. Therefore, edge-AI has been considered suitable for modifying each driver's major AI-based functions.
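The paper itself is paywalled, so the four augmentation types they used aren't visible here, but for keyword-spotting audio the usual candidates look something like this generic sketch (my assumption, not their code):

```python
import numpy as np

rng = np.random.default_rng(42)

def time_shift(x, max_frac=0.1):
    # Shift the waveform left/right by up to 10% of its length
    n = int(len(x) * max_frac)
    return np.roll(x, rng.integers(-n, n + 1))

def add_noise(x, snr_db=20):
    # Mix in white noise at a target signal-to-noise ratio
    noise = rng.normal(size=len(x))
    scale = np.sqrt(np.mean(x**2) / (10 ** (snr_db / 10) * np.mean(noise**2)))
    return x + scale * noise

def gain(x, low=0.8, high=1.2):
    # Random volume change
    return x * rng.uniform(low, high)

def augment(x):
    # Turn one recorded keyword into several training variants -- the point
    # of augmentation when on-chip learning only gets a few shots
    return [time_shift(x), add_noise(x), gain(x), time_shift(add_noise(x))]

clip = rng.normal(size=16000)   # stand-in for 1 s of 16 kHz audio
print(len(augment(clip)), "augmented variants from one clip")
```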
 
  • Like
  • Fire
  • Love
Reactions: 31 users