BRN Discussion Ongoing

Tothemoon24

Top 20
BrainChip, pick up the phone


Feb 10, 2023, 03:09pm EST


Running ChatGPT costs millions of dollars a day, which is why OpenAI, the company behind the viral natural-language-processing AI, has started ChatGPT Plus, a $20/month subscription plan. But our brains are a million times more efficient than the GPUs, CPUs, and memory that make up ChatGPT’s cloud hardware. And neuromorphic computing researchers are working hard to make the miracles that big server farms in the cloud can do today much simpler and cheaper, bringing them down to the small devices in our hands, our homes, our hospitals, and our workplaces.
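To put a crude number on that "million times" claim before going further: a human brain runs on roughly 20 watts, while the GPU fleets behind large models draw megawatts. A hedged sketch in Python, using the widely cited ~20 W brain estimate and a ~400 W TDP per A100; the 10,000-GPU training figure appears later in this article.

```python
# Crude wattage comparison only; ignores memory, cooling, and utilization.
brain_w = 20          # rough power draw of a human brain (standard estimate)
gpus = 10_000         # A100s reportedly used to train ChatGPT (cited below)
gpu_w = 400           # approximate TDP of one A100 (assumption)

cluster_w = gpus * gpu_w
print(f"Training cluster: ~{cluster_w / 1e6:.0f} MW")          # ~4 MW
print(f"Cluster vs one brain: ~{cluster_w / brain_w:,.0f}x")   # ~200,000x on power alone
```

Power draw alone already puts the gap at five to six orders of magnitude, before counting how much useful computation each watt buys.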


One of the keys: modeling computing hardware after the computing wetware in human brains.





“We have to give up immortality,” the CEO of Rain AI, Gordon Wilson, told me in a recent TechFirst podcast. “We have to give up the idea that, you know, we can save software, we can save the memory of the system after the hardware dies.”


Wilson is quoting Geoff Hinton, a cognitive psychologist and computer scientist, author or co-author of over 200 peer-reviewed publications, current Google employee working on Google Brain, and one of the “godfathers” of deep learning. At a recent NeurIPS machine learning conference, he talked about the need for a different kind of hardware substrate to form the foundation of AI that is both smarter and more efficient. It’s analog and neuromorphic — built with artificial neurons in a very human style — and it’s co-designed with software to form a tight blend of hardware and software that is massively more efficient than current AI hardware.


Achieving this is not just a nice-to-have, or a vague theoretical dream.

Building a next-generation foundation for artificial intelligence is literally a multi-billion-dollar concern in the coming age of generative AI and search. One reason is that when training large language models (LLMs) in the real world, there are two sets of costs to consider.

Training a large language model like that used by ChatGPT is expensive — likely in the tens of millions of dollars — but running it is the true expense. Running the model, responding to people’s questions and queries, uses what AI experts call “inference.”

That’s precisely what runs ChatGPT’s compute costs into the millions so regularly. But it will cost Microsoft’s AI-enhanced Bing much more.

And the costs for Google to respond to the competitive threat and duplicate this capability could be literally astronomical.

“Inference costs far exceed training costs when deploying a model at any reasonable scale,” say Dylan Patel and Afzal Ahmad in SemiAnalysis. “In fact, the costs to inference ChatGPT exceed the training costs on a weekly basis. If ChatGPT-like LLMs are deployed into search, that represents a direct transfer of $30 billion of Google’s profit into the hands of the picks and shovels of the computing industry.”

If you run the numbers like they have, the implications are staggering.

“Deploying current ChatGPT into every search done by Google would require 512,820 A100 HGX servers with a total of 4,102,568 A100 GPUs,” they write. “The total cost of these servers and networking exceeds $100 billion of Capex alone, of which Nvidia would receive a large portion.”
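As a sanity check on those figures, here is a quick back-of-envelope in Python. The eight-GPUs-per-server count is the standard HGX A100 configuration; the per-server price is simply backed out of the $100 billion total above, so treat it as an implication of their numbers rather than a quoted spec.

```python
# Back-of-envelope check of the SemiAnalysis figures quoted above.
servers = 512_820            # A100 HGX servers they estimate for Google-scale search
gpus_per_server = 8          # standard HGX A100 configuration (assumption)

total_gpus = servers * gpus_per_server
print(f"Total GPUs: {total_gpus:,}")   # 4,102,560 -- matches their ~4.1M figure

capex = 100e9                # "$100 billion of Capex alone"
print(f"Implied cost per server (incl. networking): ${capex / servers:,.0f}")  # ~$195,000
```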

Assuming that’s not going to happen (likely a good assumption), Google has to find another way to approach similar capability. In fact, Microsoft, which has so far released its new ChatGPT-enhanced Bing in only very limited availability, for good reasons that probably include hardware and cost, needs another way too.

Perhaps that other way is analogous to something we already have a lot of familiarity with.

According to Rain AI’s Wilson, we have to learn from the most efficient computing platform we currently know of: the human brain. Our brain is “a million times” more efficient than the AI technology that ChatGPT and large language models use, Wilson says. And it happens to come in a very flexible, convenient, and portable package.

“I always like to talk about scale and efficiency, right? The brain has achieved both,” Wilson says. “Typically, when we’re looking at compute platforms, we have to choose.”

That means you can get the creativity that is obvious in ChatGPT or Stable Diffusion, which rely on data center compute to build AI-generated answers or art (trained, yes, on copyrighted images), or you can get something small and efficient enough to deploy and run on a mobile phone, but that doesn’t have much intelligence.

That, Wilson says, is a trade-off that we don’t want to keep having to make.

Which is why, he says, an artificial brain built with memristors that can “ultimately enable 100 billion-parameter models in a chip the size of a thumbnail,” is critical.

For reference, ChatGPT’s large language model is built on 175 billion parameters, and it’s one of the largest and most powerful yet built. ChatGPT 4, which rumors say is as big a leap from ChatGPT 3 as the third version was from its predecessors, will likely be much larger. But even the current version used 10,000 Nvidia GPUs just for training, with likely more to support actual queries, and costs about a penny an answer.
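Taken together with the opening claim that running ChatGPT costs millions of dollars a day, that penny-per-answer figure pins down the implied query volume. Both inputs below come from this article; the division is the only thing added.

```python
# What "a penny an answer" plus "millions of dollars a day" jointly imply.
cost_per_answer = 0.01       # ~1 cent per answer, as cited above
daily_spend = 1_000_000      # "millions of dollars a day" (lower bound)

answers_per_day = daily_spend / cost_per_answer
print(f"{answers_per_day:,.0f} answers per $1M of daily spend")  # 100,000,000
```

At that rate, every additional million dollars a day buys about a hundred million answers, which is why inference, not training, dominates the bill.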

Running something of roughly similar scale on your finger is going to be multiple orders of magnitude cheaper.

And if we can do that, it unlocks much smarter machines that generate that intelligence in much more local ways.

“How can we make training so cheap and so efficient that you can push that all the way to the edge?” Wilson asks. “Because if you can do that, then I think that’s what really encapsulates an artificial brain. It’s a device. It’s a piece of hardware and software that can exist, untethered, perhaps in a cell phone, or AirPods, or a robot, or a drone. And it importantly has the ability to learn on the fly. To adapt to a changing environment or a changing self.”

That’s a critical evolution in the development of artificial intelligence. Doing so enables smarts in machines we own and not just rent, which means intelligence that is not dependent on full-time access to the cloud. Also: intelligence that doesn’t upload everything known about us to systems owned by corporations we end up having no choice but to trust.

It also, potentially, enables machines that differentiate. Learn. Adapt. Maybe even grow.

My car should know me and my area better than a distant colleague’s car. Your personal robot should know you and your routines, your likes and dislikes, better than mine. And those likes and dislikes, along with your personal data, should stay local on that local machine.

There’s a lot more development to be done, however, on analog systems and neuromorphic computing: at least several years’ worth. Rain has been working on the problem for six years, and Wilson thinks shipping product in quantity — 10,000 units for OpenAI, 100,000 units for Google — is at least “a few years away.” Other companies like chip giant Intel are also working on neuromorphic computing with the Loihi chip, but we haven’t seen that come to market at scale yet.

If and when we do, however, the brain-emulation approach shows great promise. And the potential for great disruption.

“A brain is a platform that sports intelligence,” says Wilson. “And a brain, a biological brain, is hardware and software and algorithms all blended together in a very deeply intertwined way. An artificial brain, like what we’re building at Rain, is also hardware plus algorithms plus software, co-designed, intertwined, in a way that is really ... inseparable.”

Even, possibly, at shutdown.







 
  • Like
  • Love
  • Thinking
Reactions: 42 users
There has never been a time in Brainchip’s history when so many positive facts are in full public view.

I won’t list them all but Renesas, NASA, Valeo, Mercedes Benz, Intel, ISL, US Airforce Research, ARM, SiFive, Edge Impulse, MegaChips, MOSCHIPS, VVDN, Prophesee, Nviso, Numera, Tata, SOCIONEXT, Biotome……….

The CTO of Accenture stating publicly on a BrainChip podcast that BrainChip is already commercially successful.

Highly credentialed people in the semiconductor space electing to join Brainchip during difficult global financial and political times.

A new AKD1500 chip taped out with GlobalFoundries in response to customer demand.

My off the wall theory on this one is SiFive, DARPA & NASA.

SiFive and ARM are each other’s major competitors.

Would SiFive really want to promote AKD1000 when it uses an ARM CPU? That is illogical to my mind, so the solution is to produce an AKD1500 without an ARM CPU, using GlobalFoundries, a preferred foundry for US defence and space work.

There is clear evidence that AKD2000 is still on track, as temporary staff are being employed for the stated purpose of facilitating its launch in the next 2 to 3 months.

Healthy cash reserves, and an early call on LDA Capital for the purpose of funding dedicated sales staff in Japan and Korea and the AKD1500, which has clearly been accelerated.

More sales enquiries than at any point in BrainChip’s history.

Staff members being publicly awarded for their research.

It truly is an exciting time, despite what some will try to preach.
Hi Chapman,

I generally agree with what you said but have a couple of differing opinions.

Firstly, I don't think the project manager job advertisement mentioning an immediate 2.0 launch confirms AKD2000 is tracking on time and will be ready in a few months. Assuming that AKD2000 is still going to be an optimised AKD1500, this likely won't be finalised until after the first AKD1500 reference chip is back and tested. My guess is AKD2000 will be ready closer to a year from now.

Keeping in mind that this project manager job reports to the CMO, I think it's more likely to be referring to either a brand refresh or the launch of marketing for the new AKD1500 chip (job ad includes mentions of user manuals and training materials). AKD1500 will bring a lot of new functionality - hopefully transformers (which I suspect it will from the changelogs) as this will allow use cases with advanced predictive capabilities and may attract some of the attention currently focussed on ChatGPT. So with that comes a lot of great new marketing opportunities.

Secondly, while I get the potential link with GlobalFoundries and NASA / DARPA, I wouldn't expect the choice of GF 22nm FDSOI to be their request. A large reason for this is that governments generally take the slow but safer route. Decision making is usually slower, and I also don't recall seeing any SBIRs awarded for neuromorphic / SNN transformers. In my view it's more likely that NASA will wait for the reference chip to be produced, do internal testing and then decide what they want to do with it. I'm also not convinced they would care about such a small node size.
My guess is it's more likely to be a request from the other EAPs like Mercedes, one of the most advanced early adopters. The below link about a joint STMicro and GF facility offers some perspective in favour of this.

Pure speculation, DYOR


“This new manufacturing facility will support our $20 billion+ revenue ambition. Working with GF will allow us to go faster, lower the risk thresholds, and reinforce the European FD-SOI ecosystem. We will have more capacity to support our European and global customers as they transition to digitalization and decarbonization” said Jean-Marc Chery, President and CEO of STMicroelectronics. “ST is transforming its manufacturing base. We already have a unique position in our 300mm wafer fab in Crolles, France which will be further strengthened by today’s announcement. We continue to invest into our new 300mm wafer fab in Agrate (near Milan, Italy), ramping up in H1 2023 with an expected full saturation by end 2025, as well as in our vertically integrated silicon carbide and gallium nitride manufacturing.”

“Our customers are seeking broad access to 22FDX® capacity for auto and industrial applications.


 
  • Like
  • Fire
  • Love
Reactions: 29 users

Diogenese

Top 20


Hi Idd,

You are correct that, in the past NASA has opted for proven technology, but in the last week or so someone posted a NASA SBIR calling for 22nm FD-SOI and complaining about the wasted silicon in having a built-in processor - pretty much what Akida 1500 provides.

Akida 1000 has an ARM Cortex (licenced from ARM) and several comms interfaces, none of which form part of the Akida IP.

It may be that, for a specific purpose, only one comms interface would be needed.

Such a pared down Akida would need an off-chip processor for configuration, but this would not significantly affect performance as Akida does not need any processor for inference, and, if memory serves, no more than 5% of CNN2SNN processing is done on the processor.

This frees up Akida to work with SiFive, Intel or any other processor without the need to pay an additional licence royalty to ARM. It also reduces the silicon footprint so more Akida 1500s can fit on a wafer.

The Akida IP includes the NN array and the CNN2SNN conversion hardware, and the configuration software.
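For anyone who hasn't seen the flow Diogenese is describing: a Keras CNN is trained normally on a host, then quantized and converted to an Akida SNN. A minimal sketch only — the quantize/convert names follow BrainChip's cnn2snn package (MetaTF) as documented around the Akida 1.0 era, and the exact signatures and layer constraints should be treated as assumptions, not gospel.

```python
import tensorflow as tf
from cnn2snn import quantize, convert  # BrainChip's CNN-to-SNN toolkit

# An ordinary small Keras CNN, trained with normal backprop on the host.
model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(16, 3, activation="relu", input_shape=(32, 32, 3)),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(10),
])

# Quantize weights and activations down to the low bit-widths the hardware
# supports (kwarg names per the cnn2snn docs of that era; an assumption here).
model_q = quantize(model, weight_quantization=4, activ_quantization=4)

# Convert to an event-based Akida model. From here, inference runs on the
# neuromorphic fabric itself; a host processor is only needed for
# configuration, which is the point being made above.
model_akida = convert(model_q)
```

The punchline is in the last step: once converted, the NN array does the inference work, so a pared-down AKD1500 can lean on whatever external processor configures it.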

The US DoD has reacted to the Sino-Russian situation. The US has a new CHIPS Act encouraging Intel and others to repatriate chip manufacturing.

That said, your STM article indicates that there is significant demand for the GF tech from industry as well.
 
Last edited:
  • Like
  • Fire
  • Love
Reactions: 58 users
Hi D,

Thanks, I must have missed that SBIR. That explains why there's been talk about not needing a processor.
The timing of the SBIR and the announcement of our 22nm AKD1500 GF reference chip does align very nicely.


For anyone else that missed it:

Release Date: January 10, 2023
Open Date: January 10, 2023
Application Due Date: March 13, 2023
Close Date: March 13, 2023 (closing in 29 days)

The preference is for a prototype processor fabricated in a technology node suitable for the space environment, such as 22-nm FDSOI, which has become increasingly affordable.


Neuromorphic and deep neural net software for point applications has become widespread and is state of the art. Integrated solutions that achieve space-relevant mission capabilities with high throughput and energy efficiency are a critical gap. For example, terrestrial neuromorphic processors such as Intel Corporation's Loihi™, BrainChip's Akida™, and Google Inc's Tensor Processing Unit (TPU™) require full host processors for integration with their software development kit (SDK) that are power hungry or limit throughput. This by itself is inhibiting the use of neuromorphic processors for low-SWaP space missions.
 
  • Like
  • Fire
  • Love
Reactions: 34 users

Boab

I wish I could paint like Vincent
And remember Dio's response.

Deep Neural Net and Neuromorphic Processors for In-Space Autonomy and Cognition | SBIR.gov
www.sbir.gov
A nice mention for BrainChip.
I suppose that's OK on the "no publicity is bad publicity" principle.

" ... terrestrial neuromorphic processors such as Intel Corporation's Loihi™, BrainChip's Akida™, and Google Inc's Tensor Processing Unit (TPU™) require full host processors for integration with their software development kit (SDK) that are power hungry or limit throughput. This by itself is inhibiting the use of neuromorphic processors for low-SWaP space missions."

The redeeming feature is:

"The preference is for a prototype processor fabricated in a technology node suitable for the space environment, such as 22-nm FDSOI, which has become increasingly affordable."

So, suddenly, GlobalFoundries is about to make Akida 1500 in FD-SOI, without the encumbrance of a full host processor.
 
  • Like
  • Fire
  • Love
Reactions: 35 users

Quatrojos

Regular


Perhaps PVDM's availability for a chat on Twitter (or whatever it is) is indicative that he's now kicking back with some spare time...
 
  • Like
  • Fire
  • Thinking
Reactions: 10 users

jtardif999

Regular
Renesas and Megachips alone can easily give us 2% market share and more which would be enough to make BRN successful and profitable.

These are bonuses: NASA, Valeo, Mercedes Benz, Intel, ISL, US Airforce Research, ARM, SiFive, Edge Impulse, MOSCHIPS, VVDN, Prophesee, Nviso, Numera, Tata, SOCIONEXT, Biotome
Just curious, how do you arrive at the 2 percent? I would hope that they could potentially provide us with more market share than that. I believe BRN's tech is good enough to have a profound impact on the market - no matter who is first in doing the delivery. AIMO.
 
  • Like
  • Fire
Reactions: 11 users

Learning

Learning to the Top 🕵‍♂️
This certainly puts BrainChip's neuromorphic tech in the box seat. "Beneficial AI" is better for the planet.

[Screenshot of LinkedIn post attached: Screenshot_20230212_225354_LinkedIn.jpg]



Learning 🏖
 

  • Like
  • Fire
  • Love
Reactions: 45 users

cosors

👀
How many chips are in a Range Rover? “A modern car can have anywhere from 1,500 to 3,000 chips in up to 80 separate electronic control units, with the luxury vehicles built by Jaguar Land Rover being at the higher end of the spectrum.”

How many computer chips does an electric car have? About 3,000. During his remarks, Biden said the average electric vehicle uses about 3,000 chips, meaning an EV will need more than double the number in a non-electric car.
It is not only the chip itself that is embedded; it has to be supplied by its periphery too. Perhaps the more important argument for me in your train of thought is the manufacturing footprint. As an engineer I like to think: as little as possible, as much as necessary. That ultimately helps the climate, still ensures that everything necessary is addressed, and reduces costs at the same time.
 
Last edited:
  • Like
  • Fire
  • Love
Reactions: 12 users
As an engineer, I like to think: as little as possible, as much as necessary

@cosors “as little as possible, as much as necessary” 👌🏼
 
  • Like
  • Love
  • Haha
Reactions: 11 users

cosors

👀
Akida: beyond Nvidia.
I keep having this crude thought that some big player is behind our unpleasantness. I know, a conspiracy theory. But Nvidia would have an even bigger motive than Intel in this half-formed thought of mine. Actually, this belongs in the other thread.
[Screenshot attached: Screenshot_2023-02-06-21-51-55-61_40deb401b9ffe8e1df2f1cc5ba480b12.jpg]

Perhaps it was a humiliation for him. Hands off the wheel and "Hey Mercedes", and the system runs via Nvidia.
 
Last edited:
  • Like
  • Fire
Reactions: 10 users

cosors

👀
This certainly puts BrainChip's neuromorphic tech in the box seat. "Beneficial AI" is better for the planet.
For me a very important thought, even if most end users, green-minded as they may be, don't face up to this. It is so comfortable not to reflect on one's own behaviour.
I've spent too much time on such matters, and on the naysayers, with TLG 😅
 
  • Like
Reactions: 5 users

Tony Coles

Regular
That’s impressive @Learning. We are witnessing BRN getting mentioned in articles a little more often now; hopefully it’s just a good start for 2023. Have a great day all. 👍
 
  • Like
  • Love
  • Fire
Reactions: 18 users
[Screenshot attached: Screenshot_20230213-065634.png]

Another Rob Telson like. Lol, do people really get bothered by the likes he gives on LinkedIn?
 
  • Like
  • Fire
  • Love
Reactions: 31 users

Goldphish

Emerged
  • Like
Reactions: 11 users

Deadpool

hyper-efficient Ai
  • Haha
  • Like
Reactions: 6 users
  • Fire
  • Like
Reactions: 2 users

Sirod69

bavarian girl ;-)
There is certainly nothing about our Akida here, but Mercedes is always happy to remind everyone of this car. I'm curious to see when we'll appear again in the reports on the VISION EQXX. 🥰 The more who love it, the better.

Mercedes-Benz AG
5 hrs


Fusion of tech and aesthetic: The interplay of design and aerodynamics in the VISION EQXX.

Aerodynamic drag can have a big impact on range. On a regular long-distance drive, a typical electric vehicle dedicates almost two thirds of its battery capacity to cutting its way through the air ahead, which is why the VISION EQXX has an ultra-sleek and slippery drag coefficient of 0.17.
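To see why that 0.17 number matters so much for range: drag power grows with the cube of speed, so at highway pace the drag coefficient is nearly everything. A minimal sketch; the frontal area and the comparison Cd are illustrative assumptions, not Mercedes figures.

```python
# Aerodynamic drag power: P = 0.5 * rho * Cd * A * v^3
rho = 1.225           # air density at sea level, kg/m^3
area = 2.1            # assumed frontal area, m^2 (not a Mercedes figure)
v = 120 / 3.6         # 120 km/h expressed in m/s

def drag_power_kw(cd: float) -> float:
    """Power (kW) needed just to push through the air at speed v."""
    return 0.5 * rho * cd * area * v ** 3 / 1000

print(f"VISION EQXX (Cd 0.17): {drag_power_kw(0.17):.1f} kW")  # ~8.1 kW
print(f"Typical EV  (Cd 0.28): {drag_power_kw(0.28):.1f} kW")  # ~13.3 kW
```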

A huge amount of work went into painstakingly integrating the passive and active aerodynamic features into the external form of the VISION EQXX. The remarkable result was achieved on an impressively short timescale. The inter-disciplinary team used advanced digital modelling techniques to reach a compromise that reduces drag while retaining the sensual purity of the Mercedes-Benz design language and the practicalities of a road car.

Despite the practical challenges and the compressed timescale, the success of the collaboration is clearly evident in the sophistication and poise of the exterior design. The surfaces of the VISION EQXX run smoothly from the front, developing powerful yet sensual shoulders above the rear wheel arches. This natural flow concludes with a cleanly defined, aerodynamically effective tear-off edge accentuated by a gloss-black end trim, punctuated by the rear light clusters.

Learn more about the VISION EQXX: https://lnkd.in/dtddQ8rK
 
  • Like
  • Love
  • Thinking
Reactions: 11 users

jtardif999

Regular
Renesas full year report came out on Feb 9. They had some explaining to do!

Notice Concerning the Difference between Financial Results for the Year Ended December 31, 2022 and Results in the Previous Period

February 09, 2023 01:00 AM Eastern Standard Time
TOKYO--(BUSINESS WIRE)--Renesas Electronics Corporation (TSE: 6723, “Renesas”), a premier supplier of advanced semiconductor solutions, today announced the difference between its consolidated financial results for the year ended December 31, 2022 (January 1, 2022 to December 31, 2022), which it disclosed on February 9, 2023, and the financial results in the previous period (January 1, 2021 to December 31, 2021).
The forecasts for the above period are not based on IFRS, therefore the differences are shown as the actual figures.

1. Difference between consolidated financial results for the year ended December 31, 2022 and the year ended December 31, 2021
In millions of yen

                          Revenue     Operating profit   Profit before tax   Profit     Profit attributable to owners of parent
Year ended Dec 31, 2021   993,908     173,827            142,718             119,687    119,536
Year ended Dec 31, 2022   1,500,853   424,170            362,299             256,787    256,632
Difference                506,945     250,343            219,581             137,100    137,096
Difference (%)            51.0%       144.0%             153.9%              114.5%     114.7%

2. Background to the difference
Consolidated revenue for the year ended December 31, 2022 was 1,500.9 billion yen, a 51.0% increase year on year. This was mainly due to a sales increase effect from the consolidation of Dialog Semiconductor Plc acquired on August 31, 2021 and yen depreciation, in addition to an increase in revenue in the Automotive Business supported by continued growth in semiconductor contents per vehicle as well as an increase in revenue in the Industrial/Infrastructure/IoT Business from demand expansion in the infrastructure market such as datacenters.
The gross profit increased from improvements in product mix in addition to growth in revenue. In addition, the operating profit, profit before tax, profit, and profit attributable to owners of parent for the year ended December 31, 2022 significantly exceeded the results from the previous period, supported by the company’s efforts to streamline business operations.
 
  • Like
  • Fire
  • Wow
Reactions: 28 users

stuart888

Regular
This guy is a 20-year programmer, a hard-core deep learning engineer. Sharp guy, good videos.

He says he now uses ChatGPT to do 80% of his coding. That is disruptive. He supervises the code and learns from it.

Thought it was interesting. Plus he summarizes ChatGPT and transformer neural networks: "Attention Is All You Need", the famous Google paper that introduced transformers in 2017.

I feel BrainChip is going to win in TinyML, as you don't need big data to train edge AI on industrial patterns at ultra-low power.

https://hub.packtpub.com/paper-in-two-minutes-attention-is-all-you-need/
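For anyone curious what "Attention Is All You Need" actually boils down to, here is a minimal NumPy sketch of scaled dot-product attention, the core operation of the transformer. Toy shapes, illustration only.

```python
import numpy as np

def attention(Q, K, V):
    """Scaled dot-product attention: softmax(Q K^T / sqrt(d_k)) V."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                    # query-key similarities
    w = np.exp(scores - scores.max(axis=-1, keepdims=True))
    w /= w.sum(axis=-1, keepdims=True)                 # softmax over the keys
    return w @ V                                       # weighted mix of values

rng = np.random.default_rng(0)
Q, K, V = (rng.standard_normal((4, 8)) for _ in range(3))  # 4 tokens, d_k = 8
print(attention(Q, K, V).shape)  # (4, 8): one mixed vector per token
```

Every token effectively asks "which other tokens matter to me?" and blends their values accordingly; stacking that operation is what gives transformers their predictive power.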

 
  • Like
  • Fire
  • Wow
Reactions: 24 users