BRN Discussion Ongoing

Tothemoon24

Top 20
Ducking beautiful

🏄‍♀️




Reactions: 51 users

FJ-215

Regular
Many have mentioned ANT61 and their upcoming launch......

Not my research but a good reminder from their past LinkedIn posts...... you might have to click on the link (It's Friday, figure it out yourself)

ANT61 LinkedIn


Cool bit of video if you follow the link.


Great week, another splash or two of red and I'm off to bed.
Reactions: 25 users

FJ-215

Regular
I did mention the video didn't I......

Hmm....

Reactions: 13 users

Sirod69

bavarian girl ;-)

The Future of AI is Built on Arm​

Visit Arm at MWC Hall 2, Stand I60
Arm is showcasing our latest developments on the world's most pervasive and efficient compute architecture. Through cutting-edge demonstrations, meetings, and thought leadership, we are highlighting the AI-powered possibilities presented in 5G, datacenter, mobile, and automotive use cases.

The Infrastructure Underpinning AI​

See why industry leaders choose Arm custom silicon as the foundational platform for their AI transformation. Get the latest on how Arm Neoverse enables unmatched flexibility in integrating workload-specific acceleration or other innovation to foundational Arm compute, unlocking the path to custom, purpose-built silicon for AI-era infrastructure – delivering compute performance, sustainability and low development TCO.

The Future of Mobile is Built on Arm​

See why the Arm compute platform is the number 1 AI processing target for 3rd party developers, with 70 per cent of ML in today’s third-party smartphone applications already running on Arm CPUs. We will demonstrate our foundational platform, easy to access, ubiquitous: 2.5 billion Arm-based client devices form the foundation of MWC, 1.7 billion of which are already enjoying best-in-class AI thanks to Arm NN and software libraries that make your AI transformation transparent and frictionless.
Reactions: 10 users

Diogenese

Top 20
Reactions: 20 users

Diogenese

Top 20
Great to see Brainchip top of the list. What a month.


Brainchip has been on a rocket ride with little in the form of company announcements to explain the move. As I've noted in the Interesting Moves section, it does kick off a major investor roadshow next week.

The run has bashed up against some substantial historical resistance in the form of 0.52. You can clearly see the impact of this level on the demand-supply environment in today's candle, more specifically, the long upward pointing shadow. Upward pointing shadows are the fingerprints of excess supply.

Otherwise, the short term uptrend (light green ribbon) is well-established and the long term trend appears to be transitioning from neutral to up. Support now moves to the last peak at 0.39.


Investors should be on the lookout for further supply-side showings around 0.52, for example black candles or upward pointing shadows. Otherwise, white candles and downward pointing shadows would indicate the demand-side remains very much in control, and in this case 0.61 is the next likely overhead resistance point.
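For anyone who wants to screen for those supply-side candles programmatically, here's a minimal sketch; the OHLC numbers are illustrative only, not actual BRN prints, and the 1.5x body ratio is an arbitrary threshold of my own choosing:

```python
def upper_shadow(open_, high, low, close):
    """Length of the upward-pointing shadow: distance from the high
    down to the top of the real body (the larger of open and close)."""
    return high - max(open_, close)

def is_supply_candle(open_, high, low, close, body_ratio=1.5):
    """Flag candles whose upper shadow is at least `body_ratio` times
    the real body -- a rough proxy for 'excess supply' at the highs."""
    body = abs(close - open_)
    return upper_shadow(open_, high, low, close) >= body_ratio * max(body, 1e-9)

# Illustrative candle: opened 0.46, spiked to 0.52, closed back at 0.47
print(is_supply_candle(0.46, 0.52, 0.45, 0.47))  # True
```

A long upper shadow relative to the body means the price probed higher and was pushed back, which is the pattern described above.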

On the other hand the Lapsang Souchong index indicates that stray floaters have escaped the strainer and met resistance from the teaspoon and splashed onto the cup and handle ...
Reactions: 48 users

cosors

👀
Sometimes it's good to refresh the mind by rereading articles like this one attached.
NASA ~ Intellisense Systems Inc ~ Brainchip 7 Months to go before handover to NASA


Regards...Tech
That's not my cup of tea, but maybe it is for you. I saw that you were the last one who posted this URL. Can you do anything with it? I couldn't find anything here using the keyword search for Mentium Technologies or sub-orbital either.
Maybe it is interesting. I'm closing the tab, as this will take until 2026 anyway.

"Testing neuromorphic architectures for high capacity/low-power AI in Sub Orbital flight

Project Description​

The technology is a radiation-hardened Artificial Intelligence (AI) inference accelerator. The system consists of a co-processor that is able to expand the AI capabilities of existing systems by orders of magnitudes while consuming less than 0.4W of power.

Anticipated Benefits​

Artificial Intelligence systems in space are significantly impacted by hardware limitations such as power, mass, and radiation protection. This technology has the potential to bring world-class efficiency of 50 TOPS/W efficiency, at 0.4W power consumption, revolutionizing onboard data analysis, sensor enhancement, and system autonomy, all with a radiation-hardened design."
https://techport.nasa.gov/view/155249
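As a sanity check on those figures: if the 50 TOPS/W efficiency and the 0.4 W power draw applied at the same operating point (the listing doesn't actually say they do), the implied peak throughput works out to:

```python
efficiency_tops_per_w = 50.0  # claimed efficiency from the TechPort listing
power_w = 0.4                 # claimed power consumption
peak_tops = efficiency_tops_per_w * power_w  # throughput = efficiency x power
print(f"Implied throughput: {peak_tops:.0f} TOPS")  # Implied throughput: 20 TOPS
```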



But then it will probably come from Synopsys? I have no clue. That's really not my field. I can't get my head around who is doing what with whom for whom and so on; whomwhom. But ARM? I couldn't find them in Neuromorphia's ecosystem thread, which she has thankfully just updated. So I'll leave it here before I close the tab.

Reactions: 7 users

Diogenese

Top 20
"...This technology has the potential to bring world-class efficiency of 50 TOPS/W efficiency, at 0.4W power* consumption, revolutionizing onboard data analysis, sensor enhancement, and system autonomy, all with a radiation-hardened design."
FF
*For those of you who follow such things: the first release of the AKIDA 2.0 specs indicated that 'P' would have a capacity of 50 TOPS; however, after it was in the hands of the early access customers, some further upgrades by way of adding extra nodes were made, lifting 'P' to 131 TOPS. It might be thought sensible not to race in and produce a proof-of-concept chip until you have settled on the IP design, but I will leave that to others better qualified to say.
Hi cosors,

The number of nodes has always been flexible. Even Akida 1 was made with less than the maximum possible number of nodes/NPEs.

https://brainchip.com/akida-generations/

Akida 2E can have 1 or 2 nodes

Akida 2S can have from 3 to 8 nodes, and

Akida 2P can have from 8 to 256 nodes.

Interestingly VIT is only available as an option with Akida 2P.

I may be wrong, but didn't Akida 1 SoC have 80 nodes although the IP was capable of supporting up to 256 nodes? I guess the 256 limitation is down to the mesh network interconnecting the nodes.

So making Akida 2P capable of having 256 nodes (1024 NPEs) means that BRN contemplates Akida 2P being capable of some serious heavy lifting, not that I can think of any application which would require that capacity ...?
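Putting those configuration ranges into numbers: assuming 4 NPEs per node (implied by 256 nodes giving 1024 NPEs; the per-variant figures below are my reading of the post, not official specs), the NPE counts work out as:

```python
NPES_PER_NODE = 4  # implied by 256 nodes -> 1024 NPEs

# Node ranges per Akida 2 variant, as listed above: (min_nodes, max_nodes)
akida2_configs = {
    "2E": (1, 2),
    "2S": (3, 8),
    "2P": (8, 256),  # the only variant offering the ViT option
}

for variant, (lo, hi) in akida2_configs.items():
    print(f"Akida {variant}: {lo}-{hi} nodes -> "
          f"{lo * NPES_PER_NODE}-{hi * NPES_PER_NODE} NPEs")
```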
Reactions: 29 users

Diogenese

Top 20
"I can't get my head around who is doing what with whom to whom and so on; whomwhom."
whomwhom ...

Two Indian gentlemen were having a discussion on a bus and were overheard by a lady in the next seat.

1st gentleman: "It's spelt whoomb".

2nd gentleman: "No, no, no. It's spelt woombh."

The lady turned around and said: "Perhaps I can be of assistance. The correct spelling is womb".

"Goodness gracious me!" the gentlemen declared in unison. "When did you ever hear a hippopotamus fart under water?"
Reactions: 21 users
Whilst this author doesn't sound convinced neuromorphic is there yet, he does believe in the tech, and he added what some of his TCS colleagues are up to.



#215: Neuromorphic Machines - Mimicking The Brain​

Bio-mimicry meets the future of computing. It might be the great hope for AI, and will help in getting beyond the limitations of current computing architectures.​


VED SEN
14 FEB 2024

Excerpt...

On the other hand, we are still to see full-scale development with the billions of neurons - the hardware complexity and power consumption challenges are yet to be solved at scale. There is also very little by way of programming techniques or algorithms that utilize the abilities of the hardware design. Many aspects are still being researched. My colleagues at TCS are researching spike encoding for gesture recognition applications, for example.
Reactions: 20 users

cosors

👀
Yes yes, I'll leave it for today, poking around in things I don't understand.

But I think I got the joke.

Have all a nice weekend!
Reactions: 7 users

cosors

👀
What do you expect? They call themselves the fools of investing and have just discovered HC for fundamental analysis for their paid! accounts through this candidate and author here.
What's going on with them? I've just had a look at their website. On the one hand I wonder what has become of pickle and his favourite stock, I mean us. On the other hand I see that they are churning out superficial articles in bulk. Is that what all the fools do? Who pays for that? I mean, today alone he has cobbled together 18 articles; does he even know what they're all about?
When he was still writing about us I at least had the feeling that he took it personally. Now he produces articles en masse. Are they paid per article?
Seriously, who works for a place like that voluntarily, and who puts money on their table?
I picked out an article that recommended buying shares before next week. And at the bottom it says quite clearly and obviously why: they hold the stocks that they recommend in the article! 🤷‍♂️

MF contributor pickle has positions in Lovisa and Woodside Energy Group. The MF Australia's parent company MF Holdings Inc. has positions in and has recommended Lovisa, Macquarie Group, and WiseTech Global. The MF Australia has positions in and has recommended Macquarie Group and WiseTech Global. The MF Australia has recommended Lovisa. The MF has a disclosure policy. This article contains general investment advice only (under AFSL 400691). Authorised by who cares.

And I think they're serious about it. I need to wash my eyes.

Seriously, I think pickle has problems. While his colleague tonyoo casually generates two or three articles per day, pickle has to produce on an assembly line. Is he in a financial mess?
who cares 🤷‍♂️

Give me your signature and I'll tell you the best way to invest $1000, which I was told in confidence by my stealth undercover avatar colleague in a super-exclusive secret forum.


Subscribe here

An Ethical Oasis?!!!? 🤣🤣🤣🤣🤣🤣

___________________-
Seriously, that's what it really says on the subscription page!
Reactions: 19 users

Damo4

Regular
Nothing major, just another refresher
Reactions: 25 users

Sirod69

bavarian girl ;-)
INTEL CEO PAT GELSINGER:
Intel also wants to produce AMD processors
An AMD CPU from Intel? What seems unlikely at the moment could make sense for both companies in the medium term.
22 February 2024, 16:41, Martin Böckmann

AMD processors from Intel? This might be less absurd than it seems.

During a question-and-answer session at yesterday's IFS event, Intel CEO Pat Gelsinger responded positively to a question about future production for arch-rival AMD. Intel Foundry is intended to become a "world foundry" that is available to all companies worldwide and makes the entire product portfolio available. This includes Jensen (Nvidia), Satya (Microsoft) and Lisa (AMD).

The goal of the foundry team is to utilize the factories to capacity, said Gelsinger. He does not see the conflict of interest that Paul Alcorn (Tom's Hardware) suspects would arise if a superior AMD design became an even better product thanks to Intel's leading manufacturing process.

Intel Foundry has already been spun off into its own legally separate company, which will also publish its business figures separately in the future. This does not fundamentally change the competitive relationship between the development departments at Intel and AMD; it simply becomes possible for both manufacturers to have their products made using the same manufacturing process.

Colorful mix of chiplets possible
Gelsinger also expects that other manufacturers will use Intel products directly in the future, even where they have a better product of their own in other areas. For example, it would be possible to build a chip with Intel tiles but with I/O, security or graphics chiplets from a different manufacturer, or, as in the case of Meteor Lake, even from a different foundry.

Gelsinger reiterated the openness to competing companies in a later answer to another question: "I want my factory to be used by everyone, period. We want to help Nvidia build chips and AMD, TPU chips for Google and also inference chips for Amazon. We want to help them and we want to provide them with the most powerful, high-performance and efficient technology for their systems."

Reactions: 12 users

Sirod69

bavarian girl ;-)
Results of Operation Verification Using an Embedded AI-MPU Prototype Announced at ISSCC 2024
February 22, 2024
TOKYO, Japan ― Renesas Electronics Corporation (TSE: 6723), a premier supplier of advanced semiconductor solutions, today announced the development of embedded processor technology that enables higher speeds and lower power consumption in microprocessor units (MPUs) that realize advanced vision AI. The newly developed technologies are as follows: (1) A dynamically reconfigurable processor (DRP)-based AI accelerator that efficiently processes lightweight AI models and (2) Heterogeneous architecture technology that enables real-time processing by cooperatively operating processor IPs, such as the CPU. Renesas produced a prototype of an embedded AI-MPU with these technologies and confirmed its high-speed and low-power-consumption operation. It achieved up to 16 times faster processing (130 TOPS) than before the introduction of these new technologies, and world-class power efficiency (up to 23.9 TOPS/W at 0.8 V supply).
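One way to read those two headline numbers together: the 130 TOPS peak and the 23.9 TOPS/W best-case efficiency almost certainly describe different operating points, but if both held simultaneously the implied power draw would be (my back-of-the-envelope arithmetic, not a Renesas figure):

```python
peak_tops = 130.0              # claimed peak throughput
efficiency_tops_per_w = 23.9   # claimed best-case efficiency (0.8 V supply)
implied_power_w = peak_tops / efficiency_tops_per_w  # power = throughput / efficiency
print(f"Implied power at peak: {implied_power_w:.2f} W")  # ~5.44 W
```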
Reactions: 28 users
I like the last line of the article.

“These technologies will be applied to Renesas’ RZ/V series—MPUs for vision AI applications.”
Reactions: 18 users
Impressive
Reactions: 9 users

The Pope

Regular
Reactions: 9 users