BRN Discussion Ongoing

RobjHunt

Regular
  • Like
  • Fire
  • Love
Reactions: 20 users

Kozikan

Regular
Hi All
Thanks to the efforts of the person who upsets some here, the small dissertation I posted now has its own identity in cyberspace. Whether it deserves it or not is for others to judge, but here is the link:



Remember the life and times of poor Mr. Robert Goddard and always DYOR.
FF.

Hiya FF, always a pleasure to enjoy your thoughts and be reminded of the elephant-like memory you possess.
A pity a few impatient, poorly researched crybabies can't appreciate that the time scale of a significant, disruptive technological change (afforded by BrainChip's Akida chip) is beginning to approach their doorsteps.
PS: Robert Goddard
Only recently researched this amazing, maligned genius.
After being publicly derided by a major American tabloid for his work, said tabloid, many decades later, published an apology to Robert Goddard.
Hope this post finds you in good spirits.
 
  • Like
  • Fire
Reactions: 14 users

Newk R

Regular
Thus the case has been made... we are selling Akida way too cheaply. Double the price and no one bats an eye with the advantages front and center.

Maybe it's like the price of perfume... raise it and more people find it desirable.

If I were NVDA... this Table 6 comparison would be a call to action.
Maybe get real market penetration for the world to see, and perhaps then would be the time to increase the price.
 
  • Like
  • Fire
Reactions: 3 users
What the F#%K is going on?

Most of the shorters most likely got out on Friday, when around 80 million shares were traded on the ASX 200 rebalance day as instos needed to sell, most of it in the closing auction. The buyers were shorters and other instos, and the day still ended green!
 
  • Like
Reactions: 3 users

Xray1

Regular
Yes that would be nice.

The Buy side are stacked up and don't want the share price going lower. Happy to buy up the bulk of the cheap shares.

Prepare for the green!
Almost 8 million shares traded thus far today... I wonder if the shorters are moving out slowly???
 
  • Like
Reactions: 1 users
There still look to be a lot of active shorts: 16.2 million reported short sales on Friday, and 7.3% actively short four days prior, on Tuesday.

There were only about 15 million shares traded between Tuesday and Friday. Outstanding shorts look to be at least in the vicinity of 110 million as of today.

That bodes well if there are any positive official market announcements.

I mean, if you're the owner of those shares being returned, are you then selling them on the market if there is a squeeze? Seems unlikely if not already liquidated. 🤷‍♂️
There were around 80 million shares traded just on Friday, most at the auction close. So most shorters most likely got out!

It's good if they did; otherwise it is just more manipulation of our SP. Trust me, it's good.
 
  • Like
Reactions: 3 users

skutza

Regular
There are only two ways we are going to see a significant gain in the SP. The reasons we have bought in are clear. All the 1000 eyes and great posts only tell us what we already know, and maybe save a few retail holders from getting nervy hands.

Sean was right when he said to watch the financials. The deals that get announced and the financials are our only way to the gravy train. Sorry, but that is the truth. But still, we are getting great research and posts, which keep us interested :)
 
  • Like
  • Love
Reactions: 12 users

wilzy123

Founding Member
  • Like
Reactions: 4 users

Damo4

Regular
  • Like
Reactions: 1 users

Shadow59

Regular
I see poor WBT are suffering the slings and arrows of joining the ASX200.
So glad we are out of there for a while.
 
  • Like
Reactions: 10 users

IloveLamp

Top 20

Q: What’s next for Renesas in terms of enabling customers to expand their space mission ambitions?

Chris: One of the big trends that’s happening is putting additional processing capability up in orbit for various market subsegments like commercial and government. Our customers are moving towards reconfigurable and reprogrammable processing with FPGAs that enable real-time decision making using artificial intelligence.
[screenshot: LinkedIn post]
 
Last edited:
  • Like
  • Love
  • Wow
Reactions: 16 users
Most of the shorters most likely got out on Friday, when around 80 million shares were traded on the ASX 200 rebalance day as instos needed to sell, most of it in the closing auction. The buyers were shorters and other instos, and the day still ended green!
I don't think that's the case, Fastback, if you note my earlier post.

As of Tuesday there were still 129 million shorts.

14 million shares traded in total Tues, Weds, Thurs.

16.255 million reported short sales on Friday.

By deduction, that would leave over 100 million shorts still outstanding.
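Just to sanity-check that deduction in one spot (figures as quoted above; treating every share traded Tue-Thu as potential covering is a worst-case assumption of mine):

```python
# Worst case for the shorters: assume ALL Tue-Thu volume was short covering.
shorts_tue = 129.0e6        # open short position reported as of Tuesday
vol_tue_thu = 14.0e6        # total shares traded Tue + Wed + Thu
new_shorts_fri = 16.255e6   # reported short sales on Friday

floor_after_thu = shorts_tue - vol_tue_thu
print(f"Shorts still open by Thursday, at minimum: {floor_after_thu / 1e6:.0f}M")  # 115M

# Friday then added fresh short sales on top of whatever covering occurred.
print(f"Fresh short sales Friday: {new_shorts_fri / 1e6:.3f}M")
```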

[attachment: reported short position data]

Fri 15th Sept

[attachment: reported short sales]
 
  • Like
  • Sad
Reactions: 9 users
77 million shares were traded on Friday, of which 16.255 million were short sold.

There will need to be a churn of possibly another circa 450-550 million shares traded at Friday's rate to cover all the shorts.

So any new IP deal most welcome BRN if you will. :LOL::LOL::LOL:
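For anyone who wants to see where the circa 450-550 million figure comes from, a minimal sketch (my assumption: covering absorbs roughly the same fraction of daily volume as Friday's short selling did):

```python
# Reproducing the "circa 450-550 million shares of churn" estimate above.
friday_volume = 77.0e6
friday_short_sales = 16.255e6
cover_fraction = friday_short_sales / friday_volume   # ~0.21 of daily volume

for outstanding in (100e6, 115e6):   # range implied by the earlier posts
    churn = outstanding / cover_fraction
    print(f"{outstanding / 1e6:.0f}M shorts -> ~{churn / 1e6:.0f}M shares traded")
# -> ~474M and ~545M, i.e. roughly 450-550M
```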
 
  • Like
  • Haha
Reactions: 15 users
We got any hook with SK we think?



SK hynix Newsroom

[We Do Future Technology] Become a Semiconductor Expert with SK hynix – AI Semiconductors​

May 9, 2023




AI semiconductors are ultra-fast and low-power chips that efficiently process big data and algorithms that are applied to AI services. In the video above, the exhibition “Today’s Record” shows how humans have been recording information through a variety of ways including drawing and writing for thousands of years. Today, people record information in the form of data at an ever-increasing rate. As this large volume of data is used to create new data, we call this the era of big data.

It is now believed that the total amount of data created up until the early 2000s can be generated in a single day. As ICT and AI technology advances and takes on a bigger role in our lives, the amount of data will only continue to grow exponentially. This is because, in addition to data recording and processing, AI technologies learn from existing data and create large amounts of new data. To process this massive volume of data, memory chips and processors need to constantly operate and work together.

In the Von Neumann architecture[1] that is commonly used for most modern computers, the processor and memory communicate through I/O[2] pins that are mounted on a motherboard. This creates a bottleneck when transferring data and consumes about 1,000 times more power compared to standard computing operations. Therefore, the role of memory solutions in facilitating fast and efficient data transfer is crucial for the proper function of AI semiconductors and AI services.

[1] Von Neumann architecture: A computing structure that sequentially processes commands through three stages: memory, CPU, and I/O device.

[2] Input/Output (I/O): An information processing system designed to send and receive data from a computer hardware component, device, or network.
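As a rough illustration of that bottleneck, here is a back-of-envelope sketch. The energy figures are commonly cited ~45 nm orders of magnitude (Horowitz, ISSCC 2014), not SK hynix's numbers:

```python
# Energy cost of moving data off-chip vs computing on it (order of magnitude).
PJ_PER_FP_ADD = 0.9          # ~0.9 pJ for a 32-bit floating-point add
PJ_PER_DRAM_READ = 640.0     # ~640 pJ to read a 32-bit word from DRAM

# A memory-bound kernel: one arithmetic op per operand fetched from DRAM.
ops = 1_000_000
compute_energy = ops * PJ_PER_FP_ADD
movement_energy = ops * PJ_PER_DRAM_READ

print(f"Data movement costs ~{movement_energy / compute_energy:.0f}x the compute energy")
# -> ~711x, the same order of magnitude as the "about 1,000 times" figure above
```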

Ultimately, AI semiconductors need to combine the functions of a processor and memory while providing greater capabilities than the Von Neumann architecture. SAPEON Korea, an AI startup jointly founded by SK hynix, recently developed an AI semiconductor for data centers named after the company. The SAPEON AI processor offers a deep learning computation speed which is 1.5 times faster than that of conventional GPUs and uses 80% less power. In the future, SAPEON will be expanded to other areas like autonomous cars and mobile devices. SK hynix's commitment to developing technologies to support AI is further highlighted by its establishment of the SK ICT Alliance alongside SK Telecom and SK Square. The alliance invests in and develops diverse ICT areas such as semiconductors and AI to secure global competitiveness. Furthermore, SK hynix is also developing a next-generation CIM[3] with neuromorphic semiconductor[4] devices.

[3] Computing-in-memory (CIM): The next generation of intelligent memory that combines the processor and semiconductor memory on a single chip.

[4] Neuromorphic semiconductor: A semiconductor for computing that can simultaneously compute and store like a human brain, reducing power consumption while increasing computational speed.
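Taking the SAPEON numbers above at face value (1.5x the throughput at 80% less power), the implied efficiency gain works out as below; the unit baselines are arbitrary normalisations, not real GPU specs:

```python
# Implied performance-per-watt advantage from the quoted SAPEON figures.
gpu_speed, gpu_power = 1.0, 1.0        # normalised conventional-GPU baseline
sapeon_speed = 1.5 * gpu_speed         # "1.5 times faster"
sapeon_power = (1 - 0.80) * gpu_power  # "uses 80% less power"

gain = (sapeon_speed / sapeon_power) / (gpu_speed / gpu_power)
print(f"Implied performance per watt: {gain:.1f}x a conventional GPU")  # 7.5x
```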

As AI technology and services continue to develop rapidly, SK hynix's semiconductors for AI will also evolve to meet market and consumer needs. The company's chips will be the backbone for key AI services in the big data era and beyond.
 
  • Like
  • Thinking
  • Fire
Reactions: 16 users

AusEire

Founding Member. It's ok to say No to Dot Joining
Hi All
Thanks to the efforts of the person who upsets some here, the small dissertation I posted now has its own identity in cyberspace. Whether it deserves it or not is for others to judge, but here is the link:



Remember the life and times of poor Mr. Robert Goddard and always DYOR.
FF.

Fantastic article @Fact Finder

Big thanks to @wilzy123 too

Akida Ballista baby
 
  • Like
  • Fire
  • Love
Reactions: 16 users

Damo4

Regular
  • Haha
  • Like
  • Love
Reactions: 15 users

jtardif999

Regular
We got any hook with SK we think?



SK hynix Newsroom

[We Do Future Technology] Become a Semiconductor Expert with SK hynix – AI Semiconductors

May 9, 2023

…

No actual proof of a relationship. Previously (2018) LDN stated that a large Korean company was testing Akida "even more than we have been", and then recently Sean said we now have a Korean sales presence that we didn't have before. Personally, I've always thought there was a good chance the company Lou mentioned would be in the SK group, simply because SK is so big, something comparable to Tata.
 
  • Like
  • Love
Reactions: 11 users

GpHiggsBoson

Regular
A bit more selling today, probably as fund managers ditch BRN stock. A lot of fund managers are limited to investing in companies listed on the ASX 200.

The problem now is those FMs won't be able to buy into BRN even if they wanted to.

Hopefully we have some news that will help lift the SP, or we may see a further drop. This may also present a good buying opportunity?
Meanwhile, if you have your fill, batten down the hatches and wait things out, or not.

Either way, no advice, just another opinion. Please DYOR.
Keep up the great posts. I do read them from time to time. Have a great week ahead.

Good luck all!
 
  • Like
  • Fire
  • Love
Reactions: 23 users
A few months old, but I see that Fujitsu is also chasing a neuromorphic program as part of its Fugaku-Next processors.

It's within the slides.

We got someone in Japan now, hey?

Get door knocking haha



Fujitsu To Fork Arm Server Chip Line To Chase Clouds

Timothy Prickett Morgan

6 months ago
[image: Fujitsu Monaka logo]

When it comes to chips, there is a big difference between a kicker and a fork.
The kicker is a successor that implements an architecture and design and that includes microarchitecture enhancements to boost core performance (core in the dual meanings of that word when it comes to CPUs) as well as taking advantage of chip manufacturing processes (and now packaging) to scale the performance further in a socket.
The fork is a divergence of some sort, literally a fork in the road that makes all the difference as Robert Frost might say. There can be compatibility – such as the differences between big and little cores in the Arm, Power, and now X86 markets. Intel and AMD are going to be implementing big-little core strategies in their server CPU lines this year, AMD in its “Bergamo” Epycs and Intel in its “Sierra Forest” Xeon SPs. Intel has had X86 compatible Atom and Xeon chips and now E and P cores for a decade and a half, so this is not precisely new to the world’s largest CPU maker.
And this kind of fork is what we think Japanese CPU and system maker Fujitsu will be doing with its future “Monaka” and “Fugaku-Next” processors, the former of which was revealed recently and the latter of which went onto the whiteboards with stinky markers – well, it was the beginning of a feasibility study by the Japanese Ministry of Education, Culture, Sports, Science, and Technology, with Education, Culture, Sports, Science being a variable X and thus making up the abbreviation MEXT – back in August 2022.
Fujitsu has been a tight partner of the RIKEN Lab, the country's pre-eminent HPC research center, since the design of the $1.2 billion "Keisuko" K supercomputer, which began in 2006 to break the 10 petaflops barrier in 64-bit precision floating point processing and which was delivered in 2011. Design on the follow-on, the $910 million, 513.9 petaflops "Fugaku" supercomputer, which saw Fujitsu switch from its Sparc64 architecture to a custom, vector-turbocharged Arm architecture, started in 2012. The Fugaku system was delivered in June 2020, was fully operational in 2021, and work on the Fugaku-Next system started a year later, right on schedule.
[image: Fujitsu/RIKEN supercomputer roadmap]

According to the roadmap put out by Fujitsu and RIKEN Lab at SC22 last November, the plan is for the Fugaku-Next machine to be operational “around 2030,” and that timing is important (we will get into that in a moment).
Here are the research ideas being tackled and the technology embodied in Fugaku-Next and who is doing the tackling:
[image: Fugaku-Next research themes and participating organisations]

All of the ideas you would expect in a machine being installed in six or seven years are there – a mix of traditional HPC and AI and the addition of quantum and neuromorphic computing. Supercomputers in the future will be powerful, no doubt, but it might be better called "flow computing" than "super computing" because there will be a mix of different kinds of compute and applications comprised of workflows of different smaller applications working in concert, either in a serial manner or in iterative loops.
Significantly, Fujitsu and RIKEN are emphasizing “compatibility with the existing ecosystem” and “heterogeneous systems connected by high bandwidth networks.” Fujitsu says further that the architecture of the Fugaku-Next system will use emerging high density packaging, have energy efficient and high performance accelerators, low latency and high bandwidth memory.
If history is any guide, and with Japanese supercomputers it absolutely is, then a machine is installed in the year before it goes operational, which means Fugaku-Next will be installed in “around 2029” or so.
Keep that all in mind as we look at the "Monaka" CPU that Fujitsu is working on under the auspices of the Japanese government's New Energy and Industrial Technology Development Organization (NEDO). At the end of February, Fujitsu, NEC, AIO Core, Kioxia, and Kyocera were all tapped to work on more energy efficient datacenter processing and interconnects. Specifically, the NEDO effort wants to have energy efficient server CPUs and photonics-boosted SmartNICs.
Within this effort, it looks like Fujitsu is making a derivative of the A64FX Arm processor at the heart of the Fugaku system, but people are conflating this with meaning that Monaka is the follow-on processor that will be used in the Fugaku-Next system.
This is precisely what was said: “Fujitsu will further refine this technology and develop a low-power consumption CPU that can be used in next-generation green datacenters.”
Here are the tasks assigned to the NEDO partners:
  • Fujitsu: Development of low-power consumption CPUs and photonics smart NIC
  • NEC: Development of low-power consumption accelerators and disaggregation technologies
  • AIO Core: Development of photoelectric fusion devices
  • Kioxia: Development of Wideband SSD
  • Fujitsu Optical Components: Development of photonics smart NIC
  • Kyocera: Development of photonics smart NIC
The Monaka CPU is due in 2027 and aims to provide higher performance at lower energy consumption:
[image: Monaka performance and efficiency targets]


How this will happen is unclear, but the implication is that it will be an Arm-based server processor, but one optimized for hyperscalers and cloud builders and not for HPC and AI centers. That should mean more cores and less vector processing relative to A64FX (or rather, the kicker to A64FX in Fugaku-Next system) and very likely the addition of low-precision matrix math units for AI inference. Something conceptually like Intel’s “Sapphire Rapids” Xeon SPs and future AMD Epyc processors with Xilinx DSP AI engines in terms of capabilities, but with an Arm core and a focus on energy efficiency, much higher performance per watt.
In fact, as Fujitsu looks ahead to 2027, when Monaka will go into production systems, it says it will be able to deliver 1.7X the application performance and 2X the performance per watt of “Another – 2027” CPU, whatever that might be.
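Taken at face value, those two ratios also pin down the implied power draw; this is just arithmetic on Fujitsu's stated targets, not anything Fujitsu has published directly:

```python
# 1.7x the application performance at 2x the performance per watt
# implies Monaka would draw 0.85x the power of the comparison CPU.
perf_ratio = 1.7
perf_per_watt_ratio = 2.0

power_ratio = perf_ratio / perf_per_watt_ratio
print(f"Implied power vs the 'Another - 2027' CPU: {power_ratio:.2f}x")  # 0.85x
```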
The confusing bit, which has led some people to believe that Monaka is the processor that will be the kicker to A64FX and used in the Fugaku-Next system, is this sentence: "Not only boosting traditional HPC workloads, but also providing high performance for AI & Data Analytical workloads."
But here is the thing, which we point out often: HPC is about getting performance at any cost, and hyperscalers and cloud builders need to get the best reasonable performance at the lowest cost and lowest power.
These are very different design points, and while you can build HPC in the cloud, you can’t build a cloud optimized for running Web applications and expect it to do well on HPC simulation and modeling or even AI training workloads. And vice versa. An HPC cluster would not be optimized for low cost and low power and would be a bad choice for Web applications. You can sell real HPC systems under a cloud model, of course, by putting InfiniBand and fat nodes with lots of GPUs in 20 percent of the nodes in a cloud, but it is never going to be as cheap as the plain vanilla cloud infrastructure, which has that different design point.
With Fugaku-Next being a heterogeneous, “flow computer” style of supercomputer, it is very reasonable to think that a kicker to the Monaka Arm CPU aimed at cloud infrastructure could end up in the Fugaku-Next system. But that is not the same thing as saying there will not be a successor to A64FX, which researchers have already shown can have its performance boosted by 10X by 2028 with huge amounts of stacked L3 cache and process shrinks on the Arm cores. That is with no architectural improvements on the A64FX cores, and you know there will be tweaks here.
We think it is far more likely that a successor to Monaka, which we would expect in 2029 given a two year processor cadence, could be included in Fugaku-Next, but that there is very little chance it will be the sole CPU in the system – unless the economy tanks and MEXT and NEDO have to share money.
The Great Recession messed up the original “Keisuko” project, which had Fujitsu doing a scalar CPU, NEC doing a vector CPU, and Hitachi doing the torus interconnect we now know as Tofu. NEC and Fujitsu backed out because the project was too expensive and they did not think the technology could be commercialized enough to cover the costs. Fujitsu took over the project and delivered brilliantly, but we suspect that making money from Sparc64fx and A64FX has been difficult.
But, with government backing, as Fujitsu has thanks to its relationship with RIKEN, and Japan’s desire to be independent when it comes to its fastest supercomputer, none of that matters. What was true in 2009 about the value of supply chain independence (which many countries ignored for the sake of ease and lower cost supercomputers) is even more true in 2029.
Fujitsu is not being specific, and Satoshi Matsuoka, director of RIKEN Lab and a professor at Tokyo Institute of Technology, commented on the reports that Monaka was being used in Fugaku-Next machine in his Twitter feed thus: “Nothing has been decided yet whether Monaka will power #FugakuNEXT; it is certainly one of the technical elements under consideration.” But he also added this: “Since #HPC(w/AI, BD) is no longer a niche market, the point is not to create a singleton ***scale machine, but S&T platforms that will span across SCs, clouds etc. For that purpose, SW generality & market penetratability esp. to hyperscalars are must. We will partner w/vendors sharing the same vision.”
We think there will be two Arm CPUs used in Fugaku-Next: One keyed to AI inference and generic CPU workloads and one tuned to do really hard HPC simulation and AI training. Call them A64FX2 and Monaka2 if you want. The only way there will be one chip is if the budget compels it, just as happened with the K machine.
But, this is admittedly just speculation, and we will have to wait and see.
 
  • Like
  • Fire
  • Love
Reactions: 16 users

legendzaz

Emerged
0.28 per share. #timemachine.
#Rocketship, #dejavu #dyor
 