BRN Discussion Ongoing

jtardif999

Regular
Ah, @Diogenese, this is from the article I referred to in a previous post (the one I couldn't link and that you commented on). I think I took the description as referring to operating system tasks, when really it is perhaps just an analogy for understanding the concept of spikes - so apologies for that red herring 🤓
 
  • Like
Reactions: 6 users

Getupthere

Regular

So how would it all work?

According to an in-depth explanation from Bloomberg’s Mark Gurman, Apple is developing the feature using a specific chip technology, as well as a measurement process known as optical absorption spectroscopy.
 
  • Like
  • Thinking
  • Fire
Reactions: 11 users
D

Deleted member 118

Guest
Hoping for the end-of-year report at close of business tomorrow, followed by 5% gains on US markets Friday night, a BRN trading halt Monday morning followed by a price-sensitive MOA announcement, a 50% price jump in BRN after the trading halt is lifted, and every shorter reduced to ashes.
 
  • Haha
Reactions: 7 users

ndefries

Regular

Samsung will manufacture 5nm chips for autonomous cars​

21 February 2023

Samsung announced it is joining the autonomous vehicle business, but rather than following Xiaomi's example, the company will supply chips for the AI that controls self-driving systems. US-based semiconductor company Ambarella will be the customer, and the two companies are promising to transform the next generation of autonomous vehicles.
Ambarella is a Tier-1 automotive supplier, but the company started in 2004 with the aim of developing H.264 video encoders for professional broadcast services. It quickly expanded its technology into consumer video and transitioned to developing low-power video compression chips. If not for Ambarella, we wouldn't have GoPro, there would be no Dropcam from Nest, we wouldn't have Garmin dashcams, and DJI Phantom drones would never have happened.
Over the last few years, Ambarella acquired a series of companies in the automotive field. In 2015 it took over VisLab, a computer vision specialist, and incorporated its own SoCs into VisLab's solutions to provide ADAS for autonomous vehicles. In 2021 it also purchased Oculii, along with its entire portfolio of technology focused on improving the resolution of radars for self-driving cars.
Last year Ambarella and Incepto Technology announced a partnership to provide a solution for an automotive-grade central computing platform. Ambarella provides its CVflow SoCs, which can simultaneously process seven 8MP cameras, surround camera perception, and front ADAS safety features.

This year the company signed a partnership with Continental to focus on AI-based software and hardware systems for advanced ADAS and fully automated driving. The partnership with Samsung will focus on delivering the latest semiconductors for this technology.
The 5nm chip that Samsung will manufacture for Ambarella is the CV3-AD685, the first one in the CV3-AD family of central controllers. The CV3 is built on the third generation of the CVflow AI engine; the previous CV2 processors were about 20 times slower than the latest CV3.
The chip comes with Arm Cortex-A78AE and Cortex-R52 CPU cores and an automotive-grade GPU. It has a dedicated security module and an advanced ISP for processing multiple camera inputs. The algorithm-first architecture supports a complete software stack from Level 2+ all the way to Level 4 autonomous driving.
Ambarella chose Samsung's 5nm process due to its optimization for automotive-grade semiconductors. Samsung is known for its tight process controls and advanced IP that help with traceability and reliability.
The new chip can handle neural network processing for 4D imaging radar, computer vision, and deep sensor fusion combined with path planning in ADAS. This really is the future, and both companies - Samsung and Ambarella - have the expertise to give us safe autonomous cars by providing the best components.
Source: ArenaEV.com
 
  • Like
  • Fire
Reactions: 13 users
D

Deleted member 118

Guest
Not sure what this is about

155E3D67-EDA5-4BBE-AB04-A75165CF2155.png







Old news after searching here lol
 
Last edited by a moderator:
  • Like
  • Fire
  • Wow
Reactions: 18 users

Violin1

Regular
Just for fun - the AFR (whatever you think of that publication) has an article about a slow-motion car crash at the owning company of hot crapper, which has the words Herald and market in its title. It's just fun to read...
 
  • Like
  • Haha
Reactions: 10 users
D

Deleted member 118

Guest
  • Like
  • Love
  • Fire
Reactions: 7 users
@Diogenese

Do we have a place in something like this?

Obviously low power, an accelerator, and didn't we have a paper or patent on packet inspection etc.?

Offloaded CPU processing to a custom network accelerator.

No mention of AI though.




Renesas develops technologies for automotive communication gateway SoCs​

Four Key Technologies Announced at ISSCC 2023 Will Enable High Performance, Low Power Consumption, Fast Start-up and Security
  • February 22, 2023
Renesas Electronics Corporation (TSE: 6723), a premier supplier of advanced semiconductor solutions, today announced that it has developed four technologies for system-on-chip (SoC) devices for in-vehicle communication gateways. These SoCs are expected to play a crucial role in defining the next-generation electrical/electronic (E/E) architecture in automotive systems.
SoCs for automotive gateways must provide both high performance to implement new applications such as cloud services, and low power consumption when they are not in use. They also need to deliver fast CAN response to support instant start-up. Additionally, these SoCs need to provide power-efficient communication technology that enables network functions as a gateway using limited power and security technology to enable safe communication outside the vehicle. To meet these requirements, Renesas has developed (1) an architecture that dynamically changes the circuit operation timing to match the vehicle conditions with optimized performance and power consumption, (2) fast start-up technology by partitioning and powering essential programs only, (3) a network accelerator that achieves a power efficiency of 10 gigabits per second/watt (Gbps/W), and (4) security technology that prevents communication interference by recognizing and protecting vital in-vehicle communication related to vehicle control.
Renesas announced these achievements at the International Solid-State Circuits Conference 2023 (ISSCC 2023), February 19 – 23 in San Francisco, California.
Details of the new technologies include:

1. Architecture that optimizes processing performance and power consumption depending on vehicle conditions​

Communication gateway SoCs need to deliver processing performance exceeding 30,000 Dhrystone million instructions per second (DMIPS) when running, while also keeping standby power consumption to 2 mW or less in order to maintain battery life. Typically, high-performance SoCs also have high power consumption in standby mode, while low-power SoCs with small standby power consumption have performance issues. To resolve this tradeoff, Renesas combined in a single chip a high-performance application system and a control system optimized for ultralow standby power consumption. The new architecture controls the power supplies of these two subsystems and changes the timing of circuit operation to achieve an optimal balance between performance and power efficiency. This results in higher performance during operation and lower power consumption during standby.
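To make the idea concrete, here is a toy sketch in C of the kind of policy being described: only the always-on control subsystem is powered in standby, and the high-performance application subsystem is switched on when the vehicle condition requires it. The state names, struct fields and power figures are illustrative assumptions loosely based on the numbers quoted in this article, not Renesas' actual design.

```c
/* Toy model of the dual-subsystem power idea described above.
 * All names and figures are illustrative; this is not Renesas code. */
#include <stdio.h>
#include <stdbool.h>

typedef enum { VEHICLE_PARKED, VEHICLE_STARTING, VEHICLE_DRIVING } vehicle_state_t;

typedef struct {
    bool app_subsystem_on;   /* high-performance application domain (~30,000 DMIPS class) */
    bool ctrl_subsystem_on;  /* always-on low-power control domain */
    double est_power_w;      /* rough power figure for the chosen mode */
} soc_power_plan_t;

/* Decide which subsystems are powered for a given vehicle condition. */
static soc_power_plan_t plan_power(vehicle_state_t v)
{
    soc_power_plan_t p = { false, true, 0.002 };             /* standby: ~2 mW target */
    if (v != VEHICLE_PARKED) {
        p.app_subsystem_on = true;
        p.est_power_w = (v == VEHICLE_DRIVING) ? 7.0 : 3.0;  /* stay within the ~7 W ECU budget */
    }
    return p;
}

int main(void)
{
    const vehicle_state_t states[] = { VEHICLE_PARKED, VEHICLE_STARTING, VEHICLE_DRIVING };
    for (size_t i = 0; i < sizeof states / sizeof states[0]; i++) {
        soc_power_plan_t p = plan_power(states[i]);
        printf("state=%d app=%d ctrl=%d est_power=%.3f W\n",
               (int)states[i], (int)p.app_subsystem_on, (int)p.ctrl_subsystem_on, p.est_power_w);
    }
    return 0;
}
```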

2. Fast start-up technology with external flash memory achieving the same fast speed as embedded flash memory​

Since communication gateway SoCs manage processing of critical functions related to vehicle control, they must be able to respond to CAN within 50 milliseconds (msec.) of start-up. However, if the SoC uses a process that does not support embedded flash memory, the start-up program must be encrypted and stored in external flash memory. This means that it takes additional time to load program data and decrypt it. To solve this issue, Renesas developed technology that splits the program into sections and initially loads and decrypts only an essential portion for start-up, while continuing to load the rest of the program in parallel. This enables a fast response to CAN (50ms or less), even when using external flash memory.
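As a rough sketch of the partitioned start-up idea (not Renesas' implementation; the section names, sizes and the decrypt stub are invented), the piece needed to answer CAN is loaded and decrypted first, and the rest of the image keeps streaming in on a background thread. Build with -pthread.

```c
/* Sketch of "load the essential boot section first, stream the rest in parallel".
 * Flash access and decryption are faked with a stub; all names are illustrative. */
#include <pthread.h>
#include <stdio.h>

typedef struct { const char *name; size_t bytes; } section_t;

static void load_and_decrypt(const section_t *s)
{
    /* Placeholder for: read the section from external flash, then decrypt it. */
    printf("loaded+decrypted section '%s' (%zu bytes)\n", s->name, s->bytes);
}

static void *load_rest(void *arg)
{
    for (const section_t *s = (const section_t *)arg; s->name != NULL; s++)
        load_and_decrypt(s);
    return NULL;
}

int main(void)
{
    section_t essential = { "can_stack", 64 * 1024 };
    section_t rest[] = { { "cloud_services", 4 * 1024 * 1024 },
                         { "diagnostics", 1 * 1024 * 1024 },
                         { NULL, 0 } };

    /* 1. Bring up only what is needed to answer CAN within the ~50 ms deadline. */
    load_and_decrypt(&essential);
    printf("CAN responder ready\n");

    /* 2. Keep loading the remaining sections in the background. */
    pthread_t loader;
    pthread_create(&loader, NULL, load_rest, rest);

    /* ... CAN traffic would be handled here while loading continues ... */
    pthread_join(loader, NULL);
    return 0;
}
```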

3. Highly efficient network accelerator with 10 Gbps/W communication efficiency​

To allow air cooling and heat dissipation for electronic control units (ECUs), communication gateway SoCs must keep power consumption to 7 watts or less. Since computing processing performance of 30,000 DMIPS or higher requires approximately 6 watts of power, only around 1 watt can be used for network processing. This presents a challenge as the total communication of 10 Gbps must be achieved using 1 watt of power, with a processing efficiency of only around 3 Gbps/W when processed by the CPU. To work around this issue, Renesas offloaded processing from the CPU to a custom network accelerator, achieving higher efficiency at 9.4 Gbps/W. Additionally, Renesas boosted efficiency to 11.5 Gbps/W by switching the routing method from a conventional TCAM approach to a hash table in SRAM.
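The switch from TCAM to a hash table held in SRAM can be pictured as an exact-match lookup like the toy routing table below; the key format, bucket count and hash function are made up for illustration and are not taken from the Renesas design.

```c
/* Toy hash-table route lookup standing in for "hash table in SRAM instead of TCAM". */
#include <stdio.h>
#include <stdint.h>

#define ROUTE_BUCKETS 256

typedef struct { uint32_t dst_id; uint8_t out_port; uint8_t valid; } route_t;

static route_t table[ROUTE_BUCKETS];

static unsigned hash_dst(uint32_t dst_id)
{
    return (unsigned)((dst_id * 2654435761u) % ROUTE_BUCKETS);  /* multiplicative hash */
}

static void route_add(uint32_t dst_id, uint8_t port)
{
    unsigned h = hash_dst(dst_id);
    while (table[h].valid && table[h].dst_id != dst_id)   /* linear probing; a real */
        h = (h + 1) % ROUTE_BUCKETS;                      /* design would bound this */
    table[h] = (route_t){ dst_id, port, 1 };
}

static int route_lookup(uint32_t dst_id, uint8_t *port)
{
    unsigned h = hash_dst(dst_id);
    while (table[h].valid) {
        if (table[h].dst_id == dst_id) { *port = table[h].out_port; return 1; }
        h = (h + 1) % ROUTE_BUCKETS;
    }
    return 0;   /* no route: hand the packet to the CPU slow path */
}

int main(void)
{
    uint8_t port;
    route_add(0x10A, 2);
    route_add(0x20B, 5);
    if (route_lookup(0x20B, &port))
        printf("dst 0x20B -> port %u\n", port);
    return 0;
}
```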

4. Security technology to prevent interference with communication requiring high reliability​

A communication gateway SoC performs a mixed set of tasks such as data processing related to vehicle control that requires a high level of reliability, and large amounts of random data communication with cloud services and others. Since vehicle control is essential to ensuring safety, protecting and separating mission-critical data is important. However, despite the differences in data types, all data is transmitted through the same in-vehicle network, leading to physical intersections and raising security issues. To address this challenge, Renesas developed security technology that analyzes incoming packets to the SoC. It determines whether or not they contain essential data, and assigns them to different pathways and control functions within the network accelerator. This prevents interference with data that requires high reliability and safeguards in-vehicle data communication from a variety of security threats.
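A minimal sketch of that classification step, assuming an invented header layout and message-ID threshold rather than anything Renesas has disclosed: low-ID control frames are steered onto a protected path, everything else onto the bulk path.

```c
/* Toy packet classifier: protected path for vehicle-control traffic, bulk path for the rest. */
#include <stdio.h>
#include <stdint.h>

typedef enum { PATH_PROTECTED, PATH_BULK } path_t;

typedef struct {
    uint16_t msg_id;   /* e.g. a CAN-style identifier carried in the frame (assumed) */
    uint16_t length;
} pkt_header_t;

/* Classify: low message IDs stand in for safety-critical control traffic. */
static path_t classify(const pkt_header_t *h)
{
    const uint16_t CONTROL_ID_MAX = 0x0FF;   /* assumed threshold, not a real spec value */
    return (h->msg_id <= CONTROL_ID_MAX) ? PATH_PROTECTED : PATH_BULK;
}

int main(void)
{
    pkt_header_t pkts[] = { { 0x012, 8 }, { 0x3A0, 1024 }, { 0x0F0, 8 } };
    for (size_t i = 0; i < sizeof pkts / sizeof pkts[0]; i++) {
        path_t p = classify(&pkts[i]);
        printf("msg_id=0x%03X -> %s path\n", (unsigned)pkts[i].msg_id,
               p == PATH_PROTECTED ? "protected" : "bulk");
    }
    return 0;
}
```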
These four technologies have been incorporated into Renesas’ R-Car S4 vehicle communication gateway SoC. With the latest R-Car S4, developers can accelerate advances in E/E architectures, implement secure connection with cloud services, and ensure safe and reliable vehicle control at the same time.
SOURCE: Renesas
 
  • Like
  • Thinking
  • Fire
Reactions: 15 users

Getupthere

Regular
Not sure what this is about

View attachment 30406




Kunpeng 920 is the industry's leading-edge Arm-based server CPU. Utilizing cutting-edge 7 nm processes, the CPU was independently designed by HUAWEI based on the Arm architecture license. Processor performance is significantly improved by optimizing branch prediction algorithms, increasing the number of execution units, and improving the memory subsystem architecture. At typical frequencies, Kunpeng 920 CPU scores more than an estimated 930 on SPECint®_rate_base2006, while power efficiency is 30% better than that offered by its industry counterparts. Kunpeng 920 provides much higher computing performance for data centers while slashing power consumption.
 
  • Like
Reactions: 13 users

Learning

Learning to the Top 🕵‍♂️
Not sure what this is about

View attachment 30406




Hi Rocket577,

This had been discovered by @Fullmoonfever a little while ago.

Thread 'HUAWEI TAISHAN 200 SERVER / KUNPENG 920 PROCESSOR USING AKIDA' https://thestockexchange.com.au/thr...rver-kunpeng-920-processor-using-akida.29899/

Post in thread 'BRN Discussion Ongoing' https://thestockexchange.com.au/threads/brn-discussion-ongoing.1/post-123255

Learning 🏖
 
  • Like
  • Love
  • Fire
Reactions: 19 users
D

Deleted member 118

Guest
Hi Rocket577,

This had been discovered by @Fullmoonfever a little while ago.

Thread 'HUAWEI TAISHAN 200 SERVER / KUNPENG 920 PROCESSOR USING AKIDA' https://thestockexchange.com.au/thr...rver-kunpeng-920-processor-using-akida.29899/

Post in thread 'BRN Discussion Ongoing' https://thestockexchange.com.au/threads/brn-discussion-ongoing.1/post-123255

Learning 🏖
I did update my post to old news lol, but I guess Brainchip had to cut their links with this due to Huawei
 
Last edited by a moderator:
  • Haha
  • Like
Reactions: 6 users

Cardpro

Regular
  • Haha
  • Like
  • Fire
Reactions: 8 users

Deadpool

hyper-efficient Ai
Not sure what this is about

View attachment 30406






Old news after searching here lol
 
  • Like
Reactions: 5 users
D

Deleted member 118

Guest
  • Haha
  • Like
Reactions: 9 users

equanimous

Norse clairvoyant shapeshifter goddess
Brainchip ~ being the world leader in Digital Spiking Neuromorphic Processing at the IOT Edge could/should/maybe does have a Folder of AKIDA use cases - proven and potential.
Whereas Imaginative affiliated people on staff or shareholders could submit conceptual use case ideas.
Then over a long lunch once per week , a Hardware and Software Engineers focus group could brainstorm towards developmental progress of worthy concepts.

My thought is that Brainchip could eventually add value to AKIDA in house with our product development folder growing full with ready to produce product ideas.
~Any manufacturer could call Brainchip and ask for a suitable product use case idea from our product development folder and, if they ran with it, pay Brainchip a healthy product royalty on top of the AKIDA IP royalty, thus creating a double-edged revenue stream.
~If Brainchip at some time in the future were to pay for manufacturing of an in-house designed consumer product, we could see the lion's share of product revenue streaming in due to this value-adding on top of our IP.

Because we shareholders are only good for a free lunch and are not privy to the company creative process, I'm feeling a need for more imagination being injected to kick start our business.
"I'm feeling a need for more imagination being injected to kick start our business". Dang Son

These two pies are big enough for any man and we work with 5 senses. Peter has probably got the 6th sense at the back of his mind to accomplish.

1677146680296.png




LAGUNA HILLS, CA / ACCESSWIRE / August 16, 2022 / BrainChip Holdings Ltd (ASX:BRN) (OTCQX:BRCHF) (ADR:BCHPY), the world's first commercial producer of neuromorphic AI IP, is bringing its neuromorphic technology into higher education institutions via the BrainChip University AI Accelerator Program, which shares technical knowledge, promotes leading-edge discoveries and positions students to be next-generation technology innovators.
BrainChip's University AI Accelerator Program provides hardware, training, and guidance to students at higher education institutions with existing AI engineering programs. BrainChip's products can be leveraged by students to support projects in any number of novel use cases or to demonstrate AI enablement. Students participating in the program will have access to real-world, event-based technologies offering unparalleled performance and efficiency to advance their learning through graduation and beyond.
The Program successfully completed a pilot session at Carnegie Mellon University this past spring semester and will be officially launching with Arizona State University in September. There are five universities and institutes of technology expected to participate in the program during its inaugural academic year. Each program session will include a demonstration and education of a working environment for BrainChip's AKD1000 on a Linux-based system, combining lecture-based teaching methods with hands-on experiential exploration.
 
  • Like
Reactions: 9 users

Evermont

Stealth Mode
Hi Rocket577,

This had been discovered by @Fullmoonfever a little while ago.

Thread 'HUAWEI TAISHAN 200 SERVER / KUNPENG 920 PROCESSOR USING AKIDA' https://thestockexchange.com.au/thr...rver-kunpeng-920-processor-using-akida.29899/

Post in thread 'BRN Discussion Ongoing' https://thestockexchange.com.au/threads/brn-discussion-ongoing.1/post-123255

Learning 🏖

Thanks @Learning and apologies @Fullmoonfever

Missed the earlier thread and have deleted my post.

Cheers.
 
  • Like
  • Love
Reactions: 5 users
Hey, one of my accounts disappeared 🤔..
 
  • Haha
  • Like
  • Wow
Reactions: 38 users

equanimous

Norse clairvoyant shapeshifter goddess

Nice assessment:

Arm is expected to make large gains.

Brainchip/ Arm = Armed & Dangerous

AMD expected to occupy over 20% of server CPU market and Arm 8% in 2023, according to DIGITIMES Research​

Joseph Tsai, DIGITIMES Asia, Taipei, Wednesday 22 February 2023


Credit: DIGITIMES
AMD and Arm have been gaining on Intel in the server CPU market over the past few years, and the share AMD won over was especially large in 2022, as datacenter operators and server brands began finding solutions from the number-two maker growing superior to those of the long-time leader, according to Frank Kung, DIGITIMES Research analyst focusing primarily on the server industry, who anticipates that AMD's share will stand well above 20% in 2023, while Arm will reach 8%.
Price is the first of three major drivers that have led datacenter operators and server brands to switch to AMD. Comparing server CPUs from AMD and Intel with similar core counts, clock speeds, and hardware specifications, most of the former's products are priced at least 30% lower than the latter's, and the difference can exceed 40%, Kung said.
Such a gap makes a key difference to server companies as they usually procure their CPUs in large volumes and picking AMD's solutions would make a major reduction in their costs. Since Intel's and AMD's processors are both based on the x86 architecture, compatibility is not an issue that server companies need to worry about, Kung noted.
AMD CPUs' high core counts also make them well suited to the server environment, as the more cores a CPU has, the more serving capacity it can offer. AMD's 96-core Genoa-based EPYC processor launched in the fourth quarter of 2022, with a 128-core CPU set to debut in the first half of 2023, while Intel's best offering currently tops out at 60 cores.
Support from TSMC is the second driver. AMD's server CPUs are all made on TSMC's latest manufacturing processes, allowing them to deliver top-notch performance, noted Kung, adding that thanks to TSMC's advanced technologies and high yield rates, AMD has not had problems with missing its product launch schedules. However, such is not the case with Intel.
The third driver is the fact that Intel is manufacturing all its top-tier CPUs in house. Information from Intel's upstream suppliers shows that Intel's in-house manufacturing technologies have been rather unstable during the past several years, while server brands and datacenter operators have often seen Intel delaying the volume production schedule of its new server platform.
Read more: Meet the Analysts articles
Among datacenter operators, Microsoft and Google are the keenest on procuring servers powered by AMD's solutions. Currently, over 30% of server orders placed by the two cloud service providers are for AMD-based models, while among server brands, Hewlett Packard Enterprise (HPE) is the keenest on AMD-powered servers.
Arm-based processors' penetration in the server market was a bit slower compared to AMD-based ones in 2022 in terms of market share increase, and the growth will decelerate even more in 2023, said Kung. However, in the long term, Arm-based processors will still have the potential for major growth.
Although Arm-based CPUs can achieve neck-and-neck computing performance compared to x86-based ones from AMD and Intel while consuming much less power, compatibility is currently their biggest weakness.
Since most server programs are written for the x86 architecture, the problem is unlikely to be fixed until more Arm-based servers start to show up, attracting more middleware developers to join the market and write solutions to translate x86 code for Arm systems.
However, datacenter operators and server brands are still aggressive about Arm processors' development in the server market. Amazon and Alibaba had already started working on Arm-based products before 2022, Microsoft and Google also began Arm projects in 2022, and HPE is expanding its adoption of Arm-based servers. Nvidia is now pushing its GPUs to support the Arm architecture, and Ampere is developing Arm-based chips. In the upcoming years, the opportunity from ESG is expected to take off for Arm CPUs as demand from large-scale datacenter and edge computing servers surges, Kung added.
Chart 1: Server shipment share by CPU, 2020-2023

Source: DIGITIMES Research, February 2023
Table 1: Server CPU roadmaps by supplier, 2021-2024
Supplier roadmaps, 2021 → 2022(f) → 2023(f) → 2024(f):
Intel: Whitley (supports PCIe 4.0; 10+nm node) → Sapphire Rapids (supports PCIe 5.0; Intel 7 node) → Emerald Rapids (Intel 7 node) → Granite Rapids (Intel 3 node)
AMD: Milan and Milan-X (supports PCIe 4.0; 7nm node) → Genoa (supports PCIe 5.0; 5nm node) → Bergamo, Genoa-X and Siena (5nm node) → Turin (3nm node)
Arm: Neoverse N1 → V2 → new N series
Ampere: Altra Max (7nm node) → AmpereOne-1 (supports PCIe 5.0; 5nm node) → AmpereOne-2 → AmpereOne-3 (3nm node)
Nvidia: Grace (5nm node)
Source: DIGITIMES Research, February 2023
I'm not sure AMD will keep gaining share, as they don't appear to incorporate SNN into their offering, whereas Intel and Arm most likely will, for the power savings and efficiencies. This will also give more carbon credits to Arm and Intel.
 
  • Like
Reactions: 5 users
Oh my god how nice it is again to read this forum, job well done cleaning up.

Can someone explain to me how the upcoming report could be huge? Any revenue would have been disclosed in the 4C in January, no? Or will this report cover sales/revenue from January, or is it the developments that might be reported by management that could potentially be price sensitive? Nevertheless, I am losing sleep due to excitement :)
 
  • Like
  • Thinking
  • Love
Reactions: 12 users

zeeb0t

Administrator
Staff member
Hey, one of my accounts disappeared 🤔..
did a manual check on your account just in case
 
  • Haha
  • Like
Reactions: 30 users
Top Bottom