BRN Discussion Ongoing

Mea culpa

prəmɪskjuəs
It has been a refreshing treat today to read the contributions from @McHale @chapman89 @AusEire @Stable Genius @Fullmoonfever. I hope the reference from McHale (post #55830) regarding keeping emotions in check, not being overly reactive, and avoiding the immaturity of name-calling is taken on board. I also found the reference to those manipulative and dishonest posters pertinent.

Well done everyone. I now look forward to some light-hearted exchanges, possibly regarding hot-tub activities from the likes of Bravo, Dodgy, and Hoppy.
 
  • Like
  • Love
  • Fire
Reactions: 35 users
Good post Chapman89, thanks for your thoughts. My recollection is that, at the last AGM, Sean said something like we would have enough revenue to break even by the end of 2022. Clearly something was in the works, but something changed, and I would like to know more about that. I believe a lot of the discontent stems from this comment and not just the falling share price.

However, I agree with the rest of your post. To put it another way: if you showed me all the progress the company has made in the last couple of years, in the kind of summary AusEire did, without showing me the share price chart, and asked me whether this is a company I would like to invest in, the answer would be "Hell yeah".

Just returning to the share price chart, the market got way ahead of itself when Mercedes said they would use Akida in the EQXX. However, I've gone back recently to some of those articles, and if you read them carefully there is actually a lot to be encouraged about. If the articles are to be believed, Mercedes were suggesting much more than simply a plan to trial the technology in the EQXX. I'll let the article from Car Expert (a review of the EQXX) speak for itself:

'Yes, the Vision EQXX is a concept car. But Mercedes-Benz board member and chief technology officer Markus Schäfer says the technologies used to deliver its impressive efficiency are all production feasible.

“The technology programme behind the Vision EQXX will define and enable future Mercedes-Benz models and features” – and that’s why it’s such an important car.'


This suggests a much wider application of Akida. Of course, Markus Schäfer has also been quite active recently on LinkedIn talking about their neuromorphic technology, and said recently:

'We already made some interesting findings here with our VISION EQXX, where we applied neuromorphic principles to the “Hey Mercedes” hot-word detection. That alone made it five to ten times more energy efficient than conventional voice control. As AI and machine learning take on an increasingly important role in the software-defined vehicle, the energy this consumes is likely to become a critical factor. I’ll touch on our latest findings in an upcoming “In the Loop” and tell you my thoughts on where this is taking us. '

Markus Schäfer's posts on LinkedIn are definitely something to watch carefully.
The wording Sean used at the time was maybe a little ambiguous, but at no time (in the statement I think people are referring to) did he mention "break even".

My recollection (and it was the same discussion when he spoke of building to 100 employees) was that he said something along the lines of "expecting" revenue growth to match the increase in costs of building out the appropriate employee base.

So basically, that expenditures would remain status quo, or about the same.

Not aimed at you directly, GDJR69, but it's getting a bit tiring hearing people say that Sean "promised" this, or Sean "promised" that..

Things have not gone as anyone "expected" here..

But we are still moving forward!

Anyway, just my $10 (inflation adjusted)..
 
  • Like
  • Haha
Reactions: 23 users
Just reading an interesting article on MCUs.

Highlighted what I deemed some pertinent comments / requirements / observations as indicated by some industry players where we can definitely assist.




What’s Next for the Microcontroller?​

February 22, 2023 Robert Huntley
Although the microcontroller market is set for sustained growth, do MCU technical features and functions need to evolve to continue meeting customer requirements? Is the general-purpose MCU being replaced by application-specific versions?

The humble microcontroller, now more than 50 years old, represents a sizable chunk of the overall electronic-component industry. MCUs continue to dominate the embedded scene, and with good reason: They are flexible, configurable, and easy to program. With microcontrollers used in everything from laser printers to washing machines and heating thermostats to forklift trucks, MCU shipment data provides a reasonable indication of the state of the electronics industry.
Over the years, application-specific versions have developed to meet the requirements of use cases such as motor control, wireless connectivity and ultra-low power. Some MCUs feature highly configurable analog and digital blocks, borrowing architectural concepts more associated with FPGAs than MCUs. Others are marketed as general-purpose controllers, incorporating an array of fixed-function blocks—from A/D and D/A converters to serial connectivity, timers/counters, GPIO, and cryptographic accelerators—to suit a broad range of applications.

The microcontroller market exhibits continued growth

According to recent research by P&S Intelligence, the global microcontroller market accounted for US$18.80 billion in 2021 and is expected to reach US$43.61 billion in value by 2030, for a 9.8% CAGR. The reasons for the projected growth are many, from the increasing use of machine learning in smart sensors to the dramatic increase in industrial automation systems.
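As a quick sanity check, the quoted endpoints and the 9.8% CAGR are consistent with each other (back-of-envelope arithmetic only):

```python
# Back-of-envelope check of the P&S Intelligence figures quoted above.
base_2021 = 18.80          # US$ billions, 2021
cagr = 0.098               # 9.8% per year
years = 2030 - 2021        # 9 years of compounding

projected_2030 = base_2021 * (1 + cagr) ** years
print(f"Projected 2030 market: US${projected_2030:.2f}B")   # ~US$43.61B, as quoted

implied_cagr = (43.61 / 18.80) ** (1 / years) - 1
print(f"Implied CAGR: {implied_cagr:.1%}")                   # ~9.8%
```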
Joe Thomsen, Microchip Technology
According to Joe Thomsen, vice president of Microchip Technology’s 16-bit MCU Business Unit, the customers define the needs. “One of the things we do regularly is to evaluate what our customers are putting on their boards and what else is being implemented alongside the microcontroller,” said Thomsen. “Then we can determine how we can interface to those items more easily, more effectively, or [whether] we can actually integrate those features into the MCU itself.”

What technical innovations are happening?

Today’s MCUs are typically highly integrated devices with lots of functionality, intended to offer a single-chip solution for many designs. As customer needs and application use cases evolve, how is the MCU keeping up?
One focus is power. “Microcontrollers account for approximately 5% to 10% of the overall power consumption inside the vehicle, so we’re looking for further possibilities to reduce it,” said Ralf Koedel, vice president for microcontrollers at Infineon’s automotive division.
“Low power is one of the key items we need to look into to drive the MCU space further,” said Tim Burgess, senior director of the MCU Business Unit at Renesas. “The [MCU’s] low power is one of the major differentiators from microprocessors. As we decrease the process technology, it allows us to address low power by design. There’s always a conundrum with process technologies: When you go to a more advanced process node, you get better active consumption, but because the gate is very small, the leakage is significantly higher.”
Steven Tateosian, Infineon
Steven Tateosian, vice president, IoT, Compute and Wireless Business Unit of Infineon, approached the question from an industrial and consumer perspective, pointing to innovation’s role in raising MCU performance for a broader range of applications. “What we started seeing in the last five years is more integration, such as multicore processors, mixed with DSPs and other accelerators,” he said. These additions are being made “without fundamentally changing things around [MCUs’] ease of use and power profiles, and the overall system cost advantages that the MCU brings over microprocessors.”
The versatility of low-power microcontrollers has made them extremely popular for intelligent edge node applications, particularly those based on tinyML. Many voice-assistant–based applications rely on continuous cloud connectivity to conduct inference, only recognizing a trigger word or short phrase locally. However, this approach introduces latency and the risk of a device exploit. The need for local, deterministic decision-making is a priority.
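To make that "local trigger word, cloud for the rest" pattern concrete, here is a minimal, purely illustrative sketch of how an edge node gates cloud traffic. The scoring function, threshold and send_to_cloud() stub are stand-ins, not any vendor's API:

```python
# Illustrative only: on-device keyword spotting deciding when to offload.
import numpy as np

FRAME_LEN = 16000          # 1 s of 16 kHz audio per decision window
TRIGGER_THRESHOLD = 0.8    # hypothetical confidence threshold

def keyword_score(frame: np.ndarray) -> float:
    """Stand-in for a small quantised keyword-spotting network on the MCU."""
    # A real deployment would run a tinyML model here; this stub just
    # returns a pseudo-confidence so the sketch is runnable.
    return float(np.clip(np.abs(frame).mean() * 5.0, 0.0, 1.0))

def send_to_cloud(frame: np.ndarray) -> None:
    """Stub for the latency- and exploit-prone cloud round trip."""
    print(f"offloading {frame.size} samples for full speech recognition")

def process(frame: np.ndarray) -> None:
    # Every frame is scored locally; only trigger hits ever leave the device,
    # which is the latency/attack-surface argument made in the article.
    if keyword_score(frame) >= TRIGGER_THRESHOLD:
        send_to_cloud(frame)

rng = np.random.default_rng(0)
for _ in range(3):
    process(rng.uniform(-1.0, 1.0, FRAME_LEN))
```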
Most microcontroller vendors focus on incorporating neural network accelerators into their MCUs, Koedel said, citing Infineon MCUs that integrate accelerators for automotive functions including graphical displays and ADAS radar processing.

Microchip’s Thomsen called AI a game-changer for MCUs involved in real-time closed-loop control. “I think AI is probably the big revolutionary change for the MCU, and in a lot of cases, it’s going to be a revolutionary change for our customers’ applications,” he said.

Will MCUs reach a limit where MPUs become more attractive?

With the MCU experiencing so much innovation and the number of use cases expanding, one wonders at what point the MCU will reach its limit and MPUs will become a more viable choice. The considerations for or against such a shift extend beyond technical specifications alone. Embedded engineering teams invest substantial time and money when selecting an MCU family for their designs, so they will want to stay with that architecture for as long as possible. Also, MCUs typically consume less power and are lower in cost than MPUs. MPUs are typically selected based on a software decision, the choice of interfaces, or purely for performance reasons, whereas MCU selection is more often related to hardware factors.
Bernd Westhoff, Renesas
For some MPU-based applications, there may well be a strong desire to migrate to an MCU, said Infineon’s Tateosian. “Some high-end smart thermostats with displays full of connectivity are MPU based, and some of them are high-end MCU based. The user won’t know the difference, but I can tell you the unit costs of those [two device types] are very different,” he said. “That’s a good example where some developers are willing to make the jump from an MPU to an MCU to save power and cost, and others see the software development effort as prohibitive.”
Bernd Westhoff, director of IoT product marketing at Renesas, noted that there has always been a degree of overlap between MCU and MPU performance. “MPUs in the past were already at 200 and 400 MHz, and MCUs are easily catching up with that,” he said.
Westhoff also cited some fundamental differences between MCU and MPU developments. “MPU people expect to have Linux, not an RTOS, so you may have a heavy issue in the future to convince a Linux MPU person to become an RTOS MCU person and develop their software there.”

The general-purpose MCU is here to stay

Tim Burgess, Renesas
As MCUs benefit from more functionality, some inevitably become optimized for specific applications. Application-specific MCUs tend to focus on high-volume use cases, such as motor control. Could this trend continue so that the need for general-purpose microcontrollers declines?
“There are always going to be high-volume, low-cost solutions that ASICs [application-specific ICs] are going to take over,” said Microchip’s Thomsen. With shortening product development timescales, he added, the ability to select an MCU off the shelf today and start programming, even if it might be a bit more expensive, will meet a customer’s time-to-market requirements.
Renesas’ Burgess confirmed the continued need for general-purpose MCUs, observing, “It’s just not possible to tune or optimize application-specific processors for every single use case. There are thousands of different applications with so many different requirements, memories, packages, RAM, peripheral mixes, and when you get down to what you’d have to design for, there’s just not enough market to really justify it.”

Fifty years plus and still going strong

Ralf Koedel, Infineon
The microcontroller market continues to experience year-on-year growth thanks to technical innovations and an endless list of use cases. In the automotive market, for example, Infineon’s Koedel told EE Times Europe he doesn’t see an end to growth yet. “If you look into motorcycles, for example, the trend now in India is to go from combustion engines to electrification, with a lot more electronic content, enabling a lot more [MCU growth],” he said.
Of all the MCU use cases highlighted by the executives we contacted, it’s clear that machine learning-based applications will become more significant this decade. With its low-power attributes, an architecture optimized with neural network acceleration, and encryption functionality, the MCU is a suitable choice for this use case.


Robert Huntley
Robert Huntley is a contributor for EE Times Europe.
Tags: Artificial Intelligence (AI), Embedded, ICs/Chips, MCU, Semiconductors
Great post and thank you FMF
 
  • Like
  • Love
Reactions: 4 users
D

Deleted member 118

Guest
  • Wow
  • Fire
Reactions: 4 users

alwaysgreen

Top 20
The wording Sean used at the time was maybe a little ambiguous, but at no time (in the statement I think people are referring to) did he mention "break even".

My recollection (and it was the same discussion when he spoke of building to 100 employees) was that he said something along the lines of "expecting" revenue growth to match the increase in costs of building out the appropriate employee base.

So basically, that expenditures would remain status quo, or about the same.

Not aimed at you directly, GDJR69, but it's getting a bit tiring hearing people say that Sean "promised" this, or Sean "promised" that..

Things have not gone as anyone "expected" here..

But we are still moving forward!

Anyway, just my $10 (inflation adjusted)..
I agree, that is what Sean said. There were a number of us dissecting what he said afterwards, debating whether he meant growth percentages or a dollar figure, but you are correct, he never mentioned break-even.

The whole economy pooping itself and Putin happened, which is what some (incl. myself) assumed was the reason it didn't eventuate. I hope we get some clarification or explanation from Sean at the AGM as to why.
 
Last edited:
  • Like
  • Love
Reactions: 7 users
D

Deleted member 118

Guest
Just been released earlier if anyone is interested

FD039036-6A6E-4CDE-BCC9-9BFE68B544EF.png
 
  • Like
Reactions: 7 users

Diogenese

Top 20
The wording Sean used at the time was maybe a little ambiguous, but at no time (in the statement I think people are referring to) did he mention "break even".

My recollection (and it was the same discussion when he spoke of building to 100 employees) was that he said something along the lines of "expecting" revenue growth to match the increase in costs of building out the appropriate employee base.

So basically, that expenditures would remain status quo, or about the same.

Not aimed at you directly, GDJR69, but it's getting a bit tiring hearing people say that Sean "promised" this, or Sean "promised" that..

Things have not gone as anyone "expected" here..

But we are still moving forward!

Anyway, just my $10 (inflation adjusted)..
2 bob in my day ...
 
  • Haha
  • Like
Reactions: 3 users

Tothemoon24

Top 20

How Neuromorphic Processing and Self-Searching Storage Can Slash Cyber Risk for Federal Agencies​

David Follett

(Stuart Miles/Shutterstock)
The amount of information organizations must process at the edge has exploded. This is especially true for federal agencies and the military, which generate enormous quantities of data from mobile devices and sensors in equipment, buildings, ships, aircraft, and more.
Finding effective ways to manage, use, and protect that data is challenging. But there’s an effective and cost-efficient solution. The combination of neuromorphic processing and self-searching computational storage can enable organizations to quickly process vast troves of edge data.

The Edge Data Dilemma

Edge data can provide insights that enable more effective core missions. Trouble is, the compute and network infrastructure needed to handle that data hasn’t kept up. Organizations lack the compute power to process the data at the edge, and they lack the network bandwidth to transmit the data to a centralized location where they have processing power.
Traditional computing technology takes up too much space and generates too much heat to be useful at the edge. Traditional network technology can’t move extremely large quantities of data over long distances at useful speeds. As one example, the average U.S. Navy ship produces petabytes of data from crew, operational systems, weapons systems, and communications. For many use cases, that data can’t be processed till the ship has docked.

The Cyber Advantage

Agencies must not only find effective ways to manage data, they also need to protect their assets from cyber threats. Today, cybersecurity teams must sift through enormous quantities of data when responding to cyberattacks. To uncover anomalies and home in on root causes, they need to search large datasets from access logs and security information and event management (SIEM) systems. They also need to complete that task in as near real time as possible to prevent a mission-disrupting cyber breach. But to date, they’ve lacked an effective compute and storage solution to achieve that goal at the edge.


(Rapeepat-Pornsipak/Shutterstock)
A new report from CyberEdge found that 68% of government agencies faced a cyberattack in 2021, underlining the need for agencies to find innovative solutions for data protection in case of an attack. Active response capacity can be critical when responding to a cyber incident, substantially reducing cyber risk and protecting the mission by quickly finding the data and alerting analysts in real time.

The Power of Neuromorphic Processing

It would be helpful if computers functioned more like the human brain. A human can look at a field of thousands of yellow flowers and instantly spot the single red flower. A computer needs to process each flower individually until it can find the anomaly.
That’s because the brain has been fine-tuned over eons of evolution to perform specific tasks very well. And it does so while consuming remarkably little energy.
But what if, like the brain, a computer could perform a specific task very quickly while requiring very little power? That’s the promise of a neuromorphic processor – essentially, a computer modeled after systems in the brain.
Here’s how neuromorphic processing can transform cyber risk at the edge. Start with a neuromorphic processing unit (NPU) built on a high-end field-programmable gate array (FPGA) integrated circuit customized to accelerate key workloads. Add a few dozen terabytes of local SSD storage. The result is an NPU-based, self-searching storage appliance that can perform extremely fast searches of very large datasets – at the edge and at very low power.
Just how quickly can NPU technology search a large dataset? Combine multiple NPU appliances in a rack, and you can search 1 PB of data in about 12 minutes. To achieve that result with traditional technology, you’d need 62 server racks – and a very large budget. In testing, the NPU appliance rack requires 84% lower CapEx, 99% lower OpEx, and 99% less power.
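Working those claims backwards (rough arithmetic on the quoted figures only, no independent data):

```python
# Rough arithmetic on the NPU-appliance claims quoted above.
PB = 1e15                      # bytes in a petabyte (decimal)
search_time_s = 12 * 60        # "about 12 minutes"

scan_rate_tb_s = PB / search_time_s / 1e12
print(f"Implied sustained scan rate: {scan_rate_tb_s:.1f} TB/s")   # ~1.4 TB/s

# The comparison offered: 1 NPU rack vs 62 conventional server racks,
# at 84% lower CapEx, 99% lower OpEx and 99% less power for the NPU rack.
print("Rack consolidation: 62:1")
```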
Imagine the advantage of searching a petabyte of data in minutes when responding to a situation like the Sunburst hack. Affecting at least 200 organizations–including government departments such as Defense, Homeland Security, Treasury, Commerce, and Justice–the Sunburst hack began around March 2020 but wasn’t discovered until December 2020. Agencies had to search at least nine months of data to determine where breaches occurred, current breach activity, and which systems, networks, and data were affected.
Neuromorphic processing and self-searching storage can slash incident response times in situations like this. That can save costs, accelerate incident resolutions, and reduce cyber risk.

Making the Use Case for NPU Appliances

The NPU search technology was developed in collaboration with Sandia National Laboratories, an R&D lab of the Department of Energy. Today Sandia is actively using multiple NPU systems for cyber defense and other use cases.
One compelling aspect of an NPU appliance is that it can help organizations comply with President Biden’s May 2021 Executive Order on Improving the Nation’s Cybersecurity. In response to the order, the Office of Management and Budget issued a directive requiring agencies to retain 12 months of active data storage and 18 months of cold data storage. For many agencies, that presents a serious budgetary challenge. An NPU appliance can make such data retention cost-effective.
What’s more, deployment of NPU appliance storage requires no changes to an organization’s current IT infrastructure or cyber defenses. The appliance simply sits alongside existing hardware and cybersecurity solutions. Searching of large datasets occurs at the edge. Any small quantities of relevant data identified can quickly and easily be transmitted for centralized analysis.
There are other potential use cases for an NPU appliance. For instance, one Fortune 50 company used the technology for data labeling to train a machine learning algorithm. The organization reduced the time required from one month to 22 minutes. In the meantime, for federal agencies and the military, neuromorphic processing and self-searching storage is an achievable, cost-effective solution for protecting sensitive data and slashing cyber risk at the edge.
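For scale, the data-labeling example works out to roughly three orders of magnitude (assuming "one month" means 30 days of wall-clock time):

```python
# Speed-up implied by the Fortune 50 data-labeling example above.
month_minutes = 30 * 24 * 60       # assumes a 30-day month
speedup = month_minutes / 22
print(f"~{speedup:.0f}x faster")   # roughly 2,000x
```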
About the author: David Follett is the founder and CEO of Lewis Rhodes Labs. David is a senior technology executive with 30 years of experience in semiconductors, optics, computer architecture and neuroscience. He started his career at Bell Labs Murray Hill and was the founder and CEO of GigaNet, a networking start-up that invented virtualized interfaces, ultimately evolving into InfiniBand.
 
  • Like
  • Fire
  • Wow
Reactions: 21 users

Tothemoon24

Top 20

Getting flexible​

As automotive compute shifts from hardware to software, demand is growing for infotainment and cockpit features. According to Arm, more than 90% of in-vehicle infotainment (IVI) systems use the company’s chip designs. The architectures are also found in various under-the-hood applications, including meter clusters, e-mirrors, and heating, ventilation, and air conditioning (HVAC) control.

Munich-based automotive company Apostera aims to remove this disconnect between the real world and the infotainment system by transforming the windshield of a vehicle into a mixed reality screen.
Munich-based Apostera is using Arm’s designs to transform car windshields into mixed-reality screens.
The shift to the software-defined vehicle has also stimulated another IT feature: updates. Historically, vehicle software was not only rudimentary, but also fairly static. Today, that’s no longer the case.

“There’s an opportunity to continue to add to the functionality of the vehicle over its lifetime,” says Laudick.

An expanding range of features, from sensor algorithms to user interfaces, can now be enhanced over-the-air (OTA). As cars begin to resemble personal devices, consumers can expect a comparable update service. As Simon Humphries, the chief branding officer of Toyota, put it: “People want control over their own experiences.”

Laudick likens modern cars to platforms, upon which software and functionality can evolve. That’s an obvious magnet for Arm, whose products and processes are fundamentally about running software.

Carmakers are also becoming savvier about software. For example, General Motors’ self-driving unit, Cruise, is now developing its own computer chips for autonomous vehicles. The company has previously used Arm designs, but is now exploring an open-source architecture known as RISC-V — which is becoming a popular alternative. The instruction set’s low costs and flexibility have created a threat to Arm’s automotive ambitions.

“One executive I was talking to said: ‘The best negotiating strategy when Arm comes in is to have a RISC-V brochure sitting on my desk’,” Jim Feldhan, the president of semiconductor consultancy Semico Research, said last year. “It’s a threat. Arm is just not going to have its super dominant position in five or 20 years.”





Currently, however, RISC-V could be regarded as riskier than Arm’s established standards. In a further challenge to RISC-V, Arm is gradually becoming more open. The Cortex-M processor series, for instance, now allows clients to add their own instructions, while extra configurability has been added to Arm software and tooling.

“We obviously try to control the products reasonably well, otherwise we just end up with a wild west. But there’s been a move in the company in the last several years to create more flexibility in certain areas,” says Laudick.

Mobileye, a self-driving unit of Intel that went public at $16.7 billion last year, is among a growing list of companies applying RISC-V architecture to vehicles. Credit: Mobileye
RISC-V is far from Arm’s only challenger. Established rivals such as Intel and Synopsys are also fighting for a chunk of the expanding market for automotive chips.

Nonetheless, Laudick is bullish about the future. He notes that today’s cars run about 100 million lines of software code, while a Boeing 787 is estimated to have “only” 14 million. By 2030, McKinsey predicts that vehicles will expand to roughly 300 million lines of code.

“I see the vehicle being, without doubt, the most complex software device you will own — if not that will exist,” says Laudick.
 
  • Like
  • Wow
  • Fire
Reactions: 11 users

rgupta

Regular
Great post @McHale. I think you are right re Sean talking about staff numbers being around 100 and breaking even on staff/business costs by the end of 2022; it was said in the Q&A, as I recall.
I can recall something along those lines too. But the million-dollar question here is: if that does not happen, then what is the next best alternative? There is no doubt Sean is doing good work; sometimes it takes time for sales to materialise.
But becoming more transparent is important here. Right now the company's strategy is helping shorters and punishing holders. But that is the beauty of investing: if you believe, then cash in on the opportunity, and if you don't, then move on.
My understanding is that the technology is real and revolutionary, but that alone cannot make it a success without hard work. There is still a chance of failure, but the risk/reward ratio is high enough to justify some risky bets.
DYOR
 
  • Like
Reactions: 5 users

Getupthere

Regular
I don’t think it really matters if there’s $40,000 or $2 million in revenue at this point.

What matters is building the bank of IP signings, which will then allow us to continue growing revenue in the years to come.

Our last IP deal was signed back when Peter was our interim CEO.

The most disappointing part of the last 4C was the progress information provided by management for the last quarter.

We don’t get ASX announcements every week, so all we can rely on is the information provided in the 4C.

Nobody here was expecting to see revenue.

There are other ways they can inform the market without breaking NDAs.

Headwinds or not, communication needs to change.
 
  • Like
  • Love
  • Fire
Reactions: 47 users

Diogenese

Top 20

Getting flexible​

As automotive compute shifts from hardware to software, demand is growing for infotainment and cockpit features. According to Arm, more than 90% of in-vehicle infotainment (IVI) systems use the company’s chip designs. The architectures are also found in various under-the-hood applications, including meter clusters, e-mirrors, and heating, ventilation, and air conditioning (HVAC) control.

Munich-based automotive company Apostera aims to remove this disconnect between the real world and the infotainment system by transforming the windshield of a vehicle into a mixed reality screen.
Munich-based Apostera is using Arm’s designs to transform car windshields into mixed-reality screens.
The shift to the software-defined vehicle has also stimulated another IT feature: updates. Historically, vehicle software was not only rudimentary, but also fairly static. Today, that’s no longer the case.

“There’s an opportunity to continue to add to the functionality of the vehicle over its lifetime,” says Laudick.

An expanding range of features, from sensor algorithms to user interfaces, can now be enhanced over-the-air (OTA). As cars begin to resemble personal devices, consumers can expect a comparable update service. As Simon Humphries, the chief branding officer of Toyota, put it: “People want control over their own experiences.”

Laudick likens modern cars to platforms, upon which software and functionality can evolve. That’s an obvious magnet for Arm, whose products and processes are fundamentally about running software.

Carmakers are also becoming savvier about software. For example, General Motors’ self-driving unit, Cruise, is now developing its own computer chips for autonomous vehicles. The company has previously used Arm designs, but is now exploring an open-source architecture known as RISC-V — which is becoming a popular alternative. The instruction set’s low costs and flexibility have created a threat to Arm’s automotive ambitions.

“One executive I was talking to said: ‘The best negotiating strategy when Arm comes in is to have a RISC-V brochure sitting on my desk’,” Jim Feldhan, the president of semiconductor consultancy Semico Research, said last year. “It’s a threat. Arm is just not going to have its super dominant position in five or 20 years.”






Currently, however, RISC-V could be regarded as riskier than Arm’s established standards. In a further challenge to RISC-V, Arm is gradually becoming more open. The Cortex-M processor series, for instance, now allows clients to add their own instructions, while extra configurability has been added to Arm software and tooling.

“We obviously try to control the products reasonably well, otherwise we just end up with a wild west. But there’s been a move in the company in the last several years to create more flexibility in certain areas,” says Laudick.

Mobileye, a self-driving unit of Intel that went public at $16.7 billion last year, is among a growing list of companies applying RISC-V architecture to vehicles. Credit: Mobileye
RISC-V is far from Arm’s only challenger. Established rivals such as Intel and Synopsys are also fighting for a chunk of the expanding market for automotive chips.

Nonetheless, Laudick is bullish about the future. He notes that today’s cars run about 100 million lines of software code, while a Boeing 787 is estimated to have “only” 14 million. By 2030, McKinsey predicts that vehicles will expand to roughly 300 million lines of code.

“I see the vehicle being, without doubt, the most complex software device you will own — if not that will exist,” says Laudick.
Hi Ttm,

I get a bit confused when someone says such-and-such is "becoming savvier about software" and then cites in support the fact that they are developing their own chips.

"Carmakers are also becoming savvier about software. For example, General Motors’ self-driving unit, Cruise, is now developing its own computer chips for autonomous vehicles. The company has previously used Arm designs, but is now exploring an open-source architecture known as RISC-V — which is becoming a popular alternative. The instruction set’s low costs and flexibility have created a threat to Arm’s automotive ambitions."

Gripes aside:

https://brainchip.com/brainchip-sifive-partner-deploy-ai-ml-at-edge/

BrainChip and SiFive Partner to Deploy AI/ML Technology at the Edge​

Laguna Hills, Calif. – April 5, 2022 BrainChip Holdings Ltd (ASX: BRN, OTCQX: BRCHF, ADR: BCHPY), the world’s first commercial producer of ultra-low power neuromorphic AI chips and IP, and SiFive, Inc., the founder and leader of RISC-V computing, have combined their respective technologies to offer chip designers optimized AI/ML compute at the edge.

... and for the tea-leaf readers:

"Driving our technology into a SiFive-based subsystem is exactly the type of partnership that meets these goals.”

... can't be a coincidence, can it??????
 
  • Like
  • Fire
  • Love
Reactions: 34 users

Sirod69

bavarian girl ;-)
Markus Schäfer


Our specialists have achieved a breakthrough in integrating AI!

They have tested it on our drive controllers – and it worked! Believe me – in the development context, that’s a huge step.

It means that, for complex applications, we will start using self-learning processes from the disciplines of deep learning and deep reinforcement learning. Building on these machine-learning processes, our specialists have developed an automated workflow.

This enables them to implement artificial neural networks (ANN) in series-production processors. Now patented, this workflow opens up all sorts of possible applications in a wide range of areas, including powertrain.

Back in 2019, we defined a set of clear principles for how we work with AI to provide us with an operational framework. The four guiding notions under which we develop and use AI are: “responsible use”, “ease of explanation”, “privacy protection” and “safety and reliability”.

I am very excited by this progress and at the same time acutely aware of our responsibilities as leaders in our field. By pushing innovation while at the same time adhering to our principles, I believe we can help unleash the true benefits of AI in a sustainable way.
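For context on what "implementing artificial neural networks in series-production processors" tends to involve at the nuts-and-bolts level, here is a minimal, purely illustrative sketch of post-training int8 quantisation of a single dense layer. The weights, scales and data are made up; nothing here is taken from Mercedes' patented workflow:

```python
# Illustrative only: quantise one dense layer to int8 and run it with
# integer arithmetic, the kind of step needed to fit a trained ANN onto
# a fixed-point production controller.
import numpy as np

rng = np.random.default_rng(42)
w_float = rng.normal(0.0, 0.2, size=(4, 8))   # "trained" float32 weights
x_float = rng.normal(0.0, 1.0, size=(8,))     # one input activation vector

def quantise(t: np.ndarray):
    """Symmetric int8 quantisation: returns the int8 tensor and its scale."""
    scale = np.abs(t).max() / 127.0
    return np.clip(np.round(t / scale), -127, 127).astype(np.int8), scale

w_q, w_scale = quantise(w_float)
x_q, x_scale = quantise(x_float)

# Integer matmul is what the embedded target executes; rescale afterwards.
acc_int32 = w_q.astype(np.int32) @ x_q.astype(np.int32)
y_quantised = acc_int32 * (w_scale * x_scale)

y_reference = w_float @ x_float
print("max abs error vs float32:", float(np.abs(y_quantised - y_reference).max()))
```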

1683015623986.png
 
  • Like
  • Fire
  • Thinking
Reactions: 80 users

Damo4

Regular
Markus Schäfer

Our specialists have achieved a breakthrough in integrating AI!

They have tested it on our drive controllers – and it worked! Believe me – in the development context, that’s a huge step.

It means that, for complex applications, we will start using self-learning processes from the disciplines of deep learning and deep reinforcement learning. Building on these machine-learning processes, our specialists have developed an automated workflow.

This enables them to implement artificial neural networks (ANN) in series-production processors. Now patented, this workflow opens up all sorts of possible applications in a wide range of areas, including powertrain.

Back in 2019, we defined a set of clear principles for how we work with AI to provide us with an operational framework. The four guiding notions under which we develop and use AI are: “responsible use”, “ease of explanation”, “privacy protection” and “safety and reliability”.

I am very excited by this progress and at the same time acutely aware of our responsibilities as leaders in our field. By pushing innovation while at the same time adhering to our principles, I believe we can help unleash the true benefits of AI in a sustainable way.

View attachment 35491

Was just about to post this, amazing news that points to some good things.

Is anyone able to find the patent mentioned?
 
  • Like
  • Love
Reactions: 11 users

Learning

Learning to the Top 🕵‍♂️
Was just about to post this, amazing news that points to some good things.

Is anyone able to find the patent mentioned?
Maybe Dio could examine these.


Learning 🏖
 
  • Like
  • Wow
Reactions: 4 users
I always thought the NASA SBIR was a bit bogus, calling for the removal of the Cortex processor from Akida 1 on the basis of SWaP.

The size would be less than a quarter of a fingernail, weight likewise.

For power there would be negligible difference.

The only real difference would be in the Cortex licence fee.

There's another consideration: NASA upgrades/refurbishes its computing systems in space from time to time, so this SBIR may be for an add-on to existing, space-approved computing systems of which the ARM Cortex was not a part.
 
  • Thinking
Reactions: 1 users

HopalongPetrovski

I'm Spartacus!
It has been a refreshing treat today to read the contributions from @McHale @chapman89 @AusEire @Stable Genius @Fullmoonfever. I hope the reference from McHale (post #55830) regarding keeping emotions in check, not being overly reactive, and avoiding the immaturity of name-calling is taken on board. I also found the reference to those manipulative and dishonest posters pertinent.

Well done everyone. I now look forward to some light-hearted exchanges, possibly regarding hot-tub activities from the likes of Bravo, Dodgy, and Hoppy.
Sorry Mea Culpa.
I’m off on a road trip in NZ with my mad 80-year-old brother.
Bit like this actually, so I'm letting the threads look after themselves atm.



Looking forward to meeting some of you at the AGM. :ROFLMAO:
 
  • Haha
  • Love
  • Like
Reactions: 17 users
  • Like
  • Fire
  • Haha
Reactions: 26 users
Well, on the ADAS AI front, from a few days ago they're predominantly looking for Python and CNN skills in India.

Unless they're using CNN2SNN with Akida, I'd suggest it could be a push re Akida on this side at least. Though they do leave the door slightly ajar with the "...other machine learning based models" comment (a rough sketch of what those CNN evaluation KPIs typically involve is below the job spec).

Infotainment / HMI?


April 2023
Mercedes-Benz Research and Development India Private Limited

ADAS - AI/ML, Python Developer, Algorithm Evaluation (SE)​

Tasks

- Responsible for creating evaluation pipelines and KPI’s for CNN based SW modules to evaluate SW performance.
- Responsible to interact with multiple stakeholders to understand the requirement and prepare KPI’s to evaluate the SW module
- Strong Understanding of Object oriented programming in python and cpp.
- experience with Convolutional Neural Networks (CNNs) or other machine learning based models
- has a good understanding on the training and especially evaluation of CNN models
- ideally experience/knowledge of models for object detection, sensor fusion, prediction, planning & control, ...
- can identify problems with specific architectures of CNNs
- a plus is automotive experience, ideally in the field of "Advanced Driver Assistance Systems" (ADAS)
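For anyone wondering what "evaluation pipelines and KPI's for CNN based SW modules" boil down to in practice, here is a minimal, purely illustrative sketch of a detection KPI (precision/recall at a fixed IoU threshold). The boxes, threshold and metric choice are assumptions for illustration, not anything from the Mercedes listing:

```python
# Illustrative detection KPI: precision/recall at IoU >= 0.5.
# Boxes are (x1, y1, x2, y2); the data below is made up.
from typing import List, Tuple

Box = Tuple[float, float, float, float]

def iou(a: Box, b: Box) -> float:
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

def precision_recall(preds: List[Box], gts: List[Box], thr: float = 0.5):
    matched, tp = set(), 0
    for p in preds:
        best = max(range(len(gts)), key=lambda i: iou(p, gts[i]), default=None)
        if best is not None and best not in matched and iou(p, gts[best]) >= thr:
            matched.add(best)
            tp += 1
    precision = tp / len(preds) if preds else 0.0
    recall = tp / len(gts) if gts else 0.0
    return precision, recall

# Toy frame: two ground-truth vehicles, one good detection and one poor one.
gts = [(10, 10, 50, 50), (60, 60, 100, 100)]
preds = [(12, 12, 48, 52), (70, 80, 110, 120)]
print(precision_recall(preds, gts))   # (0.5, 0.5) at IoU >= 0.5
```

A real ADAS evaluation pipeline would report this kind of metric per class and per scenario over large labelled datasets, which is essentially what the listing describes.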
 
  • Like
  • Fire
Reactions: 8 users