BRN Discussion Ongoing

manny100

Regular
Sean needs to address his shareholders, as this is just about enough for everyone.
It’s been months since the AGM.
It was meant to happen before year’s end, yet next week is November and nothing.
We deserve to be updated, Sean.

Is it just me, or do others feel this way?

I am just feeling disappointed atm.
Hopefully this will pass; I am usually more positive.
Agree with your thoughts. It's been a while since the AGM's positive deal vibes. Deal closure soon would be comforting for holders.
I guess the next quarterly podcast will be in November or early December.
I asked, but no dates have been set.
 
Reactions: 10 users

JB49

Regular
Even searching the keywords "Akida Pico" comes up with more exposure for BRN. I'm actually quite surprised at the amount of exposure we are now getting... hope it's a good sign.


[Attached: two screenshots of the search results]
Do you think it might be paid promotion by BrainChip?
 
Reactions: 3 users


Ohh Tom you’ve done it again….

 
Reactions: 25 users

Bravo

If ARM was an arm, BRN would be its biceps💪!
Nice work @Jesse Chapman!

Jesse received a comment from Markus Schäfer on LinkedIn.

I love it!

"Neuromorphic computing truly represents a paradigm shift in how we approach technology.." (Markus Shaefer)

"I am convinced Neuromorphic Computing is key to efficient drive automation systems." (Gerrit Ecke).

[Screenshot of the LinkedIn comments]



A few comments below that you can see this exchange between BrainChip's Anil Mankar, Markus May, Alexander Janisch (R&D Engineer AI and Neuromorphic Computing at Mercedes-Benz) and Gerrit Ecke (Researcher for Future Automotive Software Development and for Neuromorphic Computing at Mercedes-Benz).


[Screenshot of the LinkedIn exchange]
 
Reactions: 56 users

Bravo

If ARM was an arm, BRN would be its biceps💪!




The chips of tomorrow may well take inspiration from the architecture of our brains​

As artificial intelligence demands more and more energy from the computers it runs on, scientists at IBM Research are taking inspiration from the world’s most efficient computer: the human brain.
Neuromorphic computing is an approach to hardware design and algorithms that seeks to mimic the brain. The concept doesn’t describe an exact replica, a robotic brain full of synthetic neurons and artificial gray matter. Rather, experts working in this area are designing all layers of a computing system to mirror the efficiency of the brain. Compared to conventional computers, the human brain barely uses any power and can effectively solve tasks even when faced with ambiguous or poorly defined data and inputs. IBM Research scientists are using this evolutionary marvel as inspiration for the next generation of hardware and software that can handle the epic amounts of data required by today’s computing tasks — especially artificial intelligence.
In some cases, these efforts are still deep in research and development, and for now they mostly exist in the lab. But in one case, prototype performance numbers suggest that a brain-inspired computer processor will soon be ready for market.

What is neuromorphic computing?​

To break it down into its etymology, the term “neuromorphic” literally means “characteristic of the shape of the brain or neurons.” But whether this is the appropriate term for the field or a given processor may depend on whom you ask. It could mean circuitry that attempts to recreate the behavior of synapses and neurons in the human brain, or it could mean computing that takes conceptual inspiration from how the brain processes and stores information.
If it sounds like the field of neuromorphic — or brain-inspired — computing is somewhat undecided, that’s only because researchers have taken such vastly different approaches to building computer systems that mimic the brain. Scientists at IBM Research and beyond have been working for years to develop these machines, and the field has not yet landed on the quintessential neuromorphic architecture.
One familiar approach to brain-inspired computing involves creating very simple, abstract models of biological neurons and synapses. These are essentially static, nonlinear functions that use scalar multiplication. In this case, information propagates as floating-point numbers. When it’s scaled up, the result is deep learning. At a simplistic level, deep learning is brain inspired — all these mathematical neurons add up to something that mimics certain brain functions.
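To ground that description, here is a minimal sketch (my illustration, not code from IBM) of such an abstract neuron: a static, nonlinear function built from scalar multiplications on floating-point inputs. Stack many layers of these and you get the deep networks the article refers to.

```python
import numpy as np

# A single abstract "neuron" as the article describes it (generic sketch,
# not a specific IBM model): a weighted sum of floating-point inputs passed
# through a static nonlinearity.

def neuron(x, w, b):
    return np.maximum(0.0, w @ x + b)  # scalar multiplies, a sum, a ReLU

x = np.array([0.5, -1.0, 2.0])  # inputs propagate as floating-point numbers
w = np.array([0.1, 0.4, -0.2])  # synaptic weights
print(neuron(x, w, b=0.05))     # -> 0.0 here (the weighted sum is negative)
```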
“In the last decade or so, this has become so successful that the vast majority of people doing anything related to brain-inspired computing are essentially doing something related to this,” says IBM Research scientist Abu Sebastian. Mimicking neurons with math can be done in additional brain-inspired ways, he says, by incorporating neuronal or synaptic dynamics, or by communicating with spikes of activity, instead of floating-point numbers.
An analog approach, on the other hand, uses advanced materials that can store a continuum of conductance values between 0 and 1, and perform multiple levels of processing — multiplying using Ohm’s law and accumulating partial sums using Kirchhoff’s current summation.
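As a worked illustration of that analog idea (a hedged sketch with made-up numbers, not IBM's design): each device's conductance G multiplies its input voltage V via Ohm's law, I = G·V, and Kirchhoff's current law sums the device currents on each column wire, so an entire matrix-vector product happens in one physical step.

```python
import numpy as np

# Illustrative sketch (not IBM code): a crossbar array computes y = G^T v
# in one step. The device at row i, column j stores conductance G[i, j];
# applying voltage v[i] to row i makes it pass current I = G[i, j] * v[i]
# (Ohm's law), and each column wire sums its device currents
# (Kirchhoff's current law).

rng = np.random.default_rng(0)
G = rng.uniform(0.0, 1.0, size=(4, 3))   # conductances, normalized to [0, 1]
v = rng.uniform(-1.0, 1.0, size=4)       # input voltages (the activations)

column_currents = G.T @ v                # what the column wires measure
# Verify against an explicit per-device sum:
check = np.array([sum(G[i, j] * v[i] for i in range(4)) for j in range(3)])
assert np.allclose(column_currents, check)
print(column_currents)
```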

How on-chip memory eliminates a classic bottleneck​

A common trait among brain-inspired computing architecture approaches is on-chip memory, also called in-memory computing. It's a fundamental shift in chip structure compared to conventional microprocessors.
The brain is divided into regions and circuits, which co-locate memory formation and learning — in effect, data processing and storage. Classical computers are not set up this way. With a conventional processor, your memory sits apart from the processor where computing happens, and information is ferried back and forth between the two on circuits. But in a neuromorphic architecture that includes on-chip memory, memory is closely intertwined with processing on a fine level — just like in the brain.
This architecture is a chief feature in IBM’s in-memory computing chip designs, whether analog or digital.
The rationale for putting computing and memory side by side is that machine learning tasks are computing-intensive, but the tasks themselves are not necessarily complex. In other words, there’s a high volume of simple calculations called matrix multiplication. The limiting factor isn’t that the processor is too slow, but that moving data back and forth between memory and computing takes too long and uses too much energy, especially when dealing with heavy workloads and AI-based applications. This kink is called the von Neumann bottleneck, named for the von Neumann architecture that has been employed in nearly every chip design since the dawn of the microchip era. With in-memory computing, there are huge energy and latency savings to be found by cutting this shuffle out of data-heavy processes like AI training and inferencing.
In the case of AI inference, synaptic weights are stored in memory. These weights dictate the strength of connections between nodes, and in the case of a neural network, they’re values applied to the matrix multiplication operations being run through them. If synaptic weights are stored apart from where processing takes place and must be shuttled back and forth, the energy you spend per operation will always plateau at a certain point, meaning that more energy eventually stops leading to better performance. Sebastian and his colleagues who developed one of IBM’s brain-inspired chips, named Hermes, believe they must break down the barrier created by moving synaptic weights. The goal is much more performant AI accelerators with smaller footprints.
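A back-of-envelope calculation shows why the weight shuttle dominates. The energy numbers below are rough, order-of-magnitude figures commonly cited in the computer architecture literature, not IBM's measurements:

```python
# Illustrative energy budget (order-of-magnitude literature figures,
# not IBM's numbers):
MAC_PJ = 3.0          # ~energy of one 32-bit multiply-accumulate, picojoules
DRAM_READ_PJ = 640.0  # ~energy to fetch one 32-bit word from off-chip DRAM

n_weights = 10_000_000   # a modest model, each weight fetched once per inference

compute_energy = n_weights * MAC_PJ
fetch_energy = n_weights * DRAM_READ_PJ  # weights shuttled across the bottleneck

print(f"compute: {compute_energy / 1e6:.1f} uJ, "
      f"weight movement: {fetch_energy / 1e6:.1f} uJ")
# Movement costs ~200x the math itself; keeping weights in or near the
# compute (as Hermes and NorthPole do) removes the dominant term.
```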
“In-memory computing minimizes or reduces to zero the physical separation between memory and compute,” says IBM Research scientist Valeria Bragaglia, who is part of the Neuromorphic Devices and System group.
In the case of IBM’s NorthPole chip, the computing structure is built around the memory. But rather than locating the memory and computing in exactly the same space, as is done in analog computing, NorthPole intertwines them so that they may be more specifically called “near-memory.” But the effect is essentially the same.
The analog Hermes chip uses phase-change memory (PCM) devices that store AI model weights in the conductance values of a type of glass that can be switched between amorphous and crystalline phases.

How brain-inspired chips mimic neurons and synapses​

Carver Mead, an electrical engineering researcher at California Institute of Technology, had a huge influence on the field of neuromorphic computing back in the 1990s, when he and his colleagues realized that it was possible to create an analog device that, at a phenomenological level, resembles the firing of neurons.
Decades later, this is essentially what chips like Hermes and IBM’s other prototype analog AI chip are doing: Analog units both perform calculations and store synaptic weights, much like neurons in the brain do. Both analog chips contain millions of nanoscale phase-change memory (PCM) devices, a sort of analog computing version of brain cells.
The PCM devices are assigned their weights by flowing an electrical current through them, changing the physical state of a piece of chalcogenide glass. When more voltage passes through it, this glass is rearranged from a crystalline to an amorphous solid. This makes it less conductive, changing the value of matrix multiplication operations when they are run through it. After an AI model is trained in software, all synaptic weights are stored in these PCM devices, just like memories are in biological synapses.
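A small sketch of that deployment step (assumed noise model and parameters, nothing here is IBM's actual flow): trained weights are scaled onto the device's bounded conductance range, perturbed by programming noise, and inference then runs on the analog copy.

```python
import numpy as np

# Sketch (assumed parameters, not IBM's): map trained weights onto a PCM
# device's bounded conductance range, add write noise, and compare inference
# outputs. Positive and negative weights are usually split across a device
# pair; here a signed conductance stands in for that pair.

rng = np.random.default_rng(1)
W = rng.normal(0.0, 0.5, size=(3, 4))        # weights from software training

g_max = 1.0                                   # max conductance (normalized)
scale = g_max / np.abs(W).max()
G_target = W * scale                          # ideal programmed conductances
G_actual = G_target + rng.normal(0.0, 0.02 * g_max, size=W.shape)  # write noise

x = rng.normal(size=4)
y_digital = W @ x
y_analog = (G_actual / scale) @ x             # read back through the array

print(np.max(np.abs(y_analog - y_digital)))   # small, noise-limited error
```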
“Synapses store information, but they also help compute,” says IBM Research scientist Ghazi Sarwat Syed, who works on designing the materials and device architectures used in PCM. “For certain computations, such as deep neural network inference, co-locating compute and memory in PCM not only overcomes the von Neumann bottleneck, but these devices also store intermediate values beyond just the ones and zeros of typical transistors.” The aim is to create devices that compute with greater precision, can be densely packed onto a chip, and can be programmed with ultra-low currents and power.
“Furthermore, we’re trying to give these devices more flavor,” he says. “Biological synapses store information in a nonvolatile way for a long time, but they also have changes that are short-lived.” So, his team is working on ways to make changes in the analog memory that better emulate biological synapses. Once you have that, you can craft new algorithms that solve problems that digital computers have difficulty doing.
One shortcoming of these analog devices, Bragaglia notes, is that they are currently limited to inferencing. “There are no devices that can be used for training because the accuracy of moving the weights isn’t there yet,” she says. The weights can be cemented into PCM cells once an AI model has been trained on digital architecture, but changing the weights directly through training isn’t yet precise enough. Plus, PCM devices are not durable enough to have their conductance changed a trillion and more times, like would happen during training, according to Syed.
IBM Research's unnamed prototype analog chip uses PCM to encode up to 35 million model weights in a single chip.
Multiple teams at IBM Research are working to address the issues created by non-ideal material properties and insufficient computational fidelity. One such approach involves new algorithms that work around the errors created during model weight updates in PCM. They’re still in development, but early results suggest that it will soon be possible to perform model training on analog devices.
Bragaglia is involved in a materials science approach to this problem: a different kind of memory device called resistive random-access memory or RRAM. RRAM functions by similar principles as PCM, storing the values of synaptic weights in a physical device. An atomic filament sits between two electrodes, inside an insulator. During AI training, the input voltage changes the oxidation of the filament, which alters its resistance in a very fine manner — and this resistance is read as a weight during inferencing. These cells are arranged on a chip in crossbar arrays, creating a network of synaptic weights. So far, this structure has shown promise for analog chips that can perform computation while remaining flexible to updates. This was made possible only after years of material and algorithm co-optimization by several teams of researchers at IBM.
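A toy model makes the training problem concrete. The asymmetric, noisy update behaviour below is an assumption for illustration, not measured RRAM or PCM data: the same stream of small weight updates lands differently on an ideal weight and on a non-ideal device.

```python
import numpy as np

# Toy model (assumed behaviour, not measured device data): apply the same
# sequence of +delta/-delta weight updates to an ideal weight and to a
# device whose potentiation and depression steps are asymmetric and noisy.

rng = np.random.default_rng(2)
updates = rng.choice([+0.01, -0.01], size=2000)   # gradient-like nudges

ideal = np.cumsum(updates)

device = 0.0
for du in updates:
    gain = 1.0 if du > 0 else 0.6                 # asymmetric step response
    device += gain * du + rng.normal(0.0, 0.002)  # plus cycle-to-cycle noise

print(f"ideal end: {ideal[-1]:+.3f}, device end: {device:+.3f}")
# The device drifts away from the ideal trajectory, which is why, as noted
# above, weights are trained digitally and then cemented into the cells.
```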
Beyond the way memories are stored, the way data flows in some neuromorphic computer chips can be fundamentally different from the way it does in conventional ones. In a typical synchronous circuit — most computer processors — streams of data are clock-based, with a continuous oscillating electrical current that synchronizes the actions of the circuit. There can be different structures and multiple layers of clocks, including a clock multiplier that enables a microprocessor to run at a different rate than the rest of the circuit. But on a basic level, things are happening even when no data is being processed.
Instead of this, biology uses event-driven spikes, says Syed. “Our nerve cells are communicating sparsely, which is why we’re so efficient,” he adds. In other words, the brain only works when it must, so by adopting this asynchronous data processing stream, an artificial emulation can save significant amounts of energy.
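The saving is easy to quantify with a sketch (a generic illustration of event-driven processing, not any IBM chip's scheme): a clocked design touches every input on every tick, while an event-driven one does work only for inputs that actually spiked.

```python
import numpy as np

# Event-driven vs clocked processing (illustrative, not a chip model):
# with sparse activity, an event-driven scheme touches only inputs that
# spiked, while a clocked scheme evaluates every input on every tick.

rng = np.random.default_rng(3)
ticks, n_inputs, activity = 1000, 256, 0.02      # 2% of inputs spike per tick
spikes = rng.random((ticks, n_inputs)) < activity

clocked_ops = ticks * n_inputs                   # every input, every tick
event_ops = int(spikes.sum())                    # only actual events

print(f"clocked: {clocked_ops} ops, event-driven: {event_ops} ops "
      f"(~{clocked_ops / event_ops:.0f}x fewer)")
```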
All three of the brain-inspired chips at IBM Research were designed with a standard clocked process, though.
NorthPole is a brain-inspired research prototype chip that stores model weights digitally, but like the analog chips, it eliminates the von Neumann bottleneck that usually separates memory and compute.
In one of these cases, IBM Research staff say they’re making significant headway into edge and data center applications. “We want to learn from the brain,” says IBM Fellow Dharmendra Modha, “but we want to learn from the brain in a mathematical fashion while optimizing for silicon.” His lab, which developed NorthPole, doesn’t mimic the phenomena of neurons and synapses via transistor physics, but digitally captures their approximate mathematics. NorthPole is axiomatically designed and incorporates brain-inspired low precision; a distributed, modular, core array with massive compute parallelism within and among cores; memory near compute; and networks-on-chip. NorthPole has also moved from TrueNorth’s spiking neurons and asynchronous design to a synchronous design.
For TrueNorth, an experimental processor that was an early springboard for the more sophisticated and commercially ready NorthPole, Modha and his team realized that event-driven spikes use silicon-based transistors inefficiently. Neurons in the brain fire at about 10 hertz (10 times a second), whereas today’s transistors run in gigahertz — the transistors in the IBM z16 run at 5 GHz, and transistors in a MacBook’s 6-core Intel Core i7 run at 2.6 GHz. If the synapses in the human brain operated at the same rate as a laptop, “our brain would explode,” says Syed. In neuromorphic computer chips such as Hermes — or brain-inspired ones like NorthPole — the goal is to combine the bio-inspiration of how data is processed with the high-bandwidth operation required by AI applications.
Because of their choice to move away from neuron-like spiking and other features that mimic the physics of the brain, Modha says his group leans more towards the term ‘brain-inspired’ computing than ‘neuromorphic.’ He envisions that NorthPole has lots of room for growth, because they can tweak the architecture in purely mathematical and application-centered ways to achieve more gains while also exploiting silicon scaling and lessons gleaned from user feedback. And the data show that their strategy worked: In new results from Modha’s team, NorthPole performed inference on a 3-billion-parameter model 46.9 times faster than the next most energy-efficient GPU, at 72.7 times higher energy efficiency than the next lowest latency one.

Thinking on the edge: neuromorphic computing applications​

Researchers may still be defining what neuromorphic computing is or the best ways to build brain-inspired circuits, says Syed, but they tend to agree that it’s well suited for edge applications — phones, self-driving cars, and other applications that can take advantage of fast, efficient AI inferencing with pre-trained models. A benefit of using PCM chips on the edge, Sebastian says, is that they can be exceptionally small, performant, and inexpensive.
Robotics applications could be well suited for brain-inspired computing, says Modha, as could video analytics, such as in-store security cameras. Putting neuromorphic computing to work in edge applications could help solve problems of data privacy, says Bragaglia, as on-device inference chips would mean data doesn't need to be shuttled back and forth between devices, or to the cloud, to perform AI inferencing.
Whatever brain-inspired or neuromorphic processors end up coming out on top, researchers also agree that the current crop of AI models are too complicated to be run on classical CPUs or GPUs. There needs to be a new generation of circuits that can run these massive models.
“It’s a very exciting goal,” says Bragaglia. “It’s very hard, but it’s very exciting. And it’s in progress.”
 
Reactions: 17 users

Bravo

If ARM was an arm, BRN would be its biceps💪!
Say what?!!!


Opteran's neuromorphic software offers a new era for autonomous space robotics​

Opteran is conducting tests with Airbus at its Mars Yard to enable rovers to understand depth perception in the toughest off-world environments.
Cate Lawrence · 16 hours ago











Opteran, the natural intelligence company, has announced that its work with Airbus Defence and Space, supported by the European Space Agency (ESA) and the UK Space Agency, will test the Opteran Mind, its general-purpose neuromorphic software, in Airbus space rovers.
Opteran believes nature offers a more efficient, robust solution for autonomy in space robotics which will enable new mission capabilities for future Mars missions and other space exploration projects.
Based on over a decade of research into animal and insect vision, navigation and decision-making, Opteran is conducting tests with Airbus at its Mars Yard to enable rovers to understand depth perception in the toughest off-world environments.

Today’s off-world robots are cumbersome, taking minutes to compute a map of their surroundings from multiple cameras before every movement.
Opteran’s visual and perception systems offer Mars rovers the ability to understand their surroundings in milliseconds, in challenging conditions, without adding to the robot's critical power consumption.
Opteran has reverse engineered natural brain algorithms into a software mind that enables autonomous machines to efficiently move through the most challenging environments without the need for extensive data or training.
Successful application of this technology to real-world space exploration will significantly extend navigation capabilities in extreme off-world terrain.
Image: Opteran team working on Airbus Mars rover.
Ultimately, this provides rovers with continuous navigation while being able to drive further and faster.
“We are delighted to be working with ESA and Airbus to demonstrate how Opteran’s neuromorphic software addresses key blockers in space autonomy,” said David Rajan, CEO and co-founder, Opteran.
“Our long-term vision is to provide natural autonomy with the Opteran Mind to every machine, on Earth and beyond, and this project will show how we can enable high speed, continuous safe driving, optimised for the rigours of planetary rover navigation.
Today, no such flight-ready systems exist, so there is a major opportunity for Opteran to step up and resolve a challenge facing all the major players in space robotics.”
This project is funded by ESA’s General Support Technology Programme (GSTP) through the UK Space Agency, which takes leading-edge technologies that are not ready to be sent into space and then develops them to be used in future missions. The near-term focus for the BNEE project is on depth estimation for obstacle detection, and the mid-term focus on infrastructure-free visual navigation.
Once the results of the initial testing have been presented to ESA the goal would be to move to the next stage of grant funding which would start to focus on deployment and commercialisation.
 
Reactions: 21 users

Terroni2105

Founding Member
New job advertised on LinkedIn


[Screenshot of the job listing]
 
Reactions: 18 users

7für7

Top 20
For a moment I thought…. But no… it was an emotional trap.



 

AARONASX

Holding onto what I've got
[Reaction GIF and attached screenshot]
 
Reactions: 6 users

Bravo

If ARM was an arm, BRN would be its biceps💪!
This might be something worth keeping an eye on.

Renesas Collaborates with Intel on Best-in-Class Power Management Solution for New Intel Core Ultra 200V Series Processors

Innovative Solution Offers Highly Compact Form Factor Combined with Improved Battery Life, Enabling the Next Generation of AI PCs
(Photo: Business Wire)
October 24, 2024 08:00 AM Eastern Daylight Time
TOKYO--(BUSINESS WIRE)--Renesas Electronics Corporation (TSE:6723), a premier supplier of advanced semiconductor solutions, today announced a collaboration with Intel resulting in a power management solution that delivers best-in-class battery efficiency for laptops based on the new Intel® Core™ Ultra 200V series.
Collaborating closely with Intel, Renesas has developed an innovative new customized power management IC (PMIC) that covers the entire power management needs for this newest generation of Intel processors. This advanced and highly integrated PMIC, combined with a pre-regulator and a battery charger, offers a complete solution for PCs leveraging the new Intel processor. The three new devices work together to provide a purpose-built power solution targeted at client laptops, particularly those running AI applications that tend to consume a lot of power.
Feature Set Optimized for Mobile Applications
The new devices include the RAA225019 PMIC, the RAA489301 high-efficiency pre-regulator, and the ISL9241 battery charger. These devices have a feature set optimized for low-power mobile computing applications. Renesas solutions are backed by tested reference designs and strong application support.
The RAA225019 PMIC is highly configurable for Lunar Lake applications and features fully integrated power MOSFETs and current sensing circuitry. It supports high switching frequencies, making it well suited for small form factor applications without compromising efficiency.
The RAA489301 pre-regulator is a 3-level buck converter designed to provide an optimized voltage range for the RAA225019 PMIC. Its innovative architecture enhances thermal performance compared to traditional 2-level buck designs, and it supports a wide input and output voltage range. This allows for superior efficiency in compact, high-power-density applications, making it an ideal choice for demanding power solutions.
“With the launch of the newest Intel Core Ultra processors, we are committed to delivering the best battery life experience possible for our customers,” said Josh Newman, Vice President, Client Computing Group and General Manager, Product and Platform Marketing at Intel. “Together with Renesas, we enabled a solution that will bring the next generation of innovative mobile platforms with unrivaled power efficiency.”
"Our mutual commitment with Intel to AI-powered mobile solutions benefits every user with leading-edge technology,” said Tom Truman, Vice President and General Manager, Performance Computing Power at Renesas. “This offering demonstrates the depth and breadth of our power technology and highlights our ability to stay ahead of emerging market trends.”
Device Availability
The RAA225019 PMIC, the RAA489301 high efficiency pre-regulator, and the ISL9241 battery charger are available today from Renesas. For more information, please visit www.renesas.com/power.
Renesas Power Management Leadership
A world leader in power management ICs, Renesas ships more than 1.5 billion units per year, with increased shipments serving the computing industry, and the remainder supporting industrial and Internet of Things applications as well as data center and communications infrastructure. Renesas has the broadest portfolio of power management devices, delivering unmatched quality and efficiency with exceptional battery life. As a trusted supplier, Renesas has decades of experience designing power management ICs, backed by a dual-source production model, the industry’s most advanced process technology, and a vast network of more than 250 ecosystem partners. For more information about Renesas, visit www.renesas.com/power.
About Renesas Electronics Corporation
Renesas Electronics Corporation (TSE: 6723) empowers a safer, smarter and more sustainable future where technology helps make our lives easier. A leading global provider of microcontrollers, Renesas combines our expertise in embedded processing, analog, power and connectivity to deliver complete semiconductor solutions. These Winning Combinations accelerate time to market for automotive, industrial, infrastructure and IoT applications, enabling billions of connected, intelligent devices that enhance the way people work and live. Learn more at renesas.com. Follow us on LinkedIn, Facebook, X, YouTube and Instagram.
 
Reactions: 9 users

JB49

Regular
Say what?!!!

[Quoting Bravo's Opteran / Airbus article post above]
I wonder if they are using TENNs, or are they competition...
 
Reactions: 2 users

Bravo

If ARM was an arm, BRN would be its biceps💪!
This might seem like a bit of a left-of-field thought as this feud between Arm and Qualcomm intensifies.

If Arm does cancel Qualcomm's license in 60 days, wouldn't this provide some incentive for Qualcomm to look for an alternative to some of Arm's technology while they sort through all of the legal issues?

One person's loss is another's gain, as they say...


[Screenshot of a news article on the Arm/Qualcomm license dispute]
 
Reactions: 8 users

Bravo

If ARM was an arm, BRN would be its biceps💪!
Published 8 days ago.

LG to offer on-device AI chips.



LG Electronics to provide customized AI chiplets for appliances
By Seonhaeng Lee


Image: TheElec
LG Electronics will offer customized AI chiplets for application in appliances, as they are more cost-effective than chips on the market, a senior executive said.
Chips offered by Intel, Qualcomm, and others were general-purpose ones, Jin Gyeong Kim, head of LG Electronics' SoC Center, said during a conference in South Korea on Wednesday. Removing unnecessary features and making a chip using chiplet technology can halve the price, he said.
Chiplet technology combines separately made chips through packaging and is considered an alternative, cheaper way to make AI chips, instead of packing all the features into one silicon die.
The company’s SoC Center is under the supervision of the CTO and also develops its own IPs.
LG Electronics plans to collaborate in foundry with Samsung, Intel, and TSMC. Kim said it plans to manufacture its AI chiplets, which will be on-device AI chips, using 7- and 5-nanometer process nodes.
Kim said these chips can improve the resolution and sound quality in TVs and increase conveniences in home appliances. The company has finished proof-of-concept for the chips, he added.
Last month, LG Electronics announced its partnership with US chip IP firm Blue Cheetah on chiplet.
 
Reactions: 12 users

Bravo

If ARM was an arm, BRN would be its biceps💪!
I wonder if they are using TENNs, or are they competition...


Looks like they might be competition. Neuromorphic technology inspired by the brains of insects, developed by University of Sheffield spin out.

I don't know whether Airbus and ESA would be looking to combine both of our technologies, or to test and compare them.
 
Reactions: 6 users

GDJR69

Regular
Looks like they might be competition. Neuromorphic technology inspired by the brains of insects, developed by University of Sheffield spin out.

I don't know whether Airbus and ESA would be looking to combine both of our technologies, or to test and compare them.
For those who want the intellectual capacity of an insect, it will be perfect. I'll stick to Akida… :ROFLMAO:
 
Reactions: 6 users

Bravo

If ARM was an arm, BRN would be its biceps💪!


Ohh Tom you’ve done it again….



Tying into my earlier post above about LG planning on-device AI chips.

I thought I'd post this LG article dated Feb 2024.

It's interesting in light of Todd Viera's presentation involving appliances (which @supersonic001 posted above).

So, LG have been working with Upstage to develop small language model (SLM)-based on-device AI technology to fit into LG’s laptops and home appliances.

The aim of the collaboration is to develop AI technology to recognize users' voice commands and translate, summarize, search and recommend documents, etc.

This seems to be exactly what Todd was describing above, especially when he suggested Akida Pico will be able to perform text-to-speech and speech-to-speech functions.

Oh, and I also posted a link to an Upstage video published 2 months ago in which they discuss RAG, which Todd also talked about in his presentation.

So, maybe worth keeping an eye on LG in this regard...




LG Elec, Upstage to form partnership for on-device AI​

Upstage's small language model-based AI technology will apply to LG laptops and home appliances​

By Chae-Yeon Kim · Feb 06, 2024 (GMT+09:00) · 2 min read


Choi Hong-joon, Upstage vice president (left) and Gong Hyuk-joon, LG Electronics' head of IT customer experience (Courtesy of LG Electronics)
South Korea’s artificial intelligence startup Upstage said on Tuesday it has signed a memorandum of understanding with LG Electronics Inc. to develop small language model (SLM)-based on-device AI technology to fit into LG’s laptops and home appliances.

On-device AI processes data in smartphones, laptops and tablets without internet connectivity at a faster speed than cloud-based AI and consumes less power. It also has enhanced data security without personal information leakage.


In December last year, the AI startup unveiled Solar, a new 10.7-billion-parameter English language model, the first of its kind in the world. It is an advanced pretrained generative text model.

Upon its debut, Solar secured the top position on Huggingface Open LLM Leaderboard, a global AI platform, beating Meta, Alibaba, 01.AI and Mistral AI.

In the first two weeks of its launch, all LLMs based on Solar took the top 20 spots on the leaderboard.

Solar is an acronym for Specialized and Optimized LLM and Applications with Reliability.

It is less than one-tenth the size of GPT-3, a very large language model, in parameter count, and features a much faster inference speed, allowing it to provide various language-related AI services without affecting the performance and power consumption of the device. It is regarded as an optimal on-device AI model.

Upstage and LG will collaborate to develop AI technology to recognize users' voice commands and translate, summarize, search and recommend documents or web pages.

“We will first apply the highest-performance AI to LG gram laptops with cumulative sales of 2 million units and do our best to ensure LG customers can experience AI features in LG Electronics home appliances,” Choi Hong-joon, vice president of Upstage, said in a statement.

Upstage Chief Executive Sung Kim is a professor at the Hong Kong University of Science and Technology (HKUST) and led Naver Clova AI Research.
He co-founded Upstage in 2020 with other AI experts from Nvidia, eBay, Naver Corp. and Kakao Corp., as well as professors from New York University and HKUST.

Naver is South Korea’s largest online portal and Kakao is the mobile giant in the country.

At CES 2024 last month, LG Electronics CEO Cho Joo-wan defined AI as affectionate intelligence and emphasized that responsible intelligence was one of its characteristics to protect user data and enhance access safety.





[Video] Creating a Retrieval-Augmented Generation (RAG) Workflow with Upstage AI (478 views, 16 Aug 2024)
 
Reactions: 12 users

Earlyrelease

Regular
Warning: bad Friday morning humour.

If they can send the IT techs to fix the rovers, why don't they just bring the soil samples back with them? 😜

[Attached meme image]
 
Reactions: 16 users