BRN Discussion Ongoing

mldf

Emerged
Maybe a bit of national pride on the authors' part? I can't find any such statement on the KAIST homepage itself.
KAIST was founded by the state and is considered the second most innovative university in the Asia-Pacific region. And a colleague once explained something to me that I see confirmed in this comment on an article:

When Koreans look at the map, they see a small country between two superpowers. This leads to a latent and constant fear of being torn between Japan and China, both economically and culturally, which is justified by historical events. For Koreans, with their strong sense of national pride, it is dismaying to realise time and again that Korea is barely recognised outside Asia. Yet they have every reason to be proud of the so-called "miracle on the Han River", because today car and shipbuilding, semiconductor manufacturing, digital electronics, steel and petrochemicals are cited as key industries, and every Korean knows where they currently rank internationally.

[As far as I know, KAIST was the first in the world to develop filament LEDs, which went on to conquer the world much later. I bought one from Asia back then and it took another 2 or 3 years before they were mass-produced and sold here.]

The only related article I found on the KAIST website was from 2021.


And I couldn't find anything about the 'world's first neuromorphic chip' on their News page.

 
  • Like
Reactions: 2 users

Diogenese

Top 20
It's a Quill, believe me, I used to be an inkwell monitor when I was at school. 😎
Pre-fountain pens, I used to be ink up to the elbows ...
 
  • Haha
Reactions: 5 users
From 9:45 they talk about the new KAIST chip, still calling it "The World's First".



This video was posted just over 20 minutes ago.
 
  • Like
  • Thinking
  • Fire
Reactions: 5 users

skutza

Regular
A Horrible and Dumb thing to say that makes me also feel like I don’t want to spend much time on this forum.
If someone had written that about me, I would have used the :ROFLMAO: emoji and said yes, my wife has the patience of a saint.

Then I would've moved on and enjoyed my day. But hey, maybe I have tree bark for skin and not much bothers me. It wasn't meant to be nasty; it was about his style of posting and the manner in which he responds to posts he doesn't agree with or doesn't like.

Anyway, if FF or anyone else took offence, I apologise. Let's all drink a cup of concrete and harden up a little, huh?
 
  • Like
  • Haha
Reactions: 14 users

Interesting development.​


If this is accurate, and our lead in the neuromorphic SNN space is real…?

Then we would be part of this…?

If this is accurate and we are not part of it… 🤐

KAIST researchers develop world's first 'neuromorphic' AI chip​

Neuromorphic computing technology aims to develop integrated circuits that mimic the human nervous system so that chips can perform more sophisticated tasks. [SHUTTERSTOCK]
A research team at KAIST has developed the world’s first AI semiconductor capable of processing a large language model (LLM) with ultra-low power consumption using neuromorphic computing technology.

The technology aims to develop integrated circuits that mimic the human nervous system so that chips can perform more sophisticated tasks requiring adaptation and reasoning with far less energy consumption.


The Ministry of Science and ICT said Wednesday that the team, led by Prof. Yoo Hoi-jun at the KAIST PIM Semiconductor Research Center, developed a "Complementary-Transformer" AI chip, which processes GPT-2 with an ultra-low power consumption of 400 milliwatts and at a high speed of 0.4 seconds.


A rendered image comparing the performance of different types of processors [YONHAP]

The 4.5-millimeter-square chip, developed using Korean tech giant Samsung Electronics' 28-nanometer process, consumes 625 times less power than global AI chip giant Nvidia's A100 GPU, which requires 250 watts to process LLMs, the ministry explained.

The chip is also 41 times smaller in area than the Nvidia model, enabling it to be used in devices like mobile phones and therefore better protecting user privacy.
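
As a quick sanity check of those reported figures (a minimal sketch; the 250 W and 400 mW numbers are taken from the article itself):

```python
# Sanity-check the reported power ratio between Nvidia's A100 GPU and
# KAIST's Complementary-Transformer chip (figures from the article above).
a100_watts = 250.0   # reported A100 power draw for LLM processing
kaist_watts = 0.4    # reported chip power: 400 milliwatts

ratio = a100_watts / kaist_watts
print(f"Power ratio: {ratio:.0f}x")  # -> 625x, matching the claim
```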

The KAIST team has succeeded in demonstrating various language-processing tasks with its LLM accelerator on Samsung's latest smartphone model, the Galaxy S24, which is the world's first smartphone with on-device AI, featuring real-time translation for phone calls and improved camera performance, Kim Sang-yeob, a researcher on the team, told reporters at a press briefing.

The ministry said the utilization of neuromorphic computing technology, which functions like a human brain, specifically spiking neural networks (SNNs), is essential to the achievement.

Previously, the technology was less accurate than deep neural networks (DNNs) and mainly capable of simple image classification, but the research team succeeded in improving its accuracy to match that of DNNs so it could be applied to LLMs.

The team said its new AI chip optimizes computational energy consumption while maintaining accuracy by using a unique neural network architecture that fuses DNNs and SNNs and effectively compresses the large parameters of LLMs.

"Neuromorphic computing is a technology global tech giants, like IBM and Intel, have failed to realize. We believe we are the first to run an LLM with an ultra-low-power neuromorphic accelerator," Yoo said.

BY PARK EUN-JEE, YONHAP [park.eunjee@joongang.co.kr]
As some of us, including me, said before getting absolutely bashed in this forum for stating simple facts:
If we're not involved in Samsung's 2024 product roadmap, then we're screwed.
No more lead for us. The biggest company that designs and uses its own chipsets is doing neuromorphic now.
Can't wait for the comments. Is someone gonna write a three-pager hounding and ridiculing me again, or will it just be reported?
 
  • Like
  • Sad
Reactions: 3 users

Labsy

Regular
As some of us, including me, said before getting absolutely bashed in this forum for stating simple facts:
If we're not involved in Samsung's 2024 product roadmap, then we're screwed.
No more lead for us. The biggest company that designs and uses its own chipsets is doing neuromorphic now.
Can't wait for the comments. Is someone gonna write a three-pager hounding and ridiculing me again, or will it just be reported?
Well, there was mention, from memory, that a Korean company had "dissected and experimented with the Akida more than even BrainChip"... anyone remember that comment?
I reckon this is awesome news.... 🚀🚀🚀🔥🔥🔥
 
  • Like
  • Thinking
  • Fire
Reactions: 12 users

Diogenese

Top 20
Well, there was mention, from memory, that a Korean company had "dissected and experimented with the Akida more than even BrainChip"... anyone remember that comment?
I reckon this is awesome news.... 🚀🚀🚀🔥🔥🔥
Has RT liked it?
 
  • Haha
  • Sad
  • Thinking
Reactions: 13 users

Labsy

Regular
Has RT liked it?
"Since Lou told us years ago that a South Korean company has spent possibly more time in validating the technology than BRN itself we have speculated a lot who it could have been."
DerAD, I believe these were your own words.
 
Last edited:
  • Like
Reactions: 6 users

Labsy

Regular
They've been dissecting this thing for years.....
I reckon we are in the S24.... or I'm a monkey's uncle. And if I'm wrong, it still validates neuromorphic on mobiles for competitors, so it's great news either way 🚀🚀🚀
 
  • Like
  • Fire
Reactions: 13 users
Whilst this is from Sep 23, I don't recall seeing or reading it.

Worth a read through imo.




Disruptive’s Substack



BrainChip: The Cloud-Free Future is Here
28 SEPT 2023


Introduction
Comprehending a revolutionary concept is not easy. Too often we get intimidated by jargon, struggle to see the core concept, or simply pretend that we are too busy.
That happened to me when I became curious about BrainChip (OTC: BRCHF, ASX: BRN). It doesn't help that Elon Musk's Neuralink is inserting chips into the brains of monkeys or pigs. Such images come to mind when we hear of BrainChip. But no, here you won't learn about such brain-computer interfaces. BrainChip is a company that commercialized a new semiconductor architecture. So far, this might cause some readers to yawn, stop, and search for other distractions. However, I will explain to you in plain language why this matters for you, as a long-term investor.
Let's start with something you have experienced. Recall the last time you purchased a computer. You were asked to choose between different processor brands and versions, then there was the decision to select a graphics card and finally the question of RAM. All these are chips; semiconductors, right? Apart from the sleek marketing message and promises about the capabilities that come with selecting one over the other, did you really care about comprehending what was in front of you?
What Is the Problem that BrainChip Has Solved?
In its most simple form, here is the elevator pitch: BrainChip invented and patented a semiconductor architecture that lowers energy consumption by orders of magnitude.
At this stage, I will not try to cover all the disclaimers and conditions that come naturally with such a revolutionary statement. Just let it sink in. Does it matter to me if the energy consumption of my PC gets lowered significantly? Probably not. But when we leave our home turf and think of other concepts like the cloud or data centers, we arrive where the action is. Did you know that these boiler rooms of the Internet consume more than 1% of all energy worldwide?[1] And finally, mentioning the elephant in the room, AI: it costs a company about 100x more to answer a question you type into ChatGPT or Bard than to serve a Google search.
Now, lifting off and getting the helicopter view: there are huge efficiency and performance gains to be had if we can find a way to realize what BrainChip's new chip architecture is promising.
The Details
And here's where most will get lost: technical jargon. One needs to expand one's vocabulary to understand where the investment thesis lies. These are concepts invisible or under-appreciated in our daily lives. So, let's dive in!
The above-mentioned boiler room of the internet is not running on steam anymore. We are electrified. Electricity is needed to allow semiconductors to come up with endless sequences of zeros and ones. The plumbing, to stay with the analogy, is what makes electrons take different pathways through the circuitry.
Chip giants like Intel, AMD, and NVIDIA have optimized their designs to achieve a truly astonishing number of computations. Progress has been relentless and it seems no law of physics can stop them. The most advanced chips used in today's desktops or data centers have one thing in common: they must conduct an incomprehensible number of calculations. Electrons race through their circuitry and generate so much heat that data centers spend more on air-conditioning than on the actual semiconductor hardware.[1]
Here you might say that over the last years, your phone has seldom grown warm and can often do similar calculations on a smaller scale. This is the achievement of a small UK company, now famous and known on the Nasdaq by the ticker ARM. More than 30 years ago, they came up with a novel energy-efficient chip architecture. Today, virtually all phones use this arm design.[2]
Vocabulary
At this stage, let's get accustomed to a few technical terms required to understand the investment case I am preparing for BrainChip:
- CPU: Central Processing Unit, used for most day-to-day calculations
- GPU: Graphic Processing Unit, used to render graphics and A.I.-type calculations
- Instruction Set: Commands (vocabulary) the processor understands
- Architecture: Circuitry depending on the complexity of this set of instructions
- X86: Complex instruction set, practically unchanged since the 1970s
- Sequential Processing: A CPU starts and finishes a calculation in sequential order
- Parallel Processing: Runs calculations in parallel
- Core: A unit that conducts the individual calculation
Transition
Why is this transition still ongoing, considering the obvious advantages coming with a more energy-efficient architecture? Processors require a set of instructions to do their calculations. Software and hardware need to speak the same language to execute these instructions. Based on a specific task, like running a spreadsheet (CPU) or generating a 3D animation (GPU), different chips and sets of instructions give the best results.
We will always remain in a certain flux: one technology finds more adoptions, and the other gets scaled down. Initially, we fine-tuned the X86 architecture and added more and more cores. This kept up with the demands to a degree. Data centers expanded with parallel computing designs. This became more and more unsustainable and expensive. With the widespread introduction of generative AI (text, images, music, and code), we are experiencing a watershed moment right now.
For home computers, Apple is leading the field in bringing arm architecture into our homes. Data centers can't change their hardware as nimbly as we consumers can; they are stuck with an expensive set of legacy X86-style hardware for the near future.


Data
Don’t you sometimes wonder where all this data resides? Sure, we can see a memory stick or a hard drive. But the bulk of the world's data resides in the cloud, aka data centers.
The amount of data generated by humans writing a message, filling out a form or saving a picture is less than the data volume generated when machines have exchanges with each other. The gap is growing exponentially.[3] You might wonder why machines decide to generate data. No, they are not sentient – not so far at least. This data volume is generated by sensors, as well as from simulations, machine learning, and blockchain. Obvious sensors like temperature probes or traffic cameras might come to mind. But we're getting side-tracked by attempting to understand each of these autonomous data sources: our world is awash in non-human-generated synthetic data. All this data is backed up to central servers where programs run operations to make sense of it.


The Edge
This non-human data is generated to a large degree "on the edge". Devices/sensors generate data, flowing to the cloud for interpretation.[4] This is what is causing the increase in data, inundating our internet with traffic and increasing the size of data centers.



BrainChip
Here comes our company. A micro-cap. A company with just a hundred employees and no turnover to speak of. Investor circles call these companies “story stocks.” As they have no turnover to show for it, they convince via their story. That should give the context. Nothing is certain in this domain. Risk is abundant. Success can take a generation.
Arm proved its design & functionality 20 – 30 years ago without instant success. It required the widespread adoption of mobile phone-computer hybrids, commonly referred to now just as phones. I want to provide the reasoning why this will repeat with BrainChip: we are currently experiencing an equivalent catalytic event, as evidenced by the exponential generation of non-human data volumes.
Their Secret Sauce
Understanding how this start-up-like company can find a solution to the global data dilemma requires additional vocabulary.
- Neuromorphic: Brain-like semiconductor architecture
- IoT: Internet of Things, the non-human data tsunami minions
- Event-Based: Running a calculation when needed, when an event occurs
- Latency: The time it takes between exchanges
- Neural Network: A machine-learning model simulating how our brain works
- Spike: Data burst that occurs when an event is recorded
- Convolution: Mathematical operation to extract features from images or signals
You could say that they patented the operating model of our brain. This is a lot to take in. We are talking about a set of instructions. BrainChip is not selling a synthetic brain with neurons and synapses. Their founder was an early proponent of what is now called neuromorphic engineering. For many years the concept was too abstract to warrant commercial attention.
Intriguing? Here is the catch: at this stage, most potential investors will get lost further researching what has been achieved. To a large degree, this is the reason why the community surrounding BrainChip is regarded with suspicion. It is simply difficult to wrap your head/brain around this concept.
In Simple Terms
Our brain operates on a meager 20 watts, a light bulb's worth of energy consumption. This is the case when playing chess or daydreaming. Evolutionarily, we have achieved something that has not been replicated by any commercial chip architecture. It’s because our brain works in an event-based manner. Example needed?
Imagine looking at a blank sheet of A4 paper that has a dot in its center. Humans have no difficulty identifying this dot. In comparison, a camera combined with image analysis will analyze each pixel, line by line, to determine that a certain location has a higher density of contrast. Calculations will then determine the event (localization of the dot). The software can’t differentiate between the data (the white part of the sheet of paper) and the significance of the dot (the black part). It is just stoically analyzing the data from top to bottom.
The neurons of our brain will only pass on information when an event occurs. This way, the brain can remain largely dormant and only consume energy during an event!
This concept has fascinated scientists and culminated in a neuromorphic design logic. The founder of BrainChip noticed the commercial value and patented these advances. They run what’s called a spiking neural network (SNN) on their chip.
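
To make the event-based idea concrete, here is a minimal sketch (my own illustration, not BrainChip's implementation) of a leaky integrate-and-fire neuron. The point is that it only does work on time steps that carry an input spike; silent periods cost nothing, which is where the energy savings come from:

```python
def lif_neuron(spike_times, weight=0.6, leak=0.9, threshold=1.0):
    """Leaky integrate-and-fire neuron driven by input spike events.

    Only time steps carrying a spike update the membrane potential;
    the leak for silent steps is applied lazily, mimicking how
    event-based hardware stays dormant between events.
    """
    potential = 0.0
    last_update = 0
    output_spikes = []
    for t in sorted(spike_times):                # iterate over events only
        potential *= leak ** (t - last_update)   # decay since last event
        potential += weight                      # integrate the input spike
        last_update = t
        if potential >= threshold:               # fire and reset
            output_spikes.append(t)
            potential = 0.0
    return output_spikes

# Two bursts of input spikes across 100 time steps: the neuron is
# touched only 6 times, not 100.
print(lif_neuron([3, 4, 5, 50, 51, 52]))  # -> [4, 51]
```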
Why Now?
Investors wait until there is a need to fund new innovations. This inflection point is now. We are surrounded by sensors in cars, outdoors, homes, and a multitude of smart devices. Examples needed?
- Driver-Alertness: Detects if a driver is losing attention
- Crowd Management: Build-up of city traffic, or crowds during an event
- Biometric Recognition: Border control and traveling
- Alexa/Siri: Low latency keyword detection to complex questions
- Hearing Aids: Discern and amplify sounds selectively to understand what was said
- Vital Signs: Wearables support monitoring and preventive medicine
- Industrial Predictive Maintenance: Alerts get sent before equipment or infrastructure breaks down
These advances have already been implemented. And we will soon have fully autonomous, self-driving cars/taxis. The amount of additional data that will be transferred with these applications will inundate the internet. Unless we can stop these sensors from communicating their data to the cloud.
Neural Networks
This data (consisting of video/images, sound, and other measurable sensor results) needs to be classified and converted into a neural network. Currently, these data streams are analyzed in a process called convolution, resulting in a Convolutional Neural Network (CNN). This is done in a central location utilizing top-of-the-range GPUs. The process can take a year, cost millions, and relies on high-quality, human-screened data. Once completed, the neural network is installed on the final device (car, hearing aid or sensor). But things stop there. This neural network is a one-trick pony: it can't learn from its observations. If something changes, everything needs to be recalibrated at HQ.
BrainChip also generates a neural network, but it's a Spiking Neural Network (SNN). It can be trained quickly on a much smaller number of lower-quality, non-human-validated samples. On top of that, once the SNN has been established, it keeps learning and continuously updates its model. How is this possible, since SNNs are not living? Unlike CNNs, BrainChip's SNN model parameters (called weights and biases) are not fixed. These values get changed in the chip's memory when the SNN "learns."
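
As a toy illustration of that distinction (my own sketch, not Akida's actual learning rule), imagine a nearest-prototype classifier whose "weights" are per-class vectors held in on-chip memory. Learning a new class in the field is then just writing one more vector in place, with no retraining at HQ:

```python
import numpy as np

class PrototypeClassifier:
    """Toy on-device learner: the 'weights' are per-class prototypes."""

    def __init__(self):
        self.prototypes = {}  # class label -> stored feature vector

    def learn(self, label, features):
        """One-shot learning: write the new prototype into memory."""
        self.prototypes[label] = np.asarray(features, dtype=float)

    def predict(self, features):
        x = np.asarray(features, dtype=float)
        return min(self.prototypes,
                   key=lambda c: np.linalg.norm(self.prototypes[c] - x))

clf = PrototypeClassifier()
clf.learn("cat", [0.9, 0.1, 0.2])
clf.learn("dog", [0.1, 0.8, 0.3])
print(clf.predict([0.85, 0.2, 0.2]))   # -> cat
clf.learn("bird", [0.2, 0.2, 0.9])     # new class learned in the field
print(clf.predict([0.25, 0.1, 0.95]))  # -> bird
```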
How Can Brainchip Make Money?
They are the only company with a commercial neuromorphic chip architecture and corresponding patent-protected intellectual property (IP). Like arm’s revenue model, BrainChip sells its IP to anyone in the business of chip design. They are already a:
- member of arm’s A.I. partnership program
- partner with Intel’s foundry services
- selected company by PROPHESEE, the global leader in IoT machine vision
- selected company by SiFive: open-source AI chip design
Additionally, they work with enablement partners to provide a vertically complete solution to simplify evaluation and implementation. Completing the offering are integration partners offering ready-to-use system-on-chip (SoC) products.
As their business is essentially licensing software, they have very high margins. Currently, arm is valued at about 50 billion dollars. I see it entirely feasible that BrainChip will reach a comparable valuation when its design architecture gets adopted. They are currently valued at about 200 million dollars. Where else can you get such a 250x potential?


The Solution
With BrainChip’s processor, called Akida, all computations are performed on the chip. It requires no internet connection at all.
- It operates at a fraction of the 20 watts that our brain requires. So, no cooling or mains power sources are required; a simple battery will do!
- All data collected remains where it was observed/generated. No more privacy worries about hacked cloud servers. The data does not leave the sensor chip.
In Summary
Data processing will remain in constant flux and hardware updates are costly. Even so, we presently experience the confluence of multiple evolutions:
- X86 architecture will be phased out as arm’s processors have proven faster and more energy efficient. Soon, every computer will run on less complex arm-like instruction architecture
- Chronological data from IoT devices, demand forecasting or image processing will be handled by BrainChip's Akida in real time, independent of the cloud
- Governments won’t allow Internet giants to amass data or cross-border transfer of data to central server farms. Regulation will benefit technologies that can function without the need for data transfers
 
  • Like
  • Fire
  • Love
Reactions: 77 users

IloveLamp

Top 20



 
  • Like
  • Thinking
  • Fire
Reactions: 35 users
As some of us, including me, said before getting absolutely bashed in this forum for stating simple facts:
If we're not involved in Samsung's 2024 product roadmap, then we're screwed.
No more lead for us. The biggest company that designs and uses its own chipsets is doing neuromorphic now.
Can't wait for the comments. Is someone gonna write a three-pager hounding and ridiculing me again, or will it just be reported?
Samsung is certainly a big spicy cabbage, but some prefer fruit..

I think you're being a bit dramatic, saying we're screwed if we're not in with them.

Yes, they are a huge player, but they don't control or dominate the world's product markets.

For example..

"Apple has overtaken Samsung as the world's top smartphone seller, ending the Korean tech firm's 12-year run as industry leader. The iPhone took the top spot in 2023 with 234.6m units sold, according to figures from the International Data Corporation (IDC), overtaking Samsung's 226.6m units"
17 Jan 2024


Are you saying that if we got in with Apple, or some other big players, but not Samsung, we may as well pack up and go home?..

Of course no guarantees anywhere, but your arguments don't make sense, in my opinion.

I'm not saying you can't have one..
 
Last edited:
  • Like
  • Fire
Reactions: 23 users
Errrrr.....what :unsure:


PI Cm4 GPS 5G Brainchip Prototype​

Posted 2 weeks ago

Worldwide
I need a PCB prototype for a Raspberry Pi CM4 device with a BrainChip PCI Express board, a Pi HQ camera, 5G and GPS. I want a working prototype with a cable-based camera, a front screen and a back screen: a dashcam with an external cable camera.

See below email.

Hello Mike,

Thank you for your interest in our technology.

As requested, here is some information about our Akida IP.

Below are links to some of our demonstrations. You can find more on YouTube just search for BrainChip.



Our Akida IP is offered as a licensing model plus a per-component royalty.



BrainChip offers a much different approach to AI computation. We lighten the computational load by only computing event data, and also by quantizing the bit count to the lowest possible size while maintaining model accuracy. Since our compute is on-chip (within our IP), utilizing our integrated memory, our latency is very low. Our approach lends itself very well to applications at the edge, for example with the sensor on battery-operated platforms. For instance, Akida IP only draws µW-mW (depending upon application) for inference at the edge. In addition, our technology offers edge learning (independent of the cloud), which in turn offers security of data.
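
As a rough illustration of the two levers mentioned there (my sketch, not BrainChip code): skipping zero activations so that only "events" are computed, and quantizing weights down to a few bits:

```python
import numpy as np

def quantize(weights, bits=4):
    """Uniform symmetric quantization of weights to `bits` bits."""
    levels = 2 ** (bits - 1) - 1
    scale = np.abs(weights).max() / levels
    return np.round(weights / scale).astype(np.int8), scale

def event_based_dot(activations, q_weights, scale):
    """Accumulate only where activations are nonzero (the 'events')."""
    events = np.nonzero(activations)[0]   # indices of nonzero inputs
    acc = sum(activations[i] * q_weights[i] for i in events)
    return acc * scale

acts = np.array([0.0, 0.0, 1.0, 0.0, 2.0, 0.0, 0.0, 0.0])  # sparse input
w = np.random.randn(8)
qw, s = quantize(w, bits=4)
# Only 2 of the 8 multiply-accumulates are actually performed.
print(event_based_dot(acts, qw, s), "vs full precision:", float(acts @ w))
```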



For AI compute, some system-level advantages and features of our IP are as follows:

- Reduced power consumption
- Increased performance
- Reduced system-level cost, BOM cost and recurring costs
- Reduced firmware complexity
- Future-proofing of your design
- Security on the edge
- Learning on the edge, in the field
- An increased feature set and capability
Models are developed via our free MetaTF platform:

https://doc.brainchipinc.com/installation.html

www.Brainchip.com/developer

Overview — Akida Examples documentation (brainchipinc.com)
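
For context, the workflow those docs describe is to train an ordinary Keras model, quantize it, and convert it for Akida. A rough sketch of that flow, assuming the older documented cnn2snn API; the exact function names and signatures may differ in current MetaTF releases, so treat this as illustrative only:

```python
# Illustrative MetaTF flow, assuming the legacy cnn2snn API described in
# the docs linked above; verify names against the current documentation.
from tensorflow import keras
from cnn2snn import quantize, convert  # assumption: legacy MetaTF API

# 1. Start from an ordinary Keras model trained as usual.
model = keras.models.load_model("my_model.h5")  # hypothetical model file

# 2. Quantize weights and activations to low bit widths (e.g. 4 bits).
model_q = quantize(model, weight_quantization=4, activ_quantization=4)

# 3. Convert the quantized model into an Akida-compatible network.
model_akida = convert(model_q)
model_akida.summary()
```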



We also offer development platforms to assist in bringing up your product, to develop models and validate our IP: Akida Enablement Platforms - BrainChip. You can purchase our PCIe dev kits at: Welcome to BrainChip (brainchipinc.com)



Links to some of our demonstrations (all on the edge learning) are below:

Wine Tasting




BrainChip demonstrates Taste Sensing with Akida - YouTube



Edge Based Learning




Keyword Spotting


Visual Wake & Facial Recognition



Smart Automotive In Cabin Experience

Edge Based Learning (High Speed Environment) Racetrack object recognition at the edge

Regression Analysis with Vibration Sensors

BrainChip demonstrates Akida Vibrational Analysis Tactile Sensing – YouTube

Gesture Control

BrainChip + Nviso Emotion Detection Demo

BrainChip Demonstrates Gesture Recognition with Prophesee EV4 Development Camera - YouTube

BrainChip Demonstrates Drone Voice Keyword Spotting - YouTube



TENNs: A New Approach to Streaming and Sequential Data - YouTube



Please let me know if you require any additional information.

We look forward to meeting with you again soon.
 
  • Like
  • Love
  • Thinking
Reactions: 28 users
Haven't bothered signing up for a free trial as Gen 2 is old news, but one thing caught my eye, highlighted below.

Did I read earlier that someone asked about Sean being in, or coming to, Oz for a client? Is that right?

Wonder if it has anything to do with the thought below in the article :unsure:


BrainChip Adds Temporal Networks​

Author: Bryon Moyer

Akida 2, BrainChip’s latest intellectual property (IP) offering, adds time as a component of convolution, allowing activity identification in video streams. It also accelerates the Transformer encoder block in hardware, speeding models employing that block.

BrainChip’s artificial intelligence (AI) processors employ an event-based architecture that responds only to nonzero activations on internal layers, reducing the amount of required computation. It’s a form of neuromorphic computing that the company first implemented in its Akida 1 IP and coprocessor chip. It has now positioned the original chip as a reference chip for sale in low quantities as evaluation for possible IP licensing; no chip is planned for Akida 2.

The second generation brings four changes to the original architecture; in addition to temporal networks and Transformer encoders, it adds the INT8 data type and the ability to handle long-range forward skip connections. Akida 1 quantized aggressively to INT4 and below, but INT8 has become the most common edge inference data type; Akida 2 acknowledges that.
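
A minimal sketch of what "time as a component of convolution" means in general terms (my illustration, not Akida's actual TENN implementation): a causal 1-D convolution over the time axis of per-frame features, so the output at time t mixes the last K frames and can therefore pick up motion or activity:

```python
import numpy as np

def causal_temporal_conv(frames, kernel):
    """Causal 1-D convolution over time.

    frames: (T, C) array of per-frame feature vectors
    kernel: (K,) temporal weights; output[t] mixes frames t-K+1 .. t
    """
    T, C = frames.shape
    K = len(kernel)
    out = np.zeros((T, C))
    for t in range(T):
        for k in range(K):
            if t - k >= 0:            # causal: never look at future frames
                out[t] += kernel[k] * frames[t - k]
    return out

frames = np.random.randn(10, 4)      # 10 video frames, 4 features each
kernel = np.array([0.5, 0.3, 0.2])   # weight recent frames more heavily
print(causal_temporal_conv(frames, kernel).shape)  # -> (10, 4)
```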

Since its Akida 1 launch, the company has signed MegaChips and Renesas as IP customers. The company says many other prospects have evaluated the reference silicon (including Circle8 Clean Technologies for an application to improve recycling sorting); licensing decisions for those companies are pending. Given its cash and revenue position, it must boost sales to better balance its cash burn.

The company faces no event-based IP competitors, and it claims to provide lower-power inference than its standard-network IP competition. But its uniqueness makes its tool and ecosystem development critical to ensuring that its customers can implement networks without having to be aware of the unique underlying technology.
 
  • Like
  • Fire
  • Love
Reactions: 40 users
As some of us, including me, said before getting absolutely bashed in this forum for stating simple facts:
If we're not involved in Samsung's 2024 product roadmap, then we're screwed.
No more lead for us. The biggest company that designs and uses its own chipsets is doing neuromorphic now.
Can't wait for the comments. Is someone gonna write a three-pager hounding and ridiculing me again, or will it just be reported?
It was peaceful here for a while, so why don’t you go back to where you belong

 
  • Haha
  • Like
  • Fire
Reactions: 14 users

Tothemoon24

Top 20
 
  • Like
  • Fire
  • Love
Reactions: 38 users

cosors

👀
As some of us, including me, said before getting absolutely bashed in this forum for stating simple facts:
If we're not involved in Samsung's 2024 product roadmap, then we're screwed.
No more lead for us. The biggest company that designs and uses its own chipsets is doing neuromorphic now.
Can't wait for the comments. Is someone gonna write a three-pager hounding and ridiculing me again, or will it just be reported?

Self-esteem
 
Last edited:
  • Haha
  • Like
Reactions: 16 users

cosors

👀
Just a test. I didn't realise that I could still reply to someone I have on ignore.
 
  • Haha
  • Like
Reactions: 10 users
https://www.cnet.com/tech/mobile/on...-way-of-experiencing-artificial-intelligence/

An interesting 7-minute read. We're definitely amongst the big players.
Probably even powering most of the up-and-coming AI features 😁

On-Device AI Is a Whole New Way of Experiencing Artificial Intelligence​

At MWC 2024, I saw firsthand how AI is fundamentally reshaping current and future devices, from phones to robots.
At Mobile World Congress last week, the show floor was abuzz with AI. It was the same at CES two months earlier: The biggest theme of the biggest consumer tech show was that AI suddenly seemed to be part of every single product. But the hype can make it hard to know what we should be excited about, what we should fear and what we should dismiss as a fad.


"Omnipresent ... but also overwhelming." That's how CCS Insight Chief Analyst Ben Wood described the MWC moment. "For many attendees, I felt it was rapidly reaching levels that risked causing AI fatigue."


But there was a positive side as well. Said Wood: "The most impressive demos were from companies showing the benefits AI could offer rather than just describing a service or a product as being AI-ready."

At last year's MWC, the popular generative AI tool ChatGPT was only around 3 months old, and on-device AI was mostly a twinkle in the eye of the tech companies present. This year, on-device was a reality, and attendees — like me — could experience it on the show floor.

I got to experience several demos featuring AI on devices, and the best of them brought artificial intelligence to life in ways I'd never seen before. In many cases, I could see that products we're already familiar with — from smartphones to cars — are getting a new lease on life thanks to AI, with some offerings using the technology in unique ways to set themselves apart from rivals. In other cases, new types of products, like AI-focused wearables and robots, are emerging that have the potential to displace what we know and love.




Above all, it was clear that on-device AI isn't a technology for tomorrow's world. It's available right here, right now. And it could impact your decision as to what piece of technology you buy next.

The age of AI phones has arrived​

One of my biggest takeaways from MWC was that while all tech companies now have a raft of AI tools at their disposal, most are choosing to deploy them in different ways.

Take smartphones. Samsung has developed Gauss, its own large language model (the tech that underlies AI chatbots), to focus on translation on the Galaxy S24, whereas Honor uses AI to include eye tracking on its newly unveiled Magic 6 Pro — which I got to try out at its booth. Oppo and Xiaomi, meanwhile, both have on-device generative AI that they're applying to phone cameras and photo editing tools.



It goes to show that we're entering a new period of experimentation as tech companies figure out what AI can do, and crucially how it can improve our experience of using their products.

Samsung's Y.J. Kim, an executive vice president at the company and head of its language AI team, told reporters at an MWC roundtable that Samsung thought deeply about what sort of AI tools it wanted to deliver to users that would elevate the Galaxy S24 above the basic smartphone experience we've come to expect. "We have to make sure that customers will see some tangible benefits from their day-to-day use of the product or technologies that we develop," he said.

Conversely, there's also some crossover in AI tools between devices because of the partners these phone-makers share. As the maker of Android, the operating system used by almost all non-Apple phones, Google is experimenting heavily with AI features. These will be available across phones made by Samsung, Xiaomi, Oppo, Honor and a host of others.


Google used its presence at MWC this year to talk about some of its recently introduced AI features, like Circle to Search, a visual search tool that lets you draw a circle around something you see on screen to search for it.

The other, less visible partner that phone-makers have in common is chipmaker Qualcomm, whose chips were in an entire spectrum of devices at MWC this year. Its Snapdragon 8 Gen 3 chip, announced late in 2023, can be found in many of the phones that are now running on-device generative AI.


It's been only a year since Qualcomm first showed a basic demo of what generative AI on a phone might look like. Now phones packing this technology are on sale, said Ziad Asghar, who leads the company's AI product roadmap.

"From our perspective, we are the enablers," said Asghar. "Each and every one of our partners can choose to commercialize with unique experiences that they think are more important for their end consumer."

At MWC, the company launched its AI Hub, which gives developers access to 75 plug-and-play generative AI models that they can pick and choose from to apply to their products. That number will grow, and it means any company making devices with Qualcomm chips will be able to add all sorts of AI features.

As well as deciding which AI features to develop, one of the next big challenges phone-makers will have to tackle is how to get AI onto their cheaper devices. For now AI is primarily reserved for the top-end phones — the Galaxy S24s of the world — but over time this will change. There will be a trickle-down effect where this tech ends up on a wider range of a company's devices.

There will naturally be a difference in quality and speed between what the most expensive and the cheapest devices can do, said Asghar, as is currently the case with a phone's camera tech.

AI is changing how we interact with our devices​

AI enhancements to our phones are all well and good, but already we're seeing artificial intelligence being used in ways that have the power to totally change how we interact with our devices — as well as potentially changing what devices we choose to own.

In addition to enabling companies to bring AI to their existing device lines, Qualcomm's tech is powering concept phones like the T Phone, created by Deutsche Telekom and Brain.AI. Together, these two have tapped Qualcomm's chipset to totally reimagine your phone's interface, creating an appless experience that responds to you based on your needs and the task you're trying to accomplish and generates, on the fly, whatever you see on screen as you go.

In the demo I saw at MWC, AI showed it has the potential to put an end to the days of constant app-swapping as you're trying to make a plan or complete a task. "It really changes the way we interface with devices and becomes a lot more natural," said Asghar.

But, he said, that's only the beginning. He'd like to see the same concept applied to mixed reality glasses. He sees the big benefit of the AI in allowing new inputs through gesture, voice and vision that don't necessarily rely on us tapping on a screen. "Technology is much more interesting when it's not really in your face, but it's solving the problems for you in an almost invisible manner," he said.

His words reminded me of a moment in the MWC keynote presentation when Google DeepMind CEO Demis Hassabis asked an important question. "In five-plus years time, is the phone even really going to be the perfect form factor?" said Hassabis. "There's all sorts of amazing things to be invented."

As we saw at CES with the Rabbit R1 and at MWC with the Humane AI Pin, these things are starting to become a reality. In my demo with the AI Pin — a wearable device with no screen that you interact with through voice and touch — it was clear to me that AI is creating space for experimentation. It's allowing us to ask what may succeed the phone as the dominant piece of technology in our lives.

It's also opening up new possibilities for tech that's been around awhile but for whatever reason hasn't quite struck a chord with consumers and found success outside of niche use cases.

Many of us have now played around with generative AI chatbots such as ChatGPT, and we're increasingly growing familiar with the idea of AI assistants. One company, Integrit from South Korea, brought a robot to the show that demonstrated how we may interact with these services in public settings, such as hotels or stores. Its AI and robotics platform, Stella AI, features a large, pebble-shaped display on a robotic arm that can swivel to address you directly.

Where this differs from previous robots I've encountered in customer service settings, such as the iconic Pepper, is that Stella is integrated with the latest AI models, including OpenAI's GPT-4 and Meta's Llama. This means it's capable of having sophisticated conversations with people in many different languages.

Rather than featuring a humanoid robot face like Pepper does, Stella uses generative AI to present a photorealistic human on its display. It's entirely possible that people will feel more comfortable interacting with a human, even one that isn't real, than a humanoid robot, but it feels very early to know this for sure.

What is clear is that this is just the beginning. This is the first generation of devices to really tap into the power of generative and interactive AI, and the floodgates are now well and truly open.

"I think we'll look back at MWC 2024 as being a foundational year for AI on connected devices," said Wood, the CCS Insight analyst. "All the pieces of the jigsaw are falling into place to enable developers to start innovating around AI to deliver new experiences which will make our interactions with smartphones and PCs more intuitive."

If this is the beginning, I'm intrigued to check back a year from now to see how AI continues to change our devices. Hype aside, there's a lot already happening to be excited about.

Editors' note: CNET is using an AI engine to help create some stories. For more, see this post.
 
  • Like
  • Fire
  • Love
Reactions: 22 users

IloveLamp

Top 20
https://www.cnet.com/tech/mobile/on...-way-of-experiencing-artificial-intelligence/

An interesting 7-minute read. We're definitely amongst the big players.
Probably even powering most of the up-and-coming AI features 😁

On-Device AI Is a Whole New Way of Experiencing Artificial Intelligence

…
Yep, great post @luvlifetravel. 2024 will be our year imo... is everybody ready?


 
  • Like
  • Love
  • Fire
Reactions: 21 users