BRN Discussion Ongoing

Pmel

Regular
[Image attachment]
https://www.linkedin.com/feed/updat...date:(V2,urn:li:activity:7186487347679576064)
 

RobjHunt

Regular
AI could gobble up a quarter of all electricity in the U.S. by 2030 if it doesn’t break its energy addiction, says Arm Holdings exec

Before artificial intelligence can transform society, the technology will first have to learn how to live within its means.

Right now generative AI has an “insatiable demand” for electricity to power the tens of thousands of compute clusters needed to operate large language models like OpenAI’s GPT-4, warned chief marketing officer Ami Badani of chip design firm Arm Holdings.

If generative AI is ever going to be able to run on every mobile device from a laptop and tablet to a smartphone, it will have to be able to scale without overwhelming the electricity grid at the same time.

“We won’t be able to continue the advancements of AI without addressing power,” Badani told the Fortune Brainstorm AI conference in London on Monday. “ChatGPT requires 15 times more energy than a traditional web search.”

Not only are more businesses using generative AI, but the tech industry is in a race to develop new and more powerful tools that will mean compute demand is only going to grow—and power consumption with it, unless something can be done.

The latest breakthrough from OpenAI, the company behind ChatGPT, is Sora. It can create super realistic or stylized clips of video footage up to 60 seconds in length purely based on user text prompts.

The marvel of gen AI comes at a steep cost

“It takes 100,000 AI chips working at full compute capacity and full power consumption in order to train Sora,” Badani said. “That’s a huge amount.”

Data centers, where most AI models are trained, currently account for 2% of global electricity consumption, according to Badani. But with generative AI expected to go mainstream, she predicts it could end up devouring a quarter of all power in the United States in 2030.

The solution to this conundrum is to develop semiconductor chips that are optimized to run on a minimum of energy.

That’s where Arm comes in: Its RISC processor designs currently run on 99% of all smartphones, as opposed to the rival x86 architecture developed by Intel. The latter has been a standard for desktop PCs, but proved too inefficient to run battery-powered handheld devices like smartphones and tablets.

Arm is adopting that same design philosophy for AI.

“If you think about AI, it comes with a cost,” Badani said, “and that cost is unfortunately power.”

Well, I wonder who would be able to help with this :unsure:
 

Bravo

If ARM was an arm, BRN would be its biceps💪!
Thursday 14th of March 2024
Reading time: 4 min


• Valeo’s Driving Assistance Research Director, Antoine Lafay, talks about the latest innovations for autonomous vehicles (AVs) developed by the French automotive supplier.
• He explains how AVs work, the importance of 5G connectivity and his company's role in improving LiDAR technologies.
• Valeo has recently announced a system to enable AVs to park themselves in car parks, as well as a collaboration with motorway operator APRR to enable them to independently navigate tollgates.

Can you explain the basic functioning of autonomous vehicles?

Autonomous vehicles essentially depend on three classes of components: sensors, processors, and on-board software. At Valeo, we have the widest portfolio of sensors on the market, with cameras, radar modules etc. for level three and upwards, which is the level of automation where drivers can hand over complete control to electronic systems. LiDAR sensors play a key role in these systems. They provide an additional mode of perception by scanning the surrounding environment with lasers to generate highly accurate information on potential obstacles, and because they are active sensors that make use of their own energy to illuminate the road ahead, they are fully functional both during the day and at night. The use of LiDAR has made it possible to match critical levels of safety achieved by systems in the aeronautical industry, with equivalent accident rates.
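
As a rough illustration of the time-of-flight principle these active sensors rely on (a purely illustrative sketch, not Valeo's implementation), the range follows directly from how long a laser pulse takes to make the round trip:

```python
# Illustrative only: basic LiDAR time-of-flight ranging.
# range = speed_of_light * round_trip_time / 2 (the pulse travels out and back).
SPEED_OF_LIGHT_M_S = 299_792_458.0

def range_from_time_of_flight(round_trip_seconds: float) -> float:
    """Distance to the reflecting object in metres."""
    return SPEED_OF_LIGHT_M_S * round_trip_seconds / 2.0

# A pulse returning after 200 nanoseconds puts the obstacle roughly 30 m away.
print(f"{range_from_time_of_flight(200e-9):.1f} m")
```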
An end-to-end system that enables autonomous vehicles to take control of all aspects of motorway driving with real-time transmission of dynamic maps of toll plazas

Why is the bandwidth provided by 5G so important for these vehicles?

The 5G network standard brings benefits in terms of quality of service, particularly through network slicing, which guarantees high levels of bandwidth for certain applications. Today’s autonomous vehicles may need to be operated under remote supervision so that human drivers can take control of them under certain circumstances, and they also need to be dynamically updated with, for example, mapping elements. Thus, there are occasions where these vehicles are reliant on high levels of connectivity if they are to function correctly. For example, we are currently working on a collaborative project with the motorway operator APRR (Autoroutes Paris-Rhin-Rhône) on a system to enable AVs to successfully navigate tollgates, which involves the real-time transmission of dynamic maps of toll plazas that take into account hazards like closed lanes and missing road markings etc. The goal of this system is to enable these vehicles to be fully autonomous from the moment they enter the motorway system until they leave it. As it stands, current systems hand back control to drivers whenever they encounter a toll plaza.

You have also developed your own parking assistance system…

We’re working on an autonomous parking system with BMW. It’s a level four system, which means that it can be given total autonomy in a given context, in this case in a car park. It’s an on-board system that allows you to leave your car at the entrance to an airport or supermarket car park, and the car will park itself and later come and pick you up when you need it. And it can manage these tasks independently. Level four systems provide vehicles with a degree of autonomy that fully enables them to cope with specific conditions for which they have been designed. These may be geographical or atmospheric: for example, some systems are designed to work only in motorway traffic jams when the weather is fine etc.

What would you say to consumers who feel that car manufacturers are slow to adopt these new technologies?

Today, we have a sufficient legal framework to regulate the private and public use of autonomous vehicles, that is to say with level-three and level-four autonomy. At the same time, we are seeing the rapid adoption of driving aids: emergency braking assistance, speed limiting systems etc. These technologies, which have resulted from research and innovation in the field of autonomous vehicles, do not replace drivers, but in the event of driver failure or distraction, they do have the capacity to implement safeguards and to take control of vehicles if necessary. In short, today’s drivers are already benefiting from investment in autonomous vehicles that has produced systems that are now in widespread use. And the large-scale deployment of these systems is also reducing the cost of the technological building blocks for self-driving cars so we can look forward to lower prices for level three and four autonomous vehicles in the future.

Can you give us an example?

The first shuttles to make use of LiDAR technology were equipped with laboratory instruments that cost tens of thousands of euros. At Valeo, we have now developed automotive LiDAR sensors that are robust enough to run for more than 100,000 km under normal vibration and temperature conditions at prices that will pave the way for their widespread use. And these technologies are increasingly being deployed in robot taxis and autonomous shuttles, which in the long term will make them more affordable.

Five levels:

The five levels of vehicle autonomy are:
Driving assistance (Level 1). Example: adaptive cruise control.
Partial autonomy (Level 2). Example: parking assistance.
Conditional autonomy (Level 3). Example: driving in traffic jams.
High autonomy (Level 4). Example: autonomous parking.
Full autonomy (Level 5).
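
Put another way (an illustrative mapping only, using the examples from the list above rather than the formal SAE definitions):

```python
# Illustrative lookup of the five autonomy levels and the examples listed above.
from enum import IntEnum

class AutonomyLevel(IntEnum):
    DRIVING_ASSISTANCE = 1  # e.g. adaptive cruise control
    PARTIAL = 2             # e.g. parking assistance
    CONDITIONAL = 3         # e.g. driving in traffic jams
    HIGH = 4                # e.g. autonomous parking
    FULL = 5                # full autonomy in all conditions

print(AutonomyLevel.HIGH.name, int(AutonomyLevel.HIGH))  # HIGH 4
```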

 

Bravo

If ARM was an arm, BRN would be its biceps💪!
[GIF attachment]
 

mkg6R

Member
When’s this stock gonna move?!?
 

HopalongPetrovski

I'm Spartacus!
When’s this stock gonna move?!?
Market mainly just treading water today.
Macro's uncertain all round.
Those who "have it already" or "doing good enough" content to see how it all shakes out and a bit risk averse.
Those still trying to "make their stake"running scarred and preserving capital waiting for a clear direction to be set by bigger players.
 

Bravo

If ARM was an arm, BRN would be its biceps💪!
https://www.forbes.com/

iOS 18—Apple Issues New Blow To Google With Bold AI Privacy Decision

Kate O'Flaherty
Senior Contributor
Cybersecurity and privacy journalist


Apr 15, 2024, 11:05am EDT

Apple has just dealt a new blow to arch rival Google after making an AI decision that will appeal to all iPhone users. According to Bloomberg’s Mark Gurman, Apple’s iOS 18 AI capabilities will function entirely on the device—your iPhone—so there is no need for cloud processing.

The iOS 18 AI move is a huge win if you care about iPhone privacy, but it isn’t surprising, given that Apple is known for its strong focus in the area. It also sends a strong message to Apple’s biggest rival Google’s Android that the iPhone maker will do everything it can to win in the AI battlefield as competition ramps up.


[Image: Apple Unveils iPhone 15 And Other New Products (Getty Images)]
Apple’s iOS 18 is due to launch at its Worldwide Developers Conference in June, along with powerful new iPhone AI features including enhanced Siri and auto summarising. The iPhone maker is also set to reveal its AI strategy in more detail.


“As the world awaits Apple’s big AI unveiling on June 10, it looks like the initial wave of features will work entirely on-device,” Gurman wrote. “That means there’s no cloud processing component to the company’s large language model (LLM), the software that powers the new capabilities.”

This is the model that will power the auto summarising features that are key to Apple’s AI strategy and its hopes of standing out when it unveils iOS 18.

Why On Device Is Better For iPhone Privacy

On-device processing is far superior to cloud, simply because data doesn’t leave the iPhone. “Processing AI commands on-device means users have more comfort in the knowledge that their input requests are less likely to be monitored and analysed by Apple and third parties,” says Jake Moore, global cybersecurity advisor at ESET.

However, Apple’s AI capabilities in iOS 18 and beyond will require a huge amount of data processing power. The iPhone maker has been investing in more hardware able to host AI, and the iPhone 16 will apparently come with an enhanced neural engine. Moore says the next generation iPhones are “more powerful than ever” and “clearly able to handle these large requests.”



And when AI requests are generated in the cloud there is much more potential of data collection and misuse by those owning the large language models, Moore says. “Apple is clearly thinking that as all devices increase in power over time, AI on device is more likely to be the norm, especially for those who are privacy aware.”

Apple Vs Google—The AI Privacy Battlefield

The AI battlefield is ramping up and it’s clear Apple sees privacy as an area that can help it to win as iOS 18 is unveiled in June. And Apple’s iOS 18 AI move is a clear blow to Google and Android devices in general.

Yet running LLMs similar to ChatGPT without additional cloud support will be a challenge, says Android Authority, pointing out that “some of Samsung’s and Google’s most sophisticated AI features still require the power of cloud servers.”

Samsung’s Galaxy AI includes some offline capabilities in a “hybrid AI” approach, while Google’s Gemini Nano AI model is for on-device use, according to Android Authority.

The on-device-only approach will certainly make Apple stand out, if the iPhone maker can pull this off in iOS 18. It helps that Apple owns the hardware, software and platform, of course, but at least some of the features are likely to be limited to the iPhone 16.

The next 12 months are an important time for Apple, after the iPhone maker was trumped by Samsung as the biggest smartphone maker in the world. Apple's smartphone shipments dropped about 10% in the first quarter of 2024, as Samsung shot to 20.8% market share, research from analyst IDC shows.

It is against this uncertain backdrop that Apple’s iOS 18 AI features will need to help the iPhone maker stand out—and the firm must be confident its devices can stand the pressure of powering AI LLMs. However, Apple can always change its mind, or give users the choice to give away a little of their privacy for extra AI functionality in the future, beyond iOS 18.





 

mcm

Regular
https://www.forbes.com/

iOS 18—Apple Issues New Blow To Google With Bold AI Privacy Decision
So is there a chance Apple is utilising Akida?
 

Shadow59

Regular
So is there a chance Apple is utilising Akida?
Say your prayers and send a new pair of Nikes to Bravo!
 

Diogenese

Top 20
https://www.forbes.com/

iOS 18—Apple Issues New Blow To Google With Bold AI Privacy Decision

All that glitters ...



Exclusive: Apple acquires Xnor.ai, edge AI spin-out from Paul Allen’s AI2, for price in $200M range

BY ALAN BOYLE, TAYLOR SOPER & TODD BISHOP on January 15, 2020

Apple buys Xnor.ai, an edge-centric AI2 spin-out, for price in $200M range (geekwire.com)


Apple has acquired Xnor.ai, a Seattle startup specializing in low-power, edge-based artificial intelligence tools, sources with knowledge of the deal told GeekWire.

The acquisition echoes Apple’s high-profile purchase of Seattle AI startup Turi in 2016. Speaking on condition of anonymity, sources said Apple paid an amount similar to what was paid for Turi, in the range of $200 million.

Xnor.ai didn’t immediately respond to our inquiries, while Apple emailed us its standard response on questions about acquisitions: “Apple buys smaller technology companies from time to time and we generally do not discuss our purpose or plans.” (The company sent the exact same response when we broke the Turi story.)



The arrangement suggests that Xnor’s AI-enabled image recognition tools could well become standard features in future iPhones and webcams.

Xnor.ai’s acquisition marks a big win for the Allen Institute for Artificial Intelligence, or AI2, created by the late Microsoft co-founder Paul Allen to boost AI research. It was the second spin-out from AI2’s startup incubator, following Kitt.ai, which was acquired by the Chinese search engine powerhouse Baidu in 2017 for an undisclosed sum.

The deal is a big win as well for the startup’s early investors, including Seattle’s Madrona Venture Group; and for the University of Washington, which serves as a major source of Xnor.ai’s talent pool.

The three-year-old startup’s secret sauce has to do with AI on the edge — machine learning and image recognition tools that can be executed on low-power devices rather than relying on the cloud. “We’ve been able to scale AI out of the cloud to every device out there,” co-founder Ali Farhadi, who is the venture’s CXO (chief Xnor officer) as well as a UW professor, told GeekWire in 2018.


This Apple patent is for compressing AI models. One of the inventors is ex-Xnor.

US11651192B2 Compressed convolutional neural network models 20190212 Rastegari nee Xnor

Systems and processes for training and compressing a convolutional neural network model include the use of quantization and layer fusion. Quantized training data is passed through a convolutional layer of a neural network model to generate convolutional results during a first iteration of training the neural network model. The convolutional results are passed through a batch normalization layer of the neural network model to update normalization parameters of the batch normalization layer. The convolutional layer is fused with the batch normalization layer to generate a first fused layer and the fused parameters of the fused layer are quantized. The quantized training data is passed through the fused layer using the quantized fused parameters to generate output data, which may be quantized for a subsequent layer in the training iteration.

[0018] A convolutional neural network (CNN) model may be designed as a deep learning tool capable of complex tasks such as image classification and natural language processing. CNN models typically receive input data in a floating point number format and perform floating point operations on the data as the data progresses through different layers of the CNN model. Floating point operations are relatively inefficient with respect to power consumed, memory usage and processor usage. These inefficiencies limit the computing platforms on which CNN models can be deployed. For example, field-programmable gate arrays (FPGA) may not include dedicated floating point modules for performing floating point operations and may have limited memory bandwidth that would be inefficient working with 32-bit floating point numbers.

[0019] As described in further detail below, the subject technology includes systems and processes for building a compressed CNN model suitable for deployment on different types of computing platforms having different processing, power and memory capabilities.
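
To make that a bit more concrete, here is a minimal NumPy sketch of the convolution/batch-norm fusion and weight quantization idea the abstract describes (a sketch of the general technique only, not Apple's or Xnor's actual implementation; the function names and the symmetric int8 scheme are my own assumptions):

```python
# Illustrative sketch: fold batch-norm into the preceding convolution, then
# quantize the fused weights to int8 (the general idea behind the patent text).
import numpy as np

def fuse_conv_bn(w, b, gamma, beta, mean, var, eps=1e-5):
    """Fold batch-norm parameters into the conv weights/bias (per output channel)."""
    scale = gamma / np.sqrt(var + eps)
    w_fused = w * scale[:, None, None, None]   # w shape: (out_ch, in_ch, kh, kw)
    b_fused = (b - mean) * scale + beta
    return w_fused, b_fused

def quantize_int8(x):
    """Symmetric linear quantization of a tensor to int8 with a single scale."""
    max_abs = np.abs(x).max()
    scale = max_abs / 127.0 if max_abs > 0 else 1.0
    q = np.clip(np.round(x / scale), -127, 127).astype(np.int8)
    return q, scale

# Toy example: an 8-output-channel 3x3 convolution over 3 input channels.
rng = np.random.default_rng(0)
w = rng.standard_normal((8, 3, 3, 3)).astype(np.float32)
b = np.zeros(8, dtype=np.float32)
gamma, beta = np.ones(8, np.float32), np.zeros(8, np.float32)
mean = rng.standard_normal(8).astype(np.float32)
var = np.abs(rng.standard_normal(8)).astype(np.float32)

w_fused, b_fused = fuse_conv_bn(w, b, gamma, beta, mean, var)
w_q, w_scale = quantize_int8(w_fused)
print(w_q.dtype, round(float(w_scale), 4))
```

At inference time the network then runs one integer-friendly layer instead of a separate convolution and normalization, which is the sort of saving paragraph [0018] is pointing at.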


Of course, that does not absolutely preclude the possibility that Apple are running the NN model on a more efficient SoC.
 

Labsy

Regular
So is there a chance Apple is utilising Akida?
My take is I doubt it, but if so we are all millionaires and 1 or 2 billionaires...
I suspect at the very least that the likes of Samsung and Google et al. will be poring over available research which focuses on running SLMs (smaller versions of LLMs) on device so they can compete... Enter our CTO, who is heavily focused on this...
 

7für7

Top 20
I think Bravo is just sharing general information, as are others here. I wouldn’t take everything as “BrainChip related”; it’s just showing the growing AI market and the possibility that we can grow with the market!
 

Lovin this new Chippa Avatar ……

[Image attachment]
 

7für7

Top 20

Lovin this new Chippa Avatar ……

It’s not new... it’s a facelift! We had it at the beginning already!

This one comes with AWD, adaptive sensor control, matrix LED, a hybrid engine with a 500 km pure electric range, and an EMBUGS system.
 
All that glitters ...

Exclusive: Apple acquires Xnor.ai, edge AI spin-out from Paul Allen’s AI2, for price in $200M range
All expectations of dealing with them so far have resulted in the bitter taste of a cooking Apple...
 

cosors

👀
Why would the German trinity of automotive suppliers each have established their own AI Labs if they only assembled components? 🧐

[Image attachments]


They spend a lot of moolah on R&D in general and also offer all sorts of engineering services:




Since AI has to be everywhere at the moment, no company can avoid the topic; it has to be worth a try, there is subsidy money, and there have to be conditions for investment in future tech, since the old tech is out of the question. Just yesterday I saw a global analysis of AI and where it is taking place, in addition to research. I couldn't see Germany at a quick glance, but maybe I'll have another look to see if I can find it again. What interests me are products that can be bought from companies like those in Germany. Just because companies are running a research lab here and dropping slice after slice of white paper on projects, because they have obviously or perhaps lost touch, doesn't convince me that Germany plays a really important role in this topic. So far, MB's hub is the one I take seriously.
I'm one of the sceptics, but I'm happy to be proven wrong. I don't want to get into debate 'loops', though, as I'm just an observer from the sidelines without serious insight, and I'm not a politician either.
:)
 
Our CTO thinks this is great


[Image attachments]
Impressive first look and demonstration of the new Atlas.



All these new humanoid robots coming out will need efficient brains.

AKIDA technology can deliver.
 