BRN Discussion Ongoing

Qualcomm is talking a big game for CES... generative AI with no connection to the cloud. Either they're with us, or they've caught up quickly. I'm keeping faith in the ecosystem we are developing, but we all have hard-earned cash invested and are naturally getting nervous as other companies claim to do what Akida can... despite our "3 year lead." Bring on these updates on partnerships prior to CES!
Like I've said, either we have a lead and prominent companies are knocking down our door, or we don't. It's all about having an edge on your competitors. Has Sean come out and stated we're miles in front?
 
  • Fire
Reactions: 1 users

Makeme 2020

Regular
I've noticed the fervour and demands for validation both here and over on the crapper have kicked up a notch.
Thrashing themselves into a right old lather. Get it now deadbeat? 🤣
Why don't you reveal yourself to us here in the Tardie? 🤣
Afraid if you show yourself the Messiah will smite you? 🤣
Or is it just too cosy here in the shadows, cutting and pasting tidbits for your would be followers over there in hell. 🤣
Don't worry snookums, I won't mention you again.
None of these Fear Uncertainty Doubt merchants are worth the oxygen or the key strokes.
That is the troll's game, engagement.
Either for cents per view or the weird little thrill as they rub themselves thinking about how they 'got ya' 🤣

Grow up FFS. No one is holding a gun to anyone's head here.
Each of us, as adult individuals, has three choices.
BUY, SELL, HOLD.
Make your choice and reap the rewards/ take the consequences.

But FFS, stop all the constant whining and whinging.
No one cares.
Not Sean or Antonio or PVDM or me. 🤣
Are you talking to yourself?
 
Last edited:
  • Fire
Reactions: 1 users

Iseki

Regular
"I've noticed the fervour and demands for validation both here and over on the crapper have kicked up a notch."

I think it's fair to ask for validation that we can put Akida into either an Arm chip or a SiFive chip, and that that combination is cheaper than the base Arm or SiFive chip with its respective vector processor.

Otherwise, aren't we in trouble?
 
  • Like
  • Thinking
Reactions: 4 users
Modern neuromorphic processor architectures...PLURAL...Hmmm?????



This Tiny Sensor Could Be in Your Next Headset​

Prophesee
PROPHESEE Event-Based Metavision GenX320 Bare Die 2.jpg

Neuromorphic computing company develops event-based vision sensor for edge AI apps.
Spencer Chin | Oct 16, 2023


As edge-based artificial intelligence (AI) applications become more common, there will be a greater need for sensors that can meet the power and environmental needs of edge hardware. Prophesee SA, which supplies advanced neuromorphic vision systems, has introduced an event-based vision sensor for integration into ultra-low-power edge AI vision devices. The GenX320 Metavision sensor, which uses a tiny 3x4mm die, leverages the company’s technology platform into growing intelligent edge market segments, including AR/VR headsets, security and monitoring/detection systems, touchless displays, eye tracking features, and always-on intelligent IoT devices.

According to Luca Verre, CEO and co-founder of Prophesee, the concept of event-based vision has been researched for years, but developing a viable commercial implementation in a sensor-like device has only happened relatively recently. “Prophesee has used a combination of expertise and innovative developments around neuromorphic computing, VLSI design, AI algorithm development, and CMOS image sensing,” said Verre in an e-mail interview with Design News. “Together, those skills and advancements, along with critical partnerships with companies like Sony, Intel, Bosch, Xiaomi, Qualcomm, 🤔and others 🤔have enabled us to optimize a design for the performance, power, size, and cost requirements of various markets.”

Prophesee’s vision sensor is a 320x320, 6.3μm pixel BSI stacked event-based vision sensor that offers a tiny 1/5-in. optical format. Verre said, “The explicit goal was to improve integrability and usability in embedded at-the-edge vision systems, which in addition to size and power improvements, means the design must address the challenge of event-based vision’s unconventional data format, nonconstant data rates, and non-standard interfaces to make it more usable for a wider range of applications. We have done that with multiple integrated event data pre-processing, filtering, and formatting functions to minimize external processing overhead.”

Verre added, “In addition, MIPI or CPI data output interfaces offer low-latency connectivity to embedded processing platforms, including low-power microcontrollers and modern neuromorphic processor architectures.”

Low-Power Operation

According to Verre, the GenX320 sensor has been optimized for low-power operation, featuring a hierarchy of power modes and application-specific modes of operation. On-chip power management further improves sensor flexibility and integrability. To meet aggressive size and cost requirements, the chip is fabricated using a CMOS stacked process with pixel-level Cu-Cu bonding interconnects achieving a 6.3μm pixel-pitch.
The sensor performs low-latency, µs-resolution timestamping of events with flexible data formatting. On-chip intelligent power management modes reduce power consumption to as low as 36 µW and enable smart wake-on-events. Deep sleep and standby modes are also featured.

According to Prophesee, the sensor is designed to be easily integrated with standard SoCs with multiple combined event data pre-processing, filtering, and formatting functions to minimize external processing overhead. MIPI or CPI data output interfaces offer low-latency connectivity to embedded processing platforms, including low-power microcontrollers and modern neuromorphic processor architectures.

Prophesee’s Verre expects the sensor to find applications in AR/VR headsets. “We are solving an important issue in our ability to efficiently (i.e. low power/low heat) support foveated rendering in eye tracking for a more realistic, immersive experience. Meta has discussed publicly the use of event-based vision technology, and we are actively involved with our partner Zinn Labs in this area. XPERI has already developed a driver monitor system (DMS) proof of concept based on our previous generation sensor for gaze monitoring and we are working with them on a next-gen solution using GenX320 for both automotive and other potential uses, including micro expression monitoring. The market for gesture and motion detection is very large, and our partner Ultraleap has demonstrated a working prototype of a touch-free display using our solution.”

The sensor incorporates an on-chip histogram output compatible with multiple AI accelerators. The sensor is also natively compatible with Prophesee Metavision Intelligence, an open-source event-based vision software suite that is used by a community of over 10,000 users.
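For anyone wondering what a "histogram output compatible with multiple AI accelerators" means in practice, here's a rough Python sketch of turning a stream of events into a frame-like count tensor that a conventional accelerator could consume. To be clear: the function name and event format below are my own invention for illustration, not Prophesee's actual interface.

```python
import numpy as np

def events_to_histogram(events, width=320, height=320):
    """Accumulate (x, y, timestamp, polarity) events into per-polarity
    count histograms: a dense, frame-like tensor that downstream
    accelerators can process. Illustrative only, not the GenX320 format."""
    hist = np.zeros((2, height, width), dtype=np.uint16)
    for x, y, _t, p in events:
        hist[p, y, x] += 1  # count events per pixel, split by ON/OFF polarity
    return hist

# Toy event stream: two ON events at (10, 20), one OFF event at (5, 5)
events = [(10, 20, 1000, 1), (10, 20, 1010, 1), (5, 5, 1005, 0)]
h = events_to_histogram(events)
print(h[1, 20, 10])  # 2
print(h[0, 5, 5])    # 1
```

The point is simply that event data is sparse and asynchronous, so on-chip pre-formatting into a dense histogram is what makes it digestible for standard SoCs.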

Prophesee will support the GenX320 with a complete range of development tools for easy exploration and optimization, including a comprehensive Evaluation Kit housing a chip-on-board (COB) GenX320 module, or a compact optical flex module. In addition, Prophesee will offer a range of adapter kits that enable seamless connectivity to a large range of embedded platforms, such as an STM32 MCU, speeding time-to-market.

Spencer Chin is a Senior Editor for Design News covering the electronics beat. He has many years of experience covering developments in components, semiconductors, subsystems, power, and other facets of electronics from both a business/supply-chain and technology perspective. He can be reached at Spencer.Chin@informa.com.

Could be interesting at CES

 
  • Like
  • Fire
Reactions: 10 users

wilzy123

Founding Member
  • Haha
Reactions: 7 users
  • Like
  • Fire
Reactions: 11 users

Terroni2105

Founding Member
I haven’t seen this posted so apologies if it has and I missed it.

Direct from the Arm website. It is an article by Stephen Ozoigbo, Senior Director, Ecosystem Development, Education and Research, Arm.
Written on 20th December.

It is about Arm staff and ambassadors travelling across Africa, highlighting the range of AI-based developer experiences running on Arm.

“The additional demos were a range [of] hardware, including the Arduino Pro and BrainChip’s Akida, that highlighted how Arm IP can be implemented across embedded systems that utilize AI workloads. As compute power increases, developers can leverage AI workloads for applications that are targeting the smallest, most power and cost-constrained embedded systems, all built on Arm.”


https://newsroom.arm.com/ai-developer-experiences-africa

Happy New Year Chippers :)
 
  • Like
  • Fire
  • Love
Reactions: 65 users

Tothemoon24

Top 20
IMG_8076.jpeg


The internet has changed every aspect of our lives from communication, shopping, and working. Now, for reasons of latency, privacy, and cost-efficiency, the “internet of things” has been born as the internet has expanded to the network edge.

Now, with artificial intelligence, everything on the internet is easier, more personalized, and more intelligent. However, AI is currently confined to the cloud due to the large servers and high compute capacity it needs. As a result, companies like Hailo are driven by latency, privacy, and cost efficiency to develop technologies that enable AI on the edge.

Undoubtedly, the next big thing is generative AI. Generative AI presents enormous potential across industries. It can be used to streamline work and increase the efficiency of various creators — lawyers, content writers, graphic designers, musicians, and more. It can help discover new therapeutic drugs or aid in medical procedures. Generative AI can improve industrial automation, develop new software code, and enhance transportation security through the automated synthesis of video, audio, imagery, and more.

However, generative AI as it exists today is limited by the technology that enables it. That’s because generative AI happens in the cloud — large data centers of costly, energy-consuming computer processors far removed from actual users. When someone issues a prompt to a generative AI tool like ChatGPT or some new AI-based videoconferencing solution, the request is transmitted via the internet to the cloud, where it’s processed by servers before the results are returned over the network. Data centers are major energy consumers, and as AI becomes more popular, global energy consumption will rapidly increase. This is a growing concern for companies trying to balance between the need to offer innovative solutions to the requirement to reduce operating costs and environmental impact.

As companies develop new applications for generative AI and deploy them on different types of devices — video cameras and security systems, industrial and personal robots, laptops and even cars — the cloud is a bottleneck in terms of bandwidth, cost, safety, and connectivity.

And for applications like driver assist, personal computer software, videoconferencing and security, constantly moving data over a network can be a privacy risk.

The solution is to enable these devices to process generative AI at the edge. In fact, edge-based generative AI stands to benefit many emerging applications.

Generative AI on the rise

Consider that in June, Mercedes-Benz said it would introduce ChatGPT to its cars. In a ChatGPT-enhanced Mercedes, for example, a driver could ask the car — hands free — for a dinner recipe based on ingredients they already have at home. That is, if the car is connected to the internet. In a parking garage or remote location, all bets are off.

In the last couple of years, videoconferencing has become second nature to most of us. Already, software companies are integrating forms of AI into videoconferencing solutions. Maybe it’s to optimize audio and video quality on the fly, or to “place” people in the same virtual space. Now, generative AI-powered videoconferences can automatically create meeting minutes or pull in relevant information from company sources in real-time as different topics are discussed.


However, if a smart car, videoconferencing system, or any other edge device can’t reach back to the cloud, then the generative AI experience can’t happen. But what if they didn’t have to? It sounds like a daunting task considering the enormous processing of cloud AI, but it is now becoming possible.

Generative AI at the edge

Already, there are generative AI tools, for example, that can automatically create rich, engaging PowerPoint presentations. But the user needs the system to work from anywhere, even without an internet connection.

Similarly, we’re already seeing a new class of generative AI-based “co-pilot” assistants that will fundamentally change how we interact with our computing devices by automating many routine tasks, like creating reports or visualizing data. Imagine flipping open a laptop, the laptop recognizing you through its camera, then automatically generating a course of action for the day, week, or month based on your most used tools, like Outlook, Teams, Slack, Trello, etc. But to maintain data privacy and a good user experience, you must have the option of running generative AI locally.

In addition to meeting the challenges of unreliable connections and data privacy, edge AI can help reduce bandwidth demands and enhance application performance. For instance, if a generative AI application is creating data-rich content, like a virtual conference space, via the cloud, the process could lag depending on available (and costly) bandwidth. And certain types of generative AI applications, like security, robotics, or healthcare, require high-performance, low-latency responses that cloud connections can’t handle.

In video security, the ability to re-identify people as they move among many cameras — some placed where networks can’t reach — requires data models and AI processing in the actual cameras. In this case, generative AI can be applied to automated descriptions of what the cameras see through simple queries like, “Find the 8-year-old child with the red T shirt and baseball cap.”

That’s generative AI at the edge.

Developments in edge AI

Through the adoption of a new class of AI processors and the development of leaner, more efficient, though no-less-powerful generative AI data models, edge devices can be designed to operate intelligently where cloud connectivity is impossible or undesirable.

Of course, cloud processing will remain a critical component of generative AI. For example, training AI models will remain in the cloud. But the act of applying user inputs to those models, called inferencing, can — and in many cases should — happen at the edge.
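The cloud-training / edge-inferencing split described above can be sketched in a few lines. Everything here (the model callable, the cloud client) is hypothetical, just to show the shape of on-device inference with an optional cloud fallback:

```python
# Sketch of "train in the cloud, infer at the edge": the device runs a
# locally stored model and only reaches for a remote endpoint if the
# local path fails and a connection is available. All names invented.
def classify(sample, local_model, cloud_client=None):
    try:
        return local_model(sample)           # on-device inferencing
    except RuntimeError:
        if cloud_client is None:
            raise                            # offline and no local answer
        return cloud_client.predict(sample)  # optional cloud fallback

# Stand-in for a small quantized edge model
tiny_model = lambda s: "cat" if sum(s) > 0 else "dog"
print(classify([1, 2, 3], tiny_model))  # cat
```

The design point the article is making is exactly this asymmetry: the expensive part (training) stays in the data center, while the latency- and privacy-sensitive part (inference) runs locally.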

The industry is already developing leaner, smaller, more efficient AI models that can be loaded onto edge devices. Companies like Hailo manufacture AI processors purpose-designed to perform neural network processing. Such neural-network processors not only handle AI models incredibly rapidly, but they also do so with less power, making them energy efficient and apt to a variety of edge devices, from smartphones to cameras.

Utilizing generative AI at the edge enables effective load-balancing of growing workloads, allows applications to scale more stably, relieves cloud data centers of costly processing, and helps reduce environmental impact. Generative AI is on the brink of revolutionizing computing once more. In the future, your laptop’s LLM may auto-update the same way your OS does today — and function in much the same way. However, in order to get there, generative AI processing will need to be enabled at the network’s edge. The outcome promises to be greater performance, energy efficiency, security and privacy. All of which leads to AI applications that reshape the world just as significantly as generative AI itself.
 
  • Like
  • Fire
Reactions: 16 users

IloveLamp

Top 20
  • Haha
  • Like
Reactions: 16 users

Kachoo

Regular
Why should BRN have problems with finance if customers sign contracts with brainchip? If customers sign contracts, they do it because they see benefits I guess. Our western capitalistic economy works with depts otherwise it would collapse. So, if everyone would worry about the financial situation of other companies, there would be no business at all… just my opinion
When we do multi year sale tenders there is an audit involved and our customer examines our financial health to see if we are in a good financial position to provide the services and products we say.

It protects them from us going broke and them losing our service and be out money or products. So its kinda a standard practice.
 
  • Like
  • Love
  • Fire
Reactions: 24 users

IloveLamp

Top 20
My god these threads tonight

bait.gif
double-chin-fat.gif
source-1.gif
tenor-7.gif
 
  • Haha
  • Like
Reactions: 14 users

IloveLamp

Top 20
  • Like
  • Thinking
Reactions: 8 users

manny100

Regular
I agree about connectivity. When you embark on a long regional car trip, the only thing you remember about the drive is the times you lost internet connection. It's damned annoying!!!
 
  • Like
  • Fire
  • Haha
Reactions: 7 users
Interesting that Rob would like a post about Renesas's DRP-AI, which trumpets MACs and is the "long way around" to doing AI tasks, and really shouldn't be called artificial intelligence at all anyway.

At least that's my limited understanding of that technology (Hey Diogenese bags it..).

Although I don't think Rob likes posts willy-nilly; many probably have more to do with networking and building relationships than anything else.

Which hopefully leads to something down the track.
 
  • Like
Reactions: 12 users

tjcov87

Member
Everyone talking about "revenue" and their corresponding disappointment with the lack of it clearly doesn't understand cash flow, or that there is a cost associated with generating revenue. Relax, trust your research and due diligence, and most of all understand your investment.
 
  • Like
  • Love
  • Fire
Reactions: 30 users

Damo4

Regular
  • Haha
Reactions: 7 users

Damo4

Regular

Man f**k this forum sometimes, this is a massive find and I nearly missed it with all of the current bedwetting.
Amazing to see that the top two companies chosen to be modelled for AI are Arduino and BrainChip.
Another great validation from Arm, something that demonstrates they are actively engaging with us and that we aren't just clinging to their name for dear life.
 
  • Like
  • Love
  • Fire
Reactions: 49 users

Frangipani

Regular
Happy New Year 2024!

And hey @CHIPS - you were actually a bit too early with your Chinese New Year greetings and animated Lion Dance GIF, as the 2024 Lunar New Year is not celebrated until Feb 10, marking the start of the Year of the Dragon 🐲, the most auspicious of the twelve Chinese zodiac animals.

That being said, the Japanese already welcomed the Year of the Dragon on January 1st: the Japanese zodiac, while historically based on its Chinese counterpart, no longer follows the lunar calendar, which was abandoned in 1872, and instead aligns its cycle to the Gregorian solar year. Thus the Japanese New Year is celebrated on Jan 1, as in the Western world. As you will probably all have seen on the news, this year's holiday celebrations sadly turned somber when the Year of the Dragon, which is supposed to bring luck and prosperity, tragically began with two disasters: the 7.6-magnitude earthquake that hit a peninsula in central Japan, and a large passenger plane colliding with a coast guard aircraft and catching fire on a runway at Tokyo's Haneda Airport, killing five crew members of the coastguard plane, which was carrying earthquake relief supplies.

A Japanese electronics company plagued by financial woes that must surely be hoping for an auspicious and prosperous Year of the Dragon is Sharp Corporation (partly-owned by Taiwanese electronics contract manufacturer Foxconn since 2016).
Could Sharp actually be one of the companies utilising Akida through Renesas?

The Osaka-based company that celebrates its 111th anniversary this year has a history with Renesas, including a joint venture in 2008, when the display driver IC business Renesas SP Drivers was established (see below), which was later acquired by touch sensor technology company Synaptics - that had beaten Apple to it - in 2014.

(Fun fact: Synaptics is one of about 20 companies founded by Caltech Professor Emeritus Carver Mead, who became known as the father of neuromorphic computing due to his seminal work in the late 1980s and also popularised the term “Moore’s Law”.)

16141850-166A-4834-9C56-55FA51C58F16.jpeg


Have a look at this recent Instagram post by Sharp:

53A6AC43-D306-4D59-BDAE-01CE821EC3B3.jpeg


Not only does their slogan “Experience the future of the five senses with Sharp at CES 2024!” have an Akidaish ring to it, some of the cutting-edge technologies they will showcase in Las Vegas (which were unveiled during SHARP Tech Day in Tokyo in November), do, too, although the actual term neuromorphic is never used: an AI Avatar acting as a virtual concierge and tour guide equipped with CE-LLM edge AI technology, an AI Olfactory Sensor (“this sensor imitates the sense of smell of living things”) presented in the form of a wine sensor (!), non-contact vital signs sensors and the world's smallest vital signs pulse wave sensors for wearable devices, groundbreaking XR camera technology (“A polymer lens camera that will be the “eye” of the AI world, which is capable of high-speed focus adjustment based on the same mechanism as the human eye”), a driver monitoring camera, high performance-low power consumption gadgets, a next-generation solar cell module for use in outer space…

Could we by any chance be involved? 🤔

Well, even in case we are, the Sharp booth at CES 2024 may not be where we shareholders will find out. As has been said numerous times before: Many companies utilising our tech one day will not promote their products as having “Akida Inside”, as much as we wish they would, especially if they don’t deal with Brainchip directly but through IP licensees such as Renesas or Megachips. We may indeed only find out by watching the financials…





Sharp to Take Part in CES 2024, Major US Tech Event
Introducing a wide range of innovative technologies that will drive sustained, medium- to long-term growth

231221-a.jpg

Sharp’s booth (image)

Sharp Corporation will participate in CES 2024 in Las Vegas, Nevada, United States, to be held from January 9 to 12 (Tuesday to Friday), 2024. CES is one of the largest and most influential tech events in the world. Based on the slogan “Toward the Future for a Better Life,” Sharp will broadly promote a variety of world-class technologies in the global marketplace by introducing its innovative proprietary technologies first unveiled at Sharp Tech-Day, its technology exhibition event held in Tokyo in November 2023.

■ Exhibit Highlights

1. Smart Living
Technologies that reduce various worries at home and make lives richer and more comfortable

A virtual docent (explainer bot) powered by CE-LLM*1 (Communication Edge-LLM) edge AI technology being developed by Sharp will introduce the exhibit through natural, interactive conversation.
Also, in addition to a High Speed Oven that features greatly reduced cooking times thanks to proprietary heating technology and optimal control of the heat source, this exhibit will feature hair dryers and stick vacuum cleaners that combine low noise and high power. Sharp will also introduce non-contact vital sensors and the world’s smallest vital sensors for wearable devices.

2. Smart Industry
Ideas to provide speed and efficiency to industries
This exhibit features camera technology that supports XR technology, in addition to visual representations of conversations and ideas in business settings using XR*2 glass and AI, and information display to support daily life scenarios such as wardrobe coordination suggestions. Also, an AI Olfactory Sensor, developed by adapting display substrate technologies, enables sensing smells by mimicking living organisms, and an IMS (ion mobility spectrometry) gas analysis device using an atmospheric electron release element will be on display. And Sharp will introduce a safe driving support system that uses a small camera combined with an LCD.

3. Sustainability
Technologies that produce renewable energy of the future and address environmental issues by improving power efficiency
Reflective LCD signage for outdoor use and ePoster color electronic paper displays featuring low power consumption will be exhibited. Sharp will also introduce LC-LH, an indoor photovoltaic device that converts indoor light into electricity with high efficiency and can supply power to IoT devices; a TV Remote Control with an LC-LH device will also be shown. In addition, Sharp’s Space Solar Sheet, a thin, lightweight, and flexible next-generation solar cell module, will be on display.

■ Location of Sharp booth: 17229, Central Hall, Las Vegas Convention Center (Nevada, USA)
■ Exhibit dates and time: January 9 to January 12, 2024 (Tuesday to Friday); hours: 10:00 to 18:00

*1 In response to user questions, it delivers smooth, natural conversation and interaction by instantly judging whether to handle each question with edge AI, such as a local LLM, or with a cloud-based AI such as ChatGPT.
*2 Extended reality/cross reality: Technology to create new experiences through the fusion of the real world and virtual worlds.
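The edge-or-cloud routing described in footnote *1 can be sketched as a simple gate. Everything below is a hypothetical illustration under assumed names (the function, the keyword heuristic, the topic set), not Sharp's published design; a real CE-LLM system would presumably use a learned classifier rather than keyword matching.

```python
# Hypothetical sketch of an edge/cloud routing gate like the one footnote *1
# describes. None of these names come from Sharp's actual implementation.

def route_query(query: str, local_topics: set[str]) -> str:
    """Return 'edge' if the query looks answerable by the on-device model,
    otherwise 'cloud'. This keyword heuristic only illustrates where the
    routing decision sits in the pipeline."""
    words = {w.strip("?.,!").lower() for w in query.split()}
    return "edge" if words & local_topics else "cloud"

# Assumed topics the local model was tuned on: the booth exhibits.
EXHIBIT_TOPICS = {"oven", "dryer", "sensor", "eposter", "solar"}

print(route_query("How fast is the high speed oven?", EXHIBIT_TOPICS))  # edge
print(route_query("What's the weather in Las Vegas?", EXHIBIT_TOPICS))  # cloud
```

The point of such a gate is latency and privacy: questions the local model can answer never leave the device, and only the remainder incur a round trip to the cloud.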

■ Exhibit Highlights (tentative)
Smart Living
• AI Avatar
A virtual explainer (docent) equipped with CE-LLM (Communication Edge-LLM) edge AI technology now under development will present the exhibition content in a smooth, natural conversational style.
• High Speed Oven
A built-in oven that significantly reduces cooking times thanks to proprietary heating technology and optimal control of the heat source. It adopts the industry’s first*3 Gold Carbon Heater: a whole chicken, a typical American party dish, can be roasted in one-third the time of a conventional oven while delivering charcoal-grilled flavor.
• Low noise & High power Hair dryer
A new hair dryer delivers high air-flow volume using two motors while reducing noise levels thanks to a proprietary design for the air-flow path. An innovative form factor enables it to be used hands-free.
• Low noise & Powerful stick cleaner
An upright vacuum cleaner that maintains powerful suction and achieves significantly reduced noise thanks to a proprietary noise-reducing design.
• Non-contact vital Sensor
A device that uses a unique optical filter based on semiconductor film-forming technology to sense pulse waves without contact and with high accuracy.
• Vital Sensors for wearable devices
The world’s smallest pulse wave sensor for wearable devices such as earphones, rings, and eyeglasses.
• AQUOS XLED Model for the Global Market
This TV combines a mini-LED backlight and quantum-dot rich-color display technology with an audio system whose speakers are placed above and below the screen, pairing dynamic images with excellent light and dark rendering and an immersive sound field. This new-generation TV gives viewers a vividly realistic sense of being in the middle of the action.
Smart Industry
• Life/work update experience using XR glasses
Conversations and ideas that emerge during meetings can be rendered visually using XR technology and AI. The same combination can also support users in daily life, for example by analyzing the current weather forecast and personal schedule and displaying recommended outfits as the user stands in front of the wardrobe.
• Polymer Lens Camera and Ultra Compact Camera
A polymer lens camera, the “eye” of the AI world, capable of high-speed focus adjustment based on the same mechanism as the human eye, and an ultra-small camera with high optical performance for gaze tracking. These products will link a person’s intention to see things, go places, and operate devices to the world of AI without requiring touch contact.
• AI Olfactory Sensor
Based on a design derived from display substrate technologies, this sensor imitates the sense of smell of living things. Because it images odors and processes them with AI, it can distinguish more complex smells than methods that simply measure physical quantities of odorants.
• IMS Gas Analyzer
This device uses a proprietary atmospheric electron emission element developed by Sharp. By enabling stable electron emission even at normal atmospheric pressure, something previously considered difficult outside a vacuum, it is expected to find application in a wider variety of fields than ever before.
• Gas decomposition module applying original photocatalyst material
A harmful-gas decomposition module equipped with a filter containing a proprietary photocatalyst material with excellent performance provides improved air quality in offices and inside vehicles. It can also be used to maintain the freshness of perishable foods and for the preservation of historic and artistic assets.
• Driver Monitoring Camera
A camera module integrated into the dashboard LCD detects the driver’s line of sight, head position and orientation, and eye blink pattern to assess driver status. Drowsiness can also be detected.
Sustainability
• Low Power Consumption Reflective LCD Signage for Outdoor Use
A low-power-consumption display utilizing reflected ambient light. By combining a proprietary reflective structure and low-frequency IGZO drive, a full-color video display with high visibility is possible even in outdoor environments.
• Ultra Low Power indoor Display "ePoster"
This display is equipped with E Ink’s latest e-paper platform, E Ink Spectra 6, making it possible to render content with greater color saturation and vividness. IGZO technology has made it possible to make the peripheral circuitry smaller, allowing for narrower bezels. In addition to high visibility like printed paper, it can also maintain the display with zero watts (“0W”) power consumption*4.
• Indoor photovoltaic device "LC-LH"
By combining dye-sensitized solar cells, which can convert indoor light into electricity with high efficiency, and Sharp’s LCD technology, this device achieves power generating efficiency approximately double that of typical solar cells used in clocks and calculators. Applicable to a variety of small appliances, it will also help reduce environmental impact by replacing disposable batteries.
• TV Remote Control with LC-LH device
A remote control for TVs equipped with an LC-LH indoor photovoltaic device. Boasting excellent power generation efficiency in indoor light, it can be used without batteries.
• Space Solar Sheet
Next-generation solar cells for use in outer space featuring high power generating efficiency. Thin, lightweight, and flexible, the solar sheet can be used in unprecedented ways for moving vehicles and mobile equipment. For example, it can be stored compactly as a module that can be rolled up and then expanded to cover a large area for generating electricity.
*3 For built-in ovens; as of December 21, 2023, based on Sharp findings.
*4 Power is consumed when rewriting the content to be displayed on the screen.
Official CES 2024 website: https://www.ces.tech/




Sharp Unveils Cutting-Edge Technologies at CES 2024​



NEWS PROVIDED BY
Sharp Electronics Corporation
02 Jan, 2024, 09:52 ET

SHARPen Your Senses and Experience a Vision for a Better Future

MONTVALE, N.J., Jan. 2, 2024 /PRNewswire/ -- Sharp, a global leader in innovative technology solutions, is bringing the future to the five senses at CES 2024, taking place in Las Vegas, Nevada, from January 9 to 12, 2024. Experience products and components that will extend your sight, please your taste buds, detect the slightest differences in scents, replace traditionally loud household products with quiet alternatives, and even feel your heartbeat with miniature wearable devices to read your vital signs.

"As Sharp celebrates its 111th anniversary, Sharp is proud to lead the way in innovation and game-changing technologies," said Jeff Ashida, Chairman, President & CEO of Sharp Electronics Corporation. "We are excited to share Sharp's cutting-edge solutions that will define the next chapter of Sharp's legacy at CES 2024."

Under the theme "Toward the Future for a Better Life," Sharp will showcase a diverse range of groundbreaking technologies, initially introduced at Sharp Tech-Day in Tokyo in November 2023.

SMART LIVING: REDEFINING HOME EXPERIENCES

Sharp's booth will feature a captivating Smart Living exhibit, where cutting-edge technologies converge to enhance daily life. Central to this display will be the revolutionary CE-LLM (Communication Edge-LLM) edge AI technology, powering a virtual tour guide capable of engaging visitors with naturally paced and lively interactive conversations.

The exhibit will highlight a High-Speed Oven with proprietary Quad Heating Technologies. Combining the industry's first gold carbon heater for charcoal grill flavor, convection circulation, inverter control microwave, and intelligent cooking control, this High-Speed Oven helps reduce cooking time to as little as 1/3 of that of a conventional home oven.

Sharp is also showcasing powerful hair dryers and upright vacuum cleaners designed to operate at or near 50 dB, approximately as quiet as a household refrigerator. Moreover, non-contact vital signs sensors and the world's smallest vital signs sensors for wearable devices will be shown.

SMART INDUSTRY: ACCELERATING BUSINESS INNOVATION
Sharp will present technologies designed to boost speed and efficiency across various sectors in the Smart Industry section. The exhibit will include camera technology featuring polymer lenses and support for XR technology, XR glasses, and AI-powered visual representations of business conversations and ideas.
With technology inspired by the method living organisms use to sense odors, the AI Olfactory Sensor is designed with a target detection limit of 0.1 ppb (parts per billion). The exhibit will show how the AI Olfactory sensor detects the subtle differences between wine varieties. An IMS (ion mobility spectrometry) gas analysis device using atmospheric electron release technology will also be displayed.

Sharp will introduce a safe driving support system with a miniature camera and LCD. Finely tuned to identify situations where the driver cannot maintain a consistent line of sight, the driver monitor camera can help detect drowsiness, unresponsiveness, and distracted driving.

SUSTAINABILITY: PIONEERING RENEWABLE ENERGIES FOR A GREENER TOMORROW

Sharp's commitment to sustainability will be evident in the technologies to be showcased in the Sustainability exhibit. The display will include Reflective LCD signage for outdoor use and ePoster color electronic paper displays with low power consumption. An indoor photovoltaic device LC-LH, utilizing indoor light for more efficient power generation will be demonstrated with a battery-free TV Remote Control featuring the LC-LH device.

Sharp's Space Solar Sheet, a thin, lightweight, and flexible next-generation solar cell module, will be unveiled, underlining the company's dedication to addressing environmental challenges through improved power efficiency.


Sharp invites attendees, media, and industry professionals to experience the future of technology at CES 2024. Visit Sharp's booth #17229, Central Hall, Las Vegas Convention Center to witness firsthand the innovative solutions poised to shape a better and more connected world.
Official CES 2024 website: https://www.ces.tech/


Booth Exhibits Details (Subject to Change)
SMART LIVING DISPLAY

AI Avatar
Experience a virtual concierge and tour guide equipped with CE-LLM edge AI technology. Engage our AI Avatar to learn about all Sharp's exhibition with smooth, naturally paced, lively conversations.

High-Speed Oven
Revolutionize cooking with our built-in High-Speed Oven. Utilizing proprietary Quad Heating Technology and the industry-first Gold Carbon Heater for a charcoal-grilled flavor, it may reduce cooking time to as little as one-third of a conventional home oven.
Low Noise & High-Power Hair Dryer
Discover innovation in hair care with a high air-flow volume hair dryer. Two motors reduce noise, and its hands-free design provides a unique user experience.
Low Noise & Powerful Stick Cleaner
Explore our upright vacuum cleaner with powerful suction and reduced noise, thanks to a proprietary noise-reducing design.
Non-contact Vital Sensor
Experience precision in health monitoring with our optical filter using semiconductor film-forming technology to sense pulse waves without contact.
Vital Sensors for Wearable Devices
Introducing the world's smallest pulse wave sensor for wearable devices, such as earphones, rings, and eyeglasses.
AQUOS XLED Television for the Global Market
Immerse yourself in a new-generation TV experience. Our AQUOS XLED television combines mini-LED backlight, quantum-dot rich-color display technology, and immersive sound for a realistic viewing sensation.
SMART INDUSTRY DISPLAY
Life/Work Update Experience with XR Glass
Visualize conversations and ideas using XR technology and AI. This innovative combination of AI-powered natural-speed conversation and integration with real-time information services, personal calendars, schedules, and contacts will transform how we interact with our digital life at home and work.
Polymer Lens Camera and Ultra Compact Camera
Step into the world of AI with our polymer lens camera and ultra-compact camera. High-speed focus adjustment and gaze tracking redefine how we interact with AI.
AI Olfactory Sensor
Experience scent recognition like never before. Our sensor, inspired by living organisms, allows AI to process and evaluate complex smells.
IMS Gas Analyzer
Explore our gas analyzer with a proprietary atmospheric electron emission element, enabling stable electron emission even at normal atmospheric pressure.
Gas Decomposition Module Applying Original Photocatalyst Material
Help improve indoor air quality with our gas decomposition module. Equipped with a filter containing a proprietary photocatalyst material, it has potential applications in offices and vehicles.
Driver Monitoring Camera
Enhance road safety with our camera module integrated into the dashboard LCD. It detects the driver's line of sight, head position, and orientation, and helps identify signs of drowsiness.
SUSTAINABILITY DISPLAY
Low Power Consumption Reflective LCD Signage for Outdoor
Embrace sustainability with our low-power consumption outdoor display. Combining a proprietary reflective structure and low-frequency IGZO drive, it delivers a full-color video display with high visibility.
Ultra Low Power Indoor Display, ePoster
Witness the future of displays with our ePoster. Using E Ink's latest e-paper platform and IGZO technology, it maintains high visibility with extremely low ("0W" class) power consumption.
Indoor Photovoltaic Device, LC-LH
Contribute to a greener future with our indoor photovoltaic device. By combining dye-sensitized solar cells and Sharp's LCD technology, it achieves double the power generating efficiency of common indoor-light solar-powered devices.
TV Remote Control with LC-LH Device
Experience innovation in TV control with our remote equipped with an LC-LH indoor photovoltaic device. It boasts excellent power generation efficiency in indoor light, eliminating the need for batteries.
Space Solar Sheet
Explore next-generation solar cells for use in outer space. Thin, lightweight, and flexible, the solar sheet offers unprecedented possibilities for moving vehicles and mobile equipment.
ABOUT SHARP ELECTRONICS CORPORATION (SEC)
Sharp Electronics Corporation, responsible for the U.S. & Latin American market, is part of Sharp Corporation, a global technology company. We aim to help consumers and organizations of all sizes in our region enhance their performance and adapt to the future through innovative home and business products and services. Sharp is an expert in both business to business (B2B) and consumer (B2C) innovation and continues its commitment to invest in new products and services. Sharp has been named to Fortune magazine's World's Most Admired Company List, ranking the world's most respected and reputable companies. For more information about Sharp Electronics Corporation, visit our website at sharpusa.com.
Press Contact: (Sharp)
Kellyn Curtis
972.816.1355
kellyn.curtis@peppercomm.com
SOURCE Sharp Electronics Corporation



Sharp goes all in on AI at CES​

Business news | January 2, 2024
By Nick Flaherty


Sharp is showing a wide range of technologies at CES 2024, from generative AI and an AI-enabled wine sensor to lightweight solar cells for space.​

Sharp has developed its own large language model edge AI technology, called CE-LLM (Communication Edge-LLM), that can provide naturally paced interactive conversations, as well as camera technology featuring polymer lenses, support for XR technology, XR glasses, and AI-powered visual representations of business conversations.


With technology inspired by the method living organisms use to sense odours, the AI Olfactory Sensor is designed with a target detection limit of 0.1 ppb (parts per billion) and can detect the subtle differences between wine varieties. An ion mobility spectrometry (IMS) gas analysis device using atmospheric electron release technology will also be displayed as well as an optical filter using semiconductor film-forming technology to sense pulse waves without contact. This is used for the world’s smallest pulse wave sensor for wearable devices such as earphones, rings, and eyeglasses.
Sharp is also introducing a safe driving support system with a miniature camera and LCD. Tuned to identify situations where the driver cannot maintain a consistent line of sight, the driver monitor camera can help detect drowsiness, unresponsiveness, and distracted driving, and it arrives as the US is evaluating technologies to tackle distracted and impaired driving.

Sharp is also showing its Reflective LCD signage for outdoor use and ePoster colour electronic paper displays with low power consumption. An indoor photovoltaic device LC-LH, using indoor light for more efficient power generation will be demonstrated with a battery-free TV Remote Control featuring the LC-LH device.

The Space Solar Sheet, a thin, lightweight, and flexible next-generation solar cell module, will also be unveiled, aimed at space applications.

global.sharp/
 

robsmark

Regular
They're not. Robsmark has already confirmed that they are not.
The company confirmed they're not. If they were, they wouldn't need this cap raise, would they?
 
