BRN Discussion Ongoing

IloveLamp

Top 20
Screenshot_20240105_210002_LinkedIn.jpg
 
  • Like
  • Love
  • Thinking
Reactions: 14 users

Iseki

Regular
This pretty much just added the same value (i.e. zero), but you used more words and encouraged people to take a nap halfway through reading because it was so rehashed and a great waste to read. Waiting for fresh material from you buddy.
Chris Stevens likes this
 
  • Haha
  • Fire
Reactions: 3 users

Couple of dots for CES and 4 years in the making perhaps.... may have already been linked prior so apologies in advance if that's the case.

https://www.biometricupdate.com/202...sing-capabilities-opens-developer-environment

BrainChip has demonstrated the capabilities of its latest class of neuromorphic processing IP and Device in two sessions at the tinyML Summit at the Samsung Strategy & Innovation Center in San Jose, California, the company announced.

“We recognize the growing need for low-power machine learning for emerging applications and architectures and have worked diligently to provide a solution that performs complex neural network training and inference for these systems,” said Louis DiNardo, CEO of BrainChip, in a prepared statement. “We believe that as a high-performance and ultra-low power neural processor, Akida is ideally suited to be implemented in Edge and IoT applications.”

In a session titled “Bio-Inspired Edge Learning on the Akida Event-Based Neural Processor,” BrainChip rolled out a demo of how the Akida Neuromorphic System-on-Chip processes standard vision CNNs using industry-standard flows and distinguishes itself from traditional deep-learning accelerators through key design choices and a bio-inspired learning algorithm. As a result, Akida requires 40 to 60 percent fewer computations to process a CNN compared to a DLA.
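The computation saving described above comes from event-based processing: zero-valued activations generate no events, so no multiply-accumulates (MACs) are spent on them. The sketch below is purely illustrative, not BrainChip's implementation; the names and sizes are invented, and the 40-60 percent figure remains BrainChip's own claim.

```python
import random

# Illustrative sketch only: an event-based processor skips computations
# for zero-valued activations, whereas a conventional deep learning
# accelerator (DLA) performs every multiply-accumulate (MAC) regardless.

random.seed(0)

# Simulated ReLU activations: roughly half become zero after rectification
activations = [max(random.gauss(0, 1), 0.0) for _ in range(4096)]
fan_out = 64  # each activation feeds 64 weights in the next layer

dense_macs = len(activations) * fan_out                      # DLA: all MACs
event_macs = sum(1 for a in activations if a > 0) * fan_out  # skip zeros

savings = 1 - event_macs / dense_macs
print(f"MACs skipped by ignoring zero activations: {savings:.0%}")
```

With ReLU-style activations roughly half the MACs vanish in this toy setup; a real network's sparsity, and hence the saving, depends on its layers and data.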

https://news.samsung.com/global/sam...ble-expansive-kitchen-experiences-at-ces-2024

AI Features That Enable Food Ideas and Intelligence

To enhance the experience in the kitchen, the 2024 Bespoke 4-Door Flex™ Refrigerator with AI Family Hub™+ is packed with a variety of innovative technologies. One impressive new feature is AI Vision Inside, which uses a smart internal camera to recognize items being placed into and taken out of the refrigerator. It is also equipped with “Vision AI” technology, which can identify up to 33 different fresh food items based on a predefined set of training data comprising approximately one million food photographs.2 With the food list that is available and editable on the Family Hub™+ screen, users can also manually add expiration date information for items they would like to keep track of, and the refrigerator sends out alerts through its 32” LCD screen before items reach that date.
"based on a predefined set of training data comprising approximately one million food photographs"

What are your thoughts on the size of the data set @Diogenese ?

To me it seems too large, to have anything to do with AKIDA, but how is a "zoo" or library, created for use by AKIDA?

It would take more training, to identify say stiff celery, from floppy celery..


Children please 🙄..
 
  • Haha
  • Like
Reactions: 7 users
Hello Fact Finder

I don't like stories, but rather reality!

1. Nobody discussed beating anybody "all day for a week"! You were the one who wrote the most words about it.
2. Secretaries should not make typing mistakes, especially nowadays with automatic correction programs and when going public. It always gives a bad impression. In this case, nobody did a second check!
3. It surprises me again and again how much and how often you defend BrainChip, even over minor topics.

My end too.

Have a good weekend!

CHIPS
I'm pretty sure the secretaries FactFinder is talking about, worked in black and white offices..

100.gif
 
  • Haha
  • Like
  • Wow
Reactions: 16 users

wilzy123

Founding Member
  • Like
Reactions: 1 user

wilzy123

Founding Member
Stop feeding the troll!

I agree. You've been eating too many chips and drinking so much bin juice that your posts have become the stuff that only a homicide detective would instruct someone to dig through.
 
Last edited:
  • Haha
Reactions: 3 users

JDelekto

Regular
"based on a predefined set of training data comprising approximately one million food photographs"

What are your thoughts on the size of the data set @Diogenese ?

To me it seems too large, to have anything to do with AKIDA, but how is a "zoo" or library, created for use by AKIDA?

It would take more training, to identify say stiff celery, from floppy celery..


Children please 🙄..

When creating a data set used for inference, a set of 'features' is extracted and used for training the model. Various features are identified for image models and tagged, such as shapes. One could, for example, train a model to recognize dress shirts, blouses, t-shirts, hoodies, etc.

As these features are fed into a neural network, the "weights" are calculated and become part of the model. The weights are stored in a binary format and can be quantized, a fancy word for doing some "magic math" so these numbers take up less space. We hear about 8-bit, 4-bit, etc. The fewer bits that can be used to represent a weight, the less memory it requires. This quantization comes at a cost, since you can lose some accuracy when doing the inferencing to recognize things.

Microsoft's ResNet-50 model (the 50 represents the number of layers in the model, each set of layers having a set of weights) is trained on millions of images. Yet the model itself can fit in around 100 megabytes of memory. Most cell phones today have gigabytes of storage. To put it more into perspective, most wireless routers today have between 128 and 512 MB of memory. A Samsung smart refrigerator has about 2.5 GB of RAM and 8 GB of flash memory, much more space than the model requires.

That being said, a model that is trained on over a million images to recognize most refrigerator contents can easily fit within the confines of an appliance or mobile device.

I like to think of a neural network model as a "snapshot" of a brain state. When you learn new images, you're not necessarily increasing the size or mass of your brain, but you're altering the connections between the neurons in your brain that can remember and recognize those images later. As you see more pictures of a cat to learn the shape of its face, nose, ears, tail, etc. that make up that cat, then the more likely you are to recognize another picture of a cat, even if it is a color you've never seen before.
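The memory arithmetic and quantization idea above can be sketched in a few lines. This is a toy illustration, not BrainChip's or Microsoft's actual tooling; the 25.6 million weight count for ResNet-50 is the commonly cited parameter count, and the quantizer is a deliberately simple symmetric scheme.

```python
import random

# ResNet-50 has roughly 25.6 million weights, so at 32-bit floats the raw
# weights come to about 100 MB; 8-bit quantization cuts that to a quarter.
n_weights = 25_600_000
fp32_mb = n_weights * 4 / 1e6   # 4 bytes per float32 weight
int8_mb = n_weights * 1 / 1e6   # 1 byte per int8 weight

# A toy symmetric quantizer: map floats onto int8 levels, then restore.
random.seed(1)
w = [random.gauss(0, 1) for _ in range(1000)]
scale = max(abs(x) for x in w) / 127
w_q = [round(x / scale) for x in w]     # integers in [-127, 127]
w_restored = [q * scale for q in w_q]   # lossy reconstruction

max_err = max(abs(a - b) for a, b in zip(w, w_restored))
print(f"fp32: ~{fp32_mb:.0f} MB, int8: ~{int8_mb:.0f} MB")
print(f"max reconstruction error: {max_err:.4f}")
```

The reconstruction error is bounded by half the quantization step, which is the "accuracy cost" of squeezing each weight into one byte.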
 
  • Like
  • Love
  • Fire
Reactions: 26 users

Frangipani

Regular
Sigh, sometimes a humble Arduino microcontroller for less than US$100 with minimal computational power is not only deemed good enough for certain research projects, but is also explicitly labelled as being more economical (both price-wise and regarding power consumption) than having to purchase technically far superior hardware such as an Akida PCIe Board or Dev Kit…

But on the upside: Dr Ivan Maksymov, an Associate Professor at Charles Sturt University’s AI and Cyber Futures Institute, where he conducts both theoretical and experimental research on physical foundations of AI (https://researchoutput.csu.edu.au/en/persons/ivan-maksymov), is at least fully aware of Brainchip and its products for the general public. Who knows - he might even find Akida’s prowess useful for another of his research projects…


C571652E-3A80-40AB-BF8C-E0B6D6B3F58F.jpeg



F1334860-F833-443A-A778-5797E53C1D29.jpeg


6863D523-CB2B-4F74-8314-FD5D20805E83.jpeg
 
  • Like
  • Thinking
  • Wow
Reactions: 21 users
When creating a data set used for inference, a set of 'features' is extracted and used for training the model. Various features are identified for image models and tagged, such as shapes. One could, for example, train a model to recognize dress shirts, blouses, t-shirts, hoodies, etc.

As these features are fed into a neural network, the "weights" are calculated and become part of the model. The weights are stored in a binary format and can be quantized, a fancy word for doing some "magic math" so these numbers take up less space. We hear about 8-bit, 4-bit, etc. The fewer bits that can be used to represent a weight, the less memory it requires. This quantization comes at a cost, since you can lose some accuracy when doing the inferencing to recognize things.

Microsoft's ResNet-50 model (the 50 represents the number of layers in the model, each set of layers having a set of weights) is trained on millions of images. Yet the model itself can fit in around 100 megabytes of memory. Most cell phones today have gigabytes of storage. To put it more into perspective, most wireless routers today have between 128 and 512 MB of memory. A Samsung smart refrigerator has about 2.5 GB of RAM and 8 GB of flash memory, much more space than the model requires.

That being said, a model that is trained on over a million images to recognize most refrigerator contents can easily fit within the confines of an appliance or mobile device.

I like to think of a neural network model as a "snapshot" of a brain state. When you learn new images, you're not necessarily increasing the size or mass of your brain, but you're altering the connections between the neurons in your brain that can remember and recognize those images later. As you see more pictures of a cat to learn the shape of its face, nose, ears, tail, etc. that make up that cat, then the more likely you are to recognize another picture of a cat, even if it is a color you've never seen before.
So you're saying the use of such a large data set doesn't exclude the use of AKIDA, but is it indicative of using AKIDA or not? 🤔..
 
  • Like
Reactions: 2 users

Diogenese

Top 20
"based on a predefined set of training data comprising approximately one million food photographs"

What are your thoughts on the size of the data set @Diogenese ?

To me it seems too large, to have anything to do with AKIDA, but how is a "zoo" or library, created for use by AKIDA?

It would take more training, to identify say stiff celery, from floppy celery..


Children please 🙄..
Hi Db,

I'll keep my doodle entendres under wraps.

This Samsung portmanteau patent application covers all sorts of domestic appliance with NNs, cameras, AI models, model training, speech recognition, display screens, menu suggestions ...

They contemplate having software NNs or SoC NNs.

WO2023090725A1 DOMESTIC APPLIANCE HAVING INNER SPACE CAPABLE OF ACCOMMODATING TRAY AT VARIOUS HEIGHTS, AND METHOD FOR ACQUIRING IMAGE OF DOMESTIC APPLIANCE 20211118

1704458661278.png



As shown in FIG. 2, a home appliance 1000 according to an embodiment of the present disclosure may include a camera 1100 and a processor 1200. The processor 1200 controls the overall operations of the home appliance 1000. By executing programs stored in the memory 1800, the processor 1200 can control the camera 1100, the driving unit 1300, the sensor unit 1400, the communication interface 1500, the user interface 1600, the lighting 1700, and the memory 1800.

According to one embodiment of the present disclosure, the home appliance 1000 may be equipped with an artificial intelligence (AI) processor. The artificial intelligence (AI) processor may be manufactured in the form of a dedicated hardware chip for artificial intelligence (AI), or manufactured as part of an existing general-purpose processor (e.g. CPU or application processor) or graphics-only processor (e.g. GPU) and mounted on the home appliance 1000.

According to an embodiment of the present disclosure, the processor 1200 obtains a first image including the tray 1001 inserted into the internal space of the home appliance 1000 through the camera 1100, and uses the first image to identify the height at which the tray 1001 is inserted in the inner space. Also, the processor 1200 may identify the height at which the tray 1001 is inserted based on information obtained from at least one of the depth sensor 1410, the weight sensor 1420, and the infrared sensor 1430. The operation of identifying the height at which the tray 1001 is inserted by the processor 1200 will be described later in detail with reference to FIGS. 5 to 8.

According to an embodiment of the present disclosure, the processor 1200 determines a setting value related to image capture of the interior space according to the height at which the tray 1001 is inserted, and based on the determined setting value, may obtain a second image (hereinafter also referred to as a monitoring image) including the contents placed on the tray 1001. For example, the processor 1200 determines the brightness value of the lighting in the interior space according to the height at which the tray 1001 is inserted, adjusts the brightness of the lighting 1700 disposed in the interior space according to the determined lighting brightness value, and may control the camera 1100 to acquire the second image. In addition, the processor 1200 may determine the size of the cropped area according to the height at which the tray 1001 is inserted, and obtain a second image by cropping a portion of the surrounding area from the first image based on the determined size of the cropped area. Meanwhile, the processor 1200 may obtain a second image by determining a distortion correction value of the camera 1100 according to the height at which the tray 1001 is inserted and applying the distortion correction value to the first image. The operation in which the processor 1200 acquires the second image (monitoring image) by applying a set value according to the height at which the tray 1001 is inserted will be described in detail later with reference to FIGS. 9 to 16.



The input interface 1620 may include a voice recognition module. For example, the home appliance 1000 may receive a voice signal, which is an analog signal, through a microphone and convert the speech portion into computer-readable text using an Automatic Speech Recognition (ASR) model. The home appliance 1000 may obtain the user's utterance intention by interpreting the converted text using a natural language understanding (NLU) model. Here, the ASR model or NLU model may be an artificial intelligence model. The artificial intelligence model can be processed by an artificial intelligence processor designed with a hardware structure specialized for the processing of artificial intelligence models. AI models can be created through learning. Here, being created through learning means that a basic artificial intelligence model is trained on a plurality of learning data by a learning algorithm, so that a predefined action rule or artificial intelligence model set to perform a desired characteristic (or purpose) is created. An artificial intelligence model may be composed of a plurality of neural network layers. Each of the plurality of neural network layers has a plurality of weight values, and a neural network operation is performed through an operation between the operation result of the previous layer and the plurality of weight values.
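The patent's phrase "an operation between the operation result of the previous layer and a plurality of weight values" is just a standard layer forward pass. A minimal sketch, with sizes and names invented for illustration (nothing here comes from the Samsung patent):

```python
import random

# Each layer combines the previous layer's output with its own weights,
# here as a weighted sum plus bias followed by a ReLU activation.

random.seed(2)

def layer(prev_output, weights, biases):
    """One neural network operation: previous result x weights + bias, ReLU."""
    return [
        max(sum(p * w for p, w in zip(prev_output, col)) + b, 0.0)
        for col, b in zip(weights, biases)
    ]

def rand_weights(n_in, n_out):
    return [[random.gauss(0, 1) for _ in range(n_in)] for _ in range(n_out)]

x = [random.gauss(0, 1) for _ in range(16)]   # e.g. features from an image
w1, b1 = rand_weights(16, 8), [0.0] * 8
w2, b2 = rand_weights(8, 4), [0.0] * 4

hidden = layer(x, w1, b1)        # first layer's operation result
output = layer(hidden, w2, b2)   # next layer reuses it with its own weights
print(len(output))               # 4 output values
```

Training, as the patent puts it, is the process of adjusting those weight values until the final outputs match the desired behavior.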



Alternatively, the electronic device 200 may be implemented as an electronic device connected to a display device including a screen through a wired or wireless communication network. For example, the electronic device 200 may be implemented in the form of a media player, a set-top box, or an artificial intelligence (AI) speaker.

...

For example, a refrigerator may provide a service that recommends a menu suitable for stored ingredients. Meanwhile, in order for smart appliances to provide smart services based on object recognition, an object recognition rate needs to be improved.



In an embodiment, at least one neural network and/or a predefined operating rule or AI model may be stored in the memory 220 . In an embodiment, a first neural network for obtaining multi-mood information from at least one of user context information and screen context information may be stored in the memory 220 .



In an embodiment, the processor 210 may use artificial intelligence (AI) technology. AI technology can be composed of machine learning (deep learning) and element technologies using machine learning. AI technology can be implemented by utilizing algorithms. Here, an algorithm or a set of algorithms for implementing AI technology is called a neural network. The neural network may receive input data, perform calculations for analysis and classification, and output result data. For the neural network to accurately output result data corresponding to the input data, it is necessary to train it. Here, 'training' means inputting various data into a neural network and training it so that it can discover or learn by itself how to analyze the input data, classify the input data, and/or extract features necessary for generating result data from the input data. Training a neural network means that an artificial intelligence model with desired characteristics is created by applying a learning algorithm to a plurality of learning data. In an embodiment, such learning may be performed in the electronic device 200 itself where artificial intelligence is performed, or through a separate server/system.
 
  • Like
  • Thinking
Reactions: 15 users
Sigh, sometimes a humble Arduino microcontroller for less than US$100 with minimal computational power is not only deemed good enough for certain research projects, but is also explicitly labelled as being more economical (both price-wise and regarding power consumption) than having to purchase technically far superior hardware such as an Akida PCIe Board or Dev Kit…

But on the upside: Dr Ivan Maksymov, an Associate Professor at Charles Sturt University’s AI and Cyber Futures Institute, where he conducts both theoretical and experimental research on physical foundations of AI (https://researchoutput.csu.edu.au/en/persons/ivan-maksymov), is at least fully aware of Brainchip and its products for the general public. Who knows - he might even find Akida’s prowess useful for another of his research projects…


View attachment 53471


View attachment 53472

View attachment 53473
Is this Dr Ivan Maksymov, really a doctor? Really a scientist? Really??.. Him??..

He's calling the AKIDA PCIe boards (USD499) and the PC based Development Kit (USD9995) "mass-produced" products??
For cost comparison, against his cobbled-together $100 Arduino microcontroller-based Reservoir Computing system??

Don't these Development Kits come with "engineering support" etc from BrainChip as well?..

Not sure what point he's trying to make, when mass produced AKIDA chips have been previously estimated at costing $15 to $20 each..

Maybe my logic's all wrong here, but this guy seems like a bit of a DH.. (not a reference to you, Dave).

If his budget for research is a hundred bucks, maybe that's an indication of the value of it..
 
Last edited:
  • Like
  • Fire
Reactions: 14 users
Just popped up a few mins ago.

Really like to see the "AI acceleration engine" be Akida in their SOC for Smart Devices / Video applications & Image signal processing.



Socionext to Showcase Leading-Edge Technologies at CES 2024, Featuring Custom SoC Solutions, Low Power Sensors, Smart Display Controller, and Advanced Image Processor​

MILPITAS, Calif., Jan. 5, 2024 -- Socionext, an innovative custom SoCs provider with a distinctive "Solution SoC" business model, will showcase its leading-edge technologies and products at CES 2024, Jan. 9-12.

Socionext will be at booth 9971 in the Smart Cities, IoT, and Sustainability Zone in the North Hall of the Las Vegas Convention Center. Among Socionext's featured technologies and solutions will be the following:

Custom SoCs
With extensive experience in custom SoC development, Socionext established a distinctive Solution SoC business model that provides SoCs that are both customized and fully optimized to our customers' needs. The company offers the optimal combination of IPs, design expertise, software development, and support to implement large-scale SoCs to meet the most demanding and rigorous performance requirements in automotive, data center, and smart device applications.

With the Solution SoC model, Socionext delivers products and support that enable applications in the automotive segment, such as ADAS sensors, central computing, networking, in-cabin monitoring, satellite connectivity, and infotainment. In the data center segment, they enable high-performance compute, storage and networking applications. In the consumer space, they allow a range of smart devices, from earbuds to smart speakers and smart glasses to AR/VR headsets.

Low Power 60GHz Radar Sensor Solutions
The automotive-grade SC1260 series 60GHz RF CMOS sensor, which was nominated for awards at the Sensors Converge Show in Santa Clara in June 2023, can support multiple in-cabin use cases, including seat occupancy monitoring, child presence detection, and theft prevention.

The SC1240 series RF sensor for consumer applications allows users to easily enable multiple–person detection, gesture detection, and other high-precision sensing.

Socionext's highly-integrated CMOS radar transceivers incorporate antennae-in-package and an embedded signal processing unit for distance, angle, and presence detection in a small form factor package without requiring radio frequency or advanced signal processing design skills.

Smart Display Controller for Automotive
Socionext's SC172x is a highly integrated, ASIL-B conformant graphics display controller featuring built-in mechanisms to ensure safety-critical content is delivered in compliance with the standards required by today's automotive display applications.

SoCs for Smart Devices
Socionext's Solution SoC model enables a high-performance SoC for video applications that features a best-in-class image signal processor with built-in low-light support, coupled with an AI acceleration engine and a high level of system integration. Built using an advanced, low power manufacturing process, the 8K processor can be customized through silicon, packaging, and software suitable for a wide variety of applications.

Creating a proprietary chip requires a complex, highly structured framework with a complete support system to address each phase of the development process.

Socionext's "Solution SoC" business model embodies the company's deep understanding of SoC architectures and technologies - including IP, EDA tools, packaging, quality control and manufacturing – as well as its ecosystem of suppliers that lets customers develop feature-rich custom SoCs while maintaining ownership of key differentiating technologies that can deliver significant competitive advantage.

With extensive experience in custom SoC development, Socionext uses state-of-the-art process technologies to produce SoCs optimized for customer requirements.

Click here to learn more about Socionext's product line-up. To schedule a meeting, please fill out the contact form here.
For the CES 2024 website and programs, visit https://www.ces.tech/.
 
  • Like
  • Fire
  • Love
Reactions: 41 users

JDelekto

Regular
So you're saying the use of such a large data set doesn't exclude the use of AKIDA, but is it indicative of using AKIDA or not? 🤔..
It definitely doesn't exclude the use of Akida, and I read on BrainChip's site (March 6th, 2023 post) that with the 2nd gen ViT (Vision Transformers), they could handle ResNet-50 on the neural processor itself without CPU intervention.
 
  • Like
  • Love
  • Fire
Reactions: 19 users

Sirod69

bavarian girl ;-)

BrainChip and NVISO Group Demonstrate AI-Enabled Human Behavioral Analysis at CES 2024​



Laguna Hills, Calif. – January 5, 2024 – BrainChip Holdings Ltd (ASX: BRN, OTCQX: BRCHF, ADR: BCHPY), the world’s first commercial producer of ultra-low power, fully digital, event-based, neuromorphic AI IP, and NVISO Group Ltd, a global leader in human behavior AI software, will showcase a joint system that enables more advanced, more capable and more accurate AI on consumer products at the Consumer Electronics Show (CES) 2024 in Las Vegas, Nevada.

NVISO Group’s technology is uniquely able to analyze signals of human behavior such as facial expressions, emotions, identity, head poses, gaze, gestures, activities, and objects with which users interact. BrainChip’s Akida™ IP and processors address the need for high levels of efficient AI performance and on-chip learning, with ultra-low power technologies.

In combining NVISO Group AI Human Behavioral Software and BrainChip’s Akida neuromorphic compute, the resulting system monitors the state of the users through real-time perception and observation of head and body pose, eye tracking and gaze, as well as indicates emotion reasoning.

“In our goal of driving machines to understand people and their behaviors, we have partnered with BrainChip to develop a high-performance system that enables efficient and effective human interaction with intelligent systems,” said Virpi Pennanen, CEO of NVISO Group. “This system will be deployable across a variety of consumer-level products at the Edge to enable autonomous machines to improve the quality of life in a safe and secure way.”

“Since first partnering with NVISO Group, we have worked diligently to combine our synergistic technologies to create intelligent systems that can interface with humans by recognizing and interpreting movement through the power of artificial intelligence,” said Rob Telson, Vice President of Ecosystem and Partnerships at BrainChip. “We are pleased to be able to demonstrate our progress at CES and show attendees how they can utilize our platforms for Edge AI devices to better improve the human experience.”

Those who are interested in human behavioral analysis are invited to see the two technologies working together at the BrainChip suite 29-330 at the Venetian Hotel at CES 2024, January 9 to 12, 2024, in Las Vegas.
 
Last edited:
  • Like
  • Fire
  • Love
Reactions: 75 users

TECH

Regular
  • Haha
  • Like
Reactions: 6 users

MDhere

Regular
CES... Let's get this party started, Akida... time to kick some a$#@ Tuesday to Friday. Let's go Rob, Todd, Sean, Anil and Kris and any other staff
in attendance... do us proud guys, we all love your work... Tech x


Love that song, good choice Tech 👌
 
  • Like
  • Love
  • Fire
Reactions: 6 users
Is this Dr Ivan Maksymov, really a doctor? Really a scientist? Really??.. Him??..

He's calling the AKIDA PCIe boards (USD499) and the PC based Development Kit (USD9995) "mass-produced" products??
For cost comparison, against his cobbled-together $100 Arduino microcontroller-based Reservoir Computing system??

Don't these Development Kits come with "engineering support" etc from BrainChip as well?..

Not sure what point he's trying to make, when mass produced AKIDA chips have been previously estimated at costing $15 to $20 each..

Maybe my logic's all wrong here, but this guy seems like a bit of a DH.. (not a reference to you, Dave).

If his budget for research is a hundred bucks, maybe that's an indication of the value of it..
Hi DB

Absolutely correct. These BrainChip products were low-volume items, partly assembled and packaged by staff at BrainChip in limited numbers (literally a few hundred), primarily as demonstrators for new and existing customers. There were three: the $499 Raspberry Pi option, then the two larger board packages at $4,999 and $9,999.

There were, from memory, three hours of free support at the entry level, through to 50 hours with the most expensive option. The much greater support accounts for the price differential between the $4,999 and $9,999 packages.

This fellow stands in stark contrast to Quantum Ventura, who estimated a price for AKIDA as a USB device at US$50.00.

Never doubt how poor the research of a Professor with a vested interest in promoting a theory can be.

My opinion only DYOR
Fact Finder
 
  • Like
  • Fire
  • Love
Reactions: 49 users

Boab

I wish I could paint like Vincent
  • Like
  • Thinking
  • Fire
Reactions: 7 users
Hello Fact Finder

I don't like stories, but rather reality!

1. Nobody discussed beating anybody "all day for a week"! You were the one who wrote the most words about it.
2. Secretaries should not make typing mistakes, especially nowadays with automatic correction programs and when going public. It always gives a bad impression. In this case, nobody did a second check!
3. It surprises me again and again how much and how often you defend BrainChip, even over minor topics.

My end too.

Have a good weekend!

CHIPS
Probably because he has what he says every other contrarian has. An agenda.

I’d call it the “I’ve done my research and will defend it to the hilt, regardless” agenda. Aka the tribalism agenda.
 
  • Like
  • Love
  • Fire
Reactions: 10 users