BRN Discussion Ongoing

Boab

I wish I could paint like Vincent
Pvdm spotted in Freo having lunch😜
 

Attachments

  • 53b13625-18d8-4604-8b22-7554f7344c96.jpeg
  • Like
  • Haha
  • Love
Reactions: 39 users

7für7

Top 20
Green closing! Have a nice weekend! Looking forward to next week 👍
 
  • Like
  • Fire
Reactions: 6 users

TopCat

Regular


I’m hoping Sean and Justin play a round of golf occasionally!

Hotard will be responsible for Intel’s data center products and its mission to bring AI everywhere.

SANTA CLARA, Calif., January 03, 2024
--(BUSINESS WIRE)--Intel Corporation today announced the appointment of Justin Hotard as executive vice president and general manager of its Data Center and AI Group (DCAI), effective Feb. 1. He joins Intel with more than 20 years of experience driving transformation and growth in computing and data center businesses, and is a leader in delivering scalable AI systems for the enterprise.

Hotard will become a member of Intel’s executive leadership team and report directly to CEO Pat Gelsinger. He will be responsible for Intel’s suite of data center products spanning enterprise and cloud, including its Intel® Xeon® processor family, graphics processing units (GPUs) and accelerators. He will also play an integral role in driving the company’s mission to bring AI everywhere.


Most recently, Hotard served as executive vice president and general manager of High-Performance Computing, AI and Labs at Hewlett Packard Enterprise (HPE). While at HPE, he was responsible for delivering AI capabilities to customers addressing some of the world’s most complex problems through data-intensive workloads. He also directed the company’s central applied research group, Hewlett Packard Labs.
 
  • Fire
  • Like
Reactions: 5 users

CHIPS

Regular
You pick on people who make mistakes on here, 123. You pick and choose; obviously you're not consistent.
Is that how you run your life, a pick-and-choose sort of guy?
Stop feeding the troll!
 
  • Like
Reactions: 3 users

CHIPS

Regular
Hi Chips
I love stories so here is one for you.

When I employed secretary-typists, it was sufficient to circle the mistake and give the work back to them. They understood the circle indicated a mistake. They corrected it, the work came back to me, I signed it, and off it went.

I never once spent all day for a week berating a person over a typo. I never sacked anyone for a typo, even one that was missed by someone else.

And I certainly never, after ranting for a week over a typo, came back to it in subsequent weeks time and time again, making jokes at the person's expense.

I never found a typo so important that I stopped the analysis of a matter of significance to promote the typo's significance, making it the only thing I spoke about in the office day after day with every client who came through the door.

So what I would suggest is that those who do are either in need of help or have another agenda.

I would suggest that the appropriate action, even for a pedant, is to circle the typo and send it to Tony Dawe with a "please correct".

Then, if they are so inclined, post that they have detected the typo and notified the company.

You obviously do not agree, and consider it should dominate the conversation for weeks on end.

If you were an employer, I think you would find yourself on the receiving end of a constructive dismissal case if you behaved like this with your employees.

The End.

My opinion no further research or comment required.
Fact Finder

Hello Fact Finder

I don't like stories, but rather reality!

1. Nobody discussed berating anybody "all day for a week"! You were the one who wrote the most words about it.
2. Secretaries should not make typing mistakes, especially nowadays with automatic correction programs, and especially when going public. It always gives a bad impression. In this case, nobody did a second check!
3. It surprises me over and over again how much, and how often, you defend BrainChip, even over minor topics.

My end too.

Have a good weekend!

CHIPS
 
  • Love
  • Like
Reactions: 17 users

CHIPS

Regular
  • Like
Reactions: 2 users

tjcov87

Member
Couple of dots for CES, and 4 years in the making perhaps... May have already been linked prior, so apologies in advance if that's the case.

https://www.biometricupdate.com/202...sing-capabilities-opens-developer-environment

BrainChip has demonstrated the capabilities of its latest class of neuromorphic processing IP and devices in two sessions at the tinyML Summit at the Samsung Strategy & Innovation Center in San Jose, California, the company announced.

“We recognize the growing need for low-power machine learning for emerging applications and architectures and have worked diligently to provide a solution that performs complex neural network training and inference for these systems,” Louis DiNardo, CEO of BrainChip, said in a prepared statement. “We believe that as a high-performance and ultra-low-power neural processor, Akida is ideally suited to be implemented in Edge and IoT applications.”

In a session titled “Bio-Inspired Edge Learning on the Akida Event-Based Neural Processor,” BrainChip demonstrated how the Akida Neuromorphic System-on-Chip processes standard vision CNNs using industry-standard flows and distinguishes itself from traditional deep-learning accelerators (DLAs) through key design choices and a bio-inspired learning algorithm. As a result, Akida requires 40 to 60 percent fewer computations to process a CNN than a DLA.

https://news.samsung.com/global/sam...ble-expansive-kitchen-experiences-at-ces-2024

AI Features That Enable Food Ideas and Intelligence

To enhance the experience in the kitchen, the 2024 Bespoke 4-Door Flex™ Refrigerator with AI Family Hub™+ has been packed with a variety of innovative technologies. One impressive new feature is AI Vision Inside, which uses a smart internal camera that can recognize items being placed in and taken out of the refrigerator. It is also equipped with “Vision AI” technology, which can identify up to 33 different fresh food items based on a predefined set of training data comprising approximately one million food photographs. With the food list, available and editable on the Family Hub™+ screen, users can also manually add expiration-date information for items they would like to keep track of, and the refrigerator sends out alerts through its 32" LCD screen before items reach that date.
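The alert behaviour the press release describes (an editable food list with expiry dates, and warnings before items reach them) can be sketched in a few lines. All names here are hypothetical; this is not Samsung's actual Family Hub software.

```python
from datetime import date, timedelta

# Minimal sketch of an expiry-alert check over a user-editable food list.
# Function and item names are made up for illustration only.

def expiring_items(food_list, today, warn_days=3):
    """Return names of items whose expiry falls within the warning window."""
    cutoff = today + timedelta(days=warn_days)
    return [name for name, expiry in food_list.items()
            if today <= expiry <= cutoff]

food_list = {
    "milk": date(2024, 1, 8),
    "celery": date(2024, 1, 20),
    "yoghurt": date(2024, 1, 6),
}
print(expiring_items(food_list, today=date(2024, 1, 6)))  # → ['milk', 'yoghurt']
```

The real appliance would presumably trigger the check on a timer and render the result on the 32" screen, but the core logic is just a date comparison like this.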
 
  • Like
  • Fire
  • Thinking
Reactions: 39 users
(quoting tjcov87's CES post above)
Great find, and don't apologise for mentioning it. Samsung seems to be such a hot lead. Let's hope it's us in their 2024 roadmap. There are so many hints already.
 
  • Like
  • Fire
Reactions: 11 users
Wonder what, if any, potential impact this move by Prophesee will have, given the current US/China chip issues 🤔



Hong Kong's dynamic innovation scene attracts French semiconductor company setting up regional headquarters (with photo)
*************************************************

Invest Hong Kong announced today (December 12) that it has helped French semiconductor company Prophesee set up a regional headquarters in the city to push its neuromorphic artificial intelligence (AI) technology across Asia.

The department welcomed the establishment of Prophesee in Hong Kong. The Associate Director-General of Investment Promotion, Dr Jimmy Chiang, said, "We are happy to see that the company capitalises on the city's advantages in research and development capabilities, advanced technology infrastructure, legal system and deep pool of local talent to set up its regional headquarters in Hong Kong."

He added, "With opportunities brought by the National 14th Five-Year Plan and the Guangdong-Hong Kong-Macao Greater Bay Area, Hong Kong can act as a strategic hub for innovative companies, like Prophesee, looking to access the Mainland market."

The Co-founder and CEO of Prophesee, Mr Luca Verre, said that the company's primary objective is to expand the adoption of the neuromorphic AI technology in mass market segments such as mobile phones, augmented reality or virtual reality headsets, and Internet of Things (IoT) cameras, as well as industrial automation and automotive sectors.

He said, "The Greater China region and the broader Asia markets represent the largest and fastest growing market for many of these segments. As we are accelerating our commercial expansion and are already achieving wins with major mobile manufacturers and IoT solution providers in the Greater China region, we think it's an optimal time to come to Hong Kong and make it a regional centre for our further expansion in Asia."

He added, "The new Hong Kong office is not only our regional headquarters in the Asia-Pacific region, but also our new customer innovation centre. It oversees the business operations in the Greater China region, Japan and Korea and develops innovative neuromorphic solutions for our global customers. We will be transferring and also hiring senior executive team members in Hong Kong to facilitate the process."

Prophesee develops a breakthrough Event-Based Vision approach to machine vision. This new vision category allows for significant reductions in power, latency and data-processing requirements, revealing what was until now invisible to traditional frame-based sensors.
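The data-reduction claim in that last paragraph is easy to illustrate with a toy simulation: instead of shipping every pixel of every frame, an event-based sensor reports only pixels whose brightness changed beyond a threshold. This is a generic sketch of the idea, not Prophesee's actual sensor pipeline.

```python
import numpy as np

# A small bright square moves across an otherwise static synthetic scene.
rng_h, rng_w, n_frames = 120, 160, 50
frames = np.zeros((n_frames, rng_h, rng_w), dtype=np.float32)
for t in range(n_frames):
    frames[t, 40:60, 2 * t:2 * t + 20] = 1.0

# "Events" are pixels whose brightness changed beyond a threshold
# between consecutive frames; a frame camera would ship every pixel.
threshold = 0.5
diffs = np.abs(np.diff(frames, axis=0))
events = int((diffs > threshold).sum())
frame_pixels = (n_frames - 1) * rng_h * rng_w

print(events / frame_pixels < 0.01)  # → True: well under 1% of the frame data
```

For mostly static scenes the event stream is a tiny fraction of the raw frame data, which is where the power and latency savings come from.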
 
  • Like
  • Thinking
  • Fire
Reactions: 15 users

IloveLamp

Top 20
Screenshot_20240105_210002_LinkedIn.jpg
 
  • Like
  • Love
  • Thinking
Reactions: 14 users

Iseki

Regular
This pretty much just added the same value (i.e. zero), but you used more words and encouraged people to take a nap halfway through reading because it was so rehashed and such a waste to read. Waiting for fresh material from you, buddy.
Chris Stevens likes this
 
  • Haha
  • Fire
Reactions: 3 users

Iseki

Regular
This pretty much just added the same value (i.e. zero), but you used more words and encouraged people to take a nap halfway through reading because it was so rehashed and such a waste to read. Waiting for fresh material from you, buddy.
Rob Telson likes this, but Chris Stevens likes it more..
 
  • Love
Reactions: 1 users
(quoting tjcov87's CES post above)
"based on a predefined set of training data comprising approximately one million food photographs"

What are your thoughts on the size of the data set @Diogenese ?

To me it seems too large to have anything to do with AKIDA, but then how is a "zoo" or library created for use by AKIDA?

It would take more training to identify, say, stiff celery from floppy celery..


Children, please 🙄..
 
  • Haha
  • Like
Reactions: 7 users
(quoting CHIPS's reply to Fact Finder above)
I'm pretty sure the secretaries Fact Finder is talking about worked in black-and-white offices..

100.gif
 
  • Haha
  • Like
  • Wow
Reactions: 16 users

wilzy123

Founding Member
Chris Stevens likes this
Rob Telson likes this, but Chris Stevens likes it more..

You biting onto my post (twice.... ROFL) tells me, and everyone else with more than 3 neurons in their brain, all that we need to know.
 
  • Like
Reactions: 1 users

wilzy123

Founding Member
Stop feeding the troll!

I agree. You've been eating too many chips and drinking so much bin juice that your posts have become the stuff only a homicide detective would instruct someone to dig through.
 
  • Haha
Reactions: 3 users

JDelekto

Regular
(quoting the question above about the size of the training data set)

When creating a data set used for inference, a set of 'features' is extracted and used for training the model. Various features are identified for image models and tagged, such as shapes. One could, for example, train a model to recognize dress shirts, blouses, t-shirts, hoodies, etc.

As these features are fed into a neural network, the "weights" are calculated and become part of the model. The weights are stored in a binary format and can be quantized; "quantized" is a fancy word for doing some "magic math" so these numbers take up less space. We hear about 8-bit, 4-bit, etc. The fewer bits used to represent a weight, the less memory it requires. Quantization comes at a cost, since you can lose some accuracy when doing the inferencing to recognize things.
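JDelekto's quantization point can be made concrete with a toy example: mapping float32 weights onto signed 8-bit integers with a single scale factor gives a 4x memory saving at the cost of a small, bounded rounding error. This is a generic sketch; real frameworks add per-channel scales, zero points and calibration, and none of it is specific to Akida.

```python
import numpy as np

# Toy 8-bit linear quantization of float32 "weights".
rng = np.random.default_rng(0)
weights = rng.normal(0.0, 0.1, size=1000).astype(np.float32)

# One scale maps the observed float range onto the int8 range [-127, 127].
scale = np.abs(weights).max() / 127.0
q = np.round(weights / scale).astype(np.int8)

# Dequantize to measure the accuracy cost of the smaller representation.
restored = q.astype(np.float32) * scale
max_err = np.abs(weights - restored).max()

print(weights.nbytes // q.nbytes)   # → 4 (float32 -> int8, 4x smaller)
print(max_err <= scale / 2 + 1e-6)  # → True: error bounded by half a step
```

The "cost" JDelekto mentions shows up as that rounding error: each weight can be off by up to half a quantization step, which is why very aggressive (e.g. 4-bit) quantization needs more care to preserve accuracy.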

Microsoft's ResNet-50 model (the 50 represents the number of layers in the model, with each layer having a set of weights) is trained on millions of images. Yet the model itself fits in around 100 megabytes of memory. Most cell phones today have gigabytes of storage. To put it in perspective, most wireless routers today have between 128 and 512 MB of memory, and a Samsung smart refrigerator has about 2.5 GB of RAM and 8 GB of flash memory, much more space than the model requires.
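The "around 100 megabytes" figure checks out from first principles: ResNet-50 has roughly 25.6 million parameters, and at 4 bytes per float32 weight that comes to about 102 MB before any quantization.

```python
# Back-of-the-envelope check of the model-size figure quoted above.
params = 25_600_000       # approximate ResNet-50 parameter count
bytes_per_weight = 4      # float32
size_mb = params * bytes_per_weight / 1e6
print(round(size_mb))     # → 102 (MB), i.e. "around 100 megabytes"

# Quantizing to 8-bit weights would shrink this roughly fourfold:
print(round(params * 1 / 1e6))  # → 26 (MB)
```

This is exactly why a model trained on millions of images still fits comfortably in an appliance's flash storage: the images are only used during training, and all that ships is the weights.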

That being said, a model that is trained on over a million images to recognize most refrigerator contents can easily fit within the confines of an appliance or mobile device.

I like to think of a neural network model as a "snapshot" of a brain state. When you learn new images, you're not necessarily increasing the size or mass of your brain, but you're altering the connections between the neurons in your brain that can remember and recognize those images later. As you see more pictures of a cat to learn the shape of its face, nose, ears, tail, etc. that make up that cat, then the more likely you are to recognize another picture of a cat, even if it is a color you've never seen before.
 
  • Like
  • Love
  • Fire
Reactions: 26 users

Frangipani

Top 20
Sigh, sometimes a humble Arduino microcontroller for less than US$100 with minimal computational power is not only deemed good enough for certain research projects, but is also explicitly labelled as more economical (both in price and in power consumption) than technically far superior hardware such as an Akida PCIe Board or Dev Kit…

But on the upside: Dr Ivan Maksymov, an Associate Professor at Charles Sturt University’s AI and Cyber Futures Institute, where he conducts both theoretical and experimental research on physical foundations of AI (https://researchoutput.csu.edu.au/en/persons/ivan-maksymov), is at least fully aware of Brainchip and its products for the general public. Who knows - he might even find Akida’s prowess useful for another of his research projects…


C571652E-3A80-40AB-BF8C-E0B6D6B3F58F.jpeg



F1334860-F833-443A-A778-5797E53C1D29.jpeg


6863D523-CB2B-4F74-8314-FD5D20805E83.jpeg
 
  • Like
  • Thinking
  • Wow
Reactions: 21 users
(quoting JDelekto's explanation above)
So you're saying the use of such a large data set doesn't exclude the use of AKIDA, but is it indicative of using AKIDA or not? 🤔..
 
  • Like
Reactions: 2 users

Diogenese

Top 20
(quoting the question above about the size of the training data set)
Hi Db,

I'll keep my doodle entendres under wraps.

This Samsung portmanteau patent application covers all sorts of domestic appliances with NNs, cameras, AI models, model training, speech recognition, display screens, menu suggestions ...

They contemplate having software NNs or SoC NNs.

WO2023090725A1 DOMESTIC APPLIANCE HAVING INNER SPACE CAPABLE OF ACCOMMODATING TRAY AT VARIOUS HEIGHTS, AND METHOD FOR ACQUIRING IMAGE OF DOMESTIC APPLIANCE 20211118

1704458661278.png



As shown in FIG. 2, a home appliance 1000 according to an embodiment of the present disclosure may include a camera 1100 and a processor 1200. The processor 1200 controls overall operations of the home appliance 1000. By executing programs stored in the memory 1800, the processor 1200 can control the camera 1100, the driving unit 1300, the sensor unit 1400, the communication interface 1500, the user interface 1600, the lighting 1700, and the memory 1800.

According to one embodiment of the present disclosure, the home appliance 1000 may be equipped with an artificial intelligence (AI) processor. The AI processor may be manufactured in the form of a dedicated hardware chip for AI, or as part of an existing general-purpose processor (e.g. CPU or application processor) or graphics-only processor (e.g. GPU), and may be mounted on the home appliance 1000.

According to an embodiment of the present disclosure, the processor 1200 obtains, through the camera 1100, a first image including the tray 1001 inserted into the internal space of the home appliance 1000, and uses the first image to identify the height at which the tray 1001 is inserted in the inner space. The processor 1200 may also identify the height at which the tray 1001 is inserted based on information obtained from at least one of the depth sensor 1410, the weight sensor 1420, and the infrared sensor 1430. The operation of identifying the height at which the tray 1001 is inserted will be described later in detail with reference to FIGS. 5 to 8.

According to an embodiment of the present disclosure, the processor 1200 determines a setting value related to image capture of the interior space according to the height at which the tray 1001 is inserted and, based on the determined setting value, may obtain a second image (hereinafter also referred to as a monitoring image) including the contents placed on the tray 1001. For example, the processor 1200 determines the brightness value of the lighting in the interior space according to the height at which the tray 1001 is inserted, adjusts the brightness of the lighting 1700 disposed in the interior space according to the determined value, and controls the camera 1100 to acquire the second image. The processor 1200 may also determine the size of a cropped area according to the height at which the tray 1001 is inserted and obtain the second image by cropping a portion of the surrounding area from the first image based on that size. Alternatively, the processor 1200 may obtain the second image by determining a distortion correction value of the camera 1100 according to the height at which the tray 1001 is inserted and applying the distortion correction value to the first image. The operation in which the processor 1200 acquires the second image (monitoring image) by applying a set value according to the height at which the tray 1001 is inserted will be described in detail later with reference to FIGS. 9 to 16.



The input interface 1620 may include a voice recognition module. For example, the home appliance 1000 may receive a voice signal, an analog signal, through a microphone and convert the speech into computer-readable text using an Automatic Speech Recognition (ASR) model. The home appliance 1000 may determine the user's utterance intention by interpreting the converted text with a natural language understanding (NLU) model. Here, the ASR model or NLU model may be an artificial intelligence model. An AI model can be processed by an AI processor designed with a hardware structure specialized for processing such models. AI models are created through learning: a basic AI model is trained on a large body of training data by a learning algorithm, producing a predefined operating rule or AI model set to perform a desired characteristic (or purpose). An AI model may be composed of a plurality of neural network layers. Each layer has a plurality of weight values, and a neural network operation is performed between the operation result of the previous layer and those weight values.



Alternatively, the electronic device 200 may be implemented as an electronic device connected to a display device including a screen through a wired or wireless communication network. For example, the electronic device 200 may be implemented in the form of a media player, a set-top box, or an artificial intelligence (AI) speaker.

...

For example, a refrigerator may provide a service that recommends menus suitable for the stored ingredients. Meanwhile, in order for smart appliances to provide smart services based on object recognition, the object recognition rate needs to be improved.



In an embodiment, at least one neural network and/or a predefined operating rule or AI model may be stored in the memory 220 . In an embodiment, a first neural network for obtaining multi-mood information from at least one of user context information and screen context information may be stored in the memory 220 .



In an embodiment, the processor 210 may use artificial intelligence (AI) technology. AI technology consists of machine learning (deep learning) and element technologies that use machine learning, and can be implemented with algorithms. Here, an algorithm or set of algorithms for implementing AI technology is called a neural network. A neural network may receive input data, perform calculations for analysis and classification, and output result data. For the neural network to accurately output result data corresponding to the input data, it must be trained. Here, 'training' means feeding various data into a neural network and training it so that it can by itself discover or learn how to analyze and classify the input data and/or extract the features needed to generate result data. Training a neural network means that an AI model with the desired characteristics is created by applying a learning algorithm to a large amount of training data. In an embodiment, such learning may be performed on the electronic device 200 itself, or through a separate server/system.
 
  • Like
  • Thinking
Reactions: 15 users