BRN Discussion Ongoing

wilzy123

Founding Member
I’m honestly surprised you notice the difference Wilzy.
I am surprised too

banana-laughing.gif
 
  • Haha
  • Like
Reactions: 9 users

Labsy

Regular
I remember when we hit $2.30 or whatever it was, there was a poster over on the crapper that sold out and said something along the lines of:
"Well that does it, cashed out... Time for me to ride off into the sunset. Later, dudes"
God I envy that person at the moment! Lucky bastard (or smart, probably the second one)
... I was actually a millionaire (on paper) for about 15 minutes lol
A millionaire...again... you will be...my friend. Succeed...brainchip will....
Succumb to negative thoughts.... we should not....
May the neuromorphic force be with you.
 
  • Like
  • Haha
  • Love
Reactions: 36 users

Galaxycar

Regular
Think the next big announcements will be an Extraordinary AGM, a change of directors and Sean; the next quarterly will sow the seeds for strike 2, and hopefully Tony gets the flick too. Biggest waste of money.
A millionaire...again... you will be...my friend. Succeed...brainchip will....
Succumb to negative thoughts.... we should not....
May the neuromorphic force be with you.
 

keyeat

Regular
Yes, hence why I posted it. It also seems quite a few other people were just as intrigued as I was...

Instead of making a snarky comment because you can't control your ego/emotions... you could always just as easily scroll on...

Just a thought
Wow, chill bro. I just don't think what you post is relevant. I like a load of things on LinkedIn; it doesn't mean the company I work for has any link to it :)

Let's leave ego and emotions out of this...
 
  • Like
Reactions: 2 users

IloveLamp

Top 20
Wow, chill bro. I just don't think what you post is relevant. I like a load of things on LinkedIn; it doesn't mean the company I work for has any link to it :)

Let's leave ego and emotions out of this...
oh-no-anyway.gif
 
  • Haha
  • Like
Reactions: 7 users

Cartagena

Regular
A millionaire...again... you will be...my friend. Succeed...brainchip will....
Succumb to negative thoughts.... we should not....
May the neuromorphic force be with you.

Talking about neuromorphic, isn't this what Brainchip does?


Australian Defence Magazine



(Image credit: Defence)

Sydney Nano develops neuromorphic sensor for RAAF

24 May 2021
The Jericho Smart Sensing Lab at the University of Sydney Nano Institute has developed a prototype sensor system for the RAAF that mimics the brain’s neural architecture to deliver high-tech sensing technology.
Dubbed MANTIS, the system under development integrates a traditional camera with a visual system inspired by neurobiological architectures. This ‘neuromorphic’ sensor works at 'incredibly high' speeds, which will allow aircraft, ships and vehicles to identify fast moving objects, such as drones.
“Combining traditional visual input with the neuromorphic sensor is inspired by nature. The praying mantis has five eyes – three simple eyes and two composite. Our prototype works a bit like this, too,” Professor Ben Eggleton, Director of Sydney Nano, said.
MANTIS combines two visual sensing modes in a lightweight portable unit, operated via dashboard interface, with a processing system that is on board the craft, ship or vehicle. The aim is to enable a direct comparison of images and allow for the rapid exploration of the neuromorphic sensor capabilities.
Sydney Nano worked with the School of Architecture, Design and Planning to develop the prototype in just three months.
“There are many things that excite me about MANTIS. The level of detail that it provides and its ability to track high-speed events is very impressive," Air Vice-Marshal Cath Roberts, Head of Air Force Capability, said. “It's a promising sensor fusion that has really strong potential across Defence.”

Professor Eggleton leads the Jericho Lab team that saw delivery of the prototype. The four-kilogram small-form design will allow the camera to be easily used on aircraft, ships and vehicles to detect challenging targets.

“The neuromorphic sensor has exquisite sensing capabilities and can see what can't be seen with traditional cameras,” he said. “It invokes the idea of the eye in animals but has leading-edge technology built into it.”

Whereas a traditional camera is constrained by frame rates, each pixel in a neuromorphic camera functions independently and is always ‘on’. This means the imaging system is triggered by events. If it’s monitoring a static scene, the sensor sees nothing and no data is generated.
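The event-triggered behaviour described above can be sketched in a few lines of Python. This is only a toy model of the principle (the `frame_to_events` helper and its threshold are illustrative assumptions, not the MANTIS implementation):

```python
import numpy as np

def frame_to_events(prev, curr, threshold=15):
    """Toy model of an event (neuromorphic) camera: emit a
    (row, col, polarity) event only where a pixel's brightness
    changed by more than `threshold`; a static scene emits nothing."""
    diff = curr.astype(int) - prev.astype(int)
    rows, cols = np.where(np.abs(diff) > threshold)
    # polarity: +1 for brighter, -1 for darker
    return [(int(r), int(c), 1 if diff[r, c] > 0 else -1)
            for r, c in zip(rows, cols)]

# A static scene generates no data at all...
scene = np.full((4, 4), 100, dtype=np.uint8)
assert frame_to_events(scene, scene) == []

# ...while one fast-moving bright object yields exactly one event.
moved = scene.copy()
moved[2, 3] = 200
print(frame_to_events(scene, moved))  # [(2, 3, 1)]
```

The key point the article makes falls out of the sketch: with no frame clock, data volume scales with scene activity rather than with time.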

“When there is an event, the sensor has incredible sensitivity, dynamic range and speed,” Professor Eggleton said. “The data generated is elegantly interfaced with an IT platform allowing us to extract features using machine-learning artificial intelligence.”

“We look forward to developing this device further and collaborating with other experts in this area, including Western Sydney University’s International Centre for Neuromorphic Systems, which are the leaders in neuromorphic research in Australia.”

MANTIS is the result of the partnership between the University of Sydney Nano Institute and Air Force’s Jericho Disruptive Innovation.

“A rapid prototype of this type and scale happening in three months, during COVID, is remarkable,” said Wing Commander Paul Hay, head of advanced sensing at RAAF Jericho Disruptive Innovation.

The Defence Science and Technology Group (DSTG) was also involved in the collaboration, providing early guidance and input.
 
  • Like
  • Thinking
  • Wow
Reactions: 17 users
I know it's a bit older news, and Amit's previous decade with Qualcomm has been posted about before, but I was just reading an article from a couple of years ago about the company.

The partnership looks pretty promising given the synergies between their existing product tech solutions and Akida.

Interesting to see that the other founder is also ex-Qualcomm and worked on commercialising Snapdragon chipsets, as well as GMAC being included in Qualcomm's Smart Cities Accelerator Program and NVIDIA's Inception cohort.

Will be interesting to see how this one develops too.


HOW THESE IISC CLASSMATES ARE BUILDING A GLOBAL DEEP TECH COMPANY TO ENABLE AI ON THE EDGE

Amit Mate and Nagaraj B’s GMAC Intelligence is developing real-time, power-efficient and accurate AI/ML algorithms to enable server-less solutions on Edge devices such as surveillance cameras, small robots, drones and autonomous vehicles.

Team YS | Tuesday, June 08, 2021 | 6 min read

For Amit Mate and Nagaraj B, co-founders of GMAC, the decision to launch a startup together wasn't a difficult one, given the camaraderie they’d always shared, be it as classmates at the Indian Institute of Science (IISc) or while working at Qualcomm.

The journey from being classmates to business partners was inspired by a single goal: to make innovative products using state-of-the-art technologies such as artificial intelligence, 5G and Edge computing accessible to one and all. With this goal, the duo launched GI4ALL, an AIoT-as-a-service for businesses, enterprises and city governments.

GMAC is commercialising its AI/ML software and algorithms to power applications such as touchless attendance, visitor management, automatic licence plate recognition and activity recognition.


Customers can access these services by signing up on their portal gi4all.web.app and installing the GI4ALL app from Google Playstore on multiple Edge AI cameras.

GMAC’s AI/ML software is capable of processing all AI on the cameras itself in real time, so customers don’t need to buy and maintain expensive servers.

GMAC also provides a server-less cloud component to store relevant data and share intelligence across devices – which makes their solution more robust than any other systems out there in the market.

A real-time dashboard ensures that the authorised stakeholders get regular updates and reports as captured by the AI/ML software.

These offerings run on Android devices that are accessible by almost everyone.

Connecting the dots

The inspiration behind GMAC’s design ideology can be traced back to Amit and Nagaraj’s vast experience of working on cutting-edge technologies such as 3G/4G, Edge compute, VR and AI/ML during their stints with Qualcomm between 2008 and 2019.

Amit has vast global R&D experience in leading and bringing new technologies to life such as 3G-Femtocells, 4G modems, real-time VR and AI across Nokia Research (Helsinki, Finland), Qualcomm Research (San Diego, CA) and other high-profile startups (Silicon Valley and East Coast).

He also has several award-winning patents to his name.

Nagaraj was extensively involved in commercialising chipsets like the Snapdragon 630 and Snapdragon 730 which are being used in millions of smartphones today.

With their experience of working on pioneering technologies for consumer electronics, the duo wondered how such solutions can be made available to businesses that lack the IT infrastructure for a massive digital transformation, and whether AI/ML models could simplify such access.

“Once we decided to launch GMAC, we aimed to combine AI, IoT and SaaS platforms in a way that it becomes easy even for SMBs/SMEs/governments to deploy sophisticated tools for tasks like recording attendance, visitor management and access control using facial, vehicle and activity recognition,” Amit says. “We wanted to design plug-and-play solutions that can be used with various IoT devices such as digital locks and boom barriers using AI.”

Turning a crisis into a deep tech opportunity

The business model was finalised by May 2020, right when the COVID-19 pandemic was at its peak.

However, the founders made the best of the situation. “During the pandemic, a lot of students who had plans to study abroad were stuck in the country,” Amit recalls. “At the same time, a lot of talented professionals were also looking for challenging opportunities in the AI/ML space.

This enabled us to hire the best professionals from across India. So, we laid the foundation of the company by choosing the best talent to design our products.”

Another factor that helped GMAC is the launchpad they got from the local deep tech community. “IISc announced GMAC as one of the upcoming startups which were a part of its ‘Deep Tech @ IISc’ cohort in 2019,” said Amit.

The initiative was launched especially for IISc alumni, faculty members and students and aimed at supporting solutions that leveraged deep tech to drive impact in the society.

“Then Qualcomm welcomed us to their Smart City Accelerator programme and NVIDIA to their Inception cohort and provided us the much needed visibility in the global deep tech community,” he adds.

According to the founders, the pandemic helped them design products that were future-forward. “For instance, the residential society where I live has over 100 security guards, domestic help and gardeners coming in every day,” Amit says.

Before the pandemic, they were paid according to the number of days they reported for work, and their attendance was recorded via a finger-printing device.

But during the pandemic, the residential society asked Amit if his company could design a system that can allow attendance to be recorded in a contactless fashion.

“So, we installed the GMAC touchless attendance device on multiple gates to solve the problem,” says Amit. What worked in GMAC’s favour was the product’s easy-to-use interface.

“Almost anybody could use it and it did not require any changes in the existing IT systems of the society. So, every time a new maid or guard was hired, they could easily be added to the system without any hassles.”

Built to scale

Talking about the USPs of GMAC’s offerings, Nagaraj says these products’ versatility is their biggest advantage.

“There are a large number of SMEs and SMBs that don't want to shift from their legacy systems,” he notes.

As GMAC solutions run on Android, they can be accessed by everyone who has a smartphone. “Our solutions are cost-effective, and do not require one to switch to sophisticated technologies like cloud server environments, which might not always be feasible for places like factories, hospitals and schools, given their IT constraints,” Nagaraj explains.

Moreover, as all the AI processing happens on camera, GMAC’s customers do not have to buy and install any servers.

Their cost-effectiveness and accessibility give GMAC an edge over several established players. The startup depends on Google to shield its software and devices from cyberattacks and security breaches.

“Using Google’s security features has helped us build standardised products.

While the software has built-in biometric protection, it is also password-protected and can be accessed only by the admins.

The devices have clamps and locks to hold them in place and prevent theft,” says Amit.

For now, the two friends-turned-business partners are busy scaling their sales by targeting the ideal customer profile for their products.

Less than a year in, GMAC is already taking strides with a technology that can be scaled, and is charting out growth strategies in areas such as facial recognition, and designing solutions that can sense hazardous situations and accidents in factories.

“We are also working on activity detection solutions, which can help companies improve management of employees and detect staff members who might be idling away during shifts,” Nagaraj says.

While there might be similar solutions being offered by other players, Amit says that most of them are not plug-and-play or 5G-ready.

“This is where GMAC would come in,” he adds. Currently, GMAC solutions have been deployed in residential societies, hotels, hospitals, colleges, movie theatres and factories.

Read more at: https://yourstory.com/2021/05/iisc-classmates-deeptech-gmac-enable-iot
 
  • Like
  • Fire
  • Love
Reactions: 17 users
Further to the above, I see a preview, uploaded two days ago, of a recent presso Amit did at the Edge AI and Vision Alliance software summit.

Don't know if it covers anything on neuromorphic, as it discusses about 4 mins of CNN training, but given the partnership with BRN, you could deduce that maybe they're looking at the CNN2SNN function at some point?

Haven't bothered to register yet, maybe tomoz, but the full version is apparently here for anyone interested.


“Fundamentals of Training AI Models for Computer Vision Applications,” a Presentation from GMAC Intelligence

Algorithms, GMAC Intelligence, Software, Summit 2023, Tools, Videos / September 4, 2023
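For anyone wondering what a CNN2SNN-style conversion involves, the sketch below illustrates one general idea behind CNN-to-SNN conversion: rate coding, where a trained CNN's activations are re-expressed as spike trains whose firing rates match the activation values. This is a toy illustration of the concept only; the function name is hypothetical and BrainChip's actual CNN2SNN tool (which quantises weights/activations for Akida) is not reproduced here.

```python
import numpy as np

def activations_to_spike_trains(acts, timesteps=1000, seed=0):
    """Illustrative rate coding: each activation in [0, 1] becomes a
    Bernoulli spike train whose average firing rate equals the
    activation value. (Hypothetical helper, not BrainChip's API.)"""
    rng = np.random.default_rng(seed)
    acts = np.clip(np.asarray(acts, dtype=float), 0.0, 1.0)
    # spikes[t, i] is 1 with probability acts[i] at each timestep t
    return (rng.random((timesteps,) + acts.shape) < acts).astype(np.uint8)

acts = [0.0, 0.25, 0.9]
spikes = activations_to_spike_trains(acts)
print(spikes.mean(axis=0))  # firing rates closely track the activations
```

The appeal for edge hardware is that a zero activation produces zero spikes, and therefore (on event-driven silicon) essentially zero work.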
 
  • Like
  • Fire
  • Thinking
Reactions: 21 users
Short Q&A from this time last year.

Def see the synergies.


EDGE EXECUTIVE INSIGHT – AMIT MATE, FOUNDER & CEO, GMAC INTELLIGENCE – INNOVATOR OF THE YEAR FINALIST



In the lead up to Edge Computing World, we’re taking some time to speak to key Executives from the leading companies. Today we’re talking with Amit Mate, Founder & CEO of GMAC Intelligence

Tell us a bit about yourself – what led you to get involved in the edge computing market and GMAC Intelligence

I have always been fascinated by AI algorithms and how they can solve problems where traditional engineering tools fail. In my career, even before deep learning was a thing, I was using learning algorithms to improve wireless technology and enable on-device machine vision. As I was working in the Edge compute industry, I was able to see Moore’s law taking root in AI compute. It was the perfect time to combine two of my passions, connectivity and AI algorithms, to build something useful and accessible to the masses. My co-founder, IISc classmate (Masters) and Qualcomm colleague Nagaraj and I decided to take the plunge and build something for the global market.

What is it you & your company are uniquely bringing to the edge market?

We are building server-less AI/ML solutions. Our innovations have facilitated moving all AI inferencing from dedicated cloud servers to on-device, on-premise and server-less cloud/lambda functions in our solutions. We have been able to commercialize our AI/ML solutions with 1000x cost reduction while still maintaining an expense ratio of 0.01.

Tell us more about the company, what’s your advantages compared to others on the market, who is involved, and what are your major milestones so far?

We build connected intelligence solutions, where our algorithms and software enable multiple connected smart devices to learn, infer and collaborate on the Edge.

We build plug-n-play solutions – which enable easy and rapid deployment and scaling.

We are striving for the highest useful TOPs/$ on the Edge – by making our solutions multi-stream, multi-AI, multi-delegate (GPU/CPU/NPU).

We are offering our solutions with a SaaS business model. Customers can choose any solution from facial/vehicle/activity recognition OR any custom application. We provide a unified dashboard and an LCNC option to integrate the Edge meta-data into existing dashboards. Our solutions have built-in BLE based access control.

We have deployed these solutions commercially in India and now looking to bring our capabilities to solve problems of the US Enterprise market.

How do you see the edge market developing over the next few years?

Edge solutions will increase in complexity and functionality over the next few years starting from a few sectors such as QSRs, Retail, Security (SMEs) and finally engulfing the entire Enterprise space.

Consumer grade devices will increase in AI compute capability from 20 TOPs today to 200 TOPs in a couple of years and continue Moore’s law. This will enable several new applications on the Edge that were not possible a few years ago.

What are the main trends you see about integrating edge computing into different verticals?

As we start integrating Edge computing into different verticals and Edge AI starts to proliferate, TCO per Edge site is going to be a major factor in decision making.

Edge compute with built-in connectivity options will also have a better value proposition as we start to build learning and collaborating Edge AI systems.
 
  • Fire
  • Like
  • Love
Reactions: 8 users

stockduck

Regular
BrainChip is a partner of emotion3D, and now emotion3D is collaborating with Apex.AI; both are software based. So will BrainChip's Akida be the hardware? Time will tell.

emotion3D and Apex.AI collaborate to introduce advanced driver and occupant analysis capabilities using Apex.AI’s vehicle operating system


Vienna, Austria / Palo Alto, California, August 31, 2023 — emotion3D, a leading automotive software supplier delivering innovative in-cabin analysis solutions and Apex.AI, a global software provider for safety critical mobility systems, have entered into a collaboration. The seamless integration of emotion3D’s software with Apex.Grace enables a highly effective system for driver and passenger analysis.



As higher automation levels in automotive continue to redefine transportation, ensuring the safety and comfort of passengers remain paramount concerns. Apex.AI’s software platform provides the foundation for safe and reliable automotive and autonomous systems while emotion3D’s in-cabin analysis software stack helps increase the safety and comfort of all passengers. Through this collaboration, the companies have integrated emotion3D's cutting-edge in-cabin analysis technology with Apex.Grace, the software framework provided by Apex.AI, taking a significant step towards enhancing passenger experience and road safety.


Florian Seitner, CEO of emotion3D, expresses his enthusiasm about the partnership by stating “We are excited to enter this partnership with Apex.AI as this collaboration enables us to leverage our driver monitoring technology and Apex.AI's capabilities to create a more secure and connected driving experience.”


“Apex.AI is establishing operating software for the software-defined vehicle age. The partnership with emotion3D is a win-win for both parties: Our development environment, consisting of Apex.Grace and Apex.Ida, is the perfect basis for applications such as emotion3D’s driver and passenger monitoring software. The tight integration enables customers to efficiently develop their functional software responding to drivers’ states even faster” explains Jan Becker, CEO of Apex.AI.


Learning 🪴
So thank you for posting this.
I read it today in the news from the IAA 2023 in Munich.
More information found:

https://www.iaa-transportation.com/en/newsroom/iaa-voices-interview-apex-ai

Interview Apex.AI

".......

Before joining Apex.AI, he held various positions with companies, including Nvidia, Daimler Group, and Delphi.

..."


I love to see this membership... like these "body members" and "voting members" from SOAFEE.

https://www.soafee.io/about/members

https://www.prnewswire.com/news-releases/autonomous-driving-moia-counts-on-apexai-software-for-passenger-management-development-301834296.html

"....
The partnership pays towards MOIA's goal of working with Volkswagen Commercial Vehicles (VWCV) to develop Europe's first type-certified AD-MaaS system and successfully launch an integrated autonomous, scalable ridepooling system on the road in Hamburg after 2025.
...."


....and who recently made an improvement to their patent in the area of "semantic segmentation" :)
(https://cdn-api.markitdigital.com/apiman-gateway/ASX/asx-research/1.0/file/2924-02702229-2A1468998?access_token=83ff96335c2d45a094df02a206a39ff4)

Well, Apex.AI seems to know what they are talking about....

https://www.apex.ai/wir

Have you ever faced system freezes and shutdowns when processing large amounts of data? Specifically, image processing algorithms like deep neural network based object detection and semantic segmentation in autonomous driving applications are very demanding in terms of data transfer rate and processing power.


In this talk, we show how to efficiently implement a computer vision pipeline using Apex.OS (a safety certified fork of ROS2) which utilizes zero-copy optimizations on the middleware level to reduce bandwidth requirements. In addition, we use hardware accelerated versions of the algorithms to increase the throughput. We also explain how to abstract this hardware acceleration in the application code to decouple it from the underlying SoC.

I have no clue if it has to do with brainchip IP(uuuuu):cry::p
 
Last edited:
  • Like
  • Fire
  • Thinking
Reactions: 8 users

stockduck

Regular
When I think about the presentation of the new Mercedes CLA Concept study and the prominently presented water-cooled central processor, which according to various media reports on the Internet was developed by Nvidia, ... I ask myself as a non-physicist:

Why, given such a large mass of computing power and the heat its energy consumption generates every second, is water cooling sufficient?

Why doesn't it have to be some kind of oil fluid, for example, which could get much hotter without evaporating?

Could it be because the bulk of the computing power is handled directly on site at the sensor, and if so, which type of chip technology would be the most economical in terms of mass production?


scientists forward!

Could someone calculate how much thermal energy would be released with this type of Nvidia processor without neuromorphic edge computing on the sensor?:ROFLMAO:

Or maybe, like someone posted before: it's a kind of "flux generator"

Just have a little fun.
 
  • Like
  • Fire
  • Love
Reactions: 8 users

MDhere

Regular
  • Like
Reactions: 1 users

MDhere

Regular
I regret to inform you that I am unable to comply with your request due to a personality disorder that I suffer from that makes me a bit dotty.😝

But, what I can do is ask you if you think there's anything a bit weird about the timeline of events listed below. I mean, if Snapdragon Digital Chassis was so brilliant at voice control, why didn't Markus Shafer toot about Qualcomm on his LinkedIn blog, which was AFTER Mercedes partnered with Qualcomm on its Snapdragon Digital Chassis Solutions?

  • Jan 2022 - “Working with California-based artificial intelligence experts BrainChip, Mercedes-Benz engineers developed systems based on BrainChip’s Akida hardware and software,” Mercedes noted in a statement describing the Vision EQXX. “The example in the Vision EQXX is the “Hey Mercedes” hot-word detection. Structured along neuromorphic principles, it is five to ten times more efficient than conventional voice control,” the carmaker claimed.
  • Sep 2022 - Mercedes and Qualcomm Collaborate to power upcoming Mercedes vehicles with Snapdragon Digital Chassis Solutions
  • Oct 2022 - Markus Shafer (Mercedes CTO) asks everyone to vote on a poll in which neuromorphic computing wins the vote
  • Jan 2023 - Markus Shafer published his blog on Neuromorphic Computing in which he mentions BrainChip and the amazing efficiency of the voice control in the EQXX
  • Sep 2023 - Qualcomm announced its Snapdragon Digital Chassis will enhance voice control in both BMW and Mercedes vehicles
Cocktail of your choice at the next AGM for this post, Bravo. Remind me. ;)
 
  • Like
  • Fire
  • Love
Reactions: 8 users

stockduck

Regular
Bravo, I respect your work so much, but I think we have to stop digging and looking for dots.
You/we've found so many connections over the years, but nothing at all has eventuated into anything material.
It's just time that the company finally delivers. The initial date for revenue was end of 2022. Now everyone acts like it has always been 2024 – 2025.

The last months I received a lot of backlash. I turned from an excited investor who believes in the technology (which I still do btw; it's the management that I find highly incapable) into someone who's very critical in his posts.
The responses are always the same. Either the shorters are blamed for everything, or it's just fine losing 80%+ of a company's value.
What I'm saying is that the share price is a consequence of the capability of our management. And they failed massively.
Someone on this forum even said he'd bet his life that we are involved with Valeo. This claim was backed up by some reactions of the presenter during a presentation when asked if Akida was inside, or something similar; I don't know what the correct wording was.
Well, now we know we're not, as literally everyone has moved away from our failed first-gen product, seemingly even Mercedes-Benz, who were very outspoken.

Now it's time for the management to finally achieve something material. If that happens we can start looking for dots again. Up until that point I'll never take any dot seriously, no matter how convincing it could be.
So, respectfully, my answer to you is: I have a different kind of understanding, but I can also understand your point of view.

For me, BrainChip's technology is nothing like that of disruptive research start-ups in, for example, the battery sector.
If a game-changing battery technology has been developed, it may be much easier to implement in existing products, because the companies (customers) can do the product development on their own.

In the case of our BrainChip "lover", there was never before a market for such products, and the customers must develop the new game-changing technology hand in hand with the inventor. They also must train their own personnel with our researchers in neuromorphic networks. That, I believe, is very time-consuming and needs a lot of NDAs.

It reminds me of the days when Mr. Daimler and Mr. Benz decided to build a new vehicle without living horses in front of it. :giggle:
There was a motor, but you had to build a car around this game-changing technology......long ago.
Is this a picture that helps you see how I feel about the current situation?

But for sure, I could be wrong, and I would also be happy if there were more reliable facts to stabilize market capitalization and shareholder value.

But I believe in the management's decisions. A market for Akida IP products has to be developed, and I think that is what they are doing together with this day-by-day growing ecosystem they have been building up until now.
 
Last edited:
  • Like
  • Love
Reactions: 19 users

Sirod69

bavarian girl ;-)

BrainChip Showcases Foundation for next generation AI solutions at AI Hardware & Edge AI Summit​



Laguna Hills, Calif. – September 6, 2023 BrainChip Holdings Ltd (ASX: BRN, OTCQX: BRCHF, ADR: BCHPY), the world’s first commercial producer of ultra-low power, fully digital, event-based, neuromorphic AI IP, is a Gold Sponsor participant of the AI Hardware & Edge AI Summit September 12-14 at the Santa Clara Marriott in Santa Clara, Calif.

The combined AI Hardware & Edge AI Summit comprehensively covers the design and deployment of ML hardware and software infrastructure across the cloud-edge continuum. As part of the Summit, BrainChip CMO Nandan Nayampally will present “The Edge of Tomorrow: Intelligent Compute to scale AIoT” on September 13 at 4:55 p.m. PDT. The session will detail BrainChip’s holistic, distributed approach that frees up the cloud and creates an explosion of capable, intelligent sensing devices on the Edge that accelerate global artificial intelligence.

“We are pleased to partner with the AI Hardware & Edge AI Summit to demonstrate how BrainChip’s fully digital, event-based Akida™ platform provides radically efficient AI inference on device to substantially increase the level of intelligence delivered on Edge devices,” said Nayampally. “I look forward to discussing how our approach in keeping AI/ML local to the chip while minimizing the need for cloud, dramatically reduces latency while improving privacy and data security.”

Akida processors power the next generation of Edge AI devices that enable growth in intelligence in industrial, home, automotive, and other IoT environments. Akida’s fully digital, customizable, event-based neural processing solution is ideal for advanced intelligent sensing, medical monitoring and prediction, high-end video-object detection and more. Akida’s neuromorphic architecture delivers high performance with extreme energy efficiency enabling AI solutions previously not possible on battery-operated or fan-less embedded Edge devices. Akida also has a unique ability to securely learn on-device without the need for cloud retraining.
 
  • Like
  • Love
  • Fire
Reactions: 69 users

cosors

👀
Short Q&A from this time last year.

Def see the synergies.


EDGE EXECUTIVE INSIGHT – AMIT MATE, FOUNDER & CEO , GMAC INTELLIGENCE – INNOVATOR OF THE YEAR FINALIST​

GMAC-Intelligence-Amit-Mate-300x300.jpg


In the lead up to Edge Computing World, we’re taking some time to speak to key Executives from the leading companies. Today we’re talking with Amit Mate, Founder & CEO of GMAC Intelligence

Tell us a bit about yourself – what led you to get involved in the edge computing market and GMAC Intelligence

I have always been fascinated by AI algorithms and how they can solve problems where traditional engineering tools fail. In my career, even before deep learning was a thing, I was using learning algorithms to improve wireless technology and enable on-device machine vision. As I was working in the Edge compute industry, I was able to see Moore's law taking root in AI compute. It was the perfect time to combine two of my passions, connectivity and AI algorithms, to build something useful and accessible to the masses. My co-founder, IISc classmate (Masters), and Qualcomm colleague Nagaraj and I decided to take the plunge and build something for the global market.

What is it you & your company are uniquely bringing to the edge market?

We are building serverless AI/ML solutions. Our innovations have made it possible to move all AI inferencing from dedicated cloud servers to on-device, on-premise, and serverless cloud/lambda functions in our solutions. We have been able to commercialize our AI/ML solutions with a 1000x cost reduction while still maintaining an expense ratio of 0.01.

Tell us more about the company, what’s your advantages compared to others on the market, who is involved, and what are your major milestones so far?

We build connected intelligence solutions, where our algorithms and software enable multiple connected smart devices to learn, infer and collaborate on the Edge.

We build plug-n-play solutions – which enable easy and rapid deployment and scaling.

We are striving for highest useful TOPs/$ on the Edge – by making our solutions multi-stream, multi-AI, multi-delegate (GPU/CPU/NPU).

We are offering our solutions with a SaaS business model. Customers can choose any solution from facial/vehicle/activity recognition OR any custom application. We provide a unified dashboard and an LCNC option to integrate the Edge metadata into existing dashboards. Our solutions have built-in BLE-based access control.

We have deployed these solutions commercially in India and now looking to bring our capabilities to solve problems of the US Enterprise market.

How do you see the edge market developing over the next few years?

Edge solutions will increase in complexity and functionality over the next few years starting from a few sectors such as QSRs, Retail, Security (SMEs) and finally engulfing the entire Enterprise space.

Consumer-grade devices will increase in AI compute capability from 20 TOPs today to 200 TOPs in a couple of years and continue Moore's law. This will enable several new applications that were not possible a few years ago on the Edge.
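Side note on that 20 → 200 TOPs claim: a 10x jump in roughly two years would actually outpace the classic Moore's-law cadence of one doubling every ~2 years. My arithmetic, not the interviewee's:

```python
import math

# 20 TOPs -> 200 TOPs "in a couple of years" = 10x growth in ~2 years.
growth_factor = 200 / 20
years = 2.0

doublings = math.log2(growth_factor)      # ~3.32 doublings
doubling_time = years / doublings         # ~0.60 years per doubling

print(f"{doublings:.2f} doublings, i.e. one every {doubling_time:.2f} years")
```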

What are the main trends you see about integrating edge computing into different verticals?

As we start integrating Edge computing into different verticals and Edge AI starts to proliferate, TCO per Edge site is going to be a major factor in decision making.

Edge compute with built-in connectivity options will also have a better value proposition as we start building learning and collaborating Edge AI systems.
AMIT MATE,
love his name ♥️
 
Last edited:
  • Like
  • Fire
  • Haha
Reactions: 7 users

Cartagena

Regular
So thank you for posting this.
I read it today in the news from the IAA 2023 in Munich.
Another Information found :

https://www.iaa-transportation.com/en/newsroom/iaa-voices-interview-apex-ai

Interview Apex.AI​

".......

Before joining Apex.AI, he held various positions with companies, including Nvidia, Daimler Group, and Delphi.

..."


I love to see this membership.....like these "body members" and "voting members" from soafee.

https://www.soafee.io/about/members

https://www.prnewswire.com/news-releases/autonomous-driving-moia-counts-on-apexai-software-for-passenger-management-development-301834296.html

"....
The partnership pays towards MOIA's goal of working with Volkswagen Commercial Vehicles (VWCV) to develop Europe's first type-certified AD-MaaS system and successfully launch an integrated autonomous, scalable ridepooling system on the road in Hamburg after 2025.
...."


....and who recently made an improvement to their patent regarding "semantic segmentation" :)
(https://cdn-api.markitdigital.com/apiman-gateway/ASX/asx-research/1.0/file/2924-02702229-2A1468998?access_token=83ff96335c2d45a094df02a206a39ff4)

Well, Apex.AI seems to know what they are talking about....

https://www.apex.ai/wir

Have you ever faced system freezes and shutdowns when processing large amounts of data? Specifically, image processing algorithms like deep neural network based object detection and semantic segmentation in autonomous driving applications are very demanding in terms of data transfer rate and processing power.


In this talk, we show how to efficiently implement a computer vision pipeline using Apex.OS (a safety certified fork of ROS2) which utilizes zero-copy optimizations on the middleware level to reduce bandwidth requirements. In addition, we use hardware accelerated versions of the algorithms to increase the throughput. We also explain how to abstract this hardware acceleration in the application code to decouple it from the underlying SoC.

I have no clue if it has to do with brainchip IP(uuuuu):cry::p

Hi Learning and Stockduck,

Volkswagen Commercial Vehicles sounds like one hell of a client to have on board, and interestingly Apex.AI focuses on the main things BrainChip is in, like object detection and machine learning. Yes, we sure hope that "we are" the hardware in this application and that we are not just along for a joyride, with our brains/tech secrets being freely shared rather than being used to create future revenue contracts.

In my view, shouldn't we be signing IP licensing contracts with our partners if we are collaborating with them, to safeguard our IP and bring products to market? Emotion3D is not under an NDA, so why have we not heard of such contracts yet?

I remain positive about this and hope we hear an announcement of a license or IP contract with Emotion3D or any of our other partners in the not-too-distant future. Meanwhile we wait patiently for the Akida Gen 2 release; only around 3 weeks to go before the end of this quarter. 😑
 
  • Like
  • Fire
Reactions: 9 users

cosors

👀
So thank you for posting this.
I read it today in the news from the IAA 2023 in Munich.
Another Information found :

https://www.iaa-transportation.com/en/newsroom/iaa-voices-interview-apex-ai

Interview Apex.AI​

".......

Before joining Apex.AI, he held various positions with companies, including Nvidia, Daimler Group, and Delphi.

..."


I love to see this membership.....like these "body members" and "voting members" from soafee.

https://www.soafee.io/about/members

https://www.prnewswire.com/news-releases/autonomous-driving-moia-counts-on-apexai-software-for-passenger-management-development-301834296.html

"....
The partnership pays towards MOIA's goal of working with Volkswagen Commercial Vehicles (VWCV) to develop Europe's first type-certified AD-MaaS system and successfully launch an integrated autonomous, scalable ridepooling system on the road in Hamburg after 2025.
...."


....and who recently made an improvement to their patent regarding "semantic segmentation" :)
(https://cdn-api.markitdigital.com/apiman-gateway/ASX/asx-research/1.0/file/2924-02702229-2A1468998?access_token=83ff96335c2d45a094df02a206a39ff4)

Well, Apex.AI seems to know what they are talking about....

https://www.apex.ai/wir

Have you ever faced system freezes and shutdowns when processing large amounts of data? Specifically, image processing algorithms like deep neural network based object detection and semantic segmentation in autonomous driving applications are very demanding in terms of data transfer rate and processing power.


In this talk, we show how to efficiently implement a computer vision pipeline using Apex.OS (a safety certified fork of ROS2) which utilizes zero-copy optimizations on the middleware level to reduce bandwidth requirements. In addition, we use hardware accelerated versions of the algorithms to increase the throughput. We also explain how to abstract this hardware acceleration in the application code to decouple it from the underlying SoC.

I have no clue if it has to do with brainchip IP(uuuuu):cry::p
They will expand their cooperation with China and are already doing so in order to survive.
I see the change of CEO as critical and negative. They are fighting not to go under, even if we can't quite understand and believe it yet.
So better look for VW in China, maybe SiFive?

They don't really have their own know-how that works as the market matures. So they buy in in China.
The opposite of MB.
But that is just my assumption.
 
Last edited:
  • Like
  • Thinking
Reactions: 4 users

stockduck

Regular
Bravo, I respect your work so much, but I think we have to stop digging and looking for dots.
You/we've found so many connections over the years, but nothing at all has eventuated into anything material.
It's just time that the company finally delivers. The initial date for revenue was end of 2022. Now everyone acts like it has always been 2024 – 2025.

The last months I received a lot of backlash. I turned from an excited investor who believes in the technology (which I still do btw; it's the management that I find highly incapable) into someone who's very critical in his posts.
The responses are always the same. Either the shorters are blamed for everything, or it's just fine losing 80%+ of a company's value.
What I'm saying is that the share price is a consequence of the capability of our management. And they failed massively.
Someone on this forum even said he'd bet his life that we are involved with Valeo. This claim was backed up by some reactions of the presenter during a presentation when asked if Akida was inside, or something similar; I don't know what the correct wording was.
Well, now we know we're not, as literally everyone has moved away from our failed first-gen product, seemingly even Mercedes-Benz, who were very outspoken.

Now it's time for the management to finally achieve something material. If that happens we can start looking for dots again. Up until that point I'll never take any dot seriously, no matter how convincing it could be.
What if the collaboration between Valeo and Mobileye was born out of a positive partner program between Intel and BrainChip?
So there is a possibility that everything is fine......., but also maybe not.

Here is information from IAA 2023 on Mobileye and a future goal:

https://www.iaa-mobility.com/en/newsroom/news/future-technology/the-evolution-of-vehicle-sensor-systems

"....

Lidar systems for real-time image recognition​

Lidar systems are based on optical signals. As a result, obstacles, apart from metal objects, can be better detected than when using radar. Simpler short-distance lidar systems are already being used in emergency braking assists, but the truly high-performance devices are still in development. Alongside a number of smaller manufacturers such as the Intel subsidiary Mobileye, for a few years now the German sectoral giant Continental has also been active in this market. With its High Resolution 3D Flash Lidar, in future Continental is looking to enable real-time 3D monitoring of the surroundings, with image interpretation. The developers at Mobileye are planning something similar, and in addition to the sensor system they are also researching new data processing hardware. For 2025, the specialist is planning the market launch of a silicon-based “system-on-chip”, capable of better processing the huge data volumes generated with lidar systems.
....."
 
  • Like
Reactions: 6 users

stockduck

Regular
fasten your seatbelts ......!:LOL:

https://www.iaa-mobility.com/en/newsroom/news/autonomous-driving/the-road-to-fully-autonomous-vehicles

"....
For example, a recent OTA provided to Zeekr, an electric vehicle brand owned by Geely that uses SuperVision, allowed the company to provide an advanced update to the vehicle's adaptive cruise control and highway assistance systems. Now, instead of looking only at the vehicle immediately ahead, the updated system takes into account the entire scene around the vehicle, much like a human driver.
For example, it can detect a traffic jam ahead, even if the car immediately ahead has not yet begun to brake. The system can also react to various objects and situations, such as a vehicle on the side of the road with its door open, or a pedestrian on the side of the road. The update also enables the car to drive on any road with clear lane markings at speeds of up to 130 km/h.

...."
 
  • Like
  • Fire
Reactions: 6 users