BRN Discussion Ongoing


IMG_3144.jpeg
 
  • Like
  • Fire
  • Love
Reactions: 35 users

Baisyet

Regular
The Fraunhofer HHI Wireless Communications & Networks Team will be presenting a remote live demo of their neuromorphic wireless cognition PoC (the one using a COTS Spot robot dog, developed as a 6G Research and Innovation Cluster project and funded by the BMBF, the German Federal Ministry of Education and Research) at the upcoming IEEE Global Communications Conference in Cape Town, South Africa (8–12 December 2024):

View attachment 73164
Are they using Akida, @Frangipani?
 
  • Like
Reactions: 1 users

IloveLamp

Top 20

View attachment 73185


1000019896.jpg
 
  • Like
  • Fire
  • Love
Reactions: 34 users

Terroni2105

Founding Member
Maximas gratias tibi agimus (many thanks to you), Max Maxfield! 😀


November 21, 2024

Taking the Size and Power of Extreme Edge AI/ML to the Extreme Minimum​

by Max Maxfield
Earlier this year, I penned a couple of columns under the umbrella title “Mind-Boggling Neuromorphic Brain Chips.” One of the first comments I received concerning these columns was short, sharp, and sweet, simply reading, “Also, Brain-Boggling.”

Arrrggghhh. How did I miss that? How could I not have used “Brain-Boggling Neuromorphic Brain Chips”? There was much gnashing of teeth and rending of garb that day, let me tell you.

The articles in question (see Part 1 and Part 2) were focused on the folks at BrainChip, whose claim to fame is to be the world’s first commercial producer of neuromorphic IP.

Before we plunge headfirst into the fray with gusto and abandon (and aplomb, of course), let’s first remind ourselves as to what we mean by the “neuromorphic” moniker. Also, as part of setting the scene, let’s remind ourselves that we are focusing our attentions on implementing artificial intelligence (AI) and machine learning (ML) tasks at the extreme edge of the internet. For example, creating intelligent sensors at the point where the “internet rubber” meets the “real-world road.”

Regular artificial neural networks (ANNs) are typically implemented using a humongous quantity of multiply-accumulate (MAC) operations. These are typically used to implement things like convolutional neural networks (CNNs) for working with images and videos, deep neural networks (DNNs) for working with general data, and recurrent neural networks (RNNs) for working with sequential (time-series) data.

When it comes to implementing these types of ANN for use at the extreme edge, the least efficient option is to use a regular microcontroller unit (MCU). The next step up is to use a digital signal processor (DSP), which can be simplistically thought of as being an MCU augmented with MAC functionality. One more step up the ladder takes us to an MCU augmented with a neural processing unit (NPU). For simplicity, we can visualize the NPU as being implemented as a huge array of MACs. In this case, the NPU cannot run in standalone mode—instead, it needs the MCU to be running to manage everything, feed it data, and action any results.

Furthermore, regular NPUs are designed to accelerate traditional ANNs, and they rely on conventional digital computing paradigms and synchronized operations. These NPUs process data in a batch mode, performing matrix computations (e.g., matrix multiplication) on large datasets, which can be resource-intensive.
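To make that "humongous quantity of MACs" concrete, here is a minimal Python sketch (the layer sizes are illustrative assumptions, not tied to any particular NPU) of a single dense layer expressed as nothing but multiply-accumulate operations:

```python
# Minimal sketch: a dense (fully connected) layer is just rows of
# multiply-accumulate (MAC) operations. Layer sizes are illustrative only.
import numpy as np

def dense_layer_macs(x, weights, bias):
    """Compute y[j] = sum_i x[i] * weights[i, j] + bias[j], one MAC at a time."""
    n_in, n_out = weights.shape
    y = bias.copy()
    mac_count = 0
    for j in range(n_out):
        for i in range(n_in):
            y[j] += x[i] * weights[i, j]   # one multiply-accumulate
            mac_count += 1
    return y, mac_count

rng = np.random.default_rng(0)
x = rng.standard_normal(256)               # one input vector
w = rng.standard_normal((256, 128))        # a 256-in, 128-out layer
b = np.zeros(128)
y, macs = dense_layer_macs(x, w, b)
print(f"{macs} MACs for one 256x128 layer on a single input")   # 32768
```

Multiply that by dozens of layers and many inputs per second and the appeal of dedicated MAC arrays (and of anything that lets you skip MACs) becomes obvious.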

By comparison, “neuromorphic” refers to a type of computing architecture that’s inspired by the structure and functioning of the human brain. It seeks to emulate neural systems by mimicking the way biological neurons and synapses communicate and process information. These systems focus on event-based, asynchronous processing that mimics how neurons fire.

Neuromorphic networks are often referred to as spiking neural networks (SNNs) because they model neural behavior using “spikes” to convey information. Since they perform processing only when changes occur in their input, SNNs dramatically reduce power consumption and latency.
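For a feel of what "spiking" means in practice, here is a generic, textbook-style leaky integrate-and-fire neuron in Python. It illustrates the principle only; it is not BrainChip's actual Akida neuron model, and all the constants are made up:

```python
# Generic leaky integrate-and-fire (LIF) neuron: an illustration of the
# spiking principle, not any vendor's actual neuron model.
def lif_neuron(input_spikes, weight=0.6, leak=0.9, threshold=1.0):
    """Return the time steps at which the neuron fires, given 0/1 input spikes."""
    v = 0.0                      # membrane potential
    fired_at = []
    for t, s in enumerate(input_spikes):
        v *= leak                # potential decays a little every step
        if s:                    # integration happens only when a spike arrives
            v += weight
        if v >= threshold:       # threshold crossed: emit a spike and reset
            fired_at.append(t)
            v = 0.0
    return fired_at

spikes_in = [0, 1, 0, 1, 1, 0, 0, 1, 1, 1]
print(lif_neuron(spikes_in))     # fires only when enough recent spikes accumulate
```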

“What about sparsity?” I hear you cry. That’s a good question. Have you been reading my earlier columns? One problem with regular ANNs is that they tend to process everything, even things that aren’t worth processing. If you are multiplying two numbers together and one is 0, for example, then you already know that the answer will be 0. In the context of AI/ML inferencing, a 0 will have no effect on the result (and a very low value will have minimal effect on the result). The idea behind sparsity is to weed out any unnecessary operations.

In fact, there are three kinds of sparsity. The first is related to the coefficients (weights) used by the network. A preprocessor can be used to root through the network, detecting any low value weights (whose effect will be insignificant), setting them to 0, and then pruning any 0 elements from the network. The second type of sparsity is similar, but it relates to the activation functions. Once again, these can be pruned by a preprocessor.

The third type of sparsity is data sparsity. Think 0s being fed into the ANN, which blindly computes these nonsensical values (silly ANN). Data sparsity isn’t something that can be handled by a preprocessor.
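A rough sketch of how the first two kinds of sparsity are created ahead of time, and how data sparsity can be exploited at run time simply by skipping work, is shown below. The pruning threshold, the layer sizes, and the "90% of the input is nothing" assumption are all illustrative, not any vendor's actual figures:

```python
# Sketch of weight sparsity (prune near-zero weights ahead of time) and
# data sparsity (skip MACs whose input is zero at run time).
# Thresholds and sizes are illustrative assumptions only.
import numpy as np

def prune_weights(weights, threshold=0.05):
    """Zero out weights whose magnitude is too small to matter."""
    pruned = weights.copy()
    pruned[np.abs(pruned) < threshold] = 0.0
    return pruned

def sparse_dense_layer(x, weights):
    """Spend a MAC only where both the input and the weight are non-zero."""
    n_in, n_out = weights.shape
    y = np.zeros(n_out)
    macs = 0
    for i in range(n_in):
        if x[i] == 0.0:                    # data sparsity: nothing to do here
            continue
        for j in range(n_out):
            if weights[i, j] != 0.0:       # weight sparsity: pruned entry, skip
                y[j] += x[i] * weights[i, j]
                macs += 1
    return y, macs

rng = np.random.default_rng(1)
w = prune_weights(rng.standard_normal((256, 128)) * 0.1)
x = rng.standard_normal(256)
x[rng.random(256) < 0.9] = 0.0             # pretend 90% of the input is "nothing"
_, macs = sparse_dense_layer(x, w)
print(f"{macs} MACs instead of {256 * 128} for the dense equivalent")
```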

How sparse can data be? Well, this depends on the application, but data can be pretty darned sparse, let me tell you. Think of a camera pointing at a door in a wall. I wouldn’t be surprised to learn that, in many cases, nothing was happening 99% of the time. Suppose the camera is running at 30 frames per second (fps). A typical CNN will process every pixel in every frame in every second. That’s a lot of computation being performed, and a lot of energy being consumed, to no avail.

By comparison, a neuromorphic NPU is event-based, which means it does something (on the processing front) only when there’s something to be done. To put this another way, while regular NPUs can exploit, at best, the weight and activation types of sparsity, neuromorphic NPUs can support all three, thereby dropping their power consumption to the floor.
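Putting some rough numbers on the door-watching camera (every figure here is an illustrative assumption: a VGA-class sensor, activity 1% of the time, and 5% of pixels changing when something does happen), the saving from processing only events works out to a factor of a few thousand:

```python
# Back-of-envelope comparison for the "camera watching a door" example.
# All figures are illustrative assumptions, not measured numbers.
fps             = 30
pixels          = 640 * 480     # VGA-class sensor
active_fraction = 0.01          # something happens roughly 1% of the time
changed_pixels  = 0.05          # and then only ~5% of the pixels change

frame_based = fps * pixels                                     # CNN touches every pixel
event_based = fps * pixels * active_fraction * changed_pixels  # only the changes

print(f"frame-based: {frame_based:,.0f} pixel-updates per second")
print(f"event-based: {event_based:,.0f} pixel-updates per second")
print(f"reduction:   ~{frame_based / event_based:,.0f}x")
```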

The reason I’m bubbling over with all this info is that I was just chatting with Steve Brightfield, who is the Chief Marketing Officer (CMO) at BrainChip. The folks at BrainChip are in the business of providing digital neuromorphic processor IP in the form of register transfer level (RTL) that ASIC, ASSP, and SoC developers can incorporate into their designs.

In my previous columns, I waxed eloquently about BrainChip’s Akida fabric, which mimics the working of the human brain to analyze only essential sensor inputs at the point of acquisition, “processing data with unparalleled performance, precision, and reduced power consumption,” as the chaps and chapesses at BrainChip will modestly inform anyone who cannot get out of the way fast enough.

Well, Steve was brimming over with enthusiasm to tell me all about their new Akida Pico ultra-low-power IP core. Since this operates in the microwatt (μW) to milliwatt (mW) range, Akida Pico empowers devices at the extreme edge to perform at their best without sacrificing battery life.

Even better, the Akida Pico can either operate in standalone mode or it can serve as the co-processor to a higher-level processor. In standalone mode, the Akida Pico can operate independently, allowing devices to process audio and vital sign data with minimal power consumption. This is ideal for smart medical devices that monitor vital signs continuously or voice-activated systems that need to respond instantly. By comparison, when used as a co-processor, the Akida Pico can offload demanding AI tasks from the higher-level processor, thereby ensuring that applications run efficiently while conserving energy. This really is the ultimate always-on wake-up core.

Example use cases include medical vitals monitoring and alarms, speech wake-up words for automatic speech recognition (ASR) start-up, and audio noise reduction for outdoor/noisy environments for hearing aids, earbuds, smartphones, and virtual reality/augmented reality (VR/AR) headsets.

How big is this IP? Well, a base configuration without memory will require 150K logic gates and occupy 0.12 mm² of die area in a 22 nm process. Adding 50 KB of SRAM will boost this to 0.18 mm² of die area in the same process. I mean to say, “Seriously?” Less than a fifth of a square millimeter for always-on AI that consumes only microwatts of power? Give me strength!

Do you want to hear something really exciting? You do? Well, do you remember my column, Look at Something, Ask a Question, Hear an Answer: Welcome to the Future? In that column, I discussed how the folks at Zinn Labs had developed an event-based gaze-tracking system for AI-enabled smart frames and mixed-reality systems. As a reminder, look at this video:

(Embedded video: Zinn Labs / Prophesee event-based gaze-tracking demo)




As we see (no pun intended), the user looks at something, asks a spoken question, and receives a spoken answer. This system features the GenX320 Metavision sensor from Prophesee.

Why do we care about this? Well, the thing is that this sensor is event-based. Steve from BrainChip was chatting with the guys and gals at Prophesee. They told him that they typically need to take the event-based data coming out of their camera and convert it into a frame-based format to be fed to a CNN.

Think about it. The chaps and chapesses at BrainChip typically need to take frame-based data and convert it into events that can be fed to their Akida fabric.

So, rather than going event-based data (from the camera) to frame-based data, and then frame-based data to event-based data (to the Akida processor), the folks from Prophesee and brainchip can simply feed the event-based data from the camera directly to the event-based Akida processor, thereby cutting latency and power consumption to a minimum.
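A toy illustration of the two pipelines is sketched below. The data structures are hypothetical (an "event" here is just an (x, y, timestamp, polarity) tuple); this is not Prophesee's or BrainChip's actual API, it simply shows what the round trip through frames adds and throws away:

```python
# Hypothetical illustration of the two pipelines; not Prophesee's or
# BrainChip's actual APIs. An "event" here is just (x, y, timestamp, polarity).
import numpy as np

def events_to_frame(events, width=64, height=64):
    """Round trip, step 1: accumulate sparse events into a dense frame."""
    frame = np.zeros((height, width), dtype=np.int32)
    for x, y, _t, polarity in events:
        frame[y, x] += 1 if polarity else -1
    return frame

def frame_to_events(frame, t):
    """Round trip, step 2: re-encode the frame's non-zero pixels as events."""
    ys, xs = np.nonzero(frame)
    return [(int(x), int(y), t, bool(frame[y, x] > 0)) for x, y in zip(xs, ys)]

events = [(3, 5, 0.001, True), (3, 6, 0.002, True), (40, 20, 0.004, False)]

# Round-trip pipeline: events -> frame -> events -> event-based processor
round_trip = frame_to_events(events_to_frame(events), t=0.004)

# Direct pipeline: events -> event-based processor, no conversion at all
direct = events

print(round_trip)   # per-event timestamps are gone, and extra work was done to get here
print(direct)       # the original events, untouched
```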

My head is still buzzing with ideas pertaining to the applications of—and the implications associated with—Akida’s neuromorphic fabric. What say you? Do you have any thoughts you’d care to share?

Interesting that Steve Brightfield mentions Prophesee, but we’ve not had anything official about them.
 
  • Like
  • Fire
Reactions: 9 users

Euks

Regular
  • Like
Reactions: 12 users

Bravo

If ARM was an arm, BRN would be its biceps💪!
These guys are using Prophesee's event-based camera to create a navigation system that allows low-cost drones to navigate reliably without GPS.


Neuromorphic Camera Helps Drones Navigate Without GPS

High-end positioning tech comes to low-cost UAVs​

Edd Gent, Contributing Editor, IEEE Spectrum
3 min read

An engineer splicing fiber optic cables used in inertial navigation systems.

Researchers are testing new hybrid imaging and inertial-guidance tech (pictured) that could enable drones to navigate even in GPS-denied environments.


Satellite-based navigation is the bedrock of most modern positioning systems, but it can’t always be relied on. Two companies are now joining forces to create a GPS-free navigation system for drones by fusing neuromorphic sensing technology with an inertial navigation system (INS).
GPS relies on receiver units that communicate wirelessly with a network of satellites to triangulate the user’s location with incredible precision. But these signals are vulnerable to interference from large buildings, dense foliage, or extreme weather and can even be deliberately jammed using spoofed radio signals.
This has prompted the design of alternative navigation approaches that can be used when GPS fails, but they have limitations. An INS uses sensors like accelerometers and gyroscopes to track a vehicle’s location from a known starting point. However, small measurement errors accumulate over time and can ultimately cause a gradual drift in positioning accuracy. Visual navigation systems use cameras to scan the terrain below an aircraft and work out where it is, but this takes considerable computing and data resources that put it out of reach for smaller, less expensive vehicles.
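To see why those small measurement errors matter so much, here is a minimal dead-reckoning sketch with an assumed, made-up constant accelerometer bias. Integrated twice, even a tiny bias produces a position error that grows with the square of time, which is exactly the drift problem described above:

```python
# Minimal illustration of inertial drift: a tiny constant accelerometer bias,
# integrated twice, gives a position error that grows roughly as t squared.
# The bias value is a made-up illustrative number.
bias_mps2 = 0.002            # assumed 2 mm/s^2 accelerometer bias
dt = 0.01                    # 100 Hz sensor updates
steps_per_minute = int(60 / dt)

velocity_err = 0.0
position_err = 0.0
for minute in range(1, 21):                  # simulate 20 minutes of flight
    for _ in range(steps_per_minute):
        velocity_err += bias_mps2 * dt       # first integration: velocity error
        position_err += velocity_err * dt    # second integration: position error
    if minute in (1, 10, 20):
        print(f"after {minute:2d} min: position error of roughly {position_err:8.1f} m")
```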
A pair of navigation technology companies has now teamed up to merge the approaches and get the best of both worlds. NILEQ, a subsidiary of British missile-maker MBDA based in Bristol, UK, makes a low-power visual navigation system that relies on neuromorphic cameras. This will now be integrated with a fiber optic-based INS developed by Advanced Navigation in Sydney, Australia, to create a positioning system that lets low-cost drones navigate reliably without GPS.
“The two things together really neatly solve navigating in a challenging, GPS-denied environment,” says Advanced Navigation’s CEO Chris Shaw. “You can travel really long distances over a really long time.”
When deciding on a navigation system for a vehicle there is always a price-to-performance trade-off, says Shaw. It typically doesn’t make sense to install an expensive, high-accuracy INS on a low-cost platform like a drone, but smaller, cheaper ones are more prone to positioning drift. “Sometimes it could be just 10, 20 minutes, before you start to get such a big error growth that the position accuracy is not good enough,” says Shaw.

Ditching GPS for Cameras​

A visual navigation system can provide a workaround by giving the INS high accuracy position updates at regular intervals, which it can use to recalibrate its location. But the high resolution cameras used in these systems generate huge amounts of data, and this has to be compared against a massive database of satellite imagery using computationally expensive algorithms. Fitting these kinds of computational resources on a small and power-constrained vehicle like a drone is typically not feasible.
NILEQ’s system significantly reduces the resources required for visual navigation by using a neuromorphic camera. Inspired by the way the human retina works, these devices don’t capture a series of images, but instead track changes in brightness across the sensor’s individual pixels. This generates far less data and operates at much higher speeds than a conventional camera.
The company says its proprietary algorithms process the camera output in real-time to create a terrain fingerprint for the particular patch of land the vehicle is passing over. This is then compared against a database of terrain fingerprints generated from satellite imagery, which is stored on the vehicle. The process of creating these fingerprints compresses the data, according to Phil Houghton, head of future concepting at MBDA. “This means that the size of the database loaded onto the host platform is trivial and searching it in real-time requires minimal computation,” he adds.
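NILEQ's algorithm is proprietary and patent-pending, so the details are unknown, but the general "compact fingerprint plus cheap matching" idea can be illustrated with a deliberately simple, made-up scheme: hash each map tile into a short binary code via random projections, then find the closest code by Hamming distance. Everything below (hash length, tile size, the projection trick itself) is an assumption for illustration only:

```python
# Generic "fingerprint and match" illustration. NILEQ's real algorithm is
# proprietary and patent-pending; this made-up scheme just shows why compact
# fingerprints keep both the onboard database and the search small.
import numpy as np

def terrain_fingerprint(patch, bits=64):
    """Hash a terrain patch into a short binary code (illustrative only)."""
    rng = np.random.default_rng(42)                # same projections every call
    projections = rng.standard_normal((bits, patch.size))
    return (projections @ patch.ravel() > 0).astype(np.uint8)

def best_match(query, database):
    """Return (Hamming distance, tile id) of the closest stored fingerprint."""
    return min((int(np.count_nonzero(query ^ fp)), tile_id)
               for tile_id, fp in database)

rng = np.random.default_rng(0)
tiles = {i: rng.standard_normal((32, 32)) for i in range(1000)}    # "map" tiles
database = [(i, terrain_fingerprint(t)) for i, t in tiles.items()] # 64 bits each

observed = tiles[123] + 0.2 * rng.standard_normal((32, 32))        # noisy overflight view
distance, tile_id = best_match(terrain_fingerprint(observed), database)
print(f"best match: tile {tile_id} at Hamming distance {distance}")
```

Sixty-four bits per tile is what would make the "trivial" database size and "minimal computation" claims plausible; the real system presumably does something far more robust than this toy.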
On the other hand, neuromorphic cameras are not currently able to operate using infrared, says Houghton, which would enable nighttime operations. But infrared neuromorphic cameras are currently under development and should be available in the next few years, he says.
Neuromorphic cameras are more expensive than conventional ones, often costing in the region of $1000, says Shaw. But this is balanced out by the fact that they can be combined with much cheaper INS. “Some really high-end navigation systems might run into the hundreds of thousands of dollars,” he says. “This approach of using the neuromorphic camera alongside low-cost, inexpensive inertial sensors, there’s a big cost and size benefit.”
Beyond providing the INS, Advanced Navigation will also use its AI-powered sensor fusion software to combine the outputs of the two technologies and provide a single, reliable location reading that can be used by a drone’s navigation system in much the same way as a GPS signal. “A lot of customers in this space want something they can just basically plug in and there’s no big learning curve,” says Shaw. “They don’t want any of the details.”
The companies are planning to start flight trials of the combined navigation system later this year, adds Shaw, with the goal of getting the product into customers’ hands by the middle of 2025.



So, here's more info about the company discussed in the above post, NILEQ, who are using Prophesee's event-based camera. This article says that the technology is patent-pending and uses neuromorphic sensors to match terrain fingerprints.

I think we need to keep our thousand eyes peeled for this patent when it eventually drops. 👀 👀 👀

As it says in the article, this solution "could form the cornerstone of future airborne navigation systems". 🥰

If you think about all of this in the context of Max Maxfield's article, why would NILEQ want to take the most circuitous route to processing their event-based data when they could simply "feed the event-based data from the camera directly to the event-based Akida processor, thereby cutting latency and power consumption to a minimum"?


EXTRACT from Max Maxfield's "Taking the Size and Power of Extreme Edge AI/ML to the Extreme Minimum", November 21, 2024

Screenshot 2024-11-22 at 10.24.31 am.png




Decoding Earth’s Fingerprints: Advanced Navigation and MBDA Collaborate on Breakthrough Resilient Navigation Technology​


Published on: 20 November 2024
Global, November 2024: Advanced Navigation, a world leader in APNT technologies, along with global defense leader MBDA, have signed a Memorandum of Understanding (MoU) to co-develop a resilient navigation system integrating MBDA’s NILEQ absolute positioning technology.
Suitable for the modern era, the co-developed solution will provide resilient absolute positioning for a multitude of airborne platforms. The agreement will see the companies foster joint research and technology development between the United Kingdom and Australia.


Advanced Navigation‘s vertically integrated labs are capable of developing and delivering products against stringent timelines.

In an increasingly uncertain world where interference is becoming ubiquitous, commercial and military sectors can no longer rely purely on GNSS for flight operations. There is an urgent need for additional navigation aiding to supplement platform inertial navigation and GNSS receiver systems,
said Advanced Navigation CEO Chris Shaw.

We look forward to seeing MBDA’s innovation-driven solutions form the cornerstone of future airborne navigation systems. NILEQ seeks to address the enormous demand for resilient absolute positioning information that will complement the existing navigation systems of airborne platforms. Advanced Navigation are an ideal Australian partner to help accelerate the technology towards market entry. Navigation technologies that are not simply accurate and precise, but also provide the ‘resilience’ against interference, is what propels this partnership
said MBDA Australia General Manager, Tom Tizard.

Absolute Positioning using NILEQ Derived Terrain Fingerprints

NILEQ’s patent-pending technology is underpinned by the use of novel neuromorphic sensors to derive and match terrain fingerprints. Initially inspired by biological change detection processes, the sensing technology captures data of the changing terrain as an airborne system flies across it, and matches it to an existing database of the Earth’s surface.
The final solution is set to enable systems, such as uncrewed Air Systems (UAS), to secure an absolute position fix over land with a solution that is passive and resistant to interference. The technology will enhance the safety of beyond visual line of sight (BVLOS) operations, as the solution overcomes many of the conventional limitations of airborne image-based navigation technologies.


Demonstration of NILEQ neuromorphic sensor matching a terrain fingerprint

Strengthening Bilateral Relations Between UK and Australia

Highlighting the importance of increasing bilateral opportunities, September marked the three-year anniversary of AUKUS, which was formed on 15 September 2021. The potential applications of the co-developed solution are wide-ranging and span both the civilian and military domains. The solution also further supports growing legal requirements for GPS alternatives. This is of heightened importance at a time when geopolitical conflicts and electronic warfare – the jamming and spoofing of GPS signals – are simultaneously on the rise.
Regarding the strategic context of AUKUS Pillar 2, the companies are working on capabilities to translate novel, complex research and technology into meaningful practical applications that are of mutual benefit to the United Kingdom and Australia. This means informed decision-making, strategic autonomy, and heightened combat efficiency in the face of emerging threats. More than a strategic advantage, it is the key to bolstering national security.

Real World Demonstration

Advanced Navigation and MBDA will validate NILEQ in an airborne demonstration planned in Australia.



Screenshot 2024-11-22 at 10.10.01 am.png


 
Last edited:
  • Like
  • Fire
  • Love
Reactions: 52 users
On my local radio we just had a gentleman talking about power drain from data centres and how they are constantly building new ones, which are power hungry.
I took the opportunity to spruik our BRN to the listeners 😆
 
  • Like
  • Love
  • Fire
Reactions: 40 users

IloveLamp

Top 20

1000019900.jpg
 
  • Like
  • Love
Reactions: 10 users

7für7

Top 20
GOOD MORNING CHIPPERS

IMG_7711.jpeg
 
  • Haha
  • Like
  • Love
Reactions: 20 users

Bravo

If ARM was an arm, BRN would be its biceps💪!

Samsung Smart Glasses Could Debut in 2025​


By Varun Godinho | 19 Nov 2024




A new report has indicated that Samsung’s new XR glasses are being developed in collaboration with Google and will likely arrive in the third quarter of 2025. The new glasses are expected to share some specs with the Ray-Ban Meta glasses.
Research from Wellsen XR reveals a few new details regarding Samsung’s upcoming XR glasses. Samsung is reported to be planning an initial production run of 500,000 units of these smart glasses.
The glasses will reportedly be powered by Qualcomm’s AR1 chipset, the same chip that’s used in Ray-Ban Meta smart glasses.
The report claims that Samsung’s glasses will have a 12MP Sony IMX681 camera and a 155 mAh battery, almost the same as Ray-Ban’s glasses.
In terms of weight, the glasses would weigh 50g, slightly more than Ray-Ban Meta.


Although there is no confirmation regarding a display, given the rumoured specs of the weight and battery size, Samsung’s XR glasses are not expected to feature a display.
In terms of capabilities, Gemini would handle AI tasks alongside support for “payment,” QR code recognition, “gesture recognition,” and “human recognition functions.”
Meta uses AI on its glasses to leverage the camera for multimodal analysis and answers (and scan QR codes), set reminders, and the company has also teased translation features.
Meta recently revealed a prototype for a future product: fully holographic Augmented Reality (AR) glasses. Named Orion, the glasses have a holographic display and users will be able to interact with it using voice, hand-tracking, eye-tracking, and a wrist-based neural interface.
Meta Orion AR Glasses
While Samsung might trail Meta in its smart glasses development, it is far ahead of Apple, which is only now beginning to focus resources on developing a smart glasses version of its own. Apple’s initiative, code-named Atlas, began a few weeks ago and involves gathering feedback from Apple employees on smart glasses.
Additional focus groups are planned at Apple, and the studies are reportedly being led by its Product Systems Quality team. However, given Apple’s stringent quality standards and its adoption of cutting-edge technology, a potential launch of the smart glasses, if the company does decide to go ahead with the project, is likely at least five years away.

 
  • Like
  • Thinking
  • Fire
Reactions: 13 users

Diogenese

Top 20
Hi Bravo,

Advanced Navigation is an Australian company with your friend and mine, Malcolm Turnbull, on the board.

Given the newly announced joint development with MBDA/NILEQ, BRN should contact them urgently, because they wouldn't want one hand tied behind their back by using obsolete technology.


Since its inception in the 1960s, the Kalman filter has been commonly used to this day for guidance and navigation applications. It has undergone many adjustments designed to improve upon the basic implementation, such as the extended and unscented Kalman filter. In recent years, however, a new approach to filtering based on artificial neural network (ANN) processing has made significant breakthroughs that have pushed the inertial navigation industry into a new era.
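For readers who have not met it, the Kalman filter's predict/update cycle in its simplest one-dimensional form looks like this. It is a textbook sketch with made-up noise values, not any particular vendor's implementation:

```python
# Textbook one-dimensional Kalman filter: predict with the motion model,
# then correct with each noisy measurement. Noise values are illustrative.
def kalman_1d(measurements, q=0.01, r=1.0):
    """Track a scalar state; q is process noise variance, r is measurement noise variance."""
    x, p = 0.0, 1.0                       # state estimate and its variance
    estimates = []
    for z in measurements:
        p = p + q                         # predict: uncertainty grows by the process noise
        k = p / (p + r)                   # Kalman gain: how much to trust the measurement
        x = x + k * (z - x)               # update the estimate towards the measurement
        p = (1.0 - k) * p                 # updated (reduced) uncertainty
        estimates.append(x)
    return estimates

noisy = [5.2, 4.7, 5.4, 4.9, 5.1, 5.3, 4.8, 5.0]
print([round(e, 2) for e in kalman_1d(noisy)])    # estimate climbs steadily towards ~5
```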

Little had been concretely achieved in the space of artificial intelligence (AI) for inertial navigation applications until Advanced Navigation began commercializing a fusion neural network from university research in 2012.

The stakes have been further raised with the widespread use of GNSS jamming and spoofing technologies. This is forcing defence organizations to move away from GNSS-only solutions for position information and, instead, adopt inertial navigation systems (INS) solutions that can provide the necessary precision and reliable dead-reckoning performance.

How does an Artificial Neural Network (ANN) Work?

At its core, an artificial neural network has self-learning capabilities that enable it to convert inputs from various sensors into better resulting outputs as more data becomes available over time. More precisely, a typical ANN goes through two distinct phases.

  • An initial phase, where the processing units making up the ANN are “taught” a set of learning rules used to guide outcomes, recognizing patterns in data by comparing the actual output produced with the desired output.
  • A second phase, where corrections (referred to as back-propagation) are applied to the network’s weights so that the actual output moves closer to the desired output.
Advanced Navigation’s solution uses the long short-term memory (LSTM) AI principle, which is well-suited to classifying, processing and making predictions based on sensor data with a variable duration between important events.

Because LSTM operates over a long timespan, it is relatively insensitive to gap length, which gives it an advantage over the hidden Markov model generally associated with Kalman filters.
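Advanced Navigation's actual fusion network is proprietary, but the general shape of "an LSTM over a stream of inertial samples" can be sketched in a few lines of PyTorch. The sizes, the six IMU channels, and the 3D correction output are illustrative assumptions, not their architecture:

```python
# Generic sketch of an LSTM over inertial sensor sequences. Advanced
# Navigation's real fusion network is proprietary; shapes and sizes here
# are illustrative assumptions only.
import torch
import torch.nn as nn

class ImuLstmFusion(nn.Module):
    def __init__(self, n_channels=6, hidden=64):
        super().__init__()
        # 6 channels: 3-axis accelerometer plus 3-axis gyroscope
        self.lstm = nn.LSTM(input_size=n_channels, hidden_size=hidden,
                            batch_first=True)
        self.head = nn.Linear(hidden, 3)      # e.g. a 3D velocity correction

    def forward(self, imu_seq):
        # imu_seq: (batch, time_steps, n_channels). The LSTM's cell state can
        # carry context across long, variable gaps between "important events".
        out, _ = self.lstm(imu_seq)
        return self.head(out[:, -1, :])       # prediction from the last time step

model = ImuLstmFusion()
fake_imu = torch.randn(1, 200, 6)             # one sequence of 200 IMU samples
print(model(fake_imu).shape)                  # torch.Size([1, 3])
```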

Advanced Navigation’s ANN relies on three types of memory:

  1. In the lab, long-term learning is hardcoded in the inference engine, based on many hours of testing in various environments.
  2. In the field, short-term learning operates to update the model in the inference engine twice per second. This learning is more constrained and offers what we call “medium-level learning”.
  3. Once per minute, “deep learning” operates across all sensor data to self-model the system and make the most complex updates to the learned model.

1732241772618.png


They mention optic fibre (OF) navigation. I assume they are using the Sagnac effect, which measures the phase difference between light beams travelling in opposite directions around an OF loop to detect rotation.

https://en.wikipedia.org/wiki/Sagnac_effect
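For the curious, the fibre-optic gyro form of the Sagnac phase shift is delta_phi = 2*pi*L*D*Omega / (lambda*c). Plugging in some typical-looking (assumed) coil numbers shows just how small the phase differences being measured are:

```python
# Sagnac phase shift for a fibre-optic gyro coil:
#   delta_phi = 2 * pi * L * D * Omega / (lambda * c)
# The coil length and diameter are assumed, typical-looking values,
# not the specs of any real product.
import math

L = 1000.0              # metres of fibre wound into the coil
D = 0.10                # coil diameter in metres
wavelength = 1550e-9    # metres (a common telecom wavelength)
c = 3.0e8               # speed of light, m/s
omega_earth = 7.292e-5  # rad/s, Earth's rotation rate

delta_phi = 2 * math.pi * L * D * omega_earth / (wavelength * c)
print(f"phase shift from Earth's rotation alone: {delta_phi:.2e} rad")   # ~1e-4 rad
```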
 
  • Like
  • Love
Reactions: 14 users

Diogenese

Top 20


"The new glasses are expected to share some specs with the Ray-Ban Meta glasses." 10 out of 10.
 
  • Like
  • Haha
  • Wow
Reactions: 9 users

sb182

Member
  • Like
  • Love
  • Fire
Reactions: 18 users

CHIPS

Regular
Yep, I believe the LinkedIn post from TCS referred to the blog that was posted a couple of years ago.
Still the reference mentioned ... but a date stamp would be nice.
Why the new LinkedIn post then? Reminding us of their enthusiasm for all things neuromorphic?

However, it puts the "next couple of years" comments into a different frame, doesn't it!
That would be about NOW then.

I am sure they are still very enthusiastic about neuromorphic computing at the edge. In their last annual report, Tata Elxsi stated that they will be working with BrainChip for their health products in the coming year(s). Why should the parent company TATA not be using Akida/Pico/TENNs then? Maybe something is coming up and this "reminder" is to prepare the followers for it? I hope so!
 
Last edited:
  • Like
  • Love
  • Fire
Reactions: 18 users

Bravo

If ARM was an arm, BRN would be its biceps💪!
I wonder what Prophesee is working on with our 2nd gen AKIDA along with all the bells, whistles, vision transformers and TENNs?

Remember this quote from Luca Verre?


View attachment 60142


BTW, I'm going to assume these smart glasses, which Zinn Labs and Prophesee are working on, would be vastly improved with the aid of the AKIDA gen 2.

Event Based Eye Tracking - Zinn Labs | Prophesee

YouTube·PROPHESEE Metavision Technologies·29 Feb 2024



PS: If I hear "let me look that up for you" one more time, I'm going to go cray-cray!



Hi @sb182, I "zinncerely" 😝( hehehe ) hope ChatGPT is wrong in this instance in saying that we're not working with Zinn Labs. I think there's a pretty good chance that we are. It certainly would be TENNs out of ten if we were! :)

IMO of course.
 
Last edited:
  • Like
  • Haha
  • Love
Reactions: 15 users

Bravo

If ARM was an arm, BRN would be its biceps💪!
Screenshot 2024-11-22 at 10.36.33 pm.png
 
  • Like
  • Love
  • Fire
Reactions: 17 users

Bravo

If ARM was an arm, BRN would be its biceps💪!
It's interesting because AKIDA Pico is targeting smaller devices. Hala Point wouldn't fit in your ear, unless you have very big ears.


World’s Largest Neuromorphic System, Intel sets world record with Hala Point​

November 22, 2024


Santa Clara, California, United States--Intel Corporation (Nasdaq: INTC) has built Hala Point, a large-scale neuromorphic system which can support up to 20 quadrillion operations per second, or 20 petaops, with an efficiency exceeding 15 trillion 8-bit operations per second per watt (TOPS/W), thus setting the world record for being the World’s Largest Neuromorphic System, according to the WORLD RECORD ACADEMY.
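Taking the quoted headline figures at face value (and assuming, as a simplification, that both numbers refer to the same 8-bit workload), a quick back-of-envelope calculation gives the implied power draw, which rather underlines the "wouldn't fit in your ear" point:

```python
# Back-of-envelope: power implied by the quoted headline figures, assuming
# both apply to the same 8-bit workload (a simplification).
peak_ops_per_s   = 20e15   # 20 petaops
efficiency_ops_w = 15e12   # 15 trillion 8-bit operations per second per watt

implied_watts = peak_ops_per_s / efficiency_ops_w
print(f"implied power at peak: ~{implied_watts:,.0f} W")   # roughly 1,300 W
```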

 
  • Haha
  • Like
  • Thinking
Reactions: 8 users

Diogenese

Top 20
It's interesting because AKIDA Pico is targeting smaller devices. Hala Point wouldn't fit in your ear, unless you have very big ears.



Do elephants need hearing aids, or do they just listen to trunk calls?
 
  • Haha
  • Like
  • Fire
Reactions: 29 users

Bravo

If ARM was an arm, BRN would be its biceps💪!
  • Love
  • Haha
  • Like
Reactions: 4 users