BRN Discussion Ongoing

Tothemoon24

Top 20

Nokia Announces Speech-Based AI Network Management Tools​

Nokia unveiled its cutting-edge AI solutions amid its plans to scale up networks for the industrial metaverse
Published: November 1, 2023

Demond Cureton

Nokia Bell Labs announced on Wednesday a novel natural language processing (NLP) solution for configuring networks using artificial intelligence (AI) and machine learning (ML).
Developed by Nokia Bell Labs’ UNEXT research initiative, the company’s Natural-Language Network aims to transform how networks operate. This is crucial as the Espoo, Finland-based firm scales up its network infrastructure amid the rise of the industrial metaverse.
The company revealed the new digital tools at the Brooklyn 6G Summit in New York, NY, which took place from 31 October to 2 November.
At the event, it stated that the new NLP solution can configure networks using prompts and speech. It will also infer user intentions and operate autonomously using its neural networks.

Nokia AI Researchers Develop NLP for Telecoms​

Using Natural-Language Networks, Nokia can streamline network management with AI, moving away from complex setups to more agile, responsive systems to serve the end user.
Artificial intelligence will power the networks, allowing service providers to maintain operations with rapid configuration capabilities. These intelligent systems will also monitor and learn from previous prompts, responses, and other data to optimise networks after each successful request, the company explained.


As the networking tool builds its neural networks across the infrastructure, it can operate without human intervention.

Csaba Vulkan, Network Systems Automation Research Leader, Nokia Bell Labs, said in a statement,

“Operators won’t need to explore technical catalogues or complex API descriptions when they configure networks. Instead, a simple statement like ‘Optimize the network at X location for Y service’ will work. Those requests could be used to configure a wireless network in a factory for robot automation or optimize networks at a concert for a barrage of social media uploads.”

Thank You, Who’s UNEXT?​

Nokia Bell Labs created the UNEXT research initiative to support the company’s efforts to innovate its network infrastructure. This has become a key focus as the company turns to the industrial metaverse, which is driving the digital transformation of global enterprises.

According to the company, UNEXT draws inspiration from UNIX, the groundbreaking operating system (OS) Bell Labs created in 1969, following its work with the Massachusetts Institute of Technology (MIT) and General Electric (GE) on the Multics project.

The firm said in a press release that “UNEXT will redefine network software and systems the same way UNIX reshaped computing.”

It aims to achieve this by integrating a host of processes in the telecoms network, effectively evolving the network into an operating system.

Azimeh Sefidcon, Head of Network Systems and Security Research, Nokia Bell Labs, added,

“Natural-Language Networks offer a sneak peek into one of the many capabilities of UNEXT. Reducing the complexity of network management fits squarely with UNEXT’s goal of extending the reach of networked systems by breaking down barriers that prevent those systems from interoperating.”

AI to Become Central to Next-Gen Networks​

The announcement comes at a critical time, as Nokia aims to scale up its network capacities amid a massive surge in network demand. Many of the world’s tech innovations are now forcing telecoms to rethink their strategies for infrastructure development and solutions, as these innovations put exponential pressure on 4G and 5G networks.

Unprecedented demand for telecoms bandwidth, speed, and reliability has come not only from consumers and their handheld devices like smartphones and tablets, but also from a growing need for low-latency, high-bandwidth, and high-speed networks to facilitate immersive tools.

Recent global innovations in virtual, augmented, and mixed reality at the consumer, enterprise, and industrial levels are expected to place enormous pressure on telecoms infrastructure leading up to 2030 and beyond.

Thomas Hainzel, Head of Digital Industries Evolution & Partnerships, Nokia, explained to XR Today the challenges that Nokia faces amid the rise of multiple network-intensive metaverses in Industry 4.0.

Nokia is currently developing tools across the technology stack, such as the Internet of Things, AI, and cloud and edge computing, to secure network integrity and meet sustainability pledges amid future environmental challenges.


World Leaders Focus on AI Safety​

However, as companies innovate their offerings with AI, governments are developing their roles in ensuring the safety of their respective citizens at both the national and global levels.

Nokia’s solution comes just days after United States President Joe Biden signed the Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence.
 
  • Like
  • Love
Reactions: 5 users

Diogenese

Top 20
Looks very impressive to me, the uninitiated "clever dick, sometimes" learner.

Dodgy Ness, it's a good time now for you to put some thoughts together to explain what this edgybox (the Akida Ballista one) is, so that I and other "not so techy understandy" individuals can grasp this technology in more detail. It appears to be a capable device, I think!! Perhaps you could explain why and how it is, and its significance in terms of its usage and its current and future applications.

At this time, my head is a bit hurty trying to understand what's going on here in Australia (pretty similar to the "black box" invention some years ago now, which in actual fact is orange in colour, in my understanding). Still not sure what and how that device works. Will this "edgebox" be a game changer at all, now and into the future, and could it improve/replace the "black box" in any way?

These questions/thoughts are open to all on this forum, so that I/others can become aware of what this "edgebox" device is designed to achieve, to enable us all to understand its usefulness moving forward.

And what could sales of this device bring, realistically? I have other questions in regard to this, but they can wait until I've got my head around this edgebox device first.

TIA......

Akida Ballista ( Still, I'm Sure )....

hotty...
Hi Hotty,

Akida has 2 basic functions:
A. Classification/Inference;
B. Machine learning.

Using images/video as an example, classification is the identification of an object by comparison of similarity with classes of images in a model library. This is basically a guesstimate (probability).

Machine learning is the addition of new objects to the model library for future comparison.
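
To make those two functions concrete, here is a toy Python sketch (my own illustration, not BrainChip's actual implementation): the "model library" is a set of stored prototype vectors, classification is a similarity comparison against them, and learning is simply adding a new prototype for future comparison.

```python
# Toy illustration of "classification = compare against a model library"
# and "learning = add a new entry to that library". Not Akida's real code.
import numpy as np

class ToyModelLibrary:
    def __init__(self):
        self.prototypes = {}  # label -> stored feature vector

    def learn(self, label, features):
        """One-shot learning: store a new class prototype for future comparison."""
        self.prototypes[label] = features / np.linalg.norm(features)

    def classify(self, features):
        """Inference: return the best-matching label and a similarity score
        (a 'guesstimate', i.e. a probability-like confidence)."""
        features = features / np.linalg.norm(features)
        scores = {lbl: float(features @ proto) for lbl, proto in self.prototypes.items()}
        best = max(scores, key=scores.get)
        return best, scores[best]

lib = ToyModelLibrary()
lib.learn("cat", np.random.rand(64))   # enrol known classes
lib.learn("dog", np.random.rand(64))
label, score = lib.classify(np.random.rand(64))
```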

The old CNN software running on a CPU/GPU uses lots of Watts performing MACs (Multiply-Accumulate operations), which are maths-heavy calculations. Akida operates on spikes indicating events (changes), which were initially represented by a single digital bit (now up to 4 bits in Akida 1 for increased accuracy). In a digital image, if adjacent pixels have the same illumination, then no event is registered, so no current is drawn by the associated transistors, which is why Prophesee is a natural fit for Akida. On the other hand, in old-style CNN, each pixel, which may be 16 bits, is processed in a MAC processing matrix, which involves 16*16 mathematical operations switching one or more transistors.
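
And a rough back-of-envelope sketch of why events save work, with purely illustrative numbers: a frame-based pipeline touches every pixel every frame, while an event-based one only processes the pixels whose illumination changed.

```python
# Rough numerical sketch of event sparsity vs frame-based processing.
# A small moving object changes ~1,200 of 307,200 pixels; only those
# generate events. Numbers are illustrative only.
import numpy as np

H, W = 480, 640
prev = np.random.randint(0, 256, (H, W))
curr = prev.copy()
curr[100:120, 200:260] += 30              # a small object brightens a 20x60 patch

threshold = 15
events = np.argwhere(np.abs(curr - prev) > threshold)   # (y, x) of changed pixels

dense_ops = H * W        # frame-based: every pixel enters the MAC pipeline
event_ops = len(events)  # event-based: only changed pixels are processed
print(f"dense: {dense_ops} pixels, events: {event_ops} "
      f"({100 * event_ops / dense_ops:.2f}% of the frame)")
```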

The new features of TeNNs and ViT are used for classification comparisons.

As we know Akida can be used with any sensor - camera, microphone, chemical sensor, vibration detector, ...

The EB can be used where signals from a number of sensors need to be classified.

This could be in a supermarket processing video images from all the checkouts to determine the price from a model and for stocktake purposes.

It can be used in a factory with many machines to determine if vibration indicates a need for maintenance.
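
As a concrete (purely illustrative) sketch of that vibration case, assuming a known fault frequency band and a hand-picked energy limit rather than any real Edge Box software:

```python
# Minimal sketch of the factory vibration use case: flag a window of
# accelerometer samples when energy in a known fault band exceeds a limit.
# Band, limit, and signals are invented for illustration.
import numpy as np

def needs_maintenance(window, sample_rate=1000.0, fault_band=(120.0, 180.0), limit=5.0):
    """Flag a vibration window if energy in the fault band exceeds the limit."""
    spectrum = np.abs(np.fft.rfft(window * np.hanning(len(window))))
    freqs = np.fft.rfftfreq(len(window), d=1.0 / sample_rate)
    band = (freqs >= fault_band[0]) & (freqs <= fault_band[1])
    return spectrum[band].sum() > limit

t = np.arange(2048) / 1000.0
healthy = np.sin(2 * np.pi * 50 * t)                  # normal 50 Hz hum
faulty = healthy + 0.5 * np.sin(2 * np.pi * 150 * t)  # bearing-fault harmonic
print(needs_maintenance(healthy), needs_maintenance(faulty))   # False True
```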





https://brainchip.com/brainchip-previews-industrys-first-edge-box-powered-by-neuromorphic-ai-ip/

Designed for vision-based AI workloads, the compact Akida Edge box is intended for video analytics, facial recognition, and object detection, and can extend intelligent processing capabilities that integrate inputs from various other sensors. This device is compact, powerful, and enables cost-effective, scalable AI solutions at the Edge.

BrainChip’s event-based neural processing, which closely mimics the learning ability of the human brain, delivers essential performance within an energy-efficient, portable form factor, while offering cost-effectiveness surpassing market standards for edge AI computing appliances. BrainChip’s Akida neuromorphic processors are capable of on-chip learning that enables customization and personalization on device without support from the cloud, enhancing privacy and security while also reducing training overhead, which is a growing cost for AI services.

“BrainChip’s neuromorphic technology gives the Akida Edge box the ‘edge’ in demanding markets such as industrial, manufacturing, warehouse, high-volume retail, and medical care,” said Sean Hehir, CEO of BrainChip. “We are excited to partner with an industry leader like VVDN Technologies to bring groundbreaking technology to the market.”
 
Last edited:
  • Like
  • Love
  • Fire
Reactions: 47 users
Good morning back in Aussie,

I think an important point to remember with regard to VVDN is this: they already make Edge Boxes for both of our potential competitors, namely Nvidia and Qualcomm. Adding us to the mix sends a very strong message before a single confirmed sale has even been registered.

Can the others really compete? We shall all find out as CES unfolds. Benchmarking is about to be ramped up a number of notches, in my opinion; the more products that hit the market, the more our dominance at the edge will be confirmed. Both Nvidia and Qualcomm representatives will be keenly observing this launch at CES. The new kid on the block has finally arrived....❤️ Akida.

Tech.
I think our share price is at 19 cents, and you're talking a big game amongst the big boys. Tech, did you talk a big game amongst some of the golfing greats?
 
  • Haha
  • Sad
Reactions: 2 users

CHIPS

Regular
I think our share price is at 19 cents, and you're talking a big game amongst the big boys. Tech, did you talk a big game amongst some of the golfing greats?

The SP does not say anything about quality and competition with others.
 
  • Like
  • Fire
  • Love
Reactions: 9 users

Esq.111

Fascinatingly Intuitive.
AKIDA 2.0 is available, but there hasn't been an announcement of a physical AKD2000 reference chip being produced.

Not sure exactly when the AKD1000 chips were originally made, but there were the engineering samples and then the reference chips, which were greatly improved.

Must be getting close to two years though, and they are going into Edge Boxes now and will be state of the art!

To do that today, in these times of rapid technological development, just goes to show how far AKIDA is ahead of the curve.

They could also use AKD1500 chips, if they needed to.

We really have no idea how many AKD1000 chips were produced, or what percentage of the run were "good".

There's always a non-viable percentage and we don't know what the device yield is when producing AKIDA chips, but I'm guessing, with the engineering samples coming out practically perfect the first time (a huge credit to Anil Mankar, as was stated by Louis DiNardo, who had rarely seen that), that it is pretty high by industry standards.
Evening DingoBorat,

Yep, one would think they... our directors... would be utilising our latest tech. It would be very strange if they, the directors, thought fit to incorporate AKIDA 1000 when shareholders have just paid for AKIDA 1500 to be fabricated & produced (GlobalFoundries)... several MILLION $.
Never mind AKIDA 2, in three flavours.

Think the expression... time will tell... has well and truly passed.


TIME TO DELIVER.

Regards,
Esq.
 
Last edited:
  • Like
  • Fire
  • Haha
Reactions: 19 users
Evening DingoBorat ,

Yep , one would think thay .....our directors , , would be utilising our latest tech... would be very strange if thay , directors thought fit to incorperate AKIDA 1000
, when shareholders have just paid for AKIDA 1500 to be fabricated & produced ( Global Foundries )...several MILLION $ ,
Nevermind AKIDA 2, in three flavours.

Think the expression ... time will tell... has well and truely passes.

TIME TO DELIVER.

Regards ,
Esg.
(GIF: John Candy, Planes, Trains and Automobiles)



Didn't even spell your name right Esq 😛

Well I guess it is Sunday..

I'll reply properly later on..
 
  • Haha
Reactions: 4 users
Gotta remember Esq that 1500 was done in 22nm FDSOI.

The primary use case I've seen for 22nm FDSOI at the mo is radhard for space.

That says to me that things like the SBIR contracts, EdgX, ANT61, Intellisense, possibly SDR / NECR, SatCom, and any space relationship we have, known or unknown, are most likely using or testing the 1500, imo.
 
Last edited:
  • Like
  • Fire
  • Love
Reactions: 24 users
Thoughts are with anyone with family and friends in Cairns, which is currently going under water as we speak. Being stuck in Brisbane while the wife is luckily safe in Cairns is more than frustrating. We've had nearly 1.6 metres of rain at Trinity Beach since Wednesday and it's only getting worse before it gets better 🙏
 
Last edited:
  • Love
  • Like
  • Sad
Reactions: 29 users

Esq.111

Fascinatingly Intuitive.
View attachment 52161


Didn't even spell your name right Esq 😛

Well I guess it is Sunday..

I'll reply properly later on..
Evening DingoBorat,

Well spotted, 😃, & I thank thee.

Only consumed 1/2 litre of gin for the afternoon & two bottles of tonic water, so one will have to excuse small albeit quite irrelevant details.

Side note: my fruit trees will either be dead or gain extreme hybrid vigour.

Regards,
ESQ.
 
  • Haha
Reactions: 11 users

JDelekto

Regular


Can't remember at what time point, but Sean specifically says the edge box contains the AKD1000, the chip we had made and have stock of. At this point there's no AKD2000 reference chip, i.e. "hardware"; no doubt that will come in the future.

You are right on the money. Sean says that VVDN uses the AKD1000 chip at the 09:20 position in the video. Given the six-to-twelve-month timeline he noted about creation/testing, I would have hoped that Akida 2.0 was involved and they had early access, but this is not the case.

Given that the IP was made available in October, it could be from June until the end of next year before we see Akida 2.0. Although there was speculation as to how many AKD1000 chips were purchased long ago, based on the company's expenditures and the manufacturing costs from TSMC, I would have thought that a good number of them went into the Raspberry Pi, Shuttle PC, and PCIe devices that were sold for development. However, they may build those on the fly from their chip stock.

It would also explain why significant revenue is not expected from the Edge boxes. As I noted, if the VVDN box sporting Akida 1.0 performs on par or better than the other Edge box offerings by Qualcomm and NVIDIA (given processing speed, latency, power consumption, etc.), then I think that will speak volumes for their tech, notwithstanding the availability of second-generation Akida.
 
  • Like
  • Fire
Reactions: 17 users
Some might be drinking... sorry 😔, I mean thinking: why is the Company using these "old" AKD1000 chips in the new VVDN Edge Box?

The fact is, AKIDA 1.0 IP and AKD1000 chips have not been made obsolete, redundant, or outdated by AKIDA 2.0 IP.

They are still very much current technology and still way in front of the competition in neuromorphic hardware.

There is also the fact that at this stage of the Company's journey, we have to be prudent and make the most of the Company's resources, in all areas.

As Sean has stated, this year is crucial for BrainChip, and while it would be nice to pump out some AKD2000 chips and put the latest development of the technology in the Edge Boxes...

The Company simply does not have the funds to throw at that.


I feel strongly that, with our new high-calibre hires on board (especially in sales, as that's what we really need), this will be a very exciting year to come.

Good Luck to the Company and All Holders.

You need much more than luck in Life to succeed, but a little luck goes a long way 😉
 
Last edited:
  • Like
  • Love
  • Fire
Reactions: 41 users
Looks like Uni Cote D'Azur is exploring neuromorphic vision now.

This new role mentions basing some of this new project on possible previous work with SNNs, and I wonder if that relates to the other project I posted (below link) that specifically mentioned utilising Akida.

It will also be based at the same location as the previous project.

:unsure:






Université Côte d'Azur

Doctoral Student​

Université Côte d'Azur Greater Nice Metropolitan Area
Job poster: Jean Martinet

Context
These multiple proposals take place in the context of an international collaborative project co-funded by the French ANR and the Swiss NSF. The project, NAMED (Neuromorphic Attention Models for Event Data), will start on February 1st, 2024.

The field of embedded computer vision has become increasingly important in recent years as the demand for low-latency and energy-efficient vision systems has grown [Thiruvathukal et al. 2022]. One of the key challenges in this area is developing intelligent vision systems that can efficiently process large amounts of visual data while still maintaining high accuracy and reliability.

The biological retina has inspired the development of a new kind of camera [Lichtsteiner et al. 2008]: event-based sensors asynchronously measure per-pixel brightness changes and output a stream of events that encode the time, location, and sign of the brightness changes (positive or negative). In addition to eliminating redundancy, they benefit from several advantages over conventional frame cameras, from which they fundamentally differ. Event sensors are inspired by the human eye, which is primarily sensitive to changes in the luminance falling on its individual sensors. These changes are processed by layers of neurons in the retina through to the retinal ganglion cells, which generate action potentials, or spikes, whenever a significant change is detected. These spikes then propagate through the optic nerve to the brain.

Cognitive attention mechanisms, inspired by the human brain's ability to selectively focus on relevant information, can offer significant benefits in embedded computer vision systems. The human eye has a small high-resolution region (the fovea) in the center of the field of vision, and a much larger peripheral vision, which has much lower resolution, combined with an increased sensitivity to movement. Therefore, limited resources are deployed to extract the most salient information from the scene without wasting energy capturing the entire scene at the highest resolution. This foveation mechanism has inspired the recent development of a variable-resolution event sensor [Serrano-Gotarredona et al., 2022]. This sensor has electronic control of the resolution in selected regions of interest, allowing downstream computational resources to be focused on specific areas of the image that convey the most useful information. This sensor even goes beyond biology by allowing multiple regions of interest.


Scientific objectives
The general objective of this research project (with specific tasks for internship, PhD, postdoc) is to design and implement computer vision attention models adapted to event data. A first step will consist in studying state-of-the-art attentional mechanisms in deep networks and their link with cognitive attention as implemented in the brain. Cognitive attention refers to the selective processing of sensory information by the brain based on its relevance and importance to the current task or goal. It involves the ability to focus one's attention on specific aspects of the environment while filtering out irrelevant or distracting information. In particular, the study will distinguish between top-down and bottom-up attention. The second step will be the design of an attention architecture that allows selectively focusing on relevant regions while ignoring irrelevant parts, depending on the target task (e.g., segmentation, object tracking, obstacle avoidance, etc.). The model will be based either on standard deep networks or on spiking neural networks, based on previous work [GIT]. Spiking Neural Networks are a special class of artificial neural networks, where neurons communicate by sequences of asynchronous spikes. Therefore, they are a natural match for event-based cameras due to their asynchronous operation principle. This selection of regions will result in less data usage and smaller models (a frugal system). In the third step, we will evaluate the impact of the attention mechanism on the general performance of the computer vision system. The target metrics will obviously depend on the selected task, and will include accuracy, mIoU, complexity, training time, inference time, etc. of the system.


Job information
Location

Université Côte d’Azur, Sophia Antipolis (Nice area) France

Types of contracts
Internship: duration 4-6 months / PhD: duration 36 months / Postdoc: duration 18 months

Job status
Full-time for all

Candidates’ profiles
Master 2 / PhD in Computer Science (Machine Learning, Computer Vision, AI) or Mathematics or Computational Neuroscience. Programming skills in Python/C++, interest in research, machine learning, bio-inspiration and neurosciences are required.

Salary
Standard French internship allowance / PhD salary / Postdoc (research engineer) salary by CNRS

Offer starting date
Internship: Around March 2024 / PhD: Flexible around October 2024 / Postdoc: Flexible from October 2024
(PhD opportunity after the internship)

Application period
From December 2023 to the offer starting date
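
For anyone wanting a concrete picture of the event encoding the posting describes, here is a small Python sketch (my own illustration, with an invented contrast threshold and values): each event is a (time, x, y, polarity) tuple emitted when the log-brightness at a pixel changes by more than the threshold.

```python
# Toy conversion of a pair of frames into DVS-style events: each event
# encodes time, pixel location, and the sign (polarity) of the
# log-brightness change. Illustrative only.
import numpy as np

def frames_to_events(prev, curr, t, contrast_threshold=0.2):
    """Emit (t, x, y, polarity) for pixels whose log-intensity changed enough."""
    eps = 1e-6
    delta = np.log(curr + eps) - np.log(prev + eps)
    ys, xs = np.nonzero(np.abs(delta) > contrast_threshold)
    return [(t, int(x), int(y), 1 if delta[y, x] > 0 else -1) for y, x in zip(ys, xs)]

prev = np.full((4, 4), 100.0)
curr = prev.copy()
curr[1, 2] = 160.0   # brighter -> positive event
curr[3, 0] = 60.0    # darker   -> negative event
print(frames_to_events(prev, curr, t=0.001))
# [(0.001, 2, 1, 1), (0.001, 0, 3, -1)]
```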
 
  • Like
  • Fire
  • Love
Reactions: 18 users

Perhaps

Regular
AKIDA 2.0 is available, but there hasn't been an announcement of a physical AKD2000 reference chip being produced.
Commercial production needs a tape-out of the chip first. As of today, there exists no silicon wafer for production of AKD 2.0. A foundry tape-out is a long process, lasting several months, and the costs fall on Brainchip. The usual price for a tape-out is in the range of US$4-5 million. AKD1000 and 1500 are production-ready; AKD 2.0 is not. Hope the announcement of a tape-out will come soon, but maybe the cash reserve isn't good enough to do it now.
The quarters of runway quoted in financial reports are based on fixed costs for employees, buildings and so on. Extra costs like a tape-out are not part of this calculation. Looks like additional funding is needed first.
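
A back-of-envelope illustration of that runway point, with entirely hypothetical numbers: the "quarters of runway" figure divides cash by recurring fixed costs, so a one-off tape-out shortens it materially.

```python
# Hypothetical runway arithmetic; every figure below is invented for
# illustration, not taken from any Brainchip report.
cash = 20_000_000              # hypothetical cash balance, US$
fixed_per_quarter = 5_000_000  # hypothetical recurring costs per quarter, US$
tape_out = 4_500_000           # one-off cost in the US$4-5M range mentioned above

print(cash / fixed_per_quarter)               # 4.0 quarters without a tape-out
print((cash - tape_out) / fixed_per_quarter)  # 3.1 quarters after paying for one
```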
 
Last edited:
  • Like
  • Thinking
Reactions: 6 users

Diogenese

Top 20
The human eye has a small high-resolution region (the fovea) in the center of the field of vision, and a much larger peripheral vision, which has much lower resolution, combined with an increased sensitivity to movement. Therefore, limited resources are deployed to extract the most salient information from the scene without wasting energy capturing the entire scene at the highest resolution. This foveation mechanism has inspired the recent development of a variable-resolution event sensor [Serrano-Gotarredona et al., 2022]. This sensor has an electronic control of the resolution in selected regions of interest, allowing to focus downstream computational resources on specific areas of the image that convey the most useful information. This sensor even goes beyond biology by allowing multiple regions of interest.


Interestingly Luminar, who have displaced Valeo on Mercedes' Christmas card list, and a couple of other lidar makers, have a foveated lidar system, which increases the laser point density in areas of interest. Apparently, this gives Luminar a greater range than Valeo.


WO2023057666A1 ELECTRONICALLY FOVEATED DYNAMIC VISION SENSOR 20211005

Applicant: CONSEJO SUPERIOR INVESTIGACION [ES]​

Inventors: LINARES BARRANCO, BERNABÉ [ES]; SERRANO GOTARREDONA, MARÍA TERESA [ES]






The present invention relates to a vision sensor comprising a matrix (1) of pixels (5) on which a foveation mechanism is used, defining a series of low resolution regions of grouped pixels (macro-pixels) such that they operate as a single isolated pixel (5), information being obtained from the groups of pixels (5) and not from each pixel (5) individually. Due to the low resolution regions of macro-pixels, energy and data bandwidth savings are achieved in favour of the high resolution regions that are not grouped or foveated. The regions of grouped pixels can be configured with external electronic signals. In addition, multiple high resolution or foveation regions as well as region sizes can be electronically activated.

As is known, $v_{ph}$, $v_{o1}$ and $v_{ob}$ are each proportional to $\log(I_{ph})$, so any of these voltages is proportional to the logarithm of the photocurrent. Defining a generic voltage $V_{log}(I_{ph,pixel}) \propto \log(I_{ph,pixel})$, and considering the added voltage $V_{sum} = \sum_i V_{log}(I_{ph,i}) \propto \log\left(\prod_i I_{ph,i}\right)$, then:

The effect of the sum of the voltages of the individual pixels (5) is equivalent to the multiplication of their photocurrents. The pixel group (5) will be sensitive to the relative variation of the product of all the photocurrents in the pixel group (5). The sensitivity of the group is proportional to the total number of pixels (5) in the group, so the event frequency generation would be multiplied by the number of pixels (5) for the same relative variation of the photocurrent and the width of the group. The output band consumed by the group in low resolution mode will be the same as that of all the individual pixels (5) in high resolution mode. A simple and practical circuit to add the voltages of the pixels (5) would be to interconnect the floating nodes of the capacitors C1/C2 of the pixels (5) of the group.

True to the European roots, they use analog.
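
Here is a small numerical sketch of how I read the macro-pixel trick (illustrative only, not the actual circuit): since each pixel voltage is proportional to log(I_photo), summing the voltages of a group is the same as taking the log of the product of its photocurrents, i.e. one "super pixel" per low-resolution region.

```python
# Numerical sketch of the patent's macro-pixel idea: sum of per-pixel
# log-voltages == log of the product of the group's photocurrents.
# All values are invented for illustration.
import numpy as np

rng = np.random.default_rng(0)
I = rng.uniform(1e-9, 1e-6, size=(8, 8))   # photocurrents of an 8x8 patch
V = np.log(I)                              # per-pixel voltage ∝ log(I)

# Group the patch into a 2x2 grid of 4x4 macro-pixels.
group = V.reshape(2, 4, 2, 4).sum(axis=(1, 3))

# Same quantity computed from the photocurrents directly:
check = np.log(I.reshape(2, 4, 2, 4).prod(axis=(1, 3)))
assert np.allclose(group, check)           # sum of logs == log of product
```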
 
  • Like
  • Thinking
  • Fire
Reactions: 13 users
Some other info on the originally spotted project using Akida.

The project is backed by Renault, so they would be aware of its progress and, subsequently, of Akida.


Also from a Renault 2020 report.




 
  • Like
  • Fire
  • Thinking
Reactions: 9 users
It would appear Prophesee are also involved, and the project runs till 2024.



 
  • Like
  • Fire
Reactions: 17 users

Diogenese

Top 20
It would appear Prophesee are also involved, and the project runs till 2024.



View attachment 52175
So the patent for the foveated DVS belongs to the Spanish research organization

CONSEJO SUPERIOR INVESTIGACION​

This is the sensor: while the foveation (pixel density) is digitally controlled, the pixel output is analog.

It is interesting that Prophesee is involved. I wonder if they have rights to use the foveated DVS?

CSIC seems to be the Spanish equivalent of CSIRO, so it is probable that they will license the technology.

As this is a DVS sensor whose output is analog, it would be a natural fit for Akida with an appropriate ADC interface.
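
As a hypothetical sketch of such an ADC interface (the scaling and ranges here are invented for illustration), quantising an analog event magnitude into the 4-bit event values Akida 1 accepts, per the earlier post:

```python
# Hypothetical 4-bit ADC step between an analog DVS output and Akida:
# map analog magnitudes onto integer event codes 0..15. Illustrative only.
import numpy as np

def adc_4bit(analog, full_scale=1.0):
    """Map analog magnitudes in [0, full_scale] to integer codes 0..15."""
    codes = np.round(np.clip(analog, 0.0, full_scale) / full_scale * 15)
    return codes.astype(np.uint8)

samples = np.array([0.02, 0.33, 0.74, 0.98, 1.30])
print(adc_4bit(samples))   # [ 0  5 11 15 15] -- last value clipped at full scale
```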


 
  • Like
  • Fire
Reactions: 13 users

TheFunkMachine

seeds have the potential to become trees.
  • Like
  • Fire
  • Love
Reactions: 24 users

Damo4

Regular
 
  • Like
  • Fire
  • Love
Reactions: 75 users

Damo4

Regular


Thought I read Uniden at first and nearly fainted.
Unigen will do for now.
 
  • Haha
  • Like
  • Wow
Reactions: 20 users