BRN Discussion Ongoing

Makeme 2020

Regular
Renesas Touchless solutions at Embedded world 2022....
 
  • Like
  • Fire
  • Love
Reactions: 26 users
D

Deleted member 118

Guest
Evening Chippers,

On days, weeks and months like these it would be nice to see management put their hands in their back pockets and do on-market buys.

It would go a long way to settling investors' nerves; after all, our CEO has said we are undervalued.

It's one thing to be on six-figure salaries with large free share/option issues... Completely different to showing true faith in the company and physically purchasing stock in the company they are running with their own cash.

Regards,
Esq.
Has anyone in the company ever bought any shares? Or are they all freebies?

 
  • Like
  • Fire
Reactions: 5 users

Diogenese

Top 20
Has anyone in the company ever bought any shares? Or are they all freebies?


How do you think the company obtained the rights to Akida?
 
  • Like
  • Love
Reactions: 10 users

Slymeat

Move on, nothing to see.
How do you think the company obtained the rights to Akida?
I can't understand how that answers the original question of “Has anyone in the company ever bought any shares? Or are they all freebies?” But I would love to understand your reasoning.

One line of reasoning may be that the rights had a value which was exchanged for shares. But that would be a little tenuous, as an intangible value had to first be assigned to the rights.

I assume the original question was targeted at the age old idea of people having skin in the game by risking their existing personal wealth and purchasing shares with their own cash.
 
  • Like
  • Fire
  • Love
Reactions: 9 users
Recent article on Event Cams and focus on Prophesee & Sony.

Gave the author some slack as our Ann with Prophesee was, what, mid-June and this article was published July 1 apparently, so she probs wasn't aware of the new partner in town :)

I flicked a note to her to give her the update & link to the joint statement from BRN / Prophesee haha

Highlighted a couple of sections in red that support the need for Akida and Prophesee's interest imo.





From the July/August 2022 issue:

Event Cameras: A New Imaging Paradigm

Susan Curtis

Unlike conventional cameras, neuromorphic cameras mimic how human eyes work by detecting and recording only changes in a scene, opening doors to new imaging possibilities.

[Figure: Robotics and Perception Group, University of Zurich, Switzerland]


Imagine the human eye as an optical sensor. It captures a constant stream of visual data, converting incoming light into electrochemical signals that allow the brain to create a panoramic view of the surroundings. But the retina that lines the back of the eye achieves much more than a simple photodiode: a succession of specialized cells decodes the raw optical data, extracting the most important features, discarding redundant information, and sending to the brain only what’s useful for decision making and producing a dynamic image. This crucial pre-processing step allows the brain to make sense of vast amounts of raw optical data more quickly, reconstructing a 3D view of the world in real time and allowing humans to react almost instantaneously to fast-moving events.
It’s hardly surprising, then, that this natural imaging solution’s elegance and efficiency have inspired scientists and engineers to create artificial vision systems that mimic biology. In the late 1980s, at the California Institute of Technology (Caltech), USA, Ph.D. student Misha Mahowald worked with microelectronics pioneer Carver Mead to create the first silicon chips to emulate the biological function of the retina, generating a similar response in real time to the signals observed in the human visual system.

“We have taken the first step in simulating the computations done by the brain to process a visual image,” wrote Mahowald and Mead in “The Silicon Retina,” a landmark 1991 article in Scientific American. “Our success persuades us that this approach not only clarifies the nature of biological computation but also demonstrates that the principles of neural information processing offer a powerful new engineering paradigm.”

[Figure: Illustration by Phil Saunders]

Only the changes

The breakthrough demonstrations by the two Caltech researchers ignited the field of neuromorphic engineering—the idea that building electronic systems that replicate the neural architecture of the brain might enable more efficient analysis and computation by taking a differential approach instead of an integrated one, while also offering crucial insights into the way the brain works. Neuromorphic engineering now extends to all areas of sensory perception and processing, with initiatives such as the EU’s Human Brain Project even attempting to build artificial brains to explore the complexity of human cognition and to devise more powerful and more efficient computer architectures.

Meanwhile, the early silicon retinas built by Mahowald and Mead have evolved into a new paradigm of neuromorphic vision systems that can respond much more quickly to high-speed events—while also consuming far less power than standard cameras.

These bio-inspired image sensors exploit the same optical design and layout as a standard CMOS camera, and indeed have benefited from the manufacturing improvements that over the last decade have reduced the pixel size and enhanced the spatial resolution of all silicon-based devices. The difference is that each pixel in a neuromorphic camera operates independently, triggering a response only when the intensity of light exceeds a predefined threshold. In contrast to a conventional camera, which records a complete picture of the scene at regular time intervals, these image sensors capture an event only when one of these smart pixels detects some sort of change.

“An event camera only measures motion in the scene,” explains Davide Scaramuzza, a professor of robotics and perception at the Institute of Neuroinformatics, a joint research center of the University of Zurich and ETH Zurich in Switzerland that Mahowald helped to establish. “Whenever a pixel detects a change in intensity, possibly caused by motion or by blinking patterns, that pixel triggers an event. If nothing moves, there is no information.” In practice, that means the images recorded by an event camera show only the contours of moving objects, rather than the full-color pictures captured by a normal video camera.

This event-driven paradigm has several important consequences for image sensors. For a start, recording the time and location of each discrete event makes it possible to detect any movement in the scene with microsecond resolution. “Most digital cameras record a new frame once every 30 milliseconds, but an event-based camera produces a constant stream of asynchronous events,” explains Scaramuzza. “The output is continuous in both space and time.”
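
To make the event representation concrete, here is a minimal Python sketch (my own illustration, with an arbitrary threshold value; it is not how any vendor's sensor readout actually works) of the mechanism described above: each pixel compares its current log intensity against the level at its last event and emits a timestamped, signed event whenever the change exceeds a contrast threshold.

```python
# Minimal sketch only: emulate event generation from per-pixel log-intensity
# samples. Each pixel fires an ON/OFF event when its log intensity has drifted
# by more than a contrast threshold since that pixel's last event.
from dataclasses import dataclass
import numpy as np

@dataclass
class Event:
    t_us: int       # timestamp, microseconds
    x: int          # pixel column
    y: int          # pixel row
    polarity: int   # +1 brighter, -1 darker

def events_from_frames(frames, timestamps_us, threshold=0.2):
    """Convert a dense image sequence into a sparse, asynchronous event list."""
    ref = np.log(frames[0].astype(np.float64) + 1e-6)   # log intensity at each pixel's last event
    events = []
    for frame, t in zip(frames[1:], timestamps_us[1:]):
        log_i = np.log(frame.astype(np.float64) + 1e-6)
        diff = log_i - ref
        for polarity in (+1, -1):
            ys, xs = np.where(polarity * diff >= threshold)
            events.extend(Event(int(t), int(x), int(y), polarity) for x, y in zip(xs, ys))
            ref[ys, xs] = log_i[ys, xs]                  # reset reference where an event fired
    events.sort(key=lambda e: e.t_us)
    return events
```

A real sensor does this asynchronously in analog circuitry at every pixel; the frame-based emulation above is only meant to show what the output looks like: a sparse, time-ordered list of (t, x, y, polarity) records rather than dense images.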

[Figure: Event cameras can capture the super-fast dynamics of a scene more effectively than a standard image sensor, and can enable an autonomous drone to detect and dodge a flying ball around 10 times more quickly than one equipped with a frame-based camera. Robotics and Perception Group, University of Zurich, Switzerland]


That makes event cameras ideal for applications that demand fast response times, such as robotics and self-driving cars. As an example, Scaramuzza and his colleagues have tested the performance of an event camera when a ball is thrown at an autonomous drone. “The ability to dodge a moving obstacle is particularly difficult in robotics since in this case the speed between the drone and the ball can reach around 10 m/s, or 36 km/h,” he says. “With a standard camera, you would need to wait at least 30 milliseconds to capture two successive frames, which is too long for the drone to safely avoid the ball.”

In contrast, Scaramuzza’s tests showed that an event camera can detect the ball and instruct the drone to make an evasive maneuver within just 3.5 ms—almost 10 times faster than with a standard camera. In addition to allowing drones to react more quickly to unpredictable events, this type of system could improve the responsiveness and safety of autonomous driving systems, potentially reducing reaction times to fractions of a millisecond.
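
As a rough sanity check on those numbers (my own back-of-the-envelope arithmetic, not from the article), the distance the ball closes during each sensor's reaction window is

$$ d = v\,\Delta t:\qquad 10\ \mathrm{m/s} \times 0.030\ \mathrm{s} = 0.30\ \mathrm{m} \quad\text{versus}\quad 10\ \mathrm{m/s} \times 0.0035\ \mathrm{s} = 0.035\ \mathrm{m}, $$

so a frame-based pipeline only registers the ball after it has already closed roughly 30 cm, against about 3.5 cm for the event-based one.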

Such low latency also offers important advantages for industrial automation and machine vision, as well as for augmented- and virtual-reality systems. For the holograms produced by these systems to appear realistic, explains Scaramuzza, any gesture or movement of the head must be rendered on the virtual scene within about 10 ms. “Your brain perceives that something is wrong if it takes any longer,” he says. “You might get motion sickness, or it just doesn’t seem true to life. Anything you can do to shorten the interval is an advantage.”

Sparse data, high dynamic range​

Event-based sensors offer other benefits, too. Each discrete event contains only a tiny amount of information, which can massively reduce the amount of data that must be processed and stored—particularly in applications where the data are inherently sparse. Such low-data-rate operation can drive down the device’s power consumption to the milliwatt regime, an order of magnitude lower than for frame-based image sensors.

Event cameras also boast a very high dynamic range—typically up to 140 dB, compared with 60 dB for a smartphone camera. That means they can capture high-quality data both in bright sunlight and at night, and thereby cope with the changing lighting conditions typical of automotive applications and industrial environments.
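
For context (this is the standard conversion for sensor dynamic range, not a figure from the article), dB values map to intensity ratios via $\mathrm{DR} = 20\log_{10}(I_{\max}/I_{\min})$:

$$ 140\ \mathrm{dB} \;\Rightarrow\; I_{\max}/I_{\min} = 10^{140/20} = 10^{7}, \qquad 60\ \mathrm{dB} \;\Rightarrow\; 10^{60/20} = 10^{3}, $$

i.e. roughly a ten-million-to-one span of usable light levels against a thousand-to-one for a typical smartphone sensor.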

Such a unique combination of properties has proved particularly useful for tracking satellites and other objects in space, as Greg Cohen at Western Sydney University in Australia has discovered. “For space applications, all we really want is high dynamic range and a camera that captures the movement of stars and satellites,” he says. “With a normal telescope, you spend a lot of energy and bandwidth taking pictures of empty space, but a camera that only senses movement strips out all that irrelevant information.”

Once Cohen and his team had fitted an event camera to a telescope, they realized that it offered a whole new approach to space observation. “An event-based device cannot compete with the sensitivity of a camera that integrates over time, but it can capture small changes that might otherwise get missed,” he says. “That’s really important for certain tasks, such as looking at satellites, because you can detect small movements that indicate whether it’s tumbling or drifting off course.”

[Figure: Astrosite, a mobile observatory built by Greg Cohen and colleagues that fits inside a shipping container. The high dynamic range of an event camera allows the observatory to operate day or night. International Centre for Neuromorphic Systems, Western Sydney University]
Since event cameras do not capture data over a fixed time interval, images can even be recorded when the telescope is being moved. That has inspired Cohen and his colleagues to build a mobile observatory that fits inside a shipping container, allowing it to be transported anywhere in the world. “You can just put it down, plug it in and start doing some observing wherever you might be,” he says. “You can even stop near a highway because it’s much less sensitive to vibrations and other types of motion. Sometimes I even like to tap the telescope to make it easier to see things.” The low power and low data rate of event cameras have also enabled Cohen and his team to design two space-based instruments that are currently in orbit on the International Space Station.

Commercial potential

The novel applications of event cameras now being demonstrated in both academia and industry have been enabled in large part by ongoing advances in sensor technology. In the 1990s, Mahowald collaborated with Tobi Delbrück at Zurich’s Institute of Neuroinformatics to improve the design of the early silicon retinas, with Delbrück then working with Patrick Lichtsteiner to unveil the first practical system for event-based sensing in 2005. This demonstration device, with its 64×64-pixel array, kickstarted more than a decade of innovations in design and engineering, yielding event-based sensors that pack in more than a million pixels to improve both their resolution and dynamic range.
[Figure: The IMX636 event-based vision sensor was produced in a collaboration between Sony and Prophesee. Prophesee]


Commercial devices are also now emerging that leverage the same manufacturing technologies as more established types of image sensor. The French startup Prophesee has collaborated with Sony to release a new event-based vision sensor with a center-to-center distance between pixels, or a pixel pitch, of just 4.86 µm compared with 15 µm for Prophesee’s previous product, Metavision. In the new device, the photodiode is fabricated using a dedicated process and then stacked on top of the transistor layer, improving the electro-optic performance and using the silicon area more efficiently. “In our previous sensor, the photodiode occupied only a quarter of the pixel’s surface, which meant we were losing more than half of the photons,” explains Luca Verre, Prophesee’s CEO. “The fill factor in the new design is more than 80%, which improves the sensitivity of the sensor as well as its low-light performance.”

Two versions of the sensor are now available—one with a 1280×720-pixel array and a smaller 640×512 device—with an evaluation kit to support rapid prototyping. Initially, Prophesee and Sony are targeting applications in industrial automation, with Prophesee having already worked with companies specializing in machine vision such as Imago, CenturyArks and Lucid to demonstrate practical event-based solutions that can be used for real-time monitoring and control of production processes. “The partnership with Sony provides extra credibility for our efforts so far,” says Verre. “We will also be working together to develop customer opportunities and to boost the adoption of the technology across different applications.”

Need for software

One key market in the two companies’ sights is the Internet of Things (IoT), where intelligent vision systems operating at the edge of the network could capture and process information locally, rather than transmitting all the data to a central computer. Such always-on systems could be used for autonomous monitoring of dynamic processes, such as the flow of traffic or people in busy areas, as well as for smart-home devices and human–machine interfaces.

However, an intelligent vision system also needs software to make sense of the raw optical data, and most conventional algorithms for computer vision and deep learning process frame-based information. One simple solution is to impose an artificial frame rate on the event data, essentially summing all of the discrete events that have been triggered within a short time interval. This offers flexible frame rates while still using readily available software, but clearly sacrifices some of the temporal resolution that event cameras can achieve. “It’s not great to grab this interesting data out of the camera—which is providing both the time and location of the change—and then wrangling it back into a frame,” says Cohen. “To realize the benefit, you really need to rethink the way you process the information.”
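
For what it's worth, the frame-accumulation workaround Cohen is criticising is easy to sketch (illustrative code only; it assumes events arrive as (t_us, x, y, polarity) tuples sorted by timestamp, and the 33 ms window is an arbitrary choice):

```python
# Illustrative only: collapse an asynchronous event stream into fixed-rate
# "event frames" so ordinary frame-based vision code can consume it,
# at the cost of the microsecond timing described above.
import numpy as np

def accumulate_events(events, width, height, window_us=33_000):
    """events: iterable of (t_us, x, y, polarity) tuples sorted by t_us."""
    frames = []
    frame = np.zeros((height, width), dtype=np.int32)
    window_start = None
    for t_us, x, y, polarity in events:
        if window_start is None:
            window_start = t_us
        while t_us >= window_start + window_us:    # close any finished windows
            frames.append(frame)
            frame = np.zeros((height, width), dtype=np.int32)
            window_start += window_us
        frame[y, x] += polarity                    # net brightness change per pixel
    if window_start is not None:
        frames.append(frame)                       # last, possibly partial, window
    return frames
```

Everything that happens inside one 33 ms window lands on the same grid, which is exactly the timing information the event camera went to such trouble to preserve.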

Researchers have therefore devised alternative algorithms that return an output every time a new event is triggered. As with standard computer vision, most of these approaches rely on neural networks consisting of thousands or even millions of artificial neurons, or nodes. In conventional systems, all the nodes in the network are connected together, so the whole network is updated when a new frame of data is processed. In an event-by-event framework, however, each artificial neuron only produces a response—or a spike—once it reaches a specific threshold.

These so-called spiking neural networks more closely mimic the way the brain works, with each neuron transmitting a signal to other nodes in the network only when it is triggered by an event. “We try to build systems that work iteratively, and that also use the time between events as a source of information,” explains Cohen. “Processing the data as it arrives means that you don’t have to store it, which makes it possible to handle large amounts of information with very little bandwidth.”
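
As a toy illustration of that threshold-and-spike behaviour, here is a generic leaky integrate-and-fire neuron (my own example with arbitrary parameters, not the internals of any BrainChip, SynSense or Intel device):

```python
# Toy leaky integrate-and-fire neuron: integrate weighted input spikes,
# leak between time steps, and emit a spike only when a threshold is reached.
def lif_neuron(input_spikes, weight=0.3, leak=0.95, threshold=1.0):
    """Return the time steps at which the neuron fires for a 0/1 input train."""
    membrane = 0.0
    fired_at = []
    for t, s in enumerate(input_spikes):
        membrane = membrane * leak + weight * s   # integrate input, decay over time
        if membrane >= threshold:                 # fire and reset at threshold
            fired_at.append(t)
            membrane = 0.0
    return fired_at

# A sustained burst of input spikes drives the neuron over threshold; gaps let it leak away.
print(lif_neuron([1, 1, 1, 1, 0, 0, 1, 1, 1, 1, 1]))   # -> [3, 9]
```

Between input spikes the membrane value simply decays, so the timing of the inputs, not just their count, determines when the neuron fires, which is the "time between events as a source of information" idea Cohen describes.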

Spiking algorithms have already proved their ability to track moving objects with very low power consumption, but in most cases, they still run on computer processors that use around 10 W in standby mode. “We need to co-design the hardware and software for processing this event-based data,” says Scaramuzza. “That will make it possible to demonstrate complete vision solutions that leverage the microsecond temporal resolution of event cameras while also operating on very little power.”

Such a complete imaging solution would combine an event camera with a so-called neuromorphic processor—an alternative computing platform rooted in the ideas of Carver Mead that exploits transistors to emulate the way the brain works. Instead of the sequential approach taken by conventional digital processing technologies, such bio-inspired processors exploit massively parallel circuits that act as artificial neurons, ready to fire whenever a new event is detected. Some of these neuromorphic processors, such as the Loihi neuromorphic research chip developed by Intel, have already been shown to operate much more quickly and on much less power than a conventional computer chip.

However, the best performance can be achieved by developing application-specific solutions that combine neuromorphic architectures for sensing and processing. “If you can specialize the camera and specialize the processing, you can achieve the power efficiency, robustness and reliability that biology offers,” says Cohen. “That’s what we’re all aiming and striving for, but we’re not quite there yet.”

Technologies such as field-programmable gate arrays (FPGAs) offer an accessible way for researchers to experiment with these neuromorphic approaches. “Once we have developed an algorithm to solve a particular problem, we simulate its performance using a conventional computer,” Cohen explains. “If the results look promising we can start to build the algorithm on an FPGA, and if that looks good we can start to build the circuitry in silicon. With each progression, we can improve the power efficiency by orders of magnitude.”

[Figure: The explosion of a water balloon, as seen by an event camera. Robotics and Perception Group, University of Zurich, Switzerland]

Integrated solutions

Building a solution directly in silicon presents a whole new challenge, but fully integrated neuromorphic solutions for vision applications are now starting to emerge. In 2019, for example, the Chinese/Swiss startup SynSense released a neuromorphic processor that has been optimized to work with event-based image sensors. SynSense’s DYNAP-CNN chip incorporates more than a million spiking neurons and four million programmable parameters, allowing implementation of different algorithms for processing event-based data, and offers a latency below 5 ms and a power efficiency that SynSense claims is 10 to 100 times greater than standard processors.

[Figure: The DYNAP-CNN chip from SynSense has been optimized for processing event-based data. SynSense]


SynSense has now partnered with Prophesee to build a single-chip solution that combines its DYNAP-CNN processor with the Metavision event sensor. The initial objective will be to create a small device with a 128×128-pixel array, which should be sufficient for short-range applications such as smart-home devices, facial recognition and simple human-machine interfaces. “By integrating everything on the same chip, we will be able to process data on the fly, allowing us to drive down both the power consumption and the latency,” says Verre. “We also believe we can manufacture the chip at a low enough cost to address always-on IoT applications.”

The first devices are expected to be ready to ship to customers by the end of the year. Verre points out that Prophesee has already shown that the data recorded with its sensor can be processed by Intel’s Loihi chip, while the DYNAP-CNN processor has been designed to accept a direct feed of data from an event camera. “One of the challenges will be programming the chip for a specific task,” says Verre. “At least in the beginning, we need to work with our customers to develop their applications because it will require some dedicated software tools as well as specialized data collection for training the neural network.”

Despite such impressive advances in technology, event cameras have yet to enter the mainstream. One issue has been the cost, with state-of-the-art event cameras typically priced at around US$5,000. But Verre believes that Prophesee’s manufacturing breakthrough with Sony should make event cameras more competitive with more established imaging technologies. “We are not using any exotic technology to manufacture the sensor or the camera module,” he says. “Our pixel size is now in the same ballpark as for a time-of-flight or global-shutter sensor, which should enable the more aggressive price positioning that’s needed to open up high-volume applications in the consumer space.”

Indeed, France-based market analysis firm Yole Développement predicts that the global market for neuromorphic technologies could reach US$5 billion by 2030, driven largely by novel imaging solutions for mobile phones. Other consumer applications are likely to emerge in wearable technologies and smart-home devices, along with automotive applications, autonomous drones and industrial robotics. “It’s no longer a technology or manufacturing challenge, it’s more of a business challenge to get the technology adopted in some of these consumer applications,” says Verre. “That will enable us to reach the scale we need to access more advanced technology nodes and further reduce the size and cost of these neuromorphic solutions.”

Susan Curtis is a freelance science and technology writer based in Bristol, UK.
 
  • Like
  • Love
  • Fire
Reactions: 31 users
Another RT like. Interesting:
[image attachment]
 
  • Like
  • Love
Reactions: 23 users

alwaysgreen

Top 20
Evening Rocket577,

I have only been invested in Brainchip for three years and in that time I have not once seen any of our management purchase stock privately.

Disappointing from a medium / long term holders perspective.

Their strategy would appear to be

1, Salary $
2, bonus cash $
3, bonus shares $
4, bonus shares restricted $
5, bonus options restricted $
6, medical benefits $
7, & free lunches.

With sporadic selling of stock in between.

Regards,
Esq.

In regard to Peter and Anil, I believe they have very likely put everything into this that they possibly could: blood, sweat and tears, along with the sacrifices of missed birthdays, not putting their kids to bed, etc. While they are now wealthy individuals with the value of their shares, they likely don't have a lot of free cash to purchase more shares given they have been working solely on Akida for so many years. If this thing works out the way they and we hope, they will have created generational wealth for their families, but I don't believe they need to purchase on market to prove their belief in the company.

For someone like Sean Hehir or Rob Telson, it would be nice to see them chuck a few $$ in, I don't disagree. It would be a huge confidence boost for me.
 
  • Like
  • Love
  • Fire
Reactions: 31 users
D

Deleted member 118

Guest
Evening Rocket577,

I have only been invested in Brainchip for three years and in that time I have not once seen any of our management purchase stock privately.

Disappointing from a medium / long term holders perspective.

Their strategy would appear to be

1, Salary $
2, bonus cash $
3, bonus shares $
4, bonus shares restricted $
5, bonus options restricted $
6, medical benefits $
7, & free lunches.

With sporadic selling of stock in between.

Regards,
Esq.
Why should they when they can get 'em for

 
  • Haha
  • Like
Reactions: 6 users

butcherano

Regular
I admit to only reading the abstract but that makes me believe this is rebranding of an existing technology: encrypted Pulse Code Modulation—the standard way of all computers to currently securely transmit audio. The ONLY inventive part being the involvement of a neural network used to encode and decode based off an acoustic signature ( and then still, a signature used to distort a spatio-temporal distribution—aka encryption of PCM). The learning part is a bit novel, and more possible with a neural network, but then what will stop a hostile intercept entity from using the same “embodiment” and learning also?

As with MANY patents I am amazed that this was able to be patented.

Sometimes I think patents exist just to keep the patent office in a job.
It's interesting that this BrainChip patent was cited by Toyota and Samsung though. Doesn't mean that we're in bed with them in any way, but definitely lends some importance to the technology covered and the need for the patent in the first place...imo...

Toyota - Audible notification systems and methods for autonomous vehicles

Samsung - Method for converting neural network and apparatus for recognizing using the same
 
  • Like
  • Fire
  • Love
Reactions: 27 users

Xhosa12345

Regular
Perhaps they aren’t allowed to buy any… given that they are privy to what’s in the non-disclosure agreements, it could be considered insider trading. The individual who told me that they had personally bought shares was not aware of what was in the NDAs, whereas I imagine that Rob Telson is.

100%

not publicly available information - so the company walls are holding up ok
 
  • Like
Reactions: 3 users

Quiltman

Regular
I'm not suggesting this is Akida, but what a transformative evolution neuromorphic computing is about to release upon the world.

Artificial intelligence model can detect Parkinson’s from breathing patterns

An MIT-developed device with the appearance of a Wi-Fi router uses a neural network to discern the presence and severity of one of the fastest-growing neurological diseases in the world.

Alex Ouyang | Abdul Latif Jameel Clinic for Machine Learning in Health
Publication Date:
August 22, 2022
[Figure: A new neural network trained by MIT PhD student Yuzhe Yang and postdoc Yuan Yuan assesses whether or not someone has Parkinson’s from their nocturnal breathing. Image courtesy of the researchers.]

[Figure: A wall-mounted device developed at MIT and powered by artificial intelligence can detect Parkinson’s disease from ambient breathing patterns. There is no need for the user to interact with the device or change their behavior in order for it to work. Photo courtesy of the researchers.]

[Figure: The system extracts nocturnal breathing signals either from a breathing belt worn by the subject, or from radio signals that bounce off their body while asleep. It processes the breathing signals using a neural network to infer whether the person has Parkinson’s, and if they do, assesses the severity of their disease in accordance with the Movement Disorder Society Unified Parkinson’s Disease Rating Scale. Image courtesy of the researchers.]
Parkinson’s disease is notoriously difficult to diagnose as it relies primarily on the appearance of motor symptoms such as tremors, stiffness, and slowness, but these symptoms often appear several years after the disease onset. Now, Dina Katabi, the Thuan (1990) and Nicole Pham Professor in the Department of Electrical Engineering and Computer Science (EECS) at MIT and principal investigator at MIT Jameel Clinic, and her team have developed an artificial intelligence model that can detect Parkinson’s just from reading a person’s breathing patterns.
The tool in question is a neural network, a series of connected algorithms that mimic the way a human brain works, capable of assessing whether someone has Parkinson’s from their nocturnal breathing — i.e., breathing patterns that occur while sleeping. The neural network, which was trained by MIT PhD student Yuzhe Yang and postdoc Yuan Yuan, is also able to discern the severity of someone’s Parkinson’s disease and track the progression of their disease over time.
Yang is first author on a new paper describing the work, published today in Nature Medicine. Katabi, who is also an affiliate of the MIT Computer Science and Artificial Intelligence Laboratory and director of the Center for Wireless Networks and Mobile Computing, is the senior author. They are joined by Yuan and 12 colleagues from Rutgers University, the University of Rochester Medical Center, the Mayo Clinic, Massachusetts General Hospital, and the Boston University College of Health and Rehabilitation.
Over the years, researchers have investigated the potential of detecting Parkinson’s using cerebrospinal fluid and neuroimaging, but such methods are invasive, costly, and require access to specialized medical centers, making them unsuitable for frequent testing that could otherwise provide early diagnosis or continuous tracking of disease progression.
The MIT researchers demonstrated that the artificial intelligence assessment of Parkinson's can be done every night at home while the person is asleep and without touching their body. To do so, the team developed a device with the appearance of a home Wi-Fi router, but instead of providing internet access, the device emits radio signals, analyzes their reflections off the surrounding environment, and extracts the subject’s breathing patterns without any bodily contact. The breathing signal is then fed to the neural network to assess Parkinson’s in a passive manner, and there is zero effort needed from the patient and caregiver.
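
Purely to visualise the shape of the pipeline being described, here is a placeholder sketch (my own toy architecture and sizes in PyTorch; it is not the MIT model, which is specified in the Nature Medicine paper): one nocturnal breathing trace goes in, and two outputs come back, a Parkinson's probability and a severity-style score.

```python
# Toy stand-in for a breathing-to-diagnosis network: a small 1D CNN that maps a
# nocturnal breathing signal to (probability of Parkinson's, severity estimate).
# Layer sizes, sampling rate and signal length are arbitrary placeholders.
import torch
import torch.nn as nn

class BreathingNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=9, stride=4), nn.ReLU(),
            nn.Conv1d(16, 32, kernel_size=9, stride=4), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
        )
        self.diagnosis = nn.Linear(32, 1)   # Parkinson's yes/no (logit)
        self.severity = nn.Linear(32, 1)    # regression against an MDS-UPDRS-style score

    def forward(self, breathing):           # breathing: (batch, 1, samples)
        h = self.features(breathing).squeeze(-1)
        return torch.sigmoid(self.diagnosis(h)), self.severity(h)

# One night of breathing sampled at 10 Hz for 8 hours -> 288,000 samples.
signal = torch.randn(1, 1, 288_000)
p_parkinsons, updrs_estimate = BreathingNet()(signal)
```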
 
  • Like
  • Love
  • Fire
Reactions: 34 users
I admit to only reading the abstract but that makes me believe this is rebranding of an existing technology: encrypted Pulse Code Modulation—the standard way of all computers to currently securely transmit audio. The ONLY inventive part being the involvement of a neural network used to encode and decode based off an acoustic signature ( and then still, a signature used to distort a spatio-temporal distribution—aka encryption of PCM). The learning part is a bit novel, and more possible with a neural network, but then what will stop a hostile intercept entity from using the same “embodiment” and learning also?

As with MANY patents I am amazed that this was able to be patented.

Sometimes I think patents exist just to keep the patent office in a job.
Give them more, I say. Without the employee incentive scheme we would be up that well-known creek without a paddle. We don't pay big salaries compared to peers, so they are compensated with shares IMO. The more they get, hopefully the better the job they are doing. Hard to get quality people in this space, so I'm all for it, not to mention it was voted in favour by shareholders!

SC
 
  • Like
  • Love
Reactions: 26 users

Slymeat

Move on, nothing to see.
Give them more, I say. Without the employee incentive scheme we would be up that well-known creek without a paddle. We don't pay big salaries compared to peers, so they are compensated with shares IMO. The more they get, hopefully the better the job they are doing. Hard to get quality people in this space, so I'm all for it, not to mention it was voted in favour by shareholders!

SC
I agree, and always vote for resolutions that ask for shares as part of remuneration packages for execs. It incentivises them to improve a KPI that I, as an investor, am also very interested in—the share price.
 
  • Like
  • Fire
Reactions: 8 users
I agree, and always vote for resolutions that ask for shares as part of remuneration packages for execs. It incentivises them to improve a KPI that I, as an investor, am also very interested in—the share price.
They weren't issued to key personnel and could be shared by 30 different employees who have worked their socks off.
Employees with skin in the game, I'm all for it, and I think it's usually part of a retention policy: leave before a certain date and you lose them.

SC
 
  • Like
  • Love
Reactions: 9 users

alwaysgreen

Top 20
I'd prefer if they were not promised millions of performance/bonus shares with low targets.
They would have preferred for you to have developed the world's first commercially available neuromorphic chip, but that isn't what happened.

It was them. Their time. Their brains. Their labour.

This is their reward.
 
  • Like
  • Fire
Reactions: 31 users

jk6199

Regular
Below a dollar, where's that secret stash of money???
 
  • Like
  • Haha
Reactions: 3 users
D

Deleted member 118

Guest
Give them more, I say. Without the employee incentive scheme we would be up that well-known creek without a paddle. We don't pay big salaries compared to peers, so they are compensated with shares IMO. The more they get, hopefully the better the job they are doing. Hard to get quality people in this space, so I'm all for it, not to mention it was voted in favour by shareholders!

SC

 
There has been much said about the failure of Brainchip Inc employees to buy Brainchip shares on market, most of which is either uninformed or deliberately dishonest.

Since 2020 the company has stated in writing and in oral presentations that it has multiple NDAs, some of which have now been disclosed, such as NASA, Ford, Valeo, Mercedes-Benz, Renesas and MegaChips. In more recent months the company has also disclosed a number of commercial partnerships with the likes of ARM, Edge Impulse, Prophesee, SiFive, Intellisense, ISL (formerly an NDA EAP), US Air Force Research and Nviso.

Since 2020 retail shareholders have begged the company for more information about these NDAs, including the 10 or so that remain undisclosed, and about these commercial partnerships, and have been advised on multiple occasions that these companies, as a result of their NDAs and their desire to keep secret what they intend or hope to do with AKIDA, have insisted that Brainchip and its employees keep these details secret.

Indeed, Rob Telson, in a very popular interview much cited here, stated that NASA was looking at AKIDA for vision and other things that they were prohibited from talking about.

More recently it has been cited time and time again on these threads, since the May 2022 AGM, that the CEO Sean Hehir stated that the only way retail shareholders will be able to follow the progress of the company, as a result of customer demands for secrecy, is to look to the 4Cs.

In keeping these details secret, ALL OF BRAINCHIP’S BOARD AND KEY MANAGEMENT PERSONNEL (and likely most employees) will be in possession of information that you, the retail shareholder, do not have, and if you did have this information to the exclusion of the market generally, it would permit you to trade either for profit or to minimise your future losses, and in so doing you would be insider trading.

The rules that prohibit you as a retail investor from engaging in insider trading if you stumbled across this information apply equally to ALL OF BRAINCHIP’S BOARD AND KEY MANAGEMENT PERSONNEL (and likely most employees) at Brainchip.

It is of course convenient for some here to ignore the fact that Tony Dawe was, prior to joining Brainchip Inc, a retail shareholder of Brainchip Inc, that he continues to hold his shares, and that he has lamented to various shareholders in his written communications that he is prohibited from buying shares on market in Brainchip Inc.

Why would this be so? Perhaps, using a little imagination, you might work out that, working closely as he does with Peter van der Made at the Perth Innovation Centre, Tony Dawe has insider knowledge of patents and technological developments around cortical columns, for example, to which you as an ordinary retail shareholder are not privy.

I have posted in the past about this and about the law and how it applies to insider trading, which is straightforward and easy to understand, and so I am left to ponder the motives of some who I know would have read my previous posts on this subject yet who are raising it again today.

Of course I note that one of these posters has claimed over a number of weeks a desire to purchase more Brainchip Inc shares when funds become available, and yet this poster is constantly posting negative views supported by nothing other than anxiety, and has now moved to posting completely misleading opinions about the failure of Brainchip’s Board and Key Management to buy shares on market, when this poster must know they are prohibited from doing so by law.

Unless this poster does not believe any of the company statements regarding the desire of the undisclosed NDA partners, and of those that have been disclosed, to keep secret what is being done with Brainchip’s AKIDA technology, then the following clause from ASX Guidance Note No. 27 covering insider trading is the complete answer, as inconvenient as it is for them and for those who are trying to manipulate other retail investors and drive down the share price to their advantage. Perhaps they are not trying to do this and just like to raise fear and doubt in others for kicks, or are complete fools who have absolutely no idea and are prepared to say whatever rubbish comes into their heads. It is not for me to judge.

4.6. A cautionary note about the application of insider trading laws

It should be noted that the fact a trade occurs during a permitted trading window, or outside a black-out period, under an entity’s trading policy does not preclude it from breaching insider trading laws, if it is undertaken or procured by someone in possession of inside information at the time. ASX would therefore recommend that an entity include a warning in its trading policy that a person who possesses inside information about an entity’s securities is generally prohibited from trading in those securities under insider trading laws and that this applies even where the trade occurs within a permitted trading window, or outside a black-out period, specified in the policy.


My opinion only but based on publicly available law and FACTS.

Kind regards
Fact Finder
 
  • Like
  • Love
  • Fire
Reactions: 205 users