BRN Discussion Ongoing

TECH

Regular

A LOOK AT THE TOP HOLDERS OF BRAINCHIP SHARES​

(An impressive list of big funds that are taking BrainChip seriously and are invested. For example, Citicorp owns nearly 1 in 10 shares of BrainChip, and Merrill Lynch (Australia) 1 in 20. Personally, I find it reassuring that we are on the right track. The professional big-boy investors have their hat in the ring with us; they believe we are on to something and they want in too.)
According to the company, BrainChip’s top 20 shareholders are as follows:
  1. Citicorp, with 9.15% of all outstanding shares
  2. Mr Peter Adrien van der Made, with 8.87%
  3. Merrill Lynch, with 4.88%
  4. BNP Paribas, with 4.75%
  5. HSBC, with 4.44%
  6. JPMorgan, with 2.82%
  7. BNP Paribas (DRP), with 2.53%
  8. HSBC (customer accounts), with 1.17%
  9. National Nominees, with 0.67%
  10. LDA Capital, with 0.52%
  11. BNP Paribas (Retail Clients), with 0.47%
  12. Mrs Rebecca Ossieran-Moisson, with 0.45%
  13. Crossfield Intech (Liebskind Family), with 0.4%
  14. Certane CT Pty Ltd (BrainChip’s unallocated long-term incentive plan), with 0.4%
  15. Mr Paul Glendon Hunter, with 0.35%
  16. Certane CT Pty Ltd (BrainChip’s allocated long-term incentive plan), with 0.35%
  17. Mr Louis Dinardo, with 0.34%
  18. Mr Jeffrey Brian Wilton, with 0.31%
  19. Mr David James Evans, with 0.31%
  20. Superhero Securities (Client Accounts), with 0.3%


Hi Sirod69.....thanks for that breakdown.

Nice to see Peter and Anil still sitting at number 2 & 3 respectively...holding 13.75% of the company, plus 56.52% currently being held by
the rest of us outside the top 20...combined giving a nice 70.27% of the company...just an observation at this point, nothing more than that.
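For anyone who wants to check those numbers, the register figures above do add up the way Tech says. A quick Python sanity-check (percentages are simply the ones listed above; treating the Merrill Lynch line as Anil's holding is Tech's reading, not something the list itself confirms):

```python
# Quick sanity-check of the top-20 register figures quoted above.
top20 = [9.15, 8.87, 4.88, 4.75, 4.44, 2.82, 2.53, 1.17, 0.67, 0.52,
         0.47, 0.45, 0.40, 0.40, 0.35, 0.35, 0.34, 0.31, 0.31, 0.30]

top20_total = sum(top20)        # everything the top 20 hold
outside = 100.0 - top20_total   # held by the rest of us, outside the top 20
insiders = 8.87 + 4.88          # entries 2 and 3 (Peter, plus the Merrill Lynch line)

print(f"top 20: {top20_total:.2f}%")                     # 43.48%
print(f"outside top 20: {outside:.2f}%")                 # 56.52%
print(f"insiders + outside: {insiders + outside:.2f}%")  # 70.27%
```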

Regarding some nice news to drop just prior to the AGM, legally the company can't sit on information to make the timing of that possible,
it either plays out that way or not, maybe we could ask that the signing of a new IP License takes place on 22 May, that would work :ROFLMAO::ROFLMAO::ROFLMAO:

Tech 😉
 
Reactions: 22 users
Just on Prophesee, I see Christoph involved with a conference not long ago.

He was also one of the authors of a paper being presented, and for mine, it seems they are all still working through which systems are best or most suitable.


DATE 2023 Detailed Programme​


FS6 Focus session: New perspectives for neuromorphic cameras: algorithms, architectures and circuits for event-based CMOS sensors​

Date: Tuesday, 18 April 2023
Time: 16:30 CEST - 18:00 CEST
Location / Room: Okapi Room 0.8.1

Session chair:
Pascal VIVET, CEA-List, FR

Session co-chair:
Christoph Posch, PROPHESEE, FR


Time: 16:30 CEST
Label: FS6.1
Title: THE CNN VS. SNN EVENT-CAMERA DICHOTOMY AND PERSPECTIVES FOR EVENT-GRAPH NEURAL NETWORKS
Speaker: Thomas Dalgaty, CEA-List, FR
Authors: Thomas Dalgaty (CEA-List, FR); Thomas Mesquida (Université Grenoble Alpes, CEA, LETI, MINATEC Campus, FR); Damien Joubert (Prophesee, FR); Amos Sironi (Prophesee, FR); Pascal Vivet (CEA-Leti, FR); Christoph Posch (Prophesee, FR)
Abstract
Since neuromorphic event-based pixels and cameras were first proposed, the technology has greatly advanced such that there now exist several industrial sensors, processors and toolchains. This has also paved the way for a blossoming new branch of AI dedicated to processing the event-based data these sensors generate. However, there is still much debate about which of these approaches can best harness the inherent sparsity, low latency and fine spatiotemporal structure of event data to obtain better performance and do so using the least time and energy. The latter is of particular importance since these algorithms will typically be employed near or inside the sensor at the edge, where the power supply may be heavily constrained. The two predominant methods to process visual events, convolutional and spiking neural networks, are fundamentally opposed in principle. The former converts events into static 2D frames such that they are compatible with 2D convolutions, while the latter computes in an event-driven fashion naturally compatible with the raw data. We review this dichotomy by studying recent algorithmic and hardware advances of both approaches. We conclude with a perspective on an emerging alternative approach whereby events are transformed into a graph data structure and thereafter processed using techniques from the domain of graph neural networks. Despite promising early results, algorithmic and hardware innovations are required before this approach can be applied close to or within the event-based sensor.
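The graph-based alternative the abstract ends on can be pictured simply: each camera event becomes a node, with edges linking it to recent events that are spatially close. A rough Python sketch of that idea follows; this is my illustration of the general construction, not the authors' actual method, and the radii are arbitrary:

```python
# Illustrative event-graph construction: each event becomes a node, connected
# to earlier events within a spatiotemporal neighbourhood. Events are assumed
# to arrive in time order, as they do from an event camera.
from dataclasses import dataclass

@dataclass
class Event:
    x: int     # pixel column
    y: int     # pixel row
    t: float   # timestamp in microseconds
    p: int     # polarity (+1 brighter, -1 darker)

def build_event_graph(events, r_space=3, r_time=1000.0):
    """Return an edge list (i, j) linking each event to earlier events
    within r_space pixels and r_time microseconds."""
    edges = []
    for i, e in enumerate(events):
        for j in range(i - 1, -1, -1):
            prev = events[j]
            if e.t - prev.t > r_time:
                break  # time-ordered stream: anything older is out of range
            if abs(e.x - prev.x) <= r_space and abs(e.y - prev.y) <= r_space:
                edges.append((j, i))
    return edges

evts = [Event(10, 10, 0.0, 1), Event(11, 10, 200.0, 1), Event(50, 50, 300.0, -1)]
print(build_event_graph(evts))  # [(0, 1)] - only the two nearby events are linked
```

A graph neural network would then operate on these nodes and edges directly, rather than on frames.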


They also appear to be involved in the NimbleAI project.

MPP1 Multi-partner projects​

Date: Monday, 17 April 2023
Time: 11:00 CEST - 12:30 CEST
Location / Room: Gorilla Room 1.5.1

Session chair:
Luca Sterpone, Politecnico di Torino, IT

Time: 11:00 CEST
Label: MPP1.1
Title: NIMBLEAI: TOWARDS NEUROMORPHIC SENSING-PROCESSING 3D-INTEGRATED CHIPS
 
Reactions: 23 users

Deadpool

Did someone say KFC
Continuing on from the above ramblings, if Arm were to incorporate AKIDA 1500 in all of its M-based MCUs, then it would tie in nicely with Renesas' plans for the 22nm RA family, which is being sampled right now with select customers, with general availability planned towards the end of the year. Seems to marry nicely with the tape-out timing of the GlobalFoundries 22nm AKIDA 1500.

We know AKIDA is compatible with all of Arm's product families, so it wouldn't make sense just to incorporate it with the Cortex-M85, would it?

Why stop there?

IMO. View attachment 35638

Renesas Makes the Jump to 22nm with a New RA-Class MCU with Software-Defined Radio, Sampling Now​

Offering Bluetooth 5.3 Low Energy (BLE) at launch, this cutting-edge Arm Cortex-M33 microcontroller can be upgraded for future releases.


Gareth Halfacree
22 days ago • HW101 / Internet of Things / Communication
Renesas Electronics has announced sampling of its first microcontroller to be built on a 22nm semiconductor process node — an RA-family 32-bit Arm Cortex-M33-based chip with Bluetooth 5.3 Low Energy (BLE) provided via an on-board software-defined radio (SDR).
"Renesas' MCU [Microcontroller Unit] leadership is based on a wide array of products and manufacturing process technologies," boasts Renesas' Roger Wendelken of the sampling. "We are pleased to announce the first 22nm product development in the RA MCU family which will pave the way for next generation devices that will help customers to future proof their design while ensuring long term availability. We are committed to providing the best performance, ease-of-use, and the latest features on the market. This advancement is only the beginning."
Renesas has announced a new RA-class microcontroller with SDR-powered Bluetooth 5.3 Low Energy (BLE) support, built on a 22nm process node. (📷: Renesas)
Modern semiconductor manufacturing processes are measured, after a fashion, in nanometers — once the size of a given feature, then the smallest gap between features, and now a somewhat hand-wavy way of differentiating a next-generation process node from a previous one. While bleeding-edge high-frequency application class processors, like those from Intel or AMD, are now playing with single-digit nanometer process nodes, traditionally microcontrollers — needing to pack in far fewer transistors than high-performance application processors — have stuck with proven, and more affordable, double- or triple-digit process nodes.
That's key to why Renesas' announcement of a part built on a 22nm process node, a node which Intel began using back in 2012 for its Ivy Bridge family of chips before moving to 14nm for Broadwell in 2014, is notable: for microcontrollers, 22nm is an advanced node indeed. It allows the company to pack more components into a given area, and Renesas has taken full advantage of that extra capacity by fitting the chip with a software-defined radio (SDR) — powering Bluetooth 5.3 Low Energy (BLE) connectivity with direction-finding and low-power audio capabilities at launch, but upgradeable post-release to support new radio protocols and standards as-required.
The new microcontroller enters the RA family, alongside the recently-launched entry-line RA4E2. (📷: Renesas)
The shift to a 22nm node will also bring with it an overall reduction in part size and gains in efficiency which can be exploited as either increased performance for the same power draw or a lower power draw for the same performance — or a balanced combination of the two. Renesas has not, however, yet shared full specifications for the part, including frequency and power requirements.
Renesas is now sampling the 22nm RA-family chips to "select customers," with plans for general availability towards the end of the year. Parties interested in requesting a sample should contact their local sales office for more details.

Fantastic detective work once again @Bravo. Fingers crossed this is their thinking as well.

 
Reactions: 17 users

Deena

Regular
Extraordinarily low volume of shares changing hands today: only just over 1.6 million traded. The price will have to rise considerably if they want more ... but they still won't be mine!
Deena
 
Last edited:
Reactions: 37 users
Excellent, albeit a bit long, article to provide some perspective on design, testing and implementation when dealing with AVs.

Good example of why timelines can push out as well.




Testing Perception and Sensor Fusion Systems​

Updated Apr 5, 2023


Overview​

An autonomous vehicle’s (AV) most computationally complex components lie within the perception and sensor fusion system. This system must make sense of the information that the sensors provide, which might include raw point-cloud and video-stream data. The perception and sensor fusion system’s job is to crunch all of the data and determine what it is seeing: Lane markings, pedestrians, cyclists, vehicles, or street signs, for example.

To address this computational challenge, automotive suppliers seemingly could build a supercomputer and throw it in the vehicle. However, a supercomputer consumes heaps of power, and that directly conflicts with the automotive industry’s goal to create efficient cars. We can’t expect Level 4 vehicles to be connected to a huge power supply to run the largest and smartest computer for making huge decisions. The industry must strike a balance between processing power and power consumption.

Such a monumental task requires specialized hardware; for example, “accelerators” that help specific algorithms that perceive the world execute extremely fast and precisely. Learn more about that hardware architecture and its various implementations in the next section. After that, discover methodologies to test the perception and sensor fusion systems from a hardware and system-level test perspective.

Contents​

Perception and Sensor Fusion Systems​

As noted in the introduction, AV brains can be centralized in a single system, distributed to the edge of the sensors, or a combination of both:

Figure 1. Control Placement Architectures
NI often refers to a centralized platform as the AV compute platform, though other companies have different names for it. AV compute platforms include the Tesla full self-driving platform and the NVIDIA DRIVE AGX platform.

Figure 2. NVIDIA DRIVE AGX Platform
Mobileye's EyeQ4, for example, decentralizes processing to the sensors themselves. If you combine such platforms with a centralized compute platform to offload processing, they become a hybrid system.
When we speak of perception and sensor fusion systems, we isolate the part of the AV compute platform that takes in sensor information from the cameras, RADARs, lidars, and, occasionally, other sensors, and spits out a representation of the world around the vehicle to the next system: the path-planning system. This system locates the vehicle in 3D space and maps it in the world.

Hardware and Software Technologies​

Certain processing units are best suited for certain types of computation. For example, CPUs are particularly good at utilizing off-the-shelf and open source code to execute high-level commands and handle memory allocation. Graphics processing units handle general-purpose image processing very well. FPGAs are excellent at executing fixed-point math very quickly and deterministically. You can find tensor processing units and neural network (NN) accelerators built to execute deep learning algorithms with specific activation functions, such as rectified linear units, extremely quickly and in parallel. Redundancy is built into the system to ensure that, if any component fails, it has a backup.
Because backups are critical in any catastrophic failure, there cannot be a single point of failure (SPOF) anywhere, especially if those compute elements are to receive their ASIL-D certification.
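Redundancy of this sort is commonly built as N-modular redundancy with a majority voter. A toy Python sketch of the idea follows; this is purely illustrative, since real ASIL-D lockstep and voting is implemented in hardware:

```python
# Toy triple-modular-redundancy (TMR) voter: three redundant compute lanes
# produce a result, and the voter takes the majority, so a single faulty
# lane is masked rather than becoming a single point of failure.
from collections import Counter

def tmr_vote(lane_outputs):
    """Majority vote over redundant lane outputs; raise if there is no majority."""
    value, count = Counter(lane_outputs).most_common(1)[0]
    if count <= len(lane_outputs) // 2:
        raise RuntimeError("no majority: uncorrectable disagreement between lanes")
    return value

# One lane glitches; the voter still returns the correct command.
print(tmr_vote(["brake", "brake", "coast"]))  # brake
```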

Some of these processing units consume large amounts of power. The more power compute elements consume, the shorter the range of the vehicle (if electric), and the more heat that’s generated. That is why you’ll often find large fans on centralized AV compute platforms and power-management integrated circuits in the board. These are critical for keeping the platform operating under ideal conditions. Some platforms incorporate liquid cooling, which requires controlling pumps and several additional chips.
Atop the processing units lies plenty of software in the form of firmware, OSs, middleware, and application software. As of this writing, most Level 4 vehicle compute platforms run something akin to the Robot Operating System (ROS) on a Linux Ubuntu or Unix distribution. Most of these implementations are nondeterministic, and engineers recognize that, in order to deploy safety critical vehicles, they must eventually adopt a real-time OS (RTOS). However, ROS and similar robot middleware are excellent prototyping environments due to their vast amount of open source tools, ease of getting started, massive online communities, and data workflow simplicity.
With advanced driver-assistance systems (ADAS), engineers have recognized the need for RTOSs and have been developing and creating their own hardware and OSs to provide it. In many cases, these compute-platform providers incorporate best practices such as the AUTOSAR framework.
Perception and sensor fusion system software architecture varies dramatically due to the number and type of sensors associated with the perception system; types of algorithms used; hardware that’s running the software; and platform maturity. One significant difference in software architecture is “late” versus “early” sensor fusion.

Product Design Cycle​

To create a sensor fusion compute platform, engineers implement a multistep process. First, they purchase and/or design the chips. Autonomous compute platform providers may employ their own silicon design, particularly for specific NN accelerators. Those chips undergo test as described in the semiconductor section below. After the chips are confirmed good, contract manufacturers assemble and test the boards. Embedded chip software design and simulation occur in parallel to chip design and chip/module bring-up. Once the system is assembled, engineers conduct functional tests and embedded software tests, such as hardware-in-the-loop (HIL). Final compute-platform packaging takes place in-house or at the contract manufacturer, where additional testing occurs.
Back to top

Semiconductor Hardware-Level Tests​

As engineers design these compute units, they execute tests to ensure that the units operate as expected:

Semiconductor-Level Validation and Verification​

As mentioned, all semiconductor chips undergo a process of chip-level validation and verification. Typically, these help engineers create specifications documents and send the product through the certification process. Often, hardware redundancy and safety are checked at this level. Most of these tests are conducted digitally, though analog tests also ensure that the semiconductor manufacturing process occurred correctly.

Semiconductor-Level Production Test​

After the chip engineering samples are verified, they’re sent into production. Several tests unique to processing units at the production wafer-level test stage revolve around testing the highly dense digital connections on the processors.
At this stage, ASIL-D and ISO 26262 validation occurs, and further testing confirms redundancy, identifies SPOF, and verifies manufacturing process integrity.
Back to top

Compute-Platform Validation​

After compute-platform manufacturers receive their chips and package them onto a module or subsystem, the compute-platform validation begins. Often, this means testing various subsystem functionality and the entire compute platform as a whole; for example:
  • Ensuring that all automotive network ports (controller area network [CAN], local interconnect network, and T1/Ethernet [ENET]) are communicating correctly in both directions
  • Ensuring that all standard network ports (ENET, USB, and PCIe) are communicating correctly in both directions
  • Ensuring that all sensor interfaces can communicate and handle standard loads for each type of sensor
  • Providing a representative “load” on the system and validating that it completes a task
  • Measuring the various element and complete system power consumptions as they
    complete a task
  • Measuring system thermal performance under various loads
  • Placing the subsystem or entire compute platform in a temperature, environmental, or acceleration (shaker table) chamber to ensure that it can withstand extreme operating conditions
  • Verifying that the system can connect to a GPS or global navigation satellite system (GNSS) port and synchronize its clock to a certain specification within a certain time
  • Checking onboard system diagnostics
  • Power-cycling the complete system at various voltage and current levels
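As a rough illustration of how a checklist like the one above might be driven, here is a skeletal harness; every check name and stubbed result below is invented for the sketch, and a real rig would drive actual instruments behind each function:

```python
# Skeletal functional-test harness in the spirit of the validation checklist.
# All check names and results here are stand-ins for illustration only.
def check_can_loopback():
    return True   # stub: would send/receive a CAN frame in both directions

def check_enet_loopback():
    return True   # stub: would exercise a standard Ethernet port

def check_power_cycle():
    return True   # stub: would power-cycle the platform and confirm boot

CHECKS = [check_can_loopback, check_enet_loopback, check_power_cycle]

def run_validation(checks):
    """Run every check, collect pass/fail per name, and return an overall verdict."""
    results = {check.__name__: bool(check()) for check in checks}
    return all(results.values()), results

ok, report = run_validation(CHECKS)
print("PASS" if ok else "FAIL", report)
```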

Figure 3. Chip Validation

Functional and Module-Level Test​

Because the compute platform is a perfect mix of both consumer electronics components and automotive components, you have to thoroughly validate it with testing procedures from both industries: You need automotive and consumer electronics network interfaces and methodologies.
NI is uniquely suited to address these complex requirements through our Autonomous Compute Platform Validation product. We selected a mix of the interfaces and test instruments you might require to address the validation steps outlined above, and packaged them into a single configuration. Because we utilize PXI instrumentation, our flexible solution easily addresses changing and growing AV validation needs. Figure 4 shows a solution example:

Figure 4. Autonomous Compute Platform Validation Solution Example

Life Cycle, Environmental, and Reliability Tests​

Functionally validating a single compute platform is fairly straightforward. However, once the scope of the test grows to encompass multiple devices at a time or in various environments, test system size and complexity grows. It’s important to incorporate functional test and scale the number of test resources appropriately, with corresponding parallelism or serial testing capabilities. Also, you need to integrate the appropriate chambers, ovens, shaker tables, and dust rooms to simulate environmental factors. And because some tests must run for days, weeks, or even months to represent the life cycle of the devices under test, tests need to execute—uninterrupted—for that duration of time.
All of these life cycle, environmental, and reliability testing challenges are solved with the right combination of test equipment, chambering, DUT knowledge, and integration capability. To learn more about our partner channel that can assist with integrating these complex test systems, please contact us.
Back to top

Embedded Software and Systems Tests​

Perception and sensor fusion systems are the most complex vehicular elements for both hardware and software. Because the embedded software in these systems is truly cutting-edge, software validation test processes also must be cutting-edge. Later in this document, learn more about isolating the software itself to validate the code as well as testing the software once it has been deployed onto the hardware that will eventually go in the vehicle.

Figure 5. Simulation, Test, and Data Recording
Back to top

Algorithm Design and Development​

We can’t talk about software test without recognizing that the engineers and developers designing the software are constantly trying to improve their software’s capability. Without diving in too deeply, know that software that is appropriately architected makes validating that software significantly easier.

Figure 6. Design, Deployment, and Verification and Validation (V and V)

Software Test and Simulation​

99.9% of perception and sensor fusion validation occurs in software. It’s the only way to test an extremely high volume within reasonable cost and timeframe constraints because you can utilize cloud-level deployments and run hundreds of simulations simultaneously. Often, this is known as simulation or software-in-the-loop (SIL) testing. As mentioned, we need an extremely realistic environment if we are testing the perception and sensor fusion software stack; otherwise, we will have validated our software against scenarios and visual representation that only exists in cartoon worlds.

Figure 7. Perception Validation Characteristics
Testing AV software stack perception and sensor fusion elements requires a multitude of things: You need a representative “ego” vehicle in a representative environmental worldview. You need to place realistic sensor representations on the ego vehicle in spatially accurate locations, and they need to move with the vehicle. You need accurate ego vehicle and environmental physics and dynamics. You need physics-based sensor models that give you actual information that a real-world sensor would provide, not some idealistic version of it.
After you have equipped the ego vehicle and set up the worldview, you need to execute scenarios for that vehicle and sensors to encounter by playing through a preset or prerecorded scene. You also can let the vehicle drive itself through the scene. Either way, the sensors must have a communication link to the software under test. It can be through some type of TCP link—either running on the same machine or separately—to the software under test.
That software under test is then tasked with identifying its environment, and you can verify how well it did by comparing the results of the perception and sensor fusion stack against the “ground truth” that the simulation environment provides.
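That ground-truth comparison typically reduces to matching the stack's detections against simulator annotations, for instance with an intersection-over-union (IoU) threshold. A minimal sketch of that check follows; the boxes and the 0.5 threshold are made up for illustration:

```python
# Minimal ground-truth comparison for a SIL run: intersection-over-union
# between a detected bounding box and the simulator's ground-truth box.
def iou(box_a, box_b):
    """Boxes are (x_min, y_min, x_max, y_max) in image coordinates."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

ground_truth = (10, 10, 50, 50)   # pedestrian box from the simulator
detected = (12, 8, 52, 48)        # box from the perception stack under test
score = iou(ground_truth, detected)
print(f"IoU = {score:.2f}, {'PASS' if score >= 0.5 else 'FAIL'}")
```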

Figure 8. Testing Perception and Planning
The real advantage is that you can spin up tens of thousands of simulation environments in the cloud and cover millions of miles per day in simulated test scenarios. To learn more about how to do this, contact us.

Figure 9. Using the Cloud with SystemLink Software

Record and Playback​

If you live in a city that tests AVs, you may have seen those dressed-up cars navigating the road with test drivers hovering their hands over the wheel. Those mule vehicles rack up millions of miles so that engineers can verify their software. There are many steps to validating software with road-and-track testing.
The most prevalent methodology for validating embedded software is to record large amounts of real-world sensor information through the sensors placed on vehicles. This is the highest-fidelity way to provide sensor data to the software under test, as it is actual, real-world data. The vehicle can be in autonomous or non-autonomous mode. It is the engineer’s job to equip the vehicle with a recording system that stores massive amounts of sensor information without impeding the vehicle. A representative recording system is shown in Figure 10:

Figure 10. Vehicle Recording System
Once data has been recorded onto large data stores, it needs to move to a place where engineers can work with it. Moving the data from a large RAID array to cloud or on-premises storage is a challenge, because we’re talking about moving tens, if not hundreds, of terabytes as quickly as possible. There are dedicated copy centers and server-farm-level interfaces that can help accomplish this.
Engineers then face the daunting task of classifying or labeling stored data. Typically, companies pay millions of dollars to send sensor data to buildings full of people who visually inspect the data and identify things such as pedestrians, cars, and lane markings. These identifications serve as “ground truth” for the next step of the process. Many companies are investing heavily in automated labeling that would ideally eliminate the need for human annotators, but that technology is not yet mature. As you might imagine, sufficiently developing that technology would greatly streamline the testing of embedded software that classifies data, resulting in much more confidence in AVs.
After data has been classified and is ready for use, engineers play it back into the embedded software, typically on a development machine (open-loop software test), or on the actual hardware (open-loop hardware test). This is known as open-loop playback because the embedded software is not able to control the vehicle—it can only identify what it sees, which is then compared against the ground truth data.
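Scoring such an open-loop run then amounts to matching detections against the human labels and reporting metrics like precision and recall. A deliberately simplified sketch follows; it matches by class label only, with no localization, and the frame data is invented:

```python
# Rough open-loop playback scoring: greedily match each detection to an
# unused ground-truth label of the same class, then report precision
# (how many detections were right) and recall (how many labels were found).
def score_frame(labels, detections):
    unmatched = list(labels)
    tp = 0  # true positives
    for det in detections:
        if det in unmatched:
            unmatched.remove(det)
            tp += 1
    precision = tp / len(detections) if detections else 1.0
    recall = tp / len(labels) if labels else 1.0
    return precision, recall

labels = ["pedestrian", "car", "car", "lane_marking"]   # human-annotated ground truth
detections = ["car", "car", "pedestrian", "cyclist"]    # output of the stack under test
p, r = score_frame(labels, detections)
print(f"precision={p:.2f} recall={r:.2f}")
```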

Figure 11. ADAS Data Playback Diagram
One of the more cutting-edge things engineers do is convert real-world sensor data into their simulation environment so that they can make changes to their prerecorded data. This way, they can add weather conditions that the recorded data didn’t see, or other scenarios that the recording vehicle didn’t encounter. While this provides high-fidelity sensor data and test-case breadth, it is quite complex to implement. It does provide limited capability to perform closed-loop tests, such as SIL, with real-world data while controlling a simulated vehicle.
Mule vehicles equipped with recording systems are often very expensive and take significant time and energy to deploy and rack up miles. Plus, you can’t possibly encounter all of the various scenarios you need to validate vehicle software. This is why you see significantly more tests performed in simulation.

HIL Test​

Once there’s an SIL workflow in place, it’s easier to transition to HIL, which runs the same tests with the software onboard the hardware that eventually makes it into the vehicle. You can take the existing SIL workflow and cut communication between the simulator and the software under test. And you can add an abstraction layer between the commands sent to and from the simulator and hardware that has sensor and network interfaces to communicate with the compute platform under test. The commands talking to the hardware must execute in real time to be validated appropriately. You can take those same sensor interfaces described in the AV Functional Test section and plug them into the SIL system with the real-time software abstraction layer and create a true HIL tester.
You can execute perception and sensor fusion HIL tests either by directly injecting into the sensor interfaces, or, with the sensor in the loop, providing an emulated over-the-air interface to the sensors, as shown in Figure 12.

Figure 12. Closed-Loop Perception Test
Each of these processes—from road test to SIL to HIL—employ a similar workflow. For more information about this simulation test platform, contact us.
Back to top

Conclusion​

Now that you understand how to test AV compute platform perception and sensor fusion systems, you may want a supercomputer as the brain of your AV. Know that, as a new market emerges, there are uncertainties. NI offers a software-defined platform that helps solve functional testing challenges to validate the custom computing platform and test automotive network protocols, computer protocols, sensor interfaces, power consumption, and pin measurements. Our platform flexibility, I/O breadth, and customizability cover not only today’s testing requirements to bring the AV to market, but can help you swiftly adapt to tomorrow’s needs.
 
Reactions: 19 users

Deleted member 118

Guest
Reactions: 4 users

Earlyrelease

Regular
Hi Sirod69.....thanks for that breakdown.

Nice to see Peter and Anil still sitting at number 2 & 3 respectively...holding 13.75% of the company, plus 56.52% currently being held by
the rest of us outside the top 20...combined giving a nice 70.27% of the company...just an observation at this point, nothing more than that.

Regarding some nice news to drop just prior to the AGM, legally the company can't sit on information to make the timing of that possible,
it either plays out that way or not, maybe we could ask that the signing of a new IP License takes place on 22 May, that would work :ROFLMAO::ROFLMAO::ROFLMAO:

Tech 😉

Tech
Whilst legally, yes, they can't sit on information, depending on how your relationship with your customers is, how well you have supported them and kept matters in house, there is nothing wrong with giving them your AGM date and requesting that, if they were in a position to endorse the release of something, the timing would be welcome around then. Now it may not work out, but the old saying applies: if you don't ask, you don't get. So you never ever know.
 
Reactions: 22 users

Bravo

If ARM was an arm, BRN would be its biceps💪!
OK, how did we miss this particular article?!!!




Cisco updates Webex, aims to enhance hybrid work experiences with AI​

Sri Krishna@SriTalkstech
March 28, 2023 12:33 PM
Image Credit: Piscine



Cisco today unveiled AI-powered enhancements across its Webex suite, promising to deliver hybrid work experiences with automation, while protecting customers’ confidentiality and privacy.

The updates span workspace, collaboration and customer experience categories, built on the Webex platform, and join a long list of AI and machine learning (ML) features already embedded in Cisco products.

The next step forward for such collaboration is video intelligence, which Webex is expanding throughout the conference room operating system RoomOS.
With cinematic meeting experiences, cameras follow individuals through voice and facial recognition to capture the best angle of the active speaker. This ensures focus on the speaker, while making certain that hybrid workers not physically present in the room can still feel included, according to Cisco.

Once in a generation platform shift​

RoomOS uses facial detection, information about where people are sitting in a room and voice location to direct the meeting and provide the best view. The feature individually frames and levels participants at eye height, and in speaker mode, uses audio triangulation from devices and an intelligent beam-forming table microphone to quickly and accurately identify the position of the active speaker.
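Audio triangulation of this kind rests on the time-difference-of-arrival between microphones, and the per-pair delay is usually found by cross-correlating channels. A toy sketch of just that delay-estimation step follows (an illustration of the general technique, not Cisco's algorithm; the signals are synthetic):

```python
# Toy time-difference-of-arrival estimate: the lag that maximizes the
# cross-correlation between two microphone channels tells us how much
# later the sound reached the second mic, which constrains speaker position.
def estimate_delay(mic_a, mic_b, max_delay):
    """Return d >= 0 (samples) such that mic_b is roughly mic_a delayed by d."""
    def corr(d):
        return sum(x * y for x, y in zip(mic_a, mic_b[d:]))
    return max(range(max_delay + 1), key=corr)

# A short click, heard 4 samples later on the second microphone.
click = [0.0] * 16
click[5] = 1.0
click[6] = 0.5
delayed = [0.0] * 4 + click[:-4]
print(estimate_delay(click, delayed, max_delay=8))  # 4
```

With three or more microphones, the pairwise delays can be intersected geometrically to localize the active speaker.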

Cinematic meetings support a range of camera intelligence features, including speaker mode, frames, presenter and audience tracking and meeting zones.
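The audio triangulation described above comes down to estimating the time difference of arrival (TDOA) of the same sound at different microphones and converting that delay into a bearing. Cisco's actual implementation is not public; the sketch below is a minimal pure-Python illustration of the generic technique only, with invented function names and a toy impulse signal:

```python
import math

def cross_correlate_lag(a, b, max_lag):
    """Return the lag (in samples) of b relative to a that maximizes
    the cross-correlation, i.e. the inter-microphone delay."""
    best_lag, best_score = 0, float("-inf")
    for lag in range(-max_lag, max_lag + 1):
        score = 0.0
        for i in range(len(a)):
            j = i + lag
            if 0 <= j < len(b):
                score += a[i] * b[j]
        if score > best_score:
            best_lag, best_score = lag, score
    return best_lag

def doa_degrees(lag_samples, sample_rate, mic_spacing_m, c=343.0):
    """Convert a delay into a far-field bearing (0 deg = broadside),
    using angle = asin(c * tdoa / d), clamped to a valid asin domain."""
    tdoa = lag_samples / sample_rate
    s = max(-1.0, min(1.0, c * tdoa / mic_spacing_m))
    return math.degrees(math.asin(s))

# Toy signal: an impulse that reaches mic B four samples after mic A.
fs = 16000
mic_a = [0.0] * 64
mic_a[10] = 1.0
mic_b = [0.0] * 64
mic_b[14] = 1.0

lag = cross_correlate_lag(mic_a, mic_b, max_lag=20)
angle = doa_degrees(lag, fs, mic_spacing_m=0.2)
```

A real beam-forming microphone array would use many capsules, windowed audio frames and generalized cross-correlation rather than a raw dot product, but the underlying geometry is the same.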


“AI is fundamentally transforming the way we work and live,” Jeetu Patel, EVP and GM for security and collaboration at Cisco, told VentureBeat. “It has the potential to make collaboration radically more immersive, personalized and efficient.”

Cisco studied what he described as a “once-in-a-generation platform shift” that AI could support. The company’s efforts center around re-imagining hybrid work.

Targeting hybrid work experiences​


With the rise of hybrid work, it’s essential that organizations provide employees with the flexibility to work in different locations and in different ways. To address this, Cisco has introduced three new AI-based features into its Webex suite.

This includes a super resolution function that ensures crystal-clear video in Webex meetings, even in low-bandwidth conditions. This is achieved through deep neural network video recovery that hides choppiness, removes blocking artifacts and reconstructs the face and body to render in high-resolution images and videos.

Another new AI capability is smart re-lighting, which automatically enhances lighting in Webex meetings to ensure that people look their best in any environment. This is particularly useful when working in poor lighting conditions. The algorithm is trained to recognize different scenarios with people in different lighting, and automatically enhances the light on the facial foreground.

The third new capability is a “be right back” update, which automatically puts up a BRB message, blurs the background, and mutes audio when a user steps away from a Webex meeting. This feature saves time and is simple to use. By leveraging a 3D face mesh algorithm, Webex can detect when a user has stepped away and replace their video feed with a BRB indicator until they return. Users can turn their audio and video back on when they are back in front of the screen.
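The "be right back" behaviour described above is essentially a debounced presence-detection state machine. How Webex implements it internally is not public; the following hypothetical sketch (the class and method names are invented) only shows the control logic, with face detection stubbed out as a boolean per frame:

```python
from dataclasses import dataclass

@dataclass
class MeetingState:
    video_blurred: bool = False
    audio_muted: bool = False
    brb_shown: bool = False

class BrbController:
    """Debounced away-detection: trigger the BRB card only after several
    consecutive frames with no face, to avoid flickering on brief glances
    away from the camera."""

    def __init__(self, away_frames: int = 30):
        self.away_frames = away_frames
        self._missing = 0
        self.state = MeetingState()

    def on_frame(self, face_detected: bool) -> MeetingState:
        if face_detected:
            self._missing = 0
            # Returning clears the BRB card and blur automatically, but,
            # as the article describes, the user unmutes manually.
            self.state.brb_shown = False
            self.state.video_blurred = False
        else:
            self._missing += 1
            if self._missing >= self.away_frames:
                self.state = MeetingState(video_blurred=True,
                                          audio_muted=True,
                                          brb_shown=True)
        return self.state

    def user_unmute(self) -> None:
        self.state.audio_muted = False
```

In the real product the `face_detected` input would come from the 3D face mesh model rather than a caller-supplied flag; the debounce threshold is likewise an assumed parameter.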

AI-powered chat summaries​

As customer expectations continue to rise and organizations handle billions of daily customer interactions, it has become challenging for agents and legacy systems to keep up with the volume and personalization required. To this end, Cisco is introducing new AI capabilities for its customer experience solutions, including Webex Contact Center and Webex Connect.

One of the new capabilities, topic analysis in Webex Contact Center, provides actionable insights to business analysts by surfacing key reasons customers are calling in. This feature is built using an AI large language model (LLM) that aggregates call transcripts and highlights trends for business analysts.

Another capability, agent answers, acts as a real-time coach for human agents by listening and instantly surfacing knowledge-based articles and helpful information for the customer. This capability uses learnings from self-service and automated customer interactions and applies AI to ensure that the highest match probability options are identified first.

Meanwhile, AI-powered chat summaries eliminate the need for agents to read lengthy digital chat histories and provide key takeaways in a quickly digestible format. Lastly, Webex Connect users can now describe the function they want to perform, and AI will generate and return the appropriate code instantly, making it easier to create and iterate customer journeys quickly.

 
  • Like
  • Fire
  • Love
Reactions: 69 users

TECH

Regular
Tech
Whilst legally yes they can’t sit on information. However depending on how your relationships are with your customers and how well you have supported them and keep matters in house, there is nothing wrong with giving them your AGM date and requesting that if they were in a position to endorse the release of something the timing would be welcome around that time. Now it may not work out but the old saying if you don’t ask you don’t get. So you never ever know.

I'd suggest listening to the upcoming tech talk between ARM and Brainchip 🧐:whistle:

May 9 11pm Perth

May 10 1am Sydney

The rather big question really is: once a prospective client becomes an official holder of an IP License, we should possibly consider
that mass production of a product or products could be added to that timeframe, maybe 24 months minimum from the day they actually
sign a contract. With no new IP Licenses currently signed, many, I would suggest, haven't factored in a 24-month design cycle from that day forth.

Anyway, that's simply my view.....cheers.
 
  • Like
  • Thinking
  • Haha
Reactions: 14 users

Frangipani

Top 20
Unless Nandan Nayampally possesses the unlikely gift of bilocation, I’d suggest to contact our friends (?!) at Cisco to see whether/how they can help.
Let’s just assume for a moment that they are indeed hidden behind one of the NDAs. Dot, dot, dot…
I am thinking along the lines of a Webex Hologram demonstration as shown in that “Takei on Tech” video @Taproot shared with us the other day. Cisco is promoting their revolutionary tech (see how they’d complement each other?!) as “a real-time, photorealistic holographic interaction that goes beyond video conferencing for a truly immersive experience” on its homepage.
Here you are, in case you missed watching it when it was first posted:



Just imagine that Sydney AGM ballroom full of disgruntled shareholders abruptly going dead silent, when they are told that Nandan Nayampally will shortly be with them “in the room”, even though in actual fact he is physically in California, having dinner the night before his conference presentation (given the time difference of 17 hours). They’d all be left speechless (well except those of you who will also be attending in person and have now been tipped off by me… 🤣)
The only problem being, the audience members would all need their own AR headset to enjoy the cutting edge 3D Pacific-bridging experience. Mmmhhh… Maybe all shareholders could enter into an AGM lottery on arrival, and then a dozen or so winners would be drawn and get to experience and rave about this groundbreaking technology first-hand (or rather first-eye).

That would be a stellar PR coup for Brainchip and undoubtedly result in innumerable relieved shareholder faces as well as an explosion of (unexploding) lunar-bound 🚀🚀🚀🚀🚀🚀🚀🚀 in this space, while the Mickleboros of the world will be staring in disbelief at the BRN share price trajectory.
Sadly, the stock market feels more like a casino these days, so more likely there will be heaps of 😢 😭😤🤬 instead, after the almost inevitable “sell on good news”.

P.S.: In case the BRN management happens to monitor this - I just wanted to mention in passing that I wouldn‘t mind being donated a couple of thousand shares as a reward for this ingenious proposal… 😂


In the “This is our mission” podcast with Geoffrey Moore, Sean Hehir mentions from 5:12 min “a company in communications, something you wouldn’t even think about - and it’s not a handset - but they want AI functionality in their devices, and they’re gonna build chips, so world-class companies are usually driven by some competitive nature (…) it’s that competitive nature that’s really driving their verticalisation”.

So we get this is not about handsets such as smartphones or gaming controllers, but how about headsets, then? Besides the mixed-reality Cisco one I had referred to in my post above for Webex videoconferencing solutions, there is of course also the long-awaited possibly-soon-to-be-released Apple VR/AR-headset dubbed Reality Pro (this high-end model is rumoured to sell at around 3000 USD) and a more affordable version dubbed Reality One that is predicted to be launched at a later stage. Other posters have previously also commented on a speculative link to Brainchip re those upcoming Apple mixed-reality headsets that will have eye and gesture tracking and are said to operate independently of an iPhone, an iPad or a Mac.

While we should keep in mind the word “speculative“ here, let us occasionally allow ourselves to daydream just a little. The English word earworm is a calque (loan translation) of the German word Ohrwurm, referring to a catchy piece of music that continues to occupy your mind long after it has been played. The song “A Whole New World” from the Disney movie Aladdin that I brought up in my post yesterday sure is one of those earworms for me. And not only for its captivating melody:

🎼 Unbelievable sights, indescribable feeling
Soaring, tumbling, freewheeling
Through an endless diamond sky
A whole new world (Don't you dare close your eyes)
A hundred thousand things to see (Hold your breath, it gets better)
I'm like a shooting star, I've come so far
I can't go back to where I used to be (A whole new world)
With new horizons to pursue
I'll chase them anywhere
There's time to spare
Let me share this whole new world with you…”

Edit: In the light of what @TECH just posted minutes before me, I should add we’d better restrict our daydreaming to future product updates containing Akida, and shouldn’t set our hopes on recently released or soon to be launched products, as it is true that we would need to factor in long production timelines once an IP license has been signed (except if the deal was done through Megachips or Renesas, I assume?)
 
Last edited:
  • Like
  • Fire
  • Wow
Reactions: 25 users

Terroni2105

Founding Member
Geoffrey Moore promoting the recent podcast with Brainchip on Twitter


 
  • Like
  • Love
  • Fire
Reactions: 59 users

alwaysgreen

Top 20
I'd suggest listening to the upcoming tech talk between ARM and Brainchip 🧐:whistle:

May 9 11pm Perth

May 10 1am Sydney

The rather big question really is, once a prospective client becomes an official holder of an IP License, we should possibly consider that
mass production of a product or products could be added to that timeframe, maybe 24 months minimum from the day they actually sign
a contract, with no new IP Licenses currently signed many, I would suggest, haven't factored in a 24 month design cycle from that day forth.

Anyway, that's simply my view.....cheers.
I'm okay with the cycle to production. Here's hoping for 10 new IP licenses over the next 12 months. The market will factor in the future revenue.
 
  • Like
  • Fire
  • Love
Reactions: 19 users

Diogenese

Top 20
I'd suggest listening to the upcoming tech talk between ARM and Brainchip 🧐:whistle:

May 9 11pm Perth

May 10 1am Sydney

The rather big question really is, once a prospective client becomes an official holder of an IP License, we should possibly consider that
mass production of a product or products could be added to that timeframe, maybe 24 months minimum from the day they actually sign
a contract, with no new IP Licenses currently signed many, I would suggest, haven't factored in a 24 month design cycle from that day forth.

Anyway, that's simply my view.....cheers.
Hi Tech,

The 24 months may be a reasonable period where the licence involves building an entirely new product, e.g. Prophesee or Valeo with Akida, but there will be new licensees who want basically a COTS processor (ARM/SiFive/Intel) boosted by Akida. In those cases, once the initial batch is made, we could expect a shorter time to market.

We already know that Akida is processor agnostic and it has been confirmed that it does fit all ARM processors, so there must already be some progress in fitting Akida and ARM designs together.
 
  • Like
  • Fire
  • Love
Reactions: 92 users

Terroni2105

Founding Member
Any idea why Socionext is not listed on the BrainChip website under the “You’re In Good Company” heading?

We know they are still working together and I would have thought it beneficial from BrainChip standpoint to have Socionext noted on their website in some capacity.
 
  • Like
  • Thinking
  • Love
Reactions: 15 users
In the “This is our mission” podcast with Geoffrey Moore, Sean Hehir mentions from 5:12 min “a company in communications, something you wouldn’t even think about - and it’s not a handset - but they want AI functionality in their devices, and they’re gonna build chips, so world-class companies are usually driven by some competitive nature (…) it’s that competitive nature that’s really driving their verticalisation”.

So we get this is not about handsets such as smartphones or gaming controllers, but how about headsets, then? Besides the mixed-reality Cisco one I had referred to in my post above for Webex videoconferencing solutions, there is of course also the long-awaited possibly-soon-to-be-released Apple VR/AR-headset dubbed Reality Pro (this high-end model is rumoured to sell at around 3000 USD) and a more affordable version dubbed Reality One that is predicted to be launched at a later stage. Other posters have previously also commented on a speculative link to Brainchip re those upcoming Apple mixed-reality headsets that will have eye and gesture tracking and are said to operate independently of an iPhone, an iPad or a Mac.

While we should keep in mind the word “speculative“ here, let us occasionally allow ourselves to daydream just a little. The English word earworm is a calque (loan translation) of the German word Ohrwurm, referring to a catchy piece of music that continues to occupy your mind long after it has been played. The song “A Whole New World” from the Disney movie Aladdin that I brought up in my post yesterday sure is one of those earworms for me. And not only for its captivating melody:

🎼 Unbelievable sights, indescribable feeling
Soaring, tumbling, freewheeling
Through an endless diamond sky
A whole new world (Don't you dare close your eyes)
A hundred thousand things to see (Hold your breath, it gets better)
I'm like a shooting star, I've come so far
I can't go back to where I used to be (A whole new world)
With new horizons to pursue
I'll chase them anywhere
There's time to spare
Let me share this whole new world with you…”

Edit: In the light of what @TECH just posted minutes before me, I should add we’d better restrict our daydreaming to future product updates containing Akida, and shouldn’t set our hopes on recently released or soon to be launched products, as it is true that we would need to factor in long production timelines once an IP license has been signed (except if the deal was done through Megachips or Renesas, I assume?)
The other side of communications ties in potentially with cyber attacks and packet inspection, which is part of what the CyberNeuro-RT project is about.

2022 paper here.

HERE

The other possible use case relates to their patent granted in 2022. They've gone to the effort to patent this :unsure:

The latest patents awarded to BrainChip from the USPTO include:


  • US 11,468,299 “An Improved Spiking Neural Network,” protects the learning function of BrainChip’s digital neuron circuit implemented on a neuromorphic integrated circuit/system (e.g., Akida™).
  • US 11,429,857, “Secure Voice Communications System,” protects a system to establish secure voice communications between a local and a remote neural network device. Information is encrypted by transmitting spike timing rather than original data, rendering it useless to anyone intercepting the transmission.
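The patent summary above only sketches the idea; the actual mechanism is not described in this thread. As a rough illustration of the general principle, carrying information in spike timing rather than in the raw payload bytes, here is a toy Python sketch. The function names are invented and this is emphatically not BrainChip's patented scheme, just the basic notion of inter-spike-interval encoding:

```python
def to_spike_times(data: bytes, t_unit: float = 1e-3) -> list:
    """Encode each byte as the interval between consecutive spike times.
    Only timestamps are transmitted; without the agreed time base and
    mapping, an interceptor sees a spike train, not the payload bytes."""
    times, t = [], 0.0
    for b in data:
        t += (b + 1) * t_unit   # +1 so a zero byte still produces a gap
        times.append(round(t, 9))
    return times

def from_spike_times(times: list, t_unit: float = 1e-3) -> bytes:
    """Recover the payload by measuring successive inter-spike intervals."""
    out, prev = bytearray(), 0.0
    for t in times:
        out.append(int(round((t - prev) / t_unit)) - 1)
        prev = t
    return bytes(out)
```

A real system would presumably derive the timing code from a learned neural network state shared by the two endpoints, which is what would make interception useless; this sketch fixes the mapping in code purely for readability.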
 
  • Like
  • Fire
  • Love
Reactions: 36 users

Vladsblood

Regular
Extraordinary low volumes of share changing hands today. Only just over 1.6 million. The price will have to rise considerably if they want more ... but they won't be mine!
Deena
Hi Deena, yes, very low to extreme low volume. “The Thermometer” is showing the time frame is just about to turn the cycle upwards. Once time runs out, nothing can change the coming SP reversal, as time is so much more important than price in forecasting future moves. Cheers Deena, Vlad.
 
  • Like
  • Fire
  • Love
Reactions: 27 users

rgupta

Regular
I'd suggest listening to the upcoming tech talk between ARM and Brainchip 🧐:whistle:

May 9 11pm Perth

May 10 1am Sydney

The rather big question really is, once a prospective client becomes an official holder of an IP License, we should possibly consider that
mass production of a product or products could be added to that timeframe, maybe 24 months minimum from the day they actually sign
a contract, with no new IP Licenses currently signed many, I would suggest, haven't factored in a 24 month design cycle from that day forth.

Anyway, that's simply my view.....cheers.
If the cycle is 24 months after signing an IP licence, that would mean the EQXX is not going to have Akida inside, as Mercedes is saying the launch is early 2024.
Or, the other way around, contracts are written differently with EAP.
Wait!!!
 
  • Like
Reactions: 1 users
The other side of communications ties in potentially with cyber attacks and packet inspections which is part of what the CyberNeuro RT project is about.

2022 paper here.

HERE

The other use case possibility is as their patent grant from 2022. They've gone to the effort to patent this :unsure:

The latest patents awarded to BrainChip from the USPTO include:


  • US 11,468,299 “An Improved Spiking Neural Network,” protects the learning function of BrainChip’s digital neuron circuit implemented on a neuromorphic integrated circuit/system (e.g., AkidaTM).
  • US 11,429,857, “Secure Voice Communications System,” protects a system to establish secure voice communications between a local and a remote neural network device. Information is encrypted by transmitting spike timing rather than original data, rendering it useless to anyone intercepting the transmission.
Not sure how I forgot this as well when it comes to communications ;)

Intellisense to use neuromorphic AI for next generation cognitive radio chip

Business news | March 22, 2023
By Nick Flaherty
SENSING / CONDITIONING WIRELESS COMMUNICATIONS AI



Intellisense Systems is to use neuromorphic AI technology from BrainChip to improve cognitive radio systems on spacecraft and in robotics.


Intellisense’s intelligent radio frequency (RF) systems enable wireless devices and platforms to sense and learn the characteristics of the communications environment in real time, providing enhanced communication quality, reliability and security. By integrating BrainChip’s Akida neuromorphic processor, Intellisense can deliver even more advanced, yet energy-efficient, cognitive capabilities to its RF system solutions.

One such project is the development of a new Neuromorphic Enhanced Cognitive Radio (NECR) device to enable autonomous space operations on platforms constrained by size, weight and power (SWaP).

Intellisense’s NECR technology provides NASA with numerous applications and can be used to enhance the robustness and reliability of space communication and networking, especially cognitive radio devices. Smart sensing algorithms will be implemented on neuromorphic computing hardware, including Akida, and then integrated with radio frequency modules as part of a Phase II prototype.


“By integrating BrainChip’s Akida processor into our cognitive radio solutions, we will be able to provide our customers with an unparalleled level of performance, adaptability and reliability,” said Frank Willis, President and CEO of Intellisense.

“Intellisense provides advanced sensing and display solutions and we are thrilled to be partnering with them to deliver the next generation of cognitive radio capabilities,” said Sean Hehir, CEO of BrainChip. “Our Akida processor is uniquely suited to address the demanding requirements of cognitive radio applications and we look forward to continue partnering with Intellisense to deliver cutting-edge embedded processing with AI on-chip to their customers.”
 
  • Like
  • Love
  • Fire
Reactions: 57 users

TECH

Regular
Hi Tech,

The 24 months may be a reasonable period where the licence involves building an entirely new product, eg , Prophesee or Valeo with Akida, but there will be new licencees who want basically a COTS processor (ARM/SiFive/Intel) boosted by Akida. In those cases, once the initial batch is made, we could expect a shorter time to market.

We already know that Akida is processor agnostic and it has been confirmed that it does fit all ARM processors, so there must already be some progress in fitting Akida and ARM designs together.

Thanks for your opinion (y) and yes, I was hinting at what you wrote on the first line.

While other products and designs are, or could be, well underway, it's the companies we have been engaged with that
BrainChip is working hard on getting an IP License commitment from, which, as you have alluded to numerous times, isn't an easy gig.

While it's 100% true products are coming very soon, including others that won't be far behind, it's the company's goal to sign IP
Licenses in the first instance, so I would assume many are in the pipeline; we shall see.

Roll on 1 January 2025....looking forward to seeing a dollar or three in front of our share price !

Best regards.....Chris (Tech)
 
  • Like
  • Fire
  • Love
Reactions: 33 users
Top Bottom