BRN Discussion Ongoing

Deena

Regular
Extraordinarily low volume of shares changing hands today. Only just over 1.6 million traded. The price will have to rise considerably if they want more ... but they still won't be mine!
Deena
 
Last edited:
  • Like
  • Love
  • Fire
Reactions: 37 users
Excellent, albeit a bit long, article to provide some perspective on design, testing and implementation when dealing with AVs.

Good example of why timelines can push out as well.




Testing Perception and Sensor Fusion Systems​

Updated Apr 5, 2023


Overview​

An autonomous vehicle’s (AV) most computationally complex components lie within the perception and sensor fusion system. This system must make sense of the information that the sensors provide, which might include raw point-cloud and video-stream data. The perception and sensor fusion system’s job is to crunch all of the data and determine what it is seeing: Lane markings, pedestrians, cyclists, vehicles, or street signs, for example.

To address this computational challenge, automotive suppliers seemingly could build a supercomputer and throw it in the vehicle. However, a supercomputer consumes heaps of power, and that directly conflicts with the automotive industry’s goal to create efficient cars. We can’t expect Level 4 vehicles to be connected to a huge power supply to run the largest and smartest computer for making huge decisions. The industry must strike a balance between processing power and power consumption.

Such a monumental task requires specialized hardware; for example, “accelerators” that help specific perception algorithms execute extremely fast and precisely. Learn more about that hardware architecture and its various implementations in the next section. After that, discover methodologies to test the perception and sensor fusion systems from a hardware and system-level test perspective.

Perception and Sensor Fusion Systems​

As noted in the introduction, AV brains can be centralized in a single system, distributed to the edge of the sensors, or a combination of both:

Figure 1. Control Placement Architectures
NI often refers to a centralized platform as the AV compute platform, though other companies have different names for it. AV compute platforms include the Tesla full self-driving platform and the NVIDIA DRIVE AGX platform.

Figure 2. NVIDIA DRIVE AGX Platform
MobilEye’s EyeQ4 offers decentralized sensors. If you combine such platforms with a centralized compute platform to offload processing, they become a hybrid system.
When we speak of perception and sensor fusion systems, we isolate the part of the AV compute platform that takes in sensor information from the cameras, radars, lidars, and, occasionally, other sensors, and spits out a representation of the world around the vehicle to the next system: the path planning system. This system locates the vehicle in 3D space and maps it in the world.

Hardware and Software Technologies​

Certain processing units are best-suited for certain types of computation; for example, CPUs are particularly good at utilizing off-the-shelf and open source code to execute high-level commands and handle memory allocation. Graphics processing units handle general-purpose image processing very well. FPGAs are excellent at executing fixed-point math very quickly and deterministically. You can find tensor processing units and neural network (NN) accelerators built to execute deep learning algorithms with specific activation functions, such as rectified linear units, extremely quickly and in parallel. Redundancy is built into the system to ensure that, if any component fails, it has a backup.
Because backups are critical in the event of any catastrophic failure, there cannot be a single point of failure (SPOF) anywhere, especially if those compute elements are to receive their ASIL-D certification.
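As a toy illustration of the point about activation functions (this sketch is not from the article; the scale value and array shapes are arbitrary assumptions): a rectified linear unit is just a max against zero, so it can be evaluated directly on quantized integers, which is exactly the kind of operation fixed-point FPGA fabric or an NN accelerator executes cheaply and in parallel.

```python
# Minimal sketch: ReLU on float data vs. on int8 "fixed-point" data,
# approximating what an NN accelerator evaluates in parallel.
import numpy as np

def relu_float(x: np.ndarray) -> np.ndarray:
    return np.maximum(x, 0.0)

def quantize_int8(x: np.ndarray, scale: float) -> np.ndarray:
    # Symmetric quantization: real_value ~= int_value * scale
    return np.clip(np.round(x / scale), -128, 127).astype(np.int8)

def relu_int8(q: np.ndarray) -> np.ndarray:
    # ReLU commutes with positive scaling, so it can run directly on integers.
    return np.maximum(q, np.int8(0))

x = np.array([-1.5, -0.2, 0.0, 0.7, 2.3], dtype=np.float32)
scale = 0.02  # arbitrary quantization step for the sketch
assert np.allclose(relu_float(x),
                   relu_int8(quantize_int8(x, scale)).astype(np.float32) * scale,
                   atol=scale)
```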

Some of these processing units consume large amounts of power. The more power compute elements consume, the shorter the range of the vehicle (if electric), and the more heat that’s generated. That is why you’ll often find large fans on centralized AV compute platforms and power-management integrated circuits on the board. These are critical for keeping the platform operating under ideal conditions. Some platforms incorporate liquid cooling, which requires controlling pumps and several additional chips.
Atop the processing units lies plenty of software in the form of firmware, OSs, middleware, and application software. As of this writing, most Level 4 vehicle compute platforms run something akin to the Robot Operating System (ROS) on an Ubuntu Linux or other Unix distribution. Most of these implementations are nondeterministic, and engineers recognize that, in order to deploy safety-critical vehicles, they must eventually adopt a real-time OS (RTOS). However, ROS and similar robot middleware are excellent prototyping environments due to their vast amount of open source tools, ease of getting started, massive online communities, and data workflow simplicity.
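To give a sense of why ROS-style middleware is such an easy prototyping environment, here is a minimal ROS 2 (rclpy) node that publishes a status message at 10 Hz. It assumes a sourced ROS 2 installation; the node and topic names are illustrative only, not part of any stack described above.

```python
# Minimal ROS 2 node sketch: publishes a heartbeat string at 10 Hz.
# Assumes a ROS 2 (rclpy) environment is installed and sourced.
import rclpy
from rclpy.node import Node
from std_msgs.msg import String

class PerceptionHeartbeat(Node):
    def __init__(self):
        super().__init__('perception_heartbeat')
        self.pub = self.create_publisher(String, 'perception/status', 10)
        self.timer = self.create_timer(0.1, self.tick)  # 10 Hz, not hard real time

    def tick(self):
        msg = String()
        msg.data = 'perception alive'
        self.pub.publish(msg)

def main():
    rclpy.init()
    node = PerceptionHeartbeat()
    try:
        rclpy.spin(node)
    finally:
        node.destroy_node()
        rclpy.shutdown()

if __name__ == '__main__':
    main()
```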
With advanced driver-assistance systems (ADAS), engineers have recognized the need for RTOSs and have been developing and creating their own hardware and OSs to provide it. In many cases, these compute-platform providers incorporate best practices such as the AUTOSAR framework.
Perception and sensor fusion system software architecture varies dramatically due to the number and type of sensors associated with the perception system; types of algorithms used; hardware that’s running the software; and platform maturity. One significant difference in software architecture is “late” versus “early” sensor fusion.
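To make the “late” versus “early” fusion distinction concrete, here is a hedged toy sketch (not the article’s architecture, and the stand-in models are placeholders): early fusion combines sensor features before a single model scores them, while late fusion runs a model per sensor and merges the resulting outputs.

```python
# Toy contrast between early and late sensor fusion (illustrative only).
import numpy as np

def early_fusion(camera_feat: np.ndarray, radar_feat: np.ndarray) -> float:
    # Fuse raw/low-level features first, then let ONE model score them.
    fused = np.concatenate([camera_feat, radar_feat])
    return single_model(fused)

def late_fusion(camera_feat: np.ndarray, radar_feat: np.ndarray) -> float:
    # Run a model per sensor, then combine their independent scores.
    return 0.5 * camera_model(camera_feat) + 0.5 * radar_model(radar_feat)

# Stand-in "models": any callable returning an object-presence score in [0, 1].
def single_model(x): return float(1 / (1 + np.exp(-x.mean())))
def camera_model(x): return float(1 / (1 + np.exp(-x.mean())))
def radar_model(x):  return float(1 / (1 + np.exp(-x.mean())))

cam, rad = np.random.rand(16), np.random.rand(8)
print(early_fusion(cam, rad), late_fusion(cam, rad))
```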

Product Design Cycle​

To create a sensor fusion compute platform, engineers implement a multistep process. First, they purchase and/or design the chips. Autonomous compute platform providers may employ their own silicon design, particularly for specific NN accelerators. Those chips undergo test as described in the semiconductor section below. After the chips are confirmed good, contract manufacturers assemble and test the boards. Embedded chip software design and simulation occur in parallel to chip design and chip/module bring-up. Once the system is assembled, engineers conduct functional tests and embedded software tests, such as hardware-in-the-loop (HIL). Final compute-platform packaging takes place in-house or at the contract manufacturer, where additional testing occurs.

Semiconductor Hardware-Level Tests​

As engineers design these compute units, they execute tests to ensure that the units operate as expected:

Semiconductor-Level Validation and Verification​

As mentioned, all semiconductor chips undergo a process of chip-level validation and verification. Typically, these help engineers create specifications documents and send the product through the certification process. Often, hardware redundancy and safety are checked at this level. Most of these tests are conducted digitally, though analog tests also ensure that the semiconductor manufacturing process occurred correctly.

Semiconductor-Level Production Test​

After the chip engineering samples are verified, they’re sent into production. Several tests unique to processing units at the production wafer-level test stage revolve around testing the highly dense digital connections on the processors.
At this stage, ASIL-D and ISO 26262 validation occurs, and further testing confirms redundancy, identifies SPOF, and verifies manufacturing process integrity.

Compute-Platform Validation​

After compute-platform manufacturers receive their chips and package them onto a module or subsystem, the compute-platform validation begins. Often, this means testing various subsystem functionality and the entire compute platform as a whole; for example (a test-sequence sketch follows this list):
  • Ensuring that all automotive network ports (controller area network [CAN], local interconnect network, and T1/Ethernet [ENET]) are communicating correctly in both directions
  • Ensuring that all standard network ports (ENET, USB, and PCIe) are communicating correctly in both directions
  • Ensuring that all sensor interfaces can communicate and handle standard loads for each type of sensor
  • Providing a representative “load” on the system and validating that it completes a task
  • Measuring element-level and complete-system power consumption while completing a task
  • Measuring system thermal performance under various loads
  • Placing the subsystem or entire compute platform in a temperature, environmental, or acceleration (shaker table) chamber to ensure that it can withstand extreme operating conditions
  • Verifying that the system can connect to a GPS or global navigation satellite system (GNSS) port and synchronize its clock to a certain specification within a certain time
  • Checking onboard system diagnostics
  • Power-cycling the complete system at various voltage and current levels
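The sketch below shows what one item from this checklist might look like as an automated test: power-cycling the platform at several supply voltages. It is only a hedged outline; the `FakeDut` class and its methods are hypothetical placeholders for whatever power-supply and platform drivers a real bench exposes.

```python
# Hedged sketch of one checklist item: power-cycling the platform at several
# supply voltages. The DUT driver here is a made-up stand-in, not a real API.
import pytest

VOLTAGE_POINTS = [9.0, 12.0, 16.0]   # e.g. cold-crank, nominal, high-line
BOOT_TIMEOUT_S = 30.0

class FakeDut:
    """Stand-in for a real power-supply + compute-platform driver."""
    def set_supply(self, volts: float) -> None: ...
    def power_cycle(self) -> None: ...
    def wait_for_boot(self, timeout_s: float) -> bool:
        return True  # a real driver would poll a heartbeat/diagnostic port

@pytest.fixture
def dut():
    return FakeDut()

@pytest.mark.parametrize("volts", VOLTAGE_POINTS)
def test_power_cycle_boots(dut, volts):
    dut.set_supply(volts)
    dut.power_cycle()
    assert dut.wait_for_boot(BOOT_TIMEOUT_S), f"no boot at {volts} V"
```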

Figure 3. Chip Validation

Functional and Module-Level Test​

Because the compute platform is a mix of both consumer electronics and automotive components, you have to thoroughly validate it with testing procedures from both industries: you need automotive and consumer electronics network interfaces and methodologies.
NI is uniquely suited to address these complex requirements through our Autonomous Compute Platform Validation product. We selected a mix of the interfaces and test instruments you might require to address the validation steps outlined above, and packaged them into a single configuration. Because we utilize PXI instrumentation, our flexible solution easily addresses changing and growing AV validation needs. Figure 4 shows a solution example:

Figure 4. Autonomous Compute Platform Validation Solution Example

Life Cycle, Environmental, and Reliability Tests​

Functionally validating a single compute platform is fairly straightforward. However, once the scope of the test grows to encompass multiple devices at a time or in various environments, test system size and complexity grow. It’s important to incorporate functional test and scale the number of test resources appropriately, with corresponding parallel or serial testing capabilities. Also, you need to integrate the appropriate chambers, ovens, shaker tables, and dust rooms to simulate environmental factors. And because some tests must run for days, weeks, or even months to represent the life cycle of the devices under test, tests need to execute, uninterrupted, for that duration of time.
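As a hedged sketch of the scaling point above (parallel test resources plus long, uninterrupted run times), the snippet below runs the same test loop on several devices in parallel until a wall-clock deadline. The device IDs and the body of `run_functional_pass` are hypothetical placeholders.

```python
# Hedged sketch: run the same functional test loop on several DUTs in parallel
# until a wall-clock deadline, as a life-cycle/soak test might.
import time
from concurrent.futures import ThreadPoolExecutor

def run_functional_pass(dut_id: str) -> bool:
    """Placeholder for one iteration of the real functional test body."""
    time.sleep(1.0)
    return True

def soak(dut_id: str, duration_s: float) -> dict:
    """Repeat the functional pass on one DUT until the deadline."""
    deadline = time.monotonic() + duration_s
    passes = failures = 0
    while time.monotonic() < deadline:
        if run_functional_pass(dut_id):
            passes += 1
        else:
            failures += 1
    return {"dut": dut_id, "passes": passes, "failures": failures}

if __name__ == "__main__":
    duts = ["DUT-01", "DUT-02", "DUT-03", "DUT-04"]  # hypothetical device IDs
    duration_s = 10.0   # use e.g. 72 * 3600 for a 72-hour soak
    with ThreadPoolExecutor(max_workers=len(duts)) as pool:
        results = list(pool.map(lambda d: soak(d, duration_s), duts))
    print(results)
```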
All of these life cycle, environmental, and reliability testing challenges are solved with the right combination of test equipment, chambering, DUT knowledge, and integration capability. To learn more about our partner channel that can assist with integrating these complex test systems, please contact us.

Embedded Software and Systems Tests​

Perception and sensor fusion systems are the most complex vehicular elements for both hardware and software. Because the embedded software in these systems is truly cutting-edge, software validation test processes also must be cutting-edge. Later in this document, learn more about isolating the software itself to validate the code as well as testing the software once it has been deployed onto the hardware that will eventually go in the vehicle.

Figure 5. Simulation, Test, and Data Recording

Algorithm Design and Development​

We can’t talk about software test without recognizing that the engineers and developers designing the software are constantly trying to improve their software’s capability. Without diving in too deeply, know that software that is appropriately architected makes validating that software significantly easier.

Figure 6. Design, Deployment, and Verification and Validation (V and V)

Software Test and Simulation​

99.9% of perception and sensor fusion validation occurs in software. It’s the only way to test at an extremely high volume within reasonable cost and timeframe constraints, because you can utilize cloud-level deployments and run hundreds of simulations simultaneously. Often, this is known as simulation or software-in-the-loop (SIL) testing. As mentioned, we need an extremely realistic environment if we are testing the perception and sensor fusion software stack; otherwise, we will have validated our software against scenarios and visual representations that only exist in cartoon worlds.

Figure 7. Perception Validation Characteristics
Testing AV software stack perception and sensor fusion elements requires a multitude of things: You need a representative “ego” vehicle in a representative environmental worldview. You need to place realistic sensor representations on the ego vehicle in spatially accurate locations, and they need to move with the vehicle. You need accurate ego vehicle and environmental physics and dynamics. You need physics-based sensor models that give you actual information that a real-world sensor would provide, not some idealistic version of it.
After you have equipped the ego vehicle and set up the worldview, you need to execute scenarios for that vehicle and its sensors to encounter by playing through a preset or prerecorded scene. You also can let the vehicle drive itself through the scene. Either way, the sensors must have a communication link to the software under test, for example some type of TCP link, whether the software runs on the same machine or a separate one.
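A hedged sketch of such a link is below: a length-prefixed TCP message exchange that a simulator could use to stream serialized sensor frames to the software under test. The framing scheme and the use of JSON payloads are assumptions made for illustration, not a description of any particular simulator.

```python
# Hedged sketch of a simple length-prefixed TCP link between a simulator and
# the software under test; message content and framing are illustrative only.
import json
import socket
import struct

def send_frame(sock: socket.socket, frame: dict) -> None:
    payload = json.dumps(frame).encode("utf-8")
    sock.sendall(struct.pack("!I", len(payload)) + payload)  # 4-byte length prefix

def _recv_exact(sock: socket.socket, n: int) -> bytes:
    buf = b""
    while len(buf) < n:
        chunk = sock.recv(n - len(buf))
        if not chunk:
            raise ConnectionError("link closed")
        buf += chunk
    return buf

def recv_frame(sock: socket.socket) -> dict:
    (length,) = struct.unpack("!I", _recv_exact(sock, 4))
    return json.loads(_recv_exact(sock, length).decode("utf-8"))

# Usage idea: the simulator calls send_frame() with serialized sensor output each
# tick; the perception stack under test calls recv_frame() in its input loop.
```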
That software under test is then tasked with identifying its environment, and you can verify how well it did by comparing the results of the perception and sensor fusion stack against the “ground truth” that the simulation environment provides.
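The comparison against ground truth is often scored with simple detection matching. Below is a hedged sketch using intersection-over-union (IoU) on 2D boxes; real pipelines use richer metrics and 3D data, and the threshold value here is an arbitrary assumption.

```python
# Hedged sketch: score detections against simulator ground truth by greedy
# IoU matching on 2D boxes (x1, y1, x2, y2). Illustrative only.
def iou(a, b):
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0

def score_frame(detections, ground_truth, iou_threshold=0.5):
    matched = set()
    true_positives = 0
    for det in detections:
        best, best_iou = None, 0.0
        for i, gt in enumerate(ground_truth):
            overlap = iou(det, gt)
            if i not in matched and overlap > best_iou:
                best, best_iou = i, overlap
        if best is not None and best_iou >= iou_threshold:
            matched.add(best)
            true_positives += 1
    false_positives = len(detections) - true_positives
    false_negatives = len(ground_truth) - true_positives
    return true_positives, false_positives, false_negatives

print(score_frame([(0, 0, 10, 10), (50, 50, 60, 60)], [(1, 1, 11, 11)]))
```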

Figure 8. Testing Perception and Planning
The real advantage is that you can spin up tens of thousands of simulation environments in the cloud and cover millions of miles per day in simulated test scenarios. To learn more about how to do this, contact us.

Figure 9. Using the Cloud with SystemLink Software

Record and Playback​

If you live in a city that tests AVs, you may have seen those dressed-up cars navigating the road with test drivers hovering their hands over the wheel. Those mule vehicles rack up millions of miles so that engineers can verify their software. There are many steps to validating software with road-and-track test.
The most prevalent methodology for validating embedded software is to record large amounts of real-world sensor information through the sensors placed on vehicles. This is the highest-fidelity way to provide sensor data to the software under test, as it is actual, real-world data. The vehicle can be in autonomous mode or non-autonomous mode. It is the engineer’s job to equip the vehicle with a recording system that stores massive amounts of sensor information without impeding the vehicle. A representative recording system is shown in Figure 10:

Figure 10. Vehicle Recording System
Once data has been recorded onto large data stores, it needs to move to a place where engineers can work with it. The process of moving the data from a large RAID array to the cloud or on-premises storage is a challenge, because we’re talking about moving tens, if not hundreds, of terabytes to storage as quickly as possible. There are dedicated copy centers and server-farm-level interfaces that can help accomplish this.
Engineers then face the daunting task of classifying or labeling stored data. Typically, companies pay millions of dollars to send sensor data to buildings full of people who visually inspect the data and identify things such as pedestrians, cars, and lane markings. These identifications serve as “ground truth” for the next step of the process. Many companies are investing heavily in automated labeling that would ideally eliminate the need for human annotators, but that technology is not yet feasible. As you might imagine, sufficiently developing that technology would greatly reduce the effort of testing embedded software that classifies data, resulting in much more confidence in AVs.
After data has been classified and is ready for use, engineers play it back into the embedded software, typically on a development machine (open-loop software test), or on the actual hardware (open-loop hardware test). This is known as open-loop playback because the embedded software is not able to control the vehicle—it can only identify what it sees, which is then compared against the ground truth data.

Figure 11. ADAS Data Playback Diagram
One of the more cutting-edge things engineers do is convert real-world sensor data into their simulation environment so that they can make changes to their prerecorded data. This way, they can add weather conditions that the recorded data didn’t capture, or other scenarios that the recording vehicle didn’t encounter. While this provides high-fidelity sensor data and test-case breadth, it is quite complex to implement. It does provide limited capability to perform closed-loop tests, such as SIL, with real-world data while controlling a simulated vehicle.
Mule vehicles equipped with recording systems are often very expensive and take significant time and energy to deploy and rack up miles. Plus, you can’t possibly encounter all of the various scenarios you need to validate vehicle software. This is why you see significantly more tests performed in simulation.

HIL Test​

Once there’s an SIL workflow in place, it’s easier to transition to HIL, which runs the same tests with the software onboard the hardware that eventually makes it into the vehicle. You can take the existing SIL workflow and cut communication between the simulator and the software under test. And you can add an abstraction layer between the commands sent to and from the simulator and hardware that has sensor and network interfaces to communicate with the compute platform under test. The commands talking to the hardware must execute in real time to be validated appropriately. You can take those same sensor interfaces described in the AV Functional Test section and plug them into the SIL system with the real-time software abstraction layer and create a true HIL tester.
You can execute perception and sensor fusion HIL tests either by directly injecting into the sensor interfaces, or, with the sensor in the loop, providing an emulated over-the-air interface to the sensors, as shown in Figure 12.

Figure 12. Closed-Loop Perception Test
Each of these processes, from road test to SIL to HIL, employs a similar workflow. For more information about this simulation test platform, contact us.

Conclusion​

Now that you understand how to test AV compute platform perception and sensor fusion systems, you may want a supercomputer as the brain of your AV. Know that, as a new market emerges, there are uncertainties. NI offers a software-defined platform that helps solve functional testing challenges to validate the custom computing platform and test automotive network protocols, computer protocols, sensor interfaces, power consumption, and pin measurements. Our platform flexibility, I/O breadth, and customizability not only cover today’s testing requirements to bring the AV to market, but can also help you swiftly adapt to tomorrow’s needs.
 
  • Like
  • Fire
Reactions: 19 users
D

Deleted member 118

Guest
  • Haha
Reactions: 4 users

Earlyrelease

Regular
Hi Sirod69.....thanks for that breakdown.

Nice to see Peter and Anil still sitting at number 2 & 3 respectively...holding 13.75% of the company, plus 56.52% currently being held by
the rest of us outside the top 20...combined giving a nice 70.27% of the company...just an observation at this point, nothing more than that.

Regarding some nice news to drop just prior to the AGM, legally the company can't sit on information to make the timing of that possible,
it either plays out that way or not, maybe we could ask that the signing of a new IP License takes place on 22 May, that would work :ROFLMAO::ROFLMAO::ROFLMAO:

Tech 😉

Tech
Whilst legally yes they can’t sit on information. However depending on how your relationships are with your customers and how well you have supported them and keep matters in house, there is nothing wrong with giving them your AGM date and requesting that if they were in a position to endorse the release of something the timing would be welcome around that time. Now it may not work out but the old saying if you don’t ask you don’t get. So you never ever know.
 
  • Like
  • Fire
  • Love
Reactions: 22 users

Bravo

If ARM was an arm, BRN would be its biceps💪!
OK, how did we miss this particular article?!!!




Cisco updates Webex, aims to enhance hybrid work experiences with AI​

Sri Krishna@SriTalkstech
March 28, 2023 12:33 PM

Image Credit: Piscine



Cisco today unveiled AI-powered enhancements across its Webex suite, promising to deliver hybrid work experiences with automation, while protecting customers’ confidentiality and privacy.

The updates span workspace, collaboration and customer experience categories, built on the Webex platform, and join a long list of AI and machine learning (ML) features already embedded in Cisco products.

The next step forward for such collaboration is video intelligence, which Webex is expanding throughout the conference room operating system RoomOS.
With cinematic meeting experiences, cameras follow individuals through voice and facial recognition to capture the best angle of the active speaker. This ensures focus on the speaker, while making certain that hybrid workers not physically present in the room can still feel included, according to Cisco.

Once-in-a-generation platform shift​

RoomOS uses facial detection, information about where people are sitting in a room and voice location to direct the meeting and provide the best view. The feature individually frames and levels participants at eye height, and in speaker mode, uses audio triangulation from devices and an intelligent beam-forming table microphone to quickly and accurately identify the position of the active speaker.

Cinematic meetings support a range of camera intelligence features, including speaker mode, frames, presenter and audience tracking and meeting zones.


“AI is fundamentally transforming the way we work and live,” Jeetu Patel, EVP and GM for security and collaboration at Cisco, told VentureBeat. “It has the potential to make collaboration radically more immersive, personalized and efficient.”

Cisco studied what he described as a “once-in-a-generation platform shift” that AI could support. The company’s efforts center around re-imagining hybrid work.

Targeting hybrid work experiences​


With the rise of hybrid work, it’s essential that organizations provide employees with the flexibility to work in different locations and in different ways. To address this, Cisco has introduced three new AI-based features into its Webex suite.

This includes a super resolution function that ensures crystal-clear video in Webex meetings, even in low-bandwidth conditions. This is achieved through deep neural network video recovery that hides choppiness, removes blocking artifacts and reconstructs the face and body to render in high-resolution images and videos.

Another new AI capability is smart re-lighting, which automatically enhances lighting in Webex meetings to ensure that people look their best in any environment. This is particularly useful when working in poor lighting conditions. The algorithm is trained to recognize different scenarios with people in different lighting, and automatically enhances the light on the facial foreground.

The third new capability is a “be right back” update, which automatically puts up a BRB message, blurs the background, and mutes audio when a user steps away from a Webex meeting. This feature saves time and is simple to use. By leveraging a 3D face mesh algorithm, Webex can detect when a user has stepped away and replace their video feed with a BRB indicator until they return. Users can turn their audio and video back on when they are back in front of the screen.

AI-powered chat summaries​

As customer expectations continue to rise and organizations handle billions of daily customer interactions, it has become challenging for agents and legacy systems to keep up with the volume and personalization required. To this end, Cisco is introducing new AI capabilities for its customer experience solutions, including Webex Contact Center and Webex Connect.

One of the new capabilities, topic analysis in Webex Contact Center, provides actionable insights to business analysts by surfacing key reasons customers are calling in. This feature is built using an AI large language model (LLM) that aggregates call transcripts and highlights trends for business analysts.

Another capability, agent answers, acts as a real-time coach for human agents by listening and instantly surfacing knowledge-based articles and helpful information for the customer. This capability uses learnings from self-service and automated customer interactions and applies AI to ensure that the highest match probability options are identified first.

Meanwhile, AI-powered chat summaries eliminate the need for agents to read lengthy digital chat histories and provide key takeaways in a quickly digestible format. Lastly, Webex Connect users can now describe the function they want to perform, and AI will generate and return the appropriate code instantly, making it easier to create and iterate customer journeys quickly.

 
  • Like
  • Fire
  • Love
Reactions: 69 users

TECH

Regular
Tech
Whilst legally yes they can’t sit on information. However depending on how your relationships are with your customers and how well you have supported them and keep matters in house, there is nothing wrong with giving them your AGM date and requesting that if they were in a position to endorse the release of something the timing would be welcome around that time. Now it may not work out but the old saying if you don’t ask you don’t get. So you never ever know.

I'd suggest listening to the upcoming tech talk between ARM and Brainchip 🧐:whistle:

May 9 11pm Perth

May 10 1am Sydney

The rather big question really is, once a prospective client becomes an official holder of an IP License, we should possibly consider that
mass production of a product or products could be added to that timeframe, maybe 24 months minimum from the day they actually sign
a contract, with no new IP Licenses currently signed many, I would suggest, haven't factored in a 24 month design cycle from that day forth.

Anyway, that's simply my view.....cheers.
 
  • Like
  • Thinking
  • Haha
Reactions: 14 users

Frangipani

Top 20
Unless Nandan Nayampally possesses the unlikely gift of bilocation, I’d suggest contacting our friends (?!) at Cisco to see whether/how they can help.
Let’s just assume for a moment that they are indeed hidden behind one of the NDAs. Dot, dot, dot…
I am thinking along the lines of a Webex Hologram demonstration as shown in that “Takei on Tech” video @Taproot shared with us the other day. Cisco is promoting their revolutionary tech (see how they’d complement each other?!) as “a real-time, photorealistic holographic interaction that goes beyond video conferencing for a truly immersive experience” on its homepage.
Here you are, in case you missed watching it when it was first posted:



Just imagine that Sydney AGM ballroom full of disgruntled shareholders abruptly going dead silent, when they are told that Nandan Nayampally will shortly be with them “in the room”, even though in actual fact he is physically in California, having dinner the night before his conference presentation (given the time difference of 17 hours). They’d all be left speechless (well except those of you who will also be attending in person and have now been tipped off by me… 🤣)
The only problem being, the audience members would all need their own AR headset to enjoy the cutting edge 3D Pacific-bridging experience. Mmmhhh… Maybe all shareholders could enter into an AGM lottery on arrival, and then a dozen or so winners would be drawn and get to experience and rave about this groundbreaking technology first-hand (or rather first-eye).

That would be a stellar PR coup for Brainchip and undoubtedly result in innumerable relieved shareholder faces as well as an explosion of (unexploding) lunar-bound 🚀🚀🚀🚀🚀🚀🚀🚀 in this space, while the Mickleboros of the world will be staring in disbelief at the BRN share price trajectory.
Sadly, the stock market feels more like a casino these days, so more likely there will be heaps of 😢 😭😤🤬 instead, after the almost inevitable “sell on good news”.

P.S.: In case the BRN management happens to monitor this - I just wanted to mention in passing that I wouldn‘t mind being donated a couple of thousand shares as a reward for this ingenious proposal… 😂


In the “This is our mission” podcast with Geoffrey Moore, Sean Hehir mentions from 5:12 min “a company in communications, something you wouldn’t even think about - and it’s not a handset - but they want AI functionality in their devices, and they’re gonna build chips, so world-class companies are usually driven by some competitive nature (…) it’s that competitive nature that’s really driving their verticalisation”.

So we get this is not about handsets such as smartphones or gaming controllers, but how about headsets, then? Besides the mixed-reality Cisco one I had referred to in my post above for Webex videoconferencing solutions, there is of course also the long-awaited possibly-soon-to-be-released Apple VR/AR-headset dubbed Reality Pro (this high-end model is rumoured to sell at around 3000 USD) and a more affordable version dubbed Reality One that is predicted to be launched at a later stage. Other posters have previously also commented on a speculative link to Brainchip re those upcoming Apple mixed-reality headsets that will have eye and gesture tracking and are said to operate independently of an iPhone, an iPad or a Mac.

While we should keep in mind the word “speculative“ here, let us occasionally allow ourselves to daydream just a little. The English word earworm is a calque (loan translation) of the German word Ohrwurm, referring to a catchy piece of music that continues to occupy your mind long after it has been played. The song “A Whole New World” from the Disney movie Aladdin that I brought up in my post yesterday sure is one of those earworms for me. And not only for its captivating melody:

🎼 Unbelievable sights, indescribable feeling
Soaring, tumbling, freewheeling
Through an endless diamond sky
A whole new world (Don't you dare close your eyes)
A hundred thousand things to see (Hold your breath, it gets better)
I'm like a shooting star, I've come so far
I can't go back to where I used to be (A whole new world)
With new horizons to pursue
I'll chase them anywhere
There's time to spare
Let me share this whole new world with you…”

Edit: In the light of what @TECH just posted minutes before me, I should add we’d better restrict our daydreaming to future product updates containing Akida, and shouldn’t set our hopes on recently released or soon to be launched products, as it is true that we would need to factor in long production timelines once an IP license has been signed (except if the deal was done through Megachips or Renesas, I assume?)
 
Last edited:
  • Like
  • Fire
  • Wow
Reactions: 25 users

Terroni2105

Founding Member
Geoffrey Moore promoting the recent podcast with Brainchip on Twitter


 
  • Like
  • Love
  • Fire
Reactions: 59 users
  • Haha
Reactions: 3 users

alwaysgreen

Top 20
I'd suggest listening to the upcoming tech talk between ARM and Brainchip 🧐:whistle:

May 9 11pm Perth

May 10 1am Sydney

The rather big question really is, once a prospective client becomes an official holder of an IP License, we should possibly consider that
mass production of a product or products could be added to that timeframe, maybe 24 months minimum from the day they actually sign
a contract, with no new IP Licenses currently signed many, I would suggest, haven't factored in a 24 month design cycle from that day forth.

Anyway, that's simply my view.....cheers.
I'm okay with the cycle to production. Here's hoping for 10 new IP licenses over the next 12 months. The market will factor in the future revenue.
 
  • Like
  • Fire
  • Love
Reactions: 19 users

Diogenese

Top 20
I'd suggest listening to the upcoming tech talk between ARM and Brainchip 🧐:whistle:

May 9 11pm Perth

May 10 1am Sydney

The rather big question really is, once a prospective client becomes an official holder of an IP License, we should possibly consider that
mass production of a product or products could be added to that timeframe, maybe 24 months minimum from the day they actually sign
a contract, with no new IP Licenses currently signed many, I would suggest, haven't factored in a 24 month design cycle from that day forth.

Anyway, that's simply my view.....cheers.
Hi Tech,

The 24 months may be a reasonable period where the licence involves building an entirely new product, e.g., Prophesee or Valeo with Akida, but there will be new licensees who want basically a COTS processor (ARM/SiFive/Intel) boosted by Akida. In those cases, once the initial batch is made, we could expect a shorter time to market.

We already know that Akida is processor agnostic and it has been confirmed that it does fit all ARM processors, so there must already be some progress in fitting Akida and ARM designs together.
 
  • Like
  • Fire
  • Love
Reactions: 92 users

Terroni2105

Founding Member
Any idea why Socionext is not listed on the BrainChip website under the “You’re In Good Company” heading?

We know they are still working together and I would have thought it beneficial from BrainChip standpoint to have Socionext noted on their website in some capacity.
 
  • Like
  • Thinking
  • Love
Reactions: 15 users
In the “This is our mission” podcast with Geoffrey Moore, Sean Hehir mentions from 5:12 min “a company in communications, something you wouldn’t even think about - and it’s not a handset - but they want AI functionality in their devices, and they’re gonna build chips, so world-class companies are usually driven by some competitive nature (…) it’s that competitive nature that’s really driving their verticalisation”.

So we get this is not about handsets such as smartphones or gaming controllers, but how about headsets, then? Besides the mixed-reality Cisco one I had referred to in my post above for Webex videoconferencing solutions, there is of course also the long-awaited possibly-soon-to-be-released Apple VR/AR-headset dubbed Reality Pro (this high-end model is rumoured to sell at around 3000 USD) and a more affordable version dubbed Reality One that is predicted to be launched at a later stage. Other posters have previously also commented on a speculative link to Brainchip re those upcoming Apple mixed-reality headsets that will have eye and gesture tracking and are said to operate independently of an iPhone, an iPad or a Mac.

While we should keep in mind the word “speculative“ here, let us occasionally allow ourselves to daydream just a little. The English word earworm is a calque (loan translation) of the German word Ohrwurm, referring to a catchy piece of music that continues to occupy your mind long after it has been played. The song “A Whole New World” from the Disney movie Aladdin that I brought up in my post yesterday sure is one of those earworms for me. And not only for its captivating melody:

🎼 Unbelievable sights, indescribable feeling
Soaring, tumbling, freewheeling
Through an endless diamond sky
A whole new world (Don't you dare close your eyes)
A hundred thousand things to see (Hold your breath, it gets better)
I'm like a shooting star, I've come so far
I can't go back to where I used to be (A whole new world)
With new horizons to pursue
I'll chase them anywhere
There's time to spare
Let me share this whole new world with you…”

Edit: In the light of what @TECH just posted minutes before me, I should add we’d better restrict our daydreaming to future product updates containing Akida, and shouldn’t set our hopes on recently released or soon to be launched products, as it is true that we would need to factor in long production timelines once an IP license has been signed (except if the deal was done through Megachips or Renesas, I assume?)
The other side of communications ties in potentially with cyber attacks and packet inspections which is part of what the CyberNeuro RT project is about.

2022 paper here.

HERE

The other use case possibility is as per their patent grant from 2022. They've gone to the effort to patent this :unsure:

The latest patents awarded to BrainChip from the USPTO include:


  • US 11,468,299 “An Improved Spiking Neural Network,” protects the learning function of BrainChip’s digital neuron circuit implemented on a neuromorphic integrated circuit/system (e.g., AkidaTM).
  • US 11,429,857, “Secure Voice Communications System,” protects a system to establish secure voice communications between a local and a remote neural network device. Information is encrypted by transmitting spike timing rather than original data, rendering it useless to anyone intercepting the transmission.
 
  • Like
  • Fire
  • Love
Reactions: 36 users

Vladsblood

Regular
Extraordinarily low volume of shares changing hands today. Only just over 1.6 million. The price will have to rise considerably if they want more ... but they won't be mine!
Deena
Hi Deena, yes, very low to extremely low volume. “The Thermometer” is showing the time frame is just about to turn the cycle upwards. Once time runs out, nothing can change the coming SP reversal, as time is so much more important than price in forecasting future moves. Cheers Deena, Vlad.
 
  • Like
  • Fire
  • Love
Reactions: 27 users

rgupta

Regular
I'd suggest listening to the upcoming tech talk between ARM and Brainchip 🧐:whistle:

May 9 11pm Perth

May 10 1am Sydney

The rather big question really is, once a prospective client becomes an official holder of an IP License, we should possibly consider that
mass production of a product or products could be added to that timeframe, maybe 24 months minimum from the day they actually sign
a contract, with no new IP Licenses currently signed many, I would suggest, haven't factored in a 24 month design cycle from that day forth.

Anyway, that's simply my view.....cheers.
If the cycle will go 24 months after signing an IP, that will mean the EQXX is not going to have Akida inside, as Merc is saying the launch is early 2024.
Or, the other way around, contracts are written differently with EAP.
Wait !!!
 
  • Like
Reactions: 1 users
The other side of communications ties in potentially with cyber attacks and packet inspections which is part of what the CyberNeuro RT project is about.

2022 paper here.

HERE

The other use case possibility is as their patent grant from 2022. They've gone to the effort to patent this :unsure:

The latest patents awarded to BrainChip from the USPTO include:


  • US 11,468,299 “An Improved Spiking Neural Network,” protects the learning function of BrainChip’s digital neuron circuit implemented on a neuromorphic integrated circuit/system (e.g., AkidaTM).
  • US 11,429,857, “Secure Voice Communications System,” protects a system to establish secure voice communications between a local and a remote neural network device. Information is encrypted by transmitting spike timing rather than original data, rendering it useless to anyone intercepting the transmission.
Not sure how I forgot this as well when it comes to communications ;)

Intellisense to use neuromorphic AI for next generation cognitive radio chip

Business news | March 22, 2023
By Nick Flaherty



Intellisense Systems is to use neuromorphic AI technology from BrainChip to improve cognitive radio systems on spacecraft and robotics.


Intellisense’s intelligent radio frequency (RF) systems enable wireless devices and platforms to sense and learn the characteristics of the communications environment in real time, providing enhanced communication quality, reliability and security. By integrating BrainChip’s Akida neuromorphic processor, Intellisense can deliver even more advanced, yet energy efficient, cognitive capabilities to its RF system solutions.

One such project is the development of a new Neuromorphic Enhanced Cognitive Radio (NECR) device to enable autonomous space operations on platforms constrained by size, weight and power (SWaP).

Intellisense’s NECR technology provides NASA numerous applications and can be used to enhance the robustness and reliability of space communication and networking, especially cognitive radio devices. Smart sensing algorithms will be implemented on neuromorphic computing hardware, including Akida, and then integrated with radio frequency modules as part of a Phase II prototype.


“By integrating BrainChip’s Akida processor into our cognitive radio solutions, we will be able to provide our customers with an unparalleled level of performance, adaptability and reliability,” said Frank Willis, President and CEO of Intellisense.

“Intellisense provides advanced sensing and display solutions and we are thrilled to be partnering with them to deliver the next generation of cognitive radio capabilities,” said Sean Hehir, CEO of BrainChip. “Our Akida processor is uniquely suited to address the demanding requirements of cognitive radio applications and we look forward to continue partnering with Intellisense to deliver cutting-edge embedded processing with AI on-chip to their customers.”
 
  • Like
  • Love
  • Fire
Reactions: 57 users

TECH

Regular
Hi Tech,

The 24 months may be a reasonable period where the licence involves building an entirely new product, e.g., Prophesee or Valeo with Akida, but there will be new licensees who want basically a COTS processor (ARM/SiFive/Intel) boosted by Akida. In those cases, once the initial batch is made, we could expect a shorter time to market.

We already know that Akida is processor agnostic and it has been confirmed that it does fit all ARM processors, so there must already be some progress in fitting Akida and ARM designs together.

Thanks for your opinion (y) and yes, I was hinting at what you wrote on the first line.

While other products and designs are/could be well underway, it's the companies we have been engaged with that the
company is working hard on getting an IP License commitment from, which, as you have alluded to numerous times, isn't an easy gig.

While it's 100% true products are coming very soon, including others that won't be far behind, it's the company's goal to sign IP
Licenses in the first instance, so I would assume many are in the pipeline; we shall see.

Roll on 1 January 2025....looking forward to seeing a dollar or three in front of our share price !

Best regards.....Chris (Tech)
 
  • Like
  • Fire
  • Love
Reactions: 33 users

Diogenese

Top 20
Not sure how I forgot this as well when it comes to communications ;)

Intellisense to use neuromorphic AI for next generation cognitive radio chip

Business news | March 22, 2023
By Nick Flaherty



Intellisense Systems is to use neuromorphic AI technology from BrainChip to improve cognitive radio systems on spacecraft and robotics.


Intellisense’s intelligent radio frequency (RF) systems enable wireless devices and platforms to sense and learn the characteristics of the communications environment in real time, providing enhanced communication quality, reliability and security. By integrating BrainChip’s Akida neuromorphic processor, Intellisense can deliver even more advanced, yet energy efficient, cognitive capabilities to its RF system solutions.

One such project is the development of a new Neuromorphic Enhanced Cognitive Radio (NECR) device to enable autonomous space operations on platforms constrained by size, weight and power (SWaP).

Intellisense’s NECR technology provides NASA numerous applications and can be used to enhance the robustness and reliability of space communication and networking, especially cognitive radio devices. Smart sensing algorithms will be implemented on neuromorphic computing hardware, including Akida, and then integrated with radio frequency modules as part of a Phase II prototype.


“By integrating BrainChip’s Akida processor into our cognitive radio solutions, we will be able to provide our customers with an unparalleled level of performance, adaptability and reliability,” said Frank Willis, President and CEO of Intellisense.

“Intellisense provides advanced sensing and display solutions and we are thrilled to be partnering with them to deliver the next generation of cognitive radio capabilities,” said Sean Hehir, CEO of BrainChip. “Our Akida processor is uniquely suited to address the demanding requirements of cognitive radio applications and we look forward to continue partnering with Intellisense to deliver cutting-edge embedded processing with AI on-chip to their customers.”
Hi Fmf,

The link you posted leads to this rabbit warren:

BrainChip expands partners for neuromorphic AI IP​

Business news | May 2, 2023
By Jean-Pierre Joosting
https://www.eenewseurope.com/en/brainchip-expands-partners-for-neuromorphic-ai-ip/

The eeNews article refers to BrainChip's associations with Intellisense, Teksun, emotion3D, and AI Labs, as well as Akida gen 2, Global Foundries, ARM:


BrainChip Holdings Ltd, the first commercial producer of ultra-low power, fully digital, event-based, neuromorphic AI IP, saw the launch of its second-generation Akida™ processor IP platform and key partnerships with AI leaders in the first quarter of 2023.


The 2nd generation Akida technology platform is a hyper-efficient, yet powerful neural processing system designed for embedded Edge AI applications. The latest Akida platform adds efficient 8-bit processing to go with advanced capabilities, such as spatial-temporal domain convolutions and Vision Transformer (ViT) acceleration, for an unprecedented level of performance in sub-watt devices, taking them from perception toward cognition.

Other technology advancements made to BrainChip’s IP portfolio in 2023 include integration of its Akida processor family with the Arm® Cortex®-M85 processor, demonstrating the company’s commitment to developing cutting-edge AI systems that deliver exceptional performance and efficiency.

BrainChip also achieved tape-out of its AKD1500 reference design on GlobalFoundries’ 22nm fully depleted silicon-on-insulator (FD-SOI) technology. This milestone is part of validating BrainChip’s IP across different processes and foundries, empowering partners with varied global manufacturing options.


As part of their commitment to improving functionality and options for partners, BrainChip entered several strategic partnerships to expand its partner ecosystem:

  1. Intellisense Systems chose BrainChip’s neuromorphic technology to improve the cognitive communication capabilities on size, weight, and power (SWaP) constrained platforms (such as spacecraft and robotics) for commercial and government markets.
  2. Their partnership with Teksun demonstrates and proliferates BrainChip’s technology through Teksun product development channels, impacting the next generation of intelligent vehicles, smart homes, medicine, and industrial IoT.
  3. BrainChip’s partnership with emotion3D enables an ultra-low-power working environment with on-chip learning to make driving safer and enable next-level user experience.
  4. Working with AI Labs Inc., both companies are collaborating on next-generation application development, leveraging the Minsky™ AI Engine in a cost-effective, compelling solution to real-world problems.
“As we move from milestone to milestone, our achievements for the first quarter of 2023 bode well for BrainChip’s growth,” said Nandan Nayampally, CMO of BrainChip. “From advancing the state of the art with our latest product developments to significantly expanding the ecosystem BrainChip inhabits through industry partnerships, we are pushing the edge of AI at a time of rapid market innovation.”

www.brainchip.com
 
  • Like
  • Fire
  • Love
Reactions: 54 users

Cardpro

Regular
OK, how did we miss this particular article?!!!

Cisco updates Webex, aims to enhance hybrid work experiences with AI​

How is this relevant to Brainchip?
 
  • Like
  • Haha
  • Fire
Reactions: 6 users

Damo4

Regular
How is this relevant to brainchip?..

Teksun mentioned Brainchip was working with Toshiba, Cisco and (I forget the third company) but they amended their release and Brainchip said they didn't release the approved message.

Since then, Cisco has been on the radar, especially when our CEO has mentioned telecom: not handsets or 4G, but potentially conference-calling devices and networking.
All our ears are pricked for mentions of AI and Cisco.
 
  • Like
  • Fire
  • Thinking
Reactions: 32 users