BRN Discussion Ongoing

robsmark

Regular
Hi Rob

It's a shame we didn't get to meet. I'll introduce myself at the next one!

Cheers
Yeah I’m sorry we never spoke - look forward to catching up at the next one though.
 
  • Like
Reactions: 5 users

Iseki

Regular
Hi @dippY22 and others

Whether AKIDA will be involved in in-cabin monitoring in 2022/23 via Nviso, Valeo or Mercedes Benz is not dependent on the technology but on ASIL certification.

The following explains ASIL:

“What is ASIL?

Definition

ASIL refers to Automotive Safety Integrity Level. It is a risk classification system defined by the ISO 26262 standard for the functional safety of road vehicles.
The standard defines functional safety as “the absence of unreasonable risk due to hazards caused by malfunctioning behavior of electrical or electronic systems.” ASILs establish safety requirements―based on the probability and acceptability of harm―for automotive components to be compliant with ISO 26262.
There are four ASILs identified by ISO 26262―A, B, C, and D. ASIL A represents the lowest degree and ASIL D represents the highest degree of automotive hazard.
Systems like airbags, anti-lock brakes, and power steering require an ASIL-D grade―the highest rigor applied to safety assurance―because the risks associated with their failure are the highest. On the other end of the safety spectrum, components like rear lights require only an ASIL-A grade. Head lights and brake lights generally would be ASIL-B while cruise control would generally be ASIL-C.
ASIL Classifications

How do ASILs work?

ASILs are established by performing hazard analysis and risk assessment. For each electronic component in a vehicle, engineers measure three specific variables:
  • Severity (the type of injuries to the driver and passengers)
  • Exposure (how often the vehicle is exposed to the hazard)
  • Controllability (how much the driver can do to prevent the injury)
Each of these variables is broken down into sub-classes. Severity has four classes ranging from “no injuries” (S0) to “life-threatening/fatal injuries” (S3). Exposure has five classes covering the “incredibly unlikely” (E0) to the “highly probable” (E4). Controllability has four classes ranging from “controllable in general” (C0) to “uncontrollable” (C3).
All variables and sub-classifications are analyzed and combined to determine the required ASIL. For example, a combination of the highest hazards (S3 + E4 + C3) would result in an ASIL D classification.

What are the challenges of ASILs?

Determining an ASIL involves many variables and requires engineers to make assumptions. For example, even if a component is hypothetically “uncontrollable” (C3) and likely to cause “life-threatening/fatal injuries” (S3) if it malfunctions, it could still be classified as ASIL A (low risk) simply because there’s a low probability of exposure (E1) to the hazard.
ASIL definitions are informative rather than prescriptive, so they leave room for interpretation. A lot of room. ASIL vocabulary relies on adverbs (usually, likely, probably, unlikely). Does “usually” avoiding injury mean 60% of the time or 90% of the time? Is the probability of exposure to black ice the same in Tahiti as it is in Canada? And what about traffic density? Rush hour in Los Angeles vs. late morning on an empty stretch of road in the Australian Outback?
Simply put, ASIL classification depends on context and interpretation”
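
To make the S/E/C combination a little more concrete, here is a rough sketch of how the three classes are commonly rolled up into an ASIL. This is my own illustration, not something from the article or from ISO 26262 itself, so treat it as a shorthand for the standard's determination table rather than the table itself:

```python
# Illustrative shorthand only; the authoritative mapping is the determination table in ISO 26262-3.
# S: severity (0-3), E: exposure (0-4), C: controllability (0-3).
def asil(severity: int, exposure: int, controllability: int) -> str:
    """Roll S/E/C classes up into an ASIL, or QM ('quality managed', i.e. no ASIL required)."""
    if 0 in (severity, exposure, controllability):
        return "QM"  # an S0, E0 or C0 rating means the hazard attracts no ASIL
    total = severity + exposure + controllability
    return {7: "ASIL A", 8: "ASIL B", 9: "ASIL C", 10: "ASIL D"}.get(total, "QM")

# The article's first example: highest hazards across the board.
print(asil(severity=3, exposure=4, controllability=3))  # -> ASIL D
# The article's second example: severe and uncontrollable, but rarely encountered.
print(asil(severity=3, exposure=1, controllability=3))  # -> ASIL A
```

Both examples land where the article says they should, which is really all the sketch is meant to show: the rating is driven as much by exposure as by severity.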

Confused? So am I, though perhaps confused is not exactly the right term. There are many, many industries, and regulation is an industry where the only way to not be confused is to actually work in it. Our recent discussion about the ASX Rules is a case in point. (And by the way, this is the simplest explanation I have found.)

So where does that take us? Well, to the presentation by Anil Mankar at the 2021 AI Field Day and the answers he gave when he was asked the following questions:

“Audience: Are there any car manufacturers using your chips today?

Anil Mankar: They are evaluating technologies; we are developing some Lidar data set applications for them. They will probably... they may not use this current chip, because this current chip is not ASIL compatible and things like that, but we expect that, once they are happy with the network we are working with them on, they might ask us, or they might ask one of their suppliers, to develop a car-certified, ASIL-certified version, as we expect it to be embedded into the chips that are already going into the car. There are lots of companies selling camera chips into the car, so there is no reason why they can't take our IP and do it all ASIL compatible and all that, but this current chip that we are developing at 28 nanometer is not certified for that. They are using this for testing all the networks and evaluating power performance, and once they are happy we expect them, either them or their vendors, to be IP customers for us.
(AN ASIDE: Clearly Anil Mankar in the last part is referencing Mercedes Benz, who are certainly happy, and so he has flagged an IP licence either with them or with a Valeo, for instance.)

Audience: Okay so you’re in development now but you’re not yet certified, is there a roadmap for that certification? Can you even ballpark a date or you don’t want to talk about it?

Anil Mankar: Actually we don't have plans to be. The customers we are working with are already ASIL certified to be in car, like the camera chip guys, the ultrasound and Lidar guys, so we'll depend on them, because automotive certification and all that will be a long process, and we're not trying to be a big manufacturer of ICs; our focus is to enable AI in all of the applications by supplying the IP. (AN ASIDE: Clearly the camera, ultrasound and Lidar guys include Valeo.)

Audience: Thank you.”


So where does this leave us? Nviso is demonstrating on a non-ASIL-certified AKD1000 chip. Nviso does not make chips. Brainchip does not make chips. They have to rely upon a third party for the chip and the ASIL certification. This third party will already be in automotive, either as a vehicle manufacturer or as an OEM like Valeo, Renesas or MegaChips.

I believe, having seen what Nviso and Brainchip can do, that monitoring a driver simply for fatigue definitely would not require the full 80 nodes of AKIDA IP neural fabric, nor all of Nviso's algorithms.

So time will tell whether BRN and Nviso are doing this from the 2022/2023 commencement date, but I am firm in my belief that the performance and power savings offered by AKIDA, which so impressed Mercedes Benz, will mean that as the switch to EVs develops apace it is not 'if' but 'when' their astonishing combination is adopted and becomes ubiquitous.

My opinion only DYOR
FF

AKIDA BALLISTA
 

Iseki

Regular
Is it clear that we'd need ASIL if we're a part of the user cabin comfort/environment? If we're just going down the IP route, I'm wondering who Merc will choose to fabricate the chips that they'll inevitably need. With any luck they might be doing this already, and this may have been one of the motivating factors for us to go IP-only and be included in more than one chip.
 
  • Like
Reactions: 4 users

Bravo

If ARM was an arm, BRN would be its biceps💪!
  • Haha
  • Love
  • Like
Reactions: 18 users
Akida ubiquitous? Who are you? Harry Potter?

Got a good ring to it and maybe a rebirth for poor old discarded Ken

1653705908512.png
 
  • Like
  • Haha
  • Fire
Reactions: 17 users

Violin1

Regular
  • Haha
  • Fire
  • Love
Reactions: 9 users
I have been giving the vote against Peter van der Made a great deal of thought and looking at the numbers.

Why did I give it so much thought? Well, because a poster here expressed remorse about not having bothered to vote.

I have therefore decided we need to start now and marshal the retail vote for next year's AGM, for two reasons.

The first is we need to let Peter van der Made know that he is appreciated and fully supported by retail for his genius and generosity in allowing us to share in his creation and in due course his wealth.

The second is to let the party or parties that engaged in this display of strength know that retail will have none of this nonsense in future. I have calculated based on the figures in the Annual Report the following:

1. Retail holders in the category of less than 100,000 shares have total voting rights of 388,154,326

2. Retail holders in the category of more than 100,000 shares and less than 5,800,000* have total voting rights of 575,226,045
(* 5,800,000 is the last holder in the top 20 list)

3. Peter van der Made received 238,100,194 votes

4. This means as a starting point there were at least 105,054,132 votes by retail investors with less than 100,000 shares not voted.

5. Of the 575,226,045 shares above 100,000 and below 5,800,000, I am allowing that 50% are held by institutions of one form or another, which leaves a further 286,113,022 shares at least which could have been voted in favour of Peter van der Made.

6. This means that, had all of retail voted for Peter van der Made, he would have received a total vote of at least 629,267,384 votes, the most of any Director.

Now, retail investors should note that to stop any takeover all you need is a blocking vote of 25% of the shares entitled to vote, and it just so happens that retail investors, at 629,267,384 shares (which is a minimum), well exceed 30% of the shares on issue and can decide whether a takeover offer is accepted. Had retail voted for Peter van der Made in these numbers, a clear message would have been sent to Brainchip about the importance of the retail vote.


I do not say this to rabble-rouse but simply to make the point that we retail have much more power than most understand, and all companies rely upon this lack of understanding by retail most of the time.

As a retail shareholder I am not suggesting that retail should in any way control the day-to-day operations of the company, but retail shareholders should be properly respected by all parties, including institutions and those with 100-million-share voting blocks who are trying to kick some shins for whatever vested stakeholder reason/s.

Brainchip is a public company. Retail shareholders with one share have the same rights as someone with 100 million shares but they need to vote to have their position as a retail shareholder respected and acknowledged.

I will leave it to you all to ponder but in this electronic age retail should be able to come together as a block and vote as a block to ensure that no individual or institutional investor with six times less voting power can manipulate the company to their ends behind the scenes.

My opinion only so do your own research and remember my maths is poor sometimes so DYOR
FF

AKIDA BALLISTA
 
  • Like
  • Love
  • Fire
Reactions: 97 users
  • Haha
  • Like
Reactions: 11 users

Dhm

Regular
View attachment 7921
That chip is Akida within the Nviso occupant system.

Thanks @chapman89 for this link. I went there for a 3pm meeting with Tim and Colin from Nviso. For much of the time I was the only shareholder there, and Tim gave me a great summation of Nviso's in-car driver and passenger observation protocol. Tim's explanation was in line with the previous video published here, but the ability to learn within the cabin of the car sets this system apart. Tim said that with Akida included in the package the power usage was just 1 watt, while the alternative, also being used, gobbled up 10 watts, and the case the alternative sat in was quite hot as a result of the extra wattage. Akida in this example uses 60 frames per second but could be ramped up to 1000 FPS given the need. Akida learns on site, and Tim explained that the driver could inform Akida of an event that needs to be learnt from. I'm not sure how this would work in practice, because it implies the driver is computer savvy and would understand how to pass the information on. As a sidebar, I remember reading somewhere, most probably here, that Mercedes Benz boffins understand that the vast majority of drivers don't appreciate the amount of help the car's AI can give them, and Mercedes are training the AI within the car to recognise this and to suggest or coax better use of the abilities the driver isn't aware of. That observation rings true with me and my Tesla, as I don't understand some of the onboard features that would be useful for me to know.

View attachment 7922
Then it was off to meet Colin Mason, Global Head of Customer Engagement at Nviso, who introduced a Panasonic robotty thing with big eyes, a cute tail and a grey knitted cover. The robot is being positioned as very useful for older, single women in Japan. It doesn't follow you around, but seems to vie for your attention. The current model can be programmed to turn on your TV at the appropriate time and also, for example, to remind you to take your pills and other important things. This current robot doesn't have Akida in it, but we agreed that for both power effectiveness and more efficient on-chip learning Akida would most probably be in a later version; in this version battery size and storage are quite limiting. Colin admitted to me that he, amongst others, tells clients like Panasonic all about Brainchip and its elevated abilities. Tim and Colin are very excited about Brainchip's future and are doing their best to 'sell the Akida story' to clients.

And what a story it is.
View attachment 7923
One other thing I forgot to mention about driver monitoring: Tim explained that there are various levels of monitoring. The basic level is just driver awareness, suitable for cheaper cars like perhaps a Toyota Corolla. The intermediate level takes on a more personal touch with expression monitoring of all occupants, and the more expensive version covers just about everything under the sun, like seatbelt connectivity, temperature of occupants, heartbeat (really!) and also precise positioning of front-seat occupants, so that in the unlikely event airbags need to be deployed, the deployment can be instantaneously tweaked if someone is slightly out of position. The difference between these three options is quite a bit, so Toyota will possibly have a range of cars to suit all levels.

Tim was well aware of the US Congress telling all US car manufacturers that a driver monitoring law is in place for 2025/26. Basically, what Nviso has now would more than tick that box. He also said that the future market is very large, and there will be room for a number of competitors.

Hopefully with most of them carrying Brainchip IP.
 
  • Like
  • Love
  • Fire
Reactions: 48 users

Zedjack33

Regular
  • Haha
  • Fire
Reactions: 5 users

BaconLover

Founding Member
Great idea FF

I voted on my personal accounts, but was not able to do it on my Super account. I know there are a few holders here with BRN in their Super; I'll contact them on Monday and see if we'd be eligible.
Knowing those holdings are structured slightly differently, we might not be able to, but it's worth a try.

Also, if you haven't already, please register online with the "Boardroom" investor centre; it makes it much easier to vote.
 
  • Like
  • Love
  • Fire
Reactions: 20 users

Bravo

If ARM was an arm, BRN would be its biceps💪!
Is this proof that Cerence's hybrid approach incorporates AKIDA for the embedded/edge features? Take a look at the 1:30 mark, where the embedded system is discussed, and at around the 2:00 mark, when Daniel says that it works very fast even when connectivity is bad.

Screen Shot 2022-05-28 at 1.50.14 pm.png

Screen Shot 2022-05-28 at 1.42.45 pm.png
  • Like
  • Fire
  • Wow
Reactions: 16 users
Hi @Diogenese, @Fact Finder or anyone else tech-savvy - do you know much about Digital Twins? Can Akida be utilised via sensors?

From IBM

How does a digital twin work?

A digital twin is a virtual model designed to accurately reflect a physical object. The object being studied — for example, a wind turbine — is outfitted with various sensors related to vital areas of functionality. These sensors produce data about different aspects of the physical object’s performance, such as energy output, temperature, weather conditions and more. This data is then relayed to a processing system and applied to the digital copy.
Once informed with such data, the virtual model can be used to run simulations, study performance issues and generate possible improvements, all with the goal of generating valuable insights — which can then be applied back to the original physical object.
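
As a rough sketch of what that loop looks like in code (my own simplification, with invented turbine attributes and a made-up threshold, not anything from IBM):

```python
# Bare-bones sketch of the loop IBM describes: sensor readings from the physical asset
# are applied to a virtual copy, and the copy is then used to run a simple check or simulation.
class DigitalTwin:
    def __init__(self, asset_id: str):
        self.asset_id = asset_id
        self.state = {}                      # the virtual mirror of the physical object

    def apply_reading(self, sensor: str, value: float) -> None:
        """Relay one sensor measurement from the physical asset onto the digital copy."""
        self.state[sensor] = value

    def simulate_health(self) -> str:
        """Toy 'simulation': flag the asset if the mirrored temperature looks unsafe."""
        if self.state.get("gearbox_temp_c", 0.0) > 85.0:   # 85 C limit is invented
            return f"{self.asset_id}: schedule maintenance"
        return f"{self.asset_id}: operating normally"

twin = DigitalTwin("wind-turbine-07")
twin.apply_reading("energy_output_kw", 1450.0)
twin.apply_reading("gearbox_temp_c", 91.3)
print(twin.simulate_health())                # -> wind-turbine-07: schedule maintenance
```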



Article from google search “digital twin edge ai”

Why do digital twins need the edge?

Craig Beddis, CEO and co-founder of Hadean, explains why digital twins need the edge in order to truly prosper
Why do digital twins need the edge? image

Centralising digital twin technology comes with challenges that the edge can mitigate.
Digital twins have evolved to become all encompassing digital replicas of anything from a single object to an entire industrial process. They incorporate and synthesise multiple layers of information to become reactable in the same way as their physical counterparts. However, creating these digital twins with the highest level of detail requires reflecting the volatile nature of their properties. Supply chains can be disrupted by changing availability of resources; factory processes can be affected by temperature and pressure. Our physical world is never static, and how things change is an essential part of their makeup. There are factors involved that are time sensitive and so representing these entities to highest detail requires simulating their constant flux of change.

Cloud infrastructure has served to provide an excellent platform on which the disparate data sources can be unified and synthesised. But the centralised nature of the cloud acts as a double-edged sword, as sending and analysing data to a remote location can result in latency issues. When there's a sluggishness to this, time-sensitive data, entities or processes risk being shown inaccurately. Crucial business moments can be missed, batches of product can be ruined and energy can be wasted, resulting in high costs. The advantages of cloud infrastructure are far too great to give up, but a solution to its latency problem is absolutely essential to allow digital twins to fulfill their huge potential. This is where edge computing is primed to become the next crucial piece of the puzzle.

This relationship was recently described by Gartner: “Centralised cloud was the latest swing toward centralised environments. It focused on economies of scale, agility, manageability, governance and delivery of platform services. Even so, centralising services to a cloud environment, or to other types of core infrastructure, omits some capabilities that are quite desirable. Deployments at the edge provide additional capabilities that complement the capabilities provided by the core infrastructure.”

Edge servers process data closer to the source reducing the latency issues that occur when sending data to the cloud. This new topography removes delays created by physical distance. The processing is enabled primarily through edge network data centers. These are servers that are scattered more frequently across the world compared to cloud ones, which are often placed in more remote locations for the lower cost. Devices local to the edge servers can then connect and send and receive data without having to communicate with the further placed cloud servers.

Say, for example, you have a sensor on a valve in a factory. The sensor collects data on the pressure and temperature, which it can then send for analysis. High latency can present an issue if the valve needs to be adjusted. If the pressure is too high, and the valve is not adjusted in time, then an explosion could occur. Safety concerns such as this are key, but it also extends to optimisation of processes as well. By adjusting the valve according to real-time data, energy usage could be reduced during downtimes. By connecting to edge servers, the valve control can adjust much more quickly.
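
To put that valve example into something runnable, here is a minimal sketch of the split being described: the safety decision is taken locally at the edge, and only a periodic summary travels to the cloud. The sensor and actuator functions, the 8 bar limit and the reporting interval are all invented for illustration, not taken from any real deployment.

```python
import random

# All values and function bodies below are hypothetical, purely to illustrate the edge/cloud split.
PRESSURE_LIMIT_BAR = 8.0     # invented safety threshold
REPORT_EVERY = 60            # send one summary upstream per 60 samples

def read_pressure() -> float:
    """Stand-in for the real valve sensor driver."""
    return random.uniform(6.0, 9.0)

def adjust_valve(excess: float) -> None:
    """Stand-in for the local actuator call; note there is no round trip to the cloud."""
    print(f"edge: opening valve to relieve {excess:.2f} bar of excess pressure")

def send_summary(samples: list) -> None:
    """Only an aggregate leaves the site, so cloud latency never sits in the safety loop."""
    print(f"cloud: avg {sum(samples) / len(samples):.2f} bar over {len(samples)} samples")

buffer = []
for tick in range(180):                  # three simulated minutes at one sample per second
    pressure = read_pressure()
    buffer.append(pressure)
    if pressure > PRESSURE_LIMIT_BAR:    # time-critical decision taken at the edge
        adjust_valve(pressure - PRESSURE_LIMIT_BAR)
    if (tick + 1) % REPORT_EVERY == 0:   # non-urgent reporting goes to the cloud
        send_summary(buffer)
        buffer.clear()
```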


Though a huge opportunity, creating competent edge networks to support digital twins comes with its own set of challenges. Firstly, the sheer number of these kinds of IoT devices has led to a large amount of noise. That is to say, managing all the data supplied by them can be overwhelming. Much of the information varies in its relevance and value, so any edge infrastructure has to provide a system that can interpret it effectively. A network relevancy solution might address this by using spatial data structures to efficiently query which information is relevant to each client. Entities found using these queries are scheduled to be sent to the client based on metrics which correspond to the importance of that entity, enabling less important entities to be sent less frequently and reduce bandwidth.

IoT devices making use of edge computation have been a crucial part of building realistic digital twins. Simulation infrastructure needs, however, to include a sophisticated networking model to connect them all. Without one, systems can be prone to crashing when they have a limited number of active connections. Having an asynchronous architecture can deal with this problem effectively. It handles the actions of thousands of devices and distributed load balancing ensures the network always has enough CPU to handle large influxes. This eliminates the need of having a single thread per device and handles the control events sent by each one, forwarding them back to the simulation – the events are then reconstructed into a complete world state and then stored in a data structure.

With so much data being shared, it’s also essential that edge networks can deal with surges in demand effectively. A distributed load balancing system would ensure that the network always has enough CPU to handle large influxes without crashing. On top of this, the system needs to deal with the logging, analysis and debugging of processes within the network. A network visualiser, for instance, can provide enormous amounts of information about the connection between them. Not just latency and bandwidth, but also detailed statistics, such as the lost packets, the window sizes or the time since the last send or receive.

The importance of the edge in practical terms cannot be overstated. Speaking on edge computing’s benefits in healthcare, Weisong Shi, professor of computer science at Wayne State University, said: “By enabling edge computing, crucial data can be transmitted from the ambulance to the hospital in real time, saving time and arming emergency department teams with the knowledge they need to save lives.”

Creating the complex and dynamic digital twins of today requires reflecting the volatile nature of our modern systems. Cloud computing offers accessibility to high level processing power, and edge computing is primed to fill in the gaps.


Written by Craig Beddis, CEO and co-founder of Hadean
 
  • Like
  • Fire
  • Love
Reactions: 21 users
I would have thought that there must be a provision to allow you to direct the trustee of the super fund to vote your shares the way you want them voted. It would probably be annoying for the trustees to have to manage the directions, but if you start your enquiries now you will be able to check with the regulatory body to make sure the information you receive from your fund is correct.

Regards
FF

AKIDA BALLISTA
 
  • Like
Reactions: 10 users

Bravo

If ARM was an arm, BRN would be its biceps💪!
Here is a description of the Edge Software Component from the Cerence prospectus of 24 October 2019, so it looks like it isn't necessarily Akida after all, unless I'm missing something.


Screen Shot 2022-05-28 at 2.02.37 pm.png

 
  • Like
  • Love
Reactions: 7 users
I have read other articles about digital twins, one related to space vehicles, and there is absolutely no reason why AKIDA technology would not be considered routinely for this purpose.

The whole issue is bandwidth: once you have your model created, you want a continuous stream of relevant data so that your model is facing the same conditions as the original, in as close to real time as can be provided.

AKIDA processes all the input at the sensor and, in doing so, would only send metadata about the things that are changing, if indeed a change is occurring. This reduces the transmission to only relevant, actionable data, so two savings are achieved (see the sketch after this list):

1. Shorter, smaller metadata packets, imposing minimal impact on the available bandwidth, and no queuing of compressed data waiting for a transmission slot, which can cause a bottleneck that requires resetting the system.

2. What arrives at the digital model does not have to be decompressed and processed first, as compressed data does, before the relevant data is available for the model to respond to.
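
By way of illustration only, here is a small sketch of that 'send only what changed' idea. The sensor names, the tolerance and the uplink function are all made up; the point is simply that an unchanged scene costs nothing to transmit, and a real change costs only a few bytes of metadata.

```python
import json

TOLERANCE = 0.5              # made-up threshold for "has this value really changed?"
last_sent = {}               # what the digital model already knows about

def changed_fields(reading: dict) -> dict:
    """Keep only the values that moved beyond the tolerance since the last transmission."""
    delta = {name: value for name, value in reading.items()
             if abs(value - last_sent.get(name, float("inf"))) > TOLERANCE}
    last_sent.update(delta)
    return delta

def transmit(delta: dict) -> int:
    """Pretend uplink: returns how many bytes actually left the device."""
    if not delta:
        return 0             # nothing changed, nothing queued, nothing sent
    return len(json.dumps(delta).encode())

# Three consecutive readings from a hypothetical sensor suite.
baseline = {"rpm": 1200.0, "temp_c": 61.2, "vibration": 0.03}
print(transmit(changed_fields(baseline)))                        # first reading: all fields are new
print(transmit(changed_fields({**baseline, "temp_c": 61.3})))    # within tolerance: 0 bytes sent
print(transmit(changed_fields({**baseline, "temp_c": 64.0})))    # real change: only temp_c goes out
```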

My opinion only DYOR
FF

AKIDA BALLISTA
 
Last edited:
  • Like
  • Fire
Reactions: 12 users

Diogenese

Top 20
In the old days, before it was digitized, we used to call it telemetry.

Think of the dials on your car's dashboard (petrol gauge, temperature, rpm, speed, oil, ...) hooked up to the internet via WiFi, so the data can be displayed on a remote computer screen.

So now they're doing it with electric windmills - maybe with downstream controls like, eg, the ability to feather the blades etc. You could even reduce the output when there is too much power being generated and the storage batteries are full.
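
If it helps to picture it, here is a toy version of that round trip, with all the names and numbers invented for illustration: the turbine publishes its 'dashboard' readings, and the remote end can push a control back, such as feathering the blades once the storage batteries are full.

```python
from dataclasses import dataclass

# Attributes and thresholds are invented purely to illustrate the idea.
@dataclass
class TurbineTelemetry:
    """The 'dashboard dials' for one windmill as they would appear on a remote screen."""
    rpm: float
    output_kw: float
    battery_pct: float

def decide_control(reading: TurbineTelemetry) -> str:
    """Hypothetical downstream control: feather the blades when storage is already full."""
    if reading.battery_pct >= 95.0 and reading.output_kw > 100.0:
        return "FEATHER_BLADES"
    return "NO_ACTION"

# One sample arriving over the (imagined) WiFi link.
sample = TurbineTelemetry(rpm=14.2, output_kw=180.0, battery_pct=97.5)
print(decide_control(sample))   # -> FEATHER_BLADES
```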
 
  • Like
  • Fire
  • Love
Reactions: 20 users
There is a known known, and that is that software solutions are slower and more power hungry than hardware solutions. AKIDA is the hardware solution that Mercedes Benz has stated is five to ten times more efficient than Cerence's software solution running on the CPU or GPU in an old-fashioned system.

Power consumption is the issue in Electric Vehicles. Electric Vehicles are coming like a steam roller as the world has made them compulsory. AKIDA will be essential and Mercedes Benz knows it and has said so by lauding AKIDA's role in the EQXX.

If Cerence sits there watching AKIDA do what it does and does not come on board, it will end up like Kodak, selling to those who kept their old film cameras in an ever-diminishing marketplace.

There are a lot of petrol and diesel vehicles still to be built between now and 2030 that have no issues with the electrical power needed to drive the on-board compute and the air conditioning that keeps everything cool, and some manufacturers will lag behind because they are still getting orders, but they must change their ideas if they are going to remain relevant.

I am reminded of a line from an old romantic song "our day will come".

My opinion only DYOR
FF

AKIDA BALLISTA
 
  • Like
  • Love
  • Fire
Reactions: 27 users
So Digital Twins can be used for early diagnostics? Ties in with the below?

025D4149-9847-43CF-B26E-6F47C3E2A13C.jpeg


 
  • Like
  • Love
  • Fire
Reactions: 26 users
Perhaps, with the business focus moving from the chip to IP, the emphasis needs to change.
Maybe BrainchIP.
I think they should change the Company's name to just AKIDA.

"Everything you need in A.I. Is in the name"

Getting away completely from a "chip" reference would be better marketing, in my opinion.

I can see them continually having to tell new customers, "No, we just supply the IP..."

Plus, it would really help the value of my Rego plates 😉
 
Last edited:
  • Like
  • Haha
  • Love
Reactions: 28 users