BRN Discussion Ongoing

  • Haha
  • Like
  • Love
Reactions: 17 users

IloveLamp

Top 20
1000016046.jpg
 
  • Like
  • Fire
  • Wow
Reactions: 18 users

Bravo

If ARM was an arm, BRN would be its biceps💪!

SpiNNaker-Based Supercomputer Launches in Dresden​

By Sally Ward-Foxton 05.28.2024


A new neuromorphic supercomputer is claiming the title of world’s biggest. University of Dresden spinout SpiNNcloud, formed to commercialize technology based on a second generation of Steve Furber’s SpiNNaker neuromorphic architecture, is now offering a five-billion neuron supercomputer in the cloud, as well as smaller commercial systems for on-prem use. Among the startup’s first customers are Sandia National Labs, Technische Universität München and Universität Göttingen.
The first generation of the SpiNNaker architecture, an academic project led by Arm architecture co-inventor Steve Furber, was created 10 years ago and is used in more than 60 research groups in more than 23 countries today. The second generation of SpiNNaker architecture, SpiNNaker2, is substantially different to the first, SpiNNcloud co-CEO Hector Gonzalez told EE Times.



“We don’t have a bottom-up approach, where you try to encode every single synapse of the brain into silicon,” he said. “We have an approach that is more practical. We follow inspiration from the brain where we believe it makes sense, where we see tangible effects on efficient compute.”
Gonzalez calls SpiNNaker2’s architecture a hybrid computer—combining acceleration for three different types of workloads, the intersection of which SpiNNcloud thinks will be the future of AI. These workloads are: brain-inspired spiking neural networks, practical application-inspired deep neural networks and symbolic models—which provide reliability and explainability.


Spiking neural networks (SNNs) mimic the brain’s dynamic sparsity for the ultimate energy efficiency. Deep neural networks (DNNs), which form the bulk of mainstream AI today, are excellent learners and very scalable, but less energy efficient and are sometimes criticized for being a “black box”—that is, they are not explainable. Symbolic models, formerly known as “expert systems,” have a rule-based backbone that makes them good at reasoning, but they have limited ability to generalize and adapt to other problems. In the SpiNNaker context, symbolic models provide explainability and can help make AI models more robust against phenomena like hallucination.
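The dynamic sparsity the article attributes to SNNs can be seen in a minimal leaky integrate-and-fire (LIF) neuron: downstream work happens only when the neuron actually fires. This is an illustrative sketch, not SpiNNaker code; the leak and threshold values are arbitrary.

```python
# Minimal leaky integrate-and-fire (LIF) neuron. Output is produced only
# when the membrane potential crosses a threshold, so an event-driven
# system sits idle between spikes -- the "dynamic sparsity" described above.
def lif_run(inputs, leak=0.9, threshold=1.0):
    """Return the spike train produced by a stream of input currents."""
    v = 0.0
    spikes = []
    for current in inputs:
        v = v * leak + current   # leaky integration of input
        if v >= threshold:       # fire and reset
            spikes.append(1)
            v = 0.0
        else:
            spikes.append(0)
    return spikes

print(lif_run([0.5, 0.5, 0.5, 0.0, 2.0]))  # -> [0, 0, 1, 0, 1]
```

With this sample input, only two of five timesteps produce a spike, so an event-driven fabric would do no work on the other three.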

Future AI models will combine all three disciplines, making systems that can generalize their knowledge, be efficient and behave intelligently, per DARPA’s definition of the “third wave of AI,” Gonzalez said. SpiNNcloud is working with various groups of researchers on this. Possibilities include DNN layers for feature extraction followed by spiking layers, for example.
“This type of architecture enables things you wouldn’t do with traditional architectures because you cannot embed the event-based [properties] into the standard cascaded processors you have with traditional architectures,” he said. “So this enables entirely new fields.”
“We have the potential to deploy applications in these three fields and particularly at the intersection we have the capacity to deploy models that cannot be scaled up in standard hardware,” he added.
Gonzalez’s example of a neuro-symbolic workload, NARS-GPT (short for non-axiomatic reasoning system), is part-DNN with a symbolic engine backbone. This combination outperformed GPT-4 in reasoning tests.
“The trouble with scaling up these models in standard architectures is that DNN accelerators often rely on tile-based approaches, but they don’t have cores with full programmability to implement rule-based engines for the symbolic part,” he said. By contrast, SpiNNaker2 can execute this model in real time.
NARS-GPT, which uses all three types of workloads SpiNNaker2 is designed for, outperformed GPT-4 in reasoning. (Source: SpiNNcloud)
Other work combining SNNs and symbolic engines includes SPAUN (semantic pointer architecture unified network) from the University of Waterloo. The connectivity required is too complex to execute in real time on GPUs, Gonzalez said.
Practical applications that exist today for this type of architecture include personalized drug discovery. Gonzalez cites work from the University of Leipzig, which deploys many small models that talk to each other over SpiNNaker’s high speed mesh. This work is aiming to enable personalized drug discovery searches.
“Standard architectures like GPUs are overkill for this application because the models are quite small, and you wouldn’t be able to leverage the huge parallelism you have in these small constrained [compute] units in such a highly parallel manner,” he said.
Optimization problems also suit SpiNNaker’s highly parallel mesh, Gonzalez added, and there are many applications that could use an AI that does not hallucinate. Smart city infrastructure can use its very low latency, and it can also be used for quantum emulation (the second generation architecture has added true random number generation to each core for this).

In-house accelerators

The SpiNNaker2 chip has 152 cores connected in a highly parallel, low power mesh.
Each core has an off-the-shelf Arm Cortex-M microcontroller core alongside in-house designed native accelerators for neuromorphic operators, including exponentials and logarithms, a true random number generator, and a MAC array for DNN acceleration.
A lightweight network on-chip is based on a GALS (globally asynchronous, locally synchronous) architecture, meaning each of the compute units behaves asynchronously but they are locally clocked. This mesh of compute units can be run in an event-based way—activated only when something happens.
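The “activated only when something happens” behavior can be sketched as a toy event queue, where a core consumes cycles only when a message arrives for it. This is a hypothetical model for illustration, not SpiNNcloud's software stack.

```python
import heapq

# Toy event-driven mesh: a core runs only when an event (a message or
# spike) is delivered to it, mirroring the event-based operation of the
# GALS mesh described above. Hypothetical sketch only.
def run_events(events, handlers):
    """events: (time, core_id, payload) tuples; handlers: core_id -> fn.

    Each handler may emit new events; returns the order cores woke up."""
    queue = list(events)
    heapq.heapify(queue)
    wakeups = []
    while queue:
        t, core, payload = heapq.heappop(queue)
        wakeups.append(core)  # this core is active only now
        for new_event in handlers[core](t, payload):
            heapq.heappush(queue, new_event)
    return wakeups

# Core 0 forwards each event to core 1 one tick later; core 1 consumes it.
handlers = {0: lambda t, p: [(t + 1, 1, p)], 1: lambda t, p: []}
print(run_events([(0, 0, "spike")], handlers))  # [0, 1]
```

Cores that never receive an event never appear in the wake-up list, which is the whole point of event-based activation.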
SpiNNaker2 cores, based on Arm Cortex-M cores plus additional acceleration, are connected in a mesh. Cores can be switched off when not in use to save power. (Source: SpiNNcloud)
A custom crossbar gives the Cortex-M cores and their neighbors access to memory in each of the nodes. SpiNNcloud has designed partitioning strategies to split workloads across this mesh of cores.
The true random number generator, SpiNNcloud’s patented design, samples thermal noise from the PLLs. This is exploited to produce randomness that can be used for neuromorphic applications (e.g. stochastic synapses) and in quantum emulation.
The chip uses an adaptive body biasing (ABB) scheme called reverse body bias based on IP developed by Racyics, which allows SpiNNcloud to operate transistors as low as 0.4 V (close to sub-threshold operation) to reduce power consumption while maintaining performance.
The company also uses a patented dynamic voltage frequency scaling (DVFS) scheme at the core level to save power. Cores can be entirely switched off if not needed, inspired by the brain’s energy-proportional properties.
“Brains are very efficient because they are energy proportional—they only consume energy when it’s required,” he said. “This isn’t just about spiking networks—we can do spiking networks, but this is about taking that brain inspiration to different levels of how the system operates its resources efficiently.”
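The energy-proportional idea (per-core DVFS, with idle cores gated off entirely) can be modeled with a toy power function. The power figures here are invented for illustration; they are not SpiNNaker2 measurements.

```python
# Toy model of energy-proportional operation: each core's load maps to a
# power state, and fully idle cores are switched off rather than left in
# a low-power state. Power values are arbitrary illustration units.
POWER = {"off": 0.0, "low": 1.0, "high": 4.0}

def mesh_power(core_loads, high_threshold=0.5):
    """Sum the power draw of a mesh given each core's load (0..1)."""
    total = 0.0
    for load in core_loads:
        if load == 0.0:
            state = "off"            # idle cores are fully gated off
        elif load < high_threshold:
            state = "low"            # light load -> lower voltage/frequency
        else:
            state = "high"
        total += POWER[state]
    return total

print(mesh_power([0.0, 0.1, 0.9, 0.0]))  # only 2 of 4 cores draw power -> 5.0
```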
The SpiNNcloud board has 48 SpiNNaker2 chips. Ninety of these boards fit into a rack, with a full 16-rack system comprising 69,120 chips. (Source: SpiNNcloud)
SpiNNcloud’s board has 48 SpiNNaker2 chips, with 90 boards to a rack. The full Dresden system will be 16 racks (69,120 chips total) for a total of 10.5 billion neurons. Half of that, five billion neurons, has been deployed so far; it can achieve 1.5 PFLOPS (32-bit, using Arm cores), and 179 PFLOPS (8-bit, using MAC accelerators). Theoretical peak performance per chip is 5.4 TOPS, but realistic utilization would mean around 5.0 TOPS, Gonzalez said.
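The system figures quoted above are internally consistent, as a quick back-of-envelope check shows. All inputs are taken from the article; the per-core split is our own estimate, not a SpiNNcloud figure.

```python
# Sanity-checking the quoted system figures (article numbers).
chips_per_board = 48
boards_per_rack = 90
racks = 16

total_chips = chips_per_board * boards_per_rack * racks
print(total_chips)  # 69120, matching the article

# 10.5 billion neurons over the full system works out to roughly
# 152k neurons per chip, i.e. about 1,000 per core (152 cores/chip).
neurons_per_chip = 10.5e9 / total_chips
neurons_per_core = neurons_per_chip / 152
print(round(neurons_per_chip), round(neurons_per_core))  # 151910 999
```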
Chips on the board can communicate with each other in the order of a millisecond, even at large scale. The full-size system has chips connected in a toroidal mesh for the shortest possible communication paths between chips (this has been optimized based on research from the University of Manchester).
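The appeal of a toroidal mesh is that wrap-around links cap the worst-case hop count at half of each dimension, versus nearly the full dimension on a plain grid. A minimal sketch, with made-up mesh dimensions rather than the real system topology:

```python
# Shortest hop count between two nodes on a 2-D torus: each dimension can
# wrap around, so the distance per dimension is the smaller of the direct
# path and the wrap-around path. Dimensions are hypothetical.
def torus_hops(a, b, width, height):
    dx = abs(a[0] - b[0])
    dy = abs(a[1] - b[1])
    return min(dx, width - dx) + min(dy, height - dy)

# On a 10-wide torus, opposite-edge nodes are 1 hop apart, not 9.
print(torus_hops((0, 0), (9, 0), width=10, height=4))  # 1
```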
SpiNNcloud’s Dresden supercomputer is available for cloud access, while the first production run for commercial customer systems will be in the first half of 2025.



 
  • Like
  • Love
  • Fire
Reactions: 10 users
  • Like
  • Haha
  • Love
Reactions: 8 users

Tony Coles

Regular
We get a tag here, and our logo is hiding behind the chair on the partner wall.....check out the other partners......
View attachment 63922
Hiding behind the chair, it's called NDA! 🤣🤣🤣
 
  • Haha
  • Like
Reactions: 16 users
Edge Impulse & Brainchip are in stealth mode here by tactfully using the chair 🤣


No, not correct, Brainchip is under an NDA (Non Disclosure Arsemount) !
 
  • Haha
  • Like
Reactions: 5 users

IloveLamp

Top 20
  • Like
  • Haha
Reactions: 5 users

7für7

Regular
  • Haha
Reactions: 1 user
  • Haha
  • Like
Reactions: 3 users

Bravo

If ARM was an arm, BRN would be its biceps💪!

For Infineon, AI Is the Key to IoT’s Potential​

  • By Mark N. Vena
  • May 28, 2024 11:50 AM PT

The internet of things (IoT) represents a transformative wave in technology, promising to revolutionize how we interact with our surroundings. The business potential of IoT devices is rapidly evolving, driven by advancements in sensor technology, expanding applications, and increasing connectivity. This evolution is unfolding so swiftly that it is challenging to grasp its implications fully.
At a high level, the IoT connects devices in homes, factories, and public spaces. Examples include refrigerators that order groceries when they run low, cars that cooperate to find parking spots, and systems that proactively maintain industrial plants remotely to prevent outages. The goal is to reduce expenses and energy usage.

More Powerful, Smaller, and Lower Cost​

One of the primary reasons for the rapid acceleration of IoT is the continuous advancement in sensor technology. Sensors are becoming smaller, more powerful, and more energy-efficient, enabling a broader range of devices to become “smart.”
These sensors collect data on various metrics, from environmental conditions to personal health indicators. Their proliferation lays the groundwork for a vast network of interconnected devices that can communicate and collaborate to perform complex tasks.
Importantly, the cost of smart sensors has decreased dramatically, making them available in low-cost and affordable products, which has increased consumers’ interest.

AI Benefits Big Data Analysis​

Another critical factor is the explosion of data generated by IoT devices. As these devices become more prevalent, they produce vast volumes of data that can be analyzed to derive valuable insights. Manufacturers and service providers can use this data to optimize processes, improve efficiency, and enhance decision-making.

For example, in manufacturing, IoT devices can monitor machinery in real time, predicting failures before they occur and scheduling maintenance proactively. This benefit reduces downtime and extends the lifespan of equipment, leading to significant cost savings.

Health Care and Smart Cities Are Big Potential Markets​

IoT devices are proving transformative in health care. Wearable devices can continuously monitor vital signs, providing users and health care providers with real-time health data. This attribute allows for personalized and proactive health care, where conditions can be detected and addressed early.
Remote monitoring can also reduce the need for hospital visits, making health care more accessible and less burdensome for patients. The benefits of IoT in health care are vast, encompassing everything from chronic disease management to emergency response systems.
Smart cities are another intriguing area where IoT is making a substantial impact. Integrating IoT devices into urban infrastructure makes cities more efficient, sustainable, and livable. Intelligent traffic management systems can reduce congestion and improve safety, while smart grids can optimize energy consumption and reduce waste.
Environmental sensors can monitor air quality and noise levels, providing data that can inform policy and improve public health. Integrating IoT in urban planning can create cities that are more responsive to their inhabitants’ needs.

IoT Benefits Extend to the Consumer Sector​

Smart homes with IoT devices offer increased convenience, security, and energy efficiency. The potential for enhancing daily life is immense, from smart thermostats that learn and adjust to user preferences to security systems that provide real-time surveillance and alerts. These devices can also be integrated with voice assistants, creating a seamless and intuitive user experience.
Despite the rapid advancements, the full potential of IoT is challenging to comprehend because it represents a fundamental shift in how we interact with technology and data. The interconnected nature of IoT devices means that their benefits are not just additive but multiplicative.
As more devices come online and begin to communicate with each other, the possibilities for innovation expand exponentially. This network effect makes it challenging to predict how IoT will transform industries and daily life.

Infineon Bets on the ‘Edge’ To Optimize IoT’s Business Potential​

Infineon Technologies is one of the leading global semiconductor manufacturers specializing in microelectronics. The company provides innovative solutions for automotive, industrial, and consumer markets, focusing on energy efficiency, mobility, and security.

While its capabilities can be broadly applied to multiple markets, its next-generation PSoC (programmable system-on-chip) Edge portfolio, which features powerful AI capabilities for consumer and industrial applications in the IoT space, is perhaps the poster child for overall industry optimism.
Manufacturers integrate configurable digital and analog circuits into PSoC devices. These devices offer improved design revision capability and reduced component count because an on-chip microprocessor manages them. They are efficient, take up less board space, use less power and lower system costs. These attributes appeal broadly to IoT device makers as win-win elements.

AI Takes These Capabilities to a Higher Level​

Increasingly, next-generation IoT edge devices require more performance without sacrificing power. Because Infineon’s new PSoC solutions use machine learning (ML) capabilities to compute alertness while balancing performance and providing integrated security for connected home devices, wearables, and numerous industrial applications, AI becomes a significant benefit.
AI enhances voice/audio sensing for activation and control, vision-based position detection, and face/object detection. As a result, IoT devices can become more intuitive, anticipative, and proactive, and less reactive, in their operation.
Examples of these enhancements are security system cameras that can discern with a high degree of confidence between external intruders and animals or HVAC systems that can avoid costly repairs by proactively signaling device failures weeks or months in advance.
In my recent interview with Infineon’s EVP of IoT, Compute, and Wireless, Sam Geha, he said AI will make legacy IoT devices look crude and archaic. “As billions more IoT devices are deployed into the market, more information must be stored, processed, and harnessed. AI will help connect our real world to the digital world, from buildings and homes to wearable consumer and fitness devices to factories and cities,” Geha remarked.

Closing Thoughts​

To be sure, Infineon’s involvement in the IoT space was strong before “internet of things” became a buzzphrase.
The company’s IoT strategy enables secure, energy-efficient, intelligent, connected devices by leveraging its expertise in microelectronics to provide comprehensive solutions, including sensors, microcontrollers, and connectivity modules.
Infineon prioritizes robust security features to protect data and privacy in IoT applications. Its approach emphasizes seamless integration and interoperability, supporting diverse applications from smart homes to industrial automation.
Moreover, Infineon has a strong “design-in” confidence reputation that puts OEMs at ease, a significant advantage given that its solutions are being integrated into products that can cost $60,000 or more, as is the case with EVs.

Geha also believes that a microcontroller approach like Infineon’s new PSoC Edge 84 family is the most cost-effective way to democratize these solutions across a broad spectrum of products that use AI and ML: wearables, smart homes, and other low-power “always-on domain” products with autonomous analog sensing supporting battery applications, such as smart locks, video doorbells, and security cameras.
“Infineon’s vision is to create capabilities in this space that can scale, make it easy for design partners to utilize its solutions over multiple generations of products, and get faster to market in a consistent and reliable manner,” Geha stated.
The IoT market is projected to grow significantly, with an estimated value of over $1 trillion by 2025. This growth is driven by the increasing adoption of IoT devices across various industries, including health care, manufacturing, and smart cities. Over the next five years, analysts expect the market to experience a compound annual growth rate (CAGR) of around 25%, highlighting its rapid expansion and transformative potential. These are not trivial data points.
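For scale, compounding the cited ~$1 trillion base at a ~25% CAGR for five years roughly triples the market. This is just arithmetic on the analyst figures quoted above, not an independent forecast.

```python
# Compound-growth arithmetic on the analyst figures cited above:
# a ~$1T base growing at ~25% CAGR over five years.
base_usd = 1.0e12
cagr = 0.25
years = 5

projected = base_usd * (1 + cagr) ** years
print(f"${projected / 1e12:.2f}T")  # about $3.05T
```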
While the overall market seems obsessed with how the likes of Apple and Google are battling over AI and its incorporation into their smartphone, PC, and tablet operating systems, Infineon has quietly played a leadership role in putting the pieces in place to optimize AI at the edge in the IoT space.
While it may not always get the headlines it deserves, Infineon continues to quietly demonstrate IoT market leadership, innovation, and credibility that few companies have achieved.





A reminder...

722D7727-CB8F-480F-A6EF-7BD1834C953A.jpeg





And, back in January Farshad Akhbari (Senior Principal Platform Architect at Infineon) was on one of our podcasts, where he said he had been involved with BrainChip for approximately one year, that he thought our technology was "very promising", and that he thinks we're "underestimating ourselves".
 
Last edited:
  • Like
  • Love
  • Fire
Reactions: 40 users

IloveLamp

Top 20




Hi @Bravo

Do you think we're involved with SpiNNaker?

Sorry, not getting the connection 🤔
 
  • Like
Reactions: 1 user

Bravo

If ARM was an arm, BRN would be its biceps💪!
  • Like
  • Love
Reactions: 5 users

TECH

Regular
Good morning...

It is important to realize/remember that most, if not all, NDAs have been instigated/stipulated as a prerequisite by Brainchip directly,
before we even start formal engagements with interested parties; that's my understanding, anyhow.
Amazon Prime Video I Have No Idea GIF by The Man in the High Castle
Tech x
 
  • Like
  • Haha
Reactions: 4 users

toasty

Regular
Good morning...

It is important to realize/remember that most, if not all, NDAs have been instigated/stipulated as a prerequisite by Brainchip directly,
before we even start formal engagements with interested parties; that's my understanding, anyhow.
Amazon Prime Video I Have No Idea GIF by The Man in the High Castle
Tech x
And your point is????
 
  • Like
Reactions: 5 users

IloveLamp

Top 20
  • Like
Reactions: 2 users

Diogenese

Top 20
AIRBUS is located in "Toulouse" 🥳

I just want to mention it again ... Toulouse !!!!!


View attachment 63910

Slow Motion Fire GIF by NowThis


https://brainchip.com/brainchip-completes-acquisition-of-spikenet/

BrainChip Completes Acquisition of Spikenet​

ALISO VIEJO, CA — (Marketwired) — 09/06/16 — BrainChip Holdings Ltd (ASX: BRN) (“BrainChip” or “the Company”) announced today it has completed the acquisition of Spikenet Technology SAS (“Spikenet”), a revenue-producing, France-based Artificial Intelligence (AI) company and leader in computer vision technology, first announced by BrainChip on June 30, 2016.
...
About Spikenet

Spikenet, based in Toulouse, France, is an “Artificial Vision Specialist”, using Spiking Neural Networks to create superior computer vision solutions. Spikenet provides tools and programs that are able to rapidly learn to recognize objects, people and anomalies. Spikenet’s current products have been sold in the security, transport, media, industrial vision, and gaming sectors. Please visit www.spikenet-technology.com to learn more.
 
  • Like
  • Fire
  • Thinking
Reactions: 12 users


“(…) Following U.S. regulators’ approval, a video originally posted on TikTok claiming that cultivated meat is produced using animal “cancerous and pre-cancerous cells” went viral on Facebook.

The person speaking in the video is Kashif Khan, the CEO and founder of The DNA Company. Khan doesn’t appear to hold any qualification in biology or medicine, but nevertheless offers advice on preventing and reversing chronic diseases through functional medicine, a practice that lacks scientific support.

The claim stemmed from the idea that a procedure known as immortalization used for producing cultivated meat turns the cells cancerous. However, this idea is incorrect. As we will show below, immortalized doesn’t equal cancerous, and there is currently no evidence indicating that using this procedure makes cultivated meat unsafe to consume (…).

To overcome this limitation and upscale production, manufacturers use a procedure called immortalization, which consists of modifying the cells genetically so they divide indefinitely. This procedure has been used for decades in biomedical research for multiple purposes, including studies on gene and protein function, vaccine production, and drug testing.


The video claimed that immortalization turns the cells into “cancerous and pre-cancerous”. This would mean that these cells have the capacity to grow without control and invade nearby tissues (cancerous) or that they have abnormal changes that make them more likely to develop into cancer cells (pre-cancerous). However, this claim is incorrect.

While cancer cells can divide indefinitely, this is only one of the various characteristics that make them cancerous. To be considered cancerous, the cells must also have the capacity to create new blood vessels and invade neighboring tissues, and show unpredictable, uncontrollable behavior[1].

Joe Regenstein, a professor emeritus of food science at Cornell University, told AFP that this isn’t how immortalized cells behave. “Immortalized cells are essentially the exact opposite of cancer cells” because “They are highly controlled and repeatable”, said Regenstein (…).”
🤣.. Hey, I didn't say they "were" or could "grow" inside you, when dead and cooked.

But they do have similar properties.
It's "common sense" where they got them from.
I have little respect for today's "scientists" whose writings are financially funded and motivated.

Of course there is a huge monetary incentive to ensure they are "safe".

They also say they can be made without antibiotics (only a very small amount at the initial animal harvesting) and then grown in very sterile conditions (as the lab-grown meat doesn't have an immune system, so is easily contaminated).

But the reality is, when they move to very large-scale production, they will have no choice but to use antibiotics and various other additives, in my opinion.

But hey, enjoy; I won't be touching that shit with a 10-foot pole.
And although it will eventually be illegal in the market to differentiate between natural and lab-grown, I doubt the types of meat products I eat will be replicated anytime soon.
 
  • Like
  • Fire
Reactions: 5 users
Been here before, haven't we?
So many statements from those with the incentive to tell us that the COVID vax was not only safe but prevented infection and transmission.
Oh! And wearing a mask helped the uninfected stay protected lol....
Trust = zero
 

  • Like
  • Haha
  • Fire
Reactions: 6 users
Now you've started it! 😛

Hey, the Government/Corporations don't tell lies, okay?

Now back to our regular feature.

AKIDA TREBUCHET! 🔥🔥🔥
 
  • Haha
  • Like
Reactions: 6 users
So, have we got an offshoot, a rebranding, or part of the restructure process of NVISO?

BeEmotion.ai.

I thought the layout, images and graphics looked similar when I first looked at their homepage, and digging deeper I found some of what I snipped below.

When I scrolled to the bottom of the page I can see the site is managed by Tropus here in Perth. I know exactly where their office is, as I drive past it for certain client meetings.

There is also a small Japanese link at the bottom which takes you to the Japanese site.







HUMAN BEHAVIOUR AI

NEUROMORPHIC COMPUTING

BeEmotion empowers system integrators to build AI-driven human machine interfaces to transform our lives using neuromorphic computing. Understand people and their behavior in real-time without an internet connection to make autonomous devices safe, secure, and personalized for humans.


NEUROMORPHIC COMPUTING INTEROPERABILITY

ULTRA-LOW LATENCY WITH LOW POWER


ULTRA-LOW LATENCY (<1MS)

Total BeEmotion Neuro Model latency is similar for a GPU and the BrainChip Akida™ neuromorphic processor (300 MHz); however, CPU latency is approximately 2.4x slower. All models on all platforms can achieve <10ms latency, and the best model can achieve 0.6ms, almost 2x faster than a GPU. On a clock-frequency-normalized basis, this represents an acceleration of 6x.

HIGH THROUGHPUT (>1000 FPS)

BeEmotion Neuro Model performance can be accelerated by an average of 3.67x using the BrainChip Akida™ neuromorphic processor at 300MHz over a single ARM Cortex-A57 core, as found in an NVIDIA Jetson Nano (4GB) running at close to 5x the clock frequency. On a clock-frequency-normalized basis, this represents an acceleration of 18.1x.
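The clock-frequency normalisation in these figures is just the raw speedup scaled by the ratio of the two clocks. A minimal sketch of that arithmetic (the 1479 MHz Cortex-A57 clock of the Jetson Nano is an assumption read off the "close to 5x" wording, not a figure from the quoted page):

```python
# Sketch of the clock-frequency normalisation behind the 18.1x claim.
# The Cortex-A57 clock (1479 MHz) is assumed; the 3.67x raw speedup and
# 300 MHz Akida clock come from the quoted page.

def normalized_speedup(raw_speedup, baseline_clock_mhz, device_clock_mhz):
    """Scale a raw speedup by how much faster the baseline is clocked."""
    return raw_speedup * (baseline_clock_mhz / device_clock_mhz)

# Akida at 300 MHz vs a Cortex-A57 at ~1479 MHz (close to 5x the clock):
print(round(normalized_speedup(3.67, 1479, 300), 1))  # → 18.1
```

So the "18.1x" is the measured 3.67x speedup multiplied by the ~4.93x clock disadvantage the Akida runs at.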

SMALL STORAGE (<1MB)

BeEmotion Neuro Models can achieve a model storage size under 1MB, targeting ultra-low-power MCU systems where onboard flash memory is limited. Removing the need for external flash memory saves cost and power. The BrainChip Akida™ format uses 4-bit quantisation, whereas the ONNX format uses Float32.
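The storage saving follows directly from the bit widths: 4-bit weights take an eighth of the space of Float32. A quick sketch, using a made-up 2M-parameter model (not an actual BeEmotion model size):

```python
# Weight-storage arithmetic behind the <1MB claim: 4-bit (Akida format)
# vs Float32 (ONNX). The 2M parameter count is purely illustrative.

def weight_storage_mb(num_params, bits_per_weight):
    """Raw weight storage in megabytes (ignores headers/metadata)."""
    return num_params * bits_per_weight / 8 / 1e6  # bits -> bytes -> MB

params = 2_000_000                    # hypothetical model
print(weight_storage_mb(params, 32))  # Float32 (ONNX): 8.0 MB
print(weight_storage_mb(params, 4))   # 4-bit (Akida):  1.0 MB
```

At 4 bits per weight, a model of roughly 2M parameters fits in the quoted 1MB budget, where the same model in Float32 would need external flash.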

DESIGNED FOR EDGE COMPUTING

NO CLOUD REQUIRED


PRIVACY PRESERVING

By processing video and audio sensor data locally, the data does not have to be sent over a network to remote servers for processing. All processing can run disconnected from a central server, a more secure and private architecture that reduces security risks and improves data privacy.

Further to my post above, where BeEmotion (NVISO imo) was discovered, it appears there has been a refresh, launching soon in Australia, Japan & Switzerland.


 
  • Like
  • Fire
  • Love
Reactions: 28 users