BRN Discussion Ongoing

IloveLamp

Top 20
One for the list @Fact Finder


 
  • Like
  • Fire
  • Love
Reactions: 38 users

IloveLamp

Top 20

 
  • Like
  • Fire
Reactions: 12 users

cosors

👀
One for the list @Fact Finder


View attachment 58010
And an update for Neuromorphia!
 
  • Like
  • Fire
  • Haha
Reactions: 7 users

Tothemoon24

Top 20

Renesas Develops New AI Accelerator for Lightweight AI Models and Embedded Processor Technology to Enable Real-Time Processing


By Businesswire
February 26, 2024

Results of Operation Verification Using an Embedded AI-MPU Prototype Announced at ISSCC 2024

Renesas Electronics Corporation, a premier supplier of advanced semiconductor solutions, today announced the development of embedded processor technology that enables higher speeds and lower power consumption in microprocessor units (MPUs) that realize advanced vision AI. The newly developed technologies are as follows: (1) A dynamically reconfigurable processor (DRP)-based AI accelerator that efficiently processes lightweight AI models and (2) Heterogeneous architecture technology that enables real-time processing by cooperatively operating processor IPs, such as the CPU. Renesas produced a prototype of an embedded AI-MPU with these technologies and confirmed its high-speed and low-power-consumption operation. It achieved up to 16 times faster processing (130 TOPS) than before the introduction of these new technologies, and world-class power efficiency (up to 23.9 TOPS/W at 0.8 V supply).
Amid the recent spread of robots into factories, logistics, medical services, and stores, there is a growing need for systems that can run autonomously in real time by detecting their surroundings using advanced vision AI. Since embedded devices in particular face severe restrictions on heat generation, AI chips must deliver both higher performance and lower power consumption. Renesas developed new technologies to meet these requirements and presented the results on February 21 at the International Solid-State Circuits Conference 2024 (ISSCC 2024), held February 18-22, 2024, in San Francisco.
The technologies developed by Renesas are as follows:
(1) An AI accelerator that efficiently processes lightweight AI models
Pruning, which omits calculations that do not significantly affect recognition accuracy, is a typical technique for improving AI processing efficiency. However, the calculations that can be skipped are usually scattered randomly throughout an AI model. This mismatch between the regular parallelism of the hardware and the randomness of pruning makes processing inefficient.
To solve this issue, Renesas optimized its unique DRP-based AI accelerator (DRP-AI) for pruning. By analyzing how pruning pattern characteristics and pruning methods relate to recognition accuracy in typical image-recognition AI models (CNN models), Renesas identified a hardware structure for the AI accelerator that achieves both high recognition accuracy and an efficient pruning rate, and applied it to the DRP-AI design. It also developed software that compresses AI models into a form optimized for this DRP-AI, converting the randomly pruned model structure into highly efficient parallel computation and thereby speeding up AI processing. In particular, Renesas' highly flexible pruning support technology (flexible N:M pruning), which can dynamically change the number of processing cycles in response to changes in the local pruning rate within an AI model, allows fine control of the pruning rate according to the power consumption, operating speed, and recognition accuracy required by users.
This technology reduces the number of AI-model processing cycles to as little as one-sixteenth that of pruning-incompatible models and cuts power consumption to less than one-eighth.
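As a rough illustration of the general idea behind N:M structured pruning (an illustrative sketch only, not Renesas' proprietary DRP-AI or its flexible N:M scheme), the NumPy snippet below zeroes all but the N largest-magnitude weights in every group of M consecutive weights; hardware built around the matching pattern can then skip the zeroed multiplies, which is where the cycle and power savings come from.

```python
import numpy as np

def prune_n_m(weights: np.ndarray, n: int = 2, m: int = 4) -> np.ndarray:
    """Keep the n largest-magnitude weights in every group of m, zero the rest.

    Generic N:M structured pruning for illustration; assumes weights.size is
    divisible by m. Renesas' flexible N:M technology additionally varies the
    effective rate per region of the model, which is not modeled here.
    """
    groups = weights.reshape(-1, m)
    # indices of the (m - n) smallest-magnitude weights in each group
    drop = np.argsort(np.abs(groups), axis=1)[:, : m - n]
    mask = np.ones_like(groups, dtype=bool)
    np.put_along_axis(mask, drop, False, axis=1)
    return (groups * mask).reshape(weights.shape)

# A 2:4 pattern keeps half the weights, so a matching accelerator can in
# principle halve the multiply count for that layer.
w = np.random.randn(8, 8)
print(prune_n_m(w))
```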
(2) Heterogeneous architecture technology that enables real-time processing for robot control
Robot applications require advanced vision AI processing to recognize the surrounding environment. Robot motion judgment and control, on the other hand, require detailed condition-based programming in response to changes in that environment, so CPU-based software processing is better suited than AI-based processing. The challenge has been that the CPUs in current embedded processors are not fully capable of controlling robots in real time. Renesas therefore introduced a dynamically reconfigurable processor (DRP), which handles complex processing, alongside the CPU and AI accelerator (DRP-AI). This led to heterogeneous architecture technology that achieves higher speeds and lower power consumption in AI-MPUs by distributing and parallelizing processes appropriately.
A DRP runs an application while dynamically changing the circuit connections between the arithmetic units inside the chip on each operation clock, according to the processing being performed. Because only the necessary arithmetic circuits operate even during complex processing, lower power consumption and higher speeds are possible. For example, SLAM (Simultaneous Localization and Mapping), a typical robot application, is a complex workload that requires robot position recognition to run in parallel with environment recognition by vision AI. Renesas demonstrated this SLAM workload using instantaneous program switching on the DRP together with parallel operation of the AI accelerator and CPU, achieving about 17 times faster operation and about 12 times higher operating power efficiency than the embedded CPU alone.
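To make the heterogeneous split concrete, here is a minimal, purely illustrative Python sketch (not Renesas code): plain threads stand in for the three processor IPs, with perception and localization running concurrently on each frame and the condition-heavy control logic running afterwards, as it would on the CPU. All function bodies are placeholders.

```python
import concurrent.futures as cf
import time

def vision_ai(frame):
    """Placeholder for vision AI inference (would run on the DRP-AI)."""
    time.sleep(0.005)
    return {"objects": []}

def localize(frame, last_pose):
    """Placeholder for SLAM-style localization (would run on the DRP)."""
    time.sleep(0.004)
    return last_pose

def control_step(perception, pose):
    """Condition-heavy control logic, suited to CPU software processing."""
    return "avoid" if perception["objects"] else "hold"

pose = (0.0, 0.0, 0.0)
with cf.ThreadPoolExecutor(max_workers=2) as pool:
    for frame in range(3):                           # stand-in camera frames
        f_obj = pool.submit(vision_ai, frame)        # perception in parallel...
        f_pose = pool.submit(localize, frame, pose)  # ...with localization
        perception, pose = f_obj.result(), f_pose.result()
        print(frame, control_step(perception, pose))  # then sequential control
```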
Operation Verification
Renesas created a prototype test chip with these technologies and confirmed world-class power efficiency of 23.9 TOPS per watt for the AI accelerator at a nominal supply voltage of 0.8 V, along with operating power efficiency of 10 TOPS per watt on major AI models. It also demonstrated that AI processing is possible without a fan or heat sink.
These results help address the heat generation caused by increased power consumption, which has been one of the obstacles to deploying AI chips in embedded devices such as service robots and automated guided vehicles. Significantly reducing heat generation will help spread automation into various industries, such as the robotics and smart-technology markets. These technologies will be applied to Renesas' RZ/V series of MPUs for vision AI applications.
 
  • Like
  • Fire
  • Love
Reactions: 33 users

IloveLamp

Top 20
🤔

Description

Announcing The KnowU™, Know Labs' Wearable Non-Invasive CGM

Announcing the KnowU™, Know Labs' wearable non-invasive continuous glucose monitor (CGM). The KnowU incorporates the sensor the Company plans to submit to the FDA for clearance. This proprietary radio-frequency (RF) dielectric sensor has been tested and proven stable and accurate in a lab setting and is now miniaturized and wearable. The KnowU can be worn with an adhesive, allowing the user to clip the sensor on and off, or on the wrist or forearm with a strap. The device, which is significantly smaller and lighter than the prototype, includes on-board computing power and built-in machine learning capabilities. The KnowU is designed to optimize the customer experience: it is expected to last for years, eliminate costly disposables, have a rechargeable battery, and connect with an easy-to-use companion mobile app.



 
  • Like
  • Love
  • Fire
Reactions: 30 users

Tothemoon24

Top 20

Swift Path to Edge AI: Revolutionizing Development Through Synthetic Data [S61308]

Synthetic data generation is a game-changer for industries operating in complex industrial, remote, or sensitive environments, where obtaining real-world data can be costly, time-consuming, privacy-sensitive, or simply unable to cover all types of scenarios. NVIDIA Omniverse Replicator can generate highly realistic synthetic datasets tailored to train computer vision models for specific industrial scenarios. Combined with Edge Impulse, users can rapidly create professional-grade industrial machine learning models that run on resource-constrained devices. We'll showcase industrial use cases that previously required manual, in-person data collection and can now be handled with Omniverse Replicator, including warehouses employing asset-tracking tags, detecting vehicles in remote locations, and more. We'll show how these scenarios are built with randomized lighting conditions, camera positions, and material textures, resulting in a dataset that can surpass data collected in person. Join us to hear about Edge Impulse’s real-world industrial edge solutions that leverage synthetic data from Omniverse Replicator.
, Senior Developer Relations Engineer, Edge Impulse
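For readers new to the approach, the sketch below shows the bare-bones shape of a domain-randomization loop: each synthetic sample draws random lighting, camera, and material parameters and records a label. It is plain Python with made-up parameter ranges and label names, not the Omniverse Replicator or Edge Impulse APIs, which perform this randomization on an actual rendered scene.

```python
import json
import random

def sample_scene():
    """Draw one randomized scene configuration (illustrative ranges only)."""
    return {
        "light_intensity": random.uniform(200.0, 2000.0),  # assumed lux range
        "camera_position": [random.uniform(-2.0, 2.0),     # assumed metres
                            random.uniform(-2.0, 2.0),
                            random.uniform(1.0, 3.0)],
        "material_roughness": random.uniform(0.0, 1.0),
    }

dataset = []
for i in range(1000):
    scene = sample_scene()
    # A real pipeline would render an image from `scene` here; this sketch
    # only records the scene parameters and a hypothetical class label.
    dataset.append({"id": i, "scene": scene, "label": "asset_tag_visible"})

with open("synthetic_labels.json", "w") as f:
    json.dump(dataset, f, indent=2)
```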
 
  • Like
  • Fire
Reactions: 12 users

Tothemoon24

Top 20

The company also gave a preview of the next-gen Intel Xeon processor Sierra Forest, set to launch later in the year. It uses up to 288 Efficient-cores (E-cores) on a single chip. It targets 5G core workloads, emphasizing improved performance and power efficiency. The company promised broad availability and industry adoption of Intel Infrastructure Power Manager software, which enables operators to reduce CPU power by an average of 30% while maintaining key telco performance metrics.


Designed to meet the evolving needs of enterprises at the edge, the Intel Edge Platform offers support for heterogeneous components, policy-based management, and built-in AI runtime with OpenVINO inference. Built on Intel’s vast expertise from over 90,000 edge deployments, the platform is set to be generally available later this quarter.
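For context on the "built-in AI runtime with OpenVINO inference", a minimal OpenVINO Python call sequence looks like the sketch below (assuming OpenVINO 2022+ and a static-shape IR model; "model.xml" is a placeholder path, and the Edge Platform wraps this kind of call behind its own management layer).

```python
import numpy as np
from openvino.runtime import Core

core = Core()
model = core.read_model("model.xml")          # placeholder IR model path
compiled = core.compile_model(model, "CPU")   # or another supported device

input_shape = compiled.input(0).shape         # assumes a static input shape
dummy = np.random.rand(*input_shape).astype(np.float32)

# Run a single inference and fetch the first output tensor.
result = compiled([dummy])[compiled.output(0)]
print(result.shape)
```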


Intel and Microsoft are set to announce extended benefits of the AI PC to commercial designs on day two of MWC 2024, emphasizing enhanced AI experiences and productivity for businesses of all sizes.


Intel’s Programmable Solutions Group (PSG) will launch two new radio macro and mMIMO Enablement Packages, along with Intel Precision Time Protocol Servo, offering flexible, low-power, low-latency, high-throughput solutions for emerging spaces like vRAN, OpenRAN, 6G, and AI.
 
  • Like
  • Fire
  • Love
Reactions: 26 users

Beebo

Regular
🤔

Description

Announcing The KnowU™, Know Labs' Wearable Non-Invasive CGM

Announcing the KnowU™, Know Labs' wearable non-invasive continuous glucose monitor (CGM). The KnowU incorporates the sensor the Company plans to submit to the FDA for clearance. This proprietary radio-frequency (RF) dielectric sensor has been tested and proven stable and accurate in a lab setting and is now miniaturized and wearable. The KnowU can be worn with an adhesive, allowing the user to clip the sensor on and off, or on the wrist or forearm with a strap. The device, which is significantly smaller and lighter than the prototype, includes on-board computing power and built-in machine learning capabilities. The KnowU is designed to optimize the customer experience: it is expected to last for years, eliminate costly disposables, have a rechargeable battery, and connect with an easy-to-use companion mobile app.



View attachment 58012

(?) If I’m not mistaken, BrainChip’s partnership is with AI Labs, not Know Labs.

Although Know Labs would be a good fit for Akida, especially since they use an RF sensor.
 
  • Like
  • Fire
  • Love
Reactions: 8 users

MegaportX

Regular
Not BRN, but telecommunications edge ...

*DJ Telstra Trials Adtran FSP 150 for Edge Compute Services

(MORE TO FOLLOW) Dow Jones Newswires
February 27, 2024 08:01 ET (13:01 GMT)
Copyright(c) 2024 Dow Jones & Company, Inc.

*DJ Adtran: Telstra Has Successfully Trialed Its FSP 150 Edge Compute Device >ADTN
 
  • Thinking
  • Like
  • Wow
Reactions: 5 users
Hopefully we will have a bit of a bounce today after yesterday.

 
  • Haha
  • Like
  • Wow
Reactions: 29 users

GStocks123

Regular
  • Like
  • Love
  • Fire
Reactions: 13 users

MegaportX

Regular
  • Like
  • Fire
  • Love
Reactions: 38 users

Frangipani

Regular
I don't believe we are in this one, could be wrong.
The one we are in is to launch March 4th

👆🏻

👍🏻






TECH SPACE
Experimental orbital services vehicle Optimus set for launch
The development and upcoming launch of Optimus signal a new era for the Australian space industry, with Space Machines Company at the forefront of this transformative journey. By extending the life of satellites, enhancing space sustainability, and fostering international collaborations, Optimus is set to make a significant impact on the future of space operations and the broader space economy.


Experimental orbital services vehicle Optimus set for launch
by Simon Mansfield
Adelaide, Australia (SPX) Feb 27, 2024

The Space Machines Company has announced the completion of Optimus, Australia's largest-ever private satellite. This Orbital Servicing Vehicle (OSV) represents a significant leap in Australia's sovereign space capabilities, poised to provide critical life-extension services, inspections, and assistance to the existing space infrastructure.

Rajat Kulshrestha, CEO of Space Machines, highlighted the collaborative effort behind Optimus, emphasizing the years of dedicated work from engineers, scientists, and both local and international partners. This partnership has not only culminated in the creation of Optimus but has also set the stage for innovative satellite services designed to extend operational lifetimes, reduce space debris, and enable the sustainable expansion of space activities.

Central to Optimus's mission are the cutting-edge technology payloads it carries, contributed by leading partners.
Among these, Advanced Navigation has introduced a revolutionary Digital Fiber Optic Gyroscope (DFOG) Inertial Navigation System (INS), with CEO Xavier Orr noting Optimus's role in demonstrating highly precise navigation capabilities. This technology is critical for efficient maneuvering within and between orbits, optimizing mission success while conserving fuel and time.

Orbit Fab, a pioneer in on-orbit refueling services, has equipped Optimus with fiducial markers, providing essential position and orientation data for safe and reliable operations. Daniel Faber, CEO of Orbit Fab, expressed excitement over the collaboration, emphasizing its importance in advancing cooperative refueling and sustainable space operations.

Further enriching the mission, ANT61 brings to Optimus the world's first neuromorphic computer at the core of autonomous robotics technology. This innovation is expected to facilitate future in-orbit docking and refueling missions, with the Sydney-based company aiming to lead in the $10 billion in-orbit servicing market and support the development of infrastructure for an international lunar base.

The satellite also hosts a range of innovative payloads from Australian and international partners, including HEO Robotics's space domain awareness camera, Esper's hyperspectral camera, Spiral Blue's in-space image processor, and Dandelions's high-powered network processor. These technologies aim to test and demonstrate capabilities that could redefine space technology applications.

Support from government agencies such as Investment NSW, the Australian Space Agency, Defence Space Command, and the Government of South Australia has been crucial in bringing Optimus to fruition. This support underscores the importance of public-private partnerships in advancing national space objectives.

Optimus is scheduled for launch aboard SpaceX's Transporter-10 mission, targeting a departure from Vandenberg Space Force Base in California no earlier than March 2024. This collaboration with SpaceX not only showcases the international dimension of space exploration efforts but also Australia's growing influence in the global space economy.

Related Links

Space Machines Company
Space Technology News - Applications and Research
 
  • Like
  • Fire
  • Love
Reactions: 37 users

IloveLamp

Top 20
  • Like
  • Fire
  • Love
Reactions: 35 users

charles2

Regular
Hopefully we will have a bit of a bounce today after yesterday.

View attachment 58020
Nearly 500k shares on the ask late in the day in the US. Keeping a lid on any enthusiasm/rebound today. After a huge sell off like we have experienced it takes substantial time to reestablish a foundation. Shorts and weak hands should have us running in place until we get a glimpse of blue skies again.

Caveat: Contracts and announced partnerships or a NVDA etc buy in could turn the share price pronto.

So be patient and prepare to experience more time wandering in the wilderness.

Forty days perhaps.
 
  • Like
  • Haha
  • Love
Reactions: 21 users

Easytiger

Regular
The following is a link to a presentation containing a BrainChip-created Competitive Analysis Chart on page 13.


You will note that, beyond Intel and IBM, Google Coral and Deep Learning Accelerators from Nvidia and others are listed and compared to AKIDA, which ticks all the boxes.

Today we are told the competitors' names are not included, basically for politeness' sake.

Until today, BrainChip has seen no reason to be so polite about Google, Nvidia & others when listing competitors' failings compared to AKIDA technology.

What has changed???

My opinion only DYOR
Fact Finder
Nothing
 
  • Like
Reactions: 2 users
(?) If I’m not mistaken, BrainChip’s partnership is with AI Labs, not Know Labs.

Although Know Labs would be a good fit for Akida, especially since they use an RF sensor.


Know Labs signed a partnership with Edge Impulse over a year ago. We are Edge Impulse's strategic partner, providing the hardware for their tinyML models to run on.

Off the top of my head, the US glucose monitoring market was about US$7B annually, and I believe this first-of-its-kind non-invasive device would clean up against all competitors.

Personally, Know Labs hitting the market in 2026 is a milestone I'm following with keen interest.

The current share price fluctuations are white noise when your horizon is longer than 24 hours.



:)
 
  • Like
  • Love
  • Fire
Reactions: 31 users