BRN Discussion Ongoing

RobjHunt

Regular
I’m a little behind and need to catch up but has there been a Forbes article posted of late?
 
  • Like
Reactions: 4 users

CHIPS

Regular
I’m a little behind and need to catch up but has there been a Forbes article posted of late?

Yes, yesterday or the day before.
 
  • Like
Reactions: 4 users

Boab

I wish I could paint like Vincent
I’m a little behind and need to catch up but has there been a Forbes article posted of late?
Pitt Street put out this report @RobjHunt
BrainChip+intiation+report+2024+25+6+2024.pdf
 
  • Like
  • Fire
Reactions: 5 users

Boab

I wish I could paint like Vincent
Apologies as my copy and paste skills are lacking.
 
  • Like
  • Haha
Reactions: 5 users

Diogenese

Top 20
I’m a little behind and need to catch up but has there been a Forbes article posted of late?
Like the butcher who fell backwards into the mincing machine -


he got a little behind in his work.
 
  • Haha
  • Wow
  • Like
Reactions: 15 users

Diogenese

Top 20
Like the butcher who fell backwards into the mincing machine -


he got a little behind in his work.
Unlike the flight attendant who stepped backwards into the propellor -

Disaster!
 
  • Haha
  • Wow
  • Sad
Reactions: 7 users

Rach2512

Regular

3 Less Known Technological Niches Full Of Startup Opportunities

Abdo Riani
Senior Contributor
I share tips about launching, validating and growing startups.

Jun 30, 2024, 03:34pm EDT


Some of the most important factors for startup success are good timing, a good idea, and most importantly - favorable market conditions. That’s why choosing the right niche where you can create the most added value in a market which desperately needs that value is probably the most crucial thing you can do to be successful as a startup founder.

In this article, we explore three niches that don’t get as much press attention as AI while at the same time providing extremely high market disruption potential for innovative high-tech startups.

1. Neuromorphic Engineering​

Neuromorphic engineering is an exciting and emerging field focused on developing hardware that mimics the human brain's structure and function. This technology has the potential to revolutionize artificial intelligence and machine learning by making them more efficient in both computation and energy use. Startups in this niche can focus on developing neuromorphic chips or software that leverages these chips for advanced AI applications.

One notable startup in the neuromorphic engineering space is BrainChip Holdings. BrainChip is an Australian company that specializes in developing neuromorphic processors designed to enable high-performance and low-power AI at the edge. The company's flagship product, Akida, is a neuromorphic System-on-Chip (SoC) that processes information in an event-based manner, similar to how the human brain operates, which significantly reduces power consumption compared to traditional AI processors. Akida is reported to achieve a power consumption of less than 1 milliwatt (mW), making it highly efficient for edge devices that require extended battery life.

The field is still in its early stages, offering ample room for innovation and development. Startups can explore various applications and sectors where neuromorphic engineering can make a significant impact, with the most obvious sector being AI, which is one of the most opportunity-abundant fields in itself.


Here it is again
 
  • Like
  • Love
  • Fire
Reactions: 13 users

Boab

I wish I could paint like Vincent
Like the butcher who fell backwards into the mincing machine -


he got a little behind in his work.
Perhaps the Jack Newton jokes are a bit off hand.
 
  • Haha
  • Like
  • Wow
Reactions: 5 users

schuey

Regular
Imagine your IoT devices had the intelligence of a human brain 🧠. It may sound like a dream of the future, but neuromorphic computing makes it possible! Neuromorphic computing mimics the behaviour of brain neurons and therefore has human-like adaptation and learning capabilities. But why could this be beneficial for the IoT (Internet of Things)? Until now, IoT devices have mostly used the von Neumann architecture, in which data and instructions are processed sequentially in the same memory space. However, this method also leads to data bottlenecks and high power consumption, which is problematic for IoT devices that are often battery-powered 🪫. Neuromorphic computing offers revolutionary advantages here: · Parallel processing: unlike sequential processing, neuromorphic systems can perform many tasks simultaneously. · Lower power consumption: ideal for IoT devices that depend on energy efficiency. · Real-time processing: ensures fast responses and low latency. · On-device adaptive learning: IoT devices can adapt and learn without constantly relying on central servers. In what other areas could neuromorphic computing become useful? Share your opinion in the comments. #neuromorphiccomputing #ki #internetofthings #vodafonebusiness (Sources: The Digital Speaker 2023, The Financial Express 2024)
Couldn't translate it and post it... beyond me at 7pm... but I like what I read.
 
  • Like
Reactions: 2 users

7für7

Regular
Couldn't translate it and post it... beyond me at 7pm... but I like what I read.
Imagine your IoT devices had the intelligence of a human brain 🧠. This isn't just futuristic; it's achievable through neuromorphic computing! Neuromorphic computing mimics the behavior of brain neurons, offering human-like adaptability and learning capabilities. Why is this beneficial for IoT (Internet of Things)?


Currently, IoT devices mainly use the Von Neumann architecture, which processes data and commands sequentially in the same memory space. This method can lead to data bottlenecks and high power consumption, problematic for often battery-operated IoT devices 🪫.


Neuromorphic computing offers revolutionary advantages: · Parallel processing: Unlike sequential processing, neuromorphic systems can perform many tasks simultaneously. · Lower power consumption: Ideal for energy-efficient IoT devices. · Real-time processing: Ensures quick responses and low latency. · On-device adaptive learning: IoT devices can adapt and learn without constantly relying on central servers.


In what other areas could neuromorphic computing be useful? Share your thoughts in the comments.


#neuromorphiccomputing #AI #InternetofThings #VodafoneBusiness


(Sources: The Digital Speaker 2023, The Financial Express 2024)
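The "lower power consumption" point in the post above can be made concrete with a toy comparison (hypothetical numbers, purely illustrative, not based on any Vodafone or BrainChip figures): a conventional polling loop does a unit of work on every sample, while an event-driven loop wakes only when a reading actually changes.

```python
def polled_updates(readings):
    """Polling model: process every sample, changed or not."""
    return len(readings)  # one unit of work per tick

def event_driven_updates(readings, threshold=0):
    """Event-driven model: only process samples that differ from the last one seen."""
    work, last = 0, None
    for r in readings:
        if last is None or abs(r - last) > threshold:
            work += 1  # wake up and process this event
            last = r
    return work

# A mostly-idle temperature sensor: long runs of identical readings.
trace = [20, 20, 20, 21, 21, 21, 21, 22, 22, 22]
print(polled_updates(trace))        # 10 units of work
print(event_driven_updates(trace))  # 3 units of work
```

For a sensor that is idle most of the time, the event-driven loop does a small fraction of the work, which is the intuition behind the battery-life claim.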
 
  • Like
  • Love
  • Haha
Reactions: 8 users

Diogenese

Top 20

cosors

👀
Couldn't translate it and post it... beyond me at 7pm... but I like what I read.
"Imagine if your IoT devices had the intelligence of a human brain. It may sound like a dream of the future, but neuromorphic computing makes it possible! Neuromorphic computing mimics the behaviour of brain neurons and therefore has human-like adaptation and learning capabilities. But why could this be beneficial for the IoT (Internet of Things)? Up to now, IoT devices have mostly used the Neumann architecture, in which data and commands are processed sequentially in the same memory space. However, this method also leads to data bottlenecks and high power consumption, which is problematic for IoT devices that are often battery-operated 🪫. Neuromorphic computing offers revolutionary advantages here: - Parallel processing: unlike sequential processing, neuromorphic systems can perform many tasks simultaneously. - Lower power consumption: Ideal for IoT devices that rely on energy efficiency - Real-time processing: Ensures fast responses and low latency. - On-site adaptive learning: IoT devices can adapt and learn without constantly relying on centralised servers. In what other areas could neuromorphic computing become useful? Share your opinion in the comments. #neuromorphiccomputing #ki #internetofthings #vodafonebusiness (Sources: The Digital Speaker 2023, The Financial Express 2024)"
 
  • Like
  • Fire
  • Love
Reactions: 13 users
Mmm
 
Last edited:
  • Like
Reactions: 1 user

Tothemoon24

Top 20

Press Release for Announcement of Design and Development of indigenous HPC Processor SoC “AUM”

By MosChip Official, 02/07/2024

C-DAC partners with MosChip and Socionext for design of HPC Processor AUM based on Arm architecture

New Delhi, India, 1 July 2024 – C-DAC partners with MosChip® Technologies and Socionext Inc. for the design and development of a High-Performance Computing (HPC) Processor SoC based on the Arm® architecture and built on the TSMC 5nm technology node.
Centre for Development of Advanced Computing (C-DAC) is the premier R&D organization of the Ministry of Electronics and Information Technology (MeitY), Govt. of India for carrying out R&D in IT, Electronics and associated areas. C-DAC was established to develop and deploy the state-of-the-art supercomputing technology in India.
National Supercomputing Mission (NSM), funded by Ministry of Electronics and Information Technology (MeitY) and Department of Science and Technology (DST), was launched to make India one of the world leaders in supercomputing and to enhance India’s capability in solving grand challenge problems of national and global relevance. As part of this, C-DAC is developing and deploying HPC systems at leading R&D and academic institutions across the country.
C-DAC is working towards complete indigenization of supercomputing technology. Towards this, C-DAC has developed the indigenous compute node RUDRA, the Trinetra-Interconnect, and a system software stack. Further, for complete indigenization of HPC system development, C-DAC is designing an indigenous HPC processor, AUM. Keenheads Technologies, an Indian startup, has been engaged by C-DAC as Program Management Consultant (PMC) for the project.
C-DAC is collaborating with a consortium of MosChip Technologies, India, and Socionext Inc., Japan, for the design and development of this indigenous HPC Processor AUM, which is based on the high-performance Arm Neoverse™ V2 CPU platform and incorporates advanced packaging technology. This approach allows C-DAC to retain ownership of unique differentiators, providing a significant competitive edge.
“MeitY’s strategic policies have been instrumental in the growth of the National Supercomputing Mission. We have achieved significant milestones, including the installation of advanced supercomputers, enhancements in computational capabilities, and breakthroughs in research. Our indigenization efforts have reached more than 50% with server nodes, interconnects, and system software stack. Now for complete indigenization, we are aiming to develop indigenous HPC Processor AUM. Government of India and MeitY are committed to driving India towards a technologically sovereign advanced future, harnessing supercomputing for national development and global leadership”, said Shri S. Krishnan, Secretary, MeitY.
“Today’s announcement is a significant achievement in chip design. It demonstrates India’s capability in indigenous development in the field of high-performance computing. These ventures in consortia mode in partnership with industry are the need of the hour” said Dr. Praveen Kumar S, Head of Scientific Divisions (HOD) Department of Science and Technology.
“The collaboration between C-DAC and MosChip & Socionext to develop cutting-edge HPC processor, which is designed to meet the evolving demands of High-Performance-Computing and related applications, exemplifies the growing synergy between R&D and Industry. This joint effort marks a significant milestone in technological advancement, leveraging C-DAC’s expertise in Supercomputing technology and MosChip’s and Socionext’s capabilities in semiconductor design and manufacturing. The collaboration aims to design, develop and produce indigenous HPC processor that not only meets global standards but also propels India to the forefront in supercomputing arena. This collaboration is also a significant step forward in our efforts to bolster India’s position in the global semiconductor landscape.”, said Shri E Magesh, Director General, C-DAC.

“We are honoured and excited to join forces with C-DAC and MeitY on this pioneering project to design and develop a HPC Processor SoC in TSMC 5nm node. This collaboration leverages our cutting-edge silicon design expertise and underscores our commitment to strengthening India’s technological prowess. Our team of silicon and system design experts ensure that we deliver solutions that meet the highest standards of performance and reliability. We are committed to supporting C-DAC and MeitY in their goal of making India one of the world leaders in Supercomputing,” said Shri Srinivasa Rao Kakumanu, Managing Director and CEO, of MosChip Technologies.
“Socionext is pleased to collaborate with C-DAC and MosChip in the development of HPC Processor AUM, in support of India’s indigenous Supercomputing Infrastructure and ecosystem, supporting a range of applications of national interest,” stated Shri Hisato Yoshida, Deputy President, Head of Solution SoC R&D, and Global Development Group at Socionext. “The combined expertise of C-DAC, MosChip and Socionext in advanced Supercomputing technology and Silicon developments will yield a remarkable indigenous HPC SoC, delivering an exceptional energy efficient high-performance processor to propel India’s national supercomputing mission to new heights.”
“We are proud to be part of C-DAC’s efforts in advancing India’s HPC capabilities,” said Guru Ganesan, President, Arm-India. “Arm Neoverse V2 is playing a pivotal role in transforming data centers and will deliver the scalable, efficient computing needed to meet the goals set forth by India’s National Supercomputing Mission.”

About MosChip:
MosChip® is India’s first fabless semiconductor company with 25 years of experience in designing semiconductor IP, products, and first-time right silicon with 200+ SoC tape-outs.
About Socionext:
Socionext Inc. is a global SoC supplier and a pioneer of a unique “Solution SoC” business model and creating feature-rich custom SoCs using cutting-edge technology.
 
  • Like
  • Love
  • Fire
Reactions: 18 users

manny100

Regular
Not sure whether it's been mentioned in the last 24 hours, but it's a circa 7-bagger from today's SP to the Pitt Street Research valuation of $1.59.
That valuation is basically our portfolio asset called with the business thrown in.
 
  • Like
Reactions: 7 users

Frangipani

Regular
Since the following article is branded content by WIRED Consulting and QinetiQ (a UK-based defense and space manufacturing company), chances are QinetiQ researchers have already been experimenting with neuromorphic technology; both Loihi 2 and Akida are explicitly referred to.

Also take a look at the illustration.




BRANDED CONTENT BY WIRED Consulting x QinetiQ

Neuromorphic Chips Could Solve Some Of AI’s Problems

As AI becomes the tech world’s center of gravity, the virtues of neuroscience-inspired computing systems are turning heads. These are now seen as a critical enabler for a raft of new innovations, not only on Earth, but in space…



What if we told you there was a computer that can analyze and respond almost instantaneously to data from any of its thousands of sensors? That it can also match patterns in that data, recognizing faces, objects, and spoken words, and put that information into context, swiftly learning from what's happening around it to predict what may happen next? Now, what if we told you it could do all that on around 20 watts of power, less than the average lightbulb? Would that sound like a stretch?

As it happens, there are around eight billion of these miraculous devices operating in the world today. They’re called human brains, and they’re inspiring the relatively recent science of neuromorphic computing, a completely new approach to computer design in which elements of the system are modeled on the complex network of neurons and synapses that allow us to think and function.

“At the very basic level, neuromorphic computing is throwing out everything we think we know about computers and processors and looking at how biological brains compute,” says Mike Davies, Director of the Neuromorphic Computing Lab at Intel. “The guiding light is not just to achieve the levels of biological intelligence that we see in brains, but also the incredible efficiency and speed that we’re still very far from attaining with conventional technology.”

Traditional computers are based upon the classic Von Neumann architecture, in which data is constantly shuttled between a processing unit and a memory unit. This can create a bottleneck when large volumes of data are being processed. As a result, conventional computers are now approaching their limits, while also using mind-boggling quantities of energy to operate at those extremes. That’s been thrown into relief by the rise of “large language models” in AI, which require vast amounts of data and compute. It’s estimated that as much as 15 percent of the world’s total energy is now spent on some form of data manipulation, such as transmission or processing, and this figure is only likely to rise with the predicted millions of sensors that are necessary to enable a fully-fledged Internet of Things (IoT).

The neuromorphic approach offers a solution. It covers a range of different ways of mimicking neuroscience principles, and can apply to both hardware and software.


In Intel’s case, the company’s second-generation neuromorphic chip, Loihi 2, physically emulates the way that the brain processes data. Right now, as you read this sentence, your neurons are exchanging information with each other in a rush of electronic pulses (or “spikes”). The chip works in a similar way. It contains tens of thousands of silicon artificial neurons that also communicate through spiking electronic signals. This arrangement is known as a “spiking neural network” (SNN).

Whereas traditional chips incorporate a clock and work on the basis of continuously reading a rigid, sequential set of instructions, the neurons on the Loihi 2 chip work in parallel in an asynchronous way, and without any prescribed order. Like the neurons in our brain, its artificial neurons are event-triggered, and process information only after the receipt of an incoming activation signal.

A major benefit of this approach is that, as opposed to the always-on Von Neumann model, a spiking neuron network is effectively in “off” mode most of the time. Once triggered, it can then perform a huge number of parallel interactions.
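The spiking behaviour described above can be sketched with a textbook leaky integrate-and-fire (LIF) neuron (a simplification for illustration; Loihi 2's actual neuron models are programmable and considerably richer): the neuron integrates incoming spikes into a membrane potential that leaks over time, stays quiet below threshold, and fires only when the threshold is crossed.

```python
class LIFNeuron:
    """Leaky integrate-and-fire neuron: effectively 'off' until inputs accumulate."""

    def __init__(self, threshold=1.0, decay=0.9):
        self.threshold = threshold
        self.decay = decay        # fraction of membrane potential kept per step
        self.potential = 0.0

    def step(self, input_spike: float) -> bool:
        """Advance one timestep; returns True if the neuron fires."""
        self.potential = self.potential * self.decay + input_spike
        if self.potential >= self.threshold:
            self.potential = 0.0  # reset after firing
            return True
        return False

neuron = LIFNeuron()
inputs = [0.0, 0.6, 0.0, 0.6, 0.0, 0.0]
spikes = [neuron.step(x) for x in inputs]
print(spikes)  # [False, False, False, True, False, False]
```

Note that a single input of 0.6 never triggers a spike; only when two inputs arrive close enough together, before the potential leaks away, does the neuron fire, which is the event-triggered accumulation the article describes.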

“It’s exactly the same as the way the brain doesn’t churn every single feature of its incoming data,” says Jason Eshraghian, Assistant Professor at the University of California at Santa Cruz. “Imagine if you were to film a video of the space around you. You could be filming a blank wall, but the camera is still capturing pixels, whereas, as far as the brain is concerned, that’s nothing, so why process it?”
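Eshraghian's camera analogy can be sketched as a toy event filter (a sketch in the spirit of event-based vision sensors, not a real camera API): each pixel reports an event only when its brightness changes beyond a threshold, so a frame of blank wall produces no work at all.

```python
def frame_to_events(prev_frame, frame, threshold=10):
    """Emit (pixel_index, delta) events only for pixels whose brightness changed."""
    return [(i, b - a) for i, (a, b) in enumerate(zip(prev_frame, frame))
            if abs(b - a) > threshold]

blank_wall  = [128] * 8                                  # static scene
same_wall   = [128] * 8
moving_edge = [128, 128, 128, 200, 200, 128, 128, 128]   # something moved

print(frame_to_events(blank_wall, same_wall))    # [] -- nothing to process
print(frame_to_events(blank_wall, moving_edge))  # [(3, 72), (4, 72)]
```

A conventional camera would hand all eight pixels to the processor in both cases; the event representation is empty for the static scene, which is exactly the "why process it?" point in the quote.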

Because neuromorphic computing emulates the brain in this way, it can perform tasks using a fraction of the time and power needed by traditional machines.

Neuromorphic systems are also highly adaptable, because the connections between neurons can change in response to new tasks, making them well suited to AI. Analysts have therefore described it as a critical enabler of new technologies that could reach early majority adoption within five years.

Half a decade ago, Intel established a community of researchers around the world to explore the potential applications of neuromorphic computing in specific business use cases, ranging from voice and gesture recognition to image retrieval and robotic navigation. The results so far have been impressive, showing energy efficiency improvements of up to 1,000 times and speed increases of up to 100 times compared to traditional computer processors.

The potential of the neuromorphic approach in enabling less compute-hungry large language models for AI was recently demonstrated by Eshraghian and others at University of California Santa Cruz. Their “SpikeGPT” model for language generation, a piece of software that simulates an SNN through its algorithmic structures, uses approximately 30 times less computation than a similar model using typical deep learning methods.


“Large scale language models rely on ridiculous amounts of compute power,” he says. “Using spikes is a much more efficient way to represent information.”


Taking it to the edge
One of the major potential future benefits that comes from neuromorphic computing’s greater efficiency and speed is the capability to bring low-power, rapid decision-making to the increasing proliferation of devices that enable the Internet of Things. Think of autonomous vehicles, for instance. A neuromorphic chip negates the need to send signals over an internet connection for remote processing by powerful computers in the cloud. Instead, the device can carry out on-the-spot, AI-based learning in isolation—an approach known as “edge” computing.

“The dimension of remote adaptability and personalization that neuromorphic brings opens the door for all kinds of new capabilities with AI,” adds Davies, who believes the area of smart robots carrying out chores in the home, in particular, is one that’s ripe for development.

The term AIoT has been coined to describe the combination of AI and IoT, and California-based company BrainChip is already commercializing the concept with those new capabilities in mind. Its first-to-market digital neuromorphic processor, called Akida, is billed as “a complete neural processing engine for edge applications”.

Companies currently exploring the use of BrainChip’s technology include a leading car manufacturer that’s using it to boost the efficiency of in-car voice recognition, and a waste company that’s developing “smart bins” that can automatically sort and recycle different types of waste through a combination of AI-powered sensors and robotics—and wants to do it in the most efficient and eco-friendly way.

“We’re also working with space agencies to bring Akida into space, to be able to autonomously control machines on Mars, for instance,” says BrainChip CEO Sean Hehir. “When you have to run on solar power, you have to be very efficient. It also has to be completely autonomous, because there is no fast connection back to Earth. And don't forget that low power means low thermal emission—in space, you can't have a fan to cool something, because there's no air.”

Decentralizing AI from the cloud to a device also creates a desirable side effect: Greater privacy. “If you’re not moving data all around the world, you’re much more secure,” says Hehir. “It’s that simple.”



The Defense and National Security Perspective
Jeremy Baxter | Principal Systems Engineer for UAVs & Autonomy and QinetiQ Fellow


Any technology that can offer very fast reaction times or the ability to minimize power consumption will have real military benefits.

When combined with event-based sensing—which minimizes processing delays by reporting significant changes as soon as they occur—neuromorphic computing could allow us to create platforms that match the reaction times of birds and insects at a fraction of the weight and power of today’s technologies. Imagine a tiny, uncrewed aerial vehicle, for example, able to fly through woodland at high speeds yet avoid collisions.

There are two other application areas that could prove to be especially interesting. The first is defensive aid suites. These are military aircraft systems that offer protection from surface-to-air missiles, air-to-air missiles and guided anti-aircraft artillery where fast reaction times are crucial for survival.

The other is covert surveillance. Small, lower power devices are easier to conceal and last longer, playing to the strengths of neuromorphic processing.



Explore the other emerging innovation trends in the series…
  1. Mechanical human augmentation. Whether it’s additional limbs or smart exoskeletons, machinery is helping humans upgrade their natural capabilities.
  2. Power beaming. Sending power wirelessly over long distances could transform everything from electric vehicles to offshore wind farms.
  3. Biohybrid robots. Combining artificial and organic parts, biohybrid robots offer advantages such as self-repair and agility.
  4. Gene-editing and enhancement. Advances in biotech are spurring scientists to explore how genomes can be tweaked to make ecosystems more sustainable.
  5. Hyperspectral imaging. Hyperspectral cameras don’t merely record what something looks like, they can tell you what that thing is made from and help you see what the human eye cannot.
DOWNLOAD THE FULL REPORT
To find out more about QinetiQ, click here

To find out more about WIRED Consulting, click here














Here is a link to QinetiQ’s full report on emerging innovation trends:

 
Last edited:
  • Like
  • Fire
  • Love
Reactions: 61 users

TECH

Regular
Good Evening.....

Well, our Edge Box is due to be released to the public in the early part of this quarter, so I'm thinking late this month or very early in August.

About four weeks ago I decided to reach out to VVDN because, as others have pointed out, all you see on the VVDN homepage is Nvidia, Nvidia, Nvidia;
no mention or promotion of the soon-to-be-released Akida Edge AI Box.

My email was acknowledged, but the questions I posed remain unanswered; probably not unexpected, I guess.

These really quiet periods do test one's resolve; trying to keep the negative thoughts at bay does get harder as the years tick by. So many
articles are being published now about neural nets and mimicking the human brain: latency, low power consumption, real-time processing
on device, no cloud connectivity, security and privacy, etc. I read these articles and think to myself, this is what Peter told me/us back in
2018/19. It's so yesterday, but here we are, still waiting. It just goes to prove that we, BrainChip, were, or are still, well ahead of the curve.

I'm still 100% convinced we will succeed, but...........................when.

Tech July 2024

 
Last edited:
  • Like
  • Love
  • Fire
Reactions: 31 users
Top Bottom