BRN Discussion Ongoing

Worth a skim / read-through if you have time. Explains a bit on the hardware side.

While we don't get a mention (it's mostly the major players), there are some pertinent points on neuromorphic computing, plus a couple of specialty forms we do target and perform well on: ASIC and FPGA.

Megachips are ASIC, wonder why :)



In AI | Last updated: August 15, 2023


A Brief Introduction to the Hardware Behind AI​

By Amrita Pathak and Narendra Mohan Mittal

Innovative AI hardware has the potential to drive remarkable capabilities and revolutionize how people interact with technology and the world around them.
Have you ever thought about how a tiny chip, smaller than your thumbnail, can mimic human thought processes?
It’s a mind-blowing fact that the hardware behind artificial intelligence (AI) is the powerhouse that makes it possible.
As you explore the world of AI hardware, you will discover how GPUs, TPUs, and neural processing units shape the landscape of artificial intelligence. Their significance cannot be overstated.
In this article, I will discuss the complexities of AI hardware, its pivotal role in driving modern innovation, the technologies used, their pros and cons, and other details.
Let’s get started!

What Is AI Hardware?​

AI hardware consists of special parts that drive artificial intelligence technologies. These parts are created to manage the complex calculations needed for recognizing patterns, making decisions, and analyzing data.

Imagine them as the sturdy muscles that support the AI brain’s functions.
The heart of AI hardware lies in the processors such as Graphics Processing Units (GPUs), Tensor Processing Units (TPUs), and Neural Processing Units (NPUs).
  • GPUs: These were initially designed for rendering graphics. Because GPUs excel at parallel processing, they are well suited for training AI models.
  • TPUs: Created by Google specifically for accelerating AI computations, TPUs particularly excel in deep learning tasks.
  • NPUs: These can handle tasks involving neural networks and essentially mimic the neural connections found in the human brain.
All the above hardware components work together to process and analyze vast amounts of data, enabling AI systems to learn, adapt, and make predictions.

AI Hardware Technologies​


Let’s explore the key players in this technological symphony.

#1. Graphics Processing Units (GPUs)​

Originally designed for rendering complex graphics in video games, GPUs have surprisingly found their place in the realm of artificial intelligence. The key to their capability in AI lies in parallel processing – the ability to handle multiple calculations simultaneously.
Unlike traditional processors, GPUs excel at swiftly crunching vast amounts of data, making them an ideal choice for training intricate AI models. Their impressive processing power speeds up data manipulation and model training, significantly reducing the time required to educate AI systems.
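To make the idea of parallel data crunching concrete, here is a small CPU-side sketch using NumPy. It contrasts a serial loop with one batched operation; a GPU takes the same batched style to the extreme across thousands of cores, so treat this as an illustration of the principle, not GPU code.

```python
import time

import numpy as np

# Hypothetical illustration: the same elementwise operation applied
# one value at a time (serial loop) versus as a single vectorized
# call over the whole array (data parallelism).
data = np.random.rand(200_000).astype(np.float32)

start = time.perf_counter()
serial = np.empty_like(data)
for i in range(data.size):      # one multiply per iteration
    serial[i] = data[i] * 2.0
serial_time = time.perf_counter() - start

start = time.perf_counter()
vectorized = data * 2.0         # one batched operation
vector_time = time.perf_counter() - start

print(f"serial: {serial_time:.4f}s, vectorized: {vector_time:.5f}s")
```

Both paths compute the same result; the batched form is dramatically faster because all elements are processed together rather than one per instruction step.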

#2. Tensor Processing Units (TPUs)​

Coming from the innovation hub of Google, TPUs were crafted with a singular purpose – to supercharge specific AI workloads, especially those involving neural networks.
One remarkable aspect of TPUs is their exceptional efficiency, as they consume less power compared to traditional CPUs and GPUs while accomplishing these tasks.

#3. Deep Learning (DL)​

Deep Learning (DL), a branch of machine learning, mimics the way the human mind assimilates and comprehends information, but in digital form. This technology employs neural networks with multiple layers to progressively abstract and transform data.
Deep learning serves as the driving force behind modern AI, propelling it towards increasingly sophisticated accomplishments.
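As a rough illustration of "multiple layers that progressively transform data", here is a minimal forward pass through a stack of layers. The weights are random placeholders, not a trained model.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    """Simple non-linearity applied between layers."""
    return np.maximum(x, 0.0)

def forward(x, layers):
    """Pass input x through a stack of (weights, bias) layers:
    each layer is a matrix multiply plus a non-linearity."""
    for w, b in layers:
        x = relu(x @ w + b)
    return x

# Three layers: 4 inputs -> 8 hidden -> 8 hidden -> 2 outputs.
layers = [
    (rng.standard_normal((4, 8)), np.zeros(8)),
    (rng.standard_normal((8, 8)), np.zeros(8)),
    (rng.standard_normal((8, 2)), np.zeros(2)),
]

out = forward(rng.standard_normal((1, 4)), layers)
print(out.shape)  # (1, 2)
```

Training would adjust those weight matrices from data; the forward pass above is the part that specialized AI hardware accelerates.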

#4. Application-Specific Integrated Circuits (ASICs)​

ASICs serve as the tailored suits in the world of AI hardware. These chips are meticulously crafted to excel at specific tasks within AI computations, exhibiting remarkable efficiency.

Unlike generic processors, ASICs are designed with precision, honing in on particular types of calculations. This focused approach grants them exceptional speed and energy efficiency for AI workloads.

#5. Field-Programmable Gate Arrays (FPGAs)​

What if your computer’s hardware had the remarkable ability to transform?
This unique characteristic defines FPGAs (Field Programmable Gate Arrays).
Unlike conventional processors, FPGAs can be reconfigured after manufacturing to adapt and optimize their performance for specific tasks. This flexibility makes them the Swiss Army knife of AI hardware, offering a blend of ASICs' efficiency and conventional processors' versatility.
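One way to picture that reconfigurability: FPGA fabric is built from small lookup tables (LUTs) whose contents define the logic they implement, so "reprogramming" the chip means rewriting those tables. This toy Python model (not real FPGA tooling) shows the same cell becoming a different "circuit" when its table changes.

```python
# Toy model of an FPGA logic cell: a 2-input lookup table (LUT).
# The 4-entry truth table IS the "hardware configuration" -- loading
# a different table reconfigures the cell into a different gate.
def make_lut(truth_table):
    """truth_table maps (a, b) input bit pairs to an output bit."""
    def cell(a, b):
        return truth_table[(a, b)]
    return cell

AND = make_lut({(0, 0): 0, (0, 1): 0, (1, 0): 0, (1, 1): 1})
XOR = make_lut({(0, 0): 0, (0, 1): 1, (1, 0): 1, (1, 1): 0})

# Identical cell structure, different configured behaviour.
print(AND(1, 1), XOR(1, 1))  # 1 0
```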

#6. Neuromorphic Chips​

Imagine a world where computer chips function just like our brains, with their intricate connections and rapid signaling.
Enter neuromorphic chips. Unlike regular processors, these event-driven chips excel at multitasking and responding swiftly to events. As a result, neuromorphic chips are well suited to energy-constrained AI systems and real-time tasks that demand speed and efficiency.
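The event-driven idea can be sketched with the textbook leaky integrate-and-fire (LIF) neuron model – a simplified illustration of the principle, not any vendor's actual chip behaviour: the neuron sits idle until input spikes arrive, and only emits a spike when its accumulated potential crosses a threshold.

```python
# Simplified leaky integrate-and-fire (LIF) neuron: potential decays
# each timestep, jumps when an input spike (event) arrives, and the
# neuron fires and resets once the threshold is crossed.
def lif_run(spike_times, weight=0.6, threshold=1.0, leak=0.5):
    potential = 0.0
    output_spikes = []
    for t in range(max(spike_times) + 1):
        potential *= leak              # passive decay each step
        if t in spike_times:           # event: input spike arrives
            potential += weight
        if potential >= threshold:     # fire and reset
            output_spikes.append(t)
            potential = 0.0
    return output_spikes

# Closely spaced spikes accumulate enough potential to fire;
# widely spaced ones leak away before the threshold is reached.
print(lif_run({0, 1, 2}))   # -> [2]
print(lif_run({0, 5, 10}))  # -> []
```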

When it comes to choosing among these AI hardware technologies, companies often lean towards Graphics Processing Units (GPUs) and Tensor Processing Units (TPUs) for their AI tasks.
GPUs offer parallel processing power and versatility, making them a popular choice, especially for training complex AI models. Similarly, TPUs, created by Google, stand out for their ability to speed up neural network tasks, offering both efficiency and swiftness. These two options are favored because of their proven performance in handling the intense computational demands of modern AI applications.

AI Hardware vs. Regular Hardware​


Understanding the distinction between AI hardware and regular hardware requires you to learn about the components that power the astonishing capabilities of artificial intelligence.
Here’s a breakdown of how AI hardware sets itself apart from regular or traditional hardware.

Complex Computations​

AI tasks involve intricate calculations for pattern recognition, data analysis, making decisions, predicting events, etc. AI hardware is designed to efficiently handle these complex computations.

Parallel Processing Power​

AI hardware, such as GPUs and TPUs, excels at parallel processing – executing multiple tasks simultaneously without sacrificing performance. This enables quicker data processing and model training, which is critical for AI applications because you can deploy solutions faster.

Specialized Architecture​


AI hardware is purpose-built for specific AI workloads, like neural networks and deep learning algorithms. This specialized architecture ensures the efficient execution of AI-specific tasks, unlike regular hardware that lacks this tailored design.

Energy Efficiency​

AI hardware emphasizes energy efficiency due to the power-hungry nature of AI tasks. It’s optimized to perform AI computations using less power, prolonging the lifespan of devices and reducing operational costs.

Customization and Adaptability​

Regular hardware is versatile but lacks the customization level that you can attain with AI hardware like ASICs and FPGAs. AI hardware is designed to cater to specific AI tasks, enhancing performance and efficiency.

How Startups Are Adopting AI Hardware​

Integrating AI hardware into operations has become a strategic avenue for startups in the digital landscape, enhancing operations and driving innovation.
Let’s explore how startups harness the power of AI hardware.

Data Processing​

Startups use AI hardware, like GPUs and TPUs, to accelerate data processing and model training. This, in turn, enables them to perform tasks faster, make informed decisions swiftly, and create out-of-the-box solutions.

Cost-Effectiveness​

AI hardware’s parallel processing capability enables startups to accomplish more while utilizing fewer resources. This ultimately helps optimize costs and generate better ROI.

Customization​


In the world of startups, customized solutions are often a necessity, because every business has different goals, requirements, and constraints. They need solutions they can easily tailor to their own usage.
That’s where AI hardware comes into play. Specifically designed components, like ASICs and FPGAs, are easy to customize to match specific AI workloads. This provides more operational efficiency and boosts performance.

Edge Computing​

Many startups operate at the edge, where real-time processing matters. AI hardware such as neuromorphic chips can cater to that with event-driven processing.

Innovation Boost​

By incorporating AI hardware, startups can gain a competitive advantage. This technology allows them to develop innovative AI-driven products and services, positioning themselves ahead in the market.

Best AI Hardware Providers​

Now, let’s look into the best AI hardware providers in the market.

#1. Nvidia​

Nvidia, a global leader in AI computing, stands at the forefront of transforming industries through its innovative hardware. It has pioneered accelerated computing, an integral concept in AI’s functioning.

No longer limited to graphics, their GPUs serve as the brains behind AI operations, driving the computations that fuel its success. Whether powering data centers, the cloud, or personal devices, Nvidia’s hardware delivers the necessary computational power for AI applications.
Nvidia’s cutting-edge products, like the H100 GPU, are specifically designed to tackle complex AI tasks, solidifying their crucial role in the landscape of AI hardware.

#2. Intel​

Intel, a leading name in the tech industry, offers a wide range of AI hardware options. From data preprocessing to training, inferencing, and deployment, their comprehensive portfolio has got you covered.

Whether you need a data science workstation or advanced machine learning and deep learning tools, Intel simplifies the process of AI deployments.
One standout product is their Xeon Scalable processors, which provide accelerated AI capabilities and enhanced security for easy implementation in data centers worldwide.

#3. Graphcore​

Graphcore is an innovative company that has pioneered a new type of processor exclusively crafted for machine intelligence.

Their Intelligent Processing Units (IPUs) are purpose-built to handle the intricate computations required by AI, surpassing traditional hardware and exhibiting remarkable performance.
Graphcore’s comprehensive hardware and software solutions span across diverse sectors like finance, healthcare, and scientific research, enabling these industries to harness the power of AI efficiently.

#4. Cerebras​

Cerebras has significantly contributed to AI hardware through its Wafer Scale Engine (WSE). The traditional use of GPU clusters in scaling deep learning often demands extensive engineering hours, posing a practical barrier for many who wish to harness the potential of large-scale AI.

Cerebras’ WSE removes this obstacle by providing a cluster-scale AI compute resource that is as easy to program as a single desktop machine. This means you can utilize standard tools like TensorFlow or PyTorch without the need for complex adjustments.

#5. Edge TPU​

Developed by Google, Edge TPU is an ASIC that has been purpose-built for running AI at the edge.
This technology emerged in response to the growing demand for deploying AI models trained in the cloud on edge devices, driven by considerations of privacy, latency, and bandwidth.
With its compact physical size and low power requirements, Edge TPU offers remarkable performance while enabling high-accuracy AI deployment at the edge. It’s not merely a hardware solution; it combines custom hardware with open software and advanced AI algorithms.

#6. Amazon EC2 G4 Instances​

When exploring the world of AI hardware, don't forget to consider Amazon EC2 G4 Instances, as they are also a significant player in the industry.
G4 instances provide an affordable and flexible option, making them well suited to running machine learning models and graphics-intensive applications. They are designed to handle tasks like image classification, object detection, speech recognition, and more.
You have the option to select either NVIDIA or AMD GPUs, each with its own unique advantages. Thus, it can become a valuable asset in your AI hardware toolkit.

#7. Qualcomm​

Qualcomm is undoubtedly a global leader in wireless technology, making significant progress in the field of AI hardware. They are currently developing power-efficient AI technology that can be applied to a wide range of products and industries.

Qualcomm’s AI solutions bring several advantages, such as user privacy protection, improved reliability, and efficient use of network bandwidth.
With their AI Engine at the wheel, Qualcomm is driving the advancement of the Connected Intelligent Edge. This means the solutions can help enhance user experiences across various devices.

Advancements and Innovations in AI Hardware​

The AI hardware industry is experiencing rapid advancements and groundbreaking innovations that are reshaping the artificial intelligence landscape.
Let’s dive into some exciting progress in this dynamic field.

Specialized Chips for AI​

Tech giants like Google and Apple are responding to the complex requirements of AI with innovative solutions. They are revolutionizing the field by spearheading the development of specialized chips tailored to perform AI tasks.

Neuromorphic Computing​


Neuromorphic chips offer cutting-edge technology in the field of AI hardware. They emulate the intricate neural connections of the human brain, paving the way for unprecedented advancements. This new era of neuromorphic computing combines efficiency and brain-inspired design to shape a future where AI can reach incredible heights.

Quantum Computing​

The potential of quantum computers to tackle complex problems surpasses the capabilities of classical computers by leaps and bounds. While we are in the initial stage of witnessing the practical applications of quantum computing in AI, the impact it will have on AI hardware is profound.

Edge AI Acceleration​

The rise of edge computing is being accelerated by AI hardware specifically designed for real-time, energy-efficient processing. This technological progress holds significant relevance, especially for devices such as IoT sensors and wearables.

Memory Innovations​

Are you familiar with how AI algorithms work? They can be quite memory-intensive, demanding a lot of memory capacity and bandwidth.
Fortunately, there are innovative solutions available to address this issue. Two emerging memory technologies, called resistive RAM (ReRAM) and phase-change memory (PCM), are stepping in to bridge the gap.

Pros and Cons of Using AI Hardware​


By incorporating AI hardware, businesses and industries can harness the power of artificial intelligence effectively. But it's important to understand the pros and cons associated with using AI hardware.

Pros​

  • Enhanced performance: AI hardware can handle complex AI tasks, offering faster and more efficient processing compared to traditional hardware.
  • Efficiency: Some AI chips, such as TPUs and neuromorphic chips, are designed to be energy efficient. By using these specialized chips, you save money on operations and are kinder to the environment.
  • Speed: AI hardware significantly speeds up data processing and model training, empowering you to gain faster insights and make real-time decisions in various scenarios.
  • Complex problem solving: Emerging hardware often discussed alongside AI accelerators, such as quantum computers, promises the ability to tackle complex problems at unprecedented speed.
  • Scalability: AI hardware can adapt and expand to accommodate the increasing demands related to growing datasets and evolving AI applications.

Cons​

  • Cost: The initial investment in AI hardware, including development, deployment, and maintenance costs, can be high.
  • Lacks versatility: Some AI hardware, like ASICs, is optimized for specific tasks, limiting versatility for broader applications.
  • Complex implementation: Integrating AI hardware requires both expertise and resources, which may pose challenges for smaller businesses during implementation.

Conclusion​

AI hardware has remarkable capabilities to revolutionize different industries. Using AI hardware to execute heavy AI tasks is advantageous for businesses and individuals alike. It not only boosts efficiency and expedites problem-solving but also allows you to create scalable, futuristic AI solutions.
As AI hardware evolves, it’s expected to unlock opportunities and push boundaries in the field of technology. Whether you’re a business leader or simply curious about technology, understanding the aspects of AI hardware offers a glimpse into an exciting future led by innovative technologies.

  • Amrita Pathak
    Author
    Amrita is a freelance copywriter and content writer. She helps brands enhance their online presence by creating awesome content that connects and converts. She has completed her Bachelor of Technology (B.Tech) in Aeronautical Engineering.

  • Narendra Mohan Mittal
    Editor
    Narendra Mohan Mittal is a Senior Digital Branding Strategist and Content Editor with over 12 years of versatile experience. He holds an M-Tech (Gold Medalist) and B-Tech (Gold Medalist) in Computer Science & Engineering.
 

robsmark

Regular
All good points and also no issue with you voicing the concerns, was just happy to discuss as we have.
Will have a think about what you've raised.

I didn't agree with the following from your previous post however, which is why I mentioned holding onto shares for a company destined for failure.
"Without [commercial uptake] we’re done, and let’s be honest, it’s not looking so good right now is it?"
For me, it's fine, and considering the support on certain posts of mine and others, there's a lot of people who see it my way too.
I truly believe if we had a more reasonable pull back since the $2.34 highs and say hovered around the upper $1 mark, there would be less criticism, or at least a lower frequency of the echoing of said criticism.

For me it's a binary outcome. We succeed or we don't.
There isn't an in between so regardless of who is right or wrong about the outcome, the journey doesn't matter for me yet.
I take the risk and I hope for the reward.
This forum really isn't helping people evaluate their decisions anymore, so I'm surprised so many seek comfort here.

With your 4 decent questions at the end, would an answer to those mean you are satisfied?
Cos it sounds like you have far more concerns than that.

Thanks for the civil and well articulated response Damo. Whether we agree on everything or nothing is irrelevant, being able to voice criticism, comment, or give praise in an environment without reprisal is what separates this forum from the other. I appreciate that.

With your 4 decent questions at the end, would an answer to those mean you are satisfied?
Cos it sounds like you have far more concerns than that.

To be fair, I feel like a battered bull at the moment. I'm a human with emotions, and obviously seeing a significant amount of money slowly pissed up the wall, and my target become a blip on the horizon, leaves me feeling very disappointed, a feeling we all no doubt share.

I think knowing more about the status of the aforementioned would go a long way to helping me make a more informed decision about this investment, and that's what we all need, right? All true information available to enable us to make informed decisions, something which I don't think we have fully had up to now. This company has a great way of hinting at things, but not saying anything, then hiding behind privacy contracts. I think this is horse shit, and believe they could offer us much more.

The EAP is the key here. Knowing more about the level and type of engagement would help me to understand the potential future success of this company. If the company announced that all original companies remain engaged, the distribution of these companies was X, Y, Z, five are nearing the completion of their products, and ten are on hold – that would give me confidence. Whereas if they announced that there has been no activity over the past 18 months, I would feel very differently. I believe we as shareholders are entitled to know this due to the ASX continuous disclosure requirements, and any customer seriously engaged would understand this requirement very well.

Can anyone here truly say that they are happy with the level of information provided to them by the company? I mean as shareholders, what are we actually told?
 
I agree entirely with this. More transparency is needed as to where the company sits regarding negotiations, and whether negotiations actually exist. This silence is BS; we effectively own this company and have no idea what's happening inside its walls, yet the bonuses keep rolling, the SP keeps dropping, and no new deals are being signed. It's getting very old and I expect better accountability.
5 qtrs or so of funding gives plenty of time to sign some more licenses or see some green shoots of cadence with revenue. That's all I'm really relying on as a shareholder. If the company hits some of these targets, then the share price will take care of itself in a pretty aggressive re-rate. So we wait patiently to see what time brings to light.
 

Diogenese

Top 20
Now, I could be wrong in my understanding and happy to be corrected if so.

However, been reading so many diff articles, papers etc like everyone else and my simple thinking....

There are obviously algorithms whether they be CNN or SNN which are the software component.

Then you have the hardware to do the processing of said algos, whether that be GPU, CPU, TPU or neuromorphic like AKD1000, 1500 etc.

Now, I'm thinking the company thought it may have been a bit of an easier switch initially by producing AKD1000 as the hardware in conjunction with the CNN2SNN converter.

Kind of makes sense as the majority of algos appear to be CNN based and we would need SNN algos to run on the Akida hardware. The CNN2SNN would facilitate that to show the benefits.

As the company said, the AKD1000 take-up wasn't really forthcoming as existing solutions were fit for purpose and needs at the time for end users to go changing up their production schedules / models / products etc.

It did however provide a POC for those who tested and played with AKD1000 and a feedback channel to BRN for the next iterations.

These days, I'm starting to see more commentary in various places that SNN algos are now getting traction and written accordingly and they will require the neuromorphic hardware side to be able to process them to full capability.

This is where I believe we will start to find our space as and when these SNN algos start finding their spot in newer upcoming products, particularly at the edge, developers and companies will approach the Megachips and Renesas (and us) of the world.

Will be interesting to find out eventually what the Renesas tape out was for and if SNN algo driven.

The downside to my thinking is that our initial mkt penetration was a little muted as we know and provided some additional time for competitors to refine their own offerings.

The upside is that the BRN team have been listening to clients and evolving Akida accordingly with VIT and TENN which will hopefully keep us ahead of the rest though I feel the gap closed slightly during the last couple of years.

Given we have the integration of our IP with the likes of the M85 it allows the current crop of product developers and ARM users to now include specifically written SNN algos.

We need to seriously target end user products that now have SNN algos written imo and get the flow on effect of the IP agreements and / or the production royalties through Megachips and Renesas.
Hi Fmf,

There are, as you point out, CNN and SNN algorithms which run on CPU/GPU. Brainchip Studio was a SNN running on CPU. MetaTF Akida simulator is an SNN algorithm which runs on CPU.

However, when it comes to the Akida SoC, the silicon is the algorithm. The circuitry is set up to imitate the operation of the brain, in which the neurons operate asynchronously, firing when their set threshold is reached by the incoming spikes. This depends on whether the incoming spikes are a sufficient match (probability, not mathematical exactness) for the weights loaded into the neuron. The weights are derived from the model library. There is no "program" controlling the operation of Akida in its function of classification/inference. The associated processor is used to configure the connections between the NPUs to form the required layers and associate the weights with the NPUs, but there are no "instructions" from the CPU for the actual classification process. Implementing instructions is a time- and power-consuming process. Once the NN is configured for the assigned task, the CPU minds its own business.
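That threshold-and-weights description can be sketched in a few lines – a toy illustration of the matching idea, not Akida's actual circuitry. The names and vectors below are hypothetical.

```python
import numpy as np

# Toy sketch of "sufficient match" firing: a neuron holds a weight
# pattern, and it fires when the overlap between incoming spikes and
# that pattern reaches its threshold -- approximate pattern matching,
# with no instruction-by-instruction program controlling the result.
def neuron_fires(spikes, weights, threshold):
    """spikes, weights: binary vectors; fire on sufficient overlap."""
    overlap = int(np.dot(spikes, weights))
    return overlap >= threshold

weights = np.array([1, 1, 0, 1, 0, 1])      # pattern "loaded" into the neuron

close_match = np.array([1, 1, 0, 1, 0, 0])  # overlaps in 3 positions
poor_match  = np.array([0, 0, 1, 0, 1, 0])  # overlaps in none

print(neuron_fires(close_match, weights, threshold=3))  # True
print(neuron_fires(poor_match, weights, threshold=3))   # False
```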
 

keyeat

Regular

Damo4

Regular
Can anyone here truly say that they are happy with the level of information provided to them by the company? I mean as shareholders, what are we actually told?

The short answer is yes, but based on a few things, most of which are based on my investment style:
  1. I've held companies that have released fluff to no end and then effectively collapsed into a back-door ASX listing for another company
    • The bigger the pump, the harder the dump
  2. I own a significant # of shares (for me) but not enough to warrant losing sleep over
  3. I own stocks in other ASX companies, so no sleep lost again
  4. I believe in the technology, so an echo of point 1 where I don't need much re-assurance
  5. Brainchip is currently the most de-risked it's ever been, regardless of how we have received the information
  6. Renesas/Megachips are yet to release their products and we are yet to understand the scale, which I'm glad management haven't hinted at
  7. I like stocks with high risk/reward, it creates bargains for those willing to take a punt, like we all have
  8. A heads-down approach could mean a strong foundation – we can currently only speculate about the scale of this AI market
    • Would happily have sales come later than expected if the technology does in fact become ubiquitous.
 

Mazewolf

Regular
https://cosmosmagazine.com/technology/materials/mimic-human-vision-neuromorphic-chip/

A team led by researchers from Melbourne’s RMIT University has produced a tiny breakthrough device that “sees” and creates memories in a similar way to human vision. The research is published in the journal Advanced Functional Materials.

The device is a potential step toward applications such as self-driving cars and bionic eyes. It is only three nanometres thick – thousands of times thinner than a human hair – and requires no external parts to operate.

Yet it can capture, process and store visual information, just like the optical system of a human’s eyes and brain. And it does all of this with a tiny amount of data, making it a prime candidate for applications that require ultra-fast decision making.
cont. ...

sorry if reposted, looks interesting as use case for Akida in future...
 
Hi Fmf,

There are, as you point out, CNN and SNN algorithms which run on CPU/GPU. …
Awesome.

Thanks for the additional clarity.

So, when I read about SNN algos being written and requiring neuromorphic hardware to process, where does that step come in?
 
Thanks for the civil and well articulated response Damo. …

Can anyone here truly say that they are happy with the level of information provided to them by the company? I mean as shareholders, what are we actually told?
Yes, I think it's a function of a poorly performing share price. With an 85% reduction in share price off ATHs, and no visible non-Megachips IP signings, there's less the company can talk about that will satisfy shareholders. By its very nature, you'll be getting forward-looking guidance without confirmation.

When a company valuation is suffering, shareholders tend to go to the narrative, fundamentals, and newsflow to try to find something to hold onto to justify their position. Getting more information and communication isn't going to make the share price better. It's simply a way of trying to psychologically justify a poor investment to yourself.

That's where I see things, and it's why I always manage risk appropriately.
 

IloveLamp

Top 20
 

Attachments

  • Screenshot_20230821_145257_LinkedIn.jpg

IloveLamp

Top 20
Hewlett Packard advertising for a research scientist.....check out the preferred experience

"Experience with design and testing neuromorphic photonic integrated circuits is also preferred."

 

Attachments

  • Screenshot_20230821_145654_LinkedIn.jpg

Diogenese

Top 20
Awesome.

Thanks for the additional clarity.

So, when I read about SNN algos being written and requiring neuromorphic hardware to process, where does that step come in?
As far as Akida is concerned, configuring the NN (determining the number of layers and allocating NPUs to each layer) and determining the weights from the model library are the functions carried out by the CPU preliminary to using the NN to classify input signal spikes.

For CPU-based SNNs, the programme includes instructions to compare the model library images with the incoming signals.
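For what it's worth, the two-stage flow described above can be sketched in a few lines. This is a hypothetical illustration only, not BrainChip's actual API or Akida's real configuration flow: the CPU first configures the network (layers, unit allocation) and loads weights/templates from a "model library", then a naive CPU-based classifier matches incoming spike patterns against the stored templates.

```python
import numpy as np

rng = np.random.default_rng(0)

# Step 1 (CPU, preliminary): configure the network -- number of
# layers and how many processing units are allocated to each.
# Illustrative only; real hardware configuration is far richer.
layer_config = [(0, 64), (1, 32), (2, 10)]  # (layer index, units)

# Step 2 (CPU, preliminary): load templates from a "model library".
# Here each class label maps to a stored binary spike template.
model_library = {label: rng.integers(0, 2, size=128) for label in range(10)}

def classify(spike_pattern):
    """Naive CPU-based SNN-style matching: return the label whose
    stored template has the greatest overlap with the input spikes."""
    scores = {label: int(np.sum(template & spike_pattern))
              for label, template in model_library.items()}
    return max(scores, key=scores.get)

# Step 3: classify an input spike pattern.
incoming = model_library[3].copy()  # start from a clean class-3 pattern
incoming[:5] ^= 1                   # flip a few spikes to simulate noise
print(classify(incoming))           # should recover class 3 (highest overlap)
```

The point of the sketch is the division of labour: everything up to `classify` is ordinary CPU preparation, while on neuromorphic hardware the matching itself would be offloaded to the NPUs rather than looped over in software.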
 

WhiteDove

Member
52 week low... wow.
 

Esq.111

Fascinatingly Intuitive.
Afternoon WhiteDove ,

Try 33 Month Low ,

Regards,
Esq
 

Xray1

Regular
Seriously? In a veritable vacuum of information and updates from the company, 40 mins is barely enough imo. That response shows a complete lack of awareness by the company imo.
Personally, I was quite offended that they started the update with the "Strike 1" situation about their remuneration ..... So imo it seems their first priority off the bat in that update was to cover their own monetary interests, well before the other interests / concerns of s/holders ....... this strike one issue imo could and should have been dealt with at the end of next week's upcoming update ........
 

Foxdog

Regular
Personally, I was quite offended that they started the update with the "Strike 1" situation about their remuneration ..... So imo it seems their first priority off the bat in that update was to cover their own monetary interests, well before the other interests / concerns of s/holders ....... this strike one issue imo could and should have been dealt with at the end of next week's upcoming update ........
Well, if the SP continues to drop during this information hiatus then they'll be looking elsewhere for jobs.
 
Far out!
Can people take things without twisting them and making up stories that may or may not be true?
The question-and-answer session was just that: an opportunity to answer the concerns of shareholders.
We (as a shareholder group) certainly did express concern about the current planned remuneration package at the AGM.
Therefore it was brought up in the Q&A session. Why do people have to add a story to it, like they are addressing their monetary plans first?
The order of the questions may or may not have been prioritised.

I, for one, feel that if the remuneration packages are not part of the opportunities for employment, BRN may not have the power to attract the best in the business going forward.
So how about attaching that story to why this concern was dealt with first in the Q&A?
I.e. the board believes this is a vital part of the BRN implementation plan going forward, to remain at the forefront of the neuromorphic leader board in terms of IP, employee quality and our technology advancements.
I am down a considerable amount atm. I too feel the pain and am hanging out for some revenue-related news to excite the market.
I choose to hold, believing the board/directors of the company are going to deliver on their plan and are progressing all things as required by the market to ensure BRN has a solid future.
 

Xray1

Regular
The last Quarterly report stated that:

"The Company is currently experiencing its highest ever level of commercial engagements, the volume and quality of which are improving rapidly as a larger number of customers learn about BrainChip and our 2nd Generation technology which will be available in late Q3."

I do hope that the second generation Akida 2E, 2S & 2P does come out at the end of this quarter as stated by the Co above, that being 30 September 2023 (only some six weeks away). Imo, any delay in the delivery of Akida Generation 2 will only further exacerbate negative s/holder sentiment, cause the s/price to fall further and, more importantly, cast further doubt on management's ability to lend credibility to the statements and timelines they provide.
 
Last edited: