BRN Discussion Ongoing

McHale

Regular
Gidday Believers... well, the next 90 days are going to be telling. Quoting Sean: "This year, 2024, is going to be a critical year for our company."
"I know the dates that the company/ies are making their decision/s as to whom they will sign with."
"I hope to get a couple over the line later this year."
Antonio: "Quite frankly, we've (the board) been giving Sean shit; that comment will probably end up in the papers."
AND FINALLY....
A big congratulations to my AFL team... what a year, well played guys, the entire playing group contributed yesterday. Chris Fagan, what a humble, genuine human who can show real emotion towards his players... Brisbane Lions 🦁🦁🦁
❤️❤️❤️ Akida Tech
Hi @TECH, that was an awesome win by the Lions, capping off an amazing finals series with 3 wins on the road to seal the deal. I will be very happy with this for quite a while. GO LIONS. Maybe catch you at a game one day.

On the BRN front, we are now in October, so I am thinking and feeling there is a good likelihood of a win here before much longer; Sean has put it on the line, as per your quotes.

However, it is now October, and I have a feeling the price will continue to rise from here. That last CR was really interesting to me, in that the discount to market was a relatively meagre 4.5% off the 10-day VWAP (off the top of my head). Many CRs have to offer a significantly larger discount to attract capital support, particularly under current conditions, where money is tight and interest rates are actually higher than the 4.5% discount BRN got its $20m for.

Then the underwriters said they would not sell any of the shares for 12 months, so I believe Sean must have been pretty convincing about the prospects of a much higher SP over the coming months; otherwise I don't think the CR could have been pulled off on such good terms.

That's all my opinion, or what some would call speculation, but I reckon there could be some validity to it. Many CRs are done at a 10% to 20% discount to market; BRN got that money cheap, which of course also means we didn't cop anywhere near as much dilution. I was surprised at the terms of the deal.
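For anyone who wants to play with the numbers, here's a rough back-of-the-envelope sketch of how the discount flows through to dilution. The VWAP and pre-raise share count below are placeholders I've made up purely for illustration, not the actual deal terms:

```python
# Back-of-the-envelope check on a capital raise priced at a discount to VWAP.
# The VWAP and share count are illustrative placeholders, NOT the real deal terms.

vwap_10d = 0.30                   # assumed 10-day VWAP in $ (hypothetical)
discount = 0.045                  # the 4.5% discount discussed above
raise_amount = 20_000_000         # the $20m raise
shares_on_issue = 1_900_000_000   # hypothetical pre-raise share count

issue_price = vwap_10d * (1 - discount)
new_shares = raise_amount / issue_price
dilution = new_shares / (shares_on_issue + new_shares)
print(f"Issue price: ${issue_price:.4f}, dilution: {dilution:.2%}")

# Same $20m at a 20% discount for comparison: cheaper shares, more dilution
new_shares_20 = raise_amount / (vwap_10d * (1 - 0.20))
print(f"At a 20% discount, dilution: {new_shares_20 / (shares_on_issue + new_shares_20):.2%}")
```

The point of the comparison: the deeper the discount, the more shares you must issue for the same $20m, hence more dilution.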

Go Lions
 
  • Like
  • Love
  • Fire
Reactions: 51 users


McHale

Regular
Not boasting, just proving there are a few of us lucky ones!
View attachment 70069

My BRN holding shows green too. Buying the back end of that Merc story was always risky; the price was 36.5c the prior October, so IMO anything over 50c was high risk, particularly without any other announcements from BRN.
 
  • Like
Reactions: 4 users

CHIPS

Regular
I also hope that she is healthy again.

I see too much speculation and guesswork in your statement!
She mentions that she has a WhatsApp group with a few members.
She writes that she knows Thomas Hulsing, possibly via LinkedIn or maybe even privately.
In general, I still don't see any proof that Thomas Hulsing really is in the WhatsApp group.
You can continue to speculate; maybe someone will dare to contact Thomas Hulsing.

If you do not believe what you were told, contact Hülsing yourself! Where is the problem?
He is an investor just like us!
 
  • Like
Reactions: 2 users

CHIPS

Regular
Thank you!
Today we are celebrating our reunification here in Germany, no more wall, no wall, no wall, break down the wall - break down the wall BREAK DOWN THE WALL
THE WALL MUST GO! (DIE MAUER MUSS WEG!)

I find two other things from you Down Under extremely interesting.
One is Shiraz gin and the other, even more fascinating, is Shiraz sparkling wine.

Break down the wall ✊

___
Sorry!
...not to forget Vegemite

You are either from East Germany, which used to be behind the wall (only those Germans really celebrate that day), or you have opened the second bottle of wine already 😂🤣
 
  • Haha
  • Like
  • Love
  • Wow
Reactions: 5 users


Mccabe84

Regular
Sorry, but there was no link in the post from the Facebook page.
FB_IMG_1727945524867.jpg

Here's a copy of it without the volume bar in it 🤦‍♂️. Once again, this is a copy from someone's post in a Facebook group.
 
  • Like
Reactions: 12 users

7für7

Top 20
You are either from East Germany, which used to be behind the wall (only those Germans really celebrate that day), or you have opened the second bottle of wine already 😂🤣
Actually, both of them have nothing to celebrate, especially the East Germans. When the border opened, the West Germans made a lot of money off them… the East Germans bought everything overpriced, from cars that were already fit for the scrapyard (but they bought them because they were Western cars) to electrical devices… even their bank savings became almost worthless overnight… and now they can't even like each other… Yeah, happy reunion 🤡
 
  • Sad
  • Like
  • Love
Reactions: 4 users

Boab

I wish I could paint like Vincent
View attachment 70312
Here's a copy of it without the volume bar in it 🤦‍♂️. Once again, this is a copy from someone's post in a Facebook group.
Pretty sure this was from their annual report.
If you go to the Tata website you will be able to read all about it. Beware, it is substantial.
 
  • Like
  • Fire
Reactions: 10 users

itsol4605

Regular
If you do not believe what you were told, contact Hülsing yourself! Where is the problem?
He is an investor just like us!
😂 I have already contacted him and asked. He read the discussion about the WhatsApp group and then replied to me.🙂👍
 
  • Like
Reactions: 1 user

Diogenese

Top 20
Pretty sure this was from their annual report.
If you go to the Tata website you will be able to read all about it. Beware, it is substantial.


https://www.tata.com/business/tata-elxsi

Projects, Alliances and Collaborations

Transportation

  • Collaborated with IIT-Guwahati for EV technologies, focusing on problems such as digital analysis of electrical signature data for traction motors
  • Partnership with Cultos Global for integrating its blockchain mechanism with its TETHER connected vehicle platform, introducing a driver reward through a high-trust and high-privacy blockchain model
  • Joined eSync Alliance to help standardize and accelerate OTA initiatives, hence accelerating the industry shift towards SDV
  • Collaborated with Indian Institute of Science (IISc) for Automotive cybersecurity Solutions using AI and ML-based intrusion detection
  • Partnership with NIT-Kozhikode to establish a state-of-the-art laboratory for EV technologies
Media and Communications

  • Strategic alliance with Ateme, a global leader in video compression, delivery, and streaming solutions to deliver a pre-integrated FAST (Free Ad-Supported Television) channel deployment solution
  • Partnership with Accuknox, the developer of NIMBUS, a state-of-the-art cloud-native security solution for advancement in network transformation and security, offering operators a comprehensive solution for building and securing autonomous networks
  • Tata Elxsi collaborates with Telefónica in the domain of automation of cloud infrastructure for telecommunication, integrating ETSI OSM with Tata Elxsi’s NEURON for unprecedented agility
  • Partnership with INVIDI to develop targeted advertising solutions and create new revenue streams for enterprises
Healthcare and Life Sciences

  • Partnership with BrainChip for driving Akida technology into medical devices and industrial applications leveraging its superior AI performance on the edge
Key Wins

Transportation


  • Established strategic partnership with a global automotive OEM for software development in the SDV domain
  • Selected as a strategic innovation and development partner for the advancement of next-generation EV and on-board systems by a leading European automotive supplier
  • Awarded a multi-year, multi-million-dollar contract for the design and development of Level 3+ autonomous driving systems for passenger vehicles by a US automotive Tier 1
Media and Communication

  • Selected as a strategic partner for transforming video services across several LATAM countries for a multi-country telecom operator
  • Tata Elxsi’s 5G Orchestrator and Service Automation Suite has been selected by a leading Telco for its upcoming network rollout and deployment
  • Bagged a large product engineering consolidation deal for a leading MSO in North America, leveraging unmatched offshore execution capability and AI to improve efficiency
Healthcare and Life Sciences

  • Multi-year deal for innovation and re-engineering of a critical care device platform targeting emerging markets, from a European leader
  • Design-led New Product Development (NPD) deal from a Global Healthcare Company to innovate a new line of next-gen Smart Hospital equipment
  • Implementing a multi-year regulatory workflow transformation program by a European medical device OEM. This engagement leverages AI to significantly enhance quality of outcome and efficiency of workflows

Can we assume that Akida is involved in all the Key Wins for Healthcare and Life Sciences?
 
  • Like
  • Fire
  • Love
Reactions: 44 users


Frangipani

Regular

I should really go to bed soon, but I just keep on finding Brainchip-related nuggets online today. Believe it or not, even a job ad from Ukraine 🇺🇦!



Junior Machine Learning Engineer


Data Science UA
Diana Marchenko, IT Recruiter

About us:
We are Data Science UA, and we are a fast-growing IT service company. We are proud of developing the Data Science community in Ukraine for more than 7 years. Data Science UA unites all researchers, engineers, and developers around Data Science and related areas. We conduct events on machine learning, computer vision, intelligence, information science, and the use of artificial intelligence for business in various fields.

About role:

Data Science UA is looking for a Junior Machine Learning Engineer to become a helping hand for our internal Data Science team. Do you want to work on some real projects for our Clients and/or perform R&D of novel AI algorithms and platforms (in particular, on the neuromorphic platform Brainchip Akida)? Then apply and join our team! You will report directly to our Head of AI Consulting, PhD, which is a great opportunity for you and your further growth.

Requirements:

✅0.5-1 year of experience as an ML Engineer/Data Scientist or related (alternatively, participation in open-source ML projects or Kaggle competitions);
✅Minimal expertise in CV or NLP-based projects;
✅Good knowledge of math, CS and AI fundamentals;
✅Proficiency in Python;
✅Student/graduate in the field of exact sciences (computer science, mathematics, cybernetics, physics, etc.);
✅Intermediate English.

Nice to have:

✔Have your own pet projects;
✔Completed different Data Science related courses.

Responsibilities:

💡Participation in AI consulting projects for Clients from Ukraine and abroad;
💡Work on complex R&D projects;
💡Write and publish scientific papers (and possibly your diploma in University);
💡Preparation of technical content (presentations, analytical articles).

We offer:

🔥Opportunity to grow and participate in large projects of our AI R&D centers;
🔥Possibility of remote work;
🔥Development of professional skills and support in acquiring new knowledge (by attending conferences on Data Science, exchange of experience, etc.);
🔥Friendly team;
🔥Interesting tasks.

About Data Science UA

We understand that you need more than just a search engine to find IT and technical professionals or to progress your own career; that is why we established Data Science UA.
One place connecting business and developers.

Company website:
https://data-science-ua.com/

DOU company page:
https://jobs.dou.ua/companies/data-science-ua/
Job posted on 29 November 2023

Just over ten months ago, I spotted the above job ad by Data Science UA, a company from Ukraine:

“(…) Data Science UA is looking for a Junior Machine Learning Engineer to become a helping hand for our internal Data Science team. Do you want to work on some real projects for our Clients and/or perform R&D of novel AI algorithms and platforms (in particular, on the neuromorphic platform Brainchip Akida)? (…)”

I just noticed that earlier today their Head of AI Consulting, who has a PhD in polymer chemistry and describes his job as "specializing in integrating AI solutions into chemical and pharmaceutical R&D", posted very favourably about neuromorphic chips in general and Akida in particular, which they had assessed in their latest R&D project, benchmarking it against NVIDIA Jetson hardware. A paper on this R&D project is currently being worked on.

ED9FEF02-B3E7-43D5-B165-4813E6624A8D.jpeg


E80AD400-B119-4988-8AF2-22F4C64CE591.jpeg


D52B643F-C51D-44C3-B244-9515FF4CF261.jpeg
 
  • Like
  • Fire
  • Love
Reactions: 47 users
Nice to see we make the list, and for good reason too, imo. The author's final thoughts appear to concur with what's needed and what BRN (and obviously other smaller providers) are offering against the behemoths and their power-sucking, water-evaporating solutions.

We know we play in the in-cabin space, but it's interesting that it was highlighted as a specialisation :unsure:


Alan Morrison is an independent consultant and freelance writer on data tech and enterprise transformation. He is a contributor to Data Science Central with over 35 years of experience as an analyst, researcher, writer, editor and technology trends forecaster, including 20 years in emerging tech R&D at PwC.

A few enterprise takeaways from the AI hardware and edge AI summit 2024

Image by Gerd Altmann from Pixabay

Enterprises haven’t seemed as enthusiastic about generative AI and large language models (LLMs) lately as they have been in previous years. The Kisaco Research event I attended in September provided some reasons why.

Current gen AI processing far too centralized to be efficient or cost effective​

If there’s a single takeaway that I could point to, it’s how overloaded data centers are and how limited edge infrastructure has been when it comes to effectively reducing that data center load for gen AI applications. Ankur Gupta, Senior Vice President and General Manager at Siemens Electronic Data Automation noted during his talk that “the opportunity for low power needs to be met at the edge.”

Gen AI-oriented data centers must handle an inordinate amount of heat per GPU. Gupta asserted that half a liter of water evaporates with every ChatGPT prompt.

The newest, largest GPUs run even hotter. Tobias Mann in The Register in March 2024 wrote that “Nvidia says the [Blackwell] chip can output 1,200W of thermal energy when pumping out the full 20 petaFLOPS of FP4.” Even so, Charlotte Trueman writing in August 2024 in Data Center Dynamics and citing Nvidia CFO Colette Kress, wrote that “Nvidia was expecting to ship ‘several billion dollars in Blackwell revenue during Q4 2024.’”

Edge infrastructure innovation is key, with significant spending planned. IDC recently estimated that global spending on edge computing will reach $228 billion in 2024, a 14% increase from 2023. IDC forecasts spending to rise to $378 billion by 2028, a 13 percent CAGR from 2024 levels.

The research firm “expects all 19 enterprise industries profiled in the spending guide to see five-year double-digit compound annual growth rates (CAGRs) over the forecast period,” with the banking sector spending the most, according to CDO Trends.

To justify this level of investment, Lip-Bu Tan, chairman of Walden International, forecast edge AI revenue potential of $140 billion annually by 2033.
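(As a quick sanity check, the implied growth rate from those IDC figures is easy to verify in a couple of lines of Python:)

```python
# Implied CAGR from IDC's edge-spend forecast: $228B (2024) -> $378B (2028)
cagr = (378 / 228) ** (1 / 4) - 1
print(f"{cagr:.1%}")  # ~13.5%, consistent with the ~13 percent CAGR cited
```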

Aren’t smaller language models (SLMs) better for most purposes?​

Donald Thompson, Distinguished Engineer @ Microsoft / LinkedIn, compared and contrasted LLMs with SLMs, saying he favors SLMs. It’s not often, he says, that users really need state-of-the-art LLMs. SLMs can allow faster inference, more efficiency and customizability.

Moreover, a solid micro-prompting approach can harness the power of functional, logically divided tasks, in the process improving accuracy. Thompson shared an example user dialogue agentic workflow that enables a form of knowledge graph creation. A dialectic that’s part of this user dialogue flow includes thesis, antithesis and synthesis, eliciting a broader, more informed viewpoint.
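To make the micro-prompting idea concrete, here is a minimal sketch of the thesis/antithesis/synthesis pattern as I understand it. The `complete` callable and the prompts are my own stand-ins, not Thompson's actual workflow:

```python
# Minimal sketch of a dialectic micro-prompting workflow:
# three small, logically divided LLM calls instead of one monolithic prompt.
# `complete` is a placeholder for whatever LLM/SLM completion API you use.

from typing import Callable

def dialectic(question: str, complete: Callable[[str], str]) -> str:
    # Thesis: a direct first-pass answer
    thesis = complete(f"Answer concisely: {question}")
    # Antithesis: deliberately argue against the first answer
    antithesis = complete(
        f"Question: {question}\nProposed answer: {thesis}\n"
        "Give the strongest counter-argument to this answer."
    )
    # Synthesis: reconcile both views into a broader, more informed answer
    return complete(
        f"Question: {question}\nView A: {thesis}\nView B: {antithesis}\n"
        "Synthesize both views into one balanced answer."
    )

# Smoke test with a dummy backend that just echoes its task
print(dialectic("Do most apps need a frontier LLM?",
                lambda p: f"[model output for: {p[:40]}...]"))
```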

Pragmatic enterprise AI starts with better data and organizational change management​

Manish Patel, Founding Partner at Nava Ventures, moderated a panel session on "Emerging Architectures for Applications Using LLMs – The Transition to LLM Agents." Panelists included Daniel Wu of the Stanford University AI Professional Program, Arun Nandi, Senior Director and Head of Data & Analytics at Unilever, and Neeraj Kumar, Chief Data Scientist at Pacific Northwest National Laboratory.

The prospect of agentic AI is placing much more focus on the need for governance, risk assessment and improved data quality.

In order for those improvements to be realized, AI adoption must wait for organizational change. Wu pointed out that inside enterprises, “Change management is the single point of failure.” Even successful change efforts take years.

Moreover, expectations about AI are often unrealistic, with executives who don’t have the patience to wait for return on investment.

Kumar underscored the cross-functional nature of AI deployments and the ownership considerations that arise as a result.
Nandi figured that "70 percent of the effort (in enterprise AI initiatives)" is change management related, and that such initiatives imply a need for much more extensive collaboration, given AI's cross-functional nature, with the right people in the right roles in the loop.

Effective edge AI requires a different, linear modeling approach

Stephen Brightfield, CMO of neuromorphic IP provider Brainchip, presented on “Combining Efficient Models with Efficient Architectures.” Brainchip specializes in on-chip, in-cabin processing technology for smart car applications.
Brightfield asserted that “most edge hardware is stalled because it’s designed with a data center mentality.” Some of the observations he made underscored the learnings of an edge-constrained environment:

  • Assume fixed power limits.
  • Lots of parameters implies a lot of data to move.
  • Most data isn’t relevant.
  • Sparse data implies more efficiency.
  • Don’t recompute what hasn’t changed.
Rather than stick with a transformer-based neural net of the kind used in LLMs, Brainchip advocates a state space, state evolving, event-based model that promises more efficiency and lower latency.
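As a rough illustration of the last two bullet points and the state-space idea, here is a toy sketch (entirely illustrative, not BrainChip's actual Akida implementation): a linear state-space update that is computed only when the input actually changes, so unchanged inputs cost nothing:

```python
# Toy event-gated state-space model: x <- A @ x + B @ u.
# The update is skipped when the input hasn't changed meaningfully,
# illustrating "don't recompute what hasn't changed" and sparse efficiency.
# Illustrative only; not BrainChip's implementation.

import numpy as np

rng = np.random.default_rng(0)
state_dim, input_dim = 8, 4
A = rng.normal(scale=0.1, size=(state_dim, state_dim))  # state transition
B = rng.normal(scale=0.1, size=(state_dim, input_dim))  # input projection

x = np.zeros(state_dim)
last_u = np.zeros(input_dim)
updates = 0

for t in range(1000):
    # Input only changes every 50 steps; most frames carry no new information
    u = rng.normal(size=input_dim) if t % 50 == 0 else last_u
    if np.max(np.abs(u - last_u)) > 1e-3:  # "event": the input changed
        x = A @ x + B @ u
        last_u = u
        updates += 1
    # no event -> no compute; the state simply carries over

print(f"{updates} state updates over 1000 timesteps")  # ~20, not 1000
```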

A final thought

Much media coverage is focused on LLM behemoths and the data center-related activities of hyperscalers. But what I found much more compelling were the innovations of smaller providers who were trying to boost the performance and utility of edge AI. After all, inferencing is consuming 80 percent of the energy AI demands, and the potential clearly exists to improve efficiencies through better and more pervasive edge processing.
 
  • Like
  • Fire
  • Love
Reactions: 50 users
(quoting Frangipani's Data Science UA post above)
Pfftt.. 🙄
Why do I get the feeling he just spent the last 10 months, or whatever, on a study confirming the sky is blue..
 
  • Haha
  • Like
Reactions: 9 users

Diogenese

Top 20
(quoting the Alan Morrison summit write-up posted in full above)
There's that overfitting again.

"Don’t recompute what hasn’t changed."

OpenAI suffers from hallucinations due to overfitting.

"... inferencing is consuming 80% of the energy AI demands"

Inferencing is used to interpret the sensor output (microphone/video etc.). That is on the input side, so, in a cloud-based system, having SNNs at the edge doing the inferencing would greatly reduce the power demand of LLMs. For example, if Siri were adapted for local inference, only the "interpreted" request would need to be sent to the cloud, rather than sending the full enquiry to be interpreted by GPUs in the cloud server.

Of course, with SLMs, the cloud is out of the loop.
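A minimal sketch of that split, for the avoidance of doubt (every name and the payload format here are hypothetical placeholders, not any real Siri or Akida API):

```python
# "Interpret at the edge, act in the cloud": the device runs inference locally
# and ships only a tiny structured intent upstream, instead of streaming raw
# audio for GPU inference. All names/formats below are hypothetical.

import json

def run_local_inference(audio_frame: bytes) -> dict:
    """Stand-in for an on-device (e.g. SNN-based) speech/intent model."""
    # A real model would classify the audio; we fake a recognised intent.
    return {"intent": "set_timer", "slots": {"minutes": 10}, "confidence": 0.94}

def handle_request(audio_frame: bytes) -> str:
    result = run_local_inference(audio_frame)   # heavy lifting stays on-device
    if result["confidence"] < 0.5:
        return "(ask the user to repeat; nothing is sent to the cloud)"
    payload = json.dumps(result)                # a few dozen bytes upstream...
    # ...versus kilobytes per second of raw audio if the cloud did the inference
    return f"send to cloud: {payload}"

print(handle_request(b"\x00" * 320))
```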
 
  • Like
  • Love
Reactions: 18 users

Frangipani

Regular

Either
the author’s source - Billy Leung, investment strategist at Global X Australia - knows more than us and leaked it… (rather unlikely, though)
OR we all somehow missed this recent announcement of a strategic partnership with a leading automotive manufacturer… (okay, I guess we can pretty much rule out that option straight away)
OR Nadine McGrath and/or Billy Leung must have misunderstood our company’s recent social media posts re the advantages of event-based computing for radar/LiDAR applications…
OR the alleged announcement happens to be some sort of hallucination in an AI-generated text…

OR ???

🤔


B2384ED3-CB00-4541-99CF-C9FC97105DB8.jpeg

FE254EB9-2DDA-4984-8F54-8B5170642D7D.jpeg


5E507CB4-6BCD-4C9E-B546-FDB58688687F.jpeg

E2734B2C-037A-48AF-A90A-37378A2C1992.jpeg
 
  • Like
  • Thinking
  • Wow
Reactions: 32 users

Cand2it

Member

(quoting Frangipani's "Either... OR ???" post above)
Maybe we all missed that strategic partnership announcement and that's what drove the price up 🤷🏽‍♂️
 
  • Like
Reactions: 3 users