BRN Discussion Ongoing

Hi All
I found this a useful guide to understanding LLMs. They are all one thing; it is just that, to use one for a particular purpose such as running at the Edge, you would likely want to optimise it by removing the parts that are not needed for your intended purpose.

An LLM for handling Macca’s drive-through orders would likely not need the ability to discuss quantum mechanics:

“Optimizing LLMs for Your Use Cases: A Developer’s Guide

Pankaj Pandey · Jan 13, 2024
Optimizing LLMs for specific use cases can significantly improve their efficiency and performance.

Large Language Models (LLMs) hold immense potential to revolutionize diverse workflows, but unlocking their full potential for specific use cases requires thoughtful optimization.
Below are some guides for developers looking to optimize LLMs for their own use cases:

1. Analyze Your Use Case:

  • Define the task: Identify the specific task you want the LLM to perform (e.g., code generation, text summarization, question answering).
  • Data analysis: Assess the type and size of data available for training and fine-tuning. Is it curated, labeled and relevant to your specific domain?
  • Evaluation metrics: Determine how you’ll measure success. Are you aiming for accuracy, speed, creativity or something else?
For example, imagine you are developing a chatbot for a customer support application. The use case is to generate helpful responses to user queries, providing accurate and relevant information. (A minimal measurement sketch follows.)
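
To make the metrics bullet concrete, here is a minimal sketch (my addition, not from the article) that scores any answering function on exact-match accuracy and average latency; answer_fn and the test pairs are hypothetical stand-ins:

    import time

    def evaluate(answer_fn, test_set):
        # Score a question-answering callable on exact-match accuracy and
        # average latency over (question, expected_answer) pairs.
        correct, latencies = 0, []
        for question, expected in test_set:
            start = time.perf_counter()
            reply = answer_fn(question)
            latencies.append(time.perf_counter() - start)
            correct += int(reply.strip().lower() == expected.strip().lower())
        return correct / len(test_set), sum(latencies) / len(latencies)

Exact match is a crude proxy for chat quality; swap in whatever metric matches the success criteria you defined.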

2. LLM Selection and Fine-tuning:

  • Model selection: Choose an LLM with capabilities aligning with your task. Pre-trained models for your domain may make this easier.
    For example, choose GPT-3, as it excels in natural language understanding and generation, which aligns well with the conversational nature of the customer support chatbot.
  • Fine-tuning: Adapt the LLM to your specific data using transfer learning. Popular frameworks like Hugging Face offer tools and tutorials for fine-tuning.
    Fine-tune GPT-3 using a dataset of customer support interactions. Provide examples of user queries and corresponding responses to help the model understand the specific context and language used in the customer support domain.
  • Hyperparameter optimization: Adjust settings like learning rate, batch size and optimizer to maximize performance on your data. Consider using automated Hyperparameter Optimization (HPO) tools.
    Experiment with smaller variants of GPT-3 or adjust hyperparameters to find the right balance between model size and performance. For a latency-sensitive application like customer support, a smaller model might be preferred. (A minimal fine-tuning sketch showing these knobs follows this list.)
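
GPT-3 itself is only fine-tunable through OpenAI’s hosted API, so the following Hugging Face sketch (my addition) uses the small open gpt2 model as a stand-in; the support_chats.jsonl file name and the hyperparameter values are illustrative assumptions, not prescriptions:

    from datasets import load_dataset
    from transformers import (AutoModelForCausalLM, AutoTokenizer,
                              DataCollatorForLanguageModeling,
                              Trainer, TrainingArguments)

    tokenizer = AutoTokenizer.from_pretrained("gpt2")    # open stand-in model
    model = AutoModelForCausalLM.from_pretrained("gpt2")
    tokenizer.pad_token = tokenizer.eos_token            # gpt2 has no pad token

    # Assumed local JSONL file, one {"text": "<user query + agent reply>"} per line.
    data = load_dataset("json", data_files="support_chats.jsonl")["train"]
    data = data.map(lambda b: tokenizer(b["text"], truncation=True, max_length=512),
                    batched=True, remove_columns=["text"])

    args = TrainingArguments(
        output_dir="support-bot",
        learning_rate=5e-5,               # the knobs HPO would search over:
        per_device_train_batch_size=8,    # learning rate, batch size,
        num_train_epochs=3,               # epochs, optimizer settings ...
    )
    Trainer(model=model, args=args, train_dataset=data,
            data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False)).train()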

3. Data Wrangling and Augmentation:

  • Data quality: Ensure data cleanliness and relevance. Label inconsistencies, biases and irrelevant examples can negatively impact performance.
  • Model compression: Apply quantization to reduce numeric precision and prune unnecessary connections for a smaller, more efficient model without materially compromising accuracy (a minimal sketch follows this list).
  • Data augmentation: Artificially expand your data with techniques like synonym substitution, back-translation or paraphrasing to improve model generalization (a toy synonym-substitution example follows this list).
  • Active learning: Interactively query the LLM to identify informative data points for further labeling, focusing resources on areas where the model needs improvement.
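
For the model compression bullet, here is a toy sketch (my addition) of the prune-then-quantize idea using stock PyTorch utilities; the two-layer network and the 30% pruning amount are arbitrary stand-ins so the mechanics stay visible:

    import torch
    from torch import nn
    from torch.nn.utils import prune

    # Toy network; the same calls apply to the Linear layers of a real model.
    model = nn.Sequential(nn.Linear(512, 1024), nn.ReLU(),
                          nn.Linear(1024, 512)).eval()

    # Prune: zero the 30% smallest-magnitude weights in each Linear layer.
    for module in model.modules():
        if isinstance(module, nn.Linear):
            prune.l1_unstructured(module, name="weight", amount=0.3)
            prune.remove(module, "weight")   # bake the mask into the weights

    # Dynamic quantization: store Linear weights as int8 for cheaper CPU inference.
    quantized = torch.quantization.quantize_dynamic(
        model, {nn.Linear}, dtype=torch.qint8)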
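
And a toy synonym-substitution augmenter (my addition); the three-entry lexicon is a placeholder for something like WordNet or a paraphrase model:

    import random

    SYNONYMS = {  # placeholder lexicon
        "broken": ["faulty", "damaged"],
        "refund": ["reimbursement"],
        "help": ["assist", "support"],
    }

    def augment(sentence, p=0.5):
        # Randomly swap known words for synonyms to create paraphrased rows.
        out = []
        for w in sentence.split():
            alts = SYNONYMS.get(w.lower())
            out.append(random.choice(alts) if alts and random.random() < p else w)
        return " ".join(out)

    print(augment("my screen is broken please help"))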

4. Integration and Deployment:

  • API integration: Connect the LLM to your application or workflow through APIs offered by platforms like OpenAI or Google Cloud AI (a minimal client sketch follows this list).
  • Latency optimization: Optimize resource allocation and inference techniques to minimize response time and improve user experience.
  • Monitoring and feedback: Continuously monitor model performance and gather feedback from users. Use this data to further refine the LLM and iterate on your solution.
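
As a concrete integration sketch (my addition, assuming the OpenAI v1 Python SDK; the model name and token cap are illustrative choices, not recommendations), note that capping max_tokens is one easy latency lever:

    import os
    import time
    from openai import OpenAI

    client = OpenAI(api_key=os.environ["OPENAI_API_KEY"])

    def support_reply(question: str) -> str:
        start = time.perf_counter()
        resp = client.chat.completions.create(
            model="gpt-4o-mini",      # illustrative; use whatever you deploy
            messages=[
                {"role": "system", "content": "You are a concise support agent."},
                {"role": "user", "content": question},
            ],
            max_tokens=150,           # bounding output length bounds latency
        )
        print(f"latency: {time.perf_counter() - start:.2f}s")
        return resp.choices[0].message.content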

5. Caching and Memoization:

  • Implement caching and memoization strategies to store and reuse intermediate results during inference. This can significantly reduce redundant computation and improve response times.
  • Cache frequently used responses: for commonly asked questions, store and reuse the model’s previous outputs rather than recomputing them (a minimal sketch follows this list).
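
A minimal response cache (my addition) on top of the support_reply sketch from section 4; the normalization is deliberately crude, and a production system would want an eviction policy tied to answer freshness:

    from functools import lru_cache

    def normalize(q: str) -> str:
        # Collapse case, whitespace and trailing punctuation so trivial
        # variations of a question share one cache entry.
        return " ".join(q.lower().split()).rstrip("?!. ")

    @lru_cache(maxsize=1024)
    def cached_reply(normalized_q: str) -> str:
        return support_reply(normalized_q)   # the API call sketched in section 4

    def answer(question: str) -> str:
        return cached_reply(normalize(question))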

6. User Feedback Loop:

  • Establish a feedback loop with end-users to understand their experience and gather insights for further optimization. User feedback can help refine the model and identify areas for improvement.
    For example, gather feedback from users regarding the effectiveness of the chatbot’s responses. Use this feedback to identify areas for improvement, update the model accordingly and enhance the overall user experience. (A minimal logging sketch follows.)
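
One way to capture that loop (my addition; the field names and thumbs-style rating are assumptions) is to log every rated exchange so low-rated rows can seed the next fine-tuning pass:

    import json
    import time

    def log_feedback(question, answer, rating, path="feedback.jsonl"):
        # Append one record per rated exchange (rating: 1 = helpful, 0 = not).
        record = {"ts": time.time(), "question": question,
                  "answer": answer, "rating": rating}
        with open(path, "a") as f:
            f.write(json.dumps(record) + "\n")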

Additional Tips:

  • Consider interpretability: Choose LLMs with built-in explainability features to understand their reasoning and build trust with users.
  • Utilize transfer learning techniques: Leverage pre-trained knowledge from similar tasks to accelerate development and improve performance.
  • Collaborate with the LLM community: Stay informed about advances in LLM research and best practices, participate in forums and contribute your findings.
By following these steps and continuously iterating, you can significantly improve the efficiency and efficacy of LLMs for your specific use cases. Remember, optimizing LLMs is an ongoing process, and dedication to the data, model and integration aspects will ultimately unlock their full potential in your workflows.

Helpful Resources:

Fine-tuning — OpenAI API
Customize a model with Azure OpenAI Service — Azure OpenAI | Microsoft Learn
Please note: This guide provides a general framework. Specific steps and tools may vary depending on your chosen LLM, framework and use case.”

This is the work Brainchip’s engineers and scientists have been doing, which Dr. Lewis described as achieving SOTA performance.

My opinion only DYOR
Fact Finder
 
  • Like
  • Fire
  • Love
Reactions: 26 users

Diogenese

Top 20
Is that the same stuff you tried to sell me with the warning that it’s not to be used for Smoking Ceremonies?😂🤡🤣
It's ok as long as you don't inhale.
 
  • Haha
  • Like
Reactions: 9 users

Teach22

Regular
Do we have any (good) chartists that could give their interpretation of today’s trading please?
 
  • Haha
  • Like
Reactions: 4 users

Diogenese

Top 20
Hi All
I found this a useful guide to understanding LLMs. They are all one thing; it is just that, to use one for a particular purpose such as running at the Edge, you would likely want to optimise it by removing the parts that are not needed for your intended purpose.
[...]
My opinion only DYOR
Fact Finder
Fine tuning: Too much shiraz has flowed under the bridgework, but an article was discussed just recently in which Peter was asked about a patent and said it was an auxiliary invention for transfer learning via the cloud.

Model selection: CNN2SNN really does give Akida an unfair advantage in adapting existing models.
 
  • Like
  • Love
  • Fire
Reactions: 18 users

Deadpool

hyper-efficient Ai
Do we have any (good) chartists that could give their interpretation of today’s trading please?
[CommSec chart, 19 Feb 2024]
It went up, it went down, it went flat o_O


😅🤪
 
  • Haha
  • Like
  • Fire
Reactions: 32 users

HopalongPetrovski

I'm Spartacus!
  • Haha
  • Like
Reactions: 11 users

Esq.111

Fascinatingly Intuitive.
Afternoon Chippers,

19/2/2024
Trading Report.

Day's Trading Range : $0.33 to $0.39 AU
Finished : $0.36 AU
Volume : Getting rather Exciting.

SHOOTING STAR RESEARCH
Date : 19 Feb 2024
☆☆☆☆☆ STAR RATING.
Fair Value :
$5.22 AU
3.01081 CHF Swiss Francs
3 & seven eighths SCS (Standard Bags of Cowrie Shells).
* Note : Above does not even remotely include a buyout premium.
* Rockerrothsgettyfellerchild Inc. LLC
Corporate Office : Post Box 1A, Little Cayman Island
Offices : Canary Islands, Liechtenstein & Switzerland
Trading desk : Wilds of Southern Panama.

* This Investment House operates on the highest standards of integrity, with the ultimate goal of maximising investment return by way of shits & giggles to clientele, whilst also delivering a fair & equitable share-price guesstimate to both sides of the investment community.

* Average return over 10 years is in excess of 25% shits & giggles margin above nearest competitors.

Not Financial Advice. One should always insult a financial planner if in doubt.



Regards,
Esq.
 
  • Haha
  • Like
  • Love
Reactions: 25 users

Diogenese

Top 20
  • Love
  • Like
Reactions: 7 users

Esq.111

Fascinatingly Intuitive.
  • Like
Reactions: 1 users

TheDrooben

Pretty Pretty Pretty Pretty Good
Afternoon Chippers,

19/2/2024
Trading Report.
[...]
Regards,
Esq.


 
  • Haha
  • Like
Reactions: 6 users

HopalongPetrovski

I'm Spartacus!
Hi Hoppy,

The follow-on explains the difference between NLP and LLM ... no, really.


Thank you.
I understood that.
A context of 8,000 previous words is amazing to me.
Had no idea LLMs were that powerful.
No wonder they use a lot of juice. 🤣
 
  • Like
  • Fire
Reactions: 6 users

Esq.111

Fascinatingly Intuitive.
What’s the total shorts outstanding ?
Afternoon Pom down under,

From Shortman data today (which shows 13th Feb), shares shorted stood at 4.13%, and BRN was the 44th most shorted.

Would appear to be dropping out of favour, from their perspective.

Regards,
Esq.
 
  • Like
  • Fire
  • Love
Reactions: 21 users

buena suerte :-)

BOB Bank of Brainchip
  • Like
  • Fire
Reactions: 9 users
Afternoon Pom down under,

From Shortman data today (which shows 13th Feb), shares shorted stood at 4.13%, and BRN was the 44th most shorted.

Would appear to be dropping out of favour, from their perspective.

Regards,
Esq.
Well, they've not got out in the last 2 weeks, which is...
 
  • Haha
  • Like
  • Love
Reactions: 9 users

wilzy123

Founding Member
I was more baiting the parrots in Panama.

Looks like they haven't been seen this afternoon. Naturally makes sense to flush them out.
 
  • Like
  • Fire
Reactions: 4 users

cosors

👀
You may be right.
I notice that the rightmost column shows the 'spread', the difference between bid and ask. If supply is poor here and there is very little trading, it is normal for this to be well above 5% (I think ~5% is normal for ASX stocks here). If there is a lot of trading it is below 5%. I would somewhat exclude Lang & Schwarz from this, as they are private.
In any case, I have just compared your price on the ASX with ours here. Ours is about 10% higher than yours, including currency conversion. So if you deduct that, we would be quite close to you again.
The last time I saw such a big difference (I think it was also about 10%, but at that time in our favour here, i.e. a kind of discount) was the evening before the first short attack that I noticed. The next day, I recall someone saying 25M shares were transferred to you at the ASX and dumped into the market, and the difference between the markets evened out again. I remember it quite well because I found it exciting and scary at the same time. I still don't know whether there was a connection to the short attack.
But these two moments are linked by this rare 10% difference in value, only the other way around back then. Let's see what happens on Monday. Perhaps a few million shares will be withdrawn from the ASX and thrown into the market here? Who knows. Sometimes I would love a look behind the scenes at how things like this work.
The 10% difference has been equalised: AUD 0.36 corresponds to €0.22, and as you can see, the ratio including currency conversion is balanced again.


Of course, I have no idea how the market makers work. Perhaps they were supplied with fresh shares from the ASX?
 
Last edited:
  • Like
  • Fire
Reactions: 8 users

Getupthere

Regular

Crackdowns on data centre power use threaten AI expansion

Governments worldwide are upping regulations on the development of data centres amid concerns over energy usage and the potential repercussions on power infrastructure and national climate targets.

The Financial Times reports that nations including China, Singapore and Ireland have imposed restrictions on new data centres to align with more stringent environmental standards.

Ireland — recognised as a pivotal location for cloud computing firms due to its favourable tax regime and strategic access to global internet connections — is facing the most acute challenges.

The country's energy and water regulator's 2021 decision to restrict new data connections has led to the rejection of permits for new projects in Dublin by data centre operators Vantage, EdgeConneX and Equinix.

Germany and Loudoun County in Virginia, United States, have also adopted strategies to mitigate the environmental impact of data centres. These include limiting permits in residential zones and mandating contributions to renewable energy and waste heat reuse.

Exacerbating the situation are the burgeoning energy demands of artificial intelligence (AI), with the United States hosting a significant portion of the world's data centres.

AI encourages growth of renewables

Escalating power consumption from the rise of AI has led to increased scrutiny of tech giants Microsoft, Alphabet and Amazon to engage more actively in renewable energy generation and improve efficiency. Investments in wind and solar energy are underway, and Microsoft is also exploring nuclear options to power its facilities.

Barclays analysts warn that the implications of rising internet usage on power grids are yet to be fully accounted for by governments, suggesting a trend towards global restrictions. This could impact the $220 billion data centre and cloud industry, which is anticipated to be worth $418 billion by the decade's end amid soaring data demands.
 
  • Like
  • Fire
  • Love
Reactions: 35 users

macro

Member
Hi macro

I would back Cambridge Consultants because of their ties with ARM and previous indications they have some understanding of AKIDA technology.

I would dismiss Imperial College London as they are deeply engaged with Intel and publish papers regarding Loihi fairly regularly.

I have not, that I can remember, ever come across any research by the other three that raised a suspicion they had links to Brainchip.

Maybe FMF has picked up something I missed.

My opinion only DYOR
Fact Finder
Hi FF

Dr Aidong Xu, Head of Semiconductor Capability, Cambridge Consultants

https://www.cambridgeconsultants.com/us/insights/opinion/what-is-neuromorphic-computing

What is neuromorphic computing and why do businesses and society need it?


Cambridge Consultants is a technology development firm that has worked with NETRI (Neuro Engineering Technologies Research Institute) on a breakthrough in precision imaging of brain-on-a-chip technology.

https://www.cambridgeconsultants.co...hrough-unlocks-progress-brain-chip-technology

https://www.cambridgeconsultants.com/case-studies

Has worked with ARM, Ford, NASA, Caterpillar, Philips, Hitachi, SpaceX, Nvidia, Iridium, Uber and others.
 
  • Like
  • Fire
  • Wow
Reactions: 18 users