BRN Discussion Ongoing

Murphy

Life is not a dress rehearsal!
Is It Safe to Eat Sunflower Seeds Whole?
Nothing succeeds like a budgie with no beak! :giggle:

If you don't have dreams, you can't have dreams come true!
 
  • Haha
  • Like
Reactions: 6 users

Diogenese

Top 20
C'mon Miss Kitty Kat! Pull on them running shoes..........*frantically massaging Deep Heat into her calves...*

If you don't have dreams, you can't have dreams come true!
Running shoes? It's gumboots for sure.
 
  • Haha
  • Like
Reactions: 6 users
Can't recall if posted before.

Certainly like some of the engagements MyWAI has ;)


Industrial Robotics: BrainChip partners with Italy's MyWAI to provide Edge AI solutions built on its neuromorphic chip​

Posted on 09-02-2024 by Pierrick Arlot


The American company of Australian origin BrainChip, which has developed under the name Akida an ultra-low-power neuromorphic processor for the network edge, has signed a cooperation agreement with the young Italian firm MyWAI, which defines itself as a provider of AIoT (Artificial Intelligence of Things) solutions for the intelligent edge, to provide next-generation Edge AI solutions leveraging neuromorphic computing.

Created in 2021 and based in Sestri Levante, about forty kilometers east of Genoa, MyWAI has designed, patented and developed what the young company presents as the first fully native European AIoT platform for "equipment as a service" (EaaS). This solution is intended to help machine-tool manufacturers and users add new intelligent services to their equipment, such as innovative technical features (predictive maintenance, multimodal quality inspection), insurance policies (e.g., payment by event) or fintech business models (e.g., pay-as-you-go).

MyWAI, which has patented its EaaS solution combining advanced technologies of generative AI and Edge AI, the Internet of Things and the blockchain, has been engaged in projects with key partners such as Mitsubishi Electric (to add smart services to Japanese robotic fleets), Hitachi Rail (to generate artificial defects for video surveillance systems) and Esaote (for maintenance services on the Italian group's biomedical machines).

As part of the cooperation between the two companies, MyWAI's future Edge AI solutions will rely on BrainChip's Akida chips, which can process and analyze sensor data with record efficiency, accuracy and energy savings. It will be recalled that the Akida chip is a fully digital, 100% event-driven AI processor that uses neuromorphic principles mimicking the human brain and analyzes only the essential data detected at the point of acquisition.

BrainChip's technology will be integrated into MyWAI's EaaS-oriented AIoT platform, which can broadcast, process and prepare data from multimodal sensors (time series, audio, vision, touch data, etc.) at the network edge, manage MLOps (Machine Learning Engineering for Production) workflows and ensure the certification of data and results via blockchain (DLT), in accordance with the new European regulations for trusted AI.

The partnership between the two companies aims to accelerate the adoption of Edge AI in industrial and robotic sectors such as manufacturing, logistics, energy management and health. "We believe it is important to help businesses reach new heights by adding intelligence to processes and machines at the edge, through generative AI in the cloud and trusted AI at the edge level," says Fabrizio Cardinali, CEO of MyWAI. "By integrating BrainChip's Akida with our EaaS platform, we can enable our customers to optimize their processes and machines with eco-efficient AI, providing intelligence where it is needed, when they need it."

Further to MyWAI post.

Will be attending this event, demonstrating and exhibiting.

Some big names attending etc.




A&T_automation&testing
18th EDITION | 14-16 FEBRUARY 2024 | TURIN - OVAL LINGOTTO FIERE
THE FAIR DEDICATED TO INNOVATION,
TECHNOLOGY, RELIABILITY AND SKILLS 4.0
2nd EDITION
| 6-8 NOVEMBER 2024 | VICENZA - EXHIBITION CENTER

HOME OF ARTIFICIAL INTELLIGENCE​

FIRST TIME IN ITALY - DEMONSTRATORS AND EXPERTS

1,000 m² to discover the logic of AI for manufacturing

For the first time in Italy, a meeting point for the industrial world, dedicated to the opportunities AI offers to manufacturing and its supply chains. Large industrial players, selected SMEs and startups actively participate in the project, coordinated by the National Competence Center CIM4.0.​




Exhibiting.


MYWAI

Stand: CASA AI

Request an appointment

The company

MYWAI is one of the fastest-growing AI startups for Industry 4.0. It has developed an AIoT platform, a scalable no-code solution to deploy Edge & Generative AI models within your machinery and processes, enabling multimodal and multivariate analysis of machine and process data to recognize and prevent anomalies, failures or defects.
Discover the power of on-edge Artificial Intelligence and reduce maintenance costs and downtime while improving production quality control and certification. Our versatile solution has demonstrated advanced analytical capabilities, delivering tangible benefits to businesses around the world.
Keywords Data acquisition, AI - Artificial Intelligence, IOT - Internet of Things, Artificial Vision

Technologies exhibited


Discover the power of generative & on-edge Artificial Intelligence applied to your machinery and industrial processes

MYWAI's AIoT platform integrates the following modules into one solution to connect and make machines more intelligent:

  • Cloud : Zero-code configuration on multi-cloud
  • MLOPS Manager : ML pipeline management and certification via Docker and Kubernetes, according to EU AI ACT
  • Edge Analyzer : Run-time module on Edge for multivariate and multimodal data streams
  • DLT Certifier : Certification via IOTA blockchain and IPFS distributed file system for insurance purposes
MYWAI helps you achieve operational excellence, optimize asset management and achieve significant savings.

Keywords Data acquisition, AI - Artificial Intelligence, IOT - Internet of Things, Energy saving, Artificial vision


If I read right, I think the CEO had a keynote too?


 
Last edited:
  • Like
  • Fire
  • Love
Reactions: 29 users

Esq.111

Fascinatingly Intuitive.
 
  • Haha
  • Like
Reactions: 10 users

Bravo

If ARM was an arm, BRN would be its biceps💪!
Last edited:
  • Haha
  • Like
  • Love
Reactions: 10 users

RobjHunt

Regular

👌 🆗 OK Mia! I’ll go outside and shovel some mulch for my Mum and we can see if that does anything. Back later!
Assuming you’re Mexican?
 
  • Haha
Reactions: 1 users

Lex555

Regular
Nintendo Switch 2 release date is reportedly being pushed back from late 2024 to Q1 2025, supposedly due to stock shortages and to allow more gaming titles at launch. Not sure what this means for the March presentation.
 
  • Like
  • Wow
Reactions: 6 users

RobjHunt

Regular
Maybe an even finish. More than happy with the last few trading days closes though. @Bravo what sort of state did the spa end up after the 30c part-ay? Probably recommend the flux capacitor have another flush out though as the 50c spa gathering may not be too far away 😉
 
  • Like
  • Fire
  • Haha
Reactions: 16 users

Esq.111

Fascinatingly Intuitive.
Well there's 4% of the total floated stock played with so far today.

Esq.
 
  • Like
  • Fire
  • Wow
Reactions: 15 users
  • Like
  • Love
  • Fire
Reactions: 14 users

Bravo

If ARM was an arm, BRN would be its biceps💪!
  • Haha
  • Like
Reactions: 7 users

macro

Member
Wonder who in London is playing with or wants to play with Akida Gen 2....hmmmm :unsure:



Machine Learning Engineer - Neuromorphic Computing (Akida 2) - London​

Posted 3 weeks ago

U.K. located freelancers only
We are at the forefront of advancing neuromorphic computing technology. We are dedicated to developing cutting-edge solutions that transform how machines learn and interact with the world. Our team is growing, and we are seeking a talented Machine Learning Engineer to join our London office, focusing on developing applications using the Akida 2 neuromorphic computing platform.

Job Description:
As a Machine Learning Engineer, you will play a crucial role in our dynamic team, focusing on the development and implementation of machine learning algorithms tailored for the Akida 2 neuromorphic computing platform. Your expertise will contribute to optimizing AI models for energy efficiency and performance, aligning with the unique capabilities of neuromorphic computing.

Key Responsibilities:

Develop and optimize machine learning models for the Akida 2 platform.
Collaborate with cross-functional teams to integrate AI solutions into products.
Conduct research and stay updated with the latest trends in neuromorphic computing.
Provide technical guidance and mentorship to junior team members.
Participate in code reviews and maintain high standards in development practices.

Qualifications:

Bachelor’s or Master’s degree in Computer Science, Electrical Engineering, or related field.
Proven experience in machine learning and neural network development.
Familiarity with neuromorphic computing, particularly Akida 2, is highly desirable.
Strong programming skills in Python and experience with machine learning frameworks.
Excellent problem-solving abilities and a collaborative team player.
Strong communication skills, both written and verbal.

What We Offer:

Competitive salary and benefits package.
Opportunity to work on groundbreaking technology in a fast-paced environment.
Professional development opportunities and a collaborative team culture.
Central London location with modern office facilities.

Application Process:
To apply, please submit your CV and a cover letter outlining your suitability for the role. Shortlisted candidates will be invited for an interview process, which may include technical assessments.

We are an equal-opportunity employer. We celebrate diversity and are committed to creating an inclusive environment for all employees.

Join us in shaping the future of AI and neuromorphic computing. Apply today!
Who is at the forefront of advancing neuromorphic computing technology in London?

Perplexity AI says:

  • Cambridge Consultants, which has a head of semiconductor capability involved in neuromorphic computing
  • Silicon Storage Technology, which is a leading patent filer in neuromorphic computing
  • UCL, which has researchers such as Professor Tony Kenyon and Dr. Adnan Mehonic working on neuromorphic computing
  • Imperial College London, which has researchers like Dr. Oscar Lee, Dr. Jack Gartside, and Professor Will Branford involved in neuromorphic computing
  • University of West London, which is involved in the development of methods and computational tools for neuromorphic AI
 
  • Like
  • Fire
  • Thinking
Reactions: 15 users

mrgds

Regular
No thanks to @Bravo :love: .............................
GREY BABY YEAH ................
:love:
 
Last edited:
  • Like
  • Haha
Reactions: 14 users
Who is at the forefront of advancing neuromorphic computing technology in London?

Perplexity AI says:

  • Cambridge Consultants, which has a head of semiconductor capability involved in neuromorphic computing
  • Silicon Storage Technology, which is a leading patent filer in neuromorphic computing
  • UCL, which has researchers such as Professor Tony Kenyon and Dr. Adnan Mehonic working on neuromorphic computing
  • Imperial College London, which has researchers like Dr. Oscar Lee, Dr. Jack Gartside, and Professor Will Branford involved in neuromorphic computing
  • University of West London, which is involved in the development of methods and computational tools for neuromorphic AI
Hi macro

I would back Cambridge Consultants because of their ties with ARM and previous indications they have some understanding of AKIDA technology.

I would dismiss Imperial College London as they are deeply engaged with Intel and publish papers regarding Loihi fairly regularly.

I cannot recall ever coming across any research by the other three that raised a suspicion they had links to Brainchip.

Maybe FMF has picked up something I missed.

My opinion only DYOR
Fact Finder
 
  • Like
  • Love
Reactions: 16 users

Dr E Brown

Regular
Many thanks to those who replied, especially FMF who gave me a better understanding. I appreciate your help. My thoughts are a continuing uptrend, which hopefully is gentle until Sean gives us a presentation around release of half yearly figures. Thanks again.
 
  • Like
  • Fire
Reactions: 7 users

TheDrooben

Pretty Pretty Pretty Pretty Good
Looks like the buyers were back at the finish to soak up shares from the day traders and weak hands. Great finish and set-up for the next leg up IMO. Gee, a surprise positive pre-open announcement in the morning would really provide the fuel............


HAPPY AS LARRY
 
  • Like
  • Haha
  • Love
Reactions: 32 users

Diogenese

Top 20
👌 🆗 OK Mia! I’ll go outside and shovel some mulch for my Mum and we can see if that does anything. Back later!
We've got some second hand mulch that we've cleaned as best as we can.
 
  • Haha
  • Like
Reactions: 9 users
We've got some second hand mulch that we've cleaned as best as we can.
Is that the same stuff you tried to sell me with the warning about not to be used for Smoking Ceremonies?😂🤡🤣
 
  • Haha
Reactions: 10 users
Hi All
I found this a useful guide to understanding LLMs: they are all essentially one thing, but to use one for a particular purpose, such as running at the edge, you would likely want to optimise it by removing the parts not needed for your intended use.

An LLM for handling Macca’s drive through orders would likely not need the ability to discuss quantum mechanics:

“Optimizing LLMs for Your Use Cases: A Developer’s Guide​

Pankaj Pandey · 4 min read · Jan 13, 2024
Optimizing LLMs for specific use cases can significantly improve their efficiency and performance.

Photo by 1981 Digital on Unsplash
Large Language Models (LLMs) hold immense potential to revolutionize diverse workflows, but unlocking their full potential for specific use cases requires thoughtful optimization.
Below are some guides for developers looking to optimize LLMs for their own use cases:

1. Analyze Your Use Case:​

  • Define the task: Identify the specific task you want the LLM to perform (e.g., code generation, text summarization, question answering).
  • Data analysis: Assess the type and size of data available for training and fine-tuning. Is it curated, labeled and relevant to your specific domain?
  • Evaluation metrics: Determine how you’ll measure success. Are you aiming for accuracy, speed, creativity or something else?
For example, imagine you are developing a chatbot for a customer support application. The use case is to generate helpful responses to user queries, providing accurate and relevant information.

2. LLM Selection and Fine-tuning:​

  • Model selection: Choose an LLM with capabilities aligned with your task. Pre-trained models for your domain may make this easier.
    For example, choose GPT-3 as it excels in natural language understanding and generation, which aligns well with the conversational nature of the customer support chatbot.
  • Fine-tuning: Adapt the LLM to your specific data using transfer learning. Popular frameworks like Hugging Face offer tools and tutorials for fine-tuning.
    Fine-tune GPT-3 using a dataset of customer support interactions. Provide examples of user queries and corresponding responses to help the model understand the specific context and language used in the customer support domain.
  • Hyperparameter optimization: Adjust settings like learning rate, batch size and optimizer to maximize performance on your data. Consider using automated Hyperparameter Optimization (HPO) tools.
    Experiment with smaller variants of GPT-3 or adjust hyperparameters to find the right balance between model size and performance. For a latency-sensitive application like customer support, a smaller model might be preferred.
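The hyperparameter search the article describes can be sketched as a simple grid search. This is a minimal illustration only: the `evaluate` function below is a hypothetical stand-in for a real fine-tuning-plus-validation run, and its scoring curve is invented purely so the example is self-contained.

```python
import itertools

def evaluate(learning_rate, batch_size):
    """Stand-in for a real fine-tuning run that returns a validation score.
    In practice this would fine-tune the model and score it on held-out data.
    The curve below is fabricated for illustration, peaking at (3e-5, 16)."""
    return 1.0 - abs(learning_rate - 3e-5) * 1e4 - abs(batch_size - 16) * 0.01

def grid_search(learning_rates, batch_sizes):
    """Try every hyperparameter combination and keep the best-scoring one."""
    best_score, best_params = float("-inf"), None
    for lr, bs in itertools.product(learning_rates, batch_sizes):
        score = evaluate(lr, bs)
        if score > best_score:
            best_score, best_params = score, (lr, bs)
    return best_params, best_score

params, score = grid_search([1e-5, 3e-5, 5e-5], [8, 16, 32])
print(params, score)
```

Automated HPO tools replace the exhaustive loop with smarter strategies (Bayesian optimisation, early stopping), but the evaluate-compare-keep structure is the same.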

3. Data Wrangling and Augmentation:​

  • Data quality: Ensure data cleanliness and relevance. Label inconsistencies, biases and irrelevant examples can negatively impact performance.
    Apply quantization to reduce model precision, making it more efficient. Prune unnecessary connections to create a more compact model without compromising performance.
  • Data augmentation: Artificially expand your data with techniques like synonym substitution, back-translation or paraphrasing to improve model generalization.
  • Active learning: Interactively query the LLM to identify informative data points for further labeling, focusing resources on areas where the model needs improvement.
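The synonym-substitution technique mentioned above can be sketched in a few lines. The synonym table here is a toy example invented for illustration; a real pipeline would draw on a thesaurus such as WordNet or a paraphrase model.

```python
import random

# Toy synonym table for illustration only; a real pipeline would use a thesaurus.
SYNONYMS = {
    "help": ["assist", "support"],
    "problem": ["issue", "fault"],
    "fix": ["repair", "resolve"],
}

def augment(sentence, rng=None):
    """Replace each word that has a known synonym with a randomly chosen
    alternative, yielding a paraphrased copy of the training example."""
    rng = rng or random.Random()
    out = []
    for word in sentence.split():
        options = SYNONYMS.get(word.lower())
        out.append(rng.choice(options) if options else word)
    return " ".join(out)

print(augment("please help me fix this problem"))
```

Applied over a whole training set, each original sentence yields several paraphrased variants, which helps the model generalise beyond the exact wording it was trained on.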

4. Integration and Deployment:​

  • API integration: Connect the LLM to your application or workflow through APIs offered by platforms like OpenAI or Google Cloud AI.
  • Latency optimization: Optimize resource allocation and inference techniques to minimize response time and improve user experience.
  • Monitoring and feedback: Continuously monitor model performance and gather feedback from users. Use this data to further refine the LLM and iterate on your solution.
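The latency-monitoring point can be sketched with a small timing decorator. The `answer` function is a hypothetical stand-in for a real LLM inference call; in production the recorded latencies would feed a dashboard or alerting system.

```python
import time
from statistics import mean

def timed(fn):
    """Decorator that records the wall-clock latency of every call,
    so response times can be monitored and fed back into optimisation."""
    latencies = []
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        result = fn(*args, **kwargs)
        latencies.append(time.perf_counter() - start)
        return result
    wrapper.latencies = latencies
    return wrapper

@timed
def answer(query):
    # Hypothetical stand-in for an LLM inference call.
    return f"response to {query!r}"

answer("where is my order?")
answer("reset my password")
print(f"calls: {len(answer.latencies)}, mean latency: {mean(answer.latencies):.6f}s")
```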

5. Caching and Memoization:​

  • Implement caching and memoization strategies to store and reuse intermediate results during inference. This can significantly reduce redundant computations and improve response times.
  • In particular, cache frequently used responses: for commonly asked questions, store and reuse the model’s previous outputs.

6. User Feedback Loop:​

  • Establish a feedback loop with end-users to understand their experience and gather insights for further optimization. User feedback can help refine the model and identify areas for improvement.
    For example, gather feedback from users regarding the effectiveness of the chatbot’s responses. Use this feedback to identify areas for improvement, update the model accordingly and enhance the overall user experience.

Additional Tips:​

  • Consider interpretability: Choose LLMs with built-in explainability features to understand their reasoning and build trust with users.
  • Utilize transfer learning techniques: Leverage pre-trained knowledge from similar tasks to accelerate development and improve performance.
  • Collaborate with the LLM community: Stay informed about advances in LLM research and best practices, participate in forums and contribute your findings.
By following these steps and continuously iterating, you can significantly improve the efficiency and efficacy of LLMs for your specific use cases. Remember, optimizing LLMs is an ongoing process and dedication to the data, model and integration aspects will ultimately unlock their full potential in your workflows.

Helpful Resources:​

Fine-tuning — OpenAI API
Customize a model with Azure OpenAI Service — Azure OpenAI | Microsoft Learn
Please note: This guide provides a general framework. Specific steps and tools may vary depending on your chosen LLM, framework and use case.”

This is the kind of optimisation Brainchip’s engineers and scientists have been doing, which Dr. Lewis described as achieving SOTA performance.

My opinion only DYOR
Fact Finder
 
  • Like
  • Fire
  • Love
Reactions: 26 users