BRN Discussion Ongoing

Learning

Learning to the Top 🕵‍♂️
Not sure if this was shared. Two lectures from different universities talking about Akida.

I missed copying the link.



Learning 🪴
 
  • Like
  • Love
  • Fire
Reactions: 59 users
We can only hope that one of Sean's important meetings was with Apple's AI main man, Mr John Giannandrea... This would be insane.


Imagine
 
  • Like
  • Fire
Reactions: 5 users

Diogenese

Top 20
  • Haha
  • Like
  • Fire
Reactions: 18 users

JoMo68

Regular
Let's hope MF keeps all their BRN posts in a burn box.

Perhaps they could explain TeNNs to their readers.
Yes, I do believe that Mickelpenis is going to have some serious egg on his face at some point in the not too distant future 🤞
 
  • Like
  • Fire
  • Love
Reactions: 18 users
Not sure if this was shared. Two lectures from different universities talking about Akida.

I missed copying the link.


Learning 🪴
Sounds like the professor is impressed enough to become a shareholder in our Company.

We can't know how big his "bet" is though.

And anyway, what would he know?
He's probably just caught up in the "hype" like the rest of us 🙄..
 
  • Like
  • Haha
  • Thinking
Reactions: 11 users
Nice to see Tata Elxsi still on the neuromorphic train.



MULTIMODAL AI AND NEUROMORPHIC AI: DETECTION, DIAGNOSIS, PROGNOSIS

The synergy of cutting-edge technologies like Multimodal and Neuromorphic AI signals a pivotal shift from reactive to proactive healthcare. This article explores captivating use cases, offering insights on the implementation of Responsible AI.
Join us as we navigate the frontier of healthcare, where the synergy of innovation and responsibility promises a revolution in patient care and well-being.

Current state of AI adoption in Healthcare​

Unlocking the full potential of AI in healthcare is an uncharted journey. From optimising drug combinations to spearheading clinical correlation and toxicity studies, AI is set to redefine every facet of the industry. Despite its transformative capabilities, AI remains a niche technology, requiring a nuanced understanding of its application in healthcare.

The tides are changing as the healthcare sector recognises the urgency for an interdisciplinary approach, marrying engineering with medical science. This paradigm shift signals an imminent era where AI’s vast capabilities will revolutionise diagnostics, patient treatment, protocols, drug development and delivery, and prescription practices over the next decade.

Multimodal AI and Neuromorphic Technology – A new era in Preventive Healthcare​

In the ever-evolving landscape of healthcare, the amalgamation of Multimodal AI and Neuromorphic Technology marks a pivotal moment—a shift from reactive medicine to a proactive, preventive healthcare paradigm. This synergy is not just a collaboration of cutting-edge technologies; it’s a gateway to a future where wellness takes centre stage.

These technologies hold promise to transform healthcare by enhancing diagnostics, enabling personalised medicine, predicting long-term prognosis and contributing to innovations in therapeutic interventions.

Let’s delve into compelling use cases and glimpse the future of preventive healthcare.

Defining Multimodal and Neuromorphic AI​

Multi-modal AI​

Multimodal AI refers to artificial intelligence systems that process and analyse data from multiple modalities or sources. In healthcare, these modalities often include both visual and clinical data. Visual data may include medical images from scans, while clinical data encompasses patient records, parameters, and test reports. Multimodal AI integrates these diverse data types to provide a comprehensive understanding, draw meaningful insights and give suggestions based on data and image analytics.

Neuromorphic Technology​

The term “neuromorphic” comes from the combination of “neuro” (related to the nervous system) and “morphic” (related to form or structure). Neuromorphic technology is an innovative approach to computing that draws inspiration from the structure and function of the human brain: AI powered by brain-like computing architectures. It can process larger amounts of data with less compute, memory, and electrical power. Neuromorphic technology utilises Artificial Neural Networks (ANN) and Spiking Neural Networks (SNN) to mimic the parallel processing, event-driven nature, and adaptability observed in biological brains.
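
As a concrete illustration of the event-driven behaviour described above, here is a minimal leaky integrate-and-fire (LIF) spiking neuron in Python. This is a textbook toy, not Akida's or any other vendor's implementation, and the time constant, threshold, and weight values are illustrative assumptions.

```python
import numpy as np

def lif_neuron(input_spikes, tau=20.0, v_thresh=1.0, v_reset=0.0, dt=1.0, w=0.6):
    """Simulate a single LIF neuron over a binary input spike train."""
    v = 0.0
    out = np.zeros(len(input_spikes))
    for t, s in enumerate(input_spikes):
        v += dt * (-v / tau) + w * s   # leak toward rest, integrate input events
        if v >= v_thresh:              # the neuron only "computes" at a spike...
            out[t] = 1.0
            v = v_reset                # ...then resets; silence costs nothing
    return out

rng = np.random.default_rng(0)
spikes_in = (rng.random(100) < 0.3).astype(float)   # sparse, event-driven input
spikes_out = lif_neuron(spikes_in)
print(f"{int(spikes_in.sum())} input events -> {int(spikes_out.sum())} output spikes")
```

The point of the event-driven formulation is that between input events the neuron does essentially no work, which is where the power savings discussed throughout this thread come from.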

Multimodal Inputs​

  • Medical Images
  • Lab Reports
  • Clinical History
  • Patient Demographic Information

Fusion Module​

A cross-modal attention mechanism dynamically weighs the importance of text, video, and other parameters, calculating an index that decides selection priority (a minimal sketch follows the Inference Outputs list below).

Inference Outputs​

Inference Results for
  • Diagnostic
  • Prognostic
  • Lifestyle Recommendation
  • Disease Prediction
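
To make the fusion step concrete, here is a minimal sketch of cross-modal attention in plain NumPy. The article gives no architecture details, so the embedding size, the single-head scaled dot-product form, and the modality list are illustrative assumptions, not Tata Elxsi's actual design.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_modal_fusion(query, modality_tokens, d_k=None):
    """Weigh modality embeddings by their relevance to a query.

    query:            (d,)   e.g. an embedding of the clinical question
    modality_tokens:  (m, d) one embedding per modality (image, labs, ...)
    Returns the fused vector and per-modality attention weights.
    """
    d_k = d_k or query.shape[-1]
    scores = modality_tokens @ query / np.sqrt(d_k)  # relevance of each modality
    weights = softmax(scores)                        # normalised importance
    return weights @ modality_tokens, weights        # weighted combination

rng = np.random.default_rng(0)
d = 16
tokens = rng.normal(size=(4, d))  # [medical image, lab report, history, demographics]
fused, w = cross_modal_fusion(rng.normal(size=d), tokens)
print("modality weights:", np.round(w, 3))  # the "index" deciding selection priority
```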

Use Cases of Multimodal and Neuromorphic AI​

Early Screening & Disease Detection​

Multimodal AI​

  • Integrates visual and clinical data for holistic analysis.
  • Advanced image recognition for early detection.
  • Comprehensive patient profiling.

Neuromorphic Technology​

  • Efficient pattern recognition for subtle disease indicators.
  • Event-driven processing for real-time detection. This is crucial for detecting anomalies or irregularities that may be early signs of diseases.
  • Continuous monitoring for dynamic changes. This continuous surveillance is especially valuable for conditions with varying symptoms.

Diagnosis​

Multimodal AI​

  • Integrated diagnostic insights from diverse data.
  • Cross-verification for reliability.
  • Tailored treatment plans based on nuanced understanding.
  • Continuous updates based on the latest findings reported in the field.

Neuromorphic Technology​

  • Efficient processing of large data volumes with minimal energy consumption. This efficiency contributes to faster and more accurate diagnoses.
  • Allows more complex algorithms to run on wearable devices, making diagnostics closer to real-time and enabling timely interventions.
  • Implantable devices can be made AI-enabled with neuromorphic computing, thanks to its low compute and power requirements, making diagnosis and management more precise and real-time.
  • Adaptive intelligence for dynamic adjustments, enhancing diagnostic precision; this event-driven processing aligns with the dynamic nature of healthcare data, allowing more accurate and timely diagnoses.
  • SNNs for real-time response and accuracy.

Prognosis​

Multimodal AI​

  • Research Advancements: Facilitates discovery of new insights, contributing to medical advancements and innovations.
  • Personalised Prognostic Models: Considering both visual and clinical data, these models account for individual variations, and correlate with prior case records and provide more accurate predictions of disease outcomes.
  • Dynamic Adaptability: The adaptability of multimodal AI to changing data patterns ensures that prognostic models can dynamically adjust to evolving patient conditions and improve prognosis predictions.

Neuromorphic Technology​

  • Analysis of longitudinal data for predicting disease progression.
  • Dynamic adaptability in prognostic models that can adjust to changing data patterns. This adaptability improves prognosis prediction accuracy for evolving patient conditions.
  • Personalised prognostic insights based on individual variations can help in more accurate predictions tailored to individual patient profiles.

Tata Elxsi Use Case​

Disease Detection and Diagnosis​

Utilising Neuromorphic Technology, we’ve achieved significant advancements in the analysis of medical images on low-computing embedded platforms, enabling on-field diagnostics of ultrasound images.

This innovative approach provides critical diagnostic information for musculoskeletal injuries, including tissue damage extent, recovery progress, and healing time predictions, all with enhanced efficiency and device portability making it ideal for applications such as sports medicine.

Applications of Multimodal and Neuromorphic AI​

Multimodal AI​

  • Comprehensive Patient Analysis
  • Diagnostic Accuracy
  • Mental Health and Behavioural Analysis
  • Lifestyle Reviews and Recommendations
  • Management of Chronic Diseases like Diabetes/HT/Cardiac Diseases with continuous monitoring and personalised medications
  • Diagnosis/Management and Prognosis of various types of cancers, Digital Drug Trials, Effective Pandemic Surveillance and staging, Gene Therapy and Genomics
  • Recommendations for interventions and Prioritisation of therapeutic resources and modalities

Neuromorphic Technology​

  • Implants and Wearable Devices
  • Processing Large Data
  • Medical Imaging Analysis
  • Drug Discovery and Personalised Medicine
  • Robotic Surgery Assistance
  • Neurological Disorder Understanding
  • Patient Care and Rehabilitation
  • Predictive Analytics for Healthcare Management
  • Energy Efficient Remote Monitoring

The Synergy of MLOps and Advanced AI​

The transformative impact of MLOps across operational efficiency, data management, patient outcomes, and the overall quality of care is unmistakable. In the quest for advancing healthcare, the convergence of Machine Learning Operations (MLOps) with Multimodal and Neuromorphic AI has emerged as a game-changer. These technologies can help in seamless deployment, continuous monitoring, and collaborative development across various stakeholders in the healthcare ecosystem.

While advanced AI technologies offer the potential to improve these use cases, the application of MLOps can be instrumental in strengthening and regulating these advancements. It achieves this by streamlining AI development processes and dataset management, and by continuously monitoring model accuracy across different versions, ensuring that only thoroughly vetted versions are deployed for clinical use.

Additionally, MLOps frameworks detect and learn from deviations, further enhancing their efficacy in healthcare applications.
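
As a sketch of the version-gated monitoring described above, the snippet below tracks accuracy per model version and promotes a candidate only when it beats the deployed model by a margin. The registry structure and threshold are illustrative assumptions, not any particular MLOps framework's API.

```python
from dataclasses import dataclass, field

@dataclass
class ModelRegistry:
    deployed: str = "v1"
    metrics: dict = field(default_factory=dict)   # version -> held-out accuracy
    min_gain: float = 0.01                        # required improvement to promote

    def record(self, version: str, accuracy: float) -> None:
        self.metrics[version] = accuracy

    def maybe_promote(self, candidate: str) -> bool:
        current = self.metrics.get(self.deployed, 0.0)
        if self.metrics.get(candidate, 0.0) >= current + self.min_gain:
            self.deployed = candidate             # deploy only a vetted version
            return True
        return False

reg = ModelRegistry()
reg.record("v1", 0.91)
reg.record("v2", 0.93)
print(reg.maybe_promote("v2"), reg.deployed)      # True v2
```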

Use Cases​


Disease Detection​

Disease Prediction and Prevention
Real World Application – Early Detection of Chronic Disease, Infectious Disease Monitoring
Healthcare Fraud Detection
Real World Application – Claims Analysis, Identity Theft Prevention
Medical Imaging Analysis
Real World Application – Early Cancer Detection, Neurological Disorders Diagnosis
Genomic Research
Real World Application – Cancer Genomics, Rare Genetic Diseases

Diagnosis​

Drug Discovery and Development
Real World Application – Protein Folding Prediction, Drug Toxicity Prediction
Personalised Medicine
Real World Application – Oncology and Targeted Therapies, Chronic Disease Management
Healthcare Resource Management
Real World Application – Emergency Room Digitalised Management, Pharmaceutical Supply Chain Management

Prognosis​

Prediction of Clinical Outcome
Real World Application – Prediction of recovery time and quality of life, adverse effects, and short- and long-term impact
Remote Patient Monitoring
Real World Application – Chronic Disease Management, Post Surgery Monitoring

Responsible AI – Navigating Ethical Frontiers​

In the realm of AI, addressing bias stands as a pivotal ethical imperative, particularly in fields like medical analysis where the demand for precision is ethically, legally, and morally paramount. As AI practitioners, our commitment to responsible AI requires rigorous testing using diverse, unbiased anonymised datasets, continual monitoring to mitigate biases, and a steadfast dedication to achieving fair outcomes across diverse patient populations.

Moreover, the ethical considerations extend to the strategic utilisation of data. The foundation of responsible AI in healthcare is laid upon a robust ethical framework that guides the entire lifecycle of Neuromorphic and Multimodal AI systems. Stakeholders must unwaveringly adhere to established ethical principles, ensuring transparency, fairness, and accountability at every stage of AI implementation.

When we delve into the realm of Gen AI, the potential for malpractice looms. Consider scenarios where a patient with normal renal function, manageable through medication, undergoes a renal scan. Unscrupulous use of Gen AI could manipulate images, creating false lesions and leading to unnecessary surgeries or even nephrectomy, which could benefit the illegal organ trade.

Thus, the imperative lies in defining strong ethical boundaries, implementing robust audits, and establishing legal frameworks to prevent data manipulation and ensure the highest standards of integrity.

In embracing these technological advancements responsibly, we are not just witnessing the future of healthcare; we are actively shaping it. The era of proactive, preventive healthcare beckons, promising a future where wellness is at the forefront of the industry’s evolution.

The shift from traditional, one-size-fits-all medical practices, prone to misinterpretation and diagnostic errors, to AI-enhanced methodologies, heralds a new era of precision and personalised care. AI’s capability to analyse a broad spectrum of patient data—ranging from genetic backgrounds to lifestyle factors—promises a departure from misdiagnoses and introduces tailored therapeutic interventions.

AI and multimodal technologies enable a holistic view of the patient's health, integrating diverse data points. Meanwhile, neuromorphic computing advances the portability of medical devices, including wearables and implants, transforming them into intelligent systems capable of adapting to varying conditions.

As thought leaders in the healthcare industry, our commitment to responsibly integrate these technologies paves the way for a future where healthcare is not only reactive, but anticipatory, personalised, and universally accessible.

Author
Anup S S
Practice Head, Artificial Intelligence, Tata Elxsi
Anup S.S. is a visionary in leveraging Artificial Intelligence, Machine Learning and Deep Learning. Leading breakthrough AI projects in healthcare, Anup’s strategic insight and innovation ignite client success, unlocking AI’s full potential.

The video below does work; just click "Watch on YouTube".


 
Last edited:
  • Haha
  • Like
Reactions: 2 users
Wonder if these guys have ever had a chat with us or if we should chat to them :D



E-SPIN
MONDAY, 10 JUNE 2024 / PUBLISHED IN GLOBAL THEMES AND FEATURE TOPICS

Neuromorphic Computing Enhances Computing Efficiency

Neuromorphic Computing is an innovative approach to design and build computer systems that are inspired by the structure and function of the human brain. This field combines principles from neuroscience, computer science, and electrical engineering to create hardware and software that mimic neural processes. Here’s an overview of key concepts and developments in neuromorphic computing:

Excerpt.

Examples of Neuromorphic Computing in 2024

  1. IBM’s TrueNorth:
    • Architecture: The TrueNorth chip features 1 million neurons and 256 million synapses, organized into 4,096 neurosynaptic cores, each simulating 256 neurons (a quick sanity check of these figures follows this list).
    • Applications: Used in image recognition, sensory processing, and cognitive computing tasks, demonstrating high efficiency and low power consumption.
  2. Intel’s Loihi 2:
    • Self-Learning Capabilities: Loihi 2 chips continue to advance real-time learning and adaptation, reflecting brain-like plasticity.
    • Performance: Significant improvements in speed and scalability, facilitating more complex problem-solving and adaptive behaviors in real-time applications.
  3. BrainChip’s Akida:
    • Event-Based Processing: Akida leverages event-based processing for energy-efficient, real-time data analysis.
    • Commercial Use: Applied in smart home devices, automotive systems, and industrial applications for enhanced pattern recognition and decision-making.
  4. SynSense DYNAP-SEL:
    • Ultra-Low Power Consumption: Designed for ultra-low power applications, DYNAP-SEL is used in edge computing, enabling advanced processing capabilities in IoT devices without significant power draw.
    • Real-World Deployment: Implemented in sensor networks for real-time environmental monitoring, providing efficient data processing at the edge.
  5. Human Brain Project’s SpiNNaker:
    • Large-Scale Neural Simulations: SpiNNaker (Spiking Neural Network Architecture) simulates large-scale neural networks, supporting neuroscience research and neuromorphic computing development.
    • Collaboration: Used in collaborative research projects to explore brain function and develop new neuromorphic algorithms and hardware.
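
The TrueNorth figures quoted in item 1 are internally consistent, as a quick arithmetic check shows; the 256×256 synapse crossbar per core is part of TrueNorth's published design, and the rest follows from the numbers in the list above.

```python
# Sanity-checking IBM TrueNorth's quoted specs from the list above.
cores, neurons_per_core = 4096, 256
neurons = cores * neurons_per_core                      # ~ "1 million neurons"
synapses = cores * neurons_per_core * neurons_per_core  # 256x256 crossbar per core
print(f"{neurons:,} neurons, {synapses:,} synapses")    # 1,048,576 and 268,435,456
```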




ABOUT US​

Established in 2005, E-SPIN is a private enterprise representing Enterprise Solutions Professional on Information and Network. Serving as a regional hub in South East Asia (SEA) with operations spanning Malaysia, Singapore, Indonesia, Thailand, the Philippines, and Vietnam, as well as in the Greater China Region (GCR) covering Hong Kong, Macau, and China, the company engages in international trade across adjacent nations.
  • E-SPIN specializes in delivering innovative Enterprise ICT solutions, distribution, and international trade, along with shared services outsourcing (SSO). Through a collaborative approach with leading technology partners, E-SPIN offers a comprehensive suite of solutions encompassing solutions consulting, network and systems integration, portal development, application integration, product training, skill certification, project management, maintenance support, and outsourcing management services. These services are tailored to meet the diverse needs of partners, enterprises, government entities, and military clients, delivering holistic value-added solutions.

  • E-SPIN today refers to the global organisation, and may refer to one or more of the member firms of E-SPIN Group of Companies, each of which is a separate legal entity.

WHAT WE DO​

E-SPIN specializes in delivering a range of value-added services within the regions where the company operates. These services include:
  • Enterprise Technology Product Distribution & Trading
  • Solution Consultancy
  • Solution Architecture
  • Network / System Integration
  • Global Sourcing and Turnkey Project Management
  • Product/System/Technology Migration and Modernisation
  • Product/Project Training and Knowledge Transfer
  • Product/Project Maintenance Support
  • Shared Services and Outsourcing (SSO)
  • Managed Services
  • Application Security Testing (AST) as a Service (SaaS)
  • Network Monitoring System (NMS) as a Service (SaaS)
  • Anything as a Service (XaaS)

  • These offerings are tailored to meet the evolving needs of businesses and organizations, providing comprehensive solutions and support across various aspects of enterprise technology and operations.

ACHIEVEMENTS​

We offer enterprise technology solutions and products to channel partners, corporate entities, and government clients within the regions where E-SPIN operates. Additionally, we cater to international markets and undertake projects spanning multiple countries or on a global scale, provided they are commercially viable.
  • In 2005, E-SPIN was founded.
  • In 2015, E-SPIN operated in and served channel partners and end clients across 11 countries in the regions where E-SPIN does business.
  • In 2020, E-SPIN celebrated 15 years in business, with the business still expanding: 100+ channel partners and end clients.
  • 300+ completed projects, and still growing.
 
  • Like
  • Fire
Reactions: 16 users
A very exciting company to watch: Alat

It is Saudi Arabia's ambition to build a world-class manufacturing hub in the Kingdom through next-generation technologies and sustainable practices.

Alat, headquartered in Riyadh, has been established to create a global champion in electronics and advanced industrial segments and mandated to create world class manufacturing enabled by global innovation and technology leadership. Alat is partnering with global technology leaders to transform industries like semiconductors, smart devices and next-gen infrastructure while establishing world class businesses in the Kingdom, powered by clean energy.

They have received $100 Billion USD in funding from the Saudi Arabia Public Investment Fund (PIF).

Alat is led by His Royal Highness Mohammed bin Salman bin Abdulaziz Al-Saud, Crown Prince and Prime Minister of the Kingdom of Saudi Arabia.

On the Alat Executive Leadership team is Ross Jatou, President of their Semiconductors Business unit. He only formally announced his appointment yesterday.

Ross came from Onsemi, where he spent 8 years as Senior Vice President and General Manager of their Intelligent Sensing Group. Prior to Onsemi, he was with Nvidia for 14 years.

Ross is well aware of BrainChip where he re-posted this on LinkedIn 3 weeks ago.




Watch the Alat CEO video here https://www.alat.com/en/about/what-is-alat/

Alat has partnered for a joint venture with SoftBank: "Alat and SoftBank Group form a strategic partnership to manufacture groundbreaking industrial robots in the Kingdom" | SoftBank Group Corp.

“The new JV will build industrial robots based on intellectual property developed by SoftBank Group and its affiliates that will perform tasks with minimal additional programming, that are ideally suited for industrial assembly and applications in manufacturing and production. The robot manufacturing factory that the JV will create in the Kingdom is a lighthouse factory, that will use the latest technology to manufacture unprecedented next generation robots to perform a wide variety of tasks”. The first factory is targeted to open in December 2024.


This is what Alex Divinsky (Ticker Symbol You) posted about Alat earlier today.


https://www.linkedin.com/posts/acti...qv?utm_source=share&utm_medium=member_desktop


Chippers, it would be massive if we got in with Alat !

And I’ve got positive vibes about it.

DYOR.
Hi @Terroni2105

I just saw the below and the previous LinkedIn post by Ross on their TOF being used in our demo with Onsemi.

Did a search and saw you already picked up on them.

Agree, would be a nice hook up and I like who he is also speaking with. Would like BRN to have a seat at that table for sure :)


Ross Jatou
President - Semiconductors at ALAT
1mo

It was an honor to meet with Cristiano Amon, the CEO of Qualcomm, during his visit to Riyadh. Our discussions centered around exploring potential collaboration opportunities between Qualcomm and Alat. It was an insightful conversation, and we hope to continue our dialogue in the future. Thank you for taking the time to meet with us.
 
  • Like
  • Fire
  • Love
Reactions: 12 users

Iseki

Regular
So why isn't Sean booking the first premium economy ticket to see if they want to invest?
 
  • Like
Reactions: 2 users

Tothemoon24

Top 20





On May 9, 2024, the U.S. National Highway Traffic Safety Administration (NHTSA) issued a final rule mandating that all passenger vehicles and light trucks sold in the United States after September 2029 must be equipped with an Automated Emergency Braking (AEB) system. This is a significant step forward in mainstreaming technology that is already standard in all new luxury vehicles and available as an enhanced safety upgrade in most mass-market models. However, while the NHTSA's decision is welcome for driver, pedestrian, and cyclist safety, the effectiveness of these systems, particularly in night driving conditions, remains a concern due to the limitations of cost-efficient sensors used in mass-market vehicles.

The automotive electronics development cycle is considerably longer than that of complex consumer electronics like smartphones, often compared to dog years. The NHTSA 2029 mandate is expected to trigger a push among automotive OEMs to meet the new requirements economically within the next five years. AEB technology, first introduced by Volvo in 2010, has proven effective enough over time to become pervasive. The most advanced AEB systems combine a variety of sensors and sensor types (radar, camera, lidar, ultrasonic) and the silicon processing power to enhance accuracy and reduce false positives, which can potentially cause collisions that the systems are designed to prevent. However, last month’s mandate is bound to have an impact on some OEMs, forcing them to balance accuracy, BOM cost, and system complexity for mass market vehicles.

A critical issue for AEB systems is their performance in low-light conditions. Research supports the need for improved nighttime AEB performance. According to Jessica Cicchino's study in Accident Analysis & Prevention (AAP), "AEB with pedestrian detection was associated with significant reductions of 25%-27% in pedestrian crash risk and 29%-30% in pedestrian injury crash risk. However, there was no evidence that the system was effective in dark conditions without street lighting…" [Jessica Cicchino, AAP May 2022]

The effectiveness of automotive CMOS image sensors commonly used in these systems diminishes after dark. This is particularly concerning since drivers with limited visibility and reaction time are most dependent on AEB systems and other ADAS systems at night. Pedestrian fatalities in the U.S. have nearly doubled since 2001, with over 7,500 deaths nationwide in 2021, and about 77% of pedestrian fatalities happening after dark. Although the NHTSA's ruling is a positive move towards improving safety, the challenge of cost-effective solutions for nighttime driving remains for nearly 80% of the vehicle-on-pedestrian fatalities the mandate certainly seeks to mitigate.

Fortunately, AI-based computational imaging offers a promising solution. By applying real-time denoising using neural networks and embedded neural network processors (NPUs), the nighttime range and accuracy of automotive sensors can be significantly enhanced. This AI denoising software runs on existing automotive SoCs with embedded NPUs and removes temporal and spatial noise from the RAW image feed from the sensor before processing, allowing the analog gain and exposure time to be increased without increasing the associated sensor noise.
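
Visionary.ai's actual neural denoiser is not public, so the sketch below uses a classical stand-in, a recursive temporal average over RAW frames, to illustrate the trade-off the post describes: suppressing zero-mean sensor noise is what allows exposure time or analog gain to be raised without a matching rise in noise. A real ADAS denoiser must also handle motion, which this toy ignores.

```python
import numpy as np

def temporal_denoise(frames, alpha=0.2):
    """Exponential moving average over RAW frames (static scene assumed)."""
    acc = frames[0].astype(float)
    for f in frames[1:]:
        acc = (1 - alpha) * acc + alpha * f.astype(float)
    return acc

rng = np.random.default_rng(0)
scene = rng.uniform(0, 1, size=(64, 64))                   # ground-truth radiance
noisy = [scene + rng.normal(0, 0.2, scene.shape) for _ in range(16)]
den = temporal_denoise(noisy)
print("single-frame RMSE:", np.sqrt(np.mean((noisy[-1] - scene) ** 2)).round(3))
print("denoised RMSE:   ", np.sqrt(np.mean((den - scene) ** 2)).round(3))
```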

This method does not require any modifications or recalibration of the existing image signal pipeline (ISP). In initial OEM road tests, AI denoising works effectively with both high-cost low-light-capable sensors and mainstream automotive CMOS sensors, effectively giving them "steroids" for better and more accurate night vision. This improved night vision translates into earlier and more accurate computer vision results such as nighttime pedestrian detection in AEB systems.

Since this is a software upgrade to existing and planned ECUs leveraging existing/roadmap Tier-2 fabless SoCs, the time required for integration, testing, and productization is much lower compared to hardware-based alternatives.

I am proud to be part of a dynamic team of AI computational image scientists and software engineers who are changing the world by delivering technology that will potentially mitigate thousands of fatalities in the coming years.

For more information on how AI-based computational imaging can improve the nighttime performance and accuracy of ADAS, as well as human vision-assist systems, contact me via LinkedIn or consult one of our Tier-2 fabless partners about their adoption plans for AI-based computational imaging from Visionary.ai.
 
  • Like
  • Love
  • Fire
Reactions: 13 users

Rach2512

Regular
From 6 days ago, sorry if already posted.


 
  • Like
  • Love
  • Fire
Reactions: 10 users

Learning

Learning to the Top 🕵‍♂️
Sounds like the professor is impressed enough to become a shareholder in our Company.

We can't know how big his "bet" is though.

And anyway, what would he know?
He's probably just caught up in the "hype" like the rest of us 🙄..
Hi Dingo,

I'd rather follow the "hype" of a professor than a FOOL, me thinks. 😏🤔🫡

😁😁😁😁

Learning 🪴
 
  • Like
  • Haha
  • Fire
Reactions: 10 users

Earlyrelease

Regular
So that nefarious traders don't steal shares from new-to-the-game investors, I remind those new investors that the Australian tax year ends in June. This means that those who have taken profits on shares and face a tax bill tend to sell other shares, which may be at a loss, to offset the gain so no net tax is paid. Why do I say this? Well, in the last two weeks of June there is a bit of turbulence in some stocks' share prices and volumes traded. Bear this in mind before you make any rash decisions. Further, if there is no news from the company soon, then we are prime for shorters to take advantage of the trading situation that will present itself.

So do what you think is right for your own circumstances, but knowledge allows you to make informed decisions and not be hoodwinked into parting with your shares.
 
  • Like
  • Fire
  • Love
Reactions: 30 users
Whilst this interview is from late last year, I hadn't seen or read it, and I'm pretty sure most would know this gentleman's name from TCS in our dot joining.

If you don't, just search Arpan and you'll get like 6 pages of posts to wade through.

I can understand why the BRN relationship commenced, and also an element of why it can take some time, given his thoughts on the software-hardware co-design process. Unfortunately, it's not all just PnP (plug and play).




Embedded Systems: A Journey with Arpan Pal, TCS Research’s Distinguished Chief Scientist​

Arpan Pal, a distinguished Chief Scientist and Research Area Head at TCS Research, has carved a prominent niche in Intelligent Sensing, Signal Processing & AI, and Edge Computing. With an impressive career spanning over three decades, he has contributed significantly to the advancements in Embedded Devices and Intelligent Systems. His expertise lies at the intersection of hardware and software, where he thrives, making significant contributions to embedded systems.

In this interview, Arpan delves into the intricacies of his career journey, shedding light on the inspirations that led him to pursue a path in embedded systems and the subsequent evolution of his expectations. Furthermore, he generously shares insights into the surprises and challenges encountered, emphasizing the critical balance between technological innovation and real-world applications. As a seasoned professional, Arpan offers invaluable advice for aspiring scientists and engineers in the field, providing a roadmap for success that revolves around a deep understanding of hardware-software co-design and the adaptability to emerging technologies.

Arpan envisions a future for Embedded Systems that is deeply rooted in the principles of power efficiency and sustainable computing. With his visionary perspective on the transformative potential of brain-inspired neuromorphic computing and spiking neural networks, Arpan anticipates a paradigm shift towards energy-conscious AI systems. Pal’s remarkable contributions guide the future of technology and innovation as we delve deeper into the world of embedded intelligent systems.

What inspired you to go into embedded systems? Would you say your career has matched what your original expectations were? If so, what? If not, why not?​


For me, embedded systems had been a natural fit since I liked both hardware and programming and my first two jobs were in hardware-driven embedded systems – one was to build a microcontroller-based PSTN telephone exchange monitoring system, and the other was to build missile seeker systems. As I began working on these projects, I realized that I loved embedded systems because it is the only field that allows one to work at the intersection of hardware and software and requires knowledge of both electronics and software programming.

So far, the experience and the journey have exceeded my expectations. The biggest satisfaction is to see how embedded systems are making a comeback in the form of Edge Computing and how the concept of “AI at the Edge” is becoming mainstream for IoT, Robotics and AR/VR as it enables reliable, low latency, low power yet privacy-preserving intelligence at the edge.

What in your career has surprised you the most? Are there any challenges you overcame that you’d like to share?​


The biggest surprise in the early part of my career was discovering that, for a given use case, it is often possible to build a computationally lighter version of a sophisticated, complex algorithm when faced with the compute/memory/power constraints of embedded systems, without any significant compromise on algorithm performance. I have applied this understanding again and again in my career.

The main challenge in embedded systems research is how to marry technological novelty to a visible and useful impact in the application. When I worked in missile technology, this challenge manifested itself in designing novel real-time target-tracking algorithms that can run on a DSP chip. In my sensing work in TCS for healthcare, this meant designing AIML/Signal Processing algorithms that consume as little power as possible so that they can work with wearables. Our Industry 4.0 intelligent IoT work involved designing systems that provide real-time or near-real-time response with deterministic latency.

The other challenge is at the platform level, where we have come a long way from tiny microcontrollers to DSP processors to AI chipset accelerators. But what has not changed is that an algorithm will always need more time, memory, and power than is available in the target embedded hardware – optimizing it to fit the target hardware is always a challenging task that requires embedded engineering expertise.

What resources or skills did you find most helpful when moving up in your career?​


Key skills are as follows:
  • A thorough understanding of hardware system features and limitations is essential for abstracting their implications for embedded applications.
  • When dealing with real-time systems, how to make software optimally utilize the hardware – hardware-software co-design is the key.
  • Understanding how to map the impact of an application onto the novelty of an embedded system in terms of system/technology, and how an application-level constraint translates into a system-level constraint in an embedded system.

What advice would you give to scientists/engineers in embedded systems?​


The first piece of advice will be to understand the beauty and the nuances of the hardware-software co-design in embedded systems, which is unique in terms of hardware capability and software features.

The second piece of advice will be to keep an open mind and be ready to adapt to new technologies/techniques as they come. Let’s take an example – In today’s world AI is the hype word; however, AI on embedded systems is not really well-understood yet. Embedded Edge Computing technology is coming up in a big way to address this.

The third piece of advice is to identify a problem and then use technology to solve it, rather than going bottom-up, building a novel technology system first and then looking for a suitable application.

What do you see as the future of Embedded Systems?​


When will embedded intelligent systems become truly power-aware? Green computing is indispensable as we forge towards a sustainable future. Embedded System engineers are inherently trained to make their algorithms work on low-power, low-latency-constrained embedded devices. The same principles need to be applied to transform over-parameterized ultra-large and power-hungry AI models into power-efficient AI systems.

Our brain computation needs only 20 Watts, while a typical GPU cluster may need tens of kilowatts of power – how do we design AI systems that consume power in the order of our brain? In the area of low-power embedded systems, brain-inspired neuromorphic computing and spiking neural networks (SNN) tailor-made for non-Von-Neumann Neuromorphic architecture will result in significant power saving. SNN on Neuromorphic architecture is a great example of nature-inspired hardware-software co-design.
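
To put rough numbers on the gap the interview cites, here is a back-of-envelope comparison under loudly stated assumptions: the per-operation energies and the 90% event sparsity below are illustrative only, and real figures vary widely by chip, precision, and workload.

```python
# Back-of-envelope power comparison: dense MACs vs sparse event-driven SNN ops.
dense_macs = 1e9                      # MACs per inference for a hypothetical CNN
rate_hz = 30                          # inferences per second
e_mac, e_event = 4.6e-12, 1.0e-12     # joules per op (assumed, not measured)
sparsity = 0.90                       # fraction of ops the event-driven SNN skips

p_dense = dense_macs * rate_hz * e_mac
p_snn = dense_macs * (1 - sparsity) * rate_hz * e_event
print(f"dense: {p_dense*1e3:.1f} mW, event-driven: {p_snn*1e3:.1f} mW, "
      f"ratio: {p_dense/p_snn:.0f}x")
```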

Learn More About Arpan Pal​


[Infographic: Arpan Pal's career timeline]

Arpan Pal has more than 30 years of experience in the areas of Intelligent Sensing, Signal Processing & AI, Edge Computing and Affective Computing. Currently, as Distinguished Chief Scientist and Research Area Head, Embedded Devices and Intelligent Systems, TCS Research, he is working in the areas of Connected Health, Smart Manufacturing, Smart Retail and Remote Sensing.

Arpan has been on the editorial boards of notable journals like ACM Transactions on Embedded Systems and the Springer Nature Journal on Computer Science. Additionally, he is on the TPC of notable conferences like IEEE Sensors, ICASSP, and EUSIPCO. He has filed 180+ patents (of which 95+ have been granted in different geographies) and has published 160+ papers and book chapters in reputed conferences and journals. He has also written three complete books, on IoT, Digital Twins in Manufacturing, and the application of AI in cardiac screening. He is on the governing/review/advisory boards of Indian Government organizations like CSIR and MeitY, as well as of educational institutions like IIT, IIIT, and Technology Innovation Hubs. Arpan is a two-time winner of the Tata Group's top innovation award in Tata InnoVista under the Piloted Technology category.

Prior to joining Tata Consultancy Services (TCS), Arpan worked for DRDO, India, as a scientist on missile seeker systems, and at Rebeca Technologies as their Head of Real-time Systems. He has B.Tech and M.Tech degrees from IIT Kharagpur, India, and a PhD from Aalborg University, Denmark.
 
  • Like
  • Fire
  • Love
Reactions: 29 users
"The automotive electronics development cycle is considerably longer than that of complex consumer electronics like smartphones, often compared to dog years."

That was reassuring :)
 
  • Like
  • Haha
Reactions: 4 users
Will they ever sign anyone ??
 
  • Like
  • Wow
  • Thinking
Reactions: 5 users

MegaportX

Regular
Will they ever sign anyone ??
Surely, if they are, it has to be soon, during the AI boom that is currently underway everywhere.
 
  • Like
Reactions: 2 users