BRN Discussion Ongoing

IloveLamp

Top 20
I am of the opinion we are involved with each and every one of the companies mentioned here (not based on this, but dyor)

Screenshot_20230727_055822_LinkedIn.jpg
 
Last edited:
  • Like
  • Fire
  • Love
Reactions: 25 users

IloveLamp

Top 20
Screenshot_20230727_060121_LinkedIn.jpg
 
  • Like
  • Fire
  • Love
Reactions: 37 users

FJ-215

Regular
This is from Forbes, worth a read..

Tenstorrent Could Reshape The AI And CPU Competitive Landscape

Introduction


"It is hard to believe the difference a year makes. In 2021, there were over 100 public and venture-backed startups with the same mission, to compete with NVIDIA in producing the fast chips needed to create and run artificial intelligence (AI). Fast forward to 2023, and now many companies are struggling to gain market traction or acquire enough capital to keep going. Part of the problem is undoubtedly the global economy; many AI adopters and investors do not have the resources or courage to give new chips a chance. But the real culprit is NVIDIA; they are proving a lot harder to beat than many companies and their investors ever imagined. So why is Tenstorrent, a Toronto-based startup, any different? Why should we believe that Tenstorrent could succeed where so many are struggling and even failing? This paper will explore what distinguishes Tenstorrent from the scores of other startups from leadership, strategy, and technology perspectives.

If an AI startup isn’t scared, it doesn’t get it.



Do we really need yet another AI hardware startup? Over the last five years, the industry has been flooded with over 100 such firms. Some have already closed shop, realizing that NVIDIA technology for data center AI is tough to beat. Consequently, investors have become far more cautious. Realistically, all these companies are trying to vie for a spot as a second source for NVIDIA in AI data center processing for training and inference processing. Could they win? In our opinion, they should be thrilled if they could get a combined 10% of the data center AI pie over the next three years. Yes, NVIDIA is THAT good.



Into this storm enters Tenstorrent, the Toronto-based AI Hardware startup with offices in the Bay Area, Austin, and Tokyo, Japan. Over the last year, the company has begun to expand from early-phase research and development into becoming a real company on a mission, with marketing, sales, support, and functional area execs adding to the engineering talent the company has been recruiting. The company has now grown to over 280 employees."


Click the link for the full article.
 
  • Like
  • Fire
  • Love
Reactions: 28 users

chapman89

Founding Member
I am of the opinion we are involved with each and every one of the companies mentioned here (not based on this alone, but dyor)

View attachment 40868
We are with Stellantis through Valeo, and I believe we are also with Honda, as Honda and Mercedes both worked with Valeo and were approved for SAE Level 3 driving.
 
  • Like
  • Love
  • Fire
Reactions: 55 users

Tothemoon24

Top 20

Socionext​

By Shreyas Basavaraju on July 26, 2023


As the realms of technology and biology converge, hardware neurons powered by FPGA and ASIC are emerging as groundbreaking tools to unlock the next level of artificial intelligence. These bio-inspired components, a mainstay of robust artificial neural networks, have the potential to supercharge AI systems, offering enhanced speed, reduced energy consumption, and leading to more effective AI applications. In this post, we’ll look at the intricate world of hardware neurons synthesized in custom FPGA or ASIC and explore how this approach will revolutionize the AI landscape.
The computational power of artificial neural networks (ANNs) is the unsung hero driving the rapid advancement in artificial intelligence (AI). The design of ANNs simulates the behavior of biological neurons, which are the fundamental building blocks of the brain. ANNs have many applications, including image and speech recognition and natural language processing.
Fully appreciating the potential of ANNs requires us to understand the basic principles of neurons, their hardware structure, and their function in neural networks. By unlocking the full potential of neurons and optimizing their hardware structure with custom ASIC or FPGA implementations, we can achieve even greater advancements in AI. These continuous discoveries can lead to faster speeds and reduced energy consumption, resulting in more efficient and effective AI systems.

Biology Inspires Hardware Neurons​

Our journey to understanding AI’s power begins by delving into the fascinating world of neuroscience. The intricate structure and function of biological neurons, the brain’s essential building blocks, have inspired the development of advanced neural hardware. Figure 1 illustrates the complexity of a biological neuron, with dendrites receiving inputs and the cell body performing intricate information processing.
Figure 1 – A biological neuron
Researchers distilled this complex biological system into a simpler mathematical model called a perceptron, clearing the path for developing neural hardware (Figure 2). Unlike the more complex and biologically realistic Izhikevich model (which captures the spiking behavior of neurons) and Hodgkin-Huxley model (which captures the biophysical properties of the neuron membrane), the perceptron captures the fundamental behavior of neurons without all the biological details.
Figure 2 – A hardware neuron

The Hardware Brain: Multiplier Accumulator Unit (MAC)​

At the heart of neural hardware lies the multiplier accumulator unit (MAC) (Figure 3). This unit emulates the functions of a neuron’s dendrites and cell body, receiving and processing input signals.
Figure 3 – MAC schematic
Much like a biological axon synapse, the MAC transmits an output electrical signal when the cumulative strength of the input signals surpasses a predefined threshold. The MAC is a computational powerhouse, using the remarkable perceptron mathematical model to mirror a neuron’s input response by summing the weighted input signals to produce the final output.
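A software analogue may make this concrete. The short Python sketch below is only an illustrative model of the weighted-sum-and-threshold behaviour described above (the function name and the specific threshold convention are assumptions, not the article's FPGA/ASIC implementation): it multiplies each input by its weight, accumulates the products, and fires when the sum reaches the threshold.

```python
def mac_neuron(inputs, weights, threshold):
    """Minimal model of a MAC-style hardware neuron:
    multiply each input by its weight, accumulate the products,
    and fire (output 1) when the sum reaches the threshold."""
    acc = 0.0
    for x, w in zip(inputs, weights):
        acc += x * w  # the multiply-accumulate step
    return 1 if acc >= threshold else 0
```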

Simulating Neurons for Real-World Solutions​

Take a look at Figure 4 to see the simulation results of a neuron synthesized in FPGA and performing a logical OR operation with two single-bit inputs. The waveform showcases how the neuron reacts to combinations of 0s and 1s for its inputs. Consistent with our expectations, the output peaks at 1 when at least one input is 1 and drops to 0 when both inputs are 0, as visualized in input table 1.
Figure 4 – Simulated waveform result of a hardware neuron
Input table 1
The synthesized neuron, shown in Figure 5, demonstrates the capability of hardware neurons to execute logical operations, establishing them as crucial elements for more sophisticated neural networks. Implementing these networks on FPGA or ASIC offers the benefits of high speed and low power consumption, making them well-suited for a wide range of real-time applications.
Figure 5 – Synthesized schematic of the neuron
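To tie this back to Figure 4 and Input table 1, the self-contained snippet below checks that a single weighted-sum-and-threshold neuron reproduces the OR truth table. The weights of 1.0 and the 0.5 threshold are assumed illustrative values, not the article's synthesized parameters.

```python
def mac_neuron(inputs, weights, threshold):
    # Weighted sum followed by a threshold, as in the MAC sketch above.
    return 1 if sum(x * w for x, w in zip(inputs, weights)) >= threshold else 0

# With a weight of 1.0 on each input and a threshold of 0.5,
# the neuron behaves as a two-input OR gate.
for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", mac_neuron((a, b), (1.0, 1.0), 0.5))
# Prints: 0 0 -> 0, 0 1 -> 1, 1 0 -> 1, 1 1 -> 1
```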

The Future is Here: Hardware Neural Network Applications​

The significance of neural networks within the field of artificial intelligence is far-reaching and is an essential component for many cutting-edge technologies and advancements. Researchers and vendors are innovating with hardware neurons to create AI systems that can learn and adapt to new and unpredictable situations and perform on-the-fly decision-making for enhanced human safety, comfort, and operational efficiency.
The range of AI applications using neural networks is far-reaching and exciting. The possibilities are endless, from intelligent temperature control systems that monitor outdoor temperatures and adjust indoor settings for optimal comfort (Figure 6) to AI-based lighting systems that observe and measure the indoor environment to activate lights and mirror displays as soon as an individual is detected within the perimeter (Figure 7).
Figure 6 – Intelligent self-adjusting temperature control
Figure 7 – Smart lighting and mirror
Custom neuron hardware will play a vital role in meeting the demanding requirements of AI applications like the above by providing higher energy efficiency and instantaneous response to the sensor signals.
A key advantage of using custom hardware is low-latency operation. A custom AI inference ASIC can process and analyze data in real time, faster and more accurately, without constant communication with the cloud. The demand for inference hardware neural networks will increase as AI technology evolves, since they offer a powerful, efficient solution for real-time AI applications.
The development of custom AI hardware neurons is a critical step in the evolution of advanced AI, and it holds great promise for the creation of intelligent systems that can improve our world in countless ways.

Work with a leader in hardware solutions for AI​

Socionext is a leading fabless ASIC supplier specializing in a wide range of standard and customizable SoC solutions for automotive, consumer, and industrial markets. We provide our customers with quality semiconductor products based on extensive and differentiated IPs, proven design methodologies, state-of-the-art implementation expertise, and full support.
Socionext offers the right combination of IPs, design expertise, and support to implement large-scale, fully customizable SoC and FPGA solutions to meet the most demanding and rigorous AI application requirements.
Contact us today to learn how Socionext can help you accelerate your AI system design.
 
  • Like
  • Fire
Reactions: 30 users

FJ-215

Regular
And this from Reuters:

Canadian AI computing startup Tenstorrent and LG partner to build chips​

By Jane Lee
May 30, 2023 11:04 PM GMT+10 · Updated 2 months ago


OAKLAND, California, May 30 (Reuters) - Canadian AI computer design startup Tenstorrent said on Tuesday it was partnering with South Korea's consumer electronics firm LG Electronics Inc (066570.KS) to build chips that power smart TVs, automotive products and data centers.

Tenstorrent, started in 2016, designs computers to train and run artificial intelligence models and works on both the software and hardware, CEO Jim Keller said in an interview. Keller is an engineer best known for his pioneering work in designing chips at Apple Inc (AAPL.O), Tesla Inc (TSLA.O), and chipmaker Advanced Micro Devices Inc (AMD.O).


Keller, an early investor in Tenstorrent, took the helm in 2023. The company, already worth $1 billion according to data firm PitchBook, has not revealed any of its customers until now.

LG will initially use Tenstorrent's AI chip blueprint to design its own chips, but the partnership is more strategic, said David Bennett, Tenstorrent's chief customer officer.

"What we're looking at is also some of the technology that LG has developed. Could it not be something that we use either in our own products or potentially with other future customers."


Tenstorrent has also designed a processor chip using RISC-V, a relatively new open standard chip architecture competing with Arm Ltd's Arm architecture. While many chip startups focus on one type of chip, Keller said his team was developing both the AI chip and processor as they will need to work closely together to handle the fast-changing AI models.

"We have to aim at the whole thing. ... It's quite early. And it was built on the available components," said Keller about today's AI and AI hardware landscape.


"In the last five years people learned so much about how this works and made real progress. But it doesn't look like we're anywhere close to 'this is the right way to do it, the best way to do it, or the final thing.'"

Reporting by Jane Lanhee Lee; Editing by Richard Chang
 
  • Like
  • Fire
  • Wow
Reactions: 28 users

IloveLamp

Top 20
  • Like
  • Fire
Reactions: 19 users

Quatrojos

Regular
  • Like
  • Fire
  • Love
Reactions: 18 users

IloveLamp

Top 20
Screenshot_20230727_080855_LinkedIn.jpg
 
  • Like
  • Fire
Reactions: 17 users

Cardpro

Regular
What is Transfer Learning?
Transfer learning is the process of transferring learned features from one application to another. It’s a commonly used training technique where a model trained on one task is re-trained for use on a different task. You can apply transfer learning on vision, speech, and language-understanding models.
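As a rough illustration of the re-training step described above (a generic PyTorch sketch, not any vendor-specific flow; it assumes torchvision ≥ 0.13, and the class count, batch, and learning rate are made-up values):

```python
import torch
import torch.nn as nn
from torchvision import models

# Load a model whose features were learned on one task (ImageNet classification).
model = models.resnet18(weights="IMAGENET1K_V1")

# Freeze the feature extractor so the learned features are transferred as-is.
for param in model.parameters():
    param.requires_grad = False

# Replace the final layer for the new task (10 classes is an assumption).
model.fc = nn.Linear(model.fc.in_features, 10)

# Only the new layer's parameters are updated during re-training.
optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

# One illustrative training step on a dummy batch.
images = torch.randn(4, 3, 224, 224)
labels = torch.randint(0, 10, (4,))
optimizer.zero_grad()
loss = criterion(model(images), labels)
loss.backward()
optimizer.step()
```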

:( tbh I was super excited as I thought it was a special term used by us... hahahhaah damn..
 
Last edited:
  • Like
  • Fire
Reactions: 6 users

MDhere

Regular
Quite possibly the below?

UBS Securities Australia is a wholly owned subsidiary of UBS AG, and is a related company of UBS Nominees Pty Ltd (ABN 32 001 450 522) and Warbont Nominees Pty Ltd (ABN 19 003 943 799).

UBS Securities Australia uses the nominee services provided by Warbont Nominees Pty Ltd ("Warbont Nominees"), during the transitional settlement period, in accordance with the Market Integrity Rules on behalf of clients of UBS Securities Australia.

The Market Integrity Rules require that all financial products being held for a client during this period may only be registered under a nominee company. Warbont Nominees is the company that has been established to hold these financial products on your behalf in accordance with the Market Integrity Rules. These services are conducted under the Australian Financial Services Licence of UBS Securities Australia and UBS Securities Australia is responsible for the conduct of Warbont Nominees in respect of those services. Warbont Nominees is a wholly owned subsidiary of UBS Securities Australia.
And let's not forget UBS are proud sponsors of Mercedes :)
Screenshot_20230727-090814_Google.jpg
 
Last edited:
  • Like
  • Fire
  • Love
Reactions: 9 users

GStocks123

Regular
A possible NVISO link
 

Attachments

  • IMG_2724.png (710.3 KB · Views: 104)
  • IMG_2723.png (685.9 KB · Views: 94)
  • Like
  • Fire
  • Love
Reactions: 9 users

IloveLamp

Top 20
  • Like
  • Thinking
  • Fire
Reactions: 10 users

TheDrooben

Pretty Pretty Pretty Pretty Good
Carnegie Mellon collaborating with Meta and others to run LLMs on the edge........



Besides this, yesterday, LeCun shared Meta AI’s latest breakthrough in LLMs. LIMA, made in collaboration with Carnegie Mellon University, University of Southern California, and Tel Aviv University, is a 65 billion parameter model built with LLaMA and fine-tuned with a standard supervised loss with only 1,000 carefully curated prompts and responses.



Recent technological advancements have made it possible to run AI models on devices, bringing edge AI into the spotlight. Engineers from Carnegie Mellon University, University of Washington, Shanghai Jiao Tong University, and AI startup OctoML have collaborated to run large language models (LLMs) on iPhones, Androids, PCs, and browsers. This breakthrough could potentially enable the widespread use of generative AI.

The most interesting part is that it does not require RLHF (reinforcement learning with human feedback).

Larry (Happy as)
 
Last edited:
  • Like
  • Fire
  • Thinking
Reactions: 25 users

Boab

I wish I could paint like Vincent
  • Haha
  • Wow
  • Like
Reactions: 10 users

TECH

Regular
Well, in 47 business days September and the 3rd quarter will be done and dusted; not long really, is it? AKD II will have been launched (all going well).

We embark on yet another phase in our journey. All eyes will be focused on products with AKD I IP embedded and on any new IP license agreements signed following the release of the AKD 2.0 technology, which a number of clients have helped shape by laying down their personal wish lists of features to be added or removed from the design, wishes which Brainchip has, on the surface at least, accommodated. So the big question still remains: will they now "finally commit"? Let's see some of these engagements closed out. The company's staff and stakeholders need to be rewarded now, fair is fair, so sign on the dotted line please.

Just my view of the day...Tech 🧐
 
  • Like
  • Love
  • Fire
Reactions: 75 users

Bravo

If ARM was an arm, BRN would be its biceps💪!
Carnegie Mellon collaborating with Meta and others to run LLMs on the edge........



Besides this, yesterday, LeCun shared Meta AI’s latest breakthrough in LLMs. LIMA, made in collaboration with Carnegie Mellon University, University of Southern California, and Tel Aviv University, is a 65 billion parameter model built with LLaMA and fine-tuned with a standard supervised loss with only 1,000 carefully curated prompts and responses.



Recent technological advancements have made it possible to run AI models on devices, bringing edge AI into the spotlight. Engineers from Carnegie Mellon University, University of Washington, Shanghai Jiao Tong University, and AI startup OctoML have collaborated to run large language models (LLMs) on iPhones, Androids, PCs, and browsers. This breakthrough could potentially enable the widespread use of generative AI.

The most interesting part is that it does not require RLHF (reinforcement learning with human feedback).

Larry (Happy as)

That's very interesting Larry, especially as we have ties with the Department of Electrical and Computer Engineering at Carnegie Mellon.

Uam.png

 
  • Like
  • Love
  • Fire
Reactions: 48 users

Pappagallo

Regular
Well, in 47 business days September and the 3rd quarter will be done and dusted; not long really, is it? AKD II will have been launched (all going well).

We embark on yet another phase in our journey. All eyes will be focused on products with AKD I IP embedded and on any new IP license agreements signed following the release of the AKD 2.0 technology, which a number of clients have helped shape by laying down their personal wish lists of features to be added or removed from the design, wishes which Brainchip has, on the surface at least, accommodated. So the big question still remains: will they now "finally commit"? Let's see some of these engagements closed out. The company's staff and stakeholders need to be rewarded now, fair is fair, so sign on the dotted line please.

Just my view of the day...Tech 🧐

Also AKD1500, which was designed on the back of customer feedback. There’s plenty to be excited about over the next 6 months as we watch our intrepid team attempt to cross the chasm.
 
  • Like
  • Love
  • Fire
Reactions: 47 users