BRN Discussion Ongoing

IloveLamp

Top 20
Screenshot_20230727_060121_LinkedIn.jpg
 
  • Like
  • Fire
  • Love
Reactions: 37 users

FJ-215

Regular
This is from Forbes, worth a read...

Tenstorrent Could Reshape The AI And CPU Competitive Landscape

Introduction


"It is hard to believe the difference a year makes. In 2021, there were over 100 public and venture-backed startups with the same mission, to compete with NVIDIA in producing the fast chips needed to create and run artificial intelligence (AI). Fast forward to 2023, and now many companies are struggling to gain market traction or acquire enough capital to keep going. Part of the problem is undoubtedly the global economy; many AI adopters and investors do not have the resources or courage to give new chips a chance. But the real culprit is NVIDIA; they are proving a lot harder to beat than many companies and their investors ever imagined. So why is Tenstorrent, a Toronto-based startup, any different? Why should we believe that Tenstorrent could succeed where so many are struggling and even failing? This paper will explore what distinguishes Tenstorrent from the scores of other startups from leadership, strategy, and technology perspectives.






If an AI startup isn’t scared, it doesn’t get it.



Do we really need yet another AI hardware startup? Over the last five years, the industry has been flooded with over 100 such firms. Some have already closed shop, realizing that NVIDIA technology for data center AI is tough to beat. Consequently, investors have become far more cautious. Realistically, all these companies are trying to vie for a spot as a second source for NVIDIA in AI data center processing for training and inference processing. Could they win? In our opinion, they should be thrilled if they could get a combined 10% of the data center AI pie over the next three years. Yes, NVIDIA is THAT good.



Into this storm enters Tenstorrent, the Toronto-based AI Hardware startup with offices in the Bay Area, Austin, and Tokyo, Japan. Over the last year, the company has begun to expand from early-phase research and development into becoming a real company on a mission, with marketing, sales, support, and functional area execs adding to the engineering talent the company has been recruiting. The company has now grown to over 280 employees."


Click the link for the full article.
 
  • Like
  • Fire
  • Love
Reactions: 28 users

chapman89

Founding Member
I am of the opinion we are involved with each and every one of the companies mentioned here (not based on this alone, but dyor)

View attachment 40868
We are with Stellantis through Valeo, and I believe also with Honda, as Honda and Mercedes both worked with Valeo and were approved for SAE Level 3 driving.
 
  • Like
  • Love
  • Fire
Reactions: 55 users

Tothemoon24

Top 20

Socionext​

By Shreyas Basavaraju on July 26, 2023


As the realms of technology and biology converge, hardware neurons powered by FPGA and ASIC are emerging as groundbreaking tools to unlock the next level of artificial intelligence. These bio-inspired components, a mainstay of robust artificial neural networks, have the potential to supercharge AI systems, offering enhanced speed, reduced energy consumption, and leading to more effective AI applications. In this post, we’ll look at the intricate world of hardware neurons synthesized in custom FPGA or ASIC and explore how this approach will revolutionize the AI landscape.
The computational power of artificial neural networks (ANNs) is the unsung hero driving the rapid advancement in artificial intelligence (AI). The design of ANNs simulates the behavior of biological neurons, which are the fundamental building blocks of the brain. ANNs have many applications, including image and speech recognition and natural language processing.
Fully appreciating the potential of ANNs requires us to understand the basic principles of neurons, their hardware structure, and their function in neural networks. By unlocking the full potential of neurons and optimizing their hardware structure with custom ASIC or FPGA implementations, we can achieve even greater advancements in AI. These continuous discoveries can lead to faster speeds and reduced energy consumption, resulting in more efficient and effective AI systems.

Biology Inspires Hardware Neurons​

Our journey to understanding AI’s power begins by delving into the fascinating world of neuroscience. The intricate structure and function of biological neurons, the brain’s essential building blocks, have inspired the development of advanced neural hardware. Figure 1 illustrates the complexity of a biological neuron, with dendrites receiving inputs and the cell body performing intricate information processing.
Figure 1 – A biological neuron
Researchers distilled this complex biological system into a simpler mathematical model called a perceptron, clearing the path for developing neural hardware (Figure 2). Unlike the more complex and biologically realistic Izhikevich model (which captures the spiking behavior of neurons) and Hodgkin-Huxley model (which captures the biophysical properties of the neuron membrane), the perceptron captures the fundamental behavior of neurons without all the biological detail.
Figure 2 – A hardware neuron
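To make the model concrete, here is a minimal perceptron in Python (an illustrative sketch only, not Socionext's implementation): weighted inputs are summed with a bias and passed through a step activation.

```python
def perceptron(inputs, weights, bias):
    """Perceptron: weighted sum of inputs plus bias, passed through a step function."""
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1 if total > 0 else 0

# Weights chosen by hand so the neuron fires only when both inputs are active (logical AND)
print(perceptron([1, 1], [0.5, 0.5], -0.7))  # -> 1
print(perceptron([1, 0], [0.5, 0.5], -0.7))  # -> 0
```

Spiking models like Izhikevich would add internal state and time dynamics; the perceptron deliberately strips that away, which is what makes it cheap to realize in silicon.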

The Hardware Brain: Multiplier Accumulator Unit (MAC)​

At the heart of neural hardware lies the multiplier accumulator unit (MAC) (Figure 3). This unit emulates the functions of a neuron’s dendrites and cell body, receiving and processing input signals.
Figure 3 – MAC schematic
Much like a biological axon synapse, the MAC transmits an output electrical signal when the cumulative strength of the input signals surpasses a predefined threshold. The MAC is a computational powerhouse, using the remarkable perceptron mathematical model to mirror a neuron’s input response by summing the weighted input signals to produce the final output.
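The behavior described above can be sketched in software (a hypothetical behavioral model, not actual RTL): the MAC accumulates one weight-times-input product per cycle and fires when the running sum reaches a threshold.

```python
def mac_neuron(inputs, weights, threshold):
    """Behavioral model of a MAC unit: accumulate products one input per 'cycle',
    then compare the running sum against a firing threshold."""
    acc = 0
    for x, w in zip(inputs, weights):
        acc += x * w  # one multiply-accumulate per cycle
    return 1 if acc >= threshold else 0

print(mac_neuron([1, 0, 1], [2, 3, 1], threshold=3))  # 2 + 0 + 1 = 3, so it fires: 1
```

In hardware the same loop becomes a multiplier feeding an adder and register, which is why a single MAC maps so directly onto one neuron.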

Simulating Neurons for Real-World Solutions​

Take a look at Figure 4 to see the simulation results of a neuron synthesized in FPGA and performing a logical OR operation with two single-bit inputs. The waveform showcases how the neuron reacts to combinations of 0s and 1s for its inputs. Consistent with our expectations, the output peaks at 1 when at least one input is 1 and drops to 0 when both inputs are 0, as visualized in input table 1.
Figure 4 – Simulated waveform result of a hardware neuron
Input table 1.
The synthesized neuron, shown in Figure 5, demonstrates the capability of hardware neurons to execute logical operations, establishing them as crucial elements for more sophisticated neural networks. Implementing these networks on FPGA or ASIC offers the benefits of high speed and low power consumption, making them well-suited for a wide range of real-time applications.
Figure 5 – Synthesized schematic of the neuron
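The OR behavior shown in Figure 4 and input table 1 is easy to reproduce with the same threshold model (weights chosen by hand here for illustration, not taken from the synthesized design):

```python
def or_neuron(a, b):
    # Single threshold neuron: fires if the weighted sum of the two inputs reaches 1
    return 1 if (1 * a + 1 * b) >= 1 else 0

for a in (0, 1):
    for b in (0, 1):
        print(a, b, '->', or_neuron(a, b))
# 0 0 -> 0; 0 1 -> 1; 1 0 -> 1; 1 1 -> 1
```

The same neuron with different weights and threshold implements AND or NOT, which is why a mesh of these units can build up arbitrary logic before any learning is involved.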

The Future is Here: Hardware Neural Network Applications​

The significance of neural networks within the field of artificial intelligence is far-reaching and is an essential component for many cutting-edge technologies and advancements. Researchers and vendors are innovating with hardware neurons to create AI systems that can learn and adapt to new and unpredictable situations and perform on-the-fly decision-making for enhanced human safety, comfort, and operational efficiency.
The range of AI applications using neural networks is far-reaching and exciting. The possibilities are endless, from intelligent temperature control systems that monitor outdoor temperatures and adjust indoor settings for optimal comfort (Figure 6) to AI-based lighting systems that observe and measure the indoor environment to activate lights and mirror displays as soon as an individual is detected within the perimeter (Figure 7).
Figure 6 – Intelligent self-adjusting temperature control
Figure 7 – Smart lighting and mirror
Custom neuron hardware will play a vital role in meeting the demanding requirements of AI applications like the above by providing higher energy efficiency and instantaneous response to the sensor signals.
A key advantage of using custom hardware is low-latency operation. An AI custom ASIC inference hardware neural network can efficiently process and analyze data in real-time faster and more accurately without constant communication with the cloud. The demand for inference hardware neural networks will increase as AI technology evolves since they offer a powerful, efficient solution for real-time AI applications.
The development of custom AI hardware neurons is a critical step in the evolution of advanced AI, and it holds great promise for the creation of intelligent systems that can improve our world in countless ways.

Work with a leader in hardware solutions for AI​

Socionext is a leading fabless ASIC supplier specializing in a wide range of standard and customizable SoC solutions for automotive, consumer, and industrial markets. We provide our customers with quality semiconductor products based on extensive and differentiated IPs, proven design methodologies, state-of-the-art implementation expertise, and full support.
Socionext offers the right combination of IPs, design expertise, and support to implement large-scale, fully customizable SoC and FPGA solutions to meet the most demanding and rigorous AI application requirements.
Contact us today to learn how Socionext can help you accelerate your AI system design.
 
  • Like
  • Fire
Reactions: 30 users

FJ-215

Regular
And this from Reuters:

Canadian AI computing startup Tenstorrent and LG partner to build chips

By Jane Lee
May 30, 2023, 11:04 PM GMT+10 · Updated 2 months ago


OAKLAND, California, May 30 (Reuters) - Canadian AI computer design startup Tenstorrent said on Tuesday it was partnering with South Korea's consumer electronics firm LG Electronics Inc (066570.KS) to build chips that power smart TVs, automotive products and data centers.

Tenstorrent, started in 2016, designs computers to train and run artificial intelligence models and works on both the software and hardware, CEO Jim Keller said in an interview. Keller is an engineer best known for his pioneering work in designing chips at Apple Inc (AAPL.O), Tesla Inc (TSLA.O), and chipmaker Advanced Micro Devices Inc (AMD.O).


Keller, an early investor in Tenstorrent, took the helm in 2023. The company, already worth $1 billion according to data firm PitchBook, has not revealed any of its customers until now.

LG will initially use Tenstorrent's AI chip blueprint to design its own chips, but the partnership is more strategic, said David Bennett, Tenstorrent's chief customer officer.

"What we're looking at is also some of the technology that LG has developed. Could it not be something that we use either in our own products or potentially with other future customers."


Tenstorrent has also designed a processor chip using RISC-V, a relatively new open standard chip architecture competing with Arm Ltd's Arm architecture. While many chip startups focus on one type of chip, Keller said his team was developing both the AI chip and processor as they will need to work closely together to handle the fast-changing AI models.

"We have to aim at the whole thing. ... It's quite early. And it was built on the available components," said Keller about today's AI and AI hardware landscape.


"In the last five years people learned so much about how this works and made real progress. But it doesn't look like we're anywhere close to 'this is the right way to do it, the best way to do it, or the final thing.'"

Reporting by Jane Lanhee Lee; Editing by Richard Chang
Our Standards: The Thomson Reuters Trust Principles.
 
  • Like
  • Fire
  • Wow
Reactions: 28 users

IloveLamp

Top 20
  • Like
  • Fire
Reactions: 19 users

IloveLamp

Top 20
Screenshot_20230727_080855_LinkedIn.jpg
 
  • Like
  • Fire
Reactions: 17 users

Cardpro

Regular
What is Transfer Learning?
Transfer learning is the process of transferring learned features from one application to another. It’s a commonly used training technique where a model trained on one task is re-trained for use on a different task. You can apply transfer learning on vision, speech, and language-understanding models.
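The idea can be sketched in a few lines of framework-free Python (a toy illustration with hypothetical names, not any particular library's API): a pretrained "backbone" is frozen, and only a small task-specific head is retrained on the new data.

```python
class TinyModel:
    """Toy two-stage model: a pretrained 'backbone' feature extractor and a task head."""
    def __init__(self):
        self.backbone_weight = 2.0   # learned on the original task; kept frozen
        self.head_weight = 0.0       # re-trained on the new task

    def forward(self, x):
        feature = self.backbone_weight * x   # reused feature extractor
        return self.head_weight * feature    # new task-specific head

    def train_head(self, data, lr=0.01, epochs=100):
        # Gradient descent on squared error; only the head is updated,
        # the backbone stays frozen (the essence of transfer learning)
        for _ in range(epochs):
            for x, target in data:
                pred = self.forward(x)
                grad = 2 * (pred - target) * self.backbone_weight * x
                self.head_weight -= lr * grad

model = TinyModel()
model.train_head([(1.0, 4.0), (2.0, 8.0)])   # new task: y = 4x
print(round(model.forward(3.0), 2))          # -> 12.0
```

Because most of the parameters are frozen, retraining needs far less data and compute than training from scratch, which is exactly why the technique is standard practice for vision, speech, and language models.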

:( tbh I was super excited as I thought it was a special term used by us... hahahhaah damn..
 
Last edited:
  • Like
  • Fire
Reactions: 6 users

MDhere

Regular
Quite possibly the below?

UBS Securities Australia is a wholly owned subsidiary of UBS AG, and is a related company of UBS Nominees Pty Ltd (ABN 32 001 450 522) and Warbont Nominees Pty Ltd (ABN 19 003 943 799).

UBS Securities Australia uses the nominee services provided by Warbont Nominees Pty Ltd ("Warbont Nominees"), during the transitional settlement period, in accordance with the Market Integrity Rules on behalf of clients of UBS Securities Australia.

The Market Integrity Rules require that all financial products being held for a client during this period may only be registered under a nominee company. Warbont Nominees is the company that has been established to hold these financial products on your behalf in accordance with the Market Integrity Rules. These services are conducted under the Australian Financial Services Licence of UBS Securities Australia and UBS Securities Australia is responsible for the conduct of Warbont Nominees in respect of those services. Warbont Nominees is a wholly owned subsidiary of UBS Securities Australia.
and let's not forget UBS are proud sponsors of Mercedes :)
Screenshot_20230727-090814_Google.jpg
 
Last edited:
  • Like
  • Fire
  • Love
Reactions: 9 users

GStocks123

Regular
A possible NVISO link
 

Attachments

  • IMG_2724.png
  • IMG_2723.png
  • Like
  • Fire
  • Love
Reactions: 9 users

IloveLamp

Top 20
Another one on the chalk board for Teal



Screenshot_20230727_095708_LinkedIn.jpg
 
  • Like
  • Thinking
  • Fire
Reactions: 10 users

TheDrooben

Pretty Pretty Pretty Pretty Good
Carnegie Mellon collaborating with Meta and others to run LLMs on the edge........



Besides this, yesterday, LeCun shared Meta AI’s latest breakthrough in LLMs. LIMA, made in collaboration with Carnegie Mellon University, University of Southern California, and Tel Aviv University, is a 65 billion parameter model built with LLaMA and fine-tuned with a standard supervised loss with only 1,000 carefully curated prompts and responses.



Recent technological advancements have made it possible to run AI models on devices, bringing edge AI into the spotlight. Engineers from Carnegie Mellon University, University of Washington, Shanghai Jiao Tong University, and AI startup OctoML have collaborated to run large language models (LLMs) on iPhones, Androids, PCs, and browsers. This breakthrough could potentially enable the widespread use of generative AI.

The most interesting part is that it does not require RLHF (reinforcement learning with human feedback),






Larry (Happy as)
 
Last edited:
  • Like
  • Fire
  • Thinking
Reactions: 25 users

Boab

I wish I could paint like Vincent
  • Haha
  • Wow
  • Like
Reactions: 10 users

TECH

Regular
Well, in 47 business days September and the 3rd Quarter will be done and dusted. Not long really, and AKD II will have been launched (all going well).

We embark on yet another phase of our journey. All eyes will be on products with AKD I IP embedded, and on any new IP licence agreements signed with the release of AKD 2.0, a technology a number of clients helped shape by laying down their personal wish lists of what to add or remove during the design process, wishes that Brainchip has, on the surface at least, accommodated. So the big question remains: will they now "finally commit"? Let's see some of these engagements closed out. The company's staff and stakeholders need to be rewarded now. Fair is fair, sign on the dotted line please.

Just my view of the day...Tech 🧐
 
  • Like
  • Love
  • Fire
Reactions: 75 users

Bravo

If ARM was an arm, BRN would be its biceps💪!

That's very interesting Larry, especially as we have ties with the Department of Electrical and Computer Engineering at Carnegie Mellon.

Uam.png

 
  • Like
  • Love
  • Fire
Reactions: 48 users

Pappagallo

Regular

Also AKD1500, which was designed on the back of customer feedback. There’s plenty to be excited about over the next 6 months as we watch our intrepid team attempt to cross the chasm.
 
  • Like
  • Love
  • Fire
Reactions: 47 users
Hi Fullmoonfever,
How do you interpret the following paragraph from the article you shared above?

…However, “I think the pay TV operators are at the very tail end of the technical spectrum, and I say that in a nice way,” said Nayampally. “They are a very margin-driven business, so they work on chips they can build with – the lowest common denominators. So, they will require a silicon partner to promote these new capabilities, and we are working on it.”…
Hi @Beebo

As Rob T would say..."that's a great question" :LOL:

Def an interesting statement for mine.

So, is Nandan talking about pay TV operators / streaming services at the source, or is he talking about the end user and things like set-top boxes, sticks etc, or players in between?

Obviously we have pay TV operators like Foxtel, Apple, Disney, Fetch, Stan, Paramount and so on.

Even though smart TVs these days have streaming apps preloaded or available to DL there is still the standalone receiver space like Chromecast, Android TV, Amazon Fire Stick, NVIDIA Shield and generic Foxtel, Telstra and Fetch STBs.

Very quick article below (not read the white paper) that touches on the tech point on the STBs. Obviously something the industry is looking at.

So it's hard to say where in the chain we are working, though Nandan says "we are working on it". Does this tie in under the telecommunications space we are working with?

Given the margin model statement we know they need a low-cost solution, but it could potentially provide huge volumes for us.

And what are these "new capabilities" that need promoting? Obviously AI based, so maybe HMI, voice, eye tracking, hand gestures for end users. Who are we partnered with that could or does work in that space maybe?


How Can Pay TV Operators Grow and Thrive in A Post-Pandemic World​



Bleuenn Le Goffic​

VP, Strategy & Business Development
February 23, 2023

1 min
Being an operator in the 21st century has truly been a transformative experience.

The UX expectations from end users are ever increasing as is the number of available services from which they can access and stream content. We’re also seeing traditional Pay TV subscriptions continuing to fall, and according to analysis, this decline is likely to accelerate in 2023. As a result, Pay TV operators face challenging decisions over many crucial factors including the technology choice powering their STBs, how to aggregate content across services, and what business model they should adopt for monetizing their content.

With users becoming more accustomed to churning in and out of subscriptions, it is essential that operators try to stay ahead of the curve and work proactively to both attract and retain users. Will the growth of FAST channels see a decline in the need for large subscription bundles which have been the mainstay of operator propositions? What is the best way forward for an operator? It’s clear that new technology as well as agile work methods need to be accepted and embraced in order to compete in the future.

In this whitepaper, we will walk through some of the key strategic choices you can make as an operator. As always, there is no one-size-fits-all solution and you will need to assess your company and its unique opportunities and challenges before concluding which strategy is right for you. If your organization is big with a large budget, you might be contemplating the acquisition of a media house or buying the exclusive content rights for a major sports league. If size is not in your favor, you will need to come up with different ways to compete.
 
Last edited:
  • Like
  • Fire
  • Love
Reactions: 25 users