BRN Discussion Ongoing

D

Deleted member 118

Guest
He must be crapping his pants as he keeps moving it to a higher price
 
  • Love
  • Like
Reactions: 2 users

uiux

Regular
Maybe they do it cause they know you will post about it?
 
  • Haha
  • Like
Reactions: 12 users
D

Deleted member 118

Guest
  • Haha
  • Like
Reactions: 3 users

wilzy123

Founding Member
Maybe they do it cause they know you will post about it?

Yep - we have that much power.

 
  • Haha
  • Like
  • Fire
Reactions: 14 users

Esq.111

Fascinatingly Intuitive.
Maybe they do it cause they know you will post about it?
Morning Uiux,

Maybe ... maybe not.

Regards,
Esq.
 
  • Like
  • Haha
Reactions: 7 users
D

Deleted member 118

Guest
Guess it's a robot moving that 700k parcel of shares? It's just moved down to $1.20.
 
  • Like
  • Fire
Reactions: 2 users

Slymeat

Move on, nothing to see.
I know it's an older article, but Edge Impulse posted it today.

Edge Impulse @EdgeImpulse 45min
.
@CEA_Leti researchers are coupling innovative sensors with RRAM–based neuromorphic computation to build ultra-low-power systems for edge AI applications.
I see this as the K-Mart, or cheaper, version of achieving some of the benefits of neuromorphic computing. The cheapness is in the cost of RRAM, and the power savings come from RRAM's low power consumption. I doubt it has one-shot learning or uses spikes or sparsity. It could be used to implement LSTM, however, with a lot of effort by each individual developer.
RRAM-based neuromorphic computation will have its uses, mainly in places where Akida offers more functionality than is needed: purpose-built smarts for a specific task that is trained once and doesn't change. Hence not a competitor in my mind. And who knows, developers may play with this and consider Akida once they hit the limitations of RRAM-based neurons.

But as with anything cheaper, the savings may be artificial, as more $ will need to be spent developing solutions, since much of the work will need to be hand-coded.

Maybe an RRAM-based neuron, or even a neural network, could work to pre-process inputs into an Akida device. Maybe it could even help with LSTM by persistently remembering states and weights for a few iterations (and potentially forever). This, along with persistent data storage, is one of the reasons I hope for cooperation between BrainChip and Weebit. A ball I hopefully have already started rolling.
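
To make the "persistently remembering states" idea concrete, here is a rough single-unit LSTM sketch in plain C. The weight values are arbitrary placeholders rather than anything from a real model; the only point is that the cell state (c, h) survives between calls, the way a non-volatile RRAM array could hold state and weights between inferences.

```c
/* Minimal single-unit LSTM cell in plain C -- a sketch only. The state
 * fields (c, h) persist across calls, illustrating what "persistently
 * remembering states" looks like. Weights are arbitrary placeholders. */
#include <math.h>
#include <stdio.h>

static double sigmoid(double x) { return 1.0 / (1.0 + exp(-x)); }

typedef struct {
    double wi, ui, bi;   /* input gate weights and bias     */
    double wf, uf, bf;   /* forget gate weights and bias    */
    double wo, uo, bo;   /* output gate weights and bias    */
    double wg, ug, bg;   /* candidate cell weights and bias */
    double c, h;         /* persistent cell and hidden state */
} lstm_cell;

static double lstm_step(lstm_cell *s, double x)
{
    double i = sigmoid(s->wi * x + s->ui * s->h + s->bi);
    double f = sigmoid(s->wf * x + s->uf * s->h + s->bf);
    double o = sigmoid(s->wo * x + s->uo * s->h + s->bo);
    double g = tanh(s->wg * x + s->ug * s->h + s->bg);
    s->c = f * s->c + i * g;      /* state carried forward between calls */
    s->h = o * tanh(s->c);
    return s->h;
}

int main(void)
{
    lstm_cell cell = { .wi = 0.5, .ui = 0.1, .bi = 0.0,
                       .wf = 0.5, .uf = 0.1, .bf = 1.0,
                       .wo = 0.5, .uo = 0.1, .bo = 0.0,
                       .wg = 0.5, .ug = 0.1, .bg = 0.0 };
    double inputs[] = { 1.0, 0.5, -0.2, 0.8 };
    for (int t = 0; t < 4; t++)
        printf("t=%d  h=%f  c=%f\n", t, lstm_step(&cell, inputs[t]), cell.c);
    return 0;
}
```

Scale that struct up to vectors and matrices and you have a full LSTM layer; the persistence of the state is where RRAM could earn its keep.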

Anything advancing the cause of neuromorphic computing is positive in my mind. Believe it or not, there are still a vast number of WANCAs out there who need the info spoon-fed to them.
 
  • Fire
  • Like
  • Love
Reactions: 9 users

Slade

Top 20
Tv Land Reaction GIF by #Impastor
 
  • Haha
  • Like
Reactions: 6 users

Slade

Top 20
Wall Street Reaction GIF by La Guarimba Film Festival
 
  • Haha
  • Like
Reactions: 7 users

equanimous

Norse clairvoyant shapeshifter goddess
 
  • Like
  • Fire
  • Love
Reactions: 24 users

Wags

Regular
  • Haha
  • Like
Reactions: 7 users

Esq.111

Fascinatingly Intuitive.
Chippers,

Gross short stock for Tuesday 9th Aug.

827,787

Esq.
 
  • Like
  • Fire
  • Love
Reactions: 21 users

Bravo

If ARM was an arm, BRN would be its biceps💪!
Ummmm... so are we in the Renesas R-Car V3H and V4H system-on-chips? I don't think we've ruled that out, have we?

View attachment 13690


I don't know if it's just me being extremely stupid, which is not beyond the realms of possibility, but I still can't see why we couldn't be incorporated in the Renesas R-Car V4H. Or maybe in the R-Car V4H Dual or the R-Car Gen 4 shown on this timeline diagram (slated for 2023/2024)?

Couldn't we easily be smooshed in with the Arm cores? After all, it does refer to "dedicated Deep Learning & Computer Vision IPs with overall performance of 34 TOPS". That's IPs, PLURAL.

How else are they supposed to get to Level 3 without AKIDA?

I patiently await my handsome ogre's response @Diogenese .

B x
 
  • Like
  • Love
  • Fire
Reactions: 26 users

Deadpool

Did someone say KFC

If the link works, it's worth a one-minute listen to the Synopsys CEO discussing the mood after a meeting at the White House!

Maybe that’s where PVDM is?

View attachment 13761
Hi Stable-Genius, what a wonderful clip; he can hardly control his enthusiasm for the future of tech. You don't often see this kind of schoolboy energy coming from a leader, which is a real shame. It's a bit of a coincidence that Peter is in the US currently; hopefully he was attending.
 
  • Like
  • Fire
  • Love
Reactions: 22 users

Xhosa12345

Regular
I don't know if it's just me being extremely stupid, which is not beyond the realms of possibility, but I still can't see why we couldn't be incorporated in the Renesas R-Car V4H. Or maybe in the R-Car V4H Dual or the R-Car Gen 4 shown on this timeline diagram (slated for 2023/2024)?

Couldn't we easily be smooshed in with the Arm cores? After all, it does refer to "dedicated Deep Learning & Computer Vision IPs with overall performance of 34 TOPS". That's IPs, PLURAL.

How else are they supposed to get to Level 3 without AKIDA?

I patiently await my handsome ogre's response.

B x
View attachment 13771

View attachment 13772 View attachment 13774


I think OLE MATE looked into this... not sure of any follow-up etc. All this stuff is way over my head...

Love your work, everyone! Idiots like me really appreciate the stuff you guys do for the benefit of all of us!

Now for a trading halt: go public with a big DEAL please and burn the shorters... let's get rid of the traders while we're at it... thank you!
 
  • Like
  • Fire
  • Haha
Reactions: 20 users

Diogenese

Top 20
View attachment 13778

I think OLE MATE looked into this... not sure of any follow-up etc. All this stuff is way over my head...

Love your work, everyone! Idiots like me really appreciate the stuff you guys do for the benefit of all of us!

Now for a trading halt: go public with a big DEAL please and burn the shorters... let's get rid of the traders while we're at it... thank you!
That would be real collateral damage.
 
  • Thinking
  • Like
Reactions: 2 users
Hi Stable-Genius, what a wonderful clip; he can hardly control his enthusiasm for the future of tech. You don't often see this kind of schoolboy energy coming from a leader, which is a real shame. It's a bit of a coincidence that Peter is in the US currently; hopefully he was attending.
Agreed. You'd expect Peter to be there, given presentations were invited for new technologies relating to defence.

Defence is massive and we already have a foot in the door, so I'd expect it to become a big earner for BrainChip within a few years (3-4) as it gets implemented in sensors... everywhere!

An incredible opportunity: right time, right place!
 
  • Like
  • Fire
  • Love
Reactions: 24 users

Bravo

If ARM was an arm, BRN would be its biceps💪!

SiFive Is Leading The Way For Innovation On RISC-V​

Karl Freund

Founder and Principal Analyst, Cambrian-AI Research LLC
Aug 8, 2022, 01:04pm EDT


The company appears well positioned to challenge CPU incumbents with high performance RISC-V CPUs and Vector Extensions to the open ISA architecture.

The RISC-V CPU Instruction Set Architecture (ISA) is emerging as a serious challenger to current CPUs based on proprietary architectures, creating new opportunities for chip designers and investors alike. While RISC-V first gained traction in the low-end embedded market, where the open ISA model afforded more cost-effective designs, RISC-V is now getting more wind in its sails due to performance and power efficiency, especially with vector enhancements. Behind all the RISC-V buzz, SiFive is the company that makes many of the innovations of the open-source CPU architecture available and so appealing.

In addition to open-source and computing efficiency, RISC-V now offers a well-designed, highly efficient vector processing extension which can enable significant acceleration in applications where large data sets need to be manipulated in parallel. We have published a research paper that dives more deeply into the advantages Vector extensions offer. This note explores the company’s role and directions.

RISC-V Benefits and SiFive’s Role​

Silicon Valley startup SiFive has assumed the role of industry leadership and commercial IP innovation for the RISC-V movement, providing tested Intellectual Property (IP) and support for chip developers who incorporate RISC-V into their products.

RISC-V promises to offer an alternative to proprietary processor cores in a user-friendly licensing and development environment. The raw performance of the latest SiFive RISC-V implementation is rapidly closing the gap, but with lower power and smaller die area, and with no lock-in to a closed architecture. SiFive is further enhancing its portfolio with vector processing extensions that clearly differentiate the ISA from any other architecture.
SiFive is essentially the most visible and accomplished commercial steward of RISC-V, providing validated IP and support as well as open and proprietary enhancements to the RISC-V development community. With this open-standard approach and dependable IP, SiFive has garnered over 300 design wins with over 100 firms, including 8 of the top 10 semiconductor companies. With the addition of vector processing, we expect this trend to accelerate.

SiFive Strategy and Product Portfolio​

In September 2020, SiFive announced it had hired Patrick Little as its new President, CEO, and Chairman. Coming from Qualcomm, where he led the company's successful foray into the automotive sector, Mr. Little has sharpened the company's business model on developing and licensing IP, selling SiFive's OpenFive SoC design business to AlphaWave for $210 million. The company subsequently raised $175 million in a Series F funding round at a $2.5 billion post-money valuation. The latest round brings SiFive's total venture funding to over $350 million and was led by global investment firm Coatue Management LLC. Existing investors Intel Capital, Sutter Hill, and others joined this latest round.


SiFive already has a broad portfolio of RISC-V processors. (Image: SiFive)

In today's heterogeneous world of Domain-Specific Processors, parallel processing of large data sets is a critical adjunct to scalar processing. While accelerators such as GPUs and ASICs provide some incremental performance, they come at significant cost and generally require connectivity to CPUs, along with the cost of data transfers to and from the accelerator. And each accelerator requires its own distinct programming model. Now with RISC-V, general vector processing in the CPU cores offers an alternative approach.
Vector processing, where instructions manipulate data across a large dataset of numbers, has been a foundation of high-performance computing since the Cray-1 supercomputer in 1975. The RISC-V Vector extensions (RVV) enable RISC-V cores to process data arrays alongside traditional scalar operations, parallelizing the computation of single instruction streams on large data sets. SiFive helped establish RVV as part of the RISC-V standard and has now extended the concept in two dimensions.
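
For readers who want to see what this looks like in practice, below is a minimal vector-add sketch in C, assuming a toolchain that supports the ratified RVV 1.0 intrinsics in <riscv_vector.h>; it is illustrative only, not SiFive-specific code.

```c
/* c[i] = a[i] + b[i] using RVV 1.0 C intrinsics (a sketch, not SiFive code).
 * The loop is vector-length agnostic: vsetvl asks the hardware how many
 * 32-bit elements fit in one vector register group on each pass. */
#include <riscv_vector.h>
#include <stddef.h>

void vec_add_f32(const float *a, const float *b, float *c, size_t n)
{
    while (n > 0) {
        size_t vl = __riscv_vsetvl_e32m1(n);             /* elements this pass */
        vfloat32m1_t va = __riscv_vle32_v_f32m1(a, vl);  /* load a             */
        vfloat32m1_t vb = __riscv_vle32_v_f32m1(b, vl);  /* load b             */
        vfloat32m1_t vc = __riscv_vfadd_vv_f32m1(va, vb, vl);
        __riscv_vse32_v_f32m1(c, vc, vl);                /* store c            */
        a += vl; b += vl; c += vl; n -= vl;
    }
}
```

The same loop runs unchanged on any hardware vector register width, which is the portability point being made here.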

Figure 2: The SiFive extensions to the RISC-V vector capabilities can dramatically increase performance and efficiency. (Image: SiFive)

The SiFive Intelligence Extensions add new operations, such as INT8 matmuls and BF16 convert and compute operations, and enable vector instructions to operate on a broad range of AI/ML data types, including BFLOAT16. The Intelligence Extensions also add support for TensorFlow Lite machine learning models, reducing the cost of porting AI models to SiFive-based designs.
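
As a concrete aside on BFLOAT16: it keeps float32's sign bit and 8-bit exponent but only 7 mantissa bits, so a BF16 convert is little more than a 16-bit shift. A minimal C sketch, using truncation for brevity where real hardware typically rounds to nearest even:

```c
/* BFLOAT16 <-> float32 conversion sketch. bf16 reuses float32's sign and
 * 8-bit exponent but keeps only 7 mantissa bits, so converting is a shift.
 * Truncation is used here for brevity; hardware usually rounds to nearest even. */
#include <stdint.h>
#include <string.h>

static inline uint16_t f32_to_bf16(float f)
{
    uint32_t bits;
    memcpy(&bits, &f, sizeof bits);    /* reinterpret bits without aliasing issues */
    return (uint16_t)(bits >> 16);     /* keep the top 16 bits */
}

static inline float bf16_to_f32(uint16_t h)
{
    uint32_t bits = (uint32_t)h << 16; /* low 16 mantissa bits become zero */
    float f;
    memcpy(&f, &bits, sizeof f);
    return f;
}
```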

VCIX represents a strategic opportunity for SiFive​

In a world of increasing heterogeneity, there is a large opportunity to help SoC and System-on-Package (SoP) designers build tightly integrated solutions. The SiFive Vector Coprocessor Interface Extension (VCIX) is a direct interface between the X280 and a custom accelerator, enabling parallel instructions to be executed on the accelerator directly from the scalar pipeline. The custom instructions are executed from the standard software flow, utilizing the vector pipeline, and can access the full vector register set.

The SiFive Processor Portfolio​

The SiFive product portfolio is structured into three clearly differentiated product lines: the 32/64 bit Essential products (2-, 6-, and 7-Series) for embedded control/Linux applications, the SiFive Performance Series (the P200 and P500/P600 families) for high efficiency and higher performance, and the SiFive Intelligence Series (the X200 family) for parallelizable workloads such as Machine Learning at the edge and in data centers.
To capitalize on its advantage in vector processing, SiFive has built its vector capabilities into both the Performance P270 and the Intelligence X280 processors.

The portfolio includes the Essential, Performance, and Intelligence processors. (Image: SiFive)


Early Adopters of the SiFive X280​

The SiFive X280 has already been adopted by several companies of note, including a Tier 1 semiconductor company and a US Federal Agency for a strategic initiative in the aerospace and defence sector. Another customer has selected the X280 for projects for its mobile devices and data center AI products. Similarly, a US company delivering autonomous self-driving platforms has selected the X280 for its next-generation SoC. Of these opportunities, the last two could generate significant volumes, while the first could open more doors in the government sector.
On the startup front, we have already seen a number of SoC developers publicly announce their adoption of SiFive including Tenstorrent and Kinara (formerly known as DeepVision). Many are developing SoCs for AI acceleration, leveraging the vector processing of the X280 and complementing that with custom AI blocks. Tenstorrent tells us they are getting great support and that the cores are rock solid.

SiFive Development Tool Suite​

For AI applications, SiFive supports an out-of-the-box software and processor hardware solution with TensorFlow Lite running under Linux to run NN models in the object detection, image classification, segmentation, text, and speech domains. Existing models can be run with little porting effort using a broad range of optimized NN operators in both 32-bit float and quantized 8-bit precisions.
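
As a rough illustration of what "little porting effort" can mean in practice, here is a sketch using the standard TensorFlow Lite C API; the model file name, input shape, and output size are hypothetical, and SiFive's own tool suite may wrap this flow differently.

```c
/* Sketch: load and run a TFLite model through the standard TensorFlow Lite
 * C API. "model.tflite", the 224x224x3 uint8 input, and the 1000-class
 * output are hypothetical placeholders. Error checking omitted for brevity. */
#include <stdint.h>
#include <stdio.h>
#include "tensorflow/lite/c/c_api.h"

int main(void)
{
    TfLiteModel *model = TfLiteModelCreateFromFile("model.tflite");
    TfLiteInterpreterOptions *opts = TfLiteInterpreterOptionsCreate();
    TfLiteInterpreter *interp = TfLiteInterpreterCreate(model, opts);

    TfLiteInterpreterAllocateTensors(interp);

    /* Fill the first input tensor (hypothetical 224x224x3 uint8 image). */
    static uint8_t image[224 * 224 * 3];
    TfLiteTensor *input = TfLiteInterpreterGetInputTensor(interp, 0);
    TfLiteTensorCopyFromBuffer(input, image, sizeof image);

    TfLiteInterpreterInvoke(interp);

    /* Read back the first output tensor (hypothetical 1000-class scores). */
    float scores[1000];
    const TfLiteTensor *output = TfLiteInterpreterGetOutputTensor(interp, 0);
    TfLiteTensorCopyToBuffer(output, scores, sizeof scores);
    printf("score[0] = %f\n", scores[0]);

    TfLiteInterpreterDelete(interp);
    TfLiteInterpreterOptionsDelete(opts);
    TfLiteModelDelete(model);
    return 0;
}
```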

Applications that can Benefit from Vector Processing​

From our perspective, we believe that parallel processing is transitioning from the tool of a few to the norm for many applications, especially as AI and Machine Learning become pervasive. And as Moore's Law provides ever-diminishing returns, application developers still require more performance, and vector processing can provide the avenue for both higher performance and better power efficiency, especially with RISC-V. We see opportunities for RISC-V vector processing in multiple application domains including smart homes, telco, mobile devices, autonomous vehicles, industrial automation, robotic control, and health care. The simplicity and elegance of RVV and the performance gains are powerful selling points.

Figure 5: The X280 processor supports a wide range of use cases. (Image: SiFive)

Conclusions​

We are impressed by the progress that RISC-V and SiFive have made in the last few years. The new product line positioning makes a ton of sense, the processors are beefier, the software stack is getting much better, and the vector extensions are impressive, both the open-source RVV and the AI extensions the company has included in the Intelligence Series X280. The CPUs offer relatively high performance with excellent scalability and power efficiency, thanks to the simplicity of the RISC-V ISA and its clever extensions. SiFive has also recently disclosed its intention to release an even higher-performance P600-Series-class out-of-order core with RISC-V vector compute in the near future.
Finally, the commitment to and leverage of the open-source community is perhaps the most important value RISC-V and SiFive can offer as an alternative to Arm, especially for designers looking to build SoC solutions for Domain-Specific Architectures.


Disclosures: This article expresses the opinions of the author and is not to be taken as advice to purchase from, or invest in, the companies mentioned. My firm, Cambrian-AI Research, is fortunate to have many semiconductor firms as our clients, including NVIDIA, Intel, IBM, Qualcomm, Esperanto, Graphcore, SiMa.ai, Synopsys, Cerebras Systems, Tenstorrent, and Ventana Microsystems. We have no investment positions in any of the companies mentioned in this article. For more information, please visit our website at https://cambrian-AI.com.

Karl Freund

I love to learn and share the amazing hardware and services being built to enable Artificial Intelligence, the next big thing in technology.



 
  • Like
  • Fire
  • Love
Reactions: 29 users