BRN Discussion Ongoing

Bravo

If ARM was an arm, BRN would be its biceps💪!
The lead sales guy from Syntiant coming to BrainChip and saying "Akida™ is uniquely superior and positioned to be the de facto standard for edge AI semiconductor IP"?



Seems OK to me
Me two!(y)🤭


B x
❤️🧠🍟

 
Reactions: 51 users
Don't know if it's been covered previously, but something I just found within ARM which I wasn't aware of is their Arm Flexible Access.


What led me there was these guys I was reading up on....Femtosense....an early-stage start-up with some backing from Vanedge Capital, it seems.

Not saying they're in the same league as BRN or have the same capabilities....it was more about the ARM IP access. These guys have an ARM blog from early 2021, so before we were in the ARM ecosystem...well...formally announced anyway haha.

If you read some of the below & the links, you can see that Hailo, Synaptics and MemryX have also dabbled in ARM Flexible Access early on.

Wonder if BRN has exposure through this channel also?



SPU Architecture​


Near-Memory-Compute​

  • Distributes memory into small banks near processing elements to improve throughput
  • Reduces data motion by performing computations close to on-chip memory banks
  • Eliminates energy and memory bottlenecks caused by accessing off-chip memory

Scalable Core Design​

  • Can be scaled to match needs and constraints of any deployment environment
  • Targets a wide range of applications and form factors
  • Digital design can be ported to other process nodes to balance performance and cost

10 x 10 = DUAL SPARSITY​


Our hardware can achieve multiplicative benefits in speed and efficiency when both forms of sparsity are present.

SPARSE WEIGHTS​

  • Supports sparsely connected models
  • Only stores and computes on weights that matter
  • 10x improvement in speed, efficiency, and memory

SPARSE ACTIVATIONS​

  • Supports sparse activations
  • Skips computation when a neuron outputs zero
  • 10x increase in speed and efficiency
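The two 10x claims multiply because the skips are independent: here's a rough toy sketch (my own Python, nothing to do with Femtosense's actual hardware) of why ~90% weight sparsity plus ~90% activation sparsity leaves only about 1% of the dense multiply-accumulates.

```python
# Toy illustration of "dual sparsity" -- my own sketch, not Femtosense's design.
# Weight sparsity: only nonzero weights are stored (and iterated).
# Activation sparsity: computation is skipped when a neuron outputs zero.
import random

def sparse_matvec(weights, activations):
    """weights: {(row, col): value} holding only the weights that matter.
    activations: dense list; zero entries are skipped entirely."""
    out = {}
    for (row, col), w in weights.items():   # zero weights were never stored
        a = activations[col]
        if a == 0:                          # skip work on silent neurons
            continue
        out[row] = out.get(row, 0.0) + w * a
    return out

random.seed(0)
n = 100
# ~10% nonzero weights, ~10% nonzero activations
weights = {(r, c): random.uniform(0.1, 1.0)
           for r in range(n) for c in range(n) if random.random() < 0.1}
activations = [random.uniform(0.1, 1.0) if random.random() < 0.1 else 0.0
               for _ in range(n)]
out = sparse_matvec(weights, activations)
macs = sum(1 for (_, c) in weights if activations[c] != 0)
print(f"{macs} of {n * n} dense MACs performed ({100 * macs / (n * n):.1f}%)")
```

With both forms of sparsity at ~90%, roughly 0.1 × 0.1 = 1% of the dense work survives, which is the "10 x 10" multiplicative benefit the page is claiming.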

Core Design​


4 Core Configuration​

  • 512 kB on-chip SRAM per core
  • 5 MB effective SRAM with weight sparsity
  • 1.3 mm² single core (22nm process)
  • AXI interface
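A quick back-of-envelope check on the "5 MB effective SRAM" figure (my own arithmetic; the compression factor is inferred to match the quoted numbers, Femtosense doesn't state it):

```python
# Back-of-envelope: how 2 MB of physical SRAM can act like ~5 MB.
# The 40% weight density is an inferred assumption, chosen so the result
# matches the quoted 5 MB figure -- it is not a number Femtosense publishes.
cores = 4
sram_per_core_kb = 512
physical_mb = cores * sram_per_core_kb / 1024   # 4 x 512 kB = 2.0 MB physical
weight_density = 0.4                            # store only ~40% of weights (sparse model)
effective_mb = physical_mb / weight_density     # dense-equivalent model capacity
print(f"physical {physical_mb:.1f} MB -> effective {effective_mb:.1f} MB")
```

In other words, because only the weights that matter are stored, a model whose dense form would need ~5 MB fits in the 2 MB of on-chip SRAM.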

Femtosense: Our Ultra-efficient AI Chip, Built with Arm Flexible Access

Sam Fok, co-founder and CEO of silicon startup Femtosense, explains how Arm Flexible Access for Startups enabled the development of a neuromorphic AI chip

One classical pathway to technology success runs from university research to commercial startup. It’s an exciting journey in which a small group of grad students takes a spark of innovation from research and kindles it into a technology that they hope will transform the world.

That journey has its challenges as the technology moves from the lab into the rigorous and sometimes unforgiving world of product development. And it’s a journey my co-founder, Alexander Neckar, and I embarked on two years ago out of Stanford.

Our startup, Femtosense, emerged from work we did in Stanford’s Electrical Engineering graduate program on a project called Braindrop—a mixed-signal neuromorphic system designed to be programmed at a high level of abstraction.

Neuromorphic systems are silicon versions of the neural systems found in neurobiology. It’s a growing field, offering a range of exciting possibilities such as sensory systems that rival human senses in real-time.

Over the past two years, we’ve nurtured our startup to develop the aspects of the technology with commercial potential. We’ve taken the original concept and built a neural network application-specific integrated circuit (ASIC) for the general application area of ultra-efficient AI. As a start, we want to enable ultra-power-efficient inference in embedded endpoint devices everywhere.

Why did we venture down this path? Datacenter technology is well-developed and has different technical challenges than edge technology. Further, consumers and market competition are pushing for edge and endpoint devices with ever-increasing capability. AI can and should be deployed at the endpoints when feasible to reduce latency, lower computing costs, and enhance security.

Area and power are major considerations in neural networks​

But designing such an ASIC (any ASIC really) is a unique challenge in today’s ultra-deep submicron world. For one thing, area and power considerations loom large. Neural networks are a different animal than classical signal processing. Designing an ASIC or system-on-chip (SoC) for neural networks presents a different challenge than, say, designing an efficient DSP or microcontroller.

Neural networks have a lot of parameters, are quite memory intensive, and have a lot of potential for energetically expensive data motion. Of primary concern is the question of on-chip versus off-chip memory. Off-chip memory provides excellent density, allowing for bigger models, but accessing off-chip memory costs a lot of energy, which often puts such systems beyond the power budgets of the envisioned initial applications.
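The energy gap Fok is describing can be made concrete with the widely quoted ballpark per-access figures from the computer-architecture literature (~45 nm estimates: roughly 5 pJ per 32-bit on-chip SRAM read versus ~640 pJ per off-chip DRAM read). The model size below is a hypothetical example of mine, not a Femtosense number:

```python
# Why off-chip weight storage blows the power budget: fetching each weight
# from DRAM costs >100x the energy of an on-chip SRAM read.
# Per-access energies are order-of-magnitude ballpark figures (~45 nm estimates).
PJ_SRAM_READ_32B = 5      # small on-chip SRAM
PJ_DRAM_READ_32B = 640    # off-chip DRAM

params = 1_000_000        # hypothetical 1M-parameter model, weights read once per inference
for name, pj_per_read in (("on-chip SRAM", PJ_SRAM_READ_32B),
                          ("off-chip DRAM", PJ_DRAM_READ_32B)):
    microjoules = params * pj_per_read / 1e6
    print(f"{name}: {microjoules:.0f} uJ per inference just for weight fetches")
```

At, say, 100 inferences per second, weight fetches alone would cost ~0.5 mW from on-chip SRAM versus ~64 mW from DRAM, which is exactly the "beyond the power budget" problem described above.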

So, when you’re looking at ultra-power-efficient endpoint embedded solutions, you will want to put everything on a single die, but then you will hit area and cost issues. This is where algorithm-hardware codesign comes into play; the two really must be done together.

You can’t just naively take an algorithm and map it down to a chip. It would cost too much energy or money, or just not perform well. You have to think carefully about how to make that algorithm efficient. That’s the hardware and algorithm co-design challenge. We spend a lot of effort on the algorithm side to fit everything on-chip, and, once it’s on-chip, to map the algorithm onto the chip’s compute fabric.

As we design, we’re exploring many potential applications because we want to apply our technology as widely as possible. The market is not nearly as black-and-white as endpoint-and-cloud terminology suggests.

We see it as a continuum. It’s not like you’re either in datacenters or in tiny battery-powered devices. There are many nodes across the spectrum—everything from on-prem servers, to laptops, phones, smartwatches, earbuds, and even sensors out in the middle of nowhere with no power source could use more efficient neural network compute.

We want to deliver ultra-efficient compute. To us, this is our primary mission. When you’re in tough environments with tight requirements, that’s when you’re going to want to use our technology.

Arm Flexible Access for Startups gave us the design flexibility we needed​

We’re a small team for now, so we need to focus on our strengths. There’s the core hardware accelerator, the software that goes with it, and the algorithms. Then, there’s technology to take that value and serve customers. This is where we’re very excited to be working with Arm and the Arm Flexible Access for Startups program.

To integrate with customer designs, you need an interface to handle communications and off-load a bit of compute. This is not something we think about day-to-day in terms of our core engineering or IP. This is why it’s paramount to have a well-established, reliable partner like Arm to work with because you don’t have to reinvent everything or spend effort educating the market. Everyone knows Arm, and Arm is a well-known path to integration. Arm solves one of our biggest commercial challenges, and that’s why working with Arm is key.

One of the attractive elements of the Arm Flexible Access for Startups program is the ecosystem support. The ecosystem reduces barriers to adoption, which in turn drives innovation. When you’re planning projects, you need clear and accessible specifications and information about which products do what.

Making actionable information available upfront without huge outlays and with the ability to evaluate different IP and run experiments in the sandbox is huge for us. The program’s pricing models are much more aligned with how startups grow than traditional IP vending.

Standing at the beginning of the SoC integration path, we have an exciting journey ahead, and we’ll have more interesting perspectives the deeper we get into it. We have the initial design and are moving toward an ASIC implementation, making the Arm Flexible Access for Startups program an important tool to have.

2020 blog post with a bit of insight.



arm Blogs

Arm Flexible Access one year later: Accelerating innovation for more than 60 partners and counting​

arm Blogs - Dipti Vachani, Arm
Aug. 24, 2020
When I joined Arm a little more than 18 months ago, I was thrilled to become part of a team of people delivering the foundational technology for billions of devices each year, from sensors to smartphones to supercomputers. The convergence of 5G, IoT, and AI is driving multiple industries to transform at the same time. We see this transformation in retail, automotive, factories, homes, and our cities. One of my first questions to the team was simply, “how do we make Arm the easiest company to innovate with?”
No matter the size of the business or sector, to deliver innovative new technologies, our partners need the fastest, lowest-cost and minimum-risk journey to SoC design. This was the thinking when we launched Arm Flexible Access last summer; to provide both new and existing partners with access to more than 75% of Arm’s IP portfolio, support, tools and training, but with no up-front licensing commitment.
One year later, Flexible Access is now Arm’s fastest-growing program ever with more than 60 partners signing up for the freedom to experiment, evaluate, design and customize their own unique SoCs. The program is empowering existing partners and more than 30 first-time Arm IP customers to address growth opportunities in IoT, machine learning, autonomous systems and automotive. Flexible Access provides these first-time customers with a portal to the largest ecosystem of tools, services and software.
In my conversations with Flexible Access users, the feedback has been overwhelmingly positive and already we’re seeing some great early success stories. These include ASIC houses such as Faraday and Socionext, established semiconductor companies such as Nordic Semiconductor, startups including Atmosic and Hailo, and even government bodies such as the Korean Ministry of SMEs and Startups, which has invested in the program to support startups in the region. OEMs which previously used third-party design houses to create the full design of their chips, are now able to develop their SoCs in a far more collaborative way with direct access to all of the IP they need.
Simple and easy
Two words I consistently hear from partners when talking about accessing our IP through Flexible Access are “simple” and “easy”, which enables them to focus on delivering fantastic new innovations. One example that really stood out recently came from new partner ZhorTech, which has developed new advanced footwear technology. Flexible Access allowed them to develop a chip that uses AI algorithms embedded into shoes, with multiple applications including detecting musculoskeletal malformations, monitoring diseases like Parkinson’s or diabetes, detecting fatigue and injury risk, or transforming the shoes into a gaming accessory.
Since launching Flexible Access, we have been evolving the program based on our partners’ requirements and feedback. This includes special adaptations for both silicon startups and research institutions, to cater to their specific needs. Startups such as Femtosense and MemryX have been able to immediately access Arm IP, empowering them to begin silicon design much earlier, even before they have secured VC funding. Researchers and academics also have the freedom to experiment and increase their opportunities using commercially relevant IP for their projects.
More please
Another consistent point of feedback I hear from Flexible Access customers is “give us more please.” We are regularly expanding the range of IP within the program. Most recently we have added a new Arm Corstone subsystem reference design, providing SoC designers with pre-integrated IP blocks for faster design while reducing verification requirements. We have also expanded the range of physical IP, meaning designers can optimize and predict the power-performance area of their chips.
SoC designs are becoming increasingly complex, but the design process itself doesn’t have to be. The results of this program speak for themselves as 15 percent of the companies who have joined Flexible Access in its first year are already moving toward tape-out. I for one can’t wait to see what the next year brings!
 
Reactions: 17 users

marsch85

Regular
Interesting change. Chris Stevens is the new VP of Worldwide Sales and I quote 'Rob Telson, the former VP of Worldwide Sales, will manage and grow ecosystems and global partnerships.'

Quite common for companies to separate sales from business development / partnerships. Does make me wonder how this change has gone down with Rob. In any case, good that the company is bringing on someone with extensive Edge AI sales expertise for our path to commercialisation / world domination.

Edit:
And of course nice to hear from someone who is moving across from a "competitor": "I am thrilled to take this sales leadership role with confidence as BrainChip's Akida™ is uniquely superior and positioned to be the de facto standard for edge AI semiconductor IP," said Stevens.
 
Reactions: 45 users

uiux

Regular
Reactions: 1 users

Townyj

Ermahgerd
Held positions at Syntiant and Texas Instruments... Oh ok. 🥳🥳🥳
 
Reactions: 17 users
Just 'cause I haven't posted a chart snip for a while.

I'd like it to, & it's trying to, hold this possible support & bottom trend line as a minimum.

She wants to turn but there's a little bit of work still to do....a solid little push through that smooth Heiken Ashi (yellow candle bodies) around 0.92, then a break of the pivot (green horizontal line) just above to the left from early Aug at about 1.00, will do nicely.


 
Reactions: 25 users

Diogenese

Top 20


"... I am thrilled to take this sales leadership role with confidence as BrainChip's Akida™ is uniquely superior and positioned to be the de facto standard for edge AI semiconductor IP,"

A few points about Syntiant:

AI Chip Company Syntiant Raises $55 Million to Accelerate Growth

Strong Pipeline as Global Demand for Edge AI Rises; 20 Million+ Processors Already Shipped
Irvine, Calif., March 28, 2022
Syntiant was founded in 2017
Renesas is invested in Syntiant

“We are at a pivotal point of our company’s growth and development, having shipped more than 20 million of our Neural Decision Processors as global market demand for edge AI rises among device manufacturers,” said Kurt Busch, CEO at Syntiant.

Considering that they were founded in 2017, it is remarkable that they have shipped 20 million NDPs.

Syntiant has a hybrid (analog/digital -Frankenstein) Neural Decision Processor, so it's hardly surprising that Chris Stevens sees Akida as being uniquely superior and as becoming the de facto AI standard.
 
Reactions: 55 users

JDelekto

Regular
The lead sales guy from Syntiant coming to BrainChip and saying "Akida™ is uniquely superior and positioned to be the de facto standard for edge AI semiconductor IP"?



Seems OK to me
Why do I have the sense that you'll be rubbing Shareman's nose in that news?
 
Reactions: 13 users

uiux

Regular
Why do I have the sense that you'll be rubbing Sharespam's nose in that news?

Not going to lie I did LOL when I read where he came from.

 
Reactions: 49 users

Cenr

Emerged
Reuters

Analysis-Banned U.S. AI chips in high demand at Chinese state institutes​

By Eduardo Baptista - 4h ago

By Eduardo Baptista
FILE PHOTO: The logo of Nvidia Corporation is seen during the annual Computex computer exhibition in Taipei © Reuters/Tyrone Siu
BEIJING (Reuters) - High-profile universities and state-run research institutes in China have been relying on a U.S. computing chip to power their artificial intelligence (AI) technology but whose export to the country Washington has now restricted, a Reuters review showed.
U.S. chip designer Nvidia Corp last week said U.S. government officials have ordered it to stop exporting its A100 and H100 chips to China. Local peer Advanced Micro Devices Inc (AMD) also said new licence requirements now prevent export to China of its advanced AI chip MI250.


The development signalled a major escalation of a U.S. campaign to stymie China's technological capability as tension bubbles over the fate of Taiwan, where chips for Nvidia and almost every other major chip firm are manufactured.
China views Taiwan as a rogue province and has not ruled out force to bring the democratically governed island under its control. Responding to the restrictions, China branded them a futile attempt to impose a technology blockade on a rival.
A Reuters review of more than a dozen publicly available government tenders over the past two years indicated that among some of China's most strategically important research institutes, there is high demand - and need - for Nvidia's signature A100 chips.
Tsinghua University, China's highest-ranked higher education institution globally, spent over $400,000 last October on two Nvidia AI supercomputers, each powered by four A100 chips, one of the tenders showed.
In the same month, the Institute of Computing Technology, part of top research group, the Chinese Academy of Sciences (CAS), spent around $250,000 on A100 chips.
The school of artificial intelligence at a CAS university in July this year also spent about $200,000 on high-tech equipment including a server partly powered by A100 chips.
In November, the cybersecurity college of Guangdong-based Jinan University spent over $93,000 on an Nvidia AI supercomputer, while its school of intelligent systems science and engineering spent almost $100,000 on eight A100 chips just last month.


Related video: Five Chinese Companies to Delist From US Exchanges




Less well-known institutes and universities supported by municipal and provincial governments, such as in Shandong, Henan and Chongqing, also bought A100 chips, the tenders showed.
None of the research departments responded to requests for comment on the effect on their projects of the A100 export curb.
Nvidia did not respond to a request for comment. Last Wednesday, it said it had booked $400 million in Chinese sales of the affected chips this quarter which could be lost if its customers decide not to buy alternative Nvidia products. It also said it planned to apply for exemptions to the new rules.
REPLACEMENTS
The lack of chips from the likes of Nvidia and AMD is likely to hamper efforts at Chinese organisations to cost-effectively carry out the kind of advanced computing used for tasks such as image and speech recognition.
Image recognition and natural language processing are common in consumer applications such as smartphones that can answer queries and tag photos. They also have military uses such as scouring satellite imagery for weapons or bases and filtering digital communications for intelligence-gathering purposes.
Experts said there are few Chinese chipmakers that could readily replace such advanced Nvidia and AMD chips, and buyers could instead use multiple lower-end chips to replicate the processing power.
Reuters could not locate any Chinese government tenders mentioning the other two restricted chips - Nvidia's H100 and AMD's MI250.
But some of the tenders showed, for instance, chip purchases from U.S. technology firm Intel Corp and proposals for purchasing less-sophisticated Nvidia products, underscoring China's reliance on an array of U.S. chip technology.
One tender in May showed the Chinese Academy of Surveying and Mapping, a research institute of the Ministry of Natural Resources, considering an Nvidia AI supercomputer to improve its ability to create three-dimensional images from geographic data.
"The proposed NVIDIA DGX A100 server will be equipped with 8 A100 chips with 40GB memory, which will greatly improve the data-carrying capacity and computing speed, shorten the scientific research process, and get scientific research results faster and better," the tender read.
The National University of Defense and Technology (NUDT), which describes itself as a "military university" and "under the direct leadership of the Central Military Commission", China's top military body, is also among the buyers of A100 chips.
The NUDT, home of Tianhe-2, one of the world's most powerful supercomputers, has been on a U.S. blacklist since 2015 due to national security concerns, eliminating the university's access to the Intel processors it uses in its supercomputers.
One May tender showed the institute planned to buy 24 Nvidia graphics processing units with AI applications. The tender was published again last month, indicating NUDT had not yet found the right deal or supplier.
NUDT did not respond to a request for comment.
(Reporting by Eduardo Baptista; Additional reporting by Josh Horwitz; Editing by Miyoung Kim and Christopher Cushing)


LOOKS LIKE THINGS ARE RAMPING UP......NO MORE STEALING THANKS ...LOVE THE US GOVERNMENT :ROFLMAO::ROFLMAO::ROFLMAO:
 
Reactions: 1 users

Sirod69

bavarian girl ;-)

NASA Selects SiFive and Makes RISC-V the Go-to Ecosystem for Future Space Missions​

SiFive X280 delivers 100x increase in computational capability with leading power efficiency, fault tolerance, and compute flexibility to propel next-generation planetary and surface missions

“We’ve always said that with SiFive and RISC-V the future has no limits, and we’re excited to see the impact of our innovations extend well beyond our planet.”


 
Reactions: 63 users
Not sure if I'm looking into it too much here, but I couldn't help noticing both Chris and Sean using the word 'certain' when indicating their level of confidence in how much of an impact Chris will have on Brainchip's growth and development.

To me this definitely indicates an extremely high level of confidence in what Chris can bring to the table, which is a great positive sign.

Brainchip, onwards and upwards!
 
Reactions: 46 users

stuart888

Regular
Excellent presentation by CEO Zach Shelby of Edge Impulse.

 
Reactions: 21 users

cosors

👀
Reactions: 5 users

alwaysgreen

Top 20
The lead sales guy from Syntiant coming to BrainChip and saying "Akida™ is uniquely superior and positioned to be the de facto standard for edge AI semiconductor IP"?



Seems OK to me
In fairness, he's also not going to bag the product he now needs to sell to justify his salary.

Akida is definitely superior though!
 
Reactions: 8 users

alwaysgreen

Top 20
Interesting change. Chris Stevens is the new VP of Worldwide Sales and I quote 'Rob Telson, the former VP of Worldwide Sales, will manage and grow ecosystems and global partnerships.'

Quite common for companies to separate sales from business development / partnerships. Does make me wonder how this change has gone down with Rob. In any case, good that the company is bringing on someone with extensive Edge AI sales expertise for our path to commercialisation / world domination.

Edit:
And of course nice to hear from someone who is moving across from a "competitor": "I am thrilled to take this sales leadership role with confidence as BrainChip's Akida™ is uniquely superior and positioned to be the de facto standard for edge AI semiconductor IP," said Stevens.
I feel like Rob would be okay with it. He got ARM on board. That would have been a major focus for the ex-ARM guy and is one that Anil and Peter would have wanted to get into bed with since day one.
 
Reactions: 25 users

TheFunkMachine

seeds have the potential to become trees.
I feel like Rob would be okay with it. He got ARM on board. That would have been a major focus for the ex-ARM guy and is one that Anil and Peter would have wanted to get into bed with since day one.
I just had the same thought. Rob did what he was hired to do: get ARM! I think Rob will thrive in his new role and is happy to pass the baton on for the next guy to run his race.

Rob will still be able to say 10 years from now that he contributed to billions in sales through ARM 😁
 
Reactions: 25 users

Easytiger

Regular
"... I am thrilled to take this sales leadership role with confidence as BrainChip's Akida™ is uniquely superior and positioned to be the de facto standard for edge AI semiconductor IP,"

A few points about Syntiant:

AI Chip Company Syntiant Raises $55 Million to Accelerate Growth

Strong Pipeline as Global Demand for Edge AI Rises; 20 Million+ Processors Already Shipped
Irvine, Calif., March 28, 2022
Syntiant was founded in 2017
Renesas is invested in Syntiant

“We are at a pivotal point of our company’s growth and development, having shipped more than 20 million of our Neural Decision Processors as global market demand for edge AI rises among device manufacturers,” said Kurt Busch, CEO at Syntiant.

Considering that they were founded in 2017, it is remarkable that they have shipped 20 million NDPs.

Syntiant has a hybrid (analog/digital -Frankenstein) Neural Decision Processor, so it's hardly surprising that Chris Stevens sees Akida as being uniquely superior and as becoming the de facto AI standard.
Fantastic
 
Reactions: 11 users

Mccabe84

Regular
I just had the same thought. Rob did what he was hired to do: get ARM! I think Rob will thrive in his new role and is happy to pass the baton on for the next guy to run his race.

Rob will still be able to say 10 years from now that he contributed to billions in sales through ARM 😁
What’s Rob’s new role?
 
Reactions: 1 users