BRN Discussion Ongoing

Kachoo

Regular
I would step back and make the following comments:

1. CES 2023 was a great success for BRN and its partners.
2. The potential leads have really increased and our technology is out there.

The pullback in SP is the market at work: shorts fighting longs, pumping, and many other variables outside retail investors' control.

The lack of revenue is not a result of a lack of activity. I think the end product needs to be sold by the end suppliers before we see payments down the line.

I would also ask the few who poke these partners to be humble and let them market their products, whether it's Akida inside or not.

Not many companies are going to thank IP suppliers for the innovation, and they don't need to; it's their product, they spent the money, and that's it. Poking these people and companies will only slow us down and make it harder for BRN employees to market products.

I would say that blackouts on info could even be a result of shareholders circling the company behind the next product and telling them to put Akida on the package lol.

Look at MegaChips: we don't even know who they work for, but they grow.

In the end, like the CEO says, watch the financial numbers. I believe they are coming; there is a lot of activity and soon we will start seeing it roll in.

Let's be professional. I know many have not been, but to those that poke these companies: please let them enjoy their moments too. They engineered the product as well, and a lot of amazing work went into the whole product.
 
  • Like
  • Fire
  • Love
Reactions: 51 users

JK200SX

Regular
Published today in the Edge Impulse Blog :)

Edge Impulse

BrainChip Akida™ Platform Features in Edge Impulse

Blog


mathijs (Team)

Edge Impulse enables developers to rapidly build enterprise-grade ML algorithms, trained on real sensor data, in a low- to no-code environment. With the complete integration of BrainChip Akida™ technology, these trained algorithms and new algorithms can now be converted into spiking neural networks (SNNs) and deployed to BrainChip Akida target devices such as the Akida mini PCIe development board. This blog highlights the new BrainChip features now available in Edge Impulse Studio that provide easy, quick, and advanced model development and deployment for BrainChip Akida neuromorphic technology.
The development of models for BrainChip Akida is now integrated into Edge Impulse Studio. Developers can select BrainChip Akida (see Figure 1) in the learning block of an impulse design. There are two learning blocks available today: classification, which supports development of new models, and transfer learning, which provides access to a model zoo optimized for BrainChip Akida devices. The learning blocks visible depend on the type of data collected and the intent of the project, such as classification or object detection. Using the BrainChip Akida learning blocks ensures that the models developed are optimized for, and successfully deploy to, the BrainChip Akida mini PCIe development board.
Figure 1: Choose BrainChip Classification or Transfer learning blocks for development on BrainChip Akida devices
In the learning block of the impulse design, one can compare float, quantized, and Akida versions of a model. If you added a processing block to your impulse design, you will need to generate features before you can train your model. Developers can use Edge Impulse Studio (see Figure 2) to edit predefined models and training parameters.
Figure 2: Visual mode of the BrainChip Akida learning block, with options to profile float, quantized, and Akida accuracies
Edge Impulse Studio also lets users modify the pre-generated Python code to get more exact behavior from the learn blocks. Here, more advanced users can also call into the Akida MetaTF Python package, which is integrated into the Akida learn blocks (Figures 3a and 3b).
Figure 3a: How to access Expert Mode of a BrainChip Akida learn block
Figure 3b: Example of Expert Mode code that calls BrainChip’s MetaTF package functions
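For a feel of what such Expert Mode code can do, here is a minimal sketch of quantizing a trained Keras model and converting it to an Akida SNN with MetaTF. This is an illustration only: the file names are hypothetical, and the quantize/convert calls follow the legacy cnn2snn API, whose signatures may differ between MetaTF releases.

```python
# Minimal sketch (assumed API): quantize a trained Keras model with
# cnn2snn, then convert the quantized model into an Akida SNN.
from tensorflow import keras
from cnn2snn import quantize, convert

# Model produced by the learn block (hypothetical file name).
model = keras.models.load_model("trained_model.h5")

# 4-bit weights/activations, 8-bit weights on the input layer.
model_q = quantize(model,
                   input_weight_quantization=8,
                   weight_quantization=4,
                   activ_quantization=4)

# Convert to an Akida model, inspect it, and save it for deployment.
akida_model = convert(model_q)
akida_model.summary()
akida_model.save("trained_model.fbz")
```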
While training the BrainChip Akida learn block, useful information such as model compatibility, a summary, sparsity, and the number of NPs (neural processors) required is also displayed in the log output (see Figure 4). This helps developers review and modify their models to generate custom, optimized configurations for BrainChip Akida devices (Figure 5).
Figure 4: Profile information for BrainChip Akida
If the project uses a transfer learning block, the developer is presented with a list of models (see Figure 5) from BrainChip’s model zoo that are pre-trained and optimized for BrainChip Akida. These models provide a great starting point for developers to implement transfer learning in their projects. As of today, several AkidaNet-based models are integrated into Edge Impulse Studio, and many more will be integrated over time. If developers have a specific request, please let us know via the Edge Impulse forums.
Figure 5: List of models available when BrainChip Transfer Learning block is chosen
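Outside Studio, the same pre-trained backbones can be pulled straight from BrainChip's model zoo. A minimal sketch, assuming the akida_models package and its akidanet_imagenet_pretrained helper (helper names vary by release):

```python
# Load a pre-trained, quantized AkidaNet backbone from the model zoo.
from akida_models import akidanet_imagenet_pretrained
from cnn2snn import convert

base = akidanet_imagenet_pretrained()  # quantized Keras model
base.summary()

# Edge Impulse Studio swaps in and retrains a new classification head
# for your data; done by hand, you would replace the final layers
# before converting. The converted model runs on Akida devices:
akida_model = convert(base)
akida_model.save("akidanet_base.fbz")
```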
Any model developed in the impulse design on Edge Impulse Studio can be deployed to BrainChip Akida target devices. To download these models for custom deployments, developers choose the BrainChip MetaTF™ model block (see Figure 5a) under the deployment stage to obtain a .zip file containing the converted model. Alternatively, there is also a BrainChip Akida PCIe deployment block (see Figure 5b), which generates a pre-built binary that can be used with the Edge Impulse Linux CLI on any compatible Linux installation where the board is installed.
The pre-built binary provided by the deployment block also has built-in tools that report performance metrics for the AKD1000 mini PCIe reference board (see Figure 7). This is a unique integration: developers can not only deploy their favorite projects to BrainChip Akida devices but can also capture information such as efficiency to help build a prototype system with Akida technology.
Figure 7: Performance information unique to BrainChip Akida devices
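The same kind of metrics can also be read programmatically through the akida Python runtime. A minimal sketch, assuming a saved model file (hypothetical name), a 224x224 RGB input, and an AKD1000 PCIe board installed on the host:

```python
import numpy as np
import akida

# Load a converted model and look for attached Akida hardware.
model = akida.Model("model.fbz")
devices = akida.devices()      # enumerate available Akida devices
if devices:
    model.map(devices[0])      # map the model onto the AKD1000 board

# Push one dummy 8-bit image through, then read the counters.
x = np.random.randint(0, 256, size=(1, 224, 224, 3), dtype=np.uint8)
out = model.forward(x)
print(model.statistics)        # throughput and power metrics on hardware
```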
To get started quickly, please see these example projects, which are available to clone immediately for training and deployment to the AKD1000 development platform.
If you are new to Edge Impulse, please try out one of the Getting Started tutorials before continuing with Edge Impulse BrainChip Akida features. Please reach out to us on Edge Impulse forums for technical support. If you need more information about getting started with BrainChip Akida on Edge Impulse, visit the documentation page.

This is a companion discussion topic for the original entry at https://www.edgeimpulse.com/blog/brainchip-akidatm-platform-features-in-edge-impulse
 
  • Like
  • Fire
  • Love
Reactions: 44 users

[image attachment]
 
  • Like
  • Fire
  • Love
Reactions: 27 users
Might not be related, but it's interesting that both Dell and Panasonic are using it. Reads like a solution similar to the radar one Socionext was developing, while using ultra-low power:


Synaptics’ Emza Visual Sense AI Powers Human Presence Detection in Latest Dell and Panasonic Mobile PCs

Ultra-low-power machine learning (ML) algorithms analyze user and on-looker behavior to conserve power while enhancing privacy and security.

Las Vegas, NV, January 3rd, 2023 – Synaptics® Incorporated (Nasdaq: SYNA) announced today that Dell and Panasonic have deployed its Emza Visual Sense artificial intelligence (AI) technology to enable Human Presence Detection (HPD) in their latest Latitude and SR mobile PCs, respectively. Running advanced Emza ML algorithms on dedicated, ultra-low power edge AI hardware, Synaptics provides PC OEMs with a turnkey solution that enables both longer battery life and enhanced privacy and security. By analyzing context, Synaptics’ technology goes far beyond basic presence detection to automatically wake the system when a user is engaged, dim the screen when they are not, hide information when an on-looker is detected, and lock the PC when the user walks away—all while the PC’s webcam is off.

CES 2023: To see the latest demonstration of Emza Visual Sense and HPD, visit us in the Venetian Hotel, Level 2 Exhibitor, Bellini Ballroom, #2105. Email press@synaptics.com for an appointment.

“By using Emza Visual Sense AI for HPD, Dell and Panasonic are leading an era of context-aware computing that will combine multiple sensing modes with edge AI to enable user-friendly, efficient, and secure intelligent platform experiences,” said Saleel Awsare, SVP & GM at Synaptics. “We are as excited for our partners as they embark on this journey as we are for the end users who will quickly reap the benefits.”

Both the Dell Latitude laptop and Panasonic SR notebook are shipping today.


Synaptics’ Katana™ AI SoC platform also pairs with Emza Visual Sense ML algorithms to form ultra-low-power, highly intelligent computer vision platforms with broad deployability. Together, they expand HPD applications beyond PCs and notebooks to devices including smart TVs and assisted-living cameras.

Availability
Synaptics’ Emza Visual Sense technology is available now. For more information, contact your local Synaptics sales representative.
 
  • Like
  • Fire
Reactions: 17 users

Foxdog

Regular
Kachoo said:
[quoted post above]
Good post, thanks.
After the CEO podcast I've turned my attention (and expectations) to 2024. There's already a groundswell around AI growing in the media, and I think this will continue throughout the remainder of this year. As for our revenue, perhaps a bit more in the next 4C, followed by bigger 'lumps' towards the end of 2023. We should see some solid revenue in 2024 as customers' products mature and enter the marketplace.
I don't know if we'll ever see 'Akida Inside' advertised on products but we should see a steady increase in SP reflective of a successful company providing an amazing product to many other successful companies.
That's my take away from the CEO's podcast anyway.
 
  • Like
  • Love
Reactions: 20 users

Bravo

If ARM was an arm, BRN would be its biceps💪!
Howdy Brain Fam,

Hope everyone is having a great weekend. Let's hope I can make it even better!

I just watched the Cerence 25th Annual Needham Growth Conference presentation, which was filmed on 10th Jan 2023. It's an approximately 40-minute video presentation that you have to sign up to watch (full name and email address required for access). The link is here if you're interested in watching: https://wsw.com/webcast/needham

I'm itching to share a bit of information from the presentation, because I believe there were numerous points raised throughout that quite strongly indicate the possible use of our technology in Cerence's embedded voice solution, IMO.

For some background, Cerence is a global leader in conversational AI, and they state that they are the only company in the world to offer the "complete stack", including conversational AI, audio, and speech-to-text AI. Cerence state that every second newly defined SOP (start of production) car uses their technology, and they're working with some very big names such as BYD, NIO, GM, Ford, Toyota, Volkswagen, Stellantis, Mercedes, and BMW.

In the presentation they discussed how in November they held their second Analyst Day, at which they outlined their new strategy, "Destination Next". They said that from a technology perspective this transition means they are going to evolve from a voice-only, driver-centric solution via their Cerence Assistant or Co-Pilot to a truly immersive in-cabin experience. Stefan Ortmanns (CEO, Cerence) said early in the presentation something like "which means we're bringing in more features and applications beyond conversational AI, for example wellness sensing, for example surrounding awareness, emotional AI, or the interaction inside and outside the car with passengers, and we have all these capabilities for creating a really immersive companion". He also said something about the underlying strategy being based on 3 pillars, "scalable AI, teachable AI, and the immersive in-cabin experience", which has been brought about as a result of a "huge appetite for AI".

At about 6 mins Stefan Ortmanns says they have recently been shifting gear to bring in more proactive AI, and he said something along these lines: "What does it mean? So you bring everything you get together, so you have access to the sensors in the car, you have an embedded solution, you have a cloud solution, and you also have this proactive AI, for example the road conditions or the weather conditions. And if you can bring everything together you have a personalised solution for the driver and also for the passengers, and this combines with what we call the (??? mighty ?? intelligence). And looking forward, for the immersive experience you need to bring more together; it's not just about speech, it's about AI in general, right, so, with what I said, wellness sensing, drunkenness detection, you know we're working on all this kind of cool stuff. We're working on emotional AI to have a better engagement with the passengers and also with the driver. And this is our future road map, and we have vetted this with 50-60 OEMs across the globe, and we did it together with a very well-known consultancy firm."

At about 13 mins they describe how there will be very significant growth in fiscal years 23/24 because of the bookings they have won over the last 18 to 24 months, which will go into production at the end of this year and very early in 2024, and a lot of them will have the higher tech stack that Stefan talked about.

At roughly 25 mins Stefan Ortmanns is asked how they compete with big tech like Alexa, Google, and Apple, and how they are co-existing, given that certain OEMs are using Alexa and certain ones are using Cerence as well. In terms of what applications Cerence provides, Stefan replied with something like "Alexa is a good example, so what you're seeing in the market is that OEMs are selecting us for their OEM-branded solution and we are providing the wake word for interacting with Alexa; that's based on our core technology".

Now here comes the really good bit. At 29 mins the conversation turns to partnership statements, and they touch on NVIDIA and whether Cerence view NVIDIA as a competitor or partner (sounds familiar). This question was asked in relation to NVIDIA having its own Chauffeur product, which enables some voice capabilities with its own hardware and software; however, Cerence has also been integrated into NVIDIA's DRIVE platform. In describing this relationship, Stefan Ortmanns says something like "So you're right. They have their own technology, but our technology stack is more advanced. And here we're talking specifically about Mercedes, where they're positioned with their hardware and with our solution. There's also another big semiconductor player, namely Qualcomm; now they are working with Volkswagen group and they're also using our technology. So we're very flexible and open with our partners".

Following on from that, they discuss how Cerence is also involved in the language processing for BMW, which has to be "seamlessly integrated" with "very low latency".

So, a couple of points I wanted to throw in to emphasise why I think all of this so strongly indicates that BrainChip's technology is part of Cerence's stack.
  • Cerence presented Mercedes as the premium example with which to demonstrate how advanced their voice technology is in comparison to NVIDIA's. Since this presentation is only a few days old, I don't think they'd be referring to Mercedes' old voice technology but rather the new advanced technology developed for the Vision EQXX. And I don't think Cerence would be referring to Mercedes at all if they weren't still working with them.
  • This is after Mercedes worked with BrainChip on the “wake word” detection for the Vision EQXX, which made it 5-10 times more efficient. So it only seems logical, if Cerence's core technology is to provide the wake word, that they should incorporate BrainChip’s technology to make the wake word detection 5-10 times more efficient.
  • In November 2022 Nils Shanz, who was responsible for user interaction and voice control at Mercedes, and who also worked on the Vision EQXX voice control system, was appointed Chief Product Officer at Cerence.
  • Previous post in which Cerence describe their technology as "self-learning", etc.: #6,807
  • Previous posts in which Cerence technology is described as working without an internet connection: #35,234 and #31,305
  • I’m no engineer, but I would have thought the new emotion-detection AI and contextual-awareness AI that are connected to the car’s sensors must be embedded into Cerence’s technology for it all to work seamlessly.
Anyway, naturally I would love to hear what @Fact Finder has to say about this. As we all know, he is the world's utmost guru at sorting the wheat from the chaff, and he always stands ready to pour cold water on any outlandish dot-joining attempts when the facts don't stack up.

Of course, these are only the conclusions I arrived at after watching the presentation, and I would love to hear what everyone else's impressions are. Needless to say, I hope I'm right.

B 💋

 
Last edited:
  • Like
  • Fire
  • Love
Reactions: 134 users

Foxdog

Regular
Mate, I reckon we're in good shape while the Fool keeps bagging us. Look out when they decide to pump our stock and recommend it - that might be the time to sell. Don't fret tho, it'll be 10 bucks plus by then 😜
 
  • Like
  • Fire
Reactions: 10 users

Vladsblood

Regular
Foxdog said:
[quoted post above]
I’m not selling until at least 4 stock splits after our full Nasdaq listing, folks. By then we could be 50-100 dollars or more. Cheers Chippers, Vlad
 
  • Like
  • Love
  • Fire
Reactions: 31 users

Esq.111

Fascinatingly Intuitive.
Evening Chippers,

Breaking news...

World first: Pioneer DJ mixing table utilising BrainChip's Akida neuromorphic chip on the International Space Station.
Personally, I can't imagine having to wash the external windows, whilst attached via umbilical, without some groovy tunes.

😄 .

* With any luck, this may pull Fact Finder back to give me a dressing down.
Seemed to work last time.

All in good humour.

ARi - Matasin, Live Series, Ep.003 ( Melodic Techno Progressive House Mix) 7th Jan 2023.

If a savvy individual could post a link, thank you in advance.
This may be our only hope of retrieving Fact Finder.

Cheers for all the great finds and posts today.

Regards,
Esq.
 
  • Like
  • Haha
  • Fire
Reactions: 29 users

SiFive
Did you miss us at the 2022 #RISCVSummit? We’ve got you covered! Learn more about our collaboration with @Intel and the new Horse Creek development board from SiFive’s Jack Kang:



I was not aware that SiFive are collaborating with Intel on a ‘powerful tool for developers’

Is BrainChip also involved?

[image attachment]


HiFive Pro P550

RISC-V is inevitable, and the HiFive Pro P550 development system exemplifies that.

In partnership, Intel and SiFive are excited to introduce the highest performance RISC-V development board, which is scheduled to be available Summer 2023.

The soul of the machine is the Intel Horse Creek SoC, built on the Intel 4 process, that includes a SiFive Performance™ P550 Core Complex, a quad-core application processor featuring a thirteen-stage, triple-issue, out-of-order pipeline with the RISC-V RV64GBC ISA, and on-board DDR5-5600 and PCIe Gen5.

Board features (subject to change) include: 16GB DDR5, 2x PCIe expansion slots, 1/10GbE networking, USB 3, on-board graphics, and a remote-management-ready interface (OCP DC-SCM).

This is a premium software development system ideal for developer desktop machines and rack-based build/test/deploy servers for RISC-V software development. RISC-V has no limits.



Intel at 18:25
 
  • Like
  • Fire
  • Love
Reactions: 51 users

Diogenese

Top 20
Bravo said:
[quoted post above]

Hi Bravo,

As you know, Cerence has been on our radar, and I had filed them under competitors on the "Friend or Foe" principle, but the truth is that they seem to be agnostic as far as NNs are concerned, simply listing "artificial intelligence" in a grocery list of functions.


US2022415318A1 VOICE ASSISTANT ACTIVATION SYSTEM WITH CONTEXT DETERMINATION BASED ON MULTIMODAL DATA

A vehicle system for classifying spoken utterance within a vehicle cabin as one of system-directed and non-system directed may include at least one microphone to detect at least one acoustic utterance from at least one occupant of the vehicle, at least one camera to detect occupant data indicative of occupant behavior within the vehicle corresponding to the acoustic utterance, and a processor programmed to receive the acoustic utterance, receive the occupant data, determine whether the occupant data is indicative of a vehicle feature, classify the acoustic utterance as a system-directed utterance in response to the occupant data being indicative of a vehicle feature, and process the acoustic utterance.

[0016] The vehicle 104 may be configured to include various types of components, processors, and memory, and may communicate with a communication network 110. The communication network 110 may be referred to as a “cloud” and may involve data transfer via wide area and/or local area networks, such as the Internet, Global Positioning System (GPS), cellular networks, Wi-Fi, Bluetooth, etc. The communication network 110 may provide for communication between the vehicle 104 and an external or remote server 112 and/or database 114, as well as other external applications, systems, vehicles, etc. This communication network 110 may provide navigation, music or other audio, program content, marketing content, internet access, speech recognition, cognitive computing, artificial intelligence, to the vehicle 104.
 
  • Like
  • Fire
  • Love
Reactions: 33 users

Mt09

Regular

CEO of Inivation wishes us all the best?

Big Rob T gave it a like.

Soz if posted b4!
 

Attachments

  • [screenshot attachment]
  • Like
  • Love
  • Fire
Reactions: 31 users

TopCat

Regular
Esq.111 said:
[quoted post above]
Not quite techno but why haven’t I ever seen this before? By Akida 😎

 
  • Like
  • Fire
Reactions: 3 users
[quoting the SiFive and Intel post above]

@Diogenese what do you reckon? Could we also be involved in the SiFive-Intel 'Horse Creek' collaboration? Is the ChatGPT description below legit?

[screenshots of a ChatGPT response]
 
  • Like
  • Fire
Reactions: 12 users

Potato

Regular
When is the next quarterly being released? Anyone got the date?
 
  • Like
Reactions: 1 users

Diogenese

Top 20
@Diogenese what do you reckon? Could we also be involved in the SiFive-Intel 'Horse Creek' collaboration? Is the ChatGPT description below legit?
SiFive Horse Creek was showcased in October 2022, so they would have been seeing each other for some time before that. There is no mention of NNs or AI accelerators in this article from 2022-10-10:
https://www.cnx-software.com/2022/1...rm-sifive-p550-risc-v-cpu-8gb-ddr5-pcie-gen5/
Horse Creek platform specifications:

  • CPU – SiFive P550 quad-core RISC-V processor @ up to 2.2 GHz with a 13-stage, 3-issue, out-of-order (OoO) pipeline, private L2 cache, and common L3 cache
  • Memory – DDR5-5600 interface
  • PCIe – PCIe Gen5 through Intel’s PCIe PHY with 8 lanes, Synopsys PCIe Root Hub controller
  • Other peripheral interfaces – I3C, Quad and Octal SPI, UART, peripheral DMA
  • Package – 19×19 FBGA
  • Process – Intel 4 technology

Our affair with SiFive goes back to April 2022
https://brainchip.com/brainchip-sifive-partner-deploy-ai-ml-at-edge/
but we did not start going out with Intel until December 2022.


There is nothing to indicate that Horse Creek uses Akida. [Now I'm talking like ChatGPT (where I get all my answers from)].

As for the GPT response, it is couched in broad generalizations without any real detail, almost like it was under NDA.
 
Last edited:
  • Like
  • Fire
  • Love
Reactions: 22 users


Founder of Qualcomm and a couple of other well-known names.
Short video, but I like to know what these guys are like in a relaxed setting.
"When the money hits the table, that's when you find out the real character of people" - so very true.
 
Last edited:
  • Like
  • Fire
  • Love
Reactions: 10 users