CHIPS
How do we know Sean was going to South Korea on the way home? Did he say that?
Yes, he did. In the last investor's podcast, at the end, he mentioned his schedule of trips.
4:50 mark: "You've got foundries. You've got IBM (?) ..."
Interesting that Sean referred to TeNNs as an algorithm.
Algorithms can be implemented in software at some cost to speed and energy consumption.
We know that Akida 2 was taped out but did not proceed to silicon.
The TeNNs patents were filed mid-2022, so we could have been discussing TeNNs with EAPs from then on, ~ 2 years.
Because TeNNs is such a great improvement on transformers, it could be implemented in software and still outperform transformer NNs by a mile. In fact, Akida NNs can be implemented in software. As I may have said before, there is a significant advantage in implementing new tech that is in a state of flux in software, and waiting until the tech is more settled before committing to silicon.
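Purely as an illustration of that point (this is not BrainChip's code, and the layer shown is a generic one, not TeNNs), a network's forward pass is just arithmetic, so any model targeted at silicon can also be evaluated in plain software; the trade-off is speed and energy, not capability:

```python
# Illustrative only: a tiny quantized fully-connected layer evaluated in
# plain Python. Hardware implements the same arithmetic in silicon;
# software gives flexibility (easy updates) at a speed/energy cost.

def quantize(values, bits=4):
    """Clamp and round values to a signed integer grid (illustrative)."""
    q = (1 << (bits - 1)) - 1          # e.g. 7 for 4-bit signed
    return [max(-q, min(q, round(v))) for v in values]

def dense_relu(inputs, weights, biases):
    """Forward pass of one fully-connected layer with ReLU activation."""
    outputs = []
    for row, b in zip(weights, biases):
        acc = sum(w * x for w, x in zip(row, inputs)) + b
        outputs.append(max(0, acc))    # ReLU
    return outputs

x = quantize([3.2, -1.7, 0.4])         # low-bit activations, as edge chips use
w = [[1, -2, 0], [0, 1, 3]]            # integer weights (made up for the example)
out = dense_relu(x, w, biases=[0, 1])
print(out)                             # -> [7, 0]
```

The same function could later be "committed to silicon" unchanged, which is exactly why developing in software while the tech is in flux is cheap to iterate on.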
We know Valeo uses software image interpretation in SCALA 3, and we have been working with them in a Joint Development partnership for a few years. Clearly they would have been among the first to hear about TeNNs.
Although they have switched to radio silence on Akida, Mercedes makes a great deal of noise about its software-defined vehicle. Mercedes would also have been among the first to hear about TeNNs, but they have swapped to Luminar for most of their lidar, although I forget when that takes effect. However, Luminar also uses software image interpretation. There is thus a possibility that Mercedes could use TeNNs, with or without Akida 2, in conjunction with any "legacy" SCALA 3 or the new Luminar lidar.
In any event, we know that Valeo has $1B+ forward orders with Stellantis and Toyota, so it is possible that TeNNs/Akida 2 software will be used in these applications.
The best image temporal classification technology, TeNNs, has not yet been reduced to silicon, but it is available in software form*. BRN are busy building the NN models for TeNNs for various applications. The beauty of a software implementation during this development phase is that it can be readily updated.
Similar considerations also apply to Prophesee. In fact, all EAPs would have been consulted about TeNNs - is TeNNs the reason that Akida 1 was labeled too narrow?
*I wonder if Tony's AGM demo of TeNNs sentence building v GPT2 was software or FPGA?
Whoever made this video could at least have advised Sean which camera to look at; unprofessional, to say the least.
Actually, it's common nowadays to make interviews like that… no one looks directly at the camera… it may look stupid, to be honest, but from my point of view it's fresh and up to date!
Edit: it should give you the impression that he's talking to someone.
As you can see, it was at the beginning a very normal response… I had no problem with you at all… in a forum everyone can respond to a topic. By the way, in most of the interviews I've seen and made, no one looks directly into the cam. Again… I have no problem… it's just my point of view.

I disagree, mate. I've been in TV my whole life; that's not how you do interviews.
I simply made a comment on the camera and you feel the need to start this. I mean, listen to your carry-on below; just an attack or rant, whatever you want to call it. Moving on; my opinion.
Wow, that is wonderful, thank you for sharing and thanks for putting it up on the website!

Tony Dawe just advised that the 2024 AGM webcast is up on the site.
Hit RESOURCES tab.
Then EVENTS.
past events....BRAINCHIP AGM.
Hit AGM WEBCAST.
It's all there.
AGM Webcast
Is that you, Janus?
7für7 said:
Mate….I know people whose grandparents also run a business in which they specialized. That doesn't mean they ( the grandchildren) automatically know everything. The media landscape is constantly changing. Camera work, which is meant to evoke a certain drama, is also evolving. Imagine if films were still shot the same way they were in your grandparents' time… I come from this field as well, so relax… my great-great-great-grandparents were already doing theater, and my ancestors invented drama and comedy…so!?
Hopefully it's not Ouroboros.
Perhaps he could have a look around Brainchip?

Among my experimental use of packages for SNN, I found SNNTorch to be the easiest and cleanest package available for building your SNN.
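For anyone curious what that kind of package gives you: the basic building block in snnTorch is a leaky integrate-and-fire (LIF) neuron (its `Leaky` layer). Here is a minimal plain-Python sketch of those dynamics, illustrative only and with made-up parameters, not snnTorch's actual implementation:

```python
# Illustrative leaky integrate-and-fire (LIF) neuron, the basic unit an
# SNN package like snnTorch builds networks from. Plain Python so it runs
# with no dependencies; snnTorch wraps the same idea in PyTorch tensors.

def lif_step(current, membrane, beta=0.9, threshold=1.0):
    """One timestep: decay the membrane, add input, spike and reset if over threshold."""
    membrane = beta * membrane + current   # leaky integration (beta = decay rate)
    spike = membrane >= threshold
    if spike:
        membrane -= threshold              # soft reset by subtraction
    return int(spike), membrane

mem, spikes = 0.0, []
for cur in [0.5, 0.5, 0.5, 0.0, 0.0]:      # a short input current trace
    spk, mem = lif_step(cur, mem)
    spikes.append(spk)
print(spikes)                              # -> [0, 0, 1, 0, 0]
```

The neuron stays silent until enough charge accumulates, then emits a single spike; that sparse, event-driven behaviour is what makes SNNs attractive for low-power hardware.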
Maybe it is just me, but I really like Qualcomm being right next door to the BRN displays. Purely my point of view… hmmm, maybe not just me.

Our 2024 Embedded Vision Summit booth is much better-looking than that tiny space at the 2024 embedded world (which was part of the tinyML Pavilion)!
Looks like one of the VVDN Edge AI Boxes on the left?
I can also spot the audio denoising demo that was presented at the AGM (orange screen on the right).
100% correct Dio.
Mouna worked with us for about 6 months on a project that resulted in a shared Patent Application in 2017.
Named inventors are Peter, Mouna and Nic Oros.
US10157629B2 Low power neuromorphic voice activation system and method
ALSO....something very new, this AUSTRALIAN FILING is the first that I believe names us in the Agriculture sector !!
It was only filed 6 days ago !!
AU2023255049A1 System and Method for growing fungal mycelium
Love Tech and Akida
If you want some facts or a point of view on TV production, sure, let's talk off here, as this is a BRN forum, not your personal insult forum. I would be glad to rub your nose in it. Let me know how to get in contact with you.

So as I understand it, you never worked in this field. Is that correct? Because normally you would respond with facts. I gave you some points on why the interview is made the way it is. All you can do is tell me your blood is boiling and insult me. Bro, you are nothing, that's all. Igno
Yes, he did. I had a chat to him. All good.
Just to close the topic (because you're so professional…), I found this for you in English:

In documentaries and other non-fiction films, interviewees typically do not look directly at the camera during interviews for a few reasons:

- Maintaining Authenticity: Looking directly at the camera can break the illusion of a natural conversation between the interviewer and the interviewee. By looking at the interviewer or off to the side, the interviewee appears to be speaking directly to the interviewer rather than to the audience.
- Avoiding Distraction: When an interviewee looks directly at the camera, it can be distracting for the viewer. It can feel like the interviewee is addressing the audience rather than having a conversation with the interviewer.
- Focus on the Interviewer: By looking at the interviewer, the interviewee can maintain focus on the conversation and the questions being asked. This can help them provide more thoughtful and genuine responses.
- Conventional Style: Not looking at the camera during interviews has become a convention in documentary filmmaking. This style helps maintain consistency across different interviews and documentaries.

Overall, not looking at the camera during interviews in documentaries is a stylistic choice that helps create a more natural and engaging viewing experience for the audience.
Some interesting capabilities happening on device