BRN Discussion Ongoing

Boab

I wish I could paint like Vincent
Just reading through some old stuff and I'm reminded of (and excited by) the comments below.

BrainChip is aiming its technology at the edge, where more data is expected to be generated in the coming years. Pointing to IDC and McKinsey research, BrainChip expects the market for edge-based devices needing AI to grow from $44 billion this year to $70 billion by 2025. In addition, at last week’s Dell Technologies World event, CEO Michael Dell reiterated his belief that while 10 percent of data now is generated at the edge, that will shift to 75 percent by 2025. Where data is created, AI will follow. BrainChip has designed Akida for the high-processing, low-power environment and to be able to run AI analytic workloads – particularly inference – on the chip to lessen the data flow to and from the cloud and thus reduce latency in generating results.
 
Reactions: 21 users

Deleted member 118

Guest


 
Reactions: 1 users

JK200SX

Regular
View attachment 18989

View attachment 18990


View attachment 18993



Design your doorbell camera system now​

Drop us a line! Or skip the form
and call us now 877.437.9693





Hi chippers,
I've been looking for a possible doorbell system that uses our IP and came across this one. I've read this name before on TSE, though I can't remember when or in which post.

Anyway, if we ring the above number, we might get some information IF we ask the right questions about what else the doorbell does. I tried to call the number with a few questions in mind, such as battery efficiency (they say it is actually wired, BUT if we ask this question they may or may not give us the answers we need), other smart features, and whether it recognises specific people such as family members.
But it looks like I cannot make this call from Australia, so perhaps one of our US investors could give it a try.

I also wanted to know how they can ''customise'' a doorbell (see how they say ''design your doorbell camera system''?). How can we possibly design a doorbell camera?

- Event based camera
- Does not stop recording even if WiFi goes down
- Stop theft Before it happens.

HOW?

Have a great evening!

In my opinion, I don't think this camera has Akida in it. The specification that mentions "event based 10-90 seconds" refers to captured events, which appear as dots on the app playback screen so that you can quickly replay an "event" without trawling through all the footage.

I've also done some reading on the new wired Nest Doorbell (gen 2) that was released just over a week ago, and it could possibly be a contender for having BrainChip in it. My reasons:
- On-device learning.
- The camera can differentiate between people, animals, boxes etc., and the clincher is that it can identify the people/names it has learned.
- 960 x 1280 pixel / 1.3-megapixel resolution. I know this is left field, but there have been quite a few comments online from people complaining that the gen 2 resolution has dropped from the Gen 1 wired doorbell's 1600 x 1200. Why would they do this with a gen 2 product, when every updated model of a camera-based device usually increases the pixel count? One thing I've noticed in all the information we've uncovered on this forum about event-based camera systems is that the sensors typically have a low pixel count, i.e. nowhere near FHD or UHD, and the addition of AKIDA may have dictated this while still obtaining a superior-quality image.
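The resolution drop is easy to quantify; here's a throwaway Python sketch (the two resolutions are the only inputs taken from the post, everything else is plain arithmetic):

```python
# Quick arithmetic on the two doorbell resolutions mentioned above;
# nothing here is product data beyond those pixel dimensions.
def megapixels(width: int, height: int) -> float:
    """Total pixel count in megapixels."""
    return width * height / 1_000_000

gen1 = megapixels(1600, 1200)   # wired Gen 1 doorbell
gen2 = megapixels(960, 1280)    # wired Gen 2 doorbell

print(f"Gen 1: {gen1:.2f} MP")   # 1.92 MP
print(f"Gen 2: {gen2:.2f} MP")   # 1.23 MP
print(f"A Gen 2 frame carries {gen2 / gen1:.0%} of the Gen 1 pixel data")   # 64%
```

So each gen 2 frame carries roughly a third less raw data per frame, which is the kind of trade-off you'd expect if per-pixel, event-style processing were being done on or near the sensor.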

 
Reactions: 16 users
I think we do not have enough details. In the automotive world, producing a motor vehicle means having a plant where you bolt together bits sourced from all over the world. Mercedes-Benz has been building a very high-tech production facility in Germany to build the electronics as platforms to send to the plants where the vehicles are assembled. If AKIDA were, for example, in a dashboard platform shipped from Germany to their assembly plant in China, it would not be an issue.

Lots of ways this can play out.

My opinion only DYOR
FF

AKIDA BALLISTA

Mercedes has 3 plants. Luckily Beijing isn't producing cars at this time. It might be a hiccup if China can't build them, but cars will still be made.

 
Reactions: 7 users
1 August 2022 (updated 18 Aug 2022 11:49am)

BrainChip and Prophesee Optimize Computer Vision AI Performance​

[Image credit: LeoWolfert/Shutterstock]
Concept: Californian tech company BrainChip and French tech startup Prophesee have partnered to launch next-gen platforms for original equipment manufacturers (OEMs) looking to integrate event-based vision systems with high levels of AI performance. The partnership combines Prophesee’s computer vision technology with BrainChip’s neuromorphic processor Akida to deliver a complete high-performance, ultra-low-power solution.
Nature of Disruption: Prophesee’s computer vision technology leverages patented sensor design and AI algorithms. It mimics the eye and brain to reveal what was, until now, invisible to standard frame-based technology. The new computer vision technology has applications in autonomous vehicles, industrial automation, IoT, security and surveillance, and AR/VR. BrainChip’s Akida mimics the human brain to analyze only essential sensor inputs at the point of acquisition. It can process data with improved efficiency and precision. It also keeps AI/ML local to the chip, independent of the cloud, to reduce latency. The combination of both technologies can help advance AI enablement and offer manufacturers a ready-to-implement solution. Additionally, it helps OEMs looking to leverage edge-based visual technologies as part of their product offerings.
Outlook: The application of computer vision is increasing in various industries including automotive, healthcare, retail, robotics, agriculture, and manufacturing. It gives AI-enabled gadgets an edge to perform efficiently. BrainChip and Prophesee claim that the combination of their technologies can provide OEMs with a computer vision solution that can be directly implemented in a manufacturer’s end product. It can enable data processing with better efficiency, precision, and economy of energy at the point of acquisition.

This article was originally published in Verdict.co.uk
 
Reactions: 23 users

HopalongPetrovski

I'm Spartacus!


View attachment 19026

Interesting to read the advertising company's blurb about our new look.
Whilst I freely acknowledge that I am not the target market, and understand from what I have read here that engagement is growing, I do find myself somewhat at odds with their stated strategy.
They say they inherited a brand strong on logic but short on magic.
Whilst I can appreciate the allure of our tech “hey-prestoing” away our clients' issues, I think this approach is more suited to a commercial retail customer who just wants whatever the doodad is to work, preferably efficiently, but certainly effectively and economically, easily, and straight out of the box.
However, it seems to me that the clients we are seeking to nab now are people who appreciate and want the “logic” of our offering front and centre, presented in a clear and undeniable way that compels them to proceed with further investigation and engagement, leading, where appropriate, to our adoption.
Perhaps my age has somewhat jaded my sensibilities, and I am not advocating a return to generic robots, but I would like a bit more pizzazz than the odd splash of orange here and there.
I know it's all very grown up and that we are a serious company, but I would love to see a bit more flair and originality, and some element to tie it all together.
Perhaps the reincorporation of our synapse symbol.
I saw that as a lovely, non-linguistic, universally meaningful sign that helped make our brand memorable and somewhat more approachable.
It exhibited simplicity with the promise of scalability, from which one can easily appreciate that complex problems may be resolved by incorporating a scaled thinking device at the “right“ place. Being a representation of a natural phenomenon also lends it a degree of intuitive recognition and viability.
Anyway, just my Sunday afternoon musings, but I say “Bring back the Synapse”
before someone else appropriates it. 🤣
 
Reactions: 12 users
Hi @HopalongPetrovski, I generally agree with you.

The target customer is not your average Mum and Dad. They are manufacturers of intelligent devices and are therefore looking for the stated difference from competitor products, i.e. what makes BrainChip superior to the rest! In reality it doesn't take too much digging on the BrainChip website to find out; you just need the hook to get them there!

I suppose we have to have faith in the Management Team. Not blind faith; but if Sean, Jerome, Rob etc. are happy with the direction we are taking then I'll stick with them; they are the leaders in their respective fields.

I personally loved the Synapse as well. I thought it would start out including ”Brainchip”, but then in 5-10 years' time, when it was universally recognised, it would just be the Synapse, similar to how Nike is now just the Swoosh!

So I agree: ”Bring back the Synapse”

:)

Edit: Thinking about it over a coffee; even something bold and brash (which I'm not usually too fond of), like:

a billboard stating: “Brainchip: We have the best and first commercially available neuromorphic chip; we are superior to the rest!”

That'll get the secret out there!
 
Last edited:
Reactions: 17 users

HopalongPetrovski

I'm Spartacus!
I know you'll all find this ridiculous, but I just looked at our website on my phone for the first time and can see it's been optimised for this format.
I actually quite like the story as presented in this manner. 🤣
Being an old goat, I am accustomed to looking at everything on my desktop, but realise this makes me somewhat of a dinosaur 🦖
Nevertheless, I would still like to have the synapse back, even if they have to play around with it a bit to make it work with the new colour scheme. 🦕
 
Reactions: 8 users
I suppose a technophobe is more developed than a dinosaur, but either way we will both find out whether they knew what they were doing by rebranding when we rock up to the AGM and pass judgment on what has been achieved by CEO Sean Hehir.

Maybe he will be standing out the front waving 4C’s with one hand and holding an orange kite in the other with a smile that could span the Pacific Ocean.

Time will reveal all.

My opinion only DYOR
FF

AKIDA BALLISTA
 
Reactions: 15 users

HopalongPetrovski

I'm Spartacus!
Indeed. Looking forward to that 🤣
Hopefully Rob will be there this time too.
Wouldn’t mind sharing a burger and 5 minutes of his enthusiasm 🤣
Also, now that I’ve realised I can claim it back, will be staying in a more upmarket hotel 🤣
Just how upmarket is up to our CEO.
Bring it, Mr Hehir 🤣
 
Reactions: 18 users

Deleted member 118

Guest
Why don't they just put “using BrainChip technology”?




Description:

OUSD (R&E) MODERNIZATION PRIORITY: General Warfighting Requirements (GWR); Microelectronics; Quantum Science



TECHNOLOGY AREA(S): Electronics



OBJECTIVE: Develop a novel smart visual image recognition system that has intrinsic ultralow power consumption and system latency, and physics-based security and privacy.



DESCRIPTION: Image-based recognition in general requires a complicated technology stack, including lenses to form images, optical sensors for optical-to-electrical conversion, and computer chips to implement the necessary digital computation. This process is serial in nature and hence slow and burdened by high power consumption: it can take as long as milliseconds, and require milliwatts of supply power, to process and recognize an image. An image digitized into the digital domain is also vulnerable to cyber-attacks, putting users’ security and privacy at risk. Furthermore, as the information content of the images to be surveilled and reconnoitered continues to grow more complex over time, existing digital technologies will face severe bottlenecks in energy efficiency, system latency, and security because of the required sequential chain of analog sensing, analog-to-digital conversion, and digital computing.



It is the focus of this STTR topic to explore a much more promising solution that mitigates the latency and power-consumption issues of legacy digital image recognition by processing visual data in the optical domain at the edge. This proposed technology shifts the paradigm of conventional digital image processing by using analog instead of digital computing, and can thus merge analog sensing and computing into a single piece of physical hardware. In this methodology, the original images do not need to be digitized as an intermediate pre-processing step. Instead, incident light is directly processed by a physical medium. Examples include image recognition [Ref 1] and signal processing [Ref 2] using the physics of wave dynamics. The smart image sensors of [Ref 1], for instance, have judiciously designed internal structures made of air bubbles. These bubbles scatter the incident light to perform deep-learning-based neuromorphic computing. Without any digital processing, this passive sensor guides the optical field to different locations depending on the identity of the object. The visual information of the scene is never converted to a digitized image, and yet the object is identified in this unique computation process. These novel image sensors are extremely energy efficient (a fraction of a microwatt) because the computing is performed passively, without active use of energy. Combined with photovoltaic cells, in theory they can compute without any energy consumption, with a small amount of energy expended only upon successful image recognition, when an electronic signal must be delivered across the optical-digital interface. They are also extremely fast, with extremely low latency, because the computing is done in the optical domain: the latency is determined by the propagation time of light in the device, which is on the order of hundreds of nanoseconds at most.
Therefore, its performance metrics in terms of energy consumption and latency are projected to exceed those of conventional digital image processing and recognition by up to six orders of magnitude (i.e., a 1,000,000-fold improvement). Furthermore, it has intrinsic physics-based security and privacy because the coherent properties of light are exploited for image recognition. When these standalone devices are connected to system networks, cyber hackers cannot gain access to original images because such images are never created in the digital domain at any point in the computation process. Hence, this low-energy, low-latency image sensor system is well suited to 24/7 persistent target-recognition surveillance of any intended targets.
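Taking the solicitation's rough figures at face value, the claimed improvement can be sanity-checked with quick arithmetic. The exact point estimates below are my own assumptions ("milliseconds", "milliwatts", "hundreds of nanoseconds", and "a fraction of a microwatt" are the solicitation's words):

```python
import math

# Assumed point estimates for the quoted figures:
digital_latency_s = 1e-3     # ~1 ms per image, digital pipeline
digital_power_w   = 1e-3     # ~1 mW
optical_latency_s = 100e-9   # ~100 ns light propagation
optical_power_w   = 1e-7     # ~0.1 uW passive sensing

latency_gain = digital_latency_s / optical_latency_s
power_gain   = digital_power_w / optical_power_w
# Energy per recognized image = power x latency, so the two gains multiply.
energy_gain  = latency_gain * power_gain

print(f"latency: {latency_gain:,.0f}x ({math.log10(latency_gain):.0f} orders of magnitude)")
print(f"power:   {power_gain:,.0f}x")
print(f"energy per image: {energy_gain:.0e}x")
```

Under these crude estimates, latency and power each improve by about four orders of magnitude, and energy per recognized image by about eight, which is the ballpark the "orders of magnitude" language is gesturing at.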



In summary, these novel image recognition sensors, which use the nature of wave physics to perform passive computing exploiting the coherent properties of light, are a game changer for future image recognition. They could improve target recognition and identification in degraded vision environments accompanied by heavy rain, smoke, and fog. This smart image recognition sensor, coupled with analog computing capability, is an unparalleled alternative to traditional imaging sensors and digital computing systems when ultralow power dissipation, low system latency, and the higher security and reliability provided by the analog domain are the most critical performance metrics of the system.



PHASE I: Develop, design, and demonstrate the feasibility of an image recognition device based on a structured optical medium. Proof of concept demonstration should reach over 90% accuracy for arbitrary monochrome images under both coherent and incoherent illumination. The computing time should be less than 10 µs. The throughput of the computing is over 100,000 pictures per second. The projected energy consumption is less than 1 mW. The Phase I effort will include prototype plans to be developed under Phase II.



PHASE II: Design image recognition devices for general images, including color images in the visible or multiband images in the near-infrared (near-IR). The accuracy should reach 90% for objects in ImageNet. The throughput reaches over 10 million pictures per second with computation time of 100 ns and with an energy consumption less than 0.1 mW. Experimentally demonstrate working prototype of devices to recognize barcodes, handwritten digits, and other general symbolic characters. The device size should be no larger than the current digital camera-based imaging system.
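The throughput and per-image computing-time targets in each phase are mutually consistent (assuming one image is processed at a time), which a few lines of Python can confirm:

```python
# Phase I and Phase II targets quoted above: per-image computing time
# and throughput should be reciprocals of each other if images are
# processed one at a time.
phase1 = {"compute_time_s": 10e-6,  "throughput_pps": 100_000}
phase2 = {"compute_time_s": 100e-9, "throughput_pps": 10_000_000}

for name, spec in (("Phase I", phase1), ("Phase II", phase2)):
    implied_pps = round(1.0 / spec["compute_time_s"])
    consistent = implied_pps == spec["throughput_pps"]
    print(f"{name}: implied {implied_pps:,} pictures/s, "
          f"target {spec['throughput_pps']:,} -> consistent: {consistent}")
```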



PHASE III DUAL USE APPLICATIONS: Fabricate, test, and finalize the technology based on the design and demonstration results developed during Phase II, and transition the technology with finalized specifications for DoD applications in the areas of persistent target recognition surveillance and image recognition in the future for improved target recognition and identification in degraded vision environment accompanied by heavy rain, smoke, and fog.



The commercial sector can also benefit from this crucial, game-changing technology development in the areas of high-speed image and facial recognition. Commercialize the hardware and the deep-learning-based image recognition sensor for law enforcement, marine navigation, commercial aviation enhanced vision, medical applications, and industrial manufacturing processing.
 
Reactions: 13 users

MrNick

Regular
Agreed. As I stated at the brand relaunch, it appeared to be the most 'heavenly simplistic and pared-down' solution. Essentially, the brand is now so visually basic that it says nothing. The synapse said everything while sitting alongside a very stylised font. Now, pair the synapse (our USP) with the new font and you have a fully rounded, immediately identifiable 'suite' of brand elements to use across your entire marketing toolbox. The issue here is that nailing your name and entire business to a font, and hoping it will create visual remembrance, is naive at best. Facebook can do that, Twitter and Instagram can do that, but they're ingrained in global psychology. Finally, if you're going to rebrand with a standalone font, then choose one that will sing what you stand for from the rooftops… *see Palantir, Sony, Netflix, and other leading-edge tech brands that have owned their rebrand process and not been swayed by thoughts of emulating the global leaders, or by a marketing agency that thinks it has hit gold.
*(But what do I know, I've only had thirty years of global branding experience… so far).
 
Reactions: 18 users

Deleted member 118

Guest
A few videos for a Sunday if anyone can be bothered



 
Reactions: 5 users

TasTroy77

Founding Member

Will be interesting to see
 
Reactions: 19 users


Thanks @TasTroy77

Very exciting. How am I supposed to sleep tonight now!

This has potential to be excellent publicity for Brainchip!

:)
 
Reactions: 13 users
Anyone come across these guys as yet?

Not really competition as yet, given it appears to be another startup looking for funding, but they believe a commercial chip is coming in early 2023.

Below is from May; I'm only posting because the neuromorphic angle and the comments on BRN popped up in the search.




How Rigpa is building chips inspired by our brains



About six years ago, it was very trendy to talk about how A.I. could soon match or surpass human intelligence. The old sci-fi trope made popular in films like The Terminator seemed close to reality and everyone was reading Nick Bostrom, as big names like Elon Musk talked up the almost limitless potential of A.I., and self-driving cars seemed just a few short years away from dominating our roads.

None of that has come to pass quite yet, but A.I. continues to make progress, finding its way into many aspects of our lives. Companies like Google, Microsoft, and Amazon harness it to make their own products smarter, and to empower third-party developers. It’s all useful stuff, but a time traveller from the wide-eyed days of 2016 might be a little disappointed.

That’s why it’s important to separate genuine advances from hype cycles. Away from the spotlight that shines on big tech company product launches, researchers and early-stage startups are working on technology that could form the next wave of A.I. and could bring us closer to artificial general intelligence (AGI), software that really does match human intelligence and adaptability.

Meet Rigpa

Rigpa is an Edinburgh-based startup that has been quietly working on A.I. technology inspired by how the human brain works; a field known as neuromorphic computing. “The brain itself is so powerful but consumes very little power… 20 watts, like a lightbulb,” says Rigpa founder Mike Huang. “By mimicking the biology of the brain we believe we can create A.I. that has lower power consumption and faster inference speed.”
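The networks Huang describes are spiking neural networks. As a loose illustration only (a textbook toy model, not Rigpa's or BrainChip's actual design), a leaky integrate-and-fire neuron shows the event-driven idea behind that "20 watts" efficiency: nothing happens, and essentially no energy is spent, until accumulated input crosses a threshold:

```python
def lif_run(inputs, leak=0.9, threshold=1.0):
    """Toy leaky integrate-and-fire neuron: the membrane potential
    decays by `leak` each step, accumulates the input current, and
    emits a spike (then resets) once it crosses `threshold`."""
    v, spikes = 0.0, []
    for current in inputs:
        v = v * leak + current
        if v >= threshold:
            spikes.append(1)
            v = 0.0          # reset after firing
        else:
            spikes.append(0)
    return spikes

# A steady weak input makes the neuron fire only occasionally; between
# spikes it is silent, which is where event-driven hardware saves power.
print(lif_run([0.3] * 10))   # [0, 0, 0, 1, 0, 0, 0, 1, 0, 0]
```

Neuromorphic chips exploit exactly this sparsity: computation (and power draw) happens only when and where spikes occur, instead of on every clock cycle for every unit as in a GPU.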

Rigpa’s work is based on Huang's PhD research into neuromorphic computing for radioisotope identification, and Huang believes that beyond improved efficiency, the approach could even help A.I. self-learn and generate its own innovative ideas.

“The A.I. will not be the equivalent to a human being, but you hope that the machine itself can let people be liberated from repetitive work, so they can spend more of their time working on creative things, or do what they really want to”.

This will be a familiar idea if you follow the rhetoric around A.I. The idea of automation liberating humans from work will sound like a utopia to many, but the A.I. of today is a long way away from achieving that. Huang believes a radically different approach, like brain emulation, is required to get us there.

Huang envisions that Rigpa’s work will find its way into the A.I. processors of the future. Today, much A.I.-processing for tasks like machine learning is done using high-powered GPUs from companies like Nvidia. These are components often originally designed to help gamers get the best possible graphics, which by chance turned out to be good for A.I., too.

“It’s a coincidence that GPUs are good for A.I. because they’re good at parallel computing, but they're not efficient,” says Huang. And the efficiency of A.I. is about much more than saving money: A.I.’s carbon footprint is a growing concern. One study in 2020 found that training A.I. models can generate a carbon footprint five times greater than that of the average American car over its lifetime. Even the more generous findings of a Google-backed study in 2021 found that training the much-lauded GPT-3 natural language model used 1,287 megawatt-hours of electricity, producing 552 metric tons of carbon dioxide emissions.
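Assuming the energy figure is in megawatt-hours (the unit the underlying Google-backed study uses), the two quoted numbers imply a grid carbon intensity that is easy to back out:

```python
# Figures quoted above for training GPT-3:
energy_mwh = 1_287   # megawatt-hours of electricity
co2_tonnes = 552     # metric tons of CO2

# Implied carbon intensity of the electricity used:
kg_per_kwh = co2_tonnes * 1_000 / (energy_mwh * 1_000)
print(f"~{kg_per_kwh:.2f} kg CO2 per kWh")   # ~0.43 kg/kWh
```

That ~0.43 kg CO2/kWh is in the typical range for fossil-heavy grid electricity, which is why where (and how efficiently) a model is trained matters as much as the model itself.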
Huang comes to neuromorphic computing after a decade in chip design, including eight years at Broadcom. He began his PhD at the University of Edinburgh in 2019, conducting research funded by the US Defense Threat Reduction Agency (DTRA) and radiation detector company Kromek Group.

Huang is joined at Rigpa by co-founders Dr. Taihai Chen and Edward Jones. Chen, who previously co-founded University of Southampton spinout AccelerComm, is focused on building out Rigpa’s commercial strategy. Jones, a University of Manchester PhD candidate, collaborated with Huang on the research that forms the basis of Rigpa’s technology.

A.I.’s progression towards the human brain

Rigpa’s solution is far from the only show in town when it comes to more efficient A.I. hardware. Huang considers current state-of-the-art offerings like Google’s TPU to be part of a “second generation” of A.I. processor.

“With the second generation A.I. network, the artificial neural network is mature for the current market. We are working on the next generation, the third generation… which is more close to a biological neural network… it's low-power and fast inference but much less mature [as a technology],” says Huang.

One benefit of this fresh approach should be greater adaptability. While the TPU is great for working with datasets like images or text, new kinds of advanced sensors could require more human-like adaptability to make sense of their outputs, efficiently and at scale. What kinds of sensors? Huang gives the example of event cameras, which measure brightness on a pixel-by-pixel basis and could find use in fields like autonomous vehicles and robotics.
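The core trick of an event camera can be sketched in a few lines: compare log intensity per pixel and emit an event only where the change exceeds a contrast threshold. This is an illustrative toy of that principle; the function name and threshold are my own, not any vendor's API:

```python
import math

def events_between(frame_a, frame_b, threshold=0.2):
    """Emit (x, y, polarity) events wherever the log-intensity change
    between two frames exceeds the contrast threshold -- the core idea
    behind event (DVS) cameras, which report per-pixel changes instead
    of full frames."""
    events = []
    for y, (row_a, row_b) in enumerate(zip(frame_a, frame_b)):
        for x, (a, b) in enumerate(zip(row_a, row_b)):
            delta = math.log(b + 1e-6) - math.log(a + 1e-6)
            if abs(delta) >= threshold:
                events.append((x, y, 1 if delta > 0 else -1))
    return events

# A static background produces no events; only the changed pixel does.
before = [[0.5, 0.5], [0.5, 0.5]]
after  = [[0.5, 0.5], [0.9, 0.5]]
print(events_between(before, after))   # [(0, 1, 1)]
```

Because the sensor stays silent where nothing changes, its output is sparse and asynchronous, which is precisely the kind of data a spiking processor handles more naturally than a frame-oriented GPU or TPU.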

Rigpa has competition in the development of this third generation, most notably BrainChip, which was founded in 2006, IPO’ed in Australia in 2011, and recently launched what it describes as the first commercial neuromorphic A.I. chip. Big companies like IBM and Intel are also exploring the space. For example, Intel launched the Loihi 2 research chip last year. But Huang isn’t concerned about having much larger competition in an emerging space. He sees it as a new market ready for the capturing, just not quite yet….

Indeed, Huang speculates that perhaps BrainChip moved too quickly, too early. “There’s no real customer there yet,” he says. BrainChip’s financial results paint a picture that supports that view.

The route to market

Rigpa is taking time to explore the market and develop tools that fit real needs in defence and security, the internet of things, drones, and lidar. While he declines to go into detail about who the startup is working with, Huang says Rigpa has been engaged in an industrial partnership with Kromek Group, which serves the US Department of Defense, to develop brain-influenced A.I. for specific market needs.

Over the space of a three-year partnership, Rigpa has developed several prototype chips, the latest of which he says demonstrates at least 28x lower power and 23x faster speeds than the customer’s existing solution.

An edge chip, it is designed to provide A.I. computing at the location of sensors themselves, rather than sending data to the cloud. A good, relatable example of A.I. on the edge is how Google’s Tensor chip in the Pixel 6 Pro smartphone transcribed my conversation with Huang on-device, in real-time as we talked. BrainChip announced an edge computing-focused partnership last month.

A.I. is a competitive market, with plenty of big names and big money involved. But while the likes of Google and Intel have researched neuromorphic computing for years, Huang is right that the market for this type of A.I. just isn’t quite there yet. This provides an opportunity for the likes of Rigpa to develop new technology that either ends up being sought after by tech giants, or serves specific niche markets well. And of course, there’s always room for new giants to emerge as rivals to the likes of Google and Microsoft, given the right technology and the drive to market it well.

Rigpa is currently working on its commercial chip, which it plans to release in Q1 of 2023. Having been funded to date by the commercial backing for Huang’s PhD project, the startup is currently preparing its first equity round.
 
Reactions: 14 users
Came across the first article about Samsung and then the second in a post by @uiux

Does this mean anything for BrainChip? I assume Akida compatibility due to its agnostic nature?

I have no idea about 2nm and 1.4nm chips

Samsung to Mass-Produce 2nm Chips in 2025, 1.4nm Chips in 2027​

Link to PC Mag article 04/10/2022

EU Signs €145bn Declaration to Develop Next Gen Processors and 2nm Technology

Link to EE Times article 09/12/2020 (sourced from 2021 post by @uiux)
Samsung only just started producing 3nm in June this year, and one report I read said they were having problems producing them profitably.
Not saying they won't get there, I wouldn't have a clue, but they're going to be expensive chips, you would think.

Synopsys, who I think Prophesee ditched for BrainChip, have partnered with Samsung on the 3nm process for their chips.


But anything they produce is going to be an expensive option, for something that is technically not as good and can't just be built into a chip design like AKIDA can, with its IP.

What an economically viable option we offer customers.

Ahhh feeling really at ease about BrainChip's future prospects 😊

Good Fortune to all Holders!
 
Reactions: 23 users

Andy38

The hope of potential generational wealth is real
Reactions: 8 users

MDhere

Regular
When I was looking into Magik Eye's newly updated site, I also checked their LinkedIn page, and I noticed that a little while ago Franz Grzeschshniok from Robert Bosch liked a Magik Eye post, as did Sujay Jayashankar, who happened to be a senior software engineer for Magik Eye but just last month became a senior engineer for Mercedes-Benz Research and Development India... fancy that for a move, when both have BrainChip connections...
Now back to Franz: the KARLI project with Audi interests me, and they used the term "few-shot learning", which may or may not point to BrainChip, I'm not sure; and my other interest for some time has been Bosch's spexor, which really NEEDS AKIDA.
It's amazing what one Magik-Eye view can dig up. 😀
 
Reactions: 15 users
Got to admit, personally I'm not a fan of the new website. When I saw it at the beginning of the year I was frankly disappointed and unimpressed by Sean's first big statement. To me it looks cheap and bland. That a company was paid, what would be significant dollars, to create this concept irks me even more. The previous website was far more appealing.

When I tried to look at the careers being advertised on the site, it was buggy and didn't work - strike 2.

I then emailed the company about the issue and received an automated reply saying they would respond, but alas, no response came. Previously the company had always responded promptly and politely to shareholder correspondence - strike 3.

The problem remained unresolved for an extended period after my email.

Hopefully Sean and his new vision are kicking commercial goals.
 
Reactions: 4 users