BRN Discussion Ongoing

I'd like you to write a formal complaint to the producers of Sesame Street, regarding why they never provided directions to Sesame Street even though they were asked for numerous times, over and over, in the Sesame Street theme song. This remains unanswered to this day, many decades later.


Dear Sesame Street Producers,
I am writing to express my frustration and disappointment that the television show "Sesame Street" has never provided directions to the eponymous location despite numerous requests for them in the show's theme song.
As a long-time fan of the show, I have always been curious about where Sesame Street is located and how to get there. The fact that the show has never provided this information is a significant oversight, in my opinion.
I understand that Sesame Street is a fictional place, but that does not excuse the lack of transparency on the part of the producers. It is unacceptable that the show has neglected to provide such a basic and important piece of information to its viewers.
I urge the producers of Sesame Street to reconsider their decision and provide the requested directions to Sesame Street as soon as possible. This will go a long way in improving the overall quality and enjoyment of the show for its devoted fans.
Sincerely, [Your name]
Hold the damn phone. Where the heck would we send this letter, since we don't even know where Sesame Street is?

Ok, self-imposed ban for tonight, I've dribbled way too much crap.
I'll be back in an hour when it's tomorrow 😉
 
  • Haha
  • Like
Reactions: 8 users

White Horse

Regular
Hi Victor,

I wonder what the date of the Kunpeng announcement was. In particular, was it before the end of 2020?

Some time before LdN said "We don't need China", we had received US approval to export to China, and had planned to set up in Shanghai, but NASA et al spiked that idea.

So was the KunPeng deal nullified when we no longer needed China?

https://brainchip.com/brainchip-receives-akida-export-approval-from-us-government-brainchip-170220-02/#:~:text=BrainChip obtained a formal classification for EAR99 under,countries and to non-restricted customers and use cases.

BrainChip receives Akida export approval from US government

via Small Caps
Artificial intelligence device company BrainChip (ASX: BRN) has unveiled a new export classification issued from the US Government’s Bureau of Industry and Security (BIS).
The ruling authorises the export of its AI technologies without the company having to apply for additional licences and most importantly, paves the way for BrainChip to target non-restricted customers in Japan, Korea, China and Taiwan.
BrainChip obtained a formal classification for EAR99 under the Export Administration Regulations which removes barriers for exporting Akida to non-US countries and to non-restricted customers and use cases.
According to BrainChip, its technology is suitable for numerous edge applications including surveillance, advanced driver assistance systems, vision-guided robotics, drones, internet of things, acoustic analysis and cybersecurity. The Akida chip includes BrainChip’s entire AI edge network and has multiple learning modes.
BrainChip also stated that it continues with Akida product development and is engaging with early access manufacturers to bring a “first-in-kind product” to market. The Akida NSoC enables AI Edge solutions for high-growth, high-volume applications that have been difficult to achieve with existing AI architectures.


I recall being a bit surprised when this was announced.



Appears to be current.

https://e.huawei.com/au/products/servers/computing-kunpeng
 
  • Like
  • Fire
  • Thinking
Reactions: 11 users
Hi Victor,

I wonder what the date of the Kunpeng announcement was. In particular, was it before the end of 2020? ... So was the KunPeng deal nullified when we no longer needed China? ... I recall being a bit surprised when this was announced.

First found it around April 2022 and it wasn't announced per se, so I don't know when eX3 started their "research" on the set-up.

I've since found out that Huawei runs its own DaVinci AI, though I believe there are still some grey areas in some of this. It potentially shows Akida in a server environment, as KunPeng is in the Taishan servers, and from memory Atlas comes into play somewhere as well.

Was discussed below.

Thread 'HUAWEI TAISHAN 200 SERVER / KUNPENG 920 PROCESSOR USING AKIDA' https://thestockexchange.com.au/thr...rver-kunpeng-920-processor-using-akida.29899/
 
  • Like
  • Love
Reactions: 19 users

VictorG

Member
Hi Victor,

I wonder what the date of the Kunpeng announcement was. In particular, was it before the end of 2020? ... So was the KunPeng deal nullified when we no longer needed China? ... I recall being a bit surprised when this was announced.

Sorry Diogenese, I couldn't find a date, but I guess it was after BRN joined ARM, because Kunpeng uses ARM architecture.
 
  • Like
Reactions: 7 users

Diogenese

Top 20
First found it around April 2022 and it wasn't announced per se, so I don't know when eX3 started their "research" on the set-up. ... Was discussed in the 'HUAWEI TAISHAN 200 SERVER / KUNPENG 920 PROCESSOR USING AKIDA' thread.
It all comes back to me now -

Facilitating research on bleeding-edge HPC technologies

The eX3 infrastructure is continuously under build-up and reconfiguration in order to keep up-to-date with the technology development. The following hardware resources, acquired in the first phase procurement, are currently available. For further details, please consult the eX3 wiki.

The KunPeng is part of the eX3 project in Norway, and eX3 has added 4 Akidas to the KunPeng to do the NN work.

Akida is processor agnostic even though KunPeng has an exotic operating system.

So we are not trading with the enemy, and will not incur the displeasure of the US trade regulators.

This confirms Akida's ability to work with exascale computers and, it follows, with cloud servers.
 
  • Like
  • Love
  • Fire
Reactions: 33 users

Diogenese

Top 20
https://www.msn.com/en-au/lifestyle...sedgntp&cvid=0a5e1908b3904695b62d089a354caaf1

Watch out AMD – Nvidia could boost GPU performance by up to 30% with AI

Story by Darren Allan • Yesterday 11:15 pm

Nvidia has plans to optimize its GeForce graphics drivers using artificial intelligence to ensure that games run faster, going by the latest from the GPU grapevine.

This comes from CapFrameX on Twitter (via VideoCardz), a known source of leaks and developer of a utility that deals in frame times capture and analysis.

The AI-powered optimizations to boost performance would vary in their effects from game to game, naturally, with the leaker asserting that the average improvement would be in the ballpark of 10%. However, some titles might see benefits of up to 30% in terms of faster frame rates.

Obviously, we need to be skeptical around this claim, and indeed CapFrameX notes that we should apply a grain of salt here (we’d go for a substantially greater quantity of grains than that, mind).

... and, of course, Akida can work with GPUs, but Nvidia does have its own NN patents.
 

Last edited:
  • Like
  • Fire
  • Love
Reactions: 16 users

cassip

Regular
Does anybody know more about Sapeon? It was mentioned in @Fullmoonfevers post #11145 in connection with SK Hynix. Career website not available in English (?)

It states:
„AI everywhere
SAPEON everywhere“


 
  • Like
Reactions: 4 users
Does anybody know more about Sapeon? It was mentioned in @Fullmoonfevers post #11145 in connection with SK Hynix. ...

chrome_screenshot_1673279672729.png
 
  • Like
  • Fire
  • Thinking
Reactions: 11 users
T

Thank you Rise 😊
No worries cassip,
Any particular job listings you'd like me to translate, or have you got it sorted?
 
  • Like
Reactions: 2 users

cassip

Regular
Nothing else to translate atm. I will get back to this on occasion 😉
Cheers
 
  • Like
  • Fire
Reactions: 2 users

stuart888

Regular
" It should be easy to evaluate algorithmic performance using local data and retrain on-site if need be.
...
In technical terms, what we’re describing is called a machine learning operations (MLOps) platform. Platforms in other fields, such as Snowflake, have shown the power of this approach and how it works in practice."


This patent, which claims "federated learning", is based on a priority claim going back to PvdM's 2008 application:

US10410117B2 Method and a system for creating dynamic neural function libraries

[0001] This application is a continuation-in-part of U.S. patent application Ser. No. 13/461,800, filed on May 2, 2012, which is a continuation-in-part of U.S. patent application Ser. No. 12/234,697, filed on Sep. 21, 2008, now U.S. Pat. No. 8,250,011, the disclosures of each of which are hereby incorporated by reference in their entirety.


View attachment 26604


[0073] FIG. 11, labeled “Method of Reading and Writing Dynamic Neuron Training Models”, represents a preferred embodiment of the function model library creation and uploading method. The communication module reads registers and provides an access means to an external computer system. The communication module is typically a microcontroller or microprocessor or equivalent programmable device. Its databus comprises a method of communicating with the hardware of the dynamic neuron array to receive or send data to binary registers.



Claim 1: A method of creating a reusable dynamic neural function library for use in artificial intelligence, the method comprising the steps of:

sending a plurality of input pulses in form of stimuli to a first artificial intelligent device, where the first artificial intelligent device includes a hardware network of reconfigurable artificial neurons and synapses;

learning at least one task or a function autonomously from the plurality of input pulses, by the first artificial intelligent device;

generating and storing a set of control values, representing one learned function, in synaptic registers of the first artificial intelligent device;

altering and updating the control values in synaptic registers, based on a time interval and an intensity of the plurality of input pulses for autonomous learning of the functions, thereby creating the function that stores sets of control values, at the first artificial intelligent device; and

transferring and storing the function in the reusable dynamic neural function library, together with other functions derived from a plurality of artificial intelligent devices, allowing a second artificial intelligent device to reuse one or more of the functions learned by the first artificial intelligent device.
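
To make claim 1 a little more concrete, here is a minimal Python sketch of the "reusable dynamic neural function library" idea as I read it: one device learns a function, its control values go into a shared library, and a second device loads and reuses them. This is purely my own illustration; the class and method names are hypothetical and are not BrainChip's implementation.

```python
import json

# Toy, hypothetical illustration of the "dynamic neural function library":
# learned parameters ("control values") from one device are stored under a
# function name and reused by a second device.

class NeuralFunctionLibrary:
    """Stores learned functions as named sets of control values."""

    def __init__(self):
        self._functions = {}

    def store(self, name, control_values):
        # Persist the control values learned by one device.
        self._functions[name] = list(control_values)

    def load(self, name):
        # A second device retrieves the learned function for reuse.
        return list(self._functions[name])

    def to_json(self):
        # The library could be shipped between devices as plain JSON.
        return json.dumps(self._functions)


class ToyDevice:
    """Stand-in for an 'artificial intelligent device' with synaptic registers."""

    def __init__(self):
        self.synaptic_registers = []

    def learn(self, stimuli):
        # Toy "learning": derive control values from the input pulse intensities.
        peak = max(stimuli)
        self.synaptic_registers = [round(x / peak, 3) for x in stimuli]

    def reuse(self, control_values):
        # Load control values learned elsewhere straight into the registers.
        self.synaptic_registers = list(control_values)


library = NeuralFunctionLibrary()

device_a = ToyDevice()
device_a.learn([2, 5, 9, 3])                       # device A learns from stimuli
library.store("keyword_spotting", device_a.synaptic_registers)

device_b = ToyDevice()
device_b.reuse(library.load("keyword_spotting"))   # device B reuses the function
print(device_b.synaptic_registers)                 # [0.222, 0.556, 1.0, 0.333]
```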
Well said @Diogenese, always informative and insightful. Winner! 👍

For those that want to watch a detailed video on AI/ML being used for Medical Life Sciences, this is a winner.

Daphne Koller (founder and CEO of Insitro, and co-founder of Coursera) goes over the process, from how they outlined biological use cases to how they use AI/ML to massively expedite the work, and backs it up with facts and details.

It will let you know that AI/ML is blasting off in all forms of biological health discovery. It is not FDA-approval, BrainChip or Akida focused; it is much more about using AI/ML to figure out chemistry and biology.

However, BrainChip's SNN Akida has tons of use cases all over the IoMT, the "Internet of Medical Things", which needs always-on, low-power smarts.

"The Internet of Medical Things (IoMT) is the collection of medical devices and applications that connect to healthcare IT systems through online computer networks. Medical devices equipped with Wi-Fi allow the machine-to-machine communication that is the basis of IoMT."

Scale AI hosts a lot of great AI thought leaders on its YouTube channel on an ongoing basis. These are not for those in a hurry!



"Modern medicine has provided effective tools to treat some of humanity’s most significant and burdensome diseases. At the same time, it is becoming consistently more challenging and more expensive to develop new therapeutics. The drug development process involves multiple steps, each of which requires a complex and protracted experiment that often fails. Insitro CEO and Founder Daphne Koller believe that, for many of these phases, machine learning models can help predict the outcome of the experiments and that those models, while inevitably imperfect, can outperform predictions based on traditional heuristics."
 
  • Like
  • Fire
  • Love
Reactions: 15 users

Townyj

Ermahgerd
  • Like
  • Fire
  • Love
Reactions: 21 users

equanimous

Norse clairvoyant shapeshifter goddess
 
  • Like
  • Fire
  • Love
Reactions: 41 users
Just touching on that lecture video again: at the 1 hour 30 minute mark, Nikunj from BrainChip talks about some use cases, such as -

“RF signalling, or maybe infrared, or we have Lidar coming in. We have customers who have reached out for all these use cases, we are working with them, it’ll come soon, you will see these integrations & products VERY SOON”

The Intellisense Systems Inc Data Acquisition page covers all three of RF signalling, infrared and lidar - see screenshots below
@uiux put in a lot of effort on the BrainChip + Intellisense Systems, Inc page here

[Screenshots of the Intellisense Systems Data Acquisition page attached]
 
  • Like
  • Fire
  • Love
Reactions: 30 users

Beebo

Regular
Is it lump or bump this time🤣
It may be lumpy guidance.
"We have use cases from the Gambit"!

I have the video below ready to run starting here, use cases from the Gambit.

In addition to Mr @chapman89 nailing it on Nasa and other specific parts, what about akida:

"Identifying Men and Women in Gaming Rooms"? Then he hints at John Deere or similar right after that.

"Sounds from Animals in sick detection"?

There are a myriad of clues sprinkled in this gem, lots in the Q&A at the end. All of which just add another layer of my personal confidence.

This starts at Use Cases from Gambit.



View attachment 26542

listen to the portion starting from 1:28:45 sec and tell me what you hear Todd is saying about Apple and Qualcomm…help me interpret his message…are they integrating our Akida engine?
 
  • Like
  • Fire
  • Love
Reactions: 10 users

alwaysgreen

Top 20
It may be lumpy guidance.

listen to the portion starting from 1:28:45 sec and tell me what you hear Todd is saying about Apple and Qualcomm…help me interpret his message…are they integrating our Akida engine?
I took that as Qualcomm and Apple trying to shrink chips from 28nm to a lower node, but if Akida is integrated at 28nm, there are greater performance increases than going to a lower node without Akida. 🤷‍♂️
 
  • Like
  • Fire
Reactions: 18 users

Foxdog

Regular

So this article has made me think again about the potential behemoth that BRN could become. Let's say that the only use case for AKIDA was agricultural applications - weeding, fertilising, recognising sick animals etc. If adopted worldwide (and why wouldn't it be if it's the best and cheapest option out there) then this one industry alone could turn BRN into a multi billion dollar company.

Knowing what we do about possible penetration into multiple and diverse industries, plus our up to 3 year lead on competing technologies, how big can BRN actually get and what does the SP look like when this is a mature company, say 5 to 10 years from now?
 
  • Like
  • Fire
  • Love
Reactions: 42 users
The Intellisense Systems Inc Data Acquisition page covers all three of RF signalling, infrared and lidar ...
@uiux put in a lot of effort on the BrainChip + Intellisense Systems, Inc page here
This is an interesting article from 2012. It seems clear that all of the issues raised in the last paragraph, and spoken of by @Diogenese, did not allow analogue to reign; as we know, a new SCNN technology has since been embraced and is providing NASA with long-needed solutions:

“NASA technologists test ‘game-changing’ data-processing technology

Back to the future?

November 29, 2012​


Card containing the analog-based data-processing integrated circuit. The card snaps into the digital test board and will be used to test a number of spaceflight processing applications. (Credit: NASA/Goddard/Pat Izzo)
NASA technologist Jonathan Pellish believes the analog computing technology of yesteryear could potentially revolutionize everything from autonomous rendezvous and docking to remotely correcting wavefront errors on large, deployable space telescope mirrors like those to fly on the James Webb Space Telescope.
Pellish is meeting with scientists and engineers to explain the technology’s capabilities and is building printed circuit boards that researchers can use to test the technology’s performance for a range of scientific applications.
Pellish works at NASA’s Goddard Space Flight Center in Greenbelt, Md. He also has carried out preliminary radiation-effects studies to see how the analog technology’s architecture holds up under the extreme environment encountered in space.
Analog-Based Microchip
The new technology is an analog-based microchip developed with significant support from the Defense Advanced Research Projects Agency (DARPA).
Instead of relying on tiny switches or transistors that turn on and off, producing streams of ones and zeroes that computing systems then translate into something meaningful to users, the company’s new microchip is more like a dimmer switch. It can accept inputs and calculate outputs that are between zero and one, directly representing probabilities, or levels of certainty.
“The technology is fundamentally different from standard digital-signal processing, recognizing values between zero and one to accomplish what would otherwise be cost prohibitive or impossible with traditional digital circuits,” Pellish said.
While digital systems use processors that step through calculations one at a time, in a serial fashion, the new processor uses electronic signals to represent probabilities rather than binary ones and zeros. It then effectively runs the calculations in parallel. Where it might take 500 transistors for a digital computer to calculate a probability, the new technology would take just a few. In other words, the microchip can perform a calculation more efficiently, with fewer circuits and less power than a digital processor — attributes important for space- and power-constrained spacecraft instruments, Pellish said.
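
To make the "values between zero and one" idea a bit more concrete, here is a toy Python comparison (my own sketch, not a description of the Lyric chip): combining two probabilities directly as values in [0, 1], versus approximating the same product by ANDing random bit-streams, which is one common way to picture probability-native hardware.

```python
import random

# Toy illustration (not the actual chip): a probability-native device can
# combine two certainties p and q in one step, while a bit-level approach
# can approximate the same product by ANDing long random bit-streams.

def direct_product(p, q):
    # "Analog" style: values between 0 and 1 are multiplied directly.
    return p * q

def bitstream_product(p, q, n_bits=100_000):
    # "Stochastic" style: represent p and q as random bit-streams and AND them;
    # the fraction of 1s in the result approximates p * q.
    hits = sum((random.random() < p) and (random.random() < q) for _ in range(n_bits))
    return hits / n_bits

p, q = 0.8, 0.6
print(direct_product(p, q))      # exactly 0.48
print(bitstream_product(p, q))   # roughly 0.48, with sampling noise
```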
Fast Fourier Transform
Because of its efficiency and inherent design, however, it’s especially ideal for computing fast Fourier transforms (FFTs), and more particularly the discrete Fourier transform, a ubiquitously used mathematical algorithm in digital-signal processing. Among other things, Fourier transforms decompose signals into their constituent frequencies and are used to generate and filter cell-phone and Wi-Fi transmissions as well as compress audio, image and video files so that they take up less bandwidth.
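
As a quick, self-contained illustration of the decomposition described above (a minimal NumPy sketch, nothing to do with the Lyric hardware itself):

```python
import numpy as np

# Build a 1-second test signal sampled at 1 kHz: a 50 Hz tone plus a 120 Hz tone.
fs = 1000                          # sampling rate in Hz
t = np.arange(fs) / fs
signal = np.sin(2 * np.pi * 50 * t) + 0.5 * np.sin(2 * np.pi * 120 * t)

# The discrete Fourier transform decomposes the signal into its frequencies.
spectrum = np.fft.rfft(signal)
freqs = np.fft.rfftfreq(len(signal), d=1 / fs)

# The two strongest bins sit exactly at the tones we put in.
peaks = freqs[np.argsort(np.abs(spectrum))[-2:]]
print(sorted(peaks))               # -> [50.0, 120.0]
```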
Among other products, Analog Devices Lyric Labs has developed an analog-based integrated circuit geared specifically for computing Fourier transforms. The team will use the technology, which the company donated, to assemble several custom circuit boards.
One of the first applications the group plans to target with a version of the FFT integrated circuit is wavefront sensing and control, the computational technique for aligning multiple mirror segments, like those that are flying on the Webb telescope, so that they operate as a single mirror system.
In addition, Jeffrey Klenzing, who works with Goddard’s Space Weather Laboratory, wants to evaluate the technology’s use for on-board data processing, particularly for studies of the sun. “For a typical sounding rocket application, we send all data down and perform fast Fourier transforms on the ground. However, for satellite missions, this is not feasible given limited telemetry,” Klenzing said. “A chip for performing rapid, reliable FFTs would be very useful for such heliophysics missions particularly with the push toward smaller, low-power satellites such as CubeSats and nanosats.”
Pellish also believes autonomous rendezvous and docking and other applications requiring precise locational information would benefit from the analog-based technology. “We’re trying to create a new market at NASA for analog processing. I believe it will give us a competitive edge. If we can push this, it could revolutionize how we do onboard data processing.”
It will be interesting to see how far analog-based technology can go in terms of computational density. “Dense analog circuits … are sensitive to fabrication process variations, ambient temperatures and noisy environments, making it difficult to configure circuits that operate reliably under a wide range of external parameters,” as Dharmendra S Modha et al. point out in a paper on developing a neurosynaptic core for a scalable neuromorphic architecture capable of emulating spiking neural networks.”

My opinion only DYOR
FF

AKIDA BALLISTA
 
  • Like
  • Fire
  • Love
Reactions: 17 users