BRN Discussion Ongoing

I agree but remember the OECD has recently predicted that once again Australia will be the miracle economy and grow at more than 4% for the rest of this year.

When my son was in primary school he came home one day and asked his mother, “When are you and Dad getting divorced?” She said, “We are not.” He said, “All my friends’ parents are.”

His logic was flawless but things are much more complex underneath than they appear from the outside.

The ASX will react to what the US does because that is what it does.

The US reacts because their economy is not performing well.

Australia’s economy, according to the OECD, is a miracle, but just like my son, Aussie investors look at America from the outside and draw a false conclusion about Australia.

The fact that this occurs is what traders and institutions rely upon: safe volatility.

No real concern about fundamentals is involved, so even if we get stuck in a trade we are at no risk of losing the lot.

Hence they cultivate the idea that when the US sneezes, Australia catches a cold. It is massive institutionalised manipulation, and Australia falls for it every single time.

I suspect that if Australia became wealthier than the US they would still peddle the same lie, and out of habit Australians would follow.

My opinion only DYOR
FF

AKIDA BALLISTA
Ask yourself: when was the last time you felt any sense of concern because the Japanese market took a fall? I never have, if I am completely honest. I never give it a thought, really.

Then ask yourself WHY, having regard to the following comparison of the importance of Japan versus the US to the Australian economy:

[attached chart comparing the importance of Japan and the US to the Australian economy]
 
  • Like
  • Fire
Reactions: 8 users

Slade

Top 20
  • Haha
  • Like
Reactions: 5 users
Posted this, then worked out that @uiux had already done so far more comprehensively back in February.

I’m more of a simple dot-joiner, as I do not have a technical background.

I’ll leave my post here too in case others missed the one from Uiux like I did. Credit goes to Uiux

Link here to @uiux’s post

[attached screenshots]
 
Last edited:
  • Like
  • Fire
  • Love
Reactions: 30 users

Deleted member 118

Guest
  • Like
  • Haha
Reactions: 4 users

chapman89

Founding Member
  • Like
  • Fire
  • Thinking
Reactions: 22 users

Build-it

Regular
I don't mean to throw a turd in the punchbowl, but watch out for the US markets this coming week. They WANT to keep falling. They NEED to keep falling. With a few of the big boys reporting next week, even good results from them could be punished.

The U.S. Fed is running the country's equities market show for a while longer. I think the best that Brainchip could do in this environment is a "meh" market reaction. Not too hot and not too cold. Just give us some incremental improvement, hopefully, with management reporting on continued customer engagement progress and an overall thumbs-up report.

And then let the necessary passage of time put the worldwide macro events impacting markets everywhere right now (Ukraine, rising interest rates in the U.S., inflation, at least in the U.S., worker shortages, etc.) behind us, and pray that no significant virus variants pop up to shut things down again.

Brainchip is still in the early stages, after all. I will always want them to under-promise and over-deliver.

Sorry everyone, but that's just my opinion on the coming week ahead. Regards, dippY

Listening to Cramer, it always seems like I have to listen to his clips twice to understand what he is on about.

He certainly has an interesting way of presenting, but I suppose that can be the US way.

 
  • Like
Reactions: 3 users

Terroni2105

Founding Member
Further confirmation from Intel that they are way behind Akida.



Intel Labs lead Rich Uhlig offered two possibilities: integrating Loihi in a CPU for PCs to perform energy-efficient AI tasks and potentially offering its neuromorphic chips as a cloud service, although Uhlig was clear he wasn't firming actual product plans, just projecting what could theoretically happen in the future.

"Right now with Loihi, we're at that point where we think we're onto something, but we don't actually have product plans yet. We're sort of earlier on in that work stream," he said last month.


 
  • Like
  • Haha
  • Love
Reactions: 28 users

Deleted member 118

Guest
(Quoting the post above: "Further confirmation from Intel that they are way behind Akida.")

If anyone from Intel is reading this

 
  • Haha
  • Like
Reactions: 23 users
  • Like
  • Fire
Reactions: 11 users
Latest vacancy I just saw while surfing.

They are looking for a junior hire to get up to speed on Akida 1.0 but also to assist the research team in the development of Akida 2.0 :)




BrainChip, Inc.
Laguna Hills, CA
  • Posted: 1 day ago
  • Full-Time
Job Description

Come join the company leading the technological revolution in artificial intelligence. BrainChip is a global technology company producing a groundbreaking neuromorphic processor that brings artificial intelligence to the edge in a way that is beyond the capabilities of other products.
We are the world’s first commercial producer of ultra-low-power and high-performance artificial intelligence technology processors that enables a wide array of applications such as self-driving cars, hearing aids, drones, and agricultural equipment. The event-based neural network processor is inspired by the spiking nature of the human brain and is implemented in an industry-standard digital process.
Our company was recognized as one of the “Startups Worth Watching in 2021” in EE Times’ annual Silicon 100 list of global semiconductor technologies and our founder was named the winner of the AI Hardware 2021 Innovator Award. We have offices in Laguna Hills, California; Toulouse, France; Hyderabad, India; and Perth, Australia. We are also publicly traded on the Australian Stock Exchange (BRN:ASX) and the OTC Market (BRCHF).
Job Title: Junior Machine Learning Engineer
Reports To: Manager of Applied Research
Department: R&D

SUMMARY:
The Junior Machine Learning Engineer’s primary role is to support the research team in developing commercial applications for BrainChip’s Akida Neuromorphic System-on-Chip (NSoC). Some of the target applications include work in computer vision (object classification/detection and face recognition), audio processing (keyword spotting), and sensor fusion. Additionally, this team member will support the research team’s algorithm development for the next version of the Akida NSoC.
ESSENTIAL JOB DUTIES AND RESPONSIBILITIES:
  • Quickly prototyping, training, and testing machine learning (ML) solutions using online code repositories, research publications, or customer specifications
  • Keeping abreast of developments in ML such as new development tools, libraries, and frameworks as well as new ML models/architectures, training techniques, and application pipelines
  • Participating in ML algorithm/hardware co-design tasks
  • Performing, documenting, and presenting detailed analyses related to ML algorithm development, software/hardware benchmarking, and application development
  • Obtaining a thorough understanding of the Akida 1.0 hardware device and associated software stack (MetaTF)
  • Interfacing with customers to discuss ML application goals, constraints, and opportunities
QUALIFICATIONS: To perform this job successfully, an individual must be able to perform each essential duty satisfactorily. The requirements listed below are representative of the knowledge, skill, and ability required. Reasonable accommodations may be made to enable individuals with disabilities to perform the essential functions.
Education/Experience:
  • Bachelor’s Degree in Computer Science or equivalent.
  • Course work in machine learning and computer vision
  • Strong programming skills in Python
  • One year experience developing ML applications in either TensorFlow/Keras and/or PyTorch
  • Excellent communication skills
  • Experience in one or more of the following application fields: Image Processing/Computer Vision, ADAS, Anomaly Detection, Audio/Speech Processing, or Sensor Fusion.
Preferred Qualifications:
  • Two plus years’ experience developing ML applications in TensorFlow/Keras and PyTorch
  • Multi-project experience in object classification, object detection, face recognition, and/or keyword spotting
  • Knowledge of deep learning quantization techniques
  • Experience with Docker and Git
  • Experience with Scrum/Agile software development (e.g. Jira)
Benefits Offered:
· Competitive Pay
· Restricted Stock Units
· Bonus Pay up to 10% of annual salary
· 401K with matching
· Free Lunch Daily
· Flexible Work Schedule
· Paid Time Off
· Holiday Pay
· Company-paid Medical HMO, Dental PPO, and Vision Insurance
· Company-paid Life Insurance and AD&D
· Employee Assistance Program, Caregiver Support, Adoption Assistance Program
· Flexible Spending Account
· Health Savings Account
· Commuter Benefit Program
· Employee Discounts

BrainChip is an equal opportunity employer. All aspects of employment including the decision to hire, promote, discipline, or discharge, will be based on merit, competence, performance, and business needs. We do not discriminate on the basis of race, color, religion, marital status, age, national origin, ancestry, physical or mental disability, medical condition, pregnancy, genetic information, gender, sexual orientation, gender identity or expression, veteran status, or any other status protected under federal, state, or local law.

BrainChip, Inc.​


Address​

Laguna Hills, CA
92653 USA

Industry​

Technology
 
  • Like
  • Fire
  • Love
Reactions: 43 users

Shadow59

Regular
(Quoting the BrainChip “Junior Machine Learning Engineer” vacancy posted above.)

I would love to see Nvidia advertising for system engineers with knowledge of Akida.
That would make me very happy.;)
 
  • Like
  • Haha
  • Love
Reactions: 22 users
(Quoting the BrainChip “Junior Machine Learning Engineer” vacancy posted above.)

Another great pick up and highly significant. Thanks FMF.

My opinion only DYOR
FF

AKIDA BALLISTA
 
  • Like
Reactions: 11 users
The following is for the visionaries who have stayed the course. Written in 2017, it makes clear how absolutely amazing it is that Brainchip is where it is today:

Neuromorphic Chips Are Destined for Deep Learning—or Obscurity​

Researchers in this specialized field have hitched their wagon to deep learning’s star​

LEE GOMES
29 MAY 2017
ILLUSTRATION: CHAD HAGEN
People in the tech world talk of a technology “crossing the chasm" by making the leap from early adopters to the mass market. A case study in chasm crossing is now unfolding in neuromorphic computing.
The approach mimics the way neurons are connected and communicate in the human brain, and enthusiasts say neuromorphic chips can run on much less power than traditional CPUs. The problem, though, is proving that neuromorphics can move from research labs to commercial applications. The field's leading researchers spoke frankly about that challenge at the Neuro Inspired Computational Elements Workshop, held in March at the IBM research facility at Almaden, Calif.
“There currently is a lot of hype about neuromorphic computing," said Steve Furber, the researcher at the University of Manchester, in England, who heads the SpiNNaker project, a major neuromorphics effort. “It's true that neuromorphic systems exist, and you can get one and use one. But all of them have fairly small user bases, in universities or industrial research groups. All require fairly specialized knowledge. And there is currently no compelling demonstration of a high-volume application where neuromorphic outperforms the alternative."
Other attendees gave their own candid analyses. Another prominent researcher, Chris Eliasmith of the University of Waterloo, in Ontario, Canada, said the field needs to meet the hype issue “head-on." Given that neuromorphics has generated a great deal of excitement, Eliasmith doesn't want to “fritter it away on toy problems": A typical neuromorphic demonstration these days will show a system running a relatively simple artificial intelligence application. Rudimentary robots with neuromorphic chips have navigated down a Colorado mountain trail and rolled over squares of a specific color placed in a pattern on the floor. The real test is for traditional companies to accept neuromorphics as a mainstay platform for everyday engineering challenges, Eliasmith said, but there is “tons more to do" before that happens.
Illustration: James Provost. Tiny Spikes: Two layers within a neural network contain groups of “neurons" with similar functions, indicated by color [blue, yellow, orange, and pink] in the illustration on the left. In the graphic on the right, those neurons are mapped to spiking neurons in an IBM TrueNorth chip. The spiking neurons are connected by gridlike “synapses" to other neurons in the same core, and to a row of inputs. Those inputs can generate spikes, which are then processed by the neural network.
The basic building block of neuromorphic computing is what researchers call a spiking neuron, which plays a role analogous to what a logic gate does in traditional computing. In the central processing unit of your desktop, transistors are assembled into different types of logic gates—AND, OR, XOR, and the like—each of which evaluates two binary inputs. Then, based on those values and the gate's type, each gate outputs either a 1 or a 0 to the next logic gate in line. All of them work in precise synchronization to the drumbeat of the chip's master clock, mirroring the Boolean logic of the software it's running.
The spiking neuron is a different beast. Imagine a node sitting on a circuit and measuring whatever spikes—in the form of electrical pulses—are transmitted along the circuit. If a certain number of spikes occur within a certain period of time, the node is programmed to send along one or more new spikes of its own, the exact number depending on the design of the particular chip. Unlike the binary, 0-or-1 option of traditional CPUs, the responses to spikes can be weighted to a range of values, giving neuromorphics something of an analog flavor. The chips save on energy in large part because their neurons aren't constantly firing, as occurs with traditional silicon technology, but instead become activated only when they receive a spiking signal.
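
For readers who want the mechanics rather than the metaphor, here is a minimal sketch of the spiking neuron just described, written in plain Python purely for illustration. It is not TrueNorth, SpiNNaker or Akida code, and the threshold and window values are made-up assumptions.

```python
# Toy integrate-and-fire style neuron: it counts input spikes within a time
# window and emits a spike of its own once a threshold is crossed.

class SpikingNeuron:
    def __init__(self, threshold=3, window=5):
        self.threshold = threshold   # spikes needed within the window to fire
        self.window = window         # length of the time window (in ticks)
        self.history = []            # timestamps of recent input spikes

    def receive(self, t, spike):
        """Feed one input at time t; returns True if the neuron fires."""
        if spike:
            self.history.append(t)
        # forget spikes that fell out of the time window
        self.history = [s for s in self.history if t - s < self.window]
        if len(self.history) >= self.threshold:
            self.history.clear()     # reset after firing, a crude refractory period
            return True
        return False

neuron = SpikingNeuron(threshold=3, window=5)
inputs = [1, 0, 1, 1, 0, 0, 1]       # a sparse spike train
fired = [neuron.receive(t, s) for t, s in enumerate(inputs)]
print(fired)                         # fires only when enough spikes arrive close together
```

The point of the toy is simply that nothing happens, and no work is notionally done, unless spikes actually arrive, which is where the claimed energy savings come from.
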
“When we look at how neurons compute in the brain, there are concrete things we can learn"
A neuromorphic system connects these spiking neurons into complex networks, often according to a task-specific layout that programmers have worked out in advance. In a network designed for image recognition, for example, certain connections between neurons take on certain weights, and the way spikes travel between these neurons with their respective weights can be made to represent different objects. If one pattern of spikes appears at the output, programmers would know the image is of a cat; another pattern of spikes would indicate the image is of a chair.
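
Continuing the toy model above, the sketch below illustrates how weighted connections between spiking neurons can turn an input spike pattern into an output spike pattern that stands for a class. The weights here are random placeholders rather than a trained network, so it is only a shape-of-the-idea example, not any chip's design.

```python
import numpy as np

rng = np.random.default_rng(0)

n_inputs, n_outputs = 8, 2
weights = rng.uniform(-1, 1, size=(n_inputs, n_outputs))  # stand-in for learned weights
threshold = 1.5

def run(spike_train):
    """spike_train: array of shape (timesteps, n_inputs) of 0/1 spikes."""
    potential = np.zeros(n_outputs)       # membrane potential of each output neuron
    out_counts = np.zeros(n_outputs, int)
    for spikes in spike_train:
        potential += spikes @ weights     # each input spike adds its connection weight
        fired = potential >= threshold
        out_counts += fired               # record which output neurons spiked
        potential[fired] = 0.0            # reset neurons that fired
    return out_counts

pattern = rng.integers(0, 2, size=(20, n_inputs))   # a random input spike pattern
counts = run(pattern)
print(counts, "-> predicted class:", int(np.argmax(counts)))
```

In the article's terms, one output spike pattern would be read as "cat" and another as "chair"; here the output neuron with the most spikes plays that role.
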
Within neuromorphics, each research group has come up with its own design to make this possible. IBM's DARPA-funded TrueNorth neuromorphic chip, for example, does its spiking in custom hardware, while Furber's SpiNNaker (Spiking Neural Network Architecture) relies on software running on the ARM processors that he helped develop.
In the early days, there was no consensus on what neuromorphic systems would actually do, except to somehow be useful in brain research. In truth, spiking chips were something of a solution looking for a problem. Help, though, arrived unexpectedly from an entirely different part of the computing world.
Starting in the 1990s, artificial intelligence researchers made a number of theoretical advances involving the design of the “neural networks" that had been used for decades for computational problem solving, though with limited success. Emre Neftci, with the University of California, Irvine's Neuromorphic Machine Intelligence Lab, said that when combined with faster silicon chips, these new, improved neural networks allowed computers to make dramatic advances in classic computing problems, such as image recognition.
This new breed of computing tools used what's come to be called deep learning, and in the past few years, deep learning has basically taken over the computer industry. Members of the neuromorphics research community soon discovered that they could take a deep-learning network and run it on their new style of hardware. And they could take advantage of the technology's power efficiency: The TrueNorth chip, which is the size of a postage stamp and holds a million “neurons," is designed to use a tiny fraction of the power of a standard processor.
Those power savings, say neuromorphics boosters, will take deep learning to places it couldn't previously go, such as inside a mobile phone, and into the world's hottest technology market. Today, deep learning enables many of the most widely used mobile features, such as the speech recognition required when you ask Siri a question. But the actual processing occurs on giant servers in the cloud, for lack of sufficient computing horsepower on the device. With neuromorphics on board, say its supporters, everything could be computed locally.
Which means that neuromorphic computing has, to a considerable degree, hitched its wagon to deep learning's star. When IBM wanted to show off a killer app for its TrueNorth chip, it ran a deep neural network that classified images. Much of the neuromorphics community now defines success as being able to supply extremely power-efficient chips for deep learning, first for big server farms such as those run by Google, and later for mobile phones and other small, power-sensitive applications. The former is considered the easier engineering challenge, and neuromorphics optimists say commercial products for server farms could show up in as few as two years.
Unfortunately for neuromorphics, just about everyone else in the semiconductor industry—including big players like Intel and Nvidia—also wants in on the deep-learning market. And that market might turn out to be one of the rare cases in which the incumbents, rather than the innovators, have the strategic advantage. That's because deep learning, arguably the most advanced software on the planet, generally runs on extremely simple hardware.
“The neuromorphic approaches are interesting scientifically, but they are nowhere close on accuracy"
Karl Freund, an analyst with Moor Insights & Strategy who specializes in deep learning, said the key bit of computation involved in running a deep-learning system—known as matrix multiplication—can easily be handled with 16-bit and even 8-bit CPU components, as opposed to the 32- and 64-bit circuits of an advanced desktop processor. In fact, most deep-learning systems use traditional silicon, especially the graphics coprocessors found in the video cards best known for powering video games. Graphics coprocessors can have thousands of cores, all working in tandem, and the more cores there are, the more efficient the deep-learning network.
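
To make Freund's point concrete, here is a rough numerical sketch, assuming nothing about any particular vendor's hardware or kernels, showing that a deep-learning-style matrix multiply survives being done with 8-bit integers plus a couple of scale factors.

```python
import numpy as np

def quantize_int8(x):
    """Map a float array to int8 values with a single per-matrix scale factor."""
    scale = np.abs(x).max() / 127.0
    return np.round(x / scale).astype(np.int8), scale

rng = np.random.default_rng(1)
a = rng.standard_normal((64, 128)).astype(np.float32)   # e.g. activations
b = rng.standard_normal((128, 32)).astype(np.float32)   # e.g. weights

qa, sa = quantize_int8(a)
qb, sb = quantize_int8(b)

# accumulate in int32, then rescale back to float
approx = (qa.astype(np.int32) @ qb.astype(np.int32)) * (sa * sb)
exact = a @ b

rel_err = np.abs(approx - exact).mean() / np.abs(exact).mean()
print(f"mean relative error with int8 matmul: {rel_err:.3%}")  # small, typically around a percent
```

Real int8 schemes use per-channel scales and calibration data, but the takeaway is the same: the arithmetic at the heart of deep learning does not need wide floating-point units.
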
So chip companies are bringing out deep-learning chips that are made out of very simple, traditional components, optimized to use as little power as possible. (That's true of Google's Tensor Processing Unit, the chip the search company announced last year in connection with its own deep-learning efforts.) Put differently, neuromorphics' main competition as the platform of choice for deep learning is an advanced generation of what are essentially “vanilla" silicon chips.
Some companies on the vanilla side of this argument deny that neuromorphic systems have an edge in power efficiency. William J. Dally, a Stanford electrical engineering professor and chief scientist at Nvidia, said that the demonstrations performed with TrueNorth used a very early version of deep learning, one with much less accuracy than is possible with more recent systems. When accuracy is taken into account, he said, any energy advantage of neuromorphics disappears.
“People who do conventional neural networks get results and win the competitions," Dally said. “The neuromorphic approaches are interesting scientifically, but they are nowhere close on accuracy."
Indeed, researchers have yet to figure out simple ways to get neuromorphic systems to run the huge variety of deep-learning networks that have been developed on conventional chips. Brian Van Essen, at the Center for Applied Scientific Computing at the Lawrence Livermore National Laboratory, said his group has been able to get neural networks to run on TrueNorth but that the task of picking the right network and then successfully porting it over remains “a challenge." Other researchers say the most advanced deep-learning systems require more neurons, with more possible interconnections, than current neuromorphic technology can offer.
The neuromorphics community must tackle these problems with a small pool of talent. The March conference, the field's flagship event, attracted only a few hundred people; meetings associated with deep learning usually draw many thousands. IBM, which declined to comment for this article, said last fall that TrueNorth, which debuted in 2014, is now running experiments and applications for more than 130 users at more than 40 universities and research centers.
By contrast, there is hardly a Web company or university computer department on the planet that isn't doing something with deep learning on conventional chips. As a result, those conventional architectures have a robust suite of development tools, along with legions of engineers trained in their use— typical advantages of an incumbent technology with a large installed base. Getting the deep-learning community to switch to a new and unfamiliar way of doing things will prove extremely difficult unless neuromorphics can offer an unmistakable performance and power advantage.
Again, that's a problem the neuromorphics community openly acknowledges. If the presentations at the March conference frequently referred to the challenges that lie ahead for the field, most of them also offered suggestions on how to overcome them.
The University of Waterloo's Eliasmith, for example, said that neuromorphics must progress on a number of fronts. One of them is building more-robust hardware, with more neurons and interconnections, to handle more-advanced deep-learning systems. Also needed, he said, are theoretical insights about the inherent strengths and weaknesses of neuromorphic systems, to better know how to use them most productively. To be sure, he still believes the technology can live up to expectations. “We have been seeing regular improvements, so I'm encouraged," Eliasmith said.
Still, the neuromorphics community might find that its current symbiotic relationship with deep learning comes with its own hazards. For all the recent successes of deep learning, plenty of experts still question how much of an advance it will turn out to be.
Photo: University of Manchester. Building Blocks: The SpiNNaker project is constructing a machine with 50,000 of these specialized chips in hopes of creating a network of 1 billion “neurons."
Deep learning clearly delivers superior results in applications such as pattern recognition, in which one picture is matched to another picture, or for language translation. It remains to be seen how far the technique will take researchers toward the holy grail of “generalized intelligence," or the ability of a computer to have, like HAL 9000 in the film 2001: A Space Odyssey, the reasoning and language skills of a human. Deep-learning pioneer Yann LeCun compares AI research to driving in the fog. He says there is a chance that even armed with deep learning, AI might any day now crash into another brick wall.
That prospect caused some at the conference to suggest that neuromorphics researchers should persevere even if the technology doesn't deliver a home run for deep learning. Bruno Olshausen, director of the University of California, Berkeley's Redwood Center for Theoretical Neuroscience, said neuromorphic technology may, on its own, someday bring about AI results more sophisticated than anything deep learning ever could. “When we look at how neurons compute in the brain, there are concrete things we can learn," he said. “Let's try to build chips that do the same thing, and see what we can leverage out of them."
The SpiNNaker project's Furber echoed those sentiments when asked to predict when neuromorphics would be able to produce low-power components that could be used in mobile phones. His estimate was five years—but he said he was only 80 percent confident in that prediction. He added, however, that he was far more certain that neuromorphics would play an important role in studying the brain, just as early proponents thought it might.
However, there is a meta-issue hovering over the neuromorphics community: Researchers don't know whether the spiking behavior they are mimicking in the brain is central to the way the mind works, or merely one of its many accidental by-products. Indeed, the surest way to start an argument with a neuromorphics researcher is to suggest that we don't really know enough about how the brain works to have any business trying to copy it in silicon. The usual response you'll get is that while we certainly don't know everything, we clearly know enough to start.
It has often been noted that progress in aviation was made only after inventors stopped trying to copy the flapping wings of birds and instead discovered—and then harnessed—basic forces, such as thrust and lift. The knock against neuromorphic computing is that it's stuck at the level of mimicking flapping wings, an accusation the neuromorphics side obviously rejects. Depending on who is right, the field will either take flight and soar over the chasm, or drop into obscurity.
This article appears in the June 2017 print issue as “The Neuromorphic Chip's Make-or-Break Moment."
 
  • Like
  • Love
  • Fire
Reactions: 19 users
I would love to see Nvidia advertising for system engineers with knowledge of Akida.
That would make me very happy.;)
Nvidia have plans to recruit 3,000-plus additional engineers over the next 12 months, so you have over 3,000 chances to be made very happy. 😂🤣 FF
 
  • Like
  • Haha
  • Fire
Reactions: 10 users

Violin1

Regular
Hi crew. Haven't seen a lot from @Fredsnugget since he made it across here, but....
Hey @Fredsnugget - just want to call out to you and say we're all thinking about you and your mates tomorrow. You've all given so much to Australia and Australians. Have a wild but inner-peaceful day friend. Thank you.
 
  • Like
  • Love
  • Fire
Reactions: 27 users
Sorry, I should have added this excerpt from the General Meeting CEO and Chairman’s Address of 26 May 2021. I guess what I'm saying is that I wouldn't be at all surprised if, by the time 2024 comes around, the Mercedes vehicles coming off the production line were able to (as an example) learn the difference between a plastic bag blowing across the street and a pedestrian or a rock, thus bringing the vehicles to an even higher level of safety and sophistication than was demonstrated on the Mercedes Vision EQXX.


(Extract)
Today, if a plastic bag blows across the street, a car equipped with AI sees an object and hits the brakes, or worse, takes evasive action.

Our future networks will be able to learn the difference between a plastic bag, blowing across the street and a rock by their behaviour. They will learn from sequences of events and from observing behaviour. A rock does not get blown in the wind. A door has a handle that makes it open. Objects have not only a shape, but also behaviour and a location in space. Our brain understands and predicts the behaviour of objects and sounds. We aim to build that intelligence into future products so that we enable our clients not only to build intelligent products, but safe and beneficial …
This post touches upon something I have mentioned in passing before, and it is supported by history in the following extract from a 2019 Brainchip ASX announcement. Amongst other things, the announcement states that from May 2019 they were marketing the AKD1000 IP to customers, which as we know was the best part of 16 months before they held the silicon AKD1000 chip in their hands in October 2020. It was also 9 months before they sent the file to Socionext to finalise the design.

So given everything we have been told, it seems entirely reasonable to deduce that the AKD2000 IP has been available to select customers since late 2021, if not a little earlier, and most certainly since early 2022. Your suggestion that it could very well be Mercedes-Benz who have been given the privilege of playing with the AKD2000 IP makes a lot of sense:

Update for the June 2019 Quarter

• Introduction of AkidaTM Intellectual Property for Licensing to ASIC Suppliers
• Introduction of a Neural Network Converter for CNN to SNN translation
• Execution of Definitive Agreement with Socionext for Akida Development and Manufacturing
• Convertible Note issued, to raise US$2.85M
• Entitlement Offering Raised A$10.7M Sydney, Australia – 30 July 2019:

BrainChip Holdings Ltd (ASX: BRN), the leading AI Edge company, today provides the following update for the quarter ending 30 June 2019, to accompany the Company’s 4C lodged with the ASX.

The Company ended the March quarter with US$5.5M in cash. On 22 July 2019, the Company had US$12.5M in cash. Total cash outflows for the quarter were US$2.1M. For the September 2019 quarter forecast recurring operating expenses are US$2.2M, with a total cash outflow of US$3.1M including third party expenses associated with the Akida device development. Total cash receipts for the June quarter were US$73,000 (approximately AU$106,000). This amount does not include invoices issued in the quarter that will be paid at a future date. The Company continues to control expenses, primarily employee headcount, while completing the Akida development.

On 26 May 2019, the Company announced the availability of the Akida intellectual property for licensing by companies seeking to incorporate a low-power, small, flexible and accurate neural network into their proprietary Application Specific Integrated Circuits (ASIC).

On 11 June 2019, the Company announced the availability of a powerful neural network converter which enables users to easily convert existing convolutional neural networks (CNNs) to an Akida compatible event-based spiking neural network (SNN). The converter is integrated with the Akida Development Environment (ADE) to provide network conversion and simulation.

On 16 June 2019, the Company executed a Definitive Agreement with Socionext Americas to collaborate on the development of the Akida NSoC and manufacture the device. Wafers will be produced on a 28nm digital process at Taiwan Semiconductor Manufacturing Corporation (TSMC). Socionext, formerly known as the Fujitsu Semiconductor business, is a global leader in Application Specific Integrated Circuits (ASIC) products"
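
As an aside, for anyone wondering what the CNN-to-SNN converter mentioned above actually does conceptually, one common ingredient in such conversions is rate coding, where a continuous activation value becomes a spike train whose firing rate matches that value. The snippet below is a generic illustration of that idea only; it is not BrainChip's MetaTF/ADE converter and does not use its API.

```python
import numpy as np

def rate_code(activation, timesteps=100, rng=None):
    """Turn activations in [0, 1] into 0/1 spike trains of length `timesteps`."""
    rng = rng or np.random.default_rng(0)
    activation = np.clip(activation, 0.0, 1.0)
    # at each timestep a neuron spikes with probability equal to its activation
    return (rng.random((timesteps,) + activation.shape) < activation).astype(np.uint8)

acts = np.array([0.05, 0.5, 0.95])        # example CNN activations after scaling
spikes = rate_code(acts, timesteps=200)
print(spikes.mean(axis=0))                # observed firing rates approximate the activations
```

The higher a unit's activation, the more often it spikes, which is how a conventionally trained network's numbers can be carried on an event-based substrate.
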

My opinion and speculation based on the above so DYOR
FF

AKIDA BALLISTA
 
  • Like
  • Fire
  • Love
Reactions: 14 users

M_C

Founding Member
(Quoting the post above: "Further confirmation from Intel that they are way behind Akida.")

All is not as it seems with Intel imo; INTEL have more to lose than most when it comes to AKIDA.

Even DELL have publicly stated that they have no qualms about ditching INTEL for the next best thing......

There have been multiple interactions between BRN and INTEL on LinkedIn over an extended period of time, and more so recently.

Brainchip have jumped to Mike Davies' defence in the past with regard to criticism he's copped from BRN shareholders......NOT TO MENTION Mike Davies has publicly suggested people "talk to BrainChip" if they want a commercially available neuromorphic chip.......

IMO .........INTEL are a strong chance of doing business with brn.....pure speculation
 
Last edited:
  • Like
  • Love
  • Fire
Reactions: 28 users

Deleted member 118

Guest
(Quoting the post above.)

 
  • Haha
  • Like
  • Love
Reactions: 6 users
  • Like
  • Love
  • Fire
Reactions: 7 users
* Don’t get too excited by the below folks (like I did 🤣). Unfortunately I was a little off the mark with my post. Thought I’d nailed it. I’ll keep it here to refer back to as a premonition if BrainChip and Texas Instruments are linked officially in the future 😃


Hoping this is a bloody ripper. Somebody tell me I am correct.

Pretty sure I’ve got BrainChip directly linked to Texas Instruments

If so it is potentially colossal as they have circa 100,000 customers

The below links to one specific humidity sensor only; however, Texas Instruments could be fully on board as a customer if they were impressed.

@Fact Finder what do you reckon mate?? I’ve searched this forum and can only see Texas Instruments mentioned generically

Apologies if someone has made this link already and also if I have it wrong

[attached screenshots from the humidity sensor data sheet]


Link to data sheet

Amazing Texas Instruments facts:

[attached screenshots: Texas Instruments facts]



TI at a glance

Texas Instruments has been making progress possible for decades. We are a global semiconductor company that designs, manufactures, tests and sells analog and embedded processing chips. Our approximately 80,000 products help over 100,000 customers efficiently manage power, accurately sense and transmit data and provide the core control or processing in their designs, going into markets such as industrial, automotive, personal electronics, communications equipment and enterprise systems. Our passion to create a better world by making electronics more affordable through semiconductors is alive today as each generation of innovation builds upon the last to make our technology smaller, more efficient, more reliable and more affordable – opening new markets and making it possible for semiconductors to go into electronics everywhere. We think of this as Engineering Progress. It’s what we do and have been doing for decades.

Key facts​

  • Founded in 1930.
  • Headquartered in Dallas, Texas.
  • Publicly traded (Nasdaq: TXN).
  • Richard K. Templeton is chairman, president and CEO.
  • ~ 31,000 employees.
    • ~ 13,000 in the Americas
    • ~ 16,000 in Asia-Pacific
    • ~ 2,000 in Europe
  • 15 manufacturing sites worldwide, tens of billions of chips produced each year.
  • ~ 80,000 products for over 100,000 customers.
  • Industrial and automotive, the markets with the best opportunities for our products, made up 62% of our 2021 revenue.

Revenue in 2021​

[revenue chart]

Revenue by market​

key markets chart

Worldwide manufacturing sites​

  • 15 manufacturing sites worldwide, with 11 wafer fabs, seven assembly and test factories, and multiple bump and probe facilities
manufacturing site maps

Recognition and giving​

  • Recognized by WayUp as a Top 100 Internship Program
  • Recognized by Glassdoor as one of the Best Places to Work in 2022
 
Last edited:
  • Like
  • Love
  • Fire
Reactions: 50 users