BRN Discussion Ongoing

Is BrainChip's Akida technology the next Big Bang in AI?

In my opinion it already is, but what would I know? Everyone I read who has the credentials to know is screaming at the top of their lungs that power and heat have reached unsustainable levels in the technology world, and Akida solves both issues before it even starts to display its revolutionary, beast-like processing abilities: one-shot/several-shot/incremental learning, on-chip convolution, CNN-to-SNN conversion, and the capacity to operate completely self-contained without a cloud connection, all while remaining process- and sensor-agnostic:


Has There Been A Second AI Big Bang?

Calum Chace
Contributor
"The AI guy"
Oct 18, 2022, 05:30am EDT


Aleksa Gordic, an AI researcher with DeepMind

The first Big Bang in 2012

The Big Bang in artificial intelligence (AI) refers to the breakthrough in 2012, when a team of researchers led by Geoff Hinton managed to train an artificial neural network (known as a deep learning system) to win an image classification competition by a surprising margin. Prior to that, AI had performed some remarkable feats, but it had never made much money. Since 2012, AI has helped the big technology companies to generate enormous wealth, not least from advertising.


A second Big Bang in 2017?

Has there been a new Big Bang in AI since the arrival of Transformers in 2017? In episodes 5 and 6 of the London Futurist podcast, Aleksa Gordic explored this question and explained how today’s cutting-edge AI systems work. Aleksa is an AI researcher at DeepMind, and previously worked in Microsoft’s HoloLens team. Remarkably, his AI expertise is self-taught – so there is hope for all of us yet!

Transformers

Transformers are deep learning models which process inputs expressed in natural language and produce outputs like translations, or summaries of texts. Their arrival was announced in 2017 with the publication by Google researchers of a paper titled “Attention is All You Need”. The title referred to the fact that Transformers can “pay attention” simultaneously to a large corpus of text, whereas their predecessors, Recurrent Neural Networks, could only pay attention to the symbols on either side of the segment of text being processed.


Transformers work by splitting text into small units, called tokens, and mapping them onto high-dimensional spaces – often thousands of dimensions. We humans cannot envisage this. The space we inhabit is defined by three numbers – or four, if you include time – and we simply cannot imagine a space with thousands of dimensions. Researchers suggest that we shouldn’t even try.
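The tokenise-then-embed step described above can be sketched in a few lines. This is an illustrative toy, not any production tokenizer: the five-word vocabulary, the 8 dimensions, and the random vectors are all assumptions chosen so the shapes are easy to see.

```python
import numpy as np

# Toy version of a Transformer's first step: split text into tokens and map
# each token to a high-dimensional vector. Real models use learned subword
# tokenizers and thousands of dimensions; here, whole words and 8 dimensions.
rng = np.random.default_rng(0)
vocab = {"the": 0, "queen": 1, "wears": 2, "a": 3, "slipper": 4}
embedding = rng.normal(size=(len(vocab), 8))  # one 8-d vector per token

def embed(text):
    """Look up the embedding vector for each whitespace-separated token."""
    token_ids = [vocab[w] for w in text.lower().split()]
    return embedding[token_ids]  # shape: (num_tokens, 8)

vectors = embed("The queen wears a slipper")
print(vectors.shape)  # (5, 8)
```

In a real model those vectors are then refined by the attention layers; here they only serve to show that "mapping tokens onto thousands of dimensions" is just a table lookup into a large matrix.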

Dimensions and vectors

For Transformer models, words and tokens have dimensions. We might think of them as properties, or relationships. For instance, “man” is to “king” as “woman” is to “queen”. These concepts can be expressed as vectors, like arrows in three-dimensional space. The model will attribute a probability to a particular token being associated with a particular vector. For instance, a princess is more likely to be associated with the vector which denotes “wearing a slipper” than to the vector that denotes “wearing a dog”.
There are various ways in which machines can discover the relationships, or vectors, between tokens. In supervised learning, they are given enough labelled data to indicate all the relevant vectors. In self-supervised learning, they are not given labelled data, and they have to find the relationships on their own. This means the relationships they discover are not necessarily discoverable by humans. They are black boxes. Researchers are investigating how machines handle these dimensions, but it is not certain that the most powerful systems will ever be truly transparent.
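The “man is to king as woman is to queen” relationship can be demonstrated with simple vector arithmetic. A minimal sketch, assuming hand-crafted 3-dimensional vectors whose axes stand for properties like royalty and gender (real embeddings are learned, and far higher-dimensional):

```python
import numpy as np

# Hand-crafted vectors: axis 0 = royalty, axis 1 = gender, axis 2 = unused.
vecs = {
    "man":   np.array([0.0,  1.0, 0.0]),
    "woman": np.array([0.0, -1.0, 0.0]),
    "king":  np.array([1.0,  1.0, 0.0]),
    "queen": np.array([1.0, -1.0, 0.0]),
}

# "king" - "man" + "woman" should land closest to "queen".
target = vecs["king"] - vecs["man"] + vecs["woman"]
nearest = min((w for w in vecs if w != "king"),
              key=lambda w: np.linalg.norm(vecs[w] - target))
print(nearest)  # queen
```

The same arithmetic famously works in learned embeddings such as word2vec, which is where this analogy comes from.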

Parameters and synapses

The size of a Transformer model is normally measured by the number of parameters it has. A parameter is analogous to a synapse in a human brain, which is the point where the tendrils (axons and dendrites) of our neurons meet. The first Transformer models had a hundred million or so parameters, and now the largest ones have trillions. This is still smaller than the number of synapses in the human brain, and human neurons are far more complex and powerful creatures than artificial ones.
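To make “parameters” concrete, here is a back-of-envelope count for a toy fully connected network; the layer sizes are arbitrary assumptions. Every weight and every bias is one parameter, and the largest Transformers simply have vastly more of them:

```python
# (inputs, outputs) for each fully connected layer of a toy network.
layers = [(512, 1024), (1024, 1024), (1024, 512)]

# Each layer contributes inputs*outputs weights plus outputs biases.
params = sum(i * o + o for i, o in layers)
print(params)  # 2,099,712 - about 2 million parameters
```

Scale the same counting up through dozens of attention and feed-forward layers with dimensions in the tens of thousands and you reach the hundreds of billions of parameters of today's largest models.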


Not by text alone

A surprising discovery made a couple of years after the arrival of Transformers was that they are able to tokenise not just text, but images too. Google released the first vision Transformer in late 2020, and since then people around the world have marvelled at the output of DALL-E, Midjourney, and others.
The first of these image-generation models were Generative Adversarial Networks, or GANs. These were pairs of models, with one (the generator) creating imagery designed to fool the other into accepting it as original, and the second system (the discriminator) rejecting attempts which were not good enough. GANs have now been surpassed by Diffusion models, whose approach is to peel noise away from the desired signal. The first Diffusion model was actually described as long ago as 2015, but the paper was almost completely ignored. They were re-discovered in 2020.
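The “peel noise away” idea behind Diffusion models can be sketched as follows. This toy cheats by remembering the exact noise it added, whereas a real diffusion model learns to predict the noise at each step; the sine-wave signal and the step count are arbitrary assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)
signal = np.sin(np.linspace(0, 2 * np.pi, 50))  # stand-in for an image

# Forward process: corrupt the signal by adding noise step by step.
noisy, noises = signal.copy(), []
for _ in range(10):
    n = rng.normal(scale=0.3, size=signal.shape)
    noises.append(n)
    noisy = noisy + n

# Reverse process: peel the noise away again, one step at a time.
for n in reversed(noises):
    noisy = noisy - n

print(np.allclose(noisy, signal))  # True - the signal is recovered
```

Replacing the remembered noise with a network trained to *estimate* it, and starting the reverse process from pure noise, is essentially how these models generate new images.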

Energy gluttons

Transformers are gluttons for compute power and for energy, and this has led to concerns that they might represent a dead end for AI research. It is already hard for academic institutions to fund research into the latest models, and it was feared that even the tech giants might soon find them unaffordable. The human brain points to a way forward. It is not only larger than the latest Transformer models (at around 80 billion neurons, each with around 10,000 synapses, it is 1,000 times larger). It is also a far more efficient consumer of energy - mainly because we only need to activate a small portion of our synapses to make a given calculation, whereas AI systems activate all of their artificial neurons all of the time. Neuromorphic chips, which mimic the brain more closely than classic chips, may help.
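The sparsity argument above can be put in rough numbers. A back-of-envelope sketch, where both the unit count and the 2% firing fraction are illustrative assumptions rather than measured figures:

```python
# Dense networks touch every unit on every step; event-based (neuromorphic)
# designs only do work for the units that actually fire.
dense_units = 1_000_000
active_fraction = 0.02   # assume ~2% of units fire on a given step

dense_ops = dense_units                          # one op per unit, every step
sparse_ops = int(dense_units * active_fraction)  # ops only for active units
print(dense_ops // sparse_ops)  # 50x fewer operations
```

Since energy scales roughly with operations performed, activating 2% of the units yields on the order of a 50x saving, which is the intuition behind brain-inspired, event-driven hardware.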

Unsurprising surprises

Aleksa is frequently surprised by what the latest models are able to do, but this is not itself surprising. “If I wasn’t surprised, it would mean I could predict the future, which I can’t.” He derives pleasure from the fact that the research community is like a hive mind: you never know where the next idea will come from. The next big thing could come from a couple of students at a university, and a researcher called Ian Goodfellow famously created the first GAN by playing around at home after a brainstorming session over a couple of beers.
 

Iseki

Regular
Processing.org is an easy-to-learn programming language that runs on the Raspberry Pi. Not on a Nintendo Switch? Then again, the Nintendo Switch runs an Arm chip, so it could run Linux, so maybe....

DARPA may think so:


"The past few years have seen an explosion of interest in a sub-field of AI dubbed machine learning that applies statistical and probabilistic methods to large data sets to create generalized representations that can be applied to future samples. Foremost among these approaches are deep learning (artificial) neural networks that can be trained to perform a variety of classification and prediction tasks when adequate historical data is available. Therein lies the rub, however, as the task of collecting, labelling, and vetting data on which to train such “second wave” AI techniques is prohibitively costly and time-consuming.

DARPA envisions a future in which machines are more than just tools that execute human-programmed rules or generalize from human-curated data sets. Rather, the machines DARPA envisions will function more as colleagues than as tools. Towards this end, DARPA research and development in human-machine symbiosis sets a goal to partner with machines. Enabling computing systems in this manner is of critical importance because sensor, information, and communication systems generate data at rates beyond which humans can assimilate, understand, and act. Incorporating these technologies in military systems that collaborate with warfighters will facilitate better decisions in complex, time-critical, battlefield environments; enable a shared understanding of massive, incomplete, and contradictory information; and empower unmanned systems to perform critical missions safely and with high degrees of autonomy. DARPA is focusing its investments on a third wave of AI that brings forth machines that understand and reason in context."

Program Announcement for Artificial Intelligence Exploration (AIE)


ACTIVE
Contract Opportunity

Notice ID
DARPA-PA-22-02
Related Notice
Department/Ind. Agency
DEPT OF DEFENSE
Sub-tier
DEFENSE ADVANCED RESEARCH PROJECTS AGENCY (DARPA)
Office
DEF ADVANCED RESEARCH PROJECTS AGCY



General Information​

  • Contract Opportunity Type: Presolicitation (Original)
  • All Dates/Times are: (UTC-04:00) EASTERN STANDARD TIME, NEW YORK, USA
  • Original Published Date: Aug 19, 2022 02:41 pm EDT
  • Original Response Date: Aug 18, 2023 04:00 pm EDT
  • Inactive Policy: Manual
  • Original Inactive Date: Sep 17, 2023
  • Initiative:
    • None

Classification​

  • Original Set Aside:
  • Product Service Code: AC11 - National Defense R&D Services; Department of Defense - Military; Basic Research
  • NAICS Code:
    • 541715 - Research and Development in the Physical, Engineering, and Life Sciences (except Nanotechnology and Biotechnology)
  • Place of Performance:

Description​

The mission of the Defense Advanced Research Projects Agency (DARPA) is to make strategic, early investments in science and technology that will have long-term positive impact on our Nation’s security. In support of this mission, DARPA has pioneered groundbreaking research and development (R&D) in Artificial Intelligence (AI) for more than five decades. Today, DARPA continues to lead innovation in AI research through a large, diverse portfolio of fundamental and applied R&D AI programs aimed at shaping a future for AI technology where machines may serve as trusted and collaborative partners in solving problems of importance to national security. The AI Exploration (AIE) program is one key element of DARPA’s broader AI investment strategy that will help ensure the U.S. maintains a technological advantage in this critical area.
 

ndefries

Regular
No idea if the following is correct but if you have a subscription for the AFR you might like to confirm:

"from today’s AFR

’Company secretary Ben Cohen said AVZ would only respond further with its lawyers present due to unspecified allegations on social media that Financial Review reporter Tom Richardson is connected with a short selling group named Boatman Capital, which chose to publish mistruths.’


No opinion either way at this stage needing to Do More Research
FF

AKIDA BALLISTA
This is that article:

Battle over ‘world’s largest lithium deposit’ drags in China

Tom Richardson, markets reporter and commentator
Updated Oct 24, 2022 – 8.15am, first published at 5.00am
Suspended lithium play AVZ Minerals has conceded a Congo court ordered its payments to acquire an additional 15 per cent stake in the Manono Project be suspended, after it was questioned about the September 20 court ruling by The Australian Financial Review.
The ruling suggests AVZ and Congo’s Dathomir face more arbitration over the deal’s validity and sale price, while extending a list of disputes AVZ faces over its ownership rights to the Manono hard rock lithium deposit in the Congo.

AVZ Minerals CEO Nigel Ferguson is disputing claims that a Chinese company owns 15 per cent of AVZ’s lithium project in the Democratic Republic of the Congo. David Rowe
On Thursday last week, AVZ told the market only a properly constituted arbitration tribunal has jurisdiction to overturn an acquisition it says it executed with Dathomir in August 2021 for $US21 million.
The Perth-headquartered explorer rode the lithium boom and excitement over claims Manono is the world’s largest lithium deposit to reach a $2.7 billion sharemarket valuation last May, before shares were suspended over its mining licence’s approval by the Congo government.
Despite the ruling suspending AVZ’s payments for the 15 per cent stake, AVZ said it still had legal title to 75 per cent ownership of Manono.


In September 2021, AVZ announced a separate agreement with Chinese investor Suzhou CATH Energy Technologies to sell a 24 per cent equity interest in Manono for $US240 million, which by proxy valued the project at $US1 billion.
This compares to the $US21 million AVZ said it paid Dathomir for a 15 per cent stake just a month earlier, in August 2021, in a transaction the Congolese mining group has now had a court rule suspended.
On May 10, AVZ told investors its future Manono stake would fall from 75 per cent to 51 per cent after the deal to sell a 24 per cent stake to CATH completed, with the right to negotiate to buy another 15 per cent from Congo government’s Cominiere potentially taking its stake back to a final 66 per cent.
However, in May 2022 another Chinese mining group named Zijin Mining announced it had signed a separate legal deal with the Congo government’s Cominiere to acquire a 15 per cent stake in the Manono Project for $US33.4 million.
Fortune 500 company Zijin said the deal was struck in September 2021 and that the Commercial Court of Lubumbashi rejected AVZ’s attempt to have the deal thrown out in November 2021 and January 2022.
AVZ didn’t disclose the Zijin legal dispute or court rulings to the market until May 4, 2022, as shares raced to $1.30 in April 2022 on a $4.5 billion valuation. Its stock was suspended on May 9.

In May, Zijin also applied to the International Court of Arbitration (ICC) to force AVZ to recognise its claim to 15 per cent in the Manono project.
“Zijin Mining confidently looks forward to the ICC hearing in April 2023 and expects AVZ’s abuse of Zijin Mining’s 15 per cent stake in the Manono project as alleged will be addressed,” Zijin’s in-house legal counsel Sun Kuiyuan said on Thursday. “Zijin Mining is disappointed with AVZ’s lack of cooperation with the minority shareholders of the Manono project.
“Zijin is surprised that AVZ has not supported its participation in the Manono project, which brings substantial capital and expertise to the benefit of all stakeholders and the DRC.”

Zijin refused to comment further. It previously said the separate sale between Dathomir and AVZ had been legally terminated to mean AVZ only had legal ownership of 60 per cent of the project, with Zijin at 15 per cent, Dathomir at 15 per cent, and the Congo government under Cominiere at 10 per cent.
It also said if AVZ proceeded with the 24 per cent sale to CATH Technologies its stake would drop from 60 per cent to 36 per cent to mean AVZ “will no longer have an absolute controlling interest” in Manono.

AVZ refused to comment on questions around the September 20 court ruling and a separate December 2021 court document related to the Dathomir deal.
Company secretary Ben Cohen said AVZ would only respond further with its lawyers present due to unspecified allegations on social media that Financial Review reporter Tom Richardson is connected with a short selling group named Boatman Capital, which chose to publish mistruths.
London-based short-side research firm Boatman Capital has followed Zijin in claiming AVZ’s stake will fall from 60 per cent to 36 per cent if the deal to sell to CATH goes ahead, which would theoretically mean AVZ owned less of the project than Chinese interests.
On September 9, AVZ issued a statement attacking Boatman for publishing alleged mistruths around the Congo courtroom barneys and reiterated its position that it still owns 75 per cent of Manono.
In response, Boatman instructed lawyers Grosvenor Law to demand both the ASX and ASIC investigate AVZ for allegedly misleading the market and failing to follow its disclosure obligations.
Boatman has also submitted court documents and made unreported allegations to Australia’s regulators it said supported its statements.
 

Xray1

Regular

TECH

Regular
Hi... A week or so ago I emailed both Peter and Anil together.

I said to Anil that I'd personally like to meet him face-to-face, so to save me having to fly to San Francisco I politely planted a seed: it would be great for both Peter and Anil to attend next year's AGM together, assuring them that they would be very well received by the loyal Australian shareholder base.

I don't know if any shareholders have had the opportunity to meet with Anil, but I'd consider it a real privilege, as with Peter.

Whatever our 4C does or doesn't deliver this week, there's more to come before this year is out. Listen very carefully to Nikunj during his presentation the other day with Edge Impulse; he seemed to give a hint about something coming at one point, in my own opinion of course.

Love Brainchip x
 

cosors

👀
Another new branch on this ever-growing tree

ABOUT MVTEC



MVTec Software GmbH is a leading international manufacturer of software for machine vision used in all demanding areas of imaging, like the semiconductor industry, surface inspection, automatic optical inspection systems, quality control, metrology, medicine, or surveillance. In particular, software by MVTec enables new automation solutions for the Industrial Internet of Things by providing modern technologies like 3D vision, deep learning, and embedded vision. With its headquarters in Munich (Germany), locations in Boston, MA (USA) and Kunshan near Shanghai (China), as well as an established network of international distributors, MVTec is represented in more than 35 countries worldwide.
"Software from MVTec is not only used worldwide in industry, but also in research. For example, the Robonaut R2, NASA's first humanoid robotic astronaut to be deployed on the ISS, uses MVTec's image processing algorithms."
https://marjorie-wiki.de/wiki/MVTec_Software
 

Getupthere

Regular
 

Fox151

Regular
Where's Yak52?
 
Here you go, enjoy😀. It's for the EQXX. Hope it brings a smile to your face too. The WANCA's are going to be eating crow😉



Hope that link works.

Why is it so? (with apologies to Julius Sumner Miller and the Saturday Morning Science Show)

After almost 70 years of rejecting advertising, why is it so that this old technophobe has become so enthralled by an advertisement for an electric car and keeps replaying this video? :ROFLMAO:😂🤣🤡😂🤣:ROFLMAO:
 

Boab

I wish I could paint like Vincent
I just received the following unsolicited email from Tony Dawe. I can only assume he has had quite a few (possibly hundreds of) enquiries regarding today's announcement. I was pretty close with my guess, except it was Boardroom which had its lease expire.


Tony Dawe

2:26 PM (6 minutes ago)
to me
Hi Fact Finder

Our share registrar, Boardroom Limited, changed address and BrainChip’s Registered Office is located at Boardroom Limited’s address.

It’s as simple as that.

Regards


Tony Dawe
Investor Relations Manager
+61 (0)405 989 743
If they change offices every day for the next 365 days we would be over 1400% further in gains
 

Sirod69

bavarian girl ;-)
I don't even know why I'm posting this here; I think few of us will understand French.
Nevertheless, I like it! 🥰😘

Listen to France Inter 📻
On the occasion of the Mondial de l'Auto - Paris, the show Regards Croisés dedicates two episodes to the "car of the future" 🚘
Dive into Valeo's autonomous vehicle, with Antoine Lafay.

 

Bravo

If ARM was an arm, BRN would be its biceps💪!
Hey Brain Fam,

Check out this SoundHound announcement from 15 Sep 2022 which describes their new EdgeLite option. It states "With this fully-embedded voice solution companies can process data locally without cloud-related data privacy concerns. Developers have access to natural language commands with less memory and CPU impact, a bundled wake word, and the ability to instantly update commands".

SoundHound has a partnership with Mercedes and they also have a "multi-year agreement with Qualcomm Technologies Inc. to enable SoundHound’s advanced voice AI technology, consisting of its automatic speech recognition, natural language understanding, and text-to-speech conversion software with select Qualcomm Technologies’ Snapdragon® platforms."







 

Straw

Guest

A couple of comments on the video:
*impressive level of enthusiasm (Peter's first Akida AGM talk would give him a run for his money, though a great deal less awkward)
*students, just smile and don't look frightened
*I have traumatic memories of an unsupervised Van de Graaff generator (and pencil-sharpener generator). All-boys school.
*is it just me, or is that a confusing accent?
*is that the guy in the chocolate ad (or am I thinking of toothpaste?) TV has cooked my brain
 
So many people wondering who the heck I am 🤣
 
Cadbury chocolate:

 