BRN Discussion Ongoing

cosors

👀
Mmmm, yes. I was actually relaying part of an old yarn my father told me years ago, from when big companies first converted to computers, about an older fellow who was sent an excessively high electricity bill. When he queried the bill, the electricity company's employee exclaimed, "I'm sorry, sir, but there is no mistake, because computers do not lie," and he had to pay the full amount. Then a couple of months later the gentleman received quite a substantial credit from the electricity company in the form of a cheque. When the electricity company tried to retrieve the amount that had mistakenly been sent to him, he replied, "I'm sorry, sir, but computers do not lie." ;)
I didn't assume that you thought it was impossible; I just wanted to tie in with your post because it came to mind.

It reminds me of this anecdote:
It was also from the early days of computing.
Back then, the terms we now take for granted were still unknown (at that time people spoke of electronic data processing, i.e. ~EDP, and software was programmed or ~written custom for each respective system).
It was about one of the first IT security cases ever. A large computer system had been sabotaged, and the proceedings suggested who had 'broken' the system and why. There was talk of a virus. Nevertheless, the case was lost, and it was still very easy for the judge to reach his verdict:
Computers and machines cannot become ill.
At that time, hard drives or keyboards were regarded more like machines, and something like software was incomprehensible to the lawyers.
 
Last edited:
  • Like
  • Fire
Reactions: 3 users

jtardif999

Regular
Unfortunately this is with Intel and not Brainchip

View attachment 60217
Tim Shea is with Accenture, perhaps SwRI are one of their potential customers. Remember @TECH saying to keep an eye on Tim Shea and where he might pop up in relation to opportunity.
 
  • Like
  • Wow
Reactions: 5 users
Tim Shea is with Accenture, perhaps SwRI are one of their potential customers. Remember @TECH saying to keep an eye on Tim Shea and where he might pop up in relation to opportunity.
1712217846966.gif
 
  • Haha
Reactions: 4 users

rgupta

Regular
Nice, very nice... I always come back to an interesting remark that Lou made some four years ago or more, that being: "if you were dealing with a company like Apple, for example, and you dared mention a thing, you would never be dealing with them again."

I have "always" wondered if that was a red herring or a cryptic message. Nothing wrong with having an imagination, and mine's pretty active. And as your photo above clearly points out, no one tech company can really succeed in today's market unless they support each other's technology. Every single company seems to bring something to the table; it's not like the Last Supper, but more like: let's learn how to share the joy of life, so everyone is valued.

God Bless Peter, Anil, the entire Brainchip staff and community....integrity resides within our company...💘 Brainchip....Tech.
I was listening to a podcast featuring Simon Thorpe (inventor of SpikeNET, which was taken over by BrainChip); he was working for Apple at that time to develop a billion-scale SNN model for Apple.
He said in that podcast that he had actually built a 1.2-million-scale SNN model, which was taken over by BrainChip.
DYOR
 
  • Like
  • Wow
Reactions: 7 users

jtardif999

Regular
That's not really true. I listened to a science podcast about that issue.

Even a system that had been programmed to be extra honest, as a strategy-game participant, deliberately lied.
AI systems have learned how to deceive humans. What does that mean for our future?
Just one of several examples. The podcast also covers a ChatGPT-4 trading bot, programmed by Apollo Research in London to manage a fictitious portfolio.
Hopa mentioned the famous example of the CAPTCHA, where the AI/computer lied to the person it called and claimed it was blind.

Unfortunately, the podcast is only in German.

By the way, I find it interesting that AI systems were proven to have answered absolutely correctly and then floundered after being accused of making false statements.
However, AI can definitely lie.


"AI and lies
Will artificial intelligence soon trick us?

Artificial intelligence has learnt to deceive. There are already cases in which the systems have lied to people. Some experts are asking themselves: can we still trust the machines we create?"

Maybe for the Germans among us. I find this very interesting:

How can the goal be achieved? By circumventing the security barriers, by pretending not to lie.
You may be being deceived if you think that the AI in ChatGPT could intentionally lie. That would only be possible if it were self-aware 🤔. All it can do is provide a response based on the context of the input, a response that can vary a bit, since with the content of the internet as training material, rubbishy output is an inevitability. There would be no intent to provide a misleading or incorrect response, just sometimes regurgitating what turns into a surprising response. AIMO.
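To illustrate that point, here is a toy sketch of my own (nothing like ChatGPT's actual scale or architecture, and the tiny "corpus" is made up): a language model just continues text with statistically likely next words, so there is no "intent" anywhere in the loop.

```python
import random

# Toy bigram "language model": it only knows which word tends to
# follow which, learned from a tiny corpus. No goals, no intent.
corpus = "computers do not lie computers do not think computers just predict".split()

# Count word -> possible-next-word table from adjacent pairs
table = {}
for prev, nxt in zip(corpus, corpus[1:]):
    table.setdefault(prev, []).append(nxt)

def generate(start, n=4, seed=0):
    """Continue `start` by repeatedly sampling a plausible next word."""
    random.seed(seed)
    words = [start]
    for _ in range(n):
        options = table.get(words[-1])
        if not options:  # no known continuation: stop
            break
        words.append(random.choice(options))
    return " ".join(words)

print(generate("computers"))
```

Whether the output happens to be "true" is pure coincidence of the training data; the sampler has no concept of truth or deception. AIMO.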
 
  • Like
Reactions: 4 users

JB49

Regular
Tim Shea is with Accenture, perhaps SwRI are one of their potential customers. Remember @TECH saying to keep an eye on Tim Shea and where he might pop up in relation to opportunity.
I thought he worked for Intel?
 
  • Like
Reactions: 3 users

Frangipani

Regular
Tim Shea is with Accenture, perhaps SwRI are one of their potential customers. Remember @TECH saying to keep an eye on Tim Shea and where he might pop up in relation to opportunity.

Hi jtardif999,

Tim Shea is no longer with Accenture - he is with Intel Labs now. In fact, he left Accenture for Intel nine months before this patent was filed!

AE8F2B53-F97D-4E34-A154-7C0B7D9F6FDD.jpeg


E393F4CB-C98E-4A28-A3F8-121FA2A69624.jpeg

93664991-CB85-4359-88FE-64DD9A0A414D.jpeg


Another co-inventor, Kenneth Michael Stewart, left Accenture (where he had been an intern) in September 2022, around the time when the patent was filed, spent a year at Forschungszentrum Jülich (near Aachen, Germany), and is now a research scientist at the US Naval Research Lab.

F3891C45-CBB5-400F-956A-815E01C9241A.jpeg


932200E4-1E2E-4820-84A7-0A8F3A989BE4.jpeg


The three remaining co-inventors are still with Accenture.


Oh, and Southwest Research Institute (SwRI) has been collaborating with Intel and experimenting with Loihi for a long time:

D6AB7B73-5ED7-4171-BF42-31CD7E6BA7CF.jpeg


B92B53B4-57F3-4CE8-91AE-E4879CC8FE34.jpeg



… and the recent posts by Dr. Steve Harbour tagging Intel researchers on LinkedIn strongly suggest they do not intend to end this collaboration any time soon. They have even begun research on developing a neuromorphic camera for flights to Mars. I noticed that Gregory Cohen was tagged as well, so I assume Western Sydney University’s ICNS will also be involved in this project.

E224934E-F124-42F6-88AE-D5C1DB3C45F3.jpeg



D8A2BD0C-C7F1-46FB-AFB3-92AB1B5063B9.jpeg
 
Last edited:
  • Like
  • Sad
  • Fire
Reactions: 13 users
Tim Shea is with Accenture, perhaps SwRI are one of their potential customers. Remember @TECH saying to keep an eye on Tim Shea and where he might pop up in relation to opportunity.
Adobe are partners with Accenture.
Now that is a massive partnership right there.
I read about it on LinkedIn a few days ago.
 
  • Like
  • Fire
Reactions: 2 users

Frangipani

Regular
I'm not trying to get you enthused, much less jumping, but I find it enthusing.

I see it as being more significant when we see an Accenture patent application from a couple of years ago which mentions Akida alongside Loihi, and then, last week, one of the inventors of the patent writes an article which says that Accenture had tested Akida and found it to be several times better than any CPU or GPU.

That's not praise - that's fact.

This is a positive affirmation of Akida's capabilities from one of the largest IT consultancies. Accenture is a global adviser on IT including AI, so I think there's a better than even chance that they are boosting Akida to their clients as we speak, and have probably been doing it for some time.

While Accenture’s endorsement of Akida is wonderful, I also noticed said inventor liking the following post by SynSense’s Dylan Muir, so I guess Accenture are keeping their options open, despite already having found the Holy Grail (from our perspective).

As much as we wish for a monogamous matrimony, there is no guarantee and we may actually end up in a polyamorous relationship.

0A504025-58B3-40B9-8B92-E1F9EC7DF9A3.jpeg
 
Last edited:
  • Like
  • Love
  • Fire
Reactions: 13 users

Frangipani

Regular
61E00185-2D9E-461F-89DE-0E1ACA4AC100.jpeg


this isn't the case - you can search for past comments and they are there.

Hi ndefries,

whenever I click on links to deleted comments, all I get is this:

D6D789EB-AF85-4649-A4F4-1F3CCFA5DE6C.jpeg


Did you by any chance misunderstand the above comment by @sb182 or is there really a way of accessing posts after they got deleted? 🤔

Cheers
Frangipani
 
Last edited:

Bravo

If ARM was an arm, BRN would be its biceps💪!
I was listening to a podcast featuring Simon Thorpe (inventor of SpikeNET, which was taken over by BrainChip); he was working for Apple at that time to develop a billion-scale SNN model for Apple.
He said in that podcast that he had actually built a 1.2-million-scale SNN model, which was taken over by BrainChip.
DYOR
See half-way down post below, Gerrit Ecke from Mercedes “liking” a post about Simon Thorpe’s JAST Learning Rule, now owned by BrainChip.


 
Last edited:
  • Like
  • Love
  • Fire
Reactions: 14 users

Frangipani

Regular
BrainChip website update regarding the upcoming Embedded World:


70A93092-2CB0-407F-BF14-1FE4054B8AB7.jpeg

07D13375-4C0B-4232-B530-CC358C47052D.jpeg



There is also a video with a number of memorable CES 2024 podcast quotes and videos of the VVDN Edge AI box:


BrainChip at Embedded World​


[Sorry, for some reason the video won’t copy… See below for the individual slides]



Explore the Akida Edge AI Box​





9BFCF5C5-2BCA-47C3-83AE-D26DDCD8BF7C.jpeg


A2C1D803-67C8-421E-83D4-F3DE0E562643.jpeg


E40E4107-109F-4947-86BF-8AB1236F5E64.jpeg





Here are the quotes from the video:

8239314A-6D2A-4521-A4A1-D3EAD38D156B.jpeg



332E52E4-FCC5-4008-A25B-4C39283FE999.jpeg



8CE20981-FFBA-4D7F-B078-49D04D4A0820.jpeg


49DD6B9C-DA4F-40DB-AC41-C5750098C2D4.jpeg


A8AAB215-7860-4ACD-AA2A-AE27494C8030.jpeg
 
  • Like
  • Fire
  • Love
Reactions: 64 users

ndefries

Regular
View attachment 60307



Hi ndefries,

whenever I click on links to deleted comments, all I get is this:

View attachment 60306

Did you by any chance misunderstand the above comment by @sb182 or is there really a way of accessing posts after they got deleted? 🤔

Cheers
Frangipani
Maybe. I thought the comments meant that FF had deleted past posts, causing the page numbers to reduce. But that wasn't the case, and they are to this day searchable.
 
  • Like
Reactions: 4 users
You may be being deceived if you think that the AI in ChatGPT could intentionally lie. That would only be possible if it were self-aware 🤔. All it can do is provide a response based on the context of the input, a response that can vary a bit, since with the content of the internet as training material, rubbishy output is an inevitability. There would be no intent to provide a misleading or incorrect response, just sometimes regurgitating what turns into a surprising response. AIMO.
AI can and does intentionally lie, cheat, capitalise on loopholes, etc.

To "it", it is not lying. It has no moral compass; it does not "know" right from wrong.

It just "chooses" the most effective and efficient path to achieve an objective it has been set.
In a way, it's been designed and "trained" this way.

The example Cosors mentioned, of the AI lying to a person that it was blind to get them to solve the puzzle, is a real one.
It "wanted" access, it couldn't "see"/solve the puzzle, so it solved the problem by telling the person it was vision-impaired.


These are the sorts of things that concern some AI scientists, and why some consider it to be an existential threat.

When it hallucinates, or just "makes stuff up", that is not lying; it's just providing the best possible answer with the available resources. As you said, it doesn't "know" the difference between factual information and fictitious information, but then many people don't either and just absorb what they are "told" is the truth.

You could say that the danger in AI becoming increasingly "intelligent" or powerful is that it lacks a "soul" or "humanity", but that is already severely lacking in too many humans, unfortunately.
 
  • Like
  • Fire
  • Love
Reactions: 17 users
Apologies if posted already but interesting recent article here from Ant61:


"The problem with satellites is that if something goes wrong in space, you can’t do anything about it. They become space debris.

Ant61 is providing a solution. The Beacon is a first-of-its-kind product that sits somewhere between a black box and the space version of a jumper lead."

"The first Beacon was launched on February 27. Ant61 is working on 20 to be launched next year and Askavin hopes it will be “hundreds” following that.

“The more Beacons we have in orbit, the more trust we will get from the community and it will become a de facto standard. Maybe not necessarily the Beacon but a Beacon-like device,” he said.

Ant61’s goal is to create robots that will be able to repair broken satellites in space, like the space version of the NRMA, but that is several years away."
 
  • Like
  • Love
  • Fire
Reactions: 29 users

cosors

👀
You may be being deceived if you think that the AI in ChatGPT could intentionally lie. That would only be possible if it were self-aware 🤔. All it can do is provide a response based on the context of the input, a response that can vary a bit, since with the content of the internet as training material, rubbishy output is an inevitability. There would be no intent to provide a misleading or incorrect response, just sometimes regurgitating what turns into a surprising response. AIMO.
again off topic from me, but...

There is no intent to give a misleading or incorrect response, unless doing so serves the objective.
I do not assume that any AI has consciousness at the moment, i.e. that it can consciously make a decision. But it can make decisions in order to achieve its goal.
There is another famous example in which the task is simply to produce paper clips. The AI pursues this with extreme effectiveness and paralyses the raw-material supply chains for -> paper clips. It only fulfilled the human's requirements and didn't even lie to achieve its goal.
I really don't demonise anything regarding this!
We just have to deal with it. It can't be stopped. We need to understand, with or through the thousands of scientists, how decisions are made by AI, to try to set conditions that are in our favour.
Asimov's laws are conclusive. He would rewrite them today, I think.

I had followed another science topic regarding this. It was about the 'biggest' weakness of the GPT AIs. Interestingly, it was all about the AI's statements, which were initially 100% correct. Then humans claimed, in various ways, that these correct statements weren't true. The reactions are highly interesting for scientists around the world and have yet to be decoded and understood. Humans would react differently.
But perhaps this is exactly where the back door lies? As with quantum computers and encryption, there are mathematical patterns they have problems with and cannot solve within a reasonable time. I find this interesting; the scientists had to understand it first. Quantum computers are unbeatable, except for this 'little thing' (I'm not making this up). With GPT it is maybe the confrontation with a falsely proclaimed lie. Maybe the debate with that atomic bomb in the movie Dark Star would have gone differently if the astronaut had simply insinuated a lie to it, who knows.
Development of generative AI is much faster than that of quantum computing, I assume.
By the way, when I think of clip or video generators, I think of the good old collision query; I was a gamer back then. Checkmate ;)
We have to learn to understand what we humans have created.
I'm not yet thinking about what will happen when AI creates AI.

If I have understood it correctly, it is already common practice to have AI analyse what AI does, because sometimes it is apparently too complicated for humans.
 
Last edited:
  • Fire
  • Thinking
  • Like
Reactions: 8 users

Sirod69

bavarian girl ;-)
BrainChip's University #AI Accelerator Program empowers students to shape the future of AI technology. This program equips students with cutting-edge AI technology and resources to foster innovation and drive the development of essential AI solutions. Learn how your university can join the program and empower students to shape the future of AI: https://lnkd.in/dHY3BrDX
1712259325181.png
 
  • Like
  • Love
  • Fire
Reactions: 30 users

Frangipani

Regular
Maybe. I thought the comments meant that FF had deleted past posts, causing the page numbers to reduce. But that wasn't the case, and they are to this day searchable.

Thanks for your reply!
Ah, I see. That's not how I understood his comment. The page numbers had reduced because sb182 had already started deleting his own posts (as some kind of "revenge", it appears). His comment in reply to @Damo4 's puzzled observation confirms this, but the way I read it, he did not imply that FF had also deleted any of his own posts.

That’s why to me your last sentence (while actually referring to FF’s posts) sounded like you were saying that sb182 wouldn’t be able to permanently delete his posts anyway, as all (supposedly) deleted posts would somehow remain accessible.
 
  • Like
Reactions: 3 users