BRN Discussion Ongoing

The reference to First Data has value, though, when you consider the following:

"What happened to First Data?

The big deal is now complete: Fiserv announced this morning (July 29) that it has completed its acquisition of First Data Corporation. The two massive firms first inked the deal earlier this year, which will see Fiserv purchase First Data for $22 billion in an all-stock transaction.
29 July 2019

Fiserv-First Data Merger Is Complete - PYMNTS.com

https://www.pymnts.com/news/partnerships-acquisitions/2019/fiserv-first-data-merger-complete/
It strikes me that if a company as large as First Data (valued at US$22 billion in 2019) has carried out a survey of customers, and they prefer to keep their data on device, then now, as Fiserv, this is going to be influential when advising other companies.

My opinion only DYOR
FF

AKIDA BALLISTA
 
  • Like
  • Love
  • Wow
Reactions: 22 users

rgupta

Regular
Here is another thought.

In an early video, Alex the Rocket Scientist stated that he considers autonomous vehicles, drones, etc. to be nothing more than robots. The point being that the same technology that allows robots to function autonomously is what will make autonomous vehicles possible.

So if you accept this logic, how significant was the reveal by ANT61 that they were using AKIDA technology as the brain allowing their robot to act autonomously in space for the purpose of maintaining and repairing spacecraft?

Think about this idea. If ANT61 can do this using AKIDA technology in space, would not AKIDA be able to hit valet parking in parking stations for six, without the need for parking stations to install costly supporting infrastructure?

Would not AKIDA be able to undertake low-speed, hands-off-the-wheel driving in congested tunnels, or in remote locations with poor connection like my carport here in Sydney? 😂🤣😂

My opinion only DYOR
FF

AKIDA BALLISTA
The only difference here is that in space there is no oncoming traffic, no dependence on the behaviour of other drivers, no other road users, no traffic laws, no speed limits, no traffic lights, and a lot more that is absent.
The robot is programmed to find and repair a fault. Similarly, an autonomous car has a very easy job as long as other road users are non-existent.
DYOR
 
  • Like
Reactions: 5 users
The only difference here is that in space there is no oncoming traffic, no dependence on the behaviour of other drivers, no other road users, no traffic laws, no speed limits, no traffic lights, and a lot more that is absent.
The robot is programmed to find and repair a fault. Similarly, an autonomous car has a very easy job as long as other road users are non-existent.
DYOR
I think you need to read what they say their robot is capable of doing, which is learning and adapting to its environment.

Being powered by AKIDA, it is not trained in the way you have suggested.

Indeed, in every presentation made by Peter van der Made and Anil Mankar over many years now, they make the point over and over again that AKIDA is not trained.

While you suggest a static environment in which this robot is operating, this is hardly the situation it will encounter in space. If you could predict the precise nature of the repair required, you would not need the robot at all, because you would redesign that part of the space vehicle so that it would not require repair. The repairs to be carried out are not routine maintenance; they are the result of damage or unexpected failures. They need to be located, a repair method worked out, and then implemented in real time, under extreme conditions, while the spacecraft is travelling at thousands of kilometres per hour.

My opinion only DYOR
FF

AKIDA BALLISTA
 
  • Like
  • Fire
  • Love
Reactions: 21 users

Dang Son

Regular
This guy achieved a 10% gain in one month of programmed trading using a GPT bot. Over a period of eighteen months, that could be better than nothing, as we on the bus have seen. 💩🌭;)🤡🤖🥳
 
  • Wow
  • Like
  • Fire
Reactions: 4 users
This guy achieved a 10% gain in one month of programmed trading using a GPT bot. Over a period of eighteen months, that could be better than nothing, as we on the bus have seen. 💩🌭;)🤡🤖🥳

And this is why what I will call retail traders are a dying breed. Technology will out-trade human traders, and the bigger and better the computer being used, the faster and more efficient it will be, and the retail trader will disappear. The large institutions already have licensed direct access close to the ASX, and as AI fully integrates into their systems, retail traders have no hope. The only future for retail shareholders will be to do their research, back their judgement, and hold long, as retail once did. It will be back to the future for successful retail shareholders.

I expect that, as a result, there will be limited opportunities even for manipulators in the future. The AI-powered trading computers will not need their services and will, over time, act together as a giant cartel to maximise their profits, and no one will be any the wiser.

My opinion only DYOR
FF

AKIDA BALLISTA
 
  • Like
  • Thinking
  • Love
Reactions: 23 users

Townyj

Ermahgerd
This is a great find, @DaBenjamins, and I think you have exposed that a licence has been sold to Lassen Peak - https://www.lassenpeak.com/

I think it is for a handheld weapon detector. Lassen Peak seems to be a very well connected defence-industry company, and it may well be that the release of details was prevented by US law regarding homeland security.

In any event, Lassen Peak is the company, in my opinion, based on the fine print in the following. So now over to the 1000 Eyes.

View attachment 32605
Mike Cromartie

Business Development at Lassen Peak


About


Mike Cromartie has a long and varied career in business development. Mike began their career in 1998 as the Director of the Western Region for NEC Electronics, where they managed a team of 18 and was responsible for $350M in annual sales. In 2001, Mike moved to a Processor IP Company as the VP of WW Sales, where they increased license revenue from $1.4M to $4M. In 2003, Mike became the Silicon Valley Regional Sales Manager for Fujitsu, focusing on ASIC with Networking, Communication and Storage Verticals, as well as FPD with ELO Touch, DELL, ViewSonic, Phillips, IGT, and National Display Systems. In 2005, Mike joined Transmeta as the Director of Efficeon Programs, partnering with Microsoft on Business Development/Program Management activity on the FlexGo Emerging Markets Desk Top Computer. In 2007, Mike became the Director of Sales - Silicon Valley for Tensilica, responsible for Processor IP and Audio/Video Codec licensing to Major OEMs and Systems Companies in the Western US. In 2016, Mike moved to Sharp Microelectronics of the Americas as the Sr. Global Account Manager, responsible for sales and support of Imaging Products and Custom LCD Displays to Google, Facebook and Essential (Playground Global). In 2019, Mike became the Founder of Unival Sales and Service, a Business Development SoC Intellectual Property and Project Based Funding company, and also joined Zocial.io as the Business Development - Blockchain based Application Software. Finally, in 2020, Mike joined Lassen Peak as the Business Development.

Mike Cromartie attended California State University, Stanislaus, and obtained a Learning LinkedIn Recruiter certification from LinkedIn in March 2019.

My opinion only so DYOR
FF

AKIDA BALLISTA

Looks like he is working two jobs... Lassen Peak and Unival Sales and Service. Going by the dates, he has only been working with Brainchip/Akida since January this year.
 
  • Like
  • Fire
  • Love
Reactions: 15 users
Looks like he is working two jobs... Lassen Peak and Unival Sales and Service. Going by the dates, he has only been working with Brainchip/Akida since January this year.
My take is he discovered Brainchip while working for Lassen Peak and running his private consultancy. He discovered Brainchip because Lassen Peak are licensing AKIDA for their ultra-low-powered handheld weapon-detection device. In consequence of that relationship, he was able to convince Brainchip, or they approached him, to act as an agent for Brainchip IP via his private consultancy, and he commenced doing so from January 2023.

I could be wrong and this is why I am hanging out for @Diogenese to run his eye over the patent lodged by Lassen Peak. Fingers very tightly crossed.

My opinion only DYOR
FF

AKIDA BALLISTA
 
  • Like
  • Fire
  • Love
Reactions: 45 users
My take is he discovered Brainchip while working for Lassen Peak and running his private consultancy. He discovered Brainchip because Lassen Peak are licensing AKIDA for their ultra-low-powered handheld weapon-detection device. In consequence of that relationship, he was able to convince Brainchip, or they approached him, to act as an agent for Brainchip IP via his private consultancy, and he commenced doing so from January 2023.

I could be wrong and this is why I am hanging out for @Diogenese to run his eye over the patent lodged by Lassen Peak. Fingers very tightly crossed.

My opinion only DYOR
FF

AKIDA BALLISTA
The wording in their FAQ indeed has signs of Akidaitis...

How will AI be used in the product?
'AI will be used in the product to analyze the radar image and look for anomalies – things that do not appear to be part of the human body. Then, the AI will look at those objects in an attempt to assess the level of threat the object poses – what is the relative size of the object, what material does it appear to be, where is it on the person’s body, and what characteristics does the object have (sharp point, trigger, etc). The AI will essentially work much like our own human brains work as we look at things and classify them. Many different attributes are examined for every object, and AI improves over time as it “sees” more image data.'
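The attribute-based classification the FAQ describes (size, material, location on the body, sharp point, trigger) can be sketched as a toy scoring function. Everything below — the attribute names, weights and thresholds — is invented for illustration; Lassen Peak's actual AI model and features are not public.

```python
# Toy sketch of the attribute-based threat scoring described in the FAQ.
# All attribute names, weights and thresholds are invented for illustration;
# Lassen Peak's actual AI model and features are not public.
from dataclasses import dataclass

@dataclass
class DetectedObject:
    size_cm2: float        # apparent area of the anomaly
    material: str          # e.g. "metal", "plastic", "organic"
    body_location: str     # e.g. "waistband", "ankle", "chest"
    has_sharp_point: bool
    has_trigger: bool

def threat_score(obj: DetectedObject) -> float:
    """Combine object attributes into a 0..1 threat likelihood."""
    score = 0.0
    score += 0.3 if obj.material == "metal" else 0.1
    score += 0.2 if obj.body_location in ("waistband", "chest") else 0.05
    score += 0.2 if obj.has_sharp_point else 0.0
    score += 0.3 if obj.has_trigger else 0.0
    # Larger objects weigh more, capped so size alone cannot dominate.
    score += min(obj.size_cm2 / 500.0, 0.2)
    return min(score, 1.0)

suspect = DetectedObject(60.0, "metal", "waistband", False, True)
print(f"threat likelihood: {threat_score(suspect):.2f}")  # → threat likelihood: 0.92
```

A neural approach, which is what the FAQ's "works much like our own human brains" wording implies, would learn these weights from image data rather than hand-coding them.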
 

Attachments

  • Lassen Peak.jpg
    Lassen Peak.jpg
    309.8 KB · Views: 84
  • Like
  • Fire
  • Love
Reactions: 43 users

Townyj

Ermahgerd
My take is he discovered Brainchip while working for Lassen Peak and running his private consultancy. He discovered Brainchip because Lassen Peak are licensing AKIDA for their ultra-low-powered handheld weapon-detection device. In consequence of that relationship, he was able to convince Brainchip, or they approached him, to act as an agent for Brainchip IP via his private consultancy, and he commenced doing so from January 2023.

I could be wrong and this is why I am hanging out for @Diogenese to run his eye over the patent lodged by Lassen Peak. Fingers very tightly crossed.

My opinion only DYOR
FF

AKIDA BALLISTA

Fair enough :) That would be the best case scenario for sure.

Maybe... ummm our local "Verification Engineer" could get into contact with him and find out some details.. *cough *cough* @chapman89
 
  • Haha
  • Like
  • Fire
Reactions: 17 users

chapman89

Founding Member
Fair enough :) That would be the best case scenario for sure.

Maybe... ummm our local "Verification Engineer" could get into contact with him and find out some details.. *cough *cough* @chapman89
I’m waiting for him to accept my invite 😁
 
  • Haha
  • Like
  • Love
Reactions: 40 users
The wording in their FAQ indeed has signs of Akidaitis...

How will AI be used in the product?
'AI will be used in the product to analyze the radar image and look for anomalies – things that do not appear to be part of the human body. Then, the AI will look at those objects in an attempt to assess the level of threat the object poses – what is the relative size of the object, what material does it appear to be, where is it on the person’s body, and what characteristics does the object have (sharp point, trigger, etc). The AI will essentially work much like our own human brains work as we look at things and classify them. Many different attributes are examined for every object, and AI improves over time as it “sees” more image data.'
I have extracted the following from the patent which fits nicely with your FAQ @DollarsAndSense

"The RSOC consists of two major functions: 1) A transmitter that produces the radar signal and initiates the scan and 2) a receiver that receives the reflected signal and recovers differential phase and frequency information, and provides that information to the digital processing system.



The apparatus can include sufficient local storage and processing power for operating independent of a network.



At 203, in an embodiment, the analog signal from the scan is converted to a digital format using one or more analog-to-digital converters (ADCs) to create a digital image that can be forwarded to the processing complex of the apparatus. In an embodiment, the process of scanning and creating an image can be repeated a predetermined number of times (programmed into the apparatus or selected by the user) creating multiple digital images.



Upon completion of a search, at 211, post-session processing takes place. This processing can include all or some of the following: tagging images or videos with metadata, gathering and uploading metadata, generating a report, providing a digital signature or certificate, archiving, and uploading the data (both received and processed) and metadata. In this step, images can be cryptographically tagged with various metadata and transmitted and stored on the device, or can be uploaded for further processing. If a data repository is used (e.g., a cloud-based database or an online server), the images, videos, and metadata can be stored there. Examples of metadata can include (but are not limited to) time stamps, geolocation data, device data, customer specific information (user, associated visual images), networked or connected devices, voice recordings, and session information. In an embodiment, a web-based service can be implements using public cloud infrastructure and services such as those provided by (but not limited to) AWS, Azure, and GCP.



At 404, once the objects have been normalized, the resultant image is transferred to an AI engine for pattern matching against known threats and then calculating the likelihood that the input data is a threat. As part of the image processing, in an embodiment, the apparatus performs an image search to match detected shapes against a prebuilt local image threat library, or a mathematical model representing such images, and makes a threat determination using parameters such as shape type, size, type of weapon, confidence level, contrast, and other parameters. Entries in the threat library can include some or all of the following: guns, knives, bombs and bomb vests, clubs, truncheons, bottles, and other objects of interest. In an embodiment, once a preliminary determination has been made that a weapon is suspected, the apparatus will focus in on the suspected weapon(s) and providing better image resolution to improving the detection confidence. In an embodiment, privacy filtering processing is applied, thus ensuring all locally storage body images are obfuscated as part of the image processing described in FIG. 3."
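The flow in those extracts — repeat the scan and digitise via the ADC (step 203), pass the image to the AI engine for threat matching (step 404), then tag the results with metadata and a signature (step 211) — could be sketched very loosely as follows. Every function name and the toy "threat" rule is invented here; this shows only the shape of the pipeline, not anyone's implementation.

```python
# Loose sketch of the quoted flow: repeated scan and ADC conversion (203),
# AI-engine threat matching (404), and post-session metadata tagging (211).
# Every function, field name and the toy "threat" rule is invented here;
# none of it comes from the Lassen Peak patent itself.
import hashlib
import json
import time

def scan_to_digital(analog_samples, n_repeats=3):
    """Step 203: quantise analog returns; repeat the scan n_repeats times."""
    return [[round(s * 255) for s in analog_samples] for _ in range(n_repeats)]

def ai_engine(image):
    """Stand-in for step 404: here a "threat" is any return above a threshold."""
    return [i for i, px in enumerate(image) if px > 200]

def post_session(images, threats):
    """Step 211: tag results with metadata and a digital signature."""
    record = {
        "timestamp": time.time(),
        "n_images": len(images),
        "threat_indices": threats,
    }
    payload = json.dumps(record, sort_keys=True).encode()
    record["signature"] = hashlib.sha256(payload).hexdigest()
    return record

images = scan_to_digital([0.1, 0.85, 0.95, 0.2])
threats = ai_engine(images[0])
report = post_session(images, threats)
print(report["threat_indices"])  # → [1, 2]
```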


The diagrams, which for some reason I cannot persuade to copy, show that the AI Engine at 404 is a completely discrete device from the two processors at 103 and 104.

These coincidences are just too convenient in my opinion, and I personally am satisfied that they are using AKIDA as the AI Engine, unless @Diogenese finds something in the patent to tear my conclusion asunder.

My opinion only DYOR
FF

AKIDA BALLISTA
 
  • Like
  • Fire
  • Love
Reactions: 19 users
I’m waiting for him to accept my invite 😁
I think Brainchip may have alerted him to the risks associated with verification engineers.😂🤣😂
 
  • Haha
  • Like
Reactions: 19 users

Kachoo

Regular
I have extracted the following from the patent which fits nicely with your FAQ @DollarsAndSense

"The RSOC consists of two major functions: 1) A transmitter that produces the radar signal and initiates the scan and 2) a receiver that receives the reflected signal and recovers differential phase and frequency information, and provides that information to the digital processing system.



The apparatus can include sufficient local storage and processing power for operating independent of a network.



At 203, in an embodiment, the analog signal from the scan is converted to a digital format using one or more analog-to-digital converters (ADCs) to create a digital image that can be forwarded to the processing complex of the apparatus. In an embodiment, the process of scanning and creating an image can be repeated a predetermined number of times (programmed into the apparatus or selected by the user) creating multiple digital images.



Upon completion of a search, at 211, post-session processing takes place. This processing can include all or some of the following: tagging images or videos with metadata, gathering and uploading metadata, generating a report, providing a digital signature or certificate, archiving, and uploading the data (both received and processed) and metadata. In this step, images can be cryptographically tagged with various metadata and transmitted and stored on the device, or can be uploaded for further processing. If a data repository is used (e.g., a cloud-based database or an online server), the images, videos, and metadata can be stored there. Examples of metadata can include (but are not limited to) time stamps, geolocation data, device data, customer specific information (user, associated visual images), networked or connected devices, voice recordings, and session information. In an embodiment, a web-based service can be implements using public cloud infrastructure and services such as those provided by (but not limited to) AWS, Azure, and GCP.



At 404, once the objects have been normalized, the resultant image is transferred to an AI engine for pattern matching against known threats and then calculating the likelihood that the input data is a threat. As part of the image processing, in an embodiment, the apparatus performs an image search to match detected shapes against a prebuilt local image threat library, or a mathematical model representing such images, and makes a threat determination using parameters such as shape type, size, type of weapon, confidence level, contrast, and other parameters. Entries in the threat library can include some or all of the following: guns, knives, bombs and bomb vests, clubs, truncheons, bottles, and other objects of interest. In an embodiment, once a preliminary determination has been made that a weapon is suspected, the apparatus will focus in on the suspected weapon(s) and providing better image resolution to improving the detection confidence. In an embodiment, privacy filtering processing is applied, thus ensuring all locally storage body images are obfuscated as part of the image processing described in FIG. 3."


The diagrams, which for some reason I cannot persuade to copy, show that the AI Engine at 404 is a completely discrete device from the two processors at 103 and 104.

These coincidences are just too convenient in my opinion, and I personally am satisfied that they are using AKIDA as the AI Engine, unless @Diogenese finds something in the patent to tear my conclusion asunder.

My opinion only DYOR
FF

AKIDA BALLISTA
I wonder if you could also add in a facial-monitoring sensor that could assist in identifying the nervous nelly!

This has so many spinoffs.

Pattern recognition and probable outcomes can assist in many fields.

Let's hope the ogre is gentle.
 
  • Like
Reactions: 7 users

Boab

I wish I could paint like Vincent
I wonder if you could also add in a facial-monitoring sensor that could assist in identifying the nervous nelly!

This has so many spinoffs.

Pattern recognition and probable outcomes can assist in many fields.

Let's hope the ogre is gentle.
I want to see the Oooph meter turned up to Max😁
 
  • Haha
  • Like
Reactions: 6 users

Diogenese

Top 20

LAW ENFORCEMENT & MILITARY

Lassen Peak’s solution will improve safety and the overall experience for everyone by allowing for highly accurate weapon detection to be conducted at a safe distance – avoiding potential conflict, eliminating escalation to use of force in situations where there is no threat, and providing for a more dignified and respectful experience for all​

  • No-contact, less invasive, conducted at a safe distance​

  • Prevents escalation to use of force​

  • Ensures accountability and transparency through automated log​

  • Fosters trust and safety between police and communities​

  • Respects personal privacy and civil rights​


The addressable market for this technology is huge. Imagine it being incorporated into home security systems where it can detect that the person outside your front door seeking entry is carrying a weapon.

Detecting weapons on patrons of mass sporting or entertainment venues. Detecting weapons on travellers entering airports unobtrusively at multiple points during the process of ticketing right through to boarding of planes.

My opinion only, but startup or not, it has US$16 million to get the technology ball rolling.

Where is @Diogenese we need an urgent patent search.


My opinion only DYOR
FF

AKIDA BALLISTA
Lassen Peak have several patent applications, all totally innocent of neural networks. It's basically a photo-fit ID system, so clearly Akida would supercharge it.

The most recently published is:

WO2022250862A1 SYSTEMS AND METHODS FOR NONINVASIVE DETECTION OF IMPERMISSIBLE OBJECTS USING DECOUPLED ANALOG AND DIGITAL COMPONENTS Priority: US202163192540P·2021-05-24; US202217734079A·2022-05-01



A system for scanning targets for concealed objects comprises a set of analog imaging components of a portable radar system with both a ranging resolution and lateral resolution sufficient to detect an object concealed on a person, where the analog imaging components are contained with a first housing and in communication with digital processing components contained in a second housing, where the digital processing components are configured to receive imaging information from the analog components for processing. Each housing is configured to be attached to a user's article of equipment.

[0024] For example, lens 120 can be a Luneberg lens of the type or types described in U.S. Patent Application No. 63/161,323, the contents of which are hereby incorporated in their entirety. [024] In an embodiment, core processing system 102 includes processor 103 and custom logic 104. Processor 103 is configured to process instructions to render or display images, initiate a scan, process the results of a scan, alert the user, and provide the results of an object match, if any, to the user. Processor 103 can be any of a variety and combination of processors, and can be distributed among various types and pieces of hardware found on the apparatus, or can include hardware distributed across a network. Processor 103 can be an ARM (or other RISC-based) processor. Additionally, such processors can be implemented, for example, as hardware modules such as embedded microprocessors, Application Specific Integrated Circuits (“ASICs”), and Programmable Logic Devices, including flash memory (“PLDs). Some such processors can have multiple instruction executing units or cores. Such processors can also be implemented as one or more software modules in programming languages as Java, C++, C, assembly, a hardware description language, or any other suitable programming language. A processor according to some embodiments includes media and program code (which also can be referred to as code) specially designed and constructed for the specific purpose or purposes. Custom logic 104 can include one or more Field Programmable Gate Array(s) (FPGA) or any type of PLD for custom logic to support processing offload from Processor 103. In an embodiment, the term “processing offload” includes digital signal processing and digital beam forming
.
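The "processing offload" the patent assigns to the custom logic includes digital beam forming. As a general illustration (this is standard array-processing textbook material, not Lassen Peak's design), a delay-and-sum beamformer time-aligns each sensor's signal before summing, so returns from the steered direction add coherently:

```python
# Toy delay-and-sum digital beamformer, as a general illustration of the
# "digital beam forming" the patent offloads to custom logic. The array
# geometry, delays and signals are invented; this is textbook material,
# not Lassen Peak's design.
def delay_and_sum(signals, delays):
    """Advance each sensor signal by its steering delay (in samples), then average."""
    n = len(signals[0])
    out = []
    for t in range(n):
        acc = 0.0
        for sig, d in zip(signals, delays):
            idx = t + d  # advance this sensor's signal by its delay
            if idx < n:
                acc += sig[idx]
        out.append(acc / len(signals))
    return out

# Two sensors see the same pulse; the second sensor sees it one sample later.
s1 = [0.0, 1.0, 0.0, 0.0]
s2 = [0.0, 0.0, 1.0, 0.0]
aligned = delay_and_sum([s1, s2], delays=[0, 1])
print(aligned)  # → [0.0, 1.0, 0.0, 0.0] — the pulse adds coherently at t=1
```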
 
  • Like
  • Love
  • Fire
Reactions: 40 users
I wonder if you could also add in a facial-monitoring sensor that could assist in identifying the nervous nelly!

This has so many spinoffs.

Pattern recognition and probable outcomes can assist in many fields.

Let's hope the ogre is gentle.
Why stop there? What about heart rate, blood pressure, perspiration and blood sugar, even muscle tensing, not to mention rapid eye movements and pupil dilation. 😂🤣😂
 
  • Like
  • Fire
  • Haha
Reactions: 8 users
Lassen Peak have several patent applications, all totally innocent of neural networks. It's basically a photo-fit ID system, so clearly Akida would supercharge it.

The most recently published is:

WO2022250862A1 SYSTEMS AND METHODS FOR NONINVASIVE DETECTION OF IMPERMISSIBLE OBJECTS USING DECOUPLED ANALOG AND DIGITAL COMPONENTS Priority: US202163192540P·2021-05-24; US202217734079A·2022-05-01

View attachment 32618


View attachment 32619

A system for scanning targets for concealed objects comprises a set of analog imaging components of a portable radar system with both a ranging resolution and lateral resolution sufficient to detect an object concealed on a person, where the analog imaging components are contained with a first housing and in communication with digital processing components contained in a second housing, where the digital processing components are configured to receive imaging information from the analog components for processing. Each housing is configured to be attached to a user's article of equipment.

[0024] For example, lens 120 can be a Luneberg lens of the type or types described in U.S. Patent Application No. 63/161,323, the contents of which are hereby incorporated in their entirety. [024] In an embodiment, core processing system 102 includes processor 103 and custom logic 104. Processor 103 is configured to process instructions to render or display images, initiate a scan, process the results of a scan, alert the user, and provide the results of an object match, if any, to the user. Processor 103 can be any of a variety and combination of processors, and can be distributed among various types and pieces of hardware found on the apparatus, or can include hardware distributed across a network. Processor 103 can be an ARM (or other RISC-based) processor. Additionally, such processors can be implemented, for example, as hardware modules such as embedded microprocessors, Application Specific Integrated Circuits (“ASICs”), and Programmable Logic Devices, including flash memory (“PLDs). Some such processors can have multiple instruction executing units or cores. Such processors can also be implemented as one or more software modules in programming languages as Java, C++, C, assembly, a hardware description language, or any other suitable programming language. A processor according to some embodiments includes media and program code (which also can be referred to as code) specially designed and constructed for the specific purpose or purposes. Custom logic 104 can include one or more Field Programmable Gate Array(s) (FPGA) or any type of PLD for custom logic to support processing offload from Processor 103. In an embodiment, the term “processing offload” includes digital signal processing and digital beam forming.
Don’t forget 404.😎
 
  • Like
  • Fire
Reactions: 3 users

Diogenese

Top 20
Just getting the paperwork ready for Dio's inspection.

Systems and Methods for Noninvasive Detection of Impermissible Objects
Abstract
An apparatus comprises a first and second coherent radar system on a first chip configured to operate in a terahertz range to provide a frequency modulated continuous wave, and having a first and second field of view, respectively. The apparatus further comprises a first processor in communication with the first coherent radar system and configured to include instructions to send a first signal to the first coherent radar system to scan a target with the first field of view, and a second processor in communication with the second coherent radar system and configured to collaborate with the first processor, and further configured to include instructions to send a second signal to the second coherent radar system to scan a target within the second field of view.
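The abstract's "frequency modulated continuous wave" (FMCW) radar recovers range from the beat between the transmitted and received chirps. The relationship below is standard FMCW theory, not something stated in the patent, and the bandwidth and chirp-duration numbers are picked purely for illustration.

```python
# Standard FMCW radar range equation (general radar theory, not text from the
# patent): a target at range R produces a beat frequency
#     f_b = 2 * R * B / (c * T)
# for chirp bandwidth B (Hz) and chirp duration T (s), so
#     R = c * f_b * T / (2 * B).
# The bandwidth and chirp duration below are illustrative numbers only.
C = 3.0e8  # speed of light, m/s

def range_from_beat(f_beat_hz: float, bandwidth_hz: float, chirp_s: float) -> float:
    return C * f_beat_hz * chirp_s / (2.0 * bandwidth_hz)

B, T = 4.0e9, 100e-6                 # 4 GHz sweep over 100 microseconds
f_b = 2 * 1.0 * B / (C * T)          # beat frequency for a target at 1 m
print(round(range_from_beat(f_b, B, T), 6))  # → 1.0
```

The range resolution of such a chirp is c / (2B); at 4 GHz of bandwidth that is roughly 3.75 cm, the kind of figure a concealed-object scanner would need.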

Classifications
G01S13/887 Radar or analogous systems specially adapted for specific applications for detection of concealed objects, e.g. contraband or weapons
US20220214447A1
United States

Inventors: Hatch Graham, Ehsan Afshari, Karl Triebes, Ryan Kearny
Current Assignee: Lassen Peak, Inc.
Worldwide applications
2021 US 2022 WO
Application US17/515,421 events:
  • Priority claimed from US202163134373P
  • 2021-09-10: Priority claimed from US17/472,156
  • 2021-10-30: Application filed by Lassen Peak, Inc.
  • 2021-10-30: Priority to US17/515,421
  • 2021-10-30: Assigned to Lassen Peak, Inc.
  • 2022-01-03: Priority to PCT/US2022/011040
  • 2022-07-07: Publication of US20220214447A1
Status: Pending


Learning 🏖
Thanks Learning.
 
  • Like
  • Love
Reactions: 7 users

Diogenese

Top 20
I have extracted the following from the patent which fits nicely with your FAQ @DollarsAndSense

"The RSOC consists of two major functions: 1) A transmitter that produces the radar signal and initiates the scan and 2) a receiver that receives the reflected signal and recovers differential phase and frequency information, and provides that information to the digital processing system.



The apparatus can include sufficient local storage and processing power for operating independent of a network.



At 203, in an embodiment, the analog signal from the scan is converted to a digital format using one or more analog-to-digital converters (ADCs) to create a digital image that can be forwarded to the processing complex of the apparatus. In an embodiment, the process of scanning and creating an image can be repeated a predetermined number of times (programmed into the apparatus or selected by the user) creating multiple digital images.



Upon completion of a search, at 211, post-session processing takes place. This processing can include all or some of the following: tagging images or videos with metadata, gathering and uploading metadata, generating a report, providing a digital signature or certificate, archiving, and uploading the data (both received and processed) and metadata. In this step, images can be cryptographically tagged with various metadata and transmitted and stored on the device, or can be uploaded for further processing. If a data repository is used (e.g., a cloud-based database or an online server), the images, videos, and metadata can be stored there. Examples of metadata can include (but are not limited to) time stamps, geolocation data, device data, customer specific information (user, associated visual images), networked or connected devices, voice recordings, and session information. In an embodiment, a web-based service can be implements using public cloud infrastructure and services such as those provided by (but not limited to) AWS, Azure, and GCP.



At 404, once the objects have been normalized, the resultant image is transferred to an AI engine for pattern matching against known threats and then calculating the likelihood that the input data is a threat. As part of the image processing, in an embodiment, the apparatus performs an image search to match detected shapes against a prebuilt local image threat library, or a mathematical model representing such images, and makes a threat determination using parameters such as shape type, size, type of weapon, confidence level, contrast, and other parameters. Entries in the threat library can include some or all of the following: guns, knives, bombs and bomb vests, clubs, truncheons, bottles, and other objects of interest. In an embodiment, once a preliminary determination has been made that a weapon is suspected, the apparatus will focus in on the suspected weapon(s) and providing better image resolution to improving the detection confidence. In an embodiment, privacy filtering processing is applied, thus ensuring all locally storage body images are obfuscated as part of the image processing described in FIG. 3."


The diagrams, which for some reason I cannot persuade to copy, show that the AI Engine at 404 is a completely discrete device from the other two processors at 103 & 104.

These coincidences are just too convenient in my opinion and I personally am satisfied that they are using AKIDA as the Ai Engine unless @Diogenese finds something in the patent to tear asunder my conclusion.

My opinion only DYOR
FF

AKIDA BALLISTA
Hi FF,

Figure 4 is a flow chart, not a circuit diagram, so each step is shown in an individual box:

[Patent Figure 4 flow chart images]


[027] Memory 107 can be used to store, in computer code, artificial intelligence (“AI”) instructions, AI algorithms, a catalog of images, device configuration, an allowable, calculated, or predetermined user workflow, conditions for altering, device status, device and scanning configuration, and other metadata resulting from the scanning process. Memory 107 can be a read-only memory (“ROM”); a random-access memory (“RAM”) such as, for example, a magnetic disk drive, and/or solid-state RAM such as static RAM (“SRAM”) or dynamic RAM (“DRAM”), and/or FLASH memory or a solid-state disk (“SSD”), or a magnetic, or any known type of memory. In some embodiments, a memory can be a combination of memories. For example, a memory can include a DRAM cache coupled to a magnetic disk drive and an SSD. Memory 107 can also include processor-readable media such as magnetic storage media such as hard disks, floppy disks, and magnetic tape; optical storage media such as Compact Disc/Digital Video Discs (“CD/DVDs”), Compact Disc-Read Only Memories (“CD-ROMs”), and holographic devices; magneto-optical storage media such as floptical disks; solid state memory such as SSDs and FLASH memory; and ROM and RAM devices and chips.

[045] Fig. 3 is a flowchart of a method for creating a dataset of images to be used for imaging and detection, according to an embodiment. At 301, one or more images are taken. At 302, the images are sent to a processor for processing. The image or images received at the processor are increased in size by a predetermined amount creating a set of larger images, at 303. In an embodiment, the images are increased in size to achieve finer blending of the image stack in order to extract the high frequency data that is embedded in the low frequency data hidden in the aliasing.
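Step 303's "increased in size by a predetermined amount" is essentially an upscale before alignment. A plain-Python nearest-neighbour version on a 2-D list of pixel values illustrates the idea; a real pipeline would use a proper resampling filter, and the function name and factor are my own placeholders.

```python
# Sketch of step 303: enlarge an image by an integer factor before the
# alignment/blending steps. Nearest-neighbour resampling on a 2-D list of
# pixel values; illustrative only, not the patent's resampling method.
def upscale(image, factor):
    """Return `image` enlarged by an integer `factor` (nearest neighbour)."""
    out = []
    for row in image:
        # Repeat each pixel `factor` times horizontally...
        big_row = [px for px in row for _ in range(factor)]
        # ...then repeat the widened row `factor` times vertically.
        out.extend([list(big_row) for _ in range(factor)])
    return out
```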

[046] At 304, at least a subset of images in the set of larger images are aligned, according to an embodiment. In an embodiment, at 305, the layers are averaged with linear opacity 1, .5, .25, .125, and so on, allowing images, in an embodiment, to be blended evenly, making use of the aliasing.
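Reading the "linear opacity 1, .5, .25, .125" schedule literally, each aligned layer is composited over the running result at half the previous layer's opacity. A plain-Python illustration of that schedule on rows of pixel values, taken at face value from the quoted wording and not claimed to be the actual implementation:

```python
# Sketch of step 305 as quoted: composite layers over a running result
# with opacities 1, 1/2, 1/4, 1/8, ... Illustrative reading of the patent
# text, operating on equal-length rows of pixel values.
def blend_stack(layers):
    """Blend a list of equal-length pixel rows using opacities 1, 1/2, 1/4, ..."""
    if not layers:
        raise ValueError("need at least one layer")
    acc = list(layers[0])          # first layer at opacity 1
    opacity = 0.5
    for layer in layers[1:]:
        acc = [a * (1 - opacity) + p * opacity for a, p in zip(acc, layer)]
        opacity /= 2               # 0.5, 0.25, 0.125, ...
    return acc
```

With two layers this reduces to a straight 50/50 average; later layers contribute progressively less, which fits the "blended evenly, making use of the aliasing" description only loosely, so treat this as one plausible reading.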

[047] At 306, in an embodiment, the image stack, the plurality of images being combined, is sharpened using a predetermined radius. At 307, according to an embodiment, the final super image is resized. One skilled in the art will understand that the output can be resized to any desirable size using any practicable resampling method that provides an appropriate image. At 308, the super image is used to create the final image (seen in 206 from Fig. 2). Once the super image is created, the image is further processed, as detailed in Fig. 4, discussed below.
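The "sharpened using a predetermined radius" at 306 reads like a classic unsharp mask: subtract a blurred copy from the image and add the difference back. A 1-D toy version using a box blur as the low-pass, purely to show the mechanism (the patent does not say which sharpening method is used):

```python
# Toy 1-D unsharp mask, one plausible reading of "sharpened using a
# predetermined radius". Box blur as the low-pass; illustrative only.
def box_blur(signal, radius):
    """Average each sample with its neighbours within `radius`."""
    n = len(signal)
    out = []
    for i in range(n):
        lo, hi = max(0, i - radius), min(n, i + radius + 1)
        window = signal[lo:hi]
        out.append(sum(window) / len(window))
    return out


def unsharp_mask(signal, radius=1, amount=1.0):
    """Boost detail by adding back the difference from the blurred copy."""
    blurred = box_blur(signal, radius)
    return [s + amount * (s - b) for s, b in zip(signal, blurred)]
```

On a step edge the output overshoots on both sides, which is exactly the perceived-sharpness boost an unsharp mask provides; flat regions pass through unchanged.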

[048] Fig. 4 is a flow chart of a method for processing the existing data to create a final image. At 401, an optical image is created and mapped to the super image creating a filtered image. In an embodiment, the apparatus uses a separate camera to create an optical image used as a base image configured to be mapped to the super image, according to an embodiment. In an embodiment, the separate camera is a digital camera using a CCD sensor, or a CMOS sensor, or any practicable sensor.

Seems to be doing a lot of superfluous pre-fiddling with the sensor data.
 

Diogenese

Top 20
I wonder if you could also add a facial monitoring sensor to assist in identifying the nervous nelly!

This has so many spinoffs.

Pattern recognition and probable outcomes can assist in many fields.

Let's hope the Ogre is gentle.
We've been there before for school security - did not end happily.
 