BRN Discussion Ongoing

FuzM

Regular
I wouldn’t be surprised if it turned out we’re also somehow connected to South Korean ASIC design house ASICLAND, whose engineers work closely with global partners, including TSMC and Arm.

The fact that both their CMO & Head of Overseas Sales and the company’s Global Strategy Manager “celebrated” this week’s LinkedIn post about our redesigned website with a clapping-hands emoji each is a little too much of a coincidence, don’t you think? 😉

“ASICLAND is a leading design house specializing in application-specific integrated circuit (ASIC) design, offering high-performance, low-power, and cost-optimized design services. As an official Value Chain Alliance (VCA) partner of TSMC—the world’s No.1 foundry—ASICLAND serves as a trusted bridge between customers and TSMC. We deliver full turnkey support, from architecture design to GDS delivery, across a wide range of industries including AI, automotive, IoT, and memory.”


View attachment 86920


View attachment 86922

View attachment 86935


View attachment 86930




View attachment 86926 View attachment 86927 View attachment 86928 View attachment 86929


In this context, I was wondering whether the undisclosed “Leading U.S. IDM Company” in yesterday’s press release 👇🏻 could by any chance be our licensee Renesas Electronics America, a wholly owned subsidiary of Tokyo-headquartered Renesas Electronics Corporation, which in turn happens to be a global semiconductor player? Just a wild guess, though…

“▶ Strengthening automotive semiconductor design capabilities through collaboration with a global semiconductor client

(…) [2025-06-11] ASICLAND has signed a supply agreement with a leading U.S. integrated device manufacturer (IDM) to jointly target the global automotive semiconductor market.

(…) The U.S. semiconductor company involved is an IDM providing essential chip designs and power management solutions for automotive electronics systems, with active operations across various industrial sectors. Through this collaboration, ASICLAND will expand its technological foundation and expertise in automotive chip design.

(…) Meanwhile, ASICLAND is accelerating efforts to enter global markets by establishing an advanced R&D center in Hsinchu, Taiwan*. The company is actively securing cutting-edge design technologies for 3nm and 5nm process nodes as well as CoWoS (Chip-on-Wafer-on-Substrate) packaging technologies.”


*Hsinchu Science Park is Taiwan’s Silicon Valley and home to about 500 high-tech companies, among them TSMC, UMC and MediaTek, as well as our partner Andes Technology.




View attachment 86936




View attachment 86932

Not a surprise after all
 
  • Like
Reactions: 3 users

jtardif999

Regular
FF

Let’s compare production timelines:

AKD1000
2018 Design work completed and IP tested by select customers leading to decision to completely redesign AKD1000
2019 July redesigned AKD1000 IP released to select customers for feedback
2020 March Feedback received AKD1000 produced in FPGA and tested
2020 April final design sent to Socionext for engineering.
2020 September AKD1000 engineering sample back from TSMC & testing takes place
2020 October AKD1000 engineering samples released to early access customers
2020 December AKD1000 engineering samples released to NASA
2021 March Brainchip announces that, taking customer feedback into account, design changes have been made to AKD1000 and the reference AKD1000 is ready to be sent to tapeout at TSMC
2021 November AKD1000 reference chips have been received and tested, proving the redesign delivers a 30 percent increase in efficiency over the AKD1000 engineering samples
2023 January After extensive customer engagement, Brainchip designs a further iteration of the AKD1000 series, named AKD1500

AKIDA 2000
2022 Brainchip announces acceleration of the development of AKIDA2.0 IP
2023 March AKIDA2.0 IP announced and released to select customers before general availability to obtain customer feedback.
2023 October AKIDA2.0 IP availability extended
2024 Brainchip prioritises development of models and TENNs software to support AKIDA2.0
2024 - 2025 Brainchip works on and completes engineering design for demonstrating AKIDA 2.0 on FPGA
2025 Brainchip demonstrates AKIDA 2.0 on FPGA and makes it available for testing by select customers.
2025 August Brainchip announces the AKIDA2.0 FPGA Cloud Access to customers/developers.
2026 February Brainchip announces the refined AKD2.0 named AKIDA2500 to be produced in silicon and taped out on 12nm at TSMC as an engineering sample prior to any sort of volume production.

There appears to be a certain amount of similarity in the timelines for developing revolutionary neuromorphic technology, regardless of who is on the Board, who is the Chair, and who is the CEO of Brainchip.

Note I have excluded references to AKIDA PICO from the timeline for AKIDA2.0, as well as the various platforms such as the Edge Box and M.2 cards for AKD1000.

Bit of a confusing nomenclature 🙂:

Akida1.0 is the underlying architecture associated with the AKD1000 and AKD1500 chips.

Akida2.0 is the underlying architecture for the AKD2500 chip when it has been fabricated.

AFAIK Akida Pico is a further modification of the underlying Akida2.0 architecture, allowing a single neural processing unit to serve as the entire neural fabric - though I believe it isn't limited to one.

Modifications have been made to each of the Akida architectures in line with customer requests as have modifications made directly to the chips themselves.

I hope this makes a bit more sense. I think all of this would be quite confusing to anyone without an engineering background.
 
  • Like
Reactions: 6 users

Diogenese

Top 20
Hi @manny100

Don’t want to be a negative Nelly on the topic but Megachips are also partnered with Quadric and have been pumping their tyres up publicly:

Quadric​

“Chimera General Purpose Neural Processing Unit” (GPNPU), which Quadric independently developed, can provide high performance computing regardless of sensor types. Conventional AI engines usually only run at high speed in neural networks while off-loading pre-processing and boundary processing onto a host system. However, with Quadric’s technology, it is possible to run at high speed with pre-processing, boundary processing, and neural network inference on a single hardware unit.
This unique ability that can support all kinds of data processing will allow for the accommodation of new types of neural networks released in the future without any hardware changes, regardless of the applications.

Features​

  • Scalability for both low power consumption and accuracy
  • Accelerate entire steps of AI pipeline processing
  • Flexible architecture to support the evolution of AI models
  • IPs for silicon implementation
  • Software development kits available


Douglas Fairburn who once did a podcast with Brainchip says:

View attachment 95056


I think we need to be realistic that Megachips are possibly using Quadric vs BC.

Love to be wrong of course but I haven’t seen any evidence of that.
Hi SG,

I'd forgotten about the Megachips/Quadric partnership:


Quadric and MegaChips Form Partnership to Bring IP Products to ASIC and SoC Market - Embedded Computing Design

Quadric and MegaChips announced a strategic partnership to deliver ASIC and SoC solutions built on Quadric’s edge AI processor architecture.

MegaChips announced an equity stake in Quadric in January 2022 and is also a major investor in a $21M Series B funding round announced in March through their MegaChips LSI USA Corporation subsidiary. The round aims to help Quadric release the next version of its processor architecture, improve the performance and breadth of the Quadric software development kit (SDK), and roll out IP products to be integrated in MegaChips’ ASICs and SoCs.

I looked at their patents at the time, and, if I recall correctly, was not overly impressed. However they have a new patent which refers to their old patents as Prior Art. (This does not necessarily mean they are obsolete, as it could refer to an improvement):

US20260023715A1 SYSTEMS AND METHODS FOR IMPLEMENTING DIRECTIONAL OPERAND BROADCAST AND MULTIPLY-ACCUMULATE EXECUTION USING A CONFIGURABLE PATCH MESH IN A MULTI-CORE PROCESSING ARRAY OF AN INTEGRATED CIRCUIT 20240719 - Published 20260122

1770966533622.png


[0099] Each array processing core includes a local register memory 310 that may be logically divided into a broadcast region 309 , a source region 318 , and a destination region 320 . The broadcast region 309 may be configured to hold operand values intended to be fed into a patch operation. In one or more embodiments, the source region 318 preferably stores local data operands to be used in multiply-accumulate computations, while the destination region 320 receives accumulation results.

[0100] A feed path interconnect enables operand values from one or more array processing cores to be routed to the origin core 303 . Once the operand value may be received by the origin core 303 , the operand value may be staged in a broadcast staging register or similar holding buffer. The origin core 303 then initiates a unidirectional wavefront broadcast of the operand value using a patch mesh interconnect fabric 312 .

[0101] In one or more embodiments, the patch mesh interconnect 312 may be configured to propagate operand values from the origin core 303 to other processing cores within the patch region using a wavefront propagation scheme. In one embodiment, the wavefront progresses outward from the origin core 303 in a Manhattan-distance order, such that each processing core within the patch region receives the broadcast value with a fixed delay relative to the number of inter-core hops from the origin core. The broadcast timing model enables deterministic compute scheduling across the patch.

Basically, a pulse originating at 303 (top left) propagates through the matrix with a delay determined by the number of cores between 303 and the destination core, creating a propagating "wave front".
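Paragraph [0101]'s Manhattan-distance timing model is simple enough to sketch: each core's broadcast delay is just its hop count from the origin core, so cores at equal distance form diagonal "wavefronts". A toy sketch of that idea (the grid size, coordinates, and one-hop-per-cycle delay are my own illustrative assumptions, not from the patent):

```python
# Toy model of the patent's wavefront broadcast timing: each core
# in the patch receives the operand after a delay proportional to
# its Manhattan distance (hop count) from the origin core.

def broadcast_delays(rows, cols, origin=(0, 0)):
    """Delay (in hops) at which each core receives the broadcast."""
    orow, ocol = origin
    return {
        (r, c): abs(r - orow) + abs(c - ocol)
        for r in range(rows)
        for c in range(cols)
    }

delays = broadcast_delays(4, 4, origin=(0, 0))  # origin = top-left, like core 303
# Cores at equal Manhattan distance form a diagonal "wavefront":
wavefront_2 = sorted(k for k, d in delays.items() if d == 2)
print(wavefront_2)  # [(0, 2), (1, 1), (2, 0)]
```

Because every core's arrival time is fixed by its hop count, the compiler can schedule the multiply-accumulate at each core deterministically - which seems to be the point of the "deterministic compute scheduling" claim.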

This sounds to me like it is intended to mimic the propagation of signals through the brain. The reason for the development is set out here:

[0006] Accordingly, there remains a need in the integrated circuitry field for operand distribution techniques that permit localized, directionally controlled broadcasting of operands to selected subsets of processing elements. There also remains a need for compute scheduling frameworks that enable deterministic multiply-accumulate operations across such subsets while minimizing interconnect congestion and improving temporal alignment of data arrival and compute triggering.


CTO Nigel Drego cut his teeth with cryptocurrency tech.

The Quadric IO webpage has been 404'd. Their last investment round was 20260115.
 
Last edited:
  • Like
  • Fire
  • Wow
Reactions: 8 users

manny100

Top 20
It's likely horses for courses. Some robot tasks will be suited to AKD1000 and other tasks to Quadric.
It's likely both will get a 'gig'.
 
  • Like
Reactions: 3 users

Probably sick of me banging on about Quadric, but whether we like it or not, Quadric is a competitor and a partner of MegaChips. Maybe it isn't as good as Akida, but in some cases it must be good enough.


The article attached states Quadric just raised another $30M, taking its funding to $72M, so it's not going anywhere in a hurry.

Further discussion covers the speed of the industry and how technology advances faster than chips can be manufactured, which may point to why BrainChip took its time and consulted widely prior to taping out:


I'm still rapt that AKD2500 has been taped out, because it indicates industry has generally approved what BC is building, or they wouldn't spend the $2.5M to produce it.

It's not going to be a winner-takes-all scenario anyway. It will eventually be a big TAM!

:)
 
  • Like
  • Fire
Reactions: 16 users

gex

Regular
What's the difference between now and three years ago, when people were saying we're going to the moon?
Our company was nothing back then.
Look at us now - how far we have developed, and how many avenues of potential income we have with the array of products available.
Why did people believe then but not now?
Look at the prices. Do yourself a favour.
What is satire pleb
 
  • Haha
Reactions: 1 users

Diogenese

Top 20
Yes. Akida 2 has TENNs - that's streets ahead of the competition (Applied Brain Research excepted). Akida 3 adds the latency and power benefits of a solid-state switching mesh vs packet switching.
 
  • Like
  • Fire
  • Love
Reactions: 17 users

Cardpro

Regular
I remember when they first released the announcement for the AI Accelerator... Akida...s... I don't know how you guys continue to stay positive. Although some of the recent engagements seem somewhat interesting, seeing my investment value going down day by day for years makes me sad... To me, this is another excuse to buy time and justify their pay for a few years...
 
  • Like
Reactions: 3 users

Frangipani

Top 20
Kevin D. Johnson, Field CTO and Principal HPC Cloud Technical Specialist at IBM, just shared his latest IBM Spectrum Symphony & AKD1000 in-harmony demo with his LinkedIn network - this time round it's about how stock market traders can use voice emotion biometrics during earnings announcements to their advantage:


08E9CEBF-AE17-42A1-8E44-644C47C87884.jpeg



8C5CBB08-0D31-4035-80A8-5D989A254A51.jpeg
 
Last edited:
  • Like
  • Fire
  • Love
Reactions: 38 users

Frangipani

Top 20

56A5FE53-C322-4E1C-8B08-53A8A21FD51A.jpeg




Akida Pico: The Tiny Brain Making “Always-On” AI a Reality


In the world of Edge AI, there's always been a tradeoff between high intelligence, always-on capability, and long battery life. If you wanted a device to listen for a voice command or monitor something 24/7, you usually had to accept that the battery would be drained just waiting for a command.

Enter Akida Pico.​

Built on BrainChip’s proprietary event-based processing platform, Akida Pico is an ultra-low-power co-processor (or standalone core) designed to give small devices “eyes and ears” without the power-hungry baggage of a traditional CPU tasked with the nuisance of stand-by mode.

Why “Event-Based” is a Game Changer

Traditional AI processors are like a light that stays on all night, even when the room is empty. They’re constantly crunching numbers regardless of whether anything significant is happening.

Akida Pico uses event-based processing, which mimics the human brain. It only “fires” when it detects a relevant change in data (an “event”). If nothing is happening, it consumes almost zero power. This allows it to operate in the microwatt (uW) to milliwatt (mW) range, making it the leanest NPU core in the industry.

Unlike most chips that need a heavy-duty host CPU, this core can operate entirely standalone, pulling just microwatts to milliwatts of power. Pico stays lean by using “power islands” to make sure its standby mode doesn’t incur the leakage of the whole system.
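The light-switch analogy maps to a simple gating rule: run the compute path only when the input changes by more than a threshold, otherwise do nothing. A stdlib-only sketch of that idea (the sample stream and threshold are invented for illustration; actual event-based silicon does this per spike in hardware, not per sample in software):

```python
# Minimal event-based gating sketch: the "processor" runs only when
# a sample differs from the last processed value by more than a
# threshold -- every other sample is a no-op ("almost zero power").

def event_driven(samples, threshold=0.5):
    """Yield (index, value) only for samples that constitute an event."""
    last = None
    for i, x in enumerate(samples):
        if last is None or abs(x - last) > threshold:
            last = x
            yield (i, x)  # only these invoke the compute path

stream = [0.0, 0.1, 0.1, 2.0, 2.1, 2.0, 5.0]
events = list(event_driven(stream))
print(events)  # [(0, 0.0), (3, 2.0), (6, 5.0)]
```

Seven samples arrive, but only three trigger any work - the rest are ignored entirely, which is the essence of the power saving being claimed.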

Real-World Use Cases

Akida Pico isn’t just a spec sheet; it’s designed for specific, high-impact “extreme edge” applications.

Wake-up Systems
In many designs, Akida Pico acts as a low-power filter. Instead of the main CPU staying awake to listen for a keyword or detect motion, Pico remains on duty. When it identifies a “qualified event,” like a specific voice command, it sends a low-power interrupt to “wake up” the main MCU: perfect for applications like smart appliances, voice assistants, and wearables.

Healthcare
Imagine a wearable that monitors heart health or detects the early onset of a seizure. Because Pico is purely digital and ultra-efficient, it can perform medical anomaly detection locally on the device without ever needing to send data to the cloud. Patients can enjoy prolonged battery life with a monitor that only alerts a doctor when a specific event is detected.

Industrial Predictive Maintenance
In a factory, thousands of motors hum 24/7. Identifying a failing part early can prevent emergency repairs or lost productivity. Akida Pico can be integrated into remote sensors to perform industrial anomaly detection, analyzing vibration patterns or thermal spikes in real-time. An industrial sensor can run for years, only “reporting in” if it hears a problem.
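As a rough illustration of what "only reporting in" could look like, here is a minimal rolling-statistics anomaly detector: flag a sample that deviates from the recent mean by more than k standard deviations. The window size, threshold, and data are invented for the example; a real Pico deployment would run a trained neural network rather than this heuristic:

```python
import math
from collections import deque

# Illustrative on-device anomaly detection: flag a vibration sample
# whose deviation from the rolling mean of the last `window` samples
# exceeds k standard deviations. Only flagged indices would wake the
# radio/MCU; everything else stays silent.

def detect_anomalies(samples, window=8, k=3.0):
    recent = deque(maxlen=window)
    alerts = []
    for i, x in enumerate(samples):
        if len(recent) == recent.maxlen:
            mean = sum(recent) / len(recent)
            var = sum((v - mean) ** 2 for v in recent) / len(recent)
            std = math.sqrt(var)
            if std > 0 and abs(x - mean) > k * std:
                alerts.append(i)  # "report in": qualified event detected
        recent.append(x)
    return alerts

vibration = [1.0, 1.1, 0.9, 1.0, 1.05, 0.95, 1.0, 1.1, 1.0, 4.0, 1.0]
print(detect_anomalies(vibration))  # [9] -- only the spike is reported
```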

Developer-Friendly: No New Languages Required

Akida Pico is easy to test, train, and deploy with BrainChip's MetaTF development tool. MetaTF includes a processor IP simulator for model execution, as well as support for Akida hardware like the AKD1000 reference SoC and the Akida 2 FPGA platform. Inspired by the Keras API, MetaTF provides a high-level Python API for neural networks. This API facilitates early evaluation, design, final tuning, and productization of neural network models.
  1. Native Support: Works directly with TensorFlow/Keras and PyTorch.
  2. Low-Code/No-Code: For those who aren’t AI experts, BrainChip offers turnkey tools to deploy optimized models quickly.

The Bottom Line

Akida Pico is about making the Internet of Things intelligent without the need to tether devices to a charging cable. With its ultra-low power neuromorphic technology at the edge, Pico is proving that you don’t need a massive power budget to have a massive impact.

Ready to check out Akida Pico?

Join us for a live webinar to see Pico in action on Akida FPGA in the Cloud: execute models, assess accuracy, and benchmark performance all from your desktop with no hardware required.

Register
 
  • Like
  • Love
  • Fire
Reactions: 25 users

Yoghesh

Regular
EDGE AI FOUNDATION
25,480 followers
As edge AI systems scale, the limitations of traditional von Neumann computing—separate memory and processing, high data movement, and power inefficiency—are becoming increasingly apparent. Neuromorphic computing offers a fundamentally different approach, inspired by the structure and operation of the human brain, enabling event-driven, ultra-low-power, real-time intelligence at the edge.

In this inaugural EDGE AI Neuromorphic Livestream, we bring together industry leaders, researchers, and system builders to explore how neuromorphic AI is moving from research into real-world deployment. The session will examine architectures, sensing and control applications, training methods, and benchmarking practices across both small-scale and large-scale systems. Designed for technologists, researchers, and decision-makers, this livestream will provide practical insights into where neuromorphic AI delivers real value today—and where it is headed next.

Tune in for these talks from:
  • Innatera
  • University of Southern California
  • Brainchip
  • Harvard University
  • Spinncloud
  • Delft University

 
  • Like
  • Love
Reactions: 15 users

Esq.111

Fascinatingly Intuitive.
Evening Frangipani, Chippers.

Boooooom. 😗👍.

Some scratchings of mine , to put the above into context.

Regards,
Esq.
 

Attachments

  • 20260214_074437.jpg
    20260214_074437.jpg
    1.4 MB · Views: 141
  • Like
  • Love
  • Fire
Reactions: 27 users

stockduck

Regular
offtopic



???
 
  • Wow
Reactions: 1 users

stockduck

Regular
offtopic:



https://www.kplabs.space/solutions/hardware/leopard
....
Quad ARM Cortex-A53 CPU 1.2 GHz
....

???
 
  • Like
  • Fire
Reactions: 2 users
Nice
 

Attachments

  • Screenshot_20260214_092947_LinkedIn.jpg
    Screenshot_20260214_092947_LinkedIn.jpg
    211 KB · Views: 94
  • Haha
  • Like
Reactions: 4 users

itsol4605

Regular
offtopic



???
Well, nice!
They are using neuromorphic processors (like Intel Loihi2).

"A case study focusing on an autonomous drone workload reveals up to 312x energy savings relative to conventional deep neural networks, all while sustaining real-time operation.
Validation on both Intel Loihi 2 and IBM TrueNorth platforms confirms the real-world applicability of NeuEdge, showcasing significant energy improvements, 312x over GPU baselines and 89x over conventional neural networks on edge CPUs.
These results firmly establish neuromorphic computing as a viable solution for sustainable edge AI systems, offering a pathway towards truly energy-efficient intelligent devices."
 
  • Like
  • Love
Reactions: 5 users

manny100

Top 20
If this proves out, IBM will be able to sell this as an add-on to Symphony to funds, institutions, banks and trading brokers etc. for a mint.
A licence with a royalty as a % of sales may be the go if it plays out.
Analysing voice sentiment is interesting. Another small tech step and AKIDA will pick up BS at meetings, home etc., via a tiny wearable Brain Tag.
 
  • Like
  • Fire
  • Love
Reactions: 7 users

Esq.111

Fascinatingly Intuitive.
Afternoon Chippers,

Be thinking Kevin D. Johnson may well be a candidate for the NOBEL PEACE PRIZE this year.

* Ability to detect a human emotion shift 2 to 3 seconds before the market explodes 15 times.

Imagine if one will... every mobile phone has such technology embedded within... monitoring our partner's mood & giving a little alert.

World Peace is finally within our grasp.

😇

On a side note, is anyone able to pinch the video off LinkedIn & post it on this forum?

I don't have a LinkedIn account, hence can't view it.

Thank you in advance.

Regards,
Esq.
 
  • Like
  • Haha
Reactions: 8 users