BRN Discussion Ongoing

Mugen74

Regular
Sorry, busy following the open.

The low price is a transfer of shares overseas. Forget the name of the bank responsible for this, appointed by BrainChip. A NY bank. That price is the TRANSFER stock price.
They must supply these to the buyers, say in the USA OTC etc. They would have come from a stock holding account loaded up with these and probably bought before $0.89.

TODAY'S action is disgusting! There should be an investigation into the DUMPING of 22 mil shares at open, dropping the SP 17% IMMEDIATELY.
UBS is my best guess as TOP culprit, Credit Suisse 2nd place.

Brainchip should push this inquiry, as the stock will get a bad name for being a pump & dump if this continues.
This equates to price manipulation.

It has NOTHING to do with the ASX drop, as BRN has gone up against the ASX drops for 3 days now.


Yak52
Yep, it's criminal!


You know how BRN likes to encrypt clues now and then, so the photo of the harvester on our website might be worth investigating. I looked this morning but can't find much.



Ha ha, I went down the same path yesterday with the drone on the new website via Google image search!


Yes @Yak52 but we both know nothing will happen because they are just too big so the ASX will issue speeding tickets and let the real criminals run free.

So the only course of action available to retail investors is to DYOR and have a plan.

My opinion only DYOR
FF

AKIDA BALLISTA
 

Yak52

Regular
Hey Bravo,

Intel are spending tens of billions building new and expanding existing foundries. They're trying to attract customers for the foundries. They've said they'll work with anyone on any level ie they can design, manufacture and program your chip or just manufacture.

Proga... when you cannot design or invent a neuromorphic chip or IP that works well, the next best thing is to give up, go build a FAB, and then make other people's neuromorphic chips instead.
Ooops sorry (not) INTEL.

Yak52
 

MADX

Regular
Hmm. To save BRN time by reducing phone calls to them, would it be an idea to have a dedicated thread for them to monitor for such things?
It could also save FactFinder some time; he has become our conduit from this forum, by phone calls to his contacts at BRN, by way of his amazing abilities. Here's another idea: would FF like to be on the BRN board, or would that cause a conflict with his position on this forum? I personally find his input so valuable that I'd like him to be rewarded.
Logically, should any such dedicated thread include, and be part of, the suggested applications for BRN tech? There is already a thread for the latter, and we want to minimise the number of threads to make this forum easier to read.
 

Yak52

Regular
Yes, correct FF. Nothing will ever happen to the big guys who do this, especially while they're using ASX-rented servers loaded with the bots doing the trading and generating high trading fees for the ASX.

We are headed back up after they dumped 22 mil and ran out of supply, so let's see where today ends up.
I suspect approx. the O/S price level or above.

Yak52

ps. Time to pick up some extras; the "picture" has not changed regarding all these dots, partnerships etc.
 

Mugen74

Regular
If only we could find them and have them apologise.
 

The game is not fair. Do not play, as you cannot win day to day. It will eat you up and spit you out.

I have mentioned my son previously. At one point, after going to the UK, he worked at the Financial Conduct Authority.
After the Lehman affair collapsed the world financial system, the big players all signed up to a voluntary code to play nice in future. Barclays signed this code. Two weeks later, a retail investor complained that his account had been mismanaged, or worse still fraudulently dealt with, and that rather than having lost his 1.5 million pounds he should have made 4.5 million pounds. Barclays said, "we will investigate," came back and said, "no, nothing to see here, you just made a bad trade," kept their million-pound-plus commission, and left their employee's 200 thousand pound commission in his account. This retail investor rang the FCA and was referred to my son, who carried out the investigation and called for all the files, including emails, which were supplied. After analysing these documents, he showed that the trader at Barclays had in fact defrauded this retail investor of his investment, for his own and Barclays' benefit, with the assistance of traders not just at Barclays. The wink-wink, say-no-more brigade at work across all the desks. The retail investor received his money, about 4.5 million pounds; fines and disqualifications were imposed on Barclays and the trader, and there was a lot of press coverage in the UK and around the world. Barclays once again promised to be good in future. I was greatly reassured by this promise.

The only leverage retail have is Doing Their Own Research and having a Plan.

My opinion only DYOR
FF

AKIDA BALLISTA
 
Looking cute

Bravo

If ARM was an arm, BRN would be its biceps💪!



My plan is to do this until at least 2025.



Hi Rocket577, I've said before I think John Deere would be a good fit. To me this picture suggests that the yellow parts are using Akida. The header at the front could possibly auto-adjust for optimum cut height. The combine will be able to efficiently sort the wheat from the chaff. The discharge arm will self-position over the field bin and stop when at optimum capacity. The tractor pulling the field bin will also be autonomous, hopefully using AKIDA technology. I know a grain farmer 1 hour out of Melbourne looking for 2 workers who can't fill the positions.
This industrial revolution is coming, as the technology and economics are getting closer to proving themselves.
I also think Toro and John Deere would be looking at Akida, or will one day look at it, for their industrial sporting-facility machinery. A decade ago it was only getting close to feasible in places like Japan and Australia due to high labour costs, but wasn't financially beneficial in the USA due to cheap labour.
When the tech and cost-benefit analysis come together I do not know.
Hopefully one day AKIDA will be involved in this transformation.
 


Just received the following email from BRN regarding MosChip. My registration does seem to be working:

New Press Release From Brainchip: Media Alert: BrainChip and MosChip to Demonstrate Capabilities of Neural Processor IP and ASICs for Smart Edge Devices

Laguna Hills, Calif. – May 9, 2022
BrainChip Holdings Ltd (ASX: BRN, OTCQX: BRCHF, ADR: BCHPY), the world’s first commercial producer of ultra-low power neuromorphic AI IP, and
MosChip Technologies Limited (BSE: MOSCHIP), a semiconductor and system design services company, are jointly presenting a session at the India Electronics & Semiconductor Association (IESA) AI Summit discussing how the companies are working collaboratively to enable neural network IP for edge applications.
With the advent of new emerging technologies in the Intelligent Electronics & Semiconductor ecosystem, IESA is looking at tapping the opportunities brought forward by AI in the hardware space. The objective of this Global Summit is to share insights on how AI is driving the hardware and software ecosystem in the country.
BrainChip and MosChip are co-presenting at the May 11 IESA AI Summit session with comments by Murty Gonella, Site Director at BrainChip India, followed by Swamy Irrinki, VP of Marketing and Business Development at MosChip. The presentation ends with a demonstration of BrainChip's Akida™ neural processor IP, enabling high-performance, ultra-low-power on-chip inference and learning, and MosChip's ASIC platform for smart edge devices.
The IESA AI Summit is a two-day conference, May 11 and 12, showcasing how AI is driving the hardware and software ecosystem in India. It features panel discussions, keynote addresses, session and showcases from top India policy makers and global thought leaders. Additional information about the event is available at
https://iesaaisummit.org/.

About BrainChip Holdings Ltd (ASX: BRN, OTCQX: BRCHF, ADR: BCHPY)
BrainChip is the worldwide leader in edge AI on-chip processing and learning. The company's first-to-market neuromorphic processor, Akida™, mimics the human brain to analyze only essential sensor inputs at the point of acquisition, processing data with unparalleled efficiency, precision, and economy of energy. Keeping machine learning local to the chip, independent of the cloud, also dramatically reduces latency while improving privacy and data security. In enabling effective edge compute to be universally deployable across real world applications such as connected cars, consumer electronics, and industrial IoT, BrainChip is proving that on-chip AI, close to the sensor, is the future, for its customers' products, as well as the planet. Explore the benefits of Essentia
 

Yak52

Regular
We are currently well off the lows of the day ($1.00) and at $1.11.

That dump was definitely mostly done by one brokerage (insto), because as soon as they had finished unloading the shares the selling dried up and we rebounded from the $1.00 mark. A broker report (one day) is how Stan would be able to show this.

Our SP is now the same as at midday yesterday (Mon), and we will see what the rest of today brings. Frustrating!

Yak52
 

Hadn't heard of these guys before, for whatever reason.

Thought I'd take a look, and found this on their site from late March.



The Present and Future of AI Semiconductor (2): Various Applications of AI Technology​

March 24, 2022
Artificial intelligence (AI), regarded as 'the most significant paradigm shift in history,' is becoming the center of our lives at remarkable speed. From autonomous vehicles and AI assistants to neuromorphic semiconductors that mimic the human brain, artificial intelligence has already exceeded human intelligence and learning speed in some tasks, and is quickly being applied across various areas, affecting many aspects of our lives. What are the key applications of AI technology and how is it realized?
(Check here to discover more insights from SNU professor Deog-Kyoon Jeong about AI semiconductor!)

Cloud Computing vs. Edge Computing​

Figure 1. Cloud Computing vs. Edge Computing
One AI application, which is an antipode to cloud services, is edge computing. Applications that require processing massive amounts of input data such as videos or image data must process data using edge computing, or transfer the data to a cloud service through wired or wireless communication, preferably after reducing the amount of data. Accelerators specifically designed for edge computing take up a huge part of AI chip design. AI chips used in autonomous driving are a good example. These chips perform image classification and object detection by processing images that contain massive amounts of data using a CNN and a series of neural operations.

AI and the Issue of Privacy​

Figure 2. SK Telecom's NUGU (Source: SKT NUGU)
Figure 2. Amazon's Alexa (Source: NY Times)



Another area of AI application is conversational services like Amazon's Alexa or SK Telecom's NUGU. However, such services cannot be used widely if privacy is not protected. Conversational AI services, where conversations at home are continuously listened to by a microphone, cannot develop beyond a simple recreational service by nature, and therefore many efforts are being made to resolve these privacy issues.
The latest research trend in solving the privacy issue is homomorphic encryption. Homomorphic encryption does not transmit users' voice or other sensitive information, such as medical data, as is. It is a form of encryption that allows computations of multiplication and addition on encrypted data in the form of ciphertext, which only the user can decrypt, on a cloud service without first decrypting it. The outcome is sent back to the user in encrypted form, and only the user can decrypt it to see the results. Therefore, no one including the server can see the original data other than the individual user. A homomorphic service requires an immense amount of computation, several thousand to tens of thousands of times more than a general plaintext DNN service. The key area for future research will be reducing service time by dramatically enhancing computation performance through specially designed homomorphic accelerators.
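The multiply-and-add-on-ciphertext idea can be made concrete with a toy Paillier cryptosystem, which is additively homomorphic. This sketch is the editor's own illustration, not from the article, and the tiny primes make it completely insecure; it only shows that a server can add two values without ever decrypting them.

```python
# Toy Paillier cryptosystem (insecure parameters) illustrating additive
# homomorphism: Enc(a) * Enc(b) mod n^2 decrypts to a + b.
import math
import random

p, q = 293, 433          # toy primes; a real system uses ~1024-bit primes
n = p * q
n2 = n * n
lam = math.lcm(p - 1, q - 1)
g = n + 1                # standard simple choice of generator
mu = pow(lam, -1, n)     # decryption constant; valid because g = n + 1

def encrypt(m):
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(1, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c):
    # L(x) = (x - 1) // n, then multiply by mu mod n
    return ((pow(c, lam, n2) - 1) // n) * mu % n

a, b = 1234, 4321
c_sum = (encrypt(a) * encrypt(b)) % n2   # "server" multiplies ciphertexts
print(decrypt(c_sum))                     # -> 5555, the sum, seen only by the user
```

The server never sees `a`, `b`, or their sum in the clear, which is exactly the property the article describes; the heavy cost comes from doing this at DNN scale.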

AI Chip and Memory​

In a large-scale DNN, the number of weights is too high to contain all of them in a processor. As a result, it has to make a read access whenever it requires a weight stored in an external large-capacity DRAM and bring it to the processor. If a weight is used only once and cannot be reused after accessing it, the data that was pulled with a considerable amount of energy and time is wasted. This is an extremely inefficient method, as it consumes additional time and energy compared to storing and using all weights in the processor. Therefore, processing an intense amount of data using an enormous number of weights in a large-scale DNN requires a parallel connection and/or a batch operation that uses the same weights several times over. In other words, there is a need to perform computations by connecting several processors with DRAMs in parallel, dispersing and storing weights or intermediate data across several DRAMs so they can be reused. High-speed connection among processors is essential in this structure, which is more efficient than having all processors access memory through one route, and only this structure can deliver the maximum performance.
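As a back-of-envelope sketch (the layer size and batch size here are the editor's assumptions, not the article's), the weight-reuse argument can be quantified: fetching a layer's weights once and applying them to a whole batch divides the DRAM weight traffic per sample by the batch size.

```python
# How batching amortizes DRAM weight traffic for one fully connected layer.
d_in, d_out, bytes_per_weight = 1024, 1024, 4   # fp32 weights (assumed sizes)
weight_bytes = d_in * d_out * bytes_per_weight  # fetched from DRAM per matmul

def weight_traffic_per_sample(batch_size):
    # the same fetched weights serve every sample in the batch
    return weight_bytes / batch_size

print(weight_traffic_per_sample(1))    # -> 4194304.0 bytes (4 MiB per sample)
print(weight_traffic_per_sample(64))   # -> 65536.0 bytes (64 KiB per sample)
```

At batch size 1 every weight is fetched for a single use, the worst case the article describes; at batch size 64 each fetch is reused 64 times.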

Interconnection of AI Chips​

Figure 3. Interconnection Network of AI Chips
The performance bottleneck that occurs when connecting numerous processors depends on the provided bandwidth and latency, as well as the form of interconnection. These elements define the size and performance of the DNN. In other words, if one were to deliver N-times higher performance by connecting N accelerators in parallel, a bottleneck occurs in the latency and bandwidth provided by the interconnections, and the system will not deliver the desired performance.
Therefore, the interconnection structure between a processor and another is crucial in efficiently providing the scalability of performance. In the case of NVIDIA A100 GPU, NVLink 3.0 plays that role. There are 12 NVLink channels in this GPU and each provides 50 GBps in bandwidth. Connecting 4 GPUs together can be done by direct connections using 4 channels each in the form of a clique. But to connect 16 GPUs, an NVSwitch, which is an external chip dedicated just for interconnection, is required. In the case of Google TPU v2, it is designed to enable a connection of a 2D torus structure using Inter-Core Interconnect (ICI) with an aggregate bandwidth of 496 GBps.
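The quoted link budgets are easy to check, and a crude bottleneck model (an assumption for illustration, not from the article) shows why interconnect bandwidth caps the achievable speedup:

```python
# Aggregate NVLink 3.0 bandwidth per A100 GPU, from the figures above.
nvlink_channels, gbps_per_channel = 12, 50
aggregate_gbps = nvlink_channels * gbps_per_channel
print(aggregate_gbps)  # -> 600

# Crude scaling model (editor's assumption): per step, each accelerator
# computes for t_compute seconds, then must exchange gb_exchanged GB over
# links of bandwidth bw_gbps before the next step can start.
def effective_speedup(n, t_compute, gb_exchanged, bw_gbps):
    t_comm = gb_exchanged / bw_gbps
    return n * t_compute / (t_compute + t_comm)

print(effective_speedup(16, 1.0, 0.0, 600))    # -> 16.0 (no communication: ideal)
print(effective_speedup(16, 1.0, 600.0, 600))  # -> 8.0  (comm time equals compute time)
```

Once communication time rivals compute time, half the theoretical N-times speedup is gone, which is the scalability loss the passage describes.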
Figure 4. NVIDIA's GPU Accelerator A100 using 6 HBMs (Source: The Verge)
The way in which processors are interconnected has a huge impact on the whole system. For example, if they are interconnected in a mesh or torus structure, the structure is easy to compose as the physical connection between chips is simple. But latency increases proportionally with distance, as it requires hopping over several processors to interconnect nodes that are far away. The most extreme method would be a clique that interconnects all processors one to one, but the number of links then grows quadratically (N(N−1)/2 for N processors), increasing the chip pin count and causing PCB congestion beyond what is allowable, so in actual designs connecting only up to four processors is the limit.
Most generally, using a crossbar switch like an NVSwitch is another attractive option, but this method converges all connections on the switch. Therefore, the more processors you want to interconnect, the more difficult the PCB layout becomes, as transmission lines concentrate around the switch. The best method is structuring the whole network as a binary tree, connecting processors at the bottom and allocating the most bandwidth to the top of the tree. A binary fat tree is therefore the most ideal and will deliver maximum performance with scalability.
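The trade-off between these topologies can be counted directly. The helpers below (the editor's own sketch) tally link counts; they show why a clique is only practical for about four chips while a ring or torus stays cheap as N grows.

```python
# Link counts for the interconnect topologies discussed above, for N chips.
# Per-chip port count is what drives pin count and PCB congestion.

def clique_links(n):
    # every pair of chips directly connected: N(N-1)/2 links
    return n * (n - 1) // 2

def ring_links(n):
    # 1D torus (ring): one link per chip
    return n

def torus2d_links(rows, cols):
    # 2D torus: each node has 4 links (with wraparound), each shared by 2 nodes
    return 2 * rows * cols

print(clique_links(4), clique_links(16))  # -> 6 120
print(torus2d_links(4, 4))                # -> 32
```

Four chips in a clique need only 6 links (matching the A100 4-GPU direct-connect case above), but 16 chips would need 120, versus 32 for a 4x4 torus, which is why larger systems fall back to switches or tori.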

Neuromorphic AI Chip​

Figure 5. Cloud Server Processor vs. Neuromorphic AI Processor
The data representation and processing method of processors for cloud servers that serve as DNN accelerators are digital, since the computational structure is fundamentally a simulation of a neural network in software on top of hardware. Recently, there has been an increase in research on neuromorphic AI chips which, unlike the previous simulation method, directly mimic the neural network of a living organism and its signals, mapping them to an analog electronic circuit that performs in the same manner. This method is analog in the representation of the original data in actual applications. This means that one signal is represented in one node, the interconnection is hardwired rather than defined by software, and the weights are stored in analog form.
Figure 6. Previous semiconductor vs. Neuromorphic semiconductor
The advantage of such a structure is that it has maximum parallelism and performs with minimum energy, and neuromorphic chips can secure a great advantage in certain applications. Because the structure is fixed, it lacks programmability, but it can offer a great advantage in certain small-scale edge computing applications. In fact, a neuromorphic processor is significant in applications such as processing AI signals from sensors used in IoT, delivering high energy efficiency, or image classification that requires processing large amounts of video data using a CNN with fixed weights. However, because the weights are fixed, it will be difficult to use in applications that require continued learning. Also, it is difficult to leverage parallelism by interconnecting several chips due to a structural limitation when it comes to large-scale computations, restricting its actual area of application to edge computing. It is also possible to realize the neuromorphic structure in digital form; IBM's TrueNorth is an example. It is known, however, that its scalability is limited, making it difficult to find wide practical applications.
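The "directly mimics the neural network of a living organism" point is easiest to see in code. Below is a minimal leaky integrate-and-fire neuron in plain Python; this is the editor's illustration of the general neuron model, and real neuromorphic chips implement these dynamics in analog or digital circuitry rather than software.

```python
# Minimal leaky integrate-and-fire (LIF) neuron: the membrane potential
# leaks over time, integrates input current, and emits a spike (then
# resets) when it crosses a threshold.
def lif(inputs, leak=0.9, threshold=1.0):
    v, spikes = 0.0, []
    for current in inputs:
        v = v * leak + current    # leaky integration of input current
        if v >= threshold:        # fire and reset on crossing the threshold
            spikes.append(1)
            v = 0.0
        else:
            spikes.append(0)
    return spikes

print(lif([0.3, 0.3, 0.3, 0.3, 0.0, 0.9]))  # -> [0, 0, 0, 1, 0, 0]
```

Note the event-driven character: output is produced only when a spike fires, which is where the energy efficiency of spiking/neuromorphic designs comes from.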

Current Status of AI Chip Development​

To create a smart digital assistant that can converse with humans, Meta (formerly known as Facebook), which needs to process massive amounts of user data, is designing an AI chip specialized to have basic knowledge about the world. The company is also internally developing AI chips that will perform moderation to decide whether to post real-time videos that are uploaded to Facebook.
Amazon, a technology company that mainly focuses on e-commerce and cloud computing, has already developed its own AI accelerator called AWS Inferentia to power its digital assistant Alexa and uses it to recognize audio signals. Cloud service provider AWS has developed an infrastructure that uses the Inferentia chip and provides services for cloud users that accelerate deep learning workloads, like Google's TPU.
Microsoft, on the other hand, uses field-programmable gate arrays (FPGAs) in its data centers and has introduced a method of securing the best performance by reconfiguring precision and DNN structure according to application algorithms, in order to create AI chips optimized not only for current applications but also for future ones. This method, however, creates a lot of overhead to reconfigure the structure and logic circuit even after an optimal structure has been identified. As a result, it is unclear that it will have actual benefit, because it is inevitably disadvantaged in terms of energy and performance compared to ASIC chips designed for specific purposes.
A number of fabless startups are competing against NVIDIA by developing general-purpose programmable accelerators that are not specialized to certain areas of application. Many companies, including Cerebras Systems, Graphcore, and Groq, are joining the fierce competition. In Korea, SK Telecom, in collaboration with SK hynix, has developed SAPEON and will soon be used as the AI chip in data centers. And Furiosa AI is preparing to commercialize its silicon chip, Warboy, as well.
Figure 7. SAPEON X220 (Source: SK Telecom Press Release)

The Importance of the Compiler​

The performance of such AI hardware depends greatly on how optimized its software is. Operating thousands or tens of thousands of computational circuits at the same time through a systolic array and gathering the outcome efficiently requires highly advanced coordination. Setting up the order of the input data to feed the numerous computational circuits in the AI chip, making them work continuously in lockstep, and then transmitting the output to the next stage can only be done through a specialized library. This means that developing an efficient library and the compiler to use it is as important as designing the hardware.
The NVIDIA GPU started as a graphics engine. But NVIDIA provided a development environment, CUDA, to enable users to write programs easily and run them efficiently on the GPU, which made it popular and commonly used across the AI community. Google also provides its own development environment, TensorFlow, to help develop software using TPUs, making TPUs easy to utilize. More diverse development environments must be provided in the future, which will increase the applicability of AI chips.

AI Chip and its Energy Consumption​

The direction of AI services in the future must absolutely focus on enhancing the quality of service and reducing the required energy consumption. Therefore, it is expected that efforts will focus on reducing the power consumption of AI chips and accelerating the development of energy-saving DNN structures. In fact, it is known that it takes 10^19 floating-point operations to train ImageNet to an error rate of less than 5%. This is equivalent to the amount of energy consumed by New York City citizens in a month. In the example of AlphaGo, used in the game of Go against 9-dan professional player Lee Sedol in 2016, a total of 1,202 CPUs and 176 GPUs were used in the inference to play Go, with an estimated 1 MW in power consumption, which is tremendous compared with the human brain using only 20 W.
AlphaGo Zero, which was developed later, exceeded AlphaGo's performance after merely 72 hours of training using self-play reinforcement learning with only 4 TPUs. This case proves there is potential for reducing energy consumption using a new neural network structure and learning method, and we must continue to pursue research and development on energy-saving DNN structures.
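The power comparison in the AlphaGo example works out as follows (simple arithmetic on the article's own figures):

```python
# AlphaGo's estimated inference power vs. the human brain, per the article.
alphago_watts = 1_000_000   # ~1 MW across 1,202 CPUs and 176 GPUs
brain_watts = 20
ratio = alphago_watts / brain_watts
print(ratio)  # -> 50000.0, i.e. roughly 50,000x the brain's power budget
```

That five-orders-of-magnitude gap is the headroom the article argues new network structures and learning methods should chase.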

The Future of the AI Semiconductor Market​

Figure 8. AI Chip Market Outlook (Source: Statista)
The successful accomplishments made in the field of AI will expand the scope of application, triggering stunning market growth as well. For example, SK hynix recently developed a next-generation intelligent semiconductor memory, processing-in-memory (PIM), to resolve the bottleneck in data access in AI and big data processing. SK hynix unveiled the 'GDDR6-AiM (Accelerator in Memory)' sample as the first product to apply PIM, and announced the achievement at the International Solid-State Circuits Conference (ISSCC) 2022, an international conference of the highest authority in the field of semiconductors, held in San Francisco at the end of February this year.
Figure 9. GDDR6-AiM developed by SK hynix
Application systems will further drive a wider AI market and continuously create new areas, enabling differentiated service quality backed by the quality of inference of the underlying neural network structure. AI semiconductors, which are the backbone of the AI system, will be differentiated based on how fast and accurately they can conduct inference and training tasks using low energy. The latest research findings show that current energy efficiency is extremely poor. Therefore, there is an increasing need for research on new neural network structures with a focus not only on function but also on energy efficiency. In terms of hardware, the core element that defines energy efficiency is improving memory access methods. As such, processing-in-memory (PIM), which processes within the memory rather than accessing memory separately, and neuromorphic computing, which mimics the neural network by storing synapse weights in analog memory, will become important fields of research.
 


Yak52

Regular


SO... an engineer (Samar Shekhar) from INTEL has liked our joint demo with MOSCHIP at the Summit.

More & more INTEL dots keep popping up, to the point where we can't ignore them anymore.

To say that INTEL knows how good we are is an understatement, and they are watching us with interest, it would seem!

Yak52
 