BRN Discussion Ongoing

Pmel

Regular
Do you guys know if BRN is going to get more funding from LDA Capital? From memory we could draw more financing from LDA in May, which it looks like hasn't happened. Does that mean the company is expecting good revenue soon?

Dyor
 
  • Like
Reactions: 2 users

VictorG

Member
They have enough funding for four quarters, $30+ million. They don't need more funding and aren't reliant on revenue just yet.
 
  • Like
  • Love
  • Fire
Reactions: 23 users

Cyw

Regular
You bring up exactly the thought I had immediately. I thought that this was not in the interest of the really nasty, big, well-paying (or palm-greasing) investors. Maybe the pump-and-dumpers are just the investors FF describes as sophisticated.

But those who transfer millions of shares and place millions of shorts, only to then throw those millions of shares onto the market, are playing the criminal role in my view. They work ruthlessly for a much greater purpose, fund the operation internally and then get paid for their services. If someone whips a stock up, pumps it and takes his pay, it may not be the best way, but I think it's a small fish, like a pawn sacrifice to distract. Nothing will change with the real criminals. But that's just my guess, and I can definitely be wrong, as my only insight into the mechanisms of the ASX is what's shared here at TSE.

nice term - bear raid...

If only I had known all this about the ASX two or three years ago. 🤣
From observing BRN trading patterns over the last two years, I've noticed that when the fake bids are pulled, like now, the shorters have finished their shorting for the time being. It is time to buy.
 
  • Like
  • Love
  • Fire
Reactions: 12 users

cosors

👀
Yeah, I changed it from hotcrappa to hotcoppa :) I'm wondering if this will stir up the hornet's nest and make people think twice about doing exactly what Gabriel did...

I see so many users on HC with exactly the same comments, or writing in a similar style to other users. It can't be a coincidence. Definitely some shitty tactics in play on the Crapper.
Thanks to zeeb0t we were able to expel one who followed us into the TLG group. At first he played his role, but I was very sceptical. Then I described my concerns and gave a warning. zeeb0t apparently kept watch, and then it really kicked off on HotCrapper: he revealed himself. And how he ranted about TSE and spread lies!

For me, HC is unfortunately for the most part just a disgusting system; I realised this far too late. But its binding power is great because that's where most people are. It's like with the messaging apps. But there are solutions too: today I use Threema >90% of the time, analogous to TSE at 100% (+Threema). ❤️ Of course I quickly donated a little here and subscribed.
 
  • Like
  • Love
Reactions: 11 users

Learning

Learning to the Top 🕵‍♂️
Speaking of shorts,

Over 10 million shares were closed out on 31 May–1 June
20220608_071950.jpg
20220608_071925.jpg

But volume was only around 14 million shares
20220608_072355.jpg

Learning.
 
  • Like
  • Fire
  • Love
Reactions: 22 users

Baisyet

Regular
  • Like
Reactions: 7 users

Slymeat

Move on, nothing to see.
New YouTube clip of Rob Telson discussing tactile sensing, concentrating on energy use, from the Edge AI conference.

It’s only 1 minute long and is simple enough even for WANCAs to understand.

Akida Tactile Sense Demo
 
  • Like
  • Love
  • Fire
Reactions: 21 users

Filobeddo

Guest
  • Like
  • Haha
  • Fire
Reactions: 14 users

jla

Regular

Liking the RT likes 👍
 

Attachments

  • 28056155-988C-4366-BF98-39FA7A485865.jpeg
    28056155-988C-4366-BF98-39FA7A485865.jpeg
    641.4 KB · Views: 86
  • Like
  • Fire
Reactions: 16 users
Now, this is an earlier article from March 2021 that came up and I decided to read... it's only a few minutes, but it's worth it now with the ARM link.

It was prior to the NVIDIA buy getting turfed, but it makes the case for why NVIDIA wanted, or should have wanted, ARM at that time.

You can see where some pieces start to fit and link in the broader scope.

Read the analogy around the coffee maker, and also look at the last graphic and the little green squares... what can we see potentially fitting in around there?

Also look at some of the numbers re chips, devices etc. that they're throwing around, and I also saw the mention of HiSilicon when discussing servers.

You might note that we previously discussed HiSilicon as a subsidiary of Huawei, and the "research" servers with HiSilicon Kunpeng processors that apparently have Akida onboard at eX3.

We get some traction with ARM and the V9 architecture....just some reasonable traction and hmmmm :love:





ARM’S V9 ARCHITECTURE EXPLAINS WHY NVIDIA NEEDS TO BUY IT​

March 30, 2021 Timothy Prickett Morgan


Many of us have been wracking our brains over why Nvidia would spend a fortune – a whopping $40 billion – to acquire Arm Holdings, a chip architecture licensing company that generates on the order of $2 billion in sales – ever since the deal was rumored back in July 2020. As we sat and listened to the Arm Vision Day rollout of the Arm V9 architecture, which will define processors ranging from tiny embedded controllers in IoT devices all the way up to massive CPUs in the datacenter, we may have figured it out.

There are all kinds of positives, as we pointed out in our original analysis ahead of the deal, in our analysis the day the deal was announced in September 2020, and in a one-on-one conversation with Nvidia co-founder and chief executive officer Jensen Huang in October 2020.

We have said for a long time that we believe that Nvidia needs to control its own CPU future, and even joked with Huang that it didn’t need to have to buy all of Arm Holdings to make the best Arm server CPU, to which he responded that this was truly a once-in-a-lifetime opportunity to create value and push all of Nvidia’s technologies – its own GPUs for compute and graphics and Mellanox network interface chips, DPU processors, and switch ASICs – through an Arm licensing channel to make them all as malleable and yet standardized as the Arm licensing model not only allows, but encourages.


Huang would be the first to tell you that Nvidia can’t create every processor for every situation, and indeed no single company can. And that is why the Arm ecosystem needs to not only be protected, it needs to be cultivated and extended in a way that only a relatively big company like Nvidia can make happen. (Softbank is too distracted by the financial woes of its investments around the globe that have gone bad and basically has to sell Arm to fix its balance sheet. Which is a buying opportunity for Nvidia, which is only really spending $12 billion in cash to get control of Arm; the rest is funny money from stock market capitalization, which in a sense is “free” money that Nvidia can spend to fill in the remaining $28 billion.)

We have sat through these interviews, and chewed on all of this, and chalked it up to yet another tech titan having enough dough to do a big thing. But, as we watched the Vision Day presentations by Arm chief executive officer Simon Segars and the rest of the Arm tech team, they kept talking about pulling more vector math, matrix math, and digital signal processing onto the forthcoming Arm V9 architecture. And suddenly, it all finally became clear: Nvidia and Arm both believe that in a modern, massively distributed world all kinds of compute are going to be tailored to run analytics, machine learning, and other kinds of data manipulation and transaction processing or preprocessing as locally as possible, and a single, compatible substrate is going to be the best answer to creating this malleable compute fabric for a lot of workloads. What this necessarily means is that both companies absolutely believe that in many cases, a hybrid CPU-GPU compute model will not and cannot work.

In other words, Nvidia’s GPU compute business has a limit to its expansion, and perhaps it is a lot lower than many of us have been thinking. The pendulum will be swinging back to purpose built CPUs that have embedded vector and matrix capabilities, highly tuned for specific algorithms. This will be specifically true for intermediate edge computing and endpoint IoT devices that need to compute locally because shipping data back to be processed in a datacenter doesn’t make sense at all, either technically or economically.

Jem Davies, an Arm Fellow and general manager of its machine learning division, gave a perfect example of the economic forces that are driving compute out of the datacenter and into a more diffuse data galaxy, as we called it three years ago.

“In the Armv9 decade, partners will create a future enabled by Arm AI with more ML on device,” Davies explained. “With over eight billion voice-assisted devices, we need speech recognition on sub-$1 microcontrollers. Processing everything on servers just doesn’t work, physically or financially. Cloud compute and bandwidth aren’t free, and recognition on device is the only way. A voice-activated coffee maker used ten times a day with cloud services would cost the device maker around $15 per year per appliance. Computing ML on device also benefits latency, reliability and, crucially, security.”

To bring this on home, if the coffee maker with voice recognition was in use for four years, the speech recognition cost for chewing on data back in the Mr Coffee datacenter would wipe out the entire revenue stream from that coffee maker, but that same function, if implemented on a device specifically tuned for this very precise job, could be done for under $1 and would not affect the purchase price significantly. And, we think, the coffee maker manufacturer could probably charge a premium for the voice recognition and recoup some or all of the investment in the technology added to the pot over a reasonably short time until it just became normal. Like having a clock and timer in a coffee maker did several decades ago, allowing us all to wake up to a hot cup of java or joe or whatever you call it in the morning by staging the ground coffee beans and water the night before.
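To put rough numbers on that reasoning, here is a back-of-the-envelope sketch in Python using the figures quoted above ($15 per appliance per year for cloud recognition, a four-year service life, under $1 for the on-device silicon); the retail price is a placeholder assumption, not a number from the article:

```python
# Back-of-the-envelope: cloud vs. on-device speech recognition cost over the
# life of the coffee maker, using the figures quoted above. The retail price
# is a placeholder assumption, not a number from the article.
cloud_cost_per_year = 15.0      # USD per appliance per year (article's figure)
device_lifetime_years = 4       # service life assumed in the article
on_device_silicon_cost = 1.0    # USD, one-off, "under $1" per the article
assumed_retail_price = 60.0     # USD, hypothetical coffee-maker price

cloud_total = cloud_cost_per_year * device_lifetime_years
print(f"Cloud recognition over {device_lifetime_years} years: ${cloud_total:.2f}")
print(f"On-device recognition (one-off): ${on_device_silicon_cost:.2f}")
print(f"Cloud cost vs. an assumed ${assumed_retail_price:.0f} retail price: "
      f"{cloud_total / assumed_retail_price:.0%}")
```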

What holds true for the coffee maker is going to hold true for most of the hundreds of billions of devices that span from the client back to the edge and up through the datacenter.

There will be millions of such examples across hundreds of billions of devices in the next decade, and that is why with the Armv9 architecture, Arm engineers are planning to make so many changes. The changes will come gradually, of course, just as happened with the Armv7 and Armv8 architectures that most IT people are familiar with because these designs coincide with the rise of the Arm’s use as the motor of choice in smartphones and tablets and increasing use in datacenter infrastructure, including but not limited to servers.

Here is the key question, and it is one that we have asked in many slightly different ways over the several decades we have been watching the IT sector grow and evolve: Does the world want a single, malleable, compatible substrate? By which we mean, in the next decade will it be Arm’s time to help IT wave goodbye to X86? The rise of the mobile phone and then smartphones put the Arm architecture on a collision course with the X86 instruction set, starting with the launch of the Nokia 6110 mobile phone in 1997 and with the Apple iPhone launch in 2007.

With the launch of server chip maker Calxeda in 2010, we thought something could give X86 a run for the server money, much as X86 did for RISC/Unix and RISC/Unix did for proprietary CISC in the prior decades of datacenter compute. And we have watched over the past decade as Arm server chip makers have come and gone. But today it is different. Amazon Web Services is already the largest maker of Arm servers in the world, with its Graviton2 chip, and it looks like Microsoft could be working on its own Arm server chips. Ampere Computing is fielding a good set of Arm server processors, too. Fujitsu’s A64FX is a resounding success in the “Fugaku” supercomputer in Japan, and SiPearl in Europe and HiSilicon in China are continuing to invest in new chippery for systems, too.

Despite all of the disappointments – and some successes – with servers to date, it is hard to bet against Arm. Volume and momentum are on the side of the Arm architecture so long as Nvidia does not mess with success should it prevail in its $40 billion acquisition. (We do not believe Nvidia will change Arm’s licensing, and we take it at face value from Huang himself that Nvidia will pump more, not less, technology through the Arm licensing pipeline.) In his keynote address, Segars said that by the end of 2021, Arm’s partners would ship a cumulative 200 billion devices based on its architecture. The first 100 billion took 26 years, as Acorn Computers evolved into Advanced RISC Machines and transmuted into Arm Holdings. The second 100 billion chips (through the end of 2021) took only five years to sell. And between the end of 2021 and the end of this decade, Segars predicts that Arm will sell another 300 billion chips. If history is any guide, then that is a run rate of 20 billion chips per year here in 2021 but around 55 billion per year through 2030. The rate of change of Arm deployments is itself expected to accelerate.

How many of these Arm chips will be in the datacenter, at different levels of the edge, and within endpoints remains to be seen. While Arm server shipments were up by a factor of 4.5X in the fourth quarter, according to IDC, it is from a base that is small enough that this is not really affecting Intel’s hegemony in the datacenter. As we reported years ago, Arm had hoped to represent 20 percent of server shipments by now, and at one point raised its expectations to 25 percent of shipments by 2020. Not even close. And the re-emergence of AMD with its Epyc processors has not helped. But only a fool would count Arm out. Hope springs eternal for Arm servers, as we discussed a few months ago.

The ArmV9 architecture certainly has servers in mind as well as other devices, and Segars contends that there won’t be one bit of data in the world that doesn’t somehow go through or end up on an Arm-based device. It might take another five to ten years for Arm to be representative in servers, we think. Segars and the Arm team were not foolish enough to put a new stake in that old ground as part of the architecture rollout – particularly with the Nvidia acquisition still not completed. But clearly one of the arguments that Nvidia can credibly make is that there needs to be more competition and more innovation inside the servers of the world.

Richard Grisenthwaite, another Arm Fellow who is also the company’s chief architect, gave an overview of the evolution of the Arm architecture since 1990 and pulled back the curtain a bit on the forthcoming Armv9 architecture. We have mashed up these two charts into one so you can see it all in proper perspective.

View attachment 8732

As far as we are concerned, Arm did not become a possible server instruction set until 40-bit memory addressing (LPAE), hardware-assisted server virtualization, vector floating point (VFP) units, and Advanced SIMD Extensions (which made integer and floating point vector instructions native to the architecture) were added with Armv7. But it really took the Armv8 architecture launched in 2011, with its memory extended to 64-bits, to make a good server chip, and in the past decade a slew of technologies have been added to this architecture to make it a very good instruction set for a server chip.

“The architecture is not a static thing,” Grisenthwaite explained. “We keep on innovating and evolving it to meet the ever-changing needs of the computing world. In the years since we introduced 64-bit processing in Armv8, we’ve added a number of extensions, such as improved support for virtualization, the addition of float16 and bfloat to substantially enhance the performance of machine learning, and a number of security enhancements, including increased resilience against return-oriented programming and support for a secure hypervisor. Innovating the Arm architecture never stops.”

The Armv9 architecture unveiled today is technically known as the Armv9-A architecture profile, with the A being short for “application” and is meant to designate the fullest feature set for client and server devices. The R profile is for “real-time” uses, and the M profile is for “microcontrollers” that do not need the full set of features and are aimed at low cost and low power uses. The R and M profiles will be added soon, we presume, and the feature set will expand for all Armv9 profiles as needed by the market, based on input from Arm licensees who make chips, Arm chip buyers, and the competitive landscape.

The first thing to note in the Armv9 architecture is that it is a superset of Armv8 and there is absolute backwards compatibility. Without that, Arm is dead in the water.

The second big thing on the computing front is support for Scalable Vector Extensions 2, or SVE2, vector processing.

View attachment 8733

Arm worked with Fujitsu to create the original SVE vector math specifications, which are implemented in Fujitsu’s A64FX processor, the heart of the “Fugaku” supercomputer at the RIKEN lab in Japan. These are 512-bit wide SIMD processors that support FP32 and FP64 math as you might expect but also FP16 half precision and INT16/INT8 dot product processing as well – the latter mixed precision formats all important for digital signal processing and machine learning.
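As a toy illustration of why those mixed-precision dot products matter for machine learning (this is just the arithmetic such instructions accelerate, not SVE2 itself): summing INT8 products into an 8-bit accumulator overflows almost immediately, which is why dot-product hardware keeps narrow inputs but accumulates into wider registers. A minimal NumPy sketch:

```python
# Illustrative only: why INT8 dot products accumulate into wider integers.
# Pure NumPy on a CPU, nothing SVE2-specific.
import numpy as np

rng = np.random.default_rng(0)
a = rng.integers(-128, 128, size=1024, dtype=np.int8)
w = rng.integers(-128, 128, size=1024, dtype=np.int8)

# Products and the running sum wrap around silently in 8 bits...
naive = np.sum(a * w, dtype=np.int8)

# ...so real dot-product instructions widen to a 32-bit accumulator.
correct = np.sum(a.astype(np.int32) * w.astype(np.int32))

print("8-bit accumulator: ", int(naive))
print("32-bit accumulator:", int(correct))
```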

Arm’s own “Ares” N1 processor core design did not support SVE, but the Neoverse “Zeus” V1 core has a pair of 256-bit vector units compatible with SVE2 and the “Perseus” N2 core will have a pair of 128-bit SVE units. Future “Poseidon” Neoverse V2 and N3 cores, we presume, will support SVE2 vector units with the expanded capabilities outlined in the chart above.

“That technology was designed in a scalable manner so that the concepts used in supercomputers can be applied across a far wider range of products,” said Grisenthwaite. “We have added increased functionality to create SVE2 to enhance scalable vector extensions, to work well for 5G systems and in many other use cases, such as virtual and augmented reality, and also for machine learning within the CPU. Over the next few years, we will be extending this further with substantial enhancements in performing matrix-based calculations within the CPU.”

That sure doesn’t sound like a company that is just going to offload hard math problems to a GPU.

The biggest parts of the Armv9 architecture have to do with completely reworking the security model in the processor to make Arm a better option than an X86 processor, beyond the fact that it might be a little more energy efficient and might cost a little bit less. While these are important, the idea that companies could deploy more secure chippery across the spectrum of client, edge, and datacenter devices is one that we think IT organizations all over the world will be able to get behind.

One new security technology is called Memory Tagging Extensions, which is going to make it a lot harder for hackers to exploit vulnerabilities hidden in time and space within the code of the world.

“Analyzing the large number of security issues that get reported in the world’s software, a depressing reality is the root cause of many of those problems really comes back to the same old memory safety issues that have been plaguing computing for the last 50 years. Two particularly common memory safety problems – buffer overflow and use after free – seem to be incredibly persistent over the years. And a huge part of the problem is that they are frequently present in software for years before they are discovered and exploited.”

While this is a complex feature, the idea is to encapsulate the accessibility information for data stored in memory with the data itself – a kind of object-oriented security, we reckon. If a pointer to memory has a tag, and the tag doesn’t match when the application tries to access memory – perhaps the memory has moved on or the access is out of range – the tag check fails and the memory access is denied. No more access to memory thanks to buffer overflows and use after free hacks.
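As a rough mental model of that tag check (a toy Python sketch only, not how MTE is implemented in silicon or exposed to software): every allocation gets a small random tag, the pointer carries a copy, and any access whose pointer tag no longer matches the memory's tag is refused, which is what defeats use-after-free.

```python
# Toy model of memory tagging: allocations and pointers each carry a small
# tag, and an access is allowed only while the two tags still match.
# Conceptual sketch only; real MTE uses 4-bit tags on 16-byte granules.
import random

TAG_BITS = 4
memory_tags = {}                      # allocation id -> current tag

def tagged_alloc(alloc_id):
    memory_tags[alloc_id] = random.getrandbits(TAG_BITS)
    return (alloc_id, memory_tags[alloc_id])    # "pointer" = (address, tag)

def tagged_free(alloc_id):
    # Retag on free so a stale pointer's tag (probably) no longer matches.
    memory_tags[alloc_id] = random.getrandbits(TAG_BITS)

def access(pointer):
    alloc_id, ptr_tag = pointer
    if memory_tags.get(alloc_id) != ptr_tag:
        raise MemoryError(f"tag check failed on {alloc_id}")
    return f"access to {alloc_id} OK"

p = tagged_alloc("buffer-1")
print(access(p))      # tags match: allowed
tagged_free("buffer-1")
print(access(p))      # use-after-free: tag check fails (with high probability)
```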

We often talk about a 20 percent price/performance advantage being enough to change chip instruction sets. What is this kind of security worth, particularly if it can be done transparently to applications? We will find out more about the CHERI project at the University of Cambridge and its derivative Project Morello that Arm Holdings is working on with Microsoft, Google, the University of Cambridge, and the University of Edinburgh, which implement memory tagging as Arm is pulling it into the Armv9 architecture. Not for nothing, but IBM’s proprietary CISC processors used in its AS/400 line dating from 1988 had memory tags for just this reason, and this capability moved over to Power chips in 1995 and is still in the IBM i follow-on platform today. That said, IBM has not used the memory tags for security, per se, but rather to enhance the performance of the system. So that use appears to be new.

The other new feature in Armv9 is called Realms, and it adds a new, secure address space extension to the trusted firmware that has evolved during the Armv8 generation.

View attachment 8734

A realm is a kind of memory partition, it looks like, at least according to the explanation given by Mark Hambleton, vice president of open source software at Arm. So instead of hosting virtual machines in a common address space, as is done by hypervisors today, each VM would be hosted in a protected address space that is walled off from the other VMs in the system and, equally important, from the unsecured areas where the operating system is running. The question we have is: why have a hypervisor at all, then, if the realm manager can do all this carving up and securing?

That’s a very high level for the Armv9-A architecture, to be sure, and we will learn more as Arm says more. But the real takeaway is that Arm believes in specialized processing within a device as well as across devices as the only way to keep advancing compute as Moore’s Law goes the way of Dennard scaling. Like this:

View attachment 8735

In the next decade, software is going to have to be co-designed with hardware on a ridiculous scale, and the idea of what constitutes a high-volume chip is going to change. It is going to be quite a balancing act between having a general purpose platform that has too much dark silicon but is cheaper per unit to make and having a specifically designed ASIC with all the right features for a specific workload right now.

This is going to be one, massive hardware-software engineering headache. So many choices, maybe too many choices.

Welcome to being a hyperscaler, everyone. Sucks, don’t it? At least Arm and its licensees – and maybe along with Nvidia – are going to try to help.
Great article FMF.

Now I just cannot help myself: what is one percent of 300 billion by 2030?

3 billion x 10 cents = $300 million
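Spelling that sum out (the 300 billion figure is Segars' forecast quoted below; the 1% share and 10 cents per chip are my own illustrative assumptions, not company guidance):

```python
# Back-of-the-envelope only: 1% of Arm's forecast 300 billion chips this
# decade at an assumed 10 cents per chip. The share and per-chip figures
# are the poster's assumptions, not guidance from anyone.
forecast_chips = 300e9     # Segars' forecast through 2030
assumed_share = 0.01       # hypothetical 1% penetration
assumed_per_chip = 0.10    # hypothetical US$0.10 per chip

revenue = forecast_chips * assumed_share * assumed_per_chip
print(f"{forecast_chips * assumed_share:,.0f} chips x ${assumed_per_chip:.2f} = ${revenue:,.0f}")
# 3,000,000,000 chips x $0.10 = $300,000,000
```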

“Segars said that by the end of 2021, Arm’s partners would ship a cumulative 200 billion devices based on its architecture. The first 100 billion took 26 years, as Acorn Computers evolved into Advanced RISC Machines and transmuted into Arm Holdings. The second 100 billion chips (through the end of 2021) took only five years to sell. And between the end of 2021 and the end of this decade, Segars predicts that Arm will sell another 300 billion chips. If history is any guide, then that is a run rate of 20 billion chips per year here in 2021 but around 55 billion per year through 2030. The rate of change of Arm deployments is itself expected to accelerate.”

My opinion only DYOR
FF

AKIDA BALLISTA
 
  • Like
  • Fire
  • Love
Reactions: 44 users

Filobeddo

Guest

Attachments

  • 6A3D2770-03D7-459B-B490-6B2F3332F1AD.gif
    6A3D2770-03D7-459B-B490-6B2F3332F1AD.gif
    270.7 KB · Views: 52
  • Like
Reactions: 4 users

LM77

Member
Stockhead Article

This artificial intelligence (AI) chip developer says its Akida neuromorphic processor mimics human brain processing.

Last month fresh chairman Antonio Viana compared it to a security camera where IR detects movement, turns the camera on and alerts you to some form of movement – but you have to figure out what it is.

“Where neuromorphic processing can come in would be a security camera that is ‘smart enough’ to know what is causing the movement… and not just send an alert that says ‘something’ is in the frame… but instead says ‘it is a dog’, ‘it is a human’. And even better, to know who that human is – ‘run away, your spouse’s family just showed up!’” he said.
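A toy sketch of the two-stage pipeline Viana is describing (the IR trigger wakes the camera, then an on-device classifier labels what caused the movement); the classifier below is a placeholder function standing in for a neuromorphic processor, not a real model:

```python
# Toy pipeline: a cheap always-on motion trigger, then on-device
# classification of the frame. Both stages are placeholders for illustration.

def motion_detected(ir_reading, threshold=0.5):
    """Stage 1: low-power trigger (e.g. a PIR sensor)."""
    return ir_reading > threshold

def classify_frame(frame_scores):
    """Stage 2: placeholder for an on-device classifier."""
    return max(frame_scores, key=frame_scores.get)

def handle_event(ir_reading, frame_scores):
    if not motion_detected(ir_reading):
        return "camera stays asleep"
    return f"alert: movement caused by a {classify_frame(frame_scores)}"

scores = {"dog": 0.91, "human": 0.06, "vehicle": 0.03}
print(handle_event(0.8, scores))   # alert: movement caused by a dog
print(handle_event(0.1, scores))   # camera stays asleep
```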

The company already has a bunch of patents under its belt and the plan now is commercialisation.

BRN ended the March quarter with US$31.2M in cash and has a market cap of $1.75b.
 
  • Like
  • Fire
  • Love
Reactions: 24 users

chapman89

Founding Member
Yes Tim, it will happen a lot sooner, thanks to Brainchip!

#saturatethemarket


4A028530-6F40-4523-A3B7-8245E5E6DB84.jpeg
 
  • Like
  • Fire
  • Love
Reactions: 60 users

chapman89

Founding Member
Juan Chapa working for both Brainchip and Galaxy Semiconductor now…
9FFD1501-6894-4A36-9755-AFC6729ED7ED.jpeg
B6FB9D49-2E8D-40BF-ADC8-604FAA83DDB8.jpeg
 
  • Like
  • Thinking
  • Fire
Reactions: 45 users

JK200SX

Regular
 
  • Like
  • Love
  • Fire
Reactions: 26 users

Boab

I wish I could paint like Vincent
Just received this email from the Edge AI and Vision Alliance:

edge-ai-vision.com

A NEWSLETTER FROM THE EDGE AI AND VISION ALLIANCE
Late May 2022 | VOL. 12, NO. 10
LETTER FROM THE EDITOR

Dear Colleague,

Intel webinar

On Thursday, August 25 at 9 am PT, Intel will deliver the free webinar “Accelerating TensorFlow Models on Intel Compute Devices Using Only 2 Lines of Code” in partnership with the Edge AI and Vision Alliance. Are you using Google's TensorFlow framework to develop your deep learning models? And are you doing inference processing on those models using Intel compute devices: CPUs, GPUs, VPUs and/or HDDL (High Density Deep Learning) processing solutions? If the answer to both questions is "yes", then this hands-on tutorial on how to integrate TensorFlow with Intel's Distribution of the OpenVINO toolkit for rapid development while also achieving accurate and high-performance inference results is for you!

TensorFlow developers can now take advantage of OpenVINO optimizations with TensorFlow inference applications across a wide range of Intel compute devices by adding just two lines of code. In addition to introducing OpenVINO and its capabilities, the webinar will include demonstrations of the concepts discussed via a code walk-through of a sample application. It will be presented by Kumar Vishwesh, Technical Product Manager, and Ragesh Hajela, Senior AI Engineer, both of Intel. A question-and-answer session will follow the presentation. For more information and to register, please see the event page.
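For anyone wondering what those two lines presumably look like, here is a minimal sketch assuming the openvino-tensorflow add-on package the webinar describes; the model and backend choice are placeholders:

```python
# Minimal sketch of the "two lines of code" TensorFlow + OpenVINO integration
# from the webinar blurb. Assumes the openvino-tensorflow add-on is installed;
# the Keras model below is just a placeholder workload.
import tensorflow as tf
import openvino_tensorflow as ovtf      # line 1: import the add-on

ovtf.set_backend("CPU")                 # line 2: choose an Intel device backend

# From here on, ordinary TensorFlow inference is routed through OpenVINO.
model = tf.keras.applications.MobileNetV2(weights="imagenet")
dummy = tf.random.uniform((1, 224, 224, 3))
print(model(dummy).shape)
```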

Brian Dipert
Editor-In-Chief, Edge AI and Vision Alliance

COMPENSATING FOR REAL-WORLD CONSTRAINTS AND CHANGES

Maintaining DNN Accuracy When the Real World is Changing (Observa)
We commonly train deep neural networks (DNNs) on existing data and then use the trained model to make predictions on new data. Once trained, these predictive models approximate a static mapping function from their input onto their predicted output. However, in many applications, the trained model is used on data that changes over time. In these cases, the predictive performance of these models degrades over time. In this 2021 Embedded Vision Summit talk, Erik Chelstad, CTO and co-founder of Observa, introduces the problem of concept drift in deployed DNNs. He discusses the types of concept drift that occur in the real world, from small variances in the predicted classes all the way to the introduction of a new, previously unseen class. He also discusses approaches to recognizing these changes and identifying the point in time when it becomes necessary to update the training dataset and retrain a new model. The talk concludes with a real-world case study.
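The summary doesn't say how Observa detects drift, but a common, simple heuristic is to watch the distribution of the model's predicted classes over time and flag when it shifts. A generic sketch (not Observa's method), assuming predicted labels are logged in batches:

```python
# Generic drift check: compare the predicted-class distribution of a recent
# window against a reference window with a chi-squared test. Illustrative
# heuristic only, not the approach described in the talk.
from collections import Counter
from scipy.stats import chisquare

def class_counts(labels, classes):
    counts = Counter(labels)
    return [counts.get(c, 0) for c in classes]

def drift_detected(reference_labels, recent_labels, classes, alpha=0.01):
    ref = class_counts(reference_labels, classes)
    rec = class_counts(recent_labels, classes)
    scale = sum(rec) / sum(ref)                      # match window sizes
    expected = [max(r * scale, 1e-9) for r in ref]
    _, p_value = chisquare(rec, f_exp=expected)
    return p_value < alpha

classes = ["person", "vehicle", "background"]
reference = ["person"] * 700 + ["vehicle"] * 200 + ["background"] * 100
recent = ["person"] * 400 + ["vehicle"] * 200 + ["background"] * 400
print(drift_detected(reference, recent, classes))    # True: distribution shifted
```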

Developing Edge Computer Vision Solutions for Applications with Extreme Limitations on Real-World Testing (Nedra)
Deploying computer vision-based solutions in very remote locations (such as those often found in mining and oil drilling) introduces unique challenges. For example, it is typically impractical to test solutions in the real operating environment—or to replicate the environment to enable testing during development. Further complicating matters, these remote locations typically lack network connectivity, making it impossible to monitor deployed systems. In this 2021 Embedded Vision Summit talk, Alexander Belugin, Computer Vision Product Manager at Nedra, presents specific practical techniques for overcoming these challenges. He covers 3D modeling, the use of GANs for data generation, testing set-ups and specialized software techniques. He also discusses methods for accelerating software adaptation to resolve issues when systems are deployed.

PEOPLE DETECTION AND TRACKING

Case Study: Facial Detection and Recognition for Always-On Applications (Synopsys)
Although there are many applications for low-power facial recognition in edge devices, perhaps the most challenging to design are always-on, battery-powered systems that use facial recognition for access control. Laptop, tablet and cellphone users expect hands-free and instantaneous facial recognition. This means the electronics must be always on, constantly looking to detect a face, and then ready to pull from a data set to recognize the face. This 2021 Embedded Vision Summit presentation from Jamie Campbell, Product Marketing Manager for Embedded Vision IP at Synopsys, describes the challenges of moving traditional facial detection neural networks to the edge. It explores a case study of a face recognition access control application requiring continuous operation and extreme energy efficiency. Finally, it describes how the combination of Synopsys DesignWare ARC EM and EV processors provides low-power, efficient DSP and CNN acceleration for this application.

Person Re-Identification and Tracking at the Edge: Challenges and Techniques (University of Auckland)
Numerous video analytics applications require understanding how people are moving through a space, including the ability to recognize when the same person has moved outside of the camera’s view and then back into the camera’s view, or when a person has passed from the view of one camera to the view of another. This capability is referred to as person re-identification and tracking. It’s an essential technique for applications such as surveillance for security, health and safety monitoring in healthcare and industrial facilities, intelligent transportation systems and smart cities. It can also assist in gathering business intelligence such as monitoring customer behavior in shopping environments. Person re-identification is challenging. In this 2021 Embedded Vision Summit talk, Morteza Biglari-Abhari, Senior Lecturer at the University of Auckland, discusses the key challenges and current approaches for person re-identification and tracking, as well as his initial work on multi-camera systems and techniques to improve accuracy, especially fusing appearance and spatio-temporal models. He also briefly discusses privacy-preserving techniques, which are critical for some applications, as well as challenges for real-time processing at the edge.

FEATURED NEWS

Intel to Acquire Fellow Alliance Member Company Codeplay Software

STMicroelectronics Releases Its First Automotive IMU with Embedded Machine Learning

Vision Components' VC Stereo Camera Targets 3D and Dual-camera Embedded Vision Applications

e-con Systems Launches a Time of Flight Camera for Accurate 3D Depth Measurement

Microchip Technology's 1 GHz SAMA7G54 Single-Core MPU Includes a MIPI CSI-2 Camera Interface and Advanced Audio Features

More News

EDGE AI AND VISION PRODUCT OF THE YEAR WINNER SHOWCASE

Blaize Pathfinder P1600 Embedded System on Module (Best Edge AI Processor)
Blaize’s Pathfinder P1600 Embedded System on Module (SoM) is the 2022 Edge AI and Vision Product of the Year Award winner in the Edge AI Processors category. The Blaize Pathfinder P1600 Embedded SoM, based on the Blaize Graph Streaming Processor (GSP) architecture, enables new levels of processing power at low power with high system utilization ideal for AI inferencing workloads in edge-based applications. Smaller than the size of a credit card, the P1600 operates with 50x lower memory bandwidth, 7 W of power at 16 TOPS, 10x lower latency, and 30x better efficiency than legacy GPUs – opening doors to previously unfeasible AI inferencing solutions for edge vision use cases including in-camera and machine at the sensor edge, and network edge equipment. The Pathfinder platform is 100% programmable via the Blaize Picasso SDK, a comprehensive software environment that accelerates AI development cycles, uniquely based on open standards – OpenCL, OpenVX, and supporting ML frameworks such as TensorFlow, Pytorch, Caffe2, and ONNX. The Picasso SDK permits building complete end-to-end applications with higher transparency, flexibility, and portability levels.

Please see here for more information on Blaize's Pathfinder P1600 Embedded SoM. The Edge AI and Vision Product of the Year Awards celebrate the innovation of the industry's leading companies that are developing and enabling the next generation of edge AI and computer vision products. Winning a Product of the Year award recognizes a company's leadership in edge AI and computer vision as evaluated by independent industry experts.


 
  • Like
Reactions: 12 users

TheFunkMachine

seeds have the potential to become trees.
  • Like
  • Fire
  • Love
Reactions: 26 users

Boab

I wish I could paint like Vincent
Not sure there is anything of substance in the newsletter above. It is just one that I subscribe to.
 
  • Like
  • Love
Reactions: 3 users