BRN Discussion Ongoing

Steve10

Regular
Dell with Snapdragon 8cx Gen 2 without a fan for only $499.


The Dell Inspiron 14 is the company's first-ever Qualcomm Snapdragon laptop​

BY OLIVER HASLAM
PUBLISHED 9 HOURS AGO

Dell has entered the Windows on ARM fray with the Inspiron 14.

Dell has announced the Inspiron 14 as the company's very first Windows on ARM laptop, and the first Dell laptop to be powered by Qualcomm's Snapdragon 8cx Gen 2 chip.

Dell uses the Inspiron name in a whole load of ways, including the Inspiron Chromebook 14, but this Inspiron 14 is a whole different beast. Sure, you can get it with Intel and AMD chips inside if you really must, but this latest model comes with a Snapdragon inside. And that means a few things.

First, it means it's pretty cheap. Really cheap, in fact. You can pick up a Dell Inspiron 14 with Snapdragon 8cx Gen 2 inside for $499 if you're in the market for such a thing.

The second thing is battery life. You can expect tonnes of it, thanks in part to that Qualcomm chip that's nibbling on the Inspiron 14's battery rather than trying to eat it whole. Dell reckons you can look forward to using the laptop for around 16 hours between charges, although that's likely to vary significantly depending on what you're actually doing.

Oh, and because this laptop isn't running Intel or AMD, there's no need for a fan.

The Snapdragon-powered Inspiron 14 comes with 8GB of RAM and a 256GB PCIe NVMe SSD inside, while an Adreno 690 takes care of pushing pixels around. Those pixels live on a 14-inch FHD display that isn't touch-capable, but so be it. It does have nice thin bezels, at least.

In terms of software, you're looking at Windows 11 Home, and you can have any colour you want so long as it's silvery grey. Other bits and pieces include a 65W USB-C power adapter, a built-in microSD card reader, and a pair of USB-C 3.2 Gen 2 ports. A single USB 2.0 Type-A port is also offered for legacy connections.

We'll of course need to take this thing for a spin to be sure, but things are shaping up pretty nicely for the Dell Inspiron 14. Especially when you consider it costs iPad money. You can learn more and place your order - in the United States, at least - on Dell's website.
@Snapdragon:

".@Dell just launched the #Snapdragon 8cx Gen 2 powered Inspiron 14: its first-ever Snapdragon powered laptop. Equipped with @Windows 11, and built for on-the-go users, it packs up to 16 hours of hi-def streaming. https://dell.to/3TcznIs"

https://www.pocket-lint.com/the-dell-inspiron-14-is-the-companys-first-ever-qualcomm-snapdragon-laptop/
 
  • Like
  • Wow
  • Fire
Reactions: 23 users

Dhm

Regular
"Dryyy!" - didn't you read the notice on the front of every patent:

"WARNING! This document should not be attempted unless accompanied by a large bottle of shiraz"
How large?




Just don't stop pouring, because I won't stop drinking!
 
  • Haha
  • Like
  • Love
Reactions: 14 users

Steve10

Regular

Qualcomm aims to become one-stop shop for IoT development​


Mobile technology giant announces intention to become dedicated destination for developers across the IoT space​

Joe O’Halloran

By
Published: 15 Mar 2023 17:00

As the need for connected, intelligent and autonomous devices grows rapidly, businesses attempting to prosper in this fast-moving economy increasingly need a reliable source of control and connectivity technology for their internet of things (IoT) and robotic devices. Aiming to address these needs, Qualcomm has revealed a plan to become a “one-stop shop” for developers across the IoT space, the first part of which sees the opening of a new IoT centre of excellence and the launch of new IoT products.

Qualcomm has expanded its strategic collaboration with components and enterprise computing services provider Arrow Electronics to establish the Edge Labs Centre of Excellence, which aims to alleviate IoT development challenges while increasing the adoption of edge artificial intelligence (AI) across a variety of IoT applications and use cases.

The firms say edge and AI services development is becoming increasingly challenging for customers due to factors such as a lack of prior experience, limited access to high-performance edge and AI chipsets, supply chain complexity, and a fledgling ecosystem.

Edge Labs has been set up to help innovators navigate these challenges while increasing the adoption of Edge AI using services from Qualcomm Technologies across security, safety, healthcare, robotics, cameras, displays, optical inspection and other IoT applications.

Edge Labs will have a dedicated service architect and engineering team to develop application-specific offerings, including training sales and field application engineers specifically on Qualcomm products. It will also offer design services to enable lower risk and faster time to market for customers through eInfochips, an Arrow company.

"Edge AI is the next big engineering frontier, and we’re thrilled to expand our strategic collaboration with Arrow Electronics to strengthen the development and proliferation of IoT technologies and serve a more diverse and global customer base,” said Dev Singh, vice-president of business development for Qualcomm Technologies. “Edge Labs COE customers will have the ability to unlock new and unique edge AI use cases.”

The first development kit from eInfochips as part of the Edge Labs initiative, the Aikri 42x, based on the Qualcomm QRB4210 SoC, has just launched. This coincides with Qualcomm's launch of what it calls the world's first integrated 5G IoT processors that can support four different operating systems, in addition to two new robotics platforms and an accelerator program for IoT ecosystem partners.

 
  • Like
  • Fire
Reactions: 13 users

HopalongPetrovski

I'm Spartacus!
This doesn't surprise me at all and actually makes perfect sense. Years ago when I was reading BRN patents (brainhurty and dryyyy) I recall that BRN was cited by Qualcomm and IBM at a time when PVDM was saying we were 5 years ahead of our competitors. If Qualcomm was citing BRN in their patents 5 years ago this may well be a case of "if you can't beat 'em, join 'em"...
Are you from Bravo's litter?

 
  • Haha
  • Like
Reactions: 18 users

TECH

Regular
They are using this processor by Texas Instruments.

AM62A7-Q1​

PREVIEW

2 TOPS vision SoC with RGB-IR ISP for 1-2 cameras, driver monitoring, front cameras​


Arm CPU: 2 or 4 Arm Cortex-A53

Hardware accelerators: 1 deep learning accelerator, 1 video encode/decode accelerator, 1 vision pre-processing accelerator

  • Deep Learning Accelerator based on Single-core C7x
    • C7x floating point, up to 40 GFLOPS, 256-bit Vector DSP at 1.0 GHz
    • Matrix Multiply Accelerator (MMA), up to 2 TOPS (8b) at 1.0 GHz
    • 32KB L1 DCache with SECDED ECC and 64KB L1 ICache with Parity protection
    • 1.25MB of L2 SRAM with SECDED ECC
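As a rough sanity check on those figures (my own back-of-envelope arithmetic, not TI's stated methodology): a peak TOPS rating is roughly MAC units per cycle × ops per MAC × clock frequency, with an 8-bit MAC conventionally counted as two ops. Assuming around 1,024 MACs per cycle, which is a guess consistent with the quoted "up to 2 TOPS (8b) at 1.0 GHz":

```python
def peak_tops(macs_per_cycle: int, freq_ghz: float, ops_per_mac: int = 2) -> float:
    """Peak throughput in TOPS: MAC units x ops/MAC x clock (GHz) / 1000."""
    return macs_per_cycle * ops_per_mac * freq_ghz / 1000.0

# Assumed 1,024 8-bit MACs per cycle at 1.0 GHz ~= the quoted 2 TOPS figure.
print(peak_tops(1024, 1.0))  # 2.048
```

The same arithmetic explains why TOPS numbers scale linearly with clock speed and datapath width, which is worth keeping in mind when comparing vendors' headline figures.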
Doesn't look like Akida.

Not for too much longer, though. I'm just waiting for a former staffer at Texas Instruments to wave her magic wand at her old colleagues and suggest that they move with the times: Akida event-based processors rule.

Of course I'm referring to Duy-Loan Le our marvellous addition to the Board recently. ;)

Tech.
 
  • Like
  • Love
  • Fire
Reactions: 44 users
Are you from Bravo's litter?

Yes, that's me - I'm the bad, trouble-maker, problem-child strawberry blonde on the far right that's noticing and retaining copious amounts of largely irrelevant stuff while everyone else is focused on the important big picture, but then I will come in with a slam-down and king hit the entire thesis of your argument for a TKO based on something you once flippantly said in an off-the-cuff comment in circa 1989 that I never forgot.
So, yes, that would be me. 😺
 
  • Haha
  • Like
  • Love
Reactions: 17 users

Steve10

Regular

Four Edge AI Trends To Watch​

Forbes Technology Council
Ravi Annavajjhala
Forbes Councils Member
Forbes Technology Council
COUNCIL POST| Membership (Fee-Based)

Mar 15, 2023, 06:45am EDT
Ravi Annavajjhala - CEO, Kinara Inc.

As 2023 progresses, demand for AI-powered devices continues growing, driving new opportunities and challenges for businesses and developers. Technology advancements will make it possible to run more AI models on edge devices, delivering real-time results without cloud reliance.

Based on these developments, here are some key predictions to expect:

Increased Adoption​

Edge AI technology has proven its value and we can expect to see further widespread adoption in 2023 and beyond. Companies will continue to invest in edge AI to improve their operations, enhance products (e.g., making them safer or adding features) and gain competitive advantages. AI's adoption will also be driven by innovative applications such as ChatGPT, generative AI models (e.g., avatars) and other state-of-the-art AI models that will be used for applications in medtech, industrial safety and security.

We are also witnessing edge AI transitioning from a technology problem to a deployment problem. In other words, companies understand edge AI's capabilities, but getting it running in a commercial product, sometimes with multiple AI models in parallel to fulfill an application's requirements, is a new challenge.

Nevertheless, I expect continued progress in this area as companies witness the benefits of edge AI and work to overcome these challenges. Growing awareness of the costs, energy consumption and latency of running AI in the cloud will likely drive more users to run AI at the edge.

Furthermore, as businesses grow their trust in the technology, edge AI will become increasingly integrated into a wide range of devices, from smartphones and laptops to industrial machines and surveillance systems. This will create new opportunities for businesses to harness AI’s power and improve their products and services.

Improved Performance And More Advanced AI Models​

With advancements in hardware and software, edge AI devices will become more powerful, delivering faster and more accurate results. Although edge devices will still be compute-limited compared to cloud processing and expensive and power-hungry GPUs, I expect a trend towards higher tera operations per second (TOPS) and real-world performance for edge AI processors. As a result, there will be a shift towards more compute-intensive (and accurate) models.

For AI processing, developers are most interested in using leading-edge neural networks for improved accuracy. These network models include YOLO (You Only Look Once), Transformers and MovieNet. Due to its good out-of-the-box performance, YOLO is expected to remain the dominant form of object detection in the years to come. And edge AI processors should advance alongside this technology as newer, more compute-intensive versions of YOLO become available.

Transformer models are also increasing in popularity for vision applications, as they are being actively researched as new approaches to complex vision tasks. The ability to perform computations in parallel and capture long-range dependencies in visual features makes transformers a powerful tool for processing high-dimensional data in computer vision. With the increasing compute capability of edge AI processors, we'll see a shift towards more transformer models as they become more accessible for edge deployment.
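For the curious, the mechanism being described here (every position attending to every other position in parallel) can be sketched in a few lines of plain Python. This is a toy scaled dot-product attention, not a real vision-transformer implementation:

```python
import math
from typing import List

Vec = List[float]

def softmax(xs: Vec) -> Vec:
    """Numerically stable softmax: subtract the max before exponentiating."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def attention(queries: List[Vec], keys: List[Vec], values: List[Vec]) -> List[Vec]:
    """Scaled dot-product attention: each query mixes ALL value rows,
    so a dependency between distant positions costs one step, not many."""
    d = len(keys[0])
    out = []
    for q in queries:
        # Similarity of this query to every key, scaled by sqrt(d).
        scores = softmax([sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d)
                          for k in keys])
        # Output is a convex combination of the value rows.
        out.append([sum(w * v[j] for w, v in zip(scores, values))
                    for j in range(len(values[0]))])
    return out

# 3 positions with 2-dim features; self-attention mixes all three rows.
x = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
print(attention(x, x, x))
```

Each inner loop over queries is independent, which is exactly what makes the mechanism parallelise well on accelerator hardware.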

Activity recognition is the next frontier for edge AI as businesses seek to gain insights into human behavior. For example, in retail, depth cameras determine when a customer's hand reaches into a shelf. This shift from image-based tasks to analyzing sequences of video frames is driving the popularity of models like MovieNet.

March Towards Greater Interoperability Of AI Frameworks​

As the edge AI market matures, expect to see increased standardization and interoperability between devices. This will make it easier for businesses to integrate edge AI into existing systems, improving efficiency and reducing costs. From a software perspective, standards such as Tensor Virtual Machine (TVM) and Multi-Level Intermediate Representation (MLIR) are two emerging trends in the edge AI space.

TVM and MLIR are open-source deep-learning compiler stacks, or frameworks for building compilers, that aim to standardize the deployment of AI models across different hardware platforms. They provide a unified API for AI models, enabling developers to write code once and run it efficiently on a wide range of devices, including cloud instances and hardware accelerators.
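The "write once, run on many targets" idea these stacks aim for can be illustrated with a toy dispatch layer. The names below (`compile_model`, `register_backend`) are hypothetical, not TVM's or MLIR's actual APIs:

```python
from typing import Callable, Dict, List

# Toy unified-compiler sketch: one front-end API, pluggable per-target backends.
Model = List[float]                                 # stand-in for a network
Backend = Callable[[Model], Callable[[float], float]]

_backends: Dict[str, Backend] = {}

def register_backend(target: str, backend: Backend) -> None:
    """Each hardware target registers its own lowering strategy."""
    _backends[target] = backend

def compile_model(model: Model, target: str) -> Callable[[float], float]:
    """Single entry point: the same model compiles for different targets."""
    return _backends[target](model)

# Two toy 'targets' that execute the same dot-product model differently.
register_backend("cpu", lambda m: lambda x: sum(w * x for w in m))
register_backend("accel", lambda m: lambda x: x * sum(m))  # algebraically fused

model = [1.0, 2.0, 3.0]
run_cpu = compile_model(model, "cpu")
run_accel = compile_model(model, "accel")
print(run_cpu(2.0), run_accel(2.0))  # 12.0 12.0
```

The operator-coverage problem the next paragraph mentions maps directly onto this sketch: a backend that lacks a lowering for some operator simply cannot register it, and the model fails to compile for that target.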

While these standards are becoming more stable, they are still not expected to become mass-adopted in 2023. Neural network operator coverage remains an issue, and targeting different accelerators remains a challenge. However, the industry will see continued work in this area as these technologies evolve.

Increased Focus On Security​

As edge AI becomes more widely adopted, there will be a greater focus on securing the sensors generating the data and the AI processors consuming the data. This will include efforts to secure both the hardware and software, as well as the data transmitted and stored. State-of-the-art edge AI processors will include special hardware features to secure all data associated with the neural network’s activity.

In the context of AI models, encryption can protect the sensitive data a model is trained on; for many companies, this model information is the crown jewel. In addition, it will be important to secure the model's parameters and outputs during deployment and inference. Encrypting and decrypting the data and model helps to prevent unauthorized access to the information, ensuring the confidentiality and integrity of both. Encryption and decryption can introduce latency and computational overhead, so the trick for edge AI processor companies will lie in choosing encryption methods carefully and weighing the trade-offs between security and performance.
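A toy round trip illustrating the idea of keeping model parameters encrypted at rest. The XOR keystream here is for illustration only; a real deployment would use a vetted cipher such as AES from a maintained crypto library:

```python
import hashlib
import struct

def _keystream(key: bytes, n: int) -> bytes:
    """Toy keystream via iterated SHA-256 (illustration only, NOT production crypto)."""
    out = bytearray()
    counter = 0
    while len(out) < n:
        out += hashlib.sha256(key + struct.pack(">Q", counter)).digest()
        counter += 1
    return bytes(out[:n])

def xor_cipher(data: bytes, key: bytes) -> bytes:
    """XOR with a key-derived stream; applying it twice restores the input."""
    ks = _keystream(key, len(data))
    return bytes(a ^ b for a, b in zip(data, ks))

weights = struct.pack("4f", 0.1, -0.2, 0.3, -0.4)  # pretend model parameters
key = b"device-unique-secret"
blob = xor_cipher(weights, key)                    # parameters 'encrypted' at rest
assert blob != weights                             # ciphertext differs from plaintext
assert xor_cipher(blob, key) == weights            # symmetric round trip recovers them
```

Even this toy version makes the overhead point concrete: decryption touches every byte of the weights on each load, which is exactly the latency/throughput cost the author says edge processors must budget for.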

Conclusion​

In conclusion, 2023 promises to be an exciting year for the edge AI industry, with new opportunities and challenges for businesses and developers alike. As edge AI continues to mature and evolve, we’ll see increased adoption, improved performance, greater interoperability, more AI-powered devices, increased focus on security and new business models. The limitations and challenges we face today will be overcome, and I have no doubt that edge AI will bring about incredible advancements.


Forbes Technology Council is an invitation-only community for world-class CIOs, CTOs and technology executives.
 
  • Like
  • Fire
  • Love
Reactions: 27 users

HopalongPetrovski

I'm Spartacus!
Yes, that's me - I'm the bad, trouble-maker, problem-child strawberry blonde on the far right that's noticing and retaining copious amounts of largely irrelevant stuff while everyone else is focused on the important big picture, but then I will come in with a slam-down and king hit the entire thesis of your argument for a TKO based on something you once flippantly said in an off-the-cuff comment in circa 1989 that I never forgot.
So, yes, that would be me. 😺
Okilidokilie. 🤣

 
  • Haha
  • Like
Reactions: 15 users

M_C

Founding Member
  • Like
  • Fire
Reactions: 18 users

Easytiger

Regular
TBH I find this post kind of concerning. What’re everyone’s thoughts?
Why concerning?
 
  • Like
  • Thinking
Reactions: 3 users

M_C

Founding Member
  • Haha
  • Like
Reactions: 14 users
D

Deleted member 118

Guest
 
  • Fire
  • Like
Reactions: 4 users
Just for interest this is the link to Spiral Blue’s Edge Compute products powered by Nvidia with detailed spec sheets:


How can AKIDA, running at micro to milliwatts with on-chip learning and a price tag of tens of US dollars, compete with their current Nvidia offerings? (Please note this is rhetorical humour 🤣😂🤣)

What did Edge Impulse say again? "Science fiction": AKIDA running at 300 megahertz can outperform a GPU running at 900 megahertz.

My opinion only DYOR
FF

AKIDA BALLISTA
 
  • Like
  • Fire
  • Haha
Reactions: 34 users

Steve10

Regular

I posted about this guy yesterday. ARM Cortex-M MCU again, similar to Renesas. If it has an ARM core with Helium tech, Akida can be integrated: either the M85 as announced, or the M55 with Helium tech.

Remi El-Ouazzane’s Post​

View profile for Remi El-Ouazzane
Remi El-Ouazzane
10mo

Earlier today during STMicroelectronics Capital Markets Day (https://cmd.st.com), I gave a presentation on MDG's contribution to our ambition of reaching $20B revenue by 2025-27. During the event, I was proud to pre-announce the #STM32N6: a high performance #STM32 #MCU with our new internally developed Neural Processing Unit (#NPU) providing an order of magnitude benefit in both inference/W and inference/$ against alternative MPU solutions. The #STM32N6 will deliver #MPU #AI workloads at the cost and the power consumption of an #MCU. This is a complete game changer that will open new ranges of applications for our customers and allow them to democratise #AI at the edge. I am excited to say we are on track to deliver first samples of the #STM32N6 by the end of 2022. I am even more excited to announce that LACROIX will leverage this technology in their next generation smart city products. Stay tuned for more news on the #STM32N6 in the coming months :=)
 
  • Like
  • Fire
Reactions: 17 users

Steve10

Regular
Just for interest this is the link to Spiral Blue’s Edge Compute products powered by Nvidia with detailed spec sheets:


How can AKIDA, running at micro to milliwatts with on-chip learning and a price tag of tens of US dollars, compete with their current Nvidia offerings? (Please note this is rhetorical humour 🤣😂🤣)

What did Edge Impulse say again? "Science fiction": AKIDA running at 300 megahertz can outperform a GPU running at 900 megahertz.

My opinion only DYOR
FF

AKIDA BALLISTA

The Spiral Blue CEO finding out about Akida.

 
  • Haha
  • Like
Reactions: 22 users

SERA2g

Founding Member
The first of many new customer enquires as a result of this news:

Taofiq Huq
Founder and CEO of Spiral Blue
2h

"Very very interesting! Would be keen to know more about the Akida and whether we can incorporate it into our Space Edge Computer in some way."


My opinion only DYOR
FF

AKIDA BALLISTA
In the interest of potentially answering some initial questions others might have about Spiral Blue...

From @Diogenese (with permission) earlier this month. I ran Spiral Blue's space edge computer past him for his opinion.



"Hi SeRA2,

No published patent docs, but applications are not published until 18 months after filing.

They use Nvidia, so they are probably software.

https://spiralblue.space/space-edge-computing

Space Edge Computers use the NVIDIA Jetson series, maximising processing power while keeping power draw manageable. They carry polymer shielding for single event effects, as well as additional software and hardware mitigations. We provide onboard infrastructure software to manage resources and ensure security, as well as onboard apps such as preprocessing, GPU based compression, cloud detection, and cropping. We can also provide AI based apps for object detection and segmentation, such as Vessel Detect and Canopy Mapper.

Note our friend "segmentation" is along for the ride."
 
  • Like
  • Love
Reactions: 25 users

TopCat

Regular
I’ve been reading some more again from the end of 2019 company progress update and I can’t work out what ADE stands for. Anyone know?

Talk about intellectual property licensing a bit more. There were a lot of questions about it, and I think in part that's because we've voiced a strong opinion, coming in advance of actual device sales. There's no manufacturing process involved. There's no inventory. There's no loan package qualification by the customer. We released that in 2019. We have received strong response from prospective customers. The ADE, in the hands of one major South Korean company, is being exercised almost as much as we exercise it. They really have dug in, validated some of the benchmark results that we've provided, and now they're moving onto some of their own proprietary networks to do validation.
 
  • Like
  • Fire
Reactions: 8 users

SiDEvans

Regular
How I’m feeling about BRN today!

 
  • Like
  • Haha
  • Love
Reactions: 38 users
D

Deleted member 118

Guest
 
  • Like
  • Fire
Reactions: 4 users

Steve10

Regular
I’ve been reading some more again from the end of 2019 company progress update and I can’t work out what ADE stands for. Anyone know?

Talk about intellectual property licensing a bit more. There were a lot of questions about it, and I think in part that's because we've voiced a strong opinion, coming in advance of actual device sales. There's no manufacturing process involved. There's no inventory. There's no loan package qualification by the customer. We released that in 2019. We have received strong response from prospective customers. The ADE, in the hands of one major South Korean company, is being exercised almost as much as we exercise it. They really have dug in, validated some of the benchmark results that we've provided, and now they're moving onto some of their own proprietary networks to do validation.
Akida™ Development Environment (ADE).

BrainChip’s Akida Development Environment Now Freely Available for Use​

Develop and Deploy on Akida Deeply Learned Neural Networks in a standard TensorFlow/Keras Environment

SAN FRANCISCO–(BUSINESS WIRE)–
BrainChip Holdings Ltd. (ASX: BRN), a leading provider of ultra-low power, high-performance edge AI technology, today announced that access to its Akida™ Development Environment (ADE) no longer requires pre-approval, now allowing designers to freely develop systems for edge and enterprise products on the company’s Akida Neural Processing technology.

ADE is a complete, industry-standard machine learning framework for creating, training and testing deeply learned neural networks. The platform leverages TensorFlow and Keras for neural network development, optimization and training. Once the network model is fully trained, the ADE includes a simple-to-use compiler to map the network to the Akida fabric and run hardware-accurate simulations on the Akida Execution Engine. The framework uses the Python scripting language and its associated tools and libraries, including Jupyter notebooks, NumPy and Matplotlib. With just a few lines, developers can easily run the Akida simulator on industry-standard datasets and benchmarks in the Akida model zoo, such as ImageNet1000, Google Speech Commands and MobileNet, among others. Users can easily create, modify, train and test their own models within a simple-to-use development environment.

ADE comprises three main Python packages:

  • The Akida Execution Engine, including the Akida Simulator, is an interface to the BrainChip Akida neural processing hardware. To allow the development, optimization and testing of Akida models, it includes a software backend that simulates the Akida NSoC. The Akida Execution Engine also generates all of the files necessary to run the Akida neural processor hardware.
  • The CNN development tool utilizes TensorFlow/Keras to develop, optimize and train deeply learned neural networks such as CNNs.
  • The Akida model zoo contains pre-created neural network models built with the Akida sequential API and the CNN development tool, using quantized Keras models.
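For readers wondering what "quantized Keras models" means in practice, here is a plain-Python sketch of uniform symmetric quantization, the general technique such flows rely on. This is an illustration only, not BrainChip's actual conversion tooling:

```python
from typing import List, Tuple

def quantize(weights: List[float], bits: int = 4) -> Tuple[List[int], float]:
    """Uniform symmetric quantization: map floats to signed ints with one scale."""
    qmax = 2 ** (bits - 1) - 1                     # e.g. 7 for 4-bit signed
    scale = max(abs(w) for w in weights) / qmax    # one scale for the whole tensor
    return [round(w / scale) for w in weights], scale

def dequantize(q: List[int], scale: float) -> List[float]:
    """Recover approximate float weights from the int codes and the scale."""
    return [v * scale for v in q]

w = [0.7, -0.3, 0.1, -0.6]
q, s = quantize(w, bits=4)
print(q)              # [7, -3, 1, -6], with scale ~0.1
print(dequantize(q, s))
```

Lower bit widths shrink the weight memory and let hardware use cheap integer arithmetic, at the cost of rounding error bounded by half the scale per weight.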
“The enormous success of our early-adopters program allowed us to make ADE available to developers looking to use an Akida-based environment for their deep machine learning needs,” said Louis DiNardo, CEO of BrainChip. “This is an important milestone for BrainChip as we continue to deliver our technology to a marketplace in search of a solution to overcome the power- and training-intense needs that deep learning networks currently require. With the ADE, designers can access the tools and resources needed to develop and deploy Edge application neural networks on the Akida neural processing technology.”

Akida is available as a licensable IP technology that can be integrated into ASIC devices and will be available as an integrated SoC, both suitable for applications such as surveillance, advanced driver assistance systems (ADAS), autonomous vehicles (AV), vision guided robotics, drones, augmented and virtual reality (AR/VR), acoustic analysis, and Industrial Internet-of-Things (IoT). Akida is a complete neural processing engine for edge applications, which eliminates CPU and memory overhead while delivering unprecedented efficiency, faster results, at minimum cost. Functions like training, learning, and inferencing are orders of magnitude more efficient with Akida.

Access to ADE is currently available online at https://doc.brainchipinc.com/. Among the resources are installation information, user guide, API reference, Akida examples, support and license documentation. ADE requires TensorFlow 2.0.0. Any existing virtual environment previously used would need to be updated as per the installation step.
 
  • Like
  • Fire
  • Love
Reactions: 41 users