BRN Discussion Ongoing


Deleted member 118

Guest
Some people might find this article of some interest



[attached image]
 
  • Like
  • Fire
Reactions: 6 users

M_C

Founding Member
  • Like
  • Fire
  • Thinking
Reactions: 10 users
Article on Edge computer vision posted by our mates at Edge Impulse. Two things I took out of this:
Firstly, there are a lot more companies in this space than I’d realised. And secondly, although we are not mentioned among the 34 companies having a positive impact on society, I feel our impact will not only be positive, it’ll be F…… ginormous!!!
Have a great weekend Chippers!!
https://omdena.com/blog/edge-computer-vision-companies/
Oh, there are many more; I went through 80 before I invested in Brainchip in 2020.

I just browsed the list quickly and it mostly seems like a list of companies that use AI for specific applications, not so much ones developing AI, and especially not ones developing neuromorphic chips. This list doesn't include the heavyweights in edge AI, like Brainchip, GrAI Matter Labs, Intel, IBM. I can maybe understand IBM and Intel, as they don't have a commercially available product, but why not Brainchip and GrAI Matter Labs???
 
  • Like
  • Thinking
Reactions: 9 users

Makeme 2020

Regular
Renesas Electronics Corporation


Renesas Expands RISC-V Embedded Processing Portfolio with New Voice-Control ASSP Solution

New RISC-V ASSP Allows Industrial and Consumer Electronics Designers to Quickly and Cost Effectively Differentiate Applications with Voice Recognition
March 30, 2023
RISC-V MCU Turnkey Voice HMI Solution
TOKYO, Japan ― Renesas Electronics Corporation (TSE: 6723), a premier supplier of advanced semiconductor solutions, today extended its industry-leading RISC-V portfolio with the first RISC-V MCU designed for voice-controlled HMI (human-machine interface) systems. The new R9A06G150 32-bit ASSP, developed in collaboration with RISC-V ecosystem partners, provides a complete, cost-effective, production-ready voice-control system solution that eliminates the need for RISC-V tools and upfront software investment.
Targeting residential and commercial building automation, home appliances, toys and healthcare devices, the new ASSP supports multiple languages and user-defined keywords for voice-recognition operations. The foundation of a turnkey voice-control solution, the R9A06G150 is pre-programmed using specialized application code developed by independent design houses with a proven ability to bring customer designs to volume production.
The introduction reinforces Renesas as a leader in the emerging market for the open-source RISC-V ISA (instruction set architecture) and follows introductions last year of a 32-bit motor-control ASSP and 64-bit general-purpose RZ/Five MPUs based on a 64-bit RISC-V CPU.
“Renesas continues to broaden our RISC-V MCU/MPU offering, in this case by delivering designers an innovative voice-command HMI with pre-developed software that shortens development time and time to market,” said Roger Wendelken, Senior Vice President in Renesas’ IoT and Infrastructure Business Unit. “Through close collaboration with the RISC-V partner ecosystem, this flexible, scalable ASSP solution allows customers interested in voice control to support different application classes on the same MCU hardware platform.”

Optimized Solution in Collaboration with Global Partners​

Renesas’ new R9A06G150 voice-control HMI ASSP is based on RISC-V processing IP from Andes Technology Corp., which incorporated its AndesCore D25F CPU core based on the AndeStar™ V5 architecture. "We are excited to be part of Renesas' turnkey voice control solution," said Frankwell Lin, Andes’ Chairman and CEO. "The 32-bit AndesCore D25F was engineered to deliver high per-MHz performance in a small silicon footprint. Major mixed-signal ASSP developers such as Renesas can thus concentrate on their core engineering innovation and outsource non-critical elements of their design to CPU core suppliers such as Andes."
Renesas will deliver the first RISC-V voice-control ASSP pre-programmed with dedicated application code developed by Cyberon, a leading independent design house and expert in voice-recognition technology, and Orbstar, a systems integrator specializing in embedded solutions. As they did with Renesas’ previous motor-control ASSP, SEGGER Microcontroller GmbH will provide support for the voice-control ASSP with its complete ecosystem, including Embedded Studio and J-Link.

Key Features of the R9A06G150 Voice-Control HMI ASSP Solution​

  • 100MHz CPU with DSP instructions, floating-point extension and cost-optimized specification
  • Specialized audio PDM & SSI I/F for seamless connection to microphones and codecs
  • Compatible with low-cost analog microphones and outputs
  • Low power consumption helps end products save energy for a greener environment through an optimal mix of low active current, reduced standby current, background operation, SRAM power-off options, fast wakeup and low-power timers
  • Controlled by an external host I/F via SCI/UART, SPI, I3C or I2C (see the host-side sketch after this list)
  • Small package support for cost and integration effectiveness (QFN 48, 32, 24)
  • 256KB program flash, 128KB RAM, 16KB data flash
  • QSPI interface for easy memory expansion
  • Complete reference kit: hardware, software, tools, hardware/software datasheets, GUI manual, app notes
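Since the ASSP is driven entirely by an external host over one of the serial interfaces listed above, a host-side smoke test can be as simple as opening the port and exchanging a few bytes. The sketch below uses pyserial; the port name, baud rate and command/response bytes are illustrative placeholders I have assumed, not the documented R9A06G150 command set (the reference kit's datasheets and app notes define the real protocol).

```python
# Hypothetical host-side sketch: talking to a voice-control ASSP over UART
# with pyserial. The port name, baud rate and command bytes are assumptions
# for illustration only -- they are NOT the documented R9A06G150 protocol.
import serial

with serial.Serial("/dev/ttyUSB0", baudrate=115200, timeout=1.0) as port:
    port.write(b"\x01\x10")      # hypothetical "start keyword detection" command
    status = port.read(2)        # hypothetical 2-byte status / keyword-ID reply
    print("ASSP replied:", status.hex())
```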

RISC-V Voice Control Winning Combination​

Renesas has designed a Winning Combination, Voice Control HMI with RISC-V ASSP, that employs the R9A06G150 and other compatible devices from the Renesas portfolio to support general-purpose voice-control HMI systems. This integrated, pre-programmed, turnkey solution adds HMI voice-control capability without any user code programming and can be used in IoT, hands-free, or smart appliance applications. The complete reference design enables customized wake-up words and commands and supports multiple languages. Renesas’ Winning Combinations are technically vetted system architectures built from mutually compatible devices that work together seamlessly to deliver an optimized, low-risk design for a faster time to market. Renesas offers more than 300 Winning Combinations with a wide range of products from the Renesas portfolio to enable customers to speed up the design process and bring their products to market more quickly. They can be found at renesas.com/win.

Availability​

The R9A06G150 32-bit MCU voice-control HMI ASSP solution is available now. For further information, please visit: renesas.com/R9A06G150. Renesas has also published a RISC-V blog with additional information at www.renesas.com/blogs/revving-risc-v-engine-help-designers-explore-breakthrough-processor-architecture.

Renesas MCU Leadership​

A world leader in MCUs, Renesas ships more than 3.5 billion units per year, with approximately 50% of shipments serving the automotive industry, and the remainder supporting industrial and Internet of Things applications as well as data center and communications infrastructure. Renesas has the broadest portfolio of 8-, 16- and 32-bit devices, and is the industry's No. 1 supplier of 16- and 32-bit MCUs combined, delivering unmatched quality and efficiency with exceptional performance. As a trusted supplier, Renesas has decades of experience designing smart, secure MCUs, backed by a dual-source production model, the industry’s most advanced MCU process technology and a vast network of more than 200 ecosystem partners. For more information about Renesas MCUs, visit: www.renesas.com/MCUs.

About Renesas Electronics Corporation​

Renesas Electronics Corporation (TSE: 6723) empowers a safer, smarter and more sustainable future where technology helps make our lives easier. A leading global provider of microcontrollers, Renesas combines our expertise in embedded processing, analog, power and connectivity to deliver complete semiconductor solutions. These Winning Combinations accelerate time to market for automotive, industrial, infrastructure and IoT applications, enabling billions of connected, intelligent devices that enhance the way people work and live. Learn more at renesas.com. Follow us on LinkedIn, Facebook, Twitter, YouTube, and Instagram.
All names of products or services mentioned in this press release are trademarks or registered trademarks of their owners.

The content in the press release, including, but not limited to, product prices and specifications, is based on the information as of the date indicated on the document, but may be subject to change without prior notice.


 
  • Like
  • Fire
  • Thinking
Reactions: 16 users

Tothemoon24

Top 20
[attached image]
 
  • Like
  • Fire
  • Thinking
Reactions: 17 users

Tothemoon24

Top 20

Abstract​

Edge detectors are widely used in computer vision applications to locate sharp intensity changes and find object boundaries in an image. The Canny edge detector is the most popular edge detector, and it uses a multi-step process, including the first step of noise reduction using a Gaussian kernel and a final step to remove the weak edges by the hysteresis threshold. In this work, a spike-based computing algorithm is presented as a neuromorphic analogue of the Canny edge detector, where the five steps of the conventional algorithm are processed using spikes. A spiking neural network layer consisting of a simplified version of a conductance-based Hodgkin–Huxley neuron as a building block is used to calculate the gradients. The effectiveness of the spiking neural-network-based algorithm is demonstrated on a variety of images, showing its successful adaptation of the principle of the Canny edge detector. These results demonstrate that the proposed algorithm performs as a complete spike domain implementation of the Canny edge detector.
Keywords:
edge detection; segmentation; spiking neural networks; bio-inspired neurons



Graphical Abstract

1. Introduction​

Artificial neural networks (ANNs) have become an indispensable tool for implementing machine learning and computer vision algorithms in a variety of pattern recognition and knowledge discovery tasks for both commercial and defense interests. Recent progress in neural networks is driven by the increase in computing power in data centers, cloud computing platforms, and edge computing boards. In size, weight, and power (SWaP)–constrained applications, such as unmanned aerial vehicles (UAVs), augmented reality headsets, and smartphones, more novel computing architectures are desirable. The state-of-the-art deep learning hardware platforms are often based on graphics processing units (GPUs), tensor processing units (TPUs) and field programmable gate arrays (FPGAs). The human brain is capable of performing more general and complex tasks at a minute fraction of the power required by deep learning hardware platforms. Spiking neurons are regarded as the building blocks of the neural networks in the brain. Moreover, research in neuroscience indicates the spatiotemporal computing capabilities of spiking neurons play a role in the energy efficiency of the brain. In addition, spiking neurons leverage sparse time-based information encoding, event-triggered plasticity, and low-power inter-neuron signaling. In this context, neuromorphic computing hardware architectures and spike domain machine learning algorithms offer a low-power alternative to ANNs on von Neumann computing architectures. The availability of neuromorphic processors, such as IBM’s TrueNorth [1], Intel’s Loihi [2], and event-domain neural processors, for example, BrainChip’s Akida [3,4], which offers the flexibility to define both artificial neural network layers and spiking neuron layers, is motivating the research and development of new algorithms for edge computing. In the present work, we have investigated how one can program an algorithm for Canny-type edge detection using a spiking neural network and spike-based computing.

2. Background​

An edge detection algorithm is widely used in computer vision to locate object boundaries in images. An edge in an image shows a sharp change in image brightness, which is a result of a sharp change in pixel intensity data. An edge detector computes and identifies the pixels with sharp changes in intensity with respect to the intensity of neighboring pixels. There are several edge detection image processing algorithms.
The three stages in edge detection are image smoothing, detection, and edge localization. There are mainly three types of operators in edge detection. These are (i) gradient-based, (ii) Laplacian-based and (iii) Gaussian-based. The gradient-based edge detection method detects the edges by finding the maximum and the minimum in the first derivative of the image using a threshold. The Roberts edge detector [5], Sobel edge detector [6], and Prewitt edge detector [7] are some of the examples of gradient-based edge detectors. These detectors use a 3 × 3 pattern grid. A detailed discussion on these edge detectors and a comparison of their advantages and disadvantages can be found in [8]. The Roberts edge detection method is built on the idea that a difference along any pair of mutually perpendicular directions can be used to calculate the gradient. The Sobel operator uses the convolution of the images with a small, separable, and integer-valued filter in horizontal and vertical directions for edge detection. The Prewitt edge detector uses two masks, each computing the derivative of the image in the x-direction and the y-direction. This detector is suitable for estimating the magnitude and orientation of the edge. Laplacian-based edge detectors find the edges by searching for zero crossings in the second derivative of the image. The Laplacian of the Gaussian algorithm uses a pre-smoothing step with a Gaussian low-pass filter on an image followed by a second-order differential, i.e., Laplacian, which finds the image edge. This method needs a discrete convolutional kernel that can approximate the second derivative for the image which consists of discrete pixels. The Marr–Hildreth edge detector is also based on the Laplacian of the Gaussian operator [9]. The Gabor filter edge detector [10] and Canny edge detector [11] are Gaussian-based edge detectors. The Gabor filter is a linear filter with its impulse response function defined by the product of a harmonic function with a Gaussian function and is similar to the human perception system.
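As a concrete illustration of the gradient-based operators surveyed above, here is a minimal sketch of a Sobel edge detector, assuming `img` is a 2-D grayscale NumPy array; a single magnitude threshold stands in for the more elaborate thresholding the Canny detector uses (shown after the next paragraph).

```python
# Minimal sketch of a gradient-based edge detector (Sobel operator).
# Assumes `img` is a 2-D grayscale NumPy array of floats.
import numpy as np
from scipy import ndimage

def sobel_edges(img, threshold=0.2):
    gx = ndimage.sobel(img, axis=1)        # horizontal gradient (x-direction)
    gy = ndimage.sobel(img, axis=0)        # vertical gradient (y-direction)
    magnitude = np.hypot(gx, gy)           # gradient magnitude per pixel
    magnitude /= magnitude.max() + 1e-12   # normalise to [0, 1]
    return magnitude > threshold           # binary edge map
```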
The Canny edge detector provides excellent edge detection, as it meets the three criteria for edge detection [12]: (i) detection with low error rate, (ii) the edge point should localize in the center of the edge, and (iii) an edge should only be marked once and image noise should not create edges. Canny edge detection uses the calculus of variations to optimize a functional which is a sum of four exponential terms, which approximates the first derivative of a Gaussian. A Canny edge detector is a multi-step algorithm designed to detect the edges of any analyzed image. The steps of this process are: (1) removal of noise in the image using a Gaussian filter, (2) calculation of the gradient of the image pixels along x- and y-directions, (3) non-maximum suppression to thin out edges, (4) double-threshold filtering to detect strong, weak and non-relevant pixels, and (5) edge tracking by hysteresis to transform weaker pixels into stronger pixels if at least one of their neighbors is a stronger pixel. The Canny edge detection algorithm is highly cited (∼36,000 citations) and the most commonly used edge detection algorithm [11].
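The five steps above map directly onto a conventional (non-spiking) implementation. The sketch below is an illustrative NumPy/SciPy version only, not the authors' MATLAB code; the Gaussian sigma and the two thresholds are assumed values.

```python
# Illustrative NumPy/SciPy sketch of the five Canny steps listed above.
# Not the authors' implementation; sigma and the two thresholds are assumed.
import numpy as np
from scipy import ndimage

def canny_sketch(img, sigma=1.0, low=0.05, high=0.15):
    # Step 1: noise removal with a Gaussian filter
    smoothed = ndimage.gaussian_filter(img, sigma)

    # Step 2: gradients along x and y, then magnitude and direction
    gx = ndimage.sobel(smoothed, axis=1)
    gy = ndimage.sobel(smoothed, axis=0)
    mag = np.hypot(gx, gy)
    mag /= mag.max() + 1e-12
    angle = np.rad2deg(np.arctan2(gy, gx)) % 180.0

    # Step 3: non-maximum suppression -- keep a pixel only if it is the local
    # maximum along its (quantised) gradient direction
    thin = np.zeros_like(mag)
    for i in range(1, mag.shape[0] - 1):
        for j in range(1, mag.shape[1] - 1):
            a = angle[i, j]
            if a < 22.5 or a >= 157.5:      # ~0 degrees
                n1, n2 = mag[i, j - 1], mag[i, j + 1]
            elif a < 67.5:                   # ~45 degrees
                n1, n2 = mag[i + 1, j - 1], mag[i - 1, j + 1]
            elif a < 112.5:                  # ~90 degrees
                n1, n2 = mag[i - 1, j], mag[i + 1, j]
            else:                            # ~135 degrees
                n1, n2 = mag[i - 1, j - 1], mag[i + 1, j + 1]
            if mag[i, j] >= n1 and mag[i, j] >= n2:
                thin[i, j] = mag[i, j]

    # Step 4: double threshold -- strong, weak and non-relevant pixels
    strong = thin >= high
    weak = (thin >= low) & ~strong

    # Step 5: edge tracking by hysteresis -- promote weak pixels that touch a
    # strong pixel, repeating until no more pixels are added
    edges = strong.copy()
    while True:
        grown = edges | (ndimage.binary_dilation(edges) & weak)
        if np.array_equal(grown, edges):
            break
        edges = grown
    return edges
```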
Edge detection is a primary step in identifying an object, and further research is strongly desirable to expand these methods to event-domain applications. The Canny edge detector performs better than the Roberts, Sobel and Prewitt edge detectors, but at a higher computational cost [8]. An alternate implementation of the Canny edge detection algorithm is attractive for edge-computing, event-domain applications, where low-power and real-time solutions are sought, given its tunable performance via the standard deviation of the Gaussian filter.

3. Related Work​

In the human vision system, the photoreceptors in the retina convert the light intensity into nerve signals. These signals are further processed and converted into spike trains by the ganglion cells in the retina. The spike trains travel along the optic nerve for further processing in the visual cortex. Neural networks that are inspired by the human vision system have been introduced to improve image processing techniques, such as edge detection [13]. Spiking neural networks, which are built on the concepts of spike encoding techniques [14], spiking neuron models [15] and spike-based learning rules [16], are biologically inspired in their mechanism of image processing. SNNs are gaining traction for biologically inspired computing and learning applications [17,18]. Wu et al. simulated a three-layer spiking neural network (SNN), consisting of a receptor layer, an intermediate layer with four filters for the up, down, left and right directions, respectively, and an output layer with Hodgkin–Huxley-type neurons as the building blocks for edge detection [19].
Clogenson et al. demonstrated how an SNN with scalable, hexagonally shaped receptive fields performs edge detection with computational improvements over rectangular, pixel-based SNN approaches [20]. The digital images are converted into a hexagonal pixel representation before being processed by the SNN. A spiking neuron integrates the spikes from a group of afferent neurons in a receptive field. The network model used by the authors consists of an intermediate layer with four types of neurons, each with a different receptive field corresponding to the up, down, right and left orientations. Yedjour et al. [21] demonstrated the basic task of contour detection using a spiking neural network based on the Hodgkin–Huxley neuron model. In this approach, the synaptic weights are determined by the Gabor function to describe the receptive-field behaviors of simple cells in the visual cortex. Vemuru [22] reported the design of an SNN edge detector with biologically inspired neurons and demonstrated that it detects edges in simulated low-contrast images. These studies focused on defining SNNs using an array of Gabor filter receptive fields in the edge detector. In view of the success of SNNs in edge detection, it is desirable to develop a spike domain implementation of the Canny edge detector algorithm, because it has the potential to offer a high-performance alternative for edge detection.

4. Methods​

Network models of the visual cortex are simulated with spiking neurons using Hodgkin and Huxley equations [23]. Retinal ganglion cells convey the visual image from the eye to the brain [24,25]. Receptive fields exist in the visual cortex; however, an accurate representation of the neuron circuits of the visual cortex is still lacking. Neural network models have been proposed explaining how the visual system is able to process an image efficiently, and more research is desired to further our understanding of the visual cortex [26]. As ANNs grow in complexity, their associated energy consumption becomes a challenging problem. Such challenges also exist for computing edges in images, where the computing devices are resource-constrained while operating on a limited energy budget. Therefore, specialized optimizations for deep learning have to be performed at both software and hardware levels. Edge detection can be achieved using a spiking neuron model [19]. Spiking neural networks offer a low-energy computational alternative with only a few layers, while maintaining edge features. Our solution for spike-based edge detection uses only one layer of Hodgkin–Huxley-type neurons (1 neuron/pixel) with five spike processing layers, one conductance calculation layer and a synaptic current update layer. The simple form of Hodgkin–Huxley neurons used in the network is similar to the conductance-based leaky integrate-and-fire neurons, which are frequently used in neuromorphic hardware implementation [1,2].
To implement a neuromorphic analogue of the Canny detector, we invented spike-based computations for the five key steps introduced earlier and implemented them in MATLAB. Figure 1 illustrates the flowchart of the algorithmic steps in the spike domain computation of Canny edge detection. The image I(x,y), where (x, y) are the coordinates of the pixels, is first converted into grayscale I(grayscale)(x,y) and scaled such that I(grayscale)max = 0.01 to match the units of the model parameters of the Hodgkin–Huxley neuron model. It is then assigned as the peak conductance for the excitatory synapse, qex, and the peak conductance for the inhibitory synapse, qin. The peak conductances are then converted into time-dependent conductances, gex and gin, for the excitatory and inhibitory synapses, respectively, using the equations
gex = qex(τex × dt)/(τex + dt),   (1)

gin = qin(τin × dt)/(τin + dt),   (2)

where τex = 4 ms and τin = 7 ms are the time constants for the excitatory and inhibitory synapses, respectively [27]. The conductances are then processed using a 5 × 5 Gaussian kernel to calculate the synaptic current at each time step t [28]:

Iz(t) = gex(V − Vex) + gin(V − Vin),   (3)

where Vex and Vin are the reversal potentials for the excitatory and inhibitory synapses, respectively. Note that the kernel size was smaller than 5 × 5 for edge pixels.
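For readers who want to trace the arithmetic, here is a minimal NumPy rendering of Equations (1)–(3) as quoted above. τex = 4 ms and τin = 7 ms come from the text; dt, the membrane potential V, the reversal potentials Vex/Vin and the Gaussian sigma are placeholder values assumed for illustration only.

```python
# Minimal sketch of Eqs. (1)-(3) above. tau_ex and tau_in come from the text;
# dt, V, V_ex, V_in and the Gaussian sigma are assumed placeholder values.
import numpy as np
from scipy import ndimage

tau_ex, tau_in = 4.0, 7.0            # ms, synaptic time constants [27]
dt = 0.1                             # ms, simulation time step (assumed)
V, V_ex, V_in = -65.0, 0.0, -80.0    # mV, membrane and reversal potentials (assumed)

def synaptic_current(gray):
    # Scale the grayscale image so its maximum is 0.01 (as stated in the text)
    # and use it as the peak conductances q_ex and q_in.
    q = 0.01 * gray / (gray.max() + 1e-12)
    q_ex, q_in = q, q

    # Eqs. (1) and (2): peak conductances -> time-dependent conductances
    g_ex = q_ex * (tau_ex * dt) / (tau_ex + dt)
    g_in = q_in * (tau_in * dt) / (tau_in + dt)

    # Smooth with a 5x5 Gaussian kernel (sigma=1, truncate=2 gives 5x5 support);
    # the paper notes the kernel is effectively smaller at the image border.
    g_ex = ndimage.gaussian_filter(g_ex, sigma=1.0, truncate=2.0)
    g_in = ndimage.gaussian_filter(g_in, sigma=1.0, truncate=2.0)

    # Eq. (3): synaptic current driving each pixel's neuron
    return g_ex * (V - V_ex) + g_in * (V - V_in)
```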
[Figure 1. Flowchart of the algorithmic steps in the spike domain computation of Canny edge detection.]
 
  • Like
  • Love
Reactions: 13 users

Tothemoon24

Top 20
Biomedical image processing is a field with an increasing demand for advanced algorithms that automate and accelerate the process of segmentation. In a recent review, Wallner et al. reported a multi-platform performance evaluation of open-source segmentation algorithms, including the Canny-edge-based segmentation method, for cranio-maxillofacial surgery [37]. In this context, it is desirable to test newer algorithms, such as the one developed here using spiking neurons, on medical images. To this end, we performed an edge detection experiment with a few representative medical images from the computed tomography (CT) emphysema dataset [38] (this database can be used free of charge for research and educational purposes). This dataset consists of 115 high-resolution CT slices as well as 168 square patches that are manually annotated in a subset of the slices. Figure 6 illustrates a comparison of the three edge detectors, Sobel, Canny and SNN-based Canny, for a few example images from the CT emphysema dataset. Ground truth for edge maps is not available for this dataset to perform a quantitative comparison. A visual comparison of the edges generated by the three edge detectors, presented in Figure 6, shows that the SNN-based Canny edge detector is competitive with the other two edge detectors and offers an algorithmically neuromorphic alternative to the conventional Canny edge detector.

Figure 6. A comparison of edge detection by Sobel, Canny, and SNN Canny edge detectors for medical images from CT emphysema dataset.

6. Conclusions​

In conclusion, we present a spiking neural network (SNN) implementation of the Canny edge detector as its neuromorphic analogue by introducing algorithms for spike-based computation in the five steps of the conventional algorithm, with the conductance-based Hodgkin–Huxley neuron as the building block. Edge detection examples are presented for RGB and infrared images with a variety of objects. A quantitative comparison of the edge maps from the SNN-based Canny detector, the conventional Canny detector and a Sobel detector, using the F1-score as a metric, shows that the neuromorphic implementation of the Canny edge detector achieves better performance. The SNN architecture of the Canny edge detector also offers promise for image processing and object recognition applications in the infrared domain. The SNN Canny edge detector is also evaluated with medical images, and the edge maps compare well with the edges generated by the conventional Canny detector. Future work will focus on implementing the algorithm on an FPGA or a neuromorphic chip for hardware acceleration and on testing it in an infrared object detection task, potentially with edge maps as features, together with a pre-processing layer to remove distortions, enhance contrast and reduce blur, and a spiking neuron layer as a final layer to introduce a machine learning component. An extension of the SNN architecture of the Canny edge detector with additional processing layers for object detection in LiDAR point clouds would be another interesting new direction of research [39].
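The F1-score comparison mentioned above is straightforward to reproduce once binary edge maps are in hand. A minimal sketch, assuming `pred` and `truth` are boolean arrays of the same shape and that matching is exact pixel-wise (no tolerance for small localisation offsets):

```python
# Minimal F1-score between a predicted edge map and a ground-truth edge map.
# Assumes boolean arrays of identical shape and exact pixel-wise matching.
import numpy as np

def edge_f1(pred, truth):
    tp = np.logical_and(pred, truth).sum()     # edge pixels found correctly
    fp = np.logical_and(pred, ~truth).sum()    # spurious edge pixels
    fn = np.logical_and(~pred, truth).sum()    # missed edge pixels
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return 2 * precision * recall / (precision + recall) if precision + recall else 0.0
```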
 
  • Like
  • Fire
Reactions: 15 users
This will destroy that great line drill sergeants use and undermine everything:

“Think. We don’t pay you to think soldier.”😂🤣😂🪁🪁🪁🪁🪁🪁
 
  • Haha
  • Like
  • Fire
Reactions: 10 users

Tothemoon24

Top 20
Renesas Vice President.
Excellent listen; runs for 9:47.


 
  • Like
  • Love
  • Fire
Reactions: 16 users
Puts to bed the allegation that Brainchip is recruiting the wrong people:

“Both AI and HPC markets are highly competitive, so getting some of the world's best engineers onboard is a must when you want to compete against the likes of established rivals (AMD, Intel, Nvidia) and emerging players (Cerebras, Graphcore).”

Tom’s Hardware know what they are talking about in the technology space.

My opinion only DYOR
FF

AKIDA BALLISTA
 
  • Like
  • Fire
Reactions: 22 users

Diogenese

Top 20
I think the first people who are going to revolt against AI are the ones that the AI gets smarter than first.
Yes. I have been seething with resentment since the telephone got smarter than me.
 
  • Haha
  • Like
  • Love
Reactions: 19 users

Dhm

Regular
Renesas Vice President.
Excellent listen; runs for 9:47.



Excellent video. The bloke said that Renesas produces 9 million microcontrollers every day! That is around 2.25 billion each year (at roughly 250 production days a year). We aren't named, but our applications can be inserted in a number of ways. Interesting parts at 3m 40s and from 6m 40s onwards. Actually it is ALL interesting.
 
  • Like
  • Fire
  • Love
Reactions: 24 users

stuart888

Regular
Sure hope this is not a repeat. Game changer, and all good for Brainchip!

The world now knows how to run ChatGPT-style AI on a laptop or phone, with no internet. Generative AI, in my opinion, is very good for Brainchip smarts on the Edge. It raises awareness of AI among all decision makers.

This guy is sharp. Did it all in a week!



[attached image]
 
Last edited:
  • Like
  • Wow
  • Fire
Reactions: 14 users
Amazing. Consider the AKIDA possibilities:


My opinion only DYOR
FF

AKIDA BALLISTA
 
  • Like
  • Wow
  • Fire
Reactions: 13 users

Vladsblood

Regular
Amazing. Consider the AKIDA possibilities:


My opinion only DYOR
FF

AKIDA BALLISTA
Now we’re talking… add 20 drops of live Fulvic Acid from Optimally Organics containing the correct amount of Humic and it’s alive lol. Vlad
 
  • Haha
  • Like
Reactions: 9 users

Diogenese

Top 20
Now we’re talking… add 20 drops of live Fulvic Acid from Optimally Organics containing the correct amount of Humic and it’s alive lol. Vlad
This research was sponsored by the Hershey Bar Corporation.
 
  • Haha
  • Like
Reactions: 11 users

stuart888

Regular
And a refresher below

Hyundai Motor Group Launches Boston Dynamics AI Institute to Spearhead Advancements in Artificial Intelligence & Robotics​

2022.08.12 00:00:00 No. 16875
  • Hyundai Motor Group and Boston Dynamics to invest over $400 million to establish the new institute
  • The Institute, led by founder of Boston Dynamics Marc Raibert, to invest resources across the technical areas of cognitive AI, athletic AI and organic hardware design, with each discipline contributing to progress in advanced machine capabilities
  • The Institute to recruit talent in diverse areas, including AI and robotics research, and software and hardware engineering, as well as to partner with universities and corporate research labs

SEOUL/CAMBRIDGE, MA, August 12, 2022 – Hyundai Motor Group (the Group) today announced the launch of Boston Dynamics AI Institute[1] (the Institute), with the goal of making fundamental advances in artificial intelligence (AI), robotics and intelligent machines. The Group and Boston Dynamics will make an initial investment of more than $400 million in the new Institute, which will be led by Marc Raibert, founder of Boston Dynamics.

As a research-first organization, the Institute will work on solving the most important and difficult challenges facing the creation of advanced robots. Elite talent across AI, robotics, computing, machine learning and engineering will develop technology for robots and use it to advance their capabilities and usefulness. The Institute’s culture is designed to combine the best features of university research labs with those of corporate development labs while working in four core technical areas: cognitive AI, athletic AI, organic hardware design as well as ethics and policy.

“Our mission is to create future generations of advanced robots and intelligent machines that are smarter, more agile, perceptive and safer than anything that exists today,” said Marc Raibert, executive director of Boston Dynamics AI Institute. “The unique structure of the Institute — top talent focused on fundamental solutions with sustained funding and excellent technical support — will help us create robots that are easier to use, more productive, able to perform a wider variety of tasks, and that are safer working with people.”

To achieve such advances, the Institute will invest resources across the technical areas of cognitive AI, athletic AI and organic hardware design, with each discipline contributing to progress in advanced machine capabilities. In addition to developing technology with its own staff, the Institute plans to partner with universities and corporate research labs.

The Institute will be headquartered in the heart of the Kendall Square research community in Cambridge, Massachusetts. The Institute plans to hire AI and robotics researchers, software and hardware engineers, and technicians at all levels.

Please visit https://www.bdaiinstitute.com/ for more information and to submit interest in working with the Institute.

In addition to the Institute, Hyundai Motor Group separately announced plans to establish a Global Software Center to lead development of its software capabilities and technologies and to enhance its capabilities to advance development of Software Defined Vehicles (SDVs). The Center will be established on the basis of 42dot, an autonomous driving software and mobility platform startup recently acquired by the Group.
Cost-effective edge smarts, go Brainchip. I look at this as a case of "a rising tide lifts all boats".

 
  • Like
Reactions: 4 users

stuart888

Regular
25 FPS? Interesting, just seems slow.

 
  • Like
Reactions: 4 users