BRN Discussion Ongoing

KiKi

Regular
A friend wrote an email to Tony Dawe a few weeks ago with his concerns about the stock price and worries about how successful BRN is going to be, and received a reply from him (I cannot post it completely as I do not have his consent to do so). The last sentences of the reply were:
....
That is what we are focused on (my explanation: meaning customer relationship) and that is what we are working day and night to achieve (my explanation: achieve sales).
It's not a matter of if we are successful, it’s a matter of when. Please remain patient.
Regards
Tony Dawe


I also hate to see the SP going down further and further, but I keep buying more stocks as I trust Tony Dawe's words. Should he be wrong, well then I will be losing a whole lot of money and cry for a while. But until then, I will not cry about the SP! I will remain patient!
 
  • Like
  • Love
Reactions: 37 users

Bravo

If ARM was an arm, BRN would be its biceps💪!
Damo4, we agree on something. Their communication is consistent. ;);)

We already have radio silence. Has anyone here heard of the CEO visiting Australia to present to Australian shareholders?

I'm not the only one concerned with this. Stock analysts are also concerned or bemused with this. Go take a listen to the latest Stocks downunder interview (Marc Kennis) where they raise this concern and many others.

I just wonder what it will take for this current CEO to get out and talk about this company to its shareholders in a meaningful way that gives you a grain of hope that positive things are happening.

I want to hear it from the company and not from a forum.

That's what the AGM is for. So you can hear it from the company. I take it you'll be going, since you seem to have a lot of questions that no-one on here can answer for you.
 
  • Like
  • Haha
  • Fire
Reactions: 16 users

Bravo

If ARM was an arm, BRN would be its biceps💪!
I don’t know who it was, but someone posted that MegaChips hits revenue around the likes of 20 billion USD. That’s incredibly far off. The estimated revenue for the financial year 2023 is around 500 million USD.
Please, whoever stated that, correct yourself. I’m sure you just confused the currencies and that’s it. And for the love of god, stop cancelling people who are not satisfied with the performance of the company. We all like BRN and did our due diligence. It doesn’t change the fact that our performance is plain awful. Otherwise we could all stop participating in this forum, as it would be plain useless.

Sources: https://www.megachips.co.jp/english/pdf/2303ps-3q_e.pdf


Don’t confuse the currencies. I guess that was the mistake and I’m certain that there wasn’t any malicious intent by the poster. So don’t feel offended it was just a mistake and that’s it. No bad feelings from my side.

Hi @DerAktienDude, I'll have to ask @The Pope for forgiveness if I've made an error. In my own defence, I may have been misled by Dr Google and so I should only need to recite a maximum of 3 Hail Mary's.


Screen Shot 2023-04-17 at 5.57.17 pm.png
 
Last edited:
  • Like
  • Haha
Reactions: 10 users
D

Deleted member 118

Guest

Renesas Round-Up: A Q&A with Roger Wendelken on Trends Shaping Industrial MCUs​


Roger Wendelken
Senior Vice President of the MCU Business Division in Renesas’ IoT and Infrastructure Business Unit



Published: March 29, 2023
Renesas is a leading supplier of MCUs, shipping more than 3.5 billion devices each year backed by a dual-source production model, the industry’s most advanced MCU process technology and a network of more than 200 ecosystem partners. Approximately half of Renesas MCUs are designed into automotive platforms, with the remainder serving a diverse set of applications in IoT, edge-to-cloud computing, consumer, industrial and infrastructure systems. Veteran semiconductor industry editor and consultant, Andrew MacLellan, recently sat down with Roger Wendelken, Senior Vice President of the MCU Business Division in Renesas’ IoT and Infrastructure Business Unit, to discuss the evolving MCU market, thoughts on nurturing a user-friendly customer design ecosystem and the impact of artificial intelligence (AI) on MCU component selection and design flows.
Andrew: What are the major trends influencing the MCU landscape for industrial applications?
Roger: One is to address the emerging AI markets of real-time analytics, vision and voice. The move to embedded AI, or what we call “endpoint intelligence,” is driving the need for higher-performance MCU cores that bring in DSP functionality as well as hardware acceleration engines and neural nodes.
A second key trend is that today's engineer would love to design everything over the Internet – and do it very easily. That requires a user interface that provides engineers with a seamless design flow. Frankly, this trend is directly in line with the overriding goal of Renesas, “To Make Our Lives Easier” by complementing human capabilities. If they become frustrated with the documentation, or they can’t find an evaluation kit, or the app notes or software manuals are hard to follow, then they always have the option of locking in with a competitor.
Andrew: How are you enabling customers to embrace these trends, and how are you helping them manage the growing size and complexity of the MCU ecosystem?
Roger: It's a combination of ease of design, helping them do more work remotely and creating an environment where, say, they can play with an eval kit in the cloud as opposed to buying a physical kit and setting it up. In the cloud, there’s no set up required; it's there already for designers to take measurements and do benchmarking. That was the impetus behind the recent launch of our Quick-Connect Studio, which is a cloud-based design platform that lets users graphically build hardware and software and accelerate product development by quickly validating prototypes.
Secondly, because the ecosystem is becoming so complex, we break it into several subsegments, such as security and safety, AI, tools and user experience, connectivity and cloud, and human machine interface sensing and control. Then, we pick ecosystem partners to support each subsegment. Of those, AI is probably the most complex given the need to build, develop, train and deploy models. There could be ecosystem partners for each of those different categories. Reality AI, which we acquired last year, was initially an AI ecosystem partner, and we saw a tremendous benefit for them to become part of Renesas and our overall customer solution.
Renesas MCUs Bridge Key IoT & AI Technologies

Andrew: I read a report recently that debated whether the move to incorporate multiple, highly optimized cores and AI algorithms will result in general-purpose MCUs being replaced by application-specific devices. What’s your take?
Roger: I wouldn’t necessarily agree that they’re becoming more application specific, and I'll give you my reasoning on this. At Renesas, we have both, and what really differentiates the two is the hardware peripherals. For example, the metering or motor control markets may require certain analog functions that aren’t needed in a general-purpose MCU for someone designing for other applications. That necessitates a trade-off between die size and cost.
Why is that becoming less of an issue? It’s because, as we move to lithographies of 22nm and below, you can start putting a lot more functionality into a single piece of silicon that can now address many segments without incurring a heavy cost penalty. In a sense, as you move to smaller lithographies, you can almost create a general-purpose MCU that is able to address a lot of application-specific designs through different packaging schemes and memory size options. For example, our most advanced 32-bit MCUs have very powerful cores with the ability to handle all aspects of AI, from real-time analytics to vision and voice. But you can also take the IP on the die and bond it out into different packages that can address the networking market, or metering or motor control.
Andrew: With more MCUs being designed into the IoT intelligent endpoint, how is Renesas helping customers manage security?
Roger: As more and more products are connected to the cloud via home and industrial networks, we need to ensure that the network and endpoint are secure. AI/ML (machine learning) is a powerful technology, and it’s important not to underestimate its implied security requirements. Protecting AI IP and device cloud communication – efficiently and cost effectively – is vital, but new technology also introduces new threats. This is why governments around the world are responding with legislation such as the Strengthening American Cybersecurity Act and the EU Cyber Resilience Act. These are designed to defend users and their devices from malicious attack. Protecting the AI model, during both training and operation, is critical to secure operation of the device over its entire lifetime.
Andrew: What effect are AI and ML/TinyML having on the way MCUs are designed in at the system level?
Roger: One thing we've learned is that, when a customer implements AI into their product, there is a direct impact on the MCU selection decision, which now needs to happen far earlier in the design process. This is because customers have to decide how big their model needs to be based on the capabilities they’re designing for. All of that impacts what MCU they need to run the model, the optimal memory density and what performance is required to achieve the goals of their AI application. If I can show my customer how they can implement AI on a 50-cent MCU using my ecosystem and my tools versus a 75-cent MCU from a competitor, that’s what it comes down to. This places a heavy reliance on how tightly these algorithms are written so that they can be executed using fewer resources, and again that’s where Reality AI comes in as a winning solution.
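Roger's point that the model's size drives the MCU choice can be sketched as a back-of-envelope sizing check. The numbers and the helper below are invented for illustration only (they are not Renesas figures or tools): weights live in flash alongside the application code, while peak activation buffers must fit in RAM.

```python
# Toy sizing check: does a quantized model fit a candidate MCU?
# All figures here are illustrative assumptions, not vendor data.
def fits(params, activation_peak, bits, flash_kb, ram_kb, code_kb=64):
    """params: weight count; activation_peak: largest live activation
    buffer (elements); bits: quantization width (e.g. 8 for int8).
    Weights + application code must fit in flash; activations in RAM."""
    weights_kb = params * bits / 8 / 1024
    act_kb = activation_peak * bits / 8 / 1024
    return weights_kb + code_kb <= flash_kb and act_kb <= ram_kb

# A hypothetical 50k-parameter int8 keyword-spotting model on a
# 256 KB flash / 64 KB RAM part:
print(fits(params=50_000, activation_peak=20_000, bits=8,
           flash_kb=256, ram_kb=64))  # -> True
```

Running this kind of estimate early is exactly why, as Roger says, the MCU selection decision now has to happen far sooner in the design process.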
Andrew: What advice do you have for customers at the initial stages of their design process?
Roger: For a product that’s entering design today and will hit the market in a year or two and stay relevant for the next five, engineers need to think about using security and AI to future-proof their product so it remains competitive for its full life cycle.
 
  • Like
  • Fire
Reactions: 11 users

suss

Regular
Hi @DerAktienDude, I'll have to ask @The Pope for forgiveness if I've made an error. In my own defence, I may have been misled by Dr Google and so I should only need to recite a maximum of 3 Hail Mary's.


View attachment 34451
According to Dr Google, 20,070,000,000 Japanese Yen equals 223,694,991.76 Australian Dollars.
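For anyone who wants to sanity-check the currency confusion discussed upthread, a quick sketch. The exchange rates below are rough April-2023 ballpark assumptions, not quoted figures:

```python
# Hypothetical spot rates for illustration only (mid-April 2023 ballpark).
JPY_PER_USD = 134.0
JPY_PER_AUD = 89.7

megachips_revenue_jpy = 20_070_000_000  # the ¥20.07bn figure quoted upthread

usd = megachips_revenue_jpy / JPY_PER_USD
aud = megachips_revenue_jpy / JPY_PER_AUD
print(f"~US${usd / 1e6:.0f}m, ~A${aud / 1e6:.0f}m")
```

So ¥20.07bn is on the order of US$150m / A$224m, i.e. nowhere near US$20bn: reading the yen figure as dollars is the likely source of the mix-up.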
 
  • Like
Reactions: 1 users
D

Deleted member 118

Guest
Don’t think I’ve seen this on here before. A few months old but might be worth a watch if anyone is interested

 
  • Like
  • Fire
Reactions: 6 users

wilzy123

Founding Member
That's what the AGM is for. So you can hear it from the company. I take it you'll be going, since you seem to have a lot of questions that no-one on here can answer for you.
Hopefully everyone that is worried right now, especially those that like to continually point out that they are worried, go to the AGM to ask their questions. Or I worry that they may continue to point out that they are worried, and then I would be worried for them, but not the least bit worried about the company as I have been all along.
 
  • Haha
  • Like
Reactions: 17 users
D

Deleted member 118

Guest
Hopefully everyone that is worried right now, especially those that like to continually point out that they are worried, go to the AGM to ask their questions. Or I worry that they may continue to point out that they are worried, and then I would be worried for them, but not the least bit worried about the company as I have been all along.
 
  • Haha
  • Like
Reactions: 8 users

JDelekto

Regular
Hopefully everyone that is worried right now, especially those that like to continually point out that they are worried, go to the AGM to ask their questions. Or I worry that they may continue to point out that they are worried, and then I would be worried for them, but not the least bit worried about the company as I have been all along.
Hopefully, those worried will not only come up with a complete list of concerns but will also ask to have any representation or statement presented by the company to be clarified.

I have seen statements that I believe were taken out of context, such as that made by Sean during the recent 30-minute investor presentation. I think the five or six minutes left afterward were inadequate for answering some of the questions with a well-crafted and clear reply while allowing for any follow-up questions to resolve any ambiguity.
 
  • Like
  • Love
Reactions: 15 users
Don’t think I’ve seen this on here before. A few months old but might be worth a watch if anyone is interested




Thanks @Rocket577 for posting that one.

Another awesome presentation from our trusted partner!
 
  • Like
Reactions: 9 users
I totally agree. There's lots that the CEO can be talking about.

What's missing for mine, are statements along the line that Akida IP is currently being readied for problems in the following areas, or with the following models. If in the past SH has said that Akida can become ubiquitous, then he should provide a couple of new use cases, or retract.

Meanwhile folks on here who make obviously false claims like ARM has developed a chip that contains Akida, or that imply that Renesas' RZ chips already contain Akida do all the harm in the world and should be asked to leave, because they insult us.

Why can't people express their pain at the current SP?
It’s a bit difficult for a CEO to report on a ubiquitous technology under a swathe of NDAs… 🤪

Doesn’t leave much wriggle room for discussion and progress.. The CEO is the ecosystem fosterer, and his worth is being revealed by its broadening network..

Tangibility of revenue and royalties will come.. Even if you look no further than ARM as a partner and its associated key employee migration, there’s enough smoke to warrant a potentially huge, out-of-control future revenue fire.

AIMO
 
  • Like
  • Fire
  • Thinking
Reactions: 17 users
Am I a buyer here? Not yet.

I’m a holder of a manageable and smaller speculative percentage position that is enough to benefit from a sudden significant move.

However, my friends usually hate it when they’ve been riding the consolidations in price for months and years then see me time an entry within days of strong price movements..

So I suspect with BRN as well, if the time is right to take a position at a probable positive inflection point I will be there again to capitalise on it.. Right now, is not yet a highly probable time IMO for a sustained appreciation ..
That may change within another month or even a few weeks, but until it does, I’ll be patient.. My investment account thanks me for it..
 
  • Like
  • Haha
Reactions: 4 users

Jimmy17

Regular
Every second post on this thread is full of doubt.

Where did all this doubt and victim mentality come from all of a sudden, I wonder.

Gee, has this happened before?

Can it be any more obvious?

View attachment 34457
No Wilzy, enough of your conspiracy theories. It's just a substantial number of retail holders hanging on for dear life and expressing frustration, which in my view is fully justified. A number of factors have elevated expectations, including tidbits from the company. It is understandable and should be expected. It's hard to send a polite email to TD when you just wanna say hurry the F up and show me the $$$
 
  • Like
  • Haha
  • Love
Reactions: 9 users

Jimmy17

Regular
"conspiracy theories"
"hanging on for dear life"
"elevated expectations"
"hurry the F up and show me the $$$"

You sound like a perfectly rational and non-emotional investor. 👏
Ok, I'm too emotional and impatient, but I remain rational.
 
  • Like
Reactions: 4 users
I think it healthy to take occasional breaks from here now and then.
Can become a little consuming and the older among us don't eat and breathe social media like those of you who have grown up with it. 🤣
Or it may be that life has simply gotten in the way, as it has a tendency to sometimes do.
Or maybe there's just nothing new or notable enough to comment on.
You can only advise to DYOR, be patient and work your own plan, just so many times, before it becomes tedious and repetitious.
Maybe they have decided to limit themselves to an inside cabal having sorted the grain from the chaff. 🤣
All supposition on my behalf of course and hope to read them here again in future times, but if not, I thank them for sharing their wisdom and experience in times past.
Or maybe the influx of people tearing the company down all of a sudden made the decision for them. I myself have seen this playbook a few too many times and am considering joining them. I'll stay true to company and hold though.

SC
 
  • Like
Reactions: 6 users
Ok, I'm too emotional and impatient, but I remain rational.
Everyone just wants the money.
I feel they think creating an ecosystem is done with a wave of the magic 🪄 wand and poof, it's there. Dream on guys; if it were that easy, everyone would be doing it.
It's a snowball soon to turn into an avalanche.
The market is warming, not only to bring more investors back but to AI.
It's going to flow like a torrent after the thaw.
What's everyone's rush to become rich?
I want sustainability, a long ride, not a quick buck.
If I wanted a quick buck, I would play the pokies.
 
  • Like
  • Love
Reactions: 20 users

Kachoo

Regular
I have a hypothetical question for everyone, and be honest:

If BRN had accomplished exactly what it has today, partnerships and all these social media posts included, but the price was at, say, $1.50, would you all be pissed off at the chiefs, or waiting on upgraded numbers and higher valuations?

I would bet at least 90% would be happily waiting. That is a fact.

I would be absolutely happy in that situation.
 
  • Like
Reactions: 12 users
Interesting recent interview with a team member from Sony Semicon who is working with Prophesee on EVS.

Nothing on us however my takeaway is an understanding of the "time and effort" that has to go into the joint dev, testing and production in merging tech to create one final product.

Food for thought & appreciation maybe in some TSE discussions re how long for consumer / commercial / enterprise product releases...and this by one of the industry giants :unsure:



Sony Semiconductor Solutions Group

Pursuing the approach to social value in developing new technology​

March 22, 2023
There has been a growing trend in recent years to combine AI with diverse sensors, and against this backdrop, the Event-based Vision Sensor (EVS) is garnering much interest.
This is a sensor that extracts only the changes within a frame, enabling it to significantly reduce power consumption and the volume of data to be handled compared with conventional frame-based sensing.
It has great potential for extensive applications. We interviewed the engineer who developed the world’s smallest*1 pixel for EVS, Atsumi Niwa of Sony Semiconductor Solutions Corporation (SSS) Research Division 1, and asked him about his experience in the development project and the secrets of his success in creating a technology the world had never seen before.
*1) According to Sony research (as of September 2021)

Niwa Atsumi
Sony Semiconductor Solutions Corporation
Research Division 1
Profile: Niwa joined Sony Corporation’s Semiconductor Business Group (present-day SSS) in 2008.
He initially worked on the development of analog circuits for TV tuners, where he mainly designed baseband analog signal processors and AD converters. Subsequently, he contributed to enhancing the performance of CMOS image sensors for mobile applications. He joined the project to develop EVS in 2017, which led to the launch of the IMX636 in 2021. In recognition of this achievement, he received the 2021 Sony Outstanding Engineer Award. Today, he pursues the internal rollout of EVS technology and works to expand the scope of its application through his development projects.

The desire to find out what contributions EVS might offer to society prompted him to participate in the project to commercialize the technology​

- What kind of sensor is an EVS?
EVS is a fundamentally different sensor from general image sensors, which have so far evolved to reproduce images as faithfully as possible to human vision. It measures luminance differences at the pixel level and outputs the data with their coordinates and timestamps. EVS has three major advantages.
Firstly, EVS can detect luminance changes without the complex, lighting-dependent settings an image sensor needs, so users can operate it far more easily than an image sensor, which requires a lot of tuning to produce optimal data.
Secondly, it consumes far less power, because the output data contains only the detected differences.
And thirdly, thanks to pixel-parallel detection, EVS outputs only changes, unlike conventional image sensors, which read out approximately 30 to 60 frames of whole images per second. This allows EVS to respond quickly and capture changes in less than a millisecond.
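The pixel-level change detection described here can be illustrated with a toy model. This is not Sony's circuit; the threshold, the frame input, and the (t, x, y, polarity) tuple layout are made up for illustration. Each pixel compares its log-luminance against the value at its last event and emits a timestamped event only when the change exceeds a contrast threshold:

```python
import math

def event_stream(frames, threshold=0.2):
    """Toy event generator: compare each pixel's log-luminance to the
    value at its last event; emit (t, x, y, polarity) when the change
    exceeds the contrast threshold. Frames are 2-D lists of luminance."""
    h, w = len(frames[0]), len(frames[0][0])
    ref = [[math.log(frames[0][y][x] + 1e-6) for x in range(w)] for y in range(h)]
    events = []
    for t, frame in enumerate(frames[1:], start=1):
        for y in range(h):
            for x in range(w):
                logl = math.log(frame[y][x] + 1e-6)
                delta = logl - ref[y][x]
                if abs(delta) >= threshold:
                    events.append((t, x, y, 1 if delta > 0 else -1))
                    ref[y][x] = logl  # reset the reference at each event
    return events

# A static scene with one pixel brightening: only that pixel produces
# an event; the unchanged pixels contribute no data at all.
frames = [
    [[1.0, 1.0], [1.0, 1.0]],
    [[1.0, 1.0], [1.0, 2.0]],  # pixel (1,1) doubles in luminance
    [[1.0, 1.0], [1.0, 2.0]],  # no further change -> no further events
]
print(event_stream(frames))  # -> [(1, 1, 1, 1)]
```

The sketch makes the two efficiency claims concrete: a static scene yields an empty event list, and each event is a tiny tuple rather than a full frame.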

Suppose we are capturing a welding process in a factory. A conventional image sensor would produce an image in which the welded part is whited out because it is extremely bright. Similarly, the sparks must be observed to discern whether the welding is applied correctly, but conventional image sensors are not capable of precisely capturing the fast-moving sparks. The EVS, on the other hand, can easily capture individual sparks even under this high-contrast condition.
Metal process monitoring
Frame-based sensor image

The sparks are overexposed due to high luminance,
making linear trails in the image.
EVS image

Each fast-moving spark is captured individually [high frame rate]
Data other than the sparks (such as the machinery) are not output [high efficiency: minimal data output]
Application output

Each spark is tagged with ID and tracked
⇒ analyzable in terms of the number, size, speed, etc
Alternatively, the sensor’s capability to measure motion can be applied to machine inspection. Engines, for example, normally run with a constant motion, and that motion becomes disturbed when there is an abnormality. The EVS can be used to detect such abnormal movements and catch early signs of malfunction.
Vibration monitoring
Frame-based sensor image

It is impossible to discern the vibration
in the model car on the platform with the naked eye.
EVS image

Only the vibrating areas are processed and
output so that the vibration is clearly visualized.
Application output

The frequency is analyzed per pixel and
can be mapped out in two dimensions.
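The per-pixel frequency analysis described above can be sketched with a naive DFT in plain Python. The binning scheme, sample rate, and synthetic event counts below are invented for illustration; a real pipeline would bin each pixel's event timestamps and run an FFT per pixel:

```python
import math

def dominant_frequency(event_counts, sample_rate):
    """Naive DFT over one pixel's per-bin event counts; returns the
    frequency (Hz) of the bin with the largest non-DC magnitude."""
    n = len(event_counts)
    mean = sum(event_counts) / n
    centered = [c - mean for c in event_counts]  # remove the DC component
    best_k, best_mag = 0, 0.0
    for k in range(1, n // 2 + 1):
        re = sum(c * math.cos(2 * math.pi * k * i / n) for i, c in enumerate(centered))
        im = sum(c * math.sin(2 * math.pi * k * i / n) for i, c in enumerate(centered))
        mag = math.hypot(re, im)
        if mag > best_mag:
            best_k, best_mag = k, mag
    return best_k * sample_rate / n

# A pixel vibrating at 5 Hz, its events binned into 100 bins over 1 second:
counts = [1 if math.sin(2 * math.pi * 5 * i / 100) > 0 else 0 for i in range(100)]
print(dominant_frequency(counts, 100))  # -> 5.0
```

Doing this for every vibrating pixel yields exactly the kind of two-dimensional frequency map the caption describes.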
- Has EVS existed for a long time?
The technology emerged during the first decade of the 2000s, inspired by the way the human eye recognizes images. The sensor catches only moving targets and outputs the data at high speed. On its own, this simply represents the efficiency of capturing only changes, but the vision sensor has evolved through attempts to apply it to various contexts in combination with a neural network.*2
With reference to the efficiency point I just mentioned, extracting change information from images captured by conventional image sensors significantly increases power consumption and the volume of data to be handled. For AI to evolve further, sensors will be expected to provide only the information required by the neural network, and EVS represents one of the viable options for this purpose.
*2) A series of algorithms for pattern recognition modeled on the ways in which the human brain operates.
- When you first came across this technology, did you immediately want to get involved in developing it?
When I heard of EVS, I found it interesting, and the idea of making one appealed to me. Also, I was curious to know how this technology could contribute to society.
My background was CMOS image sensors, developing them for taking clear pictures.
EVS, with principles totally different from what I knew, excited me with the prospect of discovering new potential. Soon afterwards, Prophesee, a French sensing-device venture, approached us with a proposal for a joint project to commercialize EVS. This project was the first attempt in the world to turn EVS into a commercial product. I was keen to witness firsthand how the technology would contribute to society, and decided to join the project.

Unprecedented external joint development, having to deal with differences in everything from workflows to technical terms​

- What was the most difficult part in developing this EVS with the smallest pixels in the world?
Each pixel in an EVS has its own event detection circuit. If the circuit specifications become demanding, the pixel size increases to accommodate all of it, and this was a problem. The possibility of implementing it in wearable devices and smartphones makes it absolutely necessary that the pixels be smaller and more power-efficient.
So, we decided to design a circuit from scratch, drawing on SSS’s expertise, stacking technology and circuit design know-how.
It was hard-going at the stage where we combined this new circuit design with the process technology. We had many trials and errors before achieving the pixel size reduction.
Meanwhile, another difficulty was to deal with so many unknown factors. Usually, we pursue a development by supposing some use contexts for the product, but the technology concerning EVS had not been established within SSS. We could imagine some use cases but could not verify whether desired data could be obtained without actually testing.
Moreover, the technology being new to us, we had no adequate equipment for verification, data acquisition and, therefore, evaluation. It was necessary to develop the environment for technological verification in parallel with the product development, which made the project time-consuming and difficult to organize.
- This was the first collaboration with other companies in the EVS domain. How did you find it?
There have been many collaborative projects where responsibilities were clearly divided. This time, it was the first ever experience for me as well as for SSS in that two companies joined forces to develop and design one stacked sensor.
SSS has predetermined design development workflows, and these are, obviously, unique to the company. The language we use is also different from theirs.
Communication was often riddled with difficulties due to the differences in the technical background knowledge between Prophesee and SSS. We made efforts to maintain close communication with them and ensure mutual understanding because many little misunderstandings would eventually slow down the development schedules.
While Prophesee and SSS were in agreement to leverage our expertise to create a highly sophisticated EVS product, there were some discrepancies as to what level to be aimed at and how the product should be promoted. We needed to spend quite a lot of time to ensure that we had the same understanding.
The collaboration entailed the establishment of a new development flow in tandem with the product development, and also a new evaluation environment was to be developed. All in all, this was a whole new experience for us.
On top of this, the novelty of the concept EVS presented required sustained efforts to explain it to people within the Company so that they understood the sensor’s characteristics and took an interest in this project. These efforts were necessary to build up the project team.
- How did you face the unfamiliar development workflow?
Despite the difficulties experienced in the development processes and communication, I enjoyed the job very much. I also learned much from the project.
For Prophesee, this technology determines their corporate success or failure. Seeing their uncompromising attitude to details, I thought we could learn from it.

Finding semiconductors interesting at university lab​

- When did you become interested in semiconductors?
I was studying circuit design at university, and though our lab was not dealing with the latest processes, we the students were making prototypes and studying the discrepancies between theories and implementations. It was the process of probing why and how the reality defied theories that appealed to me, and I found semiconductors interesting in this way.
The theory-oriented lab taught me the pleasure of in-depth thinking. Joining this lab was a pivotal experience for my future course of life.
- What was the deciding factor for your joining SSS?
At the end of my master’s course, I was in two minds about staying on to pursue a Doctor of Philosophy. Meanwhile, I met recruiters from various corporations, of whom the members of SSS left a good impression on me. They said that, at SSS, it was possible to propose and pursue projects based on what you found interesting. It seemed to be a great place to be if I wanted to pursue what interested me.
At the time, I was enjoying the process of theorizing and testing the theory at the university lab, and I was also drawn to the idea of creating products and seeing how they benefited people in society. These were the factors that made me want to test my abilities at SSS.
- Have you ever experienced a failure?
I did when I was engaged in the development of CMOS image sensors for cell phones. I was working on a very demanding project in terms of development requirements, so much so that some people in the Company found it unjustifiable.
I was determined to succeed and managed to make it to the prototyping phase. However, a new specification was added before the commercialization. This latest addition proved to be totally incompatible with the structure of the prototype, and eventually my development had to be suspended.
The technology was modified and eventually adopted in the client’s later model. Even so, the taste of failure in the first attempt was so bitter.

Think deeply about the technology, what value it offers and how it can contribute to society​

- What do you pay attention to when you work?
I take time to consider the research and development project at hand in terms of its future significance and diffusion of value.
Any research and development would need to envision how the technology under development might be put to use. However, thinking about it within a framework of today’s conventions and values may lead to an impasse.
If you are confined within the values of conventional image sensors, for example, you might fail to see the worth of EVS, thinking that a sensor which “only captures changes” lacks a feature required for producing clear images. I think this is a trap we should avoid. I would like to be someone who really thinks through what value can be found in only capturing changes, and what it will make possible.
Today, I am fortunate to be at the research division, where we give thorough consideration to the contributions to society that the technology we develop and study could make, and to the value it offers. I always learn from this approach.
- What would you like to work on in the future?
SSS has so far been focused on image sensors that are capable of capturing the world as we see it as truthfully as possible, and the world has been demanding such technology and products.
However, I believe that the future opens up to approaches to changing our ways of life from the viewpoint of “social richness.” A sensor may not be able to capture a beautiful photo for viewing, but it is designed to extract some information, which will enable a system to function in such a way that it liberates us humans from our heavy-load tasks.
I would like to be part of the efforts to expand the imaging technology for “photography” into the domain of sensing for “obtaining necessary information.” It would be exciting to keep exploring the possible value that sensing technology can create.
I imagine that the future will have richer combinations of new image sensors and their applications, making our lives better and turning society into a more interesting place. With the technology concerning image sensors at the core, how should we leverage it and what value can be derived from it? We would like to adopt this application-oriented approach to prove the value we create so that our work and products will lead to positive changes in society.
- Image sensors that enrich society sounds like a wonderful idea.
An ambition like this cannot be fulfilled by a solo effort. This is why I always discuss desirable image sensors with many colleagues from various fields of expertise.
And discussing is not enough to move things forward. I think it is important to always keep myself open to new information and to maintain an extensive network of people as a potential source.
- Interviewer’s postscript
It is clear that Niwa has a constant attitude of pursuing something, such as theories at university, the project which everyone else had given up on at the development division, and value for society which he currently pursues at the research division. It seemed to me that this attitude and persistence to challenge were a crucial part of his competence that enabled him to create something the world had never known. His journey to explore the potential of image sensors is far from over. It will be exciting to see what kind of technology he comes up with next to surprise us.

Related links​

 
Last edited:
  • Like
  • Fire
Reactions: 24 users

TECH

Regular
Valeo Patent, translate this and you'll read some interesting descriptions and claims in the invention that truly highlight how spiking neural networks and autonomous driving go seamlessly together, if that patent isn't referring to our technology, well then I give up. :ROFLMAO:

It's not too hard to read once you have translated it!

 
Last edited:
  • Like
  • Love
  • Fire
Reactions: 42 users
Valeo Patent, translate this and you'll read some interesting descriptions and claims in the invention that truly highlight how spiking neural networks and autonomous driving go seamlessly together, if that patent isn't referring to our technology, well then I give up. :ROFLMAO:

It's not too hard to read once you have translated it!

Yep, gotta like those highlighted bits.

In another embodiment, detecting objects in the three-dimensional point cloud includes using a neural network.

[0027]​


In another embodiment, detected objects are classified into different categories by the neural network.

[0028]​


The neural network, which can be understood as a software algorithm, is particularly trained for one or more computer vision tasks, for example including object detection and/or segmentation and/or classification.

[0029]​


According to several implementations, the neural network is implemented as a spiking neural network (SNN) or as a convolutional neural network (CNN).

[0030]​

 
  • Like
  • Fire
  • Love
Reactions: 44 users