Good luck :-)
BOB Bank of Brainchip
looking a lot better ... Thanks for info sb

Yes, they (the shorts) seem less confident
View attachment 39872
I can tell you're upset.
Take a deep breath and repeat with me.
"WOOOOSAAAAAAH"
Nope, very chill. I just don't think Robs likes mean shit.
this one made me laugh…
4:18:22 PM | 0.360 | 560,762 | 201,874.320 | ASX | SX XT |
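Reading the columns of that course-of-sales line as time, price, volume, value, venue and condition codes (my assumption about the tape layout, not something stated in the post), the value column is just price times volume, which is what makes that single print stand out:

```python
# Quick check of the course-of-sales line quoted above. Treating the columns
# as time | price | volume | value | venue | condition codes is my reading of
# the tape, not something stated in the post.
price = 0.360
volume = 560_762

value = price * volume
print(f"{value:,.3f}")  # prints 201,874.320, matching the value column
```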
I get the feeling, nothing definitive, MB is moving away from Valeo's Scala and moving towards Luminar's IRIS, which uses Qualcomm, and also moving away from "Hey Mercedes" keyword activation towards a system which recognises phrases or tones.

Can you elaborate on how Mercedes seems to have pivoted? They are still listed on BrainChip's website under "trusted by".
I still feel very confident that we are still, and will be, servicing MB ... they also get a mention alongside us in the 'latest' Edge AI 2023 Technology Report!!

I get the feeling, nothing definitive, MB is moving away from Valeo's Scala and moving towards Luminar's IRIS, which uses Qualcomm, and also moving away from "Hey Mercedes" keyword activation towards a system which recognises phrases or tones.
As you know, I was very bullish on MB using Akida in Valeo's Scala 3 and Hey Mercedes featured in the EQXX in the new 2024 EV MMA platform for small and medium cars. Those cars are due for release in the next couple of months.
Statements from BRN since the AGM have been very dour of late about the lack of progress. I would have thought BRN would be more upbeat if Akida IP were going to be in the new, soon-to-be-released 2024 MB small-to-medium electric line-up. I'm talking about body language and tone, not breaking NDAs. They're not exactly bubbling over with excitement about anything.
Anything can be true in this world, especially if the same is not provided in writing.

I get the feeling, nothing definitive, MB is moving away from Valeo's Scala and moving towards Luminar's IRIS, which uses Qualcomm, and also moving away from "Hey Mercedes" keyword activation towards a system which recognises phrases or tones.
As you know, I was very bullish on MB using Akida in Valeo's Scala 3 and Hey Mercedes featured in the EQXX in the new 2024 EV MMA platform for small and medium cars. Those cars are due for release in the next couple of months.
Statements from BRN since the AGM have been very dour of late about the lack of progress. I would have thought BRN would be more upbeat if Akida IP were going to be in the new, soon-to-be-released 2024 MB small-to-medium electric line-up. I'm talking about body language and tone, not breaking NDAs. They're not exactly bubbling over with excitement about anything.
I don't think Mercedes would change direction within 1.5 months like that; these products are out in 6 months lol. They would likely be starting up production very soon lol. I feel some may have been convinced by the shorters that 20 cents was on the cards, and now the price has moved north.

Anything can be true in this world, especially if the same is not provided in writing.
On the other hand, Qualcomm was always in the picture with MB, even when MB announced they were using the BrainChip Akida platform for their concept car, the EQXX.
I don't know why MB would say that and then back out. On the whole, the market is becoming very hot as far as edge and AI are concerned, and if we cannot get our dues it will be a very poor outcome.
On the other hand, there is a lot of hype, and everyone is expecting results in no time. But to get a tasty and healthy meal, we have to cook it for the required time.
I don't know why, but I mostly think Qualcomm may also be one of our customers.
Dyor
Shorts sometimes play for their own direct benefit/loss, but sometimes they fulfil an assignment for a bigger client with a bigger picture than the current SP.

I don't think Mercedes would change direction within 1.5 months like that; these products are out in 6 months lol. They would likely be starting up production very soon lol. I feel some may have been convinced by the shorters that 20 cents was on the cards, and now the price has moved north.
Can someone please give this guy a buzz or we will all literally run out of water and have to start drinking our own pee-pee?
Elon Musk Launches X.AI To Fight ChatGPT
Martine Paris
Apr 16, 2023, 05:51pm EDT
Elon Musk and son X Æ A-12 (Photo by Theo Wargo/Getty Images for TIME)
X is for everything in the world of tech billionaire Elon Musk. It's the name of his child with pop star Grimes. It was the name of his startup X.com, which later became PayPal. It's the corporate name of Twitter as disclosed in court documents last week. And it's the name of his new company X.AI, for which he has been recruiting AI engineers from competitors and possibly buying thousands of GPUs.
Here's what is known about X.AI so far:
- The company was incorporated in Nevada on March 9, where Twitter's parent X Corp. is now registered, as first reported by the Wall Street Journal.
- Elon Musk is listed as the sole director on the state filing.
- Jared Birchall is listed as secretary. Birchall is the CEO of Musk's chip-implant company Neuralink. He serves on the board of Musk's tunneling entity The Boring Company and is managing director of Musk's family office Excession, which is also the name of a science fiction novel by Iain Banks about a society ruled by hyper-intelligent AI that Musk tweeted about in 2019. Birchall began his career as a financial analyst at Goldman Sachs and later became a wealth adviser at Morgan Stanley before joining Musk.
- Igor Babuschkin, a Silicon Valley AI senior research engineer from Google's sister company Deepmind, has been hired to head the effort at X.AI, according to Insider. Babuschkin worked at OpenAI during the pandemic and prior to that was part of the Deepmind London team. He interned at the revered international research institute CERN in Geneva, Switzerland, according to his LinkedIn profile.
- Deepmind engineer Manuel Kroiss is another key X.AI hire who co-authored the 2021 Deepmind paper, "Launchpad: A programming model for distributed machine learning research."
- Musk might be planning on using Twitter data to train X.AI's large language model, according to sources reported by the Financial Times. In December, Musk tweeted that he had learned that OpenAI had access to Twitter's database for training and put that on pause.
- Musk recently purchased for Twitter roughly 10,000 GPUs of the type typically used for large language models, sources told Insider. In a Twitter Spaces with the BBC last week, which participants said more than three million people tuned into, Musk neither confirmed nor denied the Insider report; rather, he responded that Tesla and Twitter are buying GPUs like all other companies, and gave a shoutout to Tesla's Dojo supercomputer platform, which he said has a lot of potential.
- One challenge Musk is going to face is the vast amount of electricity and water that large language models are consuming in the face of global shortages. "ChatGPT needs to 'drink' a 500ml bottle of water for a simple conversation of roughly 20-50 questions and answers," noted a paper by University of California and University of Texas professors. (A rough per-question breakdown of that figure follows this list.)
- To fund X.AI, the company has authorized 100 million shares for sale and is courting interest from SpaceX and Tesla investors, reported the Financial Times. The more powerful the AI models, the more expensive it gets to operate. OpenAI estimated its 2022 costs to be around $544.5 million, according to documents seen by Fortune.
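As a rough aside on the water figure quoted a couple of bullets up: taking only the 500 ml per 20-50 questions number from the article at face value, the per-question arithmetic works out as below. Everything other than those two quoted numbers is my own back-of-envelope calculation, not from the paper.

```python
# Back-of-envelope split of the quoted figure: 500 ml of water for a
# conversation of roughly 20-50 questions and answers. Only the 500 ml and
# 20-50 values come from the article; the rest is simple arithmetic.
bottle_ml = 500
questions_low, questions_high = 20, 50

per_question_max = bottle_ml / questions_low   # 25.0 ml per Q&A (short conversation)
per_question_min = bottle_ml / questions_high  # 10.0 ml per Q&A (long conversation)

print(f"~{per_question_min:.0f} to {per_question_max:.0f} ml of water per question/answer pair")
```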
Elon Musk Launches X.AI To Fight ChatGPT Woke AI, Says Twitter Is Breakeven
Everything you need to know about Elon Musk's newest AI company X.AI, including the tech billionaire's remarks that Twitter might be cash flow positive this quarter and his plans to turn the chatty social network into the everything app. (www.forbes.com)
Interesting last part of the link: https://lnkd.in/gFckm-ZB
Nobody is talking about changing direction within 1.5 months. This was done over 12 months ago, which is about the same period in which nothing new has been said or reported by MB, Valeo, auto journos etc. about Akida.

I don't think Mercedes would change direction within 1.5 months like that; these products are out in 6 months lol. They would likely be starting up production very soon lol. I feel some may have been convinced by the shorters that 20 cents was on the cards, and now the price has moved north.
There are no published patent applications indicating Luminar has a NN in silicon.
Their latest published application:
US2022309685A1 NEURAL NETWORK FOR OBJECT DETECTION AND TRACKING
claims a method of tracking multiple objects using a processor, i.e., it is software running on a processor. Their processing architecture includes a segmentation module, a classification module, and a tracking module, each of which can include a NN:
View attachment 39891
[0076] As seen in FIG. 3, the vehicle includes N different sensors 102 , with N being any suitable integer (e.g., 1, 2, 3, 5, 10, 20, etc.). At least "Sensor 1" of the sensors 102 is configured to sense the environment of the autonomous vehicle by physically interacting with the environment in some way, such as transmitting and receiving lasers that reflect off of objects in the environment (e.g., if the sensor is a lidar device), transmitting and receiving acoustic signals that reflect off of objects in the environment (e.g., if the sensor is a radio detection and ranging (radar) device), simply receiving light waves generated or reflected from different areas of the environment (e.g., if the sensor is a camera), and so on. Depending on the embodiment, all of the sensors 102 may be configured to sense portions of the environment, or one or more of the sensors 102 may not physically interact with the external environment (e.g., if one of the sensors 102 is an inertial measurement unit (IMU)). The sensors 102 may all be of the same type, or may include a number of different sensor types (e.g., multiple lidar devices with different viewing perspectives, and/or a combination of lidar, camera, radar, and thermal imaging devices, etc.).
[0077] The data generated by the sensors 102 is input to a perception component 104 of the sensor control architecture 100 , and is processed by the perception component 104 to generate perception signals 106 descriptive of a current state of the vehicle's environment. It is understood that the term "current" may actually refer to a very short time prior to the generation of any given perception signals 106 , e.g., due to the short processing delay introduced by the perception component 104 and other factors. To generate the perception signals 106 , the perception component 104 may include a segmentation module 110 , a classification module 112 and a tracking module 114 .
[0078] The segmentation module 110 is generally configured to identify distinct objects within the environment, as represented by the sensor data (or a portion of the sensor data). Depending on the embodiment and/or scenario, the segmentation task may be performed separately for each of a number of different types of sensor data (e.g., the segmentation module 110 may include a number of modules operating in parallel), or may be performed jointly on a fusion of multiple types of sensor data. In some embodiments where lidar devices are used, the segmentation module 110 analyzes point cloud frames to identify subsets of points within each frame that correspond to probable physical objects in the environment. In other embodiments, the segmentation module 110 jointly analyzes lidar point cloud frames in conjunction with camera (and/or other) image frames to identify objects in the environment. Examples of lidar devices/systems and point clouds are discussed in further detail below. Other suitable techniques, and/or data from other suitable sensor types, may also be used to identify objects. As used herein, references to different or distinct "objects" may encompass physical things that are entirely disconnected (e.g., with two vehicles being two different "objects"), as well as physical things that are connected or partially connected (e.g., with a vehicle being a first "object" and the vehicle's hitched trailer being a second "object").
[0079] The segmentation module 110 may use predetermined rules or algorithms to identify objects. For example, the segmentation module 110 may identify as distinct objects, within a point cloud, any clusters of points that meet certain criteria (e.g., having no more than a certain maximum distance between all points in the cluster, etc.). Alternatively, the segmentation module 110 may utilize a neural network that has been trained to identify distinct objects within the environment (e.g., using supervised learning with manually generated labels for different objects within test data point clouds, etc.), or another suitable type of machine learning based model. Example operation of the segmentation module 110 is discussed in more detail below in FIG. 5B, for an embodiment in which the perception component 104 processes point cloud data.
[0080] The classification module 112 is generally configured to determine classes (labels, categories, etc.) for different objects that have been identified by the segmentation module 110 . Like the segmentation module 110 , the classification module 112 may perform classification separately for different sets of the sensor data (e.g., the classification module 112 may include a number of modules operating in parallel), or may classify objects based on a fusion of data from multiple sensors, etc. Moreover, and also similar to the segmentation module 110 , the classification module 112 may execute predetermined rules or algorithms to classify objects, use a neural network that has been trained to classify identified objects within the environment (e.g., using supervised learning with manually generated labels for different point cloud representations of distinct objects, etc.), or use another suitable machine learning based model to classify objects. Example operation of the classification module 112 is discussed in more detail below in FIG. 5B, for an embodiment in which the perception component 104 processes point cloud data.
[0081] The tracking module 114 is generally configured to track distinct objects over time (e.g., across multiple lidar point cloud or camera image frames). The tracked objects are generally objects that have been identified by the segmentation module 110 , but may or may not be objects that were classified by the classification module 112 , depending on the embodiment and/or scenario. The segmentation module 110 may assign identifiers to identified objects, and the tracking module 114 may associate existing identifiers with specific objects where appropriate (e.g., for lidar data, by associating the same identifier with different clusters of points, at different locations, in successive point cloud frames). Like the segmentation module 110 and the classification module 112 , the tracking module 114 may perform separate object tracking based on different sets of the sensor data (e.g., the tracking module 114 may include a number of modules operating in parallel), or may track objects based on a fusion of data from multiple sensors. Moreover, and also similar to the segmentation module 110 and the classification module 112 , the tracking module 114 may execute predetermined rules or algorithms to track objects, may use a neural network that has been trained to track identified (and possibly classified) objects within the environment (e.g., using supervised learning with manually generated labels for different pairs or sets of point cloud frames, etc.), or another suitable machine learning model to track objects.
[0082] Because the blocks of FIG. 3 (and various other figures described herein) depict a software architecture rather than physical components, it is understood that, when any reference is made herein to a particular neural network or other software architecture component being "trained," or to the role of any software architecture component (e.g., sensors 102 ) in conducting such training, the operations or procedures described may have occurred on a different computing system (e.g., using specialized development software). Thus, for example, neural networks of the segmentation module 110 , classification module 112 and/or tracking module 114 may have been trained on a different computer system before being implemented within any vehicle. Put differently, the components of the sensor control architecture 100 may be included in a "final" product within a particular vehicle, without that vehicle or its physical components (sensors 102 , etc.) necessarily having been used for any training processes.
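To make the architecture those paragraphs describe concrete, here is a minimal sketch of the segmentation -> classification -> tracking flow in Python. This is my own illustration, not Luminar's code: the module names mirror the application's wording, the callables stand in for neural networks trained offline (per paragraph [0082]), and the association logic is deliberately crude.

```python
# Illustrative sketch (mine, not Luminar's) of the perception component in
# US2022309685A1: segmentation, classification and tracking modules, each of
# which may wrap a trained model, all running as software on a processor.
from dataclasses import dataclass, field
from typing import Callable, Dict, List, Tuple

Point = Tuple[float, float, float]  # (x, y, z) from a lidar point cloud frame


@dataclass
class TrackedObject:
    object_id: int
    points: List[Point]
    label: str = "unknown"


@dataclass
class PerceptionComponent:
    # Stand-ins for neural networks trained offline on a separate system.
    segment_model: Callable[[List[Point]], List[List[Point]]]
    classify_model: Callable[[List[Point]], str]
    next_id: int = 0
    tracks: Dict[int, TrackedObject] = field(default_factory=dict)

    def process_frame(self, frame: List[Point]) -> List[TrackedObject]:
        # Segmentation: split the frame into clusters of points that likely
        # correspond to distinct physical objects.
        clusters = self.segment_model(frame)

        # Classification: assign a label (car, pedestrian, ...) to each cluster.
        labelled = [(cluster, self.classify_model(cluster)) for cluster in clusters]

        # Tracking: associate each cluster with an existing track where
        # possible, otherwise start a new one. A real tracker is far more careful.
        updated: Dict[int, TrackedObject] = {}
        for cluster, label in labelled:
            track_id = self._associate(cluster)
            updated[track_id] = TrackedObject(track_id, cluster, label)
        self.tracks = updated
        return list(self.tracks.values())

    def _associate(self, cluster: List[Point]) -> int:
        # Crude nearest-centroid gating on x only, purely for illustration.
        cx = sum(p[0] for p in cluster) / len(cluster)
        for track_id, track in self.tracks.items():
            tx = sum(p[0] for p in track.points) / len(track.points)
            if abs(cx - tx) < 1.0:
                return track_id
        self.next_id += 1
        return self.next_id
```

A per-sensor variant (modules operating in parallel) or a fusion variant would slot in the same way; either way, the point is that everything in the published application is software models invoked on a general-purpose processor, not a NN in silicon.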
I think it is improbable that Mercedes will revert to a software NN.
Now there are some interesting features of Luminar's laser projector for lidar, such as foveation, which enables the laser pulses to be concentrated more densely on objects of interest. This could well attract MB, but I doubt they would adopt software to process the reflected pulses.
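For anyone unfamiliar with the term, foveation just means spending a disproportionate share of a fixed pulse budget on regions of interest rather than sweeping the field of view uniformly. A toy sketch of that allocation follows; it is my own illustration of the general idea, not Luminar's actual scan-control scheme, and all the numbers are made up.

```python
# Toy illustration of foveated scanning: split a fixed pulse budget so that
# regions flagged as interesting get more pulses per degree than the
# background. Weights and regions below are invented for illustration only.
def allocate_pulses(total_pulses: int, regions: list) -> dict:
    # Each region: {"name": str, "width_deg": float, "interest": float in [0, 1]}.
    # Weight grows with angular width and with how interesting the region is.
    weights = {r["name"]: r["width_deg"] * (1.0 + 4.0 * r["interest"]) for r in regions}
    total_weight = sum(weights.values())
    return {name: round(total_pulses * w / total_weight) for name, w in weights.items()}


regions = [
    {"name": "background", "width_deg": 100.0, "interest": 0.0},
    {"name": "pedestrian", "width_deg": 5.0, "interest": 1.0},
    {"name": "lead vehicle", "width_deg": 15.0, "interest": 0.8},
]
print(allocate_pulses(100_000, regions))
# The narrow pedestrian region ends up with several times more pulses per
# degree than the wide background sweep.
```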
Proga, it takes at the bare minimum 3 years, sometimes 4-5 years, for automotive.

Nobody is talking about changing direction within 1.5 months. This was done over 12 months ago, which is about the same period in which nothing new has been said or reported by MB, Valeo, auto journos etc. about Akida.