BRN Discussion Ongoing

Townyj

Ermahgerd
Good Morning Chippers,

Weekend Financial Review paper...

We get a mention, albeit on the wrong side, unfortunately.

Patiently waiting......

Regards,
Esq.

Bit hard to read upside down :p
 
  • Haha
  • Like
  • Fire
Reactions: 9 users

Esq.111

Fascinatingly Intuitive.
Morning Townyj,

Just thought I'd try and spice it up a little.

Yes, sorry about that. Operator error.

Esq.
 
  • Haha
  • Like
  • Love
Reactions: 18 users

HopalongPetrovski

I'm Spartacus!
(quoting Esq.111's post above)
 
  • Haha
  • Like
Reactions: 12 users

Townyj

Ermahgerd
  • Haha
  • Like
Reactions: 8 users

stan9614

Regular
hotcrapper is hopeless now, full of misleading lies. It took me a bit of effort to set the record straight about our cash runway, which is approximately 8 quarters, not the 3-quarter myth that was spreading around the forum...

I wonder how many people on this forum thought we had only 3 quarters of cash left?
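For anyone who wants to check the maths themselves rather than take a forum post's word for it, runway is just cash on hand divided by the quarterly net operating cash outflow reported in the 4C filings. A minimal sketch with placeholder figures (these are not the company's actual numbers; substitute the values from the latest quarterly report):

```python
# Hypothetical runway check; both inputs below are placeholders, not
# BrainChip's actual figures. Pull the real values from the latest 4C.

cash_on_hand = 24_000_000        # USD, placeholder
quarterly_net_burn = 3_000_000   # USD per quarter, placeholder

runway_quarters = cash_on_hand / quarterly_net_burn
print(f"Estimated runway: {runway_quarters:.1f} quarters")  # -> 8.0 quarters
```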
 
  • Like
  • Love
  • Fire
Reactions: 25 users

Bravo

If ARM was an arm, BRN would be its biceps💪!
Check out this article below from Synopsys about Vision Transformer Networks. It doesn't specifically mention us, but we know Akida 2nd gen will support ViTs.

The article discusses ViTs in terms of their ability to amplify contextual awareness. An example given is being able to discern whether an object on the road is a stroller or a motorcycle. This reminds me of the "plastic bag versus a rock" problem which Peter Van Der Made previously discussed: AKIDA 2000 and AKIDA 3000 should be able to learn the difference between the two, because they learn from sequences of events and the behaviour of objects in the physical world.

Deep Learning Transformers Transform AI Vision

Deep learning algorithms are now being used to improve the accuracy of machine vision.
New algorithms challenge convolutional neural networks for vision processing.
Gordon Cooper, Product Manager, Synopsys Solutions Group | Jun 12, 2023



With the continual evolution of modern technology systems and devices such as self-driving cars, mobile phones, and security systems that include assistance from cameras, deep learning models are quickly becoming essential to enhance image quality and accuracy.
For the past decade, convolutional neural networks (CNNs) have dominated the computer vision application market. However, transformers, which were initially designed for natural language processing such as translation and answering questions, are now emerging as a new algorithm model. While they likely won’t immediately replace CNNs, transformers are being used alongside CNNs to ensure the accuracy of vision processing applications such as context-aware video inference.

As the most widely used model for vision processing over the past decade, CNNs offer advanced deep learning functionality for classifying images, detecting objects, semantic segmentation (grouping or labeling every pixel in an image), and more. However, researchers were able to demonstrate that transformers can beat the latest advanced CNNs' accuracy with no modifications made to the model itself except for splitting the image into small patches.

In 2020, Google Research Scientists published research on the vision transformer (ViT), a model based on the original 2017 transformer architecture specializing in image classification. These researchers found that the ViT “demonstrate[d] excellent performance when trained on sufficient data, outperforming a comparable state-of-the-art CNN with four times fewer computational resources.” While they require training with large data sets, ViTs are now beating CNNs in accuracy.
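To make the "splitting the image into small patches" step concrete: a ViT performs no convolution at the input at all; it simply slices the image into a grid of fixed-size patches and treats each flattened patch as a token, the way a language transformer treats words. A minimal NumPy sketch of that patchify step (the sizes are the common ViT-Base defaults, used here purely for illustration):

```python
import numpy as np

# Illustrative ViT-style patchify: split an HxWxC image into PxP patches
# and flatten each patch into one token vector. The transformer then
# attends over these tokens like words in a sentence.

H, W, C, P = 224, 224, 3, 16              # image and patch sizes (typical ViT-Base)
image = np.random.rand(H, W, C)           # stand-in for a real input image

patches = image.reshape(H // P, P, W // P, P, C)   # carve out the patch grid
patches = patches.transpose(0, 2, 1, 3, 4)         # group the two patch axes
tokens = patches.reshape(-1, P * P * C)            # one flat vector per patch

print(tokens.shape)  # (196, 768): a 14x14 grid of patches, 768 values each
```

In a real ViT each flattened patch is then linearly projected to the model's embedding width and given a position embedding before entering the attention layers.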


Differences Between CNNs and Transformers
The primary difference between CNNs and transformers is how each model blends information from neighboring pixels and the scope of its focus. While a CNN's weighting is symmetric and tied to location (for example, a 3x3 convolution calculates a weighted sum of the nine pixels around the center pixel), transformers use an attention-based mechanism. Attention networks weight learned properties beyond mere location and have a greater ability to learn and represent more complex relationships. This leads to expanding contextual awareness when the system attempts to identify an object. For example, a transformer, like a CNN, can discern that the object in the road is a stroller rather than a motorcycle; but rather than expending energy taking in the less useful pixels of the entire road, the transformer can home in on the most important part of the data.
Transformers are able to grasp context and absorb more complex patterns to detect an object.
In particular, Swin (shifted window) transformers reach the highest accuracy for object detection (COCO) and semantic segmentation (ADE20K). While CNNs are usually applied to one still image at a time, without any context from the frames before and after, transformers can be deployed across video frames and used for action classification.
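A toy sketch may help to make that contrast concrete (my own illustration, not from the article): the convolution below only ever sees its fixed 3x3 neighbourhood, whereas self-attention mixes every token with every other token according to their similarity, wherever they sit. Real transformers add learned query/key/value projections and multiple heads, omitted here for brevity:

```python
import numpy as np

def conv3x3_at(img, kernel, i, j):
    """Weighted sum of the fixed 3x3 neighbourhood around pixel (i, j)."""
    return float(np.sum(img[i - 1:i + 2, j - 1:j + 2] * kernel))

def self_attention(x):
    """Simplified single-head self-attention over token vectors x, shape (n, d)."""
    n, d = x.shape
    scores = x @ x.T / np.sqrt(d)                   # similarity between all token pairs
    w = np.exp(scores - scores.max(axis=1, keepdims=True))
    w /= w.sum(axis=1, keepdims=True)               # softmax: each row sums to 1
    return w @ x                                    # every token mixes with all others

img, kernel = np.random.rand(8, 8), np.random.rand(3, 3)
print(conv3x3_at(img, kernel, 4, 4))    # one locally mixed pixel value

tokens = np.random.rand(5, 16)          # e.g. five image patches as tokens
print(self_attention(tokens).shape)     # (5, 16): globally mixed tokens
```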

Drawbacks
Currently, designers must take into account that while transformers can achieve high accuracy, they run at much lower frames-per-second (fps) and require many more computations and far more data movement. In the near term, integrating CNNs and transformers will be key to establishing a stronger foundation for future vision processing development. Even though CNNs are still considered the mainstream approach to vision processing, deep learning transformers are rapidly advancing and improving upon the capabilities of CNNs.

As research continues, it may not take long for transformers to completely replace CNNs for real-time vision processing applications. Amplifying contextual awareness for complex patterns, as well as providing higher accuracy, will be beneficial for future AI applications.


 
  • Like
  • Love
  • Fire
Reactions: 42 users

rgupta

Regular
(quoting Esq.111's Financial Review post above)
One thing I can tell with 100% accuracy: financial writers all shape their public views to match the curve. They will promote a stock while it is heading north and talk the same stock down once it turns south. On the other hand, a share price tends to rise after it has been heading south and fall after it has been heading north.
 

rgupta

Regular
However, researchers were able to demonstrate that transformers can beat the latest advanced CNNs' accuracy with no modifications made to the model itself except for splitting the image into small patches.
Isn't that the same technology Qualcomm is using?
(quoting Bravo's Synopsys article post above)
 
  • Like
Reactions: 2 users

FKE

Regular
I had a strange dream tonight. I was walking down the street and found 100 euros. Since I couldn't think of anything to buy, I thought it would be a good idea to invest the money in shares. In my dream, I was very focused on AI-related tech stocks. In the end, there were two companies to choose from:


Vnidia

A huge company that has made a breathtaking rally lately. In my dream, the technology that generates this company's revenue was called Neu-Vanman. It was at the end of its development and the potential development steps in the future were limited. The company had a valuation of EUR 953 billion. I thought to myself that if it becomes the largest company in the world it can surely reach 5000 billion, or 5 trillion EUR.


Chainbrip

A small company that is currently in a downward spiral. The technology of this company seemed breathtaking to me. In my dream, I actually assumed that this company was developing chips that resembled the function of the brain. The first versions were already on the market, and more were soon to be released. The potential seemed huge, both in terms of the market and the possibilities for further development of the technology. The company had a valuation of EUR 374 million. I thought to myself, if it can reach 1% of the size of Vnidia (if Vnidia becomes the biggest company in the world), that would be a huge success: 50,000 million EUR, i.e. 50 billion EUR.


I pulled out my slide rule and realised that for every EUR I invested, I was using the following factors:

[Attachment: table of upside factors per EUR invested: Vnidia ~5.2x, Chainbrip ~133.7x]


This led to several questions and conclusions if my vague theories in my confused dream were true:

1) 100 EUR invested in Vnidia = 520 EUR

2) 100 EUR invested in Chainbrip = 13,370 EUR

3) If I want equal total returns, I would have to invest only 0.039 EUR (about 4 cents) in Chainbrip for each EUR invested in Vnidia (5.2 / 133.7).

4) Risk assessment: I only wanted to invest in one company, so I asked myself: what are the probabilities? How likely is it that the above-mentioned market caps will be reached? In my dream I speculated, completely from my gut: for Vnidia the probability is 50%, for Chainbrip 10%. That gives a ratio of 5:1 in favour of Vnidia.

5) Decision: The risk is 5:1 in favour of Vnidia, the potential returns 25:1 (133.7 / 5.2) in favour of Chainbrip. Thus, even if you call me crazy, I was willing to invest the 100 EUR in Chainbrip.

6) If Chainbrip's downward spiral were to continue, the calculation above would tilt even further in Chainbrip's favour.


I didn't want to wait and see whether the share price would drop further; I was too nervous. So I invested the 100 EUR. Then, unfortunately, I woke up. I hope I get to continue the dream in 2-3 years; I would be interested to see how everything has developed.


PS: The share price in Germany has slipped back to 0.21 EUR since its all-time high (approx. 1.67 EUR). This means that I have already experienced 87.5% of the pain. So we are on the home stretch 😉 With the remaining 12.5%, I have a pain ratio of 7:1, which is bearable.
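For what it's worth, the dream's arithmetic checks out. A quick sketch re-deriving the factors and ratios purely from the figures stated in the post (the 0.039 comes out in euros, roughly 4 cents):

```python
# Re-deriving the dream's numbers; all inputs are as stated in the post.

vnidia_now, vnidia_target = 953e9, 5e12          # EUR market caps
chainbrip_now, chainbrip_target = 374e6, 50e9

f_v = vnidia_target / vnidia_now                 # upside factor, ~5.2x
f_c = chainbrip_target / chainbrip_now           # upside factor, ~133.7x

print(round(100 * f_v), round(100 * f_c))        # ~525 vs ~13369 EUR from a 100 EUR stake
print(round(f_v / f_c, 3))                       # 0.039 EUR per EUR for equal total returns
print(round((0.1 * f_c) / (0.5 * f_v), 1))       # probability-weighted ratio: ~5x for Chainbrip
print(0.875 / 0.125)                             # pain already endured vs pain left: 7.0
```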
 
  • Like
  • Haha
  • Fire
Reactions: 32 users

Diogenese

Top 20
(quoting FKE's dream post above)
Dunno what you're smokin', but there's gotta be a market for it.
 
  • Haha
  • Like
  • Wow
Reactions: 25 users

Tothemoon24

Top 20



IPro licenses Silicon IP to the Israeli Chip Design Community, from selected IP companies world-wide. We deliver key functionality for your design through best-in-class IP partnerships and first-class support.

We act as one company. Operating at the same high standards of support and commitment that you have learned to trust over years of partnership with me in a variety of sales roles, the IPro Group continues a long tradition of engaged support and information exchange. We inform you, learn your needs, and provide IP solutions for your SoC design challenges, enabling you to reach the market with world-class IP products - fast!

Imagine a vibrant community of Israeli fabless companies and worldwide IP vendors, collaborating closely and sharing information. Imagine an atmosphere of trust, cooperation and mutual commitment - for the success of your designs and for the constant improvement of our IP offering. This is the IPro vision - a one-stop shop of state-of-the-art IP with a unique engagement and bond with our partners.

About our IP Vendor Partners:
 
  • Like
  • Love
  • Fire
Reactions: 44 users

miaeffect

Oat latte lover
(quoting FKE's dream post above)
[Scary Movie reaction GIF]
 
  • Haha
Reactions: 5 users

Draed

Regular
The way I see it, this is a last-round attempt to short before the inevitable pullback. Like a reverse pump and dump. I think maybe we will drop back into the ASX 300 after this month. Institutional owners will then have to close their short positions very suddenly. I don't think the ASX 200 has been good for us.

If we can couple this with a nice little announcement, it might burn them a little bit crispier.
 
  • Like
  • Fire
  • Haha
Reactions: 12 users

Neuromorphia

fact collector



(quoting Tothemoon24's IPro post above)
Our new sales representative's website.
 
  • Like
  • Fire
  • Love
Reactions: 31 users

Reuben

Founding Member
Thanks Labsy, it seems a few weren't too happy with what I posted, and that's fine.

To buy Brainchip shares was and still is an individual's choice; to sell Brainchip shares was and still is an individual's choice.

I chose not to sell my shares north of $2.00, and that has in effect cost me over 2.5 million dollars. Time to make further investments: the opportunity to buy back into Brainchip and double my already solid holding. I could moan and whinge on this forum all day long, feeling sorry for myself, but I choose not to vent and keep venting. It's all good and dandy to vent, but it's not all good and dandy to vent against the company because things don't appear on the surface to be tracking the way your cash position selfishly suggests they should.

Individuals have made their own choices; for goodness' sake own them and stop bagging our company. That's my vent. Please respect my opinion, it's as valuable as yours.

Many on this forum and the past forum know that I have been one of the most positive, passionate supporters of Peter and Anil and the entire Brainchip team for close on 8 years. I'm hurting seeing our share price so low, but is this the place to be venting? I reserve my opinion.

Trust in your own decisions moving forward; I still see Brainchip crossing that finishing post in 1st place...(y)❤️
Been a while since I logged in, but good to see some very positive posts from a few...

If you have done your research... time will speak for itself.

I am in deep red as well... but research is the only thing that has kept me holding on to my BRN shares...
 
  • Like
  • Love
  • Fire
Reactions: 48 users

Getupthere

Regular
(quoting Draed's post above)
I agree, the ASX 200 has not been good for us without revenue.

Back to the 300 would sort us out for now and limit the number of shorters.
 
  • Like
Reactions: 8 users

rgupta

Regular
(quoting Draed's post above)
Interestingly, when BRN and LKE moved into the ASX 200, everyone was saying the shorters would kill both companies. And look: both companies' share prices are way down, and the shorts are at an all-time high.
 
  • Like
Reactions: 4 users

Frangipani

Regular
Here is another post liked by Nandan Nayampally on LinkedIn, posted by a California internist who is a specialist in pulmonary and critical care medicine:

[screenshot of the LinkedIn post]

I googled the hospital, which serves the Napa Valley area (excellent 🍷 region!), and discovered the following Spring/Summer 2023 hospital newsletter with more info on said lung nodule programme:


[screenshots of the hospital newsletter]

How probable is it that Nandan, who resides in Austin, Texas, would 👍🏽 this post about a newly launched “lung nodule programme” by an MD at a California hospital on a six-month demo trial with a state-of-the-art robotic-assisted bronchoscopy system, raising funds for “an Artificial Intelligence System to detect signs of cancerous lung nodules up to a year earlier than manual-only review of x-rays” without Brainchip being involved?!

Just a follow-up on my post above:

You’d expect cutting-edge technology/research to be mainly used/conducted at major academic centres such as university hospitals rather than at small rural hospitals such as the one in St. Helena, CA.

As for their use of a robotic-assisted bronchoscopy system as such, I am not too surprised: Napa County must surely rank among the most affluent areas in the US, and even the hospital in a town of about 60,000 close to where I live in Germany has been a da Vinci Centre (da Vinci being the state-of-the-art robotic surgical system manufactured by Intuitive Surgical for minimally invasive surgery) for more than a decade, with robotic-assisted surgeries performed by urologists, gynaecologists and general surgeons.

But why would a pulmonary and critical care specialist at a relatively small rural hospital be involved in a cutting-edge AI project? Well, first of all I would reply “Why not?!” Foresighted visionaries can be found anywhere!
He might have come across the benefits this particular AI system offers by reading about it, by learning about it from colleagues at conferences, or while working at another hospital prior to joining the one in St. Helena. And thanks to a relatively wealthy clientele that donates generously, funding doesn't appear to be the insurmountable obstacle it is for less fortunate institutions.

Yet another possibility would be that he came across Brainchip’s revolutionary technology (on the assumption that this particular yet to be fully funded Artificial Intelligence System will have Akida incorporated) through a different channel, namely thanks to a network that should not be underestimated. Judging from his name and outward appearance, Dr. Kiran Madhav Ubhayakar has Indian roots, and so do both Nandan Nayampally and Anil Mankar (who emigrated to the US after graduating from the renowned IIT Bombay).

I have close Indian American friends in Southern California (very cliché-like they are of course all doctors or lawyers 😂) and thus know very well how closely-knit the Indian community in the US is. And while India is a huge and diverse subcontinent, it seems to me that even to second- or third-generation immigrants it ultimately doesn’t matter whether you come from a Hindi, Gujarati, Marathi, Tamil or Malayalam-speaking background, as bonding over common Indian cultural (and religious) heritage such as enjoying culinary delicacies (whether biryani, butter chicken, chana masala, fish curry, naan or dosa), celebrating Diwali/Deepavali, watching Bollywood movies and music videos or upholding Indian wedding traditions is ultimately more important than emphasising the differences in their Indian backgrounds.

Note that Dr. Ubhayakar “received his medical degree from the University of Texas Medical Branch in Galveston, Texas, and completed his internship in internal medicine and pediatrics at the University of Texas Health Science Center at Houston. Ubhayakar completed his internal medicine residency at the University of Texas Medical Branch, and his fellowship in pulmonary and critical care at the University of Texas Southwestern in Dallas, Texas.”

Both Dallas and Houston are thriving hubs of the Indian-American community, and Nandan Nayampally lives in Austin, TX (according to his LinkedIn profile), not too far from Houston.
Less likely from a geographical point of view would be a connection through Anil Mankar, who lives in CA, just like Dr. Ubhayakar, although some 750 km (466 miles) south of him. Then again, info along the Indian information grapevine can travel large distances at high speed. 😂

Of course a possible connection via the Indian-American community is pure speculation, and we don’t even know whether or not Akida is involved. But keep in mind that business and private networks absolutely encourage cross-pollination, both on a local and on a global stage, with the rise of the internet age obviously having been a massive accelerator.

And even if Dr. Ubhayakar has not yet heard of Brainchip and Akida, one thing is for sure: Robotic surgery will undoubtedly play an increasingly important role in the future of medicine. Integrating AI will allow for improved diagnostics and decision-making and enable surgeons in many cases to diagnose and operate in one session, which will in turn save those patients precious time and lessen anxiety, as they won’t have to wait for their biopsy results first and - in case a tumour is found - make another appointment for surgery. And improved AI diagnostics (“System to detect signs of cancerous lung nodules up to a year earlier than manual-only review of x-rays”) will lead to many lives saved or at least prolonged - another amazing use case of Beneficial AI!



Napa biz buzz: Ubhayakar joins Adventist Health Physicians Network St. Helena

From the Biz Buzz: Napa Valley business news roundup series

Oct 11, 2022 Updated Jun 22, 2023

Dr. Kiran Ubhayakar

FOR THE REGISTER

Adventist Health announced that Dr. Kiran Ubhayakar, board-certified pulmonary and critical care specialist, has joined its staff.

Ubhayakar is board-certified in internal medicine, pulmonary medicine and critical care medicine, with special interest in critical care, endobronchial ultrasound and robotic navigational bronchoscopy.

Ubhayakar received his medical degree from the University of Texas Medical Branch in Galveston, Texas, and completed his internship in internal medicine and pediatrics at the University of Texas Health Science Center at Houston. Ubhayakar completed his internal medicine residency at the University of Texas Medical Branch, and his fellowship in pulmonary and critical care at the University of Texas Southwestern in Dallas, Texas.

He specializes in the diagnosis and treatment of conditions such as asthma, COPD, interstitial lung disease, pulmonary hypertension and lung cancer.
 
  • Like
  • Love
  • Wow
Reactions: 23 users