BRN Discussion Ongoing

TechGirl

Founding Member
Just browsing around as I do & came across the Machine Learning Department at CMU


No mention of us, but their recent news is all very relevant to us :unsure:



New Research Investigates How the Brain Processes Language​

Aaron Aupperlee
Tuesday, November 29, 2022



New research from a team in the Machine Learning Department shows which regions of the brain processed the meaning of combined words and how the brain maintained and updated the meaning of words.

Humans accomplish a phenomenal number of tasks by combining pieces of information. We perceive objects by combining edges, categorize scenes by combining objects, interpret events by combining actions, and understand sentences by combining words. But researchers don't yet have a clear understanding of how the brain forms and maintains the meaning of the whole — such as a sentence — from its parts. School of Computer Science (SCS) researchers in the Machine Learning Department (MLD) have shed new light on the brain processes that support the emergent meaning of combined words.

Mariya Toneva, a former MLD Ph.D. student now faculty at the Max Planck Institute for Software Systems, worked with Leila Wehbe, an assistant professor in MLD, and Tom Mitchell, the Founders University Professor in SCS, to study which regions of the brain processed the meaning of combined words and how the brain maintained and updated the meaning of words. This work could contribute to a more complete understanding of how the brain processes, maintains and updates the meaning of words, and could redirect research focus to areas of the brain suitable for future wearable neurotechnology, such as devices that can decode what a person is trying to say directly from brain activity. These devices can help people with diseases like Parkinson's or multiple sclerosis that limit muscle control.

Toneva, Mitchell and Wehbe used neural networks to build computational models that could predict the areas of the brain that process the new meaning of words when they are combined. They tested this model by recording the brain activity of eight people as they read a chapter of "Harry Potter and the Sorcerer's Stone." The results suggest that some regions of the brain process both the meaning of individual words and the meaning of combined words, while others process only the meanings of individual words. Crucially, the authors also found that one of the neural activity recording tools they used, magnetoencephalography (MEG), did not capture a signal that reflected the meaning of combined words. Since future wearable neurotechnology devices might use recording tools similar to MEG, one potential limitation is their inability to detect the meaning of combined words, which could affect their capacity to help users produce language.
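The article describes the method only at a high level, but the general "encoding model" recipe behind work like this, predicting recorded brain activity from features of the words being read and comparing individual-word features against combined-word features, can be sketched roughly as follows. The ridge regression, file names and array shapes are illustrative assumptions, not the authors' actual pipeline.

```python
# Rough sketch of an "encoding model": predict each brain region's recorded
# response from features of the text being read, then ask which regions are
# predicted better by combined-word features than by single-word features.
# Ridge regression, file names and shapes are assumptions for illustration.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import KFold

word_features = np.load("word_features.npy")    # (T, D) hypothetical stimulus features
brain_activity = np.load("brain_activity.npy")  # (T, R) hypothetical recordings, R regions

n_regions = brain_activity.shape[1]
scores = np.zeros(n_regions)
kfold = KFold(n_splits=4)

for train_idx, test_idx in kfold.split(word_features):
    model = Ridge(alpha=1.0).fit(word_features[train_idx], brain_activity[train_idx])
    predicted = model.predict(word_features[test_idx])
    for r in range(n_regions):  # correlation between predicted and measured activity
        scores[r] += np.corrcoef(predicted[:, r], brain_activity[test_idx, r])[0, 1]

scores /= kfold.get_n_splits()
print("cross-validated prediction accuracy per region:", scores)
```

Running the same procedure twice, once with individual-word features and once with combined-phrase features, and comparing the two score maps is, in spirit, how regions involved in meaning composition get identified.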

The team's work builds on past research from Wehbe and Mitchell that used functional magnetic resonance imaging to identify the parts of the brain engaged as people read a chapter of the same Potter book. The result was the first integrated computational model of reading, identifying which parts of the brain are responsible for such subprocesses as parsing sentences, determining the meaning of words and understanding relationships between characters.

For more on the most recent findings, read the paper "Combining Computational Controls With Natural Text Reveals Aspects of Meaning Composition," in Nature Computational Science.

For More Information
Aaron Aupperlee | 412-268-9068 | aaupperlee@cmu.edu
 
  • Like
  • Love
  • Fire
Reactions: 18 users

TechGirl

Founding Member
And another one


A Low-Cost Robot Ready for Any Obstacle
CMU, Berkeley Researchers Design Robust Legged Robot System

Aaron Aupperlee
Wednesday, November 16, 2022



A robotic system designed by researchers at CMU and Berkeley allows small, low-cost legged robots to maneuver in challenging environments.
This little robot can go almost anywhere.

Researchers at Carnegie Mellon University's School of Computer Science and the University of California, Berkeley, have designed a robotic system that enables a low-cost and relatively small legged robot to climb and descend stairs nearly its height; traverse rocky, slippery, uneven, steep and varied terrain; walk across gaps; scale rocks and curbs; and even operate in the dark.

"Empowering small robots to climb stairs and handle a variety of environments is crucial to developing robots that will be useful in people's homes as well as search-and-rescue operations," said Deepak Pathak, an assistant professor in the Robotics Institute. "This system creates a robust and adaptable robot that could perform many everyday tasks."

The team put the robot through its paces, testing it on uneven stairs and hillsides at public parks, challenging it to walk across stepping stones and over slippery surfaces, and asking it to climb stairs that for its height would be akin to a human leaping over a hurdle. The robot adapts quickly and masters challenging terrain by relying on its vision and a small onboard computer.

The researchers trained the robot with 4,000 clones of it in a simulator, where they practiced walking and climbing on challenging terrain. The simulator's speed allowed the robot to gain six years of experience in a single day. The simulator also stored the motor skills it learned during training in a neural network that the researchers copied to the real robot. This approach did not require any hand-engineering of the robot's movements — a departure from traditional methods.
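The article doesn't give implementation details, but the workflow it describes (thousands of simulated copies of the robot practising in parallel, with the learned controller stored in a neural network that is then copied to the real robot) typically looks something like the sketch below. The simulator stub, reward, network sizes and the simple policy-gradient update are placeholder assumptions, not the team's actual code.

```python
# Minimal sketch of massively parallel simulation training followed by
# copying the learned network to the real robot. The simulator stub, reward
# and the REINFORCE-style update are placeholders, not the actual setup.
import torch
import torch.nn as nn

NUM_ENVS = 4000  # the "4,000 clones" mentioned in the article
OBS_DIM, NUM_JOINTS = 48, 12

policy = nn.Sequential(
    nn.Linear(OBS_DIM, 256), nn.ELU(),
    nn.Linear(256, 128), nn.ELU(),
    nn.Linear(128, NUM_JOINTS),          # target joint positions
)
optimizer = torch.optim.Adam(policy.parameters(), lr=3e-4)

def step_parallel_sim(actions):
    """Placeholder for one batched physics step across all simulated clones.
    Returns next observations and per-clone rewards (e.g. forward progress)."""
    return torch.randn(NUM_ENVS, OBS_DIM), torch.randn(NUM_ENVS)

obs = torch.randn(NUM_ENVS, OBS_DIM)     # placeholder initial observations
for update in range(1000):
    dist = torch.distributions.Normal(policy(obs), 0.1)
    actions = dist.sample()
    next_obs, rewards = step_parallel_sim(actions)
    # REINFORCE-style surrogate loss; real training would use a modern RL algorithm.
    loss = -(dist.log_prob(actions).sum(dim=-1) * rewards).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    obs = next_obs

# The learned weights are what get transferred onto the physical robot.
torch.save(policy.state_dict(), "locomotion_policy.pt")
```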

Most robotic systems use cameras to create a map of the surrounding environment and use that map to plan movements before executing them. The process is slow and can often falter due to inherent fuzziness, inaccuracies, or misperceptions in the mapping stage that affect the subsequent planning and movements. Mapping and planning are useful in systems focused on high-level control but are not always suited for the dynamic requirements of low-level skills like walking or running over challenging terrains.

The new system bypasses the mapping and planning phases and directly routes the vision inputs to the control of the robot. What the robot sees determines how it moves. Not even the researchers specify how the legs should move. This technique allows the robot to react to oncoming terrain quickly and move through it effectively.
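As a rough illustration of what "routing vision directly to control" means, the sketch below feeds a depth image and the robot's joint state into a single network whose output is the next set of joint targets, with no map or planner in between. The architecture and input sizes are assumptions for illustration, not the published model.

```python
# Illustrative vision-to-control policy: depth image plus proprioception in,
# joint-position targets out, with no mapping or planning stage in between.
# Architecture and sizes are assumptions, not the published model.
import torch
import torch.nn as nn

class VisionToControlPolicy(nn.Module):
    def __init__(self, num_joints=12):
        super().__init__()
        self.vision = nn.Sequential(                   # encodes the depth image
            nn.Conv2d(1, 16, kernel_size=5, stride=2), nn.ELU(),
            nn.Conv2d(16, 32, kernel_size=5, stride=2), nn.ELU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.control = nn.Sequential(                  # fuses vision and body state
            nn.Linear(32 + 2 * num_joints, 128), nn.ELU(),
            nn.Linear(128, num_joints),                # target joint positions
        )

    def forward(self, depth_image, joint_pos, joint_vel):
        features = self.vision(depth_image)
        return self.control(torch.cat([features, joint_pos, joint_vel], dim=-1))

policy = VisionToControlPolicy()
depth = torch.rand(1, 1, 64, 64)                       # one front-camera depth frame
q, dq = torch.zeros(1, 12), torch.zeros(1, 12)         # joint positions and velocities
joint_targets = policy(depth, q, dq)                   # sent to the motors each control step
```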

Because there is no mapping or planning involved and movements are trained using machine learning, the robot itself can be low-cost. The robot the team used was at least 25 times cheaper than available alternatives. The team's algorithm has the potential to make low-cost robots much more widely available.

"This system uses vision and feedback from the body directly as input to output commands to the robot's motors," said Ananye Agarwal, an SCS Ph.D. student in machine learning. "This technique allows the system to be very robust in the real world. If it slips on stairs, it can recover. It can go into unknown environments and adapt."

This direct vision-to-control aspect is biologically inspired. Humans and animals use vision to move. Try running or balancing with your eyes closed. Previous research from the team had shown that blind robots — robots without cameras — can conquer challenging terrain, but adding vision and relying on that vision greatly improves the system.

The team looked to nature for other elements of the system, as well. For a small robot — less than a foot tall, in this case — to scale stairs or obstacles nearly its height, it learned to adopt the movement that humans use to step over high obstacles. When humans have to lift a leg up high to scale a ledge or hurdle, they use their hips to move the leg out to the side, a motion called abduction and adduction, giving the leg more clearance. The robot system Pathak's team designed does the same, using hip abduction to tackle obstacles that trip up some of the most advanced legged robotic systems on the market.

The movement of hind legs by four-legged animals also inspired the team. When a cat moves through obstacles, its hind legs avoid the same items as its front legs without the benefit of a nearby set of eyes. "Four-legged animals have a memory that enables their hind legs to track the front legs. Our system works in a similar fashion," Pathak said. The system's onboard memory enables the rear legs to remember what the camera at the front saw and maneuver to avoid obstacles.
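The "memory" described here is usually realised as a recurrent state inside the control network, so the hind legs can react to terrain the front camera saw several steps earlier. A minimal sketch, assuming a GRU cell and illustrative sizes rather than the team's actual architecture:

```python
# Sketch of onboard memory for the hind legs: a recurrent state carries what the
# front camera saw a few steps ago, so the rear legs can still avoid it once it
# has left the field of view. The GRU cell and sizes are illustrative assumptions.
import torch
import torch.nn as nn

class RecurrentLocomotionPolicy(nn.Module):
    def __init__(self, vision_dim=32, state_dim=24, hidden_dim=128, num_joints=12):
        super().__init__()
        self.memory_cell = nn.GRUCell(vision_dim + state_dim, hidden_dim)
        self.action_head = nn.Linear(hidden_dim, num_joints)

    def forward(self, vision_features, body_state, memory):
        memory = self.memory_cell(torch.cat([vision_features, body_state], dim=-1), memory)
        return self.action_head(memory), memory

policy = RecurrentLocomotionPolicy()
memory = torch.zeros(1, 128)                 # persists across control steps
for step in range(3):                        # control loop
    vision = torch.randn(1, 32)              # encoded front-camera frame
    body = torch.randn(1, 24)                # joint positions and velocities
    action, memory = policy(vision, body, memory)
```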

"Since there's no map, no planning, our system remembers the terrain and how it moved the front leg and translates this to the rear leg, doing so quickly and flawlessly," said Ashish Kumar a Ph.D. student at Berkeley.

The research could be a large step toward solving existing challenges facing legged robots and bringing them into people's homes. The paper "Legged Locomotion in Challenging Terrains Using Egocentric Vision," written by Pathak, Berkeley professor Jitendra Malik, Agarwal and Kumar, will be presented at the upcoming Conference on Robot Learning in Auckland, New Zealand.

For More Information
Aaron Aupperlee | 412-268-9068 | aaupperlee@cmu.edu
 
  • Like
  • Fire
  • Love
Reactions: 16 users
You get no marx for that!
Oh boy🤣 that's a shocker.
I'll drop this here since I'm trying to cut down on my postings 🤣


Got a follow from this chap after liking this tweet.
And my Twitter handle references Akida.
I did not give him a follow but will do shortly.
 
  • Like
  • Sad
Reactions: 5 users

TechGirl

Founding Member
And another


Roll It Out​

CMU Robotics Research Finds Flattening Dough Requires Precise Adjustments​

Stacey Federoff
Tuesday, November 1, 2022

SCS researchers used a planning algorithm called trajectory optimization to study how a robot should adjust its movements as it rolled dough into a circle.

Anyone who's made a pizza knows that flattening the crust requires a series of adjustments before it's ready for toppings. The cook starts with an initial dough ball, then makes slight changes, rotating the rolling pin and putting pressure on the dough over and over until it's a flat circle.

While it can be a tricky technique for humans to master, researchers in Carnegie Mellon University's School of Computer Science wondered what it would take to teach a robot this task. Specifically, recent research from the Robotics Institute's (RI) Robots Perceiving and Doing (RPAD) lab used a planning algorithm called trajectory optimization to investigate how a robot should adjust its movements as it worked toward flattening dough into a circle. The team presented their work, "Learning Closed-Loop Dough Manipulation Using a Differentiable Reset Module," at the 2022 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2022) last week in Kyoto, Japan.

"The robot rolls out the dough in simulation, then gives feedback on how well the trajectory is doing rolling the dough," said Xingyu Lin, a member of the research team who earned his Ph.D. in robotics at CMU and is now a postdoctoral scholar at the Berkeley Artificial Intelligence Research (BAIR) Lab. "Then we updated the robot movements based on the result of this roll out. We performed this roll-out-and-update process iteratively until we found the trajectory that gave the desired outcome. It's very hard for trajectory optimization to find the final solution by itself, however, so that is the challenge."

The robot can learn how to improve rolling the dough in one direction, but resetting and rolling again is more difficult. The team developed a reset module to teach the robot to account for the first roll when adjusting before making a second roll. This way, the dough comes out round.

"We brainstormed some approaches, and decided that if we reset the tool in between rolling, then made the reset compatible with the trajectory optimizer, we could jointly optimize multiple rolls," said Carl Qi, a master's student in the Machine Learning Department who worked on the research.

This work is one of three papers related to manipulating dough published by the RPAD lab. The other papers describe how a robot should analyze and approach the task of manipulating the dough, and how it should plan a sequence of actions using different tools to prepare the dough. The robot has to plan a hierarchy of choices for up to six stages, including using a knife to cut the dough, a spatula or scraper to move it, or a rolling pin to roll it.

The overall goal of the lab is to improve how robots handle deformable objects, such as cloth, that change when you touch them. This could enable a class of robots useful for household tasks such as cooking, folding laundry and cleaning.

Research often focuses on robots interacting with rigid objects. The challenge of working with deformable objects allows the RPAD lab to carve out new territory for innovation.

"Dough is very challenging, so anything you do with dough is really interesting to explore," said David Held, an assistant professor in the RI and head of the RPAD lab. "Other researchers are pushing us to see what more complex things we can do with dough, like braiding challah or making pastries."

For More Information
Aaron Aupperlee | 412-268-9068 | aaupperlee@cmu.edu
 
  • Like
  • Fire
  • Love
Reactions: 12 users
Well I suppose in a socialist country fairness dictates that if the state is inefficient and ruins the economy, then an efficient, well-run, profitable private enterprise has to be brought under state control and reduced to the same level as the state.

That's what socialism is, is it not? Every opinion equal, everyone at the same level. As you cannot make all people highly intelligent, the only way to achieve the socialist state is to lower everyone to the lowest common denominator. After all, it's easy for smart people to play dumb but not so easy for the dumb to play smart.🤣😂🤡😂🤣😎

I am just pleased he has escaped with his life.

I would suggest Brainchip give him a job but the CCP would probably shoot his relatives until he gave up AKIDA’s secret sauce.

My opinion only DYOR
FF

AKIDA BALLISTA
I agree with your points but am going to avoid commenting further as it will turn into a long-winded ranting session for me.
Followed by more ranting and some cussing, followed by more cussing, and finally garnished with more cussing.
 
  • Haha
  • Like
Reactions: 9 users

TechGirl

Founding Member
And this one is special just because of the problem it is trying to tackle.


Visualization Tool Helps Law Enforcement Identify Human Trafficking​

Aaron Aupperlee
Tuesday, October 25, 2022



A data visualization tool developed in part by SCS researchers could assist law enforcement agencies working to combat human trafficking by identifying patterns in online escort advertisements that often indicate illegal activity.

A data visualization tool developed by School of Computer Science researchers, collaborators from other universities and experts in the field could assist law enforcement agencies working to combat human trafficking by identifying patterns in online escort advertisements that often indicate illegal activity. TrafficVis, which helps analysts visualize data pulled from millions of ads, recently received a best paper honorable mention at IEEE VIS 2022, one of the top visualization conferences.

TrafficVis uses data collected by InfoShield and similar algorithms designed to scan and cluster similarities in the text of online ads to help law enforcement direct their investigations and better identify human traffickers and their victims. SCS researchers also worked on InfoShield, which can collate millions of advertisements and highlight common phrasing or duplication among them. Since a trafficker may write ads for several victims, it is highly likely that clustering commonalities will point to something suspicious.
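The article doesn't spell out InfoShield's algorithm, but the underlying idea, grouping ads that share near-duplicate phrasing because one author often writes many of them, can be sketched with off-the-shelf text similarity. The TF-IDF features and threshold below are assumptions for illustration, not the InfoShield method itself.

```python
# Toy sketch of the clustering idea described above: link ads whose text is
# nearly identical, then flag clusters with many members. TF-IDF similarity and
# the 0.6 threshold are illustrative assumptions, not the InfoShield algorithm.
from sklearn.feature_extraction.text import TfidfVectorizer
from scipy.sparse.csgraph import connected_components

ads = [
    "new in town, call Anna, available 24/7, discreet",
    "new in town, call Bella, available 24/7, discreet",
    "vintage bicycle for sale, lightly used",
]

tfidf = TfidfVectorizer().fit_transform(ads)        # one unit-norm vector per ad
similarity = (tfidf @ tfidf.T).toarray()            # cosine similarity between ads
adjacency = (similarity > 0.6).astype(int)          # link near-duplicate ads
n_clusters, labels = connected_components(adjacency, directed=False)

for cluster in range(n_clusters):
    members = [ad for ad, label in zip(ads, labels) if label == cluster]
    if len(members) > 1:                            # many near-identical ads is a red flag
        print("suspicious cluster:", members)
```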

TrafficVis is the first interface for cluster-level human trafficking detection and labeling. Experts can use the tool to label clusters as human trafficking or as other suspicious activity that is not human trafficking, such as spam and scams. This will quickly create labeled datasets to enable further human trafficking research. The team that designed TrafficVis included Computer Science Department Ph.D. students Catalina Vajiac, Meng-Chieh Lee and Namyong Park; computer science and machine learning faculty member Christos Faloutsos; Georgia Tech faculty member and CMU alumnus Polo Chau; McGill University faculty member Reihaneh Rabbany; and experts Andreas Olligschlaeger and Rebecca Mackenzie from Marinus Analytics, a CMU spin-off company that specializes in human trafficking detection.

For More Information
Aaron Aupperlee | 412-268-9068 | aaupperlee@cmu.edu
 
  • Fire
  • Like
Reactions: 7 users

Sheesh slow down I'm struggling to keep up 🤣
 
  • Haha
  • Love
  • Like
Reactions: 6 users
Hi @Fact Finder, imagine how spun out we'd all be if the next iteration of AKIDA was called "ACE"? 😝

Thanks to @Sam for providing the screenshot below from ChatGPT3.

View attachment 26659
Hi @Bravo

One of the things academic papers require is footnotes and references attributing the source material.

Have you asked ChatGPT to write this article including footnotes and references attributing the source materials?

If so, what happened?

Regards
FF

AKIDA BALLISTA
 
  • Like
  • Haha
  • Love
Reactions: 16 users

Terroni2105

Founding Member
I for one will be much happier with Sean doing the interview. I'll probably get shot down, though I have never enjoyed RT's podcasts, especially of late.
God help us if Sean uses 10% of the podcast asking about "favourite super heroes" .................:eek:
Just an "inkling" ......... RT moved to "ecosystem guy" from "world wide sales" ............... BRN is hiring more sales staff ........... RT, imo, might just be on his "exit stage left".

Definitely watching this next podcast with interest.

AKIDA BALLISTA
Not at all. I can't agree about RT exiting; he is the man who would have pulled together the Intel Foundry Services relationship, given his role is VP of Ecosystems and Partnerships.
 
  • Like
  • Fire
  • Love
Reactions: 21 users

Sam

Nothing changes if nothing changes
Hi @Fact Finder, imagine how spun out we'd all be if the next iteration of AKIDA was called "ACE"? 😝

Thanks to @Sam for providing the screenshot below from ChatGPT3.

View attachment 26659
I'm still picking up the bundle that I dropped last night😂 I will be blown away if everything on that screenshot and the list of partners come to fruition 🫠🫠🫠
 
  • Like
Reactions: 6 users

McHale

Regular
If I recall correctly, Ken Scarince stated that LDA cannot short or lend out shares.
That said, they are loan sharks. The alternative is a capital raise where we give shares to the instos at, what, a 30% discount!!
Under the LDA model the Institutional investors have to buy on market along with retail investors and purchase at market prices.
I agree with what you are saying FJ, but will also add that every time a drawdown happens under the LDA deal, this same conversation fires up. LDA are getting less than a 10% discount: "91.5% of the higher of the VWAP of shares of the pricing period".
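To put rough numbers on that formula, a purely hypothetical illustration (the VWAP figure is made up for arithmetic only, not a prediction):

```python
# Purely hypothetical illustration of the quoted LDA pricing formula:
# issue price = 91.5% of the relevant VWAP over the pricing period.
vwap = 0.70                            # made-up VWAP in AUD, for arithmetic only
lda_issue_price = 0.915 * vwap         # roughly an 8.5% discount to market
cap_raise_price = (1 - 0.30) * vwap    # a hypothetical 30%-discount raise, for comparison

print(f"LDA issue price: {lda_issue_price:.3f}, 30%-discount raise: {cap_raise_price:.3f}")
```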

The timing right now might seem counterintuitive, but Ken Scarince has timed these LDA Capital Call Notices to a nicety in the past, and as some posters here have already speculated, the timing would seem to indicate that there is some kind of price-sensitive news lurking behind the curtain presently.

So I really don't think anyone in the management at BRN wants to see their shareholdings diluted any more than anyone on here does, so it would appear highly likely that something is coming very soon, because the end date for the exercise of the Put option "is anticipated in late March or early April".

With regard to Rob Telson, he has obviously been applying himself diligently and doing a very good job expanding the ecosystem, which has kind of exploded lately. Some of the podcasts have been significantly better than others; however, IMO Rob is a player, and if you are in sales and promotion you will not get anywhere if you can't let yourself play. Sales requires courage and conviction (it also helps a lot if you've got a decent product to work with). The superhero thing encourages the people Rob is engaging with to be playful. It can come across as a bit flaky, but I like Rob's modus operandi there: if I am correct, Rob is about taking the opportunity to build good relationships anywhere he can, and that is definitely about being able to have fun.

A lot has happened since Rob came on board, and even more since Sean was appointed. Having said all that, I've gotta say that I am a show-me-the-money kind of guy, so if my speculation above about coming price-sensitive news is accurate, it can't come soon enough for me. My view of global markets going forward is somewhat less enthusiastic than some views posted here, so I definitely want to see our SP well above where it languishes right now.

Having said that, I am not expecting a crash next week, but I have doubts about where the market will be well within 5 years. I could be wrong on that, and no one would be happier than me if that were the case.
 
  • Like
  • Fire
  • Love
Reactions: 43 users

Sam

Nothing changes if nothing changes
I thought I’d bring another round of crazy, apologies in advance but might be a bit of fun to see if we can find connections of sorts😅

Sorry again
 

Attachments

  • DB125C1D-2E80-4C85-9D4A-0EB2BA41D73D.png (522.8 KB)
  • F361EC7A-8A74-4265-9BA2-65D5D3220423.png (510.8 KB)
  • 1C52E74E-0661-47AD-A843-6D8B614F6DE8.png (484.9 KB)
  • 0CC29AEB-6F12-4B75-AB5D-B86E80A7066F.png (457.7 KB)
  • Haha
  • Like
Reactions: 6 users

Euks

Regular
I thought I’d bring another round of crazy, apologies in advance but might be a bit of fun to see if we can find connections of sorts😅

Sorry again
I could show you some links between MegaChips, Edge Impulse, Prophesee and NVISO, but for some strange reason the artificial intelligence is not listing those as partners of BrainChip in those screenshots 🤷‍♂️🤦‍♂️😂😂
 
  • Like
  • Haha
Reactions: 2 users
Totally unrelated as are most of my posts 😐
Was going to post an article which would be on topic, but then read something within the article that made me decide against it.

The whole inclusive thing🤯
Let's be inclusive yet let's divide ourselves into separate labels.
Cheerio off for a couple of beers.🤣
 
  • Haha
  • Like
Reactions: 6 users

equanimous

Norse clairvoyant shapeshifter goddess
1673326114754.png
 
  • Haha
  • Like
  • Love
Reactions: 17 users

Foxdog

Regular
  • Like
  • Fire
Reactions: 6 users

DK6161

Regular
SmartSelect_20230110_125953_Brave.jpg
 
  • Haha
  • Like
  • Love
Reactions: 14 users

equanimous

Norse clairvoyant shapeshifter goddess
He needs a bigger shield at the moment - what a crap day on the markets. Only one thing to do in times like this. Buy more 😂
I did. Purchased at .673 and almost nailed the bottom of the day, missing it by only .003 lol
 
  • Like
  • Love
  • Fire
Reactions: 13 users

TheFunkMachine

seeds have the potential to become trees.
Just a thought on the podcast: why Mr Hehir? Could it be a case of bringing on the big gun to deal with a giant? Mr Chatelain is a Managing Director.
View attachment 26648
View attachment 26649

Learning 🏖
"Exciting times" (JMHO)
Maybe it is as simple as Sean and Jean rhyming, so it's natural for Sean to take this one. Rob and Jean just doesn't have the same effect. Or could it maybe be that Sean and Jean have a good rapport from previous relations and Sean wanted to have a chat with an old buddy? Who knows, but to throw dirt on Rob and assume he is on his way out, or any of the other rubbish I have read here, is just silly.

Time will tell. All is well.
 
  • Like
  • Love
Reactions: 6 users