BRN Discussion Ongoing

Frangipani

Top 20
Thank you @Frangipani , but I do not understand the results.
Can you explain further, or is there a page missing? The last page, "Experimental Results", does not state which chip it is.

Good evening, CHIPS,

The abbreviations DPU, TPU and NPU refer to three different hardware platforms, cf. the preceding presentation slide titled “AI Model Overview” or the following excerpt from my May 2025 post that I had tagged above:

https://thestockexchange.com.au/threads/brn-discussion-ongoing.1/post-462394

“The revised GIASaaS (Global Instant Satellite as a Service, formerly Greek Infrastructure for Satellite as a Service) concept website by OHB Hellas is finally back online - and Akida now no longer shows up as UAV1 (see my post above) but as one of three SATs! 🛰

AKIDA, the envisioned SAT with the AKD1000 PCIe Card onboard, is slated to handle both Object Detection and Satellite Detection, while the planned KRIA and CORAL satellites (equipped with a Xilinx KRIA KV260 Vision AI Starter Kit and a Google Coral TPU Dev Board, respectively) are tasked with Vessel Detection, Cloud Detection and Fire Detection (for some reason OHB Hellas removed Flood Detection from the list of applications).

Please note that this is currently a concept website only.”


Flood detection was originally also slated to be handled by both KRIA (DPU) and CORAL (TPU) (see here as well as under Experimental Results), but is no longer listed as a selectable application on the updated GIASaaS website https://giasaas.eu/.


Generally speaking, it would of course be helpful to also have the video recordings to go along with these conference presentations…
 
  • Like
  • Love
  • Fire
Reactions: 11 users

7für7

Top 20
Good morning chips!

Will we finally start to rise, or will we continue to ride the same rollercoaster?

Pass Out Mr Bean GIF
 
  • Like
  • Haha
  • Thinking
Reactions: 7 users

FJ-215

Regular
Good morning chips!

Will we finally start to rise, or will we continue to ride the same rollercoaster?

Pass Out Mr Bean GIF
Refreshing to see some green this morning.
Hoping it continues, but without an announcement containing revenue I think we are defying gravity.
Come on Sean......59 days to Christmas!!!
 
  • Like
Reactions: 7 users

HarryCool1

Regular
@Esq.111 is it time to get a little more excited about the buy side increase or am I just interpreting my organic herbal tea leaves wrong??
 
  • Haha
  • Like
Reactions: 9 users

7für7

Top 20
@Esq.111 is it time to get a little more excited about the buy side increase or am I just interpreting my organic herbal tea leaves wrong??

Judging by all the posts shared over the weekend… each one sounding like at least 10 price-sensitive announcements… we should easily hit a dollar this week 🤪 … and that’s without even drinking any herbal tea!
 
  • Haha
  • Like
  • Thinking
Reactions: 7 users

Bravo

Meow Meow 🐾
Following on from the previous post on Anduril's EagleEye headset: if you take a look back at the BrainChip Technology Roadmap video (below), Jonathan Tapson makes the following comments.


26.28 mins

"The next step is to integrate some cameras into the headset, which will allow it to interpret the scene both in front of and behind the person. You may have seen movies where special forces operate silently and use hand signals to communicate. I mentioned to one of our associates in this area that there’s no reason not to recognise hand gestures made by someone behind you, so you can literally have eyes in the back of your head. They were actually speechless at that possibility because apparently that’s a huge problem when operating silently; you can’t say, ‘Hey, look over here,’ but you still need to be able to signal to each other. That idea was apparently just mind-blowing, and you can see how we can grow the solution from that point.”



So I was very interested to have stumbled over this Instagram pic in my research this morning. 🧐



View attachment 92216





And this...


View attachment 92217

View attachment 92214





More on Anduril's EagleEye helmet as per the above posts.

Over the weekend I watched the BrainChip Technology Roadmap presentation again. There was quite a lot of information that I hadn't really registered or fully appreciated on my first viewing.



25.36 mins - Jonathan Tapson discussing Akida 3.0

"So this is an outcome that we've actually proposed to the US defense industry and there's a very high probability of us getting a very positive outcome there. So every soldier now wears a headset and it includes a radio and basic hearing protection. And we actually want to take that headset to the next level. So the things we can already do; we can clear up speech and noise extremely well and we can already answer simple questions, so questions like I have a casualty who is bleeding form the head - what do I do? It's very useful if the soldier can just say that into the headset and get some kind of helpful answer straight away. Or how do I get this vehicle that I've never driven before into gear and moving. It's able to actually give those kind of responses. AND WE HAVE THOSE PARTS WORKING ON FPGA FOR DEMONSTRATION ALREADY."


So, the parts were already working on FPGA for demonstration and the slide from the presentation shows that AKD 3 FPGA is due Q2 2026.


Screenshot 2025-10-27 at 10.05.00 am.png







Bearing this timing in mind, this Defense Scoop article (see below) says "Anduril Industries will deliver roughly 100 units of its new AI-powered helmet and digitally-enhanced eyewear system — EagleEye — to select U.S. Army personnel during the second quarter of the upcoming calendar year, the company’s founder told reporters."





Screenshot 2025-10-27 at 10.17.01 am.png







Seems like 100 units would be samples only, like a small engineering run, which would typically be used to evaluate new sensor, compute or firmware combinations.

So, my point being, the Akida 3.0 FPGA parts are working for demonstration right now, and the AKD 3 FPGA due date aligns with the Q2 2026 timeframe outlined for the delivery of Anduril's samples.

Is this just remarkably coincidental timing or something else?

Sounds like perfect timing for a funded prototype that feeds into a 2026–27 soldier evaluation, followed by production if tests succeed.

Still speculation at this stage obviously, as BrainChip hasn't been publicly named as a partner, but Luckey told reporters that Anduril "plans to announce more partners over the next year". So, we'll just have to keep an eye out for any new announcements to see if anything solid is confirmed.
 
Last edited:
  • Like
  • Fire
  • Love
Reactions: 43 users

7für7

Top 20
Shorters still trying hard


Trading Trade GIF by Porky Island
 
  • Like
  • Sad
Reactions: 2 users

Xray1

Regular
Hi Manny, "We've been filing TENNs patents since 2022, e.g.:

WO2023250093A1 METHOD AND SYSTEM FOR IMPLEMENTING TEMPORAL CONVOLUTION IN SPATIOTEMPORAL NEURAL NETWORKS 20220622

The recent Akida 1 & Akida 2 data sheets indicate that there are 128 MACs per node. This is a major change in the circuit configuration of Akida.

That means that our engineers have been working on the new hardware design and new TENNs models for the last 3 years.

No doubt the EAPs were kept up to date with the developments, consistent with patent application secrecy, under NDA.

The unfortunate consequence of this is that, while the development has been taking place over the last 3+ years, we have had nothing to sell but our back catalogue Akida 1/1500 chips. Even the IP was changing hourly.

Software simulation and FPGAs would have been the only way the new hardware could have been demonstrated, apart from the existing old hardware for Akida 1 and 1500.

Clearly the old Akida had demonstrated to the EAPs the potential of Akida, and the new configuration has been shown to exceed the old.

In the meantime, we shareholders have been kept in the dark because the company has been buried in NDAs up to its eyebrows.

But now we are about to see "The Bonfire of the NDAs".
A few of Sean's "BOOKINGS" wouldn't go astray either :) :)
 
  • Like
  • Love
Reactions: 5 users

Esq.111

Fascinatingly Intuitive.
@Esq.111 is it time to get a little more excited about the buy side increase or am I just interpreting my organic herbal tea leaves wrong??
Top of the morning HarryCool1 ,

Certainly feels like the wind may have changed direction .

Close-up of woman chef hands applying oil on raw sardines on a white tray


Feels like a few retail holders may have been consumed as a side dish , main course to follow .

Obviously I'm awaiting the full banquet.

Regards,
Esq
 
  • Haha
  • Like
  • Fire
Reactions: 19 users

AARONASX

Holding onto what I've got
Top of the morning HarryCool1 ,

Certainly feels like the wind may have changed direction .

Close-up of woman chef hands applying oil on raw sardines on a white tray


Feels like a few retail holders may have been consumed as a side dish , main course to follow .

Obviously I'm awaiting the full banquet.

Regards,
Esq
Soon my friend!

monty python GIF by Head Like an Orange
 
  • Haha
  • Like
Reactions: 10 users
Can anyone explain that smell??
Is it fear?
Is it the start of rocket 🚀 fuel leaking?
Or is it success smouldering?
 
  • Like
  • Haha
Reactions: 11 users

TECH

Top 20
Hey Bravo.......I'm already looking forward to smelling the roses during my personal review of our company's progress in June 2026.

At this stage it appears to be a solid no brainer.......HOLD, HOLD, and continue to Hold.

Thanks for all your research, mate, I appreciate your efforts in highlighting potential links to our wonderful technology.

Cheers......Tech :coffee:(y)
 
  • Like
  • Fire
  • Love
Reactions: 26 users

keyeat

Regular
Would be nice if the share price trajectory looked like this!

1761534342298.png
 
  • Like
  • Haha
  • Sad
Reactions: 7 users

FJ-215

Regular
Would be nice if the share price trajectory looked like this!

View attachment 92430
Looks more dramatic on the 12-month chart.

Interesting that shorts increased after the announcements on Monday 20th. My guess is that they are eyeing off a call on LDA and potentially 55 million shares being issued, which would allow them to cover their positions. :cautious:
 
  • Like
  • Thinking
Reactions: 4 users
FF

My personal list of BrainChip Engagements as at 27 October, 2025
1. FORD
2. VALEO
3. RENESAS
4. NASA
5. TATA CONSULTANCY SERVICES
6. MEGACHIPS
7. MOSCHIP
8. SOCIONEXT
9. PROPHESEE
10. VVDN
11. TEKSUN
12. Ai LABS
13. NVISO (now BeEMOTION)
14. EMOTION3D
15. ARM
16. EDGE IMPULSE (a QUALCOMM company)
17. INTEL
18. GLOBALFOUNDRIES
19. BLUE RIDGE ENVISIONEERING (now wholly owned by Parsons)
20. MERCEDES-BENZ
21. ANT 61
22. QUANTUM VENTURA
23. INFORMATION SYSTEMS LABORATORIES
24. INTELLISENSE SYSTEMS
25. CVEDIA
26. LORSER INDUSTRIES
27. SiFive
28. IPROSILICONE
29. SALESLINK
30. NUMEM
31. VORAGO
32. NANOSE
33. BIOTOME
34. OCULI
35. CIRCLE8 CLEAN TECHNOLOGIES
36. AVID GROUP
37. TATA ELXSI
38. NEUROBUS
39. EDGX
40. EUROPEAN SPACE AGENCY
41. UNIGEN
42. iniVation
43. SAHOMA CONTROLWARE
44. MAGIKEYE
45. MYWAI
46. INFINEON
47. ERICSSON
48. MICROCHIP
49. ONSEMI
50. IPSOLON RESEARCH
51. OHB HELLAS
52. ACCENTURE
53. FRONTGRADE GAISLER
54. DELL TECHNOLOGIES
55. BOSTON DYNAMICS
56. AIRBUS
57. PARSONS CORPORATION
58. BASCOM HUNTER
59. ExeLANCE IT
60. US AIR FORCE RESEARCH LABORATORY
61. ONSOR
62. ANDES TECHNOLOGY
63. DEGIRUM
64. VEDYA
65. MULTICOREWARE
66. ARQUIMEA
67. LOCKHEED MARTIN
68. RTX - RAYTHEON & COLLINS
69. Nurjana Technologies
70. CHELPIS QUANTUM GROUP
71. MIRLE GROUP
72. BOSCH
73. RENAULT
74. STMICROELECTRONICS
75. HAILA
76. DATA SCIENCE UA
77. BRAVE1
78. SPANIDEA
79. Ai COWBOYS
80. COLFAX
81. WEEBIT NANO
82. University of Virginia
83. University of Oklahoma
84. Arizona State University
85. Carnegie Mellon University
86. Rochester Institute of Technology
87. Drexel University
88. Cornell Tech - founded by Cornell University & Technion (Israel Institute of Technology and sponsor of Nanose)
89. University of Western Australia
90. Penn State University
 
  • Like
  • Love
  • Wow
Reactions: 38 users

7für7

Top 20

Some are very speculative… but it would be nice.
 

IloveLamp

Top 20
🤔

1000012955.jpg
1000012952.jpg
 
  • Like
  • Fire
  • Wow
Reactions: 11 users

7für7

Top 20
Toward the end, the rats always give it their best… it’s unbelievable that this isn’t being dealt with,
and that BrainChip itself doesn’t take action by submitting a notice to the ASX… it’s obvious that something isn’t right here.
 
  • Like
  • Thinking
Reactions: 5 users

Tothemoon24

Top 20
BOSSA looks like one to keep an eye on in the hearing aid space.
IMG_1676.jpeg



IMG_1677.jpeg






In a busy room full of talking people, most of us can still pick out one voice to focus on. This common yet complex task—known as the “cocktail party effect”—relies on the brain’s incredible ability to sort through sound. But for people with hearing loss, filtering out background noise can feel impossible. Even the most advanced hearing aids often struggle in these noisy environments.

Now, researchers at Boston University may have found a new way to help. They’ve developed a brain-inspired algorithm that allows hearing aids to better isolate individual voices in a crowd. When tested, this method boosted speech recognition by an impressive 40 percentage points, far outperforming current technologies.


A New Approach to an Old Problem​

In crowded social settings like dinner parties or workplace meetings, conversations often overlap. For those with hearing loss, these situations can be frustrating. Even with hearing aids, voices blur together in a mess of sound. This makes it hard to follow conversations, stay engaged, or even participate at all.


Virginia Best, a speech and hearing researcher at BU, says this is the number one complaint among those with hearing loss. “These environments are very common in daily life,” Best explains, “and they tend to be really important to people.”
Traditional hearing aids often include tools like directional microphones—also called beamformers—that try to focus on sounds coming from one direction. But these tools have limitations. In complex environments with many voices, beamforming often fails. In fact, in tests conducted by the BU team, the standard industry algorithm didn’t help much—and sometimes made things worse.
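
For anyone wondering what "beamforming" actually does under the hood, here is a minimal delay-and-sum sketch in Python (my own illustration under a free-field plane-wave assumption; the function and variable names are mine, not from any hearing-aid firmware). Each microphone signal is phase-shifted so that sound arriving from one chosen direction lines up across the array, then the channels are averaged, which reinforces that direction and partially cancels others:

```python
import numpy as np

def delay_and_sum(mics, mic_positions, look_direction, fs, c=343.0):
    """Steer a small microphone array toward `look_direction` (a unit vector).

    mics: (n_mics, n_samples) array of synchronised microphone signals.
    mic_positions: (n_mics, 3) mic coordinates in metres.
    Returns a single beamformed channel. Illustration only.
    """
    n_mics, n_samples = mics.shape
    # Relative arrival time of a plane wave from the look direction at each mic.
    delays = mic_positions @ look_direction / c        # seconds, per mic
    freqs = np.fft.rfftfreq(n_samples, d=1.0 / fs)     # FFT bin frequencies
    spectra = np.fft.rfft(mics, axis=1)                # per-mic spectra
    # A pure delay is a linear phase shift: align every mic to the look direction.
    aligned = spectra * np.exp(2j * np.pi * freqs * delays[:, None])
    return np.fft.irfft(aligned.mean(axis=0), n=n_samples)
```

The catch, and the reason the article's results are plausible, is that every interfering talker still leaks into the average; a single fixed beam only helps when the competing voices come from clearly different directions.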
That’s where the new technology, known as BOSSA, comes in. BOSSA stands for Biologically Oriented Sound Segregation Algorithm. It was developed by Kamal Sen, a biomedical engineering professor at BU’s College of Engineering. “We were extremely surprised and excited by the magnitude of the improvement in performance,” says Sen. “It’s pretty rare to find such big improvements.”

Built on Brain Science​

Sen has spent two decades exploring how the brain decodes sound. His work focuses on how sound signals travel from the ears to the brain and how certain neurons help identify or suppress sounds. One key finding? The brain uses “inhibitory neurons” to cancel out background noise and enhance the sounds we want to hear.
All subjects' average word recognition scores. (CREDIT: Kamal Sen, et al.)
“You can think of it as a form of internal noise cancellation,” Sen says. Different neurons are tuned to respond to different directions and pitches. This lets your brain focus attention on one sound source while ignoring others.
BOSSA was built to mimic this process. The algorithm uses spatial cues—like how loud a sound is and how quickly it arrives in each ear—to pinpoint its location. It then filters sounds based on these cues, separating them like your brain would. “It’s basically a computational model that mimics what the brain does,” Sen says.
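
To make that idea concrete, here is a rough, hypothetical sketch in Python (emphatically not BOSSA itself; the real algorithm is described in the Communications Engineering paper) of filtering by one spatial cue: estimate the interaural time difference for each time-frequency bin from a two-ear recording, and keep only the bins whose cue matches the target talker's direction:

```python
import numpy as np
from scipy.signal import stft, istft

def spatial_cue_mask(left, right, fs, target_itd=0.0, itd_tol=1e-4, nperseg=512):
    """Keep time-frequency bins whose interaural time difference (ITD) matches
    the target talker's direction; zero out the rest. Toy illustration only.

    left, right: signals recorded at the two ears.
    target_itd: expected arrival-time difference in seconds (0.0 = straight ahead).
    """
    f, _, L = stft(left, fs=fs, nperseg=nperseg)
    _, _, R = stft(right, fs=fs, nperseg=nperseg)
    # Interaural phase difference per bin, converted to a per-bin time difference.
    ipd = np.angle(L * np.conj(R))
    with np.errstate(divide="ignore", invalid="ignore"):
        itd = np.where(f[:, None] > 0, ipd / (2 * np.pi * f[:, None]), 0.0)
    # Binary spatial mask: 1 where the bin's ITD is close to the target direction.
    mask = (np.abs(itd - target_itd) < itd_tol).astype(float)
    _, out = istft(L * mask, fs=fs, nperseg=nperseg)
    return out
```

BOSSA itself is considerably more sophisticated (per the article it mimics the brain's inhibitory, direction-tuned processing rather than applying a hard binary mask), but the sketch shows why spatial cues can separate voices that a single beam cannot.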

Testing BOSSA in Real-Life Situations​

To find out if BOSSA really works, the BU team tested it in the lab. They recruited young adults with sensorineural hearing loss, the most common form, often caused by genetics or childhood illness. Participants wore headphones and listened to simulated conversations, with voices coming from different directions. They were asked to focus on one speaker while the algorithm worked in the background.
Each person completed the task under three different conditions: no algorithm, the standard beamforming algorithm used in current hearing aids, and BOSSA. The results were striking. BOSSA delivered a major improvement in speech recognition. The standard algorithm showed little or no improvement—and in some cases, performance dropped.
Speech reception thresholds (SRT) are shown as boxplots for each processing condition. (CREDIT: Kamal Sen, et al.)
Alexander Boyd, a BU PhD candidate in biomedical engineering, helped collect and analyze the data. He was also the lead author of the study, which was published in Communications Engineering, part of the Nature Portfolio.
Best, who formerly worked at Australia’s National Acoustic Laboratories, helped design the study. She says testing new technologies like BOSSA with real people is essential. “Ultimately, the only way to know if a benefit will translate to the listener is via behavioral studies,” Best says. “That requires scientists and clinicians who understand the target population.”

Big Potential for Hearing Technology​

An estimated 50 million Americans live with hearing loss, and the World Health Organization predicts that by 2050, nearly 2.5 billion people worldwide will be affected. That makes the need for better hearing solutions urgent.
Sen has patented BOSSA and hopes to partner with companies that want to bring it to market. He believes that major tech players entering the hearing aid space—like Apple with its AirPods Pro 2, which includes hearing aid features—will drive innovation forward. “If hearing aid companies don’t start innovating fast, they’re going to get wiped out,” says Sen. “Apple and other start-ups are entering the market.”
Individual participant audiograms. The different curves show pure-tone thresholds for each of the eight participants (averaged over left and right ears). Unique symbols distinguish individual subjects. (CREDIT: Kamal Sen, et al.)
And the timing couldn’t be better. As hearing technology becomes more widely available and advanced, tools like BOSSA could help millions of people reconnect with the world around them. From social events to everyday conversations, better sound separation can mean a better life.

Beyond Hearing Loss: A Wider Application​

BOSSA was built to help those with hearing difficulties, but its potential doesn’t end there. The way the brain focuses on sound—what researchers call “selective attention”—matters in many conditions. “The [neural] circuits we are studying are much more general purpose and much more fundamental,” Sen says. “It ultimately has to do with attention, where you want to focus.”
That’s why the team is now exploring how the same science could help people with ADHD or autism. These groups also struggle with multiple competing inputs—whether sounds, visuals, or tasks—and may benefit from tools that help guide attention.
They’re also testing a new version of BOSSA that adds eye-tracking. By following where someone looks, the device could better figure out who they’re trying to listen to. This could make the technology even more effective in fast-paced, real-world settings.


Sharpening Sound, Changing Lives​

The success of BOSSA offers real hope. It’s not just another upgrade in hearing tech—it’s a shift in how we approach sound processing. Instead of trying to boost all sound or block background noise blindly, it takes cues from biology, using the brain’s blueprint to help listeners find meaning in the noise.
For many with hearing loss, this could change everything. Being able to join conversations, pick out voices, and stay connected socially are vital parts of daily life. With tools like BOSSA, those goals move a little closer. And as this technology continues to grow, its reach may extend beyond hearing loss, offering help with focus and attention challenges too.
What started as a solution for a noisy dinner party could one day reshape how we interact with the world.

Here, we present a system employing a novel strategy for stimulus reconstruction from neural spikes. Conceptually, this strategy uses time-frequency masking by computing a spike-based mask (spike-mask). We first consider the strategy for one-dimensional stimulus (e.g. sound waves). We show how this stimulus reconstruction method can be applied, using the cortical model as an example. We also show that this strategy produces reconstructions with intelligibility and quality higher than those reconstructed from the linear filtering method (table 1). Then we discuss how this strategy may be generalized for multi-dimensional stimulus (e.g. images and videos). The strategy presented here may be generalized to perform reconstruction on both artificial SNNs and neural models from experimental data as long as they satisfy the assumptions for our model.
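
The gist of that excerpt can be pictured with a deliberately crude toy version (mine, not the paper's model): treat the spike counts of frequency-tuned model neurons as votes for which time-frequency bins of the stimulus to keep, build a 0/1 "spike-mask" over the spectrogram, and invert the masked spectrogram back into audio:

```python
import numpy as np
from scipy.signal import stft, istft

def spike_mask_reconstruction(audio, spike_counts, fs, threshold=1, nperseg=512):
    """Toy spike-mask reconstruction (illustration of the general idea only).

    audio: original stimulus waveform.
    spike_counts: (n_channels, n_frames) spike counts from frequency-tuned
        model neurons, assumed roughly aligned with the STFT grid below.
    Bins whose channel fired at least `threshold` spikes are kept; the rest
    are zeroed before inverting the spectrogram back to a waveform.
    """
    _, _, S = stft(audio, fs=fs, nperseg=nperseg)
    # Map the spike raster onto the STFT grid with nearest-neighbour indexing.
    fi = np.linspace(0, spike_counts.shape[0] - 1, S.shape[0]).round().astype(int)
    ti = np.linspace(0, spike_counts.shape[1] - 1, S.shape[1]).round().astype(int)
    mask = (spike_counts[np.ix_(fi, ti)] >= threshold).astype(float)
    _, rec = istft(S * mask, fs=fs, nperseg=nperseg)
    return rec
```

If the spiking model has already suppressed competing sources (as the cortical model in the excerpt is designed to do), that selectivity is carried into the mask, which is presumably how the authors obtain higher intelligibility and quality than a linear read-out, though the numbers in their table 1 are theirs and not something this toy sketch reproduces.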
 

Attachments

  • IMG_1675.jpeg
  • Like
  • Love
  • Fire
Reactions: 25 users