BRN Discussion Ongoing

equanimous

Norse clairvoyant shapeshifter goddess
Ok so this is pretty Sweet
1657534946296.png
 
Reactions: 22 users

equanimous

Norse clairvoyant shapeshifter goddess
1657535806425.png
 
Reactions: 25 users

Reuben

Founding Member
Are you serious?
@beaglebasher yes, a few of us have met him at/after the AGM... "has anyone met you" would be the right question now... if you have seen the amount of research put up by many here, including FF, that's a very sorry allegation to make... anyway, again, I hope it was an honest mistake... if not, the other website is the perfect place for you... 😇
 
Reactions: 38 users

wilzy123

Founding Member
I have met nobody in person from this forum. That should be obvious..
I asked a sensitive question and I would appreciate an honest answer.
You got your answer. Now what?
 
Reactions: 18 users

Bravo

If ARM was an arm, BRN would be its biceps💪!
But wait, there is more. The names on this, I'm pretty sure, are Korean... Pub date July 2022
View attachment 11096

Ok so this is pretty Sweet
View attachment 11098

 
Reactions: 12 users

equanimous

Norse clairvoyant shapeshifter goddess
Reactions: 17 users

Reuben

Founding Member
Reactions: 6 users

equanimous

Norse clairvoyant shapeshifter goddess
Reactions: 4 users

AusEire

Founding Member. It's ok to say No to Dot Joining
I have met nobody in person from this forum. That should be obvious..
I asked a sensitive question and I would appreciate an honest answer.
Well the honest answer is yes. I don't know how to articulate that any other way tbh
 
Reactions: 14 users

Wags

Regular
I have met nobody in person from this forum. That should be obvious..
I asked a sensitive question and I would appreciate an honest answer.
Yes I have. As much integrity in person as evident over the keyboard.
A role model for sure.
 
Reactions: 22 users

Slade

Top 20
I have met nobody in person from this forum. That should be obvious..
I asked a sensitive question and I would appreciate an honest answer.
You should think or at least read before you post.
 
Reactions: 22 users

AusEire

Founding Member. It's ok to say No to Dot Joining
I find it interesting that this person has suddenly appeared to throw shade at someone when that someone isn't here to address it 🤔 Nothing fishy about that at all
 
Reactions: 30 users

equanimous

Norse clairvoyant shapeshifter goddess
Stop replying to him and spamming the pages, which is what he wants.
 
Reactions: 19 users

equanimous

Norse clairvoyant shapeshifter goddess

Brain-inspired Multilayer Perceptron with Spiking Neurons​

Wenshuo Li, Hanting Chen, Jianyuan Guo, Ziyang Zhang, Yunhe Wang
Recently, the Multilayer Perceptron (MLP) has become a hotspot in the field of computer vision tasks. Without inductive bias, MLPs perform well on feature extraction and achieve amazing results. However, due to the simplicity of their structures, the performance highly depends on the local feature communication mechanism. To further improve the performance of MLPs, we introduce information communication mechanisms from brain-inspired neural networks. The Spiking Neural Network (SNN) is the most famous brain-inspired neural network and achieves great success in dealing with sparse data. Leaky Integrate-and-Fire (LIF) neurons in SNNs are used to communicate between different time steps. In this paper, we incorporate the mechanism of LIF neurons into MLP models to achieve better accuracy without extra FLOPs. We propose a full-precision LIF operation to communicate between patches, including horizontal LIF and vertical LIF in different directions. We also propose to use group LIF to extract better local features. With LIF modules, our SNN-MLP model achieves 81.9%, 83.3% and 83.5% top-1 accuracy on the ImageNet dataset with only 4.4G, 8.5G and 15.2G FLOPs, respectively, which are state-of-the-art results as far as we know.
Comments: This paper is accepted by CVPR 2022
Subjects: Computer Vision and Pattern Recognition (cs.CV)
Cite as: arXiv:2203.14679 [cs.CV]
(or arXiv:2203.14679v1 [cs.CV] for this version)
https://doi.org/10.48550/arXiv.2203.14679

Submission history​

From: Wenshuo Li [view email]
[v1] Mon, 28 Mar 2022 12:21:47 UTC (5,592 KB)
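The leaky integrate-and-fire update the abstract leans on is easy to sketch. The snippet below is a generic LIF neuron in NumPy, not the paper's full-precision patch-wise LIF operation; `tau`, the threshold, the reset value, and the constant input are illustrative assumptions chosen for demonstration.

```python
import numpy as np

def lif_step(v, x, tau=2.0, v_threshold=1.0, v_reset=0.0):
    """One leaky integrate-and-fire update.

    v : membrane potential carried over from the previous time step
    x : input at the current time step
    Returns (spike, new_potential).
    """
    # Leaky integration: the potential decays toward the input with time constant tau.
    v = v + (x - v) / tau
    # Fire wherever the potential crosses the threshold.
    spike = (v >= v_threshold).astype(v.dtype)
    # Reset the potential of every neuron that fired.
    v = np.where(spike > 0, v_reset, v)
    return spike, v

# Drive a single neuron with a constant super-threshold input
# and collect its spike train over ten time steps.
v = np.zeros(1)
spikes = []
for _ in range(10):
    s, v = lif_step(v, np.full(1, 1.2))
    spikes.append(int(s[0]))
```

With a constant drive above threshold the neuron charges up, fires, resets, and repeats, which is the periodic charge-fire-reset cycle characteristic of LIF dynamics.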
 
Reactions: 12 users

GStocks123

Regular

Attachments

  • 28BA2037-83EF-4E6A-801C-16ACB0683458.png
  • 3F24EB43-380B-4A18-9F1C-F5341F44C4ED.jpeg
Reactions: 77 users

equanimous

Norse clairvoyant shapeshifter goddess

Neuromorphic Data Augmentation for Training Spiking Neural Networks​

Yuhang Li, Youngeun Kim, Hyoungseob Park, Tamar Geller, Priyadarshini Panda
Developing neuromorphic intelligence on event-based datasets with spiking neural networks (SNNs) has recently attracted much research attention. However, the limited size of event-based datasets makes SNNs prone to overfitting and unstable convergence. This issue remains unexplored by previous academic works. In an effort to minimize this generalization gap, we propose neuromorphic data augmentation (NDA), a family of geometric augmentations specifically designed for event-based datasets with the goal of significantly stabilizing the SNN training and reducing the generalization gap between training and test performance. The proposed method is simple and compatible with existing SNN training pipelines. Using the proposed augmentation, for the first time, we demonstrate the feasibility of unsupervised contrastive learning for SNNs. We conduct comprehensive experiments on prevailing neuromorphic vision benchmarks and show that NDA yields substantial improvements over previous state-of-the-art results. For example, NDA-based SNN achieves accuracy gain on CIFAR10-DVS and N-Caltech 101 by 10.1% and 13.7%, respectively.
Subjects: Computer Vision and Pattern Recognition (cs.CV)
Cite as: arXiv:2203.06145 [cs.CV]
(or arXiv:2203.06145v1 [cs.CV] for this version)
https://doi.org/10.48550/arXiv.2203.06145

Submission history​

From: Yuhang Li [view email]
[v1] Fri, 11 Mar 2022 18:17:19 UTC (4,452 KB)
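The "geometric augmentations" NDA describes (flips, rolling, and similar spatial transforms) can be sketched for event tensors like this. The shapes, the flip probability, and the shift range below are illustrative assumptions, not the paper's configuration.

```python
import numpy as np

def nda_augment(events, rng):
    """Apply one random geometric augmentation to an event tensor.

    events : array of shape (T, P, H, W) -- time steps, polarities, height, width.
    The same transform is applied at every time step so the spatial
    structure stays consistent across the whole event sequence.
    """
    if rng.random() < 0.5:
        # Horizontal flip across the width axis.
        events = events[..., ::-1]
    # Random translation ("rolling") of up to 3 pixels in each direction.
    dy, dx = rng.integers(-3, 4, size=2)
    events = np.roll(events, shift=(dy, dx), axis=(-2, -1))
    return events

rng = np.random.default_rng(0)
frames = np.zeros((10, 2, 32, 32))
frames[:, :, 16, 16] = 1.0           # a single active pixel in every frame
augmented = nda_augment(frames, rng)
```

The key property is that one transform is drawn per sample and applied identically across all time steps, which keeps the event stream spatially coherent while preserving the total event count.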
 
Reactions: 13 users

equanimous

Norse clairvoyant shapeshifter goddess
Reactions: 22 users

wilzy123

Founding Member
Reactions: 29 users

cosors

👀
For the researchers among us, and those who, unlike me, know a lot about the subject in detail, this might be interesting. It seemed like a rabbit hole to me. Bosch has been mentioned here many times. I'm not sure if you know the homepage of the Bosch Center for Artificial Intelligence. @Learning and @uiux have at least mentioned the centre or exchanged information with @Diogenese about a staff member.

https://www.bosch-ai.com/

The search function already returns some documents if you search, for example, for "spiking neural network":
https://www.bosch-ai.com/search.html?q=Spiking Neural Networks

Perhaps it becomes more interesting when you look through the staff. Some of those who work on the subject have links to other publications.
https://www.bosch-ai.com/about-us/our-people/

Here, for example, is Mr Michael Pfeiffer, who has already been mentioned here at TSE.
https://scholar.google.de/citations?hl=en&user=jDE5tIQAAAAJ&view_op=list_works&sortby=pubdate

or https://scholar.google.de/citations?hl=de&user=e5dO0q0AAAAJ&view_op=list_works&sortby=pubdate

one example:
Hybrid SNN-ANN: Energy-Efficient Classification and Object Detection for Event-Based Vision

or:
https://www.bosch-ai.com/research/publications/

Of course, I cannot say whether the linked documents are purely scientific or whether they have any economic relevance to us. I have only just found the page.
It naturally branches out further and further via the staff or the board. For example, one of the collaborations is the UvA-Bosch DELTA Lab (Deep Learning Technologies Amsterdam) in the Netherlands.

AI-research preparing the vehicles of the future

The collaboration between Bosch and DELTA Lab



Maybe that is interesting for one or the other of us.
 
Reactions: 28 users