Deleted member 3781
Guest
This is the first time I've seen such low-volume stock movement even while the news flow is amazing… very suspicious IMO… something is brewing
Not such a mystery. OTC in the U.S. is just a mirror … the real market for BRN is the ASX. If Wall Street takes a holiday, sure, U.S. investors aren't active, but that doesn't set the price in Sydney. The ASX is the primary listing and OTC just follows, plus the 19–21 cent range has been locked in for weeks. That's not about one quiet session, it's structural. The market is waiting for a real catalyst, and until we see a licensing deal or confirmation from a big OEM, it doesn't matter whether Wall Street is open or not … the chart won't budge. IMO
Labor Day holiday in America. Just the ASX doing nothing until the US markets open again.
| 13:30-13:45, Paper TueLecB04.1 | |
| Ultra-Efficient Network Intrusion Detection Implemented on Spiking Neural Network Hardware (I) | |
| Islam, Rashedul | University of Dayton |
| Yakopcic, Chris | University of Dayton |
| Rahman, Nayim | University of Dayton |
| Alam, Shahanur | University of Dayton |
| Taha, Tarek | University of Dayton |
| Keywords: Neuromorphic System Algorithms and Applications, Machine Learning at the Edge, Other Neural and Neuromorphic Circuits and Systems Topics |
| Abstract: Network intrusion detection is crucial for securing data transmission against cyber threats. Traditional anomaly detection systems use computationally intensive models, with CPUs and GPUs consuming excessive power during training and testing. Such systems are impractical for battery-operated devices and IoT sensors, which require low-power solutions. As energy efficiency becomes a key concern, analyzing network intrusion datasets on low-power hardware is vital. This paper implements a low-power anomaly detection system on Intel's Loihi and BrainChip's Akida neuromorphic processors. The model was trained on a CPU, with weights deployed on the processors. Three experiments—binary classification, attack class classification, and attack type classification—are conducted. We achieved approximately 98.1% accuracy on Akida and 94% on Loihi in all experiments while consuming just 3 to 6 microjoules per inference. Also, a comparative analysis with the Raspberry Pi 3 and Asus Tinker Board is performed. To the best of our knowledge, this is the first performance analysis of low power anomaly detection based on spiking neural network hardware. |
To put it another way, traders in Australia won't place any bets on the ASX without knowing what direction the US markets are heading. It's not about BRN; volumes will be lower for the majority on the ASX today.
The old saying is, if Wall Street sneezes, the ASX catches a cold.
Safety first for traders is to react to what the US markets are doing, not guess what they might do.
I'm not talking OTC.
It's your opinion and that's fine… but as I see it, the OTC market doesn't even follow Nasdaq-level rules. It's thin, it's secondary, and for BRN it's nothing more than a mirror. If we were talking about a real Nasdaq-listed BRN share, I'd agree with you 79%.
I like going back to the comments on this post from Mercedes on LinkedIn from about 7 months ago… "more to be announced at a later date… stay tuned."
The following article, from 30th January '22, is worth revisiting: "Mercedes-Benz delivers smarter operation". The latest EV model is due to be released soon here in Aust. It will be Sunday evening; we may find out more.
Yes - many a mickle ... Good to see independent external projects starting to pop up.
AI-Enabled Safe Locker with BrainChip Akida SDK
This article explains how to build a smart AI-enabled safe locker with the BrainChip Akida SDK: features, architecture, and implementation. (www.elprocus.com)
AI-Enabled Safe Locker with BrainChip Project
Build a smart locker system that opens only when both the user's face and voice command match authorized patterns, all processed using low-power neuromorphic AI. This article provides brief information on the AI-enabled safe locker: features, architecture, and implementation.
Features
- Face recognition (Vision AI using SNN)
- Wake word detection (Audio SNN)
- Servo-controlled locking mechanism
- Local, low-latency inference (no cloud)
- On-chip learning (for adding new users)
Components Required
| Component | Description |
| BrainChip Akida USB Dev Kit | Neuromorphic processor (main AI engine) |
| USB Microphone | Audio input (wake-word detection) |
| USB Camera | Visual input (face recognition) |
| Servo Motor (SG90/996R) | Physical lock control |
| Raspberry Pi 4 / Jetson Nano | Host controller with Linux (Ubuntu 20.04) |
| Breadboard + jumper wires | To connect the servo motor |
| Power Source | USB power bank or adapter |
Software Setup
1. Install Dependencies
- sudo apt update
- sudo apt install python3-pip libatlas-base-dev
- pip3 install akida speechrecognition opencv-python numpy pyserial
Install Akida SDK:
Download the BrainChip Akida SDK from the official site. Follow their instructions to install the Python SDK and runtime.
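As a quick sanity check after installation, a small snippet (assuming the pip package name `akida` from the install step above) can confirm the SDK is visible to Python without crashing when it is absent:

```python
# Report whether a package is importable; safe to run even if it is missing.
import importlib.util

def sdk_status(package="akida"):
    """Return 'installed' if the package can be found, else 'missing'."""
    return "installed" if importlib.util.find_spec(package) else "missing"

print("Akida SDK:", sdk_status())
```

If it prints "missing", revisit the pip install step before continuing.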
AI-Enabled Safe Locker Project Architecture
AI-Enabled Safe Locker (architecture diagram)
Step-by-Step Implementation
Step 1: Data Collection & Preprocessing
a. Face Dataset (Images)
Collect 20–30 frontal face images per authorized person using OpenCV:
import cv2

# Capture 30 frames from the default camera and save them as training images
cap = cv2.VideoCapture(0)
for i in range(30):
    ret, frame = cap.read()
    if ret:
        cv2.imwrite(f"user_face_{i}.jpg", frame)
cap.release()
b. Voice Samples (Wake Word)
Record your custom phrase (e.g., “Unlock Akida”) using PyAudio or Audacity.
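However you record it, each sample needs to be stored in a format the training pipeline can read. A minimal sketch using the stdlib `wave` module (the file name, sample rate, and synthetic tone are illustrative stand-ins, not from the article):

```python
# Write 16-bit mono PCM audio to a WAV file using only the standard library.
import wave, struct, math

def save_wav(path, samples, rate=16000):
    """Write float samples in [-1, 1] as 16-bit mono PCM."""
    with wave.open(path, "wb") as w:
        w.setnchannels(1)
        w.setsampwidth(2)   # 2 bytes = 16-bit samples
        w.setframerate(rate)
        w.writeframes(b"".join(
            struct.pack("<h", int(max(-1.0, min(1.0, s)) * 32767))
            for s in samples))

# Toy stand-in for a recorded phrase: one second of a 440 Hz tone
tone = [0.3 * math.sin(2 * math.pi * 440 * t / 16000) for t in range(16000)]
save_wav("unlock_akida_sample.wav", tone)

with wave.open("unlock_akida_sample.wav", "rb") as w:
    print(w.getnframes(), w.getframerate())  # 16000 16000
```

A real recording tool (PyAudio, Audacity) replaces the synthetic tone; the storage format stays the same.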
Step 2: Train SNN Models with Akida
a. Convert Face Classifier to SNN
Use MobileNet or a custom CNN for feature extraction, quantize it, and convert it to an SNN with BrainChip's cnn2snn conversion tools (exact API names vary by Akida SDK version; check the documentation for yours).
from tensorflow.keras.models import load_model
from cnn2snn import quantize, convert

# Quantize the trained Keras model, then convert it to an Akida SNN
keras_model = load_model("cnn_model.h5")
quantized = quantize(keras_model, weight_quantization=4, activ_quantization=4)
akida_model = convert(quantized)
akida_model.save("face_model.akd")
b. Convert Wake Word Classifier
- Use MFCC preprocessing → CNN → SNN
- Convert the audio classifier to an Akida model using the Akida tools.
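The MFCC → CNN → SNN front end can be sketched in miniature. This pure-Python toy (not from the article, and far simpler than a real mel filterbank) just frames the signal, takes a naive DFT magnitude, and log-compresses coarse band energies, so the shape of the preprocessing pipeline is visible:

```python
# Toy "MFCC-like" feature extractor: frame -> DFT magnitude -> log band energies.
import math, cmath

def frame_signal(samples, frame_len=64, hop=32):
    """Split samples into overlapping frames."""
    return [samples[i:i + frame_len]
            for i in range(0, len(samples) - frame_len + 1, hop)]

def dft_magnitude(frame):
    """Naive DFT magnitude spectrum (first half of the bins only)."""
    n = len(frame)
    return [abs(sum(x * cmath.exp(-2j * math.pi * k * t / n)
                    for t, x in enumerate(frame)))
            for k in range(n // 2)]

def log_band_energies(spectrum, n_bands=8):
    """Sum the spectrum into coarse bands and log-compress (mel-filterbank stand-in)."""
    band = len(spectrum) // n_bands
    return [math.log(sum(spectrum[b * band:(b + 1) * band]) + 1e-8)
            for b in range(n_bands)]

# Toy input: a 440 Hz tone sampled at 8 kHz
signal = [math.sin(2 * math.pi * 440 * t / 8000) for t in range(512)]
features = [log_band_energies(dft_magnitude(f)) for f in frame_signal(signal)]
print(len(features), len(features[0]))  # 15 8  (frames x bands)
```

In practice a library such as librosa or python_speech_features computes real MFCCs; the point here is only the frames-to-feature-vectors data flow that feeds the CNN.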
Step 3: Load Models and Infer
from akida import Model

# Load the converted models for on-device inference
face_model = Model("face_model.akd")
audio_model = Model("wake_model.akd")
Audio Inference (Wake Word)
def is_wake_word(audio):
    # True when the audio sample is classified as the wake word
    prediction = audio_model.predict(audio)
    return prediction == "unlock_akida"
Face Inference (Real-Time Face Match)
def is_authorized_face(frame):
    # Detect and crop the face region, then classify it
    face = detect_and_crop_face(frame)
    prediction = face_model.predict(face)
    return prediction == "authorized_user"
Step 4: Control the Servo Lock
import RPi.GPIO as GPIO
import time

servo_pin = 17
GPIO.setmode(GPIO.BCM)
GPIO.setup(servo_pin, GPIO.OUT)
servo = GPIO.PWM(servo_pin, 50)  # 50 Hz PWM for a standard hobby servo
servo.start(0)

def open_locker():
    servo.ChangeDutyCycle(7.5)  # adjust as per lock
    time.sleep(1)
    servo.ChangeDutyCycle(0)    # stop pulses to avoid servo jitter

def close_locker():
    servo.ChangeDutyCycle(2.5)
    time.sleep(1)
    servo.ChangeDutyCycle(0)
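The 2.5 and 7.5 duty cycles above follow the usual hobby-servo convention: at 50 Hz (a 20 ms period), roughly a 1 ms pulse commands 0° and a 2 ms pulse commands 180°. A small helper, with assumed endpoint values, makes the angle-to-duty mapping explicit; check your servo's datasheet before relying on it:

```python
# Map a servo angle to a PWM duty-cycle percentage at 50 Hz.
# Endpoints assume the common 0.5 ms (2.5%) to 2.5 ms (12.5%) pulse range.
def angle_to_duty(angle, min_duty=2.5, max_duty=12.5):
    """Map an angle in [0, 180] degrees to a duty cycle, clamping out-of-range input."""
    angle = max(0.0, min(180.0, angle))
    return min_duty + (max_duty - min_duty) * angle / 180.0

print(angle_to_duty(0))   # 2.5  -> closed position in the code above
print(angle_to_duty(90))  # 7.5  -> open position in the code above
```

This is why `close_locker()` uses 2.5 and `open_locker()` uses 7.5: they correspond to roughly 0° and 90° of travel under this convention.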
Step 5: Integration Logic
import cv2
import speech_recognition as sr

cam = cv2.VideoCapture(0)
while True:
    # Wake word check
    audio = record_audio_sample()
    if not is_wake_word(audio):
        continue
    # Face check
    ret, frame = cam.read()
    if ret and is_authorized_face(frame):
        open_locker()
        print("Locker opened!")
    else:
        print("Face not recognized.")
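The loop above will accept retries forever; in practice you may want to throttle repeated failures. A small sketch of a lockout guard (the policy, thresholds, and class are illustrative choices, not part of the article):

```python
# Block further attempts for a cooldown period after repeated failures.
import time

class LockoutGuard:
    def __init__(self, max_failures=3, cooldown_s=30.0, clock=time.monotonic):
        self.max_failures = max_failures
        self.cooldown_s = cooldown_s
        self.clock = clock
        self.failures = 0
        self.blocked_until = 0.0

    def allowed(self):
        """True while the locker may accept another attempt."""
        return self.clock() >= self.blocked_until

    def record(self, success):
        """Record an attempt; trigger the cooldown after too many failures."""
        if success:
            self.failures = 0
        else:
            self.failures += 1
            if self.failures >= self.max_failures:
                self.blocked_until = self.clock() + self.cooldown_s
                self.failures = 0

guard = LockoutGuard(max_failures=2, cooldown_s=30.0)
guard.record(False)
guard.record(False)     # second failure triggers the lockout
print(guard.allowed())  # False
```

In the main loop, check `guard.allowed()` before running the face check and call `guard.record(...)` with the result.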
Testing and Validation
Try implementing the above project and let us know your results.
- Add a new user using BrainChip Akida's on-chip learning API.
- Try unlocking with the wrong voice or face → the system should deny access.
- Log each attempt (success/failure) for analytics.
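The last item, logging each attempt, can be sketched with the stdlib csv module (the file name and columns are illustrative choices, not from the article):

```python
# Append each access attempt to a CSV file for later analytics.
import csv, datetime

def log_attempt(path, user, granted):
    """Append one attempt as a row: timestamp, user, result."""
    with open(path, "a", newline="") as f:
        csv.writer(f).writerow([
            datetime.datetime.now().isoformat(timespec="seconds"),
            user,
            "granted" if granted else "denied",
        ])

log_attempt("locker_attempts.csv", "authorized_user", True)
log_attempt("locker_attempts.csv", "unknown", False)

with open("locker_attempts.csv") as f:
    rows = list(csv.reader(f))
print(len(rows), rows[-1][-1])
```

Call `log_attempt(...)` from both branches of the face check in the main loop, so denied attempts are captured as well as successful ones.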
Any Akidaholics meeting the eligibility criteria interested in NASA's Beyond the Algorithm Challenge that was launched today?
NASA Beyond the Algorithm Challenge
NASA Beyond the Algorithm Challenge: Novel Computing Architectures for Flood Analysis. The NASA Earth Science Technology Office (ESTO) seeks solutions to complex Earth Science problems using transformative or unconventional computing technologies such as quantum computing, quantum machine... (www.nasa-beyond-challenge.org)