Neurosec

uiux

Regular


Just released this. It should make developing with Akida easier if you are working with a video feed.


Example use case:


Python:
import cv2

from neurosec import Neurosec


yolo_face = {
    "fbz": "models/yolo_face.fbz",
    "predict_classes": False,
    "anchors": [[0.90751, 1.49967], [1.63565, 2.43559], [2.93423, 3.88108]],
    "classes": 1,
    "labels": {
        0: "face",
    },
    "colours": {0: (255, 0, 0)},
    "pred_conf_min": 0.70,
}

neurosec = Neurosec(
    source=0,
    model=yolo_face,
    resolution=(640, 480),
).start()

while True:
    frame = neurosec.get_neurosec_frame()
    if frame is None:
        break

    cv2.imshow("Output", frame)

    key = cv2.waitKey(1) & 0xFF
    if key == ord("q"):
        break

cv2.destroyAllWindows()
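As an aside, the anchors in the model dict are YOLO width/height priors expressed in output-grid-cell units. A quick illustration of what they mean in pixels (my own numbers: the 32 px stride is an assumption based on common YOLO backbones, not confirmed for this model):

```python
# YOLO anchors are (width, height) priors in grid-cell units. Assuming a
# 32 px stride (an assumption, not confirmed for this .fbz model), the
# three priors above correspond to these pixel-sized boxes:
anchors = [[0.90751, 1.49967], [1.63565, 2.43559], [2.93423, 3.88108]]
cell_px = 32  # assumed stride
anchor_px = [(round(w * cell_px), round(h * cell_px)) for w, h in anchors]
print(anchor_px)  # roughly face-sized boxes, from about 29x48 to 94x124 px
```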



There's also the option to run a Flask-based webserver that lets you log in to the node and view the stream, or just fetch frame metadata:

Python:
from neurosec import NeurosecNode


neurosec_node = NeurosecNode(
    **{
        "source": 0,
        "resolution": (640, 480),
        "host": "0.0.0.0",
        "node_key": "this_is_a_passw0rd",
        "model": {
            "fbz": "models/yolo_face.fbz",
            "predict_classes": False,
            "anchors": [
                [0.90751, 1.49967],
                [1.63565, 2.43559],
                [2.93423, 3.88108],
            ],
            "classes": 1,
            "labels": {
                0: "face",
            },
            "colours": {0: (255, 0, 0)},
            "pred_conf_min": 0.70,
        },
    }
).run()
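One thing worth noting: the model dict couples `classes`, `labels`, and `colours`, so a typo in one of them can break rendering silently. A small sanity-check helper (my own sketch, not part of the neurosec API) can catch mismatches before the dict is handed to Neurosec or NeurosecNode:

```python
def check_model_config(model):
    """Validate a neurosec-style model dict (hypothetical helper, not part
    of the library): every class index needs a label and a colour, and the
    confidence threshold must be a valid probability."""
    missing = [i for i in range(model["classes"])
               if i not in model["labels"] or i not in model["colours"]]
    if missing:
        raise ValueError(f"missing labels/colours for classes: {missing}")
    if not 0.0 <= model["pred_conf_min"] <= 1.0:
        raise ValueError("pred_conf_min must be within [0, 1]")
    return True
```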
 
Reactions: 34 users

KMuzza

Mad Scientist
WOW- Yes and great stuff - we will all appreciate your work - thanks @uiux


AKIDA BALLISTA UBQTS
 
Reactions: 4 users

KMuzza

Mad Scientist
PS - I see CITIcorp is the new main player (y)

AKIDA BALLISTA UBQTS
 
Reactions: 1 user

Beebo

Regular


Excellent work feeding the ecosystem!
 
Reactions: 7 users