BrainFrame Docs
Plug-and-play smart vision platform
What is BrainFrame?¶
BrainFrame is a smart vision platform that is built to be easy to scale, highly configurable, and deployable on-premises or to the cloud.
Powered by our automatic algorithm fusion and optimization engine, BrainFrame enables plug-and-play integration of VisionCapsules. The platform turns any connected camera into a smart sensor for sophisticated monitoring and inspection tasks, serving a variety of vertical markets.
Getting Started¶
We recommend our Getting Started guide, which will help you install BrainFrame and explore its features with the user interface.
If you're interested in developing with BrainFrame, you'll want to take a look at the REST API Documentation or the Database Description.
Downloads¶
This page contains downloads for various parts of the BrainFrame system (v0.29.6.2, 2024-01-22 05:34:12).
BrainFrame Server¶
Instructions for downloading and installing BrainFrame can be found in the Getting Started guide.
BrainFrame Client¶
For Windows 10 (Beta): Download
For Ubuntu 18.04: Download
For more information on running the BrainFrame client, follow the instructions in the Getting Started guide.
For all Linux distributions, the client is available on the Snap Store.
# To install or update
sudo snap remove brainframe-client && sudo snap install brainframe-client --channel=0.29/stable
# To run
brainframe-client
StreamGateway¶
StreamGateway for Linux (Ubuntu 18.04): Download
For more information on running the StreamGateway, follow the guide on setting up a Premises.
Capsules¶
To install capsules, follow our tutorial here. Capsules marked with a ✨ are recommended as production-ready. The hardware column describes the devices a capsule can run on: GPU refers to NVIDIA GPUs, iGPU refers to Intel integrated graphics, and HDDL refers to MyriadX devices accessed through the OpenVINO HDDL plugin.
Name | Description | Hardware | Required Input | Output |
---|---|---|---|---|
Calculator Object Speed | ✨ Measures pixel-per-second speed of tracked detections and puts the information in the `extra_data` field. | CPU | Type: Single Detection; Tracked: True | Type: Single Detection |
Classifier Age | Determines a person's approximate age by analyzing their face. | CPU, GPU | Type: Single Detection; Detections: Face | Type: Single Detection; Classifies: Age: 0-2, 3-6, 7-13, 14-24, 25-33, 34-37, 38-44, 45-59, 60-100 Years Old |
Classifier Behavior Closeup | v3. Determines several behaviors for compliance use cases. | CPU, GPU | Type: Single Detection; Detections: Person | Type: Single Detection; Classifies: Behavior: Drinking, Phoning, Smoking, Unknown |
Classifier Behavior Closeup Openvino | v1.0. Determines several behaviors for compliance use cases. | CPU, HDDL, iGPU | Type: Single Detection; Detections: Person | Type: Single Detection; Classifies: Behavior: True_Drinking, True_Phoning, True_Smoking, Unknown |
Classifier Eyewear Closeup | ✨ Determines what type of eyewear a person is wearing. | CPU, GPU | Type: Single Detection; Detections: Face | Type: Single Detection; Classifies: Glasses: Glasses, No_Glasses, Sun_Glasses, Unknown |
Classifier Face Age Gender Openvino | OpenVINO face age/gender classifier. | CPU, HDDL, iGPU | Type: Single Detection; Detections: Face | Type: Single Detection; Classifies: Gender: Feminine, Masculine; Age: Under 12 Years, 12-17 Years, 18-24 Years, 25-34 Years, 35-44 Years, 45-54 Years, 55-64 Years, 65+ Years |
Classifier Face Emotion Openvino | OpenVINO face emotion classifier. | CPU, HDDL, iGPU | Type: Single Detection; Detections: Face | Type: Single Detection; Classifies: Emotion: Anger, Happy, Neutral, Sad, Surprise |
Classifier Gender Closeup | ✨ Determines the gender of people based on their faces. | CPU, GPU | Type: Single Detection; Detections: Face | Type: Single Detection; Classifies: Gender: Feminine, Masculine, Unknown |
Classifier Hat Administration | Determines if a person is wearing a hat. | CPU, GPU | Type: Single Detection; Detections: Face | Type: Single Detection; Classifies: Hat: Hat, No_Hat, Unknown |
Classifier Mask Closeup Openvino | OpenVINO face mask classifier. | CPU, HDDL, iGPU | Type: Single Detection; Detections: Face | Type: Single Detection; Classifies: Mask: Not_Wearing_Mask, Wearing_Mask |
Classifier Person Attributes Openvino | OpenVINO-powered person classifier for general person appearance attributes. | CPU, HDDL, iGPU | Type: Single Detection; Detections: Person | Type: Single Detection; Classifies: Gender: Feminine, Masculine, Unknown; Coat_Jacket: Has_Coat_Jacket, No_Coat_Jacket, Unknown; Sleeves: Has_Long_Sleeves, Has_Short_Sleeves, Unknown; Hair: Has_Long_Hair, Has_Short_Hair, Unknown; Bag: Has_Bag, No_Bag, Unknown; Hat: Has_Hat, No_Hat, Unknown; Pants: Has_Long_Pants, Has_Short_Pants, Unknown; Backpack: Has_Backpack, No_Backpack, Unknown |
Classifier Pose Closeup | ✨ Roughly identifies the current pose of a person. | CPU, GPU | Type: List of Detections; Detections: Person | Type: List of Detections; Classifies: Pose: Bend/Bow (At The Waist), Crawl, Crouch/Kneel, Fall Down, Get Up, Jump/Leap, Lie/Sleep, Run/Jog, Sit, Stand, Walk, Unknown |
Classifier Safety Gear Openvino | Roughly identifies whether a person is wearing a safety hat and safety vest. | CPU, HDDL, iGPU | Type: List of Detections; Detections: Person | Type: List of Detections; Classifies: Safety_Hat: With_Safety_Hat, Without_Safety_Hat; Safety_Vest: With_Safety_Vest, Without_Safety_Vest |
Classifier Vehicle Color | GPU-capable vehicle color classifier, trained with a private dataset. | CPU, GPU | Type: Single Detection; Detections: Vehicle, Bus, Car, Motorcycle, Truck | Type: Single Detection; Classifies: Color: Black, Blue, Brown, Green, Grey, Red, White, Yellow |
Classifier Vehicle Color Openvino | OpenVINO vehicle color classifier. | CPU, HDDL, iGPU | Type: Single Detection; Detections: Car, Bus, Truck, Van, Vehicle | Type: Single Detection; Classifies: Color: Black, Blue, Gray, Green, Red, White, Yellow; Vehicle_Type: Bus, Car, Truck, Van |
Dtag | ✨ Finds DTags, including pose information and distance. | CPU | | Type: List of Detections; Detections: Dtag; Encoded: True; Tracked: True |
Detector Face Fast | ✨ Efficiently detects faces in most environments. | CPU, GPU | | Type: List of Detections; Detections: Face |
Detector Face Openvino | ✨ OpenVINO fast face detector. | CPU, HDDL, iGPU | | Type: List of Detections; Detections: Face |
Detector Fire Fast | Classifies whether there is fire in the video stream. It does not currently localize the fire. | CPU, GPU | | Type: List of Detections; Detections: Fire |
Detector License Plates | A low-quality license plate detector that detects close-up plates well, but has trouble reading them. | CPU, GPU | | Type: List of Detections; Detections: License_Plate, the digits 0-9, and the letters A-Z |
Detector Ocr Cn | ✨ v1.1. OCR text detection and recognition; supports over 6,000 Chinese characters. | CPU | Type: Single Detection; Detections: Home_Cell, Home_Cell Object, Home_Cell Object Noise, Roaming_Cell, Roaming_Cell Object, Roaming_Cell Object Noise, Screen | Type: List of Detections; Detections: Text |
Detector Person Administration | Detects people in a low-resolution, well-lit environment where the camera is typically far from the person and there are 3-10 people, some behind desks. | CPU, GPU | | Type: List of Detections; Detections: Person |
Detector Person And Vehicle Fast | ✨ v1.1. Finds people and vehicles in most environments. | CPU, GPU | | Type: List of Detections; Detections: Bear, Bike, Bird, Boat, Bus, Car, Cat, Cow, Dog, Elephant, Giraffe, Horse, Motorcycle, Person, Sheep, Train, Truck, Vehicle, Zebra |
Detector Person Openvino | ✨ v1.1. OpenVINO generic person detector. | CPU, HDDL, iGPU | | Type: List of Detections; Detections: Person |
Detector Person Overhead Openvino | OpenVINO fast person detector. Works best in surveillance perspectives with a downward-facing point of view. | CPU, HDDL, iGPU | | Type: List of Detections; Detections: Person |
Detector Person Vehicle Bike Openvino | OpenVINO person, vehicle, and bike detector. Optimized for outdoor street crosswalk scenarios. | CPU, HDDL, iGPU | | Type: List of Detections; Detections: Vehicle, Person, Bike |
Detector Safety Gear Openvino | OpenVINO safety gear detector (safety vest and safety hat). | CPU, HDDL, iGPU | | Type: List of Detections; Detections: Safety Vest, Safety Hat |
Detector Text Openvino | ✨ OpenVINO text detector and reader. | CPU, HDDL, iGPU | | Type: List of Detections; Detections: Text |
Detector Vehicle License Plate Openvino | OpenVINO license plate detector. Not capable of reading the plate. Vehicle detection is disabled by default, but can be enabled via the capsule options. Best used in close-up scenarios. | CPU, HDDL, iGPU | | Type: List of Detections; Detections: License_Plate, Vehicle |
Encoder License Plate Openvino | An OpenVINO license plate reader from the OpenVINO model zoo. Trained on Chinese license plates; works well only when the plate is very close to the camera. | CPU, HDDL, iGPU | Type: Single Detection; Detections: License_Plate | Type: Single Detection; Encoded: True |
Encoder Person | ✨ Recognizes people based on clothing and general appearance. | CPU, GPU | Type: Single Detection; Detections: Person | Type: Single Detection; Encoded: True |
Encoder Person Openvino | OpenVINO-powered person encoder. | CPU, HDDL, iGPU | Type: Single Detection; Detections: Person | Type: Single Detection; Encoded: True |
Landmarks Face Openvino Simple | OpenVINO capable. Outputs simple face landmarks in the detection's `extra_data`. | CPU, HDDL, iGPU | Type: Single Detection; Detections: Face | Type: Single Detection; Detections: Face_Landmarks |
Recognizer Face | ✨ Recognizes faces. Works best close-up. | CPU, GPU | Type: Single Detection; Detections: Face | Type: Single Detection; Encoded: True |
Recognizer Face Landmarks Openvino | OpenVINO-powered face recognizer. Requires the `Landmarks Face Openvino Simple` capsule to be loaded, plus any face detector. This capsule aligns faces before feeding them into the encoder, allowing higher-accuracy recognition. | CPU, HDDL, iGPU | Type: Single Detection; Detections: Face_Landmarks | Type: Single Detection; Encoded: True |
Tracker Person | ✨ v1.0. Tracks people using state-of-the-art techniques. | CPU | Type: List of Detections; Detections: Person, Car, Motorcycle, Bus, Truck, Vehicle; Encoded: True | Type: List of Detections; Tracked: True |
Tracker Vehicle Iou | ✨ v1.1. Efficient vehicle tracker using IOU. | CPU | Type: List of Detections; Detections: Car, Motorcycle, Bus, Train, Truck, Boat, Vehicle, License_Plate, Bike, Special Vehicle, Person | Type: List of Detections; Tracked: True |
Python API¶
A Python wrapper around the REST API is available on PyPI.
pip3 install brainframe-api
The source is available on GitHub.
For more information on using the Python API, refer to Python API.
User Guide
Introduction¶
The BrainFrame ecosystem includes a server for video processing and a client for viewing live results and configuring the server. The server requires a Linux machine; the client can run on either Ubuntu 18.04 or Windows 10.
BrainFrame CLI¶
For this guide, we will use the `brainframe-cli` (command line interface) to download and install BrainFrame. This tool requires Ubuntu 18.04.
First, let's install pip, then use pip to install the `brainframe-cli`:
sudo apt update && sudo apt upgrade
sudo apt install -y python3-pip
pip3 install --upgrade pip
sudo -H pip3 install --upgrade brainframe-cli
Starting the Server¶
With the tool installed, you can now use the `brainframe` command. You can explore the available commands by running:
brainframe --help
Now, install BrainFrame by running the below command and following the directions:
sudo brainframe install
Info
If you are in mainland China and downloading the BrainFrame Docker images is taking a long time, consider using a Docker mirror by following these instructions.
Once BrainFrame is downloaded, the BrainFrame CLI will ask if you want to start the BrainFrame server now. You can either start it immediately, or start it later with the following command:
sudo brainframe compose up -d
Then to view logs live:
sudo brainframe compose logs -f
Info
If you said `Yes` to adding your user to the `brainframe` group during installation, then after rebooting your computer you will be able to use `brainframe` commands without `sudo`.
Install BrainFrame Client¶
You can download our client app for Windows or Linux from here.
On Windows, run the BrainFrameClient.exe.
On the first run on Linux, run the installation script. Then, run the client.
bash install.sh
bash brainframe_client.sh
Getting a License¶
The client will now request that a license be uploaded to the server.
To get a license:
- Go to the website.
- Click the "Sign In" button located near the top of the page.
- Either sign in with your existing account or click "Sign Up" to create a new account.
- On the Account Page, under the "License Key" section, click "Create a New Key". Then, click "Download Key" once the option appears.
Now, in the client, select "Configure" and then "License Config". Drag the downloaded `license_file` into the window, then select "Update License".
Close the license config and the server config windows, and the client will open!
Next Steps¶
Now with the BrainFrame server and client installed, you can go on to use the ecosystem to your full advantage.
Terminology¶
The BrainFrame documentation uses terms that are common in video analytics but may have different meanings in other fields. We also use a few generic terms to describe specific BrainFrame concepts. If you are new to video analytics or are ever confused by a term, please take a look at this guide.
Term | Meaning |
---|---|
Stream | The live stream that is being fed to the Client or to the Server. A stream is video of some sort, from a video file, webcam, or IP camera. |
Detection | Any machine learning detection. A detection is a "bounding box", and has a label (person, car, dog, etc). |
Region | An area on-screen that has been configured by the user. It could be a door, an area on the floor, etc. The server will automatically count detections in that region. |
Line | A line on the screen. It can count people that cross it, and count who is currently standing on the line. |
Zone | A region or a line |
Alarm | A set of conditions that must occur in a stream in order for an Alert to be raised. |
Alert | An "Alert" is an instance of a specific Alarm occurring. Alerts have a start time and an end time. You can see a log of alerts under the In-Focus view for a stream. |
Journaling | Writing analysis results to a SQL database. |
Architecture Diagram¶
Recommended Hardware¶
This guide will talk about the recommended hardware for running a BrainFrame Server, and how to install drivers to enable different inference accelerators.
To begin with, the following hardware configuration or better should provide a good experience with BrainFrame:
- Intel i7 6+ core CPU or equivalent AMD CPU
- 6 GB or more of memory
Enabling GPU Acceleration¶
Most capsules will run significantly faster if an NVIDIA GPU is available to provide hardware acceleration. However, capsules that use OpenVINO are optimized to run well on CPU and won't benefit from the availability of a GPU. We recommend deciding what capsules you'll need before deciding whether a GPU is necessary for you. See all available capsules here.
If you are using a GPU, we recommend an NVIDIA GTX 1060 or better. AMD GPUs are not supported at this time.
Drivers¶
BrainFrame requires that the NVIDIA graphics drivers be installed before the GPU can be utilized. To install them, launch the "Software & Updates" application. Then, on the "Additional Drivers" tab, select the "Using NVIDIA driver metapackage" option and click "Apply Changes". If there are multiple NVIDIA driver metapackages available, prefer the most recent one.
Docker Passthrough¶
If you have an NVIDIA GPU, you can enable hardware acceleration by installing NVIDIA Docker 2. Please refer to the installation guide.
Then, make the NVIDIA Container Runtime the default runtime for Docker by editing `/etc/docker/daemon.json` to have the following configuration:
{
"runtimes": {
"nvidia": {
"path": "nvidia-container-runtime",
"runtimeArgs": []
}
},
"default-runtime": "nvidia"
}
Introduction¶
The BrainFrame Command Line Interface (CLI) is used to install, update, and control the lifecycle of the BrainFrame Server. The BrainFrame server consists of several smaller microservices, which are containerized using Docker. The lifecycle and control of those services is delegated to a tool called Docker Compose, which helps configure the orchestration of these containers.
For instructions on the installation of the CLI, please refer to our getting started guide.
You always want to have the latest version of the CLI. To update, run:
sudo -H pip3 install --upgrade brainframe-cli
Command Structure¶
All CLI commands are structured as
brainframe [Command] [Arguments]
To view a list of commands, run `brainframe --help`. To view help for a specific command, run `brainframe [Command] --help`.
Helpful Commands¶
These are the general commands we expect users will be using most often, so we call them out.
Starting the server¶
brainframe compose up -d
If you remove the `-d` flag, the log stream will be attached to your terminal, allowing you to stop the server with `Ctrl+C`.
Stopping the server¶
brainframe compose down
Restarting the server¶
brainframe compose up -d
Updating the server¶
sudo brainframe update
Streaming Logs¶
To view all logs, run:
brainframe compose logs -f
To view logs just for the `core` service, run:
brainframe compose logs -f core
Introduction¶
BrainFrame runs analysis primarily on video streams. Streams can come from many sources, like an IP camera, webcam, or video file. BrainFrame supports all major video formats and is expected to support any IP camera that provides an RTSP, HTTP, or MJPEG stream.
This tutorial assumes that the server, client, and any IP cameras are on the same local network. BrainFrame supports analyzing streams on other private networks using our 'Premises' stream proxying system.
Adding a Stream¶
Click the floating action button at the bottom right of the window to add a stream.
The Add Stream dialog will appear. BrainFrame supports three types of video streams: files, webcams, and IP cameras. For this example, we will use a video file.
Once a stream is added, you will see a live video thumbnail of it at the top left. As you add more streams, they will fill the screen in a grid.
Stream Types¶
Different stream types take different parameters. The following is a quick introduction to what each parameter means.
Files¶
As seen in the above example, file streams simply take a file path. Most common video file types are supported by BrainFrame. When the source video ends, BrainFrame will automatically loop it back to the beginning.
Webcams¶
Webcam streams take a webcam device ID as their only parameter. Webcam device IDs start at zero and increment for each webcam that is connected. For a computer with a single connected webcam, the ID will generally be "0".
IP Cameras¶
IP camera streams take a URL as their main parameter. This URL should point to an RTSP, HTTP, or HTTPS video stream. BrainFrame supports many video encoding formats, including H264, VP8/9, and MJPEG.
For more information on how to get an RTSP URL for your IP camera, take a look at Connecting to IP Cameras.
Introduction¶
BrainFrame allows you to specify areas of interest within a video stream and gain specific insights about that area. These come in two forms, regions and lines, which are collectively referred to as "zones". Region zones are often used for counting objects inside of a region. Line zones are for counting objects that have crossed the line.
Creating a Zone¶
To create a zone, start by clicking on the video stream that you want to add the zone to in the grid view, then click the "Task Config" button in the bottom-right.
You will see the Task Configuration window. This window allows us to create zones and add alarms to these zones, which will be discussed in the next section. For now, click on the "Region" or "Line" button to start creating a zone of that type.
To specify where a region is located, click to create the region's vertices one-by-one. When the region is complete, click the "Confirm" button. For lines, click to specify where the line starts, then click again to specify where it ends.
Now that we know how to create zones in a stream, we will discuss how to get information from these zones with alarms.
Introduction¶
Alarms allow BrainFrame to notify you when a specified condition happens within a zone. Alarms generate alerts when their condition is met. Alerts are made available through the REST API, and a stream with an active alert is brought to the top of the grid view in the client.
All alarms have an "Active Time", a period during the day where the alarm is allowed to trigger. This can be useful if, for example, you want to be notified if a person is in an area after business hours.
Alarm Conditions¶
There are two main types of alarm conditions: count-based conditions, and rate-based conditions.
Count-Based Conditions¶
Count-based conditions are triggered based on the number of objects in a zone. The condition may trigger if the number of a certain object class is greater than, less than, equal to, or not equal to a given value. The condition may also specify an attribute value to filter objects by.
The following are examples of count-based conditions:
- If less than 3 people in uniform are in region "Entrance", raise an alarm
- If there is not 1 person in region "Cash Register", raise an alarm
- If there is greater than 0 cars in region "Restricted Parking", raise an alarm
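Conceptually, each of these rules is a comparison between a filtered detection count and a threshold. The sketch below is illustrative only — the detection dicts and helper function are hypothetical, not BrainFrame's API:

```python
import operator

# Map the condition's comparison to a Python operator.
OPS = {">": operator.gt, "<": operator.lt, "=": operator.eq, "!=": operator.ne}

def count_condition_met(detections, class_name, op, value, attribute=None):
    """Return True if the count of matching detections satisfies the condition.

    `detections` is a hypothetical list of dicts with a "class_name" key and
    an optional "attributes" list (e.g. ["uniform"]).
    """
    matching = [
        d for d in detections
        if d["class_name"] == class_name
        and (attribute is None or attribute in d.get("attributes", []))
    ]
    return OPS[op](len(matching), value)

# "If there is greater than 0 cars in region 'Restricted Parking'..."
detections = [{"class_name": "car"}, {"class_name": "person"}]
print(count_condition_met(detections, "car", ">", 0))  # True -> alarm fires
```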
Rate-Based Conditions¶
Rate-based conditions are triggered based on a change in the number of objects in a zone over time.
The following are examples of rate-based conditions:
- If greater than 10 people enter region "Main Entrance" within 5 seconds, raise an alarm
- If fewer than 1 car exits region "Parking Lot" within 120 seconds, raise an alarm
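In other words, a rate-based condition compares the number of entries (or exits) within a sliding time window against a threshold. A minimal sketch — the timestamps and helper function here are hypothetical, not BrainFrame internals:

```python
def entries_in_window(entry_times, now, window_seconds):
    """Count how many zone entries happened within the last `window_seconds`."""
    return sum(1 for t in entry_times if now - t <= window_seconds)

# "If greater than 10 people enter region 'Main Entrance' within 5 seconds..."
entry_times = [100.2, 100.9, 101.5, 103.0, 104.8]  # entry timestamps, seconds
alarm = entries_in_window(entry_times, now=105.0, window_seconds=5) > 10
print(alarm)  # False -- only 5 entries fall inside the window
```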
Creating an Alarm¶
To create an alarm, open the Task Configuration window for the stream of interest and under the "Add New" section, click the "Alarm" button.
This will bring up the "Alarm Configuration" window. Here you may choose which condition type to use and specify each parameter. The "Condition Type" section is meant to be read like a sentence.
Info
If you don't see any class names available in the second drop-down, make sure that you have at least one capsule loaded that is able to detect objects.
When an alarm is triggered, the alert will appear in that stream's alert log and the stream will be moved to the "Streams with ongoing alerts" section.
Intersection Points¶
By default, a detection is said to be inside a zone if the bottom center point of the detection is in the zone. This bottom center point is referred to as the detection's "intersection point". This default works well for most overhead camera angles, but can be changed to the top, left, right, or center of the detection by changing the "Intersection Point" drop-down in the alarm configuration dialog.
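In bounding-box terms, each choice simply picks a different fixed reference point on the box. A small geometric sketch (a hypothetical helper, using image coordinates where y grows downward):

```python
def intersection_point(x1, y1, x2, y2, mode="bottom"):
    """Reference point of a bounding box (x1, y1)-(x2, y2) used for zone tests."""
    cx, cy = (x1 + x2) / 2, (y1 + y2) / 2
    return {
        "bottom": (cx, y2),  # default: bottom center
        "top": (cx, y1),
        "left": (x1, cy),
        "right": (x2, cy),
        "center": (cx, cy),
    }[mode]

print(intersection_point(10, 20, 30, 60))            # (20.0, 60) bottom center
print(intersection_point(10, 20, 30, 60, "center"))  # (20.0, 40.0)
```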
Introduction¶
An identity is a unique instance of a certain class of object. Identities allow BrainFrame to recognize a specific, pre-defined person in a group of people, or a specific car in a parking lot. To use this feature, the user must upload one or more images or precomputed vectors of that unique instance. BrainFrame will pick out differentiating features of that instance for later use during recognition.
Identities can span multiple classes of object. For example, a person identity might be described by the person's face and by their gait.
Identities are added to BrainFrame in bulk using a specially organized directory. This directory includes child directories for each identity the user wants to add to BrainFrame. These child directories are in the format "unique_id (nickname)", where the unique ID is a differentiating string, and the nickname is a friendly name to display the identity as. These child directories contain one or more other directories that contain images or precomputed vectors of the identifiable object. The name of the directory defines the class of what is in the image, so if the identity is for a person and we have pictures of their face, the directory would be called "face".
An image should be of only a single instance of the class it is being encoded for. Most common image formats are supported, like JPEG and PNG.
Precomputed vectors are arrays of floating point values in a JSON file. These are used when the encoded vector of a class is already known. For example, DTags have their vector value printed on the tag so they can be registered through this method. For most class types, images are preferred.
An example of this format is shown below. In this example, we are creating three identities with pictures of their faces:
$ tree brainframe-identities/
brainframe-identities/
├── employee000000 (John Cena)
│ └── face
│ ├── 0001_01.jpg
│ ├── 0002_01.jpg
│ └── 0003_01.jpg
├── employee000001 (Bob Suruncle)
│ └── face
│ ├── 0001_01.jpg
│ ├── 0002_01.jpg
│ ├── 0003_01.jpg
│ ├── 0004_01.jpg
│ ├── 0005_01.jpg
│ └── 0006_01.jpg
└── employee000002 (Stacy Sgotigoenon)
└── face
├── 0001_01.jpg
├── 0002_01.jpg
├── 0003_01.jpg
├── 0004_01.jpg
├── 0005_01.jpg
├── 0006_01.jpg
└── 0007_02.jpg
Once you have this directory structure created, you're ready to add it using the UI.
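Before uploading a directory like the one above, a short script can sanity-check the names and any precomputed vector files. This helper is purely illustrative (not part of BrainFrame); it assumes the "unique_id (nickname)" naming and plain JSON arrays of numbers described above:

```python
import json
import re

# "unique_id (nickname)" -- a unique ID, a space, then the nickname in parens.
NAME_RE = re.compile(r"^(?P<unique_id>\S+) \((?P<nickname>.+)\)$")

def parse_identity_dir_name(name):
    """Split an identity directory name into (unique_id, nickname)."""
    match = NAME_RE.match(name)
    if match is None:
        raise ValueError(f"Bad identity directory name: {name!r}")
    return match["unique_id"], match["nickname"]

def parse_vector(json_text):
    """Parse a precomputed encoding vector: a JSON array of numbers."""
    vector = json.loads(json_text)
    if not isinstance(vector, list) or not all(
            isinstance(value, (int, float)) for value in vector):
        raise ValueError("Vector file must be a JSON array of numbers")
    return [float(value) for value in vector]

print(parse_identity_dir_name("employee000000 (John Cena)"))
# ('employee000000', 'John Cena')
```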
Adding Identities¶
To add new identities, start by clicking the "Identity Configuration" button in the toolbar.
The "Identity Configuration" window displays the identities that are currently uploaded. On the right is a grid view of all the identities in the database. On the left is a list of the encoding classes available through currently loaded capsules, along with the classes of any encodings already in the database. Clicking on an encoding class will filter the grid to show only identities encoded with that class. There is also a search tool at the top left that can be used to search by the unique IDs or nicknames of identities.
To add more identities, click the floating action button at the bottom right of the window.
Clicking the Add Identities button will bring up a directory selection dialog. Use the folder button to select the path to the identities directory. If desired, you can also type the path manually in the text box.
Wait for the identities to finish uploading using the progress bar that appears at the bottom as an indicator.
If there is an error with the directory structure, a dialog will pop up with the error. If any errors occurred for any of an identity's encodings, they will be displayed in a separate window as a tree view.
Uploading the same directory twice will not result in duplicates, so it is safe to make any necessary modifications to the data and upload the entire directory again.
Once the identities are uploaded, you will see them in the Identity Configuration window.
Introduction¶
For most deployment configurations, IP cameras will be hosted on a local network that is protected behind a firewall. If the BrainFrame server is not running in this local network, it will not be able to directly access the IP cameras. To enable this kind of configuration, BrainFrame provides tools to proxy video streams from local networks to a remote BrainFrame server, through the concept of a premises.
A premises is a local network with one or more IP cameras. In order for a remote BrainFrame server to have access to IP cameras on a premises, a StreamGateway must be running on that premises. The StreamGateway will take care of proxying local IP camera streams to BrainFrame as needed.
Prerequisites¶
The StreamGateway communicates with a stream proxy server that runs alongside the BrainFrame server. To allow the StreamGateway to communicate with this server, the StreamGateway server port (8004) must be forwarded. See the Server Configuration page for information on how to customize this port.
Setting Up a StreamGateway¶
The StreamGateway executable is included in your distribution of BrainFrame. Transfer the executable to a dedicated machine on the same local network as the IP cameras.
Before starting the StreamGateway, we need to create a new premises with the `new` command. For additional options, run `./StreamGateway new -h`.
./StreamGateway new --premises-name "The Emerald City" --hostname <BrainFrame IP Address>
The `new` command creates a new premises and saves the provided information to a configuration file named `gateway_config.json`.
To start the StreamGateway, use the `start` command.
./StreamGateway start
Using Premises¶
Now that we've created a premises and started a StreamGateway, we can connect to IP cameras on that premises. In the Add Stream dialog, select the new premises from the "Premises" drop-down and provide an IP camera URL as if it were being connected to directly (which usually means a local IP address should be used).
Deleting a Premises¶
When a premises is no longer needed, it can be deleted using the StreamGateway `delete` command.
./StreamGateway delete
Introduction¶
Capsule Options allow users to customize behaviors of the capsule, for example, using a threshold to filter out less confident detections from an algorithm. Capsules can define their own options, and hence different capsules will have different configurable options.
Once a capsule is loaded, its options will be set to the default value, and those values will apply to all streams. This tutorial will demonstrate how to set the global capsule options, and override global capsule options for specified streams.
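The override behavior can be thought of as a simple lookup: a stream-specific value wins when present, and the global value applies otherwise. A sketch of that resolution rule (the option names here are just examples, not BrainFrame's actual option keys):

```python
def effective_options(global_options, stream_overrides):
    """Resolve the options a stream actually runs with."""
    merged = dict(global_options)      # start from the global values
    merged.update(stream_overrides)    # per-stream overrides take precedence
    return merged

global_options = {"capsule_enabled": True, "threshold": 0.5}
stream_overrides = {"capsule_enabled": False}
print(effective_options(global_options, stream_overrides))
# {'capsule_enabled': False, 'threshold': 0.5}
```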
Global Capsule Options¶
Global capsule options are applied to all video streams, except those which have overridden the option for a specific stream. In this example, we have two video streams and two capsules loaded.
To set the global capsule options, click the `Capsules` icon at the bottom-left of the client. You will see the loaded capsules and their options.
You can see that there are two options for `Detector Face Fast`: one enables or disables the capsule, and the other is the detection threshold.
You can change a capsule option's value by clicking the checkbox or editing the text box, depending on the type of the option. You can always click the `Reset to Defaults` button in the right corner to discard your changes and restore the default values.
Let's disable this capsule by unchecking the checkbox after `Capsule Enabled`, then clicking the `Apply` button.
Now if you go back to the video streams, you will see that `Detector Face Fast` has been disabled for all videos.
Override Global Capsule Option¶
You can set unique capsule options for a specific video stream while leaving the other video streams using the global capsule options. For instance, say we want to enable `Detector Face Fast` for the first video stream and leave it disabled for the second one. Click the first video, then, in the video view, click the `Stream Capsule Config` button at the bottom-right.
In the `Stream Capsule Options` view, check the `Override Global` checkbox for the `Capsule Enabled` option, then check the checkbox in the `Value` column. Finally, click `Apply` to make it take effect.
Now let's go back to the video view: you will see face detection on the first video, but not on the second.
Introduction¶
BrainFrame allows you to analyze the traffic conditions by counting objects in specified zones. You will be able to see the data from the BrainFrame client, or through the REST API. In BrainFrame, there are two metrics to measure the traffic history:
total_entered
: The total number of objects that have entered the zone
total_exited
: The total number of objects that have exited the zone
This is a core feature of BrainFrame and is enabled by default. However, you will need to meet certain capsule requirements in order to use it. Because the implementations differ, the requirements vary between the types of zones, i.e. regions and lines. For more information about zones, please refer to the documentation.
Regions¶
In BrainFrame, regions are represented as polygons (more than two coordinates).
For a region, total_exited
represents the total number of objects that have
moved from the inside of the region to the outside; total_entered
is the
inverse.
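Region containment itself is computed by the BrainFrame server, but the idea behind deciding whether a detection is inside a polygonal region can be illustrated with a standard ray-casting test. This is a simplified, library-free sketch, not BrainFrame's actual implementation:

```python
def point_in_polygon(point, polygon):
    """Ray-casting test: is (x, y) inside the polygon (list of [x, y])?"""
    x, y = point
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        # Does a ray cast to the right from (x, y) cross this edge?
        if (y1 > y) != (y2 > y):
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

# A triangular region (a region always has more than two coordinates)
region = [[0, 0], [10, 0], [0, 10]]
print(point_in_polygon((2, 2), region))  # True
print(point_in_polygon((9, 9), region))  # False
```

Conceptually, each time a tracked object's containment flips from outside to inside, total_entered increments, and the reverse increments total_exited.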
You will need an object detector to enable BrainFrame to analyze the traffic.
We supply two starter capsules Detector Person And Vehicle Fast
and
Detector Person Vehicle Bike Openvino
that can satisfy this requirement. As
regions are primarily used to analyze enclosed objects, the client only
displays the counts of objects that are currently contained within regions.
However, you can retrieve the entire, unabridged counts using
get_latest_zone_statuses()
or get_zone_status_stream()
or the corresponding
REST API. You will get a ZoneStatus
object:
{
  "2": {
    "Store": {
      "zone": {...},
      "tstamp": 1601450507.8037934,
      "total_entered": {
        "person": 146
      },
      "total_exited": {
        "person": 148
      },
      "within": [],
      "entering": [],
      "exiting": [],
      "alerts": []
    },
    "Screen": {...}
  }
}
In the above example, 146 people have entered the "Store" region, and 148 have exited. (Note: some fields have been removed for succinctness.)
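Assuming the payload shape shown above, a short helper can derive the current net occupancy of a zone (entered minus exited) per class. This is an illustrative sketch over plain dicts, not part of the official API:

```python
def net_occupancy(zone_status):
    """Net count per class: total_entered minus total_exited."""
    entered = zone_status.get("total_entered", {})
    exited = zone_status.get("total_exited", {})
    classes = set(entered) | set(exited)
    return {c: entered.get(c, 0) - exited.get(c, 0) for c in classes}

# The "Store" zone status from the example above, abridged
store_status = {
    "total_entered": {"person": 146},
    "total_exited": {"person": 148},
}
print(net_occupancy(store_status))  # {'person': -2}
```

A negative value, as here, simply means more exits than entries have been observed since counting began.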
Lines¶
In BrainFrame, lines are a type of zone that only have two coordinates.
Similar to regions, total_entered
and total_exited
represent the number of
objects that have moved across the line. You will be able to see the traffic
data in the BrainFrame client. For each line, there will be an arrow indicating
the "entering" direction of the line.
Similar to regions, you can also get the data through the REST API:
{
  "2": {
    "Street": {
      "zone": {...},
      "tstamp": 1601456309.9824505,
      "total_entered": {
        "person": 20
      },
      "total_exited": {
        "person": 15
      },
      "within": [],
      "entering": [],
      "exiting": [],
      "alerts": []
    },
    "Screen": {...}
  }
}
In the example above, 20 people have crossed the line named "Street" in the entering direction, and 15 in the exiting direction.
As it is difficult to determine the direction of movements using only object
detectors, you will also need an object tracker to enable BrainFrame to analyze
the trajectory of objects. Some object tracker capsules track encoded detections
and will depend on an additional encoding capsule. For example, Tracker Person
will need Encoder Person
to work. On the other hand,
Tracker Vehicle
doesn't need one as it tracks vehicles using an IOU-based
algorithm.
An encoder capsule's name usually starts with Encoder
; you can also identify an
encoder by its output type on our download page. If a
capsule states Encoded: True
in its Output
field, it's an encoder. Here are
some encoders available on our website:
Encoder Person
Encoder Person Openvino
Similarly, a tracker capsule's name usually starts with Tracker
, and states
Tracked: True
in its Output
field. In addition, if it states Encoded: True
in the Required Input
field, that means this capsule needs a corresponding
object encoder to work. Here are some trackers available on our website:
Tracker Person
Tracker Vehicle
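The identification rules above can be made concrete with a small sketch that classifies capsule listings. The dict shape here is hypothetical, invented for illustration; consult the download page for the real metadata:

```python
def classify_capsule(capsule):
    """Rough classification based on the Output / Required Input flags
    described above (hypothetical dict shape, for illustration only)."""
    if capsule.get("output", {}).get("encoded"):
        return "encoder"
    if capsule.get("output", {}).get("tracked"):
        if capsule.get("required_input", {}).get("encoded"):
            return "tracker (needs an encoder)"
        return "tracker (standalone)"
    return "other"

capsules = [
    {"name": "Encoder Person", "output": {"encoded": True}},
    {"name": "Tracker Person", "output": {"tracked": True},
     "required_input": {"encoded": True}},
    {"name": "Tracker Vehicle", "output": {"tracked": True}},
]
for capsule in capsules:
    print(capsule["name"], "->", classify_capsule(capsule))
```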
You can use the following capsule combinations as examples:
Detector Person And Vehicle Fast
Encoder Person
Tracker Person
Or:
Detector Person Vehicle Bike Openvino
Tracker Vehicle
Ended: User Guide
Tutorials
Introductions¶
These tutorials are written with the goal of helping developers better understand the architecture of BrainFrame, how to interact with BrainFrame using our APIs, and the required steps to build a capsule.
You can find all the example scripts and resources on our website or the tutorial repository.
Introduction¶
This tutorial provides tips on how to connect IP cameras to BrainFrame. BrainFrame is expected to work with any IP camera that is compatible with RTSP. To make setup easier, we recommend IP cameras that support ONVIF as well. ONVIF is a complementary standard that allows IP cameras to be automatically discovered on a local network by tools like ONVIF Device Manager, among other things. For more information, see the ONVIF website.
If you are having trouble getting your IP camera connected to BrainFrame, please feel free to contact us on our forum. Please include the make and model of your IP camera, as well as the RTSP URL you are attempting to connect with.
What is RTSP?¶
RTSP is the most common protocol that IP cameras use to communicate with other applications, like BrainFrame. To connect to an RTSP-compatible IP camera, we need to get its RTSP URL.
RTSP URLs are in the following format:
rtsp://{username}:{password}@{ip address}:{port}/{path}
The values in curly braces represent fields that need to be filled in. A full RTSP URL might look something like this:
rtsp://admin:mypassword@10.0.0.104:553/streams/0
Let's break down each portion of the RTSP URL and discuss how you might find its value for your IP camera.
Username and Password¶
Most IP cameras require a username and password to prevent unauthorized access. These values can be configured in the IP camera's settings. If you haven't changed these values, your IP camera may be using its default username and password. These defaults can be found in the camera manual or on aggregation websites like security.world.
IP Address¶
When an IP camera connects to your router, it will be assigned an IP address. An ONVIF discovery tool can discover compatible IP cameras on your network and provide you with their IP addresses. Otherwise, you may find the IP camera on your router's configuration page under its list of connected devices.
Some IP cameras may request a static IP address from the router. If, for whatever reason, that static IP address is unavailable, the IP camera may not successfully connect to your router. Device vendors provide special tools to interact with their IP cameras when in this state, often as a smartphone app or desktop application. Consult your device's manual for details.
Port¶
Chances are, your IP camera is using the default RTSP port, 554
. If that's the
case, you may omit the :{port}
section of the RTSP URL.
Path¶
IP cameras that provide multiple video streams may require you to append a path to the end of the RTSP URL. Even if your IP camera only provides one stream, the manufacturer may require a specific path value anyway. An ONVIF discovery tool should be able to discover this path. Otherwise, aggregation websites like security.world may have this information available for your device.
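Putting the four parts together, a small helper can assemble the final URL, omitting the port when it is the default 554 and the credentials when none are set. The names here are illustrative:

```python
def build_rtsp_url(ip_address, username=None, password=None,
                   port=554, path=""):
    """Assemble an RTSP URL from its parts. The port is omitted when it
    is the default 554, and credentials are omitted when not given."""
    auth = f"{username}:{password}@" if username else ""
    port_part = "" if port == 554 else f":{port}"
    path_part = f"/{path.lstrip('/')}" if path else ""
    return f"rtsp://{auth}{ip_address}{port_part}{path_part}"

print(build_rtsp_url("10.0.0.104", "admin", "mypassword",
                     port=553, path="streams/0"))
# rtsp://admin:mypassword@10.0.0.104:553/streams/0
```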
ONVIF Discovery Tools¶
If your IP camera supports ONVIF, you can use special ONVIF-compatible tools to discover your device on a local network. This allows for easy access to the camera's IP address, port, and path for each stream.
For Windows, ONVIF Device Manager is free software and can be downloaded here.
For Linux, ONVIF Device Tool is a free utility provided by Lingodigit. Download the release corresponding to your distribution here.
Using Your Phone as an IP Camera¶
If you want to test BrainFrame's IP camera support but don't have a dedicated device available, you can repurpose an Android device using the IP Webcam app from the Play Store.
To start an IP camera stream, select the "Start server" option. Your device's IP address and port should be displayed near the bottom of the screen. Use those values to fill in this RTSP URL, which can be provided to BrainFrame:
rtsp://{ip address}:{port}/h264_ulaw.sdp
REST API
Introduction¶
In this tutorial, we will connect video streams to the BrainFrame server.
You can find the complete script on our
GitHub repository.
Before we start, you should have BrainFrame Server and Client installed on
your machine. If you don't have them yet, please follow the
setup instructions.
This tutorial will use our Python library that wraps around
the BrainFrame REST API. The library makes programming for BrainFrame in Python
easier. If you're using Python, we strongly recommend using our Python API.
Otherwise, you can always follow our REST API documentation to use
the REST API directly.
Setup Environment¶
First, let's install the BrainFrame Python API library and set up the
environment. Run the following command, either in a virtual
environment (recommended) or system-wide.
pip3 install brainframe-api
The Python API is now installed and ready for use.
The following APIs will be used in this tutorial:
api.get_stream_configurations()
api.set_stream_configuration(...)
api.start_analyzing(stream_id=...)
Check Existing Streams¶
Now let's create a new, empty script. The first thing you want to do is to
import the Python API library.
from pathlib import Path
from brainframe.api import BrainFrameAPI, bf_codecs
Then, initialize an API instance with the BrainFrame server URL. In this
tutorial, we will connect to the BrainFrame server instance running on our local
machine.
api = BrainFrameAPI("http://localhost")
The server is now connected, and we can start working with BrainFrame. First,
let's see if there are any streams already connected to BrainFrame.
stream_configs = api.get_stream_configurations()
print("Existing streams: ", stream_configs)
If you run the script, and you only have a freshly-installed BrainFrame server,
you should see just an empty list. Otherwise, the list of streams you have
already connected will appear.
Create a New Stream Codec¶
Next, we will create a new stream configuration. The API function we will use is
api.set_stream_configuration(...)
. Looking at the function, it takes just a
stream configuration codec as input. You can check the definition of different
codecs in the Python library documentation.
Currently, we support three types of video sources:
- IP cameras
- Webcams
- Local files
For each type of video source, you need to set the corresponding connection
type and connection options. For more information, check this
documentation.
IP Camera¶
For an IP camera, the connection type will be IP_CAMERA
. In the
connection_options
, a valid url
is required.
# Create a new IP camera StreamConfiguration codec
new_ip_camera_stream_config = bf_codecs.StreamConfiguration(
    # The display name on the client/in API responses
    name="IP Camera",
    connection_type=bf_codecs.StreamConfiguration.ConnType.IP_CAMERA,
    connection_options={
        # The URL of the IP camera
        "url": "your_ip_camera_url",
    },
    runtime_options={},
    premises_id=None,
)
Webcam¶
For a webcam, the connection type is WEBCAM
. Note: the webcam must be connected to
the server machine, not the client machine. In the connection_options
, the device ID of the
webcam is required. On Linux, you can find the device ID using:
ls /dev/ | grep video
After you have the device ID, use it in the codec.
# Create a webcam StreamConfiguration codec
new_web_camera_stream_config = bf_codecs.StreamConfiguration(
    # The display name on the client/in API responses
    name="Webcam",
    connection_type=bf_codecs.StreamConfiguration.ConnType.WEBCAM,
    connection_options={
        # The device ID of the webcam
        "device_id": 0,
    },
    runtime_options={},
    premises_id=None,
)
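As a side note, `ls /dev | grep video` prints entries like `video0`; the numeric suffix is the device ID to use above. A small sketch for extracting the IDs (on Linux, pass the real contents of `/dev`, e.g. `os.listdir("/dev")`):

```python
import re

def video_device_ids(dev_entries):
    """Extract numeric webcam device IDs from /dev entry names."""
    ids = []
    for name in dev_entries:
        match = re.fullmatch(r"video(\d+)", name)
        if match:
            ids.append(int(match.group(1)))
    return sorted(ids)

# On Linux you would pass os.listdir("/dev") instead of this sample list
print(video_device_ids(["video0", "video1", "sda", "null"]))  # [0, 1]
```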
Local File¶
For a local file, you have to first upload the video file to the BrainFrame
server's database and get a storage ID. The connection type is FILE
. In the
connection_options
, the storage ID of the file is required.
# Upload the local file to the database and get a storage ID
storage_id = api.new_storage(
    data=Path("../videos/shopping_cashier_gone.mp4").read_bytes(),
    mime_type="application/octet-stream"
)
# Create a local file StreamConfiguration codec
new_local_file_stream_config = bf_codecs.StreamConfiguration(
    # The display name on the client side
    name="Local File",
    connection_type=bf_codecs.StreamConfiguration.ConnType.FILE,
    connection_options={
        # The storage ID of the file
        "storage_id": storage_id,
    },
    runtime_options={},
    premises_id=None,
)
Create a StreamConfiguration on the Server Side¶
Once we have the StreamConfiguration codec, we can tell BrainFrame Server to
connect to it. In this tutorial, we will use the file-based codec. If you have
an IP camera or a Webcam connected to the server, you can try using those as
well.
# Tell the server to connect to the stream configuration
new_local_file_stream_config = api.set_stream_configuration(
    new_local_file_stream_config)
Once the server receives the stream configuration, it will connect to it, assign
a stream ID to it, and send it back. It is helpful to keep track of the IDs of
the streams you have added using the return value.
Finally, don't forget to tell BrainFrame to start analyzing/performing inference
on the stream.
# Start analysis on the stream
api.start_analyzing(new_local_file_stream_config.id)
Now you should be able to see that stream in the BrainFrame client.
Introduction¶
In this tutorial, we will walk you through a simple use case for BrainFrame:
getting a WeChat notification when there is no cashier in the checkout area. You
can find the complete script on our
GitHub repository.
Setup The Environment¶
In a previous tutorial, we installed the BrainFrame server, client, and Python
API libraries. In this tutorial, the API functions we are going to use are:
api.set_stream_configuration(...)
api.set_zone(...)
api.get_latest_zone_statuses()
api.get_zone_status_stream()
We will be using a third-party library called itchat to send
notifications to WeChat. We'll install it using pip
:
pip3 install itchat
We will also use one of our publicly available capsules,
detector_people_and_vehicles_fast
. You can grab it from our
downloads page.
Before we start, you should have the BrainFrame server and client running, and
the capsule ready.
Log In to WeChat¶
As usual, we will begin by importing our dependencies:
from pathlib import Path
import itchat as wechat
from brainframe.api import BrainFrameAPI, bf_codecs
Then, let's log in to our WeChat account and send a test message:
wechat.auto_login()
wechat.send_msg(f"Notifications from BrainFrame have been enabled",
toUserName="filehelper")
The script will display a QR code. Scan it with your WeChat app to log in. Your
File Helper will then receive the message.
Create a New Stream from a Local File¶
First, set the BrainFrame server URL:
api = BrainFrameAPI("http://localhost")
We will reuse the code snippet introduced in the
previous tutorial to create a stream configuration
on the BrainFrame server. We're going to use a simulated video file for this
demo, but it will work with live video streams as well.
# Upload the local file to the BrainFrame server's database and get its storage
# ID
storage_id = api.new_storage(
    data=Path("../videos/shopping_cashier_gone.mp4").read_bytes(),
    mime_type="application/octet-stream"
)
# Create a StreamConfiguration with the storage ID
new_stream_config = bf_codecs.StreamConfiguration(
    # The display name on the client side
    name="Demo",
    # Specify that we're using a file
    connection_type=bf_codecs.StreamConfiguration.ConnType.FILE,
    connection_options={
        # The storage ID of the file
        "storage_id": storage_id,
    },
    runtime_options={},
    premises_id=None,
)
# Send the StreamConfiguration to the server to have it connect
new_stream_config = api.set_stream_configuration(new_stream_config)
# Tell the server to start analysis on the new stream
api.start_analyzing(new_stream_config.id)
You can download the demo video from our
tutorial scripts repository. We recorded a video
simulating a cashier serving customers.
Create a Zone and Setup an Alarm¶
In BrainFrame, alarms are associated with zones, and you can configure them
through the client or through the API. You can check our documentation on
Zones and Alarms
for more information.
Using the API, we will create a zone around the check-out counter, and an alarm
that will be triggered if no people are in that zone.
# Condition for the Alarm that will trigger when there is <1 person in the
# zone that it is assigned to
no_cashier_alarm_condition = bf_codecs.ZoneAlarmCountCondition(
    test=bf_codecs.CountConditionTestType.LESS_THAN,
    check_value=1,
    with_class_name="person",
    with_attribute=None,
    window_duration=5.0,
    window_threshold=0.5,
    intersection_point=bf_codecs.IntersectionPointType.BOTTOM,
)
# Create the ZoneAlarm. It will be active all day, every day, and will be
# triggered if the detection results satisfy the condition we created. Because
# use_active_time == False, the active start/end times will be ignored.
no_cashier_alarm = bf_codecs.ZoneAlarm(
    name="Missing Cashier!",
    count_conditions=[no_cashier_alarm_condition],
    rate_conditions=[],
    use_active_time=False,
    active_start_time="00:00:00",
    active_end_time="23:59:59",
)
# Create a Zone object with the above alarm
cashier_zone = bf_codecs.Zone(
    name="Cashier",
    stream_id=new_stream_config.id,
    alarms=[no_cashier_alarm],
    coords=[[513, 695], [223, 659], [265, 340], [513, 280], [578, 462]]
)
# Send the Zone to BrainFrame
api.set_zone(cashier_zone)
In the client, you will be able to see the zone there:
Get Zone Status¶
In BrainFrame, we use the ZoneStatus
data structure to represent the inference
results of frames. Let's use it to get ours.
We can use the API to get the latest ZoneStatus
objects from BrainFrame.
zone_statuses = api.get_latest_zone_statuses()
print("Zone Statuses: ", zone_statuses)
The above code will print out the latest ZoneStatus
objects for each stream
with analysis/inference enabled. Warning: it can be a very long data structure,
depending on how many streams there are and what capsules are loaded.
This is the most direct way to get the most recent inference results from
BrainFrame. However, you have to call this function each time you want new
results, which is a hassle.
A different API function, get_zone_status_stream()
, helps alleviate this issue.
Instead of relying on you to poll for ZoneStatus
objects, this function
returns an iterable object. Each time BrainFrame has a new result
available, it will be pushed to the iterator.
zone_status_iterator = api.get_zone_status_stream()
for zone_statuses in zone_status_iterator:
print("Zone Statuses: ", zone_statuses)
This script will print the zone statuses as fast as the capsules can process the
frames.
Get Alarms and Send Notifications to WeChat¶
We can iterate through the zone status packets and check if there are any alerts
that recently ended after lasting more than 5 seconds. If any are found, we send a
notification. Note that for this example, the alert will only trigger after
the cashier returns to the counter, a situation that is not as useful outside of
the demo environment. The script will also only send one notification before
exiting, to avoid sending too many notifications.
# Iterate through the zone status packets
for zone_status_packet in zone_status_iterator:
    for stream_id, zone_statuses in zone_status_packet.items():
        for zone_name, zone_status in zone_statuses.items():
            for alert in zone_status.alerts:
                # Check if the alert has ended
                if alert.end_time is None:
                    continue
                total_time = alert.end_time - alert.start_time
                # Check if the alert lasted for more than 5 seconds
                if total_time > 5:
                    alarm = api.get_zone_alarm(alert.alarm_id)
                    wechat.send_msg(
                        f"BrainFrame Alert: {alarm.name} \n"
                        f"Duration {total_time}", toUserName="filehelper")
                    # Stop here, for demo purposes
                    exit()
The script will send an alert to your WeChat File Helper if the cashier has been
missing for more than 5 seconds. It will then exit the loop.
Log Out of Your WeChat Account¶
Finally, before we exit the script, don't forget to log out of your WeChat account.
Put the following code above exit()
.
wechat.logout()
Introduction¶
In other tutorials, we demonstrated how to start a video stream and run
inference, a common scenario. But sometimes you might want to run inference on
images instead of videos. This tutorial will demonstrate how to do that using
BrainFrame.
The use case in this tutorial is pretty simple. We want to iterate over all
images in a directory to find the ones with cats in them. You can find the
complete script and sample images on our GitHub repository.
Setup The Environment¶
In a previous tutorial, we installed BrainFrame
server, client, and Python API libraries. The API functions we are going to use
in this tutorial are:
api.get_plugins(...)
api.process_image(...)
In this tutorial, we will use one of our publicly available capsules:
detector_people_and_vehicles_fast
. You can download it from our
downloads page.
Before we start, you should already have the BrainFrame server and client
running, and the capsule downloaded.
Check the Existing Capsules¶
As usual, let's import the dependencies first:
from pathlib import Path
import cv2
from brainframe.api import BrainFrameAPI
And connect to the server:
api = BrainFrameAPI("http://localhost")
Before we start processing images, we want to check the existing capsules to
verify that detector_people_and_vehicles_fast
is loaded:
# Get the names of existing capsules
loaded_capsules = api.get_plugins()
loaded_capsules_names = [capsule.name for capsule in loaded_capsules]
# Print out the capsule names
print(f"Loaded Capsules: {loaded_capsules_names}")
Make sure detector_people_and_vehicles_fast
is present.
Loaded Capsules: ['detector_people_and_vehicles_fast']
You can also check the loaded capsules using the client.
Iterate through the Image Directory¶
With the capsule loaded, we can iterate over all the images in the directory,
and get the inference results for each image. Then we will filter for detections
with class_name == "cat"
.
# Root directory containing the images.
IMAGE_ARCHIVE = Path("../images")
# Iterate through all images in the directory
for image_path in IMAGE_ARCHIVE.iterdir():
    # Use only PNGs and JPGs
    if image_path.suffix not in [".png", ".jpg"]:
        continue
    # Get the image array
    image_array = cv2.imread(str(image_path))
    # Perform inference on the image and get the results
    detections = api.process_image(
        # Image array
        img_bgr=image_array,
        # The names of capsules to enable while processing the image
        plugin_names=["detector_people_and_vehicles_fast"],
        # The capsule options you want to set. You can check the available
        # capsule options with the client, or by printing the capsule
        # metadata alongside the capsule names in the snippet above.
        option_vals={
            "detector_people_and_vehicles_fast": {
                # This capsule can detect people, vehicles, and animals.
                # In this example we want to filter out detections that
                # are not animals.
                "filter_mode": "only_animals",
                "threshold": 0.9,
            }
        }
    )
    print()
    print(f"Processed image {image_path.name} and got {detections}")
    # Filter the cat detections using the class name
    cat_detections = [detection for detection in detections
                      if detection.class_name == "cat"]
    if len(cat_detections) > 0:
        print(f"This image contains {len(cat_detections)} cat(s)")
Now the script will tell you if there are cats in those images:
Processed image one-person.jpg and got []
Processed image no_people.jpg and got []
Processed image one-person-png.png and got []
Processed image one_cat.jpg and got [Detection(class_name='cat', coords=[[800, 0], [1566, 0], [1566, 850], [800, 850]], children=[], attributes={}, with_identity=None, extra_data={'detection_confidence': 0.9875224233}, track_id=None)]
This image contains 1 cat(s)
Processed image two_people_and_dtag.png and got []
Processed image two_people.jpg and got []
Introduction¶
In this tutorial, we will walk through a simple use case that checks if
someone is violating social distancing rules.
Please be aware that the goal of this tutorial is to help you get familiar with
the usage of BrainFrame's inference capabilities. A real social distancing use
case is much more complicated than this script.
In this script, we only have two rules:
- Two
person
detection bounding boxes cannot overlap
- The distance between the centers of two person detections' bounding boxes
must be greater than 500 pixels (by default; this will be configurable).
You can find the complete script on our GitHub repository.
Setup The Environment¶
The environment setup is similar to the environment we have in the
WeChat Notification tutorial. You can refer to it to set up
the environment.
In this tutorial, the API functions that we are going to use are:
api.set_stream_configuration(...)
api.set_zone(...)
api.get_zone_status_stream()
api.set_plugin_option_vals(...)
Helper Functions¶
First, import the dependencies:
import math
from argparse import ArgumentParser
from pathlib import Path
from brainframe.api import BrainFrameAPI, bf_codecs
To make the script more readable, we'll define two helper functions in
advance. The first checks whether two bounding boxes overlap:
# Helper function to check if two detections overlap
def is_overlapped(det1: bf_codecs.Detection,
                  det2: bf_codecs.Detection) -> bool:
    """
    :param det1: First Detection
    :param det2: Second Detection
    :return: Whether the two Detections' bboxes overlap
    """
    # Sort the x, y coordinates in ascending order
    coords1_sorted_x = sorted([c[0] for c in det1.coords])
    coords2_sorted_x = sorted([c[0] for c in det2.coords])
    coords1_sorted_y = sorted([c[1] for c in det1.coords])
    coords2_sorted_y = sorted([c[1] for c in det2.coords])
    # Return False if the rects do not overlap horizontally
    if coords1_sorted_x[0] > coords2_sorted_x[-1] \
            or coords2_sorted_x[0] > coords1_sorted_x[-1]:
        return False
    # Return False if the rects do not overlap vertically
    if coords1_sorted_y[0] > coords2_sorted_y[-1] \
            or coords2_sorted_y[0] > coords1_sorted_y[-1]:
        return False
    # Otherwise, the two rects must overlap
    return True
The second helper function calculates the distance between the centers of
two bounding boxes:
# Helper function to calculate the distance between the center points of two
# detections
def get_distance(det1: bf_codecs.Detection,
                 det2: bf_codecs.Detection) -> float:
    """
    :param det1: First Detection
    :param det2: Second Detection
    :return: Distance between the centers of the two Detections
    """
    return math.hypot(det1.center[0] - det2.center[0],
                      det1.center[1] - det2.center[1])
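As a quick sanity check of the two helpers, the same logic can be exercised without the brainframe library by operating on plain coordinate lists (a standalone sketch mirroring is_overlapped and get_distance):

```python
import math

def boxes_overlap(coords1, coords2):
    """Axis-aligned overlap test on lists of [x, y] corners."""
    xs1, ys1 = [c[0] for c in coords1], [c[1] for c in coords1]
    xs2, ys2 = [c[0] for c in coords2], [c[1] for c in coords2]
    if min(xs1) > max(xs2) or min(xs2) > max(xs1):
        return False  # No horizontal overlap
    if min(ys1) > max(ys2) or min(ys2) > max(ys1):
        return False  # No vertical overlap
    return True

def center_distance(coords1, coords2):
    """Distance between the centers of two bounding boxes."""
    def center(coords):
        xs, ys = [c[0] for c in coords], [c[1] for c in coords]
        return ((min(xs) + max(xs)) / 2, (min(ys) + max(ys)) / 2)
    (x1, y1), (x2, y2) = center(coords1), center(coords2)
    return math.hypot(x1 - x2, y1 - y2)

box_a = [[0, 0], [10, 0], [10, 10], [0, 10]]
box_b = [[20, 0], [30, 0], [30, 10], [20, 10]]
print(boxes_overlap(box_a, box_b))    # False
print(center_distance(box_a, box_b))  # 20.0
```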
Create a New Stream from a Local File¶
First, initialize the API instance and connect to the server.
# Initialize the API
api = BrainFrameAPI("http://localhost")
Then, we want to start a video stream. You can find the sample video in our
tutorial repository.
# Upload the local file to the database and get its storage ID
storage_id = api.new_storage(
    data=Path("../videos/social_distancing.mp4").read_bytes(),
    mime_type="application/octet-stream"
)
# Create a StreamConfiguration referencing the new storage ID
new_stream_config = bf_codecs.StreamConfiguration(
    # The display name on the client side
    name="Demo",
    # This stream will be from a file
    connection_type=bf_codecs.StreamConfiguration.ConnType.FILE,
    connection_options={
        # The storage ID of the file
        "storage_id": storage_id,
    },
    runtime_options={},
    premises_id=None,
)
# Tell the server to connect to that stream configuration
new_stream_config = api.set_stream_configuration(new_stream_config)
Next, instead of using the defaults, we will configure some capsule options to
filter out low-quality detections.
# Filter out duplicate detections
api.set_plugin_option_vals(
    plugin_name="detector_people_and_vehicles_fast",
    stream_id=new_stream_config.id,
    option_vals={
        # If one bounding box overlaps another by more than 80%, we assume
        # they are duplicates of the same detection and ignore one of them.
        "max_detection_overlap": 0.8,
        "threshold": 0.9
    }
)
Finally, don't forget to tell BrainFrame to start analyzing/performing inference
on the stream.
# Start analysis on the stream
api.start_analyzing(new_stream_config.id)
Check Social Distancing Rules¶
Next, similar to the WeChat Notification tutorial,
we will get the zone status iterator and iterate through all of its
zone statuses, checking against the social distancing rules we defined above.
In the WeChat Notification tutorial, the operations on zone statuses were
somewhat involved, as they form a nested data structure. In this tutorial, we
will reorganize the data to make our calculations easier.
# Verify that there is at least one connected stream
assert len(api.get_stream_configurations()), \
    "There should be at least one stream already configured!"

# The minimum allowed distance between two people, in pixels
min_distance = 500

# Get the inference stream
for zone_status_packet in api.get_zone_status_stream():
    # Organize the detection results as a dictionary of
    # {stream_id: [Detections]}
    detections_per_stream = {
        stream_id: zone_status.within
        for stream_id, zone_statuses in zone_status_packet.items()
        for zone_name, zone_status in zone_statuses.items()
        if zone_name == "Screen"
    }
    # Iterate over each stream_id/detections combination
    for stream_id, detections in detections_per_stream.items():
        # Filter out Detections that are not people
        detections = [detection for detection in detections
                      if detection.class_name == "person"]
        # Skip this frame if there are no person detections
        if len(detections) == 0:
            continue
        # Compare the distance between each pair of detections
        for i, current_detection in enumerate(detections):
            violating = False
            for j in range(i + 1, len(detections)):
                target_detection = detections[j]
                current_detection: bf_codecs.Detection
                target_detection: bf_codecs.Detection
                # If the bboxes representing two people overlap, the
                # distance is 0; otherwise it's the distance between the
                # centers of the two bboxes
                if is_overlapped(current_detection, target_detection):
                    distance = 0
                else:
                    distance = get_distance(current_detection,
                                            target_detection)
                if distance < min_distance:
                    print(f"People are violating the social distancing "
                          f"rules, current distance: {distance}, location: "
                          f"{current_detection.coords}, "
                          f"{target_detection.coords}")
                    violating = True
                    break
            if violating:
                break
Now, whenever people violate our social distancing rules, the script will print
a message, including where they are located in the frame.
People are violating the social distancing rules, current distance: 499.8899878973373, location: [[30, 340], [411, 340], [411, 926], [30, 926]], [[571, 238], [828, 238], [828, 742], [571, 742]]
Ended: REST API
Capsules
Introduction¶
This tutorial will guide you through the process of downloading one of our
freely available OpenVisionCapsule capsules and adding
it to BrainFrame. We will be installing our simple face detector capsule that
works on all platforms, even those without a GPU.
Downloading the Capsule¶
On the computer that is hosting the BrainFrame server, navigate to our
downloads page and under Capsules, locate the
Detector Face Fast
entry. Click the link to download the capsule.
Adding the Capsule to BrainFrame¶
In the server's data directory (/var/local/brainframe
by default), there
should be a directory called capsules/
. If the capsules/
directory does not
exist, create it. Place the capsule file that you just downloaded
(detector_face_fast.cap
) within this directory.
Note: If you do not know the location of BrainFrame's data directory, you can
get it directly using the BrainFrame CLI.
mv PATH/TO/detector_face_fast.cap $(brainframe info data_path)/capsules/
An alternative is to download the capsule directly to the capsules/ directory:
wget -P $(brainframe info data_path)/capsules {DOWNLOAD_URL}
Verifying That the Capsule Works¶
The capsule should now be ready for use by BrainFrame. Let's open the client and
make sure everything is working properly.
Open the BrainFrame client and then open the Global Capsule Configuration
dialog. You should see an entry for the Detector Face Fast
capsule, with
configuration options.
Once you load a stream, you will be able to see the inference
results on the Streams view.
Introduction¶
In this tutorial, we will walk you through the creation of a basic capsule.
If you get stuck along the way or simply want to view the end-result of the
tutorial, you can find the completed capsule on our GitHub repository.
Before we start, we highly recommend reading the OpenVisionCapsules Documentation. It will give you some background information about capsules.
Set Up Environment¶
To develop your own capsule, you will need to install vcap and vcap_utils, a pair of Python libraries for encapsulating machine learning and computer vision algorithms for intelligent video analytics. They can be found on GitHub here.
pip3 install vcap vcap_utils
You might also want to download a few of our open-source capsules:
git clone https://github.com/aotuai/capsule_zoo.git
Creating a Basic Capsule¶
In this tutorial, we will create a fake capsule. It won't perform any inference; instead, it will return some fake values. The purpose of this example is to help you understand our capsule system better.
Directory structure¶
First, let's create a folder called detector_bounding_box_fake under the capsules directory, which sits next to your docker-compose file. Also create a meta.conf and a capsule.py under this directory. The resulting structure will look like:
your_working_directory
├── docker-compose.yml
└── capsules
└── detector_bounding_box_fake
├── meta.conf
└── capsule.py
For more information about the structure, you can check the documentation
here.
Capsule Metadata¶
The meta.conf file provides basic information about the capsule to BrainFrame before the rest of the capsule is loaded. In our meta.conf, we are going to define the version of the OpenVisionCapsules SDK that this capsule will be compatible with.
[about]
api_compatibility_version = 0.3
This number should be the same as the Major.Minor version reported by:
pip3 show vcap | grep Version
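For example, if the command reports Version: 0.3.1, the compatibility version is 0.3. A tiny illustrative helper for the truncation (the name api_compatibility_version is ours, not part of vcap):

```python
def api_compatibility_version(full_version):
    """Return the Major.Minor prefix of a version string, e.g. "0.3.1" -> "0.3"."""
    return ".".join(full_version.split(".")[:2])

print(api_compatibility_version("0.3.1"))  # 0.3
```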
Capsule¶
In capsule.py, we define a class called Capsule, which will define the actual behavior of the capsule. The Capsule class provides metadata that allows BrainFrame to understand the capabilities of the capsule and how it can be used, and must inherit from BaseCapsule. For more information about the BaseCapsule class, see the documentation here.
We'll import the dependencies from vcap first:
from vcap import BaseCapsule, NodeDescription, BaseBackend, DetectionNode
Then we'll define the Capsule class as a sub-class of BaseCapsule.
# Define the Capsule class
class Capsule(BaseCapsule):
    # Metadata of this capsule
    name = "detector_bounding_box_fake"
    description = "A fake detector that outputs a single bounding box"
    version = 1

    # Define the input type. As this is an object detector, and does not require
    # any input from other capsules, the input type will be a NodeDescription
    # with size=NONE
    input_type = NodeDescription(size=NodeDescription.Size.NONE)

    # Define the output type. In this case we are going to return a list of
    # bounding boxes, so the output type will be size=ALL
    output_type = NodeDescription(
        size=NodeDescription.Size.ALL,
        detections=["fake_box"],
    )

    # Define the backend. In this example, we are going to use a fake Backend,
    # defined below
    backend_loader = lambda capsule_files, device: Backend(
        capsule_files=capsule_files, device=device)

    options = {}
Backend¶
Now let's create a Backend class. The Backend class defines how the underlying algorithm is initialized and used. For more information about Backend classes, please refer to the OpenVisionCapsules Documentation.
# Define the Backend Class
class Backend(BaseBackend):
    # Since this is a fake Backend, we are not going to do anything fancy in
    # the constructor.
    def __init__(self, capsule_files, device):
        print("Loading onto device:", device)
        super().__init__()

    # In a real capsule, this method would perform inference or run
    # algorithms. For this tutorial, we are just going to return a single,
    # fake bounding box.
    def process_frame(self, frame, detection_node: None, options, state):
        return [
            DetectionNode(
                name="fake_box",
                coords=[[10, 10], [100, 10], [100, 100], [10, 100]]
            )
        ]

    # Batching can be used to improve performance; we will skip it in this
    # example.
    def batch_predict(self, input_data_list):
        pass

    # This method can be implemented to perform clean-up. It must be defined,
    # but a no-op is enough for this tutorial.
    def close(self) -> None:
        pass
The fake capsule is now complete. If you restart your BrainFrame server, you
will be able to see it loaded.
If you load a stream, you will be able to see the inference results.
Introduction¶
In this tutorial, we will walk through how to make a capsule using an
existing model trained with the [Tensorflow Object Detection API]
[TensorFlow detection model zoo]. You can find the complete capsule on
our GitHub repository.
Set Up the Environment¶
See the previous tutorial for information on
setting up a development environment.
A TensorFlow Face Detection Capsule¶
File Structure¶
As in the previous tutorial, we will begin by creating a new folder called detector_face, a meta.conf, and a capsule.py.
You will also need to put the existing TensorFlow model and its metadata in the directory. For this tutorial, they will be named detector.pb and dataset_metadata.json. Download the detector.pb and dataset_metadata.json from here. Other TensorFlow pre-trained models can be found in the TensorFlow 1 and TensorFlow 2 Object Detection Model Zoos.
The file structure will now look like:
your_working_directory
├── docker-compose.yml
└── capsules
└── detector_face
├── meta.conf
├── capsule.py
├── detector.pb
└── dataset_metadata.json
Capsule Metadata¶
Just as in the previous tutorial, put the version information in the meta.conf:
[about]
api_compatibility_version = 0.3
Capsule¶
First, import the dependencies:
# Import dependencies
import numpy as np
from typing import Dict
from vcap import (
    BaseCapsule,
    NodeDescription,
    DetectionNode,
    FloatOption,
    DETECTION_NODE_TYPE,
    OPTION_TYPE,
    BaseStreamState,
    rect_to_coords,
)
from vcap_utils import TFObjectDetector
The capsule definition will be a little more complicated than the previous one. In this capsule, we will add a threshold option. In addition, since we are using a real backend, we will pass in a lambda for backend_loader. We will talk more about this in the Backend section below.
# Define the Capsule class
class Capsule(BaseCapsule):
    # Metadata of this capsule
    name = "face_detector"
    description = "This is an example of how to wrap a TensorFlow Object " \
                  "Detection API model"
    version = 1

    # Define the input type. Since this is an object detector, and doesn't
    # require any input from other capsules, the input type will be a
    # NodeDescription with size=NONE.
    input_type = NodeDescription(size=NodeDescription.Size.NONE)

    # Define the output type. In this case, as we are going to return a list of
    # bounding boxes, the output type will be size=ALL. The type of detection
    # will be "face", and we will place the detection confidence in extra_data.
    output_type = NodeDescription(
        size=NodeDescription.Size.ALL,
        detections=["face"],
        extra_data=["detection_confidence"]
    )

    # Define the backend_loader
    backend_loader = lambda capsule_files, device: Backend(
        device=device,
        model_bytes=capsule_files["detector.pb"],
        metadata_bytes=capsule_files["dataset_metadata.json"])

    # The options for this capsule. In this example, we will allow the user to
    # set a threshold for the minimum detection confidence. This can be
    # adjusted using the BrainFrame client or through the REST API.
    options = {
        "threshold": FloatOption(
            description="Filter out bad detections",
            default=0.5,
            min_val=0.0,
            max_val=1.0,
        )
    }
Backend¶
Because we are using a TensorFlow model, we are going to use a sub-class of TFObjectDetector instead of BaseBackend. The TFObjectDetector class will conveniently do the following for us:
- Load the model bytes into memory
- Perform batch inference
- Close the model and clean up the memory when finished
TFObjectDetector already defines the constructor, batch_predict(), and close() methods for us, so we can skip defining them ourselves. We just need to implement the process_frame() method.
# Define the Backend Class
class Backend(TFObjectDetector):
    def process_frame(self, frame: np.ndarray,
                      detection_node: None,
                      options: Dict[str, OPTION_TYPE],
                      state: BaseStreamState) -> DETECTION_NODE_TYPE:
        """
        :param frame: A numpy array of shape (height, width, 3)
        :param detection_node: None
        :param options: Example: {"threshold": 0.5}. Defined in the Capsule
            class above.
        :param state: (Unused in this capsule)
        :return: A list of detections
        """
        # Send the frame to the backend. This call returns a queue. BrainFrame
        # batches received frames, runs inference, and populates the queue
        # with the results.
        prediction_output_queue = self.send_to_batch(frame)

        # Wait for predictions
        predictions = prediction_output_queue.get()

        # Iterate through all the predictions received for this frame
        detection_nodes = []
        for prediction in predictions:
            # Filter out detections that are not faces.
            if prediction.name != "face":
                continue

            # Filter out detections with low confidence.
            if prediction.confidence < options["threshold"]:
                continue

            # Create a DetectionNode for the prediction. It will be reused by
            # any other capsules that require a face DetectionNode in their
            # input type. An age classifier capsule would be an example of
            # such a capsule.
            new_detection = DetectionNode(
                name=prediction.name,
                # Convert [x1, y1, x2, y2] to [[x1, y1], [x2, y1], ...]
                coords=rect_to_coords(prediction.rect),
                extra_data={"detection_confidence": prediction.confidence}
            )
            detection_nodes.append(new_detection)

        return detection_nodes
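The rect_to_coords helper used above turns a [x1, y1, x2, y2] rectangle into the four-corner coordinate list that DetectionNode expects. A rough pure-Python sketch of what it does (the real implementation lives in vcap):

```python
def rect_to_coords(rect):
    """Convert a [x1, y1, x2, y2] rectangle into four corner points,
    ordered clockwise starting from the top-left corner."""
    x1, y1, x2, y2 = rect
    return [[x1, y1], [x2, y1], [x2, y2], [x1, y2]]

print(rect_to_coords([10, 10, 100, 100]))
# [[10, 10], [100, 10], [100, 100], [10, 100]]
```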
When you restart BrainFrame, your capsule will be packaged into a .cap file and initialized. You'll see its information in the BrainFrame client.
Once you load a stream, you will be able to see the inference results.
Introduction¶
This tutorial will guide you through encapsulating an OpenVINO object detector
model. For this tutorial, we will be using the person-vehicle-bike-detection-crossroad-1016 model from the Open Model Zoo, but the concepts shown here will work for all OpenVINO object detectors. You can find the complete capsule on the Capsule Zoo.
See the previous tutorial for information on
setting up a development environment.
Getting Started¶
We will start by creating a directory where all our capsule code and model
files will reside. By convention, capsule names start with a small description
of the role the capsule plays, followed by the kinds of objects they operate
on, and finally some kind of differentiating information about the capsule's
intended use or implementation. We will name this capsule detector_person_vehicle_bike_openvino and create a directory with that name.
Then, we will add a meta.conf file, which will let the application loading the capsule know what version of the OpenVisionCapsules API this capsule requires. OpenVINO support was significantly improved in version 0.2, so we will require a version no older than that; here we use 0.3:
[about]
api_compatibility_version = 0.3
We will also add the weights and model files into this directory so they can
be loaded by the capsule. After these steps, your data directory should
look like this:
your_data_directory
├── volumes
└── capsules
└── detector_person_vehicle_bike_openvino
├── person-vehicle-bike-detection-crossroad-1016-fp32.bin
├── person-vehicle-bike-detection-crossroad-1016-fp32.xml
└── meta.conf
The Capsule Class¶
Next, we will define the Capsule class. This class provides the application
with information about your capsule. The class must be named Capsule
and
the file it is defined in must be named capsule.py
. We will create that
file in the capsule directory with the following contents:
from vcap import (
    BaseCapsule,
    NodeDescription,
    DeviceMapper,
    common_detector_options
)
from .backend import Backend


class Capsule(BaseCapsule):
    name = "detector_person_vehicle_bike_openvino"
    description = ("OpenVINO person, vehicle, and bike detector. Optimized "
                   "for surveillance camera scenarios.")
    version = 1
    device_mapper = DeviceMapper.map_to_openvino_devices()
    input_type = NodeDescription(size=NodeDescription.Size.NONE)
    output_type = NodeDescription(
        size=NodeDescription.Size.ALL,
        detections=["vehicle", "person", "bike"])
    backend_loader = lambda capsule_files, device: Backend(
        model_xml=capsule_files[
            "person-vehicle-bike-detection-crossroad-1016-fp32.xml"],
        weights_bin=capsule_files[
            "person-vehicle-bike-detection-crossroad-1016-fp32.bin"],
        device_name=device
    )
    options = common_detector_options
In this file, we have defined a Capsule class that subclasses from BaseCapsule and defines some fields. The name field reflects the name of the capsule directory and the description field is a short, human-readable description of the capsule's purpose. The other fields are a bit more complex, so let's break each one down.
version = 1
This is the capsule's version (not to be confused with the version of the OpenVisionCapsules API defined in the meta.conf). Since this is the first version of our capsule, we'll start it at 1. The version field can be used as a
way to distinguish between different revisions of the same capsule. This field
has no semantic meaning to BrainFrame and can be incremented as the capsule
developer sees fit. Some developers may choose to increment it with every
iteration; others only when significant changes have occurred.
device_mapper = DeviceMapper.map_to_openvino_devices()
This device mapper will map our backends to any available OpenVINO-compatible
devices, like the Intel Neural Compute Stick 2 or the CPU.
input_type = NodeDescription(size=NodeDescription.Size.NONE)
This detector capsule requires no output from any other capsules in order to
run. All it needs is the video frame.
output_type = NodeDescription(
    size=NodeDescription.Size.ALL,
    detections=["vehicle", "person", "bike"])
This detector provides "vehicle", "person", and "bike" detections as output
and is expected to detect all vehicles, people, and bikes in the video frame.
backend_loader = lambda capsule_files, device: Backend(
    model_xml=capsule_files[
        "person-vehicle-bike-detection-crossroad-1016-fp32.xml"],
    weights_bin=capsule_files[
        "person-vehicle-bike-detection-crossroad-1016-fp32.bin"],
    device_name=device
)
Here we define a lambda function that creates an instance of a Backend class
with the model and weights files, as well as the device this backend will run
on. We will define this Backend class in the next section.
options = common_detector_options
We give this capsule some basic options that are common among most detector
capsules.
With this new capsule.py file added, your capsule directory should look like this:
your_data_directory
├── volumes
└── capsules
└── detector_person_vehicle_bike_openvino
├── capsule.py
├── person-vehicle-bike-detection-crossroad-1016-fp32.bin
├── person-vehicle-bike-detection-crossroad-1016-fp32.xml
└── meta.conf
The Backend Class¶
Finally, we will define the Backend class. This class defines how the capsule runs analysis on video frames. An instance of this class will be created for every device the capsule runs on. The Backend class doesn't have to be defined in any specific location, but we will add it to a new file called backend.py with the following contents:
from typing import Dict

import numpy as np
from vcap import (
    DETECTION_NODE_TYPE,
    OPTION_TYPE,
    BaseStreamState)
from vcap_utils import BaseOpenVINOBackend


class Backend(BaseOpenVINOBackend):
    label_map: Dict[int, str] = {1: "vehicle", 2: "person", 3: "bike"}

    def process_frame(self, frame: np.ndarray,
                      detection_node: DETECTION_NODE_TYPE,
                      options: Dict[str, OPTION_TYPE],
                      state: BaseStreamState) -> DETECTION_NODE_TYPE:
        input_dict, resize = self.prepare_inputs(frame)
        prediction = self.send_to_batch(input_dict).result()
        detections = self.parse_detection_results(
            prediction, resize, self.label_map,
            min_confidence=options["threshold"])
        return detections
Our Backend class subclasses from BaseOpenVINOBackend. This backend handles loading the model into memory from the given files, implements batching, and provides utility methods that make writing OpenVINO backends easy. All we need to do is define the process_frame method. Let's take a look at each call in the method body.
input_dict, resize = self.prepare_inputs(frame)
This line prepares the given video frame to be fed into the model. The video
frame is resized to fit in the model and formatted in the way the model
expects. Also provided is a resize object, which contains the necessary
information to map the resulting detections to the coordinate system of the
originally sized video frame.
This method assumes that your OpenVINO model expects images in the format
(num_channels, height, width) and expects the frame to be in a dict with the key
being the network's input name. Ensure that your model follows this convention
before using this method.
prediction = self.send_to_batch(input_dict).result()
Next, the input data is sent into the model for batch processing. The call to result() causes the backend to block until the result is ready. The result is an object with raw OpenVINO prediction information.
detections = self.parse_detection_results(
    prediction, resize, self.label_map,
    min_confidence=options["threshold"])
return detections
Finally, the results go through post-processing. Detections with a low
confidence are filtered out, raw class IDs are converted to human-readable
class names, and the results are scaled up to fit the size of the original
video frame.
Wrapping Up¶
With the meta.conf, Capsule class, Backend class, and model files, the capsule
is now complete! Your data directory should look something like this:
your_data_directory
├── volumes
└── capsules
└── detector_person_vehicle_bike_openvino
├── backend.py
├── capsule.py
├── person-vehicle-bike-detection-crossroad-1016-fp32.bin
├── person-vehicle-bike-detection-crossroad-1016-fp32.xml
└── meta.conf
When you restart BrainFrame, your capsule will be packaged into a .cap file and initialized. You'll see its information in the BrainFrame client.
Load up a video stream to see detection results.
Introduction¶
This tutorial will guide you through encapsulating an OpenVINO classifier model.
For this tutorial, we will be using the
vehicle-attributes-recognition-barrier-0039 model from the
Open Model Zoo, but the concepts shown here apply to all OpenVINO classifiers.
You can find the complete capsule on the Capsule Zoo. This model
is able to classify the color of a detected vehicle.
This capsule will rely on the detector created in the
previous tutorial to
find vehicles in the video frame before they can be classified.
Getting Started¶
Like in the previous tutorial, we will create a new directory for the classifier capsule. This time we will name it classifier_vehicle_color_openvino. We will also add a meta.conf with the same contents, declaring the OpenVisionCapsules API version our capsule requires:
[about]
api_compatibility_version = 0.3
We will also add the weights and model files to this directory so that they can
be loaded by the capsule.
The Capsule Class¶
The Capsule class defined here will be very similar in structure to the one in
the detector capsule.
from vcap import BaseCapsule, NodeDescription, DeviceMapper

from .backend import Backend
from . import config


class Capsule(BaseCapsule):
    name = "classifier_vehicle_color_openvino"
    description = "OpenVINO vehicle color classifier."
    version = 1
    device_mapper = DeviceMapper.map_to_openvino_devices()
    input_type = NodeDescription(
        size=NodeDescription.Size.SINGLE,
        detections=["vehicle"])
    output_type = NodeDescription(
        size=NodeDescription.Size.SINGLE,
        detections=["vehicle"],
        attributes={"color": config.colors})
    backend_loader = lambda capsule_files, device: Backend(
        model_xml=capsule_files[
            "vehicle-attributes-recognition-barrier-0039.xml"],
        weights_bin=capsule_files[
            "vehicle-attributes-recognition-barrier-0039.bin"],
        device_name=device
    )
Let's take a look at some of the differences between this Capsule class and the
detector's.
input_type = NodeDescription(
    size=NodeDescription.Size.SINGLE,
    detections=["vehicle"])
This capsule takes vehicle detections produced by the detector capsule as input.
Each vehicle found in the video frame is processed one at a time.
output_type = NodeDescription(
    size=NodeDescription.Size.SINGLE,
    detections=["vehicle"],
    attributes={"color": config.colors})
This capsule provides a vehicle detection with a color attribute as output. Note that classifier capsules do not create new detections. Instead, they augment the detections provided to them by other capsules. We've moved the list of colors out into a separate config.py file so that it can also be referenced by the backend, which we will define in the next section.
# config.py
colors = ["white", "gray", "yellow", "red", "green", "blue", "black"]
You may have noticed that this capsule does not have any options. The options
field can be omitted when the capsule doesn't have any parameters that can be
modified at runtime.
The Backend Class¶
We will once again create a file called backend.py where the Backend class will be defined. It will still subclass BaseOpenVINOBackend and we will only need to implement the process_frame method.
from typing import Dict

import numpy as np
from vcap import (
    Resize,
    DETECTION_NODE_TYPE,
    OPTION_TYPE,
    BaseStreamState)
from vcap_utils import BaseOpenVINOBackend

from . import config


class Backend(BaseOpenVINOBackend):
    def process_frame(self, frame: np.ndarray,
                      detection_node: DETECTION_NODE_TYPE,
                      options: Dict[str, OPTION_TYPE],
                      state: BaseStreamState) -> DETECTION_NODE_TYPE:
        crop = Resize(frame).crop_bbox(detection_node.bbox).frame
        input_dict, _ = self.prepare_inputs(crop)
        prediction = self.send_to_batch(input_dict).result()
        max_color = config.colors[prediction["color"].argmax()]
        detection_node.attributes["color"] = max_color
Let's review this method line-by-line.
crop = Resize(frame).crop_bbox(detection_node.bbox).frame
Capsules always receive the entire video frame, so we need to start by cropping
the frame to the detected vehicle.
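Conceptually, the crop keeps only the pixels inside the detection's bounding box. Below is a simplified sketch of that step using nested lists in place of a numpy frame; the function name crop_to_coords is ours, and the real Resize(frame).crop_bbox(...) also tracks resize state:

```python
def crop_to_coords(frame, coords):
    """Crop a frame (a list of pixel rows) to the axis-aligned box
    spanned by a four-corner coordinate list."""
    xs = [x for x, _ in coords]
    ys = [y for _, y in coords]
    x1, x2 = min(xs), max(xs)
    y1, y2 = min(ys), max(ys)
    return [row[x1:x2] for row in frame[y1:y2]]

# A 6x4 "frame" where each pixel records its own (x, y) position
frame = [[(x, y) for x in range(6)] for y in range(4)]
crop = crop_to_coords(frame, [[1, 1], [4, 1], [4, 3], [1, 3]])
print(len(crop), len(crop[0]))  # 2 3
```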
input_dict, _ = self.prepare_inputs(crop)
We then prepare the cropped video frame to be fed into the model. The video
frame is resized to fit into the model and formatted in the way the model
expects. We can ignore the second return value, the resize object, because
classifiers don't provide any coordinates that need adjusting.
prediction = self.send_to_batch(input_dict).result()
Next, the input data is sent into the model for batch processing. The call to result() causes the backend to block until the result is ready. The result is an object with raw OpenVINO prediction information.
max_color = config.colors[prediction["color"].argmax()]
We then pull the color information from the prediction and choose the color with the highest confidence, converting it from its integer representation to a human-readable string using the colors list defined in config.py.
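Without numpy, the argmax-and-lookup step amounts to picking the index of the highest confidence and indexing into the colors list (illustrative sketch only):

```python
colors = ["white", "gray", "yellow", "red", "green", "blue", "black"]

def top_color(confidences):
    """Return the color whose confidence score is highest."""
    best_index = max(range(len(confidences)), key=lambda i: confidences[i])
    return colors[best_index]

print(top_color([0.01, 0.02, 0.05, 0.80, 0.04, 0.05, 0.03]))  # red
```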
detection_node.attributes["color"] = max_color
Finally, we augment the vehicle detection with the new "color" attribute. This
capsule does not need to return anything because no new detections have been
created.
Wrapping Up¶
Finally, the capsule is complete! Your data directory should look something like
this:
your_data_directory
├── volumes
└── capsules
└── classifier_vehicle_color_openvino
├── backend.py
├── capsule.py
├── config.py
├── meta.conf
├── vehicle-attributes-recognition-barrier-0039.bin
└── vehicle-attributes-recognition-barrier-0039.xml
When you restart BrainFrame, your capsule will be packaged into a .cap file and initialized. You'll see its information in the BrainFrame client.
Load up a video stream to see classification results.
Ended: Capsules
Ended: Tutorials
Advanced Usage
Introduction¶
The BrainFrame server uses a docker-compose.yml file to configure many aspects of its runtime behavior. Some options may be changed by setting environment variables in a .env file, placed in the same directory as the docker-compose.yml file.
Any options not exposed here may be overridden by creating a docker-compose.override.yml file in the same directory. Configuration written there will be applied over the original docker-compose.yml.
Port Configuration¶
BrainFrame makes four ports available to the host environment by default: the API and documentation on port 80, the Postgres database on port 5432, the StreamGateway server on port 8004, and RabbitMQ on port 5672. If these ports conflict with other software running on the host machine, they can be changed by setting the SERVER_PORT, DATABASE_PORT, STREAM_GATEWAY_PORT, and RABBITMQ_PORT variables in the .env file.
BrainFrame may also proxy video streams to ports in the range
10000-20000. At this time, there is no way to reconfigure these ports.
SERVER_PORT=80
DATABASE_PORT=5432
STREAM_GATEWAY_PORT=8004
RABBITMQ_PORT=5672
Authorization Configuration¶
By default, BrainFrame does not authorize clients and all clients have admin permissions. If your server is being deployed in a network where access control is desirable, authorization can be turned on using the AUTHORIZE_CLIENTS variable in the .env file.
AUTHORIZE_CLIENTS=true
Warning
The admin user is given a default password of "admin". This should be
changed to a secure and unique password for public deployments.
Currently, the admin user's password may only be changed through the REST API.
The following is an example curl command for doing this. Replace [hostname] with the hostname of the BrainFrame server and [new password] with the desired password.
curl 'http://[hostname]/api/users' \
    --user 'admin:admin' \
    --request POST \
    --header 'Content-Type: application/json' \
    --data '{
        "id": 1,
        "username": "admin",
        "password": "[new password]",
        "role": "admin"
    }'
User Configuration¶
BrainFrame is designed to run using the account of the current non-root user. By default, 1000 is used for both the user ID and group ID, which matches the default on most Linux systems. These IDs may be adjusted using the UID and GID variables in the .env file.
UID=1001
GID=1001
To check the IDs of the currently logged-in user, run id -u for the UID and id -g for the GID.
Journal Pruning¶
BrainFrame records analytics results to a Postgres database. Over a long
period of time, this can result in a lot of data. To avoid unbounded
storage use, BrainFrame prunes journal entries over time and deletes
journal entries that are past a certain age.
Journal pruning behavior is controlled by the "pruning age" and "pruning
fraction" variables. The pruning age controls how old a journal entry
must be before it becomes a candidate for pruning. This value also
controls at what interval pruning is run. The pruning fraction variable
controls what portion of journal entries are pruned each time pruning is
run. The pruning fraction variable is a value between 0 and 1, where 0
results in no pruning, and 1 results in the deletion of all journaling
information past the pruning age. These variables may be configured by
setting the PRUNING_AGE (specified as a duration) and PRUNING_FRACTION variables in the .env file.
# Start pruning journal entries after 1 hour, and run pruning every hour
PRUNING_AGE=0d1h0m
# Prune 5% of all journal entries that are past the pruning age every run
PRUNING_FRACTION=0.05
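To get a feel for these settings, a quick back-of-the-envelope calculation: if pruning runs hourly and removes 5% of eligible entries each run, the share of eligible entries remaining after k runs is (1 - f)**k (plain Python, no BrainFrame dependencies):

```python
def surviving_fraction(pruning_fraction, runs):
    """Fraction of pruning-eligible journal entries left after `runs` prunes."""
    return (1 - pruning_fraction) ** runs

# With PRUNING_FRACTION=0.05 and hourly pruning, roughly 29% of
# eligible entries are still present after one day (24 runs).
print(round(surviving_fraction(0.05, 24), 2))  # 0.29
```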
All journaling information is deleted after it reaches the journal max
age. This value may be configured by setting the JOURNAL_MAX_AGE variable (specified as a duration) in the .env file.
# Keep journaling information for 60 days
JOURNAL_MAX_AGE=60d0h0m
Duration Format¶
Settings that specify a duration are in the format XdYhZm
, where X is
the number of days, Y is the number of hours, and Z is the number of
minutes.
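A duration like 0d1h0m maps naturally onto a timedelta. This short parser mirrors the XdYhZm format described above, though BrainFrame's own parser may differ:

```python
import re
from datetime import timedelta

def parse_duration(value):
    """Parse an XdYhZm duration string (e.g. "60d0h0m") into a timedelta."""
    match = re.fullmatch(r"(\d+)d(\d+)h(\d+)m", value)
    if match is None:
        raise ValueError(f"not a valid duration: {value!r}")
    days, hours, minutes = (int(group) for group in match.groups())
    return timedelta(days=days, hours=hours, minutes=minutes)

print(parse_duration("0d1h0m"))  # 1:00:00
```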
AI Accelerator Configuration¶
At the moment, the only AI accelerator control available is for OpenVINO devices. It is possible to change the whitelisted devices and their priority with the OPENVINO_DEVICE_PRIORITY variable.
# Block any device except for CPU
OPENVINO_DEVICE_PRIORITY=CPU
# Load onto both CPU and HDDL, giving priority to CPU
OPENVINO_DEVICE_PRIORITY=CPU,HDDL
# Load onto both CPU and HDDL, giving priority to HDDL
OPENVINO_DEVICE_PRIORITY=HDDL,CPU
Before making a BrainFrame server publicly accessible, some additional
configuration is required.
Authorization¶
By default, BrainFrame does not authorize clients. Authorization should always
be turned on for public deployments to prevent unauthorized access. See
this section on authorization configuration for more information.
Warning
Be sure that the admin user's default password has been changed before
continuing.
Port Forwarding¶
BrainFrame requires that certain ports are forwarded so that the client and
other external programs may establish connections to it. Below is a table of
ports BrainFrame uses and their purpose. For ways to reconfigure these ports,
see this section on port configuration.
Port | Purpose
---|---
80 | BrainFrame API, dashboard, documentation
8004 | StreamGateway server communication
5533 | RTSP streams for video files
10000-10100 | StreamGateway server video streams
Warning
BrainFrame also makes a Postgres server available on port 5432, but that port
should not be forwarded for security reasons.
IP Camera streams can be configured with their own custom GStreamer pipelines,
allowing for rich configuration of how the stream is processed. This section
will not explain the intricacies of GStreamer pipelines as the official website
provides excellent documentation on how these work.
Instead, included are a few pipeline examples.
BrainFrame does quite a bit of work in the background to ensure that many
different IP camera types are supported seamlessly. When using custom
pipelines, more intimate knowledge of the IP camera stream is required compared
to using BrainFrame normally.
Note that all custom pipelines:
- Must include a {url} template field. This is where the specified IP camera URL will be inserted into the pipeline.
- Must have an appsink element named "main_sink". This is where frames will be extracted from the pipeline for processing.
- May optionally include an element named "buffer_src". This is required for frame skipping to work with custom pipelines. This name should be given to an element in the pipeline that sections off frame data from the network before decoding, like rtph264depay.
To specify a custom pipeline, check the "Advanced Options" checkbox in the
stream creation window and enter your pipeline into the "Pipeline" textbox.
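Since {url} is a template field, the stream's URL is substituted into the pipeline string before the pipeline is launched. The substitution behaves like Python's str.format (a sketch, not BrainFrame's actual code):

```python
pipeline_template = (
    'rtspsrc location="{url}" ! rtph264depay name="buffer_src" ! decodebin '
    '! videoconvert ! video/x-raw,format=(string)BGR ! appsink name="main_sink"'
)

# BrainFrame fills in the stream's URL; str.format does the same thing here.
pipeline = pipeline_template.format(url="rtsp://192.168.1.10/stream")
print('location="rtsp://192.168.1.10/stream"' in pipeline)  # True
```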
Example Pipelines¶
Cropping the Video Stream¶
For composite video streams or for scenes that contain uninteresting sections,
one may want to crop the video stream before processing. Here is an example of
a custom pipeline to accomplish this for an H264 RTSP stream:
rtspsrc location="{url}" ! rtph264depay name="buffer_src" ! decodebin ! videocrop top=x left=x right=x bottom=x ! videoconvert ! video/x-raw,format=(string)BGR ! appsink name="main_sink"
This pipeline uses the videocrop element to crop the video by some configurable
value. The "x" values should be replaced with the amount in pixels to crop from
each side of the frame.
Lower Latency Streaming¶
By default, BrainFrame will "buffer" frames in order to ensure a more stable streaming experience. To reduce this buffering, try using the pipeline below:
rtspsrc location="{url}" latency=X ! rtph264depay name="buffer_src" ! decodebin ! videoconvert ! video/x-raw,format=(string)BGR ! appsink name="main_sink"
Replace the "X" in latency=X with the desired latency in milliseconds; use 0 for no buffering at all.
Rotating the Video Stream¶
rtspsrc location="{url}" ! rtph264depay ! avdec_h264 ! videoconvert ! videoflip video-direction=x ! videoconvert ! video/x-raw,format=(string)BGR ! appsink name="main_sink"
The "x" value selects the rotation; the videoflip element's video-direction property accepts values such as 90r, 180, and 90l.
Hardware Decoding with Multiple Nvidia GPUs¶
BrainFrame automatically detects when an Nvidia GPU is available and attempts
to do hardware video decoding on it. Currently, video decoding is only done on
the first available device. This means that if your machine has multiple
Nvidia GPUs installed, only one of them will be utilized.
GStreamer dynamically creates decoder elements that allow you to choose which
Nvidia GPU the work will be done on. Using H.264 as our example format:
- nvh264dec uses device 0
- nvh264device1dec uses device 1
- nvh264device2dec uses device 2
- ... and so on
By using different decoder elements for each stream's custom pipeline, you can distribute decoding work across multiple GPUs. For example, if you had fifteen video streams and three GPUs, you might have the first five use nvh264dec, the next five use nvh264device1dec, and the final five use nvh264device2dec.
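A round-robin assignment works just as well as assigning blocks of streams, and is easy to script. The sketch below is illustrative; only the element-name pattern comes from the list above:

```python
def h264_decoder_for(stream_index: int, num_gpus: int) -> str:
    """Pick an NVDEC H.264 decoder element name, round-robin across GPUs."""
    device = stream_index % num_gpus
    # Device 0 uses the plain element name; other devices embed their number.
    return "nvh264dec" if device == 0 else f"nvh264device{device}dec"

# With 3 GPUs, consecutive streams alternate across devices 0, 1, 2, 0, ...
assignments = [h264_decoder_for(i, 3) for i in range(6)]
```

Each stream's custom pipeline would then substitute its assigned element in place of the decoder.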
The device IDs referenced here are CUDA device IDs. By default, CUDA orders
devices from fastest to slowest, device 0 being the fastest. It is possible to
change the way CUDA orders devices via an environment variable. See
the official documentation for details.
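For illustration, device ordering can be switched to PCI bus order by setting the CUDA_DEVICE_ORDER environment variable before the process starts. The variable name and values come from NVIDIA's CUDA documentation, not from BrainFrame itself; shown here from Python:

```python
import os

# Must be set before any CUDA context is created in the process.
# PCI_BUS_ID orders devices by bus position; the CUDA default is FASTEST_FIRST.
os.environ["CUDA_DEVICE_ORDER"] = "PCI_BUS_ID"
```

When running BrainFrame in Docker, the same variable would be passed to the container's environment instead.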
Here is an example pipeline that uses device 1 to decode an H.264 RTSP stream:
rtspsrc location="{url}" ! rtph264depay name="buffer_src" ! h264parse ! nvh264device1dec ! glcolorconvert ! video/x-raw(memory:GLMemory),format=(string)BGR ! gldownload ! video/x-raw,format=(string)BGR ! appsink name="main_sink"
Warning
Current releases of the BrainFrame Client do not support using NVCODEC
hardware decoding. Using these custom pipelines will cause streaming errors
in the client as a result. We only recommend these pipelines for advanced
use cases.
Frame Skipping¶
Frame skipping is a streaming mode available for IP cameras. Using frame skipping significantly increases the number of streams a single BrainFrame instance can handle, at the cost of framerate.
Frame skipping allows BrainFrame to decode significantly fewer frames, cutting down on decoding overhead. Currently, the resulting framerate depends on the keyframe interval of the video stream, which can often be configured in the settings of an IP camera or NVR.
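As a rough back-of-the-envelope estimate, if only keyframes are decoded, the effective framerate is about one frame per keyframe interval. This is a simplification for planning purposes; actual behavior depends on the camera and encoder:

```python
def approx_skipped_fps(source_fps: float, keyframe_interval_frames: int) -> float:
    """Rough estimate: with frame skipping, only keyframes are decoded."""
    return source_fps / keyframe_interval_frames

# A 30 fps stream with a keyframe every 30 frames yields roughly 1 fps.
estimate = approx_skipped_fps(30, 30)
```

Shortening the keyframe interval on the camera raises the processed framerate at the cost of some decoding overhead.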
Frame skipping can be found under "Advanced Options" when creating an IP camera
stream.
If you are specifying a custom pipeline, frame skipping will only work if an
element in the pipeline is named "buffer_src". See the
page on custom pipelines for details.
Offline Deployment¶
Some customers may prefer to deploy BrainFrame on a machine that does not have internet access. This document describes how that may be accomplished, assuming that a separate machine with internet access is available.
Save Docker Images¶
Start by deploying BrainFrame on a separate development machine using the
instructions found on the Getting Started page.
When BrainFrame is running, open another terminal and list the containers running for the deployment:
docker ps
The list of containers should look like this:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
c462fa89dc72 aotuai/brainframe_core:0.25.2 "./brainframe_server…" 4 hours ago Exited (0) 3 hours ago release_api_1
54b731bc2a04 aotuai/brainframe_http_proxy:0.25.2 "nginx -g 'daemon of…" 4 hours ago Exited (0) 3 hours ago release_proxy_1
d6ad0f9e0675 aotuai/brainframe_docs:0.25.2 "nginx -g 'daemon of…" 4 hours ago Exited (0) 3 hours ago release_docs_1
4894246049a0 postgres:9.6.17-alpine "/entrypoint.sh mysq…" 4 hours ago Exited (0) 3 hours ago release_database_1
ac564e32f7eb aotuai/brainframe_dashboard:0.25.2 "/run.sh" 4 hours ago Exited (0) 3 hours ago release_dashboard_1
The containers above are the ones we need to save. Ignore any other containers, in case you have containers of your own running at the same time.
The next step is to save those images with the docker save command:
docker save IMAGE [IMAGE...] -o OUTPUT
For example, in this case, you should run:
docker save \
aotuai/brainframe_core:0.25.2 \
aotuai/brainframe_http_proxy:0.25.2 \
aotuai/brainframe_docs:0.25.2 \
postgres:9.6.17-alpine \
aotuai/brainframe_dashboard:0.25.2 \
-o brainframe
All the images we need are now saved in a file named brainframe in the current directory.
Load Docker Images¶
Once you have the packaged Docker image file, copy it to the offline machine and load it:
docker load -i brainframe
Python API¶
Introduction¶
The BrainFrame Python API is a wrapper around the REST API to make it easier
for Python applications to integrate with BrainFrame. The Python API is
completely open source and available on Github. Reference
documentation and examples for the Python API can be found
on ReadTheDocs.
Applications not written in Python can interact with BrainFrame directly through
the REST API.
Installation¶
The BrainFrame Python API is available on PyPI for Python 3.6 and up.
pip3 install brainframe-api
We recommend installing the Python API in a virtualenv to
avoid interference with other projects on the same system.
Introduction¶
If you are located in mainland China, you might have a hard time pulling Docker images. You can speed things up by using the Docker registry mirror hosted by USTC.
Configure Docker Daemon¶
You can configure the Docker daemon using a JSON file, usually located at /etc/docker/daemon.json; if it doesn't exist, create it. Then, add "https://docker.mirrors.ustc.edu.cn/" to the registry-mirrors array to pull from the USTC registry mirror by default.
After editing, your /etc/docker/daemon.json
should look like this:
{
"registry-mirrors": ["https://docker.mirrors.ustc.edu.cn/"]
}
If this is not the first time you are editing daemon.json, there may be other configuration already present. In that case, add the registry-mirrors key to the existing JSON object rather than replacing it.
Then restart dockerd:
sudo systemctl restart docker
Verify Default Registry Mirror¶
You can verify your changes by:
docker info
If you see the following lines, you have configured your Docker daemon
successfully.
Registry Mirrors:
https://docker.mirrors.ustc.edu.cn/
Introduction¶
When validating BrainFrame's performance for a given use-case, you may want to
use video files to simulate connecting to many IP camera streams. The built-in
video file support in BrainFrame is good for some kinds of testing, but is not
recommended for performance testing because it has significant overhead when
compared to IP cameras.
For use cases like this, BrainFrame ships with a small RTSP server utility. This utility achieves significantly lower overhead by transcoding video files in advance. However, since the main focus of this exercise is performance testing, we highly recommend running the RTSP server on a separate machine from the one running the BrainFrame server.
Using the RTSP Server Utility¶
Start by creating a directory containing the video files you would like to test
with. Ensure that no other types of files are present in the directory. Then,
run the following command:
docker run \
--network host \
--volume {video file path}:/video_files \
aotuai/brainframe_core:0.29.2 brainframe/tools/rtsp_server/main
Replace {video file path} with a fully qualified path to the video file directory you've created.
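Since the utility expects only video files in the directory, a quick pre-check can catch stray files before startup. This is a minimal sketch; the extension list is an assumption, so adjust it to the formats you actually use:

```python
# Flag names that don't look like video files before starting the RTSP server.
# The extension set here is an assumption; extend it for your own files.
VIDEO_EXTS = {".mp4", ".mkv", ".avi", ".mov", ".ts"}

def non_video_names(names):
    return [n for n in names
            if "." not in n or n[n.rfind("."):].lower() not in VIDEO_EXTS]

stray = non_video_names(["traffic.mp4", "test_store.mkv", "notes.txt"])
```

Any names the check returns should be moved out of the directory before running the docker command above.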
The RTSP server will start by transcoding all video files in the directory to a known good format. This prevents video format incompatibilities and can significantly improve streaming performance for the RTSP server. Once transcoding is complete, new .mkv files will be created in the video file directory. Transcoding will not run again unless new video files are introduced or the .mkv files are deleted.
When all video files have been transcoded, the RTSP server will start. An RTSP
URL will be printed for each video file being streamed.
INFO:root:Video traffic_back_to_front.mkv available at rtsp://0.0.0.0:8554/traffic_back_to_front
INFO:root:Video test_store.mkv available at rtsp://0.0.0.0:8554/test_store
INFO:root:Video two_cool_guys.mkv available at rtsp://0.0.0.0:8554/two_cool_guys
Be sure to replace 0.0.0.0 with the local IP address of the machine that's running the RTSP server.
Connecting to Many Streams¶
When doing performance testing at the level of tens to hundreds of streams, it
can become burdensome to manage that many video files. Instead, it may be
easier to connect BrainFrame to the same video stream multiple times.
BrainFrame does not allow you to connect to the exact same RTSP URL multiple
times, as doing so during standard operation is wasteful. However, you can work
around this limitation by adding dummy query parameters to the end of the RTSP
URL.
rtsp://0.0.0.0:8554/test_store?dummy=1
rtsp://0.0.0.0:8554/test_store?dummy=2
rtsp://0.0.0.0:8554/test_store?dummy=3
rtsp://0.0.0.0:8554/test_store?dummy=4
...
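These URLs are easy to generate programmatically; the host and path below are placeholders:

```python
# Generate N distinct RTSP URLs for the same stream by varying a dummy
# query parameter. The host and path are placeholders, not real settings.
base_url = "rtsp://192.168.1.50:8554/test_store"
urls = [f"{base_url}?dummy={i}" for i in range(1, 5)]
```

Each generated URL can then be added to BrainFrame as a separate IP camera stream.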
The RTSP server utility is optimized to support many concurrent connections to
the same stream. Make sure your local network has the necessary bandwidth to
facilitate the scale of testing you plan to complete.
Ended: Advanced Usage
Dashboard
Introduction¶
BrainFrame uses a powerful dashboarding tool called Grafana in order to allow
highly customizable realtime visualizations of the BrainFrame Database and API.
With a little SQL knowledge it is possible to quickly get analytics for any
specific problem you need solved.
The default login / password for the dashboard is admin / admin.
Dashboards¶
A Dashboard contains various Panels which visually display information about the BrainFrame database. BrainFrame automatically populates with dashboards, such as the Stream Uptime dashboard, which shows graphs of which cameras are being processed and when they have been connected or disconnected.
Creating a Graph¶
First, create a dashboard by clicking the + on the sidebar. Then, give your dashboard a name by clicking the gear icon in the top right.
Now, it's time to add a Panel. Click the graph icon on the top bar.
Let's start by adding a query. Click "Add Query", then click the pencil to
edit the query as SQL.
As an example, the following query will give a simple graph of the number of people who entered or exited the "Front Door" zone over time:
SELECT zone_status.tstamp as time,
total_count.count_enter AS entered,
total_count.count_exit AS exited FROM zone_status
LEFT JOIN total_count ON total_count.zone_status_id=zone_status.id
LEFT JOIN zone ON zone_status.zone_id=zone.id
WHERE
total_count.class_name='person' AND
zone.name='Front Door' AND
zone_status.id >= (SELECT id FROM zone_status WHERE tstamp >= $__unixEpochFrom() ORDER BY tstamp ASC LIMIT 1) AND
zone_status.id <= (SELECT id FROM zone_status WHERE tstamp < $__unixEpochTo() ORDER BY tstamp DESC LIMIT 1)
You might notice that the last two lines of the query are fairly complicated. These lines limit results to those between two timestamps, and do so in a very efficient way. The macros $__unixEpochFrom() and $__unixEpochTo() return the timestamps of the time range the dashboard user is currently requesting. Feel free to copy-paste the following filter into any slow query to limit results efficiently:
WHERE
zone_status.id >= (SELECT id FROM zone_status WHERE tstamp >= $__unixEpochFrom() ORDER BY tstamp ASC LIMIT 1) AND
zone_status.id <= (SELECT id FROM zone_status WHERE tstamp < $__unixEpochTo() ORDER BY tstamp DESC LIMIT 1) AND
"< ANY OTHER CONDITIONALS FOR THE QUERY >"
Introduction¶
This document is a guide for those interested in writing queries for
BrainFrame's SQL database. Included are examples of common queries and an
explanation of each table and their columns. This document is intended for
those with a basic understanding of SQL.
BrainFrame hosts Postgres in a container and makes it available to the host
machine through the default port, 5432.
Relationship Diagram¶
The following is a visual of how BrainFrame's various tables relate to each
other. This is a useful reference when writing queries that span multiple
tables. Click the image to enlarge it.
Example Queries¶
Below are some queries intended to be used as examples for common tasks. Fields
that must be filled in are wrapped in brackets.
Getting the number of detections right now in a zone for a class
This query finds the number of detections that are currently in a zone,
filtered by a class. If you want to know how many people are currently in the
"Couch Area" zone, for instance, this is the query to use.
SELECT COUNT(*) FROM detection
JOIN detection_zone_status ON detection.id = detection_zone_status.detection_id
WHERE detection_zone_status.zone_status_id=(SELECT id FROM zone_status
WHERE zone_status.zone_id=[your zone_id here]
ORDER BY zone_status.tstamp DESC LIMIT 1)
AND detection.class_name=[class_name];
Getting the traffic history of a zone
This query gets cumulative data on how many objects of the given class name
have entered and exited the zone. This could be used to build a graph of
traffic in the zone.
SELECT total_count.count_enter, total_count.count_exit FROM total_count
JOIN zone_status ON zone_status.id=total_count.zone_status_id
WHERE zone_status.zone_id=[your zone_id here]
AND total_count.class_name=[your class name here];
Getting the last zone that an identity was seen in
This query finds the last zone that an identity was found in.
SELECT * FROM zone
JOIN zone_status ON zone_status.zone_id = zone.id
JOIN detection_zone_status ON detection_zone_status.zone_status_id = zone_status.id
JOIN detection ON detection.id=detection_zone_status.detection_id
JOIN identity ON identity.id=detection.identity_id
WHERE identity.unique_name = [your unique name here]
ORDER BY zone_status.tstamp DESC
LIMIT 1;
Getting the number of times a zone alarm has been triggered
This query counts the total number of times a zone alarm has been triggered, given its alarm ID.
SELECT COUNT(*) FROM alert WHERE alert.zone_alarm_id = [your alarm ID here];
Get the number of people entering or exiting a specific zone, with timestamps
This will return the rows for the following columns: tstamp, count_enter,
count_exit
SELECT zone_status.tstamp, total_count.count_enter, total_count.count_exit
FROM total_count
LEFT JOIN zone_status ON zone_status.id = total_count.zone_status_id
WHERE zone_status.zone_id =
(SELECT zone.id FROM zone WHERE zone.name = 'YOUR_ZONE_NAME_HERE')
AND total_count.class_name = 'person'
ORDER BY zone_status.tstamp;
Get the total number of entering and exiting detections of a specific class for all time for a zone
SELECT total_count.count_enter, total_count.count_exit FROM total_count
JOIN zone_status ON zone_status.id=total_count.zone_status_id
WHERE zone_status.zone_id=[your zone id here]
AND total_count.class_name=[your class name here]
ORDER BY zone_status.tstamp DESC LIMIT 1;
Tables: For Analysis¶
zone_status¶
This is an important table for SQL queries. It holds a point in time for a specific stream. The tstamp and zone_id are the keys to finding specific detections in a certain place at a certain time.
Column
Description
id
A unique identifier.
zone_id
The ID of the zone that this status is for.
tstamp
The Unix timestamp of when this status was recorded.
detection¶
An object that has been detected in a video stream.
Column
Description
parent_id
A parent detection, if any. For instance, a face detection might have a parent that is a person detection.
class_name
The class name of the detection. It describes what the detection is, e.g. "person", "cat", or "dog".
identity_id
The identity that this detection is recognized as, if any. For example, if class_name is "face" and there is a face recognition capsule, and that capsule recognized the detection as someone known, it will be attached with an identity.
extra_data_json
A JSON object of the form {"key": VAL, "key2": "VAL"} where the values can be of any JSON-encodable type. It is intended to carry capsule-specific and/or customer-specific information without tying it too closely to the BrainFrame product.
coords_json
A JSON-encoded array of arrays specifying where in the frame the detection is. In the format: [[x1,y1], …]
track_id
A nullable UUID string. Detections that have the same track_id refer to the same object according to the tracking algorithm being used. This can be used to find the path of a single object throughout a video stream. If null, then the detection has not been successfully tracked.
identity¶
A table for storing a known specific person or object, that other tables can
link information about.
Column
Description
id
A unique identifier.
unique_name
Some uniquely identifying string of the object, like an employee number or an SSN.
nickname
A display name for the identity which may not be unique, like a person’s name.
metadata_json
Any additional user-defined information about the identity.
alert¶
An alert that tells the user an alarm's condition has been met.
Column
Description
id
A unique identifier.
zone_alarm_id
The alarm this alert came from.
start_time
The Unix timestamp of when this alarm started.
end_time
The Unix timestamp of when this alarm ended. May be null if the alert is still ongoing.
verified_as
If True, this alert was verified as legit. If False, the alert was a false alarm. If None, it hasn’t been verified yet.
total_count¶
The total number of a certain class of object that has entered or exited a zone
at some time. There are zero or more of these per ZoneStatus.
Column
Description
id
A unique identifier.
zone_status_id
The zone status that this total count is for.
class_name
The name of the class of object that we're keeping count of.
count_enter
The number of objects that have "entered" the zone.
count_exit
The number of objects that have "exited" the zone.
capsule¶
A capsule loaded through the REST API.
Column
Description
name
The unique name of the capsule
data_storage_id
The data storage row that holds the capsule data
source_path
Path to the capsule's source code on the developer machine, or null if no source is available
Tables: For Configuration Storage¶
premises¶
This defines a physical area with an internal local network of some sort.
This could be a Mall, an office building, a shop, etc. The idea of a Premises
is to keep track of which local network a camera or edge device might be running
in, in order to forward results through a gateway to a central cloud server.
Column
Description
id
A unique identifier.
name
The human-readable name of the premises this row refers to.
stream_configuration¶
This defines a video stream and how BrainFrame should connect to it.
Column
Description
id
A unique identifier.
premises_id
Nullable. If not null, it represents the premises for which this camera is streaming from.
name
The name of the video stream as it appears to the user on the UI.
connection_type
The type of connection being defined. This has to do with whether or not the video comes from a file, webcam, or IP camera.
connection_options_json
A JSON object that contains configuration information about how to connect to the stream.
runtime_options_json
A JSON object that contains configuration information which changes the runtime behavior of the stream.
metadata_json
A JSON object that contains any additional information the user may want associated with this stream.
global_capsule_configuration¶
This table is automatically created when BrainFrame loads a capsule that didn’t
exist before.
Column
Description
name
The (unique) name of the capsule that this configuration refers to.
option_values_json
A JSON object with the option values that this capsule exposes. Format: { "option_key": "option_value", "other_option": 0.75 }
is_active
The default value for this capsule (on or off). It is overridden by the stream_capsule_configuration if the value is not null.
stream_capsule_configuration¶
A row in this table is created when a specific stream has options modified for a capsule. The table is intended to ‘patch’ an existing global_capsule_configuration to modify the behavior of a capsule for a specific stream.
Column
Description
global_configuration_name
The global_capsule_configuration that this stream_capsule_configuration is patching
stream_id
The stream_configuration that this stream_capsule_configuration is modifying capsule options for.
option_values_patch_json
A JSON object that may be empty, or may override individual global capsule options with key: modified_value pairs. Either {} or { "option_key": "modified option value" }
is_active
Overrides the global_capsule_configuration is_active value if this value is not null. That means that, if is_active is True on the stream_capsule_configuration, then the global_capsule_configuration is ignored. If is_active is null on the stream_capsule_configuration, then the global_capsule_configuration is used.
attribute¶
An Attribute refers to a classification, and attributes are used to describe detections. For example, there may be a category of classification such as "gender". A particular detection might have an attribute with category "gender" and value "male".
Column
Description
category
The category of attribute. ("gender", "car_type", etc). This attribute is a key.
value
The value of the attribute. ("male", "prius", etc). This attribute is a key.
zone¶
A space in a video stream to look for activity in.
Column
Description
id
A unique identifier
name
The name of the zone as it appears to the user.
stream_id
The ID of the stream that this zone is for.
coords_json
Two or more 2D coordinates defining the shape of the zone in the stream. Defined as a two-dimensional JSON array, or "null" if the zone applies to the entire frame.
zone_alarm¶
Defines a set of conditions that, if they take place in a zone, should trigger
an alarm to the user.
Column
Description
id
A unique identifier
name
The name of the alarm as it appears to the user.
use_active_time
If true, then alarms only happen between start_time and end_time. If false, then they can happen at any time.
active_start_time
The time to start monitoring the stream at every day. Only used if use_active_time is true. Stored in the format "HH:MM:SS".
active_end_time
The time to stop monitoring the stream at every day. Only used if use_active_time is true. Stored in the format "HH:MM:SS".
zone_id
The zone that this alarm is assigned to watch.
zone_alarm_count_condition¶
A condition that must be met for an alarm to go off. Compares how many of some
object is in a zone against a test value.
Column
Description
id
A unique identifier
zone_alarm_id
The zone alarm that this condition applies to.
test
The test condition, either ">", "<", "=", "!=".
check_value
The value to apply the test condition to.
with_class_name
The name of the class to count in the zone.
attribute_id
An optional attribute that the object must have to be counted. (nullable)
window_duration
The size of the sliding window used for this condition. A larger sliding window size may reduce false positives but increase latency.
window_threshold
A value between 0.0 and 1.0 that controls what portion of the sliding window results must evaluate to true for the alarm to trigger.
intersection_point
The point on the detection to use when calculating if the detection is in the zone. Either "bottom", "top", "left", "right", or "center".
zone_alarm_rate_condition¶
A condition that must be met for an alarm to go off. Compares the rate of
change in the count of some object against a test value.
Column
Description
id
A unique identifier
zone_alarm_id
The zone alarm that this condition applies to.
test
The test condition, either '>=' or '<='.
duration
The time period with which the change in object count happens, in seconds.
change
The change in object count that happens within a period of time.
direction
The direction of movement, either 'entering' the zone, 'exiting' the zone, or 'entering_or_exiting'.
with_class_name
The name of the class of objects to look for in the zone.
attribute_id
An optional attribute that the object must have to be counted.
intersection_point
The point on the detection to use when calculating if the detection is in the zone. Either "bottom", "top", "left", "right", or "center".
encoding¶
A vector encoding of some data that defines an identity. For example, an
encoding for a human face that can be compared to other encodings to identify
if it is the same human face.
Column
Description
id
A unique identifier
identity_id
The identity that this encoding describes
class_name
The name of the class that this encoding is of.
vector_json
A JSON-encoded array of values. The number of values depends on the class name of the identity this encoding is attached to.
Tables: For Linking¶
alert_frame¶
Links an alert to a data_storage table containing the first frame in the video
where this alert happened.
Column
Description
id
A unique identifier
alert_id
The alert this frame is for.
data_storage_id
The data_storage table that contains the frame.
zone_status_alert¶
Links a zone_status to an alert that was in progress at the time of the
zone_status.
Column
Description
zone_status_id
The zone_status being linked to.
alert_id
The alert being linked to.
detection_zone_status¶
Links zone statuses to the detections that happened in them.
Column
Description
detection_id
The linked detection.
zone_status_id
The linked zone_status.
transition_state
The location of the detection relative to the zone.
detection_attribute¶
Links detections to the attributes that describe them.
Column
Description
detection_id
The linked detection.
attribute_id
The linked attribute.
encoding_data_storage¶
Links encodings to the data that was used to create the vector. This tends to
be an image.
Column
Description
data_storage_id
The linked data_storage
encoding_id
The linked encoding
Tables: Miscellaneous¶
data_storage¶
References some external file found elsewhere.
Column
Description
id
A unique identifier.
name
The name of the file, used to find it in storage.
hash
A SHA256 hash of the data.
mime_type
The mime type of the file being stored.
user¶
Contains information on user accounts.
Column
Description
id
A unique identifier.
username
The user's unique username.
password_hash
The user's password, hashed with argon2.
role
The user's role, which controls what permissions they have.
Ended: Dashboard
The BrainFrame software is copyrighted. BrainFrame™ is a trademark.
End-User License Agreement¶
END USER LICENSE AGREEMENT
This copy of Software Package ("the Software Product") and accompanying
documentation is licensed and not sold. This Software Product is protected by
copyright laws and treaties, as well as laws and treaties related to other
forms of intellectual property. Aotu, Inc or its subsidiaries, affiliates, and
suppliers (collectively "Aotu") own intellectual property rights in the
Software Product. The Licensee's ("you" or "your") license to download, use,
copy, or change the Software Product is subject to these rights and to all the
terms and conditions of this End User License Agreement ("Agreement").
Acceptance
YOU ACCEPT AND AGREE TO BE BOUND BY THE TERMS OF THIS AGREEMENT BY SELECTING
THE "ACCEPT" OPTION AND DOWNLOADING THE SOFTWARE PRODUCT OR BY INSTALLING,
USING, OR COPYING THE SOFTWARE PRODUCT. YOU MUST AGREE TO ALL OF THE TERMS OF
THIS AGREEMENT BEFORE YOU WILL BE ALLOWED TO DOWNLOAD THE SOFTWARE PRODUCT. IF
YOU DO NOT AGREE TO ALL OF THE TERMS OF THIS AGREEMENT, YOU MUST SELECT
"DECLINE" AND YOU MUST NOT INSTALL, USE, OR COPY THE SOFTWARE PRODUCT.
License Grant
This Agreement entitles you to install and use one (1) copy of the Software
Product. In addition, you may make one (1) archival copy of the Software
Product. The archival copy must be on a storage medium other than a hard drive,
and may only be used for the reinstallation of the Software Product. This
Agreement does not permit the installation or use of multiple copies of the
Software Product, or the installation of the Software Product on more than one
host machine (including but not limited to a robot, a general or specialized
computer) at any given time, on a system that allows shared use of
applications, on a multi-user network, or on any configuration or system of
host machines that allows multiple users. Multiple-copy use or installation is
only allowed if you obtain an appropriate licensing agreement for each user and
each copy of the Software Product.
Restrictions on Transfer
Without first obtaining the express written consent of Aotu, you may not assign
your rights and obligations under this Agreement, or redistribute, encumber,
sell, rent, lease, sublicense, or otherwise transfer your rights to the
Software Product.
Restrictions on Use
You may not use, copy, or install the Software Product on any system with more
than one host machine, or permit the use, copying, or installation of the
Software Product by more than one user or on more than one host machine. If you
hold multiple, validly licensed copies, you may not use, copy, or install the
Software Product on any system with more than the number of host machines
permitted by license, or permit the use, copying, or installation by more
users, or on more host machines than the number permitted by license.
You may not decompile, "reverse-engineer", disassemble, or otherwise attempt to
derive the source code for the Software Product.
You may not use the database portion of the Software Product in connection with
any software other than the Software Product.
Restrictions on Alteration
You may not modify the Software Product or create any derivative work of the
Software Product or its accompanying documentation. Derivative works include
but are not limited to translations. You may not alter any files or libraries
in any portion of the Software Product. You may not reproduce the database
portion or create any tables or reports relating to the database portion.
Restrictions on Copying
You may not copy any part of the Software Product except to the extent that
licensed use inherently demands the creation of a temporary copy stored in a
host machine’s memory and not permanently affixed on storage medium. You may
make one archival copy which must be stored on a medium other than a computer
hard drive.
Disclaimer of Warranties and Limitation of Liability
UNLESS OTHERWISE EXPLICITLY AGREED TO IN WRITING BY AOTU, AOTU MAKES NO OTHER
WARRANTIES, EXPRESS OR IMPLIED, IN FACT OR IN LAW, INCLUDING, BUT NOT LIMITED
TO, ANY IMPLIED WARRANTIES OF MERCHANTABILITY OR FITNESS FOR A PARTICULAR
PURPOSE OTHER THAN AS SET FORTH IN THIS AGREEMENT OR IN THE LIMITED WARRANTY
DOCUMENTS PROVIDED WITH THE SOFTWARE PRODUCT.
Aotu makes no warranty that the Software Product will meet your requirements or
operate under your specific conditions of use. Aotu makes no warranty that
operation of the Software Product will be secure, error free, or free from
interruption. YOU MUST DETERMINE WHETHER THE SOFTWARE PRODUCT SUFFICIENTLY
MEETS YOUR REQUIREMENTS FOR SECURITY AND UNINTERRUPTABILITY. YOU BEAR SOLE
RESPONSIBILITY AND ALL LIABILITY FOR ANY LOSS INCURRED DUE TO FAILURE OF THE
SOFTWARE PRODUCT TO MEET YOUR REQUIREMENTS. Aotu WILL NOT, UNDER ANY
CIRCUMSTANCES, BE RESPONSIBLE OR LIABLE FOR THE LOSS OF DATA ON ANY COMPUTER OR
INFORMATION STORAGE DEVICE.
UNDER NO CIRCUMSTANCES SHALL AOTU, ITS DIRECTORS, OFFICERS, EMPLOYEES OR AGENTS
BE LIABLE TO YOU OR ANY OTHER PARTY FOR INDIRECT, CONSEQUENTIAL, SPECIAL,
INCIDENTAL, PUNITIVE, OR EXEMPLARY DAMAGES OF ANY KIND (INCLUDING LOST REVENUES
OR PROFITS OR LOSS OF BUSINESS) RESULTING FROM THIS AGREEMENT, OR FROM THE
FURNISHING, PERFORMANCE, INSTALLATION, OR USE OF THE SOFTWARE PRODUCT, WHETHER
DUE TO A BREACH OF CONTRACT, BREACH OF WARRANTY, OR THE NEGLIGENCE OF AOTU OR
ANY OTHER PARTY, EVEN IF AOTU IS ADVISED BEFOREHAND OF THE POSSIBILITY OF SUCH
DAMAGES. TO THE EXTENT THAT THE APPLICABLE JURISDICTION LIMITS AOTU'S ABILITY
TO DISCLAIM ANY IMPLIED WARRANTIES, THIS DISCLAIMER SHALL BE EFFECTIVE TO THE
MAXIMUM EXTENT PERMITTED.
Limitation of Remedies and Damages
Your remedy for a breach of this Agreement or of any warranty included in this
Agreement is the correction or replacement of the Software Product. Selection
of whether to correct or replace shall be solely at the discretion of Aotu.
Aotu reserves the right to substitute a functionally equivalent copy of the
Software Product as a replacement. If Aotu is unable to provide a replacement
or substitute Software Product or corrections to the Software Product, your
sole alternate remedy shall be a refund of the purchase price for the Software
Product exclusive of any costs for shipping and handling.
Any claim must be made within the applicable warranty period. All warranties
cover only defects arising under normal use and do not include malfunctions or
failure resulting from misuse, abuse, neglect, alteration, problems with
electrical power, acts of nature, unusual temperatures or humidity, improper
installation, or damage determined by Aotu to have been caused by you. All
limited warranties on the Software Product are granted only to you and are
non-transferable. You agree to indemnify and hold Aotu harmless from all
claims, judgments, liabilities, expenses, or costs arising from your breach of
this Agreement and/or acts or omissions.
Term and Termination
This Agreement shall remain in effect unless terminated as set forth herein
(the “Term”). You may terminate this Agreement by ceasing to use and destroying
all copies of the Software Product and accompanying documentation. Either
party may, upon written notice to the other party, terminate this Agreement for
material breach, provided that such material breach is not cured within thirty
(30) days following receipt of such notice. Upon expiration or earlier
termination of this Agreement, the license shall also terminate, and You shall
cease using and destroy all copies of the Software Product and accompanying
documentation. Notwithstanding any expiration or termination of this Agreement,
any provisions of this Agreement which by their terms are intended to survive
expiration or termination of this Agreement shall so survive and continue in
full force and effect.
Maintenance and Support
This Agreement does not entitle You to any maintenance or support services with
respect to the Software Product.
Governing Law, Jurisdiction and Costs
This Agreement is governed by the laws of California, without regard to
California's conflict or choice of law provisions. Any legal action or
proceeding relating to this Agreement shall be brought exclusively in courts
located in Santa Clara, CA, and each party consents to the jurisdiction
thereof. The prevailing party in any action to enforce this Agreement shall be
entitled to recover costs and expenses including, without limitation,
attorneys’ fees. This Agreement is made within the exclusive jurisdiction of
the United States, and its jurisdiction shall supersede any other jurisdiction
of either party’s election.
Severability
If any provision of this Agreement shall be held to be invalid or
unenforceable, the remainder of this Agreement shall remain in full force and
effect. To the extent any express or implied restrictions are not permitted by
applicable laws, these express or implied restrictions shall remain in force
and effect to the maximum extent permitted by such applicable laws.
SDK Licensing¶
Unless otherwise stated, the following commercial license applies to all
other SDK components.
SOFTWARE LICENSE AGREEMENT FOR SOFTWARE DEVELOPMENT KIT
Notice to user: THIS IS A LICENSE AGREEMENT BETWEEN YOU AND AOTU, INC. BY
INDICATING YOUR ACCEPTANCE AS SET FORTH BELOW, YOU ACCEPT ALL THE TERMS AND
CONDITIONS OF THIS LICENSE AGREEMENT. This License Agreement accompanies AOTU's
Software Development Kit(s) and related explanatory materials, including but
not limited to any example code, (together, the "SDK"). This copy of the SDK is
licensed to You as the end user or to Your employer or another third party
authorized to permit Your use of the SDK. You agree that this License Agreement
is enforceable like any written negotiated agreement signed by You and that
Your use of the SDK constitutes acceptance of the Agreement terms. If you do
not agree to the terms of this Agreement, do not use the SDK.
1. DEFINITIONS.
1.1. "Licensed Material" means the SDK in source, binary, or object
code format.
1.2. "Product" means AOTU's BrainFrame and VisionCapsules package.
1.3. "SDK" means the Software Development Kit(s) and related
explanatory materials for the Product, including but not limited to any
example code, any update, revision, modification, and new version of
the SDK, and any SDK Derivative.
1.4. "SDK Derivatives" means source,
binary, or object code derived exclusively from the SDK; provided,
however, that SDK Derivatives do not include applications which may be
developed using the SDK. By way of example, an application that is
developed using the SDK would not be a SDK Derivative. By way of
example, but not limitation, a SDK Derivative is or would be: either
(i) an adaptation of a utility or piece of code from the SDK to improve
efficiency; or (ii) an addition of code or improvement to the SDK that
adds functionality.
1.5. "Developer", "You", or "Your" means any person or entity acquiring
or using the SDK under the terms of this License Agreement.
1.6. "Platform" means the BrainFrame, VisionCapsules or any system
designed, developed, or manufactured based on the technology.
1.7. "Licensor" means Aotu, Inc.
1.8. "Licensee" means YOU.
2. LICENSE. Subject to the terms, conditions, and restrictions contained in
this Section 2, Licensor grants to You a nonexclusive, worldwide, royalty free
license to use the items in the Licensed Material only for development of
applications that are designed for or compatible with the Platform.
2.1. You may use or merge all or portions of the Licensed Material with
Your applications and distribute it as part of Your products. Your
applications must be designed for or compatible with the Platform. Any
used or merged portion of the Licensed Material is subject to this
License Agreement. You are required to include Licensor's copyright
notices on Your applications where such Licensed Material is used.
2.2. You must NOT create SDK Derivatives.
3. PROPRIETARY RIGHTS. The items contained in the Licensed Material are the
intellectual property of Licensor and are protected by United States copyright
and patent law, international treaty provisions and applicable laws of the
country in which it is being used. You agree to protect all copyright and other
ownership interests of Licensor in all items in the Licensed Material supplied
under this License Agreement. You agree that all copies of the items in the
Licensed Material, reproduced for any reason by You, contain the same copyright
notices, and other proprietary notices as appropriate, as appear on or in the
master items delivered by Licensor in the Licensed Material. Licensor retains
title and ownership of the items in the Licensed Material, the media on which
it is recorded, and all subsequent copies, regardless of the form or media in
or on which the original and other copies may exist. Except as stated above,
this License Agreement does not grant You any rights to patents, copyrights,
trade secrets, trademarks or any other rights in respect to the items in the
Licensed Material.
4. TERM. This License Agreement is effective until terminated. Licensor has the
right to terminate this License Agreement immediately, without judicial
intervention, if You fail to comply with any term herein. Upon any such
termination You must remove all full and partial copies of the items in the
Licensed Material from your computer and discontinue the use of the items in
the Licensed Material.
5. DISCLAIMER OF WARRANTY. Licensor licenses the Licensed Material to You only
on an "AS-IS" basis. Licensor makes no representation with respect to the
adequacy of any items in the Licensed Material, whether or not used by You in
the development of any products, for any particular purpose or with respect to
their adequacy to produce any particular result. Licensor shall not be liable
for loss or damage arising out of this License Agreement or from the
distribution or use of Your products containing portions of the Licensed
Material. LICENSOR DISCLAIMS ALL WARRANTIES, EITHER EXPRESS OR IMPLIED,
INCLUDING BUT NOT LIMITED TO IMPLIED CONDITIONS OR WARRANTIES OF
MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE OR NONINFRINGEMENT OF ANY
THIRD PARTY RIGHT IN RESPECT OF THE ITEMS IN THE LICENSED MATERIAL OR ANY
SERVICES RELATED TO THE LICENSED MATERIAL.
Some states or jurisdictions do not allow the exclusion or limitation of
incidental, consequential or special damages, or the exclusion of implied
warranties or limitations on how long an implied warranty may last, so the
above limitations may not apply to You. You may have rights which vary from
state to state or jurisdiction to jurisdiction. The foregoing does not affect
or prejudice Your statutory rights. To the extent permissible any implied
warranties are limited to ninety (90) days.
Licensor is under no obligation to provide any support under this License
Agreement, including upgrades or future versions of the Licensed Material or
any portions thereof, to You or to any other party.
6. LIMITATION OF LIABILITY. Notwithstanding any other provisions of this
License Agreement, Licensor's liability to You under this License Agreement
shall be limited to the amount paid by You for the Licensed Material.
IN NO EVENT WILL LICENSOR BE LIABLE TO YOU FOR ANY CONSEQUENTIAL, INCIDENTAL OR
SPECIAL DAMAGES INCLUDING DAMAGES FOR ANY LOST PROFITS, LOST SAVINGS, LOSS OF
DATA, COSTS, FEES OR EXPENSES OF ANY KIND OR NATURE ARISING OUT OF ANY
PROVISION OF THIS LICENSE AGREEMENT OR THE USE OR INABILITY TO USE THE ITEMS IN
THE LICENSED MATERIAL, EVEN IF A Licensor REPRESENTATIVE HAS BEEN ADVISED OF
THE POSSIBILITY OF SUCH DAMAGES, OR FOR ANY CLAIM BY ANY PARTY.
Some jurisdictions do not allow the exclusion or limitation of incidental,
consequential or special damages, so the above limitation or exclusion may not
apply to You. Nothing contained in this Agreement shall prejudice the statutory
rights of any party dealing as a consumer.
7. INDEMNIFICATION. You agree to indemnify, hold harmless, and defend Licensor
from and against any claims or lawsuits, including attorneys' fees, that arise
or result from the use and distribution of Your product that contains or is
based upon any portion of the Licensed Material, provided that Licensor gives
You prompt written notice of any such claim, tenders to You the defense or
settlement of such a claim at Your expense and cooperates with You, at Your
expense, in defending or settling such claim.
8. CHOICE OF LAW. This License Agreement shall be governed by and construed in
accordance with the substantive laws in force in the State of California. You
agree that any controversy or claim arising out of or relating to this License
Agreement, or the breach thereof, shall be settled by arbitration administered
by the American Arbitration Association in accordance with its Consumer
Arbitration Rules, and judgment on the award rendered by the arbitrator(s) may
be entered in any court having jurisdiction thereof. This License Agreement
will not be governed by the conflict of law rules of any jurisdiction or the
United Nations Convention on Contracts for the International Sale of Goods, the
application of which is expressly excluded.
9. NO WAIVER. Failure by Licensor at any time to enforce any of the provisions
of this License Agreement will not be construed as a waiver of such provisions
or in any way affect the validity of this License Agreement or parts thereof.
10. SEVERABILITY. If parts of this License Agreement are held to be illegal or
otherwise unenforceable, the remainder of this License Agreement should still
apply.
OEM Licensing¶
TECHNOLOGY LICENSE AGREEMENT
THIS IS A LICENSE AGREEMENT BETWEEN YOU AND AOTU, INC. BY INDICATING YOUR
ACCEPTANCE AS SET FORTH BELOW, YOU ACCEPT ALL THE TERMS AND CONDITIONS OF THIS
LICENSE AGREEMENT. IF YOU DO NOT AGREE TO THE TERMS OF THIS AGREEMENT, DO NOT
USE ANY PART OF THE LICENSED TECHNOLOGY, LICENSED PATENTS, LICENSED TRADE
SECRETS, LICENSED WORKS, OR LICENSED TRADEMARKS AS DEFINED IN THIS LICENSE
AGREEMENT AND IMMEDIATELY DESTROY ANY COPY OF THE LICENSED WORKS YOU POSSESS OR
HAVE CREATED.
1. DEFINITIONS
1.1. "Change of Control" means, with respect to a party, a transaction
or series of related transactions that results in: (a) a sale of all or
substantially all of the assets of such party to a third party; (b) the
transfer of fifty percent (50%) or more of the outstanding voting power
of such party to a third party; or (c) the acquisition by a third party
of the right or power to appoint or cause to be appointed a majority of
the directors (or in the case of an entity that is not a corporation,
for the election of the corresponding managing authority).
1.2. "Confidential Information" means: (a) information or material in
tangible form disclosed to a party in the course of the discussions and
project related to this Agreement and marked as "confidential" at the
time it is disclosed; (b) proprietary or confidential information
disclosed by a party to the other party orally that is identified as
confidential when disclosed, and such information is confirmed as being
confidential in a written communication from the disclosing party to
the receiving party within thirty (30) days of the disclosure.
"Confidential Information" does not include information that: (i) was
already known to the receiving party, other than under an obligation of
confidentiality, at the time of disclosure; (ii) was available to the
public or otherwise part of the public domain at the time of
disclosure; (iii) became available to the public after its disclosure,
other than through any act or omission of the receiving party in breach
of this Agreement; (iv) was subsequently lawfully disclosed by the
receiving party to a person other than a party to this Agreement; or
(v) was developed independently by the receiving party without
misappropriating confidential information.
1.3. "Field of Use" means the field of selling and distribution of
Licensee Products in the Territory.
1.4. "Licensed Patents" means the following patents:
1.5. "Licensed Trade Secrets" means the Licensor's proprietary
information, data, and source code contained in the Licensed
Technology.
1.6. "Licensed Works" means the BrainFrame and VisionCapsules software
package and related documentation.
1.7. "Licensed Trademarks" means the "BrainFrame" and "VisionCapsules"
trademark.
1.8. "Licensed Technology" means Licensor's BrainFrame and
VisionCapsules technology.
1.9. "Licensee Products" shall have the meaning specified in the
Business Contract.
1.10. "Territory" means the United States of America.
1.11. "Business Contract" refers to a business agreement entered
between the Licensor and the Licensee in connection with this
Agreement.
1.12. "Licensor" means Aotu, Inc.
1.13. "Licensee" means YOU.
2. LICENSE GRANTS. Subject to the terms and conditions of this Agreement and
the Business Contract, and during the term of this Agreement or the term of the
Business Contract, whichever is shorter, Licensor hereby grants to Licensee:
2.1. Patent License. A non-exclusive royalty-bearing license to
practice the Licensed Patents to make, have made, use, sell, to offer
to sell, and import Licensee Products in the Field of Use in the
Territory;
2.2. Trade Secret License. A non-exclusive royalty-bearing license to
use the Licensed Trade Secrets in the development, production and
manufacturing of the Licensee Products in the Field of Use in the
Territory.
2.3. Copyright License. A non-exclusive royalty-bearing license to use,
copy, and distribute the Licensed Works in the development, production
manufacturing, marketing and distribution of the Licensee Products in
the Field of Use in the Territory.
2.4. Trademark License. A non-exclusive royalty-bearing license to use
the Licensed Trademarks only in connection with the distribution,
advertising, promotion, and marketing of the Licensee Products in the
Territory, provided that the Licensee must ensure that all Licensee
Products and all of its advertising, promotional, and other related
uses of the Licensed Trademarks conform to the Licensor's standards, as
they may change during the Term.
2.5. No Sublicenses. Licensee shall have no right to transfer or
sublicense any of the rights set forth in this Agreement to any third
party.
3. PAYMENTS
3.1. All payment terms are governed by the Business Contract.
4. RECORDS, REPORTS, AND AUDIT
4.1. All records, reports, and audit terms are governed by the Business
Contract.
5. REPRESENTATIONS AND WARRANTIES
5.1. Mutual Representations and Warranties. Each party represents and
warrants to the other party that:
(a) It has the full right, power, and authority to enter into
this Agreement and perform its obligations hereunder;
(b) It is duly organized, validly existing, and in good
standing in the jurisdiction in which it is incorporated or
doing business.
(c) Its execution, delivery, and performance of this Agreement, and the
other party's exercise of rights under this Agreement, will not
conflict with or result in a breach or other violation of any agreement
or other third-party obligation by which it is bound;
(d) During the term of this Agreement, it will not enter into any
agreement that would conflict with this Agreement or impair its ability
to perform this Agreement; and
(e) It will comply with all applicable laws in its performance of this
Agreement.
5.2. No Warranty of Fitness or Validity of IP. LICENSOR MAKES NO
WARRANTIES, EXPRESS OR IMPLIED, OF MERCHANTABILITY, FUNCTIONALITY,
COMPLIANCE WITH TECHNICAL STANDARDS, OR FITNESS FOR A PARTICULAR PURPOSE OF
ANY INTELLECTUAL PROPERTY LICENSED UNDER THIS AGREEMENT. Licensor does not
warrant the validity of the intellectual property licensed under this
Agreement.
6. DISCLAIMERS AND LIMITATIONS OF LIABILITY
6.1. NEITHER PARTY WILL BE LIABLE (WHETHER IN CONTRACT, WARRANTY, TORT,
PRODUCT LIABILITY, OR OTHER THEORY) TO THE OTHER PARTY FOR COST OF COVER OR
FOR ANY INDIRECT, INCIDENTAL, SPECIAL, CONSEQUENTIAL, PUNITIVE, OR
EXEMPLARY DAMAGES (INCLUDING DAMAGES FOR LOSS OF PROFIT, BUSINESS, OR DATA)
ARISING OUT OF THIS AGREEMENT.
7. INDEMNIFICATION
7.1. Licensee agrees to indemnify, hold harmless, and defend Licensor from
and against any claims or lawsuits, including attorneys' fees, that arise
or result from the use and/or distribution of Licensed Patents, Licensed
Trade Secrets, Licensed Works, Licensed Trademarks, Licensed Technology,
and/or Licensee Products, provided that Licensor gives Licensee prompt
written notice of any such claim, tenders to Licensee the defense or
settlement of such a claim at Licensee's expense and cooperates with
Licensee, at Licensee's expense, in defending or settling such claim.
8. IP MAINTENANCE AND ENFORCEMENT
8.1. Licensor Obligation to Maintain Patents and other Licensed IP.
Licensor shall file, prosecute, and maintain all Licensed Patents in the
Territory. Licensor shall also have the obligation during the term of this
Agreement to take appropriate measures under applicable law to protect and
maintain the legal status of the Licensed Works, the Licensed Trade
Secrets, and the Licensed Trademarks.
8.2. Notice of Infringement. In any case in which Licensee becomes aware
of infringement or misappropriation of the intellectual property licensed
under this Agreement or has reason to believe that such infringement or
misappropriation is occurring, Licensee will notify the Licensor promptly
in writing. The written notice shall be delivered in any event no later
than five (5) days after Licensee becomes aware of such infringement or
misappropriation, and the notice shall provide any available specific
information including persons involved, sources of information, dates, and
technology identification.
8.3. Licensor Enforcement. Licensor shall have the right and obligation to
enforce the intellectual property rights licensed under this Agreement
against infringers or misappropriators by asserting claims, bringing
lawsuits and prosecuting such suits as necessary.
9. PRODUCT MARKING.
9.1. Patent Marking. Licensee shall mark the Licensee Products with a patent
notice adequate to comply with the requirements of 35 U.S.C. Section 287
(a). Such marking shall be placed in a conspicuous location on the Licensee
Product and shall include the word "patent" or "pat" and the number of any
issued patent that claims a process or method used in the Licensee Product.
As an alternative, Licensee may provide virtual marking of the Licensee
Product compliant with 35 U.S.C. Section 287, as amended.
9.2. Avoidance of False Marking. Licensee shall use reasonable diligence to
avoid any false or misleading patent marking on any bundled products or
parts of the Licensed Product that are not covered by at least one claim of
the patents licensed under this Agreement.
9.3. Copyright Notice. Licensee will use and maintain copyright notices on
the Licensed Works in the manner directed by Licensor.
10. CONFIDENTIALITY
10.1. Duty to Maintain Confidentiality. Licensee must safeguard the
confidentiality of the Licensed Trade Secrets and any Confidential
Information. Licensee must not disclose the Licensed Trade Secrets or
Confidential information to any third party. Licensee agrees to disclose
the Licensed Trade Secrets and Confidential Information only to persons
with a "need to know" within the company. Licensee agrees to use the
Licensed Trade Secrets and any Confidential Information only in connection
with the development of the Licensee Products and to take reasonable
precautions to prevent its accidental copying or distribution to persons
outside of the Licensee or to employees and officers of Licensee without a
need to know this Confidential Information.
11. DISPUTE RESOLUTION
11.1. Choice of Law. This Agreement shall be construed, and the legal
relations between the parties hereto shall be determined, in accordance
with the laws of the State of California and, as applicable, the laws of
Territory.
11.2. Jurisdiction and Venue. All disputes arising out of this Agreement
shall be resolved by adjudication in the Superior Court for the State of
California or the Federal District Court for the Northern District of
California. Venue shall be in Santa Clara, California.
12. TERM AND TERMINATION
12.1. Term. The term of this Agreement shall begin from the date when
Licensee accepts the terms of this Agreement or the date when Licensee and
Licensor enter into the Business Contract, whichever is later, and shall be
effective only during the time when both this Agreement and the Business
Contract are effective.
12.2. Termination for Breach. In the event that Licensee is in material
breach of its obligations under this Agreement, Licensor may deliver to the
Licensee a written Notice of Proposed Termination. If Licensee fails to
cure the breach within thirty (30) days of its receipt of Notice of
Proposed Termination, Licensor may terminate this Agreement by providing to
Licensee a Notice of Termination. However, breach of the confidentiality
provisions of Section 10 of this Agreement may be grounds for termination
upon 72 hours written notice if the breach jeopardizes the legal protection
of the Confidential Information of the non-breaching party, or otherwise
causes irreparable injury to the non-breaching party.
12.3. Termination upon Change of Control. Licensor may terminate this
Agreement in the event that the Licensee undergoes a Change of Control.
12.4. Effect of Termination. In the event of termination of this Agreement
by Licensor under Sections 12.2 or 12.3 of this Agreement: (a) All licenses
conferred by this Agreement shall cease, provided that Licensee may
continue, for a period of forty-five (45) days after the date of its
receipt of the Notice of Termination, to ship any inventory of the Licensee
Products ordered in writing by customers prior to the date of Licensee's
receipt of the Notice of Termination; (b) Licensee shall return to Licensor
all Licensed Trade Secrets in tangible form and Licensed Works, and any
copies thereof, no later than thirty (30) business days after the date of
Licensee's receipt of the Notice of Termination.
12.5. Survival. The following sections of this Agreement shall survive its
termination or expiration: 10.
12.6. Remedies. The right of the parties to terminate this Agreement shall
not be the exclusive remedy for breach.
13. GENERAL PROVISIONS
13.1. Assignment. Neither party shall assign this Agreement in whole or in
part, without the prior written consent of the other party, which consent
may be withheld for any reason.
13.2. Entire Agreement. This Agreement and the Business Contract constitute
the entire agreement between the parties relating to the subject matter of
the intellectual property and licenses referenced herein, and all prior
negotiations, representations, agreements, letters, and understandings are
merged into, extinguished by, and integrated into this Agreement. No
modification of this Agreement or any of its terms shall be effective
unless a written amendment is signed by the parties.
13.3. Force Majeure. Neither party will be responsible to the other party
for non-performance or breach of any terms of this Agreement due to
occurrences beyond the control of the party, including acts of God, acts of
government, terrorism, wars, riots, strikes or other labor disputes,
shortages of labor or materials, fires, and floods, provided that the
non-performing party must promptly provide written notice of the
occurrence, including specific details and a plan for mitigating the
situation.
13.4. Severable Terms. The provisions of this Agreement are severable, and
in the event that any provision of this Agreement shall be determined to be
invalid or unenforceable under any controlling body of law, this
determination shall not in any way affect the validity or enforceability of
the remaining provisions of this Agreement.
Open Source Licenses¶
Open source licenses for the BrainFrame client can be found under the
legal/licenses directory.
Open source licenses for the BrainFrame server can be found in the core
Docker image under the standard locations provided by the apt and pip
package managers.
Replacing Python Libraries¶
We offer BrainFrame client users the option to replace some libraries that have
been packaged alongside or within the client binary with an API-compatible
version of the library. Simply set the environment variable corresponding to
the library you want to replace, and BrainFrame will use that version.
For example:
export PYGOBJECT_PATH=/usr/local/pygobject-custom
bash ./brainframe_client.sh
PyGObject¶
Environment variable: PYGOBJECT_PATH
Source: https://github.com/GNOME/pygobject
Argh¶
Environment variable: ARGH_PATH
Source: https://github.com/neithere/argh
Chardet¶
Environment variable: CHARDET_PATH
Source: https://github.com/chardet/chardet
Replacing C++ Libraries¶
Replaceable libraries are included in the release under the lib
directory and
are dynamically linked at runtime. In order to use a custom version of these
dependencies, simply replace the included dynamic library files with your
version.
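If you're unsure whether a replacement build is compatible, one quick sanity check is to load it and confirm it exports the symbols the client links against. Below is a hedged Python sketch: it loads the current process (which links libc) as a stand-in for one of the bundled libraries, and the symbol names are illustrative rather than taken from BrainFrame.

```python
import ctypes

def exports_symbols(library, symbols):
    """Return True if the loaded library exposes every required symbol."""
    return all(hasattr(library, name) for name in symbols)

# CDLL(None) loads the current process, which links libc; in practice you
# would pass the path of your replacement .so from the lib/ directory.
lib = ctypes.CDLL(None)
print(exports_symbols(lib, ["printf", "malloc"]))  # True on Linux
```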
Please consult the source for their corresponding copyrights. Links to each
library's source code can be found in lib/brainframe_qt/legal/sources.txt
in the binary .zip.
Introduction¶
In this tutorial, we will connect video streams to the BrainFrame server. You can find the complete script on our GitHub repository.
Before we start, you should have the BrainFrame Server and Client installed on your machine. If you don't have them yet, please follow the setup instructions.
This tutorial uses our Python library, which wraps the BrainFrame REST API and makes programming for BrainFrame in Python easier. If you're using Python, we strongly recommend using this library. Otherwise, you can always follow our REST API documentation to use the REST API directly.
Setup Environment¶
First, let's install the BrainFrame Python API library and set up the environment. Run the following command, either in a virtual environment (recommended) or in your system environment.
pip3 install brainframe-api
The Python API is now installed and ready for use.
The following APIs will be used in this tutorial:
api.get_stream_configurations()
api.set_stream_configuration(...)
api.start_analyzing(stream_id=...)
Check existing streams¶
Now let's create a new, empty script. The first thing you want to do is to import the Python API library.
from pathlib import Path
from brainframe.api import BrainFrameAPI, bf_codecs
Then, initialize an API instance with the BrainFrame server URL. In this tutorial, we will connect to the BrainFrame server instance running on our local machine.
api = BrainFrameAPI("http://localhost")
The server is now connected, and we can start working with BrainFrame. First, let's see if there are any streams already connected to BrainFrame.
stream_configs = api.get_stream_configurations()
print("Existing streams: ", stream_configs)
If you run the script, and you only have a freshly-installed BrainFrame server, you should see just an empty list. Otherwise, the list of streams you have already connected will appear.
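If you'd like a friendlier printout than the raw codec list, you can format one line per stream. A minimal sketch follows; the `StreamInfo` dataclass is a stand-in for the real codec objects, which (as used elsewhere in this tutorial) carry `id` and `name` fields.

```python
from dataclasses import dataclass

@dataclass
class StreamInfo:
    """Stand-in for the stream configuration codec's id and name fields."""
    id: int
    name: str

def summarize_streams(stream_configs):
    """Return one 'id: name' line per connected stream."""
    if not stream_configs:
        return "No streams connected yet."
    return "\n".join(f"{s.id}: {s.name}" for s in stream_configs)

print(summarize_streams([StreamInfo(1, "Lobby"), StreamInfo(2, "Checkout")]))
```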
Create a New Stream Codec¶
Next, we will create a new stream configuration. The API function we will use is api.set_stream_configuration(...), which takes a stream configuration codec as input. You can check the definitions of the different codecs in the Python library documentation.
Currently, we support three types of video sources:
- IP cameras
- Webcams
- Local files
Each type of video source requires a different connection type and its own connection options. For more information, check this documentation.
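The per-type requirements covered in the sections below can be collected into a small validation helper. This is a hedged sketch based only on the options this tutorial uses, not the server's full schema, and the helper name is ours rather than part of the BrainFrame API.

```python
# Required connection_options keys per connection type (tutorial subset only).
REQUIRED_OPTIONS = {
    "IP_CAMERA": {"url"},
    "WEBCAM": {"device_id"},
    "FILE": {"storage_id"},
}

def check_connection_options(conn_type, options):
    """Raise ValueError if options is missing a key this tutorial requires."""
    missing = REQUIRED_OPTIONS[conn_type] - options.keys()
    if missing:
        raise ValueError(f"{conn_type} stream is missing options: {missing}")

check_connection_options("WEBCAM", {"device_id": 0})  # passes silently
```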
IP Camera¶
For an IP camera, the connection type is IP_CAMERA. In connection_options, a valid url is required.
# Create a new IP camera StreamConfiguration codec
new_ip_camera_stream_config = bf_codecs.StreamConfiguration(
# The display name on the client/in API responses
name="IP Camera",
connection_type=bf_codecs.StreamConfiguration.ConnType.IP_CAMERA,
connection_options={
# The url of the IP camera
"url": "your_ip_camera_url",
},
runtime_options={},
premises_id=None,
)
Webcam¶
For a webcam, the connection type is WEBCAM. Note that the webcam must be connected to the server machine, not the client. In connection_options, the device ID of the webcam is required. On Linux, you can find the device ID using:
ls /dev/ | grep video
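Each videoN entry corresponds to integer device ID N. If you'd rather pick the ID programmatically, here is a small sketch; the helper function is ours, not part of the BrainFrame API.

```python
def find_video_device_ids(dev_entries):
    """Extract the numeric IDs from /dev entries such as 'video0'."""
    prefix = "video"
    ids = [
        int(name[len(prefix):])
        for name in dev_entries
        if name.startswith(prefix) and name[len(prefix):].isdigit()
    ]
    return sorted(ids)

# In practice you would pass os.listdir("/dev") instead of a sample list.
print(find_video_device_ids(["video0", "video1", "tty0", "videodev"]))  # [0, 1]
```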
After you have the device ID, use it in the codec.
# Create a new webcam StreamConfiguration codec
new_web_camera_stream_config = bf_codecs.StreamConfiguration(
# The display name on the client/in API responses
name="Webcam",
connection_type=bf_codecs.StreamConfiguration.ConnType.WEBCAM,
connection_options={
# The device ID of the web camera
"device_id": 0,
},
runtime_options={},
premises_id=None,
)
Local File¶
For a local file, you must first upload the video file to the BrainFrame server's database and get a storage ID. The connection type is FILE. In connection_options, the storage ID of the file is required.
# Upload the local file to the database and create a storage id
storage_id = api.new_storage(
data=Path("../videos/shopping_cashier_gone.mp4").read_bytes(),
mime_type="application/octet-stream"
)
# Create a local file stream configuration codec
new_local_file_stream_config = bf_codecs.StreamConfiguration(
# The display name on the client side
name="Local File",
connection_type=bf_codecs.StreamConfiguration.ConnType.FILE,
# The storage id of the file
connection_options={
"storage_id": storage_id,
},
runtime_options={},
premises_id=None,
)
Create a StreamConfiguration on the Server Side¶
Once we have the StreamConfiguration codec, we can tell BrainFrame Server to connect to it. In this tutorial, we will use the file-based codec. If you have an IP camera or a Webcam connected to the server, you can try using those as well.
# Tell the server to connect to the stream configuration
new_local_file_stream_config = api.set_stream_configuration(
new_local_file_stream_config)
Once the server receives the stream configuration, it will connect to it, assign a stream ID to it, and send it back. It is helpful to keep track of the IDs of the streams you have added using the return value.
Finally, don't forget to tell BrainFrame to start analyzing/performing inference on the stream.
# Start analysis on the stream
api.start_analyzing(new_local_file_stream_config.id)
Now you should be able to see that stream in the BrainFrame client.
Introduction¶
In this tutorial, we will walk you through a simple use case for BrainFrame: getting a WeChat notification when there is no cashier in the checkout area. You can find the complete script on our GitHub repository.
Setup The Environment¶
In a previous tutorial, we installed the BrainFrame server, client, and Python API libraries. In this tutorial, the API functions we are going to use are:
api.set_stream_configuration(...)
api.set_zone(...)
api.get_latest_zone_statuses()
api.get_zone_status_stream()
We will be using a third-party library called itchat to send notifications to WeChat. We'll install it using pip:
pip3 install itchat
We will also use one of our publicly available capsules, detector_people_and_vehicles_fast. You can grab it from our downloads page.
Before we start, you should have the BrainFrame server and client running, and capsules ready.
Log In to WeChat¶
As usual, we will begin by importing our dependencies:
from pathlib import Path
import itchat as wechat
from brainframe.api import BrainFrameAPI, bf_codecs
Then, let's log in to our WeChat account and send a test message:
wechat.auto_login()
wechat.send_msg(f"Notifications from BrainFrame have been enabled",
toUserName="filehelper")
The script will display a QR code. Scan it with your WeChat app to log in. Your File Helper will then receive the message.
Create a New Stream from a Local File¶
First set the BrainFrame URL:
api = BrainFrameAPI("http://localhost")
We will reuse the code snippet introduced in the previous tutorial to create a stream configuration on the BrainFrame server. We're going to use a simulated video file for this demo, but it will work with live video streams as well.
# Upload the local file to the BrainFrame server's database and get its storage
# ID
storage_id = api.new_storage(
data=Path("../videos/shopping_cashier_gone.mp4").read_bytes(),
mime_type="application/octet-stream"
)
# Create a StreamConfiguration with the storage ID
new_stream_config = bf_codecs.StreamConfiguration(
# The display name on the client side
name="Demo",
# Specify that we're using a file
connection_type=bf_codecs.StreamConfiguration.ConnType.FILE,
connection_options={
# The storage id of the file
"storage_id": storage_id,
},
runtime_options={},
premises_id=None,
)
# Send the StreamConfiguration to the server to have it connect
new_stream_config = api.set_stream_configuration(new_stream_config)
# Tell the server to start analysis on the new stream
api.start_analyzing(new_stream_config.id)
You can download the demo video from our tutorial scripts repository. We recorded a video simulating a cashier serving customers.
Create a Zone and Setup an Alarm¶
In BrainFrame, alarms are associated with zones, and you can configure them through the client or through the API. You can check our documentation on Zones and Alarms for more information.
Using the API, we will create a zone around the check-out counter, and an alarm that will be triggered if no people are in that zone.
# Condition for the Alarm that will trigger when there is <1 person in the zone
# that it is assigned to
no_cashier_alarm_condition = bf_codecs.ZoneAlarmCountCondition(
test=bf_codecs.CountConditionTestType.LESS_THAN,
check_value=1,
with_class_name="person",
with_attribute=None,
window_duration=5.0,
window_threshold=0.5,
intersection_point=bf_codecs.IntersectionPointType.BOTTOM,
)
# Create the ZoneAlarm. It will be active all day, every day, and will be
# triggered if the detection results satisfy the condition we created. Because
# use_active_time==False, the active end/start times will be ignored.
no_cashier_alarm = bf_codecs.ZoneAlarm(
name="Missing Cashier!",
count_conditions=[no_cashier_alarm_condition],
rate_conditions=[],
use_active_time=False,
active_start_time="00:00:00",
active_end_time="23:59:59",
)
# Create a Zone object with the above alarm
cashier_zone = bf_codecs.Zone(
name="Cashier",
stream_id=new_stream_config.id,
alarms=[no_cashier_alarm],
coords=[[513, 695], [223, 659], [265, 340], [513, 280], [578, 462]]
)
# Send the Zone to BrainFrame
api.set_zone(cashier_zone)
In the client, you will be able to see the zone:
Get Zone Status¶
In BrainFrame, we use the ZoneStatus data structure to represent the inference results of frames. Let's use it to get ours.
We can use the API to get the latest ZoneStatus objects from BrainFrame.
zone_statuses = api.get_latest_zone_statuses()
print("Zone Statuses: ", zone_statuses)
The above code will print out the latest ZoneStatus objects for each stream with analysis/inference enabled. Warning: it can be a very long data structure, depending on how many streams there are and what capsules are loaded.
This is the most direct way to get the most recent inference results from BrainFrame. However, you have to call this function each time you want new results, which is a hassle.
A different API function, get_zone_status_stream(), helps alleviate this issue. Instead of relying on you to poll for ZoneStatus objects, this function returns an iterable object. Each time BrainFrame has a new result available, it is pushed to the iterator.
zone_status_iterator = api.get_zone_status_stream()
for zone_statuses in zone_status_iterator:
print("Zone Statuses: ", zone_statuses)
This script will print the zone statuses as fast as the capsules can process the frames.
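Each packet yielded by the iterator is a nested mapping, roughly {stream_id: {zone_name: ZoneStatus}}. A sketch of that shape with plain dicts standing in for the ZoneStatus codec objects (the zone names here are illustrative; "Screen" is assumed to be the full-frame zone):

```python
# Plain dicts standing in for ZoneStatus codec objects
zone_status_packet = {
    1: {                                   # stream ID
        "Screen": {"alerts": []},          # full-frame zone (assumed name)
        "Cashier": {"alerts": ["alert"]},  # a user-defined zone
    },
}

# Walk the nesting the same way the tutorial scripts do
for stream_id, zone_statuses in zone_status_packet.items():
    for zone_name, zone_status in zone_statuses.items():
        print(stream_id, zone_name, len(zone_status["alerts"]))
```

Keeping this shape in mind makes the nested for-loops in the following sections easier to follow.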
Get Alarms and Send Notifications to WeChat¶
We can iterate through the zone status packets and check if there are any alerts that recently terminated after lasting >5 seconds. If there were, we send a notification. Note that for this example, the alert will only trigger after the cashier returns to the counter, a situation that is not as useful outside of the demo environment. The script will also only send one notification before exiting, to avoid sending too many notifications.
# Iterate through the zone status packets
for zone_status_packet in zone_status_iterator:
for stream_id, zone_statuses in zone_status_packet.items():
for zone_name, zone_status in zone_statuses.items():
for alert in zone_status.alerts:
# Check if the alert has ended
if alert.end_time is None:
continue
total_time = alert.end_time - alert.start_time
# Check if the alert lasted for more than 5 seconds
if total_time > 5:
alarm = api.get_zone_alarm(alert.alarm_id)
wechat.send_msg(
f"BrainFrame Alert: {alarm.name} \n"
f"Duration {total_time}", toUserName="filehelper")
# Stop here, for demo purposes
exit()
The script will send an alert to your WeChat File Helper if the cashier has been missing for more than 5 seconds. It will then exit the loop.
Log Out of Your WeChat Account¶
Finally, before we exit the script, don't forget to log out of your WeChat account. Put the following code above exit().
wechat.logout()
Introduction¶
In other tutorials, we demonstrated how to start a video stream and run inference, a common scenario. But sometimes you might want to run inference on images instead of videos. This tutorial will demonstrate how to do that using BrainFrame.
The use case in this tutorial is pretty simple. We want to iterate over all images in a directory to find the ones with cats in them. You can find the complete script and sample images on our GitHub repository.
Setup The Environment¶
In a previous tutorial, we installed the BrainFrame server, client, and Python API libraries. The API functions we are going to use in this tutorial are:
api.get_plugins(...)
api.process_image(...)
In this tutorial, we will use one of our publicly available capsules: detector_people_and_vehicles_fast. You can download it from our downloads page.
Before we start, you should already have the BrainFrame server and client running, and the capsule downloaded.
Check the Existing Capsules¶
As usual, let's import the dependencies first:
from pathlib import Path
import cv2
from brainframe.api import BrainFrameAPI
And connect to the server:
api = BrainFrameAPI("http://localhost")
Before we start processing images, we want to check the existing capsules to verify that detector_people_and_vehicles_fast is loaded:
# Get the names of existing capsules
loaded_capsules = api.get_plugins()
loaded_capsules_names = [capsule.name for capsule in loaded_capsules]
# Print out the capsules names
print(f"Loaded Capsules: {loaded_capsules_names}")
Make sure detector_people_and_vehicles_fast is present.
Loaded Capsules: ['detector_people_and_vehicles_fast']
You can also check the loaded capsules using the client.
Iterate through the Image Directory¶
With the capsule loaded, we can iterate over all the images in the directory and get the inference results for each image. Then we will filter for detections with class_name == "cat".
# Root directory containing the images.
IMAGE_ARCHIVE = Path("../images")
# Iterate through all images in the directory
for image_path in IMAGE_ARCHIVE.iterdir():
# Use only PNGs and JPGs
if image_path.suffix not in [".png", ".jpg"]:
continue
# Get the image array
image_array = cv2.imread(str(image_path))
# Perform inference on the image and get the results
detections = api.process_image(
# Image array
img_bgr=image_array,
# The names of capsules to enable while processing the image
plugin_names=["detector_people_and_vehicles_fast"],
# The capsule options you want to set. You can check the available
# capsule options with the client, or by also printing the capsule
# metadata in the snippet above that printed the capsule names.
option_vals={
"detector_people_and_vehicles_fast": {
# This capsule is able to detect people, vehicles, and animals.
# In this example we want to filter out detections that are not
# animals.
"filter_mode": "only_animals",
"threshold": 0.9,
}
}
)
print()
print(f"Processed image {image_path.name} and got {detections}")
# Filter the cat detections using the class name
cat_detections = [detection for detection in detections
if detection.class_name == "cat"]
if len(cat_detections) > 0:
print(f"This image contains {len(cat_detections)} cat(s)")
Now the script will tell you if there are cats in those images:
Processed image one-person.jpg and got []
Processed image no_people.jpg and got []
Processed image one-person-png.png and got []
Processed image one_cat.jpg and got [Detection(class_name='cat', coords=[[800, 0], [1566, 0], [1566, 850], [800, 850]], children=[], attributes={}, with_identity=None, extra_data={'detection_confidence': 0.9875224233}, track_id=None)]
This image contains 1 cat(s)
Processed image two_people_and_dtag.png and got []
Processed image two_people.jpg and got []
Introduction¶
In this tutorial, we will walk through a simple use case that checks if someone is violating social distancing rules.
Please be aware that the goal of this tutorial is to help you get familiar with the usage of BrainFrame's inference capabilities. A real social distancing use case is much more complicated than this script.
In this script, we only have two rules:
- Two person detection bounding boxes cannot overlap.
- The distance between the centers of two people detections' bounding boxes must be greater than 500 pixels (by default; this will be configurable).
You can find the complete script on our GitHub repository.
Setup The Environment¶
The environment setup is similar to the environment we have in the WeChat Notification tutorial. You can refer to it to set up the environment.
In this tutorial, the API functions that we are going to use are:
api.set_stream_configuration(...)
api.set_zone(...)
api.get_zone_status_stream()
api.set_plugin_option_vals(...)
Helper Functions¶
First, import the dependencies:
import math
from argparse import ArgumentParser
from pathlib import Path
from brainframe.api import BrainFrameAPI, bf_codecs
To make the script more readable, we'll define two helper functions in advance.
The first will check whether two bounding boxes overlap:
# Helper function to check if two detections overlap
def is_overlapped(det1: bf_codecs.Detection,
det2: bf_codecs.Detection) -> bool:
"""
:param det1: First Detection
:param det2: Second Detection
:return: If the two Detections' bboxes are overlapped
"""
# Sort the x, y in ascending order
coords1_sorted_x = sorted([c[0] for c in det1.coords])
coords2_sorted_x = sorted([c[0] for c in det2.coords])
coords1_sorted_y = sorted([c[1] for c in det1.coords])
coords2_sorted_y = sorted([c[1] for c in det2.coords])
# Return False if the rects do not overlap horizontally
if coords1_sorted_x[0] > coords2_sorted_x[-1] \
or coords2_sorted_x[0] > coords1_sorted_x[-1]:
return False
# Return False if the rects do not overlap vertically
if coords1_sorted_y[0] > coords2_sorted_y[-1] \
or coords2_sorted_y[0] > coords1_sorted_y[-1]:
return False
# Otherwise, the two rects must overlap
return True
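The axis-separation test above can be sanity-checked with plain coordinate lists. A minimal self-contained version (boxes_overlap is a stand-in that skips the Detection codec and works on raw [[x, y], ...] lists):

```python
def boxes_overlap(coords1, coords2):
    # Same axis-separation test as is_overlapped, on raw [[x, y], ...] lists
    xs1 = sorted(c[0] for c in coords1)
    xs2 = sorted(c[0] for c in coords2)
    ys1 = sorted(c[1] for c in coords1)
    ys2 = sorted(c[1] for c in coords2)
    if xs1[0] > xs2[-1] or xs2[0] > xs1[-1]:
        return False  # no horizontal overlap
    if ys1[0] > ys2[-1] or ys2[0] > ys1[-1]:
        return False  # no vertical overlap
    return True

a = [[0, 0], [100, 0], [100, 100], [0, 100]]
b = [[50, 50], [150, 50], [150, 150], [50, 150]]      # overlaps a
c = [[200, 200], [300, 200], [300, 300], [200, 300]]  # disjoint from a
print(boxes_overlap(a, b), boxes_overlap(a, c))  # True False
```

Two rectangles are disjoint exactly when they are separated along at least one axis, which is why two one-dimensional checks suffice.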
The second helper function calculates the distance between the centers of two bounding boxes:
# Helper function to calculate the distance between the center points of two
# detections
def get_distance(det1: bf_codecs.Detection,
det2: bf_codecs.Detection) -> float:
"""
:param det1: First Detection
:param det2: Second Detection
:return: Distance between the center of the two Detections
"""
return math.hypot(det1.center[0] - det2.center[0],
det1.center[1] - det2.center[1])
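A quick check of the distance calculation with a minimal stand-in for Detection (only the center attribute is needed; FakeDetection and center_distance are illustrative names, not part of the BrainFrame API):

```python
import math

class FakeDetection:
    """Minimal stand-in for bf_codecs.Detection, exposing only `center`."""
    def __init__(self, center):
        self.center = center

def center_distance(det1, det2):
    # Same math.hypot computation as the helper above
    return math.hypot(det1.center[0] - det2.center[0],
                      det1.center[1] - det2.center[1])

d = center_distance(FakeDetection((0, 0)), FakeDetection((3, 4)))
print(d)  # 5.0 -- a 3-4-5 right triangle
```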
Create a New Stream from Local File¶
First, initialize the API instance and connect to the server.
# Initialize the API
api = BrainFrameAPI("http://localhost")
Then, we want to start a video stream. You can find the sample video in our tutorial repository.
# Upload the local file to the database and get its storage ID
storage_id = api.new_storage(
data=Path("../videos/social_distancing.mp4").read_bytes(),
mime_type="application/octet-stream"
)
# Create a Stream Configuration referencing the new storage ID
new_stream_config = bf_codecs.StreamConfiguration(
# The display name on the client side
name="Demo",
# This stream will be from a file
connection_type=bf_codecs.StreamConfiguration.ConnType.FILE,
# The storage ID of the file
connection_options={
"storage_id": storage_id,
},
runtime_options={},
premises_id=None,
)
# Tell the server to connect to that stream configuration
new_stream_config = api.set_stream_configuration(new_stream_config)
Next, we will configure some capsule options, instead of using the defaults, to filter out some bad detections.
# Filter out duplicate detections
api.set_plugin_option_vals(
plugin_name="detector_people_and_vehicles_fast",
stream_id=new_stream_config.id,
option_vals={
# If one bounding box overlaps another by more than 80%, we assume they
# are really the same detection, and the duplicate is ignored.
"max_detection_overlap": 0.8,
"threshold": 0.9
}
)
Finally, don't forget to tell BrainFrame to start analyzing/performing inference on the stream.
# Start analysis on the stream
api.start_analyzing(new_stream_config.id)
Check Social Distancing Rules¶
Next, similar to the WeChat Notification tutorial, we will get the zone status iterator. We will iterate through all of its zone statuses, checking against the social distancing rules we defined above.
In the WeChat Notification tutorial, the operations on zone statuses were somewhat involved because of the nested data structure. In this tutorial, we will reorganize the data to make our calculations easier.
# Verify that there is at least one connected stream
assert len(api.get_stream_configurations()), \
"There should be at least one stream already configured!"
# Minimum allowed distance between two people, in pixels (500 by default;
# in the complete script this is configurable via a command-line argument)
min_distance = 500
# Get the inference stream.
for zone_status_packet in api.get_zone_status_stream():
# Organize detections results as a dictionary of
# {stream_id: [Detections]}.
detections_per_stream = {
stream_id: zone_status.within
for stream_id, zone_statuses in zone_status_packet.items()
for zone_name, zone_status in zone_statuses.items()
if zone_name == "Screen"
}
# Iterate over each stream_id/detections combination
for stream_id, detections in detections_per_stream.items():
# Filter out Detections that are not people
detections = [detection for detection in detections
if detection.class_name == "person"]
# Skip stream frame if there are no person detections
if len(detections) == 0:
continue
# Compare the distance between each pair of detections.
for i, current_detection in enumerate(detections):
violating = False
for j in range(i + 1, len(detections)):
target_detection = detections[j]
current_detection: bf_codecs.Detection
target_detection: bf_codecs.Detection
# If the bboxes representing two people overlap, the distance is 0;
# otherwise it's the distance between the centers of the two bboxes.
if is_overlapped(current_detection, target_detection):
distance = 0
else:
distance = get_distance(current_detection, target_detection)
if distance < min_distance:
print(f"People are violating the social distancing rules, "
f"current distance: {distance}, location: "
f"{current_detection.coords}, "
f"{target_detection.coords}")
violating = True
break
if violating:
break
Now, whenever people violate our social distancing rules, our script will print a message, including where they are located in the frame.
People are violating the social distancing rules, current distance: 499.8899878973373, location: [[30, 340], [411, 340], [411, 926], [30, 926]], [[571, 238], [828, 238], [828, 742], [571, 742]]
Introduction¶
This tutorial will guide you through the process of downloading one of our freely available OpenVisionCapsules capsules and adding it to BrainFrame. We will be installing our simple face detector capsule, which works on all platforms, even those without a GPU.
Downloading the Capsule¶
On the computer that is hosting the BrainFrame server, navigate to our
downloads page and under Capsules, locate the
Detector Face Fast
entry. Click the link to download the capsule.
Adding the Capsule to BrainFrame¶
In the server's data directory (/var/local/brainframe by default), there should be a directory called capsules/. If the capsules/ directory does not exist, create it. Place the capsule file that you just downloaded (detector_face_fast.cap) within this directory.
Note: If you do not know the location of BrainFrame's data directory, you can get it directly using the BrainFrame CLI.
mv PATH/TO/detector_face_fast.cap $(brainframe info data_path)/capsules/
An alternative is to download the capsule directly to the capsules/ directory:
wget -P $(brainframe info data_path)/capsules {DOWNLOAD_URL}
Verifying That the Capsule Works¶
The capsule should now be ready for use by BrainFrame. Let's open the client and make sure everything is working properly.
Open the BrainFrame client and then open the Global Capsule Configuration dialog. You should see an entry for the Detector Face Fast capsule, with configuration options.
Once you load a stream, you will be able to see the inference results on the Streams view.
Introduction¶
In this tutorial, we will walk you through the creation of a basic capsule. If you get stuck along the way or simply want to view the end-result of the tutorial, you can find the completed capsule on our GitHub repository.
Before we start, we highly recommend reading the OpenVisionCapsules Documentation. It will give you a bit of background information about capsules.
Set Up Environment¶
To develop your own capsule, you will need to install vcap and vcap_utils, a set of Python libraries for encapsulating machine learning and computer vision algorithms for intelligent video analytics. They can be found on GitHub here.
pip3 install vcap vcap_utils
You might also want to download a few of our open-source capsules:
git clone https://github.com/aotuai/capsule_zoo.git
Creating a Basic Capsule¶
In this tutorial, we will create a fake capsule. It isn't going to perform any inference; instead, it will return some fake values. The purpose of this example is to help you understand our capsule system better.
Directory structure¶
First, let's create a folder called detector_bounding_box_fake under the capsules directory, which sits alongside your docker-compose file. Also create a meta.conf and a capsule.py under this directory. The resulting structure will look like:
your_working_directory
├── docker-compose.yml
└── capsules
└── detector_bounding_box_fake
├── meta.conf
└── capsule.py
For more information about the structure, you can check the documentation here.
Capsule Metadata¶
The meta.conf file provides basic information about the capsule to BrainFrame before the rest of the capsule is loaded. In our meta.conf, we are going to define the version of the OpenVisionCapsules SDK that this capsule will be compatible with.
[about]
api_compatibility_version = 0.3
This number should be the same as the Major.Minor version from
pip3 show vcap | grep Version
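The same Major.Minor value can be extracted programmatically. A sketch using the standard library's importlib.metadata (the fallback string is only an example for when vcap is not installed):

```python
from importlib import metadata

try:
    full_version = metadata.version("vcap")
except metadata.PackageNotFoundError:
    full_version = "0.3.1"  # example fallback; use your installed version

# Keep only the Major.Minor part, e.g. "0.3.1" -> "0.3"
major_minor = ".".join(full_version.split(".")[:2])
print(major_minor)
```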
Capsule¶
In capsule.py, we define a class called Capsule, which will define the actual behavior of the capsule. The Capsule class provides metadata that allows BrainFrame to understand the capabilities of the capsule and how it can be used, and must inherit from BaseCapsule. For more information about the BaseCapsule class, see the documentation here.
We'll import the dependencies from vcap first:
from vcap import BaseCapsule, NodeDescription, BaseBackend, DetectionNode
Then we'll define the Capsule class as a sub-class of BaseCapsule.
# Define the Capsule class
class Capsule(BaseCapsule):
# Metadata of this capsule
name = "detector_bounding_box_fake"
description = "A fake detector that outputs a single bounding box"
version = 1
# Define the input type. As this is an object detector, and does not require
# any input from other capsules, the input type will be a NodeDescription
# with size=NONE
input_type = NodeDescription(size=NodeDescription.Size.NONE)
# Define the output type. In this case we are going to return a list of
# bounding boxes, so the output type will be size=ALL
output_type = NodeDescription(
size=NodeDescription.Size.ALL,
detections=["fake_box"],
)
# Define the backend. In this example, we are going to use a fake Backend,
# defined below
backend_loader = lambda capsule_files, device: Backend(
capsule_files=capsule_files, device=device)
options = {}
Backend¶
Now let's create a Backend class. The Backend class defines how the underlying algorithm is initialized and used. For more information about Backend classes, please refer to the OpenVisionCapsules Documentation.
# Define the Backend Class
class Backend(BaseBackend):
# Since this is a fake Backend, we are not going to do any fancy stuff in
# the constructor.
def __init__(self, capsule_files, device):
print("Loading onto device:", device)
super().__init__()
# In a real capsule, this function will be performing inference or running
# algorithms. For this tutorial, we are just going to return a single, fake
# bounding box.
def process_frame(self, frame, detection_node: None, options, state):
return [
DetectionNode(
name="fake_box",
coords=[[10, 10], [100, 10], [100, 100], [10, 100]]
)
]
# Batch processing can be used to improve performance; we will skip it in
# this example.
def batch_predict(self, input_data_list):
pass
# This function can be implemented to perform clean-up. It must be defined,
# so for this tutorial we leave it as a no-op.
def close(self) -> None:
pass
The fake capsule is now complete. If you restart your BrainFrame server, you will be able to see it loaded.
If you load a stream, you will be able to see the inference results.
Introduction¶
In this tutorial, we will walk through how to make a capsule using an existing model trained with the TensorFlow Object Detection API. You can find the complete capsule on our GitHub repository.
Setup The Environment¶
See the previous tutorial for information on setting up a development environment.
A TensorFlow Face Detection Capsule¶
File Structure¶
As in the previous tutorial, we will begin by creating a new folder called detector_face, with a meta.conf and a capsule.py inside.
You will also need to put the existing TensorFlow model and its metadata in the directory. For this tutorial, they will be named detector.pb and dataset_metadata.json. Download detector.pb and dataset_metadata.json from here. Other TensorFlow pre-trained models can be found in the TensorFlow 1 Object Detection Model Zoo and the TensorFlow 2 Object Detection Model Zoo.
So now the file structure will look like:
your_working_directory
├── docker-compose.yml
└── capsules
└── detector_face
├── meta.conf
├── capsule.py
├── detector.pb
└── dataset_metadata.json
Capsule Metadata¶
Just as in the previous tutorial, put the version information in the meta.conf:
[about]
api_compatibility_version = 0.3
Capsule¶
First, import the dependencies:
# Import dependencies
import numpy as np
from typing import Dict
from vcap import (
BaseCapsule,
NodeDescription,
DetectionNode,
FloatOption,
DETECTION_NODE_TYPE,
OPTION_TYPE,
BaseStreamState,
rect_to_coords,
)
from vcap_utils import TFObjectDetector
The capsule definition will be a little more complicated than the previous one. This capsule will have a threshold option. In addition, since we are using a real backend, we will pass in a lambda for backend_loader. We will talk more about this in the Backend section below.
# Define the Capsule class
class Capsule(BaseCapsule):
# Metadata of this capsule
name = "face_detector"
description = "This is an example of how to wrap a TensorFlow Object " \
"Detection API model"
version = 1
# Define the input type. Since this is an object detector, and doesn't
# require any input from other capsules, the input type will be a
# NodeDescription with size=NONE.
input_type = NodeDescription(size=NodeDescription.Size.NONE)
# Define the output type. In this case, as we are going to return a list of
# bounding boxes, the output type will be size=ALL. The type of detection
# will be "face", and we will place the detection confidence in extra_data.
output_type = NodeDescription(
size=NodeDescription.Size.ALL,
detections=["face"],
extra_data=["detection_confidence"]
)
# Define the backend_loader
backend_loader = lambda capsule_files, device: Backend(
device=device,
model_bytes=capsule_files["detector.pb"],
metadata_bytes=capsule_files["dataset_metadata.json"])
# The options for this capsule. In this example, we will allow the user to
# set a threshold for the minimum detection confidence. This can be adjusted
# using the BrainFrame client or through REST API.
options = {
"threshold": FloatOption(
description="Filter out bad detections",
default=0.5,
min_val=0.0,
max_val=1.0,
)
}
Backend¶
Because we are using a TensorFlow model, we are going to use a sub-class of TFObjectDetector instead of BaseBackend. The TFObjectDetector class will conveniently do the following for us:
- Load the model bytes into memory
- Perform batch inference
- Close the model and clean up the memory when finished
TFObjectDetector already defines the constructor, batch_process() and close() methods for us, so we can skip defining them ourselves. We just need to handle the process_frame() method.
# Define the Backend Class
class Backend(TFObjectDetector):
def process_frame(self, frame: np.ndarray,
detection_node: None,
options: Dict[str, OPTION_TYPE],
state: BaseStreamState) -> DETECTION_NODE_TYPE:
"""
:param frame: A numpy array of shape (height, width, 3)
:param detection_node: None
:param options: Example: {"threshold": 0.5}. Defined in Capsule class above.
:param state: (Unused in this capsule)
:return: A list of detections
"""
# Send the frame to the BrainFrame backend. This function will return a
# queue. BrainFrame will batch_process() received frames and populate
# the queue with the results.
prediction_output_queue = self.send_to_batch(frame)
# Wait for predictions
predictions = prediction_output_queue.get()
# Iterate through all the predictions received in this frame
detection_nodes = []
for prediction in predictions:
# Filter out detections that are not faces.
if prediction.name != "face":
continue
# Filter out detections with low confidence.
if prediction.confidence < options["threshold"]:
continue
# Create a DetectionNode for the prediction. It will be reused by
# any other capsules that require a face DetectionNode in their
# input type. An age classifier capsule would be an example of such
# a capsule.
new_detection = DetectionNode(
name=prediction.name,
# convert [x1, y1, x2, y2] to [[x1,y1], [x1, y2]...]
coords=rect_to_coords(prediction.rect),
extra_data={"detection_confidence": prediction.confidence}
)
detection_nodes.append(new_detection)
return detection_nodes
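The rect_to_coords call above expands a flat [x1, y1, x2, y2] rectangle into four corner points. A standalone sketch of that conversion (not the vcap implementation itself; the corner ordering shown is illustrative):

```python
def rect_to_coords_sketch(rect):
    """Convert [x1, y1, x2, y2] into four [[x, y], ...] corner points."""
    x1, y1, x2, y2 = rect
    # Top-left, top-right, bottom-right, bottom-left
    return [[x1, y1], [x2, y1], [x2, y2], [x1, y2]]

coords = rect_to_coords_sketch([10, 20, 110, 220])
print(coords)  # [[10, 20], [110, 20], [110, 220], [10, 220]]
```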
When you restart BrainFrame, your capsule will be packaged into a .cap file and initialized. You'll see its information in the BrainFrame client.
Once you load a stream, you will be able to see the inference results.
Introduction¶
This tutorial will guide you through encapsulating an OpenVINO object detector model. For this tutorial, we will be using the person-vehicle-bike-detection-crossroad-1016 model from the Open Model Zoo, but the concepts shown here will work for all OpenVINO object detectors. You can find the complete capsule on the Capsule Zoo.
See the previous tutorial for information on setting up a development environment.
Getting Started¶
We will start by creating a directory where all our capsule code and model files will reside. By convention, capsule names start with a short description of the role the capsule plays, followed by the kinds of objects they operate on, and finally some kind of differentiating information about the capsule's intended use or implementation. We will name this capsule detector_person_vehicle_bike_openvino and create a directory with that name.
Then, we will add a meta.conf file, which will let the application loading the capsule know what version of the OpenVisionCapsules API this capsule requires. OpenVINO support was significantly improved in version 0.2.x, so we will require at least that minor version of the API:
[about]
api_compatibility_version = 0.3
We will also add the weights and model files into this directory so they can be loaded by the capsule. After these steps, your data directory should look like this:
your_data_directory
├── volumes
└── capsules
└── detector_person_vehicle_bike_openvino
├── person-vehicle-bike-detection-crossroad-1016-fp32.bin
├── person-vehicle-bike-detection-crossroad-1016-fp32.xml
└── meta.conf
The Capsule Class¶
Next, we will define the Capsule class. This class provides the application with information about your capsule. The class must be named Capsule and the file it is defined in must be named capsule.py. We will create that file in the capsule directory with the following contents:
from vcap import (
BaseCapsule,
NodeDescription,
DeviceMapper,
common_detector_options
)
from .backend import Backend
class Capsule(BaseCapsule):
name = "detector_person_vehicle_bike_openvino"
description = ("OpenVINO person, vehicle, and bike detector. Optimized "
"for surveillance camera scenarios.")
version = 1
device_mapper = DeviceMapper.map_to_openvino_devices()
input_type = NodeDescription(size=NodeDescription.Size.NONE)
output_type = NodeDescription(
size=NodeDescription.Size.ALL,
detections=["vehicle", "person", "bike"])
backend_loader = lambda capsule_files, device: Backend(
model_xml=capsule_files[
"person-vehicle-bike-detection-crossroad-1016-fp32.xml"],
weights_bin=capsule_files[
"person-vehicle-bike-detection-crossroad-1016-fp32.bin"],
device_name=device
)
options = common_detector_options
In this file, we have defined a Capsule class that subclasses from BaseCapsule and defines some fields. The name field reflects the name of the capsule directory, and the description field is a short, human-readable description of the capsule's purpose. The other fields are a bit more complex, so let's break each one down.
version = 1
This is the capsule's version (not to be confused with the version of the OpenVisionCapsules API defined in the meta.conf). Since this is the first version of our capsule, we'll start it at 1. The version field can be used as a way to distinguish between different revisions of the same capsule. This field has no semantic meaning to BrainFrame and can be incremented as the capsule developer sees fit. Some developers may choose to increment it with every iteration; others only when significant changes have occurred.
device_mapper = DeviceMapper.map_to_openvino_devices()
This device mapper will map our backends to any available OpenVINO-compatible devices, like the Intel Neural Compute Stick 2 or the CPU.
input_type = NodeDescription(size=NodeDescription.Size.NONE)
This detector capsule requires no output from any other capsules in order to run. All it needs is the video frame.
output_type = NodeDescription(
size=NodeDescription.Size.ALL,
detections=["vehicle", "person", "bike"])
This detector provides "vehicle", "person", and "bike" detections as output and is expected to detect all vehicles, people, and bikes in the video frame.
backend_loader = lambda capsule_files, device: Backend(
model_xml=capsule_files[
"person-vehicle-bike-detection-crossroad-1016-fp32.xml"],
weights_bin=capsule_files[
"person-vehicle-bike-detection-crossroad-1016-fp32.bin"],
device_name=device
)
Here we define a lambda function that creates an instance of a Backend class with the model and weights files, as well as the device this backend will run on. We will define this Backend class in the next section.
options = common_detector_options
We give this capsule some basic options that are common among most detector capsules.
With this new capsule.py
file added, your capsule directory should look
like this:
your_data_directory
├── volumes
└── capsules
└── detector_person_vehicle_bike_openvino
├── capsule.py
├── person-vehicle-bike-detection-crossroad-1016-fp32.bin
├── person-vehicle-bike-detection-crossroad-1016-fp32.xml
└── meta.conf
The Backend Class¶
Finally, we will define the Backend
class. This class defines how the
capsule runs analysis on video frames. An instance of this class will be
created for every device the capsule runs on. The Backend
class doesn't
have to be defined in any specific location, but we will add it to a new file
called backend.py
with the following contents:
from typing import Dict
import numpy as np
from vcap import (
DETECTION_NODE_TYPE,
OPTION_TYPE,
BaseStreamState)
from vcap_utils import BaseOpenVINOBackend
class Backend(BaseOpenVINOBackend):
label_map: Dict[int, str] = {1: "vehicle", 2: "person", 3: "bike"}
def process_frame(self, frame: np.ndarray,
detection_node: DETECTION_NODE_TYPE,
options: Dict[str, OPTION_TYPE],
state: BaseStreamState) -> DETECTION_NODE_TYPE:
input_dict, resize = self.prepare_inputs(frame)
prediction = self.send_to_batch(input_dict).result()
detections = self.parse_detection_results(
prediction, resize, self.label_map,
min_confidence=options["threshold"])
return detections
Our Backend
class subclasses from BaseOpenVINOBackend
. This backend
handles loading the model into memory from the given files, implements batching,
and provides utility methods that make writing OpenVINO backends easy. All we
need to do is define the process_frame
method. Let's take a look at each
call in the method body.
input_dict, resize = self.prepare_inputs(frame)
This line prepares the given video frame to be fed into the model. The video frame is resized to fit in the model and formatted in the way the model expects. Also provided is a resize object, which contains the necessary information to map the resulting detections to the coordinate system of the originally sized video frame.
This method assumes that your OpenVINO model expects images in the format (num_channels, height, width) and expects the frame to be in a dict with the key being the network's input name. Ensure that your model follows this convention before using this method.
prediction = self.send_to_batch(input_dict).result()
Next, the input data is sent into the model for batch processing. The call to
result()
causes the backend to block until the result is ready. The result
is an object containing raw OpenVINO prediction information.
detections = self.parse_detection_results(
prediction, resize, self.label_map,
min_confidence=options["threshold"])
return detections
Finally, the results go through post-processing. Detections with a low confidence are filtered out, raw class IDs are converted to human-readable class names, and the results are scaled up to fit the size of the original video frame.
Wrapping Up¶
With the meta.conf, Capsule class, Backend class, and model files, the capsule is now complete! Your data directory should look something like this:
your_data_directory
├── volumes
└── capsules
└── detector_person_vehicle_bike_openvino
├── backend.py
├── capsule.py
├── person-vehicle-bike-detection-crossroad-1016-fp32.bin
├── person-vehicle-bike-detection-crossroad-1016-fp32.xml
└── meta.conf
When you restart BrainFrame, your capsule will be packaged into a .cap
file
and initialized. You'll see its information on the BrainFrame client.
Load up a video stream to see detection results.
Introduction¶
This tutorial will guide you through encapsulating an OpenVINO classifier model. For this tutorial, we will be using the vehicle-attributes-recognition-barrier-0039 model from the Open Model Zoo, but the concepts shown here apply to all OpenVINO classifiers. You can find the complete capsule on the Capsule Zoo. This model is able to classify the color of a detected vehicle.
This capsule will rely on the detector created in the previous tutorial to find vehicles in the video frame before they can be classified.
Getting Started¶
Like in the previous tutorial, we will create a new directory for the classifier
capsule. This time we will name it classifier_vehicle_color_openvino
. We will
also add a meta.conf
with the same contents, declaring that our capsule
relies on version 0.3 or higher of OpenVisionCapsules.
[about]
api_compatibility_version = 0.3
We will also add the weights and model files to this directory so that they can be loaded by the capsule.
The Capsule Class¶
The Capsule class defined here will be very similar in structure to the one in the detector capsule.
from vcap import BaseCapsule, NodeDescription, DeviceMapper
from .backend import Backend
from . import config
class Capsule(BaseCapsule):
name = "classifier_vehicle_color_openvino"
description = "OpenVINO vehicle color classifier."
version = 1
device_mapper = DeviceMapper.map_to_openvino_devices()
input_type = NodeDescription(
size=NodeDescription.Size.SINGLE,
detections=["vehicle"])
output_type = NodeDescription(
size=NodeDescription.Size.SINGLE,
detections=["vehicle"],
attributes={"color": config.colors})
backend_loader = lambda capsule_files, device: Backend(
model_xml=capsule_files[
"vehicle-attributes-recognition-barrier-0039.xml"],
weights_bin=capsule_files[
"vehicle-attributes-recognition-barrier-0039.bin"],
device_name=device
)
Let's take a look at some of the differences between this Capsule class and the detector's.
input_type = NodeDescription(
size=NodeDescription.Size.SINGLE,
detections=["vehicle"])
This capsule takes vehicle detections produced by the detector capsule as input. Each vehicle found in the video frame is processed one at a time.
output_type = NodeDescription(
size=NodeDescription.Size.SINGLE,
detections=["vehicle"],
attributes={"color": config.colors})
This capsule provides a vehicle detection with a color attribute as output. Note
that classifier capsules do not create new detections. Instead, they augment the
detections provided to them by other capsules. We've moved the list of colors
out into a separate config.py
file so that it can also be referenced by the
backend, which we will define in the next section.
# config.py
colors = ["white", "gray", "yellow", "red", "green", "blue", "black"]
You may have noticed that this capsule does not have any options. The options
field can be omitted when the capsule doesn't have any parameters that can be
modified at runtime.
The Backend Class¶
We will once again create a file called backend.py
where the Backend class
will be defined. It will still subclass BaseOpenVINOBackend
and we will only
need to implement the process_frame
method.
from collections import namedtuple
from typing import Dict
import numpy as np
from vcap import (
Resize,
DETECTION_NODE_TYPE,
OPTION_TYPE,
BaseStreamState)
from vcap_utils import BaseOpenVINOBackend
from . import config
class Backend(BaseOpenVINOBackend):
def process_frame(self, frame: np.ndarray,
detection_node: DETECTION_NODE_TYPE,
options: Dict[str, OPTION_TYPE],
state: BaseStreamState) -> DETECTION_NODE_TYPE:
crop = Resize(frame).crop_bbox(detection_node.bbox).frame
input_dict, _ = self.prepare_inputs(crop)
prediction = self.send_to_batch(input_dict).result()
max_color = config.colors[prediction["color"].argmax()]
detection_node.attributes["color"] = max_color
Let's review this method line-by-line.
crop = Resize(frame).crop_bbox(detection_node.bbox).frame
Capsules always receive the entire video frame, so we need to start by cropping the frame to the detected vehicle.
input_dict, _ = self.prepare_inputs(crop)
We then prepare the cropped video frame to be fed into the model. The video frame is resized to fit into the model and formatted in the way the model expects. We can ignore the second return value, the resize object, because classifiers don't provide any coordinates that need adjusting.
prediction = self.send_to_batch(input_dict).result()
Next, the input data is sent into the model for batch processing. The call to
result()
causes the backend to block until the result is ready. The result
is an object containing raw OpenVINO prediction information.
max_color = config.colors[prediction["color"].argmax()]
We then pull the color information from the prediction, and choose the color
with the highest confidence. We then convert the color from its integer
representation to a human-readable string using the colors
list defined in
config.py
.
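The argmax step can be illustrated without OpenVINO or NumPy. In the sketch below, the scores are hypothetical confidence values, one per entry in the colors list from config.py:

```python
colors = ["white", "gray", "yellow", "red", "green", "blue", "black"]

def most_likely_color(scores):
    """Return the color whose confidence score is highest (argmax)."""
    best_index = max(range(len(scores)), key=lambda i: scores[i])
    return colors[best_index]
```

In the real backend, the scores come from `prediction["color"]` and the argmax is computed by NumPy, but the mapping from index to human-readable name is the same.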
detection_node.attributes["color"] = max_color
Finally, we augment the vehicle detection with the new "color" attribute. This capsule does not need to return anything because no new detections have been created.
Wrapping Up¶
Finally, the capsule is complete! Your data directory should look something like this:
your_data_directory
├── volumes
└── capsules
└── classifier_vehicle_color_openvino
├── backend.py
├── capsule.py
├── config.py
├── meta.conf
├── vehicle-attributes-recognition-barrier-0039.bin
└── vehicle-attributes-recognition-barrier-0039.xml
When you restart BrainFrame, your capsule will be packaged into a .cap
file
and initialized. You'll see its information on the BrainFrame client.
Load up a video stream to see classification results.
Ended: Capsules
Ended: Tutorials
Advanced Usage
Introduction¶
The BrainFrame server uses a docker-compose.yml
file to configure
many aspects of its runtime behavior. Some options may be changed by
setting environment variables in a .env
file, placed in the same
directory as the docker-compose.yml
file.
Any options not exposed here may be overridden by creating a
docker-compose.override.yml
file in the same directory. Configuration
written here will be applied over the original docker-compose.yml
.
Port Configuration¶
BrainFrame makes several ports available to the host environment by
default: the API and documentation on port 80, the Postgres database on
port 5432, the StreamGateway server on port 8004, and RabbitMQ on port
5672. If these ports
conflict with other software running on the host machine, they can be
changed by setting the SERVER_PORT
, DATABASE_PORT
,
STREAM_GATEWAY_PORT
, and RABBITMQ_PORT
variables in the .env
file.
BrainFrame may also proxy video streams to ports in the range 10000-20000. At this time, there is no way to reconfigure these ports.
SERVER_PORT=80
DATABASE_PORT=5432
STREAM_GATEWAY_PORT=8004
RABBITMQ_PORT=5672
Authorization Configuration¶
By default, BrainFrame does not authorize clients and all clients have admin
permissions. If your server is being deployed in a network where access control
is desirable, authorization can be turned on using the AUTHORIZE_CLIENTS
variable in the .env
file.
AUTHORIZE_CLIENTS=true
Warning
The admin user is given a default password of "admin". This should be changed to a secure and unique password for public deployments.
Currently, the admin user's password may only be changed through the REST API.
The following is an example curl
command for doing this. Replace [hostname]
with the hostname of the BrainFrame server and [new password]
with the desired
password.
curl 'http://[hostname]/api/users' \
--user 'admin:admin' \
--request POST \
--header 'Content-Type: application/json' \
--data '{
"id": 1,
"username": "admin",
"password": "[new password]",
"role": "admin"
}'
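The same request can be made from Python using only the standard library. The sketch below mirrors the curl command (same URL path, JSON payload, and basic-auth credentials); it builds the request without sending it:

```python
import base64
import json
import urllib.request

def password_change_request(hostname: str,
                            new_password: str) -> urllib.request.Request:
    """Build the POST request that changes the admin user's password."""
    body = {"id": 1, "username": "admin",
            "password": new_password, "role": "admin"}
    # Authenticate with the current (default) credentials
    auth = base64.b64encode(b"admin:admin").decode()
    return urllib.request.Request(
        f"http://{hostname}/api/users",
        data=json.dumps(body).encode(),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Basic {auth}",
        },
        method="POST",
    )

# To actually send it:
# urllib.request.urlopen(password_change_request("localhost", "new password"))
```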
User Configuration¶
BrainFrame is designed to run using the account of the current non-root user. By
default, 1000 is used for both the user ID and group ID, which matches the
default on most Linux systems. These IDs may be adjusted using the UID
and
GID
variables in the .env
file.
UID=1001
GID=1001
To check the IDs of the currently logged in user, run id -u
for the UID and
id -g
for the GID.
Journal Pruning¶
BrainFrame records analytics results to a Postgres database. Over a long period of time, this can result in a lot of data. To avoid unbounded storage use, BrainFrame prunes journal entries over time and deletes journal entries that are past a certain age.
Journal pruning behavior is controlled by the "pruning age" and "pruning
fraction" variables. The pruning age controls how old a journal entry
must be before it becomes a candidate for pruning. This value also
controls at what interval pruning is run. The pruning fraction variable
controls what portion of journal entries are pruned each time pruning is
run. The pruning fraction variable is a value between 0 and 1, where 0
results in no pruning, and 1 results in the deletion of all journaling
information past the pruning age. These variables may be configured by
setting the PRUNING_AGE
(specified as a duration)
and PRUNING_FRACTION
variables in the .env
file.
# Start pruning journal entries after 1 hour, and run pruning every hour
PRUNING_AGE=0d1h0m
# Prune 5% of all journal entries that are past the pruning age every run
PRUNING_FRACTION=0.05
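To get a feel for these numbers: assuming each pruning run removes the same fraction of the entries that remain past the pruning age, the surviving share after n runs is (1 - fraction)^n. A quick sketch:

```python
def surviving_fraction(pruning_fraction: float, runs: int) -> float:
    """Share of prune-eligible entries left after `runs` pruning passes,
    assuming each pass removes `pruning_fraction` of what remains."""
    return (1.0 - pruning_fraction) ** runs

# With PRUNING_FRACTION=0.05 and hourly runs, roughly 29% of the
# prune-eligible entries are still present after one day:
# surviving_fraction(0.05, 24) ≈ 0.29
```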
All journaling information is deleted after it reaches the journal max
age. This value may be configured by setting the JOURNAL_MAX_AGE
variable (specified as a duration) in the .env
file.
# Keep journaling information for 60 days
JOURNAL_MAX_AGE=60d0h0m
Duration Format¶
Settings that specify a duration are in the format XdYhZm
, where X is
the number of days, Y is the number of hours, and Z is the number of
minutes.
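As an illustration, the XdYhZm format can be parsed with a few lines of Python (this helper is hypothetical, not part of BrainFrame):

```python
import re

def parse_duration(value: str) -> int:
    """Parse an XdYhZm duration string into a total number of minutes."""
    match = re.fullmatch(r"(\d+)d(\d+)h(\d+)m", value)
    if match is None:
        raise ValueError(f"invalid duration: {value!r}")
    days, hours, minutes = (int(part) for part in match.groups())
    return (days * 24 + hours) * 60 + minutes

# PRUNING_AGE=0d1h0m is one hour; JOURNAL_MAX_AGE=60d0h0m is 60 days.
```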
AI Accelerator Configuration¶
At the moment, the only AI accelerator configuration available is for OpenVINO
devices. The allowed devices and their priority can be changed with the
OPENVINO_DEVICE_PRIORITY
variable.
# Block any device except for CPU
OPENVINO_DEVICE_PRIORITY=CPU
# Load onto both CPU and HDDL, giving priority to CPU
OPENVINO_DEVICE_PRIORITY=CPU,HDDL
# Load onto both CPU and HDDL, giving priority to HDDL
OPENVINO_DEVICE_PRIORITY=HDDL,CPU
Before making a BrainFrame server publicly accessible, some additional configuration is required.
Authorization¶
By default, BrainFrame does not authorize clients. Authorization should always be turned on for public deployments to prevent unauthorized access. See this section on authorization configuration for more information.
Warning
Be sure that the admin user's default password has been changed before continuing.
Port Forwarding¶
BrainFrame requires that certain ports are forwarded so that the client and other external programs may establish connections to it. Below is a table of ports BrainFrame uses and their purpose. For ways to reconfigure these ports, see this section on port configuration.
Port | Purpose |
---|---|
80 | BrainFrame API, dashboard, documentation |
8004 | StreamGateway server communication |
5533 | RTSP streams for video files |
10000-10100 | StreamGateway server video streams |
Warning
BrainFrame also makes a Postgres server available on port 5432, but that port should not be forwarded for security reasons.
IP Camera streams can be configured with their own custom GStreamer pipelines, allowing for rich configuration of how the stream is processed. This section will not explain the intricacies of GStreamer pipelines as the official website provides excellent documentation on how these work. Instead, included are a few pipeline examples.
BrainFrame does quite a bit of work in the background to ensure that many different IP camera types are supported seamlessly. When using custom pipelines, more intimate knowledge of the IP camera stream is required compared to using BrainFrame normally.
Note that all custom pipelines:
- Must include a
{url}
template field. This is where the specified IP camera URL will be inserted into the pipeline. - Must have an
appsink
element named "main_sink". This is where frames will be extracted from the pipeline for processing. - May optionally include an element named "buffer_src". This is required for
frame skipping to work with custom
pipelines. This name should be given to an element in the pipeline
that sections off frame data from the network before decoding, like
rtph264depay
.
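The requirements above can be checked before pasting a pipeline into the client. Here is a small validation helper (hypothetical, not part of BrainFrame):

```python
def validate_pipeline(template: str) -> list:
    """Return a list of problems with a custom GStreamer pipeline template."""
    problems = []
    if "{url}" not in template:
        problems.append("missing the {url} template field")
    if 'name="main_sink"' not in template:
        problems.append('missing an appsink named "main_sink"')
    # buffer_src is optional, but without it frame skipping will not work:
    if 'name="buffer_src"' not in template:
        problems.append('no element named "buffer_src" (frame skipping disabled)')
    return problems
```

Running this against the example pipelines below returns an empty list for those that name a "buffer_src" element.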
To specify a custom pipeline, check the "Advanced Options" checkbox in the stream creation window and enter your pipeline into the "Pipeline" textbox.
Example Pipelines¶
Cropping the Video Stream¶
For composite video streams or for scenes that contain uninteresting sections, one may want to crop the video stream before processing. Here is an example of a custom pipeline to accomplish this for an H264 RTSP stream:
rtspsrc location="{url}" ! rtph264depay name="buffer_src" ! decodebin ! videocrop top=x left=x right=x bottom=x ! videoconvert ! video/x-raw,format=(string)BGR ! appsink name="main_sink"
This pipeline uses the videocrop element to crop the video by some configurable value. The "x" values should be replaced with the amount in pixels to crop from each side of the frame.
Lower Latency Streaming¶
By default, BrainFrame will "buffer" frames to ensure a more stable streaming experience. To prevent that, try the pipeline below:
rtspsrc location="{url}" latency=X ! rtph264depay name="buffer_src" ! decodebin ! videoconvert ! video/x-raw,format=(string)BGR ! appsink name="main_sink"
Replace the "X" in latency=X with the desired latency in milliseconds; use 0 for no buffering at all.
Rotating the Video Stream¶
rtspsrc location="{url}" ! rtph264depay ! avdec_h264 ! videoconvert ! videoflip video-direction=x ! videoconvert ! video/x-raw,format=(string)BGR ! appsink name="main_sink"
The "x" value selects the rotation method, such as 90r (90° clockwise), 180, or 90l (90° counter-clockwise).
Hardware Decoding with Multiple Nvidia GPUs¶
BrainFrame automatically detects when an Nvidia GPU is available and attempts to do hardware video decoding on it. Currently, video decoding is only done on the first available device. This means that if your machine has multiple Nvidia GPUs installed, only one of them will be utilized.
GStreamer dynamically creates decoder elements that allow you to choose which Nvidia GPU the work will be done on. Using H.264 as our example format:
nvh264dec
uses device 0nvh264device1dec
uses device 1nvh264device2dec
uses device 2- ... and so on
By using different decoder elements for each stream's custom pipeline, you can
distribute decoding work across multiple GPUs. For example, if you had fifteen
video streams and three GPUs, you might consider having the first five use
nvh264dec
, the next five use nvh264device1dec
, and the final five use
nvh264device2dec
.
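The mapping from CUDA device ID to decoder element name follows a simple pattern, which a hypothetical helper can capture (assuming H.264; substitute the codec name for other formats):

```python
def decoder_element(device_id: int, codec: str = "h264") -> str:
    """Name of the NVCODEC decoder element for a given CUDA device ID."""
    if device_id == 0:
        return f"nv{codec}dec"  # device 0 has no suffix
    return f"nv{codec}device{device_id}dec"

def assign_decoders(num_streams: int, num_gpus: int) -> list:
    """Spread streams across GPUs round-robin, one decoder name per stream."""
    return [decoder_element(i % num_gpus) for i in range(num_streams)]
```

With fifteen streams and three GPUs, assign_decoders(15, 3) alternates between nvh264dec, nvh264device1dec, and nvh264device2dec; the chunked assignment described above works equally well.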
The device IDs referenced here are CUDA device IDs. By default, CUDA orders devices from fastest to slowest, device 0 being the fastest. It is possible to change the way CUDA orders devices via an environment variable. See the official documentation for details.
Here is an example pipeline that uses device 1 to decode an H.264 RTSP stream:
rtspsrc location="{url}" ! rtph264depay name="buffer_src" ! h264parse ! nvh264device1dec ! glcolorconvert ! video/x-raw(memory:GLMemory),format=(string)BGR ! gldownload ! video/x-raw,format=(string)BGR ! appsink name="main_sink"
Warning
Current releases of the BrainFrame Client do not support using NVCODEC hardware decoding. Using these custom pipelines will cause streaming errors in the client as a result. We only recommend these pipelines for advanced use cases.
Frame skipping is a streaming mode available for IP cameras. Using frame skipping significantly increases the number of streams a single BrainFrame instance can handle at the cost of framerate.
Frame skipping allows BrainFrame to decode a significantly smaller number of frames, cutting down on decoding overhead. As of now, the resulting framerate will depend on the keyframe interval of the video stream, which can often be configured in the settings of an IP camera or NVR.
Frame skipping can be found under "Advanced Options" when creating an IP camera stream.
If you are specifying a custom pipeline, frame skipping will only work if an element in the pipeline is named "buffer_src". See the page on custom pipelines for details.
Some customers may prefer to deploy BrainFrame on a machine that does not have internet access. This document describes how that may be accomplished, assuming that a separate machine with internet access is available.
Save Docker Images¶
Start by deploying BrainFrame on a separate development machine using the instructions found on the Getting Started page.
When BrainFrame is running, open another terminal and list the containers used for the deployment:
docker ps
The list of containers should look like this:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
c462fa89dc72 aotuai/brainframe_core:0.25.2 "./brainframe_server…" 4 hours ago Exited (0) 3 hours ago release_api_1
54b731bc2a04 aotuai/brainframe_http_proxy:0.25.2 "nginx -g 'daemon of…" 4 hours ago Exited (0) 3 hours ago release_proxy_1
d6ad0f9e0675 aotuai/brainframe_docs:0.25.2 "nginx -g 'daemon of…" 4 hours ago Exited (0) 3 hours ago release_docs_1
4894246049a0 postgres:9.6.17-alpine "/entrypoint.sh mysq…" 4 hours ago Exited (0) 3 hours ago release_database_1
ac564e32f7eb aotuai/brainframe_dashboard:0.25.2 "/run.sh" 4 hours ago Exited (0) 3 hours ago release_dashboard_1
The images used by the containers above are the ones we need to save. Ignore any other containers you may have running at the same time.
The next step is to save those images by running the
docker save
command:
docker save IMAGE [IMAGE...] -o OUTPUT
For example, in this case, you should run:
docker save \
aotuai/brainframe_core:0.25.2 \
aotuai/brainframe_http_proxy:0.25.2 \
aotuai/brainframe_docs:0.25.2 \
postgres:9.6.17-alpine \
aotuai/brainframe_dashboard:0.25.2 \
-o brainframe
Now all the images we need are saved to a file named brainframe in the current directory.
Load Docker Images¶
Once you have the packaged Docker images, copy the file to the offline machine and load it:
docker load -i brainframe
Python API¶
Introduction¶
The BrainFrame Python API is a wrapper around the REST API to make it easier for Python applications to integrate with BrainFrame. The Python API is completely open source and available on Github. Reference documentation and examples for the Python API can be found on ReadTheDocs.
Applications not written in Python can interact with BrainFrame directly through the REST API.
Installation¶
The BrainFrame Python API is available on PyPI for Python 3.6 and newer.
pip3 install brainframe-api
We recommend installing the Python API in a virtualenv to avoid interference with other projects on the same system.
Introduction¶
If you are located in mainland China, you might have a hard time pulling Docker images. You can speed things up by using the Docker registry mirror hosted by USTC.
Configure Docker Daemon¶
You can configure the Docker daemon using a JSON file. Usually it's located at
/etc/docker/daemon.json
; if it doesn't exist, create it. Then, add
"https://docker.mirrors.ustc.edu.cn/"
to the registry-mirrors
array to pull
from the USTC registry mirror by default.
After editing, your /etc/docker/daemon.json
should look like this:
{
"registry-mirrors": ["https://docker.mirrors.ustc.edu.cn/"]
}
If this is not the first time you are editing the daemon.json
, there may be
other configuration already present. Simply add the registry-mirrors entry
alongside the existing configuration.
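If you prefer to edit the file programmatically, a short sketch like the following preserves any existing configuration (this is a hypothetical helper, not an official tool; point it at /etc/docker/daemon.json and run with sufficient privileges):

```python
import json
from pathlib import Path

MIRROR = "https://docker.mirrors.ustc.edu.cn/"

def add_registry_mirror(path: Path) -> None:
    """Add the USTC mirror to daemon.json, keeping existing settings intact."""
    config = json.loads(path.read_text()) if path.exists() else {}
    mirrors = config.setdefault("registry-mirrors", [])
    if MIRROR not in mirrors:
        mirrors.append(MIRROR)
    path.write_text(json.dumps(config, indent=2))
```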
Then restart dockerd
:
sudo systemctl restart docker
Verify Default Registry Mirror¶
You can verify your changes by running:
docker info
If you see the following lines, you have configured your Docker daemon successfully.
Registry Mirrors:
https://docker.mirrors.ustc.edu.cn/
Introduction¶
When validating BrainFrame's performance for a given use-case, you may want to use video files to simulate connecting to many IP camera streams. The built-in video file support in BrainFrame is good for some kinds of testing, but is not recommended for performance testing because it has significant overhead when compared to IP cameras.
For use cases like this, BrainFrame ships with a small RTSP server utility. This utility achieves significantly lower overhead by transcoding video files in advance. Since the main focus of this exercise is performance testing, we highly recommend running the RTSP server on a separate machine from the one running the BrainFrame server.
Using the RTSP Server Utility¶
Start by creating a directory containing the video files you would like to test with. Ensure that no other types of files are present in the directory. Then, run the following command:
docker run \
--network host \
--volume {video file path}:/video_files \
aotuai/brainframe_core:0.29.2 brainframe/tools/rtsp_server/main
Replace {video file path}
with a fully qualified path to the video file
directory you've created.
The RTSP server will start by transcoding all video files in the directory to a
known good format. This is to prevent video format incompatibilities and can
significantly improve streaming performance for the RTSP server. Once
transcoding is complete, new .mkv
files will be created in the video file
directory. Transcoding will not be run again unless new video files are
introduced or the .mkv
files are deleted.
When all video files have been transcoded, the RTSP server will start. An RTSP URL will be printed for each video file being streamed.
INFO:root:Video traffic_back_to_front.mkv available at rtsp://0.0.0.0:8554/traffic_back_to_front
INFO:root:Video test_store.mkv available at rtsp://0.0.0.0:8554/test_store
INFO:root:Video two_cool_guys.mkv available at rtsp://0.0.0.0:8554/two_cool_guys
Be sure to replace 0.0.0.0
with the local IP address of the machine that's
running the RTSP server.
Connecting to Many Streams¶
When doing performance testing at the level of tens to hundreds of streams, it can become burdensome to manage that many video files. Instead, it may be easier to connect BrainFrame to the same video stream multiple times.
BrainFrame does not allow you to connect to the exact same RTSP URL multiple times, as doing so during standard operation is wasteful. However, you can work around this limitation by adding dummy query parameters to the end of the RTSP URL.
rtsp://0.0.0.0:8554/test_store?dummy=1
rtsp://0.0.0.0:8554/test_store?dummy=2
rtsp://0.0.0.0:8554/test_store?dummy=3
rtsp://0.0.0.0:8554/test_store?dummy=4
...
The RTSP server utility is optimized to support many concurrent connections to the same stream. Make sure your local network has the necessary bandwidth to facilitate the scale of testing you plan to complete.
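Such a URL list is easy to generate programmatically. This small sketch (a hypothetical helper) builds any number of distinct URLs for one stream:

```python
def dummy_stream_urls(base_url: str, count: int) -> list:
    """Create distinct RTSP URLs for the same stream via dummy query params."""
    return [f"{base_url}?dummy={i}" for i in range(1, count + 1)]
```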
Ended: Advanced Usage
Dashboard
Introduction¶
BrainFrame uses a powerful dashboarding tool called Grafana to allow highly customizable, real-time visualizations of the BrainFrame database and API.
With a little SQL knowledge, you can quickly get analytics for any specific problem you need solved.
The default username and password for the dashboard are admin
/ admin
.
Dashboards¶
A Dashboard contains various Panels which visually display information about
the BrainFrame database. BrainFrame automatically populates Grafana with
dashboards, such as the Stream Uptime
dashboard, which shows graphs of which cameras are being
processed and when they have been connected / disconnected.
Creating a Graph¶
First, create a dashboard by clicking the +
on the sidebar. Then, give your dashboard a name by clicking the gear icon on the top right.
Now, it's time to add a Panel. Click the graph icon on the top bar, so you see this:
Let's start by adding a query. Click "Add Query", then click the pencil to edit the query as SQL.
For example, the following query gives a simple graph of the number of people who entered or exited the "Front Door" Zone over time:
SELECT zone_status.tstamp as time,
total_count.count_enter AS entered,
total_count.count_exit AS exited FROM zone_status
LEFT JOIN total_count ON total_count.zone_status_id=zone_status.id
LEFT JOIN zone ON zone_status.zone_id=zone.id
WHERE
total_count.class_name='person' AND
zone.name='Front Door' AND
zone_status.id >= (SELECT id FROM zone_status WHERE tstamp >= $__unixEpochFrom() ORDER BY tstamp ASC LIMIT 1) AND
zone_status.id <= (SELECT id FROM zone_status WHERE tstamp < $__unixEpochTo() ORDER BY tstamp DESC LIMIT 1)
You might notice that the last two lines of the query are fairly complicated. These lines
limit results to those between two timestamps, and do so efficiently.
The macros $__unixEpochFrom()
and $__unixEpochTo()
return the start and end timestamps of the time range the dashboard user is currently viewing.
Thus, feel free to copy-paste the following filter into any slow query to limit results in a SQL-efficient way:
WHERE
zone_status.id >= (SELECT id FROM zone_status WHERE tstamp >= $__unixEpochFrom() ORDER BY tstamp ASC LIMIT 1) AND
zone_status.id <= (SELECT id FROM zone_status WHERE tstamp < $__unixEpochTo() ORDER BY tstamp DESC LIMIT 1) AND
"< ANY OTHER CONDITIONALS FOR THE QUERY >"
Introduction¶
This document is a guide for those interested in writing queries for BrainFrame's SQL database. Included are examples of common queries and an explanation of each table and their columns. This document is intended for those with a basic understanding of SQL.
BrainFrame hosts Postgres in a container and makes it available to the host machine through the default port, 5432.
Relationship Diagram¶
The following is a visual of how BrainFrame's various tables relate to each other. This is a useful reference when writing queries that span multiple tables. Click the image to enlarge it.
Example Queries¶
Below are some queries intended to be used as examples for common tasks. Fields that must be filled in are wrapped in brackets.
Getting the number of detections right now in a zone for a class
This query finds the number of detections that are currently in a zone, filtered by a class. If you want to know how many people are currently in the "Couch Area" zone, for instance, this is the query to use.
SELECT COUNT(*) FROM detection
JOIN detection_zone_status ON detection.id = detection_zone_status.detection_id
WHERE detection_zone_status.zone_status_id=(SELECT id FROM zone_status
WHERE zone_status.zone_id=[your zone_id here]
ORDER BY zone_status.tstamp DESC LIMIT 1)
AND detection.class_name=[class_name];
Getting the traffic history of a zone
This query gets cumulative data on how many objects of the given class name have entered and exited the zone. This could be used to build a graph of traffic in the zone.
SELECT total_count.count_enter, total_count.count_exit FROM total_count
JOIN zone_status ON zone_status.id=total_count.zone_status_id
WHERE zone_status.zone_id=[your zone_id here]
AND total_count.class_name=[your class name here];
Getting the last zone that an identity was seen in
This query finds the last zone that an identity was found in.
SELECT * FROM zone
JOIN zone_status ON zone_status.zone_id = zone.id
JOIN detection_zone_status ON detection_zone_status.zone_status_id = zone_status.id
JOIN detection ON detection.id=detection_zone_status.detection_id
JOIN identity ON identity.id=detection.identity_id
WHERE identity.unique_name = [your unique name here]
ORDER BY zone_status.tstamp DESC
LIMIT 1;
Getting the number of times a zone alarm has been triggered
This query counts the total number of times a zone alarm has been triggered, given its alarm ID.
SELECT COUNT(*) FROM alert WHERE alert.zone_alarm_id = [your alarm id here];
Get the number of people entering or exiting a specific zone, with timestamps
This will return rows with the following columns: tstamp, count_enter, and count_exit.
SELECT zone_status.tstamp, total_count.count_enter, total_count.count_exit
FROM total_count
LEFT JOIN zone_status ON zone_status.id = total_count.zone_status_id
WHERE zone_status.zone_id =
(SELECT zone.id FROM zone WHERE zone.name = 'YOUR_ZONE_NAME_HERE')
AND total_count.class_name = 'person'
ORDER BY zone_status.tstamp;
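Because count_enter and count_exit are cumulative, occupancy over time can be derived by subtracting them at each sample. A minimal sketch, using made-up rows of the shape returned by the query above:

```python
# Hypothetical rows as returned by the query: (tstamp, count_enter, count_exit).
rows = [
    (1000.0, 0, 0),
    (1060.0, 4, 1),
    (1120.0, 9, 6),
]

# Since the counts are cumulative, occupancy at each sample is enter - exit.
occupancy = [(tstamp, entered - exited) for tstamp, entered, exited in rows]
print(occupancy)  # [(1000.0, 0), (1060.0, 3), (1120.0, 3)]
```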
Get the total number of entering and exiting detections of a specific class for all time for a zone
SELECT total_count.count_enter FROM total_count
JOIN zone_status ON zone_status.id=total_count.zone_status_id
WHERE zone_status.zone_id=[your zone id here]
AND total_count.class_name=[your class name here]
ORDER BY zone_status.tstamp DESC LIMIT 1;
Tables: For Analysis¶
zone_status¶
This is an important table for SQL queries. It holds a point in time for a specific stream. The tstamp and zone_id are the key to finding specific detections in a certain place at a certain time.
Column | Description |
---|---|
id | A unique identifier. |
zone_id | The ID of the zone that this status is for. |
tstamp | The Unix timestamp of when this status was recorded. |
detection¶
An object that has been detected in a video stream.
Column | Description |
---|---|
parent_id | A parent detection, if any. For instance, a face detection might have a parent that is a person detection. |
class_name | The class name of the detection. It describes what the detection is, e.g. "person", "cat", or "dog". |
identity_id | The identity that this detection is recognized as, if any. For example, if class_name is "face" and there is a face recognition capsule, and that capsule recognized the detection as someone known, it will be attached with an identity. |
extra_data_json | A JSON object of the form {"key": VAL, "key2": "VAL"} where the values can be of any JSON-encodable type. It is intended to carry capsule-specific and/or customer-specific information without tying it too closely to the BrainFrame product. |
coords_json | A JSON-encoded array of arrays specifying where in the frame the detection is. In the format: [[x1,y1], …] |
track_id | A nullable UUID string. Detections that have the same track_id refer to the same object according to the tracking algorithm being used. This can be used to find the path of a single object throughout a video stream. If null, then the detection has not been successfully tracked. |
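Since coords_json is a JSON-encoded array of [x, y] points, downstream code can parse it with any JSON library. The sketch below uses invented coordinates and shows one plausible way to derive "bottom" and "center" reference points like those named by the intersection_point alarm columns described later; the server's exact computation may differ.

```python
import json

# Illustrative coords_json for a detection's bounding box (made-up values).
coords_json = "[[10, 20], [110, 20], [110, 220], [10, 220]]"
points = json.loads(coords_json)

xs = [x for x, _ in points]
ys = [y for _, y in points]

# One plausible reading of the "bottom" intersection point: the horizontal
# center of the box at its lowest edge (largest y, since image coordinates
# grow downward). "center" is the midpoint of the box.
bottom = ((min(xs) + max(xs)) / 2, max(ys))
center = ((min(xs) + max(xs)) / 2, (min(ys) + max(ys)) / 2)
print(bottom, center)  # (60.0, 220) (60.0, 120.0)
```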
identity¶
A table for storing a specific known person or object that other tables can link information about.
Column | Description |
---|---|
id | A unique identifier. |
unique_name | Some uniquely identifying string of the object, like an employee number or an SSN. |
nickname | A display name for the identity which may not be unique, like a person’s name. |
metadata_json | Any additional user-defined information about the identity. |
alert¶
An alert that tells the user an alarm's condition has been met.
Column | Description |
---|---|
id | A unique identifier. |
zone_alarm_id | The alarm this alert came from. |
start_time | The Unix timestamp of when this alarm started. |
end_time | The Unix timestamp of when this alarm ended. May be null if the alert is still ongoing. |
verified_as | If True, this alert was verified as legitimate. If False, the alert was a false alarm. If null, it hasn't been verified yet. |
total_count¶
The total number of a certain class of object that has entered or exited a zone at some time. There are zero or more of these per ZoneStatus.
Column | Description |
---|---|
id | A unique identifier. |
zone_status_id | The zone status that this total count is for. |
class_name | The name of the class of object that we're keeping count of. |
count_enter | The number of objects that have "entered" the zone. |
count_exit | The number of objects that have "exited" the zone. |
capsule¶
A capsule loaded through the REST API.
Column | Description |
---|---|
name | The unique name of the capsule. |
data_storage_id | The data storage row that holds the capsule data. |
source_path | Path to the capsule's source code on the developer machine, or null if no source is available. |
Tables: For Configuration Storage¶
premises¶
This defines a physical area with an internal local network of some sort. This could be a mall, an office building, a shop, etc. The idea of a Premises is to keep track of which local network a camera or edge device might be running in, in order to forward results through a gateway to a central cloud server.
Column | Description |
---|---|
id | A unique identifier. |
name | The human-readable name of the premises. |
stream_configuration¶
This defines a video stream and how BrainFrame should connect to it.
Column | Description |
---|---|
id | A unique identifier. |
premises_id | Nullable. If not null, it is the ID of the premises that this camera streams from. |
name | The name of the video stream as it appears to the user on the UI. |
connection_type | The type of connection being defined. This indicates whether the video comes from a file, a webcam, or an IP camera. |
connection_options_json | A JSON object that contains configuration information about how to connect to the stream. |
runtime_options_json | A JSON object that contains configuration information which changes the runtime behavior of the stream. |
metadata_json | A JSON object that contains any additional information the user may want associated with this stream. |
global_capsule_configuration¶
A row in this table is automatically created when BrainFrame loads a capsule that didn't exist before.
Column | Description |
---|---|
name | The (unique) name of the capsule that this configuration refers to. |
option_values_json | A JSON object with the option values that this capsule exposes. Format: { "option_key": "option_value", "other_option": 0.75 } |
is_active | Whether the capsule is active by default (on or off). It is overridden by stream_capsule_configuration's is_active if that value is not null. |
stream_capsule_configuration¶
A row in this table is created when a specific stream has modified options for a capsule. It is intended to 'patch' an existing global_capsule_configuration, modifying the behavior of a capsule for a specific stream.
Column | Description |
---|---|
global_configuration_name | The global_capsule_configuration that this stream_capsule_configuration is patching |
stream_id | The stream_configuration that this stream_capsule_configuration is modifying capsule options for. |
option_values_patch_json | A JSON object that may be empty, or may override the global capsule configuration with key: modified_value pairs. Format: {} or { "option_key": "modified option value" } |
is_active | Overrides the global_capsule_configuration is_active value if this value is not null. That means that, if is_active is True on the stream_capsule_configuration, then the global_capsule_configuration is ignored. If is_active is null on the stream_capsule_configuration, then the global_capsule_configuration is used. |
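The patching behavior described above can be pictured as a simple dictionary merge. A sketch with invented option names (not actual capsule options):

```python
# Global capsule options and a per-stream patch (option names are made up).
global_options = {"threshold": 0.5, "max_detections": 10}
stream_patch = {"threshold": 0.9}  # option_values_patch_json for one stream

# In a dict merge, later keys win, so patched keys override global ones.
effective = {**global_options, **stream_patch}
print(effective)  # {'threshold': 0.9, 'max_detections': 10}

# is_active resolution: the stream-level value wins unless it is null (None).
def resolve_is_active(global_active, stream_active):
    return global_active if stream_active is None else stream_active

print(resolve_is_active(True, None))   # True  (falls back to global)
print(resolve_is_active(True, False))  # False (stream override wins)
```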
attribute¶
An attribute is a classification used to describe detections. For example, there may be a category of classification such as "gender". A particular detection might have an attribute with category "gender" and value "male".
Column | Description |
---|---|
category | The category of attribute. ("gender", "car_type", etc). This attribute is a key. |
value | The value of the attribute. ("male", "prius", etc). This attribute is a key. |
zone¶
A space in a video stream to look for activity in.
Column | Description |
---|---|
id | A unique identifier |
name | The name of the zone as it appears to the user. |
stream_id | The ID of the stream that this zone is for. |
coords_json | Two or more 2D coordinates defining the shape of the zone in the stream. Defined as a two-dimensional JSON array, or "null" if the zone applies to the entire frame. |
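To check whether a point falls inside a zone polygon parsed from coords_json, a standard ray-casting test works. A self-contained sketch with an illustrative square zone; this is not necessarily how the server implements the check.

```python
def point_in_polygon(point, polygon):
    """Ray-casting test: cast a horizontal ray to the right of the point and
    count how many polygon edges it crosses; an odd count means inside."""
    x, y = point
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        # Only edges that straddle the ray's y-coordinate can be crossed.
        if (y1 > y) != (y2 > y):
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

# An illustrative 100x100 square zone.
zone = [(0, 0), (100, 0), (100, 100), (0, 100)]
print(point_in_polygon((50, 50), zone))   # True
print(point_in_polygon((150, 50), zone))  # False
```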
zone_alarm¶
Defines a set of conditions that, if they take place in a zone, should trigger an alarm to the user.
Column | Description |
---|---|
id | A unique identifier |
name | The name of the alarm as it appears to the user. |
use_active_time | If true, then alarms only happen between start_time and end_time. If false, then they can happen at any time. |
active_start_time | The time to start monitoring the stream every day. Only used if use_active_time is true. Stored in the format "HH:MM:SS". |
active_end_time | The time to stop monitoring the stream every day. Only used if use_active_time is true. Stored in the format "HH:MM:SS". |
zone_id | The ID of the zone that this alarm is assigned to watch. |
zone_alarm_count_condition¶
A condition that must be met for an alarm to go off. Compares how many objects of some class are in a zone against a test value.
Column | Description |
---|---|
id | A unique identifier |
zone_alarm_id | The zone alarm that this condition applies to. |
test | The test condition, one of ">", "<", "=", or "!=". |
check_value | The value to apply the test condition to. |
with_class_name | The name of the class to count in the zone. |
attribute_id | An optional attribute that the object must have to be counted. (nullable) |
window_duration | The size of the sliding window used for this condition. A larger sliding window size may reduce false positives but increase latency. |
window_threshold | A value between 0.0 and 1.0 that controls what portion of the sliding window results must evaluate to true for the alarm to trigger. |
intersection_point | The point on the detection to use when calculating if the detection is in the zone. Either "bottom", "top", "left", "right", or "center". |
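The window_duration/window_threshold mechanics can be sketched as follows. The helper and its inputs are illustrative, and the server's exact evaluation may differ:

```python
def window_triggered(results, window_threshold):
    """results: per-frame booleans for whether the count condition held
    within the current sliding window. The alarm fires only when the
    fraction of true results meets or exceeds window_threshold."""
    if not results:
        return False
    return sum(results) / len(results) >= window_threshold

# 3 of the 5 most recent frames met the count condition (0.6 >= 0.5):
print(window_triggered([True, True, False, True, False], 0.5))   # True
# Only 1 of 5 did (0.2 < 0.5), so a brief blip does not trigger the alarm:
print(window_triggered([True, False, False, False, False], 0.5))  # False
```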
zone_alarm_rate_condition¶
A condition that must be met for an alarm to go off. Compares the rate of change in the count of some object against a test value.
Column | Description |
---|---|
id | A unique identifier |
zone_alarm_id | The zone alarm that this condition applies to. |
test | The test condition, either '>=' or '<='. |
duration | The time period over which the change in object count is measured, in seconds. |
change | The change in object count that happens within a period of time. |
direction | The direction of movement, either 'entering' the zone, 'exiting' the zone, or 'entering_or_exiting'. |
with_class_name | The name of the class of objects to look for in the zone. |
attribute_id | An optional attribute that the object must have to be counted. |
intersection_point | The point on the detection to use when calculating if the detection is in the zone. Either "bottom", "top", "left", "right", or "center". |
encoding¶
A vector encoding of some data that defines an identity. For example, an encoding for a human face that can be compared to other encodings to identify if it is the same human face.
Column | Description |
---|---|
id | A unique identifier |
identity_id | The identity that this encoding describes |
class_name | The name of the class that this encoding is of. |
vector_json | A JSON-encoded array of values. The number of values depends on the class name of the identity this encoding is attached to. |
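Encodings can be compared by parsing vector_json and computing a distance between the vectors. A sketch with tiny invented vectors (real encodings are far longer, and the matching threshold is capsule-specific):

```python
import json
import math

# Two hypothetical vector_json values (made-up, tiny vectors for illustration).
vec_a = json.loads("[0.1, 0.2, 0.3]")
vec_b = json.loads("[0.1, 0.2, 0.7]")

# Euclidean distance is one common way to compare encodings; smaller
# distances suggest the same face/object, subject to some threshold.
distance = math.sqrt(sum((a - b) ** 2 for a, b in zip(vec_a, vec_b)))
print(round(distance, 3))  # 0.4
```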
Tables: For Linking¶
alert_frame¶
Links an alert to a data_storage table containing the first frame in the video where this alert happened.
Column | Description |
---|---|
id | A unique identifier |
alert_id | The alert this frame is for. |
data_storage_id | The data_storage table that contains the frame. |
zone_status_alert¶
Links a zone_status to an alert that was in progress at the time of the zone_status.
Column | Description |
---|---|
zone_status_id | The zone_status being linked to. |
alert_id | The alert being linked to. |
detection_zone_status¶
Links zone statuses to the detections that happened in them.
Column | Description |
---|---|
detection_id | The linked detection. |
zone_status_id | The linked zone_status. |
transition_state | The location of the detection relative to the zone. |
detection_attribute¶
Links detections to the attributes that describe them.
Column | Description |
---|---|
detection_id | The linked detection. |
attribute_id | The linked attribute. |
encoding_data_storage¶
Links encodings to the data that was used to create the vector. This tends to be an image.
Column | Description |
---|---|
data_storage_id | The linked data_storage |
encoding_id | The linked encoding |
Tables: Miscellaneous¶
data_storage¶
References a file stored outside of the database.
Column | Description |
---|---|
id | A unique identifier. |
name | The name of the file, used to find it in storage. |
hash | A SHA256 hash of the data. |
mime_type | The mime type of the file being stored. |
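If the hash column holds the SHA-256 hex digest of the file bytes (a plausible reading of the description above), it can be recomputed locally to verify a stored file. A minimal sketch:

```python
import hashlib

# Illustrative file contents; in practice, read the bytes of the stored file.
data = b"example file contents"
digest = hashlib.sha256(data).hexdigest()
print(len(digest))  # a SHA-256 hex digest is 64 characters long
```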
user¶
Contains information on user accounts.
Column | Description |
---|---|
id | A unique identifier. |
username | The user's unique username. |
password_hash | The user's password, hashed with argon2. |
role | The user's role, which controls what permissions they have. |
The BrainFrame software is copyrighted. BrainFrame™ is a trademarked name.
End-User License Agreement¶
SDK Licensing¶
Unless otherwise stated, the following commercial license applies to all other SDK components.
OEM Licensing¶
Open Source Licenses¶
Open source licenses for the BrainFrame client can be found under the
legal/licenses
directory.
Open source licenses for the BrainFrame server can be found in the core
Docker image under the standard locations provided by the apt
and pip
package managers.
Replacing Python Libraries¶
We offer BrainFrame client users the option to replace some libraries that have been packaged alongside or within the client binary with an API-compatible version of the library. Simply set the environment variable corresponding to the library you want to replace, and BrainFrame will use that version.
For example:
export PYGOBJECT_PATH=/usr/local/pygobject-custom
bash ./brainframe_client.sh
PyGObject¶
Environment variable: PYGOBJECT_PATH
Source: https://github.com/GNOME/pygobject
Argh¶
Environment variable: ARGH_PATH
Source: https://github.com/neithere/argh
Chardet¶
Environment variable: CHARDET_PATH
Source: https://github.com/chardet/chardet
Replacing C++ Libraries¶
Replaceable libraries are included in the release under the lib
directory and
are dynamically linked at runtime. In order to use a custom version of these
dependencies, simply replace the included dynamic library files with your
version.
Please consult the source for their corresponding copyrights. Links to each
library's source code can be found in lib/brainframe_qt/legal/sources.txt
in the binary .zip.