Running the Live Camera Recognition Demo
Classify a live camera stream in real time with:
jetson-inference/build/aarch64/bin/imagenet-camera.py
The script optionally accepts 4 command-line arguments:
1) the --network flag selects the classification model to load (the default is GoogLeNet)
2) the --camera flag selects which camera to use (the default is --camera=0, i.e. MIPI CSI camera 0, which is the Raspberry Pi (RPi) Camera Module V2)
3) and 4) the --width and --height flags set the camera resolution (the default is 1280x720)
Examples (the corresponding commands are shown below)
Using GoogLeNet, the RPi camera, and a resolution of 1280x720
Using ResNet-18, the RPi camera, and a resolution of 1280x720
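For example (a minimal sketch, assuming the script is run from the jetson-inference/build/aarch64/bin directory and is executable):

$ ./imagenet-camera.py                      # GoogLeNet, MIPI CSI camera 0, 1280x720 (all defaults)
$ ./imagenet-camera.py --network=resnet-18  # ResNet-18, otherwise the same defaults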
import jetson.inference
import jetson.utils
import argparse
import sys
# parse the command line
parser = argparse.ArgumentParser(description="Classify a live camera stream using an image recognition DNN.",
formatter_class=argparse.RawTextHelpFormatter, epilog=jetson.inference.imageNet.Usage())
parser.add_argument("--network", type=str, default="googlenet", help="pre-trained model to load (see below for options)")
parser.add_argument("--camera", type=str, default="0", help="index of the MIPI CSI camera to use (e.g. CSI camera 0)\nor for VL42 cameras, the /dev/video device to use.\nby default, MIPI CSI camera 0 will be used.")
parser.add_argument("--width", type=int, default=1280, help="desired width of camera stream (default is 1280 pixels)")
parser.add_argument("--height", type=int, default=720, help="desired height of camera stream (default is 720 pixels)")
try:
    opt = parser.parse_known_args()[0]
except:
    print("")
    parser.print_help()
    sys.exit(0)
# load the recognition network
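# (sys.argv is passed through as well, so the network can also pick up any extra model-related options from the command line)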
net = jetson.inference.imageNet(opt.network, sys.argv)
# create the camera and display
font = jetson.utils.cudaFont()
camera = jetson.utils.gstCamera(opt.width, opt.height, opt.camera)
display = jetson.utils.glDisplay()
# process frames until user exits
while display.IsOpen():
    # capture the image
    img, width, height = camera.CaptureRGBA()

    # classify the image
    class_idx, confidence = net.Classify(img, width, height)

    # find the object description
    class_desc = net.GetClassDesc(class_idx)

    # overlay the result on the image
    font.OverlayText(img, width, height, "{:05.2f}% {:s}".format(confidence * 100, class_desc), 5, 5, font.White, font.Gray40)

    # render the image
    display.RenderOnce(img, width, height)

    # update the title bar
    display.SetTitle("{:s} | Network {:.0f} FPS".format(net.GetNetworkName(), net.GetNetworkFPS()))

    # print out performance info
    net.PrintProfilerTimes()
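While the display window stays open, each captured frame is classified, the top class and its confidence are overlaid on the image, the window title is updated with the network name and its inference FPS, and per-layer profiler times are printed to the terminal.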