Detect Person and Chat
The following code allows the Nvidia Jetson Nano to start a chat (ChatBot) when a person is detected:

#!/usr/bin/env python3

import time
import subprocess

from jetson_inference import detectNet
from jetson_utils import videoSource


def start_detection():
    net = detectNet("ssd-mobilenet-v2", threshold=0.5)
    camera = videoSource("csi://0")  # use '/dev/video0' for a V4L2 camera

    while True:
        time.sleep(1)
        img = camera.Capture()  # returns a jetson_utils cudaImage object
        detections = net.Detect(img)
        for detection in detections:
            # ClassID 1 is 'person'; only react when confidence is above 95%
            if (detection.ClassID == 1) and (detection.Confidence > 0.95):
                class_name = net.GetClassDesc(detection.ClassID)
                print(f"Detected '{class_name}'")
                # Launch ChatBot.py as a separate process and wait for it to finish
                chat_process = subprocess.Popen("python3 /home/jetson/Code/ChatBot/ChatBot.py", shell=True)
                chat_process.wait()
                time.sleep(5)
                break


if __name__ == "__main__":
    start_detection()
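The check on detection.ClassID hard-codes 1 for the 'person' class. To confirm that mapping on your own install, a minimal sketch like the one below prints every label the model knows; it assumes your jetson_inference build exposes GetNumClasses() alongside the GetClassDesc() call already used above:

# list_classes.py - sketch to confirm which ClassID corresponds to 'person'
# Assumes detectNet provides GetNumClasses() and GetClassDesc()
from jetson_inference import detectNet

net = detectNet("ssd-mobilenet-v2", threshold=0.5)
for class_id in range(net.GetNumClasses()):
    print(f"{class_id}: {net.GetClassDesc(class_id)}")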
In the code, “subprocess” is used to run “ChatBot.py” in a separate process. A couple of other methods were tried first but failed:
- Importing “ChatBot” and calling the functions defined in “ChatBot.py”
- Starting the “ChatBot” functions in separate threads
Both methods of starting “ChatBot” fail with an error message like the one below (a rough sketch of the threaded variant follows it):
RuntimeWarning: use mixer: /home/jetson/.local/lib/python3.8/site-packages/pygame/../pygame.libs/libgomp-d22c30c5.so.1.0.0: cannot allocate memory in static TLS block
ImportError: /home/jetson/.local/lib/python3.8/site-packages/pygame/../pygame.libs/libgomp-d22c30c5.so.1.0.0: cannot allocate memory in static TLS block
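For reference, the threaded variant that triggers this error looked roughly like the sketch below. The function name start_chat() is a hypothetical stand-in for whatever entry point “ChatBot.py” actually defines; the failure happens as soon as “ChatBot” (and, through it, pygame) is imported:

# Rough sketch of the threaded approach that fails with the static TLS error above.
# start_chat() is a hypothetical name for the chat entry function in ChatBot.py.
import threading

import ChatBot  # importing ChatBot pulls in pygame, which raises the ImportError

chat_thread = threading.Thread(target=ChatBot.start_chat)
chat_thread.start()
chat_thread.join()  # wait for the chat session to finish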