How Can a Spacecraft Use AI? Part 1 - Computer Vision Introduction and Face Detection

Published: 05/18/2018  

Last Updated: 05/18/2018

By Justin Shenk

Imagine you are a passenger aboard a large spacecraft bound for Mars. The ship is equipped with state-of-the-art sensors to maintain a high quality of life for the long (300-day) journey, and all the passengers’ needs are provided for, including an ample supply of oxygen. These sensors work together to provide a comfortable living space that responds to changes occurring on board. In this three-part series, we’ll be exploring computer vision for face, body, and pose detection.

Why AI on a Spacecraft?

A spaceship far from Earth depends on local computation for the majority of its operations. There are no cell towers far from Earth’s surface, so computers will have to rely on algorithms that can carry on without supervision. AI in space must leverage local information with specialized hardware and software that minimizes heat production and power consumption while performing well.

What is an Intelligent Spacecraft?

An ordinary thermostat in the home holds temperature at a steady level. A “smart” thermostat with AI can adjust the level depending on other factors present, such as mood, intention, or predictions. The boundary is fuzzy, but generally speaking, the more complex the learned features, and the more flexible the system is to respond to new situations, the more intelligent the machine is assumed to be. The ultimate AI in a spacecraft, from a human perspective, anticipates and provides for all the needs of the passengers.

Computer Vision and AI

One of the most powerful sensors to support AI on a spacecraft is the camera. By detecting and responding to patterns found in images, a journey to Mars can be made much more pleasant. Consider, for example, how we act differently when we notice a friend’s mood has changed. By combining visual signals with machine intelligence, we can solve many problems that arise at the far reaches of space. Some use cases for computer vision on a spaceship:

  • Automatically controlling lights when people enter a room to preserve energy
  • Detecting overall activity and motion
  • Predicting oxygen consumption in a room based on activity
  • Predicting body heat production
  • Monitoring mood and regulating exercise
  • Identifying highly trafficked areas of the ship

Tools for Detecting Passengers

For this task, we only want to measure human activity (sorry, no droid detection in this tutorial). Fortunately, the public areas on our vehicle are equipped with sensors that can detect human presence and motion. Let’s attempt the challenge using OpenCV* and Python*.

OpenCV is an open-source computer vision library written in C++ with bindings for Python*, Java* and MATLAB*/Octave.

Install OpenCV

Ubuntu* installation:

sudo apt-get install python-opencv

If using Raspberry Pi* or Up-squared* instead, or a laptop with Mac* or Windows*, follow the respective instructions for your system; on most platforms, OpenCV with Python bindings can also be installed with pip install opencv-python. Be sure to include the Python* bindings to follow this tutorial. Feel free to use the programming language of your choice.

Face Detection Example

How many people are facing the camera? This problem is easy to solve using a face detector known as a Haar cascade classifier. In very simple terms, a Haar classifier scans a frame looking for patterns of light and dark rectangular regions (Haar-like features) that, taken together, match a typical person’s face.

Build a face detector using OpenCV and your webcam. First, import OpenCV:

import cv2

Initialize your VideoCapture stream, read each frame inside a while loop, and display it, stopping when the Esc key is pressed:

cap = cv2.VideoCapture(0)

while True:
	ret, frame = cap.read()
	cv2.imshow('Faces', frame)
	if cv2.waitKey(1) == 27:
		break

If all goes well, a window pops up displaying the webcam stream. Let’s collect some data with a face detector. The XML files used to detect faces are stored in the opencv/data/haarcascades/ folder and can also be found online.

import cv2

face_cascade = cv2.CascadeClassifier('haarcascade_frontalface_default.xml')
cap = cv2.VideoCapture(0)

while True:
	ret, frame = cap.read()
	gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
	faces = face_cascade.detectMultiScale(gray, 1.3, 5)
	for (x, y, w, h) in faces:
		cv2.rectangle(frame, (x, y), (x + w, y + h), (255, 0, 0), 2)
	print("{} face(s) found".format(len(faces)))
	cv2.imshow('Faces', frame)
	if cv2.waitKey(1) == 27:
		break

cap.release()
cv2.destroyAllWindows()

After detecting faces in each frame, we may want to update the frame to show how many faces are detected. We can add OpenCV's putText() method just below the print() statement:

cv2.putText(frame, "{} face(s) found".format(len(faces)), (40, 40),
	cv2.FONT_HERSHEY_SIMPLEX, 1, (0, 255, 0))

The top left corner of the frame will now show how many faces are detected.

To collect data for later analysis, we can store the information in a variety of formats. Updating a basic Python list is the easiest way to begin. For starters, let's track the horizontal position of the face over time. First, initialize a list above the while loop:

data_x = []

Above the rectangle() statement, add data_x.append(x). This creates a one-dimensional list of all x positions of faces found, which can be saved for later analysis. In the next post, we will use a more broadly applicable human detector to track people in the frame, regardless of whether they are looking at the camera.
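Once collected, data_x can be written to disk for offline analysis. A minimal sketch using Python's built-in csv module (the filename and sample values here are just placeholders, not output from the detector):

```python
import csv

# Stand-in for x positions collected during a session
data_x = [120, 124, 131, 140, 138]

# Write one x position per row so other tools can read the file later
with open('face_positions.csv', 'w', newline='') as f:
    writer = csv.writer(f)
    writer.writerow(['x'])  # header row
    writer.writerows([[x] for x in data_x])
```

Plain CSV keeps the data portable; for larger recordings, a binary format such as a NumPy .npy file would be more compact.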


Try extending the example with these exercises:

  • Modify the data structure to include simultaneous and missing faces (Basic)
  • Plot the data using Matplotlib (Intermediate)
  • Use face (or other body part) positions to control LED lights or music (Advanced)
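For the first exercise, one simple approach (a sketch, not the only possible design) is to append a timestamped face count for every frame, so that frames with zero or multiple faces are recorded too:

```python
import time

detections = []  # list of (timestamp, face_count) tuples

def record_frame(faces, log=detections):
    """Append one entry per frame, even when no faces are found."""
    log.append((time.time(), len(faces)))

# Simulated detectMultiScale() output for three frames: two faces, none, one
record_frame([(10, 10, 50, 50), (80, 12, 48, 52)])
record_frame([])
record_frame([(40, 20, 45, 45)])

counts = [count for _, count in detections]
print(counts)  # [2, 0, 1]
```

Inside the webcam loop, you would call record_frame(faces) once per iteration in place of the simulated calls above.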

Other Parts of This Series

How Can a Spacecraft Use AI?


About the Author

Justin Shenk

Master’s student and research assistant, University of Osnabrueck, Osnabrueck, Germany.

Justin is an AI Master’s thesis student at Peltarion researching deep learning model introspection. He develops AI software as an Intel Software Innovator and demos his projects at Intel’s booths at NIPS, ICML, and CVPR. He previously worked as a neuroscientist in the US.

