Hands-On AI Part 13: Image Data Exploration

Published: 10/13/2017

Last Updated: 10/12/2017

A Tutorial Series for Software Developers, Data Scientists, and Data Center Managers

We decided to use images from existing picture databases because the quality of a large portion of Flickr* images was poor. In particular, we pulled high-quality images from three psychology research databases. Refer to the Image Dataset Search article for information on these databases. Each image included ratings of (un)pleasantness and intensity collected from several people. Across these three databases, four categories emerged from 1,986 images, covering 87 percent of the set: 34 percent animals, 28 percent humans, 13 percent scenes, and 12 percent objects. The remaining 13 percent were miscellaneous.

Animals

Figure 1.  An example of images from the animals category.

About a third of the images involved animals, either in isolation or with other animals, as shown above (Figure 1). Among these examples, ratings of pleasantness increase as you move from left to right. The unpleasant images of hyenas eating their prey and cockroaches may elicit emotional responses such as fear, sadness, and disgust. In contrast, the images on the right of the sleeping cat and the smiling dog may elicit feelings of affection and happiness.

Humans

Figure 2. An example of images from the humans category.

The humans category of images included pictures of individuals and groups, with pictures of groups often involving more contextual information. For instance, the image of a marching band seems to have a stadium of fans in the background, suggesting that this image is capturing some kind of performance during a sporting event. In contrast, the image of the angry woman lacks context—we have no way to know or even to guess what the woman is angry about. Notably, not all images with multiple individuals or groups have more information represented in them. For example, the image of men strewn out in a row on the floor with wounds and blood on their clothing does not provide clues as to what is happening. Nonetheless, even with this lack of information, the images of humans evoked varying emotional reactions.

Scenes

The scenes category of our image set captured a diverse range of scenery, from scenes involving man-made structures and objects, to scenes of nature and even the galaxy.

Figure 3.  An example of images from the scenes category.

Objects

Figure 4. An example of images from the objects category.

The objects category in our image set comprised images that focused on one object, as illustrated in the above examples. These images largely lack situational context, especially compared to other categories in our set.

Miscellaneous

Figure 5. An example of images from the miscellaneous category.

Finally, there was a subset of images in our set that could not be categorized as animals, humans, scenes, or objects. Oftentimes, as shown, these images were scenes that only comprised several objects, but lacked the context that a scene prototypically carries. These types of images tended to be rated as neither pleasant nor unpleasant.

Emotion Categories for Image Database

To determine emotion categories for our image database, we relied upon the normative subjective ratings of valence provided for each image, as reported in the Geneva Affective PicturE Database* (GAPED*) and the Open Affective Standardized Image Set* (OASIS*) databases. Because the GAPED used a 0-to-100 Likert* scale, whereas the OASIS used a 1-to-7 Likert scale, we applied a linear transformation to convert all scores to a 0-to-100 continuous scale. Then, we explored two potential rules for emotion categorization.
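The linear transformation can be sketched as follows; `oasis_to_100` is an illustrative helper name, not part of either database's tooling, but the formula matches the `(score - 1) * 100 / 6` expression used in the categorization code later in this article:

```python
def oasis_to_100(score):
    """Linearly rescale an OASIS valence rating (1-7) onto the 0-100 GAPED scale."""
    return (score - 1) * 100 / 6

# The endpoints map as expected: 1 -> 0, 7 -> 100, and the scale midpoint 4 -> 50.
```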

First, intuition might suggest sorting our images based on pleasantness and then splitting our database into thirds of the rating scale—with scores of 0–33.33 representing the negative category, 33.33–66.67 representing neutral, and 66.67–100 representing positive. We used the following Python* code to implement this one-third categorization rule:

import os
import shutil
import csv

def organizeFolderGAPED(original, pos, neg, neut):
    # Copies each image in the GAPED database to the corresponding folder
    # Make a dictionary of file names to valence
    valence = {}
    for file in os.listdir(original):
        if '.txt' in file:
            with open(os.path.join(original, file), 'r') as f:
                for l in f:
                    l = l.split()
                    valence[l[0][:-4]] = l[1]

    # Walk through the images and categorize files as pos/neg/neut according to valence
    for roots, dirs, files in os.walk(original):
        for file in files:
            if '.bmp' in file:
                if float(valence[file[:-4]]) < 100/3:
                    shutil.copy(os.path.join(roots, file), neg)
                elif float(valence[file[:-4]]) > 200/3:
                    shutil.copy(os.path.join(roots, file), pos)
                else:
                    shutil.copy(os.path.join(roots, file), neut)

def organizeFolderOASIS(original, pos, neg, neut):
    # Copies each image in the OASIS database to the corresponding folder
    # Make a dictionary of file names to valence
    valence = {}
    with open('path/to/your/project/directory/OASIS.csv') as f:
        reader = csv.reader(f)
        next(reader, None)  # skip the header row
        for row in reader:
            valence[row[1]] = row[4]

    # Walk through the images and categorize files as pos/neg/neut according to normalized valence
    for roots, dirs, files in os.walk(original):
        for file in files:
            if '.jpg' in file:
                if (float(valence[file[:-4]]) - 1) * 100/6 < 100/3:
                    shutil.copy(os.path.join(roots, file), neg)
                elif (float(valence[file[:-4]]) - 1) * 100/6 > 200/3:
                    shutil.copy(os.path.join(roots, file), pos)
                else:
                    shutil.copy(os.path.join(roots, file), neut)

if __name__ == '__main__':
    gaped = 'path/to/your/project/directory/GAPED'
    oasis = 'path/to/your/project/directory/Oasis'
    pos = 'path/to/your/project/directory/Positive'
    neg = 'path/to/your/project/directory/Negative'
    neut = 'path/to/your/project/directory/Neutral'
    organizeFolderOASIS(oasis, pos, neg, neut)
    organizeFolderGAPED(gaped, pos, neg, neut)

This approach divided our database into 417 negative images, 774 neutral images, and 442 positive images. This one-third categorization rule caused unpleasant images that did not reach the one-third threshold to be categorized as neutral images; for example, the images of a carcass, crying baby, and cemetery were categorized as neutral. Though such images were more pleasant compared to other images in the negative category, we were concerned that these images were not quite neutral.
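The difference between the two rules comes down to the cutoffs passed to the comparison. A minimal sketch (with an illustrative `categorize` helper and a hypothetical score of 38, not drawn from either database) shows how a mildly unpleasant image shifts category as the thresholds move:

```python
def categorize(score, low, high):
    """Assign a 0-100 valence score to an emotion category given two cutoffs."""
    if score < low:
        return 'negative'
    if score > high:
        return 'positive'
    return 'neutral'

# A score of 38 falls above the one-third cutoff (33.33), so it lands in neutral ...
print(categorize(38, 100/3, 200/3))  # neutral
# ... but below a cutoff of 40, it lands in negative.
print(categorize(38, 40, 60))  # negative
```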

Thus, we opted for a categorization rule that better reflected the roughly normal (Gaussian) distribution of the ratings while still sorting our stimuli into meaningful emotion categories. We categorized scores of 0–39 as negative, 40–60 as neutral, and 61–100 as positive. We used the following Python code to implement this rule:

import os
import shutil
import csv

def organizeFolderGAPED(original, pos, neg, neut):
    # Copies each image in the GAPED database to the corresponding folder
    # Make a dictionary of file names to valence
    valence = {}
    for file in os.listdir(original):
        if '.txt' in file:
            with open(os.path.join(original, file), 'r') as f:
                for l in f:
                    l = l.split()
                    valence[l[0][:-4]] = l[1]

    # Walk through the images and categorize files as pos/neg/neut according to valence
    for roots, dirs, files in os.walk(original):
        for file in files:
            if '.bmp' in file:
                if float(valence[file[:-4]]) < 40:
                    shutil.copy(os.path.join(roots, file), neg)
                elif float(valence[file[:-4]]) > 60:
                    shutil.copy(os.path.join(roots, file), pos)
                else:
                    shutil.copy(os.path.join(roots, file), neut)

def organizeFolderOASIS(original, pos, neg, neut):
    # Copies each image in the OASIS database to the corresponding folder
    # Make a dictionary of file names to valence
    valence = {}
    with open('path/to/your/project/directory/OASIS.csv') as f:
        reader = csv.reader(f)
        next(reader, None)  # skip the header row
        for row in reader:
            valence[row[1]] = row[4]

    # Walk through the images and categorize files as pos/neg/neut according to normalized valence
    for roots, dirs, files in os.walk(original):
        for file in files:
            if '.jpg' in file:
                if (float(valence[file[:-4]]) - 1) * 100/6 < 40:
                    shutil.copy(os.path.join(roots, file), neg)
                elif (float(valence[file[:-4]]) - 1) * 100/6 > 60:
                    shutil.copy(os.path.join(roots, file), pos)
                else:
                    shutil.copy(os.path.join(roots, file), neut)

if __name__ == '__main__':
    gaped = 'path/to/your/project/directory/GAPED'
    oasis = 'path/to/your/project/directory/Oasis'
    pos = 'path/to/your/project/directory/Positive'
    neg = 'path/to/your/project/directory/Negative'
    neut = 'path/to/your/project/directory/Neutral'
    organizeFolderOASIS(oasis, pos, neg, neut)
    organizeFolderGAPED(gaped, pos, neg, neut)

With this revised categorization rule (negative below 40, neutral 40–60, positive above 60), our 567 positive images were rated as more pleasant than our 502 neutral and 564 negative images, with the negative images being less pleasant than the neutral images. Thus, we retained the intended meaning of our emotion categories while also improving the distribution of images across categories. Below (Figure 6), we illustrate the pleasantness associated with each category. The differing lengths of the whiskers of each box plot show that the emotional image categories (negative and positive) were more variable in their ratings than the neutral image category.

Figure 6. The mean pleasantness ratings associated with each emotion category.

We concluded that this categorization rule was sufficient for classifying our images based on emotion. As with the stimulus categories for our image database, below we demonstrate the types of images that represented each emotion category. Notably, each stimulus category (animals, humans, scenes, objects, and miscellaneous) is represented in each emotion category.

Conclusion

To conclude, we divided our image database into neutral, negative, and positive emotion categories by using normative ratings of valence that ranged from 0 to 100, wherein scores from 0–39 indicated negative images, 40–60 indicated neutral, and 61–100 indicated positive. Images were relatively evenly distributed across these emotion categories. Finally, included within each emotion category were pictures of animals, humans, scenes, objects, and other miscellaneous items.

