Browsing Faculty Publications by Title
Crowdsourcing Image Extraction and Annotation: Software Development and Case Study
We describe the development of web-based software that facilitates large-scale, crowdsourced image extraction and annotation within image-heavy corpora of interest to the digital humanities. An application of this software is then detailed and evaluated through a case study in which it was deployed on Amazon Mechanical Turk to extract and annotate faces from the archives of Time magazine. Annotation labels included categories such as age, gender, and race, which were subsequently used to train machine learning models. The systematization of our crowdsourced data collection and worker quality verification procedures is detailed within this case study. We outline a data verification methodology that used validation images and required only two annotations per image to produce high-fidelity data, with results comparable to those of methods using five annotations per image. Finally, we provide instructions for customizing our software to meet the needs of other studies, with the goal of offering this resource to researchers analyzing objects within other image-heavy archives.
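The two-annotation verification idea described in this abstract can be sketched in a few lines. This is an illustrative reconstruction, not the authors' code: the function names, the agreement threshold, and the re-annotation rule are all assumptions made for the example.

```python
# Hypothetical sketch of a two-annotation verification workflow:
# workers are screened against validation images with known labels,
# and each image keeps its label only when two workers agree.

def worker_passes_validation(worker_answers, gold_labels, threshold=0.8):
    """Accept a worker only if their answers on validation images
    (images with known labels) meet the agreement threshold.
    The 0.8 threshold is illustrative, not from the paper."""
    correct = sum(1 for img, label in gold_labels.items()
                  if worker_answers.get(img) == label)
    return correct / len(gold_labels) >= threshold

def resolve_pair(label_a, label_b):
    """With only two annotations per image: keep the label when both
    workers agree; otherwise flag the image for re-annotation (None)."""
    return label_a if label_a == label_b else None

# Toy run: this worker matches only 2 of 3 validation images.
gold = {"v1": "female", "v2": "male", "v3": "female"}
answers = {"v1": "female", "v2": "male", "v3": "male"}
print(worker_passes_validation(answers, gold))  # False (2/3 < 0.8)
print(resolve_pair("female", "female"))         # female
print(resolve_pair("female", "male"))           # None -> re-annotate
```

The appeal of the scheme, as the abstract notes, is cost: requiring agreement between two screened workers can approach the fidelity of five-way majority voting at a fraction of the annotation budget.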
Faces extracted from Time Magazine 1923-2014
We present metadata of labeled faces extracted from a Time magazine archive that contains 3,389 issues ranging from 1923 to 2012. The data we are publishing consists of three subsets: Dataset 1) the gender labels and image characteristics for each of the 327,322 faces that were automatically extracted from the entire Time archive; Dataset 2) a subset of 8,789 faces from a sample of 100 issues that were labeled by Amazon Mechanical Turk (AMT) workers according to ten dimensions (including gender) and used as training data to produce Dataset 1; and Dataset 3) the raw data collected from the AMT workers before being processed to produce Dataset 2.
Generating Facial Character: A Systematic Method Accumulating Expressive Histories
The author presents a method to simulate facial character development by accumulating an expressive history onto a face. The model analytically combines facial features from Paul Ekman’s seven universal facial expressions using a simple Markov chain algorithm. The output is a series of 3D digital faces created in Blender with Python. The results show that systematically imprinting features from emotional expressions onto a neutral face transforms it into one with distinct character. This method could be applied to creative works that depend on character creation, ranging from figurative sculpture to game design, and allows the creator to incorporate chance into the creative process. The author demonstrates the method’s application to sculpture with ceramic casts of generated faces.
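The core loop of such a method can be illustrated with a minimal sketch. This is not the author's implementation: the step count, imprint weight, and transition probabilities (uniform here, for simplicity) are assumptions, and the "face" is reduced to a dictionary of expression intensities rather than Blender geometry.

```python
import random

# Illustrative sketch of accumulating an expressive history: a Markov
# chain walks over Ekman's seven universal expressions, and each visited
# expression imprints a small weight onto an initially neutral face.
# In the paper these accumulated weights would drive 3D facial features
# in Blender; here they are plain numbers.

EXPRESSIONS = ["anger", "contempt", "disgust", "fear",
               "happiness", "sadness", "surprise"]

def accumulate_history(steps=50, imprint=0.1, seed=0):
    rng = random.Random(seed)
    face = {e: 0.0 for e in EXPRESSIONS}   # neutral face: all intensities zero
    state = rng.choice(EXPRESSIONS)
    for _ in range(steps):
        face[state] += imprint             # imprint the current expression
        state = rng.choice(EXPRESSIONS)    # uniform transitions (assumption)
    return face

face = accumulate_history()
print(max(face, key=face.get))  # the expression that dominates this history
```

Because the walk is random, different seeds yield different accumulated histories and hence faces with different "characters", which matches the abstract's point about incorporating chance into the creative process.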
What’s in a Face? Gender Representation of Faces in Time, 1940s-1990s
We extracted 327,322 faces from an archive of Time magazine containing 3,389 issues dating from 1923 to 2014, classified the gender of each extracted face, and discovered that the proportion of female faces contained within this archive varied in interesting ways over time. The proportion of female faces first peaked in the mid-to-late 1940s. This was followed by a dip lasting from the mid-1950s to the early 1960s. The 1970s saw another peak followed by a dip over the course of the 1980s. Finally, we see a slow and steady rise in the proportion of female faces from the early 1990s onwards. In this paper, we seek to make sense of these variations through an interdisciplinary framework drawing on psychology, visual studies (in particular, photography theory), and history. Through a close reading of our Time archive from the 1940s through the 1990s, we conclude that the visual representation of women in Time magazine correlates with attitudes toward women in both the historical context of the era and the textual content of the magazine.
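The time-series quantity at the heart of this study, the proportion of female faces per period, is straightforward to compute once each face carries a year and a gender label. The sketch below is illustrative only: the record format and decade grouping are assumptions, not the paper's actual data schema.

```python
from collections import defaultdict

# Hypothetical sketch: given (year, gender) pairs for extracted faces,
# compute the proportion of female faces per decade. Field values
# ("female"/"male") and the decade binning are illustrative.

def female_proportion_by_decade(faces):
    counts = defaultdict(lambda: [0, 0])   # decade -> [female count, total]
    for year, gender in faces:
        decade = (year // 10) * 10
        counts[decade][1] += 1
        if gender == "female":
            counts[decade][0] += 1
    return {d: f / t for d, (f, t) in sorted(counts.items())}

# Toy data: 2 of 3 faces in the 1940s are female, 0 of 2 in the 1950s.
sample = [(1944, "female"), (1945, "male"), (1946, "female"),
          (1958, "male"), (1959, "male")]
print(female_proportion_by_decade(sample))
# {1940: 0.6666666666666666, 1950: 0.0}
```

A finer binning (per year or per issue) would expose the peaks and dips the abstract describes; the decade granularity here just keeps the example small.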