What is the Media Ecology Project?
The Media Ecology Project (MEP) is a digital lab at Dartmouth College, directed by Prof. Mark Williams, that gives researchers access to archival moving image collections and channels their critical analysis back to the archival and research communities. MEP enables new research capacities for the critical understanding of historical media and builds an expanded research context that bridges technical, disciplinary, and epistemological boundaries.
What we do
MEP enables researchers across disciplines to:
- Digitally access archival moving image collections
- Build a dynamic research context that enhances search and discovery within these archives, a key added value
- Develop new research uses over time, serving a wide range of users
- Contribute metadata and other knowledge back to the archival community
MEP is working to realize a virtuous cycle of access and preservation enabled by technological advances and new practical applications of digital tools and platforms. In a fundamental sense, MEP is a sustainability project that treats moving image history as public memory. By fostering innovations in both granular close-textual analysis (traditional Arts and Humanities methods) and computational distant reading (computer vision and machine reading), MEP serves as a collaborative incubator for 21st-century research methods.
In addition to working with existing software, we have developed two new tools that have greatly advanced our goal of developing new networked research about archival moving image materials. Both tools support close textual analysis of moving pictures using time-based annotations: selecting a start time and stop time for a clip, and providing descriptions and tags for that clip.
- Our Semantic Annotation Tool (SAT) enables the creation of time-based annotations for specific geometric regions of the motion picture frame, a truly granular approach to scholarly annotation.
- Onomy.org is a vocabulary-building tool that helps ensure that the tags applied to time-based annotations are in harmony with one another. Using this tool, we have started to build an international dictionary of film terms; an English-to-Chinese glossary is its first entry.
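The annotation model described above, a clip defined by start and stop times, optionally scoped to a geometric region of the frame and tagged from a controlled vocabulary, can be sketched as a simple data structure. This is a minimal illustration only: the field names and the sample vocabulary are our assumptions, not the actual SAT or Onomy.org schema.

```python
from dataclasses import dataclass, field
from typing import List, Optional

# Hypothetical controlled vocabulary, standing in for an Onomy.org-style term list.
FILM_TERMS = {"close-up", "tracking shot", "match cut", "intertitle"}

@dataclass
class Region:
    """A geometric region of the frame, as fractions of frame width/height."""
    x: float
    y: float
    width: float
    height: float

@dataclass
class Annotation:
    """A time-based annotation: a clip plus optional spatial scope and tags."""
    start: float                    # clip start time, in seconds
    end: float                      # clip stop time, in seconds
    description: str
    tags: List[str] = field(default_factory=list)
    region: Optional[Region] = None  # None means the annotation covers the whole frame

    def validate(self) -> bool:
        """Check that the clip's time span is coherent and its tags are controlled."""
        if not (0 <= self.start < self.end):
            raise ValueError("start must be non-negative and precede end")
        unknown = [t for t in self.tags if t not in FILM_TERMS]
        if unknown:
            raise ValueError(f"tags not in controlled vocabulary: {unknown}")
        return True

# Example: annotate a close-up scoped to the upper-left quadrant of the frame.
note = Annotation(
    start=12.0,
    end=15.5,
    description="Close-up on the protagonist's face",
    tags=["close-up"],
    region=Region(x=0.0, y=0.0, width=0.5, height=0.5),
)
note.validate()
```

Validating tags against a shared vocabulary is what keeps annotations from different contributors "in harmony": two scholars tagging the same shot converge on the same term rather than free-text variants.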
Our work with computer scientists has been truly inspiring and has produced new tools that support machine reading of moving images. One direction of this research focuses on feature extraction (isolating specific formal and aesthetic features of moving images). A second, very promising direction applies “deep learning” approaches that employ convolutional neural networks to identify objects and actions in motion pictures. These tools can be combined with the “manual” (human-produced) annotation tools described above to create synthetic and iterative research workflows.
Developing new research questions around these workflows will transform the value of media archives and foster new interdisciplinary research and curricular goals for visual culture studies in the 21st century.