Knight Prototype Grant for MEP and Visual Learning Group!

We are delighted to announce that the Knight Foundation has awarded a Prototype Grant for Media Innovation to The Media Ecology Project (MEP) and Prof. Lorenzo Torresani’s Visual Learning Group at Dartmouth, in conjunction with The Internet Archive and the VEMI Lab at The University of Maine.

“Unlocking Film Libraries for Discovery and Search” will apply existing software for algorithmic object, action, and speech recognition to a varied collection of 100 educational films held by the Internet Archive and Dartmouth Library. We will evaluate the resulting data to plan future multimodal metadata generation tools that improve video discovery and accessibility in libraries.

We are delighted to be working with Prof. Torresani; Dimitrios Latsis at The Internet Archive; John Bell (architect of MEP) of Information Technology Services and The Academic Commons at Dartmouth; and Prof. Nicholas Giudice of the Virtual Environment and Multimodal Interaction Lab at The University of Maine.

Where the library of the 20th century focused on texts, the 21st century library will be a rich mix of media, fully accessible to library patrons in digital form. Yet the tools that allow people to easily search film and video in the same way that they can search through the full text of a document are still beyond the reach of most libraries. How can we make the rich troves of film/video housed in thousands of libraries searchable and discoverable for the next generation?

Dartmouth College’s Media Ecology Project and the Visual Learning Group will conduct a six-month prototype to apply tools for object, action, and speech recognition to a rich collection of one hundred educational films. Each of these tools generates annotations that describe one aspect of a particular film, but they have rarely been combined to create a larger contextual view of the contents of that film. What was once a roll of film, indexed only by its card catalog description, will soon be searchable scene by scene, adding immense value for library patrons, scholars, and the visually impaired.
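The idea of combining per-tool annotations into one searchable, scene-level view can be sketched in a few lines. This is a minimal illustration, not the project's actual pipeline: the `Annotation` record, the fixed 30-second scene windows, and the sample labels are all hypothetical assumptions made for the example.

```python
from dataclasses import dataclass

@dataclass
class Annotation:
    source: str   # hypothetical recognizer name: "object", "action", or "speech"
    start: float  # seconds into the film
    end: float
    label: str    # e.g. "tractor", "plowing", "harvest"

def build_scene_index(annotations, scene_length=30.0):
    """Group annotations from all recognizers into fixed-length scenes,
    collecting the set of labels seen in each scene."""
    index = {}
    for ann in annotations:
        scene = int(ann.start // scene_length)
        index.setdefault(scene, set()).add(ann.label.lower())
    return index

def search(index, query, scene_length=30.0):
    """Return (start, end) time ranges of scenes whose labels match the query."""
    q = query.lower()
    hits = [s for s, labels in sorted(index.items()) if q in labels]
    return [(s * scene_length, (s + 1) * scene_length) for s in hits]

# Example: annotations from three hypothetical recognizers on one film
anns = [
    Annotation("object", 12.0, 15.0, "tractor"),
    Annotation("action", 13.5, 20.0, "plowing"),
    Annotation("speech", 95.0, 99.0, "harvest"),
]
idx = build_scene_index(anns)
print(search(idx, "tractor"))  # time ranges of scenes containing a detected tractor
```

The point of the sketch is the merge: once object, action, and speech annotations share a common timeline, a single query can surface the exact scenes of a film rather than the whole reel.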

Most current progress in applying computer vision and machine learning to moving-image culture is devoted to contemporary, high-definition video formats such as footage shot on recent cell phones. Part of the significance of this project will be enabling essential first steps in object and action recognition for historical film/video formats, giving the computer vision and machine learning community an incentive to develop new research capacity around footage from earlier eras. That capacity would be transformative for archives and libraries, realizing extraordinary new use value for historical moving images as essential resources of public memory, a central goal of the Media Ecology Project. Through further development and full-scale integration of these tools, we hope eventually to bring that value to many more libraries, archives, and their patrons.

Grateful to the Knight Foundation for this opportunity!

#newschallenge, #prototype, #knightfoundation, @knightfdn