MEP and DEV Studio Receive Mellon Foundation Public Knowledge Grant to Advance Fair Use Under the Digital Millennium Copyright Act
The 1998 Digital Millennium Copyright Act (DMCA) was passed to bring U.S. copyright law into the network age. Though modernization was clearly needed, certain provisions of the DMCA have remained controversial ever since, as society tries to balance rewarding creators with supporting an open, shared culture. In 2021, the Librarian of Congress agreed that some exceptions to the DMCA's anti-circumvention provisions were needed and created the Text and Data Mining (TDM) Exemption, permitting a big-data approach to the analysis of visual culture. The 2021 TDM Exemption is a temporary measure that will be reviewed in the coming years to determine whether it is useful and effective at unlocking large, copyrighted datasets for scholarly inquiry. Looking ahead to forthcoming decisions about whether the TDM Exemption should be renewed, in 2022 the Mellon Foundation convened scholars from across the country to develop demonstration projects advancing the argument that text and data mining of copyrighted content constitutes an application of fair use under the DMCA.
The Mellon Foundation has awarded grant funding to two Dartmouth faculty researchers to demonstrate uses of text and data mining in studies of visual culture. Mark Williams, an associate professor and director of the Media Ecology Project, and John Bell, a lecturer and director of ITC’s Data Experiences and Visualizations Studio, are delighted to have the Mellon Foundation's support to create the Deep Screens project. For over a century, motion pictures have been a primarily two-dimensional medium. Occasional forays into 3D glasses aside, most television and film have flattened the three-dimensional world into a two-dimensional image with fixed camera perspectives, defined frame borders, and monocular vision. Williams and Bell will turn those 2D frames back into 3D spaces, and in the process provide a case study supporting the extension of a copyright rule critical to digital cultural heritage research.
Over the next 12 months, Deep Screens will use machine learning software to analyze actor poses in a vast curated collection of U.S. film and television texts from 1895 to the 1970s and extract data about their movements. This movement data will then be statistically analyzed, with derivatives and results made available in Dataverse through a partnership with the Dartmouth Library. To make the movement data more relatable, motions and gestures will also be applied to animated avatars that can be viewed in virtual reality, abstracted from the context of the original film or television text. The combination of quantitative analysis of the data itself and qualitative viewing of the abstracted movements will provide insight into how acting, cinematography, and technology have evolved across the span of moving image history.
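To give a flavor of the quantitative stage of such a pipeline, the sketch below shows one minimal way movement data derived from per-frame pose keypoints might be summarized. This is an illustration only, not the project's actual software: the joint names, coordinates, and the displacement metric are all invented for the example.

```python
# Minimal sketch: given per-frame 2D pose keypoints (of the kind a pose
# estimator might emit for an actor), summarize how far each joint moves
# over the clip. All data here is invented for illustration.
import math

def joint_displacement(frames):
    """Total Euclidean distance traveled by each joint across frames.

    frames: list of dicts mapping joint name -> (x, y) image coordinates.
    Returns a dict mapping joint name -> total displacement in pixels.
    """
    totals = {}
    # Walk consecutive frame pairs and accumulate per-joint step distances.
    for prev, curr in zip(frames, frames[1:]):
        for joint, (x1, y1) in prev.items():
            x2, y2 = curr[joint]
            totals[joint] = totals.get(joint, 0.0) + math.hypot(x2 - x1, y2 - y1)
    return totals

# Three invented frames of a two-joint "actor".
frames = [
    {"wrist": (100.0, 200.0), "elbow": (120.0, 180.0)},
    {"wrist": (103.0, 204.0), "elbow": (120.0, 181.0)},
    {"wrist": (109.0, 212.0), "elbow": (121.0, 181.0)},
]
print(joint_displacement(frames))  # {'wrist': 15.0, 'elbow': 2.0}
```

Summaries like these, computed across thousands of clips, are the kind of derivative data that could be deposited in a repository such as Dataverse for further statistical study.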
A project that uses hundreds of hours of commercial film and television copied from over 800 DVDs would normally be legally fraught: in addition to the underlying United States copyright that protects the narrative content of videos, it is illegal to circumvent the digital encryption protecting video distributed on many DVD, Blu-ray, and streaming sources. The anti-circumvention rule defined in section 1201 of the DMCA has been widely recognized by researchers as too restrictive, with potential unintended consequences ranging from preventing the kind of research Bell and Williams are undertaking to contributing to a future ‘digital dark age’ in which a century of cultural production is lost.
Bell and Williams will work on Deep Screens throughout 2023 with a team of colleagues at Dartmouth, including Assoc. Prof. Desirée Garcia (Latin American, Latino, and Caribbean Studies), Asst. Prof. SouYoung Jin (Computer Science), and the Dartmouth Library, as well as an international group of researchers. They believe this initial foray into examining the third dimension of film will be just the beginning of what is possible using this sort of 2D to 3D machine learning technology. Williams notes that “these digital tools and methods will allow us to pursue new elaborate research questions regarding the poetics of both generic and innovative classical and minority cinemas, plus the symbiosis of television, as moving images became the dominant visual culture of the 20th century.”
“Deep Screens is an opportunity to not just do new, experimental research on film and media history but also to support the concept of critical scholarship as a whole,” says Bell. “Mellon is providing an opportunity to impact the ongoing debate about who owns digital culture and what the rights of its viewers and critics will be in the future.”
For more information about Deep Screens, please visit: