Simple item record

dc.contributor.author: Sayers, Jentery
dc.date.accessioned: 2020-05-21T20:42:13Z
dc.date.available: 2020-05-21T20:42:13Z
dc.date.issued: 2013-09-14
dc.identifier.uri: http://hdl.handle.net/1808/30379
dc.description: Keynote Talk. Digital Humanities Forum: Return to the Material. University of Kansas. September 14, 2013: http://idrh.ku.edu/dhforum2013

Full slide deck available at: http://uvicmakerlab.github.io/conferenceMaterials/kansas2013/keynote

Jentery Sayers is at the University of Victoria.
dc.description.abstract: Since its initial role in artificial intelligence research during the early 1970s, computer vision — defined, for the purposes of this talk, as the automated description and reconstruction of the physical world (including its subjects and objects) through algorithms — has grown increasingly accessible to a wide variety of audiences through a broad range of consumer electronics. For instance, consider the number of cultural heritage projects relying extensively on optical character recognition. Or, in commonplace apps like iPhoto, note the use of face detection techniques for image description and searching. Elsewhere, web-based repositories such as Thingiverse are housing museum collections (e.g., at the Art Institute of Chicago) of 3D scans and print-on-demand models generated by both staff and patrons. And now Kinect hacks are practically ubiquitous on the web, with people regularly repurposing the sensor to create games, build DIY robots, and construct playful interfaces.

Unpacking these phenomena across academic and popular domains, this talk highlights the need for digital humanities practitioners not only to engage how computer vision is embedded in our research but also to explore how it actively transduces our materials, with an emphasis on the production of prototypes — or “fabrications” — that do not yet exist in the physical world. Here, the talk draws examples from recent research conducted by the Maker Lab in the Humanities at the University of Victoria, where — through its “Z-Axis” research initiative — practitioners are conducting experiments in stitching (i.e., translating 2D photos into 3D models), decimation (i.e., reducing the polygon count of models), and displacement (i.e., pushing and pulling the geometry of models to generate depth and detail) in order to articulate new-form arguments about literary and cultural histories.

The Lab’s Z-Axis methodologies develop existing digital humanities research in speculative computing (Drucker and Nowviskie), geospatial expression (Moretti), data visualization (Manovich), algorithmic criticism (Ramsay), and ruination (McGann, Sample, and Samuels) in order to: 1) build persuasive objects that, like written essays, function as scholarship; 2) explore the potential of 3D techniques, desktop fabrication, and critical making for humanities research; 3) open material culture and history to unique modes of perception and interpretation; and 4) resist quotidian assumptions that computer vision affords neutral, high-fidelity replicas of our lived, social realities. To “lie” with computer vision, then, is to tinker with its default settings and transductions, reconfigure them, and mobilize them toward novel and unanticipated forms of scholarly persuasion.
dc.relation.isversionof: https://youtu.be/H4arwvdC7Z4
dc.subject: Digital
dc.subject: Humanities
dc.subject: Digital Humanities
dc.subject: Computer Vision
dc.subject: Desktop Fabrication
dc.subject: 3D Technologies
dc.title: Fabrications, or How to Lie with Computer Vision
dc.type: Video
dc.rights.accessrights: openAccess
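
The decimation and displacement operations defined in the abstract are standard mesh-processing steps. Below is a minimal sketch, assuming the Open3D Python library and a hypothetical input file "model.ply"; the Maker Lab's actual Z-Axis toolchain is not documented in this record.

```python
# Hedged sketch of decimation and displacement, as defined in the abstract.
# Assumptions: Open3D is installed, and "model.ply" is a placeholder scan.
import numpy as np
import open3d as o3d

mesh = o3d.io.read_triangle_mesh("model.ply")  # hypothetical input path

# Decimation: reduce the polygon count while keeping the overall shape.
decimated = mesh.simplify_quadric_decimation(target_number_of_triangles=5000)

# Displacement: push and pull vertices along their normals to add depth.
decimated.compute_vertex_normals()
vertices = np.asarray(decimated.vertices)
normals = np.asarray(decimated.vertex_normals)

# Illustrative displacement map (a sine field over x); in practice the
# offsets would be driven by the source texts or images of the argument.
offsets = 0.01 * np.sin(10.0 * vertices[:, 0])
decimated.vertices = o3d.utility.Vector3dVector(vertices + normals * offsets[:, None])

o3d.io.write_triangle_mesh("fabrication.ply", decimated)
```

Stitching, the third operation named in the abstract, typically relies on a full photogrammetry pipeline and is not sketched here.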

