Abstract
Forging new pathways for reading with an AI-controlled laser.
Things have forgotten what the shapes are for (2022) is an automated art system that algorithmically burns books. More specifically, the custom-built system leverages computer vision and artificial intelligence, calculation, choice and chance to precisely laser-cut away parts of books. Through a process reminiscent of the 20th-century art techniques of cut-up and collage, the system assists in the exploration of new relationships between the images and texts across different pages of any book.
How does it work?
After an initial analysis of each page through computer vision, the system updates a database of the publication’s semantic, semiotic and sentiment information before determining the optimal areas of paper to remove with its laser. Here “optimal” is treated as a somewhat indeterminate intersection of a number of variables (programmed uncertainty¹), including considerations of colour, contrast and content on the given page, along with the potential preservation or revelation of matter on underlying pages. The system compares each new page with every preceding page in an attempt to uncover salient features and significant combinations of imagery across different pages. It repeats the process for every page of the book, building various “wormholes” that travel through the book in ways entirely alien to the typical pathways of human reading². Notably, the system has been designed so that the book retains its original cover and binding throughout this process.
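The per-page loop can be pictured in a short sketch. The Python below is a minimal, hypothetical illustration (OpenCV for the vision step); the function names, window size and weights are assumptions for exposition, not the project’s published code, and the real system would also weigh semantic and sentiment features rather than pixel statistics alone.

```python
import cv2
import numpy as np

def page_features(image: np.ndarray) -> dict:
    """Colour, contrast and content features for one scanned page."""
    grey = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    return {"grey": grey,
            "contrast": float(grey.std()),              # rough contrast measure
            "ink_density": float((grey < 128).mean())}  # proportion of dark pixels

def candidate_regions(grey: np.ndarray, size: int = 64):
    """Slide a window over the page, yielding (x, y, crop) candidates."""
    h, w = grey.shape
    for y in range(0, h - size, size):
        for x in range(0, w - size, size):
            yield x, y, grey[y:y + size, x:x + size]

def score_cut(crop: np.ndarray, preceding: list[dict],
              w_contrast: float = 0.6, w_reveal: float = 0.4) -> float:
    """Weighted 'optimality' of cutting here: local contrast on the current
    page, balanced against what removal could reveal on pages underneath."""
    reveal = max((p["ink_density"] for p in preceding), default=0.0)
    return w_contrast * float(crop.std()) / 255.0 + w_reveal * reveal
```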
The removal of the shape(s) occurs by driving a high-powered laser module that, through amplification and focus, uses concentrated light to burn the page. This burning process evokes literary and historic precedents of book burning, censorship, information control and mass media. The project also engages with post-digital publishing practices that seek to explore a new materiality of the book as medium, as its more traditional role in information and cultural distribution is disrupted by networked computing. The AI and machine-learning algorithms and libraries on which the system is built are necessarily informed by the mass digitisation of the book; how this ‘artificial reading’ informs the process creates informatic and philosophical short-circuits and feedback loops.
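To make the cutting step concrete, the sketch below shows one hypothetical way a chosen rectangular area could be translated into toolpath commands for a laser module. The GRBL-style G-code dialect (G0/G1 moves, M3/M5 laser on/off) and the power and feed values are illustrative assumptions, not the project’s actual firmware interface.

```python
def region_to_gcode(x: float, y: float, w: float, h: float,
                    power: int = 800, feed: int = 1200) -> list[str]:
    """Trace the perimeter of a rectangle with the laser firing."""
    return [
        f"G0 X{x:.2f} Y{y:.2f}",              # rapid move to start, laser off
        f"M3 S{power}",                        # laser on at the given power
        f"G1 X{x + w:.2f} Y{y:.2f} F{feed}",   # burn along the four edges
        f"G1 X{x + w:.2f} Y{y + h:.2f}",
        f"G1 X{x:.2f} Y{y + h:.2f}",
        f"G1 X{x:.2f} Y{y:.2f}",               # close the loop
        "M5",                                  # laser off
    ]
```

In practice the removed shapes need not be rectangles; any closed contour found by the vision stage could be traced in the same way.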
The entire process, which takes between one and three hours per book, proceeds automatically with only minimal human intervention (e.g. turning the pages of the book). Once initiated, the system inexorably destroys the original while producing a unique new work. As with the Ship of Theseus, progressively transformed through constant repair, our project tests the idea of the book itself. At which point have we removed enough material from the original for it to become a different book, or something else altogether? Finally, what can these remains of the book still offer?
1 We refer to these systems as exhibiting programmed uncertainty: all potential outcomes are foreseeable, being the result of a coded process, yet the embedded potential for variability, introduced through weighted decision pathways, renders the possible combinations of readings of any given book beyond the scope of human prediction.
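A toy illustration of this notion, in Python (not the project’s code): every branch is enumerable, yet weighted selection compounded over hundreds of pages makes the combined outcome practically unpredictable.

```python
import random

def choose_action(candidates: list[str], weights: list[float],
                  rng: random.Random) -> str:
    """One weighted decision pathway: each outcome is foreseeable in isolation."""
    return rng.choices(candidates, weights=weights, k=1)[0]

rng = random.Random()
# Three foreseeable outcomes per page; over ~300 pages the space of
# combined readings (3**300) is far beyond human prediction.
for page in range(3):
    print(page, choose_action(["cut", "preserve", "reveal"], [0.5, 0.3, 0.2], rng))
```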
2 Generally speaking, both human and, by design, machine-learning reading ‘pathways’ serve to parse the traditional grid-like organisation of the printed page (i.e. words and images distributed vertically or horizontally from the top to the bottom of the page; from left to right in Western scripts; from right to left in Middle Eastern and some East Asian scripts). Similarly, computational parsing of images has been trained on human gaze-based image-recognition processes.
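Saliency detection is one concrete form of such parsing; many saliency models are trained directly on human eye-tracking data. The sketch below, assuming the opencv-contrib-python package (which provides cv2.saliency) and a hypothetical scan file, uses OpenCV’s spectral-residual saliency, a heuristic rather than a gaze-trained model, simply to show the shape of the operation.

```python
import cv2

image = cv2.imread("page_scan.png")  # hypothetical scanned page
if image is not None:
    saliency = cv2.saliency.StaticSaliencySpectralResidual_create()
    ok, saliency_map = saliency.computeSaliency(image)
    if ok:
        # Brighter areas approximate where attention (or a gaze) would land first.
        cv2.imwrite("page_saliency.png", (saliency_map * 255).astype("uint8"))
```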