Books traditionally have edges: some are rough-cut, some are smooth-cut, and a few, at least at my extravagant publishing house, are even top-stained. In the electronic anthill, where are the edges? The book revolution, which, from the Renaissance on, taught men and women to cherish and cultivate their individuality, threatens to end in a sparkling cloud of snippets. So, booksellers, defend your lonely forts. Keep your edges dry. Your edges are our edges. For some of us, books are intrinsic to our sense of personal identity.
—John Updike, “The End of Authorship”1
Fixity, or the idea of a stable, standardized, and reliable text, ready to endure the ages, is a quality that often gets attributed to printed codex books—so much so that it has come to signify one of the essential defining elements of what we perceive a book to be today: a collection of bound pages. Fixity here relates to the bound nature of the printed codex book in a spatial sense, but it also refers to the book’s stability, continuity, and durability as a means of communication over time. This is because the combination of bound and easily duplicated printed editions of texts has offered an excellent preservation strategy.2 Fixity, however, not only emerged in connection with the medial, technological, and material affordances of the printed book, exemplified by developments in design and by typographic elements—look, for instance, at cover pages, titles, chapters, standardized fonts, indices, and concordances, all of which were instrumental in turning the book into a fixed object that is easy to navigate. Fixity also advanced as part of the practices, institutions, and discourses that surround the printed book, as I briefly touched upon in the previous chapters. Here, concepts and practices such as authorship, the ownership of a work, and copyright were instrumental in fixing, legally and morally, the contents of a book.3 Moreover, and as discussed in chapters 3 and 4, books have also been sold and disseminated as finalized and bound commodities by (scholarly) publishers, as well as being preserved and indexed by our libraries and archives as permanent, stable, and solid artifacts.
The concept of gathering plays an important role in creating fixity, as emphasized in commentaries on Mallarmé’s Un Coup de Dés (see Figure 1) by both Blanchot and Derrida.4 Binding takes place here in the sense of “gathering together from dispersion,” something that, as Derrida has argued, is essential to the idea of the library too. Readers also bind and gather a book together through their reading practices, both conceptually—cutting it down in their interpretation or meaning-giving—and practically. For instance, when it comes to hypertexts, it is specific readings that serve to bind disparate routes and texts together. In an online environment, readers as writers cut, paste, and gather dispersed networked nodes together in fluid digital scrapbooks and book collections. However, alongside these practices and institutions, there have also been strong cultural discourses that have stimulated the bound nature of the book, promoting its perception as a finished and completed object, the culmination of a writer’s work. This discourse is strongly embedded in academia, in which the published book is most often perceived as the endpoint of the research process, especially in certain areas of the humanities. Similarly, it is common practice in many humanities disciplines for an academic only to become an author or a researcher in the true sense, eligible for employment, tenure, promotion, and so forth, once their first book has been published. Here the book fixes or determines the author in a similar way too.
This chapter analyzes the discursive-material practices that have promoted the idea and use of the academic book as a fixed object of communication. The printed codex book has come to exemplify durability, authority, and responsibility, as opposed to the more fluid, flowing visions of information transmission that are commonly attached to oral cultures and exchanges and, more recently, to digital forms of communication. This alternative fluid or liquid vision of communication carries important consequences for scholarly research, which, one could argue, has based its modern existence on the reliable transmission of research results. Under the influence of digital technology, what is seen as the essential fixed and bound nature of the book has, however, increasingly given way to visions of the rhizomatic, the fluid, the wikified, the networked, and the liquid book—as well as to other, similar entities that explore the book’s potential unbinding. What do these more fluid forms entail for the idea of the limits or the edges of the book? Can a collection of texts, pages, or websites still be called a book without some form of enduring stability? What would a potential unbinding entail for academic research? Especially when bound and stable texts have been of fundamental importance to our ideas of science and scholarship: to ensure that experiments can be repeated according to the same conditions in which they were originally conducted; as a preservation mechanism to make sure academics have access to the research materials they need; but also as a means to ensure that authors can take responsibility for certain fixed and relatively unchangeable sequences of text, guaranteeing a work’s integrity. Will we be able to imagine new forms of scholarship and preservation of research that no longer rely so strongly on the idea of a fixed and stable text? Will we be able to allow for more fluidity in our age of virtually unlimited digital dissemination and storage capabilities?
When considering these questions, it might be beneficial to look at them from a different angle. For it can also be argued that books have never been fixed, stable, and linear, and that print as a medium and technology is not and has never been able to guarantee fixity—not least because fixity is for a large part embedded in social structures.5 Similarly, the way in which digital media have been taken up in academic publishing—their potential for unbinding the book notwithstanding—mostly mirrors the practices of fixing and stabilizing that were introduced and further developed through print media. It can even be argued that, with its potential for unlimited storage, the digital is much better suited to creating forms of fixity than print ever was. This becomes obvious if we look at Wikipedia. Its MediaWiki software has made it much easier to preserve changes to a text and therefore to detect and track these changes. All alterations to, and revisions of, a text can now conceivably be saved.6 Therefore, the preservation capacities of the net can offer texts far more durability—and in that sense, stability—than print ever could.7
In this respect, it might be more useful to start thinking beyond dialectical oppositions such as bound/unbound and fixed/fluid, and to explore the idea of research being processual—although it also necessarily needs to be bound and cut at some point for us to make sense of it. If we then conceive the book as a potential form of binding or gathering this processual research together, we may be able to start to shift our focus toward questions of why it is that we cut and bind.
It is these questions that are explored in this chapter, through an analysis of the demarcations or boundaries that we as academics enact. This includes an examination of the bindings that are made for us by the book’s changing materiality and by the institutions, discourses, and power struggles that have grown up around it. The question then becomes: How can we rethink the way we cut and paste our processual research together? Especially in a context in which the boundaries that are enacted (including forms of print fixity) are actually unstable, as we iteratively produce research and books through our incisions and boundary-making practices. How can we start to rework these forms of binding? And what role can the book continue to play in these processes of gathering and collecting? It is important to emphasize here that books are not determinate objects in themselves that are bound or unbound or that have inherent properties and boundaries. Books emerge from specific intra-actions or phenomena, which, in Barad’s words, “do not merely mark the epistemological inseparability of observer and observed, or the results of measurements; rather, phenomena are the ontological inseparability/entanglement of intra-acting ‘agencies.’”8 In this sense, and as I have argued previously, it is through our practices of binding and unbinding books, cutting our research together and apart, that both the book as we know it and we ourselves as scholars arise.
Rethinking how we bind research therefore includes asking questions about who and what binds and about the ways in which we currently gather our research together. What are the media-specific factors in the book’s material becoming that force forms of binding on us in their intra-actions with our institutions and practices? In which specific ways do these material structures currently tie our research and our books together, and what new forms of (digital) gathering do they propose?
This chapter starts with a section that outlines how certain authoritative scholars within the book-historical field have helped further construct, historiographically, an image of the book as fixed and bound, and how they have done so by focusing on how the printed book, in its materiality and through our institutions and practices, historically developed the forms of fixity and trust that we are accustomed to today. The following section then analyzes several recent digital experiments that have explored the unbinding of scholarly research, most notably in the form of fluid, remixed, and modular (scholarly) books and projects that are focused on remixed authorship and digital archives. I argue that these unbound book alternatives are not so much examples of unbinding as proposals for alternative ways of gathering research together. This section focuses on some of the critiques these experiments have formulated concerning the ways we bind and are being bound, while analyzing some of the different forms of cutting and pasting that are currently being put forward. The fact that these alternative projects and practices do not so much unbind as propose new forms of gathering—forms that still seem to mirror, in the main, our codex-based forms of closure (e.g., via authorship, copyright, design, and interface)—shows how difficult it is to let go of the methods of binding developed as part of the print paradigm.
Nonetheless, it is important to challenge, critique, and rethink some of the major practices and institutions of gathering and fixity we currently adhere to, from copyright to authorship to the book as a published object and commodity. It is important to do so not only to challenge the humanist focus on essentialized notions such as the unity of the work and the individual author, but also to counter the problems created by the book-bound commodity fetish within academic publishing, which I discussed in chapters 3 and 4. This includes investigating the power structures and interests that are invested in maintaining stable texts and that determine when a text is fixed and finalized, and for what reasons. For instance, commercial interests promote the creation of heavily copyrighted or DRM-controlled academic works, which, it can be argued, stand in the way of the more widespread sharing and dissemination of scholarly research online. The current communication model is based on codex-shaped journals and books with stable and static content, a situation that protects the integrity of the liberal author’s work.
In this context, experiments with alternative hypertextual and multimodal forms of publishing, or with reuse, updating, and versioning, are hard to sustain. And this is the case even though these experiments with the form and shape of publications could offer us ways to rethink and reperform scholarly communication in a different and potentially more ethical way, along with offering us the possibility to explore what Tara McPherson has referred to as emergent genres for multimodal scholarship.9 This could include exploring the capacity of new technologies to produce scholarship at various scales—for example, audiovisual and sensory ones—but also reappreciating humanities scholarship from the perspective of aesthetics or, as McPherson argues, “how multimodal expression might allow for different relationships of form to content.” This includes the potential of the digital to better accommodate movement and change. But it also allows, or even demands, a further questioning of how computation, which, as McPherson remarks, has “long been deeply intertwined with visuality, aesthetics, and the sensory,” intersects with both the humanities and the human.10
What could be the potential in alternative unbound book projects to re-envision the way we perceive the book and do research, to explore different forms of cutting and binding, and to promote forms of processual research? Are there other ways of binding that do not necessarily close down research and the book (and even ourselves as scholars) by means of strict forms of authorship and copyright, for example? And what does it mean for the political potential of the book as a medium through which we can reimagine alternative futures for scholarly communication, if we uncritically foreclose its open-endedness?
Here it is again worth emphasizing—and this is something scholars of bibliography and critical editing are already intensely familiar with—that print has always been an unstable medium and only offers, as Johanna Drucker has rightly noted, “the illusion of fixity.”11 As she continues: “A book is a snapshot of a continuous stream of intellectual activity. Texts are fluid. They change from edition to edition, from copy to copy, and only temporarily fix the state of a conversation among many individuals and works across time. . . . A book is a temporary intervention in that living field.”12 The second half of this chapter explores this idea of texts and books as forms of temporary intervention and fixing in more depth by looking at the concept of the cut as theorized in new materialism, continental philosophy, and remix studies. Again, this analysis is not an attempt on my part to explore the problem of the fixity and stability of the book from a perspective of bound or unbound—where both print and digital media have the potential to bind and unbind—but rather from that of cutting and iterative boundary-making. I want to focus on how books can be shaped and bound in a way that does not foreclose or demarcate them. In this respect, this chapter asks: If we see research as an ongoing process that needs to be gathered together at some point, that needs to be cut, how can we do it differently and potentially better? Here the focus is not on the book-object unbinding, but on the processes of research and how we can imagine different cuts to stabilize it. That is, how can we give meaning to its fluidity by making the right incisions?
From Orality to Fixity?
One of the main points of contention concerning the development of fixity as a material condition and concept remains the question of whether a book can ever be defined as a stable text, and, if so, whether this quality of stability and fixity is an intrinsic element of print—or, to a lesser extent, of manuscripts—or whether it is something that has been imposed on the printed object by historical actors. How did a selection of influential scholars, who have played a key role in shaping the book-historical discourse, frame questions around the book’s permanence and durability along the lines of this binary analysis, and how did this framing play an important part in the discursive construction of the book as bound and gathered?
On the one hand, book historians have identified standardization and uniformity as properties integral to the development of print technology. In The Printing Press as an Agent of Change, Eisenstein analyzes how print influenced many aspects of scholarship and science. As she argues, print influenced the dissemination, standardization, and organization of research results, but it also impacted upon data collection and the preservation, amplification, and reinforcement of science.13 Books became much cheaper, and a more varied selection of books was available, to the benefit of scholars. This encouraged the transition from the wandering to the sedentary scholar and stimulated the cross-referencing of books. Increasingly, printers also began standardizing the design of books. They started by experimenting with the readability and classification of data in books, introducing title pages (see Figure 3), indexes, running heads, footnotes, and cross-references.14 Yet as Eisenstein, but also McLuhan and Ong, have emphasized, scholars benefited most from the standardization of printed images, maps, charts, and diagrams, which had previously proven very difficult to multiply identically by hand. This was essential for the development of modern science, they maintain.15
Yet others, including Ong, contend that fixity was already enabled by preceding technologies. For Ong, it is writing and literacy that are inherently connected to fixity and stability; he argues that scientific thinking should be seen as a result of writing, for instance. In opposition to Eisenstein, who emphasizes the fixity brought about by printing in comparison to the scribal culture that preceded it, Ong focuses more on the relationship between orality and literacy—specifically, on the differences in mentality between oral and writing cultures. The shift from orality to writing, he argues, is essentially a shift from sound to visual space, where print mostly had effects on the use of the latter. Writing, he states, locks words into a visual field—as opposed to orality, in which language is much more flexible.16
Eisenstein, however, emphasizes that fixity could only really come about with the development of print. She sees standardization and uniformity as properties of print culture, properties that were usually absent in a predominantly scribal environment.17 No manuscript at that time could be preserved without undergoing corruption by copyists, she argues.18 Long-term preservation of these unique objects also left a lot to be desired, as the use of manuscripts led to wear and tear, while moisture, vermin, theft, and fire all meant that “their ultimate dispersal and loss was inevitable.”19 Although printing required the use of paper, which is much less durable than either parchment or vellum, the preservative powers of print, Eisenstein emphasizes, lay mainly in its strategy of conservation by duplication and making public: printing a lot of books and spreading them widely proved a viable preservation strategy.
Eisenstein similarly points out that printing, through its powers of precise reproduction, helped spread a number of cultural revolutions (i.e., the Renaissance, the Reformation, and the scientific revolution)—revolutions that were, as she claims, essential in the shaping of the modern mind.20 Febvre and Martin also explored the influence of the book on the Renaissance and the Reformation, analyzing print’s causes and effects as part of a socioeconomic history of book production and consumption over a long period of time. Being slightly more cautious, they wonder how successful the book has been as an agent for the propagation of new ideas.21 They see preservation through duplication and (typographic) fixity as basic prerequisites for the advancement of learning, agreeing that it was print that gave the book a permanent and unchanging text.22 However, for them printing is just part of a set of innovations. The printing press is only one of a number of actors in the general social and political history they try to reconstruct.
Although Eisenstein acknowledges this plurality of actors, in her view print was the main agent of change impacting the revolutionary developments detailed previously. She argues that print builds on previous achievements but emphasizes that its preservative powers were more permanent than those of the systems that preceded it: print revolutionized them. Where scribal copying ultimately led to more mistakes and corruption of the text, successive print editions allowed for corrections and improvements to be made. With fixity, Eisenstein explains, came “cumulative cognitive advance.”23 Even if the printing press also multiplied and accelerated errors and variants—and many errata had to be issued—the fact was that errata could now be issued. Therefore, she states, print made corruption more visible at the same time.24 In Eisenstein’s vision, this print-enabled fixity was essential for the development of modern science. Texts, she states, were now sufficiently alike for scholars in different regions to correspond with each other about what was, to all intents and purposes, a uniform text. Networks of correspondents were created, which in turn led to new forms of feedback that had not been possible in the age of scribes. This again influenced the scientific method and the modern idea of scientific cooperation.
Print, however, went further than just encouraging popularization and propaganda and the mere spreading of new ideas.25 It was the availability and access to diverse materials that was really revolutionary, Eisenstein argues. Permanence was also able to bring about progressive change, she states, where “the preservation of the old . . . launched a tradition of the new.”26 From valuing the ancients, the emphasis increasingly came to be placed on admiring the new. According to Eisenstein, the communications revolution created a “fixed distance in time,” influencing the development of a modern historical consciousness. McLuhan similarly claims that with print, a “fixed point of view” became possible; print fosters the separation of functions and a specialist outlook.27 Eisenstein confesses that it is hard to establish how exactly printed materials affected human behavior; nonetheless, enhanced access to a greater abundance of records and a standardization brought about by printing did influence the literate elite, she argues.28 For example, printing standardized vernacular languages and led to the nationalization of politics (increasingly, political documents were written in the vernacular) and the fragmentation of Latin. Drawing on McLuhan, Eisenstein also shows how the thoughts of readers are guided by the way the contents of books are arranged and presented. Basic changes in book format led to changes in thought patterns; for example, standardization helped to develop a new esprit de système (including systematic cataloging and indexing).29 She also makes a clear claim for the importance of print for the development of the Reformation: the press was the ultimate propaganda machine. However, Eisenstein points out that print not only diffused Reformation views but also shaped them: print stabilized the Bible (and scholars were being provided with Greek and Hebrew texts), and its availability in vernacular languages changed who read the Bible and how they read it.30
In contrast to Eisenstein’s arguments about the agency of print in establishing fixity, Adrian Johns, among others, emphasizes that it is not printing per se that possesses preservative power, but the way printing is put to use in particular ways. If we reassess the way print has been constructed, Johns argues, we can contribute to our historical understanding of the conditions of knowledge itself, how it emerged and came to depend on stability. Printed books themselves do not contain attributes of credibility and fixity—which are features that take much work to maintain—and as such printed records were not necessarily authorized or faithful, Johns remarks. According to Johns, it was the social system then in place, not the technology, that needed to change first in order for the printing revolution or print culture to gain ground.31
Johns brings the cultural and the social to the center of our attention through his interest in the roles of historical figures (i.e., readers, authors, and publishers) in bringing about fixity. He argues that Eisenstein neglects the labors through which fixity was achieved, to the extent that she describes what Johns sees as the results of those labors as powers or agency intrinsic to texts instead. For Johns, then, fixity is not an inherent quality but a transitive one; fixity exists only inasmuch as it is recognized and acted upon by people—and not otherwise. In this sense, fixity, he states, is the result of manifold representations, practices, and, most importantly, conflicts and struggles that arise out of the establishment of different print cultures.32
Roger Chartier similarly argues against the direct influence of print on readers’ consciousness. He is interested in how books as material forms do not impose but command uses and appropriations. In his vision, works have no stable, universal, or fixed meaning as they are “invested with plural and mobile significations that are constructed in the encounter between a proposal and a reception”; in other words, Chartier’s route map to a history of reading is based on the paradox of the freedom of the reader versus the order of the book: How is the order of the book constructed, and how is it subverted through reading?33 As part of his work as a historian, he reconstructs the variations in what he calls the espaces lisibles, the texts in their discursive and material forms, and the variations that govern their effectuation.34
Although Johns acknowledges that print to some extent led to the stabilization of texts, he questions “the character of the link between the two.”35 For him, printed texts were not intrinsically trustworthy, nor were they seen as self-evidently creditable in early modern times, when piracy and plagiarism and other forms of “impropriety” were widespread. This meant that the focus was not so much on “assumptions of fixity,” as Johns calls it, but on “questions of credit” and on the importance of trust in the making of knowledge.36 Print culture came about through changes in the conventions of civility and in the practice of investing credit in materials (i.e., by the historical labors of publishers, authors, and readers) as much as through changes in technology, he argues.37 Johns is therefore interested in how knowledge was made (where knowledge is seen as contingent). How did readers decide what to believe?
Reading practices were very important in the appraisal of books, Johns points out; especially with respect to piracy, the credibility of print became a significant issue, one with both economic and epistemic implications.38 As discussed in previous chapters, the character of a printer or stationer was very influential in the establishment of trust or credit. This trust, Johns explains, was related to a respect of the principle of copy, meaning the recognition of another (printer’s) prior claim to the printing of a work, based on a repudiation of piracy. As Johns shows, the stationer’s name on a book’s title page could tell prospective readers as much about its contents as could the author’s name.39 The character of booksellers mattered, too, he notes, as they determined what appeared in print and what could be bought, sold, borrowed, and read. Readers thus assessed printed books according to the places, personnel, and practices of their production and distribution. To contemporaries, Johns argues, the link between print and stable or fixed knowledge seemed far less secure, not least because a certain amount of creativity (i.e., textual adaptation) was essential to the stationer’s craft, where piracy was also not unfamiliar: in fact, it was far more common than were certainty and uniform editions. Furthermore, pirates were not a distinguishable social group, existing as they did at all ranks of the stationers’ community, and at times they were among its most prominent and “proper” members, Johns explains.40 It is important in this respect to realize that piracy was not attached to an object; it was used as a category or a label to cope with print, as a tactic to construct and maintain truth claims.
The reliability of printed books thus depended in large part on representations of the larger stationers’ community as proper and well-ordered, Johns emphasizes.41 This clashed, he states, with the characteristic feature of the stationers’ commonwealth—namely, uncertainty. Print culture was characterized by endemic distrust, conspiracies, and counterfeits. The concept of piracy was used as a representation of these cultural conditions and practices as they were prevailing in the domain of print, Johns explains. With this uncertainty, it became clear that the achievement of print-based knowledge and authorship was transient.
Yet readers did come to trust and use print, Johns points out, as books were of course produced, sold, read, and put to use, meaning that the epistemological problems of reading them were, in practice, overcome.42 Trust could become possible, Johns argues, because of a disciplining regime—including elaborate mechanisms to deal with all the problems of piracy—brought about by publishers, booksellers, authors, and the wider realm of institutions and governments, exemplified for Johns by the Stationers’ Company. Licensing, patenting, and copyright were similarly machineries for producing credit, Johns points out, where the register set up by the Royal Society, together with the Philosophical Transactions—which became their trademark symbols of credibility and propriety—were also achievements that required strenuous efforts to discipline the processes of printing and reading.43 With this regime in place, Johns claims that trust in printed books could become a routine possibility.44 As he explains, however, power struggles arose regarding who gets to decide on or govern these social mechanisms for generating and protecting credit in printed books, displaying the complex interactions of piracy, propriety, political power, and knowledge. Conflicts arose over the implementation of patents and/or copyright and about the different consequences a print culture governed by a specific entity (stationers or the crown, for Johns) would face. These conflicts held, according to Johns, “the potential for a fundamental reconsideration of the nature, order, and consequences of printing in early modern society.”45
The debate outlined thus far between those who can be perceived as some of the most influential book-historical theorists shows how fixity has been narrated predominantly in a binary manner, with a focus on the effects of either technology or societal structures on the standardization and fixity print enabled. Yet what I want to put forward here is that these historical narratives further strengthen a perception of the book (either as technology or as societal construct) as stable, fixed, and permanent—notwithstanding the ambivalence that thinkers such as Johns and Chartier also introduce. As part of the historiographical dispute around the agency of print and its institutions in the development of fixity, a—perhaps unintended—outcome of this debate has been a continued focus on the more or less linear development of fixity and standardization as integral aspects (whether intrinsic or transitive) of printing and the book and of science and scholarship more generally. This narrative remains dominant at the expense of a focus on, for example, the inherent fluidity, mutability, or malleability of the book, or the open and flexible nature of scholarly publications. The next section explores examples of theorists and publishing projects that have tried to examine this preconception of, or even fixation on, print and fixity, questioning the inherent connection between stability and the book that continues to be reinstilled by both sides of the book-historical debate.
Before I turn to this next section, I want to highlight how, more recently, a new generation of book historians has started to question this preconception, focusing on the malleability of texts instead. Leslie Howsam, for example, has argued that “no consequential history of books and the cultures they inhabit will be possible until historians take mutability, not fixity, as their starting point.”46 Yet even here, with this gradual shift in the book-historical discourse, there remains a danger of the debate falling back into binary distinctions between stability and malleability (i.e., in the sense of a turn from a focus on the one, fixity, to the other, malleability). Instead of focusing on whether texts are fixed or fluid, I want to explore here why there is, and has been, a tendency within the book-historical discourse to focus on either of these characterizations. What I want to argue for instead is more reflection on how this shift in the debate—in which the perception of print as stable and fixed starts to be complicated—has a direct material influence on the object under study, the book itself: more attention, in other words, to how the discourse itself is performative. Following this thread, then, the fact that the discourse itself is changing can be understood as a response to and a reflection on the changing materiality of the book, because the digital is in many ways, as I have argued previously, making us rethink the perceived stability of the book, both online and in print. Therefore, as Bolter has argued, “it is important to remember . . . 
that the values of stability, monumentality and authority, are themselves not entirely stable: they have always been interpreted in terms of the contemporary technology of handwriting or printing.”47 These kinds of historiographical cuts, choices that are made by us as scholars in intra-action with the materiality of the book and in response to discursive fields, therefore once again show the complexity and multiplicity of agencies involved in the creation of fixity.
If we contend that—until more recently, at least—book-historical narratives have contributed to the vision of the book as fixed, durable, and bound, then they should be perceived as part of the disciplining regime Johns talks about, which has privileged certain cuts in intra-action with the book’s material becoming. While the growing use and importance of the digital medium in scholarship is affecting the materiality of the book, it is in the interplay with the established disciplining regime (which again includes the historiography of the book) that its development is being structured. An increasing interest in the communication and publishing of humanities research in what can be seen as a less fixed and more open way is nonetheless challenging the integrity of the book, something that the systems surrounding it have tried so hard to develop and maintain. Technological change has in this respect triggered a questioning of many taken-for-granted stabilizations.
Why is this disciplining regime, and the specific print-based stabilizations it promotes, being interrogated at this particular point in time? First, and as the genealogies provided previously testify, this regime has seen a continuing power struggle over its upkeep and constituency and as such has always been disputed. Nonetheless, changes in technology, and in particular the development of digital media, have acted as a disruptive force, especially because much of the discourse surrounding digital media, culture, and technology tends to promote a narrative of openness, fluidity, and change. In this respect, this specific moment of disruption and remediation brings with it an increased awareness of how the semblances of fixity that were created and upheld in, and by, the printed medium are indeed a construct, upheld to maintain certain established institutional, economic, and political (and even historiographical) structures.48 This has led to a growing awareness of the fact that these structures are formations we can rethink and perform otherwise. All of which may explain why there is currently a heightened interest in how we can intra-act with the digital medium in such a way as to explore potential alternative forms of fixity and fluidity, from blogs to multimodal and versioned publications, to wikis and networked books.
The construction of what we perceive as stable knowledge objects serves certain goals, mostly to do with the establishment of authority, preservation (archiving), reputation building (stability as threshold), and commercialization (the stable object as a reproducible product). In Writing Space: Computers, Hypertext, and the Remediation of Print (2001), Bolter conceptualizes stability (as well as authority) as a value under negotiation, as well as the product of a certain writing technology. This acknowledgment of the relative and constructed nature of stability and of the way we presently cut with and through media encourages us to conduct a closer analysis of the structures underlying our knowledge and communication system and how they are set up at present: Who is involved in creating a consensus on fixity and stability? Similarly, what forms of fluidity are allowed, and what is valued—and what is not—in this process?
It could therefore be argued that it is the specific cuts or forms of fixing and binding of scholarship that are being questioned at the moment, while the potential of more processual research is being explored at the same time: for example, via the publication of work in progress on blogs or personal websites. The ease with which continual updates can be made has brought into question not only the stability of documents but also the need for such stable objects. Wikipedia is one of the most frequently cited examples of how the speed with which factual errors can be corrected and the efficiency of real-time updating in a collaborative setting can win out over the perceived benefits of stable material knowledge objects. There has perhaps been a shift, in this respect, away from the need for fixity in scholarly research and communication toward the importance of other values, such as collaboration, quality, speed, and efficiency, combined with a desire for more autonomous forms of publishing. Scholars are using digital media to explore the possibilities for publishing research in more direct ways, often cutting out the traditional middlemen (e.g., publishers and libraries) that have become part of the print disciplining regime they often aim to critique. Accordingly, they are raising the question: Do these middlemen still serve the needs of their users, of scholars as authors and readers? 
For example, the desire for flexibility, speed, autonomy, and so on has caused new genres of formal and informal scholarly communication to arise; a focus on openness and fluidity is seen as having the potential to expand academic scholarship to new audiences; digital forms of publishing have the potential to include informal and multimodal scholarship that hasn’t been communicated particularly extensively before; and new experimental publishing practices are assisting scholars in sharing research results and forms of publication that cannot exist in print because of their scale, their multimodality, or even their genre. In what way, then, could making the processual aspect of scholarship more visible—which includes the way we collaborate, informally communicate, review, and publish our research—and highlighting not only the successes but also the failures that come with it, potentially aid in demystifying the way scholarship is produced?
From social media to blogging software, mailing lists, institutional repositories, and academic social research sharing networks (e.g., commercial services such as Academia.edu and ResearchGate or the not-for-profit Humanities Commons), scholars are increasingly moving to digital media and the internet to publish both their informal and formal research in what they perceive as a more straightforward, direct, and open way. This includes the mechanisms developed for the more formal publication of research discussed in the previous chapter, via either green (archiving) or gold (directly via a press or journal) open access publishing. Nonetheless, the question remains whether these specific open forms of publishing have really produced a fundamental shift away from fixity and its disciplinary regime and discourse. The next section therefore draws attention to a specific feature of openness, a feature that can in many ways be seen as one of its most contested aspects—namely, the possibility to reuse, adapt, modify, and remix material.49 Although remix and reuse have an extensive predigital history, the digital environment has further stimulated and facilitated remix practices, both within and outside of an academic context.50 It is this part of the ethos or definition of openness (libre more than gratis) that can be said to most actively challenge the concepts of stability, fixity, trust, and authority that have accompanied the rhetoric of printed publications for so long.51 Where more stripped-down versions of openness focus primarily on achieving greater access, and do so in such a way that the stability of a text or product need not be affected (indeed, as remarked before, the open and online distribution of books might even promote their fixity and durability due to the enlarged availability of digital copies in multiple places), libre openness directly challenges the integrity of a work by enabling different versions of a work to exist simultaneously (by allowing 
reuse rights that include derivatives). At the same time, libre forms of openness also problematize such integrity by offering readers the opportunity to remix and reuse (parts of) the content in different settings and contexts, from publications and learning materials to translations, visualizations, and data mining. Within academia, this creates not only practical problems (which version to cite and preserve, who is the original author, who is responsible for the text) but also theoretical problems (what is an author, in what ways are texts ever stable, where does the authority of a text lie). The founding act of a work—that specific function of authorship described by Foucault in his seminal article “What Is an Author?”—becomes less important for both the interpretation and the development of a work once it goes through the processes of adaptation and reinterpretation, and the meaning given as part of the author function becomes dispersed—and with that the authorial force of binding is weakened.52
Fitzpatrick discusses the repurposing of academic content in this regard, which remains problematic within a print paradigm: “What digital publishing facilitates, however, is a kind of repurposing of published material that extends beyond mere reprinting. The ability of an author to return to previously published work, to rework it, to think through it anew, is one of the gifts of digital text’s malleability—but our ability to accept and make good use of such a gift will require us to shake many of the preconceptions that we carry over from print.”53
The ability to expand and build upon, to make modifications and create derivative works, to appropriate, change, and update content within a digital environment, also has the potential to shift the focus in scholarly communication away from the publication as a fixed, final, and definitive research output and on to the process of researching.54 It is a shift that, as discussed previously, may have the ability to make us more aware of the contingency of our research and the cuts and boundaries we enact and that are enacted for us when we communicate and disseminate our findings. It is this shift away from models of print stability and toward process and fluidity (including the necessary stabilizations) that the following sections focus on in order to explore some of the ways in which both the practical and theoretical problems that are posed within this development are being dealt with at this moment in time and whether these should or can be approached differently.
I want to focus here on three alternatives in particular that have been put forward within, or have derived from, this context of reworking and remaking—suggestions for alternative concepts and performative practices to explore or deal with questions of fixity, stable authorship, and (print-based forms of) authority within more open, fluid, or networked environments—alternatives that, I argue, can potentially have important consequences for knowledge production in the humanities. I briefly discuss the concept of modularity, as developed in the work of Lev Manovich, before proceeding to the concept of the fluid text, as put forward by textual critic John Bryant. I end with an exploration of the role played by the (networked) archive in a digital environment, looking at the work of remix theorist Eduardo Navas.
As part of my analysis of these concepts and practices, I outline how they still mostly end up adhering to fixtures and boundaries—such as liberal humanist authorship and authority—that have been created within the print paradigm and how they often end up uncritically maintaining or repeating established institutions and practices. My aim in offering such a critique is to push forward our thinking on the different kinds of cuts and stabilizations that are possible within humanities research, its institutions, and practices; to explore interruptions that are perhaps more ethical and open to difference, and critical of both the print paradigm and the promises of the digital.55 How might these alternative and affirmative cuts enable us to conceive a concept of the book built upon openness and, with that, a concept of the humanities built upon fluidity?
Within his research on remix and software culture, media theorist Lev Manovich discusses the concept of modularity (of digital media) extensively, treating it as one of his five principles of new media.56 He describes how with the coming of software, a shift in the nature of what constitutes a cultural object has taken place; in his vision, cultural content no longer has finite boundaries. Manovich argues that, similar to the modular character of code and software, new media consist of various independent elements (images, text, code, sound) or modules that, (re)combined, form a new digital media object. Furthermore, he explains that the shift away from stable environments in a digital online environment means there are no longer senders and receivers of information in a classical sense; there are only temporary reception points in information’s path through remix. The role of the user is thus expanded in this vision, as content is no longer simply received by the user but is traversed, constructed, and managed. Thus, culture becomes a product that is constructed by both the maker and the consumer. What is more, according to Manovich, culture is actively being modularized by users to make it more adaptive; in other words, in his vision culture is not modular, but is (increasingly) made modular in digital environments.57 However, as Manovich explains, the real remix revolution lies in the possibility this generates to exchange information between media—what in Software Takes Command he calls the concept of deep remixability—describing a situation in which modularity is increasingly being extended to media themselves. In a common software-based environment, the remixing of various media has now become possible, along with a remixing of the methodologies of these media, offering the possibility of mash-ups of text with audio and visual content, expanding the range of cultural and scholarly communication.58
Manovich sketches a rather utopian future here (one that does not take into account present copyright regimes, for instance), in which cultural forms will be deliberately made from Lego-like modular building blocks, designed to be easily copied and pasted into new objects and projects. For Manovich, this involves forms of standardization, which function as a strategy to make culture freer and more shareable, with the aim of creating an ecology in which remix and modularity become a reality. In this respect, for Manovich, “helping cultural bits move around more easily” is a method for devising new ways of performing cultural analysis.59 Similarly, the concept of modularity and of recombinable datasets offers him a way of looking beyond static knowledge objects, presenting an alternative view of how we structure and control culture and data, as well as how we can analyze our ever-expanding information flows. With the help of these software-based concepts, Manovich thus examines how remix can be an active stance by which people will be able to deliberately shape culture in the future and deal with knowledge objects in a digital context.
Within scholarly communication, the concept of modularity has similarly proved popular when it comes to making research more efficient and coping with information overload: from triplets and nanopublications to other forms of modular publishing, these kinds of software-inspired concepts have mostly found their way into scientific publishing.60 Within this context, Joost Kircz, for instance, argues that instead of structuring scholarly research according to linear articles, we should have a coherent set of “well-defined, cognitive, textual modules.”61 Similarly, Jan Velterop and Barend Mons suggest moving toward a model of nanopublications in order to deal with information overload, which can be seen as a move in the direction of both more modularity and the standardization of research outcomes.62
There are, however, problems with applying this kind of modular database logic to cultural objects. Of course, in those cases in which culture or cultural objects are already structured and modular, reuse and repurposing are much easier. However, cultural objects tend to differ, and it is not necessarily always possible or even appropriate to modularize or cut up a scholarly or fictional work; not all cultural objects are translatable into digital media objects either. Hence, too strict a focus on modularity might be detrimental to our ideas of cultural difference. Media theorist Tara McPherson formulates an important critique of modularity to this end. She is mostly interested in how the digital, privileging as it does a logic of modularity and seriality, became such a dominant paradigm in contemporary culture: How did these discourses from software and coding cultures translate into the wider social world?63 In other words, what is the specific relationship between context and code in this historical moment? How have code and culture become so intermingled? As McPherson argues, in the mid-twentieth century, modular thinking took hold in a period that also saw the rise of identity politics and racial formations in the US, hyperspecialization and niched production of knowledge in the university, and forms of Fordist capitalism in economic systems: all of which represent a move toward modular knowledges. However, modular thinking, McPherson points out, tends to obscure the political, cultural, and social context from which this thinking emerged. She emphasizes the importance here of understanding the discourses and peculiar histories that have created these forms of the digital and of digital culture, which encourage forms of partitioning. This includes being more aware of how cultural and computational operating systems mutually infect one another. 
In this respect, McPherson wonders, “how has computation pushed modularity in new directions, directions in dialogue with other cultural shifts and ruptures? Why does modularity emerge in our systems with such a vengeance across the 1960s?”64 She argues that these forms of modular thinking, which function via a lenticular logic, offer “a logic of the fragment or the chunk, a way of seeing the world as discrete modules or nodes, a mode that suppresses relation and context. As such, the lenticular also manages and controls complexity.”65 We therefore need to be wary of this bracketing of identity in computational culture, McPherson warns, where it holds back complexity and difference. She favors the application of Barad’s concept of the agential cut in these contexts, using this to replace bracketing strategies (which bring modularity back); for McPherson, then, as a methodological paradigm, the cut is more fluid and mobile.66
The concept of modularity, as described by Manovich (where culture is made modular), does not seem able to guarantee these more fluid and contingent movements of culture and knowledge. The kind of modularity he is suggesting does not so much offer a challenge to object and commodity thinking as apply the same logic of stability and standardized cultural objects or works, only on another scale. Indeed, Manovich defines his modular Lego blocks as “any well-defined part of any finished cultural object.”67 There is thus still the idea of a finished and bound entity (the module) at work here, but it is smaller, compartmentalized.
Fluid Texts and Liquid Publications
Where Manovich’s concept of modularity mostly focuses on criticizing stability and fixity from a spatial perspective (dividing objects into smaller recombinable blocks), within a web environment, forms of temporal instability—over time, cultural objects change, adapt, get added to, re-envisioned, enhanced, and so on—are also being increasingly introduced. In this respect, experiments with liquid texts and with fluid books not only stress the benefits and potential of processual, iterative, and versioned scholarship, of capturing research developments over time and so forth, but also challenge the essentialist notions that underlie the perceived stability of scholarly works.
Textual scholar John Bryant theorizes the concept of fluidity extensively in his book The Fluid Text: A Theory of Revision and Editing for Book and Screen (2002). Bryant’s main argument revolves around the myth of stability, insofar as he argues that all works are fluid texts. As he explains, this is because fluidity is an inherent phenomenon of writing itself; we keep on revising our words to approach our thoughts more closely, with our thoughts changing again in this process of revision. In The Fluid Text, Bryant displays (and puts into practice) a way of editing and doing textual scholarship that is based not on a final authoritative text, but on revisions. He argues that for many readers, critics, and scholars, the idea of textual scholarship is designed to do away with the otherness that surrounds a work and to establish an authoritative or definitive text. This urge for stability is part of a desire for what Bryant calls “authenticity, authority, exactitude, singularity, fixity in the midst of the inherent indeterminacy of language.”68 By contrast, Bryant calls for the recognition of a multiplicity of texts, or rather the fluid text. Texts are fluid in his view because the versions flow from one to another. For this, he uses the metaphor of a work as energy that flows from version to version.
In Bryant’s vision, this idea of a multiplicity of texts extends from different material manifestations (drafts, proofs, editions) of a certain work to an extension of the social text (translations and adaptations). Logically, this also leads to a vision of multiple authorship, wherein Bryant wants to give a place to what he calls the collaborators of or on a text, to include those readers who also materially alter texts. For Bryant, with his emphasis on the revisions of a text and the differences between versions, it is essential to focus on the different intentionalities of both authors and collaborators. The digital medium offers the perfect possibility to achieve this, he argues, and to create a fluid text edition. Bryant established such an edition—both in a print and an online edition (see Figure 4)—for Melville’s Typee, showing how a combination of book format and screen can be used to effectively present a fluid textual work.69
For Bryant, this specific choice of a textual presentation focusing on revision is at the same time a moral choice. This is because, for him, understanding the fluidity of language enables us to better understand social change. Furthermore, constructionist intentions to pin a text down fail to acknowledge that, as Bryant puts it, “the past, too, is a fluid text that we revise as we desire.”70 Finally, he argues that the idea of a fluid text encourages a new kind of critical thinking, one that is based on difference, otherness, variation, and change. This is where, in his vision, the fixation on the idea of having a stable text to achieve easy retrieval and unified reading experiences loses out to a discourse that focuses on the energies that drive text from version to version. In Bryant’s words, “by masking the energies of revision, it reduces our ability to historicize our reading, and, in turn, disempowers the citizen reader from gaining a fuller experience of the necessary elements of change that drive a democratic culture.”71
Bryant’s fluid text edition of Melville’s Typee is a prime example of a practical experiment focusing upon the benefits of fluidity for scholarly communication. Within academic publishing, however, fluid books have mostly been experimented with in the open educational resources (OER) movement, in the form of open textbooks. Open textbooks are published with licenses that allow users to adapt them and recombine them with other texts or resources. The European Liquid Publications (or LiquidPub) project was an important early experiment in open (text)book publishing.72 As Casati et al. describe, the project tried to bring into practice the idea of modularity outlined previously.73 Focusing mainly on textbooks in the sciences, the project aimed to enable teachers to compose a customized and evolving book out of modular precomposed content. This book would then be a multiauthor collection of materials on a given topic that can include different types of documents.
The LiquidPub project tried to cope with questions of authority and authorship in a liquid environment by making a distinction between versions and editions. Editions are solidifications of the liquid book, with stable and fixed content, which can be referred to, preserved, and made commercially available. The project also created different roles for authors—from editors to collaborators—which were accompanied by an elaborate rights structure, with the possibility for authors to give away certain rights to their modular pieces while holding on to others. As a result, LiquidPub was a very pragmatic project, catering to the needs and demands of authors (mainly for the recognition of their moral rights), while at the same time trying to benefit from, and create efficiencies and modularity within, a fluid environment. The project offered authors a choice of different ways to distribute content, from completely open and reusable books to partially open and completely closed ones.
Introducing gradations of authorship such as editors and collaborators, as proposed in both the work of Bryant and the LiquidPub project, is one way to deal with plural authorship or authorship in collaborative research or writing environments. However, as I showed in chapter 2, it does not fundamentally resolve some of the main questions it intends to address around authority—namely, how to establish authority in an environment (e.g., a wiki) where the contributions of a single author are difficult to source and content is created by anonymous users or machine-generated by algorithms, bots, and AIs. Furthermore, what becomes of the proposed role of editor or collaborator as an authoritative figure when selections can be made redundant and choices altered and undone by mass-collaborative, multiuser remixes and mash-ups? The projects mentioned earlier are therefore not so much posing a challenge to liberal humanist notions of authorship—or, more specifically, are not really questioning the authorship function as it is currently established as a force of binding. They are merely applying this established author function to smaller compartments of text, dividing up publications, and the responsibilities that come with them, accordingly.
In addition, the concept of fluidity as described by Bryant, together with the notion of liquidity as used in the LiquidPub project, does not necessarily problematize or disturb the idea of object-like thinking or fixity within scholarly communication either. For Bryant, for example, a fluid book edition is still made up of separate, different versions, while in the LiquidPub project, which focuses mostly on an ethos of speed and efficiency, a liquid book is a customized combination of different recombinable documents. In this sense, both projects adhere quite closely to the concept of modularity as described by Manovich (where culture is made modular), and the question remains whether they can thus be seen as fluid or liquid—that is, if one is to perceive fluidity or liquidity as a condition in which the stability and fixity of a text is fundamentally reconsidered in a continual or processual manner, or as part of which cuts are made without simply demarcating the text anew. The idea of the object or the module still plays an essential role; however, it is smaller, compartmentalized: witness the way both projects still hinge on the idea of extracted objects, of editions and versions. For example, Bryant’s analysis is focused not so much on creating fluidity or a fluid text—however impossible this might be—but on creating a network between more or less stable versions while showcasing their revision history. He thus still makes a distinction between works and versions, neither seeing these versions as part of one extended work nor giving them the status of separate works. In this way, he keeps a hierarchical and linear thinking alive: “A version can never be revised into a different work because by its nature, revision begins with an original to which it cannot be unlinked unless through some form of amnesia we forget the continuities that link it to its parent. 
Put another way, a descendant is always a descendant, and no amount of material erasure can remove the chromosomal link.”74 Texts here are not fluid, at least not in the sense of being (able to be) continually updated; they are networked at the most. McKenzie Wark’s terminology for her book Gamer Theory (see Figure 5)—which Wark distinctively calls a networked book—might therefore be more fitting and applicable in such cases. A networked book, at least in its wording, positions itself as being located more in between the ideal types of stability and fluidity.75
A final remark concerning the way in which these two projects theorize and bring into practice the fluid or liquid book: in both projects, texts are actively made modular or fluid by outside agents, by authors and editors. There is not a lot of consideration here of the inherent fluidity or liquidity that exists as part of a text or book’s emergent materiality, in intra-action with the elements of what theorists such as Jerome McGann and D. F. McKenzie have called the social text—which, in an extended version, is what underlies Bryant’s concept of the fluid text. In the social text, human agents create fluidity through the creation of various instantiations of a text post production. As McKenzie has put it: “A book is never simply a remarkable object. Like every other technology, it is invariably the product of human agency in complex and highly volatile contexts.”76 McKenzie, in his exploration of the social text, sought to highlight the importance of a wide variety of actors in a text’s emergence and meaning-giving, from printers to typesetters. He did so in order to argue against a narrow focus on a text’s materiality or an author’s intention. However, there is a lack of acknowledgment here of how the processual nature of the book comes about out of an interplay of agential processes of both a human and nonhuman nature.
Something similar can be seen in the work of Bryant, in that for him a fluid text is foremost fluid because it consists of various versions. Bryant wants to showcase material revision here, by authors, editors, or readers, among others. But this is a very specific—and humanist—understanding of the fluid text. For revision is, arguably, only one major source of textual variation or fluidity. In this sense, to provide some alternative examples, it is not the inherent emergent discursive-materiality of a text, nor the plurality of material (human or machinic) reading paths through a text, that makes a text always already unstable for Bryant. What does make a text fluid for him is the existence of multiple versions brought into play by human and authorial agents of some sort. This is related to his insistence on a hermeneutic context in which fluid texts are representations of extended and distributed forms of intentionality. As I will ask further on, would it not be more interesting to conceive of fluidity or the fluid text rather as a process that comes about out of the entanglement and performance of a plurality of agentic processes: material, discursive, technological, medial, human and nonhuman, intentional and nonintentional? From this position, a focus on how incisions, interruptions, and boundaries are being enacted within processual texts and books, in an inherently emergent and ongoing manner, might offer a more inclusive strategy for dealing with the complexity of a book’s fluidity. This idea is explored in more depth toward the end of this chapter, when I return to theories of textual criticism to take a closer look at Jerome McGann’s work.
As discussed in chapter 2, remix as a practice has the potential to raise questions about the idea of authorship, as well as about related concepts of authority and legitimacy. For example, do the moral and ownership rights of an author extend to derivative works? And who can be held responsible for the creation of a work when authorship is increasingly difficult to establish, as in music mash-ups or in data feeds through which users receive updated information from a large variety of sources? As touched upon previously, one of the suggestions made in discussions of remix to cope with the question of authorship in a digital context has involved shifting the focus from the author to the selector, moderator, or curator. Yet in addition to that, in cases in which authorship is hard to establish or even absent, the archive has been put forward as a means of retrospectively establishing authority in fluid environments.
Eduardo Navas has examined both notions as potential alternatives to (established) forms of authority within knowledge environments that rely on continual updates and in which process is preferred to product. Navas emphasizes, however, that to establish authority and to make knowledge possible, keeping a critical distance from a text or work is necessary. As authorship has been replaced by sampling—and “sampling allows for the death of the author,” according to Navas, as the origin of a tiny fragment of a musical composition becomes hard to trace—he argues that this critical position in remix is taken up by s/he who selects the sources to be remixed. Yet in mash-ups, this critical distance becomes increasingly difficult to uphold. As Navas puts it, “This shift is beyond anyone’s control, because the flow of information demands that individuals embed themselves within the actual space of critique, and use constant updating as a critical tool.”77
To deal with the constantly changing present, Navas therefore turns to history as a source of authority: to give legitimacy to fluidity retrospectively by means of the archive (e.g., see the data collected in digital environments by search engines and social media platforms or by public institutions and nonprofits such as the Internet Archive and the Library of Congress). The ability to search the archive establishes the remix’s reliability and its market value (i.e., by mining the archive’s database), Navas points out. Once recorded, information becomes metainformation: information that is static, available when needed, and always in the same form, he argues. Retroactively, this recorded state, this staticity of information, is what makes theory and philosophical thinking possible. As Navas claims, “The archive, then, legitimates constant updates allegorically. The database becomes a delivery device of authority in potentia: when needed, call upon it to verify the reliability of accessed material; but until that time, all that is needed is to know that such archives exist.”78 Yet Navas is at the same time ambivalent about the archive as a search engine. He argues that in many ways it is a truly egalitarian space—able to answer all possible queries—but it is a space that is easily commercialized too and hence keeps changing, in part due to market interests. What does it mean when Google or Facebook harvest the data we collect and contribute, and our databases and archives are predominantly built upon commercial social media sites? In this respect, Navas states, we are also witnessing a rise of information flow control and lock-in.79
The importance of Navas’s theorizing in this context lies in the possibilities his thinking offers for the book and the knowledge system we have created around it. First of all, as discussed previously, he proposes the role of s/he who selects, curates, or moderates as an alternative to that of the author; he also explores the archive as a way of both stabilizing flow and of creating a form of authority out of fluidity and the continual updating of information. In a way, this alternative model of agency is already quite akin to the one found in scholarly communication, wherein the selection of resources and referring to other sources, alongside collection building, are part of the research and writing process of most academics. Yet although these are interesting steps to think beyond the status quo of the book as fixed and self-contained—challenging scholarly thinking to experiment with notions of process and sharing and to question idealized ideas of authorship—the archive as a tool, as Navas himself already highlights, poses some serious problems with respect to legitimating fluidity retrospectively and providing the necessary critical distance.80 For the archive as such does not provide any legitimation but is built upon the authority and the commands that constitute it: what Derrida calls “the politics of the archive.”81 What is kept and preserved within archives is connected to power structures, to the interests of those who decide what to collect (and on what grounds), and to the capacity to interpret the archive and its content when it is called upon for legitimation claims later on. The question of authority, then, lies not so much with the archive as with who has access to the archive and who gets to constitute it.
At the same time, although it has no real power of its own to legitimize fluidity, the archive is used as an objectified extension of these power structures that constitute and control it; as Derrida argues, archiving is an act of externalization.82
A still further critique of the archive states that, rather than functioning as a legitimizing device, its focus is first and foremost on objectification, commercialization, and consumption. In the archive, knowledge streams are turned into knowledge objects when we order our research into consumable bits of data. Witness the way in which publishing companies such as Reed Elsevier (or RELX, as it has renamed itself) increasingly brand themselves as data and information analytics companies, highlighting that for them the published object becomes valuable once we are able to create, collect, and extract (and ultimately sell) the data around it. As Navas has shown, the search engine, based on the growing digital archive we are collectively building online, is Google’s bread and butter. By initiating large projects like Google Books, for instance, Google aims to make the world’s archive digitally available or to digitize the “world’s knowledge”—or at least, that part of it that Google finds appropriate to digitize (i.e., mostly works in American and British libraries, and thus mostly English-language works). In Google’s terms, this means making the information it deems most relevant—based on the specific programming of its algorithms—freely searchable, and Google partners with many libraries worldwide to make this service available. However, most of the time only snippets of poorly digitized information are freely available; for full-text functionality, or more contextualized information, books must be acquired via Google Play Books (formerly Google eBooks and Google Editions) on the Google Play store, for instance. This makes it clear how search is fully embedded within a commercial framework in this environment.
The interpretation of the archive is therefore a fluctuating one, and the stability it seems to offer is, arguably, relatively selective and limited. As Derrida points out in Archive Fever, using the example of email, the digital offers new and different ways of archiving and thus also provides a different vision of what it constitutes and archives (both from a producer and a consumer perspective).83 Furthermore, the possibilities of archiving also determine the structure of the content that will be archived as it comes into being. The archive thus produces the event just as much as it records it. In this respect, the archive is highly performative: it produces information, creates knowledge, and decides how we determine what knowledge will be. And the way the archive is constructed is very much shaped by institutional and practical constraints. For example, what made the Library of Congress decide in 2010 to preserve and archive all public Twitter feeds, starting from the platform’s inception in 2006? And why only Twitter and not other similar social media platforms?84 The relationship of the archive to scholarship in this respect is a mutual one, as they determine one another: a new scholarly paradigm asks for and creates a new vision of the archive. This is why the archive does not stabilize or guarantee any concept. As Derrida aptly states, “The archive is never closed. It opens out of the future.”85
Foucault acknowledges this fluidity of the archive, seeing it as a general system of both the formation and transformation of statements. However, the archive also structures our way of perceiving the world, as we operate and see the world from within the archive. As Foucault states, “It is from within these rules that we speak.”86 The archive can thus be seen as governing us, and this again directly opposes the idea of critical distance that Navas has explored through the concept of the archive, as we can never be outside of it (nor can the archive be outside of the event it memorializes). Matthew Kirschenbaum argues along similar lines when he discusses the preservation of digital objects, pointing out that their preservation is “logically inseparable from the act of their creation.”87 He explains this as follows: “The lag between creation and preservation collapses completely, since a digital object may only ever be said to be preserved if it is accessible, and each individual access creates the object anew. One can, in a very literal sense, never access the ‘same’ electronic file twice, since each and every access constitutes a distinct instance of the file that will be addressed and stored in a unique location in computer memory.”88
This means that every time we access a digital object, we duplicate it, we copy it. And this is exactly why, as part of our strategies of conservation, every time we access a file we also (re)create these objects anew. Critical distance here is impossible when we are actively involved in the archive’s functioning. Kirschenbaum quotes Abby Smith, who states that “the act of retrieval precipitates the temporary reassembling of 0’s and 1’s into a meaningful sequence that can be decoded by software and hardware.”89 Here the agency of the archive, of the software and hardware, also becomes apparent. Kirschenbaum refers to Wolfgang Ernst’s notion of archaeography, which denotes forms of machinic or medial writing—or, as Ernst puts it, “expressions of the machines themselves, functions of their very mediatic logic.”90 At this point, archives become “active ‘archaeologists’ of knowledge”—or, as Kirschenbaum puts it, “the archive writes itself.”91
Let me reiterate that this critique is not focused on doing away with either the archive or the creation of (open access) archives: archives play an essential role in making scholarly research accessible, preserving it, adding metadata, and making it harvestable. Yet a critical awareness of the structures at play behind the archive, one that questions both its perceived stability and its (objective) authority and legitimacy, should remain an important aspect of the scholarly method.
The Limits of Fluidity and Stability
These experiments with modular, fluid, and liquid publications, with new forms of authorship and retrospective archival legitimation, provide valuable insights into the possibilities the digital medium offers to organize knowledge production differently and to accommodate more fluid environments. However, as I have shown, most of the “solutions” presented earlier for engaging with or accommodating fluidity in online environments continue to rely on preestablished print-based conventions and demarcations. Although these experiments all explore alternative ways of establishing authority and authorship in increasingly fluid environments, these alternatives still very much rely on print-based forms and concepts of stability and fixity (structured around the liberal humanist author and the work as a bound and defined object) and the knowledge and power systems built around them. In many ways, these experiments thus remain bound to the essentialisms established as part of this object-oriented scholarly communication system: for example, when they propose smaller or more compartmentalized modular objects, a strategy that favors the fixed and the standard over the more diverse, complex, and relational; when they explore linear, networked versions of original works, which remain connected to intentional and humanist authorial agencies; or when they seek legitimation from archives that cannot uphold an objectified external function, as these archives are embedded in the objects and events they performatively (re)produce. As such, these experiments do not fundamentally challenge our established notions of the autonomous human subject, the author, the text, and fixity in relation to the printed book, nor our conventional understandings of authorship, authority, and stability in a digital context.
However, my critique of these notions is not intended as a condemnation of their experimental potential. On the contrary, I support these explorations of fluidity strongly, for all the reasons outlined here. Yet instead of intentionally or unintentionally reproducing humanist and print-based forms of fixity and stability in a digital context, as the concepts and projects mentioned previously still end up doing, I want to examine these practices of stabilizing themselves and the value systems on which they are based. Books are an emergent property; instead of trying to cope with the fluidity offered by the digital medium by using the same disciplinary regime we are used to from a print context to fix and cut down the digital medium, I want to argue that we should direct our attention more toward the cuts we make in, and as part of, our research, and the reasons why we make these cuts (both in a print and digital context) as part of our intra-active becoming with the book.
As I made clear earlier, instead of emphasizing the dualities of fixity/fluidity, closed/open, bound/unbound, and print/digital, I want to shift attention to the issue of the cut; to the performative processes of the demarcation of scholarly knowledge, of the fixing we need to do at specific points during its communication. How can we, by cutting, take responsibility for the boundaries we enact and that are being enacted? How can we do this while simultaneously enabling responsiveness by promoting forms and practices of cutting that allow the book to remain emergent and processual (i.e., that do not tie it down or bind it to fixed and predetermined meanings, practices, and institutions) and that also examine and disturb the humanist and print-based notions that continue to accompany the book?
Rather than seeing the book as either a stable or a processual entity, a focus on the agential processes that bring about book-objects, on the constructions and value systems we adhere to as part of our daily scholarly practices, might be key to understanding the performative nature of the book as an ongoing effect of these agential incisions. The next section therefore returns to remix theory, this time exploring it from the perspective of the cut. I want to analyze the potential of remix here as part of a discourse of critical resistance against essentialism, to question humanist notions such as fixity and authorship/authority: notions that continue to structure humanities scholarship and on which a great deal of the print-based academic institution continues to rest. As I argue, within a posthumanist performative framework, remix, as a form of differential cutting, can be a means to intervene in and rethink humanities knowledge production, specifically with respect to the political economy of book publishing and the commodification of scholarship into knowledge objects.
Remix and the Cut
Cutting can be understood as an essential aspect of the way reality at large is structured and provided with meaning. However, within remix studies there has been a tendency to theorize the cut and the practice of cutting from a representationalist framework. Instead, my analysis here will be juxtaposed and entangled with a diffractive reading of a selection of critical theory, feminist new materialist, and media studies texts that specifically focus on the act of cutting from a performative perspective, to explore what forms a posthumanist vision of remix and the cut might take.92 I then explore how the potential of the cut and, related to that, the politics inherent in the act of making an incision can be applied to scholarly book publishing in an affirmative way. How can we account for our own ethical entanglements as scholars in the becoming of the book?93 Based on Foucault’s concept of the apparatus, as well as on Barad’s posthumanist expansion of this concept, I argue that the scholarly book currently functions as an apparatus that cuts the processes of scholarly creation and becoming into authors, scholarly objects, and an observed world separate from these and us.94 Drawing attention to the processual and unstable nature of the book instead, I focus on the book’s critical and political potential to question these cuts and to disturb these existing scholarly practices and institutions.
After analyzing how the book functions as an apparatus, a material-discursive formation or assemblage that enacts incisions, I explore two book publishing projects—Open Humanities Press’s Living Books About Life series and Mark Amerika’s remixthebook—that have tried to rethink and reperform this apparatus by specifically taking responsibility for the cuts they make in an effort to cut well.95 In what way do these projects create spaces for alternative, more inclusive posthumanities methods and practices to perform scholarship, accommodating a plurality of human and nonhuman agencies and subjectivities? How have they established an alternative politics and ethics of the cut that is open to change, and what have been some of their potential shortcomings?
The Material-Discursive Cut within a Performative Framework
As discussed previously, media theorist Eduardo Navas has written extensively about cut/copy and paste as a practice and concept within remixed music and art. For Navas, remix, as a process, is deeply embedded in a cultural and linguistic framework, and he sees it as a form of discourse at play across culture.96 This focus on remix as a cultural variable or as a form of cultural representation seems to be one of the dominant modes of analysis within remix studies as a field.97 Based on his discursive framework of remix as representation and repetition (following Jacques Attali), Navas makes a distinction between copying and cutting. He sees cutting (into something physical) as materially altering the world, while copying, as a specific form of cutting, keeps the integrity of the original intact. Navas explores in his work how the concept of sampling was altered under the influence of changes in mechanical reproduction, where sampling as a term started to take on the meaning of copying as the act of taking, not from the world, but from an archive of representations of the world. Sampling thus came to be understood culturally as a meta-activity.98 In this sense, Navas distinguishes between material sampling from the world (which disturbs the world) and sampling from representations (which is a form of metarepresentation that keeps the original intact). The latter is a form of cultural citation—where one cites in terms of discourse—and this citation is strictly conceptual.99
What I want to do here instead is extend remix beyond a cultural logic operating at the level of representations, by seeing it as an always already material practice that disturbs and intervenes in the world. It will be beneficial here to apply the insights of new materialist theorists, to explore what a material-discursive and performative vision of cutting and the cut is able to contribute to the idea of remix as a critical affirmative doing. Following Barad, “The move toward performative alternatives to representationalism shifts the focus from questions of correspondence between descriptions and reality (e.g. do they mirror nature or culture?) to matters of practices/doings/actions.”100 Here remixes as representations are not just mirrors or allegories of the world, but direct interventions in the world. Therefore, both copying and cutting are performative, in the sense that they change the world; they alter and disturb it.101 Following this reasoning, copying is not ontologically distinct from cutting, as there is no distinction between discourse and the real world: language and matter are entangled, where matter is always already discursive and vice versa.102
As I explored in more depth in the introduction and in chapter 1, Barad’s material-discursive vision of the cut focuses on the complex relationship between the social and the nonsocial, moving beyond the binary distinction between reality and representation by replacing representationalism with a theory of posthumanist performativity. Her form of realism is not about representing an independent reality outside of us, but about performatively intervening, intra-acting with and as part of the world.103 For Barad, intentions are attributable to complex networks of agencies, both human and nonhuman, functioning within a certain context of material conditions.104 Where in reality agencies and differences are interwoven phenomena, what Barad calls agential cuts cleave things together and apart, creating subjects and objects by enacting determinate boundaries, properties, and meanings. These separations that we create also enact specific inclusions and exclusions, insides and outsides. Here it is important to take responsibility for the incisions that we make, where being accountable for the complex relationalities of self and other that we weave also means we need to take responsibility for the exclusions we create.105 Although cuts are not enacted directly by us, but rather by the larger material arrangement of which we are a part (they are made from the inside), we are still accountable for the cuts we help to enact: there are new possibilities and ethical obligations to act (cut) at every moment.106 In this sense, “cuts do violence but also open up and rework the agential conditions of possibility.”107 It matters which incisions are enacted, where different cuts enact different materialized becomings. As Barad states: “It’s all a matter of where we place the cut. . . . What is at stake is accountability to marks on bodies in their specificity by attending to how different cuts produce differences that matter.”108
Related to this, media theorists Sarah Kember and Joanna Zylinska explore the notion of the cut as an inevitable conceptual and material interruption in the process of mediation, focusing specifically on where to cut insofar as it relates to how to cut well. As they point out, the cut is both a technique and an ethical imperative; cutting is an act that is necessary to create meaning, to be able to say something about things.109 Here they see a similarity with Derrida’s notion of différance, a term that functions as an incision, stabilizing the flow of mediation (which is also a process of differentiation) into things, objects, and subjects.110 Through the act of cutting, we shape our temporally stabilized selves (we become individuated), as well as actively form the world we are part of and the matter surrounding us. On a more ontological level, therefore, “cutting is fundamental to our emergence in the world, as well as our differentiation from it.”111 Cutting thus enacts both separation and relationality (it cleaves), where an incision becomes an ethical imperative, a decision that is made, however, not by a humanist, liberal subject but by agentic processes. In this more performative vision, cutting becomes a technique, not of rendering or representing the world, but of managing it, of ordering and creating it, of giving it meaning.
Kember and Zylinska are specifically interested in the ethics of the cut. If we inevitably have to intervene in the process of becoming (to shape it and give it meaning), how is it that we can cut well? How can we engage with a process of differential cutting, as they call it, enabling space for the vitality of becoming? To enable a productive engagement with the cut, Kember and Zylinska explore performative and affirmative acts of cutting, using the example of photography to examine “this imperative [that] entails a call to make cuts where necessary, while not forgoing the duration of things.” Cutting well for them thus involves leaving space for duration, where cutting does not close down creativity or “foreclose on the creative possibility of life.”112
The Affirmative Cut in Remix Studies
To explore further the imperative to cut well, I want to return to remix theory and practice, in which the potential of the cut and of remix as subversion and affirmative logic, and of appropriation as a political tool and a form of critical production, has been explored extensively. In particular, I want to examine what forms a more performative vision of remix might take, to consider once more how this might help us in reconstructing an alternative politics of the book: one which, instead of focusing on achieving either states of stability or fluidity, enacts cuts while leaving space for duration—in other words, while not foreclosing on the duration of things (or, following Kember and Zylinska, on the creative possibility of life). In what sense do remix theory and practice also function, in the words of Barad, as “specific agential practices/intra-actions/performances through which specific exclusionary boundaries are enacted”?113 Navas, for instance, conceptualizes remix as a vitalism: as a formless force, capable of taking on any form and medium. In this vitalism lies the power of remix to create something new out of something already existing, by reconfiguring it. In this sense, as Navas states, “to remix is to compose.”
Through these reconfiguring and juxtaposing gestures, remix also has the potential to question and critique, becoming an act that interrogates “authorship, creativity, originality, and the economics that supported the discourse behind these terms as stable cultural forms.”114 However, Navas warns that remix can be both what he calls regressive and reflexive: the openness of its politics means that it can also be easily co-opted, such that “sampling and principles of Remix . . . have been turned into the preferred tools for consumer culture.”115 A regressive remix, then, is a recombination of something that is already familiar and has proved to be successful for the commercial market. A reflexive remix, on the other hand, is regenerative, as it allows for constant change.116 Here we can find the potential seeds of resistance in remix, where, as a type of intervention, Navas states, it has the potential to question conventions, “to rupture the norm in order to open spaces of expression for marginalized communities,” and, if implemented well, to become a tool of autonomy.117
One of the realms of remix practice in which an affirmative position of critique and politics has been explored in depth, while taking clear responsibility for the interventions it enacts, is in feminist remix culture—most specifically in vidding and political remix video. Francesca Coppa defines vidding as “a grassroots art form in which fans re-edit television or film into music videos called ‘vids’ or ‘fanvids.’”118 By cutting and selecting certain bits of videos and juxtaposing them with others, the practice of vidding, beyond or as part of a celebratory fan work, has the potential to become a critical textual engagement, as well as a recutting and recomposing (cutting together) of the world differently. As fandom scholars Kristina Busse and Alexis Lothian state, vidding practically takes apart “the ideological frameworks of film and TV by unmaking those frameworks technologically.”119 Coppa sees vidding as an act of both bringing together and taking apart (“what a vidder cuts out can be just as important as what she chooses to include”); the act of cutting is empowering to vidders in Coppa’s vision, insofar as “she who cuts” is better than “she who is cut into pieces.”120
Video artist Elisa Kreisinger, who makes queer video remixes of TV series such as Sex and the City and Mad Men (see Figure 6), states that political remix videos carry more of an element of critique, aiming to correct certain elements (such as gender norms) in media works, without necessarily having to be fan works. As Kreisinger argues, “I see remixing as the rebuilding and reclaiming of once-oppressive images into a positive vision of just society.”121 Africana studies scholar Renee Slajda is interested, in this respect, in how Kreisinger’s remix videos can be seen as part of a feminist move beyond criticism, one in which remix artists turn critical consciousness into a creative practice aiming to “reshape the media—and the world—as they would like to see it.”122 For Kreisinger, too, political remix video is not only about creating “more diverse and affirming narratives of representation”; it also has the potential to effect actual change (although, like Navas, she is aware that remix is also often co-opted by corporations to reinforce stereotypes). Remix challenges dominant notions of ownership and copyright, as well as the author/reader and owner/user binaries that support these notions. Kreisinger explains how, by challenging these notions and binaries, remix videos also challenge the production and political economy of media.123 As video artist Martin Leduc argues in this respect, “We may find that remix can offer a means not only of responding to the commercial media industry, but of replacing it.”124
The Agentic Cut in Remix Studies
Alongside providing valuable affirmative contributions to the imperative to cut well and its critical potential to reconfigure boundaries, remix has also been important with regard to rethinking and reperforming agency and authorship in art and academia. In this context, it critiques the liberal humanist subject that underpins most academic performances of the author, while exploring more posthumanist and entangled notions of agency in the form of agentic processes in which agency is more distributed.
For example, Paul Miller writes about flows and cuts in his artist’s book Rhythm Science (see Figure 7). For Miller, sampling is a doing, a creating with found objects, yet this involves taking responsibility for its genealogy, for “who speaks through you.”125 Miller’s practical and critical engagement with remix and the cut is especially interesting therefore when it comes to his conceptualizing of identity, where—as in the new materialist thinking of Barad—he does not presuppose a pregiven identity or self, but states that our identity comes about through our incisions, the act of cutting, shaping, and creating our selves: “The collage becomes my identity,” he states.126 For Miller, agency is thus not related to our identity as creators or artists, but to the flow or becoming, which always comes first. We are so immersed in and defined by the data that surrounds us on a daily basis that “we are entering an era of multiplex consciousness,” he argues.127
Where Miller talks about creating different personas as shareware, Mark Amerika is interested in the concept of performing theory and critiquing individuality and the self through notions such as “flux personae,” establishing the self as an “artist-medium” and a “post-production medium.”128 Amerika sees performing theory as a creative process, in which pluralities of conceptual personae are created that explore their becoming. Through these various personae, Amerika wants to challenge the “unity of the self.”129 In this vision, the artist becomes a medium through which language, in the form of prior inhabited data, flows. When artists write their words, they don’t feel like their own words but like a “compilation of sampled artefacts” from the artist’s cocreators and collaborators. By becoming an artist-medium, Amerika thus argues that “the self per se disappears in a sea of source material.”130 By exploring this idea of the networked author concept or of the writer as an artist-medium, Amerika contemplates what could be a new (posthuman) author function for the digital age, with the artist as a postproduction medium, even “becoming instrument” and “becoming electronics.”131
Cutting Scholarship Together-Apart
What can we take away from this transversal reading of feminist new materialism, media theory, and remix studies with respect to cutting as an affirmative, material-discursive practice—especially where this reading concerns how remix and the cut can performatively critique established humanist notions such as authorship, authority, and fixity, which continue to underlie scholarly book publishing? How can this reading trigger alternatives to the political economy of book publishing, especially the latter’s persistent focus on ownership and copyright and the book as an object and commodity? Could this (re)reading even pose potential problems for our ideas of critique and ethics themselves when notions of stability, objectivity, and distance tend to disappear? Taking the previously discussed works into consideration, the question then is: How can we make ethical, critical cuts in our scholarship while at the same time promoting a politics of the book that is open and responsible to change, difference, and the inevitable exclusions that result?
To explore this further, I want to analyze the way the book functions and has functioned as an apparatus. The concept of the dispositif, or apparatus, originates from Foucault’s later work. As a concept, it expands beyond the idea of discursive formation to more closely connect discourse with nondiscursive elements, with material practices. The apparatus, then, Foucault argues, is the system of relations that can be established between these disparate elements.132 However, an apparatus for Foucault is not a stable and solid “thing” but a shifting set of relations inscribed in a play of power, one that is strategic and responds to an “urgent need,” a need to control.133 In comparison, Deleuze’s more fluid outlook sees the apparatus as an assemblage capable of escaping attempts at subversion and control. Deleuze is specifically interested in the variable creativity that arises out of dispositifs (in their actuality), or in the ability of the apparatus to transform itself; as he explains, we as human beings belong to dispositifs and act within them.134 Barad, meanwhile, connects the notion of the cut to her posthumanist Bohrian concept of the apparatus. As part of our intra-actions, apparatuses, in the form of certain material arrangements or practices, effect an agential cut between subject and object, which are not separate but come into being through these intra-actions.135 Apparatuses, for Barad, are thus open-ended and dynamic material-discursive practices, practices that articulate concepts and things.136
Applying this more directly, in what way has the apparatus of the book—consisting of an entanglement of relationships between, among other things, authors, books, the outside world, readers, the material production and political economy of book publishing, and the discursive formation of scholarship—executed its power relations through cutting in a certain way? In the present scholarly book publishing constellation, it has mostly operated via a logic of incision: one that favors neat separations between books, authors (as human creators), and readers; that cuts out fixed scholarly book-objects of an established quality and originality; and that simultaneously pastes this system together via a system of strict ownership and copyright rules. The manner in which the apparatus of the book enacts these delineations at the present moment does not take into full consideration the processual aspects of the book, research, and authorship, nor does it leave space for their ongoing duration. Neither does this current, still predominantly print-based apparatus explore in depth the possibilities to recut our research results in such a way as to experiment with collaboration, updates, versionings, and multimedia enhancements in a digital context. The dominant book-apparatus instead enforces a political economy that keeps books and scholarship closed off from the majority of the world’s potential readers, functioning in an increasingly commercial environment (albeit one fueled by public money and free labor), which makes it very difficult to publish specialized scholarship lacking marketable promise. The dominant book-apparatus thus does not take into consideration how the humanist discourse on authorship, quality, and originality that continues to underlie the humanities perpetuates this publishing system in a material sense. 
Nor does it analyze how the specific print-based materiality of the book and the publishing institutions that have grown around it have likewise been instrumental in shaping the discursive formation of the humanities and scholarship as a whole.
Following this chapter’s diffractively collected insights on remix and the cut, I want to again underscore the need to see and understand the book as a process of becoming, as an interweaving of plural (human and nonhuman) agencies. The separations or cuts that have been forced out of these entanglements by specific material-discursive practices have created inclusions and exclusions, book-objects and author-subjects, both controlling positions.137 Books as apparatuses are thus performative; they are reality shaping. Not enough responsibility is taken—not by scholars, nor by publishers nor the academic system as a whole—for the specific closures that are enacted with and through the book as an apparatus. Most humanities research—just as this research, to some extent—ends up as a conventional, bound, printed (or, increasingly, hybrid), single-authored book or journal article, published by an established publisher or in an esteemed journal and disseminated mainly to university libraries. These hegemonic scholarly practices are simultaneously affecting scholars and the way they act in and describe the world and/or their object of study—including, as Hayles has argued, the way scholars are “conceptualizing projects, implementing research programs, designing curricula, and educating students.”138 It is important to acknowledge this entanglement, as it highlights the responsibility scholars have for the practices they are very much a part of and for the inclusions and exclusions they enact and enforce (and that are enacted and enforced for them) as part of their book publishing practices. However, this entanglement with the book apparatus also offers opportunities for scholars to recut and (re)perform the book and scholarship, as well as themselves, differently and to experiment with what a posthumanities could potentially entail.
Following the insights of Foucault, Deleuze, and Barad discussed earlier, it becomes clear that the book apparatus, of which scholars are a part, also offers new lines of flight, or the ability to transform itself.139 Living Books About Life and remixthebook are two book publishing projects, initiated by scholars, that have explored the potential of the cut and remix for an affirmative politics of publishing, to challenge our object-oriented and modular systems. In what sense have they been able to promote, through their specific publishing incisions and decisions, an open-ended politics of the book that enables duration and difference?140
At the beginning of August 2011, Mark Amerika launched remixthebook.com (see Figure 8), a website designed to serve as an online companion to his print volume, remixthebook. Amerika is a multidisciplinary artist, theorist, and writer, whose various personas offer him the possibility of experimenting with hypertext fiction and net.art, as well as with more academic forms of theory and artist’s writings, and to do so from a plurality of perspectives.141 Remixthebook is a collection of multimedia writings that explore the remix as a cultural phenomenon by themselves referencing and mashing up curated selections of earlier theory, avant-garde and art writings on remix, collage, and sampling. It consists of a printed book and an accompanying website that functions as a platform for a collaboration between artists and theorists exploring practice-based research.142 The platform features multimedia remixes from over twenty-five international artists and theorists who were invited to contribute a remix to the project site based on selected sample material from the printed book. Amerika questions the bound nature of the printed book and its fixity and authority by bringing together this community of diverse practitioners performing and discussing the theories and texts presented in the book, via video, audio, and text-based remixes published on the website, opening the book and its source material up for continuous multimedia recutting. Amerika further challenges dominant ideas of authorship by playing with personas and by drawing from a variety of remixed source material in his book, as well as by directly involving his remix community as collaborators on the project.
For Amerika, then, the remixthebook project is not a traditional form of scholarship. Indeed, it is not even a book in the first instance. As he states in the book’s introduction, it should rather be seen as “a hybridized publication and performance art project that appears in both print and digital forms.”143 Amerika applies a form of patch or collage writing in the twelve essays that make up remixthebook. This is part of his endeavor to develop a new form of new media writing, one that constitutes a crossover between the scholarly and the artistic and between theory and poetry, mixing these different modalities.144 For all that, Amerika’s project has the potential to change scholarly communication in a manner that goes beyond merely promoting a more fluid form of new media writing, extending the boundaries of the scholarly realm from an artistic viewpoint. What is particularly interesting about his hybrid project, both from the print book side and from the platform network performance angle, is the explicit connections Amerika makes through the format of the remix to previous theories and to those artists/theorists who are currently working in and are theorizing the realm of digital art, humanities, and remix. At the same time, the remixthebook website functions as a powerful platform for collaboration between artists and theorists who are exploring the same realm, celebrating the kind of practice-based research Amerika applauds.145 By creating and performing remixes of Amerika’s source material, which is again based on a mash-up of other sources, a collaborative interweaving of different texts, thinkers, and artists emerges, one that celebrates and highlights the communal aspect of creativity in both art and academia.
However, a discrepancy remains visible between Amerika’s aim to create a commons of renewable source material along with a platform on which everyone (amateurs and experts alike) can remix his and others’ source material, and the specific choices Amerika makes—or that the prestige and market-focused book apparatus with which he is interwoven allows him to make—and the outlets he chooses to fulfill this aim. For instance, remixthebook is published as a traditional printed book (in paperback and hardcover); more importantly, it is not published on an open access basis or with a license that allows reuse, which would make it far easier to remix and reuse Amerika’s material by copying and pasting directly from the web or a PDF, for instance.
Amerika in many ways tries to evade the bounded nature of the printed edition by creating this community of people remixing the theories and texts presented in the book. He does so not only via the remixes that are published on the accompanying website, but also via the platform’s blog and the remixthebook Twitter feed to which new artists and thinkers were asked to contribute on a weekly basis. However, here again, the website is not openly available for everyone to contribute to. The remixes have been selected or curated by Amerika along with his fellow artist and cocurator Rick Silva, and the artists and theorists contributing to the blog and Twitter as an extension of the project have also been selected by Amerika’s editorial team. Although people are invited to contribute to the project and platform, then, it is not openly accessible to everyone. Furthermore, although the remixes and blog posts are available and accessible on the website, they are themselves not available to remix, as they all fall under the website’s copyright regime, which is licensed under a traditional all rights reserved copyright. Given all the possibilities such a digital platform could potentially offer, the question remains as to how much Amerika (or connected to him, his publisher or editorial team) has really put the source material “out there” to create a “commons of renewable source material” for others to “remix the book.”146
In 2011, the media and cultural theorists Clare Birchall, Gary Hall, and Joanna Zylinska initiated Living Books About Life (see Figure 9), a series of open access books about life published by Open Humanities Press and designed to provide a bridge between the humanities and sciences. All the books in this series repackage existing open access science-related research, supplementing it with an original editorial essay to tie the collection together. They also provide additional multimedia material, from videos to podcasts to whole books. The books have been published online on an open source wiki platform, meaning they are themselves “living” or “open on a read/write basis for users to help compose, edit, annotate, translate and remix.”148 Interested potential contributors can also contact the series editors to contribute a new living book. These living books can then collectively or individually be used and/or adapted for scholarly and educational contexts as an interdisciplinary resource bridging the sciences and humanities.
As Hall has argued, this project was designed to, among other things, challenge the physical and conceptual limitations of the traditional codex by including multimedia material and even whole books in its living books, but also by emphasizing its duration by publishing using a wiki platform and thus “rethinking ‘the book’ itself as a living, collaborative endeavor.”149 Hall points out that wikis offer a potential to question and critically engage issues of authorship, work, and stability. They can offer increased accessibility and induce participation from contributors from the periphery. As he states, “Wiki-communication can enable us to produce a multiplicitous academic and publishing network, one with a far more complex, fluid, antagonistic, distributed, and decentered structure, with a variety of singular and plural, human and non-human actants and agents.”150 However, the MediaWiki software employed by the Living Books About Life project (see Figure 10), in common with a lot of wiki software, keeps accurate track of which user is making what changes. This offers the possibility to other users (or bots) to monitor recent changes to pages, to explore a page’s revision history, and to examine all the contributions of a specific user. The wiki software thus already has mechanisms written into it to “manage” or fix instances of the text and its authors by keeping a track record or archive of all the changes that are made.
But the Living Books About Life project also enforces stability and fixity (both of the text and of its users) on the front-end side by clearly mentioning the specific editor’s name underneath the title of each collection, as well as on the book’s title page; by adding a fixed and frozen version of the text in PDF format, preserving the collection as it was originally created by the editors; and by binding the book together by adding a cover page (see Figure 11) and following a rather conventional book structure (complete with an editorial introduction followed by thematic sections of curated materials).
Mirroring the physical materiality of the book (in its design, layout, and structuring) in such a way also reproduces the aura of the book, including the discourse of scholarship (as stable and fixed, with clear authority) this brings with it. This might explain why the user interaction with the books in the series has been limited in comparison to some other wikis, which are perhaps more clearly perceived as multiauthoring environments. Here the choice to recut the collected information as a book, with clear authors and editors, while and as part of rethinking and reperforming the book as concept and form, might paradoxically have been responsible for both the success and the limitations of the project. These choices meant the project had to conform again to some of the same premises it initially set out to question and critique.
What both the Living Books About Life and OHP’s earlier Liquid Books projects share, however, is a continued theoretical reflection on issues of fixity, authorship, and authority, both by their editors and by their contributors in various spaces connected to the projects.151 This comes to the fore in the many presentations and papers the series editors and authors have delivered on these projects, engaging people with their practical and theoretical issues. These discussions have also taken place on the blog that accompanied the Living Books About Life series, and in Hall and Birchall’s multimodal text and video-based introduction to the Liquid Books series (see Figure 12), to give just some examples.152
It is in these connected spaces that continued discussions are being had about copyright, ownership, authority, the book, editing, openness, fluidity and fixity, the benefits and drawbacks of wikis, quality and peer review, and so on. I would like to argue that it is here, on this discursive level, that the aliveness of these living books is perhaps most ensured. These books live on in continued discussion about where we should cut them, and when, and who should be making the incisions, taking into consideration the strategic compromises—which might indeed include a frozen version and a book cover, and clearly identifiable editors—we might have to make due to our current entanglements with certain practices, institutions, and pieces of software, all with their own specific power structures and affordances.
In “Future Books: A Wikipedia Model?,” an introduction to one of the books in the Liquid Books series—namely, Technology and Cultural Form: A Liquid Reader, which has been collaboratively edited and written by Joanna Zylinska and her MA students (together forming a “liquid author”)—the various decisions and discussions that could be made and had concerning liquid, living, and wiki books are considered in depth: “It seems from the above that a completely open liquid book can never be achieved, and that some limitations, decisions, interventions and cuts have to be made to its ‘openness.’ The following question then presents itself: how do we ensure that we do not foreclose on this openness too early and too quickly? Perhaps liquid editing is also a question of time, then; of managing time responsibly and prudently.”153
Looking at it from this angle, these discussions are triggering critical questions from a user (writer/reader) perspective, as part of their interconnections and negotiations with the institutions, practices, and technologies of scholarly communication. Within a wiki setting, questions concerning what new kinds of boundaries are being set up are important: Who moderates decisions about what is included or excluded (what about spam)? Is it the editors? The software? The press? Our notions of scholarly quality and authority? What is kept and preserved, and what new forms of closure and inclusion are being created in this process? How is the book disturbed and at the same time recut? It is our continued critical engagement with these kinds of questions in an affirmative manner, both theoretically and practically, that keeps these books open and alive.
To conclude this chapter, I want to return to the issue of the performativity of the stories and discourses that we as scholars weave around the book and the responsibility that comes with this toward the object of our narratives, with which we are always already directly interconnected. Following on from my earlier analysis of Bryant’s work on the fluid text, I would like to briefly reexamine theories of textual criticism, which as a field has always actively engaged itself with issues concerning the fixity and fluidity of texts. This is embodied mainly in the search for the ideal text or archetype, but also in the continued confrontation with a text’s pluralities of meaning and intentionality, next to issues of interpretation and materiality. In this respect, critical editing, as a means of stabilizing a text, has always revolved around an awareness of the cuts that are made to a text in the creation of scholarly editions. It can therefore be stated that, as Bryant has argued, the task of a textual scholar is to “manage textual fluidity.”154
One of the other strengths of textual criticism is an awareness on the part of many of the scholars in the field that their own practical and theoretical decisions or cuts influence the interpretation of a text. They can therefore be seen to be mindful of their entanglement with its becoming. As Bryant has put it, “Editors’ choices inevitably constitute yet another version of the fluid text they are editing. Thus, critical editing perpetuates textual fluidity.”155 These specific cuts, or “historical write-ups,” that textual scholars create as part of their work with critical editions don’t only construct the past from a vision of the present; they also say something about the future. As textual scholar Jerome McGann has pointed out:
All poems and cultural products are included in history—including the producers and the reproducers of such works, the poet and their readers and interpreters. . . . To the historicist imagination, history is the past, or perhaps the past as seen in and through the present; and the historical task is to attempt a reconstruction of the past, including, perhaps, the present of that past. But the Cantos reminds us that history includes the future, and that the historical task involves as well the construction of what shall be possible.156
It is this awareness that a critical edition is the product of editorial intervention—which creates a material-discursive framework that influences future texts’ becoming—that I am interested in here, especially in relation to McGann’s work on the performativity of texts, which again allows for more agency for the book as a material form itself. For McGann, every text is a social text, created under specific sociohistorical conditions; he theorizes texts not as things or objects, but as events. He argues therefore that texts are not representations of intentions but are processual events in themselves. Thus, every version or reading of a text is a performative (as well as a deformative) act.157 In this sense, McGann makes the move in textual criticism from a focus on authorial intention and hermeneutics (or representation) to seeing a text as a performative event and critical editions as performative acts. As part of this, he argues for a different, dynamic engagement with texts, not focused on discovering what a text “is” but on an “analysis [that] must be applied to the text as it is performative.”158 This includes taking into consideration the specific material iteration of the text one is studying (and how this functions, as Hayles has argued, as a technotext—namely, how its specific material apparatus produces the work as a physical artifact), as well as an awareness of how the scholar’s textual analysis is itself part of the iteration and othering of the text.159 And in addition to this, as Barad has argued, we have to be aware of how the text’s performativity shapes us in our entanglement with it.
The question then is: Why can’t we be more like critical textual editors (in the style of Jerome McGann) ourselves when it comes to our own scholarly works, taking into consideration the various cuts we make and that are made for us as part of the processes of knowledge production? Should assuming responsibility for our own incisions as textual critics of our own work—exploring what I have called in chapter 4 and elsewhere in relation to the work of Joan Retallack the poethics of scholarship—in this respect then not involve, in the first instance
• taking responsibility for our involvement as scholars in the production, dissemination, and consumption of the book;
• engaging with the material-discursive institutional and cultural aspects of the book and book publishing; and
• experimenting with an open-ended and radical politics of the book (which includes exploring the processual nature of the book, while taking responsibility for the need to cut; to make incisions and decisions on where to create meaning and difference, where to cleave the flow of book becoming)?160
This would involve experimenting with alternative ways of cutting our bookish scholarship together-apart: with different forms of authorship, both human and nonhuman; with the materialities and modalities of the book, exploring multimodal and emergent genres, while continuously rethinking and performing the fixity of the book itself; and with the publishing process, examining ways to disturb the current political economy of the book and the objectification of the book within publishing and research. From where I stand, this would mean a continued experimentation with remixed and living books, with versionings, and with radical forms of openness, while at the same time remaining critical of the alternative incisions that are made as part of these projects, of the new forms of binding they might weave. This also involves being aware of the potential strategic decisions that might need to be made in order to keep some iterative bindings intact (for reasons of authority and reputation, for instance) and why we choose to do so. As I have outlined in this chapter, it will be more useful to engage with this experimenting not from the angle of the fixed or the fluid book, but from the perspective of the cut that cuts together-apart the emergent book and, when done well, enables its ongoing becoming.
This text, like the projects mentioned previously, has attempted to start the process of rethinking (through its diffractive methodology) how we might start to cut differently when it comes to our research and publication practices. Cutting and stabilizing still needs to be done, but it might be accomplished in different ways, at different stages of the research process, and for different reasons than we are doing now. What I want to emphasize here is that we can start to rethink and reperform the way we publish our research if we start to pay closer attention to the specific decisions we make (and that are made for us) as part of our publishing practices. The politics of the book itself can be helpful in this respect. As Gary Hall and I have argued elsewhere, “If it is to continue to be able to serve ‘new ends’ as a medium through which politics itself can be rethought . . . then the material and cultural constitution of the book needs to be continually reviewed, re-evaluated and reconceived.”161 The book itself can thus be a medium with the critical and political potential to question specific decisions and to disturb existing scholarly practices and institutions. Books are always a process of becoming (albeit one that is continuously interrupted and disturbed). Books are entanglements of different agencies that cannot be discerned beforehand. In the cuts that we make to untangle them, we create specific material book-objects. In these incisions, the book has always already redeveloped, remixed. It has mutated and moved on. The book is processual, ephemeral, and contextualized; it is a living entity, which we can use as a means to critique our established practices and institutions, both through its forms (and the decisions made to create these forms) and its metaphors, and through the practices that accompany it.