18 September 2012
4Humanities
I was delighted to be among the speakers at an excellent day organised by Melissa Terras at UCL on 18 September 2012, called 'Showing the Arts and Humanities Matter'. The redoubtable Ernesto Priego was assiduous in live-tweeting the day and has storified it here:
http://humanistica.ualberta.ca/2012/09/4humanitiesatucl/
Copies of the slides from my presentation are available at:
http://www.slideshare.net/burgess1822/how-the-humanities-can-help-transform-science
6 September 2012
Made in Sheffield: Industrial Perspectives on the Digital Humanities
This is the text of my keynote for the Digital Humanities Congress at the University of Sheffield, 6 September 2012.
It is a great honour to be asked to
inaugurate this first Digital Humanities Congress at the University of
Sheffield. My connections with digital humanities at Sheffield go back to 1995
when the remarkable portfolio of projects in the Humanities Research Institute
at Sheffield caught the attention of the British Library, and I was asked as
one of the library’s curators to foster links with the pioneering work at
Sheffield. Since that time, it has been both a pleasure and an education to
watch how Sheffield has produced a stream of imaginative and forward-looking
work in the digital humanities. I’m going to suggest that the ‘little mesters’
of the Humanities Research Institute form part of a tradition of innovation in
Sheffield which reaches deep into the history of the town, but I’ll start a
long way from Sheffield, with the recent uprisings in the Middle East and North
Africa known as the Arab Spring.
An aspect of the Arab Spring which has caused
particular comment in the West has been the use by protestors of social media.
One protestor tweeted ‘We use Facebook to schedule the protests, Twitter to
coordinate, and YouTube to tell the world!’ A prominent Egyptian blogger, Wael
Ghonim, named his book on the Egyptian uprising Revolution 2.0, and declared that ‘Our revolution is like Wikipedia
… Everyone is contributing content, [but] you don’t know the names of the
people contributing the content’. Western media quickly labelled the risings in
Tunisia, Egypt and elsewhere the ‘Twitter Revolutions’. It was even claimed
that an Egyptian couple named their baby ‘Facebook’. For some commentators,
these events proved that new communication technologies were a force for
democracy. Philip Howard and Muzammil Hussain of the University of Washington
have argued that whereas in the past protest movements in this region had been
suppressed,
The Internet,
mobile phones, and social media made the difference this time. Using these
technologies, people interested in democracy could build extensive networks,
create social capital, and organize political action with a speed and on a
scale not seen before. Thanks to these technologies, virtual networks
materialized in the streets. Digital media became the tool that allowed social
movements to reach once-unachievable goals…
However, it seems that such a cyber-utopian reading of these events is misplaced. It has been pointed out that there does not appear to be a correlation between internet penetration and the extent of the Arab protests. Thus, there were widespread protests in Yemen, where the rate of internet penetration is low, but few protests in the Gulf States, where there was greater
access to the internet. An analysis of clicks on links in tweets relating to
the protests indicates that much of the internet traffic generated by the
risings came from outside the countries affected, suggesting that the chief
role of social media was not to coordinate protests but rather to alert the
outside world to what was happening. When the internet was switched off in
Egypt, the protests actually grew in size, suggesting that social media was not
essential to the co-ordination of protests. New media did not simply supplant
traditional sources of news. Indeed, it seems that much of the impact of new
media was a result of its use as a source of information by traditional news
outlets. For example, it has been suggested that much of the mainstream media’s
coverage of events in Tunisia was derived from Tunisian Facebook pages which had
been repackaged for a blog maintained for Tunisian exiles and then passed on to
journalists via Twitter (Cottle p. 652). There appears to have been a
realignment in which old and new media remediated each other in a complex
interplay.
Anne Alexander and Miriyam Aouragh in an
important recent study have used interviews with Egyptian activists to
contextualize the role of new media in the Egyptian uprising. They describe how
activists, including representatives of youth movements, workers' groups and the
Muslim Brotherhood, met for weeks beforehand to plan the protests. Alexander
and Aouragh emphasise that ‘the Egyptian activists we interviewed rightly
reject simplistic claims that technology somehow caused the 2011 uprisings, and
they say it undermines the agency of the millions of people who participated in
the movement that brought down Hosni Mubarak’. But Alexander and Aouragh remind
us that there is also a risk of falling into the opposite trap by assuming
that, if social media did not cause the Arab Spring, then they were of no
significance. A million and a half tweets from Egypt at the time of the rising suggest
this is wrong, and Alexander and Aouragh insist that we need to move away from
false polarisations and place the internet activism of the Arab Spring in the
context of wider developments in media and the public sphere. The Arab Spring saw
a profound realignment of the relationship between new and old media, in which
new media emerged as an important additional space for dissent and protest. In
past revolutions, it has often been difficult to recapture the voices of the
insurgents; social media now gives us unparalleled opportunities to explore
these textualities of revolt.
However, what I am interested in here is
the cyber-myth, the idea that Facebook and Twitter allowed the Arab protests to
succeed when previously they had easily been suppressed. This is a myth that
has gained a firm hold in the popular imagination, and it reflects a deeply
held belief that the digital revolution will not simply alter our working life
and give us new forms of leisure but will also lead to major political and
social upheaval, on a par with such great historical movements of the past as
the Reformation. This widespread belief in inexorable technological progress
has been well expressed by Michael Brodie, the Chief Scientist of Network
Technologies for Verizon, the American telecommunications company, who suggests
that we are about to see a digital revolution which will make the Reformation
or the Industrial Revolution seem low-key. Brodie declared that:
the Gutenberg
Bible led to religious reformation while the Web appears to be leading towards
social and economic reformation. But the Digital Industrial revolution, because
of the issues and phenomena surrounding the Web and its interactions with
society, is occurring at lightning speed with profound impacts on society, the
economy, politics, and more.
There is a common assumption in the West that
changes in digital technologies will inexorably generate major transformations
in social, political and economic structures. The American business guru
Clayton Christensen introduced in 1995 the idea that business success was associated with the development and adoption of 'disruptive technologies'. Christensen subsequently adopted the wider term 'disruptive innovations' to reflect the idea that business models could also be disruptive. In coining the
term Web 2.0, Tim O’Reilly picked up on the disruptive zeitgeist and disruption
has consistently been seen as a feature of Web 2.0. The strapline for one of
the first Web 2.0 conferences in 2008 was ‘Design, Develop, Disrupt’.
New technologies of communication have been
seen as particularly disruptive and likely to produce major social and
political upheaval. Among the most influential media theorists have been the
Toronto school of Harold Innis, Marshall McLuhan and Walter Ong, who suggested
that major epochs in human history were marked by the appearance of new
communication media. They proposed that the shift from an oral to a literate
society was one such transformation. The appearance of printing in the West is seen as
another major transformation precipitating great upheaval. In this analysis, the
impact of the printing press is a pointer towards the type of social and
cultural disruptions which will be produced by the emergence of electronic and
digital forms of communication. The idea that the printing press was a major
agent of social, religious and political change has become widely accepted as a
result of the work of Elizabeth Eisenstein. In a monumental study, Eisenstein suggested
that the role of printing had not been given sufficient weight in accounts of
the Renaissance, Reformation or Scientific Revolution and that printing was
‘the unacknowledged revolution’. Eisenstein argued that there were two major
means by which printing acted as an agent of change. First, she suggested that
print standardized texts which had been fluid during periods of oral and
manuscript circulation. This enabled knowledge to become more settled and
easily transmitted. Second, Eisenstein argued that, by making large numbers of texts
available, their contradictions and mistakes became more evident, so that readers
became more critical and sceptical of authority.
The circulation of digital information alters
once again these two key characteristics of information. Texts have perhaps ceased
to be fixed, so that it could be suggested we have reverted to the fluidity
of oral and manuscript culture. In a recent presentation at MIT, the folklorist
Tom Pettitt proposed the 'Gutenberg Parenthesis': 'the idea that oral culture was in a
way interrupted by Gutenberg's invention of the printing press and the roughly
500 years of print dominance; a dominance now being challenged in many ways by
digital culture and the orality it embraces’. If Eisenstein was right, then it seems
reasonable to expect that we will shortly see new historical movements
comparable to the Renaissance and Reformation, disruptions and transformations
on a cataclysmic scale. Yet a growing number of historical bibliographers are
expressing doubts about Eisenstein's thesis. Some states successfully resisted the printing press. The church and state ensured that the printing press was kept out of Russia, and when a press was set up in Moscow in 1564 it was
soon destroyed by a mob. The Ottoman Empire was likewise able to keep printing
at bay, with the first Turkish press only being established in the eighteenth
century. Moreover, the printing press did not kill off the manuscript. David
McKitterick has described how a manuscript of a treatise by Walter Hilton was
copied at Sheen in 1499, despite the fact that the owner of the manuscript had
a copy of the printed version of the same treatise produced by Wynkyn de Worde
five years previously. Although the production of printed gazettes flourished
in seventeenth-century England, manuscript newsletters were equally important
in the dissemination of news. Indeed, many regarded manuscript news as more
reliable than the printed version and the Duke of Newcastle warned Charles II
that the pen was actually far more dangerous than the press, since opponents
might be bolder in a letter than in print. John Donne and Andrew Marvell were
suspicious of print and believed that manuscripts might prove to be more
durable.
The survival of a mixed media economy after
Gutenberg is perhaps not surprising, but a more serious objection to
Eisenstein’s work is that there is substantial evidence that printing did not
standardise texts. Printing was a craft activity and, just like manuscript copying, there were many points in the process of printing at which
accidents, errors and mistakes could be introduced. As David McKitterick has
pointed out:
From the 42-line
Bible onwards, thousands of books [printed in the fifteenth century] exist with
different type settings for reasons that are not always clear but that always
emanate from some adjustment found necessary in the printing house or the
binder’s bench … Of three dozen copies
surviving of Fust and Schoeffer’s Durandus
(1459), no two copies are exactly alike.
Examples of printed books which differ as
much as manuscripts can be multiplied endlessly. Famously, no two copies of
Shakespeare's First Folio are exactly the same. William Aldis Wright compared
ten copies of the 1625 edition of Bacon’s Essays, and found that none were the
same. Wright observed that:
The cause of
these differences is not difficult to conjecture. Corrections were made while
the sheets were being printed off, and the corrected and uncorrected sheets
were afterwards bound up indiscriminately. In this way the number of different
copies might be multiplied to any extent.
In other words, it is likely that no two
copies of this edition of Bacon’s work are the same. The implications of this
for online presentation of early printed books are fundamental and have not, I believe, been sufficiently discussed. Early English Books Online presents us
with images of just one copy of the 1625 edition of Bacon’s work from Cambridge
University Library, so we have no way online of investigating the other variant
copies. Far from making the text of Bacon’s work more fluid, the online
presentation destroys our awareness of the fluidity and variation of the
printed text.
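Collation of variant copies is, incidentally, a task to which even very simple digital methods lend themselves. The following minimal sketch uses Python's standard difflib module with two invented sample transcriptions (not real collation data) to show how press variants between copies of the same edition might be surfaced:

```python
# A minimal sketch of collating two transcriptions of the 'same' printed page.
# The sample lines are invented for illustration; a real collation would use
# full diplomatic transcriptions of each surviving copy.
import difflib

copy_a = [
    "Revenge is a kinde of wilde iustice,",
    "which the more mans nature runs to,",
    "the more ought law to weed it out.",
]
copy_b = [
    "Revenge is a kind of wild iustice,",
    "which the more mans nature runnes to,",
    "the more ought law to weed it out.",
]

# unified_diff reports only the lines where the two settings of type differ,
# which is what a collator looks for when hunting press variants.
for line in difflib.unified_diff(copy_a, copy_b,
                                 fromfile="copy A", tofile="copy B",
                                 lineterm=""):
    print(line)
```

Online resources which present a single copy as 'the' text foreclose exactly this kind of comparison.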
The picture which emerges from historical
bibliographers such as David McKitterick, Adrian Johns and Sabrina Baron is
that Gutenberg’s introduction of the press marked one stage in the long process
of the evolution of printing. As Raymond Williams pointed out, the rise in
literacy and access to information was a long revolution in which the
appearance of the steam-driven printing press in the nineteenth century was
just as important as the work of Gutenberg. Moreover, this process was not
technologically driven. Political struggles over issues such as censorship and
taxes were just as important as technological innovation in opening up access
to printed information. As David McKitterick has pointed out: ‘the printing
revolution itself, a phrase which has been taken to heart by some historians,
was no revolution in the sense that it wrought instant change. The revolution was
part technological, and part bibliographical and social. It was prolonged, and
like many revolutions its process was irregular, and its effects were variable,
even erratic.’
The picture painted by McKitterick and
other historical bibliographers of the impact of printing recalls the
description of the Arab Spring by Anne Alexander and Miriyam Aouragh. The
process was a complex and extended one, involving the realignment and repurposing
of media rather than a simple disruptive transformation. In the light of these
types of analysis, it becomes very difficult to accept the technologically-led
disruptive model of media history proposed by the Toronto school of Innes,
McLuhan and Ong. Moreover, the Toronto school privileges technologies of
communication, which makes it sound as if technologies like the printing press
dropped from the sky. The history of media reflects a much broader
technological base. Printing presses only became capable of mass production
when they began to be powered by steam engines in the early nineteenth century.
To feed the new steam-powered presses, it was necessary to devise new methods
of making paper. Even then, the new machine-made books would not have been
widely distributed without canals and railways. All these technologies were
necessary to make printed books everyday objects.
In recent discussion of disruptive
innovations, the focus is frequently on the history of the media, and
comparatively little attention is paid to one of the most disruptive moments in
Western history, the profound economic changes which began in the late
eighteenth century and are known as the Industrial and Agricultural
Revolutions. This period is conventionally taken as marking the rise of
modernity, and in a wide range of scholarly literature across many disciplines
is seen as a major watershed in human history. In contemplating the digital
revolution, it may seem as if there is little to learn from looking back to the
Industrial Revolution. The clean, hi-tech electronic world of the digital seems
utterly opposed to the smoky, muscle-driven factories of early
industrialization. The digital is frequently represented as a means of escape
from the industrial. Yet our digital world is largely a creation of many of the
key technologies of that industrial world. The development of the telegraph was
closely linked to the growth of railways, and the concept of the digital was
the creation of engineers seeking to improve the performance of telegraph
wires. One of the great icons of the Industrial Revolution, Brunel’s steamship
the Great Eastern, was used to lay the first successful transatlantic telegraph cable, thereby effectively laying the foundations of
the internet. Some of the fundamental concepts behind the computer program as
a sequence of logical instructions were developed in the 1820s from punch card
mechanisms used to control mechanical looms. Moreover, it was the machines created
by the Industrial Revolution which provided the technological infrastructure to
create computers – the turbines, valves, transistors, silicon and cathode ray tubes which make the computer one of the most sophisticated
products of Western industrialisation.
The industrialization of the late
eighteenth century was a process which first gained momentum in various regions
of Great Britain, such as South Yorkshire. There can be no better place than
Sheffield, one of the great centres of the industrial revolution, to consider
the industrial dimensions of the digital and contemplate the Industrial
Revolution as a disruptive moment. Does the Industrial Revolution, and the
associated developments of the Agricultural Revolution, have anything to teach
us in considering potential digital transformations? This was clearly a period
when technological innovation was important. It felt like a period of
transformation. Tourists travelled from Europe to admire such wonders as the
Iron Bridge at Coalbrookdale, and artists such as Joseph Wright and Philip de Loutherbourg celebrated these new technologies. In works of literature such as
Thomas Love Peacock’s Headlong Hall, the rights and wrongs of the manufacturing
system were earnestly debated, with one character praising the profound
researches, scientific inventions and complicated mechanisms which had given
employment and multiplied comfort, while another denounced the innovations:
‘Wherever this boasted machinery is established, the children of the poor are
death-doomed from their cradles. Look for one moment into a cotton mill, amidst
the smell of oil, the smoke of lamps, the rattling of wheels, the dizzy and
complicated motions of diabolical mechanisms’. Wide-ranging cultural and social
transformations have been attributed to these technological changes, such as regular
working hours and standardized timekeeping.
Sheffield has an industrial tradition as a
centre of cutlery manufacture which goes back to the middle ages. It was partly
the specialized skills available in Sheffield which prompted Benjamin Huntsman to establish himself there to undertake his experiments in the production
of crucible steel which laid the basis of Sheffield’s steel industry. Sheffield’s
light trades remained important even after Henry Bessemer's inventions allowed
the production of steel in bulk from the middle of the nineteenth century. As, thanks to Bessemer, huge steel plants appeared
in the city, making rails, steel plates and armaments, the transformative
effect of the new technologies was evident in the physical fabric of the city
itself. As early as 1768, a visitor commented that ‘Sheffield is very large and
populous, but exceedingly dirty and ill paved. What makes it more disagreeable
is the excessive smoke from the great multitude of forges which the town is
crowded with’. By 1842, the social reformer Edwin Chadwick declared that
‘Sheffield is one of the dirtiest and smokiest towns I ever saw. One cannot be
long in the town without experiencing the necessary inhalation of soot…There
are however numbers of persons in Sheffield who think the smoke healthy'. The importance of industry in the history of Sheffield is commemorated in the Victorian town hall, which is surmounted by a statue of Vulcan
and incorporates statues of figures representing electricity and steam who hold
scrolls with the names of such great technological pioneers as Watt, Stephenson, Faraday and Davy.
In this Victorian view, the Industrial
Revolution was the achievement of technological genius and enterprise. If this
was indeed the case, then perhaps the digital world does have something to
learn from its industrial great-grandparents. This view still holds sway, as is
suggested by a recent comment of the Sheffield MP Nick Clegg that it was the likes of Brunel, not the bankers, who made Britain great. However, since the Great Western Railway cost £6,500,000 (over £300 million in
modern value, and twice the original estimate), presumably the bankers were of
assistance in facilitating this technological revolution at some point. For
politicians, it is convenient to hope that genius and inventiveness can quickly
bring prosperity and wealth. But the history of industrialization suggests that
this process of change can be amorphous, patchy in impact and above all subject
to long timescales. Just like the printing revolution of Gutenberg, the
industrial revolution dissolves under closer examination and becomes very
difficult to pin down.
The term industrial revolution was not
used as a shorthand for the changes which began in Britain until the late
nineteenth century. It expressed the idea that Britain had gone through changes
at this time which were comparable in scale and importance to the political
revolutions in France and Germany. Clearly something of profound importance had
happened in Britain towards the end of the eighteenth century, but economic
historians have struggled to get a clear view of the nature and structure of
the process. The period from 1760 to 1830 was characterized by a wealth of
disruptive innovation, yet most recent research suggests that economic growth during
this period was not particularly marked. It appears that productivity growth
and technological progress were confined to a few small sectors such as cotton,
wool, iron and machinery in remote regions such as South Yorkshire, whereas
much of the rest of manufacturing remained stagnant until after 1830. For some
historians, the important features of early industrialization were not so much
economic developments or technological changes as the social and cultural
changes introduced by the growth of factory working and changes in farming.
Just like the printing revolution or the Arab Spring, the Industrial Revolution
proves on closer examination to be a much more complex and amorphous process
than is suggested by the use of the word revolution.
This is vividly illustrated by the
story of industrial development in Sheffield, which has been described by such
distinguished historians from Sheffield University as Sidney Pollard and David
Hey. As in other major industrial cities such as Birmingham and Glasgow, industrialization did not take place in Sheffield by accident. The availability
of water power had made Sheffield a centre of craft production of cutlery since
the middle ages. It was partly the availability of skill and expertise in metal
working which encouraged the scientific instrument maker Benjamin Huntsman to
move from Doncaster to Sheffield to undertake his experiments in creating
crucible steel. However, despite Huntsman’s innovation in steelmaking, the
initial industrial growth in Sheffield was in its historic light trades such as
the making of tools, cutlery and silver plate. The techniques in Sheffield’s
light trades changed very slowly. Before 1850, the only major change was the
use of steam instead of water to drive the wheels used by grinders. The light
trades remained dominated by the 'little mesters' who hired rooms in works with
steam-powered wheels. It was only in the 1850s that factory production and
mechanization began to be introduced in the light trades. Similarly, steel
production and heavy industry only began to dominate Sheffield from 1850, chiefly
as a result of the establishment by Henry Bessemer in 1859 of a steelworks
using his new method of bulk steel production. The creation of heavy industry
in Sheffield was a product of the third quarter of the nineteenth century.
Between 1851 and 1891, employment increased over 300% in the heavy trades,
compared with 50% in the light trades. In 1851, less than a quarter of the
workers in the city were employed in heavy industry; by 1891, two thirds of the
city’s workers worked in heavy industry.
We assume that new digital technologies
will very rapidly bring major cultural and social transformations in their
wake, but the lessons of industrialization suggest that the process may be
longer and more complex than we generally imagine. Huntsman first produced
crucible steel in the 1740s and steam power arrived in the city in 1786, yet it
took nearly a hundred years for Sheffield to become a steel city. The model of disruptive innovation is not a helpful way of imagining the process of industrialization. It
was actually the ability of industrialization not to disrupt but instead to support
sustained change which was important. In Joel Mokyr’s words, ‘The Industrial
Revolution was “revolutionary” because the technological progress it witnessed
and the subsequent transformation of the economy were not ephemeral events and
moved society to a permanently different
economic trajectory’. (p. 3) If industrialization is seen in this way, as a
sustained trajectory of economic change, it is a process which still continues,
and the digital world can simply be seen as an extension of a process which
began in the eighteenth century. Indeed, this continuity can be seen as
stretching further back. As we have noted, Sheffield’s growth reflected skills
developed since the Middle Ages, and such long-standing commercial traditions
fed into the early development of industrialisation.
The Industrial Revolution suggests that
the model of disruption and transformation we use in thinking about the digital
world may be over-simplistic. Are there other ways in which thinking about
industrialization can help us in understanding the digital world? I would like
to suggest that there are. In thinking about the digital humanities, we tend to
focus our attention on tools and methods, but it is striking that in cities
like Sheffield and Birmingham at the time of industrialization, tools and
working methods often did not greatly change, but the environment did. Sidney
Pollard has pointed out how ‘a visitor to the metalworking areas of Birmingham
or Sheffield in the mid-nineteenth century would have found little to
distinguish them superficially from the same industries a hundred years
earlier. The men worked as independent sub-contractors in their own or rented
workshops using their own or hired equipment … These industries … were still
waiting for their Industrial Revolution’. Yet, as Pollard emphasized, the
environment in which these workmen operated had been completely transformed.
Their wheels were now powered by steam and there were other gadgets which
speeded up minor operations such as stamping and cutting. The workshop might be
lit by gas and have a water supply. Railways made distribution easier and
cheaper and gave access to a larger labour market. Cheap printing would assist
in advertising products. While the 'little mester' may have been working in an old-fashioned way, his environment had been completely transformed. Likewise, it may be that
the most important changes in the digital humanities will be in the environment
in which researchers into the humanities operate, and we should perhaps be
giving more attention to this.
The fascination of the digital lies in
its immense variety: 3D printing, multispectral imaging, mobile technologies and RFID all have their part to play in humanities scholarship, as well as more familiar methods such as linked data, geo-spatial visualisations, text encoding and
many others. This need for a pluralistic outlook in dealing with the digital is
one that is reinforced by the history of industrialization. While developments
such as steam, telegraph and steelmaking were important, they only formed a
part of an enormous spectrum of technological developments. It is striking how wide the interests of such celebrated figures of the Industrial Revolution as James Watt were. Watt was as preoccupied with the making of musical instruments or the copying of sculpture as he was with the application of steam power.
Likewise, among Henry Bessemer's inventions were an early type-composing
machine, new methods of making pencils, machines for making plate glass and an
(unsuccessful) ship to avoid seasickness, as well as his new method of steel
manufacture. The examples of men like
Watt and Bessemer remind us of the importance of an eclectic approach to the
digital humanities, of embracing an approach that affirms that there is no
single answer, no single piece of kit or method which will unlock the digital
humanities. Digital transformations will involve a variety of
approaches, embracing both risky short-term
experimentation and support for sustainability, embracing both mash-ups
made in bedrooms and experiments with synchrotrons, as
well as digital art works and huge
quantitative visualisations. The digital humanities will not only be a critical
and theoretical debate but will also involve making and coding. It encompasses both data and materiality.
While we tend to associate the
Industrial Revolution with such major inventions as the steam engine, a key
driver of industrialization was the small improvement or adjustment – tinkering
with and progressively improving technology. The first practical steam engine was built
by Thomas Newcomen at the beginning of the eighteenth century. Watt’s invention
of the separate steam condenser was a micro-invention which made steam power
economically viable. Watt's low-pressure steam engine was not suitable for locomotives, and it was further refinements by many others which eventually made a high-pressure steam engine practicable. It is tempting to assume that economic
transformation is associated with the paradigm-shifting macro-invention, but this is not necessarily the case. Two of the great macro-inventions of the
eighteenth century, the hot air balloon and the smallpox vaccine, had limited
economic impact, whereas Henry Cort’s invention of puddling and rolling was
technically modest but, by allowing the production of wrought iron, had enormous
economic impact. We are regularly urged by research councils and others to
deliver the macro-invention, to demonstrate the paradigm shift. Yet the history
of industrialization suggests that the small improvement, the micro-invention,
can be more important. Moreover, it is perhaps precisely this kind of
micro-improvement that the digital humanities is particularly well placed to
deliver.
Some of the technical developments of the
Industrial Revolution were linked to new scientific theories. Watt’s separate
condenser was influenced by the theory of latent heat proposed by his mentor
at the University of Glasgow, Joseph Black. However, for the most part, as Joel
Mokyr has observed, ‘The inventions that set the British changes in motion were
largely the result of mechanical intuition and dexterity, the product of
technically brilliant but basically empirical tinkerers, or "technical designers"' (p. 75). The late eighteenth century was a period of scientific and
technological ferment, but this took place outside any formal academic
structure. This is illustrated again by James Watt in Glasgow. Watt is
one of the outstanding names associated with the University of Glasgow, but he
was never a member of the University’s academic staff. He was employed to
repair scientific instruments. It was in the process of repairing a model of a
steam engine owned by the University that Watt hit on the idea of a separate
condenser. Although Watt wasn’t a lecturer but a mere craftsman, his workshop
became the intellectual hub of the University. His friend John Robison, who afterwards became
Professor of Chemistry at Glasgow, recalled how: ‘All the young lads of our
little place that were any way remarkable for scientific predilection were
acquaintances of Mr Watt; and his parlour was a rendezvous for all of his
description. Whenever any puzzle came in the way of any of us, we went to Mr
Watt. He needed only to be prompted; everything became to him the beginning of
a new and serious study; and we knew that he would not quit it till he had
either discovered its insignificance, or had made something of it’.
Watt was not exceptional. In Sheffield,
Benjamin Huntsman was also a scientific instrument maker. Sheffield plating was
accidentally discovered in 1743 by a Sheffield cutler, Thomas Boulsover, while
repairing a customer’s knife. Henry Bessemer received only elementary
schooling, preferring to gain practical experience in his father’s type
foundry. When Bessemer was invited to describe his steel process to the British
Association, he protested that he had ‘never written or read a paper to a
learned society’. Stainless steel was developed in Sheffield in 1913 not in the
University but in the research laboratory of the steel firms Firth and Brown by
Harry Brearley, a self-taught metallurgist who had never received any formal
education. One of the great challenges which digital technologies present us with is the need to develop spaces which allow theory, making and tinkering to collide
– a digital equivalent of Watt’s workshop at Glasgow. Ideally, this would be
precisely what a digital humanities centre should be like, but sadly we have
rarely achieved this. The pressure of university funding structures means that
most digital humanities centres are soft-funded and are on a treadmill of
project funding which restricts the ability to act as centres for innovative
thinking. Moreover, in Britain at least, universities are increasingly making a
stronger distinction between academic and professional staff. This is without
doubt a retrograde development, but the political and administrative drivers
behind it are formidable. In this context, it is difficult to see how digital
humanities centres can become more like Watt’s workshop or Harry Brearley’s
laboratory at Firth and Brown, yet I think we must try.
Such new spaces of making and
collaboration need not, of course, be physical spaces, but they must
embrace different skills, outlooks and conversations. We need to create spaces
which would embrace the digital equivalent of a James Watt or a Harry Brearley.
The creation of such spaces was a fundamental feature of early
industrialization. Economic historians are increasingly emphasizing the role of
social capital as fundamental to understanding early British industrialization.
Historians have frequently been puzzled as to why the first industrialization
occurred in Britain. There were other more technologically advanced countries
such as France. It seems that an important part of the reason for Britain’s
early lead was that it had social structures which facilitated the spread of
ideas and the making of contacts and partnerships. The multitude of clubs and
societies in eighteenth-century Britain helped spread expertise and encourage
new enterprises. A celebrated example is the Lunar Society, based in the West
Midlands, which included many of the most famous names of the period such as
Matthew Boulton, James Watt, Josiah Wedgwood and Erasmus Darwin. Such
friendships were vital to the new enterprises. Watt had struggled to develop
his steam engine in Glasgow, but Boulton in Birmingham had access to the
necessary precision craftsmanship which allowed the successful manufacture of
steam engines. Moreover, while the specializations of the Lunar Society members were distinct, their fascinations overlapped tremendously, so they were able to support each other's ideas and endeavours well outside their own fields in a kind of early interdisciplinarity. The Lunar Society was not exceptional.
Britain contained hundreds of philosophical clubs, masonic lodges and
statistical societies which were essential in fostering the hands-on, tinkering culture which encouraged early industrialization.
We may feel that in learned societies
like ALLC or ADHO we have the equivalent of a Lunar Society in digital
humanities. But the model of something like ALLC is that of a
nineteenth-century learned society, and the Lunar Society was more flexible and
informal than that. Bodies like the ALLC or ADHO are designed to affirm the
respectability and seriousness of their members, to show that they are worthy
professional people. But the informal, drunken societies of the eighteenth
century show the value of using much looser and informal arrangements to
generate social capital. We need to think about how we can recreate that kind
of eighteenth-century social excitement in the digital sphere. What is particularly important about these eighteenth-century clubs is that they operated a very big tent. There was no set view in the eighteenth
century as to whether the engineer or the money man should take the lead. It
has been suggested that the key skill was ‘to identify a need or opportunity,
then cooperate with others who possessed a different skill to take advantage of
it’. This description of the skills
necessary for success in the eighteenth century is, I would suggest, equally
applicable to the digital world. However, in the eighteenth century this also involved
an appetite for risk. Watt was constantly terrified by what he saw as Boulton’s
imprudence. Two of the greatest engineers and entrepreneurs of the Industrial
Revolution, Richard Trevithick and
Richard Roberts, died penniless. I wonder whether, in the dot-com age, we have
the same appetite for risk.
But what is particularly striking about
industrialization is the passion for making. John Robison described how for
James Watt, ‘everything
became to him the beginning of a new and serious study; and we knew that he
would not quit it till he had either discovered its insignificance, or had made
something of it. No matter in what line – languages, antiquity, natural history – nay, poetry, criticism, and works of taste; as to anything in the
line of engineering, whether civil or military, he was at home, and a ready
instructor'. According to Robison, when Watt was asked to repair the University
of Glasgow’s model steam engine, it was ‘at first a fine plaything to Mr Watt…But
like everything which came into his hands, it soon became an object of most
serious study'. The mixture of play, tinkering, science and hands-on experimentation was characteristic of the Industrial Revolution, and it is in that art of making, that materiality, that perhaps its most potent legacy lies.
For Watt and the others, this making was closely bound up with data. One of Watt's earliest inventions was a perspective machine to assist artists. One great contribution of the Soho Manufactory was the production of the first precise slide rules, essential to calculate boiler pressures. Watt envisaged the production of a mechanical calculating machine, but felt that the engineering techniques of the time could not produce sufficiently precise parts – a problem that Babbage was later to encounter. Towards the end of his life, Watt became preoccupied with developing a sculpture copying machine, and his workshop was littered with busts and casts associated with this project. The creation of this machine required both accurate data and methods to make the sculpture – as a mixture of issues of data and making, it was very characteristic of the Industrial Revolution. When the contents of Watt's workshop were recently moved into a new display at the Science Museum, a mould of an unknown bust was found there. It was realized that the mould could be imaged and the resulting 3D model could be used to print out the bust. The work was done by a team from Geomatic Engineering at UCL, and when the bust was printed, it was found to be a previously unknown bust of James Watt. (For more on this, see www.thehistoryblog.com/archives/9892.)
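The scan-to-print step in such a workflow is now almost routine. Here is a minimal sketch of the mesh-preparation stage, using the open-source Python trimesh library; the filenames and the assumption of millimetre units are hypothetical, and the UCL team's actual photogrammetry pipeline was doubtless far more sophisticated:

```python
# A minimal sketch: prepare a scanned surface model for 3D printing.
# Filenames are hypothetical; units are assumed to be millimetres.
import trimesh

# Load a surface model produced by photogrammetry or laser scanning.
scan = trimesh.load("watt_mould_scan.stl")

# Scanned meshes are rarely watertight, but printing needs a closed solid,
# so patch holes and make the face normals consistent.
if not scan.is_watertight:
    scan.fill_holes()
    trimesh.repair.fix_normals(scan)

# Basic sanity checks before sending the model to a printer.
print(f"watertight: {scan.is_watertight}, volume: {scan.volume:.1f} mm^3")

# Export in a format most 3D printers and slicers accept.
scan.export("watt_bust_print.stl")
```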
This exercise, and the way in which new methods of fabrication are giving us new approaches to data, brings the story full circle. Industrialisation and making will, it seems to me,
become more pertinent than ever as digital fabrication becomes increasingly
important. I’d like to conclude my lecture by quickly sharing with you some
video clips that seem to me to make this point very well. The first is a news
report on an exhibition last year at the V&A called, appropriately enough,
Industrial Revolution 2.0:
Industrial Revolution 2.0: http://www.youtube.com/watch?v=JUo6EqAix-o
From this it is a short step to using fabrication machines to replicate objects in museums, and this clip shows the MakerBot, an affordable 3D fabricator, replicating objects in the Getty Museum in Los Angeles. I hardly need to point out the parallels with James Watt's sculpture copying machine:
Through a Scanner, Getty: http://www.youtube.com/watch?v=blKcIsEEoag
The MakerBot was recently used for a hackathon at the Metropolitan Museum of Art in New York in which artists used fabrications of objects in the Museum's collection to create new works of art. Here's a short glimpse of the event in New York in June:
It
is striking how in these clips there are frequent references to revolutions and
disruptions. What I think we have seen is that in fact these new methods echo
deeper continuities. The Arab Spring, the arrival of printing and the
Industrial Revolution all show us how change is not necessarily revolutionary
or disruptive. The processes we think of as revolutionary can be lengthy, patchy
in character, amorphous, difficult to measure and unpredictable, and there is
no reason to think that the digital will be any different. It’s the
continuities and the parallels that are often as striking as the disruptions.
Let me end with one last quick clip which shows the Fab Lab in Manchester, which to my mind inescapably recalls James Watt's workshop in Glasgow, and points us
towards one digital space of the future which is deeply rooted in the past:
Fab Lab Manchester: http://www.youtube.com/watch?v=1S8_K2ctNWs
25 August 2012
Making Universities More Open
Sometime I will write a fuller paper on pedagogy in the digital humanities. When I was at Lampeter, I became quite closely involved in a number of e-learning initiatives which seemed to me imaginative and forward-looking, and I was sad that there appeared to be so little contact between the e-learning and digital humanities communities. My colleague Willard McCarty recently made a provocative post to the Humanist online seminar:
Our colleague Jascha Kessler has sent me a letter he wrote to the Editor of the Financial Times, for Saturday, 18 August 2012, "Brave new world without teachers, or learning, or thinkers". It concerns dire predictions of what will happen to higher education as a result of prominent efforts to teach very large classes by online means. (I send it along as my first attachment, below.) Perhaps this effort will be as successful as various tsunamis have been in wiping out coastal settlements. (The metaphor is columnist Christopher Caldwell's, for which see my second attachment.) But I recall prominent efforts at the University of California at Berkeley in the early 1960s to promote teaching by television, accompanied at registration by enthusiastic posters declaring e.g. "See Professor Helson on television!" One can still find the large, now empty, brackets for the televisions in some places. I spit nails, but not here. I think of all my years in classrooms, with people, face to face. "Now we see through a glass darkly, then face to face" reversed? I know, Paul's words are more accurately for us translated "by means of a mirror in an enigma", but the point remains, does it not? Comments? Yours, WM
Here's my response which attempted to indicate some of the ways in which better links between these areas of activity could be built up:
Dear Willard, It is interesting how this issue, which, as you observe, has been around in different forms for many years, is suddenly causing such anxiety in the United States - concerns about readiness for online activities underpinned a lot of the recent controversy about the unsuccessful attempt to dismiss the President of the University of Virginia. I assume that the reason this is causing such concern is what one might call the iTunes effect - the way in which the success of music downloading has heightened awareness among senior managers in all types of activities of the potential for new digitally-based business models to cause radical transformation quite rapidly. It is by no means certain that disruptions (that favourite neo-liberal idea) evident in one area of activity will necessarily be replicated in another - indeed, part of the nature of disruptive tendencies must be their unpredictability, which must include the possibility that they do not occur. However, in terms of this American discussion (and it is very much framed here around the relative inflexibility of the structures developed by North American Higher Education over the past fifty years), the following considerations from the UK might be relevant:
- The first and most important point is, I think, that there has been a lamentable rift between much digital humanities work and new developments in pedagogy over the past ten to fifteen years. In the early 1990s, we believed not only that new digital and networked technologies would transform research and our access to research materials, but also that an equally important transformation would occur in pedagogy. However, much of our effort since then has gone into creating and financing digital humanities centres which were supported by soft funding and therefore necessarily concentrated on a series of short-term research projects. Teaching activity has tended to be rather an afterthought for most digital humanities centres. However, in the meantime, e-learning and technology-enhanced learning have made enormous strides and for many universities in Britain have been a major focus of activity and funding. The rift is illustrated by the separate professional organisations that have been established. I am not aware that bodies like ADHO or ACH have any significant contact with the parallel bodies for learning technologists, such as the Association for Learning Technology (http://www.alt.ac.uk/). The ALT conference is at the University of Manchester from 11-13 September 2012, and looks very interesting. It might be a good way of starting to explore these links in a better way. Another organisation which has of course championed the importance of pedagogy in the digital humanities is HASTAC, and I think this is one reason why HASTAC is the most exciting and interesting organisational activity in digital humanities work at present. There is a great deal on the HASTAC website which bears closely on the themes you have raised.
- While you shudder at the thought of American experiments in lectures by television, we should also remember that we have one enormously successful institution in the UK which sprang from precisely such activities, namely the Open University. To my mind, the Open University is, after the NHS, the most important piece of social innovation in Britain in modern times, and deserved a place in the opening ceremony of the Olympics.
The Open University has of course long ago moved on from the late-night television lectures on BBC2 which we remember from the 1970s, and is pioneering new types of online approach, including major enhancements to Moodle. A hint of some of the Open University's initiatives in this field can be gleaned from the OpenLearn section of their website: http://www.open.edu/openlearn/. The OU has also been pioneering work on mobile access, particularly mobile libraries. The OU of course famously links its distance provision to residential courses, but I suspect its structures provide a good guide to future developments. I think it is sad that the antiquated insistence of UK higher education on institutional autonomy prevents a more co-ordinated and strategic development around the Open University. Given that it is quite probable that new online methods will cause changes, it would make great sense if in the UK we scrapped absurd anachronisms like Oxford and Cambridge Universities, and created a more integrated and strategic service based around the OU.
- Finally, it is worth noting that concerns about the mechanisation of learning are not new. The use of numerical grades for assessment began in Cambridge in the 1790s in direct response to an increase in the number of students, and may be considered at a number of levels a response to the increasing industrialisation of society. When marked examinations for school children were introduced in the 1850s, there were many concerns that they privileged repetitive learning, short-term memory and the retention of conventional knowledge. As a schoolchild myself in the 1960s, I was always struck and enthused by the willingness at that time of many educational bodies to try and break down the obsession with exams and measurement and try new methods of learning. And of course our excitement about digital technologies is that they open up precisely such possibilities. Maybe our aim should be to try and bring that kind of pedagogic liberalism to the new learning environments which are emerging?
Here's the response from Professor Kessler:
I do appreciate the earnestness revealed in Prof. Prescott's comments. I do think he rather misses what is the point of the present discussions. He concentrates on "learning." Viz., *"As a schoolchild myself in the 1960s, I was always struck and enthused by the willingness at that time of many educational bodies to try and break down the obsession with exams and measurement and try new methods of learning..."* *Methods of learning?* What does that mean, exactly? I was a schoolchild in the 1930s-40s. I don't think there was or is a method of learning, unless it is taught somehow. By digitized instructors? Kids learn, Homo sapiens learns as it learns, sans "method" or methodologies concocted by...whom? A robot might learn by implantation of code. Okay, we stick silicon chips in newborn heads? But then the chips learn, and what does each unique individual brain make of it all internally? There may be methods to teach, say, violin technique, but they are applied and tested one on one: teacher and pupil. Results vary by talents. Apart from all that, what I questioned in my letter to the FT was the costs of teachers vs. internet teaching. The learning part requires foot soldiers, future teachers in higher Ed, what schools have been and been about since Sumeria, to test what has been learned, grade and tutor or instruct it. When the Univ of California at Santa Cruz was inaugurated, Prof C. Page Smith [in my letter] went up to organize it. It was all Pass/Fail...no grades. Assuming perhaps Humanists and Historians and Lit and the rest reviewed the written work, not multiple choice Xses, of students. It took but a few years until the scientists rebelled at the lack of grading for qualifications in hard subjects, not philosophical or literary chatter. And grading was back, and how, even for a largely pothead and hippy university student body in the 70s and 80s and perhaps beyond, up in the Redwoods paradise. Even with an Open University scheme, Lenin's question remains: *WHO, WHOM?* All may enter and study... but what has been learned by each individual? That costs, and doing away with the absurdity of OxBridge doesn't solve the question of judgments by individuals, referees. You cannot get away with anything in competitive sports. Some are better than others, as in horse and dog racing, and judging there is easy: whoever finishes first, second, third, etc. Not including Lance Armstrong, et alia, as it turns out. Then, too, we are advised: "It would make great sense if in the UK we scrapped absurd anachronisms like Oxford and Cambridge Universities, and created *a more integrated and strategic service based around* the OU." What, it may be asked, is meant by that phrase in italics? More integration of what? Service meaning...teachers? Who, Whom? What qualifies? Integrated whos? Serviced by Whoms? *O, Orwell, thou shouldst be living at this hour!* I take Prescott to be serious, but the questions I raised about Humanities and the Internet remain. It is *not* a matter of TV lectures. When the few expert lecturers have retired, who takes their place? Who has learned what from the medium? I like documentary films, how it is made, where the penguins walk? But then all that may be teaching me what is out there. Still, what goes on, how and why, stanza by stanza in the Divine Comedy? Who will learn or teach what the Divine is, the Commedia means? Or even says? E=mc² says what it means, and means what it says, and a digitized quiz can locate my grasp of those letters.
However, and for example, I offer an Honors Seminar for Frosh, first-year students, pass/fail, just show up, and select one assignment. I provide 100 pages of poems; I lay out the fundamental 3 modes of poems written from history. I require each to pick a poem, read it aloud and deliver orally 1 written page that tells the rest what the poem says. I forbid students to say what anything, lines, stanzas, whatever, *means*. *Meanings are idiosyncratic and arbitrary.* If anyone imagines a contemporary student of 19-20 can write one double-spaced page of sentences stating what the poem says, lines say, that one is mistaken. These University of California youth are admitted as of the top 17-19% of high school graduates. We have 2 dozen State Universities for the lower tiers; and many community, 2-year colleges for all the rest who want something after high school and need a lot for work and life and career. A sort of Open system a la UK. But...there is hardly any system to integrate persons tomorrow who have not studied and learned and been graded. Quality is quality. Finally, re my Honors Seminar: I attach Plato's Symposium, and tell them to read that short work. As all will recall, each principal vocation speaks in turn all that night, and each man speaks only of what he knows from his craft or profession. Not a one is able to tell the group what it is that the god Eros does to discipline or inspire or create their work[s]. They are all good and educated senior Athenians. But as for understanding the matter of daily life and work's structures and statements, let alone meaning...*nada, nada y pues nada.* In the end, Socrates overturns the evening, although what he has to say remains a mystery, clearly presented. And he got it all from some old Sibyl in the mountains. The SYMPOSIUM, in short, remains exemplary regarding this problem. The scientists and technologists are crystal clear about what things say, not what they [might or could] mean; they measure, and measure has always been, or measuring, the foundation effort of civilization: Tekne, the Greeks called it. But I am sure it was known to the painters of Paleolithic caves. That is clear enough, or should be. As for *meaning?* Alas, that is the burden of would-be Humanists, digital, digitized, or whatever. Jascha Kessler
And my reaction:
Professor Kessler is right that I did not address the main point of his letter, which is that the use of new technologies in learning does not automatically mean that the academic profession is doomed. On this, he is right. What I wanted to point out is that we have over forty years of experience in Britain of providing university education through a mixture of television, radio, internet, audio cassette and other media, and it seems to me very strange that the current fevered discussion in the United States never refers to this experience, which provides very clear pointers for future development.

The idea of a 'university of the air' was proposed in Britain as early as 1926, when a historian working for the BBC suggested the development of a 'wireless university'. The idea gathered momentum in the early 1960s, and the creation of an experimental university using television and radio was a prominent part of the Labour Party's manifesto when it was elected to government in 1964. The intention was to offer university education without the requirement for any prior educational qualification. That seems to me one important difference between the discussions in the 1960s and the debates on which Professor Kessler comments: in Britain, we have always seen new technologies as providing a key to offering wider access to education, while the current discussions in America seem to focus almost entirely on technology as a cost-saving option. The Open University was established at Milton Keynes in 1969. The Tory minister Iain Macleod called the idea of a 'university of the air' 'blithering nonsense' and threatened to abolish it if the Conservatives formed the next government, but fortunately Margaret Thatcher, the new Education Secretary, decided to allow the experiment to go ahead, and the first 25,000 students were admitted in 1971, to be taught by a mixture of television, audio cassettes, home science kits, course packs and residential courses.

Today, the Open University is the largest single university in Britain, with more than 260,000 current students. Since 1969, over 1.5 million students, many without previous formal educational qualifications, have graduated from the Open University. As I mentioned in my previous post, the Open University is pioneering online methods of teaching. But I think its most important achievement was that, in the words of its website, 'The Open University was the first institution to break the insidious link between exclusivity and excellence'. The Open University has been revolutionary in many of its pedagogical methods, and many of these have since been adopted by conventional British universities. Yet, to support Professor Kessler's key contention, what the Open University demonstrates above all is that such innovative educational achievement depends on first-rate academic staff. The Open University currently employs more than 1,200 full-time academic staff and more than 3,500 support and administrative staff. Above all, it has a network of 7,000 locally based tutors (as famously depicted in 'Educating Rita'). The chief lesson of the Open University experience supports Professor Kessler's argument: to use new media successfully to widen access to higher education, you need committed and inspirational academic staff. I think this alone shows why current discussions about the use of new technologies in teaching should take the experience of the Open University in Britain as a starting point.
Much more information about the Open University can be found on its website: http://www8.open.ac.uk/about/main/the-ou-explained/history-the-ou

I suggested in my previous post that the Open University stands comparison with the National Health Service as one of the greatest social achievements of Britain in modern times. On reflection, I wonder if the Open University isn't the greater of the two achievements. To create a collectivised medical system chiefly requires a society with a strong sense of social justice and a political and administrative determination to put a fairer and more civilised system in place; it wasn't necessary to do much that was new in terms of the medicine itself. The creation of the Open University required an equally strong sense of social justice, but it also needed to develop completely new ways of providing a university education which didn't compromise on standards. We need a similar set of values in approaching the pedagogical possibilities provided by new technologies. For further reflections on some of these themes, I would recommend the blog on the history of the Open University maintained by my friend Dan Weinbren: http://www.open.ac.uk/blogs/History-of-the-OU/
A recent post by Dan is pertinent to these discussions:
"Open learning is a movement that isn’t going to go away
The idea that technology can be deployed to support learners isn’t new to those who work at the OU. Suddenly, however, it is in the headlines because Harvard and the Massachusetts Institute of Technology have formed a $60m (£38m) alliance to launch edX, a platform to deliver courses online – with the modest ambition of ‘revolutionising education around the world’.
Paying relatively little attention to the decades-long history of sophisticated use of television, radio, video and the internet that has occurred at the OU, the director of MIT's Computer Science and Artificial Intelligence Laboratory and one of the pioneers of the MITx online prototype, Anant Agarwal, said 'This could be the end of the two-hour lecture… You can't hit the pause button on a lecturer, you can't fast forward'. While MIT might be struggling to catch up pedagogically, this development could be a challenge to the OU, as well as an opportunity for it to demonstrate its experience in the field of supported open learning. As Dr Anka Mulder, head of Delft University in the Netherlands and President of the OpenCourseWare group, which advocates free online course materials, said: 'Open learning is a movement that isn't going to go away'".
5 July 2012
Making the Digital Human: Anxieties, Possibilities, Challenges
During my time in charge of the stunning
Founders’ Library at St David’s College Lampeter in Wales, one volume which particularly
fascinated me was this early thirteenth century theological manuscript, the
oldest in the library. When George Borrow visited Lampeter in 1854, he was told
that the leaves of this manuscript were stained with the blood of monks
slaughtered at the time of the Reformation. The story of the monks’ blood is
apocryphal, but this manuscript is remarkable in other ways, because it is an
early manuscript of Peter of Capua’s Distinctiones
Theologicae. The collections of biblical extracts known as distinctiones compiled by Peter the Chanter,
Peter of Capua and others represent a key moment in human history, because they
are among the earliest experiments in alphabetization. Collections of biblical
extracts in alphabetical form enabled preachers more readily to locate relevant
texts. Contemporaries expressed amazement at the richness of innovative references
in the sermons of preachers who made use of this remarkable new tool. These
manuscripts of the distinctiones
were, as Richard and Mary Rouse have pointed out, the direct ancestor of all
later alphabetical and searchable tools.
The idea that texts could be arbitrarily
arranged according to an abstract system such as the letters of the alphabet
was a startling one in the middle ages, which had previously sought in
arranging texts to illustrate their relationship to the natural order. But the distinctiones showed the advantages of
more abstract methods, and they paved the way for the first concordance to the
scriptures, which was compiled under the supervision of the Dominican Hugh of
St Cher between 1235 and 1249 at the Dominican monastery of St Jacques in
Paris. This is a manuscript of the first verbal concordance from St Jacques.
The creation of this concordance, which organized every word in the bible
alphabetically, was one of the greatest-ever feats of information engineering.
It is said that about 500 Dominicans worked on compiling the concordance. The
organization of the project was almost industrial in its scale and conception,
with each Dominican assigned blocks of letters for indexing. The idea that a sacred
text like the Bible could be approached in such an abstract and arbitrary
fashion was revolutionary. Not only was the creation of the concordance a great
technical and intellectual advance, but it implied a change in the relationship
between man, text and God. The development of alphabetical tools changed the
way people behaved and thought. Previously, memory had been the key attribute
used in engaging with and making accessible the Bible. With these new alphabetical
tools, the cultivation of memory became less important and it was the ability
to manipulate these new knowledge systems which counted. The distinctiones and concordances altered
the way in which man explored his relationship with God; they changed
conceptions of what it meant to be human.
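The method of the St Jacques concordance is worth pausing over, because it is recognizably what we would now call an inverted index, the structure underlying every modern search engine: list every word against the places where it occurs, sort the entries alphabetically, and divide the alphabet among the indexers. Here is a minimal sketch in Python, offered as an anachronistic illustration of the logic rather than any reconstruction of the Dominicans' actual procedure:

    from collections import defaultdict

    def build_concordance(verses):
        """Map every word to the references where it occurs.
        `verses` maps a reference (e.g. 'Gen 1:1') to its text."""
        index = defaultdict(list)
        for reference, text in verses.items():
            for word in text.lower().split():
                word = word.strip('.,;:!?')
                if word:
                    index[word].append(reference)
        # Alphabetical order is the crucial 'arbitrary' innovation: it lets
        # a reader find a word without reading the whole text through.
        return dict(sorted(index.items()))

    def assign_letters(concordance, letters):
        """Division of labour, as at St Jacques: one indexer, one block of letters."""
        return {word: refs for word, refs in concordance.items() if word[0] in letters}

    verses = {'Gen 1:1': 'In the beginning God created the heaven and the earth',
              'John 1:1': 'In the beginning was the Word'}
    print(build_concordance(verses)['beginning'])   # ['Gen 1:1', 'John 1:1']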
In 1875, the librarians at the British
Museum were sorting through duplicate books prior to disposing of them. To
their surprise, they found among the refuse a manuscript which had been
acquired by Sir Hans Sloane, the founder of the British Museum, and was among
his greatest treasures. This volume contained William Harvey’s notes for the course
of public lectures in 1616 in which he first described the circulation of the
blood. Harvey’s discovery of the circulation of the blood was another moment
when understanding of what it meant to be human was radically changed. Harvey
portrayed a world in which the human heart seemed no more than a pump, so that
the body started to sound like a machine. As Allison Muri has discussed in her
fascinating study, The Enlightenment
Cyborg, Harvey’s discovery was to usher in from the end of the seventeenth
century a vigorous debate about the extent to which the human is a machine and
whether machines could become human.
It is possible to interpret the history of
much science and technology as one of constant renegotiation of our
understanding of the nature of being human and of the place of the human in the
wider universe. When Edmond Halley calculated the dates of astronomical events
which he knew he would never see, this raised many issues about the wider place
of the human in the universe and changed human self-perception. The ferocious
objections to Jenner’s use of vaccination against smallpox were largely due to
his introduction of animal matter into the human bloodstream. Likewise, industrialization
fundamentally reshaped many aspects of human life and behaviour: Wordsworth
portrays factory workers as having been fundamentally dehumanized and turned
into machines.
In 1948, Claude Shannon’s landmark paper A Mathematical Theory of Communication
established many of the fundamentals of digital theories of communication and
introduced the concept of the bit as a unit of measurement of information. Shannon
calculated that a wheel used in an adding machine comprised three bits. Single-spaced typing represented 10³ bits. Shannon considered that the genetic constitution of man represented 10⁵ bits. With the decoding
of the human genome, the reduction of humanity to bits and bytes implicit in
Shannon’s calculation seems complete. It seems that this reengineering of our
understanding of the human is daily assuming greater speed and depth. In her
celebrated cyborg manifesto of 1985, Donna Haraway declared that ‘we are all chimeras, theorized and fabricated hybrids of
machine and organism; in short, we are cyborgs’. This ushered in the idea that
we are post-human – that is to say, that the Enlightenment understanding of the
relationship between body and mind has ceased to be relevant as a result of
technological advances. Exactly what our post-human condition might be is of
course not clear, but it is clearly very different to the understanding that
say Halley might have had of his position in the universe.
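To return for a moment to Shannon's arithmetic: the capacity of a device that can rest in any one of n equally likely states is log2(n) bits, which is why a ten-position digit wheel (assuming that is the adding-machine wheel Shannon had in mind) comes out at roughly three bits. A quick check:

    import math

    def capacity_in_bits(states):
        # Information capacity of a device with `states` equally likely states.
        return math.log2(states)

    print(capacity_in_bits(2))    # a relay or on-off signal: 1.0 bit
    print(capacity_in_bits(10))   # a ten-digit wheel: ~3.32 bits,
                                  # Shannon's 'three bits' for the adding machine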
I don’t want here to venture into a
complex area of critical theory which I am ill equipped to discuss. In
considering the implications of the post-human, it is better to refer to the
works of much larger intellects than mine, particularly Katherine
Hayles. The formulation post-human (first recorded in 1888) is a deliberately provocative
one. It does not merely mean that humans will somehow be pushed aside by
machines – this is an oversimplistic and perhaps philosophically impossible
notion. The term post-human rather suggests that our sense of what it is to be
human has changed – as Katherine Hayles puts it, the post-human is a state of
mind, a realization that mankind has finally understood that it is definitely
not the centre of the universe. My concern here is to consider the implications
of this post-human state of mind for our understanding and practice of the
digital humanities. Although the debates about what the digital humanities are
have ranged far and wide, the focus of the discussion has mainly been on the
digital side of the equation. There has been little discussion of what we mean
by the humanities. The orthodox view of the humanities which prevailed when I
was a young man was best summarized by the American literary critic Ronald
Crane in his 1967 book The Idea of the
Humanities. For Crane, the humanities was first and foremost the study of
human achievement. Crane described how human beings (of course, chiefly men in
his view) developed languages, produced works of literature and art, and created
philosophical, scientific and historical systems. Such human achievements were
for Crane at the heart of the study of the humanities.
Since Crane wrote, the idea that the
humanities should explore and celebrate mankind’s achievements has been progressively
challenged. The human has ceased to be the exclusive focus of the humanities.
This partly reflects the impact of technology, which has become so pervasive
and so deeply integrated into everyday life that influential theorists such as
McLuhan and Kittler portray technology as displacing the human. But the
dethroning of the human also reflects wider shifts in understanding. Historians
such as Fernand Braudel have shown how human society may be shaped by deep
underlying geographical factors. Cary Wolfe and others have forcefully reminded
us that the relationship between human society and the animal and plant worlds
is complex and symbiotic, and by no means a one-way traffic. All these trends
have helped displace the human from the centre of debate.
Another assumption central to Crane’s
view of the humanities was that there is a neatly packaged cultural canon defining the heights of human achievement. This view has been
subject to sustained and justifiable attack. In a British context, for example,
Raymond Williams charged that the concept of a cultivated minority which helped
preserve civilised standards from the threat of a ‘decreated’ mass was both
arrogant and socially damaging. For Williams, ‘culture is ordinary in every
society and in every mind’. In response to these developments it has been argued
that we need to develop a post-humanities which overturns any vestiges of an
elitist view of the humanities, while also seeing the human in a more
interactive sense. Thus, Geoffrey Winthrop-Young has proposed that post-humanities
should be characterized by a focus on technology accompanied by a critical
engagement with biological matters – a post-humanities which looks at the
interaction of climates and computers, mammals and machines, media and
microbes.
In a compelling series of recent talks and lectures, Tim Hitchcock has discussed the implications for humanities scholars
of tools like Google’s Ngram Viewer or the use of visualisations to analyse
data from corpora like the Old Bailey Proceedings. Tim forcefully argues that
the interests of humanities scholars need to shift towards interrogating and
manipulating in new ways the vast quantities of data which have now become
available. Hitchcock says that he dreams of ‘a bonfire of the disciplines’
which would release scholars from the constraints of their existing
methodologies and allow them to develop new approaches to the large datasets
now becoming available. Tim’s position is a recognizably post-human one. His
call for a bonfire of the disciplines echoes the frustration expressed by Neil
Badmington in his outline of ‘Cultural Studies and the Posthumanities’.
Badmington describes how he was writing in the Humanities Building in Cardiff
University and declares ‘I wish for the destruction of this cold, grey
building. I wish for the dissolution of the departments that lie within its
walls. I wish, finally, that from the rubble would arise the Posthumanities’.
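The basic operation behind a tool like the Ngram Viewer, mentioned above, is simple to state: count how often a phrase occurs in each year's slice of a corpus, relative to all phrases of the same length in that year. A sketch of the logic follows; the corpus here is a hypothetical mapping of year to texts, and real systems of course work over billions of tokens with far more careful tokenization:

    def ngram_frequency(corpus, phrase):
        """Relative frequency of `phrase` per year. `corpus` is a
        hypothetical mapping of year -> list of document texts."""
        target = phrase.lower().split()
        n = len(target)
        frequencies = {}
        for year, documents in sorted(corpus.items()):
            hits = total = 0
            for text in documents:
                words = text.lower().split()
                for i in range(len(words) - n + 1):
                    total += 1
                    if words[i:i + n] == target:
                        hits += 1
            frequencies[year] = hits / total if total else 0.0
        return frequencies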
Discussion of the digital humanities frequently
gives vent to impatience with disciplinary boundaries and expresses a desire to
reshape the humanities. This has been pithily put by Mark Sample: ‘It’s all about innovation and disruption. The digital
humanities is really an insurgent humanities’. Comments such as this have
excited the ire of the eminent critic Stanley Fish who noted that little is
said of the ‘humanities’ part of the digital humanities, and asked ‘Does the
digital humanities offer new and better ways to realize traditional humanities
goals? Or does the digital humanities completely change our understanding of
what a humanities goal (and work in the humanities) might be?’ Fish’s questions
are fair ones, and are not asked often enough. Is the digital humanities
aligned with a conventional Ronald Crane view of the humanities, or does it
seek to help move us towards – as both Hitchcock’s and Sample’s comments seem
to suggest – a post-humanities?
In Britain, digital humanities centres
have recently been very active in creating directories of projects which
provide us with an overview of the current intellectual agenda of the digital
humanities in the UK. A comprehensive listing of projects is available on
arts-humanities.net, but this includes a number of commercial and other
packages not produced by digital humanities centres. In order to get a clearer
idea of what the digital humanities as formally constituted in Britain
represents, it is best to look at the directories of projects created by the
major digital humanities centres. Let’s start with my own centre at King’s College London. The type of humanities represented by the directory of projects
undertaken by the Department of Digital Humanities at King’s College is one
which would have gladdened the heart of Ronald Crane. Of the 88 content
creation projects listed, only 8 are concerned in any way with anything that
happened after 1850. The overwhelming majority – some 57 projects – deal with
subjects from before 1600, and indeed most of them are concerned with the
earliest periods, before 1100. The geographical focus of most of the projects
is on the classical world and western Europe. The figures that loom largest
are standard cultural icons: Ovid, Shakespeare, Ben Jonson, Jane Austen,
Chopin. This is an old-style humanities, dressed up in bright new clothes for
the digital age.
Oxford University has recently launched
a very impressive directory digitalhumanities@Oxford, which lists around 190
content creation projects in the humanities at the University. While Oxford
seems a little more willing to countenance modernity than King’s College, the
figures are still not impressive: about 30 of the 190 projects at Oxford are
concerned with the period after 1850. While these include some projects on
major modern themes such as the First World War archive and the Around 1968
project, the connection of other projects with the modern world is more
tangential, such as Translations of Classical Scholarship, which just happens
to extend to 1907. At Oxford, the centre of gravity of the digital humanities
is also firmly rooted in earlier periods, with about half of the projects being
concerned with the period before 1600. And again we are presented with an
extremely conservative view of the humanities, in which the classical world has
an elevated position, and names like Chaucer, Leonardo, Holinshed, John Foxe
and Jonathan Swift dominate. The smaller range of projects produced by the
Humanities Research Institute at Sheffield reflects a similar bias, with just
over half dealing with the period before 1700. Glasgow, I am pleased to say,
has by far the highest proportion of more modern projects, with almost half
of its forty projects covering the period since 1850. However, this stronger
emphasis on more modern subjects at Glasgow doesn’t seem generally to reflect a
difference in intellectual approach – the projects are dominated by such
old-style male cultural icons as Burns, Mackintosh and Whistler.
For all the rhetoric about digital
technologies changing the humanities, the overwhelming picture presented by the
activities of digital humanities centres in Great Britain is that they are
busily engaged in turning back the intellectual clock and reinstating a view of
the humanities appropriate to the 1950s which would have gladdened the heart of
Ronald Crane. One of the great achievements of humanities scholarship in the
past fifty years is to have widened our view of culture and to have expanded
the subject matter of scholarship beyond conventional cultural icons. There is
virtually no sense of this in digital humanities as it is practiced in Britain.
If recent scholarship in the humanities has managed (in the words of Raymond
Williams) to wrest culture away from the Cambridge teashop, the digital humanities
seems intent on trying to entice culture back to the Earl Grey and scones. This
use of digital technologies to inscribe very conservative views of culture is
not restricted to digital humanities centres. Libraries and museums have
frequently seen digital technologies as a means of giving access to their so-called
‘treasures’, so that it is the elite objects rather than the everyday to which
we get access. The sort of priorities evident in the British Library are very
similar to those of digital humanities centres: the Codex Sinaiticus, Caxton
editions of Chaucer, illuminated manuscripts from the old library of the
English Royal Family, early Byzantine manuscripts, and Renaissance Festival
Books.
There are some more intellectually and
culturally imaginative projects at the British Library, such as the excellent
UK Soundmap, but significantly they do not come from the mainstream library
areas. Digital technologies have generally not enabled libraries and archives to
enhance access to concealed and hidden material in their collections, and do
not offer those outside the library fresh perspectives on their collections.
Here’s one example. As a legal deposit library, the British Library has
historically received vast quantities of printed material which it does not
have the resources to catalogue. Thousands of such items lurk under one-line ‘dump
entries’ which can be located in the printed catalogue but are paradoxically
very difficult to find in the new online ‘Explore the British Library’. This
unknown and unrecorded material in the British Library includes for example
thousands of estate agents’ prospectuses for new suburban developments in the
1930s. This material is of potentially very great cultural, historical and
local importance, but at present it is completely inaccessible. Shouldn’t the
British Library be giving a higher priority to making available documents like
these, recording an everyday culture, rather than making its so-called
‘treasures’ available in an ever-increasing range of technological forms?
I am conscious that my remarks are
based very much on Britain and of course in painting such a general picture
there are always bound to be major exceptions (I would for example suggest that
the Old Bailey Proceedings has developed a very different cultural and
intellectual agenda to the majority of British digital humanities projects).
But nevertheless I feel confident in my general charge: to judge from the
projects it produces, the digital humanities as formally constituted has been
party to a concerted attempt to reinstate an outmoded and conservative view of
the humanities. The reasons for this are complicated, and again the American
situation is different to the British one in some important respects, but in
Britain the problem is I think that the digital humanities has failed to
develop its own distinctive intellectual agendas and is still to all intents
and purposes a support service. The digital humanities in Britain has generally
emerged from information service units and has never fully escaped these
origins. Even in units which are defined as academic departments, such as my
own in King’s, the assumption generally is that the leading light in the project
will be an academic in a conventional academic department. The role of the
digital humanities specialists in constructing this project is always at root a
support one. We try and suggest that we are collaborating in new ways, but at
the end of the day a unit like that at King’s is simply an XML factory for
projects led by other researchers. We are interdisciplinary, in that we work
with different departments, but so do other professional services. Departments
like ours can only keep people in work if we constantly secure funding for new
research projects. So we are sitting ducks – if a good academic has a bright
idea for a project, it is difficult to say no, because otherwise someone might
be out of a job. But this means that intellectually, the digital humanities is
always reactive. Above all, it means that it is vulnerable to those subjects,
like classics or medieval studies, which are anxious about their continued relevance
and funding, and which are desperate to demonstrate that their subjects can be
switched on, up to date and digital. The digital humanities has become caught up
in a form of project dependency which will eventually kill it unless it can be
weaned off the collaborative drug.
Now I am a medievalist by training, and,
recovering recently from my broken leg, I realized that there is nothing I
would now like to do so much as spend my time using the remarkable online archive of medieval legal records created by Robert Palmer in Texas. But I
also subscribe strongly to a point of view which sees Super Mario or Coronation
Street or Shrek as just as culturally interesting and significant as Ovid and
Chaucer. It is an article of faith for me that YouTube is just as worthy of
scholarly examination as an illuminated manuscript. One of the stimulating
things about working somewhere like the British Library is that it brings home
just how many amazing forms culture takes. On the shelves of the British
Library, you regularly encounter an ancient potsherd with writing by an Ethiopian merchant
next to a Regency laundry list underneath an Aztec picture manuscript and just
across the corridor from a Fats Waller LP. One of the exciting things about
digital cultures is that they give us access to such an eclectic, boundary-crossing
view of culture, and if our digital humanities fails to embrace such an
inclusive and all-embracing view of culture and of the humanities, then there
will always be a disjunction between the digital humanities and the digital
world it professes to inhabit. But our academic collaborators in classics or
history or even literature will want to keep us close to hand and prevent us
wandering away down such paths. Until we seize control of our own intellectual
agendas, the digital humanities are doomed to be – at best – no more than an
ancillary discipline (the term frequently applied in the past to paleography
and bibliography).
Our stress on collaboration and
interdisciplinarity is our worst enemy. I take pride in having been returned
to three different panels in research assessment exercises, so I feel that I
have really committed personally to interdisciplinarity. However, as far as the
digital humanities are concerned, interdisciplinarity is just a cover for the lack
of a distinctive intellectual agenda. We rarely assemble truly
interdisciplinary teams – Tim Hitchcock’s current collaboration with social
scientists and mathematicians is an exception which proves the rule. Similarly,
team working has become routine with the establishment of research council
funding in the humanities. We are not unusual because we work in teams – it is
the lone scholar which is more of a rarity nowadays. Everyone claims to be
interdisciplinary today, so for the digital humanities to claim this as one of its
distinctive characteristics is to claim nothing.
Another major obstacle preventing the
digital humanities from developing its own scholarly identity is our interest in
method. If we focus on modelling methods used by other scholars, we will simply
never develop new methods of our own. The idea – at the heart of a lot of
thinking about methods, models and scholarly primitives – that a synthesis could
be developed from these methods to produce a sort of alchemical essence of
scholarship is absurd. If we truly believe that digital technologies can be
potentially transformative, the only way of achieving that is by forgetting the
aging rhetoric about interdisciplinarity and collaboration, and starting to do
our own scholarship, digitally. A lot of this will be ad hoc, will pay little
attention to standards, won’t be seeking to produce a service, and won’t worry
about sustainability. It will be experimental.
The starting point is to start
saying no to other people’s projects if they don’t enthuse us. Everyone now
accepts that digital technology is changing scholarship. We don’t need to
convince them and don’t need to embrace as a convert every humanities academic
who thinks that a computer might help. What we need more urgently to do is to
develop our own projects that are innovative, inspiring, and different, rather
than endlessly cranking up what Torsten Reimer has called the digital
photocopier. We might start by seeking closer contact with our colleagues in
Cultural and Media Studies. There is a huge body of scholarship on digital
cultures with which we engage only patchily and which offers us powerful
critical frameworks in articulating our own scholarly programme. One lesson which immediately emerges from
dipping a toe into this burgeoning scholarship is that those of us in the
digital humanities need to engage more with the born digital. Humanities scholars are increasingly studying
the digital, yet the digital humanities (paradoxically) does not get much
involved in this discussion – the huge preponderance of projects concerned with
the period before 1600 is an eloquent declaration that British digital
humanities is mostly not very interested in what is currently happening on the
internet. Again, we might link this to the way in which the digital humanities
has become annexed by a very conservative view of the nature of humanities
scholarship – digital humanities practitioners have too often seen their role
as being responsible for shaping on-line culture and for ensuring the provision
of suitably high-brow material. But this is a futile enterprise as the culture
of the web has exploded. The internet has become a supreme expression of how
culture is ordinary and everywhere, and there is a great deal for us to
explore.
I’m sure you will have seen the videos of very young children instinctively using an iPad or iPhone, which are used to
illustrate how readily children accept the digital (or at least a tablet). But watching a
child playing with an iPad raises a host of other issues about text, record and
memory. My former colleague at Glasgow, Chris Philo, has produced some very thought-provoking papers about the methodological issues posed by recording
childhood activities such as writing and drawing. Since researchers have their
own memories of childhood and frequently parenthood as well, in attempting to
record conversations with children or encouraging children to write and draw,
researchers often impose their own memories of childhood. Correspondingly the
children themselves are eager to please and this will shape their drawing and
writing for adult researchers. Philo raises the question of whether, given such
complex feedback loops, archives of childhood are ever feasible. He asks how we can ever accurately document childhood. He also
suggests that maybe new technology provides an answer. Can we gain a more direct insight into
childhood by recording and analyzing how a three-year old uses an iPad? Maybe
this is the sort of new digital humanities, analyzing the human intersection
with the machine, which we might pursue.
There are also increasing quantities of
born digital materials more recognizable as the conventional stuff of
humanities research. For major disruptive events such as terrorist attacks, our
information has in the past often been largely textual or produced by
professional media, so that the record is often restricted to the
immediate incident. The July 2005
bombings in London were among the first events that were recorded in a variety
of ways: apart from conventional media, there were blog reports, mobile phone images
uploaded on Flickr, SMS traffic, CCTV coverage. What is fascinating in the
reportage of July 7th is the way in which these different media
affect how we can explore the nature and structure of the event.
While there are a few dramatic mobile phone images from the bombed tube trains,
the vast majority of the pictures of the July 7th bombings on Flickr
show the disruption in the streets: people trying to find their way home,
gathering anxiously to get news. The emergency services nervously try to
control the situation; normally busy streets are eerily deserted. This is a
curiously de-centered view of the event. For many people, the memory of 7th
July was one of confusion, waiting and
uncertainty. This is an aspect of such major events which often is not recorded
in conventional media, but one that we can explore here.
The chief issue which emerges from this
material on 7th July is that of memorialization, as a recent special issue of Memory Studies has
discussed. The engagement of people with the events of that day was heavily
mediated in many different ways through technology, and they also sought to
use technology to memorialize and record their experiences on that day.
Different forms of technology created different forms of memorialisation – the
mobile phone interaction (as itself memorialized through Flickr) was very
different to that in blogs or in conventional media. Moreover, the new media
also enabled older informal methods of communication and memorialization to be
recorded. Presumably on other occasions in the past, poignant handwritten
notices and posters had appeared, but generally they were not recorded. However,
the availability and cheapness of mobile phones and digital cameras mean that
this distinctive type of textuality from such disruptive events has been
recorded.
While the digital and textual traces of
the July bombings provide rich material for investigating the memorialization of
major events, this does not mean that our focus needs to be restricted to the
contemporary and recent. One feature of the digital humanities should be that
we provide the historical and critical range and depth to help provide new
contexts for contemporary technologies – we understand that the internet in
some ways is the heir of the thirteenth-century concordance. We might compare the
digital and media traces of July 7th to the way in which earlier
major events such as the Fire of London or the Peasants’ Revolt of 1381 appear
in the media of the period. Major events such as these appear differently when
viewed through the lens of broadside ballads or medieval chronicles. This kind
of historical media studies is one rich area for a future digital humanities.
One major theme which would emerge from
such a study is the intersection between technology and the different types of
human memory and understanding to which we give the overall label of textuality.
The blog is used in a different way to the mobile phone which in turn is used
in a different way to the handwritten poster. These differ from printed ballads
or manuscript chronicles. A fundamental aspect of our engagement with
textuality is a materiality which should be at the heart of the digital
humanities, and which should enable us to bridge the gap between the born
digital and the medieval. Although much cultural commentary on the digital
portrays it as disembodied, flickering, volatile and elusive, digital
technology is as material as (maybe more so than) writing and printing. As Matt
Kirschenbaum has reminded us in the ‘Grammatology of the Hard Drive’ in his book Mechanisms, the
computer in the end comes down to a funny whirring thing that works much like a
gramophone. The internet is not magic; it depends on vast cables protected by
one of the great Victorian discoveries, gutta percha. At Porthcurno Beach in
Cornwall, fourteen cables linked Britain to the rest of the British Empire, and
the internet still comes ashore through cables at Porthcurno.
Katherine Hayles has described how one
of the fundamental issues in the emergence of the post-human derived from
Claude Shannon’s work on improving the quality of telegraph communication over cables
like those at Porthcurno. Shannon found
that on-off signals – bits – could be retrieved more efficiently and accurately
over cables, and he proposed that the information should be treated as separate
from the medium carrying it. He
declared that the ‘fundamental problem of communication
is that of reproducing at one point either exactly or approximately a message
selected at another point’ – in other words, communication science should strip
a message down to those essentials which could be fixed in such a way that it
could be reproduced at a distance. In short, information is about fixing and
attempting to stabilize what are construed as the essential elements. Even at the time, there were complaints that Shannon’s approach
was overly formalistic and that, by ignoring issues like meaning, it was inadequate as the
basis for a theory of communication. But the practical need to improve the
quality of the cable traffic at Porthcurno prevailed. Shannon’s discoveries
form the basis of modern computing, but it by no means follows that in thinking
about the way in which textuality works we should be bound by this model. For
large parts of the humanities, our understanding of the nature of textuality
(in its broadest sense as construing images, video, sound and all other forms
of communication as well as verbal information) is deeply bound up with its
materiality. The interaction between carver and stone is important in
understanding the conventions and structure of different types of inscription.
The craft of the scribe affected the structure and content of the manuscript.
The film director is shaped by the equipment at his disposal. I write differently
when I tweet than when I send an e-mail. Text technologies have a complex
interaction with textuality and thus with the whole of human understanding.
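Whatever its limits as an account of textuality, Shannon's separation of message from medium can be shown in a few lines: once a text is reduced to bits, any channel that preserves those bits will reproduce the message 'at another point', whether the carrier is gutta-percha cable, radio or a hard drive. A minimal illustration:

    def encode(message):
        # Reduce a message to Shannon's essentials: a sequence of bits.
        return ''.join(format(byte, '08b') for byte in message.encode('utf-8'))

    def decode(bits):
        # Reproduce the message 'at another point' from the bits alone.
        data = bytes(int(bits[i:i + 8], 2) for i in range(0, len(bits), 8))
        return data.decode('utf-8')

    bits = encode('fundamental problem of communication')
    # The reconstruction depends only on the bits, never on the carrier.
    assert decode(bits) == 'fundamental problem of communication'

It is precisely this indifference to the carrier that the argument about materiality pushes back against.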
Texts are always unstable, chaotic,
messy and deceptive for a simple reason – because they are human. The only way
in which we can recover and explore this human aspect of the text is by
exploring its materiality. It will never become wholly disembodied data. We can
display information from ships’ logs in geographical form and manipulate it in
a variety of ways, but at the end of the day if the captain had bad writing,
was careless in keeping his log or got drunk for days on end, then the data
will be deceptive. We can get a much better idea of the nature of that log and
the human being behind it by exploring its material nature – were there a lot
of ink blots? Were pages ripped out? Were sections corrected? And it is by
exploring this materiality that we can start to reintegrate the human and the
digital, and develop a view which transcends the post-humanities: one which, while
accepting that technology changes the experience of being human, also
enables us to explore in new ways how different textual objects,
from manuscripts to films, from papyri to tweets, engage with humans and
humanity. It is impossible to come here to Oxford and not mention the name of
Don McKenzie, who taught us how historical bibliography is a means of
exploring the cultural and social context of text. The mission of the digital
humanities should be to bring the vision that McKenzie brought to historical
bibliography to bear on the whole range of textual technologies.
And in pursuing such a new vision of
the study of digital cultures and text technologies, we need to create new
scholarly alliances and new conjunctions. I tried to suggest earlier that our
claim to be distinguished by a commitment to interdisciplinarity is a rather
empty one, and that such claims carry increasingly less weight as
interdisciplinarity becomes more widespread. But I nevertheless believe that
digital humanities is uniquely well placed to create new conjunctions between
the humanities scholar, the curator, the scientist, the librarian and the
artist. A focus on the materiality of text enhances such alliances. After the
notebook of William Harvey was rediscovered, it was noticed how badly faded it
was. The infant science of photography was used to try and enhance the damaged
pages of Harvey’s notes. Likewise, we can use new imaging and scanning
techniques to explore Harvey’s manuscript – we can do much, much more than
simply digitize it, and we should be developing such projects. Similarly,
much of our evidence for understanding how those Dominican monks compiled the first concordance to the
scriptures in the thirteenth century comes from discarded manuscript fragments they
used in listing the words. We could imagine a project which imaged those
fragments and reintegrated them to understand the working methods of the
compilers of the concordance. But, in doing so, our aims should not simply be
to help breathe new life into medieval studies. We should be seeking to develop
new technologies and new science as a result of this work. We should be seeking
to provide new perspectives on the way in which technology interacts with text.
And in so doing we provide new perspectives on what it means to be human.



