Friday, March 14, 2014

Wikilinks in FromThePage

From March 10-12, I got to participate in the iDigBio Original Sources Digitization Workshop, a gathering of natural history collections managers, archivists, and technologists. Although the focus of digitization within natural history has been on specimens or specimen labels, this workshop sought to address the challenges and opportunities involved in digitizing ledgers, field notes, and other non-specimen data. As usual for iDigBio events, the workshop was spectacular.

Carolyn Sheffield chaired a panel (video recording) on crowdsourcing which included Rob Guralnick discussing Notes From Nature, Christina Fidler talking about the Grinnell field notes on FromThePage, my talk, and a long, valuable discussion among all participants. My presentation covered the data model behind wiki links and the ways I'm using them in FromThePage.

Video, slides, and transcript are below:

"From The Page" - Ben Brumfield from iDigBio on Vimeo.
I'm Ben Brumfield.  You saw a little bit about FromThePage in Christina Fidler's presentation, so I wanted to talk about the internals -- the design and the data structures behind some of the things that make this a little bit different from NotesFromNature or the NARA Transcribr Drupal module.
This is the transcription screen.  You've seen this with Christina, so I'll probably go over this pretty quickly.  This is a full-text transcription, not individual records like you get with Notes From Nature. 
The reason for that is that FromThePage was built to be a wiki-like tool, purpose-built for creating amateur editions.  So we've got a text and we want to create an edition from the text that can then be re-used, printed, and analyzed.

I say "amateur" editions because we're not dealing with the kinds of things that textual scholars in the humanities are dealing with, where they're trying to compare different variant manuscript versions of Chaucer.  [By contrast, we] have something that's very straightforward, and we're interested in some fairly simple annotations.

It's purpose-built -- free-standing on MySQL and Ruby on Rails, so it's not integrated with MediaWiki or anything like that.
So who's using it?

[FromThePage] was built originally for a set of my great-great grandmother's diaries.

Since then it's been used for military diaries by libraries and history departments.
It's been used for literary diaries--in this case for Shelby Foote's diaries--for literary drafts, and for punk rock fanzines.  (Which is kind of awesome!)
So what does that have to do with the people in this room and the kind of material [we're working with]?

Here's an example:  This is an 1859 journal from an expedition in which someone went out and made a number of observations and collected some things to bring back with them.  There are scholars interested in mining those.

But it's not a naturalist expedition.  This is Viscountess Emily Anne Smyth Strangford, who in this case is touring the Mediterranean and visiting a lot of classical monuments.  The folks at the Duke Computational Classics Collaboratory are interested in finding all the places in which she recorded Latin and Greek inscriptions, coming up with her itinerary, and figuring out how [that data] connects to the objects her father-in-law had collected for the British Museum twenty years earlier.

So there's a lot of correspondence, I tend to think, with field notes.
The San Diego Natural History Museum started using FromThePage for field books in 2010.  They're still working on the project.
  • They've identified ten thousand subjects worth classifying in their system.
  • Individual pages have been edited twenty-four thousand times.  And this goes back to the wiki-like approach -- people transcribe a page, and then they revisit it. They make a number of edits to a page as they get comfortable with the handwriting.
  • And then they've linked individual observations, species mentioned, and people in the field notes to those subjects forty-two thousand times.
Then there are a couple of other projects working with field notes.  [Museum of Vertebrate Zoology] obviously is in trial, and [the Museum of Comparative Zoology] and Missouri Botanical Gardens are just evaluating the software right now.  
So, what is a wiki link?

Any of us who've edited Wikipedia may be used to this.  I followed the same syntax [in FromThePage].

What we have here is a set of double square brackets with the canonical name of the subject--this could be a formatted date, this could be a full name that's spelled out--and then the text that's actually used within the verbatim transcript.

So our example here -- this is when Grinnell meets Klauber.  The field note actually says "L. M. Klauber", so the person transcribing has expanded this out to "Laurence M. Klauber".  So we have the ability to handle variance in references to Klauber, but still identify them as Klauber.
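(An aside for the technically curious: a link like that is simple enough to parse.  The regular expression and method below are an illustrative sketch, not FromThePage's actual parsing code.)

    # Parse [[Canonical Name|verbatim text]] or [[Canonical Name]] wiki links.
    WIKI_LINK = /\[\[([^\[\]|]+)(?:\|([^\[\]]+))?\]\]/

    def parse_wiki_links(text)
      text.scan(WIKI_LINK).map do |canonical, verbatim|
        { canonical: canonical.strip, verbatim: (verbatim || canonical).strip }
      end
    end

    parse_wiki_links("Went out collecting with [[Laurence M. Klauber|L. M. Klauber]].")
    # => [{:canonical=>"Laurence M. Klauber", :verbatim=>"L. M. Klauber"}]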
Technically speaking, what's behind one of these wiki links?

There are a lot of tables in this database.
  • We know that there's this page that Klauber is mentioned on.  It's S1 Page 3 in the Grinnell field notes that MVZ has online.
  • We've got a subject which is Laurence M. Klauber.
  • The subject is categorized as a person, which can be used for analysis and filtering, like Christina showed you.
  • And then the individual link between the page and the subject, which contains the verbatim variant, is also stored.
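In ActiveRecord terms, those relationships look roughly like the sketch below.  The model and column names are assumptions chosen for illustration; the real schema differs in its details.

    class Page < ActiveRecord::Base
      has_many :page_subject_links
      has_many :subjects, through: :page_subject_links
    end

    class Subject < ActiveRecord::Base           # e.g. "Laurence M. Klauber"
      belongs_to :category                       # e.g. "Person", used for filtering
      has_many :page_subject_links
      has_many :pages, through: :page_subject_links
    end

    class PageSubjectLink < ActiveRecord::Base   # one row per mention on a page
      belongs_to :page
      belongs_to :subject
      # display_text holds the verbatim variant, e.g. "L. M. Klauber"
    end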
So there are a lot of things you can do with that.
  • You can show all the pages that mention Laurence M. Klauber, and read the pages in context or just get a listing of them.
  • More helpfully, as you're transcribing we can mine those links to automatically suggest mark-up.  So the next time we encounter "L. M. Klauber", we can push a button and that will automatically expand the mark-up of "L. M. Klauber" to "[[Laurence M. Klauber|L. M. Klauber]]".
  • You can also feed this to full-text searches.  So if you've got a lot of plain-text transcripts which contain Laurence M. Klauber, we can automatically populate the search with those variations, creating an OR query over "Klauber", "L. M. Klauber", and "Laurence M. Klauber".
  • And then we can mine the mark-up for correspondences [between subjects] as Christina showed.
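Two of those uses can be sketched in a few lines of Ruby against the assumed schema above -- again an illustration, not the production code:

    # Suggest mark-up for a verbatim string that has been linked before.
    def suggest_markup(verbatim)
      link = PageSubjectLink.includes(:subject).find_by(display_text: verbatim)
      link ? "[[#{link.subject.title}|#{verbatim}]]" : verbatim
    end

    suggest_markup("L. M. Klauber")
    # => "[[Laurence M. Klauber|L. M. Klauber]]"

    # Expand a full-text search into an OR query over every recorded variant.
    def search_terms_for(subject)
      ([subject.title] + subject.page_subject_links.pluck(:display_text)).uniq
    end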

The last thing you can do with it is export.
Here is a TEI-XML export of the Joseph Grinnell notes.  This is useful for interchange, but the most important thing it does is allow amateurs to create well-formed, TEI P5-compliant XML.  And it handles one of the things that's very hard about creating TEI in an XML editor, which is associating reference strings with their entries over in the TEI header, which describes who the people are outside the text.
This is a CSV export of the Grinnell field notes.  Basically this is every observation and every person who's mentioned, exported as a CSV file with links back to the pages and URLs at which those pages can be found.  This is the kind of thing that perhaps could be ingested into [museum collection management database] Arctos.
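The shape of that export is simple enough to sketch.  The column names and URL pattern below are illustrative assumptions, not the exact format FromThePage produces.

    require "csv"

    def export_mentions(work, base_url)
      CSV.generate do |csv|
        csv << ["Subject", "Category", "Verbatim text", "Page", "Page URL"]
        work.pages.each do |page|
          page.page_subject_links.each do |link|
            category = link.subject.category
            csv << [link.subject.title, category && category.title,
                    link.display_text, page.title,
                    "#{base_url}/display/#{page.id}"]
          end
        end
      end
    end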
Future plans:

We're going to be doing more CMS integrations.  We're working on Omeka.  The Internet Archive integration is done.  There are a couple of grant applications that involve hooking FromThePage up to Fedora Commons.

We also really want to contextualize links in time and place.  We want the ability for people to define where the person writing the journal was at the time they were writing, and then to apply those geotags and chronotags to the references.  So you could map when species were mentioned.  You could extract a visual itinerary.

We need more formatting options.  One of our volunteers has found all kinds of crazy editorial issues for handling strike-outs and things like that.

And the last thing that we're looking for is more projects.

Tuesday, December 31, 2013

Code and Conversations in 2013

It's often hard to explain what it is that I do, so perhaps a list of what I did will help.  Inspired by Tim Sherratt's "talking" and "making" posts at the end of 2012, here's my 2013. 

Code

I work on a number of software projects, whether as contract developer, pro bono "code fairy", or product owner.  

FromThePage

It's been a big year for FromThePage, my open-source tool for manuscript transcription and annotation.  We started work upgrading the tool to Rails 3, and built a TEI Export (see discussion on the TEI-L) and an exploratory Omeka integration.  Several institutions (including University of Delaware and the Museum of Vertebrate Zoology) launched trials on FromThePage.com for material ranging from naturalist field notes to Civil War diaries.  Pennsylvania State University joined the ranks of on-site FromThePage installations with their "Zebrapedia", transcribing Philip K. Dick's Exegesis -- initially as a class project and now as an ongoing work of participatory scholarship.

One of the most interesting developments of 2013 was that customizations and enhancements to FromThePage were written into three grant applications.  These enhancements--if funded--would add significant features to the tool, including Fedora integration, authority file import, redaction of transcripts and facsimiles, and support for externally-hosted images.  All these features would be integrated into the FromThePage source, benefiting everybody.

Two other collaborations this year promise interesting developments in 2014.  The Duke Collaboratory for Classics Computing (DC3) will be pushing the tool to support 19th-century women's travel diaries and Byzantine liturgical texts, both of which require more sophisticated encoding than the tool currently supports.  (Expect Unicode support by Valentine's Day.)  The Austin Fanzine Project will be using a new EAC-CPF export which I'll deliver by mid-January.

OpenSourceIndexing / FreeREG 2

Most of my work this year has been focused on improving the new search engine for the twenty-six million church register entries the FreeREG organization has assembled in CSV files over the last decade and a half.  In the spring, I integrated the parsed CSV records into the search engine and converted our ORM to Mongoid.  I also launched the Open Source Indexing Github page to rally developers around the project and began collecting case studies from historical and genealogical organizations.

In May, I built a parser for historical dates into the search engine I'm building for FreeREG.  It handles split dates like "4 Jan 1688/9" and illegible date portions in UCF like "4 Jan 165_", preserving the verbatim transcription while handling searching and sorting correctly.  Eventually I'll incorporate this into an antique_date gem for general use.
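As an illustration of the split-date case alone (UCF underscores need more machinery), a toy version looks something like this -- not the FreeREG parser itself:

    require "date"

    # Handle a split Old Style/New Style year like "4 Jan 1688/9": keep the
    # verbatim string, but sort on the later (New Style) year.
    SPLIT_DATE = /\A(\d{1,2})\s+([A-Za-z]+)\s+(\d{4})(?:\/(\d{1,2}))?\z/

    def parse_historical_date(verbatim)
      m = SPLIT_DATE.match(verbatim) or return { verbatim: verbatim, sortable: nil }
      day, month, year, split = m.captures
      year = year[0...-split.length] + split if split
      { verbatim: verbatim, sortable: Date.parse("#{day} #{month} #{year}") }
    end

    parse_historical_date("4 Jan 1688/9")[:sortable].to_s
    # => "1689-01-04"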

Most of the fall was spent adding GIS search capabilities to the search engine.   In fact, my last commit of the year added the ability to search for records within a radius of a place.  The new year will bring more developments on GIS features, since an effective and easy interface to a geocoded database is just as big a challenge as the geocoding logic itself.
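For the curious, the radius search boils down to a geospatial query along these lines; the document fields here are assumptions for illustration, and the real FreeREG models are more involved.

    class SearchRecord
      include Mongoid::Document
      field :location, type: Array           # [longitude, latitude]
      index({ location: "2dsphere" })
    end

    # Records within a given radius (in miles) of a point.  $centerSphere takes
    # its radius in radians, hence the division by the earth's radius in miles.
    def records_near(lon, lat, miles)
      SearchRecord.where(location: {
        "$geoWithin" => { "$centerSphere" => [[lon, lat], miles / 3963.2] }
      })
    end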

Other Projects

In January I added a command-line wrapper to Autosplit, my library for automatically detecting the spine in a two-page flatbed scan and splitting the image into recto and verso halves.  In addition to making the tool more usable, it also added support for notebook-bound books which must be split top-to-bottom rather than left-to-right.
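The core idea can be caricatured with RMagick: treat the darkest vertical band near the middle of the scan as the gutter, then crop on either side of it.  This is a sketch of the approach, not Autosplit's actual algorithm, which is considerably more careful.

    require "rmagick"

    def split_at_spine(path)
      img = Magick::Image.read(path).first
      middle = ((img.columns / 3)...(2 * img.columns / 3))
      # Pick the column with the lowest total intensity as the spine candidate.
      spine_x = middle.min_by { |x| img.export_pixels(x, 0, 1, img.rows, "I").sum }
      verso = img.crop(0, 0, spine_x, img.rows)
      recto = img.crop(spine_x, 0, img.columns - spine_x, img.rows)
      [verso, recto]
    end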

For the iDigBio Augmenting OCR Hackathon in February, I worked on two exploratory software projects.  HandwritingDetection (code, write-up) analyzes OCR text to look for patterns characteristically produced when OCR tools encounter handwriting.    LabelExtraction (code, write-up) parses OCR-generated bounding boxes and text to identify labels on specimen images.  To my delight, in October part of this second tool was generalized by Matt Christy at the IDHMC to illustrate OCR bounding boxes for the eMOP project's work tuning OCR algorithms for Early Modern English books.
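The handwriting heuristic can be reduced to a caricature in a few lines: OCR tends to render handwriting as short "words" full of non-alphabetic characters.  The threshold and test below are invented for illustration; the real rules are in the write-up linked above.

    def probably_handwritten?(ocr_text, garbage_threshold = 0.4)
      words = ocr_text.split
      return false if words.empty?
      garbage = words.count { |w| w.length < 3 || w =~ /[^A-Za-z'\-]/ }
      garbage.fdiv(words.size) > garbage_threshold
    end

    probably_handwritten?("cu,' f7 ~i4 )vL..e lf-")                        # => true
    probably_handwritten?("Specimens collected along the Colorado River")  # => false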

In June and July, I started working on the Digital Austin Papers, contract development work for Andrew Torget at the University of North Texas.  This was what freelancers call a "rescue" project, as the digital edition software had been mostly written but was still in an exploratory state when the previous programmer left.  My job was to triage features, then turn off anything half-done and non-essential, complete anything half-done and essential, and QA and polish core pieces that worked well.  I think we're all pretty happy with the results, and hope to push the site to production in early 2014.  I'm particularly excited about exposing the TEI XML through the delivery system as well as via GitHub for bulk re-use.

Also in June, I worked on a pro bono project with the Civil War-era census and service records from Pittsylvania County, Virginia which were collected by Jeff McClurken in his research.  My goal is to make the PittsylvaniaCivilWarVets database freely available for both public and scholarly use.   Most of the work remaining here is HTML/CSS formatting, and I'd welcome volunteers to help with that. 

In November, I contributed some modifications to Lincoln Mullen's Omeka client for Ruby.  The client should now support read-only interactions with the Omeka API for files, as well as being a bit more robust.

December offered the opportunity to spend a couple of days building a tool for reconciling multi-keyed transcripts produced from the NotesFromNature citizen science UI.  One of the things this effort taught me was how difficult it is to find corresponding transcripts to reconcile -- a very different problem from reconciliation itself.  The project itself is over, but ReconciliationUI is still deployed on the development site.

Conversations

February 13-15 -- iDigBio Augmenting OCR Hackathon at the Botanical Research Institute of Texas.  "Improving OCR Inputs from OCR Outputs?" (See below.)

February 26 -- Interview with Ngoni Munyaradzi of the University of Cape Town.  See our discussion of his work with Bushman languages of southern Africa.

March 20-24 -- RootsTech in Salt Lake City.  "Introduction to Regular Expressions"

April 24-28 -- International Colloquium Itinera Nova in Leuven, Belgium.  "Itinera Nova in the World(s) of Crowdsourcing and TEI". 

May 7-8 -- Texas Conference on Digital Libraries in Austin, Texas.  I was so impressed with TCDL when Katheryn Stallard and I presented in 2012 that I attended again this year.  While I was disappointed to miss Jennifer Hecker's presentation on the Austin Fanzine Project, I was so impressed with Nicholas Woodward's talk in the same time slot that I talked him into writing it up as a guest post.

May 22-24 -- Society of Southwestern Archivists Meeting in Austin, Texas.  On a fun panel with Jennifer Hecker and Micah Erwin, I presented "Choosing Crowdsourced Transcription Platforms"

July 11-14 -- Social Digital Scholarly Editing at the University of Saskatchewan.  A truly amazing conference.  My talk: "The Collaborative Future of Amateur Editions".

July 16-20 -- Digital Humanities at the University of Nebraska, Lincoln.  Panel "Text Theory, Digital Document, and the Practice of Digital Editions".  My brief talk discussed the importance of blending both theoretical rigor and good usability into editorial tools.

July 23 -- Interview with Sarah Allen, Presidential Innovation Fellow at the Smithsonian Institution.  Sarah's notes are at her blog Ultrasaurus under the posts "Why Crowdsourced Transcription?" and "Crowdsourced Transcription Landscape".

September 12 -- University of Southern Mississippi. "Crowdsourcing and Transcription".  An introduction to crowdsourced transcription for a general audience.

September 20 -- Interview with Nathan Raab for Forbes.com.  Nathan and I had a great conversation, although his article "Crowdsourcing Technology Offers Organizations New Ways to Engage Public in History" was mostly finished by that point, so my contributions were minor.  His focus on the engagement and outreach aspects of crowdsourcing and its implications for fundraising is one to watch in 2014.

September 25 -- Wisconsin Historical Society.  "The Crowdsourced Transcription Landscape".  Same presentation as USM, with minor changes based on their questions.  Contents: 1. Methodological and community origins.  2. Volunteer demographics and motivations.  3. Accuracy.  4. Case study: Harry Ransom Center Manuscript Fragments.  5. Case study: Itinera Nova at Stadsarchief Leuven.

September 26-27 -- Midwest Archives Conference Fall Symposium in Green Bay, Wisconsin.  "Crowdsourcing Transcription with Open Source Software".  1. Overview: why archives are crowdsourcing transcription.  2. Selection criteria for choosing a transcription platform.  3. On-site tools: Scripto, Bentham Transcription Desk, NARA Transcribr Drupal Module, Zooniverse Scribe.  4. Hosted tools deep-dive: Virtual Transcription Laboratory, Wikisource, FromThePage.

October 9-10 -- THATCamp Leadership at George Mason University.  In "Show Me Your Data", Jeff McClurken and I talked about the issues that have come up in our collaboration to put online the database he developed for his book, Take Care of the Living.  See my summary or the expanded notes.

November 1-2 -- Texas State Genealogy Society Conference in Round Rock, Texas.  Attempting to explore public interest in transcribing their own family documents, I set up as an exhibitor, striking up conversations with attendees and demoing FromThePage.  The minority of attendees who possessed family papers were receptive, and in some cases enthusiastic about producing amateur editions.  Many of them had already scanned in their family documents and were wondering what to do next.  That said, privacy and access control was a very big concern -- especially with more recent material which mentioned living people.

November 7 -- THATCamp Digital Humanities & Libraries in Austin, Texas. Great conversations about CMS APIs and GIS visualization tools.

November 19-20 -- Duke University.  I worked with my hosts at the Duke Collaboratory for Classics Computing to transcribe a 19th-century travel diary using FromThePage, then spoke on "The Landscape of Crowdsourcing and Transcription", an expansion of my talks at USM and WHS.  (See a longer write-up and video.)

December 17-20 -- iDigBio Citizen Science Hackathon.  Due to schedule conflicts, I wasn't able to attend this in person, but followed the conversations on the wiki and the collaborative Google docs.  For the hackathon, I built ReconciliationUI, a Ruby on Rails app for reconciling different NotesFromNature-produced transcripts of the same image on the model of FamilySearch Indexing's arbitration tool.

2014

All these projects promise to keep me busy in the new year, though I anticipate taking on more development work in the summer and fall.  If you're interested in collaborating with me in 2014--whether to give a talk, work on a software project, or just chat about crowdsourcing and transcription--please get in touch.

Saturday, November 23, 2013

"The Landscape of Crowdsourcing and Transcription" at Duke University

I spent part of this week at Duke University with the Duke Collaboratory for Classics Computing -- Josh Sosin, Hugh Cayless, and Ryan Baumann. We discussed ideas for mobile epigraphy applications, argued about text encoding, and did some hacking. We loaded an instance of FromThePage onto the DC3's development machine and seeded it with the 1859 journal of Viscountess Emily Anne Beaufort Smyth Strangford (part of Duke Libraries' amazing collection of Women's Travel Diaries). Transcribing six pages of her tour through Smyrna and Syria together suggested some exciting enhancements for the transcription tool, revealing a few bugs along the way. I'm really looking forward to collaborating with the DC3 on this project.

On Wednesday, I gave an introductory talk on crowdsourced manuscript transcription at the Perkins Library: "The Landscape of Crowdsourcing and Transcription":
One of the most popular applications of crowdsourcing to cultural heritage is transcription. Since OCR software doesn’t recognize handwriting, human volunteers are converting letters, diaries, and log books into formats that can be read, mined, searched, and used to improve collection metadata. But cultural heritage institutions aren’t the only organizations working with handwritten material, and many innovations are happening within investigative journalism, citizen science, and genealogy.
This talk will present an overview of the landscape of crowdsourced transcription: where it came from, who’s doing it, and the kinds of contributions their volunteers make, followed by a discussion of motivation, participation, recruitment, and quality controls.
The talk and visit got a nice write-up in Duke Today, which includes this quote by Josh Sosin:
Sosin said that although many students and professors visit the library's collections and partially transcribe the sources that are pertinent to their research, nearly all of these transcripts disappear once the researchers leave the library.
"Scholars or students come to the Rubenstein, check out these precious materials, they transcribe and develop all sorts of interesting ideas about them," Sosin said. "Then they take their notebooks out of the library and we lose all the extra value-added materials developed by these students. If we can host a platform for students and scholars to share their notes and ideas on our collections, the library's base of knowledge will grow with every term paper or book that our scholars produce."
Video of "The Landscape of Crowdsourcing and Transcription" (by Ryan Baumann):

Slides from the talk:



Previous versions of this talk were delivered at University of Southern Mississippi (2013-09-12) and the Wisconsin Historical Society (2013-09-25). It differs substantially in the discussion of quality control mechanisms (on the video from 26:15 through 31:30, slides 37-40), an addition which was suggested by questions posed at USM and WHS.

Friday, October 25, 2013

Feature: TEI-XML Export

How do you get the data out?

This is a question I hear pretty often, particularly from professional archivists.  If an institution and its users have put the effort into creating digital editions on FromThePage, how can they pull the transcripts out of FromThePage to back them up, repurpose them, or import them into other systems?

This spring, I created an XHTML exporter that will generate a single-page XHTML file containing transcripts of a work's pages, their version history, all articles written about subjects within the work, and internally-linked indices between subjects and pages.  Inspired by conversations at the TEI and SDSE conferences and informed by my TEI work for a client project, I decided to explore a more detailed export in TEI.

This is the result, posted on github for discussion:
https://gist.github.com/benwbrum/6933615
Zenas Matthews' Mexican War Diary was scanned and posted by Southwestern University's Smith Library Special Collections.  It was transcribed, indexed, and annotated by Scott Patrick, a retired petroleum worker from Houston.

https://gist.github.com/benwbrum/6933603
Julia Brumfield's 1919 Diary was scanned and posted by me, transcribed largely by volunteer Linda Tucker, and indexed and annotated by me.

I requested comment on the TEI mailing list (see the thread "Draft TEI Export from FromThePage"), and got a lot of really helpful, generous feedback both on- and off-list.  It's obvious that I've got more work to do for certain kinds of texts--which will probably involve creating a section header notation in my wiki mark-up--but I'm pretty pleased with the results.


One of the most exciting possibilities of TEI export is interoperability with other systems.  I'd been interested in pushing FromThePage editions to TAPAS, but after I posted the TEI-L announcement, Peter Robinson pulled some of the exports into Textual Communities.  We're exploring a way to connect the two systems, which might give editors the opportunity to do the sophisticated TEI editing and textual scholarship supported by Textual Communities starting from the simple UI and powerful indexing of FromThePage.   I can imagine an ecosystem of tools good at OCR correction, genetic mark-up, display and analysis of correspondence, amateur-accessible UIs, or preservation -- all focusing on their strengths and communicating via TEI-XML.


I'm interested in more suggestions for ways to improve the exports, new things to do with TEI, or systems to explore integration options before I deploy the export feature on production. 

Sunday, October 20, 2013

A Gresham's Law for Crowdsourcing and Scholarship?

This is a comment I wanted to make at Neil Fraistat's "Participatory DH" session (proposal, notes) at THATCamp Leadership, but ended up having on twitter instead.

Much of the discussion in the first half of the session focused on the qualitative difference between the activities we ask amateurs to do and the activities performed by scholars.  One concern voiced was that we're not asking "citizen scholars" to do real scholarly work, and then labeling their activity scholarship -- a concern I share with regard to editing.  If most crowdsourcing projects ask amateurs to do little more than wash test tubes, where are the projects that solicit scholarly interpretation?

The Harry Ransom Center's Manuscript Fragments Project is just such a crowdsourcing project, and I think the results may be disquieting.  In this project, fragments of medieval manuscripts reused as binding for printed books are photographed and posted on Flickr.  Volunteers use the comments to identify the fragments, discussing the scribal hand and researching the source texts. I'd argue that while this does not duplicate the full range of an academic medievalist's scholarly activities, it's certainly not just "bottle-washing" either.

The project has been very successful.  (See organizer Micah Erwin's talks for details.)  Most of the contributions to the project have been made on Flickr in the comments by a few "super volunteers" -- retired rare book dealers and graduate students among them.  However, around 20% of the identifications were made by professional medievalists who learned about the project, visited the Flickr site, and then called or emailed the project organizer.  None of their contributions were made on the public Flickr forum at all.

So why did professional scholars avoid contributing in public?  I related this on Twitter, and got some interesting suggestions.
Many of these suggest a sort of Gresham's Law of crowdsourcing, in which inviting the public to participate in an activity lowers that activity's status, driving out professionals concerned with their reputation. 

There's a more reassuring explanation as well -- many people with domain expertise still aren't very comfortable with technology.  Asking them to use a public forum puts additional pressure on them, as any mistakes typing, encoding, and using the forum will be public and likely permanent.  This challenge is not confined to professionals, either -- I receive commentary on the Julia Brumfield Diaries via email from people without high school degrees, who have no professional reputation to protect.

Wednesday, July 24, 2013

University of Delaware and Cecil County Historical Society on FromThePage

Over the last few months, the University of Delaware and the Cecil County Historical Society have been using FromThePage to transcribe the diary of a minister serving in the American Civil War.  They're using the project to expose undergraduates to primary sources while also improving access to an important local history document.

The county has documented the process with an extensive post on the Cecil County Historical Society Blog, which was picked up by the Cecil Daily.

The university also put together a lovely video providing background on the project and interviewing students and faculty members involved in the project:



One of the things I find most interesting about the project is the collaboration between digital humanities-focused university faculty and the county historical society:
Kasey Grier, director of the Museum Studies Program and the History Media Center at the university, says the transcription will be done by students in a process called “crowd sourcing.”

“Crowd sourcing,” according to Grier, “is when students in remote locations, review the handwritten text and try their hand at transcribing it. They then submit their contributions which are reviewed and put up online. Eventually, all of the diary entries will be available for anyone to access and read.”
Historical Society of Cecil County President Paul Newton says the society welcomes this collaboration with the University of Delaware and hopes to strengthen it because it broadens the society’s horizons and reach.
“The university’s focus is in the area of the digital humanities, which allows us to take largely unused and un-accessed collections and get the material out to a broader audience for study. It is also a preservation method as it reduces handling and makes interpretation much easier,” Grier said.
 You can see the Joseph Brown Diary and the students' work on it at the project site on FromThePage.com.

Saturday, July 13, 2013

The Collaborative Future of Amateur Editions

This is the transcript of my talk at Social Digital Scholarly Editing at the University of Saskatchewan in Saskatoon on July 11 2013.
I'm Ben Brumfield.  I'm not a scholarly editor, I'm an amateur editor and professional software developer.  Most of the talks that I give talk about crowdsourcing, and crowdsourcing manuscript transcription, and how to get people involved. I'm not talking about that today -- I'm here to talk about amateur editions.

So let's talk about the state of amateur editions as it was, as it is now, as it may be, and how that relates to the people in this room.
Let's start with a quote from the past.  This was written in 1996, representing what I think may be a familiar sort of consensus [scholarly] opinion about the quality of amateur editions, which can be summed up in the word "ewww!"
So what's going on now?  Before I start looking at individual examples of amateur editions, let's define--for the purpose of this talk--what an amateur edition is.

Ordinarily people will be talking about three different things:
  • They can be talking about projects like Paul's, in which you have an institution who is organizing and running the project, but all the transcription, editing, and annotation is done by members of the public.
  • Or, they can be talking about organizations like FreeREG, a client of mine which is a genealogy organization in the UK which is transcribing all the parish registers of baptisms, marriages, and burials from the Reformation up to 1837.  In that case, all the material--all the documents--are held at local records offices and archives, who in many cases are quite hostile to the volunteer attempt to put these things online.  Nevertheless, over the last fifteen years, they've managed to transcribe twenty-four million of these records, and are still going strong.
  • Finally, amateur run editions of amateur-held documents.  These are cases like me working on my great-great grandmother's diaries, which is what got me into this world [of editing].
I'm going to limit that [definition] slightly and get rid of crowdsourcing.  That's not what I want to talk about right now.  I don't want to talk about projects that have the guiding hand of an institutional authority, whether that's an archive or a [scholarly] editor.
So let's take a look at amateur editions.  Here's a site called Soldier Studies.  Soldier Studies is entirely amateur-run.  It's organized by a high-school history teacher who got really involved in trying to rescue documents from the ephemera trade.
The sources of the transcripts of correspondence from the American Civil War are documents that are being sold on E-Bay.  He sees the documents that are passing through--and many of them he recognizes as important, as an amateur military historian--and he says, I can't purchase all of these, and I don't belong to an institution that can purchase them. Furthermore, I'm not sure that it's ethical to deal in this ephemera trade--there is some correlation to the antiquities trade--but wouldn't it be great if we could transcribe the documents themselves and just save those, so that as they pass from a vendor to a collector, some of the rest of us can read what's on these documents?
So he set up this site in which users who have access to these transcripts can upload letters.  They upload these transcripts, and there's some basic metadata about locations and subjects that makes the whole thing searchable.  
But the things that I think people in here--and I myself--will be critical about are the transcription conventions that he chose, which are essentially none.  He says, correspondence can be entered as-is--so maybe you want to do a verbatim transcript, but maybe not--and the search engines will be able to handle it.

A little bit more shocking is that -- you know, he's dealing with people who have scans--they have facsimile images--so he says, we're going to use that.  Send us the first page, so that we know that you're not making this piece of correspondence up completely, fabricating it out of whole cloth. 
So that's not a facsimile edition, and we don't have transcription conventions.  He has this caveat, in which he explains that this [site] is reliable because we have "the first page of the document attached to the text transcription as verification that it was transcribed from that source."  So you'll be able to read one page of facsimile from this transcript you have.  We do our best, we're confident, so use them with confidence, but we can't guarantee that things are going to be transcribed validly.

Okay, so how much use is that to a researcher? 
This puts me in the mind of Peter Shillingsburg's "Dank Cellar of Electronic Texts", in which he talks about the world "being overwhelmed by texts of unknown provenance, with unknown corruptions, representing unidentified or misidentified versions."

He's talking about things like Project Gutenberg, but that's pretty much what we're dealing with right here.  How much confidence could a historian place in the material on this site?  I'm not sure.

Here's an example of an amateur edition which is in a noble cause, but which is really more ammunition for the earlier quote.
So what about amateur editions that are done well?  This is the Papa's Diary Project, which is a 1924 diary of a Jewish immigrant to New York, transcribed by his grandson.

What's interesting about this -- he's just using Blogger, but he's doing a very effective job of communicating to his reader:
So here is a six-word entry.  We have the facsimile--we can compare and tell [the transcript] is right: "At Kessler's Theater.  Enjoyed Kreuzer Sonata."

So the amateur who's putting this up goes through and explains what Kessler's theater is, who Kessler was.
Later on down in that entry, he explains that Kessler himself died, and the Kreuzer Sonata is what he died listening to.  Further down the page you can listen to the Kreuzer Sonata yourself.

So he's taken this six-word diary entry and turned it into something that's fascinating, compelling reading.  It was picked up by the New York Times at one point, because people got really excited about this.
Another thing that amateurs do well is collaborate.  Again: Papa's Diary Project.  Here is an entry in which the diarist transcribed a poem called "Light". 
Here in the comments to that entry, we see that Jerroleen Sorrensen has volunteered: Here's where you can find [the poem] in this [contemporary] anthology, and, by the way, the title of the poem is not "Light", but "The Night Has a Thousand Eyes".

So we have people in the comments who are going off and doing research and contributing.
I've seen this myself.  When I first started work on FromThePage, my own crowdsourced transcription tool, I invited friends of mine to do beta testing.

I started off with an edition that I was creating based on an amateur print edition of the same diary from fifteen years previously.

If you look at this note here, what you see is Bryan Galloway looking over the facsimile and seeing this strange "Miss Smith sent the drugg... something" and correcting the transcript--which originally said "drugs"--saying, Well actually that word might be "drugget", and "drugget" is, if you look on Wikipedia, is a coarse woolen fabric.  Which--since it's January and they're working with [tobacco] plant-beds--that's probably what it is.

Well, I had no idea--nobody who's read this had any idea--but here's somebody who's going through and doing this proofreading, and he's doing research and correcting the transcription and annotating at the same time.
Another thing that volunteers do well is translate.  This is the Kriegstagebuch von Dieter Finzen, who was a soldier in World War I, and then was drafted in World War II.  This is being run by a group of volunteers, primarily in Germany.

What I want to point out is, that here is the entry for New Year's Day, 1916.  They originally post the German, and then they have volunteers who go online and translate the entry into English, French, and Italian.

So now, even though my German is not so hot, I can tell that they were stuck drinking grenade water.
So, what's the difference?

What's the difference between things that amateurs seem to be doing poorly, and things that they're doing well?

I think that it comes down to something that Gavin Robinson identified in a blog post that he wrote about six years ago about the difference between professional historians/academic historians and amateur historians.  What he essentially says is that professionals--particularly academics, but most professionals--are particularly concerned with theory.  They're concerned with their methodologies and with documenting their methodologies.

This is something that amateurs, in many cases, are not concerned with -- don't know exists -- maybe have never even been exposed to.
So, based on that, let's talk about the future.

How can we get amateurs--doing amateur editions on their own--to move from the things that they're doing well and poorly to being able to do everything well that's relevant to researchers' needs?

I see three major challenges to high-quality amateur editions.

The first one is one which I really want to involve this community in, which is ignorance of standards.  The idea that you might actually include facsimiles of every page with your transcription -- that's a standard.  I'm not talking about standards like TEI -- I'd love for amateur editions to be elevated to the point that print editions were in 1950 -- we're just talking about some basics here.

Lack of community and lack of a platform.
So let's talk about standards.

How does an amateur learn about editorial methodologies?  How do they learn about emendations?  How do they learn about these kinds of things?

Well, how do they learn about any other subject?  How do they learn about dendrochronology if they're interested in measuring tree rings? 
Wikipedia!

Let's go check out Wikipedia!
Wikipedia has a problem for most subjects, which is that Wikipedia is filled with jargon.  If you look up dendrochronology, you don't really have a starting place, a "how to".  If you look up the letter X, you get this wonderful description of how 'X' works in Catalan orthography, but it presupposes you being familiar with the International Phonetic Alphabet, and knowing that that thing which looks like an integral sign is actually the 'sh' sound.

Now if amateurs are trying to do research on scholarly editing and documentary editing in Wikipedia, they have a different problem:
There's nothing there. There's no article on documentary editing.
There's no article on scholarly editing.

These practices are invisible to amateurs.
So if they can't find the material online that helps them understand how to encode and transcribe texts, where are they going to get it?

Well--going back to crowdsourcing--one example is by participation in crowdsourcing projects.  Crowdsourcing projects--yes, they are a source of labor; yes they are a way to do outreach about your material--but they are a way to train the public in editing.  And they are training the public in editing whether that's the goal of the transcription project or not.  The problem is that the teacher in this school is the transcription software--is the transcription website.

This means that the people who are teaching the public about transcription--the people who are teaching the public about editing--are people like me: developers.

So, how do developers learn about transcription?

Well, sometimes, as Paul [Flemons] mentioned, we just wing it.  If we're lucky, we find out about TEI, and we read the TEI Guidelines, and we find out that there's so much editorial practice that's encoded in the TEI Guidelines that that's a huge resource.

If we happen to know the people in this room or the people who are meeting at the Association for Documentary Editing in Ann Arbor, we might discover traditional editorial resources like the Guide to Documentary Editing.  But that requires knowing that there's a term "Documentary Editing".

So what does that mean?  What that means is that people like me--developers with my level of knowledge or ignorance--are having a tremendous amount of influence on what the public is learning about editing.  And that influence does not just extend to projects that I run -- that influence extends to projects that archives and other institutions using my software run.  Because if an archive is trying to start a transcription project, and the archivist has no experience with scholarly editing, I say, You should pick some transcription conventions.  You should decide how to encode this.  Their response is, What do you think?  We've never done this before.  So I'm finding myself giving advice on editing.
Okay, moving on.

The other thing that amateurs need is community.

Community is important because community allows you to collaborate.  Communities evaluate each [member's] work and say, This is good.  This is bad.  Communities teach each [member].  And communities create standards -- you don't just hang out on Flickr to share your photos -- you hang out on Flickr to learn to be a better photographer.  People there will tell you how to be a better photographer.

We have no amateur editing community for people who happen to have an attic full of documents and want to know what to do with them.
So communities create standards, and we know this.  Let me quote my esteemed co-panelist, Melissa Terras, who, in her interviews with the managers of online museum collections--non-institutional online "museums"--found that people are coming up with "intuitive metadata" standards of their own, without any knowledge or reference to existing procedures in creating traditional archival metadata.
The last big problem is that there's currently no platform for someone who has an attic full of documents that they want to edit.  They can upload their scans to Flickr, but Flickr is a terrible platform for transcription.

There's no platform that will guide them through best practices of editing.

What's worse, if there were one, it would need a "killer feature", which is what Julia Flanders describes in the TAPAS project as a compelling reason for people to contribute their transcripts and do their editing on a platform that enforces rigor and has some level of permanence to it -- rather than just slapping their transcripts up on a blog.
So, let's talk about the future.  In his proposal for this conference, Peter Robinson describes a utopia and dystopia: utopia in which textual scholars train the world in how to read documents, and a dystopia in which hordes of "well-meaning but ill-informed enthusiasts will strew the web willy-nilly with error-filled transcripts and annotations, burying good scholarship in rubbish." 
This is what I think is the road to dystopia:
  1. Crowdsourcing tools ignore documentary editing methodologies.  If you're transcribing using the Transcribe Bentham tool, you learn about TEI.  You learn from a good school.  But almost all of the other crowdsourced transcription tools don't have that.  Many of them don't even contain a place for the administrator to specify transcription conventions to their users!
  2. As a result, the world remains ignorant of the work of scholarly editors, because we're not finding you online--because you're invisible on Wikipedia--and we're not going to learn about your work through crowdsourcing.
  3. So you have the public get this attitude that, well, editing is easy -- type what you see.  Who needs an expert?  I think that's a little bit worrisome.
  4. The final thing--which, when I started working on this talk, was a sort of wild bogeyman--is the idea that new standards come into being without any reference whatsoever to the tradition of scholarly or documentary editing.
I thought that [idea] was kind of wild.  But, in March, an organization called the Family History Information Standards Organization--which is backed by Ancestry.com, the Federation of Genealogy Societies, BrightSolid, a bunch of other organizations--announced a Call for Papers for standards for genealogists and family historians to use -- sometimes for representing family trees, sometimes for source documents.
And, in May, Call for Papers Submission number sixty-nine, "A Transcription Notation for Genealogy", was submitted.
Let's take a look at it.

Here we have what looks like a fairly traditional print notation.  It's probably okay.
What's a little bit more interesting, though, is the bibliography.

Where is your work in this bibliography?  It's not there.

Where is the Guide to Documentary Editing?  It's not there.

So here's a new standard that was proposed the month before last.  Now, I hope to respond to this--when I get the time--and suggest a few things that I've learned from people like you.  But these standards are forming, and these standards may become what the public thinks of as standards for editing.
All right, so let's talk about the road to utopia.

The road to the utopia that Peter described I see as in part through partnerships between amateurs and professionals:  you get amateurs participating in projects that are well run -- that teach them useful things about editing and how to encode manuscripts.

Similarly, you get professionals participating in the public conversation, so that your methodologies are visible.   Certainly your editions are visible, but that doesn't mean that editing is visible.  So maybe someone here wants to respond to that FHISO request, or maybe they just want to release guides to editing as Open Access.

As a result, amateurs produce higher-quality editions on their own, so that they're more useful for other researchers; so that they're verifiable.

And then, amateurs themselves become advocates -- not just for their material and the materials they're working on through crowdsourcing projects, but for editing as a discipline.

So that's what I think is the road to utopia.
So what about the past?

Back in Shillingsburg's "Dank Cellar" paper, he describes the problems with the e-texts that he's seeing, and he really encourages scholarly editors not to worry about it -- to disengage -- [and] instead to focus on coming up with methodologies--and again, this is 2006--for creating digital editions.  He says that these aren't well understood yet.  Let's not get distracted by these [amateur] things -- let's focus on what's involved in making and distributing digital editions.

Is he still right?  I don't know.

Maybe--if we're in the post-digital age--it's time to re-engage.