Critical Edition Workflow
A Digital-first Collaborative Critical Edition
This page describes one possible workflow for producing a printed critical edition working from, and with, the digital research environment Beta maṣāḥǝft. Currently, Beta maṣāḥǝft has no staff member responsible for giving technical support to such a project.
This is a strategic page, very similar to the one about the workflow for a digital-first collaborative catalogue of manuscripts. It is not meant to be comprehensive or tutorial-like, but to give you the right pointers in an organized fashion.
There are many ways to do a digital-first edition which can then be the basis for a paper-based product. Some tools are listed here and in the training materials; many more exist, based on XML, LaTeX, etc., and there are plenty of examples and literature. Each workflow comes with its own requirements and learning curve. Needless to say, the best one is the one which works best for you and your needs. This page describes one possibility which benefits at several stages from the commonly edited data in Beta maṣāḥǝft (Bm) and contributes to it in several citable and version-aware ways. It has not yet been used for a real project. By going through the hypothetical phases of the work involved in a critical edition of a text, we link to those resources and pages of documentation which are available. These do not need to be used in the exact ways described. This page will also try to clarify what can and what cannot be supported in such a process. The key idea is that what works for you and gets the result done tidily is not necessarily the best thing to do for the future and for others, nor the smartest or most adequate with respect to state-of-the-art methodology. Taking a few steps towards a deeply collaborative way of working benefits you, the quality of your work, and its direct impact on others; it is definitely worth it.
The digital-first workflow presented here is based on the principle of separation of concerns, which is typical of the world of coding and programming. This methodological approach involves splitting off as many different levels of concern as is possible and useful. Instead of trying to do everything with one tool and one method, each separate part of the work is done with a specifically designed method. The most common and basic separation of concerns is that between the semantic annotation of the text and its rendering on the printed page. Instead of saying "I make a footnote to indicate the list of variants for this passage, and I put it here because it looks nice", we turn it around and think "I mark the variants as variants", then, in a separate step, "I want the variants to be listed in footnotes", and in yet another, independently documented step, "I want the footnotes to be arranged in this way because it looks nice". Separating the steps also allows for much more control and, consequently, consistency. Mastering this concept is not easy, however, because it requires a postponement of satisfaction, and thus a higher perceived risk. This page will try to show at each step, with evidence, that this widespread perception is false, and to reassure you that the opposite is in fact true.
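The separation described above can be sketched in miniature. This is a hypothetical illustration, not part of any BM tooling: the variant data, the decision to render variants as footnotes, and the arrangement of the footnotes are kept as three independent layers.

```python
# Hypothetical illustration of separation of concerns:
# the variants are *data*; how they appear (as footnotes) is a
# separate, replaceable rendering step.

# Step 1: mark the variants as variants (semantic layer).
variants = [
    {"lemma": "salam", "readings": {"W2": "salaam", "W3": ""}},
]

# Step 2: decide, independently, that variants become footnotes.
def as_footnote(entry):
    alts = "; ".join(f"{w}: {r or 'om.'}" for w, r in entry["readings"].items())
    return f"{entry['lemma']}] {alts}"

# Step 3: arrange the footnotes (presentation layer) -- here simply
# numbered lines; a print layout could arrange them differently
# without touching steps 1 and 2.
def render(entries):
    return "\n".join(f"{i}. {as_footnote(e)}" for i, e in enumerate(entries, 1))

print(render(variants))  # 1. salam] W2: salaam; W3: om.
```

Changing how footnotes look means editing step 3 only; the recorded variants never change.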
With this in mind, within the Beta maṣāḥǝft research environment there are many resources one can use; they are linked in the sections below.
This workflow is made possible by some setup and implementation choices made by the project, namely the TEI XML-based data, the DTS API, and the GitHub-based collaborative workflow.
Because the author of this page is illiterate in Classical Ethiopic, the step-by-step examples will become more and more fictitious as we go towards the end. While we look forward to replacing them with a real example, there is none at the moment.
Requirements Identification
To start off on the right foot, one should be quite clear from the outset about the desired final output. In most cases, when working on a critical edition, this will involve the requirements of the editors. Typically none of these requirements should prevent any of the following steps; that is to say, there is no such excuse as "the editor does not let me".
My suggestion: make a list. If you did not do this even before submitting your project and getting your funding, I suggest it should be the first thing you do once your project has been approved.
Benefits to you
If you know from the beginning what you are doing and where it will land, everything will be easier. For example, if the translation goes into a different book, you will avoid wasting time with the alignment on facing pages, etc.
Select the manuscripts
Most likely, if you are setting off on the journey to build the edition of a text, you know where to find your manuscripts and probably already know which are where. Just in case you do not, Beta maṣāḥǝft may help you. Search for the title of your work, open the relevant work record, and on the right, in red, you will find a list of manuscripts, sorted by type of content, possibly also telling you about other textual units related to the one you are looking for.
This kind of search may not, however, bring you to the digital surrogates directly. The project also maintains a list of available images.
Because the present author does not have a clue, he searched the data with a script for a textual unit attested in manuscripts likely to have accessible images. The script, whose result is given below, looks in the Ethio-SpaRE manuscripts and selects only identified and encoded texts which occur five times in manuscripts whose images are available online (the <idno> has both @facs and @n). Among the results I picked a short text which is not related to many others.
<thistext>LIT4714Salamlaki<iscontained>5</iscontained>
  <ms>ESqs014<locus xmlns="http://www.tei-c.org/ns/1.0" from="7vc" to="8ra">7vc-8ra</locus></ms>
  <ms>ESamm007<locus xmlns="http://www.tei-c.org/ns/1.0" from="5va" to="5vb"/></ms>
  <ms>ESamq014<locus xmlns="http://www.tei-c.org/ns/1.0" from="5va" to="5vb"/></ms>
  <ms>ESmr001<locus xmlns="http://www.tei-c.org/ns/1.0" from="93rb" to="94rb"/></ms>
  <ms>ESum014<locus xmlns="http://www.tei-c.org/ns/1.0" from="9ra" to="9rb"/></ms>
</thistext>
Checking in the database, https://betamasaheft.eu/works/LIT4714Salamlaki/main is called Salām laki ḫoḫǝta mǝśrāq, and there are 13 manuscripts in total whose descriptions point to this identifier; some of them contain the text in the main stratum and some in the secondary strata. Looks good. Because of my query I know that there are images and that they are available.
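For the curious, the kind of query described above could be sketched as follows. This is a hedged reconstruction, not the actual script: the file layout, and the use of @ref on <title> inside <msItem> to point at the work record, are assumptions.

```python
# Hedged sketch (not the actual BM script) of the query described
# above: scan TEI manuscript files and count, per textual unit, the
# witnesses whose <idno> carries both @facs and @n (i.e. images are
# likely available online).
import glob
import xml.etree.ElementTree as ET
from collections import Counter

TEI = "{http://www.tei-c.org/ns/1.0}"

def attested_with_images(path_pattern):
    counts = Counter()
    for path in glob.glob(path_pattern, recursive=True):
        root = ET.parse(path).getroot()
        # keep only manuscripts whose <idno> has both @facs and @n
        idno = root.find(f".//{TEI}msIdentifier/{TEI}idno")
        if idno is None or not (idno.get("facs") and idno.get("n")):
            continue
        # each <msItem> whose <title> points to a work record
        # counts as one attestation of that textual unit
        for item in root.iter(f"{TEI}msItem"):
            title = item.find(f"{TEI}title")
            if title is not None and title.get("ref"):
                counts[title.get("ref")] += 1
    return counts

# e.g. keep the units attested exactly five times (path is hypothetical):
# [w for w, n in attested_with_images("Manuscripts/**/*.xml").items() if n == 5]
```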
Benefits to you
Well, you may find in the research environment manuscript descriptions and images, curated by a team and peer reviewed. That does not sound bad, does it?
Benefits to the community
What about the others, the ones which you know about but which are not in the research environment? As a first step, consider having a unified source of information, i.e. describing those manuscripts and linking the images and the text in the research environment. Catalogue descriptions can be long or short; there are no requirements. What is important is that you link your textual unit to the description, as a main content or as an addition. If in doing this you also add a description, links to other named entities, and relevant bibliography, everybody will be able to benefit from that. If you do not want to do so, you can still list witnesses which are external to BM. If these have accessible images and data, link them. However, the PDF script listed on this page will not work for them out of the box, though it can be adapted.
Transcription
You may be very lucky and find that we have already added a transcription based on the images, but this is not a very frequent case. Let us assume that there is no transcription, as in my example. I located the text using the viewer for each manuscript and downloaded the relevant photos following the manifest links. This may not be possible for you; if so, please simply ask. I then asked a colleague, Solomon, to help me identify the exact places where the text begins and ends. We found that in only four cases out of five could we locate the text on the images where expected. This is still quite good, and fine for the purpose of the demo.
I uploaded the images to Transkribus, following the instructions on the steps involved, which are also present in these guidelines. However, because the text is very short, I did not process the entire manuscript, only the part relevant to this text.
- https://betamasaheft.eu/ESum014 Begins 9r, column 1, line 13 and ends 9r, column 2, line 7
- https://betamasaheft.eu/ESmr001 Begins 83r, column 2, line 1 and ends 83r, column 2, line 13
- https://betamasaheft.eu/ESqs014 Begins 8v column 3, line 12 and ends 9r column 1, line 6
- https://betamasaheft.eu/ESamm007 Begins with verso column 1 line 9 and ends verso column 2, line 9.
At this point you can also download from Transkribus .docx, .pdf, .txt and many other formats besides the TEI which we will be using here. The project has funds to support the transcription (HTR), alignment and correction steps, by using its credits and paying trained students to do some of these steps. Ask.
Benefits to you
The three steps (loading, layout analysis and HTR) took me less than twenty minutes for four images. At the end of the process you have a text in Unicode, aligned to the images, ready to be used.
Benefits to the community
Correcting transcriptions in Transkribus (see next step) will make it possible to further train the model in the future, so that the transcription improves and fewer and fewer corrections are needed.
Correction and Encoding of the transcriptions
The transcription is not perfect; it needs to be corrected. You can correct it in Transkribus, which is convenient because it keeps the text aligned to the image. Some features will, however, need to be encoded afterwards. So, I opted for doing the corrections in my TEI export.
In the manuscripts where the identification is correct, I can start encoding the structure necessary for the text. This could have been supported by using transkribus2BM.xsl, which is, however, not necessary here, because the text is very short.
Now it is time to get the data into the research environment. Not familiar with GitHub? In these Guidelines and in our GitHub organisation you will find plenty of guidance. You could start from here.
I created a branch of the Manuscripts repository and opened the four relevant manuscript files.
I created a branch of the Works repository and opened the textual unit of the text.
I then copied from the TEI output of Transkribus into the BM manuscript records and encoded the text structure. For this step I used the Atom editor; any other editor is fine.
<div type="edition" xml:lang="gez">
<div type="textpart" subtype="folio" n="9">
<div type="textpart" corresp="#p1_i1.2">
<ab><pb n="9r" facs="#facs_5"></pb>
<cb n="a" facs="#facs_5_r3"></cb>
<lb facs="#facs_5_r3l13" n="13"></lb>ኖኂተ፡ ሞሥ <lb facs="#facs_5_r3l14" n="14"></lb>እእ <lb facs="#facs_5_r3l15" n="15"></lb>ራቅ፡ ዘወልደ፡ ኖሬ፡ ሰላም <lb facs="#facs_5_r3l16" n="16"></lb>ለኪ፡ ዓፀድ፡ ዘእስከደረ፡ <lb facs="#facs_5_r3l17" n="17"></lb>ለም፡ ስርጉት፡ በወሪ <lb facs="#facs_5_r3l18" n="18"></lb>ውሬ፡ ሰለኪ፡ ሰድልት <lb facs="#facs_5_r3l19" n="19"></lb>ንጸሬ፡ ሰላኪ፡ በትረ፡ እ <lb facs="#facs_5_r3l20" n="20"></lb>ሮን፡ ጸዋሪተ፡ ፍሬ፡ ሶብ <lb facs="#facs_5_r3l21" n="21"></lb>፡ እንተ፡ ወለድኪ፡ ፈ <lb facs="#facs_5_r3l22" n="22"></lb>ብሬ፡ ዘምስለ፡ ዝማረ፡ <lb facs="#facs_5_r3l23" n="23"></lb>ንወድለኪ፡ ለበይት <lb facs="#facs_5_r3l24" n="24"></lb>በብ፡ መጽሐፈ፡ ተአም <lb facs="#facs_5_r3l25" n="25"></lb>ርኪ፡ በሰላም፡ ኪያነ፡ ባ <cb n="b" facs="#facs_5_r4"></cb>
<lb facs="#facs_5_r4l1" n="1"></lb>ርኪ። ወኢትርስእኒ፡ ለ <lb facs="#facs_5_r4l2" n="2"></lb>ገብርኪ፡ ኃጥእ፡ ወአባሴ <lb facs="#facs_5_r4l3" n="3"></lb>ተ፡ወልደ፡ ማርያም <lb facs="#facs_5_r4l4" n="4"></lb>ማር፡ አቡነ፡ ትክለ፡ ሃይወ <lb facs="#facs_5_r4l5" n="5"></lb>ሮት፡ ወልደሙ፡ ለአቡን <lb facs="#facs_5_r4l6" n="6"></lb>ዘገብርኤል፡ በዓለመ፡ ዘ <lb facs="#facs_5_r4l7" n="7"></lb>ለም፡ አሜን፡ ወአሜን፡ </ab>
</div>
</div>
</div>
Example 1
Here we still have to add the encoding of, e.g., rubrication, corrections, normalizations, etc. The correction of the text can be done in Transkribus before exporting, or in the TEI of the manuscript. If done in Transkribus, only visible text should be entered; markup will be added in the TEI. Once correction and encoding have taken place, the text in the manuscript records will look more or less like the following example.
<div type="edition" xml:lang="gez">
<div type="textpart" subtype="folio" n="93">
<ab>
<pb facs="#facs_1" xml:id="AMM-007_006.jpeg" n="5v"></pb><cb facs="#facs_1_r1" n="a"></cb><lb facs="#facs_1_r1l9" n="9"></lb>ንፈስ።</ab>
<div type="textpart" corresp="#ms_i1.1.1">
<ab>ሰላም፡ ለኪ፡ ማርያ<lb facs="#facs_1_r1l10" n="10"></lb>ም። ኆኅተ፡ ምሥራቅ፡ ዘወ<lb facs="#facs_1_r1l11" n="11"></lb>ልደ፡ ናዊ፡ ሰላም፡ ለኪ፡ ዓፀድ፡<lb facs="#facs_1_r1l12" n="12"></lb>ዘእሰከደሬ፡ ሰላም፡ ለኪ፡ ጽ<lb facs="#facs_1_r1l13" n="13"></lb>ድልተ፡ ንፂሬ፡ ሥርጉት፡
በወ<lb facs="#facs_1_r1l14" n="14"></lb>ራውሬ፡ ሰላም፡ ለኪ፡ በቅ<lb facs="#facs_1_r1l15" n="15"></lb>ረ፡ አሮን፡ ጸዋሪተ፡ ፍሬ፡ ሳላ<lb facs="#facs_1_r1l16" n="16"></lb>ም፡ ለኪ። አንተ፡ ወለድኪ፡
ፈ<lb facs="#facs_1_r1l17" n="17"></lb>ጣሬ፡ ሰላም፡ ለኪ። ዘምስለ፡<lb facs="#facs_1_r1l18" n="18"></lb>ዝማሬ፡ ንሰ<add place="top">ማ</add>ድ፡ ለኪ፡ ሶበ፡ ይ<lb facs="#facs_1_r1l19" n="19"></lb>ትነበብ፡ መጽሐፈ፡ ተአም<lb facs="#facs_1_r1l20" n="20"></lb>ርኪ። በሰላም፡ ኪያነ፡ በር<cb facs="#facs_1_r2" n="b"></cb><lb facs="#facs_1_r2l1" n="1"></lb>ኪ። ወላዲተ፡ አምላክ፡ ማ<lb facs="#facs_1_r2l2" n="2"></lb>ርያም፡ እንበለ፡ ዕብሳብ፡ ወ<lb facs="#facs_1_r2l3" n="3"></lb>ሩካቤ፡ በይነ፡ ዘዓቀሂ፡ ብኩ<lb facs="#facs_1_r2l4" n="4"></lb>ለኪ፡ ንስቲተ፡ ቃለ፡ ይባቤ፡<lb facs="#facs_1_r2l5" n="5"></lb>ፈትዊ፡ እሙ፡ በረከተ፡ አፉ<lb facs="#facs_1_r2l6" n="6"></lb>ኪ፡
ሙሐዘ፡ ከርቤ። ለነዳይ፡<lb facs="#facs_1_r2l7" n="7"></lb>ብእሲ፡ ወለዘረከበ፡ ምንደ<lb facs="#facs_1_r2l8" n="8"></lb>ቤ፡ ኅብስተከ፡ ፈትት፡ ኢሳይ<lb facs="#facs_1_r2l9" n="9"></lb>ያስ፡ ይቤ። </ab>
</div>
</div>
</div>
Example 2
The important part here is that the text has been checked line by line against the image (there are certainly errors here, because I cannot read the script) and transcription phenomena have been encoded, for example the letter added at the top in line 18. Note that spaces before <pb>, <cb> and <lb> have been removed. This text is now ready for CollateX.
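The clean-up that makes a witness "ready for CollateX" (dropping the markup and collapsing whitespace) can be scripted; a minimal sketch in Python, assuming the witness is a well-formed TEI fragment:

```python
# Minimal sketch: turn a TEI fragment into plain witness text for the
# CollateX demo by dropping the markup (<pb/>, <cb/>, <lb/>, ...) and
# collapsing the whitespace that pretty-printing introduces.
import re
import xml.etree.ElementTree as ET

def plain_witness(tei_fragment: str) -> str:
    root = ET.fromstring(tei_fragment)
    # itertext() walks the text nodes only, ignoring all elements
    text = "".join(root.itertext())
    return re.sub(r"\s+", " ", text).strip()

frag = ('<ab xmlns="http://www.tei-c.org/ns/1.0">ሰላም፡ ለኪ፡ '
        '<lb n="10"/>ማርያም።</ab>')
print(plain_witness(frag))  # ሰላም፡ ለኪ፡ ማርያም።
```

The same function applied to each witness gives the four plain texts to paste into the collation tool.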
Want to opt out now? You can use your encoded text in Oxygen, for example, to get a different output format, e.g. .txt, .pdf, .odt.
Benefits to you
You may have noticed that I did not need to write any description of the manuscripts. I added the transcriptions, which were not there, but all the data about the manuscripts was already there.
Benefits to the community
The data is now in your branches of the different repositories. You could make a PR and contribute the corrected transcriptions, or wait and keep the transcriptions with you until you are happy with them. When they are eventually merged, they will be published online as HTML and PDF as well; on the branch they are available only as XML data.
Collation
I have the text transcribed from the four witnesses. Now I want to collate these four texts. I opened the CollateX demo, ticked some options, and copied in the corrected texts as four different witnesses, removing the XML brackets. Click Collate and you get the results, also as TEI. I copied the resulting TEI P5 into the Work file on my branch. This I am unlikely to propose as a PR, because a mock-up edition would not be a good contribution to the community.
<div type="edition" xml:lang="gez">
<div type="textpart" subtype="incipit" xml:id="Incipit">
<ab> ሰላም፡ ለኪ፡<app>
<rdg wit="W1">ማርያም።</rdg>
<rdg wit="W2 W3 W4"></rdg>
</app>ኆኅተ፡ ምሥራቅ፡</ab>
</div>
<div type="textpart">
<ab>
<app>
<rdg wit="W3 W4 W1">ዘወልደ፡</rdg>
<rdg wit="W2">በወልዳ፡</rdg>
</app><app>
<rdg wit="W1">ናዊ፡</rdg>
<rdg wit="W2 W3 W4">ኖሬ፡</rdg>
</app>ሰላም፡ ለኪ፡<app>
<rdg wit="W3 W4 W1">ዓፀድ፡</rdg>
<rdg wit="W2">ጽጽልት፡</rdg>
</app><app>
<rdg wit="W1">ዘእሰከደሬ፡</rdg>
<rdg wit="W2"></rdg>
<rdg wit="W3">ዘኢስኪድራ፡</rdg>
<rdg wit="W4">ዘእስከደሬ፡ ለሶኪ፡ ስርጉት፡ በወሪውሬ፡ ሰለኪ፡ ፅድልት፡</rdg>
</app><app>
<rdg wit="W2 W4 W1"></rdg>
<rdg wit="W3">ሰላም፡ ለኪ፡</rdg>
</app><app>
<rdg wit="W2 W4 W1"></rdg>
<rdg wit="W3">እስመ፡ ወልድኪ፡</rdg>
</app><app>
<rdg wit="W2 W4 W1"></rdg>
<rdg wit="W3">ፈጣሬ፡</rdg>
</app><app>
<rdg wit="W2 W4 W1"></rdg>
<rdg wit="W3">ሰላ፡</rdg>
</app><app>
<rdg wit="W2 W4 W1"></rdg>
<rdg wit="W3">ለኪ፡ ጽድልተ፡</rdg>
</app><app>
<rdg wit="W2 W1"></rdg>
<rdg wit="W3 W4">ንጸሬ፡</rdg>
</app><app>
<rdg wit="W3 W1">ሰላም፡</rdg>
<rdg wit="W2"></rdg>
<rdg wit="W4">ሰላኪ፡</rdg>
</app><app>
<rdg wit="W1">ለኪ፡ ጽድልተ፡</rdg>
<rdg wit="W2 W3 W4"></rdg>
</app><app>
<rdg wit="W1">ንፂሬ፡</rdg>
<rdg wit="W2 W3 W4"></rdg>
</app><app>
<rdg wit="W3 W1">ሥርጉት፡</rdg>
<rdg wit="W2 W4"></rdg>
</app><app>
<rdg wit="W2 W1">በወራውሬ፡</rdg>
<rdg wit="W3">በወራውረ፡ ሰ፡</rdg>
<rdg wit="W4"></rdg>
</app><app>
<rdg wit="W2 W1">ሰላም፡</rdg>
<rdg wit="W3 W4"></rdg>
</app><app>
<rdg wit="W2 W1">ለኪ፡</rdg>
<rdg wit="W3 W4"></rdg>
</app><app>
<rdg wit="W1">በቅረ፡</rdg>
<rdg wit="W2">ወጽዕድውት፡ ዘምስለ፡ ንዳሬ፡</rdg>
<rdg wit="W3 W4"></rdg>
</app><app>
<rdg wit="W3 W4 W1"></rdg>
<rdg wit="W2">ሳላም፡ ለኪ።</rdg>
</app><app>
<rdg wit="W1"></rdg>
<rdg wit="W2 W3 W4">በትረ፡</rdg>
</app><app>
<rdg wit="W2 W3 W1">አሮን፡</rdg>
<rdg wit="W4">እሮን፡</rdg>
</app><app>
<rdg wit="W4 W1">ጸዋሪተ፡</rdg>
<rdg wit="W2">ጸዋፊተ፡</rdg>
<rdg wit="W3">ጸዋሪተ፡ፍሬ፡ ሰ፡</rdg>
</app><app>
<rdg wit="W2 W4 W1">ፍሬ፡</rdg>
<rdg wit="W3"></rdg>
</app><app>
<rdg wit="W1">ሳላም፡ ለኪ።</rdg>
<rdg wit="W2 W3 W4"></rdg>
</app><app>
<rdg wit="W2 W3 W1"></rdg>
<rdg wit="W4">ሶላኪ፡ እንተ፡</rdg>
</app><app>
<rdg wit="W3 W4 W1"></rdg>
<rdg wit="W2">ሰላም፡</rdg>
</app><app>
<rdg wit="W3 W4 W1"></rdg>
<rdg wit="W2">ኪ፡</rdg>
</app><app>
<rdg wit="W2 W1">አንተ፡</rdg>
<rdg wit="W3 W4"></rdg>
</app><app>
<rdg wit="W4 W1">ወለድኪ፡</rdg>
<rdg wit="W2">ወለጽኪ፡ ፈብሬ፡</rdg>
<rdg wit="W3"></rdg>
</app><app>
<rdg wit="W4 W1">ፈጣሬ፡</rdg>
<rdg wit="W2 W3"></rdg>
</app><app>
<rdg wit="W1">ሰላም፡</rdg>
<rdg wit="W2 W3 W4"></rdg>
</app><app>
<rdg wit="W1"></rdg>
<rdg wit="W2 W3 W4">ዘምስለ፡</rdg>
</app><app>
<rdg wit="W2 W3 W1"></rdg>
<rdg wit="W4">ዝማረ፡ ንስጣድ፡</rdg>
</app><app>
<rdg wit="W4 W1"></rdg>
<rdg wit="W2 W3">ዝማሬ፡</rdg>
</app><app>
<rdg wit="W2 W3 W1"></rdg>
<rdg wit="W4">ለኪ፡</rdg>
</app><app>
<rdg wit="W4 W1"></rdg>
<rdg wit="W2">እስግድ፡</rdg>
<rdg wit="W3">ንሰግድ፡</rdg>
</app><app>
<rdg wit="W2 W1">ለኪ።</rdg>
<rdg wit="W3"></rdg>
<rdg wit="W4">ለስይትን በብ፡</rdg>
</app><app>
<rdg wit="W1">ዘምስለ፡ ዝማሬ፡</rdg>
<rdg wit="W2 W3 W4"></rdg>
</app><app>
<rdg wit="W3 W4 W1"></rdg>
<rdg wit="W2">መጽሐፈ፡</rdg>
</app><app>
<rdg wit="W1">ንሰማድ፡</rdg>
<rdg wit="W2">ተአምናኪ፡</rdg>
<rdg wit="W3 W4"></rdg>
</app><app>
<rdg wit="W3 W1">ለኪ፡ ሶበ፡</rdg>
<rdg wit="W2 W4"></rdg>
</app><app>
<rdg wit="W2 W3 W1">ይትነበብ፡</rdg>
<rdg wit="W4"></rdg>
</app><app>
<rdg wit="W3 W4 W1">መጽሐፈ፡</rdg>
<rdg wit="W2"></rdg>
</app><app>
<rdg wit="W3 W4 W1"></rdg>
<rdg wit="W2">ለለወትረ፡</rdg>
</app><app>
<rdg wit="W1">ተአምርኪ።</rdg>
<rdg wit="W2"></rdg>
<rdg wit="W3 W4">ተአምርኪ፡</rdg>
</app><app>
<rdg wit="W3 W4 W1">በሰላም፡</rdg>
<rdg wit="W2"></rdg>
</app><app>
<rdg wit="W1"></rdg>
<rdg wit="W2 W3 W4">ኪያነ፡</rdg>
</app><app>
<rdg wit="W1"></rdg>
<rdg wit="W2">ባርኪ፡ ሰላመ፡ ባርኪ፡</rdg>
<rdg wit="W3 W4">ባርኪ።</rdg>
</app><app>
<rdg wit="W2 W1">ኪያነ፡</rdg>
<rdg wit="W3"></rdg>
<rdg wit="W4">ወኢትርስእኒ፡ ለገብርኪ፡ ኃጥእ፡ ወአባሴ፡ ዘወልደ፡ ማርያሞ፡ ማር፡ አቡነ፡ ትክለ፡ ሃይማሮት፡ ወልደሙ፡
ለአቡን፡ ዘገብርኤል፡ ለዓለመ፡ ንለም፡ አሜን፡ ወአሜን፡</rdg>
</app><app>
<rdg wit="W1">በርኪ። ወላዲተ፡ አምላክ፡ ማርያም፡ እንበለ፡ ዕብሳብ፡ ወሩካቤ፡ በይነ፡ ዘዓቀሂ፡ ብኩለኪ፡ ንስቲተ፡
ቃለ፡ ይባቤ፡ ፈትዊ፡ እሙ፡ በረከተ፡ አፉኪ፡ ሙሐዘ፡ ከርቤ። ለነዳይ፡ ብእሲ፡ ወለዘረከበ፡ ምንደቤ፡ ኅብስተከ፡ ፈትት፡
ኢሳይያስ፡ ይቤ።</rdg>
<rdg wit="W2">ኅቡረ።</rdg>
<rdg wit="W3 W4"></rdg>
</app>
</ab>
</div>
</div>
Example 3
In the text above there are only <rdg> elements. For this demo I randomly changed one in each apparatus entry to a <lem>. The encoding of the critical text is described here. What is important to know is that this is only a practical starting point; you will want to change it and add much more precise encoding in your edition.
If the manuscripts had already been committed with the transcriptions, one could have used the CollateX instance of Beta maṣāḥǝft to run this step. While this can be a quick way to look at the results of the process, and it performs the clean-up of the text for you with this script, the output is not in XML, so for this example we used the online demo instead. CollateX can do much more, and much better, than what I used it for in this demo!
Benefits to you
CollateX does the boring job of comparing the texts for you. You will still have to do the boring job of checking the result of that job. CollateX will produce some TEI; how much of it you will have to redo and improve varies, of course. You may want to skip this entirely, do your collation manually, and get to know your text slowly. It is advisable to select carefully the portions of text to compare and to take care of the encoding, enriching it as appropriate.
Benefits to the community
Once you have a text for the edition, you could contribute it to the community even if you do not print it as a book. Just make a PR from your branch and it will be peer reviewed.
Analysis of the errors and construction of the Stemma
I skipped this step, because there is no real help which can be offered here. There are books about stemmatology supported by digital tools, but I would not be able to offer a concrete demo here. The analysis of the variants, and their encoding and evaluation towards a stemma which may guide the editorial choices for the text, can be encoded, for example by changing a <rdg> into a <lem> or by adding <note> elements, <choice>, etc.
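As a toy illustration of that last point, promoting a chosen <rdg> to a <lem> can even be scripted. This is a hypothetical sketch, not part of any BM tooling; in practice you will make these choices one by one in the editor.

```python
# Hedged sketch: rename the chosen <rdg> of an <app> entry to <lem>,
# keeping its text and @wit (a toy helper, not part of BM tooling).
import xml.etree.ElementTree as ET

TEI = "http://www.tei-c.org/ns/1.0"
ET.register_namespace("", TEI)  # serialize without namespace prefixes

def promote_to_lem(app_xml: str, witness: str) -> str:
    app = ET.fromstring(app_xml)
    for rdg in app.findall(f"{{{TEI}}}rdg"):
        # @wit holds space-separated witness sigla
        if witness in (rdg.get("wit") or "").split():
            rdg.tag = f"{{{TEI}}}lem"  # renaming preserves text and attributes
    return ET.tostring(app, encoding="unicode")

app = ('<app xmlns="http://www.tei-c.org/ns/1.0">'
       '<rdg wit="W1">ናዊ፡</rdg><rdg wit="W2 W3 W4">ኖሬ፡</rdg></app>')
print(promote_to_lem(app, "W1"))
```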
Encoding of the Critical Edition
Guidance on how to encode the edition is already provided in these Guidelines: see edition and images for the edition. Do not forget to correctly list the <witness> elements in a <listWit>.
Benefits to you
Your encoded edition will not yet have a bookish look and feel, although you can use one of many viewers; but it is an edition, recording all the relevant phenomena in an unambiguous and standardized way, naming each feature and describing it with attributes as necessary.
Benefits to the community
Even if the text is only presented online, once you have made a PR and it has been merged, your text is edited and available, especially as TEI data; it can be used by others not simply to read, but to reuse its encoded features.
Viewers will often expect that everything you want is contained in your input, which is in many cases a false expectation. The expectation should rather be that each part of your input ties together, in a specific way, a series of other resources of different types. You can do the job for the viewer or software of your choice, tidily packing everything explicitly for it, and have the certainty that comes from having laid out everything explicitly (or at least from thinking you did so); or you can embrace the sources and play with that. In the GitHub Make PDF repository (Pietro's personal fork), branch editionTest, I have had a go at a simple package for producing a PDF output from these steps, which includes the full potential of the research environment's collaborative work: that is, you do only the edition and all the rest is done. This may not always be the case, but it is more and more so, since so many people contribute to the common good.
If you open that package in Oxygen, open driver.xml and click play, the package will use BM to fetch resources from the research environment and print your book. But wait: you do not need to do that either, because output.pdf is already the result of running this; simply open the PDF.
The package is easily customizable via the settings file, not only for the layout features but also for the parts, their order, and the contents, and it is all in TEI and XQuery, the standard data languages of the humanities. The main functionalities are documented here, and the typesetting is done in Oxygen XML Editor using XSL-FO and Apache FOP.
To use it, please fork (not branch) this branch of the repository to your own account.
Benefits to you
Relying on your work file, which is included locally, so it can be on your branch only and not yet public, the script will use the data stored in BM as XML and the DTS API to fetch information to compile your book. For example,
- it will list all known witnesses and provide a printout of their dating
- it will collect bibliography and print it into a specific section using the HLZ styles and the EthioStudies library pulling citations from the edition, the driver file and the related manuscripts.
- it will organize the text and its apparata (only one in the example)
- it will collect all named entities and print the indexes (also selected in settings).
- notice that some examples are provided of how to use the existing information to attribute the contents reused from the Research Environment.
Of course, as you change your data (either the manuscripts or the edition), running the transformation again will update the output.
All of this can be customized, entirely changed, adapted; features can be added, etc. You can read the script, or ask somebody who can read it for you: it is one file only. For example, if a translation is provided, it can be put on the facing page and kept aligned, with its own apparata.
Fine touches can be applied in the XSL-FO, as this too is simply an XML file.
Benefits to the community
Anyone could follow the same steps, or some of them, to reproduce your work. Anyone will be able to find the exact point and layer of the information which can be criticized (or not), if they want to.
Let us imagine that the same package is used by two scholars for two editions of texts which happen to share a witness. The description of the witness may be exactly the same in both; the authority files are unequivocal, the data is directly indexed and retrievable, and consistency is ensured across the different editions. And, sure, there is much more that can be done...
Morphological Annotation
Once your text is established, you may wish to record your interpretation of the morphology and possibly of the syntax. Here you can find a video tutorial on how to use the Alpheios annotation tool for this purpose. You can copy and paste the TEI of the edition into the tool, or fetch the text from the DTS API if you have had it merged. This page will guide you through the transliteration and lists tools which can be used for that preliminary step.
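As a toy illustration of the transliteration step: at its core it is a character mapping. The table below is hypothetical and tiny; real tools cover the full fidal syllabary and its ambiguities.

```python
# Toy sketch of a transliteration table (hypothetical and tiny:
# real tools cover the whole fidal syllabary and its ambiguities).
TABLE = {"ሰ": "sa", "ላ": "lā", "ም": "m", "፡": " "}

def transliterate(text: str) -> str:
    # characters missing from the table are passed through unchanged
    return "".join(TABLE.get(ch, ch) for ch in text)

print(transliterate("ሰላም"))  # salām
```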
Benefits to you
By sharing your reading of the text and encoding it, you will provide more evidence for your interpretation and perhaps for some of the choices you made.
Benefits to the community
The annotation will directly enrich the Online Lexicon Linguae Aethiopicae and improve the ability of the morphological parser to determine a morphological interpretation; both are available to any user of any page on the web.
Deposit Your Dataset
Working with the data in the research environment will provide you with stable, citable references for each committed and merged version of your files. But what about the entire group of resources you have used? To make the set really reproducible and your collected resources citable as a set (e.g. your TEI edition, the files of the manuscript descriptions as you have used them in their final version, and possibly the script), you may deposit them in Zenodo or another similar repository.
Benefits to you
Your dataset, as a group of resources, will be citable. You may also submit a data paper to describe it.
Copyright of remixed content
As every single page of our website states, as does each file in the Beta maṣāḥǝft dataset, the copyright of the data belongs to the Akademie der Wissenschaften in Hamburg, Hiob-Ludolf-Zentrum für Äthiopistik. Sharing and remixing are permitted under the terms of the Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License. Therefore books created by remixing and reusing this data (catalogues, editions, etc.) should carry the same copyright, or an equally or more permissive formulation adequately negotiated with the publisher.
Publication products generated from BM data must respect this copyright and should contain appropriate attribution to all contributors. The PDF-producing script above does some of this, but not all that can be done. Once everything is in order and the book/article is ready, the licensing of the data can, with the agreement of the above copyright owners, be upgraded to allow commercial use as well, so that the publisher can print and sell (CC-BY-SA-NC → CC-BY-SA). This is possible and positive, as it allows even more reuse. A deposit of the source data, as described above, can be made, and a DOI for the data deposit added to the book content. Copyright of the book/article, the product of the remix of the data, can be silently or formally passed to the editor. The statement in the book, in respect of the original copyright, should read "(C) editor name… CC-BY-SA", or an equally permissive licence, as the data on which it is based requires. Editors can commercialise it and sell the book without any problem while respecting this copyright. As soon as possible, and following agreement with the editor, the data can additionally be made Open Access. The DOI of the open access book is then added to the deposit of the data with the proper relation, so that the two DOIs know about each other. This example workflow yields the following distinct products, each of which is distinctly and separately citable, adding to the others over time:
- publication of the source data, NC until publication: CC-BY-SA-NC (C) Akademie+HLCEES + Open Access and versioned in GitHub
- publication of the source data after publication: CC-BY-SA (C) Akademie+HLCEES + Open Access and versioned in GitHub
- HTML versions of the data throughout: CC-BY-SA-NC (C) Akademie+HLCEES + openly available, with independently generated Permanent Identifiers for reference and stable URIs for data relations
- several additional data-format remixes: CC-BY-SA-NC (C) Akademie+HLCEES + openly available
- publication of the dataset for the publication: CC-BY-SA (C) Akademie+HLCEES + Open Access for the dataset, with DOI
- the print publication: CC-BY-SA (C) Editor
- the digital publication: CC-BY-SA (C) Editor + Open Access for the book, with DOI
- an additional version of the DOI of the data deposit, once this is updated with the relation to the DOI of the book.
Revisions of this page
- Pietro Maria Liuzzo on 2021-11-18: first version from test run on LIT4714