$Cambridge: exim/doc/doc-docbook/HowItWorks.txt,v 1.5 2006/07/31 13:19:36 ph10 Exp $

CREATING THE EXIM DOCUMENTATION

"You are lost in a maze of twisty little scripts."


This document describes how the various versions of the Exim documentation, in
different output formats, are created from DocBook XML, and also how the
DocBook XML is itself created.


BACKGROUND: THE OLD WAY

From the start of Exim, in 1995, the specification was written in a local text
formatting system known as SGCAL. This is capable of producing PostScript and
plain text output from the same source file. Later, when the "ps2pdf" command
became available with GhostScript, that was used to create a PDF version from
the PostScript. (A few earlier versions were created by a helpful user who had
bought the Adobe distiller software.)

A demand for a version in "info" format led me to write a Perl script that
converted the SGCAL input into a Texinfo file. Because of the somewhat
restrictive requirements of Texinfo, this script has always needed a lot of
maintenance, and was never totally satisfactory.

The HTML version of the documentation was originally produced from the Texinfo
version, but later I wrote another Perl script that produced it directly from
the SGCAL input, which made it possible to produce better HTML.

There were a small number of diagrams in the documentation. For the PostScript
and PDF versions, these were created using Aspic, a local text-driven drawing
program that interfaces directly to SGCAL. For the text and Texinfo versions,
alternative Ascii-art diagrams were used. For the HTML version, screen shots of
the PostScript output were turned into gifs.


A MORE STANDARD APPROACH

Although in principle SGCAL and Aspic could be generally released, they would
be unlikely to receive much (if any) maintenance, especially after I retire.
Furthermore, the old production method was only semi-automatic; I still did a
certain amount of hand tweaking of spec.txt, for example. As the maintenance of
Exim itself was being opened up to a larger group of people, it seemed sensible
to move to a more standard way of producing the documentation, preferably fully
automated. However, we wanted to use only non-commercial software to do this.

At the time I was thinking about converting (early 2005), the "obvious"
standard format in which to keep the documentation was DocBook XML. The use of
XML in general, in many different applications, was increasing rapidly, and it
seemed likely to remain a standard for some time to come. DocBook offered a
particular form of XML suited to documents that were effectively "books".

Maintaining an XML document by hand editing is a tedious, verbose, and
error-prone process. A number of specialized XML text editors were available,
but all the free ones were at a very primitive stage. I therefore decided to
keep the master source in AsciiDoc format, from which a secondary XML master
could be automatically generated.

The first "new" versions of the documents, for the 4.60 release, were generated
this way. However, there were a number of problems with using AsciiDoc for a
document as large and as complex as the Exim manual. As a result, I wrote a new
application called xfpt ("XML From Plain Text") which creates XML from a
relatively simple and consistent markup language. This application has been
released for general use, and the master sources for the Exim documentation are
now in xfpt format.

All the output formats are generated from the XML file. If, in the future, a
better way of maintaining the XML source becomes available, this can be adopted
without changing any of the processing that produces the output documents.
Equally, if better ways of processing the XML become available, they can be
adopted without affecting the source maintenance.

A number of issues arose while setting this all up, which are best summed up by
the statement that a lot of the technology is (in 2006) still very immature. It
is probable that trying to do this conversion any earlier would not have been
anywhere near as successful. The main problems that still bother me are
described in the penultimate section of this document.

The following sections describe the processes by which the xfpt files are
transformed into the final output documents. In practice, the details are coded
into a Makefile that specifies the chain of commands for each output format.


REQUIRED SOFTWARE

Installing software to process XML puts lots and lots of stuff on your box. I
run Gentoo Linux, and a lot of things have been installed as dependencies that
I am not fully aware of. This is what I know about (version numbers are current
at the time of writing):

. xfpt 0.00

  This converts the master source file into a DocBook XML file.

. xmlto 0.0.18

  This is a shell script that drives various XML processors. It is used to
  produce "formatted objects" for PostScript and PDF output, and to produce
  HTML output. It uses xsltproc, libxml, libxslt, libexslt, and possibly other
  things that I have not figured out, to apply the DocBook XSLT stylesheets.

. libxml 1.8.17
  libxml2 2.6.22
  libxslt 1.1.15

  These are all installed on my box; I do not know which of libxml or libxml2
  the various scripts are actually using.

. xsl-stylesheets-1.68.1

  These are the standard DocBook XSL stylesheets.

. fop 0.20.5

  FOP is a processor for "formatted objects". It is written in Java. The fop
  command is a shell script that drives it. It is used to generate PostScript
  and PDF output.

. w3m 0.5.1

  This is a text-oriented web browser. It is used to produce the Ascii form of
  the Exim documentation (spec.txt) from a specially-created HTML format. It
  seems to do a better job than lynx.

. docbook2texi (part of docbook2X 0.8.5)

  This is a wrapper script for a two-stage conversion process from DocBook to
  a Texinfo file. It uses db2x_xsltproc and db2x_texixml. Unfortunately, there
  are two versions of this command; the old one is based on an earlier fork of
  docbook2X and does not work.

. db2x_xsltproc and db2x_texixml (part of docbook2X 0.8.5)

  More wrapping scripts (see previous item).

. makeinfo 4.8

  This is used to make a set of "info" files from a Texinfo file.

In addition, there are a number of locally written Perl scripts. These are
described below.


THE MAKEFILE

The makefile supports a number of targets of the form x.y, where x is one of
"filter", "spec", or "test", and y is one of "xml", "fo", "ps", "pdf", "html",
"txt", or "info". The intermediate targets "x.xml" and "x.fo" are provided for
testing purposes. The other five targets are production targets. For example:

  make spec.pdf

This runs the necessary tools in order to create the file spec.pdf from the
original source spec.xfpt. A number of intermediate files are created during
this process, including the master DocBook source, called spec.xml. Of course,
the usual features of "make" ensure that if this already exists and is
up-to-date, it is not needlessly rebuilt.
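To make the chain concrete, the rules behind spec.pdf could be sketched roughly
as follows. This is a hypothetical fragment, not a copy of the real Makefile:
the intermediate file names and the exact invocations of the tools and scripts
(Pre-xml, xmlto, fop, PageLabelPDF, all described below) are illustrative only.

```make
# Hypothetical sketch of the spec.pdf chain; the real Makefile differs
# in detail, and the tool arguments here are illustrative.
spec.xml: spec.xfpt
	xfpt spec.xfpt

spec.fo: spec.xml
	./Pre-xml -optbreak <spec.xml >spec-pp.xml
	xmlto -x MyStyle-spec-fo.xsl fo spec-pp.xml
	mv spec-pp.fo spec.fo

spec.pdf: spec.fo
	fop spec.fo -pdf spec-raw.pdf
	./PageLabelPDF <spec-raw.pdf >spec.pdf
```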

The "test" series of targets was created so that small tests could easily be
run fairly quickly, because processing even the shortish XML document takes a
bit of time, and processing the main specification takes ages.

Another target is "exim.8". This runs a locally written Perl script called
x2man, which extracts the list of command line options from the spec.xml file,
and creates a man page. There are some XML comments in the spec.xml file to
enable the script to find the start and end of the options list.

There is also a "clean" target that deletes all the generated files.


CREATING DOCBOOK XML FROM XFPT INPUT

The small amount of local configuration for xfpt is included at the start of
the two .xfpt files; there are no separate local xfpt configuration files.
Running the xfpt command creates a .xml file from a .xfpt file. When this
succeeds, there is no output.


DOCBOOK PROCESSING

Processing a .xml file into the five different output formats is not entirely
straightforward. For a start, the same XML is not suitable for all the
different output styles. When the final output is in a text format (.txt,
.texinfo) for instance, all non-Ascii characters in the input must be converted
to Ascii transliterations because the current processing tools do not do this
correctly automatically.

In order to cope with these issues in a flexible way, a Perl script called
Pre-xml was written. This is used to preprocess the .xml files before they are
handed to the main processors. Adding one more tool onto the front of the
processing chain does at least seem to be in the spirit of XML processing.

The XML processors themselves make use of style files, which can be overridden
by local versions. There is one that applies to all styles, called MyStyle.xsl,
and others for the different output formats. I have included comments in these
style files to explain what changes I have made. Some of the changes are quite
significant.


THE PRE-XML SCRIPT

The Pre-xml script copies a .xml file, making certain changes according to the
options it is given. The currently available options are as follows:

-ascii

  This option is used for Ascii output formats. It makes the following
  character replacements:

    &#x2019;  =>  '    apostrophe
    &copy;    =>  (c)  copyright
    &dagger;  =>  *    dagger
    &Dagger;  =>  **   double dagger
    &nbsp;    =>  a space  hard space
    &ndash;   =>  -    en dash

  The apostrophe is specified numerically because that is what xfpt generates
  from an Ascii single quote character. Non-Ascii characters that are not in
  this list should not be used without thinking about how they might be
  converted for the Ascii formats.

  In addition to the character replacements, this option causes quotes to be
  put round <literal> text items, and <quote> and </quote> to be replaced by
  Ascii quote marks. You would think the stylesheet would cope with the latter,
  but it seems to generate non-Ascii characters that w3m then turns into
  question marks.
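  A rough Python sketch of the character replacements (the actual Pre-xml
  script is written in Perl; the mapping below is copied from the list above):

```python
# Sketch of the -ascii character replacements described above.
# The real Pre-xml script is written in Perl; this just shows the idea.
ASCII_MAP = {
    "&#x2019;": "'",   # apostrophe, as generated by xfpt
    "&copy;": "(c)",   # copyright
    "&dagger;": "*",   # dagger
    "&Dagger;": "**",  # double dagger
    "&nbsp;": " ",     # hard space becomes an ordinary space
    "&ndash;": "-",    # en dash
}

def to_ascii(xml_text: str) -> str:
    for entity, replacement in ASCII_MAP.items():
        xml_text = xml_text.replace(entity, replacement)
    return xml_text

print(to_ascii("Exim&nbsp;4 &copy; 2006"))  # Exim 4 (c) 2006
```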

-bookinfo

  This option causes the <bookinfo> element to be removed from the XML. It is
  used for the PostScript/PDF forms of the filter document, in order to avoid
  the generation of a full title page.

-fi

  Replace any occurrence of "fi" by the ligature &#xFB01; except when it is
  inside an XML element, or inside a <literal> part of the text.

  The use of ligatures would be nice for the PostScript and PDF formats. Sadly,
  it turns out that fop cannot at present handle the FB01 character correctly.
  The only format that does so is the HTML format, but when I used this in the
  test version, people complained that it made searching for words difficult.
  So at the moment, this option is not used. :-(

-noindex

  Remove the XML to generate a Concept Index and an Options index. The source
  document has two types of index entry, for a concept and an options index.
  However, no index is required for the .txt and .texinfo outputs.

-oneindex

  Remove the XML to generate a Concept and an Options Index, and add XML to
  generate a single index. The only output processor that supports multiple
  indexes is the processor that produces "formatted objects" for PostScript and
  PDF output. The HTML processor ignores the XML settings for multiple indexes
  and just makes one unified index. Specifying two indexes gets you two copies
  of the same index, so this has to be changed.

-optbreak

  Look for items of the form <option>...</option> and <varname>...</varname> in
  ordinary paragraphs, and insert &#x200B; after each underscore in the
  enclosed text. The same is done for any word containing four or more upper
  case letters (compile-time options in the Exim specification). The character
  &#x200B; is a zero-width space. This means that the line may be split after
  one of these underscores, but no hyphen is inserted.
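  A rough Python sketch of the -optbreak idea (the actual script is Perl, and
  the regular expression here is illustrative, not the script's own; the
  four-or-more-capitals case is omitted for brevity):

```python
import re

# Sketch of -optbreak: insert a zero-width space (&#x200B;) after each
# underscore inside <option> and <varname> elements, so that long option
# names may break across lines without a hyphen. Illustrative only; the
# real Pre-xml script is written in Perl.
def add_breakpoints(xml_text: str) -> str:
    def break_underscores(match: re.Match) -> str:
        return match.group(0).replace("_", "_&#x200B;")
    return re.sub(r"<(option|varname)>.*?</\1>", break_underscores, xml_text)

print(add_breakpoints("<option>queue_only_load</option>"))
```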


CREATING POSTSCRIPT AND PDF

These two output formats are created in three stages, with an additional fourth
stage for PDF. First, the XML is pre-processed by the Pre-xml script. For the
filter document, the <bookinfo> element is removed so that no title page is
generated. For the main specification, the only change is to insert line
breakpoints via -optbreak.

Second, the xmlto command is used to produce a "formatted objects" (.fo) file.
This process uses the following stylesheets:

  (1) Either MyStyle-filter-fo.xsl or MyStyle-spec-fo.xsl
  (2) MyStyle-fo.xsl
  (3) MyStyle.xsl
  (4) MyTitleStyle.xsl

The last of these is not used for the filter document, which does not have a
title page. The first three stylesheets were created manually, either by typing
directly, or by copying from the standard style sheet and editing.

The final stylesheet has to be created from a template document, which is
called MyTitlepage.templates.xml. This was copied from the standard styles and
modified. The template is processed with xsltproc to produce the stylesheet.
All this apparatus is appallingly heavyweight. The processing is also very slow
in the case of the specification document. However, there should be no errors.

The reference book that saved my life while I was trying to get all this to
work is "DocBook XSL, The Complete Guide", third edition (2005), by Bob
Stayton, published by Sagehill Enterprises.

In the third part of the processing, the .fo file that is produced by the xmlto
command is processed by the fop command to generate either PostScript or PDF.
This is also very slow, and you get a whole slew of errors, of which these are
a sample:

  [ERROR] property - "background-position-horizontal" is not implemented yet.

  [ERROR] property - "background-position-vertical" is not implemented yet.

  [INFO] JAI support was not installed (read: not present at build time).
    Trying to use Jimi instead
    Error creating background image: Error creating FopImage object (Error
    creating FopImage object
    (http://docbook.sourceforge.net/release/images/draft.png) :
    org.apache.fop.image.JimiImage

  [WARNING] table-layout=auto is not supported, using fixed!

  [ERROR] Unknown enumerated value for property 'span': inherit

  [ERROR] Error in span property value 'inherit':
    org.apache.fop.fo.expr.PropertyException: No conversion defined

  [ERROR] Areas pending, text probably lost in lineinclude parts matched in the
    response by response_pattern by means of numeric variables such as

The last one is particularly meaningless gobbledegook. Some of the errors and
warnings are repeated many times. Nevertheless, it does eventually produce
usable output, though I have a number of issues with it (see a later section of
this document). Maybe one day there will be a new release of fop that does
better (there are now signs - February 2006 - that this may be happening).
Maybe there will be some other means of producing PostScript and PDF from
DocBook XML. Maybe porcine aeronautics will really happen.

The PDF file that is produced by this process has one problem: the pages, as
shown by acroread in its thumbnail display, are numbered sequentially from one
to the end. Those numbers do not correspond with the page numbers of the body
of the document, which makes finding a page from the index awkward. There is a
facility in the PDF format to give pages appropriate "labels", but I cannot
find a way of persuading fop to generate these. Fortunately, it is possible to
fix up the PDF to add page labels. I wrote a script called PageLabelPDF which
does this. The labels are shown correctly by acroread and xpdf, but not by
GhostScript (gv).


THE PAGELABELPDF SCRIPT

This script reads the standard input and writes the standard output. It
searches for the PDF object that sets data in its "Catalog", and adds
appropriate information about page labels. The number of front-matter pages
(those before chapter 1) is hard-wired into this script as 12 because I could
not find a way of determining it automatically. As the current table of
contents finishes near the top of the 11th page, there is plenty of room for
expansion, so it is unlikely to be a problem.

Having added data to the PDF file, the script then finds the xref table at the
end of the file, and adjusts its entries to allow for the added text. This
simple processing seems to be enough to generate a new, valid, PDF file.
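For illustration, the kind of entry the script adds to the Catalog might look
like this. This is a sketch of the page-label syntax from the PDF
specification, not the script's literal output; the page index 12 matches the
hard-wired number of front-matter pages mentioned above:

```
<< /Type /Catalog
   ...
   /PageLabels << /Nums [ 0  << /S /r >>            % front matter: i, ii, ...
                          12 << /S /D /St 1 >> ] >> % body pages: 1, 2, ...
>>
```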


CREATING HTML

Only two stages are needed to produce HTML, but the main specification is
subsequently postprocessed. The Pre-xml script is called with the -optbreak and
-oneindex options to preprocess the XML. Then the xmlto command creates the
HTML output directly. For the specification document, a directory of files is
created, whereas the filter document is output as a single HTML page. The
following stylesheets are used:

  (1) Either MyStyle-chunk-html.xsl or MyStyle-nochunk-html.xsl
  (2) MyStyle-html.xsl
  (3) MyStyle.xsl

The first stylesheet references the chunking or non-chunking standard DocBook
stylesheet, as appropriate.

You may see a number of these errors when creating HTML: "Revisionflag on
unexpected element: literallayout (Assuming block)". They seem to be harmless;
the output appears to be what is intended.

The original HTML that I produced from the SGCAL input had hyperlinks back from
chapter and section titles to the table of contents. These links are not
generated by xmlto. One of the testers pointed out that the lack of these
links, or simple self-referencing links for titles, makes it harder to copy a
link name into, for example, a mailing list response.

I could not find where to fiddle with the stylesheets to make such a change, if
indeed the stylesheets are capable of it. Instead, I wrote a Perl script called
TidyHTML-spec to do the job for the specification document. It updates the
index.html file (which contains the table of contents), setting up anchors, and
then updates all the chapter files to insert appropriate links.

The index.html file as built by xmlto contains the whole table of contents in a
single line, which makes it hard to debug by hand. Since I was postprocessing
it anyway, I arranged to insert newlines after every '>' character.

The TidyHTML-spec script also processes every HTML file, to tidy up some of the
untidy features therein. It turns <div class="literallayout"><p> into <div
class="literallayout"> and a matching </p></div> into </div> to get rid of
unwanted vertical white space in literallayout blocks. Before each occurrence
of </td> it inserts &nbsp; so that the table's cell is a little bit wider than
the text itself.
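A rough Python sketch of these tidying substitutions (the actual TidyHTML-spec
script is Perl; plain string replacements are used here for illustration):

```python
# Sketch of the TidyHTML-spec tidying passes described above.
# The real script is written in Perl; this just shows the substitutions.
def tidy_html(page: str) -> str:
    # Remove the <p> wrapper inside literallayout blocks, to get rid of
    # unwanted vertical white space.
    page = page.replace('<div class="literallayout"><p>',
                        '<div class="literallayout">')
    page = page.replace('</p></div>', '</div>')
    # Pad each table cell with a hard space so it is a little wider
    # than its text.
    page = page.replace('</td>', '&nbsp;</td>')
    return page

print(tidy_html('<div class="literallayout"><p>x</p></div>'))
```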

The TidyHTML-spec script also takes the opportunity to postprocess the
spec_html/ix01.html file, which contains the document index. Again, the index
is generated as one single line, so it splits it up. Then it creates a list of
letters at the top of the index and hyperlinks them both ways from the
different letter portions of the index.

People wanted similar postprocessing for the filter.html file, so that is now
done using a similar script called TidyHTML-filter. It was easier to use a
separate script because filter.html is a single file rather than a directory,
so the logic is somewhat different.


CREATING TEXT FILES

This happens in four stages. The Pre-xml script is called with the -ascii,
-optbreak, and -noindex options to convert the input to Ascii characters,
insert line break points, and disable the production of an index. Then the
xmlto command converts the XML to a single HTML document, using these
stylesheets:

  (1) MyStyle-txt-html.xsl
  (2) MyStyle-html.xsl
  (3) MyStyle.xsl

The MyStyle-txt-html.xsl stylesheet is the same as MyStyle-nochunk-html.xsl,
except that it contains an additional item to ensure that a generated
"copyright" symbol is output as "(c)" rather than the Unicode character. This
is necessary because the stylesheet itself generates a copyright symbol as part
of the document title; the character is not in the original input.

The w3m command is used with the -dump option to turn the HTML file into Ascii
text, but this contains multiple sequences of blank lines that make it look
awkward. Furthermore, chapter and section titles do not stand out very well. A
local Perl script called Tidytxt is used to post-process the output. First, it
converts sequences of blank lines into a single blank line. Then it searches
for chapter and section headings. Each chapter heading is uppercased, and
preceded by an extra two blank lines and a line of equals characters. An extra
newline is inserted before each section heading, and they are underlined with
hyphens.
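The blank-line compaction step could be sketched in Python like this (the
actual Tidytxt script is Perl, and also handles the heading markup described
above):

```python
import re

# Sketch of Tidytxt's first pass: collapse each run of blank lines into a
# single blank line. The real script is Perl, and additionally uppercases
# chapter headings, adds a line of equals characters above them, and
# underlines section headings with hyphens.
def compact_blank_lines(text: str) -> str:
    return re.sub(r"\n{3,}", "\n\n", text)

print(compact_blank_lines("one\n\n\n\ntwo\n"))  # one, a single blank line, two
```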


CREATING INFO FILES

This process starts with the same Pre-xml call as for text files. Non-Ascii
characters in the source are transliterated, and the <index> elements are
removed. The docbook2texi script is then called to convert the XML file into a
Texinfo file. However, this is not quite enough. The converted file ends up
with "conceptindex" and "optionindex" items, which are not recognized by the
makeinfo command. These have to be changed to "cindex" and "findex"
respectively in the final .texinfo file. Furthermore, the main menu lacks a
pointer to the index, and indeed the index node itself is missing. These
problems are fixed by running the file through a script called TidyInfo.
Finally, a call of makeinfo creates a set of .info files.
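The index-name fix-ups could be sketched in Python as follows (TidyInfo itself
is a Perl script, which also adds the missing index node and menu entry; the
assumption that the items appear as @-commands is mine):

```python
# Sketch of TidyInfo's renaming pass: docbook2texi emits index commands
# that makeinfo does not recognize, so they are renamed. Assumes the items
# appear as @-commands; the real TidyInfo script is Perl and also repairs
# the main menu and the missing index node.
def fix_index_names(texinfo: str) -> str:
    return (texinfo
            .replace("@conceptindex", "@cindex")
            .replace("@optionindex", "@findex"))

print(fix_index_names("@conceptindex retry rules"))  # @cindex retry rules
```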

There is one apparently unconfigurable feature of docbook2texi: it does not
seem possible to give it a file name for its output. It chooses a name based on
the title of the document. Thus, the main specification ends up in a file
called the_exim_mta.texi and the filter document in exim_filtering.texi. These
files are removed after their contents have been copied and modified by the
TidyInfo script, which writes to a .texinfo file.


CREATING THE MAN PAGE

I wrote a Perl script called x2man to create the exim.8 man page from the
DocBook XML source. I deliberately did NOT start from the xfpt source, because
it is the DocBook source that is the "standard". This comment line in the
DocBook source marks the start of the command line options:

  <!-- === Start of command line options === -->

A similar line marks the end. If at some time in the future a way other than
xfpt is used to maintain the DocBook source, it needs to be capable of
maintaining these comments.
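Extracting the text between the two comment markers could be sketched like this
in Python (x2man itself is a Perl script and does much more, turning the
extracted XML into man-page markup; the exact wording of the end marker is an
assumption that mirrors the start marker quoted above):

```python
# Sketch: pull out the command line options section that is delimited by
# XML comments in spec.xml. The end-marker wording is an assumption; the
# real x2man script is Perl and also converts the XML to man-page format.
START = "<!-- === Start of command line options === -->"
END = "<!-- === End of command line options === -->"

def extract_options(spec_xml: str) -> str:
    begin = spec_xml.index(START) + len(START)
    return spec_xml[begin:spec_xml.index(END)]

doc = START + "<variablelist>...</variablelist>" + END
print(extract_options(doc))  # <variablelist>...</variablelist>
```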


UNRESOLVED PROBLEMS

There are a number of unresolved problems with producing the Exim documentation
in the manner described above. I will describe them here in the hope that in
future some way round them can be found.

(1) When a whole chain of tools is processing a file, an error somewhere in the
    middle is often very hard to debug. For instance, an error in the xfpt file
    might not show up until an XML processor throws a wobbly because the
    generated XML is bad. You have to be able to read XML and figure out what
    generated what. One of the reasons for creating the "test" series of
    targets was to help in checking out these kinds of problem.

(2) There is a mechanism in XML for marking parts of the document as "revised",
    and I have arranged for xfpt markup to use it. However, at the moment, the
    only output format that pays attention to this is the HTML output, which
    sets a green background. There are therefore no revision marks (change
    bars) in the PostScript, PDF, or text output formats as there used to be.
    (There never were for Texinfo.)

(3) The index entries in the HTML format take you to the top of the section
    that is referenced, instead of to the point in the section where the index
    marker was set.

(4) The HTML output supports only a single index, so the concept and options
    index entries have to be merged.

(5) The index for the PostScript/PDF output does not merge identical page
    numbers, which makes some entries look ugly.

(6) None of the indexes (PostScript/PDF and HTML) make use of textual markup;
    the text is all roman, without any italic or boldface.

(7) I turned off hyphenation in the PostScript/PDF output, because it was being
    done so badly.

    (a) It seems to force hyphenation if it is at all possible, without regard
        to the "tightness" or "looseness" of the line. Decent formatting
        software should attempt hyphenation only if the line is over some
        "looseness" threshold; otherwise you get far too many hyphenations,
        often for several lines in succession.

    (b) It uses an algorithmic form of hyphenation that doesn't always produce
        acceptable word breaks. (I prefer to use a hyphenation dictionary.)

(8) The PostScript/PDF output is badly paginated:

    (a) There seems to be no attempt to avoid "widow" and "orphan" lines on
        pages. A "widow" is the last line of a paragraph at the top of a page,
        and an "orphan" is the first line of a paragraph at the bottom of a
        page.

    (b) There seems to be no attempt to prevent section headings being placed
        last on a page, with no following text on the page.

(9) The fop processor does not support "fi" ligatures, not even if you put the
    appropriate Unicode character into the source by hand.

(10) There are no diagrams in the new documentation. This is something I hope
     to work on. The previously used Aspic command for creating line art from a
     textual description can output Encapsulated PostScript or Scalable Vector
     Graphics, which are two standard diagram representations. Aspic could be
     formally released and used to generate output that could be included in at
     least some of the output formats.

(11) The use of a "zero-width space" works well as a way of specifying that
     Exim option names can be split, without hyphens, over line breaks.
     However, when an option is not split, if the line is very "loose", the
     zero-width space is expanded, along with other spaces. This is a totally
     crazy thing to do, but unfortunately it is suggested by the Unicode
     definition of the zero-width space, which says "its presence between two
     characters does not prevent increased letter spacing in justification".
     It seems that the implementors of fop have understood "letter spacing"
     also to include "word spacing". Sigh.

The consequence of (7), (8), and (9) is that the PostScript/PDF output looks as
if it comes from some of the very early attempts at text formatting of around
20 years ago. We can only hope that 20 years' progress is not going to get
lost, and that things will improve in this area.


LIST OF FILES

Markup.txt                   Describes the xfpt markup that is used
HowItWorks.txt               This document
Makefile                     The makefile
MyStyle-chunk-html.xsl       Stylesheet for chunked HTML output
MyStyle-filter-fo.xsl        Stylesheet for filter fo output
MyStyle-fo.xsl               Stylesheet for any fo output
MyStyle-html.xsl             Stylesheet for any HTML output
MyStyle-nochunk-html.xsl     Stylesheet for non-chunked HTML output
MyStyle-spec-fo.xsl          Stylesheet for spec fo output
MyStyle-txt-html.xsl         Stylesheet for HTML=>text output
MyStyle.xsl                  Stylesheet for all output
MyTitleStyle.xsl             Stylesheet for spec title page
MyTitlepage.templates.xml    Template for creating MyTitleStyle.xsl
Myhtml.css                   Experimental css stylesheet for HTML output
PageLabelPDF                 Script to postprocess PDF
Pre-xml                      Script to preprocess XML
TidyHTML-filter              Script to tidy up the filter HTML output
TidyHTML-spec                Script to tidy up the spec HTML output
TidyInfo                     Script to sort index problems in Texinfo output
Tidytxt                      Script to compact multiple blank lines
filter.xfpt                  xfpt source of the filter document
spec.xfpt                    xfpt source of the specification document
x2man                        Script to make the Exim man page from the XML


Philip Hazel
Last updated: 30 March 2006