=========================================================================
Date:     Sat, 1 Dec 90 18:47:11 EDT
Reply-To: The TUG DVI driver standards discussion list
Sender:   The TUG DVI driver standards discussion list
From:     Jerry Leichter
Subject:  Rule positioning - how about brute force?

The algorithms suggested so far are all very subtle and elegant. Sometimes in computation, subtlety is no match for brute force. My suggestion is to use a look-aside table to force alignment.

The basic idea is very simple: Keep a pair of tables of sp->pixel mappings, one for horizontal positions in sp's, one for vertical. Whenever you are asked to draw a rule, determine the positions of the lower left and upper right corners in sp's. For each coordinate position, check the appropriate look-aside table to determine if this sp has been seen before. If so, use the pixel position given. If not, calculate the pixel position using any algorithm you like - presumably, something much like the current DVITYPE algorithm will do - and save the mapping in the appropriate table. (Note that x and y positions are handled completely independently - to map (x,y), you look up x in the table of horizontal positions and y in the table of vertical positions. It would be a mistake to try to map an (x,y) pair as a unit, since two points with the same x and very slightly different y's could easily show up and result in different horizontal pixel positions.)

This algorithm absolutely guarantees that, if TeX's arithmetic shows that two rules share a common endpoint, they are displayed that way. (The converse cannot, of course, be true in general.) As long as the algorithm used to do the computation when the lookup fails is monotonic, it further guarantees that the "between" relation for points in sp's is also preserved for the displayed points. Given monotonicity, we need nothing else from the computational algorithm - it can be optimized solely with an eye toward providing the best relationship possible between rule and character positioning. (Of course, it had better be "reasonable" - mapping all (x,y)'s to (0,0) would "work" in some sense but would hardly endear the resulting driver to anyone.)

The look-aside tables can be discarded at the top of each new page. Note, however, that as a result "the same" rule drawn on different pages may be positioned differently, since its end points may have been calculated and saved under different conditions. I doubt this is a problem, and in any case it is nothing new - in fact, the whole problem is that today it can happen even on a single page! (Of course, if you have the memory, you can keep the tables around as long as you like. A difference could be visible for two pages on opposite sides of a sheet of paper, or (less likely) for two facing pages, or (even less likely) for two successive pages; it's hard to imagine that there will ever be other conditions where anyone would notice. So an intermediate implementation could discard, at BOP, any map entries not used in the preceding n pages, where n=3 should be adequate in all but very unusual circumstances.)

Few pages contain very large numbers of rules, so the amount of memory needed to store the look-aside tables is reasonable. For memory-tight systems, one could decide to store only a limited number of translations and discard them on an LRU basis.
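[Editorial note: the following is a minimal sketch, in C, of the look-aside idea described above; it is not from the original posting. The table sizes, the names, and the fallback rounding (a DVItype-style round-to-nearest with a hypothetical conversion factor "conv") are illustrative assumptions only.]

    #include <math.h>

    #define MAP_SIZE 256            /* per-axis, per-page table; LRU trimming not shown */

    typedef struct { long sp; long px; int used; } MapEntry;

    static MapEntry hmap[MAP_SIZE]; /* horizontal positions */
    static MapEntry vmap[MAP_SIZE]; /* vertical positions   */

    /* Fallback: any monotonic sp->pixel conversion will do; here a
     * DVItype-style rounding, with conv = pixels per sp for the device. */
    static long default_round(long sp, double conv)
    {
        return (long)floor(sp * conv + 0.5);
    }

    /* Look the coordinate up; if it has never been seen on this page,
     * compute the pixel position once and remember the answer. */
    static long map_coord(MapEntry *map, long sp, double conv)
    {
        int i, free_slot = -1;
        long px;

        for (i = 0; i < MAP_SIZE; i++) {
            if (map[i].used && map[i].sp == sp)
                return map[i].px;       /* seen before: reuse the same pixel */
            if (!map[i].used && free_slot < 0)
                free_slot = i;
        }
        px = default_round(sp, conv);   /* first time: compute and store */
        if (free_slot >= 0) {
            map[free_slot].sp = sp;
            map[free_slot].px = px;
            map[free_slot].used = 1;
        }
        return px;
    }

    /* A rule's corners are then mapped coordinate by coordinate, e.g.
     *   x0 = map_coord(hmap, left, conv);  y0 = map_coord(vmap, bottom, conv);  */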
Given the way that boxes and such are drawn, in almost all cases common end-points occur in a single short burst, so this should work well even if a fairly small number of translations are retained. I'd be very surprised if anyone can find real examples where retaining the last 30 entries on an LRU basis wouldn't be sufficient. In any case, the worst that can happen if a needed translation has been purged is that the program falls back to what all drivers do today.

                                                        -- Jerry

=========================================================================
Date:     Sat, 1 Dec 90 18:01:50 CST
Reply-To: The TUG DVI driver standards discussion list
Sender:   The TUG DVI driver standards discussion list
From:     MESSAGE AGENT
Subject:  Re: Rule positioning - how about brute force?

Dear The TUG DVI driver standards discussion list,

This is an automatic reply. Feel free to send additional mail, as only this one notice will be generated. The following is a prerecorded message, sent for phil

I am attending the Sun User Group Conference in San Jose, California. I will return to work on Friday, December 7. If you need immediate assistance with something related to departmental computing facilities, please contact K-H Jan at the address .

William LeFebvre

=========================================================================
Date:     Sat, 1 Dec 90 20:09:31 CST
Reply-To: The TUG DVI driver standards discussion list
Sender:   The TUG DVI driver standards discussion list
From:     "Thomas J. Reid"
Subject:  Re: Rule positioning - how about brute force?
In-Reply-To: Message of Sat, 1 Dec 90 18:47:11 EDT from

Jerry, et al.,

Your approach is interesting. It would appear to ensure that rules align correctly. However, the algorithm as it stands does not preserve consistent mapping of rule thicknesses. Consider the following structure (which is similar to the table rules that Tom Rokicki mentions in his latest note):

   -----------------------------------------------------------------
   -------------------------------Rule 1----------------------------
   -----------------------------------------------------------------
   |||                  |||                  |||                  |||
   |2|                  |3|                  |4|                  |5|
   |||                  |||                  |||                  |||
   -----------------------------------------------------------------
   -------------------------------Rule 6----------------------------
   -----------------------------------------------------------------
   | |                  | |                  | |                  | |
   a b                  c d                  e f                  g h

Assuming that the rules are processed by the driver in the order they are numbered, the entries in the horizontal dvi-unit to pixel mapping table will be:

   1   a -> pixel_round(a)
   2   h -> pixel_round(a) + pixel_rule(h-a)
   3   b -> pixel_round(a) + pixel_rule(b-a)
   4   c -> pixel_round(c)
   5   d -> pixel_round(c) + pixel_rule(d-c)
   6   e -> pixel_round(e)
   7   f -> pixel_round(e) + pixel_rule(f-e)
   8   g -> pixel_round(g)

The problem is that to ensure that the horizontal widths of rules 2 -- 5 are consistent, h for rule 5 should map to pixel_round(g) + pixel_rule(h-g). Instead, the width will be

   pixel_round(a) + pixel_rule(h-a) - pixel_round(g)

when it should be:

   pixel_rule(h-g).

The same circumstances which would have caused rule 1 to overshoot rule 5 will now cause rule 5 to be one pixel thicker than rules 2, 3, and 4. For the default rule thickness on a 300 dpi device, the rule comes out to be 3 pixels when it should have been 2; certainly a noticeable difference.

Of course, the "new" requirement of "ensure consistent dvi-unit to pixel mappings for widths of 'thin' rules" can be added to your method by changing the lookup rules.
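[Editorial note: a small numeric sketch, in C, of the inconsistency just described. The coordinates are invented and are given directly as exact (unrounded) pixel values, i.e. after multiplying the sp positions by the device's conversion factor; 1.66 px is roughly the default 0.4pt rule thickness at 300 dpi.]

    #include <math.h>
    #include <stdio.h>

    static long pixel_round(double p) { return (long)floor(p + 0.5); } /* positions  */
    static long pixel_rule(double w)  { return (long)ceil(w); }        /* rule sizes */

    int main(void)
    {
        double thick = 1.66;           /* vrule thickness                  */
        double a = 10.50;              /* left edge of rule 1 and rule 2   */
        double h = 410.01;             /* right edge of rule 1 and rule 5  */
        double b = a + thick;          /* right edge of rule 2             */
        double g = h - thick;          /* left edge of rule 5              */

        /* Mappings as they land in the table when rule 1 is drawn first: */
        long map_a = pixel_round(a);                /* 10.50  -> 11       */
        long map_h = map_a + pixel_rule(h - a);     /* 11 + 400 = 411     */
        long map_b = map_a + pixel_rule(b - a);     /* 11 + 2   = 13      */
        long map_g = pixel_round(g);                /* 408.35 -> 408      */

        printf("rule 2 width:    %ld px\n", map_b - map_a);      /* 2 */
        printf("rule 5 width:    %ld px\n", map_h - map_g);      /* 3 */
        printf("pixel_rule(h-g): %ld px\n", pixel_rule(h - g));  /* 2 */
        return 0;
    }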
The change is to look up h-left and h-right (or v-bottom and v-top) at the same time. (The list still only maps one dvi-unit value to one pixel value.) Four cases are possible:

   1) h-left not found, h-right not found;
   2) h-left not found, h-right found;
   3) h-left found, h-right not found; and
   4) h-left found, h-right found.

For case 1, simply use the DVItype method (or maybe Rokicki's method) to map and store both h-left and h-right. For case 2, use the pixel_rule function to compute h-left, knowing the mapping for h-right and what the thickness of the rule should be. For case 3, we compute h-right by adding pixel_rule(h-right - h-left) to the mapping for h-left. For case 4, both mappings already exist. This can be a problem: Consider the following case:

   --------------------------------------------------------
   ------------------------Rule 1--------------------------
   --------------------------------------------------------
   |||                                                  |||
   |||                                                  |||
   |2|                                                  |4|
   |||--------------------------------------------------|||
   |||---------------------Rule 3-----------------------|||
   |||--------------------------------------------------|||
   | |                                                  | |
   a b                                                  c d

Assuming that the rules are done in the order they are numbered, then point d is mapped according to rule 1 and point c is mapped according to rule 3. There is nothing to ensure that the mapping for d minus the mapping for c gives the same result as the difference between the mappings for points b and a. Given horizontal widths of rules 2, 3, and 4 which are each slightly more than a whole number of pixels, the resulting width of rule 4 will be two pixels less than the pixel width of rule 2.

Of course, the situation described above does not represent what one would get from normal usage. The structure does resemble a table with an hrule for rule 1, vrules as part of the column entries for rules 2 and 4, and a multispanned rule for rule 3. The difference is that rule 4 would be processed prior to rule 3, so the mapping list will be different, which will give correct results. However, further research should be done to ensure that case 4 does not happen in realistic situations.

Alternatively, we could treat case 4 as either case 2 or 3. But which one? For the example cited above, treating it as case 2 would give correct results (although rule 3 will extend into rule 4 by one pixel). Does this always hold true? Should the remapped h-left (or h-right) replace the older value?

Regarding the question of when to flush the mapping tables: Given the problems caused by case 4 of the modified algorithm, I would think that the mapping tables should be flushed after each TeX table. Identifying this point will be a problem. Alternatively, partial flushing may work. Instead of flushing the whole table, we could flush mapping entries when encountering the POP opcode for the level at which the entry was added to the list. Further research is needed to see if this can avoid many of the "case 4" problems.

                                        ---Tom Reid

=========================================================================
Date:     Sun, 2 Dec 90 08:53:01 EDT
Reply-To: The TUG DVI driver standards discussion list
Sender:   The TUG DVI driver standards discussion list
From:     Jerry Leichter
Subject:  Re: Rule positioning - how about brute force?

Tom Reid makes a good point: My proposed algorithm forces rules to align, but does not attempt to make them of uniform thickness. This was a tradeoff I made deliberately by recommending that the look-aside tables store mappings for LL and UR (x,y) coordinates.
One could, of course, store (say) LL (x,y) coordinates and the height+depth and width. This would ensure that any two rules with identical dimensionality in TeX arithmetic end up with the same pixel dimensions; however, we lose the guarantee that identical endpoints map to identical endpoints.

In essence, we trade off guaranteed correctness in two different situations: When drawing corners (correctness condition is exact intersection) and when drawing parallel lines (correctness condition is uniform thickness). However, the story is actually much more complicated than that, since in fact (a) a corner drawn with rules of different thickness won't look good; (b) not only do you want uniform thickness for the parallel lines, but if you have more than two of them you'd like the inter-line distances to be uniform, too. (I've actually run into (b) in a letterhead I did; I had to fine-tune the positioning or the result looked bad. Of course, the fine-tuning is only good for one particular resolution and perhaps only for my current driver.) Unfortunately, (b), in conjunction with uniform rule thickness, cannot be guaranteed by any algorithm that only looks at local data - it's just another version of the classical global vs. local rounding dilemma, usually seen for fixed-width characters.

Perhaps the best one can do is the following: Retain THREE look-aside lists, one for x, one for y, one for rule thickness. The algorithm for drawing a rule then becomes: First, position the basepoint by looking up its x and y coordinates. Next, if either the (height+depth) or the width of the rule is no more than THIN, and it is in the thickness table, use the value in the thickness table to compute the corresponding x or y coordinate for the opposite corner. Next, compute any remaining coordinates for the opposite corner using the x and y tables. Finally, insert any dimension of the resulting rule which is no more than THIN into the thickness table.

THIN should be quite small - say a tenth of an inch.

This kind of algorithm guarantees that horizontal rules in a table are of both uniform thickness (they are under THIN in thickness so the thickness table will force them all to use the thickness calculated for the first) and uniform length (they are longer than THIN so their right-hand ends are all forced to the same x pixel position by the x table). My concern is that one can cause problems at corners, such as:

   ------
   --1-|||
   ----|||
       |2|
       |||
       |||

If the rules had ended up 2 pixels thick, this would have been correct. However, if they end up 3 pixels thick we've lost the forced correspondence between (x,y) coordinates at the UR corners. (Both these rules have their basepoints at "the other end". Hmm ... I suppose the TeX code could draw each rule twice, once with the basepoint at LL, once at UR.)

                                                        -- Jerry

=========================================================================
Date:     Mon, 3 Dec 90 09:28:17 EDT
Reply-To: The TUG DVI driver standards discussion list
Sender:   The TUG DVI driver standards discussion list
From:     Gaulle Bernard
Subject:  Nelson's proposal about \special

I've read carefully Nelson's proposal about \special standardisation and, as expected, I was very impressed. It's simple, efficient, clear, extensible, powerful, ... and source code in C is available! It's quite marvellous! I support his paper and his ideas without any criticism.

I STRONGLY ASK FOR A VOTE ON IT, AS QUICKLY AS POSSIBLE.
For TeX users around the world,

Bernard GAULLE

=========================================================================
Date:     Mon, 3 Dec 90 19:07:00 CST
Reply-To: The TUG DVI driver standards discussion list
Sender:   The TUG DVI driver standards discussion list
From:     "Thomas J. Reid"
Subject:  Re: Rule positioning - how about brute force?
In-Reply-To: Message of Sun, 2 Dec 90 08:53:01 EDT from

On Sun, 2 Dec 90 08:53:01 EDT Jerry Leichter said:
>Tom Reid makes a good point: My proposed algorithm forces rules to align,
>but does not attempt to make them of uniform thickness. ...
>
>In essence, we trade off guaranteed correctness in two different situations:
>When drawing corners (correctness condition is exact intersection) and when
>drawing parallel lines (correctness condition is uniform thickness). However,
>the story is actually much more complicated than that, since in fact (a) a
>corner drawn with rules of different thickness won't look good; (b) not only
>do you want uniform thickness for the parallel lines, but if you have more
>than two of them you'd like the inter-line distances to be uniform, too.
>(I've actually run into (b) in a letterhead I did; I had to fine-tune the
>positioning or the result looked bad. Of course, the fine-tuning is only good
>for one particular resolution and perhaps only for my current driver.)
>
>...

Consistency of "thin" rule dimensions is more important than alignment of rules. In fact, given the additional requirement of consistency of "thin" gaps between rules, I would rank the three as:

   1) Consistent mappings of "thin" rules;
   2) Consistent mappings of "thin" gaps; and
   3) Proper alignment of rules.

It seems to me that the problem you describe for your letterhead is a device-dependent one and should be resolved as such. METAFONT has provisions for coping with such problems: things such as the blacker parameter and the measures taken in the program for the lowercase m in the Computer Modern fonts to ensure consistent gap sizes in it. Thus, one possibility is to handle your letterhead as a font.

The same gap size problems are likely to be faced by those trying to do bar codes in TeX. For large enough codes, single pixel errors will still be within tolerances (i.e., for codes that are 5.5 or fewer characters per inch on a 300 dpi printer). Doing the characters in METAFONT instead of using rules would reduce spacing problems within the characters and would permit denser codes to be used. Tolerances between characters are typically higher, so some correction can be applied there. However, since bar codes have more bars and gaps than does the lowercase m, differences between the device width and the rounded TFM width could be higher than the 3 we saw with the m. Large corrections should be prevented; characters should be allowed to drift without bound in this case. (We could go back to the "correct one pixel at a time" rule we discussed earlier.)

As an alternative to using a font, such things can be done as a device-dependent graphic and merged with the DVI file by the driver. (Question: Are graphics subject to the max-drift algorithm? ;-)

Still another approach is to introduce the concept of a "white" (or transparent) rule. Your letterhead could then be done by placing "black" rules immediately adjacent to "white" ones, with modifications made to the dvi-unit-to-pixel mapping algorithm to ensure consistent gap sizes. This would result in the rules after the first drifting away from their "correct" locations.
Such drift should be exempt from a max-drift algorithm.

>Perhaps the best one can do is the following: Retain THREE look-aside lists,
>one for x, one for y, one for rule thickness. The algorithm for drawing a
>rule then becomes: First, position the basepoint by looking up its x and y
>coordinates. Next, if either the (height+depth) or the width of the rule is
>no more than THIN, and it is in the thickness table, use the value in the
>thickness table to compute the corresponding x or y coordinate for the
>opposite corner. Next, compute any remaining coordinates for the opposite
>corner using the x and y tables. Finally, insert any dimension of the
>resulting rule which is no more than THIN into the thickness table.
>
>THIN should be quite small - say a tenth of an inch.
>
>This kind of algorithm guarantees that horizontal rules in a table are of both
>uniform thickness (they are under THIN in thickness so the thickness table
>will force them all to use the thickness calculated for the first) and uniform
>length (they are longer than THIN so their right-hand ends are all forced to
>the same x pixel position by the x table). My concern is that one can cause
>problems at corners, such as:
>
>   ------
>   --1-|||
>   ----|||
>       |2|
>       |||
>       |||

When a new "thin" dimension is encountered (i.e., one that's not in the list), there are basically two ways to map its value into pixel units:

   A) The right way:  Scale the "thin" dimension from dvi-units to pixels and
      take a ceiling function on the result.  (This is the DVItype method.
      Rokicki's method would also do this since the dimension is "thin.")

   B) The wrong way:  Scale the horizontal (or vertical) positions of each end
      and take the difference to get the pixel value for the width.

If I seem to show a bias for (A), it is because of the problems which (B) can create: Two different values can occur for the same actual difference depending upon where the endpoints happen to fall. Of course, once the value is mapped, the same value will be used throughout the scope of the mapping table.

But, what is the scope of the mapping table? If it is more than one page, we have lost page independence. The mapping will depend upon the pages selected and the order they are processed. If it is less than a page, the possibility exists that two tables on the same page will have the same value map differently. If the scope is exactly one page, there is the possibility that two tables on facing pages will map differently. Thus, we lose consistency of "thin" rule mappings; method (B) must be rejected.

O.K., so we always use method (A). Now, the mapping is only dependent upon the width in dvi-units. The ceiling function will always give the same result, so we can dispense with the lookup table. (The new algorithm becomes the same as the old one.)

Now let's turn our attention to computing the longer dimension of rule 1. Two possibilities exist:

   1) The horizontal value for the right side is not in the list: We must
      compute the value by one of two methods:
      a) The DVItype method.
      b) The Rokicki method.
      In both cases, it has been shown that misalignment is possible.

   2) The value is present in the list, so we simply use it. But how was that
      number computed when it was added to the list? If it was done by one of
      the above methods, we still have a problem.

The possibility of rule 1 overshooting rule 2 can be eliminated by processing rule 2 first. But then, rule 2 could overshoot rule 1.
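[Editorial note: a minimal sketch, in C, of methods (A) and (B) above, with invented numbers. Positions are given directly as exact (unrounded) pixel values; 1.66 px is roughly the default rule thickness at 300 dpi.]

    #include <math.h>
    #include <stdio.h>

    static long round_pos(double p) { return (long)floor(p + 0.5); }

    int main(void)
    {
        double thick = 1.66;                   /* same "thin" dimension...      */
        double left1 = 10.20, left2 = 10.60;   /* ...at two different positions */

        /* (A) scale the dimension itself and take the ceiling: */
        long a_width = (long)ceil(thick);                            /* always 2  */

        /* (B) round each endpoint and take the difference: */
        long b_width1 = round_pos(left1 + thick) - round_pos(left1); /* 12-10 = 2 */
        long b_width2 = round_pos(left2 + thick) - round_pos(left2); /* 12-11 = 1 */

        printf("(A): %ld   (B): %ld and %ld\n", a_width, b_width1, b_width2);
        return 0;
    }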
A possible algorithm which could be researched is to compute the endpoints for *all* "thin" dimensions before computing the endpoints of *any* non-"thin" ones. This could properly address the alternating "black" and "white" "thin"-rule example given above. It may also be possible to add a "thin-gap" detector to eliminate the need for "white" rules.

> ...                                        (Both these rules have their
>basepoints at "the other end". Hmm ... I suppose the TeX code could draw
>each rule twice, once with the basepoint at LL, once at UR.)

I do not understand what you mean by the "basepoint" of the rule. The "basepoint" within TeX is not reflected in the opcodes which are written into the DVI file. While TeX rules have height, depth, and width (as well as position), rules in a TeX-generated DVI file only have a width and height (both of which are strictly positive), with position controlled by other opcodes. That is, the TeX code:

   % Futile attempt to draw a rule backwards.
   \hbox to\hsize{\kern\hsize \vrule height0pt depth1pt width-\hsize \hss}
   \end

results in a PUSH, a DOWNn, and a POP being written to the DVI file. The rule does not appear because its width is not positive.

> -- Jerry

The problems which rules present are not simple. While I feel that the problem could be resolved entirely within the DVI driver, the questions are: How complex will such an algorithm be? How far should we search to find one? Will it be practical to actually implement it?

We should take a lesson from the field of data communications: Tasks are divided into layers. Each layer resolves problems not addressed by the next lower layer. In the case of the TCP/IP protocol suite, packets transmitted at the IP layer can be lost. While losing packets is not a good thing, the possibility is accepted, and it is the responsibility of the TCP layer (the next layer up) to work around the problem.

Applying this philosophy to DVI drivers, I think that we should accept that problems exist with respect to rules as represented in DVI files; that we document the problems; and that we suggest workarounds which can be employed at "the next layer up" (which would be the TeX macro layer). This would suggest that the algorithm used by a DVI driver when dealing with rules be fairly simple and that its behaviour be predictable. If the algorithm also helps out with typical problems (such as Rokicki's does), this is an added benefit.

                                        ---Tom Reid

=========================================================================
Date:     Tue, 4 Dec 90 13:30:01 MEZ
Reply-To: The TUG DVI driver standards discussion list
Sender:   The TUG DVI driver standards discussion list
From:     XITIJSCH@DDATHD21.BITNET
Subject:  Re: Rule positioning - how about brute force?

Thomas J. Reid said:
> Applying this philosophy to DVI drivers, I think that we should accept
> that problems exist with respect to rules as represented in DVI files;
> that we document the problems; and that we suggest workarounds which
> can be employed at "the next layer up" (which would be the TeX macro
> layer). This would suggest that the algorithm used by a DVI driver
> when dealing with rules be fairly simple and that its behaviour be
> predictable. If the algorithm also helps out with typical problems
> (such as Rokicki's does), this is an added benefit.

I think Tom Reid is completely right. And it seems to me that Tom Rokicki's algorithm is the best up to now. In my opinion we will need a new standard tier for it -- or should it go into level 0 ??!!
Joachim

=========================================================================
Date:     Mon, 17 Dec 90 10:51:00 PST
Reply-To: The TUG DVI driver standards discussion list
Sender:   The TUG DVI driver standards discussion list
From:     Don Hosek
Subject:  Message regarding positioning of characters from Michael Doob

From:  IN%"mdoob@ccu.umanitoba.ca"  "Michael Doob"  17-DEC-1990 10:40:52.51
To:    Don Hosek
Subj:  DVI drivers avoiding the hh parameter

Hi Don,

In most DVI drivers the width of a character (in pixels) is accumulated somewhere as it is printed. The difference between the placement of characters in pixels and the true TeX position is then computed and allowed to get as big as max_drift. You had some suggested parameter settings in your document from the DVI standards committee. I have a question about all this.

When devices have higher resolution, this whole correction, it would seem, becomes much less necessary, since the pixel resolution is a better approximation of the internal TeX units. Given that the number of characters per line is bounded by some relatively small number, perhaps a hundred, and the error in placing a character using a rounded TeX position is no more than half a pixel, the maximum error would be around 50 pixels. The expected error would be much smaller, of course. Now for a typical laser printer this error would be completely unacceptable, but I'm doing some work on a 2500 dpi machine. Since I've already written a couple of device drivers, I could adapt my code easily to get a less robust but much faster driver. Has anyone thought about the consequences of ignoring max_drift on high resolution devices?

There is a second question related to this. For PostScript devices, the character widths can be computed using the PostScript interpreter; in other words, the character positioning can be done by converting the TeX position to PostScript point units and letting PostScript choose the pixel alignment. Has anyone thought about leaving the max_drift problem to the PostScript engine? What happens, and what is the implication for high resolution devices?

Thanks for the information.

Michael

=========================================================================
Date:     Mon, 17 Dec 90 14:15:41 -0800
Reply-To: The TUG DVI driver standards discussion list
Sender:   The TUG DVI driver standards discussion list
From:     Pierre MacKay
Subject:  Message regarding positioning of characters from Michael Doob
In-Reply-To: Don Hosek's message of Mon, 17 Dec 90 10:51:00 PST
             <9012171934.AA23344@june.cs.washington.edu>

Stefan v Bechtolsheim will have the best answer to the question about the results of letting PostScript do the max_drift calculation. Generally, on a low to mid-resolution device, the results are not good. It is one of the chief distinctions of SvB's approach that he takes care of max_drift in TeXPS, and I for one find the results convincing.
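[Editorial note: for readers following the hh/max_drift discussion, here is a rough sketch, in C, of the bookkeeping Michael describes, in the spirit of DVItype. The names, the conv factor (pixels per sp), and the clamping details are illustrative assumptions, not any particular driver's code.]

    #include <math.h>

    static long pixel_round(long sp, double conv)   /* sp -> device pixels */
    {
        return (long)floor(sp * conv + 0.5);
    }

    /* After setting a character: advance the exact DVI position h (in sp) by
     * the TFM width, advance the pixel position hh by the raster width of the
     * character, then keep hh within max_drift pixels of the independently
     * rounded position. */
    static void set_char(long *h, long *hh, long tfm_width_sp,
                         long raster_width_px, double conv, long max_drift)
    {
        long target;

        *h  += tfm_width_sp;
        *hh += raster_width_px;

        target = pixel_round(*h, conv);
        if (*hh > target + max_drift) *hh = target + max_drift;
        if (*hh < target - max_drift) *hh = target - max_drift;
    }

    /* Ignoring max_drift on a very high resolution device, as Michael asks
     * about, would amount to dropping the two clamping lines above. */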
But on a 2500 dpi device, max_drift calculations may not make much sense. There was a time when the max_drift problem was not even recognized, in the early days of TeX82, and I doubt that it would ever have been recognized at 2500 dpi. It may be reasonable to consider a threshold of resolution below which max_drift calculations are required and above which they may be considered unnecessary.

The max_drift problem only appears in long character strings without an intervening large space. If we were to set the threshold at 1200 dpi (a good rough break between typesetting and pseudo-typesetting), we have 166.666 pixels/em, and with an average character size of 0.5 em, a twenty-character word is 1666 pixels wide, with a maximum drift of 20 pixels. You will certainly not get all that drift in one intercharacter spacing, but even if you did, it is .0016 inch. I doubt that that would be noticeable. It might be a problem for graphics generated by continuous repetition of font characters, however, and that should be considered.

I am not going to advocate this right now, but Michael Doob has raised a point worth looking at in regard to high-resolution devices, and I would like to see what others think of it.

Email concerned with UnixTeX distribution software should be sent primarily to:
elisabet@max.u.washington.edu         Elizabeth Tachikawa
otherwise to:
mackay@cs.washington.edu              Pierre A. MacKay
Smail:  Northwest Computing Support Center      TUG Site Coordinator for
        Thomson Hall, Mail Stop DR-10           Unix-flavored TeX
        University of Washington
        Seattle, WA 98195
        (206) 543-6259

=========================================================================
Date:     Wed, 19 Dec 90 12:09:45 MST
Reply-To: The TUG DVI driver standards discussion list
Sender:   The TUG DVI driver standards discussion list
From:     "Nelson H.F. Beebe"
Subject:  Comment on PostScript handling max_drift

Michael Doob says (I'm paraphrasing): "Why not let the PostScript printer handle the max_drift correction?"

I have seen at least one PostScript DVI driver that relegated the max_drift correction to the PostScript interpreter. I think this is somewhat dangerous, for two reasons. First, Adobe has produced many versions of the PostScript interpreter (about 50 so far), and these have been implemented on several different processors. Second, other companies are producing their own PostScript interpreters. I suspect that if we let the device do it, then we are immediately in for inconsistencies between different printers, either from the software implementation, or from differing floating-point systems.

I therefore plan to keep the max_drift handling in my DVI driver code; there, at least, the user has some control over it.

Relevant to the software question: It was only earlier this year that the first papers on exact conversion between binary and decimal representations of floating-point numbers were published in

@Article{Clinger:floating-point-input,
  author =      "William D. Clinger",
  title =       "How to Read Floating Point Numbers Accurately",
  journal =     SIGPLAN,
  year =        "1990",
  volume =      "25",
  number =      "6",
  pages =       "92--101",
  month =       jun,
  note =        "See also output algorithm in \cite{Steele:floating-point-output}.",
}

@Article{Steele:floating-point-output,
  author =      "Guy L. {Steele Jr.} and Jon L. White",
  title =       "How to Print Floating-Point Numbers Accurately",
  journal =     SIGPLAN,
  year =        "1990",
  volume =      "25",
  number =      "6",
  pages =       "112--126",
  month =       jun,
  note =        "See also input algorithm in \cite{Clinger:floating-point-input}.
                 In electronic mail dated Wed, 27 Jun 90 11:55:36 EDT, Guy
                 Steele reported that an intrepid pre-SIGPLAN 90 conference
                 implementation of what is stated in the paper revealed 3
                 mistakes:
                 \begin{itemize}
                 \item[1.]  Table~5 (page 124):\par\noindent
                            insert {\tt k <-- 0} after assertion, and also
                            delete {\tt k <-- 0} from Table~6.
                 \item[2.]  Table~9 (page 125):\par\noindent
                            \begin{tabular}{ll}
                            for        & {\tt -1:USER!({"}{"});} \\
                            substitute & {\tt -1:USER!({"}0{"});}
                            \end{tabular}\par\noindent
                            and delete the comment.
                 \item[3.]  Table~10 (page 125):\par\noindent
                            \begin{tabular}{ll}
                            for        & {\tt fill(-k, {"}0{"})}\\
                            substitute & {\tt fill(-k-1, {"}0{"})}
                            \end{tabular}
                 \end{itemize}
                 \def\EatBibTeXPeriod#1{\ifx#1.\else#1\fi}\EatBibTeXPeriod",
}

I very much doubt that any existing compiler, run-time library, or PostScript printer employs such exact algorithms. Thus, any expression of character widths as ASCII decimal strings is bound to lead to small errors that can make a difference of 1 pixel, which is quite visible on a 300-dpi printer.

========================================================================
Nelson H.F. Beebe
Center for Scientific Computing
Department of Mathematics
220 South Physics Building
University of Utah
Salt Lake City, UT 84112
Tel: (801) 581-5254
FAX: (801) 581-4148
Internet: beebe@math.utah.edu
========================================================================

=========================================================================
Date:     Wed, 19 Dec 90 17:28:10 +0100
Reply-To: The TUG DVI driver standards discussion list
Sender:   The TUG DVI driver standards discussion list
From:     Eberhard Mattes
Subject:  The standard and operating systems, ... (too long)

.FLAME ON
Summary: On small machines it may be more reasonable not to meet The Standard and to create usable drivers instead.
.FLAME OFF

> \subsubsection{Number of \DVI\ characters per page}
>
> \begin{standard}
> The \DVI\ processor must be able to render a page containing as many as
> 20,000 \DVI\ characters unless this is not
> possible due to device constraints as outlined in
> Section~\ref{escape-clause}.
> \end{standard}

What's `a page'? Think of a driver which can perfectly print a page containing 20,000 characters if it is the first page of a DVI file, but which fails to print such a page after printing hundreds of `normal' pages with hundreds of fonts. How can this be tested?

What about operating system constraints? Of course it is possible to build a driver capable of the above (and of the other requirements) on any `suitable' operating system. But this may increase the size of the code in such a way that the driver cannot be used in a convenient environment.

Does a driver (which wants to meet The Standard) have to meet the above requirement (and the other requirements) in all *possible* environments? Or in all *reasonable* environments? Or in all *convenient* environments? Think of MS-DOS (:=minimal environment) with some device drivers, keyboard driver for the non-US keyboard (:=usable environment), with networking software including NFS, keyboard enhancers (to allow command line editing with history) (:=reasonable environment), and some fancy user interface (:=convenient environment). In this environment, the current release of my drivers cannot handle 20,000 characters on every page. It is almost impossible to make a dvi driver run in such an environment. And it is even more impossible to make the driver fulfill The Standard.
But it is also almost impossible to run TeX in such an environment (TeXs without virtual memory won't run, TeXs with virtual memory will be *very* slow).

The algorithms used by my drivers don't have an upper limit on the number of characters per page; it is only limited by memory. My drivers run both under MS-DOS and OS/2. The *same* executable has problems with 20,000 characters per page under MS-DOS, but no problems (if there's enough swap space) under OS/2. After some small changes (rounding), my drivers will meet The Standard when run under OS/2. But not (the same code! the same executable file!) when run under MS-DOS. Curious situation.

If I want to support 20,000 characters per page in all reasonable environments and in all dvi files (say, append a 20,000-character page to the end of The TeXbook), I have to add extra code (swapping data to disk). The size of this extra code may cause the drivers not to run in a convenient environment. There are even users who don't want swapping (on a diskless workstation in a LAN). But swapping may become necessary due to the increased code size.

I have to choose (I am exaggerating): Either make the drivers meet The Standard. Then the drivers will not work in a convenient environment and users won't like/use the drivers. Or make the drivers usable (with normal documents) even in convenient environments (i.e., make them `usable'). Then the drivers cannot meet the standard.

I am exaggerating even more: What is more important: A usable driver or a driver meeting The Standard?

I think it is possible on every operating system to make every driver *not* meet the standard, by reducing swap space or memory or disk space (:=almost impossible environment). Conclusion: No program meets The Standard.

Maybe The Standard should demand that the drivers meet the requirements only in reasonable environments. New definition of reasonable environment: An environment in which a standard TeX (main memory size approx. 65535) can be run without running into major problems.

Or have I misunderstood The Standard: Should I read `There exists an environment in which this driver meets the requirements' instead of `This driver meets the requirements in every environment'?

Maybe one should say: A driver claiming to meet This Standard must fulfill the following requirements in all but almost impossible environments. Numbers given in parentheses must be fulfilled in all reasonable environments (running TeX). Example: ... 20,000 (10,000) characters ...

Just trying to make The Standard meet everyday life (and the user) on small machines.

Eberhard Mattes (mattes@azu.informatik.uni-stuttgart.de)

=========================================================================
Date:     Wed, 19 Dec 90 17:42:10 +0100
Reply-To: The TUG DVI driver standards discussion list
Sender:   The TUG DVI driver standards discussion list
From:     Eberhard Mattes
Subject:  page/sheet? -std? font naming?

> \subsubsection{Number of \DVI\ characters per page}
>
> \begin{standard}
> The \DVI\ processor must be able to render a page containing as many as
> 20,000 \DVI\ characters unless this is not
> possible due to device constraints as outlined in
> Section~\ref{escape-clause}.
> \end{standard}

Many drivers can put multiple (DVI) pages on one sheet of paper. When putting n (DVI) pages on one sheet: does the driver have to be able to handle 20,000 characters per sheet, or n * 20,000 characters per sheet? I think `render a page containing...' means 20,000 characters per sheet for any n.
Is it allowed to add a `-std' command line option to a driver to make it meet The Standard and still call it a `Standard Driver'? Here's why I'm asking:

> \subsection{Specials}
>
> \begin{standard}
> The Level~0 Standard requires no support for specials. Specials
> not officially defined by the \DVI\ processor standards
> committee should be flagged with a warning when read from the
> \DVI\ file. If any specials are ignored by the processor, the
> processor must issue a warning message. These warning messages
> may optionally be turned off at run time.
> \end{standard}

There are many drivers out there with special \special features. Users are used to those \special commands, and the \special commands are generated automatically by various programs. I don't want the user to be frightened by warnings if everything is ok. If (some day) The Standard defines \special commands and these are different from those used by those users and programs, the driver would have to warn the user about correct (from the user's point of view) \special commands and not warn when incorrect (from The Standard's point of view) \special commands are encountered (and ignored).

And maybe a different memory allocation / swapping scheme is required for meeting the standard (see my previous message). Then, the -std option would make the driver use special (and very slow) algorithms to meet the standard; unusable algorithms, from the user's point of view.

Now about font naming:

> From: "Nelson H. F. Beebe"
> Subject: Some remarks on DVI drivers and directory structures
> [..]
> Here finally is the documentation excerpt:
>
> FONTFMT        For each operating system, there is a built-in list of
>                font file formats; they can be overridden at run-time by
>                this environment variable. Its value contains one or
>                more format strings, separated by semicolons, that
>                define how the font file names (not including the
>                directory paths) are to be constructed. For example, on
>                TOPS-20, the default format list is
>
>                        %n.%dpk;%n.%dgf;%n.%mpxl
>
>                A semicolon separates formats in the list.
>
>                These format specifications are recognized:
>
>                n    Substitute the font name.
>
>                m    Substitute the magnification in old Metafont style
>                     (1000 means 200 dots/inch).
>
>                d    Substitute the magnification in new Metafont style
>                     (dots/inch).
>
>                #p   # is a digit string; substitute the first #
>                     characters of the font name. This facilitates
>                     subdividing large collections of fonts into
>                     subdirectories named by #-character prefixes of the
>                     file names.

You should be able to insert both the horizontal and the vertical resolution into the font name. For instance, I'm using 360x360 dpi fonts and 360x180 dpi fonts. Maybe we should invent a naming scheme for fonts with an odd aspect ratio. Example for a device resolution of 360x180 dpi:

   /texfonts/epsonlq/h360v180/cmr10.pk     'portrait mode'
   /texfonts/epsonlq/h180v360/cmr10.pk     'landscape mode'

To print in landscape mode on a device with an odd aspect ratio, one needs fonts with an 'inverted' aspect ratio. My drivers currently use the 'horizontal' resolution for naming the font in both cases:

   360 dpi for portrait mode
   180 dpi for landscape mode

BTW, my drivers not only support portrait mode and landscape mode, they support all 8 combinations of rotation/reflection; but only two sets of fonts (on devices with an odd aspect ratio) are required, as the fonts are rotated (but not scaled) by the drivers.
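[Editorial note: a small sketch, in C, of how one of the FONTFMT format strings quoted above might be expanded. This is not Beebe's actual code; only the %n, %d, %m and #p specifiers are handled, the dpi*5 rule follows the "1000 means 200 dots/inch" convention stated in the excerpt, and error handling is omitted.]

    #include <ctype.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    static void expand_format(const char *fmt, const char *name, int dpi,
                              char *out, size_t outsize)
    {
        size_t o = 0;
        out[0] = '\0';
        while (*fmt && o + strlen(name) + 16 < outsize) {
            if (*fmt != '%') { out[o++] = *fmt++; out[o] = '\0'; continue; }
            fmt++;                                        /* skip '%'                  */
            if (*fmt == 'n') {                            /* font name                 */
                o += sprintf(out + o, "%s", name); fmt++;
            } else if (*fmt == 'd') {                     /* new style: dots/inch      */
                o += sprintf(out + o, "%d", dpi); fmt++;
            } else if (*fmt == 'm') {                     /* old style: 1000 = 200 dpi */
                o += sprintf(out + o, "%d", dpi * 5); fmt++;
            } else if (isdigit((unsigned char)*fmt)) {    /* #p: prefix of the name    */
                int k = atoi(fmt);
                while (isdigit((unsigned char)*fmt)) fmt++;
                if (*fmt == 'p') {
                    o += sprintf(out + o, "%.*s", k, name); fmt++;
                }
            }
        }
    }

    int main(void)
    {
        char path[256];
        expand_format("%n.%dpk", "cmr10", 300, path, sizeof path);
        printf("%s\n", path);                             /* cmr10.300pk   */
        expand_format("%n.%mpxl", "cmr10", 300, path, sizeof path);
        printf("%s\n", path);                             /* cmr10.1500pxl */
        return 0;
    }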
Eberhard Mattes (mattes@azu.informatik.uni-stuttgart.de)

=========================================================================
Date:     Wed, 19 Dec 90 20:47:46 LCL
Reply-To: The TUG DVI driver standards discussion list
Sender:   The TUG DVI driver standards discussion list
Comments: W: Invalid RFC822 field -- "(5.61/PURDUE_CS-1.2) ID ; WED, 19 DEC 90 21:". Rest of header flushed.
Comments: E: "From:"/"Sender:" field is missing.
From:     Undetermined origin c/o Postmaster

> Stefan v Bechtolsheim will have the best answer to the question
> about the results of letting PostScript do the max_drift calculation.

And here is what Stephan v Bechtolsheim (spelled with a "ph") says:

One of the things I DON'T LIKE about PostScript is that when it comes down to it you NEVER know precisely what happens. Let's say I do an rlineto from (12.34, 2.34) to (-2.3425, 77.834) in the user coordinate system, assuming the width of the line is 1.324. You can't determine based on that PRECISELY which pixels are blackened and which are not. And the above example applies to about everything in PostScript.

Which is the reason why in MY driver everything is done inside the driver as far as positioning is concerned. The driver therefore MUST know about the PostScript printer's resolution (you just lost true device independence as claimed by PostScript).

Anyway, you guys: have a nice Christmas and a happy new year. And forgive Purdue's mailer, which generates some junk in the mail's header when I mail this out. Maybe next year this will be fixed!

Stephan v. Bechtolsheim          Computer Sciences Department
svb@cs.purdue.edu                Computer Science Building
(317) 494 7802                   Purdue University
FAX: (317) 494 0739              West Lafayette, IN 47907

=========================================================================
Date:     Fri, 28 Dec 90 16:10:24 -0800
Reply-To: The TUG DVI driver standards discussion list
Sender:   The TUG DVI driver standards discussion list
From:     Pierre MacKay
Subject:  Help with fontdesc files
In-Reply-To: Andrew Zachary's message of Thu, 27 Dec 90 10:42:38 EST
             <9012271542.AA01917@dopey.cray.com>

> . . .
> except I cannot figure out how to use the fontdesc file. In particular,
> the documentation and the example fontdesc file imply that a given
> font, say cmr10.300pk, will be in the directory /usr/lib/tex/fonts/cmr10.
> However, the fonts in the TeX package are not distributed that way. As
> you know, they are distributed by resolution, so that the file cmr10.300pk
> will be in the directory /usr/lib/tex/fonts/pk/pk300. All my fonts, and
> I have quite a few of them, are organized in just this way.
>
> The instructions provided with both mctex and with SeeTeX do ***NOT***
> make clear how to rewrite either the fonts.c routine or the fontdesc
> file to accommodate my present font arrangement. Could you suggest
> what changes I should make in either fonts.c or in the fontdesc file?

In working over the small number of drivers that we still actively support for the UnixTeX distribution, we have found few things more frustrating than trying to evaluate the various ways of compartmentalizing font directories. Until there is a real agreement, supported by argument, about the reasons for preferring one form over another, we are inclined to bypass the question.

This can't, of course, go on forever. If the namelist in a directory gets over a certain size, nasty things happen. File access degrades in non-linear ways, and you can even reach the point where globbing * is no longer possible.
I don't think font counts have reached that point yet, but it is a worry. Certainly, when the rich resources implied by Karl Berry's recent article in TUGboat 11:4, "Filenames for Fonts", become available, we will need something, but what? I worked out a scheme to distribute fonts into subdirectories in connection with the old dvi2ps driver, and a script for that is on the distribution, but I don't think it is compatible with fontdesc compartmentalization. The question is ultimately going to surface in the discussion of driver standards that is now going on, but I have no ready answers for you now.

We concentrate our efforts nowadays on TeXPS, and I confess that for the moment I am using an absolutely flat single directory for all pk300 files. Even when I was compartmentalizing them, I linked them all to a flat directory, because xdvi looked at them one way, and TeXPS another. We do not distribute fonts compartmentalized in any scheme, for all the above reasons. We are more interested in keeping families distinct. Computer Modern is one family, in the broad sense; LaTeX is another; Utilityfonts is a sort of non-family of fonts associated with the development of TeX and METAFONT; and ams is the AMS fonts.

To get them into fontdesc compartments I am afraid you will have to move them there directly. You might find the Makefile SUBDIRmakefile useful as a guide, even if it is not for the same compartments. It is in the bottom-level DVIware directory. I do hope and expect that we will get this sorted out eventually.

Email concerned with UnixTeX distribution software should be sent primarily to:
elisabet@max.u.washington.edu         Elizabeth Tachikawa
otherwise to:
mackay@cs.washington.edu              Pierre A. MacKay
Smail:  Northwest Computing Support Center      TUG Site Coordinator for
        Thomson Hall, Mail Stop DR-10           Unix-flavored TeX
        University of Washington
        Seattle, WA 98195
        (206) 543-6259