[-empyre-] Books And pixels

John Haltiwanger john.haltiwanger at gmail.com
Wed Jun 16 22:03:36 EST 2010


On Fri, Jun 11, 2010 at 2:33 PM, Joost Kircz-HvA <j.g.kircz at hva.nl> wrote:
> John Haltiwanger wrote:
>>
>> This sounds like research that would benefit the humanities
>> significantly. Has there been much cross-pollination with your
>> research into fields outside of the physics/computer science/cell
>> biology that you mention in the end of the abstract?
>>
>
> In principle it should work in fields where we have structured
> reasoning and a well-designed line of argumentation. This work started
> with physics, and Anita de Waard is expanding it to cell biology.
> Modularisation is a way to handle hypertext and to help people skip
> those parts of an article they a) know already, b) are not interested
> in, or c) find too technical.
> Unfortunately, I'm not aware of other research projects along this line.
> Please read more than the abstract.

I always intended to, but it is thesis season here, so the paper had to
enter the reading list at its appropriate time.

Having read the piece now, I would first say that I found it quite
exciting to read about these adventures in semanticizing and
modularizing documents. It is refreshing to see a productive
hybridization of the humanities (through your analytical approach) and
engineering (for the actual implementation). It is obviously a
productive exercise, and one I believe should be more common,
especially where new media theory meets computer science, engineering,
and design.

However, it seems that once again science has a privileged position in
the adoption of the new processes you've developed. This is far from a
criticism; it simply points to the underlying schism between rhetoric
and empirical experimentation. One field that could most easily adopt a
format similar to ABCDEF or IMRaD is the work of Richard Rogers and the
Digital Methods Initiative. One could even imagine a structure where
the interface allows one to "re-run" the methodology of a given text,
which is, after all, simply a computational process whose parameters
are already written down and is thus reproducible, short of Google
disappearing or blocking an IP. This would allow one to compare
contemporary data with the data of the original publication, at which
point it becomes possible to revisit that publication's conclusions.
Presumably other social sciences could integrate something similar,
though without the advantage of "refreshable" data sets.
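
To make the "re-run" idea concrete, here is a minimal sketch in
Python. Everything in it is invented for illustration (the parameter
names, the fake search function, the result sets); it is not how the
Digital Methods Initiative actually works, only a toy of the principle
that a published method can be re-executed and its fresh results
diffed against the archived ones:

    # Toy sketch of a "re-runnable" methodology: the published study is
    # just its parameters plus the result set it produced at the time.
    # All names and data here are hypothetical placeholders.

    def rerun(params, fetch):
        """Re-execute the stored method against a caller-supplied source."""
        return fetch(params["query"])[:params["top_n"]]

    published = {
        "query": "climate change",
        "top_n": 5,
        "results_2010": ["a.org", "b.org", "c.org", "d.org", "e.org"],
    }

    # Stand-in for a live search API (the part that breaks if Google
    # disappears or blocks your IP).
    def fake_search(query):
        return ["a.org", "c.org", "f.org", "g.org", "b.org", "h.org"]

    fresh = rerun(published, fake_search)
    overlap = set(published["results_2010"]) & set(fresh)
    print(f"{len(overlap)}/{published['top_n']} original results persist")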

In terms of straight theory papers, it is hard to imagine the kind of
processes required to connect various arguments and 'discourse
relations' in any satisfyingly automated way. Perhaps one angle would
be to automatically link citations of a given text across the various
texts that reference it. Building back from there, it might be possible
to create some sort of categorization scheme for the contexts in which
the citation appears. However, this hardly sounds like an easily
automated task. Perhaps I'm not thinking outside the box enough, but it
is hard to see your most recent approaches carrying over to purely
theoretical documents. (It would be great to see something like the
DOPE project emerge for organizing and exploring theory, though, or to
see something like XPharm used to generate a knowledge base of existing
claims.)
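
For what a first, crude pass at the citation-context idea might look
like, here is a sketch in Python. The citation marker, the cue words,
and the category labels are all invented for illustration; the naivety
of the keyword matching is precisely why I doubt this automates
satisfyingly:

    import re

    # Invented cue words; a serious attempt would need real NLP.
    CUES = {
        "agreement": ("following", "builds on", "as shown in"),
        "critique": ("contra", "against", "however"),
    }

    def citation_contexts(texts, marker):
        """Yield every sentence in the corpus that cites the target work."""
        for text in texts:
            for sentence in re.split(r"(?<=[.!?])\s+", text):
                if marker in sentence:
                    yield sentence

    def categorize(sentence):
        """Bucket a citing sentence by naive keyword cues."""
        lowered = sentence.lower()
        for label, cues in CUES.items():
            if any(cue in lowered for cue in cues):
                return label
        return "uncategorized"

    corpus = [
        "Following (Rogers 2009), we treat the web as data.",
        "However, (Rogers 2009) overstates the case.",
    ]
    for s in citation_contexts(corpus, "(Rogers 2009)"):
        print(categorize(s), "--", s)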

As a last point, there is also the question of whether the humanities
would be well served by the sort of rigid format that in fact enables
your approaches. Theory (outside of social science) just isn't that
slice-and-dice-able, even when it does follow the standard academic
mode. Pushing it deeper into a rigid structure could be seen as a
counter-productive violence. (I'm not saying that you claimed
otherwise; I'm simply thinking out loud.)

>>> In that sense, my interest is the question "What communication
>>> demands what technology?" and explicitly not "gee, look, MS Word
>>> 2020 will be able to show the coding just as WordPerfect does."
>>>
>>>
>>
>> I was wondering if you could follow this line of thought a bit
>> further, perhaps with more detail on what fits where, and how to
>> decide. Have we already developed the ideal grammars of typography
>> for dealing with long-form prose (essays, books) through our
>> experience with printing ink on paper? Or does the screen offer
>> space for new grammars that we have yet to encounter?
>>
>
> Typography is a helper for structuring a text, not the other way
> around. But just consider the differences between typesetting and
> word processing. MS Word and LaTeX (or TeX) mix presentation and
> structure. The whole point of SGML (1986!) was to split the two.
> After a long period of slumbering through the HTML phase, XML, as an
> SGML dialect, is picking up.
> There are more issues in this context: 1) the content (meaning) must
> be clear, and emphases must be added (bold, exclamation marks, etc.);
> this is true for digitally born material. 2) Given a historically
> standardised typography, you can use it to analyse a text; this is
> true for digitisation programmes. 3) Using tags of all kinds enables
> you to change a text, search a text, etc.
> Presentation comes after content formulation.
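
To make the split you describe concrete, a toy sketch in Python (the
tag names and the rendering rules are mine, invented for illustration,
not from any standard DTD). The document stores only structure, and
presentation is computed from it afterwards:

    import xml.etree.ElementTree as ET

    # Content carries structure only: what a span *is*, not how it looks.
    doc = ET.fromstring(
        "<para>Modularisation helps readers "
        "<emph>skip</emph> what they <term>already know</term>.</para>"
    )

    # One possible presentation mapping; swap it without touching content.
    STYLE = {"emph": ("<b>", "</b>"), "term": ("<i>", "</i>")}

    def render(elem):
        """Walk the structural tree and emit one chosen presentation."""
        out = elem.text or ""
        for child in elem:
            open_t, close_t = STYLE.get(child.tag, ("", ""))
            out += open_t + render(child) + close_t + (child.tail or "")
        return out

    print(render(doc))

The same structured source could just as well be rendered for print,
screen, or speech.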

It was the printing press that led to our current typographical
grammars, so I think there is a strong argument that a medium rewrites
the rules of how typography should structure text (something you seem
to agree with). The question is how we understand the process of such
developments. It took more than a century before type began to look
like type rather than manuscript, and that was only the beginning of
this rewriting. Do we see any real movements happening now, or are we
still a century away from the changes the computer will make to how we
read and write? Given how quickly computers change, it is difficult to
guess. Yet I would argue it remains an important question to
investigate.

Personally, I think there is a great deal of space for new
punctuation, especially 'interactive' punctuation, on the screen.
Through the computer metamedium we have the capacity to represent
concepts or expressivities in any form we wish, and to give them any
agency the computer can provide. There is space even for a more (or
purely) symbolic discourse in which the symbols act as intermediaries,
grounding points you can 'click' to have them translate into your
native tongue. This is unlike translation between phonetic languages:
here there is instead a central symbolic language from which, and to
which, all translations are made. For an example of this, see iConji
( http://www.iconji.com/ ). Certainly any such transition would
involve massive changes in typographical grammar (which in fact
already differs between many languages).
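
As a throwaway sketch of that hub-and-spoke arrangement in Python (the
symbol ids and glosses are invented; this borrows nothing from
iConji's actual inventory):

    # Each symbol is the hub; every natural language hangs off it as a
    # spoke, so no pairwise dictionaries are needed.
    LEXICON = {
        "sym:greet": {"en": "hello", "nl": "hallo", "es": "hola"},
        "sym:book":  {"en": "book",  "nl": "boek",  "es": "libro"},
    }

    def translate(symbols, lang):
        """'Clicking' a symbol stream renders it in the reader's tongue."""
        return " ".join(LEXICON[s][lang] for s in symbols)

    message = ["sym:greet", "sym:book"]
    print(translate(message, "en"))  # hello book
    print(translate(message, "nl"))  # hallo boek

Adding a new language then means adding one spoke per symbol, not a
new dictionary for every existing language pair.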

I'll stick with pushing for new and interactive punctuations for now, though.

Regards,
John

