[sc34wg3] Re: Montreal meeting recommendations

Lars Marius Garshol sc34wg3@isotopicmaps.org
18 Sep 2001 09:18:09 +0200

* Lars Marius Garshol
| As a vendor I'm concerned that this seems to mean throwing away
| everything we've done so far (as vendors), and I'm not very keen to
| do that.

* Steven R. Newcomb
| I think you greatly exaggerate the negatives of this proposal for
| vendors like Ontopia, and I strongly disagree with you about its
| likely effects.

I don't think you understand what my concern is. I probably didn't
explain it very well, so I'll make another attempt.

Any non-trivial piece of software uses interfaces to separate the
various modules it consists of from one another. The good thing about
this is that it enables abstraction to be used as a scalpel to cut a
huge, unmanageable project into manageable pieces.

This scalpel is double-edged, however, because it makes the software
on both sides of the interface depend on the interface. If the
interface is very abstract it can be used throughout many different
parts of the software, which makes the software consistent, easier to
learn, reduces the amount of code, and enables code reuse, but at the
same time it causes large parts of the software to depend on this
highly reused interface.  If you change the interface you have to
change all the pieces of code that use it.
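To make this concrete, here is a hypothetical sketch (not taken from
TM4J, the Ontopia Engine, or any real topic map engine; all names are
invented) of code on both sides of a small interface. If the Topic
interface gains, loses, or reshapes a method, both the implementation
below it and every caller above it must change:

```java
import java.util.ArrayList;
import java.util.List;

// The shared interface: everything on both sides depends on it.
interface Topic {
    String getName();
    List<String> getOccurrences();
}

// One side of the interface: an in-memory implementation.
class SimpleTopic implements Topic {
    private final String name;
    private final List<String> occurrences = new ArrayList<>();

    SimpleTopic(String name) { this.name = name; }

    public String getName() { return name; }
    public List<String> getOccurrences() { return occurrences; }

    void addOccurrence(String occurrence) { occurrences.add(occurrence); }
}

// The other side: a module that knows only the interface, not the class.
class Exporter {
    static String export(Topic topic) {
        return topic.getName() + " (" + topic.getOccurrences().size()
               + " occurrences)";
    }
}

public class InterfaceDemo {
    public static void main(String[] args) {
        SimpleTopic topic = new SimpleTopic("Montreal");
        topic.addOccurrence("http://example.org/montreal");
        System.out.println(Exporter.export(topic)); // prints "Montreal (1 occurrences)"
    }
}
```

Redefining Topic (say, merging names and occurrences into a single
notion) would force edits to SimpleTopic, to Exporter, and to every
other module written against the interface, which is the cost the text
above describes.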

A plausible generalization of XML might be to remove the artificial
distinction between attributes and elements. If you did this it would
mean that the DOM interface would have to change quite dramatically in
order to support this generalization. If you change the DOM radically
it means that every DOM, XPath, XSLT, and XQuery implementation that
uses it also has to change. Lots of packages like dom4j, JDOM, XML
editors, web publishing tools, XML generation tools, validators,
document comparison tools, document databases, and so on and so forth
also have to change. (Even TM4J would have had to change.) 

It is very much the same with topic maps and the topic map model.
If we radically change the model it means that we have to change every
topic map engine, and every piece of software built on those engines.
That is a huge cost, and one that will have to be paid by the vendors,
their customers, the open source developers, and everyone who's done
work with the open source tools.

If this discussion seems a bit vague, the best thing you can do is
probably to download TM4J and look at its javadoc. That should show
you what the interfaces I am talking about look like, even if you are
not a Java programmer. tmproc, TM4J, K42, and the Ontopia Engine
all have different interfaces, but they are roughly at the same level
of abstraction, so this discussion applies equally to all of them.

This doesn't mean that we should never change the model. Please don't
think that that's what I'm saying. It just means that we should be
very careful when we do it, and that we should take care not to do it
any more often than what is absolutely necessary.  There will be times
when we have to, and I am resigned to that, but let us be careful, and
let us make these changes knowing what their costs are.

| * None of our vendors is so naive as to think that the
|   software it has already created will form the basis
|   of its business forever.  Everyone knows that the
|   maintenance of a technology product line is a
|   never-ending effort that frequently involves "eating
|   one's own children" in order to remain competitive.

This is a truism, but it doesn't mean that model changes can be made
at no cost.

| * No existing Topic Maps system vendor will lose the
|   value of its name-recognition on account of the fact
|   that the definition of Topic Maps becomes more
|   generalized.

I don't worry about the name "Ontopia"; I worry about the name "topic
maps".

| * I don't want to draw the conclusion that Ontopia's
|   attitude about the standardization process is not
|   oriented toward building the largest possible
|   competitive arena for public and fairly-distributed
|   private benefit, but rather toward freezing the
|   dimensions of that arena in order to influence the
|   distribution of benefit in a way that will unfairly
|   favor Ontopia's existing product line.

I don't see how Ontopia could do this, and even if we could I wouldn't
be interested. I am sitting here way past midnight writing this email
because I want a standard with no room for differing interpretations,
and of which there will be as many interoperable implementations as
possible.  If I didn't think that extremely important I wouldn't have
bothered with this model work at all.

|   Please reassure us that this isn't so!

Frankly, I can't understand why you need this reassurance. I can't see
why you need to ask for it. I don't think empolis is any more keen on
the core model than we are, but that doesn't for a moment make me
suspect that they are intent on some kind of monkey business.  Why the
fact that I am concerned about this core model thing makes you so
suspicious is a mystery to me.

* Lars Marius Garshol
| As a person interested in topic maps and wishing them to succeed I
| am worried that changing topic maps in a radical way (for the third
| time) is something that has great destructive potential for the
| community.
* Steven R. Newcomb
| Not true.  It is not a radical change simply to acknowledge that
| what we've been doing is a special case of something that has a
| wider scope.

If you don't understand that it is, you need to read what I wrote
above again. Changes that force just about every single piece of topic
map software in existence to change are radical changes.

* Lars Marius Garshol
| How can we ensure confidence in the standards we make when we keep
| changing them all the time? (SGML has made, and kept, a vow never to
| break valid documents.)
* Steven R. Newcomb
| How does this proposal break valid documents?  

It doesn't. That was an analogy, but not a very direct one.

| On the contrary, in fact, what we're proposing gives the possibility
| of recognizing and integrating the *intended* meaning of documents
| produced according to software designed according to *varying*
| interpretations of the assertion types described in the existing
| standard(s).

This is not a problem with existing topic map documents, but with
existing topic map software. And don't think that breaking software is
just a problem for the vendors. Several of our customers have much
greater investments in software built on our software than they do in
topic map data. So the fact that the documents are not broken is of
little help to them.

I would expect this to be true in general throughout the topic map
world.  Some mainly have data, and for them this is not so serious.
Others have mainly software investments, and for them this is much
more serious.

| Indeed, it occurs to me that perhaps this is exactly what you fear:
| the idea that the existing nuances of Ontopia's interpretation
| would, according to the new proposal, be distinctly identifiable,
| just like everyone else's, and the potential for Ontopia's nuances
| to achieve de-facto standard status, with concomitant huge
| advantages for Ontopia's existing software investments, will be
| lost.  I'd welcome your reassurance that this isn't your attitude.
| Cooperation between competitors depends on trust that everyone, at
| least in the cooperative context, is working for the *common*
| benefit.

I can't believe you're even asking this question!

Who is it that has been crying out for a formalized specification with
model for the past twelve months?  Who was upset that XTM 1.0 was far
too vague long before it was finalized?  Who even put together an XTM
canonicalization proposal in order to make it possible for anyone to
do automatic comparisons of XTM implementations?

The fact that ISO 13250 and XTM 1.0 are fuller of holes than a Swiss
cheese executed by a firing squad armed with shotguns has been driving
me absolutely nuts for the past year (as should be richly evidenced by
various list archives), and I have been foaming at the mouth trying to
get this community to do something about it.

What more reassurance could I possibly give? What more could I
possibly DO?!?

In addition, you are suggesting that I am severely lacking in
professional integrity, which is an obvious insult. I don't really
care about that, I'm just saying it to make it clear to you what
you're doing. What upsets me is that what I've been doing for the past
year does not in itself constitute reassurance. When it doesn't I
don't see how any statement I make now could reassure you, so why are
you even asking about this?

| Like almost everyone else involved in this cooperative activity, I'm
| interested in the huge increase in the size of the market that will
| result from the potential to integrate all Topic Maps resources
| emanating from software offered by all vendors.  

This is what I hope we can achieve by creating a solid model-based
standard free of holes. This is what has been my goal with this model
work for the past year. I'm not convinced that PMTM4 can do this
(which is why I've been so opposed to it), but now that I see where
you want to go I am willing to join the effort to try and get there.

In the meantime I still think we need the infoset model, which I
believe has the potential to achieve this.

| I'm unmoved by anyone's desire to achieve hegemony in this
| marketplace.  From the perspective of the public interest, such
| hegemony would be a bad thing, at best.  We would all be
| irresponsible (and possibly corrupt) if we allowed the international
| standardization process to be used in such an unfair way.

I fully agree. If you think this discussion is about Ontopia's desire
for a greater share of the market, you need to think again.

| I have been saying for some time that all of our controversies have
| been about the meaning of the "Topic Maps" brand name.  You seem to
| want to keep the scope of this brand small and specific.  Many of us
| want to make the scope as large as necessary and practical in order
| to maximize the total public and private benefit.

I understand that that is what you want, but I am concerned that if
you do it the way you are currently doing it, you will do great damage
to the topic map community as a whole.

| Maybe this controversy will ultimately be resolved by vote.  If so,
| I know which way I'm going to vote, and I will also lobby implacably
| on behalf of our common interests.

Nobody should be expected to do otherwise.

I think it's too early for a vote. I think we should discuss this
first, so that everyone has an opportunity to hear the arguments from
both sides, and if that doesn't resolve the issue we should hold a
vote.

* Lars Marius Garshol
| I think we should stick to the plan agreed to in Montréal for the
| time being, and only revise it when we've learned enough about the
| core model and its relationship to topic maps to know what best to
| do with the core model.
* Steven R. Newcomb
| If we decide not to act until everyone feels that they have "learned
| enough", we will never act at all.  Those who refuse to learn it
| will always be able to block it.

I'm not sure you understood what I meant. What I meant was that we
should go ahead and make two models, one core model and one based on
the infoset model, with a mapping between them. 

Although I am worried about the possible consequences of the core
model for topic maps (ISO 13250 meaning), I don't think we should stop
working on it, nor do I think we should separate it out into a
project of its own within SC34 at this time. Maybe that will be the
right thing to do in the end, but I don't think we can know that yet.

| I wasn't trying to say that a syntax is analogous to a model.  I was
| trying to make the point that the only thing we know for sure about
| the future is the fact that it will be different from today.  The
| best and only way to protect our investments is to make them as
| adaptable as may be practical.

I agree, and that is why I think working on the core model is
worthwhile, despite the risk involved.

| Currently, Topic Maps has no standard underlying model.

True. However, every piece of topic map software implements _a_ model,
and the models of the engines that I have looked at seem to be very
similar. In fact, their models are very close (although not identical)
to the infoset model. The work on the infoset model (and subsequent
conformance testing efforts) should be enough to deal with the
incompatibilities that are bound to exist.

I see (now) why you want to work on a core model, but the core model
is not needed in order to solve the interoperability problems arising
from the current lack of a standardized model. The infoset model can
do that, and other models also could.

That I am concerned about the core model does NOT mean that I don't
want a model. What it means is that I am concerned about that
particular model. In my opinion, what topic maps (ISO 13250 meaning)
need more than anything else is a model. I wrote the infoset model in
my non-existent spare time specifically to deal with that omission,
and because I was thoroughly dissatisfied with the available
alternatives.

| What it has instead might be considered analogous to an implicit
| RDBMS schema, but that schema can't be rigorously and explicitly
| expressed because there is no underlying model (analogous to the
| "relational model" that underlies RDBMSs) with which to express it.
| The proposed PMTM4-like "core model" provides that underlying model.

It provides a proposal for such a model, as does the infoset model.
One of the many problems I have with PMTM4 is that it is by no means
formal enough to serve as a firm basis for implementation. I know it
is claimed that people have implemented it, but the one implementation
I have seen is a partial skeleton on which an implementation can be
built, and it contains extensions to PMTM4. Even so, it is not able to
represent all of topic maps (XTM 1.0 meaning) in its current state,
nor does it support export or import of ISO 13250 or XTM documents at
all.

The fact that there are implementations does not mean that these will
be interoperable. There are implementations of XTM 1.0 as well, and I
am just as concerned about the interoperability of those.

I also have many other issues with PMTM4, but hopefully we'll be able
to resolve them as we move forward with the model process. I know you
have issues with the infoset model, and I hope we will likewise be
able to resolve those as we move forward.

| If the world needs Topic Maps, it also needs such an underlying
| model, and the sooner we have it, the better.

Which is what I've been shouting at the top of my lungs for the past
twelve months. Frankly, I don't think either ISO 13250 or XTM 1.0
should have been published at all without a model.

| My own recollection is that SGML was not changed because certain
| persons, with excellent and public-spirited intentions, did
| everything possible to prevent it from being changed (except to
| rectify some bugs, all of which were minor).  In the end, I guess
| they overdid it, because the only way to make the changes that were
| needed in SGML was for W3C to simply ignore ISO almost entirely.
| The W3C effort therefore necessarily *had* to change the name of the
| standard.  Nowadays, it's not normally called "SGML"; it's much more
| widely known as "XML".

This matches my understanding. (XML and SGML are not the same, though.)

| If ISO cannot make an *adaptable* Topic Maps standard, [...]

I agree that this is a worthwhile goal, and that is why I think
continuing with the core model is worth the risk.

--Lars M.