[sc34wg3] a new name for the Reference Model

Michel Biezunski sc34wg3@isotopicmaps.org
Thu, 23 Jan 2003 19:47:46 -0500


> * Michel Biezunski
> | 
> | I think this is a misunderstanding. The purpose of the reference
> | model as I understand it is not to prescribe how implementations
> | should work. Instead, it should analyze the result of what gets
> | interchanged in XTM (and probably HyTM as well) and explicitly
> | declare what is what. It actually declares the list of nodes and
> | arcs which result from what gets interchanged. I don't see how this
> | impacts the existing implementations at all.

* Lars Marius Garshol

> If that is truly how it is intended to be used I agree. The question
> is whether that is the case. If it is, for example, why does it need a
> conformance section? It talks about conforming applications (like
> SAM), implementations, and data sets. Why isn't all this just left to
> the applications?


99% should be left to the application. The remaining 1% is what should
be detected: the cases that might result in really pathological topic
maps, ones that look fine to a parser but would actually behave so
ridiculously that they end up being unusable.
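
To make that concrete, here is one rough sketch, in Python, of the kind
of check that could catch such a case. The specific pathology chosen (a
cycle in type-instance relations, so that "list all instances of X"
never terminates in a naive engine) and the data layout are only
illustrative assumptions, not anything taken from the SAM or the RM:

def has_type_cycle(type_instance_pairs):
    """type_instance_pairs: iterable of (type_id, instance_id) tuples."""
    # Build a directed graph: type -> the topics declared as its instances.
    graph = {}
    for type_id, instance_id in type_instance_pairs:
        graph.setdefault(type_id, set()).add(instance_id)

    # Depth-first search for a back edge, i.e. a cycle.
    WHITE, GREY, BLACK = 0, 1, 2
    state = {}

    def visit(node):
        state[node] = GREY
        for succ in graph.get(node, ()):
            if state.get(succ, WHITE) == GREY:
                return True                     # cycle found
            if state.get(succ, WHITE) == WHITE and visit(succ):
                return True
        state[node] = BLACK
        return False

    return any(state.get(n, WHITE) == WHITE and visit(n) for n in graph)

# Parses fine, behaves ridiculously: "person" is an instance of "composer"
# and "composer" is an instance of "person".
print(has_type_cycle([("person", "composer"), ("composer", "person")]))  # True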

> | I would like to revisit this statement because I feel that TMCL has
> | a central role to play in distinguishing what is a user-defined
> | topic map application and what is the standard application (as
> | opposed to other "non-standard" applications).
> |
> | It looks to me that the starting point is this: once we start with
> | the topic map model, we create a set of constraints to define a
> | particular application. The question is whether the SAM itself
> | should be understood as an application of the RM with specific
> | constraints (such as pre-defined semantics of assertion types) in
> | the same sense that TMCL would define what an application is.
> 
> But, Michel, aren't you now contradicting what you wrote in your first
> paragraph above? I'll explain how I see it first, then explain how
> what you write seems to me to contradict that.
> 
> If the RM is only an analytical tool then all topic maps will be
> instances of the SAM, because that will be the implementor model.  In
> that case TMCL will be a constraint language for SAM instances, and so
> there will be no interaction with the RM because TMCL will not allow
> you to modify the SAM model except by defining rules like those in
> OSL:
> 
>   - topics of type "composer" must have exactly one association of
>     type "born-in" where they play the role of "person" and there is
>     exactly one other role of type "place" played by a "city"
> 
>   - topics of type "opera" may have one name in the scope "Italian"
> 
>   - topics of type "composer" may also be instances of the type
>     "librettist"
> 
>   - topics of type "person" must have unique values for their
>     occurrences of type "email"
> 
> and so on. Nothing of this impacts the SAM in the slightest, nor does
> it affect its relationship to the RM in any way.
> 
> If, on the other hand, TMCL is to be used to define new applications
> of the RM that modify the SAM then I don't see how any implementor can
> possibly implement the SAM rather than the RM since their users will
> expect to be able to change things around using TMCL.
> 
> Do you see what I mean?

Yes, I see what you mean. As far as I know, there has not been any
decision made about what TMCL is and what it contains. What you propose
and what I propose are different, albeit complementary. I don't see why
you and I couldn't each get what we describe, even if it's different.
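
For concreteness, a constraint like the composer/born-in rule Lars
lists above could be checked against a SAM-style instance along the
following lines. This is only a rough Python sketch; the dictionary
layout and the names are illustrative assumptions, not TMCL, OSL, or
SAM syntax:

def check_composer_born_in(topic_map):
    """Every topic of type 'composer' must play the 'person' role in
    exactly one 'born-in' association whose 'place' role is played by a
    topic of type 'city'."""
    errors = []
    types = topic_map["types"]           # topic id -> set of its type ids
    for topic, its_types in types.items():
        if "composer" not in its_types:
            continue
        born_in = [a for a in topic_map["associations"]
                   if a["type"] == "born-in"
                   and a["roles"].get("person") == topic]
        if len(born_in) != 1:
            errors.append("%s: expected exactly one born-in, found %d"
                          % (topic, len(born_in)))
            continue
        place = born_in[0]["roles"].get("place")
        if place is None or "city" not in types.get(place, set()):
            errors.append("%s: 'place' role is not played by a city" % topic)
    return errors

tm = {"types": {"verdi": {"composer"}, "busseto": {"city"}},
      "associations": [{"type": "born-in",
                        "roles": {"person": "verdi", "place": "busseto"}}]}
print(check_composer_born_in(tm))   # [] -- no violations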
  
> * Lars Marius Garshol
> |
> | I don't see how that follows at all. If I declare in my TMCL schema
> | that email occurrences must be unique, how is that incompatible with
> | anything? It's not even an extension, it's just a specification of a
> | constraint.

OK.
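
And such a constraint is easy enough to check. A sketch in the same
illustrative style (again, not any actual TMCL notation):

from collections import defaultdict

def check_unique_emails(occurrences):
    """occurrences: iterable of (topic_id, occurrence_type, value)."""
    seen = defaultdict(list)                  # email value -> topic ids
    for topic, occ_type, value in occurrences:
        if occ_type == "email":
            seen[value].append(topic)
    # Any value claimed by more than one topic violates the constraint.
    return {value: topics for value, topics in seen.items()
            if len(topics) > 1}

print(check_unique_emails([
    ("verdi", "email", "g.verdi@example.org"),
    ("puccini", "email", "g.verdi@example.org"),   # duplicate value
]))
# {'g.verdi@example.org': ['verdi', 'puccini']}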

> * Michel Biezunski
> |
> | It depends how you want your application to behave when merging with
> | other topic maps that have the same characteristics but have been
> | created with other sets of constraints. Understanding how topic maps
> | merge when they come from different environments is really central
> | to what we're doing. This is where the value of topic maps is. If
> | what we do is just a way to merge databases that have an identical
> | schema, I don't see why we are wasting our time. We can just use
> | existing, well-established technologies to do exactly that.
> 
> Yes, but what does this have to do with the SAM/RM relationship? I
> don't see anything in your answer that relates to the RM at all.

Because, thanks to the RM, it becomes possible to regard applications
other than those strictly conforming to the SAM as topic map
applications as well.
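
To illustrate what I mean by merging across environments, here is a
simplified sketch of the basic rule: topics that share a subject
identifier collapse into one topic whose characteristics are the union
of both. The data layout in Python is an assumption of mine, and
cascading merges are deliberately ignored:

def merge_topic_maps(*maps):
    """Each map is a list of topics; each topic is a dict with a set of
    'identifiers' (subject identifiers) and a set of 'names'."""
    merged = []                              # resulting merged topics
    by_identifier = {}                       # identifier -> merged topic
    for tm in maps:
        for topic in tm:
            # Reuse an already-merged topic that shares any identifier.
            target = next((by_identifier[i] for i in topic["identifiers"]
                           if i in by_identifier), None)
            if target is None:
                target = {"identifiers": set(), "names": set()}
                merged.append(target)
            target["identifiers"] |= topic["identifiers"]
            target["names"] |= topic["names"]
            for i in target["identifiers"]:
                by_identifier[i] = target
    return merged

map_a = [{"identifiers": {"http://psi.example.org/verdi"},
          "names": {"Giuseppe Verdi"}}]
map_b = [{"identifiers": {"http://psi.example.org/verdi"},
          "names": {"Verdi"}}]
print(merge_topic_maps(map_a, map_b))
# one topic carrying both names, because the subject identifier is shared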

> * Lars Marius Garshol
> |
> | I think XTM 1.0 provides all this. I don't see any need for anything
> | more. If I did I wouldn't have been one of the founders of Ontopia;
> | I would have waited for the technology to be ready.
>  
> * Michel Biezunski
> |
> | Ontopia is not the only company which is developing topic maps. 
> 
> I'm explaining my personal view. Obviously that has no relation to any
> company except Ontopia, because I am not involved in any other company.
> The part you should take note of in what I wrote is "I think XTM 1.0
> provides all this". Not the part about Ontopia, which is not important.
> 
> So let me repeat: I think XTM 1.0 has all the flexibility it needs to
> have already, that it does not have any damaging limitations fixed by
> the RM, nor that its lifetime is somehow limited.

I agree with you that the model brought by XTM is wide enough to
accommodate a huge number of information models. As far as I am
concerned, it looks like this is all I would ever need for the
applications I build for my customers (although it's difficult to
be sure about this). However, I have heard people claiming that they
need applications not covered by the SAM, and I don't see why their
needs shouldn't be accommodated. We just need to understand more
precisely what those needs are.

> Sure, but the RM is no closer to the SAM than RDF is. SRN himself has
> said that creating a SAM<->RM mapping is a tricky thing to do. I think
> that speaks for itself.

No, it doesn't. I have no idea why it is tricky until I know more.
Why? What? How? (not who?)
 
> | The way it's implemented (APIs, properties of objects, etc.) is not
> | what topic maps have been designed for. We have tried to leave wide
> | open the creativity of implementers so that we can have many
> | products which are doing things which are completely different (for
> | example search engines, document management systems, annotation
> | editors, databases, etc.) and yet all have a common substrate. 
>
> Yes, but even so we must ensure interoperability, and the only way to
> do that is to put strict and well-defined requirements on
> implementations and instances. Either that, or we should be publishing
> technical reports.

I am not sure exactly what you mean by that difference. I like
the idea that some of these things are still "experimental" and need
technical study before we can "legislate" what goes in the standard
and what does not. This is particularly important when deciding what
a conforming application is. We don't want to standardize too early
(i.e., standardize something that later proves not to be
standardizable!). The idea of drawing the line between what we are
actually sure of (which should go in the standard) and what is still
there to be tried out is a good one.

 
> | I strongly believe that we should stick to the principle of keeping
> | the field of applications wide open. And this is what the Reference
> | model offers, by making explicit the generic layer above the way the
> | topic map concepts appear. The problem with the Reference Model
> | (one of them) is that it's probably not generic, not abstract
> | enough. So it seems like a set of arbitrary constraints, while it
> | actually is not, or should not be.
> 
> How is the RM model any more independent of application area than the
> SAM is?

Because it's lower-level. As you said before, the RM is more like ASCII
whereas the SAM is more like XML. There are plenty of things in ASCII
that are not in XML (and that are not even considered applications).


> | This is why I believe the current status is not mature enough to go
> | to publication. We -- all of us -- have to make a particular effort
> | to try to understand what the others are doing, and not work from
> | a priori assumptions such as "they get in my way, let's get rid of
> | their stuff".
> 
> I'm trying.
> 
> | (I don't believe that new users are prevented from using topic maps as
> | they are now, because the standard exists now and has proven to be
> | stable, even with the addition of XTM.)
> 
> Frankly, Michel, I find this statement telling. We added XTM to the
> standard, and when we did, implementations had to change to accommodate
> a number of new concepts. XTM changed the data model of topic maps in
> fundamental ways, so adding it was a violation of stability.
> 
> Even worse, it's now more than a year later, and we still haven't
> given people a straight answer to how XTM relates to HyTM. We are
> working on it, but we have not provided it. So how you can describe
> this as stability I find baffling. 
> 
> As a technical report, a general guideline for how to organize
> information it's been pretty stable. As a standard, something intended
> to guarantee the interoperability of data and implementations, it has
> simply been a mess.

No, I disagree. We have not said that HyTM was not usable, and I now
believe we should not say that, even if we know most implementations
don't use it, just in case someone needs it. We have provided a new
way to do topic maps which is more in sync with the practice of the
industry. We are not preventing anyone from continuing to use HyTM.
The two syntaxes relate because they use the same concepts. The only
problem I see is the one Martin pointed out: it's not obvious how to
use facets with XTM. Apart from that, the rest is all there. It's not
documented yet, but we haven't had many complaints about how to map
one into the other.

By leaving the standard as it was and adding the XTM DTD, we have,
technically speaking, not removed anything from it; we have supplemented
it with something else. It's an extension, not a reduction. 

The "mess" you are describing is precisely what we are trying to
fix now, with the SAM and the RM. It takes a while to figure out
exactly what it is and it is because we have user feedback that
we can do that. We won't have had any user feedback if we wouldn't
have been perceived as a stable standard.

There is no contradiction between the notion of stability and
evolution. A standard can be stable and still evolve. It can be
extended, made more precise, fixed, maintained; all of this falls
under the category of stability. On the other hand, a standard where
everything can be changed by anyone new who enters the game is
not a stable standard. The topic map model is pretty stable.
That doesn't mean it's perfectly defined in every respect.

[...]
 
> I agree with that. The only difficulties are
> 
>   a) agreeing on the new editorial structure (multipart/singlepart),
>      and a new roadmap (relations between the pieces), and
> 
>   b) figuring out exactly what the RM is supposed to be.

and figuring out exactly what the SAM is supposed to be.

> These two questions are bound together, of course, because the nature
> of the RM affects the relationship of the pieces to one another. If
> the RM is a conceptual tool only then the N0278 roadmap is just what
> we need. If it is a technology in its own right then the N0278 roadmap
> is not going to be sufficient.

Then the roadmap needs to be amended accordingly, if necessary. The
roadmap is part of what we are discussing.

Michel
===================================
Michel Biezunski
Coolheads Consulting
402 85th Street #5C
Brooklyn, New York 11209
Email:mb@coolheads.com
Web  :http://www.coolheads.com
Voice: (718) 921-0901
==================================