Everitt:
There’s a value cycle in our strategy. We tell customers, “We have this
Open Source platform with great value being added by developers and
companies worldwide that you can tap into.” We then have to execute on
having a strong, attractive platform for developers to create
interesting things like Squishdot,
Metapublisher,
etc.
Then we turn it around and explain to the community how customer
engagements are driving things that are clearly important to the
platform’s viability, such as enterprise scale.
It’s worked out very well, although there are times when a choice has to
be made between customer and community priorities, and that almost
always means the consulting customer wins.
As we’ve learned from these situations, we’ve adapted our organizational
model to better leverage the synergy. How we’re now structuring
ourselves is becoming as exciting as the software itself.
oss4lib:
The new Content Management Framework
(CMF) should appeal to many
libraries, especially those wanting to empower patrons to manage their
own content and allow customized content views. One of the most
interesting things about the CMF is its deep support for
Dublin Core (DC),
with every object supporting DC descriptions. What led you in this
direction?
Everitt:
Believe it or not, Mozilla!
I’ve been doing this information resource and discovery thing for a
while, with Harvest in 1993 and CNIDR and the like. I had followed Dublin
Core for some time, plus related initiatives such as IAFA.
However, Mozilla was the first time I had seen DC built into a
platform. Being tied to RDF nearly put it out of reach for people.
But the value of having every object or resource in Mozilla support a
standard set of properties was apparent, even for a knucklehead like me.
:^)
I’m surprised Dublin Core hasn’t become universal amongst CMS vendors.
Nah, I take it back, I’m not surprised. :^)
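[As a rough illustration of that idea, here is a minimal Python sketch of giving every content object a standard set of Dublin Core properties through a shared mixin, so tools can query metadata uniformly across a site. The class and method names are hypothetical, not the CMF’s actual API.]

    # Illustrative only -- not the CMF's actual API.  Every content type
    # mixes in the same Dublin Core contract, so tools can query metadata
    # uniformly across a site.
    class DublinCoreMixin:
        """Give any content object a standard set of DC elements."""

        def __init__(self, title='', description='', subject=(), creator=''):
            self._dc = {
                'Title': title,
                'Description': description,
                'Subject': tuple(subject),
                'Creator': creator,
            }

        # Accessors named after the Dublin Core elements they expose.
        def Title(self):
            return self._dc['Title']

        def Description(self):
            return self._dc['Description']

        def Subject(self):
            return self._dc['Subject']

        def setMetadata(self, **kw):
            # Update any DC element by name, e.g. setMetadata(Title='...').
            for element, value in kw.items():
                if element not in self._dc:
                    raise KeyError('unknown DC element: %s' % element)
                self._dc[element] = value


    class Document(DublinCoreMixin):
        """A content type that inherits the shared metadata contract."""

        def __init__(self, text='', **dc):
            super().__init__(**dc)
            self.text = text


    doc = Document(text='Hello, world.', title='Greeting', subject=('examples',))
    print(doc.Title(), doc.Subject())    # Greeting ('examples',)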
oss4lib:
A common frustration with Dublin Core is that it would be all
the more powerful in the aggregate if more applications and sites implemented it.
Everitt:
Alas indeed! But it’s not hard to see why it hasn’t taken off. It’s
hard to get authors to contribute metadata at all, and when there’s
nearly no payoff or visible benefit, the incentive is even lower.
RDF has suffered from this same chicken-and-egg problem. It’s needed a
killer app that simultaneously sparks both supply and demand.
oss4lib:
Seeing DC in the CMF gives us hope. 🙂 A likely upside is that if
more applications and sites use DC, everyone will clamor for more robust
metadata. In what ways are you planning for that next level?
Everitt:
I believe Ken would agree that the next area of interest for us over the
coming six months is the “space in between” the content.
Manheimer:
Yes! There’s a lot of metadata that can be inferred from process and
content.
For instance, we can identify the “lineage” of a document by tracking
the document from which it was created. We can harvest the actions of
visitors, like site-bookmarking, commenting, and rating documents, to
glean orientation info for subsequent visitors. And we can infer key
concepts from the content itself, e.g., common names (in the wiki,
WikiNames).
Overall, we can reduce the burden on content authors and editors to
fill in metadata whenever it can be inferred from process and content.
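[A rough Python sketch of the kind of inference Ken describes: lineage recorded from process, ratings harvested from visitor actions, and key concepts pulled from WikiNames in the text. The Page class, its attributes, and the WikiName pattern are illustrative assumptions, not Zope or CMF code.]

    import re
    from collections import Counter

    # Hypothetical WikiName pattern: two or more capitalized words run together.
    WIKINAME = re.compile(r'\b(?:[A-Z][a-z]+){2,}\b')

    class Page:
        def __init__(self, text, parent=None):
            self.text = text
            self.parent = parent   # process metadata: the document it was created from
            self.ratings = []      # process metadata: harvested visitor actions

        def lineage(self):
            # Walk back through the chain of documents this one was created from.
            node, chain = self, []
            while node.parent is not None:
                chain.append(node.parent)
                node = node.parent
            return chain

        def rate(self, score):
            # Recorded from a visitor action; no effort from the author.
            self.ratings.append(score)

        def average_rating(self):
            return sum(self.ratings) / len(self.ratings) if self.ratings else None

        def key_concepts(self, top=5):
            # Content metadata: infer key concepts from WikiNames in the text.
            counts = Counter(WIKINAME.findall(self.text))
            return [name for name, _ in counts.most_common(top)]


    original = Page('See the ZopeBook and the DublinCore page.')
    derived = Page('Notes on DublinCore usage.', parent=original)
    derived.rate(4)
    derived.rate(5)
    print([p.text for p in derived.lineage()])   # the original page's text
    print(derived.average_rating())              # 4.5
    print(derived.key_concepts())                # ['DublinCore']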