Tim Berners-Lee, inventor of the WWW, says, “Most of the Web’s content today is designed for humans to read, not for computer programs to manipulate meaningfully. Computers can adeptly parse Web pages for layout and routine processing (here, a header; there, a link to another page) but in general, computers have no reliable way to process the semantics: this is the home page of the Hartman and Strauss Physio Clinic, this link goes to Dr. Hartman’s curriculum vitae. The Semantic Web will bring structure to the meaningful content of Web pages…”
For more information, see [semanticweb.org]
While the SemanticWeb as expressed by Tim Berners-Lee is several years off, the individual pieces that make up the SemanticWeb are fairly well understood and useful now. One element is the [ontology], which is useful for organizing large knowledge bases with more complex query possibilities than [faceted taxonomies] or [thesauri].
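To see what “more complex query possibilities” can mean, here is a minimal sketch of the kind of inference an ontology supports and a flat taxonomy or thesaurus does not: because classes stand in explicit subclass relations, a query for a general class can transitively pull in individuals filed under more specific classes. The class and individual names below are invented for illustration (loosely echoing the Physio Clinic example), not drawn from any real vocabulary.

```python
# Hypothetical toy ontology: subclass_of maps child class -> parent class,
# instance_of maps an individual to its (most specific) class.
subclass_of = {
    "PhysioClinic": "Clinic",
    "Clinic": "HealthcareProvider",
    "Hospital": "HealthcareProvider",
}

instance_of = {
    "HartmanStraussClinic": "PhysioClinic",
}

def is_a(cls, ancestor):
    """True if cls equals ancestor or is a (transitive) subclass of it."""
    while cls is not None:
        if cls == ancestor:
            return True
        cls = subclass_of.get(cls)  # walk up one subclass link
    return False

def instances_of(ancestor):
    """All individuals whose class is, transitively, a kind of ancestor."""
    return [ind for ind, cls in instance_of.items() if is_a(cls, ancestor)]

# Querying the general class finds the clinic via two subclass hops:
print(instances_of("HealthcareProvider"))
```

A faceted taxonomy can tag the clinic under a “healthcare” facet, but only an explicit subclass hierarchy lets a query engine *derive* that a PhysioClinic answers a query about HealthcareProviders without anyone tagging it as such.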
I fear that the Semantic Web will go the way of SGML, and for basically the same reason: normalization of metadata works really well in confined applications where the payoff is high, control is centralized, and discipline can be enforced. In other words: not the Web.
This (abbreviated) criticism is founded on the assumption that near-perfect take-up is required before benefits would accrue. I’m not so sure I agree – semantics, like language, is a thing driven by social interactions. That there may be multiple equivalent terms for the same referent is to be expected. I believe we can also expect that the process of natural selection will weed out the less valued variations. -- EricScheid