Nowadays, it seems as though the only people who haven't heard of the Internet and the World Wide Web are technophobic hermits who consider the radio to be a dangerously new invention. Indeed, the media bandwagon seems to have created hype of such a magnitude that the Web cannot possibly live up to the expectations people have been led to have of it.
Both the mainstream and the specialist press continue to laud the Web as the ultimate digital information system, a rather overambitious claim. My personal view is that the Web is just the most recent of a succession of Internet-based information systems, each more sophisticated than its predecessor; before the Web came gopher, and before that came ftp.
However, the Web was the first large-scale distributed hypertext system on the Internet, and is probably responsible for introducing more people to the concept of hypertext than all of the work in this area during the previous thirty years.
Eventually, all hypertext systems end up being compared to Xanadu, the grand creation of Ted Nelson, who first coined the term 'hypertext'. Xanadu, like Coleridge's poem, was never completed, but the fragments that exist point to a richer system of hypertext than that offered to us by the Web.
Many of the features of Xanadu have appeared in other hypertext systems, both before and since the introduction of the Web. In some ways, the only strong point of the Web seems to be its extensiveness, since its lack of features makes for a rather primitive hypertext system.
A small selection of features which a second-generation hypertext system would be expected to have, and which the Web does not have, might be as follows:
In the same way that a citation index shows which papers cite a given paper, bidirectional links allow you to see which documents link to a given document.
The perennial problem of broken links, where the document at the destination of a link no longer exists, could become a thing of the past if bidirectional links were used throughout. If a document were deleted or moved, it would be possible to see what links point to it, and amend them appropriately.
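The idea can be made concrete with a small sketch. The class below is purely illustrative (the Web has no such shared link database); it assumes links are stored outside the documents themselves, so that both directions can be queried and inbound links can be amended when a document moves:

```python
class LinkStore:
    """Hypothetical bidirectional link database: links live outside
    the documents, so both directions can be queried."""

    def __init__(self):
        self._forward = {}   # source -> set of destinations
        self._backward = {}  # destination -> set of sources

    def add_link(self, source, destination):
        self._forward.setdefault(source, set()).add(destination)
        self._backward.setdefault(destination, set()).add(source)

    def links_from(self, source):
        return set(self._forward.get(source, set()))

    def links_to(self, destination):
        # The citation-index view: which documents link here?
        return set(self._backward.get(destination, set()))

    def move_document(self, old, new):
        # Amend every inbound link when a document moves,
        # so no link is left dangling.
        for source in self._backward.pop(old, set()):
            self._forward[source].discard(old)
            self.add_link(source, new)
```

With such a store, moving a page is a single operation that repairs all inbound links at once, rather than a scavenger hunt through everyone else's pages.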
At present, the Web is a unidirectional medium with a marked separation between authors and readers. If a hypertext system is to be more than a fancy word processing system with links, it should allow people to work together more effectively, with all users as both authors and readers.
The current brute force technique used to build an index of the documents on the Web is starting to show its limitations. Even powerful search engines such as Altavista cannot hope to index more than a small fraction of the Web, and much of what they do index is already out of date.
Distributing the indexing processes between a large number of servers and building indices of indices (so that index servers would not need to build universal indices) would help to reduce the inherent bottlenecks of the current technique.
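A rough sketch of the idea, with invented class names, might look like this: each regional index server covers only part of the Web, and a top-level server indexes the indices rather than the documents, merely routing queries downward:

```python
class RegionalIndex:
    """Hypothetical index server covering one part of the Web."""

    def __init__(self, name):
        self.name = name
        self._index = {}  # search term -> set of document URLs

    def add_document(self, url, terms):
        for term in terms:
            self._index.setdefault(term, set()).add(url)

    def search(self, term):
        return self._index.get(term, set())


class MetaIndex:
    """Top-level server: an index of indices. It holds no document
    data itself, only knowledge of which regional indices exist."""

    def __init__(self):
        self._regions = []

    def register(self, region):
        self._regions.append(region)

    def search(self, term):
        # Fan the query out to the regional indices and merge results.
        results = set()
        for region in self._regions:
            results |= region.search(term)
        return results
```

Each regional server need only keep its own part of the Web fresh, and the top-level server's workload grows with the number of indices, not the number of documents.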
Another term coined by Ted Nelson, transclusion refers to the process of including a section of one document in another. In this way, an author quoting someone else's work would not copy the relevant section, but would include a reference to it instead. The implementation of a transclusion mechanism would make transparent document version management a near-trivial proposition.
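A toy sketch shows why: if a document stores a reference (a source identifier plus a character span, in this invented scheme) instead of a copy, the quoted text is resolved against the live source each time the document is displayed:

```python
# A tiny in-memory corpus standing in for documents on the network.
documents = {
    "nelson87": "Hypertext is non-sequential writing.",
}

def transclude(doc_id, start, end):
    """Return a reference to a span of another document, not a copy."""
    return ("transclusion", doc_id, start, end)

def render(parts):
    """Resolve transclusion references against the current sources,
    so a quotation always reflects the source as it now stands."""
    out = []
    for part in parts:
        if isinstance(part, tuple) and part[0] == "transclusion":
            _, doc_id, start, end = part
            out.append(documents[doc_id][start:end])
        else:
            out.append(part)
    return "".join(out)
```

Because the quoting document holds only a reference, correcting the source corrects every quotation of it, which is why version management becomes so much simpler.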
There are development attempts underway to implement these and other features for the Web, but many of these are trying to produce results by adding layers on top of the basic structure of the Web, rather than by altering that basic structure.
One of my growing concerns is that the Web has expanded so rapidly that much Web technology became widespread before reaching maturity. It is now effectively impossible to make any major changes to either the underlying protocols or data formats, and the shortcomings of the Web cannot be overcome without such major changes.
So what will the successor to the Web look like?
The next generation of distributed hypertext systems is starting to appear already. Hyperwave (formerly Hyper-G) is a system initially developed at the Technical University of Graz in Austria which supports a number of the features listed above, and there are other contenders, such as Microcosm from the University of Southampton.
One thing is certain: the successor to the Web must remain backward compatible with the Web (in the same way that the Web integrated gopher and ftp) if the recent World Wide Web boom is not to leave us with a collection of legacy data of gargantuan proportions.