(Chapter for Handman, Gary, Video Collection
Management & Development, Greenwood Press, 1993)
Video technology is changing rapidly. Fifteen years ago there was no consumer video market, and virtually no feature films available on video; today a huge number of Americans own VCRs, and almost all mainstream feature films are released onto video shortly after their first theatrical runs. Libraries have adapted well to these changes, and many have substantial circulating video collections.
But many more changes are ahead. Just as libraries have had to cope with format changes in the past (from U-Matic to Beta to VHS and videodisc), they will continue to deal with similar changes in the future (S-VHS, 8 mm, HDTV, etc.). More importantly, technological changes in the computer world, as well as the convergence of computer and video technology, are going to cause fundamental changes in how we use and perceive our collections, and in the very idea of a collection itself.
The increasing availability of information in digital form (first full-text of books, then still images and sound, then moving images) and the proliferation of high-speed computer networks will make video images accessible from across the country. This will lead libraries to become less important for what they actually physically collect and more important for what they can access. The ability to seamlessly alter images (as we've already seen with the colorization of older films) raises interesting authorship and copyright problems. And the increasing amount of "unfinished" or "re-worked" material available through such a network will lead to a revision of our very idea of what constitutes a "work".
This chapter will consider a number of the changes
that we can expect in the coming years. First it will review the changing
formats for moving image materials and the problems that these changes
pose for preservation and access. Next, it will investigate the advantages
offered by a move towards digital storage of moving images, including integration
of multimedia materials, remote access, preservation, and instruction.
Finally, it will discuss the more fundamental changes we can expect in
the concept of what constitutes a "work" or a "collection".
Libraries with moving image collections are familiar with changes in the formats for moving image materials. Format changes are driven by economic forces much larger than the library market, and libraries tend to be market followers rather than market creators. The formats found in libraries have been based upon the formats used by the film industry and, later, by broadcast television.
For many years moving image materials housed in libraries consisted almost exclusively of 16mm films. In the 1970s, with the advent of relatively inexpensive and easy-to-use formats such as Super 8 and U-Matic (3/4 inch) video, many libraries began to collect moving image materials in these formats.
Experience with the U-Matic format can offer us a good example of failure to anticipate market changes. This relatively portable format was designed primarily for industrial and institutional use. Both the cassettes and the players were rugged, and the quality was very good. Then, in the late 1970s, two new portable formats were introduced: Beta and VHS. These were both lower quality in terms of both hardware construction and image resolution, but eventually developed a mass market appeal due to a wide variety of factors. The built-in capability of taping programs off the air while out on errands was particularly appealing (even if people often had difficulty learning how to use this feature). These new machines attracted such a large market that economies of scale brought the hardware costs down even further, and eventually all new research and innovation was concentrated in these formats. A decade ago, no one could have envisioned how widespread video stores would become, or how commonplace VCR ownership would be. A decade after the introduction of the VHS VCR, the quality and available features had far surpassed those of U-Matic. Even industrial users such as television stations had moved to the smaller formats, and U-Matic was all but dead.
There are a number of newer formats available that threaten to render VHS as dead as the Beta and U-Matic formats. Currently available video systems include S-VHS, 8mm video, and laserdisc video. All of these systems are analog. But the current movement in the film and broadcasting industries is towards digital, and the library market will likely follow these trends.
In the recent past, most special effects (most notably in television broadcasts and commercial science fiction films) have been created digitally, but the storage medium has remained analog. An analog video clip is converted to digital, changes are made using a digital special effects generator, and the results are output back onto an analog storage device. This process will evolve to fully-digital modes in the near future as digital storage devices become more prevalent. Companies such as Pixar (which originated as a division of LucasFilm, but was sold a number of years later to Apple Computer co-founder Steve Jobs) have pioneered this form of special effects, a form which has subsequently moved out into mainstream broadcasting and filmmaking. Low-priced devices for digital image generation and manipulation, such as New Tek's Video Toaster, promise to have widespread impact upon the market.
New digital storage mechanisms are already being marketed as "multimedia" devices. Most products of this type are being marketed on disks the size of audio CDs, and are designed to be played on CD-type players attached to either a television or a home computer. New "home entertainment" disk drives are being released, capable of playing the new multimedia disks as well as CD audio and/or CD ROM, videodiscs, etc. As of this writing, these products face a marketing barrier due to a shortage of interesting software. Currently, developers are not interested in producing software until they are assured of a market, and consumers are reluctant to join the market until there is sufficient guarantee of interesting software. But this is likely to change due to a number of factors, particularly the introduction of Kodak's Photo CD, expected in the summer of 1992.
The Photo CD, while designed for still (as opposed to moving) images, is likely to have a profound impact upon the marketplace for all types of multimedia formats. In the summer of 1992 Kodak plans to introduce a service whereby customers can bring their rolls of film into a local photofinishing lab and receive back both a set of prints and a CD with images of all their prints for a total charge of under $20. The images could then be viewed on an enhanced CD audio player attached to a television set. This multimedia format is likely to gain such a large degree of market penetration that developers will flock to it, writing extensions and adding new features as well.
A second significant marketing factor is that both new low-end (consumer-oriented) multimedia formats (CDTV and CD-I) are designed to play both audio CDs and Kodak's new Photo CDs, allowing them to piggy-back upon these markets. Because the incremental cost for their equipment over that of a standard audio CD player or Photo CD player is not substantial, they only have to show a small amount of added usefulness to convince people to pay the marginal cost. And because they are likely to sell a significant number of these devices, developers are likely to produce a significant amount of software for them (which, in turn, is likely to increase sales of these devices and attract still more developers).
Several of the multimedia formats alluded to above have, as of this writing, recently been released and will be discussed below. Because of the vast storage requirements of multimedia, each of these formats specifies both a physical layout of information on the disk and a set of compression schemes to minimize storage. Though each format uses its own proprietary compression schemes, most formats claim that they will adopt the ISO JPEG (International Standards Organization, Joint Photographic Experts Group, ISO/IEC JTC1/SC2 Working Group 8 for the Coded Representation of Picture and Audio Information) standard for still images, and the MPEG (International Standards Organization, Motion Picture Experts Group) standard for moving images when these stabilize.
CD-I (Compact Disc Interactive) is the multimedia format designed for the upper end of the consumer (mass) market. Developed by Philips and Sony, this format was announced in 1986 but not released until 1991. The hardware consists of a box about 2/3 the size of a VCR which can hook up to a television set or a higher-resolution display device. The user interface is a remote control pad that looks like a VCR remote but has additional arrow buttons for moving in four directions. Depending upon the application, the user can select items off the screen or pull down menus using the arrows on this device. The CD-I format provides for storage of up to 7,000 still images, 16 hours of monaural sound, or 8 hours of stereo. Alternatively, it can store up to 72 minutes of quarter-screen, full-color, full-motion video. Normal applications will have a mixture of different types of media. As of this writing, the player lists for under $1600 and a number of multimedia disks are listed for less than $40.
CDTV is a low-end multimedia format developed by Commodore/Amiga. The hardware is very similar to that of CD-I, but the playback devices sell for about half as much ($700 at this writing) and the disks are priced as low as $30. Storage capacity is equivalent to that of CD-I. Authoring system software for CDTV is much cheaper than for CD-I, and is likely to spur those with more limited resources to develop products in this medium.
CD ROM XA is a multimedia format aimed at those who already have home computers. It is run on a CD ROM drive attached to a personal computer. Designed by Microsoft, Sony, and Philips, it provides for FM quality audio interleaved with various other kinds of data, including a small full-motion video window. Because it is attached to a computer, it will provide for much more sophisticated applications, particularly those involving information retrieval. But because the market for this is much smaller (it requires a home computer and will not play audio CDs), it is likely that less work will go into software development for this format. Philips claims that it intends to encourage developers to develop applications which will run on both CD-I and CD ROM XA, but as of this writing no such applications have been completed.
DVI (Digital Video Interactive) runs on an MS DOS computer with an attached disk drive. It provides for storage and playback of full-screen, full-motion video, audio, text, and graphics. It is really just a compression format, and currently requires a decompression board costing approximately $2,000. The developers at the David Sarnoff Laboratories concentrated most of their work on perfecting their proprietary video compression algorithms, hence this is the most sophisticated of any of the formats mentioned here in terms of handling full-motion video. The format is now owned by IBM and Intel, and it is expected that IBM will provide decompression chips as standard equipment on the motherboard of its next generation of personal computers. This will assure a huge marketplace for the format, and is likely to spur developers to produce interesting software for this format.
New developments in HDTV (High definition television) are likely to spur research into video compression, and result in a whole range of new devices for storage and retrieval of full-motion video. It is expected that around the turn of the century a new HDTV standard will replace the current NTSC standard for broadcast television in the United States. At this writing the Federal Communications Commission is currently taking bids for different types of HDTV systems, but it seems clear that whichever is chosen as the national standard, it will be a digital delivery system. Again, once that happens, a huge market will loom ahead, and this will spur developers to work on issues such as moving image compression, as well as to develop sideline products.
We can expect a rash of other new products to
hit the market in coming years. It is expected, for instance, that Sony's
new Mini Disc player (a Walkman-sized device which will be able to both
play and record information on a small compact disc) will incorporate multimedia
information, some of it available pre-recorded by Sony (which now owns
Columbia and Tri-Star Pictures). Also worthy of note are two recently introduced
methods for storage and display of full-motion video (as well as other
multimedia) on personal computers: Apple's Quicktime and IBM's M-Motion.
Implications of Changing Formats
The lack of a set of stable formats has profound implications in the areas of conservation and preservation, public service, and copyright. Foremost among these is the issue of whether material collected today will even be accessible a decade or two hence.
Because of format changes, there is always the worry of amassing a collection in one format, only to have that format superseded by another format. Libraries that began collecting video more than a decade ago collected almost exclusively in 3/4 inch U-Matic format. As VHS players and tapes became cheaper, and as it became increasingly difficult to find U-Matic players (as well as replacement parts for older players), more and more of these libraries abandoned their U-Matic collections in favor of VHS.
The U-Matic experience illustrates a broader market trend: we can expect that any storage format will be superseded by another format in less than 20 years. Though this may at first appear to be a conservator's nightmare, we must also realize that these kinds of analog storage formats have only a limited shelf life, and tend to deteriorate over time. So, in a sense, the obsolescence of the formats goes hand-in-hand with their physical deterioration.
As tapes get worn out or formats superseded, can the library copy these onto more stable substrates or newer formats? There are copyright problems (see below), but with analog storage there are also technical problems.
All video storage devices in the past, including videodiscs, have been analog rather than digital. With analog storage, each subsequent copy is inferior to the previous generation. In practice, a copy beyond the third generation is generally too degraded to even be viewed. Digital copies, on the other hand, are exact replicas of the previous copy; even the 100th generation should be as good a copy as the first (see the following section, "Implications of Digital").
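The difference between the two modes of copying can be sketched in a few lines of code. This is a toy illustration only: the "signal" values, the noise level, and the generation counts are arbitrary, not measurements of any actual video format.

```python
import random

random.seed(0)  # make the toy example repeatable

def analog_copy(signal, noise=0.05):
    """Each analog generation adds a little random noise to every sample."""
    return [s + random.uniform(-noise, noise) for s in signal]

def digital_copy(bits):
    """A digital copy is an exact, bit-for-bit replica of its source."""
    return list(bits)

master = [0.0, 1.0, 0.5, 0.25]

analog = master
for _ in range(10):           # ten analog generations
    analog = analog_copy(analog)

digital = master
for _ in range(100):          # one hundred digital generations
    digital = digital_copy(digital)

assert digital == master      # the 100th digital copy is identical
assert analog != master       # the analog copy has drifted from the master
```

However many times the digital loop runs, the copy never drifts; the analog copy degrades a little with every generation.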
When considering changing formats and the stability of a storage medium, one last important point to remember is that even if we were to develop a base substrate that was relatively stable and might last even 100 years, it is doubtful that anyone would have a device to play it on even 20 years hence. Who today could find a projector capable of playing a regular 8mm film (popular in the 1960s) or a videotape player capable of playing a 1/2 inch reel-to-reel videotape (popular until about 20 years ago)?
Changing formats also have important implications for public service. If a library's user base is familiar with a more modern format, it is usually in the library's best interest to upgrade its collection to keep up with its users. For example, many libraries were reluctant to part with their exclusive devotion to the U-Matic format when VHS and Beta became popular. Eventually, these libraries all reached a crisis situation where a large number of their users were familiar with the new, less expensive formats (and even had them at home), but were unable to adjust to the U-Matic format used in the library.
If the format we have is outmoded or physically deteriorating, can we legally copy it onto another medium? Is the copy we make a "different work", and must we seek out the copyright holder and secure permission for the right to copy it? The entertainment industry (which has obvious monetary interests in this process) would tend to support a fundamentalist interpretation that any copying onto any format would require identification of the copyright holder and the granting of permission to copy. Intellectual property "libertarians" would counter that once one purchased a copy of a work, one was entitled to continual access to the intellectual content of that work; if the copy itself was deteriorating, one was entitled to make a new copy and destroy the deteriorating one. The pivotal question here revolves around licensing: in the original purchase agreement did one license the right to view this work only in the original format? Or did one pay for the right to view the work in any possible format?
When we mount digital versions of films and videos on a computer network we begin to confuse the difference between a work (which is some kind of expression) and a copy (which is a tangible form of that expression). The two collapse together into a display.
Libraries have always been motivated by conservation and preservation concerns. When a book cover wears out or pages get destroyed, most libraries do not hesitate to rebind or photocopy pages without consulting the copyright holder. If acidic paper becomes brittle and a microform copy is recommended, little thought is given to tracking down the copyright holder unless the work is clearly "in print". In general, librarians have only worried about copyright clearance in copying to new formats when the work was relatively new, likely to be "in print", and the changes affected more than 50% of the work. The assumption has been that for an older work not currently in print, the need to preserve a copy outweighs the need to track down the copyright holder. Implicit in this is an assumption that the copyright holder would be happy that someone was taking the steps to preserve a copy of the work. Incidents from the history of film and video indicate that the copyright holder's position shifts depending upon the perceived market.
In the late 1970s and early 1980s film archives spent hundreds of thousands of dollars reconstructing "more complete" versions of Hollywood films that had been released in shortened forms. The studios and distributors watched these activities with complete indifference until showings of the restored films proved popular enough that they realized that a profit could be made from these. They then stepped in to assert their copyright holdings and to control the activity.
When films and videos are stored in digital form,
the issue of copying to a different substrate becomes even more complex.
Each copy is an exact replica. Unlike the issue of copying from one videotape
format to another, each digital copy can be used on exactly the same equipment
as another digital copy. One can expect that "digital movies" will be stored
on large file servers. It is common to periodically clean up disks and
rearrange the locations of files. Under a strict "fundamentalist" interpretation
of copyright, a repository would have to get permission from each copyright
holder each time it cleaned up (or even backed up) its disks, as these
activities require copying of the digital files.
Implications of Digital
The possibility of digital storage of moving image materials has tremendous implications for preservation as well as access. If films and videos are stored in digital form, iterative copying does not cause deterioration. This means that, as the physical substrate on which the images are stored begins to wear out, one need only copy it onto a new substrate, and the copy will be of as high a quality as the "print" it was copied from.
Elsewhere, this author has explained how the ability to seamlessly change and manipulate digitally-stored images will allow conservators to preserve and restore damaged artworks, as well as allow creative people to combine and alter existing images (Besser 1991, Besser 1987). In the mid-1980s, while working for the Pacific Film Archive (PFA) in Berkeley, this author proposed the development of a set of digitally-based tools for removing scratches from old films, rebalancing faded colors, and adding missing frames through motion interpolation. These easy-to-use tools would be provided to a director, who would spend a month or two in residence at PFA restoring a print of his film and creating a new archival-quality digital print of that film. That proposal was premature; the technological barriers to digital storage of moving images were too great, and the proposal was diluted to apply only to still images, eventually evolving into the Berkeley Image Database System (Besser 1990, Besser & Snow 1990). But today that proposal is feasible. Digital storage of moving image material is routine business; most commercial motion pictures use it to accomplish special effects. And tools to remove scratches or rebalance colors seem primitive in comparison with the special effects used in such films as Terminator 2.
Films stored in digital form could provide an exciting learning experience for film students. Tools could be developed to allow a student to re-edit a film in various ways and immediately play back each version on a workstation screen. Theoretical concepts such as the ellipsis (Burch 1973) (the omission of frames during a cut on motion) can be concretely shown by playing back a portion of film with one or two frames removed.
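The core of such a re-editing tool can be sketched in a few lines, assuming each frame of a digitized shot can be treated as an item in a list (the frame numbers and cut point below are hypothetical):

```python
def elide(frames, cut_point, n_removed):
    """Remove n_removed frames at cut_point, producing an ellipsis:
    the on-screen action appears to jump slightly forward at the cut."""
    return frames[:cut_point] + frames[cut_point + n_removed:]

# A ten-frame shot, with each frame labeled by its number.
shot = list(range(10))

# Removing two frames at frame 5 yields an eight-frame version of the shot.
edited = elide(shot, 5, 2)
assert edited == [0, 1, 2, 3, 4, 7, 8, 9]
```

A student could vary the cut point and the number of frames removed, then play back each version to see the effect directly.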
Digital storage will also permit moving image material to be combined with and incorporated into other types of material. Students at MIT currently are able to turn in "papers" that incorporate slices of documentary footage in appropriate places (Gerstein & Sasnett 1989, Michon 1990), and when the instructor reads these on his or her workstation, the proper moving images are played back. This combination of text, still and moving images, sound, and the computer has been labeled "multimedia", and offers tremendous educational potential.
Films and videos stored digitally offer the potential
of access across a digital network. Though a number of technological barriers
still need to be worked out for large collections of films, network access
will allow libraries to simultaneously serve users at different workstations
and even in different locations. This notion that the "print" does not
have to be physically in close proximity to the viewer will cause a significant
shift in the way we view the library's service to the viewer.
As faster and faster networks link more and more libraries together, and as more and more media resources become available in digital form, the very notion of what constitutes a library's collection is beginning to change.
Technological changes coupled with economic developments (such as the skyrocketing cost of serials) are forcing libraries to re-evaluate their traditional role as custodians of a storehouse of information that they themselves manage. There are two key ways in which both the concept of what constitutes a collection and the role of librarians are likely to change.
Library futurists have begun to re-evaluate the current working model of amassing a storehouse of books and periodicals to serve their users. As the serials pricing crisis has demonstrated, libraries can no longer afford this model. Technological developments are making
it possible to share resources across a network, and the library of the
future will be as important for the network services it can access as it
is for the material it actually owns. The librarian of the future is likely
to be less concerned with managing an in-house collection, and more concerned
with providing access to a variety of services across a network. This will
be a fundamental shift in the role of the librarian.
Current Trends in Libraries
Rising costs of serials and plummeting telecommunications costs have already led many libraries to develop cooperative collection development policies where material is FAXed from one library to another. The RLIN ARIEL project uses the national Internet network instead of telephone lines to transmit these images. More advanced projects, such as one sponsored by the National Agriculture Library and North Carolina State University, transport digital images of ILL documents from one library to another, and even to individual end-user's workstations (Kirk & Alldredge 1992).
Rawer and rawer data is becoming available in machine-readable form. First we had bibliographic records of books as part of online catalogs, and journal citations through indexing and abstracting services. Then we began providing access to abstracts of journal articles in machine-readable form, which allowed us to search for the occurrence of a word within an abstract, to perform proximity queries, and to generate indexes automatically. Now we're starting to see both images and full-text of journal articles in machine-readable form, and soon we are likely to see full-text of books available as well.
Systems that were formerly library online catalogs are positioning themselves as sources for a wide variety of information. More and more systems are mounting indexing and abstracting services as part of their online access.
Libraries have begun to see the utility of making
their online catalogs available across a network. Today faculty and staff
at most universities can access the online catalogs of scores of libraries
around the country while sitting in the comfort of their offices. Through
some of these catalog services they can access indexing and abstracting
services (such as Medline and PsychInfo), search through tables of contents
of many journals, and can even make ILL requests that will be FAXed to
them within 24 hours. Soon they will be able to access the full text of
various types of documents, still image databases, and eventually even
moving image archives.
What we can expect in the future
As more full-text material becomes available in machine-readable form, as high-speed networks begin to connect more and more libraries, and as we begin to develop standards and protocols (such as Z39.50) for looking at other libraries' collections using our own familiar user interfaces, futurists are beginning to predict the formation of "virtual" libraries. These futurists predict that libraries will become less important for what material they actually store, and more important for what materials they can access. The library of the future will not be a storehouse of material as much as it will be a gateway to a world of information residing elsewhere.
Video collections are not immune to these changes. In fact, the video collection of the future is likely to be less of a collection of videos, and more of a user interface to videos that may reside elsewhere. The librarian might manage a gateway to a wide variety of video databases around the country and around the world. Access to these might be provided on the level of individual films/programs, scenes, shots, or even frames. For example, a user might want to view every scene in every program that contained a particular actor. Or every shot of the Empire State Building in every film in all accessible databases.
The video librarian might be responsible for
developing or managing a user interface which allows one to view onscreen
clips in slow motion, combine these with scrolling text annotations, and
even to re-edit or combine clips from a wide variety of databases.
Barriers to virtual collections over a network
A number of issues have to be resolved before we can actually develop virtual collections accessible as network resources. Some of these are relevant to all forms of virtual collections.
At the base level, all network applications will need to separate the user interface from the retrieval functions. Once this is implemented, the menu choices, buttons pushed, and general user display can be optimized for particular user groups, yet all of the different user interfaces developed can access the same base data. This is already being done for bibliographic information with the Z39.50 protocols (Lynch 1991), but would need to be extended to visual information.
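The separation can be sketched as follows: one retrieval function shared by two hypothetical display layers. The catalog records and field names are invented for illustration and are not drawn from Z39.50 itself.

```python
# Retrieval layer: one search function, independent of any display.
def search(catalog, term):
    """Return all records whose title contains the search term."""
    return [rec for rec in catalog if term.lower() in rec["title"].lower()]

# Two different "user interfaces" built atop the same retrieval function.
def brief_display(records):
    """A terse listing, perhaps for an experienced user."""
    return [rec["title"] for rec in records]

def full_display(records):
    """A fuller listing, perhaps for a novice user."""
    return ["{title} ({year})".format(**rec) for rec in records]

catalog = [
    {"title": "King Kong", "year": 1933},
    {"title": "Metropolis", "year": 1927},
]

hits = search(catalog, "kong")
assert brief_display(hits) == ["King Kong"]
assert full_display(hits) == ["King Kong (1933)"]
```

Either display layer could be replaced or tailored to a new user group without touching the retrieval layer, which is the essence of the separation described above.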
A key problem specific to moving image materials is the set of technical difficulties posed by the vast storage requirements of such materials (Besser 1992). Because these files are so huge (ten minutes of an 8-bit moving image filling a PC screen would typically require approximately 4,500 megabytes of storage), they strain storage capacity, they create problems for disk access (as the highest capacity disks are optical storage devices which tend to have the slowest disk retrieval speed -- potentially not fast enough to keep pace with the image display rate), and they eat up network bandwidth. Despite the fact that storage costs are plummeting, disks are becoming faster, and higher bandwidth networks are becoming more available and affordable, it is still unlikely that digital moving image access will become viable this decade without significant breakthroughs in compression. But such breakthroughs will very likely be forthcoming as progress on both videoconferencing and HDTV will force industry to address full-motion and high quality video compression. And work is currently progressing on an international standard for moving image compression: the International Standards Organization Motion Picture Experts Group (MPEG) standard.
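The arithmetic behind a figure of that magnitude can be reproduced under plausible assumptions. The screen resolution and frame rate below are assumed for illustration, not specified in the text:

```python
# Assumed parameters -- illustrative, not specified above:
width, height = 640, 400   # one common PC screen size of the period
bytes_per_pixel = 1        # 8-bit color: one byte per pixel
frames_per_second = 30
minutes = 10

total_bytes = width * height * bytes_per_pixel * frames_per_second * 60 * minutes
megabytes = total_bytes / 1_000_000
print(round(megabytes))    # → 4608, in the neighborhood of 4,500 megabytes
```

Doubling the resolution or moving to 24-bit color multiplies the total accordingly, which is why compression is indispensable.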
A key problem is achieving compression ratios sufficient to store an hour or two of relatively good quality moving images in a reasonable amount of space (i.e., 500-1000 megabytes), while keeping disk access and decompression fast enough to display the moving images in real time. Further complications arise from a lack of sufficient network bandwidth and throughput. Still other complications will develop if promising algorithms such as interframe compression (storing the first frame of every scene, and subsequently storing only the differences between a new frame and the frame that preceded it) are used, but the user only wants to display the middle of a scene (and will be forced to wait for the computer to calculate all the preceding frames).
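Interframe compression, and the cost of jumping into the middle of a scene, can be sketched in miniature. Here tiny one-dimensional four-pixel "frames" stand in for real images; note that displaying frame n requires reconstructing every frame before it.

```python
def compress_scene(frames):
    """Keep the first frame whole; store each later frame only as
    per-pixel differences from the frame that preceded it."""
    key = frames[0]
    diffs = [[c - p for p, c in zip(prev, cur)]
             for prev, cur in zip(frames, frames[1:])]
    return key, diffs

def frame_at(key, diffs, n):
    """To display frame n, every preceding difference must be applied --
    this is the waiting the text describes for mid-scene access."""
    frame = list(key)
    for d in diffs[:n]:
        frame = [p + delta for p, delta in zip(frame, d)]
    return frame

# Three tiny four-pixel "frames" of one scene.
scene = [[10, 10, 10, 10],
         [10, 12, 10, 10],
         [10, 12, 14, 10]]

key, diffs = compress_scene(scene)
assert frame_at(key, diffs, 2) == [10, 12, 14, 10]
```

Since most pixels do not change from frame to frame, the stored differences are mostly zeros and compress well; the price is that random access into a scene requires replaying the chain of differences from the key frame.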
There are a number of less technical barriers as well. We will need to develop standards for multimedia storage and presentation. Standards such as JPEG (for compression of still images) and MPEG (for compression of moving images) are well on their way to adoption by the International Standards Organization. Work on standards such as HyTime (a hypermedia mark-up language) is moving along at a rapid clip. But there is little or no agreement between vendors as to actual storage formats for what is often termed "compound documents." Multimedia systems primarily use their own proprietary schemes for storage and presentation, and it is difficult (if not impossible) to display material on any system other than the initial development system. This is analogous to the problem with word processing systems in the early to mid-1980s: if you created a document in one system, you considered yourself lucky to be able to import simple text (without any formatting, boldface, or underlining) from one package into another. For still images we have developed a number of standard formats (TIFF, PICT, GIF, CGM, Encapsulated PostScript, etc.), a situation which poses almost as serious a problem as having no standards at all. We need to develop a single multimedia storage format and convince all software manufacturers to support access to multimedia information stored in this format. It must support many different types of documents (text, still images, moving images, etc.). And it must be able to indicate a time relationship between these.
What technological features must be incorporated into the servers that we build for multimedia information? Most fundamental is support for device-independent display. With such a feature, users who don't require full-color images, who don't need full (smooth) motion, or who can get along with a video window that takes up only part of their screen will consume only the system resources necessary to generate the quality of image that their workstation requests.
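One way to picture device-independent display is a server that clamps what it sends to the capabilities a workstation declares. This is a hypothetical sketch, not any vendor's actual protocol; all names and numbers are illustrative.

```python
# Hypothetical sketch: the server tailors the video stream to the
# capabilities a workstation declares, so low-end clients consume
# fewer resources. Names and numbers are illustrative only.

FULL_STREAM = {"colors": 16_777_216, "frames_per_second": 30,
               "width": 640, "height": 480}

def tailor_stream(request):
    """Clamp each property of the full stream to what the client asks for;
    anything unspecified defaults to full quality."""
    return {key: min(FULL_STREAM[key], request.get(key, FULL_STREAM[key]))
            for key in FULL_STREAM}

# A client with a 256-color display and a small video window:
modest = tailor_stream({"colors": 256, "width": 320, "height": 240})
assert modest == {"colors": 256, "frames_per_second": 30,
                  "width": 320, "height": 240}
```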
Network delivery of moving image materials poses interesting problems in the area of intellectual property. The key problem here revolves around the issue of simultaneous use: if a library buys a copy of a moving image document, do the rights it has purchased include the right to make that document available to a number of simultaneous users across a network? Though the recent rewriting of contract language to incorporate network mounting of digital information (such as CD-ROM indexing and abstracting services) might prove of some use here, the situation for moving image material is not completely analogous to that of formerly "print" material. Indexing and abstracting services in their original form were designed for single-user use, and mounting these on a network where many people could simultaneously access them seemed to imply extending use beyond the originally intended "one user at a time". Moving image documents, on the other hand, usually carry with them the expectation that multiple users will simultaneously "watch" them. Thus, the argument that networked access violates the spirit of the original license carries less weight. Perhaps a more fruitful precedent might come from the world of cable television: do hotels that redistribute pay movie channels (such as Home Box Office and Showtime) have to pay a fee for each room that has the potential of watching a channel, for each room that actually watches it, or just for a single-user license? Generally, they negotiate blanket licenses. Another model to follow would be ASCAP's blanket licensing of music performance. In any case, all these intellectual property problems will eventually be solved through a combination of the rewriting of licenses, court test cases, and revision of the copyright laws -- all of which will take many years. In the meantime, the librarian needs to be wary about which rights are actually passed on.
An area of intellectual property law that will be even slower to resolve is that challenged by digital tools which allow the user to seamlessly alter moving images, re-edit sequences, and combine copyrighted sequences with new material. This "new" material is really just old material re-arranged in new ways (a sort of "derivative work"). Who is the "author" of this new material? How much re-arrangement is needed for the authorship to shift from the author of the original material to the re-arranger? At what point does the re-arrangement of the material constitute a copyright violation? These questions are very similar to those raised in the 1980s by postmodern art. The issue of whether a work is really derivative or truly new (section 106 of the 1976 Copyright Law) will probably be decided on a case-by-case basis, often in a court of law. To librarians, even more important than issues of derivative works and copyright are the challenges that new technologies pose to our traditional notion that a "work" is what a publisher puts in a package and markets (Wilson 1989).
Also at issue is the level of unit appropriate for indexing films or videos: Is it sufficient to index whole works, or might we want to address portions of them? Traditionally, with books and journal articles we have provided access only on the level of the whole work. But with the increasing availability of material in machine-readable form, access is being provided on the level of individual words -- first in titles, then in abstracts, and finally within the complete text of the work. More and more searches are being done based upon words occurring within an abstract or in proximity to another word. We are beginning to see articles with hypertext links that join phrases and paragraphs within an article to other relevant materials. If we want to make similar links to moving image materials, we must first assess what level of unit for indexing, cataloging, or linking is appropriate.
Several interesting projects are currently underway which attempt to provide users with access to portions of moving image documents (such as individual scenes, images of particular people or places, etc.). Cinema Studies researchers at UCLA and the University of Iowa have begun building databases of indexes to videodisc copies of films like Hitchcock's Rebecca, allowing viewers to find the frame number where a particular shot begins, where a character appears, or where a camera movement starts. Unfortunately, there are not yet standards for this kind of activity, and it is unlikely that the various projects of this nature could be easily integrated.
Researchers at Apple Computer (Mills 1992) and Xerox PARC have begun efforts to construct browsing tools that could be used on any set of moving images. Efforts up to now have treated images as a continuous set of still frames, and have used consistent sampling intervals (such as every 10,000th frame) to create a set of small still images that can be browsed.
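The sampling approach just described can be sketched in a few lines (a hypothetical illustration; the Apple and PARC tools are of course more elaborate): treat the video as a flat run of stills and pick every Nth frame to build a browsable strip.

```python
def sample_frames(total_frames, interval=10_000):
    """Return the frame numbers to extract as browsing stills:
    every `interval`-th frame, with no knowledge of scene boundaries."""
    return list(range(0, total_frames, interval))

# A two-hour film at 24 frames per second has 172,800 frames;
# sampling every 10,000th frame yields 18 stills to browse.
stills = sample_frames(172_800)
assert len(stills) == 18
```

The weakness of fixed-interval sampling is visible even in this sketch: the stills fall wherever the arithmetic puts them, not at scene boundaries, so a short scene may be missed entirely.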
For efforts like this to be really useful, it
would be desirable to develop a structurally-based mark-up language to
access individual portions of moving image or multimedia documents. (Mark-up
languages use embedded characters to indicate structural elements within
a document which can then be retrieved separately or displayed in different
formats. For example, if an outline were stored in a mark-up language a
user could choose to display the major points in boldfaced type, other
important points in normal type, and less important points in italics.
Or a user could choose to display only the major points, eliminating the
others. Similarly, a user viewing a script stored in a mark-up language
could choose to display only camera angles and lighting, only one character's
lines and stage direction, etc.) There is a movement in the information
retrieval world to extend text mark-up languages like SGML (Standard
Generalized Markup Language, ISO 8879) to other environments. Projects
like the Museum Computer Network's Computerized Interchange of Museum
Information project have begun looking at in-progress standards such
as HyTime (DIS 10744 -- hypermedia extensions to SGML) as aids in storing
their complex multimedia information (Bearman 1992). If we had such a mark-up
language for moving image documents, we could tag index terms to individual
shots, groups of shots, scenes, or entire films/videos. We could relate
individual (different-sized) pieces of one film/video to other pieces in
the same or other films/videos, creating true hypermedia links. We could
look at individual "chunks" of a film/video in a structured fashion, much
as many people use outliners in their word processing systems. Though
this would allow us to provide much better forms of association between
pieces of the film/video, we would first need to better understand the
structural basis of the medium. It is likely that for this we would have
to rely heavily on theories such as semiology (Metz 1974). And, of course,
implementing something like this will require much more extensive and detailed
indexing than is currently done, but this may become more economically
justifiable if the results are viewed as networked resources with a broad audience.
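A toy illustration of what structurally tagged moving image documents could enable (element names are hypothetical; HyTime itself defines far richer linking mechanisms): once shots and scenes carry index terms, retrieval can address any level of the hierarchy rather than only whole works.

```python
# Hypothetical sketch: a film represented as a hierarchy of tagged
# "chunks" (scenes containing shots), each carrying index terms,
# so retrieval can address individual shots rather than whole works.

film = {
    "title": "Example Film",
    "scenes": [
        {"id": "scene-1", "shots": [
            {"id": "shot-1", "frames": (0, 480),
             "terms": ["staircase", "tracking shot"]},
            {"id": "shot-2", "frames": (481, 960), "terms": ["close-up"]},
        ]},
        {"id": "scene-2", "shots": [
            {"id": "shot-3", "frames": (961, 2000), "terms": ["staircase"]},
        ]},
    ],
}

def find_shots(film, term):
    """Return (scene id, shot id, frame range) for every shot tagged with `term`."""
    return [(scene["id"], shot["id"], shot["frames"])
            for scene in film["scenes"]
            for shot in scene["shots"]
            if term in shot["terms"]]

assert find_shots(film, "staircase") == [
    ("scene-1", "shot-1", (0, 480)),
    ("scene-2", "shot-3", (961, 2000)),
]
```

The indexing burden is apparent even here: every shot must be segmented and tagged by hand before any of this retrieval becomes possible.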
Moving image collections have changed significantly
since the late 1970s due to the proliferation of a consumer market for
video. The imminent arrival of a consumer market for digital storage of
moving images promises to have a much more far-reaching effect. The availability
of digital versions of moving image materials (as with digital versions
of print materials) is likely to have a profound impact not just upon practices
of storage and retrieval, but on the very idea of what constitutes a collection
and what the librarian's role is.
Several of the ideas in this chapter began to
germinate in the early 1980s through discussions with Bertrand Augst, a
faculty member at the University of California at Berkeley who had been
responsible for the birth of the Film Studies Program there. These ideas
were nurtured along with assistance from Tom Schmidt of the Pacific Film
Archive and through a meeting put together at the Library of Congress by
Stephen Gong (then of the National Endowment for the Arts) and Henriette
Avram of LC. More recent influences have come from the discussions on the
future of libraries and information banks that have taken place within
the Coalition for Networked Information, and I am particularly indebted
to Paul Peters and Clifford Lynch for engendering those discussions. And
William Nasri helped clear up some points on copyright.
Bearman, David. CALS of Great Interest to CIMI, Computerized Interchange of Museum Information Committee News 3, January 1992.
Besser, Howard. Adding an Image Database to an Existing Library and Computer Environment: Design and Technical Considerations, in Susan Stone and Michael Buckland (eds.), Multimedia Information Systems (Proceedings of the 1991 Mid-Year Meeting of the American Society for Information Science), Medford, NJ: Learned Information, Inc., 1992.
Besser, Howard. Advanced Applications of Imaging: Fine Arts, Journal of the American Society for Information Science, September 1991.
Besser, Howard. Visual Access to Visual Images: The UC Berkeley Image Database Project, Library Trends 38 (4), Spring 1990, pages 787-798.
Besser, Howard and Maryly Snow. Access to Diverse Collections in University Settings: The Berkeley Dilemma, in Toni Petersen and Pat Moholt (eds.), Beyond the Book: Extending MARC for Subject Access, Boston: G. K. Hall, 1990, pages 203-224.
Besser, Howard. Digital Images for Museums, Museum Studies Journal 3 (1), Fall/Winter 1987, pages 74-81.
Burch, Noël. Theory of Film Practice, New York: Praeger, 1973.
Casorso, Tracy M. Research Materials: Now only keystrokes away, College & Research Library News 53:1, February 1992, page 128.
Gerstein, Rosalyn and Russell Sasnett. Marital Fracture: An Interactive Video Case Study for the Social Sciences, in Databases in the Humanities and Social Sciences-4, edited by Lawrence McCrank, Medford, NJ: Learned Information, 1989, pages 253-257.
Kirk, Thomas G. and Noreen S. Alldredge. Coalition for Networked Information: The second year, College & Research Library News 53, part 1: January 1992, pages 10-11, part 2: February 1992, pages 98-99.
Lynch, Clifford. The Z39.50 Information Retrieval Protocol: An Overview and Status Report, Computer Communications Review 21:1 (Sigcomm) January 1991, pages 58-70.
Metz, Christian. Language and Cinema, The Hague: Mouton, 1974.
Michon, Brian. Motion Video User Interfaces for Multimedia Workstations, in Electronic Imaging '90 West: Advance Printing of Paper Summaries, Waltham, MA: BISCAP International, 1990, pages 873-877.
Mills, Michael, Jonathan Cohen, and Yin Yin Wong. A Magnifier Tool for Video Data, CHI '92 (Proceedings of the 1992 Conference of the Computer Human Interface Special Interest Group), Association for Computing Machinery, 1992.
O'Connor, Brian. Selecting Key Frames of Moving Image Documents: A Digital Environment for Analysis and Navigation, Microcomputers for Information Management 8:2, 1991, pages 119-133.
Wilson, Patrick. The Second Objective, in Conceptual foundations of descriptive cataloging, edited by Elaine Svenonius, San Diego: Academic Press, 1989, pages xx-yy.