Image Databases
Database: The Magazine of Electronic Database Reviews [1]
By Howard Besser
 
 
 
Affiliation at time of Publication:
Visiting Associate Professor
School of Information & Library Studies
University of Michigan

Contact Information as of 6/1/98:
Visiting Associate Professor
School of Information Management & Systems
University of California at Berkeley
howard@sims.berkeley.edu
 


    Many disciplines have collections of visually-oriented materials that need to be managed, from photographs and slides to diagrams and charts, from maps to manuscripts, to objects (or what librarians used to call "realia").  Most (though by no means all) of these collections reside in libraries, museums, and archives.

    Access and management of these collections pose enormous problems for their institutions.  Because the images/objects are often very fragile, there has been a constant tension between access and preservation -- the more they're looked at, the faster they deteriorate.  Cataloging has been inadequate -- a picture often requires more than a thousand words to describe it.

    Items tend to be relatively unique, so searching through OCLC or RLIN for cataloging copy is generally not productive.  Original cataloging at the item level has been considered a very low priority, so the sparse cataloging that has been done has generally focused on collection-level records.

    But technology is beginning to catch up.  The coming "Information Superhighway" promises to deliver a wealth of "multimedia content," which has both promoted speculation in the value of visual materials (as we can see from the highly inflated prices paid for corporations such as Paramount, which own large quantities of visual materials) and spurred the development of technologies to capture, store, manage, and deliver multimedia material.

    Our computers can now store not just cataloging records about images and objects, but surrogate images as well.  Giving users access to onscreen images can vastly improve search precision and help to better preserve the originals.  Users with access to surrogate images are likely to require far less detailed descriptive information in their cataloging records, which holds out the possibility that museums and libraries can quicken the pace of cataloging these materials.

    On the technical side, with increases in storage capacity and in telecommunications bandwidth, multi-user image databases are finally becoming financially feasible.  On college campuses, libraries and computer centers are beginning to consider offering online access to collections of images as a part of their increased set of services in their roles as clearinghouses for all kinds of campus information.

    At Cornell, students can query a database and retrieve onscreen portraits of rare birds, and even ask to hear the birds sing.  At Berkeley, researchers from anywhere on campus can query a database of botanical specimens and view images of those specimens alongside early 20th century handwritten notes about them.  From any site on the World Wide Web, users can view the contents of the Krannert Art Museum, and browse through and view the images of paintings in each gallery.

    For a fee, advertising designers or magazine editors can use nice visual interfaces to query and generate orders and deliveries from online stock photo databases of hundreds of thousands of images using assigned index terms and relevancy feedback (Kodak's Picture Exchange), or natural language queries based on photo captions (ITP's Seymour).

    This article will outline some of the major points that one needs to consider in developing an image database system.  Image database development involves a number of activities that are unfamiliar to most library and database specialists, so the focus of this article is to make the reader aware of the major considerations that go into image database planning and to give a very basic introduction to functional areas which are likely to be new to the reader.  The emphasis here is on continuous-tone images (rather than on document images), and on network-accessible multi-user image databases.
 

What is a Digital Image?
    A digital image is composed of a set of pixels (picture elements) which are similar to dots on a newspaper photograph or grains on a photographic print.  The number of pixels across a given area (described as dots-per-inch or the pixel dimensions of a display device) governs the resolution of an image.  (Resolution is a measurement of detail, similar to the screening value of a half-tone or photograph.)  Dynamic range governs the color or greyscale levels that can be represented by a digital image.  As anything stored within a computer must (at the lowest level) be represented as bits and bytes, dynamic range measures the number of bits that compose a given pixel (and hence the number of colors or greys that a given pixel can represent).  What we call an 8-bit image allows each of its pixels to be represented by up to 2^8 (or 256) different colors/greys (approximately the level of standard broadcast television colors).  What we call a 24-bit image (sometimes referred to as "true color") allows the pixels to represent up to 2^24 (approximately 16 million) different colors/greys.
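
    To make these quantities concrete, the following minimal Python sketch computes the number of colors/greys a given dynamic range can represent, and the raw size of an uncompressed image.  (The 1152 x 900 dimensions below are an assumption, chosen because they were typical of Sun workstation displays of this era.)

        def colors_for_bit_depth(bits_per_pixel):
            # Number of distinct colors/greys one pixel of this depth can represent.
            return 2 ** bits_per_pixel

        def raw_image_size_bytes(width_px, height_px, bits_per_pixel):
            # An uncompressed image stores bits_per_pixel bits for every pixel.
            return width_px * height_px * bits_per_pixel // 8

        print(colors_for_bit_depth(8))              # 256 -- an "8-bit image"
        print(colors_for_bit_depth(24))             # 16777216 -- "true color"
        print(raw_image_size_bytes(1152, 900, 24))  # 3110400 bytes, roughly 3 MB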

    Image Capture is the process of scanning an image at a certain resolution and dynamic range (digitizing), then formatting and tagging the resulting file so that it can be easily retrieved.  The image capture process is very labor-intensive, and (with cataloging/indexing) one of two activities that can together account for up to 90% of the total cost of building an image database.  Because of this, it is critical that images be captured at the highest quality that is practical (so that this process does not have to be repeated in the near future), and that capture be done as efficiently as possible.

    Scanning is a process that generally resembles photography or photocopying (exposing light-sensitive diodes instead of film or paper).  The resolution of an image is measured either by the total number of dots/pixels across the camera's entire field of vision, or by the number of dots per inch across a fixed field (like the top of a copy machine).

    Technical information about the image (such as its dimensions in pixels and its dynamic range) is called header information.  A header is attached to the captured digital image file.  The resulting tagged image should conform to a particular file storage format (such as TIFF, GIF, PICT, TGA, Sun raster, CGM, etc.).  As high resolution digital image files are very large (a 24-bit fullscreen image to be displayed on a workstation such as a Sun would be 3 megabytes), users often feel the need to reduce the size of image files.  Image compression is the process of making images smaller by methods such as abbreviating repeating information or eliminating difficult-to-see information.  An image viewed after lossless compression will be identical to the way it was before being compressed.  An image viewed after lossy compression will be different than before it was compressed because some information has been lost.  Commonly used compression formats include CCITT Group III and Group IV (used by most fax machines), JPEG, JBIG, and LZW.
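
    The lossless/lossy distinction is easy to demonstrate.  The sketch below uses Python's standard zlib module (a lossless method, similar in spirit to LZW) on synthetic data standing in for image pixels; the decompressed copy is bit-for-bit identical to the original, which is exactly what lossy methods such as JPEG do not guarantee.

        import zlib

        # Synthetic "image" data: long runs of repeating values compress very well.
        original = bytes([200, 200, 200, 10] * 10000)   # 40,000 bytes

        compressed = zlib.compress(original)            # lossless compression
        restored = zlib.decompress(compressed)

        print(len(original), len(compressed))           # 40000 vs. a few hundred bytes
        assert restored == original                     # identical: no information lost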

Users and Uses

    The choice of imaging technology will depend upon who will use the resulting database and how it will be used.  User studies will prove more useful when segmented by image type (Renaissance architecture, Jurassic life-forms, contemporary photography, cancer specimens), user groups (curators, conservators, researchers, faculty, students), and use functions (browse, research and analysis, conservation).  Users and uses should then be related to a required level of image quality (high resolution, low resolution, or browse images, for example) as shown in the section "Image Quality".

    The level of resolution determines the amount of information an image contains.  It also influences the way it can be used.  For example, medium resolution images of a particular collection may be sufficient for classroom use by undergraduate students, but would be inappropriate for a conservator exploring the technical construction of a work.  It is important to study both current users and uses (i.e., for non-automated systems) as well as to anticipate potential future users and uses.  Technical and statistical data that will need to be gathered for this (as well as other implementation concerns) are outlined in the section "Constructing a Working Environment".

    Within an institution, an image database should fit well with other automated functions (both current and planned).  The image database should be incorporated into a general institution-wide automation plan that takes into consideration hardware, software, operating systems, and networks.  (For example, a Windows-based institution probably would not be well-served by purchasing a Mac-based image database product.)  In addition, most institutions will want to integrate their image database with a collection management system, an online public access catalog, a publishing system, and/or a business or administrative system.  How images and accompanying descriptive information pass between any of these systems and the image database must still be studied.

    Functional needs for each class of potential users must be examined.  Will a research-oriented workstation prove useful?  Will users want to integrate image display with other institutional information management services?  (For example, would users want to display a record from a curatorial research database alongside the image?  Or a critique from an online journal article?  Or would a user want the results of a search of a collection management system to display images for each of those objects?)  Will users want to integrate image display with various personal information services?  (For example, will users want to place images within a word processing document, or put text records describing images into a bibliography or footnote?)  The answers to these questions will all have an impact upon the imaging system selected.

Documentation/Standards
    The images in any image database will require accompanying documentation.  The depth of this documentation is likely to vary from institution to institution, depending upon institutional and user needs.  An institution will benefit greatly from using documentation standards; not only will it be able to take advantage of outside expertise in the vocabulary and rules needed to describe an image, but by adhering to standards it will be able to share information with other institutions (and may even be able to save cataloging and indexing time by incorporating portions of documentation records from other institutions).

    When dealing with documentation, it is important to maintain the distinction between documentation that refers to the original work, to photographic representations of that work, and to a digital image that may be captured from either the original or a photographic representation, or derived from a pre-existing image file.

    As with any library application, consistency (both in terms of descriptive and access terminology) is highly desirable.  A key to consistency is controlled vocabulary, but broad tools like LCSH or Sears are usually ineffective in dealing with images coming from any particular domain.  Image database developers should examine other controlled vocabulary tools such as the Art and Architecture Thesaurus, the Thesaurus of Geographic Names, ICONCLASS, MeSH/UMDS, Nomenclature, etc.

    Various types of documentation refer specifically to surrogate images (rather than to the original works).  Standards for descriptive elements such as views ("as seen from the northwest", "left bottom corner detail") are being developed by the Visual Resources Association.

    Technical documentation standards (including information about the scanning process, compression, source image) are being examined by groups such as CIMI (Computerized Interchange of Museum Information).  Information about image resolution and compression used will be critical to image display and decompression.  Noting the scanner model and scan date will prove important for future color balancing on different display devices.  Information about rights and reproductions, image source, and links to other views and details will be required by users.  Recording the scanner operator will help ensure quality control.
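
    As a hypothetical illustration (none of the field names below is drawn from a published standard), the kind of technical documentation record described above might look like this in Python:

        # Hypothetical technical-documentation record for one captured image.
        image_record = {
            "image_file": "example_0041.tif",       # invented filename
            "pixel_dimensions": (3072, 2048),       # needed for display
            "dynamic_range_bits": 24,
            "compression": "JPEG",                  # needed for decompression
            "scanner_model": "ExampleScan 2000",    # invented; aids future color balancing
            "scan_date": "1995-01-15",
            "scanner_operator": "jdoe",             # supports quality control
            "source": "35mm slide of the original work",
            "rights": "classroom display only",     # rights and reproductions
            "related_views": ["example_0042.tif"],  # links to details/other views
        }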

    If an institution wishes to exchange images with other institutions and to ensure a clear upgrade path for its systems, it should rigorously adhere to common standards.  Proprietary compression and file storage formats should be avoided.  Commonly accepted image storage formats include: TIFF, GIF, PICT, TGA, Sun raster, and CGM; common compression formats include: JPEG, JBIG, LZW, and QuickTime.  But the reader should be cautioned that some commonly-accepted standards are not fully standardized, and a file created in one version of a standard format such as TIFF is not necessarily readable by all TIFF readers.  In addition, other types of information for which there are not yet standards (rights and restrictions, basic credit information, information about the scan) should accompany an image in any type of exchange.

Access
    Most searches of image databases begin with text-based queries.  Different groups of users use different criteria in order to find the images they are seeking.  This must be considered when standards are adopted for description and indexing.  Before selecting image database software, it is important to identify which categories of information must be searchable (or act as access points).
 
    In the future it may be possible to retrieve images based upon criteria other than pre-assigned index terms.  Current research into automatic indexing and pattern recognition may prove fruitful in retrieving images by colors, iconic shapes, and positioning of elements within the image frame.  Stock photo houses that cater to the advertising industry have had some success in using automatic indexing to answer queries like: "retrieve images with shades of blue in the top part of the frame and shades of green in the bottom part" (landscapes).
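
    A minimal sketch of that kind of query, written in pure Python over a synthetic pixel array (a real system would use far more sophisticated pattern recognition):

        # A pixel is an (r, g, b) tuple; an image is a list of rows of pixels.
        def is_landscape_like(image):
            # Crude test: mostly blue in the top half, mostly green in the bottom.
            half = len(image) // 2
            top = [px for row in image[:half] for px in row]
            bottom = [px for row in image[half:] for px in row]
            blue_top = sum(1 for (r, g, b) in top if b > r and b > g) / len(top)
            green_bottom = sum(1 for (r, g, b) in bottom if g > r and g > b) / len(bottom)
            return blue_top > 0.5 and green_bottom > 0.5

        # Synthetic 4 x 4 test image: "sky" over "grass".
        sky, grass = (80, 120, 220), (60, 180, 70)
        image = [[sky] * 4, [sky] * 4, [grass] * 4, [grass] * 4]
        print(is_landscape_like(image))  # True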

Access must be defined by the needs of image database users: what types of queries will each group make?

Do users require browse images (and if so, what types of identification should accompany each image)?
Will users need to view images in various levels of resolution?
Would image processing functions (such as changing colors, zooming, or annotation) be useful?

Planning for access requires more than just functional concerns; it should also consider legal questions.  Issues of rights and restrictions are very complex.  The legal framework within which image databases are managed is rapidly evolving and has yet to be defined.

The institution must be sure it has the right to reproduce an image electronically for user display (simply owning a work does not mean owning electronic reproduction rights, and distribution rights vary depending upon the purpose proposed).
The institution must determine the conditions under which each group of users has the right to print or download that image, and it must build a system that enforces those rules.  In some instances this will require a system that can track frequency of use for each image (and do this by user group); a simple sketch of such tracking follows this list.
The institution may decide to display text which outlines use restrictions adjacent to each image.
The institution may decide to embed digital "fingerprints" into each image to help track subsequent unauthorized use.
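
    A simple sketch of the per-user-group use tracking mentioned in the list above (the group and image identifiers are invented for illustration):

        from collections import Counter

        # Counts image requests per (user group, image) pair.
        usage = Counter()

        def record_use(user_group, image_id):
            usage[(user_group, image_id)] += 1

        record_use("students", "img_0041")
        record_use("students", "img_0041")
        record_use("faculty", "img_0041")
        print(usage[("students", "img_0041")])  # 2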

Image Quality

    Image quality must be evaluated based upon the use to which an image is put.  Image quality is the cumulative result of the scanning resolution, the dynamic range of the scanned image, the source material scanned, the scanning device or technique used, and the skill of the scanning operator.  The full quality of the image stored is often not reflected in display, as most display devices are capable of far less resolution than printing devices.  There are a wide variety of trade-offs that system designers must make when choosing quality levels, and most of these result from the fact that image files could take up more than 30 megabytes each.  The large file size required for high quality images places demands upon storage, transmission, and viewing.  Some image quality may have to be compromised to enhance system performance.

    Digital image quality can never be better than that of the source material from which it is scanned.  If the institution wants the digital image quality to be superior to that of a 35mm slide, it must scan from a source that is of a higher quality.  Decisions about quality should be made in the context of how the images will be used.  A key trade-off is storage vs. image quality.  The higher the quality of an image, the more storage it will occupy and the more system resources it will require, including higher bandwidth networks to move it around, more memory in each workstation to display it, and longer and costlier scanning.

Integrating Images and Text into a Database: Commercial Systems
    Most image databases consist of three basic components: a text database, software for browsing thumbnail images, and software for viewing individual images in detail.  The image database system must also include the software to seamlessly integrate these components.

    The text-based database can be a bibliographic retrieval system, a collection management system, a relational database management system, or a very simple text-based retrieval system.  Usually an additional field (like a MARC 856 location field) containing the filename of the image is added to this text-based system to provide a link to an image file.  The user queries the text-based system as if s/he were looking for a text record.  When the query is narrowed sufficiently, the user may press a button or hit a function key which causes the overall software to look up the filename fields in the current query set of text records, and invokes browsing software that displays a set of thumbnail images associated with the current query set.  Most systems also allow the user to point and click on any of these images, invoking viewing software which will display a larger version of that image.
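
    That linkage can be sketched in a few lines.  Here Python's built-in sqlite3 module stands in for the text-based system, and the table and field names are invented for illustration:

        import sqlite3

        con = sqlite3.connect(":memory:")
        con.execute("CREATE TABLE records (title TEXT, subject TEXT, image_file TEXT)")
        con.execute("INSERT INTO records VALUES (?, ?, ?)",
                    ("Water Lilies", "painting", "lilies_0001.gif"))

        # The text query is narrowed first; the image-filename field of the result
        # set is then handed to the browsing software to display thumbnails.
        rows = con.execute(
            "SELECT title, image_file FROM records WHERE subject = ?", ("painting",))
        for title, image_file in rows:
            print(title, "->", image_file)  # the browser would load each file here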
 
    Most browsing software integrates smoothly with most viewing software.  But at this point in time the integration between text management and browsing tools is still in its infancy.  Most implementations have begun with a sophisticated tool for one of these, and attached a very simplistic tool for the other.

    Sophisticated text management tools (like bibliographic retrieval systems and collection management systems) have generally either written their own software or acquired proprietary software to handle browsing and viewing functions.  Though these tend to incorporate the most common functions requested by current users, the vendors of these products are not likely to be able to respond very quickly (if at all) to future demands and technological changes that are certain to come.  The tiny market (hundreds of sales are considered a good year) makes it difficult for any vendor to even track technological developments in the image/multimedia area, let alone assign a team to actually implement products.  And the proprietary nature of most of these systems calls into question interoperability with other software and functions.

    The large consumer marketplace has driven the development of image browsers.  The marketers of image browsers (such as ShoeBox, Picture Prowler, Aldus Fetch, Media Cataloger, Media Tree, Kudo Image Browser, ImageAccess, Cumulus, CompassPoint, and Multi-Ad Search) anticipate tens of thousands of sales, which give them the economies of scale to permit constant monitoring of technological developments and continuous upgrading of their software.  And, indeed, these tools provide visually appealing graphic interfaces, good browsing functionality, and excellent integration with viewing tools.  But because they feel that their mass market has little need for sophisticated text management, they have tended to provide only superficial functionality in this area.  Most of these browsers limit the number of text fields to a dozen or fewer, and many of them fix the names of some of the fields (such as "photographer").  For most of these tools, the only way to assign multiple values to a field (such as two subject headings) is to put both terms into the same field separated by a semicolon.  (This means that most of these packages use character-string matches for retrieval on any given field, which is both slow and leads to false hits [like retrieving words such as "cart", "artifact", "start", and "particular" in response to a query for "art"].)  Even the most sophisticated of these browsing tools (Kodak's ShoeBox) poses significant additional problems for those in the library world: there is no networked version and all dates are assumed to be 19xx.
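
    The character-string retrieval problem is easy to demonstrate; a substring match finds every embedded occurrence of "art", while matching on discrete field values does not:

        subject_field = "cart; artifact; start; particular; art"
        values = subject_field.split("; ")

        # Character-string matching: every term contains "art", so all but the
        # last are false hits.
        print([v for v in values if "art" in v])  # all five terms

        # Matching on discrete field values avoids the false hits.
        print([v for v in values if v == "art"])  # ['art'] only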

    What we really need is the best of both these worlds: sophisticated text management systems that link seamlessly to high quality image browsers designed for the consumer market.  For this to happen, the library world must first get together and define the ways in which these systems must communicate, then, following the path taken to get Z39.50 adopted, pressure the vendors into accepting these definitions/standards.  Work along these lines has already begun through the Coalition for Networked Information's Committee on the Transformation of Scholarly Communication.  Those who choose to go ahead and use either the proprietary bibliographic retrieval system browsers or the consumer oriented image browsers still need to consider what labeling their users will need adjacent to each browse image, and what type of text record display (if any) should be available by clicking on a browse image.

Selecting Scanners
    The type of scanner used will be influenced by whether capture will take place from the work itself, or from photographic reproductions.  One of the largest cost centers in an imaging project is the process of bringing material to the scanner, setting up lighting and changing lens focal length, and returning the source material to its proper place.  Institutions that already have surrogate images (such as 35mm slides) of their collection will find it much more cost-effective to use these as source materials.  But if they do, the resultant digital image quality will be limited by the quality of those source materials.

    Scanning requires both a piece of hardware (like a camera or a photocopy machine) and a piece of software, running on a computer, that controls part of the scanning process.  Often these are bundled together.  The software is used for a number of purposes, including: controlling exposure, adjusting resolution, framing the image, and storing the digital image in an appropriate format.

    An alternative to setting up onsite scanning is to contract with a vendor to scan material offsite.  This tends to be a viable option only if all of the source materials are similar-size surrogates (such as 35mm slides), and resolution needs are oriented towards viewing entire images at once (rather than zooming in on details).  The current technology that is most cost-effective for this is Kodak's Photo CD.  A wide variety of service bureaus offer image capture using this technology.  In all types of scanning (both in-house and off-site), scanning quality can vary widely and must be monitored to ensure consistent results.

Other Resources
    Imagelib is a listserv covering general non-technical discussions about accessing, building, and maintaining image databases. Moderated by Stuart Glogoff of the University of Arizona Library, this is the primary forum for the library world to engage in ongoing discussions about digital imaging projects. Topics expected to be covered include accessing existing databases, creating image databases, exploring copyright issues, and exploring academic use of images. To subscribe, send the message "SUB imagelib" to listserv@arizvm1.ccit.arizona.edu or view it through readnews at news:bit.listserv.imagelib.

    The group that maintains Imagelib also is building a Clearinghouse of Image DB Information which will contain information about existing image database projects. To access this, telnet to dizzy.library.arizona.edu and login as gopher.

    In mid-1994 the Getty Art History Information Program (AHIP) created the AHIP Imaging Initiative, with the goal of jump-starting imaging in the area of cultural heritage and the arts. Under the direction of Jennifer Trant, the initiative plans to distribute a number of educational documents (written by Howard Besser) to help readers to design and create image databases. The initiative is also supporting activities to deal with imaging standards and intellectual property issues (such as the Museum Educational Site License Project outlined below). For further information contact Kim Chaskel at kchaskel@getty.edu.

    The Museum Educational Site License Project (MESL, formerly MUSE) will be a cooperative venture for licensing digital images (and accompanying rich documentation) from six museums to six higher educational institutions. Initial funding for the project comes from the Getty Art History Information Program's Imaging Initiative, and the project is designed to address licensing and rights issues. In the course of this project it will be necessary to build operable image access and delivery systems, and there is some possibility that a central image bank (or at least a central image registry) will be a by-product of this project. Groups supporting this project include the Getty, the Coalition for Networked Information, and the Association of Art Museum Directors. Principals involved include: Howard Besser, David Bearman, Jennifer Trant, Geoffrey Samuels, Clifford Lynch, Brian Kahane, and Maxwell Anderson. A discussion group can be accessed at musimage@cni.org. Further information can be obtained from ftp://ftp.cni.org/pub/MUSE, or by contacting Geoffrey Samuels at geoffsam@aol.com.

    The Research Libraries Group has formed the Digital Image Access Project (DIAP). At its root, this is a preservation project that focuses on access to photographic collections in digital form as a method to avoid wear and tear on the originals. Over half a dozen RLG libraries participate in this project, and are working together to explore issues of intellectual control, collection and resource management, etc. The project hopes to develop guidelines and models to assist in decision-making about online image access systems. One of the project sites (UC Berkeley) is exploring the incorporation of the digital images from this project into an NEH-sponsored project to create a set of SGML DTDs for archival finding aids. The project manager is Patti McClung of RLG, and the vendor providing resources for this project is Stokes Imaging (johnr@stokes.com).

    The Coalition for Networked Information, the American Council for Learned Societies, and the Getty Art History Information Program have formed the National Initiative for a Networked Cultural Heritage (NINCH). Because of the perception that the needs of other communities (such as the scientific one) are automatically incorporated into new technological developments by policymakers, this initiative is designed to assure the fullest possible participation of the arts and humanities in the formation of the National Information Infrastructure (NII). One of the goals of this initiative is to define the particular challenges that cultural heritage poses for the NII, and to lobby policymakers to make sure that the needs of the arts and humanities are not left out. The document that led to the formation of this group can be found at gopher://gopher.cni.org/00/cniwg/transformation/humanities.initiative.

    The Museum Informatics Project at UC Berkeley maintains the MIP WWW Server that contains a variety of useful information related to image databases. Through this server one can access a variety of Image Database projects on the Berkeley campus, including the Architecture Slide Library (SPIRO), the SMASCH Project (botanical information), and the California Native American Project. Plans are being developed to mount a variety of other resources (including Howard Besser's Image Database Bibliography). The MIP Server is accessible at http://www.mip.berkeley.edu.

    Howard Besser maintains pointers to a wide variety of image and multimedia resources, including many that he and his students have helped to build. These are available either through his homepage at the University of Michigan (http://http2.sils.umich.edu/~howardb/HomePage.html) or through the homepage for his UC Berkeley class (http://bliss.berkeley.edu/impact/main.html).

Information about standards and methods for describing images can be found in the publications and online listservs of the Visual Resources Association and the Art Libraries Society of North America, as well as in a variety of publications and projects of the Getty Art History Information Program.

Conclusion
    Digital imaging holds great promise for providing wider access to original material and enhancing scholarship.  For a digital image database to be useful beyond a single short-term project and/or beyond a narrow user base, the database must be constructed according to common standards in both technical and descriptive areas.
 

[1] The author wishes to acknowledge the generous assistance of the Getty Art History Information Program, which funded the conversion of his lectures into prose form.  Parts of the material in this article are © by the Getty Trust, and used here with their permission.
 

 

