Delivering Images to Multiple Users:
Problems and Pitfalls
<draft submission to R. I. A. O.  1991>
© NOTICE
 
Howard Besser & Chas DiFatta
Library & Information Science
University of Pittsburgh
Pittsburgh, PA  15260
USA
1 - (412) 624-9457
 

    Providing access to electronic images in a multi-user environment poses a number of problems not faced by designers of single-user image storage and retrieval systems.  Among these are networking problems, and the distribution of functions such as processing, storage, and compression among centralized and end-user workstations.  This paper will explain the issues involved, then describe the Berkeley ImageQuery model which was designed to address some of these.  Finally, the paper will explain what has been learned from experimentation with the ImageQuery model, and point to still other areas which need to be examined.

Analog vs. Digital Storage
    Most experiments involving the storage and retrieval of electronic images have involved single-user systems, and have taken place on small computers such as the Macintosh and the IBM PC.  Some of these (such as Voyager's Renoir project and IBM's Historic Vienna project) have relied upon analog storage of images (usually on videodiscs), while others (such as Bank Street College's Palenque project) have stored images digitally.

    Far more experimentation has been done with analog systems, primarily because of storage considerations.  Analog systems are capable of storing 108,000 color images on a single videodisc platter, while (depending upon resolution, dynamic range, compression, and the capacity of the optical disk) digital systems seldom can store even 10,000 images on an optical disk (and are likely to store less than half that number).  Storage capacity has been the single most important factor limiting experimentation with digital image delivery systems.

    A normal digital color image that fills the screen of a standard megapel workstation (such as a SUN, MicroVax, NeXT, or RT) will take up 1 megabyte of storage for an 8 bit image or 3 megabytes for a "true color" 24 bit image.  Compression algorithms might bring storage down to 1/3 to 1 megabyte per image, but not much further than that (Alexander 1990; Getty 1988:29).  This means that only 1,000-3,000 images of this kind could be stored on a 1 gigabyte disk.  Storing smaller images, of the size of a PC screen, would increase the number of images stored roughly four-fold, to 4,000-12,000 images/gigabyte.
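The storage arithmetic above can be sketched with a short calculation (a hypothetical helper written for this discussion; the 1024x1024 screen size and 3:1 compression ratio are illustrative assumptions consistent with the figures in the text):

```python
def images_per_gigabyte(width, height, bits_per_pixel, compression_ratio=1.0):
    """Rough count of images fitting on a 1 gigabyte disk."""
    bytes_per_image = width * height * bits_per_pixel / 8 / compression_ratio
    return int(10**9 // bytes_per_image)

# Full-screen 8 bit image on a megapel (1024x1024) workstation, uncompressed:
print(images_per_gigabyte(1024, 1024, 8))        # 953 -- roughly 1,000 images
# The same image at roughly 3:1 compression (about 1/3 megabyte each):
print(images_per_gigabyte(1024, 1024, 8, 3.0))   # 2861 -- roughly 3,000 images
```

These round off to the 1,000-3,000 images per gigabyte cited above.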

    In the past, the cost of storage for an image set of sufficient size to really do research on retrieval issues has inhibited many researchers from experimenting with digital image databases.  But perhaps more importantly, the high cost of both storage and workstations has made it impractical to consider the actual implementation of a digital database for even a moderately sized image collection.  This has also had some effect on research, as many potential funding sources have been reluctant to grant money for research into digital storage systems when analog storage is available at a fraction of the cost.

    But certain limitations of analog systems, as well as cost breakthroughs in digital systems, are combining to make digital systems a viable option.

    The primary problem with analog (videodisc) systems is that they are almost exclusively single-user systems, and images cannot be viewed remotely. [footnote 1]  This means that each user must have a video workstation (usually consisting of a videodisc player and separate monitor) adjacent to their computer workstation.  The relatively high cost per seat is the same no matter how many users there are on the system.  Unlike a digital system (where there are large upfront costs for database storage and each additional seat spreads that cost more thinly), there is no financial benefit to adding additional users.

    Single-user systems cause problems in updating and adding to the database.  If even a single image is added or changed, the original disc must be remastered and each videodisc at each workstation must be replaced.  Updating or adding to a multi-user (digital) system creates none of these problems, and the database remains consistent to all users at all times.

    We have seen how the vast storage needs of image collections are more realistically handled by an analog (videodisc) system.  Yet we have also seen how such systems are not very practical for multiple users, particularly when the database is growing and/or changing.  Inevitable drops in digital storage cost with the mass production of digital optical storage devices may lead to such systems becoming practical for small to medium sized collections.  At the moment, trends seem to indicate that this is a real possibility, which will make research in this area more viable.

Issues for a Multi-user Image Database
    Rejecting the analog/videodisc notion of the entire database stored at each user's workstation, the remainder of this essay will examine the issues involved in the construction of a multi-user digital image database.  We will assume that any such configuration will encompass a number of workstations joined together with a high speed digital network.  We will also assume that the images and associated text are stored on digital storage devices in one or more places on this network.

    The vast size of each image is the major technological factor that makes image retrieval different from retrieval of standard text.  As we observed earlier, full-screen 8 bit color images on a megapel workstation will take up a minimum of 1/3 megabyte.  But, for certain kinds of objects, 15 megabytes (Nugent, 1990) and as much as 30 megabytes might be required (Ester 1990; Getty 1989; Alexander 1990).  This vast size has profound effects on storage, access, network performance, and response time.

Storage and Access

    We have seen how the large storage requirements for images are likely to require mass storage devices (such as optical disks) for even a moderately sized collection.  Medium to large collections will require access to multiple optical disks, and are likely to need some kind of juke box set-up.

    Optical drives currently available take one (or a combination) of three forms: read-only (CD-ROM), write once read many (WORM), or erasable (usually of the magneto-optical variety).  Read-only drives pose some of the same problems as do videodiscs (outlined above): you cannot add to or update them without re-mastering.  But the discussion here will revolve around access to an already-existing database, and issues of creation and updating will not be dealt with further.
 


Chart A
Sample Average Image Transfer Time
 
Drive                  Average Disk Access Time   Transfer Rate    Access/Delivery Time for 1 M image
Sony WDD3000 WORM      250 milliseconds           2400 KB/second   .67 seconds
Hitachi OD101 WORM     80 milliseconds            1000 KB/second   1.08 seconds
Sony D501 erasable     95 milliseconds            680 KB/second    1.57 seconds
Canon M-O erasable     92 milliseconds            1000 KB/second   1.09 seconds
 

 
    Designers of systems for image retrieval must seriously consider disk access and transfer time.  As shown in chart A, under ideal circumstances, it would take from 2/3 to 1.5 seconds just to pull a 1 M image off the disk.  This does not take into consideration the amount of time it would take to transmit that image to a user or display it on a workstation.

    With multiple users trying to simultaneously access images from the disk, the actual disk access and transfer time can grow considerably.  Under a normal set-up (a single copy of the database on a single disk), there is a single pipeline of requests for disk access waiting one behind the other.  If 5 users simultaneously request an image, the last user will have to wait for the four other disk accesses and transfers to be performed before the system will even consider attempting to retrieve his/her image from disk.  This would increase the effective disk access/delivery time approximately five-fold.
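The single-pipeline effect described above scales linearly with queue depth, which can be shown with a minimal sketch (the function name is hypothetical; the per-image figure is the Hitachi OD101 value from Chart A):

```python
def effective_delivery_time(per_image_seconds, queue_position):
    """Seconds until the user at queue_position (1-based) receives an
    image when all requests wait one behind the other in a single pipeline."""
    return per_image_seconds * queue_position

# Five simultaneous requests against a drive that delivers a 1 M image in
# 1.08 seconds: the fifth user waits roughly five times as long as a
# lone user would.
print(effective_delivery_time(1.08, 5))   # about 5.4 seconds
```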

    The utilization of jukebox storage increases the number of problems and considerations.  Access time is significantly increased (5-10 seconds or more) due to the time it takes to mechanically remove the disk from its storage slot and position it over the read head.  As observed earlier, with multiple simultaneous users, the delay effect from queuing all these requests in a linear pipeline can make response time unacceptably slow.  But multiple heads within the jukebox can partially relieve this problem (as can redundant data).

Transmission

    The time it takes to transmit an image is huge (see chart B).  These are just transmission times (under ideal circumstances), and do not take into consideration disk access or remote display.  Even within a relatively fast workstation such as a SUN 3, it would take 0.03 seconds to move a 1 megabyte image around on the workstation's own internal bus.  Normal telephone transmission for such images is prohibitive, as few people would be willing to wait the hour necessary to transmit a single image at 2400 baud.
 

Chart B
Image Transmission Time
 
 
Speed (bits/second)    Transmission Time (1 M image)   Transmission Time (1/2 M image)
1200 bps               139 minutes                     69 minutes
2400 bps               69 minutes                      35 minutes
9600 bps               17.4 minutes                    8.7 minutes
56 Kb                  2.97 minutes                    1.49 minutes
T-1 (1.544 Mb)         6.5 seconds                     3.2 seconds
Ethernet (10 Mb)       1 second                        .5 seconds
T-3 (45 Mb)            0.2 seconds                     0.1 seconds
FDDI (100 Mb)          0.1 seconds                     0.05 seconds
SUN bus (320 Mb)       0.015 seconds                   0.007 seconds
 

     At transmission speeds of T-1 and above, image transmission does become feasible.  But even at these speeds, image transmission is likely to cause problems.
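Most of Chart B's entries can be reproduced by treating a 1 M image as roughly 10 million bits on the wire, i.e. about 10 bits per stored byte; this constant is an inference from the chart's figures (it covers framing overhead), not a value stated in the text:

```python
# Inferred from Chart B: a 1 M image occupies roughly 10 million bits on
# the wire (about 10 bits per stored byte, covering framing overhead).
BITS_PER_MEGABYTE_ON_WIRE = 10_000_000

def transmission_seconds(megabytes, bits_per_second):
    """Ideal-case transmission time, ignoring disk access and display."""
    return megabytes * BITS_PER_MEGABYTE_ON_WIRE / bits_per_second

print(round(transmission_seconds(1, 1200) / 60))    # 139 (minutes, as in Chart B)
print(round(transmission_seconds(1, 1_544_000), 1)) # 6.5 (seconds over T-1)
print(transmission_seconds(1, 10_000_000))          # 1.0 (seconds over Ethernet)
```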

    First of all, any given network transmission speed (such as 1 megabit/second) is the speed that would be achieved if the network were empty.  Having a large number of users moving images around a network is sure to lower network performance, but we have no idea yet by how much.

    Secondly, sets of images forced to pass through a gateway or router (either at a point near the image server or somewhere else along the network path) are likely to back up there both due to the gateway's handling, and as they wait for enough bandwidth to pass onto the next part of the network.

    Recently, telecommunications companies have begun offering services that will drastically increase the transmission speed of images.  Surprisingly, many of these services are targeted specifically at image applications.  Large and small business firms may now purchase internetwork services (using SMDS) that achieve 45 Mb/second speeds continuously over 3,000 miles.  At present, the cost of such services is prohibitive to most businesses, but it is decreasing as more telecommunications companies offer the service.

Response Time

    One can expect that users of an image database will ask to see a number of images before they find the particular image that they are really looking for.  Even in well-indexed bibliographic databases (which are far more advanced than image databases), it is considered good precision to have to look at only 4 unnecessary citations out of every 10 retrieved (Dillon 1988).  Retrieving large image files from storage, and sending these across a network without being certain that these are the images that the user really desires, would be an unacceptable strain on system resources.

    Online Public Access Catalogs typically transmit a brief citation of a work, and the user may then request to see the full citation after ascertaining whether this work is what s/he really wants.  Because an image cannot adequately be described in words, image databases will first have to offer users a textual description of the image, then an abstract (i.e. reduced size and resolution) of the image itself, giving the user the opportunity to forego tying up the system with a large-scale transmission until s/he is relatively certain that this image is really desired.

    It is also likely that some users will require higher resolution images (and be willing to endure a longer response time to view these) than other users.  But the system cannot afford to store an array of different quality versions of each image (from abstract to high resolution).  An interesting solution to this problem might be progressive transmission, where iteratively better versions of the image are transmitted to the user, giving the user the chance to stop the transmissions whenever s/he is satisfied with the quality.  This is roughly analogous to delivering every 10th pixel to the user's screen on the first pass, then every 5th pixel on the second pass, every 3rd pixel on the third pass, and so on.  The first version transmitted will be an abstract, and may be good enough for the user to determine that it is not the desired image.  Each subsequent version improves image quality, while consuming only minimal additional system resources.  The drafts for the CCITT Group IV (facsimile) standards incorporate progressive transmission features.
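The every-Nth-pixel scheme sketched above can be illustrated in one dimension (a toy version; the stride schedule follows the text's 10th/5th/3rd example, with a final full-resolution pass added as an assumption):

```python
def progressive_passes(pixels, strides=(10, 5, 3, 1)):
    """Yield refinement passes: each pass carries the pixels at the given
    stride that earlier passes have not already transmitted."""
    sent = set()
    for stride in strides:
        new = [i for i in range(0, len(pixels), stride) if i not in sent]
        sent.update(new)
        yield [pixels[i] for i in new]

scanline = list(range(30))                     # a toy 30-pixel scanline
passes = list(progressive_passes(scanline))
print([len(p) for p in passes])                # [3, 3, 8, 16] -- each pixel sent once
```

The user can abandon the transfer after any pass, so an abandoned image costs only the few pixels already sent rather than the full file.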

Distributed Systems

    One way of alleviating the problem of access time is through the distribution of storage among a number of disk drives.  This could take the form of redundant copies of the images sitting on different disk drives, or of distributing a single set of images among different drives in such a way that simultaneous requests are likely to be answerable from different drives.
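The second arrangement, spreading a single image set across drives, can be illustrated by hashing each image identifier to a drive (an illustrative scheme devised for this sketch, not one described in the paper; the image names are invented):

```python
import zlib

def drive_for_image(image_id, drive_count):
    """Assign an image to a drive so that simultaneous requests for
    different images tend to land on different drives."""
    # crc32 is stable across runs, unlike Python's built-in hash() on strings
    return zlib.crc32(image_id.encode()) % drive_count

requests = ["mona-lisa", "starry-night", "guernica", "water-lilies"]
print([drive_for_image(r, 4) for r in requests])   # each value is a drive 0-3
```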

    Another solution can be achieved through the technique of disk caching.  This technique transfers the image initially from the optical drive to very fast high density magnetic disk drives, where the transfer rate and access time from the drive to the workstation are on the order of 100 times faster than from the optical drive.  Obviously, a larger number of requests can be serviced in a given unit of time with a magnetic drive than with an optical one, but the magnetic drive is much more expensive in cost/Mb.  Such a system also requires a fast microcomputer, and software that can determine the optimum set of images to keep on the magnetic drive.  To further decrease access times, large amounts of very fast solid state memory can be used as a buffer between the magnetic drive and the network.  Unfortunately, the cost of this scheme restricts it to very expensive high end systems.

    The distributed alternative is to use existing magnetic drives attached to workstations on a high speed digital network.  These (Unix based) workstations can use a distributed network file system (AFS) that incorporates this disk caching scheme.  Each workstation can also share the images on its own local optical drive with any other workstation on the network.  Users are also likely to appreciate this scheme because they can manage their own images rather than relying on a server to act as a clearinghouse; hence the images are truly distributed.
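The caching idea above can be sketched as a least-recently-used cache in front of a slow optical drive (class and parameter names are hypothetical, and real systems such as AFS use far more sophisticated policies; this only illustrates the fast-path/slow-path split):

```python
from collections import OrderedDict

class ImageCache:
    """Toy magnetic-disk cache in front of a slow optical drive,
    with least-recently-used (LRU) eviction."""

    def __init__(self, capacity_mb, fetch_from_optical):
        self.capacity_mb = capacity_mb
        self.fetch = fetch_from_optical      # slow path: the optical drive
        self.cache = OrderedDict()           # image_id -> (data, size_mb)

    def get(self, image_id, size_mb=1):
        if image_id in self.cache:           # fast path: magnetic-disk hit
            self.cache.move_to_end(image_id)
            return self.cache[image_id][0]
        data = self.fetch(image_id)          # miss: read from the optical drive
        self.cache[image_id] = (data, size_mb)
        while sum(s for _, s in self.cache.values()) > self.capacity_mb:
            self.cache.popitem(last=False)   # evict the least recently used image
        return data

cache = ImageCache(capacity_mb=2, fetch_from_optical=lambda i: "image-" + i)
for image_id in ("a", "b", "a", "c"):        # "b" is least recently used ...
    cache.get(image_id)
print(list(cache.cache))                     # ['a', 'c'] -- ... so "b" was evicted
```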

    Other issues for a distributed image database system include: whether decompression of compressed images should be done at the workstation or at the server, and whether expert techniques can be used to help the system anticipate user desires (based on an initial retrieval set) and transmit an appropriate set of images to the user's workstation before s/he actually requests to see them.

Image Servers on a Wide Area Network

    Recently Americans have begun discussing the possibility of building an image server prototype that would provide fast access to large image collections from workstations attached to the Internet.  Since the conception of the U.S. National Research and Education Network (NREN), most professional conferences have hosted presentations discussing the possibility of nationwide access to the bibliographic data of large collections.  Not only is it a technological problem to build such servers and attach them to a large distributed 2 gigabyte/sec network that does not yet exist; the copyright ramifications of such a project will also be substantial.

    To construct a device capable of delivering high resolution images to workstations attached to a high speed network, the following features must be included.

1. Capability of transmitting data at a minimum of 10 Mb/sec sustained output.

2. The network interface must include IEEE 802.3, 802.5 and FDDI, as well as any proposed network interfaces to be primarily used on the NREN.

3. Must be able to distribute its processing needs dynamically to more capable devices attached to its local bus, or somewhere on the NREN. (MACH)

4. Compression and decompression devices must be available services.

5. Drive caching subsystems must also be available but not necessarily exist on the server itself.

6. Network protocols such as the Internet suite (including X) and OSI must be supported simultaneously.

7. New sets of protocol families are needed to accommodate a client-server model of image querying and delivery.  These protocols should cover full-text descriptions as well as traditional textual information about the image, and services such as compression, decompression, progressive transmission, scaling, dithering, 24 bit to 8 bit clustering, and authentication.

8. Must be capable of supporting the lowest common denominator of image devices, such as a monochrome X terminal.
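One of the services named in point 7 above, 24 bit to 8 bit clustering, can be approximated very crudely by a fixed 3-3-2 bit split of each pixel; true clustering would build an adaptive palette from the image, so this sketch is only an illustration of the bit budget involved:

```python
def rgb24_to_8bit(r, g, b):
    """Map a 24 bit RGB pixel onto a fixed 256-entry palette using a
    3-3-2 bit split (3 bits red, 3 green, 2 blue)."""
    return (r >> 5) << 5 | (g >> 5) << 2 | (b >> 6)

print(rgb24_to_8bit(255, 255, 255))   # 255 -- white maps to the top palette entry
print(rgb24_to_8bit(0, 0, 0))         # 0   -- black maps to the bottom entry
```

Performing this reduction at the server (rather than the workstation) is exactly the kind of decision such a protocol family would have to negotiate.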

Berkeley System

    The Berkeley Image Database System (ImageQuery) was designed for a networked campus environment, giving a large number of users remote access to several different image collections (Besser & Snow forthcoming).  The prototype system allows users to browse through images in the University Art Museum, the Lowie Museum of Anthropology, the Architecture Slide Library, and the Geography Department's map collection (Besser, 1990).

    The software is built on a distributed X-Windows environment, with resources for display management and user interaction offloaded onto the user's workstation, while the image server handles image retrieval and manipulation.  This server-client relationship allows many different users to share the same resources without overloading the central server with display and interaction services.  X-Windows also permits a wide variety of hardware platforms to access the file-server in functionally transparent ways.
 

Figure C
ImageQuery Modules

    The software is structured in a modular format (see figure C) in order to take advantage of optimized commercial software.  The Visual Front End handles iconic (mouse-based) input and display, and translates these into queries that are sent to a back-end database management system.  This modular approach allows the database administrator to select which DBMS to place on the back-end.  (Options include a built-in flat-file database.)  The separation of image-handling from text retrieval tools allows the system to take advantage of commercial database management systems, whose vendors have spent the last decade developing techniques to optimize performance.

Conclusion

    The development of the ImageQuery prototype has proven the feasibility of a server-client image database in a limited environment.  The concept of a modular system seems to be a sound one, and off-loading text searching to a commercial DBMS has achieved performance levels which could not have been realized with self-developed searching tools.

    The prototype has shown that there are problems in color matching between different display devices.  It has also shown that, as suspected, images can cause storage and transmission problems because of their size.  This points to the need for more research into just how big an impact moving a large number of images around a network will have.  It also points to the need for more work on image compression and progressive transmission, and it begins to show what kinds of services may be found on the high bandwidth networks of the future.

Footnotes

1. The one notable exception is at MIT, where they maintain separate but coordinated campuswide analog (cable TV) and digital networks.  Each image retrieval workstation is connected to both the analog and to the digital network.  A request to view an image is sent over the digital network to an analog (videodisc) server which then broadcasts it at a precoordinated time over the cable TV network.  The user workstation tunes in to that channel at the precoordinated time and digitally captures the analog image.  This is a very interesting system, but is extremely complex, and beyond the means of most institutions.

Acknowledgements
Conversations with Clifford Lynch contributed to the ideas discussed here.  Linda Zirnitis and Bill Kownacki did bibliographic research for this paper.  In addition, Ms Zirnitis did fact-checking and provided technical support.  Christine Bishop helped with clerical support.  Steve Jacobson, Randy Ballew, and Ken Lindahl wrote the ImageQuery software.
Citations
 
Alexander, Michael.  Data storage: Grace under pressure, Computerworld 24:21 (May 21, 1990), page 17.

Besser, Howard.  Visual Access to Visual Images: The UC Berkeley Image Database Project, Library Trends 38:4, Spring, 1990.

Besser, Howard and Snow, Maryly.  Access to Diverse Collections in University Settings: The Berkeley Dilemma,  in Pat Moholt and Toni Petersen (eds) Beyond the Book: Extending MARC for Subject Access to Non-Traditional Materials, G K Hall (forthcoming).

Dillon, Martin.  Barriers to the use of research ideas in the design of real systems, in Lorcan Dempsey (ed), Influencing the system designer online public access to library files: Third national conference, Oxford, UK: Elsevier, 1988, pages 133-144.

Ester, Michael.  Image Quality and Viewer Perception, Leonardo (Digital Image-Digital Cinema Supplemental Issue), 1990, pages 51-63.

Getty Art History Information Program.  Image Presentation Guide, Santa Monica: Getty AHIP, August 1988, 60 pages (mimeographed).

Nugent, William R.  Electronic imaging in high resolution gray scale for fine art and salon photography, Washington: Library of Congress, September 1990, 7 pages (mimeographed).
 

This paper was originally written in Microsoft Word for Macintosh 4.0 in 1991 (draft 2.7).
It was converted to HTML 4 using Netscape Composer 4.04 on 5/18/98.