Technical Issues in the Distribution of Museum Images
and Textual Data to Universities
Howard Besser
School of Information Management & Systems
University of California
Berkeley, CA 94720-4600, USA

Christie Stephenson
Digital Image Center Coordinator
Fine Arts Library
University of Virginia
Charlottesville, VA 22903, USA
Besser, Howard and Christie Stephenson. (1996). The Museum Educational Site Licensing Project: Technical Issues in the Distribution of Museum Images and Textual Data to Universities, in James Hemsley (ed.) E.V.A. '96 London (Electronic Imaging and the Visual Arts), Thursday 25th July 1996 (vol. 2), Hampshire, UK: Vasari Ltd, 1996, pages 5-1 to 5-15.
This paper discusses the technical and logistical issues involved in the delivery and exchange of museum images and textual data as part of a large multi-site collaborative project.
The Museum Educational Site Licensing Project (MESL) is a major demonstration project designed to identify and resolve the problems of licensing and delivery of images and accompanying text from museum content providers to groups of university content users. Much has been written about the MESL concept and MESL aims in the arena of intellectual property administration. This paper is the first attempt to explain the technical problems of data distribution identified in the course of the MESL project through its first full year of implementation.
The lessons learned from this project are likely to prove invaluable
in planning for any large-scale scheme for the distribution of museum images
and text to educational institutions who will then, in turn, make them
available to individual users.
The MESL project serves as a laboratory for developing and testing the legal, administrative and technical mechanisms needed to enable the full educational use of museum collections through routine delivery of high-quality museum images and information to educational institutions.
MESL was launched in December of 1994 with management and limited funding from the Getty Art History Information Program (AHIP) and MUSE Educational Media. The project first took shape when participants came together in February of 1995. University participants include: the Universities of Illinois, Maryland, Virginia, and Michigan, as well as Columbia, Cornell, and American Universities. Content providers include: the Library of Congress (LOC), the National Gallery of Art (NGA), the National Museum of American Art (NMAA), the Harvard University Art Museums, the Houston Museum of Fine Arts, George Eastman House, and the Fowler Museum of Cultural History.
During the two-year period of dissemination (June 1995-June 1997), the participants are exploring standards and mechanisms for distributing images and data among institutions, mounting and delivering this information to university users, developing tools for incorporating images and data into the instructional process, and developing parameters for licensing this type of content.
We are currently midway in this process. The first distribution of nearly 5,000 images and associated data took place in the summer of 1995. Universities spent the fall of 1995 mounting the first distribution on their local systems. The project participants have met twice since then (December 1995, May 1996) and have discussed and resolved some of the technical issues raised in the first distribution. A second distribution is scheduled for summer 1996 and the participants have outlined an ambitious agenda to continue to explore issues, evaluate and document the project prior to its conclusion in the summer of 1997.
This paper discusses the participants' experiences in the exchange of image and text data between museums and universities through the first MESL distribution. It documents the variety of technical problems we have encountered to date and our strategies for addressing them within the project. It focuses on the necessity of standards for multi-institution distribution and highlights those technical issues that must be resolved if the MESL model is to be extended.
Technical Issues Addressed by the MESL Project
Though most of the public discourse on MESL has framed the project as an experiment in the licensing of intellectual property (Albrecht 1995; Bearman 1996; Besser 1996a,b; Besser 1995; Trant 1996, Trant 1995a,b, Trant 1994), the project has always also been focused on both distribution and deployment of content. For the purposes of this paper, we define distribution as the process of moving images and accompanying text from the content providers to the institutions who will make use of it (the universities). We define deployment as the way in which the universities make these images available to their own users. This paper focuses on issues of distribution; only where necessary to understand distribution issues will this paper treat elements of deployment.
A key feature of MESL was to enable members of the University communities to make as broad a use as possible of a large number of images and rich text within the short 2-year framework of the project. A key strategy to achieve this rapid distribution and deployment was to repurpose existing data from the content providers. This strategy acknowledged both the time constraints and the lack of resources for creating new documentation that exists within the museum community. Therefore, most of the first-year distribution came from the museums' existing stock of digital images and existing collection management (and associated) records.
Each of the content providers had their own distinct methods of scanning, type of scanned source material, format and size of images, and type and structure of accompanying textual information. Therefore, one of the first tasks that MESL participants had to grapple with was whether and how to standardize this information. The group had to choose between reducing deployment time and effort by attempting to standardize in the distribution phase, or speeding up distribution by having museums provide their images and text "as-is", requiring the universities to handle a wide variety of different formats. In the interest of creating a project that would scale up to a much larger number of participants, MESL chose to implement limited standardization within the distribution phase. This method also gave the participants a framework within which to explore what standards might be required in the future and where broader, more flexible operational guidelines might suffice. The particulars of the distribution model are discussed in detail below.
The final section of the paper explores the successes and failures of the MESL distribution model through the first year and a half of the project and the technical issues that must be resolved if the model is to be extended to include more museums and educational institutions. We assert that a model that addresses not only a solution to the intellectual property rights issues but also these technical issues is necessary to carry the work of the MESL project forward to a larger group of participants.
The MESL Experience: The First Year
At the initial MESL meeting in February 1995, there was considerable discussion about appropriate formats and sizes for images to be distributed in the project. This reflected a lack of common vocabulary, the lack of standardization within the community and the lack of specific knowledge about user needs. The group finally agreed that museums would supply either the largest lossless JPEG/JFIF images they felt comfortable releasing, or files in the form of PhotoCDs.
In reality, the first distribution deviated significantly from these guidelines (see Table 1). Only one institution supplied images in PhotoCD format (PCD). Others used TIFF and JPEG but the JPEG compression ratios varied widely, and the JPEG files were frequently derived from PhotoCD files.
One of the reasons for the large variety in format and file size was that several institutions (e.g. NMAA, Library of Congress) had already digitized and compressed images and were simply supplying existing digital files. In at least one case, the service bureau contracted by a museum to digitize its images was unable to save files in a lossless JPEG format.
Most universities expressed a desire to obtain "the largest files museums were comfortable distributing." The intent behind this statement was not that universities would deploy the large images in their basic delivery scheme, but that they would derive appropriately-sized smaller images from these, and make the larger images available for specific purposes and research.
| Museum | File Format | Largest File | Smallest File |
| Eastman House | PhotoCD | 16 Base | |
(remaining rows not preserved; one museum supplied three sizes of derivative images)
Table 1. Characteristics of Digital Images Distributed in the First MESL Distribution
(JPEG compression information from JPEGView Statistics)
At the outset of the project, it was apparent that many concepts and requirements were not defined consistently by all participants. For example, a discussion at the first meeting indicated that there was considerable difference in interpretation of the term "high resolution".
| Museum | Capture Method |
| Eastman House | None recorded (presumed to be digitized from intermediate 35mm film via PCD process) |
| Fowler | 35mm slide scanned onto Kodak PhotoCD; scanned from catalogue using HP IIcx flatbed scanner; scanned from drawing using HP IIcx flatbed scanner; digital photograph using Kodak DCS420 digital camera |
| Harvard | Digitized with PhotoCD from 35mm slides, extracted at Base*4 and reduced to fit within 1024x768 then cropped, minimal color correction; scanned using a Pixelcraft 4000 at 300 dpi from an 8x10 transparency, reduced to fit within 1024x768, minimal color correction; scanned using a Sharp JX-600 at 300 dpi from an 8x10 transparency, reduced to fit within 1024x768, minimal color correction |
| Houston | From 35mm slides scanned onto Kodak writable CD |
| Library of Congress | Scanned from film intermediate, uncorrected |
| National Gallery | 24 bit color: corrected |
Table 2. Capture Methods Recorded by Museums in MESL Data Records (Field 29)
The standardization of accompanying textual data was addressed by the
MESL Documentation Working Group. At the first meeting, the group decided
that in order to get the project up and running, documentation would be
supplied in a more or less "as is" form by the participating museums but
attempts would be made to normalize its structure. David Bearman and Robin
Dowden collaborated on the first MESL data dictionary (current version
at http://www.ahip.getty.edu/mesl/about/docs/datadict.html), which identified
the most common fields present in the museums' collection management systems
(see Table 3). All of the museums had the opportunity to
respond to an early draft dictionary to ensure that their data could be
mapped into the structure to the extent possible. During the delivery cycle,
each museum was responsible for mapping their existing information into
a field-delimited ASCII file according to the format defined in the MESL
data dictionary. Museums supplied this ASCII data with declared delimiters
separating data in repeatable fields, ends of fields and ends of records.
If a museum did not have any structured data mappable to a specific field, that field would be present but empty in their records.
2. holding institution
3. accession number
4. accession method
5. credit line
7. object type/ object class/ object name
8. object title/caption
9. creator/maker - name
10. creator/maker - culture/nationality
11. creator/maker - role
12. creation place
13. creation begin date
14. creation end date
15. creation technique/method/process
22. associated events, people, organizations, places
27. accompanying image - file name
28. accompanying image - caption
29. accompanying image - capture data
30. accompanying document - file name
31. accompanying document - type
32. version identification
Table 3. Fields in MESL Data Dictionary (version 1.1)
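The field-delimited ASCII export described above can be handled with a short script at the receiving end. The sketch below is ours, not the project's actual software, and the delimiter characters and sample values are hypothetical placeholders; each museum declared its own delimiters with its data.

```python
# Minimal parser for a MESL-style field-delimited ASCII export.
# Delimiter characters and FIELD_NAMES here are illustrative assumptions.
RECORD_SEP = "\x1d"   # end of record (assumed)
FIELD_SEP = "\x1e"    # end of field (assumed)
REPEAT_SEP = "\x1f"   # separates repeated values within one field (assumed)

FIELD_NAMES = [       # a few fields from the MESL data dictionary
    "holding institution",
    "accession number",
    "object title/caption",
    "creator/maker - name",
]

def parse_records(raw):
    """Split a raw export into a list of dicts keyed by field name.
    Empty values are kept, so an absent datum still yields its field."""
    records = []
    for chunk in raw.split(RECORD_SEP):
        if not chunk:
            continue
        fields = chunk.split(FIELD_SEP)
        record = {}
        for name, value in zip(FIELD_NAMES, fields):
            record[name] = value.split(REPEAT_SEP)  # repeatable field -> list
        records.append(record)
    return records

sample = (FIELD_SEP.join(["Fowler Museum", "X90.123", "Mask",
                          "unknown" + REPEAT_SEP + "attributed"])
          + RECORD_SEP)
print(parse_records(sample))
```

Because every field is present (if empty) in every record, a deploying institution can map positions to its local schema without per-museum logic beyond the declared delimiters.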
Each museum agreed to supply the following:
2. Label data: This includes such elements as title, creator, date, object type, collection (if necessary) which the museum would place on a label and is the minimum data that the suppliers feel needs to be available in every application.
3. Fielded data: These are the most common or important 20-40 fields in art and culture museums.
The MESL project considered two distribution models proposed by Clifford Lynch at the first meeting. The first model proposed that the seven suppliers distribute directly to all participants. The second model was for central distribution by a single entity. The second distribution model was accepted by the group. Museums provided images and text in electronic form to the University of Michigan, which agreed to act as MESL's "distribution central."
Lynch also identified two logistical methods for data distribution: push (providers distribute data) or pull (users collect it). Thus far the "push" method has been used for the main distributions; Michigan has assembled the image and text sets from all seven museums, duplicated the merged data sets, and distributed them on CD-ROM to the participating universities, and to several museums who were also interested in mounting the data. The "pull" method has been used for updates to the text files; users get these updates via FTP.
The participants agreed to a twice-yearly distribution schedule, with major image/text distributions in the early summers of 1995 and 1996. Corrected data for the 1995 distribution was distributed in January/February 1996. Redistribution of corrections to the summer 1996 distribution will take place in September 1996. In order to deal with changes to the data dictionary over time, each record in the structured database begins with a reference to the release number of the agreement it adheres to. The first agreement was numbered Version 1.0. The group also agreed that all future updates would involve complete replacement of the data files, as it was easier for those handling deployment to insert a complete new data file than to locate, verify and make changes to individual records.
Deployment on Campuses
Each university devised its own strategy for deploying the data on its campus network, in response to its existing infrastructure and needs. Most universities used the World Wide Web as a delivery platform and provided some kind of search interface for public access to the MESL text and images. Some used local customized image database delivery systems already in place on their campuses. Some mounted text within relational database management systems; others used SGML-like structures. The first data sets were received by the universities in early August 1995. Most had a prototype site up the following month with at least a few of the museums' collections available.
The process of mounting the collections fell into four distinct areas:
1) the design of a delivery front-end (e.g. web site) and search interface;
2) processing structured data;
3) processing images for networked delivery; and
4) implementing security strategies.
The first of these areas is a local deployment issue and is beyond the scope of this paper.
The universities encountered a number of problems with processing text and images and implementing security measures. While some of these were relatively simple to solve, others were more complex and have significant implications for any future distribution system that might follow from the MESL experience.
Problems Encountered in the Deployment Phase
Since many of the universities were using the World Wide Web as a delivery platform, they needed to derive several sizes of images for various presentation environments. This kind of multiple derivative image generation is becoming increasingly common and is a relatively simple batch operation using software such as Debabelizer or ImageMagick. However, because most of the images delivered had been previously compressed with some amount of loss, those of us charged with local deployment were reluctant to resize and recompress from the images that were delivered.
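The derivation step itself is largely arithmetic: reducing an image to fit within a bounding box (as several museums did with 1024x768, per Table 2) comes down to a scale-factor calculation applied in batch. A minimal sketch in Python; the function name and defaults are our own illustration:

```python
def fit_within(width, height, max_w=1024, max_h=768):
    """Return (width, height) scaled to fit inside max_w x max_h,
    preserving aspect ratio; images already within bounds are unchanged."""
    scale = min(max_w / width, max_h / height, 1.0)
    return round(width * scale), round(height * scale)

# A 3072x2048 PhotoCD "16 Base" image reduced to fit within 1024x768:
print(fit_within(3072, 2048))   # -> (1024, 683)
```

A batch tool such as Debabelizer or ImageMagick applies the same computation to every file, which is why generating several derivative sizes is straightforward once a suitable master image is in hand.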
We have serious reservations about the use of any form of lossy compression before an image arrives at a deploying location. Since almost all the deployment schemes involve the delivery of lossy-compressed images (and not all employ the same compression schemes), introducing lossy compression during the distribution cycle would likely result in lossy-compressing each image twice. The advisability of two iterations of lossy compression is questionable. A successful first pass of lossy compression should remove elements of the image that are barely discernible to the human eye (such as a reduction in the number of chrominance values). A second iteration will either assume that it is starting with a robust image (and remove so much information that it introduces artifacts), or it will not find much more to remove. In either case, the results of a second iteration of lossy compression will likely prove confusing for anyone who has come to expect a somewhat reliable image quality/size trade-off from a particular lossy compression scheme.
The adoption of the "quick and dirty" data extraction methods employed in the first MESL distribution allowed all of the universities to mount the collections quickly. Fewer than five months passed from agreement on content to the distribution and fall deployment on campuses. But throughout the fall of 1995, each university began to encounter and report a variety of problems as they transferred the MESL data records into their local systems. These problems included records for which there were no associated images, an entire data set which omitted artists' names, different character sets used by different museums to render their data, and wide variety in the end-of-field and end-of-record delimiters. These problems are being dealt with as part of the project; revised data sets have been made available by FTP from the Michigan "distribution central" site. The group has also decided to adopt the ISO Latin 1 character set and a standard set of field and record delimiters for future distributions. In addition, minor revisions have been made to the MESL data dictionary, and a new version with fuller definitions of the data values expected in each field has been prepared to assist museums in the export and mapping process.
A detailed examination of the data reveals more problematic issues, both with the structure and with the data values that appear within the structure. The data dictionary was based on examination of participants' databases, the work of CIMI (Consortium for the Computer Interchange of Museum Information), the AITF (Art Information Task Force) Categories for the Description of Works of Art, and the CIDOC (International Documentation Committee of the International Council of Museums) data structure. Several of the contributing museums had much less complex databases, so that many of the fields in their data records are empty. Since museums have not universally adopted accepted standards for name authorities, artists' names, for instance, appear in a number of variant forms. Because the data came from collection management systems, subject access to images is inconsistent at best.
It is clear that a project like this can run much more efficiently if some distribution activities are handled in a centralized fashion. But it is not yet clear which distribution activities should be handled centrally and which should be handled locally (either at the content provider end or at the university end).
Verifying data integrity and checking to see that data meets group standards are two activities that MESL has thus far decided are appropriate roles for a central distribution unit. In an early distribution, problems were discovered by one university after other universities had already done much work with the data. Proposed plans call for the central unit to verify whether the data submitted meets the structural standards defined by the group. Any substandard data must be immediately corrected by the content provider, either by handling it themselves or by paying the central unit to do so. Many in the group feel that imposing financial responsibility for data correction upon the content providers will ensure much cleaner data in the future. Issues surrounding data value problems are outside the scope of the "quality assurance" function as we have defined it.
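A central quality-assurance pass of the kind described here amounts to a structural check: every record must carry the expected number of fields and non-empty values in the required ones. The field count, required positions, and sample values below are our illustrative assumptions; the MESL data dictionary defines the authoritative structure.

```python
# Sketch of a central-site structural check on incoming records.
# FIELD_COUNT and REQUIRED are hypothetical values for illustration.
FIELD_COUNT = 32                  # fields per record (data dictionary v1.1)
REQUIRED = {1: "holding institution", 2: "accession number"}  # 1-based

def check_record(fields, record_no):
    """Return a list of human-readable problems found in one record."""
    problems = []
    if len(fields) != FIELD_COUNT:
        problems.append(f"record {record_no}: expected {FIELD_COUNT} "
                        f"fields, got {len(fields)}")
    for pos, name in REQUIRED.items():
        if pos <= len(fields) and not fields[pos - 1].strip():
            problems.append(f"record {record_no}: required field "
                            f"'{name}' is empty")
    return problems

good = ["National Gallery", "1942.9.10"] + [""] * 30
bad = ["", "1942.9.10"] + [""] * 29     # empty institution, only 31 fields
print(check_record(good, 1))   # -> []
print(check_record(bad, 2))    # two problems reported
```

Running such a check before redistribution would have caught the field-count and delimiter problems at the central site rather than at each university.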
There is less agreement on the benefits of other potential centralized activities. For example, today the content providers each submit a "high quality" image, and each university individually derives both smaller images and thumbnails from each of the "high quality" images. Shifting this image derivation function to the central distribution unit would appear to be a considerable savings of workload. But doing so would require agreement on the dimensions of both smaller images and thumbnails, faith in the reliability of the central body's derivation, and a significant increase in the size of the body of data sent from the central distribution unit to each campus. At the moment, the prospect for this appears unlikely, particularly considering the number of different formats and compression schemes being used for deployment.
The MESL model is a relatively closed system where the project data is stored in directories and access to those directories is controlled either through password protection or the Web-based .htaccess system. As users begin to interact with and integrate these resources along with others having different kinds of rights restrictions, this method of control becomes problematic. Imagine trying to manage an increasing number of image resources from diverse sources with different levels of access privileges by trying to isolate them in separate directories based on those access privileges. The .htaccess system is also vulnerable to security breaches. This is an area where some of the research activity driven by the commercial sector may provide solutions for the future.
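On the Apache servers of the period, this kind of control amounts to a per-directory .htaccess file. A typical sketch, in which the domain, realm name, and password file path are hypothetical:

```apache
# .htaccess restricting a directory of MESL images (illustrative only).
# Campus hosts get in unauthenticated; anyone else must log in.
AuthType Basic
AuthName "MESL images"
AuthUserFile /usr/local/etc/httpd/conf/mesl.passwd
Require valid-user
Order Deny,Allow
Deny from all
Allow from .virginia.edu
Satisfy Any
```

The limitation noted above follows directly from this design: every distinct combination of access privileges forces another directory with another such file, and the scheme protects only the directory boundary, not the images themselves once retrieved.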
What we can learn from the MESL Experience
From its inception, the MESL project has focused on easing the burden on content providers (museums) by devising a mechanism for licensing content on a large scale to content users (educational institutions). Even if MESL efforts develop terms and conditions for handling the licensing of intellectual property, this method will be of little use to either content providers or institutional users without a technical framework in which content can be delivered efficiently and effectively. Just as it is unreasonable to imagine content providers negotiating separate licensing arrangements for their materials with all interested educational institutions, it is also untenable to posit a model where image specifications and text structure differ for each potential enduser. The efforts to define and test standards for accepting images and accompanying text from content providers, and the channels and mechanisms for moving these from individual repositories to institutional users are critical pieces to any solution of large-scale delivery of images from a number of providers to a number of institutional users.
Any solution to the delivery problem must consider delivery of both images and text with corresponding links between them. The design of a delivery model must take into account standards for both images and text, sizes of files, what activities should take place centrally, how to assure quality control, how to handle changes and updates, what kind of delivery methods to use, whether to support compression, and how the content providers can be assured that their images will be protected.
We can expect that any delivery system in the foreseeable future will continue to depend upon repurposing a great deal of existing data from the content providers. Because much of this pre-existing data has been created for internal systems and must still serve those internal systems, most content providers would have to undertake a significant retrospective conversion project in order to make this data strictly conform to emerging standards for data interchange. While the community continues to work to develop the standards, specifications, and tools to support information interchange, we must also develop relatively flexible and simple systems for mapping existing data into those interchange formats. It is reasonable to assume that a core textual description will conform to fairly rigorous standards (probably as defined in a data dictionary) with each content provider mapping their data into that standard and each deploying institution converting from that standard into their own deployment system. Work is currently underway to explore the relationship between the MESL data dictionary and the CIMI SGML Document Type Definition for object identification information. This work may facilitate the process of data normalization and interchange as future delivery systems evolve.
Future distribution systems must take into account that institutional sites will deploy a variety of image qualities and that the file formats and sizes employed will vary from site to site. It is also likely that a delivery system could support only a very limited number of image file formats. A key question is what size of image such a system should deliver. Because there have been few studies of image quality (Ester 1990, 1994), the deploying institutions have little knowledge of what their users need. Most deployment systems within MESL have established a set of "standard" image sizes that they deliver, and these sizes differ from one deployment system to the next. Similarly, photographic stock houses have chosen to deliver a variety of file sizes, and these differ from one stock house to another. Picture Network Incorporated (PNI) delivers 12M-18M, 1.5M, 300K, and 50K images, while Weststock Stock House (Muse) delivers 4.5M, 1.1M, and 250K images. Because they deal primarily with commercial users, it is doubtful that any of the stock houses have surveyed the needs of institutional educational users in defining their preset image sizes. One significant result of MESL deployment over time might be to determine a set of image sizes that would satisfy a large number of institutional users. Until that time, a procedure which allows each institution to derive its own ideal sizes from a high-quality distributed image is probably preferable. This issue is further complicated by the fact that most content providers will not allow access to their highest quality images until they can be assured that security protections are effective enough.
Another key issue is what kinds of activities should take place centrally and what should take place at either the content providers or the deploying institutions. Tasks such as assuring quality control should be centralized when there is clear agreement among participating institutions; however, tasks such as generating smaller image files should probably remain local. A central site that checks that contributed data meets predefined standards must also have some mechanism to raise substandard data to the minimum standard. If a consortium were well-funded, upgrading records might be the responsibility of the central site. But in a voluntary cooperative association, the group needs to explore methods for assuring that participants will meet minimum quality standards as well as meet deadlines.
The distribution of large data sets becomes even more complex when changes or updates are made to a small portion of the set. The MESL data is currently being treated as a single entity rather than a series of discrete data files, raising challenging issues about maintaining currency and accuracy if the model were scaled up. A complete reload of the entire data set may continue to be simpler than trying to locate and update the individual changes. This approach, however, becomes problematic if the deploying institution has customized the data in the originally-mounted set. In any case, it would be wise for the distribution system to set a schedule for periodic updates rather than sending notice of a change or update each time one is discovered.
Another important decision is what kind of delivery method to use. For a decentralized system like MESL, the push method (where the museums send in their data) is preferable for getting data from the content providers to the central site. The pull method (where the universities take action to obtain the data from a central source) is better for getting the data from the central site to the deploying institutions. Such a decentralized system allows each institution to move at its own preferred pace as long as it meets the group's time commitments. But the most likely delivery vehicle for the pull method (the Internet) does not yet have sufficient bandwidth to make this method practical for moving large groups of even moderately-sized images. Stock houses like PNI deliver their smaller-sized images over the Internet via FTP, and the larger sizes by post on CDs. We can expect that most text and small image files will be distributed online, both for ease of use and ease of update. As bandwidth increases, we will see more and more distribution of large images migrate from physical media (like CD-ROMs) to online environments. Some image distributors (such as Weststock) are currently delivering images via the Internet using lossy compression schemes. For distribution to institutions who will then deploy these images, we have put forward a strong argument against distributing files that have been subjected to lossy compression.
As long as copying an image is easy and it is impossible to trace, content providers will be reluctant to allow high quality versions of their images to be used. But as better security methods (such as watermarking, encapsulation, and encryption) become more widely available and are integrated into deployment systems, content providers will likely become more comfortable with allowing their images to be disseminated this way. These security measures also hold the promise of being adapted to ensure the integrity of the image and accompanying text data (Besser 1996c). It is possible to imagine that some of the recent advances in "secure container" technology could be applied on a sitewide basis, providing both reassurance for the content providers and secure delivery environments in educational institutions which are free of individual metering or pay-per-view restrictions which are antithetical to our shared educational goals (Trant 1996).
While the technical challenges addressed to date in the MESL project have been many, it is clear that we have much work still to do. In the coming year, the MESL participants will continue to examine these technical issues in the context of rapid developments. As we struggle to come to grips with them, it is perhaps useful to remember how quickly we have moved from collections of largely bootleg 35mm slides, cataloged locally, with access restricted to a select few. We are seeking to define an ambitious new delivery model which is economically feasible for the content providers and through which educational institutions can gain access to large amounts of authoritative cultural heritage images and information and make them widely available to all their users. Though the hurdles are many, few of us would argue that it is not worth the effort to try to overcome them.
Afterword: Technical Agenda for MESL in 1996-97
At the May 1996 meeting, the MESL participants identified an ambitious technical agenda for the coming year, to further explore the distribution-related issues outlined in this paper, document its experiences, develop recommendations and articulate areas for further investigation. Findings resulting from the analysis of deployment strategies and overall project evaluation will also be woven into the analysis of distribution. Activities in this area of the project for the coming year include:
Distribute the second MESL data sets; document distribution experiences.
Document and analyze image and data preparation experiences of the content providers.
Review knowledge representation and interchange standards requirements in collaboration with CIMI; explore SGML as a data representation strategy and Z39.50 as an interchange mechanism.
Recommend distribution architecture options, standards requirements, and operational guidelines.
Through the coming year, the MESL participants will continue to discuss the technical challenges raised by the MESL experience within the cultural heritage community. In addition, we will produce formal reports which document the technical issues and experiences we have begun to outline here, make recommendations, and outline strategies for follow-on investigation.
References

Bearman, David. "New Economic Models for Administering Cultural Intellectual Property." Paper presented at the Digital Knowledge Conference, Toronto, Ontario, February 7, 1996; also presented at EVA Florence, Italy, February 9, 1996. [http://www.ahip.getty.edu/mesl/about/docs/economics.html]

Besser, Howard, and Diana Vogelsong. "Networked Distribution of Digital Images: The Museum Educational Site License Project." Computers in Libraries Conference, Washington, DC, February 28, 1996(a).

Besser, Howard. "The Museum Education Site License Project." Information Highways and Byteways Conference, sponsored by the American Association of Museums, San Francisco, February 16, 1996(b).

Besser, Howard. "Image Databases: The First Decade, the Present, and the Future." In Heidorn and Sandore (eds.), Proceedings of the Illinois Digital Image Access and Retrieval Conference, 1996(c), in press.

Besser, Howard. "The Museum Education Site License Project." Time, Space, & Distance: The Future of Preserving & Accessing the Past, conference held at the Mint Museum, Charlotte, June 2, 1995.

Ester, Michael. "Image Quality and Viewer Perception." Leonardo 23:1, 1990, pages 51-63.

Ester, Michael. "Digital Images in the Context of Visual Collections and Scholarship." Visual Resources X:1, 1994, pages 11-24.

Museum Educational Site Licensing Project. World Wide Web site. [http://www.ahip.getty.edu/mesl/]

Trant, J. "Enabling Educational Use of Museum Digital Materials: The Museum Educational Site Licensing (MESL) Project." Paper for the Electronic Imaging and the Visual Arts Conference, Florence, Italy, February 8-9, 1996. [http://www.io.org/~jtrant/papers/jt.eva.florence.9602.html]

Trant, J. "The Museum Educational Site Licensing (MESL) Project: An Update." Spectra, Winter 1995(a). [http://www.io.org/~jtrant/papers/mesl.spectra9511.html]

Trant, J. "The Getty AHIP Imaging Initiative: A Status Report." Electronic Imaging and the Visual Arts (EVA), The National Gallery, London, July 1995; also appearing in Archives and Museums Informatics: Cultural Heritage Information Quarterly, Vol. 9, no. 3, 1995(b), 262-278. [http://www.io.org/~jtrant/papers/eva.95.html]

Trant, J. "The Museum Educational Site Licensing Project." Spectra, Winter 1994-95, 19-21. [http://www.io.org/~jtrant/papers/mesl.spectra9502.html]
The authors would like to acknowledge the contributions of all the participants in MESL. Their commitment and extraordinary efforts have provided all of us with this rich testbed for experiment and exploration. Jennifer Trant and David Bearman of the MESL Management Committee provided useful feedback on an earlier draft of this paper. Thanks also to Edward Gaynor of the University of Virginia Library for his careful reading and copy-editing.
Howard Besser is Visiting Associate Professor at the University of California's School of Information Management & Systems; publishes extensively on both image/multimedia databases and on the social and cultural effects of new information technology; consults for libraries, museums, and arts organizations; and is a frequent speaker at professional conferences. Dr. Besser has been on the faculty of the LIS Schools in Berkeley, Pittsburgh, and Michigan. He has been on the Management Committee of MESL since its inception, and is one of the faculty members using the images and text in his teaching.
Christie Stephenson is Coordinator of the Digital Image Center at the University of Virginia Library, and is one of the pioneers in networked delivery of digital images to students. Ms. Stephenson has an M.A. in Art History and an M.S.L.S. from the University of North Carolina--Chapel Hill. After 15 years as an art bibliographer, she became involved in digital imaging and currently serves as the MESL Project Coordinator at UVA.