MESL Implementation at the Universities
Howard Besser, MESL Management Committee
UC Berkeley School of Information Management and Systems
All seven MESL universities mounted the identical set of approximately 10,000
images and accompanying text records--each in their own way. These
implementations varied widely, each university making different choices as to
the search options, the indexed fields, display choices, and the overall
look-and-feel of the access systems. Methods for access control,
authentication, and the choice of text fields that displayed with an image
differed not just from one university to another, but even changed over time
within some universities.
This article reviews the steps the universities took to process and mount the
images and data and examines the different deployment systems primarily from
the standpoint of the user's interactive experience. It also speculates on the
reasons the implementations differed from one another, including the lack of
standard practices and procedures, the varying goals and models of the
implementors, and the role of rapid technological change over the course of the project.
University Deployment: Early Decisions
From the beginning the MESL Management Committee encouraged the universities
to pursue independent solutions when they deployed the images and data. Many
of the universities had joined the MESL project hoping to experiment with ways
to integrate image delivery with their existing text-based information delivery
systems. This fact, as well as the short lead time, precluded the development
of a single deployment solution across all sites.
The seven independently developed deployment systems that emerged allowed us
to compare them and begin to speculate about the effects local implementation
decisions had on search results.
Receiving the Data
During each of MESL's two main content distributions, the Michigan central
distribution site received batches of images and text from the museums and
forwarded these on to each university. This section briefly describes the
variety of different processes and issues each university faced in preparing
this data to load into its local information delivery system.
In most cases the universities took the flat delimited text files they received
and used a variety of application tools (e.g., Perl scripts, Excel, Filemaker
Pro, Microsoft Access) to parse (or separate) these to create HTML pages for
each record, and to load the data into a database for user retrieval. The
exception was Virginia, which used Perl scripts to create records from the MESL
data in "pseudo" SGML format, ran database queries against this stored data
(using Open Text), and generated HTML results pages from it on the fly.
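The parsing-and-loading step the universities performed can be sketched as follows, assuming a simplified four-field tab-delimited export; the field names, table layout, and HTML shape here are illustrative only (the real MESL records carried many more fields and varied in delimiter practice).

```python
import csv
import sqlite3

# Hypothetical sketch: parse a tab-delimited MESL-style export into one
# HTML page per record and a simple database table for retrieval.
# The field names and delimiter are assumptions, not the MESL spec.
FIELDS = ["id", "artist", "title", "date"]

def load_records(path, delimiter="\t"):
    """Read a delimited export file into a list of record dictionaries."""
    with open(path, newline="", encoding="utf-8") as f:
        return [dict(zip(FIELDS, row)) for row in csv.reader(f, delimiter=delimiter)]

def record_to_html(rec):
    """Render one record as a minimal static HTML page."""
    rows = "".join(f"<tr><th>{k}</th><td>{v}</td></tr>" for k, v in rec.items())
    return f"<html><body><table>{rows}</table></body></html>"

def load_into_db(records, conn):
    """Load parsed records into a database table for user retrieval."""
    conn.execute("CREATE TABLE IF NOT EXISTS objects (id TEXT, artist TEXT, title TEXT, date TEXT)")
    conn.executemany("INSERT INTO objects VALUES (:id, :artist, :title, :date)", records)
```

In practice each site performed some version of these three steps, whatever the tool (Perl, Excel, FileMaker Pro, Microsoft Access).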
During the first distribution, the universities had significant problems
parsing and loading the text data. Some of the reasons cited were:
1) some records did not have all the prescribed fields present;
2) some fields were not properly delimited;
3) museums did not all use the same set of delimiters;
4) some records had line feeds embedded in them; and,
5) museums used different character sets for their text records.
Many of these problems disappeared in the second distribution, as the MESL
participants agreed on more extensive specifications and standardized practice
with respect to delimiters and character sets.
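A pre-load validation pass of the kind that would have caught the problems listed above might look like the following sketch; the tab delimiter is an assumption, and only the 32-field count comes from the MESL data dictionary.

```python
# Hypothetical QA sketch: flag the delimiter and character-set problems
# listed above before attempting a database load.
EXPECTED_FIELDS = 32  # the MESL data dictionary defines 32 fields

def validate_line(line, delimiter="\t", expected=EXPECTED_FIELDS):
    """Return a list of human-readable problems found in one record line."""
    problems = []
    fields = line.rstrip("\n").split(delimiter)
    if len(fields) != expected:
        # catches missing fields and improperly delimited records alike
        problems.append(f"expected {expected} fields, found {len(fields)}")
    if any("\r" in f or "\n" in f for f in fields):
        problems.append("embedded line break inside a field")
    try:
        line.encode("ascii")
    except UnicodeEncodeError:
        problems.append("non-ASCII characters (check character-set agreement)")
    return problems
```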
The MESL experience made it clear that the specifications for data export must
be extremely precise, and that a pilot study involving a heterogeneous pool of
institutions can reveal the results of divergent practices that were not taken
into consideration during initial attempts at specification. Over the course
of the project, MESL participants developed a set of standard specifications
intended to be precise enough to assure consistent data structure; other
follow-on projects will have to tackle the even more difficult problems
associated with normalizing data values to improve retrieval (see article by
Robin Dowden elsewhere in this volume).
Several university sites based their user interface and general design
decisions on particular image sizes and/or other image qualities. Some sites already had an
investment in a particular size of image, based on experience and software
development in previous projects. During the MESL project, instructors
expressed concerns over image size including: that they be big enough for
classroom projection, be as big as possible yet fit on the "average" screen
without scrolling, fit within a specific application without scrolling, etc.
None of the implementations supported the on-the-fly derivation of smaller
images, so when the images were received from the central distribution site
each university generated several sizes of derivative images (thumbnail,
large image, and often one or more in-between) in advance for delivery. Applications like Debabelizer and ImageMagick
made this process relatively simple to accomplish (completely unattended) in
batch mode. However, many of the MESL images had been previously compressed
by the museums in such a way (using "lossy" compression) that it was necessary
to uncompress, reduce or resize, and then recompress them so that they could
be deployed in a particular information environment supported by the university.
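Batch derivative generation of this sort can be sketched as a script that drives the ImageMagick `convert` tool mentioned above; the three sizes and naming scheme below are assumptions, not the settings any particular site used.

```python
# Hypothetical sketch of the batch derivative step, building ImageMagick
# `convert` command lines. The sizes and file-naming scheme are invented
# for illustration, not drawn from any MESL site's actual settings.
DERIVATIVES = {"thumb": 70, "medium": 250, "large": 900}

def convert_commands(src, stem):
    """Build one `convert` command per derivative size for a source image."""
    cmds = []
    for name, max_dim in DERIVATIVES.items():
        fmt = "gif" if name == "thumb" else "jpg"
        cmds.append(["convert", src,
                     "-resize", f"{max_dim}x{max_dim}>",  # ">" = shrink only, keep aspect ratio
                     f"{stem}-{name}.{fmt}"])
    return cmds
```

A wrapper looping over all source images (and invoking each command with `subprocess.run`) would reproduce the unattended batch processing the universities described.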
As suggested elsewhere (Besser and Stephenson 1996), in future distribution
schemes both image derivation and lossy compression might be performed at a
central distribution site receiving uncompressed images, thus eliminating
duplication of effort by the entire set of deploying institutions, as well as
avoiding the problems of multiple lossy compressions. For this strategy to be
effective, all of the deploying sites would need to agree upon common
specifications for image sizes, bit depth, and compression ratios.
Table I illustrates how image sizes varied widely between the different
deployment sites, even among well-recognized "sizes" such as thumbnails. Though
most sites delivered compressed images, a comparison of compression ratios or
quality is inhibited by the lack of a standard scale for measuring these.
[Table I. Image sizes and formats delivered at each site. Sites delivered
thumbnails of roughly 70 pixels maximum dimension (GIF 89) and larger
derivatives of roughly 250, 500, and 900 pixels maximum dimension (JPEG); one
site delivered Photo CD images converted to JPEG but not resized; the full-size
image supplied by the museum was delivered by ftp on request.]
Though the batch post-processing worked well for creating most derivative
images, certain kinds of image types posed problems. For example, all of the
universities noted that PhotoCD images were quite difficult to work with. And
batch compression did not work well across different content format types
(e.g., line drawings, engravings, paintings); future projects might address
this problem by separating the images by content types (line drawings would be
handled together, as would continuous tone images), and using compression
techniques that have been optimized for the different content types.
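A minimal sketch of such content-type routing, with invented type labels and settings:

```python
# Hypothetical sketch of routing images to compression techniques by
# content type, as suggested above. The type labels and quality setting
# are invented for illustration, not a MESL vocabulary.
def compression_settings(content_type):
    """Choose a compression approach suited to the image's content type."""
    if content_type in ("line drawing", "engraving"):
        return {"format": "gif"}  # hard edges survive better without lossy compression
    return {"format": "jpeg", "quality": 75}  # lossy JPEG suits continuous-tone images
```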
In general the universities were pleased with the quality of the digital
images they received from the museums. Nevertheless, they experienced a number
of problems with image quality. According to the Columbia technical report,
"The quality of the digital images varied from museum to museum, but in general
we found the resolution to be too low when compared with [digital] images we
have been able to obtain commercially." And when Columbia faculty compared
projected slides alongside projected MESL digital images of the same object,
they found the quality of the digital image sorely lacking.
Some university disappointment stemmed from the scanning process the museums
had used; others from the fact that some images had been scanned from
poor-quality intermediates. Other quality issues included: images that were
too small for the universities to make effective use of them, and images that
were dark and muddy probably because they had either not been color-balanced,
or had been viewed only on one particular monitor/platform combination (there
are not yet proper color management tools to assure that images will look good
and consistent from one platform and monitor to another). As "best practices"
continue to evolve and be promulgated in the museum community, many of these
image quality issues will inevitably disappear.
Museums also differed in their policies and practices regarding such things as
the placement of borders around images or the matting of backgrounds
(particularly on thumbnails) to create images of a consistent aspect ratio.
Such alterations make it difficult to handle images "en masse" and will need
to be "regularized" if a single source is to produce all derivative images in the future.
Other challenges the universities experienced arose in the process of passing
the images from point to point, and linking them properly to accompanying
text. Some image files were corrupted, and others were missing, misnamed, or
misreferenced. These problems may have been introduced anywhere along the
distribution chain, which led from the museums to the central distribution site
to the various steps within the universities. Explicit procedures and quality
assurance checks would minimize such problems in the future.
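Explicit QA of the kind suggested here could be as simple as verifying each distribution against a checksum manifest; the manifest format below is an assumption, since the MESL distributions did not include one.

```python
import hashlib
import os

# Hypothetical QA sketch: verify a received distribution against a manifest
# mapping filenames to MD5 checksums, catching corrupted, missing, or
# misnamed files at each hop in the distribution chain.
def verify(manifest, directory):
    """Return a dict of {filename: problem} for files that fail checks."""
    problems = {}
    for name, expected_md5 in manifest.items():
        path = os.path.join(directory, name)
        if not os.path.exists(path):
            problems[name] = "missing"
            continue
        with open(path, "rb") as f:
            digest = hashlib.md5(f.read()).hexdigest()
        if digest != expected_md5:
            problems[name] = "corrupted (checksum mismatch)"
    return problems
```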
Designing the Deployment System and User Interface
Each university independently designed its own system for deploying images and
text on its campus. This section discusses some general differences between
the various university implementations. It also discusses how the different
implementations looked to users and the ways in which search results differed.
Although a precise empirical study of user response to each implementation
could not be usefully undertaken (due to the vast heterogeneity), observations
about the ways in which various design approaches affected the look and
performance of the individual deployment systems was still possible.
Six of the seven universities eventually chose the World Wide Web as the
primary access mechanism for their users. Initially Illinois and Cornell began
with different delivery systems, but moved onto the Web midway through the
first year of the project. For a number of reasons, Maryland provided user
access through a local network, and enabled more limited secondary access
through the Web.
University implementations of the MESL data varied dramatically. The
differences resulted primarily from the fact that institutional
situations--e.g. the local information delivery architecture, encoding and
searching systems, as well as staff expertise--had a major influence on the
choices that were made at each site. In addition, a few of the project staff
at MESL sites had been involved with digital imaging projects and drew on these
experiences when making interface design and other related decisions. The
degree of institutional support for MESL implementation--manpower, equipment,
classroom facilities, and available expertise--constituted another significant
variable from one university to another.
Initial Presentation and Query Options
There was a wide variety in the way the various university implementations
looked to a user. A group of Berkeley students performed the only
cross-implementation study, comparing six of the MESL implementations. The findings of
their informal study are reported here.
Eight students in a UC Berkeley graduate class were given access to the
implementations at six university sites for a one-month period. The students had different academic backgrounds, and the
expectation was that they would search in a variety of ways and also notice
different features of the various systems. By design, none had extensive art
history training so that their queries would be more like those of naive users
than experienced art historians. They were given the following assignments:
_ Compare the user interface and display options on all the MESL sites. Look at
how the user is supposed to navigate through the system (including how the
information is "chunked," the order in which options are presented to the user,
and the placement of buttons). Also examine search options and the layout of results pages.
_ Compare size and quality of thumbnail (as well as larger) images on all the
MESL sites. Note the approximate sizes of images offered, and how these differ from site to site.
_ Perform three identical searches on each of the MESL sites and note whether
or not the same query on the same data set yielded different results.
Five of the six implementations studied
provided a browse function that allowed the user to scan through large batches
of images and records without first performing a query. In most of these
systems, however, the browse function limited the user to browsing within only
a single museum at a time.
All of the Web-based delivery systems provided searching via HTML forms that
generated CGI-scripted calls to a back-end database/search engine--the
software component that indexes the data and executes users' queries. These
back-end systems included commercial products such as Filemaker Pro, Microsoft
SQL Server, and Glimpse, as well as locally designed systems such as Full Text
Lexicographer (see Table II).
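The form-to-CGI-to-database pattern can be sketched as follows, using Python and SQLite purely for illustration (none of the MESL sites used this stack); the table and field names are invented.

```python
import sqlite3
from urllib.parse import parse_qs

# Illustrative sketch of the Web delivery pattern described above: an HTML
# form submits a query string, a CGI-style handler parses it, queries a
# back-end database, and builds a results page on the fly. The "objects"
# table and "title" field are invented for this example.
def handle_query(query_string, conn):
    """Parse a form submission and return an HTML results page."""
    params = parse_qs(query_string)
    term = params.get("title", [""])[0]
    cur = conn.execute("SELECT id, title FROM objects WHERE title LIKE ?",
                       (f"%{term}%",))
    items = "".join(f"<li>{title}</li>" for _, title in cur.fetchall())
    return f"<html><body><ul>{items}</ul></body></html>"
```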
[Table II. Back-end search engines employed at each site; these included, for
example, Microsoft Access customized with Visual Basic (Maryland ISIS).]
Most sites presented the user with several layers of explanatory information
before allowing the user to compose a query. This information was designed to
interest users in the MESL data, to contextualize the project and clarify its
scope, and to explain conditions of use. One of the students felt that
by nesting the search page deeply within the Web hierarchy, repeated user
queries would be discouraged. It was recommended that future designers should
provide one set of paths for initial users and another set for repeat users.
Query screens for most implementations employed HTML forms with menu
choices. Most sites
provided forms for both simple and complex (e.g. Boolean) searches, either as
separate pages or combined on the same page. Examples of query screens from
Cornell and Michigan sites are shown in Figures 1 and 2. These screen captures
show how query options for the same data can be presented to users in
different ways, depending on the choices made by interface designers.
Most interfaces offered searchers the option of undertaking either simple or
complex searches. Several Berkeley students found the distinction between
simple and complex searches confusing. In most cases the difference was that
the "complex" searches permitted the user to search for a single value in each
of two fields (such as Artist=Cezanne and Date=1876). They felt that "complex"
was a poor word to use for this type of search.
Each site chose to index a different subset of the available MESL fields. Some
sites chose to provide keyword access while others did not. Some sites
provided access by categories of local interest (such as by course using the
image). And in many cases "searchable fields" on the user's query form were
really composed of indexes made by concatenating a variety of related fields in
the database rather than by presenting the fields defined by the MESL data
dictionary. Different sites combining their indexes in different ways was one
of the factors that led the same query to yield radically different search
results between sites.
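The effect of combining fields into different indexes can be illustrated with a toy keyword index; the record and the field groupings below are invented.

```python
# Illustrative sketch: two sites build a "keyword" index from different
# field groupings, so the same query retrieves different records. The
# record and groupings are invented for this example.
def build_index(records, fields):
    """Build a word -> set-of-record-ids index over the chosen fields."""
    index = {}
    for rec in records:
        text = " ".join(rec.get(f, "") for f in fields).lower()
        for word in text.split():
            index.setdefault(word, set()).add(rec["id"])
    return index

record = {"id": "1", "title": "Haystacks", "subject": "landscape",
          "description": "snow effect"}
site_a = build_index([record], ["title", "subject"])                  # one grouping
site_b = build_index([record], ["title", "subject", "description"])   # another
# a query for "snow" succeeds at site B but fails at site A
```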
As part of the study, each Berkeley student created three search strategies, which
they then performed at each site. Because the set of searchable fields
presented to the user differed from site to site, students needed to use their
own judgment in an effort to replicate the search as closely as possible at
each site. These searches yielded vastly different results from site to site.
For example:
_ Searching for title="birth" yielded a different result set from each site
(with one site returning a null set).
_ A simple search query for "german landscape" yielded no results at Virginia.
A compound (or "complex") search produced no results at American, Michigan,
and Maryland, yet 6 results at Virginia and Cornell, and 5 results at
Illinois.
_ Searching for "haystack" retrieved 6 results at Michigan, 5 at Cornell and
Virginia, 3 at Maryland, 2 at Illinois, and 1 result at Columbia. (see figures
3, 4, and 5)
_ Searching for oil portraits of children (using the terms child, oil,
and sometimes qualified by portrait ) yielded a wide range of results.
All searches at American and Maryland and a "quick" search at Michigan yielded
no results. Searches at Illinois yielded 2 items, neither of which had
anything to do with oil paintings of children: rather they were works created
by an artist named "Child" about the Free Soil Party. However,
a fielded search ("child" within subject and "oil" within medium) at Michigan
yielded 31 results, over half of which were oil portraits of children.
Fielded searches at Cornell
(material-medium=oil and concepts-subject=child) and at Virginia (subject=child
and material=oil) both yielded 82 records, over half of which were oil
portraits of children.
_ The keyword phrase "black and white" yielded 0 results at Maryland, 3 results
at Illinois, the identical 9 results at American and Cornell, and the same 22
results at both Virginia and Michigan.
_ A search for French Still Life yielded no results at American and Maryland,
20 results at Illinois, 22 at Cornell, and 23 at Michigan and Virginia. (see
figures 6, 7 and 8)
_ A search for Madonna and Child yielded 0 at Maryland, 57 at American, 60 at
Cornell and Michigan, 61 at Illinois, and 66 at Virginia (not all of which
were actually depictions of the Madonna and Child).
_ A search under Surreal yielded 2 at Cornell, Illinois, Maryland, and
Michigan, and 4 at American and Virginia.
There were a number of reasons for these divergent search results: some sites
combined different sets of the original data fields into unified indexes;
different search engines took different approaches to indexing; and some
implementations performed whole-word rather than character-string searches on
various fields (e.g., a character-string search for "oil" would also retrieve
"soil," while a whole-word search would not).
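The character-string versus whole-word distinction can be shown in a few lines; this is an illustration of the general behavior, not of any particular MESL search engine.

```python
import re

# Illustration of character-string versus whole-word matching: a substring
# match for "oil" also retrieves "soil"; a word-boundary match does not.
def substring_match(term, text):
    """Character-string search: term may appear inside another word."""
    return term.lower() in text.lower()

def whole_word_match(term, text):
    """Whole-word search: term must stand alone between word boundaries."""
    return re.search(rf"\b{re.escape(term)}\b", text, re.IGNORECASE) is not None
```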
The most significant reason for discrepancies in search results on the same
data (at different implementation sites) had to do with choices institutions
made when they combined data fields in order to simplify searching for users.
The MESL data dictionary contains 32 fields, far too many to present
effectively in a typical search interface. Consequently, institutions made
local decisions about how to group sets of fields within the MESL database and
what to label each of these combined indexes. As a result, at each site users
were presented with different indexes to the same underlying content.
The way "keyword" indexes were constructed accounted for most of the
discrepancies that occurred when the same search was tried at different sites.
Keyword indexes can be formed by combining prominent fields like subject,
description, and title, by relying completely on the words within the label
field, and by other variations on these themes. The choice of which fields to
index for keywords can have a significant impact on search results, such as
finding an artist named "Child" when looking for portraits of children. The
importance of these choices is compounded by the fact that simple searches,
which are often used by less experienced users, tend to rely on the keyword
approach.
Another reason the results differed across sites had to do with search engine
design--that is whether all matches must start exactly the same, beginning from
the left side of any field, whether the system looks for character-strings or
whole words, whether the system matches stems, truncates, or performs other
search tricks. The impact of such searching design decisions drastically
affected search results in this study.
The preliminary results yielded by the Berkeley study suggest that additional
research can be done to further refine our understanding of the complex
interaction between database design, search engines, interface design and user
behavior. Efforts to develop successful systems for image delivery, undertaken
in tandem with those to repurpose collection management data for public access
to images, present formidable challenges.
Access Control
In addition to testing a variety of search and retrieval choices, the MESL
experiment explored issues surrounding the provision of access and security to
the museum data mounted on campus servers. Each implementation used fixed
Internet Protocol (IP) addresses as its initial form of access control. This form of security is quick and easy to
implement, and only requires that a list of valid campus domains or IP
addresses be compiled and checked whenever a search on the "secure" database is
initiated. While this IP access control worked relatively well for this
experimental project, it poses serious problems for a true production-level
service.
Groups of IP addresses tend to be too general, and often include too many
users in some areas and not enough in others. For example, commercial
entities leasing campus space, private technology-transfer spin-offs, alumni
dial-up access, and other groups that might not be valid members of a "campus
community" (as defined within a licensing agreement) are often included within
the campus IP domain. In many cases it is not possible to isolate these invalid
users from permissible student, faculty, and staff users. Another problem
stems from the fact that many legitimate users (e.g. those from satellite
campuses and programs in other cities, students and faculty who dial up through
their own Internet service providers, faculty on sabbatical at other campuses,
etc.) do not share the main campus domain or do not have fixed IP addresses,
and may be blocked from accessing the system. (Even if a campus could create a
list incorporating most of these other valid fixed external IP addresses,
managing such a fluctuating list would quickly become unwieldy.)
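A minimal sketch of IP-based access control as described above, using invented example network blocks:

```python
import ipaddress

# Hypothetical sketch of IP-based access control: a client address is
# checked against a compiled list of campus network blocks. The example
# networks below are invented (documentation ranges), not real campus IPs.
CAMPUS_NETWORKS = [ipaddress.ip_network("192.0.2.0/24"),
                   ipaddress.ip_network("198.51.100.0/24")]

def allowed(client_ip):
    """Return True if the client address falls inside a campus block."""
    addr = ipaddress.ip_address(client_ip)
    return any(addr in net for net in CAMPUS_NETWORKS)
```

The simplicity of this check is exactly why it scales poorly: the list of blocks can only approximate the licensed "campus community."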
Because most Web security has used IP addressing to control access to an
individual directory (see explanation footnoted above), this approach can
require that different sizes of images and text be stored into directories
based on access control rather than upon logical arrangement. For example, a
university wanting to control access to all images bigger than thumbnails, but
allow any user to see textual descriptions, would have to store thumbnails and
text in an uncontrolled-access directory and all other images in a
controlled-access directory.
Midway through the MESL project, several of the campuses began to experiment
with more sophisticated means of access control. In the second year
of MESL, Illinois added log-in and password access to supplement IP access as a
way of serving those outside its core IP cluster. In 1997 both Michigan and
Columbia implemented systems requiring log-in names and passwords for users of
MESL and other restricted collections, and authenticated them against already
developed databases of valid campus users.
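A sketch of password authentication against a locally maintained user database; the hashing scheme and user table below are invented for illustration and are far simpler than a production campus directory.

```python
import hashlib

# Hypothetical sketch of log-in authentication against an already-developed
# database of valid campus users. The user table, salt, and hash choice
# are invented for illustration only.
USERS = {"jdoe": hashlib.sha256(b"salt:secret").hexdigest()}

def authenticate(username, password):
    """Check submitted credentials against stored (salted, hashed) passwords."""
    stored = USERS.get(username)
    if stored is None:
        return False
    return stored == hashlib.sha256(f"salt:{password}".encode()).hexdigest()
```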
It is clear that simple IP access control will not support the kind of security
measures that most image rightsholders expect. More sophisticated methods need
to be found, based upon individual users rather than upon workstation
addresses. Most of these methods will require universities to keep track of
their users' various affiliations (e.g. to isolate alumni or drop-outs, to
identify valid users of material intended only for a particular course, etc.).
Because of privacy concerns, universities have the responsibility to maintain
authentication systems based upon this level of information about their users,
even when distributors are delivering licensed material directly to members of
the university community. Some universities have begun experimenting with
public key encryption and digital certificates to try to solve the
authentication problem while still maintaining user privacy.
The MESL distribution and delivery architecture proved to be adequate for this
demonstration project. But many MESL participants doubt whether this model
will work in a large-scale production mode. In particular, the handling of
distributions, (including updates and corrections) is problematic; on an
ongoing basis the MESL approach of "full redistribution of the entire dataset
for every update" might not scale up.
The architecture chosen for the MESL project is by no means the only possible
distribution and delivery scheme. It is very possible that at some time in the
future universities may negotiate licenses with image repositories (or agents
acting for a group of repositories) on behalf of the university community, but
rely upon the repositories to deliver these images and accompanying text
directly to the users. For certain high-use items or special combinations and
configurations the universities might choose to mount a small subset of the
data, but still rely upon an external repository/distributor to deliver most
items to the university community. Before such a configuration becomes viable,
a number of other problems must be solved, among them: reliable high-bandwidth
delivery over wide-area networks, secure authentication of users as being part
of the authorized university community, and protection of user privacy.
Another critical issue for instructors is the development of a set of tools
that go beyond a library catalog model of merely finding an image and
displaying it. For many instructors, finding a set of images was not enough;
they wanted tools for organizing and using these images. The most notable tool development occurred at Maryland
and Virginia. Maryland built an application called SearchSlide (now Maryland
ISIS) which mimicked a light table,
allowing an instructor or student to re-order and organize a set of images, or
to prepare a set for projection. Virginia built into its query screens the
capability to mark individual images with checkboxes, allowing the user to then
view only the set of checked images. Virginia also designed a set of templates
that made it easy for faculty to design side-by-side comparisons of images and
text records or to have their students model virtual exhibits. Other desirable
tools include image zooming, image annotation, and manipulation of color and
gamma functions. Until faculty have access to easy-to-use tools for performing
functions they deem important, widespread faculty use of digital images is
unlikely.
The heterogeneous mix of deployment systems in MESL has revealed a number of
interesting factors that would have been difficult to discover in a more
homogeneous environment. This mix also permitted an iterative refinement of
local practices over the course of the project.
While the design of an information retrieval system may at first appear to be
trivial, decisions over how to combine indexes to present to the user and how
to implement searching strategies are critical in determining the user's
experience. By examining the different ways in which an identical data set
can be searched and presented to users, implementers should be able to better
design future interactive projects.
It is clear that a number of problems must be resolved before there will be
widespread use of digital images on university campuses. Infrastructure
problems (such as labs with high-resolution workstations and high-quality
classroom projection) pale in comparison with the problem of faculty buy-in and
enthusiasm. The need for a critical mass with which to teach, and having
exemplary projects to show other faculty, are both dealt with in other chapters
in this and the companion volume.
Portions of this chapter appeared in a paper by Howard Besser titled "`If
it's the same museum information, why don't these look the same?' Comparing
five implementations of identical data from the Museum Educational Site
Licensing Project" in the Proceedings of the 1997 International
Conference on Hypermedia and Interactivity in Museums, edited by David
Bearman and Jennifer Trant.
Thanks are due to the MESL participant institutions for providing permission
and access to the images, records, and retrieval implementations, and for
compiling data for their technical reports. Financial assistance from the
Andrew W. Mellon Foundation helped gather and compile technical data about the
University implementations. Christie Stephenson compiled information about
image delivery, and offered keen observations on many other aspects of the
topics covered in this chapter, as well as significant editorial assistance.
Students in Howard Besser's Spring 1997 SIMS 296A course at UC Berkeley
participated in the cross-implementation study.
Besser, Howard. (1997a). "`If it's the same museum information, why don't
these look the same?' Comparing five implementations of identical data from the
Museum Educational Site Licensing Project," in David Bearman and Jennifer
Trant (eds.) Proceedings of the 1997 International Conference on Hypermedia
and Interactivity in Museums.[place, date]
Besser, Howard. (1997b). The Transformation of the Museum and the Way It's
Perceived, in Katherine Jones-Garmil (ed.), The Wired Museum: Emerging
Technology and Changing Paradigms, Washington, D.C.: American Association
of Museums, pp 153-169.
Besser, Howard and Christie Stephenson. (1996). The Museum Educational Site
Licensing Project: Technical Issues in the Distribution of Museum Images and
Textual Data to Universities, in James Hemsley (ed.), E.V.A. '96 London
(Electronic Imaging and the Visual Arts), Thursday 25th July 1996 (vol. 2),
Hampshire, UK: Vasari Ltd, 1996, pages 5-1 - 5-15.
Museum Educational Site Licensing Project. (1997). World Wide Web site.
see Besser and Stephenson 1996
 Applications to generate derivatives
on-the-fly were not available at that time, but in the future these may prove
useful.
 JPEGs were produced with a variety of
different batch image processing programs (HiJaak95, Lview, PhotoShop, Image
Magick, Debabelizer, Graphic Converter, Multimedia Converter, Alchemy) at a
variety of quality settings. It is difficult to compare quality settings
across software as each has a unique method of representing the
quality/compression ratio scale. Michigan
also supplied an intermediate "small" size JPEG, with a max pixel dimension of
320 pixels. Availability of derivatives in the full range of sizes was
dependent on the size of the original.
 compiled by Christie Stephenson
 see Hays and Borkowski article in the accompanying volume.
 Examples from Maryland cited herein were
gathered from their Web implementation which was never intended as the primary
means of access for Maryland users. Consequently they are not indicative of
the access that most Maryland users experienced via their campus network.
 The Columbia site was inaccessible to the
students during the study period.
 This paper summarizes the preliminary
findings, focusing primarily on the searching process. Future reports from this
study will examine how search results are presented to users at each of
the university sites, and will compare record display features, thumbnail
sizes, and other interface variables.
 Virginia did not provide a browse function.
IP Access Control allows a systems
manager to create a file containing a list of valid internet addresses, and to
prevent access to all the information in that directory by any users not coming
from one of those listed internet addresses. The most common IP access control
at universities is to limit access to the university's domain name. Thus, by
placing just a few lines of code (specifying "cornell.edu") in a file in a
particular directory, Cornell could prevent access to all files in that
directory by anyone at a workstation whose address did not end in
"cornell.edu".
 In recent years applications like this
for text have been developed to operate in conjunction with text-based library
catalogs. Tools like ProCite allow the user to download formatted records from
a library catalog, load these into a citation database, and manipulate them or
incorporate them into footnotes, citations, bibliographies, etc.
 For a detailed description of this
software, see "Maryland ISIS (Interactive System for Image Searching)" by
Catherine Hays and Ellen Borkowski in the accompanying volume,
Perspectives on the Museum Educational Site Licensing Project.