Questionnaire design for the Evaluation of the Museum Education Site Licensing Project

By Cara List

The Museum Education Site Licensing Project has brought together sixteen institutions in a joint effort to make art historical information more accessible through the use of digital technology. Sponsored by the Getty Art History Information Program (AHIP) and MUSE Educational Media, this project makes images from the collections of seven museums and cultural repositories available for educational purposes at seven institutes of higher learning. The Fowler Museum of Cultural History at the University of California, Los Angeles; George Eastman House; Harvard University Art Museums; the Library of Congress; the Museum of Fine Arts, Houston; the National Gallery of Art; and the National Museum of American Art have agreed to create an image database using information and images from their collections in order to study the educational potential of digital technology in the area of art history. The database will be available through a site license to the educational communities at American University, Columbia University, Cornell University, the University of Illinois at Urbana-Champaign, the University of Maryland, the University of Michigan, and the University of Virginia. The participating institutions hope to learn more about the technology required for such a project, the educational uses of the data, and issues of intellectual property, and to create a model site license agreement. The project is intended as a laboratory for exploring these areas and resolving the problems that are inherent in so complex a project.

Because MESL is a two-year, experimental prototype, many issues of implementation and use will require careful examination in order to understand what does and does not function well. The evaluation of these issues is complex and will be ongoing throughout the project. Different methodologies for evaluation will suit different issues. Deciding how to collect data, from whom to collect it, and on what subjects is by itself a daunting task. Survey tactics, attitude measurement, and statistical evaluation constitute a branch of psychology in their own right, and it would be wise to consult an outside source for help in designing and implementing the evaluative process. Wendy Lougee of the University of Michigan raises some important points in her June 2, 1995 e-mail to Jennifer Trant. Ms. Lougee mentions that in order to assess the effectiveness of various digital library projects, the library has engaged an organizational psychologist, who is not only skilled in formulating methodologies for data collection but also brings objectivity to the task of sorting and recognizing the goals of the evaluation. It would certainly be advisable for the MESL participants to consider hiring an outside expert to help them organize and design their evaluative surveys.

In studying the e-mail of the participants in the evaluation working group, I immediately saw that a lack of organization and of background in survey design could seriously hinder this process. Despite my lack of expertise in organizational psychology, it is clear to me that certain steps can be taken to set the evaluation process in motion.

The first step is to create a list of all the information MESL will need to collect. This will include not only attitudes about the project from instructors and students, but also various measurements that will demonstrate MESL's accountability, such as demographics of the database's users, the number of hits per annum, and the costs to the universities for maintenance. Much of the information collected by the accountability survey will be of use to the attitude survey, and it is certainly in MESL's best interests to avoid redundancy or useless information. My study of the evaluation working group's e-mail makes it clear that progress has been made in categorizing the topics of interest for an assessment of accountability. Both David Bearman of the management committee, in his message of June 12, 1995, and David Millman of Columbia University, in his e-mail of June 21, 1995, have outlined the subjects for which data need to be collected. The working group studying usage has not yet put together the same kind of list, and doing so should probably be its priority.
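
As a concrete illustration of one accountability measure, the sketch below (in Python) tallies requests per month from a web server access log; a per annum figure is simply the sum of twelve such months. The file name "access.log" and the Common Log Format timestamp are my assumptions for the purpose of the example, not part of any MESL specification.

    import re
    from collections import Counter

    # A minimal sketch: count logged requests per month as a rough measure of
    # database use.  The log format (Common Log Format) and the file name
    # "access.log" are assumptions made for illustration.
    TIMESTAMP = re.compile(r'\[(\d{2})/(\w{3})/(\d{4}):')   # e.g. [12/Jun/1995:10:31:22
    MONTHS = {name: number for number, name in enumerate(
        "Jan Feb Mar Apr May Jun Jul Aug Sep Oct Nov Dec".split(), start=1)}

    def hits_per_month(path="access.log"):
        counts = Counter()
        with open(path) as log:
            for line in log:
                match = TIMESTAMP.search(line)
                if match:
                    _, month, year = match.groups()
                    counts[(int(year), MONTHS[month])] += 1
        return counts

    if __name__ == "__main__":
        for (year, month), total in sorted(hits_per_month().items()):
            print(f"{year}-{month:02d}: {total} hits")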

It should be recognized at this point that it is much harder to pinpoint the subjects on which attitude data is needed than it is for the much more straightforward issues of accountability. It may be useful for the evaluation team to work backwards in establishing the nature of their surveys. By beginning with the final stage of data gathering, the report, the group might be better able to identify both the sample of the university community that they need to question and their areas of interest. What will the ideal report say?

At this stage it is also important to recognize that there will surely not be only a single report. What reports will be needed? From reading the e-mail and from my own interpretation of the project, I have put together a short list of possible reports:

  1. An initial report of the attitudes of instructors who will be utilizing MESL, reporting how they intend to use the database in their classes.
  2. A follow-up report of the attitudes of instructors, after they have incorporated the database into their classroom instruction.
  3. A report of faculty who are not using MESL, covering whether and how they believe the database could be of use to them, as well as their reasons for not using it.
  4. A report of the attitudes of students who are in the classes that use the MESL database.
  5. A report of the users of the database, whether or not they are students or faculty, and how they used it.

I am certain that there are other reports that the MESL organizers are in a better position to identify, and that some of those listed above may not have been considered. Determining what reports will be needed is not as easy as it first appears. Each time a sample group is identified, the people who are not included must also be examined. For instance, it is obvious that professors using the MESL database in their classrooms should be surveyed, but what can be learned about those professors who are not using it? Do they know about it? A great deal could be learned about use patterns by discovering whether MESL's existence is widely known. Do they know about it but choose not to use it? A wealth of information lies in that direction. Data gathered only from instructors using MESL will tell the survey group only about people who embrace the technology or have a use for the information. What about people who are afraid to use MESL because they do not understand it? Will the data gathered about the use of the database's contents be fully representative if the people whose needs are not met never use it, and are therefore never surveyed? Brainstorming about the desired results of the survey will help to identify the samples of the educational community that need to receive a questionnaire or be interviewed.

The next step is then to examine each of these potential reports to establish what information the MESL organizers hope to gain. To take the first example, a list can be made of the possibilities for findings.

  1. Who are the instructors?
  2. What departments are the instructors affiliated with?
  3. How did the instructors find out about MESL?
  4. How much technological experience do the instructors have?
  5. Have the instructors ever used digital teaching tools before? If so, what, and how?
  6. What are the instructional tools the professors currently use?
  7. What are the benefits of using these tools? The drawbacks?
  8. How do the instructors intend to use MESL? In the classroom? Outside of the classroom?
  9. What do the professors perceive the benefits of MESL to be? The drawbacks?
  10. In a perfect world, what do the instructors think MESL should include or be able to do?

At this point a prototype questionnaire should be written and circulated to a small sample of participating professors, to clarify whether the questions are producing the kinds of answers the survey team needs and to help the team fine-tune the questionnaire. This step is referred to as pilot work, and it is extremely important to the creation of a successful survey and representative results. In writing the sample questionnaire, two types of questions can be distinguished: factual and subjective. When writing factual questions it is tempting to offer multiple choices. For instance, the question "What is your affiliation to the MESL project? __ faculty __ student __ staff" ignores the possibility that the user could be from outside the university. By using a completely open-ended question in the pilot run, the survey team will be able to assess whether multiple choices are appropriate and what the choices might be.
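
To make the value of an open-ended pilot question concrete, the short Python sketch below tallies free-text answers to the affiliation question so the survey team can see which multiple-choice options, if any, the pilot data actually supports. The sample answers and the normalization rule are invented for illustration.

    from collections import Counter

    # Hypothetical free-text answers to the open-ended pilot question
    # "What is your affiliation to the MESL project?"
    pilot_answers = [
        "faculty", "Faculty ", "graduate student", "undergraduate",
        "library staff", "museum curator", "visiting scholar", "student",
    ]

    def normalize(answer):
        """Collapse case and spacing so similar answers group together."""
        return answer.strip().lower()

    tally = Counter(normalize(a) for a in pilot_answers)

    # Categories that appear often become candidate multiple-choice options;
    # rare or unexpected answers argue for keeping an open "other" response.
    for category, count in tally.most_common():
        print(f"{category}: {count}")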

Subjective questions should remain open-ended, but the pilot work can still influence the final version of the question. The questions should not be confining or leading, and they should not make assumptions about the attitudes of the respondents. After the pilot run, if the responses to a question fall within a narrow range, it is likely that the question needs to be rewritten to be less leading. If the answers to a particular question frequently indicate misunderstanding, chances are good that the writers of the question made an incorrect assumption about the types of answers it would get. Questions can be too narrow or too broad, too formal or too intimate, too vague, or biased. The answers to the pilot run will help the survey designers fine-tune and rewrite questions.

Another valuable result of a pilot run is that it lets the survey team assess whether respondents' answers are meeting its needs for data. If a question consistently produces answers that do not satisfy the survey team's information requirements, it clearly needs to be reworded.

This is also a good time to evaluate the method of circulation. Not long ago there were far fewer choices for how data could be gathered: information was collected by mailed questionnaire, telephone interview, or personal interview. All of these options still exist, but a questionnaire can now also be sent through e-mail or presented as an interactive element of a web site. A mailed paper questionnaire might languish in an in-box for a long time, or forever, whereas e-mail tends to be viewed as more urgent; however, e-mail responses might also receive less careful consideration. Using forms, databases, and CGI to create an interactive web site is an excellent way to discover who actually uses the database, because many people within the educational community may visit it without ever being enrolled in the classes that use MESL. A web-site survey also has the added benefit of automatic entry of the data into a database, which can be designed to quantify some of the results. However, this kind of survey is fraught with problems because of the lack of control over who responds: many people may use the database, but it is possible that only a certain segment of that population would answer a questionnaire.
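
As a sketch of the interactive web-site approach, a CGI handler written in Python might look like the following. The form field names ("affiliation", "comments") and the storage file "responses.csv" are hypothetical, and a real implementation would write to a proper database rather than a flat file.

    #!/usr/bin/env python3
    import cgi
    import csv
    import datetime

    # A minimal CGI sketch for an interactive web questionnaire.  Field names
    # and the CSV file are illustrative only; automatic entry into a real
    # database would replace the CSV step.
    def main():
        form = cgi.FieldStorage()
        row = [
            datetime.datetime.now().isoformat(),
            form.getfirst("affiliation", ""),
            form.getfirst("comments", ""),
        ]
        with open("responses.csv", "a", newline="") as outfile:
            csv.writer(outfile).writerow(row)

        # A CGI script must emit headers, a blank line, then the response body.
        print("Content-Type: text/html")
        print()
        print("<html><body><p>Thank you for completing the questionnaire.</p></body></html>")

    if __name__ == "__main__":
        main()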

The pilot phase is also the appropriate time to set up the mechanisms for editing, coding, and analyzing the data collected. I will not discuss this element because it is better handled by an experienced professional, and because it is a complicated area requiring considerable study and an understanding of statistics. It is easy for the findings of a well-written questionnaire to become misleading at the analysis stage. Conclusions can be drawn that are not backed by real evidence but instead reflect the desires of the analyst or a bias on the part of the survey team. For this reason alone, an outside expert should handle the preparation of the results.

After the pilot stage is complete and a carefully designed questionnaire has been written, the actual collection of data can begin. I hope this essay helps to clarify the issues involved in implementing such a complex study. I had initially intended to write a questionnaire myself, but after some research, the essence of which I have outlined above, I believe that this must be a group effort, guided by an organizational psychologist, and that it will require quite a bit of field work before a final survey format can be created.

BIBLIOGRAPHY

Oppenheim, A.N. Questionnaire Design, Interviewing and Attitude Measurement. New York: Pinter Publishers, 1992.

Warwick, Donald P., and Lininger, Charles A. The Sample Survey: Theory and Practice. New York: McGraw-Hill Book Company, 1975.