Evaluation Overview

Evaluation happens at all stages of a project’s development. For exhibit and program design, evaluation can be broken into three phases: front-end evaluation, formative evaluation, and summative evaluation.

The Three Phases of Evaluation

The earliest stages of development should include front-end evaluation: conducting focus groups or interviews with intended audiences to gauge their understanding of themes and content. Formative evaluation lets you test exhibit or program elements with visitors before producing a final product. Prototyping these elements is an iterative process; testing, making adjustments, and retesting produces the best results. Finally, when the exhibit or program is complete, conduct summative evaluation. Comment boards, observation, and surveys measure how visitors interact with the experience and whether design goals were met.

Evaluation ensures that you’re giving families what they want and producing the most successful product possible. In-house evaluation can be done quickly and on a budget. Even small amounts of data will uncover trends that help you design and enhance your program or exhibit. You may think you’ve covered all the bases at the design table, but you’d be surprised by what you can learn from putting an exhibit or program in front of your target audience at all three phases.

At the USS Constitution Museum, we’ve improved our museum experience by listening to our visitors. Our entire All Hands on Deck exhibit was designed by prototyping and evaluating with its intended audience, families. As Director of Exhibits Robert Kiihne explains in this article, the exhibit’s original design and the final design are markedly different thanks to the families who helped shape the exhibit through their participation and feedback.

Quantitative vs. Qualitative Data

Evaluation yields a lot of information. This data can be broken into two categories: quantitative data and qualitative data.

Quantitative data is objective and measurable; it is typically expressed in numbers and quantities. It includes:

  • Demographics (age, race, zip code of residence, etc.)
  • How much time a visitor spends in a given gallery, or at a particular label or interactive
  • “Yes” or “No” questions (“Have you ever been to this museum before?”)
  • Rating scales (“On a scale of 1 to 5, with 1 the worst and 5 the best, please rate your experience”)

Qualitative data is open-ended and non-numerical. It usually involves analyzing text or other word-based responses. It includes:

  • Why a visitor prefers one exhibit, label, or activity over another
  • What a visitor would change about an exhibit, label, or activity
  • Knowledge gained from an exhibit or program (“Name two things that you learned about [subject] from this exhibit/program.”)

These are not exhaustive lists. If you’re new to the world of evaluation, it’s important to understand that both quantitative and qualitative data are needed at different phases of the evaluation process.

How Much Data Do You Need?

Marianna Adams provides advice about data collection in this video clip. She argues that you’ve collected enough data when you start to see reliable patterns:

Developing Your Evaluation Instrument

Just like designing a program or exhibit, constructing your evaluation instrument, whether it’s a questionnaire or an interview, takes time. Questions that sound best at your computer may fall flat in front of an audience. Only ask questions that you are going to do something with. The shorter, the better! Prototype your instrument: test, revise, and test again until you get something that works. Marianna Adams explains how to test your evaluation instrument further:

Embedding Evaluation

When we tack an evaluation onto the end of our programs, it can feel like a test to an already over-surveyed public. Is that the right time and methodology to get public input? Marianna Adams has championed the use of embedded evaluation. She explains the process here:

In this video, Beth Fredericks and Marianna Adams discuss a real-world example of embedding evaluation in Beth’s snow globe program at the Boston Children’s Museum:

Evaluation on a Budget

How do you know if you’ve achieved your goals? Do you have to hire a big, expensive firm to collect the data you need? No. You can do it cheaply, simply, and effectively in-house. In this video, Heather Nielsen and Beverly Sheppard discuss ways to evaluate success in-house with your own staff to get the data you need, including “Big E” and “little e” evaluation: