
CONTENTS

Editor-in-Chief’s Comment

Editors’ Notes

Chapter 5: Design Principles for Data Visualization in Evaluation

Simplification

Emphasis

Implications for Evaluation Practice

Chapter 6: Data Dashboard as Evaluation and Research Communication Tool

When to Use Dashboards

A Dashboarding Process

Dashboard Limitations

Conclusion

Chapter 7: Graphic Recording

When and Why Would an Evaluator Use Graphic Recording?

How Is Graphic Recording Being Used in the Field?

Common Questions About Graphic Recording

Chapter 8: Mapping Data, Geographic Information Systems

GIS Logistics

GIS and Program Implementation

GIS and Program Outcomes

GIS Limitations

Final Thoughts

Index


New Directions for Evaluation

Sponsored by the American Evaluation Association

Editor-in-Chief

Paul R. Brandon, University of Hawai‘i at Mānoa

Editorial Advisory Board

Anna Ah Sam, University of Hawai‘i at Mānoa
Michael Bamberger, Independent consultant
Gail Barrington, Barrington Research Group, Inc.
Fred Carden, International Development Research Centre
Thomas Chapel, Centers for Disease Control and Prevention
Leslie Cooksy, Sierra Health Foundation
Fiona Cram, Katoa Ltd.
Peter Dahler-Larsen, University of Southern Denmark
E. Jane Davidson, Real Evaluation Ltd.
Stewart Donaldson, Claremont Graduate University
Jody Fitzpatrick, University of Colorado Denver
Jennifer Greene, University of Illinois at Urbana-Champaign
Melvin Hall, Northern Arizona University
Gary Henry, Vanderbilt University
Rodney Hopson, Duquesne University
George Julnes, University of Baltimore
Jean King, University of Minnesota
Saville Kushner, University of Auckland
Robert Lahey, REL Solutions Inc.
Miri Levin-Rozalis, Ben Gurion University of the Negev and Davidson Institute at the Weizmann Institute of Science
Laura Leviton, Robert Wood Johnson Foundation
Melvin Mark, Pennsylvania State University
Sandra Mathison, University of British Columbia
Robin Lin Miller, Michigan State University
Michael Morris, University of New Haven
Debra Rog, Westat and the Rockville Institute
Patricia Rogers, Royal Melbourne Institute of Technology
Mary Ann Scheirer, Scheirer Consulting
Robert Schwarz, University of Toronto
Lyn Shulha, Queen’s University
Nick L. Smith, Syracuse University
Sanjeev Sridharan, University of Toronto
Monica Stitt-Bergh, University of Hawai‘i at Mānoa

Editorial Policy and Procedures

New Directions for Evaluation, a quarterly sourcebook, is an official publication of the American Evaluation Association. The journal publishes works on all aspects of evaluation, with an emphasis on presenting timely and thoughtful reflections on leading-edge issues of evaluation theory, practice, methods, the profession, and the organizational, cultural, and societal context within which evaluation occurs. Each issue of the journal is devoted to a single topic, with contributions solicited, organized, reviewed, and edited by one or more guest editors.

The editor-in-chief is seeking proposals for journal issues from around the globe about topics new to the journal (although topics discussed in the past can be revisited). A diversity of perspectives and creative bridges between evaluation and other disciplines, as well as chapters reporting original empirical research on evaluation, are encouraged. A wide range of topics and substantive domains is appropriate for publication, including evaluative endeavors other than program evaluation; however, the proposed topic must be of interest to a broad evaluation audience. For examples of the types of topics that have been successfully proposed, go to http://www.josseybass.com/WileyCDA/Section/id-155510.html.

Journal issues may take any of several forms. Typically they are presented as a series of related chapters, but they might also be presented as a debate; an account, with critique and commentary, of an exemplary evaluation; a feature-length article followed by brief critical commentaries; or perhaps another form proposed by guest editors.

Submitted proposals must follow the format found via the Association’s website at http://www.eval.org/Publications/NDE.asp. Proposals are sent to members of the journal’s Editorial Advisory Board and to relevant substantive experts for single-blind peer review. The process may result in acceptance, a recommendation to revise and resubmit, or rejection. The journal does not consider or publish unsolicited single manuscripts.

Before submitting proposals, all parties are asked to contact the editor-in-chief, who is committed to working constructively with potential guest editors to help them develop acceptable proposals. For additional information about the journal, see the “Statement of the Editor-in-Chief” in the Spring 2013 issue (No. 137).

Paul R. Brandon, Editor-in-Chief

University of Hawai‘i at Mānoa

College of Education

1776 University Avenue

Castle Memorial Hall, Rm. 118

Honolulu, HI 96822–2463

e-mail: nde@eval.org

Editor-in-Chief’s Comment

With the guest editors, I am proud to present the second of two New Directions for Evaluation (NDE) issues on the topic of data visualization. The substantial number of figures and tables made it impossible to present all the chapters on the topic in a single journal issue. The guest editors and chapter authors introduced readers to the topic in the four chapters of NDE No. 139 (Fall 2013); the present second part includes an additional four chapters with numerous tables and figures demonstrating how evaluation results and statistics can be displayed effectively and attractively.

Readers should note that the tables and figures shown in the two NDE issues are available (many in color) at www.ndedataviz.com.

Paul R. Brandon, PhD

Professor of Education

Curriculum Research & Development Group

College of Education

University of Hawai‘i at Mānoa

Honolulu

Editors’ Notes

The current crop of technological innovations has made it easier for evaluators to visualize data and information, but it has also opened a Pandora’s box of frustrations, as common visualization errors persist. The methods of display have also multiplied in the past few years, as software capabilities develop and online tools grow in popularity. Given this spread, and stakeholders’ growing expectations for visualizations, it is ever more important that evaluators stay current on the opportunities and challenges that data visualization offers. This issue of New Directions for Evaluation, the second of two parts, aims to introduce evaluators to qualitative and quantitative visualization applications that can be used in evaluation practice. The issue also offers concrete suggestions for improving data visualization design and helps the reader identify and correct common visualization errors that often lead to communication failures. Our goal is to introduce the reader to practical, fundamental ideas and concepts in data visualization. Part 1 (New Directions for Evaluation, 139) introduced readers to the tools and status of data visualization, with general overviews of how it is applied to both quantitative and qualitative data. Both parts are intended as references and as sources of guidance and ideas for evaluators who are interested in, designing, or struggling with data visualizations.

The beginning of each chapter contains icons (designed by Chris Metzner) that indicate the applicability of the chapter to the four stages of the evaluation life cycle (Alkin, 2010): (1) understanding the program, stakeholders, and context; (2) collecting data and information; (3) analyzing data; and (4) communicating findings (see Figure 1). These icons provide a quick reference to each chapter’s content and its relationship to evaluation practice.

Figure 1. Title Icons


Part 2 Chapter Descriptions

Chapter 5 offers general guidelines and best practices for good data visualization, focusing on addressing common errors and on techniques that help a viewer understand and interpret the data. Chapter 6 introduces readers to data dashboards, discusses their history and their use in strategic decision making, and outlines a step-by-step process for creating an effective data dashboard. Chapter 7 introduces readers to graphic recording, a visualization method that involves audiences during the evaluation process and helps stakeholders share their knowledge and make sense of relevant data. Chapter 8 provides readers with examples of how maps and geographic information systems (GIS) can be used during the evaluation process to assess needs, track implementation, and examine program outcomes. The figures in each chapter are printed in black and white; color versions can be found at ndedataviz.com.

Reference

Alkin, M. C. (2010). Evaluation essentials: From A to Z. New York, NY: Guilford Press.

Tarek Azzam

Stephanie Evergreen

Editors

Tarek Azzam is an assistant professor at Claremont Graduate University and associate director of the Claremont Evaluation Center.

Stephanie Evergreen is an evaluator who runs Evergreen Data, a data presentation consulting firm.

5

Design Principles for Data Visualization in Evaluation

Stephanie Evergreen, Chris Metzner

Evergreen, S., & Metzner, C. (2013). Design principles for data visualization in evaluation. In T. Azzam & S. Evergreen (Eds.), Data visualization, part 2. New Directions for Evaluation, 140, 5–20.

Abstract

Data visualization is used in two main ways: as a tool to aid analysis or as a tool for communication. In keeping with the focus of this issue, this chapter addresses the latter. At its most essential, the communication goal of data visualization is to grab the audience’s attention and help them engage with the data so that the result is increased understanding, regardless of the software platform or programming ability of the author. © Wiley Periodicals, Inc., and the American Evaluation Association.

In her influential article summarizing the state of evaluation use, Weiss (1998) ends with this thought:

The evaluator has to seek many routes to communication—through conferences, workshops, professional media, mass media, think tanks, clearinghouses, interest groups, policy networks—whatever it takes to get important findings into circulation. And then we have to keep our fingers crossed that audiences pay attention. (p. 32)

This chapter picks up where Weiss left off by investigating the role of communication in holding the attention of evaluation audiences. Visualization-supported evaluation reporting is an educational act and, as such, should be communicated using principles that support cognition. Research from related fields suggests practices evaluators could adopt (and may already be using) to increase the chances that audiences will want to engage with their reporting. One study, for example, found a very high correlation of .91 between the perceived beauty of a display and its usefulness (Tractinsky, 1997). Evaluators can do much more than cross their fingers and hope their data will be read and used.