
Professional Development Workshops

Professional Development Workshops are hands-on, interactive sessions that provide an opportunity to learn new skills or hone existing ones at Evaluation 2012.

Professional development workshops precede the conference. They differ from sessions offered during the conference itself in at least three ways:

  • Each is longer (3, 6, or 12 hours in length) and thus provides a more in-depth exploration of a skill or area of knowledge.
  • Presenters are paid for their time and are expected to have significant experience both in presenting and in the subject area.
  • Attendees pay separately for these workshops and are given the opportunity to evaluate the experience.

Sessions are filled on a first-come, first-served basis, and many usually fill before the conference begins.

Registration:

Registration for professional development workshops is handled as part of the conference registration form; however, you may register for professional development workshops even if you are not attending the conference itself. Simply use the regular conference registration form and leave the conference registration box unchecked.

Fees:

Workshop registration fees are in addition to the fees for conference registration:

                           Members                          Non-members                      Full-time Students
                           Early      Standard   On Site    Early      Standard   On Site    Early      Standard   On Site
                           < Sep 20   < Oct 11   >= Oct 12  < Sep 20   < Oct 11   >= Oct 12  < Sep 20   < Oct 11   >= Oct 12
Conference Registration    $185       $225       $275       $265       $305       $355       $95        $105       $115
Two Day Workshop           $300       $320       $360       $400       $440       $480       $160       $180       $200
One Day Workshop           $150       $160       $180       $200       $220       $240       $80        $90        $100
Half Day Workshop          $75        $80        $90        $100       $110       $120       $40        $45        $50

Full Sessions:

Sessions that are closed because they have reached their maximum attendance will be clearly marked below the session name. No further registrations will be accepted for full sessions and we do not maintain waiting lists. Once sessions are closed, they will not be re-opened.

Browse by Time Slot:

Two Day Workshops, Monday and Tuesday, October 22 and October 23, 9 AM to 4 PM

(1) Qualitative Methods; (2) Quantitative Methods; (3) Actionable Answers; (4) Logic Models; (5) Interactive Practice; (6) Developmental Evaluation

One Day Workshops, Tuesday, October 23, 9 AM to 4 PM

(9) Propensity Score Matching; (10) Focus Group Research; (11) Evaluation 101; (12) Systems Thinking; (13) Grant Writing Skills; (14) Equity-Focused Evaluation; (15) Using Stories; (16) Experimental Research Design; (17) Introductory Consulting Skills

One Day Workshops, Wednesday, October 24, 8 AM to 3 PM

(19) Evaluation Dissertation; (20) Empowerment Evaluation; (21) Logic Models Beyond; (22) Intro to GIS; (23) Data Dashboard Design; (24) Using Effect Size; (25) Utilization-Focused Evaluation; (26) Longitudinal Data Analysis; (27) Multilevel Models; (28) Data Cleaning; (29) Focus Group Interviewing; (30) RealWorld Impact Evaluation; (31) Reality Counts: Participatory Methods; (32) Qualitative Research Strategies; (33) Advanced Evaluation Methods; (34) Theory-Driven Evaluation; (35) Transformative Mixed Methods; (36) Applications of Multiple Regressions

Half Day Workshops, Wednesday, October 24, 8 AM to 11 AM

(38) Acknowledging the “Self”; (39) Metaphor to Model; (40) Intermediate Consulting Skills 

Half Day Workshops, Wednesday, October 24, 12 PM to 3 PM

(42) Case Study Methods; (43) Program Design; (44) Engaging Laypeople; (45) Performance Measurement Systems


Two Day Workshops, Monday and Tuesday, October 22 and October 23, 9 AM to 4 PM


1. Qualitative Methods for Evaluation Research

Qualitative information can richly support quantitative research findings by providing insight into people’s experiences, perceptions, and motivations. Many evaluators, steeped in the quantitative tradition, ask what the best practices are for qualitative research and how best to conduct qualitative inquiry so that it contributes positively to an evaluation.

Through lecture, discussion, and hands-on practice, this workshop will help you explore the benefits of qualitative research, design choices, considerations when interviewing, and analysis alternatives.

You will learn:

Deborah Potts is a qualitative researcher who has led thousands of focus groups and one-on-one interviews. She is co-author of Moderator to the Max: A full-tilt guide to creative, insightful focus groups and depth interviews.  Deborah has taught workshops in conducting qualitative research for nearly two decades. She is an independent consultant.

Session 1: Qualitative Methods
Scheduled: Monday and Tuesday, October 22 and 23, 9 AM to 4 PM
Level: Beginner, no prerequisites


2. Quantitative Methods for Evaluators

This workshop is now oversubscribed and no further registrations are being accepted. Please select an alternative.

Quantitative data offers opportunities for numerical descriptions of populations and samples. The challenge is in knowing which analyses are best for a given situation. Designed for the practitioner needing a refresher course and/or guidance in applying quantitative methods to evaluation contexts, the workshop covers the basics of parametric and nonparametric statistics, as well as how to report your findings.

Hands-on exercises and computer demonstrations interspersed with mini-lectures will introduce methods and concepts. The instructor will review examples of research and evaluation questions and the statistical methods appropriate to developing a quantitative data-based response.

You will learn:

Katherine McKnight applies quantitative analysis as Director of Program Evaluation for Pearson Achievement Solutions and is co-author of Missing Data: A Gentle Introduction (Guilford, 2007). Additionally, she teaches Research Methods, Statistics, and Measurement in Public and International Affairs at George Mason University in Fairfax, Virginia.

Session 2: Quantitative Methods
Scheduled: Monday and Tuesday, October 22 and 23, 9 AM to 4 PM
Level: Beginner, no prerequisites

3. Getting Actionable Answers for Real-World Decision Makers: Evaluation Nuts and Bolts that Deliver

This workshop is now oversubscribed and no further registrations are being accepted. Please select an alternative.

Ever read an evaluation report and still wondered how worthwhile the outcomes really were or whether the program was a waste of money? What if evaluations actually asked evaluative questions and gave clear, direct, evaluative answers? This workshop covers 1) big-picture thinking about key stakeholders, their information needs, and the evaluative questions they need answered; 2) a hands-on introduction to evaluative rubrics as a way of directly answering those questions; 3) guidance for designing interview and survey questions that are more easily interpreted against evaluative rubrics and capture evidence of causation; and 4) a reporting structure that gets to the point, delivering direct evaluative answers that decision makers can really use.

This workshop combines mini-lectures with small- and large-group exercises to build the big-picture thinking needed to focus the evaluation on what really matters, along with the most important “nuts and bolts” concepts and tools needed to deliver actionable answers.

You will learn:

E Jane Davidson runs her own successful consulting practice, Real Evaluation Ltd, blogs with Patricia Rogers on the entertaining Genuine Evaluation Blog, and is the 2005 recipient of AEA's Marcia Guttentag Award. Her popular text, Evaluation Methodology Basics: The Nuts and Bolts of Sound Evaluation (Sage, 2005), is used by practitioners and graduate students around the world. Jane’s work builds on Michael Scriven’s contributions to the logic and methodology of evaluation, combined with concepts and techniques from utilization-focused and theory-based evaluation, and translated into concrete, easy-to-follow practical methodologies that can be applied in real-world settings.

Session 3: Actionable Answers
Scheduled: Monday and Tuesday, October 22 and 23, 9 AM to 4 PM
Level: Beginner, no prerequisites

4. Logic Models for Program Evaluation and Planning

Many programs fail to start with a clear description of the program and its intended outcomes, undermining both program planning and evaluation efforts. The logic model, as a map of what a program is and intends to do, is a useful tool for clarifying objectives, improving the relationship between activities and those objectives, and developing and integrating evaluation plans and strategic plans.

First, we will recapture the utility of program logic modeling as a simple discipline, using cases in public health and human services to explore the steps for constructing, refining and validating models. Then, we'll examine how to improve logic models using some fundamental principles of "program theory", demonstrate how to use logic models effectively to help frame questions in program evaluation, and show some ways logic models can also inform strategic planning. Both days use modules with presentations, small group case studies, and debriefs to reinforce group work.

You will learn:

Thomas Chapel is the central resource person for planning and program evaluation at the Centers for Disease Control and Prevention and a sought-after trainer. Tom has taught this workshop for the past four years to much acclaim.

Session 4: Logic Models
Scheduled: Monday and Tuesday, October 22 and 23, 9 AM to 4 PM
Level: Beginner, no prerequisites

5. Strategies for Interactive Evaluation Practice

In all of its many forms, evaluation practice requires evaluators to be skilled facilitators of interpersonal interactions. Whether you are completely in charge, working collaboratively with program staff, or coaching individuals conducting their own study, you need to interact with people throughout the course of an evaluation. This workshop will provide theoretical grounding (social interdependence theory, conflict theory, and evaluation use theory) and practical frameworks for analyzing and extending your own practice.

Through presentations, discussion, reflection, and case study, you will learn and experience strategies to enhance involvement and foster positive interaction in evaluation. You are encouraged to bring examples of challenges faced in your own practice to this workshop, which is consistently lauded for its ready applicability to real-world evaluation contexts.

You will learn:

Jean King has over 30 years of experience as an award-winning teacher at the University of Minnesota. As an evaluation practitioner, she has received AEA’s Myrdal award for outstanding evaluation practice. Laurie Stevahn is associate professor of educational leadership at Seattle University with extensive facilitation experience as well as applied experience in evaluation. The two are co-authors of Interactive Evaluation Practice: Mastering the Interpersonal Dynamics of Program Evaluation (Sage, forthcoming) and Needs Assessment Phase III: Taking Action for Change (Sage, 2010).

Session 5: Interactive Practice
Prerequisites: Basic evaluation skills
Scheduled: Monday and Tuesday, October 22 and 23, 9 AM to 4 PM
Level: Intermediate

6. Developmental Evaluation: Applying Complexity Concepts to Enhance Innovation and Use

Developmental evaluation (DE) is especially appropriate for innovative initiatives or organizations in dynamic and complex environments where participants, conditions, interventions, and context are turbulent, pathways for achieving desired outcomes are uncertain, and conflicts about what to do are high. DE supports reality-testing, innovation, and adaptation in complex dynamic systems where relationships among critical elements are nonlinear and emergent. Evaluation use in such environments focuses on continuous and ongoing adaptation, intensive reflective practice, and rapid, real-time feedback. The purpose of DE is to help develop and adapt the intervention (different from improving a model).

This evaluation approach involves partnering relationships between social innovators and evaluators in which the evaluator’s role focuses on helping innovators embed evaluative thinking into their decision-making processes as part of their ongoing design and implementation initiatives. DE can apply to any complex change effort anywhere in the world. Through lecture, discussion, and small-group practice exercises, this workshop will position DE as an important option for evaluation in contrast to formative and summative evaluations as well as other approaches to evaluation.

You will learn:

Michael Quinn Patton is an independent consultant and professor at the Union Institute. An internationally known expert on utilization-focused evaluation, he has based this workshop on his recently published book, Developmental Evaluation: Applying Complexity Concepts to Enhance Innovation and Use (Guilford, 2010).

Session 6: Developmental Evaluation
Scheduled: Monday and Tuesday, October 22 and 23, 9 AM to 4 PM
Level: Beginner

Tuesday Workshops, October 23, 9 AM to 4 PM


9. Propensity Score Matching: Theories and Applications

This workshop is now oversubscribed and no further registrations are being accepted. Please select an alternative.

When randomized designs are infeasible, evaluation researchers often use quasi-experiments or observational data to estimate treatment effects. Propensity score matching, used to improve covariate balance, has been gaining popularity as a method to improve causal inferences.

This workshop will include a review of experimental and non-experimental designs, an overview of commonly used matching methods, and an introduction to using propensity score adjustments. More specifically, we will cover basic theories and principles, a step-by-step demonstration, and the software and syntax used to conduct propensity score matching. Our demonstrations will be done in both SPSS and R. We will provide attendees with a CD containing SPSS and R code and sample data sets.
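To give a concrete sense of what such syntax can look like, here is a minimal, hypothetical R sketch of nearest-neighbor propensity score matching using the MatchIt package (illustrative variable names only; not the presenters' workshop materials):

  # Minimal propensity score matching sketch in R (hypothetical data frame 'dat')
  library(MatchIt)

  # 'treat' is a binary treatment indicator; age, education, and income are covariates
  m_out <- matchit(treat ~ age + education + income,
                   data     = dat,
                   method   = "nearest",   # 1:1 nearest-neighbor matching
                   distance = "glm")       # propensity scores from a logistic model

  summary(m_out)                 # check covariate balance before and after matching
  matched <- match.data(m_out)   # matched sample to carry into the outcome analysis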

You will learn:

Haiyan Bai and MH Clark are experienced facilitators on the faculty at the University of Central Florida. They bring to the session extensive experience in quantitative research methodology and in the development and application of propensity score analysis, having explored both the theoretical and practical applications of this methodology.

Session 9: Propensity Score Matching
Scheduled: Tuesday, October 23, 9 AM to 4 PM
Level: Beginner


10. Focus Group Research: Planning and implementation

As a qualitative research method, focus groups are an important tool to help researchers understand the motivators and determinants of a given behavior. Based on the seminal works of Richard Krueger and David Morgan, this course will provide a practical introduction to planning and conducting focus group research. Beginning with the choice to use focus groups as a methodology, through to writing final reports, you will be equipped with the knowledge to implement focus group research.

This workshop uses a combination of lecture and skill-building through small group work. Participants will be actively encouraged to ask questions throughout the session, as well as to share any insights they have gleaned from previous experience.

You will learn:

Michelle Revels, a consultant with ICF International for 15 years, is an experienced presenter on focus groups. Focus groups are a primary research method in her work; she recently completed a 20-focus-group study on mammography adherence and co-authored a publication in the December 2011 issue of the Journal of Women's Health.

Session 10: Focus Group Research
Scheduled: Tuesday, October 23, 9 AM to 4 PM
Level: Beginner


11: Evaluation 101

Are you a new evaluator or responsible for evaluation in your organization? Yes? Then this workshop is for you. You will learn a framework for designing evaluations, as well as how to identify problems and ensure use of results. A central focus of the workshop will be issues related to cultural competency and the use of standards for ensuring high quality evaluations.

Through a series of short didactic presentations followed by small group work, this workshop will cover the processes of thinking about, planning, implementing, and reporting an evaluation. You will be encouraged to explore the topic from your own perspective and apply the material within your own context.

You will learn:

Donna Mertens, a faculty member at Gallaudet University and an active evaluator, recently authored an introductory textbook for evaluation courses titled Program Evaluation Theory and Practice: A Comprehensive Guide (with Amy Wilson, Guilford Press, 2012). She has held long-term leadership positions within AEA and has 30 years of experience as a presenter in program evaluation.

Session 11: Evaluation 101
Scheduled: Tuesday, October 23, 9 AM to 4 PM
Level: Beginner


12. Advanced Topics in Systems Thinking: Applications in Evaluation

This workshop is now oversubscribed and no further registrations are being accepted. Please select an alternative.

Systems thinking can help evaluators understand the world – in all its diversity – in ways that are practical, comprehensive, and wise. A systems approach is particularly useful for making sense of the complex and sometimes messy situations we encounter in our practice as evaluators. This year’s advanced systems thinking workshop focuses on how three systems concepts—(1) nested reality, (2) dynamics, and (3) deep structures—can be used to address complex situations and ecologies.

Through a combination of mini-lectures, structured practice and group discussion, you will learn tools for working in complex ecological contexts that account for system dynamics and address causality in systemic change.

You will learn:

Margaret Hargreaves, a senior health researcher at Mathematica Policy Research, has taught program evaluation methods at the graduate level and is currently serving as program co-chair for the Systems in Evaluation Topical Interest Group. Janice Noga, an independent consultant with Pathfinder Evaluation and Consulting, has taught graduate level courses in statistics, research methods, human learning and classroom assessment and evaluation.

Session 12: Advanced Systems Thinking
Prerequisites: Experience in designing and implementing evaluations and some knowledge of systems theory
Scheduled: Tuesday, October 23, 9 AM to 4 PM
Level: Intermediate


13. Grant Writing Skills for Evaluators

Evaluators are often called on to write the “Evaluation Capacity” section of a grant proposal. In today’s environment of accountability, grant proposals are rated on the effectiveness of the evaluative methods outlined in the grant narratives. Grant funders expect that evaluative measures will assess to what extent the program objectives have been met, and to what extent those results can be attributed to the project. A strong proposal will include evidence that inputs, activities, outputs, and short-term and long-term outcomes all relate to the project and the target population served—as well as offer ways to track and measure the project!

This hands-on workshop will use mini-lectures, small group activities/exercises, and discussions to offer a pragmatic approach to addressing these various evaluation components of grant proposals.

You will learn:

Catherine Dunn Kostilnik is president of the Center for Community Studies, Inc., an evaluation and grant writing firm. She spent 14 years at LaGrange College doing community capacity building work, evaluation and grant writing and has a total of 30 years of experience in non-profit management, evaluation, strategic planning and grant writing. An experienced facilitator, she often trains on the essential skills for grant writers.

Session 13: Grant Writing
Scheduled: Tuesday, October 23, 9 AM to 4 PM
Level: Beginner


14: Equity-Focused Evaluation: How to design and manage evaluations of equitable development interventions

The push for a stronger focus on equity in human development is gathering momentum at the international level. The premise of achieving equitable development results is increasingly supported by United Nations reports and independent analysis, leading more national policies and international alliances to adopt equity as a focus. This shift poses important challenges, and opportunities, to the evaluation function. How can one strengthen the capacity of Governments, organizations and communities to evaluate the effect of interventions on equitable outcomes for marginalized populations? What are the evaluation questions for assessing an intervention’s impact on equity?

The workshop will equip you to address the methodological implications of designing, conducting, managing, and using equity-focused evaluations. Using interactive discussion and small group work, this workshop will distribute and explore the content of UNICEF’s manual on this subject.

You will learn:

Marco Segone is the coauthor of the UNICEF manual on Equity-Focused Evaluation and works in their Evaluation Office. He has 20 years of experience in facilitating workshops for UN, Government and civil society staff across Latin America, Africa, Asia and Eastern Europe. His webinar on this topic was supported by the Rockefeller Foundation and Claremont University.

Session 14: Equity-Focused Evaluation
Prerequisites: Basic methods of development evaluation
Scheduled: Tuesday, October 23, 9 AM to 4 PM
Level: Intermediate


15: Using Stories in Evaluation

Stories are effective in communicating evaluation results. Stories are remembered and convey emotional impact. Unfortunately, the story has been undervalued and largely ignored as a research and reporting procedure. Stories are sometimes regarded with suspicion because of the haphazard manner in which they are captured or the cavalier description of what the story depicts. Stories, or short descriptive accounts, have considerable potential for providing insight, increasing understanding of how a program operates, and communicating evaluation results.

Through short lecture, discussion, demonstration, and hands-on activities, this workshop explores effective strategies for discovering, collecting, analyzing and reporting stories that illustrate program processes, benefits, strengths or weaknesses.

You will learn:

Richard Krueger, professor at the University of Minnesota, has taught and written about qualitative research for 25 years. He is a past president of AEA. He has taught this particular workshop for AEA and at six universities across the United States. He recently wrote a chapter on the use of stories in evaluation in the Handbook of Practical Program Evaluation, edited by Wholey, et al.

Session 15: Using Stories
Prerequisites: Experience in individual and group interviewing, qualitative methods
Scheduled: Tuesday, October 23, 9 AM to 4 PM
Level: Intermediate


16. Experimental and Quasi-experimental Research Design for Evaluators

The principles of research design provide the conceptual and structural framework for planning and executing cause-probing studies. Attendees will learn the theoretical and methodological foundations of design, with an emphasis on principles that lead to valid causal inference and generalization. The primary focus will be on quasi-experimental studies.

The workshop will include review and discussion of case studies drawn from current literature and facilitators' evaluation experience. The course has a secondary focus of encouraging discussion among practicing evaluators regarding the ethical and economic issues that underlie and affect decision-making in all cause-probing studies.

You will learn:

Chris Coryn is Director of the Interdisciplinary Ph.D. in Evaluation at Western Michigan University and Associate Professor. He has published more than 80 scholarly, peer-reviewed articles—many on the topic of research design, statistical methods and measurement. He has served as Principal Investigator for numerous research and evaluation grants from institutions such as the National Science Foundation and National Institutes of Health.

Session 16: Experimental and Quasi-Experimental Design
Prerequisites: Basic applied statistics and research methods of evaluation
Scheduled: Tuesday, October 23, 9 AM to 4 PM
Level: Intermediate


17: Getting Started: Introductory consulting skills for evaluators

Program evaluators who are thinking about going out on their own will find that independent consulting can be both challenging and intimidating unless they have the simple but important skills required to be successful. This practical workshop is based on a synthesis of the management consulting literature, evaluation and applied research processes, and entrepreneurial and small business skills.

Through lecture, anecdote, discussion, small-group exercises, and independent reflection, this workshop will help you solve problems and develop strategies for action. Samples, worksheets, insider tips, trade secrets, and personal anecdotes will be provided to help you address your unique business issues. You will leave this workshop with a clearer understanding of what it takes to be an independent consultant.

You will learn:

Gail Barrington has many years of practical experience as an independent consultant. She founded Barrington Research Group, Inc. in 1985 and has conducted over 100 program evaluation and applied research studies. Her recent book, Consulting Start up & Management: A Guide for Evaluators & Applied Researchers, (SAGE, 2012) is based on the many workshops on consulting skills that she has offered over the years. In 2008, she won the Canadian Evaluation Society award for her Contribution to Evaluation in Canada.

Session 17: Introductory Consulting Skills
Scheduled: Tuesday, October 23, 9 AM to 4 PM
Level: Beginner


Wednesday Workshops, October 24, 8 AM to 3 PM


19. How to Prepare an Evaluation Dissertation Proposal

Developing an acceptable dissertation proposal often seems more difficult than conducting the actual research. Further, proposing an evaluation as a dissertation study can raise faculty concerns of acceptability and feasibility. This workshop will lead you through a step-by-step process for preparing a strong, effective dissertation proposal with special emphasis on the evaluation dissertation.

The workshop will cover such topics as the nature, structure, and multiple functions of the dissertation proposal; how to construct a compelling argument; how to develop an effective problem statement and methods section; and how to provide the necessary assurances to get the proposal approved. Practical procedures and review criteria will be provided for each step. The workshop will emphasize applying the knowledge and skills taught to your own dissertation situation through the use of an annotated case example, multiple self-assessment worksheets, and several opportunities for questions.

You will learn:

Nick L Smith is the co-author of How to Prepare a Dissertation Proposal (Syracuse University Press) and a past-president of AEA. He has taught research and evaluation courses for over 20 years at Syracuse University and is an experienced workshop presenter. He has served as a dissertation advisor to multiple students and is the primary architect of the curriculum and dissertation requirements in his department.

Session 19: Evaluation Dissertation
Scheduled: Wednesday, October 24, 8 AM to 3 PM
Level: Beginner


20: Empowerment Evaluation

Empowerment Evaluation builds program capacity and fosters program improvement. It teaches people to help themselves by learning how to evaluate their own programs. The basic steps of empowerment evaluation include: 1) establishing a mission or unifying purpose for a group or program; 2) taking stock - creating a baseline to measure future growth and improvement; and 3) planning for the future - establishing goals and strategies to achieve goals, as well as credible evidence to monitor change.

Can you see yourself in the role of the evaluator as coach or facilitator of an empowerment evaluation? Through lecture, activities, demonstration, and case examples, you will be introduced to the steps of empowerment evaluation and the tools to facilitate the approach.

You will learn:

David Fetterman, President & CEO of Fetterman and Associates, is a past-president of AEA and recipient of AEA awards in both evaluation theory and practice. He has taught or facilitated empowerment evaluation workshops worldwide, ranging from Australia to Japan and Brazil to Israel. He is the author of three books on the topic and numerous articles. His fourth book, Empowerment Evaluation in the Digital Villages: Hewlett Packard’s $15 Million Race Toward Social Justice (Stanford University Press), is in press.

Session 20: Empowerment Evaluation
Scheduled: Wednesday, October 24, 8 AM to 3 PM
Level: Beginner


21. Logic Models Beyond the Traditional View: Metrics, methods, format and stakeholders

When should we use (or not use) logic models? What kind of information can we put in logic models? What is the value of different forms and scales of models for the same program? What different uses do logic models have across a program's life cycle? What kinds of relationships can we represent? What are the relationships between logic models, metrics, and methodology? How can we manage multiple uses of logic models -- evaluation, planning, advocacy; and explanation versus prediction? How can we peg the detail in a model to our actual state of knowledge about a program? How does graphic design relate to information richness, and why does it matter? What choices do evaluators have for working with stakeholders to develop and use logic models?

Evaluators need to know how to respond to these questions. Through lecture and discussion, this workshop will endeavor to provide answers.

You will learn:

Jonathan (Jonny) Morell is Director of Evaluation at the Fulcrum Corporation. He has produced and delivered a wide range of training workshops to professional associations, government, and private sector settings. He employs logic models throughout his evaluation practice and consulting work. The issue of program logic, and how to represent that logic, figures prominently in his recent book Evaluation in the Face of Uncertainty.

Session 21: Logic Models Beyond
Prerequisites: Experience constructing logic models
Scheduled: Wednesday, October 24, 8 AM to 3 PM
Level: Intermediate


22. Introduction to GIS (Geographic Information Systems) and Spatial Analysis in Evaluation

This workshop introduces Geographic Information Systems (GIS) and spatial analysis concepts and their uses in environmental/ecological and community health/public health program and policy evaluation. It defines core concepts, covers the steps for undertaking vector and raster mapping projects, and addresses the challenges of doing GIS cost-effectively. It will provide information about the pros and cons of a variety of mapping software (some free) and how to obtain base maps and data for map contents.

Using case study examples from environmental/ecological and community health/public health content areas, we will demonstrate and you will practice designing a GIS project, setting up spatial analysis, and using GIS approaches to evaluate programs or policy initiatives. The workshop will discuss ways to involve evaluation stakeholders (e.g., staff, program clients) in mapping projects. We’ll use case studies, demonstration, small group exercises, and traditional presentation to introduce attendees to the value and use of GIS for evaluation; however, please note that this is not taught in a computer lab and there will be no hands-on work with GIS software.

You will learn:

Both Arlene Hopkins and Stephen Maack are experienced university teachers and regularly present to adults in professional venues, in conference sessions and as workshop facilitators. Stephen Maack has a Ph.D. in anthropology with a research specialty in social change. Ms. Hopkins has an MA in education, and is also a former K-12 teacher. Both use GIS in their work as practicing evaluators.

Session 22: Introduction to GIS
Scheduled: Wednesday, October 24, 8 AM to 3 PM
Level: Beginner


23. Data Dashboard Design for Robust Evaluation and Monitoring

This workshop is now oversubscribed and no further registrations are being accepted. Please select an alternative.

Evaluators and their clients are increasingly using dashboards for monitoring and evaluation in the social sectors. Designed well, dashboards can be a powerful communication tool for informing stakeholder decision-making and action that improve program processes and performance. This workshop teaches the practical skills needed to transform dashboards from glitzy chart junk into effective means for enlightenment.

This hands-on session will provide a review and critique of a variety of dashboard examples from industry and the social sectors, software (Excel and Tableau) demonstrations, and an opportunity to design a dashboard from scratch. Common dashboard mistakes, issues of aesthetics and defining measures will also be discussed.

You will learn:

Veronica Smith, principal at independent evaluation and research firm data2insight LLC, has more than 20 years of design experience including 5 years of designing and building social sector dashboards. Her clients include Howard University's NIH Clinical and Translational Science Award Evaluating and Tracking Group, Powerful Schools, Seattle Public Utilities and The Nature Conservancy. Ms. Smith has facilitated professional development workshops for 20 years and recently presented an AEA Coffee Break webinar series on this topic.

Session 23: Data Dashboard Design
Prerequisites: Basic knowledge of dashboards, Excel, quantitative data analysis and graph creation
Scheduled: Wednesday, October 24, 8 AM to 3 PM
Level: Intermediate


24: Using Effect Size and Association Measures in Evaluation

Improve your capacity to understand and apply a range of measures including: standardized measures of effect sizes from Cohen, Glass, and Hedges; Eta-squared; Omega-squared; the Intraclass correlation coefficient; and Cramer’s V. Answer the call to report effect size and association measures as part of your evaluation results. Together we will explore how to select the best measures, how to perform the needed calculations, and how to analyze, interpret, and report on the output in ways that strengthen your overall evaluation.

Through mini-lecture, hands-on exercises, and computer-based demonstration, you will improve your understanding of the theoretical foundation and computational procedures for each measure as well as ways to identify and correct for bias.
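As a reminder of what two of the measures listed above look like, here are the standard textbook definitions of Cohen's d (for a two-group comparison) and eta-squared (for ANOVA); these are general formulas, not material taken from the workshop:

  d = \frac{\bar{X}_1 - \bar{X}_2}{s_{\text{pooled}}},
  \qquad
  s_{\text{pooled}} = \sqrt{\frac{(n_1 - 1)s_1^2 + (n_2 - 1)s_2^2}{n_1 + n_2 - 2}},
  \qquad
  \eta^2 = \frac{SS_{\text{between}}}{SS_{\text{total}}}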

You will learn:

Jack Barnette is professor of biostatistics at the University of Colorado School of Public Health. He has taught courses in statistical methods, program evaluation, and survey methodology for more than 30 years. He has been conducting research and writing on this topic for more than ten years. Jack is a regular facilitator both at AEA's annual conference and the CDC/AEA Summer Evaluation Institute. He was awarded the Outstanding Commitment to Teaching Award by the University of Alabama and is a member of the ASPH/Pfizer Academy of Distinguished Public Health Teachers.

Session 24: Using Effect Size
Prerequisites: Univariate statistics through ANOVA and an understanding of the use of confidence levels
Scheduled: Wednesday, October 24, 8 AM to 3 PM
Level: Advanced


25: Utilization-Focused Evaluation

Evaluations should be useful, practical, accurate and ethical. Utilization-Focused Evaluation is a process that meets these expectations and promotes use of evaluation from beginning to end. With a focus on carefully targeting and implementing evaluations for increased utility, this approach encourages situational responsiveness, adaptability and creativity. This training is aimed at building capacity to think strategically about evaluation and increase commitment to conducting high quality and useful evaluations.

Utilization-Focused evaluation focuses on the intended users of the evaluation in the context of situational responsiveness with the goal of methodological appropriateness. An appropriate match between users and methods should result in an evaluation that is useful, practical, accurate, and ethical, the characteristics of high quality evaluations according to the profession's standards. With an overall goal of teaching you the process of Utilization-Focused Evaluation, the session will combine lectures with concrete examples and interactive case analyses.

You will learn:

Michael Quinn Patton is an independent consultant and professor at the Union Institute. An internationally known expert on utilization-focused evaluation, he has based this workshop on the newly completed fourth edition of his best-selling evaluation text, Utilization-Focused Evaluation: The New Century Text (SAGE).

Session 25: Utilization-Focused Evaluation
Scheduled: Wednesday, October 24, 8 AM to 3 PM
Level: Beginner


26: Longitudinal Data Analysis

This workshop is now oversubscribed and no further registrations are being accepted. Please select an alternative.

Many evaluation studies make use of longitudinal data, which brings its own set of special problems. This workshop reviews how traditional and modern methods for the analysis of change address the challenges presented by longitudinal data sets. Specifically, you will receive an introduction to SEM-based latent growth curve modeling (LGM). In contrast to most traditional methods, which are restricted to the analysis of mean changes, LGM allows the investigation of unit-specific changes over time.

Using a mixture of PowerPoint presentation, group discussion and exercises, you will learn how to specify, estimate and interpret growth curve models. Recent advancements in the field will be covered, including multiple group analyses, the inclusion of time-varying covariates, and cross sequential designs. Detailed directions for model specification will be given and all analyses will be illustrated by practical examples.
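For readers who want a sense of what model specification looks like in practice, here is a minimal, hypothetical sketch of a linear latent growth curve model in the R package lavaan (made-up variable names y1-y4 for four measurement occasions; not the presenters' materials):

  # Linear latent growth curve model for four repeated measures (hypothetical data frame 'dat')
  library(lavaan)

  lgm <- '
    i =~ 1*y1 + 1*y2 + 1*y3 + 1*y4   # latent intercept (loadings fixed to 1)
    s =~ 0*y1 + 1*y2 + 2*y3 + 3*y4   # latent linear slope (loadings encode time)
  '
  fit <- growth(lgm, data = dat)
  summary(fit, fit.measures = TRUE)   # means/variances of intercept and slope, plus fit indices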

You will learn:

Manuel C Voelkle is a research scientist at the Max Planck Institute in Berlin, Germany. He teaches courses on advanced multivariate data analysis and research design and research methods. Werner W. Wittmann is professor of psychology at the University of Mannheim, where he heads a research and teaching unit specializing in research methods, assessment and evaluation research.

Session 26: Longitudinal Data Analysis
Prerequisites: Familiarity with structural equation models (SEM) and regression analytic techniques
Scheduled: Wednesday, October 24, 8 AM to 3 PM
Level: Intermediate


27: Multilevel Models in Program and Policy Evaluation

Multilevel models open the door to understanding the inter-relationships among nested structures and the ways evaluands change across time. This workshop will demystify multilevel models and present them at an accessible level, stressing their practical applications in evaluation.

Through discussion and hands-on demonstrations, the workshop will address four key questions: When are multilevel models necessary? How can they be implemented using standard software? How does one interpret multilevel results? What are recent developments in this area?
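To illustrate the "standard software" point, a two-level random-intercept model can be specified in a single line with the R package lme4; the sketch below uses hypothetical variable names and is not drawn from the facilitator's materials:

  # Random-intercept model: students (level 1) nested within schools (level 2)
  library(lme4)

  fit <- lmer(score ~ treatment + (1 | school), data = dat)
  summary(fit)   # fixed effect of treatment plus the school-level variance component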

You will learn:

  • The basics of multilevel modeling;
  • When to use multilevel models in evaluation practice;
  • Implementation of models using widely available software;
  • The importance of considering multilevel structures in understanding program theory.

Sanjeev Sridharan, of the University of Toronto, has repeatedly taught multilevel models for AEA as well as for the SPSS software company. His recent work on this topic has been published in the Journal of Substance Abuse Treatment, Proceedings of the American Statistical Association and Social Indicators Research. Known for making the complex understandable, his approach to the topic is straightforward and accessible.

Session 27: Multilevel Models
Scheduled: Wednesday, October 24, 8 AM to 3 PM
Level: Beginner


28: The Twelve Steps of Data Cleaning: Strategies for dealing with dirty data

This workshop is now oversubscribed and no further registrations are being accepted. Please select an alternative.

Evaluation data, like a lot of research data, can be messy. Rarely are evaluators given data that is ready to be analyzed. Missing data, coding mistakes, and outliers are just some of the problems that evaluators should address prior to conducting analyses for an evaluation report. Despite its centrality to a robust analysis, data cleaning has received little attention in the literature and the resources that are available are often complex and not user-friendly.

This workshop will review our 12-step process for cleaning evaluation data. Through an examination of data samples for each step and small group work, you will feel more confident about what decisions need to be made regarding your data prior to conducting analyses to address your evaluation questions.
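By way of illustration only, the generic R screening checks below (hypothetical variable names; not the presenters' 12-step process) show the kinds of issues the workshop addresses:

  # Common data-screening checks on a hypothetical data frame 'dat'
  colSums(is.na(dat))                           # how much data is missing per variable
  dat$score[dat$score %in% c(-99, 999)] <- NA   # recode sentinel missing-value codes
  z <- as.numeric(scale(dat$score))             # standardize to spot extreme values
  dat$score_flag <- abs(z) > 3                  # flag potential univariate outliers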

You will learn:

Jennifer Ann Morrow is an Assistant Professor of Evaluation, Statistics, and Measurement at the University of Tennessee; she has over 14 years of experience teaching statistics, research methods, and program evaluation, as well as over 10 years of experience conducting workshops on a variety of data analysis topics. Jennifer and her co-presenter, Gary Skolits (an Associate Professor at the University of Tennessee), have recently submitted their 12-step data cleaning process for publication.

Session 28: Data Cleaning
Prerequisites: Basic quantitative analysis
Scheduled: Wednesday, October 24, 8 AM to 3 PM
Level: Intermediate


29: Critical Skills in Focus Group Interviewing

A successful focus group requires considerable skill. The focus group interview presents many challenges and requires artful diplomacy, self discipline and research skills. This workshop offers both an in-depth description of the critical skills needed for focus group mastery and suggestions for developing those skills. The skill areas include: planning the study, selecting the appropriate participants, recruiting participants, hosting the group, introducing the focus group, developing powerful questions, capturing data, analyzing results, and reporting the findings.

Through mini-lecture, examples and demonstration attendees will spend 30 minutes carefully examining each skill. Group discussion and sharing of previous experience(s) will enrich the workshop.

You will learn:

Richard Krueger is a senior fellow and professor emeritus at the University of Minnesota. He has conducted thousands of focus groups over the past 25 years and has written extensively on focus group interviewing. He is also a past president of AEA. Co-presenter, Mary Anne Casey, is an independent evaluation consultant specializing in helping organizations gather, analyze and use information to increase their effectiveness. She has taught professional in-service and graduate courses at the University of Minnesota and the University of South Florida.

Session 29: Focus Group Interviewing
Prerequisites: Experience planning and conducting focus groups
Scheduled: Wednesday, October 24, 8 AM to 3 PM
Level: Intermediate


30: RealWorld Impact Evaluation: Addressing methodological weaknesses and threats to validity

This workshop is now oversubscribed and no further registrations are being accepted. Please select an alternative.

How can you conduct adequately valid impact evaluations under real world circumstances? There are influential individuals and institutions who continue to promote experimental research designs and methods as the “gold standard.” However, attempting to implement a randomized control trial (RCT) on development programs in society is often inappropriate or, at least, infeasible. Very frequently projects are started without conducting a baseline study that is comparable with a pre-planned endline evaluation. Even more frequently, it would be impractical or even unethical to randomly select individuals or other units of analysis into ‘treatment’ and ‘control’ groups.

Welcome to the real world! Through participatory processes we will explore techniques that help evaluators and clients ensure the best quality evaluation under real-life constraints like those described above and others. You will learn about the approaches in the 2nd edition of the RealWorld Evaluation book, and from the extensive international experiences of the authors.

You will learn:

Jim Rugh has more than 48 years of experience in international development, 32 of them as a professional evaluator, mainly of international NGOs. Michael Bamberger spent a decade working with NGOs in Latin America and has worked on evaluations with the World Bank. Together with Linda Mabry, they co-authored RealWorld Evaluation: Working Under Budget, Time, Data and Political Constraints in 2006; the second edition, published last year, will be the backbone of this workshop.

Session 30: RealWorld Impact Evaluations
Prerequisites: Experience in conducting evaluations and facing real world constraints/challenges
Scheduled: Wednesday, October 24, 8 AM to 3 PM
Level: Advanced


31: Reality Counts: Participatory methods for engaging vulnerable and under-represented persons in monitoring and evaluation

Many evaluators find their toolkit of methods is inappropriate or ineffective in gathering accurate, reliable, valid and usable data from persons who may feel vulnerable, powerless or disenfranchised. Typical evaluation processes may unintentionally silence, or leave out, those with particular attributes such as low-literacy or numeracy skills, gender, socio-economic status, cultural identity, HIV status, or immigration status. This workshop will cover methods from participatory rural appraisal and the participatory learning and action family of methodologies which work to engage and empower program participants throughout the entire program and evaluation cycle.

Attendees in this highly interactive workshop will learn how to promote evaluations that are culturally relevant and come away with several techniques and resources – such as community mapping, pocket chart voting, wealth ranking, cartoons, and seasonal calendars – that can be adapted and used in different contexts and disciplines with vulnerable persons throughout the world.

You will learn:

Tererai Trent has facilitated participatory monitoring and evaluation techniques for more than 18 years. From Zimbabwe, she brings together viewpoints from both the developed and developing world. As Heifer International’s PM&E Director (2002-2010) she trained field staff in more than 15 countries. Mary Crave and Kerry Zaleski, University of Wisconsin-Extension, along with Trent, have worked with vulnerable persons across diverse cultures both internationally and domestically. Together, they have worked in over 55 countries in health, agriculture, livelihoods, education, community development, HIV/AIDS, and environment.

Session 31: Reality Counts: Participatory Methods
Prerequisites: Experience with participatory methods and work in limited-resource environments with under-represented communities
Scheduled: Wednesday, October 24, 8 AM to 3 PM
Level: Intermediate


32: Qualitative Research Strategies for Deeper Understanding

In an ideal world, qualitative research provides rich insights and deep understanding of respondents' perceptions, attitudes, behaviors, and emotions. However, it is often difficult for research respondents to remember and report their experiences and to access their emotions. This practical workshop offers cutting-edge approaches to conducting qualitative interviews and focus groups that have proven their effectiveness across a wide range of topics. These strategies engage research participants and help them articulate more thoroughly, leading to deeper insights for the researcher.

Through demonstration and hands-on participation, we will examine techniques that help respondents to reconstruct their memories—such as visualization, mind-mapping, diaries and storytelling; or to articulate their emotions through metaphorical techniques such as analogies, collage and photo-sort—and to explore different perspectives through "word bubbles" and debate.

You will learn:

Deborah Potts is a qualitative researcher who has led thousands of focus groups and one-on-one interviews. She is co-author of Moderator to the Max: A full-tilt guide to creative, insightful focus groups and depth interviews. Deborah is the senior qualitative researcher at the research company, InsightsNow, Inc. and has taught workshops in conducting qualitative research for nearly two decades.

Session 32: Qualitative Research Strategies
Prerequisites: Basic interviewing skills
Scheduled: Wednesday, October 24, 8 AM to 3 PM
Level: Intermediate


33: Advanced Evaluation Methods, Concepts & Problems

In this workshop, facilitated using the Highly Interactive Protocol (HIP), you will become part of a conversation revolving around the latest theories, concepts, and current disputes in the field of evaluation. Offer knowledge of your own and/or submit your questions for discussion on the collaboratively agreed-upon agenda. Take this opportunity to live on the cutting edge of our profession.

Topics that are likely to be addressed include:

You will learn:

Michael Scriven has served as the President of AEA and has taught evaluation in schools of education, departments of philosophy and psychology, and to professional groups, for 45 years. A senior statesman in the field, he has authored over 450 publications focusing on evaluation, and received AEA's Paul F Lazarsfeld Evaluation Theory award in 1986.

Session 33: Advanced Evaluation Methods
Prerequisites: Some experience doing evaluation and knowledge of evaluation literature
Scheduled: Wednesday, October 24, 8 AM to 3 PM
Level: Advanced


34: Theory-Driven Evaluation for Assessing Effectuality, Viability and Transferability

Designed for evaluators who already have basic training and experience in program evaluation, this workshop will expand your knowledge of the theory-driven evaluation approach and strengthen your practical skills. We'll explore the conceptual framework of program theory and its related evaluation taxonomy, which will facilitate effective communication between evaluators and stakeholders regarding stakeholders' evaluation needs and the evaluation options to address those needs.

The workshop starts with an introduction to an evaluation taxonomy that encompasses the full program cycle, including program planning, initial implementation, mature implementation, and outcomes. We'll then focus on how program theory and theory-driven evaluation are useful in the assessment and improvement of a program at each of these stages. The workshop also covers recent developments in the integrative validity model and the bottom-up approach for enhancing the usefulness of evaluation.

You will learn:

Huey Chen is Director of the Center for Research and Evaluation for Education and Human Services and Professor of Health and Nutrition Sciences at Montclair State University. Previously, he was a Senior Evaluation Scientist at the Centers for Disease Control and Prevention (CDC). He has taught workshops, as well as undergraduate and graduate evaluation courses in universities. His 1990 book, Theory-Driven Evaluations, is considered the classic text for understanding program theory and theory-driven evaluation. His 2005 book, Practical Program Evaluation: Assessing and Improving Planning, Implementation, and Effectiveness, provides a major expansion of the scope and usefulness of theory-driven evaluations.

Session 34: Theory-Driven Evaluation
Prerequisites: Basics of evaluation
Scheduled: Wednesday, October 24, 8 AM to 3 PM
Level: Intermediate


35: Transformative Mixed Methods Evaluation

What are the implications of the transformative paradigm for the use of mixed methods directly focused on the furtherance of social justice? How can you apply the methodological implications of the transformative paradigm in the design of an evaluation? What approaches are useful for an evaluator to undertake a transformative mixed methods evaluation in diverse contexts? How can transformative mixed methods be applied to increase the probability of social justice goals being achieved? What sampling and data collection strategies are appropriate? What does it mean to address the myth of homogeneity?

We'll explore the answers to these and other questions, and the implications of positioning oneself as an advocate for social justice and human rights in an evaluation context. We'll demonstrate, discuss, and share examples of evaluations that use transformative mixed methods that focus on dimensions of diversity such as race/ethnicity, religion, language, gender, indigenous status, and disability. The philosophical assumptions of the transformative paradigm will be used to derive methodological implications that can be applied to the design of evaluations that seek to further social justice in marginalized communities.

You will learn:

Donna Mertens is a Past President of the American Evaluation Association who teaches evaluation methods and program evaluation to deaf and hearing graduate students at Gallaudet University in Washington, D.C. Mertens recently authored Transformative Research and Evaluation (Guilford).

Session 35: Transformative Mixed Methods
Prerequisites: Knowledge of basic evaluation practice
Scheduled: Wednesday, October 24, 8 AM to 3 PM
Level: Intermediate


36: Applications of Multiple Regression for Evaluators: Mediation, moderation and more

Multiple regression is a powerful and flexible tool that has wide applications in evaluation and applied research. Regression analyses are used to describe relationships, test theories, make predictions with data from experimental or observational studies, and model linear or nonlinear relationships.

We'll explore selecting models that are appropriate to your data and research questions, preparing data for analysis, running analyses, interpreting results, and presenting findings to a nontechnical audience. The facilitator will demonstrate applications from start to finish with live SPSS and Excel, and then you will tackle multiple real-world case examples in small groups. Detailed handouts include explanations and examples that can be used at home to guide similar applications.
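Although the facilitator demonstrates in SPSS and Excel, the same models can be sketched in R. The hypothetical example below (made-up variables x, m, and y; not the workshop materials) shows moderation as an interaction term and the classic regression steps for mediation:

  # Moderation: does m change the strength of the x -> y relationship?
  mod_fit <- lm(y ~ x * m, data = dat)   # main effects of x and m plus their interaction
  summary(mod_fit)

  # Mediation via the classic regression steps
  a_path <- lm(m ~ x, data = dat)        # effect of x on the mediator m
  b_path <- lm(y ~ x + m, data = dat)    # effect of m on y, controlling for x
  indirect <- coef(a_path)["x"] * coef(b_path)["m"]   # point estimate of the a*b (indirect) path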

You will learn:

Dale Berger, of Claremont Graduate University, is a lauded teacher of workshops and classes in statistical methods. Recipient of the outstanding teaching award from the Western Psychological Association, he is also the author of "Using Regression Analysis" in The Handbook of Practical Program Evaluation.

Session 36: Applications of Multiple Regressions
Prerequisites: Basic inferential and descriptive statistics, including correlation and regression; familiarity with SPSS
Scheduled: Wednesday, October 24, 8 AM to 3 PM
Level: Intermediate


Wednesday Morning Workshops, October 24, 8 AM to 11 AM


38: Acknowledging the “Self” in Developing Cultural Competency

“Cultural competence is not a state at which one arrives; rather, it is a process of learning, unlearning, and relearning.” This is the first premise given in the AEA Public Statement on Cultural Competence in Evaluation. This workshop will investigate the concept of Cultural Competence in Evaluation, drawing from the Developmental Model of Intercultural Sensitivity.

Examining our own cultural perspective, including those subtle aspects of affect and behavior embedded within our culture, can be challenging. Increase your ability to recognize the multiple lenses through which you operate and immediately apply this awareness to evaluation case studies. Lecture and small group activities will help you build a strong understanding of Cultural Competence as it applies to your work.

You will learn:

Osman Ozturgut, assistant professor at the University of the Incarnate Word, has taught workshops in Turkey, China and across the United States. He is a member of the AEA Cultural Competence in Evaluation Dissemination Committee. His recent publications include articles titled Understanding Multicultural Education and Promoting Cultural Competence through Study Abroad Programs for Higher Education Faculty. His co-presenter is Cindy Crusto, associate professor of psychology in psychiatry at the Yale School of Medicine, and Chair of the AEA Cultural Competence Statement Dissemination Working Group.

Session 38: Acknowledging the “Self”
Scheduled: Wednesday, October 24, 8 AM to 11 AM
Level: Beginner


39: From Metaphor to Model: Mapping complex systems to inform program evaluation

Are you trying to evaluate organizational change efforts? This workshop will give you the tools you need to be able to model an organization change process. You will learn to integrate organizational theory(ies) (on culture, climate, distributed accountability, etc.) with evaluative systems theory using a systems pyramid metaphor of events.

Through a series of hands-on exercises using a constructed case scenario, you will learn how to use the model to frame a dynamic evaluation process that continually examines interactions and interrelationships among systems elements. You will also learn how to structure a reporting process focused on identifying what is working, what needs to change, and the challenges inherent in making such changes.

You will learn:

Janice Noga, an independent consultant with Pathfinder Evaluation and Consulting, has taught graduate level courses in statistics, research methods, human learning, and classroom assessment and evaluation. She is the current chair of the Systems in Evaluation TIG of the AEA.

Session 39: Metaphor to Model
Prerequisites: Experience designing and implementing evaluations, some familiarity with systems thinking
Scheduled: Wednesday, October 24, 8 AM to 11 AM
Level: Intermediate


40: Staying Power: Intermediate Consulting Skills for Evaluators

This workshop is now oversubscribed and no further registrations are being accepted. Please select an alternative.

Evaluators and applied researchers may be well versed in academic research processes, but uncertain about the entrepreneurial and small-business management skills needed to be successful independent consultants. For those aspiring to a thriving practice, this workshop addresses common problems encountered in the day-to-day running of a small consulting business and identifies some of the key issues faced by consultants.

Through discussion, lecture, anecdote, independent reflection, and teamwork, this workshop will help participants problem-solve around the ongoing challenges of a consulting practice. For those looking to fine-tune their current professional practice, a seasoned practitioner, award winner, and author will share lessons from her more than two decades of experience. This course may go hand-in-hand with Getting Started, which is geared toward those contemplating independent consulting.

You will learn:

Gail Barrington is a top-rated trainer and has more than 25 years of practical experience running her own consulting practice, Barrington Research Group, Inc. Her recent book, Consulting Start up & Management: A Guide for Evaluators & Applied Researchers (SAGE, 2012), is based on the many workshops on consulting skills that she has offered over the years. In 2008, she won the Canadian Evaluation Society award for her Contribution to Evaluation in Canada.

Session 40: Intermediate Consulting Skills
Prerequisites: Be in the business of managing a consulting practice, in order to enrich discussion
Scheduled: Wednesday, October 24, 8 AM to 11 AM
Level: Intermediate


Wednesday Afternoon Workshops, October 24, 12 PM to 3 PM


42: Case Study Methods in Evaluation

This workshop is now oversubscribed and no further registrations are being accepted. Please select an alternative.

Case study methods offer evaluators a powerful and flexible design palette for approaching program assessment. While often heavily steeped in qualitative methods, case studies may also include quantitative data or mixed methods. Learn how this approach can produce particularly nuanced and in-depth program information. We will interactively explore the elements of case study design and implementation as they relate to evaluation.

This workshop will take you step by step through the case study method, challenging you to work through the design and implementation of a case study example. Time will also be spent exploring application of the method to participants’ individual examples and experience.

You will learn:

Rita O’Sullivan is Associate Professor at the University of North Carolina and has taught graduate courses in Case Study since 1995. As Director of Evaluation, Assessment, & Policy Connections in the School of Education, she has conducted many case studies in multiple program evaluation contexts. Most recently, she facilitated this presentation for AEA in June 2011.

Session 42: Case Study Methods
Prerequisites: Familiarity with multiple evaluation approaches and designs.
Scheduled: Wednesday, October 24, 12 PM to 3 PM
Level: Intermediate


43: Basics of Program Design: A theory-driven approach

This workshop is now oversubscribed and no further registrations are being accepted. Please select an alternative.

Evaluators often take an active role in program design, and understanding the basics of program design from a theory-driven evaluation perspective can be essential. You will learn the five elements of a basic program design and how they relate to program theory and social science research. A strong program design is an important element in evaluation design. Begin to develop your skill in putting together the pieces of a program with the potential to address social, health, educational, organizational, and other issues.

Mini lectures interspersed with small group activities will help you apply and understand the concepts presented. Examples from evaluation practice will be provided to illustrate main points and key take-home messages and you will receive a handout of further resources.

You will learn:

Stewart Donaldson, professor at Claremont Graduate University, is an experienced facilitator and teacher in Evaluation Theory and Practice, as well as Advanced Application of Program Theory. He has taught courses in these subjects for the Centers for Disease Control and Claremont since 2003. Co-presenter John Gargani is an independent evaluator with 20 years of experience with community-based organizations.

Session 43: Program Design
Scheduled: Wednesday, October 24, 12 PM to 3 PM
Level: Beginner


44: Who Knows?: Engaging laypeople in meaningful, manageable data analysis and interpretation

How can evaluators simultaneously support high-quality data analysis and interpretation and meaningful participation of program participants, staff, and others? This workshop will offer key strategies you can use in your evaluation practice. Learn how to provide targeted, hands-on data analysis and interpretation training and support; develop accessible intermediate data reports; and carefully craft meeting agendas that succeed in evoking high-quality participation and analysis.

We will weave in attendee experiences, provide many take-home tools, and give you a bit of hands-on experience. Finally, the workshop will speak to the question: “I’d like to do more to engage stakeholders, but how do I inspire my clients or colleagues to invest in this?”

You will learn:

  • How to equip all evaluation team members – including “laypeople” – with the skills and confidence to be full participants in the data analysis and interpretation process;

  • How to develop an intermediate data report that is accessible and that promotes deeper analysis and interpretation;

  • How to construct a team process to maximize meaningful participation and quality analysis and interpretation;

  • How to cultivate investment of key decision-makers in engaging an evaluation team in analysis and interpretation.

Kristin Bradley-Bull and Tobi Lippin are external evaluators with New Perspectives Consulting Group. Kristin has taught program evaluation and community assessment at the University of North Carolina at Greensboro, and Tobi has taught evaluation in Duke University’s Non-Profit Management Certificate Program. They are co-facilitating with Tom McQuiston, Senior Associate for Program Research and Evaluation, and Linda Cook, Grant Staff Training Coordinator, at the Tony Mazzocchi Center for Health, Safety and Environmental Education. All four have facilitated and published on participatory evaluation.

Session 44: Engaging laypeople
Prerequisites: Understanding of stakeholder engagement and basic qualitative and quantitative analysis
Scheduled: Wednesday, October 24, 12 PM to 3 PM
Level: Intermediate


45: A Framework for Developing and Implementing a Performance Measurement System of Evaluation

This workshop is now oversubscribed and no further registrations are being accepted. Please select an alternative.

Program funders are increasingly emphasizing the importance of evaluation, often through performance measurement. Developing high-quality project objectives and performance measures is critical to both good funding proposals and successful evaluations. Understanding the relationships between project activities and intended program outcomes (and basing appropriate measures on each) supports the development of sounder evaluation designs, allowing for the collection of higher-quality and more meaningful data.

The workshop will provide examples and ideas on how to implement a performance measurement system within an organization. You will receive a framework, along with practical strategies and planning devices, to use when writing project objectives and measures and when planning evaluations focused on performance measurement. The framework is useful for a wide array of programs assessing impact and evaluating outcomes, for both single-site and multisite studies, and for locally and federally funded projects.

You will learn:

  • To identify and create measurable, feasible, and relevant project objectives related to evaluation;
  • To identify and write high-quality performance measures that are complete, measurable, practical, and pertinent to the corresponding objective;
  • To understand the difference between process and outcome measures and to ensure there is a balance of the two;
  • To see how objectives and performance measures can easily fit into the performance reports required by federal, local, and for-profit funders;
  • To learn how to develop a framework that can be applied to a variety of evaluation settings and that allows for a clear, measurable, and agreed-upon plan.

Courtney Brown and Mindy Hightower King have directed and managed evaluations for local, state, and federal agencies, foundations, and non-profit organizations. Drawing on their extensive experience developing and implementing performance measurement systems, they have provided more than 20 workshops and lectures to program staff and grantees, and individual technical assistance to at least 40 representatives of grant-receiving agencies. Dr. King is the Evaluation Manager at the Indiana Institute on Disability and Community at Indiana University. Dr. Brown is the Director of Organizational Performance and Evaluation at the Lumina Foundation.

Session 45: Performance Measurement Systems
Prerequisites: Basic evaluation skills
Scheduled: Wednesday, October 24, 12 PM to 3 PM
Level: Intermediate