
Professional Development Workshops are hands-on, interactive sessions that provide an opportunity to learn new skills or hone existing ones at Evaluation 2010.

Professional development workshops precede and follow the conference. They differ from sessions offered during the conference itself in at least three ways:

  1. Each is longer (3, 6, or 12 hours) and thus provides a more in-depth exploration of a skill or area of knowledge;
  2. Presenters are paid for their time and are expected to have significant experience both in presenting and in the subject area;
  3. Attendees pay separately for these workshops and are given the opportunity to evaluate the experience.

Sessions are filled on a first-come, first-served basis, and many fill before the conference begins.

Registration:

Registration for professional development workshops is handled through the conference registration form; however, you may register for professional development workshops even if you are not attending the conference itself (still use the regular conference registration form - just uncheck the conference registration box).

Fees:

Workshop registration fees are in addition to the fees for conference registration:

 

                          Members                         Non-members                     Full-time Students
                          Early     Standard  On Site     Early     Standard  On Site     Early     Standard  On Site
                          (<Oct 1)  (<Nov 2)  (>=Nov 2)   (<Oct 1)  (<Nov 2)  (>=Nov 2)   (<Oct 1)  (<Nov 2)  (>=Nov 2)
Conference Registration   $155      $195      $245        $235      $275      $325        $80       $90       $100
Two Day Workshop          $300      $320      $360        $400      $440      $480        $160      $180      $200
One Day Workshop          $150      $160      $180        $200      $220      $240        $80       $90       $100
Half Day Workshop         $75       $80       $90         $100      $110      $120        $40       $45       $50
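For example, an AEA member who registers early (before October 1) for both the conference ($155) and a one-day workshop ($150) would pay $305 in total; a non-member registering on site for the same combination would pay $325 + $240 = $565.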

 

Full Sessions:

Sessions that are closed because they have reached their maximum attendance will be clearly marked below the session name. No further registrations will be accepted for full sessions and we do not maintain waiting lists. Once sessions are closed, they will not be re-opened.

Browse by Time Slot:

Two Day Workshops, Monday and Tuesday, November 8 and 9, 9 AM to 4 PM

(1) Qualitative Methods; (2) Quantitative Methods; (3) Evaluation 101; (4) Logic Models; (5) Participatory Evaluation; (6) Consulting Skills; (7) Developmental Evaluation; (8) Building Evaluation Capacity

One Day Workshops, Tuesday, November 9, 9 AM to 4 PM

(9) Reconceptualizing Evaluation; (10) Evaluation Design; (11) RealWorld Evaluation; (12) Advanced Focus Group Moderation; (13) Introduction to GIS; (14) Systems Thinking; (15) Creating Surveys; (16) Longitudinal Analysis

One Day Workshops, Wednesday, November 10, 8 AM to 3 PM

(17) Collaborative Evaluations; (18) Needs Assessment; (19) Logic Models; (20) Transformative Mixed Methods; (21) Evaluation Dissertation; (22) Utilization-Focused Evaluation; (23) Concept Mapping; (24) Evaluating Organizational Collaboration; (25) Effect Size and Association Measures; (26) Enhanced Group Facilitation; (27) Multilevel Models; (28) Actionable Answers; (29) Theory-driven Evaluation; (30) Racism in Evaluation; (31) Evaluation Consulting Contracts; (32) Survey Design 101; (33) Operations Research; (34) Integrating Systems Concepts; (35) Social Network Analysis; (36) Propensity Score Analysis; (37) Qualitative Tools

Half Day Workshops, Wednesday, November 10, 8 AM to 11 AM

(38) Empowerment Evaluation; (39) Fidelity of Implementation; (40) Moderating Focus Groups; (41) Nonparametric Statistics; (42) Interpersonal Validity; (43) Impact Evaluation

Half Day Workshops, Wednesday, November 10, 12 PM to 3 PM

(44) Effective Reporting; (45) Integrated Data Analysis; (46) Using Theory to Improve Practice; (47) Gender Responsive Evaluation; (48) Evaluating RTD Projects/Programs; (49) Cost-Effectiveness Analysis

Half Day Workshops, Sunday, November 14, 9 AM to 12 PM

(50) Cost-Inclusive Evaluation; (51) Hearing Silenced Voices; (52) Participatory Program Tools for M&E; (53) Purposeful Program Theory; (54) Data Prep & Mgmt


Two Day Workshops, Monday and Tuesday, November 8 and 9, 9 AM to 4 PM


1. Qualitative Methods in Evaluation

New and experienced qualitative researchers alike often ask: "Is my approach to qualitative research consistent with core principles of the method?" Evaluators who integrate qualitative methods into their work are responsible for ensuring this alignment.

This session aims to help you become a strong decision maker throughout the life of a qualitative research project. This process is facilitated by attention to the following questions:

  1. How will the strength of your evaluation work be enhanced because you took a qualitative approach?
  2. How will the qualitative analysis approach you have chosen help you arrive at your goals?
  3. What does it mean to "stay close to the text"? How do you do it? Why does it matter?

You will learn:

Raymond C. Maietta is President of ResearchTalk Inc., a qualitative research consulting and professional development company. Lessons learned from 15 years of work with qualitative researchers in the fields of evaluation, health science, and the social sciences inform a book he is completing, titled Sort and Sift, Think and Shift: A Multi-dimensional Approach to Qualitative Research.

Session 1: Qualitative Methods
Scheduled: Monday and Tuesday, November 8 and 9, 9 AM to 4 PM
Level: Beginner, no prerequisites

2. Quantitative Methods for Evaluators

Quantitative data offers opportunities for numerical descriptions of populations and samples. The challenge is in knowing which analyses are best for a given situation. Designed for the practitioner needing a refresher course and/or guidance in applying quantitative methods to evaluation contexts, the workshop covers the basics of parametric and nonparametric statistics, as well as how to report your findings.

Hands-on exercises and computer demonstrations interspersed with mini-lectures will introduce methods and concepts. The instructor will review examples of research and evaluation questions and the statistical methods appropriate to developing a quantitative data-based response.

You will learn:

Katherine McKnight applies quantitative analysis as Director of Program Evaluation for Pearson Achievement Solutions and is co-author of Missing Data: A Gentle Introduction (Guilford, 2007). Additionally, she teaches Research Methods, Statistics, and Measurement in Public and International Affairs at George Mason University in Fairfax, Virginia.

Session 2: Quantitative Methods
Scheduled: Monday and Tuesday, November 8 and 9, 9 AM to 4 PM
Level: Beginner, no prerequisites

3. Evaluation 101: Intro to Evaluation Practice

Begin at the beginning and learn the basics of evaluation from an expert trainer. The session will focus on the logic of evaluation to answer the key question: "What resources are transformed into what program evaluation strategies to produce what outputs for which evaluation audiences, to serve what purposes?" Enhance your skills in planning, conducting, monitoring, and modifying the evaluation so that it generates the information needed to improve program results and communicate program performance to key stakeholder groups.

A case-driven instructional process, using discussion, exercises, and lecture, will introduce the steps in conducting useful evaluations: getting started, describing the program, identifying evaluation questions, collecting data, analyzing and reporting, and using results.

You will learn:

John McLaughlin has been part of the evaluation community for over 30 years working in the public, private, and non-profit sectors. He has presented this workshop in multiple venues and will tailor this two-day format for Evaluation 2010.

Session 3: Evaluation 101
Scheduled: Monday and Tuesday, November 8 and 9, 9 AM to 4 PM
Level: Beginner, no prerequisites

4. Logic Models for Program Evaluation and Planning

Many programs fail to start with a clear description of the program and its intended outcomes, undermining both program planning and evaluation efforts. The logic model, as a map of what a program is and intends to do, is a useful tool for clarifying objectives, improving the relationship between activities and those objectives, and developing and integrating evaluation plans and strategic plans.

First, we will recapture the utility of program logic modeling as a simple discipline, using cases in public health and human services to explore the steps for constructing, refining and validating models. Then, we'll examine how to improve logic models using some fundamental principles of "program theory", demonstrate how to use logic models effectively to help frame questions in program evaluation, and show some ways logic models can also inform strategic planning. Both days use modules with presentations, small group case studies, and debriefs to reinforce group work.

You will learn:

  • To construct simple logic models;
  • To use program theory principles to improve a logic model;
  • To identify and engage program stakeholders using a logic model;
  • To develop an evaluation focus based on a logic model;
  • To use logic models to inform strategic planning.

Thomas Chapel is the central resource person for planning and program evaluation at the Centers for Disease Control and Prevention and a sought-after trainer. Tom has taught this workshop for the past four years to much acclaim.

Session 4: Logic Models
Scheduled: Monday and Tuesday, November 8 and 9, 9 AM to 4 PM
Level: Beginner, no prerequisites

5. Participatory Evaluation

Participatory evaluation practice requires evaluators to be skilled facilitators of interpersonal interactions. This workshop will provide you with theoretical grounding (social interdependence theory, conflict theory, and evaluation use theory) and practical frameworks for analyzing and extending your own practice.

Through presentations, discussion, reflection, and case study, you will experience strategies to enhance participatory evaluation and foster interaction. You are encouraged to bring examples of challenges faced in your own practice for discussion in this workshop, which is consistently lauded for its ready applicability to real-world evaluation contexts.

You will learn:

  • Strategies to foster effective interaction, including belief sheets; values voting; three-step interview; cooperative rank order; graffiti; jigsaw; and data dialogue;
  • Responses to challenges in participatory evaluation practices;
  • Four frameworks for reflective evaluation practice.

Jean King has over 30 years of experience as an award-winning teacher at the University of Minnesota. As an evaluation practitioner, she has received AEA’s Myrdal award for outstanding evaluation practice. Laurie Stevahn is a professor at Seattle University with extensive facilitation experience as well as applied experience in participatory evaluation.

Session 5: Participatory Evaluation
Prerequisites: Basic evaluation skills
Scheduled: Monday and Tuesday, November 8 and 9, 9 AM to 4 PM
Level: Intermediate

6. Consulting Skills for Evaluators: Getting Started

Program evaluators who choose to become independent consultants will find that the intersection of business and research can offer tremendous personal reward, but it can also be challenging and intimidating without the simple yet important skills required to succeed. This practical workshop addresses the unique issues faced by individuals who want to become independent consultants, who have recently taken the plunge, or who need to re-tool their professional practice.

Participants will have the opportunity to explore four different skill sets that are required to support a successful evaluation consulting practice:

  1. The personal characteristics required to be an independent consultant;
  2. The entrepreneurial skills needed to get started and obtain research contracts;
  3. The management skills involved in running a small business while focusing on your research;
  4. The steps involved in managing your consulting projects.

Through lecture, anecdote, discussion, small-group exercises, and independent reflection, this workshop will help you to problem solve around this career choice and develop an agenda for action.

You will learn:

  • If consulting is an appropriate career choice for you;
  • How to break into the evaluation consulting market and stay there;
  • How to manage your small business and still have time for your research;
  • What steps are involved in an evaluation consulting engagement;
  • What field-based skills can enhance your evaluation practice.

Gail V. Barrington is an independent consultant who started her consulting firm, Barrington Research Group, Inc. in 1985. She has conducted over 100 program evaluation studies and has made a significant contribution to the field of evaluation through her practice, writing, teaching, training, mentoring and service. In 2008 she won the Canadian Evaluation Society award for her Contribution to Evaluation in Canada.

Session 6: Consulting Skills
Scheduled: Monday and Tuesday, November 8 and 9, 9 AM to 4 PM
Level: Beginner, no prerequisites

7. Developmental Evaluation: Applying Complexity Concepts to Enhance Innovation and Use

This workshop is over-subscribed and thus full. We do not maintain a waitlist. Please make an alternate selection.

Developmental evaluation (DE) is especially appropriate for innovative initiatives or organizations in dynamic and complex environments where participants, conditions, interventions, and context are turbulent, pathways for achieving desired outcomes are uncertain, and conflicts about what to do are high. DE supports reality-testing, innovation, and adaptation in complex dynamic systems where relationships among critical elements are nonlinear and emergent. Evaluation use in such environments focuses on continuous and ongoing adaptation, intensive reflective practice, and rapid, real-time feedback. The purpose of DE is to help develop and adapt the intervention (different from improving a model).

This evaluation approach involves partnering relationships between social innovators and evaluators in which the evaluator’s role focuses on helping innovators embed evaluative thinking into their decision-making processes as part of their ongoing design and implementation initiatives. DE can apply to any complex change effort anywhere in the world. Through lecture, discussion, and small-group practice exercises, this workshop will position DE as an important option for evaluation in contrast to formative and summative evaluations as well as other approaches to evaluation.

You will learn:

  • The specific niche for which developmental evaluation is appropriate and useful;
  • To understand and distinguish five different types of DE and the implications of those types;
  • To identify the dimensions of complexity that affect how DE is conducted;
  • Practical frameworks and innovative methods for use in DE.

Michael Quinn Patton is an independent consultant and professor at the Union Institute. An internationally known expert on utilization-focused evaluation, he bases this workshop on his newly published book, Developmental Evaluation: Applying Complexity Concepts to Enhance Innovation and Use (Guilford, 2010).

Session 7: Developmental Evaluation
Scheduled: Monday and Tuesday, November 8 and 9, 9 AM to 4 PM
Level: Beginner, no prerequisites

8. Building Evaluation Capacity of Community Organizations

Are you working with local community groups (coalitions, nonprofits, social service agencies, local health departments, volunteers, school boards) that are trying to evaluate the outcomes of their work to meet a funding requirement, an organizational expectation, or to enhance their own program performance?

In this highly interactive workshop, you will practice and reflect on a variety of activities and adult learning techniques for building basic evaluation skills related to the core components of evaluation: engaging stakeholders, focusing the evaluation, data collection, data analysis, and use. Try the activities out, assess their appropriateness for your own situation, and expand your toolbox. Bring your own 'best practices' to share as we work towards building the evaluation capacity of community practitioners and organizations. This year's workshop also will include a section on organizational capacity building that goes beyond individual competencies and skills: strategies to build resources, support, and an organizational environment for sustaining evaluation.

You will learn:

  • Activities to use in building essential evaluation knowledge and skills;
  • Methods and techniques that facilitate evaluation learning;
  • What to consider in choosing among options to better suit needs, requests and realities;
  • Strategies, beyond teaching and training, for building organizational evaluation capacity.

Ellen Taylor-Powell is widely recognized for her work in evaluation capacity building. Her 20 years in Extension have focused continuously on evaluation training and capacity building with concentration on individual, team, and organizational learning.

Session 8: Building Evaluation Capacity
Prerequisites: Involvement in evaluation capacity building at the community level
Scheduled: Monday and Tuesday, November 8 and 9, 9 AM to 4 PM
Level: Intermediate

One Day Workshops, Tuesday, November 9, 9 AM to 4 PM


9. Reconceptualizing Evaluation

This workshop aims to provide an opportunity to reconsider most of the usual assumptions we make and perspectives we take about the nature of evaluation. At the least, we'll discuss some alternatives to most of them and bring in some perspectives not usually addressed, e.g., the neurological function of evaluation. The discussions will have four foci: (i) the relation of evaluation to other disciplines; (ii) the relation of program evaluation, which most of us do, to the many other professional fields that are also sub-divisions of evaluation; (iii) the relation of evaluation to the other great cognitive processes of explanation, description, prediction, recommendation, etc.; (iv) the relation of any approach to evaluation to the socio-cultural norms of the evaluator and evaluees, including acceptance, challenge, bypass, compromise, etc.

The process will involve a mix of short presentations, whole group and small group discussions, and write-in as well as oral questioning.


You will learn:

  • New knowledge and understanding of the existence and worth of alternative approaches to the theory and practice of evaluation;
  • Improved ability to respond to standard objections and resistance to internal and external evaluation.

Michael Scriven has served as the President of AEA and has taught evaluation in schools of education, departments of philosophy and psychology, and to professional groups, for 45 years. A senior statesman in the field, he has authored over 90 publications focusing on evaluation, and received AEA's Paul F Lazarsfeld Evaluation Theory award in 1986.

Session 9: Reconceptualizing Evaluation
Prerequisite: A basic understanding of the breadth of the field
Scheduled: Tuesday, November 9, 9:00 AM to 4:00 PM
Level: Intermediate


10. Experimental and Quasi-Experimental Designs for Evaluation

Design provides the conceptual framework, using structural elements, from which a study is planned and executed. It also sets the basic conditions from which facts and conclusions are derived and inferred. With an emphasis on causal inference and various types of validity, attendees will explore the theoretical, ideological, and methodological foundations of and principles for designing experimental and quasi-experimental investigations for evaluation. The primary foci of the course include design of quasi-experimental studies that lack either a comparison group or a pretest observation, design of quasi-experimental studies that use both control groups and pretests (including interrupted time-series and regression discontinuity designs), and randomized experimental designs, including the conditions conducive to implementing them.

Using an instructional process of mini-lectures, group discussions, exercises, and work with case examples, the facilitator will guide you through the process of designing experimental and quasi-experimental investigations for program evaluation purposes.
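As a simple illustration of one of the quasi-experimental designs named above (not drawn from the workshop materials), a sharp regression discontinuity design estimates the treatment effect at a cutoff c on an assignment variable X:

\[ Y_i = \beta_0 + \beta_1 T_i + \beta_2 (X_i - c) + \varepsilon_i, \qquad T_i = \mathbf{1}[X_i \ge c], \]

where T_i indicates whether unit i falls at or above the cutoff and \beta_1 is the estimated effect of treatment at the cutoff, under the assumption that the outcome would otherwise vary smoothly across c.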

You will learn:

  • The vocabulary of research design and related vocabularies and how to apply those concepts to the construction of experimental and quasi-experimental designs;
  • The major types of validity and their relation to research design;
  • How to identify plausible validity threats and reduce them using elements of design;
  • How to make intelligent, informed decisions when designing evaluations that logically couple research questions to elements of design while weighing their costs and benefits.

Chris L. S. Coryn is the Director of the Interdisciplinary Ph.D. in Evaluation program at The Evaluation Center of Western Michigan University and a recipient of the American Evaluation Association Marcia Guttentag Award. He teaches Foundations of Evaluation: Theory, Method, and Practice for the Evaluators’ Institute and has provided workshops on numerous topics including evaluation theory and methods, research design, statistics, and sampling, for both national and international audiences.

Session 10: Evaluation Design
Prerequisite: A basic knowledge of research methods, applied statistics, and measurement.
Scheduled: Tuesday, November 9, 9:00 AM to 4:00 PM
Level: Intermediate


11. RealWorld Evaluation: Practical Tips for Doing Evaluations in Spite of Budget, Time, Data and Political Constraints

Have you ever been asked to evaluate a project that was almost finished, with no baseline and no possibility of a comparison group, yet your clients expected a "rigorous impact evaluation"? Not only that, but as you negotiated the terms of reference you discovered a short deadline and a rather limited budget for conducting the evaluation! Have you had to deal with political pressures, including pre-conceived expectations by stakeholders?

This workshop presents a seven-step process, a checklist and a toolbox of techniques that seek to help evaluators and clients ensure the best quality evaluation under real-life constraints like those described above. The RealWorld Evaluation approach will be introduced and its practical utility assessed through presentations, examples from international experiences, and small-group exercises. The intention is that participants will mutually learn and share practical techniques for dealing with real-world constraints.

You will learn:

  • The seven steps of the RealWorld Evaluation approach;
  • Context-responsive evaluation design alternatives;
  • Ways to reconstruct baseline data;
  • How to answer the question of what would have happened without the project;
  • How to identify and overcome threats to the validity or adequacy of evaluation methods.

Jim Rugh has had over 45 years of experience in international development, 30 of them as a professional evaluator, mainly of international NGOs. He headed the evaluation department at CARE for 12 years. He has led many evaluation trainings and conducted agency-level evaluations of a number of international agencies. Along with Michael Bamberger and Linda Mabry, he co-authored RealWorld Evaluation: Working Under Budget, Time, Data and Political Constraints (Sage, 2006).

Session 11: RealWorld Evaluation
Prerequisite: A basic knowledge of evaluation and experience conducting evaluations.
Scheduled: Tuesday, November 9, 9:00 AM to 4:00 PM
Level: Intermediate


12. Advanced Focus Group Moderator Training

The literature is rich in textbooks and case studies on many aspects of focus groups, including design, implementation, and analysis. Missing, however, are guidelines and discussions on how to moderate a focus group. In this experiential learning environment, you will find out how to maximize time, build rapport, create energy, and apply communication tools in a focus group to maintain the flow of discussion among the participants and elicit answers from more than one person.

Using practical exercises and examples, including role play and constructive peer critique as a focus group leader or respondent, you will explore effective focus group moderation, including ways to increase and limit responses among individuals and the group as a whole. In addition, many of the strategies presented in the workshop are applicable more broadly in other evaluation settings, such as community forums and committee meetings, to stimulate and sustain discussion.

You will learn:

  • The theoretical basis for focus groups and its implications for practice;
  • Fifteen practical strategies to create and maintain focus group discussion;
  • Approaches to moderating a focus group while being sensitive to cross-cultural issues;
  • How to stimulate discussion in community forums, committee meetings, and social settings.

Nancy-Ellen Kiernan has facilitated over 200 workshops on evaluation methodology and moderated focus groups in over fifty studies with groups ranging from Amish dairy farmers in barns to at-risk teens in youth centers, to university faculty. On the faculty at Penn State University, she has published widely and is a regular workshop presenter at AEA’s annual conference.

Session 12: Advanced Focus Group Moderation
Prerequisites: Having moderated 2 focus groups and written focus group questions and probes
Scheduled: Tuesday, November 9, 9:00 AM to 4:00 PM
Level: Intermediate


13. Introduction to GIS and Spatial Analysis in Evaluation

This workshop introduces Geographic Information Systems (GIS) and spatial analysis concepts and their uses in program and policy evaluation. We will cover the steps for undertaking a mapping project, the challenges of doing GIS cost-effectively, a variety of mapping software (some free), and how to obtain base maps and data for map contents.

Using case study examples from several content areas (e.g., ecology, public health, housing, criminal justice), the presenters will demonstrate and attendees will practice designing a GIS project, setting up spatial analysis, and using GIS approaches to evaluate programs or policy initiatives. We will discuss ways to involve evaluation stakeholders (e.g., staff, program clients) in mapping projects. Finally, we will present ways to improve visual quality and avoid bias when doing mapping. This workshop will not provide hands-on experience in using GIS software, but rather will provide demonstrations and examine output.
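To make the kind of spatial analysis discussed above concrete, here is a minimal sketch in Python, assuming the geopandas library and hypothetical input files (a county base map and a point file of program sites with an 'outcome' attribute); it is illustrative only and not part of the workshop materials.

  import geopandas as gpd
  import matplotlib.pyplot as plt

  # Load a polygon base map and a point layer of program sites (file names are hypothetical).
  counties = gpd.read_file("counties.shp")
  sites = gpd.read_file("program_sites.geojson")   # includes an 'outcome' column

  # Spatial join: attach each site to the county polygon that contains it.
  sites_in_counties = gpd.sjoin(sites, counties, predicate="within")

  # Aggregate the outcome by county and map it as a choropleth.
  counties["mean_outcome"] = sites_in_counties.groupby("index_right")["outcome"].mean()
  ax = counties.plot(column="mean_outcome", legend=True, missing_kwds={"color": "lightgrey"})
  ax.set_title("Mean program outcome by county (illustrative)")
  plt.show()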

You will learn:

  • To understand GIS as an important tool for evaluators in a variety of settings;
  • The pros and cons of a variety of mapping software and how to obtain the software;
  • How to find and use online GIS data sources to put content in your maps;
  • How to improve and assess visual presentation in maps;
  • How to avoid bias or 'lying with maps' in order to improve evaluation quality.

Arlene Hopkins and Stephen Maack bring university-level teaching experience, workshop facilitation experience, and both a class-based and practical background in using GIS for evaluation, to their workshop presentation.

Session 13: Introduction to GIS
Scheduled: Tuesday, November 9, 9:00 AM to 4:00 PM
Level: Beginner, no prerequisites


14. Systems Thinking and Evaluation Practice: Tools to Bridge the Gap

As interest in applying systems thinking in evaluation grows, it is becoming clear that evaluators are looking for assistance in matching systems ideas to evaluation practice. The goal of this workshop is to help you bridge the two fields by teaching approaches for matching systems methods to the evaluation questions you are seeking to answer. It is targeted at those who are not only trying to make sense of complex, messy situations, but who also wish to build a basic toolbox of approaches for understanding and evaluating systems.

Through mini-lectures, group activities, and hands-on use of systems methods, this workshop will provide you with the basic tools you’ll need to “think systemically” as evaluators.

You will learn:

  • What it means to "think systemically;"
  • To “stretch your evaluator’s muscles” regarding the use of systems thinking for evaluation by engaging in activities and discussions relating to core systems and evaluation concepts;
  • How to match systems methods to the type of questions being asked in an evaluation;
  • How to use at least three systems thinking approaches in your own evaluation practice.

Jan Noga, Margaret Hargreaves, Richard Hummelbrunner, and Bob Williams will facilitate this workshop. Together they bring years of experience and expertise in systems thinking, evaluation, organizational consulting, and group facilitation and training both in the US and abroad.

Session 14: Systems Thinking
Scheduled: Tuesday, November 9, 9:00 AM to 4:00 PM
Level: Beginner, no prerequisites


15. Creating Surveys to Measure Performance and Assess Needs

Surveys for program evaluation, performance measurement, or needs assessment can provide excellent information for evaluators. However, developing effective surveys requires an eye both to unbiased question design and to how the results of the survey will be used. Neglecting either aspect undermines the success of the survey.

This hands-on workshop will use lecture and group exercises to review guidelines for survey development. We will use two national surveys, one used to measure the performance of local governments and the other to assess the needs of older adults, to inform the creation of our own survey instruments.

You will learn:

  • How to create effective surveys for tracking outcomes and needs;
  • How to report useful results to a wide variety of stakeholders;
  • Barriers, and ways to surmount those barriers, to using survey results.

Michelle Kobayashi is co-author of Citizen Surveys: A Comprehensive Guide to Making Them Matter (International City/County Management Association, 2009). She has over 25 years of experience in performance measurement and needs assessment, and has conducted scores of workshops on research and evaluation methods for community-based organizations, local government employees, elected officials, and students.

Session 15: Creating Surveys
Scheduled: Tuesday, November 9, 9:00 AM to 4:00 PM
Level: Beginner, no prerequisites


16. Longitudinal Analysis Using Structural Equation Models

This workshop is over-subscribed and thus full. We do not maintain a waitlist. Please make an alternate selection.

Many evaluation studies make use of longitudinal data. However, while much can be learned from repeated measures, the analysis of change is also associated with a number of special problems. The workshop takes up these issues and reviews how traditional methods in the analysis of change, such as the paired t-test, repeated measures ANOVA or MANOVA, address these problems.

A mixture of PowerPoint presentations, group discussion, and exercises with a special focus on model specification will help us explore latent growth curve modeling (LGM) in contrast to more traditional approaches to analyzing change. The core of the workshop will be an introduction to SEM-based LGM. We will show how to specify, estimate, and interpret growth curve models. In contrast to most traditional methods, which are restricted to the analysis of mean changes, LGM allows the investigation of unit-specific (individual) changes over time. Towards the end of the workshop, we will discuss more recent advancements of LGM, including multiple-group analyses, the inclusion of time-varying covariates, and cohort-sequential designs. Detailed directions for model specification will be given, and all analyses will be illustrated by practical examples.
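For orientation (an illustrative sketch, not drawn from the workshop materials), a linear latent growth curve model for repeated measures y_{ti} of person i at time t can be written as

\[ y_{ti} = \eta_{0i} + \lambda_t \, \eta_{1i} + \varepsilon_{ti}, \qquad \eta_{0i} = \mu_0 + \zeta_{0i}, \qquad \eta_{1i} = \mu_1 + \zeta_{1i}, \]

where \lambda_t are fixed time scores (e.g., 0, 1, 2, ...), \eta_{0i} and \eta_{1i} are latent intercept and slope factors with means \mu_0 and \mu_1, and the \zeta and \varepsilon terms capture individual differences and measurement error, respectively.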

You will learn:

  • How to detect reliable sources of variance in the analysis of change;
  • Special problems associated with the analysis of longitudinal data;
  • How to specify, estimate and interpret latent growth curve models (LGM) using SEM;
  • The relationship of traditional methods to new methods;
  • Recent developments in latent growth curve modeling.

Manuel C Voelkle is a research scientist at the Max Planck Institute in Berlin, Germany. He teaches courses on advanced multivariate data analysis and research design and research methods. Werner W Wittmann is professor of psychology at the University of Mannheim, where he heads a research and teaching unit specializing in research methods, assessment and evaluation research.

Session 16: Longitudinal Analysis
Prerequisites: Familiarity with structural equation models and regression analytic techniques. Experience with analyzing longitudinal data is useful but not necessary.
Scheduled: Tuesday, November 9, 9:00 AM to 4:00 PM
Level: Intermediate


One Day Workshops, Wednesday, November 10, 8 AM to 3 PM


17. Collaborative Evaluations: A Step-by-Step Model for the Evaluator

Do you want to engage and succeed in collaborative evaluations? Using clear and simple language, this workshop will outline key concepts and effective tools and methods to help you master the mechanics of collaboration in the evaluation environment. Building on a theoretical grounding, you will explore how to apply the Model for Collaborative Evaluations (MCE) to real-life evaluations, with a special emphasis on those factors that facilitate and inhibit stakeholders' participation.

Using highly interactive discussion, demonstration, hands-on exercises and small group work, each section addresses fundamental factors contributing to the six model components that must be mastered in order to succeed in collaborations. You will gain a deeper understanding of how to develop collaborative relationships in the evaluation context.

You will learn:

  • The factors that influence the success of collaboration in evaluations;
  • How to apply the Model for Collaborative Evaluations to your practice;
  • To capitalize on others' strengths to encourage feedback, clarify interpretations, and resolve misunderstandings;
  • Methods and tools to facilitate collaborative evaluations and build collaborative relationships.

Liliana Rodriguez-Campos is the Program Chair of the Collaborative, Participatory & Empowerment TIG and a faculty member in Evaluation at the University of South Florida. An experienced facilitator, with consistently outstanding reviews from AEA attendees, she has developed and offered training in both English and Spanish to a variety of audiences in the US and internationally.

Session 17: Collaborative Evaluations
Scheduled: Wednesday, November 10, 8:00 AM to 3:00 PM
Level: Intermediate


18. Needs Assessment: Overview of Concepts, Getting Started, Analyzing Needs Data, Strategies for Prioritizing Needs

Needs assessment (NA) is often assigned to evaluators with the assumption that they are trained in this form of assessment. Surveys of evaluation training, however, indicate that only a small number of courses on needs assessment are being taught. Needs assessment topics such as the process of NA, how to get the assessment started, analyzing and presenting data, and prioritizing needs are key capacities for many effective evaluations.

This day-long workshop will consist of four mini-workshops exploring different aspects of NA. Each mini-workshop will include hands-on exercises around case studies, discussion, and explanation. Much of the content parallels a Needs Assessment Kit, recently developed by the facilitators.

You will learn:

  • The basic concepts, definitions, and model for assessing needs;
  • Strategies for getting an assessment started;
  • Options for analyzing needs data and how the data can be portrayed;
  • Ways to prioritize needs.

James Altschuld is a well-known author and trainer of needs assessment. Professor Emeritus at The Ohio State University, he is a seasoned presenter with more than three decades of experience and is co-author of a new needs assessment kit published by SAGE in 2009. Jeffry White is an Assistant Professor at University of Louisiana Lafayette and is co-author of one of the five books in the kit.

Session 18: Needs Assessment
Scheduled: Wednesday, November 10, 8:00 AM to 3:00 PM
Level: Beginner, no prerequisites


19. Logic Models - Beyond the Traditional View: Metrics, Methods, Expected and Unexpected Change

When should we use logic models? How can we maximize their explanatory value and usefulness as an evaluation tool? This workshop will present three broad topics that will increase the value of using logic models. First, we'll explore an expanded view of what forms logic models can take, including 1) the range of information that can be included, 2) the use of different forms and scales, 3) the types of relationships that may be represented, and 4) uses of models at different stages of the evaluation life cycle.

This workshop will examine how to balance visual design against information density in order to make the best use of models with various stakeholders and technical experts, and will consider epistemological issues in logic modeling, addressing 1) strengths and weaknesses of 'models', 2) relationships between models, measures, and methodologies, and 3) conditions under which logic models are and are not useful. Through lecture and both small and large group discussions, we will move beyond the traditional view of logic models to examine their applicability, value, and relatability to attendees' experiences.

You will learn:

  • The essential nature of a 'model', its strengths and weaknesses;
  • Uses of logic models across the entire evaluation life cycle;
  • The value of using multiple forms and scales of the same logic model for the same evaluation;
  • Principles of good graphic design for logic models;
  • Evaluation conditions under which logic models are, and are not, useful;
  • The relationship among logic models, measurement, and methodology.

Jonathan Morell works at Vector Research Center for Enterprise Performance and TechTeam Government Solutions, and has been a practicing evaluator for over 20 years. An experienced trainer, he brings practical hands-on examples from real-world situations that build the connection between theory and practice.

Session 19: Logic Models
Prerequisites: Experience working with logic models and working with stakeholders
Scheduled: Wednesday, November 10, 8:00 AM to 3:00 PM
Level: Intermediate


20. Transformative Mixed Methods Evaluations

This workshop focuses on the methodological and contextual considerations in designing and conducting transformative mixed methods evaluation and is geared to meet the needs of evaluators working in communities that reflect diversity in terms of culture, race/ethnicity, religion, language, gender, and disability. Deficit perspectives taken as common wisdom can have a deleterious effect on both the design of a program and the outcomes. A transformative mixed methods approach enhances an evaluator's ability to accurately represent how this can happen.

Interactive exercises based upon case studies will give you an opportunity to apply theoretical guidance that will be provided in a plenary session, a mini-lecture and small- and large-group discussions. Alternative strategies based on transformative mixed methods are illustrated through reference to the presenters' own work, the work of others, and the challenges that participants bring to the workshop.

You will learn:

  • To critically examine the transformative paradigm's assumptions in culturally diverse communities;
  • To identify different methodological approaches within a transformative mixed methods model;
  • To apply critical skills associated with selecting the design and use of transformative mixed methods evaluation.

Donna Mertens is a Past President of the American Evaluation Association who teaches evaluation methods and program evaluation to deaf and hearing graduate students at Gallaudet University in Washington, D.C. Mertens recently authored Transformative Research and Evaluation (Guilford). Katrina L Bledsoe is a senior research associate at Walter R. McDonald & Associates, conducting and managing evaluations in culturally complex communities nationally.

Session 20: Transformative Mixed Methods
Prerequisites: Basic knowledge of evaluation and a year's experience in the field
Scheduled: Wednesday, November 10, 8:00 AM to 3:00 PM
Level: Intermediate


21. How to Prepare an Evaluation Dissertation Proposal

Developing an acceptable dissertation proposal often seems more difficult than conducting the actual research. Further, proposing an evaluation as a dissertation study can raise faculty concerns of acceptability and feasibility. This workshop will lead you through a step-by-step process for preparing a strong, effective dissertation proposal with special emphasis on the evaluation dissertation.

The workshop will cover such topics as the nature, structure, and multiple functions of the dissertation proposal; how to construct a compelling argument; how to develop an effective problem statement and methods section; and how to provide the necessary assurances to get the proposal approved. Practical procedures and review criteria will be provided for each step. The workshop will emphasize application of the knowledge and skills taught to the participants’ personal dissertation situation through the use of an annotated case example, multiple self-assessment worksheets, and several opportunities for questions of personal application.

You will learn:

  • The pros and cons of using an evaluation study as dissertation research;
  • How to construct a compelling argument in a dissertation proposal;
  • The basic process and review criteria for constructing an effective problem statement and methods section;
  • How to provide the assurances necessary to guarantee approval of the proposal;
  • How to apply all of the above to your personal dissertation needs.

Nick L Smith is the co-author of How to Prepare a Dissertation Proposal (Syracuse University Press) and a past-president of AEA. He has taught research and evaluation courses for over 20 years at Syracuse University and is an experienced workshop presenter. He has served as a dissertation advisor to multiple students and is the primary architect of the curriculum and dissertation requirements in his department.

Session 21: Evaluation Dissertation
Scheduled: Wednesday, November 10, 8:00 AM to 3:00 PM
Level: Beginner, no prerequisites


22. Utilization-Focused Evaluation

Evaluations should be useful, practical, accurate and ethical. Utilization-Focused Evaluation is a process that meets these expectations and promotes use of evaluation from beginning to end. With a focus on carefully targeting and implementing evaluations for increased utility, this approach encourages situational responsiveness, adaptability and creativity. This training is aimed at building capacity to think strategically about evaluation and increase commitment to conducting high quality and useful evaluations.

Utilization-Focused Evaluation centers on the intended users of the evaluation in the context of situational responsiveness, with the goal of methodological appropriateness. An appropriate match between users and methods should result in an evaluation that is useful, practical, accurate, and ethical, the characteristics of high-quality evaluations according to the profession's standards. With the overall goal of teaching you the process of Utilization-Focused Evaluation, the session will combine lectures with concrete examples and interactive case analyses.

You will learn:

  • Basic premises and principles of Utilization-Focused Evaluation (U-FE);
  • Practical steps and strategies for implementing U-FE;
  • Strengths and weaknesses of U-FE, and situations for which it is appropriate.

Michael Quinn Patton is an independent consultant and professor at the Union Institute. An internationally known expert on utilization-focused evaluation, he bases this workshop on the newly completed fourth edition of his best-selling evaluation text, Utilization-Focused Evaluation (SAGE).

Session 22: Utilization-Focused Evaluation
Scheduled: Wednesday, November 10, 8:00 AM to 3:00 PM
Level: Beginner, no prerequisites


23. Advanced Topics in Concept Mapping for Evaluation

Concept mapping, as a mixed method approach, is a well-known and widely-used tool to enhance evaluation design, implementation, and measurement. Concept mapping allows us to systematically synthesize large statement sets and represent ideas in a series of easy-to-read graphics. This intermediate level workshop focuses on developing advanced skills for analysis, production of results, and utilization of results in a planning and evaluation framework.

Through the use of mini-lectures and small group exercises, you will work through the process of concept mapping. This includes synthesizing a large statement set, choosing the best fitting cluster solution, and producing results that increase the likelihood of utilization.

You will learn:

  • How to conduct Key Words in Context analysis and Idea Synthesis to systematically reduce a large set of brainstormed ideas into a final list;
  • How to use concept mapping analysis to determine appropriate cluster solutions;
  • Several analytic techniques for examining data produced in concept mapping;
  • Tested and proven tools for enhancing the utilization of concept mapping results.

Mary Kane will lead a team of facilitators from Concept Systems, Inc, a consulting company that uses the concept mapping methodology as a primary tool in its planning and evaluation consulting projects. The presenters have extensive experience with concept mapping and are among the world's leading experts on this approach.

Session 23: Concept Mapping
Prerequisites: Basic concept mapping training or use of concept mapping
Scheduled: Wednesday, November 10, 8:00 AM to 3:00 PM
Level: Intermediate


24. Evaluating Organizational Collaboration

“Collaboration” is a ubiquitous, yet misunderstood, under-empiricized and un-operationalized construct. Program and organizational stakeholders looking to do and be collaborative struggle to identify, practice and evaluate collaboration with efficacy. This workshop will demonstrate how the principles of collaboration theory can be used to inform evaluation practice.

You will have the opportunity to increase your capacity to quantitatively and qualitatively examine the development of inter-organizational, intra-organizational and inter-professional collaboration. Together, we will examine assessment strategies and specific tools for data collection, analysis and reporting. We will practice collaboration assessment techniques currently in use in the evaluation of local, state, and federal education, health and human service agencies. Highlighted programs include those sponsored by the CDC Office of Smoking and Health, the Association of State and Territorial Dental Directors, and the Safe Schools/Healthy Students Initiative.

You will learn:

  • Fundamental principles of inter-organizational and inter-professional collaboration;
  • Specific strategies, tools and protocols used in qualitative and quantitative assessment of collaboration;
  • How to assess grant-funded programs that identify increasing collaboration as an intended outcome;
  • How stakeholders use the evaluation process and findings to improve organizational collaboration.

Rebecca Woodland (last name formerly Gajda) has facilitated workshops and courses for adult learners for more than 10 years and is on the faculty at the University of Massachusetts - Amherst. She is an editorial board member of the American Journal of Evaluation. Her most recent publication on the topic of organizational collaboration is found in the International Journal of Public Administration, which was the journal’s most read article of 2009. Woodland notes, “I love creating learning opportunities in which all participants learn, find the material useful, and have fun at the same time.”

Session 24: Evaluating Organizational Collaboration
Prerequisites: Basic understanding of philosophy and techniques of Utilization-Focused Evaluation; basics of quantitative and qualitative data collection, analysis, and reporting techniques; basic understanding of organizational change theory/systems theory.
Scheduled: Wednesday, November 10, 8:00 AM to 3:00 PM
Level: Intermediate


25. Using Effect Size and Association Measures in Evaluation

Improve your capacity to understand and apply a range of measures including: standardized measures of effect sizes from Cohen, Glass, and Hedges; Eta-squared; Omega-squared; the Intraclass correlation coefficient; and Cramer’s V. Answer the call to report effect size and association measures as part of your evaluation results. Together we will explore how to select the best measures, how to perform the needed calculations, and how to analyze, interpret, and report on the output in ways that strengthen your overall evaluation.

Through mini-lecture, hands-on exercises, and computer-based demonstration, you will improve your understanding of the theoretical foundation and computational procedures for each measure as well as ways to identify and correct for bias.
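For orientation (an illustrative sketch, not drawn from the workshop materials), the most familiar of these measures, Cohen's d, expresses the difference between two group means in pooled standard deviation units:

\[ d = \frac{\bar{X}_1 - \bar{X}_2}{s_p}, \qquad s_p = \sqrt{\frac{(n_1 - 1)s_1^2 + (n_2 - 1)s_2^2}{n_1 + n_2 - 2}}, \]

where \bar{X}_k, s_k^2, and n_k are the mean, variance, and size of group k; Hedges' g applies a small-sample correction to this quantity.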

You will learn:

  • How to select, compute, and interpret the appropriate measure of effect size or association;
  • Considerations in the use of confidence intervals;
  • SAS and SPSS macros to compute common effect size and association measures;
  • Basic relationships among the measures.

Jack Barnette is Professor of Biostatistics at the University of Colorado School of Public Health. He has taught courses in statistical methods, program evaluation, and survey methodology for more than 30 years. He has been conducting research and writing on this topic for more than ten years. Jack is a regular facilitator both at AEA's annual conference and the CDC/AEA Summer Evaluation Institute. He was awarded the Outstanding Commitment to Teaching Award by the University of Alabama and is a member of the ASPH/Pfizer Academy of Distinguished Public Health Teachers.

Session 25: Effect Size and Association Measures
Prerequisites: Univariate statistics through ANOVA & understanding and use of confidence intervals
Scheduled: Wednesday, November 10, 8:00 AM to 3:00 PM
Level: Advanced


26. Enhanced Group Facilitation: People, Purpose and Process

This session has been cancelled.

Effective meeting planners take an active role in planning and managing a group facilitation experience. Learn how to interact with multiple types of meeting attendees in order to promote valuable contributions from everyone as well as how to choose an effective facilitation technique based on goals and objectives, anticipated outcome, types and number of participants, and logistics.

This workshop will explore facilitation techniques for generating ideas and focusing thoughts, including item writing and force field analysis, along with variations on these techniques. You will practice developing meeting agendas to achieve desired goals and outcomes, incorporate appropriate activities, facilitate agreement on decisions and next steps, and manage time effectively.

You will learn:

  • How to understand the role of each person in a meeting, and manage their comfort zones in order to gain participation;
  • How to create a shared understanding and commitment to a meeting’s objectives, rationale and outcomes;
  • How to develop effective meeting plans to achieve desired goals and outcomes, facilitate agreement and manage time.

Jennifer Dewey, with James Bell Associates, Inc., is currently serving as the Project Director for the Family Connection Discretionary Grants evaluation, funded by the Administration for Children and Families (ACF), Children’s Bureau, where she leads a team of JBA staff in designing and implementing a cross-site evaluation protocol and provides technical assistance to local evaluation activities.

Session 26: Enhanced Group Facilitation
Scheduled: Wednesday, November 10, 8:00 AM to 3:00 PM
Level: Beginner, no prerequisites


27. Multilevel Models in Program and Policy Evaluation

Multilevel models open the door to understanding the inter-relationships among nested structures and the ways evaluands change across time. This workshop will demystify multilevel models and present them at an accessible level, stressing their practical applications in evaluation.

Through discussion and hands-on demonstrations, the workshop will address four key questions: When are multilevel models necessary? How can they be implemented using standard software? How does one interpret multilevel results? What are recent developments in this arena?
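For orientation (an illustrative sketch, not drawn from the workshop materials), the simplest multilevel model, a two-level random-intercept model for an outcome y_{ij} on individual i nested in site j, can be written as

\[ y_{ij} = \gamma_{00} + \gamma_{10} x_{ij} + u_{0j} + \varepsilon_{ij}, \qquad u_{0j} \sim N(0, \tau^2), \quad \varepsilon_{ij} \sim N(0, \sigma^2), \]

where u_{0j} captures site-level departures from the overall intercept \gamma_{00}; the intraclass correlation \tau^2 / (\tau^2 + \sigma^2) indicates how much of the outcome variance lies between sites.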

You will learn:

  • The basics of multilevel modeling;
  • When to use multilevel models in evaluation practice;
  • How to implement models using widely available software;
  • The importance of considering multilevel structures in understanding program theory.

Sanjeev Sridharan of the University of Toronto has repeatedly taught multilevel models for AEA as well as for the SPSS software company. His recent work on this topic has been published in the Journal of Substance Abuse Treatment, Proceedings of the American Statistical Association and Social Indicators Research. Known for making the complex understandable, his approach to the topic is straightforward and accessible.

Session 27: Multilevel Models
Prerequisites: Basic statistics
Scheduled: Wednesday, November 10, 8:00 AM to 3:00 PM
Level: Intermediate


28. Getting Actionable Answers for Real-World Decision Makers: Evaluation Nuts and Bolts that Deliver

Ever read an evaluation report and still wonder whether the program worked or was a waste of money? What if evaluations actually asked evaluative questions and gave clear, direct, evaluative answers? This workshop covers 1) big-picture thinking about key stakeholders, their information needs, and the evaluative questions they need answered; 2) a macro-level framework for evaluation design; 3) a reporting structure that gets to the point like no other; and 4) a powerful, practical, meaningful evaluation tool for getting real, genuine, direct, evaluative, concise answers to important questions – the mixed method evaluative rubric.

This workshop covers the most important “nuts and bolts” concepts and tools needed to deliver actionable answers. It uses a combination of mini-lectures and facilitated small and large group exercises to help you adopt big-picture thinking that focuses the evaluation on what really matters and on who will take action based on the findings.

You will learn:

  • How to write a set of big picture overarching questions to guide the evaluation;
  • How to use evaluation rubrics to get direct, evaluative answers to these questions;
  • Tips for commissioning and managing evaluation projects that deliver actionable answers;
  • Evaluation conceptualization and reporting tips that maximize the chances of a clear, to-the-point, and actionable evaluation.

E Jane Davidson runs her own successful consulting practice and is the 2005 recipient of AEA's Marcia Guttentag Award. She is the author of Evaluation Methodology Basics: The Nuts and Bolts of Sound Evaluation (Sage, 2004). This work builds on Michael Scriven’s contributions on the logic and methodology of evaluation, combining it with techniques from theory-based evaluation and translating it into concrete, easy-to-follow practical methodologies that can be applied in a real-world setting.

Session 28: Actionable Answers
Scheduled: Wednesday, November 10, 8:00 AM to 3:00 PM
Level: Beginner, no prerequisites


29. Theory-driven Evaluation for Assessing and Improving Planning, Implementation, and Effectiveness

Learn the theory-driven approach for assessing and improving program planning, implementation and effectiveness. In this workshop, you will explore the conceptual framework of program theory and its structure, which facilitates precise communication between evaluators and stakeholders regarding evaluation needs and approaches to addressing those needs. The workshop will also focus on how program theory and theory-driven evaluation are useful in the assessment and improvement of a program at each stage throughout its life-cycle.

Mini-lectures, group exercises and case studies will illustrate the use of program theory and theory-driven evaluation for program planning, initial implementation, mature implementation and outcomes. In the outcome stages, you will explore the differences among outcome monitoring, efficacy evaluation and effectiveness evaluation.

You will learn:

  • How to apply the conceptual framework of program theory and theory-driven evaluations;
  • How to conduct theory-driven process and outcome evaluations;
  • How to conduct integrative process/outcome evaluations;
  • How to apply program theory to improve program planning processes.

Huey Chen, a Senior Evaluation Scientist at the Centers for Disease Control and Prevention and 1993 recipient of the AEA Lazarsfeld Award for contributions to evaluation theory, is the author of Theory-Driven Evaluations (SAGE), the classic text for understanding program theory and theory-driven evaluation and more recently of Practical Program Evaluation (2005). He is an internationally known workshop facilitator on the subject.

Session 29: Theory-driven Evaluation
Prerequisites: Basic knowledge of logic modeling or program theory
Scheduled: Wednesday, November 10, 8:00 AM to 3:00 PM
Level: Intermediate


30. Identifying, Naming, Measuring and Interpreting Racism in Contexts of Evaluations

Historically, racism has been a contributing factor to the racial disparities that persist across contemporary society. This workshop will help you to identify, frame, and measure racism's presence. The workshop includes strategies for removing racism from various evaluation processes, as well as ways to identify types of racism that may influence the contexts in which programs addressing racial disparities and other societal problems operate.

Through mini-lectures, discussion, small group exercises, and handouts, we will apply workshop content to real societal problems such as identifying racial biases that may be embedded in research literature, identifying the influence of racism in the contexts of racial disparities programs, and eliminating inadvertent racism that may become embedded in cross-cultural research. This workshop will help you to more clearly identify, frame, measure, interpret, and lessen the presence of racism in diverse settings.

You will learn:

  • Strategies for removing/averting racism's presence in evaluation processes;
  • Common places where racism may hide and influence the context of programs and problems;
  • How to name, define, and frame the Brooks Equity Typology (BET) and employ strategies for using it to collect data on racism;
  • How to collect five broad types of data concerning racism as a variable;
  • Strategies for collecting data on eight of the several dozen types of racism described in contemporary cross-disciplinary English-language research literature.

Pauline Brooks is an evaluator and researcher by formal training and practice. She has had years of university-level teaching and evaluation experience in both public and private education, particularly in the fields of education, psychology, social work and public health. For over 20 years, she has worked in culturally diverse settings focusing on issues pertaining to underserved populations, class, race, gender, and culture.

Session 30: Racism in Evaluation
Prerequisites: Openness to exploring issues of context, setting, process, and approaches as they relate to inequalities
Scheduled: Wednesday, November 10, 8:00 AM to 3:00 PM
Level: Intermediate


31. Navigating the Waters of Evaluation Consulting Contracts

Are you looking for a compass, or maybe just a little wind to send you in the right direction when it comes to your consulting contracts? Have you experienced a stormy contract relationship and seek calm waters?

This workshop combines mini lecture, discussion, skills practice, and group work to address evaluation contract issues. You will learn about important contractual considerations such as deliverables, timelines, confidentiality clauses, rights to use/ownership, budget, client and evaluator responsibilities, protocol, data storage and use, pricing, contract negotiation, and more. Common mistakes and omissions, as well as ways to navigate through these, will be covered. You will receive examples of the items discussed, as well as resources informing the contract process. You are encouraged to bring topics for discussion or specific questions.

You will learn:

  • How to develop contract clauses that set projects up for success and support positive client relationships;
  • How to enhance contract negotiation skills;
  • How to use templates and tools to facilitate contract development;
  • About valuable resources to improve contract writing;
  • Practical advice for real world situations.

Kristin Huff is a seasoned evaluator and facilitator with over 15 years of training experience and a Master of Science degree in Experiential Education. Huff has managed consulting contracts covering the fields of technology, fundraising, nonprofit management, and evaluation, and has developed and managed more than 400 consulting contracts in the past eight years.

Session 31: Evaluation Consulting Contracts
Prerequisites: Basic knowledge of contractual issues
Scheduled: Wednesday, November 10, 8:00 AM to 3:00 PM
Level: Intermediate


32. Survey Design 101

Building on proven strategies that work in real-world contexts, this workshop will help you plan and execute all aspects of the survey design process. Designed for true beginners with little or no background in survey development, it introduces the fundamentals of survey design and administration and provides the tools needed to develop and improve your own surveys. It is perfect for those who need a refresher on survey design or are new to the field of evaluation.

This interactive workshop will combine direct instruction with hands-on opportunities for participants to apply what they learn to their own evaluation projects. You will explore different types of surveys, the advantages and challenges of various administration methods, how to choose the right one, and problems to avoid. Participants will also receive handouts with sample surveys, item writing tips, checklists, and resource lists for further information.

You will learn:

  • The various types and formats of surveys;
  • Procedures for high quality survey design;
  • Strategies for increasing response rates;
  • Tips for item writing and survey formatting.

Courtney L. Malloy and Harold N. Urman are consultants at Vital Research, a research and evaluation firm operating for more than 25 years and specializing in survey design. They have developed and administered numerous surveys and have extensive experience facilitating workshops and training sessions on research and evaluation for diverse audiences. Malloy is on faculty at the Rossier School of Education at the University of Southern California; Urman has taught courses on survey development at the University of California, Los Angeles.

Session 32: Survey Design 101
Scheduled: Wednesday, November 10, 8:00 AM to 3:00 PM
Level: Beginner, no prerequisites


33. Operations Research (OR) Techniques for Program Evaluation

What is Operations Research? And how can this approach be incorporated into your evaluation practice? Though OR techniques aren’t widely taught in the social sciences, the field is advancing quickly into systems theory, policy analysis, decision science, management, and, increasingly, evaluation. Even without advanced math or engineering knowledge, evaluators can learn useful techniques and approaches from operations research. Although most of the methods presented will be quantitatively based, the presenter is not a “math whiz” and mathematical equations will not be emphasized.
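
As one small, self-contained illustration of the kind of technique operations research offers (not drawn from the workshop materials), the following Python sketch uses linear programming via scipy.optimize.linprog to allocate a hypothetical data-collection effort under budget and staff-time limits; all figures and variable names are invented for the example.

    from scipy.optimize import linprog

    # Hypothetical planning problem: choose how many surveys (s) and interviews (i)
    # to conduct so as to maximize an "evidence" score, subject to budget and time.
    # linprog minimizes, so the objective is negated.
    c = [-1.0, -10.0]                     # evidence per survey, per interview (negated)
    A_ub = [[5.0, 40.0],                  # $5 per survey, $40 per interview
            [0.25, 3.0]]                  # staff hours per survey, per interview
    b_ub = [2000.0, 120.0]                # $2,000 budget, 120 staff hours
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)], method="highs")

    surveys, interviews = res.x
    print(f"Optimal plan: about {surveys:.0f} surveys and {interviews:.0f} interviews")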

This interactive workshop will include brainstorming activities, small group discussion, case studies, and a historical overview of operations research compared with program evaluation. The two fields have different origins and interesting histories. In what ways do they differ, and how have they been changing?

You will learn:

  • Nifty things Operations Research has to offer evaluators to enhance their skills;
  • Tools and resources to address evaluation issues as they relate to decision-making, cost analysis, risk assessment, and organizational effectiveness;
  • Techniques to help evaluators improve project management and tracking capabilities, and provide support for systems-level analyses.

Edith L. Cook has worked as an evaluation researcher and trainer for nearly 20 years and is currently Institutional Researcher at Seton Hill University, where she develops professional development training in assessment and evaluation in addition to monitoring campus assessment activities.

Session 33: Operations Research
Prerequisites: Experience collecting and analyzing quantitative data
Scheduled: Wednesday, November 10, 8:00 AM to 3:00 PM
Level: Intermediate


34. Useful Tools for Integrating System Dynamics and System Intervention Elements into System Change Evaluation Designs

Want useful tools for 1) identifying and describing system dynamics, 2) articulating key elements of system change initiatives including theories of change/logic models, and 3) using this information to design more effective system change evaluations? This workshop provides in-depth training on how to develop system evaluation questions, select system data collection methods, interpret system change data, and develop client relationships. Throughout the workshop, the facilitators will address evaluation design issues that are frequently encountered in system change initiatives – issues related to the sustainability and scalability of fundamental paradigm shifts in large, complex systems.

This workshop will use a mini-lecture format as well as small group exercises and large group discussion. Facilitators will review core system concepts and dynamics, core system intervention concepts and dynamics, and system evaluation purposes and design choices. They will also cover how to determine evaluation questions and data collection methods, how to interpret and make meaning of system change data, and how to identify the most critical relationships between evaluators and their clients and stakeholders. Participants will receive materials in advance of the session and are invited to share work-related examples of complex systems.

You will learn:

  • How to describe a complex system and understand its dynamics;
  • How to describe and understand the dynamics of a system change intervention;
  • How to select appropriate evaluation purposes and designs for system change interventions;
  • How to design systems evaluations that address issues of paradigm shifts, scalability, and sustainability.

Margaret Hargreaves, with Mathematica Policy Research, has taught program evaluation methods in a variety of settings, including graduate-level evaluation methods courses for Metropolitan State University and Hamline University, evaluation methods workshops and trainings for local, state and federal agencies, and evaluation methods presentations at numerous meetings including AEA’s annual conference. Beverly Parsons is Executive Director of InSites, a Colorado-based nonprofit organization that assists education and social service systems through evaluation, research, and planning, with special attention to using a systems orientation. She is also a member of AEA’s Board of Advisors.

Session 34: Integrating Systems Concepts
Prerequisites: Basic understanding of evaluation and core systems concepts
Scheduled: Wednesday, November 10, 8:00 AM to 3:00 PM
Level: Intermediate


35. Social Network Analysis: Theories, Methods, and Applications

Interest in the field of social network analysis has grown considerably over the last decade. Social network analysis takes seriously the proposition that the relationships between individual units or “actors” are non-random and that their patterns have meaning and significance. Social network analysis seeks to operationalize concepts such as “position”, “role”, or “social distance” that are sometimes used casually or metaphorically in social, political, and/or organizational studies.

Through a combination of lectures, group exercises, and discussion, this introductory workshop will orient participants to social network analysis theories, concepts, and applications within the context of evaluation. Hands-on exercises will facilitate a better understanding of network structure, function, and data collection. The workshop will also provide a forum for discussion of applications of social network analysis to evaluation and of emerging trends in the field.
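
Purely as an illustration of the kinds of network measures the workshop covers, here is a minimal Python sketch using the networkx library; the toy collaboration network and the choice of measures are invented for the example and are not part of the workshop materials.

    import networkx as nx

    # Hypothetical collaboration network among program staff:
    # an edge means two people report working together regularly.
    G = nx.Graph()
    G.add_edges_from([
        ("Ana", "Ben"), ("Ana", "Cam"), ("Ben", "Cam"),
        ("Cam", "Dee"), ("Dee", "Eli"), ("Eli", "Fay"),
    ])

    # Two common measures of an actor's "position" in the network.
    print("Degree centrality:     ", nx.degree_centrality(G))
    print("Betweenness centrality:", nx.betweenness_centrality(G))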

You will learn:

  • The basic theories, concepts, analysis, and application of social network analysis and evaluation;
  • Types of networks, network measures, and how to collect network data;
  • Analytical tools for network data.

Kimberly Fredericks has been an expert lecturer, consultant, and presenter for AEA, the Agency for Healthcare Quality and Research, the Center for Creative Leadership, the Eastern Evaluation Association, and the Robert Wood Johnson Foundation.

Session 35: Social Network Analysis
Scheduled: Wednesday, November 10, 8:00 AM to 3:00 PM
Level: Beginner, no prerequisites


36. Practical Propensity Score Analysis

Propensity score analysis is an increasingly popular statistical approach used in epidemiology, psychology, education, economics, sociology, medicine, public health, and many other areas. Propensity score analysis is not a single technique but rather a collection of methods used to adjust effect estimates to make them approximate the results of a randomized experiment. This workshop will help participants better understand the concept and the techniques.

Through mini-lecture, interactive discussion, and computer demonstration, this workshop will focus on the skills necessary to begin incorporating and applying propensity score analysis. All material will be presented in a highly-accessible manner with a focus on understanding the concepts and procedures for using the techniques. The workshop will begin with a brief overview of causality, quasi-experimental design, and selection bias. Attendees will be introduced to propensity score analysis as a potential statistical approach for addressing selection bias. The rationale, basic techniques, and major limitations of propensity score analysis will also be covered.
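
For readers who want to see the basic idea in code before the session, the sketch below estimates propensity scores with a logistic regression and applies inverse-probability weighting, one of the several techniques in the propensity score family. It is written in Python with simulated data for illustration only; the workshop itself demonstrates the methods in PASW and R.

    import numpy as np
    import pandas as pd
    from sklearn.linear_model import LogisticRegression

    # Simulated observational data: treatment uptake depends on two covariates.
    rng = np.random.default_rng(0)
    n = 500
    x1, x2 = rng.normal(size=n), rng.normal(size=n)
    treated = (rng.random(n) < 1 / (1 + np.exp(-(0.8 * x1 - 0.5 * x2)))).astype(int)
    outcome = 2.0 * treated + 1.5 * x1 + rng.normal(size=n)
    df = pd.DataFrame({"treated": treated, "x1": x1, "x2": x2, "outcome": outcome})

    # Step 1: the propensity score is the modeled probability of treatment.
    ps = LogisticRegression().fit(df[["x1", "x2"]], df["treated"])
    df["pscore"] = ps.predict_proba(df[["x1", "x2"]])[:, 1]

    # Step 2: weight each case by the inverse probability of the treatment it
    # actually received, then compare weighted mean outcomes.
    w = np.where(df["treated"] == 1, 1 / df["pscore"], 1 / (1 - df["pscore"]))
    ate = (np.average(df.outcome[df.treated == 1], weights=w[df.treated == 1])
           - np.average(df.outcome[df.treated == 0], weights=w[df.treated == 0]))
    print(f"IPW estimate of the treatment effect (true simulated effect 2.0): {ate:.2f}")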

You will learn:

  • What propensity score analysis is and the rationale for using it;
  • The basic techniques of propensity score analysis;
  • How to implement propensity score analysis using popular software like PASW and R;
  • How to determine whether propensity score analysis is performed as intended;
  • The limitations of propensity score analysis.

Jason Luellen is a Senior Statistical Analyst at the Centerstone Research Institute in Nashville, TN. He has experience teaching graduate-level field experiments and undergraduate psychological statistics, and is currently co-authoring a text on propensity score analysis with William Shadish. M. H. Clark is a quantitative psychologist in the Department of Educational and Human Sciences at the University of Central Florida.

Session 36: Propensity Score Analysis
Prerequisites: Familiarity with quasi-experiments, selection bias, and basic statistics.
Scheduled: Wednesday, November 10, 8:00 AM to 3:00 PM
Level: Intermediate


37. Qualitative Tools for Deeper Understanding

In an ideal world, qualitative research provides rich insights and deep understanding of respondents’ perceptions, attitudes, behaviors and emotions. However, it is often difficult for people to remember and report their experiences and to access their emotions.

This workshop demonstrates a variety of tools that help respondents articulate their experiences more fully and thus lead to deeper insights for the researcher. These tools can be used in focus group research as well as in individual interviews. Through demonstration and hands-on participation, workshop participants will explore techniques that help respondents:

  • Reconstruct their memories – such as visualization, mind-mapping, diaries and storytelling,
  • Articulate their emotions, through metaphorical techniques such as analogies, collage and photo-sorts,
  • Explore different perspectives through “word bubbles” and debate.

Participants are asked to bring an example of a qualitative evaluation research project – past, present or future – for which they wish deeper data.

You will learn:

  • How to conduct a variety of activities that lead to richer qualitative data;
  • Which activities best fit with what kinds of research objectives;
  • How to think creatively about crafting a qualitative discussion guide.

Deborah Potts is a qualitative researcher who has led thousands of focus groups and one-on-one interviews. She is co-author of Moderator to the Max: A full-tilt guide to creative, insightful focus groups and depth interviews. Deborah has taught workshops in conducting qualitative research for nearly two decades. She is an independent consultant.

Session 37: Qualitative Tools
Prerequisite: Basic qualitative interviewing skills
Scheduled: Wednesday, November 10, 8:00 AM to 3:00 PM
Level: Intermediate


Wednesday Morning Half-Day Workshops, November 10, 8 AM to 11 AM


38. Empowerment Evaluation

Empowerment Evaluation builds program capacity and fosters program improvement. It teaches people to help themselves by learning how to evaluate their own programs. The basic steps of empowerment evaluation include: 1) establishing a mission or unifying purpose for a group or program; 2) taking stock - creating a baseline to measure future growth and improvement; and 3) planning for the future - establishing goals and strategies to achieve goals, as well as credible evidence to monitor change. The role of the evaluator is that of coach or facilitator in an empowerment evaluation, since the group is in charge of the evaluation itself.

Employing lecture, activities, demonstration and case examples ranging from townships in South Africa to a $15 million Hewlett-Packard Digital Village project, the workshop will introduce you to the steps of empowerment evaluation and tools to facilitate the approach. You will join participants in conducting an assessment, using empowerment evaluation steps and techniques. 

You will learn:

  • How to plan and conduct an empowerment evaluation;
  • Ways to employ new technologies as part of empowerment evaluation, including digital photography, QuickTime video, online surveys, and web-based telephone/videoconferencing;
  • The dynamics of process use, theories of action, and theories of use.

David Fetterman hails from Stanford University and is the editor of (and a contributor to) the recently published Empowerment Evaluation Principles in Practice (Guilford). He chairs AEA's Collaborative, Participatory and Empowerment Evaluation Topical Interest Group and is a highly experienced and sought-after facilitator.

Session 38: Empowerment Evaluation
Scheduled: Wednesday, November 10, 8:00 AM to 11:00 AM
Level: Beginner, no prerequisites


39. Rethinking Fidelity of Implementation (FOI): A Critical Component Approach to Measuring Program Use

This workshop is over-subscribed and thus full. We do not maintain a waitlist. Please make an alternate selection.

This workshop presents a novel approach to fidelity of implementation measurement that supports specific and rigorous description of program use. This approach is based in a conceptual framework that focuses on systematic, in-depth measurement of essential elements or “critical components” of programs. It supports comparisons across programs, facilitates knowledge accumulation, and provides information that can be useful for multiple audiences.

Using lecture, question and answer, and small group work, the workshop will focus on programs in schools or community centers and will use, as examples, instruments designed to measure use of instructional materials. You should come with a specific program in mind and, if possible, bring program materials, as the workshop will provide opportunities to identify program critical components. Additionally, the workshop will introduce two tools (an instrument-construct matrix and an item-construct matrix) that assist with organizing the measurement of those critical components across multiple instruments.

You will learn:

  • An approach to measuring FOI that specifically and rigorously describes program use;
  • How to identify the critical components of your programs;
  • Considerations for determining the best instrument for measuring particular program components;
  • How to create items that measure program components and place them in an item-construct matrix.

Jeanne Century is the Principal Investigator on the National Science Foundation project “Applying Research on Science Materials Implementation: Bringing Measurement of Fidelity of Implementation (FOI) to Scale” that has led to the creation of the FOI framework and current suite of instruments. She has been working in education reform, research and evaluation for over 20 years.

Session 39: Fidelity of Implementation
Scheduled: Wednesday, November 10, 8:00 AM to 11:00 AM
Level: Beginner, no prerequisites


40. Dominators, Cynics, and Wallflowers: Practical Strategies for Moderating Meaningful Focus Groups

Focus groups are a great way to gather input from stakeholders in an evaluation. Unfortunately, the behavior of participants in a focus group can be a challenge. Some participants are dominators who need to be curbed so they do not contaminate the discussion. Others are cynics or wallflowers who also impede the quality of the discussion.

Through small group discussion, role play, and lecture, you will learn to identify and address ten problem behaviors that commonly occur in focus groups, along with practical strategies to prevent, manage, and leverage these behaviors. You will also have the chance to share your own experiences with focus groups and gather helpful tactics that others have used.

You will learn:

  • Ten common types of problem behavior in focus groups;
  • Practical strategies for managing and leveraging problem behavior in focus groups;
  • Methods of identifying and preventing problem behavior before it occurs.

Robert Kahle is the author of Dominators, Cynics, and Wallflowers: Practical Strategies for Moderating Meaningful Focus Groups (Paramount Market Publishing, 2007). Kahle has been working on the issue of managing difficult behavior in focus groups and other small group settings since the mid 1990s.

Session 40: Moderating Focus Groups
Prerequisites: Understanding of qualitative methods and small group dynamics. Experience moderating focus groups.
Scheduled: Wednesday, November 10, 8:00 AM to 11:00 AM
Level: Intermediate


41. Nonparametric Statistics: What to Do When Your Data is Skewed or Your Sample Size is Small

So many of us have encountered situations where we simply did not end up with the robust, bell-shaped data set we thought we would have to analyze. In these cases, traditional statistical methods lose their power and are no longer appropriate. This workshop provides a brief overview of parametric statistics in order to contrast them with non-parametric statistics. Different data situations which require non-parametric statistics will be reviewed and appropriate techniques will be demonstrated step by step.

This workshop will combine a classroom style with group work. The instructor will use a laptop to demonstrate how to run the non-parametric statistics in SPSS. You are encouraged to e-mail the facilitator prior to the conference with your specific data questions which may then be chosen for problem-solving in the workshop.
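
The workshop demonstrates these procedures in SPSS; solely to show the shape of such an analysis, here is a minimal Python/SciPy sketch of one rank-based test on invented, skewed data.

    from scipy import stats

    # Hypothetical skewed outcome scores from two small groups (n = 8 each).
    program_a = [3, 4, 4, 5, 6, 7, 21, 35]
    program_b = [2, 2, 3, 3, 4, 4, 5, 9]

    # Mann-Whitney U test: a rank-based alternative to the independent-samples
    # t-test that does not assume normally distributed data.
    u_stat, p_value = stats.mannwhitneyu(program_a, program_b, alternative="two-sided")
    print(f"U = {u_stat:.1f}, p = {p_value:.3f}")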

You will learn:

  • What non-parametric statistics are;
  • How to identify situations in which your data requires non-parametric statistics versus parametric statistics;
  • How to run a variety of non-parametric statistics in SPSS;
  • How to interpret and report results.

Jennifer Camacho Catrambone uses statistics as part of her work at the Ruth M Rothstein CORE Center. She regularly teaches informal courses on the use of non-parametric statistics in the evaluation of small programs and enjoys doing independent evaluative and statistical consulting. This popular workshop regularly receives kudos for making the confusing accessible.

Session 41: Nonparametric Statistics
Scheduled: Wednesday, November 10, 8:00 AM to 11:00 AM
Level: Beginner, no prerequisites


42. Increasing 'Interpersonal Validity' and Ethical Practice: An Integral Evaluator Model

Too often, we look without seeing, listen without hearing and touch without feeling. As evaluators, we have a professional and ethical responsibility to transform processes and practices that may obscure or distort more than they illuminate. This workshop introduces an Integral Evaluator Quadrant model as a holistic self-assessment framework grounded in the recognition that evaluative judgments are inextricably bound up with culture and context. It represents the intersection of two dimensions: (individual versus collective vantage points) X (interior versus exterior environments). Walking around the quadrants facilitates systematic exploration of lenses, filters and frames vis-a-vis judgment-making within relevant situational, relational, temporal and spatial/geographic contexts.

Through a deliberative forum, guided by provocative seeding and percolating activities, you will gain enhanced understandings of the self-in-context within, for example, power and privilege hierarchies and enhanced understandings of the contexts-embodied-in-the-self across time. Together, we will explore valid uses of self as knower, inquirer and engager of others within as well as across salient diversity divides.

You will learn:

  • To cultivate the self-as-responsive-instrument;
  • To assess what the situational, relational, temporal and spatial/geographic contexts are calling for from you via multiple vantage points;
  • To engage in ongoing assessment of your own lenses, filters, and frames in order to enhance ethical praxis and quality;
  • To expand and enrich your diversity-relevant, boundary-spanning knowledge and skills repertoire.

Hazel Symonette brings over 30 years of work in diversity-related arenas. She is the founder and former Director of the University of Wisconsin Excellence Through Diversity Institute (2002-2009)—a year-long intensive train-the-trainers/facilitators campus workforce learning community and organizational change support network organized around contextually-responsive, multi-level assessment and evaluation. She is founder and current Director of a similar intensive community of practice that includes campus Workforce/ Student Teams: the Student Success Institute. She has served on the faculty of the Health Research and Education Trust's Cultural Competency Fellowship Program; the Institute on Higher Education Policy Summer Academy; and the Association of American Colleges & Universities Inclusive Excellence Summer Academy.

Session 42: Interpersonal Validity
Scheduled: Wednesday, November 10, 8:00 AM to 11:00 AM
Level: Beginner, no prerequisites


43. Impact Evaluation of International Development Programs: The Heifer Hoofprint Model

This workshop is over-subscribed and thus full. We do not maintain a waitlist. Please make an alternate selection.

Limitations of resources (money, time, baseline information) for planning and implementation, and ever-changing contexts, are some of the challenges faced by professionals trying to evaluate complex social interventions. Approaches that overcome most of those challenges and provide valid, credible, and useful evaluations of such interventions are clearly needed.

Through mini-lectures, group exercises, and discussion, this workshop will provide you with the opportunity to learn about the Heifer Hoofprint Impact Evaluation Model, an approach that has established creative ways to perform basic evaluation functions (e.g., values, criteria and indicators definition, rubrics development, fieldwork and data analysis strategies, synthesis) and address thorny issues such as causal attributions within such challenging environments.

You will learn:

  • To identify, clarify and select the relevant values to orient data collection and to form the basis for drawing evaluative conclusions from descriptive data;
  • To develop criteria and indicators to facilitate determination of the extent to which the values (abstract concepts) have been achieved;
  • To develop rubrics to provide an evaluative assessment of performance (scoring on a scale) for each indicator/criterion/overall;
  • To search for impact at three key levels: (i) project recipients; (ii) non-participants (local, other communities, regions, or countries); and (iii) the value systems of individuals, families, and communities;
  • To establish causal attributions based on fieldwork rather than on experiments.

Michael Scriven and Thomaz Chianca have conducted impact evaluations for Heifer International from 2005 to 2009 in 20 countries, drawing on their combined 50+ years of evaluation expertise. Rienzzie Kern has almost 20 years of experience as an evaluation practitioner and manager of evaluations in complex organizations. He has designed a self-evaluation toolkit that is published as an internal resource for Heifer International.

Session 43: Impact Evaluation
Scheduled: Wednesday, November 10, 8:00 AM to 11:00 AM
Level: Beginner, no prerequisites


Wednesday Afternoon Half-Day Workshops, November 10, 12 PM to 3 PM


44. An Executive Summary is Not Enough: Effective Reporting Techniques for Evaluators

This workshop is over-subscribed and thus full. We do not maintain a waitlist. Please make an alternate selection.

As an evaluator you are conscientious about conducting the best evaluation possible, but how much thought do you give to communicating your results effectively? Do you consider your job complete after submitting a lengthy final report? Reporting is an important skill for evaluators who care about seeing their results disseminated widely and recommendations actually implemented.

Drawing on current research, this interactive workshop will present an overview of three key principles of effective reporting and engage participants in a discussion of reporting's role in effective evaluation. Participants will leave with an expanded repertoire of innovative reporting techniques and will have the opportunity to work on a real example in groups.

You will learn:

  • The role of communication and reporting in good evaluation practice;
  • Three key principles for communicating results effectively;
  • Four innovative reporting techniques.

Kylie Hutchinson has served since 2005 as the trainer for the Canadian Evaluation Society's Essential Skills Series (ESS) in British Columbia. Her interest in dissemination and communications stems from twenty years of experience in the field of evaluation.

Session 44: Effective Reporting
Scheduled: Wednesday, November 10, 12:00 PM to 3:00 PM
Level: Beginner, no prerequisites


45. Integrated Data Analysis in Mixed Methods Evaluation

This workshop is over-subscribed and thus full. We do not maintain a waitlist. Please make an alternate selection.

Evaluators have been mixing methods in practice for decades. Building on this practice, the emerging theory of mixing methods aims to provide structure, justification, and frameworks of possibilities for combining different kinds of methods and data in one study. Mixed methods theory, however, continues to rely heavily on creative and thoughtful mixed methods practice. This is especially true in the area of integrated data analysis.

This workshop will explore analytic possibilities for mixing different kinds of data from different kinds of methods. Integrated data analysis will first be situated in the mixed methods landscape. A framework for integrated analysis will be presented, with multiple examples and an ongoing critique of possible challenges to these analyses. Following the presentation, participants will use their own creativity to engage in a practice exercise in integrative mixed methods analysis.

You will learn:

  • The location and importance of integrated data analyses in mixed methods evaluation;
  • Multiple specific strategies for integrated mixed methods analysis;
  • Continuing challenges of integration in mixed methods evaluation.

Jennifer Greene has conducted mixed methods workshops in multiple venues around the world over the past 10+ years. A noted author on mixed-methods issues, and an Associate Editor for the Journal of Mixed Methods Research, she brings both theoretical and practical knowledge and experience to the workshop venue.

Session 45: Integrated Data Analysis
Prerequisites: Familiarity with the basic methodological literature in mixing methods and knowledge of and direct experience in more than one methodological tradition
Scheduled: Wednesday, November 10, 12:00 PM to 3:00 PM
Level: Advanced


46. The Basics of Using Theory to Improve Evaluation Practice

This workshop is designed to provide practicing evaluators with an opportunity to improve their understanding of how to use theory to improve evaluation practice. We'll examine social science theory and stakeholder theories, including theories of change and their application to making real improvements in how evaluations are framed and conducted.

Lecture, exercises, and discussions will help participants learn how to apply evaluation theories, social science theories, and stakeholder theories of change to improve the accuracy and usefulness of evaluations. A wide range of examples from evaluation practice will be provided to illustrate main points and key take-home messages.

You will learn:

  • To define and describe evaluation theory, social science theory, and program theory;
  • How evaluation theory can be used to improve evaluation practice;
  • How implicit and explicit social science theories can be used to guide evaluation decisions;
  • The components and processes of several commonly used social science theories that have been used to develop and evaluate interventions;
  • How developing stakeholder theories of change can be used to improve evaluation practice.

Stewart Donaldson is Dean of the School of Behavioral and Organizational Sciences at Claremont Graduate University. He has published widely on the topic of applying program theory, developed one of the largest university-based evaluation training programs, and has conducted theory-driven evaluations for more than 100 organizations during the past decade. John LaVelle is an advanced graduate student at CGU and has taught courses on the application of psychology to understanding social problems.

Session 46: Using Theory to Improve Practice
Scheduled: Wednesday, November 10, 12:00 PM to 3:00 PM
Level: Beginner, no prerequisites


47. The Tools and Techniques of Gender Responsive Evaluation

Despite the fact that most development agencies are committed to gender equality, gender continues to be one of the areas that is not adequately addressed in many evaluations, and conventional research tools are often not well suited for understanding how different social groups are affected by development interventions. This workshop will provide you with the tools and techniques for assessing the differential impacts of development programs on men and women. It will also provide a framework for understanding how program interventions are affected by, and in turn affect, the social and economic roles and relationships among members of the household, community, and broader groups in society.

We'll provide you with tools and techniques for applying feminist principles that place women and girls at the center of the evaluation, where gender, race, class, and sexuality are the units of analysis. The goal is to identify, expose, challenge, and deconstruct oppressive systems while gaining a better understanding of women's and girls' histories, ways of knowing, experiences, and knowledge. The workshop will present tools for developing gender sensitive conceptual frameworks, evaluation design, sampling, data collection and analysis, and the dissemination and use of evaluations. We'll critique case studies employing different gender sensitive approaches, and you will apply the techniques in hands-on exercises.

You will learn:

  • The similarities and differences between gender analysis and feminist evaluation;
  • The importance of incorporating gender responsive approaches into program evaluation, the value added as a result, and the operational and policy consequences of ignoring gender;
  • The tools and techniques for the design, implementation, and use of gender responsive evaluation;
  • Why gender issues are frequently ignored in program evaluation and strategies for increasing the use of gender responsive approaches.

A team consisting of Michael Bamberger, Kathryn Bowen, Belen San Luque, and Tessie Catsambas will lead the workshop. Together they bring years of expertise as facilitators and trainers as well as on-the-ground experience evaluating development programs around the world.

Session 47: Gender Responsive Evaluation
Scheduled: Wednesday, November 10, 12:00 PM to 3:00 PM
Level: Beginner, no prerequisites


48. Methods for Evaluating Research, Technology, and Development (RTD) Projects and Programs

Expanding the capabilities of RTD evaluation has gained worldwide prominence due to increased awareness of its contributions to innovation policy, programs, and resource allocation, as well as the need for accountability. This is evidenced by the Office of Science and Technology Policy (OSTP) 2009 Science of Science Policy Roadmap calling for new methods and by recent National Science Foundation (NSF) Science of Science and Innovation Policy grants for evaluation method development.

With an aim to expand the capabilities of both evaluators and managers who plan, commission, and use evaluation results, this workshop will explore the latest RTD evaluation methods and their applications; provide guidance on how to select from among a variety of evaluation methods; discuss modeling and data requirements; address special issues associated with particular methods; explore effective use of results; address how to incorporate evaluation into other key programmatic functions and activities, such as strategic planning and budget justifications; and provide illustrations from studies employing a variety of methods, including benefit-cost analysis, modeling, and historical tracing using bibliometric techniques.

You will learn:

  • Evaluation options for a variety of RTD applications;
  • How to apply multiple evaluation methods and to overcome challenges associated with different methods;
  • Ways to apply evaluation tools in actual settings;
  • How to institutionalize evaluation into the RTD management cycle.

Gretchen Jordan, Rosalie Ruegg, and Cheryl Oros have extensive experience facilitating training in academic and professional settings. Together, they bring 75+ years of evaluation experience to the session, having conducted RTD evaluations across the United States federal sector, examining a variety of research, technology, and development programs.

Session 48: Evaluating RTD Projects/Programs
Prerequisites: Knowledge of logic modeling and at least several evaluation methods; Experience either conducting, commissioning, or using evaluation results
Scheduled: Wednesday, November 10, 12:00 PM to 3:00 PM
Level: Intermediate


49. Cost-Effectiveness Analysis of Health and Human Services Programs

This workshop is over-subscribed and thus full. We do not maintain a waitlist. Please make an alternate selection.

As the cost of health and human service programs far outpaces our willingness and ability to pay, the importance of, and demand for, measuring their economic efficiency continues to grow. This workshop focuses on building basic skills in cost-effectiveness analysis for health and human service interventions.

You will be taught the basic types of cost-effectiveness analysis and will learn the mathematical model that underlies robust economic evaluations that are crucial to decision makers. We will discuss interpretation and presentation of the results as well as the limitations inherent in this field of evaluation. The session will be interactive and you should come prepared with a program that you are familiar with that is suited for cost-effectiveness analysis. You will receive practical, problem-solving experience with this exercise and will be immediately able to use the skills learned in the workshop in your evaluation activities.
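
As a taste of the underlying arithmetic (illustrative figures only, not drawn from the workshop), the core summary measure in a cost-effectiveness analysis is the incremental cost-effectiveness ratio, sketched here in Python.

    # Hypothetical figures for a new intervention versus usual care.
    cost_new, cost_usual = 120_000.0, 80_000.0      # total program costs in dollars
    effect_new, effect_usual = 950.0, 800.0         # e.g., symptom-free days gained

    # Incremental cost-effectiveness ratio (ICER): the extra dollars spent
    # per extra unit of health effect gained by choosing the new intervention.
    icer = (cost_new - cost_usual) / (effect_new - effect_usual)
    print(f"ICER = ${icer:,.2f} per additional unit of effect")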

You will learn:

  • What types of economic evaluation are available to the evaluator of health and human service programs;
  • What cost-effectiveness is;
  • The four essential questions that need to be addressed before a Cost-Effectiveness Analysis (CEA) can begin;
  • How to do a simple CEA;
  • How to interpret and communicate the results of a CEA effectively.

Edward Broughton is responsible for a portfolio of cost-effectiveness analyses in continuous quality improvement programs in several countries in a variety of health areas including maternal and neonatal health, HIV/AIDS care and treatment and pediatric health. He has provided training on Cost-Effectiveness Analysis in the United States, Afghanistan, and China.

Session 49: Cost-Effectiveness Analysis
Prerequisites: Basic program evaluation skills
Scheduled: Wednesday, November 10, 12:00 PM to 3:00 PM
Level: Intermediate


Sunday Morning Half-Day Workshops, November 14, 9 AM to 12 PM


50. Doing Cost-Inclusive Evaluation: We Should, You Can, and Here's How

Through this workshop, you will learn alternative strategies for modeling, evaluating, managing, and systematically improving the cost-effectiveness and cost-benefit of health and human services. We will extend social science research methods to measure costs and benefits at the level of the individual consumer. A quantitative understanding of what occurs between the "costs in" and "outcomes out" is enhanced by a model that distinguishes between performance of and participation in program procedures, and between desired and actual change in biopsychosocial processes responsible for program outcomes.

Together, we'll explore each step in understanding and improving relationships between resources used, procedures implemented, biopsychosocial processes altered or instilled, and outcomes produced. You'll draw upon examples from the instructor’s evaluation research in health, mental health, and substance abuse settings.

You will learn:

  • Four reasons to include costs in quantitative and qualitative evaluation plans;
  • Two alternative methods for measuring costs for use in cost-inclusive evaluations;
  • Definitions and examples of 3 types of cost-inclusive evaluation;
  • A method of measuring benefits and utilities;
  • Two or more ethical issues in evaluations that include costs, and plausible resolutions for each.

Brian Yates has published over 70 articles, book chapters, and reviews - plus five books, most focusing on the application of cost-effectiveness analysis, cost-benefit analysis, or cost-utility analysis to the systematic evaluation and improvement of human services. He regularly facilitates workshops in the area of cost-inclusive evaluation at gatherings of professional associations and training programs.

Session 50: Cost-Inclusive Evaluation
Scheduled: Sunday, November 14, 9:00 AM to 12:00 PM
Level: Beginner, no prerequisites


51. Hearing Silenced Voices: Using Visual Methods to Include Traditionally Disenfranchised Populations in Evaluation

While evaluators understand the importance of multiple stakeholder perspectives, many struggle with how to ensure the participation of those traditionally ‘without a voice,’ vulnerable or disenfranchised populations. Children and youth, persons with disabilities, or those having emerging literacy in the majority language(s) hold important views regarding the programs, services, and situations which affect them, but their perspectives are not always included.

This workshop will be grounded in theory, but will be highly participatory and interactive. Stemming from a rights-based approach, the workshop will explore the why and how of including traditionally disenfranchised populations in evaluation. Through their work in Canada and numerous countries in Europe, the facilitators will share a variety of visually-based techniques and how these can be coupled with more traditional methods. Participants will be introduced to the use of drawing, icons, photography, and mapping, as tools for eliciting often-silenced voices. Ethical considerations will also be discussed.

You will learn:

  • To identify methods that will give voice to populations traditionally disenfranchised or disconnected from evaluation;
  • To use a variety of visual methods;
  • To apply ethical and practical considerations to your evaluation processes.

Linda Lee, Proactive Information Services Inc, has facilitated professional development workshops in the area of evaluation for over 25 years, both in North America and internationally.

Session 51: Hearing Silenced Voices
Prerequisites: Experience with qualitative methods; Experience conducting evaluations that involve traditionally disenfranchised or marginalized populations
Scheduled: Sunday, November 14, 9:00 AM to 12:00 PM
Level: Intermediate


52. Participatory Program Implementation Tools That Lay the Foundation for Evaluation: Practical Participatory Processes and Tools for Evaluators, and M&E Personnel

Participatory program implementation tools lend themselves well to laying a foundation for conducting participatory evaluations along the program cycle. This session will show you how to use two tools employed under the Participatory Action for Community Enhancement (PACE) Methodology for program implementation and adapt them to broader evaluation settings.

The first tool is a facilitated exercise to assess the relationship, expectations, and challenges a program team faces in working with key stakeholder groups under their program. The second tool is an exercise that looks at the "micro-politics" of a program to identify social factors such as gender, class, race, ethnicity, or tribalism, define them from the participants' perspective, and determine the role they play in program implementation. The facilitator will walk you through the actual exercises and carry out a debrief at the end of each to clarify questions on how to conduct the exercises and to identify ways the tools could be adapted to your needs.

You will learn:

  • How to apply the Social Factors Analysis exercise with a group of stakeholders to identify and analyze the social factors (gender, class, race, ethnicity, or tribalism) at play in a particular program context;
  • How to apply the "The Groups We Work With" exercise to assess the inherent expectations and challenges a program team faces in working with various stakeholder groups;
  • At least three ways to adapt the tools to an evaluation setting.

Scott Yetter has been a senior trainer for the Participatory Action for Community Empowerment (PACE) methodology for the past nine years. He has also designed and launched a series of multi-day workshops to train local and international staff on participatory methodologies and program implementation. Carlene Baugh is a Monitoring and Evaluation Advisor for CHF International.

Session 52:  Participatory Program Tools for M&E
Prerequisites: Community or stakeholder group facilitation experience; prior experience working with or participating in community groups
Scheduled: Sunday, November 14, 9:00 AM to 12:00 PM
Level: Intermediate


53. Purposeful Program Theory

While program theory has become increasingly popular over the past 10 to 20 years, guides for developing and using logic models sometimes sacrifice contextual differences in practice in the interest of clear guidance and consistency across organizations. This session is designed for advanced evaluators who are seeking to explore ways to develop and use program theory that suit the particular characteristics of the intervention, the evaluation purpose, and the organizational environment.

In addition to challenges identified by participants, the workshop will use mini-lectures, exercises, and discussions to address three particularly important issues: improving the quality of models by drawing on generic theories of change and program archetypes; balancing the tension between simple models, which communicate clearly, and complicated models, which better represent reality; and using the model to develop and answer evaluation questions that go beyond simply meeting targets.

You will learn:

  • A strategy for purposeful development and use of program theory, drawing from an expanded repertoire of options;
  • Additional ways to represent logic models, including results chains, and multi-level logic models;
  • How to better represent complicated and complex interventions.

Patricia Rogers is an experienced facilitator and evaluator, and one of the leading authors in the area of Program Theory. She has taught for The Evaluators Institute and is on faculty at the Royal Melbourne Institute of Technology.

Session 53: Purposeful Program Theory
Prerequisites: Knowledge of and experience developing and using logic models and program theory for monitoring and evaluation
Scheduled: Sunday, November 14, 9:00 AM to 12:00 PM
Level: Advanced


54. Data Preparation and Management

Multiple and conflicting data file versions, data inconsistencies, mis-coded and mis-matched variables… Errors in data preparation and management are common and cause major difficulties in studies, such as delays in obtaining usable data, difficulties in analysis, and even misleading results. The implementation of standard data preparation and management procedures can help prevent such difficulties. This course will teach participants how to quickly and accurately detect, resolve, and prevent data errors.

The first half of the course will cover how to troubleshoot and prevent the most common data preparation and management errors, while the second half will address real life data management problems that course participants experience. Prior to the course, participants will have the opportunity to submit descriptions of problems they are having with their data. We'll select and focus on key problems from participant submissions, present problems to the class using simulated data, and provide step-by-step solutions.
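
To make the flavor of these tasks concrete, here is a small Python/pandas sketch of the kind of checks the course addresses; the data set and its problems are invented for illustration and are not course materials.

    import pandas as pd

    # Hypothetical raw survey extract with typical problems.
    raw = pd.DataFrame({
        "respondent_id": [101, 102, 102, 104],
        "age":           [34, 29, 29, 420],            # 420 is implausible
        "site":          ["North", "north ", "north ", "South"],
    })

    # Detect exact duplicate records and out-of-range values.
    duplicates = raw[raw.duplicated()]
    bad_ages = raw[~raw["age"].between(0, 110)]

    # Standardize an inconsistently coded variable.
    raw["site"] = raw["site"].str.strip().str.title()

    print(f"{len(duplicates)} duplicate row(s), {len(bad_ages)} out-of-range age value(s)")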

You will learn:

  • How to prevent problems and errors in data preparation and management;
  • How to quickly and accurately detect and correct common data errors;
  • How simple data management decisions can have a major methodological impact - and how to ensure sound decisions are made;
  • How to automate data preparation and management tasks for greater accuracy and speed.

Allison Minugh, Susan Janke, and Nicoletta Lomuto are experienced facilitators hailing from Datacorp, a consulting firm dedicated to improving the human condition by empowering clients to use information to its fullest extent.

Session 54: Data Preparation and Management
Scheduled: Sunday, November 14, 9:00 AM to 12:00 PM
Level: Beginner, no prerequisites