Professional Development Workshops

Professional Development Workshops are hands-on, interactive sessions that provide an opportunity to learn new skills or hone existing ones at Evaluation 2006.

Professional development workshops precede and follow the conference. These workshops differ from sessions offered during the conference itself in at least three ways:

  • Each is longer (3, 6, or 12 hours in length) and thus provides a more in-depth exploration of a skill or area of knowledge,

  • Presenters are paid for their time and are expected to have significant experience both in presenting and in the subject area,

  • Attendees pay separately for these workshops and are given the opportunity to evaluate the experience.

Sessions are filled on a first-come, first-served basis and most are likely to fill before the conference begins.

REGISTRATION: Registration for professional development workshops is handled as part of the conference registration forms; however, you may register for professional development workshops even if you are not attending the conference itself (use the regular conference registration form and simply uncheck the conference registration box).

FEES: Workshop registration fees are in addition to the fees for conference registration:


Fees are tiered by workshop length (Two Day, One Day, or Half Day) and by registration category, including AEA Members.
FULL SESSIONS: Sessions that are closed because they have reached their maximum attendance will be clearly marked below the session name. No further registrations will be accepted for full sessions and we do not maintain waiting lists. Once sessions are closed, they will not be re-opened.


TWO DAY, MONDAY-TUESDAY, OCTOBER 30-31, 9 am to 4 pm

Qualitative Methods; Quantitative Methods; Evaluation 101; Logic Models; Participatory Eval; Interviewing


ONE DAY, TUESDAY, OCTOBER 31, 9 am to 4 pm

Immigrant Communities; Eval Methodology; Effect Size Measures; Using Theories; Organizational Collaboration; Outcomes Research in Health Contexts; Coding; Using Systems Tools


ONE DAY, WEDNESDAY, NOVEMBER 1, 8 am to 3 pm

Needs Assessment; Business of Evaluation; Appreciative Inquiry; Theory-Driven Evaluation; Rasch Measurement; Experiential Activities; Presenting Evaluation Findings; Survey Design; Experimental Design; Collaborative Eval; Utilization-focused Eval; Multilevel Models; RealWorld Evaluation; Evaluation Dissertation; Working Sessions; Multiple Regression; Performance Measurement; Hard-core Qualitative; Cultivating Self


Using Stories; Visual Presentations; Writing Scopes-of-Work; Sampling 101


Data Collection Instruments; Focus Group Research; Building Eval Plans; Programs for Children


SUNDAY, NOVEMBER 5, HALF DAY, 9 am to 12 pm

Moderator Training; Program Theory; Fundable Nonprofit Programs; Empowerment Evaluation; Improving Your Eval Practice


Qualitative Methods

Qualitative data can humanize evaluations by portraying people and stories behind the numbers. Qualitative inquiry involves using in-depth interviews, focus groups, observational methods, and case studies to provide rich descriptions of processes, people, and programs. When combined with participatory and collaborative approaches, qualitative methods are especially appropriate for capacity-building-oriented evaluations.

Through lecture, discussion, and small-group practice, this workshop will help you to choose among qualitative methods and implement those methods in ways that are credible, useful, and rigorous. It will culminate with a discussion of new directions in qualitative evaluation.

You will learn:

  • Types of evaluation questions for which qualitative inquiry is appropriate,

  • Purposeful sampling strategies,

  • Interviewing, case study, and observation methods,

  • Analytical approaches that support useful evaluation.

Michael Quinn Patton is an independent consultant and professor at the Union Institute. An internationally known expert on utilization-focused evaluation and qualitative methods, he published the third edition of Qualitative Research and Evaluation Methods (SAGE) in 2001.

Session 1: Qualitative Methods
Scheduled: Monday and Tuesday, October 30 and 31, 9 am to 4 pm
Level: Beginner, no prerequisites

Quantitative Methods

This session is full. Registrations are no longer being accepted for this workshop.

Quantitative data offers opportunities for numerical descriptions of populations and samples. The challenge is in knowing which analyses are best for a given situation.   

Designed for the practitioner needing a refresher course and/or guidance in applying quantitative methods to evaluation contexts, the workshop covers the basics of parametric and nonparametric statistics, as well as how to report your findings.

Hands-on exercises and computer demonstrations interspersed with mini-lectures will introduce methods and concepts. The instructor will review examples of research and evaluation questions and the statistical methods appropriate to developing a quantitative data-based response.

You will learn:

  • The conceptual basis for a variety of statistical procedures,

  • How more sophisticated procedures are based on the statistical basics,

  • Which analyses are most applicable for a given data set or evaluation question,

  • How to interpret and report findings from these analyses.
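The parametric-versus-nonparametric choice the workshop covers can be illustrated with a small sketch. This is not workshop material; the data and function names are invented, and the snippet computes only the test statistics (not p-values) using the Python standard library:

```python
# Two common two-sample statistics: Welch's t (parametric, assumes roughly
# normal data) and Mann-Whitney U (nonparametric, rank-based).
from math import sqrt
from statistics import mean, variance

def welch_t(a, b):
    """Welch's t statistic: difference in means over its standard error."""
    se = sqrt(variance(a) / len(a) + variance(b) / len(b))
    return (mean(a) - mean(b)) / se

def mann_whitney_u(a, b):
    """Mann-Whitney U statistic, counted directly over all pairs (ties get 0.5)."""
    u1 = sum(1 for x in a for y in b if x < y)
    u1 += 0.5 * sum(1 for x in a for y in b if x == y)
    return min(u1, len(a) * len(b) - u1)

treatment = [12.1, 14.3, 13.8, 15.2, 14.9]
control = [11.0, 12.4, 11.9, 12.8, 11.5]
print(round(welch_t(treatment, control), 2))
print(mann_whitney_u(treatment, control))
```

For skewed data or ordinal scales, the rank-based U is usually the safer choice; with roughly normal interval data, the t statistic has more power.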

Katherine McKnight applies quantitative analysis as Director of Program Evaluation for Pearson Achievement Solutions. Additionally, she teaches Research Methods, Statistics, and Measurement in Public and International Affairs at George Mason University in Fairfax, VA.

Session 2: Quantitative Methods
Scheduled: Monday and Tuesday, October 30 and 31, 9 am to 4 pm
Level: Beginner, no prerequisites

Evaluation 101: Intro to Evaluation Practice

This session is full. Registrations are no longer being accepted for this workshop.

Begin at the beginning and learn the basics of evaluation from an expert trainer. The session will focus on the logic of evaluation to answer the key question: "What resources are transformed into what program evaluation strategies to produce what outputs for which evaluation audiences, to serve what purposes?" Enhance your skills in planning, conducting, monitoring, and modifying the evaluation so that it generates the information needed to improve program results and communicate program performance to key stakeholder groups.

A case-driven instructional process, using discussion, exercises, and lecture will introduce the steps in conducting useful evaluations: Getting started, Describing the program, Identifying evaluation questions, Collecting data, Analyzing and reporting, and Using results.

You will learn:

  • The basic steps to an evaluation and important drivers of program assessment,

  • Evaluation terminology,

  • Contextual influences on evaluation and ways to respond,

  • Logic modeling as a tool to describe a program and develop evaluation questions and foci,

  • Methods for analyzing and using evaluation information.

John McLaughlin has been part of the evaluation community for over 30 years working in the public, private, and non-profit sectors. He has presented this workshop in multiple venues and will tailor this two-day format for Evaluation 2006.

Session 3: Evaluation 101
Scheduled: Monday and Tuesday, October 30 and 31, 9 am to 4 pm
Level: Beginner, no prerequisites

Logic Models for Program Evaluation and Planning

Many programs fail to start with a clear description of the program and its intended outcomes, undermining both program planning and evaluation efforts. The logic model, as a map of what a program is and intends to do, is a useful tool for clarifying objectives, improving the relationship between activities and those objectives, and developing and integrating evaluation plans and strategic plans.

First, we will recapture the utility of program logic modeling as a simple discipline, using cases in public health and human services to explore the steps for constructing, refining and validating models. Then, we’ll examine how to use logic models in evaluation to gain stakeholder consensus and determine evaluation focus, in program monitoring to determine a set of balanced performance measures, and in strategic planning to affirm mission and identify key strategic issues. Both days use modules with presentations, small group case studies, and debriefs to reinforce group work.

You will learn:

  • To construct logic models,

  • To develop an evaluation focus based on a logic model,

  • To use logic models to answer strategic planning questions and select and develop performance measures.
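The idea of moving from a logic model to evaluation questions can be sketched in a few lines. This is a hypothetical illustration, not workshop material; the components, program details, and question templates are all invented:

```python
# A logic model as a simple chain of components, plus draft evaluation
# questions derived mechanically from each component.
logic_model = {
    "inputs":     ["funding", "trained staff"],
    "activities": ["weekly counseling sessions", "community outreach"],
    "outputs":    ["sessions delivered", "residents reached"],
    "outcomes":   ["reduced risk behavior", "improved well-being"],
}

def evaluation_questions(model):
    """Turn each logic-model component into a draft focus question."""
    templates = {
        "inputs":     "Were the planned {} actually available?",
        "activities": "Were the {} implemented as designed?",
        "outputs":    "Did the program produce the expected {}?",
        "outcomes":   "Is there evidence the program contributed to the {}?",
    }
    return [templates[part].format(", ".join(items))
            for part, items in model.items()]

for question in evaluation_questions(logic_model):
    print(question)
```

In practice the questions come from stakeholder negotiation rather than templates, but the mapping from model component to evaluation focus is the discipline the workshop teaches.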

Thomas Chapel is the central resource person for planning and program evaluation at the Centers for Disease Control and Prevention. Tom has taught this workshop for the past three years to much acclaim.

Session 4: Logic Models
Scheduled: Monday and Tuesday, October 30 and 31, 9 am to 4 pm
Level: Beginner, no prerequisites

Participatory Evaluation

Participatory evaluation practice requires evaluators to be skilled facilitators of interpersonal interactions. This workshop will provide you with theoretical grounding (social interdependence theory, conflict theory, and evaluation use theory) and practical frameworks for analyzing and extending your own practice.

Through presentations, discussion, reflection, and case study, you will experience strategies to enhance participatory evaluation and foster interaction. You are encouraged to bring examples of challenges faced in your practice for discussion.

You will learn:

  • Strategies to foster effective interaction, including belief sheets; values voting; three-step interview; cooperative rank order; graffiti; jigsaw; and data dialogue,

  • Responses to challenges in participatory evaluation practices,

  • Four frameworks for reflective evaluation practice.

Jean King has over 30 years of experience as an award-winning teacher at the University of Minnesota. As an evaluation practitioner, she has received AEA’s Myrdal award for outstanding evaluation practice. Laurie Stevahn is a professor at Seattle University with extensive facilitation experience as well as applied experience in participatory evaluation.

Session 5: Participatory Eval
Prerequisites: Basic evaluation skills
Scheduled: Monday and Tuesday, October 30 and 31, 9 am to 4 pm
Level: Intermediate

Cost-Benefit and Cost-Effectiveness Methods

This workshop has been cancelled.

Interviewing Individuals and Groups

Successful interviewing is hard to do! Interviewing looks easy when done by experts but in practice it can be challenging for the rest of us. Interviewing is not a mechanical process whereby information is transferred from the respondent to the evaluator. Instead the interview is a social interaction where the interviewee is responding to a complex environment. The evaluator must be sensitive to how that environment influences the respondent and his or her comments.  

This session is designed for those who do interviews for evaluation or research. The workshop will help you plan, organize, conduct and analyze interviews (individual and group) in a manner that is consistent with quality protocol, ethical responsiveness, and respect for the respondent.

This session will be an overview of evaluation/research interviewing and identify practices and techniques used by experts. This session will cover topics such as planning the interview, setting up the logistics, developing appropriate questions, skills in asking questions, capturing the data, and analyzing results.  

You will learn:

  • Critical skills in conducting individual and group interviews,

  • Effective strategies for interviewing,

  • By experiencing an interview as a respondent or an interviewer,

  • The limitations of interviews and how evaluators deal with those concerns.

Richard A. Krueger is professor emeritus and senior fellow at the University of Minnesota. In 30+ years of practice he has conducted thousands of interviews and he still gets excited about listening to people. He is author of 6 books on focus group interviewing and is a past president of AEA.

Session 7: Interviewing
Scheduled: Monday and Tuesday, October 30 and 31, 9 am to 4 pm

Evaluation in Immigrant and Other Cultural Communities

Attend to the unique issues of working in communities and cultures with which you may be unfamiliar and within which your craft is unknown. This workshop will examine such issues as access, entry, relationship building, sampling, culturally specific outcomes, instrument development, translation, culturally appropriate behavior and stakeholder participation.

Drawing on case examples from practice in immigrant and other cultural communities, we will illustrate what has and hasn’t worked, principles of good practice, and the learning opportunities for all involved. Through simulations, roundtable discussions and exercises you will experience the challenges of cross-cultural evaluation and how to deal with them.

You will learn:

  • Approaches to evaluation practice in unfamiliar cultures and settings,

  • How to draw upon the traditions of communities in mutually beneficial ways,

  • Useful, respectful and credible ways to collect and report information including design, tool development, data collection and analysis for stakeholders,

  • Which tools and approaches to use for examining culturally sensitive and taboo subject matter.

Barry Cohen and Mia Robillos are on the staff of Rainbow Research, Inc. Barry and Mia have worked with Hmong, Latino, Somali, Nigerian, Native American, Filipino, African-American, Asian American and Hispanic-American cultures as part of their evaluation practice. They also have extensive experience as facilitators on issues related to responding to cultural context.

Session 8: Immigrant Communities
Prerequisites: Work with immigrant communities
Scheduled: Tuesday, October 31, 9:00 am to 4:00 pm
Level: Intermediate

Evaluation Methodology Basics

Evaluation logic and methodology are the principles (logic) and procedures (methodology) that guide evaluators in combining descriptive data with relevant values to draw conclusions about how good, valuable, or important something is, rather than just describing what it is like or what happened.

This workshop combines mini-lectures, demonstrations, small group exercises and interactive discussions to offer a “nuts and bolts” introduction to concrete, easy-to-follow, practical methods for conducting an evaluation.

You will learn:

  • The difference between research methodology and evaluation-specific methodology,

  • How to extract an evaluation design from a logic model and simple evaluation framework,

  • Where the “values” come from in an evaluation,

  • How to respond to questions about subjectivity,

  • Which evaluative criteria are more important than others,

  • The fundamentals of using rubrics to convert descriptive data to evaluative findings.
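The last bullet, converting descriptive data to evaluative findings via a rubric, can be sketched simply. The thresholds and labels below are invented for illustration and are not taken from the workshop or its source text:

```python
# A rubric maps a descriptive measure (here, a 0-100 score) onto an
# explicit evaluative label, making the value judgment transparent.
RUBRIC = [  # (minimum score, evaluative label), highest band first
    (90, "excellent"),
    (75, "good"),
    (60, "adequate"),
    (0,  "poor"),
]

def rate(score):
    """Convert a descriptive score into an evaluative finding."""
    for minimum, label in RUBRIC:
        if score >= minimum:
            return label
    raise ValueError("score below rubric range")

print(rate(82))  # a score of 82 falls in the 75-89 band: "good"
```

The point of the workshop's approach is that the band boundaries are set with stakeholders in advance, so the evaluative conclusion is defensible rather than ad hoc.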

Jane Davidson has nearly 20 years of experience teaching and conducting workshops on a wide variety of topics including evaluation and research methods. The methodologies presented in this workshop are drawn from her book Evaluation Methodology Basics: The Nuts and Bolts of Sound Evaluation (SAGE).

Session 9: Eval Methodology
Scheduled: Tuesday, October 31, 9:00 am to 4:00 pm
Level: Beginner, no prerequisites

Using Effect Size and Association Measures

Answer the call to report effect size and association measures as part of your evaluation results. Improve your capacity to understand and apply a range of measures including: standardized measures of effect sizes from Cohen, Glass, and Hedges; Eta-squared; Omega-squared; the Intraclass correlation coefficient; and Cramer’s V.

Through mini-lecture, hands-on exercises, and demonstration, you will improve your understanding of the theoretical foundation and computational procedures for each measure as well as ways to identify and correct for bias.

You will learn:

  • How to select, compute, and interpret the appropriate measure of effect size or association,

  • Considerations in the use of confidence intervals,

  • SAS and SPSS macros to compute common effect size and association measures,

  • Basic relationships among the measures.
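Two of the measures named above can be illustrated with a short stdlib-only sketch. This is not workshop code; the sample data are invented, and Hedges' correction uses the common approximation:

```python
# Cohen's d (pooled-SD standardized mean difference), Hedges' g
# (small-sample bias correction), and eta-squared for a one-way layout.
from math import sqrt
from statistics import mean, variance

def cohens_d(a, b):
    na, nb = len(a), len(b)
    pooled_var = ((na - 1) * variance(a) + (nb - 1) * variance(b)) / (na + nb - 2)
    return (mean(a) - mean(b)) / sqrt(pooled_var)

def hedges_g(a, b):
    # Approximate correction factor 1 - 3/(4*df - 1), df = n1 + n2 - 2.
    n = len(a) + len(b)
    return cohens_d(a, b) * (1 - 3 / (4 * n - 9))

def eta_squared(*groups):
    # Proportion of total variance accounted for by group membership.
    grand = mean(x for g in groups for x in g)
    ss_between = sum(len(g) * (mean(g) - grand) ** 2 for g in groups)
    ss_total = sum((x - grand) ** 2 for g in groups for x in g)
    return ss_between / ss_total

group_a = [1, 2, 3, 4, 5]
group_b = [3, 4, 5, 6, 7]
print(round(cohens_d(group_a, group_b), 3))
print(round(eta_squared(group_a, group_b), 3))
```

Note how g shrinks d toward zero for small samples; that bias correction is one of the workshop's themes.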

Jack Barnette hails from The University of Alabama at Birmingham. He has been conducting research and writing on this topic for the past ten years. Jack has won awards for outstanding teaching and is a regular facilitator both at AEA's annual conference and the CDC/AEA Summer Evaluation Institute.

Session 10: Effect Size Measures
Prerequisites: Univariate statistics through ANOVA & understanding and use of confidence levels
Scheduled: Tuesday, October 31, 9:00 am to 4:00 pm
Level: Advanced

Using Program Theories and Theories of Change To Improve Intervention Logics

Learn how to improve the quality of program intervention logics that are developed for purposes of evaluation and design. Common practices in developing intervention logics include stakeholder consultations, staff workshops, analysis of documentation and use of evaluator expertise. 

You will investigate how these usual processes may be enhanced by the injection of various types of theories of change, including carrot-and-stick approaches and socio-ecological, diffusion, stage, and empowerment theories. Through extensive group exercises and discussion, you will explore how to identify and critique the relevance of theories of change and have the opportunity to build intervention logics from program theories.

You will learn:

  • The differences and relationships among theories of change, program theory, program (intervention) logic, and logic modeling,

  • How to choose and use program theories and theories of change to develop, review and improve intervention logics,

  • Theories of social and behavior change and how they can be used to develop and evaluate intervention logics.

Sue Funnell of Performance Improvement Pty Limited has over 20 years of international workshop facilitation experience as well as an extensive background in the practical application of program theory and intervention logics.

Session 11: Using Theories
Prerequisites: Basic understanding of concepts of program theory and program logic

Scheduled: Tuesday, October 31, 9:00 am to 4:00 pm
Level: Intermediate

Evaluating Inter- and Intra-Organizational Collaboration

“Collaboration” is a misunderstood, under-researched, and rarely operationalized construct. Program and organizational stakeholders who aim to work collaboratively struggle to identify, practice, and evaluate it effectively.

This workshop aims to increase participants’ capacity to quantitatively and qualitatively examine the development of inter- and intra-organizational partnerships. Assessment strategies and specific tools for data collection, analysis and reporting will be presented. You will practice using assessment techniques that are currently being used in the evaluation of PreK-16 educational reform initiatives and other grant-sponsored endeavors including the Safe School/Healthy Student initiative. 

You will learn:

  • The principles of collaboration so as to understand and be able to evaluate the construct,

  • Specific strategies, tools and protocols used in qualitative and quantitative assessment,

  • How to formatively assess grant-sponsored programs,

  • How to evaluate PreK-16 educational reform initiatives based on the development of interpersonal and intra-organizational collaboration,

  • How stakeholders use the evaluation process and findings to address organizational collaboration.

Rebecca Gajda has been a facilitator of various workshops and courses for adult learners for more than 10 years. As Director of Research and Evaluation for a large-scale, grant-funded school improvement initiative, she is currently working collaboratively with organizational stakeholders to examine the nature, characteristics and effects of collaborative school structures on student and teacher empowerment and performance. 

Session 12: Organizational Collaboration
Prerequisites: Basic understanding of organizational change theory/systems theory and familiarity with mixed methodological designs
Scheduled: Tuesday, October 31, 9:00 am to 4:00 pm
Level: Intermediate

Outcomes Research in Health Contexts

Outcomes research has become a major component in health care decision-making. Health care managers, purchasers, and regulators are increasingly relying on outcomes research methodology as a tool to evaluate health care outcomes, quality, and costs, to promote informed decisions and assure the best value for money.  

Through the use of mini-lectures, case studies and group exercises, you will leave with a basic understanding of outcomes research, its key principles, elements and considerations. Applicability of Outcomes Research in fields other than health care will also be discussed. 

You will learn:

  • The fundamentals of outcomes research evaluations,

  • The main types of outcomes research perspectives and outcomes,

  • The types of analysis used in outcomes research, including cost-effectiveness, cost-benefit, burden of disease, patient reported outcomes, number needed to treat and sensitivity analysis.

Olga Geling has seven years of experience practicing, as well as conducting research and training, in the area of health outcomes research. She has developed and conducted agency-wide training for the Hawaii Department of Health and been an evaluator for the Hawaii HIV Prevention Programs and the Hawaii Early Childhood Comprehensive Systems.

Session 13: Outcomes Research in Health Contexts
Scheduled: Tuesday, October 31, 9:00 am to 4:00 pm
Level: Beginner, no prerequisites

Coding? Qualitative Software? Why and How

Coding and qualitative software are viewed as resources that assist in the search for meaning in qualitative data. This session uses practical experience with real data, in the form of group exercises, to direct discussion of important principles that shape qualitative analysis.

Individual and small group work are framed by seminars that explore pre-code work, code evolution, and memo writing. Qualitative software, including ATLAS.ti and MAXqda, is presented as a useful tool to integrate into analysis, but not as a solution to analysis challenges.

You will learn:

  • The value of “context” in analytic decision-making,

  • Processes that support the evolution of coding qualitative data,

  • Strategies for moving through coding to later phases of finding meaning from narrative data,

  • How and when to integrate software into the qualitative analysis process.

Ray Maietta is President and founder of ResearchTalk Inc, a qualitative inquiry consulting firm. He is an active qualitative researcher who also brings extensive experience as a trainer to the session. Jacob Blasczyk is an active, experienced evaluator with in-depth experience in using qualitative software.

Session 14: Coding
Prerequisites: Experience in qualitative data analysis
Scheduled: Tuesday, October 31, 9:00 am to 4:00 pm
Level: Intermediate

Best Practices in Quantitative Methods: Little Things Can Make a Big Difference

This workshop has been cancelled.

Using Systems Tools in Evaluation Situations

The systems field came into being to address and solve real-world problems. It is an intensely practical discipline that has developed a wide range of methods to deal with specific situations. How do you choose the most appropriate systems method?

Using group tasks and mini-lectures, this workshop provides a framework for matching systems methods to common evaluation situations and evaluation questions, particularly those questions that evaluators traditionally find difficult. You will also be coached in the use of three systems methods applicable to a wide spectrum of evaluation situations.

You will learn:

  • Basic systems principles that underpin system tools,

  • Which systems tools are appropriate for particular evaluation tasks,

  • Three tools that together can cover a wide range of evaluation situations.

Bob Williams is an independent consultant and a pioneer in applying systems theory to the field of evaluation. Glenda Eoyang is founding Executive Director of the Human Systems Dynamics Institute and has presented systems approach workshops with Bob at previous AEA conferences.

Session 16: Systems Tools
Prerequisites: Knowledge of evaluation methods and situations, and of systems theory and practice
Scheduled: Tuesday, October 31, 9:00 am to 4:00 pm
Level: Intermediate

Needs Assessment

This session is full. Registrations are no longer being accepted for this workshop.

Assessing needs is a task often assigned to evaluators with the assumption that they have been trained in or have experience with the activity. However, surveys of evaluation training indicate that, as of 2002, only one formal course on the topic was being taught in university-based evaluation programs.

This workshop uses multiple hands-on activities interspersed with mini-presentations and discussions to provide an overview of needs assessment. The focus will be on basic terms and concepts, models of needs assessment, steps necessary to conduct a needs assessment and an overview of methods.  

You will learn:

  • The definition of need and need assessment and levels, types and examples of needs,

  • Models of needs assessment with emphasis on a comprehensive 3-phase model,

  • How to organize for a needs assessment by setting up a NA Committee and moving the committee toward action,

  • Methods commonly used in needs assessment.

James Altschuld is a professor at Ohio State University and the instructor of the only needs assessment course in the most recent study of evaluation training. He has co-written two books on needs assessment and is a well-known presenter of workshops on the topic in numerous respected venues.

Session 17: Needs Assessment
Scheduled: Wednesday, November 1, 8:00 am to 3:00 pm
Level: Beginner, no prerequisites

The Business of Evaluation: How to Run an Evaluation Practice 

Knowing how to design and execute an evaluation study is different from operating an evaluation business. This workshop provides business skills for evaluators in the early phases of their consultancies, as well as for those who want a deeper understanding of basic business practices.

Using mini-lecture, small group discussion, and individual exercises, you will study the basic operating principles for businesses and learn how to develop the fundamentals required to manage the business side of an evaluation practice. From there, you will delve deeper to gain an understanding of how business practices can complement evaluation skills, as well as the tools needed to plan and implement best business practices.

You will learn:

  • Common mistakes people going into business for themselves frequently make and how to avoid them,

  • The stages of business development and what is required to manage each stage,

  • Essential principles to manage a successful evaluation practice,

  • How to write a basic business plan,

  • Strategies for developing and using a financial plan.

Sanford Friedman of the Friedman Consulting Group has a 30-year career in evaluation and is now an independent consultant whose practice focuses on capacity building and providing an edge to businesses and professional corporations. Jerry Hipps, WestEd, has provided training workshops covering a diverse range of evaluation-related topics since 1981.

Session 18: Business of Evaluation
Scheduled: Wednesday, November 1, 8:00 am to 3:00 pm
Level: Beginner, no prerequisites

Using Appreciative Inquiry In Evaluation

Experience the power of appreciative reframing! An appreciative approach to evaluation maximizes chances for sustainable impact by helping programs identify what is working and drawing on existing strengths to build capacity and improve program effectiveness. Appreciatively oriented evaluation does not veil problems, but rather refocuses energy in a constructive and empowering way. 

You will experience the various phases of Appreciative Inquiry (AI): using appreciative interviews to focus an evaluation, developing indicators and data collection tools, conducting appreciative interviews, analyzing interview data, and sharing results. The workshop will incorporate the new text on Appreciative Inquiry (Preskill and Catsambas), published by SAGE in June 2006, and Dr. Hallie Preskill will join the workshop in its final hour for a Q&A session.

You will learn:

  • The principles and applications of appreciative inquiry,

  • How to formulate evaluation goals and questions using the appreciative inquiry approach,

  • How to develop interview guides, conduct interviews and analyze interview data,

  • How to reframe deficits into assets,

  • How to develop indicators to measure program impact through provocative propositions/possibility statements.

Tessie Catsambas, President of EnCompass LLC, brings to the workshop years of training experience and hands-on practice using AI in a variety of program contexts.

Session 19: Appreciative Inquiry
Scheduled: Wednesday, November 1, 8:00 am to 3:00 pm
Level: Beginner, no prerequisites

Theory-Driven Evaluation

Learn the theory-driven approach for assessing and improving program planning, implementation and effectiveness. Participants will explore the conceptual framework of program theory and its structure, which facilitates precise communication between evaluators and stakeholders regarding evaluation needs and approaches to address those needs.  

Mini-lectures, group exercises and case studies will illustrate the use of program theory and theory-driven evaluation for program planning, initial implementation, mature implementation and outcomes. In the outcome stages, you will explore the differences among outcome monitoring, efficacy evaluation and effectiveness evaluation.  

You will learn:

  • How to apply the conceptual framework of program theory,

  • How to apply the theory-driven approach to select an evaluation that is best suited to particular needs,

  • How to apply the theory-driven approach for evaluating a program’s particular stage or the full cycle.

Huey Chen, professor at the University of Alabama at Birmingham, is the author of Theory-Driven Evaluations (SAGE), the classic text for understanding program theory and theory-driven evaluation, and most recently of Practical Program Evaluation (2005). He is an internationally known workshop facilitator on the subject.

Session 20: Theory-Driven Evaluation
Prerequisites: Knowledge of logic models or program theory
Scheduled: Wednesday, November 1, 8:00 am to 3:00 pm
Level: Intermediate

Using Rasch to Measure Services and Outcomes

Program evaluation has great need for the development of valid measures, e.g., of the quantity and quality of services and of the outcomes of those services. Many evaluators are frustrated when existing instruments are not well tailored to the task and do not produce the needed sensitive, accurate, valid findings.

Through an extensive presentation, followed by discussion and hands-on work with data sets and computer-generated output, this workshop will explore Rasch Measurement as a means to effectively measure program services.  

You will learn:

  • Differences between Classical Test Theory and Rasch Measurement,

  • Why, when, and how to apply Rasch measurement,

  • Hands-on application of Rasch analysis using Winsteps software,

  • Interpretation of Rasch/Winsteps output.
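The core of the dichotomous Rasch model can be shown in a few lines. This is a hypothetical illustration, not Winsteps code; the ability values are invented:

```python
# Dichotomous Rasch model: the probability of a positive response depends
# only on the difference between person ability (theta) and item
# difficulty (b), both expressed in logits.
from math import exp

def rasch_probability(theta, difficulty):
    """P(correct) = 1 / (1 + e^-(theta - b))."""
    return 1 / (1 + exp(-(theta - difficulty)))

# When ability equals difficulty the probability is exactly 0.5; more able
# persons (or easier items) push it toward 1.
for theta in (-1.0, 0.0, 1.0, 2.0):
    print(f"theta={theta:+.1f}  P(correct)={rasch_probability(theta, 0.0):.3f}")
```

Estimating the abilities and difficulties from response data is what software such as Winsteps does; the model above is what its output is interpreted against.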

Kendon Conrad is from the University of Illinois at Chicago and Nikolaus Bezrucko is an independent consultant. They bring extensive experience in both teaching about, and applying, Rasch measurement to evaluation.

Session 21: Rasch Measurement
Scheduled: Wednesday, November 1, 8:00 am to 3:00 pm
Level: Beginner, no prerequisites

Strengthening Evaluation Relationships Through Experiential Activities

Think outside the box and learn to strengthen evaluation relationships through unconventional activities. This hands-on workshop will help you identify your role and responsibility as an experiential facilitator, learn specific activities to use with clients, and understand how experiential activities can add value to your work. 

Participants will get involved in group discussions, question-and-answer sessions, and many experiential activities that get them up and moving. Bring your questions, your ideas, and an open mind!

You will learn:

  • How to determine the right kind of experiential activity for a group,

  • How to identify your own strengths as a facilitator and areas for growth,

  • Experiential activities to use for strengthening the stakeholder and participant relationships encountered as an evaluator,

  • Strategies for modifying the presented materials for different populations,

  • About available experiential resources.

Kristin Huff has been applying experiential activities in her work with nonprofits, foundations, state agencies, businesses, and students for 15 years. She presented this workshop at AEA in 2004 and has expanded it to a full-day professional development session this year.

Session 22: Experiential Activities
Prerequisites: Experience using experiential activities with groups
Scheduled: Wednesday, November 1, 8:00 am to 3:00 pm
Level: Intermediate

Presenting Evaluation Findings: Effective Messaging for Evaluators

Explore the difference between “presenting” findings and “communicating” findings and be sure your message is understood and remembered. This is an interactive session for any evaluator who is asked to present evaluation findings in front of an audience. Participants are introduced to three primary channels of communication: how you look, how you sound and how you organize what you say.  

The instructor will model a behavior, explain an idea, and demonstrate a concept, after which attendees will have the opportunity to practice in front of the group and receive coaching and feedback. Come prepared with a specific topic that you'll be asked to present in the near future.

You will learn:

  • The importance of the three main channels of communication,

  • How to eliminate distracting physical behaviors from your presentations,

  • How to organize and effectively stage an evaluation presentation for maximum impact.

Carl Hanssen hails from The Evaluation Center at Western Michigan University and is a certified interpersonal skills instructor. An experienced facilitator and presentations coach, he excels at developing, practicing and teaching presentation skills.

Session 23: Presenting Evaluation Findings
Scheduled: Wednesday, November 1, 8:00 am to 3:00 pm
Level: Beginner, no prerequisites

Survey Design and Administration

This professional development workshop is designed for beginners in the field of evaluation. You will be introduced to the fundamentals of survey design and administration.  

This interactive workshop will use a combination of direct instruction with hands-on opportunities for participants to apply what is learned to their own evaluation projects. We will explore different types of surveys, how to identify the domains included in surveys, how to choose the right one, how to administer the survey and how to increase response rates and quality of data. You will receive handouts with sample surveys, item writing tips, checklists, and resource lists for further information.  

You will learn:

  • The various types and formats of surveys,

  • Procedures for high quality survey design,

  • How to write high quality items,

  • Strategies for increasing reliability and validity.

Courtney Malloy and Harold Urman are consultants at Vital Research, a research and evaluation firm that specializes in survey design. They both have extensive experience facilitating workshops and training sessions on research and evaluation for diverse audiences.

Session 24: Survey Design
Scheduled: Wednesday, November 1, 8:00 am to 3:00 pm
Level: Beginner, no prerequisites

Experimental Designs in Evaluation

Experimental designs have been the focus of some controversy in evaluation, but they are also expected by many funding sources interested in promoting scientifically-based methods. To meet these expectations, evaluators need to understand the challenges and strategies available in designing, implementing and analyzing data from experimental designs.

With an emphasis on hands-on exercises and individual consultation within the group setting, this workshop will provide you with concrete skills in improving your current or anticipated work with experimental design studies. 

You will learn:

  • How to conduct evaluability assessments of experimental and quasi-experimental designs,

  • How to write or evaluate proposals to satisfy demands for scientifically-based research methods,

  • How to modify experimental designs to respond to specific contexts,

  • How to conduct quantitative analyses to strengthen the validity of conclusions and reveal hidden program impacts.

Fred Newman is a Professor at Florida International University with over thirty years of experience in performing front line program evaluation studies. George Julnes, Associate Professor of Psychology at Utah State University, has been contributing to evaluation theory for over 15 years and has been working with federal agencies, including the Social Security Administration, on the design and implementation of randomized field trials.

Session 25: Experimental Design
Prerequisites: Understanding of threats to validity and the research designs used to minimize them, practical experience with eval helpful
Scheduled: Wednesday, November 1, 8:00 am to 3:00 pm
Level: Intermediate

Evaluation Practice: A Collaborative Approach

Collaborative evaluation is an approach that actively engages program stakeholders in the evaluation process. When stakeholders collaborate with evaluators, stakeholder and evaluator understanding increases and the utility of the evaluation is often enhanced. Strategies to promote this type of evaluation include evaluation conferences, member checking, joint instrument development, analysis and reporting.  

Employing discussion, hands-on activities, and roleplaying, this workshop focuses on these strategies and techniques for conducting successful collaborative evaluations, including ways to avoid common collaborative evaluation pitfalls.

You will learn:

  • A collaborative approach to evaluation,

  • Levels of collaborative evaluation and when and how to employ them,

  • Techniques used in collaborative evaluation,

  • Collaborative evaluation design and data-collection strategies.

Rita O'Sullivan of the University of North Carolina and John O'Sullivan of North Carolina A&T State University have offered this well-received session for the past six years at AEA. The presenters have used collaborative evaluation techniques in a variety of program settings, including education, extension, family support, health, and non-profit organizations.

Session 26: Collaborative Eval
Prerequisites: Basic Eval Skills
Scheduled: Wednesday, November 1, 8:00 am to 3:00 pm
Level: Intermediate

Utilization-focused Evaluation

Evaluations should be useful, practical, accurate and ethical. Utilization-focused Evaluation is a process that meets these expectations and promotes use of evaluation from beginning to end. By carefully implementing evaluations for increased utility, this approach encourages situational responsiveness, adaptability and creativity.

With an overall goal of teaching you the process of Utilization-focused Evaluation, the session will combine lectures with concrete examples and interactive case analyses, including cases provided by the participants.

You will learn:

  • Basic premises and principles of Utilization-focused Evaluation (U-FE),

  • Practical steps and strategies for implementing U-FE,

  • Strengths and weaknesses of U-FE, and situations for which it is appropriate.

Michael Quinn Patton is an independent consultant and professor at the Union Institute. An internationally known expert on Utilization-focused Evaluation, in 1997 he published the third edition of the book on which this session is based, Utilization Focused Evaluation: The New Century Text (SAGE). 

Session 27: Utilization-focused
Scheduled: Wednesday, November 1, 8:00 am to 3:00 pm
Level: Beginner, no prerequisites

Multilevel Models in Program Evaluation

Multilevel models (also called hierarchical linear models) open the door to understanding the inter-relationships among nested structures (students in classrooms in schools in districts for instance), or the ways evaluands change across time (perhaps longitudinal examinations of health interventions). This workshop will demystify multilevel models and present them at an accessible level, stressing their practical applications in evaluation.

Through lectures supplemented with practical examples, the workshop will address four key questions: When are multilevel models necessary? How can they be implemented using standard software? How does one interpret multilevel results? What are recent developments in this arena?

You will learn:

  • The basics of multilevel modeling,

  • When to use multilevel models in your evaluation practice,

  • How to implement models using widely available software,

  • The importance of considering multilevel structures in understanding program theory.

Sanjeev Sridharan is head of evaluation programs and a senior research fellow at the University of Edinburgh as well as a trainer for SPSS and an Associate Editor for the American Journal of Evaluation.

Session 28: Multilevel Models
Prerequisites: Basic understanding of Statistics
Scheduled: Wednesday, November 1, 8:00 am to 3:00 pm
Level: Intermediate

RealWorld Evaluation: Conducting Evaluations with Budget, Time, Data and Political Constraints

This session is full. Registrations are no longer being accepted for this workshop.

What do you do when asked to evaluate a program that is well underway? When questions about baseline data and control groups are met with blank stares? When time and resources are few, yet clients expect a “rigorous impact evaluation”? When there are political pressures to deal with?

Through presentations and discussion, with real-world examples drawn from extensive international as well as US evaluation experiences, you will be introduced to the RealWorld Evaluation approach. This well-developed seven-step approach seeks to ensure the best quality evaluation under real-life constraints.

You will learn:

  • The seven steps of the RealWorld Evaluation approach,

  • Context-responsive evaluation design alternatives,

  • Ways to reconstruct baseline data,

  • How to identify and overcome threats to the validity or adequacy of evaluation methods.

Jim Rugh will coordinate a team of three facilitators with extensive real-world experience in conducting evaluations in a range of contexts worldwide. He is a leader in the area of conducting evaluations with budget, time, and data constraints.

Session 29: RealWorld Evaluation
Prerequisites: Academic or practical knowledge of the basics of evaluation
Scheduled: Wednesday, November 1, 8:00 am to 3:00 pm
Level: Intermediate

How to Prepare an Evaluation Dissertation Proposal

Developing an acceptable dissertation proposal often seems more difficult than conducting the actual research. Further, proposing an evaluation as a dissertation study can raise faculty concerns of acceptability and feasibility. This workshop will lead you through a step-by-step process for preparing a strong, effective dissertation proposal with special emphasis on the evaluation dissertation.

The workshop will emphasize application of the knowledge and skills taught to the participants’ personal dissertation situation through the use of an annotated case example, multiple self-assessment worksheets, and several opportunities for questions of personal application.

You will learn:

  • The pros and cons of using an evaluation study as dissertation research,

  • How to construct a compelling argument in a dissertation proposal,

  • The basic process and review criteria for constructing an effective problem statement and methods section,

  • How to provide the assurances necessary to secure approval of the proposal.

Nick L Smith is the co-author of How to Prepare a Dissertation Proposal from Syracuse University Press and a past-president of AEA. He has taught research and evaluation courses for over 20 years at Syracuse University and is an experienced workshop presenter through NOVA University's doctoral program in evaluation.

Session 30: Evaluation Dissertation
Scheduled: Wednesday, November 1, 8:00 am to 3:00 pm
Level: Beginner, No prerequisites

Working Sessions to Facilitate Stakeholder Engagement and Evaluation Use

This new workshop will provide in-depth learning on what is arguably the most important communicating and reporting strategy for facilitating organizational learning and evaluation use among stakeholders: the working session.

Through a combination of self-assessment, small group work, and practical application of the techniques covered to an actual evaluation, you will explore group facilitation and organizational development skills for which evaluators often receive no training in graduate programs.

You will learn:

  • How working sessions can be used throughout all phases of an evaluation to promote ownership of evaluation processes and facilitate the use of findings,

  • How to assess and refine group facilitation skills for conducting working sessions,

  • How to design and carry out effective working sessions.

Rosalie Torres has facilitated numerous workshops on evaluation practice and the use of findings. Throughout her 28-year evaluation career she has developed and refined her use of working sessions, and she is co-author of several books and articles on working sessions, evaluation practice, and use, including Evaluation Strategies for Communicating and Reporting (with Hallie Preskill and Mary Piontek, 2005) and Evaluative Inquiry for Enhancing Learning in Organizations (with Hallie Preskill, 1999).

Session 31: Working Sessions
Scheduled: Wednesday, November 1, 8:00 am to 3:00 pm
Level: Beginner, no prerequisites

Applications of Multiple Regression in Evaluation: Mediation, Moderation, and More

Multiple regression is a powerful tool that has wide applications in evaluation and applied research. Regression analyses are used to describe relationships, test theories, make predictions with data from experimental or observational studies, and model linear or nonlinear relationships. Issues we’ll explore include selecting specific regression models that are appropriate to your data and research questions, preparing data for analysis, running the analyses, interpreting the results, and presenting findings to a nontechnical audience.

The facilitator will demonstrate applications from start to finish with SPSS and Excel, and then you will tackle multiple real-world case examples in small groups. Detailed handouts include explanations and examples that can be used at home to guide similar applications.

You will learn:

  • Concepts important for understanding regression,

  • Procedures for conducting computer analysis, including SPSS code,

  • How to conduct mediation and moderation analyses,

  • How to interpret SPSS REGRESSION output,

  • How to present findings in useful ways.

Dale Berger is Professor of Psychology at Claremont Graduate University where he teaches a range of statistics and methods courses for graduate students in psychology and evaluation. He was President of the Western Psychological Association and recipient of the WPA Outstanding Teaching Award.

Session 32: Multiple Regression
Prerequisites: Basic understanding of Statistics
Scheduled: Wednesday, November 1, 8:00 am to 3:00 pm
Level: Intermediate

Using Performance Measurement to Increase Program Effectiveness and Communicate Value

These days it’s impossible not to have heard of performance measurement (PM), but what exactly is it? Why do so many program managers swear by it, and why do so many funders require it? What are the pitfalls to avoid?  How can evaluators use PM most effectively to improve programs and document accomplishments?

Explore all of these questions in this hands-on, highly interactive workshop. Working in small groups, you will learn PM step by step, working with a real program and developing real products. Then, materials in hand, you will leave prepared to apply your learning to programs back home.

You will learn:

  • How to specify a program’s logic model and choose which desired outcomes to measure,

  • How to develop measurable indicators – the heart of an effective PM system,

  • How to gather data on these indicators and analyze these data usefully,

  • How to report PM findings in catchy, understandable, and useful ways.

Michael Hendricks is an experienced consultant who has helped design and implement PM systems in local nonprofit agencies, national associations, governments at all levels, foundations, and international development agencies. He is also an excellent facilitator who believes in involving participants in their own learning.

Session 33: Performance Measurement
Prerequisites: Basic background in evaluation
Scheduled: Wednesday, November 1, 8:00 am to 3:00 pm
Level: Intermediate

Hard-Core Qualitative Methodology

This session is full. Registrations are no longer being accepted for this workshop.

We need to reject the 'quantify where possible' approach, but we need only change it to 'quantify where valuable' in order to make it compatible with qualitative methods, which are demonstrably: (1) the inescapable foundation of all scientific method (since science was and still is based on qualitative observation, interpretation, and evaluation), and (2) the home of many powerful non-quantitative approaches to scientific knowledge (e.g. causation, explanation, and understanding).

This workshop presents and extends qualitative methods as extensions of common sense and of standard scientific methods, and provides examples for practical application in evaluation contexts. We'll use mini-lectures, group discussion, and question/answer to get to the heart of the hard-core approach to qualitative methodology.

You will learn:

  • How to avoid philosophical entanglements when presenting/using qualitative methods,

  • How to extend the scientific method to include hard-core qualitative methodologies,

  • How to restate the role of the social sciences when hard-core qualitative methods are fully legitimized and internalized.

Michael Scriven is among the most well-known professionals in the field today, with 25 years of work on the philosophy of science and over 90 publications in the field of evaluation. Michael is excited to offer this brand-new workshop at Evaluation 2006. Guilla Holm of Western Michigan University will serve as co-facilitator and is co-author with Michael Scriven of an upcoming text on the topic.

Session 34: Hard-core Qualitative
Prerequisites: Basic background in qualitative methods
Scheduled: Wednesday, November 1, 8:00 am to 3:00 pm
Level: Intermediate

Cultivating Self as Responsive Instrument

Evaluative judgments are inextricably bound up with culture and context. Excellence and ethical practice in evaluation are intertwined with orientations toward, responsiveness to, and capacities for engaging diversity. Breathing life into this expectation calls for critical ongoing personal homework for evaluators regarding their lenses, filters and frames vis-a-vis their judgment-making. Using the framework in Evaluating Social Science, we will explore relevant potential threats to validity.

Reflective exercises will help us spotlight culture and context issues and identify pathways for calibrating, and refining the self as a diversity-grounded responsive instrument. Otherwise, we often look but still do not see, listen but do not hear, touch but do not feel. Such limitations handicap our truth-discerning and judging capacities, so evaluators have a professional and ethical responsibility to address the ways our lenses, filters and frames may obscure or distort more than they illuminate.

You will learn:

  • How to continually assess one's meaning-making/meaning-shaping processes,

  • How to engage in empathic perspective-taking,

  • How to engage with the ways others perceive and receive you,

  • How to dynamically embrace the development of intercultural/multicultural competencies as a process and stance, not simply a status or fixed state of being.

Hazel Symonette brings over 30 years of work in diversity-related arenas. She is founder and director of the University of Wisconsin-Madison Excellence Through Diversity Institute, a year-long train-the-trainers/facilitators initiative organized around responsive assessment and evaluation. She is also faculty with the year-long health research and education trust's cultural competence leadership fellowship program.

Session 35: Cultivating Self
Scheduled: Wednesday, November 1, 8:00 am to 3:00 pm
Level: Beginner, no prerequisites


WEDNESDAY, NOVEMBER 1, HALF DAY, 8:00 am to 11:00 am

Using Stories in Evaluation

Stories are an effective means of communicating the ways in which individuals are influenced by educational, health, and human service agencies and programs. Unfortunately, the story has been undervalued and largely ignored as a research and reporting procedure. Stories are sometimes regarded with suspicion because of the haphazard manner in which they are captured or because of cavalier claims about what a story depicts.

Through short lecture, discussion, demonstration, and hands-on activities, this workshop explores effective strategies for discovering, collecting, analyzing and reporting stories that illustrate program processes, benefits, strengths or weaknesses.

You will learn:

  • How stories can reflect disciplined inquiry,

  • How to capture, save, and analyze stories in evaluation contexts,

  • How stories for evaluation purposes are often different from other types of stories.

Richard Krueger is on the faculty at the University of Minnesota and has over 20 years of experience in capturing stories in evaluation. He has offered well-received professional development workshops at AEA and for non-profit and government audiences for over 15 years.

Session 36: Using Stories
Scheduled: Wednesday, November 1, 8:00 am to 11:00 am
Level: Beginner, no prerequisites

Visual Presentations of Quantitative Data 

Presenting data through graphics, rather than numbers, can be a powerful tool for understanding data and disseminating findings. Unfortunately, this method is also commonly misused in ways that confuse audiences, complicate research and obscure findings.

This workshop will enable participants to capitalize on the benefits of visual representation by providing them with tools for displaying data graphically for presentations, evaluation reports, publications and continued dialogue with program funders, personnel and recipients. Heighten your awareness of the common errors made when visually displaying multivariate relations and become more critical consumers of quantitative information.

You will learn:

  • How people process visual information,

  • Ways to capitalize on cognitive processing and minimize cognitive load,

  • Quick and easy methods for presenting data,

  • Innovative methods for graphing data,

  • Ways to decide if quantitative information should be presented graphically/visually versus in words/text.

Stephanie Reich is a research associate at the Center for Evaluation and Program Improvement at Vanderbilt University where her research focuses on cognitive development and how people process information. David Streiner is a Professor of Psychiatry at the University of Toronto and has authored four widely used books in statistics, epidemiology and scale development. 

Session 37: Visual Presentations
Scheduled: Wednesday, November 1, 8:00 am to 11:00 am
Level: Beginner, no prerequisites

How to Write Evaluation Scopes-of-Work

Writing and responding to scopes of work requires balancing the integrity of independent evaluations with ensuring responsive work. A scope of work provides the detailed description of the work to be performed and a road map for the actions that the contractor is to undertake. If it is poorly written, a project's probability of success is decreased.

This workshop will address how to get useful evaluation results with a well-written scope of work that lays out minimum requirements and solicits enhancements to the original design, as well as how to respond to a less well-written scope of work. Participants will learn how to construct evaluation scopes of work that yield methodologically rigorous, useful and creative evaluations.

You will learn:

  • How to manage the key relationships between tasks and deliverables,

  • How to make effective use of proposal preparation instructions,

  • How to develop excellent evaluation criteria,

  • How to incorporate key tasks in the scope of work to promote success,

  • How to recognize the characteristics of a good scope of work,

  • Coping skills for dealing with problematic scopes of work.

Steve Zwillinger has worked for the U.S. Department of Education for over 20 years and given workshops and sessions with thousands of attendees on data quality, strategic planning, performance management and contract proposal writing. David Bernstein has been a professional evaluator for 23 years and has conducted numerous trainings for government employees.

Session 38: Writing Scopes-of-Work
Prerequisites: Familiarity with government evaluation, experience with government contracting
Scheduled: Wednesday, November 1, 8:00 am to 11:00 am

Sampling 101: Basics of Probability and Purposeful Sampling

Choosing and implementing an appropriate sampling strategy can affect the validity, credibility and cost of an evaluation. Some studies require sophisticated probability sampling methods to produce accurate estimates of the characteristics of the populations served or of the size of the effects of the program or policy on the target population. Other studies may appropriately use purposeful samples to support theory development or to do detailed case analysis. 

The instructor will address the 14 questions from his book Practical Sampling (Sage, 1990) that should be answered prior to sample design, as a part of sample design, and prior to analysis of the data. Examples will be used to illustrate the designs and the issues that arise in implementation. You will have the opportunity to raise specific sampling issues encountered in your own work.

You will learn:

  • Ways to plan and implement sampling strategies that meet the needs of an evaluation,

  • How to improve the validity and credibility of your results through effective sampling,

  • Ways to employ sampling as a cost-saving measure while maintaining the rigor of the evaluation design.

Gary T. Henry is a professor in the Andrew Young School of Policy Studies at Georgia State University. Henry has evaluated a variety of public policies and programs and has published extensively in the field of evaluation and education policy. He received the Evaluation of the Year Award from the American Evaluation Association in 1998 for his work with Georgia's Council for School Performance, and the Joseph S. Wholey Distinguished Scholarship Award in 2001 from the American Society for Public Administration and the Center for Accountability and Performance.

Session 39: Sampling 101
Prerequisites: Familiarity with evaluation design
Scheduled: Wednesday, November 1, 8:00 am to 11:00 am


WEDNESDAY, NOVEMBER 1, HALF DAY, 12:00 pm to 3:00 pm

Assessing Data Collection Instruments

When you read reports and articles do you see indications of psychometric properties reported and feel you aren’t quite sure what these represent or how to interpret them? Do you look for instruments to use and see indicators of reliability and validity to help you decide if the instrument has potential use? Do you need to compute reliability and validity measures and report them? 

If you answered “yes” to any of these, this workshop should interest you. It will focus on commonly used measures of reliability and validity relative to how they are computed, interpreted and used to help decide if the research that uses these methods is credible. 

You will learn:

  • How to locate data collection instruments for use in conducting evaluations,

  • Common methods used to assess reliability and validity indicators,

  • How to identify and interpret claims related to properties of data collection methods,

  • The basis for determining confidence intervals around score reliability measures,

  • How to compute and interpret reliability confidence intervals.

Jackson Barnette has more than 35 years experience in teaching statistical methods, assessment methods and program evaluation. He has designed an Excel software program to compute confidence levels that will be given to participants at this workshop.

Session 40: Data Collection Instruments
Prerequisites: Knowledge of basic parametric/nonparametric correlation methods, computation and uses of confidence intervals
Scheduled: Wednesday, November 1, 12:00 pm to 3:00 pm
Level: Intermediate

Understanding, Designing and Implementing Focus Group Research

As a qualitative research method, focus groups are an important tool to help researchers understand the motivators and determinants of a given behavior. Based on the work of Richard Krueger and David Morgan, this workshop uses a combination of lecture and small group work to provide a practical introduction to focus group research.

Participants will learn to identify and discuss critical decisions in designing a focus group study, understand how research or study questions influence decisions regarding segmentation, recruitment and screening, and discuss different types of analytical strategies, including top-line summaries and full narrative reports.

You will learn:

  • How to identify and discuss critical decisions in designing a focus group study,

  • How research or study questions influence decisions regarding segmentation, recruitment and screening,

  • How to identify and discuss different types of analytical strategies and focus group reports.

Michelle Revels is a technical director at ORC Macro where she has designed, implemented and managed numerous focus group projects, primarily with “hard-to-reach” populations. Bonnie Bates has over 25 years of experience in the dual skill areas of qualitative research and program evaluation and training, particularly in the area of prevention of public health problems.

Session 41: Focus Group Research
Scheduled: Wednesday, November 1, 12:00 pm to 3:00 pm
Level: Beginner, no prerequisites

Beyond the Logic Model: Building Your Evaluation Plan

Do you have a decent logic model, but are not sure how to align your evaluation plan to measure your program outcomes? Take your logic model to the next level as an essential, useful component of evaluation planning.

This workshop will review basic logic model components, provide recommended evaluation planning, data collection and data analysis strategies, and build an evaluation plan through hands-on exercises and real-life examples. You will have the opportunity to incorporate what you've learned in practice, describing your program and program outcomes to different audiences. 

You will learn:

      ·       How basic logic model components correspond to evaluation planning,

      ·       Strategies for planning in advance for data collection and analysis,

      ·       How a program logic model aligned with an evaluation process can enhance report writing and program awareness.

Edith Cook has been a professional trainer for 12 years for the Corporation for National Service. She has conducted workshops on this topic for over 2,000 program directors and faculty associated with VISTA, Learn and Serve, AmeriCorps and Senior Corps programs. 

Session 42: Building Eval Plans
Prerequisites: Previous participation in developing logic models
Scheduled: Wednesday, November 1,
12:00 pm to 3:00 pm
Level: Intermediate

Evaluating Programs for Children: Special Considerations

When children are the primary participants in or recipients of an intervention, evaluation processes must accommodate children’s developmental levels, capture their unique perspectives and experiences, and protect their rights. Evaluators without specialized knowledge of children’s developmental, cognitive and legal situations will benefit from this workshop. You will learn specific strategies to use in both formative and summative evaluations of programs for children to ensure that instruments, procedures, and reporting practices are appropriate and effective.

The workshop will include hands-on activities that will allow you to identify specific issues that need to be addressed in your own program evaluations, and to begin adapting provided resources to meet your specific needs.

You will learn:

      ·       Developmental issues to consider in developing and adapting instrumentation, including surveys and clinical interviews,

      ·        How multiple sources of data (observations, teacher and parent data, etc.) can and should be used to validate child-level data,

      ·        Issues related to informed consent, protection of privacy and other legal obligations.

Lauren B. Goldenberg has almost two decades of experience in education and has led applied research and evaluation projects at EDC's Center for Children and Technology for four years. Wendy Martin has worked in the field of educational media and evaluation for over fifteen years. As a Research Associate at EDC's Center for Children and Technology, Dr. Martin's work has primarily focused on the evaluation of large-scale educational technology professional development and literacy programs.

Session 43: Programs for Children
Prerequisites: Basics of evaluation
Scheduled: Wednesday, November 1,
12:00 pm to 3:00 pm
Level: Intermediate

Focus Group Moderator Training

The literature is rich in textbooks and case studies on many aspects of focus groups, including design, implementation and analysis. Missing, however, are guidelines and discussions on how to moderate a focus group.

In this experiential learning environment, you will find out how to maximize time, build rapport, create energy and apply communication tools in a focus group to maintain the flow of discussion among the participants and elicit responses from more than a single participant. You will learn at least 15 strategies to create and maintain a focus group discussion. These strategies can also be applied in other evaluation settings, such as community forums and committee meetings, to stimulate discussion.

You will learn:

      ·       How to moderate a focus group,

      ·       At least 15 strategies to create and maintain focus group discussion,

      ·       How to stimulate discussion in community forums, committee meetings, and social settings.

Nancy-Ellen Kiernan has facilitated over 150 workshops on evaluation methodology and moderated focus groups in 50+ studies with groups ranging from Amish dairy farmers in barns to at-risk teens in youth centers, to university faculty in classrooms.

Session 44: Moderator Training
Prerequisites: Having moderated 2 focus groups and written focus group questions and probes
Sunday, November 5, 9:00 am to 12:00 pm
Level: Intermediate

Advanced Applications of Program Theory

While simple logic models are an adequate way to gain clarity and initial understanding about a program, sound program theory can enhance understanding of the underlying logic of the program by providing a disciplined way to state and test assumptions about how program activities are expected to lead to program outcomes.

Lecture, exercises, discussion, and peer-critique will help you to develop and use program theory as a basis for decisions about measurement and evaluation methods, to disentangle the success or failure of a program from the validity of its conceptual model, and to facilitate the participation and engagement of diverse stakeholder groups. 

You will learn:

      ·       To employ program theory to understand the logic of a program,

      ·       How program theory can improve evaluation accuracy and use,

      ·       To use program theory as part of participatory evaluation practice.  

Stewart Donaldson is Dean of the School of Behavioral and Organizational Sciences at Claremont Graduate University. He has published widely on the topic of applying program theory, developed one of the largest university-based evaluation training programs, and has conducted theory-driven evaluations for more than 100 organizations during the past decade.

Session 45: Program Theory
Prerequisites: Experience or Training in Logic Models
Sunday, November 5, 9:00 am to 12:00 pm
Level: Intermediate

Designing Fundable Nonprofit Programs

In these days of ever-increasing competition for grant dollars, organizations are realizing that funders are asking for more accountability and rigorous demonstrations of the true worth of programs in terms of cost effectiveness and program impact. Evaluators are often called upon to contribute to or develop evaluation plans as part of the proposal development phase during grantseeking.

This workshop will examine funding trends and offer participants practical steps and tools that are immediately useful when developing evaluation plans to be included in grant applications and project proposals. The session will draw upon the evaluation best practices outlined in the Grantmakers' Due Diligence Tool developed by Grantmakers for Effective Organizations.

You will learn:

      ·       The grantmaking trends that impact the program evaluation field,

      ·       The functions evaluators contribute to nonprofit organizations’ grantseeking efforts and concomitant organizational development work,

      ·       How to prepare the program and outcomes evaluation plan section of a funding proposal.

Deborah Krause and E. J. Honton both have more than 20 years of experience in program evaluation, analytical methods and proposal development. They have conducted countless workshops and seminars on fundraising for nonprofits.

Session 46: Fundable Nonprofit Programs
Prerequisites: Basic understanding of grantseeking process and outcomes eval
Sunday, November 5, 9:00 am to 12:00 pm
Level: Intermediate

Empowerment Evaluation

Empowerment Evaluation builds program capacity and fosters program improvement. It teaches people to help themselves by learning how to evaluate their own programs. The basic steps of empowerment evaluation include: 1) establishing a mission or unifying purpose for a group or program; 2) taking stock - creating a baseline to measure future growth and improvement; and 3) planning for the future - establishing goals and strategies to achieve goals, as well as credible evidence to monitor change. The role of the evaluator is that of coach or facilitator in an empowerment evaluation, since the group is in charge of the evaluation itself.

Employing lecture, activities, demonstration and discussion, the workshop will introduce you to the steps of empowerment evaluation and tools to facilitate the approach.

You will learn:

      ·       Steps to empowerment evaluation,

      ·       How to facilitate the prioritization of program activities,

      ·       Ways to guide a program’s self-assessment.

David Fetterman hails from Stanford University and is the editor of (and a contributor to) the recently published Empowerment Evaluation Principles in Practice (Guilford). He chairs the Collaborative, Participatory and Empowerment Evaluation AEA Topical Interest Group and is a highly experienced and sought-after facilitator.

Session 47: Empowerment Evaluation
Sunday, November 5, 9:00 am to 12:00 pm
Level: Beginner, no prerequisites

Lessons From the Field: 20 Ways to Improve Your Evaluation Practice

Explore lessons learned in structuring evaluation projects, improving client relationships, responding to ethical issues, managing project resources and enhancing personal survival skills in stressful work environments. The presenter will share her hard-won lessons from the field, which have allowed her to bootstrap her evaluation practice up from one level to the next over the past two decades.

Participants will have an opportunity to work in small groups to develop solutions to a scenario and will compare strategies with their colleagues. At the end of the workshop, you will take away some fresh ideas to tackle your most pressing evaluation challenges.  

You will learn:

  • A conceptual framework around which to organize evaluation projects,

  • Ways to improve client relationships,

  • Standards for ethical conduct,

  • Tools for project management,

  • Tips for personal survival.

Gail Barrington started Barrington Research Group more than 20 years ago as a sole practitioner. Today, with over 100 program evaluation studies under her belt, she has a wealth of experience to share. A top-rated facilitator, she has taught workshops throughout the US and Canada.

Session 48: Improving Your Eval Practice
Prerequisites: Prior experience conducting evaluations for clients
Sunday, November 5, 9:00 am to 12:00 pm
Level: Intermediate