Professional Development Workshops

Professional Development Workshops at AEA's Annual Conference are hands-on, interactive sessions that provide an opportunity to learn new skills or hone existing ones.

IMPORTANT BACKGROUND: Professional development workshops precede and follow the conference. These workshops differ from sessions offered during the conference itself in at least three ways: 1) each is longer (either 3, 6, or 12 hours in length) and thus provides a more in-depth exploration of a skill or area of knowledge, 2) presenters are paid for their time and are expected to have significant experience both presenting and in the subject area, and 3) attendees pay separately for these workshops and are given the opportunity to evaluate the experience. Sessions are filled on a first-come, first-served basis and many are likely to fill before the conference begins.

FEES: Professional development workshops cost $300 for a two-day session, $150 for a full-day session, and $75 for a half-day session for AEA members. For nonmembers, the fees are $400, $200, and $100, respectively; for students, they are $160, $80, and $40.

REGISTRATION: Registration for professional development sessions is handled right along with standard conference registration. You may register for professional development workshops even if you are not attending the conference itself.

FULL SESSIONS: Sessions that are closed because they have reached their maximum attendance are clearly marked below the session name. No more registrations will be accepted for full sessions and AEA does not maintain waiting lists. Once sessions are closed, they will not be re-opened.

BROWSE BY TIME SLOT:

TWO DAY, MONDAY-TUESDAY, NOV 1-2, FROM 9 am to 4 pm

TUESDAY, NOV 2, FULL DAY SESSIONS, 9 am to 4 pm

WEDNESDAY, NOV 3, FULL DAY, 8 am to 3 pm

WEDNESDAY, NOV 3, HALF DAY, FROM 8 am to 11 am

WEDNESDAY, NOV 3, HALF DAY, FROM 12 pm to 3 pm

SUNDAY, NOV 7, HALF DAY, FROM 9 am to 12 pm

TWO DAY, MONDAY-TUESDAY, NOV 1-2, FROM 9 am to 4 pm

Qualitative Methods

Qualitative data can humanize evaluations by portraying people and stories behind the numbers. Qualitative inquiry involves using in-depth interviews, focus groups, observational methods, and case studies to provide rich descriptions of processes, people, and programs. When combined with participatory and collaborative approaches, qualitative methods are especially appropriate for capacity-building-oriented evaluations.

Through lecture, discussion, and small-group practice, this workshop will help you to choose among qualitative methods and implement those methods in ways that are credible, useful, and rigorous. It will culminate with a discussion of new directions in qualitative evaluation.

You will learn:
§ Types of evaluation questions for which qualitative inquiry is appropriate,
§ Purposeful sampling strategies,
§ Interviewing, case study, and observation methods,
§ Analytical approaches that support useful evaluation.

Michael Quinn Patton is an independent consultant and professor at the Union Institute. An internationally known expert on utilization-focused evaluation and qualitative methods, he published the third edition of Qualitative Research and Evaluation Methods through Sage in 2001.

Session 1: Qualitative Methods
Scheduled: 11/1-2, 9 am to 4 pm
Level: Beginner, no prerequisites


Quantitative Methods

Quantitative data offers opportunities for numerical descriptions of populations and samples. The challenge is in knowing which analyses are best for a given situation. Designed for the evaluator with little statistical background, the workshop covers the basics of parametric and nonparametric statistics, as well as how to report your findings in ways useful to stakeholder groups.

Hands-on exercises interspersed with mini-lectures will introduce methods and concepts. The instructor will review examples of research and evaluation questions and the statistical methods appropriate to developing a quantitative data-based response.
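
As a rough illustration of the kind of basic parametric comparison the workshop covers, the sketch below uses Python and the SciPy library with hypothetical data; the software and data are assumptions for illustration only and are not workshop materials.

```python
# Illustrative sketch (not workshop material): a basic parametric comparison
# of two hypothetical program groups using an independent-samples t-test.
# The software choice (Python/SciPy) and the data are assumptions.
from scipy import stats

# Hypothetical outcome scores for a treatment and a comparison group
treatment = [72, 85, 78, 90, 66, 81, 77, 88]
comparison = [70, 74, 69, 80, 65, 72, 68, 75]

t_stat, p_value = stats.ttest_ind(treatment, comparison, equal_var=True)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
# A small p-value suggests the group means differ beyond what chance alone would explain.
```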

You will learn:
§ The conceptual basis for a variety of statistical procedures,
§ How more sophisticated procedures are based on the statistical basics,
§ Which analysis technique is best for a given data set or evaluation question,
§ How to interpret and report findings from these analyses.

Katherine McKnight applies quantitative analysis in her practice as a research consultant and program evaluator for Public Interest Research Services. Additionally, she teaches Research Methods, Statistics, and Measurement in the Department of Psychology at the University of Arizona in Tucson, Arizona.

Session 2: Quantitative Methods
Scheduled: 11/1-2, 9 am to 4 pm
Level: Beginner, no prerequisites


Consulting Skills for Evaluators: Getting Started

Do you have what it takes to be a successful independent consultant? Designed for evaluators who are considering becoming independent consultants or who have recently begun a consulting practice, the workshop will help you assess your own skills and characteristics to determine whether you have what it takes to succeed and to strategize about areas in need of improvement.

The workshop will focus on the full scope of operating an independent consulting practice from marketing and proposal writing, to developing client relationships, to project management, ethics, and business operations. Case examples, hands-on activities, and take-home materials will prepare you to enter the world of consulting.

You will learn:
§ If consulting is an appropriate career choice for you,
§ How to break into the evaluation consulting market – and stay there,
§ Time and money management strategies,
§ Professional practices including customer service, ethical operations, and client relations.

Gail Barrington started Barrington Research Group 19 years ago as a sole practitioner. Today, she has a staff of 7 and a diverse client base. A top-rated facilitator, she has taught workshops throughout the US and Canada.

Session 3: Consulting Skills
Scheduled: 11/1-2, 9 am to 4 pm
Level: Beginner, no prerequisites


Evaluation 101: Intro to Evaluation Practice

Begin at the beginning and learn the basics of evaluation from an expert trainer. The session will focus on the logic of evaluation to answer the key question: "What resources are transformed into what program evaluation strategies to produce what outputs for which evaluation audiences, to serve what purposes?" Enhance your skills in planning, conducting, monitoring, and modifying the evaluation so that it generates the information needed to improve program results.

A case-driven instructional process, using discussion, exercises, and lecture, will introduce the steps in conducting useful evaluations: getting started, describing the program, identifying evaluation questions, collecting data, analyzing and reporting, and using results.

You will learn:
§ The basic steps to an evaluation,
§ Contextual influences on evaluation and ways to respond,
§ Logic modeling as a tool to describe a program and develop evaluation questions and foci,
§ Methods for analyzing and using evaluation information.

John McLaughlin has been part of the evaluation community for over 30 years working in the public, private, and non-profit sectors. He has presented this workshop in multiple venues and will tailor this two-day format for Evaluation 2004.

Session 4: Evaluation 101
Scheduled: 11/1-2, 9 am to 4 pm
Level: Beginner, no prerequisites


Using Appreciative Inquiry in Evaluation

Experience the power of appreciative reframing! An appreciative approach to evaluation maximizes chances for sustainable impact by helping programs identify what is working and drawing on existing strengths to build capacity and improve program effectiveness. Appreciatively oriented evaluation does not veil problems, but rather refocuses energy in a constructive and empowering way.

You will experience the various phases of Appreciative Inquiry (AI) by developing evaluation questions and data collection tools; conducting and analyzing appreciative interviews; and sharing results using real-world case examples. You will also explore ways to use AI for evaluation capacity building.

You will learn:
§ The principles and applications of AI to evaluation,
§ How to formulate evaluation questions using AI,
§ How to develop and use a data collection instrument using AI,
§ How AI builds evaluation capacity.

Tessie Catsambas, President of EnCompass LLC, and Hallie Preskill, University of New Mexico professor and evaluation consultant, together bring to the workshop years of training experience and hands-on practice using AI in a variety of program contexts.

Session 5: Appreciative Inquiry
Scheduled: 11/1-2, 9 am to 4 pm
Level: Beginner, no prerequisites


Logic Models for Program Evaluation and Planning

Many programs fail to start with a clear description of the program and its intended outcomes, undermining both program planning and evaluation efforts. The logic model, as a map of what a program is and intends to do, is a useful tool for clarifying objectives, improving the relationship between activities and those objectives, and developing evaluation plans and strategic plans.

First, we will recapture the utility of program logic modeling as a simple discipline, using cases in public health and human services to explore the steps for constructing, refining and validating models. On day two, we’ll examine how to use logic models in evaluation to gain stakeholder consensus and determine evaluation focus, and in strategic planning to affirm mission and identify key strategic issues. Both days use modules with presentations, small group case studies, and debriefs to reinforce group work.

You will learn:
§ To construct logic models,
§ To develop an evaluation focus based on a logic model,
§ To use logic models to answer strategic planning questions.

Thomas Chapel is the central evaluation resource person and logic model trainer at the Centers for Disease Control. This is an expanded version of a workshop he has taught for the past 3 years to much acclaim.

Session 6: Logic Models
Scheduled: 11/1-2, 9 am to 4 pm
Level: Beginner, no prerequisites


Participatory Evaluation

Participatory evaluation practice requires evaluators to be skilled facilitators of interpersonal interactions. This workshop will provide you with theoretical grounding (social interdependence theory, conflict theory, and evaluation use theory) and practical frameworks for analyzing and extending your own practice.

Through presentations, discussion, reflection, and case study, you will experience strategies to enhance participatory evaluation and foster interaction. You are encouraged to bring examples of challenges faced in your practice for discussion.

You will learn:
§ Strategies to foster effective interaction, including belief sheets; values voting; three-step interview; cooperative rank order; graffiti; jigsaw; and data dialogue,
§ Responses to challenges in participatory evaluation practices,
§ Four frameworks for reflective evaluation practice.

Jean King has over 30 years of experience as an award-winning teacher at the University of Minnesota. As an evaluation practitioner, she has received AEA’s Myrdal award for outstanding evaluation practice. Laurie Stevahn is a professor at Seattle University with extensive facilitation experience as well as applied experience in participatory evaluation.

Session 7: Participatory Eval
Prerequisites: Basic eval skills
Scheduled: 11/1-2, 9 am to 4 pm
Level: Intermediate

TUESDAY, NOV 2, FULL DAY SESSIONS, 9 am to 4 pm

Using GIS in Evaluation

Geographic Information Systems (GIS) comprise a suite of tools that can help you to manage, analyze, model, and display complex spatial information and relationships in an accessible way. GIS have been used in a variety of contexts and are highly applicable to evaluators examining community-level or larger change.

Through lecture in the morning and hands-on plotting and analysis of real-world data in the afternoon, you will investigate how to use GIS to depict change by examining spatial relationships. You will receive a free demo copy of one of the most commonly used GIS software tools, ARCGIS. Attendees should bring a charged PC laptop and be computer literate, including creating, manipulating, and deleting files, using the mouse, and navigating among files and windows.
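
The workshop itself teaches ARCGIS, but the underlying idea of joining program data to geography and mapping spatial patterns can be sketched with the open-source geopandas library; this is an assumed illustration only, with a hypothetical shapefile and column names, not part of the workshop.

```python
# Illustrative sketch only: not ARCGIS and not workshop material.
# File path and column names are hypothetical.
import geopandas as gpd
import matplotlib.pyplot as plt

# Hypothetical shapefile of neighborhoods with an outcome rate at two time points
neighborhoods = gpd.read_file("neighborhoods.shp")
neighborhoods["change"] = neighborhoods["rate_2004"] - neighborhoods["rate_2000"]

# A choropleth map of change over time highlights spatial patterns in the data
neighborhoods.plot(column="change", legend=True, cmap="RdYlGn")
plt.title("Change in outcome rate, 2000 to 2004")
plt.show()
```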

You will learn:
§ The purposes, strengths, and weaknesses of GIS,
§ When and how to apply GIS to show change over time,
§ The basics of running ARCGIS,
§ How to make a map and identify spatial patterns in the data.

Ralph Renger, Sydney Pettygrove, Seumas Rogan, and Adriana Cimetta authored an article in the Winter 2002 issue of the American Journal of Evaluation on using GIS as an evaluation tool. The team will reconvene to share their expertise in a hands-on format.

Session 8: Using GIS
Scheduled: 11/2, 9 am to 4 pm
Level: Beginner, no prerequisites


From Logic Models to Program Theory

While simple logic models are an adequate way to gain clarity and initial understanding about a program, sound program theory can enhance understanding of the underlying logic of the program by providing a disciplined way to state and test assumptions about how program activities are expected to lead to program outcomes.

Lecture, exercises, discussion, and peer-critique will help you to develop and use program theory as a basis for decisions about measurement and evaluation methods, to disentangle the success or failure of a program from the validity of its conceptual model, and to facilitate the participation and engagement of diverse stakeholder groups.

You will learn:
§ To employ program theory to understand the logic of a program,
§ How program theory can improve evaluation accuracy and use,
§ To use program theory as part of participatory evaluation practice.

Stewart Donaldson is a Professor and Director of the Institute of Organizational and Program Evaluation Research at Claremont Graduate University. He has served as co-chair of AEA’s Theory-driven Evaluation TIG and has published widely on applying program theory.

Session 9: Program Theory
Prerequisites: Experience or Training in Logic Models
Scheduled: 11/2, 9 am to 4 pm
Level: Intermediate


Evaluation in Immigrant Communities

Attend to the unique issues of working in communities and cultures with which you may be unfamiliar and within which your craft is unknown. This workshop will examine such issues as access, entry, relationship-building, sampling, culturally specific outcomes, instrument development, translation, culturally appropriate behavior and stakeholder participation.

Drawing on case examples from practice in immigrant communities, we will illustrate what has and hasn’t worked, principles of good practice, and the learning opportunities for all involved. Through simulations and exercises you will experience the challenges and rewards of cross-cultural evaluation.

You will learn:
§ Approaches to evaluation practice in unfamiliar cultures and settings,
§ How to draw upon the traditions of communities in mutually beneficial ways,
§ Useful, respectful and credible ways to collect and report information for stakeholders.

Barry Cohen and Mia Robillos are on the staff of Rainbow Research, Inc. They bring experience working with Hmong, Latino, Somali, Nigerian, Native American, and Filipino cultures in their evaluation practice.

Session 10: Immigrant Communities
Prerequisites: Work with immigrant communities
Scheduled: 11/2, 9 am to 4 pm
Level: Intermediate


Designing Evaluations of Complex, Open Systems

Complex, systemic programs involving multiple strategies at different levels must be represented for evaluation within non-linear, multi-tiered frameworks. The framework explored in this workshop captures the complexity of the system in an accessible way that improves evaluation design and facilitates explanation of the system, and of the evaluation design, in a wide range of evaluation contexts.

Through discussion and examination of tools and techniques, you will move from the theoretical to the practical application of this simple, yet not simplistic framework.

You will learn:
§ Evaluation design using a holistic, systemic, multi-tiered framework,
§ Selection of tools to capture and communicate diverse results,
§ How to maintain the integrity and usefulness of the evaluation as priorities and contexts change.

Barry Kibel hails from the Pacific Institute for Research and Evaluation and has over 20 years of experience as an evaluator and facilitator. This work builds upon themes explored in his book Success Stories as Hard Data. John Grove works with the Population Leadership Program and has collaborated with Kibel to develop this framework.

Session 11: Complex Systems Evaluation
Prerequisites: Experience evaluating complex programs
Scheduled: 11/2, 9 am to 4 pm
Level: Advanced


The Causal Wars: A Survival Guide

Evaluators and the educational research world are deeply divided over the issue of measuring causation, in a kind of rebirth of the old paradigm wars. We’ll look at multiple perspectives, from Hume and Aristotle, to Campbell and Cook, to more modern treatments, and examine exactly how this impacts evaluation and evaluation funding.

This workshop employs a combination of lecture and discussion aimed at providing a full background and practical procedures for both (1) designing evaluations for scientific acceptability, and (2) securing funding for evaluation in the educational and social science areas.

You will learn:
§ The reasons behind the arguments for using randomized controlled trials to support causal conclusions,
§ A set of 8 scientifically sound designs for establishing causation,
§ The tough requirements for adding case study methodology to the 8,
§ What to do when seeking funding in the present situation.

Michael Scriven is among the most well-known professionals in the field today with 25 years of work on the philosophy of science, much of it centering on causation, and over 90 publications in the field of evaluation.

Session 12: Causal Wars
Prerequisites: Basic evaluation skills
Scheduled: 11/2, 9 am to 4 pm
Level: Intermediate


Designing and Implementing M&E Systems that Work

Develop your skills in using specialized outcome charting techniques to design effective monitoring and evaluation (M&E) systems in results-based accountability settings. These techniques can be administered internally and achieve a high level of validity and reliability while remaining credible to stakeholders and illustrating how performance furthers agency goals and objectives.

Through lecture, discussion, and group work, and drawing upon examples from the US, Canada, and Europe, you will develop knowledge and skills for the successful design and implementation of outcomes and performance monitoring systems. The M&E systems illustrated here are applicable in a variety of contexts from community organizations to government units.

You will learn:
§ How to design results focused M&E systems for internal administration,
§ Skills and techniques to implement a focus on results,
§ Key elements for successful implementation of an M&E system.

Andy Rowe works for GHK International and has served as a consultant to federal, state, and local government, as well as to community and regional health and human services initiatives.

Session 13: M&E Systems
Prerequisites: Basic knowledge of evaluation and accountability
Scheduled: 11/2, 9 am to 4 pm
Level: Intermediate


Structural Equation Modeling for Evaluators
THIS SESSION IS FULL - NO MORE REGISTRANTS ARE BEING ACCEPTED FOR THIS SESSION
AEA DOES NOT MAINTAIN WAITING LISTS FOR WORKSHOPS

Explore the conceptual, technical, and applied issues related to Structural Equation Modeling (SEM). SEM merges confirmatory factor analysis with path analysis and provides means for constructing, testing, and comparing comprehensive structural path models, as well as assessing the goodness of fit of models and their adequacy across multiple samples.

Drawing heavily on structured lecture with opportunity for questions, this session will examine models varying from simple to more complex that cover a wide range of situations including longitudinal and mediational analyses, comparisons between groups, and analyses that include data from different sources such as from supervisors and co-workers.

You will learn:
§ Features and advantages of SEM,
§ When and how to apply 6 basic SEM models,
§ To test specific hypotheses and compare models,
§ To report SEM analysis.

Amiram Vinokur is a charter member of AEA currently at the University of Michigan’s Institute for Social Research. He has written on SEM, uses it in his practice, and teaches it at the Survey Research Summer Institute.

Session 14: SEM for Evaluators
Prerequisites: Intermediate Statistics
Scheduled: 11/2, 9 am to 4 pm
Level: Advanced

 

WEDNESDAY, NOV 3, FULL DAY, 8 am to 3 pm

Exploring Qualitative Data Analysis Software

Which qualitative data analysis (QDA) software package is right for you? What features and advancements are available and how can they facilitate your evaluation work?

This workshop is divided into 4 seminar-style sections: (1) Overview: Comparison of most recent versions of ATLAS.ti, HyperResearch, MAXqda, and NVIVO; (2) Diving In: Discussion of features to facilitate early reading and review of data and recognizing and recording themes; (3) Stepping Back: Comparison of features that allow you to inventory and assess project status; and (4) Application: Integrating qualitative software skills into your own work.

You will learn:
§ The latest advancements and trends in QDA software,
§ How to make careful choices for personal and departmental software purchases,
§ Tips for integrating qualitative software into your analysis plans,
§ Important do’s and don’ts in qualitative software use.

Ray Maietta is President and founder of ResearchTalk Inc, a qualitative inquiry consulting firm. His training and content expertise are extensive; this workshop was ranked among the top 10% of offerings when presented in 2002 and 2003.

Session 15: Qualitative Software
Prerequisites: Experience in qualitative data analysis
Scheduled: 11/3, 8 am to 3 pm
Level: Intermediate


Evaluation-specific Methodology
THIS SESSION IS FULL - NO MORE REGISTRANTS ARE BEING ACCEPTED FOR THIS SESSION
AEA DOES NOT MAINTAIN WAITING LISTS FOR WORKSHOPS

Is there 'something more' that an evaluator needs to be able to do, that a doctorate in a social science won't have taught her/him? Here we’ll spell out the 'something more' in some detail, so that you not only know what it is, but acquire basic skills in it. We'll cover: 1) validation of values; 2) the process of integration of values with factual claims; 3) needs assessment; 4) integration of evaluations of a program (etc.) on several dimensions of merit into an overall evaluation of merit; and 5) setting standards of merit.

Via discussion and mini-lectures, we will first examine the possibility that there is no evaluation-specific methodology and then investigate the inverse through small-group problem solving.

You will learn:
§ The ways in which evaluation differs from other social sciences,
§ Evaluation-specific skills,
§ When and how to apply such methodologies.

Michael Scriven is among the most well-known professionals in the field today with over 90 publications related to evaluation methodology. He is currently a professor at Western Michigan University.

Session 16: Evaluation Methodology
Prerequisites: Basic training or experience in evaluation
Scheduled: 11/3, 8 am to 3 pm
Level: Intermediate


When Your Data Breaks the Rules: Nonparametric Stats
THIS SESSION IS FULL - NO MORE REGISTRANTS ARE BEING ACCEPTED FOR THIS SESSION
AEA DOES NOT MAINTAIN WAITING LISTS FOR WORKSHOPS

When your data does not meet the assumptions of traditional statistics, there is an alternative! Nonparametric statistics allow for rigorous analysis when the sample is small or the data are skewed.

Through lecture and hands-on group work designed to help you select the best techniques for your research question and context, you will learn how to apply nonparametric techniques to your work. You will leave with a manual designed to guide you through nonparametrics when you return home. The facilitator encourages you to email her your research questions so that they may be used as examples (jennifermcamacho@yahoo.com).
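
As a minimal illustration of the idea, the sketch below compares two small, skewed groups with a rank-based Mann-Whitney U test rather than a t-test; the software (Python/SciPy) and data are assumptions for illustration only and are not workshop materials.

```python
# Illustrative sketch (software and data are assumptions, not workshop materials):
# when a small or skewed sample rules out a t-test, a rank-based alternative such
# as the Mann-Whitney U test can compare two groups without normality assumptions.
from scipy import stats

satisfaction_site_a = [2, 3, 3, 4, 5, 5, 5]   # small, skewed hypothetical ratings
satisfaction_site_b = [1, 2, 2, 2, 3, 4]

u_stat, p_value = stats.mannwhitneyu(
    satisfaction_site_a, satisfaction_site_b, alternative="two-sided"
)
print(f"U = {u_stat:.1f}, p = {p_value:.3f}")
```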

You will learn:
§ When to use parametric versus nonparametric statistics,
§ How to determine which statistics are appropriate,
§ How to run, interpret, and write up nonparametric data,
§ How to apply techniques learned on your own after the workshop.

Jennifer Camacho offered this workshop in 2000 and attendees ranked it number 1 among the professional development offerings. An experienced facilitator, she currently works at the Sinai Community Institute.

Session 17: Nonparametric Stats
Prerequisites: Basic knowledge of statistics
Scheduled: 11/3, 8 am to 3 pm
Level: Intermediate


Fundamentals in Grantwriting

Increasing evaluation capacity often requires the ability to develop and sustain grants and contracts. This workshop will provide you with the skills needed to tap into resources for identifying grant opportunities and then prepare effective grants that result in increased revenue and capacity for evaluation activity.

Framed by presentations on key components of grantwriting, you will engage in hands-on learning by working in small groups to brainstorm and develop proposal concepts in response to hypothetical grant requests.

You will learn:
§ The core components to grants and contracts applications,
§ How to build effective grantwriting workgroups,
§ Key aspects of budget development and negotiation,
§ Basic grant-management skills,
§ How to communicate with funders.

Michael Shafer is a returning trainer for 2004. He has personally generated in excess of $20 million in grant revenue and served as Principal Investigator (PI) on over 35 grants and contracts. He currently manages a grant-funded research and training center at the University of Arizona.

Session 18: Grantwriting
Scheduled: 11/3, 8 am to 3 pm
Level: Beginner, no prerequisites


Shoestring Evaluation: Overcoming Constraints

What do you do when asked to perform an evaluation on a program that is well underway? When time and resources are few, yet expectations high? When questions about baseline data and control groups are met with blank stares? The Shoestring Evaluation approach seeks to ensure the best quality evaluation under real-life constraints.

Through presentations and discussion, with real-world examples drawn from international development evaluation, you will study the Shoestring Evaluation approach. The workshop focuses on developing country evaluation, but the techniques are applicable to evaluators working in any context with budget, time, and data constraints.

You will learn:
§ The six steps of the Shoestring Evaluation approach,
§ Ways to reduce the costs and time of data collection,
§ How to reconstruct baseline and control group data,
§ Methods for addressing threats to validity and accuracy of findings.

Michael Bamberger will coordinate a team of four facilitators with extensive real-world experience in conducting evaluations in a range of contexts worldwide. Dr. Bamberger is a leader in the area of conducting evaluations with budget, time, and data constraints.

Session 19: Shoestring Evaluation
Prerequisites: Basic evaluation skills
Scheduled: 11/3, 8 am to 3 pm
Level: Intermediate
 


Using Rasch to Measure Services and Outcomes

Program evaluation has a great need for valid measures, for example of the quantity and quality of services and of the outcomes of those services. Many evaluators are frustrated when existing instruments are not well tailored to the task and do not produce the needed sensitive, accurate, valid findings.

Through an extensive presentation, followed by discussion and hands-on work with data sets and computer-generated output, this workshop will explore Rasch Measurement as a means to effectively measure program services. Attendees should bring their own charged PC laptop and will receive a copy of the Winsteps software at the workshop.
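
The workshop works with the Winsteps software; purely as a minimal sketch of the dichotomous Rasch model itself (not Winsteps code, and not workshop material), the response probability can be written as a function of the difference between person ability and item difficulty:

```python
# Minimal sketch of the dichotomous Rasch model (not Winsteps code; the workshop
# provides that software). The probability of an affirmative/correct response
# depends only on the difference between person ability and item difficulty.
import math

def rasch_probability(ability: float, difficulty: float) -> float:
    """P(X = 1) = exp(ability - difficulty) / (1 + exp(ability - difficulty))"""
    return 1.0 / (1.0 + math.exp(-(ability - difficulty)))

# Hypothetical values on the logit scale
print(rasch_probability(ability=1.0, difficulty=0.0))   # ~0.73
print(rasch_probability(ability=1.0, difficulty=2.0))   # ~0.27
```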

You will learn:
§ Differences between Classical Test Theory and Rasch Measurement,
§ Why, when, and how to apply Rasch measurement,
§ Hands-on application of Rasch analysis using Winsteps software,
§ Interpretation of Rasch/Winsteps output.

Kendon Conrad is from the University of Illinois at Chicago and Nikolaus Bezrucko is an independent consultant. They bring extensive experience in both teaching about, and applying, Rasch measurement to evaluation.

Session 20: Rasch Measurement
Scheduled: 11/3, 8 am to 3 pm
Level: Beginner, no prerequisites


Using Effect Size and Association Measures

Answer the call to report effect size and association measures as part of your evaluation results. Improve your capacity to understand and apply a range of measures including: standardized measures of effect sizes from Cohen, Glass, and Hedges; Eta-squared; Omega-squared; the Intraclass correlation coefficient; and Cramer’s V.

Through mini-lecture, hands-on exercises, and demonstration, you will improve your understanding of the theoretical foundation and computational procedures for each measure as well as ways to identify and correct for bias.
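
As one small worked example of the standardized measures listed above, the sketch below computes Cohen's d from the pooled standard deviation of two groups; the data and the use of Python are assumptions for illustration only, not workshop materials.

```python
# Illustrative sketch (data and software are assumptions): computing Cohen's d,
# one of the standardized effect size measures covered in the workshop, using
# the pooled standard deviation of two groups.
import statistics

group_1 = [72, 85, 78, 90, 66, 81, 77, 88]
group_2 = [70, 74, 69, 80, 65, 72, 68, 75]

mean_1, mean_2 = statistics.mean(group_1), statistics.mean(group_2)
var_1, var_2 = statistics.variance(group_1), statistics.variance(group_2)
n_1, n_2 = len(group_1), len(group_2)

pooled_sd = (((n_1 - 1) * var_1 + (n_2 - 1) * var_2) / (n_1 + n_2 - 2)) ** 0.5
cohens_d = (mean_1 - mean_2) / pooled_sd
print(f"Cohen's d = {cohens_d:.2f}")
```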

You will learn:
§ How to select and compute the appropriate measure of effect size or association,
§ Considerations in the use of confidence intervals,
§ Ways to identify and correct for measurement bias.

Jack Barnette, from the University of Iowa, and James McLean, from East Tennessee State University, have been conducting research and writing on this topic for over five years. Together, they bring over 60 years of teaching and workshop facilitation experience and both have received awards for outstanding teaching.

Session 21: Effect Size, Measures
Prerequisites: Univariate statistics through ANOVA & power
Scheduled: 11/3, 8 am to 3 pm
Level: Intermediate


Utilization-focused Evaluation

Evaluations should be useful, practical, accurate and ethical. Utilization-focused Evaluation is a process that meets these expectations and promotes use of evaluation from beginning to end. By carefully implementing evaluations for increased utility, this approach encourages situational responsiveness, adaptability and creativity.

With an overall goal of teaching you the process of Utilization-focused Evaluation, the session will combine lectures with concrete examples and interactive case analyses, including cases provided by the participants.

You will learn:
§ The fundamental premises of Utilization-focused Evaluation,
§ The implications of focusing an evaluation on intended use by intended users,
§ Options for evaluation design and methods based on situational responsiveness, adaptability and creativity,
§ How to use the Utilization-focused Evaluation checklist & flowchart.

Michael Quinn Patton is an independent consultant and professor at the Union Institute. An internationally known expert on Utilization-focused Evaluation, in 1997 he published the third edition of the book on which this session is based, Utilization-Focused Evaluation: The New Century Text.

Session 22: Utilization-focused
Scheduled: 11/3, 8 am to 3 pm
Level: Beginner, no prerequisites


Enhancing Evaluation Using Systems Concepts

Systems-based approaches examine the inter-relationships among human and contextual actors in systems both small and large. They have the potential to yield powerful insights for program improvement, yet they are often misunderstood. For instance, the ‘systems approach’ is commonly assumed to require that ‘everything’ be included, making evaluations impossibly complicated. In fact, most systems approaches focus on simplicity by rigorously identifying what can be left out.

A brief overview of the field of systems-based approaches will be followed by instruction and case study in the practical application of 3 to 4 different and commonly used approaches.

You will learn:
§ The underlying concepts of systems theory,
§ Key facets of systems approaches and their application,
§ Applications of system dynamics, soft systems methodology, and complex adaptive systems.

Bob Williams is an independent consultant and a pioneer in applying systems theory to the field of evaluation. In her position as a program officer, Teresa Behrens has been instrumental in bringing systems approaches to the evaluation work at the WK Kellogg Foundation.

Session 23: Systems Approaches
Scheduled: 11/3, 8 am to 3 pm
Level: Beginner, no prerequisites


Evaluation Practice: A Collaborative Approach

Collaborative evaluation is an approach that actively engages program stakeholders in the evaluation process. When stakeholders collaborate with evaluators, stakeholder and evaluator understanding increases and the utility of the evaluation is often enhanced.

Employing discussion, hands-on activities, and roleplaying, this workshop focuses on strategies and techniques for conducting successful collaborative evaluations, including ways to avoid common collaborative evaluation pitfalls.

You will learn:
§ A collaborative approach to evaluation,
§ Levels of collaborative evaluation and when and how to employ them,
§ Techniques used in collaborative evaluation,
§ Collaborative evaluation design and data-collection strategies.

Rita O’Sullivan of the University of North Carolina and John O’Sullivan of North Carolina A&T State University have offered this well-received session for the past six years at AEA. The presenters have used collaborative evaluation techniques in a variety of program settings, including education, extension, family support, health, and non-profit organizations.

Session 24: Collaborative Eval
Prerequisites: Basic Eval Skills
Scheduled: 11/3, 8 am to 3 pm
Level: Intermediate


Using Cluster Evaluation to Strengthen Programs

Cluster evaluation focuses on evaluating a set, or cluster, of projects targeting a single issue such as literacy. The projects may have little in common other than (usually) a funding source and focus area, and their disparate nature brings unique evaluation challenges and opportunities. Cluster evaluation can be used for knowledge generation, formative and/or summative evaluation.

Following presentations of the distinctions between cluster and multi-site evaluation, this interactive session will employ small group exploration of a case study. Participants will work through the stages of a cluster evaluation from design through data analysis and interpretation.

You will learn:
§ The origin, features, importance, and purposes of cluster evaluation,
§ When to use cluster evaluations,
§ To design a cluster evaluation,
§ To apply general principles of data analysis and interpretation to cluster evaluation.

Beverly Parsons, Executive Director of InSites, has over 20 years of experience in evaluation and has conducted and consulted on numerous cluster evaluations. Rene Lavinghouze is from the Centers for Disease Control, where this workshop has become part of their ‘Evaluation University.’

Session 25: Cluster Eval Design
Prerequisites: Basic Eval Skills
Scheduled: 11/3, 8 am to 3 pm
Level: Intermediate


Concept Mapping for Planning and Evaluation
THIS SESSION IS FULL - NO MORE REGISTRANTS ARE BEING ACCEPTED FOR THIS SESSION
AEA DOES NOT MAINTAIN WAITING LISTS FOR WORKSHOPS

Concept mapping integrates high quality quantitative and qualitative techniques to produce an interpretable pictorial view (concept map) of interrelated ideas and concepts from multiple stakeholders. We will focus on direct applications to program planning and evaluation - identifying relevant concepts, assessing the degree to which stakeholder groups show consensus and examining the match between theoretical expectations and observed outcomes.

The workshop provides a complete introduction to The Concept System method and software; shows how to guide participants through the process; and presents case histories of its use. Attendees should bring a PC-based laptop with a CD drive and be familiar with its use.
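
The workshop teaches The Concept System method and software; as a rough sketch of the general statistical idea only (assuming, as is typical of concept mapping, multidimensional scaling of a stakeholder sorting-similarity matrix followed by hierarchical clustering), the illustration below uses open-source Python libraries and invented data, not the workshop's tools.

```python
# Rough sketch of the general idea behind concept mapping (not The Concept System
# software used in the workshop). Data and library choices are assumptions.
import numpy as np
from sklearn.manifold import MDS
from scipy.cluster.hierarchy import linkage, fcluster

# Hypothetical similarity matrix: proportion of stakeholders who sorted each
# pair of five statements into the same pile.
similarity = np.array([
    [1.0, 0.8, 0.7, 0.2, 0.1],
    [0.8, 1.0, 0.6, 0.3, 0.2],
    [0.7, 0.6, 1.0, 0.2, 0.3],
    [0.2, 0.3, 0.2, 1.0, 0.9],
    [0.1, 0.2, 0.3, 0.9, 1.0],
])
dissimilarity = 1.0 - similarity

# Place statements on a two-dimensional map, then group them into clusters
coords = MDS(n_components=2, dissimilarity="precomputed", random_state=0).fit_transform(dissimilarity)
clusters = fcluster(linkage(coords, method="ward"), t=2, criterion="maxclust")
print(coords)
print(clusters)
```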

You will learn:
§ Ways to work with groups to conceptualize and plan evaluation,
§ How to develop and interpret concept maps,
§ Application of multivariate statistical methods to evaluation.

William Trochim will lead a team of four facilitators experienced in the development and use of concept maps. He is the developer of the concept mapping methodology and has published on this method since 1983.

Session 26: Concept Mapping
Prerequisites: Experience as an evaluator and computer skills
Scheduled: 11/3, 8 am to 3 pm
Level: Intermediate


Multicultural Issues in Program Evaluation

THIS WORKSHOP HAS BEEN CANCELED.


Economic Effectiveness: Valuing Inputs and Benefits

Regardless of the economic technique used to judge the economic effectiveness of programs, all such techniques rely on the valuation of program inputs and benefits. This workshop will provide practical advice and hands-on experience estimating input costs and valuing benefits such as outputs, outcomes or impacts.

Through lecture and hands-on group work you will design and implement simple techniques to estimate the costs and benefits of your program. You will also learn when it is reasonable to attempt to value benefits and when it is better to leave them as quantitative and qualitative descriptions of effects.

You will learn:
§ How to estimate program costs in a variety of contexts,
§ Options for valuing benefits,
§ How costs and benefits are used in ROI, Benefit-Cost Analysis, and program effectiveness measures.

Andy Rowe is an economist with GHK International, Chair of AEA’s International Committee, and former President of the Canadian Evaluation Society. He has assessed costs and benefits of social, environmental and arts programs ranging from alternative dispute resolution initiatives to industrial and labor development programs, to community economic development.

Session 28: Economic Effectiveness
Scheduled: 11/3, 8 am to 3 pm
Level: Beginner, no prerequisites


Multilevel Models in Program Evaluation
THIS SESSION IS FULL - NO MORE REGISTRANTS ARE BEING ACCEPTED FOR THIS SESSION
AEA DOES NOT MAINTAIN WAITING LISTS FOR WORKSHOPS

Multilevel models (also called hierarchical linear models) open the door to understanding the inter-relationships among nested structures (students in classrooms in schools in districts for instance), or the ways evaluands change across time (perhaps longitudinal examinations of health interventions). This workshop will demystify multilevel models and present them at an accessible level, stressing their practical applications in evaluation.

Through discussion and hands-on demonstrations, we will address four key questions: When are multilevel models necessary? How can they be implemented using standard software? How does one interpret multilevel results? What are recent developments in this arena?
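
As a minimal sketch of the "standard software" point, and assuming Python with pandas and statsmodels (the workshop does not prescribe a package), a random-intercept model for hypothetical students nested within schools might look like this:

```python
# Minimal sketch, assuming Python/statsmodels as one example of standard software.
# Data are hypothetical: students nested within schools.
import pandas as pd
import statsmodels.formula.api as smf

data = pd.DataFrame({
    "school": ["A", "A", "A", "B", "B", "B", "C", "C", "C"],
    "hours":  [1, 2, 3, 1, 2, 3, 1, 2, 3],
    "score":  [60, 65, 72, 55, 59, 66, 70, 74, 81],
})

# Fixed effect of tutoring hours, with a random intercept for each school
model = smf.mixedlm("score ~ hours", data, groups=data["school"])
result = model.fit()
print(result.summary())
```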

You will learn:
§ The basics of multilevel modeling,
§ When to use multilevel models in your evaluation practice,
§ How to implement models using widely available software.

Sanjeev Sridharan is an Evaluation Specialist at Westat and a trainer for SPSS. Wendy Garrard works at the Vanderbilt Institute for Public Policy Studies. They have employed multilevel models in both evaluation projects and as the basis for publications.

Session 29: Multilevel Models
Prerequisites: Basic understanding of Regression Methodology
Scheduled: 11/3, 8 am to 3 pm
Level: Intermediate

WEDNESDAY, NOV 3, HALF DAY, FROM 8 am to 11 am

The Nuts and Bolts of Group Techniques for Evaluation

Improve your facilitation skills when working with a wide range of stakeholders by adding Brainstorming (traditional and electronic), Nominal Group Technique (NGT), and Ideawriting to your evaluator’s toolbox. Group techniques allow evaluators to tap into individual and collective wisdom, generate ideas, and identify next steps when planning and conducting evaluations and acting on evaluation results.

A mix of lecturette and experiential learning opportunities will allow you to walk away comfortable in your ability to identify opportunities that would benefit from group techniques and then to apply those techniques confidently to your evaluation practice for maximum impact.

You will learn:
§ To facilitate Brainstorming, Nominal Group Technique (NGT), and Ideawriting sessions,
§ When and how to apply each of the three to evaluation,
§ To develop effective questions for group techniques.

Vicki Staebler Tardino is an experienced organization development consultant working with professional and leadership development initiatives. Jennifer Dewey hails from Learning Point Associates, where she has been involved in large and small group work for quality assurance, project management, and evaluation.

Session 30: Group Techniques
Scheduled: 11/3, 8 am to 11 am
Level: Beginner, no prerequisites


Video Use for Evaluation

The FAVOR method, an acronym for Feedback and Analysis via Video Observation and Reflection, uses videotaped observations and feedback as a springboard for reflection on the part of stakeholders and participants in a program. Such reflection, in turn, promotes problem solving around issues inherent in the program, its design, goals, strategies or operation. The method is a participatory tool for evaluation that not only starts and ends with the stakeholders, but incorporates them in the process of evaluation as well.

You will have the opportunity to practice videotaping activities, view the videotape and reflect on the experience from the point of view of evaluation. Please bring a video camera if you have one available.

You will learn:
§ How to use the video camera to collect data,
§ How to analyze the videotext,
§ How to present feedback through reflection on the videotext,
§ How to use video as a tool for participatory evaluations.

Barbara Rosenstein has used the FAVOR method in her evaluation work for over 15 years, as well as publishing on the method and offering workshops on it both in the US and internationally.

Session 31: Video Use
Prerequisites: Know how to use a video camera
Scheduled: 11/3, 8 am to 11 am
Level: Intermediate


Using Outcomes Theory to Improve Programs

Managing for Outcomes approaches are transforming the rhetoric, if not the reality, of management practice. Yet there is a lack of understanding of ‘Outcomes Theory’ – the attempt to develop a sound set of principles that define a well-constructed outcome set for an organization. Should outcomes always be measurable and attributable? Should they be set for the organization itself or for the field in which the organization operates?

Using presentations, discussion, and small group work, you will examine such issues in detail and analyze outcome sets you bring to the workshop for how they conform to the elements of Outcomes Theory. You will gain confidence in developing, discussing and critiquing organizational outcomes and outcomes hierarchies. More info at www.strategicevaluation.info.

You will learn:
§ Principles for structuring and critiquing outcomes,
§ How to address measurability, attribution, and autonomy,
§ How to apply Outcomes Theory to evaluation practice.

Paul Duignan is an experienced facilitator with over 20 years in evaluation. He is at the forefront of applying Outcomes Theory to evaluation in his private consulting work.

Session 32: Outcomes Theory
Scheduled: 11/3, 8 am to 11 am
Level: Beginner, no prerequisites


Minding Your Mind: Using Your Brain More Effectively

Evaluators seldom pay attention to how their brains work day to day; how they are affected by food; how memories are created, organized, and accessed; under what physical circumstances inspiration arises; how ideas are generated and connected to one another; how sleep, exercise, unstructured time, and life problems affect what they think about; and how easily or stressfully they handle day to day thinking chores.

Through lecture and discussion, this workshop addresses how thinking can be made easier, less stressful, and more productive. This is not about "gimmicks" or clever philosophical insights, but is based on the working habits of successful people involved in intellectual work.

You will learn:
§ How to work effectively on many evaluation topics at one time,
§ How to increase the odds of finding solutions to hard problems,
§ What to eat to increase your mental energy,
§ How to connect with thoughtful people for mutual benefit.

George Grob works with the Office of Inspector General Corps and has managed over 1000 evaluations and 100 evaluators in the past 15 years. Increasingly, his teaching focuses on effective performance, problem solving, and thinking for evaluators.

Session 33: Minding Your Mind
Scheduled: 11/3, 8 am to 11 am
Level: Beginner, no prerequisites

WEDNESDAY, NOV 3, HALF DAY, FROM 12 pm to 3 pm

Focus Group Interviewing
THIS SESSION IS FULL - NO MORE REGISTRANTS ARE BEING ACCEPTED FOR THIS SESSION
AEA DOES NOT MAINTAIN WAITING LISTS FOR WORKSHOPS

The focus group moderator plays a critical role in the quality of the focus group interview. This workshop will examine the function of the moderator and suggest methods that maximize his or her role. Specific topics will include when to use focus groups, developing powerful questions, solving problems regularly encountered by moderators, using effective and efficient analysis, and alternative moderating styles.

Through lecture, demonstration, discussion and practice, this hands-on session will introduce best practices in moderating, developing questions and analyzing results for focus groups. You will have the opportunity to participate in and/or observe a mock focus group.

You will learn:
§ Critical ingredients of focus group research,
§ Focus group moderating skills,
§ Development of focus group questions,
§ Analysis strategies for group data.

Richard Krueger is co-author of one of the most widely read texts on focus groups, Focus Groups: A Practical Guide for Applied Research, as well as numerous articles on the topic. He has conducted over 300 focus groups in the public, private, and non-profit sectors and is a highly experienced workshop facilitator who has offered sessions at AEA since 1988.

Session 34: Focus Groups
Scheduled: 11/3, 12 pm to 3 pm
Level: Beginner, no prerequisites


Empowerment Evaluation

Empowerment Evaluation builds program capacity and fosters program improvement. It teaches people to help themselves by learning how to evaluate their own programs. The basic steps of empowerment evaluation include: 1) establishing a mission or unifying purpose for a group or program; 2) taking stock - creating a baseline to measure future growth and improvement; and 3) planning for the future - establishing goals and strategies to achieve goals, as well as credible evidence to monitor change. The role of the evaluator is that of coach or facilitator in an empowerment evaluation, since the group is in charge of the evaluation itself.

Employing lecture, activities, demonstration and discussion, the workshop will introduce you to the steps of empowerment evaluation and tools to facilitate the approach.

You will learn:
§ Steps to empowerment evaluation,
§ How to facilitate the prioritization of program activities,
§ Ways to guide a program’s self-assessment.

David Fetterman hails from Stanford University and is the author of the seminal text on this subject, Empowerment Evaluation. He chairs AEA’s Collaborative, Participatory and Empowerment Evaluation Topical Interest Group and is a highly experienced and sought-after facilitator.

Session 35: Empowerment Evaluation
Scheduled: 11/3, 12 pm to 3 pm
Level: Beginner, no prerequisites


Cultivating Self as Responsive Instrument

Evaluative judgments are inextricably bound up with culture and context. The AEA Guiding Principles encourage greater realization that excellence and ethical practice in evaluation are intertwined with orientations toward, responsiveness to, and capacities for, engaging diversity. Breathing life into this expectation calls for critical ongoing personal homework for evaluators regarding their lenses vis-à-vis their judgment-making.

We will employ individual and group reflective exercises to tend mindfully to issues of culture and context, developing and refining our self-as-responsive-instrument. From our privileged standpoints, we often look but do not see, listen but do not hear, touch but do not feel.

You will learn:
§ To attend to your self as instrument,
§ To identify the lenses influencing your meaning-making/evaluation practice,
§ How others’ perceptions of the evaluator affect evaluation effectiveness.

Hazel Symonette brings over 30 years of work in diversity-related arenas to the workshop. She is founder and Director of the Excellence Through Diversity Institute at the University of Wisconsin-Madison.

Session 36: Cultivating Self
Scheduled: 11/3, 12 pm to 3 pm
Level: Beginner, no prerequisites


Outcomes for Success in Evaluating Collaboration

Even when organizations working together share common goals, they operate in different environments, at different scales, with different resources, and employ different strategies. The evaluator, whether working for one organization or a consortium, often must help stakeholders frame and measure outcomes that show how change is being made through the collaborative process – and how collaborative and system outcomes are related to participant outcomes.

Through short lectures and group exercises, this workshop will present methods and give you experience in the design of evaluations for these complex situations.

You will learn:
§ To apply outcomes-based evaluation to the evaluation of systems and collaborations,
§ To help clients see the links between systems building and client outcomes,
§ To evaluate the influence of collaboratives on clients.

Bill Leon has ten years of experience conducting evaluations for nonprofits, government agencies, and foundations. He currently works with Organizational Research Services focusing on projects with consortia engaged in systems change.

Session 37: Evaluating Collaboration
Prerequisites: Logic Modeling Basics
Scheduled: 11/3, 12 pm to 3 pm
Level: Intermediate


Creative, Interactive Ways to Communicate and Report
THIS SESSION IS FULL - NO MORE REGISTRANTS ARE BEING ACCEPTED FOR THIS SESSION
AEA DOES NOT MAINTAIN WAITING LISTS FOR WORKSHOPS

This unique session is designed to take practicing evaluators a level beyond their current communicating and reporting practices.

You will self-assess your practices to determine what formats and strategies you use most often, what challenges and successes you have experienced, and why. Then, select among learning opportunities in the newest areas of communicating and reporting: design and layout to enhance appeal and readability, video and web conferencing, chat rooms and teleconferencing, working sessions, photography and cartoons, poetry and drama, video and computer-generated presentations, and website communications.

You will learn:
§ To self-assess your communications needs,
§ Cutting edge strategies in areas that you select as most applicable to your evaluation practice,
§ In-depth about the one strategy that can benefit you right now.

Rosalie Torres and Hallie Preskill are co-authors of the forthcoming 2nd edition of Evaluation Strategies for Communicating and Reporting (Sage). They have applied their recommendations in a range of evaluation contexts, bringing practical experience to the session.

Session 38: Reporting
Prerequisites: Experience w/evaluation communicating and reporting
Scheduled: 11/3, 12 pm to 3 pm
Level: Intermediate

SUNDAY, NOV 7, HALF DAY, FROM 9 am to 12 pm

Using Stories in Evaluation

Stories are an effective means of communicating the ways in which individuals are influenced by educational, health, and human service agencies and programs. Unfortunately, the story has been undervalued and largely ignored as a research and reporting procedure. Stories are sometimes regarded with suspicion because of the haphazard manner in which they are captured or the cavalier promise of what the story depicts.

Through short lecture, discussion, demonstration, and hands-on activities, this workshop explores effective strategies for discovering, collecting, analyzing and reporting stories that illustrate program processes, benefits, strengths or weaknesses.

You will learn:
§ How stories can reflect disciplined inquiry,
§ How to capture, save, and analyze stories in evaluation contexts,
§ How stories for evaluation purposes are often different from other types of stories.

Richard Krueger is on the faculty at the University of Minnesota and has over 20 years of experience in capturing stories in evaluation. He has offered well-received professional development workshops at AEA and for non-profit and government audiences for over 15 years.

Session 39: Using Stories
Scheduled: 11/7, 9 am to 12 pm
Level: Beginner


Logistic & Multinomial Regression for Evaluators

Sometimes, much of the data that evaluators are able to collect consists of Yes/No responses or categorical data having more than two categories. Such situations pose challenges for analysis and interpretation.

In this workshop, you will explore logistic and multinomial regression techniques that allow for ‘richness of analysis’, even when lacking ‘richness of data.’ In lectures that blend statistical theory with practical applications, you will develop a better understanding of when such techniques are appropriate. You will also be given several data sets, and will have the opportunity to run analyses using either SPSS or SAS. Please bring a fully charged computer with SPSS or SAS loaded.
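
The workshop uses SPSS or SAS; purely as an assumed alternative illustration of the same idea, a logistic model for a yes/no outcome can be expressed in Python with statsmodels. The data below are hypothetical and this sketch is not part of the workshop materials.

```python
# Illustrative sketch only: the workshop uses SPSS or SAS, but the same logistic
# model can be expressed with statsmodels. Data here are hypothetical.
import numpy as np
import statsmodels.api as sm

# Hypothetical data: hours of participation and a yes/no completion outcome
hours = np.array([1, 2, 2, 3, 4, 5, 6, 7, 8, 9], dtype=float)
completed = np.array([0, 0, 0, 0, 1, 0, 1, 1, 1, 1])

X = sm.add_constant(hours)          # intercept plus predictor
model = sm.Logit(completed, X).fit()
print(model.summary())

# Exponentiated coefficients give odds ratios for interpretation
print(np.exp(model.params))
```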

You will learn:
§ Statistical theory underlying logistic and multinomial regression,
§ To run analyses for these techniques using real data,
§ To interpret and report the results.

Ronald Szoc hails from Information Technology International. He has used statistics for 35 years in applied settings and facilitated workshops on data analysis for both technical and non-technical audiences.

Session 40: Regression
Prerequisites: Experience w/regression & ANOVA, knowledge of SPSS or SAS
Scheduled: 11/7, 9 am to 12 pm
Level: Intermediate


Using Logic Modeling and Evaluability Assessment

Evaluators can use logic modeling and evaluability assessment to get a reasonable level of agreement among key stakeholders on: program goals; the inputs, processes, and activities to be used to achieve the goals; internal and external factors that can influence goal achievement; evaluation priorities; and intended uses of evaluation information.

Through mini-lectures and discussion, this workshop will describe and illustrate the logic modeling and evaluability assessment processes. In small-group exercises and subsequent reporting, participants will have opportunities to test portions of both processes and to relate those processes to their own experience.

You will learn:
§ The basic steps in logic modeling,
§ The basic steps in evaluability assessment,
§ Their use to achieve a reasonable level of agreement on program design, evaluation priorities, and intended uses of evaluation results.

Joseph Wholey has been on the faculty at the University of Southern California since 1980 and has served as Director of Program Evaluation Studies at The Urban Institute, Deputy Assistant Secretary for Evaluation at the US Department of Health and Human Services, and Senior Advisor for Evaluation at the US General Accounting Office.

Session 41: Using Logic Models
Scheduled: 11/7, 9 am to 12 pm
Level: Beginner
 


Focus Group Moderator Training
THIS SESSION IS FULL - NO MORE REGISTRANTS ARE BEING ACCEPTED FOR THIS SESSION
AEA DOES NOT MAINTAIN WAITING LISTS FOR WORKSHOPS

The literature is rich in textbooks and case studies on many aspects of focus groups including design, implementation and analyses. Missing, however, are guidelines and discussions on how to moderate a focus group.

In this experiential learning environment, you will find out how to maximize time, build rapport, create energy and apply communication tools in a focus group to maintain the flow of discussion among the participants and elicit responses from more than one person. You will learn at least 15 strategies to create and maintain a focus group discussion. These strategies can also be applied in other evaluation settings such as community forums and committee meetings to stimulate discussion.

You will learn:
§ How to moderate a focus group,
§ At least 15 strategies to create and maintain focus group discussion,
§ How to stimulate discussion in community forums, committee meetings, and social settings.

Nancy Ellen Kiernan has facilitated over 150 workshops on evaluation methodology and moderated focus groups in 50+ studies with groups ranging from Amish dairy farmers in barns to at-risk teens in youth centers, to university faculty in classrooms.

Session 42: Moderator Training
Prerequisites: Having moderated a focus group
Scheduled: 11/7, 9 am to 12 pm
Level: Intermediate


The Art and Science of Creating Useful Surveys

It’s not easy to design a survey that is reliable and valid, feasible to administer, and returns data that are usable and useful. This workshop will provide a brief, practical introduction to the fundamentals of survey design and administration.

A combination of lecture, discussion, and hands-on examples will address:  determining your purpose; types of surveys; basics of survey design; construction of items and response scales (i.e., how to avoid common mistakes); reliability and validity issues; and key steps to manage the process of survey design through data collection. Handouts will include checklists of do’s and don’ts, examples of key topics and sample questions, and sources for further information.

You will learn:
§ Types of surveys and how to choose among them,
§ Key steps to survey design,
§ Errors to avoid when writing questions and response scales,
§ Specific strategies to increase reliability and validity.

Allison Titcomb is an Evaluation Associate with LeCroy & Milligan Associates, Inc., in Tucson, Arizona, and is Past-president of the Arizona Evaluation Network. A veteran facilitator, she has over 17 years of experience in evaluation planning, design and implementation.

Session 43: Useful Surveys
Scheduled: 11/7, 9 am to 12 pm
Level: Beginner, no prerequisites
