Professional Development Workshops

 

Professional Development Workshops are hands-on, interactive sessions that provide an opportunity to learn new skills or hone existing ones at Evaluation 2005: Crossing Borders, Crossing Boundaries, the joint conference of the Canadian Evaluation Society and the American Evaluation Association.

Professional development workshops precede and follow the conference. These workshops differ from sessions offered during the conference itself in at least three ways: (1) each is longer (3, 6, or 12 hours) and thus provides a more in-depth exploration of a skill or area of knowledge; (2) presenters are paid for their time and are expected to have significant experience both in presenting and in the subject area; and (3) attendees pay separately for these workshops and are given the opportunity to evaluate the experience. Sessions are filled on a first-come, first-served basis and most are likely to fill before the conference begins.

 

REGISTRATION: Registration for professional development workshops is handled as part of the conference registration form; however, you may register for professional development workshops even if you are not attending the conference itself.

 

Both the American Evaluation Association (AEA) and the Canadian Evaluation Society (CES) are processing registrations for Evaluation 2005. You need only register through one or the other for the conference and workshops. If you reside in Canada, you should register through CES. If you reside in the US, you should register through AEA. If you are an international attendee, we strongly encourage you to register through CES as well: as the society of the host country, CES will issue any letters of invitation and has the most up-to-date information related to international travel into and out of Canada. You are on the AEA conference site. The CES conference site may be accessed at http://c2005.evaluationcanada.ca/index.cgi?_lang=en.

 

FEES: Workshop registration fees are in addition to the fees for conference registration:

 

                     Two Day       One Day       Half Day
                     Workshop      Workshop      Workshop
CES/AEA Members      $340          $170          $85
Students             $180          $90           $45
Nonmembers           $440          $220          $110

 

Please note that all fees are given in US dollars. If you register via the Canadian Evaluation Society's site, you will register at approximately equivalent rates in Canadian dollars.

 

FULL SESSIONS: Sessions that are closed because they have reached their maximum attendance will be clearly marked below the session name. No further registrations will be accepted for full sessions and we do not maintain waiting lists. Once sessions are closed, they will not be re-opened.

 

BROWSE BY TIME SLOT:

 

TWO DAY, MONDAY-TUESDAY, OCTOBER 24-25, 9 am to 4 pm

Qualitative Methods; Quantitative Methods; Consulting Skills; Evaluation 101; Performance Planning; Logic Models; Participatory Eval

TUESDAY, OCTOBER 25, FULL DAY SESSIONS, 9 am to 4 pm

Performance Measurement; Systems Approaches; RealWorld Evaluation; Focus Groups; Coding; Reporting; SEM for Evaluators; Eval Methodology; Results Based Approach

WEDNESDAY, OCTOBER 26, FULL DAY,  8 am to 3 pm

Tools of Quality; Policy Implementation; Success Case Method Evaluation; Rasch Measurement; Appreciative Inquiry; Needs Assessment; Utilization-focused; Theory Driven Evaluation; Presenting Evaluation Findings; 360-Degree Feedback; Effect Size Measures; Minding Your Mind; Instrument Development; Multilevel Models; Values in Evaluation; Collaborative Eval; Immigrant Communities; Cost-Effectiveness; Experimental Design

WEDNESDAY, OCTOBER 26, HALF DAY, 8 am to 11 am

Community Change; Empowerment Evaluation; Performance Management; Using Stories; Fun and Evaluation

WEDNESDAY, OCTOBER 26, HALF DAY, 12 pm to 3 pm

Programs for Children; TRIAGE; Survey Design; Swinging Dance; Cultivating Self

SUNDAY, OCTOBER 30, HALF DAY, 9 am to 12 pm

Program Theory; Collaborative Step-by-Step; Moderator Training; Analyzing Text; Leap to Consulting

 

TWO DAY, MONDAY-TUESDAY, OCT 24-25, FROM 9 am to 4 pm

 

Qualitative Methods
THIS WORKSHOP IS FULL - NO MORE REGISTRANTS ARE BEING ACCEPTED 
WE DO NOT MAINTAIN WAITING LISTS FOR WORKSHOPS

Qualitative data can humanize evaluations by portraying people and stories behind the numbers. Qualitative inquiry involves using in-depth interviews, focus groups, observational methods, and case studies to provide rich descriptions of processes, people, and programs. When combined with participatory and collaborative approaches, qualitative methods are especially appropriate for capacity-building-oriented evaluations.

 

Through lecture, discussion, and small-group practice, this workshop will help you to choose among qualitative methods and implement those methods in ways that are credible, useful, and rigorous. It will culminate with a discussion of new directions in qualitative evaluation.

 

You will learn:

  • Types of evaluation questions for which qualitative inquiry is appropriate,

  • Purposeful sampling strategies,

  • Interviewing, case study, and observation methods,

  • Analytical approaches that support useful evaluation.

Michael Quinn Patton is an independent consultant and professor at the Union Institute. An internationally known expert on utilization-focused evaluation and qualitative methods, he published the third edition of Qualitative Research and Evaluation Methods (SAGE) in 2001.

 

Session 1: Qualitative Methods
Scheduled: Monday and Tuesday, October 24 and 25, 9 am to 4 pm
Level: Beginner, no prerequisites


Quantitative Methods

Quantitative data offers opportunities for numerical descriptions of populations and samples. The challenge is in knowing which analyses are best for a given situation. 

 

Designed for the practitioner needing a refresher course and/or guidance in applying quantitative methods to evaluation contexts, the workshop covers the basics of parametric and nonparametric statistics, as well as how to report your findings in ways useful to stakeholder groups.

Hands-on exercises interspersed with mini-lectures will introduce methods and concepts. The instructor will review examples of research and evaluation questions and the statistical methods appropriate to developing a quantitative data-based response.

You will learn:

  • The conceptual basis for a variety of statistical procedures,

  • How more sophisticated procedures are based on the statistical basics,

  • Which analysis technique is best for a given data set or evaluation question,

  • How to interpret and report findings from these analyses.
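
As a small taste of the kinds of decisions the workshop addresses, the sketch below compares a parametric test with its nonparametric counterpart on the same two groups using the open-source SciPy library. It is an illustrative example with made-up scores, not part of the workshop materials.

    # Hypothetical example: comparing two program groups with a parametric
    # and a nonparametric test.
    from scipy import stats

    treatment = [72, 85, 78, 90, 66, 81, 77, 88]    # made-up outcome scores
    comparison = [70, 74, 69, 80, 65, 72, 68, 75]

    # Parametric: independent-samples t-test (assumes roughly normal data)
    t_stat, t_p = stats.ttest_ind(treatment, comparison)

    # Nonparametric: Mann-Whitney U test (rank-based, fewer assumptions)
    u_stat, u_p = stats.mannwhitneyu(treatment, comparison, alternative="two-sided")

    print(f"t-test:       t = {t_stat:.2f}, p = {t_p:.3f}")
    print(f"Mann-Whitney: U = {u_stat:.1f}, p = {u_p:.3f}")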

Katherine McKnight applies quantitative analysis in her practice as a research consultant and program evaluator for Public Interest Research Services. Additionally, she teaches Research Methods, Statistics, and Measurement in the Department of Psychology at the University of Arizona in Tucson, Arizona.

Session 2: Quantitative Methods
Scheduled: Monday and Tuesday, October 24 and 25, 9 am to 4 pm
Level: Beginner, no prerequisites


Consulting Skills for Evaluators: Getting Started

 

Do you have what it takes to be a successful independent consultant? Designed for evaluators who are considering becoming independent consultants or who have recently begun a consulting practice, the workshop will help you assess your own skills and characteristics to determine whether you have what it takes to be successful, and to strategize about areas in need of improvement.

The workshop will focus on the full scope of operating an independent consulting practice from marketing and proposal writing, to developing client relationships, to project management, ethics, and business operations. Case examples, hands-on activities, and take-home materials will prepare you to enter the world of consulting.

You will learn:

  • If consulting is an appropriate career choice for you,

  • How to break into the evaluation consulting market – and stay there,

  • Time and money management strategies,

  • Professional practices including customer service, ethical operations, and client relations.

Gail Barrington started Barrington Research Group 20 years ago as a sole practitioner. Today, she has a staff of 7 and a diverse client base. A top-rated facilitator, she has taught workshops throughout the US and Canada.

Session 3: Consulting Skills
Scheduled: Monday and Tuesday, October 24 and 25, 9 am to 4 pm
Level: Beginner, no prerequisites


Evaluation 101: Intro to Evaluation Practice

Begin at the beginning and learn the basics of evaluation from an expert trainer. The session will focus on the logic of evaluation to answer the key question: "What resources are transformed into what program evaluation strategies to produce what outputs for which evaluation audiences, to serve what purposes?" Enhance your skills in planning, conducting, monitoring, and modifying the evaluation so that it generates the information needed to improve program results and communicate program performance to key stakeholder groups.

A case-driven instructional process, using discussion, exercises, and lecture will introduce the steps in conducting useful evaluations: Getting started, Describing the program, Identifying evaluation questions, Collecting data, Analyzing and reporting, and Using results.

You will learn:

  • The basic steps to an evaluation and important drivers of program assessment,

  • Evaluation terminology,

  • Contextual influences on evaluation and ways to respond,

  • Logic modeling as a tool to describe a program and develop evaluation questions and foci,

  • Methods for analyzing and using evaluation information.

John McLaughlin has been part of the evaluation community for over 30 years working in the public, private, and non-profit sectors. He has presented this workshop in multiple venues and will tailor this two-day format for Evaluation 2005.

Session 4: Evaluation 101
Scheduled: Monday and Tuesday, October 24 and 25, 9 am to 4 pm
Level: Beginner, no prerequisites


Performance Planning, Measurement and Reporting for Continuous Improvement

THIS WORKSHOP IS FULL - NO MORE REGISTRANTS ARE BEING ACCEPTED 
WE DO NOT MAINTAIN WAITING LISTS FOR WORKSHOPS

Explore the latest in practical tools and techniques that have evolved to build capacity across diverse stakeholders to describe, analyze, plan, measure, report and manage performance. This workshop will focus on ideas and concepts from traditional evaluation practice, as well as new approaches from the fields of systems thinking, policy analysis, risk management and action research.

 

This workshop will showcase the most promising emerging cases, and will invite you to engage in hands-on small group work to further reinforce key concepts, practical applications to real situations, and group learning.

 

You will learn:

  • How to establish multi-level results chains/performance frameworks,

  • Uses of a Needs-Results hierarchy to set strategy,

  • ‘Umbrella’ or high-level Results-based Management and Accountability Frameworks (RMAFs),

  • Risk-results analyses,

  • Approaches to implementing evaluation strategies across diverse populations,

  • Practical approaches to performance planning, measurement and reporting.

Steve Montague, Partner at Performance Management Network Inc. and author of The Three Rs of Performance, will lead this workshop. He has two decades of international experience in performance measurement, program evaluation, review and audit projects as a management consultant and as an evaluation manager.

Session 5: Performance Planning
Scheduled: Monday and Tuesday, October 24 and 25, 9 am to 4 pm
Level: Beginner, no prerequisites


Logic Models for Program Evaluation and Planning

 

Many programs fail to start with a clear description of the program and its intended outcomes, undermining both program planning and evaluation efforts. The logic model, as a map of what a program is and intends to do, is a useful tool for clarifying objectives, improving the relationship between activities and those objectives, and developing and integrating evaluation plans and strategic plans.

First, we will recapture the utility of program logic modeling as a simple discipline, using cases in public health and human services to explore the steps for constructing, refining and validating models. Then, we’ll examine how to use logic models in evaluation to gain stakeholder consensus and determine evaluation focus, in program monitoring to determine a set of balanced performance measures, and in strategic planning to affirm mission and identify key strategic issues. Both days use modules with presentations, small group case studies, and debriefs to reinforce group work.

You will learn:

  • To construct logic models,

  • To develop an evaluation focus based on a logic model,

  • To use logic models to answer strategic planning questions and select and develop performance measures.

Thomas Chapel is the central evaluation resource person and logic model trainer at the Centers for Disease Control. This is an expanded version of a workshop he has taught for the past 3 years to much acclaim.

Session 6: Logic Models
Scheduled: Monday and Tuesday, October 24 and 25, 9 am to 4 pm
Level: Beginner, no prerequisites


Participatory Evaluation

Participatory evaluation practice requires evaluators to be skilled facilitators of interpersonal interactions. This workshop will provide you with theoretical grounding (social interdependence theory, conflict theory, and evaluation use theory) and practical frameworks for analyzing and extending your own practice.

Through presentations, discussion, reflection, and case study, you will experience strategies to enhance participatory evaluation and foster interaction. You are encouraged to bring examples of challenges faced in your practice for discussion.

You will learn:

  • Strategies to foster effective interaction, including belief sheets; values voting; three-step interview; cooperative rank order; graffiti; jigsaw; and data dialogue,

  • Responses to challenges in participatory evaluation practices,

  • Four frameworks for reflective evaluation practice.

Jean King has over 30 years of experience as an award-winning teacher at the University of Minnesota. As an evaluation practitioner, she has received AEA’s Myrdal award for outstanding evaluation practice. Laurie Stevahn is a professor at Seattle University with extensive facilitation experience as well as applied experience in participatory evaluation.

Session 7: Participatory Eval
Prerequisites: Basic evaluation skills
Scheduled: Monday and Tuesday, October 24 and 25, 9 am to 4 pm
Level: Intermediate

 

TUESDAY, OCTOBER 25, FULL DAY SESSIONS, 9 am to 4 pm

 

Performance Measurement

 

Get up to speed on performance measurement and its contribution to program evaluation. A sound performance measurement system strengthens accountability, demonstrates value for money to taxpayers and funders, reinforces and supports planning and quality assurance processes, and, most importantly, improves performance at the strategic, client, and operational levels.

This intensive, interactive workshop uses many examples and case studies from a variety of government and non-profit agencies. Participants will engage in a simple step-by-step process for developing meaningful performance measures. Focus will be on the use of performance measurement in reporting, management decision making, and achieving program results.

You will learn:

  • How to build a Simple Program Logic Model to identify a program’s key results,

  • How to develop performance measures for planning, monitoring and evaluating results,

  • How to use performance measurement in a variety of practical management processes.

John Robert Allen is a management consultant with more than 28 years of experience in performance measurement and program evaluation in the public sector.  He is an experienced facilitator who has presented this workshop frequently in both the United States and Canada.

 

Session 8: Performance Measurement
Scheduled: Tuesday, October 25, 9:00 am to 4:00 pm
Level: Beginner, no prerequisites


Dealing with Reality: Systems Approaches to Evaluation

THIS WORKSHOP IS FULL - NO MORE REGISTRANTS ARE BEING ACCEPTED 
WE DO NOT MAINTAIN WAITING LISTS FOR WORKSHOPS

Systems theory is a large and highly evaluative field of inquiry. Evaluators can use many of the tools and techniques from the systems field, often with little modification. These tools can reveal otherwise hidden insights, help address ongoing evaluation challenges, and provide valuable short-cuts.

 

This workshop explores design, data collection, and analysis methods drawn from two systems theory based approaches – Soft Systems Methodology and Complex Adaptive Systems. Participants will learn and experiment with innovative techniques on a real case study and then determine the usefulness of these techniques to their own projects.

 

You will learn:

  • Criteria for selecting systems based methods,

  • Ways to integrate evaluation and systems-based methods to respond to the demands of a particular program,

  • Multiple innovative and systems-based methods of evaluation design, data collection and analysis.

Bob Williams is an independent consultant and a pioneer in applying systems theory to the field of evaluation. Glenda Eoyang is founding Executive Director of the Human Systems Dynamics Institute and has presented systems approach workshops with Bob at previous AEA conferences.

 

Session 9: Systems Approaches
Prerequisites: Basics of needs assessment, eval methods and qualitative data collection and analysis
Scheduled: Tuesday, October 25, 9:00 am to 4:00 pm
Level: Intermediate


RealWorld Evaluation: Overcoming Constraints

THIS WORKSHOP IS FULL - NO MORE REGISTRANTS ARE BEING ACCEPTED 
WE DO NOT MAINTAIN WAITING LISTS FOR WORKSHOPS

What do you do when asked to perform an evaluation on a program that is well underway? When time and resources are few, yet expectations high? When questions about baseline data and control groups are met with blank stares? This workshop presents a seven-step approach that seeks to ensure the best quality evaluation under real-life constraints.

 

Through presentations and discussion, with real-world examples drawn from international development evaluation, you will study the RealWorld Evaluation approach. The workshop focuses on developing country evaluation, but the techniques are applicable to evaluators working in any context with budget, time, and data constraints.

You will learn:

  • The seven steps of the RealWorld Evaluation approach,

  • Ways to reconstruct baseline data,

  • How to identify and overcome threats to the validity or adequacy of evaluation methods.

Jim Rugh will coordinate a team of four facilitators with extensive real-world experience in conducting evaluations in a range of contexts worldwide. He is a leader in the area of conducting evaluations with budget, time, and data constraints.

Session 10: RealWorld Evaluation
Prerequisites: Academic or practical knowledge of the basics of evaluation
Scheduled: Tuesday, October 25, 9:00 am to 4:00 pm
Level: Intermediate


Focus Group Interviewing
THIS WORKSHOP IS FULL - NO MORE REGISTRANTS ARE BEING ACCEPTED 
WE DO NOT MAINTAIN WAITING LISTS FOR WORKSHOPS

The focus group moderator plays a critical role in the quality of the focus group interview. This workshop will examine the function of the moderator and suggest methods that maximize his or her role. Specific topics will include when to use focus groups, developing powerful questions, solving problems regularly encountered by moderators, using effective and efficient analysis, and alternative moderating styles.

 

Through lecture, demonstration, discussion, and practice, this hands-on session will introduce best practices in moderating, developing questions, and analyzing results for focus groups. You will have the opportunity to participate in and/or observe a mock focus group.

You will learn:

  • Critical ingredients of focus group research,

  • Focus group moderating skills,

  • Development of focus group questions,

  • Analysis strategies for group data.

Richard Krueger is co-author of one of the most widely read texts on focus groups, Focus Groups: A Practical Guide for Applied Research (SAGE), as well as numerous articles on the topic. He has conducted over 300 focus groups in the public, private, and non-profit sectors and is a highly experienced workshop facilitator who has offered sessions at AEA since 1988.

Session 11: Focus Groups
Prerequisites: Experience with individual or group interviewing
Scheduled: Tuesday, October 25, 9:00 am to 4:00 pm
Level: Intermediate


Coding? Qualitative Software? Why and How

Coding and qualitative software are viewed as resources that assist in the search for meaning in qualitative data. This session is designed to use practical experience with real data, in the form of group exercises, to direct discussion of important principles that shape qualitative analysis.

 

Individual and small group work are framed by seminars that explore pre-code work, code evolution, and memo writing. Qualitative software, including ATLAS.ti and MAXqda, is presented as a useful tool to integrate into analysis, but not as a solution to analysis challenges.

 

You will learn:

  • The value of “context” in analytic decision-making,

  • Processes that support the evolution of coding qualitative data,

  • Strategies for moving through coding to later phases of finding meaning from narrative data,

  • How and when to integrate software into the qualitative analysis process.

Ray Maietta is President and founder of ResearchTalk Inc, a qualitative inquiry consulting firm. He is an active qualitative researcher who also brings extensive experience as a trainer to the session. Jacob Blasczyk is an active, experienced evaluator with in-depth experience in using qualitative software.

 
Session 12: Coding
Prerequisites: Experience in qualitative data analysis
Scheduled: Tuesday, October 25, 9:00 am to 4:00 pm
Level: Intermediate


Creative, Interactive Strategies for Communicating and Reporting

 

This unique session is designed to take practicing evaluators a level beyond their current communicating and reporting practices.

 

You will self-assess your practices to determine what formats and strategies you use most often, what challenges and successes you have experienced, and why. Then, select among learning opportunities in the newest areas of communicating and reporting: video and web conferencing, chat rooms and teleconferencing, working sessions, photography and cartoons, poetry and drama, video and computer-generated presentations, and website communications.

 

You will learn:

  • To self-assess your communications needs,

  • Cutting edge strategies in areas that you select as most applicable to your evaluation practice,

  • An in-depth understanding of the one strategy that can benefit you right now.

Rosalie Torres, of Torres Consulting Group, and Mary Piontek, from the University of Michigan, represent two-thirds of the authoring team for the 2nd edition of Evaluation Strategies for Communicating and Reporting (SAGE). They have applied their recommendations in a range of evaluation contexts, bringing practical experience to the workshop.

 

Session 13: Reporting
Prerequisites: Experience with evaluation communicating and reporting
Scheduled: Tuesday, October 25, 9:00 am to 4:00 pm
Level: Intermediate


Structural Equation Modeling for Evaluators
THIS WORKSHOP IS FULL - NO MORE REGISTRANTS ARE BEING ACCEPTED 
WE DO NOT MAINTAIN WAITING LISTS FOR WORKSHOPS

Explore the conceptual, technical, and applied issues related to Structural Equation Modeling (SEM). SEM merges confirmatory factor analysis with path analysis and provides means for constructing, testing, and comparing comprehensive structural path models as well as comparing the goodness of fit of models and their adequacy across multiple samples.

 

Drawing heavily on structured lecture with opportunity for questions, this session will examine models varying from simple to more complex that cover a wide range of situations including longitudinal and mediational analyses, comparisons between groups, and analyses that include data from different sources such as from supervisors and co-workers.

 

You will learn:

  • Features and advantages of SEM,

  • When and how to apply 6 basic SEM models,

  • To test specific hypotheses and compare models,

  • To report SEM analysis.
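
To give a concrete feel for the kind of relationships SEM generalizes, the sketch below estimates a simple mediation path (exposure to mediator to outcome) with two ordinary regressions using the open-source statsmodels library. The data and variable names are hypothetical, and a full SEM package would estimate the paths jointly and report fit indices; this is only an illustration, not the workshop's method.

    # Hypothetical illustration of a simple mediation path model,
    # a special case of the structural models covered in the workshop.
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(0)
    n = 200
    x = rng.normal(size=n)                       # program exposure
    m = 0.5 * x + rng.normal(size=n)             # mediator
    y = 0.4 * m + 0.2 * x + rng.normal(size=n)   # outcome
    data = pd.DataFrame({"x": x, "m": m, "y": y})

    a = smf.ols("m ~ x", data).fit().params["x"]      # path x -> m
    b = smf.ols("y ~ x + m", data).fit().params["m"]  # path m -> y, controlling for x
    print(f"Estimated indirect (mediated) effect a*b: {a * b:.2f}")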

Amiram Vinokur is a charter member of AEA currently at the University of Michigan’s Institute for Social Research. He has written on SEM, uses it in his practice, and teaches it at the Survey Research Summer Institute.

 

Session 14: SEM for Evaluators
Prerequisites: Intermediate Statistics
Scheduled: Tuesday, October 25, 9:00 am to 4:00 pm
Level: Intermediate


Evaluation Methodology Basics

 

Evaluation logic and methodology is a set of principles (logic) and procedures (methodology) that guide evaluators in combining descriptive data with relevant values to draw conclusions that address how good, valuable, or important something is, rather than just describing what it is like or what happened.

 

This workshop combines mini-lectures, demonstrations, small group exercises and interactive discussions to offer a “nuts and bolts” introduction to concrete, easy-to-follow, practical methods for conducting an evaluation.

You will learn:

  • The difference between research methodology and evaluation-specific methodology,

  • The fundamentals of theory-based needs assessment,

  • Where the “values” come from in an evaluation,

  • How to respond to questions about subjectivity,

  • How to determine which evaluative criteria are more important than others,

  • The fundamentals of using rubrics to convert descriptive data to evaluative findings.

Jane Davidson has nearly 20 years of experience teaching and conducting workshops on a wide variety of topics including evaluation and research methods. The methodologies presented in this workshop are drawn from her book Evaluation Methodology Basics: The nuts and bolts of sound evaluation (SAGE).

 
Session 15: Eval Methodology
Scheduled: Tuesday, October 25, 9:00 am to 4:00 pm
Level: Beginner, no prerequisites


Applying the Results Based Approach in Funding Applications and Ongoing Programs

 

Evaluation has become increasingly important as funding opportunities become more competitive. This workshop is geared, in particular, to those working with non-profit organizations that want to build evaluation into funding applications and programs.

 

Through hands-on exercises, presentations, and discussion, you will explore the Results Based Management approach to Program Logic Models and Evaluation Matrices. A Program Logic Model is a tool to help design and evaluate programs, demonstrating the relationship between program inputs, activities, outputs, outcomes, and impacts. An Evaluation Matrix is a tool for systematically identifying evaluation questions, indicators of success, appropriate data sources, and data collection methods.

You will learn:

  • How to create Program Logic Models,

  • How to develop an Evaluation Matrix,

  • How to build evaluation into your funding applications and programs.

Harry Cummings, the award-winning director of Harry Cummings and Associates, has designed and led numerous training workshops in economic impact assessment. His associate, Nichole Fraser, brings to the workshop a range of experience in applying the Results Based Approach in real-world situations.

 
Session 16: Results Based Approach
Scheduled: Tuesday, October 25, 9:00 am to 4:00 pm
Level: Beginner, no prerequisites

 

WEDNESDAY, OCTOBER 26, FULL DAY,  8 am to 3 pm

 

Using the Tools of Quality

 

In the early stages of development, Quality Assurance/Quality Control was defined as “…a type of evaluative monitoring.” Today, quality is more than just an action taken on an assembly line. It is a set of qualitative and quantitative tools applicable to all types of organizations including non-profits, government and education. In the course of its growth Quality has developed and adapted a number of tools and techniques that are relevant to all evaluators and should become a part of your tool kit.

 

This workshop uses mini-lectures, discussion and small group exercises to explore the Seven Tools of Quality Control and the Seven New Tools for Quality Management.
 
You will learn:

  • The Seven Tools of Quality Control, including Pareto Charts, Scatter Diagrams, and Flowcharts,

  • The Seven New Tools for Quality Management, including Affinity Diagrams and Prioritization Matrices,

  • Applications of the tools to evaluation in a variety of contexts.

Thomas Berstene is the founder and president of WorkForce Planning Associates, Inc. He has more than twenty years of experience working in the area of quality and organizational assessments and facilitating professional development seminars.

 

Session 17: Tools of Quality
Scheduled: Wednesday, October 26, 8:00 am to 3:00 pm
Level: Beginner, no prerequisites


Evaluation and Policy Implementation

THIS WORKSHOP IS FULL - NO MORE REGISTRANTS ARE BEING ACCEPTED 
WE DO NOT MAINTAIN WAITING LISTS FOR WORKSHOPS

Policy implementation can be thought of as a multi-level, multi-site intervention in an organizational system. This workshop explores evaluation as it relates to two important implementation strategies: the fidelity approach focuses on development of a large-scale program that is delivered at all sites, while the adaptation approach focuses on development of programs that respond to the needs of groups within the larger system.

Using mini-lectures, small group exercises and discussion, this session enables participants to learn about evaluative methods appropriate to both of these approaches. We will also address the information needs of policy makers, program managers, the legislature and other stakeholders.

You will learn:

  • How to structure evaluations to influence the quality of policy implementation,

  • How to identify and respond to the diverse needs of policy stakeholders,

  • How to determine policy impact.

John Owen founded the graduate teaching program at the Centre for Program Evaluation at Melbourne University. He is a Fellow of the Australasian Evaluation Society and has offered workshops for the AES and AEA. Pam St Leger brings to the workshop a wealth of facilitation skills and is a Senior Lecturer at the Centre for Program Evaluation.

 

Session 18: Policy Implementation
Prerequisites: Experience with policy development and evaluation
Scheduled: Wednesday, October 26, 8:00 am to 3:00 pm
Level: Intermediate


Success Case Method Evaluation

THIS WORKSHOP IS FULL - NO MORE REGISTRANTS ARE BEING ACCEPTED 
WE DO NOT MAINTAIN WAITING LISTS FOR WORKSHOPS

The Success Case Method (SCM) is a proven and effective method to quickly evaluate the impact of training programs and other organizational or performance improvement initiatives. It is research-based, practical, and efficient, and it produces highly credible and trustworthy evidence of impact. The SCM intentionally seeks the very best results a program is producing so that best practices can be leveraged and extended.

Workshop participants will practice with and learn about the SCM through presentations, simulation exercises, and review of case examples. Extensive take home resources, examples, and job aids facilitate on-the-job application of workshop content.

You will learn:

  • Fundamental principles and concepts of Success Case Evaluation Method (SCM),

  • How to plan and conduct a complete SCM evaluation,

  • Strategic applications of the SCM.

Robert Brinkerhoff developed the SCM over the past 20 years while evaluating the impact and effectiveness of training interventions in organizations worldwide, and he is the author of 12 books on the topic including The Success Case Method: Find Out Quickly What’s Working and What’s Not (Berrett-Koehler). He will lead a team of three during this workshop.

Session 19: Success Case Method Evaluation

Scheduled: Wednesday, October 26, 8:00 am to 3:00 pm
Level: Beginner, no prerequisites


Using Rasch to Measure Services and Outcomes

 

Program evaluation has great need for the development of valid measures, e.g. of the quantity and quality of services and of the outcomes of those services. Many evaluators are frustrated when existing instruments are not well tailored to the task and do not produce the needed sensitive, accurate, valid findings.

 

Through an extensive presentation, followed by discussion and hands-on work with data sets and computer-generated output, this workshop will explore Rasch Measurement as a means to effectively measure program services. Attendees should bring their own charged PC laptop and will receive a copy of the Winsteps software at the workshop.

 

You will learn:

  • Differences between Classical Test Theory and Rasch Measurement,

  • Why, when, and how to apply Rasch measurement,

  • Hands-on application of Rasch analysis using Winsteps software,

  • Interpretation of Rasch/Winsteps output.
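
For orientation, the sketch below shows the dichotomous Rasch model itself, in which the probability of success depends only on the difference between person ability and item difficulty (both in logits). It is an illustrative example, not a substitute for the Winsteps exercises.

    import math

    def rasch_probability(ability, difficulty):
        # Dichotomous Rasch model: P(correct response) for a person/item pair
        return 1.0 / (1.0 + math.exp(-(ability - difficulty)))

    # A person whose ability equals the item's difficulty succeeds half the time
    print(rasch_probability(0.0, 0.0))    # 0.5
    # An easier item raises that probability for the same person
    print(rasch_probability(0.0, -1.0))   # about 0.73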

Kendon Conrad is from the University of Illinois at Chicago and Nikolaus Bezrucko is an independent consultant. They bring extensive experience in both teaching about, and applying, Rasch measurement to evaluation.

 

Session 20: Rasch Measurement
Scheduled: Wednesday, October 26, 8:00 am to 3:00 pm
Level: Beginner, no prerequisites


Using Appreciative Inquiry In Evaluation

 

Experience the power of appreciative reframing! An appreciative approach to evaluation maximizes chances for sustainable impact by helping programs identify what is working and drawing on existing strengths to build capacity and improve program effectiveness. Appreciatively oriented evaluation does not veil problems, but rather refocuses energy in a constructive and empowering way.

 

You will experience the various phases of Appreciative Inquiry (AI) using appreciative interviews to focus on evaluation, developing indicators and data collection tools, conducting appreciative interviews, analyzing interview data, and sharing results. The workshop uses real-world case examples, exercises, discussion and short lectures to show participants how to incorporate AI into their evaluation contexts.

 

You will learn:

  • The principles and applications of appreciative inquiry,

  • How to formulate evaluation goals and questions using the appreciative inquiry approach,

  • How to develop interview guides, conduct interviews and analyze interview data,

  • How to reframe deficits into assets.

Tessie Catsambas, President of EnCompass LLC, and Hallie Preskill, Claremont Graduate University professor and evaluation consultant, together bring to the workshop years of training experience and hands-on practice using AI in a variety of program contexts.

 

Session 21: Appreciative Inquiry
Scheduled: Wednesday, October 26, 8:00 am to 3:00 pm
Level: Beginner, no prerequisites


Needs Assessment

THIS WORKSHOP IS FULL - NO MORE REGISTRANTS ARE BEING ACCEPTED 
WE DO NOT MAINTAIN WAITING LISTS FOR WORKSHOPS

Assessing needs is a task often assigned to evaluators with the assumption that they have been trained in or have experience with the activity. However, surveys of evaluation training indicated that, by the year 2002, only one formal course on the topic was being taught in university-based evaluation programs.

 

This workshop uses hands-on activities interspersed with mini-presentations and discussions to provide an overview of needs assessment. The focus will be on basic terms and concepts, models of needs assessment, steps necessary to conduct a needs assessment and an overview of methods. 

 

You will learn:

  • The definitions of need and needs assessment, and the levels, types, and examples of needs,

  • Models of needs assessment with emphasis on a comprehensive 3-phase model,

  • How to manage a comprehensive needs assessment,

  • Methods commonly used in needs assessment.

James Altschuld is a professor at Ohio State University and the instructor of the only needs assessment course in the most recent study of evaluation training. He has co-written two books on needs assessment and is a well-known presenter of workshops on the topic in numerous respected venues. 

Session 22: Needs Assessment

Scheduled: Wednesday, October 26, 8:00 am to 3:00 pm
Level: Beginner, no prerequisites 


Utilization-focused Evaluation

THIS WORKSHOP IS FULL - NO MORE REGISTRANTS ARE BEING ACCEPTED 
WE DO NOT MAINTAIN WAITING LISTS FOR WORKSHOPS

Evaluations should be useful, practical, accurate and ethical. Utilization-focused Evaluation is a process that meets these expectations and promotes use of evaluation from beginning to end. By carefully implementing evaluations for increased utility, this approach encourages situational responsiveness, adaptability and creativity.

 

With an overall goal of teaching you the process of Utilization-focused Evaluation, the session will combine lectures with concrete examples and interactive case analyses, including cases provided by the participants.

 

You will learn:

  • The fundamental premises of Utilization-focused Evaluation,

  • The implications of focusing an evaluation on intended use by intended users,

  • Options for evaluation design and methods based on situational responsiveness, adaptability and creativity,

  • How to use the Utilization-focused Evaluation checklist & flowchart.

Michael Quinn Patton is an independent consultant and professor at the Union Institute. An internationally known expert on Utilization-focused Evaluation, in 1997 he published the third edition of the book on which this session is based, Utilization-Focused Evaluation: The New Century Text (SAGE).

 

Session 23: Utilization-focused
Scheduled: Wednesday, October 26, 8:00 am to 3:00 pm
Level: Beginner, no prerequisites


Theory-Driven Evaluation

THIS WORKSHOP IS FULL - NO MORE REGISTRANTS ARE BEING ACCEPTED 
WE DO NOT MAINTAIN WAITING LISTS FOR WORKSHOPS

Learn the theory-driven approach for assessing and improving program planning, implementation and effectiveness. Participants will explore the conceptual framework of program theory and its structure, which facilitates precise communication between evaluators and stakeholders regarding evaluation needs and approaches to address those needs. 

 

Mini-lectures, group exercises and case studies will illustrate the use of program theory and theory-driven evaluation for program planning, initial implementation, mature implementation and outcomes. In the outcome stages, you will explore the differences among outcome monitoring, efficacy evaluation and effectiveness evaluation.

 

You will learn:

  • How to apply the conceptual framework of program theory,

  • How to apply the theory-driven approach to select an evaluation that is best suited to particular needs,

  • How to apply the theory-driven approach for evaluating a program’s particular stage or the full cycle.

Huey Chen, professor at the University of Alabama at Birmingham, is the author of Theory-Driven Evaluations (SAGE), the classic text for understanding program theory and theory-driven evaluation. He is an internationally known workshop facilitator on the subject.


Session 24: Theory Driven Evaluation
Prerequisites: Knowledge of logic models or program theory
Scheduled: Wednesday, October 26, 8:00 am to 3:00 pm
Level: Intermediate


Presenting Evaluation Findings:  Effective Messaging for Evaluators

 

Explore the difference between “presenting” findings and “communicating” findings. This is an interactive session for any evaluator who is asked to present evaluation findings in front of an audience. Participants are introduced to three primary channels of communication: how you look, how you sound and how you organize what you say.

 

The instructor will model a behavior, explain an idea, and demonstrate a concept, after which attendees will have the opportunity to practice in front of the group and receive coaching and feedback. Come prepared with a specific topic that you’ll be asked to present in the near future.

 

You will learn:

  • The importance of the three main channels of communication,

  • How to eliminate distracting physical behaviors from your presentations,

  • How to organize and effectively stage an evaluation presentation for maximum impact.

Carl Hanssen hails from The Evaluation Center at Western Michigan University and is a certified interpersonal skills instructor.   An experienced facilitator and presentations coach, he excels at developing, practicing and teaching presentation skills.

 

Session 25: Presenting Evaluation Findings
Scheduled: Wednesday, October 26, 8:00 am to 3:00 pm
Level: Beginner, no prerequisites


360-Degree Feedback: Online Methods and Techniques

 

360-degree feedback is a powerful multi-dimensional leadership development tool. Feedback is obtained from a range of perspectives across an organization. The ease of deploying surveys on the web does not mean that organizations are better equipped to implement 360-degree feedback, but it does produce new opportunities and challenges.

 

Through mini-lecture, demonstration, discussion, small group practice and role-play this workshop will introduce you to conducting effective 360-degree feedback online.

 

You will learn:

  • The steps in the 360-degree feedback process,

  • The evaluation framework to understand outcomes for different stakeholders,

  • How to implement 360-degree instruments,

  • How to interpret reports and select strategies for effective feedback.

Zita Unger hails from Evaluation Solutions, a consulting company with extensive experience in the design and delivery of online evaluation instruments. She has received the Australasian Evaluation Society's Evaluation Training and Service award for outstanding contributions to the profession of evaluation.

 

Session 26: 360-Degree Feedback
Scheduled: Wednesday, October 26, 8:00 am to 3:00 pm
Level: Beginner, no prerequisites


Using Effect Size and Association Measures

 

Answer the call to report effect size and association measures as part of your evaluation results. Improve your capacity to understand and apply a range of measures including: standardized measures of effect sizes from Cohen, Glass, and Hedges; Eta-squared; Omega-squared; the Intraclass correlation coefficient; and Cramer’s V.

 

Through mini-lecture, hands-on exercises, and demonstration, you will improve your understanding of the theoretical foundation and computational procedures for each measure as well as ways to identify and correct for bias.

 

You will learn:

  • How to select and compute the appropriate measure of effect size or association,

  • Considerations in the use of confidence intervals,

  • Ways to identify and correct for measurement bias.
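
As a point of reference, the sketch below computes one of the standardized measures covered, Cohen's d, as the mean difference between two groups divided by their pooled standard deviation. The scores are made up for illustration only and are not workshop material.

    import statistics

    def cohens_d(group1, group2):
        # Standardized mean difference using the pooled sample standard deviation
        n1, n2 = len(group1), len(group2)
        s1, s2 = statistics.variance(group1), statistics.variance(group2)
        pooled_sd = (((n1 - 1) * s1 + (n2 - 1) * s2) / (n1 + n2 - 2)) ** 0.5
        return (statistics.mean(group1) - statistics.mean(group2)) / pooled_sd

    # Made-up scores for a program group and a comparison group
    print(round(cohens_d([34, 38, 41, 45, 37], [30, 33, 35, 36, 31]), 2))  # 1.73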

Jack Barnette from The University of Alabama at Birmingham and James McLean from The University of Alabama - Tuscaloosa have been conducting research and writing on this topic for over five years. Together, they bring over 60 years of teaching and workshop facilitation experience and both have received awards for outstanding teaching.

Session 27: Effect Size Measures
Prerequisites: Univariate statistics through ANOVA & power
Scheduled: Wednesday, October 26, 8:00 am to 3:00 pm
Level: Intermediate


Minding Your Mind: Using Your Brain More Effectively

 

Evaluators seldom pay attention to how their brains work day to day; how they are affected by food; how memories are created, organized, and accessed; under what physical circumstances inspiration arises; how ideas are generated and connected to one another; how sleep, exercise, unstructured time, and life problems affect what they think about; and how easily or stressfully they handle day to day thinking chores.

 

Through lecture and discussion, this workshop addresses how thinking can be made easier, less stressful, and more productive. This is not about "gimmicks" or clever philosophical insights, but is based on the working habits of successful people involved in intellectual work.

 

You will learn:

  • How to work effectively on many evaluation topics at one time,

  • How to increase the odds of finding solutions to hard problems,

  • What to eat to increase your mental energy,

  • How to connect with thoughtful people for mutual benefit.

George Grob works with the Office of Inspector General Corps and has managed over 1000 evaluations and 100 evaluators in the past 15 years. Increasingly, his teaching focuses on effective performance, problem solving, and thinking for evaluators.

 

Session 28: Minding Your Mind
Scheduled: Wednesday, October 26, 8:00 am to 3:00 pm
Level: Beginner, no prerequisites


Improved Instrument Development Through Group Facilitation

 

Sound instrument design is the hallmark of quality evaluation practice. A solid understanding of item writing techniques and the ability to effectively implement them are critical. Most item writing, however, takes place in isolation or with uneven contributions from all participants. The item writing process can be greatly enhanced through the use of group techniques.

 

This workshop outlines the process of item writing, showcasing the role of nominal group technique as a process for item writers to voice their opinions, prioritize their thoughts, and create coherent plans.

 

You will learn:

  • A research-based system for generating survey items,

  • When and how to use nominal group technique and its benefits,

  • Technical aspects of item writing,

  • A strategy for pilot testing newly developed items,

  • Pitfalls to avoid in item writing.

Jennifer Dewey is an expert facilitator with varied experience in the areas of quality assurance and evaluation, currently working for ORC Macro as an internal trainer. Stacie Hudgens hails from Learning Point Associates where she designs and implements evaluation projects for external contracts.

 
Session 29: Instrument Development
Scheduled: Wednesday, October 26, 8:00 am to 3:00 pm
Level: Beginner, no prerequisites


Multilevel Models in Program Evaluation

THIS WORKSHOP IS FULL - NO MORE REGISTRANTS ARE BEING ACCEPTED 
WE DO NOT MAINTAIN WAITING LISTS FOR WORKSHOPS

Multilevel models (also called hierarchical linear models) open the door to understanding the inter-relationships among nested structures (students in classrooms in schools in districts for instance), or the ways evaluands change across time (perhaps longitudinal examinations of health interventions). This workshop will demystify multilevel models and present them at an accessible level, stressing their practical applications in evaluation.

 

Through discussion and hands-on demonstrations, the workshop will address four key questions: When are multilevel models necessary? How can they be implemented using standard software? How does one interpret multilevel results? What are recent developments in this arena?

 

You will learn:

  • The basics of multilevel modeling,

  • When to use multilevel models in your evaluation practice,

  • How to implement models using widely available software.
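
As one example of "widely available software," the sketch below fits a simple random-intercept model with the open-source statsmodels library, treating students as nested within schools. The data and variable names are hypothetical and serve only to illustrate the kind of model the workshop covers.

    import pandas as pd
    import statsmodels.formula.api as smf

    # Hypothetical evaluation data: student outcomes nested within schools
    data = pd.DataFrame({
        "score":   [72, 68, 75, 80, 77, 83, 65, 70, 69, 74, 78, 81],
        "treated": [0, 0, 1, 1, 0, 1, 0, 1, 0, 1, 0, 1],
        "school":  ["A", "A", "A", "B", "B", "B", "C", "C", "C", "D", "D", "D"],
    })

    # Random-intercept model: each school gets its own baseline level of the
    # outcome while a single treatment effect is estimated across schools
    model = smf.mixedlm("score ~ treated", data, groups=data["school"])
    print(model.fit().summary())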

Sanjeev Sridharan is head of evaluation programs and a senior research fellow at the University of Edinburgh as well as a trainer for SPSS and an Associate Editor for the American Journal of Evaluation.

 

Session 30: Multilevel Models
Prerequisites: Basic understanding of Regression Methodology
Scheduled: Wednesday, October 26, 8:00 am to 3:00 pm
Level: Intermediate


Getting the Values Into Evaluation

THIS WORKSHOP IS FULL - NO MORE REGISTRANTS ARE BEING ACCEPTED 
WE DO NOT MAINTAIN WAITING LISTS FOR WORKSHOPS

Evaluative conclusions are about the merit or worth of the evaluand, and therefore require a combination of empirical data about its performance and some standards of merit or value. The usual training of evaluators prepares them to find out what a program is and does, but they are not familiar with ways to identify and validate the program's value.

 

This workshop will use a set of handouts and posters plus a series of case studies to make the process clear and accessible. Basic concepts include dimensions of merit, plus weights, bars, stars, steps, and standards as indicators of value strength.

 

You will learn:

  • How to identify all relevant values,

  • How to validate (or invalidate) these values,

  • How to combine the values with empirical results to get an evaluative conclusion.

Michael Scriven is among the most well-known professionals in the field today with 25 years of work on the philosophy of science. He has over 90 publications in the field of evaluation, many tangentially or directly relevant to this theme.

 

Session 31: Values in Evaluation

Prerequisites: Skills in determining empirical facts

Scheduled: Wednesday, October 26, 8:00 am to 3:00 pm
Level: Intermediate  


Evaluation Practice: A Collaborative Approach

 

Collaborative evaluation is an approach that actively engages program stakeholders in the evaluation process. When stakeholders collaborate with evaluators, stakeholder and evaluator understanding increases and the utility of the evaluation is often enhanced.

 

Employing discussion, hands-on activities, and roleplaying, this workshop focuses on strategies and techniques for conducting successful collaborative evaluations, including ways to avoid common collaborative evaluation pitfalls.

 

You will learn:

  • A collaborative approach to evaluation,

  • Levels of collaborative evaluation and when and how to employ them,

  • Techniques used in collaborative evaluation,

  • Collaborative evaluation design and data-collection strategies.

Rita O'Sullivan of the University of North Carolina and John O'Sullivan of North Carolina A&T State University have offered this well-received session for the past seven years at AEA. The presenters have used collaborative evaluation techniques in a variety of program settings, including education, extension, family support, health, and non-profit organizations.

 

Session 32: Collaborative Eval
Prerequisites: Basic Eval Skills
Scheduled: Wednesday, October 26, 8:00 am to 3:00 pm
Level: Intermediate


Evaluation in Immigrant Communities

 

Attend to the unique issues of working in communities and cultures with which you may be unfamiliar and within which your craft is unknown. This workshop will examine such issues as access, entry, relationship-building, sampling, culturally specific outcomes, instrument development, translation, culturally appropriate behavior and stakeholder participation.

 

Drawing on case examples from practice in immigrant communities, we will illustrate what has and hasn’t worked, principles of good practice, and the learning opportunities for all involved. Through simulations and exercises you will experience the challenges and rewards of cross-cultural evaluation.

 

You will learn:

  • Approaches to evaluation practice in unfamiliar cultures and settings,

  • How to draw upon the traditions of communities in mutually beneficial ways,

  • Useful, respectful and credible ways to collect and report information for stakeholders.

Barry Cohen and Mia Robillos are on the staff of Rainbow Research, Inc. They bring experience working with Hmong, Latino, Somali, Nigerian, Native American, and Filipino cultures in their evaluation practice.

 

Session 33: Immigrant Communities
Prerequisites: Work with immigrant communities
Scheduled: Wednesday, October 26, 8:00 am to 3:00 pm
Level: Intermediate 


Cost-Benefit and Cost-Effectiveness Methods

THIS WORKSHOP IS FULL - NO MORE REGISTRANTS ARE BEING ACCEPTED 
WE DO NOT MAINTAIN WAITING LISTS FOR WORKSHOPS

Get an overview of the main theoretical and applied issues in using cost-effectiveness (CEA) and cost-benefit (CBA) techniques. An important element of this workshop is to show that CEA is a retrospective technique well suited to program evaluation, while CBA is a prospective technique best suited for program planning. 

 

Participants will study cost-effectiveness measures applied to health (vaccination) and labor market training and cost-benefit analysis applied to transportation and infrastructure planning through a combination of lectures and group discussion.

 

You will learn:

  • The theoretical foundations of cost benefit analysis and cost effectiveness analysis,

  • The difference between CBA and CEA, and why CEA is preferred in most settings,

  • The data requirements for successful application of CBA and CEA,

  • How to allocate costs to program activities,

  • How to apply CEA in a range of contexts.
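
For orientation, here is a minimal worked example of the core calculation: the cost per unit of outcome, and the incremental cost-effectiveness ratio (ICER) when a program is compared against an alternative. All figures are hypothetical and are not drawn from the workshop cases.

    # Hypothetical figures for a vaccination program versus current practice
    program_cost, program_cases_averted = 500_000, 1_250
    comparison_cost, comparison_cases_averted = 200_000, 400

    # Average cost-effectiveness ratio for the program alone
    acer = program_cost / program_cases_averted            # $400 per case averted

    # Incremental cost-effectiveness ratio versus the comparison
    icer = (program_cost - comparison_cost) / (
        program_cases_averted - comparison_cases_averted)  # about $353 per additional case averted

    print(f"ACER: ${acer:,.0f} per case averted")
    print(f"ICER: ${icer:,.0f} per additional case averted")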

Greg Mason hails from the University of Manitoba, where he has been a member of the Department of Economics since 1974. He has more than 20 years of experience in making professional presentations in the area of cost-effectiveness analysis.

 

Session 34: Cost-Effectiveness
Prerequisites: Evaluation theory or experience
Scheduled: Wednesday, October 26, 8:00 am to 3:00 pm
Level: Intermediate


Experimental Designs in Evaluation

Experimental designs are central to much of the work done in evaluation and yet also a source of controversy. To understand the controversy and make the best use of these designs when appropriate, evaluators need to be versed in the logic, concepts, and practical lessons involved in crafting and implementing experimental designs in evaluation.

 

With an emphasis on hands-on exercises and checklists to guide your later work, this workshop introduces you to effective use of experimental designs in supporting strong causal conclusions about program and policy impacts.

 

You will learn:

  • The conceptual advantages and drawbacks of experimental designs,

  • When and in what contexts to use experimental designs,

  • How to modify experimental designs to address the constraints and information needs of specific contexts, including the use of recently developed designs,

  • How to anticipate and plan for problems in implementing experimental design evaluations.      

Fred Newman is a Professor at Florida International University with over thirty years of experience in performing front line program evaluation studies. George Julnes, Associate Professor of Psychology at Utah State University, has been contributing to evaluation theory for over 15 years and has been working with the Social Security Administration and the U.S. Dept. of Education on the design and implementation of randomized field trials.

 

Session 35: Experimental Design
Scheduled: Wednesday, October 26, 8:00 am to 3:00 pm
Level: Beginner, no prerequisites

 

 

WEDNESDAY, OCTOBER 26, HALF DAY, FROM 8 am to 11 am

 

Community and Systems Change Efforts: Evaluation Dilemmas and Methods

THIS WORKSHOP IS FULL - NO MORE REGISTRANTS ARE BEING ACCEPTED 
WE DO NOT MAINTAIN WAITING LISTS FOR WORKSHOPS

As programs address community and systems change, critical issues have emerged that are not addressed by the methods used to evaluate individual outcomes. This workshop reviews and actively engages participants in exploring these evaluation dilemmas. Methods and approaches will be reviewed, including participatory development of program theory, semi-structured interviewing, tools to measure collaboration, and relational mapping.

 

Participants will engage in a relational mapping exercise using a case study to explore relationships among program characteristics that influence program development and contribute to community-based change efforts.

 

You will learn:

  • How to identify and adapt approaches to evaluating community and systems changes,

  • How to develop approaches to engage multiple and diverse stakeholders,

  • How to apply relational mapping to establish relationships among multiple factors.

Susanna Ginsburg has more than 30 years of experience in designing evaluations at the community and funder levels and conducting hands-on workshops on the topic. She will lead a team of three facilitators experienced in community and systems development evaluation.

 

Session 36: Community Change 

Prerequisites: Experience evaluating community-based programs, application of logic models and theory of change to evaluation design

Scheduled: Wednesday, October 26, 8:00 am to 11:00 am
Level: Intermediate


Empowerment Evaluation

 

Empowerment Evaluation builds program capacity and fosters program improvement. It teaches people to help themselves by learning how to evaluate their own programs. The basic steps of empowerment evaluation include: 1) establishing a mission or unifying purpose for a group or program; 2) taking stock - creating a baseline to measure future growth and improvement; and 3) planning for the future - establishing goals and strategies to achieve goals, as well as credible evidence to monitor change. The role of the evaluator is that of coach or facilitator in an empowerment evaluation, since the group is in charge of the evaluation itself.

 

Employing lecture, activities, demonstration and discussion, the workshop will introduce you to the steps of empowerment evaluation and tools to facilitate the approach.

 

You will learn:

  • Steps to empowerment evaluation,

  • How to facilitate the prioritization of program activities,

  • Ways to guide a program’s self-assessment.

David Fetterman hails from Stanford University and is the editor of (and a contributor to) the recently published Empowerment Evaluation Principles in Practice (Guilford). He chairs the Collaborative, Participatory and Empowerment Evaluation AEA Topical Interest Group and is a highly experienced and sought-after facilitator.

 

Session 37: Empowerment Evaluation
Scheduled:
Wednesday, October 26, 8:00 am to 11:00 am
Level: Beginner, no prerequisites


Building a Performance Management System for Program Improvement

 

Performance management is a process for getting and focusing attention on the most important aspects of your program and using that focus to improve services. Two critical steps in developing such a system are measuring only the few most important aspects and actually using the data.

Through lecture, discussion, exercise, and the use of a range of examples, this session provides an overview of performance management for beginners. Participants will review the basic steps to implementing a new performance management system and have the opportunity to draft and receive feedback on basic performance management plans.

 

You will learn:

  • The definition and purpose of performance management and its relationship to program evaluation,

  • Basic implementation issues and common pitfalls,

  • How to apply this knowledge by drafting the steps to develop your own system.

Natalia Pane of the American Institutes for Research has given workshops and sessions with thousands of professionals and grantees on performance management, strategic planning, and data quality. She has published and presented over 40 pieces on performance management and related data quality and strategic planning issues.

 

Session 38: Performance Management
Scheduled:
Wednesday, October 26, 8:00 am to 11:00 am
Level: Beginner, no prerequisites


Using Stories in Evaluation

 

Stories are an effective means of communicating the ways in which individuals are influenced by educational, health, and human service agencies and programs. Unfortunately, the story has been undervalued and largely ignored as a research and reporting procedure. Stories are sometimes regarded with suspicion because of the haphazard manner in which they are captured or the cavalier promises made about what a story depicts.

 

Through short lecture, discussion, demonstration, and hands-on activities, this workshop explores effective strategies for discovering, collecting, analyzing and reporting stories that illustrate program processes, benefits, strengths or weaknesses.

 

You will learn:

  • How stories can reflect disciplined inquiry,

  • How to capture, save, and analyze stories in evaluation contexts,

  • How stories for evaluation purposes are often different from other types of stories.

Richard Krueger is on the faculty at the University of Minnesota and has over 20 years experience in capturing stories in evaluation. He has offered well-received professional development workshops at AEA and for non-profit and government audiences for over 15 years.

 

Session 39: Using Stories
Scheduled:
Wednesday, October 26, 8:00 am to 11:00 am
Level: Beginner, no prerequisites


Putting Fun and Evaluation in the Same Sentence

 

Engaging stakeholders’ interest and increasing their ownership of the evaluation process usually elicit groans rather than jumps for joy. One reason for this is a general lack of enjoyment and imagination.

 

Through interactive discussions and demonstrations, an evaluation board game (complete with chocolate!), improv comedy, and facials, learn how to creatively gather data, elicit education stories, and have stakeholders actually ask to do more evaluation while satisfying funder requirements. Participants will also receive a follow-up resource package to use after the workshop.

 

You will learn:

  • How to increase stakeholder ownership through creative evaluation techniques,

  • How to link creative evaluation techniques with evaluation methodology,

  • How to include diverse stakeholders in the evaluation process.

Lee-Anne Ragan has received much acclaim for her work as a specialist in evaluation, cross-cultural training and conflict resolution and has been providing lively, engaging workshops for more than 16 years. She is the co-owner of Rock.Paper.Scissors, a corporate training and entertainment company.

 

Session 40: Fun and Evaluation
Prerequisites: Knowledge of the program logic model

Scheduled: Wednesday, October 26, 8:00 am to 11:00 am
Level: Intermediate

 

WEDNESDAY, OCTOBER 26, HALF DAY, FROM 12 pm to 3 pm

 

Evaluating Programs for Children

THIS WORKSHOP IS FULL - NO MORE REGISTRANTS ARE BEING ACCEPTED 
WE DO NOT MAINTAIN WAITING LISTS FOR WORKSHOPS

Evaluating programs for children calls for skills and knowledge that go beyond traditional evaluation. Increasing emphasis on student outcomes, teacher outcomes, and site visits creates a need for evaluators to further develop specialized knowledge and skills appropriate to contexts involving children.

 

You will gain an increased awareness of principles and concepts through mini-lectures on observation, instruments, and strategies, and engage in brief application exercises and group discussion.

 

You will learn:

  • The “why” and “how” of conducting effective evaluation of programs for children,

  • Practical strategies and tools for evaluating children’s programs,

  • Why some strategies work and why some do not work,

  • A range of currently available instruments and their use.

Shannan McNair is an associate professor of Early Childhood Education who has been involved in developing and presenting workshops for over 20 years. She conducts evaluations on a regular basis for programs serving children from preschool through high school. She will lead a team of three presenters from Oakland University.


Session 41: Programs for Children
Prerequisites: Basics of Evaluation, Applied experience
Scheduled:
Wednesday, October 26, 12:00 pm to 3:00 pm
Level: Intermediate

TRIAGE : Du nouveau dans le monde des techniques de groupe en évaluation/TRIAGE: A new group technique gaining recognition in evaluation

 

[Offered in French/Session en français uniquement] Aux techniques de groupe traditionnellement utilisées en évaluation, s'est récemment ajoutée la TRIAGE (Technique de Recherche d'Information par Animation d'un Groupe Expert). Mais qu'est-ce au juste que TRIAGE? D'où vient cette technique? Comment procède-t-elle? Quand l'utiliser? Quelles sont ses forces et faiblesses? Quelles habiletés exige-t-elle de la part de l'évaluateur?

 

Nous vous convions donc à une exploration intensive de TRIAGE. Exposé théorique, démonstration, discussions à partir d’histoires de cas et expérimentation seront utilisés dans cet atelier. Vous pourrez ainsi rapidement vous approprier cette technique de façon à pouvoir l'appliquer avec efficacité dans votre milieu de pratique.

 

Vous apprendrez:

  • à reconnaître les composantes de TRIAGE et son contexte d'utilisation en évaluation

  • à distinguer les différentes techniques de groupe (technique Delphi, technique du groupe nominal, focus group) et à reconnaître l'apport complémentaire de TRIAGE,

  • à dégager la contribution distinctive de TRIAGE dans divers projets d'évaluation,

  • à utiliser TRIAGE à partir d'un cas d'évaluation.

Marie Gervais oeuvre dans le domaine de l'évaluation depuis près de 15 ans. Elle enseigne et dirige des projets d'envergure en santé et dans divers réseaux de services au Québec. Geneviève Pépin est également active en évaluation dans le domaine de la santé.

 

-----------------------------------------

[Offered in French/Session en français uniquement] TRIAGE (Technique for Research of Information by Animation of a Group of Experts) has recently been added to the group techniques traditionally used in evaluation. What exactly is TRIAGE? Where does this technique come from? How does it work? When is it used? What are its strengths and shortcomings? What skills are evaluators required to have?

This workshop is an in-depth exploration of TRIAGE, including a theory presentation, demonstration, discussions based on case histories and experiments. Participants will be able to quickly learn this technique and apply it effectively in their work situations.

Participants will learn to:

  • Recognize the components of TRIAGE and its application in different evaluation contexts,
  • Distinguish between the various group techniques (Delphi technique, Nominal Group Technique, and Focus Group) and recognize how TRIAGE complements these main group techniques,
  • Recognize the distinct contribution of TRIAGE in various evaluation projects,
  • Use TRIAGE in a case simulation.
Marie Gervais has been working in the field of evaluation for nearly 15 years. She teaches and manages large-scale projects in healthcare and in various service networks in Québec. Geneviève Pépin is also active in evaluation in the healthcare field.

Session 42: TRIAGE
Scheduled: Wednesday, October 26, 12:00 pm to 3:00 pm
Level: Beginner, no prerequisites
Note: This session will be offered in French


Survey Design and Administration

 

This professional development workshop is designed for beginners in the field of evaluation. You will be introduced to the fundamentals of survey design and administration. 

 

This interactive workshop will use a combination of direct instruction and hands-on opportunities for participants to apply what is learned to their own evaluation projects. Learn about the different types of surveys, how to choose the right one, how to administer the survey, and how to increase response rates and the quality of your data. Attendees will receive handouts with sample surveys, item-writing tips, checklists, and resource lists for further information.

 

You will learn:

  • The various types and formats of surveys,

  • Procedures for high quality survey design,

  • How to write high quality items,

  • Strategies for increasing reliability and validity.

Courtney Malloy and Harold Urman are consultants at Vital Research, a research and evaluation firm that specializes in survey design. They both have extensive experience facilitating workshops and training sessions on research and evaluation for diverse audiences.

 
Session 43: Survey Design
Scheduled:
Wednesday, October 26, 12:00 pm to 3:00 pm
Level: Beginner, no prerequisites


The Swinging Dance of Evaluation: Lessons in Collaboration and Partnership

Care to dance? This lively and highly interactive workshop explores concepts related to swing dance to build our collaborative and partnering skills as professional evaluators. Rhythmic and dance-related activities will highlight fundamental issues in our work as evaluators, from enhancing our perspective to examining our process to informing our practice.

You will listen and move to music to illustrate the evaluation process and reflect on how experiences in the workshop can shape the way you approach your evaluation practice.

You will learn:

  • Ways of collaborating and partnering with clients using the metaphor of dance,

  • Skills of leading and following in an evaluation setting,

  • How concepts and experiences in the workshop can shape your approach to evaluation.

Phyllis Clay is the co-founder and owner of Youth Policy Research Group, Inc. Jamie Callahan teaches in the Human Resources Development Program at Texas A&M University. They are both competitive dancers who apply the creative metaphor of dance in their evaluation professions. A shortened version of this workshop at the 2004 AEA Conference was extremely well received.

 

Session 44: Swinging Dance

Scheduled: Wednesday, October 26, 12:00 pm to 3:00 pm
Level: Beginner, no prerequisites - two left feet welcomed!

 


Cultivating Self as Responsive Instrument

 

Evaluative judgments are inextricably bound up with culture and context. The AEA Guiding Principles encourage greater realization that excellence and ethical practice in evaluation are intertwined with orientations toward, responsiveness to, and capacities for engaging diversity. Breathing life into this expectation calls for critical, ongoing personal homework for evaluators regarding their lenses and filters vis-à-vis their judgment-making.

 

We will explore individual and group reflective exercises that spotlight culture and context issues and help us develop and refine the self as a diversity-grounded responsive instrument. This workshop addresses the reality that, from our privileged standpoints, we often look but still do not see, listen but do not hear, touch but do not feel.  Such limitations handicap our capacities to accurately discern, describe, engage, interpret, and evaluate truths from multiple vantage points.

      

You will learn:

  • To attend to your self as instrument and, thus, enhance "interpersonal validity" (the soundness and trustworthiness of self as knower, inquirer, and engager of others),

  • To identify the lenses and filters influencing your meaning-making and evaluation practice,

  • To examine how stakeholders’ perceptions of the evaluator impact evaluation accuracy and effectiveness.

 

Hazel Symonette brings over 30 years of work in diversity-related arenas to the workshop. She is founder and Director of the University of Wisconsin-Madison Excellence Through Diversity Institute, a year-long train-the-trainers/facilitators initiative organized around responsive assessment and evaluation.

 

Session 45: Cultivating Self
Scheduled: Wednesday, October 26, 12:00 pm to 3:00 pm
Level: Beginner, no prerequisites

 

SUNDAY, OCTOBER 30, HALF DAY, FROM 9 am to 12 pm

Advanced Applications of Program Theory

THIS WORKSHOP IS FULL - NO MORE REGISTRANTS ARE BEING ACCEPTED 
WE DO NOT MAINTAIN WAITING LISTS FOR WORKSHOPS

While simple logic models are an adequate way to gain clarity and initial understanding about a program, sound program theory can enhance understanding of the underlying logic of the program by providing a disciplined way to state and test assumptions about how program activities are expected to lead to program outcomes.

 

Lecture, exercises, discussion, and peer-critique will help you to develop and use program theory as a basis for decisions about measurement and evaluation methods, to disentangle the success or failure of a program from the validity of its conceptual model, and to facilitate the participation and engagement of diverse stakeholder groups.

 

You will learn:

  • To employ program theory to understand the logic of a program,

  • How program theory can improve evaluation accuracy and use,

  • To use program theory as part of participatory evaluation practice.

Stewart Donaldson is Dean of the School of Behavioral and Organizational Sciences at Claremont Graduate University. He has published widely on the topic of applying program theory, developed one of the largest university-based evaluation training programs, and has conducted theory-driven evaluations for more than 100 organizations during the past decade.

Session 46: Program Theory
Prerequisites: Experience or Training in Logic Models
Scheduled: Sunday, October 30, 9:00 am to 12:00 pm
Level: Intermediate


Focus Group Moderator Training

THIS WORKSHOP IS FULL - NO MORE REGISTRANTS ARE BEING ACCEPTED 
WE DO NOT MAINTAIN WAITING LISTS FOR WORKSHOPS

The literature is rich in textbooks and case studies on many aspects of focus groups, including design, implementation, and analysis. Missing, however, are guidelines and discussions on how to moderate a focus group.

 

In this experiential learning environment, you will find out how to maximize time, build rapport, create energy, and apply communication tools in a focus group to maintain the flow of discussion among participants and elicit answers from more than one person. You will learn at least 15 strategies to create and maintain a focus group discussion. These strategies can also be applied in other evaluation settings, such as community forums and committee meetings, to stimulate discussion.

 

You will learn:

  • How to moderate a focus group,

  • At least 15 strategies to create and maintain focus group discussion,

  • How to stimulate discussion in community forums, committee meetings, and social settings.

Nancy-Ellen Kiernan has facilitated over 150 workshops on evaluation methodology and has moderated focus groups in 50+ studies, with groups ranging from Amish dairy farmers in barns, to at-risk teens in youth centers, to university faculty in classrooms.

Session 48: Moderator Training
Prerequisites: Having moderated a focus group
Scheduled:
Sunday, October 30, 9:00 am to 12:00 pm
Level: Intermediate


Analyzing Text and Audio Data 

 

Are you drowning in a sea of words? Take this opportunity to focus on the practical use of qualitative data analysis (QDA) for dealing with text and audio data such as that derived from focus groups and interviews. 

 

Through hands-on work with sample data, you will select and mark quotations, generate meaningful and useful codes, create memos, and construct networks in the style of grounded theory. In addition, the instructor will demonstrate the use of Atlas-ti, one of the most popular QDA programs, to suggest the next step in moving participants toward computer-assisted QDA.

 

You will learn:

  • To select and mark text quotations,

  • To code and memo using grounded theory,

  • To make a simple network diagram,

  • The capabilities of Atlas-ti to do the above.

S Reed Early is with the British Columbia Office of the Auditor General. He has taught qualitative data analysis in college classroom and workshop settings and has used Atlas-ti extensively in his own work in a range of evaluation contexts.

Session 49: Analyzing Text
Scheduled:
Sunday, October 30, 9:00 am to 12:00 pm
Level:
Beginner, no prerequisites


Making the Leap to Evaluation Consulting

Have you thought about moving from employee to consultant? Do you need more information about the benefits and challenges of doing so? Do your skills, work style, risk tolerance, and temperament provide a good fit for independent consulting? Do you want ideas about how to get started, attract and keep customers, and transition on a part- or full-time basis?

We will focus on helping you decide whether and how to transition to consulting either instead of, or in conjunction with, current employment. The workshop will be interactive with a good balance of content, practical exercises, and discussion, supplemented by take-home materials and a list of resources you can explore.

You will learn:

  • Whether consulting is for you,

  • How to transition from employee to consultant,

  • How to start and build a practice,

  • How to acquire the skills you need to succeed.

Mary Grcich Williams, owner of Mary Williams & Associates, maintains a solo practice but frequently partners with other consultants and specialists. She recently completed her 15th year as an independent evaluation consultant, after 18 years as a consultant and administrator with a large state agency.

 

Session 50: Leap to Consulting
Scheduled:
Sunday, October 30, 9:00 am to 12:00 pm
Level:
Beginner, no prerequisites