
Session Title: Youth Focused Evaluation TIG Business Meeting
Business Meeting Session 55 to be held in Holmead on Wednesday, Oct 16, 6:10 PM to 6:55 PM
Sponsored by the Youth Focused Evaluation TIG
TIG Leader(s):
Kim Sabo Flores, Evaluation Access, kimsabo@aol.com
David White, Oregon State University, david.white@oregonstate.edu
Mary Arnold, Oregon State University, mary.arnold@oregonstate.edu

Session Title: Measuring the Effectiveness of College Readiness Interventions
Multipaper Session 56 to be held in Independence on Wednesday, Oct 16, 6:10 PM to 6:55 PM
Sponsored by the Assessment in Higher Education TIG
Nandita Chaudhuri, Texas A&M University, nchaudhuri@ppri.tamu.edu
Katherine Beck, Denver Public School District, katherineesbeck@yahoo.com
Challenges in Designing Developmental Education Intervention Assessment: Evaluating Texas Scaling, Success and Innovation Grants to Make Students College Ready
Nandita Chaudhuri, Texas A&M University, nchaudhuri@ppri.tamu.edu
Jim Dyer, Texas A&M University, jim@ppri.tamu.edu
Abstract: The nationwide absence of adequate evaluations of developmental education programs, and of an adequate evidence base, is of concern to scholars, educators, and policy makers. To address this matter in Texas, the Texas Higher Education Coordinating Board (THECB) contracted with the Public Policy Research Institute at Texas A&M University to assess the effectiveness of scaling, success, and innovation grants launched at twelve funded Texas higher education institutions, focusing primarily on acceleration, student supports, and professional development. Started in November 2012, this 28-month study is strategically utilizing a series of qualitative and quantitative methodologies, including focus groups, site visits, face-to-face interviews, direct observations, surveys, and analysis of program-specific performance and success data. This presentation will reflect on the process of developing the logic-model-centered cluster evaluation study design and will discuss the specific challenges that the evaluation team faced in developing the formative and summative assessment design from a multi-site perspective.
Evaluating Implementation Efforts to Support Adult Learners on Career Pathways
Janelle Clay, City University of New York (CUNY), janelle.clay@mail.cuny.edu
Abstract: The CareerPATH program, offered by the City University of New York (CUNY), is a three-year program aimed at supporting adult students in career advancement and successful college transition. CareerPATH supports adult learners by providing academic instruction in five industry sectors, with the goal of engaging adult learners and helping them bypass developmental education. The proposed paper will present a comprehensive overview of the program's evaluation plan. As a complex multi-component, multi-site initiative, the program model necessitated a mixed-methods evaluation plan to accurately identify program impact, as well as the implementation of components instrumental to programmatic success. Drawing on a combination of student surveys and focus groups, administrative data, and staff interviews, this paper will highlight lessons learned during the program's first two years that will hopefully contribute to efforts to serve and support adult learners on their path to academic and career success.

Session Title: Image Grouping as Household Survey: Participatory, Pictorial Tool for Gathering Data on Household Characteristics and Assets With Populations With Varying Literacy/Language Ability
Demonstration Session 57 to be held in Jay on Wednesday, Oct 16, 6:10 PM to 6:55 PM
Sponsored by the Disabilities and Other Vulnerable Populations TIG
Danielle Dryke, The Improve Group, danielled@theimprovegroup.com
Danielle Hegseth, The Improve Group, danielleh@theimprovegroup.com
Abstract: Image Grouping is a pictorial data collection method that offers a number of benefits, especially within a participatory evaluation approach. The presenter has used this method with success in diverse program settings, both domestically and internationally. It helps to overcome literacy barriers while supporting individual and group reflection. It was recently implemented as a retrospective household survey. Feedback from the client indicated that they felt respondents were more honest than they were with standard household survey methods. The technique supports surprising candor among those using it, even on sensitive issues. Implemented in a focus group-type setting, the tool supports survey-like analysis. While this new tool has been successful, it has the potential for continued development in design and application. This session will help participants learn the basics of creating and implementing an Image Grouping household survey tool, while discussion will encourage brainstorming on its continued development.

Session Title: Foundations and the "F" Word: Cultural Limits of Evaluation in Early 21st Century Philanthropy
Expert Lecture Session 58 to be held in Kalorama on Wednesday, Oct 16, 6:10 PM to 6:55 PM
Sponsored by the Non-profit and Foundations Evaluation TIG
John Sherman, Headwaters Group, jsherman@headwatersgroup.com
Abstract: The rapid growth in evaluation of philanthropic efforts and programs over the past 20 years has fostered the development of sophisticated evaluation tools and methods. However, structural and cultural issues limit philanthropy's ability to fully utilize evaluation. Regardless of the approach or methodology, evaluation can reveal otherwise unknown, and not self-evident, truths. When such truths challenge a program's goals, objectives, strategies, or its fundamental theory of change, the willingness to listen, learn, and recalibrate diminishes. This inability to acknowledge failure (the "F" word) neuters the most useful aspects of evaluation: identifying and highlighting what is not working and why. The recent demand for short-term quantitative results and the need to protect and enhance philanthropy's brand have exacerbated the long-standing, risk-averse orthodoxy prevalent in much of philanthropy. The lecture will discuss ways for evaluators to address these barriers through a wicked-problems and Deliberate Leadership framework.

Session Title: Getting Past No! Strategies for Accessing Participants and Collecting Data When you Have no Direct Access to Your Target Population
Think Tank Session 59 to be held in L'Enfant on Wednesday, Oct 16, 6:10 PM to 6:55 PM
Sponsored by the Human Services Evaluation TIG
Theresa Fox, Rutgers University, fox@ssw.rutgers.edu
Donna Van Alst, Rutgers University, vanalst@ssw.rutgers.edu
Abstract: Evaluators can spend years working collaboratively with community groups, identifying needs, using evidence-based interventions, and developing logic models that outline activities, inputs, outputs, and outcomes. Time might even be spent anticipating barriers to the effectiveness of interventions and redesigning programs. It is all a prelude to the process of data collection, analysis, and reporting of findings. What should an evaluator do when the unthinkable happens: when there are no data, or when access to the program participants is extremely limited? Reasons range from a desire to protect the confidentiality of program participants to (ironically) protecting the integrity of the very program being evaluated. The evaluator can be under the impression that the data are being collected when they are not. This think tank will delve into this topic and brainstorm strategies for dealing with this all too common problem in human services program evaluation.

Session Title: Understanding the Voices of Young Adults Through Evaluation
Multipaper Session 60 to be held in Morgan on Wednesday, Oct 16, 6:10 PM to 6:55 PM
Sponsored by the Social Work TIG and the Youth Focused Evaluation TIG
Deborah Cohen, University of Kentucky, deborah.cohen@uky.edu
Conducting Developmental Evaluation of Community Programs for Young Adults Coping With Psychiatric Illness: Achievements and Challenges
Chen Lifshitz, Ashkelon Academic College, chenl@erech-nosaf.co.il
Abstract: Community programs for young adults (ages 18-30) coping with psychiatric illnesses pose significant challenges for participants, implementers, and evaluators. The National Insurance Fund regarded evaluation as a significant tool for improving the effectiveness of two pilot programs. Following this approach, a professional steering committee accompanied the evaluation team throughout the evaluation period. The committee included key stakeholders (funders, implementing teams, and participants), and its sequential meetings made a developmental evaluation possible. Five assessments were conducted (between July 2009 and March 2011), as well as a follow-up of program graduates (February 2012), drawing on various sources of information: about 80 program participants, graduates, parents, employers, and staff, using in-depth interviews, observations, focus groups, and self-administered questionnaires. The evaluation, based on the developmental evaluation approach, generated a collaborative process in which implementers, funders, and other key stakeholders worked together to design evaluation tools and analyze findings for improving the programs.
Pathways of Care and a Youth Guided Empowerment Evaluation Approach
Gregory Washington, University of Memphis, gwshngt1@memphis.edu
Jenny Jones, Florida A&M University, jenny.jones@famu.edu
Abstract: The workshop will describe how a university-community partnership engaged youth and families impacted by mental illness in a process to identify preferred "pathways of care" that include community mental health assets in their communities. The Center for the Advancement of Youth Development's (CAYD) Youth Guided Empowerment Evaluation Criteria program is facilitated by the University of Memphis Department of Social Work. Faculty and graduate students trained high school youth to serve as community evaluators and to use a Community Health Assets Mapping for Partnerships (CHAMP) tool initially developed by the African Religious Health Assets Program (ARHAP). It was adapted into the CAYD Community Asset Mapping tool and utilized with a SAMHSA-funded system of care project, Just Care Family Network, in Shelby County, which comprises most of Memphis, TN.

Session Title: Distance Education and Other Educational Technologies TIG Business Meeting
Business Meeting Session 61 to be held in Northwest on Wednesday, Oct 16, 6:10 PM to 6:55 PM
Sponsored by the Distance Ed. & Other Educational Technologies TIG
TIG Leader(s):
Talbot Bielefeldt, International Society for Technology in Education, talbot@iste.org

Session Title: Choosing the Right Database Software
Demonstration Session 62 to be held in Oak Lawn on Wednesday, Oct 16, 6:10 PM to 6:55 PM
Sponsored by the Integrating Technology Into Evaluation TIG
Laura Keene, Keene Insights, laura@keene-insights.com
Abstract: What's the best database software? It depends. What's your budget? How much customization is required? Does it need to be web-based? What level of security is required? To make things more difficult, the technology is expanding and changing rapidly. In the last ten years, we've gone from an environment where Microsoft Access databases were the only cheap, viable solution to a proliferation of internet-based options, including many out-of-the-box programs like Efforts to Outcomes (ETO) and Apricot that are designed specifically for tracking clients, services, and outcomes. In this session, we'll start with a look at today's database software landscape and how it's evolving, including "the cloud," the rise of "Software as a Service" options like Salesforce, and the proliferation of cheap web hosts like GoDaddy. Then, we'll discuss which factors to consider when choosing (or helping a client choose) the best option.

Session Title: Getting Ahead of the Curve: Evaluation Methods that Anticipate the Next Generation Science Standards (NGSS)
Panel Session 63 to be held in Piscataway on Wednesday, Oct 16, 6:10 PM to 6:55 PM
Sponsored by the STEM Evaluation TIG and the Pre-K - 12 Educational Evaluation TIG
Karen Peterman, Karen Peterman Consulting Company, karenpetermanphd@gmail.com
Abstract: The state of STEM evaluation practice in the early 21st Century is in transition as we await the final release of the Next Generation Science Standards (NGSS), which will guide research and evaluation in science education in the coming years. This panel features STEM evaluation methods that have been developed and field-tested in anticipation of the NGSS. The panelists are both evaluating supplemental education programs that promote instructional practices consistent with the NGSS in middle and high school classrooms, and have taken advantage of the opportunity to explore evaluation methods that will reveal teaching and learning consistent with NGSS concepts and practices. Each panelist will share specific methods and field test results, including performance-based assessments and a classroom observation protocol supported with virtual student artifacts generated as part of a technology-supported science curriculum. With feedback from the audience, the panelists will reflect on the merits and challenges of the work.
Show Me What You Can Do: Using Performance-based Assessments to Document Students' Ability to Use Next Generation Science Standards' Science and Engineering Practices
Karen Peterman, Karen Peterman Consulting Company, karenpetermanphd@gmail.com
This presentation will feature a series of online performance-based assessments (PBAs) designed to measure student science practices, created with the hope of providing authentic assessments consistent with the conceptual shifts of the NGSS. The first pair of pre-post instruments measures middle and high school students' ability to (a) collect data via an eMonitor that records electricity data, and (b) use the data to answer real-world questions and communicate results. Reliability analysis and pre-post results will be shared. Next, the presenter will share examples of PBAs that have been developed for elementary students and embedded into an online learning platform. These include proficiency assessments used as formative measures of data and mapping skills, as well as pre-post PBAs that will be used for summative purposes. The presentation will conclude with a discussion of whether and how these methods are a meaningful way to evaluate students in relation to the NGSS.
Cloud-computing Tools Facilitate Both the Implementation and Observation of Science Teaching and Learning as Envisioned in the Next Generation Science Standards
Kimberle Kelly, University of Southern California, akimkelly@gmail.com
Faculty at California State University, Northridge, have developed teacher training programs that emphasize the use of cloud-based computing tools (such as Google docs, spreadsheets, and websites) in K-12 science classrooms with the express intent of engaging students in the science and engineering practices envisioned in the Next Generation Science Standards (NGSS). These tools not only provide opportunities to infuse more formative assessment, pooling of investigational data across the class, and collaborative student reporting into science lessons, they also enhance opportunities to measure teaching and learning in ways that are naturally embedded in the work teachers and students do. The presentation reviews the development and validation of a structured observation protocol that documents technology-supported NGSS science and engineering practices and cross-cutting concepts in secondary science classrooms, taking advantage of cloud-based computing technologies to automate the collection and analysis of the observation data and associated virtual artifacts of student learning.

Session Title: Collaborative and Culturally Competent Approaches to Program Evaluation: Examples From Latino Serving Community-Based Organizations
Panel Session 64 to be held in Coats on Wednesday, Oct 16, 6:10 PM to 6:55 PM
Sponsored by the Community Psychology TIG and the Multiethnic Issues in Evaluation TIG
Julia Perilla, Georgia State University, jperilla@gsu.edu
Abstract: This panel will present an innovative approach used to conduct two evaluations at a community-based organization working with immigrant Latino families in Atlanta, GA. The first is a comprehensive evaluation of a program for Latino families in which violence has occurred. The second is an adaptation and evaluation of a leadership program that involved both academic researchers and Latina survivors of domestic violence. Some unique aspects of these evaluations are (a) the emphasis given to the academic-community partnership in the initial attempts to evaluate the work and impact of the community organization; (b) the potential for students and academic professionals to increase their knowledge and skills regarding evaluations conducted in a culturally respectful and competent manner; and (c) the potential for enhancing community capacity among underserved populations. Participants will have the opportunity to discuss these evaluations as a potential blueprint for use in other non-profits serving immigrant populations.
Embracing the Uniqueness of Community-based Programs Through Evaluation
Rebecca Rodriguez, Georgia State University, rrodriguez12@student.gsu.edu
Alvina Rosales, Georgia State University, alvinar@gmail.com
Alfredo Morales, Georgia State University, amorales5@student.gsu.edu
Charmaine Mora, Georgia State University, cmora3@student.gsu.edu
Funding agencies are increasingly requiring that community-based organizations provide evidence of program effectiveness. For community-based organizations serving underrepresented and marginalized populations, program evaluation becomes more challenging. This is especially true when working with indigenous or grass-roots programs. Program evaluators are challenged with capturing the uniqueness of these programs while utilizing traditional modes of measurement that may not fit the specific cultural group. This presentation will describe methods of a community-based program evaluation that are sustainable and culturally specific, including strategic stakeholder collaborations, mixed methodology, and the adaptation and development of culturally nuanced measures. The authors will use examples from a comprehensive domestic violence intervention evaluation. Lastly, the authors will engage the audience in a discussion of methods for disseminating evaluation findings and ideas for making the implementation of program evaluation more sustainable.
Evaluation of a Leadership Training Program for Latinas
Rosemarie Lillian Macias, Georgia State University, rmacias2@student.gsu.edu
Findings from an adaptation and evaluation of a peer leadership-training program for Latina survivors of domestic violence in Atlanta, GA, support a participant-centered, community-based approach. In addition, the presenter will discuss the larger evaluation's findings with respect to the impact of the program on Latina community leaders and implications for future research in the area of community-based evaluation in the context of adaptive programs.

Session Title: Diversity, Social Justice, and Cultural Competence: Balancing Essential Education Elements in the 2013 Evaluation Profession
Think Tank Session 65 to be held in Albright on Wednesday, Oct 16, 6:10 PM to 6:55 PM
Sponsored by the Pre-K - 12 Educational Evaluation TIG
Jennifer Williams, JE Williams and Associates LLC, jenniferwilliams.722@gmail.com
Tererai Trent, Tinogona Consulting, tererai.trent@gmail.com
Abstract: Due to an increased focus on budget reduction, on providing data-informed interventions, and on non-evaluators conducting evaluations while complying with HIPAA and FERPA policies, evaluators working with K-12 education systems face myriad professional, ethical, legal, and social justice challenges. This Think Tank examines the trans-disciplinary aspects of evaluation in not-for-profit, K-12 settings that provide social services and where the clinician is also a lead evaluator. Is it possible to maintain appropriate boundaries? Can social justice considerations be balanced with cultural competence? Who is the client (school staff, parent, or child)? With few to no alternatives regarding the human resources available to conduct the project, how does the evaluator / service provider address these issues? Should one simply refuse or decline the contract? Is it economically feasible to 'just say no'? This session will address these questions and stimulate discourse among the participants on how to provide such complex evaluations with integrity in 2013.

Session Title: 25 Tips for More Effective Conference Presentations
Demonstration Session 66 to be held in Cardozo on Wednesday, Oct 16, 6:10 PM to 6:55 PM
Sponsored by the Data Visualization and Reporting TIG
Kylie Hutchinson, Community Solutions Planning & Evaluation, kylieh@communitysolutions.ca
Abstract: Have you ever felt bored sitting in a conference presentation, or frustrated by a presenter with poor delivery skills? This is particularly disappointing when great content gets lost in a lackluster or poorly-paced presentation. Is this maybe...you? Even if you're not the world's greatest public speaker, there are many simple and low tech things you can do to dramatically increase your audience's engagement and your overall effectiveness as a presenter. Leave this session with 25 easy tips you can implement immediately.

Session Title: Digging Deeper: Using Cognitive Interviewing to Identify and Resolve Data Collection Problems
Demonstration Session 67 to be held in DuPont on Wednesday, Oct 16, 6:10 PM to 6:55 PM
Sponsored by the Qualitative Methods TIG
Shauna Clarke, ICF International, shauna.clarke@icfi.com
Michael Long, ICF International, michael.long@icfi.com
Abstract: Cognitive interviewing is an important, yet often overlooked tool in the evaluator's toolkit. It is a systematic process for gathering feedback from potential study respondents to identify critical data collection problems, such as when study respondents do not understand or misinterpret questions and instructions during data collection. Problems such as these adversely impact our ability to draw meaningful conclusions from the data collected. As we conduct evaluations in diverse contexts, it is important to consider how this tool can be incorporated into data collection efforts to ensure that we collect the most accurate and meaningful data possible. During this session, presenters will engage participants in learning about the strategies and process of cognitive interviewing using real-world examples. Presenters will draw from their extensive experience conducting cognitive interviews for various federal government agencies to share specific applications related to the fields of education, youth development and public health.

Session Title: Evaluating Arts Integration: Establishing Links Between Artistic Activity and Educational Outcomes
Multipaper Session 68 to be held in Embassy on Wednesday, Oct 16, 6:10 PM to 6:55 PM
Sponsored by the Arts, Culture, and Audiences TIG
Luke Rinne, Johns Hopkins University, lrinne@jhu.edu
Abstract: Arts integration is a multifaceted educational approach that is loosely defined as "teaching through the arts." Although arts-integrated instruction is becoming more common, little work has been done to evaluate its potential for improving both arts and non-arts (academic) outcomes. Evaluation of arts integration is difficult because arts integration takes many different forms, and there are varying ideas about its primary objectives. To evaluate arts integration, it is necessary to identify its key elements and link them to specific educational outcomes in both core academic and arts domains. Recent work has investigated whether various aspects of arts-integrated instruction affect student engagement, creative ability, retention of academic content, attitudes about the arts, and other outcomes. Unique study designs have been developed to rigorously assess the effects of arts integration and produce quantitative data to go along with existing qualitative data, thereby improving the quality and reliability of evaluations.
Student Attitudes About the Arts and Ideation as Predictors of Student Engagement
Ivonne Chand O'Neal, The John F Kennedy Center for the Performing Arts, iconeal@kennedy-center.org
Annie Begle, The John F Kennedy Center for the Performing Arts, asbegle@kennedy-center.org
Mark Runco, Creativity Testing Services, 
Winner and Hetland (2001) argue that research and theory should focus on how the arts can serve as vehicles for academic achievement. They suggest theory-building studies and experiments as one avenue. The current theory-building study aimed to examine how students' attitudes about the arts (AAA) and ideational behavior (IB) impact student engagement. Assessments included Attitudes about Arts, Runco Ideational Behavior Scale, and Jaeger & Chand O'Neal Student Engagement & Interest Survey, completed by 601 4th and 5th grade students. Multiple linear regression analyses indicated that student engagement was significantly and positively predicted by students' AAA and IB (p < .001), which also positively predicted dimensions of engagement (effort, interest, flow and emotional engagement; p < .001 for all analyses). Affiliation, however, was significantly predicted only by AAA (p < .001). These results advance our understanding of the relationships between arts, creativity and student engagement as a path to academic achievement.
Assessing Arts Integration: Effects on Retention of Academic Content
Luke Rinne, Johns Hopkins University, lrinne@jhu.edu
Mariale Hardiman, Johns Hopkins University, mmhardiman@jhu.edu
Many educators and researchers have argued that integrating the arts into everyday instruction improves academic outcomes. While some research suggests that this is indeed the case, prior work has not established where the primary academic benefits lie, and it remains unclear what aspects of arts integration might be responsible for academic improvements. We hypothesized that academic benefits arise at least in part because artistic activities involve unique forms of information processing that improve long-term retention of academic content. To rigorously assess this hypothesis, we conducted a randomized controlled trial of arts-integrated science curricula in four 5th grade classrooms in a single school. We argue that controlled experiments of this sort can and should be used more frequently to identify and measure specific effects of arts integration. Because arts integration is a multifaceted educational approach, effective evaluation requires an increased focus on the identification of specific, rather than general effects.

Session Title: A Toolkit for Evaluating the Impact of HIV/AIDS Programming on Child Wellbeing in Africa
Demonstration Session 69 to be held in Fairchild West on Wednesday, Oct 16, 6:10 PM to 6:55 PM
Sponsored by the Health Evaluation TIG
Jenifer Chapman, Futures Group, jchapman@futuresgroup.com
Janet Shriberg, United States Agency for International Development, jshriberg@usaid.gov
Abstract: We will demonstrate an orphans and vulnerable children (OVC) program evaluation toolkit developed under the USAID-funded MEASURE Evaluation project. The toolkit includes quantitative child outcomes and caregiver/household outcomes measurement tools designed for use in a household survey of children ages 0-17 years and their adult caregivers. The questionnaires are designed to measure changes in child, caregiver and household well-being that can reasonably be attributed to program interventions. Tools have been developed through extensive expert consultation, and are aligned to the PEPFAR Guidance for Orphans and Vulnerable Children Programming as well as the US Government Action Plan for Children in Adversity. Tools have been formally piloted in three countries using both cognitive interviews and household pre-test methodology to ascertain the validity and reliability of measures across sub-Saharan Africa. The toolkit includes a tools manual, a template survey protocol, a template analysis plan, and a training manual for data collectors.

Session Title: From the Outside Looking in: Independent Evaluations at the National Endowment for Democracy
Panel Session 70 to be held in Fairchild East on Wednesday, Oct 16, 6:10 PM to 6:55 PM
Sponsored by the International and Cross-cultural Evaluation TIG
Georges Fauriol, National Endowment for Democracy, georgesf@ned.org
Abstract: The National Endowment for Democracy (NED) works to build and strengthen the practice of democracy around the world through grantmaking to local, grassroots organizations. The cornerstones of NED's evaluation practice are external, independent evaluations of its grants program. Each year, the Endowment commissions 2-3 external evaluations of a country, theme, or cross-cutting issue. In 2011, the Endowment commissioned a retrospective study of all of its evaluations on file. The purpose of this "evaluation of evaluations" (a meta-evaluation) was not to critique past programs and philosophies but to identify and firm up some of the assumptions and dynamics that shape NED's monitoring and evaluation processes. This panel will share the story of how this evaluation and its recommendations have been used for learning and improving the NED's evaluation practice.
An Overview of Independent Evaluations at the National Endowment for Democracy
Rebekah Usatin, National Endowment for Democracy, rebekahu@ned.org
How does a grantmaking organization with over 1000 grantees in 90 different countries evaluate its work? This presentation will examine the evolution of the National Endowment for Democracy's efforts at using external evaluations as one of the core components of its evaluation practice. Each year, the Endowment commissions 2-3 external evaluations of a country, theme, or cross-cutting issue. Since 1990, NED has commissioned 26 evaluations, mostly using a mixed-methods approach comprised of document analysis and key informant interviews. Early evaluations were used for accountability purposes but more recent efforts have sought to integrate learning for staff and grantees into the evaluation process. This panel will detail the challenges and successes of the Endowment's experience with independent evaluation.
Lessons Learned From the National Endowment for Democracy's "Evaluation of Evaluations"
Georges Fauriol, National Endowment for Democracy, georgesf@ned.org
In 2011 the National Endowment for Democracy commissioned a retrospective study of all of its external evaluations. The purpose of this "evaluation of evaluations" was not to critique past programs and philosophies but to identify and firm up some of the assumptions and dynamics that shape NED's monitoring and evaluation processes. The retrospective study was designed to assist NED in assessing its methodological approaches, and formalizing and articulating the Endowment's particular approach to evaluating democratic development. It took more than a year to complete and was the subject of much scrutiny inside the Endowment. This presentation will focus on how the evaluation and its recommendations have been used for learning and improving the NED's independent evaluation practice.

Session Title: Polygamous Evaluations: The Joys and Challenges of Collaborating With Multiple Evaluators on a Single Project
Think Tank Session 71 to be held in Gunston West on Wednesday, Oct 16, 6:10 PM to 6:55 PM
Sponsored by the Independent Consulting TIG
Rachel Becker-Klein, PEER Associates Inc, rachel@peerassociates.net
John Fraser, New Knowledge Organization Ltd, jfraser@newknowledge.org
Abstract: This proposal aims to present the challenges and successes of multiple evaluators working with one client. Contributors will discuss what ensued when one member of a collaborative research project wanted to bring in a new evaluation partner, disrupting a long-term relationship that the current evaluator had developed with the project's Principal Investigator. The evaluators involved will share experiences, ethical issues related to the contracting relationships, challenges that emerged, and learning that occurred from negotiating this potentially incendiary arranged marriage. Some of the best practices that the evaluators used include openness and transparency with each other, agreeing on ethical guidelines and roles for communication with the client, and choosing a collaborative rather than a competitive approach that blended two different approaches to evaluation. Session contributors will facilitate a discussion aimed toward exchanging ideas that promote positive approaches that can sustain long-term relationships between clients and multiple evaluators.

Session Title: Challenges in Evaluating Small Business Capacity Building Models on Employing People With Disabilities
Panel Session 72 to be held in Gunston East on Wednesday, Oct 16, 6:10 PM to 6:55 PM
Sponsored by the Disabilities and Other Vulnerable Populations TIG
Cherise Hunter, United States Department of Labor, hunter.cherise@dol.gov
Day Al-Mohamed, United States Department of Labor, al.mohamed.day@dol.gov
Abstract: Small business plays a key role in U.S. economic growth. The U.S. Department of Labor's Office of Disability Employment Policy (ODEP) developed a systems change initiative to connect demand from small businesses with the underutilized labor supply of people with disabilities. The Add Us In (AUI) initiative targets a fast-growing segment of the small business population: ethnic and racial minority businesses and businesses operated by lesbian, gay, bisexual and transgender (LGBT) individuals; women; and people with disabilities. The purpose of the Add Us In initiative is to improve the capacity of small businesses to hire people with disabilities and to improve the placement of historically underserved and marginalized people with disabilities in an employment experience (an internship or a job). This panel focuses on issues in evaluation when public programs try to leverage and influence the business sector in a systems change initiative.
Tailoring Evidence to Engage Small Businesses in Disability Diversity
JoAnn Kuchak, Economic Systems Incorporated, jkuchak@econsys.com
Patrick Cokley, United States Department of Labor, cokley.patrick@dol.gov
Cheryl Mitchell, Economic Systems Incorporated, cheryl.mitchell@econsys.com
This paper widens the tent of evaluation to focus on similarities and differences in types of evidence typically used by social service agencies to guide policy and program development, and the type of evidence development required to support a compelling business case for the employment of people with disabilities by small businesses. There is a major disconnect between the type of evidence typically obtained and used by public agencies versus the type of evidence businesses want to hear. Grantees used mixed methods to identify evidence and to assemble it to support the mission of Add Us In (AUI). Grantees educated themselves through the collaborative process, and conducted surveys and focus groups to identify barriers and motivators to small business engagement in Add Us In. This presentation focuses on how evidence was developed in AUI to transform outreach into engagement by small businesses.
Evaluation Issues in Leveraging Business Organizations to Improve Employment Opportunities for People With Disabilities
Steve Cotter, Economic Systems Incorporated, scotter@econsys.com
A key strategy of Add Us In (AUI) is to stimulate business organizations to embrace disability diversity, to communicate information to member businesses, and to engage them in providing work experiences for people with disabilities. There are numerous business organizations throughout the United States. Some consortia initially emphasized all-encompassing business organizations, such as local chambers of commerce. Many AUI grantees refocused from general business organizations to organizations whose memberships emphasize marginalized groups. Specific types of business organizations that represent marginalized groups are not present in all communities; others have chapters throughout the United States. The uniqueness of some of the business organizations poses special challenges for evaluators, raising issues of appropriate comparison groups, random assignment and generalizability. This presentation discusses issues in evaluation associated with the specialized clusters of businesses targeted by AUI grantees.
Role of Adaptive Evaluation in a Dynamic Business-focused Systems Change Project
JoAnn Kuchak, Economic Systems Incorporated, jkuchak@econsys.com
Cherise Hunter, United States Department of Labor, hunter.cherise@dol.gov
Jacob Denne, United States Department of Labor, jdenne@econsys.com
As Add Us In evolves, the external evaluation has adapted to provide information that the sponsoring agency (ODEP) uses in guiding grantees toward more favorable outcomes, and stimulates grantees to improve their projects. The evaluation began with site visits to each grantee. Combining the site visit interview data with grantee progress reports, the external evaluator provided ODEP with an assessment of grantee progress, and the risks in achieving expected outcomes. ODEP considered this information in formulating their guidance to grantees, and the evaluator used the information to focus next steps of the evaluation. In the second year of the evaluation, ODEP asked their evaluator to identify factors that contribute to AUI success. The evaluation was tailored using phone interviews, logic models, surveys, and case studies, with special emphasis on reducing burden to business partners. This paper reviews the methods used to adapt the evaluation to the dynamics of the project.
Framework for Grantee Evaluation of Business-focused Change: Maryland Add Us In
Richard Lueking, Trans-Cen Inc, rlueking@transcen.org
Each Add Us In (AUI) grantee is responsible for conducting a rigorous evaluation of its AUI program. This paper describes the model developed by Maryland AUI to address systems change outcomes for businesses and the workforce system. The model defines indicators for each project objective, and uses a four-component methodology to articulate how each will be addressed. The evaluation uses a mixed-methods approach to address the objectives for the project, including administrative records, structured interviews, focus groups, surveys, analysis of meeting minutes, analysis of agreements, observations and participant assessments. The design includes repeated measures and pre-post comparisons. The model identifies the respondents for each type of data collection, the parties responsible for carrying out each method, and the timing of the data collection and reporting.

Session Title: Multi-Level Logic Models and Relational Databases: Simple Methods for Keeping Track of Complex Programs
Demonstration Session 73 to be held in Columbia Section 1 on Wednesday, Oct 16, 6:10 PM to 6:55 PM
Sponsored by the Cluster, Multi-site and Multi-level Evaluation TIG
Humphrey Costello, University of Wyoming, hcostell@uwyo.edu
Abstract: Evaluators use logic models to illustrate links between program resources, activities, outputs, and outcomes. Good logic models, in theory, should be simple and clear. Ideally, data systems will mirror logic models, facilitating tracking of all variables in the models. But programs are often messy, complex, subtle, opaque, and multifaceted, posing extraordinary challenges for data management. In this demonstration, we will share methods for illustrating highly complex, overlapping programs through nested logic models supported by a relational database. We will draw on our experience evaluating state tobacco use prevention and control programs, which a) make modifications to CDC recommended guidelines, b) include separate and overlapping objectives, c) are integrated with other chronic disease prevention programs, and d) are implemented both directly and through grantees. Phew! We will discuss our process for working with many different sets of stakeholders to create integrated and multi-leveled logic models supported by a flexible yet comprehensive database.

Session Title: Strategies for Improving the Quality of Quantitative Data
Demonstration Session 74 to be held in Columbia Section 2 on Wednesday, Oct 16, 6:10 PM to 6:55 PM
Sponsored by the Quantitative Methods: Theory and Design TIG
Timothy Ho, University of California at Los Angeles, timothyho@ucla.edu
Patricia Quinones, University of California at Los Angeles, p.quinones00@gmail.com
Alejandra Priede-Schubert, University of California at Los Angeles, alejandrapriede@gmail.com
Abstract: Evaluations can be seriously harmed by poor quality data that may lead to inaccurate conclusions. Taking preventative steps and being thoughtful about the data before the data collection process can reduce some of the challenges encountered during data analysis. This demonstration is targeted to beginning evaluators who are entering the field and are unsure about how to best capture quantitative data of interest, and to program personnel who wish to get some helpful tips about how to improve the quality of their data. This demonstration aims to address the following topics: 1) some basic principles of data collection and management, 2) tips for instrument development and variable characteristics, 3) strategies for organizing longitudinal and/or multi-site data, and 4) best practices in using SPSS that will improve the quality of data. Collectively, these topics should help evaluators and program personnel think about the implications of how they collect their data.

Session Title: Issues in Structural Equation Modeling and Testing for Mediation
Multipaper Session 75 to be held in Columbia Section 3 on Wednesday, Oct 16, 6:10 PM to 6:55 PM
Sponsored by the Quantitative Methods: Theory and Design TIG
Dale Berger,  Claremont Graduate University, dale.berger@cgu.edu
Adequacy of Model Fit in Confirmatory Factor Analysis and Structural Equation Models: It Depends on What Software You Use
Susan Hutchinson, University of Northern Colorado, susan.hutchinson@unco.edu
Antonio Olmos, University of Denver, polmos@du.edu
Eric Teman, University of Northern Colorado, eric.teman@unco.edu
Abstract: Among the numerous statistical tools used by evaluators, confirmatory factor analysis (CFA) and structural equation modeling (SEM) are gaining in popularity (Hays, Revicki, & Coyne, 2005). A common practice when applying either CFA or other forms of SEM is to assess adequacy of the tested model by examining indices of model fit (Sun, 2005). Despite extensive research on factors affecting fit, little attention has addressed potential differences in model fit depending upon the particular software package used. Given inconsistencies across SEM programs (e.g., Amos, EQS, LISREL, Mplus, and R lavaan) in how they compute fit indices (Bryant & Satorra, 2012) and estimate models, we review the concept of model fit, describe salient differences across popular SEM software programs, illustrate disparities using data from both CFA and structural models, and discuss potential implications associated with choice of SEM software and estimation procedure.
An Alternative Approach to Statistical Mediation for Evaluations With Few Cases or Many Variables: A Cascade Modeling Approach
Rafael Garcia, The University of Arizona, ragarci2@email.arizona.edu
Aurelio José Figueredo, The University of Arizona, ajf@email.arizona.edu
Lee Sechrest, The University of Arizona, sechrest@email.arizona.edu
Abstract: While conducting evaluation research, it is commonplace to posit one or several intermediate variables (mediators) between root causes and desired outcomes. Many approaches toward analyzing such relationships have been articulated over the years. One of the more common approaches used today is structural equation modeling, and although this method is appropriate with large sample sizes, it may not be appropriate for research with smaller numbers of cases or larger numbers of variables. To work around this insufficient power, a 'Cascade Modeling' approach is presented in this paper. This method utilizes simple regression techniques to describe the nomological network of the variables modeled. Cascade Modeling is a step up from the traditional Causal Steps Approach (Baron & Kenny, 1986) as it easily accommodates multiple mediations and avoids some of the pitfalls of this approach. Cascade Modeling will be illustrated with data from evaluation projects.

Session Title: Getting Equity Advocacy Results and Assessing the Strength of Local-Level Policies: Two Independent Approaches to Evaluating Advocacy and Policy Change
Multipaper Session 76 to be held in Columbia Section 4 on Wednesday, Oct 16, 6:10 PM to 6:55 PM
Sponsored by the Advocacy and Policy Change TIG and the Health Evaluation TIG
Anna L Williams,  Perspectio, anna@perspectio.com
Assessing the Quality of Local-Level Obesity Prevention Policies: Lessons From Measuring the Strength and Comprehensiveness of Policy Language
Nikole Lobb Dougherty, Washington University in St Louis, nlobbdougherty@wustl.edu
Stephanie Andersen, Washington University in St Louis, sandersen@brownschool.wustl.edu
Rachel Barth, Washington University in St Louis, rbarth@brownschool.wustl.edu
Christopher B Robichaux, Washington University in St Louis, crobichaux@gwbmail.wustl.edu
Tanya Montgomery, Washington University in St Louis, tmontgomery@gwbmail.wustl.edu
Cheryl Kelly, University of Colorado at Colorado Springs, ckelly6@uccs.edu
Amy Stringer Hessel, Missouri Foundation for Health, astringerhessel@mffh.org
Abstract: The Missouri Foundation for Health currently funds an evaluation of a multi-site obesity prevention initiative. Projects implement physical activity and healthy eating components, including a policy/advocacy component. This paper discusses the application of PolicyLift, a comprehensive, evidence-based tool designed to assess the strength and comprehensiveness of obesity prevention policy language. The authors assessed the language of local-level policies (e.g., school, worksite wellness policies) implemented by projects. The authors discuss preliminary findings of this assessment to demonstrate the use and limitations of such a tool. The authors provide suggestions for future considerations in applying or developing tools to assess local-level policy work and the need for tools that assess different steps of the policy process including advocacy, policy language development, policy adoption and implementation, and associated or anticipated outcomes of the policy.
GEAR for Evaluation of Equity Advocacy
Jme McLean, PolicyLink, jme@policylink.org
Abstract: Policy determines what is allowed, encouraged, discouraged, and prohibited in a community. Advocacy - the art of influence and persuasion - is essential for fostering the creation, adoption, and implementation of promising policy solutions that catalyze social change. PolicyLink has worked closely with partners spanning a range of fields to identify some of the essential components of successful equity advocacy for policy change. This information has been assembled in Getting Equity Advocacy Results (GEAR): a suite of benchmarks, methods, and tools for advocates, organizers, and their allies to track campaign results. GEAR includes benchmarks for both creating equitable policies - the kinds of policies that ensure fair distribution of resources and opportunity for all - as well as equitably creating policy change - empowering people to have a say in the circumstances that affect their lives. In this session, we will present the GEAR framework and benchmarks and discuss how they help advance advocacy evaluation.

Session Title: Evaluation Use: A Revised Reading of the Role of the Evaluator, the Model and the Context
Expert Lecture Session 77 to be held in Columbia Section 5 on Wednesday, Oct 16, 6:10 PM to 6:55 PM
Sponsored by the Evaluation Use TIG
Astrid Brousselle, Universite de Sherbrooke, astrid.brousselle@usherbrooke.ca
Abstract: The use of evaluation results is at the core of evaluation theory and practice. Debates in the literature have emphasized the importance of both the evaluator's role and the evaluation process itself in fostering use. Our presentation gives a completely new reading of the long-standing debate on evaluation use, rebalancing the respective roles of context, theories and evaluator. Based on a recent systematic review on knowledge exchange and information use (Contandriopoulos et al. 2010), we began by positioning selected evaluation models in the two-dimensional framework according to their core components. Our analysis shows it would be a mistake to think results use depends primarily on the model used or the evaluator's qualities; rather, it is largely influenced by the evaluation context. Furthermore, our analysis suggests that some models are more appropriate than others in particular contexts for fostering use. This reinterpretation of use has important consequences for evaluation practice.

Session Title: Feminist Issues in Evaluation TIG Business Meeting
Business Meeting Session 78 to be held in Columbia Section 6 on Wednesday, Oct 16, 6:10 PM to 6:55 PM
Sponsored by the Feminist Issues in Evaluation TIG
TIG Leader(s):
Donna Podems, Stellenbosch University, donna@otherwise.co.ca
Kathryn Mathes, Centerstone Research Institute, kathryn.mathes@centerstone.org
Michael Bamberger, Independent Consultant, jmichaelbamberger@gmail.com

Session Title: The Systems-Highlights-Patterns (SHP) Framework: A New Framework for the Evaluation of Complex Projects
Demonstration Session 79 to be held in Columbia Section 7 on Wednesday, Oct 16, 6:10 PM to 6:55 PM
Sponsored by the Systems in Evaluation TIG
Aruna Lakshmanan, East Main Evaluation & Consulting LLC, alakshmanan@emeconline.com
Barbara Heath, East Main Evaluation & Consulting LLC, bheath@emeconline.com
Catherine Freeman, East Main Evaluation & Consulting LLC, cfreeman@emeconline.com
Abstract: This demonstration will outline the Systems-Highlights-Patterns (SHP) Framework, a new framework designed for the evaluation of complex projects. The SHP Framework is a three-step iterative process that is based on a developmental evaluation approach and borrows from practices in systems thinking and strategy theory. In contrast to outcomes-based approaches, the SHP Framework offers better explanatory power in complex situations. The demonstration will be conducted in three steps. First, the presenters will illustrate how to gain an understanding of the program system using a systems thinking model. Second, participants will learn how to create a highlights table to track project accomplishments and emergent ideas longitudinally. The third step will address how to create diagrams focused on patterns of action in order to visually depict the system as a function of time. Participants will gain knowledge about the utility and practical application of the SHP Framework in their evaluations.

Session Title: Enhancing the Link Between International Development Evaluation and Civil Society: Two Illustrative Examples
Multipaper Session 80 to be held in Columbia Section 8 on Wednesday, Oct 16, 6:10 PM to 6:55 PM
Sponsored by the International and Cross-cultural Evaluation TIG
Sue Griffey,  Social & Scientific Systems Inc, sgriffey@s-3.com
Maureen Rubin,  University of Texas at San Antonio, m_rubin@hotmail.com
Evaluation of Stray Dog Population Control Project Causes Downsizing of Project, but Stimulates Dialog Among Non-governmental Organizations and Government Branches and Results in Additional Funding
Anna Martsinkiv, Rinat Akhmetov's Foundation "Development of Ukraine", amartsinkiv@fdu.org.ua
Olga Schetinina, Rinat Akhmetov's Foundation "Development of Ukraine", oschetinina@fdu.org.ua
Abstract: The Rinat Akhmetov Foundation for the Development of Ukraine began the Donetsk-based Stray Dog Population Control Project in 2010. The focus of the project was to reduce the stray dog population of 14,000 animals through sterilization. Because a needs assessment was not carried out before the project launch, a key factor for completing the project was not taken into account: the type of dwellings most residents live in. Statistics show that 50 percent of residents in Donetsk (total population 950,000) live in private homes and often have more than one dog. Two years after the project start, a comprehensive progress evaluation showed that a reduction in the stray dog population would only occur if serious efforts to curb the dog population are taken among private home owners. Without the assistance of private home owners, the goals of the project will not be achieved.
Building the Evaluation Field Through Strengthening Voluntary Organisation of Professional Evaluators in Indonesia
Benedictus Dwiagus Stepantoro, Indonesian Development Evaluation Community (InDEC), bdwiagus@gmail.com
Abstract: It was only in 2009 that a formal voluntary organisation of professional evaluators (VOPE) was established in Indonesia; it was then further strengthened structurally in 2011 with a fuller organisational structure to support its operations. This VOPE, InDEC (Indonesian Development Evaluation Community), has been trying to build the evaluation field in Indonesia through different strategies and methods addressing institutional capacity, individual capacity and the enabling environment. This genuine endeavour, however, is a long journey. Although InDEC is still a relatively young organisation, it has done much toward its goal and mission. This paper presentation will provide real, experience-based insights into those endeavours: What outcomes has InDEC been trying to achieve? What are the enabling and inhibiting factors in achieving the individual and organisational capacity outcomes? What can we learn, and what should be done better in the future?

Session Title: What You See is What You Get: Using Observations for Evaluation and Quality Improvement of Evidence-based Programs
Demonstration Session 81 to be held in Columbia Section 9 on Wednesday, Oct 16, 6:10 PM to 6:55 PM
Sponsored by the Health Evaluation TIG
Amanda Purington, Cornell University, ald17@cornell.edu
Jessica Collura, University of Wisconsin at Madison, collura@wisc.edu
Jane Powers, Cornell University, jlp5@cornell.edu
Shepherd Zeldin, University of Wisconsin at Madison, rszeldin@wisc.edu
Marilyn Ray, Finger Lakes Law & Social Policy Center Inc, mlr17@cornell.edu
Abstract: In this demonstration, evaluators from two different adolescent sexual health initiatives (one city-wide and one state-wide) will share how observational tools can be used to assess quality of evidence-based program (EBP) implementation. The increasing popularity of EBPs in health promotion and prevention has encouraged evaluators to move beyond self-report measures and assessments of curriculum fidelity to assessments of the quality of program delivery. Quality program implementation is more likely to engage young people, reduce disruptive group dynamics, and create a supportive learning environment, which ultimately should lead to positive program outcomes. The best way to evaluate - and ultimately enhance - quality is to conduct observations. Presenters will share two observation tools developed to assess quality of EBP implementation and describe how these tools have been used to inform technical assistance, enhance quality EBP implementation and promote participant engagement.

Session Title: Introduction to Social Network Analysis With NodeXL
Demonstration Session 82 to be held in Columbia Section 10 on Wednesday, Oct 16, 6:10 PM to 6:55 PM
Sponsored by the Social Network Analysis TIG
Shelly Engelman, The Findings Group LLC, shelly@thefindingsgroup.com
Tom McKlin, The Findings Group LLC, tom@thefindingsgroup.com
Abstract: Social Network Analysis (SNA) has evolved as a powerful method for capturing both visual and mathematical elements of connections among people, organizations, and other interacting units. Evaluators use this method for mapping and measuring meaningful, often unseen, structural relationships in communities. This demonstration will introduce novice users to NodeXL, a free open-source template for Microsoft Excel. Attendees will learn how to prepare surveys to collect SNA-related data, organize data in NodeXL, generate sociograms, and evaluate metric results. Two examples will be presented: First, we will examine how collaborations among high school and university computer science instructors have grown over time using a pre/post survey; second, we will look at publication data to assess the partnerships among authors at a university research center.

Session Title: Facilitation: An Essential Ingredient in Evaluation Practice
Think Tank Session 83 to be held in Columbia Section 11 on Wednesday, Oct 16, 6:10 PM to 6:55 PM
Sponsored by the Collaborative, Participatory & Empowerment Evaluation TIG
Dawn Smart, Clegg & Associates, dsmart@cleggassociates.com
Rita Fierro, Fierro Consulting, fierro.evaluation@gmail.com
Alissa Schwartz, Solid Fire Consulting, alissa@solidfireconsulting.com
Tessie Catsambas, EnCompass LLC, tcatsambas@encompassworld.com
Kataraina Pipi, FEM 2006 Ltd, kpipi@xtra.co.nz
Patricia Jessup, InSites, pjessup@insites.org
Jean King, University of Minnesota, kingx004@umn.edu
Tobi Mae Lippin, New Perspectives Inc, 
Kristin Bradley-Bull, New Perspectives Inc, 
Veena Pankaj, Innovation Network Inc, vpankaj@innonet.org
Rosalie Torres, Torres Consulting Group LLC, rosalie@torresconsultinggroup.com
Maria Scordialos, Living Wholeness Institute, 
Vanessa Reid, Living Wholeness Institute, 
Laurel Stevahn, Seattle University, 
Michele Tarsilla, Social Impact Inc, mitarsi@hotmail.com
Abstract: There are many intersections between evaluation and facilitation. In evaluation, facilitation can play a role in helping groups map a theory of change, in data collection through focus groups or other dialogues, and in analysis by involving stakeholders in making meaning of the findings. While each of these steps is described in evaluation texts and the literature, less attention is given to describing facilitation approaches and techniques. Even less is written about evaluating facilitation practices, which are integral to organizational development and collaborative decision-making. Choices for facilitation methods to implement depend on the client, context, and priorities of the work, as well as the practitioner's skill, confidence, and philosophy. This think tank brings together a group of evaluators and facilitators collaborating on a publication about these complementary practices. We hope to spark a deeper conversation and reflections among participants about the role of facilitation in evaluation and of evaluation in facilitation.

Session Title: Measuring Peace and Food Resilience: Approaches and Lessons Learned
Multipaper Session 84 to be held in Columbia Section 12 on Wednesday, Oct 16, 6:10 PM to 6:55 PM
Sponsored by the International and Cross-cultural Evaluation TIG
Diana Rutherford,  FHI 360, drutherford@fhi360.org
Measuring "Peace": Approaches and Lessons Learned in Evaluating the Peace in East and Central Africa (PEACE II) Program
Robert Grossman-Vermaas, IBTCI, rgrossman@ibtci.com
Adam Reisman, IBTCI, areisman@ibtci.com
Abstract: Evaluators of peace-building and conflict mitigation programs in complex environments frequently face major challenges, such as establishing valid means for interpreting and measuring progress toward the desired state of relative peace. This challenge is even more significant when evaluators are required to provide causal, correlative or even logical linkages between funded peace-building activities and the actual outcomes achieved - such as strengthened "peace." In this paper, the authors will examine such challenges through the context of IBTCI's final evaluation of the USAID PEACE II Program, which focused on mitigating conflict and enhancing security within targeted cross-border communities along the Kenya-Somalia and Kenya-Uganda borders. In particular, this paper will emphasize approaches used and lessons learned for the benefit of future evaluations of peace-building programs in complex environments. Key to this will be describing how the team approached measuring "peace," and how it did so under operational constraints, such as lack of access due to conflict, and methodological constraints, such as how to prove donor activity impact.
Food Security Resilience Measurement: Capacity or Outcome?
Timothy Frankenberger, TANGO International, tim@tangointernational.com
Suzanne Nelson, TANGO International, suzanne@tangointernational.com
Abstract: Resilience has recently emerged as a framework for improving regional and local capacity to deal with shocks, and thereby reducing the need for humanitarian responses. Yet there is a scarcity of verifiable evidence of impact among resilience programs. Empirical evidence at the household, community and national levels that illustrates what factors consistently contribute to resilience, to what types of shocks and in what contexts can be used both for planning and programming purposes as well as for assessing program impact. An analytical framework is needed that is general enough to be applied across different contexts but flexible enough to be contextualized, and to which general measurement principles can be applied. Key points for resilience measurement identified during an Expert Consultation held in Rome include the purpose of the measurement, unit of analysis, types of measurement, timing/frequency of data collection, importance of qualitative approaches, use of indices, etc.

Session Title: Using the Evaluation Questions Checklist to Improve Practice
Demonstration Session 85 to be held in Cabinet on Wednesday, Oct 16, 6:10 PM to 6:55 PM
Sponsored by the Organizational Learning and Evaluation Capacity Building TIG
Maureen Wilce, Centers for Disease Control and Prevention, mwilce@cdc.gov
Sarah Gill, Centers for Disease Control and Prevention, sgill@cdc.gov
Sheri Disler, Centers for Disease Control and Prevention, sdisler@cdc.gov
Abstract: Asking the right questions is crucial to producing meaningful and actionable evaluation findings. In this workshop, the authors will show how the Evaluation Questions Checklist can be used to review proposed evaluation questions and transform mediocre questions into ones that elicit useful and insightful information for evaluation stakeholders. This checklist combines literature, practice wisdom garnered from interaction with members of the Organizational Learning and Evaluation Capacity Building Topical Interest Group, and experiences from evaluators from the National Asthma Control Program, Centers for Disease Control and Prevention. The process identifies factors that help focus an evaluation question to provide more valuable information while integrating use of the evaluation standards. Working in groups, participants will identify ways to enhance their sample evaluation questions. Participants are encouraged to bring questions from their own practice. At the end of the demonstration, participants will report back and engage in discussion with the larger group.

Session Title: Considering Culture and Ethnicity in Data Collection
Multipaper Session 86 to be held in Georgetown West on Wednesday, Oct 16, 6:10 PM to 6:55 PM
Sponsored by the Multiethnic Issues in Evaluation TIG and the Quantitative Methods: Theory and Design TIG
Erika Taylor, National Education Association, taylorerika1@gmail.com
Is the Ethnic Category Hispanic Useful for Evaluation or Other Types of Research?
Michael Maranda, City College of NY, mic_maranda@yahoo.com
Ray Toledo, New York State Office of Alcohol & Substance Abuse Services, raytoledo@oasas.ny
Milton Herrera, City College of NY, mherrera@ccny.cuny.edu
Abstract: The term Hispanic as used in the US has always covered a diverse category, and with relatively recent immigration from Central and South America it has become more diverse still. Who are the Hispanics? There have been arguments about the acceptability of the term to those labeled Hispanic. Should another term be used? Is there resentment at being lumped into this category? Can it legitimately be used as a stand-in variable for diet, other behaviors, or prejudice? Should Hispanic be a self-ascribed label or one assigned by the researcher? Arguments are presented that, in most cases at the local level, it is better to use more specific 'ethnic' terms referring to country of origin and to include specific measurements of the behaviors of interest, and that at the national level the category Hispanic is too broad to be meaningful and, in many cases, distorts the picture toward the largest subgroup of Hispanics.
Balancing Consistency With Flexibility: Systematizing a Process of Cross-Cultural Survey Adaptation in a Five-Country Youth Development Project
Maura Shramko, Search Institute, mauras@search-institute.org
Abstract: Cross-cultural adaptation of psychometric measures requires attention to accurately contextualizing the concepts within a language and culture, while still ensuring that the adaptation remains equivalent to the original psychometric constructs. Search Institute has developed and tested a widely applied survey instrument measuring non-cognitive factors in youth development, adapting the instrument into more than 20 languages in collaboration with partners such as USAID, Save the Children, and World Vision International. Working with Save the Children and MasterCard Foundation, Search Institute is systematizing the adaptation of the Developmental Assets Profile (DAP) in a complex multi-site, multi-country youth livelihoods and workforce program. Challenges include coordinating consistent processes of adapting the instrument in five country contexts with multiple partners. This paper uses pilot data and reflection on process standardization to identify ways to improve the systematization of a complex adaptation process, balancing the need for consistency and flexibility in a way that allows for more robust assessment of outcomes and impact.

Session Title: Evaluator's Role: To Be or Not to Be an Advocate
Panel Session 87 to be held in Georgetown East on Wednesday, Oct 16, 6:10 PM to 6:55 PM
Sponsored by the Government Evaluation TIG
Rakesh Mohan, Idaho State Legislature, rmohan@ope.idaho.gov
Abstract: As the use of evaluation becomes more widespread in various sectors of our society, a set of key questions comes to mind. What is the role of evaluators: just to conduct the evaluation, or to go beyond that? Should evaluators play the role of advocates, promoting their findings and recommendations? What about championing a particular policy or program? Where do we draw the line beyond which advocacy compromises the evaluator's independence? How do you balance advocacy with independence? Answers to these questions are not black and white, nor do they reflect universal agreement among evaluators. In order to play an active role in shaping the 21st century through evaluation, evaluators need to have an ongoing discussion about their role.
Evaluators in the Midst of Advocates: Some Challenging Situations
George Grob, Center for Public Program Evaluation, georgefgrob@cs.com
Evaluators can find themselves in situations where they are called upon to play the role of advocates, or may be viewed as such even if they do not wish to be. Sometimes the opposite is true: evaluators may consciously or unconsciously (unaware of their own biases) choose to advocate for policies or programs they have evaluated. This presentation will examine some such situations, trying to sort out what is appropriate. Examples include: using evaluations to advocate for increasing or cutting government program spending; working as an evaluator employee of an advocacy organization; and personally advocating for implementation of recommendations included in an evaluation report authored by the evaluator. The presenter will discuss both the propriety of working in these situations and approaches to protect evaluator independence and to recognize and deal with personal biases. The scenarios will reflect true-life situations encountered by the presenter.
Advocating For Our Evaluations -- Maybe the Hardest Part of Our Job
Michael Hendricks, Independent Consultant, mikehendri@aol.com
Some persons -- both clients and evaluators -- don't feel that evaluators should engage in any form of advocacy, not even for the findings, conclusions, and recommendations of their studies. I disagree with this, and I believe that advocating for the evaluation (as opposed to advocating for a particular decision or policy) is an important part of our job. However, there are at least four reasons why such advocacy may be our toughest task: (1) Each evaluation situation is different, requiring us to advocate differently each time; (2) We typically advocate to a group of persons, each of whom might respond best to a tailored advocacy message; (3) Nothing in our formal training prepares us to do this sort of advocacy; and (4) We rarely have any contractual or financial support for continuing this advocacy after a final report and briefing. This presentation will offer some suggestions for making the most of these situations.

Session Title: Keeping the Flow of Evaluative Learning Within Organizations
Demonstration Session 88 to be held in Jefferson West on Wednesday, Oct 16, 6:10 PM to 6:55 PM
Sponsored by the Internal Evaluation TIG and the Presidential Strand
India Swearingen, United Way of the Bay Area, iswearingen@uwba.org
Abstract: Few organizations will deny that evaluation findings can help them improve outcomes, compete for funding, or plan strategically. Although there is a growing appreciation for evaluation within organizations, fluid relationships between evaluation and program planning are still challenging. In fact, many evaluations exist as static, point-in-time projects or evaluators work independently from program partners. As a result, findings may not be fully relevant to the organization. A strong relationship, effective systems and relevant tools are often necessary to build a well-integrated connection between evaluation and program planning within an organization. This session will demonstrate a case study of one particular program and outline strategies and key tools (such as quarterly dashboards) that help keep evaluation relevant and flowing in program planning. This demonstration will cover key strategies that helped facilitate relationship building and organizational thinking around the strategic use of data and evaluation.

Session Title: Using a Program-Theory Model to Create a Curriculum Rubric, Assess Leadership and Inform a Theory of Change
Panel Session 89 to be held in Jefferson East on Wednesday, Oct 16, 6:10 PM to 6:55 PM
Sponsored by the Program Theory and Theory-driven Evaluation TIG
Belle Brett, Brett Consulting Group, bellebrett@comcast.net
Patti Bourexis, The Study Group Inc, studygroup@aol.com
Abstract: This panel looks at how a program-theory model has been developed and used in an evaluation of an initiative designed to transform schools by assisting high schools in the development of 12th-grade capstone courses in math and science. We will highlight the program model, from which we have designed a rubric used to guide thinking on curriculum development and which we have included in our classroom observation protocol. We will discuss how the model has informed the assessment of the leadership portion of the program, and how it has informed our theory of change model. These efforts can help clarify antecedent conditions associated with the program, add to our understanding of programs that work, and illuminate under what circumstances program effects can be expected to occur. Such efforts would seem likely to hold the promise of more realistic solutions and to add to the overall growth of our field.
Using a Program-theory Model Throughout the Evaluation: From Development of a Curriculum Rubric to a Theory of Change
Kathryn Race, Race & Associates Ltd, race_associates@msn.com
Donald Wink, University of Illinois at Chicago, dwink@uic.edu
Dean Grosshandler, University of Illinois at Chicago, grosshan@uic.edu
Based on an exemplar case study, this presentation describes how a fully articulated program-theory model has been integrated into the programming and evaluation efforts of a transformation teacher institute designed to help participating teachers and high schools develop or modify 12th-grade capstone courses in math and science. We will show how the program model has been instrumental in the development of both a curriculum rubric and a theory of change model. The rubric has been integrated into programming efforts (including use by participating teachers), into our proposed quality assessment of the curricula, and into the protocol used in classroom observations. The theory of change model centers on teacher leader teams as agents of change. We will focus on how these efforts are integral to our evaluation in assessing program fidelity and outcomes, and we will highlight how they are applicable to other evaluations in STEM education.
Leadership Development in Theory and Practice: Evaluating the Leadership Strand of a Transformation Teacher Institutes Program for High Schools
Megan Deiger, Loyola University Chicago, mdeiger@luc.edu
Stacy Wenzel, Loyola University Chicago, swenzel@luc.edu
Jonya Leverett, Loyola University Chicago, jlevere@luc.edu
MaryBeth Talbot, Loyola University Chicago, mtalbo1@luc.edu
This presentation will explore the enactment of one of the most important strands of the program described in the first presentation. Specifically, we will discuss the internal evaluation efforts that examined the leadership development program component, and discuss how the program activities aimed at developing leadership among participants affected teachers and their work in schools. We will present findings from a mixed-method evaluation including data gathered from interviews, observations, and surveys, to illustrate the ways in which the program was manifested in the field. Focused attention will be given to the ways in which the enacted program differed from the program model, some of the mechanisms behind these differences, and the implications these differences may have on the theory of change model. We will also discuss how program planners have utilized evaluation findings and how overall evaluation efforts are relevant to evaluations of other STEM professional development programs.

Session Title: Ignite Your Impact: Examining Outcomes, Impacts, Effectiveness, and Scale-up
Ignite Session 90 to be held in Lincoln West on Wednesday, Oct 16, 6:10 PM to 6:55 PM
Sponsored by the AEA Conference Committee
Getting to Outcomes (GTO) 101: A Novice's Guide to Evidence-Based Approaches to Accountability
Katherine Knies, University of South Carolina, kniesk@gmail.com
Abraham Wandersman, University of South Carolina, wandersman@sc.edu
Abstract: Good implementation is difficult given: 1) the significant amount of skills required, 2) the number of steps that need to be addressed, and 3) the variety of contexts in which programs are implemented. These challenges have created a large gap between research and practice, resulting in a lack of positive outcomes at the local level. The Getting To Outcomes™ (GTO) accountability framework is one way to address this gap (Chinman et al., 2004; Wandersman, 2009). GTO has 10 steps drawn from numerous research studies and field-based expertise in the areas of planning, implementation, evaluation, continuous quality improvement, and sustainability. In this Ignite presentation, I will provide a brief introduction that exposes participants to the utility of GTO for their evaluations. This presentation will be geared toward novice evaluators, with an attempt to enhance their capacity to address key tasks involved in planning, implementing, evaluating, and sustaining their own programs.
"You're Gonna Need a Bigger Boat": How Evaluators Can Support Scale-Up of Pilot Programs
Andrea Beesley, Mid-continent Research for Education and Learning, abeesley@mcrel.org
Sheila Arens, Mid-continent Research for Education and Learning, sarens@mcrel.org
Abstract: Developers and evaluators of successful pilot programs often contemplate scaling the program to a wider audience. Although this is sometimes thought of as the developer's responsibility, evaluators can play a crucial role in supporting iterations of pilot tests and in scale-up of piloted programs through interpreting evaluation data, involving stakeholders, and contributing directly to program materials. In this Ignite session, we address how evaluators can prepare for scale-up at different phases of program development and give examples from three development project evaluations.
Performance Management and Evaluation: Two Sides of the Same Coin
Isaac Castillo, Child Trends, isaac101@hotmail.com
Ann Emery, Innovation Network Inc, aemery@innonet.org
Abstract: Performance management and evaluation: what's the difference? With an increasing emphasis on measurement and impact, service providers and their funders are pushing for increasingly sophisticated evaluation approaches such as experimental and quasi-experimental designs. However, experimental methods are rarely appropriate, feasible, or cost-effective for the majority of organizations and service providers. In contrast, performance management, the ongoing process of collecting and analyzing information to monitor program or organizational performance, is something that every organization could, and should, do on a regular basis. In this Ignite session, the presenters will introduce the key differences between internal performance management processes and formal external evaluations. These terms are often confused and conflated because both approaches utilize many of the same techniques. However, their philosophies, timing, and purposes vary. This session will clarify when and for whom each approach is most appropriate.
Evaluating a Key Success Loop
Teresa Behrens, Grand Valley State University, behrenst@foundationreview.org
Abstract: Causal loop diagrams are one systems tool used to understand cause-and-effect relationships in complex systems. This tool, with origins in system dynamics, requires a great deal of quantitative data. A variant, Key Success Loops, has been suggested (Kim, 1997) as a simplified version of causal loop diagramming that can be useful for surfacing theories about causal relationships. In this presentation, the presenter will show 1) how a Key Success Loop was used to describe the work of an organization that promotes collaboration between foundations and government, 2) how that loop led to an evaluation design, and 3) how the evaluation results are being analyzed and presented.
Finding Common Ground and Increasing Utilization Through Open, Collaborative, and Innovative Evaluation Methods (A Case Study)
Alfonso Sintjago, University of Minnesota, sintj002@umn.edu
Brittany Edwards, University of Minnesota, edwa0180@umn.edu
Abstract: This presentation shares the results of the utility-focused developmental evaluation of the Graduate and Professional Student Assembly (GAPSA) at the University of Minnesota, conducted under a resolution approved by its General Assembly. The goal of this presentation is to share how two important new concepts, open evaluation and integrative leadership techniques, can increase the effectiveness of an evaluation and the engagement and support of stakeholders. GAPSA's evaluation utilized innovative techniques, including open collaboration through the use of Google Docs as wikis, where any stakeholder with access to the internet could contribute to drafting and modifying evaluation documents. This presentation also highlights the benefits of the different methodologies promoted by the Center for Integrative Leadership (CIL) at the University of Minnesota, including World Café, Art of Hosting, Polarity Mapping, and Idea Generation, which have contributed to GAPSA's evaluation.

Session Title: Training Evaluators: Learning Outcomes for Introductory Evaluation Courses
Think Tank Session 91 to be held in Lincoln East on Wednesday, Oct 16, 6:10 PM to 6:55 PM
Sponsored by the Teaching of Evaluation TIG
Randall Davies, Brigham Young University, randy.davies@byu.edu
Katye Perry, Oklahoma State University, katye.perry@okstate.edu
Brad Astbury, University of Melbourne, brad.astbury@unimelb.edu.au
Jennifer Morrow, University of Tennessee at Knoxville, jamorrow@utk.edu
Abstract: The need to provide quality training opportunities to new evaluators will always be an important objective for the evaluation profession. Informed by Stevahn, King, Ghere, and Minnema's updated taxonomy of essential program evaluator competencies, this session will specifically address the question: what does it mean to be a trained evaluator? The session will involve a discussion of what topics and skills should be taught in an introductory evaluation course to best prepare new evaluators, along with what else new evaluators need to know, understand, and be able to do in order to be recognized as trained evaluators.

Session Title: Evaluation Policy TIG Business Meeting and Presentation: Alignment Between Intention and Implementation: A Case Study of a Foundation's Evaluation Policy
Business Meeting Session 92 to be held in Monroe on Wednesday, Oct 16, 6:10 PM to 6:55 PM
Sponsored by the Evaluation Policy TIG
TIG Leader(s):
Lisa Rajigah, Utah State University, lrajigah@yahoo.com
Kristin Kaylor Richardson, First Things First, krichardson@azftf.gov
Lisa Dillman, University of California at Los Angeles, lisa.m.dillman@gmail.com
Abstract: Evaluations conducted by foundations operate under the auspices of evaluation policies that influence aspects of evaluation design, including research questions, data collection and analysis procedures, and reporting of findings. These policies might be deliberately designed or unintentional, and may be tacitly understood rather than explicitly articulated. This research investigates the evaluation policies at a large private foundation and assesses the fidelity with which they are implemented. Through document analysis and interviews, a logic model detailing the theory of action underlying the evaluation policy is presented and used to assess the degree to which the policies are implemented in the field as intended. Evaluation policy is a relatively new but important area of research. Evaluation policies are potentially very powerful, but all too often too little attention is paid to them (Mark, Cooksy, & Trochim, 2009).

Session Title: Determining the Added-Value of Partnerships: Factors to Help Assess Elements of Success
Expert Lecture Session 93 to be held in Suite 7101 on Wednesday, Oct 16, 6:10 PM to 6:55 PM
Sponsored by the Business, Leadership, and Performance TIG
Vivienne Wildes, United States Agency for International Development, vjw100@gmail.com
Abstract: Partnership-based projects present unique opportunities to advance program goals and objectives. However, measures to evaluate partnership success have proved both difficult and elusive. This presentation outlines a framework for assessing the added value of partnership-related activities during the various stages of a partnership, from formation to management. Partnerships are explored from two angles: the project and the partnership itself. Discussion will include ways to monitor, evaluate, and strengthen partnership strategies through early and consistent measures of effectiveness, efficiency, scale, sustainability, and leverage.

Session Title: Public Health Law Research: Theory and Methods
Multipaper Session 95 to be held in Suite 1101 on Wednesday, Oct 16, 6:10 PM to 6:55 PM
Sponsored by the Health Evaluation TIG
Alexander Wagenaar, University of Florida, wagenaar@gmail.com
Abstract: The use of law as a mode of health intervention has increased dramatically since the 1960s. Evaluation research has played an important part in this expansion, demonstrating that law has contributed to some of the biggest public health achievements of the past half-century. Yet legal evaluation has largely gone on within topical silos. Motor vehicle safety researchers worked on motor vehicle law; tobacco researchers worked on tobacco control law; AIDS researchers worked on AIDS law. There was little cross-fertilization of methods, and the sense that these strands of work could be seen as comprising a distinct field of public health law research was slow to develop. Moreover, research on health laws has not been informed by theory and methods developed in sociolegal research. This panel demonstrates a truly multi-disciplinary approach to legal evaluation, drawing upon a newly published book, Public Health Law Research: Theory and Methods.
Filling the Black Box: Theories for Public Health Law Evaluation
Scott Burris, Temple University, scott.burris@temple.edu
Theoretically grounded research illuminating mechanisms of legal effect has at least three important benefits for public health law research and practice: Defining the phenomena to be observed, supporting causal inference, and guiding reform and implementation. The choice of what theory or theories to draw upon is a practical one based on research questions and designs, types of law or regulatory approach under study, and state of current knowledge about the matter being investigated. Evaluation researchers can draw upon a variety of theories developed by socio-legal scholars to explain how laws are put into practice and how they influence environments and behaviors. Similarly, it is possible to integrate laws within general social and behavioral theories. And it is in fact possible to do both at the same time. These methods make it possible to substantially improve the validity, utility and credibility of health research on effects of laws and legal practices.
Advancing the State of Research Designs Used in Public Health Law Evaluation
Alexander Wagenaar, University of Florida, wagenaar@gmail.com
Despite notable exceptions, evaluations of the impact of laws on health and safety continue to use research designs characterized by limitations on internal validity, the strength of confidence in a causal interpretation of observed effects. This presentation describes and illustrates the use of experimental and quasi-experimental designs in the evaluation of injury control laws. New rules may be implemented in a way that allows experimental evaluation, or experimental methods may be used in the "laboratory" to experimentally test the mechanisms of a law's effects. In natural experiments where random assignment is not possible, there are many design elements that, when combined in a single study, produce study designs that match or sometimes exceed the validity of RCTs. Such elements include: long time series spanning decades; high time-resolution measures (daily/weekly/monthly); multiple comparison groups, sites, and measures; hierarchically nested levels of comparisons; and dose-response studies.

Session Title: Breakthroughs in Capacity Building in the 21st Century: Learn about the Plan-Do-Reflect Thinking Tool!
Skill-Building Workshop 96 to be held in Suite 2101 on Wednesday, Oct 16, 6:10 PM to 6:55 PM
Sponsored by the Organizational Learning and Evaluation Capacity Building TIG
Cassandra O'Neill, Wholonomy Consulting, cassandraoneill@comcast.net
Abstract: We developed this flexible thinking tool to support individuals and groups in setting goals, trying new things, planning for reflection, and adopting reflective practices. People are frequently asked to adopt new behaviors or habits, or to make changes, without any support. The plan-do-reflect thinking tool helps people identify something they want to try, think through the best conditions for trying it, anticipate challenges that may come up, and then reflect after they have tried it. When using this tool, people have asked for support in identifying something to try, so we have also developed goal-setting thinking tools and lists of thinking prompts that can be used to set goals. The combination of these thinking tools has helped people and organizations build capacity to successfully try new things and embed reflective practice into their work. Engaging in a process that stretches thinking can be simple with this thinking tool.

In a 90-minute Roundtable session, the first rotation uses the first 45 minutes and the second rotation uses the last 45 minutes.
Roundtable Rotation I: Evaluating Non-response Error: Impact and Repercussions of Survey Data on Evaluation Findings
Roundtable Presentation 97 to be held in Suite 3101 on Wednesday, Oct 16, 6:10 PM to 6:55 PM
Sponsored by the Quantitative Methods: Theory and Design TIG and the Research on Evaluation TIG
Michelle Bakerson, Indiana University South Bend, mmbakerson@yahoo.com
Abstract: Most evaluators at one point or another depend on data collected through questionnaires or surveys; however, this type of data gathering comes with certain limitations that must be kept in mind. One major limitation is the possibility of non-response bias existing within the data itself. The bias created by non-response is a function of both the level of non-response and the extent to which non-respondents differ from respondents (Kano, Franke, Afifi & Bourque, 2008). Essentially all surveys will contain some form of error that will prejudice the interpretation of the data; it is when bias exists that there may be repercussions. A brief overview of survey error is presented, with a major focus on non-response error. Specific examples and techniques for reducing and correcting for non-response and detecting non-response bias are covered to help facilitate valid interpretations of evaluation findings.
Roundtable Rotation II: Just to be on the Safe Side: Who are We really Protecting With Active Informed Consent Procedures in Evaluation?
Roundtable Presentation 97 to be held in Suite 3101 on Wednesday, Oct 16, 6:10 PM to 6:55 PM
Sponsored by the Quantitative Methods: Theory and Design TIG and the Research on Evaluation TIG
Elena Lyutykh, Concordia University Chicago, elena.lyutykh@cuchicago.edu
Cynthia Grant, Concordia University Chicago, cynthia.grant@cuchicago.edu
Veronica Richard, Concordia University Chicago, veronica.richard@cuchicago.edu
Abstract: In this presentation, three university professors reflect on their experiences conducting external program evaluations with minors in school settings in four different countries. Our discussion focuses on informed consent procedures for data collection in the context of program evaluation. A common theme in our experiences across countries was our dissatisfaction with the limitations that active consent procedures imposed on our access to and relationship with the participants, which likely compromised the validity/truthfulness of the evaluation results. We problematize the being-on-the-safe-side premise that typically privileges active consent over passive consent procedures in IRB decisions. Requiring participants and their parents to sign and return consent and assent forms often represents the risky side from the participants' perspective, forcing them to opt out of voicing their opinions in program evaluations. We therefore argue for wider acceptance and use of passive consent in program evaluations in educational settings.

Roundtable Rotation I: Barriers to Behavior-based and Outcome-based Evaluations of Workplace Training
Roundtable Presentation 98 to be held in Suite 4101 on Wednesday, Oct 16, 6:10 PM to 6:55 PM
Sponsored by the Organizational Learning and Evaluation Capacity Building TIG
Perri Kennedy, Independent Consultant, perri@perrikennedy.com
Yonnie Chyung, Boise State University, ychyung@boisestate.edu
Abstract: Most workplace learning professionals understand the need to evaluate the training interventions they develop for their organizations. A behavior-based evaluation examines the merit of the training and its relevance to organizational outputs, while a results-based evaluation measures actual outcomes against expected benefits. In short, did the training intervention meet its objectives, and did it have the desired impact on workplace performance? In this era of shrinking budgets, it is critical for learning departments to prove their value to their organizations. However, surveys reveal that only about half of workforce training interventions are evaluated on the on-the-job behaviors they generate, and even fewer are evaluated on how they met organizational expectations. Research also indicates that the quality of such evaluations is inconsistent, as is the perception of what a results-based evaluation actually measures. The proposed roundtable discussion will explore the barriers to conducting relevant behavior-based and results-based evaluations of training.
Roundtable Rotation II: Matching Evaluation Supply and Demand: Experiences From a Costa Rican Public Sector-academia Alliance on Evaluation Capacity Development
Roundtable Presentation 98 to be held in Suite 4101 on Wednesday, Oct 16, 6:10 PM to 6:55 PM
Sponsored by the Organizational Learning and Evaluation Capacity Building TIG
Sabrina Storm, Deutsche Gesellschaft für Internationale Zusammenarbeit (GIZ), sabrina.storm@giz.de
Corinna Schopphoff, University of Costa Rica, corinna.schopphoff@cimonline.de
Abstract: Balancing demand and supply for evaluations and evaluation expertise is a particular challenge for national Evaluation Capacity Development. The program Promoting Evaluation Capacities in Central America (FOCEVAL) acts at the interface between public administration, academia, and civil society, and began implementation in July 2011. A key approach is promoting dialogue, articulation, and joint activities between academic actors, such as the Master's program in Evaluation and the Investigation and Training Center in Public Administration (both of the University of Costa Rica), and executive M&E authorities like the Ministry of Planning (MIDEPLAN). This presentation shows how important steps were taken to contribute to a demand- and user-oriented training and evaluation practice in Costa Rica: developing and disseminating evaluation standards for the public sector, implementing a hands-on train-the-trainer program for evaluation champions from national universities and MIDEPLAN, and providing scientific support within the scope of pilot evaluation projects in public institutions.

Roundtable: GSNE Evolving: Reflecting on the Membership, Mission and Vision of the Graduate Student and New Evaluator TIG for the 21st Century
Roundtable Presentation 99 to be held in Suite 5101 on Wednesday, Oct 16, 6:10 PM to 6:55 PM
Sponsored by the Graduate Student and New Evaluator TIG
A Rae Clementz, University of Illinois at Urbana-Champaign, raeclementz@gmail.com
Kristin Woods, Oklahoma State University, krwoods@okstate.edu
Patrick Barlow, The University of Tennessee, pbarlow1@utk.edu
Abstract: This is the 15th anniversary of the Graduate Student and New Evaluator TIG. The GSNE TIG leadership committee would like to take this opportunity to review and revise the original mission and vision of the TIG to reflect the evolution of the TIG and the needs and priorities of TIG members. This is a working Roundtable. Participants will help the leadership committee draft a statement of identity for the TIG, revise the existing mission and vision statements, and develop a clear set of priorities for the TIG moving forward. This draft will be posted to the TIG website and Facebook page for further comment & review, then presented formally to AEA in 2014. We invite everyone with an interest in supporting and developing graduate students and new evaluators to come share their perspectives and have a voice in the future of the GSNE TIG.

In a 90 minute Roundtable session, the first rotation uses the first 45 minutes and the second rotation uses the last 45 minutes.
Roundtable Rotation I: Moving Toward Developmental Evaluation: Are Those Hiring Us Moving With Us?
Roundtable Presentation 100 to be held in Boundary on Wednesday, Oct 16, 6:10 PM to 6:55 PM
Sponsored by the Health Evaluation TIG and the Teaching of Evaluation TIG
Lauren Workman, University of South Carolina, workmanl@email.sc.edu
Kelli Kenison, University of South Carolina, kenison@mailbox.sc.edu
Sarah Griffin, Clemson University, sgriffi@clemson.edu
Elizabeth Cashen, University of South Carolina, cashen@mailbox.sc.edu
Carley Prynn, University of South Carolina, prynn@email.sc.edu
Abstract: Our team has an evolving approach to evaluation. Informed by developmental evaluation, we utilize both qualitative and quantitative methods to evaluate a variety of public health programs in South Carolina. However, are the perceptions of our roles and approaches congruent with those of our partners? As it is important to consider the dynamics of collaboration and perception when working in a developmental evaluation framework (Gamble, 2008), we are preparing to conduct a set of semi-structured interviews with key partners and stakeholders from six ongoing or recently completed evaluations. Through these interviews we aim to understand the ways in which partners and stakeholders view our role as evaluators, as well as the ways their perceptions are shaped by previous experiences with evaluators. In this session we will share the results of these interviews, along with our reflections on the results, and the impact on our approach to evaluation.
Roundtable Rotation II: Emerging Models for Reflexivity in Evaluation
Roundtable Presentation 100 to be held in Boundary on Wednesday, Oct 16, 6:10 PM to 6:55 PM
Sponsored by the Health Evaluation TIG and the Teaching of Evaluation TIG
Jenna van Draanen, University of California at Los Angeles, jennavandraanen@gmail.com
Abstract: In the field of evaluation there is a paucity of information and guidance on the practice of reflection, despite it being a professional competency. This roundtable will begin with a short presentation of a model to guide reflexive practice for evaluators that was developed and piloted during the evaluation of a psychiatric consumer caucus from May to December 2011. Models for practitioner reflexivity from other disciplines will be presented. Discussion will be geared towards adapting the existing frameworks and sharing methods of reflection that others in the field have found helpful. Ample time for discussion during this roundtable will allow attendees to wrestle with concepts such as the impact of an evaluator's social location, evaluation politics, and the complex interplay of evaluation values. The session will contribute to the field by shaping new models for reflexivity in evaluation and advancing the available guidelines for professional reflective practice.
