2012


Session Title: Optimize Effectiveness and Outcomes for Environment Programs
Demonstration Session 900 to be held in 102 B on Saturday, Oct 27, 1:00 PM to 2:30 PM
Sponsored by the Environmental Program Evaluation TIG
Presenter(s):
Dima Reda, World Bank, dreda@thegef.org
Omid Parhizkar, World Bank, oparhizkar@thegef.org
Abstract: As a financial organization, the Global Environment Facility (GEF) has allocated $10 billion in grants to developing countries for 2,800 projects related to (among others) climate change and biological diversity. Given the complexity of the funding areas, the ability to define environmental outcomes and aggregate results is imperative. At the GEF we have developed innovative tools and a robust results-based management system to improve our ability to measure outcomes in the short run and impacts in the longer term. This demonstration will cover three main areas: 1) selection of useful and measurable environmental indicators, 2) measurement of outcomes at the portfolio level, and 3) optimization of outcomes through a results-based management system. While drawing on the experience of the GEF, the demonstration and tools are applicable to any evaluator working in the environmental field, in particular within an international context.

Session Title: Evaluation as Positive Youth Development: How Do We Integrate the Best Practices of Both Fields?
Think Tank Session 901 to be held in 102 C on Saturday, Oct 27, 1:00 PM to 2:30 PM
Sponsored by the Youth Focused Evaluation TIG
Presenter(s):
Benjamin Silliman, North Carolina State University, ben_silliman@ncsu.edu
Abstract: This Think Tank presents key principles and methods of Youth Development (experiential learning, youth-adult partnership, quality climate) and Evaluation (evaluation standards, authentic assessment, participatory evaluation) as catalysts for discussion of the values, goals, and strategies that promote the best of both fields. The emergence of high-stakes testing and adult-driven youth programs in much of formal and informal youth development presents a challenge of paradigms for program leaders trained in the hands-on, youth-led, practical-outcomes approaches of traditional youth organizations. Participants will be invited to consider if and how youth organizations can increase the intensity and fidelity of programs to produce high-performance outcomes while maintaining the integrity of programs within youth development best practices.

Session Title: Assessing the Strengths and Limitations of Web-Based Performance Measurement and Reporting Systems
Panel Session 902 to be held in 102 D on Saturday, Oct 27, 1:00 PM to 2:30 PM
Sponsored by the Integrating Technology Into Evaluation TIG
Chair(s):
Paul Garbe, Centers for Disease Control and Prevention, plg2@cdc.gov
Abstract: Performance measurement and reporting systems (PMRS) housed in web-based management information systems are widely used in the public sector to improve accountability and inform decision making. Periodic examination of PMRS helps ensure effective performance measurement. This panel highlights three web-based PMRS used by the Centers for Disease Control and Prevention to monitor and report the impact of national public health programs. The first presentation describes methodology used to evaluate the National Asthma Control Program’s PMRS and reports findings on system achievements and challenges, as well as strategies for system improvement. The second presentation outlines the National Comprehensive Cancer Control Program’s PMRS and explores inconsistencies and trend effects in reported performance measures. The third presentation summarizes preliminary findings from an evaluation of the Chronic Disease Management Information System and suggests future enhancements. This panel will provide strategies for monitoring and evaluating web-based PMRS to inform stakeholders and managers using these systems.
Evaluating the Performance Measurement Capabilities of the Asthma Information Reporting System (AIRS): Findings and Future Strategies
Laura Hester, Centers for Disease Control and Prevention, gjt8@cdc.gov
Ayana Perkins, Scimetrika, vzt8@cdc.gov
The Centers for Disease Control and Prevention’s Air Pollution and Respiratory Health Branch (APRHB) provides funding and technical assistance on asthma control and prevention to 34 states, Puerto Rico, and the District of Columbia, which together compose the National Asthma Control Program (NACP). To monitor and evaluate the efforts and outcomes of NACP grantees, the APRHB launched the Asthma Information Reporting System (AIRS) in the fall of 2010. AIRS is a performance measurement and reporting system (PMRS) housed in a web-based management information system that is designed to gather information on a set of core performance indicators. This presentation outlines the methods used to evaluate AIRS’s ability to effectively and efficiently meet the NACP’s needs. Additionally, the presentation summarizes the evaluation findings on AIRS’s value, lessons learned about designing and implementing AIRS and other PMRS, and strategies for increasing the use of these systems in program evaluation.
Monitoring Performance Measures for the National Comprehensive Cancer Control Program in the Era of Management Information Systems
Julie Townsend, Centers for Disease Control and Prevention, zmk4@cdc.gov
Angela Moore, Centers for Disease Control and Prevention, armoore@cdc.gov
Chris Stockmyer, Centers for Disease Control and Prevention, zll6@cdc.gov
Susan Derrick, Centers for Disease Control and Prevention, srd3@cdc.gov
Tiffani Mulder, Centers for Disease Control and Prevention, hyn3@cdc.gov
Vicky D'Alfonso, Centers for Disease Control and Prevention, vcd3@cdc.gov
The Centers for Disease Control and Prevention (CDC) collects and monitors performance measures to ensure that National Comprehensive Cancer Control Program (NCCCP) grantees are successfully implementing high impact cancer control activities defined in the NCCCP. In 2010, a web-based management information system (MIS) was deployed to collect programmatic and performance measures data from grantees, replacing paper-based reporting. NCCCP performance measures for year 4 of the NCCCP grant cycle were queried from the MIS and compared to previous years’ results for a trend analysis. An assessment of data collected in the MIS revealed a 74% decline in the amount of leveraged funds that NCCCP grantees reported and a reduction in the percentage of programs with implementation activities addressing cancer survivorship. Temporal changes alone likely did not account for these differences, and these findings may be attributed to an MIS that requires more detailed reporting of activities. Interpretation of trends requires caution.
The Chronic Disease Management Information System: Lessons Learned and Future Implications
Raegan Tuff, Centers for Disease Control and Prevention, rrt6@cdc.gov
The Centers for Disease Control and Prevention (CDC) supports the development, implementation, and evaluation of chronic disease programs through cooperative agreements with a diverse array of partners. For over five years, CDC has implemented management information systems (MIS) to track partners’ efforts through interim and annual reports. An increased emphasis on accountability, program integration, cost savings, and information sharing highlighted the need to more aggressively consolidate MIS activities. This presentation will provide an overview of the Chronic Disease Management Information System (CDMIS), a web-based solution for monitoring and enhancing grant performance after receipt and dispersal of award(s), highlight preliminary evaluation findings on its usefulness, and outline future system enhancements for improved reporting.

Session Title: New Instruments to Measure Student Success in School
Multipaper Session 903 to be held in 102 E on Saturday, Oct 27, 1:00 PM to 2:30 PM
Sponsored by the Pre-K - 12 Educational Evaluation TIG
Chair(s):
Kelly Murphy, Claremont Graduate University, kelly.murphy@cgu.edu
Discussant(s):
Sheila Robinson Kohn, Greece Central School District, sbkohn@rochester.rr.com
Fidelity of Implementation: One Measure for Multiple Tiered Models of Support
Presenter(s):
Amy Gaumer Erickson, University of Kansas, aerickson@ku.edu
Patricia Noonan, University of Kansas, pnoonan@ku.edu
Abstract: Models of Response to Intervention (RTI) have been widely developed and implemented, and have recently expanded to include integrated academic/behavior RTI models. With so many RTI models incorporating their own process and evaluation measures, many schools find implementation of both academic and behavior models to be overwhelming. It is necessary to develop fidelity of implementation tools that reliably measure implementation across academic and behavior RTI models. In this presentation, we will report current research on RTI implementation measures and describe the School Implementation Scale, a measure successfully tested within three separate RTI models. This cost-effective, online measure shows high reliability across multiple models in elementary, middle, and high schools. Additionally, school personnel report that the results inform their action planning and data-based decision-making processes.
Rubrics For History
Presenter(s):
Rebeca Diaz, WestEd, rdiaz@wested.org
Donna Winston, WestEd, dwinsto@wested.org
Abstract: This presentation will focus on methods used and best practices learned from creating lesson plan rubrics for evaluating two professional development programs. Funded by the same federal education grant, the professional development grantees aimed to enhance teachers’ U.S. history pedagogy. The presenters, who have evaluated five such grantees, developed lesson plan rubrics, tailored to each client’s needs, to assess teachers’ instructional practice. They employed a mixed-methods, collaborative evaluation approach, working with both the project leaders and the teachers involved in the program. Their evaluation insight on creating lesson plan rubrics will be useful for any evaluator assessing professional development efficacy through lesson plan analysis.
Triangulation of Multiple Data Sources: Examining Congruence and Sensitivity
Presenter(s):
Shanette Granstaff, University of Alabama, usgranstaff@crimson.ua.edu
John Bolland, University of Alabama, jbolland@ches.ua.edu
Abstract: Office discipline referrals (ODRs) are increasingly used to measure student behavior. Research suggests, however, that a variety of contextual factors may influence a student’s risk of receiving an ODR, thus limiting use of the measure for research, evaluation, and program development. The triangulation of multiple, though imperfect, data sources can result in a more valid and comprehensive measure than a single source. This presentation demonstrates how survey data from primary sources and from collaterals, along with school records, can be used to assess the congruency of ODRs with self- and other-reported data; identify characteristics associated with congruency; and conduct a sensitivity analysis comparing single and triangulated data measures. The data in this study come from the Mobile Youth Survey, a multi-cohort longitudinal study conducted in impoverished areas of Mobile, Alabama. The methods discussed are widely applicable, and the specific results may be applicable to other studies of ODRs.
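As an illustration of the triangulation logic described above (not the authors' code), the following Python sketch checks agreement between school-recorded ODRs and self-reported behavior using Cohen's kappa; the data frame, column names, and values are hypothetical.

```python
# Illustrative sketch: assessing congruence between office discipline referrals
# (ODRs) and self-reported behavior. Column names and data are hypothetical.
import pandas as pd
from sklearn.metrics import cohen_kappa_score

records = pd.DataFrame({
    "student_id":       [1, 2, 3, 4, 5, 6],
    "odr_flag":         [1, 0, 1, 0, 0, 1],   # 1 = at least one ODR in school records
    "self_report_flag": [1, 0, 0, 0, 1, 1],   # 1 = self-reported problem behavior
    "gender":           ["M", "F", "M", "F", "M", "F"],
})

# Agreement beyond chance between the two imperfect sources
kappa = cohen_kappa_score(records["odr_flag"], records["self_report_flag"])
print(f"Overall kappa: {kappa:.2f}")

# Which characteristics are associated with (in)congruence?
records["congruent"] = (records["odr_flag"] == records["self_report_flag"]).astype(int)
print(records.groupby("gender")["congruent"].mean())
```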
Development of an Interactive Assessment Tool to Capture Children’s Developing “Data Analysis” Math Skills
Presenter(s):
Tomoko Wakabayashi, HighScope Educational Research Foundation, twakabayashi@highscope.org
Zongping Xiang, HighScope Educational Research Foundation, zxiang@highscope.org
Beth Marshall, HighScope Educational Research Foundation, bmarshall@highscope.org
Ann Epstein, HighScope Educational Research Foundation, aepstien@highscope.org
Susanne Gainsley, HighScope Educational Research Foundation, sgainsley@highscope.org
Abstract: The proposed session will present the rationale for developing DAT, an interactive assessment tool to capture preschool children’s Data Analysis skills, as a supplement to existing standardized math assessment tools. DAT was piloted as part of the Institute of Education Sciences-funded Numbers Plus Efficacy Study, which is currently being conducted. Numbers Plus is a preschool mathematics curriculum aligned with National Council of Teachers of Mathematics standards. Strategies for aligning assessments with interventions as a means of maximizing evaluation outcomes will be discussed along with our preliminary results.

Session Title: Using OnTheMap to Evaluate Workforce and Economic Development Strategies
Demonstration Session 904 to be held in 200 A on Saturday, Oct 27, 1:00 PM to 2:30 PM
Sponsored by the Data Visualization and Reporting TIG
Presenter(s):
Nichole Stewart, University of Maryland, Baltimore County, nicholes21@aol.com
Abstract: The Census Bureau’s OnTheMap application is a data visualization tool that evaluation practitioners can use in needs assessments and impact evaluations of programs that implement workforce development and economic development job creation strategies. Using LEHD Origin-Destination Employment Statistics (LODES) data derived from state unemployment insurance wage records, the application provides measures of employment, skill level, and job flows of workers employed in jobs in an area (work-area) and workers living in an area (residence area). Session attendees will be guided through the steps for using the application including 1) importing shapefiles and KML/KMZ files, and 2) performing area profile, area comparison, destination, inflow/outflow, and paired area analysis. In addition, the demonstration will cover 1) using the detailed reports, tables, and charts available through the application, and 2) exporting the data to GIS to use along with other data sets.
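The sketch below is an illustrative, offline counterpart to the kind of analysis OnTheMap performs: it tallies jobs by census tract from a downloaded LODES workplace area characteristics (WAC) file. The file name and column names (w_geocode, C000) follow the published LODES layout as best recalled here and should be verified against the current LEHD technical documentation.

```python
# Rough sketch (assumptions flagged in the lead-in): tract-level job counts
# from a downloaded LODES WAC file, the data underlying OnTheMap.
import pandas as pd

wac = pd.read_csv("md_wac_S000_JT00_2010.csv.gz", dtype={"w_geocode": str})

# Census block codes nest: the first 11 digits of the 15-digit block code
# identify the tract, so tract-level totals are a simple groupby.
wac["tract"] = wac["w_geocode"].str[:11]
jobs_by_tract = wac.groupby("tract")["C000"].sum().sort_values(ascending=False)
print(jobs_by_tract.head(10))
```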

Session Title: To Bid or Not to Bid...that is just ONE of the Questions: Practical Tips and Tactics to a Successful Approach With Requests for Proposals
Panel Session 905 to be held in 200 B on Saturday, Oct 27, 1:00 PM to 2:30 PM
Sponsored by the Independent Consulting TIG
Chair(s):
Leah Goldstein Moses, The Improve Group, leah@theimprovegroup.com
Abstract: The Request for Proposal process is a common conduit to finding and acquiring project work in the field of evaluation. Most of us agree that it is a highly challenging system. This engaging panel session unites representatives from independent evaluation groups from across the United States to pass on concrete tips and lessons learned to take control of the time, effort and costs surrounding RFPs. Our panel will share practical knowledge on the RFP challenges we all address, such as how to: decide what to bid on; decipher the RFP; prioritize proposal management activities; successfully engage key content and technical experts in the work; and clearly communicate why people should work with you. These skilled presenters will offer perspectives from a small, 2-person business, a woman-owned company, a mid-sized agency, and a large university. Attendees are encouraged to share their ideas to create an even richer repository of useful tips.
Bringing Marketing and Evaluation Skills to Selecting and Responding to RFPs
Susan Murphy, The Improve Group, susanm@theimprovegroup.com
The elephant in the room for any working evaluation professional is the need to generate more work by responding to the ever-present Request for Proposals. Too many of us have worked a good number of hours to craft and submit a proposal only to discover we were not selected due to factors such as a previous relationship the client had with another evaluator, a lack of information in the RFP that would have prompted a different response from us, or a budget that was not realistic for the tasks requested. This session will explore what to watch out for in the RFP search, deciphering the RFP itself, and planning the correct approach for responding to get the win. Ms. Murphy will draw from 12 years of proposal writing experience in both non-profit and for-profit business arenas to share lessons learned and helpful tips for tackling the RFP.
Pestering Your Superiors While Still Keeping Your Job: Managing for Proposal Success
Jennifer Dewey, James Bell Associates Inc, dewey@jbassoc.com
Requests for Proposals are stressful endeavors due to the need to produce a well-crafted response to critical work tasks that is time sensitive, incorporates key input from proposal partners, and exceeds organizational quality standards. Even when an organization anticipates and plans for the proposal, conceptualizing, partnering, writing, and budgeting still take up valuable, unpaid hours by multiple staff members who need to balance the need to procure more work with doing the work the organization already has. Prioritizing proposal management, including designating a proposal manager, will enable the team to better determine roles and responsibilities, develop a timeline, track page counts, establish common terms of reference, manage content and fiscal input from subcontractors and consultants, coordinate exhibits and appendices, and funnel sections through the editing process, helping to ease the way to a successful procurement.
Bid-No-Bid Decision for a Request for Proposals
Sathi Dasgupta, SONA Consulting Inc, sathi@sonaconsulting.net
The bid-no-bid decision in response to an RFP depends on several factors. The presenter will provide a checklist of questions and discuss why those questions are important and how to interpret and rate the answers. The presenter will also discuss how to arrive at a bid/no-bid decision.
The Potential and the Pitfalls of Proposal Development in Academia
Laura Bloomberg, University of Minnesota, bloom004@umn.edu
Academia is fraught with potential barriers to successful proposal development, including expensive student stipends, unrealistic indirect costs, turf battles across departments, and the struggle to justify evaluation as a worthy pursuit in research-driven institutions. Yet there are creative ways to overcome these challenges and leverage the potential for colleges and universities to develop successful proposals that build on strengths and add value for both the client and the institution. The presenter will draw on years of experience teaching program evaluation at the Humphrey School of Public Affairs and serving as executive director of a multidisciplinary university research center to explore these pitfalls and opportunities.

Session Title: Statistical Methods in Designing and Evaluating Programs
Multipaper Session 906 to be held in 200 C on Saturday, Oct 27, 1:00 PM to 2:30 PM
Sponsored by the Quantitative Methods: Theory and Design TIG
Chair(s):
Guili Zhang, East Carolina University, zhangg@ecu.edu
A Methodology for Identification of High Need Areas: An Application in New York State
Presenter(s):
Vajeera Dorabawila, New York State Office of Children & Family Services, vajeera.dorabawila@ocfs.state.ny.us
Susan Mitchell-Herzfeld, New York State Office of Children & Family Services, 
Abstract: This presentation will demonstrate a methodology that can be utilized in the identification of high need areas in program design. Often the need arises to combine multiple need measures and identify potential target locations. This methodology combines a large number of indicators, identifies zip codes with high need, and then identifies clusters of zip codes that can potentially be targeted. Specifically, indicators of need utilized included those related to economic and social problems, child welfare involvement, and juvenile justice involvement. The primary unit of analysis was the zip code. High need clusters were defined as either individual zip codes or clusters of contiguous zip codes where both the rate of problems (risk) and the number of cases with problems (burden) are high. Simple statistical tools (standard deviation and z scores) available in most software programs and GIS software were utilized in the analysis, and as a result the approach will be accessible to all.
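A minimal sketch of the z-score composite approach described above follows; the indicator names, toy values, and the one-standard-deviation cutoff are illustrative assumptions, not the authors' specification.

```python
# Illustrative z-score composite for identifying high-need zip codes
# (hypothetical indicators and data).
import pandas as pd

df = pd.DataFrame({
    "zip":              ["12201", "12202", "12203", "12204"],
    "poverty_rate":     [0.31, 0.12, 0.27, 0.08],
    "cps_reports_rate": [0.045, 0.020, 0.050, 0.015],
    "jj_intakes_rate":  [0.012, 0.004, 0.015, 0.003],
})

indicators = ["poverty_rate", "cps_reports_rate", "jj_intakes_rate"]

# Standardize each indicator (z-score) so they can be combined on one scale,
# then average into a single composite need index per zip code.
z = (df[indicators] - df[indicators].mean()) / df[indicators].std()
df["need_index"] = z.mean(axis=1)

# Flag zip codes more than one standard deviation above the mean as "high need";
# contiguous high-need zips would then be grouped into clusters in GIS.
df["high_need"] = df["need_index"] > df["need_index"].std()
print(df[["zip", "need_index", "high_need"]])
```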
Reliability Generalization (RG): Examining Reliability Estimates of the Organizational Commitment Questionnaire (OCQ)
Presenter(s):
Kim Nimon, University of North Texas, kim.nimon@gmail.com
Forrest Lane, University of Southern Mississippi, forrest.lane@usm.edu
Mariya Gavrilova Aguilar, University of North Texas, mariya.gavrilovaaguilar@unt.edu
Kelly Carrero, Shippensburg University, kmcarrero@ship.edu
Susan Frear, University of North Texas, susanfrear@gmail.com
Marie Garrigue, University of North Texas, marie.garrigue@7-11.com
Tekeisha Zimmerman, University of North Texas, zimmermantk@sbcglobal.net
Linda Zientek, Sam Houston State University, lrzientek@shsu.edu
Robin Henson, University of North Texas, robin.henson@unt.edu
Abstract: This paper presents the results of a Reliability Generalization (RG) study conducted on the Organizational Commitment Questionnaire (e.g., Porter, Steers, Mowday, & Boulian, 1974). A meta-analytic technique, RG studies examine reliability estimates to identify predictors of measurement error across studies. The OCQ is a popular commitment scale (Brett, Cron, & Slocum, 1995) commonly used to evaluate program outcomes (cf. Davis & Bryant, 2010). Not only does this study clearly demonstrate that reliability is a function of the data rather than the instrument, it also found that 25% of the variability in reliability estimates could be explained by differences in study parameters.
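To make the RG logic concrete, a schematic example (not the authors' data or model) is sketched below: reported alpha coefficients are regressed on coded study characteristics, and the model R-squared indicates how much of the variability in reliability those characteristics explain.

```python
# Schematic reliability generalization (RG) analysis with invented data:
# regress reported alpha coefficients on coded study characteristics.
import pandas as pd
import statsmodels.formula.api as smf

studies = pd.DataFrame({
    "alpha":      [0.88, 0.82, 0.90, 0.79, 0.85, 0.92, 0.81, 0.87],
    "n":          [220, 95, 410, 60, 150, 500, 80, 300],
    "items":      [15, 9, 15, 9, 15, 15, 9, 15],   # e.g., full vs. short form
    "translated": [0, 1, 0, 1, 0, 0, 1, 0],        # 1 = non-English administration
})

model = smf.ols("alpha ~ n + items + translated", data=studies).fit()
print(model.summary())
print(f"Share of variability in alpha explained: {model.rsquared:.0%}")
```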
Evaluating the Coverage Performance of Confidence Intervals for Effect Sizes in ANOVA
Presenter(s):
Guili Zhang, East Carolina University, zhangg@ecu.edu
Abstract: Reporting an effect size in addition to or in place of a hypothesis test has been recommended by some statistical methodologists because effect sizes are recognized as being more appropriate and more informative. When reporting evaluation results, providing an effect size as an indication of intervention effect alongside the hypothesis test results has become increasingly important. In this study, the coverage performance of confidence intervals (CIs) for the Root Mean Square Standardized Effect Size (RMSSE) was investigated in a balanced, one-way, fixed-effects, between-subjects ANOVA design. The noncentral F distribution-based and the percentile bootstrap CI construction methods were compared. The results indicated that the coverage probabilities of the CIs for RMSSE were not adequate, suggesting the need for a better alternative in order for evaluation results to be accurate and trustworthy.
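The following Monte Carlo sketch illustrates how coverage of a percentile bootstrap CI for an ANOVA effect size can be checked. It is not the author's simulation, and the RMSSE formula used here (with k - 1 in the denominator) is one common convention that should be treated as an assumption.

```python
# Monte Carlo sketch of CI coverage for an ANOVA effect size (illustrative only).
import numpy as np

rng = np.random.default_rng(1)
mu, sigma, n, k = np.array([0.0, 0.3, 0.6]), 1.0, 30, 3
# Assumed definition: RMSSE = sqrt(sum((mu_j - mu_bar)^2) / ((k - 1) * sigma^2))
true_rmsse = np.sqrt(((mu - mu.mean()) ** 2).sum() / ((k - 1) * sigma ** 2))

def rmsse_hat(groups):
    means = np.array([g.mean() for g in groups])
    ms_within = np.mean([g.var(ddof=1) for g in groups])  # balanced design
    return np.sqrt(((means - means.mean()) ** 2).sum() / ((k - 1) * ms_within))

covered = 0
n_sims, n_boot = 200, 500          # kept small for illustration
for _ in range(n_sims):
    sample = [rng.normal(m, sigma, n) for m in mu]
    boots = []
    for _ in range(n_boot):
        # percentile bootstrap: resample observations within each group
        boots.append(rmsse_hat([rng.choice(g, size=n, replace=True) for g in sample]))
    low, high = np.percentile(boots, [2.5, 97.5])
    covered += (low <= true_rmsse <= high)

print(f"Estimated coverage of nominal 95% CI: {covered / n_sims:.2f}")
```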

Session Title: Practical Approaches to Evaluation Capacity Building in Organizations
Multipaper Session 907 to be held in 200 D on Saturday, Oct 27, 1:00 PM to 2:30 PM
Sponsored by the Organizational Learning and Evaluation Capacity Building TIG
Chair(s):
Kimberly Snyder, ICF International, ksnyder@icfi.com
Do to Learn: Practical Approaches to Building Evaluation Capacity With Local Staff
Presenter(s):
Isabelle Carboni, World Vision International, isabelle_carboni@wvi.org
Jamo Huddle, World Vision International, jamo_huddle@wvi.org
Abstract: A lack of evaluation capacity among monitoring and evaluation staff in country offices has often led to poor quality data and unusable reports for many funded programmes. Few opportunities exist to get hands-on, practical training in how to plan and manage data collection and how to analyse and use the findings. Over the last three years, World Vision has been piloting a radically new approach to building evaluation capacity across country offices in Asia, Africa, and now Latin America. The “Learning Labs” provide a ‘do to learn’ approach for national staff whose jobs are to manage evaluations, allowing them to get real experience in a safe space and critically reflect on their own practice in evaluation. The Labs use a variety of adult learning techniques to make the learning ‘stick’, but focus especially on visual and kinaesthetic approaches. Results of a review so far have been impressive. Regular reviews will be used to improve the methodology.
Evaluation Capacity Building Intervention in the Complex Cultural Context of a Military Psychological Health Program
Presenter(s):
Lara Hilton, Claremont Graduate University, lara.hilton@cgu.edu
Salvatore Libretto, Samueli Institute, slibretto@siib.org
Abstract: The need for evaluation capacity building (ECB) in military psychological health is apparent in light of the proliferation of newly developed, yet untested programs. Commanders are forced to make decisions that are not evidence-based. One innovative PTSD treatment program that provides integrative care began operating in 2008 at Ft. Hood, Texas. The program has treated over 500 soldiers on whom data have been systematically collected, but program staff lack sufficient knowledge, skills, and competencies for analysis or sustainability of evaluation efforts. The study addresses these deficiencies by offering ECB activities to program staff following a needs assessment. Based on this research, evaluators will gain a better understanding of how to provide an ECB intervention in a complex cultural and political environment and assess its effectiveness. Preskill and Boyle’s multi-disciplinary ECB model will be utilized because it emphasizes context, which is paramount in military healthcare settings.
Facilitated Self-Assessments and Action Planning: Building the Monitoring and Evaluation Capacity of Thuthuzela Care Centre Local Implementation Teams in South Africa
Presenter(s):
Peter Vaz, RTI International, pvaz@rti.org
Elizabeth Randolph, RTI International, erandolph@rti.org
Sipho Dayel, ECIAfrica Consulting Ltd, sipho.dayel@eciafrica.com
Thokozile Majokweni, Sexual Offences and Community Affairs Unit, tjmajokweni@npa.gov.z
Virginia Francis, RTI International, vfrancis@rti.org.za
Abstract: Although definitions of institutional capacity assessment vary, one key feature is that they are aimed at ensuring that institutions are able to meet their goals by identifying capacity gaps and formulating capacity-development strategies. This paper examines one such facilitated capacity self-assessment of Local Implementation Teams (LITs) to evaluate their ability to effectively deliver services at Thuthuzela Care Centres (TCCs - rape care management centres) in South Africa. The methodology involved developing a capacity assessment tool, adapted from the United Nations Development Goals assessment framework, which identified capacity gaps and developed an action plan for each LIT. The tool was implemented in 34 TCCs. Although the process was initially confused with performance assessment, facilitators monitored this tendency and directed LITs to the purpose of institutional capacity self-assessment. This is one of the first facilitated self-assessments in sexual and gender-based violence in South Africa and could be adapted to varied evaluation situations.
Capacity Building as an Integral Component of Program Evaluation
Presenter(s):
Laura Haas, Tulane University, lhaas@tulane.edu
Bridget Lavin, Tulane University, blavin@tulane.edu
Canisius Nzayisenga, CHF International, cnzayisenga@rw.chfinternational.org
Abstract: Too many development efforts in sub-Saharan Africa are subjected to sub-standard program evaluations, with limited engagement from program staff, which add little to the existing knowledge base and fail to inform program planning. The USAID/Higa Ubeho program, initiated in 2010, is a five-year program emphasizing economic resiliency among the most vulnerable Rwandan households. In late 2010, the program engaged external consultants to evaluate prospectively the effectiveness of its efforts. A critical component of this program evaluation was continuous capacity building among program staff to ensure relevancy and enhance ownership of the evaluation results. Through a series of training workshops, all program staff contributed to the development of evaluation objectives, questionnaires, and analysis plans. Additionally, the M&E team leader received individual mentoring and training throughout the evaluation. The final product was an assessment well understood by program staff and responsive to the program’s information needs for planning and resource allocation.

Session Title: Foundation Program Officers: Heart and Ego-Driven Decision-making? Or Responsible Consumers of Strategic Learning and Evaluation? Perspectives From the Field
Panel Session 908 to be held in 200 E on Saturday, Oct 27, 1:00 PM to 2:30 PM
Sponsored by the Non-profit and Foundations Evaluation TIG
Chair(s):
Chris Armijo, The Colorado Trust, chris@coloradotrust.org
Discussant(s):
Ricardo Millett, Millett & Associates, ricardo@ricardomillett.com
Abstract: In recent years, particularly in the Non-profit and Foundations TIG, the AEA Annual Meeting has generated dialogue among evaluators regarding the utilization of evaluation and learning in the philanthropic community. Most notably, foundation evaluation staff and external evaluation consultants posit that program strategies are devoid of strategic thinking, with more emphasis on subjective criteria to guide decision-making and less attention on strategy evaluations and learning. The purpose of this session is for program officers to discuss three key questions: When do program officers use evaluation? How do program officers adapt strategies based on evaluation and learning? Under which circumstances do program officers choose not to use evaluation and learning? Program staff from three health foundations with different organizational approaches to evaluation and annual giving will provide insight into their evaluation and learning practices, with the intent of creating understanding and capitalizing on opportunities for collaboration between program and evaluation staff.
Taking Evaluation from No-Win to Win-Win
Elizabeth Krause, Connecticut Health Foundation, elizabeth@cthealth.org
The Connecticut Health Foundation (CT Health) is a health philanthropy charged with improving lives in the areas of racial and ethnic health disparities, children’s mental health and children’s oral health. CT Health strives to act as a strategic philanthropy and learning organization, leveraging resources in order to actualize ambitious visions. CT Health has been committed to independent evaluation of its major initiatives since its inception, though there have been numerous instances of disappointment, Monday morning quarterbacking, and mounting cynicism about the value of evaluation along the organization’s quest for the holy grail of “meaningful and measurable.” Session participants will gain insight into how to work with foundation clients looking to shift from traditional program evaluation to more dynamic evaluation frameworks. Additionally, session participants will gain appreciation for the ostensibly impossible role program officers must often play as intermediaries and translators of evaluation between evaluators, foundation executives, board members, and grantees.
Big Aspirations, Modest Evaluation Capacity: How Small and Local Foundations Utilize Evaluation and Learning
Rachel Wick, Consumer Health Foundation, rwick@consumerhealthfdn.org
The Consumer Health Foundation provides $2 million in grants annually in the Washington, DC Metro region focused on improving health care access and addressing the social and economic conditions that impact health in low-income communities. Approximately 75% of the Foundation’s funding is focused on advocacy and systems change. The Foundation has a logic model that guides its operations and is working to build out an evaluation strategy that includes facets of the Foundation’s work beyond grantmaking. It is also committed to building the capacity of grantees to design and evaluate their own work and to support a culture of learning within their organizations. Session participants will gain an understanding of the way small, local foundations think about evaluation; insights into evaluating advocacy and systems change work; and ways to expand philanthropy’s perspectives of evaluation to include their own performance as well as that of their grantees.
Measurement as a Standard Practice: How a Global Corporate Contributions Team Embraces Program Evaluation
Michael Bzdak, Johnson & Johnson Corporate Contributions, mbzdak@its.jnj.com
Johnson & Johnson and its many operating companies support community-based programs that improve health and well-being. Together with our partners, we are helping mothers and infants survive childbirth; we are supporting doctors, nurses and local leaders as they work to provide the best medical care to their people; and we are educating communities on how to reduce their risk of infection from preventable diseases. With a robust strategic giving plan, the Contributions team has also made a commitment to improving our ability to measure and communicate program results: numbers and stories. Session participants will gain an understanding of the importance of “evaluation thinking” within the context of a global corporate giving program. Specifically, the presentation will highlight how a sub-team or “tiger team” works to continuously improve our global team’s ability to measure outcomes while at the same time improving our partners’ evaluation capacity.

Session Title: Evaluating Federally-Funded Multi-Site Behavioral Health Programs: Methodological Approaches and Lessons Learned
Panel Session 909 to be held in 200 F on Saturday, Oct 27, 1:00 PM to 2:30 PM
Sponsored by the Cluster, Multi-site and Multi-level Evaluation TIG
Chair(s):
Resa Matthew, JBS International Inc, rmatthew@jbsinternational.com
Abstract: This panel will discuss methodological approaches and lessons learned related to evaluating Federally-funded multi-site behavioral health programs. Topics discussed include selecting an evaluation design (e.g., experimental versus non-experimental); addressing site-level heterogeneity in terms of program model, setting, type, location, and target population; obtaining stakeholder input and buy-in; developing data collection instruments in the context of Office of Management and Budget regulations; utilizing mixed-methods data collection approaches; data management issues in preparing large amounts of quantitative and qualitative data for analysis; procurement and use of secondary datasets; triangulation of data across multiple sources; and creating meaningful and actionable reports for a variety of audiences. Panelists will discuss strategies utilized and lessons learned in these areas based on their experiences conducting a variety of large-scale, Federally-funded multi-site behavioral health evaluations.
Conceptual Approaches and Issues in Conducting Federally-Funded Multi-Site Behavioral Health Evaluations
Resa Matthew, JBS International Inc, rmatthew@jbsinternational.com
Kevin Hylton, Alliances for Quality Education Inc, khylton@aqe-inc.com
Amanda Gmyrek, JBS International Inc, agmyrek@jbsinternational.com
Susan Hayashi, JBS International Inc, shayashi@jbsinternational.com
Conceptualizing an approach is a key first step in implementing large-scale Federally-funded multi-site behavioral health evaluations. A critical issue to be considered is the potential diversity in agencies, organizations, or programs being evaluated in terms of program model, setting, type, location, and target population. Given this heterogeneity, evaluators must take steps to gain a thorough understanding of the programs being evaluated by developing profiles of each grantee or conducting visits to programs to observe and discuss program operations with staff. However, evaluators must be mindful of the Federal requirement to assess level of burden on evaluation participants and time needed to obtain Office of Management and Budget clearance before data collection can begin. Federal agencies commonly recommend that evaluators obtain key stakeholder input on the proposed evaluation design and implementation (e.g., through expert panel meetings or working groups) and the process of identifying and selecting key stakeholders will be discussed.
Implementation and Data Collection in Federally-Funded Multi-Site Behavioral Health Evaluations
Kevin Hylton, Alliances for Quality Education Inc, khylton@aqe-inc.com
Guileine Kraft, JBS International Inc, gkraft@jbsinternational.com
Resa Matthew, JBS International Inc, rmatthew@jbsinternational.com
Amanda Gmyrek, JBS International Inc, agmyrek@jbsinternational.com
The large-scale nature of many Federally-funded multi-site behavioral health evaluations often lends itself to mixed-methods data collection approaches. The combination of quantitative and qualitative data provides robust information, but there are issues that must be addressed when collecting mixed-methods data for multi-site evaluations. For example, consideration must be given to the logistics around collecting data at multiple time points (whether cross-sectional or longitudinal data) and across the diverse agencies, organizations, or programs being evaluated. Quantitative data collection instruments and qualitative interview or focus group guides must be designed or selected to take into account this heterogeneity. If secondary datasets are to be used, special attention must be paid to the procurement and use of these data. This presentation will also include examples of data collection technical assistance and training that are frequently required by the agencies, organizations, or programs involved in the evaluation.
Data Management in Federally-Funded Multi-Site Behavioral Health Evaluations
Amanda Gmyrek, JBS International Inc, agmyrek@jbsinternational.com
Jackie King, JBS International Inc, jking@jbsinternational.com
Erika Tait, JBS International Inc, etait@jbsinternational.com
Nakia Brown, Alliances for Quality Education Inc, nbrown@aqe-inc.com
Large volumes of quantitative and qualitative data are often collected throughout the course of Federally-funded multi-site behavioral health evaluations. These data must be effectively managed for eventual use in analysis and reporting. In addition to managing multiple datasets and incorporating data quality procedures to ensure data integrity, quantitative data management issues include the creation of datasets that integrate data collected across multiple levels and matching data from evaluation participants across datasets using key variables of interest (e.g., demographics). Qualitative data management issues include preparation of transcripts created from recordings of interviews or focus groups for inclusion in qualitative data software and developing consensus among individuals coding qualitative data, whether using a predefined set of codes or examining the data for emerging themes. There are also decisions that need to be made regarding data collection timelines, such as the implementation and cessation of the various quantitative and qualitative data collection activities.
Data Analysis and Reporting in Federally-Funded Multi-Site Behavioral Health Evaluations
Jaslean La Taillade, JBS International Inc, jtaillade@jbsinternational.com
Guileine Kraft, JBS International Inc, gkraft@jbsinternational.com
Resa Matthew, JBS International Inc, rmatthew@jbsinternational.com
Jose Santiago-Velez, JBS International Inc, jsantiago-velez@jbsinternational.com
The final critical steps of any evaluation are data analysis and reporting of the evaluation findings. Due to the large amount of quantitative and qualitative data collected, triangulation of data across sources is a common data analysis issue for Federally-funded multi-site behavioral health evaluations. In many cases, qualitative data can be quantified and used in quantitative analysis. Given the diversity of funded programs in a multi-site evaluation, advanced analytic techniques (e.g., multi-level modeling) are often needed to simultaneously examine the influence of changes at the organization-, agency-, or program-level on key variables of interest. This presentation will include examples of translating complex analytic findings to meaningful and actionable reports for a variety of audiences (e.g., the Federal funder, programs being evaluated, and the academic community). Integrating qualitative and quantitative results in presenting evaluation reports will be discussed as will linking findings to the evaluation questions proposed in the original design.
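As a hypothetical illustration of the multi-level modeling mentioned above, the sketch below fits a random-intercept model in which grantee sites are allowed to differ while an overall change over time is estimated; the variable names and simulated data are invented for the example.

```python
# Hypothetical multi-site sketch: random intercept for site, fixed effect of time.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n_sites, n_clients = 20, 25
rows = []
for site in range(n_sites):
    site_effect = rng.normal(0, 0.5)          # site-to-site variation
    for client in range(n_clients):
        for wave in (0, 1, 2):                # baseline, 6-month, 12-month
            rows.append({
                "site": site,
                "wave": wave,
                "outcome": 10 + site_effect + 0.8 * wave + rng.normal(0, 1),
            })
df = pd.DataFrame(rows)

# Mixed model: outcome regressed on wave, with a random intercept per site
model = smf.mixedlm("outcome ~ wave", data=df, groups=df["site"]).fit()
print(model.summary())
```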

Session Title: CDC’s Interactive Evidence-Based Decision Making Tool: A Resource to Strengthen Evaluation Practice and Capacity
Demonstration Session 910 to be held in 200 G on Saturday, Oct 27, 1:00 PM to 2:30 PM
Sponsored by the Community Psychology TIG and the Organizational Learning and Evaluation Capacity Building TIG
Presenter(s):
Natalie Wilkins, Centers for Disease Control and Prevention, nwilkins@cdc.gov
Sally Thigpen, Centers for Disease Control and Prevention, sthigpen@cdc.gov
Helen Singer, Centers for Disease Control and Prevention, hhsinger@cdc.gov
Richard Puddy, Centers for Disease Control and Prevention, rpuddy@cdc.gov
Abstract: The Centers for Disease Control and Prevention’s Division of Violence Prevention has developed a comprehensive framework for thinking about evidence, which includes not only the best available research evidence on effective prevention strategies but also evidence on the complex ecology in which these strategies are implemented and evaluated, including contextual evidence and experiential evidence. In this demonstration, facilitators will engage participants in learning how to use a new, online evidence-based decision making tool developed from this comprehensive framework of evidence. The demonstration will include a general overview of the tool and its function as a support for practitioners making decisions about prevention strategies. It will also focus on key tips for evaluators on using the tool to build stakeholders’ capacity for understanding evidence and on applying multiple forms of evidence to strengthen evaluation practice.

Session Title: Developing Outcome Indicators for Program Evaluation: Training, Team Science, and Translation of Basic Research
Panel Session 911 to be held in 200 H on Saturday, Oct 27, 1:00 PM to 2:30 PM
Sponsored by the Research, Technology, and Development Evaluation TIG
Chair(s):
Joshua Schnell, Thomson Reuters, joshua.schnell@thomsonreuters.com
Abstract: A major challenge in any research program evaluation is the identification of appropriate metrics for assessing program outcome and impact. Performance metrics need to incorporate a broad array of data types and sources to capture the variety of research activities and impacts among and across disciplines. There are several widely accepted metrics for assessing performance and impact, including publication output and follow-on citations. While these measures are robust and can be normalized by field of research, other metrics are necessary to capture and report upon important research outputs and outcomes not published in journals. In this panel, we will discuss new program impact metrics for training programs, team science, and R&D programs. We will use a logic model approach to contextualize metrics within the framework of research program goals.
Training Program Evaluation: Creating a Composite Indicator to Measure Career Outcomes
Yvette Seger, Thomson Reuters, yvette.seger@thomsonreuters.com
Leo DiJoseph, Thomson Reuters, leo.dijoseph@thomsonreuters.com
A primary goal of research training programs is to increase the number of qualified researchers in a particular discipline or methodology. Training program evaluations tend to examine applicant and awardee characteristics, award rates, and short-term research outcomes such as subsequent publication or grant activity. In many cases, these measures do not capture the activities of those trainees who are participating in the research enterprise but not as a principal investigator. We tested the feasibility and efficacy of a metric that combined professional society membership, professional certification, service on federal advisory committees, and publication authorship as a proxy of ‘broader engagement’ in the biomedical research enterprise. We show that this metric captured trainees who did not have subsequent grant records but who are engaged in research-related activities. We describe a case study in which this metric was found to be an effective indicator of program impact.
Team Science Evaluation: Developing Methods to Measure Convergence of Fields
Unni Jensen, Thomson Reuters, unni.jensen@thomsonreuters.com
Jodi Basner, Thomson Reuters, jodi.basner@thomsonreuters.com
A problem in the evaluation of team science is how to measure team formation. For team-based projects where a goal is to bring together researchers from multiple disciplines to catalyze new research approaches, one metric is the evolution and use of terms that combine aspects of multiple disciplines. We tested the efficacy of publication metadata and text mining methods at detecting the convergence of multiple disciplines. In one set of experiments, we investigated the association between researchers’ disciplines and their participation in transdisciplinary collaborations. In another set, we tested the extent of association between disciplines by processing text from publications and grant progress reports and visualized co-occurrence using network graphs. We show how these methods have proven effective at both illustrating and measuring field convergence, and how they have been used to inform program evaluation.
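A toy sketch of the co-occurrence approach described above follows; the publication data are invented, and the example simply counts how often pairs of disciplines appear together on the same publication and stores the counts as edge weights in a network.

```python
# Toy discipline co-occurrence network from hypothetical publication metadata.
from itertools import combinations
from collections import Counter
import networkx as nx

# Each entry lists the disciplines represented on one publication's author team
publications = [
    {"oncology", "computer science"},
    {"oncology", "physics", "computer science"},
    {"immunology", "oncology"},
    {"physics", "computer science"},
]

pair_counts = Counter()
for disciplines in publications:
    for a, b in combinations(sorted(disciplines), 2):
        pair_counts[(a, b)] += 1

G = nx.Graph()
for (a, b), weight in pair_counts.items():
    G.add_edge(a, b, weight=weight)

# Edge weights indicate how strongly two fields co-occur, a rough signal of convergence
for a, b, data in G.edges(data=True):
    print(f"{a} -- {b}: {data['weight']}")
```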
Measuring Longer-Term Outcomes: Testing the Feasibility of Linking Research Grant Funding to Downstream Drug Development
Duane Williams, Thomson Reuters, duane.williams@thomsonreuters.com
Joshua Schnell, Thomson Reuters, joshua.schnell@thomsonreuters.com
To support program outcomes evaluation, we conducted a feasibility study for applying automated means to link grants to later-stage outcomes, namely biomarkers and drugs approved by the Food and Drug Administration (FDA). Biomarkers were linked to government funding through related publication records. To establish funding links to currently marketed drugs, we leveraged patent information from FDA New Drug Application (NDA) records listed in the FDA Orange Book and applied automated text-mining algorithms to identify connections between these documents. We applied these methodologies to grant portfolios and characterized the resulting connections. Subject matter experts reviewed the results of this study and determined that, for all of the direct connections and a subset of the indirect connections, the links between projects and drugs were appropriate. This automated and transparent process may support evaluation and program staff by improving the efficiency and consistency of portfolio analysis.

Session Title: Measuring Treatment Implementation and Common Therapeutic Process Factors and Their Relation to Outcomes
Multipaper Session 912 to be held in 200 I on Saturday, Oct 27, 1:00 PM to 2:30 PM
Sponsored by the Human Services Evaluation TIG
Chair(s):
Ronald Thompson, Boys Town, ronald.thompson@boystown.org
Abstract: The focus of this session is on a longitudinal measurement study of youth in residential care to assess both implementation quality and common therapeutic process factors (e.g., therapeutic alliance, motivation to change) in relation to youth mental health outcomes. The goals of the study were to (1) describe the relationship among seven different implementation assessment measures; (2) report on the psychometrics of an adaptation of the Peabody Treatment Progress Battery to residential care; and (3) examine the relationship of implementation and common process factors in predicting youth mental health outcomes. Perspectives will be shared regarding the strengths and limitations of the implementation measures and recommendations for using the implementation and common process factor assessments in both research and practice.
Examining the Role of Implementation Quality and Therapeutic Processes for Youth in Residential Services: An Overview of the Research Approach
Kristin Duppong Hurley, University of Nebraska, Lincoln, kdupponghurley2@unl.edu
Amy Stevens, Boys Town, amy.stevens@boystown.org
This NIMH-funded longitudinal measurement study examines both implementation quality and common therapeutic process factors to predict mental health outcomes for youth in residential services. One focus of this project was a comprehensive study of implementation assessment approaches, including the perspectives of supervisors, staff, clients, and internal and external observers, as well as implementation records from a token economy system. The second focus was on the assessment and application of therapeutic processes (e.g., therapeutic alliance, motivation to change) to the intensive 24/7 residential care setting. This study has simultaneously collected comprehensive treatment implementation and common therapeutic factor data, creating the possibility of examining the relationship between these two constructs that tend to predict positive youth outcomes. This presentation describes the overall study objectives, methodology, and sample.
Differing Views of Implementation Quality: How Implementation Assessments From Supervisors, Staff, Clients and Observers Predict Youth Mental Health Outcomes
Justin Sullivan, University of Nebraska, Lincoln, jsullivan2@unl.edu
Kristin Duppong Hurley, University of Nebraska, Lincoln, kdupponghurley2@unl.edu
Matthew Lambert, University of Nebraska, Lincoln, mlambert2@unl.edu
Many issues surround the assessment of implementation, such as what type of data to collect (observations, ratings, archival), respondent type (supervisor, client, staff), and frequency of assessments. Yet minimal research has been conducted to examine how stable these implementation assessments are over time, as well as the relationships among the various implementation assessments. While one could hypothesize that the measures would be highly correlated, it is also possible that they capture different elements of implementation, and thus may not be interchangeable. With this complex measurement data set, a variety of issues need to be considered, including nesting and longitudinal factors. This presentation examines whether there was agreement among the methods in assessing low, adequate, and high levels of implementation; how implementation levels varied over time and by experience level; and whether quality of implementation was related to youth mental health outcomes.
Assessing the Role of Common Therapeutic Process Factors on Youth Outcomes While in Residential Care
Matthew Lambert, University of Nebraska, Lincoln, mlambert2@unl.edu
Kristin Duppong Hurley, University of Nebraska, Lincoln, kdupponghurley2@unl.edu
Common therapeutic process factors (e.g., therapeutic alliance, client motivation to change) are an important aspect of service delivery. While youth are frequent recipients of mental health interventions, few measures exist to assess these common process factors with youth. This study adapted an assessment approach used successfully with youth in outpatient settings, the Peabody Treatment Progress Battery, to 24/7 residential care. This presentation will report on the psychometric properties of the modified assessment battery; investigate how the common therapeutic process factors change over time for youth in residential care; and examine if these process factors are predictive of youth mental health outcomes. To analyze these data, a variety of longitudinal approaches were utilized, including cross-lagged path analysis models, latent curve models, and growth mixture models with latent trajectory classes.
Bringing It All Together: The Relationships Among Implementation, Therapeutic Factors, and Youth Outcomes
Kristin Duppong Hurley, University of Nebraska, Lincoln, kdupponghurley2@unl.edu
Amy Stevens, Boys Town, amy.stevens@boystown.org
Matthew Lambert, University of Nebraska, Lincoln, mlambert2@unl.edu
Justin Sullivan, University of Nebraska, Lincoln, jsullivan2@unl.edu
A unique aspect of this study was the simultaneous collection of both implementation and common therapeutic process factors in regard to youth mental health outcomes. This allowed for the examination of how implementation quality and therapeutic process factors were related. Moreover, we investigated if these variables mediated youth mental health outcomes. This presentation will also summarize the key findings of the study and suggest possible next steps for future research. Specifically, recommendations will be provided on the strengths and limitations of the implementation assessments, advice for the frequency of implementation and therapeutic process factor assessments, and thoughts regarding the statistical and design challenges in analyzing these constructs.

Session Title: Creating a Performance Management Culture: Linking Measurement and Evaluation to Programmatic Results
Panel Session 913 to be held in 200 J on Saturday, Oct 27, 1:00 PM to 2:30 PM
Sponsored by the Government Evaluation TIG
Chair(s):
Kathryn Newcomer, George Washington University, newcomer@gwu.edu
Discussant(s):
Elizabeth Curda, Government Accountability Office, curdae@gao.gov
Abstract: A myriad of performance management and improvement initiatives designed to promote the use of performance measurement and program evaluation across the federal government have been instituted over the last two Administrations. Our paper and presentation examine the organizational factors that enhance or impede the use of performance measurement and evaluation to strengthen programs and create a performance management culture that supports organizational learning. We will present analyses of survey data collected by the U.S. Office of Personnel Management and the U.S. Government Accountability Office, along with the results component (Part 4) of PART scores collected by the Office of Management and Budget under the Bush Administration, to examine the correlations among the characteristics of organizational cultures that use performance data and evaluation. These findings provide a glimpse into the current practices of federal agencies and help illuminate the successes and challenges of implementing government-wide performance management and performance improvement initiatives.
The External Drivers Affecting Performance Cultures Within Federal Agencies
Kathryn Newcomer, George Washington University, newcomer@gwu.edu
The first presenter will set the stage for the presentation of our empirical results by outlining the external climate and drivers of collection of performance data and use of program evaluation within the federal government. She will also introduce the conceptual model the team has developed to help us understand and predict the level of comfort with performance management within federal agencies.
The Complex Ecologies of Agency Cultures Within Federal Government
Rick Kowalewski, US Department of Transportation, rick.kowalewski@dot.gov
The second author will bring his 30-plus years of federal managerial experience to bear in explaining how and when performance management works, and does not work, within federal agencies.
How has the use of Performance Measurement and Evaluation Played out Across the Federal Government?
Yvonne Watson, US Environmental Protection Agency, watson.yvonne@epa.gov
The third presenter will discuss the data analyses and their implications for promoting the use of performance management and evaluation within government agencies to support learning.

Session Title: Context Matters: Understanding Relationships in Complex Evaluation and Cultural Contexts
Multipaper Session 914 to be held in 201 A on Saturday, Oct 27, 1:00 PM to 2:30 PM
Sponsored by the International and Cross-cultural Evaluation TIG
Chair(s):
Debra Rog, Westat, debrarog@westat.com
Context Matters! Navigating the uncharted and unfamiliar in International Development Evaluation
Presenter(s):
Sonal Zaveri, Founder Member and Core Group Member of Community of Evaluators, sonalzaveri@gmail.com
Abstract: Uneven understanding exists about how contextual issues in evaluation can exacerbate inequities or, worse, unintentionally create new ones (1) in the diverse, complex, and volatile communities of South Asia. Evaluation examples involving vulnerable groups (sex workers, migrants, child laborers) from South Asia demonstrate how contextual factors such as gender, caste, hierarchy, and religion affect the use (and misuse) of transformative/participatory approaches to evaluation. The paper challenges evaluation theory and practice that is largely North-focused and discusses what and how nuanced cultural competencies enabled evaluators and commissioners of evaluation to ethically address different world views/realities (2), be equity oriented (3), and do no harm in the implementation and use of evaluation. 1. Freire, P. (1972). Pedagogy of the Oppressed. Penguin. 2. Chambers, R. (2009). So that the poor count more: Using participatory methods for impact evaluation. Journal of Development Effectiveness, 1(3), 243-246. 3. Mertens, D. M. (1999). Inclusive evaluation: Implications of transformative theory for evaluation. American Journal of Evaluation, 20(1).
Addressing the Complexity of Evaluating Community Development
Presenter(s):
Dorothy Ettling, University of the Incarnate Word, ettling@uiwtx.edu
Rolando Sanchez, Northwest Vista College, rolsanch@live.com
Gerald SSeruwagi, University of the Incarnate Word, sseruwag@student.uiwtx.edu
Ozman Ozturgut, University of the Incarnate Word, ozturgut@uiwtx.edu
Abstract: In the attempt to assess community development (Kramer, Seedat, Lazarus & Suffla, 2011), complexity commands increasing attention. AEA's Statement on Cultural Competence in Evaluation can serve as a reflector on cross-cultural research, as it affirms the recognition of the unique context and priorities of each community investigated. This paper introduces an approach to evaluation, developed by an international NGO through cross-cultural field practice over the last decade, that is now being examined for its resonance with the AEA Statement. The model privileges the evaluators' grasp of the local and global aspects of the complex systems that stakeholders face, and focuses on cultivating relationships with the stakeholders through capacity building and evaluation strategies. Reference: Kramer, S., Seedat, M., Lazarus, S., & Suffla, S. (2011, December). A critical review of instruments assessing characteristics of community. South African Journal of Psychology, 41(4), 503-516.
Reflections on a Cross-Cultural Evaluation: Striving for Cultural Competence
Presenter(s):
Tina Goodwin-Segal, Measurement Incorporated, tsegal@measinc.com
Christina Luke, Measurement Incorporated, cluke@measinc.com
Abstract: Over the past decade, evaluators have become more focused on the role that culture plays in evaluation. This growing body of knowledge suggests that cultural context is important in all phases of evaluation from designing through reporting. Most field experts agree that culturally competent evaluation is more than just applying a set of rules or guidelines—it’s more a way of being than of knowing. Reflecting on the evaluation of a federally-funded Cooperative Exchange Civic Education Project, this paper presents successes and challenges of conducting a cross-culturally competent evaluation in an international context. One unique factor of this evaluation was that a teachers’ union coordinated the project in eight different countries and served to bridge cultural differences. In addition, the content of the project being evaluated was emotionally volatile for participating teachers and their students. This paper presentation will discuss the lessons learned about cross-cultural competence during this three-year evaluation.
The Space Between Us: Understanding Relationships in Complex and Dynamic Evaluation Contexts
Presenter(s):
Jill Anne Chouinard, University of Ottawa, chouinardjill@gmail.com
Abstract: Evaluation is considered to be a relational endeavor (Abma & Widdershoven, 2008; Greene, 2005) that is fundamentally grounded in social relations. Within diverse cultural contexts, the focus on relationships is particularly relevant, as the need to develop inclusive approaches to accommodate alternative ways of knowing is paramount. Of particular interest is the relationship between evaluators and stakeholders, in what is spoken and not spoken, what is assumed and taken-for-granted, and what we as evaluators bring to the table, and how it is transformed within and out of the interactions that arise. Fine (1998) refers to this as “working the hyphen” (p. 135), where evaluators explore the margins of their social location, where both self identities and “others” come together. In this paper, I explore how researchers work this hyphen, how they represent others, how they decide upon which stories to tell, and where they locate themselves in telling the story.

Session Title: Hot Tips for Commissioning and Managing Actionable Evaluation
Demonstration Session 915 to be held in 201 B on Saturday, Oct 27, 1:00 PM to 2:30 PM
Sponsored by the Evaluation Managers and Supervisors TIG
Presenter(s):
E Jane Davidson, Real Evaluation Ltd, jane@realevaluation.com
Paula White, Ministry of Maori Development, whitp@tpk.govt.nz
Abstract: High quality, worthwhile, actionable evaluation doesn’t just depend on the technical competence and effective consultation skills of the evaluator. Decisions made and actions taken (or not taken) by the client can make or break the value of evaluation for an organization. High-value evaluation is the product of a fruitful interaction between a well-informed client and a responsive, appropriately skilled evaluation team. In this session, we combine the internal (client) and external (evaluation contractor) perspectives on lessons learned from both stunningly high value evaluative work (“dream projects”) and bitter disappointments (a.k.a. “Nightmares on Eval Street”), and use these as a foundation for a “hot tips” guide for those who commission evaluation – and the evaluators who work with them, demonstrated with examples. Evaluators helping clients get maximum utilization and value for their evaluation dollar will find this a useful guide for advice, support, and utilization-focused thinking and action.

Session Title: It’s Complicated: Evaluating Advocacy Partnerships
Panel Session 916 to be held in 202 A on Saturday, Oct 27, 1:00 PM to 2:30 PM
Sponsored by the Advocacy and Policy Change TIG
Chair(s):
David Devlin-Foltz, The Aspen Institute, ddf@aspeninstitute.org
Abstract: Multi-stakeholder partnership models are seen as a relatively effective force in politics, wielding ‘soft’ power to transform global norms and practices. Evaluation challenges and opportunities have emerged alongside the growth of organizations and organizational forms like transnational advocacy networks, coalitions, and campaigns. This session will discuss multi-actor advocacy efforts, engaging participants in a discussion of evaluation practice to facilitate effective collaboration. First, the panel will discuss lessons from evaluating a global-national advocacy partnership model, which sought to identify conditions necessary to advance short-term advocacy objectives and support civil society development in national partners. Next, the panel will share lessons on the design of a successful multi-stakeholder partnership. Third, we will examine the role of funders in organizing and setting the agendas of such collaborations.
Partnerships Across Borders: Structures, Systems and Conditions for Global-National Advocacy Initiatives
Rhonda Schlangen, Independent Consultant, rhondaschlangen@gmail.com
Advocacy partnerships involving global or northern and southern NGO partners aspire to influence global norms and their national expression. But power relationships, differences in civil society contexts and evaluation cultures, and even culture-based theories of change influence partnership effectiveness. This presentation will discuss evaluation of a global-national advocacy partnership in six African countries aimed at understanding effective partnership structures, systems and conditions. Partnerships that combine global and national advocacy tend to be shaped and driven by global or northern NGOs, who form the partnership and provide resources and strategic guidance. The purpose of such initiatives, then, is to advance the advocacy objectives of the global-national effort while at the same time supporting the advocacy capacity of participating organizations. Developing shared evaluation practices that contribute to partners’ learning at all levels can create space for mutual learning and accountability and reinforce partnership practices and constructs that contribute to advocacy effectiveness.
The Critical Ingredients of Partnership Design
Anna Williams, Ross Strategic, awilliams@rossstrategic.com
Partnerships may have excellent causes and strong memberships. Without other key ingredients, however, such as clear institutional arrangements, processes to identify clear goals and to encourage trust between partners, strong leadership, and smart advocacy strategies, they often falter. This presentation will highlight insights about voluntary multi-stakeholder partnerships based on an independent evaluation of an effective global partnership called the Partnership for Clean Fuels and Vehicles (PCFV) conducted for the US Environmental Protection Agency. Since its launch in 2002, PCFV has helped to catalyze the removal of lead from fuel in many developing and transitional countries, saving hundreds of thousands of people (and especially children) from extremely harmful exposure to lead. The evaluation involved an in-depth review of PCFV and comparison of PCFV’s design with voluntary partnership best practices. One product from this evaluation was a set of partnership design principles that have since been used to launch another international partnership.
Foundation-driven, Multi-grantee Advocacy Efforts: Can’t We All Just Get Along?
David Devlin-Foltz, The Aspen Institute, ddf@aspeninstitute.org
Foundations are major actors in global advocacy, but most see themselves as facilitators rather than direct participants in social change and policy change processes. Evaluators and advocates may welcome this humility from funders. But foundation representatives inevitably affect the policy change partnerships in which they participate. This session will draw on the Aspen Institute’s Advocacy Planning and Evaluation Program (APEP)’s experience as evaluator of several foundation-initiated or foundation-driven advocacy collaboratives. These vary in the degree of foundation influence over collective advocacy strategy, and in the administrative and managerial arrangements that govern the relationships among the actors. They differ as well in the institutions and processes that they seek to influence. But in every case, APEP’s role included helping clients define more clearly their objectives and to examine how their working relationships affected grantees’ ability to achieve these objectives. Bottom line: Sometimes arranged marriages can work.

Session Title: Contributions of Indigenous Evaluation Frameworks and Sovereignty to Ownership, Influence and Utility
Multipaper Session 918 to be held in 203 A on Saturday, Oct 27, 1:00 PM to 2:30 PM
Sponsored by the Indigenous Peoples in Evaluation and the Evaluation Use TIG
Chair(s):
Nicole Bowman, Bowman Performance Consulting, nicky@bpcwi.com
Discussant(s):
Nicole Bowman, Bowman Performance Consulting, nicky@bpcwi.com
Ecologies of Influence: What Indigenous Evaluation Frameworks Can Teach Us About Evaluation Use
Presenter(s):
Richard Nichols, Colyer Nichols Inc, colyrnickl@cybermesa.com
Karen Kirkhart, Syracuse University, kirkhart@syr.edu
Joan LaFrance, Mekinak Consulting, lafrancejl@gmail.com
Abstract: Indigenous evaluation is grounded in a commitment to give back to the community. Use of information is viewed as a moral obligation; evaluation is action-oriented for the betterment of society. Indigenous evaluation seeks to create knowledge and give it back in ways that are purposeful, helpful, and relevant. Knowledge may come from the process of planning and carrying out an evaluation as well as from the interpretations of findings. Utilization concerns both the sharing of information and guardianship of what is not to be shared. In this paper, LaFrance and Nichols’ Indigenous Evaluation Framework (IEF) is juxtaposed with Kirkhart’s Integrated Theory of Influence (ITI) to advance and expand prior understandings of evaluation influence. The discussion benefits both frameworks, fleshing out the “Reflecting, Learning, Celebrating” portion of the IEF and challenging the categorical representations of ITI. Lessons learned extend beyond Indigenous contexts to suggest a more nuanced appreciation of evaluation use/influence.
Strengthening Tribal Authority over Research-based and Evaluation-related Activity
Presenter(s):
Victor Begay, Arizona State University, vbegay@asu.edu
Kerry Lawton, Arizona State University, klawton@asu.edu
Abstract: There are over 550 federally recognized Tribal Nations that share an economic, geographical and political landscape with the United States. Many of these Nations have been researched extensively and, in the process, been subject to invasive, unethical, and exploitative evaluative research practices. In response, many tribes have taken greater control over research-based and evaluation-related activities conducted on Tribal lands. While the development of participatory action research has highlighted the importance of community engagement and empowerment, little attention has been paid to the role of sovereign governing entities in the evaluation process and the possible implications of increased Tribal oversight on practice. We will address these issues within the context of our experiences planning and implementing large-scale evaluations in a large, diverse American Indian community.
Tools for Ensuring Indigenous Cultural Frameworks and Values Guide Evaluation Design and Practice in Indigenous Contexts
Presenter(s):
Kate McKegg, Kinnect Group, kate@kinnect.co.nz
Kataraina Pipi, FEM (2006) Ltd, kpipi@xtra.co.nz
Abstract: In a Maori health setting in New Zealand, an innovative approach to the prevention of diabetes among Maori succeeded in attracting government funding. Underpinning the new approach to disease prevention were Maori models, concepts, theories and philosophies about health, whanau (family), culture and identity. A condition of funding was that evaluation would be undertaken alongside programme development, piloting and implementation. Although the funding and contracting for health services is located in a mainstream government setting, the evaluation design had scope to draw on Maori ways of knowing and valuing. A tool now widely used in New Zealand for surfacing values in evaluation (a rubric) was adapted in this indigenous context to ensure that Maori values were embedded in the programme assessment processes and the wider programme evaluation. The rubric development process opened up robust conversations about values and about what really mattered to Maori in the community and in the health settings. The Maori provider has subsequently embraced the use of the rubric, building it in as an integral part of the programme and the evaluation. As they see it, it reflects what they care about and what matters to them, and it has given them clarity of purpose and helped them move forward with confidence as Maori. This paper will illustrate the importance of ensuring that in indigenous contexts and situations, indigenous values hold sway and are embedded in the programmes and evaluations we conduct. It will also provide an overview of a tool (a rubric) being used by Maori and non-Maori evaluators in indigenous settings to ensure indigenous values guide our practice as evaluators.

Session Title: Navigating Complex Organizational Ecologies: Reflections on Building Lutheran World Relief’s M&E Capacity
Panel Session 919 to be held in 203 B on Saturday, Oct 27, 1:00 PM to 2:30 PM
Sponsored by the Organizational Learning and Evaluation Capacity Building
Chair(s):
Christie Getman, Lutheran World Relief, cgetman@lwr.org
Abstract: Lutheran World Relief (LWR), a small-to-medium sized international NGO, received a grant in 2010 to build its organizational monitoring and evaluation capacity, most specifically "to provide strategic leadership to LWR's efforts to better demonstrate measurable impacts of its programs." Approximately two years into this initiative, several members of the LWR M&E team (all new to LWR since 2011) reflect on the question of "where do you start?" when navigating complex organizational ecologies: the pressure from various teams to "provide M&E magic" to meet everyone's competing needs in what seemed like an overnight timeframe. The panel will include team members discussing LWR's high-level strategic evaluation information needs and the steps we took in establishing a Design, Monitoring, Evaluation and Learning Framework, as well as a perspective from a field manager on implementing the initiative with LWR staff and partners overseas.
LWR’s Strategic Perspective: Setting Initial Priorities, Timelines and an Underlying Philosophy in Building a New M&E Team and M&E “System”
Christie Getman, Lutheran World Relief, cgetman@lwr.org
LWR's Director of M&E will reflect on the complex organizational motivations and expectations driving the establishment of the team, and on how the team decided what to tackle first, and how. Balancing competing demands for quick deliverables, in a context of low evaluation capacity, minuscule budgets, and already overworked staff, has made the strategic planning for the initiative additionally complex, yet therefore widely relevant to the situations that many organizations face. The presentation will also include reflections on both gaining stakeholder buy-in across the organization and maintaining thorough internal communication. Key lessons learned two years into the initiative include emphasizing a philosophy of integrated ownership of M&E by program managers, the "it's a marathon, not a sprint" mantra, and the necessity of external networking and technical sharing, both to avoid reinventing the wheel and to take advantage of the positively encouraging "open source" culture of the international evaluation community.
LWR’s Internal Perspective: Managing the Creation of a Source of M&E Knowledge That is Logical, Accessible, Exhaustive and Sustainable
Garrett Schiche, Lutheran World Relief, gschiche@lwr.org
LWR’s M&E manager will present on the complexities of managing the design and rollout of a design, monitoring, evaluation and learning (DMEL) system. Experience has shown that any new system, M&E or otherwise, must meet the test of logic, accessibility, exhaustiveness, and sustainability. Two core themes that encompass each of these test metrics include “avoiding the reinvention of the wheel” and “stopping the dusty manual.” The presentation will reflect on why LWR has sought to take full advantage of the “open source” culture of the international evaluation community to create a framework that adopts and integrates the best DMEL tools, guides, and checklists currently available rather than developing a system from scratch. Furthermore, it will elaborate on why the key to sustainability resides in constant attention to the needs of the end user and how doing so will put a stop to the “dusty manual.”
LWR’s Field Level Perspective: Coaching for Capacity Building
Jacques Ahmed Hlaibi, Lutheran World Relief, jacques@lwr.ne
LWR’s regional monitoring and evaluation manager for West Africa will present on why coaching is not only for sports. His presentation will reflect on why coaching is also the best method to develop field staffs’ sustainable skills and knowledge related to M&E, as well as elaborating on the most effective coaching methods employed during his M&E work in Mali, Niger, and Burkina Faso. In sports, the best teams are not always those that are the most talented, but are often those that are the best coached. A good coach may cost more money up front, but the long term benefit to the team can be enormous. LWR adheres to a model of ‘accompaniment’ that believes in ‘standing with’, rather than ‘doing for.’ Based on this model LWR has made the strategic decision to invest in coaching to attain long term, high level, and sustainable M&E capacity of both LWR and partner staff.

Session Title: Evaluation for Collective Impact in Greater Cincinnati
Panel Session 920 to be held in 204 A on Saturday, Oct 27, 1:00 PM to 2:30 PM
Sponsored by the Non-profit and Foundations Evaluation TIG
Chair(s):
Mike Baker, United Way of Greater Cincinnati, mike.baker@uwgc.org
Charles Wright, United Way of Greater Cincinnati, charles.wright@uwgc.org
Discussant(s):
John Kania, FSG, john.kania@fsg.org
Victor Kuo, FSG, victor.kuo@fsg.org
Abstract: Large-scale social change requires broad cross-sector coordination, yet the social sector remains focused on the isolated interventions of individual organizations as well as individual sectors. Leading nonprofit and philanthropic organizations within the Greater Cincinnati region have recognized the importance of collaborative work and of evaluation in supporting collective initiatives. Since 2009, there have been significant local collaborative efforts among Greater Cincinnati’s multiple sectors including the arts, community and neighborhood development, education, health, and workforce development. Notably, the United Way of Greater Cincinnati is a recipient of the federal Social Innovation Fund grant program, dedicated to identifying promising, innovative solutions that have the potential to scale to meet the needs of multiple communities. This session will provide participants with insights on how evaluators can enhance collective impact efforts by facilitating the identification of shared outcomes, developing shared measurement systems, and building the capacity of backbone organizations that support collective impact efforts.
The Nonprofit and Philanthropic Sector in Greater Cincinnati
Shiloh Turner, The Greater Cincinnati Foundation, turners@gcfdn.org
Collaboration within Greater Cincinnati's philanthropic community has become the 'new normal'. However, that was not always the case. The community experienced civil unrest in 2001 that served as the lightning rod for mobilizing a cross-sector response to address deep, divisive issues. This event brought together the public, private, nonprofit, and philanthropic sectors to start healing the community. Problems were complex and required collective efforts. A decade later, Cincinnati's sector leaders have learned how to work together, trust one another, and embrace a data-driven continuous improvement culture. The region has also seen its nonprofit sector strengthened by the development of several backbone organizations that advance the agendas of the issue areas in which they work. Next steps are to establish shared outcomes and measurement. This session will highlight lessons learned from key sectors and consider the 'readiness' of sectors -- what it takes, where to begin, and what not to do.
Developing Shared Outcomes for the Health Sector in Greater Cincinnati
E Kelly Firesheets, Health Foundation of Greater Cincinnati, kfiresheets@healthfoundation.org
Jené Grandmont, Health Foundation of Greater Cincinnati, jgrandmont@healthlandscape.org
Although health data are widely available in Greater Cincinnati, it has been difficult to identify common outcomes in healthcare. Healthcare measures traditionally focus on the reduction of pathology, rather than wellness, and stakeholders often disagree on which of those measures are the most relevant or important. In order to create meaningful health outcomes, stakeholders need to find common ground between patient-level clinical measures and population-level health indicators. Meaningful outcomes also take into account social determinants that affect health but are not managed in traditional health care settings. This presentation will describe the healthcare sector and stakeholders’ roles in developing shared outcomes. The presentation will discuss the role of technology in advancing shared measurement illustrated with examples from Cincinnati. Finally, the presentation will address how shared measurement addresses the social determinants that affect health within the broader community context.
Assessing Organizational Capacity for Collective Impact in Greater Cincinnati
Ellen Martin, FSG, ellen.martin@fsg.org
A key element that differentiates collective impact efforts from other forms of collaboration is the presence of a "backbone organization". The backbone organization supports and facilitates the work of its partners to ensure that progress is made and tracked. And although much of this function occurs "behind the scenes," proponents of a collective impact approach believe this role is critical to an effective process and to achieving outcomes. Representatives from FSG will present findings from recent work in Greater Cincinnati with six backbone organizations working in the education and youth, economic development, workforce development and community development sectors. Presenters will discuss how backbone effectiveness is defined, what outcomes can be expected, and what evaluation methodologies can build greater capacity and learning for the backbone organizations themselves, their partners, funders, and other stakeholders.

In a 90 minute Roundtable session, the first rotation uses the first 45 minutes and the second rotation uses the last 45 minutes.
Roundtable Rotation I: Learning Points and Challenges Faced by Internal Evaluators
Roundtable Presentation 921 to be held in 204 B on Saturday, Oct 27, 1:00 PM to 2:30 PM
Sponsored by the Internal Evaluation TIG
Presenter(s):
Wei Cheng Liu, Ministry of Education, Singapore, liu_wei_cheng@moe.gov.sg
Hwee Lee Seah, Ministry of Education, Singapore, seah_hwee_lee@moe.gov.sg
Yi Xe Thng, Ministry of Education, Singapore, thng_yi_xe@moe.gov.sg
Ser Ming Lee, Ministry of Education, Singapore, lee_ser_ming@moe.gov.sg
Soon Chew Chia, Ministry of Education, Singapore, chia_soon_chew@moe.gov.sg
Abstract: We work as evaluators in the Ministry of Education (MOE), Singapore. Our unit designs and conducts evaluation of programmes implemented by other units in MOE, and we also provide consultation services to programme owners. The evaluations conducted range from evaluation of small-scale prototypes or programmes piloted in a few schools to systemic evaluation of large-scale programmes implemented in all schools. Although we view ourselves as internal evaluators helping programme owners identify ways to improve programme implementation, we are cognizant that some programme owners perceive that the data collected for evaluation could be used to assess their capacity to deliver the intended outcomes of the programmes (i.e., for performance assessment of individuals), even though the data have not been used for that purpose. For this session, we will share our learning points from conducting evaluation in MOE and the strategies we have adopted to ensure we are effective internal evaluators.
Roundtable Rotation II: A Collaborative and Critical Discussion of Internal Evaluation: The “Pearls” and “Perils” of Being an Embedded Evaluator
Roundtable Presentation 921 to be held in 204 B on Saturday, Oct 27, 1:00 PM to 2:30 PM
Sponsored by the Internal Evaluation TIG
Presenter(s):
Heather Wallace, Centerstone Research Institute, heather.wallace@centerstone.org
John Putz, Centerstone Research Institute, john.putz@centerstone.org
Abstract: This roundtable will address how successful internal evaluations recognize and address the dynamic nature of working within complex ecologies – specifically community-based interventions with youth and adults – ranging from relationships with direct care staff and administrators to responsibly maintaining objectivity and remaining responsive to stakeholders and the evaluation community at-large. Specific objectives for this roundtable are that participants will: a) learn about and relate to the experiences of two internal evaluators embedded in SAMHSA-funded community-based interventions, b) critically examine the "pearls" and "perils" of internal evaluation to develop and share strategies for navigating within complex ecologies, and c) collaboratively frame the qualities of an effective internal evaluator. For internal evaluators, learning how to best work in the complex ecologies in which they are embedded is essential, as doing so directly affects implementation of the evaluation and use of the data collected from their work.

Session Title: Feminist Approaches to Evaluation Research: Problems and Prospects for Enhancing Credibility and Social Justice
Multipaper Session 922 to be held in 205 A on Saturday, Oct 27, 1:00 PM to 2:30 PM
Sponsored by the Feminist Issues in Evaluation TIG
Chair(s):
Sharlene Nagy Hesse-Biber, Boston College, sharlene.hesse-biber@bc.edu
Discussant(s):
Donna Mertens, Gallaudet University, donna.mertens@gallaudet.edu
Abstract: This session will raise a range of issues regarding how evaluators take into account social difference across the evaluation process. The papers in this session will explore how specific feminist principles of praxis can be applied effectively to the evaluation research process to enhance the credibility of evidence. We employ a case study approach to examine how a feminist praxis approach can enhance the credibility of evidence-based research within different evaluation designs. We will discuss the concept of "reflexivity" and how it can be deployed before, at the beginning of, and after an evaluation project to increase the validity of research findings. We will look at how social difference is or is not included in the evaluation process and the consequences and impacts in terms of the validity of evaluation findings. We also look at strategies evaluators can employ to promote taking difference into account in the evaluation process.
Assessing and Enhancing the Credibility of Evidence-Based Practice: The Application of a Feminist Praxis Approach to Randomized Control Trials
Sharlene Nagy Hesse-Biber, Boston College, sharlene.hesse-biber@bc.edu
Most disciplines within the health and social sciences regard randomized control trials (RCTs) as the "gold standard" of evidence-based practice (EBP) to determine patient care. While EBP has increasingly included patient and clinician perspectives, the move toward mixed methods within evidence-based research has proven daunting to many researchers, and no best practices for RCT mixed methods studies currently exist. In this paper, I will argue for the value of a feminist praxis approach to RCTs and mixed methods projects. Feminist praxis begins with subjects' lived experiences and urges researchers to practice reflexivity throughout the research process. The bulk of this paper presents four case studies that analyze how researchers in diverse fields have taken feminist praxis into account in their RCT mixed methods projects as well as the missed opportunities in maximizing the validity of a project and developing a greater understanding of their research problems.
Deploying a Feminist Lens within a Needs Assessment Evaluation: A Reflection on the Process, Benefits, & Challenges
Divya Bheda, University of Oregon, dbheda@uoregon.edu
This presentation explores how the feminist principles of (1) collaborative learning and praxis, (2) beginning from lived experiences, and (3) reflexivity played a role in a needs-assessment evaluation conducted for a non-profit organization that serves girls ages 10-18 in one northwestern county. Feminist principles mediated all steps of the evaluation—from decisions regarding the evaluation questions, evaluation design, data collection methods and analysis procedures, to the recommendations derived from the evaluation, and information dissemination regarding the evaluation. This paper is a reflection on the evaluation process and how it resulted in (a) eliciting diverse perspectives that otherwise would have been missing within the evaluation, (b) increasing the validity of the evaluation, and (c) offering credible evidence that resulted in unique insights leading to the generation of recommendations to improve the non-profit's programming and services in ways that otherwise would not have been explored. Challenges will also be explored.
Using a Feminist Lens to See Disparity
Kathryn Bowen, Centerstone Research Institute, kathryn.bowen@centerstone.org
It is well established that gender has been one of the main sources of social, psychological, cultural and economic inequalities in modern societies. Characteristics of the assessment, treatment and management of mental health specific to each gender have been acknowledged for centuries, but solutions to the problems have been slow to arrive. As a result, the literature contains comparatively little about which mental illnesses affect men and women differently, why those differences might exist, and how to structure diagnosis and treatment in response to them. While social, economic and educational disparities between males and females have been under scrutiny, there is still much to do to achieve parity in payment and the full complement of equal rights between the genders in terms of access to mental health care that is gender responsive. Program evaluation with a feminist lens/perspective can intentionally respond to these sex and gender differentials. By designing robust evaluations using participatory/feminist approaches, evaluators can intentionally focus on measuring, identifying, understanding and responding to sex and gender inequity in the diagnosis and treatment of mental illness. The context for this paper reflects an adult drug court in a rural Appalachian community where men and women with co-occurring substance abuse and mental health conditions are accessing treatment within a recovery-oriented system of care.
Feminist Evaluation and Community Learning: Fostering Knowledge About School-Based Health Care
Denise Seigart, Stevenson University, dseigart@stevenson.edu
This presentation explores the challenges of incorporating gender analysis into the evaluation of school health programs in the U.S., Australia, and Canada. While conducting case studies of school-based health care in these countries, racism, sexism and classism were all noted, stemming from religious, economic, and cultural influences; these all play a part in the quality and accessibility of health care in these countries. Examples of gender inequities in access to school health care include the disproportionate influence religious organizations have on the provision of care and the valuing (or devaluing) of women's work with regard to the provision of health care for children in schools. Reflections on the challenges of implementing an evaluation from a feminist perspective will be discussed, along with the emphasis feminist evaluators should place on community learning and the responsibilities feminist evaluators have to promote this within the context of their evaluations.

Session Title: Social Network Analysis (SNA) Topical Interest Group Business Meeting and Think Tank: SNA in Evaluation - Lessons Learned and Future Directions
Business Meeting Session 923 to be held in 205 B on Saturday, Oct 27, 1:00 PM to 2:30 PM
Sponsored by the Social Network Analysis TIG
TIG Leader(s):
Maryann Durland, Durland Consulting, mdurland@durlandconsulting.com
Stacey Friedman, Foundation for Advancement of International Medical Education & Research, staceyfmail@gmail.com
Todd Honeycutt, Mathematica Policy Research, thoneycutt@mathematica-mpr.com
Irina Agoulnik, Brigham and Women's Hospital, iagoulnik@partners.org
Presenter(s):
Maryann Durland, Durland Consulting, mdurland@durlandconsulting.com
Stacey Friedman, Foundation for Advancement of International Medical Education & Research, staceyfmail@gmail.com
Irina Agoulnik, Brigham and Women's Hospital, iagoulnik@partners.org
Abstract: Interest in social network analysis (SNA) as an evaluation tool has increased over the past few years. SNA is a methodology for studying relationships within a context of networks. SNA provides tools for exploring the fit of individuals and subgroups within a network and for measuring the structural characteristics of the network, subgroups, and individuals. It is relevant for evaluation where program processes or goals hinge on understanding or effecting change in relationships (for example, relationships between community members or among organizations within a social service system). Though it has a historical thread through quantitative paradigms, it is conceptually and methodologically distinct from traditional statistical analyses. Its use in evaluation is complicated by issues related to data collection, confidentiality, and interpretation that differ from what evaluators encounter when using other methodologies. In this session, the SNA TIG leadership and invited speakers will lead a group discussion focused on four questions: 1. What lessons have you learned in using SNA in your evaluation activities? 2. What are the key challenges you have faced in applying SNA techniques? 3. What do you wish you knew before you began using SNA in evaluation that you know now? 4. Where do you think we should focus our efforts to enhance the methodology of SNA in evaluation? This session will be of interest to both new and experienced evaluators as they learn from each other about issues related to the application of SNA methods in evaluation activities.

Session Title: Internet Focus Groups: Lessons Learned and Important Considerations
Multipaper Session 924 to be held in 205 C on Saturday, Oct 27, 1:00 PM to 2:30 PM
Sponsored by the Qualitative Methods TIG
Chair(s):
Richard Krueger, University of Minnesota, rkrueger@umn.edu
Abstract: Focus groups can be conducted in a variety of ways. A recent trend is to conduct these groups over the Internet using either a bulletin board format or a real-time audio or video conversation. The Internet allows the researcher to connect with participants who would be unable to come together in person. A team of researchers from the University of Minnesota, led by Richard Krueger, examined alternative platforms and evaluated different strategies for conducting these groups. In this multi-paper session they present their findings. Papers will be presented on the following topics: 1. Selecting a platform for online focus groups 2. Creating the feeling of connectedness and producing data-rich conversation in Internet focus groups 3. Concerns about privacy, security and use of sensitive information in Internet focus groups 4. Lessons learned from Internet focus groups
Online Focus Groups: Selecting a Platform
Alison Link, University of Minnesota, linkx109@umn.edu
Sally Dinsmore, University of Minnesota, fols0026@umn.edu
The evaluator in an online environment must make deliberate choices: she must craft a space and a mode of engagement that will elicit good information from participants. Whereas face-to-face focus groups draw on participants’ innate and rich capacity to exchange language and meaning in close proximity to each other, online focus groups must conceive of “proximity” and “exchange” in new ways. Moreover, in a world where technologies are constantly evolving, it is difficult to separate passing technology fads from lasting tools and trends. With these considerations in mind, this paper offers a framework for selecting an online focus group platform. It gives examples of both free and proprietary services that online evaluators may want to consider. It also emphasizes that the technology landscape is constantly evolving, and offers dimensions for selecting platforms that will be relevant across time, as platforms continue to evolve.
Creating the Feeling of Connectedness and Producing Data-Rich Conversation in Internet Focus Group
Michael Lee, University of Minnesota, leex5298@umn.edu
Mary O'Brien, University of Minnesota, obrie713@umn.edu
A hallmark of the traditional in-person focus group is social interaction that encourages participants to share rich data with the moderator and among their peers. Conducting focus groups online challenges researchers to modify their approach to facilitate a permissive conversational environment that transcends the limitations of spatial separation. The purpose of this article is to identify strategies for utilizing features of the virtual environment that can optimize collection of rich, experiential data. Specific topics include using visual elements to introduce participants and moderators; crafting questions that elicit reflection and thoughtful responses; creating safe spaces for self-disclosure online; using back-channel messaging to prompt non-participants; and basic design considerations.
Concerns About Privacy, Security and Use of Sensitive Information in Internet Focus Groups
Alfonso Sintjago, University of Minnesota, sintj002@umn.edu
Caryn Lindsay, University of Minnesota, linds231@umn.edu
The internet offers researchers the opportunity to adapt established methods to a new environment, one that can eliminate the limitations of time and place. Focus group research is one method that is making this transition. Potentially, internet focus groups could offer sufficient anonymity to allow sensitive matters to be discussed openly in a way that face-to-face focus groups cannot. Has that potential been developed? Are internet focus groups being held with a complete understanding of the implications for privacy, security and confidentiality? This paper will identify and explore the concerns researchers need to take into account when developing internet focus groups.
Lessons Learned from Internet Focus Groups
Patrick O'Leary, University of Minnesota, poleary@umn.edu
David Ernst, University of Minnesota, dernst@umn.edu
Online environments provide unique opportunities for evaluators to collect useful data. But successfully leveraging these environments requires an understanding of the technologies and how they best mediate human interaction. This paper will review important lessons learned from conducting online focus groups and the application of those lessons to other domains of research, communication and instruction. Suggestions will be offered on effective instructional design and software choices. And, a list of best practices will be discussed, including how the online environment differs from face-to-face, benefits and challenges in synchronous versus asynchronous environments, the role of moderators, and how to overcome technical barriers.

Session Title: Novel Applications of Case Study Methods: Evaluating Complex CTSA Research Environments (Clinical and Translational Science Awards)
Multipaper Session 925 to be held in 205 D on Saturday, Oct 27, 1:00 PM to 2:30 PM
Sponsored by the Health Evaluation TIG
Chair(s):
Janice A Hogle, University of Wisconsin, Madison, jhogle@wisc.edu
Abstract: The case study method is a qualitative research approach that explores a single case or cases to identify pathways, patterns and relationships. Within complex research environments, such as Clinical and Translational Science Awards (CTSA), case studies can complement other quantitative and qualitative information and provide an in-depth understanding of how the research infrastructure impacts success. This session explores how five of the 60 CTSAs have used the case study method within the context of emergent evaluation approaches to assess the achievements of complex research infrastructures. Case studies may focus on individual investigators and/or their research teams, or on clinical/translational research projects, or indeed on an entire institute or center. The presenters are all professional evaluators affiliated with their CTSAs, tasked with internal evaluation of their institutes and centers. The presentations will be useful for evaluators interested in the case study approach within or outside of CTSAs.
Case Study and Success Case Methods Application in a CTSA Context (Clinical and Translational Science Awards)
Nancy J Bates, University of Illinois, Chicago, nbates@uic.edu
Timothy Johnson, University of Illinois, Chicago, 
CTSAs are a prime example of a complex ecology. Case study methodologies (Yin and Stake) and the Success Case Method (Brinkerhoff) are key approaches to exploring in depth and detail how the CTSA infrastructure supports improvement in clinical and translational research. In particular, these approaches offer opportunities to investigate the relationships necessary for researchers to achieve successful outcomes. Case studies are also flexible and responsive to the changing needs of the evaluation. This presentation provides examples from the University of Illinois at Chicago Center for Clinical and Translational Science (UIC CCTS) of how case studies are used to support tracking and evaluation activities. Examples will include case studies focusing on integration (cross-disciplinary), translation (research-to-application), and interaction (researchers with stakeholders) in which we look at research teams using multiple core services.
Appreciative Inquiry: An Assets-Based Strategy for Approaching CTSA Evaluation (Clinical and Translational Science Awards)
Jennifer Kusch, Medical College of Wisconsin, Milwaukee, jkusch@mcw.edu
The Clinical and Translational Science Institute of Southeast Wisconsin (CTSI-SEW) represents a unique partnership among nine universities and health care systems that optimizes infrastructure and resource utilization in order to improve health through translational science. There is limited information about how these disparate organizations and cultures become a functional unit. To address this gap, CTSI-SEW is framed through a "success-oriented" case study lens, which provides formative, real-time information about successful strategies to facilitate the growth and effectiveness of our partnerships. Appreciative Inquiry (AI) serves a complementary role to quantitative evaluation processes by illuminating key features associated with positive change through the use of semi-structured interviews with key CTSA users. This strategy yields information to stakeholders around such themes as the key features involved in forming successful collaborations across partners to leverage resources. AI provides a unique approach to understanding the progress in creating a partner-wide culture focused on effectiveness and interdependent relationships.
Using Success Case Studies in Evaluating Complex Research Infrastructure
Janice A Hogle, University of Wisconsin, Madison, jhogle@wisc.edu
Paul Moberg, University of Wisconsin, dpmoberg@wisc.edu
Christina Hower, University of Wisconsin, cspearman@wisc.edu
Bobbi Bradley, Marshfield Clinic Research Foundation, bradley.bobbi@mcrf.mfldclin.edu
The Success Case Study method (Brinkerhoff; Patton) provides an in-depth understanding of how CTSA investigators use multiple resources leading to recognized productivity. The case studies explore what it means to research teams to have an improved research infrastructure available. They complement other quantitative and qualitative data about research resource use by adding project-specific description and analysis of how clinical and translational research is being implemented and improved. Analysis of the case studies focuses on extracting themes critical for understanding how the CTSA has contributed to enhancing the scientific achievements and career advancement of our most promising investigators. Each case study involves at least one key informant interview with the lead researcher, along with review of documentation about the research, the investigator and the research team. We describe how we implement the approach, comment on cross-case analysis, and reference how the case studies are used at our CTSA.
Communicating Success Case Studies in Complex Organizations
Stuart Henderson, University of California, Davis, stuart.henderson@ucdmc.ucdavis.edu
Julie Rainwater, University of California, Davis, julie.rainwater@ucdmc.ucdavis.edu
Success case studies can be an effective way to identify underlying patterns, relationships, and dynamics that lead to program success or failure. Identifying successful cases is especially important in complex evaluations, such as Clinical and Translational Science Awards, where successes are definitionally complex and often do not develop in linear or predictable ways. For success case studies to be fully effective, however, evaluators need to find innovative ways to communicate them to diverse stakeholders, who may see them as simply anecdotes or unconnected “stories.” In this talk, we will discuss different approaches we have used at the UC Davis Clinical and Translational Science Center (CTSC) for presenting success case studies to CTSC leadership, advisory boards, and the broader campus community. The emphasis will be on exploring ways to increase success cases’ accessibility and impact.
Translational Forensics: Presenting a Protocol for Retrospective Translational Research Case Studies
Cath Kane, Cornell CTSC, cmk42@cornell.edu
Samantha Lobis, Cornell University, sjl332@cornell.edu
William Trochim, Cornell University, wmt1@cornell.edu
The Weill Cornell CTSC will present our protocol for retrospective case studies of successful translational research. Here, a case is defined as a drug, medical device, or surgical process/procedure that has already been translated into clinical practice. Key informants from the CTSC suggest these case examples, in addition to our use of FDA approval, Cochrane Reports, and Medicare/Medicaid approval as proxies for the end point of successful translation. Once the cases and end points have been identified, we will work retrospectively, using a kind of "translational forensics" via literature reviews and CV data mining, to determine a genesis point for the cases. This information is then used to identify key informants, who are interviewed to document the entire story of the research translation from the genesis of an idea to clinical application. The interview data are analyzed and collated, relevant durations are calculated, and the TR pathways are depicted visually.

In a 90 minute Roundtable session, the first rotation uses the first 45 minutes and the second rotation uses the last 45 minutes.
Roundtable Rotation I: Evaluating Programs for Sexual Minority Youth in Educational Settings: Navigating the Institutional Review Board (IRB)
Roundtable Presentation 926 to be held in 206 A on Saturday, Oct 27, 1:00 PM to 2:30 PM
Sponsored by the Lesbian, Gay, Bisexual, Transgender Issues TIG
Presenter(s):
Maralee Mayberry, University of South Florida, mayberry@usf.edu
Abstract: Evaluating the efficacy of high school Gay-Straight Alliances (GSAs), an increasingly familiar school intervention designed to provide support for sexual minority youth who struggle to overcome the feelings of difference and isolation they experience in school, is fraught with regulatory challenges. Researchers may unfortunately avoid conducting research with this population because of anticipated difficulties in obtaining IRB approval. Consequently, evaluations of such interventions are a glaring gap in the current literature. Those that do exist rely on small convenience samples of youth who feel “safe” requesting their parent’s consent to participate. To contribute to the body of knowledge on the efficacy of Gay-Straight Alliances, this roundtable will focus on the ethical considerations and constraints that face evaluation research on sexual minority youth and provide time to brainstorm strategies that could be employed to circumvent parental consent requirements while simultaneously advancing the IRB ethical principles of autonomy, beneficence, and justice.
Roundtable Rotation II: Is Your Organization Really LGBT Friendly? Building Organizational LGBT Cultural Competency
Roundtable Presentation 926 to be held in 206 A on Saturday, Oct 27, 1:00 PM to 2:30 PM
Sponsored by the Lesbian, Gay, Bisexual, Transgender Issues TIG
Presenter(s):
Adam Viera, Harm Reduction Coalition, viera@harmreduction.org
Abstract: Several years ago, our organization was funded to work with community-based organizations in New York to build their capacity to provide health and social services to LGBTQ individuals and communities. From this experience, our organization has begun to develop a complex and multi-layered methodology for assessing organizational capacity to serve LGBTQ individuals and communities, and for evaluating how well training and technical assistance serve to increase this capacity. In this roundtable, we will present a brief synopsis of our own capacity building activities and the efforts we have undertaken to evaluate the efficacy of these activities. We will encourage group discussion around how best to improve these efforts as well as on how attendees can apply our framework to their own efforts to monitor and build their organizational competency to serve LGBTQ individuals and communities.

In a 90 minute Roundtable session, the first rotation uses the first 45 minutes and the second rotation uses the last 45 minutes.
Roundtable Rotation I: Evaluating Test Fairness in High-Stakes Decision-Making Testing Programs: A Stakeholder Viewpoint
Roundtable Presentation 927 to be held in 206 B on Saturday, Oct 27, 1:00 PM to 2:30 PM
Sponsored by the Collaborative, Participatory & Empowerment Evaluation TIG
Presenter(s):
Xiaomei Song, Queen's University, 0xs@queensu.ca
Abstract: Concerns about fairness among test stakeholders are paramount in all kinds of programs which often use high-stakes decision-making tests as a means to classify, select, and judge individuals (Shohamy, 2001). Although test stakeholders such as test takers and test users are directly affected by or involved in program activities, their voices tend to go unheard, and evaluations of test fairness mostly focus on statistical analyses at the group level (Zvoch & Stevens, 2008). Based on the development of evaluation theories and practices (Freeman, 1989, 2004; Greene, 1988; Phillips, 1997), this roundtable discussion recommends that evaluation of test fairness be complemented by examining stakeholder perspectives. Integrating the perspectives of multiple stakeholders and justifying stakeholder groups in high-stakes decision-making testing programs is highlighted. Stakeholder involvement has important implications for the evaluation of test fairness, especially given a recent worldwide trend towards using testing and assessment for accountability and educational reform.
Roundtable Rotation II: Head Start for At-Risk Children: A Collaborative Evaluation
Roundtable Presentation 927 to be held in 206 B on Saturday, Oct 27, 1:00 PM to 2:30 PM
Sponsored by the Collaborative, Participatory & Empowerment Evaluation TIG
Presenter(s):
Debra Thrower, Saint Leo University, dthrower@tampabay.rr.com
Liliana Rodriguez-Campos, University of South Florida, liliana@usf.edu
Abstract: Communities of faith characteristically have an ongoing need to hire program evaluators. This enhances opportunities for faith-based organizations whose mission includes early childhood education for families who are either homeless or at risk of becoming homeless. While an internal faith-based program evaluator is beneficial, a combination of both internal and external evaluators would typically result in wider community engagement. This increases the likelihood of leveraging the community, financial and human resources necessary to engage this high-risk population within our society. In this roundtable discussion, the presenters will address the use of the Model for Collaborative Evaluations (MCE) to understand the importance of a collaborative evaluation and demonstrate its application. This model can provide someone using a collaborative approach with guidelines for how to accomplish sound evaluation in support of young children and their families. The authors will also present the intended benefits of combining community-wide engagement and effective collaboration.

Session Title: Advancing a Community of Practice Among Science, Technology, Engineering and Math (STEM) Evaluators: Developing and Bridging Formal Networks
Think Tank Session 928 to be held in 207 A on Saturday, Oct 27, 1:00 PM to 2:30 PM
Sponsored by the Pre-K - 12 Educational Evaluation TIG
Presenter(s):
Alyssa Na'im, Education Development Center, anaim@edc.org
Discussant(s):
Jack Mills, Independent Consultant, jackmillsphd@aol.com
Kimberle Kelly, University of Wisconsin, Madison, akimkelly@gmail.com
Jennifer Nielsen, Manhattan Strategy Group, jnielsen@manhattanstrategy.com
Sam Held, Oak Ridge Institute for Science and Education, sam.held@orau.org
Abstract: This session will explore participants’ interest in and capacity for bridging and developing formal networks to cultivate a Community of Practice for science, technology, engineering and math (STEM) education evaluators. The discussion will continue previous exchanges around sharing information and resources, best practices, and lessons learned in conducting evaluations of STEM education programs. Through facilitated discussion, participants will share successful approaches to evaluation capacity building and professional community building, discuss the development of an AEA TIG focused on STEM education evaluation, and identify ways to expand these opportunities to the broader STEM education field, including practitioners and policymakers. The discussion will also attend to the conference theme by identifying the complex ecologies of STEM education and the evaluator’s role in navigating these circumstances and environments (e.g., responding to various funding agencies and priorities, recognizing differential needs across the STEM education continuum, interpreting project-level contributions to program- or system-level initiatives).

Session Title: Using Evaluation to Improve Integration of Mental Health and Chaplaincy Services for US Veterans and Active Duty Personnel
Panel Session 929 to be held in 207 B on Saturday, Oct 27, 1:00 PM to 2:30 PM
Sponsored by the Alcohol, Drug Abuse, and Mental Health TIG
Chair(s):
Mark DeKraai, University of Nebraska, mdekraai@nebraska.edu
Abstract: The Department of Veterans Affairs (VA) and the Department of Defense (DoD) initiated a program evaluation to better understand the potential for improved mental health/chaplaincy integration across the continuum of services available to Active Duty Service Members and Veterans. This panel addresses four aspects of the evaluation: 1) the origination of the policy issue around mental health and chaplaincy and the use of work teams to frame the issues, 2) implementation of a survey of VA and DoD Chaplains to answer key questions related to chaplaincy and mental health, 3) implementation of a gap analysis to determine how mental health professionals and chaplains currently work together, the potential for maximizing mental health/chaplaincy integration, and strategies to move to model practices, and 4) how evaluation results are used to inform public policy across very large organizations (VA & DoD). Time for questions will follow the formal presentations.
Using Cross-Agency Subject Matter Experts and Work Teams to Frame the Issues for Evaluating Chaplaincy/Mental Health Integration
Jason Nieuwsma, Duke University Medical Center, jason.nieuwsma@duke.edu
This presentation provides background on the chaplaincy/mental health initiative, including how the issue became prominent in the VA and the DoD, the impetus for creating work groups to study the issue, the identification of subject matter experts with knowledge and interest in the topic, the unique collaboration across two large federal agencies, the interest in gathering additional information to inform policy decisions, and the use of work teams to frame the issues for the evaluation. The presentation includes lessons learned in conducting a program evaluation across two federal agencies and across two distinct disciplines: chaplaincy and mental health. Dr. Jason Nieuwsma is a clinical psychologist, an assistant professor at Duke University, and the Associate Director of the VA Mental Health/Chaplaincy Program. Dr. Nieuwsma was instrumental in directing the initiative and managing the evaluation contracts for the Durham VA Mental Illness Research, Education and Clinical Center.
National Survey of Chaplains Regarding Mental Health Interactions and Beliefs
M Becky Lane, RTI International, blane@rti.org
This presentation provides a summary of a nationwide survey of VA and DoD Chaplains working in a variety of settings, including hospitals, clinics, family life centers, and operational military units. The presentation addresses the survey design, including the development of items to answer the key questions posed by the work teams and subject matter experts. We provide a summary of the results and lessons learned in conducting survey research across complex ecologies. Bill Cantrell is Chaplain for Education and Research with the VA and has been instrumental in managing the Mental Health and Chaplaincy Program, including the implementation of the Chaplain survey. Chaplain Cantrell has a broad range of experience as a parish priest, Navy chaplain, and President/CEO of St. Jude’s Ranch for Children, and most recently has done extensive work at Naval Medical Center San Diego with PTSD patients in cooperation with Mental Health Services.
Gap Analysis to Assess Potential for Enhanced Mental Health/Chaplaincy Integration
Denise Bulling, University of Nebraska, dbulling@nebraska.edu
Mark DeKraai, University of Nebraska, mdekraai@nebraska.edu
This presentation provides an overview of visits to over 30 VA and DoD sites across the country and interviews with over 300 chaplains, mental health professionals, social workers, substance abuse counselors, and other stakeholders who work together in complex healthcare organizations. The presentation highlights the challenges of conducting this type of in-depth inquiry across multiple systems, each with its own culture. We discuss the evaluation design and methods, how this component fit into the overall evaluation framework, and lessons learned in conducting evaluations involving complex ecologies, and we provide an overview of evaluation results. Dr. Bulling is a Senior Research Director at the University of Nebraska Public Policy Center. She has over 25 years of experience in mental health service delivery, policy development, and evaluation. She co-directed the gap analysis, including evaluation design, instrument development, analysis, and reporting of results.
What’s Next? The Use of Evaluation Results to Inform Practice and Policy Change in Mental Health and Chaplaincy
Jason Nieuwsma, Duke University Medical Center, jason.nieuwsma@duke.edu
This presentation focuses on the results of the evaluation components and how they were received by work teams and administration within the Department of Veterans Affairs and the Department of Defense. It shares lessons learned about the value of quality evaluation results and the use of data to inform changes in federal policies and practices across multiple agencies. Dr. Jason Nieuwsma is a clinical psychologist, an assistant professor at Duke University, and the Associate Director of the VA Mental Health/Chaplaincy Program. Dr. Nieuwsma was instrumental in directing the initiative and managing the evaluation contracts for the Durham VA Mental Illness Research, Education and Clinical Center.

Session Title: Systems in Evaluation: Discovering the Practical and Useful in a Complex World
Panel Session 930 to be held in 208 C/D on Saturday, Oct 27, 1:00 PM to 2:30 PM
Sponsored by the Presidential Strand and the Systems in Evaluation TIG
Chair(s):
Janice Noga, Pathfinder Evaluation and Consulting, jan.noga@pathfinderevaluation.com
Discussant(s):
Jonathan Morell, Fulcrum Corporation, jmorell@fulcrum-corp.com
Marah Moore, i2i Institute, marah@i2i-institute.com
Abstract: Knowledge – what we know and how we know it – is at the heart of the programs we evaluate. Yet, these programs are often formally complex, complete with multiple causal paths, emergence, and all the other phenomena that make simple prediction of outcomes and impact impossible. What are we as evaluators to do with our comfortable theories, methodologies, and techniques when faced with complex situations for which they are inadequate? In this session, a panel of evaluation experts representing a diverse range of perspectives on systems thinking will engage in a moderated conversation about the use of systems perspectives in evaluating complex situations. The ultimate goal of this session is to expand evaluators’ thinking about systems from a focus on the “how to” to a deeper consideration of issues related to the intersections of knowledge, relevance, and utility of a systems orientation in evaluative thinking about complex ecologies.
Applying an Ecological Systems Lens to Evaluation
Meg Hargreaves, Mathematica Policy Research, mhargreaves@mathematica-mpr.com
The nested realities and multi-layered dynamics of complex ecologies present significant challenges to traditional evaluation methods. Program boundaries need to be reconsidered as program developers and evaluators consider how context influences program success. Planners and evaluators may also need to re-think the unit of analysis of system change initiatives, placing target populations rather than organizations at the center of their ecological models and conceptual frameworks. Evaluation funders and policy-makers also need to re-think the “gold standard” of evaluation design and consider when natural experiments and quasi-experimental designs are more appropriate than randomized control trials for interventions operating in unpredictable, dynamic environments. Applying this kind of ecological systems lens to evaluation can be a game changer; much like moving from a one-dimensional to a three-dimensional chessboard, it changes both the playing field and the rules.
The Complex Ecology of Knowledge Generation and Use: A Stronger Role for Evaluation
Mary McEathron, University of Minnesota, mceat001@umn.edu
Every program or initiative, no matter how large or small, operates at the confluence of the funder’s vision and resources, staff motivation and on-the-ground knowledge, and a research base that points in the direction of “what works”. However, each of these sectors (funders, project staff, researchers, and evaluators) often operates in semi-isolation, separated from the others by a complex layering of boundaries and relationships. Data are collected and reports are written, but the exchange occurs through a highly ritualized dance that often obscures the most relevant, change-inducing information. If knowledge – what we know and how we know it – is at the heart of social, educational, and environmental program planning, how can evaluators play a stronger role in ensuring a more complete and useful exchange between projects, funders, and researchers? How do we use systems approaches to examine boundaries, power, and perspectives in order to foster a more complete exchange of knowledge?
A Systems Orientation for Culturally and Contextually Responsive Evaluation
Veronica Thomas, Howard University, vthomas@howard.edu
When evaluators bring systems thinking into the arena of culturally and contextually responsive evaluation, they bring a lens that examines the intersections of environment and culture that are important aspects of a complex ecology. A systems orientation for a cultural/contextual responsiveness evaluation approach has the power to yield added value by facilitating evaluators' ability to observe and explain program effects, particularly variation among different populations and subpopulations. The added value in an evaluator's spending time uncovering underlying, and often invisible, behavioral archetypes, system traps, and opportunities can influence design, implementation, and outcomes of projects serving marginalized communities. In the end, using a systems lens has the potential to advance our efforts from models that emphasize individualized deficit thinking to those that emphasize systems perspectives for problem solving.
Evaluation and the Fundamental Pursuit of Wisdom
Matt Keene, US Environmental Protection Agency, keene.matt@epa.gov
"Where is the life we have lost in living? Where is the wisdom we have lost in knowledge? Where is the knowledge we have lost in information?" T.S. Eliot. To best serve the world, wisdom must be evaluation’s fundamental pursuit. We would be more likely to think from a systems paradigm and contribute to the larger systems knowledge base if evaluation better understood its relationship with the problems it intends to address and the disciplines necessary to do so. Convergent problems typically involve the non-living world, are solvable, and are well suited to instructional disciplines that emphasize system function, experimentation and manipulation. In life, and (often) evaluation, we are far more likely to encounter divergent (wicked) problems – tangled in living systems, lacking a discrete solution, and in need of descriptive disciplines that work with what can be experienced and can grapple with the invisibles of life, values, and spirituality.
The Power of Patterns
Beverly Parsons, InSites, bparsons@insites.org
A systems orientation to evaluation emphasizes looking for patterns in data generated within complex ecologies. Patterns are determined by similarities, differences, and relationships among agents across time and space. A focus on seeing and understanding patterns provides the basis for influencing systems within complex ecologies. Ways for looking at patterns vary depending on whether you as an evaluator are working in a situation where a summative, developmental, scalability, or sustainability evaluation is desired or appropriate. Yet the underlying idea is that patterns can be influenced by changing the differences among agents, their relationships with one another, and/or which agents are involved in forming the patterns. Keeping focused on seeing, understanding, and influencing patterns helps an evaluator make choices about data collection, analyses, and interpretation as well as stakeholder involvement. It also helps evaluation users move in their desired direction through complex conditions.

In a 90 minute Roundtable session, the first rotation uses the first 45 minutes and the second rotation uses the last 45 minutes.
Roundtable Rotation I: Validating Self-report Change Among Health Care Providers Attending Educational Programs: How Do We Know that Change Reported Actually Occurs In Practice?
Roundtable Presentation 931 to be held in 209 A on Saturday, Oct 27, 1:00 PM to 2:30 PM
Sponsored by the Health Evaluation TIG
Presenter(s):
Beth-Anne Jacob, University of Illinois, Chicago, bethanne@uic.edu
Suzanne Carlberg-Racich, University of Illinois, Chicago, scarlb1@uic.edu
Abstract: Our AIDS Education and Training Center provides approximately 450 skills-building training programs annually to health care providers. The evaluation team at our Center has spent the past 7 years developing a brief web-based instrument that assesses practice change among providers attending our trainings. Although data from this project are abundant and encouraging, we continue to ask ourselves about the degree to which we can trust self-report data and about ways to approach validating our methods and instrument. The questions for the proposed idea exchange are: 1. Can a web-based self-report survey method provide accurate, consistent, and economical evidence about practice changes by medical care professionals? a. What types of practice and clinical care systems changes can this method identify? b. How does this information compare to evidence obtained by chart review and structured case-based interviews in terms of content and economy of measurement?
Roundtable Rotation II: Clinical Observations to Assess Quality of Care: Challenges and Solutions in Data Collection and Synthesis from a Maternal Health Evaluation in Madagascar
Roundtable Presentation 931 to be held in 209 A on Saturday, Oct 27, 1:00 PM to 2:30 PM
Sponsored by the Health Evaluation TIG
Presenter(s):
Eva Bazant, Jhpiego, Johns Hopkins University, ebazant@jhpiego.net
Vandana Tripathi, Johns Hopkins University, vtripath@jhsph.edu
Abstract: Use of clinical observations of health care provision, globally and in the US, will be actively discussed at this roundtable. To explore challenges and solutions, we describe an example from Madagascar of maternal health care observations. Challenges include the time needed for ethical approvals, lengthy observation checklists adapted from approved service delivery guidelines, and observer fatigue. Whether to code an item that was not observed as “No” or “Not applicable” is sometimes unclear to observers. Analytical challenges include determining how to create summary scores, setting a threshold for acceptable levels of quality, and handling inconsistent responses. Use of smart phones allows for rapid data collection but has its own challenges. Solutions, to which attendees can add, include anticipating the need for ethical approvals and analysis; training observers well to promote inter-rater reliability and to master use of smart phones; using structured methods to highlight key tasks; and triangulating data from several sources.

In a 90 minute Roundtable session, the first rotation uses the first 45 minutes and the second rotation uses the last 45 minutes.
Roundtable Rotation I: Evaluating School District Leadership and Support
Roundtable Presentation 932 to be held in 209 B on Saturday, Oct 27, 1:00 PM to 2:30 PM
Sponsored by the Pre-K - 12 Educational Evaluation TIG
Presenter(s):
Karen Childs, University of South Florida, kchilds2@usf.edu
Jose Castillo, University of South Florida, jmcastil@usf.edu
Kevin Stockslager, University of South Florida, kstocksl@usf.edu
Abstract: This roundtable will pose the question, “How can we best evaluate school district leadership practices?” Nationally, there has been a growing emphasis on accountability and evaluation of school based staff including teachers, administrators and support personnel. These evaluations are often based on a combination of student outcomes and professional practices (as evidenced by direct observation and product review). Standards of practice for instructional personnel are relatively plentiful when compared to the availability of models against which to evaluate the practices and performance of district staff. Florida’s School District Survey for Implementation of Multi-Tiered Systems of Support (MTSS) will be described. Join us in an exploration of this and other exemplary models and tools for evaluating the effectiveness of district level educational systems.
Roundtable Rotation II: Evaluating K-12 Foreign Language Enrollment: Effects on Standardized Tests and Teacher Attitudes
Roundtable Presentation 932 to be held in 209 B on Saturday, Oct 27, 1:00 PM to 2:30 PM
Sponsored by the Pre-K - 12 Educational Evaluation TIG
Presenter(s):
Michelle Bakerson, Indiana University South Bend, mmbakerson@yahoo.com
Abstract: Evaluators are often contracted by school districts or organizations receiving grants to develop and facilitate programs that benefit the school or organization. One such district in Northern Indiana provides foreign language instruction to its ninth- through twelfth-grade students. This collaborative evaluation was conducted to determine the effect of this instruction on standardized test achievement, and it also examined teacher attitudes and perceptions toward foreign language learning. The evaluation was designed to be a learning tool for facilitating the improvement of foreign language learning and its importance in these schools. Accordingly, a collaborative evaluation approach was used to actively engage the school and the teachers throughout the process. The steps, advantages, and obstacles of this evaluation will be discussed.

In a 90 minute Roundtable session, the first rotation uses the first 45 minutes and the second rotation uses the last 45 minutes.
Roundtable Rotation I: Evaluation of Democracy and Human Rights Assistance Programs in Insecure and Transitional Environments: A Participatory Dialogue to Identify Best Practices
Roundtable Presentation 933 to be held in 210 A on Saturday, Oct 27, 1:00 PM to 2:30 PM
Sponsored by the International and Cross-cultural Evaluation TIG
Presenter(s):
Karen Chen, US Department of State, chenky@state.gov
Abstract: The Bureau of Democracy, Human Rights and Labor at the U.S. Department of State administers a significant number of programs overseas that operate in restrictive or closed environments, including in countries with nascent national government institutions and civil societies, limited or no U.S. Government presence, and prohibitive security restrictions. After the events of the 2011 “Arab Spring,” assistance in transitional political environments has become a higher policy priority; however, given the scarcity of data and complexity of the social and political changes in these contexts, these programs are very difficult to monitor and evaluate. This roundtable discussion will address best practices in evaluating international democracy and human rights assistance, capturing learning from participants’ experience developing strategies to measure impact in insecure and transitional sociopolitical environments as well as the difficulty of measuring medium- and long-term processes of social change, particularly when programmatic cycles occur in 1 to 3 year increments.
Roundtable Rotation II: Aid Effectiveness From Words To Action: Filling the Gap Between Donor-Funded Evaluation Capacity Development (ECD) Programs and Local National Priorities in Developing Countries
Roundtable Presentation 933 to be held in 210 A on Saturday, Oct 27, 1:00 PM to 2:30 PM
Sponsored by the International and Cross-cultural Evaluation TIG
Presenter(s):
Michele Tarsilla, Western Michigan University, mitarsi@hotmail.com
Abstract: Recent evaluation reports attest that donor-funded programs aimed at enhancing local evaluation capacity have not produced the results one would expect. In response to the weaknesses of Evaluation Capacity Development (ECD) programs implemented in developing countries, this paper contributes a new paradigm for framing some of the most relevant issues in the current discourse on ECD (e.g., the lack of clear and agreed-upon theoretical constructs and practical definitions). In so doing, the paper builds on the findings of an evaluation capacity needs assessment conducted among national evaluation associations, academia, and civil society representatives in three sub-Saharan African countries (the Democratic Republic of Congo, Niger, and South Africa). Serving as the basis for an open dialogue on ECD, the findings will be validated and informed by the opinions expressed by three ad hoc advisory panels over the course of the research.

Roundtable: Caught in the Middle of Measurement Versus Marketing
Roundtable Presentation 934 to be held in 210 B on Saturday, Oct 27, 1:00 PM to 2:30 PM
Sponsored by the Evaluation Use TIG
Presenter(s):
Joelle Cook, Organizational Research Services, jcook@organizationalresearch.com
Abstract: Particularly in these tough economic times, organizations are looking for ways to market their work and fundraise to support their efforts. Evaluation can be an obvious source of information for these purposes. How do evaluators balance the tension between a desire for marketable “stories” and fidelity to systematic qualitative data? What do we do when results are mixed but clients promote only the positive parts? This roundtable will share examples of these situations from diverse clients: an international aid organization whose evaluation manager sought to include video clips of aid recipients; an early learning home visiting program in the midst of a major fundraising campaign; and a communication campaign aimed at parents of vulnerable children whose board members were skeptical of its value. We will describe the issues that emerged and how they were resolved, and we will invite discussion about how other evaluators have wrestled with similar issues.

Session Title: Mixed-Method Strategies in Higher Education Assessment
Multipaper Session 935 to be held in 211 A on Saturday, Oct 27, 1:00 PM to 2:30 PM
Sponsored by the Assessment in Higher Education TIG
Chair(s):
Audrey Rorrer,  University of North Carolina, Charlotte, audrey.rorrer@uncc.edu
Discussant(s):
Mende Davis,  University of Arizona, mfd@email.arizona.edu
Capturing Relationships: Piecing Together an Ecological Program Model
Presenter(s):
Dorothy Pinto, University of Alberta, dorothy.pinto@ualberta.ca
Jason Daniels, University of Alberta, jason.daniels@ualberta.ca
Bernadette Martin, University of Alberta, berni.martin@ualberta.ca
Abstract: This paper presents evaluator insights into an ongoing three-year evaluation of a distributed learning (DL) model for the delivery of a Masters program in Physical Therapy (PT) at the University of Alberta. The DL model uses synchronous video-conferencing technology to link the main campus to a rural satellite site. This program is unique in PT education in Canada, and its outcomes have implications for program scale-up and transferability. Our approach is informed by Bronfenbrenner’s ecological model (Bronfenbrenner, 1998), which highlights individuals’ relationships within and with their environments. In this case, we considered relationships between the evaluator and participants, among participants, and between participants and their environments over time: the program, university, rural community, province, and PT profession. The evaluators used mixed methods, triangulation, and member checking with participants and key informants to piece together a comprehensive understanding of the program. Implications for evaluator role and methodological considerations will be discussed.
Building Relationships, Sharing Responsibilities, and Making it Relevant: Implementing and Evaluating an Alternative Teacher Certification Residency Program for STEM Disciplines in Kentucky’s High Need High Schools
Presenter(s):
Kimberly Cowley, Edvantia Inc, kim.cowley@edvantia.org
Chandra O'Connor, Edvantia Inc, chandra.o'connor@edvantia.org
Martha Day, Western Kentucky University, martha.day@wku.edu
Abstract: The Graduate Southern Kentucky Teach (GSKyTeach) alternative certification residency program at Western Kentucky University prepares and places new mathematics, chemistry, and physics teachers in high-need high schools in Jefferson County Public Schools (JCPS) in Louisville, Kentucky. Through partnerships with JCPS, the Commonwealth Center for Parent Leadership, and the Kentucky Education Professional Standards Board, the one-year residency experience enables Graduate Resident Interns (GRIs) to work alongside experienced teacher mentors while pursuing graduate studies. Program completion provides teacher certification and a Master of Arts in Education. Edvantia is conducting a quasi-experimental, mixed-method evaluation to share formative feedback for program improvement and to assess program effectiveness. This presentation will focus on the intersection of the external evaluation with partner relationships and responsibilities; the effort to ensure the evaluation is responsive, transparent, and actionable by project staff; and how to ensure that subsequent findings and interpretations are meaningful and relevant given the ecological contexts.
The Importance of a Mixed-Method and Multi-Sample Approach in Higher Education Assessment
Presenter(s):
Brittany Daulton, University of Tennessee, Knoxville, bdaulton@utk.edu
Jennifer Morrow, University of Tennessee, jamorrow@utk.edu
Ruth Darling, University of Tennessee, Knoxville, rdarling@utk.edu
Abstract: When assessing university programs, we often work with over-surveyed student populations that are reluctant to respond. Some would therefore argue that evaluations in higher education raise concerns regarding adequate sample size, validity, and generalizability (Farmer & Napieralski, 1997). To address these concerns, mixed-methods and multi-sample evaluation designs can be used (Thomas, Lightcap, & Rosencranz, 2005). A mixed-methods and multi-sample approach allows evaluators to use the multiple resources available in higher education institutions to produce more valid conclusions and meaningful results (Nelson, 2010). Through triangulation we can use many resources and various types of data to reduce uncertainty in results and gain additional perspectives (Thomas et al., 2005). This presentation will address sampling and data collection issues in higher education assessment and provide strategies for the use of multiple samples and mixed-method designs in evaluation.

Session Title: Making Evaluation Relevant: Experiences Using Collaborative Evaluation for the National YMCA Higher Education Service Project
Panel Session 936 to be held in 211 B on Saturday, Oct 27, 1:00 PM to 2:30 PM
Sponsored by the Collaborative, Participatory & Empowerment Evaluation TIG
Chair(s):
Rita O'Sullivan, University of North Carolina, Chapel Hill, ritao@unc.edu
Discussant(s):
Rita O'Sullivan, University of North Carolina, Chapel Hill, ritao@unc.edu
Abstract: This panel will present information about the use of collaborative evaluation in a national YMCA college access context. Various aspects will be presented from the external evaluator, program staff, and meta-evaluation perspectives. Presenters will share the design of a two-year collaborative evaluation conducted for YMCA USA. Using a collaborative evaluation approach, the evaluation team worked with 60 sites across the country in year two to help them evaluate their “Higher Education Service Projects.” Results from a survey of local program staff on the use of this approach will be shared. Next, YMCA program staff will share reflections on how the collaborative evaluation design was introduced and received by the national organization over the past two years. Finally, this specific evaluation context will be compared with broader collaborative evaluation efforts.
The Nuts and Bolts of a Multisite Collaborative Evaluation
Whitney Rouse, Eastern Carolina University, rousew10@students.ecu.edu
Elaine Lo, University of North Carolina, Chapel Hill, elainejlo@gmail.com
The Higher Education Service Project began in Fall 2010 to help YMCAs expand their higher education programming. Sites could work in the areas of college preparation (grades K-8), college access and transition programs (grades 9-12), non-traditional students, or some combination of these. Forty sites received funding and were required to develop evaluation plans for their programs. The YMCA decided to work with Evaluation, Assessment, and Policy Connections (EvAP) at the University of North Carolina, in concert with the AEA Graduate Education Diversity Internship Program, as the external evaluator for this project. Together with the YMCA USA program staff, EvAP designed a collaborative evaluation plan to support the program. This presentation will share the collaborative evaluation efforts of the second year of the project.
Voices from the Field: Program Staff Speak Out about Evaluation
Johnavae Campbell, University of North Carolina, Chapel Hill, johnavae@email.unc.edu
Similar to other not-for-profit agencies, the YMCA is eager to build capacity within their local organizations. The implementation of collaborative evaluation design as a way to engage stakeholders and build capacity was the primary driver behind the decision of national directors to use the collaborative evaluation approach. Using data from a recent survey distributed to the local YMCA program staff involved in the Higher Education Service Project, this work begins to examine how the implementation of a collaborative evaluation approach influenced local YMCA organizations.
A National Program Perspective on Collaborative Evaluation Strategies
Marcia Weston, National YMCA of the USA, marcia.weston@ymca.net
Heather Garcia, National YMCA of the USA, heather.garcia@ymca.net
Faced with the need to implement a new program, YMCA USA decided to work with Evaluation, Assessment, and Policy Connections (EvAP) at the University of North Carolina, in concert with the AEA Graduate Education Diversity Internship Program, as the external evaluator for this project. Because the program staff were very interested in enhancing the evaluation capacity of their grantees as well as gathering process and outcome evidence for the funders of these programs, they requested a collaborative approach to the evaluations. YMCA program staff were integrally engaged in the design and implementation of the evaluation, which included development of logic models, identification of common evaluation instruments, evaluation webinars, evaluation fairs, and cross-site accomplishment summaries. Beyond that, the program evaluation introduced collaborative evaluation to the broader YMCA USA organization. This presentation will share the evaluation implementation from the program staff perspective.
Elaine Lo, University of North Carolina, Chapel Hill, elainejlo@gmail.com

Session Title: Taking People Seriously: Strategies for Understanding, Measuring, and Leveraging Participant Subjectivity
Panel Session 938 to be held in 212 A on Saturday, Oct 27, 1:00 PM to 2:30 PM
Sponsored by the Mixed Methods Evaluation TIG
Chair(s):
Ricardo Gomez, University of Massachusetts, Amherst, rgomezye@educ.umass.edu
Discussant(s):
Elena Polush, Ball State University, eypolush@bsu.edu
Abstract: Subjectivity (people's attitudes, emotions, or viewpoints toward an issue) is an essential attribute of evaluation. First, evaluations must take into account what matters most to the ‘relevant’ people, what is felt and experienced. Second, all self-reported data (surveys, interviews, etc.) are intrinsically subjective. Subjectivity is also critical when planning an evaluation, as we need to identify appropriate criteria and determine success against those criteria. This panel provides several perspectives on how evaluative inquiry can better understand, measure, and leverage participant subjectivity. We will discuss a variety of strategies for improving evaluation through accounts of subjectivity, including several strategies (or methodologies) for engaging individuals with the goal of understanding subjectivity across populations: Q methodology, a phenomenological approach to latent class analysis, and projective interview techniques. We will also describe two participatory methods that facilitate group decision-making: Group Priority Sort and Delphi.
Understanding the Subjective: Eliciting Hidden Meaning
David Roberts, RobertsBrown, Australia, david@robertsbrown.com
Understanding the subjective is the key to understanding people’s interactions within a program. However, the subjective tends to be ‘hidden’ especially in the ‘objective’ environment of evaluation interviews. David Roberts has been using projective techniques to reveal the ‘hidden’ subjective meanings for more than 20 years. This paper presents research that explores how an understanding of participants’ cognitive processes might assist us to assess the validity of interview techniques. Extensive debriefing of people who had participated in a photo-elicitation exercise showed that their responses and answers were derived from abstract generalizations (or ‘typifications’) and developed afterwards by rationalization. The participants used such typifications to help them to think about the subject of the activity. Identifying the participants’ typifications revealed information about their attitudes towards the topic. Such attitudes are not a predictor of behavior but may influence, and hence help to explain, their interactions.
Experiential Segmentation: A Phenomenological Approach to the Study of Participant Subjectivity
James Heichelbech, HealthCare Research Inc, jheichelbech@healthcareresearch.com
One of the challenges with designing evaluative inquiry in ways that incorporate subjectivity is that the standard tools of qualitative and quantitative investigation do not always lend themselves to coordinated, cooperative learning. While qualitative methods invariably produce particular cases, they struggle to satisfy the need for a more general account. While quantitative methods more or less succeed in providing “projectable” results, they often lack relevance. Adopting a phenomenological approach to both qualitative and quantitative methods allows each to contribute to a more meaningful synthesis. The basic idea of a phenomenological study is explained and development of a survey instrument suitable for latent class analysis is described as an example of such synthesis. James Heichelbech is a philosopher specializing in applied qualitative research. He is primarily concerned with demonstrating the practical value of qualitative results within the broader context of evaluative inquiry.
Opinions Count...So Get Them Right: Using Q Methodology to Inform Program Evaluation and Planning
Ricardo Gomez, University of Massachusetts, Amherst, rgomezye@educ.umass.edu
Evaluation participants are often called upon to attest to the quality of programs or interventions, and their opinions are used to make decisions about program execution, scalability, and performance. However, the methods for studying people's opinions are either taken lightly or remain time-consuming and intrusive, usually limited to interviews or surveys. As a result, people's perspectives are lost in percentages, charts, and numbers, or in obscure interpretations and pre-specified categories deemed relevant only by the evaluator. The author will present the steps of designing a Q study, demonstrating how it can be used to reveal perspectives that other techniques may overlook. In a Q study, participants examine a set of statements and then arrange them according to a forced normal distribution from “agree the most” to “disagree the most.” This ranking exercise is followed by inferential statistical procedures that reveal people’s attitudes and perspectives about an issue.
Leveraging Subjectivity for Grounded Decision-Making: The Group Priority Sort Method
Melissa McGuire, Cathexis Consulting Inc, melissa@cathexisconsulting.ca
This presentation will introduce a hands-on consultation method, Group Priority Sort, which leverages stakeholder subjectivity to inform decisions when there is no objective “right” answer. This face-to-face process generates stakeholder buy-in, builds consensus among stakeholders, and fosters a stronger sense of community. The outputs are similar to those of the Delphi method: comparative ranking scores, rich qualitative data, and engaged participants. Melissa McGuire has used this method successfully in her evaluation practice in a number of topic areas, including healthcare, education, and economic development, to define the scope of an evaluation that has multiple stakeholders (i.e., select indicators), to prioritize strategic planning goals, and to define a complex concept drawing upon the insights of experts. The Priority Sort method is a valuable addition to any evaluator’s tool kit.
Whose Values? Who Decides What? Delphi Methodology Structuring Group Communication Process
Elena Polush, Ball State University, eypolush@bsu.edu
Evaluation is intentional, complex, and multilateral. It brings to the table a diverse group of stakeholders with different perspectives, i.e., legitimate ways of looking (what), understanding (how), and justifying (why). Perspectives are not provable; they represent one’s system of beliefs, values, and assumptions, i.e., worldview. Perspectives are inherently subjective. Who should be involved? Whose values should become relevant? Whose views guide the evaluation? How does one consider the perspectives of different stakeholders within the particularity of an ‘evaluand’ situation? Delphi, as a methodology for structuring a diverse group communication process by engaging members (stakeholders) in sharing their positions (opinions), prioritizing, and building consensus, has prominence in evaluation centered on ‘subjective’ inputs. The paper describes Delphi as a technique and reports on its use in three evaluation study settings: (1) an action-based needs assessment; (2) program theory development within a multi-methods evaluation; and (3) bridging different stakeholders’ needs and capabilities in a mixed-methods evaluation.

Session Title: Meeting the Needs of Vulnerable Populations: Insight Into Evaluation Methods and Providing Evidence for Policymaking
Multipaper Session 939 to be held in 212 B on Saturday, Oct 27, 1:00 PM to 2:30 PM
Sponsored by the Disabilities and Other Vulnerable Populations
Chair(s):
Diana Rutherford, FHI 360, drutherford@fhi360.org
Abstract: With mixed methods and rich data from vulnerable populations across three regions, this group of researchers addresses the challenges of working with vulnerable populations in difficult contexts to learn, evaluate, and provide an evidence base for decision-making. From Zambia, evaluators learn that the most vulnerable orphans and children are 5-11 year old girls. In the Philippines, we learn how the children of seaweed farmers and weavers spend their time, and the effect of an economic strengthening intervention on child well-being outcomes. In India, evaluators examine changes in STI/HIV prevalence among female sex workers and their clients. Each evaluator will present his/her evaluation methods and findings in context, describing challenges and ethical issues in conducting evaluations with vulnerable populations, how they were addressed, the tools and procedures that grew out of the context, and how the findings provide impetus for policy and/or future intervention programming.
Whom We’re Missing Among the Highly Vulnerable: Evidence from a Post-test Quasi-Experimental Evaluation
Gina Etheredge, FHI 360, getheredge@fhi360.org
Nancy Scott, Boston University, nscott@bu.ed
Katherine Semrau, Boston University, ksemrau@bu.edu
Orphans and Vulnerable Children (OVC) are often regarded as the quintessential vulnerable population. Most programs targeting them are not gender-specific, yet there is increasing evidence that results vary within this population. The Longitudinal Orphans and Vulnerable Children’s Study in Zambia (conducted by Boston University, funded by USAID) provides evidence that 5-11 year old (y.o.) girls fare worse than their male counterparts of the same age and worse than their female counterparts across all ages. We discuss the challenges of possibly inaccurate reports from parents/guardians, our inability to collect qualitative data from this young group, and small sample sizes for sub-group analyses. Educational, nutritional, and psychosocial measures suggest that 5-11 y.o. girls are highly vulnerable and potentially marginalized in the household. Understanding the plight of these girls is critical for effective programming and policy, particularly since they are of age for increased HIV risk behaviors and are susceptible to abuse.
Evaluation of a Large-scale HIV Prevention Program Among Vulnerable Populations in India
Rajat Adhikary, FHI 360, radhikary@fhi360.org
Avahan, a large-scale HIV-prevention initiative launched in 2003 in India, aims to slow down the transmission of HIV in the general population by raising the coverage of prevention interventions in high-risk groups. Two rounds of Integrated Behavioral and Biological Assessment (IBBA) were conducted among high-risk population groups across six states in 2006-07 and 2009-10. Each round covered a total of 25,000 participants. Focusing on female sex workers (FSWs) we discuss the challenges faced including working with government agencies and private research firms, sampling clients of FSWs, the logistics of collecting highly-sensitive information among a vulnerable population including physical samples of bodily fluids, and applying size estimation methods in a non-experimental evaluation. At the aggregate level, among FSWs, prevalence of syphilis declined from 10.8% to 5.0%. The findings suggest that tailored and sustained programmatic efforts are needed to address the disparity in prevalence of STIs and HIV in the three states.
How the Children of Seaweed Farmers and Weavers Spend their Time in the Philippines: Evidence from a Mixed-method Evaluation with Parents and Children
Diana Rutherford, FHI 360, drutherford@fhi360.org
In the developing world, children are often the first or second source of essential labor for the family. How do we, as evaluators, understand and interpret how children spend their time, especially when they are among the poorest of the poor? In the Philippines, we examine how the children of seaweed farmers and weavers spend their time. We explain how data were collected both from parents, using surveys, and from children, using participatory rapid appraisals. Focusing on children, we address the challenges faced when data from multiple sources are contradictory, the complexities of community debriefing among the vulnerable poor, and how economic interventions can and should measure change at the child level. The findings are part of a larger evaluation of a value chain project implemented between 2008 and 2012 for USAID’s Displaced Children and Orphans Fund under the Supporting Transformation by Reducing Insecurity and Vulnerability with Economic Strengthening program (STRIVE).

Session Title: The Black Men's Initiative Outcomes Matrix: A Holistic Approach to Evaluating HIV Prevention Programming for Young Men Who Have Sex With Men
Demonstration Session 940 to be held in 213 A on Saturday, Oct 27, 1:00 PM to 2:30 PM
Sponsored by the Lesbian, Gay, Bisexual, Transgender Issues TIG
Presenter(s):
Sherry Estabrook, Harlem United Community AIDS Center, sestabrook@harlemunited.org
Abstract: The Black Men’s Initiative at Harlem United Community AIDS Center developed an Outcomes Matrix as an evaluation tool to track the progress of its YMSM clients. The matrix tracks clients in 6 domains that contribute to HIV risk: Education, Income, Employment, Risk Behaviors, Housing, and Mental Health. This demonstration presentation will address the development of the matrix, components of each domain, strengths and weaknesses of the method, and relevant approaches to data analysis. Participants will also have the opportunity to use the tool themselves and discuss the implications for clinical practice and program quality improvement for vulnerable populations.

Session Title: Learning From Community Evaluations: Theory and Practice
Multipaper Session 941 to be held in 213 B on Saturday, Oct 27, 1:00 PM to 2:30 PM
Sponsored by the Collaborative, Participatory & Empowerment Evaluation TIG
Chair(s):
LaShonda Coulbertson,  Nia Research Consulting and Evaluation LLC, lcoulbertson@niaeval.org
Fostering Community Participation for Formative Research and Summative Evaluation: A Community Engagement Model in Development Communication Program
Presenter(s):
Raghu Thapalia, Equal Access Nepal, rthapalia@equalaccess.org
Sushil Kumar Bhandari, Equal Access Nepal, bhandarisk99@gmail.com
Abstract: Equal Access Nepal was interested in fostering participatory research and evaluation for formative research. It engaged communities in the research and evaluation of projects in order to develop tools for effective evaluation. Participatory approaches formulated in the course of developing programs, along with research on assessing communication for social change carried out collaboratively with Australian universities, paved the way toward effective ways of engaging communities in evaluating the impact of development interventions. The modules developed through intensive formative research and impact assessment proved handy for the meaningful participation of stakeholders throughout the project cycle. This paper presents a set of processes, procedures, and techniques for engaging communities in research and evaluation. It presents the challenges encountered and the strategies adopted while shifting toward formative research and evaluation. The paper concludes with how the participatory approaches developed within the project help to achieve effective program development and evaluation of project outcomes.
Identifying Genuine Two-Way Knowledge Sharing in Online Communities
Presenter(s):
Laurie Ruberg, Wheeling Jesuit University, lruberg@cet.edu
Debra Piecka, Wheeling Jesuit University, dpiecka@cet.edu
Tyra Good, Duquesne University, goodt@duq.edu
Abstract: How do we measure knowledge sharing in an online collaborative website? This presentation explores how indicators of knowledge sharing in online communities can be identified, measured, and analyzed. Our goal is to improve understanding of the affordances of electronic communities as learning contexts. This presentation will demonstrate how techniques used to measure knowledge transfer and knowledge generation in online business communities can be applied to evaluation of educational virtual communities. The context for this evaluation is a STEM education online community that offers both public and private collaborative groups. The presentation will provide examples of strategies and contextual factors that show genuine knowledge sharing. The authors will present their results visually—showing the online context, case examples, data coding scheme, data summary system, visual representation of results, visual organizer depicting successful strategies, and contexts for future evaluation to further test the guidelines presented.
Making "Mobilizing for Action through Planning and Partnerships (MAPP)" Work at the Neighborhood Level
Presenter(s):
Janna Knight, Louisiana Public Health Institute, jknight@lphi.org
Ashley Burg, Louisiana Public Health Institute, aburg@lphi.org
Abstract: Mobilizing for Action through Planning and Partnerships (MAPP) is a strategic planning process around health. While MAPP is community driven, it is usually conducted at the city, county, or state level. The Louisiana Public Health Institute received CDC REACH-CORE funds to implement MAPP with two neighborhoods in New Orleans. We found through this implementation that the Forces of Change and Local Public Health System Assessments were difficult to frame and use at the neighborhood level. In partnership with our leadership team we conceptualized adaptations to make the assessments more relevant. Coincidentally, the City of New Orleans began a MAPP process during this time period and our neighborhood partners were able to adopt and adapt the results of the City’s assessments. While most neighborhoods that undertake MAPP will not have the benefit of a city or state assessment running concurrently, certain lessons we learned around these particular assessments will remain applicable.
