
Session Title: Exploring the Intersection of Beauty and Quality With the Student Travel Award Winners
Think Tank Session 622 to be held in Lone Star A on Friday, Nov 12, 3:35 PM to 4:20 PM
Sponsored by the Presidential Strand
Leslie Cooksy, University of Delaware, ljcooksy@udel.edu
Jeehae Ahn, University of Illinois at Urbana-Champaign, jahn1@illinois.edu
Stephanie Coleman, University of Missouri, slccm7@mail.missouri.edu
Brandi Gilbert, University of Colorado, Boulder, brandi.gilbert@colorado.edu
Douglas Grane, University of Iowa, douglas-grane@uiowa.edu

Session Title: Using Quantitative Analyses in the Development and Testing of Theories of Change
Multipaper Session 623 to be held in Lone Star B on Friday, Nov 12, 3:35 PM to 4:20 PM
Sponsored by the Program Theory and Theory-driven Evaluation TIG
John Gargani, Gargani + Company, john@gcoinc.com
Assessing a Theory of Change for the Choice to Be Sexually Abstinent: Testing a Conceptual Model of Behavior Choices Among Adolescents
Amy Laura Arnold, University of Georgia, alarnold@uga.edu
Virginia Dick, University of Georgia, vdick@cviog.uga.edu
Ann Peisher, University of Georgia, apeisher@uga.edu
Robetta McKenzie, Augusta Partnership for Children Inc, rmckenzie@arccp.org
Katrina Aaron, Augusta Partnership for Children Inc, kaaron@arccp.org
Don Bower, University of Georgia, dbower@uga.edu
Abstract: Over the past two years, a collaborative partnership in Augusta-Richmond County, Georgia, has implemented a comprehensive saturation approach in order to reduce premarital sexual activity and pregnancy among middle school youth. With the second wave of data collected during the fall of 2009, the evaluation team has statistically tested and examined the Theory of Change for the Choice to be Sexually Abstinent. Using regression analysis and Structural Equation Modeling (SEM), the direct effects as well as the mediating and moderating effects of key variables were tested. This paper will present the findings from the regression analysis and SEM, which both validate the proposed theory of change. In general, youth decisions to become and remain abstinent are influenced by the intervention (comprehensive programming) and are mediated by maternal communication and conceptual understanding of the benefits and risks.
Quantitative Content Analysis for Developing a Competitive Grant’s Program Theory From its Request for Applications
Elena Polush, Iowa State University, elenap@iastate.edu
Carl Roberts, Iowa State University, carlos@iastate.edu
Abstract: This paper reports on a single-case study aimed at designing a meaningful evaluation of a federal competitive grants program. The study’s main question was, “What constitutes a good evaluation for a competitive grants program?” The premise was that a good evaluation plan begins with explicating the program’s essential conceptual underpinnings, namely the program theory. Theory-based evaluation is of emerging importance in evaluation practice and is generally recognized as enhancing the quality of evaluation. However, the theory-based approach remains underutilized in evaluations of federal competitive grant programs. This research used quantitative content analysis to systematically study the texts of Requests for Applications (RFAs) for the Higher Education Challenge (HEC) grants program. The analysis centered on examining linear changes and continuity in emphasis during eleven years of the HEC’s implementation. Eight themes were identified, indicating trends toward program continuity; these themes are used to draw inferences about the HEC’s program theory.

Session Title: Rights-based Evaluation and Active Citizenship in Development Contexts
Expert Lecture Session 624 to be held in Lone Star C on Friday, Nov 12, 3:35 PM to 4:20 PM
Sponsored by the Collaborative, Participatory & Empowerment Evaluation TIG
Robert Stake, University of Illinois at Urbana-Champaign, stake@uiuc.edu
Saville Kushner, University of the West of England, saville.kushner@uwe.ac.uk
Abstract: Evaluation as a response to a citizen’s right to information about public programs is a well-rehearsed idea in wealthy countries of the Northern Hemisphere. It has barely penetrated the field of international development, where evaluation is seen as a privileged instrument of the administrative system and the international donor elite. Models of ‘Dialogic’, ‘Participatory’, ‘Responsive’ and ‘Democratic’ Evaluation have a secondary presence to logic models which bend to the needs of Results-Based Management. Evaluation is a widespread practice in developing countries, but it is only barely accessible to the citizen, is rarely grounded in their priorities, and equally rarely is reported to them. This lecture, proposed by an ex-UNICEF Regional Adviser and long-time advocate of Democratic and ‘Personalised’ Evaluation, explores the use of evaluation as a redistributive device in a context in which the ‘information-poor’ compete on asymmetrical terms with the ‘information-wealthy’ for knowledge and deliberative engagement on the quality and merits of development programs.

Session Title: Communities of Practice (CoP) as a Foundation Strategy
Panel Session 625 to be held in Lone Star D on Friday, Nov 12, 3:35 PM to 4:20 PM
Sponsored by the Non-profit and Foundations Evaluation TIG
Tania Jarosewich, Censeo Group, tania@censeogroup.com
Nushina Mir, Lumina Foundation for Education, nmir@luminafoundation.org
Abstract: The Lumina Foundation for Education has incorporated communities of practice (CoP) in two major initiatives: KnowHow2Go, a national media and college access network building initiative, and its productivity work, which supports states’ efforts to educate more students within existing resources. The work presented here is the first formal effort to examine the implementation and impact of CoPs in these large initiatives. This panel describes the work of two evaluators who collaborated to develop a common analytical framework based on Realistic Evaluation, with one evaluator also using Cultural Historical Activity Theory (CHAT) to analyze implementation, perceptions, and contribution of the CoPs to the work. The presentation will discuss how the evaluators measured goals, functioning, support, and effects of the CoPs. The evaluation officer who is overseeing the initiative at the Lumina Foundation will discuss the implications of the evaluation for both efforts and for the overall work of the foundation.
Framework for Analyzing CoP in Foundation-led Initiatives
Tania Jarosewich, Censeo Group, tania@censeogroup.com
This presentation describes the development of a framework to evaluate a Community of Practice (CoP) in KnowHow2Go, a nationwide college media and network building initiative funded by the Lumina Foundation. Realistic Evaluation theory guided the evaluation, which examined the salient features, health, and key outcomes of the CoP; influencing institutional and environmental factors; and contributing resources, processes, and procedures. Two evaluation teams worked together to create consistent frameworks and parallel systems to analyze the application of the CoP in KH2GO and also in the Lumina Foundation’s productivity work. The presentation will discuss the ways in which the two evaluation teams worked together but also relied on individual systems and techniques that were most appropriate for each specific initiative. The evaluation is intended to guide program improvement, enlighten the program officers leading the effort, and inform foundations about successful ways to develop and support learning communities in other projects.
Assessing the Value of a Learning Community Strategy: An Evaluation Framework Informed by Realistic Evaluation and Systems Thinking
Ruth Mohr, SPEC Associates, rmohr@pasty.com
The second panelist, evaluation team lead for the Lumina Foundation-funded assessment of the learning community strategy (LCS) within its higher education productivity work, will present the team’s framework for describing the emerging LCS and assessing its value in sharing, creating, and storing higher education-related productivity information and knowledge. This framework, which is informed by Realistic Evaluation and Cultural Historical Activity Theory (CHAT), examines participant roles and LCS rules and tools, plus contextual factors influencing development and participation, as well as perceived individual and group benefits. The purpose of this framing is to provide a clearer understanding of LCS functioning, support needed to maintain it, successful procedures and systems that engage participants, and ways in which the LCS can have greater impact on the Foundation’s strategy of increasing higher education productivity to expand capacity and serve more students through state policy change, instructional and operational efficiencies, and innovative, affordable education methods.

Session Title: Using Regression Discontinuity Designs for Program Evaluation
Demonstration Session 626 to be held in Lone Star E on Friday, Nov 12, 3:35 PM to 4:20 PM
Sponsored by the Quantitative Methods: Theory and Design TIG
Joseph Stevens, University of Oregon, stevensj@uoregon.edu
Keith Zvoch, University of Oregon, kzvoch@uoregon.edu
Drew Braun, Bethel School District, dbraun@bethel.k12.or.us
Abstract: The proposed demonstration describes the use, analysis, and interpretation of Regression Discontinuity (RD) designs in applied program evaluations. RD designs are often recommended in quasi-experimental situations (Shadish, Cook, & Campbell, 2002) as an alternative to randomized experiments. When properly implemented, RD designs provide causal inferences that are as strong as those obtained from randomized experiments. In practice, however, there are a number of challenges in the implementation, analysis, and interpretation of RD designs. This presentation will define the features of RD designs, demonstrate variations of RD designs, and show how data are analyzed and interpreted. Analyses and examples will be based on literacy data from 1,449 elementary students from a Northwest school district. We will also discuss several challenges in the use of RD designs (e.g., compliance, attrition) as well as important issues in analysis of RD results (e.g., functional form of the pre-post relationship, interactions).

Session Title: Exemplars of School Evaluations
Multipaper Session 627 to be held in Lone Star F on Friday, Nov 12, 3:35 PM to 4:20 PM
Sponsored by the Cluster, Multi-site and Multi-level Evaluation TIG
Rene Lavinghouze, Centers for Disease Control and Prevention, shl3@cdc.gov
Correlates of Perceived Effectiveness of the Safe Schools and Healthy Students Initiative
Bruce Ellis, Battelle Memorial Institute, ellis@battelle.org
Ping Yu, Battelle Memorial Institute, yup@battelle.org
Aaron Alford, Battelle Memorial Institute, alforda@battelle.org
Danyelle Mannix, United States Department of Health and Human Services, danyelle.mannix@samhsa.hhs.gov
Sharon Xiong, Battelle Memorial Institute, xiongs@battelle.org
Abstract: A three-level growth curve model was applied to estimate school-perceived impact growth trajectories, using multi-year data on outcomes and program implementation from project and school surveys collected from 40 grantees and 3,300 participating schools. The primary interest is to determine whether and how project-level correlates affect schools' perceptions of the initiative's effectiveness over time when effects of pre-grant community and other environmental conditions are considered. Comprehensive programs and activities were found to be a significant predictor of initial increases in perceived overall impact of the initiative, of safety and violence prevention activities, and of substance use prevention activities even when the effect of funding was considered. Coordination and service integration activities were significantly related to mean rate of growth for substance use prevention over a three-year period. The paper demonstrates that this type of longitudinal growth curve model is appropriate for evaluating large-scale prevention initiatives funded by the Federal Government.
County-level Versus State-level Results: Is It Worth the Effort?
Susan Saka, University of Hawaii, Manoa, ssaka@hawaii.edu
Abstract: Given limited resources, is it worthwhile to increase sample sizes to obtain county-level results instead of only state-level results? Reasons for doing so for the Hawaii Youth Risk Behavior Survey (HYRBS) include (a) schools are interested in their own students, (b) community groups and state agencies want relevant data for grant applications, (c) educators want to know if their interventions had any effect, and (d) people want to know if events such as several youth-aged suicides in a small community were reflected in survey results. Using data from the 2003 HYRBS where 8,791 students from 43 schools statewide completed usable surveys, simple analysis of variance (ANOVA) was conducted, including the calculation of percent of variance accounted for, to compare the differences among four counties and the state on key indicators. Findings that helped a planning committee to determine whether or not to increase the sample size will be presented.

Roundtable: Improving the Quality of Evaluations That Include Lesbian, Gay, Bisexual and Transgender People (LGBT): Tools for the Evaluator?
Roundtable Presentation 628 to be held in MISSION A on Friday, Nov 12, 3:35 PM to 4:20 PM
Sponsored by the Lesbian, Gay, Bisexual, Transgender Issues TIG
Kathleen McKay, Connecticut Children's Medical Center, kmckay@ccmckids.org
Joe E Heimlich, Ohio State University, heimlich.1@osu.edu
John T Daws, University of Arizona, johndaws@email.arizona.edu
Abstract: This roundtable will address the conference theme of quality from the perspective of inclusion and representativeness by asking: How can we help ensure that the perspectives of lesbian, gay, bisexual and transgender (LGBT) people are included in evaluation design, conduct and analysis? For instance, poor survey design, each evaluator's necessarily narrow perspective, and "institutional homophobia" can result in findings that omit the perspectives of LGBT evaluands. The goal of this session will be to assess whether we should and how we might go about developing tools or checklists that will help the mainstream evaluator better consider these issues in the conduct of their evaluations.

Session Title: Program Fidelity and Development in Social Work
Multipaper Session 629 to be held in MISSION B on Friday, Nov 12, 3:35 PM to 4:20 PM
Sponsored by the Social Work TIG
Sarita Davis, Georgia State University, saritadavis@gsu.edu
Evaluating the Renewal and Evolution of a Behavioral Intervention Modality
David Wright, Abilene Christian University, david.wright@coba.acu.edu
Darryl Jinkerson, Abilene Christian University, darryl.jinkerson@coba.acu.edu
Abstract: This study documents the proactive initiative at New Horizons, Inc., a thirty-nine-year-old private non-profit agency, to evaluate and improve its behavioral intervention modality in caring for abused children. Child abuse takes many forms and, therefore, can require varying interventions. The intent of intervention is to help the child understand and positively modify behaviors in order to reenter and succeed in their social environment. The current initiative is motivated by a long-term posture of striving for constant improvement and by the recognition of improving approaches in intervention modalities. The current study will track the development and implementation of these enriched treatment modalities as well as assess the effectiveness of subsequent program execution.
The Development and Standardization of a Parent Partner Fidelity Tool for Use in Wraparound Program Evaluation
Margaret Polinsky, Parents Anonymous Inc, ppolinsky@parentsanonymous.org
Abstract: This presentation will provide information on the background, methods, and findings related to developing a standardized, psychometrically sound Parent Partner Fidelity Tool (PPFT) for measuring the impact of Parent Partners in Wraparound programs that use a team modality to provide at-risk families with comprehensive, individualized services. Parent Partners, also called Parent Advocates or Family Partners, have previous successful experience with the system the family is encountering (child welfare, mental health, special education) and serve on Wraparound Teams as a mentor and advocate for the family, providing a bridge between the professional Facilitator and the family members. Although the Wraparound approach has been evaluated as effective, specific family outcomes related to the presence of a Parent Partner have not been. The development of the PPFT led to a clearer understanding of the previously ill-defined Parent Partner role and to a previously unavailable evaluation tool for use in Wraparound program evaluation projects.

Session Title: The Role of Collaboratives in Promoting Community and Systems Changes
Multipaper Session 630 to be held in BOWIE A on Friday, Nov 12, 3:35 PM to 4:20 PM
Sponsored by the Systems in Evaluation TIG
Evelyn Yang, Community Anti-Drug Coalitions of America, eyang@cadca.org
Evaluation of a Model to Build Effective Community Coalition Change Agents for Substance Abuse Prevention
Evelyn Yang, Community Anti-Drug Coalitions of America, eyang@cadca.org
Pennie Foster-Fishman, Michigan State University, fosterfi@msu.edu
Erin Watson, Michigan State University, droegeer@msu.edu
Jane Callahan, Community Anti-Drug Coalitions of America, jcallahan@cadca.org
Abstract: This presentation will describe the theoretical framework and evaluation findings for the coalition capacity building training model used by Community Anti-Drug Coalitions of America (CADCA) and its National Coalition Institute (Institute). CADCA’s Institute provides training and technical assistance to substance abuse coalitions across America to improve their effectiveness at reducing substance abuse rates within their community. This paper will present findings from an evaluation study testing the Institute’s theory of change. Structural equation modeling has assessed the extent to which this theory of change explains coalition functioning, the role training and TA play in influencing this change process, and the extent to which this framework fits for different types of coalitions. Additionally, findings from the first three waves of a longitudinal study examining the impact of training and TA to build community change agents will be presented.

Session Title: How Well Are We Addressing Asthma Disparities? Demonstration of a New Evaluation Toolkit
Demonstration Session 631 to be held in BOWIE B on Friday, Nov 12, 3:35 PM to 4:20 PM
Sponsored by the Multiethnic Issues in Evaluation TIG
Robin Shrestha-Kuwahara, Centers for Disease Control and Prevention, rbk5@cdc.gov
Maureen Wilce, Centers for Disease Control and Prevention, muw9@cdc.gov
Abstract: One of the overarching goals in public health is to eliminate health disparities among racial and ethnic groups. Eliminating health disparities is contingent upon a health care system’s ability to reduce barriers to care and enhance patient-centered programs. Recognizing the power of evaluation to identify disparities and ensure the quality of programs to address them, the National Asthma Program at the Centers for Disease Control and Prevention (CDC) recently developed a Toolkit to assist state partners. Grounded in the CDC’s Framework for Program Evaluation, the Self-Study Asthma Disparities Toolkit employs the Culturally and Linguistically Appropriate Services (CLAS) Standards, designed to improve the responsiveness of services to diverse needs. This session will include a presentation on the Toolkit’s construction and a demonstration of how it measures public health program activities using recommendations related to: 1) culturally competent care; 2) language access services; and 3) organizational support for cultural competence.

Session Title: Using Research and Evaluation to Build the Power and Effectiveness of Community Organizing and Advocacy
Panel Session 632 to be held in BOWIE C on Friday, Nov 12, 3:35 PM to 4:20 PM
Sponsored by the Advocacy and Policy Change TIG
Anna Saltzman, Blueprint Research & Design Inc, anna@blueprintrd.com
Abstract: Advocacy organizations know the power that research has in building an effective issue campaign. Increasingly, community organizing groups are taking on research, and both advocacy and organizing groups are taking on evaluation, as a way to enhance the power and effectiveness of their campaigns. This session will explore some of the ways that advocacy and organizing groups have used research and evaluation to build their power and increase their effectiveness, and will provide guidance to advocates, organizers, funders and evaluators on ways to build research and evaluation into advocacy and organizing campaigns. Presenters representing funders and evaluators will reflect upon successes, lessons learned, and implications for the fields of advocacy and community organizing evaluation.
Catherine Crystal Foster, Blueprint Research & Design Inc, catherine@blueprintrd.com
Communities for Public Education Reform (CPER), launched in 2006, is a national, multi-site funding collaborative that supports educational organizing campaigns working to improve educational opportunities and outcomes for public school students, particularly low-income students and students of color. For over three years, Blueprint Research & Design Inc. has been CPER’s evaluation partner, helping to document the progress of the collaborative as a whole and within each of the sites. Catherine Crystal Foster is an independent consultant and strategic partner with Blueprint on the firm’s policy and advocacy evaluation work, and has co-directed Blueprint’s evaluation of CPER. As presenter, she will describe Blueprint’s approach to evaluating CPER, the methods that Blueprint evaluators are using to engage CPER funders and grantees in data collection and learning, examples of what funders and grantees have gained from the evaluation, and broader implications for the fields of advocacy and community organizing evaluation.
Julie Kohler, Public Interest Projects, jkohler@publicinterestprojects.org
Julie Kohler is the Director of Education & Civic Engagement at Public Interest Projects (PIP), where she manages Communities for Public Education Reform (CPER). As a presenter, Julie will provide background on CPER’s programmatic goals and overall theory of change, describe CPER’s “learning agenda,” and explain CPER’s interest in building evaluation capacity among local and state-based community organizing groups. Julie will also summarize the ways that the CPER evaluation has helped to advance the field of education organizing by assessing impact, providing close-in-time feedback on campaign strategies, and building organizations’ capacity to document their own work.

Roundtable: Celebrating Government Evaluation: Forging Links Among Research, Policy Decisions, and Evaluation to Protect Our Nation's Air Quality
Roundtable Presentation 633 to be held in GOLIAD on Friday, Nov 12, 3:35 PM to 4:20 PM
Sponsored by the Government Evaluation TIG
Dale Pahl, United States Environmental Protection Agency, pahl.dale@epa.gov
Abstract: This is a year for celebration: the Government Evaluation TIG celebrates two decades of making its stamp on quality; the U.S. Environmental Protection Agency (EPA) celebrates four decades of protecting human health and safeguarding the natural environment on which life depends. One of the goals central to EPA’s mission is protecting air quality so that our nation’s air is safe and healthy to breathe and risks to human health and the environment are reduced. To protect air quality, the Clean Air Act requires EPA to establish national ambient air quality standards; since 1977, this legislation has articulated systemic and systematic links among research, policy decisions, and evaluation. This paper describes these links and illustrates how they have improved the quality of the knowledge and evidence used in evaluation and decision-making for national ambient air quality standards for particulate matter.

Roundtable: The Evolution of Integrating Arts-based Inquiry in Evaluation
Roundtable Presentation 634 to be held in SAN JACINTO on Friday, Nov 12, 3:35 PM to 4:20 PM
Sponsored by the Evaluating the Arts and Culture TIG
April Munson, Kennesaw State University, amunson1@kennesaw.edu
Abstract: In 2007 Simons and McCormack challenged evaluators to consider the integration of arts-based inquiry in theory, method, and action. While the concept resonated with some, many were left asking, “How do I do this?” To embrace the challenge, those in the arts are charged with furthering the exploration of this idea and with helping those outside the arts understand how this incorporation can lead to richer, more meaningful evaluation experiences.

Session Title: Compared to What? Reconsidering Assessment in Higher Education
Expert Lecture Session 635 to be held in TRAVIS A on Friday, Nov 12, 3:35 PM to 4:20 PM
Sponsored by the Assessment in Higher Education TIG
Gary Brown, Washington State University, browng@wsu.edu
Abstract: There is consensus in higher education that failures to respond effectively to accountability demands have made institutional comparisons nearly unavoidable. But those charged with improving educational outcomes learn little from ranking “America’s Best Colleges.” There is greater variance within an institution than between institutions, but the comparison expectation persists. Gary Brown, Director of Washington State University’s Office of Assessment and Innovation, will describe the system implemented at WSU that leverages technology and community to recast the comparison mandate. The approach, noted by organizations like AAC&U, is drawn from Brown’s team’s research (7 Best Research Awards), evaluation experience with many state initiatives, and major cross-institutional grants. The approach vests primary responsibility for assessment in the expertise of faculty and calibrates measures with external stakeholders. The model provides a useful, alternative comparison—not among institutions with disparate contexts, but within communities that shape and are shaped by the outcomes.

Session Title: Using New Technologies to Support Evaluation Quality and Organizational Learning
Panel Session 636 to be held in TRAVIS B on Friday, Nov 12, 3:35 PM to 4:20 PM
Sponsored by the Integrating Technology Into Evaluation TIG
Lynn McCoy, Pact Inc, lmccoy@pactworld.org
John Talieri, Pact Inc, jtalieri@pactworld.org
Abstract: The use of new media, including mobile phone technology, webinars, Skype, and online meetings, presents exciting new applications in organizational program monitoring, evaluation, and learning efforts. Creative possibilities seem almost limitless but are often bound by the realities of cost, bandwidth, and technology mastery. Panelists will review a variety of technologies they use in building shared learning systems and discuss pros and cons, experiences, and lessons learned in the use of technology for program delivery and organizational learning (particularly in promoting evaluation quality).
Challenges and Lessons in Conducting Quality Evaluation of the Short Message Service (SMS) Technology
Maeve B de Mello, Pact Inc, mmello@pactbrasil.org
Maeve B. de Mello is a Senior Technical Advisor for Pact Inc in Brazil. Pact Brazil has established a partnership with a leading telecommunications network in Brazil (Vivo) to send educational health/HIV and AIDS SMS messages to all male clients aged 18 or above in the Pernambuco, Rio de Janeiro, and Brasilia regions. The use of mobile phone messages is an exciting new field and creative possibilities seem almost limitless. However, monitoring and evaluating the impact of the use of this technology presents its own unique challenges, and Ms. de Mello will present her experiences and lessons to date on conducting quality monitoring and evaluation of the use of SMS technologies in program implementation.
Integrating Technology to Maximize Evaluation Quality and Organizational Learning
Mary Ngugi, Pact Inc, mngugi@pactworld.org
Mary Ngugi is the Monitoring Evaluation and Learning Program Manager for Pact Inc in Washington DC. She has 10 years of experience in international development and a focus on the use of technology in building shared learning systems. Ms. Ngugi will present the various technologies she uses to encourage organizational learning, particularly the methods utilized to improve evaluation quality, including meta-evaluation databases, establishment of an online Monitoring and Evaluation Community of Practice, use of Web-Based Learning Circles, and creation of “Webinars” to improve M&E practice. Ms. Ngugi will discuss the opportunities and challenges surrounding use of technology within an organization with 2,000 staff and offices in 52 countries.

Session Title: Creating Statewide Guidelines for Twenty First Century Community Learning Centers' Local Evaluation
Demonstration Session 637 to be held in TRAVIS C on Friday, Nov 12, 3:35 PM to 4:20 PM
Sponsored by the Organizational Learning and Evaluation Capacity Building TIG
Wendy Tackett, iEval, wendy@ieval.net
Laurie VanEgeren, Michigan State University, vanegere@msu.edu
Abstract: Forty-nine grantees operate 345 Michigan 21st Century Community Learning Center after-school programs across the state in a variety of contexts, working with 35 different local evaluators. The local evaluators have a wealth of expertise and use a variety of approaches, but great variability exists in what evaluators do and how beneficial program administrators find their services. A team of local evaluators, project directors, state evaluators, and other key stakeholders came together to develop common criteria for local evaluation in Michigan programs. Products of this project included an evaluator job description, a sample contract, guidelines for evaluator roles, and a rubric for determining whether the evaluation is meeting the client’s needs. This was accomplished by using the Program Evaluation Standards, conducting a statewide survey of project directors and evaluators, meeting via conference calls and in person, and conducting in-depth interviews with successful programs. This session will review the process, tools, and materials developed.

Session Title: Is Your Program Sustainable? A Practical Tool for Evaluating Sustainability
Demonstration Session 638 to be held in TRAVIS D on Friday, Nov 12, 3:35 PM to 4:20 PM
Sponsored by the Health Evaluation TIG
Nancy Mueller, Washington University in St Louis, nmueller@wustl.edu
Stephanie Herbers, Washington University in St Louis, sherbers@wustl.edu
Sarah Schell, Washington University in St Louis, sschell@wustl.edu
Abstract: Public health programs have weathered tough economic and political times, leaving many programs wondering whether they are sustainable. To assist programs, the Center for Tobacco Policy Research at Washington University developed a framework and tool to assess the sustainability of public health programs. Concept mapping and expert review were utilized to refine and validate the framework. An advisory panel of researchers, practitioners, and funders provided input on the development of a tool based on the framework. The tool is intended to be used by stakeholders to assess a program’s capacity for sustainability and identify opportunities for improvement. During this session, the sustainability framework will be introduced and the validation and testing processes will be described. We will also demonstrate the use of the tool for practitioners, funders, and evaluators. At the end of the session, participants will be able to apply the framework to their own work.

Session Title: Nerd Activism: Using Data as the Vehicle for Community Mobilization
Panel Session 639 to be held in INDEPENDENCE on Friday, Nov 12, 3:35 PM to 4:20 PM
Sponsored by the Lesbian, Gay, Bisexual, Transgender Issues TIG
Kari Greene, Oregon Public Health Division, kari.greene@state.or.us
Abstract: Structural barriers and policy issues prevent progress on national or local LGBTQ health agendas. Beyond methodological issues, structural and societal impediments include stigmatization, invisibility, lack of a comprehensive research agenda or “home” for LGBTQ evaluation/policy issues, and lack of funding. In this panel, we demonstrate how one community addressed these challenges through an assessment of local needs, followed by data-driven coalition building, fund-raising, and policy development. Does this sound like the perfect evaluation? It wasn’t. It was a cobbled-together effort that succeeded despite tensions between the paradigms of local activism and community engagement and the structured methodological approach of research and evaluation. We share how key factors in community-based participatory research and empowerment evaluation like shared values, trust between partners, and commitment to a common goal made the difference between success and failure.
Developing a Community Needs Assessment by Consensus: Why is Our Evaluator Tearing Her Hair Out?
Linda Drach, Oregon Public Health Division, linda.drach@state.or.us
Balancing community involvement with methodological rigor can be a challenge when conducting a community-based needs assessment. Structural and societal impediments within the LGBTQ community can create additional roadblocks, as demonstrated by the Speak Out 2009 survey, led by public health staff in Portland, Oregon. Although the survey aimed to explore a wide range of health behaviors, health outcomes, and related social and environmental factors, it originated in the health department’s HIV program because local health equity initiatives did not include LGBTQ issues. Therefore, the project was considered a “special project” and initially had no dedicated funding or staff. We share what we achieved, what opportunities were missed, and how we moved the project from wistful notion to completed evaluation through survey development and implementation, analysis, dissemination, and then into policy and fund development.
Coalitions Built Through Data: How a Survey Engaged a Community
Kari Greene, Oregon Public Health Division, kari.greene@state.or.us
Despite the methodological and structural challenges of conducting research in LGBTQ communities, a local assessment of LGBTQ health was conducted by a small, committed group of local leaders. This assessment led to a number of important outcomes, including the formation of an LGBTQ Health Coalition with the mission of addressing the social determinants of health and well-being in LGBTQ communities. Having applied for funding to sustain and institutionalize their efforts, the Coalition is embarking on a two-year strategic planning process based on a model developed by the CDC to identify and eliminate health disparities. The model will offer a framework for guiding the coalition through a structured process to solidify the group, assess available data, gather additional data, and generate a plan for action to support advocacy and policy-making. This data-driven, action-oriented model can be used by evaluators from all disciplines to build coalitions, particularly within disenfranchised communities.

Session Title: Digital Media and Learning for Youth: What Do They Know, When Do They Know It, How Do We Evaluate It?
Think Tank Session 640 to be held in PRESIDIO A on Friday, Nov 12, 3:35 PM to 4:20 PM
Sponsored by the Distance Ed. & Other Educational Technologies TIG
Kristin Bass, Rockman et al, kristin@rockman.com
Elizabeth Bandy, Rockman et al, elizabeth@rockman.com
Brianna Scott, Rockman et al, brianna@rockman.com
Jennifer Borland, Rockman et al, jennifer@rockman.com
Abstract: The emerging field of digital media and learning prides itself on innovation. How can we make sure our evaluations follow suit? This think tank session will consider opportunities and challenges in evaluating programs that teach digital media and technologies to youth. The chairperson will introduce the session by defining digital media and learning within the contexts of the programs we have evaluated. Each breakout group will then discuss one of the following issues: (1) the advantages and limitations of specific methods and measures; (2) the ways in which youth can best document their experiences and participate in evaluation activities; and (3) how evaluations vary based on program factors, such as the media being studied, duration, setting and population. We hope by the end of this session we can identify the kinds of questions we need to address and begin to discuss the best methods for finding answers.

Session Title: Engaging Practitioners in the Medical Field to Build Evaluation Capacity
Multipaper Session 641 to be held in PRESIDIO B on Friday, Nov 12, 3:35 PM to 4:20 PM
Sponsored by the Organizational Learning and Evaluation Capacity Building TIG
Cynthia Tananis,  University of Pittsburgh, tananis@pitt.edu
Evaluation Capacity Building Writ Large
Marla Steinberg, The CAPTURE Project, marlas@sfu.ca
Diane Finegood, The CAPTURE Project, finegood@sfu.ca
Abstract: The CAPTURE Project (Canadian platform to increase the use of real world evidence) is developing a web-based evaluation platform that will make it easier for health promotion practitioners to do evaluations. Unlike Shoeless Joe Jackson in the movie Field of Dreams, we know that “if we build it, they [sic] will not come.” In order to maximize the use and sustainability of the platform, we are using a systems lens. This paper will present an overview of a systems approach to evaluation capacity building. This will involve reviewing the main features of systems thinking, presenting a systems map of evaluation, and illustrating how systems concepts have been integrated into the design of the platform, its operations, and its evaluation.
Context and Considerations of Evaluating a Health Professional Capacity-Building Program in Vietnam
Katherine Williams, Population Council, kwilliams@popcouncil.org
Meiwita Budiharsana, Population Council, mbudiharsana@popcouncil.org
Quoc Mai Tung, Population Council, maitungtn@gmail.com
John Townsend, Population Council, jtownsend@popcouncil.org
Abstract: The Health Research for Development Initiative in Vietnam is a capacity-building fellowship program that aims to mobilize health professionals trained in research and to facilitate the pursuit of research activities within their own professional specialties. An impact evaluation of this program demonstrated improvement in participants’ research methods skills, yet limited change in participants’ involvement in research activities and their intentions to conduct research. Subsequent qualitative research provided insight into professional and social constraints that were inhibiting their success, including the public health sector's management structure, new leadership and service opportunities in the private sector, corruption, and overloaded job responsibilities. In addition, qualitative feedback offered recommendations from successful program participants, who proposed a revised post-fellowship training strategy. This combination of research methods provided traditional measures of success while considering the context of Vietnam’s rapid economic growth, the changing market for health research, and their effects on the success of fellowship participants.

Session Title: Building a Flexible, Web-based Analysis Tool to Examine Person Level Changes Over Time in a Mental Health Outcomes Measurement System
Demonstration Session 642 to be held in PRESIDIO C on Friday, Nov 12, 3:35 PM to 4:20 PM
Sponsored by the Alcohol, Drug Abuse, and Mental Health TIG
Tim Santoni, University of Maryland, tsantoni@psych.umaryland.edu
Qiang Qian, University of Maryland, qqian@psych.umaryland.edu
Abstract: Maryland’s Mental Hygiene Administration, in conjunction with a wide variety of stakeholders and with assistance from the Systems Evaluation Center (SEC) of the University of Maryland, initiated an Outcomes Measurement System late in 2006. Living situation, employment/school, symptoms, and functioning are among the domains measured by the system at intake and every six months thereafter. To analyze person-level changes over time in the system, the SEC developed a flexible, web-based tool. It allows analysis at the state, county, and provider levels; comparisons of the most recent with the next most recent observation and of the initial with the most recent observation; filtering on several demographic items; and variable selection of the time frame for the most recent observation used in comparisons. The session will focus on many issues regarding the analysis of change over time and feature a demonstration of the web-based system.

Roundtable: The Cost of Delayed Start: Effect of Early Algebra on End of High School, Transition to College, and End-of-College Science, Technology, Engineering, and Mathematics (STEM) Participation
Roundtable Presentation 643 to be held in BONHAM A on Friday, Nov 12, 3:35 PM to 4:20 PM
Sponsored by the Pre-K - 12 Educational Evaluation TIG
Xiaoxia Newton, University of California, Berkeley, xnewton@berkeley.edu
Rosario Rivero, University of California, Berkeley, rosario.rivero.c@gmail.com
Anthony Kim, University of California, Berkeley, tonykim1@gmail.com
Abstract: “Algebra for everyone” is a familiar slogan in educational reforms concerning mathematics education. Despite the assumed benefits of algebra, researchers do not necessarily agree on when students should start algebra or whether all students need algebra at all. Using the NELS:88 longitudinal database, we carefully constructed comparable samples of students who did and did not take early algebra through propensity score matching on critical covariates, so as to disentangle the effect of early algebra from that of other covariates such as students’ mathematics ability and intrinsic motivation. By systematically investigating the effect of early algebra on critical long-term outcomes through rigorous methodology, we hope to contribute to building programmatic research for evidence-based educational policy evaluation.

Session Title: What About the Funders?
Panel Session 644 to be held in BONHAM B on Friday, Nov 12, 3:35 PM to 4:20 PM
Sponsored by the Non-profit and Foundations Evaluation TIG
Beth Bruner, Effectiveness Initiatives, bbruner@brunerfoundation.org
Abstract: The Bruner Foundation has invested in enhancing evaluation capacity and promoting evaluative thinking within nonprofit organizations for more than a decade. After years of hearing directly from funders and indirectly about them, the Foundation decided to tackle the challenge of learning more about how and what would really help promote evaluative thinking and learning for grantmakers. Designed by Bruner and Baker, an evaluator and evaluation trainer, the Evaluation in Philanthropy training pilot has involved two mid-sized, place-based funders in a brief multi-session training process focused on evaluation and evaluative thinking. Through the experience, the Bruner Foundation has gained some important insights about what funders need, what does and doesn’t work regarding training, and why what’s required of grantees isn’t always practiced by grantmakers in their own organizations. These findings and details about next steps and available resources will be presented and discussed in this session.
What About the Funders? Why and How Evaluation Capacity Building Matters in Philanthropic Organizations Too
Beth Bruner, Bruner Foundation Inc, bbruner@brunerfoundation.org
The Bruner Foundation has invested in enhancing evaluation capacity and promoting evaluative thinking within nonprofit organizations for more than a decade. After years of hearing directly from funders and indirectly about them, the Foundation decided to tackle the challenge of learning more about how and what would really help promote evaluative thinking and learning for grantmakers. Designed by Bruner and Baker, an evaluator and evaluation trainer, the Evaluation in Philanthropy training pilot has involved two mid-sized private funders in a brief multi-session training process focused on evaluation and evaluative thinking. This part of the panel session will focus on why the Bruner Foundation has raised these questions, how grantmakers responded, and more importantly why evaluative thinking is important to grantmakers.
What About the Funders? Results From the Evaluation Use in Philanthropy Pilot
Anita Baker, Anita Baker Consulting, abaker8722@aol.com
The Bruner Foundation has invested in enhancing evaluation capacity and promoting evaluative thinking within nonprofit organizations for more than a decade. After years of hearing directly from funders and indirectly about them, the Foundation decided to tackle the challenge of learning more about how and what would really help promote evaluative thinking and learning for grantmakers. During this part of the panel session, findings from the Evaluation in Philanthropy training pilot, which has involved two mid-sized private funders in a brief multi-session training process focused on evaluation and evaluative thinking, will be shared. Specifically, insights will be shared about what funders need and what does and doesn’t work regarding training. Discussion will be encouraged regarding why what’s required of grantees isn’t always practiced by grantmakers in their own organizations and how to better address that gap. Details about next steps and available resources will also be presented.

Session Title: Meta-reviews in Rural Education and Reading Interventions
Multipaper Session 645 to be held in BONHAM C on Friday, Nov 12, 3:35 PM to 4:20 PM
Sponsored by the Pre-K - 12 Educational Evaluation TIG
Meltem Alemdar,  Georgia Institute of Technology, meltem.alemdar@ceismc.gatech.edu
An Evaluation of the Quality of Rural Education Research: 2006 to Present
Zoe Barley, Mid-continent Research for Education and Learning, zbarley@mcrel.org
Louis Cicchinelli, Mid-continent Research for Education and Learning, lcicchinelli@mcrel.org
Abstract: This review of rural education research since 2006 examined the quality and quantity of the research. It builds on a 2005 rural research review and a 2007 article calling for improving the yield of rural research. Quality issues were twofold: methodological quality and rural focus. Many studies identified as rural are rural only by nature of the sample (i.e., a rural setting or rural students) and lack relevance for rural educators and policymakers. A thorough literature search resulted in 62 studies; secondary screening yielded only 20 studies (32%) with a rural focus. The fact that only 32% of all “rural” studies had a rural focus is a serious concern. Nine of the 20 rural studies were quantitative and 11 were qualitative. Methodologically, 45% were of high quality, 35% medium, and 20% low. We discuss the implications of our findings, make recommendations for future rural education research, and suggest a research agenda.
Reviewing Systematic Reviews: Meta-analysis of What Works Clearinghouse Computer-Assisted Reading Interventions
Andrei Streke, Mathematica Policy Research, astreke@mathematica-mpr.com
Tsze Chan, American Institutes for Research, tchan@air.org
Abstract: The What Works Clearinghouse (WWC) offers reviews of evidence on broad topics in education, identifies interventions shown by rigorous research to be effective, and develops targeted reviews of interventions. This paper systematically reviews research on the achievement outcomes of computer-assisted interventions that have met WWC evidence standards (with or without reservations). Computer-assisted learning programs have become increasingly popular as an alternative to traditional teacher-student interaction for improving student performance on various topics. The paper systematically reviews (1) computer-assisted programs featured in intervention reports across WWC topic areas, and (2) computer-assisted programs within the Beginning Reading and Adolescent Literacy topic areas. This work updates previous work by the author, includes new and updated WWC intervention reports (released since September 2009), and investigates which program and student characteristics are associated with the most positive outcomes.

Session Title: Applying Evaluation to Improve Learning Outcomes: Scaling Up From Small Schools to a Million Students
Multipaper Session 646 to be held in BONHAM D on Friday, Nov 12, 3:35 PM to 4:20 PM
Sponsored by the Pre-K - 12 Educational Evaluation TIG
Michael Furlong,  University of California, Santa Barbara, mfurlong@education.ucsb.edu
Improving Individual Student Performance in United States Schools Through a Response to Intervention (RTI) Framework
Matthew Quirk, University of California, Santa Barbara, mquirk@education.ucsb.edu
Abstract: The Individuals with Disabilities Education Act encourages schools to implement Response to Intervention (RTI) frameworks to identify and monitor students at risk for learning problems. RTI is a multi-tiered service delivery approach that monitors student progress frequently, starting in kindergarten. Students not keeping up with proficiency standards are provided supplemental interventions, and their academic progress is regularly monitored. RTI requires data systems that manage the influx of student data associated with periodic universal screening and frequent progress monitoring of students receiving supplemental interventions. These systems need to provide teachers and other school personnel with real-time, meaningful data and reports that inform decisions about students. Although there are a number of web-based RTI data systems, they often do not interface well with larger district-level data systems. Hence, there often is a disconnect between the day-to-day assessment information yielded by RTI frameworks and the broader, more comprehensive information on students that is stored in district-level systems.

Session Title: Improving the Quality of Habitat and Biodiversity Conservation Program Evaluations Using Methodological and Budgetary Techniques
Multipaper Session 647 to be held in BONHAM E on Friday, Nov 12, 3:35 PM to 4:20 PM
Sponsored by the Environmental Program Evaluation TIG
Gitanjali Shrestha,  Washington State University, gitanjali.shrestha@email.wsu.edu
Improving Impact Evaluations of Biodiversity Conservation Projects
Clemencia Vela, Independent Consultant, clemenvela@aol.com
Abstract: This paper proposes suggestions for improving the evaluation of conservation impacts, drawing on lessons learned from midterm and final evaluations of projects financed by multilateral lending agencies and donor organizations. Most project documents include baseline information and a logical framework that guides project implementation. Nevertheless, many projects, even some whose goals aim to improve conditions for biodiversity conservation, lack the information required to assess the project’s impacts. The most common weaknesses found are: i) baselines that include general information but lack the indicators needed to compare before- and after-project scenarios; and ii) indicators that aim to verify compliance but are inadequate for assessing impacts. The proposed suggestions include allocating a budget line within the project budget to gather information on impact indicators before and after the intervention, and publishing a compilation of impact indicators for biodiversity conservation.
A Mixed Methods Approach to Evaluating Multi-site Habitat Protection Programs
Mark Braza, United States Government Accountability Office, brazam@gao.gov
Michael Krafve, United States Government Accountability Office, krafvem@gao.gov
Abstract: In spite of the importance of protected areas for maintaining biodiversity, there are few large-scale statistical evaluations of the effects of land management on habitat quality. We describe a method used to estimate these effects for the National Wildlife Refuge System, which comprises 585 refuges and wetland management districts covering 96 million acres. These lands often experience external disturbances, including water pollution, invasive species, and habitat fragmentation. As a result, refuges employ wildlife biologists and maintenance workers who maintain water levels, treat invasive species, restore damaged areas, and otherwise manage the land to provide quality habitat. In 2008, we surveyed refuge managers about refuge conditions and received an 81% response rate. Our regression models estimate that refuge managers were more likely to report improved habitat quality for waterfowl and other migratory birds between 2002 and 2007 on refuges where staffing levels increased and where external disturbances did not increase.

Session Title: Assessing Faculty Productivity and Institutional Research Performance: Using Publication and Citation Key Performance Indicators
Demonstration Session 649 to be held in Texas D on Friday, Nov 12, 3:35 PM to 4:20 PM
Sponsored by the Research, Technology, and Development Evaluation TIG
Ann Kushmerick, Thomson Reuters, ann.kushmerick@thomsonreuters.com
Abstract: This demonstration will discuss key performance metrics for research assessment based on journal and citation data (bibliometrics). Thomson Reuters’ web-based research evaluation tool, InCites, will be demonstrated, along with an explanation of the data source, Web of Science, and its characteristics. The presentation will cover various bibliometric indicators used to measure research performance, including article output, citation counts, the h-index, citation impact, and interdisciplinarity measures. Assessment at the institutional as well as the individual researcher level will also be discussed. Applications of these bibliometric indicators for faculty assessment, collaboration analysis, and peer comparisons will be reviewed, along with guidelines for their use.

Session Title: Moving Beyond the Weighty Tome: New Approaches to Reporting Evaluation Findings
Demonstration Session 650 to be held in Texas E on Friday, Nov 12, 3:35 PM to 4:20 PM
Sponsored by the Evaluation Use TIG and the Integrating Technology Into Evaluation TIG
Eric Graig, Usable Knowledge LLC, egraig@usablellc.net
Abstract: A social program improves not merely because an evaluator has offered a series of findings and recommendations to a client, but because the stakeholders involved in it have been able to have a conversation about what that evaluator observed and how he or she, as an outsider, made sense of it. The challenge for an evaluator adopting a utilization-focused approach is to facilitate that conversation, and one of the best ways to do this is to create compelling reports that meet stakeholders where they are. The goal of this session is to present practical approaches for creating reports that will have an impact on the programs they describe. Included is a discussion of multimedia reports, brochure reports, blogged reports, executive summary reports, panels, etc., and the tools used for creating them. Ample time will be available for questions and discussion.

Session Title: Introduction to Evaluation and Public Policy
Expert Lecture Session 651 to be held in  Texas F on Friday, Nov 12, 3:35 PM to 4:20 PM
Sponsored by the Government Evaluation TIG
Stephanie Shipman, United States Government Accountability Office, shipmans@gao.gov
George Grob, Center for Public Program Evaluation, georgefgrob@cs.com
Abstract: Evaluation and public policy are intimately connected. Such connections occur at national, state, and local government levels, and even on the international scene. The interaction moves in two directions: sometimes evaluation affects policies for public programs, and sometimes public policies affect how evaluation is practiced. Either way, the connection is important to evaluators. This session will explain how the public policy process works. It will guide evaluators through the maze of policy processes, such as legislation, regulations, administrative procedures, budgets, re-organizations, and goal setting. It will provide practical advice on how evaluators can become public policy players—how they can influence policies that affect their very own profession, and how to get their evaluations noticed and used in the public arena.

Session Title: Program Evaluation Network: Capacity and Creativity for Multi-Site Program Evaluation
Demonstration Session 652 to be held in CROCKETT A on Friday, Nov 12, 3:35 PM to 4:20 PM
Sponsored by the Extension Education Evaluation TIG
Joseph Donaldson, University of Tennessee, jldonaldson@tennessee.edu
Abstract: The Program Evaluation Network (PEN) is a software program that uses a set of standard outcomes and valid, reliable instruments to measure the results of statewide Extension programs. Extension faculty and staff have validated 41 instruments for 16 different Extension programs. PEN allows Extension Agents to produce instruments with local program names and questions, while maintaining the integrity of the standard, statewide scales. The software provides individual, unit-level, state-level, and multi-state summary reports of the data. In its first three years, University of Tennessee and Tennessee State University Extension agents used PEN to survey over 76,000 participants in programs that served nearly 300,000. The data has been used for stakeholder communications and program improvement. This demonstration will present how the software works, including a discussion of similar products and challenges. Participants will learn how their organization can subscribe to this software through a licensing agreement.

Session Title: Small Business Sustainability: A Systems Approach to Triple Bottom Line Success
Expert Lecture Session 653 to be held in  CROCKETT B on Friday, Nov 12, 3:35 PM to 4:20 PM
Sponsored by the Business and Industry TIG
Mallary Tytel, Healthy Workplaces, mtytel@healthyworkplaces.com
Abstract: Today we are operating in an environment where the number of small businesses is growing and more and more “former employees” are heading toward entrepreneurship. Within this dynamic landscape, the potential of integrating triple bottom line thinking into small businesses and startups becomes even more desirable and important. Creating a viable business strategy that includes sustainability priorities rests on standard practical criteria, especially in an environment that supports variety, personal values, and innovation. As part of our ongoing research practice, and to ascertain relevant sustainability criteria within this population, over 100 small businesses were surveyed to identify a range of critical elements. Organized across the four fundamental categories of the balanced scorecard and the three bottom lines, those criteria were assembled into multidimensional sustainability scorecards. This presentation will discuss the application of these templates, how they support a systems approach to business planning, and how they can help small businesses achieve success.

Session Title: Understanding Indigenous Higher Education Experiences: Innovative Evaluations From New Zealand and Hawaii
Multipaper Session 654 to be held in CROCKETT C on Friday, Nov 12, 3:35 PM to 4:20 PM
Sponsored by the Indigenous Peoples in Evaluation TIG
Katherine A Tibbetts,  Kamehameha Schools, katibbet@ksbe.edu
E Tu Kahikatea, Stand Tall Kahikatea: Evaluation of One of the World's Largest Indigenous-Led Tertiary Education Providers, Te Wananga o Aotearoa
Andrea Knox, Counterbalance Research and Evaluation, andrea@counterbalance.co.nz
Shaun Akroyd, Akroyd Research and Evaluation, shaun.akroyd@clear.net.nz
Fraser Sloane, Sloane Walker Ltd, thesloanes@xtra.co.nz
Chantalle Ngapo, Te Wãnanga o Aotearoa, chantalle.ngapo@twoa.ac.nz
Abstract: Te Wananga o Aotearoa is a Maori(1)-led tertiary education institution that teaches in over 120 locations across New Zealand. With 36,695 students in 2009, Te Wananga o Aotearoa is New Zealand’s second largest tertiary education provider and is, to our knowledge, the largest indigenous-led tertiary education provider in the world. Te Wananga o Aotearoa educates both Maori and non-Maori, but is distinct from non-indigenous options in that its education is grounded in ahuatanga Maori (Maori world-view). In this large scale project, we used a mixed-method approach to evaluate outcomes for students, families and communities associated with Te Wananga o Aotearoa. Of particular interest, we have explored logic modeling from a Maori perspective, used photovoice to explore student experiences, and used a modified success case method approach to assess student and family outcomes and to identify factors associated with outcomes. (1) indigenous New Zealander
Relevance, Relationship, and Reciprocity: Evaluating the Role of Non-Academic Factors in Improving Native Hawaiian Students’ College Retention
Anna Ah Sam, University of Hawaii, Manoa, annaf@hawaii.edu
Abstract: The retention of Native Hawaiian college students is a significant challenge for colleges and universities in the State of Hawai'i. There are both academic and non-academic factors associated with improving their retention that have significant implications for designing and implementing effective retention programs that serve Native Hawaiian college students. In addition, when several of these factors are incorporated in the evaluation of such programs, a more holistic—and thereby more accurate—evaluation results. This paper will highlight the non-academic factors associated with successful retention programs and their evaluation. These include the relevance of culture in students’ social support network, the role of interpersonal relationships with peers and faculty, and the commitment to reciprocity and community involvement.

Session Title: A Theory Driven Multi-year Strategic Evaluation Plan for a Multi-program Government Agency
Expert Lecture Session 655 to be held in  CROCKETT D on Friday, Nov 12, 3:35 PM to 4:20 PM
Sponsored by the Government Evaluation TIG
Gretchen Jordan, Sandia National Laboratories, gbjorda@sandia.gov
John Reed, Innovologie LLC, jreed@innovologie.com
Abstract: This paper describes the development of a multi-year strategic evaluation plan for a multi-program federal agency and the tools used to create it. Using the theory-based Impact Evaluation Framework for Technology Deployment developed by Reed, Jordan, and Vine (2007), a comprehensive and integrated multi-year plan tuned to agency priorities was developed. The plan simplifies the complex. It is strategic, theory guided, integrated across diverse program elements, multi-year to facilitate budgeting, and adaptable to rapidly changing priorities. The paper highlights both the development process and the results, including a discussion of the context, program logic, evaluation issues, evaluation questions, high-level descriptions of priority evaluation activities, a timeline, and a budget. Reference: Reed, John H., Gretchen Jordan, and Ed Vine. Impact Evaluation Framework for Technology Deployment Programs: An Approach for Quantifying Retrospective Energy Savings, Clean Energy Advances, and Market Effects (Main Report). Washington, DC: US Department of Energy, 2007.

Session Title: Advancing the Field of Evaluation
Multipaper Session 656 to be held in SEGUIN B on Friday, Nov 12, 3:35 PM to 4:20 PM
Sponsored by the Graduate Student and New Evaluator TIG
A Rae Clementz,  University of Illinois at Urbana-Champaign, clementz@illinois.edu
Metaphors, Models, and Analogies as the Tools for Constructing Understandings in an Evaluation
A Rae Clementz, University of Illinois at Urbana-Champaign, clementz@illinois.edu
Abstract: Metaphors, models, and analogies pepper our everyday speech, but research in the cognitive sciences has gone further, suggesting that analogical reasoning is one of the foundational means by which we make sense of the world (Lakoff, 1993; Hofstadter, 2001). Analogical reasoning is both incredibly powerful and problematic in evaluation. Metaphors, models, and analogies can be useful tools for capturing meaning and understanding of a program (Kemp, 1999) and can help us communicate with clients and stakeholders (Reddy, 1979), but the range of an evaluator's prior experience has significant implications for the quality of the observations and judgments she makes. This paper looks at the use of metaphors, models, and analogies by evaluators and stakeholders during meetings, in written communication, and conceptually, as they work to understand the quality and characteristics of a program.
Moving From Outputs to Outcomes and Impact: Accountability and Evaluation Quality
Korinne Chiu, University of North Carolina at Greensboro, k_chiu@uncg.edu
Kelly Graves, University of North Carolina at Greensboro, kngrave3@uncg.edu
Abstract: This paper discusses the transition within agencies from output-focused results toward impact-focused evaluations beyond what is simply required by funding agencies. A case study of a federally-funded program will provide examples of how evaluation requirements for government-funded agencies set the stage for the types of evaluation information collected. Although the information collected may meet the funding agency’s requirements, the evaluation may not provide a comprehensive view of the program. As funding agencies are transitioning to different standards of program accountability, including an emphasis on outcomes and societal impacts, programs may need guidance on how to transition their program findings into a new format and how to use the findings to inform ongoing program quality improvement. Recommendations will be provided on how to collaborate with programs in order to provide a quality evaluation that complies with funding agency standards in addition to producing a comprehensive evaluation of the program.

Session Title: Quality of International Development Evaluation in a Time of Turbulence
Expert Lecture Session 657 to be held in Republic A on Friday, Nov 12, 3:35 PM to 4:20 PM
Sponsored by the International and Cross-cultural Evaluation TIG
Patrick Grasso, World Bank, pgrasso45@comcast.net
Vinod Thomas, World Bank, vthomas@worldbank.org
Abstract: International development is undergoing a transformation driven by fundamental shifts in the global economic architecture. In this time of turbulence and change, there is a premium on evaluative evidence to guide directions. But to seize this opportunity, evaluators need to revisit the existing evaluation frameworks, lift the game in terms of addressing important issues, and match the uncertainties of the time, while maintaining—indeed improving—evaluation quality. This lecture reviews the challenges for such a reorientation, drawing on the experience of the Independent Evaluation Group of the World Bank Group and other development organizations.

Session Title: Health Care Delivery, Effectiveness, and Reform: Where to Go From Here?
Multipaper Session 658 to be held in Republic B on Friday, Nov 12, 3:35 PM to 4:20 PM
Sponsored by the Health Evaluation TIG
Michael Harnar, Claremont Graduate University, michaelharnar@gmail.com
Evaluating Health Insurance Outreach and Enrollment Efforts Associated With Massachusetts Health Care Reform
Linda Cabral, University of Massachusetts, linda.cabral@umassmed.edu
Christine Clements, University of Massachusetts, christine.clements@umassmed.edu
Michael Tutty, University of Massachusetts, michael.tutty@umassmed.edu
Ellen Bhang, University of Massachusetts, ellen.bhang@umassmed.edu
Kathy Muhr, University of Massachusetts, kathy.muhr@umassmed.edu
Abstract: Through the Enrollment Outreach Grant Program, Massachusetts Medicaid provides grants to community organizations to use innovative strategies to reach uninsured residents and assist them with enrollment in Medicaid and other state-subsidized health insurance options. UMass Medical School's Center for Health Policy and Research evaluated the contribution of this program to advancing the Massachusetts health care reform goal of increasing the number of state residents with health insurance. The evaluation aim was to understand how the program has a) supported people in navigating health insurance enrollment, and b) adapted in scope and services to meet the unique needs of multiple stakeholders. Using a case study methodology, the evaluation team conducted site visits to a third of grantees (N=17) to interview outreach workers and supervisors and shadow outreach worker activities. This paper will present the project's findings and highlight the use of qualitative methods to understand how health care reform goals are achieved.
Changing Nature of Healthcare Performance Measurement
David Mohr, United States Department of Veterans Affairs, david.mohr2@va.gov
Katerine Osatuke, United States Department of Veterans Affairs, katerine.osatuke@va.gov
Thomas Brassell, United States Department of Veterans Affairs, thomas.brassell@va.gov
Abstract: Interest in performance measurement in healthcare has greatly expanded in the past decade. Healthcare system managers, providers, purchasers, policy makers, and consumers have become more interested in evaluating how effectively and efficiently care is being delivered. This initial attention was motivated by reports that individuals were receiving suboptimal care. Since then, interest in evaluation has been maintained and further motivated by changing policies around expense reimbursements, healthcare system accreditation, and quality improvement priorities. While performance measurement can lead to improvements, there are possible tradeoffs that result from a focus on compliance with quality standard measures versus prioritizing individual patient care needs. Additionally, challenges exist in implementing tracking and measurement systems to support evaluation and in gaining the support of providers when applying these systems. The paper will discuss historical background, motivating factors, advantages and disadvantages, and likely future directions in performance evaluation.

Session Title: Risk and Tension in Private Sector Development Project Evaluation
Multipaper Session 659 to be held in Republic C on Friday, Nov 12, 3:35 PM to 4:20 PM
Sponsored by the Business and Industry TIG
Zita Unger, Evaluation Solutions Pty Ltd, zitau@evaluationsolutions.com
The Whole is Greater Than the Sum of Parts: Program Evaluation of the Chad Cameroon Petroleum Development and Pipeline Program
Cherian Samuel, International Finance Corporation, csamuel@ifc.org
Abstract: Projects in extractive industries with complex risks, operating in uncertain policy environments, require a mix of public and private sector support to achieve social and environmental outcomes and economic growth. In 2000, the World Bank Group (WBG) approved a program of financial and technical support for the Chad Cameroon Petroleum Development and Pipeline Program (PDPP), managed by a consortium of oil companies led by ExxonMobil. The program was evaluated in 2009 using distinct methodologies for assessing public and private sector projects. This paper analyzes the challenges encountered in harmonizing different evaluation approaches to arrive at a coherent view of the development results of the program. Reference: The World Bank Group Program of Support for the Chad-Cameroon Petroleum Development and Pipeline Construction, Program Performance Assessment Report No. 50315, November 20, 2009. http://siteresources.worldbank.org/INTOED/Resources/ChadCamReport.pdf
Adjusting Development Outcomes by Riskiness of Private Sector Investment Projects: Risk-Adjusted Expected Development Outcome
Hiroyuki Hatashima, International Finance Corporation, hhatashima@ifc.org
Abstract: The private sector is regarded as a key driver of economic growth. Although risk taking is a defining characteristic of private business, evaluations of private sector development have not addressed risk adequately. Based on past evaluations of the International Finance Corporation (IFC), the private sector arm of the World Bank Group, the Independent Evaluation Group (IEG) proposes a Risk-Adjusted Expected Development Outcome (RAEDO) framework. This approach estimates a project's probability of achieving high development outcomes, based on a regression that estimates the impacts of its risk conditions (such as country, sponsor, product market, and project type risks) and of the factors that can be controlled by the institution (such as work quality and contributions during the project cycle). Actual outcomes are compared against the predicted values to determine whether gaps are due to project risks or to factors that could be controlled by the institution.
