Session Title: In the Multi-level Systemic Evaluation Universe, Whose Responsibility is Quality? A Discussion Among Respected Colleagues
Think Tank Session 582 to be held in Lone Star A on Friday, Nov 12, 1:40 PM to 3:10 PM
Sponsored by the Presidential Strand and the Cluster, Multi-site and Multi-level Evaluation TIG
Kathleen Toms, Research Works Inc, katytoms@researchworks.org
Sandra Mathison, University of British Columbia, sandra.mathison@ubc.ca
Michael Morris, University of New Haven, mmorris@newhaven.edu
Linda E Lee, Proactive Information Services Inc, linda@proactive.mb.ca
Josue De La Rosa, Research Works Inc, jdelarosa@researchworks.org
Abstract: As funder requirements filter through levels of their particular systems, evaluators attached to each of those levels can find themselves caught in a quality conundrum. This think-tank will consider and discuss this issue. This includes an exploration of some possible solutions, for example, whether mandatory Standards and Principles could help to achieve systemic evaluation quality. This session brings together a group of respected colleagues to frame and facilitate a discussion of this emerging challenge to evaluation quality: Sandra Mathison, widely published and referenced author and professor known for her strong social responsibility stance regarding evaluation theory and practice; Michael Morris, respected author and professor known for his work in evaluation ethics and the AEA Guiding Principles; Linda E. Lee, past National President of the Canadian Evaluation Society and practicing evaluator in Canada and abroad; and Josue De La Rosa, a new evaluator.

Session Title: Fitting the Key to the Lock: Matching Systems Methods To Evaluation Questions
Skill-Building Workshop 583 to be held in Lone Star B on Friday, Nov 12, 1:40 PM to 3:10 PM
Sponsored by the Systems in Evaluation TIG
Bob Williams, Independent Consultant, bobwill@actrix.co.nz
Abstract: A quality evaluation not only has to use a particular method, technique, or approach well, it has to choose it appropriately. Despite increasing interest in systems approaches, and examples of the use of specific methods, there has been little guidance on which systems method would be suitable in which evaluation situation. Consequently, for many evaluators the leap between thinking systemically and using the most appropriate systems methods in their evaluation is a bit of a gamble. Out of the many hundreds available, which ones can be useful in what circumstances; which systems keys fit which evaluation locks? One solution lies in evaluation questions: some systems methods address particular evaluation questions very well. Drawing on the just-published book, Systems Concepts in Action: A Practitioner's Toolkit, this session will demonstrate how evaluators can match specific systems methods to their evaluation questions.

Session Title: Youth Participatory Evaluation: Where Are We and Where Do We Go From Here?
Think Tank Session 584 to be held in Lone Star C on Friday, Nov 12, 1:40 PM to 3:10 PM
Sponsored by the Collaborative, Participatory & Empowerment Evaluation TIG
Kim Sabo Flores, Evaluation Access and ActKnowledge, kimsabo@aol.com
Jane Powers, Cornell University, jlp5@cornell.edu
Katie Richards-Schuster, University of Michigan, kers@umich.edu
Abstract: Over the past decade, the practice of Youth Participatory Evaluation has grown significantly, evidenced by increased articles, books, and social networks focusing attention on the subject, along with curricula, toolkits, and web trainings. However, even with all of these new developments, questions about youth participation remain: Where is the field now and where are we going? With the proliferation of articles and materials, have we learned anything new? What types of impacts are being documented (e.g., on youth, adults, programs, organizations, and the field of evaluation)? What work needs to be done to continue supporting meaningful, quality youth participatory evaluation efforts? This think tank session will engage audience members in addressing these questions through small and large discussion groups. The goals of this think tank are to facilitate peer learning, gather key information about the current state of the field, and create actionable steps to further advance the work.

Session Title: Helping Nonprofit Agencies Move From Measuring Outcomes to Managing Them: A Budding Success Story From the United Way of Greater Houston
Panel Session 585 to be held in Lone Star D on Friday, Nov 12, 1:40 PM to 3:10 PM
Sponsored by the Non-profit and Foundations Evaluation TIG
Michael Hendricks, Independent Consultant, mikehendri@aol.com
Abstract: Like the rest of the nonprofit world, the United Way of Greater Houston (UWGH) and its partner agencies realize it’s no longer sufficient to measure only program inputs, activities, and outputs. Therefore, for the past 10 years UWGH has helped its partner agencies to also measure outcomes. But while agencies dutifully complied, and while some benefits resulted, it was unclear whether this outcome measurement substantially improved services. Re-evaluating the situation, UWGH realized it needed to step back and ask itself “WHY do we want agencies to measure outcomes, and HOW can we make it more useful for them?” As a result, UWGH began a conscious shift away from the research-focused activity of mere outcomes measurement to the improvement-focused activity of outcomes management, and the benefits are beginning to become obvious. This session describes the change, from both the UWGH and agency perspectives, and offers recommendations for other funders and agencies.
The Exciting Shift From Outcomes Measurement to Outcomes Management: The National, Cross-sectoral Perspective
Michael Hendricks, Independent Consultant, mikehendri@aol.com
Following the Government Performance and Results Act (GPRA) of 1993, the United Way of America’s Measuring Program Outcomes in 1996, and the W.K. Kellogg Foundation’s Evaluation Handbook in 1998, the government, non-profit, and foundation sectors all began to measure outcomes of human service programs. This focused much-needed attention on the end results of most human service programs – changes in program participants. However, it was assumed that measuring outcomes would lead automatically to improved program services, and this proved to be false. Instead, outcome measurement was often viewed as a peripheral research activity and given low priority. Now, all three sectors are realizing that, for the measurement of outcomes to improve services, explicit and high-level steps must be taken to identify reasons for lower performance, develop and implement better ways of delivering program services, and re-measure outcomes. This presentation describes this overall shift and its implications for all three sectors.
The United Way of Greater Houston’s Journey With Outcomes Measurement and Management
Amy Corron, United Way of Greater Houston, acorron@unitedwayhouston.org
For the past 10 years, the United Way of Greater Houston (UWGH) has been working with the agencies it funds to use outcomes measurement to improve services and communicate value. While this produced some undeniable results in terms of marketing materials and community investments, the costs to sustain the effort seemed to rival the benefits. After some important strategic planning in 2007, UWGH recognized that outcomes measurement is more valuable when combined with management of results. The last three years have been spent on a new leg of the journey towards outcomes management, starting with “affinity groups” of common service providers coming together to share best practices and measure common outcomes and now continuing with extensive staff training on outcomes management. As we embark on this new leg of the journey, UWGH is also looking inward, at developing an outcomes management system for itself.
Facilitating the Journey: Making Sense of Outcomes Measurement and Management With Agency Partners
Najah Callander, United Way of Greater Houston, ncallander@unitedwayhouston.org
The United Way of Greater Houston’s (UWGH’s) transition from outcomes measurement to outcomes management has been a winding road for us and our agency partners. The journey has been as important to our success as the measurement. Working with and learning from the groups of similar agencies (affinity groups) has deepened our understanding of funded programs and improved the effectiveness of those programs. It has also prompted UWGH to provide agencies with technical assistance and other key support that we would not otherwise have provided, moving us from simply being a funder to being a partner. This shift has produced deeper relationships, better measurement, more meaningful management of data, and hopefully a measurable impact on the lives of those we serve. From this session funders, agencies, and interested others will understand what worked, where we still have challenges, and how to begin to implement this work in their own communities.
Helping Similar Agencies Manage Common Outcomes: The Agency Perspective
Abeer Monem, Fort Bend County Women's Center Inc, amonem@fortbendwomenscenter.org
Five separate and independent domestic violence service providers affiliated with the United Way of Greater Houston began over two years ago to learn about and devise a common outcomes management system. Throughout this process many positive developments have occurred: 1. All five agencies agreed upon and began measuring SMART outcomes. 2. A UWGH consultant helped us analyze results and revise surveys/procedures. 3. We discovered trends with respect to race and length of stay. 4. We implemented a quality assurance panel (inner agency) and conducted focus groups with clients to find out “why?” 5. Staff became more involved and provided feedback. On the other hand, collecting and reporting outcomes data takes staff time away from direct service, and we need more training on data analysis. However, the results have led to several proposed service improvements, which make the effort worthwhile. The process is still ongoing and valuable to meeting our clients’ needs.

Session Title: Enhancing the Quality of Evaluations by Rational Planning
Panel Session 586 to be held in Lone Star E on Friday, Nov 12, 1:40 PM to 3:10 PM
Sponsored by the Quantitative Methods: Theory and Design TIG
Frederic Malter, University of Arizona, fmalter@email.arizona.edu
Abstract: Evaluation reports are often disappointing, frequently because insufficient thought was given to the effort in the first place. A rule: the implementation of an evaluation never improves once the process has begun. The quality of evaluations can be improved by initial attention to the theory and models underlying the evaluation, without which evaluation is likely to lack focus and veer off target. The design of an evaluation is often poorly specified and less rigorous than it could and should be, sometimes because options are not fully considered. Measurement issues are critical and require attention, but they frequently are resolved in arbitrary ways. Finally, plans for analysis of data should be developed in concert with those for design, methods, and measurement, but in many cases data analyses represent a sort of Procrustean bed for whatever resulted from previous efforts, no matter how flawed. These problems are discussed in relation to specific examples.
Building Evaluations on Theories and Models
Michele Walsh, University of Arizona, mwalsh@email.arizona.edu
Lee Sechrest, University of Arizona, sechrest@email.arizona.edu
Evaluation efforts should be guided by a theory and a model of the evaluation process (that is, as opposed to the theory of the intervention itself), and the theory and associated model need to be explicit. The theory/model can best be thought of in terms of Meehl’s scheme for theory appraisal, which requires specification not only of the hypothesis of interest but also of the auxiliary hypotheses, related to the theory and to the particular research implementation, that must be true if the main hypothesis is to be tested adequately. These auxiliary hypotheses are rarely made explicit in research, including program evaluation. The nature of the specifications required will be illustrated by reference to one large-scale evaluation and one local evaluation, with the implications for the failures of these evaluations being made evident.
Design and Methods in the Planning Stage of Evaluation
Katherine McKnight, Pearson Corporation, kathy.mcknight@gmail.com
Although it might seem unlikely that an evaluation could be undertaken without a reasonably specific account of its design and the methods to be employed, as a brief review will show, those requirements are often not met. One reason for that failure is that just what is required in order to define a design and its accompanying methods seems not to be uniformly understood. Moreover, designs and methods may sometimes be proposed that a review of other work, and perhaps even just good thinking, would show to be evidently unrealistic. The requirements for an adequate design and the methods appropriate to implement it can be outlined, and the processes appropriate for meeting those requirements can similarly be defined. An illustration of the requirements and their realization will be presented in the form of the reconstruction of a completed evaluation project.
What Happens When Measurement Planning Fails
Patricia Herman, University of Arizona, pherman@email.arizona.edu
Mei-kuang Chen, University of Arizona, kuang@email.arizona.edu
Measurement for evaluation needs to be a strategic, planned activity. It cannot be allowed simply to develop and follow its own course or the course of whoever has an idea at a particular moment. It is not true, despite what sometimes seems to be the assumption, that if enough data are collected, surely something can be made of them. Using as an example a large, national data set, problems in identifying variables, data reduction, specification of analytic models, and missing data will be illustrated. Recommendations for effective planning for measurement will be made, including how the project involved in the illustration might have been improved.
Data Analysis: Forethought, Not Afterthought
Patrick McKnight, George Mason University, pmcknigh@gmu.edu
Planning for data analysis should be part of the plan for the implementation of any evaluation, not left to the end under the supposition that somehow it will all be worked out. The general nature of the analysis that should be employed is often implicit in the design of the evaluation and the specification of the methods to be employed, but more than a general idea is needed. A good planning tactic is to try to identify the specific data elements that will derive from evaluation activities and then to determine where each of those elements will fit into an analysis that will answer questions of central interest. A “map” showing each element, where it will come from and when, to which evaluation questions it will be related, and where in the analysis it will be used can be helpful. An illustration of such a map will be presented.

Session Title: AEA and Public Engagement: How Can the American Evaluation Association’s Role in Public Engagement Contribute to Evaluation Quality?
Think Tank Session 587 to be held in Lone Star F on Friday, Nov 12, 1:40 PM to 3:10 PM
Sponsored by the AEA Conference Committee
Thomaz Chianca, COMEA Evaluation Ltd, thomaz.chianca@gmail.com
Nicki King, University of California, Davis, njking@ucdavis.edu
Jim Rugh, Independent Consultant, jimrugh@mindspring.com
Abstract: AEA’s shift in governance to a policy-based model has changed the structure and focus of all of the Association’s committees. The newly-formed Public Engagement Team (PET) is the amalgamation of AEA’s former Public Affairs and International Committees. We define “public engagement” (PE) as the coordinated set of activities AEA conducts outside our own association in order to (a) present to, (b) learn from, and (c) collaborate with other relevant organizations and individuals, both within the US and internationally. This think tank session will emphasize both small-group discussions and cross-group sharing on a variety of subjects related to PE: collaboration, learning from others, and presenting concerns about evaluation policy and practice. A specific issue we will use as an example for discerning how AEA can and should be involved in PE will be addressing the implications of the “NONIE Impact Evaluation Guidance” for how international development programs should be evaluated.

In a 90 minute Roundtable session, the first rotation uses the first 45 minutes and the second rotation uses the last 45 minutes.
Roundtable Rotation I: Managing Evaluation: Continuing the Conversation
Roundtable Presentation 588 to be held in MISSION A on Friday, Nov 12, 1:40 PM to 3:10 PM
Sponsored by the Evaluation Managers and Supervisors TIG
Donald Compton, Centers for Disease Control and Prevention, dcompton@cdc.gov
Michael Baizerman, University of Minnesota, mbaizerm@umn.edu
Michael Schooley, Centers for Disease Control and Prevention, mschooley@cdc.gov
Abstract: This roundtable continues the conversation on how to recognize, assess, train, and evaluate professionals to manage evaluation studies, evaluators, and an evaluation unit. The goal is to foster greater recognition of managing evaluation within the profession, to gain legitimacy for the practice, and to begin education and training in doing this work at an expert level. The discussion will be grounded in, but not limited to, our recent New Directions for Evaluation issue, Managing Program Evaluation: Towards Explicating a Professional Practice. Therein, we collected and analyzed case examples of expert managing, assessed the literature, and proposed foci for study as well as education and training curricula and pedagogy. We will continue touching on these topics and will distribute handouts on them. The roundtable is an alternate format for keeping attention on this vital evaluation practice, while collecting data from participants useful for critiquing our analyses and proposals, and for recruiting others who are managing or would like managing to be their evaluation career.
Roundtable Rotation II: Directors of Research Ensuring Quality in Practice
Roundtable Presentation 588 to be held in MISSION A on Friday, Nov 12, 1:40 PM to 3:10 PM
Sponsored by the Evaluation Managers and Supervisors TIG
Colleen Manning, Goodman Research Group Inc, manning@grginc.com
Abstract: Attend this roundtable to learn what other directors of research are doing to ensure evaluation quality, from the theoretical to the practical. Find out how your practices are similar to and differ from those of your fellow directors. Hopefully, you will leave the discussion with some new contacts and new strategies for your work! For the purposes of this session, we are defining a “director” as someone who provides leadership and oversight to others who carry out evaluations. The facilitator is a director of research at a small research and evaluation firm.

Session Title: Evaluation Management Policies: Examining Requirements of Quality Evaluation
Multipaper Session 589 to be held in MISSION B on Friday, Nov 12, 1:40 PM to 3:10 PM
Sponsored by the
Lisa Rajigah, International Initiative for Impact Evaluation (3ie), lrajigah@3ieimpact.org
Leslie Fierro, SciMetrika, let6@cdc.gov
Gary Miron, Western Michigan University, gary.miron@wmich.edu
School Reform Without Evaluation: A Policy That Undermines Evaluation Quality- A Guide for Schools to Respond
Louis Cicchinelli, Mid-continent Research for Education and Learning, lcicchinelli@mcrel.org
Zoe Barley, Mid-continent Research for Education and Learning, zbarley@mcrel.org
Abstract: Quality in evaluation must originate with program developers and adequately support program implementers. This presentation will feature an evaluation guide, based on the recent school improvement grant requirements, for low-performing schools/districts whether or not they have received school improvement funding. The grant requirements do not support a local evaluation, only limited data collection. The lack of evaluation guidance is a serious quality issue both for the program and for evaluation as a field. The guide begins with a feasibility analysis and then assists the school with developing a logic model and a local school evaluation plan and with assigning a team to carry it out (with an external evaluator to assist). Worksheets are included, as well as examples of completed worksheets. With audience recommendations for improving the guide, we intend to make it widely available.
Effective Policy in Saudi Arabia's Gifted Organizations
Fayez Shafloot, Western Michigan University, eval.p@hotmail.com
Abstract: This paper will describe how evaluation policy in gifted and talented organizations in the Kingdom of Saudi Arabia (KSA) affects the quality of program inputs, processes, and outcomes. KSA has a new gifted organization, King Abdulaziz and his Companions Foundation for the Gifted, which is supported by the government and private sectors and serves as the leading organization for supporting and conducting all required events and activities for gifted students. The evaluation policy being used is imported from Western policies that fit Western culture and are consistent with Western school systems and family supports. This policy will affect the program's reputation in the long term if it is not reevaluated and updated to fit KSA culture.
Blueprint for Quality Evaluation and Quality Services in Human Service Organizations: Why Evaluation Policy Matters
Kristin Kaylor Richardson, Western Michigan University, kkayrich@comcast.net
Abstract: The quality of evaluation, either as a single study, a series of studies, a cumulative process of knowledge building, or as “evaluation capacity” mainstreamed or built into the very fabric of an organization will in large measure depend upon the nature, scope and influence of evaluation policies. Human service agencies must be responsive to accountability demands, yet many also strive to build evaluation capacity, developing communities of practice rich in knowledge and well-positioned to act with purpose and wisdom. How can evaluation models and methods be strategically adopted and integrated to support accountability, learning and intelligent action? What is the connection between evaluation policy and how evaluation is conceptualized and practiced in agency settings? Why does evaluation policy matter? This paper addresses these questions through review and critique of existing agency-centered evaluation frameworks, and then proposes an integrated approach to strengthen the quality of evaluation in human service organizations.
Evaluation Policy Inventory
Margaret Johnson, Cornell University, maj35@cornell.edu
William M Trochim, Cornell University, wmt1@cornell.edu
Abstract: Many organizations lack clear policies about how, when, and what to evaluate, and about how evaluation will be supported and sustained. In the absence of a conscious and comprehensive approach to evaluation policy, some policies of an organization may support evaluation while others undermine it. Policies at different levels of an organizational system may overlap or contradict one another. This presentation uses an empirically derived taxonomy of evaluation policies, built from the survey responses of AEA members in 2009, to develop an Evaluation Policy Inventory (EPI) instrument. The EPI is intended for use by any organization wishing to take stock of existing evaluation-related policies, pinpoint conflicting evaluation policies, and show gaps in evaluation policy.
Exploring Dueling Federal Funder Preferences and Program Evaluation Needs: Challenges With Conflicting Interests and Evaluator Roles
Holly Downs, University of Illinois at Urbana-Champaign, hadowns@illinois.edu
Abstract: A growing tension exists between federal funder preferences for an “external” evaluator and the evaluation needs of a program. This paper will explore this dilemma encountered when a federally-funded undergraduate science and mathematics program at the University of Illinois shifted from an external to an internal evaluation team (i.e., a team at the university but outside the program departments). While the funder encouraged an external evaluator in the RFP, a shift by the program coordinators, who wanted better onsite data collection and Institutional Review Board coordination, trumped this funder preference post-award. These dueling dynamics can cause a chasm between what is valued and encouraged by funding agencies and the needs of the actual program. The goal of this paper is twofold: to discuss the impact of federal funders’ preferences of external evaluators on this program and to consider the implications of these issues on the field of evaluation.

Session Title: Integrating Management Consulting Competencies into the Evaluation Process
Panel Session 590 to be held in BOWIE A on Friday, Nov 12, 1:40 PM to 3:10 PM
Sponsored by the Independent Consulting TIG
Pamela Davidson, University of California, Los Angeles, davidson@ucla.edu
Abstract: Our review of the literature reveals that, for a variety of reasons, the field has paid little attention to the consultative aspects of the work conducted by evaluators. This session will deal with this important but neglected component of the role and activities engaged in by most professional evaluators. Its primary objectives are to share a set of concepts and practices essential for consulting with organizations and managers and to integrate consulting activities and competencies into the role of the professional evaluator. More specifically, our focus will be on (1) the consulting aspects of the evaluator’s role, (2) the nature of and approaches to consultation, (3) the management consulting concepts and methods most relevant for evaluators, (4) the consultative process, including its inherent issues and challenges, (5) the management of organizational change interventions, and (6) identifying and integrating consulting competencies into the role of professional evaluators.
The Evaluator as a Consultant to Management
Anthony Raia, University of California, Los Angeles, traia@anderson.ucla.edu
As indicated in our review of the literature, we suggest that not enough attention is given to the professional evaluator’s role as a consultant to management and organizations. Many evaluators are called upon to provide advice to managers, and in one way or another, almost all are involved in evaluating some sort of intervention and/or change initiative. This first presentation will (1) provide an introduction to and an overview of the rationale for the panel presentations, (2) examine the evolving role of the evaluator and expand it to include that of a consultant, (3) discuss the nature of consultation, as well as some of the different approaches to consulting activities, and (4) share a number of basic concepts and methods used by management and organizational development consultants when they work with organizations and managers.
The Consultative Process and the Management of Change
Kurt Motamedi, Pepperdine University, kurt.motamedi@pepperdine.edu
In one way or another, the work of most evaluators involves evaluating the impact of change interventions in organizations, in communities, and/or in other larger systems. An abundance of concepts, methods, and best practices can be found in the fields of Management Consulting and Organizational Change and Development, many of which we believe can enhance the overall effectiveness of a professional evaluator. This presentation will (1) provide a detailed overview of the consultative process, (2) identify and suggest ways to deal with the issues and challenges inherent in each of its phases or steps, and (3) share a set of conceptual tools, methods, and practices related to change management and to the planning and implementation of both small and large system interventions.
Consulting Competencies to Expand the Evaluator’s Role
Pamela Davidson, University of California, Los Angeles, davidson@ucla.edu
Ghere and colleagues (2006) proposed the Essential Competencies for Program Evaluators (ECPE) and outlined a core set of competencies that can be adapted and/or expanded to address specific teaching, practice, and research needs. The validation process included a crosswalk comparison with the competencies endorsed by the AEA and the Canadian Evaluation Society. Similarly, Management Consulting and Organizational Development researchers have developed a number of consulting competency models designed to improve the practice and teaching of the disciplines. Based on a review of the literature and our own experiences as consultants, we have extracted from these fields a set of competencies most relevant for professional evaluators. This presentation will (1) provide a brief overview of some best-in-class competency models, (2) suggest a set of consulting competencies to expand the ECPE, and (3) provide guidance on how professional evaluators can develop and/or enhance their own consulting skills.

Session Title: The Evolution and Revolution of Culturally Responsive Evaluation
Panel Session 591 to be held in BOWIE B on Friday, Nov 12, 1:40 PM to 3:10 PM
Sponsored by the Multiethnic Issues in Evaluation TIG
Fiona Cram, Katoa Ltd, katoaltd@gmail.com
Karen E Kirkhart, Syracuse University, kirkhart@syr.edu
Nan Wehipeihana, Research Evaluation Consultancy Limited, nanw@clear.net.nz
Abstract: This panel brings together a group of evaluators actively engaged in utilizing Culturally Responsive Evaluation (CRE) strategies with cultural subgroups in the United States (U.S.), as well as with members of indigenous communities in the U.S. and New Zealand. The panel will discuss the emergence of CRE as a revolutionary concept in response to traditional evaluation methods and then highlight the progression of CRE in evaluative discourse over the last decade as a vehicle for the enhancement of evaluation quality. Discussion will center on international, indigenous, and comparative perspectives when working in communities of color and finally examine evaluation tools designed to make effective use of the cultural context of the evaluand.
Looking Back to Move Forward: The Visual Transition of the African American Culturally Responsive Evaluation System (ACESAS) Logic Model to Enhance Culturally Responsive Evaluation in African American Communities
Pamela Frazier-Anderson, Lincoln University, pfanderson@lincoln.edu
Culturally Responsive Evaluation (CRE) is a transformative method within the field of evaluation. Its evolution and basic tenets will be explored, along with suggestions for further study. One area currently explored by a few CRE researchers is the utilization of culturally responsive logic models for programs serving ethnic/cultural minority groups and/or low-income areas. The African American Culturally Responsive Evaluation System (ACESAS) is a logic model (still in its formative stages) designed to support CRE in African American communities, particularly those communities and programs serving individuals where potential power differentials could impact evaluation quality and ultimately results. A brief overview of the ACESAS, as well as its visual transition from a more traditional view to one that is more culturally responsive to the communities it seeks to serve, will be introduced.
Looking Within: Expressing Program Stories With Cultural Metaphors.
Joan LaFrance, Mekinak Consulting, lafrancejl@gmail.com
The American Indian Higher Education Consortium (AIHEC) developed an Indigenous Evaluation Framework (IEF) that incorporates cultural ways of knowing drawn from values and epistemologies of Native Americans. Central to the framework is “creating story,” which is presented in ways that encourage visualizing programs using metaphors relevant to local tribal cultures. Participants in IEF workshops have responded to the invitation to formulate their “stories” using alternatives to traditional logic modeling. This presentation will illustrate ways in which story is created and how “indigenizing” the evaluation invites a clearer understanding of the functions and importance of linking evaluation to program implementation.
Tensions Between What Is Knowable From the Outside and What is Knowable From the Inside
Kataraina Pipi, Independent Consultant, kpipi@xtra.co.nz
This presentation focuses on the topic of CRE from a Maori viewpoint. In Aotearoa/New Zealand there is a range of ways in which we undertake evaluation within our indigenous communities. Sometimes we undertake evaluations exclusively as indigenous people within our communities, and other times we invite non-Maori to work alongside us. This inevitably raises discussion and debate around whether it is acceptable for evaluators from outside the culture to do, or claim to do, CRE. By inviting white evaluators to dive back into evaluations in communities of color, how do we ensure that the historical context of betrayal and mistrust, power issues, and those whose voices have been silenced are safeguarded? These tensions are reflected upon, and the value of being culturally responsible, discerning culturally embedded assumptions, and understanding your cultural position is considered.
Responsible Responsiveness in Evaluation: To Whom and to What?
Jennifer Greene, University of Illinois at Urbana-Champaign, jcgreene@illinois.edu
Jeehae Ahn, University of Illinois at Urbana-Champaign, jahn1@illinois.edu
Ayesha Boyce, University of Illinois at Urbana-Champaign, boyce3@illinois.edu
Evaluators work in tangled and complex contexts. These contexts are threaded with multi-faceted strands of politics, social dynamics, organizational traditions, and policy expectations – all of which are framed and enacted through the lenses of culture. Culture refers broadly to shared understandings of behavior, values, rhythms of daily life, as well as interpretive meanings, expectations, and aspirations. So, what are the dimensions of these complex constructs of context and culture that are invoked in contemporary versions of culturally responsive evaluation? And with what justifications? This presentation anchors ‘responsible responsiveness’ in commitments to contextualized understandings of diversity and to advancing equity in both opportunity and accomplishment. These ideas are presented via evaluation skits and stories.

Session Title: Health Matters: Evaluating Advocacy and Policy Change
Multipaper Session 592 to be held in BOWIE C on Friday, Nov 12, 1:40 PM to 3:10 PM
Sponsored by the Advocacy and Policy Change TIG and the Health Evaluation TIG
Astrid Hendricks,  California Endowment, ahendricks@calendow.org
Using Case Studies to Assess Longer-Term Outcomes of Expanded Advocacy Capacity
Annette Gardner, University of California, San Francisco, annette.gardner@ucsf.edu
Claire Brindis, University of California, San Francisco, claire.brindis@ucsf.edu
Lori Nascimento, California Endowment, lnascimento@calendow.org
Sara Geierstanger, University of California, San Francisco, sara.geierstanger@ucsf.edu
Abstract: In 2009 and 2010, the Philip R. Lee Institute for Health Policy Studies at the University of California, San Francisco, as part of its ongoing evaluation of The California Endowment’s Clinic Consortia Policy and Advocacy Program, developed case studies detailing the advocacy activities and policy outcomes of 17 grantee agencies. Informants included decision makers, clinic staff, clinic patients, and clinic consortia staff. UCSF categorized the 17 case study initiatives by health program type (insurance coverage expansions, coordinated services, and statewide policy initiatives) and analyzed the gains for each Program outcome. Where possible, quantitative measures were aggregated, e.g., number of children enrolled in Medicaid. The objective of these case studies was to assess grantees’ achievement of three longer-term outcomes, specifically 1) strengthened clinic operations; 2) increased services for the underserved and uninsured; and 3) improved health outcomes for targeted communities and populations.
Employing a Systems Change Framework to Evaluate Health and Education Policy Advocacy in California
Mary Kreger, University of California, San Francisco, mary.kreger@ucsf.edu
Claire Brindis, University of California, San Francisco, claire.brindis@ucsf.edu
Abigail Arons, University of California, San Francisco, abigail.arons@ucsf.edu
Gaylen Mohre, University of California, San Francisco, gaylen.mohre@ucsf.edu
Elodia Villasenor, University of California, San Francisco, elodia.villasenor@ucsf.edu
Katherine Sargent, University of California, San Francisco, katherine.sarget@ucsf.edu
Sara Truebridge, WestEd, struebr@wested.org
Bonnie Benard, WestEd, struebr@wested.org
Mona Jhawar, California Endowment, mjhawar@calendow.org
Abstract: Only 75% of ninth graders in California graduate from high school; in some school districts, the percentage is as low as 50%. In 2006-07, more than one in four Hispanic youth, and more than one in three African American youth dropped out of high school. Reducing the achievement gap and improving educational success in California requires a systems approach to integrate health and educational supports, to support youth in facing complex social and health conditions. We present process and outcome measures to assess collaboration efforts among interdisciplinary stakeholders for policy advocacy. Additionally, we analyze policy advocacy as it relates to structural changes across multiple sectors of communities and state policy. We present outcomes for building the policy advocacy network at a state level and implementing local-level programs. A typology of collaboration is presented that assesses barriers, successes, and cross-pollination opportunities.
Tracking and Assessing State Policies Related to Food and Active Living
Martha Quinn, University of Michigan, marthaq@umich.edu
Laurie Carpenter, University of Michigan, lauriemc@umich.edu
Abstract: Between 2007 and 2009, the Center for Managing Chronic Disease at the University of Michigan tracked and analyzed state-level policy changes in 7 states where the W.K. Kellogg Foundation’s Food & Fitness Initiative grantees are located (California, Iowa, Massachusetts, Michigan, New York, Pennsylvania and Washington). The Center contracted with the National Conference of State Legislatures, who used NetScan and other online tracking services to gather the data on state-level legislation, regulations, executive orders and resolutions. The Center staff then sorted, organized and analyzed the data to better understand the most common policy initiatives being considered in each state, types of initiatives most likely to be enacted or adopted, and changes over time. The Center staff also looked across states for similarities and differences in policy trends and innovations. The purpose of this session is to better inform conference participants about how tracking policy outcomes can enhance their evaluation efforts.
Conducting Research to Advance Smoke-Free Air Legislation in Louisiana
Jenna Klink, Louisiana Public Health Institute, jklink@lphi.org
Nikki Lawhorn, Louisiana Public Health Institute, nlawhorn@lphi.org
Lisanne Brown, Louisiana Public Health Institute, lbrown@lphi.org
Abstract: The Louisiana Smoke-Free Air Act prevents smoking in all workplaces, with the exception of bars and casinos. In support of policy efforts to expand the smoke-free air law, several special studies were conducted. The first study measures the level of secondhand smoke exposure in unprotected workplaces by comparing the levels of saliva cotinine and nicotine of non-smokers who work in bars and casinos to non-smoking controls who work in smoke-free workplaces. Findings suggest individuals who work in environments where smoking is allowed have elevated saliva cotinine and nicotine levels, in some cases comparable to levels in smokers, and are therefore at risk for tobacco-related morbidity and mortality. The second study evaluates the air quality in unprotected workplaces by measuring fine particulate matter concentrations with aerosol monitoring machines. The findings from both studies were used to demonstrate the negative effects of non-comprehensive smoke-free legislation.

In a 90 minute Roundtable session, the first rotation uses the first 45 minutes and the second rotation uses the last 45 minutes.
Roundtable Rotation I: The Essentials of a Quality Evaluation Capstone Project, Practicum or Internship: Students’ Perspectives
Roundtable Presentation 593 to be held in GOLIAD on Friday, Nov 12, 1:40 PM to 3:10 PM
Sponsored by the Teaching of Evaluation TIG
Aubrey W Perry, Portland State University, aubrey.perry@gmail.com
Robert Tornberg, University of Minnesota, tornb012@umn.edu
Veronica Smith, data2insight, veronicasmith@data2insight.com
Abstract: Undergraduate and graduate students’ capstone projects, practicums, and internships offer valuable experience to soon-to-be evaluators. This real-world exposure helps bridge the gap between academia and the workplace. Three graduate students recap their experiences serving in evaluation settings to fulfill graduate requirements. The presenters will compare and contrast their diverse experiences, methods, and challenges while serving in different settings, including a small non-profit organization aimed at employing people with disabilities, a public radio station, and a museum of natural history. The students will discuss what knowledge and skills they developed as a result of their experience. The presenters will also offer tips on entering into an evaluation experience to fulfill education requirements and how to maximize the quality of your project experience. Finally, they will open the floor to discussion for session attendees to ask questions and describe their educational evaluation experiences.
Roundtable Rotation II: An Evaluation Seminar: How Our Students Gain Practical Experience in Evaluation and Research Methodology
Roundtable Presentation 593 to be held in GOLIAD on Friday, Nov 12, 1:40 PM to 3:10 PM
Sponsored by the Teaching of Evaluation TIG
Jennifer Morrow, University of Tennessee, Knoxville, jamorrow@utk.edu
Gary Skolits, University of Tennessee, Knoxville, gskolits@utk.edu
Thelma Woodard, University of Tennessee, Knoxville, twoodar2@utk.edu
Susanne Kaesbauer, University of Tennessee, Knoxville, skaesbau@utk.edu
Abstract: In this roundtable we will discuss a three-semester, three-credit seminar that we created for students in our graduate program in Evaluation, Statistics, and Measurement at our university. This seminar was designed to enhance our students’ training in evaluation and research methodology. We will discuss our reasons for creating the seminar and the design of the seminar, and two of our graduate students will discuss their experiences in it. Lastly, we will spend most of the time leading a discussion with the audience members on strategies for offering these evaluation and research experiences at their institutions.

In a 90 minute Roundtable session, the first rotation uses the first 45 minutes and the second rotation uses the last 45 minutes.
Roundtable Rotation I: Conceptualizing the Quality of Assessment
Roundtable Presentation 594 to be held in SAN JACINTO on Friday, Nov 12, 1:40 PM to 3:10 PM
Sponsored by the Assessment in Higher Education TIG
Steve Culver, Virginia Tech, sculver@vt.edu
Ray Van Dyke, Virginia Tech, rvandyke@vt.edu
Abstract: After the Spellings Commission’s report (2006) on the lack of accountability mechanisms in higher education, student outcomes assessment processes have become a part of the landscape of every college and university in the United States. Regional accrediting bodies, such as the Southern Association of Colleges and Schools (SACS), and discipline-specific accrediting bodies, such as ABET and AACSB, now evaluate programs and institutions based on the quality of their assessment process and the results of those processes. However, judging the quality of those assessments has become problematic. Is it more important to demonstrate a continuous improvement cycle or is it more important to demonstrate reflective practice not yet supported by empirical evidence? How much should context play into this evaluation and what should be the pertinent contextual factors? This session will explore questions (and some answers) about building the appropriate elements of such an evaluation.
Roundtable Rotation II: Perspectives on Collaboration in Practice: Expanding the Role and Culture of Assessment in Academic Affairs and Student Services in Higher Education
Roundtable Presentation 594 to be held in SAN JACINTO on Friday, Nov 12, 1:40 PM to 3:10 PM
Sponsored by the Assessment in Higher Education TIG
Ge Chen, University of Texas, Austin, gechen@austin.utexas.edu
Glen E Baumgart, University of Texas, Austin, gbaumgart@austin.utexas.edu
R Joseph Rodriguez, University of Texas, Austin, joseph.rodriguez@austin.utexas.edu
Abstract: Assessment requires reflection on our practices, which directly influence student learning outcomes and institutional effectiveness. The roundtable session will focus on the assessment and collaborative practices of a large division at The University of Texas at Austin that includes academic affairs, student services, and community outreach, to demonstrate the progress and growth achieved through strategic and assessment planning. Presenters will provide perspectives and reflection from three points of view and practice: divisional leadership, assessment consultant, and the practitioner department level. Discussion will center on the importance of language, culture, leadership, outside review, and practice in creating and sustaining quality assessment in a larger organization. Participants will leave the session with tools and information to apply the lessons learned in their own large-organization assessment practices.

Session Title: Social Network Analysis Across Disciplines and Purposes
Multipaper Session 595 to be held in TRAVIS A on Friday, Nov 12, 1:40 PM to 3:10 PM
Sponsored by the
Maryann Durland,  Durland Consulting, mdurland@durlandconsulting.com
Maryann Durland,  Durland Consulting, mdurland@durlandconsulting.com
Use of Longitudinal Social Network Analysis: Examining Changes in Networks Among Emerging Leaders in the Ladder to Leadership Program
Kimberly Fredericks, The Sage Colleges, fredek1@sage.edu
Julia Jackson-Newsom, University of North Carolina at Greensboro, j_jackso@uncg.edu
Heather Champion, Center for Creative Leadership, championh@ccl.org
Tracy Enright Patterson, Center for Creative Leadership, pattersont@ccl.org
Abstract: Ladder to Leadership is a national program of the Robert Wood Johnson Foundation, in collaboration with the Center for Creative Leadership. For this project, we are investigating the changes in networking and collaboration among cohorts of Fellows from eight different communities across the US. Up to 30 emerging, non-profit, community health leaders per community were selected to participate in a 16-month comprehensive leadership development program aimed at increasing collaboration among healthcare professionals. Social network analysis was used to assess relationships among program participants before and after the program. These data have been used to study changes over time in the networks in each of these communities, utilizing actor-oriented stochastic models and the RSIENA network analysis program. Initial findings suggest that there are proximity, gender, and popularity effects within the networks. Additionally, balance and transitivity play a key role in tie formation within the network.
Making Meaning: Participatory Social Network Analysis
Susan Connors, University of Colorado, Denver, susan.connors@ucdenver.edu
Marc Brodersen, University of Colorado, Denver, marc.brodersen@ucdenver.edu
Kathryn Nearing, University of Colorado, Denver, kathryn.nearing@ucdenver.edu
Bonnie Walters, University of Colorado, Denver, bonnie.walters@ucdenver.edu
Abstract: Sociograms developed through social network analyses graphically present evidence of interrelationships among individuals, programs, or disciplines. As part of a program evaluation, evaluators conducted social network analysis on archival data and prepared sociograms to depict the connectedness of biomedical investigators. The method was selected to investigate the interdisciplinary nature of the research teams before and after reorganization under the Clinical Translational Sciences Institute. To increase the relevance of such data and the likelihood that results will be used for program improvement, evaluators employed participatory evaluation techniques by involving key stakeholders in the analysis of sociograms. Evaluators interviewed program administrators concerning the resulting sociograms to gain their “insider knowledge” and to make meaning of the levels of interdisciplinary collaboration. Benefits and cautions of this mixed method approach are discussed.
Bridges and Barriers to Nurturing Interdisciplinary Research: Evaluating the Social Networks of Integrative Graduate Education and Research Trainees and Their Comparative Peers Over Time
Meg Haller, University of Illinois at Chicago, mhalle1@uic.edu
Eric Welch, University of Illinois at Chicago, ewwelch@uic.edu
Abstract: This paper evaluates the extent to which National Science Foundation’s Integrative Graduate Education and Research Traineeship (NSF IGERT) programs are able to accomplish goals of integration across disciplines and sectors through new pedagogical approaches. We present findings from a longitudinal evaluation of one IGERT program at a large urban university that combines two years of quantitative data from a web-based social network survey of graduate student program participants and a student comparison group. We assess differences in education and research outcomes, interactions among students and faculty, and the extent to which the value added of the program is attributable to differences in social networks. Results demonstrate the usefulness of complex methods, including control groups and longitudinal social network analysis, to address key evaluation questions and to assess interdisciplinary outcomes. Conclusions present implications for both policy and evaluation.
Applying Social Network Analysis to Understand How Youth-Serving Agencies Collaborate to Connect Youth at Risk of Suicide to Services
Anupa Fabian, ICF Macro, afabian@icfi.com
Elana Light, ICF Macro, elana.r.light@macrointernational.com
Christine Walrath-Greene, ICF Macro, cwalrath@macrointernational.com
Robert L Stephens, ICF Macro, robert.l.stephens@macrointernational.com
Michael S Rodi, ICF Macro, michael.s.rodi@macrointernational.com
Abstract: This presentation seeks to contribute to the evaluation literature on using social network analysis to assess how agencies work together to address complex community issues that necessitate community-wide collaborative approaches. Specifically, we will present on methods used to collect and analyze network data on whether and how youth-serving agencies work together to connect youth at risk of suicide to services. Using Social Network Analysis and other analytical methods, we will describe key characteristics of youth referral networks, including central players in the network, gaps in links between agencies, clusters of highly interacting agencies, typical referral patterns, network cohesiveness, the degree to which agencies have formal referral mechanisms in place, the quality of linkages and changes in networks over time. Successes and challenges in collecting, analyzing, interpreting and using network data to understand youth suicide prevention referral networks will also be discussed.
A Social Network Perspective for Evaluating Transnational Policy Diffusion: The Case of the Hyogo Framework for Action (HFA)
Aileen Lapitan, University of North Carolina at Charlotte, alapitan@uncc.edu
Abstract: The Hyogo Framework for Action (HFA) is a blueprint for building the resilience of nations and communities against disasters. It is part of the United Nations’ 2004 International Strategy for Disaster Reduction (UNISDR), the focal point for promoting partnerships, coordination, policy integration and information. This paper employs an evaluation strategy for cross-national diffusion of the HFA that considers geographical proximity, learning, isomorphism and vertical influence in the policy process (Berry & Berry, 2007). It combines a network approach with a policy diffusion framework by graphing existing networks of governmental and other institutional actors, emphasizing the dimension of embeddedness. Through social network analysis, this paper characterizes strategic hubs that may foster transnational policy diffusion. References: Berry, S., & Berry, W. (2007). Innovation and diffusion models in policy research. In P. A. Sabatier (Ed.), Theories of the Policy Process (pp. 223-260). Boulder, CO: Westview Press. UNISDR. (2004). Mission and objectives. Retrieved from http://www.unisdr.org/

Session Title: Multimedia Advances in Evaluation: The Use of Skype, Elluminate, and Virtual World Technologies in Conducting Focus Group Interviews
Demonstration Session 596 to be held in TRAVIS B on Friday, Nov 12, 1:40 PM to 3:10 PM
Sponsored by the Integrating Technology Into Evaluation TIG
Corliss Brown, University of North Carolina at Chapel Hill, ccbrown@email.unc.edu
Lauren Kendall, University of North Carolina at Chapel Hill, lkendall@email.unc.edu
Taniya Reaves, University of North Carolina at Chapel Hill, treaves@email.unc.edu
Jessica Milton, University of North Carolina at Chapel Hill, jrmilton@email.unc.edu
Johnavae Campbell, University of North Carolina at Chapel Hill, johnavae@email.unc.edu
Abstract: Technological advances have facilitated data collection for evaluation research by expanding accessibility for both researchers and participants. However, current use of email does not allow the real-time conference participation necessary for interviews and focus groups. More recent web-based media allow several modes of communication, including audio or video teleconferencing, instant messaging, and document sharing, with some software allowing for integrated use of all four methods to accommodate the range of accessibility for participants. Benefits of online focus groups include lower recruitment costs, increased engagement, and an absence of logistic and travel expenses. This demonstration examines the features of three applications that support facilitated data collection: Skype, Elluminate, and online virtual worlds. Limitations include increased training and orientation, discourse analysis, identity of participants, and possible technical difficulties. Integrated properly, these applications could increase the quality of data collection while decreasing collection time and costs.

Session Title: Real World Applications of System Concepts in Evaluation
Multipaper Session 597 to be held in TRAVIS C on Friday, Nov 12, 1:40 PM to 3:10 PM
Sponsored by the Systems in Evaluation TIG
Janice Noga,  Pathfinder Evaluation and Consulting, jan.noga@pathfinderevaluation.com
Janice Noga,  Pathfinder Evaluation and Consulting, jan.noga@pathfinderevaluation.com
Using Systems Concepts to Evaluate the iPlant Collaborative: Quality, Implications, and Benefits
Barbara Heath, East Main Educational Consulting, bheath@emeconline.com
Jennifer Young, East Main Educational Consulting, jyoung@emeconline.com
Abstract: Funded by the National Science Foundation, the iPlant Collaborative (iPlant) is a distributed, cyberinfrastructure-centered, international community of plant and computing researchers. The goal of iPlant is to bring together the community to (1) identify new conceptual advances through computational thinking and (2) address an evolving array of the most compelling Grand Challenges in the plant sciences and associated, cutting-edge research challenges in the computing sciences. Our presentation describes the systems-based CDE Model (Eoyang, 2007) and its impact on evaluation quality, along with resulting implications and benefits in this context. Multiple methodologies are being deployed for data collection and analysis, including outcome-based methods and case study. Eoyang, G. H. (2007). Human Systems Dynamics: Complexity-based Approach to a Complex Evaluation. In Systems Concepts in Evaluation: An Expert Anthology. Bob Williams and Iraj Imam (Eds.). AEA, Point Reyes.
The iPlant Collaborative: Using Case Study Methodology in a Systems-based Framework
Jennifer Young, East Main Educational Consulting, jyoung@emeconline.com
Barbara Heath, East Main Educational Consulting, bheath@emeconline.com
Abstract: The iPlant Collaborative (iPlant) is a distributed, cyberinfrastructure-centered, international community of plant and computing researchers. The goal of iPlant is to bring together the community to (1) identify new conceptual advances through computational thinking and (2) address an evolving array of the most compelling Grand Challenges in the plant sciences and associated, cutting-edge research challenges in the computing sciences. Our presentation describes the use of embedded case study within the systems-based CDE Model (Eoyang, 2007). The Grand Challenge team that is the focus of the case study is novel in that it is an interdisciplinary, virtual organization. The study goal is to provide a clear understanding of how the team collaborates to produce community-driven solutions. Eoyang, G. H. (2007). Human Systems Dynamics: Complexity-based Approach to a Complex Evaluation. In Systems Concepts in Evaluation: An Expert Anthology. Bob Williams and Iraj Imam (Eds.). AEA, Point Reyes.
Evaluating the Tennessee Lives Count Juvenile Justice Suicide Prevention Project: Strategies for Incorporating System Level Data in Evaluations of Complex Public Health Prevention Programs
Jennifer Lockman, Centerstone Research Institute, jennifer.lockman@centerstoneresearch.org
Heather Wilson, Centerstone Research Institute, heather.wilson@centerstoneresearch.org
Kathryn A Bowen, Centerstone Research Institute, kathryn.bowen@centerstone.org
Abstract: The Tennessee Lives Count, Youth Suicide Prevention Early Intervention Juvenile Justice Project (TLC-JJ) is a SAMHSA-funded statewide initiative to reduce suicide and suicide attempts for youth (ages 10-24). Evaluations of community suicide prevention programs have historically focused on measuring individual-level changes in knowledge, attitudes, and helping behaviors adults provide to suicidal youth. However, qualitative analysis of Tennessee Department of Children’s Services incident reports suggests that a “systems” level evaluation may also be appropriate, considering that multiple community organizations often work together to provide a single suicidal youth with appropriate services. Presenters will discuss how systems theory of evaluation is being applied to the TLC-JJ project, and will illustrate how mixed method designs, including triangulation of data from statewide databases, may enhance the ability to evaluate system preparedness, resiliency to barriers, and connectedness, as it relates to helping youth at risk for suicide and other symptoms of mental illness.

Session Title: Implication of Evaluation Approaches on Arts Organization Policies
Multipaper Session 598 to be held in TRAVIS D on Friday, Nov 12, 1:40 PM to 3:10 PM
Sponsored by the Evaluating the Arts and Culture TIG
Ching Ching Yap,  Savannah College of Art and Design, cyap@scad.edu
Don Glass,  VSA, dlglass@vsarts.org
Department of Canadian Heritage: A Decade of Program Evaluations for the Cultural and Arts Sectors
Paule-Anny Pierre, Department of Canadian Heritage, paule-anny.pierre@pch.gc.ca
Catherine Mueller, Department of Canadian Heritage, catherine.mueller@pch.gc.ca
Oren Howlett, Department of Canadian Heritage, oren.howlett@pch.gc.ca
Abstract: The Canadian federal government has a comprehensive policy framework to foster Canadian cultural industries and the development of arts and heritage - from creators to audience. The Department of Canadian Heritage supports the book, music, film and television production industries, arts training and performance, and cultural infrastructure through a range of policies and programs. The objective is to encourage the production and promotion of Canadian cultural works, foster a sustainable and competitive marketplace, and help ensure Canadians have access to their own culture. The measurement of impacts of the support to arts and cultural sectors is challenging. This presentation will provide participants with an overview of evaluation approaches covering the past 10 years of federal programs funding various initiatives within these sectors. Presenters will outline lessons learned and evaluation challenges related to performance measurement and pressure to demonstrate value-for-money in the context of increased accountability requirements.
Mission Conceived Versus Mission Achieved
Paul Lorton Jr, University of San Francisco, lorton@usfca.edu
Abstract: The stated mission of an organization is, in a deductively ordered world, the encapsulation of the timeless focus of that organization, from which measurable objectives can be derived to assess how well that organization is progressing toward its destiny. For arts organizations in troubled times, this coupling of mission-driven direction and success ought, in theory, to be tight. The effort reported here examines the degree to which arts organizations, in particular those presenting opera, have stayed their mission-dictated course and survived, even succeeded. By looking in depth at several opera companies in the San Francisco Bay Area, more broadly at opera companies throughout the United States, and, in summary, at other arts organizations, we will present how well the conceived missions are actually achieved and have served to drive success.
Some Children Left Behind: Policy Implications for Children’s Museums Providing Educational Services to Minority Youth and Children With Special Needs
Deborah A Chapin, Excelsior College, debchapin50@juno.com
Anna Lobosco, New York State Developmental Disabilities Planning Council, alobosco@ddpc.ny.state.us
Dianna L Newman, State University of New York at Albany, dnewman@uamail.albany.edu
Abstract: This paper presents the findings of a study of the perceptions of eighty education directors of children’s museums responding to a paper-and-pencil survey. The study examined patterns in curriculum planning, current programming, pedagogy, reduction of barriers, mode of outreach, and the extent of effectiveness in the delivery of nonformal education programs for general education students, minority youth, and children with special needs. Education directors used appropriate strategies on these six constructs to a significantly greater degree for general education students than for minority youth or children with special needs. These findings have implications for policy improvements for children’s museum educators so that they adhere to the Americans with Disabilities Act and the Civil Rights Act.

Session Title: Interesting Evaluations in Social Services and Welfare
Multipaper Session 599 to be held in INDEPENDENCE on Friday, Nov 12, 1:40 PM to 3:10 PM
Sponsored by the Human Services Evaluation TIG
Vajeera Dorabawila,  New York State Office of Children and Family Services, vajeera.dorabawila@ocfs.state.ny.us
Come Together Now: Measuring the Level of Collaboration
Yvonne Kellar-Guenther, University of Colorado, Denver, yvonne.kellar-guenther@ucdenver.edu
William Betts, University of Colorado, Denver, william.betts@ucdenver.edu
Abstract: For many programs to have sustainability, true collaboration needs to occur. As evaluators, we are often asked to measure the level of collaboration. Over the past year we have tested a measure of collaboration with three different groups. The groups vary from state leaders setting goals for a statewide initiative to a small group of early childhood, public health, mental health, and workforce providers coming together at a local level to serve TANF-eligible clients. We use a measure originally developed by Frey et al. (2006), which we have combined with subscales that measure key traits for collaboration, including trust and shared decision making. We have looked at correlations between the subscales and the collaboration ratings on Frey’s measure. We have also talked with people who filled out the measure about its acceptability. In the paper we share what we have learned to date.
The Use of Time Series Design and Intervention Analysis in Evaluating Welfare Policy
Elizabeth Hayden, Northeastern University, hayden.e@neu.edu
Abstract: Since the implementation of the Personal Responsibility and Work Opportunity Reconciliation Act (PRWORA), state and federal courts have been less likely to receive cases from welfare claimants challenging aspects of welfare programs. Findings from this national study, which utilized an interrupted time series research design, indicate: (1) a decrease in welfare litigation since PRWORA, and (2) courts are more likely to grant favorable outcomes to the state than to welfare recipients after the 1996 reform. These shifts in judicial decision-making are more likely to occur during periods of stringent reform. This research will illustrate the disposition of these cases and, subsequently, the effectiveness and fairness of welfare policy, along with failures in program implementation.
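The abstract does not give the study's model specification; a common interrupted-time-series form is a segmented regression with a level-change term at the intervention. The sketch below (the data, the cut point `t0`, and the helper `fit_ols` are all illustrative, not from the study) fits such a model by ordinary least squares:

```python
# Hypothetical interrupted-time-series (segmented regression) sketch.
# Model: y = b0 + b1*t + b2*post + b3*(t - t0)*post, where b2 is the
# immediate level change at the intervention and b3 the slope change.
def fit_ols(X, y):
    """OLS via the normal equations (X'X)b = X'y, solved with
    Gaussian elimination and partial pivoting (stdlib only)."""
    k = len(X[0])
    A = [[sum(r[i] * r[j] for r in X) for j in range(k)] for i in range(k)]
    b = [sum(r[i] * yi for r, yi in zip(X, y)) for i in range(k)]
    for col in range(k):
        piv = max(range(col, k), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, k):
            f = A[r][col] / A[col][col]
            for c in range(col, k):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    coef = [0.0] * k
    for r in range(k - 1, -1, -1):
        coef[r] = (b[r] - sum(A[r][c] * coef[c] for c in range(r + 1, k))) / A[r][r]
    return coef

# Made-up yearly litigation counts with a drop at the reform (t0 = 5)
t0 = 5
y = [100, 98, 101, 99, 100, 70, 68, 66, 64, 62]
X = [[1.0, t, float(t >= t0), (t - t0) * float(t >= t0)] for t in range(len(y))]
b0, b1, b2, b3 = fit_ols(X, y)
print(round(b2, 1))  # immediate level drop at the reform → -29.9
```

Because the dummy and interaction terms fully separate the pre- and post-periods, this fit is equivalent to fitting independent lines to each segment, which is why the level-change coefficient recovers the drop exactly for this toy series.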
Conceptualizing and Measuring Innovativeness Among Community-Based Health and Social Service Programs
Daniel Finkelstein, Mathematica Policy Research, dfinkelstein@mathematica-mpr.com
Margaret Hargreaves, Mathematica Policy Research, mhargreaves@mathematica-mpr.com
Beth Stevens, Mathematica Policy Research, bstevens@mathematica-mpr.com
Abstract: Within the health and human services sector, there is widespread recognition of the need for developing innovative models to deliver services to vulnerable populations. Several foundation and award programs, such as Harvard University’s Innovations in American Government Awards Program and the Ashoka Fellows Program, currently support individuals and organizations in piloting innovative community-based health and social service projects. The federal government’s Social Innovation Fund Competition has further bolstered these efforts. Despite consensus that fostering innovation is a priority within this sector, there has been limited theoretical or empirical work to rate the characteristics of innovative health and social services programs. As a part of an evaluation of the Robert Wood Johnson Foundation’s Local Funding Partnerships Program (LFP), this paper reports on (1) a conceptual model to define innovativeness within the non-profit sector, (2) the development of a tool to rate the innovativeness of community-based health and social service projects, and (3) an analysis comparing the innovativeness of LFP-funded projects to those receiving support from other award programs.
Evaluating Equality Impacts of a Social Services Program in Brazil
Miguel Fontes, John Snow, Brazil, m.fontes@johnsnow.com.br
Lorena Vilarins, Social Service Industry, Brazil, lorena.vilarins@sesi.org.br
Rodrigo Laro, John Snow, Brazil, r.laro@johnsnow.com.br
Fabrízio Pereira, Social Service Industry, Brazil, fpereira@sesi.org.br
Diana Barbosa, Independent Consultant, tb.diana@yahoo.com.br
Danielle Valverde, National Union of Municipal Education Managers, danielle_valverde@hotmail.com
Abstract: Objectives: In 2008, SESI implemented the Citizenship Rights Event in 30 Brazilian municipalities, offering 1.3 million individuals access to 15 basic social and health services. The objective of this evaluation is to demonstrate the equality impact of the program. Methods: A national survey was conducted in November 2008 (n=9,729). A scale was generated based on access to 15 types of basic social and health services. An adaptation of the Hoover Index was used to evaluate the equality impact on access to basic services by gender, race, and age. Results: Women, African-Brazilians, and youth were identified as the groups most vulnerable to lack of access to basic social services (ex-ante). After the program, a reduction in the final inequality index was observed only for gender, from 0.029 to 0.022; for race and age, changes were insignificant. Conclusions: The program demonstrated a direct impact on reducing gender inequality in access to basic social services.
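The abstract does not spell out its adaptation of the Hoover Index; a minimal sketch of the standard form is half the sum of absolute differences between each group's share of services received and its share of the population. The groups and counts below are made up for illustration, not the study's data:

```python
# Hoover-style inequality index over group access shares (illustrative).
def hoover_index(population, services):
    """0 = each group's service share matches its population share
    (perfect equality); values near 1 = maximally unequal access."""
    pop_total, svc_total = sum(population), sum(services)
    return 0.5 * sum(abs(s / svc_total - p / pop_total)
                     for p, s in zip(population, services))

# Hypothetical gender breakdown: [women, men] population vs. services used
before = hoover_index([5200, 4800], [4900, 5100])   # pre-program
after  = hoover_index([5200, 4800], [5080, 4920])   # post-program
print(round(before, 3), round(after, 3))
```

A drop in the index after the program, as in the abstract's gender result, means the distribution of service access moved closer to the groups' population shares.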

Session Title: Structural Equation Modeling Solutions for Evaluators
Multipaper Session 600 to be held in PRESIDIO A on Friday, Nov 12, 1:40 PM to 3:10 PM
Sponsored by the Quantitative Methods: Theory and Design TIG
Frederick Newman,  Florida International University, newmanf@fiu.edu
Modeling Reciprocal Relationships Among Program Outcomes
Omolola A Adedokun, Purdue University, oadedok@purdue.edu
Timothy J Owens, Purdue University, towens@purdue.edu
Abstract: There is no doubt that evaluation quality is enhanced by the application of advanced statistical methods to appropriate program designs. Although programs targeting psychological processes often produce outcomes that may reciprocally affect each other, a notable limitation in the estimation of variable-oriented program models is that reciprocal relationships among outcomes are rarely explored. Using data from waves I and II of the National Longitudinal Study of Adolescent Health, this study employs structural equation modeling to illustrate the estimation of reciprocal relationships among variables. Specifically, we estimated the reciprocal effects of self-esteem and academic performance. The model included instrumental variables that are expected to exercise direct effects on one of a pair of reciprocally affected variables but not the other. The models were first tested for the full sample, and a “stacked” model was then estimated to compare the estimates of the reciprocal paths between boys and girls.
Interpreting Differences in Covariance Structures Among Clinical and Demographic Subgroups for a Model Describing Perceived Barriers to Seeking Help for Abused Elder Women
Frederick Newman, Florida International University, newmanf@fiu.edu
Laura Seff, Florida International University, lseff@bellsouth.net
Richard Bealaurier, Florida International University, rbeaulau@fiu.edu
Abstract: The study was designed to contrast the perceived barriers to help-seeking for female victims and non-victims of domestic abuse age 50+ who were not in the service system. 445 women self-administered a 78-item survey in small groups. We employed structural equation modeling to develop and confirm our model. We then tested for differences in coefficient weightings and in the covariance structures as a function of victimhood and demographic characteristics. Six factors were identified as contributing to the overall perceived barrier score, accounting for 84% of the variance with excellent fit statistics (χ²/df=1.527, CFI=.989, RMSEA=.034). The six factor coefficients predicting overall perceived barrier scores did not differ significantly across subgroups. However, there were significant differences in the covariances among the six factors across the various subgroups. The discussion will focus on interpreting the differences in covariance structures.
A Quantile Regression Analysis of Incarcerated Youth’s Reading Achievement: Compare and Contrast the Ordinary Least Square Regression and Quantile Regression
Weijia Ren, The Ohio State University, ren.44@buckeyemail.osu.edu
Ann O'Connell, The Ohio State University, aoconnell@ehe.osu.edu
William Loadman, The Ohio State University, loadman.1@osu.edu
Raeal Moore, The Ohio State University, moore.1219@osu.edu
Abstract: In current social science research, many datasets do not meet the assumptions of ordinary least squares (OLS) regression (i.e., they are non-normal or heteroscedastic), and because the OLS approach is not robust to outliers, researchers are often left to choose between abandoning regression altogether and using a regression model whose assumptions are violated. In such cases, OLS estimates are questionable and a method that provides better estimates is needed. Quantile regression is therefore introduced: it can fit non-normal data, is robust to outliers, and captures the conditional distribution more fully than a model of the mean alone.
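The robustness contrast the abstract draws can be shown with the asymmetric "pinball" loss that quantile regression minimizes. In this sketch (data and helper names are illustrative, not the study's), a constant-only model fitted by pinball loss at τ=0.5 recovers the sample median and shrugs off an outlier, while the OLS solution (the mean) is pulled far toward it:

```python
# Pinball (quantile) loss: the objective quantile regression minimizes.
# At tau=0.5 it is half the sum of absolute deviations, so the best
# constant fit is a sample median; OLS's squared loss yields the mean.
def pinball_loss(tau, y, q):
    """Weights under-predictions by tau and over-predictions by 1-tau."""
    return sum(tau * (yi - q) if yi >= q else (1 - tau) * (q - yi) for yi in y)

y = [4, 5, 5, 6, 7, 100]          # reading scores with one extreme outlier
grid = [v / 10 for v in range(0, 1001)]
median_fit = min(grid, key=lambda q: pinball_loss(0.5, y, q))
mean_fit = sum(y) / len(y)        # the OLS solution for a constant model
print(median_fit, round(mean_fit, 1))
```

With covariates, the same loss is minimized over regression coefficients, one fit per quantile τ, which is how the paper's method describes the whole conditional distribution rather than just its mean.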
Evaluating Potential Impact of Intervention in Community Settings When No Comparison Data is Available: Mixture Latent Growth Modeling for Exploring Differential Change in Female Condom Use
Maryann Abbott, Institute for Community Research, maryann.abbott@icrweb.org
Emil Coman, Institute for Community Research, comanus@netscape.net
Peg Weeks, Institute for Community Research, weeks@icrweb.org
Abstract: This study illustrates the use of mixture analysis in evaluating preventive community interventions implemented in a non-experimental framework. When no control group was designed and no matched comparison group with full panel data is available, the question of an intervention’s potential impact can be addressed by investigating model-implied latent classes of participants who responded differently to the intervention. We show this approach with an intervention conducted in Hartford, CT aimed at increasing awareness and use of the female condom (FC) as a women-initiated HIV and sexually transmitted infection (STI) prevention method. Overall, the community intervention aimed at increasing FC use among community women appears to have been very successful for 49% of the sample and successful for another 43%, while not impacting the remaining 8%. More detailed inquiries can be pursued regarding specific characteristics of the three groupings and the causal processes that may be responsible for the differential effects.

Session Title: Improving the Quality of Analysis, Interpretation, and Reporting of Program Outcomes Through a Measurement, Evaluation, and Statistics Training Course
Demonstration Session 601 to be held in PRESIDIO B on Friday, Nov 12, 1:40 PM to 3:10 PM
Sponsored by the Organizational Learning and Evaluation Capacity Building TIG
Yvonne Watson, United States Environmental Protection Agency, watson.yvonne@epa.gov
Tracy Dyke-Redmond, Industrial Economics Inc, tdr@indecon.com
Terell Lasane, United States Environmental Protection Agency, lasane.terell@epa.gov
Abstract: The U.S. Environmental Protection Agency (EPA)'s Evaluation Support Division has designed a new training course to help program staff use statistically valid approaches to demonstrate program outcomes. The course responds to critiques from OMB and others regarding the unintentional but inappropriate use of non-representative program data to draw conclusions about program outcomes. The new course, Using Statistical Approaches to Support Performance Measurement and Evaluation, is designed to introduce Agency staff with little to no knowledge of program evaluation and inferential statistics to basic concepts and techniques. Course participants will learn how performance measurement, program evaluation and inferential statistics can be combined to strengthen the quality of their analysis, interpretation, and reporting of program outcomes. This demonstration will walk conference participants through the course materials, highlighting aspects of the training that were successful and unsuccessful in EPA's organizational context.

Session Title: Assessing Implementation Fidelity of Substance Abuse Prevention Environmental Change Strategies: Lessons Learned From the Substance Abuse and Mental Health Services Administration (SAMHSA), Center for Substance Abuse Prevention (CSAP), Strategic Prevention Framework State Incentive Grant (SPF-SIG), and National Cross-site Evaluation
Panel Session 602 to be held in PRESIDIO C on Friday, Nov 12, 1:40 PM to 3:10 PM
Sponsored by the Alcohol, Drug Abuse, and Mental Health TIG
Kristi Pettibone, The MayaTech Corporation, kpettibone@mayatech.com
Abstract: States and jurisdictions, funded through the Strategic Prevention Framework State Incentive Grant (SPF SIG) initiative, were required to fund communities to implement a range of substance abuse prevention interventions, including interventions designed to change the environment in which substance abuse occurs. The SPF SIG cross-site evaluation team developed implementation fidelity (IF) measures for environmental change strategies to assist grantees in implementing these strategies and evaluating their effectiveness. Three presentations comprise this panel. The first presentation describes the need for, and development of, measures for assessing IF of environmental change strategies, the data collection process, and the characteristics of data submitted by SPF SIG grantees. The second presentation discusses the preliminary assessment of the connection between the IF data and the outcomes or accomplishments associated with environmental change strategies. The third presentation describes a grantee’s adaptation of the assessment process to accommodate state-level and community-level conditions.
A Process for Assessing Implementation Fidelity for Substance Abuse Prevention Environmental Change Strategies
Ann Landy, Westat, annlandy@westat.com
Elisabeth Cook, Westat, elisabethcook@westat.com
Communities funded through state SPF SIG grants were required to implement participant-based prevention interventions and environmental change strategies, in addition to the five SPF steps, to address key consumption and consequence problems identified through analyses of epidemiology data. To help states assess community-level performance, the SPF SIG national cross-site evaluation team developed a series of implementation fidelity (IF) rating scales for environmental strategies, participant-based interventions, and the SPF steps. Sixteen of 26 Cohort 1 and Cohort 2 SPF SIG grantees used the IF assessment tool, or an adaptation of the tool, on a voluntary basis, to collect and submit implementation fidelity data. This presentation specifically describes the process for identifying core activities for 21 environmental change strategies, construction of the environmental change strategy IF rating scales, and state data collection procedures. Presenters will report a summary of the environmental change strategy IF data submitted by the 16 SPF SIG grantees.
Exploring the Relationship Between Implementation Fidelity and Process and Outcome Data to Evaluate the Effectiveness of Environmental Change Strategies
Kristi Pettibone, The MayaTech Corporation, kpettibone@mayatech.com
Sixteen of 26 SPF SIG states participated in a voluntary data collection activity designed to assess the relationship between implementation fidelity of substance abuse prevention environmental change strategies and process and outcome measures. Process measures include counts of the number of environmental change strategies implemented and the number of activities conducted within these strategies, obtained through a standardized, web-based tool that each state's funded community partners completed twice a year. Outcome data include measures of substance use/abuse behaviors and consequences such as motor vehicle crashes, crime, and alcohol-, tobacco-, or drug-related mortality. This presentation will report summaries of the IF rating data for environmental change strategies submitted by the 16 grantees and the process data, as well as results of an initial assessment of the relationships among the IF data, process data, and outcome data.
Adaptation of the Strategic Prevention Framework State Incentive Grant (SPF SIG) Implementation Fidelity Assessment User's Guide in Texas
Adrian Reyes, Behavioral Assessment Inc, aistrategies@aol.com
The SPF SIG Implementation Fidelity (IF) Assessment Guide, developed by the national cross-site evaluation team, was distributed to each of the 26 Cohort 1 and Cohort 2 grantee evaluation teams. The Texas SPF SIG evaluator, Behavioral Assessment, Inc., reviewed the Guide and determined that the recommended approach for collecting rating data should be adapted to accommodate the needs of the Texas funded communities. This presentation describes the conditions that prompted the need for adaptation, the on-line tool that was developed for collecting the IF ratings, and the use of the tool for rating environmental change strategy implementation. The presenter also will describe the training and technical assistance provided to the local evaluators, through webinars and site visits, for using the on-line tool and maintaining inter-rater reliability. Environmental strategy IF data obtained from six communities will be presented, and advantages and disadvantages of using the system will be discussed.

In a 90 minute Roundtable session, the first rotation uses the first 45 minutes and the second rotation uses the last 45 minutes.
Roundtable Rotation I: Innovations in Youth Empowerment Evaluation
Roundtable Presentation 603 to be held in BONHAM A on Friday, Nov 12, 1:40 PM to 3:10 PM
Sponsored by the Collaborative, Participatory & Empowerment Evaluation TIG
Kimberly Kay Lopez, Independent Consultant, kimklopez@hotmail.com
Abstract: This discussion is based on the youth-focused Empowerment Evaluation (Fetterman, 2001) of community-based youth prevention programs. The researcher developed an innovative approach by using photography and journal writing within the Empowerment Evaluation methodology. The researcher also modified the scoring scale used in Empowerment Evaluation to a letter-grade system. These innovations resonated with the youth and allowed the researcher not only to collect but also to validate data across several modalities. These innovations have been applied to several youth program evaluations with great success. Youth-focused participatory research is relevant to evaluation research because youth are valued as equitable partners whose expertise contributes to the research. Fetterman, D. M. (2001). Foundations of Empowerment Evaluation. Thousand Oaks, CA: Sage Publications.
Roundtable Rotation II: Slipping and Sliding Like a Weasel on the Run: Empowerment Evaluation and the Hawthorne Effect
Roundtable Presentation 603 to be held in BONHAM A on Friday, Nov 12, 1:40 PM to 3:10 PM
Sponsored by the Collaborative, Participatory & Empowerment Evaluation TIG
Michael Matteson, University of Wollongong, cenetista3637@hotmail.com
Abstract: In the course of evaluating the empowerment aspect of an Empowerment Evaluation, I was faced with the argument that any effect of the evaluation on the evaluation team participants was in fact a Hawthorne or placebo effect. On this view, any effect would be the result of the evaluation team changing their actions to please me as a participant observer and evaluator, not a result of process use of the Empowerment Evaluation experience as such. I thought this could be seen as simply part of the Empowerment Evaluation process itself, but there was a problem with gaining post-evaluation data on evaluation effects by participant observation after the evaluation had concluded. This roundtable will look at the issues involved and ways of overcoming them, the role of the evaluator in Empowerment Evaluation, and the relevance of different interpretations of “empowerment” in Empowerment Evaluation for what is done and what can be regarded as success.

Session Title: Improving Evaluation in the Real World of Nonprofits and Foundations
Panel Session 604 to be held in BONHAM B on Friday, Nov 12, 1:40 PM to 3:10 PM
Sponsored by the Non-profit and Foundations Evaluation TIG
Pamela Imm, University of South Carolina, pamimm@windstream.net
Abraham Wandersman, University of South Carolina, wandersman@sc.edu
Abstract: The session will focus on how the Health Foundation of Central Massachusetts (THFCM) integrates the work of evaluation and advocacy into its grantmaking. The panel members will present the structure of the Foundation’s grantmaking, how evaluators are recruited to work with grantees, steps taken to ensure evaluation strategies are incorporated into the work of the grantees, and the benefits evaluators perceive in working with the Foundation. The panel will describe the accountability model used by the Foundation as well as how evaluators are encouraged to facilitate the work with the grantees. The evaluators’ perspective will also be presented, including the advantages of being involved in a small community of learners through monthly conference calls, professional development workshops, and informal and formal networking. By utilizing high-quality evaluators, the Foundation promotes its grantmaking principles of evidence-based practice, continuous quality improvement, and systems change through policy and advocacy.
Integrating Evaluation Into the Grantmaking Agenda
Pamela Imm, University of South Carolina, pamimm@windstream.net
From its inception, THFCM has worked to integrate formal evaluation practice into its grantmaking agenda. THFCM works with the grantees to help select the best evaluators for individual projects by using a variety of strategies. First, they identify and recruit high quality evaluators through informal referrals, AEA contacts, and other settings (e.g., universities) who are likely to be interested in a long-term evaluation commitment (3-5 years) with a project. THFCM finalizes its pool of evaluators and invites them to meet in a large setting with the grantees to share information about their approach, style, and areas of expertise. This process, similar to a speed dating session, results in the grantees forwarding their selection of acceptable evaluators to THFCM for final matching. The presenter will discuss the benefits and challenges of working closely with their evaluators from the perspective of the grantees.
Why and How Should Foundations Engage Skilled, Experienced Program Evaluators in Their Grantmaking?
Jan Yost, Health Foundation of Central Massachusetts, jyost@hfcm.org
Elaine Cinelli, Health Foundation of Central Massachusetts, ecinelli@hfcm.org
The presenter will provide the rationale for engaging evaluators as partners with the funder and grantee in order to increase the probability of achieving project outcomes and sustainability. Lessons learned from the ten-year experience of a moderate-sized funder will offer guidance to foundations on embedding evaluation as continuous quality improvement throughout the grantmaking process; recruiting evaluators; and facilitating their match and work as partners with invited applicants and grantees. The funder will also discuss its roles, including approaches for interacting as a true partner with the evaluator and grantee throughout a three-to-five-year grant project cycle to plan for and evaluate outcomes, including advocacy efforts to affect public policy and sustain the project.
How Can Foundations Attract and Retain Skilled, Experienced Program Evaluators for Local Projects?
Emily Rothman, Boston University, erothman@bu.edu
In this session, the presenter will describe ten concrete things that Foundations can do to recruit and retain program evaluators with high standards for rigor. These include being prepared for and accommodating human subjects reviews, university billing systems, and working out agreements about the publication of data. The presenter will describe challenges related to engaging in community evaluation projects from the perspective of an academic researcher, and share lessons learned and solutions found during four years of collaboration with The Health Foundation of Central Massachusetts and two of their Synergy Initiative grantees.

Session Title: Teacher Effectiveness and Teacher Quality: What's the Difference?
Think Tank Session 605 to be held in BONHAM C on Friday, Nov 12, 1:40 PM to 3:10 PM
Sponsored by the Pre-K - 12 Educational Evaluation TIG
Nathan Balasubramanian, Centennial Board of Cooperative Educational Services, nathanbala@gmail.com
Joy Perry, Fort Morgan School District, jperry@morgan.k12.co.us
Roxie Bracken, Keenesburg School District, roxiebracken@re3j.com
Abstract: How might we evaluate teacher performance? What is the difference between highly effective teachers and highly qualified teachers? Participants in this think tank will delve into how these two constructs might impact student performance in our elementary and secondary schools. The three breakout group leaders, with over 80 years of combined expertise, will facilitate this think tank, informed by their recent collaborative research, “Leveraging Innovation: Teacher Effectiveness Initiative,” funded by a 2010 NCLB Recruitment and Retention Grant from the Office of Federal Programs Administration at the Colorado Department of Education. Following an initial orientation, three small-group discussions, and a check-and-connect recap of our lively 90-minute dialogue, participants will leave with a deep understanding of the distinction.

Session Title: Fidelity of Program Implementation in Educational Evaluations
Multipaper Session 606 to be held in BONHAM D on Friday, Nov 12, 1:40 PM to 3:10 PM
Sponsored by the Pre-K - 12 Educational Evaluation TIG
Javan Ridge,  Colorado Springs School District 11, ridgejb@d11.org
Stacey Merola,  ICF International, smerola@icfi.com
Measuring the Fidelity of Implementation of Response to Intervention for Behavior (Positive Behavior Support) Across All Three Tiers of Support
Karen Childs, Florida Positive Behavior Support, childs@fmhi.usf.edu
Abstract: This workshop will provide information on the development, validation, and use of implementation fidelity instruments available for use by schools in evaluating school-wide positive behavior support (otherwise known as response to intervention for behavior). The Benchmarks of Quality (BoQ) is a research-validated self-evaluation tool for evaluating fidelity of school-level implementation of Tier 1/Universal level behavior support. Participants will receive information on the theoretical background, development, and validation of the BoQ for Tier 1. Participants will learn how to complete the instrument, how the instrument is used by school, district, and state teams to monitor implementation fidelity, and how to use results to improve implementation. Participants will also receive information about a newly developed instrument: the Benchmarks for Advanced Tiers (Tier 2/Supplemental and Tier 3/Intensive levels of support). This discussion will include an explanation of instrument development, results of the validation study, administration of the instrument, and use of results.
A Practical Approach to Managing Issues in Implementation Fidelity in K–12 Programs
Hendrick Ruitman, Cobblestone Applied Research & Evaluation Inc, todd.ruitman@cobblestoneeval.com
Rebecca Eddy, Cobblestone Applied Research & Evaluation Inc, rebecca.eddy@cobblestoneeval.com
Namrata Mahajan, Cobblestone Applied Research & Evaluation Inc, namrata.mahajan@cobblestoneeval.com
Abstract: Measuring the fidelity of program implementation and its relationship to outcomes has not yet achieved prominence in the field of K–12 programs and curriculum studies. A recent literature review found that only 5 of 23 studies even considered the relationship between implementation and outcomes (O’Donnell, 2008). While multiple publications give evaluators the skills to measure implementation (see Berry & Eddy, 2008; Chen, 2005; Dane & Schneider, 1998), little practical advice exists to inform evaluators about how to manage issues related to implementation fidelity over the course of an evaluation. Specifically, we intend to discuss situations in which participants may desire to suspend or reduce the fidelity of implementation of a K–12 educational program, tips for evaluators on managing implementation issues, and the overall implications that may result from these situations.
A Study of the Relationship Between Fidelity of Program Implementation and Achievement Outcomes
Sarah Gareau, University of South Carolina, gareau@mailbox.sc.edu
Heather Bennett, University of South Carolina, bennethl@mailbox.sc.edu
Diane Monrad, University of South Carolina, dmonrad@mailbox.sc.edu
Tammiee Dickenson, University of South Carolina, tsdicken@mailbox.sc.edu
Ishikawa Tomonori, University of South Carolina, ishikawa@mailbox.sc.edu
Abstract: As we consider the theme of the 2010 AEA conference, “Evaluation Quality,” the increased national focus on fidelity of program implementation comes to mind. A limitation of traditional evaluation has been its focus largely on program outcomes, with very little emphasis placed on the manner in which the program is implemented. The proposed research will use implementation rubrics developed for a state literacy program in South Carolina schools to investigate relationships between program components and student achievement. The implementation rubric was completed by five personnel for each school, two individuals at the school level and three at the state level, for a total of 95 rubrics across the 19 schools. The specific analyses will include descriptive statistics, correlations, regression studies, and possibly hierarchical linear modeling. Results revealing the relationship between the level of implementation (fidelity) of each component/item and achievement outcomes will be shared.
Using an Innovation Configuration Map to Empirically Establish Implementation Fidelity of an Intervention to Improve Achievement of Struggling Adolescent Readers
Jill Feldman, Research for Better Schools, feldman@rbs.org
Ning Rui, Research for Better Schools, rui@rbs.org
Abstract: The complexity of effecting systemic change is well-documented in the literature (Baskin, 2003; Connor, 1992; Rogers, 1983; Senge et al., 1994; Hall et al., 2006). Determining whether an approach can produce desired effects depends on a clear understanding of an innovation’s key components and the extent to which it was implemented as intended. Although teachers often claim they are using the same innovation, observations of classroom practice may suggest otherwise (George, Hall, & Uchiyama, 2000). In addition to understanding whether or not an approach works, practitioners need to know why, for whom, and under what conditions. This requires systematic measurement of fidelity of classroom implementation. This paper highlights the use of an innovation configuration (IC) map to define the key constructs, describe various fidelity levels, and present fidelity data related to implementation of an intensive professional development model for urban middle school teachers to support achievement of struggling adolescent readers.

Session Title: Key Issues in Evaluating Industrial and Commercial Energy Efficiency Programs and Technologies
Multipaper Session 607 to be held in BONHAM E on Friday, Nov 12, 1:40 PM to 3:10 PM
Sponsored by the Environmental Program Evaluation TIG and the Business and Industry TIG
Mary Sutter,  Opinion Dynamics Corporation, msutter@opiniondynamics.com
Evaluation Results Help Programs Cool Off!
Lark Lee, PA Consulting Group, lark.lee@paconsulting.com
Laura Schauer, PA Consulting Group, laura.schauer@paconsulting.com
Abstract: The air conditioning (AC) market is experiencing unprecedented changes in standard efficiency levels. In October 2009, leading manufacturers of central air conditioners, furnaces, and heat pumps signed a historic, voluntary agreement with the nation's leading energy efficiency advocacy organizations supporting new federal standards for those products. These changes come on the heels of industry-driving trends, including increased efficiency requirements as part of the American Reinvestment and Recovery Act, increased manufacturer rebates, and utility program initiatives. With a multitude of factors influencing the AC market, why are some programs wildly successful, smashing their goals, while other programs struggle to succeed? This paper will synthesize results across separate evaluation efforts in New York, Michigan and Colorado to highlight the primary drivers of these programs’ performance, characterizing and disentangling, to the extent feasible, the role and influence various demographic, economic, and programmatic factors played in the programs' performances in 2009.
Evaluation of Progress to Develop and Evaluate Sustainable Federal Facilities
Dale Pahl, United States Environmental Protection Agency, pahl.dale@epa.gov
Dan Amon, United States Environmental Protection Agency, amon.dan@epa.gov
Bill Ridge, United States Environmental Protection Agency, ridge.william@epa.gov
Andy Miller, United States Environmental Protection Agency, miller.andy@epa.gov
Bob Thompson, United States Environmental Protection Agency, thompson.bob@epa.gov
Abstract: This presentation focuses on the evaluation implications of Executive Order 13514, which directs that sustainability is integral to federal leadership for environmental, energy, and economic performance in the United States. To achieve the national goal of a sustainable and clean energy economy, this executive order directs all federal agencies to extend their strategic goals and plans to: increase energy efficiency; reduce greenhouse gas emissions; conserve and protect water resources; operate high performance sustainable buildings; ensure that new federal buildings are designed to achieve zero-net-energy by 2030; and evaluate progress to achieve performance goals for sustainable facilities. The potential environmental and energy benefits of this executive order are significant because the ‘footprint’ of federal facilities in the United States is immense. For example, the federal government is among the largest buyers of energy in the nation and the federal facilities portfolio is estimated to include 550,000 buildings.
Why Market Evaluations Matter! Significantly Improving Energy Program Outcomes With Market Intelligence in the Large Commercial Building Sector
John Reed, Innovologie LLC, jreed@innovologie.com
Abstract: The development and use of market evaluations has not been widely discussed, but they are extremely important for determining how best to implement robust programs that achieve their goals. Programs commonly address a specific issue in the absence of an understanding of the larger (market) context. This paper describes and defines market evaluation. It describes methods for uncovering market structure, decision makers, networks, and decision criteria, and for “modeling” decision outcomes. It provides examples of several market evaluations for the large commercial building sector in the US. It then describes how these market evaluations have been used to redefine, more carefully target, and strengthen energy efficiency programs in the large commercial building sector. More specifically, it shows how using market intelligence resulted in redirecting programs that had targeted decision-makers, architects, vendors, and building engineers at the building level toward high-level decision makers who control not one but many properties, resulting in more rapid adoption of efficient technologies.
When Designing a High Quality Evaluation Involves Complex and Evolving Program Issues: Strategies From an Evaluation of an Industrial Process Energy Efficiency Program
Kara Crohn, Research Into Action, kara.crohn@gmail.com
Marjorie McCrae, Research Into Action, marjorie@researchintoaction.com
Abstract: The “greening” of industry rightly focuses on reducing resource consumption. In the past two years there has been an increasing spotlight on energy consumption. Incentive programs have been in place for many years to assist industrial firms in replacing inefficient equipment with highly energy-efficient equipment. Through the Industrial Process Efficiency (IPE) program at the New York State Energy Research and Development Authority (NYSERDA), a strategic change is underway to encourage industrial firms to reduce energy consumption throughout their production lines. While potential resource and cost savings may be substantial, the perceived risk of making process changes can be higher. This presentation will discuss strategic and methodological issues involved in designing an evaluation that seeks to assist the program in identifying key factors in encouraging potential and current program participants to pursue energy-efficient process improvements, especially decision-making processes, program representatives’ relations with various levels of decision-makers, and incentives that outweigh risks.

Session Title: Report on a Test of a General Method for Quick Evaluation of Medical Research by Morbidity
Multipaper Session 609 to be held in Texas D on Friday, Nov 12, 1:40 PM to 3:10 PM
Sponsored by the Research, Technology, and Development Evaluation TIG and the Health Evaluation TIG
Jerald Hage, University of Maryland, hage@socy.umd.edu
A Method for Quick Evaluation of Medical Research
Jerald Hage, University of Maryland, hage@socy.umd.edu
Abstract: The method consists of using NIH's computerized records to identify all projects of a certain kind that were in existence in the years 2006-2008 and finished before 2009. Their abstracts were printed, and all projects were divided into two categories: knowledge studies and clinical studies. For the latter group, papers were identified that had been supported by the specific NIH program. A grid of 16 metrics was then applied to these papers, and estimates were made. In the process, a number of lessons were learned.

Session Title: Taking a Good, Long Look In the Mirror: How Can We Hold Ourselves Accountable for Quality Recommendations?
Think Tank Session 610 to be held in Texas E on Friday, Nov 12, 1:40 PM to 3:10 PM
Sponsored by the Evaluation Use TIG and the Organizational Learning and Evaluation Capacity Building TIG
Jennifer Iriti, University of Pittsburgh, iriti@pitt.edu
Jennifer Iriti, University of Pittsburgh, iriti@pitt.edu
Kari Nelsestuen, Education Northwest, kari.nelsestuen@educationnorthwest.org
Abstract: This Think Tank session begins a conversation within the field about building an empirically-based understanding of recommendations generated from evaluations—whether and how clients respond to them and if so, what impact they ultimately have. Although the making of recommendations is common practice for a majority of evaluators, the systematic study of their use and impact has lagged. The session is geared toward both evaluation practitioners and researchers who have interest in advancing the field toward evidence-based practice. After a brief framing by the facilitators, participants will break into small groups to consider the following questions: 1) What do and don’t we know about the various ways clients respond to recommendations from evaluators and how could we track these responses systematically? 2) What do and don’t we know about the impact of evaluation recommendations and how could we assess impact of those that are implemented by clients?

Session Title: Snapshots of Exemplary Evaluations
Panel Session 611 to be held in Texas F on Friday, Nov 12, 1:40 PM to 3:10 PM
Sponsored by the Research on Evaluation TIG
Paul Brandon, University of Hawaii, brandon@hawaii.edu
Paul Brandon, University of Hawaii, brandon@hawaii.edu
Abstract: Existing research on the nature of exemplary evaluation practice is limited. In spite of awards being given for outstanding work, there has been little study of what makes an evaluation exemplary and how to produce excellent work consistently. The four presentations in this panel examine national, regional, and local studies for the purposes of (a) describing the characteristics of exemplary work; (b) identifying the factors, conditions, events, and actions that contribute to exemplary work; and (c) considering the impediments to improved practice. Influences such as effective evaluation designs, strong stakeholder relationships, and evaluator flexibility in dealing with changing contextual factors are shown as key in promoting exemplary practice. Implications for an improved theory of evaluation practice, as well as improvements in evaluation practice, are also considered.
Informing Policy in a Cultural Crossfire: The Title V Abstinence Education Evaluation
Christopher Trenholm, Mathematica Policy Research, ctrenholm@mathematica-mpr.com
Barbara Devaney, Mathematica Policy Research, bdevaney@mathematica-mpr.com
In 1998, Mathematica Policy Research began a multi-year evaluation of abstinence programs funded as part of the landmark welfare reform legislation. The evaluation used an experimental design to estimate impacts of programs. Based on baseline and follow-up survey data collected from over 2,000 youth in four sites, the analysis revealed no evidence that the programs increased abstinence from sex; however, it also found no evidence that the programs reduced rates of contraceptive use, a common concern of program opponents. Despite the contentiousness of its findings, the evaluation was widely acknowledged for its rigor and objectivity and, in turn, it had a substantial impact on federal and state policy. Among the keys to this success were the evaluation’s highly credible research design (and effective implementation of this design), a collaborative approach that sought buy-in from the range of study stakeholders, and a concise, consistent, and balanced presentation of the evaluation’s findings.
Exemplariness as Engaging and Value-added Evaluation
Melanie Hwalek, SPEC Associates, mhwalek@specassociates.org
Size, scope, and type of evaluation are less important than other factors in meeting criteria for exemplariness. Hwalek takes the position that an evaluation is exemplary when: (a) people become excited about the evaluation, (b) the right information is provided at the right time, and (c) the evaluation maintains its methodological rigor. Three cases will be presented as illustration: a shoestring evaluation of an HIV medication compliance training program, a multi-faceted evaluation of a capacity building organization, and a complex multi-state policy change evaluation. Six factors are hypothesized to contribute to exemplary work: a culture of learning in the user organization, organizational involvement in determining what is to be learned, an evaluation champion within the user organization, the fit of the evaluation feedback mechanism to the needs of the organization, trust between the evaluator and the key organizational contact, and the evaluator’s skill in making the evaluation come alive.
Curbing Home Health Care Costs While Improving Quality of Care
George Grob, Center for Public Program Evaluation, georgefgrob@cs.com
In the early 1990s, the Medicare program began implementing a new benefit to provide in-home care to home-bound patients. This program built on a tradition of dedicated visiting nurses, active since the early 1900s. However, the benefit soon became industrialized, with new chains of providers organized to exploit it. Annual Medicare expenditures mushroomed from $3 billion to $18 billion. Auditors found that 40% of Medicare payments were improper. Others raised questions about the quality of care. The Office of Inspector General’s Office of Evaluation and Inspections undertook a series of studies that led to reforms which ultimately saved $50 billion and promoted better care. Follow-up studies concluded that Medicare beneficiaries truly needing home health care were able to get it. This series of evaluations, together with the work of auditors and investigators, yielded these results. This presentation will address the exemplary features of the studies.
Exemplary Evaluations: Characteristics, Conditions, and Considerations
Nick Smith, Syracuse University, nlsmith@syr.edu
Leigh M Tolley, Syracuse University, lmtolley@syr.edu
Although professional societies such as the American Educational Research Association and the American Evaluation Association give awards for outstanding evaluation studies, there has been relatively little research on what exactly makes evaluation work exemplary, the conditions that contribute to outstanding practice, and the barriers or limitations that must be overcome to improve practice. This paper examines 15 studies previously identified as exemplary and investigates the factors, conditions, events, and actions that both contributed to their excellence and impeded their improvement. Preliminary findings suggest that, in addition to employing an effective study design, building strong stakeholder relationships and being responsive to unforeseen contextual factors are key in producing exemplary work. Results of research such as that reported here will not only reshape thinking about the nature of useful evaluation theory, but will provide concrete insights into the improvement of everyday practice in evaluation.

Session Title: Extension Educators and Evaluation
Multipaper Session 612 to be held in CROCKETT A on Friday, Nov 12, 1:40 PM to 3:10 PM
Sponsored by the Extension Education Evaluation TIG
Karen Ballard, University of Arkansas, kballard@uaex.edu
Beyond reporting: Do Extension Educators Use the Results of Evaluations?
Sarah Baughman, Virginia Polytechnic Institute and State University, baughman@vt.edu
Heather Boyd, Virginia Polytechnic Institute and State University, hboyd@vt.edu
Nancy Franz, Virginia Tech, nfranz@vt.edu
Abstract: Increasing demands for accountability in educational programming have resulted in increasing calls for program evaluation in educational organizations. Many organizations include conducting program evaluations as part of the job responsibilities of program staff. Cooperative Extension is a national system offering non-formal educational programs through land grant universities. Many Extension services require that non-formal educational program evaluations be conducted by locally-based educators. Evaluation research has focused primarily on the efforts of professional, external evaluators; the work of program staff with many responsibilities, including program evaluation, has received little attention. This study examines how field-based Extension educators in four Extension services use the results of their evaluations. Four types of evaluation use are measured and explored: instrumental, conceptual, persuasive, and process use. Pilot study (n=35) results indicate that Extension educators engage in persuasive use most often, followed by instrumental and conceptual use. There is some evidence of process use.
Program Evaluation Competencies of Extension Educators: Implications for Professional Development
Megan McClure, Texas A&M University, mmcclure@aged.tamu.edu
Nick Fuhrman, University of Georgia, fuhrman@uga.edu
Chris Morgan, University of Georgia, acm@uga.edu
Abstract: As more Extension systems rely upon Extension educators to evaluate their own programs for purposes of improvement and for demonstrating results, determining the professional development frameworks most useful to educators is key. Evaluation competencies have been an area of interest in Extension evaluation, as well as the broader evaluation discipline, for some time. What skills are most necessary for these Extension educators who evaluate their own programs? Which of these skills can be best taught and what are the best ways of instruction? This presentation focuses on findings from a census of 4-H educators in Oklahoma, and all Extension educators in Georgia, in order to address these issues in the contemporary Extension workforce in these states.
Online Learning Circles: Building Evaluation Capacity Out of Thin Air
Benjamin Silliman, North Carolina State University, ben_silliman@ncsu.edu
Abstract: This paper describes the process of developing and sustaining an evaluation learning circle among community professionals and reports individual and organizational outcomes of that initiative. Over the past two years, 12 Extension professionals, working online and face-to-face, learned basic skills ranging from planning to reporting, as outlined in the 4-H National Evaluation for Impact framework. Each professional then evaluated a local program and prepared reports for key stakeholders. Learning circle members significantly increased skills in each of the seven steps, as indicated by a self-assessment survey. Appreciative inquiry journals documented their growth in understanding and applying the evaluation process as well as in their aspirations for using those skills to improve programs. Results affirm the value of communities of practice and clarify the process by which professionals grow through them and by which individuals and teams build evaluation capacity in organizations.

Session Title: Evaluating Effectiveness in Recruitment, Mentoring, and Human Resources' Functions
Multipaper Session 613 to be held in CROCKETT B on Friday, Nov 12, 1:40 PM to 3:10 PM
Sponsored by the Business and Industry TIG
Ray Haynes, Indiana University, rkhaynes@indiana.edu
Ascertaining the Comparative Yields of Alternate Methods of Employment Recruitment: A Management Application of Empirical Assessment Techniques
Ann Breihan, College of Notre Dame of Maryland, abreihan@ndm.edu
Abstract: This paper offers an empirical analysis of the comparative efficacy and yield associated with different employment recruitment methods. Much of the important research in recruitment strategies’ efficacy in this field predates easy access to on-line recruitment, focusing instead on recruitment through newspapers, radio and television. This analysis presents results based on data about the hire sources and turnover rates experienced from Jan. 2008 to Oct. 2009 by three Mid-Atlantic corporations offering day and residential programming to adults with developmental disabilities. This rich database includes 8,500 applicants. The relative effectiveness of each of the sources of hires is analyzed in terms of absolute yield and relative cost. Because of the size of the database, statistically significant correlations have been possible for a range of occupations, from managerial to direct service positions, including full-time, part-time permanent and temporary positions.
Developmental Network Mentoring: A Logic Model Approach to Evaluating This Form of Organizational Mentoring
Ray Haynes, Indiana University, rk.haynes@indiana.edu
Rajashi Ghosh, Drexel University, rg429@drexel.edu
Abstract: This paper presentation focuses on the evaluation of mentoring within the context of developmental networks or relationship constellations. Organizational mentoring has evolved to manifest in different forms. Hallmark characteristics of this evolution within organizations include the coexistence of organizationally sanctioned formal mentoring programs along with its naturally occurring informal counterpart. Our aim is to present an evaluative framework for determining and ensuring the efficacy of organizationally sanctioned formal mentoring programs that use developmental networks or relationship constellations. Toward this end, we will discuss a logic model that could be used to develop and evaluate organization-based formal mentoring programs that feature developmental networks.
An Historical Study of Effectiveness of Taiwanese Companies' Human Resources (HR) Functions in China
Chien Yu, National Taiwan Normal University, yuchienping@yahoo.com.tw
Abstract: Since the Chinese government adopted open-market policies in 1979, more than 500,000 Taiwanese companies have moved to China or established branch companies there. They began by setting up factories in Shen Zhen, then a small town in Kwangtung Province in southern China, and spread to the Pearl River Delta (Zhusanjiao), the Yangtze River Delta, and other parts of the country. This business migration has now been underway for more than 30 years, and the phenomenon deserves exploration from many angles, one of which is the HR function. The purpose of this study is to explore the effectiveness of Taiwanese companies' HR functions in China.

Session Title: Impact of Social and Economic Development Interventions: Presentation of Synthetic Reviews of Education, Early Childhood Development and Agricultural Extension Programs in Developing Countries
Multipaper Session 614 to be held in CROCKETT C on Friday, Nov 12, 1:40 PM to 3:10 PM
Sponsored by the International and Cross-cultural Evaluation TIG
Marie Gaarder, International Initiative for Impact Evaluation (3ie), mgaarder@3ieimpact.org
The Impact of Agricultural Extension Services
Hugh Waddington, International Initiative for Impact Evaluation (3ie), hwaddington@3ieimpact.org
Birte Snilstveit, International Initiative for Impact Evaluation (3ie), bsnilstveit@3ieimpact.org
Howard White, International Initiative for Impact Evaluation (3ie), hwhite@3ieimpact.org
Jock Anderson, World Bank, janderson@worldbank.org
Abstract: This paper provides a synthetic review of literature examining the effectiveness of interventions in agricultural extension and advisory services in fostering improved outcomes for farmers in developing countries. The review was conducted to Campbell/Cochrane Collaboration standards of systematic review, while also emphasising the context-mechanism configurations which underlie program effectiveness. Quantitative estimates from impact evaluations examining the effectiveness of extension interventions were synthesised using meta-analysis. The review also provides a special focus on the synthesis of quantitative and qualitative literature on farmer field schools (FFSs), the extension modality which has received much attention from policy makers globally in recent years.

Session Title: Perspectives on Conducting Quality Evaluations at Various Levels of Government
Panel Session 615 to be held in CROCKETT D on Friday, Nov 12, 1:40 PM to 3:10 PM
Sponsored by the Government Evaluation TIG
Rakesh Mohan, Idaho State Legislature, rmohan@ope.idaho.gov
Kathryn Newcomer, George Washington University, newcomer@gwu.edu
Abstract: Focusing on the role of evaluation in government, this session will highlight differences among various levels of government (county, city, state, federal, and international) in how quality in evaluation is addressed. The three standards offered by House (1980), truth (validity), beauty (credibility), and justice (fairness), will be used as one framework to illustrate differences among the levels of government. The presentations will highlight not only the strategies that differ among the levels, but also the ways in which the three standards vary in importance and weight. Barriers to achieving the standards and the unique challenges at each level of government will be described. Recommendations for evaluators working at each level will be offered.
Quality Standards in International Development Evaluation
Patrick Grasso, World Bank, pgrasso45@comcast.net
High-quality evaluation of international development assistance is important because of the stakes involved for the well-being of people in poor and middle-income countries, but also challenging because of the difficult and complex environments in which assistance is delivered. Efforts to ensure high-quality evaluation have led to the development of standards by a number of organizations, including the OECD/DAC Evaluation Network of the major donor countries, the Evaluation Cooperation Group of the International Financial Institutions (such as the World Bank and International Monetary Fund), and the United Nations Evaluation Group. Drawing on examples of actual practice, this paper examines the use of these standards in guiding evaluations of the effectiveness of development assistance.
A Pragmatic Approach to Ensuring Quality in County Government Evaluation
David J Bernstein, Westat, davidbernstein@westat.com
County governments, like other local governments, may not have the staff and funding resources to dedicate to evaluation that the United States federal government has. This does not mean that county governments do not dedicate some level of resources to ensuring that their programs are efficient, effective, and equitable. To paraphrase the late Speaker of the House Tip O’Neill, if “all politics is local,” then all local evaluations are political. The presenter will reflect on the 17 years he spent conducting evaluations, developing and analyzing performance measurement systems, and advising local government managers and elected officials on ways to ensure quality evaluation and accountability in county government. He will share his philosophy on ways to ensure quality evaluation standards in a context that is inherently political.
Rigorous Evaluation and City Government: Examples From New York City's Center for Economic Opportunity
Kathryn Henderson, Westat, kathrynhenderson2@westat.com
Debra Rog, Westat, debrarog@westat.com
Jennifer Hamilton, Westat, jenniferhamilton@westat.com
In 2006, the NYC Mayor’s Office established the Center for Economic Opportunity (CEO) to work with City agencies to design and implement evidence-based initiatives to reduce poverty and increase self-sufficiency through education and employment programs for disconnected youth and other at-risk and low-income populations. A key hallmark of the initiative is an explicit policy on evaluation as a tool for accountability and decision-making; therefore, CEO has contracted with Westat and Metis Associates to lead evaluation efforts for about 35 of CEO’s anti-poverty programs. This presentation outlines the role of these evaluations in enabling CEO to (1) measure the benefits to participants in these programs, and (2) inform the work of both policy makers and program managers. The discussion will include the challenges confronted, the strategies used to deal with them, and the ways that results from the evaluations have been used to strengthen program operations and make funding determinations.

Session Title: Reaching for the Pot of Gold: Tested Techniques for Enhancing Evaluation Quality With Trustworthiness and Authenticity
Multipaper Session 616 to be held in SEGUIN B on Friday, Nov 12, 1:40 PM to 3:10 PM
Sponsored by the Qualitative Methods TIG
Eric Barela, Partners in School Innovation, ebarela@partnersinschools.org
Eric Barela, Partners in School Innovation, ebarela@partnersinschools.org
Lessons Learned From Planning and Conducting Site Visits
Mika Yamashita, Academy for Educational Development, myamashita@aed.org
Abstract: Site visits are a common feature of many program evaluations, but not much has been written about them. This paper presents our experiences and lessons learned from planning and implementing site visits for several formative evaluation projects. The projects we evaluated were education support programs provided by intermediary organizations and schools. We discuss strategies we used to establish shared norms and expectations within a site visit team and to communicate better with the sites about data collection requests. We also present our post-site-visit reporting strategies. Building upon the work of Lawrenz and colleagues (2002, 2003), the paper aims to expand our understanding of the strategies and steps we can take to improve site visit planning and implementation.
A Recipe for Success: Lessons Learned for Using Qualitative Methods Across Project Teams
Nicole Leacock, Washington University in St Louis, nleacock@wustl.edu
Stephanie Herbers, Washington University in St Louis, sherbers@wustl.edu
Nancy Mueller, Washington University in St Louis, nmueller@wustl.edu
Virginia Houmes, Washington University in St Louis, vhoumes@wustl.edu
Lana Wald, Washington University in St Louis, lwald@wustl.edu
Eric Ndichu, Washington University in St Louis, endichu@gwbmail.wustl.edu
Abstract: Collecting qualitative data adds important context to an evaluation. It also requires an investment in staff time, skills, and other resources. Evaluation centers must strategically manage their staff and resources in order to efficiently implement qualitative data collection and analyses while still maintaining quality. This is especially true in circumstances where there are several project teams taking on multiple evaluations with different needs, goals, and timelines. In this presentation we will draw on our experiences as an evaluation center that consistently employs qualitative methods as part of a mixed methods approach. This presentation will outline the qualitative data collection and analysis processes utilized by multiple projects within our center. We will also present lessons learned for building project team capacity for data collection, analysis, and dissemination of results.
Applications of Credibility Techniques to Promote Trustworthiness of Findings in a Qualitative Program Evaluation: A Demonstration
John Hitchcock, Ohio University, hitchcoc@ohio.edu
Jerry Johnson, Ohio University, jerry.johnson.ohiou@gmail.com
Bonnie Prince, Ohio University, bonnielprince@aol.com
Abstract: The purpose of this paper is to demonstrate how a series of credibility techniques was applied to a qualitatively-oriented program evaluation. The intervention of interest was designed to promote culturally aware and responsive pedagogy among K-12 teacher candidates pursuing degrees in higher education, and the authors serve as external program evaluators. Evaluation objectives were to offer a series of formative inputs and then transition into summative findings. The nature and scope of program delivery suggested that a qualitative case-study design would yield the most informed findings. In order to address potential researcher (i.e., evaluator) bias as a validity threat, a series of qualitative techniques was employed to strengthen the design, including but not limited to triangulation, member checks, negative-case analysis and sampling, and referential adequacy. As these techniques are established in the literature, the focus will be on demonstrating their application to promote evaluation quality.

Session Title: Building Capacity to Monitor and Evaluate Development Policies, Programs, and Projects: Everyone Wants to do It, but How Should It Be Done to Ensure High Quality?
Panel Session 617 to be held in REPUBLIC A on Friday, Nov 12, 1:40 PM to 3:10 PM
Sponsored by the International and Cross-cultural Evaluation TIG
Linda Morra Imas, World Bank, lmorra@worldbank.org
Abstract: Developing country governments around the world are planning and trying to implement results-based monitoring and evaluation (M&E) systems. The developed world has recognized the capacity gaps, and the Paris Declaration called for donors to increase technical cooperation and build capacity. The 2008 Accra Agenda for Action reinforced the need to improve partner countries’ capacities to monitor and evaluate the performance of public programs. However, despite some effort, a late 2008 report shows that capacities are still weak in many governments: M&E implementation lags planning, and quality is generally low. Many are now engaged in renewed efforts to build M&E capacity in the development context. But what works? What capacity-building efforts are needed to yield good quality M&E? This panel explores four types of M&E capacity building efforts, experience with them based on actual cases, their advantages and disadvantages, and factors important for their transfer into behavior change and quality M&E.
Evaluation Capacity Building: Lessons Learned From the Field in Botswana
Robert Lahey, REL Solutions Inc, relahey@rogers.com
Bob Lahey presents on hands-on work in Botswana to develop and put in place national M&E capability, identifying factors that worked to support the efforts as well as those that did not, the advantages and disadvantages of such a field-based approach for evaluation capacity building, and lessons for the broader evaluation community on building quality M&E systems. M&E efforts in Botswana are driven by two key initiatives, Vision 2016 and Public Sector Reform, which led to a desire to increase “results” measurement and reporting. Factors identified and discussed as necessary for good quality monitoring and evaluation development and implementation include: (1) drivers; (2) leadership and commitment; (3) capacity “to do”; (4) capacity “to use”; and (5) oversight. The lessons, however, are different.
Evaluation Capacity Building in the International Context: View From the Academy
Robert Shepherd, Carleton University, robert_p_shepherd@carleton.ca
Susan Phillips, Carleton University, susan_phillips@carleton.ca
The authors argue that for quality M&E in the development context, we need more than skilled evaluators who are good at measurement and can monitor and evaluate projects. Evaluators also need to understand the international development context and be able to evaluate greater levels of complexity, including program, joint program, organizational, and country-level evaluations. But even this alone will not build sufficient capacity without building the demand side, that is, educating public managers who inculcate an evaluative culture in their organizations, recognize quality evaluations, and demand quality M&E in their programs. Universities can have a major role in building such capacity by offering appropriate degrees with relevant content and undertaking related research. But relevance, rigor, and reach are critical. The authors explain what this would mean and the major challenges to accomplishing it. They illustrate parts of the dream with programs now underway.
Building Capacity in Development Evaluation: The International Program in Development Evaluation Training
Linda Morra Imas, World Bank, lmorra@worldbank.org
This presentation describes the origin and nature of this now 10-year-old experiment in building capacity in international development evaluation and who it trains. A collaboration of the World Bank and Ottawa’s Carleton University, the program has evolved and grown offshoots in the form of customized local, national, and regional offerings, some of which are now becoming institutionalized themselves. Several thousand people have taken IPDET training in Canada or through one of its offshoots. Advantages and disadvantages of the approach are discussed, as well as how the program is itself evaluated to help ensure provision of high quality M&E training and to learn how, and the extent to which, it contributes to the practice of high quality M&E in developing countries. Features that evaluations to date have found key to the program’s success are identified, in addition to challenges going forward.
Regional Centers for Evaluation Capacity Development
Nidhi Khattri, World Bank, nkhattri@worldbank.org
This session presents efforts to strengthen institutions at the regional level to supply cost-effective, relevant, and demand-driven quality M&E capacity-building services through a new initiative supported by multiple international development organizations, called the Regional Centers for Learning on Evaluation and Results (CLEAR). CLEAR stems from high demand for, but limited availability of, relevant M&E services, scarce quality programs, and a small pool of local experts, leaving countries dependent on international supply, which is expensive, untimely, and not necessarily customized to specific needs. The initiative has two components: (1) regional centers that will provide applied in-region training, technical assistance, and evaluation work, and (2) global learning to strengthen practical knowledge-sharing on M&E across regions on what works, what does not, and why. Advantages and disadvantages of the approach are discussed, as well as the major challenges going forward and how the quality of M&E capacity-building services will be evaluated.

Session Title: Examining Heart Disease and Stroke: Sharing Lessons to Improve Evaluation Quality
Multipaper Session 618 to be held in REPUBLIC B on Friday, Nov 12, 1:40 PM to 3:10 PM
Sponsored by the Health Evaluation TIG
Cindy Wong, Brandeis University, cindyjwong@gmail.com
Evaluating a Statewide Health System Improvement Effort: Reducing Time to Treatment for Heart Attack and Stroke Patients
Megan Mikkelsen, Washington State Department of Health, megan.mikkelsen@doh.wa.gov
Abstract: The Centers for Disease Control and Prevention (CDC)-funded Emergency Cardiac and Stroke Technical Advisory Committee (TAC) brings statewide stakeholders together to establish a coordinated emergency response system that improves the quality of care for heart attack and stroke patients in Washington State. The evaluation follows the CDC framework for evaluation and measures short- and long-term outcomes that should lead to better emergency response and hospital care for individuals having a heart attack or stroke. The evaluation of this program includes a variety of methods and sources to determine success. Process measures focus on stakeholder awareness and support of the work of the TAC, adoption of the emergency medical service (EMS) and hospital protocols recommended by the TAC, and statewide trainings. Measurement of EMS and hospital use of guidelines created by the TAC demonstrates progress toward the time-to-treatment and quality-of-care goals.
Assuring Quality in the Design of a Multi-state Surveillance Evaluation
Monica Oliver, Centers for Disease Control and Prevention, ior3@cdc.gov
Rachel Barron-Simpson, Centers for Disease Control and Prevention, rbarronsimpson@cdc.gov
Abstract: From 2005-2007, the Centers for Disease Control and Prevention (CDC), Division for Heart Disease and Stroke Prevention funded four states to develop and implement a state cardiovascular health examination (CVHSE) survey. The CVHSE survey project combines the use of patient interviews and health examinations to facilitate the collection of state-level cholesterol and blood pressure data. This presentation describes the process-and-outcome evaluation undertaken to understand the implementation challenges and opportunities of the CVHSE survey. It discusses how the breadth of expertise and perspectives on the evaluation team, in tandem with focused pre-evaluation interviews with pilot states, provided quality assurance for the design of the full evaluation. Ernest House describes ‘beauty’ in evaluation quality as involving, among other things, authenticity and form; the results of the CVHSE survey evaluation demonstrate that the contextual elements of each pilot case have informed ‘beauty’ for future iterations of the CVHSE survey.
Outcome Measurement in Health Evaluation: Evaluation of the Heart & Stroke Foundation of Ontario’s (HSFO) Hypertension Management Initiative (HMI)
Shirley Von Sychowski, Heart and Stroke Foundation of Ontario, svonsychowski@hsf.on.ca
Abstract: In this paper, we describe the use of a prospective delayed design to evaluate the HMI and to assess its impact independent of community trends, as well as the high-level results the HMI has yielded after three years in the field. The HMI is an inter-professional chronic disease management program in 11 communities across Ontario aimed at improving the management and control of hypertension. Hypertension, a major risk factor for cardiovascular mortality, is ranked as the highest diagnostic category for drug expenditures in Canada. Interventions can make a difference if significant change is achieved in blood pressure (BP) control: clinical trials demonstrate that a 3 mmHg reduction in BP leads to an 8% reduction in mortality due to stroke, a 5% reduction in mortality due to coronary heart disease, and a 4% decrease in all-cause mortality.
Evaluation at the Community Level: The Importance of Quality
Aisha Tucker Brown, Northrop Grumman Corporation, atuckerbrown@cdc.gov
Alberta Mirambeau, Centers for Disease Control and Prevention, amirambeau@cdc.gov
Abstract: The Division for Heart Disease and Stroke Prevention (DHDSP) supports its grantees in conducting quality evaluation at the state and local level. It is imperative that evaluations conducted at the community level be of good quality and address effectiveness and accountability. Thus, the division provides grantees with tools and technical assistance to produce quality evaluation results. Specifically, DHDSP provides evaluation leadership and support to the Mississippi Delta, an area with a particularly high burden of heart disease and stroke. DHDSP’s support includes aiding in the development of strategies to evaluate program initiatives and providing ongoing evaluation consultation and expert advice to program staff and their mini-grantees in the Delta Region in an effort to build evaluation capacity and ensure quality evaluations are being conducted at the community level. This presentation will focus on the quality of the evaluation work of two Mississippi Delta community level projects.

Session Title: Preliminary Results of Prevention Capacity Building in Science-based Teen Pregnancy, HIV, and STI (Sexually Transmitted Infections) Prevention
Multipaper Session 619 to be held in REPUBLIC C on Friday, Nov 12, 1:40 PM to 3:10 PM
Sponsored by the Collaborative, Participatory & Empowerment Evaluation TIG
Duane House, Centers for Disease Control and Prevention, lhouse1@cdc.gov
Thomas Chapel, Centers for Disease Control and Prevention, tchapel@cdc.gov
Capacity Change Over Time: Is Prevention Capacity and Capacity Utilization Changing Across Local Partners?
Duane House, Centers for Disease Control and Prevention, lhouse1@cdc.gov
Catherine Lesesne, Centers for Disease Control and Prevention, clesesne@cdc.gov
Abstract: Using the local needs assessment and TA relationship ratings collected annually on local partners, the study measured each organization’s prevention capacity as well as characteristics of the relationship between the local organization and its technical assistance (TA) provider. Preliminary findings will be presented for 70 local organizations with data collected at two time points between 2007 and 2009. Findings include: Funded partners provided a total of 1793 hours of training and 1398 hours of TA to local sites. Local partner engagement in TA increased significantly from time 1 (M=3.89, SD=.75) to time 2 (M=4.04, SD=.69) as rated by TA providers, F(1,69)=3.80, p<.05. In addition, the relationship quality between TA provider and local partner increased from time 1 (M=4.26, SD=.57) to time 2 (M=4.38, SD=.58), F(1,69)=5.22, p<.05. Local organization ratings of self-efficacy regarding prevention capacity increased significantly from time 1 (M=3.36, SD=.70) to time 2 (M=3.67, SD=.81), F(1,69)=9.77, p<.05.
