Session Title: Evaluation Anthropology Praxis Today: A Five Year Retrospective
Panel Session 662 to be held in Lone Star A on Friday, Nov 12, 4:30 PM to 6:00 PM
Sponsored by the Presidential Strand
Chair(s):
Jacqueline Copeland-Carson, Copeland Carson & Associates, jcc@copelandcarson.net
Discussant(s):
David Fetterman, Fetterman & Associates, fettermanassociates@gmail.com
Rodney Hopson, Duquesne University, hopson@duq.edu
Abstract: Evaluation Anthropology Praxis: Charting a New Future
Ethnography and Mental Health Consumers
Charity Goodman, United States Department of Health and Human Services, charity.goodman@samhsa.hhs.gov
The mission of the Center for Mental Health Services, Substance Abuse and Mental Health Services Administration in the US Department of Health and Human Services is to promote effective mental health services in every community, including the Center itself. The research will measure the extent to which consumer issues are represented in our internal activities and co-worker attitudes. What is the role of ethnography in examining workplace culture, and how can culture change be promoted? Our methodology combines in-depth ethnographic interviews with a written survey instrument to measure culture change. As part of this evaluation, we will collect information on how comfortable staff feel in disclosing their experiences with the mental health system to co-workers and/or supervisors. The paper will also explore how mental health consumers and ethnography are linked, as well as how anthropology can enhance the quality of evaluation research.
What Evaluation Anthropologists Bring to the Party: A Systems Approach to Context
Michael D Lieber, University of Illinois at Chicago, mdlieber@uic.edu
Eve Pinsker, University of Illinois at Chicago, epinsker@uic.edu
Sponsors of social interventions now increasingly recognize that complex human problems need solutions that affect multiple organizational levels (individual, institutional, community) and hence have been calling on evaluators to assess “system change.” Gregory Bateson and Margaret Mead, anthropologists and systems theorists, developed powerful concepts for addressing systems and system change in relation to social problems, including ways of analyzing pattern, context, meaning, and what complexity theorists now call “emergence.” The context of any action or utterance is the relationship between the interacting parties, which is shaped by the pattern of messages that they exchange. The assumptions underlying these patterns are what anthropologists conceptualize as “culture.” Ethnographic methods target data that facilitate the analysis of context. One example of practical application of these concepts is the evaluation of coalition-building as a change strategy in community health projects, addressing the question: “How do we know when a coalition has produced system change?”
Making Evaluation Real: Incorporating Praxis Into Graduate Study of Evaluation Anthropology
Mary Odell Butler, University of Maryland, maryobutler@verizon.net
Evaluation anthropology tends to focus on approaches such as utilization-focused evaluation, participatory evaluation, and empowerment evaluation that incorporate stakeholders in the design of evaluation questions, the implementation of data collection, and the analysis of evaluation results. Completion of such evaluations, even on a small scale, is not often feasible within a one-semester course because of the need to bring stakeholders into the evaluation design and to consult with them regularly throughout the evaluation, along with the lengthy process of ethnographic interviewing and analysis. This paper deals with the use of problem-based learning and case study approaches drawn from evaluation pedagogy to support the practical learning of ethnographically based evaluation. An example of a learning scenario will be presented and its usefulness critiqued.

Session Title: Systems Perspectives on Using Logic Models to Improve Evaluation Quality
Panel Session 663 to be held in Lone Star B on Friday, Nov 12, 4:30 PM to 6:00 PM
Sponsored by the Systems in Evaluation TIG and the Program Theory and Theory-driven Evaluation TIG
Chair(s):
Patricia Rogers, Royal Melbourne Institute of Technology, patricia.rogers@rmit.edu.au
Abstract: Truth, Beauty and Justice have always been at the heart of logic models. The very notion of "logic" has embedded within it the idea of exposing the truth of an argument in elegant, beautiful ways. Justice is served by opening up and revealing embedded assumptions and values. But logic models do not always meet these ideals. To what extent does current practice in using logic models enhance the quality of evaluation? How should they be used? How critical are quality logic models to quality evaluation? What constitutes a quality logic model? This presentation features three speakers who have deeply considered these issues in different ways in different parts of the world. They draw on these experiences and observations, together with insights from systems approaches to evaluation.
A Systems Perspective on Evaluation Quality and Logic Models
Bob Williams, Independent Consultant, bobwill@actrix.co.nz
The systems field can be considered the ground from which logic models sprang. Indeed, the field has generated many different kinds of logic models depending on the orientation and purpose of the systemic inquiry. So if we were to establish criteria by which to judge the quality of logic models used in evaluation, then the systems field is a good place to start. Indeed, the core systems features can be seen to match the three core features of quality: inter-relationships create beauty, perspectives enable truth, and boundaries promote justice. Bob's presentation will explore what those criteria could be.
Representing Simple, Complicated and Complex Aspects in Logic Models for Evaluation Quality
Patricia Rogers, Royal Melbourne Institute of Technology, patricia.rogers@rmit.edu.au
Logic models do not always improve the quality of evaluation. They can be inaccurate, misrepresenting what programs, policies and projects do, what they produce and how they work. They can be ugly, ‘spaghetti’ diagrams only comprehensible and beautiful to those who created them. They can entrench injustice by only presenting the perspectives of the powerful. This presentation sets out a situationally responsive approach to developing, representing and using logic models in evaluation to appropriately address simple, complicated and complex aspects of projects, programs, policies and strategies. It shows how careful attention to these can improve the quality of logic models and of evaluations that are based on them.
Changing Institutional Approaches to Using Logic Models
Richard Hummelbrunner, OEAR Regional Development Consultants, hummelbrunner@oear.at
Richard Hummelbrunner has dealt extensively with the dominant logic model in the international development field (the LogFrame). Over the past years he has been involved in developing alternative models that draw on broader intellectual traditions, including the systems field. His presentation will sum up the main lessons learned from using these models, including the traps hidden in their presumptive beauty of simplicity, and he will outline some variations or alternatives that do justice to the broader range of perspectives found in real-life situations and thus depart from the assumption of one single logic or ‘truth’ in projects or programs.

Session Title: The Evaluative Journey: Implementing Evaluation Activities That Facilitate Ongoing Decision Making
Demonstration Session 664 to be held in Lone Star C on Friday, Nov 12, 4:30 PM to 6:00 PM
Sponsored by the Collaborative, Participatory & Empowerment Evaluation TIG
Presenter(s):
Paul St Roseman, Sakhu and Associates, pstroseman@sakhuandassociates.com
Abstract: As evaluation continues to develop as both a discipline and an essential process of organizational development, there is increased demand for evaluators to develop responsive and inclusive service delivery strategies. This demonstration presents approaches used to focus and frame evaluation efforts as an ongoing decision-making process. Topics to be examined include: (1) administrative coaching as a pathway to develop, interpret, and utilize evaluation products; (2) co-authorship as a tool for data analysis; and (3) web-based resources as a tool for virtual collaboration. This presentation is most appropriate for evaluation practitioners who collaborate with administrators and their staff to design, implement, sustain, and utilize evaluation products.

Session Title: Multiple Sites, Multiple Layers, Multiple Players: Lessons From the Field on Keeping Quality High and Frustration Low
Panel Session 665 to be held in Lone Star D on Friday, Nov 12, 4:30 PM to 6:00 PM
Sponsored by the Non-profit and Foundations Evaluation TIG
Chair(s):
Jacqueline Williams Kaye, Atlantic Philanthropies, j.williamskaye@atlanticphilanthropies.org
Abstract: Multi-site initiative evaluations face multiple layers of complexity rooted in factors such as variation in local cultures and contexts, sheer numbers of entities involved, the initiative design itself, as well as other factors. This session’s panel of experts will discuss strategies to address issues such as creating appropriate communication and coordination mechanisms across stakeholders; “right-sizing” the evaluation relative to the scope of the initiative and capacity of the players; balancing the desire for comparable, cross-site indicators with flexibility for local customization; when and how to make technical assistance available to local sites; and maintaining quality and integrity in the data collection, analysis and reporting processes. The perspectives of evaluation staff from two large funding organizations as well as evaluation professionals representing both national and local evaluator roles will provide hard-won insights into strategies for ensuring high quality multi-site evaluations that also respect the interests and constraints of stakeholders involved.
Leveraging the Opportunity in Having Multiple Evaluation Teams
Jacqueline Williams Kaye, Atlantic Philanthropies, j.williamskaye@atlanticphilanthropies.org
Steven LaFrance, Learning for Action Group, steven@lfagroup.com
Stephen Baker, Learning for Action Group, steven@lfagroup.com
Malu Gonzalez, University of Texas, El Paso, mlgonzalez6@utep.edu
The ability to connect and convene is an important supplement to grant dollars. Providing opportunities for grantees to network often enhances a funder’s initiative. It’s less common for funders to foster relationships among organizations providing support services or technical assistance to grantees. Using experience from a four-site initiative with local evaluation teams, this presentation explores how relationships among the evaluators can be established and maintained. Then we’ll discuss how evaluation peers – in the same role but in different places – can enhance the quality of their individual efforts as well as the quality of the overall initiative evaluation. How can individual evaluation team strengths be used to raise all boats? When is having a group of peers to provide input useful, and when is it “too many cooks”? What happens when the evaluation process works better in some sites than others? These and other questions will be explored among the panelists, and with the audience.
Aligning Interests, Needs and Methods: Managing Cross-site Evaluation
Meridith Polin, Public/Private Ventures, mpolin@ppv.org
Integrated service initiatives are highly complex; they include multiple service providers who have their ‘own’ missions and data systems, but also must adopt the ‘shared’ mission of the initiative including integrated data collection and management practices. Unique opportunities and challenges abound in the design and implementation of an evaluation in this integrated framework. This presentation describes how Public/Private Ventures, one of Elev8’s evaluation partners, developed and manages ongoing data collection for the Initiative from two perspectives: the site level and across four sites nationally. The presentation will discuss (1) the need for consistency as well as flexibility within a data collection system, (2) the process for ongoing feedback and technical assistance, (3) practices for establishing and fostering collaboration among the partners, and (4) features of the system that ensure its long-term sustainability and use among service providers.
Evaluator Capacity and Evaluation Quality in a Multi-site National Initiative
Scott Hebert, Sustained Impact, shebert@sustainedimpact.com
Thomas Kelly, Annie E Casey Foundation, tkelly@aecf.org
Making Connections is a ten-year community change initiative in ten urban neighborhoods across the US. Its cross-site evaluation has relied on the ability of local evaluators to implement effective evaluations of process and outcomes and the funder invested time and resources in building local site capacity to participate in and use evaluations. Even with adequate resources and a clear demand for an evaluation, maintaining communication and coordination across ten local teams with the national cross-site team was a constant struggle throughout the decade. Changes in staffing and local circumstances and limited time and attention of busy implementers limited the ability to achieve and maintain a shared national framework for evaluation, including minimum standards and expectations for evaluation comprehensiveness and use at local and national levels. This presentation will summarize key strategies for addressing evaluation capacity and quality with multiple evaluation partners and clients at local and cross-site levels.

Session Title: The Case for Brief(er) Measures
Panel Session 666 to be held in Lone Star E on Friday, Nov 12, 4:30 PM to 6:00 PM
Sponsored by the Quantitative Methods: Theory and Design TIG
Chair(s):
Lee Sechrest, University of Arizona, sechrest@email.arizona.edu
Abstract: It is often assumed that longer measures will be better than shorter measures; that may not always be the case. Determining how measures might be shortened without important cost to reliability or validity would be of great potential value to program evaluators and other researchers. Some instances of the utility and even superiority of single item measures have been identified, and the principles underlying them have been described. Very brief scales have been developed by applying methods of intensive data analysis, often resulting in better predictions of criteria than are possible with the full scales. Moreover, similar methods can be effective in reducing even very large omnibus measures and sets of measures to a small subset of items or scales that can, by regression methods, effectively reproduce the information in the total set. Description of these approaches and methods and illustration of their applications will be the focus of this panel.
Single Item Measures
Lee Sechrest, University of Arizona, sechrest@email.arizona.edu
Single item measures are commonplace in social science research: sex, age, marital status, education, income, and many other variables are routinely assessed by single items. Rossiter has observed that single items may be better than multiple items when the characteristic being measured can be conceptualized as concrete and singular. Such characteristics as age, marital status, and income fit those requirements, but so do many other possible characteristics of interest in program evaluation and in social science more generally. Nicotine addiction, medical conditions (erectile dysfunction), liking for various “objects,” intentions, and some attitudes, have, among other characteristics, been found to be adequately, or even better, assessed by single item measures. Careful consideration should be given in planning for evaluations to whether characteristics to be assessed could be classified as concrete and singular and, therefore, potentially quantifiable by a single item.
Very Brief Scales: Carved Out Scales
Patrick McKnight, George Mason University, pmcknigh@gmu.edu
Evaluators long for short, psychometrically sound instruments for all applications because shorter measures reduce respondent burden and missing data. Psychometric theory, however, dictates the opposite: we must increase the length of our measures to improve reliability and validity. Previous research indicates that some short measures - even single items - provide equal or greater predictive validity compared to longer versions. These findings led us to create a procedure to empirically test whether shorter versions of a measure can provide equal or better predictive validity than the longer versions. The procedure involves randomly generating and comparing the predictive/criterion validity of small item subsets via a genetic algorithm - an iterative multiple comparison approach. Only the best-performing subsets survive each comparison round, resulting in some subsets "winning" after several hundred comparisons. The purpose of this talk is to demonstrate the procedure and show that in many cases, shorter measures do outperform longer measures.
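As a rough illustration of the kind of iterative subset-comparison procedure described above, the following Python sketch selects a small item subset by criterion validity using a simple genetic-algorithm-style loop. The simulated data, subset size, and survival rule are invented for illustration and are not the presenters' actual procedure.

import numpy as np

rng = np.random.default_rng(0)

# Simulated data: 200 respondents, a 20-item scale, and one criterion variable.
n_respondents, n_items, subset_size = 200, 20, 4
items = rng.normal(size=(n_respondents, n_items))
criterion = items[:, :3].sum(axis=1) + rng.normal(scale=2.0, size=n_respondents)

def criterion_validity(subset):
    """Correlation between the summed subset score and the criterion."""
    score = items[:, list(subset)].sum(axis=1)
    return abs(np.corrcoef(score, criterion)[0, 1])

def mutate(subset):
    """Swap one item in the subset for a random item outside it."""
    subset = list(subset)
    outside = [i for i in range(n_items) if i not in subset]
    subset[rng.integers(len(subset))] = int(rng.choice(outside))
    return tuple(sorted(subset))

# Generation 0: random candidate subsets.
population = [tuple(sorted(rng.choice(n_items, subset_size, replace=False)))
              for _ in range(50)]

for generation in range(100):
    # Keep only the best-performing half of the subsets ...
    population.sort(key=criterion_validity, reverse=True)
    survivors = population[:25]
    # ... and refill the population with mutated copies of the survivors.
    population = survivors + [mutate(s) for s in survivors]

best = max(population, key=criterion_validity)
full_scale_r = abs(np.corrcoef(items.sum(axis=1), criterion)[0, 1])
print(f"best {subset_size}-item subset: {best}, r = {criterion_validity(best):.3f}")
print(f"full {n_items}-item scale: r = {full_scale_r:.3f}")

With simulated data like this, the surviving short subsets often match or exceed the full scale's criterion correlation, which is the pattern the talk describes.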
Abbreviated Omnibus Measures
Mei-kuang Chen, University of Arizona, kuang@email.arizona.edu
Yarkoni has shown recently that the application of a genetic algorithm for item selection made it possible to reduce the 500 items on eight “broadband” personality measures constituting 200 separate scales to a total of 181 items that correlated quite highly with the original scales. The method is computer (and data) intensive, but it has substantial potential importance and wide applicability in many research settings, including program evaluation. Analyses emulating genetic algorithms show that great compression can be achieved in evaluation measurement when the number of variables is reasonably large and many measures are inter-correlated at least modestly. These analyses will have to be carried out on measures and data sets of general interest in order for investigators to be able to use the results to plan and carry out their own work.
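A minimal sketch, with simulated data, of the regression idea behind such abbreviation: regress the full-scale score on a retained item subset and check how well the abbreviated set reproduces the original score. The item counts and the "prior selection step" here are placeholders, not the figures from the Yarkoni study.

import numpy as np

rng = np.random.default_rng(1)

# Simulated omnibus instrument: 300 respondents, 60 items sharing a common factor.
n, n_items = 300, 60
items = rng.normal(size=(n, n_items)) + rng.normal(size=(n, 1))
full_scale = items.sum(axis=1)

# Suppose a prior item-selection step (e.g. a genetic algorithm) retained 12 items.
retained = rng.choice(n_items, 12, replace=False)
X = np.column_stack([np.ones(n), items[:, retained]])

# Regression of the full-scale score on the retained items ...
beta, *_ = np.linalg.lstsq(X, full_scale, rcond=None)
predicted = X @ beta

# ... and how well the abbreviated set reproduces the original score.
r = np.corrcoef(predicted, full_scale)[0, 1]
print(f"correlation of regression-reproduced score with full scale: r = {r:.3f}")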

Session Title: Evaluation 201: Evaluation Skills Needed After Coursework
Skill-Building Workshop 667 to be held in Lone Star F on Friday, Nov 12, 4:30 PM to 6:00 PM
Sponsored by the Cluster, Multi-site and Multi-level Evaluation TIG
Presenter(s):
Martha Ann Carey, Maverick Solutions, marthaann123@sbcglobal.net
Molly Engle, Oregon State University, molly.engle@oregonstate.edu
Abstract: Research methods and subject matter expertise are generally thought of as the necessary skills for beginning to do evaluations. In addition, however, there are essential skills not found in textbooks or college courses. Drawing from their experiences in many roles – as evaluators, evaluation team members, and program developers – in settings including academia, government, and nonprofit organizations, the workshop presenters will provide an overview of tools for planning evaluations beyond logic models. These tools, which include monitoring, conflict management, team building, and communication, among others, are especially important in working with cluster and multisite programs that involve complexity, as well as opportunities, beyond single-site evaluations; especially relevant topics are teamwork and multiple perspectives. The audience for this workshop includes evaluators new to the field and their supervisors. Exercises will include working with examples suggested by workshop members, as well as examples of funded multisite programs.

In a 90 minute Roundtable session, the first rotation uses the first 45 minutes and the second rotation uses the last 45 minutes.
Roundtable Rotation I: Examining Collaboration in an Evaluation of a Large Scale Civic Education Program
Roundtable Presentation 668 to be held in MISSION A on Friday, Nov 12, 4:30 PM to 6:00 PM
Sponsored by the Collaborative, Participatory & Empowerment Evaluation TIG
Presenter(s):
Liliana Rodriguez-Campos, University of South Florida, liliana@usf.edu
Connie Walker-Egea, University of South Florida, cwalkerpr@yahoo.com
Michael Berson, University of South Florida, berson@coedu.usf.edu
Abstract: Collaboration is the ability to actively work with others in a mutually beneficial relationship in order to achieve a shared vision not likely to occur otherwise. The level of collaboration varies for each evaluation, depending on the situation within the evaluation. The collaborative relationship between the evaluators and stakeholders was a key component in achieving the goals and objectives of an evaluation of a civic education program. The group of collaboration members was the core decision-making body for the evaluation and was deeply involved in the collaborative effort. The supportive evaluation environment facilitated the collaboration and actively engaged the key stakeholders during the evaluation process. These key stakeholders had a high level of collaboration, assuming responsibility for the entire program and developing appreciation of all aspects of their work. This roundtable will examine the contribution and the role of the collaboration members throughout this evaluation process.
Roundtable Rotation II: Using Mixed Methods to Evaluate a School Based Civic Engagement Initiative
Roundtable Presentation 668 to be held in MISSION A on Friday, Nov 12, 4:30 PM to 6:00 PM
Sponsored by the Collaborative, Participatory & Empowerment Evaluation TIG
Presenter(s):
Michael Berson, University of South Florida, berson@usf.edu
Liliana Rodriguez-Campos, University of South Florida, liliana@usf.edu
Aarti P Bellara, University of South Florida, abellara@mail.usf.edu
Abstract: Accountability, rigorous evidence, and causality are common terms used to describe federally funded educational program evaluations, and they often imply the use of experimental methods. Given the United States Department of Education’s 2003 priority on rigorous scientific methods (ED, 2003), evaluators have engaged in scholarly discourse on the strengths and weaknesses of this policy (American Evaluation Association [AEA], 2003; Bickman et al., 2003; Chatterji, 2004, 2009; Cooksy, Mark, & Trochim, 2009; Donaldson & Christie, 2005; Julnes & Rog, 2007; Mark, 2003; Scriven, 2003, 2009). Applied evaluation is a practical tool that takes on multiple forms based upon the context and nature of the specific program, and often these programs require multiple methods that complement each other (Rallis & Rossman, 2003) and provide cross-checks on evaluation findings. The purpose of this paper is to describe the successful use of a mixed-method design to evaluate a federally funded school-based civic initiative.

Session Title: Evaluating Community Capacity Building as a Prevention Strategy
Multipaper Session 669 to be held in MISSION B on Friday, Nov 12, 4:30 PM to 6:00 PM
Sponsored by the Special Needs Populations TIG
Chair(s):
Laura Leviton,  Robert Wood Johnson Foundation, llevito@rwjf.org
Community Capacity and Evaluation
Presenter(s):
Carter Roeber, LTG Associates Inc, croeber@ltgassociates.com
Niel Tashima, LTG Associates Inc, partners@ltgassociates.com
Abstract: The Strengthening What Works initiative is evaluating the efforts of eight grantees who are working toward preventing intimate partner violence (IPV) at the primary and secondary levels. One challenge in evaluating primary prevention is the difficulty of measuring the effects of primary-level interventions on the rates of IPV in a particular community. With a phenomenon as complex as IPV, it is difficult to isolate the effects of a single input, such as a media campaign. Given this complexity, the grantees intervene on multiple fronts: enhancing social networks, improving collaboration between community organizations, and building community capacity. Research has demonstrated that improvements in community capacity, a feature of all true community-based organizations, increase protective factors against IPV and other public health issues. LTG Associates is collaborating with grantees to develop the means to measure how these multiple interventions increase community capacity and provide additional protection against IPV.

Session Title: Evaluation Opportunities Within a National Science Foundation (NSF) Program
Multipaper Session 670 to be held in BOWIE A on Friday, Nov 12, 4:30 PM to 6:00 PM
Sponsored by the Independent Consulting TIG
Chair(s):
Arlen Gullickson,  Western Michigan University, arlen.gullickson@wmich.edu
Discussant(s):
Peter Saflund,  The Saflund Institute, psaflund@earthlink.net
Wanted: Evaluators!
Presenter(s):
Lori Wingate, Western Michigan University, lori.wingate@wmich.edu
Abstract: The NSF’s Advanced Technological Education program makes between 75 and 90 new awards each year, all of which require evaluation. Program grantees are in need of skilled and responsive evaluators, but they don’t know where to find them and often are not even sure what an evaluator is supposed to do for them. This presentation will provide an overview of the program and its grantees, NSF’s expectations for grant-level evaluations, and grantees’ evaluation needs. In addition, audience members will learn about opportunities for collaboration to develop evaluation training, tools, and methods to enhance evaluation practice within the program and beyond.

Session Title: Practicing Culturally Responsive Evaluation: Graduate Education Diversity Internship (GEDI) Program Intern Reflections on the Role of Competence, Context, and Cultural Perceptions - Part II
Multipaper Session 671 to be held in BOWIE B on Friday, Nov 12, 4:30 PM to 6:00 PM
Sponsored by the Multiethnic Issues in Evaluation TIG
Chair(s):
Rita O'Sullivan,  University of North Carolina at Chapel-Hill, ritao@email.unc.edu
Discussant(s):
Michelle Jay,  University of South Carolina, mjay@sc.edu
Being Culturally Responsive in the Digital World
Presenter(s):
Larry Daffin, New York University, larry.daffin@nyu.edu
Abstract: Being culturally responsive as an evaluator is an ever-present task. One must always be cognizant of the thoughts, perceptions, and assumptions of oneself and others as they relate to the environment and context. Designing culturally competent methods of instruction only complicates this phenomenon, for one must initially anticipate how the material will be received by the community or communities of interest. This presentation is a testament to and exploration of the challenges in creating a culturally competent and responsive web-based protocol. Formally known as the virtual Systems Evaluation Protocol (vSEP), the online guide is being created to serve as a self-administered mechanism to assist STEM organizations in enhancing their evaluation capacity through the creation of effective pathway models and evaluation plans.

Session Title: Evaluations of Court and Corrections Programs
Multipaper Session 672 to be held in BOWIE C on Friday, Nov 12, 4:30 PM to 6:00 PM
Sponsored by the Crime and Justice TIG
Chair(s):
Roger Przybylski,  RKC Group, rogerkp@comcast.net
An Innovative and Efficient Propensity Score Methodology for Evaluating OVI Courts
Presenter(s):
Robert Seufert, Miami University, seuferrl@muohio.edu
Kaitlin Kubilius, Miami University, kubilika@muohio.edu
Abstract: Recently, OVI (operating a vehicle while impaired) courts have been used to rehabilitate OVI offenders instead of incarceration. We used logistic regression-based propensity scores, a sophisticated method for determining statistical equivalency, to compare outcomes between participants in the first Ohio OVI court and statistically equivalent nonparticipant offenders. The subjects were compared on number of OVIs, driving under suspension (DUS), and alcohol- and drug-related offenses subsequent to program referral; operator’s license validity at the conclusion of the study period; and numbers of jail days served. Participants had significantly better outcomes than nonparticipants on the following: subsequent alcohol-related offenses (p = .10), valid operator’s license (p = .05), mandatory jail days served (p = .001), and total jail days, including any additional time served for noncompliance with OVI court or probation terms (p = .001). While not significant, OVI court participants also had fewer subsequent OVIs and DUSs.
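For readers less familiar with the approach named in the abstract, the following Python sketch shows the general logic of logistic-regression-based propensity scores with nearest-neighbor matching, using simulated offender records. The covariates, participation model, and outcome here are placeholders, not the authors' data or model.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)

# Simulated offender records: covariates, OVI-court participation, an outcome.
n = 500
covariates = rng.normal(size=(n, 3))                     # e.g. age, prior offenses, severity
participation = rng.binomial(1, 1 / (1 + np.exp(-covariates[:, 0])))
outcome = rng.binomial(1, 0.3 - 0.1 * participation)     # e.g. any subsequent offense

# Step 1: logistic regression of participation on covariates gives each
# offender a propensity score (predicted probability of participating).
model = LogisticRegression().fit(covariates, participation)
propensity = model.predict_proba(covariates)[:, 1]

# Step 2: match each participant to the nonparticipant with the closest score.
participants = np.flatnonzero(participation == 1)
nonparticipants = np.flatnonzero(participation == 0)
matches = [nonparticipants[np.argmin(np.abs(propensity[nonparticipants] - propensity[i]))]
           for i in participants]

# Step 3: compare outcomes between participants and their matched comparisons.
diff = outcome[participants].mean() - outcome[np.array(matches)].mean()
print(f"difference in outcome rate (participants - matched comparisons): {diff:.3f}")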
Evaluation of Family Strengthening Services to Incarcerated Fathers and Their Partners
Presenter(s):
Christine Lindquist, RTI International, lindquist@rti.org
Anupa Bir, RTI International, abirt@rti.org
Tasseli McKay, RTI International, tmckay@rti.org
Hope Smiley-McDonald, RTI International, smiley@rti.org
Abstract: Recent research suggests that the partners and families of incarcerated men are an important resource for men’s successful reentry into society. However, programming to support couple and family relationships through incarceration and community reintegration is relatively rare. The Marriage and Family Strengthening Grants for Incarcerated and Reentering Fathers and Their Partners (MFS-IP), which were funded by the U.S. Department of Health and Human Services, were designed to meet this need by providing services to families during and after a father’s incarceration to enhance family functioning and improve reentry outcomes. A four-year implementation study of these grantees has documented the particular demands of implementing programming at this crucial juncture of the human services and criminal justice sectors. Implementation findings also point to some key strategies that successful grantees use to serve families effectively. In addition, a longitudinal impact study of couples participating in MFS-IP services and comparable couples is being conducted. Findings from the baseline interviews provide insights into the family structures, incarceration experiences and service needs of this population.
Implementation Evaluation of a Reading Intervention Program for Incarcerated Youth
Presenter(s):
Deborah Kwon, The Ohio State University, kwon.59@osu.edu
Raeal Moore, The Ohio State University, moore.1219@gmail.com
William Loadman, Ohio State University, loadman.1@osu.edu
Abstract: This study focuses on the implementation evaluation of a federally funded multi-site reading program, Scholastic Read 180. The fidelity of implementation of the Read 180 program was gauged through multiple sources and methods, including self-reported measures and evaluator classroom observations. Challenges associated with aligning the logic model to actual implementation, as well as difficulties in measuring fidelity of implementation for this program, will be discussed. Instruments and results will be shared, highlighting the streamlined observational rubric that was developed as well as the reconciliation of a contradictory data pattern that posed a unique problem for triangulating the data. Future uses of the implementation data, such as connecting implementation with impact to provide institution-level variables for the multi-level dataset, will be explored.
Using Logistic Regression to Predict Incarcerated Youth’s Reading Proficiency
Presenter(s):
Jing Zhao, The Ohio State University, zhao.195@osu.edu
William Loadman, Ohio State University, loadman.1@osu.edu
Raeal Moore, The Ohio State University, moore.1219@gmail.com
Weijia Ren, The Ohio State University, osurwj@gmail.com
Deborah Kwon, The Ohio State University, kwon.59@osu.edu
Charles Okonkwo, The Ohio State University, okonkwo.1@buckeyemail.osu.edu
Abstract: Reading skill is one of the most important skills youth need to master today. The primary purpose of the present study is to examine the impact of the intervention program Read 180 on incarcerated high school youth’s reading achievement - specifically, to see whether the program helped them move from being basic readers to being proficient readers. It also aims to examine the relationship between reading proficiency and other variables such as race and disability across high schools. Reading ability is assessed using the Scholastic Reading Inventory (SRI). The sample consists of 348 incarcerated youth in seven high schools in the state of Ohio after four quarters of instruction. Logistic regression analyses will be conducted to see whether treatment, race, and disability are significant predictors of whether the youth are proficient readers. The dependent variable is the SRI score (0 = basic and 1 = proficient). Policy implications of the study will also be discussed.
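A minimal sketch of the analysis type named above, fitted on simulated stand-in data (the variable names, sample values, and coding are placeholders, not the study data):

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)

# Simulated stand-in for the study data: one row per youth.
n = 348
data = pd.DataFrame({
    "proficient": rng.binomial(1, 0.35, n),   # SRI outcome: 0 = basic, 1 = proficient
    "treatment": rng.binomial(1, 0.5, n),     # Read 180 participation
    "race": rng.choice(["white", "black", "other"], n),
    "disability": rng.binomial(1, 0.25, n),
})

# Logistic regression of proficiency on treatment, race, and disability.
model = smf.logit("proficient ~ treatment + C(race) + disability", data=data).fit()
print(model.summary())

# Odds ratios are often easier to report to stakeholders than raw coefficients.
print(np.exp(model.params))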

In a 90 minute Roundtable session, the first rotation uses the first 45 minutes and the second rotation uses the last 45 minutes.
Roundtable Rotation I: My First Year as an Internal Evaluator: What I Didn't Know That I Didn't Know
Roundtable Presentation 673 to be held in GOLIAD on Friday, Nov 12, 4:30 PM to 6:00 PM
Sponsored by the Graduate Student and New Evaluator TIG and the Collaborative, Participatory & Empowerment Evaluation TIG
Presenter(s):
Pamela Bishop, University of Tennessee, Knoxville, pbaird@utk.edu
Abstract: In my few short years working as a professional evaluator, I had always held positions in which I was external to the organization being evaluated. Although the process of program evaluation has never been mundane, external evaluation carried with it the expectation of a certain sequence of events: begin the evaluation process, conduct the evaluation, and close the evaluation. When I accepted my first internal evaluation position in February 2009, I quickly learned I would need to not only redefine my ideas of the way the evaluation process works, but also my ideas of what an evaluator actually does. This roundtable is a forum for discussing the learning journey for new evaluators, graduate students, internal evaluators, or those considering becoming internal evaluators, about what it means (and does not mean) to be an internal evaluator.
Roundtable Rotation II: Evaluator/ Practitioner Collaborations
Roundtable Presentation 673 to be held in GOLIAD on Friday, Nov 12, 4:30 PM to 6:00 PM
Sponsored by the Graduate Student and New Evaluator TIG and the Collaborative, Participatory & Empowerment Evaluation TIG
Presenter(s):
Angela Moore, National Institute of Justice, angela.moore.parmley@usdoj.gov
Winnie Reed, National Institute of Justice, winnie.reed@usdoj.gov
Carolyn Block, Illinois Criminal Justice Information Authority, crblock@rcn.com
Deshonna Collier-Goubil, National Institute of Justice, deshonnac@hotmail.com
Abstract: Evaluators are often called on to collaborate with practitioners; however, many young scholars lack the practical experience that would inform them about the information that is most needed in the field. Evaluation research requires data, and the gatekeepers to data access and data understanding are often practitioners – including caretakers of large, archived datasets and direct service providers who collect and maintain client data. Successful collaboration depends on a set of skills not taught in most PhD programs. This roundtable will focus on advice for new evaluators on the benefits of collaboration, alternative roads into research collaborations with practitioners, the skills necessary to create and maintain successful collaborations, barriers to collaboration and how to overcome them, conflicts between differing agendas in work with practitioners, pitfalls and how to avoid or deal with them, collaborative proposals for funding, designing research that protects confidentiality, collaboration in disseminating the results of the evaluation, and the ways in which collaborations evolve over time. Concrete examples from the field will be discussed at the roundtable.

In a 90 minute Roundtable session, the first rotation uses the first 45 minutes and the second rotation uses the last 45 minutes.
Roundtable Rotation I: Accreditation as a Pathway to Build Community and Generate Renewal
Roundtable Presentation 674 to be held in SAN JACINTO on Friday, Nov 12, 4:30 PM to 6:00 PM
Sponsored by the Assessment in Higher Education TIG
Presenter(s):
Lorna Escoffery, Escoffery Consulting Collaborative Inc, lorna@escofferyconsulting.com
Abstract: Accreditation processes aim to promote institutional self-evaluation and improvement. However, they are time consuming and can seem intimidating to participants, for they require a comprehensive self-study, generating and sharing sensitive data, and collaborating across hierarchical levels and organizational structures. These processes can nonetheless be productive and effective if senior leadership provides adequate resources, fosters a process involving the whole community, and encourages information sharing. The University of Miami School of Medicine engaged in such a process for the 2009 re-accreditation visit by the LCME, and the results validate that an accreditation process can be a valuable tool to evaluate the quality of a medical education program and the organization(s) supporting it. The accreditation process prompted important changes as the medical school community adopted organizational and educational objectives, became more knowledgeable about the school, and embraced the process as well as the changes it generated.
Roundtable Rotation II: Using Evaluation to Help Transform Departments in the Challenging Economic Environment of Higher Education
Roundtable Presentation 674 to be held in SAN JACINTO on Friday, Nov 12, 4:30 PM to 6:00 PM
Sponsored by the Assessment in Higher Education TIG
Presenter(s):
Sabra Lee, Lesley University, slee@lesley.edu
Ellen Iverson, Carleton College, eiverson@carleton.edu
Abstract: This presentation provides an overview of an evaluation of a federally sponsored program that employs a systems approach to helping higher education geosciences departments adapt to, prosper in, and become stronger in a changing and challenging economic environment. The program includes national workshops, traveling workshops, and a collection of website resources. The program evaluation uses a participant-oriented systems approach, employing mixed methods to provide formative input and embedded assessment of the program. Case studies exemplify the range of effects and impacts of the program’s strategies. Through the use of a case study, the session provides examples of methods and instruments used to evaluate the program (including website forms, departmental applications, action plans, surveys, and interview protocols). Participants interested in the evaluation of professional development that looks at the impact of both face-to-face and website resources, as well as those interested in discussing participant-oriented systems approaches, may find this presentation valuable.

Session Title: How to Cast Your Net: Network Analysis Techniques in Public Health Evaluation
Demonstration Session 675 to be held in TRAVIS A on Friday, Nov 12, 4:30 PM to 6:00 PM
Sponsored by the
Presenter(s):
Lana Wald, Washington University in St Louis, lwald@gwbmail.wustl.edu
Bobbi Carothers, Washington University in St Louis, bcarothers@wustl.edu
Douglas Luke, Washington University in St Louis, dluke@wustl.edu
Jenine Harris, Saint Louis University, harrisjk@slu.edu
Abstract: Social network analysis can serve as a valuable evaluation approach for understanding and quantifying relationships between individuals and organizations. Network analysis techniques provide evaluators with tools to visualize relationships and the flow of information. These techniques contribute to a more in-depth understanding of the context in which a program operates and can be a useful component of a comprehensive evaluation. Network analysis entails three steps: 1) defining who is in the network, 2) measuring network participants, and 3) showing network relationships in a visual form. We will demonstrate these steps along with how this approach has been utilized in two multi-site initiatives. Lessons learned from these evaluations will also be presented. At the end of the session, participants will be able to describe the steps for network analysis and identify how it can be applied in their work.
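To make the three steps concrete, here is a brief Python sketch using the networkx library; the organization names and ties are invented for illustration and are not from the initiatives discussed in the session.

import networkx as nx
import matplotlib.pyplot as plt

# Step 1: define who is in the network - here, partner organizations and the
# collaboration ties reported between them (names are hypothetical).
ties = [
    ("Health Dept", "Coalition"), ("Health Dept", "School District"),
    ("Coalition", "Clinic A"), ("Coalition", "Clinic B"),
    ("Clinic A", "Clinic B"), ("School District", "Clinic A"),
]
network = nx.Graph(ties)

# Step 2: measure network participants - e.g. degree and betweenness centrality
# indicate which organizations broker the flow of information.
degree = dict(network.degree())
betweenness = nx.betweenness_centrality(network)
for org in network.nodes:
    print(f"{org:16s} degree={degree[org]}  betweenness={betweenness[org]:.2f}")

# Step 3: show the relationships in a visual form.
nx.draw_networkx(network, node_size=800, font_size=8)
plt.axis("off")
plt.savefig("partner_network.png")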

Session Title: Methodological Considerations: Choosing an Appropriate Cost Analysis Methodology for the Evaluation
Multipaper Session 676 to be held in TRAVIS B on Friday, Nov 12, 4:30 PM to 6:00 PM
Sponsored by the Costs, Effectiveness, Benefits, and Economics TIG
Chair(s):
Nadini Persaud,  University of the West Indies, npersaud07@yahoo.com
Sacrifices Must Be Made: Methodological Choices on a Shoestring Cost-Benefit Analysis Budget
Presenter(s):
Jeffrey Prottas, Brandeis University, prottas@brandeis.edu
Melanie Gaiser, Brandeis University, melaniegaiser@myfairpoint.net
Abstract: Nonprofit organizations often do not have large budgets available for program evaluation. In order to perform nonprofit program cost-benefit evaluations, difficult choices may have to be made to enable the completion of a project within a relatively small budget. This paper presents a case that involves methodological decision-making in an environment where the use of a classic cost-benefit evaluation model is not possible. Resource constraints required tradeoffs in the methodology used to gather data and determine which costs to include in the analysis. The project team also worked with the nonprofit to determine how to address program planning costs and managers’ fears that including these costs in the analysis would preclude program replication. Finally, the paper addresses the steps taken and obstacles encountered due to the program’s structure; program costs are immediate, while the benefits will take several years to become apparent.
Evaluating the Cost-Benefit Impacts of a Cooking and Eating Healthy Program in Brazil
Presenter(s):
Miguel Fontes, John Snow, Brazil, m.fontes@johnsnow.com.br
Lorena Vilarins, Social Service Industry, Brazil, lorena.vilarins@sesi.org.br
Milton Souza, Social Service Industry, Brazil, milton.souza@sesi.org.br
Rodrigo Laro, John Snow, Brazil, r.laro@johnsnow.com.br
Fabrízio Pereira, Social Service Industry, Brazil, fpereira@sesi.org.br
Morgana Rios, Social Service Industry, Brazil, morganarios.souza@gmail.com
Diana Barbosa, Independent Consultant, tb.diana@yahoo.com.br
Danielle Valverde, National Union of Municipal Education Managers, danielle_valverde@hotmail.com
Abstract: Objectives: In 2009, SESI implemented a Cooking and Eating Healthy Program in all 27 Brazilian states with 60,000 individuals. The objective of this evaluation is to demonstrate its economic impact. Methods: A national evaluative research study was conducted (n=9,615). A Scale was generated based on 61 variables covering participants' eating/cooking knowledge, attitudes, and practices (KAP). A multivariate regression projected the impact of the Scale's attributes (Cronbach's alpha=0.83) on family income. A Gittinger matrix calculated total Net Present Value (NPV) and the Benefit:Cost Ratio. Results: The ex-ante mean of 56.2 points increased to 68.1 points ex-post on the final Scale (weighted range -129 to 129). A significant association between higher levels on the Scale and reduction of food waste/expenditure was detected (coefficient 0.001; p-value<0.05). The NPV reached R$23 million after discounting (US$10 million equivalent) and the Benefit:Cost Ratio was 4.2:1.0. Conclusions: The Program demonstrated impact on KAP for healthy cooking and eating and contributes to the reduction of food waste.
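For readers unfamiliar with the discounting behind NPV and Benefit:Cost Ratio figures like those reported above (and in the following paper), here is a minimal Python sketch. The cost and benefit streams, horizon, and discount rate are invented placeholders, not the SESI evaluation's figures.

# Illustrative cost and benefit streams (millions) over a five-year horizon.
costs = [5.0, 1.0, 1.0, 1.0, 1.0]       # year 0 .. year 4
benefits = [0.0, 8.0, 9.0, 9.0, 9.0]
discount_rate = 0.10

def present_value(stream, rate):
    """Discount a yearly stream back to year 0."""
    return sum(value / (1 + rate) ** year for year, value in enumerate(stream))

pv_costs = present_value(costs, discount_rate)
pv_benefits = present_value(benefits, discount_rate)

npv = pv_benefits - pv_costs
bc_ratio = pv_benefits / pv_costs
print(f"NPV = {npv:.1f} million, Benefit:Cost Ratio = {bc_ratio:.1f}:1.0")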
Evaluating the Cost-Benefit Impacts of a Social Service Program in Brazil
Presenter(s):
Rodrigo Laro, John Snow, Brazil, r.laro@johnsnow.com.br
Miguel Fontes, John Snow, Brazil, m.fontes@johnsnow.com.br
Lorena Vilarins, Social Service Industry, Brazil, lorena.vilarins@sesi.org.br
Diana Barbosa, Independent Consultant, tb.diana@yahoo.com.br
Danielle Valverde, National Union of Municipal Education Managers, danielle_valverde@hotmail.com
Abstract: Objectives: In 2009, SESI implemented the Citizenship Rights Event in 30 Brazilian municipalities, offering 1.3 million individuals access to basic social and health services. The objective of this evaluation is to demonstrate the economic impact of the program. Methods: A national survey was conducted in November 2009 (n=9,729). A Scale was generated based on 15 types of services. A multivariate regression projected the impact of the Scale's attributes on family income. A Gittinger matrix was used to calculate total Net Present Value (NPV) and the Benefit:Cost Ratio. Results: At the national level, the ex-ante mean of -1.3 points increased to 12.0 points ex-post on the final Scale (range -65 to 65). A significant association between higher levels on the Scale and family income was detected (coefficient 8.36; p-value<0.05). The NPV reached nearly US$22.0 million after discounting and the Benefit:Cost Ratio was 10.6:1.0. Conclusion: The Program demonstrated impact on citizenship indicators and contributes to the sustainability of local communities.
Getting to Impact for a Know Thyself Leader Development Program
Presenter(s):
Stacey Farber, Cincinnati Children's Hospital Medical Center, stacey.farber@cchmc.org
Scott Steel, Cincinnati Children's Hospital Medical Center, scott.steel@cchmc.org
Daniel McLinden, Cincinnati Children's Hospital Medical Center, daniel.mclinden@cchmc.org
Britteny Howell, Cincinnati Children's Hospital Medical Center, britteny.howell@cchmc.org
Janet Matulis, Cincinnati Children's Hospital Medical Center, janet.matulis@cchmc.org
Abstract: Calls for evidence of the impact and value reaped by staff training are pervasive in organizational cultures that encourage and support the development of their workforce. Combining the quantitative approach of Phillips and Phillips' (2007) Return on Investment (ROI) impact assessment methodology with the case study approach of Brinkerhoff (2003), this paper presentation offers evaluators a model for determining and communicating to organizational leaders the impact (e.g., application, implementation, business measures) and value (i.e., ROI) of a leadership development program focused on building leader self-awareness. The nature of the program and the context in which it occurs (a high-accountability, mission-driven hospital during tough economic times) also add a level of complexity that evaluators may find of interest.
Economic Analyses in Program Evaluation: First Approximations When Economists are Late on the Scene
Presenter(s):
Mustafa Karakus, Westat, mustafakarakus@westat.com
Abstract: To choose the most effective program among alternatives, economic analysis provides essential information. However, in setting up programs and designing program evaluations, economists are most often the last people on the scene. Thus, an economist is often in a situation where s/he has to make the most effective use of administrative data and other program-level information to implement sound economic evaluations. This study will discuss approaches to economic evaluations when the availability of program data is less than ideal. The process starts with getting key program staff on board to understand the value of economic analyses. Then, all cost points in a program provide information in a uniform and systematic fashion through data collection tools designed by evaluators. Economic data collection tools and how to implement reasonable first approximations of program costs and benefits will be discussed. Example economic evaluations involving employment, career advancement, drug court, and chronic homelessness programs will be presented.

Session Title: From Compliance to Reliance: Critical Moments in Integrating Evaluation Into an Organization’s Work
Panel Session 677 to be held in TRAVIS C on Friday, Nov 12, 4:30 PM to 6:00 PM
Sponsored by the Evaluation Use TIG and the Organizational Learning and Evaluation Capacity Building TIG
Chair(s):
Jennifer Iriti, University of Pittsburgh, iriti@pitt.edu
Abstract: What are the critical moments in an organization’s journey from “evaluation for compliance” to relying on evaluation for core program design and decision-making? This panel will introduce participants to a success case in which evaluators worked with an education non-profit over 6 years to integrate evaluation and evaluative thinking into the work of the organization. First, context about the case and how the organization’s orientation toward evaluation has changed is provided. Then the following issues are examined from the perspectives of evaluator, organizational leader, program implementer, and funder: a) value-added of the evaluation work; b) critical moments in the work; c) generalizable lessons about evaluation integration and use; and, d) challenges to the organization’s shift toward reliance on evaluation. The goal of the session is to draw on a real-world longitudinal example to bring to life the actions, conditions, and tools that supported use and ownership of evaluation.
Evaluator Perspective
Jennifer Iriti, University of Pittsburgh, iriti@pitt.edu
Catherine A Nelson, Independent Consultant, catawsumb@yahoo.com
The evaluator perspective will be shared by Jennifer Iriti. Dr. Iriti has been co-evaluator with Catherine Nelson on this evaluation since its inception. She will both facilitate the session overall and contribute as a panelist. She will summarize the nature and general components of the evaluation activity and provide contextual information about the organization. As a panelist, she will offer insights about the case from the evaluator perspective. Key issues she will discuss include: specific ways that program staff's participation in the evaluation design process affected their own program design work (process use), how the evaluation design and refinement kept the user at the center of the work, how the collaborative development of an assessment tool was a turning point in the organization's valuing of evaluation, and the ways that implementation evaluation early on shifted the perceived utility of evaluation more generally.
Organization Leader Perspective
Mary Hicks, Boundless Readers, mhicks@boundlessreaders.org
Mary A. Hicks is the Executive Director of Boundless Readers (the focus of the evaluation) and has been the key program decision-maker with regard to evaluation since 2003. As the leader of the organization, she offers a perspective that incorporates the strategic, political, human resources, and financial issues of the organization overall in relation to the evaluation work. She has chosen to sustain the evaluation activities of the organization well beyond the time and scope demands that initially sparked the evaluation. Key issues that Ms. Hicks will share include: how the collaborative work with evaluators led to a re-visioning and standardization of the organization’s measurable goals across programs, how the context of “new program development” served as a natural entrée for evaluation use, the ways in which internal program planning was permanently altered by participating in the evaluation, and the challenges of human and financial resources required to sustain evaluation.
Program Implementer Perspective
Nancy Plaskett, Chicago Children's Museum, nancyp@chicagochildrensmuseum.org
Nancy Plaskett was the Program Director at Boundless Readers from July 2005 through September 2007, during which time intensive evaluation was conducted for several programs. She will offer insights about evaluation from the program implementer's perspective and will discuss the benefits of evaluation in producing strong formative information that enhances program philosophy, structure, activities, and targeted outcomes. She will share examples of specific program and process improvements that resulted from the evaluation, as well as efforts to build staff capacity to implement the rigorous programming that resulted from evaluation-related revisions to the design. Challenges she will discuss include program participants' resistance to incorporating evaluative elements into the work of the program, as well as challenges in maintaining a high fidelity of implementation of the program evaluation itself.
Program Funder Perspective
Regina Dixon-Reeves, Lloyd A Fry Foundation, rdixonreeves@fryfoundation.org
Regina Dixon-Reeves is a program officer with the Lloyd A. Fry Foundation, which has been a core funder of Boundless Readers since 1990. This funder played a particularly important role in this case in that it specifically funded the evaluation work for the Rochelle Lee Teacher Awards, the organization's 22-year-old flagship program. Dr. Dixon-Reeves will share her perspective on why the foundation supported an evaluation and how she has seen the organization change as a result. Specifically, she will discuss how the foundation viewed the changes that took place in Boundless Readers' decision-making and programming as a result of the evaluation. In addition, Dr. Dixon-Reeves will talk about the role of evaluation in the funding relationship with Boundless Readers over time.

Session Title: Working Together and Getting the Message Heard in Teen Pregnancy Prevention Programs
Multipaper Session 678 to be held in TRAVIS D on Friday, Nov 12, 4:30 PM to 6:00 PM
Sponsored by the Health Evaluation TIG
Chair(s):
Robert LaChausse,  California State University, San Bernardino, rlachaus@csusb.edu
Reactance to Sexual Abstinence Education: A Method of Assessment
Presenter(s):
Ann Peisher, University of Georgia, apeisher@uga.edu
Amy Laura Arnold, University of Georgia, alarnold@uga.edu
Virginia Dick, University of Georgia, vdick@cviog.uga.edu
Robetta McKenzie, Augusta Partnership for Children Inc, rmckenzie@arccp.org
Don Bower, University of Georgia, dbower@uga.edu
Katrina Aaron, Augusta Partnership for Children Inc, kaaron@arccp.org
Abstract: Dosage has long been advocated as a predictor of participant outcomes in health education programs. Evaluator concerns about potential over-saturation in a three-year longitudinal sexual abstinence education program led to this study. Content analysis of post-year-one focus groups was used to evaluate potential and actual resistance to the pregnancy prevention program. This study identifies factors of the intervention toward which adolescents might be resistant. Factors addressed are: the assessment of adolescent motivation to resist messages; biased message processing approaches; and the exploration of ways to increase the success of similar programs. Following reactance theory, researchers explored the negative cognitive statements made by participants (n=18) through theme analysis. While participants expressed positive thoughts toward the concept and program, three factors - the credibility of the message source, perceived home environmental pressures, and the intrusive nature of the assessment instruments - posed a threat to the potential success of the pregnancy prevention program.
Effectiveness and Utilization of a Parent Education Curriculum: A Two-Level Evaluation of Educators and Parents
Presenter(s):
Sheetal Malhotra, Medical Institute for Sexual Health, smalhotra@medinstitute.org
Diane Santa Maria, University of Texas, dianedickerson@hotmail.com
Melissa Steiner, Medical Institute for Sexual Health, msteiner@medinstitute.org
Abstract: Community educators can be effective mediators for reaching many parents, yielding an exponential effect on sexual health education and communication. Methods: Educators are trained to provide the Building Family Connections (BFC) curriculum to parents in their communities to increase parent-child sexual health communication. Objectives: To evaluate 1) the effectiveness of BFC training for educators, and 2) the uptake and effective transfer of this information to parenting adults. Results: 34 educators were trained in BFC in 2008. Pre/post and training evaluation surveys showed significant increases in participant knowledge (p<0.05). Most (>80%) participants agreed the training was useful and prepared them adequately. Approximately 290 parenting adults have attended 23 BFC courses held by 7 trainers. Evaluations of these courses for parenting adults have shown significant (p<0.01) changes in knowledge, attitudes, and behaviors of parents related to sexual health communication with adolescents. Conclusion: Training of educators is exponentially effective in enhancing parent-child communication on sexual health.
Evaluating Healthy Girls: Working With Community Organizations to Design a City-wide Teen Pregnancy Prevention Evaluation
Presenter(s):
Jessica Rice, University of Wisconsin, rice4@wisc.edu
Melissa Lemke, University of Wisconsin, melissa.lemke@aurora.org
Nicole Angresano, United Way of Greater Milwaukee, nangresano@unitedwaymilwaukee.org
Julie Rothwell, United Way of Greater Milwaukee, jrothwell@unitedwaymilwaukee.org
Abstract: In 2008, the United Way of Greater Milwaukee began planning an evaluation of the teen pregnancy prevention programs supported by their Healthy Girls funding stream. This called for an evaluation plan that would apply across a variety of organizations and curricula. In the first eighteen months of the evaluation, programs worked with the evaluator to create a logic model and identify outcomes of interest, an instrument was created, and pilot data were collected and analyzed. The evaluation faced several challenges in the beginning, including development of a feasible data collection plan, negotiating a consent process with the local school district's research review board, earning the buy-in of the partner organizations, and balancing funder and community organization interests while maintaining the integrity of the evaluation. This presentation will focus on the approaches used to address these challenges, both successfully and unsuccessfully, and on preliminary results.
Labor Pains: The Challenges of Developing an Evaluation Instrument for a City-wide Teen Pregnancy Prevention Evaluation
Presenter(s):
Jessica Rice, University of Wisconsin, rice4@wisc.edu
Melissa Lemke, University of Wisconsin, melissa.lemke@aurora.org
Nicole Angresano, United Way of Greater Milwaukee, nangresano@unitedwaymilwaukee.org
Julie Rothwell, United Way of Greater Milwaukee, jrothwell@unitedwaymilwaukee.org
Abstract: Our goal was to create a data collection instrument that could be used to evaluate teen pregnancy prevention programs offered by organizations across the city of Milwaukee. Drawing from a set of established questions and a series of pilot tests with the target adolescent audience, we created a self-administered instrument that could measure common outcomes across programs. Because some of the outcomes measured are sensitive (e.g., sexual behavior), respondent confidentiality was essential, and a process for maintaining confidentiality was developed. Developing and implementing this instrument presented several challenges, including balancing the need for respondent confidentiality with the ability to match pre- and post-tests, developing and implementing a unique identifier, and creating questions that measured outcomes of interest while remaining understandable to the teen participants. Our experiences addressing these challenges, as well as suggestions for evaluators facing similar challenges, will be presented.

Session Title: Beyond Fidelity: Evaluating the Implementation of Evidence-based Practices
Think Tank Session 679 to be held in INDEPENDENCE on Friday, Nov 12, 4:30 PM to 6:00 PM
Sponsored by the Human Services Evaluation TIG
Presenter(s):
Miles McNall, Michigan State University, mcnall@msu.edu
Abstract: The focus of this think tank will be on re-conceptualizing the current approach to evaluating the implementation of “evidence-based practices” (EBPs). In most cases, implementation evaluations focus on the extent to which EBPs are implemented with fidelity to their original models. However, because EBPs are implemented in a wide variety of organizational, political and cultural contexts, evaluations of EBPs that maintain an exclusive focus on fidelity miss important contextual factors that may impact both the fidelity and effectiveness of interventions. As such, there is a need to develop broader evaluation frameworks that capture the factors that influence the success or failure of implementation. Preliminary frameworks derived from the implementation literature will be presented to generate a discussion that leads to the development of a new framework for implementation evaluation.

Session Title: Basic Change: Examining a Simple Design From Multiple Approaches
Demonstration Session 680 to be held in PRESIDIO A on Friday, Nov 12, 4:30 PM to 6:00 PM
Sponsored by the Quantitative Methods: Theory and Design TIG
Presenter(s):
Julius Najab, George Mason University, jnajab@gmu.edu
Caroline Wiley, University of Arizona, crhummel@email.arizona.edu
Simone Erchov, George Mason University, sfranz1@gmu.edu
Abstract: In this demonstration, we will cover various approaches to a simple yet common evaluation design (the pre-post test) and review the conceptual aspects and results of each method. Most evaluators and data analysts can quickly and accurately interpret basic analyses (e.g., means, standard deviations, and t-tests), but more advanced techniques are often difficult to interpret or inaccessible to those unfamiliar with them. The variety of disciplines that feed into evaluation often exposes evaluators to analyses not commonly used in their field of training. We will analyze pre-post data using multiple methods and inform evaluators about the conceptual aspects, similarities and inconsistencies, and the conclusions drawn from each method. The purpose of this demonstration is to introduce evaluators unfamiliar with advanced quantitative techniques to different perspectives on and approaches to examining data from a commonly used evaluation design.
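To make the contrast concrete, here is a minimal sketch of two common ways to analyze the same pre-post data: a basic paired t-test on gain scores and a regression of post-test on pre-test (an ANCOVA-style model). The data, sample size, and variable names are illustrative assumptions, not material from the demonstration itself.

import numpy as np
from scipy import stats
import statsmodels.api as sm

# Simulated pre-post scores (assumed data for illustration only)
rng = np.random.default_rng(42)
n = 50
pre = rng.normal(50, 10, n)
post = pre + rng.normal(3, 5, n)          # modest average gain

# Approach 1: paired t-test on the gain scores
t_stat, p_value = stats.ttest_rel(post, pre)
gain = post - pre
print(f"Paired t-test: t = {t_stat:.2f}, p = {p_value:.4f}")
print(f"Mean gain = {gain.mean():.2f}, SD = {gain.std(ddof=1):.2f}")

# Approach 2: regress post-test on pre-test (ANCOVA-style); the pre-test
# coefficient and intercept give a different view of change than raw gains
X = sm.add_constant(pre)
model = sm.OLS(post, X).fit()
print(model.summary().tables[1])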

Session Title: Using the Comprehensive Organizational Assessment Tool to Diagnose and Evaluate Organizational Capacity
Demonstration Session 681 to be held in PRESIDIO B on Friday, Nov 12, 4:30 PM to 6:00 PM
Sponsored by the Organizational Learning and Evaluation Capacity Building TIG
Presenter(s):
Ashley Kasprzak, JVA Consulting LLC, ashley@jvaconsulting.com
Randi Nelson, JVA Consulting LLC, randi@jvaconsulting.com
Abstract: The Comprehensive Organizational Assessment (COA) is a research-based tool developed by JVA Consulting to measure gains in nonprofit organizational capacity and guide capacity building training and technical assistance. Demonstration participants will learn about the COA’s 17 critical nonprofit capacity indicators and its use as a pre-post outcome measurement tool. The facilitator will compare characteristics and applications of the COA to 20 other organizational assessment tools currently used in the U.S. The COA’s background, including its use with 14 capacity building initiatives, will be presented. The facilitator will describe data collection methods used with the COA and demonstrate the interview process used with nonprofit leadership. Audience members will view a pre-populated COA and be invited to role-play the interview process. In conclusion, the facilitator will demonstrate consulting with nonprofit leadership to select action steps for improving organizational capacity. Each participant will receive a hard-copy sample of a completed COA report.

Session Title: National Evaluation of the Addiction Technology Transfer Center Network (ATTC): Findings and Observations From a Contextually Rich, Mixed Method Evaluation Study
Panel Session 682 to be held in PRESIDIO C on Friday, Nov 12, 4:30 PM to 6:00 PM
Sponsored by the Alcohol, Drug Abuse, and Mental Health TIG
Chair(s):
Roy Gabriel, RMC Research Corporation, rgabriel@rmccorp.com
Abstract: The Substance Abuse and Mental Health Services Administration contracted with MANILA Consulting Group, Abt Associates, and RMC Research to conduct the first independent national evaluation of SAMHSA/Center for Substance Abuse Treatment’s Addiction Technology Transfer Center (ATTC) program since it was funded in 1993. The ATTC program supports the workforce that provides addictions treatment services to 23 million Americans each year through training, consultation and product development. The 3-year evaluation was preceded by a two-year evaluation design contract to develop an appropriate evaluation approach for this long standing, contextually rich program. The final design comprised 3 studies aiming to identify and build upon the successes of technology transfer and disseminate effective strategies. This session will include (1) an overview of the ATTC program and the importance of the evaluation; (2) presentations on each of three studies; and (3) a discussion of key findings, decisions and challenges related to this evaluation effort.
Substance Abuse and Mental Health Services Administration (SAMHSA), Center for Substance Abuse Treatment (CSAT) Overview of the ATTC Network and the Importance of Evaluation
Deepa Avula, United States Department of Health and Human Services, deepa.avula@samhsa.hhs.gov
Complementing the SAMHSA/CSAT mission of promoting the quality and availability of community-based substance abuse treatment services, the ATTC Network was established to enhance clinical practice and improve the provision of addictions treatment. SAMHSA/CSAT leadership considers the ATTC Network the cornerstone of the national effort to infuse evidence-based practices in the publicly funded substance abuse treatment system. In 2008, SAMHSA/CSAT funded the first external evaluation of the ATTC Network to identify the successes of technology transfer efforts and build upon them in the future, share lessons learned across regions for the enhancement of all regions’ activities, and distinguish between region-specific and more cross-regional processes and outcomes. The evaluation was designed using a formative, highly participatory approach, which yielded significant buy-in from key SAMHSA/CSAT and ATTC staff regarding the process and the foci of the evaluation. This buy-in and participation was critical to the success of the evaluation.
Planning and Partnering Study: Overview and Key Findings
Margaret Gwaltney, Abt Associates Inc, meg_gwaltney@abtassoc.com
Cori Sheedy, Abt Associates Inc, cori_sheedy@abtassoc.com
The Planning and Partnering Study was designed to determine the processes and partnerships the ATTCs undertake to meet the needs of the addictions treatment workforce. The study utilized a mixed-method approach, including semi-structured individual and small-group interviews with staff from the 15 ATTCs, telephone interviews with more than 150 stakeholders, a survey of ATTCs’ Advisory Board members, and ATTC-reported data on event and activity characteristics. The presentation will describe how ATTCs are organizationally structured and the services they provide. Study findings demonstrate that ATTCs have leveraged their relationships with state agencies, educational institutions, provider organizations and other partners to provide training, technical assistance and other support to enhance the skills of the addiction workforce. Best practices for building and sustaining these relationships will be presented. The session will also discuss stakeholder-identified priorities for the treatment field and the extent to which ATTCs have been able to meet these needs.
Customer Satisfaction and Benefit Study: Overview and Key Findings
Andrea Muse, MANILA Consulting Group Inc, amuse@manilaconsulting.net
Megan Cummings, MANILA Consulting Group Inc, mcummings@manilaconsulting.net
Cliff Chamberlin, MANILA Consulting Group Inc, cchamberlin@manilaconsulting.net
Richard Finkbiner, MANILA Consulting Group Inc, rfinkbiner@manilaconsulting.net
The Customer Satisfaction and Benefit Study was designed to assess the degree of satisfaction and benefit from ATTC activities and products among the range of stakeholders important to the ATTC Network. Drawing from respondent lists provided by each region, sampling within each region was employed with six types of stakeholders: Single State Agency personnel, senior leadership of State treatment provider associations, region-specific stakeholders (e.g., cultural leaders), addiction educators, treatment agency staff, and activity participants (e.g., clinical staff). The presentation will describe the perceived quality and benefits of ATTC services among the various stakeholders. Study findings indicate a significant variation in the level of satisfaction among individuals’ experiences with the ATTCs, including individuals’ motivations to seek services from the ATTCs, the services that were most useful, and how information was used by the ATTCs’ customers.
Change in Practice Study: Overview and Key Findings
Jeffrey Knudsen, RMC Research Corporation, jknudsen@rmccorp.com
The examination of change in practice is central to identifying the successes of technology transfer efforts. The Change in Practice (CIP) Study was the sole component of the national evaluation in which behavioral changes in an ATTC target population were examined. The CIP Study addressed the outcomes of technology transfer efforts in three topical areas: clinical supervision, motivational interviewing, and assessment-based treatment planning. In particular, the CIP Study assessed the impact of technology transfer efforts on the implementation of discrete behavioral activities (i.e., critical actions) core to each practice. The study methodology followed the Success Case Method: all participants were surveyed, and those reporting the highest and lowest levels of post-implementation success were interviewed to provide detailed accounts of their experiences. This session will review overall implementation levels of critical actions, illustrative examples of change, impacts of change, and facilitators and barriers to change.
Key Findings, Decisions and Challenges Accompanying the National ATTC Evaluation
Roy Gabriel, RMC Research Corporation, rgabriel@rmccorp.com
The national evaluation of the ATTC Network was aided enormously by the preceding two-year design phase, in which the evaluators worked closely with both the evaluand and the funding agency to formulate the most rigorous and responsive evaluation approach possible. During the evaluation itself, it became a challenge to maintain this participatory evaluation methodology while assuring other stakeholders of the objectivity of the evaluation. Key findings emerging from the three interrelated studies comprising this evaluation included the wide diversity of partnerships and collaborators established by the regional ATTCs; the major focus on specific cultural groups in some of the regions; the high degree of satisfaction with ATTC products and services expressed by a wide range of customers; the characteristics associated with successful implementation of evidence-based practices following extensive training by the ATTCs; and the specific barriers reported by those who were not successful.

In a 90-minute Roundtable session, the first rotation uses the first 45 minutes and the second rotation uses the last 45 minutes.
Roundtable Rotation I: Evaluating K-12 Professional Development: Implementation of the Sheltered Instruction Observation Protocol (SIOP) Model
Roundtable Presentation 683 to be held in BONHAM A on Friday, Nov 12, 4:30 PM to 6:00 PM
Sponsored by the Pre-K - 12 Educational Evaluation TIG
Presenter(s):
Michelle Bakerson, Indiana University South Bend, mmbakerson@yahoo.com
Abstract: Evaluators are often contracted by school districts or organizations receiving grants to develop and facilitate programs that benefit the school or organization. One such district, in northern Indiana, provided its K-12 teachers with professional development in the Sheltered Instruction Observation Protocol (SIOP) model over a five-year period. The evaluation was conducted to determine the extent and degree to which teachers were implementing SIOP strategies in their classrooms after participating in the professional development. The evaluation was designed to be a learning tool for facilitating the improvement of the professional development provided at this school. Accordingly, a collaborative evaluation approach was utilized to actively engage the school and the teachers throughout the whole process. A cross-sectional survey design was also selected for this evaluation. The steps, advantages, and obstacles of this evaluation will be shared.
Roundtable Rotation II: School Climate: A Comprehensive Data Collection and School Improvement System
Roundtable Presentation 683 to be held in BONHAM A on Friday, Nov 12, 4:30 PM to 6:00 PM
Sponsored by the Pre-K - 12 Educational Evaluation TIG
Presenter(s):
Barbara Dietsch, WestEd, bdietsc@wested.org
Sandy Sobolew-Shubin, WestEd, ssobole@wested.org
Rebeca Cerna, WestEd, rcerna@wested.org
Greg Austin, WestEd, gaustin@wested.org
Abstract: During the Roundtable, participants will engage in a discussion about the importance of using multiple sources to assist program providers in making data-driven decisions in their schools and communities. Presenters will discuss an innovative data collection system that provides a means to obtain staff perceptions about learning and teaching conditions in order to regularly inform decisions about professional development, instruction, the implementation of learning supports, and school reform. Two components make up the system: a student survey (California Healthy Kids Survey) and the web-based California School Climate Survey. The system was developed to provide data that link instruction with the assessment of non-cognitive barriers to learning, such as substance abuse, violence and victimization, and poor mental health among students. It addresses issues such as equity, bias, and cultural competence, which have been linked to the achievement gap plaguing racial/ethnic minorities and can be customized to include questions of local concern.

Session Title: Making Sense of the Relationships Between Nonprofits, Funders, and Evaluation
Multipaper Session 684 to be held in BONHAM B on Friday, Nov 12, 4:30 PM to 6:00 PM
Sponsored by the Non-profit and Foundations Evaluation TIG
Chair(s):
Salvatore Alaimo,  Grand Valley State University, alaimos@gvsu.edu
The Relationship Between Evaluation Quality and Use: A Foundation’s Perspective
Presenter(s):
Rosanna Tran, California HealthCare Foundation, rtran@chcf.org
Abstract: Funders can play a critical role in how program evaluations are designed and implemented, and consequently, evaluation quality. This paper will present two case studies that examine the experience of the California HealthCare Foundation (CHCF) in defining and promoting evaluation quality. The first, an evaluation of one of CHCF’s early projects, highlights the tensions that can surface when research methods and design, project timeline, and the needs of users are misaligned. The second is a more recent evaluation in which CHCF realized that grantee engagement and input were prerequisites to a high quality evaluation, and worked closely with the grantee and evaluator to ensure that the evaluation addressed questions that were a priority from the grantee’s perspective. From these experiences, CHCF has found that emphasizing utility for intended users often provides guidance on how to prioritize the various aspects of evaluation quality.
Theory Usage in Nonprofit and Foundation Evaluation: Theorists, Funders and Recipient Perspectives
Presenter(s):
Anne Hewitt, Seton Hall University, hewittan@shu.edu
Charles Gasper, Missouri Foundation for Health, cgasper@mffh.org
Abstract: To improve the quality of nonprofit and foundation evaluations, a three-phase initiative was developed to identify, introduce and disseminate evaluation theories to aid grant recipients in project evaluation design. In phase one, a Delphi study of senior evaluation theorists will provide a scholarly foundation for the choice of appropriate theoretical approaches and models. For phase two, a randomized survey of current funders will help identify their familiarity and competency level with the suggested theories. Finally, for phase three, a survey of current nonprofit grant recipients from several diverse foundations provides a baseline comparison for theory usage and proficiency. This session presents commonalities and differences among surveyed groups and provides recommendations for improving evaluation quality via theory integration.
The Role of the Funder in Evaluation Capacity Building for Nonprofit Human Services Organizations
Presenter(s):
Salvatore Alaimo, Grand Valley State University, alaimos@gvsu.edu
Abstract: Increasing calls for accountability and competition for resources challenge nonprofit organizations to respond to the external pull from funders, government agencies, and accrediting bodies to build long-term capacity to evaluate their programs. These external stakeholders, who typically hold the resources nonprofits require to fulfill their missions and ultimately to survive, are in a position to affect the evaluation capacity of their funding recipients. Salvatore Alaimo will draw on his qualitative study of one-on-one interviews with 20 funders of various types to begin to address the following questions: • How is a nonprofit organization’s capacity to evaluate its programs impacted by external funders? • Is there variance in funder and recipient perceptions and understanding of program evaluation? • How do funder requirements and relationship dynamics impact a nonprofit organization’s evaluation capacity?
Fostering and Measuring Nonprofit Networks to Inform Grantmaking Decisions
Presenter(s):
Patricia Zerounian, Monterey County Health Department, zerounianp@co.monterey.ca.us
Janet Shing, Community Foundation for Monterey County, janet@cfmco.org
Jeff Bryant, Community Foundation for Monterey County, jeff@cfmco.org
Krista Hanni, Monterey County Health Department, hannikd@co.monterey.ca.us
Abstract: When the David and Lucile Packard Foundation awarded a grant to implement and strengthen nonprofit networks, it was part of a larger philanthropic inquiry: Could knowing how networks connect, share, and mobilize teach foundations how to be more effective sponsors of community change? How might the impacts of social networks transform grantmaking and program development? Evaluators investigated four newly formed nonprofit networks using two comprehensive evaluation plans – one to test the effectiveness of introducing networking to diverse community groups, and another to test the transforming impact of these networks on grantmaking decisions. Network members answered survey questions in five categories of network function: affinity to network concepts; learning/contributing; network cohesion; increased effectiveness; and network technology. These categories corresponded directly to the answers the foundation sought. An analysis of the networks’ Google Group activities measured group member engagement. Group interviews with emergent leaders of the four networks added qualitative content.

Session Title: Current Issues in Evaluating High School Programs
Multipaper Session 685 to be held in BONHAM C on Friday, Nov 12, 4:30 PM to 6:00 PM
Sponsored by the Pre-K - 12 Educational Evaluation TIG
Chair(s):
Manolya Tanyu,  Learning Point Associates, manolya.tanyu@learningpt.org
Discussant(s):
Eric Barela,  Partners in School Innovation, ebarela@partnersinschools.org
Increasing the Volume of Student Voice: Perceptions of Personalization in High-Poverty/Minority Schools
Presenter(s):
Sabrina F Sembiante, University of Miami, s.sembiante@umiami.edu
Ann G Bessell, University of Miami, agbessell@miami.edu
Cathleen Armstead, University of Miami, carmstead@miami.edu
Abstract: “Data-in-a-Day” methodology is an innovative means to gauge the often elusive voice of students in evaluation. The Smaller Learning Communities (SLC) model, a school-within-a-school approach, includes personalization, a central component of secondary school reform with positive effects on overall student behavior, achievement, attendance, and graduation rates. Within the context of the larger SLC evaluation of 28 large, urban high schools, this study examined the impact of personalization and the roles of ethnicity, poverty, and fidelity of implementation from the perspective of the students. A mixed method approach utilized quantitative covariates (i.e., student ethnicity, free/reduced lunch, extent of implementation, a personalization composite, overall school performance, student behavior indices) and qualitative data points (i.e., classroom interactions, student focus groups, questionnaire responses). Preliminary findings suggest that students’ perceptions of personalization vary dramatically across Hispanic, African-American, and Black non-African-American student populations. The utility and process of Data-in-a-Day to gauge student voice will also be discussed.
The Evaluation of a Model for Alternative Education
Presenter(s):
Jessaca Spybrook, Western Michigan University, jessaca.spybrook@wmich.edu
Margaret Richardson, Western Michigan University, richardsonm@groupwise.wmich.edu
James Henry, Western Michigan University, james.henry@wmich.edu
Abstract: The rate of expulsion of students in the United States has increased 15 percent since 2002, yet only ten states mandate educational alternatives for expelled students. There is some movement toward such alternatives, with the Strict Discipline Academy (SDA), developed in Michigan, as one example. SDAs are schools with strict behavioral rules and disciplinary practices in place. This study reports on the evaluation of one SDA. The results suggest that expelled students can be academically and behaviorally successful when given flexible structure, student skill building, and teacher/staff relational support. The session will include details of this particular SDA’s practices and student outcomes, as well as suggestions for developing alternative education models from the lessons learned through the evaluation of this SDA.
Cognitive Labs to Evaluate Test Items for Use on an Alternate Assessment Based on Modified Academic Achievement Standards (AA-MAS)
Presenter(s):
Tammiee Dickenson, University of South Carolina, tsdicken@mailbox.sc.edu
Karen Price, University of South Carolina, pricekj@mailbox.sc.edu
Heather Bennett, University of South Carolina, bennethl@mailbox.sc.edu
Joanna Gilmore, University of South Carolina, jagilmore@mailbox.sc.edu
Abstract: This study evaluated the benefits of item enhancements applied to science inquiry items for incorporation into an alternate assessment based on modified achievement standards (AA-MAS) for high school students. Six items were included in cognitive lab sessions involving students both with and without disabilities. The enhancements (e.g., visuals, removal of a distractor, reading support) were intended to improve access to the items for students who had grade-level science content knowledge but whose disability may hinder their ability to answer the items in their original form. Students were asked to think aloud while answering items and to answer follow-up questions about specific item enhancement features. Achievement did not improve substantially, but reported cognitive effort suggests the enhancements reduced the perceived difficulty of the items. The results were also used to make decisions on revisions to items and enhancements in constructing a pilot test consisting of 40 items.
Assessing the Effectiveness of a School Connectedness Scale for Evaluation
Presenter(s):
Jill Lohmeier, University of Massachusetts, Lowell, jill_lohmeier@uml.edu
Steven Lee, University of Kansas, swlee@ku.edu
Abstract: Evaluators are frequently asked to assess the effectiveness of school programs implemented to show increases in academic achievement. School connectedness has been shown to be directly related to academic achievement (McNeely, Nonnemaker & Blum, 2002) and is therefore also of interest to evaluators. The construct of school connectedness has been shown to consist of three elements: connectedness to adults in schools, connectedness to peers and connectedness to the school (Karcher & Lee, 2002). This presentation will report the psychometric properties and factor analyses findings from a School Connectedness Scale given to adolescents in two high schools, one a large urban school and one a medium sized suburban school. Issues related to implementation of the instrument in different districts will be discussed as well as the similarities and differences of the resulting factor structures from the two sets of data.
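As a loose illustration of the psychometric work described above, the following minimal sketch fits an exploratory factor analysis with three factors to simulated item responses. The item names, factor labels, and data are assumptions made for illustration; they are not the School Connectedness Scale or its data.

import numpy as np
import pandas as pd
from sklearn.decomposition import FactorAnalysis

# Simulate 300 students answering 9 items driven by 3 latent factors
rng = np.random.default_rng(0)
n_students, n_items = 300, 9
latent = rng.normal(size=(n_students, 3))
loadings = np.kron(np.eye(3), np.ones((1, 3)))   # each factor drives 3 items
items = latent @ loadings + rng.normal(scale=0.5, size=(n_students, n_items))
df = pd.DataFrame(items, columns=[f"item_{i+1}" for i in range(n_items)])

# Fit a three-factor model with varimax rotation and inspect the loadings;
# large loadings show which items cluster on which (assumed) factor
fa = FactorAnalysis(n_components=3, rotation="varimax", random_state=0)
fa.fit(df)
print(pd.DataFrame(fa.components_.T, index=df.columns,
                   columns=["adults", "peers", "school"]).round(2))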

Session Title: Improving Evaluation Quality: A Focus on Better Measures of Implementation
Multipaper Session 686 to be held in BONHAM D on Friday, Nov 12, 4:30 PM to 6:00 PM
Sponsored by the Pre-K - 12 Educational Evaluation TIG
Chair(s):
Helene Jennings,  ICF Macro, helene.p.jennings@macrointernational.com
Measuring the Quality of Implementation of Online Learning Games in Instruction
Presenter(s):
Shani Reid, ICF Macro, sreid@icfi.com
Abstract: In 2005, ICF Macro ventured into fairly uncharted territory when it began the evaluation of an online learning game being developed by Maryland Public Television. The game developers intended that the game be used by students primarily during out-of-class time, with limited in-class use with their teachers to target specific instructional topics. Although teachers were taught best practices for integrating the game into classroom instruction, they were free to adapt many of these practices for their own classrooms. In this presentation we discuss the framework used to measure the implementation of the game and the relationship between that implementation and student outcomes on the state’s standardized assessment.

Session Title: The Environmental Evaluators Network: Quality in an Era of Results-based Performance
Panel Session 687 to be held in BONHAM E on Friday, Nov 12, 4:30 PM to 6:00 PM
Sponsored by the Environmental Program Evaluation TIG
Chair(s):
Matt Keene, United States Environmental Protection Agency, keene.matt@epa.gov
Discussant(s):
Kathryn Newcomer, George Washington University, newcomer@gwu.edu
Abstract: In June 2010 the 5th annual Environmental Evaluators Networking Forum will host 250 domestic and international representatives from government agencies, foundations, consulting firms, non-profits, academia, and international institutions to discuss the effects of an era of results-based performance on the quality of environmental program and policy evaluation. In the format of a panel discussion, key staff from sponsor organizations of the Forum will share and discuss: 1) the current state and future of the quality of environmental evaluation in terms of opportunities, challenges, skills, tools, and methods; and 2) the relationship between the Environmental Evaluators Network and the quality of evaluation at their respective organizations. The purpose of this session is to provide the audience with a better understanding of current trends in environmental evaluation and describe how the Environmental Evaluators Network has evolved to address issues of evaluation quality.
The Environmental Evaluators Network and Improving the Quality of Evaluation at the United States Environmental Protection Agency (EPA)
Matt Keene, United States Environmental Protection Agency, keene.matt@epa.gov
Katherine Dawes, United States Environmental Protection Agency, dawes.katherine@epa.gov
Matt Keene is a social scientist working with the United States Environmental Protection Agency’s (EPA) Evaluation Support Division. As a lead coordinator of the Environmental Evaluators Network (EEN) and the annual EEN Forum, Matt will present an overview of the June 2010 EEN Forum, summarize the major themes discussed during the event and introduce key future goals and opportunities for the network. As a staff member of EPA’s evaluation unit, Matt will discuss how the Agency uses the EEN’s products and services to address EPA’s strategic goal to increase the Agency’s capacity for quality program measurement and evaluation.
Lessons Learned From the First Five Years of the Environmental Evaluators Network
Matt Birnbaum, National Fish and Wildlife Foundation, matthew.birnbaum@nfwf.org
The Environmental Evaluators Network started informally six years ago through discussions among several evaluators working in the environmental arena and affiliated with AEA. As the network has grown, its annual international meeting in Washington, DC has more than doubled in size as increasingly diverse organizations and disciplines become aware of the network and recognize its potential value. Regional network events have started in Canada, and events will soon be launched in Europe and the Pacific Islands. This presentation will draw on interviews with key stakeholders affiliated with EEN to identify its major contributions, the challenges of assessing its achievements, and the strategic issues shaping its future direction, including its niche relative to larger umbrella organizations such as AEA.
In Search of Quality and a Commitment to Evaluation: The Environmental Evaluators Network and the National Oceanic and Atmospheric Administration (NOAA)
Kate Barba, National Oceanic and Atmospheric Administration, kate.barba@noaa.gov
Cassandra Barnes, National Oceanic and Atmospheric Administration, cassandra.barnes@noaa.gov
Ginger Hinchcliff, National Oceanic and Atmospheric Administration, ginger.hinchcliff@noaa.gov
Kate Barba works for NOAA’s Office of Ocean and Coastal Resource Management as Chief of the National Policy and Evaluation Division. As a NOAA organization co-sponsor of the EEN, Kate will highlight key findings from the 2010 EEN event that address concerns for enhanced quality in environmental evaluation with an emphasis on professional development and capacity building in federal environmental programs. Building upon the EEN experience, Kate will discuss how NOAA has initiated the development of an evaluator network within the agency, with mention of the challenges and opportunities inherent to this process. To complement internal capacity building, NOAA has developed training modules in program design and evaluation, targeted primarily to program partners in coastal states to foster quality in program design and evaluability.

Session Title: Your Input, Please: Research, Technology and Development (RTD) Topical Interest Group (TIG) Draft User’s Guide to Conducting Research and Development Evaluation
Think Tank Session 689 to be held in Texas D on Friday, Nov 12, 4:30 PM to 6:00 PM
Sponsored by the Research, Technology, and Development Evaluation TIG
Presenter(s):
Gretchen Jordan, Sandia National Laboratories, gbjorda@sandia.gov
Discussant(s):
Brian Zuckerman, Science and Technology Policy Institute, bzuckerm@ida.org
Cheryl Oros, Oros Consulting LLC, cheryl.oros@gmail.com
Juan Rogers, School of Public Policy Georgia Institute of Technology, jdrogers@gatech.edu
George Teather, George Teather and Associates, gteather@sympatico.ca
Abstract: Evaluation of research, technology, and development (RT&D) is a maturing discipline. Although in recent years resource guides for R&D evaluation practitioners have been created and disseminated (e.g., Ruegg and Feller 2003; Ruegg and Jordan 2007), there is no agreement among R&D evaluators, especially those working in government, as to when evaluations should be conducted, what types of evaluation to pursue, and how they should be approached. While U.S. government policy calls upon R&D agencies to conduct evaluation and to strengthen their capacity to do so, no guidance has been provided for policy implementation. During 2010, the RTD TIG, as a community of practice, has responded to this call by developing a user-focused White Paper on evaluation practices as applied to government-funded R&D. At this Think Tank, the TIG leadership will present the current draft of the White Paper for discussion.

Session Title: Strategy as Evaluand: Quality and Utilization Issues in Evaluating Strategy
Panel Session 690 to be held in Texas E on Friday, Nov 12, 4:30 PM to 6:00 PM
Sponsored by the Evaluation Use TIG, the Organizational Learning and Evaluation Capacity Building TIG, and the Non-profit and Foundations Evaluation TIG
Chair(s):
Michael Quinn Patton, Utilization-Focused Evaluation, mqpatton@prodigy.net
Abstract: Strategy as a new unit of analysis for evaluation involves a different purpose and use: strategic use. Traditionally, evaluation has focused on projects and programs. Organizational development makes the organization the unit of analysis for assessing effectiveness, usually focused on mission fulfillment. Management, in contrast, often focuses on strategy as the defining determinant of effectiveness. Herbert Simon, one of the preeminent management and organizational theorists, posited that the series of decisions which determines behavior over some stretch of time may be called a strategy. Distinguished management scholar Henry Mintzberg in his recent book Tracking Strategies defines strategy as "pattern consistency in behavior over time" (2007: 1). Philanthropy, government, and non-profits, greatly influenced by business management trends, are paying a great deal of attention to strategic issues. This session will examine issues of quality and utilization when focusing on strategy as an evaluand, or unit of analysis and impact.
Evaluating Strategy: Issues in Treating Strategy as the Evaluand
Michael Quinn Patton, Utilization-Focused Evaluation, mqpatton@prodigy.net
Strategy is a new unit of analysis for evaluation, one that involves a different purpose and use: strategic use. Traditionally, evaluation has focused on projects and programs. Organizational development makes the organization the unit of analysis for assessing effectiveness, usually focused on mission fulfillment. Strategy as the unit of analysis, or evaluand, involves evaluating how decisions are made and executed with regard to how a broad initiative expects to have an impact (strategy as perspective) and where that impact will be made (strategy as position or niche). Issues of evaluation quality and utilization at the level of strategy begin with conceptualizing just what is meant by strategy and identifying characteristics of the strategy that can be evaluated. Strategy as an evaluand, or unit of analysis and impact, can be evaluated using such criteria as consistency, alignment, coherence, utility, and capacity to inform choices and decisions.
Strategy Evaluation Case Example: A Retrospective Strategy Evaluation of a 20-year Philanthropic Grant Program
Patricia Patrizi, Patrizi & Associates, patti@patriziassociates.com
In 2006, the Robert Wood Johnson Foundation commissioned a strategic assessment of its investments to improve care at the end of life. The evaluators sought to document strategy and field-level effects related to the portfolio as a whole – 337 grants made over 20 years – as opposed to effects associated with individual grant programs. This presentation addresses the many lessons that surfaced during this evaluation, lessons affecting every aspect of evaluation design, beginning with the question: what is strategy?
Experience From the International Development Research Centre in Doing and Using Strategic Evaluation
Tricia Wind, International Development Research Centre, twind@idrc.ca
The International Development Research Centre (IDRC) is a Canadian crown corporation that supports research in developing countries. IDRC undertakes evaluative work at different levels: strategic evaluations; program and project evaluation; and ongoing learning. “Strategic” evaluations examine issues that cut across the Centre’s eighteen programs. They have focused on programming modalities and broad corporate result areas such as capacity building and policy influence. In a decentralized, utilization-focused approach to evaluation, IDRC has taken the view that strategic-level evaluation requires separate studies, as opposed to “rolling up” data from program or project evaluations. This panel will highlight how IDRC has framed and used strategic evaluations. It will also discuss the findings of a 2009 study that reviewed recent strategic and program evaluations in light of IDRC’s Corporate Strategy. This study drew out a number of “tensions” that exist in IDRC programming. Naming these tensions has proved helpful for subsequent evaluation and strategy development.
An Experiment in Evaluating Strategy: The WK Kellogg Foundation’s Devolution Initiative
Kay Sherwood, Independent Consultant, kay.sherwood@verizon.net
In 1996, the United States government began a historic strategic shift called “devolution” – shifting powers, responsibilities, and funding from the federal level of government to the state level, and sometimes the local level, for a number of social welfare programs, beginning with the cash assistance program for low-income families. The law overturned a social safety net that had been built over several decades. It was politically divisive. Soon after this historic welfare reform became law, the Kellogg Foundation began its own devolution journey, eventually committing seven years and $56 million to a project with 31 grantees as well as $3.6 million to an external evaluation of the “Devolution Initiative.” This case looks at strategy, and the evaluation of strategy, at three levels, beginning with the Congressional action of 1996, moving to the Kellogg Foundation’s responding Devolution Initiative, and then to the external evaluation of the Devolution Initiative.

Session Title: Research on Data Collection Approaches
Multipaper Session 691 to be held in Texas F on Friday, Nov 12, 4:30 PM to 6:00 PM
Sponsored by the Research on Evaluation TIG
Chair(s):
Leslie Fierro,  SciMetrika, let6@cdc.gov
On the Other Side of Recruitment: Participant Perceptions of Risk During Disclosure for Biomedical Research
Presenter(s):
Jonathan Rubright, University of Delaware, rubright@udel.edu
Pamela Sankar, University of Pennsylvania, sankarp@mail.med.upenn.edu
Jason Karlawish, University of Pennsylvania, jason.karlawish@uphs.upenn.edu
Abstract: The integrity of the informed consent process is crucial to evaluation quality. While much effort has gone into determining what information should be disclosed to potential participants, comparatively less attention has been given to how individuals perceive this information. Participants’ perceptions of a project may affect whether they are willing to participate at all. This paper describes findings from research on the informed consent process, specifically on individuals’ perceptions of risks and benefits. The research was conducted in the context of a disclosure for an Alzheimer’s disease biomarker study and focuses on the questions: How do individuals perceive information on risks and benefits presented in an informed consent form? Do individuals hold any misconceptions about the risks and benefits of research, and if so, what are they?
Inclusion of People with Disabilities in Evaluation Practice
Presenter(s):
Miriam Jacobson, Claremont Graduate University, miriam.jacobson@cgu.edu
Abstract: This study investigates the nature of inclusion of people with disabilities in the evaluations of their programs. Using published articles describing recent examples of these evaluations, a content analysis systematically examined which stakeholders were included, how their input was obtained, and areas of the evaluation where participation actually occurred. This will address the following questions: 1) What level of inclusion of program participants with disabilities has been implemented in practice? 2) What methodologies are used to elicit views of people with disabilities? 3) What is the potential role of contextual variables in moderating inclusion? Implications for evaluation practice and theories of inclusion in evaluation will be discussed.
Using Norm-based Appeals to Increase Response-Rate in Evaluation Research: A Field Experiment
Presenter(s):
Anne Heberger, National Academy of Sciences, aheberger@nas.edu
Shalini Misra, University of California, Irvine, shalinim@uci.edu
Daniel Stokols, University of California, Irvine, dstokols@uci.edu
Abstract: Persuasive appeals emphasizing descriptive social norms (what most other people do in the same situation) are effective in promoting pro-environmental behavior (cf., Cialdini, 2003). This paper reports two field experiments that tested the effectiveness of norm-based persuasive messages in the context of evaluation research. Participants at an interdisciplinary conference were randomly assigned to one of two groups. The experimental group received a message highlighting a descriptive social norm: “Every year, over 75% of conference participants complete the post-conference survey. Join your fellow participants in improving the quality of future conferences by filling out this survey” before receiving an online survey evaluating the conference. The control group received a generic message without any norm-based appeals. Three months later, participants were randomly assigned to receive either a descriptive norm-based message or a generic message requesting them to complete another post-conference survey. The experimental findings and their implications for evaluation research are discussed.
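As a rough illustration of how the primary outcome of such an experiment might be analyzed, the sketch below compares response rates between a norm-based and a generic message group using a two-proportion z-test. The counts are invented for illustration and are not the experiment's data.

from statsmodels.stats.proportion import proportions_ztest

# Assumed counts: respondents and invitees in each randomly assigned condition
responded = [112, 88]     # norm-based message group vs. generic message group
invited = [150, 150]

z_stat, p_value = proportions_ztest(count=responded, nobs=invited)
norm_rate, generic_rate = (r / n for r, n in zip(responded, invited))
print(f"Response rates: norm-based = {norm_rate:.2%}, generic = {generic_rate:.2%}")
print(f"Two-proportion z-test: z = {z_stat:.2f}, p = {p_value:.4f}")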

Session Title: Building Capacity for the 4-H Science, Engineering, and Technology Initiative to Get to Outcomes
Multipaper Session 692 to be held in CROCKETT A on Friday, Nov 12, 4:30 PM to 6:00 PM
Sponsored by the Extension Education Evaluation TIG
Chair(s):
Suzanne Le Menestrel,  United States Department of Agriculture, slemenestrel@nifa.usda.gov
Discussant(s):
Martin Smith,  University of California, Davis, mhsmith@ucdavis.edu
The Power of Process Evaluation in Science, Engineering, and Technology Programs
Presenter(s):
Melissa Cater, Louisiana State University, mcater@agcenter.lsu.edu
Abstract: Is this program effective? This is a troubling question for many informal science, engineering, or technology programs, yet it is an answerable question. Process evaluation aids in understanding program operations by elucidating fidelity, context, quality, intensity, breadth, and depth of the program and by describing program participants. This paper will explore strategies for answering the question of program effectiveness using real world examples from a school garden program and an environmental education program.

Session Title: Teaching Through an Interdisciplinary Focus
Multipaper Session 693 to be held in CROCKETT B on Friday, Nov 12, 4:30 PM to 6:00 PM
Sponsored by the Teaching of Evaluation TIG
Chair(s):
Linda Schrader,  Florida State University, lschrader@fsu.edu
Discussant(s):
Jean A King,  University of Minnesota, kingx004@umn.edu
Study of the Process and Effects of the Evaluation Fellows Program
Presenter(s):
Jean A King, University of Minnesota, kingx004@umn.edu
Robert Tornberg, University of Minnesota, tornb012@umn.edu
Jeanne Zimmer, University of Minnesota, zimme285@umn.edu
Abstract: The purpose of this study is to examine the outcomes of the Evaluation Fellows Program (EFP), an innovative training program at the University of Minnesota that involves participants from four different roles—practitioners, evaluators, policy makers and funders—in examining the intersection of program evaluation and a specific subject area to improve both evaluation practice and use. The first cohort, which finished in 2008, addressed out of school time programs; the second cohort, currently in its first year, is examining the evaluation of educational reform. Our evaluation question is: What are the short- and longer-term outcomes of the Evaluation Fellows Program? We plan to identify the effects on participants immediately following their participation and then within a year following completion. This paper will include an analysis of data from the first cohort one year after the end of the program and from the second cohort shortly after its last session.
Evaluation Training: Characteristics and Context in Educational Administration Programs
Presenter(s):
Tara Shepperson, Eastern Kentucky University, tara.shepperson@eku.edu
Abstract: As the profession of evaluation has expanded, so has training across disciplines. While common in many disciplines, evaluation is often taught within learning silos of traditional university programs rather than as the trans-discipline envisioned by Scriven. This presentation will offer insights from an empirical examination of evaluation training for school administrators. Specifically, it suggests the limitations on evaluation training and practice that may result from a discipline-centric focus. Professionals in many social service fields receive training in evaluation in graduate programs, and it is important to look at specifics of professional standards, accreditation requirements, politics and policies, and common field practices to understand underlying frameworks and assumptions that influence what approaches and methods of evaluation are taught. This training ultimately impacts the nature, character, and quality of evaluation practice in the field.
Finding Solutions: How Small Non-governmental Organizations (NGOs) Can Partner With Universities to Overcome Evaluation Challenges
Presenter(s):
Tim Heaton, Brigham Young University, tim_heaton@byu.edu
Kendal Blust, Brigham Young University, ktblust@gmail.com
Matt Cox, Brigham Young University, coxcito@mac.com
Abstract: Small NGOs attempting program evaluation often encounter significant challenges because of a lack of evaluation skills and organizational resources. We propose that one solution to these challenges is to create cooperative agreements between universities and small NGOs. This relationship would create opportunities for university students to apply their learning and receive hands-on experience, and would provide NGOs with the skills and resources needed for quality evaluation. We present four case studies in which students worked as evaluators for small NGOs and describe the benefits that came from these alliances as well as the challenges in creating them. We show that cooperation between universities and small NGOs would enable NGOs to conduct program evaluation without needing a highly trained staff or large monetary investments, yet we also present some of the difficulties that both parties should be aware of when attempting this kind of cooperative evaluation effort.

Session Title: Data Collection Instruments for Quality Evaluation
Skill-Building Workshop 694 to be held in CROCKETT C on Friday, Nov 12, 4:30 PM to 6:00 PM
Sponsored by the International and Cross-cultural Evaluation TIG
Presenter(s):
Chung Lai, International Relief & Development, clai@ird-dc.org
Mamadou Sidibe, International Relief & Development, msidibe@ird-dc.org
Abstract: This skill-building session will focus on the association between field work and data quality criteria, using a data assessment model and examples of data collection instruments as solutions for improving data quality in evaluations. Obtaining good quality data to measure performance results often presents challenges to project implementers. Political, economic, social, cultural, and environmental issues and circumstances are present in all situations and affect project implementation. Moreover, these circumstances also affect the data collection process and hence data quality. To help minimize their effects, data collection instruments can be designed to improve data quality by observing five data quality criteria: validity, integrity, precision, reliability, and timeliness. These criteria will be elaborated in a data assessment model, along with some of the data collection instruments, such as monitoring procedures, questionnaires, data entry guides, and a data definition codebook.
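To make the criteria concrete, here is a minimal sketch of automated checks that flag records violating a few of them (validity, integrity, timeliness). The field names, rules, and data are illustrative assumptions, not the session's instruments.

import pandas as pd

# Assumed field data; in practice the valid ranges and reporting period
# would come from the data definition codebook
records = pd.DataFrame({
    "beneficiary_id": [101, 102, 102, 104],
    "age": [34, 210, 27, 45],                       # 210 is outside the valid range
    "interview_date": ["2010-03-02", "2010-03-05", "2010-03-05", "2009-12-30"],
})
records["interview_date"] = pd.to_datetime(records["interview_date"])

issues = {
    # Validity: values must fall within a plausible range
    "invalid_age": records[(records["age"] < 0) | (records["age"] > 120)],
    # Integrity: duplicate identifiers suggest double entry
    "duplicate_id": records[records["beneficiary_id"].duplicated(keep=False)],
    # Timeliness: data collected outside the reporting period
    "out_of_period": records[records["interview_date"] < "2010-01-01"],
}
for name, rows in issues.items():
    print(f"{name}: {len(rows)} record(s) flagged")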

Session Title: Collaboration in Government-Sponsored Evaluations
Multipaper Session 695 to be held in CROCKETT D on Friday, Nov 12, 4:30 PM to 6:00 PM
Sponsored by the Government Evaluation TIG and the Collaborative, Participatory & Empowerment Evaluation TIG
Chair(s):
Maria Whitsett,  Moak, Casey and Associates, mwhitsett@moakcasey.com
Some Evidence on Challenges in Government/Stakeholder Partnerships: A Case Study of the Voluntary Sector Initiative (VSI) and Networks
Presenter(s):
Caroline DeWitt, Human Resources and Skills Development, caroline.dewitt@hrsdc-rhdcc.gc.ca
Abstract: The Canadian federal government has sought the participation of stakeholders in the development of policy, program design, and implementation. Salamon (2005) describes the model as public administration “leaping beyond the borders of the public agency, embracing a wide assortment of third parties” that are intimately involved in the implementation and management of the public’s business. The delivery structure creates a principal–agent dichotomy in which third parties exercise discretion over the expenditure of public funds and hold a monopoly of knowledge in relation to activities, costs, and results. This makes third parties an essential partner in evaluations. Government moves from centrally driven, hierarchical evaluations to a more consultative approach that requires cooperation and collaboration. The VSI evaluation is an example of an evaluation that involved collaborating with stakeholders and networks throughout the process.
Assessing More Than Mortar and Bricks: Combining Research, Theory, Politics, and Chaos in a HOPE VI Community Revitalization Program Evaluation
Presenter(s):
Andrew Scott Ziner, Atlantic Social Research Corporation, asrc@rcn.com
Ross Koppel, University of Pennsylvania, rkoppel@sas.upenn.edu
Abstract: A longitudinal evaluation of a federally supported HOPE VI program is a prescribed and proscribed activity – until it confronts reality. Neither the research nor the underlying theoretical bases should be ad hoc, reactive, or sloppy. How do, and how should, evaluators deal with these dilemmas? Using a longitudinal case study and research analysis, this paper explores the all-too-common evaluator’s conundrum: How do we make wise choices when confronted with the mess of most program realities? What are the implications of each choice for the outcomes, recommendations, and continuing evaluation? This analysis offers lessons for specific evaluation strategies and for the larger questions underlying our logic, our choices, and the role evaluation theory and methods must play.
An Assessment of the Impact of Proactive Community Partnerships on Census Quality
Presenter(s):
Edward Kissam, JBS International Inc, ekissam@jbsinternational.com
Jesus Martínez-Saldaña, Independent Consultant, jesus@jesusmartinez.org
Anna Garcia, Independent Consultant, annamg01@yahoo.com
JoAnn Intili, JBS International Inc, jintili@jbsinternational.com
Abstract: The 2010 Census included new operational procedures, targeted outreach efforts, and partnerships with local community organizations designed to decrease the differential undercount of hard-to-count (HTC) population subgroups. JBS International assessed the effectiveness of these efforts to improve census data quality in HTC tracts in rural California with concentrations of farmworkers and Latino immigrants, using a survey methodology adapted from the Bureau’s own post-enumeration survey approaches to coverage measurement. We report patterns of undercount and examine whether innovations such as mailing of Spanish-language questionnaires, better siting of Questionnaire Assistance Centers, more “Be Counted” centers where a person who failed to receive a mailed census form could request one, collaboration with community organizations to improve the Master Address File, and an expanded Spanish-language media campaign mitigated historic undercounts of migrant and seasonal farmworkers and specific sub-populations within this group: Mexican immigrants of indigenous origin, complex households, and persons living in substandard “low-visibility” housing.
The Effects of Qualitative Feedback on Mid-Managers’ Improvement on Performance Behaviors in the Veterans Health Administration
Presenter(s):
Thomas Brassell, United States Department of Veterans Affairs, thomas.brassell@va.gov
Boris Yanovsky, United States Department of Veterans Affairs, boris.yanovsky@va.gov
Katerine Osatuke, United States Department of Veterans Affairs, katerine.osatuke@va.gov
Sue R Dyrenforth, United States Department of Veterans Affairs, sue.dyrenforth@va.gov
Abstract: Many organizations take employee development seriously. The Department of Veterans Affairs (VA) is no exception. One expression of its commitment to employee development is the VA’s use of the 360-degree feedback appraisal system, designed for the professional evaluation and development of mid-level managers. The 360-degree feedback has become a popular, widely used resource across the department: participation was 1,300 in 2009 and is expected to double to around 2,500 in 2010. Along with quantitative ratings of behaviors expressing job-related competencies, participants receive qualitative feedback on their strengths and areas for improvement from multiple raters (boss, peer, staff, self). The evaluation has the explicit intention of facilitating development through feedback delivery. This study examined what type of feedback results in optimal performance improvement. Results showed that simple feedback resulted in significantly greater performance improvement than complex feedback. Additionally, complex feedback did not result in significant performance improvement whereas simple feedback did.

Session Title: Theoretical Issues in Feminist Evaluation
Multipaper Session 696 to be held in SEGUIN B on Friday, Nov 12, 4:30 PM to 6:00 PM
Sponsored by the Feminist Issues in Evaluation TIG
Chair(s):
Kathryn A Bowen,  Centerstone Research Institute, kathryn.bowen@centerstone.org
Discussant(s):
Michael Bamberger,  Independent Consultant, jmichaelbamberger@gmail.com
The Role of Black Feminist Theory in Feminist Evaluation
Presenter(s):
Sarita Davis, Georgia State University, saritadavis@gsu.edu
Abstract: The Sojourner Project applies an interpretive framework to explore the degree to which the intersecting lenses of gender, race, and class affect HIV risk and the use of mood-altering substances among 50 African American women living in both low- and high-burden areas in metropolitan Atlanta. Research suggests that a strictly biomedical framework for HIV and substance abuse program planning and intervention typically serves to homogenize difference or complexity by, for example, treating race, socioeconomic status, and gender as discrete, rather than mutually constitutive, concepts (Gentry, 2007; Mullings, 2005). The contribution of the Sojourner Project to evaluation is that it invites us to understand the relational nature of sexual decision making and substance abuse among dispossessed women in a way that conveys a message about the interaction of race, class, and gender, as well as the dialectic of residence, resilience, and resistance (Crenshaw, 1995).
Standpoint Theory(ies): A Pathway to Incorporating Social Justice Into the Evaluation Process
Presenter(s):
Denice Cassaro, Cornell University, dac11@cornell.edu
Divya Bheda, University of Oregon, dbheda@uoregon.edu
Abstract: This paper will offer an understanding of feminist standpoint theories and their practical applications within the evaluation process. ‘Standpoint’ offers a strong framework from which to guide an evaluation process that lends itself to greater understanding and empowerment of marginalized stakeholders. Not only does standpoint theory offer a comprehensive base from which to choose appropriate methodology, it also helps evaluators locate themselves in the evaluative process, enabling them to be more self-critical, reflective, and reflexive. Our social identities as individuals (race, class, ethnicity, sex, gender) inevitably shape who we are as evaluators and continually inform the theories we espouse, the methods we employ, what we value, the questions we ask, the data we collect, what we accept as evidence, and how we interpret what we find. By applying the underlying principles of standpoint theory(ies), evaluators can increase the quality, usefulness, and value of the evaluation for all stakeholders involved.
The Believer’s Calling from a Gender Perspective: Evaluating Presbyterian (United States of America) Participation in the United Nations Commission on the Status of Women Fifty-Fourth Conference
Presenter(s):
Cathryn Surgenor, New York Theological Seminary, revcsurgenor@onwardever.net
Abstract: The Presbyterian Church (USA) has been sending a delegation to the United Nations’ Commission on the Status of Women for the past five years. This two-week conference is attended by representatives of 44 member nations and two to three thousand delegates from non-governmental organizations. The focus of the conference is the implementation of the Platform for Action agreed upon at the Fourth World Conference on Women held in Beijing in 1995. This year I attended the conference as a participant observer and evaluated both the process experienced by our delegates and the impact the experience has had on them. I propose to present a paper on this collaborative evaluation experience, which seeks to understand to what degree attending this conference has empowered delegates to act on issues such as human trafficking, violence against women, women’s access to health, education, financial capital, and decision making.

Session Title: Export and Translation: Evaluating the Sharing of People, Programs, and Instruments Across Borders
Multipaper Session 697 to be held in REPUBLIC A on Friday, Nov 12, 4:30 PM to 6:00 PM
Sponsored by the International and Cross-cultural Evaluation TIG
Chair(s):
Mary Crave, University of Wisconsin, crave@conted.uwex.edu
Evaluating the Effectiveness of Short Term Humanitarian Trips to Another Country
Presenter(s):
Christi Delgatty, Texas State University, delgatty@yahoo.com
Abstract: It is difficult to evaluate the effectiveness of short-term humanitarian trips. Rarely is a trip reported as ineffective, and even when goals are unmet, trips are often praised for the effort and willingness of the volunteers. This project involved gathering data from trip participants about how they evaluate the effectiveness of their involvement in these short-term trips. I used qualitative in-depth interviews focused on work done in Cameroon, along with my personal experience, to create a quantitative survey assessing how volunteers evaluate the effectiveness of their short-term trips to other countries. This project is the initial phase of gathering several layers of data that will be used to make informed recommendations to organizations on ways they can improve the effectiveness of their short-term humanitarian trips.
Complexities of the Transition Framework: A Case Study of a Civic Education Export
Presenter(s):
Natalia Glebova, University of Massachusetts, Amherst, nglebova@educ.umass.edu
Abstract: This case study evaluates a project designed and implemented during the past decade to expose Russian educators to US ‘exemplary curricula’ in civic education and thereby to ‘assist democratic transition’ and ‘change in knowledge and value’. Drawing on interviews as well as personal knowledge, I analyze how the goals of the program, as formulated in its funding documents and interpreted by the receiving side, were complicated by the goals and expectations of the recipients. I address the following questions: What did Russian educators actually learn from the program? What did participants perceive to be the program’s contribution to their democratic transition? In what ways did the program logic support or hinder effective implementation? In short, I find that the products (interpreted and adapted courses built on the ‘exemplary curricula’) contained messages radically different from those intended: civic education became patriotic education, with an emphasis on nation-building values and goals.
The Effect of Translation on Evaluation Instruments: Multiple Group Confirmatory Factor Analysis (MGCFA) and Multiple Indicators, Multiple Causes (MIMIC) Models Outcomes
Presenter(s):
Fatma Ayyad, Western Michigan University, fattmah@hotmail.com
Brooks Applegate, Western Michigan University, brooks.applegate@wmich.edu
Abstract: A fundamental prerequisite to the use of a translated survey in multinational studies is the reproduction of the conceptual model underlying its scoring and interpretation. Structural equation modeling, in the form of multiple group confirmatory factor analysis (MGCFA) and multiple indicators, multiple causes (MIMIC) models, was used to test these aspects of the construct validity of the Belief in Personal Control Scale (BPCS) in the USA and two Arabic-speaking countries: Saudi Arabia and Palestine. Forward translation, blind back-translation, and expert evaluation of equivalence by bilingual and English-speaking experts were conducted to achieve conceptual equivalence between the original and translated instruments. Participants were college students, who are more likely to be bilingual. Empirical validation of the BPCS in a group of bilingual Arabic speakers demonstrated comparable item scaling, means, and variances. Advantages and difficulties of using a multicultural, multi-method approach to establish translation equivalence and to validate translated instruments are discussed.
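For readers less familiar with these models, the sketch below gives a generic MIMIC-style specification and the kind of cross-group equality constraints MGCFA typically tests; it is an illustrative formulation only, not the authors’ exact parameterization of the BPCS.
\[
\begin{aligned}
\mathbf{y} &= \boldsymbol{\tau} + \boldsymbol{\Lambda}\,\eta + \boldsymbol{\varepsilon} \quad &&\text{(measurement model: item intercepts and loadings on the latent construct)}\\
\eta &= \boldsymbol{\gamma}^{\top}\mathbf{x} + \zeta \quad &&\text{(MIMIC part: covariates such as language-group indicators predict the latent variable)}
\end{aligned}
\]
MGCFA fits the measurement model separately in each language group and then imposes progressively stricter constraints, for example \(\boldsymbol{\Lambda}^{(\mathrm{USA})}=\boldsymbol{\Lambda}^{(\mathrm{Arabic})}\) (metric invariance) and \(\boldsymbol{\tau}^{(\mathrm{USA})}=\boldsymbol{\tau}^{(\mathrm{Arabic})}\) (scalar invariance); in the MIMIC formulation, a significant direct path from a group indicator in \(\mathbf{x}\) to an individual item, bypassing \(\eta\), flags possible translation-related differential item functioning.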

Session Title: Improving Evaluations of Nutrition, Physical Activity, and Obesity Programs Through Schools, Providers, and Statewide Efforts
Multipaper Session 698 to be held in REPUBLIC B on Friday, Nov 12, 4:30 PM to 6:00 PM
Sponsored by the Health Evaluation TIG
Chair(s):
Jenica Huddleston, University of California, Berkeley, jenhud@berkeley.edu
Early Indicators of Success With the Use of the Electronic Medical Record (EMR) for Implementation of Expert Committee Recommendations on Childhood Overweight in Delaware
Presenter(s):
Gregory Benjamin, Nemours Health and Prevention Services, gbenjami@nemours.org
Vonna Drayton, Nemours Health and Prevention Services, vdrayton@nemours.org
Denise Hughes, Nemours Health and Prevention Services, dhughes@nemours.org
Jia Zhao, Nemours Health and Prevention Services, jzhao@nemours.org
Abstract: This paper reports on the effectiveness of using an electronic medical record (EMR) combined with an innovative quality improvement initiative (QII) to increase the proportion of medical providers who classify BMI and provide healthy lifestyle counseling to their patients (ages 0-18). Changes in BMI prevalence among patients of both non-QII and QII-participating medical providers were examined. Data from the EMR indicate that QII providers were more likely than non-QII providers to classify weights based on body mass index (BMI) percentiles and to counsel patients on healthy lifestyles. Patients under age two who received treatment from QII-participating providers showed a significant decrease (p<0.05) in mean BMI percentiles from 2007 to 2008; this trend was not found for patients of non-QII providers. EMR enhancements improve implementation of clinical guidelines, increase levels of documentation, and facilitate providers’ workflow, which ultimately improves patient outcomes (e.g., a decrease in BMI in children).
A National Strategy to Enhance Quality State Evaluations: The Case of the Division of Nutrition, Physical Activity and Obesity (DNPAO)
Presenter(s):
Donald Compton, Centers for Disease Control and Prevention, dcompton@cdc.gov
Michael Baizerman, University of Minnesota, mbaizerm@umn.edu
Rosanne Farris, Centers for Disease Control and Prevention, rfarris@cdc.gov
Abstract: Quality evaluations and their use are a primary goal of the Centers for Disease Control and Prevention’s (CDC’s) Division of Nutrition, Physical Activity, and Obesity (DNPAO) for states receiving CDC funds. To accomplish this goal, the DNPAO evaluation team developed an evaluation strategic plan whose overall framework is Evaluation Capacity Building (ECB). This paper introduces the DNPAO model of ECB and describes how states are being supported to implement ECB in order to produce and use quality studies. DNPAO and the states each organize and use advice structures, such as evaluation consultation groups, to guide their work. These structures of expertise will be reviewed in general, and particular attention will then be given to advice structures and practices designed to enhance the appropriate, timely, and effective use of evaluation studies.
2009 Delaware Child Care Provider Survey: Exploring the Relationship Between Awareness, Policies, and Practices and Behaviors
Presenter(s):
Tiho Enev, Nemours Health and Prevention Services, tenev@nemours.org
Alex Camacho, Nemours Health and Prevention Services, acamacho@nemours.org
Abstract: The 2009 Delaware Child Care Provider Survey is a critical component of the systems-level evaluation model of Nemours Health and Prevention Services (NHPS), aimed at evaluating its work in childcare settings. The two versions of the questionnaire, one for directors and one for teachers, present two perspectives on center-level characteristics related to healthy eating and physical activity (HEPA). The survey is designed to study the relationship between a) center directors’ awareness of the HEPA policies introduced by the Office of Child Care Licensing and the Child and Adult Care Food Program, b) center policies, and c) staff practices and behaviors in childcare settings. The data allow us to empirically test and validate the systematic connections between these three components, and the findings reveal strong positive correlations between the constructs. The results are being used to further inform the programmatic work of NHPS and to expand the evaluation methods and instruments used in the field.
Mixed Method Evaluation of a Multi-district Physical Education Curriculum Intervention
Presenter(s):
Randy Knuth, Knuth Research Inc., randy@knuthresearch.com
Bob Lutz, Gonzaga University, lutz@gonzaga.edu
Abstract: Reversing rising rates of obesity and diabetes among K-12 students has been the focus of the US Department of Education’s Carol White Physical Education Program (PEP). This presentation will discuss the qualitative and quantitative methods used to evaluate a curricular intervention that focuses on increasing activity and fitness levels and helping students master cognitive concepts related to nutrition and healthy lifestyles. The evaluation has been conducted independently in over 20 school districts over a four-year period. Quantitative data collection used activity logs, fitness measures, cognitive assessments, and attitude surveys, each designed to track changes in student outcomes. Qualitative methods, including focus groups, interviews, and the SOFIT protocol for classroom observation, were used to assess levels of curricular implementation over time and to identify challenges and effective strategies. The goal is to increase capacity to use data for continuous improvement and to sustain changes at the systems (district) level.

Session Title: Technical Assistance in Action: How Does the Practice Look?
Panel Session 699 to be held in REPUBLIC C on Friday, Nov 12, 4:30 PM to 6:00 PM
Sponsored by the Collaborative, Participatory & Empowerment Evaluation TIG
Chair(s):
Pamela Imm, University of South Carolina, pamimm@windstream.net
Abstract: Current research on the types and levels of intensity of technical assistance needed to influence program and community outcomes is limited; in fact, technical assistance efforts remain mostly intuitive rather than data driven (Florin, Mitchell, Stevenson, & Klein, 2000). Meanwhile, the investment in technical assistance continues to grow, with many federal agencies contracting with technical assistance providers to work closely with their grantees to promote high-level planning, implementation, and outcomes. This session will provide an opportunity for technical assistance providers to discuss their work and to offer ideas for how to conceptualize and measure technical assistance in a variety of settings, including ideas for qualitative and quantitative measurement.
Literature Review on Proactive Technical Assistance Systems in Community Settings
Jason Katz, University of South Carolina, jakatz@mailbox.sc.edu
Abraham Wandersman, University of South Carolina, wandersman@sc.edu
Although the benefits of technical assistance (TA) have been demonstrated in community-based settings, more research is needed to better understand which aspects of TA are most effective for communities. One aspect of TA, proactive design, has been broadly conceptualized in the literature as TA that is catalyzed by the TA provider. A modified grounded theory approach (Kloos, Gross, Meese, Meade, et al., 2005) was used to gather information from communities participating in a federal mental health systems transformation initiative about how to operationalize proactive technical assistance. Results suggested that communities regard four domains as important within a proactive TA strategy: (1) TA provider orientation to a community; (2) TA provider immersion in a community’s practices and perspectives; (3) a context-informed assessment; and (4) the delivery of community-specific TA support. Areas for future research and implications for the practice of proactive TA will be discussed.
Notes From the Field: Lessons Learned From Using a Participatory Approach to Evaluation
Jessica Waggett, Institute for Community Health, jwaggett@challiance.org
Emily Chiasson, Institute for Community Health, echiasson@challiance.org
Elisa Friedman, Institute for Community Health, efriedman@challiance.org
Karen Hacker, Institute for Community Health, khacker@challiance.org
The Institute for Community Health (ICH) is a community-based evaluation and research institute. A cornerstone of our mission is the promotion of capacity building among our partner communities through participatory approaches. ICH has worked for 10 years with community partners to 1) provide evaluation capacity-building technical assistance and education, and 2) develop and implement evaluations that encourage data-driven decisions. While projects are often small in scope, ICH implements a continuum of effective evaluation approaches that match available resources, tailoring capacity-building efforts to the needs of our partners. Drawing from our experience working on a variety of small project evaluations, we will share how our approach increases partners’ understanding of and motivation for evaluation, as well as their use of data for community health improvement. In particular, we will examine a case study of a local opioid overdose prevention project and the lessons learned from our participatory approach to its evaluation.
Helping Practitioners Use Data for Planning and Evaluation
Jane Powers, Cornell University, jlp5@cornell.edu
For the past ten years, we have been working on an initiative called Assets Coming Together for Youth, helping communities and youth-serving programs across New York State promote the health and well-being of adolescents through positive youth development strategies. We operate an academic Center of Excellence that connects leading-edge youth development research to practice and provides training and technical support, evaluation assistance, and resources to front-line providers as well as policy makers. In this presentation, we will share lessons learned in helping programs, organizations, and coalitions evaluate their youth development efforts. We will report on a technical assistance approach that leads practitioners through a reflection process involving the collection of self-assessment data, which are then used for planning and evaluation purposes. Data interpretation sessions engage participants and foster discussion about how to enhance program quality, improve practice, and create change that optimizes positive youth development.
Training and Technical Assistance to Build Capacity of Mental Health in Laos
Paul Florin, University of Rhode Island, pflorin@mailbox.uri.edu
Lao PDR is a low-income nation of 5.5 million people in Southeast Asia. Currently, two psychiatrists in the capital city serve the entire population, and there are no mental health services in the countryside, which is made up of 17 provinces. This paper will describe the work of an international team that has been conducting a feasibility study on how to approach building mental health capacity in Laos. The paper will 1) describe the initial reconnaissance project conducted in March 2009; 2) provide an overview of a concept paper that describes a two-level (national and provincial) approach to capacity building; 3) review a follow-on visit in March 2010; and 4) describe how initial training and technical assistance will be used to assess and treat perinatal depression in one province by integrating it into existing primary health care. Implications for training and technical assistance design in low-income countries will be discussed.
Measuring Technical Assistance Influence Using Project Activity Networks
Peter Kreiner, Brandeis University, pkreiner@brandeis.edu
This study developed a new approach to measuring the influence of technical assistance on project progress in eight communities funded to address youth substance abuse. We developed a citation network of project activities, capturing, for each project activity, which prior activities gave rise to it, including technical assistance activities. We then applied the quantitative tools of network analysis to derive a measure of each activity’s influence on downstream activities, yielding measures for each project of the relative influence of technical assistance activities. Using these measures, we explored the relative influence of technical assistance over time, across projects, and across project activity categories. Through key informant interviews in each community, we also assessed changes in community and coalition capacity to address youth substance abuse. Comparisons of changes in capacity with relative influence of technical assistance yielded insights into where technical assistance was most effective, and how its influence could be improved.
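As a rough illustration of the general idea behind such an activity-citation network (not the authors’ actual measure, data, or activity categories), the following Python sketch builds a small directed network in which an edge from one activity to another means the later activity cited the earlier one as giving rise to it, and then scores each activity by how much of the project’s later work lies downstream of it. All node names and the reach-based influence score are hypothetical.

import networkx as nx

# Hypothetical activity-citation network: an edge A -> B records that
# activity B cited activity A as one of the activities that gave rise to it.
G = nx.DiGraph()
G.add_edges_from([
    ("TA: coalition training", "needs assessment"),
    ("needs assessment", "strategic plan"),
    ("TA: data workshop", "strategic plan"),
    ("strategic plan", "media campaign"),
    ("strategic plan", "compliance checks"),
])

# Score each activity by the share of all other activities it reaches
# downstream -- one simple proxy for its influence on later project work.
n = G.number_of_nodes()
influence = {node: len(nx.descendants(G, node)) / (n - 1) for node in G.nodes}

# Compare the technical-assistance activities with the project as a whole.
ta_nodes = [node for node in G.nodes if node.startswith("TA:")]
ta_share = sum(influence[node] for node in ta_nodes) / sum(influence.values())
print(f"Share of total downstream reach attributable to TA activities: {ta_share:.2f}")

A reach-based score like this is only one of several network measures that could be used; centrality indices that weight nearer descendants more heavily would give a similar, graded picture of where TA activities sit in a project’s chain of work.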
Effectiveness of the Getting to Outcomes (GTO) Technical Assistance Model to Reduce Underage Drinking
Pamela Imm, University of South Carolina, pamimm@windstream.net
Annie Wright, University of South Carolina, patriciaannewright@yahoo.com
Matthew Chinman, RAND Corporation, chinman@rand.org
This presentation will focus on the provision of training and technical assistance on the Getting to Outcomes (GTO)™ accountability system and on the effectiveness of that strategy for promoting use of the GTO™ system. Training and TA on GTO™ has been provided to three coalitions to help them implement key environmental strategies to prevent underage drinking. The work and outcomes of these three intervention coalitions are being compared with the underage drinking outcomes of three comparison coalitions. The intervention period began with an extensive training on GTO, followed by 18 months of technical assistance on GTO. Logs of TA for each of the intervention coalitions have been maintained, and TA activities have focused specifically on alcohol compliance checks, responsible beverage service, and media advocacy. The researchers will present their model for how technical assistance is conceptualized and quantified.
