Evaluation 2008


Session Title: Revisiting the Criteria and Process for Evaluating AEA Conference Proposals and Presentations
Think Tank Session 202 to be held in Capitol Ballroom Section 1 on Thursday, Nov 6, 9:15 AM to 10:45 AM
Sponsored by the AEA Conference Committee
Howard Mzumara,  Indiana University-Purdue University Indianapolis,  hmzumara@iupui.edu
Daniela Schroeter,  Western Michigan University,  daniela.schroeter@wmich.edu
Susan Kistler,  American Evaluation Association,  susan@eval.org
Nicole Vicinanza,  JBS International,  nvicinanza@jbsinternational.com
Amy Gullickson,  Western Michigan University,  amy.m.gullickson@wmich.edu
Abstract: Building on the Think Tank session entitled 'Learning from AEA TIG Proposal Review Standards' that was held at the 2007 AEA Conference, this session will continue an open but structured discussion about issues and concerns regarding variability in the quality of proposals and presentations offered at the annual conference. All AEA attendees are invited to attend the session and participate in re-examining the appropriateness and usefulness of existing criteria for reviewing session proposals, and then engage in an interactive discussion aimed at assisting AEA with articulation of specific requirements, criteria, and appropriate measures for assessing the quality of proposals and presentations offered in the respective sessions at the conference. Most importantly, participants will have an opportunity to offer suggestions that could help AEA in making incremental quality improvements that would address attendees' concerns regarding variability in the quality of presentations made at the annual conference.

Session Title: Intermediate Consulting Skills: A Self Help Fair
Think Tank Session 203 to be held in Capitol Ballroom Section 2 on Thursday, Nov 6, 9:15 AM to 10:45 AM
Sponsored by the Independent Consulting TIG
Maura Harrington,  Southern California Center for Nonprofit Management,  maura.harrington@yahoo.com
Mariam Azin,  Planning, Research & Evaluation Services,  mazin@presassociates.com
Christina Lynch,  Partners in Evaluation and Planning,  colynch@verizon.net
Courtney Malloy,  Vital Research LLC,  courtney@vitalresearch.com
Geri Lynn Peak,  Two Gems Consulting Services,  geri@twogemsconsulting.com
Dawn Hanson Smart,  Clegg and Associates Inc,  dsmart@cleggassociates.com
Victoria Straub,  SPEC Associates,  vstraub@specassociates.org
Patricia Yee,  Vital Research LLC,  patyee@vitalresearch.com
Abstract: This skill-building workshop will allow experienced independent evaluation consultants to interact with less experienced colleagues in order to demonstrate and share some of their hard-earned lessons. A series of seven Topic Tables will be set up, each with an experienced Table Leader who is prepared to share information about one consulting topic they enjoy and do well. Time is provided for participants both to ask questions and to contribute their ideas. Every 15 minutes, participants will circulate to a different Topic Table. Topics include:
- How much are you worth? Budgeting for staff and expertise
- Contractual considerations concerning confidentiality and copyrights
- How to handle the lean times: supplementing your evaluation business
- How close is too close? Long-term relationships with clients
- The development of effective, collaborative evaluator-client relationships
- Innovation on the fly: generating new ideas for evaluation from the world crucible
- Streamlining your evaluation

Session Title: Professional Designation for Evaluators in Canada: Where We Are Now
Panel Session 204 to be held in Capitol Ballroom Section 3 on Thursday, Nov 6, 9:15 AM to 10:45 AM
Sponsored by the AEA Conference Committee
Martha McGuire,  Cathexis Consulting,  martha@cathexisconsulting.ca
Jean King,  University of Minnesota,  kingx004@umn.edu
Abstract: The Canadian Evaluation Society (CES) is in the process of developing a system of voluntary professional designations, following an extensive consultation process with its membership and associated organizations. This panel will examine the issues involved in credentialing evaluators, and the role of credentialing in informing evaluation policies, by presenting recent developments in the CES. A Professional Designation Core Committee (PDCC) was established by the CES National Council to lead the project and has constituted three sub-committees to address identified activity streams: infrastructure, credentialing, and partnerships/outreach. The PDCC undertook a 'cross-walk' of different extant knowledge bases in order to develop a comprehensive list of evaluator competencies. The panel will invite discussion based on presentations of the proposed Competencies for Canadian Evaluation Practice, which draws from all of the references in the cross-walk and reflects the current Canadian evaluation environment, and of the role and progress of the three sub-committees in bringing the project forward.
Overview of Professional Designation for Evaluators
Heather Buchanan,  Jua Management Consulting Services,  hbuchanan@jua.ca
Brigitte Maicher,  Net Results and Associates,  maicherb@nb.sympatico.ca
Keiko Kuji Shikatani,  Independent Consultant,  kujikeiko@aol.com
The Professional Designation Core Committee members will present the overall plan of the professional designation project, including the development of the Competencies for Canadian Evaluation Practice. The proposed system for credentialing evaluators in Canada rests on three important aspects, which collectively define and shape evaluation practice. These three pillars are standards, ethics, and competencies. Standards define for the practitioner the acceptable characteristics of evaluation products and services. Competencies are the skills, knowledge, and abilities required of a person practicing evaluation. Ethics then provide an umbrella under which the competencies are applied and products produced. The presentation of the three pillars will demonstrate their crosscutting and overlapping nature. This is not a static picture; it is one that needs to evolve as the demand for and supply of evaluation services grows over time, in response to both changing contexts and innovation within the profession itself.
A Competency Profile for Canadian Evaluators?
Brigitte Maicher,  Net Results and Associates,  maicherb@nb.sympatico.ca
In order to implement the professional designation project in Canada, three objectives were identified, and a sub-committee was aligned with each objective. This presentation will focus on one of these objectives: 'To recognize degrees of competency within Canadian evaluation practice.' The process entails the development of measurable criteria from the validated competencies. The presentation will report on the validation process for the criteria and on how they are to be applied to the two designations, 'Credentialed Evaluator' and the official 'Member Category'. The selection and functions of the Credentialing Board; the integration of continuing evaluation education, experience, education, and equivalencies; and grand-parenting proposals will be discussed, and the dispute mechanism will be described. The presentation will outline the challenges encountered and aim to engage the audience.
Partnership and Outreach Component of the Professional Designation Project
Heather Buchanan,  Jua Management Consulting Services,  hbuchanan@jua.ca
This presentation will address the partnership and outreach activities undertaken in the development and implementation (to date) of professional designations. It will focus on the challenges encountered in actively and meaningfully engaging both the demand and supply sides of 'evaluation' in Canada, and in engaging the international evaluation community on the professional designation system being designed in Canada. The professional designation project is guided by principles of inclusiveness, partnering, utility, feasibility, and transparency, all framed in a volunteer, fiscally constrained, and rapid-paced environment. An overview of the communication and consultation mechanisms employed will be provided, highlighting the successes and lessons learned. Importantly, the presentation will discuss the extent to which the work of this project has served to build and augment CES' outreach to, and partnerships with, those who are impacted by and/or support professionalizing evaluation practice in Canada.
Working In Tandem To Build An Infrastructure for the CES Professional Designation System
Keiko Kuji Shikatani,  Independent Consultant,  kujikeiko@aol.com
The Infrastructure Sub-Committee (ISC) was established to create a sustainable infrastructure for a system of professional designation services to CES members. This includes: identifying and designing the infrastructure required to support professional designations (administrative support, systems, procedures, etc.); identifying policy and constitutional implications and change needs; preparing costing for the ongoing operation of the system; developing the fee structure (or fee structure options) for a system of voluntary professional designations; developing/designing the organizational structure and accountability mechanisms for ongoing delivery of the new 'services/program'; and liaising with CSC in the definition and establishment of a system for complaints regarding professional designations. We will present our progress to date and the issues encountered in working in tandem with the other sub-committees in an integrated manner, as actions in each are informed by, and contribute to, the agendas of the others.

Session Title: The Future of Ethics in Program Evaluation
Panel Session 205 to be held in Capitol Ballroom Section 4 on Thursday, Nov 6, 9:15 AM to 10:45 AM
Sponsored by the AEA Conference Committee
Donna Mertens,  Gallaudet University,  donna.mertens@gallaudet.edu
Michael Quinn Patton,  Utilization Focused Evaluation,  mqpatton@prodigy.net
Abstract: Program evaluators work in dynamic, complex systems, often needing to make decisions about challenging ethical dilemmas not specifically addressed in AEA's Guiding Principles or in an organization's policy statements about evaluation. Hence the importance of critical reflection on such issues, not only as they have been experienced in past evaluations, but also with an eye to implications for the future. Drawing on contributors to the forthcoming Handbook of Social Research Ethics, the panel includes: Kate Toms, who contrasts the perspectives of American and Australasian evaluators on the topic of ethics to reveal issues of import that demand more thought; Ken Howe and Heather MacGillivary, who wrestle with the ethical issues that emerge as evaluators struggle to tie their work to the policy arena; and Pauline Ginsberg and Donna M. Mertens, who bring together ethical issues embedded in power differentials in program evaluation as they relate to increased attention to cultural competency, beneficence, reciprocity, and confidentiality.
Ethical Perspectives in Evaluation: A Contract Not Fully Negotiated
Kathleen Toms,  Research Works Inc,  katytoms@researchworks.org
Amanda Wolf,  Victoria University of Wellington,  amanda.wolf@vuw.ac.nz
David Turner,  New Zealand Ministry of Justice,  adturner@xtra.co.nz
This presentation will discuss the ethics of evaluation as a profession, and the ethics of evaluation as professional ethics. Evaluators display fundamental characteristics of professional practice: they think, function, and learn across a series of 'new' instances; blend together experience, knowledge, and a certain disposition or 'feel' in their work; and provide a service to society. When compared with the ethical codes of other professions, the existing evaluation standards and codes of practice are more minimum competencies than codes of professional ethics. Professional ethics concerns the moral issues that arise when individuals draw on specialist knowledge to provide services to the public. It follows that evaluation has its own unique ethical considerations, because the 'rightness' of professional practices and their societal consequences is central to evaluation's 'social contract', a contract not yet completely negotiated. While evaluators are limited in how they ethically justify their decisions, it remains an open question how well professional norms accommodate emerging ethical perspectives.
The Ethics and Implications of Deliberative Democratic Social Research
Kenneth Howe,  University of Colorado Boulder,  ken.howe@colorado.edu
Heather MacGillivary,  University of Colorado Boulder,  heather.macgillivary@colorado.edu
Evaluation research that incorporates the principles of deliberative democracy creates unique ethical and practical challenges. These challenges are often amplified when evaluators try to connect results to important policy decisions. Howe and MacGillivary will revisit three central political-cum-methodological principles of deliberative democratic social research: inclusion, dialogue, and deliberation. Howe will discuss how media coverage, stakeholder polarization, and limited resources affected deliberative democracy in two educational policy evaluations concerning school choice and bilingual education. MacGillivary will describe a participatory democratic evaluation of supported housing services for people with chronic psychiatric disabilities; specifically, her example will describe how to establish credible deliberation with oppressed groups. Three implications from these examples will be discussed: the distinction between participatory and deliberative democratic research, the detriments of resurgent 'experimentism', and the deleterious impacts of doggedly self-interested, dominant groups.
Exploring the Frontiers of Evaluation Ethics
Pauline E Ginsberg,  Utica College,  pginsbe@utica.edu
Donna Mertens,  Gallaudet University,  donna.mertens@gallaudet.edu
The forthcoming (Sept., 2008) Handbook of Social Research Ethics draws from many disciplines, many paradigms, many evaluation topics, many methods and many cultures. As a result, there is great diversity among its contributors regarding site-specific dilemmas that are likely to arise and means by which these might be negotiated. There is also a central core of issues that cross disciplines, paradigms, cultures, methodologies, and topics. These may be roughly divided into three categories: redefining familiar terminology, power relations, and the evaluator's role as information provider v. advocate. Familiar terms include respect, confidentiality and informed consent. Power relations involve evaluator/gatekeeper, evaluator/funder, evaluator/community, evaluator/agency or institution, and evaluator/participant linkages. Finally, the evaluator's role as information provider v. that of advocate is the most knotty one for evaluators, intertwined of necessity with the evaluator's ordinary moral sense, the specific evaluation contract, and professional future.

Session Title: Approaches to Evaluating Gender Equality
Multipaper Session 206 to be held in Capitol Ballroom Section 5 on Thursday, Nov 6, 9:15 AM to 10:45 AM
Sponsored by the Feminist Issues in Evaluation TIG
Tristi Nichols,  Manitou Inc,  tnichols@manitouinc.com
Sharon Brisolara,  Evaluation Solutions,  evaluationsolutions@hughes.net
Abstract: Fostering women's equality is often the aim of international development projects working with women. How to evaluate gender equality, and how to understand 'women's empowerment' within the context of an international development project, however, are matters on which there is little discussion in the literature. The papers in this session will present a discussion of the issues shaping what has been and can be done with respect to evaluating gender equality, concrete examples of how gender equality and related indicators have been evaluated, and a discussion of appropriate evaluation approaches and models to use in addressing gender issues.
Assessing the Outcomes and Impacts of Alternative Strategies for Promoting Gender Equality and Women's Empowerment in International Development Programs
Michael Bamberger,  Independent Consultant,  jmichaelbamberger@gmail.com
Although gender equality is a major goal of international development, relatively little empirical evidence is available on the impacts of development interventions on indicators of gender equality. Cost and time constraints are often cited to explain this lack of empirical data. The paper will use ongoing research on food security to illustrate ways to obtain operationally useful estimates of gender impacts within typical budget, time and data constraints. Promising approaches include: (a) more effective use of existing program data; (b) mining the extensive, but frequently under-utilized community knowledge of local organizations involved in program service delivery; (c) techniques for reconstructing baseline data; and (d) mixed-method approaches combining simple quantification with participatory techniques for reconstructing causal chains. The systematic use of triangulation is recommended to confront the widely different estimates of outcomes such as improved gender relations versus continued high levels of domestic violence often obtained from program staff and other sources.
Empowering Women In Muslim Majority Countries
Karl Feld,  D3 Systems Inc,  karl.feld@d3systems.com
Veronica Gardner,  D3 Systems Inc,  veronica.gardner@d3systems.com
Research can measure the effectiveness of women's empowerment initiatives in the economies of the Muslim world at the individual level. It has a role to play in quantifying international program assumptions about Muslim women's desires, hopes and experiences amongst the general populations of Muslim-majority countries. It can also provide insight into how to mobilize women to change their socio-cultural environment for the better. This research suggests that many of the fundamental assumptions made about Muslim women today are correct, but much more nuanced than often suggested. This is particularly the case when it comes to getting social, cultural and legal reform right. An empirical knowledge of the heterogeneous state of women's enfranchisement and their attitudes towards it by country and region in the Muslim world today can improve the performance of assistance efforts.
Gender Issues in Global Evaluation
Donna Podems,  Macro International,  donna.R.Podems@macrointernational.com
Discussions of gender and feminist evaluation, and of their practical application, can seem a quagmire. Such discussions occur in a variety of evaluation contexts throughout the 'developed' and 'developing' worlds, from conference venues to field work. A case study of a sex workers' project in South Africa that is fighting for the legalization of sex work will be used to describe the differences between gender and feminist approaches; discuss strategies participants can apply to incorporate feminist and/or gender elements in their own evaluations; and allow for dialogue on the question: is it feminist or gender?

Session Title: Monitoring and Promoting LGBTQ Health Through Evaluation
Multipaper Session 207 to be held in Capitol Ballroom Section 6 on Thursday, Nov 6, 9:15 AM to 10:45 AM
Sponsored by the Lesbian, Gay, Bisexual, Transgender Issues TIG and the Health Evaluation TIG
Steve Fifield,  University of Delaware,  fifield@udel.edu
Inclusive Service Provision for Sexually Diverse Populations: Surveying Staff as a Tool for Sensitization
Shelly Makleff,  International Planned Parenthood Federation,  smakleff@ippfwhr.org
River Finlay,  International Planned Parenthood Federation,  rfinlay@ippfwhr.org
Stephanie Chamberlin,  International Planned Parenthood Federation,  schamberlin@ippfwhr.org
Abstract: International Planned Parenthood Federation/Western Hemisphere Region (IPPF/WHR) launched a series of projects in 2004 to address the sexual and reproductive health (SRH) needs of sexually diverse (including gay, lesbian, bisexual, and transgender) populations throughout Latin America. Strategies involve the use of tools to assess attitudes, define needs, and develop institutional priorities. Tool implementation among IPPF/WHR member associations revealed an unanticipated outcome: the act of tool completion itself served to sensitize staff to issues of sexual diversity by instigating dialogue and posing questions that were new to participants. This process facilitated reflection on assumptions about client sexual behavior and reproductive intentions, assumptions that affect quality of care and client-provider interactions. This session will describe programmatic and institutional uses of tool implementation and findings. Evaluation tool implementation is one of many strategies to facilitate the provision of high-quality, non-judgmental services for all clients.
A Dangerous Combination: How Policy and Methodological Challenges Perpetuate Health Disparities in Lesbian, Gay, Bisexual and Transgendered Populations
Kari Greene,  Program Design and Evaluation Services,  kari.greene@state.or.us
Abstract: Researchers and evaluators face numerous methodological challenges when examining lesbian, gay, bisexual, and transgender (LGBT) health issues. These challenges are exacerbated by multi-level structural and political barriers specific to LGBT communities, resulting in a dearth of national and local research and evaluation on LGBT health issues. A review of the extant literature reveals health disparities for LGBT populations on a number of health indicators, yet structural barriers at multiple levels impede further examining, addressing, and ameliorating those disparities. This session will examine four key methodological challenges related to evaluating and researching LGBT populations and health issues. Furthermore, key policies, practices, and structural issues that impact the health of LGBT populations will be explored. Through a multi-level examination of these factors, attendees will gain knowledge of the hidden and overt barriers to improving health outcomes for LGBT communities.
Smoking Prevalence and Cessation among Lesbian, Gay, Bisexual, and Transgender Arizona Residents
John Daws,  University of Arizona,  johndaws@email.arizona.edu
Abstract: In Arizona, tobacco is a particular problem for the lesbian, gay, bisexual, and transgender (LGBT) community. Compared to heterosexual residents, LGBT Arizonans have a higher prevalence of current smoking, are younger when they initiate smoking, smoke more cigarettes per day, and are less likely to be ex-smokers. Recently, however, the number of LGBT smokers who participate in state-run cessation programs has increased. Success in quitting has also recently increased: in the past fiscal year, the quit rate for LGBT clients (18%) was significantly higher than that for heterosexual clients (12%). This paper will describe differences between LGBT and heterosexual smokers, and how these differences affect cessation success. Absent from this paper will be information about tobacco use among LGBTQ youth in Arizona. None of the three surveillance instruments which monitor youth smoking (and other substance use) include sexual-orientation questions. This paper will discuss strategies for evaluating the youth tobacco problem.

Session Title: Challenging the Basics: The Relationship Between Evaluation Methods and Findings in Substance Abuse and Mental Health Studies
Multipaper Session 208 to be held in Capitol Ballroom Section 7 on Thursday, Nov 6, 9:15 AM to 10:45 AM
Sponsored by the Alcohol, Drug Abuse, and Mental Health TIG
Robert Hanson,  Health Canada,  robert_hanson@hc-sc.gc.ca
Resiliency: A Qualitative Meta-Synthesis
Scott Nebel,  University of Denver,  scott.nebel@du.edu
Abstract: Resiliency has become an increasingly popular concept in the field of mental health and in studies involving children confronted with some form of distress. Nonetheless, little consensus among researchers, clinicians, and evaluators seems to exist regarding what the term resiliency truly encompasses. Utilizing the qualitative methodology of meta-synthesis, this study analyzes, deconstructs, and synthesizes predominant qualitative studies on the issue of resiliency, in an attempt to provide a more explicit understanding of the concept and in the hope of delineating it from other recovery-oriented terms and phenomena.
The Prevalence of Pseudoscientific Research Practices in the Evaluation of "Evidence-Based" Drug and Alcohol Prevention Programs
Dennis Gorman,  Texas A&M University,  gorman@srph.tamhsc.edu
Eugenia Conde,  Texas A&M University,  eugeniacd@tamu.edu
Brian Colwell,  Texas A&M University,  colwell@srph.tamhsc.edu
Abstract: Research has shown that evaluations of some of the most widely advocated school-based alcohol and drug prevention programs employ questionable practices in their data analysis and presentation. These practices include selective reporting among numerous outcome variables, changes in measurement scales before analysis, multiple subgroup analysis, post hoc sample refinement, and selective use of alpha levels above 0.05 and one-tailed significance tests. However, it is unclear just how widespread the use of such practices is within the overall field of alcohol and drug prevention since the focus of the existing critiques has been on a fairly narrow range of school-based programs. This presentation addresses this issue by reviewing the data analysis and presentation practices used in the evaluations of the 34 school-based programs that appear on the Substance Abuse and Mental Health Services Administration’s (SAMHSA) National Registry of Effective and Promising Programs.
Validating Self-Reports of Illegal Drug Use to Evaluate National Drug Control Policy: Taking the Data for a Spin
Stephen Magura,  Western Michigan University,  stephen.magura@wmich.edu
Abstract: Illicit drug use remains at high levels in the U.S. The federal Office of National Drug Control Policy evaluates the outcomes of national drug demand reduction policies by assessing changes in the levels of drug use, including measures of change from several federally sponsored annual national surveys. The survey methods, relying exclusively on self-reported drug use (interview or paper-and-pencil), have been criticized by the Government Accountability Office (GAO) as well as by independent experts. This analysis critiques a major validity study of self-reported drug use conducted by the federal government, showing that the favorable summary offered for public consumption is highly misleading. Specifically, the findings of the validity study, which compared self-reports with urine tests, are consistent with prior research showing that self-reports substantially underestimate drug use and can dramatically affect indicators of change. Thus, these national surveys are largely inadequate for evaluating national drug demand reduction policies and programs.
The Making of “Effective” Drug Prevention Programs through Pseudoscientific Research Practices: A Reanalysis of Data from the Drug Abuse Resistance Education (DARE) Program
J Charles Huber Jr,  Texas A&M University,  jchuber@srph.tamhsc.edu
Dennis Gorman,  Texas A&M University,  gorman@srph.tamhsc.edu
Abstract: Evaluations of some of the most widely advocated school-based alcohol, tobacco and other drug (ATOD) prevention programs have been shown to employ questionable practices in their data analysis and presentation such as selective reporting among numerous outcome variables, post hoc sample refinement, changes in measurement scales before analysis, multiple subgroup analysis, and selective use of alpha levels above 0.05 and one-tailed significance tests. This raises the question as to whether the use of such irregular and questionable data management and analysis practices results in significant overestimation of program effects. We address this issue by reanalyzing data from an independent evaluation conducted by Richard Clayton and colleagues of the Drug Abuse Resistance Education (DARE) program that produced null results. Specifically, we will determine whether the use of questionable data analysis and presentation practices can produce positive results when applied to this dataset.

In a 90 minute Roundtable session, the first rotation uses the first 45 minutes and the second rotation uses the last 45 minutes.
Roundtable Rotation I: Evaluation in the Wild: The Challenges of Effectiveness Evaluations
Roundtable Presentation 209 to be held in the Limestone Boardroom on Thursday, Nov 6, 9:15 AM to 10:45 AM
Sponsored by the Theories of Evaluation TIG
Andrea Beesley,  Mid-Continent Research for Education and Learning,  abeesley@mcrel.org
Sheila A Arens,  Mid-Continent Research for Education and Learning,  sarens@mcrel.org
Abstract: This session addresses the difference between tightly controlled pilot studies or trials (efficacy), as compared to evaluations of programs in a more realistic, less controlled setting (effectiveness). Efficacy evaluations emphasize internal validity, while effectiveness evaluations take place (usually) after some evidence of efficacy has been obtained, and the emphasis is on external validity. Because effectiveness evaluations take place “in the wild,” that is, in the real-world setting with all of its variations, diversity, and external influences, the evaluator cannot necessarily expect that the program will be implemented exactly as it was in pilot studies or efficacy trials. The presenters will discuss the differences between efficacy and effectiveness evaluations, and give examples from their work. They will also raise issues of client/developer relations, recruiting, implementation fidelity, and data collection and analysis in an effectiveness evaluation. Participants will be encouraged to discuss these issues and contribute knowledge from their own experiences.
Roundtable Rotation II: What Evaluation Theory Doesn’t Tell Us: A Conversation Between Evaluator and Evaluand
Roundtable Presentation 209 to be held in the Limestone Boardroom on Thursday, Nov 6, 9:15 AM to 10:45 AM
Sponsored by the Theories of Evaluation TIG
Kyoung Jin Kim,  University of Illinois Urbana-Champaign,  kkim37@uiuc.edu
Tania Rempert,  University of Illinois Urbana-Champaign,  trempert@uiuc.edu
Abstract: Using actual quotations and comments shared in relation to two recent evaluations, the authors consider what an evaluator and evaluand might discuss if they were to consider evaluation theory within the context of their evaluation in practice, uncovering the effect that contextual factors have on evaluation theory when it is translated into evaluation practice. Through a dialogue between an evaluator and evaluand, this paper addresses: 1) What role does evaluation theory play in helping determine how evaluations will be implemented? 2) Does evaluation theory offer enough practical advice to evaluation practitioners regarding what strategies, methods, and tools are appropriate for a particular context? 3) Does evaluation theory help guide the evaluator's thinking about how to interact with stakeholders, enact value commitments, and attend to key characteristics of different contexts?

Roundtable Rotation I: Garnering Grantee Buy-in on a National Cross-Site Evaluation: The Case of ConnectHIV
Roundtable Presentation 210 to be held in the Sandstone Boardroom on Thursday, Nov 6, 9:15 AM to 10:45 AM
Sponsored by the Collaborative, Participatory & Empowerment Evaluation TIG
Susan Rogers,  Academy for Educational Development,  srogers@aed.org
Abstract: ConnectHIV is a Pfizer Foundation-funded initiative that is providing $7.5 million in grants and technical assistance over three years (2007-2010) to twenty AIDS Service Organizations that take a comprehensive, evidence-based approach to HIV prevention and access to care and treatment. Evaluation is a centerpiece of the initiative: assessment activities are being conducted both at the local grantee level and nationally through a cross-site evaluation. Grantee buy-in on evaluation was garnered through several methods, including: participation in an individualized evaluation needs assessment and in a grantee 'evaluation summit' after funding awards were made; grantee-specific technical assistance on local evaluation efforts; formation of a Grantee Evaluation Advisory Committee on the cross-site evaluation; grantees' direct involvement in the formation of cross-site measures; the promotion of peer learning communities; and training of grantees on PDAs for national data collection. While the participatory process is time-consuming, overall it has fostered innovative approaches to engaging grantees, resulting in grantees' appreciation and ownership of the project's evaluation components, quality data reporting, increased use of findings, and increased interest in evaluation beyond the project.
Roundtable Rotation II: Motivating the Client to Make Change: Using Self-Reflection in a Collaborative Evaluation
Roundtable Presentation 210 to be held in the Sandstone Boardroom on Thursday, Nov 6, 9:15 AM to 10:45 AM
Sponsored by the Collaborative, Participatory & Empowerment Evaluation TIG
Bridget Cotner,  University of South Florida,  bcotner@cas.usf.edu
Aarti Bellara,  University of South Florida,  bellara@coedu.usf.edu
Abstract: This evaluation paper highlights the importance of collaborating with the client to maximize the effectiveness of a program in a formative evaluation. Using a collaborative evaluation approach motivated the client to reflect on and modify the implementation of her science inquiry-training program. This program aimed to increase the performance of English Language Learners by modeling science inquiry techniques and strategies for classroom teachers at the intermediate level of an elementary school. Through self-reflection, the client was able to identify strategies to improve the program and better align its objectives with her practice and the needs of the participants.

In a 90-minute Roundtable session, the first rotation uses the first 45 minutes and the second rotation uses the last 45 minutes.
Roundtable Rotation I: Evaluator Role in a Stakeholder Driven Community Assessment
Roundtable Presentation 211 to be held in the Marble Boardroom on Thursday, Nov 6, 9:15 AM to 10:45 AM
Sponsored by the Needs Assessment TIG
Jane Yoo,  Results Research,  drjaneyoo@yahoo.com
Kristin Ward,  Casey Family Programs,  kward@casey.org
Diana Ruiz,  Norwood Healthy Start,  druiz3@lausd.k12.ca.us
Abstract: In child welfare and related fields, assessments of children, parents, families or communities are traditionally conducted from the perspective of professionals. Rarely do stakeholders have an opportunity to assess themselves, and then to have this assessment drive service planning. A community-based initiative for child maltreatment prevention called Neighborhood Based Prevention (NBP) approached the assessment process differently. As a stakeholder driven initiative, it made a deliberate attempt to use stakeholder assessments of needs and assets to develop a plan of action. The evaluation team facilitated the community assessment, thereby playing a critical dual role of research support and evaluation. This dual role has implications for evaluation policy and practice, especially for initiatives that are participatory and action oriented but limited in technical expertise. In our Roundtable, a stakeholder representative and the NBP evaluators will partner to discuss the assessment process, outputs and implications for evaluation policy and practice in community-based initiatives.
Roundtable Rotation II: Identifying and Serving the Needs of Evaluators in a Federally Funded Program Context
Roundtable Presentation 211 to be held in the Marble Boardroom on Thursday, Nov 6, 9:15 AM to 10:45 AM
Sponsored by the Needs Assessment TIG
Lori Wingate,  Western Michigan University,  lori.wingate@wmich.edu
Arlen Gullickson,  Western Michigan University,  arlen.gullickson@wmich.edu
Abstract: This roundtable session offers participants an opportunity to review and provide critical feedback on a checklist for assessing the evaluation needs and capacities of projects funded through the National Science Foundation’s Advanced Technological Education (ATE) program. The needs assessment results will guide the development of resources (e.g., instruments, templates, exemplars, how-to guides) keyed to areas of greatest need. These materials will be offered through a Web-based evaluation resource center. The resource center will be oriented to the needs of ATE grantees and evaluators, but may be accessed by the evaluation community at large. Since the checklist’s content will drive the needs assessment and subsequent resource development, it is critical that it be properly focused on important aspects of evaluation and reflect relevant evaluation standards and NSF’s expectations for evaluation. This work has especially important implications for improving evaluation practice, since it is grounded in the work of evaluators in a specific, real-world context.

Session Title: Building Evaluation Capacity Through Appreciative Inquiry and Participatory Methods
Panel Session 212 to be held in Centennial Section A on Thursday, Nov 6, 9:15 AM to 10:45 AM
Sponsored by the Organizational Learning and Evaluation Capacity Building TIG
Hallie Preskill,  Claremont Graduate University,  hpreskill@ca.rr.com
Hallie Preskill,  Claremont Graduate University,  hpreskill@ca.rr.com
Abstract: In many organizations, the task of program evaluation is relegated to internal or external 'experts' who design, implement, and report on the results of evaluations. This often separates evaluation from other program activities and yields reports of little use to program management. Organizations today need increased internal capacity to contract for, use, and participate in evaluations. To respond to the increasing demand for evaluation capacity building, evaluators of the 21st century must be well equipped with tools to help organizations learn about evaluation and develop evaluation cultures. One useful tool is Appreciative Inquiry (AI), an approach to inquiry that engages participants in the systematic study of success as a means of learning through evaluation. This panel will show how the use of AI can help to improve the quality of M&E data, increase evaluation use, build program staff's evaluation competencies, and integrate monitoring and evaluation with program activities.
Using Appreciative Inquiry to Build Evaluation Capacity: Results From Three Case Studies
Shanelle Boyle,  Claremont Graduate University,  shanelle.boyle@gmail.com
In an effort to respond to the increasing demand for evaluation capacity building (ECB), evaluators of the 21st century must be well equipped with tools to help organizations learn about evaluation and develop evaluation cultures. One tool, or ECB strategy, that may be extremely useful to evaluators is Appreciative Inquiry (AI). AI is an approach to inquiry that asks participants to consider peak experiences, successes, and positive outcomes as a means for improving the present while also creating a vision of the future. While AI has enormous potential to help organizations build evaluation capacity, little is known about how to use AI for ECB purposes or about its long-term impacts. To shed light on these issues, this paper will present findings from a research study on the implementation of AI for ECB at three organizations. All attendees will receive a handout summarizing the study.
Using Appreciative Inquiry to Transform Evaluative Thinking and Build Evaluation Capacity: The Albania Experience
Mary Gutmann,  EnCompass LLC,  mgutmann@encompassworld.com
Using Appreciative Inquiry to Build Evaluation Capacity in Evaluation Learning Circles and Other Initiatives
Carolyn Cohen,  Cohen Research and Evaluation,  cohenevaluation@seanet.com
This session describes lessons learned from experiences in integrating elements of AI into small scale projects. One example involves incorporating AI as part of an evaluation capacity-building 'coaching' endeavor. In this case, AI was infused into Evaluation Learning Circles conducted with new program officers in a small philanthropic foundation. In another case, AI was used as part of a one-time intervention at a board of directors' strategic planning retreat. Ms. Cohen will describe the specifics of these activities, and share lessons learned on the varying levels of success of these endeavors.

Session Title: Linear and Logistic Regression for Program Evaluation
Skill-Building Workshop 213 to be held in Centennial Section B on Thursday, Nov 6, 9:15 AM to 10:45 AM
Sponsored by the Quantitative Methods: Theory and Design TIG
Tessa Crume,  Rocky Mountain Center for Health and Education,  tessac@rmc.org
Abstract: Linear and logistic regression are popular techniques for assessing relationships between outcome and predictor variables in program evaluation. Learn how the casual user of statistics can employ these techniques to efficiently assess the impact of variables on an outcome while controlling for the influence of others. Basic rules, steps, and theories of how to build linear and logistic regression models and interpret their results will be reviewed and applied to program evaluation scenarios. A strong background in math is NOT required.
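(Editor's illustrative aside, not part of the workshop materials: the abstract names the technique only, so here is a minimal sketch of what logistic regression does in an evaluation setting. All data, variable names, and settings below are hypothetical; in practice one would use a statistical package rather than hand-rolled gradient descent.)

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def fit_logistic(X, y, lr=0.5, epochs=2000):
    """Fit a logistic regression by batch gradient descent.
    Returns weights [intercept, b1, b2, ...]."""
    n, p = len(X), len(X[0])
    w = [0.0] * (p + 1)
    for _ in range(epochs):
        grad = [0.0] * (p + 1)
        for xi, yi in zip(X, y):
            z = w[0] + sum(wj * xj for wj, xj in zip(w[1:], xi))
            err = sigmoid(z) - yi          # residual on the probability scale
            grad[0] += err
            for j, xj in enumerate(xi):
                grad[j + 1] += err * xj
        w = [wj - lr * g / n for wj, g in zip(w, grad)]
    return w

def predict_prob(w, xi):
    return sigmoid(w[0] + sum(wj * xj for wj, xj in zip(w[1:], xi)))

# Hypothetical data: each row is [received_program, baseline_score];
# y = 1 if the desired outcome was observed.
X_demo = [[1, 0.2], [1, 0.8], [1, 0.5], [1, 0.9],
          [0, 0.3], [0, 0.7], [0, 0.5], [0, 0.1]]
y_demo = [1, 1, 1, 1, 0, 0, 0, 0]

w = fit_logistic(X_demo, y_demo)
```

The coefficient on the program indicator (w[1]) estimates the program's effect on the log-odds of the outcome while holding the baseline covariate fixed, which is the "controlling for the influence of others" the workshop refers to.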

Session Title: Moving Evaluation Policy and Practice Towards Cultural Responsiveness
Multipaper Session 214 to be held in Centennial Section C on Thursday, Nov 6, 9:15 AM to 10:45 AM
Sponsored by the Presidential Strand and the Indigenous Peoples in Evaluation TIG
Katherine Tibbetts,  Kamehameha Schools,  katibbet@ksbe.edu
Joan LaFrance,  Mekinak Consulting,  joanlafrance1@msn.com
Challenges with Government Evaluation Requirements for Evaluators of Indigenous Substance Abuse Prevention Programs
Jane Grover,  RMC Research Corporation,  jgrover@rmccorp.com
Abstract: How can indigenous evaluators implement culturally competent models in AI/AN communities while ensuring that government evaluation requirements are met? By describing the challenges in one tribal community with state funding, this paper will discuss how American Indian/Alaska Native substance abuse prevention programs are evaluating the implementation and outcomes of the Strategic Prevention Framework (SPF). Examples will also be drawn from the work of tribal entities recently funded under the same requirements as states. The SPF is the federal Center for Substance Abuse Prevention’s (CSAP) model for implementing community changes in attitudes, policies, and practices related to substance abuse. Based on the work of community coalitions, the SPF draws upon some of the strengths of indigenous communities. But requirements for epidemiological data are challenging for tribal and urban Indian grantees because these data are often not available or seriously underrepresent AI/AN populations. Requirements for implementing evidence-based programs developed for other populations, and for evaluation data based on quantitative methods, add to the challenge. Throughout the process, much is being learned that will hopefully strengthen AI/AN grantees and increase the cultural competence of government evaluation requirements.
Going Native: An Evolution of Evaluation Policy and Practice Toward Indigenous Methods
Katherine Tibbetts,  Kamehameha Schools,  katibbet@ksbe.edu
Abstract: As a novice program evaluator in the early 1980s, the author was cautioned by her mentor about the pitfalls of “going Native” (becoming too close to the program and staff). At that time, most evaluators operated within the paradigm that an objective Truth would be revealed through the scientific method, assuming the evaluator kept sufficient distance between herself and the object of study. However, based on a review of changes in evaluation standards over the last three decades, position statements from the American Evaluation Association, changes in TIG options within AEA, writings on evaluation theory from an indigenous perspective, and personal experience, the paper reaches the conclusion that “going native” in the ways described in the draft AEA Cultural Competency Statement is essential to high quality evaluation. It further draws connections between this position and indigenous approaches to program evaluation as documented in the evaluation literature.
Cultural Competence Methodology
Holly Echo Hawk,  Walter R McDonald & Associates Inc,  echohawk@pacifier.com
Carolyn Lichtenstein,  Walter R McDonald & Associates Inc,  clichtenstein@wrma.com
Abstract: The federal government has funded child and family system of care transformation beginning with the Child and Adolescent Service System Program (CASSP) initiative in 1984, which later transformed into the federal Comprehensive Community Mental Health Services for Children and Their Families Program. Funded communities are required to participate in a national evaluation study. American Indian and Alaska Native communities have been an important part of the children's mental health transformation movement and have made significant contributions to the practice level interpretation of cultural competence and program evaluation. Tribal communities were an important segment of the funded communities and cultural disconnects were common during the funding partnership. Much learning occurred between both the national evaluation entity and the local tribal communities about cultural competence methodology. The workshop will provide details of how the study methodology adhered to cultural competence principles and resulted in a Tribal partnership that advanced the mutual agendas.

Session Title: What is Systems Thinking? The Ongoing Debate
Panel Session 215 to be held in Centennial Section D on Thursday, Nov 6, 9:15 AM to 10:45 AM
Sponsored by the Systems in Evaluation TIG
Laura Colosi,  Cornell University,  lac19@cornell.edu
Abstract: This panel offers insight into why people in evaluation (and many other fields) are drawn to systems thinking. The reasons for its growing popularity are diverse, but beneath them lies a common appeal: systems thinking offers a model for thinking differently. Despite this allure, there is disagreement about what constitutes systems thinking, and its meaning is ambiguous. A recent article in Evaluation and Program Planning seeks to address and eliminate some of this ambiguity so that the reader may gain more insight into what systems thinking is and, therefore, how to apply its main ideas to a particular field or practical context.
The False Dichotomy: Methodological Pluralism and Universality
Derek Cabrera,  Cornell University,  dac66@cornell.edu
The question ''what is systems thinking?'' cannot be answered by a litany of examples of systems thoughts, methods, methodologies, approaches, theories, ideas, etc. Such a response is analogous to answering the biologist's question ''what is life?'' with a long list of kingdoms, phyla, classes, orders, families, genus and species. Taxonomy of the living does not provide an adequate theory of life. Likewise, taxonomy of systems ideas, even a pluralistic one, does not provide an adequate theory for systems thinking.
Integrating Systems Thinking into Evaluation Practice
Laura Colosi,  Cornell University,  lac19@cornell.edu
This panel addresses the perceived lack of practical tools available for systems thinking as defined by the DSRP model. There are several tools and methods that go beyond a simple set of written methods or proposed methodologies (e.g., SSM, CST). These tools have been assessed both in evaluation studies and in case study research conducted with academic researchers, graduate students in many fields, elementary and secondary school teachers, and parents of children of different ages. The tools presented go well beyond the tools and methods of existing models of systems thinking, with the additional benefit of being universally applicable.
The Big Picture
Derek Cabrera,  Cornell University,  dac66@cornell.edu
Laura Colosi,  Cornell University,  lac19@cornell.edu
This panel explicates the broader implications of the authors' proposed definition of systems thinking and the DSRP model as: 1) a general model of thinking, 2) the missing code necessary for evolutionary epistemology, and 3) a general theory of things. This will, in turn, clear up the misunderstanding that the DSRP model is a set of four elements rather than a formalism for thinking with complex structure and predictive internal dynamics. There are many other uses of the DSRP model, in particular as a universal 'theory of things'. In our paper, we took a perspective on DSRP in which we attempted to explain its utility as a model of thinking systemically in evaluation.

Session Title: How to Obtain Stakeholder Buy-In and Standardize Performance Measurements: Application of Centers for Disease Control and Prevention Framework for Program Evaluation and Evaluation Standards
Demonstration Session 216 to be held in Centennial Section E on Thursday, Nov 6, 9:15 AM to 10:45 AM
Sponsored by the Graduate Student and New Evaluator TIG
Kai Young,  Centers for Disease Control and Prevention,  deq0@cdc.gov
Abstract: This session will provide attendees with step-by-step methods for standardizing performance measurement and developing indicator reports to track progress and prioritize evaluation efforts. The relevance and application of the CDC Framework for Program Evaluation and the Program Evaluation Standards (i.e., utility, feasibility, propriety, and accuracy) in the development and implementation of the performance measurement process will be discussed. We will explore the steps for engaging stakeholders in the process of clarifying objectives to articulate program priorities, developing indicators from objectives, assessing the validity and reliability of the measures, building consensus on the measures, and developing reports for monitoring performance. Finally, we will demonstrate how to integrate performance measurement into routine program or organizational practices, and how such processes can serve to enhance the communication of program priorities and strengthen collaboration among staff and partners. Attendees will learn from real-life examples from the Division of Tuberculosis Elimination, CDC.

Session Title: Examining Published Literature to Better Understand Evaluation Theory and Practice
Multipaper Session 217 to be held in Centennial Section F on Thursday, Nov 6, 9:15 AM to 10:45 AM
Sponsored by the Research on Evaluation TIG and the Theories of Evaluation TIG
Tarek Azzam,  Claremont Graduate University,  tarek.azzam@cgu.edu
What Can Publishing Patterns Tell us About Evaluation Theorists? A Bibliometric Analysis Based on Alkin and Christie's Evaluation Theory Tree
Anne Heberger,  Claremont Graduate University,  aheberger@nas.edu
Abstract: In 2004, Alkin and Christie proposed a new categorization of evaluation theorists in the form of an evaluation theory tree with three main branches: use, methods, and valuing. Theorists were placed on a branch based on the main focus of their work and their perceived relationship to others on the same branch. Using bibliometrics, the study of patterns that arise in the publication and use of documents, this paper takes the references theorists cite in their scholarly writing as its data source. Data for this paper are the peer-reviewed articles written by the theory tree theorists that have been indexed in the academic database Web of Science. The analyses will include 1) a description of the bodies of knowledge these theorists draw upon in their writing and 2) an examination of the interrelationships between theorists on the same branch and across branches. Multidimensional scaling maps will visually illustrate the results.
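(Editor's illustrative aside: one common bibliometric measure of "interrelationship between theorists" is bibliographic coupling, counting the references two authors share. The sketch below, with entirely hypothetical author and reference names, shows the kind of pairwise similarity data that multidimensional scaling would then turn into a map; the paper's actual method may differ.)

```python
def shared_reference_counts(refs_by_author):
    """Bibliographic coupling: for each pair of authors, count the
    references they cite in common. The resulting pairwise similarities
    are the kind of matrix that multidimensional scaling maps in 2-D."""
    authors = sorted(refs_by_author)
    return {
        (a, b): len(set(refs_by_author[a]) & set(refs_by_author[b]))
        for i, a in enumerate(authors)
        for b in authors[i + 1:]
    }

# Hypothetical reference lists (author -> works cited)
demo = {
    "TheoristA": ["Campbell1969", "Weiss1972", "Scriven1991"],
    "TheoristB": ["Weiss1972", "Patton1997", "Scriven1991"],
    "TheoristC": ["Stake1975", "Guba1989"],
}
coupling = shared_reference_counts(demo)
```

Here TheoristA and TheoristB share two references while TheoristC shares none with either, so a scaling map would place A and B near each other and C apart.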
Evaluation Approaches in Practice: A Glimpse at the Past Five Years
Susan Hibbard,  University of South Florida,  hibbard@coedu.usf.edu
John Ferron,  University of South Florida,  ferron@tempest.coedu.usf.edu
Abstract: The lack of empirical research on evaluation has created a demand for research to be conducted on all aspects of evaluation. This paper focuses on evaluation approaches reported in a sample of the literature from 2003-2007. Articles investigated were published in four peer-reviewed journals: American Journal of Evaluation, Educational Evaluation and Policy Analysis, Evaluation, and Evaluation and Program Planning. Initially, articles were coded according to Fitzpatrick, Sanders, and Worthen’s (2004) five evaluation approaches (objectives-oriented, management-oriented, consumer-oriented, expertise-oriented, and participant-oriented). Articles were investigated further for statements of explanation or validation for using the approach identified. Among other elements, the purpose of the evaluation and distinguishing characteristics were also explored. Studying evaluation approaches is imperative for the professionalizing of program evaluation (Stufflebeam & Shinkfield, 2007). The results of this study will reveal some of the approaches evaluators are using in practice. References: Fitzpatrick, J. L., Sanders, J. R., & Worthen, B. R. (2004). Program evaluation: Alternative approaches and practical guidelines. Needham Heights, MA: Allyn and Bacon. Stufflebeam, D. L., & Shinkfield, A. J. (2007). Evaluation theory, models, and applications. San Francisco, CA: John Wiley & Sons, Inc.
Bridging the Gap Between Theory and Practice: A Review of Literature on Practice as a Reflection of Evaluation Theory Utilization
Dreolin Fleischer,  Claremont Graduate University,  dreolin.fleischer@cgu.edu
Jessica Veffer,  Claremont Graduate University,  jessica.veffer@cgu.edu
Abstract: A sample of one hundred and twenty articles published during a three-year span (2004-2006) was reviewed from eight North American peer-reviewed, evaluation-focused journals. Each article was selected for inclusion in our analysis if an evaluation study was described and results were reported. The purpose of this review was to identify common components of evaluation practice across a large set of peer-reviewed studies. These data allow us to examine what practitioners are doing within the field, and to comment on the state of academic evaluation literature as it relates to practice. The results from this review provide empirical support for the commonly held belief that many evaluation theories are largely infeasible in practice, and are therefore under-utilized. Our presentation will delve into the results, as well as the implications for bridging the gap between theory and practice.

Session Title: Evaluation Methodologies Shaping Extension Education
Multipaper Session 218 to be held in Centennial Section G on Thursday, Nov 6, 9:15 AM to 10:45 AM
Sponsored by the Extension Education Evaluation TIG
Nancy Franz,  Virginia Polytechnic Institute and State University,  nfranz@vt.edu
What Lies Beneath: Casting Stakeholders’ Perceptions in Rural Development Projects Evaluation, Through Q Methodology
Virginia Gravina,  University of the Republic,  virginia@fagro.edu.uy
Pedro De Hegedüs,  University of the Republic,  phegedus@adinet.com.uy
Abstract: This paper’s objective is to provide a specific example of an evaluation context using Q methodology as a key tool in development project evaluation. A development project to train family farmers and provide them with production and organizational skills was carried out in the province of San Luis, Argentina. Led by INTA, the National Institute of Agricultural Technology, a government institution, the project operated for 10 years, so a project evaluation was needed for decision making. Q methodology was used to evaluate stakeholders’ perceptions of the project. As a result, five distinct ways in which the beneficiaries perceived the project emerged; all of them were imbued with the project’s key topics, and none focused on a particular one. This reinforced the idea that development projects not only work toward their intended goals, but are also a source of other, unexpected effects that should be evaluated as well.
Evaluation of In-service Training Programs Using Retrospective Methods: Problems and Alternatives in Establishing Accuracy
Koralalage Jayaratne,  North Carolina State University,  jay_jayaratne@ncsu.edu
Lisa Guion,  North Carolina State University,  lisa_guion@ncsu.edu
Abstract: Due to tightening budgets, extension managers are demanding outcome data to prove the effectiveness of in-service training programs. This demand can be met only if programs are evaluated systematically. Therefore, extension specialists ask for easy, valid, and reliable methods to evaluate extension in-service training programs. Although the pre- and post-test evaluation approach is a relatively valid method, trainers are reluctant to use it due to practical challenges in matching pre and post surveys. Some participants are not comfortable revealing their identity and leave pre and post surveys without any identifiers. The retrospective pre and post method is an alternative evaluation approach that addresses this problem. This paper describes how the retrospective method was used to evaluate a state-wide extension in-service training program; presents results; and shares problems and alternatives in establishing accuracy.
Utilizing Social Network Analysis in Extension: Exploring Extension's Reach
Tom Bartholomay,  University of Minnesota Extension,  barth020@umn.edu
Scott Chazdon,  University of Minnesota Extension,  schazdon@umn.edu
Mary Marczak,  University of Minnesota,  marcz001@umn.edu
Abstract: This presentation will describe the University of Minnesota Extension’s evaluation of its outreach networks using social network analysis (SNA). The presentation will include descriptions of the process, the value of SNA mapping and statistical procedures, what was learned from the data, and how the results support other evaluation goals within University of Minnesota Extension. The purpose of this evaluation was to assess the direction and strength of Extension’s networks in Minnesota, to monitor changes in network patterns over time, and to inform strategic decision-making regarding robust and neglected network zones. Findings will be used at the project, program, center, and Extension-wide levels. Although Extension educators deliver their services through a variety of educational programs, they also deliver their research-based information through their social networks and relationships, enabling organizations throughout the state and elsewhere to achieve significantly greater success in their objectives. SNA provides one way to measure these important networks.
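(Editor's illustrative aside: the simplest SNA statistic for gauging a network's "reach" is degree centrality. The sketch below uses entirely hypothetical unit and partner names; a real analysis would use a dedicated SNA package and richer measures than degree alone.)

```python
from collections import defaultdict

def degree_centrality(edges):
    """Normalized degree centrality for an undirected network given as
    (a, b) pairs: the share of all other nodes each node touches directly."""
    adj = defaultdict(set)
    for a, b in edges:
        adj[a].add(b)
        adj[b].add(a)
    n = len(adj)
    return {node: len(nbrs) / (n - 1) for node, nbrs in adj.items()}

# Hypothetical outreach ties: (Extension unit, partner organization)
ties = [("ExtOfficeA", "SchoolDistrict"), ("ExtOfficeA", "CountyHealth"),
        ("ExtOfficeA", "FarmBureau"), ("ExtOfficeB", "FarmBureau")]
centrality = degree_centrality(ties)
```

In this toy network of five nodes, ExtOfficeA connects to three of the four others (centrality 0.75), marking it as the hub; repeated over time, such scores are one way to "monitor changes in network patterns" as the abstract describes.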
Methodological Rigor and its Relationship to Evaluation Use Within Extension
Marc Braverman,  Oregon State University,  marc.braverman@oregonstate.edu
Mary Arnold,  Oregon State University,  mary.arnold@oregonstate.edu
Abstract: This presentation explores the role and influence of methodological rigor in outcome evaluations of Extension programs. Rigor is a characteristic of evaluation quality, consisting of elements such as valid measurement strategies, strong evaluation design, sufficient sample size and power, appropriate data analyses, etc. Skilled evaluators are very aware of its importance but the same is not always true for Extension administrators, decision-makers and other stakeholders, who sometimes head straight for the evaluation’s conclusions without considering the strength of evidence behind those conclusions. The presentation discusses questions about how evaluations may be correctly or incorrectly interpreted: Which organizational relationships and structures within Extension promote the planning of high-quality evaluations? Who within Extension ensures that rigor is appropriately considered in the decision-making process? What are the implications of misinterpreting rigor for the long-term quality of Extension programs? The presentation provides recommendations on the evaluator’s role in ensuring rigor within state Extension services.

Session Title: Logic Models: Better Strategies for Great Results
Skill-Building Workshop 219 to be held in Centennial Section H on Thursday, Nov 6, 9:15 AM to 10:45 AM
Sponsored by the Program Theory and Theory-driven Evaluation TIG
Cynthia Phillips,  Phillips Wyatt Knowlton Inc,  cynthiap@pwkinc.com
Lisa Knowlton,  Phillips Wyatt Knowlton Inc,  lisawk@pwkinc.com
Abstract: This session will present techniques for ensuring quality (actionable, feasible) in logic models. It will focus on three techniques: a mark up, display and archetypes. This session is organized in relatively equal 30-minute segments. It will begin with a briefing, followed by an interactive exercise and end with a discussion. We will explain the mark up technique and provide an opportunity for participants to apply their learning on sample models. This session will connect program strategy, implementation and evaluation for a new look at effectiveness.

Session Title: Innovative Approaches to Practice and Policy Development
Multipaper Session 220 to be held in Mineral Hall Section A on Thursday, Nov 6, 9:15 AM to 10:45 AM
Sponsored by the Social Work TIG
Kimberly Farris,  University of Illinois at Chicago,  kimberlydfarris@gmail.com
Privatization of Behavioral Health Services and Managed Care Policy: Implications For Evaluation Practice
Aisha Williams,  APS Healthcare Inc,  aishad@comcast.net
Nicole Griep,  APS Healthcare Inc,  ngriep@apshealthcare.com
Abstract: This paper seeks to explore the established policies used to govern the administration of mental health services in the state of Georgia, how those policies have shaped the current evaluation tool used to assess behavioral health organizations, and the impact of policy change and its implications for evaluation practice, resulting in a revised evaluation tool. This paper contributes to the body of knowledge in the field of evaluation because it demonstrates how an external review organization uses policy to inform specific evaluation practices and tools.
Bridging the Gap Between Evaluation Policy and Social Work Practice
Derrick Gervin,  Clark Atlanta University,  dgervin@yahoo.com
Sarita Davis,  Georgia State University,  skdavis04@yahoo.com
Abstract: Dialogue among social work professionals regarding the use of evaluation has grown since the Government Performance and Results Act (GPRA) was signed into law in 1993. Although the Council on Social Work Education (CSWE) mandates that evaluation be incorporated into the social work curriculum (CSWE, EPAS, 2001), the profession has not fully embraced evaluation practice (Adam, Zosky & Unrau, 2004). This presentation examines the extent to which CSWE policy has influenced social work education, field experience, and continuing education. The presenters will discuss the efficacy of these efforts and propose strategies that bridge remaining gaps between policy, social work education, and practice. More specifically, the paper intends to examine the CSWE policy mandates and illustrate how evaluation teaching models have relevance in social work education.
Achieving Homeownership for Homeless Families via Asset Development: Does It Work?
Jenny L Jones,  Virginia Commonwealth University,  jljones2@vcu.edu
Geraldine Meeks,  Virginia Commonwealth University,  gs2meeks@vcu.edu
Salathia Johnson,  Virginia Supportive Housing,  sjohnson@vsh.org
Abstract: Home Buy5 is a long-term, comprehensive housing program designed to assist low-income families who are currently living in shelters or transitional housing, or who are at risk of becoming homeless, achieve home ownership. The program consists of a five-phase process that encompasses a series of activities related to home ownership, e.g., financial education, life skills, and assessment of education level. Over a period of up to five years, families learn how to rebuild their credit, manage their monthly finances, eliminate outstanding debt, and build savings. The program is grounded in the belief that families can obtain permanent housing through participation in asset building. A primary assumption of asset building is that saving is an institutional phenomenon, i.e., when access and incentives are right, people---including poor people---are more likely to save (Sherraden, 2007). In an effort to better understand the program processes, and to determine the effectiveness of the program, a summative evaluation was conducted.

Session Title: Quality Still Counts: Evaluation Policy and Practice in Human Services and Health Care
Panel Session 221 to be held in Mineral Hall Section B on Thursday, Nov 6, 9:15 AM to 10:45 AM
Sponsored by the Human Services Evaluation TIG
Tom Lengyel,  American Humane Association,  toml@americanhumane.org
Abstract: Accreditation standards, contract expectations, and foundation grantors have created a context for human services and health care that requires the institutionalization of program evaluation initiatives and ongoing quality maintenance and improvement processes. Although the external environment creates demand for these activities, policies for the practice of evaluation may not be specified. Evaluators representing three organizational perspectives (a community hospital, child and family social services, and a national organization with an evidence-based program model) will illustrate the ways in which participatory program evaluation practice influences evaluation policy changes and leads to quality improvement.
Evaluating Model Fidelity in a National Child Abuse Prevention Organization: Development, Dissemination and Use of a Valid Tool
Margaret Polinsky,  Parents Anonymous Inc,  ppolinsky@parentsanonymous.org
Parents Anonymous® Inc. is an international Network of accredited organizations that implement evidence-based Parents Anonymous® Mutual Support Groups for adults (with co-occurring programs for children and youth), with the goal of addressing risk and protective factors related to the prevention of child maltreatment. The journey toward developing a valid Group Fidelity Tool to evaluate groups' fidelity to the Parents Anonymous® principles and standards for groups has taken more than two years. The Shared Leadership approach included input from Parents Anonymous® staff and Parent Leaders. The Network has been eager to adopt the tool and use it to generate awareness of model fidelity, to guide individual groups, and to inform their overall accredited organizations. This panel presentation will discuss the journey toward instrument development, validation, and adoption of the Group Fidelity Tool by the Network, including efforts to educate the Network about the importance of fidelity measurement.
Shaping Practice and Policy through an Innovative and Automated Use of Patients' Handwritten Comments in a Hospital Setting
Paul Longo,  Independent Consultant,  plongo2@cox.net
Though hospital visits and stays can result in the profitable attainment of desired health-care outcomes, hospitals are potentially dangerous and costly settings. Consequently, they are assessed in highly regulated ways to reduce risks, increase profitability, determine reimbursement rates, compensate management, prevent undesired outcomes, promote best practices, identify improvement opportunities, and so on. Increasingly, survey data related to patients' clinical and non-clinical hospital experiences are being factored into routine hospital assessments, helping stakeholders make better decisions. This presentation explores the manner in which a patient-surveying initiative was introduced to a not-for-profit, faith-based, community hospital in post-Katrina New Orleans. Highlighting the influence of utilization-focused, participatory evaluation practices, the presentation focuses on the ways in which patients' narrative comments are systematically processed and strategically used throughout the hospital. Examples are provided to illustrate how associated expectations and practices have been institutionalized and encoded into hospital policy as part of an integrated, values-driven, patient-centered, cultural-transformation initiative.
Quality Improvement and Evaluation as Catalyst for Organizational Change
Lois Thiessen Love,  Uhlich Children's Advantage Network,  lovel@ucanchicago.org
The complementary missions of quality improvement initiatives and internal program evaluation are: 1) to maximize the quality of program implementation processes in the client's interest; and 2) to maximize the fit of the intervention to client need, for maximum client benefit. However, within human service organizations, these functions often occur within the context of an organizational goal of impressing the external monitoring and funding environment. To successfully balance serving clients' interests and the organization's larger survival needs, quality improvement and evaluation must become catalysts for change within the organization. Changes may occur in multiple areas, including the ways in which services are delivered as well as the voices and nature of information used to influence program decisions. Examples of the change-agent role and implications for evaluation policy for human service organizations will be provided.

Session Title: A Demonstration of Using High Technical Standards in Direct Observation: The Protocol for Assessing GK-12 Graduate Teaching Fellows' Presentation Skills
Demonstration Session 222 to be held in Mineral Hall Section C on Thursday, Nov 6, 9:15 AM to 10:45 AM
Sponsored by the Qualitative Methods TIG
Joyce Kaser,  The Study Group Inc,  jskaser@aol.com
Rick Tankersley,  Florida Institute of Technology,  rtankers@fit.edu
Patti Bourexis,  The Study Group Inc,  studygroup@aol.com
Abstract: This session demonstrates the 'Protocol for Assessing GK-12 Graduate Teaching Fellows' Presentation Skills', which assesses the presentation skills of scientists and other technical professionals while upholding AEA's Guiding Principle on Systematic Inquiry. It addresses three key technical standards for direct observation as a qualitative evaluation method: objectively defining what we are looking for, how we know it when we see it, and how we capture it accurately. The session will demonstrate the protocol, including an introduction to 11 criteria for effective presentations drawn from the literature, a rubric for assessing these criteria, and a process for recording and summarizing performance. Participants will use the protocol in observing and assessing two scientists' presentations, discuss their assessments, and consider how to use the protocol in their own practice. Participants will be invited to use the protocol and provide feedback to the developers.

Session Title: Using Systems Tools to Understand Multi-Site Program Evaluation
Skill-Building Workshop 223 to be held in Mineral Hall Section D on Thursday, Nov 6, 9:15 AM to 10:45 AM
Sponsored by the Cluster, Multi-site and Multi-level Evaluation TIG
Molly Engle,  Oregon State University,  molly.engle@oregonstate.edu
Abstract: Coordinating evaluation efforts of large multi-site programs requires specific skills from evaluators. Connecting multi-site evaluations with overall program objectives can be accomplished with quick diagramming tools that show function, feedback loops, and leverage points for priority decisions. Targeting evaluators who are responsible for evaluating large multi-site programs or evaluators within a specific program of a larger multi-site program, participants will, individually and in small groups, draw a program system and consider its value to program goals and objectives. Drawings will be discussed, the method assessed, and insights summarized. The workshop will also ask, 'What did you learn and how do you intend to use this skill?' along with 'What was the value of this experience to you?' This skill-building workshop integrates the sciences of intentional learning, behavioral change, systems thinking and practice, and assessment as functional systems of evaluation and accountability.

Session Title: Building Evidence, Building Capacity: Compatibilities and Conflicts
Demonstration Session 224 to be held in Mineral Hall Section E on Thursday, Nov 6, 9:15 AM to 10:45 AM
Sponsored by the Non-profit and Foundations Evaluation TIG
Jane Reisman,  Organizational Research Services,  jreisman@organizationalresearch.com
Anne Gienapp,  Organizational Research Services,  agienapp@organizationalresearch.com
Kasey Langley,  Organizational Research Services,  klangley@organizationalresearch.com
Abstract: Across fields of service, evidence-based practices are in the spotlight. Funders that articulate their philosophy in relationship to evidence-based practices are better able to make informed choices about their grant making, as well as grantee evaluation and reporting requirements, and how to support their grantees. Grantees that understand their goals and capacity in relationship to evidence-based practices are better able to pursue funding arrangements that fit their organization and community. Drawing upon their years of work with early learning, family support and child abuse and neglect prevention funders in Hawaii and Washington State, presenters will share examples of different approaches that include supporting community-based programs' development along a continuum of evidence; capacity-building to support community-based organizations' development of evidence-informed practices; and support for implementation of evidence-based practices. Presenters will share a framework that describes approaches to evidence- and capacity-building and discusses implications for grant making and evaluation.

Session Title: Connecting the Dots: Methods and Issues in Assessment of Fidelity of Implementation in Education
Panel Session 225 to be held in Mineral Hall Section F on Thursday, Nov 6, 9:15 AM to 10:45 AM
Sponsored by the Pre-K - 12 Educational Evaluation TIG
Bonnie Walters,  University of Colorado Denver,  bonnie.walters@cudenver.edu
Abstract: Determining fidelity of implementation is an essential component of an evaluation study that hopes to 'connect the dots' between an innovative educational practice and intended outcomes, such as improvement in student achievement, or to compare the effectiveness of an innovation to traditional practices. Panelists will provide insight into current methods of assessing the degree to which innovative instructional strategies are consistently and accurately implemented in classroom practice according to the original design. The merits and challenges of each of the methods will be examined, alternative methods drawn from the literature will be compared, and an interactive discussion concerning the experiences of participants will be facilitated. The purpose of the panel is to advance the evolution and refinement of methods of assessing fidelity of implementation in the field of education.
From the Field: Assessing Teacher Alignment with Literacy Best Practices
Lori Conrad,  Public Education and Business Coalition,  lconrad@pebc.org
In her role as the Senior Director of Education for the Public Education & Business Coalition (PEBC), Conrad has refined a rubric as one method of assessing teachers' alignment with PEBC's best classroom practices in literacy. The rubric has a self-assessment version for use by participating teachers and a version for trained classroom observers. She will share the challenges and strategies inherent in the development of an appropriate tool for assessing fidelity of implementation across grade levels and content areas.
From the Field: Assessing Teacher Use of Inquiry Science Methods
Chris Renda,  University of Colorado Denver,  car@timemarkllc.com
Teachers participating in math and science professional development are assessed to determine fidelity of implementation of inquiry science through interviews and classroom observations. The instruments used to assess fidelity are the Levels of Use interview protocol (Hall & Hord, 2001) and the Reformed Teaching Observation Protocol (Piburn & Sawada, 2000). Benefits and challenges of each instrument will be discussed, and reliability and factor analysis results shared.
From the Field: Assessing Teacher Use of Formative Assessment Practices
Julie O'Brian,  Colorado Consortium for Data-Driven Decisions,  julie@ctlt.org
As Director of the Colorado Consortium for Data-Driven Decisions, O'Brian incorporates several strategies for measuring the fidelity of implementation of what teachers learn in a professional development institute on classroom formative assessment practices. In her presentation, she will share how her team has developed an innovation configuration map detailing the levels of subset skills, knowledge, and practices; and how they use video-taped lessons to analyze levels of implementation.
From an Evaluator's Perspective: Comparison of Methods of Assessing Fidelity in Education
Susan Connors,  University of Colorado Denver,  susan.connors@cudenver.edu
While the importance of evaluators examining fidelity of implementation is evident, there is no established procedure or standard for carrying out this critical evaluator task (Ruiz-Primo, 2006). As a Senior Research Associate, Connors will discuss the relative strengths of and concerns about currently used methods, drawing from the literature and from involvement in the evaluation studies of the other panelists in this session.

Session Title: Strengthening Communities Through the Use of Evaluation: Issues and Perspectives
Multipaper Session 226 to be held in Mineral Hall Section G on Thursday, Nov 6, 9:15 AM to 10:45 AM
Sponsored by the Collaborative, Participatory & Empowerment Evaluation TIG
Elena Polush,  Iowa State University,  elenap@iastate.edu
Participatory Monitoring and Evaluation: Lessons From Anti-Poverty Interventions in Northern Ghana
Jerim Obure,  University of Amsterdam,  jerotus@yahoo.com
Ton Dietz,  University of Amsterdam,  a.j.dietz@uva.nl
Abstract: This presentation is based on ongoing evaluation research conducted in Northern Ghana on the monitoring and evaluation systems used by various development organizations in anti-poverty interventions. The research objective was to determine the extent to which these systems involved other stakeholders in monitoring and evaluation procedures. The study was conducted among projects funded by Cordaid and ICCO, two Dutch international NGOs closely affiliated with the Dutch Ministry of Development Co-operation. Most of the studied organizations showed a hybrid of Participatory Monitoring and Evaluation and Results-Based Management, emphasizing concepts of community participation and logical frameworks. Results showed a high degree of technical interdependence, especially among organizations affiliated with the Association of Church Development Programs (ACDEP), an umbrella organization of church-based NGOs that have been the major players in the region since the 1970s. The findings offer experiences worth sharing with others in the evaluation profession.
Using a Participatory Approach to Understand Youth Homelessness and Create Community Change
Jane Powers,  Cornell University,  jlp5@cornell.edu
Amanda Purington,  Cornell University,  ald17@cornell.edu
Abstract: We describe a project that examined the scope and nature of youth homelessness in an upstate New York community using a participatory approach that engaged a group of formerly homeless youth as research partners to conduct the study. The youth researchers were involved in all aspects of the project, from designing the tools and methodology to recruiting subjects, collecting the data, interpreting the findings, and presenting results to key community stakeholders. We highlight the methods used to engage and sustain youth participation in research, and the value of the approach for advancing knowledge and practice. The multiple benefits of this approach will be discussed as a strategy to promote positive youth development, advance research, impact policy, and improve services for homeless youth.
“We Adults Simply Have to Listen”: An Obesity Prevention Youth Empowerment Action Research Project
Barbara MkNelly,  Network for a Healthy California,  barbara.mknelly@cdph.ca.gov
Kamaljeet Singh Khaira,  Network for a Healthy California,  kamaljeet.singh-khaira@cdph.ca.gov
Andy Fourney,  Network for a Healthy California,  andy.fourney@cdph.ca.gov
Abstract: The potential for participatory action research to build capacity and create social change is well known. However, few examples exist for applying a youth-led approach to nutrition and physical activity promotion. Diverse student teams, together with an adult ally, have undertaken a multi-step inquiry process for creating meaningful nutrition programs in ten low-resource sites throughout California. This California Department of Public Health (CDPH) pilot program was developed with Youth in Focus, a non-profit with extensive youth leadership and participatory research experience. The presentation will 1) highlight critical elements of the pilot’s ten-month training process, 2) provide specific examples of the action research projects and their results, and 3) examine evaluation policy implications and possible opportunities for replicating this approach more broadly in CDPH’s Network for a Healthy California, a large-scale social marketing initiative funded primarily by the U.S. Department of Agriculture’s Food Stamp Nutrition Education Program.
Combining Qualitative and Quantitative Program Evaluation Paradigms to Assess a Community Organizing Effort
Debra Harris,  California State University at Fresno,  dharris@csufresno.edu
Abstract: The evaluation of a community organizing effort in the Central Valley region of California began in the fall of 2005, using both qualitative and quantitative program evaluation principles. The evaluation continued for two years, resulting in a case study. A Health and Nutrition Collaborative Action Team requested the evaluation to help assess their overarching goal of promoting access to healthy foods and physical activity. The Team's specific objective was to increase the number of safe “walk-able” routes to healthy foods in as many communities as possible. First, the evaluation used interviews with Collaborative Action Team members, community leaders, and citizens to provide qualitative data regarding the effectiveness of these community-organizing efforts. Second, the evaluation used a “Walkability Checklist” to assist community leaders and citizens in making sustainable community policy changes. Lessons from this case study will be shared.

In a 90 minute Roundtable session, the first rotation uses the first 45 minutes and the second rotation uses the last 45 minutes.
Roundtable Rotation I: The Creation of a Learning Community in Youth Development
Roundtable Presentation 227 to be held in the Slate Room on Thursday, Nov 6, 9:15 AM to 10:45 AM
Sponsored by the Non-profit and Foundations Evaluation TIG
Wayne Parker,  The Virginia G Piper Charitable Trust,  wparker@pipertrust.org
Abstract: This session describes a foundation's creation of a multiagency learning community to evaluate youth development programs. Included are the core set of measures created, the use of technology to automate data collection, the reports created, and the process used to build the community.
Roundtable Rotation II: Staying Alive: Developing an Evaluation Policy For a Nonprofit, Youth-Serving Organization While Meeting the Needs of the Organization and its Funders
Roundtable Presentation 227 to be held in the Slate Room on Thursday, Nov 6, 9:15 AM to 10:45 AM
Sponsored by the Non-profit and Foundations Evaluation TIG
Charles Giuli,  Pacific Resources for Education and Learning,  giulic@prel.org
Abstract: The Marimed Foundation treats youth in Hawaii who abuse substances, have difficulty controlling and understanding their emotions, and who have trouble adjusting to their schools, communities, and families. Boys and girls, typically between the ages of 15 and 16, who are experiencing adjustment and control difficulties, enter one of Marimed’s programs. The treatment offered is based on a combination of experiential education, with sail training at its center; school-based curricula; individual, family, and group therapy; and residential treatment. The purpose of this session is to describe a long-term effort to develop a standing evaluation of the programs offered by Marimed. The challenges encountered will be presented and discussed along with the lessons learned from this experience.

Roundtable: Evaluating Competency-Based Curricula: Case Study in Undergraduate Medical Education
Roundtable Presentation 228 to be held in the Agate Room Section B on Thursday, Nov 6, 9:15 AM to 10:45 AM
Sponsored by the Assessment in Higher Education TIG
France Gagnon,  University of British Columbia,  fgagnon@medd.med.ubc.ca
Helen Hsu,  University of British Columbia,  hhsu@medd.med.ubc.ca
Chris Lovato,  University of British Columbia,  chris.lovato@ubc.ca
Abstract: Competency frameworks are an important underpinning of medical education curricula. Their intended value lies in their utility for defining the competencies required to practice medicine, the desired learning outcomes of medical education. In this context, one of the challenges faced by evaluators is how to assess the alignment of course content (including objectives) with the overall curriculum and the desired outcomes. What methods should be used to assess alignment? What criteria should be used to measure the extent to which a course is supporting students toward achievement of competencies? How can results be used to facilitate program decision-making? This paper will describe a case study assessing the alignment between a course on social contextual determinants in medical practice and competencies required to practice medicine. Challenges and lessons learned in designing, implementing, and communicating results of this evaluation, and implications for evaluation policy will be discussed.

Session Title: What Does That Mean to You? Using Cognitive Interviewing to Improve Evaluation
Demonstration Session 229 to be held in the Agate Room Section C on Thursday, Nov 6, 9:15 AM to 10:45 AM
Sponsored by the AEA Conference Committee
Stacy Scherr,  Cincinnati Children's Hospital Medical Center,  stacy.scherr@cchmc.org
Abstract: This demonstration provides a theoretical overview and practical explanation of cognitive interviewing and reports the results of three cognitive interviewing projects. Cognitive interviewing is a method of learning how survey respondents comprehend, interpret, and answer survey questions. It sheds light on how respondents understand the subject, uncovers design flaws in questions and questionnaires, and improves the validity and reliability of the data. Attendees will learn about the background and history of cognitive interviewing, different ways to implement the method, and how it compares to other methods of understanding respondent comprehension such as focus groups, expert review, behavior coding, and respondent debriefing. Three very different cognitive interviewing projects and how project results were used to improve survey instruments will be discussed. The demonstration will conclude with a discussion of how attendees can apply cognitive interviewing to improve an evaluation plan.

Session Title: Instruments, Product, and Policy
Multipaper Session 230 to be held in the Granite Room Section A on Thursday, Nov 6, 9:15 AM to 10:45 AM
Sponsored by the Pre-K - 12 Educational Evaluation TIG
Tysza Ghanda,  University of Illinois Urbana-Champaign,  tgandha2@uiuc.edu
Abstract: The intersection of evaluation and policy calls to the fore questions of the effect of evaluations on policy decisions. This multipaper session takes a broad-to-narrow approach to explore how instruments used in evaluation lead to results which, in turn, affect policy decision-making within the context of public education. To examine the use of evaluation in constructing policies for public education, the first paper will provide an overview of the ways in which instruments gather information, the ways in which those methods determine the types of information gathered, and, subsequently, the policies constructed on the basis of that information, such as funding decisions, grants, access, and college affordability. The second and third papers will explore this role of evaluation instruments within specific contexts, namely instruments used in school-based evaluation and in No Child Left Behind to make evaluative judgments about school quality.
Instruments, Product, and Policy
Amarachuku Enyia,  University of Illinois Urbana-Champaign,  aenyia@law.uiuc.edu
To examine the impact of evaluation instruments, this paper will bring attention to the ways in which instruments gather information, the ways in which those methods determine the types of information gathered, and, subsequently, the policies that are constructed based on that information, such as funding decisions, grants, access, and college affordability. To do this, the paper will highlight how instruments from a more Western context are used to measure and examine various phenomena in non-Western contexts, thereby yielding inappropriate findings and subsequent ill-conceived policies.
Rethinking No Child Left Behind (NCLB) Using Critical Race Theory
Jori Hall,  University of Illinois Urbana-Champaign,  jorihall@uiuc.edu
The No Child Left Behind (NCLB) policy makes evaluative judgments of schools in this nation based primarily on standardized tests. As a result, changes are made within schools and subsequent policies are informed by those changes. However, these changes to schools and policies more often than not eschew the social, economic, and political contexts of schools. This traditional cycle of educational reform must be revisited. Accordingly, this paper utilizes critical race theory as a lens to investigate the measures used by NCLB to make evaluative judgments about the quality of schools, as well as the products of those judgments as they relate to educational policy.
Developing Culturally Relevant Instruments
Maurice Samuels,  University of Illinois Urbana-Champaign,  msamuels@uiuc.edu
The author will discuss the importance of developing instruments that are culturally and contextually relevant to the evaluand. Specifically, using findings from an instrumental case study, the author will explore how instruments that considered the cultural context of a school assisted an internal school-based evaluation team in making the 'right decisions' that supported improvements to student learning. The paper will share strategies used by the evaluator to develop the instruments and argue why the evaluation field should consider culture and context in identifying and developing data collection instruments (Frierson, Hood, & Hughes, 2002).

Session Title: The Next Generation Rapid-Learning Health System: Combining Decision Support Tools, Organizational Decision Making, and Dissemination and Implementation Science to Improve Healthcare
Panel Session 231 to be held in the Granite Room Section B on Thursday, Nov 6, 9:15 AM to 10:45 AM
Sponsored by the Health Evaluation TIG
Arne Beck,  Kaiser Permanente,  arne.beck@kp.org
Abstract: In this series of presentations, we provide the conceptual framework for a next generation rapid learning health system, along with examples of specific applications in healthcare program evaluation. Decision support tools are foundational to rapid-learning health systems. The first presentation will describe the use of these tools to conduct healthcare program evaluations in Kaiser Permanente Colorado. The second presentation will address the organizational context which can facilitate or impede the effectiveness of a rapid-learning health system. The third presentation will focus on the application of dissemination and implementation science to spread best practices identified through the rapid-learning evaluation process. The fourth presentation will be a case study of the development, evaluation, and dissemination of a palliative care program in Kaiser Permanente nationally that illustrates the aforementioned next generation rapid-learning health system concepts.
Rapid Learning Health Systems: Decision Support Tools
Arne Beck,  Kaiser Permanente,  arne.beck@kp.org
Kaiser Permanente Colorado is developing a rapid-learning health system in which electronic health records (EHRs) provide real-time opportunities to answer questions such as: What is the evidence base for procedures? What explains variations and increases in health care spending and use? How can the health of minorities and special populations be improved? KP's HealthConnect is the largest civilian EHR system. It links all KP facilities and provides members, physicians, and authorized providers with online access to clinical information 24/7. Additional tools include disease registries, technologies for collecting patient-centered and/or patient-health plan information and supporting patient-provider communication, and cost capture systems. This presentation describes rapid-learning system tools employed at KP Colorado that enable us to evaluate the impact of planned and natural health services experiments and to provide information for decision making with increasing rapidity. Examples illustrating the use of these tools in several healthcare program evaluations will be discussed.
Building a Rapid Learning Health System Culture
R Sam Larson,  Kaiser Permanente,  r.sam.larson@kp.org
Rapid-learning tools need organizational systems and processes that bring this rich information into the decision-making realm of the organization. Like many nonprofits and healthcare organizations, KP Colorado lacks a policy on how, and by whom, decisions are made to stop, continue, select, or scale up healthcare programs and initiatives. Yet KPCO is actively asking questions about the relative improvement in quality, service, and affordability of most of our healthcare programs and seeking out emergent promising practices. This presentation explores how KPCO is intentionally moving from an implicit to an explicit culture of evaluation by identifying and coming alongside several strategic initiatives to conduct formative and summative evaluations and to incorporate findings into a rapid-learning cycle. We also discuss how we document those programs identified as 'best practices'. We conclude with a broad discussion on building an organizational culture supportive of rapid learning and evidence-based decision making.
Spreading Best Practices in a Rapid Learning Health System
James Dearing,  Kaiser Permanente,  james.w.dearing@kp.org
The original rapid-learning health system concept focused primarily on the use of electronic health records (EHRs) to evaluate planned and natural health services experiments in real time. Little attention was given to the actual dissemination and implementation of healthcare programs when the results from these experiments were deemed successful. A critical component of a next generation rapid-learning system should be the application of dissemination and implementation science to understand barriers to the adoption of best practices and to facilitate the rapid, efficient, and effective spread of such practices. This presentation will describe concepts from dissemination and implementation science, illustrate their application in healthcare, and demonstrate how, when paired with rapid-learning system decision support tools, they can be integrated into a more comprehensive and robust rapid-learning system model.
Developing, Evaluating and Disseminating Best Practices in a Rapid Learning Health System: Palliative Care in Kaiser Permanente
Doug Conner,  Kaiser Permanente,  douglas.a.connor@kp.org
Kaiser Permanente recently implemented an inpatient palliative care team in all of its hospitals in the US. This implementation was based on evaluation of a pilot study and a randomized controlled trial (RCT) of the team in three hospitals. The pilot study highlighted the resistance of hospital staff and physicians both to the concept of a multi-disciplinary palliative care team and to the use of an RCT. These findings were used to develop educational strategies to improve acceptance of the larger RCT by hospital staff and physicians. During the RCT, interim analyses were conducted to document patient benefits and cost savings, providing evidence to health plan leaders engaged in budget planning for the subsequent implementation of the team as a continuing consult service. The final evaluation provided evidence that the model followed by one hospital team was most effective, and this model was implemented in the national roll-out to all hospitals.

Session Title: The Advantages of a Mixed Methods Approach
Panel Session 232 to be held in the Granite Room Section C on Thursday, Nov 6, 9:15 AM to 10:45 AM
Sponsored by the AEA Conference Committee
Carrie Markovitz,  Abt Associates Inc,  carrie_markovitz@abtassoc.com
Abstract: Mixed methods research designs provide evaluators with powerful ways of aligning real-world data needs with useful evaluation techniques that can yield valuable results for clients. This panel presentation will illustrate the mixed methods design approaches employed by evaluators on three very different evaluation studies. Each presentation will not only present different mixed methods designs, but also explore how the use of dual methods allowed the evaluators to develop, identify, and/or measure unique outcomes and perspectives.
First Identifying Then Investigating Indigenous Program Impacts Using Mixed Methods
Mary McNabb,  Learning Gauge Inc,  mlmcnabb@msn.com
Dr. Mary McNabb, EdD, will discuss the design of a five-year evaluation study in education which began with an Exploratory Design and ended with an Explanatory Design. The evaluation goal was to discover the variables pertinent to successful implementation of a professional development program. It began with QUAL data collection involving interviews with project staff about those who had participated in their professional development training. Telephone interviews with key informants resulted in the generation of themes and codes leading to the discovery of program variables appropriate for a taxonomy of program impacts. Dr. McNabb then used these variables to design a program impact survey. Later in the evaluation, she used the taxonomy to code and to quantify QUAL answers from open-ended survey questions. At this point in the evaluation, the Explanatory Design was employed to shed light on the survey's other QUAN results. In her presentation, Dr. McNabb will reflect upon the benefits of mixed methods designs in her work with educational programs. She will provide specific examples of how the distinctions between the two designs helped her clarify the purpose, appropriate timing, and weighting of various types of data to answer different types of evaluation questions during the phases of a long-term evaluation.
Improving Lives and Communities: Perspectives on 40 Years of VISTA Service
Carrie Markovitz,  Abt Associates Inc,  carrie_markovitz@abtassoc.com
Dr. Carrie Markovitz will present the findings from a study on the experiences of participants in the AmeriCorps*VISTA program from the program's inception in 1965 through 1993. In addition to participants' experiences, the study focused particular attention on the long-term effects of VISTA on members' civic engagement, education, employment, and the intergenerational transfer of values. Outcomes were assessed using a mixed methods approach: 1) a telephone survey designed to provide information on the breadth of VISTA's effects; and 2) a series of in-depth personal interviews providing highly detailed insights into the experiences of a much smaller sample. Both data collection components were structured to gather feedback from three distinct generations of members defined by major program and policy shifts that have shaped the evolution of VISTA over its 40-year history. To provide a point of reference, the experiences and outcomes of VISTA members were compared to a comparison group of similar individuals who applied for VISTA, completed a portion of the training, but ultimately did not serve. In particular, Dr. Carrie Markovitz will discuss the advantages of using a mixed methods approach for answering the key research questions of the study.
Giving Voice to the Client Perspective in Data Collection and Data Analysis Using Mixed Methods
Carrie Petrucci,  Evaluation, Management and Training Associates Inc,  cpetrucci@emt.org
Dr. Petrucci will discuss how mixed methods were used in data collection and data analysis to incorporate the client perspective within a survey design. A nominal group technique was used in a focus group to identify what residents had learned about HIV transmission and how their behaviors had changed. A qualitative approach was used to brainstorm ideas with clients, followed by a quantitative ranking of all of the gathered responses. The analysis proceeded with an emerging themes latent content analysis by the researcher. These responses resulted in five main themes. It was suggested that from a client perspective, the immediate areas of concern were how to deal with their health situations in the present time, and learning how to deal with their own emotions connected to HIV/AIDS, particularly their fears. Secondary were the basic behavioral changes that must occur to maintain their health. The rankings showed how clients prioritized each of the themes relative to one another, suggesting practice and research applications. These qualitative findings will be discussed within the context of HIV risk behaviors from a longitudinal survey given to the same clients. An additional advantage to focus group interviews is that they can be more amenable to working in multicultural settings, due to the depth of knowledge that can be gathered, the voice that is given to clients, and the trust that can be achieved.

Session Title: Youth Participatory Evaluation: Reviewing the Status of the Field and Exploring Critical Questions for the Future
Panel Session 233 to be held in the Quartz Room Section A on Thursday, Nov 6, 9:15 AM to 10:45 AM
Sponsored by the Collaborative, Participatory & Empowerment Evaluation TIG
Kim Sabo Flores,  Kim Sabo Consulting,  kimsabo@aol.com
Abstract: What is the current status of youth participation in evaluation research, and what policies and practices can help strengthen their engagement? This panel will feature presenters who have been actively strengthening the field of youth participatory evaluation over the last decade. The presenting panelists will draw from their research and their own experience in practice to discuss the current status of youth participatory evaluation since the Wingspread symposium, a meeting that took place in 2002 and brought together youth leaders and adult allies to discuss specific strategies for strengthening youth participation in community research and evaluation. In particular, the panel will discuss the declaration of principles developed at Wingspread, lessons learned over the last five years, and the critical issues related to policy and practice that will be needed to strengthen youth participation in the future.
Youth Led Evaluation as a Sound Educational Strategy
Robert Shumer,  University of Minnesota,  drrdsminn@msn.com
Youth led evaluation has been part of my research/evaluation agenda for the past several years, working in both national and international settings. Approximately half of these projects were conducted through youth organizations and the other half were part of educational efforts in school. One of my goals as someone who believes evaluation is a sound educational strategy is to use evaluation practice as an integral part of education in schools. In this role youth-led evaluation can be both a service and a learning experience. In doing this work, several important issues have arisen that require discussion. Those items that are most important include: how much and how often must training occur to produce quality work; how do youth deal with moral and ethical issues of evaluation, especially participatory work that places them in privileged positions within their school or community; and, as a school-based activity, how do we ensure that the academic work translates into credits in traditional academic subjects?
From Tokenism to Action: Youth Participatory Evaluation and Research in Town Planning
Louise Chawla,  University of Colorado,  louise.chawla@colorado.edu
When youth participate in city and town planning, they enter an arena of fiercely contested spaces where a partnership with adults is necessary for any significant change to be achieved. All too easily, youth participation can be tokenistic. Based on the experience of the Growing Up in Cities program of UNESCO, a series of considerations are presented to ensure that projects function in a context where city officials will act on ideas that youth generate for the improvement of their communities. These considerations serve as a type of checklist to evaluate the likelihood of successful action when a project is being designed. One of the spurs to action can be advance notice to city decision-makers that there will be follow-up evaluations of how effectively they respond to youth ideas. This presentation examines how youth and adult partners can collaborate in these processes of project design and follow-up evaluations.
Youth Participatory Evaluation: Lessons Learned, Challenges, and Future Issues for Policy and Practice
Katie Richards-Schuster,  University of Michigan,  kers@umich.edu
What do young people need to strengthen participation in evaluation? What are the specific skills needed? What are the factors that support participation? What are the challenges? Over the last five years, we have conducted education and training workshops to prepare young people to engage in participatory evaluation. We will draw on findings from a training program on participatory evaluation to discuss lessons learned, facilitating and limiting factors, and future issues for policy and practice.
From The Margins to Center: Exploring Policies and Practices that Can Support Authentic Youth Engagement in Evaluation
Kim Sabo Flores,  Kim Sabo Consulting,  kimsabo@aol.com
Over the past two decades, I have experimented with playful strategies and creative methods that seek to authentically engage young people in evaluations of the programs that are meant to serve them. What I have discovered through this work is that youth participatory evaluation has the potential to support youth development, organizational development, community development, and methodological development. However, youth participatory evaluation, while a growing field, still remains on the margins. During this presentation and discussion, I would like to explore the question, along with my fellow panelists and the audience: how do both evaluation practices and policies need to be revised and/or modified to be supportive of the inclusion of both children and youth? Throughout the dialogue I will draw on specific examples of both the challenges and potentials of this type of work.
Core Concepts and Guiding Principles of Youth Participatory Evaluation
Barry Checkoway,  University of Michigan,  barrych@umich.edu
Youth participatory evaluation is emerging as a field of practice and, as it does, its scope and quality will be strengthened by more research on its models and methods, objectives and outcomes, facilitating and limiting factors, empirical case studies, and lessons learned from practice. The emergence of a field of practice is advanced by articulation of core concepts and guiding principles, and this effort is newly underway. The Wingspread Conference featured development of the first such statement of principles of which we are aware, and they will be discussed and debated in this session.

In a 90 minute Roundtable session, the first rotation uses the first 45 minutes and the second rotation uses the last 45 minutes.
Roundtable Rotation I: Measuring Overall Quality of Life
Roundtable Presentation 234 to be held in the Quartz Room Section B on Thursday, Nov 6, 9:15 AM to 10:45 AM
Sponsored by the Quantitative Methods: Theory and Design TIG and the International and Cross-cultural Evaluation TIG
Patricia Herman,  University of Arizona,  pherman@email.arizona.edu
Abstract: Which is more valuable to society as a whole: improved education or reduced crime? Where do we best spend our limited resources: cancer cures or better housing? As a society we make these decisions every day informally through public opinion or consumer demand, or more formally through our votes and elected officials. The trade-offs implicit in these decisions are never easy to make, but they would be more tractable if a measure of the impact of each option was available in a common metric, ideally, some unit of life quality. The search for this common metric has been carried out by a number of disciplines (e.g., economics, psychology, medicine, sociology, philosophy, ethics), and a number of quality-of-life-like measures are presently in use (e.g., gross domestic product, infant mortality, quality-adjusted life-years). This roundtable will review existing quality-of-life measures, discuss the validity of each, and present next steps for an optimal measure.
Roundtable Rotation II: Citizen Engagement Into Community Quality of Life Evaluation
Roundtable Presentation 234 to be held in the Quartz Room Section B on Thursday, Nov 6, 9:15 AM to 10:45 AM
Sponsored by the Quantitative Methods: Theory and Design TIG and the International and Cross-cultural Evaluation TIG
Irina Makeeva,  Inter-Regional Public Foundation,  irina@cip.nsk.su
Abstract: The presentation will describe the experience of involving citizens of small municipalities in developing community quality of life indicators and in evaluating the process and results of community governance. It is based on the Siberian Center’s pilot project “Communities Of, By and For People,” implemented in 6 pilot municipalities (small towns and rural areas) in Siberia in 2006-2008. Each community team included representatives from local government, deputies, mass media, and local activists. The initiative was a joint project of the Siberian Center (Novosibirsk, Russia) and the Epstein&Fass consulting company (New York, USA), which had developed and applied the Effective Community Governance Model.

Session Title: Evaluation Practice And Policy in Evidence Based Programming for Older Adults in Colorado: Experience of the Consortium for Older Adult Wellness (COAW)
Panel Session 235 to be held in Room 102 in the Convention Center on Thursday, Nov 6, 9:15 AM to 10:45 AM
Sponsored by the Health Evaluation TIG
Sharry Erzinger,  University of Colorado Denver,  sharry.erzinger@cudenver.edu
Abstract: This panel offers the uncommon experience of reflecting on patterns of the past in order to develop implications for the future, using the example of the network for establishing evidence-based programs for older adults throughout Colorado. The Consortium for Older Adult Wellness (COAW) has a community-based network of service providers who deliver the evidence-based programs outside of the traditional medical system and participate in the evaluation of their efforts. Policy implications of their efforts are described in this panel of papers.
Case History of How Consortium for Older Adult Wellness (COAW) Developed a Statewide Community Based Delivery Network to Serve Wellness Needs of Older Adults: The Role of Participatory Evaluation
Chris Katzenmeyer,  Consortium for Older Adult Wellness,  ckatz12@msn.com
The Consortium for Older Adult Wellness (COAW) has established a network of community agencies that provides community-based wellness services to older adults throughout the state of Colorado. Evaluation has enabled COAW's rapid expansion over a relatively short period of time. Participatory evaluation has been integral to each successive stage of expansion. The participation of those implementing the program at the most local level throughout the state has contributed to establishing priorities, streamlining programming, and sharing of best practices in recruiting participants. Simultaneously, at regional and national levels, the need for policy that not only encourages but also pays for wellness services for older adults has created a receptive policy framework within which programs such as COAW can expand. The experience of COAW highlights the importance of maintaining an evaluation system for evidence-based programs to broadly disseminate services to populations who need them.
Creating and Maintaining Local Support of Evaluation in the Consortium for Older Adult Wellness (COAW)
Sharry Erzinger,  University of Colorado Denver,  sharry.erzinger@cudenver.edu
Jean Scandlyn,  University of Colorado Denver,  jean.scandlyn@cudenver.edu
Alisa Velonis,  Tacoma-Pierce County Health Department, 
At times the requirements and format of evaluation tools for evidence-based programs conflict with the community-based participatory nature of optimal programming for the target population of older adults, who have distinct opinions and preferences. Evaluation of the evidence-based programs delivered by COAW and partners requires operationalizing a value of participation at the most local level, receptive to and respectful of alternative opinions while simultaneously adhering to the requirements for fidelity to the evidence-based program. Pre-testing materials, designing user-friendly formats, and providing adequate tutoring to use the evaluation tools enhance the understanding of and support for evaluation at the most local level. Open, two-way communication remains the most effective means by which local implementers of the program provide meaningful evaluative comments on adjustments that will benefit the overall design of the evaluation system.
Northwest Colorado Visiting Nurse Association in Steamboat Springs: the Aging Well Program as Partner with Consortium of Older Adult Wellness (COAW)
Donna Hackley,  Northwest Visiting Nurse Association,  dhackley@nwcovna.org
The Aging Well Program expands the evidence based programs of COAW through combining best practices with innovative strategies and community participation to measurably improve the health and quality of life of older adults. The program strives to keep older adults healthy, safe and independent at home in their rural communities and uses specific measures directly related to the evaluation of evidence-based programs including Healthier Living (COAW's program). Focused on prevention and wellness, Aging Well does not rely on traditional medical models of intervention that focus on frailty/disability and crisis management. Seeking to fundamentally re-frame the issue of health and healthcare, Aging Well seeks to develop an economically efficient and replicable system for effective delivery of social and health services to aging adults. Aging Well expands beyond the evaluation of COAW programs to develop an on-going monitoring strategy and evaluation framework that measures outcomes of its programs to reduce social isolation through community-based programs.

Session Title: Evaluating Health Needs and Services for Special Populations
Multipaper Session 236 to be held in Room 104 in the Convention Center on Thursday, Nov 6, 9:15 AM to 10:45 AM
Sponsored by the Special Needs Populations TIG
Jennifer Nolan,  Indiana University Northwest,  jnolan@ium.edu
Family Centered Health Care: Evaluation to Engage Differing Perspectives
Debra Wagler,  Nevada State Health Division,  dwagler@dwcommunicate.com
Alyssa Rye,  University of Nevada Reno,  rye@unr.edu
Abstract: In Nevada, 41% of families of children and youth with special health care needs report that they do not receive family-centered care or a medical home (National Study 2006). In a 2006 survey, 49% of primary care providers in Nevada reported they deliver family-centered care/medical homes. While these statistics accurately describe the status of family-centered care, they do little to improve our understanding of the barriers families face in experiencing family-centered care. Even though the status of family-centered care reported from the family perspective and the physician perspective is similar, the recommendations for change differ greatly. Evaluation serves as a tool to engage parents and providers in developing guidelines for improving the delivery of family-centered care. A family-centered approach recognizes parent expertise as integral; the evaluation will therefore involve families, physicians’ associations, and providers in the process to reach more practical and useful conclusions. The evaluation developed strategies to strengthen the physician-patient relationship.
A Meta-Analytic Review on Stability of Early Antisocial Behaviors
Xinsheng Cai,  American Institutes for Research,  ccai@air.org
Abstract: A meta-analysis was conducted to examine the magnitude of stability of antisocial behaviors with onset before age 6 and the variables affecting the stability effect sizes. Over 70 empirical research reports met inclusion criteria. Stability was coded as correlational effect sizes for the relationship between antisocial behaviors at Time 1 and Time 2. Results showed great variability in the weighted mean stability effect sizes. The effects of informants and subtypes of antisocial behaviors on the stability of antisocial behaviors were investigated. Demographic variables, such as socioeconomic status and race, were found to have differential effects on boys' and girls' antisocial behaviors over time. The findings suggest that antisocial behaviors in young children are not as stable as those in school-age children and that information on antisocial behaviors in early childhood alone is insufficient to predict later antisocial behaviors accurately.
Using Mixed Methods and a Flexible Design to Understand the Impact of the Medicare Prescription Drug Benefit on Low-Income Seniors
Susan Hewitt,  Health District of Northern Larimer County,  shewitt@healthdistrict.org
Stacy Page,  Health District of Northern Larimer County,  spage@healthdistrict.org
Deborah Delay,  Health District of Northern Larimer County,  ddelay@healthdistrict.org
Abstract: Since 1996, The Health District of Northern Larimer County (Colorado) has provided prescription assistance to help low-income individuals afford needed medications. With the enactment of the Medicare Prescription Drug Benefit (Part D) in 2006, clients on Medicare were no longer eligible for prescription assistance. There was concern that some clients who transitioned to Medicare Part D would experience financial and other barriers that could limit their ability to take medications as prescribed. In 2006, a mixed-method study of clients who enrolled in Part D was undertaken to quantify changes in medication expenses and medication-taking behavior. As the complexity of the transition unfolded, the second year of the study used a mostly qualitative design to explore the “hits” and “misses” of this new policy on the health and welfare of low income seniors. The evaluation findings provided the foundation for changes to program eligibility protocols and informed policy advocacy.

Session Title: Evaluation Methods for Climate Mitigation Projects
Panel Session 237 to be held in Room 106 in the Convention Center on Thursday, Nov 6, 9:15 AM to 10:45 AM
Sponsored by the Environmental Program Evaluation TIG
Betty Seto,  KEMA Inc,  betty.seto@us.kema.com
Abstract: Climate change is among the most important and publicly visible environmental problems we face. This panel will explore current climate policy in the U.S. and internationally, and the role of evaluation related to diverse emission reduction strategies.
Overview of Climate Change Policy and the Importance of Evaluation
Betty Seto,  KEMA Inc,  betty.seto@us.kema.com
This presentation will set the stage for the panel with an overview of the sources of greenhouse gas emissions, common emissions reductions strategies and the role of evaluation in these strategies.
Energy Efficiency Project Evaluation
Tim Drew,  California Public Utilities Commission,  zap@cpuc.ca.gov
Energy efficiency projects and programs form an important approach to greenhouse gas (GHG) emission reductions, both domestically and internationally (under the Kyoto Protocol). The energy efficiency program evaluation industry is well established in the US. This presentation will compare California evaluation protocols to the verification procedures for certified emissions reductions (CERs) under the Kyoto Protocol and other carbon offset standards.
Renewable Energy Project Evaluation
Bobbi Tannenbaum,  KEMA Inc,  bobbi.tannenbaum@us.kema.com
The installation of renewable energy systems is another approach to reducing greenhouse gas emissions caused by energy production. KEMA is currently conducting evaluations of three statewide renewable energy programs (addressing technologies ranging from residential solar electric (PV) systems to large fuel cells and anaerobic digesters). The extent to which, and the ways in which, these renewable energy programs address both carbon emissions and evaluation vary. This presentation will compare the states on these issues.
Carbon Offset and the Need for Program Evaluation
Steve Schiller,  Schiller Consulting,  steve@schiller.com
Due to the lack of federal legislation on climate change, many states and local governments have started voluntary initiatives to reduce emissions in their communities. Private companies have purchased carbon offsets (or offered them to their customers) to mitigate their products' carbon footprint. This market for carbon offsets in the U.S. is voluntary. Competing carbon offset providers use different verification procedures to assess the quality and quantity of their offsets. These programs, both local climate initiatives and voluntary offset programs, are in need of evaluation to assess their effectiveness and impacts.

Session Title: Evaluating the Paris Declaration: A Joint Cross National Evaluation
Panel Session 238 to be held in Room 108 in the Convention Center on Thursday, Nov 6, 9:15 AM to 10:45 AM
Sponsored by the International and Cross-cultural Evaluation TIG
Niels Dabelstein,  Danish Institute for International Studies,  nda@diis.dk
Abstract: The Paris Declaration, endorsed in March 2005, is an international agreement signed by over one hundred Ministers, Heads of Agencies and other Senior Officials. The Paris Declaration lays down an action-orientated roadmap intended to improve the quality of aid and its impact on development. An independent cross-country evaluation of the Paris Declaration, commissioned and overseen by an international Reference Group, was initiated in 2007. The first phase of the evaluation consists of 20 separate but coordinated evaluations in Australia, Bangladesh, Bolivia, Denmark, Finland, France, Germany, Luxembourg, Mali, Netherlands, New Zealand, Philippines, Senegal, South Africa, Sri Lanka, Uganda, UK, Vietnam, AsDB and UNDG. A synthesis of these evaluations was completed in June 2008. This is one of the largest joint evaluations undertaken to date, applying a unique decentralized approach. The panel will present and discuss the organizational and methodological lessons learned by different stakeholders in the evaluation.
Organizing the Evaluation of the Paris Declaration
Niels Dabelstein,  Danish Institute for International Studies,  nda@diis.dk
This first presentation will describe how the evaluation was organized and designed to ensure stakeholder ownership, and discuss its strengths and weaknesses. Niels Dabelstein is co-chair of the Reference Group for the Evaluation and coordinated the overall evaluation.
Evaluating the Paris Declaration: A Developing Country Perspective
Velayuthan Sivagnanasothy,  Ministry of Plan Implementation,  sivagnanasothy@hotmail.com
This presentation will describe and discuss the conduct of the country evaluation in Sri Lanka. Focus will be on organizational issues. Mr. Sivagnanasothy is co-chair of the Reference Group for the Evaluation, a member of the Management Group, and was responsible for the evaluation in Sri Lanka.
Evaluating the Paris Declaration: A Donor Country Perspective
Ted Kliest,  Netherlands Ministry of Foreign Affairs,  tj.kliest@minbuza.nl
This presentation will describe and discuss the conduct of the donor country evaluation in the Netherlands. Focus will be on organizational issues. Mr. Kliest is a member of the Reference and Management Groups for the Evaluation and was responsible for the conduct of the Evaluation in the Netherlands.
Evaluating the Paris Declaration: Synthesizing 20 Component Studies
Bernard Wood,  International Development and Strategies,  bwood@magma.ca
This presentation will discuss the methodological challenges of synthesizing 20 diverse evaluations. Mr. Wood is an independent evaluator leading the team that produced the overall evaluation Synthesis report.

Session Title: The Impact of Networked Contexts on Federal-Level Performance Measurement: Results From a Multiple Case Study of Four Centers for Disease Control and Prevention Funded Programs
Panel Session 239 to be held in Room 110 in the Convention Center on Thursday, Nov 6, 9:15 AM to 10:45 AM
Sponsored by the Government Evaluation TIG and the Health Evaluation TIG
Amy DeGroff,  Centers for Disease Control and Prevention,  adegroff@cdc.gov
Abstract: In 2007-2008, a multiple case study was conducted to explore the challenges of developing national-level performance measurement systems for public health programs implemented through decentralized, networked structures. Four Centers for Disease Control and Prevention-funded programs were included: the National Sexually Transmitted Disease Program, the National Breast and Cervical Cancer Early Detection Program, the National Tobacco Control Program, and the Public Health Emergency Preparedness Program. Case study methods included document review, individual interviews, a focus group, and participant observations. The study examined the relationship between the networked structure and the resulting performance measurement system as well as issues of accountability and use. In-case and cross-case analysis were conducted. For this panel, individuals representing three of the four cases will discuss their case-specific results and their own experiences developing and implementing their program's performance measurement system. The study's key investigator will present cross-case findings and make recommendations for future practice.
Four National-Level Performance Measurement Systems: Cross-Case Findings
Amy DeGroff,  Centers for Disease Control and Prevention,  adegroff@cdc.gov
Although a significant literature exists addressing the virtues and challenges of performance measurement in the public sector, little empirical research has been conducted studying national-level performance measurement systems aimed at monitoring large-scale, decentralized programs. In 2007-2008, a multiple case study was conducted to explore the challenges of developing national-level performance measurement systems for public health programs implemented through decentralized, networked structures. Each of the four programs included in the study is implemented through a vast network of state and local partners across the country. Given extensive variability in program models and implementation, the decentralized structure inherently challenges efforts to identify common measures, collect and report valid data, and ensure data use. In addition, accountability is often fractured as multiple actors at various levels contribute toward the achievement of program goals and outcomes. In this presentation, the study's key investigator will describe the research methodology and present cross-case findings.
Developing and Implementing a National Measurement System for Public Health Emergency Preparedness: Are We Ready Yet?
Anita McLees,  Centers for Disease Control and Prevention,  zdu5@cdc.gov
Craig W Thomas,  Centers for Disease Control and Prevention,  cwthomas@cdc.gov
Karen Mumford,  Centers for Disease Control and Prevention,  kmumford@cdc.gov
Since 2002, CDC's Division of State and Local Readiness (DSLR) has annually awarded approximately $1 billion to 62 states, territories, and localities through the Public Health Emergency Preparedness (PHEP) Cooperative Agreement. In order to address questions about the extent to which this investment has improved our nation's preparedness for and response to public health emergencies, DSLR is working in collaboration with numerous stakeholders to develop and implement a performance measurement system to address program improvement and accountability. This presentation will highlight the evolution of CDC's approach to measuring PHEP grantees' progress as well as challenges and accomplishments to date, including the unique challenges of developing and implementing a national-level performance measurement system for programs implemented through decentralized, networked structures.
Developing and Implementing Performance Measures for 65 Diverse Sexually Transmitted Disease (STD) Programs
Betty Apt,  Centers for Disease Control and Prevention,  bapt@cdc.gov
Dayne Collins,  Centers for Disease Control and Prevention,  dcollins@cdc.gov
In 2004, CDC's Division of STD Prevention (DSTDP) implemented performance measures in the 65 project areas it funds. This presentation will describe the methods DSTDP used to address the challenges associated with developing and implementing national measures for project areas that have diverse at-risk populations and disease burdens. For example, some project areas have high rates of syphilis morbidity in men who have sex with men, while other project areas have little syphilis but a high prevalence of chlamydia in young women. Some project areas are rural states and others are urban cities. The presentation will cover methods DSTDP used, such as workgroups with project area representatives and pilot testing, to engage the many stakeholders, develop measures that project areas could accept, introduce the measures as a program requirement, and then help project areas use the measures to improve their STD programs.
Core Outcome Indicator Measurement Development for the National Tobacco Control Program
Sheila Porter,  Centers for Disease Control and Prevention,  sporter@cdc.gov
Todd Rogers,  Public Health Institute,  txrogers@pacbell.net
Natasha Jamison,  Centers for Disease Control and Prevention,  njamison@cdc.gov
Martha Engstrom,  Centers for Disease Control and Prevention,  mengstrom@cdc.gov
The Centers for Disease Control and Prevention's Office on Smoking and Health (OSH) provides funding and leadership for the National Tobacco Control Program (NTCP), involving comprehensive tobacco control and prevention activities in 50 states, tribes, U.S. territories, and the District of Columbia. NTCP's goals are to prevent initiation of tobacco use among young people, eliminate nonsmokers' exposure to secondhand smoke, promote quitting among adults and young people, and eliminate tobacco-related disparities. To monitor and evaluate NTCP efforts and outcomes, OSH has developed a set of core outcome indicators. Although data for many indicators can be obtained from existing population-based surveys of adults and youth, several core indicators must be measured through data collection at the community level (e.g., tobacco industry practices) or by accessing archival records (e.g., tobacco control policy tracking). This paper will present a summary of OSH efforts to develop and implement standardized methods to measure core indicators.

Session Title: Moving Beyond Bibliometric Analysis: Emerging Evaluation Approaches at the National Institute of Environmental Health Sciences
Panel Session 240 to be held in Room 112 in the Convention Center on Thursday, Nov 6, 9:15 AM to 10:45 AM
Sponsored by the Research, Technology, and Development Evaluation TIG
Edward Liebow,  Battelle,  liebowe@battelle.org
Christie Drew,  National Institute of Environmental Health Sciences,  drewc@niehs.nih.gov
Howard Fishbein,  Battelle,  fishbeinh@battelle.org
Abstract: Evaluations of research programs have traditionally assessed contributions to knowledge through citation analysis. The objective of many agencies that fund biomedical research is to improve health. Conventional bibliometric analyses are insufficient for measuring the many impacts not captured in peer reviewed publications. The National Institute of Environmental Health Sciences Division of Extramural Research and Training is interested in developing approaches to identify and quantify the broader impacts of its research portfolio, going beyond the publications its grantees produce. After presenting an overview of the Division's evaluation approach, the panel discusses the development of a conceptual model of comprehensive research metrics, applies this model to the agency's asthma research portfolio, and, after noting gaps and limitations of data from existing sources, describes a primary data collection effort (survey and key informant interviews) designed to fill significant gaps. Feedback from panelists and audience members about implications for research impact assessment will be encouraged.
Evaluation of Research Impacts: The National Institute of Environmental Health Sciences (NIEHS) Research Portfolio
Jerry Phelps,  National Institute of Environmental Health Sciences,  phelps@niehs.nih.gov
The NIEHS mission is to reduce the burden of human illness and disability by understanding how the environment influences the development and progression of human disease. The Institute's extramural research portfolio supports basic research, clinical research, technology development, and training activities associated with the health effects of air and water pollution as well as occupational exposures to contaminants. Mr. Phelps, who manages the evaluation of portions of the Extramural Research Portfolio as the Project Officer on the Battelle contract, will provide an overview of the research portfolio and how plans for future research support are developed. He will then describe the Institute's need for analytical tools that will track program inputs, outputs, and outcomes (especially outcomes that include more than publications) and support the Institute's efforts to understand how NIEHS research is making a difference in the basic science, regulatory, and public health arenas.
Conceptual Model of Comprehensive Research Metrics for Improved Human Health and Environment
Jill Engel Cox,  Battelle,  engelcoxj@battelle.org
Evaluation of scientific research programs has mainly relied on near-term outputs measured through bibliometrics. Examining long-term outcomes for research programs has been particularly challenging, in part due to the serendipitous accumulation of scientific evidence, and in part because spatially and temporally distant outcomes are only indirectly linked to specific research findings. Dr. Engel-Cox, a senior environmental scientist at Battelle, summarizes recent efforts to build a logic model and associated metrics to measure the contribution of environmental health research programs to improvements in human health, the environment, and the economy. Expert input and a literature review were used to define the components and linkages between extramural environmental health research grant programs and the outputs and outcomes related to health and social welfare, environmental quality and sustainability, economics, and quality of life. The model delineates pathways for contributions by five types of institutional partners in the research process: NIEHS, other government agencies, grantee institutions, business and industry, and community partners. Dr. Engel-Cox briefly discusses two examples and the strengths and limits of outcome-based evaluation of research programs.
Scientific and Public Health Impacts of the NIEHS Extramural Asthma Research Program from Existing Data
Shyanika Rose,  Battelle,  rosesw@battelle.org
Through support for asthma research, NIEHS aims to reduce morbidity, mortality, and other public health effects of asthma. Ms. Rose, a senior scientist and program evaluation specialist at Battelle, will describe a study to assess impacts of the NIEHS asthma research portfolio by characterizing publications resulting from these awards, as well as applications of research in clinical practice, interventions, education, and technology developments. Expert panel review, logic model development, and secondary data analyses shaped this assessment. Findings showed that the largest share of NIEHS-funded asthma research is basic scientific research, yet results are published in clinical investigation journals, increasing the likelihood of translation into practice. NIEHS provides support for genetic ontology research and health education curricula. NIEHS-funded investigators are regularly appointed to policy and legislative advisory posts. Recommendations included closing gaps in documentation of research outputs and outcomes, and employing primary data collection to better characterize uses of scientific knowledge.
Scientific and Public Health Impacts of the NIEHS Extramural Asthma Research Program From New Primary Data
Carlyn Orians,  Battelle,  orians@battelle.org
The purpose of this study is to amplify and extend the assessment of the NIEHS asthma research portfolio using primary data collection to fill in gaps in current documentation. Ms. Orians, a senior scientist and program evaluation specialist at Battelle, describes the study, which began with formative key-informant interviews to help develop a survey targeting the universe of NIEHS asthma researchers and asthma researchers supported by other federal agencies. The survey covered such topic areas as types of research support, dissemination mechanisms, and the development of public health interventions, patents and new drug applications, and commercial products. The survey also covered respondents' views on the impact of their research on community outreach and public awareness, new scientific discoveries, laws, regulations, standards, and clinical care guidelines. The program logic model also led us to ask respondents to characterize observed linkages between their work and changes in environmental quality and exposure to environmental triggers that reduce the prevalence of asthma. As a matter of policy and practice, NIEHS is weighing the time and resource requirements of conducting such a survey against the Institute's increased ability, relative to non-survey methods, to answer questions about how its research is making a difference in the regulatory and public health arenas.

Session Title: Hierarchical Linear Modeling/Multi-Level Analysis
Multipaper Session 241 to be held in Room 103 in the Convention Center on Thursday, Nov 6, 9:15 AM to 10:45 AM
Sponsored by the Quantitative Methods: Theory and Design TIG
Patrick McKnight,  George Mason University,  pmcknigh@gmu.edu
The Use of Interrupted Time Series Designs for Program and Policy Evaluation
Joseph Stevens,  University of Oregon,  stevensj@uoregon.edu
Keith Zvoch,  University of Oregon,  kzvoch@uoregon.edu
Drew Braun,  Bethel School District,  dbraun@bethel.k12.or.us
Abstract: The proposed paper describes the use of interrupted time series (ITS) designs for program evaluation. Using multilevel, longitudinal models (HLM), we evaluate whether instructional interventions, student characteristics, and district policies affect either the level or growth of student reading fluency. ITS designs are often recommended in quasi-experimental situations (Shadish, Cook, & Campbell, 2002) as an alternative method for providing control over threats to internal validity. Multilevel, longitudinal modeling provides a flexible and powerful means of analyzing these designs. Another advantage of this analytic approach is that the intervention need not occur at the same time for each student. By using individual growth trajectories, the effects of intervention can be modeled at the time of occurrence. The paper will demonstrate the use and application of HLM for the ITS design and provide examples of the estimated relationships of intervention, student characteristics, and district policies with level and growth in reading fluency.
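The individual-timing feature of this design can be illustrated with a small simulation. This is a minimal sketch, not the authors' model or data: all variable names, sample sizes, and effect sizes below are invented. A mixed-effects model includes a treatment indicator that switches on at each student's own intervention point, so the intervention need not start at the same time for everyone.

```python
# Illustrative sketch (simulated data): an interrupted time series with
# student-specific intervention timing, fit as a multilevel growth model.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
rows = []
for sid in range(60):                       # 60 simulated students
    start = rng.integers(3, 8)              # intervention begins at a student-specific occasion
    base = rng.normal(40, 8)                # student-specific starting level
    growth = rng.normal(1.5, 0.4)           # student-specific growth rate
    for t in range(10):                     # 10 measurement occasions
        treated = int(t >= start)           # switches on at this student's own start time
        score = base + growth * t + 4 * treated + rng.normal(0, 2)
        rows.append({"student": sid, "time": t, "treated": treated, "score": score})
df = pd.DataFrame(rows)

# Random intercept and slope per student; the fixed 'treated' coefficient
# estimates the level shift at intervention, whenever it occurs.
model = smf.mixedlm("score ~ time + treated", df, groups="student",
                    re_formula="~time").fit()
print(model.params["treated"])              # should recover a shift near the simulated 4
```

Because the intervention indicator is defined per student-occasion, no common cut point is required, which is the flexibility the abstract highlights.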
Multilevel Analysis of Pay Equity: Individual and Organizational Factors Within a Single Organization
Blair Stephenson,  Los Alamos National Laboratory,  blairs@lanl.gov
Abstract: Traditional human capital approaches to understanding pay equity have relied primarily on methods such as OLS regression, where individual factors are emphasized. Multilevel analysis methods more readily accommodate factors related to organizational structure, resulting in more accurate parameter estimates, as well as the potential for more practical and flexible model(s). Decisions regarding higher level(s) may necessitate a tradeoff between statistical and pragmatic concerns. Comparisons to the OLS approach, estimates of random group effects, and issues related to the use of multilevel models in pay equity studies are discussed. Note: to ensure confidentiality, as the study dataset is derived from a single organization, some aspects of the underlying dataset and specific findings have been modified. The scope, methodology, and issues encountered during the study are accurately portrayed.
A Hierarchical Linear Modeling Approach to Analyzing Reading Fluency Growth
Steven Guglielmo,  University of Oregon,  sgugliel@uoregon.edu
Joseph Stevens,  University of Oregon,  stevensj@uoregon.edu
Drew Braun,  Bethel School District,  dbraun@bethel.k12.or.us
Abstract: The proposed paper uses a hierarchical linear modeling (HLM) approach in analyzing patterns of growth in reading fluency over time. HLM is a powerful method in the analysis of change, allowing one to describe growth in terms of two basic parameters: starting values (i.e., intercepts) and rates of change (i.e., slopes). In addition, these models can incorporate multilevel variables—including both student- and institution-level characteristics—as predictors of intercepts and slopes. Using an HLM framework, we examine whether the patterns of reading fluency growth differ as a function of students' gender and ethnicity. The paper will apply various HLM models in examining growth in reading fluency, demonstrate significant relationships between student characteristics and reading growth, and discuss implications for school policy.
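A hypothetical sketch of the kind of model described above (simulated data; the variable names and effect sizes are invented, not the paper's): a two-level growth model in which a student-level characteristic predicts both the intercept and the slope, with the cross-level interaction term carrying the group difference in growth rates.

```python
# Illustrative sketch (simulated data): HLM growth model with a
# student-level predictor of both intercept and slope.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
rows = []
for sid in range(80):                              # 80 simulated students
    female = sid % 2                               # 0/1 group indicator (illustrative)
    base = rng.normal(50 + 3 * female, 6)          # group difference in starting level
    slope = rng.normal(2.0 + 0.5 * female, 0.3)    # group difference in growth rate
    for t in range(6):                             # 6 measurement occasions
        rows.append({"student": sid, "time": t, "female": female,
                     "fluency": base + slope * t + rng.normal(0, 1.5)})
df = pd.DataFrame(rows)

# 'time:female' is the cross-level interaction: do growth rates differ by group?
fit = smf.mixedlm("fluency ~ time * female", df, groups="student",
                  re_formula="~time").fit()
print(fit.params["time:female"])                   # differential growth rate
```

The same pattern extends to other predictors of intercepts and slopes (e.g., ethnicity or institution-level characteristics) by adding them to the fixed-effects formula.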

Session Title: Three Ways Foundations Get Evaluation Done
Panel Session 242 to be held in Room 105 in the Convention Center on Thursday, Nov 6, 9:15 AM to 10:45 AM
Sponsored by the Non-profit and Foundations Evaluation TIG
Deborah Bonnet,  DBonnet Associates,  dbonnet@dbonnet.com
Abstract: Foundations have three instruments for funding program and policy evaluation: (1) through programs' grants, (2) as grants to evaluators, and (3) as contracts with evaluators. Each choice has important implications for who is eligible to conduct the evaluation; control over evaluation design and release of findings; external perceptions; and dynamics among funders, nonprofits, and evaluators. Further, how much and when to use each option is perhaps second only to the evaluation budget in drawing the attention of foundation executives and boards in evaluation planning, and thus constitutes an important component of foundation evaluation policy. Presenters will focus on advantages, disadvantages, caveats, and tips for using each instrument. Audience discussion will invite the views and experiences of nonprofits and evaluators.
The Rules
Deborah Bonnet,  DBonnet Associates,  dbonnet@dbonnet.com
Deborah will introduce, in simple matrix form, the basic parameters and constraints of each option. During the session, the matrix will be expanded to include, for each vehicle: why foundations use it, how grantees respond, other advantages, other downsides, when it is the best choice, when it is the worst, and tricks for making it work.
Funding Evaluation through Program Grants
Helen Davis Picher,  William Penn Foundation,  hdpicher@wpennfdn.org
Helen will share the William Penn Foundation's rationale and experiences with this instrument.
Funding Evaluation through Grants to Evaluation Organizations
Edward Pauly,  The Wallace Foundation,  epauly@wallacefoundation.org
Ed will share the Wallace Foundation's rationale and experiences with this instrument.
Funding Evaluation through Contracts with Evaluators
Lester Baxter,  Pew Charitable Trusts,  lbaxter@pewtrusts.org
Lester will share the Pew Charitable Trusts' rationale and experiences with this instrument.

Session Title: Building Evaluation Capacity in Schools: Strategies for Fidelity
Multipaper Session 243 to be held in Room 107 in the Convention Center on Thursday, Nov 6, 9:15 AM to 10:45 AM
Sponsored by the Evaluation Use TIG , the Pre-K - 12 Educational Evaluation TIG, and the Organizational Learning and Evaluation Capacity Building TIG
Jacqueline Stillisano,  Texas A&M University,  jstillisano@tamu.edu
Increasing the Utilization of Evaluation Findings Through the Disaggregation of Survey Data
Nicole Gerardi,  University of California Los Angeles,  gerardi_nicole@yahoo.com
Abstract: Evaluation reports are often based on aggregated data where analysis is restricted to descriptive statistics. Disaggregating the data can reveal relationships that differ substantially from those apparent in aggregated data. It is important that evaluation findings and suggestions be specific enough, especially when conducting formative program evaluation. The author was part of a large evaluation conducted over three years (2003-2006) at a high school in the Los Angeles Unified School District, aimed at understanding the college-going culture of the school. This paper explores how disaggregating evaluation survey data provides a much richer interpretation of particular evaluation findings. The increased use of evaluation findings by school personnel, once provided with higher quality suggestions, is addressed. The paper includes a brief discussion of the college-going culture literature, the processes of re-analyzing data and re-presenting findings, and what various stakeholders were able to do with the new evaluation findings.
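The mechanics of disaggregation are simple; the payoff is interpretive. A toy example (all data and column names invented, not from the study) shows how a single aggregate rate can mask sharply different subgroup rates:

```python
# Hypothetical illustration: the school-wide rate hides a large
# difference between subgroups that a groupby makes visible.
import pandas as pd

survey = pd.DataFrame({
    "grade":     ["9th"] * 4 + ["12th"] * 4,
    "applied":   [1, 0, 0, 0, 1, 1, 1, 1],   # 1 = applied to college
})

overall = survey["applied"].mean()                     # one number for the whole school
by_grade = survey.groupby("grade")["applied"].mean()   # the disaggregated view
print(overall)        # 0.625 overall
print(by_grade)       # 0.25 for 9th graders, 1.0 for 12th graders
```

A formative recommendation based on the 62.5% aggregate would look very different from one targeted at the 9th-grade gap, which is the kind of specificity the abstract argues for.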
Measuring Implementation Fidelity: Implications for Program Implementation and Evaluation
Rochelle Fritz,  Miami University,  rokuserm@muohio.edu
Paul Flaspohler,  Miami University,  flaspopd@muohio.edu
Abstract: This paper focuses on the development of and implications for a measure of implementation fidelity for an evidence-based violence prevention program. This paper discusses a process for developing an implementation fidelity measure that incorporates items assessing both program-specific elements and general principles of effective prevention. In a review of the literature concerning implementation, Fixsen and colleagues (2005) noted that feedback loops are important in keeping evidence-based programs on track. Additionally, they found in the literature that measures of fidelity that were built into the functions of the program site may be more useful than measures conducted by outside researchers. The measure discussed in this paper is meant to be a self-assessment that can be used for program monitoring and program planning. Additional uses of the measure, such as linking program fidelity to program outcomes, are considered.
Consequences of Building Evaluation Capacity in Schools: A Case Study of Iteration in Team Development
Edward McLain,  University of Alaska Anchorage,  ed@uaa.alaska.edu
Susan Tucker,  E & D Associates LLC,  sutucker@sutucker.cnc.net
Letitia Fickel,  University of Alaska Anchorage,  aflcf@uaa.alaska.edu
Abstract: Building the capacity of school-based "data teams" to use improvement-oriented evaluation methodologies across diverse contexts has not been studied systematically. USDE’s Title II Teacher Quality Enhancement (TQE) is charged with enhancing teacher quality in the context of high-need schools. This paper shares the results of a professional development project with school-based data teams that have been operating since spring 2005 in collaboration with Alaska's TQE project. The goal of these teams is to facilitate staff and school decisions regarding strengthening diverse K-12 student performance, instructional strategies, resource management, systemic support mechanisms, and enhancing professional development. The paper discusses the development of a tool to describe the iterative nature of team functioning that has emerged from our grounded theory study with the teams. The tool’s utility is illustrated via a study of selected teams as context for the lessons learned in the development and initial application of the tool.
The After School Program Report Card: A Tool for Sharing and Utilizing Evaluation Findings
Joelle Greene,  National Community Renaissance,  jgreene@nationalcore.org
Susan Neufeld,  National Community Renaissance,  sneufeld@nationalcore.org
Yoon Choi,  National Community Renaissance,  ychoi@nationalcore.org
Abstract: The After School Program Report Card is a user-friendly tool that promotes the timely utilization of data for program decision-making. The report card simplifies data so that all staff, from administrators to field staff, can identify and address program strengths and areas for improvement. We will present the tool, provide an overview of its development, and discuss its impact on program improvement and the evaluation process. In addition, we will share lessons learned and solicit input from session attendees to strengthen the tool. National Community Renaissance's Department of Community Assessment and Program Evaluation serves as the internal evaluator for a collaborative of seven community-based after school providers operating 20 individual after school programs that serve over 600 children and youth on-site at affordable housing developments in Southern California.

Session Title: The Role of Organization/Institutional Context in Evaluation Use and Organizational Learning
Multipaper Session 244 to be held in Room 109 in the Convention Center on Thursday, Nov 6, 9:15 AM to 10:45 AM
Sponsored by the Organizational Learning and Evaluation Capacity Building TIG and the Systems in Evaluation TIG
Martin Steinmeyer,  Independent Consultant,  m.steinmeyer@gmx.net
Nancy Zajano,  Learning Point Associates,  nancy.zajano@learningpt.org
Organizational Resistance to Evaluative Reflection and Learning
David Campbell,  University of California Davis,  dave.c.campbell@ucdavis.edu
Abstract: This paper presents a meta-analysis of six recent evaluations in which the goal of creating learning communities was frustrated to greater or lesser degrees. In retrospect, the failures are not attributable to faulty designs or other evaluation mechanics. Rather, a narrow focus on these matters created a kind of tunnel vision that blinded evaluators to aspects of the organizational context that proved decisive in facilitating or hindering organizational learning and adaptation based on evaluation findings. This paper draws on the organizational development literature to illuminate common sources of resistance to evaluative reflection, illustrating these with case examples from government- and foundation-sponsored evaluations. Theory and practice point to a conundrum for evaluation practice: how to engage critically with organizational stakeholders without unwittingly exacerbating a climate of anxiety and defensiveness that undermines the spirit of open inquiry we hope to foster.
Higher Ed Assessment: The Role of Institutional Context in Faculty Engagement
Susan Boser,  Indiana University of Pennsylvania,  sboser@iup.edu
Abstract: Accrediting and licensure bodies have been moving US higher education institutions into student learning outcomes assessment since the 1990s, and remain a keenly felt presence. However, higher education institutions often experience multiple challenges to developing systematic outcomes assessment and utilizing such evaluation for program improvement. Interestingly, faculty resistance to assessment often presents one particular challenge. This session will look at the role of the institutional context in influencing faculty attitudes toward assessment, examining particular institutional characteristics that might inhibit and those that might promote faculty participation in assessment. In particular, this paper will draw on case examples and a review of the literature to suggest some specific strategies that institutions might use to introduce quality assessment practices to faculty, support active use of assessment findings for curriculum revision and sustain on-going assessment use. Comments will draw selectively on research regarding evaluation use, systems theory, and the literature of participatory evaluation.
Building Capacity Through Empowerment Evaluation: Is Principled Practice Possible in a Resource-Constrained Setting?
Wendi Siebold,  Evaluation, Management and Training Associates Inc,  wendi.lyn1@gmail.com
Rhonda Johnson,  University of Alaska,  rhonda.johnson@uaa.alaska.edu
Abstract: A core tenet of empowerment evaluation (EE) is building the capacity of local practitioners to evaluate their efforts. While promising in theory, the practice of EE proves challenging when working with organizations and practitioners who have limited time and funding for evaluation activities and capacity-building. The ten "principles" of EE introduced by Fetterman and Wandersman (2005) frame EE practice, yet tensions between these principles become prevalent in resource-constrained settings. For example, the principle of "inclusion" is difficult to achieve when following the principle of "capacity building." Key stakeholders whose evaluation capacity needs to be built to ensure the institutionalization of evaluation (e.g., program executive directors) are often the hardest to involve due to time constraints. The application of empowerment evaluation in the national CDC-funded DELTA domestic violence prevention initiative offers insights into whether it is truly possible for evaluators to shift from conducting evaluation to building evaluation capacity.

Session Title: Longitudinal Evaluation of a Professional Development Program: Lessons Learned
Panel Session 245 to be held in Room 111 in the Convention Center on Thursday, Nov 6, 9:15 AM to 10:45 AM
Sponsored by the Pre-K - 12 Educational Evaluation TIG
Leigh D'Amico,  University of South Carolina,  kale_leigh@yahoo.com
Ching Ching Yap,  University of South Carolina,  ccyap@gwm.sc.edu
Abstract: Stakeholders involved in the Teacher Quality Research (TQR) Project, an initiative of the South Carolina Department of Education, discuss opportunities and challenges in evaluating a school-based professional development program for teachers. Through TQR, trained coaches and facilitators serve as on-site leaders in the implementation of a 9-month curriculum designed to improve classroom assessment at low performing middle schools. Two panelists involved in the development and implementation of TQR provide their perspectives on the evaluation process and its usefulness in informing project delivery. Additional panelists and the moderator, who serve as evaluators of TQR, discuss the evolution of the evaluation plan including up-to-date initiatives and results. Evaluation methods include surveys, focus groups, and interviews to examine teachers' knowledge about classroom assessment and use of professional development strategies; observational techniques to evaluate fidelity of curriculum implementation by coaches and facilitators; and test data linked to participating teachers to explore student achievement impacts.
Overview of a Longitudinal Evaluation of a Professional Development Program
Ching Ching Yap,  University of South Carolina,  ccyap@gwm.sc.edu
Ching Ching Yap serves as a research assistant professor with the Office of Program Evaluation at the University of South Carolina and is the principal investigator for the evaluation portion of this project. She was involved in the development of the evaluation plan for this project and has been integral in its implementation. Functioning as the moderator of the panel, she will provide an overview of the original Teacher Quality Research (TQR) evaluation plan. She will introduce each panelist and provide that person's background and relevance to the TQR evaluation process.
Perspectives from Implementers: How Evaluation Evolved and Shaped the Delivery of a Professional Development Program
Christina Schneider,  CTB McGraw-Hill,  christina_schneider@ctb.com
Dawn Mazzie,  South Carolina Department of Education,  dmazzie@ed.sc.gov
Christina Schneider serves as the principal investigator of the Teacher Quality Research Project (TQR) through the South Carolina Department of Education. Christina was involved in the planning and development of the TQR project and oversees its overall functioning. She will discuss the impetus for this professional development project, the rationale for its approach, and the impact of evaluation findings on its continuing implementation. Dawn Mazzie serves as the grant director for TQR at the South Carolina Department of Education. Dawn has been with the program since its inception and provides oversight and direction to the middle schools involved in the program, including principals, coaches, facilitators, and teacher participants. She will inform the discussion with her experiences as the grant director and her perceptions of the evaluation plan, its effectiveness, and results. She will also discuss next steps as the project wraps up its initial 3-year implementation cycle.
Perspectives from Evaluators: How Challenges and Opportunities Impacted the Evaluation of a Professional Development Program
Leigh D'Amico,  University of South Carolina,  kale_leigh@yahoo.com
Candace Thompson,  University of North Carolina Wilmington,  kulturalhybrid@yahoo.com
Leigh D'Amico and Candace Thompson serve as evaluators of this project with the Office of Program Evaluation at the University of South Carolina. Leigh will discuss the evolving nature of the evaluation plan, which has sought to provide results on original areas of interest despite difficulties with data collection and capitalize on unforeseen opportunities that have arisen. She will also highlight some substantial evaluation endeavors undertaken in Year 3 of the project to evaluate obstacles that caused attrition and explore student achievement effects in more detail. Candace has a strong qualitative research background and assists in the analysis of focus groups, interviews, open-ended survey responses, and video-taped professional development sessions. Candace will discuss the opportunities and challenges of using differing research methods to evaluate portions of the professional development project.

Session Title: Research Methods in Multiethnic Evaluation
Multipaper Session 246 to be held in Room 113 in the Convention Center on Thursday, Nov 6, 9:15 AM to 10:45 AM
Sponsored by the Multiethnic Issues in Evaluation TIG
Imelda Castaneda-Emenaker,  University of Cincinnati,  castania@ucmail.uc.edu
Kien Lee,  Association for the Study and Development of Community,  kien@capablecommunity.com
Borrowing Techniques for Gathering Qualitative Data with Culturally Diverse Populations
Wendy DuBow,  National Research Center Inc,  wendy@n-r-c.com
Abstract: As evaluators become more aware of the varied styles of communication in racially, ethnically and culturally diverse groups, it may be useful to incorporate techniques from other fields, such as social work and public health. Family Group Conferencing (FGC) is a strengths-based model of decision making drawn from Maori culture. In the U.S., FGC has largely been used to make child-welfare decisions involving extended families. Evaluation could benefit from its incorporation of cultural traditions, community identity, elders, and alternative modes of communication. The Promotora model is a form of health education outreach increasingly used in the Latino community. Promotoras are themselves community members, who have been trained as liaisons with health and human services organizations. Without buy-in from a community, it is difficult to gather authentic data. The Promotora model may be overlaid onto evaluation to aid in recruiting and collecting data from populations of interest.
Cultural Competence of Program Evaluators: Why It's Important and Can It Be Taught?
Krystall Dunaway,  Old Dominion University,  kdunaway@odu.edu
Jennifer Morrow,  University of Tennessee,  jamorrow@utk.edu
Bryan Porter,  Old Dominion University,  bporter@odu.edu
Abstract: Why is cultural competence in evaluation important? The ultimate goal of evaluation is to solve social problems and develop solutions in a timely manner; therefore, an evaluation pervaded by racist, heterosexist, or classist attitudes of the evaluator can lead to irrelevant research questions and methodologies and yield inaccurate results. Thus, cultural competence ought to be one of the central principles of our field. With that in mind, the purposes of this paper are to 1) define cultural competence, 2) convey the belief that an individual cannot be a competent evaluator unless he or she is culturally competent, 3) argue that cultural competence can be taught to evaluators, and 4) describe cultural competence training models used in other disciplines that could be modified for the field of evaluation.
Mixed Methods Approach to Evaluating an Enrichment Program for Minority Students in Sciences
Margaret Mwenda,  University of Iowa,  margaret-mwenda@uiowa.edu
Vernita Morgan,  University of Iowa,  vernita-morgan@uiowa.edu
Abstract: This paper compares the quality of data collected over a period of three years (2005-2007) through surveys, focus groups, and mixed-method techniques, respectively. It also discusses the relevance of mixed-method inquiry for evaluating minority enrichment programs and highlights the value of using mixed methods for evaluation. The analyses confirm that the concurrent mixed-method approach yields richer data.
