Saturday, May 18, 2019
Program Evaluation as a Key Tool in Health and Human Services
Program Evaluation as a Key Tool in Health and Human Services. Maria Delos Angeles Mora. HCA460 Research Methods in Health and Human Services. Professor TyKeysha Boone. April 22, 2013.

In this competitive health care environment, consumers want and demand better health care services, and hospital systems are concerned about maintaining their overall image. There is also attention to ways in which patient satisfaction measurement can be integrated into an overall measure of clinical quality. A great deal of information is available for use in a hypothetical evaluation. The American Red Cross is my selection because I worked with them for several years as a volunteer and telephonic representative, answering incoming calls that needed to be routed to different parts of the United States and its territories.

The Fundamental Principles of the Global Red Cross Network are based on: humanity, as the Red Cross, born of a desire to bring assistance without discrimination to the wounded on the battlefield, endeavors, in its international and national capacity, to prevent and alleviate human suffering wherever it may be found. Its purpose is to protect life and health and to ensure respect for the human being. It promotes mutual understanding, friendship, cooperation, and lasting peace amongst all peoples. Impartiality: it makes no discrimination as to nationality, race, religious beliefs, class, or political opinions. It endeavors to relieve the suffering of individuals, guided solely by their needs, and to give priority to the most urgent cases of distress. Neutrality: in order to continue to enjoy the confidence of all, the Red Cross may not take sides in hostilities or engage at any time in controversies of a political, racial, religious, or ideological nature. Independence: the Red Cross is independent.
The national societies, while auxiliaries in the humanitarian services of their governments and subject to the laws of their respective countries, must always maintain their autonomy so that they may be able at all times to act in accordance with Red Cross principles. Voluntary service: the Red Cross is a voluntary relief movement not prompted in any manner by desire for gain. Unity: there can be only one Red Cross society in any one country; it must be open to all and must carry on its humanitarian work throughout its territory. Universality: the Red Cross is a worldwide institution in which all societies have equal status and share equal responsibilities and duties in helping each other.

In the continuing effort to improve human service programs, funders, policymakers, and service providers are increasingly recognizing the importance of rigorous program evaluations. They want to know what the programs accomplish, what they cost, and how they should be operated to achieve maximum cost-effectiveness. They want to know which programs work for which groups, and they want conclusions based on evidence rather than testimonials and impassioned pleas. This paper lays out, for the non-technician, the basic principles of program evaluation design. It signals common pitfalls, identifies constraints that need to be considered, and presents ideas for solving potential problems. These principles are general and can be applied to a wide range of human service programs. We illustrate them here with examples from programs for vulnerable children and youth. Evaluation of these programs is particularly challenging because they address a wide diversity of problems and possible solutions, often involve multiple agencies and clients, and change over time to meet shifting service needs. It is very important to follow the steps in selecting the appropriate evaluation design.
The first step in the process of selecting an evaluation design is to clarify the questions that need to be answered. The next step is to develop a logic model that lays out the expected causal linkages between the program (and program components) and the program goals. Without tracing these anticipated links it is impossible to interpret the evaluation evidence that is collected. The third step is to review the program to assess its readiness for evaluation. These three steps can be done at the same time or in overlapping stages.

Clarifying the evaluation questions: the design of any evaluation begins by defining the audience for the evaluation findings, what they need to know, and when. The questions asked determine which of the following four major types of evaluation should be chosen. Impact evaluations focus on questions of causality. Did the program have its intended effects? If so, who was helped, and what activities or characteristics of the program created the impact? Did the program have any unintended consequences, positive or negative? Outcome monitoring provides information on key aspects of how a system or program is operating and the extent to which specified program objectives are being attained (e.g., numbers of youth served compared to target goals, reductions in school dropouts compared to target goals). Results are used by service providers, funders, and policymakers to assess the program's performance and accomplishments. Process evaluations answer questions about how the program operates and document the procedures and activities undertaken in service delivery. Such evaluations help identify problems faced in delivering services and strategies for overcoming those problems.
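The logic-model step described above can be sketched as a simple data structure that traces the hypothesized chain from program activities to goals. The sketch below is illustrative only: every component and outcome named in it (mentoring sessions, attendance, dropout rate) is a hypothetical example, not drawn from any particular program.

```python
# Minimal sketch of a program logic model as a plain data structure.
# All component and outcome names here are hypothetical examples.
logic_model = {
    "inputs": ["funding", "volunteer mentors"],
    "activities": ["weekly mentoring sessions", "tutoring"],
    "outputs": ["number of youth served", "sessions delivered"],
    "short_term_outcomes": ["improved school attendance"],
    "long_term_outcomes": ["reduced dropout rate"],
}

def describe_chain(model):
    """Trace the hypothesized causal chain from inputs to long-term goals."""
    return " -> ".join(model.keys())

print(describe_chain(logic_model))
```

Laying the model out explicitly, even this simply, makes it clear which links the evaluation evidence must support before an impact claim can be interpreted.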
They are useful to practitioners and service providers in replicating or adapting program strategies. Cost evaluations address how much the program or program components cost, preferably in relation to alternative uses of the same resources and to the benefits being produced by the program. In the current fiscal environment, programs must expect to defend their costs against alternative uses. A comprehensive evaluation will include all of these activities. Sometimes, however, the questions raised, the target audience for findings, or the available resources limit the evaluation focus to one or two of these activities.

Whether to provide preliminary evaluations to staff for use in improving program operations and developing additional services is an issue that needs to be faced. Preliminary results can be effectively used to identify operational problems and develop the capacity of program staff to conduct their own ongoing evaluation and monitoring activities (Connell, Kubisch, Schorr, & Weiss, 1995). However, this use of evaluation findings, called formative evaluation, presents a challenge to evaluators, who face the much more difficult task of estimating the impact of an evolving intervention. When the program itself is continuing to change, measuring impact requires ongoing measurement of the types and level of service provided. The risk in formative evaluations is that the line between program operations and assessment will be blurred. The extra effort and resources required for impact analysis in formative evaluations have to be weighed against the potential gains to the program from ongoing improvements and the greater usefulness of the final evaluation findings. Performance monitoring involves identification and collection of specific data on program outputs, outcomes, and accomplishments.
Although they may measure subjective factors such as client satisfaction, the data are numeric, consisting of frequency counts, statistical averages, ratios, or percentages. Output measures reflect internal activities: the amount of work done within the program or organization. Outcome measures (immediate and longer term) reflect progress toward program goals. Often the same measurements (e.g., number/percent of youth who stopped or reduced substance abuse) may be used for both performance monitoring and impact evaluation. Unlike impact evaluation, however, performance monitoring makes no rigorous effort to determine whether observed changes were caused by program efforts or by other external events.

Design variations come into play when programs operate in a number of communities: the sites are likely to vary in mission, structure, the nature and extent of project implementation, primary clients/targets, and timelines. They may offer somewhat different sets of services, or have identified somewhat different goals. In such situations, it is advisable to construct a core set of performance measures to be used by all sites, supplemented with local performance indicators that reflect the differences. For example, some youth programs will collect detailed data on school performance, including grades, attendance, and disciplinary actions, while others will simply have data on promotion to the next grade or whether the youth is still enrolled or has dropped out. A multi-school performance monitoring system might require data on promotion and enrollment for all schools, and specify more detailed or specialized indicators on attendance or disciplinary actions for one or a subset of schools to use in their own performance monitoring.
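The numeric indicators described above (frequency counts and percentages compared against target goals) can be computed directly from service records. A minimal sketch follows; the record fields (`enrolled`, `promoted`) and the target figure are hypothetical examples, not a real monitoring system's schema.

```python
# Sketch: computing core performance indicators from service records.
# The record fields and the target-served figure are hypothetical.
records = [
    {"enrolled": True,  "promoted": True},
    {"enrolled": True,  "promoted": False},
    {"enrolled": False, "promoted": False},
]

def indicator_report(records, target_served=5):
    """Frequency counts and percentages of the kind used in monitoring."""
    served = sum(1 for r in records if r["enrolled"])
    promoted = sum(1 for r in records if r["enrolled"] and r["promoted"])
    return {
        "youth_served": served,                                   # output measure
        "pct_of_target": round(100 * served / target_served, 1),  # vs. target goal
        "pct_promoted": round(100 * promoted / served, 1) if served else None,
    }
```

A report like this distinguishes an output measure (youth served) from an outcome measure (promotion), mirroring the distinction drawn in the text.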
Another consideration is the set of limitations involved in selecting performance indicators. Evaluators and service providers need to consider: the relevance of potential measures to the mission/objective of the local program or national initiative; the comprehensiveness of the set of measures; the program's control over the factor being measured; the validity, reliability, and accuracy of each measure; the feasibility of collecting the data; and how much effort and money is required to generate each measure.

Practical issues: the set of performance indicators should be simple, limited to a few key indicators of priority outcomes. Too many indicators burden data collection and analysis and make it less likely that managers will understand and use the reported information. Regular measurement, ideally quarterly, is important so that the system provides information in time to make shifts in program operations and to capture changes over time. However, pressure for timely reporting should not be allowed to sacrifice data quality. For performance monitoring to take place in a reliable and timely way, the evaluation should include adequate support and plans for training and technical assistance for data collection. Routine quality-control procedures should be established to check on data-entry accuracy and missing information. At the point of analysis, procedures for verifying trends should be in place, particularly if the results are unexpected. The costs of performance monitoring are modest relative to impact evaluations, but still vary widely depending on the data used. Most performance indicator data come from records maintained by service providers. The added expense involves regularly collecting and analyzing these records, as well as preparing and disseminating reports to those concerned. This is typically a part-time work assignment for a supervisor within the agency.
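Routine quality-control checks of the kind described here (data-entry accuracy, missing information) lend themselves to automation. A minimal sketch, in which the field names and the valid score range are hypothetical examples rather than any standard schema:

```python
# Sketch: routine quality-control checks on monitoring data.
# Field names and the valid score range are hypothetical examples.
REQUIRED_FIELDS = ("client_id", "service_date", "outcome_score")

def check_record(record):
    """Return a list of data-quality problems found in one record."""
    problems = []
    for field in REQUIRED_FIELDS:
        if record.get(field) in (None, ""):
            problems.append(f"missing {field}")
    score = record.get("outcome_score")
    if isinstance(score, (int, float)) and not 0 <= score <= 100:
        problems.append("outcome_score out of range")
    return problems
```

Running a check like this at data entry, rather than at analysis time, is what keeps quarterly reporting from sacrificing data quality.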
The expense will be greater if client satisfaction surveys are used to measure outcomes. An outside survey organization may be required for a large-scale survey of past clients; alternatively, a self-administered exit questionnaire can be given to clients at the end of services. In either case, the assistance of professional researchers is needed in preparing data sets, analyses, and reports.

Process analysis: a key element in process analysis is a systematic, focused plan for collecting data to (1) determine whether the program model is being implemented as specified and, if not, how operations differ from those initially intended; (2) identify unintended consequences and unanticipated outcomes; and (3) understand the program from the perspectives of staff, participants, and the community. Design variations here concern the systematic procedures used to collect data for process evaluation, which often include case studies, focus groups, and ethnography.

Strong pressure to demonstrate program impacts dictates making evaluation activities a required and intrinsic part of program activities from the start. At the very least, evaluation activities should include performance monitoring. The collection and analysis of data on program progress and process builds the capacity for self-evaluation and contributes to good program management and to efforts to obtain support for program continuation, for example, when the funding is serving as seed money for a program that is intended, if successful, to continue under local sponsorship. Performance monitoring can be extended to non-experimental evaluation with additional analysis of program records and/or client surveys. These evaluation activities may be conducted either by program staff with research training or by an independent evaluator. In either case, training and technical assistance to support program evaluation efforts will be needed to maintain data quality and assist in appropriate analysis and use of the findings.
There are several strong arguments for evaluation designs that go further in documenting program impact. Only experimental or quasi-experimental designs provide convincing evidence that program funds are well invested and that the program is making a real difference in the well-being of the population served. These evaluations need to be conducted by experienced researchers and supported by adequate budgets. A good strategy may be to implement small-scale programs to test alternative models of service delivery in settings that allow a stronger impact evaluation design than is possible in a large-scale, national program. Often program evaluation should proceed in stages. The first year of program operations can be devoted to process studies and performance monitoring, the information from which can serve as a basis for more extensive evaluation efforts once operations are running smoothly. Finally, planning to obtain support for the evaluation at every level (community, program staff, agency leadership, and funder) should be extensive. Each of these has a stake in the results. Each should have a voice in planning. And each should perceive net benefits from the results. Only in this way will the results be acknowledged as valid and actually used for program improvement.

References

Bryk, A. S., and Raudenbush, S. W. (1992). Hierarchical Linear Models: Applications and Meta-Analysis Techniques. Newbury Park, CA: Sage.

Connell, J. P., Kubisch, A. C., Schorr, L. B., and Weiss, C. H. (1995). New Approaches to Evaluating Community Initiatives: Concepts, Methods, and Contexts. Washington, DC: The Aspen Institute.

Ellickson, P. L., Bell, R. M., and McGuigan, K. (1993). Preventing Adolescent Drug Use: Long-Term Results of a Junior High School Program. American Journal of Public Health 83(6): 856-861.

Engle, R. F., and Granger, C. W. J. (1987). Co-integration and Error Correction: Representation, Estimation and Testing. Econometrica 55: 251-276.

Evaluation Strategies for Human Service Programs. Retrieved from http://www.ojp.usdoj.gov/BJA/evaluation/guide/documents/evaluation_strategies.html, p. 6.

Heckman, J. J. (1979). Sample Selection Bias as a Specification Error. Econometrica 47: 153-162.

IRB Forum. Retrieved from www.irbforum.org.

Joreskog, K. G. (1977). Structural Equation Models in the Social Sciences. In P. R. Krishnaiah (ed.), Applications of Statistics, 265-287. Amsterdam: North-Holland.

Kalbfleisch, J. D., and Prentice, R. L. (1980). The Statistical Analysis of Failure Time Data. New York: Wiley.

Kumpfer, K. L., Shur, G. H., Ross, J. H., Bunnell, K. K., Librett, J. J., and Milward, A. R. (1993). Measurements in Prevention: A Manual on Selecting and Using Instruments to Evaluate Prevention Programs. Public Health Service, U.S. Department of Health and Human Services, (SMA) 93-2041.

Monette, D. R., Sullivan, T. J., and DeJong, C. R. (2014). Applied Social Research: A Tool for the Human Services, 8th Edition. Wadsworth.

MREL Appendix A. Retrieved from http://www.ecs.org/html/educationIssues/Research/primer/appendixA.asp.

Program Evaluation 101: A Workshop. Retrieved from http://aetcnec.ucsf.edu/evaluation/pacific_evaluation%5B1%5D.ppt.