896 results for Continuous quality improvement


Relevance: 100.00%

Abstract:

Clinical auditing is universally recognized as a useful tool for evaluating and improving the quality of care provided by a health service. External auditing is a regular activity for mental health services in Australia, but internal auditing is conducted at the discretion of each service. This paper evaluates the effectiveness of 6 years of internal auditing in a mental health service. The scope, audit tools, purpose, sampling and design of the internal audits were reviewed, and the recommendations from six consecutive annual audit reports were identified. Audit recommendations were examined, along with their levels of implementation and the reasons for success or failure. Fifty-seven recommendations were identified: 35% saw no action, 28% were implemented and 33.3% were still pending or in progress. Recommendations were more likely to be implemented if they relied on activity, planning and action across a selection of service areas rather than being restricted to individual departments within a service, if they did not involve non-mental-health departments, and if they were not reliant on attitudinal change. The tools, scope and reporting formats have become more sophisticated as the auditing process has evolved. Internal auditing in the Barwon Health Mental Health Service has been effective in producing change in the quality of care across the organization, and a number of evolutionary changes in the audit process have improved its efficiency and effectiveness.

Relevance: 100.00%

Abstract:

Today’s pet food industry is growing rapidly, with pet owners demanding high-quality diets for their pets. The primary role of a diet is to provide enough nutrients to meet metabolic requirements while giving the consumer a feeling of well-being. Diet nutrient composition and digestibility are of crucial importance for the health and well-being of animals. A recent strategy to improve the quality of food is the use of “nutraceuticals” or “functional foods”. At the moment, probiotics and prebiotics are among the most studied and most frequently used functional food compounds in pet foods. The present thesis reports results from three studies. The first study aimed to develop a simple laboratory method to predict pet food digestibility. The developed method was based on the two-step multi-enzymatic incubation assay described by Vervaeke et al. (1989), with some modifications to better represent the digestive physiology of dogs. A trial was then conducted to compare the in vivo digestibility of pet foods with the in vitro digestibility obtained with the newly developed method. Correlation coefficients showed a close correlation between the in vivo and in vitro digestibility values for total dry matter and crude protein (0.9976 and 0.9957, respectively). Ether extract presented a lower correlation coefficient, although still close to 1 (0.9098). Based on these results, the new method can be considered an alternative system for evaluating dog food digestibility, reducing the need for experimental animals in digestibility trials. The second part of the study aimed to isolate from dog faeces a Lactobacillus strain capable of exerting a probiotic effect on the dog intestinal microflora. An L. animalis strain was isolated from the faeces of 17 healthy adult dogs. The isolated strain was first studied in vitro: it was added to a canine faecal inoculum (at a final concentration of 6 Log CFU/mL) that was incubated in anaerobic serum bottles and syringes simulating the large intestine of dogs. Samples of fermentation fluid were collected at 0, 4, 8, and 24 hours for analysis (ammonia, SCFA, pH, lactobacilli, enterococci, coliforms, clostridia). Subsequently, the L. animalis strain was fed to nine dogs with lactobacilli counts lower than 4.5 Log CFU per g of faeces. The study indicated that the L. animalis strain was able to survive gastrointestinal passage and transiently colonize the dog intestine. Both in vitro and in vivo results showed that the L. animalis strain positively influenced the composition and metabolism of the intestinal microflora of dogs. The third trial investigated in vitro the effects of several non-digestible oligosaccharides (NDO) on dog intestinal microflora composition and metabolism. Substrates were fermented using a canine faecal inoculum incubated in anaerobic serum bottles and syringes. Substrates were added at a final concentration of 1 g/L (inulin, FOS, pectin, lactitol, gluconic acid) or 4 g/L (chicory). Samples of fermentation fluid were collected at 0, 6, and 24 hours for analysis (ammonia, SCFA, pH, lactobacilli, enterococci, coliforms). Gas production was measured throughout the 24 h of the study. Among the tested NDO, lactitol showed the best prebiotic properties: it reduced coliform and increased lactobacilli counts, enhanced microbial fermentation, and promoted the production of SCFA while decreasing BCFA. All the substrates investigated showed one or more positive effects on dog faecal microflora metabolism or composition.
Further studies (in particular in vivo studies with dogs) will be needed to confirm the prebiotic properties of lactitol and evaluate its optimal level of inclusion in the diet.
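The agreement between in vivo and in vitro digestibility reported above is expressed as a Pearson correlation coefficient. As a minimal illustration of that calculation (the digestibility values below are hypothetical, not the thesis data), the coefficient can be computed as:

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical dry-matter digestibility (%) for five diets,
# measured in vivo and predicted by a two-step enzymatic assay.
in_vivo  = [78.2, 81.5, 74.9, 85.1, 80.3]
in_vitro = [77.5, 82.0, 74.1, 84.8, 79.9]

print(round(pearson_r(in_vivo, in_vitro), 4))
```

A coefficient this close to 1 is what justifies using the enzymatic assay as a stand-in for feeding trials.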

Relevance: 100.00%

Abstract:

Over the last decade, the near-surface mounted (NSM) strengthening technique using carbon fibre reinforced polymers (CFRP) has been increasingly used to improve the load-carrying capacity of concrete members. Compared to externally bonded reinforcement (EBR), the NSM system presents considerable advantages. The technique consists of inserting CFRP laminate strips into pre-cut slits opened in the concrete cover of the elements to be strengthened. The CFRP reinforcement is bonded to the concrete with an appropriate groove filler, typically epoxy adhesive or cement grout. Up to now, research efforts have focused mainly on structural aspects such as bond behaviour, flexural and/or shear strengthening effectiveness, and the energy dissipation capacity of beam-column joints. In such research works, as well as in field applications, the adhesives most widely used to bond the reinforcement to concrete are epoxy resins. It is largely accepted that the performance of the whole NSM application strongly depends on the mechanical properties of the epoxy resin, for which proper curing conditions must be assured. Non-destructive methods for monitoring the curing of epoxy resins in NSM CFRP systems are therefore desirable, as they can provide continuous information on the effectiveness of curing and the expected bond behaviour of CFRP/adhesive/concrete systems. The experimental research was developed at the Laboratory of the Structural Division of the Civil Engineering Department of the University of Minho in Guimarães, Portugal (LEST). The main objective was to develop and propose a new method for continuous quality control of the curing of epoxy resins applied in NSM CFRP strengthening systems.
This objective is pursued through the adaptation of an existing technique, termed EMM-ARM (Elasticity Modulus Monitoring through Ambient Response Method), originally developed for monitoring the early stiffness evolution of cement-based materials. The experimental programme comprised two parts: (i) direct pull-out tests on concrete specimens strengthened with NSM CFRP laminate strips, conducted to assess the evolution of the bond between CFRP and concrete from early ages; and (ii) EMM-ARM tests, carried out to monitor the progressive stiffness development of the structural adhesive used in CFRP applications. To verify the capability of the proposed method for evaluating the elastic modulus of the epoxy, the static E-modulus was determined through tension tests. The results of the two series of tests were then combined and compared to evaluate the feasibility of a new method for continuous monitoring and quality control of NSM CFRP applications.
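EMM-ARM infers a material's stiffness from the evolving resonant frequency of a beam containing the curing material. As a rough sketch of the kind of back-calculation involved, assuming an idealized clamped-free (cantilever) Euler-Bernoulli beam with uniform properties (a simplification of the real composite-section analysis; all numbers are hypothetical):

```python
import math

LAMBDA1 = 1.87510407  # first-mode eigenvalue of a clamped-free (cantilever) beam

def e_modulus_from_frequency(f1_hz, length_m, lin_mass_kg_m, inertia_m4):
    """Back-calculate the flexural modulus E from the measured first resonant
    frequency of a cantilever beam, using Euler-Bernoulli theory:
        f1 = (lambda1^2 / (2*pi)) * sqrt(E*I / (m * L^4))
    where m is the mass per unit length and I the second moment of area."""
    omega = 2 * math.pi * f1_hz
    return (omega / LAMBDA1 ** 2) ** 2 * lin_mass_kg_m * length_m ** 4 / inertia_m4

# Hypothetical test beam: L = 0.45 m, linear mass 0.35 kg/m,
# second moment of area 2.0e-9 m^4, measured first frequency 25 Hz.
E = e_modulus_from_frequency(f1_hz=25.0, length_m=0.45,
                             lin_mass_kg_m=0.35, inertia_m4=2.0e-9)
print(f"{E / 1e9:.2f} GPa")
```

Tracking the measured frequency over time yields a continuous stiffness curve for the curing adhesive, which is the quantity compared against the static tension-test results.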

Relevance: 100.00%

Abstract:

BACKGROUND: We describe the setup of a neonatal quality-improvement tool, list which peer-reviewed requirements it fulfils and which it does not, and report the effects observed so far, how units can identify quality-improvement potential, and how they can measure the effect of changes made to improve quality. METHODS: A prospective longitudinal national cohort data collection that uses algorithms to ensure high data quality (checks for completeness, plausibility and reliability) and to generate charts (Plsek's p-charts and standardized mortality or morbidity ratio (SMR) charts). The collected data allow monitoring of a cohort of very-low-birth-weight (VLBW) infants born from 2009 to 2011 by applying a quality cycle following the steps 'guideline - perform - falsify - reform'. RESULTS: 2025 VLBW live births from 2009 to 2011, representing 96.1% of all VLBW live births in Switzerland, display a similar mortality rate but better morbidity rates when compared with other networks. Data quality is generally high but subject to improvement in some units. Seven measurements display quality-improvement potential in individual units. The methods used fulfil several international recommendations. CONCLUSIONS: The Quality Cycle of the Swiss Neonatal Network is a helpful instrument to monitor and gradually help improve the quality of care in a region with high quality standards and low statistical discrimination capacity.
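A p-chart of the kind mentioned in METHODS places each unit's morbidity proportion against 3-sigma binomial limits around the pooled network mean, so units outside the limits stand out as improvement candidates. A minimal sketch with made-up unit counts (not Swiss network data):

```python
import math

def p_chart_limits(events, totals):
    """Centre line and 3-sigma control limits for a proportion (p) chart.
    Limits vary per unit because they depend on each unit's sample size n."""
    p_bar = sum(events) / sum(totals)  # pooled network proportion
    limits = []
    for n in totals:
        sigma = math.sqrt(p_bar * (1 - p_bar) / n)
        limits.append((max(0.0, p_bar - 3 * sigma), min(1.0, p_bar + 3 * sigma)))
    return p_bar, limits

# Hypothetical: morbidity cases per unit, VLBW infants treated per unit.
events = [4, 9, 2, 12]
totals = [60, 85, 40, 110]
p_bar, limits = p_chart_limits(events, totals)
for (ev, n), (lo, hi) in zip(zip(events, totals), limits):
    flag = "within" if lo <= ev / n <= hi else "outside"
    print(f"n={n}: rate={ev / n:.3f}, limits=({lo:.3f}, {hi:.3f}) -> {flag}")
```

An SMR chart works analogously, but plots observed/expected counts, with the expected count taken from a risk-adjustment model.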

Relevance: 100.00%

Abstract:

In the Practice Change Model, physicians act as key stakeholders: people who have both an investment in the practice and the capacity to influence how the practice performs. This leadership role is critical to the development and change of the practice, and leadership roles and effectiveness are an important factor in quality improvement in primary care practices. The study involved a comparative case-study analysis to identify leadership roles and the relationship between leadership roles and the number and type of quality-improvement strategies adopted during a Practice Change Model-based intervention study. The research used secondary data from four primary care practices with various leadership styles. The practices are located in the San Antonio region and serve a large Hispanic population. The data were collected from each practice by two ABC Project Facilitators over a 12-month period and include key-informant interviews (all staff members), a Multi-method Assessment Process (MAP), and practice-facilitation field notes. These data were used to evaluate leadership styles, management within the practice, and the intervention tools that were implemented.
The chief steps were (1) to analyze whether leader-member relations contribute to the type of quality-improvement strategy or strategies selected; (2) to investigate whether leader position power contributes to the number and type of strategies selected; and (3) to explore whether task structure varies across the four primary care practices. The research found that involving more members of the clinic staff in decision-making, building bridges between organizational staff and clinical staff, and task structure are all associated with a direct influence on the number and type of quality-improvement strategies implemented in primary care practice. Although this research investigated the leadership styles of only four practices, it offers guidance on how to establish the priorities and implementation of quality-improvement strategies that will have the greatest impact on patient-care improvement.

Relevance: 100.00%

Abstract:

The objectives of this dissertation were to evaluate the health outcomes, quality-improvement measures, and long-term cost-effectiveness and impact on diabetes-related microvascular and macrovascular complications of a community health worker-led, culturally tailored diabetes education and management intervention provided to uninsured Mexican Americans in an urban faith-based clinic. A prospective, randomized controlled, repeated-measures design was employed to compare intervention effects between: (1) an intervention group (n=90) that participated in the Community Diabetes Education (CoDE) program along with usual medical care; and (2) a wait-listed comparison group (n=90) that received only usual medical care. Changes in hemoglobin A1c (HbA1c) and secondary outcomes (lipid status, blood pressure and body mass index) were assessed using linear mixed models and an intention-to-treat approach. The CoDE group experienced a greater reduction in HbA1c (-1.6%, p<.001) than the control group (-0.9%, p<.001) over the 12-month study period. After adjusting for the group-by-time interaction, antidiabetic medication use at baseline, changes made to the antidiabetic regimen over the study period, duration of diabetes, and baseline HbA1c, a statistically significant intervention effect on HbA1c (-0.7%, p=.02) was observed for CoDE participants. Process and outcome quality measures were evaluated using multiple mixed-effects logistic regression models. Assessment of quality indicators revealed that the CoDE intervention group was significantly more likely to have received a dilated retinal examination than the control group, and 53% achieved an HbA1c below 7% compared with 38% of control-group subjects. Long-term cost-effectiveness and impact on diabetes-related health outcomes were estimated through simulation modeling using the rigorously validated Archimedes Model.
Over a 20-year time horizon, CoDE participants were forecast to have less proliferative diabetic retinopathy, fewer foot ulcers, and fewer foot amputations than control-group subjects receiving usual medical care. An incremental cost-effectiveness ratio of $355 per quality-adjusted life-year gained was estimated for CoDE intervention participants over the same period. The results from the three areas of program evaluation (impact on short-term health outcomes, quantification of improvement in quality of diabetes care, and projection of long-term cost-effectiveness and impact on diabetes-related health outcomes) provide evidence that a community health worker can be a valuable resource for reducing diabetes disparities among uninsured Mexican Americans. This evidence supports the formal integration of community health workers as members of the diabetes care team.
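The $355-per-QALY figure is an incremental cost-effectiveness ratio (ICER): the extra cost of the intervention divided by the extra quality-adjusted life-years it produces. A one-line sketch with hypothetical per-patient inputs chosen only to reproduce the arithmetic (not the Archimedes Model outputs):

```python
def icer(cost_new, qaly_new, cost_usual, qaly_usual):
    """Incremental cost-effectiveness ratio: extra cost per extra QALY."""
    return (cost_new - cost_usual) / (qaly_new - qaly_usual)

# Hypothetical 20-year per-patient figures for the CHW-led programme
# versus usual care: $71 more spent, 0.2 QALYs gained.
print(round(icer(cost_new=14_210, qaly_new=12.4,
                 cost_usual=14_139, qaly_usual=12.2)))  # → 355
```

A low ICER like this means each additional quality-adjusted life-year costs very little relative to common willingness-to-pay thresholds.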

Relevance: 100.00%

Abstract:

One important task in the design of an antenna is to carry out an analysis to find the antenna characteristics that best fulfil the specifications fixed by the application. After that, a prototype is manufactured, and the next stage in the design process is to check whether the radiation pattern differs from the designed one. Besides the radiation pattern, other radiation parameters such as directivity, gain, impedance, beamwidth, efficiency and polarization must also be evaluated. For this purpose, accurate antenna measurement techniques are needed in order to know exactly the actual electromagnetic behaviour of the antenna under test (AUT). For this reason, most measurements are performed in anechoic chambers: closed, normally shielded areas covered with radiation-absorbing material that simulate free-space propagation conditions. Moreover, these facilities can be used independently of weather conditions and allow measurements free from interference. Despite all the advantages of anechoic chambers, the results obtained from both far-field and near-field measurements are inevitably affected by errors. The main objective of this Thesis is therefore to propose algorithms that improve the quality of antenna measurement results through post-processing techniques, without requiring additional measurements. First, a thorough review of the state of the art was carried out to give a general view of the possibilities for characterizing or reducing the effects of errors in antenna measurements. Then, new methods to reduce the unwanted effects of four of the most common errors in antenna measurements are described and validated both theoretically and numerically. The basis of all of them is the same: to transform the measured field from the measurement surface to another domain where there is enough information to easily remove the contribution of the errors.
The four errors analyzed are noise, reflections, truncation errors and leakage, and the tools used to suppress them are mainly source-reconstruction techniques, spatial and modal filtering, and iterative function-extrapolation algorithms. The main idea of all the methods is thus to modify the classical near-field-to-far-field transformations by including additional steps with which the errors can be greatly suppressed. Moreover, the proposed methods are not computationally complex and, because they are applied in post-processing, require no additional measurements. Noise is the error studied most extensively in this Thesis; three alternatives are proposed for filtering out an important noise contribution before obtaining the far-field pattern. The first is based on modal filtering. The second uses a source-reconstruction technique to obtain the extreme near field, where a spatial filter can be applied. The third back-propagates the measured field to a surface with the same geometry as the measurement surface but closer to the AUT, and then also applies a spatial filter. All the alternatives are analyzed in the three most common near-field systems, including comprehensive statistical noise analyses to deduce the signal-to-noise-ratio improvement achieved in each case. The method to suppress reflections in antenna measurements is also based on a source-reconstruction technique; the main idea is to reconstruct the field over a surface larger than the antenna aperture so that the virtual sources related to the reflected waves can be identified and then suppressed. The truncation error present in the results obtained from planar, cylindrical and partial spherical near-field measurements is the third error analyzed in this Thesis.
The method to reduce this error is based on an iterative algorithm that extrapolates the reliable region of the far-field pattern from knowledge of the field distribution on the AUT plane. The proper termination point of this iterative algorithm, as well as other critical aspects of the method, is also studied. The last part of this work is dedicated to the detection and suppression of the two most common leakage sources in antenna measurements. The first method estimates the leakage bias constant added by the receiver's quadrature detector to every near-field datum and then suppresses its effect on the far-field pattern. The second method has two parts: the first locates the faulty component that radiates or receives unwanted radiation, easing its identification within the measurement environment and its later replacement; the second removes the leakage effect computationally, without requiring replacement of the faulty component.
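The modal filtering applied to planar measurements exploits the fact that, in the plane-wave spectrum of the sampled near field, only modes with kx² + ky² ≤ k² radiate to the far field; spectral samples outside that visible region carry only evanescent content and noise, and can be zeroed before the near-field-to-far-field transformation. A minimal sketch of this idea (not the Thesis implementation; geometry and data are made up):

```python
import numpy as np

def modal_filter(e_near, dx, dy, wavelength):
    """Zero the plane-wave-spectrum samples outside the visible region
    (kx^2 + ky^2 > k^2) of a planar near-field scan, where measured data
    can only contain noise, then return the filtered near field."""
    k = 2 * np.pi / wavelength
    ny, nx = e_near.shape
    kx = 2 * np.pi * np.fft.fftfreq(nx, d=dx)
    ky = 2 * np.pi * np.fft.fftfreq(ny, d=dy)
    KX, KY = np.meshgrid(kx, ky)
    spectrum = np.fft.fft2(e_near)
    spectrum[KX**2 + KY**2 > k**2] = 0.0   # suppress non-radiating modes
    return np.fft.ifft2(spectrum)

# Hypothetical 64x64 scan sampled at lambda/2 in both directions,
# filled here with complex noise just to exercise the filter.
wl = 0.03  # 10 GHz
rng = np.random.default_rng(0)
field = rng.standard_normal((64, 64)) + 1j * rng.standard_normal((64, 64))
filtered = modal_filter(field, dx=wl / 2, dy=wl / 2, wavelength=wl)
print(filtered.shape)
```

Because the mask only ever removes spectral energy, the filter can reduce noise power without touching the radiating modes that determine the far-field pattern.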