902 results for ASSURANCE
Abstract:
A risk score model was developed based on a sample of 1,224 individuals without known diabetes, aged 35 years or older, drawn from an urban Brazilian general population, in order to select individuals who should be screened with subsequent testing and to improve the efficacy of public health assurance. External validation was performed in a second, independent population from a different city, ascertained through a similar epidemiological protocol. The risk score was developed by multiple logistic regression, and model performance and cutoff values were derived from a receiver operating characteristic (ROC) curve. The model's capacity to predict fasting blood glucose levels was tested by analyzing data from a 5-year follow-up protocol conducted in the general population. Items independently and significantly associated with diabetes were age, BMI and known hypertension. Sensitivity, specificity and the proportion of individuals requiring further testing at the best cutoff value were 75.9%, 66.9% and 37.2%, respectively. External validation confirmed the model's adequacy (AUC = 0.72). Finally, the model score was also capable of predicting fasting blood glucose progression in non-diabetic individuals over a 5-year follow-up period. In conclusion, this simple diabetes risk score was able to identify individuals with an increased likelihood of having diabetes, and it can be used to stratify subpopulations in which performing subsequent tests is necessary and probably cost-effective.
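The modelling pipeline this abstract describes (logistic fit, ROC curve, cutoff selection) can be sketched in a few lines. The cohort, coefficients and cutoff below are synthetic and purely illustrative, not the study's data; the cutoff is chosen by a Youden-style criterion, which is an assumption, since the abstract does not state how the best cutoff was picked.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic cohort: age, BMI and known hypertension drive a made-up diabetes risk.
n = 1000
age = rng.uniform(35, 80, n)
bmi = rng.normal(27.0, 4.0, n)
htn = rng.integers(0, 2, n).astype(float)
true_logit = -9.0 + 0.06 * age + 0.12 * bmi + 0.8 * htn
y = (rng.random(n) < 1.0 / (1.0 + np.exp(-true_logit))).astype(float)

# Fit a logistic-regression risk score by gradient descent on standardised features.
X = np.column_stack([np.ones(n), age, bmi, htn])
X[:, 1:] = (X[:, 1:] - X[:, 1:].mean(0)) / X[:, 1:].std(0)
w = np.zeros(X.shape[1])
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-X @ w))
    w -= 0.1 * X.T @ (p - y) / n
score = 1.0 / (1.0 + np.exp(-X @ w))

# ROC curve, AUC (Mann-Whitney form) and a Youden-style cutoff.
order = np.argsort(-score)
tpr = np.cumsum(y[order]) / y.sum()
fpr = np.cumsum(1.0 - y[order]) / (n - y.sum())
auc = (score[y == 1][:, None] > score[y == 0][None, :]).mean()
best = np.argmax(tpr - fpr)
sensitivity, specificity = tpr[best], 1.0 - fpr[best]
screen_fraction = (best + 1) / n  # share of the cohort sent for further testing
```

The last quantity mirrors the abstract's "proportion of further testing necessary": everyone scoring at or above the cutoff would be referred for a confirmatory glucose test.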
Abstract:
Background: The cerebrospinal fluid (CSF) biomarkers amyloid beta (A beta)-42, total tau (T-tau), and phosphorylated tau (P-tau) demonstrate good diagnostic accuracy for Alzheimer's disease (AD). However, there are large variations in biomarker measurements between studies, and between and within laboratories. The Alzheimer's Association has initiated a global quality control program to estimate and monitor the variability of measurements, quantify batch-to-batch assay variations, and identify sources of variability. In this article, we present the results from the first two rounds of the program. Methods: The program is open to laboratories using commercially available kits for A beta, T-tau, or P-tau. CSF samples (aliquots of pooled CSF) are sent for analysis several times a year from the Clinical Neurochemistry Laboratory at the Mölndal campus of the University of Gothenburg, Sweden. Each round consists of three quality control samples. Results: Forty laboratories participated. Twenty-six used INNOTEST enzyme-linked immunosorbent assay kits, 14 used Luminex xMAP with the INNO-BIA AlzBio3 kit (both measure A beta-(1-42), P-tau(181P), and T-tau), and 5 used Meso Scale Discovery with the A beta triplex (A beta N-42, A beta N-40, and A beta N-38) or T-tau kits. The total coefficients of variation between the laboratories ranged from 13% to 36%. Five laboratories analyzed the samples six times on different occasions. Within-laboratory precision differed considerably between biomarkers within individual laboratories. Conclusions: Measurements of CSF AD biomarkers show large between-laboratory variability, likely caused by factors related to analytical procedures and the analytical kits. Standardization of laboratory procedures and efforts by kit vendors to improve kit performance might lower variability and will likely increase the usefulness of CSF AD biomarkers. (C) 2011 The Alzheimer's Association. All rights reserved.
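The headline figure here, a between-laboratory coefficient of variation, is simply the standard deviation of the laboratory results divided by their mean. A minimal sketch with made-up A beta-42 values (not the program's actual data):

```python
import numpy as np

# Hypothetical A beta-42 results (pg/mL) reported by six labs for one QC sample.
lab_means = np.array([510.0, 430.0, 605.0, 480.0, 555.0, 390.0])

cv_between = lab_means.std(ddof=1) / lab_means.mean() * 100.0  # percent CV
```

With these invented values the between-lab CV is about 16%, inside the 13% to 36% range the program reports.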
Abstract:
Purpose: Several attempts to determine the transit time of a high dose rate (HDR) brachytherapy unit have been reported in the literature, with controversial results. The determination of the source speed is necessary to accurately calculate the transient dose in brachytherapy treatments. In these studies, only the average speed of the source was measured as a parameter for transit dose calculation, which does not account for the realistic movement of the source and is therefore inaccurate for numerical simulations. The purpose of this work is to report the implementation and technical design of an optical-fiber-based detector to directly measure the instantaneous speed profile of a 192Ir source in a Nucletron HDR brachytherapy unit. Methods: To accomplish this task, we have developed a setup that uses the Cerenkov light induced in optical fibers as a detection signal for the radiation source moving inside the HDR catheter. As the 192Ir source travels between two optical fibers separated by a known distance, the thresholds of the induced signals are used to extract the transit time and thus the velocity. The high resolution of the detector enables the measurement of the transit time over a short separation distance between the fibers, providing the instantaneous speed. Results: Accurate, high-resolution speed profiles of the 192Ir radiation source traveling from the safe to the end of the catheter and between dwell positions are presented. The maximum and minimum velocities of the source were found to be 52.0 +/- 1.0 and 17.3 +/- 1.2 cm/s, respectively. The authors demonstrate that the radiation source follows uniformly accelerated linear motion with acceleration |a| = 113 cm/s^2. In addition, the authors compare the average speed measured using the optical fiber detector to values reported in the literature, showing deviations of up to 265%.
Conclusions: To the best of the authors' knowledge, this is the first direct measurement of the instantaneous speed profile of a radiation source in an HDR brachytherapy unit traveling from the unit safe to the end of the catheter and between interdwell positions. The method is feasible and accurate enough to implement in quality assurance tests, and it provides a unique database for efficient computational simulations of the transient dose. (C) 2010 American Association of Physicists in Medicine. [DOI: 10.1118/1.3483780]
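The core measurement reduces to a distance divided by the time difference between two threshold crossings, and a constant-acceleration fit then follows from two such local speeds. The fiber separation, timestamps and interval below are hypothetical, chosen only to be consistent with the speeds and acceleration quoted in the abstract:

```python
# Fiber separation and threshold-crossing times are hypothetical.
d = 1.0                 # cm between the two optical fibers
t1, t2 = 0.000, 0.024   # s when each fiber's Cerenkov signal crosses threshold
v = d / (t2 - t1)       # cm/s; over a short segment this approximates the
                        # instantaneous speed

# Given two such local speeds and the time between them, a uniform-acceleration
# estimate follows (values illustrative, matching the abstract's 17.3 -> 52.0 cm/s):
v1, v2, dt = 17.3, 52.0, 0.307   # cm/s, cm/s, s
a = (v2 - v1) / dt               # cm/s^2, close to the reported |a| = 113 cm/s^2
```

The shorter the fiber separation, the closer v comes to a true instantaneous speed, which is why the detector's high time resolution matters.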
Abstract:
The ultimate check of the actual dose delivered to a patient in radiotherapy can only be achieved by using in vivo dosimetry. This work reports a pilot study to test the applicability of a thermoluminescent dosimetric system for performing in vivo entrance dose measurements in external photon beam radiotherapy. The measurements demonstrated the value of thermoluminescent dosimetry as a treatment verification method and its applicability as a part of a quality assurance program in radiotherapy. (c) 2009 Elsevier Ltd. All rights reserved.
Abstract:
Activated sludge models are used extensively in the study of wastewater treatment processes. While various commercial implementations of these models are available, many people need to code the models themselves using the simulation packages available to them. Quality assurance of such models is difficult. While benchmarking problems have been developed and are available, comparing simulation data with that of commercial models leads only to the detection, not the isolation, of errors, and identifying the errors in the code is time-consuming. In this paper, we address the problem by developing a systematic and largely automated approach to the isolation of coding errors. There are three steps: first, possible errors are classified according to their place in the model structure, and a feature matrix is established for each class of errors. Second, an observer is designed to generate residuals such that each class of errors imposes a subspace, spanned by its feature matrix, on the residuals. Finally, localising the residuals in a subspace isolates the coding errors. The algorithm proved capable of rapidly and reliably isolating a variety of single and simultaneous errors in a case study using the ASM 1 activated sludge model, in which a newly coded model was verified against a known implementation. The method is also applicable to the simultaneous verification of any two independent implementations, and hence is useful in commercial model development.
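The isolation step, attributing a residual to the error class whose feature-matrix subspace it lies in, can be sketched with a least-squares projection. The feature matrices and residual vector below are invented for illustration; they are not the paper's ASM 1 matrices:

```python
import numpy as np

def projection_residual(F, r):
    """Norm of the part of r lying outside the column space of F."""
    c, _, _, _ = np.linalg.lstsq(F, r, rcond=None)
    return np.linalg.norm(r - F @ c)

# Toy feature matrices: columns span the residual subspace each error class induces.
F_a = np.array([[1.0, 0.0], [0.0, 1.0], [0.0, 0.0]])  # error class A
F_b = np.array([[0.0], [0.0], [1.0]])                  # error class B
r = np.array([0.8, -0.3, 0.0])                         # observed residual

# The class whose subspace leaves the smallest out-of-subspace component wins.
scores = {name: projection_residual(F, r) for name, F in [("A", F_a), ("B", F_b)]}
isolated = min(scores, key=scores.get)
```

Here the residual lies exactly in class A's subspace, so the procedure isolates error class A; in practice a tolerance would absorb numerical noise.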
Abstract:
Warranty is an important element of marketing new products, as a better warranty signals higher product quality and provides greater assurance to customers. Servicing warranty involves additional costs to the manufacturer, and this cost depends on product reliability and warranty terms. Product reliability is influenced by the decisions made during the design and manufacturing of the product. As such, warranty is very important in the context of new products. Product warranty has received the attention of researchers from many different disciplines, and the literature on warranties is vast. This paper carries out a review of the literature that has appeared in the last ten years. It highlights issues of interest to manufacturers in the context of managing new products from an overall business perspective. (C) 2002 Elsevier Science B.V. All rights reserved.
Abstract:
Background: There is good evidence that angiotensin-converting enzyme (ACE) inhibitors are beneficial after myocardial infarction (MI). However, it is not known how widely this evidence is used in practice and whether all eligible patients receive this therapy. Aim: To assess the usage of ACE inhibitors in patients after MI in a large teaching hospital. Method: A one-month prospective analysis, combined with a three-month retrospective analysis, was conducted at the Royal Brisbane Hospital (RBH) in February-March 2000. Patients admitted with an MI, or diagnosed with an MI during admission, from November 1999 to March 2000 were identified from the coronary care unit (CCU) records. Inpatient medication charts and outpatient records were then reviewed. Information collected included: ACE inhibitor use, doses, reasons for prescribing or not prescribing ACE inhibitors, and ACE inhibitor prescribers (cardiologists or general physicians). Results: Forty-four patients with an MI were included in the study, 28 of whom (64%) were prescribed ACE inhibitors. Twenty-four of the 28 patients on ACE inhibitors were prescribed perindopril. The major reason given for prescribing ACE inhibitors was signs of congestive cardiac failure. All ACE inhibitors initiated in patients after MI at RBH were ordered by cardiologists. Conclusion: ACE inhibitors were prescribed appropriately in 88% of patients who met the criteria for their use. This high percentage of appropriate prescribing was encouraging. Re-evaluation as part of an ongoing quality assurance activity could be used to ensure this is maintained.
Abstract:
Food safety concerns have escalated in China as they have elsewhere, especially in relation to meats. Beef production and consumption have increased proportionately faster than all other meats over the last two decades. Yet the slaughtering, processing and marketing of beef remain, for the most part, extremely primitive when compared with Western beef supply chains. By comparing the economics of household slaughtering with that of various types of abattoirs, this paper explains why household slaughtering and wet markets still dominate beef processing and distribution in China. The negative economic, social and industry-development implications of enforcing more stringent food safety regulations are highlighted. The willingness and capacity of consumers to pay the added cost of better inspection and other services to guarantee food safety is investigated. In this context, the paper also evaluates the market opportunities for both domestic and imported Green Beef. The paper questions the merit of policy initiatives aimed at modernising Chinese beef supply chains for the mass market along Western lines. (C) 2002 Elsevier Science Ltd. All rights reserved.
Abstract:
The flock-level sensitivities of pooled faecal culture and of serological testing using AGID for the detection of ovine Johne's disease-infected flocks were estimated using non-gold-standard methods. The two tests were compared in an extensive field trial in 296 flocks in New South Wales during 1998. In each flock, a sample of sheep was selected and tested for ovine Johne's disease using both the AGID and pooled faecal culture. The flock-specificity of pooled faecal culture was also estimated from the results of surveillance and market-assurance testing in New South Wales. The overall flock-sensitivity of pooled faecal culture was 92% (95% CI: 82.4-97.4%), compared to 61% (50.5-70.9%) for serology (assuming that both tests were 100% specific). In low-prevalence flocks (estimated prevalence
Abstract:
For dynamic simulations to be credible, verification of the computer code must be an integral part of the modelling process. This two-part paper describes a novel approach to verification through program testing and debugging. In Part 1, a methodology is presented for detecting and isolating coding errors using back-to-back testing. Residuals are generated by comparing the outputs of two independent implementations in response to identical inputs. The key feature of the methodology is that a specially modified observer is created using one of the implementations, so as to impose an error-dependent structure on these residuals. Each error can be associated with a fixed and known subspace, permitting errors to be isolated to specific equations in the code. It is shown that the geometric properties extend to multiple errors in either one of the two implementations. Copyright (C) 2003 John Wiley & Sons, Ltd.
Abstract:
In Part 1 of this paper, a methodology for back-to-back testing of simulation software was described, in which residuals with error-dependent geometric properties were generated. A set of potential coding errors was enumerated, along with a corresponding set of feature matrices describing the geometric properties imposed on the residuals by each of the errors. In this part of the paper, an algorithm is developed to isolate the coding errors present by analysing the residuals. A set of errors is isolated when the subspace spanned by their combined feature matrices corresponds to that of the residuals. Individual feature matrices are compared to the residuals and classified as 'definite', 'possible' or 'impossible'. The status of 'possible' errors is resolved using a dynamic subset testing algorithm. To demonstrate and validate the testing methodology presented in Part 1 and the isolation algorithm presented in Part 2, a case study is presented using a model for biological wastewater treatment. Both single and simultaneous errors deliberately introduced into the simulation code are correctly detected and isolated. Copyright (C) 2003 John Wiley & Sons, Ltd.
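The 'definite'/'possible'/'impossible' classification can be illustrated by splitting a residual into the component inside a feature matrix's column space and the component outside it. The tolerance, matrix and vectors below are invented for illustration, not taken from the paper:

```python
import numpy as np

def classify(F, r, tol=1e-8):
    """Classify feature matrix F against residual r:
    'definite'   - r lies entirely in span(F),
    'impossible' - r is orthogonal to span(F),
    'possible'   - r has components both inside and outside span(F)."""
    c, _, _, _ = np.linalg.lstsq(F, r, rcond=None)
    inside = np.linalg.norm(F @ c)        # component inside span(F)
    outside = np.linalg.norm(r - F @ c)   # component outside span(F)
    if outside < tol:
        return "definite"
    if inside < tol:
        return "impossible"
    return "possible"

F = np.array([[1.0], [0.0], [0.0]])  # toy feature matrix spanning the x-axis
r_def = classify(F, np.array([2.0, 0.0, 0.0]))
r_imp = classify(F, np.array([0.0, 1.0, 0.0]))
r_pos = classify(F, np.array([1.0, 1.0, 0.0]))
```

A 'possible' verdict means the error alone cannot explain the residuals, which is exactly the ambiguity the paper's dynamic subset testing algorithm resolves by testing combinations of errors.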
Abstract:
This article analyses the question of programming quality on Brazilian television in light of the proposal for a new regulatory framework for the electronic social communication sector. Among other provisions, this new law will regulate Article 221 of the Federal Constitution, which sets out the principles that television content must follow. Quality is thus defined with two aspects in mind: diversity and the qualifications to freedom of expression, both provided for in the Federal Constitution. Based on this conceptualisation, the article proposes instruments for social control over television content and guarantees of the means for programming diversity. Regarding the first aspect, it recommends transparent action by a future regulatory agency and the implementation of a mechanism for individual control of programming. As for diversity, it stresses the importance of strengthening public television stations and of government measures to encourage the multiprogramming made possible by the advent of digital technology.
Abstract:
This study examines the relative importance of values and interests in Obama's foreign policy, focusing on crucial cases: the military actions related to Afghanistan, Iraq, Libya, the non-intervention in Syria, Al-Qaeda and ISIL. We argue that his "leading from behind" strategy is not very distant from the foreign and defense strategies of his post-Cold War predecessors, in which democracy is seen as an assurance of security. According to Obama's strategy, Americans will only provide support for the building of democracy in the target countries, while this task should be performed by the locals themselves. Americans will also provide military training to the new governments so that they can be responsible for their own security, including preventing the regrouping of terrorists on their soil. Although Obama opposes the imposition of democracy by the use of force, empirical data show that his administration is "not prepared to accept" any option that threatens US security or American liberal-democratic values, in this way bringing values and interests very close to each other.
Abstract:
This paper presents a catalog of smells in the context of interactive applications. These so-called usability smells are indicators of poor design in an application's user interface, with the potential to hinder not only its usability but also its maintenance and evolution. To eliminate such usability smells, we discuss a set of program/usability refactorings. To validate the presented usability smell catalog and the associated refactorings, we report a preliminary empirical study with software developers in the context of a real open-source hospital management application. Moreover, a tool that computes graphical user interface behavior models, given an application's source code, is used to automatically detect usability smells at the model level.
Abstract:
A Quality Management System has been implemented at the René Rachou Research Center since 2003. This study investigated its importance for collaborators (Cs) working in laboratories. It was a quantitative, descriptive study performed with a group of 113 collaborators, based on the World Health Organization handbook Quality Practices in Basic Biomedical Research. The questionnaires evaluated the parameters using a Likert scale. Biosafety, training and ethics were considered the most important parameters. Supervision and quality assurance, data recording, study plan, SOPs and file storage received intermediate evaluations. The lowest frequency of responses was obtained for result reporting, result verification, personnel and publishing practices. Understanding the perception of the collaborators allows the development of improvement actions aimed at building a training program and directing strategies for disseminating quality.