951 results for quantitative evaluation
Abstract:
Background: In January 2011, Spain modified the clean air legislation in force since 2006, removing all existing exceptions applicable to hospitality venues. Although this legal reform was backed by all political parties with parliamentary representation, the government's initiative was contested by the tobacco industry and its allies in the hospitality industry. One of the most frequently voiced arguments against the reform was its potentially disruptive effect on the revenue of hospitality venues. This paper evaluates the impact of the reform on household expenditure at restaurants and at bars and cafeterias. Methods and empirical strategy: We use micro-data from the Encuesta de Presupuestos Familiares (EPF) for the years 2006 to 2012 to estimate "two-part" models in which the probability of observing a positive expenditure and, for those who spend, the expected level of expenditure are functions of an array of explanatory variables. We apply a before-after analysis with a wide range of controls for confounding factors and a flexible modeling of time effects. Results: In line with the majority of studies that analyze the effects of smoking bans using objective data, our results suggest that the reform did not cause reductions in households' expenditures on restaurant services or on bar and cafeteria services.
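A minimal sketch of a two-part estimator of the kind described above, using Python and statsmodels; the file name, column names (restaurant_spend, post_ban, hh_income, hh_size, quarter), and covariate list are hypothetical placeholders, not the paper's actual EPF specification:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical EPF extract: one row per household observation.
epf = pd.read_csv("epf_2006_2012.csv")
epf["any_spend"] = (epf["restaurant_spend"] > 0).astype(int)

# Part 1: probability of any positive restaurant expenditure.
part1 = smf.probit(
    "any_spend ~ post_ban + hh_income + hh_size + C(quarter)", data=epf
).fit()

# Part 2: expenditure level, conditional on spending (log scale).
spenders = epf[epf["restaurant_spend"] > 0].copy()
spenders["log_spend"] = np.log(spenders["restaurant_spend"])
part2 = smf.ols(
    "log_spend ~ post_ban + hh_income + hh_size + C(quarter)", data=spenders
).fit()

# Unconditional expected spending, retransformed with Duan's smearing factor.
smearing = np.exp(part2.resid).mean()
expected_spend = part1.predict(epf) * np.exp(part2.predict(epf)) * smearing
```

The coefficient on the post-ban indicator in each part carries the before-after comparison; the paper's actual specification controls for many more confounders and models time effects more flexibly.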
Abstract:
Although numerous positron emission tomography (PET) studies with 18F-fluorodeoxyglucose (FDG) have reported quantitative results on cerebral glucose kinetics and consumption, there is large variation between the absolute values found in the literature. One of the underlying causes is the inconsistent use of the lumped constants (LCs), the derivation of which is often based on multiple assumptions that render absolute numbers imprecise and errors hard to quantify. We combined a kinetic FDG-PET study with magnetic resonance spectroscopic imaging (MRSI) of glucose dynamics in Sprague-Dawley rats to obtain a more comprehensive view of brain glucose kinetics and to determine a reliable value for the LC under isoflurane anaesthesia. Maps of Tmax/CMRglc derived from MRSI data, together with Tmax determined from PET kinetic modelling, allowed us to obtain an LC-independent CMRglc. The LC was estimated to range from 0.33 ± 0.07 in retrosplenial cortex to 0.44 ± 0.05 in hippocampus, yielding CMRglc of 62 ± 14 and 54 ± 11 μmol/min/100 g, respectively. These newly determined LCs for four distinct areas of the rat brain under isoflurane anaesthesia provide a means of comparing the growing amount of FDG-PET data available from translational studies.
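Although the abstract does not spell out the formulas, the LC-independent computation it describes reduces to dividing the PET-derived maximal transport rate by the MRSI-derived Tmax/CMRglc ratio; the second relation below is the standard FDG kinetic form, assumed here rather than quoted from the paper:

```latex
\mathrm{CMR}_{\mathrm{glc}}
  = \frac{T_{\max}^{\,\mathrm{PET}}}
         {\left( T_{\max}/\mathrm{CMR}_{\mathrm{glc}} \right)_{\mathrm{MRSI}}},
\qquad
\mathrm{LC} = \frac{K_i \, C_p}{\mathrm{CMR}_{\mathrm{glc}}}
```

where K_i is the FDG net uptake rate constant from kinetic modelling and C_p the plasma glucose concentration.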
Abstract:
What follows are the refined guidelines from the Thin Maintenance Surface: Phase II Report. For that report, test sections were created and monitored, along with some existing test sections. The guidelines below were developed from the monitoring and evaluation of these test sections, from literature reviews, and from the authors' experience and knowledge. More information about thin maintenance surfaces and their uses can be found in the above-mentioned report.
Abstract:
Professional cleaning is a basic service occupation with a wide variety of tasks carried out in all kinds of sectors and workplaces by a large workforce. One important risk for cleaning workers is exposure to the chemical substances present in cleaning products. Monoethanolamine was found to be frequently present in cleaning products such as general-purpose cleaners, bathroom cleaners, floor cleaners, and kitchen cleaners. Monoethanolamine can injure the skin, and exposure to it has been associated with asthma even at low air concentrations. It is a strong irritant and known to be involved in sensitizing mechanisms. It is very likely that the use of cleaning products containing monoethanolamine gives rise to respiratory and dermal exposures; there is therefore a need to further investigate both routes of exposure. The determination of monoethanolamine has traditionally been difficult, and the available analytical methods are poorly adapted to occupational exposure assessments. For monoethanolamine air concentrations, a sampling and analytical method was already available and could be used. However, a method for analysing samples from skin exposure assessments, as well as samples from skin permeation experiments, was missing. One main objective of this master's thesis was therefore to identify a previously developed and described analytical method for measuring monoethanolamine in aqueous solutions and to set it up in the laboratory. Monoethanolamine was analysed after a derivatisation reaction with o-phthaldialdehyde. The derivatised, fluorescent monoethanolamine was then separated by high-performance liquid chromatography and detected with a fluorescence detector. The method was found to be suitable for qualitative and quantitative analysis of monoethanolamine. An exposure assessment was conducted in the cleaning sector to measure respiratory and dermal exposures to monoethanolamine during floor cleaning. Stationary air samples (n = 36) were collected in 8 companies, and samples for dermal exposure (n = 12) were collected in two companies. The air concentrations detected (mean = 0.18 mg/m3, SD = 0.23 mg/m3, geometric mean = 0.09 mg/m3, GSD = 3.50) were mostly below 1/10 of the Swiss 8-h time-weighted-average occupational exposure limit. Factors that influenced the measured monoethanolamine air concentrations were room size, the ventilation system, the concentration of monoethanolamine in the cleaning product, and the amount of monoethanolamine used. Measured skin exposures ranged from 0.6 to 128.4 mg/sample. Some cleaning workers who participated in the skin exposure assessment did not use gloves and had direct contact with the solutions containing the cleaning product and monoethanolamine; during the entire sampling campaign, cleaning workers mostly did not use gloves. Cleaning workers are thus at risk of regular exposure to low air concentrations of monoethanolamine. This exposure may be problematic for workers who suffer from allergic reactions (e.g., asthma); in that case, substitution of the cleaning product may be a good preventive measure, as several different cleaning products are available for similar cleaning tasks. Currently there are no occupational exposure limits against which to compare the observed skin exposures. To prevent skin exposures, adaptations of the cleaning techniques and the use of gloves should be considered.
Simultaneous skin and airborne exposures might compound adverse health effects. Overall, the risks caused by exposure to monoethanolamine are considered low to moderate when cleaning products are used correctly. Whenever possible, skin exposure should be avoided. Further research should focus especially on the dermal exposure routes, as very high exposures might occur through skin contact with cleaning products, and skin exposure may cause dermatitis as well as sensitization. In addition, new biomedical insights are needed to better understand the risks of dermal exposure; skin permeability experiments should therefore be considered.
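As a minimal sketch of the summary statistics reported above, assuming a handful of illustrative air concentrations (the OEL value is a placeholder for the comparison, not a quoted Swiss limit):

```python
import numpy as np

# Illustrative stationary air samples, mg/m3 (not the study's raw data).
air = np.array([0.02, 0.05, 0.07, 0.09, 0.12, 0.18, 0.31, 0.55])

mean, sd = air.mean(), air.std(ddof=1)
gm = np.exp(np.log(air).mean())         # geometric mean
gsd = np.exp(np.log(air).std(ddof=1))   # geometric standard deviation

OEL_8H = 5.1  # placeholder 8-h TWA limit, mg/m3; check the current Swiss value
frac_below_tenth = (air < 0.1 * OEL_8H).mean()  # share of samples below 1/10 OEL
```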
Abstract:
Molecular monitoring of BCR/ABL transcripts by real-time quantitative reverse transcription PCR (qRT-PCR) is an essential technique for the clinical management of patients with BCR/ABL-positive CML and ALL. Although quantitative BCR/ABL assays are performed in hundreds of laboratories worldwide, results among these laboratories cannot be reliably compared due to heterogeneity in test methods, data analysis, reporting, and the lack of quantitative standards. Recent efforts towards standardization have been limited in scope. Aliquots of RNA were sent to clinical test centers worldwide in order to evaluate methods and reporting for e1a2, b2a2, and b3a2 transcript levels using their own qRT-PCR assays. Total RNA was isolated from tissue culture cells that expressed each of the different BCR/ABL transcripts. Serial log dilutions, ranging from 10^0 to 10^-5, were prepared in RNA isolated from HL60 cells. Laboratories performed 5 independent qRT-PCR reactions for each sample type at each dilution. In addition, 15 qRT-PCR reactions of the 10^-3 b3a2 RNA dilution were run to assess reproducibility within and between laboratories. Participants were asked to run the samples following their standard protocols and to report cycle threshold (Ct), quantitative values for BCR/ABL and housekeeping genes, and ratios of BCR/ABL to housekeeping genes for each sample RNA. Thirty-seven (n=37) participants submitted qRT-PCR results for analysis (36, 37, and 34 labs generated data for b2a2, b3a2, and e1a2, respectively). The limit of detection for this study was defined as the lowest dilution at which a Ct value could be detected for all 5 replicates. For b2a2, 15, 16, 4, and 1 lab(s) showed a limit of detection at the 10^-5, 10^-4, 10^-3, and 10^-2 dilutions, respectively. For b3a2, 20, 13, and 4 labs showed a limit of detection at the 10^-5, 10^-4, and 10^-3 dilutions, respectively. For e1a2, 10, 21, 2, and 1 lab(s) showed a limit of detection at the 10^-5, 10^-4, 10^-3, and 10^-2 dilutions, respectively. Log %BCR/ABL ratio values provided a method for comparing results between the different laboratories for each BCR/ABL dilution series. Linear regression analysis revealed concordance among the majority of participant data over the 10^-1 to 10^-4 dilutions. The overall slope values showed comparable results among the majority of b2a2 (mean = 0.939; median = 0.9627; range 0.399-1.1872), b3a2 (mean = 0.925; median = 0.922; range 0.625-1.140), and e1a2 (mean = 0.897; median = 0.909; range 0.5174-1.138) laboratory results (Figs. 1-3). Thirty-four (n=34) of the 37 laboratories reported Ct values for all 15 replicates, and only those with a complete data set were included in the inter-lab calculations. Eleven laboratories either did not report their copy number data or used other reporting units such as nanograms or cell numbers; therefore, only 26 laboratories were included in the overall analysis of copy numbers. The median copy number was 348.4, with a range from 15.6 to 547,000 copies (approximately a 4.5 log difference); the median intra-lab %CV was 19.2%, with a range from 4.2% to 82.6%. While our international performance evaluation using serially diluted RNA samples has reinforced the fact that heterogeneity exists among clinical laboratories, it has also demonstrated that performance within a laboratory is, overall, very consistent. Accordingly, the availability of defined BCR/ABL RNAs may facilitate the validation of all phases of quantitative BCR/ABL analysis and may be extremely useful as a tool for monitoring assay performance.
Ongoing analyses of these materials, along with the development of additional control materials, may solidify consensus around their application in routine laboratory testing and their possible integration into worldwide efforts to standardize quantitative BCR/ABL testing.
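A sketch of the two per-laboratory computations described above, with illustrative numbers in place of submitted data: the slope of log %BCR/ABL ratio against log dilution (ideally close to 1) and the intra-laboratory %CV of replicate copy numbers:

```python
import numpy as np

# One lab's dilution series: log10 dilution vs. measured log10(%BCR/ABL).
log_dilution = np.array([-1.0, -2.0, -3.0, -4.0])
log_ratio = np.array([1.02, 0.05, -0.98, -2.10])   # illustrative values
slope, intercept = np.polyfit(log_dilution, log_ratio, 1)

# Intra-lab reproducibility over replicate copy-number measurements.
copies = np.array([348, 401, 296, 372, 310, 389])  # illustrative replicates
cv_pct = copies.std(ddof=1) / copies.mean() * 100
```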
Abstract:
Large numbers of functionally competent T cells are required to protect against diseases for which antibody-based vaccines have consistently failed (1), as is the case for many chronic viral infections and solid tumors. Therapeutic vaccines therefore aim at the induction of strong antigen-specific T-cell responses. Novel adjuvants have considerably improved the capacity of synthetic vaccines to activate T cells, but more research is necessary to identify optimal compositions of potent vaccine formulations. Consequently, there is a great need for accurate methods for the efficient identification of antigen-specific T cells and the assessment of their functional characteristics directly ex vivo. In this regard, hundreds of clinical vaccination trials have been conducted during the last 15 years, and monitoring techniques have become more and more standardized.
Abstract:
To compare the hip fracture risk prediction of several quantitative bone ultrasound (QUS) devices, 7062 Swiss women ≥70 years of age were measured with three QUSs (two of the heel, one of the phalanges). Both heel QUSs were predictive of hip fracture risk, whereas the phalanges QUS was not. INTRODUCTION: As the number of hip fractures is expected to increase over the coming decades, it is important to develop strategies to detect subjects at risk. Quantitative bone ultrasound (QUS), a transportable method free of ionizing radiation, could be useful for this purpose. MATERIALS AND METHODS: The Swiss Evaluation of the Methods of Measurement of Osteoporotic Fracture Risk (SEMOF) study is a multicenter cohort study that compared three QUSs for the assessment of hip fracture risk in a sample of 7609 elderly ambulatory women ≥70 years of age. Two QUSs measured the heel (Achilles+; GE-Lunar and Sahara; Hologic), and one measured the phalanges (DBM Sonic 1200; IGEA). Cox proportional hazards regression was used to estimate the hazard of the first hip fracture, adjusted for age, BMI, and center, and areas under the ROC curves were calculated to compare the devices and their parameters. RESULTS: Of the 7609 women included in the study, 7062 women 75.2 ± 3.1 (SD) years of age were prospectively followed for 2.9 ± 0.8 years. Eighty women reported a hip fracture. A decrease of 1 SD in the QUS variables corresponded to an increase in hip fracture risk from 2.3 (95% CI, 1.7, 3.1) to 2.6 (95% CI, 1.9, 3.4) for the three variables of Achilles+ and from 2.2 (95% CI, 1.7, 3.0) to 2.4 (95% CI, 1.8, 3.2) for the three variables of Sahara. Risk gradients did not differ significantly among the variables of the two heel QUS devices. By contrast, the phalanges QUS (DBM Sonic 1200) was not predictive of hip fracture risk, with an adjusted hazard ratio of 1.2 (95% CI, 0.9, 1.5), even after reanalysis of the digitized data and the use of different cut-off levels (1700 or 1570 m/s). CONCLUSIONS: In this population of elderly women, the heel QUS devices were both predictive of hip fracture risk, whereas the phalanges QUS device was not.
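A minimal sketch of the adjusted Cox model described in the methods, using the lifelines package; the file and column names are hypothetical, and the center covariate (categorical in the study) is omitted for brevity:

```python
import pandas as pd
from lifelines import CoxPHFitter

# Hypothetical layout: follow-up time in years, hip_fracture event indicator,
# and a QUS parameter standardized to z-scores.
df = pd.read_csv("semof.csv")
df["qus_decrease"] = -df["qus_z"]  # so exp(coef) = hazard ratio per 1 SD decrease

cph = CoxPHFitter()
cph.fit(df[["time", "hip_fracture", "qus_decrease", "age", "bmi"]],
        duration_col="time", event_col="hip_fracture")
cph.print_summary()
```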
Abstract:
This paper presents the evaluation results of the methods submitted to Challenge US: Biometric Measurements from Fetal Ultrasound Images, a segmentation challenge held at the IEEE International Symposium on Biomedical Imaging 2012. The challenge was set up to compare and evaluate current fetal ultrasound image segmentation methods. It consisted of automatically segmenting fetal anatomical structures to measure standard obstetric biometric parameters from 2D fetal ultrasound images acquired from fetuses at different gestational ages (21, 28, and 33 weeks) and with varying image quality, to reflect data encountered in real clinical environments. Four independent sub-challenges were proposed, according to the objects of interest measured in clinical practice: abdomen, head, femur, and whole fetus. Five teams participated in the head sub-challenge and two teams in the femur sub-challenge, including one team that tackled both; no team attempted the abdomen or whole fetus sub-challenges. The challenge goals were two-fold: participants were asked to submit both the segmentation results and the measurements derived from the segmented objects. Extensive quantitative (region-based, distance-based, and Bland-Altman measurements) and qualitative evaluation was performed to compare the results from a representative selection of the methods submitted. Several experts (three for the head sub-challenge and two for the femur sub-challenge), with different degrees of expertise, manually delineated the objects of interest to define the ground truth used within the evaluation framework. For the head sub-challenge, several groups produced results that could potentially be used in clinical settings, with performance comparable to manual delineation. The femur sub-challenge showed inferior performance, both because it is a harder segmentation problem and because the submitted techniques relied more heavily on the femur's appearance.
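The region-based and distance-based evaluations mentioned above typically boil down to overlap and boundary-distance measures; below is a sketch of two common ones (Dice overlap and symmetric Hausdorff distance), assumed as representative rather than taken from the challenge's exact definitions:

```python
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def dice(a: np.ndarray, b: np.ndarray) -> float:
    """Region-based overlap between two boolean masks (1 = perfect match)."""
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum())

def hausdorff(a: np.ndarray, b: np.ndarray) -> float:
    """Symmetric Hausdorff distance between the masks' foreground point sets.
    (In practice the boundary pixels would be extracted first.)"""
    pa, pb = np.argwhere(a), np.argwhere(b)
    return max(directed_hausdorff(pa, pb)[0], directed_hausdorff(pb, pa)[0])

# Toy usage with two 2-D masks:
seg = np.zeros((64, 64), bool); seg[20:40, 20:40] = True
ref = np.zeros((64, 64), bool); ref[22:42, 18:38] = True
print(dice(seg, ref), hausdorff(seg, ref))
```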
Abstract:
Background: Optimization methods make it possible to design changes in a system so that specific goals are attained. These techniques are fundamental for metabolic engineering; however, they are not directly applicable to investigating the evolution of metabolic adaptation to environmental changes. Although biological systems have evolved by natural selection into well-adapted systems, we can hardly expect actual metabolic processes to be at the theoretical optimum that an optimization analysis would yield. More likely, natural systems are to be found in a feasible region compatible with global physiological requirements. Results: We first present a new method for globally optimizing nonlinear models of metabolic pathways based on the Generalized Mass Action (GMA) representation. The optimization task is posed as a nonconvex nonlinear programming (NLP) problem that is solved by an outer-approximation algorithm. This method relies on iteratively solving reduced NLP slave subproblems and mixed-integer linear programming (MILP) master problems, which provide valid upper and lower bounds, respectively, on the global solution to the original NLP. The capabilities of this method are illustrated through its application to the anaerobic fermentation pathway in Saccharomyces cerevisiae. We next introduce a method to identify the feasible parametric regions that allow a system to meet a set of physiological constraints that can be represented mathematically through algebraic equations. This technique is based on applying the outer-approximation algorithm iteratively over a reduced search space in order to identify regions that contain feasible solutions to the problem and discard others in which no feasible solution exists. As an example, we characterize the feasible enzyme activity changes that are compatible with an appropriate adaptive response of the yeast Saccharomyces cerevisiae to heat shock. Conclusion: Our results show the utility of the suggested approach for investigating the evolution of adaptive responses to environmental changes. The proposed method can be used in other important applications, such as the evaluation of parameter changes that are compatible with health and disease states.
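As a minimal illustration of the GMA representation and the optimization task it induces, here is a toy power-law model with a local NLP solve in scipy standing in for the paper's outer-approximation global scheme; the pathway, kinetic orders, and bounds are invented:

```python
import numpy as np
from scipy.optimize import minimize

# Toy GMA model: each flux is a power law, v_i = gamma_i * prod_j x_j**f_ij.
gamma = np.array([1.0, 0.5])        # rate constants (illustrative)
F = np.array([[0.8, 0.0],           # kinetic orders f_ij (illustrative)
              [0.5, 0.3]])

def fluxes(x):
    return gamma * np.prod(x ** F, axis=1)

# Maximize the second flux over variable levels within physiological bounds.
res = minimize(lambda x: -fluxes(x)[1], x0=np.ones(2),
               bounds=[(0.1, 10.0), (0.1, 10.0)])
print(res.x, fluxes(res.x))
```

In this monotone toy case the optimum simply sits at the upper bounds; real GMA models add mass-balance constraints that make the problem nonconvex, which is why the paper resorts to an outer-approximation scheme with MILP master problems.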
Abstract:
The evaluation of investments in advanced technology is one of the most important decision-making tasks. Its importance is even more pronounced considering the huge budgets involved and the strategic, economic, and analytic justification needed to shorten design and development time. Choosing the most appropriate technology requires an accurate and reliable system that can guide decision makers through such a complicated task. Currently, several Information and Communication Technology (ICT) manufacturers that design global products are seeking local firms to act as their sales and service representatives (called distributors) to the end user. At the same time, the end user or customer is also searching for the best possible deal for their investment in ICT projects. The objective of this research is therefore to present a holistic decision support system to assist decision makers in Small and Medium Enterprises (SMEs) - working either individually or in a group - in evaluating the investment required to become an ICT distributor or an ICT end user. The model is composed of the Delphi/MAH (Maximising Agreement Heuristic) analysis, a well-known quantitative method in Group Support Systems (GSS), which is applied to gather average ranking data from among decision makers (DMs). The Analytic Network Process (ANP) is then brought in for holistic analysis: it performs quantitative and qualitative analysis simultaneously. The illustrative data were obtained from industrial entrepreneurs using the Group Support System (GSS) laboratory facilities at Lappeenranta University of Technology, Finland, and in Thailand. The result of the research, which is currently implemented in Thailand, can benefit industry in the evaluation of becoming an ICT distributor or an ICT end user, particularly in the assessment of Enterprise Resource Planning (ERP) programmes. After the model was put to the test in in-depth collaboration with industrial entrepreneurs in Finland and Thailand, a sensitivity analysis was also performed to validate the robustness of the model. The contribution of this research lies in developing a new approach and the Delphi/MAH software to analyse the value of becoming an ERP distributor or end user, in a way that is flexible and applicable to entrepreneurs looking for the most appropriate such investment. The main advantage of this research over others is that the model can deliver the value of becoming an ERP distributor or end user as a single number, which makes it easier for DMs to choose the most appropriate ERP vendor. A related advantage is that the model can include qualitative data as well as quantitative data, since results based on quantitative data alone can be misleading and inadequate; as the case studies show, quantitative and qualitative analysis need to be used together.
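For the ANP step, global priorities are conventionally obtained as the limit of the weighted supermatrix; a small numpy sketch with an invented 3x3 column-stochastic matrix (not data from this study):

```python
import numpy as np

# Invented column-stochastic weighted supermatrix over three elements.
W = np.array([[0.0, 0.5, 0.3],
              [0.6, 0.0, 0.7],
              [0.4, 0.5, 0.0]])

# Raise W to successive powers until the columns stabilize; for a primitive
# matrix the steady-state column gives the global priorities for ranking.
P = W.copy()
for _ in range(500):
    nxt = P @ W
    if np.allclose(nxt, P, atol=1e-12):
        break
    P = nxt
priorities = P[:, 0]
print(priorities)
```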
Abstract:
This paper describes an evaluation framework that allows a standardized and quantitative comparison of IVUS lumen and media segmentation algorithms. The framework was introduced at the MICCAI 2011 Computing and Visualization for (Intra)Vascular Imaging (CVII) workshop, comparing the results of the eight teams that participated. We describe the available database, comprising multi-center, multi-vendor, and multi-frequency IVUS datasets, their acquisition, the creation of the reference standard, and the evaluation measures. The approaches address segmentation of the lumen, the media, or both borders; semi- or fully-automatic operation; and 2-D vs. 3-D methodology. Three performance measures for quantitative analysis have been proposed. The results of the evaluation indicate that segmentation of the vessel lumen and media is possible with an accuracy comparable to manual annotation when semi-automatic methods are used, and that encouraging results can also be obtained with fully automatic segmentation. The analysis performed in this paper also highlights the challenges in IVUS segmentation that remain to be solved.
Abstract:
Reversed-phase liquid chromatography (RPLC) coupled to mass spectrometry (MS) is the gold-standard technique in bioanalysis. However, hydrophilic interaction chromatography (HILIC) can represent a viable alternative to RPLC for the analysis of polar and/or ionizable compounds, as it often provides higher MS sensitivity and alternative selectivity. Nevertheless, this technique can also be prone to matrix effects (ME), one of the major issues in quantitative LC-MS bioanalysis. To ensure acceptable method performance (i.e., trueness and precision), careful evaluation and minimization of ME are required. In the present study, the incidence of ME in HILIC-MS/MS and RPLC-MS/MS was compared for plasma and urine samples using two representative sets of 38 pharmaceutical compounds and 40 doping agents, respectively. The optimal generic chromatographic conditions in terms of selectivity with respect to interfering compounds were established in both chromatographic modes by testing three different stationary phases in each mode with different mobile phase pH values. A second step involved the assessment of ME in RPLC and HILIC under the best generic conditions, using the post-extraction addition method. Biological samples were prepared using two different pre-treatments: a non-selective sample clean-up procedure (protein precipitation for plasma and simple dilution for urine) and a selective sample preparation, i.e., solid-phase extraction for both matrices. The non-selective pre-treatments led to significantly less ME in RPLC than in HILIC, regardless of the matrix. On the contrary, HILIC appeared to be a valuable alternative to RPLC for plasma and urine samples treated by selective sample preparation. Indeed, with selective sample preparation, the compounds affected by ME differed between HILIC and RPLC, and ME occurrence in RPLC vs. HILIC was generally lower for urine samples and similar for plasma samples. The complementarity of the two chromatographic modes was also demonstrated, as ME was only rarely observed for urine and plasma samples when the more appropriate chromatographic mode was selected.
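The post-extraction addition assessment mentioned above is conventionally quantified by comparing the analyte peak area in a sample spiked after extraction with that in a neat standard solution; a sketch of the usual calculation (the function name and example values are ours):

```python
def matrix_effect_pct(area_post_extraction: float, area_neat: float) -> float:
    """Post-extraction addition ME%: 100 means no matrix effect,
    below 100 ion suppression, above 100 ion enhancement."""
    return 100.0 * area_post_extraction / area_neat

# Example: a 35% signal suppression for one compound in plasma.
print(matrix_effect_pct(area_post_extraction=6.5e5, area_neat=1.0e6))  # 65.0
```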
Abstract:
Objective: The objective of the present study was to evaluate current radiographic parameters designed to investigate adenoid hypertrophy and nasopharyngeal obstruction, and to present an alternative radiographic assessment method. Materials and Methods: Children (4 to 14 years old) who presented with nasal obstruction or oral breathing complaints were submitted to cavum radiographic examination. One hundred and twenty records were evaluated according to quantitative radiographic parameters, and the data were correlated with a gold-standard videonasopharyngoscopic study with respect to the percentage of choanal obstruction. Subsequently, a regression analysis was performed in order to create an original model by which the percentage of choanal obstruction could be predicted. Results: The quantitative parameters demonstrated moderate, if not weak, correlation with the real percentage of choanal obstruction. The regression model (110.119*A/N) demonstrated a satisfactory ability to “predict” the actual percentage of choanal obstruction. Conclusion: Since current adenoid quantitative radiographic parameters have limitations, the model presented in this study may be considered an alternative assessment method in cases where videonasopharyngoscopic evaluation is unavailable.
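A one-line sketch of the regression model reported above, assuming A and N are the adenoidal and nasopharyngeal measurements of the conventional A/N ratio (the abstract gives only the expression 110.119*A/N; the example values are invented):

```python
def predicted_choanal_obstruction_pct(a_mm: float, n_mm: float) -> float:
    """Predicted percentage of choanal obstruction from the A/N ratio."""
    return 110.119 * (a_mm / n_mm)

print(predicted_choanal_obstruction_pct(18.0, 25.0))  # A/N = 0.72 -> ~79%
```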
Abstract:
The purpose of the METKU Project (Development of Maritime Safety Culture) is to study how the ISM Code has influenced safety culture in the maritime industry. This literature review was written as part of Work Package 2, which is conducted by the University of Turku, Centre for Maritime Studies. Maritime traffic is growing rapidly in the Baltic Sea, which leads to a growing risk of maritime accidents. Particularly in the Gulf of Finland, the high volume of traffic creates a high risk of maritime accidents. The growing risks are good reason to carry out research on maritime safety and the effectiveness of safety measures such as safety management systems. In order to reduce maritime safety risks, safety management systems should be further developed, and the METKU Project was launched to examine the improvements that can be made to them. Human error is considered the most important cause of maritime accidents. The International Safety Management Code (the ISM Code) was established to reduce the occurrence of human error by creating a safety-oriented organizational culture for the maritime industry. The ISM Code requires that a company provide safe practices in ship operation and a safe working environment and establish safeguards against all identified risks. The fundamental idea of the ISM Code is that companies should continuously improve safety, and the commitment of top management is essential for implementing a safety-oriented culture in a company. The ISM Code has made a significant contribution to the progress of maritime safety in recent years: shipping companies and ships' crews are more environmentally friendly and more safety-oriented than 12 years ago. This has been shown by several studies analysed for this literature review. Nevertheless, the direct effect and influence of the ISM Code on maritime safety could not be well isolated, and no quantitative measurements (statistics/hard data) could be found to demonstrate the impacts of the ISM Code on maritime safety. This study finds that a safety culture has emerged and is developing in the maritime industry. Yet even though the roots of this safety culture have been established, there are still serious barriers to a breakthrough in safety management. These barriers can be seen as cultural factors hindering the safety process: although the ISM Code has been in force for over a decade, long-established behaviour rooted in the old maritime culture still occurs. In the next phase of this research project, these cultural factors will be analysed with regard to the present safety culture of the maritime industry in Finland.
Abstract:
In the context of the evidence-based practices movement, the emphasis on computing effect sizes and combining them via meta-analysis does not preclude the demonstration of functional relations. For the latter aim, we propose augmenting visual analysis to add consistency to decisions about the existence of a functional relation, without losing sight of the need for a methodological evaluation of which stimuli and which reinforcement or punishment are used to control the behavior. Four options for quantification are reviewed, illustrated, and tested with simulated data. These quantifications involve comparing the projected baseline with the actual treatment measurements, on the basis of either parametric or nonparametric statistics. The simulated data used to test the quantifications comprise nine data patterns, varying in the presence and type of effect, for both ABAB and multiple-baseline designs. Although none of the techniques is completely flawless in detecting a functional relation only when one is present, an option based on projecting the split-middle trend and allowing for data variability, as in exploratory data analysis, proves to be the best performer for most data patterns. We suggest that information on whether a functional relation has been demonstrated should be included in meta-analyses; the inverse of the data variability measure used in the quantification can also serve as a weight. We offer easy-to-use code for open-source software implementing some of the quantifications.
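A minimal sketch of the best-performing option named above: fit the split-middle trend to the baseline phase and project it across the treatment phase (the variability band from exploratory data analysis is omitted, and the data are invented):

```python
import numpy as np

def split_middle_projection(baseline: np.ndarray, n_treatment: int) -> np.ndarray:
    """Fit the split-middle trend (the line through the median point of each
    half of the baseline) and project it over the treatment-phase sessions."""
    n = len(baseline)
    # Middle session is dropped when n is odd.
    half1, half2 = baseline[: n // 2], baseline[(n + 1) // 2:]
    x1, y1 = np.median(np.arange(n // 2)), np.median(half1)
    x2, y2 = np.median(np.arange((n + 1) // 2, n)), np.median(half2)
    slope = (y2 - y1) / (x2 - x1)
    return y1 + slope * (np.arange(n, n + n_treatment) - x1)

baseline = np.array([5.0, 6.0, 5.0, 7.0, 6.0, 8.0])
projection = split_middle_projection(baseline, n_treatment=6)
# Treatment measurements consistently beyond this projection (plus a
# variability band) would support the existence of a functional relation.
```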