984 results for "Bayesian method"
Abstract:
The aim of this paper is twofold. First, we study the determinants of economic growth among a wide set of potential variables for the Spanish provinces (NUTS3). Among others, we include various types of private, public and human capital in the group of growth factors. Also, we analyse whether Spanish provinces have converged in economic terms in recent decades. The second objective is to obtain cross-section and panel data parameter estimates that are robust to model specification. For this purpose, we use a Bayesian Model Averaging (BMA) approach. The Bayesian methodology constructs parameter estimates as a weighted average of linear regression estimates for every possible combination of included variables. The weight of each regression estimate is given by the posterior probability of each model.
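A minimal sketch of the averaging step described above: coefficients are estimated for every subset of candidate regressors and combined using posterior model probabilities. The BIC approximation of those probabilities and the synthetic data are assumptions for illustration, not the paper's actual prior structure or dataset.

```python
# Illustrative Bayesian Model Averaging (BMA) over all regressor subsets.
# Posterior model probabilities are approximated with BIC weights (assumption).
import itertools
import numpy as np

def bma_estimates(X, y):
    """Average OLS coefficients over all variable subsets,
    weighted by BIC-approximated posterior model probabilities."""
    n, k = X.shape
    log_weights, coef_list = [], []
    for subset in itertools.chain.from_iterable(
            itertools.combinations(range(k), r) for r in range(1, k + 1)):
        Xs = np.column_stack([np.ones(n), X[:, subset]])
        beta, *_ = np.linalg.lstsq(Xs, y, rcond=None)
        resid = y - Xs @ beta
        sigma2 = resid @ resid / n
        loglik = -0.5 * n * (np.log(2 * np.pi * sigma2) + 1)
        bic = Xs.shape[1] * np.log(n) - 2 * loglik
        log_weights.append(-0.5 * bic)        # BIC approximation to the log posterior model prob.
        full_beta = np.zeros(k)
        full_beta[list(subset)] = beta[1:]    # coefficient is 0 when a variable is excluded
        coef_list.append(full_beta)
    w = np.exp(np.array(log_weights) - max(log_weights))
    w /= w.sum()
    return (w[:, None] * np.array(coef_list)).sum(axis=0)  # BMA coefficient estimates

# Tiny synthetic example with 3 candidate growth determinants.
rng = np.random.default_rng(0)
X = rng.normal(size=(120, 3))
y = 0.5 * X[:, 0] + 0.2 * X[:, 2] + rng.normal(scale=0.3, size=120)
print(bma_estimates(X, y))
```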
Abstract:
Recent findings suggest an association between exposure to cleaning products and respiratory dysfunctions, including asthma. However, little information is available about the quantitative airborne exposure of professional cleaners to volatile organic compounds deriving from cleaning products. During the first phase of the study, a systematic review of cleaning products was performed. Safety data sheets were reviewed to assess the most frequently added volatile organic compounds. Professional cleaning products were found to be complex mixtures of different components (3.5 ± 2.8 compounds per product), and more than 130 chemical substances listed in the safety data sheets were identified in 105 products. The main groups of chemicals were fragrances, glycol ethers, surfactants and solvents, and, to a lesser extent, phosphates, salts, detergents, pH stabilizers, acids and bases. Up to 75% of the products contained substances labeled as irritant (Xi), 64% as harmful (Xn) and 28% as corrosive (C). Hazards for the eyes (59%), the skin (50%) and by ingestion (60%) were the most frequently reported. Monoethanolamine, a strong irritant known to be involved in sensitizing mechanisms as well as allergic reactions, is frequently added to cleaning products. Determination of monoethanolamine in air has traditionally been difficult, and the available air sampling and analysis methods were poorly adapted to personal occupational air concentration assessments. A convenient method was therefore developed, with air sampling on impregnated glass fiber filters followed by one-step desorption, gas chromatography and nitrogen-phosphorus selective detection. An exposure assessment was conducted in the cleaning sector to determine airborne concentrations of monoethanolamine, glycol ethers and benzyl alcohol during different cleaning tasks performed by professional cleaning workers in different companies, and to determine background air concentrations of formaldehyde, a known indoor air contaminant. The occupational exposure study was carried out in 12 cleaning companies, and personal air samples were collected for monoethanolamine (n=68), glycol ethers (n=79), benzyl alcohol (n=15) and formaldehyde (n=45). All measured air concentrations were far below (<1/10 of) the Swiss eight-hour occupational exposure limits, except for ethylene glycol mono-n-butyl ether; for butoxypropanol and benzyl alcohol, no occupational exposure limits were available. Although detected on only one occasion, ethylene glycol mono-n-butyl ether air concentrations (n=4) were high (49.5 mg/m3 to 58.7 mg/m3), hovering around the Swiss occupational exposure limit (49 mg/m3). Background air concentrations showed no presence of monoethanolamine, while glycol ethers were often present and formaldehyde was universally detected. Exposures were influenced by the amount of monoethanolamine in the cleaning product, cross-ventilation and spraying. During the last phase of the study, the collected data were used to test an existing exposure modeling tool. The exposure estimates of this so-called Bayesian tool converged towards the measured exposure range as more measured air concentrations were added, a relationship best described by an inverse second-order equation. The results suggest that the Bayesian tool is not adapted to predicting low exposures; it should also be tested with other datasets describing higher exposures.
Low exposures to different chemical sensitizers and irritants should be further investigated to better understand the development of respiratory disorders in cleaning workers. Prevention measures should especially focus on preventing the incorrect use of cleaning products, to avoid high air concentrations close to the exposure limits.
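A hedged illustration of the inverse second-order relationship reported above between the number of measurements added and how closely the Bayesian tool's estimate approaches the measured exposure. The data points and the curve-fitting call are synthetic placeholders, not values from the study.

```python
# Fit an inverse second-order curve y = a + b/n + c/n^2 to synthetic convergence data.
import numpy as np
from scipy.optimize import curve_fit

def inverse_second_order(n, a, b, c):
    return a + b / n + c / n**2

n_measurements = np.array([1, 2, 3, 4, 6, 8, 12, 16], dtype=float)
deviation = np.array([2.4, 1.3, 0.9, 0.7, 0.5, 0.4, 0.3, 0.25])  # hypothetical gap to the measured range

params, _ = curve_fit(inverse_second_order, n_measurements, deviation)
print("fitted a, b, c:", params)
```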
Abstract:
Over the last 10 years, diffusion-weighted imaging (DWI) has become an important tool for investigating white matter (WM) anomalies in schizophrenia. Despite technological improvements and the exponential use of this technique, discrepancies remain and little is known about the optimal parameters to apply for diffusion weighting during image acquisition. Specifically, high b-value diffusion-weighted imaging, known to be more sensitive to slow diffusion, is not widely used, even though subtle myelin alterations such as those thought to occur in schizophrenia are likely to affect slow-diffusing protons. Schizophrenia patients and healthy controls were scanned with a high b-value (4000 s/mm2) protocol. Apparent diffusion coefficient (ADC) measures turned out to be very sensitive in detecting differences between schizophrenia patients and healthy volunteers, even in a relatively small sample. We speculate that this is related to the sensitivity of high b-value imaging to the slow-diffusing compartment, believed to reflect mainly the intra-axonal and myelin-bound water pool. We also compared these results to a low b-value imaging experiment performed on the same population in the same scanning session. Even though the acquisition protocols are not strictly comparable, we noticed important differences in sensitivity in favor of high b-value imaging, warranting further exploration.
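For reference, the standard single-b-value ADC calculation used in DWI is ADC = -ln(S_b / S_0) / b. The signal intensities below are hypothetical, not measurements from this study.

```python
# Apparent diffusion coefficient from one diffusion-weighted and one b=0 acquisition.
import numpy as np

b = 4000.0   # diffusion weighting in s/mm^2, as in the high b-value protocol
S0 = 1200.0  # signal intensity at b = 0 (assumed example value)
Sb = 55.0    # signal intensity at b = 4000 s/mm^2 (assumed example value)

adc = -np.log(Sb / S0) / b   # mm^2/s
print(f"ADC = {adc:.2e} mm^2/s")
```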
Abstract:
Purpose: SIOPEN scoring of 123I-mIBG imaging has been shown to predict response to induction chemotherapy and outcome at diagnosis in children with high-risk neuroblastoma (HRN). Method: Patterns of skeletal 123I-mIBG uptake were assigned numerical scores (Mscore) ranging from 0 (no metastasis) to 72 (diffuse metastases) within 12 body areas, as described previously. 271 anonymised, paired image data sets acquired at diagnosis and on completion of Rapid COJEC induction chemotherapy were reviewed, constituting a representative sample of the 1602 children treated prospectively within the HR-NBL1/SIOPEN trial. Pre- and post-treatment Mscores were compared with bone marrow cytology (BM) and 3-year event-free survival (EFS). Results: 224/271 patients showed skeletal mIBG uptake at diagnosis and were evaluable for mIBG response. Complete response (CR) on mIBG to Rapid COJEC induction was achieved by 66%, 34% and 15% of patients who had pre-treatment Mscores of <18 (n=65, 29%), 18-44 (n=95, 42%) and ≥45 (n=64, 28.5%), respectively (chi-squared test, p<0.0001). Mscore at diagnosis and on completion of Rapid COJEC correlated strongly with BM involvement (p<0.0001). The correlation of pre-treatment scores with post-treatment scores and response was highly significant (p<0.001). Most importantly, the 3-year EFS in 47 children with Mscore 0 at diagnosis was 0.68 (±0.07), by comparison with 0.42 (±0.06), 0.35 (±0.05) and 0.25 (±0.06) for patients in pre-treatment score groups <18, 18-44 and ≥45, respectively (p<0.001). An Mscore threshold of ≥45 at diagnosis was associated with significantly worse outcome by comparison with all other Mscore groups (p=0.029). The 3-year EFS of 0.53 (±0.07) for patients in metastatic CR (mIBG and BM) after Rapid COJEC (33%) is clearly superior to that of patients not achieving metastatic CR (0.24 ±0.04; p=0.005). Conclusion: SIOPEN scoring of 123I-mIBG imaging predicts response to induction chemotherapy and outcome at diagnosis in children with HRN.
Abstract:
With the trend in molecular epidemiology towards both genome-wide association studies and complex modelling, the need for large sample sizes to detect small effects and to allow for the estimation of many parameters within a model continues to increase. Unfortunately, most methods of association analysis have been restricted to either a family-based or a case-control design, resulting in the lack of synthesis of data from multiple studies. Transmission disequilibrium-type methods for detecting linkage disequilibrium from family data were developed as an effective way of preventing the detection of association due to population stratification. Because these methods condition on parental genotype, however, they have precluded the joint analysis of family and case-control data, although methods for case-control data may not protect against population stratification and do not allow for familial correlations. We present here an extension of a family-based association analysis method for continuous traits that will simultaneously test for, and if necessary control for, population stratification. We further extend this method to analyse binary traits (and therefore family and case-control data together) and accurately to estimate genetic effects in the population, even when using an ascertained family sample. Finally, we present the power of this binary extension for both family-only and joint family and case-control data, and demonstrate the accuracy of the association parameter and variance components in an ascertained family sample.
Abstract:
Nationwide, about five cents of each highway construction dollar is spent on culverts. In Iowa, average annual construction costs on the interstate, primary, and federal-aid secondary systems are about $120,000,000. Assuming the national figure applies to Iowa, about $6,000,000 are spent on culvert construction annually. For each one percent reduction in overall culvert costs, annual construction costs would be reduced by $60,000. One area of potential cost reduction lies in the sizing of the culvert. Determining the flow area and hydraulic capacity is accomplished in the initial design of the culvert. The normal design sequence is accomplished in two parts. The hydrologic portion consists of the determination of a design discharge in cubic feet per second using one of several available methods. This discharge is then used directly in the hydraulic portion of the design to determine the proper type, size, and shape of culvert to be used, based on various site and design restrictions. More refined hydrologic analyses, including rainfall-runoff analysis, flood hydrograph development, and streamflow routing techniques, are not pursued in the existing design procedure used by most county and state highway engineers.
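A quick check of the cost arithmetic stated above, written out explicitly. The dollar figures come from the abstract; nothing else is assumed.

```python
# Culvert cost arithmetic from the abstract.
annual_construction = 120_000_000                      # Iowa interstate/primary/federal-aid secondary, $/yr
culvert_share = 0.05                                   # ~5 cents of each highway construction dollar
culvert_costs = annual_construction * culvert_share    # ~$6,000,000 spent on culverts per year
savings_per_percent = culvert_costs * 0.01             # ~$60,000 saved per 1% reduction in culvert costs
print(culvert_costs, savings_per_percent)
```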
Abstract:
Macroporosity is often used in the determination of soil compaction. Reduced macroporosity can lead to poor drainage, low root aeration and soil degradation. The aim of this study was to develop and test different models to estimate macro- and microporosity efficiently, using multiple regression. Ten soils were selected within a large range of textures (sand (Sa) 0.07-0.84, silt 0.03-0.24, clay 0.13-0.78 kg kg-1) and subjected to three compaction levels (three bulk densities, BD). Two models with similar accuracy were selected, with a mean error of about 0.02 m³ m-3 (2 %). The model y = a + b·BD + c·Sa, named model 2, was selected for its simplicity to estimate macroporosity (Ma), microporosity (Mi) or total porosity (TP): Ma = 0.693 - 0.465 BD + 0.212 Sa; Mi = 0.337 + 0.120 BD - 0.294 Sa; TP = 1.030 - 0.345 BD - 0.082 Sa; porosity values are expressed in m³ m-3, BD in kg dm-3 and Sa in kg kg-1. The model was tested against 76 data sets from several other authors, with an observed error of about 0.04 m³ m-3 (4 %). Simulations of variations in BD as a function of Sa are presented for Ma = 0 and Ma = 0.10 (10 %). The macroporosity equation was rearranged to obtain other compaction indexes: (a) to simulate maximum bulk density (MBD) as a function of Sa (Equation 11), in agreement with literature data; (b) to simulate relative bulk density (RBD) as a function of BD and Sa (Equation 13); and (c) to simulate RBD as a function of Ma and Sa (Equation 16), confirming the independence of this variable from Sa for a fixed value of macroporosity and also proving the hypothesis of Hakansson & Lipiec that RBD = 0.87 corresponds approximately to 10 % macroporosity (Ma = 0.10 m³ m-3).
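A small sketch of model 2 with the coefficients reported in the abstract; units follow the text (BD in kg dm-3, Sa in kg kg-1, porosity in m³ m-3). The example soil values are hypothetical.

```python
# Porosity estimates from the model y = a + b*BD + c*Sa (model 2) of the abstract.
def macroporosity(BD, Sa):
    return 0.693 - 0.465 * BD + 0.212 * Sa

def microporosity(BD, Sa):
    return 0.337 + 0.120 * BD - 0.294 * Sa

def total_porosity(BD, Sa):
    return 1.030 - 0.345 * BD - 0.082 * Sa   # equals macro + micro

# Hypothetical soil: bulk density 1.5 kg dm-3, 40 % sand
BD, Sa = 1.5, 0.40
print(macroporosity(BD, Sa), microporosity(BD, Sa), total_porosity(BD, Sa))
```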
Abstract:
An HPLC method is presented for the identification and quantification in plasma and urine of beta-adrenergic receptor antagonists (betaxolol, carteolol, metipranolol and timolol) commonly prescribed in ophthalmology. An extraction method is described, using pindolol as an internal standard. An RSIL 10 micron column was used. The lower detection limits of the beta-blockers were found to be 4-27 ng/ml. This method is simple, rapid and sensitive; moreover, it allows the determination of 8 other beta-blockers.
Abstract:
A modified Bargmann-Wigner method is used to derive (6s + 1)-component wave equations. The relation between different forms of these equations is shown.
Abstract:
OBJECTIVE: To describe a method to obtain a profile of the duration and intensity (speed) of walking periods over 24 hours in women under free-living conditions. DESIGN: A new method based on accelerometry was designed for analyzing walking activity. In order to take into account the inter-individual variability of acceleration, an individual calibration process was used. Different experiments were performed to highlight the variability of the acceleration vs walking speed relationship, to analyze the speed prediction accuracy of the method, and to test the assessment of walking distance and duration over 24 h. SUBJECTS: Twenty-eight women were studied (mean ± s.d.): age 39.3 ± 8.9 y; body mass 79.7 ± 11.1 kg; body height 162.9 ± 5.4 cm; and body mass index (BMI) 30.0 ± 3.8 kg/m2. RESULTS: Accelerometer output was significantly correlated with speed during treadmill walking (r=0.95, P<0.01) and short unconstrained walks (r=0.86, P<0.01), although with a large inter-individual variation of the regression parameters. By using individual calibration, it was possible to predict walking speed on a standard urban circuit (predicted vs measured r=0.93, P<0.01, s.e.e.=0.51 km/h). In the free-living experiment, women spent on average 79.9 ± 36.0 (range: 31.7-168.2) min/day in displacement activities, of which discontinuous short walking activities represented about two-thirds and continuous ones one-third. Total walking distance averaged 2.1 ± 1.2 (range: 0.4-4.7) km/day, performed at an average speed of 5.0 ± 0.5 (range: 4.1-6.0) km/h. CONCLUSION: An accelerometer measuring the anteroposterior acceleration of the body can estimate walking speed together with the pattern, intensity and duration of daily walking activity.
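A minimal sketch of the individual-calibration idea described above: a per-subject linear relation between accelerometer output and known treadmill speeds is fitted, then used to predict speed during unconstrained walking. All numbers are hypothetical, not study data.

```python
# Subject-specific calibration of accelerometer output against treadmill walking speed.
import numpy as np

# Calibration session for one subject: accelerometer counts at known treadmill speeds
treadmill_speed = np.array([3.0, 4.0, 5.0, 6.0])        # km/h
accel_output = np.array([410.0, 560.0, 720.0, 900.0])   # arbitrary accelerometer units

slope, intercept = np.polyfit(accel_output, treadmill_speed, 1)  # individual calibration line

# Predict walking speed from free-living accelerometer output
free_living_output = np.array([500.0, 650.0, 830.0])
predicted_speed = slope * free_living_output + intercept
print(predicted_speed)   # km/h
```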
Abstract:
In forensic science, there is a strong interest in determining the post-mortem interval (PMI) of human skeletal remains up to 50 years after death. Currently, there are no reliable methods to resolve PMI, the determination of which relies almost exclusively on the experience of the investigating expert. Here we measured (90)Sr and (210)Pb ((210)Po) incorporated into bones through a biogenic process as indicators of the time elapsed since death. We hypothesised that the activity of radionuclides incorporated into trabecular bone will more accurately match the activity in the environment and the food chain at the time of death than the activity in cortical bone because of a higher remodelling rate. We found that determining (90)Sr can yield reliable PMI estimates as long as a calibration curve exists for (90)Sr covering the studied area and the last 50 years. We also found that adding the activity of (210)Po, a proxy for naturally occurring (210)Pb incorporated through ingestion, to the (90)Sr dating increases the reliability of the PMI value. Our results also show that trabecular bone is subject to both (90)Sr and (210)Po diagenesis. Accordingly, we used a solubility profile method to determine the biogenic radionuclide only, and we are proposing a new method of bone decontamination to be used prior to (90)Sr and (210)Pb dating.
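A hedged sketch of the dating idea outlined above: the biogenic (90)Sr activity measured in bone is compared with a regional calibration curve of (90)Sr at the time of death, after decay-correcting each candidate death year to the measurement date. The calibration values are hypothetical placeholders, not data from the study; the 90Sr half-life of 28.8 years is a physical constant.

```python
# Estimate year of death by matching measured 90Sr activity to a decay-corrected calibration curve.
import numpy as np

HALF_LIFE_SR90 = 28.8          # years
LAMBDA = np.log(2) / HALF_LIFE_SR90

# Hypothetical calibration: 90Sr activity in bone at death (arbitrary units) by year of death
calibration = {1970: 120.0, 1980: 70.0, 1990: 40.0, 2000: 25.0, 2010: 15.0}

def estimate_year_of_death(measured_activity, measurement_year):
    """Pick the candidate death year whose decay-corrected calibration value
    best matches the activity measured at the time of analysis."""
    best_year, best_diff = None, np.inf
    for year, activity_at_death in calibration.items():
        expected_now = activity_at_death * np.exp(-LAMBDA * (measurement_year - year))
        diff = abs(expected_now - measured_activity)
        if diff < best_diff:
            best_year, best_diff = year, diff
    return best_year

print(estimate_year_of_death(measured_activity=20.0, measurement_year=2015))
```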
Abstract:
The rate of carbon dioxide production is commonly used as a measure of microbial activity in the soil. The traditional method of CO2 determination involves trapping CO2 in an alkali solution and then determining the CO2 concentration indirectly by titration of the remaining alkali in the solution. This method is still commonly employed in laboratories throughout the world due to its relative simplicity and the fact that it does not require expensive, specific equipment. However, it has several drawbacks: it is time-consuming, requires large amounts of chemicals, and the consistency of results depends on the operator's skills. With this in mind, an improved method was developed to analyze CO2 captured in alkali traps that is cheap and relatively simple, with a substantially shorter sample handling time and reproducibility equivalent to the traditional titration method. A comparison of the concentration values determined by gas phase flow injection analysis (GPFIA) and titration showed no significant difference (p > 0.05), but GPFIA has the advantage of requiring only a tenth of the sample volume of the titration method. The GPFIA system does not require the purchase of new, costly equipment: the device was constructed from items commonly found in laboratories, and alternative configurations for other detection units are suggested. Furthermore, GPFIA for CO2 analysis can be applied equally to samples obtained from the headspace of microcosms or from a sampling chamber that allows CO2 to be released from alkali trapping solutions. The optimised GPFIA method was applied to analyse the CO2 released during the degradation of hydrocarbons at a site contaminated by diesel spillage.
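A sketch of the back-titration arithmetic behind the traditional alkali-trap method mentioned above (CO2 + 2 NaOH -> Na2CO3 + H2O). It assumes the carbonate is precipitated (e.g. with BaCl2) so the acid titrates only unreacted NaOH; the volumes and concentrations are example values, not figures from the study.

```python
# Back-titration calculation for CO2 trapped in an NaOH solution.
def trapped_co2_c_mg(naoh_vol_ml, naoh_conc_m, hcl_vol_ml, hcl_conc_m):
    mol_naoh_initial = naoh_vol_ml / 1000 * naoh_conc_m
    mol_naoh_left = hcl_vol_ml / 1000 * hcl_conc_m      # unreacted NaOH found by back-titration
    mol_co2 = (mol_naoh_initial - mol_naoh_left) / 2    # 2 mol NaOH consumed per mol CO2
    return mol_co2 * 12.01 * 1000                       # mg of CO2-C

# Example: 20 mL of 1 M NaOH trap, back-titrated with 14.2 mL of 1 M HCl
print(trapped_co2_c_mg(20, 1.0, 14.2, 1.0))   # ~34.8 mg CO2-C
```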
Abstract:
A large variety of techniques have been used to measure the CO2 released from the soil surface, and much of the variability observed between locations must be attributed to the different methods used by the investigators. Therefore, a minimum protocol of measurement procedures should be established. The objectives of this study were (a) to compare different absorption areas, concentrations and volumes of the alkali trapping solution used in closed static chambers (CSC), and (b) to compare the optimized alkali trapping solution and soda-lime trapping in CSC for measuring soil respiration in sugarcane areas. Three CO2 absorption areas were evaluated (7, 15 and 20 % of the soil emission area covered by the chamber), along with two volumes of NaOH (40 and 80 mL) at three concentrations (0.1, 0.25 and 0.5 mol L-1). Three different types of alkaline traps were tested: (a) 80 mL of 0.5 mol L-1 NaOH in glass containers, absorption area 15 % (V0.5); (b) 40 mL of 2 mol L-1 NaOH retained in a sponge, absorption area 80 % (S2); and (c) 40 g of soda lime, absorption area 15 % (SL). NaOH concentrations of 0.5 mol L-1 or lower underestimated the soil CO2-C (CO2) flux. The alkali trap absorption area should be a minimum of 20 % of the area covered by the chamber. The 2 mol L-1 NaOH solution trap (S2) was the most efficient (highest accuracy and highest CO2 fluxes) in measuring soil respiration.
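A rough capacity check illustrating why dilute NaOH traps can underestimate the flux: with CO2 + 2 NaOH -> Na2CO3, a trap can hold at most (concentration x volume)/2 mol of CO2. The trap values match those in the abstract, but the chamber footprint and daily respiration rate are assumed example values, not data from the study.

```python
# Compare the CO2-C capacity of an alkali trap with an assumed daily capture by the chamber.
def trap_capacity_g_c(naoh_vol_ml, naoh_conc_m):
    return naoh_vol_ml / 1000 * naoh_conc_m / 2 * 12.01   # g of CO2-C the trap can absorb

capacity = trap_capacity_g_c(80, 0.5)          # 80 mL of 0.5 mol/L NaOH -> ~0.24 g C
chamber_area_m2 = 0.07                         # assumed chamber footprint
assumed_flux_g_c_m2_d = 3.0                    # assumed soil respiration rate, g C m-2 d-1
daily_capture = assumed_flux_g_c_m2_d * chamber_area_m2   # ~0.21 g C per day
print(capacity, daily_capture)   # capture approaching capacity signals a risk of underestimation
```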