988 results for Column sampler
Abstract:
Understanding the hydrosedimentological behavior of a watershed is essential for properly managing and using its water resources. The objective of this study was to verify the feasibility of an alternative procedure for the indirect determination of the sediment rating curve (key curve) using a turbidimeter. The research was carried out on the São Francisco Falso River, situated in the west of the state of Paraná on the left bank of the ITAIPU reservoir. The direct method was applied using a DH-48 suspended sediment sampler. The indirect method consisted of the use of a water-level recorder (limnigraph) and a turbidimeter. Based on the results obtained, it was concluded that the indirect method using a turbidimeter is fully feasible, since it yielded a power-function mathematical model equivalent to that of the direct method. Furthermore, the average suspended sediment discharge into the São Francisco Falso River during the 2006/2007 harvest was calculated at 7.26 metric tons per day.
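Such a power-function rating curve, Qs = a·Q^b, is typically fitted by linear least squares on log-transformed data. A minimal sketch of the fit (illustrative values only, not the study's measurements):

```python
import numpy as np

# Hypothetical paired observations (not the study's data):
# water discharge Q (m^3/s) and suspended sediment discharge Qs (t/day)
Q = np.array([1.2, 2.5, 4.0, 6.3, 9.1, 12.8])
Qs = np.array([0.8, 2.9, 6.5, 14.2, 27.0, 49.5])

# Fit the power-function model Qs = a * Q**b by ordinary least squares
# on the log-transformed variables: log(Qs) = log(a) + b*log(Q)
b, log_a = np.polyfit(np.log(Q), np.log(Qs), 1)
a = np.exp(log_a)
print(f"Rating curve: Qs = {a:.3f} * Q**{b:.3f}")

# Predict sediment discharge for a new water discharge
print(f"Predicted Qs at Q=8 m^3/s: {a * 8**b:.2f} t/day")
```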
Abstract:
Many studies have attempted to evaluate the importance of airborne fungi in the development of invasive fungal infection, especially in immunocompromised hosts. Several kinds of instruments are available to quantify fungal propagule levels in air. We compared the performance of the most frequently used air sampler, the six-stage Andersen sampler, with a portable one, the Reuter centrifugal sampler (RCS). A total of 84 samples were analyzed, 42 with each sampler. Twenty-eight different fungal genera were identified in samples analyzed with the Andersen instrument; in samples obtained with the RCS, only seven different fungal genera were identified. The three most frequently isolated genera with both devices were Penicillium, Aspergillus, and Cladophialophora. In areas supplied with a high-efficiency particulate air filter, fungal spore levels were usually lower than in areas without these filters. There was a significant correlation between total fungal propagule measurements taken with the two devices on each sampling occasion (Pearson coefficient = 0.50). However, the Andersen device recovered a broader spectrum of fungi. We conclude that the RCS can be used for quantitative estimates of airborne microbiological concentrations; for qualitative studies, however, this device cannot be recommended.
Abstract:
The 10-HDA content of Brazilian samples (São Paulo State) of royal jelly (RJ) was analyzed using an HPLC method based on the work of BLOODWORTH et al. [2]. The chromatographic conditions were: isocratic system; reversed-phase C18-H column; autosampler; diode-array UV-VIS detector set at 225 nm; mobile phase composed of methanol/water (45:55) adjusted to pH 2.5 with phosphoric acid; α-naphthol as internal standard; and a run time of 30 min. Statistical analysis of the results suggests that the 10-HDA contents of the samples fall into two ranges, around 1.8% and 3% (w/w), which could be useful for grading RJ. These are the first data on the 10-HDA content of Brazilian RJ.
Abstract:
Markov chain Monte Carlo (MCMC) methods are very popular tools for sampling from complex and/or high-dimensional probability distributions. Given their ease of application, these methods are widespread in several scientific communities, certainly including statistics, and particularly Bayesian analysis. Since the appearance of the first MCMC method in 1953, the number of such algorithms has grown considerably, and the subject remains an active area of research. A new MCMC algorithm with directional adjustment was recently developed by Bédard et al. (IJSS, 9:2008), and some of its properties remain partially unknown. The objective of this thesis is to establish the impact of a key parameter of this method on the overall performance of the approach. A second objective is to compare this algorithm to other, more versatile MCMC methods in order to assess its relative performance.
Abstract:
Markov chain Monte Carlo (MCMC) methods are methods for sampling from probability distributions. These techniques are based on running Markov chains whose stationary distributions are the distributions to be sampled. Given their ease of application, they constitute one of the most widely used approaches in the statistical community, particularly in Bayesian analysis. They are very popular tools for sampling from complex and/or high-dimensional probability distributions. Since the appearance of the first MCMC method in 1953 (the Metropolis method, see [10]), interest in these methods, as well as the range of available algorithms, has kept growing year after year. Although the Metropolis-Hastings algorithm (see [8]) can be considered one of the most general Markov chain Monte Carlo algorithms, it is also one of the simplest to understand and explain, which makes it an ideal starting point. It has been the subject of development by several researchers. The Multiple-Try Metropolis (MTM) algorithm, introduced into the statistical literature by [9], is considered an interesting development in this field, but unfortunately its implementation is very costly in terms of time. Recently, a new algorithm was developed by [1]: the revisited Multiple-Try Metropolis algorithm (revisited MTM), which recasts the standard MTM method mentioned above as a Metropolis-Hastings algorithm on an extended space. The objective of this work is, first, to present MCMC methods, and then to study and analyze the Metropolis-Hastings and standard MTM algorithms so as to give readers a better understanding of their implementation. A second objective is to study the prospects and drawbacks of the revisited MTM algorithm in order to see whether it meets the expectations of the statistical community. Finally, we attempt to combat the sedentariness problem of the revisited MTM algorithm (the chain's tendency to remain at its current state), which gives rise to an entirely new algorithm. This new algorithm performs well when the number of candidates generated at each iteration is small, but its performance degrades as the number of candidates grows.
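Since the abstract singles out Metropolis-Hastings as the ideal starting point, here is a minimal random-walk Metropolis sketch (the standard-normal target and all parameter choices are illustrative, not taken from the works cited):

```python
import numpy as np

def metropolis_hastings(log_target, x0, n_steps, step_size=1.0, rng=None):
    """Random-walk Metropolis: propose x' = x + step_size * N(0, 1),
    accept with probability min(1, pi(x') / pi(x))."""
    rng = rng or np.random.default_rng(0)
    x = x0
    samples = np.empty(n_steps)
    for i in range(n_steps):
        proposal = x + step_size * rng.standard_normal()
        # Accept/reject using the log of the Metropolis ratio
        if np.log(rng.uniform()) < log_target(proposal) - log_target(x):
            x = proposal
        samples[i] = x
    return samples

# Example: sample from a standard normal target (log density up to a constant)
draws = metropolis_hastings(lambda x: -0.5 * x**2, x0=0.0, n_steps=10_000)
print(draws.mean(), draws.std())  # should be near 0 and 1
```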
Abstract:
Supervised learning of large-scale hierarchical networks is currently enjoying tremendous success. Despite this excitement, unsupervised learning still represents, according to many researchers, a key element of Artificial Intelligence, where agents must learn from a potentially limited amount of data. This thesis follows that line of thought and addresses various research topics related to the problem of density estimation through Boltzmann machines (BMs), the probabilistic graphical models at the heart of deep learning. Our contributions touch on sampling, partition function estimation, optimization, and the learning of invariant representations. The thesis begins by presenting a new adaptive sampling algorithm, which automatically adjusts the temperature of the simulated Markov chains so as to maintain a high convergence speed throughout learning. When used in the context of stochastic maximum likelihood (SML) learning, our algorithm yields increased robustness to the choice of learning rate, as well as a better convergence speed. Our results are presented in the domain of BMs, but the method is general and applicable to the learning of any probabilistic model that relies on Markov chain sampling. While the maximum-likelihood gradient can be approximated by sampling, evaluating the log-likelihood requires an estimate of the partition function. In contrast to traditional approaches, which treat a given model as a black box, we propose to exploit the dynamics of learning by estimating the successive changes in log-partition incurred at each parameter update. The estimation problem is reformulated as an inference problem similar to Kalman filtering, but on a two-dimensional graph whose dimensions correspond to the time axis and to the temperature parameter. On the topic of optimization, we also present an algorithm that efficiently applies the natural gradient to Boltzmann machines with thousands of units. Until now, its adoption was limited by its high computational cost and memory requirements. Our algorithm, Metric-Free Natural Gradient (MFNG), avoids explicitly computing the Fisher information matrix (and its inverse) by exploiting a linear solver combined with an efficient matrix-vector product. The algorithm is promising: in terms of the number of function evaluations, MFNG converges faster than SML. Its implementation unfortunately remains inefficient in computation time. This work also explores the mechanisms underlying the learning of invariant representations. To this end, we use the family of "spike & slab" restricted Boltzmann machines (ssRBM), which we modify so as to model binary and sparse distributions. The binary latent variables of the ssRBM can be made invariant to a vector subspace by associating with each of them a vector of continuous latent variables (called "slabs"). This translates into increased invariance at the representation level and a better classification rate when few labeled data are available.
We conclude this thesis with an ambitious topic: learning representations that can separate the factors of variation present in the input signal. We propose a solution based on a bilinear ssRBM (with two groups of latent factors) and formulate the problem as one of "pooling" in complementary vector subspaces.
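The adaptive-temperature sampler described above builds on tempered MCMC. For orientation, a minimal parallel-tempering sketch with a fixed temperature ladder and a toy bimodal target (the thesis's contribution is precisely to adapt the temperatures automatically, which this sketch does not do):

```python
import numpy as np

rng = np.random.default_rng(0)

def log_target(x):
    # Toy bimodal target: mixture of two well-separated Gaussians
    return np.logaddexp(-0.5 * (x - 4)**2, -0.5 * (x + 4)**2)

temps = np.array([1.0, 2.0, 4.0, 8.0])   # fixed ladder (adaptive in the thesis)
chains = np.zeros(len(temps))
samples = []

for step in range(20_000):
    # Within-chain random-walk Metropolis at each temperature
    for k, T in enumerate(temps):
        prop = chains[k] + rng.standard_normal()
        if np.log(rng.uniform()) < (log_target(prop) - log_target(chains[k])) / T:
            chains[k] = prop
    # Swap move between a random pair of adjacent temperatures
    k = rng.integers(len(temps) - 1)
    delta = (1/temps[k] - 1/temps[k+1]) * (log_target(chains[k+1]) - log_target(chains[k]))
    if np.log(rng.uniform()) < delta:
        chains[k], chains[k+1] = chains[k+1], chains[k]
    samples.append(chains[0])            # the T=1 chain targets the true distribution

print(np.mean(np.array(samples) > 0))   # should approach 0.5 for this symmetric target
```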
Abstract:
As a degree project, the present work consists of a theoretical review of the concepts of leadership, power, and influence, together with the possible relationships that may arise among them. To this end, each concept is defined individually and, on that basis, the interdependence of these concepts and their importance within the development of current transformational leadership are identified. To accomplish this, a review was carried out of part of the academic literature found in books, academic journals, databases, and documents related to the topics and concepts discussed. From this, the evolution of the concept of leadership and the approaches presented from the 1920s to the present were traced, together with the full-range leadership model and the transactional and transformational types of leadership, in order to then define the role and importance of the concepts of power, the types of power, influence, and influence tactics, and thereby identify the possible relationships among these concepts and their importance in today's organizational environment.
Abstract:
Due to their confinement to specific host plants or restricted habitat types, Auchenorrhyncha have the potential to be suitable biological indicators for measuring the quality of chalk grassland under different management practices for nature conservation. The Auchenorrhyncha data from a study designed to identify the factors influencing the invertebrate diversity of chalk grasslands in southern England were used to evaluate the potential of this group of insects as biological indicators. Between 1998 and 2002, a total of 81 chalk grassland sites were sampled. Vegetation structure and composition were recorded, and Auchenorrhyncha were sampled at each site on three occasions in each of two seasons using a ‘Vortis’ suction sampler. Auchenorrhyncha assemblages were then linked to the different grassland plant communities occurring on chalk soils according to the British National Vegetation Classification (NVC). In total, 96 Auchenorrhyncha species were recorded during the study. Using data on the frequency and dominance of species, as is commonly done for plant communities, it was possible to identify the preferential and differential species of distinct Auchenorrhyncha assemblages. Significant differences between the Auchenorrhyncha assemblages associated with the various chalk grassland plant communities of the NVC were observed down to the level of sub-communities. We conclude that data on Auchenorrhyncha assemblages can provide valuable information for setting conservation management priorities where data on floristic composition alone may not be sufficient, as they supply additional information on aspects of vegetation structure and condition.
Abstract:
Passive samplers have predominantly been used to monitor environmental conditions in single volumes. However, measurements using a calibrated passive sampler, a solid-phase microextraction (SPME) fibre, in three houses with cold pitched roofs successfully demonstrated the potential of the SPME fibre as a device for monitoring air movement between two volumes. The monitored roofs were pitched at 15°-30°, with ceiling insulation thickness varying between 200 and 300 mm. For effective analysis, two constant sources of volatile organic compounds were diffused steadily in each house. Emission rates and air movement from the house to the roof were predicted using purpose-developed algorithms. The airflow rates, calibrated against conventional tracer-gas techniques, were fed into a heat, air and moisture (HAM) software package to predict the effects of air movement on other varying parameters. On average, the in situ measurements showed that about 20-30% of the air entering the three houses left through gaps and cracks in the ceiling into the roof. Although these field measurements focus on the airflows, they carry energy benefits: if these flows are reduced, energy losses would also be significantly reduced (as modelled), consequently improving the energy efficiency of the house. Other results illustrated that condensation formation risks depended on the airtightness of the building envelopes, including the configurations of their roof constructions.
Abstract:
Statistical methods of inference typically require the likelihood function to be computable in a reasonable amount of time. The class of “likelihood-free” methods termed Approximate Bayesian Computation (ABC) eliminates this requirement, replacing evaluation of the likelihood with simulation from it. Likelihood-free methods have gained in efficiency and popularity in the past few years, following their integration with Markov chain Monte Carlo (MCMC) and Sequential Monte Carlo (SMC) methods in order to better explore the parameter space. They have been applied primarily to estimating the parameters of a given model, but can also be used to compare models. Here we present novel likelihood-free approaches to model comparison, based upon the independent estimation of the evidence of each model under study. Key advantages of these approaches over previous techniques are that they allow the exploitation of MCMC or SMC algorithms for exploring the parameter space, and that they do not require a sampler able to mix between models. We validate the proposed methods on a simple exponential-family problem before turning to a realistic problem from human population genetics: the comparison of different demographic models based upon genetic data from the Y chromosome.
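For orientation, a minimal ABC rejection sketch of per-model evidence estimation (the toy models, summary statistic, and tolerance are illustrative; the paper's actual algorithms exploit MCMC/SMC rather than plain rejection):

```python
import numpy as np

rng = np.random.default_rng(0)
observed = np.array([1.8, 2.1, 1.9, 2.4, 2.0])  # toy data
s_obs = observed.mean()                          # summary statistic

def abc_evidence(simulate, prior_draw, n=100_000, tol=0.1):
    """Estimate a model's evidence as the acceptance rate of ABC rejection:
    draw theta from the prior, simulate data, accept if the simulated
    summary falls within tol of the observed summary."""
    accepted = 0
    for _ in range(n):
        theta = prior_draw()
        if abs(simulate(theta).mean() - s_obs) < tol:
            accepted += 1
    return accepted / n

# Two toy competing models, each with an Exponential(1) prior on its parameter
m1 = abc_evidence(lambda th: rng.normal(th, 1.0, size=5), lambda: rng.exponential(1.0))
m2 = abc_evidence(lambda th: rng.exponential(th, size=5), lambda: rng.exponential(1.0))
print("approximate Bayes factor M1 vs M2:", m1 / m2)
```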
Abstract:
We present a model of market participation in which the presence of non-negligible fixed costs leads to random censoring of the traditional double-hurdle model. Fixed costs arise when household resources must be devoted a priori to the decision to participate in the market. These costs, usually of time, are manifested in non-negligible minimum efficient supplies and a supply correspondence that requires modification of the traditional Tobit regression. The costs also complicate econometric estimation of household behavior. These complications are overcome by application of the Gibbs sampler. The algorithm thus derived provides robust estimates of the fixed-costs double-hurdle model. The model and procedures are demonstrated in an application to milk-market participation in the Ethiopian highlands.
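The key Gibbs step for censored models is data augmentation of the latent quantities. A minimal sketch for a standard Tobit model (simulated data, flat prior, known unit error variance; a simplification, not the paper's full fixed-costs double-hurdle specification):

```python
import numpy as np
from scipy.stats import truncnorm

rng = np.random.default_rng(0)

# Simulate Tobit data: latent y* = a + b*x + N(0,1), observed y = max(y*, 0)
n = 500
x = rng.normal(size=n)
X = np.column_stack([np.ones(n), x])
beta_true = np.array([-0.5, 1.0])
y = np.maximum(X @ beta_true + rng.standard_normal(n), 0.0)
censored = y == 0.0

beta = np.zeros(2)
XtX_inv = np.linalg.inv(X.T @ X)   # flat prior, known unit variance
draws = []

for it in range(2000):
    # 1) Data augmentation: impute latent y* for censored observations
    #    from a normal truncated above at 0
    mu = X[censored] @ beta
    y_star = y.copy()
    y_star[censored] = truncnorm.rvs(a=-np.inf, b=-mu, loc=mu, scale=1.0,
                                     random_state=rng)
    # 2) Draw beta from its conditional posterior N((X'X)^-1 X'y*, (X'X)^-1)
    beta = rng.multivariate_normal(XtX_inv @ X.T @ y_star, XtX_inv)
    if it >= 500:                   # discard burn-in
        draws.append(beta)

print("posterior mean of beta:", np.mean(draws, axis=0))  # near (-0.5, 1.0)
```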
Abstract:
Scrotal circumference data from 47,605 Nellore young bulls, measured at around 18 mo of age (SC18), were analyzed simultaneously with 27,924 heifer pregnancy (HP) and 80,831 stayability (STAY) records to estimate their additive genetic relationships. Additionally, it was examined whether economically relevant traits measured directly in females could replace SC18 as a selection criterion. Heifer pregnancy was defined as the observation that a heifer conceived and remained pregnant, as assessed by rectal palpation at 60 d. Females were exposed to sires for the first time at about 14 mo of age (between 11 and 16 mo). Stayability was defined as whether or not a cow calved every year up to 5 yr of age, when given the opportunity to breed. A Bayesian linear-threshold-threshold analysis via the Gibbs sampler was used to estimate the variance and covariance components of the multitrait model. Heritability estimates were 0.42 +/- 0.01, 0.53 +/- 0.03, and 0.10 +/- 0.01 for SC18, HP, and STAY, respectively. The genetic correlation estimates were 0.29 +/- 0.05 between SC18 and HP, 0.19 +/- 0.05 between SC18 and STAY, and 0.64 +/- 0.07 between HP and STAY. The residual correlation estimate between HP and STAY was -0.08 +/- 0.03. The heritability values indicate the existence of considerable genetic variance for SC18 and HP. However, the genetic correlations between SC18 and the female reproductive traits analyzed in the present study can only be considered moderate. The small residual correlation between HP and STAY suggests that environmental effects common to both traits are not major. The large heritability estimate for HP and the high genetic correlation between HP and STAY obtained in the present study confirm that EPDs for HP can be used to select bulls for the production of precocious, fertile, and long-lived daughters. Moreover, SC18 could be incorporated in multitrait analyses to improve the accuracy of prediction of the HP genetic merit of young bulls.
Abstract:
It is known that patients may cease participating in a longitudinal study and become lost to follow-up. The objective of this article is to present a Bayesian model to estimate malaria transition probabilities that accounts for individuals lost to follow-up. We consider a homogeneous population, and it is assumed that the period of time considered is small enough to avoid two or more transitions from one state of health to another. The proposed model is based on a Gibbs sampling algorithm that uses information on individuals lost to follow-up at the end of the longitudinal study. To simulate the unknown numbers of individuals with positive and negative malaria states at the end of the study among those lost to follow-up, two latent variables were introduced into the model. We used a real data set and a simulated data set to illustrate the application of the methodology. The proposed model showed a good fit to these data sets, and the algorithm showed no problems of convergence or lack of identifiability. We conclude that the proposed model is a good alternative for estimating probabilities of transition from one state of health to another in studies with low adherence to follow-up.
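A minimal sketch of the latent-variable idea for a single initial health state (toy counts and a Beta(1,1) prior; the paper's model covers both malaria states and their full transition probabilities):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy follow-up data for individuals who started in the "positive" state:
n_pos, n_neg, n_lost = 35, 50, 15   # observed positive / negative / lost at study end

theta = 0.5                          # P(remaining positive); Beta(1,1) prior
draws = []
for it in range(5000):
    # 1) Latent-variable step: allocate the lost-to-follow-up individuals
    #    between the two end states given the current transition probability
    lost_pos = rng.binomial(n_lost, theta)
    lost_neg = n_lost - lost_pos
    # 2) Conjugate update: theta | counts ~ Beta(1 + positives, 1 + negatives)
    theta = rng.beta(1 + n_pos + lost_pos, 1 + n_neg + lost_neg)
    if it >= 1000:                   # discard burn-in
        draws.append(theta)

print("posterior mean transition probability:", np.mean(draws))
```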
Abstract:
The concentrations of the water-soluble inorganic aerosol species ammonium (NH4+), nitrate (NO3-), chloride (Cl-), and sulfate (SO42-) were measured from September to November 2002 at a pasture site in the Amazon Basin (Rondônia, Brazil) (LBA-SMOCC). Measurements were conducted using a semi-continuous technique (wet-annular denuder/steam-jet aerosol collector: WAD/SJAC) and three integrating filter-based methods, namely (1) a denuder-filter pack (DFP: Teflon and impregnated Whatman filters), (2) a stacked-filter unit (SFU: polycarbonate filters), and (3) a high-volume dichotomous sampler (HiVol: quartz fiber filters). Measurements covered the late dry season (biomass burning), a transition period, and the onset of the wet season (clean conditions). Analyses of the particles collected on filters were performed using ion chromatography (IC) and particle-induced X-ray emission spectrometry (PIXE). Season-dependent discrepancies were observed between the WAD/SJAC system and the filter-based samplers. During the dry season, when PM2.5 (Dp ≤ 2.5 μm) concentrations were about 100 μg m⁻³, aerosol NH4+ and SO42- measured by the filter-based samplers were on average two times higher than those determined by the WAD/SJAC. Concentrations of aerosol NO3- and Cl- measured with the HiVol during daytime, and with the DFP during day- and nighttime, also exceeded those of the WAD/SJAC by a factor of two. In contrast, aerosol NO3- and Cl- measured with the SFU during the dry season were nearly two times lower than those measured by the WAD/SJAC. These differences declined markedly during the transition period and towards the cleaner conditions at the onset of the wet season (PM2.5 about 5 μg m⁻³), when the filter-based samplers measured on average 40-90% less than the WAD/SJAC. The differences were not due to consistent systematic biases of the analytical techniques, but were apparently a result of prevailing environmental conditions and different sampling procedures. For the transition period and the wet season, the significance of our results is reduced by the low number of data points. We argue that the observed differences are mainly attributable to (a) positive and negative filter sampling artifacts, (b) the presence of organic compounds and organosulfates on filter substrates, and (c) a SJAC sampling efficiency of less than 100%.
Abstract:
In this paper, we present a Bayesian approach for estimation in the skew-normal calibration model, together with the conditional posterior distributions that are useful for implementing the Gibbs sampler. Data transformation is thus avoided by the proposed methodology. Model fit is assessed with the proposed asymmetric deviance information criterion (ADIC), a modification of the ordinary DIC. We also report an application of the model to a real data set on the relationship between the resistance and the elasticity of a sample of concrete beams.
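For reference, the ordinary DIC that ADIC modifies is computed from posterior draws as DIC = Dbar + pD, with pD = Dbar - D(theta_hat). A minimal sketch for a normal-mean model (synthetic data and stand-in posterior draws, not the concrete-beam application or the ADIC itself):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data and stand-in posterior draws for a normal mean (known variance 1);
# in practice the draws would come from the Gibbs sampler
y = rng.normal(2.0, 1.0, size=50)
mu_draws = rng.normal(y.mean(), 1 / np.sqrt(len(y)), size=4000)

def deviance(mu):
    # D(mu) = -2 * log-likelihood of y under N(mu, 1)
    return np.sum((y - mu) ** 2) + len(y) * np.log(2 * np.pi)

d_bar = np.mean([deviance(mu) for mu in mu_draws])  # posterior mean deviance
d_hat = deviance(mu_draws.mean())                   # deviance at posterior mean
p_d = d_bar - d_hat                                 # effective number of parameters
dic = d_bar + p_d
print(f"DIC = {dic:.1f}, pD = {p_d:.2f}")
```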