894 results for gaussian mixture model


Relevance: 80.00%

Abstract:

In this talk we investigate the use of spectrally shaped amplified spontaneous emission (ASE) to emulate highly dispersed wavelength division multiplexed (WDM) signals in an optical transmission system. Such a technique offers various simplifications to large-scale WDM experiments. Not only does it reduce transmitter complexity by removing the need for multiple source lasers, it also potentially reduces test and measurement complexity by requiring only the centre channel of a WDM system to be measured in order to estimate WDM worst-case performance. The use of ASE as a test and measurement tool is well established in optical communication systems, and several measurement techniques will be discussed [1, 2]. One of the most prevalent uses of ASE is in the measurement of receiver sensitivity, where ASE is introduced to degrade the optical signal-to-noise ratio (OSNR) and the resulting bit error rate (BER) is measured at the receiver. From an analytical point of view, noise has been used to emulate system performance: the Gaussian Noise model is used as an estimate of highly dispersed signals and has attracted considerable interest [3]. The work presented here extends the use of ASE by using it to emulate highly dispersed WDM signals, and in the process reduces WDM transmitter complexity and receiver measurement time in a lab environment. Results thus far have indicated [2] that such a transmitter configuration is consistent with an AWGN model for transmission, with modulation format complexity and nonlinearities playing a key role in estimating the performance of systems utilising the ASE channel emulation technique. We conclude this work by investigating techniques capable of characterising the nonlinear and damage limits of optical fibres and the resultant information capacity limits.

REFERENCES

1. McCarthy, M. E., N. Mac Suibhne, S. T. Le, P. Harper, and A. D. Ellis, "High spectral efficiency transmission emulation for non-linear transmission performance estimation for high order modulation formats," 2014 European Conference on Optical Communication (ECOC), IEEE, 2014.
2. Ellis, A., N. Mac Suibhne, F. Gunning, and S. Sygletos, "Expressions for the nonlinear transmission performance of multi-mode optical fiber," Opt. Express, Vol. 21, 22834–22846, 2013.
3. Vacondio, F., O. Rival, C. Simonneau, E. Grellier, A. Bononi, L. Lorcy, J. Antona, and S. Bigo, "On nonlinear distortions of highly dispersive optical coherent systems," Opt. Express, Vol. 20, 1022–1032, 2012.
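
Where the abstract invokes the Gaussian Noise (GN) model [3], the working quantity is an effective SNR in which nonlinear interference is treated as additive Gaussian noise that grows with the cube of the per-channel launch power. A minimal sketch of that calculation follows; the function name and all coefficient values are illustrative assumptions, not numbers from the talk:

```python
import numpy as np

def gn_model_snr(p_launch_dbm, p_ase_dbm, eta_nl):
    """Effective SNR under the GN model: nonlinear interference is
    modelled as additive Gaussian noise growing as the cube of the
    per-channel launch power P, i.e. P_NLI = eta_nl * P**3."""
    p = 10 ** (p_launch_dbm / 10) * 1e-3        # launch power [W]
    p_ase = 10 ** (p_ase_dbm / 10) * 1e-3       # ASE noise power [W]
    p_nli = eta_nl * p ** 3                     # nonlinear interference [W]
    return p / (p_ase + p_nli)

# Illustrative sweep: SNR peaks at an optimum launch power, the hallmark
# of the GN model's trade-off between ASE noise and nonlinear noise.
for dbm in range(-4, 6, 2):
    snr = gn_model_snr(dbm, p_ase_dbm=-20.0, eta_nl=1e3)
    print(f"{dbm:+d} dBm -> SNR = {10 * np.log10(snr):.2f} dB")
```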

Relevance: 80.00%

Abstract:

Estimates of abundance or density are essential for wildlife management and conservation. There are few effective density estimates for the Buff-throated Partridge Tetraophasis szechenyii, a rare and elusive high-mountain Galliform species endemic to western China. In this study, we used the temporary emigration N-mixture model to estimate the density of this species, with data acquired from playback point-count surveys around a sacred area protected under indigenous Tibetan culture, in Yajiang County, Sichuan, China, during April–June 2009. At 84 points of 125-m radius, we recorded 53 partridge groups over three repeat visits. The best model indicated that detection probability was weakly affected by vegetation cover type, week of visit, time of day, and weather, and that a partridge group was present during a sampling period with constant probability. The abundance component was accounted for by vegetation association. Abundance was substantially higher in rhododendron shrubs, fir-larch forests, mixed spruce-larch-birch forests, and especially oak thickets than in pine forests. The model predicted a density of 5.14 groups/km², similar to the estimate of 4.7–5.3 groups/km² obtained through an intensive spot-mapping effort. The post-hoc estimate of individual density was 14.44 individuals/km², based on an estimated mean group size of 2.81. We suggest that the method we employed is applicable for estimating densities of Buff-throated Partridges over large areas. Given the importance of a mosaic habitat for this species, local logging should be regulated. Although the sacred conservation area showed no effect on partridge abundance, we suggest regulations linking the sacred-mountain conservation area with the official conservation system, given the strong local participation in land conservation facilitated by sacred mountains.
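
For context, the binomial N-mixture family underlying the analysis marginalises a latent abundance per point over a Poisson prior; the temporary-emigration variant used in the study adds an availability layer on top of this. A minimal sketch of the basic (Royle-type) likelihood, with the truncation bound and all names being illustrative assumptions:

```python
import numpy as np
from scipy.stats import poisson, binom
from scipy.optimize import minimize

def nmixture_nll(params, counts, n_max=80):
    """Negative log-likelihood of the basic binomial N-mixture model.
    counts: (n_points, n_visits) array of repeated group counts.
    Latent abundance N_i ~ Poisson(lam); each visit observes
    Binomial(N_i, p). N is marginalised by truncated summation."""
    log_lam, logit_p = params
    lam, p = np.exp(log_lam), 1 / (1 + np.exp(-logit_p))
    n_grid = np.arange(n_max + 1)                  # support of latent N
    log_prior = poisson.logpmf(n_grid, lam)        # log P(N = n)
    nll = 0.0
    for y in counts:                               # one point at a time
        # log P(y_1..y_T | N = n) for every n on the grid
        log_obs = binom.logpmf(y[:, None], n_grid[None, :], p).sum(axis=0)
        nll -= np.logaddexp.reduce(log_prior + log_obs)
    return nll

# Example: fit by direct optimisation on simulated data.
rng = np.random.default_rng(1)
N = rng.poisson(2.0, size=84)                      # latent groups per point
counts = rng.binomial(N[:, None], 0.4, size=(84, 3))
fit = minimize(nmixture_nll, x0=[0.0, 0.0], args=(counts,))
print(np.exp(fit.x[0]), 1 / (1 + np.exp(-fit.x[1])))  # lambda-hat, p-hat
```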

Relevance: 80.00%

Abstract:

Continuous variables are among the major data types collected by survey organizations. They can be incomplete, so that data collectors need to fill in the missing values, or they can contain sensitive information that needs protection from re-identification. One approach to protecting continuous microdata is to aggregate the values into cells defined by combinations of features. In this thesis, I present novel methods of multiple imputation (MI) that can be applied to impute missing values and to synthesize confidential values for continuous and magnitude data.

The first method is for limiting the disclosure risk of continuous microdata whose marginal sums are fixed. The motivation for developing such a method comes from magnitude tables of non-negative integer values in economic surveys. I present approaches based on a mixture of Poisson distributions to describe the multivariate distribution, so that the marginals of the synthetic data are guaranteed to sum to the original totals. I also present methods for assessing the disclosure risks of releasing such synthetic magnitude microdata. An illustration with a survey of manufacturing establishments shows that the disclosure risks are low while the information loss is acceptable.
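
A standard fact makes such fixed-margin synthesis tractable: if cell counts are independent Poisson variables, then conditional on their sum they are jointly multinomial with probabilities proportional to the Poisson rates. A minimal sketch of synthesis that preserves a fixed total, with the rates and the total being illustrative assumptions rather than values from the thesis:

```python
import numpy as np

rng = np.random.default_rng(7)

def synthesize_fixed_total(total, rates, n_copies=5):
    """Draw synthetic non-negative integer cells that always sum to
    `total`. Conditional on the sum, independent Poisson(rate_k) cells
    are Multinomial(total, rate_k / sum(rates))."""
    probs = np.asarray(rates, dtype=float)
    probs /= probs.sum()
    return rng.multinomial(total, probs, size=n_copies)

# Example: five synthetic rows for a 4-cell magnitude table whose
# published margin is 120; every row sums to 120 by construction.
synthetic = synthesize_fixed_total(120, rates=[10.0, 40.0, 25.0, 5.0])
print(synthetic, synthetic.sum(axis=1))
```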

The second method is for releasing synthetic continuous microdata by a nonstandard MI method. Traditionally, MI fits a model to the confidential values and then generates multiple synthetic datasets from this model. The disclosure risk of this approach tends to be high, especially when the original data contain extreme values. I present a nonstandard MI approach conditioned on protective intervals: its basic idea is to estimate the model parameters from these intervals rather than from the confidential values. Encouraging results from simple simulation studies suggest the potential of this new approach for limiting the posterior disclosure risk.
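
The idea of estimating model parameters from protective intervals rather than from the values themselves corresponds to an interval-censored likelihood, in which each record contributes only the probability mass of its interval. A minimal sketch for a normal model; the +/-10% interval construction is an illustrative assumption, not the thesis's protection rule:

```python
import numpy as np
from scipy.stats import norm
from scipy.optimize import minimize

def interval_nll(params, lo, hi):
    """Negative log-likelihood when each confidential value is known
    only to lie in [lo_i, hi_i]: record i contributes
    log(Phi((hi-mu)/sigma) - Phi((lo-mu)/sigma))."""
    mu, log_sigma = params
    sigma = np.exp(log_sigma)
    mass = norm.cdf(hi, mu, sigma) - norm.cdf(lo, mu, sigma)
    return -np.sum(np.log(np.clip(mass, 1e-300, None)))

# Example: protective intervals built by +/-10% around each value;
# only the intervals, never the values, enter the likelihood.
rng = np.random.default_rng(3)
x = rng.normal(50, 8, size=500)          # confidential values
lo, hi = 0.9 * x, 1.1 * x                # released intervals only
fit = minimize(interval_nll, x0=[40.0, 1.0], args=(lo, hi))
print(fit.x[0], np.exp(fit.x[1]))        # mu-hat, sigma-hat
```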

The third method is for imputing missing values in continuous and categorical variables. It extends a hierarchically coupled mixture model with local dependence, but separates the variables into non-focused (e.g., almost fully observed) and focused (e.g., mostly missing) ones. The sub-model structure of the focused variables is more complex than that of the non-focused ones; their cluster indicators are linked together by tensor factorization, and the focused continuous variables depend locally on non-focused values. The model properties suggest that moving strongly associated non-focused variables to the focused side can help improve estimation accuracy, which is examined in several simulation studies. The method is applied to data from the American Community Survey.

Relevance: 80.00%

Abstract:

Brain injury due to lack of oxygen or impaired blood flow around the time of birth may cause long-term neurological dysfunction or, in severe cases, death. Treatments need to be initiated as soon as possible and tailored to the nature of the injury to achieve the best outcomes. The electroencephalogram (EEG) currently provides the best insight into neurological activity. However, its interpretation presents a formidable challenge for neurophysiologists, and such expertise is not widely available, particularly around the clock in a typical busy Neonatal Intensive Care Unit (NICU). Therefore, an automated computerized system for detecting and grading the severity of brain injuries could be of great help to medical staff in diagnosing and initiating treatment on time. In this study, automated systems for the detection of neonatal seizures and for grading the severity of Hypoxic-Ischemic Encephalopathy (HIE) using EEG and heart rate (HR) signals are presented. It is well known that EEG and HR signals carry substantial contextual and temporal information when examined at longer time scales. Systems developed in the past exploited this information either at a very early stage of the system, without any intelligent block, or at a very late stage, where the presence of such information is much reduced. This work has focused in particular on the development of a system that can incorporate the contextual information at the middle (classifier) level. This is achieved by using dynamic classifiers that are able to process sequences of feature vectors rather than only one feature vector at a time.
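
One concrete instance of a dynamic classifier that processes sequences of feature vectors is a hidden Markov model with Gaussian emissions, whose learned transition matrix supplies exactly the kind of temporal context described above. A minimal sketch using the third-party hmmlearn package; the two-state framing, feature dimensions, and data are illustrative assumptions, not the thesis's actual system:

```python
import numpy as np
from hmmlearn.hmm import GaussianHMM

rng = np.random.default_rng(0)

# Toy sequence of 8-dimensional EEG feature vectors, one per epoch:
# 200 background epochs followed by 50 epochs with shifted statistics.
features = np.vstack([
    rng.normal(0.0, 1.0, size=(200, 8)),
    rng.normal(2.0, 1.0, size=(50, 8)),
])

# Two hidden states (e.g. non-seizure / seizure). The transition
# matrix learned by EM encodes the temporal context: state changes
# are rare, so isolated noisy epochs tend to be smoothed away.
model = GaussianHMM(n_components=2, covariance_type="diag", n_iter=50)
model.fit(features)                 # unsupervised EM on one sequence

states = model.predict(features)    # Viterbi path over the whole record
print(states[:10], states[-10:])
```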

Relevance: 80.00%

Abstract:

The Dirichlet distribution is a multivariate generalization of the Beta distribution and an important multivariate continuous distribution in probability and statistics. In this report, we review the Dirichlet distribution and study its properties, including statistical and information-theoretic quantities involving this distribution. Relationships between the Dirichlet distribution and other distributions are also discussed. There are several ways to generate random variables with a Dirichlet distribution; the stick-breaking approach and the Pólya urn method are discussed. In Bayesian statistics, the Dirichlet distribution and the generalized Dirichlet distribution can both serve as a conjugate prior for the Multinomial distribution. The Dirichlet distribution has many applications in different fields. We focus on the unsupervised learning of a finite mixture model based on the Dirichlet distribution. The Initialization Algorithm and the Dirichlet Mixture Estimation Algorithm are both reviewed for estimating the parameters of a Dirichlet mixture. Three experimental results are shown: the estimation of artificial histograms, the summarization of image databases, and human skin detection.
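
Of the two generative constructions mentioned, stick-breaking is the simplest to write down: successive Beta draws break off fractions of the remaining stick, and the piece lengths are jointly Dirichlet. A minimal sketch, shown next to the standard normalized-Gamma construction for comparison:

```python
import numpy as np

rng = np.random.default_rng(42)

def dirichlet_stick_breaking(alpha):
    """Sample from Dirichlet(alpha) by stick-breaking: the k-th piece
    is a Beta(alpha_k, alpha_{k+1} + ... + alpha_K) fraction of what
    remains of the unit stick."""
    alpha = np.asarray(alpha, dtype=float)
    remaining, pieces = 1.0, []
    for k in range(len(alpha) - 1):
        frac = rng.beta(alpha[k], alpha[k + 1:].sum())
        pieces.append(remaining * frac)
        remaining *= 1.0 - frac
    pieces.append(remaining)        # last piece takes what is left
    return np.array(pieces)

def dirichlet_gamma(alpha):
    """Equivalent construction: normalize independent Gamma draws."""
    g = rng.gamma(np.asarray(alpha, dtype=float), 1.0)
    return g / g.sum()

alpha = [2.0, 3.0, 5.0]
print(dirichlet_stick_breaking(alpha))   # sums to 1
print(dirichlet_gamma(alpha))            # same distribution
```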

Relevance: 80.00%

Abstract:

An RVE-based stochastic numerical model is used to calculate the permeability of randomly generated porous media at different values of the fiber volume fraction, for the case of transverse flow in a unidirectional ply. Analysis of the numerical results shows that the permeability is not normally distributed. With the aim of offering a new understanding of this topic, the permeability data are fitted using both a mixture model and a unimodal distribution. Our findings suggest that permeability can be fitted well using a mixture model based on the lognormal and power-law distributions. In the case of a unimodal distribution, maximum-likelihood estimation (MLE) shows that the generalized extreme value (GEV) distribution provides the best fit. Finally, an expression for the permeability as a function of the fiber volume fraction, based on the GEV distribution, is discussed in light of the previous results.
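
A GEV fit by MLE of the kind described is directly available in scipy, where the distribution is called genextreme. In the sketch below, the synthetic right-skewed sample is an illustrative stand-in for the RVE permeability data:

```python
import numpy as np
from scipy.stats import genextreme, kstest

rng = np.random.default_rng(5)

# Stand-in for simulated transverse permeabilities at one fiber
# volume fraction (right-skewed, arbitrary units).
permeability = rng.lognormal(mean=-2.0, sigma=0.4, size=1000)

# Maximum-likelihood GEV fit; scipy's `c` is the shape parameter
# (scipy uses the opposite sign convention to many textbooks).
c, loc, scale = genextreme.fit(permeability)
print(f"shape={c:.3f} loc={loc:.4f} scale={scale:.4f}")

# Quick goodness-of-fit check against the fitted distribution.
print(kstest(permeability, "genextreme", args=(c, loc, scale)))
```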

Relevance: 80.00%

Abstract:

Thesis (Ph.D.)--University of Washington, 2016-08

Relevance: 80.00%

Abstract:

This work aims to develop a process for producing biodiesel from high-acidity oils, applying a two-step homogeneous catalysis process. The first step is the ethylic esterification of the free fatty acids, catalysed by H2SO4 and taking place in the triglyceride medium; the second is the transesterification of the remaining triglycerides, taking place in the medium of the alkyl esters from the first step and catalysed by alkali (NaOH) with ethyl or methyl alcohol. The esterification reaction was studied with a model mixture consisting of neutral soybean oil artificially acidified with 15 wt% analytical-grade oleic acid. This value was adopted as a reference because certain regional fats (castor oil from family farming, slaughterhouse tallow, rice bran oil, etc.) contain 10-20 wt% free fatty acids. In both steps ethanol is both reagent and solvent, and the mixture:alcohol molar ratio was one of the parameters investigated, at 1:3, 1:6, and 1:9. The others were the temperature, 60 and 80°C, and the catalyst concentration, 0.5, 1.0, and 1.5 wt% (relative to the oil mass). The combination of these parameters resulted in 18 reactions. Among the reaction conditions studied, eight reached an acceptable acidity below 1.5 wt%, making it possible to define the conditions for optimal application of the second step. The best condition in this step was obtained when the reaction was conducted at 60°C with 1 wt% H2SO4 and a 1:6 molar ratio. At the end of the first step, treatments such as catalyst removal were carried out and their influence on the final acidity was studied, using washes with and without the addition of hexane, followed by evaporation or the addition of a drying agent. In the second step, oil:alcohol molar ratios of 1:6 and 1:9 were studied with methyl and ethyl alcohol and with 0.5 and 1 wt% NaOH, along with the treatment of the reaction (washing or neutralization of the catalyst) at 60°C, resulting in 16 experiments. The best condition in this second step was obtained with 0.5 wt% NaOH and an oil:ethanol molar ratio of 1:6, and only the reactions in which washes were applied showed acidity indices (<1.0 wt%) consistent with the ANP parameters.
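
As a check on the design arithmetic, the first-step screening is a full factorial over three molar ratios, two temperatures, and three catalyst loadings, i.e. 3 x 2 x 3 = 18 runs. A trivial sketch enumerating the design (the labels are assumptions for illustration):

```python
from itertools import product

molar_ratios = ["1:3", "1:6", "1:9"]         # mixture:ethanol
temperatures_c = [60, 80]                    # reaction temperature
h2so4_wt_pct = [0.5, 1.0, 1.5]               # catalyst, wt% of oil mass

runs = list(product(molar_ratios, temperatures_c, h2so4_wt_pct))
print(len(runs))                             # 18 esterification runs
for ratio, temp, cat in runs[:3]:
    print(f"ratio {ratio}, {temp} deg C, {cat} wt% H2SO4")
```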

Relevance: 80.00%

Abstract:

Different types of base fluids, such as water, engine oil, kerosene, ethanol, methanol, and ethylene glycol, are commonly used to increase heat transfer performance in many engineering applications. These conventional heat transfer fluids, however, often have several limitations. A major one is that the thermal conductivity of each of these base fluids is very low, which results in a lower heat transfer rate in thermal engineering systems. This limitation also affects the performance of equipment used in heat transfer process industries. To overcome this drawback, researchers have over the years considered a new generation of heat transfer fluid, known as nanofluid, with higher thermal conductivity. This new generation of heat transfer fluid is a mixture of nanometre-sized particles and a base fluid. Several studies suggest that adding spherical or cylindrical, uniform or non-uniform nanoparticles to a base fluid can remarkably increase the thermal conductivity of the nanofluid, and that such augmentation can play a more significant role in enhancing the heat transfer rate than the base fluid alone.

Nanoparticle diameters used in nanofluids are usually taken to be at most 100 nm, with concentrations typically varying from 5% to 10%. Several researchers have noted that small nanoparticle concentrations at a diameter of 100 nm can enhance the heat transfer rate significantly compared with base fluids. But it is not obvious what effect smaller nanoparticles, below 100 nm, at different concentrations will have on heat transfer performance. Moreover, the effect of static versus moving nanoparticles on the heat transfer of a nanofluid is also unknown; the idea of moving nanoparticles brings in the effect of the Brownian motion of nanoparticles on heat transfer. The aim of this work is therefore to investigate the heat transfer performance of nanofluids with combinations of smaller nanoparticle sizes and different concentrations, taking the Brownian motion of nanoparticles into account. A horizontal pipe is considered as the physical system within which these nanofluid performances are investigated under transition-to-turbulent flow conditions.

Three types of numerical models have been used in investigating the performance of nanofluids: a single-phase model, an Eulerian-Eulerian multi-phase mixture model, and an Eulerian-Lagrangian discrete phase model. The most commonly used is the single-phase model, which assumes that nanofluids behave like a conventional fluid. The other two models are used when the interaction between solid and fluid particles is considered. In the Eulerian-Eulerian multi-phase mixture model, the fluid and solid phases together form a fluid-solid mixture, whereas in the Eulerian-Lagrangian discrete phase model the two phases, one solid and one fluid, are treated independently. In addition, RANS (Reynolds-Averaged Navier-Stokes) based Standard κ-ω and SST κ-ω transitional models have been used for the simulation of transitional flow, while the RANS-based Standard κ-ε, Realizable κ-ε, and RNG κ-ε turbulence models are used for the simulation of turbulent flow.
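
The single-phase model mentioned above treats the nanofluid as a conventional fluid with effective properties. A classic closed form for the effective thermal conductivity of a dilute suspension of spheres is the Maxwell model, sketched below; the abstract does not name the correlations actually used, so this choice and the property values are illustrative assumptions:

```python
def maxwell_k_eff(k_bf, k_p, phi):
    """Maxwell model for the effective thermal conductivity of a
    dilute suspension of spherical particles: k_bf is the base-fluid
    conductivity, k_p the particle conductivity, phi the particle
    volume fraction (0-1)."""
    num = k_p + 2 * k_bf + 2 * phi * (k_p - k_bf)
    den = k_p + 2 * k_bf - phi * (k_p - k_bf)
    return k_bf * num / den

# Example: alumina-like particles (~40 W/m.K) in water (~0.6 W/m.K).
for phi in (0.01, 0.05, 0.10):
    k = maxwell_k_eff(0.6, 40.0, phi)
    print(f"phi={phi:.2f}: k_eff = {k:.3f} W/m.K ({k / 0.6:.3f}x water)")
```
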
The hydrodynamic and thermal behaviour of transition-to-turbulent flows of nanofluids through the horizontal pipe is studied under a uniform wall heat flux boundary condition, with temperature-dependent thermo-physical properties for both water and nanofluids. Numerical results characterising the velocity and temperature fields are presented in terms of velocity and temperature contours, turbulent kinetic energy contours, surface temperature, local and average Nusselt numbers, Darcy friction factor, thermal performance factor, and total entropy generation. New correlations are also proposed for the calculation of the average Nusselt number for both the single- and multi-phase models. The results reveal that the combination of small nanoparticle sizes and higher nanoparticle concentrations, together with the Brownian motion of nanoparticles, yields higher heat transfer enhancement and a higher thermal performance factor than water. The literature suggests that nanofluid flow in an inclined pipe in the transition-to-turbulent regime has been ignored despite its significance in real-life applications. Therefore, a particular investigation has been carried out in this thesis to understand the heat transfer behaviour and performance of an inclined pipe under transition flow conditions. It is found that the heat transfer rate decreases as the pipe inclination angle increases, and that a horizontal pipe under forced convection achieves a higher heat transfer rate than an inclined pipe under mixed convection.

Relevance: 80.00%

Abstract:

Understanding spatial patterns of land use and land cover is essential for studies addressing biodiversity, climate change, and environmental modeling, as well as for the design and monitoring of land use policies. The aim of this study was to create a detailed land use and land cover map of the deforested areas of the Brazilian Legal Amazon up to 2008. Land uses within the deforested areas were mapped from Landsat-5/TM images analysed with techniques such as the linear spectral mixture model, threshold slicing, and visual interpretation, aided by temporal information extracted from NDVI MODIS time series. The result is a high-spatial-resolution land use and land cover map of the entire Brazilian Legal Amazon for the year 2008, with the corresponding calculation of the area occupied by different land use classes. The results showed that the four classes of Pasture covered 62% of the deforested areas of the Brazilian Legal Amazon, followed by Secondary Vegetation with 21%. The area occupied by Annual Agriculture covered less than 5% of the deforested areas; the remaining areas were distributed among six other land use classes. The maps generated by this project, called TerraClass, are available at INPE's web site (http://www.inpe.br/cra/projetos_pesquisas/terraclass2008.php).
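
The linear spectral mixture model referred to here expresses each pixel's spectrum as a non-negative combination of endmember spectra (e.g. vegetation, soil, shade), with per-pixel fractions estimated by constrained least squares. A minimal sketch; the endmember values are illustrative assumptions, not TerraClass parameters:

```python
import numpy as np
from scipy.optimize import nnls

# Illustrative endmember spectra for 6 Landsat-5/TM reflective bands
# (rows: vegetation, soil, shade), in reflectance units.
endmembers = np.array([
    [0.04, 0.07, 0.05, 0.45, 0.23, 0.12],   # vegetation
    [0.12, 0.16, 0.21, 0.28, 0.34, 0.30],   # soil
    [0.02, 0.03, 0.03, 0.04, 0.03, 0.02],   # shade
])

def unmix(pixel):
    """Estimate endmember fractions for one pixel by non-negative
    least squares on the linear mixture model pixel = E^T f."""
    fractions, residual = nnls(endmembers.T, pixel)
    return fractions / fractions.sum(), residual   # normalize to 1

# A pixel that is 60% vegetation, 30% soil, 10% shade, plus noise.
rng = np.random.default_rng(2)
pixel = 0.6 * endmembers[0] + 0.3 * endmembers[1] + 0.1 * endmembers[2]
pixel += rng.normal(0, 0.002, size=6)
print(unmix(pixel))
```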

Relevance: 80.00%

Abstract:

In this paper, the problem of semantic place categorization in mobile robotics is addressed by considering a time-based probabilistic approach called the dynamic Bayesian mixture model (DBMM), an improved variation of the dynamic Bayesian network. More specifically, multi-class semantic classification is performed by a DBMM composed of a mixture of heterogeneous base classifiers, using geometrical features computed from 2D laser scanner data, where the sensor is mounted on board a moving robot operating indoors. Besides its capability to combine different probabilistic classifiers, the DBMM approach also incorporates time-based (dynamic) inferences in the form of previous class-conditional probabilities and priors. Extensive experiments were carried out on publicly available benchmark datasets, highlighting the influence of the number of time slices and the effect of additive smoothing on the classification performance of the proposed approach. Results reported under different scenarios and conditions show the effectiveness and competitive performance of the DBMM.
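
The DBMM as described (a weighted mixture of base-classifier posteriors combined with time-based priors) can be sketched as a recursive belief update over time slices. The class set, weights, transition matrix, and smoothing constant below are illustrative assumptions, not the paper's learned values:

```python
import numpy as np

classes = ["corridor", "office", "doorway"]          # assumed class set
weights = np.array([0.5, 0.3, 0.2])                  # base-classifier weights
transition = np.array([[0.90, 0.05, 0.05],           # P(c_t | c_{t-1});
                       [0.05, 0.90, 0.05],           # sticky: the place
                       [0.10, 0.10, 0.80]])          # rarely changes per scan
alpha = 1e-2                                         # additive smoothing

def dbmm_step(base_posteriors, prev_belief):
    """One time slice: fuse base classifiers as a weighted geometric
    mixture, then multiply by the dynamic prior from the last slice."""
    fused = np.prod(base_posteriors ** weights[:, None], axis=0)
    prior = transition.T @ prev_belief               # time-based prior
    belief = (fused + alpha) * prior
    return belief / belief.sum()

# Three base classifiers emit class posteriors for the current scan.
belief = np.full(3, 1 / 3)                           # uniform initial belief
scan_posteriors = np.array([[0.7, 0.2, 0.1],
                            [0.6, 0.3, 0.1],
                            [0.5, 0.3, 0.2]])
belief = dbmm_step(scan_posteriors, belief)
print(dict(zip(classes, np.round(belief, 3))))
```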

Relevance: 50.00%

Abstract:

Affiliation: Institut de recherche en immunologie et en cancérologie, Université de Montréal

Relevance: 50.00%

Abstract:

In survival analysis, frailty is often used to model heterogeneity between individuals or correlation within clusters. Typically, frailty is taken to be a continuous random effect, yielding a continuous mixture distribution for survival times. A Bayesian analysis of a correlated frailty model is discussed in the context of inverse Gaussian frailty. An MCMC approach is adopted, and the deviance information criterion is used to compare models. As an illustration of the approach, a bivariate data set of corneal graft survival times is analysed.
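
Since the deviance information criterion (DIC) is the model-comparison tool here, a minimal sketch of its computation from MCMC output may be useful; the exponential-likelihood toy example is an illustrative assumption, not the paper's inverse Gaussian frailty model:

```python
import numpy as np

def dic(deviance_samples, deviance_at_posterior_mean):
    """DIC = D_bar + p_D, where D_bar is the posterior mean deviance
    and p_D = D_bar - D(theta_bar) the effective number of parameters."""
    d_bar = np.mean(deviance_samples)
    p_d = d_bar - deviance_at_posterior_mean
    return d_bar + p_d, p_d

# Toy example: exponential survival times, deviance = -2 * log-likelihood.
rng = np.random.default_rng(9)
times = rng.exponential(2.0, size=100)
lam_draws = rng.gamma(100, 1 / times.sum(), size=4000)   # posterior of rate
dev = -2 * (len(times) * np.log(lam_draws) - lam_draws * times.sum())
lam_bar = lam_draws.mean()
dev_at_mean = -2 * (len(times) * np.log(lam_bar) - lam_bar * times.sum())
print(dic(dev, dev_at_mean))
```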

Relevance: 40.00%

Abstract:

Estimating and predicting the degradation processes of engineering assets is crucial for reducing costs and ensuring the productivity of enterprises. Assisted by modern condition monitoring (CM) technologies, most asset degradation processes can be revealed by various degradation indicators extracted from CM data. Maintenance strategies developed using these degradation indicators (i.e., condition-based maintenance) are more cost-effective, because unnecessary maintenance activities are avoided when an asset is still in a decent health state. A practical difficulty in condition-based maintenance (CBM) is that, in most situations, degradation indicators extracted from CM data can only partially reveal asset health states. Underestimating this uncertainty in the relationship between degradation indicators and health states can cause excessive false alarms, or failures without pre-alarms.

The state space model provides an efficient approach to describing a degradation process using indicators that only partially reveal health states. However, existing state space models of asset degradation largely depend on assumptions such as discrete time, discrete states, linearity, and Gaussianity. The discrete-time assumption requires that failures and inspections happen only at fixed intervals. The discrete-state assumption entails discretising continuous degradation indicators, which requires expert knowledge and often introduces additional errors. The linear and Gaussian assumptions are inconsistent with the nonlinear and irreversible degradation processes of most engineering assets. This research proposes a Gamma-based state space model, free of the discrete-time, discrete-state, linear, and Gaussian assumptions, to model partially observable degradation processes. Monte Carlo-based algorithms are developed to estimate model parameters and asset remaining useful lives. In addition, this research develops a continuous-state partially observable semi-Markov decision process (POSMDP) to model a degradation process that follows the Gamma-based state space model under various maintenance strategies. Optimal maintenance strategies are obtained by solving the POSMDP.

Simulation studies in MATLAB are performed, and case studies using data from an accelerated life test of a gearbox and from the liquefied natural gas industry are also conducted. The results show that the proposed Monte Carlo-based EM algorithm can estimate model parameters accurately, and that the proposed Gamma-based state space model fits the monotonically increasing degradation data from the gearbox accelerated life test better than linear and Gaussian state space models. Furthermore, both the simulation and case studies show that the prediction algorithm based on the Gamma-based state space model can accurately identify the mean value and confidence interval of asset remaining useful lives. In addition, the simulation studies show that the proposed POSMDP-based maintenance strategy optimisation method is more flexible than one that assumes a predetermined strategy structure and uses renewal theory, and that it obtains more cost-effective strategies than a recently published optimisation method, by optimising the next maintenance activity and the waiting time until it simultaneously.
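
Because the Gamma-based state space model drops the linear-Gaussian assumptions, filtering must be simulation-based. Below is a minimal bootstrap particle filter for a monotone Gamma-increment state with noisy observations, in the spirit of the Monte Carlo algorithms described; all parameter values and the Gaussian observation model are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(11)

# Latent degradation: x_t = x_{t-1} + Gamma(shape, scale) increment
# (monotonically increasing, non-Gaussian). Observation: y_t = x_t + noise.
SHAPE, SCALE, OBS_SD, N_PART = 2.0, 0.05, 0.10, 2000

def particle_filter(y):
    """Bootstrap particle filter returning the filtered mean state."""
    particles = np.zeros(N_PART)
    means = []
    for obs in y:
        # Propagate: add a Gamma-distributed degradation increment.
        particles = particles + rng.gamma(SHAPE, SCALE, size=N_PART)
        # Weight by the Gaussian observation likelihood.
        w = np.exp(-0.5 * ((obs - particles) / OBS_SD) ** 2) + 1e-300
        w /= w.sum()
        # Resample (multinomial) to avoid weight degeneracy.
        particles = particles[rng.choice(N_PART, N_PART, p=w)]
        means.append(particles.mean())
    return np.array(means)

# Simulate a degradation path and its noisy indicator, then filter.
x = np.cumsum(rng.gamma(SHAPE, SCALE, size=50))
y = x + rng.normal(0, OBS_SD, size=50)
print(particle_filter(y)[-5:], x[-5:])
```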