899 results for two-Gaussian mixture model


Relevance:

100.00%

Publisher:

Abstract:

The main aim of the study is to give a clear picture of the various meteorological factors affecting the dispersal of pollutants. One important developing metropolis, namely Madras, is chosen for the present study. The study throws light on the occurrence of inversions, isothermals and lapse conditions, and on the vertical and horizontal extent of mixing of pollutants. The thesis also aims to study the wind climatology and atmospheric stability. The study gives an insight into the spatial distribution of sulphur dioxide concentration using the Gaussian plume model, which accounts for various industrial sources. The researcher suggests optimum locations for industries and various steps to reduce air pollution.
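A minimal sketch of the standard Gaussian plume formula that this family of studies applies, with ground reflection via an image source. All numbers below are hypothetical placeholders, and in practice σy, σz would come from Pasquill-Gifford stability curves at the downwind distance rather than being supplied directly:

```python
import numpy as np

def gaussian_plume(q, u, y, z, h, sigma_y, sigma_z):
    """Ground-reflecting Gaussian plume concentration (g/m^3).

    q: emission rate (g/s), u: mean wind speed (m/s),
    y, z: crosswind and vertical receptor coordinates (m),
    h: effective stack height (m),
    sigma_y, sigma_z: dispersion parameters (m) evaluated at the
    receptor's downwind distance for the prevailing stability class.
    """
    lateral = np.exp(-y**2 / (2.0 * sigma_y**2))
    vertical = (np.exp(-(z - h)**2 / (2.0 * sigma_z**2))
                + np.exp(-(z + h)**2 / (2.0 * sigma_z**2)))  # image-source reflection
    return q / (2.0 * np.pi * u * sigma_y * sigma_z) * lateral * vertical

# Hypothetical example: 100 g/s SO2 source, 3 m/s wind, ground-level receptor.
print(gaussian_plume(q=100.0, u=3.0, y=0.0, z=0.0,
                     h=50.0, sigma_y=80.0, sigma_z=40.0))
```

Summing this expression over all stacks gives the multiple-source concentration field used to map SO2 over a city.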

Relevance:

100.00%

Publisher:

Abstract:

Pollutants that enter the earth's atmosphere become part of it, and hence their dispersion, dilution, direction of transport, etc. are governed by meteorological conditions. The thesis deals with the study of the atmospheric dispersion capacity, wind climatology, atmospheric stability and pollutant distribution by means of a model, together with suggestions for comprehensive planning for the industrially developing city of Cochin. The definition, sources, types and effects of air pollution are dealt with briefly. The influence of various meteorological parameters, such as the vector wind, temperature and its vertical structure, and atmospheric stability, on pollutant dispersal has been studied. The importance of inversions, mixing heights and ventilation coefficients is brought out. The spatial variation of mixing heights, studied for the first time on a microscale region, serves to delineate the regions of good and poor dispersal capacity. A study of the wind direction fluctuation σθ and its relation to stability and mixing heights is shown to be very useful, and it is shown that the method of σθ computation needs to be re-examined. The development of the Gaussian plume model, along with its application to multiple sources, is presented. The pollutant chosen was sulphur dioxide, and industrial sources alone were considered.

The percentage frequency of occurrence of inversions and isothermals is found to be low in all months of the year. The spatial variation of mixing heights revealed that a single mixing height cannot be taken as representative of the whole city; the monsoon months showed the lowest mixing heights. The study of ventilation coefficients showed values less than the required optimum value of 6000 m²/s. However, the low values may be due to the use of the surface wind alone instead of the vertically averaged wind. Relatively more calm conditions and light winds during the night and strong winds during the day were observed. During most of the year, westerlies during the day and northeasterlies during the night are the dominant winds. Unstable conditions with high values of σθ during the day and stable conditions with lower values of σθ during the night are the prominent features; the monsoon months showed neutral stability most of the time. A study of σθ and the Pasquill stability categories revealed the difficulty of assigning a unique value of σθ to each stability category. For the first time, regression equations have been developed relating mixing heights and σθ. A closer examination of σθ revealed that half of the range of wind direction fluctuations should be taken, instead of one-sixth, to compute σθ.

The spatial distribution of SO2 showed a more or less uniform distribution with a slight intrusion towards the south. Winter months showed low concentrations, contrary to expectations. The variation of the concentration is found to be influenced more by the mixing height and the stack height than by the wind speed. In the densely populated areas the concentration exceeds the threshold limit value. However, the values reported appear to be high, because no depletion of the material through dry or wet deposition is assumed, and because calm conditions are included with a very light wind speed. A reduction of emissions during the night, with a consequent rise during the day, would bring down the levels of pollution. The probable locations for new industries could be the extreme southeast parts, because the concentration towards the north falls off very quickly, resulting in low concentrations. In such a case the pollutant spread would be towards the south and west, keeping the city interior relatively free from pollution. A more detailed examination of the pollutant spread, by means of models that take dry and wet deposition into account, may be necessary. Nevertheless, the present model serves to give the trend of the distribution of pollutant concentration, with which one can suggest optimum locations for new industries.
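As an illustration of the σθ computation discussed above, a minimal sketch comparing the conventional range/6 rule of thumb with the range/2 variant the thesis argues for, on synthetic wind-direction data; the exact procedure in the thesis may differ:

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic one-hour record of wind direction samples (degrees).
theta = 270.0 + 15.0 * rng.standard_normal(360)

direct = theta.std(ddof=1)               # direct standard deviation sigma_theta
wind_range = theta.max() - theta.min()   # total range of direction fluctuation
range_over_6 = wind_range / 6.0          # conventional rule of thumb
range_over_2 = wind_range / 2.0          # variant suggested in the thesis

print(f"direct: {direct:.1f}  range/6: {range_over_6:.1f}  range/2: {range_over_2:.1f}")
```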

Relevance:

100.00%

Publisher:

Abstract:

The nonlinear dynamics of certain important reaction systems are discussed and analysed in this thesis. Interest in theoretical and experimental studies of chemical reactions showing oscillatory dynamics and associated properties is increasing rapidly. An attempt is made to study some nonlinear phenomena exhibited by the well known chemical oscillator, the Belousov-Zhabotinskii reaction, whose mathematical properties have much in common with those of biological oscillators. While extremely complex, this reaction is still much simpler than biological systems, at least from the modelling point of view. A suitable model [19] for the system is analysed, and the researcher has studied the limit cycle behaviour of the system for different values of the stoichiometric parameter f, keeping the value of the reaction rate k6 fixed at k6 = 1. The more complicated three-variable model is stiff in nature.
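The abstract does not reproduce model [19]; as a hedged illustration, here is the widely used two-variable scaled Oregonator reduction of the Belousov-Zhabotinskii kinetics, integrated with SciPy. The parameter values are typical textbook choices, not the thesis's, and an implicit solver is used because of the stiffness noted above:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Two-variable scaled Oregonator (a standard reduction of the BZ kinetics).
eps, q, f = 0.04, 8e-4, 1.0   # typical values; f is the stoichiometric parameter

def oregonator(t, s):
    x, y = s
    dx = (x * (1.0 - x) - f * y * (x - q) / (x + q)) / eps
    dy = x - y
    return [dx, dy]

# Stiff system, so use an implicit (Radau) solver.
sol = solve_ivp(oregonator, (0.0, 60.0), [0.1, 0.1], method="Radau", max_step=0.1)
print(sol.y[:, -1])  # final state; plotting sol.y reveals the limit cycle
```

Sweeping f while holding the rates fixed, as the thesis does with k6, moves the system into and out of the oscillatory (limit-cycle) regime.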

Relevance:

100.00%

Publisher:

Abstract:

The deteriorating air quality, especially in urban environments, is a cause of serious concern. Although the atmosphere is an effective sink, it has its own limitations in effectively dispersing the pollutants dumped into it continuously by various sources, mainly industries. Many a time, it is not higher emissions that cause alarming levels of pollutants but unfavourable atmospheric conditions under which the atmosphere is not able to disperse them effectively, leading to an accumulation of pollutants near the ground. Hence, it is imperative to have an estimate of the atmospheric potential for dispersal of the substances emitted into it. This requires knowledge of the mixing height, ventilation coefficient, wind and stability of the region under study. Mere estimation of such pollution potential is not adequate unless the probable distribution of the concentration of pollutants is known; this can be obtained by means of mathematical models. The pollution potential, coupled with the distribution of concentration, provides a good basis for initiating steps to mitigate air pollution in any developing urban area. In this thesis, a fast-developing industrial city, namely Trivandrum, is chosen for estimating the pollution potential and determining the spatial distribution of sulphur dioxide concentration. Each of the parameters required for the pollution potential is discussed in detail separately. The thesis is divided into nine chapters.
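A minimal sketch of the ventilation coefficient referred to above: the mixing height times the mean wind speed through the mixed layer. The 6000 m²/s threshold is the optimum value cited in the related Cochin study earlier in this list, and the input numbers are hypothetical:

```python
# Ventilation coefficient: the atmosphere's capacity to dilute and flush pollutants.
def ventilation_coefficient(mixing_height_m: float, mean_wind_ms: float) -> float:
    """Mixing height (m) times vertically averaged wind speed (m/s) -> m^2/s."""
    return mixing_height_m * mean_wind_ms

vc = ventilation_coefficient(mixing_height_m=900.0, mean_wind_ms=4.5)  # hypothetical
print(vc, "m^2/s:", "good" if vc >= 6000.0 else "poor", "dispersal potential")
```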

Relevance:

100.00%

Publisher:

Abstract:

Two formulations of model-based object recognition are described. MAP Model Matching evaluates joint hypotheses of match and pose, while Posterior Marginal Pose Estimation evaluates the pose only. Local search in pose space is carried out with the Expectation-Maximization (EM) algorithm. Recognition experiments are described where the EM algorithm is used to refine and evaluate pose hypotheses in 2D and 3D. Initial hypotheses for the 2D experiments were generated by a simple indexing method: Angle Pair Indexing. The Linear Combination of Views method of Ullman and Basri is employed as the projection model in the 3D experiments.
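A hedged sketch of EM-based pose refinement of the general kind described: soft correspondences between projected model points and image points (E-step), then a weighted least-squares pose update (M-step). This assumes a 2D affine pose, Gaussian match errors, and no explicit background model, so it is a simplification of the paper's formulation, not a reimplementation of it:

```python
import numpy as np

def em_refine_pose(model_pts, image_pts, A, t, sigma=5.0, iters=20):
    """Refine a 2D affine pose (A, t) by EM over unknown correspondences."""
    X = np.hstack([model_pts, np.ones((len(model_pts), 1))])    # [m | 1]
    for _ in range(iters):
        proj = model_pts @ A.T + t                              # project model points
        d2 = ((proj[:, None, :] - image_pts[None, :, :])**2).sum(-1)
        w = np.exp(-d2 / (2.0 * sigma**2))
        w /= w.sum(axis=1, keepdims=True) + 1e-12               # responsibilities
        target = w @ image_pts                                  # expected match point
        P, *_ = np.linalg.lstsq(X, target, rcond=None)          # M-step pose update
        A, t = P[:2].T, P[2]
    return A, t

# Hypothetical usage: recover a synthetic affine pose from an identity start.
rng = np.random.default_rng(1)
m = rng.uniform(0, 100, (20, 2))
d = m @ np.array([[0.9, -0.2], [0.2, 0.9]]).T + [5.0, -3.0] + rng.normal(0, 1, (20, 2))
A, t = em_refine_pose(m, d, np.eye(2), np.zeros(2))
print(A, t)
```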

Relevance:

100.00%

Publisher:

Abstract:

We present a tree-structured architecture for supervised learning. The statistical model underlying the architecture is a hierarchical mixture model in which both the mixture coefficients and the mixture components are generalized linear models (GLIM's). Learning is treated as a maximum likelihood problem; in particular, we present an Expectation-Maximization (EM) algorithm for adjusting the parameters of the architecture. We also develop an on-line learning algorithm in which the parameters are updated incrementally. Comparative simulation results are presented in the robot dynamics domain.
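A minimal sketch of the mixture-model-plus-EM idea the abstract describes, reduced to a flat mixture of linear-Gaussian experts with input-independent mixing coefficients. The paper's architecture is hierarchical and its gates and experts are GLIMs conditioned on the input, so this is a simplified stand-in rather than the paper's algorithm:

```python
import numpy as np

def fit_mixture_of_experts(X, y, K=2, iters=50):
    """EM for a flat mixture of linear-Gaussian regression experts."""
    n, d = X.shape
    Xb = np.hstack([X, np.ones((n, 1))])
    rng = np.random.default_rng(0)
    W = rng.normal(0, 0.1, (K, d + 1))   # expert regression weights
    pi = np.full(K, 1.0 / K)             # mixing coefficients (input-independent here)
    var = np.ones(K)                     # expert noise variances
    for _ in range(iters):
        # E-step: responsibility of each expert for each data point.
        resid = y[:, None] - Xb @ W.T
        logp = -0.5 * resid**2 / var - 0.5 * np.log(2 * np.pi * var) + np.log(pi)
        logp -= logp.max(axis=1, keepdims=True)
        r = np.exp(logp)
        r /= r.sum(axis=1, keepdims=True)
        # M-step: weighted least squares per expert, then variance/mixing updates.
        for k in range(K):
            sw = np.sqrt(r[:, k])[:, None]
            W[k], *_ = np.linalg.lstsq(Xb * sw, y * sw[:, 0], rcond=None)
            var[k] = (r[:, k] * (y - Xb @ W[k])**2).sum() / r[:, k].sum()
        pi = r.mean(axis=0)
    return W, pi, var

# Hypothetical usage: piecewise-linear data split between two experts.
rng = np.random.default_rng(2)
X = rng.uniform(-1, 1, (500, 1))
y = np.where(X[:, 0] > 0, 2 * X[:, 0] + 1, -X[:, 0]) + rng.normal(0, 0.1, 500)
W, pi, var = fit_mixture_of_experts(X, y)
print(W, pi)
```

An on-line variant of the kind the abstract mentions would replace the batch M-step with incremental updates after each observation.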

Relevance:

100.00%

Publisher:

Abstract:

We formulate density estimation as an inverse operator problem. We then use convergence results of empirical distribution functions to true distribution functions to develop an algorithm for multivariate density estimation. The algorithm is based upon a Support Vector Machine (SVM) approach to solving inverse operator problems. The algorithm is implemented and tested on simulated data from different distributions and different dimensionalities, Gaussians and Laplacians in $R^2$ and $R^{12}$. A comparison in performance is made with Gaussian Mixture Models (GMMs). Our algorithm does as well as or better than the GMMs for the simulations tested and has the added advantage of being automated with respect to parameters.
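For reference, a minimal sketch of the GMM baseline used in the comparison, on simulated data of the kind described (a 2-D Gaussian and a 2-D Laplacian). scikit-learn is a stand-in here; the paper does not specify an implementation, and the component count is exactly the kind of free parameter the SVM approach avoids tuning:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# Simulated data: samples from a 2-D Gaussian and a 2-D Laplacian.
gauss = rng.multivariate_normal([0.0, 0.0], np.eye(2), 500)
laplace = rng.laplace(loc=3.0, scale=1.0, size=(500, 2))
data = np.vstack([gauss, laplace])

gmm = GaussianMixture(n_components=2, covariance_type="full", random_state=0).fit(data)
print(gmm.means_)
print(gmm.score(data))   # average log-likelihood under the fitted density
```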

Relevance:

100.00%

Publisher:

Abstract:

This paper proposes three tests to determine whether a given nonlinear device noise model is in agreement with accepted thermodynamic principles. These tests are applied to several models. One conclusion is that every Gaussian noise model for any nonlinear device predicts thermodynamically impossible circuit behavior: these models should be abandoned. But the nonlinear shot-noise model predicts thermodynamically acceptable behavior under a constraint derived here. Further, this constraint specifies the current noise amplitude at each operating point from knowledge of the device's v-i curve alone. For the Gaussian and shot-noise models, this paper shows how the thermodynamic requirements can be reduced to concise mathematical tests involving no approximations.
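For orientation, standard background (not the constraint derived in the paper): the thermodynamic benchmark for a linear resistor in equilibrium is the Nyquist-Johnson spectral density, and the classical shot-noise amplitude is fixed by the DC current,

$$ S_V(f) = 4kTR, \qquad S_I(f) = 2qI. $$

Tests of the kind proposed ask whether a candidate noise model, evaluated at each operating point of the nonlinear $v$-$i$ curve, can be reconciled with equilibrium constraints of this sort.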

Relevance:

100.00%

Publisher:

Abstract:

In most studies on civil wars, determinants of conflict have hitherto been explored under the assumption that the actors involved were either unitary or stable. However, if this intra-group homogeneity assumption does not hold, empirical econometric estimates may be biased. We use a Fixed Effects Finite Mixture Model (FE-FMM) approach to address this issue, which provides a representation of heterogeneity when data originate from different latent classes and the affiliation is unknown. It makes it possible to identify sub-populations within a population as well as the determinants of their behaviors. By combining various data sources for the period 2000-2005, we apply this methodology to the Colombian conflict. Our results highlight a behavioral heterogeneity in guerrilla armed groups and their distinct economic correlates. By contrast, paramilitaries behave as a rather homogeneous group.
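The generic finite mixture likelihood underlying this approach has the standard form (a sketch; the paper's FE-FMM additionally includes fixed effects in each component):

$$ f(y_{it} \mid x_{it}) \;=\; \sum_{k=1}^{K} \pi_k \, f_k(y_{it} \mid x_{it}; \theta_k), $$

where the class affiliation $k$ is latent and the $\pi_k$ are the class shares; estimation recovers both the component parameters $\theta_k$ and each group's most likely class.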

Relevance:

100.00%

Publisher:

Abstract:

Changes to stratospheric sudden warmings (SSWs) over the coming century, as predicted by the Geophysical Fluid Dynamics Laboratory (GFDL) chemistry climate model [Atmospheric Model With Transport and Chemistry (AMTRAC)], are investigated in detail. Two sets of integrations, each a three-member ensemble, are analyzed. The first set is driven with observed climate forcings between 1960 and 2004; the second is driven with climate forcings from a coupled model run, including trace gas concentrations representing a midrange estimate of future anthropogenic emissions between 1990 and 2099. A small positive trend in the frequency of SSWs is found. This trend, amounting to 1 event/decade over a century, is statistically significant at the 90% confidence level and is consistent over the two sets of model integrations. Comparison of the model SSW climatology between the late 20th and 21st centuries shows that the increase is largest toward the end of the winter season. In contrast, the dynamical properties are not significantly altered in the coming century, despite the increase in SSW frequency. Owing to the intrinsic complexity of our model, the direct cause of the predicted trend in SSW frequency remains an open question.

Relevance:

100.00%

Publisher:

Abstract:

A traditional method of validating the performance of a flood model, when remotely sensed data of the flood extent are available, is to compare the predicted flood extent to that observed. The performance measure employed often uses areal pattern-matching to assess the degree to which the two extents overlap. Recently, remote sensing of flood extents using synthetic aperture radar (SAR) and airborne scanning laser altimetry (LIDAR) has made the synoptic measurement of water surface elevations along flood waterlines more straightforward, and this has raised the possibility of using alternative performance measures based on height. This paper considers the advantages that can accrue from using a performance measure based on waterline elevations rather than one based on areal patterns of wet and dry pixels.

The two measures were compared for their ability to estimate flood inundation uncertainty maps from a set of model runs carried out to span the acceptable model parameter range in a GLUE-based analysis. A 1 in 5-year flood on the Thames in 1992 was used as a test event. As is typical for UK floods, only a single SAR image of observed flood extent was available for model calibration and validation. A simple implementation of a two-dimensional flood model (LISFLOOD-FP) was used to generate model flood extents for comparison with that observed. The performance measure based on height differences of corresponding points along the observed and modelled waterlines was found to be significantly more sensitive to the channel friction parameter than the measure based on areal patterns of flood extent. The former was able to restrict the parameter range of acceptable model runs and hence reduce the number of runs necessary to generate an inundation uncertainty map; as a result, there was less uncertainty in the final flood risk map. The uncertainty analysis included the effects of uncertainties in the observed flood extent as well as in model parameters. The height-based measure was found to be more sensitive when increased heighting accuracy was achieved by requiring that observed waterline heights vary slowly along the reach. The technique allows the decomposition of the reach into sections, with different effective channel friction parameters used in different sections, which in this case resulted in lower r.m.s. height differences between observed and modelled waterlines than those achieved by runs using a single friction parameter for the whole reach.

However, a validation of the modelled inundation uncertainty using the calibration event showed a significant difference between the uncertainty map and the observed flood extent. While this was true for both measures, the difference was especially significant for the height-based one. This is likely to be due to the conceptually simple flood inundation model and the coarse application resolution employed in this case. The increased sensitivity of the height-based measure may lead to an increased onus being placed on the model developer in the production of a valid model.
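A minimal sketch of the two kinds of performance measure being compared, assuming a binary wet/dry raster for the areal measure and paired heights at corresponding waterline points for the height-based one. The areal measure is written in its common intersection-over-union form; the paper's exact definitions may differ:

```python
import numpy as np

def areal_fit(observed_wet: np.ndarray, modelled_wet: np.ndarray) -> float:
    """Pattern-matching measure: overlapping wet area / union of wet areas."""
    inter = np.logical_and(observed_wet, modelled_wet).sum()
    union = np.logical_or(observed_wet, modelled_wet).sum()
    return float(inter / union)

def waterline_rmse(obs_heights: np.ndarray, model_heights: np.ndarray) -> float:
    """Height-based measure: r.m.s. difference at corresponding waterline points."""
    return float(np.sqrt(np.mean((obs_heights - model_heights) ** 2)))
```

In a GLUE analysis, either measure scores each parameter set; a more sensitive measure rejects more runs and so tightens the resulting inundation uncertainty map.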

Relevance:

100.00%

Publisher:

Abstract:

1. We compared the baseline phosphorus (P) concentrations inferred by diatom-P transfer functions and export coefficient models at 62 lakes in Great Britain to assess whether the techniques produce similar estimates of historical nutrient status.

2. There was a strong linear relationship between the two sets of values over the whole total P (TP) gradient (2-200 μg TP L⁻¹). However, a systematic bias was observed, with the diatom model producing the higher values in 46 lakes (of which the values differed by more than 10 μg TP L⁻¹ in 21). The export coefficient model gave the higher values in 10 lakes (of which the values differed by more than 10 μg TP L⁻¹ in only 4).

3. The difference between baseline and present-day TP concentrations was calculated to compare the extent of eutrophication inferred by the two sets of model output. There was generally poor agreement between the amounts of change estimated by the two approaches. The discrepancy in both the baseline values and the degree of change inferred by the models was greatest in the shallow and more productive sites.

4. Both approaches were applied to two lakes in the English Lake District where long-term P data exist, to assess how well the models track measured P concentrations since approximately 1850. There was good agreement between the pre-enrichment TP concentrations generated by the models. The diatom model paralleled the steeper rise in maximum soluble reactive P (SRP) more closely than the gradual increase in annual mean TP in both lakes. The export coefficient model produced a closer fit to observed annual mean TP concentrations for both sites, tracking the changes in total external nutrient loading.

5. A combined approach is recommended, with the diatom model employed to reflect the nature and timing of the in-lake response to changes in nutrient loading, and the export coefficient model used to establish the origins and extent of changes in the external load and to assess potential reductions in loading under different management scenarios.

6. However, caution must be exercised when applying these models to shallow lakes, where the export coefficient model TP estimate will not include internal P loading from lake sediments and where the diatom TP inferences may over-estimate TP concentrations because of the high abundance of benthic taxa, many of which are poor indicators of trophic state.
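A minimal sketch of the export coefficient side of the comparison: the external P load is a sum of land-use areas times per-area export coefficients, here converted to an in-lake TP concentration with a simple Vollenweider-type relation. The coefficients, lake properties, and the conversion formula are illustrative textbook-scale assumptions, not the paper's calibrated values:

```python
# Export coefficient model: annual P load from catchment land use.
# Coefficients (kg P ha^-1 yr^-1) are illustrative, not calibrated values.
export_coeffs = {"arable": 0.65, "pasture": 0.30, "woodland": 0.02, "urban": 0.83}
areas_ha = {"arable": 1200.0, "pasture": 800.0, "woodland": 400.0, "urban": 150.0}

load_kg = sum(export_coeffs[u] * areas_ha[u] for u in areas_ha)

# Vollenweider-type conversion of areal load to an in-lake TP concentration.
lake_area_m2 = 2.0e6
mean_depth_m = 5.0
residence_time_yr = 0.8
areal_load = load_kg * 1e6 / lake_area_m2          # mg P m^-2 yr^-1
q_s = mean_depth_m / residence_time_yr             # hydraulic load, m yr^-1
tp_ug_per_l = areal_load / (q_s * (1 + residence_time_yr ** 0.5))
print(f"load = {load_kg:.0f} kg P/yr, predicted TP = {tp_ug_per_l:.0f} μg/L")
```

Note that, as point 6 warns, nothing in this budget represents internal loading from lake sediments.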

Relevance:

100.00%

Publisher:

Abstract:

Remote sensing from space-borne platforms is often seen as an appealing method of monitoring components of the hydrological cycle, including river discharge, because of its spatial coverage. However, data from these platforms are often less than ideal, because the geophysical properties of interest are rarely measured directly and the measurements that are taken can be subject to significant errors. This study assimilated water levels derived from a TerraSAR-X synthetic aperture radar image and digital aerial photography with simulations from a two-dimensional hydraulic model to estimate discharge, inundation extent, depths and velocities at the confluence of the rivers Severn and Avon, UK. An ensemble Kalman filter (EnKF) was used to assimilate spot-height water levels derived by intersecting shorelines from the imagery with a digital elevation model. Discharge was estimated from the ensemble of simulations using state augmentation and then compared with gauge data. Assimilating the real data reduced the error between analyzed mean water levels and levels from three gauging stations to less than 0.3 m, which is less than typically found in post-event water-mark data from the field at these scales. Measurement bias was evident, but the method still provided a means of improving estimates of discharge for high flows where gauge data are unavailable or of poor quality. Posterior estimates of discharge had standard deviations between 52.7 m³ s⁻¹ and 63.3 m³ s⁻¹, which were below 15% of the gauged flows along the reach. Therefore, assuming a roughness uncertainty of 0.03-0.05 and no model structural errors, discharge could be estimated by the EnKF with an accuracy similar to that arguably expected from gauging stations during flood events. Quality control prior to assimilation, where measurements were rejected for being in areas of high topographic slope or close to tall vegetation and trees, was found to be essential. The study demonstrates the potential, but also the significant limitations, of currently available imagery to reduce discharge uncertainty in un-gauged or poorly gauged basins when combined with model simulations in a data assimilation framework.
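A minimal sketch of the EnKF analysis step with state augmentation as the abstract describes it: water levels are the observed part of the state vector, and discharge is an augmented, unobserved entry that is updated through the ensemble cross-covariances. All names and shapes here are illustrative:

```python
import numpy as np

def enkf_analysis(ensemble, H, obs, obs_err_std):
    """Stochastic EnKF analysis step.

    ensemble: n_state x n_members forecast states (water levels augmented
              with discharge, so the filter can update the unobserved flow).
    H:        n_obs x n_state observation operator selecting observed levels.
    obs:      n_obs vector of measured water levels.
    """
    n_state, n_mem = ensemble.shape
    rng = np.random.default_rng(0)
    # Perturb observations so the analysis ensemble has the right spread.
    perturbed = obs[:, None] + rng.normal(0.0, obs_err_std, (len(obs), n_mem))
    A = ensemble - ensemble.mean(axis=1, keepdims=True)   # ensemble anomalies
    HA = H @ A
    P_hh = HA @ HA.T / (n_mem - 1) + obs_err_std**2 * np.eye(len(obs))
    K = (A @ HA.T / (n_mem - 1)) @ np.linalg.inv(P_hh)    # Kalman gain
    return ensemble + K @ (perturbed - H @ ensemble)
```

The discharge entry of the analyzed ensemble then provides the posterior discharge estimate and its standard deviation, the quantities the abstract reports against the gauges.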

Relevance:

100.00%

Publisher:

Abstract:

Two simple and frequently used capture–recapture estimates of population size are compared: Chao's lower-bound estimate and Zelterman's estimate allowing for contaminated distributions. In the Poisson case it is shown that if there are only counts of ones and twos, Zelterman's estimator is always bounded above by Chao's estimator. If counts larger than two exist, Zelterman's estimator becomes larger than Chao's provided the ratio of the frequencies of counts of twos and ones is small enough. A similar analysis is provided for the binomial case. For a two-component mixture of Poisson distributions the asymptotic bias of both estimators is derived, and it is shown that the Zelterman estimator can suffer from large overestimation bias. A modified Zelterman estimator is suggested, and the bias-corrected version of Chao's estimator is also considered. All four estimators are compared in a simulation study.
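Both estimators have simple closed forms built from the singleton and doubleton frequencies f1 and f2. A sketch of the standard textbook forms, on hypothetical count data:

```python
import math

def chao_lower_bound(f1: int, f2: int, n_observed: int) -> float:
    """Chao's lower-bound estimate of total population size."""
    return n_observed + f1**2 / (2.0 * f2)

def zelterman(f1: int, f2: int, n_observed: int) -> float:
    """Zelterman's estimator: a robust Poisson rate from ones and twos only."""
    lam = 2.0 * f2 / f1
    return n_observed / (1.0 - math.exp(-lam))

# Hypothetical data: f1 units seen once, f2 seen twice, n distinct units seen.
f1, f2, n = 60, 20, 100
print(chao_lower_bound(f1, f2, n), zelterman(f1, f2, n))
```

Because Zelterman's rate uses only f1 and f2, it is robust to contamination in the higher counts, which is also the source of the overestimation bias analysed in the paper.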

Relevance:

100.00%

Publisher:

Abstract:

The paper considers meta-analysis of diagnostic studies that use a continuous score for classification of study participants into healthy or diseased groups. Classification is often done on the basis of a threshold or cut-off value, which might vary between studies. Consequently, conventional meta-analysis methodology focusing solely on separate analyses of sensitivity and specificity might be confounded by a potentially unknown variation of the cut-off value. To cope with this phenomenon it is suggested to use, instead, an overall estimate of the misclassification error, previously suggested and known as Youden's index; furthermore, it is argued that this index is less prone to between-study variation of cut-off values. A simple Mantel–Haenszel estimator is suggested as a summary measure of the overall misclassification error, which adjusts for a potential study effect. The measure of the misclassification error based on Youden's index is advantageous in that it easily allows an extension to a likelihood approach, which is then able to cope with unobserved heterogeneity via a nonparametric mixture model. All methods are illustrated with an example of a diagnostic meta-analysis on duplex Doppler ultrasound, with angiography as the standard, for stroke prevention.
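A sketch of the two building blocks named above: Youden's index for a single study's 2x2 table, and Mantel–Haenszel pooling across studies. The MH function below is the generic odds-ratio form, shown to illustrate how an MH-type summary adjusts for a study effect; the paper's MH summary of the misclassification error is analogous but not necessarily identical:

```python
def youden_index(tp, fn, fp, tn):
    """Youden's J = sensitivity + specificity - 1 for one study's 2x2 table."""
    sens = tp / (tp + fn)
    spec = tn / (tn + fp)
    return sens + spec - 1.0

def mantel_haenszel_or(tables):
    """Generic Mantel-Haenszel pooled odds ratio across study 2x2 tables."""
    num = sum(tp * tn / (tp + fn + fp + tn) for tp, fn, fp, tn in tables)
    den = sum(fp * fn / (tp + fn + fp + tn) for tp, fn, fp, tn in tables)
    return num / den

# Hypothetical three-study example: (TP, FN, FP, TN) per study.
tables = [(40, 10, 8, 42), (55, 20, 12, 63), (30, 5, 10, 45)]
print([round(youden_index(*t), 2) for t in tables])
print(round(mantel_haenszel_or(tables), 2))
```

Because J = sensitivity + specificity - 1 combines both error rates, it stays comparable across studies whose cut-off values differ, which is the motivation given in the abstract.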