153 results for Gaussian complexities


Relevance:

10.00%

Publisher:

Abstract:

Nutrigenetics and personalised nutrition are components of the concept that, in the future, genotyping will be used as a means of defining dietary recommendations to suit the individual. Over the last two decades there has been an explosion of research in this area, with often conflicting findings reported in the literature. Reviews of the literature on apoE genotype and cardiovascular health, apoA5 genotype and postprandial lipaemia, and perilipin and adiposity are used to demonstrate the complexities of genotype-phenotype associations and the aetiology of apparent between-study inconsistencies in the significance and size of effects. Furthermore, genetic research currently often takes a very reductionist approach, examining the interactions between individual genotypes and individual disease biomarkers and how they are modified by isolated dietary components or foods. Yet each individual possesses potentially hundreds of 'at-risk' gene variants and consumes a highly complex diet. For nutrigenetics to become a useful public health tool, there is a great need for mathematical and bioinformatic strategies that examine the combined impact of multiple gene variants on a range of health outcomes and establish how these associations can be modified using combined dietary strategies.

Relevance:

10.00%

Publisher:

Abstract:

In this work the G_A^0 distribution is assumed as the universal model for amplitude Synthetic Aperture Radar (SAR) image data under the multiplicative model. The observed data are therefore assumed to obey a G_A^0(α, γ, n) law, where the parameter n is related to the speckle noise and (α, γ) are related to the ground truth, giving information about the background. Maps generated by estimating (α, γ) at each coordinate can therefore be used as input to classification methods. Maximum-likelihood estimators are derived and used to form estimated parameter maps. This estimation can be hampered by the presence of corner reflectors, man-made objects used to calibrate SAR images that produce large return values. To alleviate this contamination, robust (M) estimators are also derived for the universal model. Gaussian maximum-likelihood classification is used to obtain maps from hard-to-deal-with simulated data, and the superiority of robust estimation is quantitatively assessed.
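
As an illustration of the estimation step described above, the following sketch fits (α, γ) by maximum likelihood for a single image window, assuming the standard G_A^0 amplitude density of Frery et al. with a known number of looks n. The function names, starting values, and optimiser choice are illustrative assumptions, not the paper's implementation.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import gammaln

def ga0_negloglik(params, z, n):
    """Negative log-likelihood of the G_A^0(alpha, gamma, n) amplitude model
    (density as in Frery et al.; alpha < 0, gamma > 0, n = looks, known).
    z is a 1-D array of positive amplitude values from one window."""
    alpha, gamma = params
    if alpha >= 0.0 or gamma <= 0.0:     # enforce the parameter space
        return np.inf
    ll = (np.log(2.0) + n * np.log(n) + gammaln(n - alpha)
          - alpha * np.log(gamma) - gammaln(n) - gammaln(-alpha)
          + (2 * n - 1) * np.log(z)
          - (n - alpha) * np.log(gamma + n * z**2))
    return -np.sum(ll)

def fit_ga0(z, n, alpha0=-2.0, gamma0=1.0):
    """Maximum-likelihood estimates of (alpha, gamma) for one window."""
    res = minimize(ga0_negloglik, x0=[alpha0, gamma0], args=(z, n),
                   method="Nelder-Mead")
    return res.x   # (alpha_hat, gamma_hat)
```

Running `fit_ga0` over a sliding window yields the (α, γ) parameter maps used as classifier input; the robust M-estimation variant would replace the log-likelihood sum with a bounded influence function.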

Relevance:

10.00%

Publisher:

Abstract:

A novel framework for multimodal semantic-associative collateral image labelling, aiming at associating image regions with textual keywords, is described. Both the primary image and collateral textual modalities are exploited in a cooperative and complementary fashion. The collateral content- and context-based knowledge is used to bias the mapping from the low-level region-based visual primitives to the high-level visual concepts defined in a visual vocabulary. We introduce the notion of collateral context, which is represented as a co-occurrence matrix of the visual keywords. A collaborative mapping scheme is devised using statistical methods such as the Gaussian distribution or Euclidean distance, together with a collateral content- and context-driven inference mechanism. Finally, we use Self-Organising Maps to examine the classification and retrieval effectiveness of the proposed high-level image feature vector model, which is constructed from the image labelling results.
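
A minimal sketch of the collateral-context representation mentioned above: building a keyword co-occurrence matrix from per-image keyword annotations. The variable names and the row-normalisation choice are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def cooccurrence_matrix(annotations, vocabulary):
    """Count how often pairs of visual keywords co-occur in the same image."""
    index = {kw: i for i, kw in enumerate(vocabulary)}
    C = np.zeros((len(vocabulary), len(vocabulary)))
    for keywords in annotations:                 # one keyword set per image
        ids = [index[k] for k in set(keywords) if k in index]
        for i in ids:
            for j in ids:
                if i != j:
                    C[i, j] += 1
    # Row-normalise so C[i, j] approximates P(keyword j | keyword i)
    row_sums = C.sum(axis=1, keepdims=True)
    return np.divide(C, row_sums, out=np.zeros_like(C), where=row_sums > 0)

# Example: three images, each annotated with a set of visual keywords
vocab = ["sky", "grass", "water", "building"]
images = [["sky", "grass"], ["sky", "water"], ["sky", "building", "grass"]]
print(cooccurrence_matrix(images, vocab))
```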

Relevance:

10.00%

Publisher:

Abstract:

This work compares classification results for lactose, mandelic acid and DL-mandelic acid, obtained on the basis of their respective THz transients. The performance of three different pre-processing algorithms applied to the time-domain signatures obtained using a THz-transient spectrometer is contrasted by evaluating the classifier performance. A range of amplitudes of zero-mean white Gaussian noise is used to artificially degrade the signal-to-noise ratio of the time-domain signatures and so generate the data sets presented to the classifier for both learning and validation purposes. This gradual degradation of the interferograms by increasing the noise level is equivalent to performing measurements with a reduced integration time. Three signal-processing algorithms were adopted for the evaluation of the complex insertion loss function of the samples under study: (a) standard evaluation by ratioing the sample and background spectra; (b) a subspace identification algorithm; and (c) a novel wavelet-packet identification procedure. Within-class and between-class dispersion metrics are adopted for the three data sets, and a discrimination metric evaluates how well the three classes can be distinguished within the frequency range 0.1-1.0 THz using the above algorithms.
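
The noise-degradation step described above can be sketched as follows: zero-mean white Gaussian noise is scaled so the result has a requested signal-to-noise ratio. Parameterising by target SNR (rather than by noise amplitude directly) is an assumption made here for illustration.

```python
import numpy as np

def degrade_to_snr(signal, target_snr_db, seed=None):
    """Add zero-mean white Gaussian noise so the result has the target SNR."""
    rng = np.random.default_rng(seed)
    signal_power = np.mean(signal**2)
    noise_power = signal_power / 10.0**(target_snr_db / 10.0)
    noise = rng.normal(0.0, np.sqrt(noise_power), size=signal.shape)
    return signal + noise

# Example: progressively noisier copies of one (toy) interferogram
t = np.linspace(0.0, 1.0, 2048)
interferogram = np.exp(-((t - 0.3) / 0.01)**2)   # stand-in THz transient
noisy_sets = {snr: degrade_to_snr(interferogram, snr) for snr in (40, 20, 10)}
```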

Relevance:

10.00%

Publisher:

Abstract:

The identification and visualization of clusters formed by motor unit action potentials (MUAPs) is an essential step in investigations seeking to explain the control of the neuromuscular system. This work introduces the generative topographic mapping (GTM), a novel machine learning tool, for clustering of MUAPs, and extends the GTM technique to provide a way of visualizing MUAPs. The performance of GTM was compared to that of three other clustering methods: the self-organizing map (SOM), a Gaussian mixture model (GMM), and the neural-gas network (NGN). The results, based on the study of experimental MUAPs, showed that the success rates of both GTM and SOM exceeded those of GMM and NGN, and that GTM may in practice be used as a principled alternative to the SOM in the study of MUAPs. A visualization tool, which we call the GTM grid, was devised for visualizing MUAPs lying in a high-dimensional space. The visualization provided by the GTM grid was compared to that obtained from principal component analysis (PCA). © 2005 Elsevier Ireland Ltd. All rights reserved.
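
GTM has no standard implementation in the common scientific Python stack, but the GMM baseline referred to above is straightforward to reproduce. The hedged sketch below clusters toy MUAP-like feature vectors with scikit-learn; it is an illustration of the baseline method, not the authors' code, and the data are synthetic stand-ins.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Toy stand-in for MUAP waveforms: rows are spikes, columns are samples
rng = np.random.default_rng(0)
muaps = np.vstack([rng.normal(loc=m, scale=0.3, size=(50, 64))
                   for m in (-1.0, 0.0, 1.0)])      # three "motor units"

gmm = GaussianMixture(n_components=3, covariance_type="diag", random_state=0)
labels = gmm.fit_predict(muaps)                     # cluster index per MUAP
print(np.bincount(labels))                          # cluster sizes
```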

Relevance:

10.00%

Publisher:

Abstract:

A greedy technique is proposed to construct parsimonious kernel classifiers using the orthogonal forward selection method and boosting, based on the Fisher ratio as a class-separability measure. Unlike most kernel classification methods, which restrict kernel means to the training input data and use a fixed common variance for all the kernel terms, the proposed technique can tune both the mean vector and the diagonal covariance matrix of each individual kernel by incrementally maximizing the Fisher ratio. An efficient weighted optimization method based on boosting is developed to append kernels one by one in an orthogonal forward selection procedure. Experimental results obtained using this construction technique demonstrate that it offers a viable alternative to existing state-of-the-art kernel modeling methods for constructing sparse Gaussian radial basis function network classifiers that generalize well.
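
For reference, the Fisher ratio used above as the class-separability measure can be computed for a scalar model output as follows. The two-class formulation shown is the common textbook definition, assumed here rather than taken from the paper.

```python
import numpy as np

def fisher_ratio(y, labels):
    """Fisher ratio F = (m1 - m2)^2 / (s1^2 + s2^2) for a scalar output y,
    with class labels in {-1, +1}."""
    y1, y2 = y[labels == 1], y[labels == -1]
    return (y1.mean() - y2.mean())**2 / (y1.var() + y2.var())

# Example: two well-separated classes give a large Fisher ratio
rng = np.random.default_rng(0)
y = np.concatenate([rng.normal(2.0, 1.0, 100), rng.normal(-2.0, 1.0, 100)])
labels = np.concatenate([np.ones(100), -np.ones(100)])
print(fisher_ratio(y, labels))   # large value => good class separability
```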

Relevance:

10.00%

Publisher:

Abstract:

A quadratic programming optimization procedure for designing asymmetric apodization windows tailored to the shape of time-domain sample waveforms recorded using a terahertz transient spectrometer is proposed. By artificially degrading the waveforms, the performance of the designed window in both the time and the frequency domains is compared with that of conventional rectangular, triangular (Mertz), and Hamming windows. Examples of window optimization assuming Gaussian functions as the building elements of the apodization window are provided. The formulation is sufficiently general to accommodate other basis functions. © 2007 Optical Society of America
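
As a hedged illustration of the idea, the sketch below builds an asymmetric apodization window as a non-negative sum of Gaussian basis functions fitted to a waveform envelope by constrained least squares, which is a small quadratic programme. The basis placement, widths, toy waveform, and the use of scipy's non-negative least squares solver are all illustrative assumptions, not the paper's formulation.

```python
import numpy as np
from scipy.optimize import nnls

def gaussian_basis(t, centers, width):
    """Matrix of Gaussian building elements, one column per centre."""
    return np.exp(-((t[:, None] - centers[None, :]) / width)**2)

# Toy asymmetric waveform envelope to which the window is tailored
t = np.linspace(0.0, 10.0, 500)
envelope = np.abs(np.exp(-(t - 3.0)) * (t > 3.0) * np.sin(8.0 * t))

# Fit non-negative Gaussian weights: min ||A w - envelope|| s.t. w >= 0
centers = np.linspace(0.0, 10.0, 25)
A = gaussian_basis(t, centers, width=0.4)
weights, _ = nnls(A, envelope)

window = A @ weights
window /= window.max()            # normalise the apodization window
```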

Relevance:

10.00%

Publisher:

Abstract:

In this study a minimum-variance neuro self-tuning proportional-integral-derivative (PID) controller is designed for complex multiple-input multiple-output (MIMO) dynamic systems. An approximation model is constructed that consists of two functional blocks. The first block uses a linear submodel to approximate the dominant system dynamics around a selected number of operating points. The second block is used as an error agent, implemented by a neural network, to accommodate the inaccuracy possibly introduced by the linear-submodel approximation, various complexities/uncertainties, and the complicated coupling effects frequently exhibited in non-linear MIMO dynamic systems. With the proposed model structure, the controller design for a MIMO plant with n inputs and n outputs can, for example, be decomposed into n independent single-input single-output (SISO) subsystem designs. The effectiveness of the controller design procedure is initially verified through simulations of industrial examples.

Relevance:

10.00%

Publisher:

Abstract:

A beamforming algorithm is introduced based on a general objective function that approximates the bit error rate for wireless systems with binary phase shift keying (BPSK) and quadrature phase shift keying (QPSK) modulation schemes. The proposed minimum approximate bit error rate (ABER) beamforming approach does not rely on a Gaussian assumption for the channel noise, so it remains applicable when the channel noise is non-Gaussian. Simulation results show that the proposed minimum-ABER solution improves on the standard minimum mean square error (MMSE) beamforming solution in terms of a lower achievable system bit error rate.
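
One common way to build such an approximate-BER objective is a kernel density estimate of the beamformer's soft decision variable, as in the minimum-BER beamforming literature. The sketch below assumes BPSK and a Gaussian kernel; it is an illustrative reconstruction of that style of objective, not necessarily the paper's exact function, and the kernel width rho is an assumed tuning parameter.

```python
import numpy as np
from scipy.special import erfc

def q_function(x):
    """Gaussian tail probability Q(x)."""
    return 0.5 * erfc(x / np.sqrt(2.0))

def approximate_ber(w, X, b, rho=0.1):
    """Kernel-density BER estimate for beamformer w on BPSK training data.

    w   : (M,) complex beamformer weight vector
    X   : (N, M) received antenna-array snapshots
    b   : (N,) transmitted symbols in {-1, +1}
    rho : kernel width (smoothing parameter)
    """
    y = np.real(X @ np.conj(w))      # beamformer soft output Re(w^H x)
    signed = b * y                   # decision errors correspond to signed <= 0
    return np.mean(q_function(signed / rho))
```

Minimising this smooth surrogate over w (e.g. by gradient descent) targets the bit error rate directly instead of the mean square error, which is why no Gaussian noise assumption is needed.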

Relevance:

10.00%

Publisher:

Abstract:

A novel framework referred to as collaterally confirmed labelling (CCL) is proposed, aiming at localising the visual semantics to regions of interest in images with textual keywords. Both the primary image and collateral textual modalities are exploited in a mutually co-referencing and complementary fashion. The collateral content- and context-based knowledge is used to bias the mapping from the low-level region-based visual primitives to the high-level visual concepts defined in a visual vocabulary. We introduce the notion of collateral context, which is represented as a co-occurrence matrix of the visual keywords. A collaborative mapping scheme is devised using statistical methods such as the Gaussian distribution or Euclidean distance, together with a collateral content- and context-driven inference mechanism. We introduce a novel high-level visual content descriptor devised for performing semantic-based image classification and retrieval. The proposed image feature vector model is fundamentally underpinned by the CCL framework. Two different high-level image feature vector models are developed from the CCL labelling results, for the purposes of image data clustering and retrieval, respectively. A subset of the Corel image collection has been used to evaluate the proposed method. The experimental results to date already indicate that the proposed semantic-based visual content descriptors outperform both traditional visual and textual image feature models. © 2007 Elsevier B.V. All rights reserved.
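
To make the region-to-concept mapping step concrete: each region's low-level feature vector can be scored against a per-keyword Gaussian model (Euclidean distance to keyword centroids is the degenerate unit-variance case). The diagonal-covariance scoring shown below is an assumption about how such a mapping is typically realised, not the paper's exact scheme.

```python
import numpy as np

def label_regions(region_features, keyword_means, keyword_vars, vocabulary):
    """Assign each image region the visual keyword with the highest Gaussian
    log-likelihood under a diagonal-covariance model per keyword.

    region_features : (R, D) low-level feature vectors, one row per region
    keyword_means   : (K, D) per-keyword feature means
    keyword_vars    : (K, D) per-keyword feature variances
    """
    labels = []
    for f in region_features:
        ll = -0.5 * np.sum((f - keyword_means)**2 / keyword_vars
                           + np.log(2.0 * np.pi * keyword_vars), axis=1)
        labels.append(vocabulary[int(np.argmax(ll))])
    return labels
```

The collateral-context bias would then reweight these likelihoods using the keyword co-occurrence matrix before the final assignment.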

Relevance:

10.00%

Publisher:

Abstract:

Pro-poor decision making depends on an understanding of the complexities and interrelationships between household livelihood, demographic, and economic factors. This article describes the design and implementation of the Poverty Assessor, a software programme that assists practitioners, policy makers, and researchers in visualising the direct impacts of specific livelihood factors and events on poverty among populations living in poverty. The software enables users to upload their own data and profile households in relation to the national poverty line by selecting from a range of demographic and livelihood indicators. The authors present findings from the programme using a dataset from Bolivia.

Relevance:

10.00%

Publisher:

Abstract:

Cloud radar and lidar can be used to evaluate the skill of numerical weather prediction models in forecasting the timing and placement of clouds, but care must be taken in choosing an appropriate skill metric because of the non-Gaussian nature of cloud-fraction distributions. We compare the properties of a number of different verification measures and conclude that, of the existing measures, the log of odds ratio is the most suitable for cloud fraction. We also propose a new measure, the Symmetric Extreme Dependency Score (SEDS), which has very attractive properties: it is equitable (for large samples), difficult to hedge, and independent of the frequency of occurrence of the quantity being verified. We then use data from five European ground-based sites and seven forecast models, processed using the 'Cloudnet' analysis system, to investigate the dependence of forecast skill on cloud-fraction threshold (for binary skill scores), height, horizontal scale and (for the Met Office and German Weather Service models) forecast lead time. The models are found to be least skilful at predicting the timing and placement of boundary-layer clouds and most skilful at predicting mid-level clouds, although in the latter case they tend to underestimate mean cloud fraction when cloud is present. Skill is found to decrease approximately inverse-exponentially with forecast lead time, enabling a forecast 'half-life' to be estimated; when considering the skill of instantaneous model snapshots, typical values range between 2.5 and 4.5 days. Copyright © 2009 Royal Meteorological Society
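
For reference, both scores named above are simple functions of a 2x2 contingency table. The sketch below follows the standard definitions as I understand them from the verification literature (SEDS as introduced in this line of work); the variable names and the toy counts are mine.

```python
import numpy as np

def log_odds_ratio(a, b, c, d):
    """Log of odds ratio from a 2x2 contingency table:
    a = hits, b = false alarms, c = misses, d = correct negatives."""
    return np.log((a * d) / (b * c))

def seds(a, b, c, d):
    """Symmetric Extreme Dependency Score:
    SEDS = [ln((a+b)/n) + ln((a+c)/n)] / ln(a/n) - 1."""
    n = a + b + c + d
    return (np.log((a + b) / n) + np.log((a + c) / n)) / np.log(a / n) - 1.0

# Toy table: SEDS is 1 for a perfect forecast (b = c = 0), ~0 for random
print(log_odds_ratio(a=50, b=20, c=30, d=900))
print(seds(a=50, b=20, c=30, d=900))
```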

Relevance:

10.00%

Publisher:

Abstract:

Rainfall can be modeled as a spatially correlated random field superimposed on a background mean value; therefore, geostatistical methods are appropriate for the analysis of rain gauge data. Nevertheless, there are certain typical features of these data that must be taken into account to produce useful results, including the generally non-Gaussian mixed distribution, the inhomogeneity and low density of observations, and the temporal and spatial variability of spatial correlation patterns. Many studies show that rigorous geostatistical analysis performs better than other available interpolation techniques for rain gauge data. Important elements are the use of climatological variograms and the appropriate treatment of rainy and nonrainy areas. Benefits of geostatistical analysis for rainfall include ease of estimating areal averages, estimation of uncertainties, and the possibility of using secondary information (e.g., topography). Geostatistical analysis also facilitates the generation of ensembles of rainfall fields that are consistent with a given set of observations, allowing for a more realistic exploration of errors and their propagation in downstream models, such as those used for agricultural or hydrological forecasting. This article provides a review of geostatistical methods used for kriging, exemplified where appropriate by daily rain gauge data from Ethiopia.
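
As a concrete illustration of the kriging step reviewed above, here is a minimal ordinary-kriging sketch under an assumed exponential climatological variogram. The parameter values, function names, and toy gauge data are illustrative, and the special treatment of rainy versus nonrainy areas discussed in the article is deliberately omitted.

```python
import numpy as np

def exp_variogram(h, nugget=0.1, sill=1.0, range_km=50.0):
    """Exponential semivariogram with illustrative parameter values."""
    return np.where(h > 0.0,
                    nugget + (sill - nugget) * (1.0 - np.exp(-h / range_km)),
                    0.0)

def ordinary_kriging(xy, z, x0, variogram=exp_variogram):
    """Ordinary-kriging estimate and variance at x0 from gauge data (xy, z)."""
    n = len(z)
    h = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=-1)
    A = np.zeros((n + 1, n + 1))
    A[:n, :n] = variogram(h)
    A[:n, n] = A[n, :n] = 1.0                  # unbiasedness constraint
    b = np.append(variogram(np.linalg.norm(xy - x0, axis=1)), 1.0)
    sol = np.linalg.solve(A, b)
    lam, mu = sol[:n], sol[n]                  # weights, Lagrange multiplier
    return lam @ z, lam @ b[:n] + mu           # estimate, kriging variance

# Example: five gauges (km coordinates), prediction at the domain centre
xy = np.array([[0, 0], [10, 0], [0, 10], [10, 10], [3, 7]], dtype=float)
z = np.array([2.0, 0.0, 5.0, 1.0, 4.0])        # daily rainfall (mm)
print(ordinary_kriging(xy, z, np.array([5.0, 5.0])))
```

The kriging variance returned alongside the estimate is what enables the uncertainty quantification and ensemble generation mentioned in the abstract.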

Relevance:

10.00%

Publisher:

Abstract:

This paper addresses the statistical mechanics of ideal polymer chains next to a hard wall. The principal quantity of interest, from which all monomer densities can be calculated, is the partition function, G_N(z), for a chain of N discrete monomers with one end fixed a distance z from the wall. It is well accepted that in the limit of infinite N, G_N(z) satisfies the diffusion equation with the Dirichlet boundary condition, G_N(0) = 0, unless the wall possesses a sufficient attraction, in which case the Robin boundary condition, G_N(0) = -x G_N'(0), applies with a positive coefficient, x. Here we investigate the leading N^(-1/2) correction, ΔG_N(z). Prior to the adsorption threshold, ΔG_N(z) is found to involve two distinct parts: a Gaussian correction (for z ≲ aN^(1/2)) with a model-dependent amplitude, A, and a proximal-layer correction (for z ≲ a) described by a model-dependent function, B(z).
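
To make the large-N statement concrete, the sketch below integrates the continuum limit of the chain recursion, the diffusion equation ∂G/∂N = (a²/6) ∂²G/∂z² with the Dirichlet condition G_N(0) = 0, by explicit finite differences and checks it against the known error-function solution. The grid parameters and the a²/6 segment-length convention are assumptions for illustration, not taken from the paper.

```python
import numpy as np
from scipy.special import erf

# Explicit finite differences for dG/dN = (a**2/6) d2G/dz2 with G_N(0) = 0
a = 1.0                        # statistical segment length
dz, dN = 0.1, 0.01             # grid spacings
z = np.arange(0.0, 40.0, dz)
G = np.ones_like(z)            # G_0(z) = 1: unconstrained zero-length chain
G[0] = 0.0                     # Dirichlet boundary condition at the wall

coeff = (a**2 / 6.0) * dN / dz**2     # 0.167 < 0.5, so the scheme is stable
for _ in range(int(100 / dN)):        # integrate up to N = 100
    G[1:-1] += coeff * (G[2:] - 2.0 * G[1:-1] + G[:-2])
    G[0] = 0.0

# Continuum solution for this initial/boundary data: erf(z / sqrt(2 N a^2 / 3))
print(np.max(np.abs(G - erf(z / np.sqrt(2 * 100 * a**2 / 3)))))
```

The paper's subject is precisely the O(N^(-1/2)) discrepancy between such a discrete-chain calculation and this continuum solution.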

Relevance:

10.00%

Publisher:

Abstract:

A neural-network-enhanced proportional-integral-derivative (PID) controller is presented that combines the attributes of neural network learning with a generalized minimum-variance self-tuning control (STC) strategy. The neuro-PID controller is structured around plant model identification and PID parameter tuning. The plants to be controlled are approximated by an equivalent model composed of a simple linear submodel, which approximates the plant dynamics around operating points, plus an error agent that accommodates the errors induced by linear-submodel inaccuracy due to non-linearities and other complexities. A generalized recursive least-squares algorithm is used to identify the linear submodel, and a layered neural network is used to model the error agent, with the weights updated on the basis of the error between the plant output and the output of the linear submodel. The controller design procedure is based on the equivalent model, so the error agent is naturally incorporated within the control law. In this way the controller can deal not only with a wide range of linear dynamic plants but also with complex plants characterized by severe non-linearity, uncertainties and non-minimum-phase behaviour. Two simulation studies are provided to demonstrate the effectiveness of the controller design procedure.
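
A hedged sketch of the identification half of this scheme: standard recursive least squares for an assumed second-order ARX linear submodel. The residual returned by each update (plant output minus submodel output) is exactly the signal that would train the neural error agent. The model order, forgetting factor, and toy plant are illustrative assumptions.

```python
import numpy as np

class RecursiveLeastSquares:
    """Standard RLS with forgetting factor for y(k) = phi(k) . theta."""
    def __init__(self, n_params, lam=0.98):
        self.theta = np.zeros(n_params)          # parameter estimates
        self.P = 1e4 * np.eye(n_params)          # covariance matrix
        self.lam = lam                           # forgetting factor

    def update(self, phi, y):
        k = self.P @ phi / (self.lam + phi @ self.P @ phi)   # gain vector
        residual = y - phi @ self.theta          # error agent's target signal
        self.theta += k * residual
        self.P = (self.P - np.outer(k, phi @ self.P)) / self.lam
        return residual

# Identify a second-order ARX submodel y(k) = a1*y(k-1) + a2*y(k-2) + b1*u(k-1)
rls = RecursiveLeastSquares(n_params=3)
rng = np.random.default_rng(1)
y, u = np.zeros(200), rng.normal(size=200)
for k in range(2, 200):
    y[k] = 1.5 * y[k-1] - 0.7 * y[k-2] + 0.5 * u[k-1] + 0.01 * rng.normal()
    rls.update(np.array([y[k-1], y[k-2], u[k-1]]), y[k])
print(rls.theta)   # estimates approach the true values [1.5, -0.7, 0.5]
```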