951 results for statistical distribution


Relevance:

30.00%

Publisher:

Abstract:

When Recurrent Neural Networks (RNNs) are to be used as pattern recognition systems, the problem to be considered is how to impose prescribed prototype vectors ξ^1, ξ^2, ..., ξ^p as fixed points. In the classical approach, the synaptic matrix W is interpreted as a sort of sign correlation matrix of the prototypes. The weak point of this approach is that it lacks the tools to deal efficiently with the correlation between the state vectors and the prototype vectors: the capacity of the net is very poor because one can only know whether a given vector is adequately correlated with the prototypes, not its exact degree of correlation. The interest of our approach lies precisely in the fact that it provides these tools. In this paper, a geometrical view of the dynamics of states is explained. A fixed point is viewed as a point in the Euclidean plane R². The retrieval procedure is analyzed through the statistical frequency distribution of the prototypes. The capacity of the net is improved and the spurious states are reduced. To clarify and corroborate the theoretical results, an application is presented alongside the formal theory.
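
The classical sign-correlation construction can be made concrete with a small sketch. This is the standard Hebbian/Hopfield-style rule, assuming bipolar (±1) prototypes and synchronous sign updates; neither detail is fixed by the abstract, so treat it as illustrative:

```python
import numpy as np

def hebbian_matrix(prototypes):
    """Synaptic matrix W as the scaled sign correlation of +/-1 prototypes."""
    X = np.array(prototypes, dtype=float)      # shape (p, n)
    W = X.T @ X / X.shape[1]
    np.fill_diagonal(W, 0.0)                   # no self-connections
    return W

def recall(W, state, steps=10):
    """Synchronous sign-update dynamics until a fixed point is reached."""
    s = np.array(state, dtype=float)
    for _ in range(steps):
        new = np.sign(W @ s)
        new[new == 0] = 1.0                    # break ties toward +1
        if np.array_equal(new, s):
            break
        s = new
    return s

xi1 = np.array([1, 1, -1, -1, 1, -1, 1, -1])
xi2 = np.array([1, -1, 1, -1, 1, -1, -1, 1])   # orthogonal to xi1
W = hebbian_matrix([xi1, xi2])

noisy = xi1.copy()
noisy[0] = -noisy[0]                           # flip one bit
print(np.array_equal(recall(W, noisy), xi1))   # True: xi1 is a fixed point
```

With orthogonal prototypes the dynamics correct the flipped bit in one update, illustrating why prototype correlation (not just fixed-point membership) matters for capacity.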

Relevance:

30.00%

Publisher:

Abstract:

Motivation: Within bioinformatics, the textual alignment of amino acid sequences has long dominated the determination of similarity between proteins, with all that implies for shared structure, function, and evolutionary descent. Despite the relative success of modern sequence alignment algorithms, so-called alignment-free approaches offer a complementary means of determining and expressing similarity, with potential benefits in key applications such as regression analysis of protein structure-function studies, where alignment-based similarity has performed poorly. Results: Here, we offer a fresh, statistical physics-based perspective on the question of alignment-free comparison, in the process adapting results on first passage probability distributions to summarize statistics of ensemble-averaged amino acid propensity values. In this paper, we introduce and elaborate this approach.
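
The first-passage idea can be illustrated with a toy sketch: treat centred propensity values along a sequence as increments of a walk and record when it first reaches a threshold. The construction below is an assumption for illustration only; the paper's exact statistic is not given in the abstract:

```python
import numpy as np

def first_passage(propensities, threshold):
    """Step at which the running sum of centred propensity values first
    reaches +/- threshold. The distribution of this statistic over an
    ensemble of sequences is the kind of alignment-free summary meant here.
    (Illustrative; not the authors' exact construction.)"""
    walk = np.cumsum(propensities - np.mean(propensities))
    crossed = np.nonzero(np.abs(walk) >= threshold)[0]
    return int(crossed[0]) if crossed.size else None

# A short hydrophobicity-like propensity sequence (toy values).
p = np.array([2.0, -1.0, 3.0, -2.0, 4.0])
print(first_passage(p, threshold=2.0))   # 3
```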

Relevance:

30.00%

Publisher:

Abstract:

Issues relating to the definition of fuzzy sets are considered, including an analogue of the separation axiom, a statistical interpretation, and the representation of membership functions by conditional probabilities.

Relevance:

30.00%

Publisher:

Abstract:

A detailed quantitative numerical analysis of a partially coherent quasi-CW fiber laser is performed using the example of a high-Q-cavity Raman fiber laser. The key role of the precise spectral characteristics of the fiber Bragg gratings forming the laser cavity is clarified. It is shown that cross-phase modulation between the pump and Stokes waves does not affect the generation. The amplitudes of different longitudinal modes fluctuate strongly, obeying a Gaussian distribution. Since the intensity statistics are noticeably non-exponential, the longitudinal modes must be correlated. © 2011 SPIE.
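
The link between Gaussian mode amplitudes and exponential intensity statistics can be checked numerically. A minimal sketch with independent (uncorrelated) complex-Gaussian modes, which is the baseline the abstract's argument relies on; mode count and sample size are arbitrary choices:

```python
import numpy as np

rng = np.random.default_rng(0)
n_modes, n_samples = 200, 50_000

# Many longitudinal modes with independent complex-Gaussian amplitudes.
modes = (rng.standard_normal((n_samples, n_modes))
         + 1j * rng.standard_normal((n_samples, n_modes)))
field = modes.sum(axis=1)
I = np.abs(field) ** 2
I /= I.mean()

# For uncorrelated Gaussian modes the normalized intensity is exponential,
# so the second moment <I^2>/<I>^2 tends to 2. A measured deviation from 2,
# as reported above, is the signature of correlations between modes.
m2 = (I ** 2).mean()
print(round(m2, 1))   # 2.0
```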

Relevance:

30.00%

Publisher:

Abstract:

The impact of climate change on the potential distribution of four Mediterranean pine species – Pinus brutia Ten., Pinus halepensis Mill., Pinus pinaster Aiton, and Pinus pinea L. – was studied with a Climate Envelope Model (CEM) to examine whether these species are suitable for use as ornamental plants without frost protection in the Carpathian Basin. The model was supported by the EUFORGEN digital area database (distribution maps), ESRI ArcGIS 10 software's Spatial Analyst module (modeling environment), PAST (statistical calibration of the model), and the REMO regional climate model (climatic data). The climate data were available in a 25 km resolution grid for the reference period (1961–1990) and two future periods (2011–2040, 2041–2070). The regional climate model was based on the IPCC SRES A1B scenario. While the potential distribution of P. brutia was not predicted to expand remarkably, an explicit shift of the distribution of the other three species was shown. The northwestern African distribution segments seem likely to be abandoned in the future, and the current distribution of P. brutia may be highly endangered by climate change. P. halepensis in the southern part and P. pinaster in the western part of the Carpathian Basin may find suitable climatic conditions in the period 2041–2070.
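
The CEM itself is not specified in detail in the abstract; as an illustration, a minimal percentile-based climate envelope might look like the following, where the variables, percentile thresholds, and all numbers are toy assumptions rather than the study's calibration:

```python
import numpy as np

def climate_envelope(presence_climate, grid_climate, lo=5, hi=95):
    """A grid cell falls inside the potential distribution if every
    climatic variable lies within the chosen percentile range of the
    values observed across the species' current range. (Sketch only;
    the study's CEM calibration via PAST may differ.)"""
    low = np.percentile(presence_climate, lo, axis=0)
    high = np.percentile(presence_climate, hi, axis=0)
    return np.all((grid_climate >= low) & (grid_climate <= high), axis=1)

# Columns: mean temperature (deg C), annual precipitation (mm) - toy values.
presence = np.array([[10, 500], [12, 600], [14, 700], [16, 800], [18, 900]])
grid = np.array([[11, 550], [20, 700], [15, 860]])
inside = climate_envelope(presence, grid)
print(inside.tolist())   # [True, False, True]
```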

Relevance:

30.00%

Publisher:

Abstract:

This study aimed to model the potential future distribution of four Mediterranean pines, supported by the EUFORGEN digital area database (distribution maps), ESRI ArcGIS 10 software's Spatial Analyst module (modeling environment), PAST (statistical calibration of the model), and the REMO regional climate model (climatic data). The studied species were Pinus brutia, Pinus halepensis, Pinus pinaster, and Pinus pinea. The climate data were available in a 25 km resolution grid for the reference period (1961-90) and two future periods (2011-40, 2041-70). The climate model was based on the IPCC SRES A1B scenario. The model results show an explicit northward shift of the distributions for three of the four studied species. The future (2041-70) climate of Western Hungary seems to be suitable for Pinus pinaster.

Relevance:

30.00%

Publisher:

Abstract:

The lateral load distribution factor is a key factor in designing and analyzing curved steel I-girder bridges. In this dissertation, the effects of various parameters on moment and shear distribution in curved steel I-girder bridges were studied using the Finite Element Method (FEM). The parameters considered were: radius of curvature, girder spacing, overhang, span length, number of girders, ratio of girder stiffness to overall bridge stiffness, slab thickness, girder longitudinal stiffness, cross frame spacing, and girder torsional inertia. The variations of these parameters were based on a statistical analysis of a real bridge database, created by extracting data from existing or newly designed curved steel I-girder bridge plans collected from across the nation. A hypothetical bridge superstructure model built from the mean values of the data was created and used for the parameter study.

The study showed that cross frame spacing and girder torsional inertia had negligible effects; the other parameters were identified as key parameters. Regression analysis was conducted on the FEM results, and simplified formulas for predicting positive moment, negative moment, and shear distribution factors were developed. Thirty-three real bridges were analyzed using FEM to verify the formulas. The ratio of the distribution factor obtained from the formula to the one obtained from the FEM analysis, referred to as the g-ratio, was examined. The results showed that the standard deviation of the g-ratios was within 0.04 to 0.06 and that the mean of the g-ratios exceeded unity by one standard deviation, indicating that the formulas are conservative in most cases but not overly so. The final formulas are similar in format to the current American Association of State Highway and Transportation Officials (AASHTO) Load and Resistance Factor Design (LRFD) specifications.

The developed formulas were compared with other simplified methods and gave the most accurate results among all methods considered. The formulas developed in this study will assist bridge engineers and researchers in predicting the actual live load distribution in horizontally curved steel I-girder bridges.

Relevance:

30.00%

Publisher:

Abstract:

Microarray platforms have been around for many years, and while new technologies are entering laboratories, microarrays are still prevalent. For the analysis of microarray data to identify differentially expressed (DE) genes, many methods have been proposed and subsequently modified for improvement. However, the most popular methods, such as Significance Analysis of Microarrays (SAM), samroc, fold change, and rank product, are far from perfect. Which method is most powerful depends on the characteristics of the sample and the distribution of the gene expressions. The most practiced method is usually SAM or samroc, but when the data are skewed, the power of these methods decreases. Based on the principle that the median is a better measure of central tendency than the mean when the data are skewed, the test statistics of the SAM and fold change methods are modified in this thesis. This study shows that the median-modified fold change method improves the power in many cases when identifying DE genes, provided the data follow a lognormal distribution.
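
As an illustration of the median modification, here is a toy fold-change computation on log-scale data in which a single aberrant array inflates the mean-based statistic but barely moves the median-based one. The data and helper are hypothetical; the thesis's exact modified statistics may differ:

```python
import numpy as np

def fold_change(g1, g2, center=np.mean):
    """Per-gene fold change on log-scale expression: difference of the
    group location estimates. Pass center=np.median for the skew-robust
    variant discussed above."""
    return center(np.asarray(g1), axis=1) - center(np.asarray(g2), axis=1)

# One gene, log2 intensities; the last treated array is aberrant.
control = np.array([[1.0, 1.1, 0.9, 1.0]])
treated = np.array([[2.0, 2.1, 1.9, 12.0]])

print(round(fold_change(treated, control)[0], 2))                    # 3.5
print(round(fold_change(treated, control, center=np.median)[0], 2))  # 1.05
```

The mean-based estimate (3.5) is dominated by the outlying array, while the median-based estimate (1.05) stays close to the bulk of the replicates.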

Relevance:

30.00%

Publisher:

Abstract:

Based on the quantitative study of diatoms and radiolarians, summer sea-surface temperature (SSST) and sea ice distribution were estimated from 122 sediment core localities in the Atlantic, Indian, and Pacific sectors of the Southern Ocean to reconstruct the last glacial environment at the EPILOG (19.5-16.0 ka, or 23 000-19 000 cal yr B.P.) time-slice. The statistical methods applied include the Imbrie and Kipp Method, the Modern Analog Technique, and the General Additive Model. Summer SSTs reveal greater surface-water cooling than reconstructed by CLIMAP (Geol. Soc. Am. Map Chart. Ser. MC-36 (1981) 1), reaching a maximum (4-5 °C) in the present Subantarctic Zone of the Atlantic and Indian sectors. The reconstruction of maximum winter sea ice (WSI) extent is in accordance with CLIMAP, showing an expansion of the WSI field by around 100% compared to the present. Although only limited information is available, the data clearly show that CLIMAP strongly overestimated the glacial summer sea ice extent. As a result of the northward expansion of Antarctic cold waters by 5-10° of latitude and a relatively small displacement of the Subtropical Front, thermal gradients in the northern zone of the Southern Ocean steepened during the last glacial. Such a reconstruction may, however, be inapposite for the Pacific sector: the few data available indicate reduced cooling in the southern Pacific and suggest a non-uniform cooling of the glacial Southern Ocean.
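
Of the statistical methods named, the Modern Analog Technique is the simplest to sketch: compare a fossil assemblage with modern core-top assemblages and average the SSTs of the closest analogs. Squared-chord distance is a common dissimilarity for assemblage data, but the toy numbers and configuration below are assumptions, not the study's setup:

```python
import numpy as np

def mat_sst(fossil, modern, modern_sst, k=2):
    """Modern Analog Technique sketch: squared-chord distance between a
    fossil assemblage and each modern core-top sample; the estimate is
    the mean summer SST of the k closest analogs. (Published versions add
    dissimilarity thresholds and distance weighting.)"""
    d = ((np.sqrt(fossil) - np.sqrt(modern)) ** 2).sum(axis=1)
    return modern_sst[np.argsort(d)[:k]].mean()

# Toy assemblages: relative abundances of two diatom taxa per sample.
modern = np.array([[0.8, 0.2], [0.5, 0.5], [0.1, 0.9]])
sst = np.array([2.0, 8.0, 14.0])
print(mat_sst(np.array([0.55, 0.45]), modern, sst))   # 5.0
```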

Relevance:

30.00%

Publisher:

Abstract:

Peer reviewed

Relevance:

30.00%

Publisher:

Abstract:

The work is supported in part by NSFC (Grant no. 61172070), IRT of Shaanxi Province (2013KCT-04), EPSRC (Grant no.Ep/1032606/1).

Relevance:

30.00%

Publisher:

Abstract:

Recent research into resting-state functional magnetic resonance imaging (fMRI) has shown that the brain is very active during rest. This thesis work utilizes blood oxygenation level dependent (BOLD) signals to study the spatial and temporal functional network information found within resting-state data, and aims to assess the feasibility of extracting functional connectivity networks with different methods, as well as the dynamic variability within some of them. Furthermore, this work examines whether valid networks can be produced from a sparsely sampled subset of the original data.

In this work we utilize four main methods: independent component analysis (ICA), principal component analysis (PCA), correlation, and a point-processing technique. Each method comes with unique assumptions, as well as strengths and limitations into exploring how the resting state components interact in space and time.

Correlation is perhaps the simplest technique: resting-state patterns are identified by how similar a voxel's time profile is to a seed region's time profile. However, this method requires a seed region and can only identify one resting-state network at a time. This simple correlation technique is able to reproduce the resting-state network using data from a single subject's scan session as well as from 16 subjects.

Independent component analysis, the second technique, has established software implementations. ICA can extract multiple components from a data set in a single analysis. The disadvantage is that the resting-state networks it produces are all independent of each other, under the assumption that the spatial pattern of functional connectivity is the same across all time points. ICA successfully reproduces resting-state connectivity patterns for both a single subject and a 16-subject concatenated data set.

Using principal component analysis, the dimensionality of the data is reduced to find the directions in which the variance of the data is greatest. This method relies on the same basic matrix mathematics as ICA, with a few important differences that are outlined later in this text. With this method, different functional connectivity patterns are sometimes identifiable, but with a large amount of noise and variability.
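
The PCA step can be sketched via the singular value decomposition of the mean-centred time-by-voxel matrix; the synthetic data and planted spatial pattern below are illustrative assumptions, not the thesis's data:

```python
import numpy as np

def pca_maps(data, k):
    """PCA via SVD: the top-k right singular vectors of the centred
    time-by-voxel matrix are spatial maps, ordered by explained variance."""
    X = data - data.mean(axis=0)
    U, S, Vt = np.linalg.svd(X, full_matrices=False)
    return Vt[:k], (S ** 2 / (S ** 2).sum())[:k]

rng = np.random.default_rng(5)
T, V = 200, 40
pattern = np.zeros(V)
pattern[:10] = 1.0 / np.sqrt(10)               # a unit-norm spatial pattern
signal = np.sin(2 * np.pi * np.arange(T) / 20)
data = 5.0 * np.outer(signal, pattern) + rng.standard_normal((T, V))

maps, var = pca_maps(data, 1)
print(round(abs(maps[0] @ pattern), 2))        # overlap of PC1 with the pattern
```

With a strong planted component the first principal map recovers the pattern almost exactly; at realistic noise levels the overlap degrades, which is the variability noted above.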

To begin to investigate the dynamics of the functional connectivity, the correlation technique is used to compare the first and second halves of a scan session, and minor differences are discernible between the two halves. Further, a sliding-window technique is implemented to study the correlation coefficients over time for different window sizes. This technique makes it apparent that the correlation with the seed region is not static throughout the scan.

The last method introduced, a point-process method, is among the more novel techniques because it does not require analysis of the continuous time course. Here, network information is extracted from brief occurrences of high- or low-amplitude signals within a seed region. Because point processing uses fewer time points from the data, the statistical power of the results is lower, and there are larger variations in DMN patterns between subjects. In addition to boosted computational efficiency, the benefit of a point-process method is that the patterns produced for different seed regions do not have to be independent of one another.
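
A minimal point-process sketch: keep only the seed's high-amplitude events and average the whole-brain pattern at those instants. The toy data, threshold, and sign alignment are assumptions, not the thesis's exact procedure:

```python
import numpy as np

rng = np.random.default_rng(4)
T, V = 300, 50
data = rng.standard_normal((T, V))     # toy BOLD data: time x voxels
seed = data[:, 0]                      # voxel 0 acts as the seed region

# Keep only the brief high-amplitude events in the seed, then average the
# whole-brain pattern at those time points (sign-aligned so positive and
# negative excursions reinforce rather than cancel).
events = np.abs(seed) > 1.0
pattern = (np.sign(seed[events])[:, None] * data[events]).mean(axis=0)

print(events.sum() < T)        # True: far fewer time points are used
print(pattern[0] > 1.0)        # True: the seed voxel dominates its own map
```

Using only the event time points is what reduces both the computational load and, as noted above, the statistical power.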

This work compares four unique methods of identifying functional connectivity patterns. ICA is a technique currently used by many scientists studying functional connectivity. The PCA technique is not optimal for the level of noise and the distribution of these data sets. The correlation technique is simple and obtains good results; however, a seed region is needed, and the method assumes that the DMN regions are correlated throughout the entire scan. Examining the more dynamic aspects of correlation, changing patterns of correlation were evident. The point-process method produces promising results, identifying functional connectivity networks using only low- and high-amplitude BOLD signals.

Relevance:

30.00%

Publisher:

Abstract:

Dynamic positron emission tomography (PET) imaging can be used to track the distribution of injected radio-labelled molecules over time in vivo. This is a powerful technique, which provides researchers and clinicians the opportunity to study the status of healthy and pathological tissue by examining how it processes substances of interest. Widely used tracers include 18F-fluorodeoxyglucose, an analog of glucose, which is used as the radiotracer in over ninety percent of PET scans and provides a way of quantifying the distribution of glucose utilisation in vivo. The interpretation of PET time-course data is complicated because the measured signal is a combination of vascular delivery and tissue retention effects. If the arterial time-course is known, the tissue time-course can typically be expressed as a linear convolution of the arterial time-course with the tissue residue function. As the residue represents the amount of tracer remaining in the tissue, it can be thought of as a survival function; such functions have been examined in great detail by the statistics community. Kinetic analysis of PET data is concerned with estimation of the residue and associated functionals such as flow, flux, and volume of distribution. This thesis presents a Markov chain formulation of blood-tissue exchange and explores how it relates to established compartmental forms. A nonparametric approach to the estimation of the residue is examined, and the improvement of this model relative to compartmental models is evaluated using simulations and cross-validation techniques. The reference distribution of the test statistics generated in comparing the models is also studied. We explore these models further with simulation studies and an FDG-PET dataset from subjects with gliomas, which has previously been analysed with compartmental modelling. We also consider the performance of a recently proposed mixture modelling technique in this study.
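
The convolution relationship can be sketched numerically with a hypothetical one-compartment residue R(t) = K1·exp(-k2·t); the input function and all parameter values below are illustrative assumptions, not fitted values:

```python
import numpy as np

dt = 0.1
t = np.arange(0.0, 60.0, dt)
K1, k2 = 0.1, 0.05                         # illustrative kinetic parameters
Ca = t * np.exp(-t / 4.0)                  # gamma-variate-like arterial input
R = K1 * np.exp(-k2 * t)                   # residue (survival) function

# Tissue time course as the discrete convolution of input and residue.
tissue = np.convolve(Ca, R)[: len(t)] * dt

# The volume of distribution is the integral of the residue; for this
# one-compartment residue it approaches K1 / k2.
Vd = R.sum() * dt
print(round(Vd, 1), K1 / k2)               # 1.9 2.0
```

Functionals such as flow and flux are likewise read off the residue (its initial value and asymptotic slope), which is why estimating R well is the central problem.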

Relevance:

30.00%

Publisher:

Abstract:

Tropical cyclones are a continuing threat to life and property. Willoughby (2012) found that a Pareto (power-law) cumulative distribution fitted to the most damaging 10% of US hurricane seasons describes their impacts well. Here, we find that damage follows a Pareto distribution because the assets at hazard follow a Zipf distribution, which can be thought of as a Pareto distribution with exponent 1. The Z-CAT model is an idealized hurricane catastrophe model representing a coastline where populated places with Zipf-distributed assets are randomly scattered and damaged by virtual hurricanes whose sizes and intensities are generated by a Monte Carlo process. The results produce realistic Pareto exponents. The ability of the Z-CAT model to simulate different climate scenarios allowed testing of sensitivities to maximum potential intensity, landfall rates, and building structure vulnerability. The Z-CAT results demonstrate that a statistically significant difference in damage is found only when changes in the parameters produce a doubling of damage.
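
The claim that a Zipf distribution "can be thought of as a Pareto distribution with exponent 1" is easy to check numerically. A minimal sketch (not the Z-CAT model itself; the asset counts and evaluation points are arbitrary):

```python
import numpy as np

n = 10_000
assets = 1.0 / np.arange(1, n + 1)      # Zipf: value proportional to 1/rank

def ccdf(x):
    """Fraction of places whose asset value exceeds x."""
    return (assets > x).mean()

# A Pareto distribution with exponent 1 has CCDF(x) ~ C / x, i.e. a slope
# of -1 on log-log axes - exactly what Zipf-distributed assets produce.
xs = np.array([1e-3, 1e-2, 1e-1])
slope = np.polyfit(np.log(xs), np.log([ccdf(x) for x in xs]), 1)[0]
print(round(slope, 2))                  # close to -1
```

Damaging a random subset of such assets preserves this heavy tail, which is the mechanism the Z-CAT results formalize.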