983 results for JOINT DISTRIBUTION


Relevance:

60.00%

Publisher:

Abstract:

The paper presents a new copula-based method for measuring dependence between random variables. Our approach extends the Maximum Mean Discrepancy to the copula of the joint distribution. We prove that this approach has several advantageous properties. Like Shannon mutual information, the proposed dependence measure is invariant to any strictly increasing transformation of the marginal variables, which is important in many applications, for example in feature selection. The estimator is consistent, robust to outliers, and uses rank statistics only. We derive upper bounds on the convergence rate and also propose independence tests. We illustrate the theoretical contributions through a series of experiments in feature selection and low-dimensional embedding of distributions.
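The measure and estimator are defined precisely in the paper; the sketch below is only a rough illustration of the rank-based idea, and all names, the kernel and its bandwidth are assumptions rather than the authors' implementation. It maps each margin to normalized ranks (an empirical-copula transform) and compares the joint rank sample with an independence surrogate using a Gaussian-kernel Maximum Mean Discrepancy.

    import numpy as np

    def rank_transform(xy):
        # Map each column to normalized ranks in (0, 1): the empirical copula transform.
        n = len(xy)
        return (np.argsort(np.argsort(xy, axis=0), axis=0) + 1) / (n + 1)

    def gaussian_kernel(a, b, sigma=0.2):
        # Gaussian kernel matrix between two samples of rank vectors.
        d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2 * sigma ** 2))

    def copula_mmd(x, y, seed=0):
        # Biased MMD^2 between the empirical copula of (x, y) and an
        # independence surrogate obtained by permuting the ranks of y.
        rng = np.random.default_rng(seed)
        u = rank_transform(np.column_stack([x, y]))
        v = u.copy()
        v[:, 1] = rng.permutation(v[:, 1])
        return (gaussian_kernel(u, u).mean()
                + gaussian_kernel(v, v).mean()
                - 2 * gaussian_kernel(u, v).mean())

Because only ranks enter the statistic, it is unchanged by any strictly increasing transformation of x or y, which is the invariance property highlighted above.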

Relevance:

60.00%

Publisher:

Abstract:

We developed a coarse-grained yet microscopically detailed model to study the statistical fluctuations of single-molecule protein conformational dynamics of adenylate kinase. We explored the underlying conformational energy landscape and found that the system has two basins of attraction, the open and closed conformations, connected by two separate pathways. The kinetics is found to be nonexponential, consistent with single-molecule conformational dynamics experiments. Furthermore, we found that the statistical distribution of the kinetic times for the conformational transition has a long power-law tail, reflecting the exponential density of states of the underlying landscape. We also studied the joint distribution of the two pathways and found memory effects.

Relevance:

60.00%

Publisher:

Abstract:

A seismic signal is a typical non-stationary signal: its frequency content changes continuously with time and is determined by the bandwidth of the seismic source and the absorption characteristics of the subsurface media. A central goal of seismic processing and interpretation is to detect abrupt changes of the local frequency with time, because such changes indicate changes in the physical attributes of the subsurface. To obtain instantaneous attributes in the time-frequency domain, the key is an effective, non-negative and fast time-frequency distribution: the seismic signal is transformed into this domain to obtain its instantaneous power spectral density, and weighted averaging is then used to derive the instantaneous attributes. Time-frequency analysis, a powerful tool for time-varying non-stationary signals, has become an active research topic in modern signal processing and an important method for seismic attribute analysis. It provides the joint distribution of signal energy over time and frequency and clearly shows how the frequency content varies with time. Spectral decomposition pushes the resolution of seismic data towards its theoretical limit, and by scanning all frequencies and imaging three-dimensional seismic data in the frequency domain it improves the ability to resolve geological anomalies. The matching pursuit method is an important way to realize adaptive signal decomposition. Its main idea is that any signal can be expressed as a linear combination of time-frequency atoms; by decomposing the signal over an overcomplete dictionary, the atoms that best represent the signal are selected flexibly and adaptively according to the signal's characteristics. The method has excellent sparse decomposition properties, is widely used in denoising, coding and pattern recognition, and is also well suited to the decomposition and attribute analysis of seismic signals. This thesis takes the matching pursuit method as its main research object. After systematically introducing the principle and implementation of matching pursuit, it investigates the key problems of atom type selection, discretization of the atom dictionary, and search algorithms for the best-matching atom, and applies the method to seismic processing by extracting instantaneous attributes through time-frequency analysis and spectral decomposition. Based on theoretical study and model tests of adaptive signal decomposition with matching pursuit, the thesis proposes a fast search algorithm for the optimal time-frequency atom using a frequency-dominated pursuit, tailoring the method to seismic signals. Building on fast search and matching of the optimal Gabor atom, it further proposes global optimization searches using the Simulated Annealing Algorithm, the Genetic Algorithm, and their combination, providing another route to a fast matching pursuit.
In addition, exploiting the characteristics of seismic signals, the thesis proposes a fast atom search algorithm that locates the maximum-energy points of the complex seismic trace and searches for the optimal atom in the neighbourhood of these points according to its instantaneous frequency and instantaneous phase, which improves the computational efficiency of matching pursuit for seismic data. The proposed methods are implemented in software, compared with published algorithms to verify the conclusions, and validated on real seismic data. Remaining problems and future work include the search for still more efficient fast matching pursuit algorithms, the extension of their range of application, and the study of the practical use of the matching pursuit method.
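The thesis's frequency-dominated pursuit, annealing and genetic search strategies are not reproduced here; the sketch below is only a minimal brute-force matching pursuit over a small Gabor dictionary, to make the basic greedy decomposition loop concrete (the function names and the dictionary discretization are illustrative assumptions).

    import numpy as np

    def gabor_atom(n, center, width, freq):
        # Gaussian-windowed sinusoid of length n, normalized to unit energy.
        t = np.arange(n)
        g = np.exp(-0.5 * ((t - center) / width) ** 2) * np.cos(2 * np.pi * freq * t)
        return g / np.linalg.norm(g)

    def matching_pursuit(signal, n_atoms=5):
        n = len(signal)
        # Coarse discretization of the (center, width, frequency) parameter grid.
        dictionary = [gabor_atom(n, c, w, f)
                      for c in range(0, n, max(1, n // 16))
                      for w in (4, 8, 16, 32)
                      for f in np.linspace(0.01, 0.45, 12)]
        residual, atoms = signal.astype(float).copy(), []
        for _ in range(n_atoms):
            # Greedy step: pick the atom most correlated with the current residual.
            coeffs = [residual @ g for g in dictionary]
            k = int(np.argmax(np.abs(coeffs)))
            atoms.append((coeffs[k], dictionary[k]))
            residual = residual - coeffs[k] * dictionary[k]
        return atoms, residual

The fast variants discussed above replace the exhaustive search over the whole dictionary with a guided search around candidate atoms, for example in the neighbourhood of the maximum-energy points of the trace.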

Relevance:

60.00%

Publisher:

Abstract:

Evaluating the mechanical properties of rock masses is the basis of rock engineering design and construction and has a great influence on the safety and cost of rock projects. The need is driven by new engineering activities in rock, including high-rise buildings, long-span bridges, complex underground installations and hydraulic projects. Many engineering accidents, with serious consequences, have occurred during construction, and investigations show that many failures are due to the choice of improper mechanical properties; the inability to assign proper properties has become one of the major obstacles for theoretical analysis and numerical simulation. Selecting the properties reasonably and effectively is therefore very significant for the planning, design and construction of rock engineering works. A combined approach based on site investigation, theoretical analysis, model tests, numerical tests and back analysis with artificial neural networks is used to determine and optimize the mechanical properties for engineering design. The following outcomes are obtained. (1) Mapping of the rock mass structure: detailed geological investigation is the core of fine structure description. Based on statistical windows, geological sketches and digital photography, a new method for in-situ mapping of the fine structure of rock masses is developed; it has already been applied, with good results, at the Baihetan Hydropower Station. (2) Theoretical analysis of rock masses containing intermittent joints: the shear strength mechanisms of joints and rock bridges are analysed, and the multiple failure modes under different stress conditions are summarized. By introducing a deformation compatibility equation in the normal direction, direct shear and compression-shear strength formulations for coplanar intermittent joints, as well as a compression-shear strength formulation for stepped intermittent joints, are derived. To make these formulations convenient to apply in real projects, their relationship with the Mohr-Coulomb criterion is established. (3) Model tests of rock masses containing intermittent joints: model tests are used to study the mechanical effect of joints on rock masses, and the failure modes of rock masses containing intermittent joints are summarized. Six typical failure modes are found, with brittle failure dominant. The evolution of shear stress, shear displacement, normal stress and normal displacement is monitored with a rigid servo testing machine, and the deformation and failure behaviour during loading is analysed. The tests show that the failure modes depend strongly on the joint distribution, connectivity and stress state. Comparative analysis of the complete stress-strain curves reveals different failure stages in intact rock, through-going jointed rock masses and intermittent jointed rock masses: intact rock shows four typical stages (shear contraction, linear elastic, failure and residual strength); through-going jointed rock masses show three stages (linear elastic, transition and sliding failure); and intermittent jointed rock masses show five stages (linear elastic, joint sliding, steady post-crack growth, joint coalescence failure and residual strength). Strength analysis shows that the failure envelopes of intact rock and through-going jointed rock masses are the upper and lower bounds, respectively; the strength of an intermittent jointed rock mass can be evaluated by narrowing the bandwidth of the failure envelope with geomechanical analysis. (4) Numerical tests of rock masses: two methods, the distinct element method based on in-situ geological mapping and realistic failure process analysis (RFPA) based on high-definition digital imaging, are developed and introduced. Their operation and analysis results are demonstrated in detail for the evaluation of rock mass parameters at the Jinping First Stage Hydropower Station and the Baihetan Hydropower Station; their advantages and disadvantages are compared and their respective fields of application identified. (5) Intelligent evaluation based on artificial neural networks (ANN): the characteristics of ANN and of rock mass parameter evaluation are discussed and summarized; ANN shows strong potential in this field. The intelligent evaluation of mechanical parameters at the Jinping First Stage Hydropower Station is used as an example to demonstrate the analysis process, and five issues, namely sample selection, network design, initial value selection, learning rate and expected error, are discussed in detail.

Relevance:

60.00%

Publisher:

Abstract:

We propose a simple and flexible framework for forecasting the joint density of asset returns. The multinormal distribution is augmented with a polynomial in (time-varying) non-central co-moments of the assets. We estimate the coefficients of the polynomial via the Method of Moments for a carefully selected set of co-moments. In an extensive empirical study, we compare the proposed model with a range of other models widely used in the literature. Employing recently proposed as well as standard techniques to evaluate multivariate forecasts, we conclude that the augmented joint density provides highly accurate forecasts of the “negative tail” of the joint distribution.
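The paper's polynomial specification and its selected set of moment conditions are its own; the fragment below is only a toy sketch of the general shape of such a model, and the restriction to third-order co-moments, the truncation used to keep the value non-negative, and all names are assumptions made here for illustration.

    import numpy as np
    from scipy.stats import multivariate_normal

    def noncentral_comoments(window):
        # Sample third-order non-central co-moments E[x_i x_j x_k] of a return window.
        x = np.asarray(window)
        return np.einsum("ti,tj,tk->ijk", x, x, x) / len(x)

    def augmented_density(x, mu, cov, tilt):
        # Multinormal baseline multiplied by a polynomial in third-order cross terms;
        # 'tilt' plays the role of coefficients estimated by the Method of Moments.
        base = multivariate_normal(mu, cov).pdf(x)
        poly = 1.0 + np.einsum("ijk,i,j,k->", tilt, x, x, x)
        return max(base * poly, 0.0)  # crude truncation to keep the value non-negative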

Relevance:

60.00%

Publisher:

Abstract:

The paper describes the use of radial basis function neural networks with Gaussian basis functions to classify incomplete feature vectors. The method uses the fact that any marginal distribution of a Gaussian distribution can be determined from the mean vector and covariance matrix of the joint distribution.
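That fact is easy to make concrete: the marginal over the observed components of a Gaussian is again Gaussian, obtained by selecting the corresponding entries of the mean vector and the corresponding rows and columns of the covariance matrix. A minimal sketch (variable and function names are ours, not the paper's):

    import numpy as np
    from scipy.stats import multivariate_normal

    def marginal_basis_value(x, observed, mean, cov):
        # x: full-length feature vector (missing entries may hold np.nan).
        # observed: boolean mask marking the components that are present.
        idx = np.flatnonzero(observed)
        # Marginalizing a Gaussian = taking the sub-vector of the mean and the
        # sub-matrix of the covariance for the observed components.
        return multivariate_normal(mean[idx], cov[np.ix_(idx, idx)]).pdf(x[idx])

Each Gaussian basis function of the network can thus still be activated by an incomplete feature vector through its marginal over the available components.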

Relevance:

60.00%

Publisher:

Abstract:

BACKGROUND: Dyslipidemia is recognized as a major cause of coronary heart disease (CHD). Emerging evidence suggests that the combination of triglycerides (TG) and waist circumference can be used to predict the risk of CHD. However, given the known limitations of TG, a model combining non-high-density lipoprotein cholesterol (non-HDL = total cholesterol - HDL cholesterol) and waist circumference may be a better predictor of CHD. PURPOSE: Framingham Offspring Study data were used to determine whether combined non-HDL cholesterol and waist circumference is equivalent to or better than TG and waist circumference (the hypertriglyceridemic waist phenotype) in predicting risk of CHD. METHODS: A total of 3,196 individuals from the Framingham Offspring Study, aged ≥ 40 years, who had fasted overnight for ≥ 9 hours and had no missing information on non-HDL cholesterol, TG levels, and waist circumference measurements, were included in the analysis. The Receiver Operating Characteristic (ROC) curve Area Under the Curve (AUC) was used to compare the predictive ability of non-HDL cholesterol and waist circumference versus TG and waist circumference. Cox proportional-hazards models were used to examine the association between the joint distribution of non-HDL cholesterol, waist circumference, and non-fatal CHD; the joint distribution of TG, waist circumference, and non-fatal CHD; and the joint distribution of non-HDL cholesterol and TG by waist circumference strata, after adjusting for age, gender, smoking, alcohol consumption, diabetes, and hypertension status. RESULTS: The ROC AUCs associated with non-HDL cholesterol and waist circumference and with TG and waist circumference were 0.6428 (CI: 0.6183, 0.6673) and 0.6299 (CI: 0.6049, 0.6548), respectively. The difference in the ROC AUCs is 1.29%, and the p-value for the test that this difference is zero is 0.10. There was a stronger positive association between non-HDL cholesterol and the risk of non-fatal CHD within each TG level than between TG and non-fatal CHD within each level of non-HDL cholesterol, especially in individuals with high waist circumference. CONCLUSION: The results suggest that the model including non-HDL cholesterol and waist circumference may be superior at predicting CHD compared with the model including TG and waist circumference.

Relevance:

60.00%

Publisher:

Abstract:

This thesis examines the performance of Canadian fixed-income mutual funds in the context of an unobservable market factor that affects mutual fund returns. We use various selection and timing models augmented with univariate and multivariate regime-switching structures. These models assume a joint distribution of an unobservable latent variable and fund returns. The fund sample comprises six Canadian value-weighted portfolios with different investment objectives from 1980 to 2011: Canadian fixed-income funds, Canadian inflation-protected fixed-income funds, Canadian long-term fixed-income funds, Canadian money market funds, Canadian short-term fixed-income funds and high-yield fixed-income funds. We find strong evidence that more than one state variable is necessary to explain the dynamics of the returns on Canadian fixed-income funds. For instance, Canadian fixed-income funds clearly show two regimes, with a turning point during the mid-eighties. This structural break corresponds to an increase in the Canadian bond index from its low values in the early 1980s to its current high values. Results for the other fixed-income funds show latent state variables that mimic the behaviour of general economic activity. Generally, we report that Canadian bond fund alphas are negative; in other words, fund managers do not add value through their selection abilities. We find evidence that Canadian fixed-income fund portfolio managers are successful market timers who shift portfolio weights between risky and riskless financial assets according to expected market conditions. Conversely, Canadian inflation-protected funds, Canadian long-term fixed-income funds and Canadian money market funds show no market timing ability. We conclude that these managers generally do not achieve positive performance by actively managing their portfolios. We also report that the Canadian fixed-income fund portfolios perform asymmetrically under different economic regimes; in particular, these portfolio managers demonstrate poorer selection skills during recessions. Finally, we demonstrate that the multivariate regime-switching model is superior to univariate models given the dynamic market conditions and the correlation between fund portfolios.
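For orientation only (the thesis's exact specifications are not reproduced here), a univariate selection-and-timing model with a two-state regime-switching structure of the kind described would take a form such as

    r_t = \alpha_{S_t} + \beta_{S_t} r_{m,t} + \gamma_{S_t} \max(r_{m,t}, 0) + \varepsilon_t, \qquad \varepsilon_t \sim N(0, \sigma_{S_t}^2),

where r_t is the fund excess return, r_{m,t} the market excess return, S_t an unobservable Markov state variable, \alpha_{S_t} captures selection ability and \gamma_{S_t} timing ability in each regime.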

Relevance:

60.00%

Publisher:

Abstract:

Quantum information theory studies the fundamental limits that the laws of physics impose on data-processing tasks such as data compression and data transmission over a noisy channel. This thesis presents general techniques that allow several fundamental problems of quantum information theory to be solved within a single framework. The central theorem of this thesis establishes the existence of a protocol for transmitting quantum data that the receiver already partially knows, using a single use of a noisy quantum channel; several central theorems of quantum information theory follow from it as immediate corollaries. The subsequent chapters use this theorem to prove the existence of new protocols for two other types of quantum channels, namely quantum broadcast channels and quantum channels with side information at the transmitter. These protocols also deal with the transmission of quantum data partially known to the receiver using a single channel use, and yield as corollaries asymptotic versions with and without auxiliary entanglement. The asymptotic versions with auxiliary entanglement can, in both cases, be regarded as quantum versions of the best known coding theorems for the classical versions of these problems. The last chapter deals with a purely quantum phenomenon called locking: it is possible to encode a classical message into a quantum state such that, by removing a subsystem whose size is only logarithmic in the total size, one can guarantee that no measurement has significant correlation with the message. The message is thus "locked" by a key of logarithmic size. This thesis presents the first locking protocol whose success criterion is that the trace distance between the joint distribution of the message and the measurement outcome and the product of their marginals be sufficiently small.
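In symbols (our notation, not the thesis's), the success criterion for the locking protocol described in the last sentence is that, for every measurement with outcome X performed on the remaining state, the trace distance between the joint distribution of the message M and X and the product of their marginals is small,

    \| p_{MX} - p_M \otimes p_X \|_1 \le \epsilon,

while the removed key subsystem has size only logarithmic in the total size of the state.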

Relevance:

60.00%

Publisher:

Abstract:

Using a scaling assumption, we propose a phenomenological model aimed at describing the joint probability distribution of two magnitudes A and T characterizing the spatial and temporal scales of a set of avalanches. The model also describes the correlation function of a sequence of such avalanches. As an example, we study the joint distribution of amplitudes and durations of the acoustic emission signals observed in martensitic transformations [Vives et al., preceding paper, Phys. Rev. B 52, 12 644 (1995)].

Relevance:

60.00%

Publisher:

Abstract:

In this thesis, certain continuous-time inventory problems with positive service time under local purchase guided by an N/T-policy are analysed. In most of the cases analysed, we arrive at a stochastic decomposition of the system states, that is, the joint distribution of the system states is obtained as the product of the marginal distributions of the components. The thesis is divided into five chapters.
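Concretely, the stochastic decomposition means that in steady state the joint distribution factorizes into the product of the marginals of the component processes; schematically, for two generic state components N and I (placeholder symbols, not the thesis's notation),

    P(N = n, I = i) = P(N = n)\, P(I = i).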

Relevance:

60.00%

Publisher:

Abstract:

A joint distribution of two discrete random variables with finite support can be displayed as a two-way table of probabilities adding to one. Assume that this table has n rows and m columns and that all probabilities are non-null. Such a table can be seen as an element of the simplex of n · m parts. In this context, the marginals are identified as compositional amalgams and the conditionals (rows or columns) as subcompositions; simplicial perturbation appears as Bayes' theorem. However, the Euclidean elements of the Aitchison geometry of the simplex can also be translated into the table of probabilities: subspaces, orthogonal projections, distances. Two important questions are addressed: (a) given a table of probabilities, which is the nearest independent table to the initial one? (b) which is the largest orthogonal projection of a row onto a column, or, equivalently, which is the information in a row explained by a column, thus explaining the interaction? To answer these questions, three orthogonal decompositions are presented: (1) by columns and a row-wise geometric marginal, (2) by rows and a column-wise geometric marginal, (3) by independent two-way tables and fully dependent tables representing row-column interaction. An important result is that the nearest independent table is the product of the two (row- and column-wise) geometric marginal tables. A corollary is that, in an independent table, the geometric marginals conform with the traditional (arithmetic) marginals. These decompositions can be compared with standard log-linear models. Key words: balance, compositional data, simplex, Aitchison geometry, composition, orthonormal basis, arithmetic and geometric marginals, amalgam, dependence measure, contingency table
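A small numerical sketch of the main result (function names are ours): the nearest independent table, in the Aitchison geometry, is the re-closed outer product of the row-wise and column-wise geometric marginals.

    import numpy as np

    def geometric_marginals(p):
        # p: n x m table of strictly positive probabilities summing to one.
        row = np.exp(np.log(p).mean(axis=1))   # row-wise geometric marginal (before closure)
        col = np.exp(np.log(p).mean(axis=0))   # column-wise geometric marginal (before closure)
        return row / row.sum(), col / col.sum()

    def nearest_independent_table(p):
        # Outer product of the geometric marginals, re-closed to sum to one.
        row, col = geometric_marginals(p)
        q = np.outer(row, col)
        return q / q.sum()

For a table that is already independent, this construction returns the table itself, and the closed geometric marginals agree with the arithmetic ones, as stated in the corollary above.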

Relevance:

60.00%

Publisher:

Abstract:

The decomposition methods available when the data are fully observed are not valid when the variable of interest is censored. This may explain the scarcity of such exercises for duration variables, which are usually observed under censoring. This paper proposes an Oaxaca-Blinder-type method for decomposing differences in means in the context of censored data. The validity of the method rests on the identification and estimation of the joint distribution of the duration variable and a set of covariates. In addition, a more general method is proposed that allows other functionals of interest, such as the median or the Gini coefficient, to be decomposed; it is based on specifying the conditional distribution function of the duration variable given a set of covariates. Monte Carlo experiments are carried out to assess the performance of these methods. Finally, the proposed methods are applied to analyse gender gaps in several characteristics of unemployment duration in Spain, such as the mean duration, the probability of long-term unemployment, and the Gini coefficient. The results indicate that factors other than observable characteristics, such as human capital or household structure, play a major role in explaining these gaps.
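For reference, the classical Oaxaca-Blinder decomposition of a mean gap between groups A and B for fully observed data, which this paper extends to censored durations, reads (under one common choice of reference coefficients)

    \bar{y}_A - \bar{y}_B = (\bar{x}_A - \bar{x}_B)' \hat{\beta}_B + \bar{x}_A' (\hat{\beta}_A - \hat{\beta}_B),

with the first term attributable to differences in observable characteristics and the second to differences in coefficients. Under censoring these regression ingredients are no longer directly available, which is why the proposed method relies on identifying and estimating the joint distribution of the duration variable and the covariates.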

Relevance:

60.00%

Publisher:

Abstract:

A set of random variables is exchangeable if its joint distribution function is invariant under permutation of the arguments. The concept of exchangeability is discussed, with a view towards potential application in evaluating ensemble forecasts. It is argued that the paradigm of ensembles being an independent draw from an underlying distribution function is probably too narrow; allowing ensemble members to be merely exchangeable might be a more versatile model. The question is discussed whether established methods of ensemble evaluation need alteration under this model, with reliability being given particular attention. It turns out that the standard methodology of rank histograms can still be applied. As a first application of the exchangeability concept, it is shown that the method of minimum spanning trees to evaluate the reliability of high dimensional ensembles is mathematically sound.
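In symbols (notation ours), random variables X_1, ..., X_n are exchangeable when their joint distribution function satisfies

    F(x_1, \dots, x_n) = F(x_{\pi(1)}, \dots, x_{\pi(n)}) \quad \text{for every permutation } \pi \text{ of } \{1, \dots, n\};

independent draws from a common distribution are a special case, but the converse need not hold, which is why exchangeability is the more versatile model for ensemble members.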

Relevance:

60.00%

Publisher:

Abstract:

We introduce an algorithm (called REDFITmc2) for spectrum estimation in the presence of timescale errors. It is based on the Lomb-Scargle periodogram for unevenly spaced time series, in combination with Welch's Overlapped Segment Averaging procedure, bootstrap bias correction and persistence estimation. The timescale errors are modelled parametrically and included in the simulations for determining (1) the upper levels of the spectrum of the red-noise AR(1) alternative and (2) the uncertainty of the frequency of a spectral peak. Application of REDFITmc2 to ice core and stalagmite records of palaeoclimate allowed a more realistic evaluation of spectral peaks than when ignoring this source of uncertainty. The results support qualitatively the intuition that stronger effects on the spectrum estimate (decreased detectability and increased frequency uncertainty) occur at higher frequencies. The added value of REDFITmc2 is that those effects are quantified. Regarding timescale construction, not only the fixed points, dating errors and the functional form of the age-depth model play a role; the joint distribution of all time points (serial correlation, stratigraphic order) also determines the spectrum estimate.
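REDFITmc2 itself (WOSA segmenting, bias correction, persistence estimation and the parametric timescale-error model) is not reproduced here; the sketch below only illustrates the core Monte Carlo idea of folding timescale uncertainty into the red-noise null levels, and every name and parameter choice in it is an assumption made for illustration.

    import numpy as np
    from scipy.signal import lombscargle

    def ar1_surrogate(t, phi, rng):
        # Red-noise AR(1) surrogate on unevenly spaced times, persistence phi per time unit.
        x = np.zeros(len(t))
        for i in range(1, len(t)):
            a = phi ** (t[i] - t[i - 1])
            x[i] = a * x[i - 1] + rng.normal(scale=np.sqrt(1.0 - a ** 2))
        return x

    def red_noise_levels(t, t_err, freqs, phi=0.7, n_sim=1000, q=95.0, seed=0):
        # Upper percentile spectrum of the AR(1) alternative, with timescale errors folded in.
        rng = np.random.default_rng(seed)
        spectra = []
        for _ in range(n_sim):
            tj = np.sort(t + rng.normal(scale=t_err, size=len(t)))  # jitter ages within dating errors
            xj = ar1_surrogate(tj, phi, rng)
            spectra.append(lombscargle(tj, xj - xj.mean(), 2.0 * np.pi * np.asarray(freqs)))
        return np.percentile(spectra, q, axis=0)

Each simulation perturbs the ages within their dating errors (re-sorting to respect stratigraphic order), generates an AR(1) surrogate on the perturbed times, and accumulates its Lomb-Scargle periodogram; the percentile across simulations gives the upper spectrum level against which observed peaks are judged.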