995 results for Feynman-Kac formula Markov semigroups principal eigenvalue


Relevance:

30.00%

Publisher:

Abstract:

In this paper, the distribution of the ratio of extreme eigenvalues of a complex Wishart matrix is studied in order to calculate the exact decision threshold, as a function of the desired probability of false alarm, for the maximum-minimum eigenvalue (MME) detector. In contrast to the asymptotic analysis reported in the literature, we consider a finite number of cooperative receivers and a finite number of samples and derive the exact decision threshold for the probability of false alarm. The proposed exact formulation is further reduced to the case of cooperative spectrum sensing with two receivers. In addition, an approximate closed-form expression for the exact threshold is derived in terms of the desired probability of false alarm for the special case of an equal number of receive antennas and signal samples. Finally, the derived exact decision thresholds are verified with Monte Carlo simulations. We show that the detection probability achieved with the proposed exact decision thresholds yields significant performance gains compared to that obtained with the asymptotic decision threshold.
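As a rough illustration of the eigenvalue-ratio test described above (not the paper's exact threshold formula), the following Python sketch forms the sample covariance matrix of the received signals across cooperative receivers and compares the ratio of its largest to smallest eigenvalue against a threshold; the threshold value used here is a hypothetical placeholder rather than the derived exact expression.

```python
import numpy as np

def mme_detect(Y, threshold):
    """Maximum-minimum eigenvalue (MME) detector sketch.

    Y         : K x N array of complex samples (K receivers, N samples).
    threshold : decision threshold; in the paper it is computed exactly from the
                desired false-alarm probability, here it is simply passed in.
    Returns True if a signal is declared present.
    """
    K, N = Y.shape
    R = (Y @ Y.conj().T) / N                  # sample covariance matrix
    eigvals = np.linalg.eigvalsh(R)           # real eigenvalues, ascending order
    ratio = eigvals[-1] / eigvals[0]          # lambda_max / lambda_min
    return ratio > threshold

# Toy usage: noise-only samples should rarely exceed a suitable threshold.
rng = np.random.default_rng(0)
noise = (rng.standard_normal((4, 500)) + 1j * rng.standard_normal((4, 500))) / np.sqrt(2)
print(mme_detect(noise, threshold=2.0))       # threshold chosen arbitrarily here
```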

Relevance:

30.00%

Publisher:

Abstract:

Dissertation submitted for the degree of Master in Mechanical Engineering.

Relevance:

30.00%

Publisher:

Abstract:

This paper develops a general stochastic framework and an equilibrium asset pricing model that make clear how attitudes towards intertemporal substitution and risk matter for option pricing. In particular, we show under which statistical conditions option pricing formulas are not preference-free, in other words, when preferences are not hidden in the stock and bond prices as they are in the standard Black and Scholes (BS) or Hull and White (HW) pricing formulas. The dependence of option prices on preference parameters comes from several instantaneous causality effects such as the so-called leverage effect. We also emphasize that the most standard asset pricing models (CAPM for the stock and BS or HW preference-free option pricing) are valid under the same stochastic setting (typically the absence of leverage effect), regardless of preference parameter values. Even though we propose a general non-preference-free option pricing formula, we always keep in mind that the BS formula is dominant both as a theoretical reference model and as a tool for practitioners. Another contribution of the paper is to characterize why the BS formula is such a benchmark. We show that, as soon as we are ready to accept a basic property of option prices, namely their homogeneity of degree one with respect to the pair formed by the underlying stock price and the strike price, the necessary statistical hypotheses for homogeneity provide BS-shaped option prices in equilibrium. This BS-shaped option-pricing formula allows us to derive interesting characterizations of the volatility smile, that is, the pattern of BS implicit volatilities as a function of the option moneyness. First, the asymmetry of the smile is shown to be equivalent to a particular form of asymmetry of the equivalent martingale measure. Second, this asymmetry appears precisely when there is either a premium on an instantaneous interest rate risk or on a generalized leverage effect or both, in other words, whenever the option pricing formula is not preference-free. Therefore, the main conclusion of our analysis for practitioners should be that an asymmetric smile is indicative of the relevance of preference parameters to price options.
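The homogeneity property that the abstract takes as its starting point (option prices homogeneous of degree one in the underlying price and the strike) can be checked directly on the Black-Scholes formula. The sketch below is only an illustration of that property with arbitrary parameter values, not part of the paper's equilibrium model.

```python
import numpy as np
from scipy.stats import norm

def bs_call(S, K, T, r, sigma):
    """Standard Black-Scholes price of a European call."""
    d1 = (np.log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * np.sqrt(T))
    d2 = d1 - sigma * np.sqrt(T)
    return S * norm.cdf(d1) - K * np.exp(-r * T) * norm.cdf(d2)

# Homogeneity of degree one in (S, K): C(lam*S, lam*K) = lam * C(S, K).
S, K, T, r, sigma, lam = 100.0, 95.0, 0.5, 0.02, 0.25, 3.0
lhs = bs_call(lam * S, lam * K, T, r, sigma)
rhs = lam * bs_call(S, K, T, r, sigma)
print(np.isclose(lhs, rhs))   # True
```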

Relevance:

30.00%

Publisher:

Abstract:

We consider diffusion processes, defined by stochastic differential equations, and then study first-passage problems for the discrete-time Markov chains corresponding to these diffusion processes. As is known from the literature, these chains converge in law to the solution of the stochastic differential equations under consideration. Our contribution consists in finding explicit formulas for the first-passage probability and the duration of the game for these discrete-time Markov chains. We also show that the results obtained converge, with respect to the Euclidean metric (i.e., the Euclidean topology), to the corresponding quantities for the diffusion processes. Finally, we study an optimal control problem for discrete-time Markov chains. The objective is to find the value that minimizes the expected value of a given cost function. In contrast to the continuous case, there is no explicit formula for this optimal value in the discrete case. We therefore study in this thesis several particular cases for which we were able to find this optimal value.
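As a minimal illustration of the kind of first-passage quantity discussed above (not the thesis's explicit formulas), the sketch below takes a simple birth-death chain approximating a diffusion on a grid and computes the probability of reaching the upper boundary before the lower one by solving the standard linear system for hitting probabilities.

```python
import numpy as np

def hitting_prob_upper(p_up, n_states):
    """Probability, from each interior state of a birth-death chain on
    {0, 1, ..., n_states}, of hitting n_states before 0.

    p_up : probability of moving one step up (1 - p_up is a step down).
    Solves u(i) = p_up*u(i+1) + (1-p_up)*u(i-1), with u(0) = 0, u(n) = 1.
    """
    n = n_states
    A = np.zeros((n - 1, n - 1))
    b = np.zeros(n - 1)
    for i in range(1, n):
        A[i - 1, i - 1] = 1.0
        if i - 1 >= 1:
            A[i - 1, i - 2] = -(1.0 - p_up)
        if i + 1 <= n - 1:
            A[i - 1, i] = -p_up
        else:
            b[i - 1] = p_up          # boundary condition u(n) = 1 enters the RHS
    return np.linalg.solve(A, b)

# A symmetric walk (p_up = 0.5) mimics driftless Brownian motion: the hitting
# probability from state i is approximately i / n, the diffusion answer.
print(hitting_prob_upper(0.5, 10))
```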

Relevance:

30.00%

Publisher:

Abstract:

As stated in Aitchison (1986), a proper study of relative variation in a compositional data set should be based on logratios, and dealing with logratios excludes dealing with zeros. Nevertheless, it is clear that zero observations might be present in real data sets, either because the corresponding part is completely absent (essential zeros) or because it is below the detection limit (rounded zeros). Because the second kind of zeros is usually understood as "a trace too small to measure", it seems reasonable to replace them by a suitable small value, and this has been the traditional approach. As stated, e.g., by Tauber (1999) and by Martín-Fernández, Barceló-Vidal, and Pawlowsky-Glahn (2000), the principal problem in compositional data analysis is related to rounded zeros. One should be careful to use a replacement strategy that does not seriously distort the general structure of the data. In particular, the covariance structure of the involved parts, and thus the metric properties, should be preserved, as otherwise further analysis of subpopulations could be misleading. Following this point of view, a non-parametric imputation method is introduced in Martín-Fernández, Barceló-Vidal, and Pawlowsky-Glahn (2000). This method is analyzed in depth by Martín-Fernández, Barceló-Vidal, and Pawlowsky-Glahn (2003), where it is shown that the theoretical drawbacks of the additive zero replacement method proposed in Aitchison (1986) can be overcome using a new multiplicative approach on the non-zero parts of a composition. The new approach has reasonable properties from a compositional point of view. In particular, it is "natural" in the sense that it recovers the "true" composition if the replacement values are identical to the missing values, and it is coherent with the basic operations on the simplex. This coherence implies that the covariance structure of subcompositions with no zeros is preserved. As a generalization of the multiplicative replacement, a substitution method for missing values in compositional data sets is introduced in the same paper.
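A minimal sketch of the multiplicative replacement idea described above, assuming compositions closed to 1 and a single user-chosen imputation value delta for every rounded zero; the cited papers allow part-specific imputation values and other closure constants.

```python
import numpy as np

def multiplicative_replacement(x, delta=1e-5):
    """Replace rounded zeros in a composition closed to 1.

    Zeros become delta; the non-zero parts are rescaled multiplicatively so the
    result still sums to 1, which leaves the ratios between non-zero parts (and
    hence the subcompositional covariance structure) unchanged.
    """
    x = np.asarray(x, dtype=float)
    zeros = (x == 0)
    total_imputed = delta * zeros.sum()
    return np.where(zeros, delta, x * (1.0 - total_imputed))

comp = np.array([0.6, 0.3, 0.1, 0.0])
print(multiplicative_replacement(comp, delta=0.001))   # sums to 1, ratios 0.6:0.3:0.1 preserved
```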

Relevance:

30.00%

Publisher:

Abstract:

In this article we prove new results concerning the existence and various properties of an evolution system U_{A+B}(t, s), 0 ≤ s ≤ t ≤ T, generated by the sum -(A(t) + B(t)) of two linear, time-dependent, and generally unbounded operators defined on time-dependent domains in a complex and separable Banach space B. In particular, writing L(B) for the algebra of all bounded linear operators on B, we can express U_{A+B}(t, s), 0 ≤ s ≤ t ≤ T, as the strong limit in L(B) of a product of the holomorphic contraction semigroups generated by -A(t) and -B(t), respectively, thereby proving a product formula of the Trotter-Kato type under very general conditions which allow the domain D(A(t) + B(t)) to evolve with time, provided there exists a fixed set D ⊆ ∩_{t ∈ [0,T]} D(A(t) + B(t)) everywhere dense in B. We obtain a special case of our formula when B(t) = 0, which, in effect, allows us to reconstruct U_A(t, s), 0 ≤ s ≤ t ≤ T, very simply in terms of the semigroup generated by -A(t). We then illustrate our results by considering various examples of nonautonomous parabolic initial-boundary value problems, including one related to the theory of time-dependent singular perturbations of self-adjoint operators. We finally mention what we think remains an open problem for the corresponding equations of Schrödinger type in quantum mechanics.
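Although the abstract concerns unbounded, time-dependent generators, the flavour of a Trotter-Kato product formula can already be seen for constant bounded operators (matrices), where exp(-(A+B)) is the limit of alternating short steps of the two individual semigroups. The sketch below checks this numerically; it illustrates only the classical autonomous case, not the time-dependent result proved in the paper.

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(1)
A = rng.standard_normal((4, 4))
B = rng.standard_normal((4, 4))

exact = expm(-(A + B))                            # semigroup generated by -(A+B) at time 1

for n in (1, 10, 100, 1000):
    step = expm(-A / n) @ expm(-B / n)            # one short alternating step
    trotter = np.linalg.matrix_power(step, n)     # n alternating steps
    err = np.linalg.norm(trotter - exact)
    print(f"n = {n:5d}   error = {err:.2e}")      # error decays roughly like 1/n
```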

Relevance:

30.00%

Publisher:

Abstract:

In this article, dedicated to Professor V. Lakshmikantham on the occasion of the celebration of his 84th birthday, we announce new results concerning the existence and various properties of an evolution system U_{A+B}(t, s), 0 ≤ s ≤ t ≤ T, generated by the sum -(A(t) + B(t)) of two linear, time-dependent, and generally unbounded operators defined on time-dependent domains in a complex and separable Banach space B. In particular, writing L(B) for the algebra of all bounded linear operators on B, we can express U_{A+B}(t, s), 0 ≤ s ≤ t ≤ T, as the strong limit in L(B) of a product of the holomorphic contraction semigroups generated by -A(t) and -B(t), thereby obtaining a product formula of the Trotter-Kato type under very general conditions which allow the domain D(A(t) + B(t)) to evolve with time, provided there exists a fixed set D ⊆ ∩_{t ∈ [0,T]} D(A(t) + B(t)) everywhere dense in B. We then mention several possible applications of our product formula to various classes of non-autonomous parabolic initial-boundary value problems, as well as to evolution problems of Schrödinger type related to the theory of time-dependent singular perturbations of self-adjoint operators in quantum mechanics. We defer all the proofs and all the details of the applications to a separate publication. (C) 2008 Elsevier Ltd. All rights reserved.

Relevance:

30.00%

Publisher:

Abstract:

In this paper, the performance of the discrete Fourier transform LMS adaptive filter (DFT-LMS) and the discrete cosine transform LMS adaptive filter (DCT-LMS) is analyzed for Markov-2 inputs. To improve the convergence of the least mean squares (LMS) adaptive filter, DFT-LMS and DCT-LMS preprocess the inputs with fixed orthogonal transforms followed by power normalization. We derive asymptotic results for the eigenvalues and eigenvalue distributions of the preprocessed input autocorrelation matrices of DFT-LMS and DCT-LMS for Markov-2 inputs. These results explicitly show the superior decorrelation property of DCT-LMS over DFT-LMS and also provide upper bounds on the eigenvalue spreads of the finite-length DFT-LMS and DCT-LMS adaptive filters. Simulation results are presented to support the analysis.
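As a rough sketch of the transform-domain structure discussed above (DCT preprocessing followed by power normalization and an LMS update), the following fragment implements a common textbook form of DCT-LMS; the step size, filter length, and power-estimate smoothing factor are illustrative choices, not values from the paper.

```python
import numpy as np
from scipy.fft import dct

def dct_lms(x, d, n_taps=8, mu=0.05, beta=0.99, eps=1e-6):
    """Transform-domain LMS with DCT preprocessing and power normalization.

    x : input signal, d : desired signal (same length).
    Returns the error signal e[n] = d[n] - y[n].
    """
    w = np.zeros(n_taps)              # weights in the transform domain
    power = np.full(n_taps, eps)      # running power estimate per DCT bin
    buf = np.zeros(n_taps)            # tap-delay line
    e = np.zeros(len(x))
    for n in range(len(x)):
        buf = np.roll(buf, 1)
        buf[0] = x[n]
        u = dct(buf, type=2, norm='ortho')        # fixed orthogonal transform
        power = beta * power + (1 - beta) * u**2  # power normalization estimate
        e[n] = d[n] - w @ u
        w += mu * e[n] * u / (power + eps)        # normalized LMS update
    return e

# Toy usage: identify an unknown FIR system driven by a correlated (AR(2)) input.
rng = np.random.default_rng(0)
x = np.zeros(2000)
for n in range(2, len(x)):
    x[n] = 1.2 * x[n - 1] - 0.5 * x[n - 2] + rng.standard_normal()
h = np.array([0.5, -0.3, 0.2, 0.1, 0.0, 0.0, 0.0, 0.0])
d = np.convolve(x, h)[:len(x)]
e = dct_lms(x, d)
print(np.mean(e[-200:] ** 2))         # small residual error after adaptation
```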

Relevance:

30.00%

Publisher:

Abstract:

Recently, many unified learning algorithms have been developed to solve the tasks of principal component analysis (PCA) and minor component analysis (MCA). Such a unified algorithm can extract a principal component and, with a simple sign change, can also serve as a minor component extractor, which is of practical significance for implementation. Convergence of the existing unified algorithms is guaranteed only under the condition that the learning rate approaches zero, which is impractical in many applications. In this paper, we propose a unified PCA and MCA algorithm with a constant learning rate and derive sufficient conditions that guarantee convergence by analyzing the discrete-time dynamics of the proposed algorithm. The theoretical results lay a solid foundation for applications of the proposed algorithm.
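The following sketch shows the general idea of a sign-switched PCA/MCA rule with a constant learning rate, using a simple Oja-type update as a stand-in; it is not the specific algorithm proposed in the paper, whose update and convergence conditions differ.

```python
import numpy as np

def unified_pca_mca(X, mode="pca", eta=0.01, n_iter=5000, seed=0):
    """Extract the principal (mode='pca') or minor (mode='mca') eigenvector of
    the sample autocorrelation matrix of X with an Oja-type update and a
    constant learning rate. Illustrative only."""
    rng = np.random.default_rng(seed)
    C = (X.T @ X) / len(X)                  # sample autocorrelation matrix
    w = rng.standard_normal(C.shape[0])
    w /= np.linalg.norm(w)
    sign = 1.0 if mode == "pca" else -1.0   # flipping the sign switches PCA <-> MCA
    for _ in range(n_iter):
        w += sign * eta * (C @ w - (w @ C @ w) * w)   # Oja-style update
        w /= np.linalg.norm(w)              # keep the weight vector normalized
    return w

# Toy usage: compare with the eigenvectors computed by numpy.
rng = np.random.default_rng(1)
X = rng.standard_normal((2000, 4)) @ np.diag([3.0, 2.0, 1.0, 0.3])
evals, evecs = np.linalg.eigh((X.T @ X) / len(X))
print(np.abs(unified_pca_mca(X, "pca") @ evecs[:, -1]))   # close to 1
print(np.abs(unified_pca_mca(X, "mca") @ evecs[:, 0]))    # close to 1
```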

Relevance:

30.00%

Publisher:

Abstract:

This paper presents a new multivariate process capability index (MPCI) which is based on principal component analysis (PCA) and depends on a parameter (Formula presented.) that can take on any real number. This MPCI generalises some existing PCA-based multivariate indices proposed by several authors when (Formula presented.) or (Formula presented.). One of the key contributions of this paper is to show that there is a direct correspondence between this MPCI and the process yield for a unique value of (Formula presented.). This result is used to relate the index to the capability status of the process and to show that, under some mild conditions, the estimator of this MPCI is consistent and converges to a normal distribution. This is then applied to perform tests of statistical hypotheses and to determine sample sizes. Several numerical examples are presented with the objective of illustrating the procedures and demonstrating how they can be applied to assess the viability and capability of different manufacturing processes.
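For orientation only, the sketch below computes one well-known member of the PCA-based family of capability indices that the paper generalises (a Wang-and-Chen-style index: per-component Cp values on the principal components combined geometrically); it is not the new parameterised index proposed in the paper.

```python
import numpy as np

def pca_mpci(X, LSL, USL, n_components=None):
    """Project the specification limits onto the principal components of the
    data and combine per-component Cp values geometrically. Illustrative only."""
    X = np.asarray(X, float)
    mu = X.mean(axis=0)
    evals, evecs = np.linalg.eigh(np.cov(X, rowvar=False))
    order = np.argsort(evals)[::-1]                # components by decreasing variance
    evals, evecs = evals[order], evecs[:, order]
    k = n_components or X.shape[1]
    lsl_pc = (np.asarray(LSL) - mu) @ evecs[:, :k]  # spec limits in PC coordinates
    usl_pc = (np.asarray(USL) - mu) @ evecs[:, :k]
    cp = np.abs(usl_pc - lsl_pc) / (6.0 * np.sqrt(evals[:k]))
    return cp.prod() ** (1.0 / k)                   # geometric mean of per-component Cp

rng = np.random.default_rng(0)
X = rng.multivariate_normal([10.0, 5.0], [[0.04, 0.01], [0.01, 0.02]], size=500)
print(pca_mpci(X, LSL=[9.4, 4.6], USL=[10.6, 5.4]))
```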

Relevance:

30.00%

Publisher:

Abstract:

This paper proposes a methodology for the automatic extraction of building roof contours from a Digital Elevation Model (DEM), which is generated through the regularization of an available laser point cloud. The methodology is based on two steps. First, in order to detect high objects (buildings, trees, etc.), the DEM is segmented using a recursive splitting technique followed by a Bayesian merging technique. The recursive splitting technique uses a quadtree structure to subdivide the DEM into homogeneous regions. In order to minimize the fragmentation commonly observed in the results of recursive splitting segmentation, a region-merging technique based on a Bayesian framework is applied to the previously segmented data. The polygons of high objects are extracted using vectorization and polygonization techniques. Second, the building roof contours are identified among all the high objects extracted previously. Taking into account some roof properties and some feature measurements (e.g., area, rectangularity, and angles between the principal axes of the roofs), an energy function is formulated based on the Markov Random Field (MRF) model. The solution of this function is a polygon set corresponding to building roof contours and is found using a minimization technique such as the Simulated Annealing (SA) algorithm. Experiments carried out with laser-scanning DEMs showed that the methodology works properly: it delivered roof contours with approximately 90% shape accuracy, and no false positives were observed.
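The quadtree splitting step can be pictured as recursively subdividing the elevation grid until each block is height-homogeneous. The sketch below uses a simple standard-deviation threshold as the homogeneity test, which is a placeholder for whatever criterion the paper actually employs, and it omits the Bayesian merging stage.

```python
import numpy as np

def quadtree_split(dem, threshold, min_size=4, origin=(0, 0)):
    """Recursively split a DEM block into quadrants until each block's elevation
    standard deviation falls below `threshold` (or the block reaches `min_size`).
    Returns a list of (row, col, height, width) leaf blocks."""
    h, w = dem.shape
    r0, c0 = origin
    if (dem.std() <= threshold) or (h <= min_size) or (w <= min_size):
        return [(r0, c0, h, w)]                    # homogeneous (or minimal) leaf
    hh, hw = h // 2, w // 2
    leaves = []
    leaves += quadtree_split(dem[:hh, :hw], threshold, min_size, (r0, c0))
    leaves += quadtree_split(dem[:hh, hw:], threshold, min_size, (r0, c0 + hw))
    leaves += quadtree_split(dem[hh:, :hw], threshold, min_size, (r0 + hh, c0))
    leaves += quadtree_split(dem[hh:, hw:], threshold, min_size, (r0 + hh, c0 + hw))
    return leaves

# Toy DEM: flat ground with one raised rectangular "building".
dem = np.zeros((64, 64))
dem[20:40, 24:48] = 8.0
print(len(quadtree_split(dem, threshold=0.5)))     # more leaves near the building edges
```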

Relevance:

30.00%

Publisher:

Abstract:

Graduate program in Physics - IFT

Relevance:

30.00%

Publisher:

Abstract:

Graduate program in Animal Science - FMVZ

Relevance:

30.00%

Publisher:

Abstract:

Graduate program in Mathematics in National Network - IBILCE

Relevance:

30.00%

Publisher:

Abstract:

This thesis presents a methodology for the characterization of sea waves, within the framework of the new Spanish Recommendations for Maritime Works (ROM 0.0-00 and ROM 1.0-09), since waves are one of the main actions affecting the stability of maritime structures. Because the action considered, sea storms, is intrinsically multivariate and random, its parametric characterization is carried out in terms of one-parameter copula functions. The variables considered are the significant wave height at the storm peak, the associated mean period, and the magnitude, or number of waves, of the whole loading cycle. To establish a theoretical pattern of storm evolution that allows samples to be extrapolated outside the region with data, the existing theoretical models are analyzed, and it is found that they do not adequately reproduce storms made up of sea states with a significant swell component. To overcome this limitation, four theoretical storm evolution models with different geometric shapes are proposed. The analysis of the existing and proposed models shows that the Equivalent Magnitude Storm (EMS) model with a triangular shape best fits storms made up of typical wind-sea states, whereas for more developed storms the EMS model with a trapezoidal shape is the appropriate one. Of the approaches proposed to establish the mean period of the successive sea states of the loading cycle, the one proposed by Martín Soldevilla et al. (2009) is the most versatile and, in general, the one that best reproduces the evolution of all types of storms. The characterization of storms is completed with the maximum wave height. Because synthetic wave data are more readily available and cover longer periods than recorded data, practically all the extreme-value analyses are carried out with synthetic storms, for which the distribution of individual waves is unknown. To avoid this limitation, theoretical wave-height distribution models are used according to the characteristics of each of the sea states making up the synthetic storm. These characteristics are established from the kurtosis, which is used to distinguish between linear and nonlinear sea states, and depending on its value the maximum wave height is determined by assuming a particular wave-height distribution. For linear sea states the Rayleigh distribution of individual waves is considered. For nonlinear, broad-banded conditions the wave-height distribution model proposed by Dawson (2004) is used, and for narrow-banded conditions the predictions of Boccotti (1989) and Boccotti et al. (2013) are compared with those obtained from Dawson's model.
The multivariate characterization of storm evolution is applied to the study of damage progression in the main armour layer of rubble-mound breakwaters and to wave overtopping. Both aspects form the second objective of the thesis, in which a new formula is proposed for the design of armour layers made of cubic concrete blocks. This new formula is based on the results of the stability studies of the main armour layer of rubble-mound breakwaters carried out in small-scale physical models at the Centre for Harbours and Coastal Studies (CEDEX) since the 1980s, most of which used cubic concrete blocks. For this reason, and because the most recent breakwaters built on the Spanish coast use this type of unit, the proposed formulation focuses on it. After a first analysis of the existing design and damage-evolution formulas, it is concluded that further research effort is needed in this field, together with laboratory tests and in-situ data collection, in order to develop damage-evolution formulas for armour layers made of units other than rock that take into account the main variables governing their stability. In this part of the thesis, a method for damage-evolution analysis is proposed which includes a damage-initiation criterion (the beginning of block movement), is suitable for rubble-mound breakwaters with cubic concrete block armour, and takes into account oblique wave incidence, accumulated damage, and overtopping.
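Purely as an illustration of what a triangular storm-evolution shape looks like (not the thesis's Equivalent Magnitude Storm formulation, whose parameters are fitted so that the storm magnitude is preserved), the sketch below builds a piecewise-linear history of significant wave height from a threshold up to the storm peak and back down.

```python
import numpy as np

def triangular_storm(hs_peak, hs_threshold, n_states):
    """Symmetric triangular evolution of significant wave height Hs over
    n_states sea states: linear growth from the threshold to the peak and
    linear decay back to the threshold. Illustrative shape only."""
    half = n_states // 2
    rise = np.linspace(hs_threshold, hs_peak, half, endpoint=False)
    fall = np.linspace(hs_peak, hs_threshold, n_states - half)
    return np.concatenate([rise, fall])

hs = triangular_storm(hs_peak=6.5, hs_threshold=3.0, n_states=12)
print(np.round(hs, 2))   # Hs (m) of the successive sea states of the storm
```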