976 results for Numerical Models
Abstract:
This thesis aims to investigate the extremal properties of certain risk models of interest in various applications from insurance, finance and statistics. The thesis develops along two principal lines: In the first part, we focus on two univariate risk models, namely deflated risk and reinsurance risk models. We investigate their tail expansions under certain tail conditions on the common risks. Our main results are illustrated by typical examples and numerical simulations. Finally, the findings are formulated into applications in insurance, for instance approximations of Value-at-Risk and conditional tail expectations. The second part of this thesis is devoted to the following three bivariate models: The first model is concerned with bivariate censoring of extreme events. For this model, we first propose a class of estimators for both the tail dependence coefficient and the tail probability. These estimators are flexible due to a tuning parameter, and their asymptotic distributions are obtained under certain second-order bivariate slow variation conditions of the model. We then give some examples and present a small Monte Carlo simulation study, followed by an application to a real data set from insurance.
The objective of our second bivariate risk model is the investigation of the tail dependence coefficient of bivariate skew slash distributions. Such skew slash distributions are widely useful in statistical applications; they are generated mainly by normal mean-variance mixtures and scaled skew-normal mixtures, which distinguish the tail dependence structure as shown by our principal results. The third bivariate risk model is concerned with the approximation of the component-wise maxima of skew elliptical triangular arrays. The theoretical results are based on certain tail assumptions on the underlying random radius.
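The (upper) tail dependence coefficient studied above is commonly defined as λ = lim_{u→1} P(F_Y(Y) > u | F_X(X) > u). As a hedged illustration only (a generic plug-in estimator, not the thesis's censored, tuning-parameter estimator), a minimal empirical version in Python might look like this:

```python
import numpy as np

def empirical_tail_dependence(x, y, k):
    """Plug-in estimator of the upper tail dependence coefficient.

    Counts how often y falls in its top-k set given that x does.
    x, y : 1-D samples of equal length n; k : number of top order
    statistics to use (k << n). Generic illustration, not the
    censored estimator with tuning parameter studied in the thesis.
    """
    n = len(x)
    # Ranks 1..n; values above the (n - k)-th order statistic count as extreme.
    rx = np.argsort(np.argsort(x)) + 1
    ry = np.argsort(np.argsort(y)) + 1
    joint = np.sum((rx > n - k) & (ry > n - k))
    # P(X extreme) = k/n, so the conditional probability estimate is joint/k.
    return joint / k

# Toy usage on dependent Gaussian-like data:
rng = np.random.default_rng(0)
x = rng.standard_normal(10_000)
y = 0.8 * x + 0.6 * rng.standard_normal(10_000)
print(empirical_tail_dependence(x, y, k=200))
```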
Abstract:
Sackung is a widespread post-glacial morphological feature affecting Alpine mountains and creating a characteristic geomorphological expression that can be detected from topography. Over long-term evolution, internal deformation can lead to the formation of rapidly moving phenomena such as rockslides or rock avalanches. In this study, a detailed description of the Sierre rock avalanche (SW Switzerland) is presented. This convex-shaped postglacial instability is one of the largest rock avalanches in the Alps, involving more than 1.5 billion m3 with a run-out distance of about 14 km and an extremely low Fahrböschung angle. This study presents comprehensive analyses of the structural and geological characteristics leading to the development of the Sierre rock avalanche. In particular, by combining field observations, digital elevation model analyses and numerical modelling, the strong influence of both ductile and brittle tectonic structures on the failure mechanism and on the failure surface geometry is highlighted. The detection of pre-failure deformation indicates that the development of the rock avalanche corresponds to the last evolutionary stage of a pre-existing deep-seated gravitational slope instability. These analyses, accompanied by the dating and characterization of the rock avalanche deposits, allow the proposal of a destabilization model that clarifies the different phases leading to the development of the Sierre rock avalanche.
Abstract:
We present a detailed analytical and numerical study of the avalanche distributions of the continuous damage fiber bundle model (CDFBM). Linearly elastic fibers undergo a series of partial failure events which give rise to a gradual degradation of their stiffness. We show that the model reproduces a wide range of mechanical behaviors. We find that macroscopic hardening and plastic responses are characterized by avalanche distributions which exhibit an algebraic decay, with exponents between 5/2 and 2, different from those observed in mean-field fiber bundle models. We also derive analytically the phase diagram of a family of CDFBMs which covers a large variety of potential avalanche size distributions. Our results provide a unified view of the statistics of breaking avalanches in fiber bundle models.
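For reference, the algebraic decay mentioned above can be written as a power law for the avalanche size distribution; the notation below (avalanche size Δ, exponent ξ) is generic, not necessarily the paper's:

```latex
\[
  P(\Delta) \;\propto\; \Delta^{-\xi},
  \qquad 2 \,\le\, \xi \,\le\, \tfrac{5}{2}.
\]
```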
Abstract:
The paper proposes a numerical solution method for general equilibrium models with a continuum of heterogeneous agents, which combines elements of projection and of perturbation methods. The basic idea is to solve first for the stationary solution of the model, without aggregate shocks but with fully specified idiosyncratic shocks. Afterwards, one computes a first-order perturbation of the solution in the aggregate shocks. This approach makes it possible to include a high-dimensional representation of the cross-sectional distribution in the state vector. The method is applied to a model of household saving with uninsurable income risk and liquidity constraints. The model includes not only productivity shocks, but also shocks to redistributive taxation, which cause substantial short-run variation in the cross-sectional distribution of wealth. It is shown that, if those shocks are operative, a solution method based on very few statistics of the distribution is not suitable, while the proposed method can solve the model with high accuracy, at least for the case of small aggregate shocks. Techniques are discussed to reduce the dimension of the state space such that higher-order perturbations are feasible. Matlab programs to solve the model can be downloaded.
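Although the paper's own implementation is in Matlab, the first-order perturbation step can be illustrated schematically. Suppose the equilibrium conditions (with the discretized cross-sectional distribution stacked into the state x) are written as F(x_{t+1}, x_t, x_{t-1}, eps_t) = 0 around a stationary point xbar, and one seeks the linear law of motion x_t - xbar = P (x_{t-1} - xbar) + Q eps_t. The Python sketch below (all names hypothetical) builds the Jacobians by finite differences and solves the matrix quadratic F1 P^2 + F2 P + F3 = 0 by a naive fixed-point iteration; real implementations use more robust solvers such as a generalized Schur decomposition:

```python
import numpy as np

def solve_first_order(F, xbar, n_eps, h=1e-6, tol=1e-10, max_iter=10_000):
    """Schematic first-order perturbation around a stationary point.

    F(x_next, x_now, x_prev, eps) -> residual vector (zero at xbar).
    Returns P, Q in  x_t - xbar = P (x_{t-1} - xbar) + Q eps_t.
    Purely illustrative; production codes use Schur-based solvers.
    """
    n = len(xbar)
    z = np.zeros(n_eps)

    def jac(i):
        # Central finite-difference Jacobian of F in its i-th argument.
        args = [xbar.copy(), xbar.copy(), xbar.copy(), z.copy()]
        m = len(args[i])
        J = np.empty((n, m))
        for j in range(m):
            args[i][j] += h
            f_plus = F(*args)
            args[i][j] -= 2 * h
            J[:, j] = (f_plus - F(*args)) / (2 * h)
            args[i][j] += h                 # restore
        return J

    F1, F2, F3, F4 = (jac(i) for i in range(4))

    # Fixed-point iteration on the matrix quadratic F1 P^2 + F2 P + F3 = 0:
    # P = -(F1 P + F2)^{-1} F3 at a solution.
    P = np.zeros((n, n))
    for _ in range(max_iter):
        P_new = -np.linalg.solve(F1 @ P + F2, F3)
        if np.max(np.abs(P_new - P)) < tol:
            P = P_new
            break
        P = P_new

    Q = -np.linalg.solve(F1 @ P + F2, F4)   # impact of the aggregate shocks
    return P, Q
```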
Abstract:
Functionally relevant large-scale brain dynamics operates within the framework imposed by anatomical connectivity and time delays due to finite transmission speeds. To gain insight into the reliability and comparability of large-scale brain network simulations, we investigate the effects of variations in the anatomical connectivity. Two different sets of detailed global connectivity structures are explored: the first extracted from the CoCoMac database and rescaled to the spatial extent of the human brain, the second derived from white-matter tractography applied to diffusion spectrum imaging (DSI) for a human subject. We use the combination of graph-theoretical measures of the connection matrices and numerical simulations to explicate the importance of both connectivity strength and delays in shaping dynamic behaviour. Our results demonstrate that the brain dynamics derived from the CoCoMac database are more complex and biologically more realistic than those based on the DSI database. We propose that the reason for this difference is the absence of directed weights in the DSI connectivity matrix.
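As a hedged illustration of the kind of graph-theoretical measures mentioned (the abstract does not list the specific metrics used), a weighted connection matrix can be summarized in a few lines of Python; the orientation convention for W is our assumption:

```python
import numpy as np
from scipy.sparse.csgraph import shortest_path

def connectivity_summary(W):
    """Basic graph measures for a weighted, directed connection matrix.

    W[i, j] >= 0 is assumed to be the weight of the connection from
    region j to region i. Returns connection density, in/out strength,
    and the mean shortest path length over connected pairs.
    """
    n = W.shape[0]
    offdiag = ~np.eye(n, dtype=bool)
    mask = (W > 0) & offdiag
    density = mask.sum() / offdiag.sum()
    in_strength = W.sum(axis=1)            # total afferent weight per region
    out_strength = W.sum(axis=0)           # total efferent weight per region
    # Use 1/weight as a distance so strong connections are "short";
    # zeros are interpreted by scipy as missing edges.
    dist = np.zeros_like(W, dtype=float)
    dist[mask] = 1.0 / W[mask]
    D = shortest_path(dist, method="D")    # Dijkstra
    finite = np.isfinite(D) & offdiag
    return density, in_strength, out_strength, D[finite].mean()
```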
Abstract:
In this work we analyze how patchy distributions of CO2 and brine within sand reservoirs may lead to significant attenuation and velocity dispersion effects, which in turn may have a profound impact on surface seismic data. The ultimate goal of this paper is to contribute to the understanding of these processes within the framework of the seismic monitoring of CO2 sequestration, a key strategy to mitigate global warming. We first carry out a Monte Carlo analysis to study the statistical behavior of attenuation and velocity dispersion of compressional waves traveling through rocks with properties similar to those of the Utsira Sand, Sleipner field, containing quasi-fractal patchy distributions of CO2 and brine. These results show that the mean patch size and CO2 saturation play key roles in the observed wave-induced fluid flow effects, which can be remarkably important when CO2 concentrations are low and mean patch sizes are relatively large. To analyze these effects on the corresponding surface seismic data, we perform numerical simulations of wave propagation considering reservoir models and CO2 accumulation patterns similar to those at the CO2 injection site in the Sleipner field. These numerical experiments suggest that wave-induced fluid flow effects may produce changes in the reservoir's seismic response, significantly modifying the main seismic attributes usually employed in the characterization of these environments. Consequently, the determination of the nature of the fluid distributions, as well as the proper modeling of the seismic data, constitutes an important aspect that should not be ignored in the seismic monitoring of CO2 sequestration.
Abstract:
Whereas numerical modeling using finite-element methods (FEM) can provide the transient temperature distribution in a component with sufficient accuracy, the development of compact dynamic thermal models that can be used for electrothermal simulation is of utmost importance. While in most cases single power sources are considered, here we focus on the simultaneous presence of multiple sources. The thermal model takes the form of a thermal impedance matrix containing the thermal impedance transfer functions between two arbitrary ports: each individual element Zij is obtained from the analysis of the temperature transient at node i after a power step at node j. Different options for multiexponential transient analysis are detailed and compared. Among the options explored, small thermal models can be obtained by constrained nonlinear least squares (NLSQ) methods if the order is selected properly using validation signals. The methods are applied to the extraction of dynamic compact thermal models for a new ultrathin chip stack technology (UTCS).
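To illustrate the multiexponential transient analysis mentioned above (the paper's exact constraints and order-selection procedure are not given in the abstract), one can fit a thermal step response Zth(t) ≈ Σ_k R_k (1 − exp(−t/τ_k)) with positive amplitudes and time constants using scipy; the function name and initial guess are ours:

```python
import numpy as np
from scipy.optimize import least_squares

def fit_multiexponential(t, z, n_terms=3):
    """Fit z(t) ~ sum_k R_k * (1 - exp(-t / tau_k)) with R_k, tau_k > 0.

    Minimal constrained-NLSQ sketch; model order n_terms would be
    chosen with validation signals, as described in the paper.
    """
    # Crude initial guess: equal amplitudes, log-spaced time constants.
    R0 = np.full(n_terms, z.max() / n_terms)
    tau0 = np.logspace(np.log10(t[1]), np.log10(t[-1]), n_terms)
    p0 = np.concatenate([R0, tau0])

    def residuals(p):
        R, tau = p[:n_terms], p[n_terms:]
        model = np.sum(R * (1 - np.exp(-t[:, None] / tau)), axis=1)
        return model - z

    sol = least_squares(residuals, p0, bounds=(1e-12, np.inf))
    return sol.x[:n_terms], sol.x[n_terms:]

# Toy usage: recover a synthetic two-pole thermal impedance.
t = np.logspace(-4, 1, 200)
z = 2.0 * (1 - np.exp(-t / 0.01)) + 5.0 * (1 - np.exp(-t / 1.0))
R, tau = fit_multiexponential(t, z, n_terms=2)
print(R, tau)
```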
Abstract:
In groundwater applications, Monte Carlo methods are employed to model the uncertainty in geological parameters. However, their brute-force application becomes computationally prohibitive for highly detailed geological descriptions, complex physical processes, and large numbers of realizations. The Distance Kernel Method (DKM) overcomes this issue by clustering the realizations in a multidimensional space based on the flow responses obtained by means of an approximate (computationally cheaper) model; the uncertainty is then estimated from the exact responses, which are computed only for one representative realization per cluster (the medoid). Usually, DKM is employed to decrease the size of the sample of realizations considered to estimate the uncertainty. We propose to use the information from the approximate responses for uncertainty quantification. The subset of exact solutions provided by DKM is then employed to construct an error model and correct the potential bias of the approximate model. Two error models are devised that both employ the difference between approximate and exact medoid solutions, but differ in the way medoid errors are interpolated to correct the whole set of realizations. The Local Error Model rests upon the clustering defined by DKM and can be seen as a natural way to account for intra-cluster variability; the Global Error Model employs a linear interpolation of all medoid errors regardless of the cluster to which the single realization belongs. These error models are evaluated for an idealized pollution problem in which the uncertainty of the breakthrough curve needs to be estimated. For this numerical test case, we demonstrate that the error models improve the uncertainty quantification provided by the DKM algorithm and are effective in correcting the bias of the estimate computed solely from the multiscale finite-volume (MsFV) results. The framework presented here is not specific to the methods considered and can be applied to other combinations of approximate models and techniques to select a subset of realizations.
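A hedged sketch of the workflow described above (clustering on approximate responses, exact solves at the medoids only, then a global correction of all realizations). The use of k-means to locate medoids, the inverse-distance weighting, and all names are our simplifications, not the paper's exact error models:

```python
import numpy as np
from scipy.cluster.vq import kmeans2

def dkm_global_error_model(approx, exact_solver, n_clusters=5, seed=0):
    """Correct approximate flow responses using a few exact medoid runs.

    approx : (n_real, n_time) approximate responses, one per realization.
    exact_solver : callable mapping a realization index to its exact
                   response (the expensive model, called n_clusters times).
    Returns bias-corrected responses for all realizations.
    """
    rng = np.random.default_rng(seed)
    _, labels = kmeans2(approx, n_clusters, seed=rng, minit="++")

    # Medoid = realization closest to its cluster centroid.
    medoids, errors = [], []
    for c in range(n_clusters):
        idx = np.where(labels == c)[0]
        if idx.size == 0:
            continue                        # guard against empty clusters
        centroid = approx[idx].mean(axis=0)
        m = idx[np.argmin(np.linalg.norm(approx[idx] - centroid, axis=1))]
        medoids.append(m)
        errors.append(exact_solver(m) - approx[m])

    # Global error model: interpolate medoid errors over all realizations,
    # here with inverse-distance weights in response space (one simple choice).
    med = np.array(medoids)
    err = np.array(errors)
    d = np.linalg.norm(approx[:, None, :] - approx[med][None, :, :], axis=2)
    w = 1.0 / np.maximum(d, 1e-12)
    w /= w.sum(axis=1, keepdims=True)
    return approx + w @ err
```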
Abstract:
Gel electrophoresis allows one to separate knotted DNA (nicked circular) of equal length according to the knot type. At low electric fields, complex knots, being more compact, drift faster than simpler knots. Recent experiments have shown that the drift velocity dependence on the knot type is inverted when changing from low to high electric fields. We present a computer simulation on a lattice of a closed, knotted, charged DNA chain drifting in an external electric field in a topologically restricted medium. Using a Monte Carlo algorithm, the dependence of the electrophoretic migration of the DNA molecules on the knot type and on the electric field intensity is investigated. The results are in qualitative and quantitative agreement with electrophoretic experiments done under conditions of low and high electric fields.
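To illustrate the kind of Monte Carlo move underlying such lattice simulations (a generic sketch, not the authors' algorithm: chain connectivity, topology conservation, and the gel's obstacle lattice are all omitted), a field-biased Metropolis step for a single charged monomer might look like this:

```python
import numpy as np

rng = np.random.default_rng(1)

def metropolis_field_step(pos, q=1.0, E=0.5, kT=1.0, axis=0):
    """One field-biased Metropolis move of a charged particle on a lattice.

    A displacement dx along the field changes the energy by -q * E * dx,
    so moves along the field are accepted preferentially. Generic sketch;
    a real chain simulation adds connectivity and topological checks.
    """
    step = np.zeros(3, dtype=int)
    d = rng.integers(0, 3)                 # pick a random lattice direction
    step[d] = rng.choice([-1, 1])
    dE = -q * E * step[axis]               # work done against/by the field
    if dE <= 0 or rng.random() < np.exp(-dE / kT):
        pos = pos + step                   # accept the move
    return pos

pos = np.zeros(3, dtype=int)
for _ in range(1000):
    pos = metropolis_field_step(pos)
print(pos)                                 # drifts along the field axis
```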
Abstract:
The intensity correlation functions C(t) for the colored-gain-noise model of dye lasers are analyzed and compared with those for the loss-noise model. For correlation times τ larger than the deterministic relaxation time td, we show with the use of the adiabatic approximation that C(t) values coincide for both models. For small correlation times we use a method that provides explicit expressions of non-Markovian correlation functions, approximating simultaneously the short- and long-time behaviors. Comparison with numerical simulations shows excellent results simultaneously for the short- and long-time regimes. It is found that, when the correlation time of the noise increases, differences between the gain- and loss-noise models tend to disappear. The decay of C(t) for both models can be described by a time scale that approaches the deterministic relaxation time. However, in contrast with the loss-noise model, a secondary time scale remains at large times for the gain-noise model, which could allow one to distinguish between the two models.
Abstract:
The invaded cluster (IC) dynamics introduced by Machta et al. [Phys. Rev. Lett. 75, 2792 (1995)] is extended to the fully frustrated Ising model on a square lattice. The properties of the dynamics, which exhibits numerical evidence of self-organized criticality, are studied. The fluctuations in the IC dynamics are shown to be intrinsic to the algorithm, and the fluctuation-dissipation theorem is no longer valid. The relaxation time is found to be very short and does not present a critical size dependence.
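For orientation, a minimal sketch of the original IC rule for the *unfrustrated* ferromagnetic Ising model (the paper's fully frustrated extension differs): satisfied bonds are invaded in random order until a cluster spans the lattice, then every cluster is flipped with probability 1/2. All implementation choices below (open boundaries, top-to-bottom spanning, union-find) are ours:

```python
import numpy as np

rng = np.random.default_rng(2)

def ic_sweep(spin):
    """One invaded-cluster sweep for the ferromagnetic Ising model (sketch)."""
    L = spin.shape[0]
    n = L * L
    parent = np.arange(n)
    top = np.zeros(n, bool); bot = np.zeros(n, bool)
    top[:L] = True; bot[-L:] = True        # sites in the first / last row

    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]  # path halving
            a = parent[a]
        return a

    # Collect satisfied bonds (right and down neighbors, open boundaries).
    bonds = []
    for i in range(L):
        for j in range(L):
            if j + 1 < L and spin[i, j] == spin[i, j + 1]:
                bonds.append((i * L + j, i * L + j + 1))
            if i + 1 < L and spin[i, j] == spin[i + 1, j]:
                bonds.append((i * L + j, (i + 1) * L + j))
    bonds = np.array(bonds)
    rng.shuffle(bonds)

    for a, b in bonds:                     # invade until a cluster spans
        ra, rb = find(a), find(b)
        if ra != rb:
            parent[rb] = ra
            top[ra] |= top[rb]; bot[ra] |= bot[rb]
        if top[ra] and bot[ra]:
            break                          # stopping rule: spanning cluster

    # Flip every cluster independently with probability 1/2.
    roots = np.array([find(k) for k in range(n)])
    flip = rng.random(n) < 0.5             # one coin per potential root label
    s = spin.ravel().copy()
    s[flip[roots]] *= -1
    return s.reshape(L, L)

spin = rng.choice([-1, 1], size=(16, 16))
for _ in range(10):
    spin = ic_sweep(spin)
```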
Abstract:
INTRODUCTION: The importance of micromovements in the mechanism of aseptic loosening is clinically difficult to evaluate. To complete the analysis of a series of total knee arthroplasties (TKA), we used a three-dimensional numerical model to study the micromovements of the tibial implant. MATERIAL AND METHODS: Fifty-one patients (with 57 cemented Porous Coated Anatomic TKAs) were reviewed (mean follow-up 4.5 years). Radiolucency at the tibial bone-cement interface was sought on the AP radiographs and divided into 7 areas. The distribution of the radiolucency was then correlated with the axis of the lower limb as measured on the orthoradiograms. The three-dimensional numerical model is based on the finite element method. It allowed the measurement of the cemented prosthetic tibial implant's displacements and the micromovements generated at the bone-cement interface. A total load (2000 Newton) was first applied vertically and asymmetrically on the tibial plateau, thereby simulating an axial deviation of the lower limbs. The vector's posterior inclination then permitted the addition of a tangential component to the axial load. This type of effort is generated by complex biomechanical phenomena such as knee flexion. RESULTS: 81 per cent of the 57 knees had a radiolucent line of at least 1 mm at one or more of the tibial cement-epiphysis junctional areas. The distribution of these lucent lines showed that they occurred more frequently at the periphery of the implant. The lucent lines appeared most often under the unloaded margin of the tibial plateau when axial deviation of the lower limbs was present. Numerical simulations showed that asymmetrical loading on the tibial plateau induced a subsidence of the loaded margin (0-100 microns) and lifting off at the opposite border (0-70 microns). The postero-anterior tangential component induced an anterior displacement of the tibial implant (160-220 microns), and horizontal micromovements with a non-homogeneous distribution at the bone-cement interface (28-54 microns). DISCUSSION: Comparison of clinical and numerical results showed a relation between the development of radiolucent lines and the unloading of the tibial implant's margin. The deleterious effect of axial deviation of the lower limbs is thereby proven. The irregular distribution of lucent lines under the tibial plateau was similar to the distribution of micromovements at the bone-cement interface when tangential forces were present. A causal relation between the two phenomena could not, however, be established. Numerical simulation is a truly useful method of study; it permits the calculation of micromovements which are relative, non-homogeneous and of very low amplitude. However, comparative clinical studies remain essential to ensure the credibility of results.
Abstract:
Application of semi-distributed hydrological models to large, heterogeneous watersheds faces several problems. On one hand, the spatial and temporal variability in catchment features should be adequately represented in the model parameterization, while maintaining the model complexity at an acceptable level to take advantage of state-of-the-art calibration techniques. On the other hand, model complexity enhances uncertainty in adjusted model parameter values, therefore increasing uncertainty in the water routing across the watershed. This is critical for water quality applications, where not only streamflow, but also a reliable estimation of the surface versus subsurface contributions to the runoff is needed. In this study, we show how a regularized inversion procedure combined with a multiobjective function calibration strategy successfully solves the parameterization of a complex application of a water quality-oriented hydrological model. The final values of several optimized parameters showed significant and consistent differences across geological and landscape features. Although the number of optimized parameters was significantly increased by the spatial and temporal discretization of adjustable parameters, the uncertainty in water routing results remained at reasonable values. In addition, a stepwise numerical analysis showed that the effects on calibration performance due to the inclusion of different data types in the objective function could be inextricably linked. Thus, caution should be taken when adding or removing data from an aggregated objective function.
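A hedged sketch of the regularized, aggregated-objective calibration idea (generic Tikhonov regularization with weighted data groups; the weights, names, and toy model are ours, not the study's specific procedure):

```python
import numpy as np
from scipy.optimize import least_squares

def calibrate(model, theta0, data_groups, lam=1.0):
    """Regularized calibration against an aggregated multiobjective function.

    model : callable theta -> dict of simulated series keyed like data_groups.
    data_groups : {name: (observed array, weight)} entering one aggregated
                  objective; lam scales a Tikhonov penalty pulling theta
                  toward the prior estimate theta0.
    """
    def residuals(theta):
        sim = model(theta)
        parts = [w * (sim[k] - obs) for k, (obs, w) in data_groups.items()]
        parts.append(np.sqrt(lam) * (theta - theta0))   # regularization term
        return np.concatenate(parts)

    return least_squares(residuals, theta0).x

# Toy usage: a 2-parameter linear "model" calibrated on two data types.
t = np.linspace(0, 1, 50)
model = lambda th: {"flow": th[0] * t, "baseflow": th[1] + 0 * t}
obs = {"flow": (0.7 * t + 0.01, 1.0), "baseflow": (0.2 + 0 * t, 0.5)}
print(calibrate(model, np.array([1.0, 0.0]), obs, lam=0.1))
```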
Abstract:
The flow of two immiscible fluids through a porous medium depends on the complex interplay between gravity, capillarity, and viscous forces. The interaction between these forces and the geometry of the medium gives rise to a variety of complex flow regimes that are difficult to describe using continuum models. Although a number of pore-scale models have been employed, a careful investigation of the macroscopic effects of pore-scale processes requires methods based on conservation principles in order to reduce the number of modeling assumptions. In this work we perform direct numerical simulations of drainage by solving the Navier-Stokes equations in the pore space and employing the Volume of Fluid (VOF) method to track the evolution of the fluid-fluid interface. After demonstrating that the method is able to deal with large viscosity contrasts and to model the transition from stable flow to viscous fingering, we focus on the macroscopic capillary pressure and compare different definitions of this quantity under quasi-static and dynamic conditions. We show that the difference between the intrinsic phase-average pressures, which is commonly used as the definition of the Darcy-scale capillary pressure, is subject to several limitations and is not accurate in the presence of viscous effects or trapping. In contrast, a definition based on the variation of the total surface energy provides an accurate estimate of the macroscopic capillary pressure. This definition, which links the capillary pressure to its physical origin, allows a better separation of viscous effects and does not depend on the presence of trapped fluid clusters.
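Schematically, and in generic notation (ours, not necessarily the paper's), the two definitions compared are the difference of intrinsic phase-averaged pressures and an energy-based pressure obtained from the variation of total surface energy with the volume of the invading (nonwetting) phase:

```latex
% Phase-average definition (common Darcy-scale choice):
\[
  p_c^{\mathrm{avg}} \;=\; \langle p_n \rangle^{n} - \langle p_w \rangle^{w}.
\]
% Energy-based definition: work per unit volume of invading fluid; here
% E_s is the total surface energy, \gamma the fluid-fluid interfacial
% tension, and A_{nw} the fluid-fluid interfacial area (solid-fluid
% contributions enter E_s through the contact angle):
\[
  p_c^{\mathrm{energy}} \;=\; \frac{\mathrm{d}E_s}{\mathrm{d}V_n}
  \;\approx\; \gamma\,\frac{\mathrm{d}A_{nw}}{\mathrm{d}V_n}.
\]
```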