80 results for Monte Carlo study
Abstract:
Report for the scientific sojourn carried out at Massachusetts General Hospital Cancer Center-Harvard Medical School, United States, from 2010 to 2011. The project aims to study the aggregation behavior of amphiphilic molecules in the continuous phase of highly concentrated emulsions, which can be used as templates for the synthesis of meso/macroporous materials. At this stage of the project, we have investigated the self-assembly of diblock and triblock surfactants in a confined geometry, surrounded by the droplets of the dispersed phase. These droplets limit the growth of the aggregates, deeply modify their orientation, and hence alter their spatial arrangement as compared to self-assembly taking place far from any boundary surface, that is, in the bulk. By performing Monte Carlo simulations, we have shown that the interface between the dispersed and continuous phases, as well as its shape, has a significant impact on the structural order of the resulting aggregates and hence on the potential applications of highly concentrated emulsions as reaction media, drug delivery systems, or templates for meso/macroporous materials. Due to the combined effect of symmetry breaking and morphological frustration, very intriguing structures, such as square columnar liquid crystals, twisted X-shaped aggregates, and helical phases of cylindrical aggregates, never observed in the bulk for the same model surfactant, have been found. The presence of other more conventional structures, such as micelles and cubic and hexagonal liquid crystals, formed at low and high amphiphilic concentrations, respectively, further enhances the interest of this already rich aggregation behavior.
Abstract:
This paper analyses the impact of using different correlation assumptions between lines of business when estimating the risk-based capital reserve, the Solvency Capital Requirement (SCR), under Solvency II regulations. A case study is presented and the SCR is calculated according to the Standard Model approach. Alternatively, the requirement is then calculated using an Internal Model based on a Monte Carlo simulation of the net underwriting result at a one-year horizon, with copulas being used to model the dependence between lines of business. To address the impact of these model assumptions on the SCR we conduct a sensitivity analysis. We examine changes in the correlation matrix between lines of business and address the choice of copulas. Drawing on aggregate historical data from the Spanish non-life insurance market between 2000 and 2009, we conclude that modifications of the correlation and dependence assumptions have a significant impact on SCR estimation.
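As a hedged illustration of the Internal Model approach described above, the sketch below simulates the aggregate result of two lines of business joined by a Gaussian copula with lognormal margins and reads off the SCR as the 99.5% one-year Value-at-Risk over the expected loss. All parameters (rho, sigmas, scales) are hypothetical, and the Gaussian copula is only one of the dependence choices the paper compares.

```python
import numpy as np
from scipy.stats import norm, lognorm

# Hypothetical parameters for two lines of business (illustrative only).
rng = np.random.default_rng(0)
n_sims = 100_000
rho = 0.5                       # assumed correlation between lines
sigmas = np.array([0.3, 0.4])   # lognormal shape parameter per line
scales = np.array([100., 60.])  # lognormal scale (median loss) per line

# Gaussian copula: draw correlated normals, map to uniforms, then to margins.
cov = np.array([[1.0, rho], [rho, 1.0]])
z = rng.multivariate_normal(np.zeros(2), cov, size=n_sims)
u = norm.cdf(z)
losses = lognorm.ppf(u, s=sigmas, scale=scales)  # broadcasts per column
total = losses.sum(axis=1)

# SCR as the 99.5% Value-at-Risk of the loss over its expectation.
scr = np.quantile(total, 0.995) - total.mean()
print(f"Simulated SCR: {scr:.1f}")
```

Swapping the copula step (e.g., a Student-t copula with tail dependence in place of the Gaussian) is precisely the kind of sensitivity the paper examines.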
Abstract:
In this paper we propose a Pyramidal Classification Algorithm which, together with an appropriate aggregation index, produces an indexed pseudo-hierarchy (in the strict sense) without inversions or crossings. The computer implementation of the algorithm makes it possible to carry out simulation tests by Monte Carlo methods in order to study the efficiency and sensitivity of the pyramidal methods of the Maximum, the Minimum, and UPGMA. The results shown in this paper may help in choosing between the three proposed classification methods, in order to obtain the classification that best fits the original structure of the population, provided we have a priori information concerning this structure.
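For illustration, a minimal Monte Carlo test of this kind can be sketched with standard hierarchical clustering as a stand-in for the pyramidal variants: scipy's "single", "complete", and "average" linkages correspond to the Minimum, Maximum, and UPGMA aggregation indices, and the cophenetic correlation serves as one possible measure of fit to the original structure. The data model (three Gaussian clusters) and all parameters are assumptions of the sketch.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, cophenet
from scipy.spatial.distance import pdist

# Monte Carlo comparison of three aggregation indices on synthetic data
# with a known cluster structure (three Gaussian clusters).
rng = np.random.default_rng(1)
methods = {"Minimum": "single", "Maximum": "complete", "UPGMA": "average"}
scores = {name: [] for name in methods}

for _ in range(200):  # Monte Carlo replicates
    centers = rng.uniform(-5, 5, size=(3, 2))
    X = np.vstack([c + rng.normal(0, 0.5, size=(30, 2)) for c in centers])
    d = pdist(X)
    for name, method in methods.items():
        Z = linkage(d, method=method)
        # Cophenetic correlation: how faithfully the dendrogram
        # reproduces the original pairwise distances.
        c, _ = cophenet(Z, d)
        scores[name].append(c)

for name, vals in scores.items():
    print(f"{name}: mean cophenetic correlation = {np.mean(vals):.3f}")
```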
Abstract:
We develop a general error analysis framework for the Monte Carlo simulation of densities for functionals in Wiener space. We also study variance reduction methods with the help of Malliavin derivatives. For this, we give some general heuristic principles which are applied to diffusion processes. A comparison with kernel density estimates is made.
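To make the setting concrete, the following sketch estimates the density of a simple Wiener functional, the terminal value of a diffusion, by plain Monte Carlo plus a kernel density estimate, i.e., the baseline the paper compares against. The SDE, its parameters, and the evaluation point are hypothetical, and the Malliavin-based variance reduction itself is not reproduced here.

```python
import numpy as np
from scipy.stats import gaussian_kde

# Kernel density estimate of the law of X_T for an SDE. Model is a
# hypothetical Ornstein-Uhlenbeck process: dX_t = -X_t dt + 0.5 dW_t, X_0 = 1.
rng = np.random.default_rng(2)
n_paths, n_steps, T = 50_000, 200, 1.0
dt = T / n_steps

x = np.full(n_paths, 1.0)
for _ in range(n_steps):  # Euler-Maruyama scheme
    x += -x * dt + 0.5 * np.sqrt(dt) * rng.standard_normal(n_paths)

# Kernel density estimate of the density of X_T.
kde = gaussian_kde(x)
print(f"Estimated density at x = 0.5: {kde(0.5)[0]:.3f}")
```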
Abstract:
Any electoral system has an electoral formula that converts vote proportions into parliamentary seats. Pre-electoral polls usually focus on estimating vote proportions and then applying the electoral formula to produce a forecast of the parliament's composition. We describe the problems arising from this approach: there is always a bias in the forecast. We study the origin of the bias and some methods to evaluate and to reduce it. We propose some rules to compute the sample size required for a given forecast accuracy. We show by Monte Carlo simulation the performance of the proposed methods using data from recent Spanish elections. We also propose graphical methods to visualize how electoral formulae and parliamentary forecasts work (or fail).
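A minimal sketch of the phenomenon: the abstract does not name a particular formula, but the d'Hondt rule used in Spanish congressional elections makes a concrete example. Because seat allocation is a step function of the vote shares, the expected forecast under sampling error differs from the seats the true shares would yield; the shares, district magnitude, and poll size below are hypothetical.

```python
import numpy as np

def dhondt(votes, n_seats):
    """Allocate seats by the d'Hondt highest-averages formula."""
    votes = np.asarray(votes, dtype=float)
    seats = np.zeros(len(votes), dtype=int)
    for _ in range(n_seats):
        quotients = votes / (seats + 1)
        seats[np.argmax(quotients)] += 1
    return seats

# Monte Carlo illustration of forecast bias: sampling error in estimated
# vote shares propagates nonlinearly through the formula.
rng = np.random.default_rng(3)
true_shares = np.array([0.42, 0.38, 0.12, 0.08])
n_seats, sample_size, n_sims = 10, 1000, 20_000

true_seats = dhondt(true_shares, n_seats)
forecasts = np.array([
    dhondt(rng.multinomial(sample_size, true_shares), n_seats)
    for _ in range(n_sims)
])
print("Seats from true shares:", true_seats)
print("Mean forecast seats:   ", forecasts.mean(axis=0).round(2))
```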
Abstract:
We study the statistical properties of three estimation methods for a model of learning that is often fitted to experimental data: quadratic deviation measures without unobserved heterogeneity, and maximum likelihood with and without unobserved heterogeneity. After discussing identification issues, we show that the estimators are consistent and provide their asymptotic distribution. Using Monte Carlo simulations, we show that ignoring unobserved heterogeneity can lead to seriously biased estimations in samples which have the typical length of actual experiments. Better small-sample properties are obtained if unobserved heterogeneity is introduced. That is, rather than estimating the parameters for each individual, the individual parameters are considered random variables, and the distribution of those random variables is estimated.
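The mechanism can be seen in miniature in the hedged sketch below (a toy binary-choice model, not the learning model estimated in the paper): individual parameters are drawn from a normal distribution, and a homogeneous fit of a single common parameter is biased for the population mean because the choice probability is a nonlinear function of the parameter.

```python
import numpy as np

# Bias from ignoring unobserved heterogeneity, in miniature. Each subject
# i chooses option 1 with probability expit(theta_i), theta_i ~ N(mu, sigma^2).
# A homogeneous model fits one common theta; because expit is nonlinear,
# its estimate is biased for mu (Jensen's inequality). Parameters hypothetical.
rng = np.random.default_rng(4)
mu, sigma = 1.0, 1.5
n_subjects, n_trials, n_sims = 50, 20, 500

def expit(z):
    return 1.0 / (1.0 + np.exp(-z))

pooled = []
for _ in range(n_sims):
    theta = rng.normal(mu, sigma, size=n_subjects)
    choices = rng.random((n_subjects, n_trials)) < expit(theta)[:, None]
    # Homogeneous MLE: logit of the grand mean choice frequency.
    p = choices.mean()
    pooled.append(np.log(p / (1 - p)))

print(f"True mu = {mu}, mean pooled estimate = {np.mean(pooled):.3f}")
```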
Abstract:
We study model selection strategies based on penalized empirical loss minimization. We point out a tight relationship between error estimation and data-based complexity penalization: any good error estimate may be converted into a data-based penalty function and the performance of the estimate is governed by the quality of the error estimate. We consider several penalty functions, involving error estimates on independent test data, empirical VC dimension, empirical VC entropy, and margin-based quantities. We also consider the maximal difference between the error on the first half of the training data and the second half, and the expected maximal discrepancy, a closely related capacity estimate that can be calculated by Monte Carlo integration. Maximal discrepancy penalty functions are appealing for pattern classification problems, since their computation is equivalent to empirical risk minimization over the training data with some labels flipped.
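As a concrete, simplified illustration of the quantity just described, the sketch below computes the maximal discrepancy of a class of one-dimensional threshold classifiers by direct enumeration; for richer classes one would instead run empirical risk minimization on the training set with the second-half labels flipped, as noted above. The data distribution and hypothesis class are assumptions of the sketch.

```python
import numpy as np

# Maximal discrepancy of 1-D threshold classifiers, computed exactly by
# enumerating every threshold and orientation.
rng = np.random.default_rng(5)
n = 200
x = rng.uniform(0, 1, n)
y = (rng.random(n) < np.where(x > 0.5, 0.8, 0.2)).astype(int)  # noisy labels

half = n // 2
best = -np.inf
for t in np.sort(x):
    for sign in (0, 1):  # h predicts 1 above or below the threshold
        pred = (x > t).astype(int) ^ sign
        err1 = np.mean(pred[:half] != y[:half])  # error on first half
        err2 = np.mean(pred[half:] != y[half:])  # error on second half
        best = max(best, err1 - err2)
print(f"Maximal discrepancy: {best:.3f}")
```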
Abstract:
In this paper we analyse, using Monte Carlo simulation, the possible consequences of incorrect assumptions about the true structure of the random-effects covariance matrix and the true correlation pattern of the residuals on the performance of an estimation method for nonlinear mixed models. The procedure under study is the well-known linearization method of Lindstrom and Bates (1990), implemented in the nlme library of S-Plus and R. Its performance is studied in terms of bias, mean square error (MSE), and true coverage of the associated asymptotic confidence intervals. Setting aside other criteria, such as the convenience of avoiding over-parameterised models, it seems worse to erroneously assume some structure than to assume no structure when the latter would be adequate.
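The simulation design can be summarised by a generic skeleton: generate data under a known truth, fit under a possibly misspecified assumption, and record bias, MSE, and confidence-interval coverage. The toy estimator below (a mean fitted as if residuals were independent when they actually follow an AR(1)-type correlation) merely stands in for the Lindstrom and Bates linearization fit; every parameter in it is hypothetical.

```python
import numpy as np

# Evaluate an estimator under residual-correlation misspecification.
rng = np.random.default_rng(6)
true_beta, rho, n_obs, n_sims = 2.0, 0.6, 50, 2000

# AR(1)-style residual covariance: the *true* correlation pattern.
idx = np.arange(n_obs)
cov = rho ** np.abs(np.subtract.outer(idx, idx))

estimates, covered = [], []
for _ in range(n_sims):
    y = true_beta + rng.multivariate_normal(np.zeros(n_obs), cov)
    beta_hat = y.mean()                        # fit assuming independence
    se_naive = y.std(ddof=1) / np.sqrt(n_obs)  # misspecified standard error
    estimates.append(beta_hat)
    covered.append(abs(beta_hat - true_beta) <= 1.96 * se_naive)

est = np.array(estimates)
print(f"bias = {est.mean() - true_beta:.4f}")
print(f"MSE = {np.mean((est - true_beta) ** 2):.4f}")
print(f"coverage of nominal 95% CI = {np.mean(covered):.3f}")  # well below 0.95
```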
Abstract:
We present a lattice model to study the equilibrium phase diagram of ordered alloys with one magnetic component that exhibit a low-temperature phase separation between paramagnetic and ferromagnetic phases. The model is constructed from the experimental facts observed in Cu3-xAlMnx and includes coupling between configurational and magnetic degrees of freedom appropriate for reproducing the low-temperature miscibility gap. The essential ingredient for the occurrence of such a coexistence region is the development of ferromagnetic order induced by the long-range atomic order of the magnetic component. A comparative study of both mean-field and Monte Carlo solutions is presented. Moreover, the model may enable the study of the structure of ferromagnetic domains embedded in the nonmagnetic matrix, which is relevant to phenomena such as magnetoresistance and paramagnetism.
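For readers unfamiliar with the Monte Carlo side of such a comparison, here is a minimal Metropolis sweep for a plain two-dimensional Ising ferromagnet; it is emphatically not the authors' coupled configurational-magnetic Hamiltonian, only a sketch of the sampling machinery, with J, T, and the lattice size chosen arbitrarily.

```python
import numpy as np

# Metropolis Monte Carlo for a 2-D Ising model with periodic boundaries.
rng = np.random.default_rng(7)
L, J, T, n_sweeps = 32, 1.0, 2.0, 500
spins = rng.choice([-1, 1], size=(L, L))

for _ in range(n_sweeps):
    for _ in range(L * L):  # one sweep = L*L attempted flips
        i, j = rng.integers(L, size=2)
        nbr_sum = (spins[(i + 1) % L, j] + spins[(i - 1) % L, j]
                   + spins[i, (j + 1) % L] + spins[i, (j - 1) % L])
        dE = 2.0 * J * spins[i, j] * nbr_sum  # energy cost of flipping
        if dE <= 0 or rng.random() < np.exp(-dE / T):
            spins[i, j] *= -1

print(f"magnetisation per site: {abs(spins.mean()):.3f}")
```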
Abstract:
A new arena for the dynamics of spacetime is proposed, in which the basic quantum variable is the two-point distance on a metric space. The scaling dimension (that is, the Kolmogorov capacity) in the neighborhood of each point then defines in a natural way a local concept of dimension. We study our model in the region of parameter space in which the resulting spacetime is not too different from a smooth manifold.
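A back-of-the-envelope version of the dimension estimate mentioned above: for points in a smooth d-dimensional region, the number of neighbours within radius r of a base point grows as r^d, so the slope of log N(x, r) against log r recovers a local dimension from two-point distances alone. The uniform 3-D point cloud below is purely illustrative.

```python
import numpy as np

# Local scaling dimension from pair distances around a base point.
rng = np.random.default_rng(8)
points = rng.uniform(-1, 1, size=(20_000, 3))
x = np.zeros(3)  # base point at the centre of the cloud

dists = np.linalg.norm(points - x, axis=1)
radii = np.linspace(0.1, 0.5, 9)
counts = np.array([(dists < r).sum() for r in radii])

# Fit log N(x, r) ~ d * log r; the slope estimates the local dimension.
slope, _ = np.polyfit(np.log(radii), np.log(counts), 1)
print(f"estimated local dimension: {slope:.2f}")  # close to 3
```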
Abstract:
The energy and structure of dilute hard- and soft-sphere Bose gases are systematically studied in the framework of several many-body approaches, such as the variational correlated theory, the Bogoliubov model, and the uniform limit approximation, valid in the weak-interaction regime. When possible, the results are compared with the exact diffusion Monte Carlo ones. Jastrow-type correlations provide a good description of the systems, both hard- and soft-sphere, if the hypernetted chain energy functional is freely minimized and the resulting Euler equation is solved. The study of the soft-sphere potentials confirms the appearance of a dependence of the energy on the shape of the potential at gas parameter values of x ~ 0.001. For quantities other than the energy, such as the radial distribution functions and the momentum distributions, the dependence appears at any value of x. The occurrence of a maximum in the radial distribution function, in the momentum distribution, and in the excitation spectrum is a natural effect of the correlations when x increases. The asymptotic behaviors of the functions characterizing the structure of the systems are also investigated. The uniform limit approach is very easy to implement and provides a good description of the soft-sphere gas. Its reliability improves as the interaction weakens.
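For context, the universality implied by a dependence on the gas parameter alone is conventionally expressed by the low-density expansion of the energy per particle (a standard result quoted here for orientation, not taken from this abstract), in which the interaction enters only through the s-wave scattering length a and x = ρa³:

```latex
\frac{E}{N} = \frac{2\pi\hbar^2 a \rho}{m}
\left[ 1 + \frac{128}{15\sqrt{\pi}} \sqrt{\rho a^3} + \cdots \right]
```

Shape-dependent corrections beyond this universal series are what the abstract reports becoming visible near x ~ 0.001.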
Abstract:
A final-state-effects formalism suitable for analyzing the high-momentum response of Fermi liquids is presented and used to study the dynamic structure function of liquid 3He. The theory, developed as a natural extension of the Gersch-Rodriguez formalism, incorporates Fermi statistics explicitly through a new additive term which depends on the semidiagonal two-body density matrix. The use of a realistic momentum distribution, calculated using the diffusion Monte Carlo method, and the inclusion of this additive correction allow for good agreement with available deep-inelastic neutron scattering data.
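For orientation, the deep-inelastic limit referred to above is conventionally organised around y-scaling: as the momentum transfer grows, the impulse approximation collapses the response onto a single-variable function of the momentum distribution (conventions vary across the literature; units with ℏ = 1 are assumed here):

```latex
S(q,\omega) \xrightarrow{\,q\to\infty\,} \frac{m}{q}\, J(Y),
\qquad
J(Y) = \int \frac{d^3p}{(2\pi)^3}\, n(\mathbf{p})\,
       \delta\!\left(Y - \mathbf{p}\cdot\hat{\mathbf{q}}\right),
\qquad
Y = \frac{m}{q}\left(\omega - \frac{q^2}{2m}\right)
```

The final-state-effects formalism of the paper supplies the corrections to this limiting behavior.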
Abstract:
We present a numerical study of classical particles diffusing on a solid surface. The particle motion is modeled by an underdamped Langevin equation with ordinary thermal noise. The particle-surface interaction is described by a periodic or a random two-dimensional potential. The model leads to a rich variety of transport regimes, some of which correspond to anomalous diffusion such as has recently been observed in experiments and Monte Carlo simulations. We show that this anomalous behavior is controlled by the friction coefficient, and we stress that it emerges naturally in a system described by ordinary canonical Maxwell-Boltzmann statistics.
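A minimal sketch of such a simulation, assuming a separable cosine substrate potential and illustrative parameters (the paper's potentials, friction values, and observables may differ):

```python
import numpy as np

# Underdamped Langevin dynamics on a 2-D periodic substrate potential,
# with the mean-square displacement as the transport diagnostic.
rng = np.random.default_rng(9)
n_part, n_steps, dt = 500, 20_000, 0.01
gamma, kT, m, U0 = 0.1, 0.2, 1.0, 1.0   # low friction, hypothetical values

pos = np.zeros((n_part, 2))
vel = rng.normal(0, np.sqrt(kT / m), size=(n_part, 2))

def force(r):
    # F = -grad U for U(x, y) = U0 * (cos x + cos y)
    return U0 * np.sin(r)

for _ in range(n_steps):  # Euler-Maruyama for the underdamped equation
    noise = rng.normal(size=(n_part, 2))
    vel += (force(pos) / m - gamma * vel) * dt \
           + np.sqrt(2 * gamma * kT / m) * np.sqrt(dt) * noise
    pos += vel * dt

msd = np.mean(np.sum(pos ** 2, axis=1))
print(f"MSD after t = {n_steps * dt:.0f}: {msd:.1f}")
```

Repeating the run while sweeping gamma (and replacing the cosine substrate by a random potential) reproduces the kind of regime comparison the abstract describes.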