982 results for Simulations, Quantum Models, Resonant Tunneling Diode
Abstract:
We used multiple sets of simulations, at both the atomistic and coarse-grained levels of resolution, to investigate the interaction and binding of α-tocopherol transfer protein (α-TTP) to phosphatidylinositol phosphate lipids (PIPs). Our calculations indicate that enrichment of membranes with such lipids facilitates membrane anchoring. Atomistic models suggest that PIP can be incorporated into the binding cavity of α-TTP and therefore confirm that this protein can work as a lipid exchanger between the endosome and the plasma membrane. Comparison of the atomistic models of the α-TTP-PIPs complex with membrane-bound α-TTP revealed different roles for the various basic residues composing the basic patch that is key for the protein/ligand interaction. These residues are of critical importance, as several point mutations at their positions lead to severe forms of ataxia with vitamin E deficiency (AVED) phenotypes. Specifically, R221 is the main residue responsible for the stabilization of the complex. R68 and R192 engage in strong interactions in only one of the two states, either the protein complex or the membrane complex, suggesting that the two residues alternate contact formation, thus facilitating lipid flipping from the membrane into the protein cavity during the lipid exchange process. Finally, R59 shows weaker interactions with PIPs, albeit with a clear preference for specific phosphorylation positions, hinting at a role in early membrane selectivity for the protein. Altogether, our simulations reveal significant atomistic-scale aspects of the interactions of α-TTP with the plasma membrane and with PIPs, providing clarifications on the mechanism of intracellular vitamin E trafficking and helping to establish the role of key residues in the functionality of α-TTP.
Abstract:
Simulations of supersymmetric field theories with spontaneously broken supersymmetry require, in addition to the ultraviolet regularisation, also an infrared one, due to the emergence of the massless Goldstino. The intricate interplay between ultraviolet and infrared effects towards the continuum and infinite volume limit demands careful investigation to avoid potential problems. In this paper – the second in a series of three – we present such an investigation for N=2 supersymmetric quantum mechanics formulated on the lattice in terms of bosonic and fermionic bonds. In one dimension, the bond formulation allows the system to be solved exactly, even at finite lattice spacing, through the construction and analysis of transfer matrices. In the present paper we elaborate on this approach and discuss a range of exact results for observables such as the Witten index, the mass spectra and Ward identities.
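The transfer-matrix route lends itself to a compact numerical illustration. The sketch below is a minimal, hypothetical example: the matrices T_bosonic and T_fermionic are stand-ins, not the actual bond-formulation transfer matrices of the paper, and the graded combination of their traces plays the role of a Witten-index-like quantity on a periodic lattice of L sites.

```python
import numpy as np

# Hypothetical 3x3 transfer matrices for the bosonic and fermionic sectors;
# in the bond formulation these would be built from the bond weights of the
# lattice action, not chosen by hand as here.
T_bosonic = np.array([[1.0, 0.3, 0.0],
                      [0.3, 0.8, 0.2],
                      [0.0, 0.2, 0.5]])
T_fermionic = np.array([[0.9, 0.2, 0.0],
                        [0.2, 0.7, 0.1],
                        [0.0, 0.1, 0.4]])

L = 16  # number of lattice sites (periodic boundary conditions)

# Sector partition functions Z_i = Tr(T_i^L); their graded difference
# Z0 - Z1 plays the role of the Witten index in this toy setup.
Z0 = np.trace(np.linalg.matrix_power(T_bosonic, L))
Z1 = np.trace(np.linalg.matrix_power(T_fermionic, L))
print("Z0 =", Z0, " Z1 =", Z1, " graded trace =", Z0 - Z1)
```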
Abstract:
This study compares gridded European seasonal series of surface air temperature (SAT) and precipitation (PRE) reconstructions with a regional climate simulation over the period 1500–1990. The area is analysed separately for nine subareas that represent the majority of the climate diversity in the European sector. In their spatial structure, an overall good agreement is found between the reconstructed and simulated climate features across Europe, supporting consistency in both products. Systematic biases between both data sets can be explained by a priori known deficiencies in the simulation. Simulations and reconstructions, however, largely differ in the temporal evolution of past climate for European subregions. In particular, the simulated anomalies during the Maunder and Dalton minima show a stronger response to changes in the external forcings than recorded in the reconstructions. Although this disagreement is to some extent expected, given the prominent role of internal variability in the evolution of regional temperature and precipitation, a certain degree of agreement is a priori expected in variables directly affected by external forcings. In this sense, the inability of the model to reproduce a warm period similar to that recorded for the winters during the first decades of the 18th century in the reconstructions is indicative of fundamental limitations in the simulation that preclude reproducing exceptionally anomalous conditions. Despite these limitations, the simulated climate is a physically consistent data set, which can be used as a benchmark to analyse the consistency and limitations of gridded reconstructions of different variables. A comparison of the leading modes of SAT and PRE variability indicates that the reconstructions are too simplistic, especially for precipitation, which is associated with the linear statistical techniques used to generate them. The analysis of the co-variability between sea level pressure (SLP) and SAT and PRE in the simulation yields a result which resembles the canonical co-variability recorded in the observations for the 20th century. However, the same analysis for the reconstructions exhibits anomalously low correlations, which points towards a lack of dynamical consistency between independent reconstructions.
Ab initio simulations of the structure of thin water layers on defective anatase TiO₂ (101) surfaces
Abstract:
Temperature changes in Antarctica over the last millennium are investigated using proxy records, a set of simulations driven by natural and anthropogenic forcings, and one simulation with data assimilation. Over Antarctica, a long-term cooling trend in the annual mean is simulated during the period 1000–1850. The main contributor to this cooling trend is the volcanic forcing, with astronomical forcing playing a dominant role at the seasonal timescale. Since 1850, all the models produce an Antarctic warming in response to the increase in greenhouse gas concentrations. We present a composite of Antarctic temperature, calculated by averaging seven temperature records derived from isotope measurements in ice cores. This simple approach is supported by the coherency displayed between model results at these data grid points and Antarctic mean temperature. The composite shows a weak multi-centennial cooling trend during the pre-industrial period and a warming after 1850 that is broadly consistent with model results. In both data and simulations, large regional variations are superimposed on this common signal at decadal to centennial timescales. The model results appear spatially more consistent than the ice core records. We conclude that more records are needed to resolve the complex spatial distribution of Antarctic temperature variations during the last millennium.
Abstract:
67P/Churyumov-Gerasimenko (67P) is a Jupiter-family comet and the object of investigation of the European Space Agency mission Rosetta. This report presents the first full 3D simulation results of 67P’s neutral gas coma. In this study we include results from a direct simulation Monte Carlo method, a hydrodynamic code, and a purely geometric calculation which computes the total illuminated surface area on the nucleus. All models include the triangulated 3D shape model of 67P as well as realistic illumination and shadowing conditions. The basic concept is the assumption that these illumination conditions on the nucleus are the main driver for the gas activity of the comet. As a consequence, the total production rate of 67P varies as a function of solar insolation. The best agreement between the model and the data is achieved when gas fluxes on the night side are in the range of 7% to 10% of the maximum flux, accounting for contributions from the most volatile components. To validate the output of our numerical simulations we compare the results of all three models to in situ gas number density measurements from the ROSINA COPS instrument. We are able to reproduce the overall features of these local neutral number density measurements of ROSINA COPS for the time period between early August 2014 and January 1, 2015 with all three models. Some details in the measurements are not reproduced and warrant further investigation and refinement of the models. However, the overall assumption that illumination conditions on the nucleus are at least an important driver of the gas activity is validated by the models. According to our simulation results we find the total production rate of 67P to be constant between August and November 2014 with a value of about 1 × 10²⁶ molecules s⁻¹.
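The purely geometric part of such a model amounts to summing the local insolation over the facets of the shape model, with unlit facets held at a small fraction of the peak flux. The sketch below is schematic: it uses random facets instead of the triangulated 67P shape model, ignores facet-to-facet shadowing, and the night_fraction argument is a hypothetical stand-in for the 7% to 10% night-side flux quoted above.

```python
import numpy as np

def total_flux(normals, areas, sun_dir, night_fraction=0.08):
    """Sum a flux proportional to local insolation over all facets.

    normals : (N, 3) unit outward normals of the shape-model facets
    areas   : (N,) facet areas
    sun_dir : (3,) unit vector pointing toward the Sun
    night_fraction : flux on unlit facets as a fraction of the peak facet flux
    """
    mu = normals @ sun_dir               # cosine of the solar zenith angle
    lit = np.clip(mu, 0.0, None)         # unlit facets receive no direct insolation
    peak = lit.max() if lit.max() > 0 else 1.0
    flux = np.maximum(lit, night_fraction * peak)  # night-side floor
    return np.sum(flux * areas)

# Toy example: 1000 random facets on a unit sphere standing in for the nucleus.
rng = np.random.default_rng(0)
n = rng.normal(size=(1000, 3))
n /= np.linalg.norm(n, axis=1, keepdims=True)
a = np.full(1000, 4 * np.pi / 1000)
print(total_flux(n, a, np.array([1.0, 0.0, 0.0])))
```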
Abstract:
Osteoporotic proximal femur fractures are caused by low energy trauma, typically when falling on the hip from standing height. Finite element simulations, widely used to predict the fracture load of femora in a fall, usually include neither mass-related inertial effects nor the viscous part of bone's material behavior. The aim of this study was to elucidate if quasi-static non-linear homogenized finite element analyses can predict in vitro mechanical properties of proximal femora assessed in dynamic drop tower experiments. The case-specific numerical models of thirteen femora predicted the strength (R²=0.84, SEE=540 N, 16.2%), stiffness (R²=0.82, SEE=233 N/mm, 18.0%) and fracture energy (R²=0.72, SEE=3.85 J, 39.6%), and provided fair qualitative matches with the fracture patterns. The influence of material anisotropy was negligible for all predictions. These results suggest that quasi-static homogenized finite element analysis may be used to predict mechanical properties of proximal femora in the dynamic sideways fall situation.
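As a reminder of how such agreement metrics relate to the paired experimental and predicted values, the short sketch below computes one common form of the coefficient of determination and of the standard error of the estimate on illustrative, made-up numbers; note that conventions for the degrees of freedom in the SEE denominator differ between studies.

```python
import numpy as np

# Hypothetical measured vs. FE-predicted fracture loads in newtons.
measured = np.array([3200.0, 4100.0, 2800.0, 5000.0, 3600.0])
predicted = np.array([3000.0, 4300.0, 2600.0, 5200.0, 3500.0])

residuals = measured - predicted
ss_res = np.sum(residuals**2)
ss_tot = np.sum((measured - measured.mean())**2)

r2 = 1.0 - ss_res / ss_tot                   # coefficient of determination
see = np.sqrt(ss_res / (len(measured) - 2))  # one common SEE convention (n - 2)
print(f"R^2 = {r2:.2f}, SEE = {see:.0f} N")
```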
Abstract:
We present results on the nucleon scalar, axial, and tensor charges as well as on the momentum fraction, and the helicity and transversity moments. The pion momentum fraction is also presented. The computation of these key observables is carried out using lattice QCD simulations at a physical value of the pion mass. The evaluation is based on gauge configurations generated with two degenerate sea quarks of twisted mass fermions with a clover term. We investigate excited-state contributions with the nucleon quantum numbers by analyzing three sink-source time separations. We find that excited states contribute significantly for the scalar charge, and to a lesser degree for the nucleon momentum fraction and helicity moment. Our result for the nucleon axial charge agrees with the experimental value. Furthermore, we predict a value of 1.027(62) in the $\overline{\mathrm{MS}}$ scheme at 2 GeV for the isovector nucleon tensor charge directly at the physical point. The pion momentum fraction is found to be $\langle x\rangle^{\pi^{\pm}}_{u-d}=0.214(15)(^{+12}_{-9})$ in the $\overline{\mathrm{MS}}$ scheme at 2 GeV.
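The excited-state analysis based on several sink-source separations typically relies on a two-state parametrization of the ratio of three-point to two-point functions; schematically (a standard form, not a formula quoted from the paper),

\[
R(t_s,t_{\mathrm{ins}}) \simeq g \,+\, c_1\, e^{-\Delta E\, t_{\mathrm{ins}}} \,+\, c_2\, e^{-\Delta E\,(t_s - t_{\mathrm{ins}})},
\]

where $g$ is the ground-state matrix element (e.g. a charge or moment), $\Delta E$ the gap to the first excited state, $t_s$ the sink-source separation and $t_{\mathrm{ins}}$ the insertion time. Fitting this form to the available separations, or checking the convergence of the plateau values with increasing $t_s$, is what quantifies the excited-state contamination.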
Abstract:
We analyse the variability of the probability distribution of daily wind speed in wintertime over Northern and Central Europe in a series of global and regional climate simulations covering the last centuries, and in reanalysis products covering approximately the last 60 years. The focus of the study lies on identifying the link between the variations in the wind speed distribution and the regional near-surface temperature, the meridional temperature gradient and the North Atlantic Oscillation. Our main result is that the link between the daily wind distribution and the regional climate drivers is strongly model dependent. The global models tend to behave similarly, although they show some discrepancies. The two regional models also tend to behave similarly to each other, but surprisingly the results derived from each regional model deviate strongly from the results derived from its driving global model. In addition, considering multi-centennial timescales, we find in two global simulations a long-term tendency for the probability distribution of daily wind speed to widen through the last centuries. The cause for this widening is likely the effect of the deforestation prescribed in these simulations. We conclude that no clear systematic relationship between the mean temperature, the temperature gradient and/or the North Atlantic Oscillation and the daily wind speed statistics can be inferred from these simulations. The understanding of past and future changes in the distribution of wind speeds, and thus of wind speed extremes, will require a detailed analysis of the representation of the interaction between large-scale and small-scale dynamics.
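As a purely illustrative example of the kind of link being tested, one can fit a Weibull distribution to each winter's daily wind speeds and correlate the fitted scale parameter with a circulation index such as the NAO; the sketch below uses synthetic data and hypothetical variable names, not the simulations or reanalyses analysed in the paper.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Synthetic stand-ins: 60 winters of daily wind speeds plus a winter NAO index.
nao = rng.normal(size=60)
scales = []
for winter_nao in nao:
    # Hypothetical dependence of the Weibull scale on the NAO, plus sampling noise.
    daily_speeds = (8.0 + 0.5 * winter_nao) * rng.weibull(2.0, size=90)
    shape, loc, scale = stats.weibull_min.fit(daily_speeds, floc=0)
    scales.append(scale)

r, p = stats.pearsonr(nao, scales)
print(f"correlation between NAO and Weibull scale: r = {r:.2f}, p = {p:.3f}")
```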
Abstract:
Strategies are compared for the development of a linear regression model with stochastic (multivariate normal) regressor variables and the subsequent assessment of its predictive ability. Bias and mean squared error of four estimators of predictive performance are evaluated in simulated samples of 32 population correlation matrices. Models including all of the available predictors are compared with those obtained using selected subsets. The subset selection procedures investigated include two stopping rules, $C_p$ and $S_p$, each combined with an 'all possible subsets' or 'forward selection' of variables. The estimators of performance utilized include parametric ($\mathrm{MSEP}_m$) and non-parametric (PRESS) assessments in the entire sample, and two data-splitting estimates restricted to a random or balanced (Snee's DUPLEX) 'validation' half sample. The simulations were performed as a designed experiment, with population correlation matrices representing a broad range of data structures.

The techniques examined for subset selection do not generally result in improved predictions relative to the full model. Approaches using 'forward selection' result in slightly smaller prediction errors and less biased estimators of predictive accuracy than 'all possible subsets' approaches, but no differences are detected between the performances of $C_p$ and $S_p$. In every case, prediction errors of models obtained by subset selection in either of the half splits exceed those obtained using all predictors and the entire sample.

Only the random split estimator is conditionally (on $\beta$) unbiased; however, $\mathrm{MSEP}_m$ is unbiased on average and PRESS is nearly so in unselected (fixed form) models. When subset selection techniques are used, $\mathrm{MSEP}_m$ and PRESS always underestimate prediction errors, by as much as 27 percent (on average) in small samples. Despite their bias, the mean squared errors (MSE) of these estimators are at least 30 percent less than that of the unbiased random split estimator. The DUPLEX split estimator suffers from large MSE as well as bias, and seems of little value within the context of stochastic regressor variables.

To maximize predictive accuracy while retaining a reliable estimate of that accuracy, it is recommended that the entire sample be used for model development, and a leave-one-out statistic (e.g. PRESS) be used for assessment.
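The leave-one-out statistic recommended above does not require refitting the model n times: for ordinary least squares, the PRESS residuals follow from the ordinary residuals and the leverages. A minimal sketch on simulated data (not the 32 population correlation matrices of the study):

```python
import numpy as np

rng = np.random.default_rng(2)
n, p = 50, 4
X = np.column_stack([np.ones(n), rng.normal(size=(n, p))])   # intercept + predictors
beta = np.array([1.0, 0.5, -0.3, 0.0, 0.2])
y = X @ beta + rng.normal(scale=1.0, size=n)

# Ordinary least squares fit.
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
residuals = y - X @ coef

# Leverages h_ii are the diagonal of the hat matrix X (X'X)^{-1} X'.
H = X @ np.linalg.inv(X.T @ X) @ X.T
leverage = np.diag(H)

# PRESS = sum of squared leave-one-out prediction errors.
press = np.sum((residuals / (1.0 - leverage))**2)
print("PRESS =", press)
```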
Abstract:
My dissertation focuses on developing methods for gene-gene/environment interaction and imprinting-effect detection for human complex diseases and quantitative traits. It includes three sections: (1) generalizing the Natural and Orthogonal Interaction (NOIA) model, a coding technique originally developed for gene-gene (GxG) interaction, to reduced models as well; (2) developing a novel statistical approach that allows for modeling gene-environment (GxE) interactions influencing disease risk; and (3) developing a statistical approach for modeling genetic variants displaying parent-of-origin effects (POEs), such as imprinting.

In the past decade, genetic researchers have identified a large number of causal variants for human genetic diseases and traits by single-locus analysis, and interaction has now become a hot topic in the effort to search for the complex network between multiple genes or environmental exposures contributing to the outcome. Epistasis, also known as gene-gene interaction, is the departure from additive genetic effects of several genes on a trait, which means that the same alleles of one gene can display different genetic effects under different genetic backgrounds. In this study, we propose to implement the NOIA model for association studies along with interaction for human complex traits and diseases. We compare the performance of the new statistical models we developed and the usual functional model by both simulation study and real data analysis. Both simulation and real data analysis revealed higher power of the NOIA GxG interaction model for detecting both main genetic effects and interaction effects. Through application to a melanoma dataset, we confirmed the previously identified significant regions for melanoma risk at 15q13.1, 16q24.3 and 9p21.3. We also identified potential interactions with these significant regions that contribute to melanoma risk.

Based on the NOIA model, we developed a novel statistical approach that allows us to model effects from a genetic factor and a binary environmental exposure that jointly influence disease risk. Both simulation and real data analyses revealed higher power of the NOIA model for detecting both main genetic effects and interaction effects for both quantitative and binary traits. We also found that estimates of the parameters from logistic regression for binary traits are no longer statistically uncorrelated under the alternative model when there is an association. Applying our novel approach to a lung cancer dataset, we confirmed four SNPs in the 5p15 and 15q25 regions to be significantly associated with lung cancer risk in the Caucasian population: rs2736100, rs402710, rs16969968 and rs8034191. We also validated that rs16969968 and rs8034191 in the 15q25 region interact significantly with smoking in the Caucasian population. Our approach identified potential interactions of SNP rs2256543 in 6p21 with smoking in contributing to lung cancer risk.

Genetic imprinting is the most well-known cause of parent-of-origin effects (POEs), whereby a gene is differentially expressed depending on the parental origin of the same alleles. Genetic imprinting affects several human disorders, including diabetes, breast cancer, alcoholism, and obesity. This phenomenon has been shown to be important for normal embryonic development in mammals. Traditional association approaches ignore this important genetic phenomenon.
In this study, we propose a NOIA framework for a single-locus association study that estimates both main allelic effects and POEs. We develop statistical (Stat-POE) and functional (Func-POE) models, and demonstrate the conditions for orthogonality of the Stat-POE model. We conducted simulations for both quantitative and qualitative traits to evaluate the performance of the statistical and functional models with different levels of POEs. Our results showed that the newly proposed Stat-POE model, which ensures orthogonality of the variance components if Hardy-Weinberg Equilibrium (HWE) is satisfied or the minor and major allele frequencies are equal, had greater power for detecting the main allelic additive effect than the Func-POE model, which codes according to allelic substitutions, for both quantitative and qualitative traits. The power for detecting the POE was the same for the Stat-POE and Func-POE models under HWE for quantitative traits.
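The orthogonality property at the heart of the Stat-POE model can be illustrated numerically with a toy coding (an illustration of the general idea, not the authors' exact parametrization): when maternal and paternal alleles are transmitted independently, as under HWE, a centred additive score and a parent-of-origin contrast built from the ordered genotype are uncorrelated.

```python
import numpy as np

rng = np.random.default_rng(3)
p = 0.3            # minor allele frequency
N = 100_000

# Ordered genotypes: maternal and paternal alleles drawn independently (HWE).
maternal = rng.binomial(1, p, size=N)
paternal = rng.binomial(1, p, size=N)

additive = (maternal + paternal) - 2 * p    # centred allele count
poe = maternal - paternal                   # parent-of-origin contrast

# Under HWE the two design columns are (asymptotically) uncorrelated.
print("corr(additive, POE) =", np.corrcoef(additive, poe)[0, 1])
```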
Abstract:
Prevalent sampling is an efficient and focused approach to the study of the natural history of disease. Right-censored time-to-event data observed in prospective prevalent cohort studies are often subject to left-truncated sampling. Left-truncated samples are not randomly selected from the population of interest and carry a selection bias. Extensive studies have focused on estimating the unbiased distribution given left-truncated samples. However, in many applications the exact date of disease onset is not observed. For example, in an HIV infection study the exact HIV infection time is not observable; it is only known that the infection occurred between two observable dates. Meeting these challenges motivated our study. We propose parametric models to estimate the unbiased distribution of left-truncated, right-censored time-to-event data with uncertain onset times. We first consider data from length-biased sampling, a specific case of left-truncated sampling, and then extend the proposed method to general left-truncated sampling. With a parametric model, we construct the full likelihood given a biased sample with unobservable onset of disease. The parameters are estimated by maximizing the constructed likelihood, adjusting for the selection bias and the unobservable exact onset. Simulations are conducted to evaluate the finite-sample performance of the proposed methods. We apply the proposed method to an HIV infection study, estimating the unbiased survival function and covariance coefficients.
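To make the bias adjustment concrete, recall two standard ingredients of such likelihoods, stated here in generic notation rather than the authors' exact formulation: if the unbiased event-time density is $f(t;\theta)$ with survival function $S(t;\theta)$ and mean $\mu(\theta)$, then under length-biased sampling an observed duration has density

\[
g(t;\theta)=\frac{t\,f(t;\theta)}{\mu(\theta)},\qquad \mu(\theta)=\int_0^{\infty} u\,f(u;\theta)\,du,
\]

while under general left truncation at $a_i$ an uncensored observation contributes $f(t_i;\theta)/S(a_i;\theta)$ and a right-censored one contributes $S(c_i;\theta)/S(a_i;\theta)$ to the likelihood. An uncertain onset date can then be accommodated by integrating these contributions over the interval within which the onset is known to lie.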
Abstract:
The performance of the Hosmer-Lemeshow global goodness-of-fit statistic for logistic regression models was explored in a wide variety of conditions not previously fully investigated. Computer simulations, each consisting of 500 regression models, were run to assess the statistic in 23 different situations. The items which varied among the situations included the number of observations used in each regression, the number of covariates, the degree of dependence among the covariates, the combinations of continuous and discrete variables, and the generation of the values of the dependent variable for model fit or lack of fit.

The study found that the $C_g^*$ statistic was adequate in tests of significance for most situations. However, when testing data which deviate from a logistic model, the statistic has low power to detect such deviation. Although grouping of the estimated probabilities into quantiles from 8 to 30 was studied, the deciles-of-risk approach was generally sufficient. Subdividing the estimated probabilities into more than 10 quantiles when there are many covariates in the model is not necessary, despite theoretical reasons which suggest otherwise. Because it does not follow a $\chi^2$ distribution, the statistic is not recommended for use in models containing only categorical variables with a limited number of covariate patterns.

The statistic performed adequately when there were at least 10 observations per quantile. Large numbers of observations per quantile did not lead to incorrect conclusions that the model did not fit the data when it actually did. However, the statistic failed to detect lack of fit when it existed and should be supplemented with further tests for the influence of individual observations. Careful examination of the parameter estimates is also essential, since the statistic did not perform as desired when there was moderate to severe collinearity among covariates.

Two methods studied for handling tied values of the estimated probabilities made only a slight difference in conclusions about model fit. Neither method split observations with identical probabilities into different quantiles. Approaches which create equal size groups by separating ties should be avoided.
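For reference, the grouped Hosmer-Lemeshow statistic is usually computed, in its deciles-of-risk form, as

\[
\hat C=\sum_{g=1}^{G}\frac{(O_g-n_g\bar\pi_g)^2}{n_g\bar\pi_g(1-\bar\pi_g)},
\]

where the $n_g$ observations in group $g$ are defined by the ordered estimated probabilities, $O_g$ is the number of observed events in the group and $\bar\pi_g$ the mean estimated probability in the group; under a correctly specified model the statistic is referred to a chi-squared distribution with $G-2$ degrees of freedom.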
Abstract:
Sea surface temperatures and sea-ice extent are the most critical variables for evaluating the Southern Ocean paleoceanographic evolution in relation to the development of the global carbon cycle, atmospheric CO2 variability and ocean-atmosphere circulation. In contrast to the Atlantic and the Indian sectors, the Pacific sector of the Southern Ocean has been insufficiently investigated so far. To fill this gap of information we present diatom-based estimates of summer sea surface temperature (SSST) and winter sea-ice concentration (WSI) from 17 sites in the polar South Pacific to study the Last Glacial Maximum (LGM) at the EPILOG time slice (19,000-23,000 cal. years BP). The applied statistical methods are the Imbrie and Kipp Method (IKM) and the Modern Analog Technique (MAT), used to estimate temperature and sea-ice concentration, respectively. Our data display a distinct LGM east-west differentiation in SSST and WSI, with steeper latitudinal temperature gradients and a winter sea-ice edge located consistently north of the Pacific-Antarctic Ridge in the Ross Sea sector. In the eastern sector of our study area, which is governed by the Amundsen Abyssal Plain, the estimates yield weaker latitudinal SSST gradients together with a variable, extended winter sea-ice field. In this sector, sea ice may have sporadically reached the area of the present Subantarctic Front at its maximum LGM expansion. This pattern points to topographic forcing as the major control on the location of the frontal system and on sea-ice extent in the western Pacific sector, whereas atmospheric conditions such as the Southern Annular Mode and ENSO affected the oceanographic conditions in the eastern Pacific sector. Although it is difficult to determine the location and the physical nature of the frontal systems separating the glacial Southern Ocean water masses into different zones, we found a distinct temperature gradient in the latitudes straddled by the modern Southern Subtropical Front. Considering that the glacial temperatures north of this zone are similar to the modern ones, we suggest that it represents the Glacial Southern Subtropical Front (GSSTF), which delimits the zone of strongest glacial SSST cooling (>4 K) to its north. The southern boundary of the zone of maximum cooling is close to the glacial 4°C isotherm. This isotherm, which is in the range of SSST at the modern Antarctic Polar Front (APF), represents a circum-Antarctic feature and marks the northern edge of the glacial Antarctic Circumpolar Current (ACC). We also assume that a glacial front was established at the northern average winter sea-ice edge, comparable with the modern Southern Antarctic Circumpolar Current Front (SACCF). During the glacial, this front would have been located in the area of the modern APF. The northward deflection of colder-than-modern surface waters along the South American continent leads to a significant cooling of the glacial Humboldt Current surface waters (4-8 K), which affects the temperature regimes as far north as tropical latitudes. The glacial reduction of ACC temperatures may also result in significant cooling in the Atlantic and Indian sectors of the Southern Ocean, thus enhancing the thermal differentiation of the Southern Ocean and Antarctic continental cooling. Comparison with numerical temperature and sea-ice simulations for the last glacial shows that the majority of modern models overestimate summer and winter sea-ice cover and that only a few models reproduce our temperature data reasonably well.
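The Modern Analog Technique used for the sea-ice estimates admits a compact sketch: each fossil diatom assemblage is compared with a modern calibration set via the squared chord distance, and the environmental value is taken as the average over the k closest modern analogs. The snippet below uses random stand-in data and hypothetical array names, not the actual diatom calibration data set.

```python
import numpy as np

def mat_estimate(fossil, modern, modern_env, k=5):
    """Modern Analog Technique with squared chord distance.

    fossil     : (S,) relative abundances of S taxa in one fossil sample
    modern     : (M, S) relative abundances in M modern reference samples
    modern_env : (M,) environmental values (e.g. winter sea-ice concentration)
    """
    d = np.sum((np.sqrt(modern) - np.sqrt(fossil))**2, axis=1)  # squared chord distance
    best = np.argsort(d)[:k]                                    # k closest analogs
    return modern_env[best].mean()

# Toy example with 30 taxa and 200 modern reference samples.
rng = np.random.default_rng(4)
modern = rng.dirichlet(np.ones(30), size=200)
env = rng.uniform(0, 100, size=200)     # e.g. WSI in percent
fossil = rng.dirichlet(np.ones(30))
print("MAT estimate:", mat_estimate(fossil, modern, env))
```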
Abstract:
This paper shows how an Armington-Krugman-Melitz encompassing module based on Dixon and Rimmer (2012) can be calibrated, and clarifies that the initial levels of the two kinds of firm numbers, or the parameter values of the two kinds of fixed costs, that enter a Melitz-type specification can be set freely to any preferred value, just as in the cases where we derive quantities from given value data by assuming some of the initial prices to be unity. In consequence, only one kind of additional information, namely the shape parameter related to productivity, is required in order to incorporate Melitz-type monopolistic competition and heterogeneous firms into a standard applied general equilibrium model. For a Krugman-type specification, nothing additional is needed. This enables model builders in applied economics to fully enjoy the featured properties of the theoretical models invented by Krugman (1980) and Melitz (2003) in practical policy simulations at low cost.
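In most Melitz-type implementations the "shape parameter related to productivity" is the shape parameter of an assumed Pareto distribution of firm productivities (an assumption of this illustration, not a statement taken from the paper):

\[
G(\varphi)=1-\left(\frac{\varphi_{\min}}{\varphi}\right)^{\theta},\qquad \varphi\ge\varphi_{\min},
\]

while the calibration step mentioned above simply recovers quantities from value data as $q=v/p=v$ once the corresponding initial prices $p$ are normalised to one.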