998 results for calculation models
Abstract:
We calculate the relic abundance of mixed axion/neutralino cold dark matter which arises in R-parity conserving supersymmetric (SUSY) models wherein the strong CP problem is solved by the Peccei-Quinn (PQ) mechanism with a concomitant axion/saxion/axino supermultiplet. By numerically solving the coupled Boltzmann equations, we include the combined effects of (1) thermal axino production with cascade decays to a neutralino LSP, (2) thermal saxion production and production via coherent oscillations along with cascade decays and entropy injection, (3) thermal neutralino production and re-annihilation after both axino and saxion decays, (4) gravitino production and decay, and (5) axion production both thermally and via oscillations. For SUSY models with too high a standard neutralino thermal abundance, we find that the combined effect of SUSY PQ particles is not enough to lower the neutralino abundance to its measured value while at the same time respecting bounds on late-decaying neutral particles from BBN. However, models with a standard neutralino underabundance can now be allowed, with either neutralino or axion domination of dark matter; furthermore, these models allow the PQ breaking scale f_a to be pushed up into the 10^14-10^15 GeV range, which is where it is typically expected to lie in string theory models.
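The coupled network above is beyond a short example, but the single-species skeleton it builds on — a Boltzmann equation driving a comoving yield toward its equilibrium value — can be sketched as follows. The normalization of the equilibrium yield and the interaction strength `lam` are illustrative toy values, not taken from the paper.

```python
import math

def freeze_out_yield(lam=1.0e4, x_end=30.0, dx=1.0e-4):
    """Toy single-species freeze-out: integrate
        dY/dx = -(lam / x^2) * (Y^2 - Yeq^2)
    with x = m/T and an illustrative equilibrium yield
    Yeq(x) ~ x^(3/2) * exp(-x). The full calculation couples several
    such equations (axino, saxion, neutralino, gravitino, axion)."""
    def yeq(x):
        return 0.145 * x ** 1.5 * math.exp(-x)

    x, y = 1.0, yeq(1.0)
    while x < x_end:
        y += -lam / x ** 2 * (y * y - yeq(x) ** 2) * dx
        x += dx
    return y
```

At large x the yield departs from its exponentially falling equilibrium value and freezes out at a level set roughly by x_f/lam, which is the qualitative behaviour the coupled system above generalizes.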
Abstract:
Purpose - The purpose of this paper is to develop an efficient numerical algorithm for the self-consistent solution of the Schrödinger and Poisson equations in one-dimensional systems. The goal is to compute the charge-control and capacitance-voltage characteristics of quantum wire transistors. Design/methodology/approach - The paper presents a numerical formulation employing a non-uniform finite difference discretization scheme, in which the wavefunctions and electronic energy levels are obtained by solving the Schrödinger equation through the split-operator method, while a relaxation method in the FTCS ("Forward Time Centered Space") scheme is used to solve the two-dimensional Poisson equation. Findings - The numerical model is validated by taking previously published results as a benchmark and is then applied to yield the charge-control characteristics and the capacitance-voltage relationship for a split-gate quantum wire device. Originality/value - The paper helps to fulfill the need for C-V models of quantum wire devices. To do so, the authors implemented a straightforward calculation method for the two-dimensional electronic carrier density n(x,y). The formulation reduces the computational procedure to a much simpler problem, similar to the one-dimensional quantization case, significantly diminishing running time.
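The relaxation step for the Poisson part can be sketched as follows; the grid size, units, and zero-Dirichlet boundary conditions are illustrative, and the split-operator Schrödinger solver is omitted.

```python
import numpy as np

def relax_poisson_2d(rho, h, n_iter=2000):
    """Solve laplacian(phi) = -rho on a uniform grid with phi = 0 on the
    boundary, by FTCS-style relaxation: each sweep replaces an interior
    node with the average of its four neighbours plus the source term."""
    phi = np.zeros_like(rho, dtype=float)
    for _ in range(n_iter):
        phi[1:-1, 1:-1] = 0.25 * (phi[2:, 1:-1] + phi[:-2, 1:-1] +
                                  phi[1:-1, 2:] + phi[1:-1, :-2] +
                                  h * h * rho[1:-1, 1:-1])
    return phi
```

In the self-consistent loop, the potential returned here would feed back into the Schrödinger solve, iterating until the carrier density n(x,y) and the potential stop changing.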
Abstract:
This study aims to compare and validate two soil-vegetation-atmosphere-transfer (SVAT) schemes: TERRA-ML and the Community Land Model (CLM). Both SVAT schemes are run in standalone mode (decoupled from an atmospheric model) and forced with meteorological in-situ measurements obtained at several tropical African sites. Model performance is quantified by comparing simulated sensible and latent heat fluxes with eddy-covariance measurements. Our analysis indicates that the Community Land Model corresponds more closely to the micrometeorological observations, reflecting the advantages of the higher model complexity and physical realism. Deficiencies in TERRA-ML are addressed and its performance is improved: (1) adjusting input data (root depth) to region-specific values (tropical evergreen forest) resolves dry-season underestimation of evapotranspiration; (2) adjusting the leaf area index and albedo (depending on hard-coded model constants) resolves overestimations of both latent and sensible heat fluxes; and (3) an unrealistic flux partitioning caused by overestimated superficial water contents is reduced by adjusting the hydraulic conductivity parameterization. CLM is by default more versatile in its global application on different vegetation types and climates. On the other hand, with its lower degree of complexity, TERRA-ML is much less computationally demanding, which leads to faster calculation times in a coupled climate simulation.
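Model performance against eddy-covariance fluxes is typically summarized with simple skill scores; below is a minimal sketch of two such scores (the exact metrics used in the study are not specified here), with `None` standing in for gaps in the observational record.

```python
import math

def rmse_and_bias(model, obs):
    """RMSE and mean bias of simulated vs observed fluxes (e.g. W m^-2),
    skipping gaps in the eddy-covariance record (None values)."""
    pairs = [(m, o) for m, o in zip(model, obs) if o is not None]
    bias = sum(m - o for m, o in pairs) / len(pairs)
    rmse = math.sqrt(sum((m - o) ** 2 for m, o in pairs) / len(pairs))
    return rmse, bias
```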
Abstract:
The lateral characteristics of tires, in terms of lateral forces as a function of sideslip angle, are a focal point in the prediction of ground loads and of aircraft ground handling behavior. However, tests to validate such coefficients are not mandatory to obtain Aircraft Type Certification, and so they are not available for ATR tires. Nevertheless, some analytical values are implemented in ATR calculation codes (the Flight Qualities and Loads in-house numerical codes). Hence, the goal of my work is to further investigate and validate lateral tire characteristics by means of: exploitation and re-parameterization of existing tests on NLG tires, implementation of an easy-to-handle model based on DFDR parameters to compute sideslip angles, application of this model to compute lateral loads in existing flight tests and incident cases, and analysis of the results. The last part of this work is dedicated to the preliminary study of a methodology for a test to retrieve lateral tire loads during ground turning with minimum requirements in terms of aircraft test instrumentation. This represents the basis for future work.
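As a point of reference for such re-parameterizations, the simplest lateral tire model is linear in sideslip angle with a friction saturation; the cornering stiffness and friction limit below are illustrative placeholders, not ATR values or the in-house model.

```python
import math

def lateral_force(alpha_deg, c_alpha=1.0e5, f_max=1.0e4):
    """Lateral tire force Fy = C_alpha * alpha (alpha in radians),
    clipped at the friction limit f_max. A first approximation only;
    measured tire curves (e.g. Pacejka-type fits) become nonlinear
    well before saturation."""
    fy = c_alpha * math.radians(alpha_deg)
    return max(-f_max, min(f_max, fy))
```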
Abstract:
In this work we address the problem of finding efficient and reliable analytical approximation formulas for the calculation of forward implied volatility in LSV models, a problem which reduces to the calculation of option prices as an expansion around the price of the same financial asset under a Black-Scholes dynamic. Our approach involves an expansion of the differential operator whose solution represents the price under local stochastic volatility dynamics. Further calculations then allow us to obtain an expansion of the implied volatility without the aid of any special or computationally expensive functions, yielding explicit formulas that are fast to calculate while remaining as accurate as possible.
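The expansions themselves are not reproduced here, but the benchmark they are measured against — recovering Black-Scholes implied volatility from a price, usually by root finding — can be sketched as follows.

```python
import math

def bs_call(s, k, t, r, sigma):
    """Black-Scholes European call price."""
    d1 = (math.log(s / k) + (r + 0.5 * sigma ** 2) * t) / (sigma * math.sqrt(t))
    d2 = d1 - sigma * math.sqrt(t)
    cdf = lambda x: 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))
    return s * cdf(d1) - k * math.exp(-r * t) * cdf(d2)

def implied_vol(price, s, k, t, r, sigma=0.2, tol=1e-10):
    """Invert bs_call for sigma with Newton's method; the analytical
    expansions discussed above aim to replace this iteration with
    explicit closed-form approximations."""
    for _ in range(100):
        diff = bs_call(s, k, t, r, sigma) - price
        if abs(diff) < tol:
            break
        d1 = (math.log(s / k) + (r + 0.5 * sigma ** 2) * t) / (sigma * math.sqrt(t))
        vega = s * math.sqrt(t) * math.exp(-0.5 * d1 * d1) / math.sqrt(2.0 * math.pi)
        sigma -= diff / vega
    return sigma
```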
Abstract:
Complete basis set and Gaussian-n methods were combined with Barone and Cossi's implementation of the polarizable conductor model (CPCM) continuum solvation methods to calculate pKa values for six carboxylic acids. Four different thermodynamic cycles were considered in this work. An experimental value of −264.61 kcal/mol for the free energy of solvation of H+, ΔGs(H+), was combined with a value for Ggas(H+) of −6.28 kcal/mol, to calculate pKa values with cycle 1. The complete basis set gas-phase methods used to calculate gas-phase free energies are very accurate, with mean unsigned errors of 0.3 kcal/mol and standard deviations of 0.4 kcal/mol. The CPCM solvation calculations used to calculate condensed-phase free energies are slightly less accurate than the gas-phase models, and the best method has a mean unsigned error and standard deviation of 0.4 and 0.5 kcal/mol, respectively. Thermodynamic cycles that include an explicit water in the cycle are not accurate when the free energy of solvation of a water molecule is used, but appear to become accurate when the experimental free energy of vaporization of water is used. This apparent improvement is an artifact of the standard state used in the calculation. Geometry relaxation in solution does not improve the results when using these latter cycles. The use of cycle 1 and the complete basis set models combined with the CPCM solvation methods yielded pKa values accurate to less than half a pKa unit. © 2001 John Wiley & Sons, Inc. Int J Quantum Chem, 2001
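The bookkeeping of cycle 1 is simple: the aqueous deprotonation free energy is assembled from the gas-phase reaction free energy and three solvation terms, then converted with pKa = ΔG_aq / (RT ln 10). Below is a sketch with illustrative energies (not the paper's computed values) for a generic carboxylic acid.

```python
import math

R = 1.987204e-3   # gas constant, kcal/(mol K)
T = 298.15        # K

def pka_cycle1(dg_gas, dgs_anion, dgs_acid, dgs_proton=-264.61):
    """Cycle 1: HA(aq) -> A-(aq) + H+(aq).
    dG_aq = dG_gas + dGs(A-) + dGs(H+) - dGs(HA), all in kcal/mol,
    then pKa = dG_aq / (RT ln 10)."""
    dg_aq = dg_gas + dgs_anion + dgs_proton - dgs_acid
    return dg_aq / (R * T * math.log(10.0))
```

With illustrative inputs dg_gas = 341.1, dgs_anion = -77.0, dgs_acid = -6.9 (all kcal/mol), this yields a pKa near 4.7, in the range expected for a simple carboxylic acid.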
Abstract:
Context. Planet formation models have been developed over the past years to try to reproduce what has been observed of both the solar system and extrasolar planets. Some of these models have partially succeeded, but they focus on massive planets and, for the sake of simplicity, exclude planets belonging to planetary systems. However, more and more planets are now found in planetary systems. This tendency, which is a result of radial velocity, transit, and direct imaging surveys, seems to be even more pronounced for low-mass planets. These new observations require improving planet formation models, including new physics, and considering the formation of systems. Aims: In a recent series of papers, we have presented some improvements in the physics of our models, focusing in particular on the internal structure of forming planets and on the computation of the excitation state of planetesimals and their resulting accretion rate. In this paper, we focus on the concurrent formation of more than one planet in the same protoplanetary disc and show the effect of this multiplicity on the architecture and composition of the resulting systems. Methods: We used an N-body calculation including collision detection to compute the orbital evolution of a planetary system. Moreover, we describe the effects of competition for the accretion of gas and solids, as well as of gravitational interactions between planets. Results: We show that the masses and semi-major axes of planets are modified by both the effect of competition and gravitational interactions. We also present the effect of the assumed number of forming planets in the same system (a free parameter of the model), as well as the effect of inclination and eccentricity damping. We find that the fraction of ejected planets increases from nearly 0 to 8% as we change the number of embryos we seed the system with from 2 to 20 planetary embryos.
Moreover, our calculations show that, when considering planets more massive than ~5 M⊕, simulations with 10 or 20 planetary embryos statistically give the same results in terms of mass function and period distribution.
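A minimal version of such an orbital integration with collision detection might look like the sketch below; the actual model's integrator, units, and merging prescription are not specified in the abstract, so this is only the generic skeleton (direct summation, kick-drift-kick leapfrog, pairwise overlap test).

```python
import numpy as np

def accelerations(pos, mass, G=1.0):
    """Direct-summation gravitational accelerations, O(N^2)."""
    acc = np.zeros_like(pos)
    for i in range(len(mass)):
        for j in range(len(mass)):
            if i != j:
                d = pos[j] - pos[i]
                acc[i] += G * mass[j] * d / np.linalg.norm(d) ** 3
    return acc

def leapfrog_step(pos, vel, mass, dt, G=1.0):
    """One kick-drift-kick leapfrog step (symplectic, good long-term
    energy behaviour for orbital problems)."""
    vel = vel + 0.5 * dt * accelerations(pos, mass, G)
    pos = pos + dt * vel
    vel = vel + 0.5 * dt * accelerations(pos, mass, G)
    return pos, vel

def colliding_pairs(pos, radii):
    """Embryo pairs closer than the sum of their physical radii; a real
    code would then merge each pair conserving mass and momentum."""
    n = len(pos)
    return [(i, j) for i in range(n) for j in range(i + 1, n)
            if np.linalg.norm(pos[i] - pos[j]) < radii[i] + radii[j]]
```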
Abstract:
Introduction Commercial treatment planning systems employ a variety of dose calculation algorithms to plan and predict the dose distributions a patient receives during external beam radiation therapy. Traditionally, the Radiological Physics Center (RPC) has relied on measurements to assure that institutions participating in National Cancer Institute sponsored clinical trials administer radiation in doses that are clinically comparable to those of other participating institutions. To complement the effort of the RPC, an independent dose calculation tool needs to be developed that will enable a generic method to determine patient dose distributions in three dimensions and to perform retrospective analysis of radiation delivered to patients who enrolled in past clinical trials. Methods A multi-source model representing output for Varian 6 MV and 10 MV photon beams was developed and evaluated. The Monte Carlo algorithm known as the Dose Planning Method (DPM) was used to perform the dose calculations. The dose calculations were compared to measurements made in a water phantom and in anthropomorphic phantoms. Intensity modulated radiation therapy and stereotactic body radiation therapy techniques were used with the anthropomorphic phantoms. Finally, past patient treatment plans were selected and recalculated using DPM and contrasted against a commercial dose calculation algorithm. Results The multi-source model was validated for the Varian 6 MV and 10 MV photon beams. The benchmark evaluations demonstrated the ability of the model to accurately calculate dose for the Varian 6 MV and Varian 10 MV source models. The patient calculations showed that the model was reproducible in determining dose under conditions similar to those of the benchmark tests. Conclusions The dose calculation tool, which relies on a multi-source model approach and uses the DPM code to calculate dose, was developed, validated, and benchmarked for the Varian 6 MV and 10 MV photon beams. Several patient dose distributions were contrasted against a commercial algorithm to provide a proof of principle for its use as an application in monitoring clinical trial activity.
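Benchmark comparisons of this kind reduce to point-by-point agreement statistics between calculated and measured dose; the sketch below shows the simplest such statistic (the RPC's actual acceptance criteria, e.g. distance-to-agreement or gamma analysis, are more elaborate).

```python
def pass_rate(calculated, measured, rel_tol=0.05):
    """Fraction of measurement points at which the calculated dose
    agrees with the measured dose within a relative tolerance."""
    hits = sum(1 for c, m in zip(calculated, measured)
               if abs(c - m) <= rel_tol * abs(m))
    return hits / len(measured)
```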
Abstract:
The comparison of radiotherapy techniques regarding secondary cancer risk has yielded contradictory results, possibly stemming from the many different approaches used to estimate risk. The purpose of this study was to make a comprehensive evaluation of the different available risk models applied to detailed whole-body dose distributions computed by Monte Carlo for various breast radiotherapy techniques, including conventional open tangents, 3D conformal wedged tangents and hybrid intensity modulated radiation therapy (IMRT). First, organ-specific linear risk models developed by the International Commission on Radiological Protection (ICRP) and the Biological Effects of Ionizing Radiation (BEIR) VII committee were applied to mean doses for remote organs only and for all solid organs. Then, different general non-linear risk models were applied to the whole-body dose distribution. Finally, organ-specific non-linear risk models for the lung and breast were used to assess the secondary cancer risk for these two specific organs. A total of 32 different calculated absolute risks resulted in a broad range of values (between 0.1% and 48.5%), underlining the large uncertainties in absolute risk calculation. The ratio of risk between two techniques has often been proposed as a more robust assessment than the absolute risk. We found that the ratio of risk between two techniques could also vary substantially across the different approaches to risk estimation. Sometimes the ratio of risk between two techniques would span values both smaller and larger than one, which translates into inconsistent conclusions about which technique carries the higher risk. We found, however, that the hybrid IMRT technique resulted in a systematic reduction of risk compared to the other techniques investigated, even though the magnitude of this reduction varied substantially with the different approaches investigated.
Based on the epidemiological data available, a reasonable approach to risk estimation would be to use organ-specific non-linear risk models applied to the dose distributions of organs within or near the treatment fields (lungs and contralateral breast in the case of breast radiotherapy) as the majority of radiation-induced secondary cancers are found in the beam-bordering regions.
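The first, organ-specific linear approach amounts to multiplying mean organ doses by nominal risk coefficients and summing; the sketch below uses placeholder coefficients, not actual ICRP or BEIR VII values.

```python
def linear_excess_risk(organ_doses_sv, risk_per_sv):
    """Linear no-threshold estimate: total excess cancer risk as the
    dose-weighted sum of organ-specific risk coefficients
    (doses in Sv, coefficients in risk per Sv)."""
    return sum(dose * risk_per_sv[organ]
               for organ, dose in organ_doses_sv.items())
```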
Abstract:
BACKGROUND The success of an intervention to prevent the complications of an infection is influenced by the natural history of the infection. Assumptions about the temporal relationship between infection and the development of sequelae can affect the predicted effect size of an intervention and the sample size calculation. This study investigates how a mathematical model can be used to inform sample size calculations for a randomised controlled trial (RCT), using the example of Chlamydia trachomatis infection and pelvic inflammatory disease (PID). METHODS We used a compartmental model to imitate the structure of a published RCT. We considered three different processes for the timing of PID development, in relation to the initial C. trachomatis infection: immediate, constant throughout, or at the end of the infectious period. For each process we assumed that, of all women infected, the same fraction would develop PID in the absence of an intervention. We examined two sets of assumptions used to calculate the sample size in a published RCT that investigated the effect of chlamydia screening on PID incidence. We also investigated the influence of the natural history parameters of chlamydia on the required sample size. RESULTS The assumed event rates and effect sizes used for the sample size calculation implicitly determined the temporal relationship between chlamydia infection and PID in the model. Even small changes in the assumed PID incidence and relative risk (RR) led to considerable differences in the hypothesised mechanism of PID development. The RR and the sample size needed per group also depend on the natural history parameters of chlamydia. CONCLUSIONS Mathematical modelling helps to understand the temporal relationship between an infection and its sequelae and can show how uncertainties about natural history parameters affect sample size calculations when planning an RCT.
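For concreteness, the standard normal-approximation sample size for comparing two proportions — the calculation that the assumed event rates and relative risk feed into — can be sketched as follows; the event rates in the usage example are illustrative, not the published trial's.

```python
import math
from statistics import NormalDist

def sample_size_two_proportions(p1, p2, alpha=0.05, power=0.8):
    """Per-group n for detecting a difference between two event
    proportions with a two-sided test (normal approximation)."""
    z_a = NormalDist().inv_cdf(1.0 - alpha / 2.0)
    z_b = NormalDist().inv_cdf(power)
    p_bar = (p1 + p2) / 2.0
    num = (z_a * math.sqrt(2.0 * p_bar * (1.0 - p_bar)) +
           z_b * math.sqrt(p1 * (1.0 - p1) + p2 * (1.0 - p2))) ** 2
    return math.ceil(num / (p1 - p2) ** 2)
```

For example, detecting a halving of PID incidence from an assumed 3% to 1.5% (RR = 0.5) at 80% power requires roughly 1500 women per group, which is why the assumed incidence and RR dominate the trial's size.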
Abstract:
67P/Churyumov-Gerasimenko (67P) is a Jupiter-family comet and the object of investigation of the European Space Agency mission Rosetta. This report presents the first full 3D simulation results of 67P’s neutral gas coma. In this study we include results from a direct simulation Monte Carlo method, a hydrodynamic code, and a purely geometric calculation which computes the total illuminated surface area on the nucleus. All models include the triangulated 3D shape model of 67P as well as realistic illumination and shadowing conditions. The basic concept is the assumption that these illumination conditions on the nucleus are the main driver for the gas activity of the comet. As a consequence, the total production rate of 67P varies as a function of solar insolation. The best agreement between the model and the data is achieved when gas fluxes on the night side are in the range of 7% to 10% of the maximum flux, accounting for contributions from the most volatile components. To validate the output of our numerical simulations we compare the results of all three models to in situ gas number density measurements from the ROSINA COPS instrument. We are able to reproduce the overall features of these local neutral number density measurements of ROSINA COPS for the time period between early August 2014 and January 1 2015 with all three models. Some details in the measurements are not reproduced and warrant further investigation and refinement of the models. However, the overall assumption that illumination conditions on the nucleus are at least an important driver of the gas activity is validated by the models. According to our simulation results we find the total production rate of 67P to be constant between August and November 2014 with a value of about 1 × 10²⁶ molecules s⁻¹.
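The purely geometric model reduces to summing projected facet areas on the sunlit side of the triangulated shape model. The sketch below ignores self-shadowing (one facet occluding another), which the actual calculation includes via the realistic illumination conditions described above.

```python
import numpy as np

def illuminated_area(normals, areas, sun_dir):
    """Insolation-weighted area of a triangulated nucleus: the sum of
    facet_area * cos(solar incidence) over facets that face the Sun.
    Occlusion of one facet by another is NOT handled in this sketch."""
    s = np.asarray(sun_dir, dtype=float)
    s = s / np.linalg.norm(s)
    mu = np.asarray(normals, dtype=float) @ s   # cos(incidence) per facet
    return float(np.sum(np.asarray(areas) * np.clip(mu, 0.0, None)))
```

Scaling the total production rate with this quantity as the comet rotates and moves is the "illumination-driven activity" assumption tested against the ROSINA COPS densities.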
Abstract:
In this article, a tool for simulating the channel impulse response for indoor visible light communications using 3D computer-aided design (CAD) models is presented. The simulation tool is based on a previous Monte Carlo ray-tracing algorithm for indoor infrared channel estimation, but includes wavelength response evaluation. The 3D scene, or simulation environment, can be defined using any CAD software in which the user specifies, in addition to the setting geometry, the reflection characteristics of the surface materials as well as the structures of the emitters and receivers involved in the simulation. In addition, in an effort to improve the computational efficiency, two optimizations are proposed. The first consists of dividing the setting into cubic regions of equal size, which offers a calculation improvement of approximately 50% compared to not dividing the 3D scene into sub-regions. The second involves the parallelization of the simulation algorithm, which provides a computational speed-up proportional to the number of processors used.
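The first optimization — bucketing the scene into equal cubic regions — is a classic uniform-grid acceleration structure; below is a sketch of the binning step only (cell count and the point-based scene representation are illustrative, and the ray-traversal logic is omitted).

```python
import numpy as np

def build_uniform_grid(points, bbox_min, bbox_max, n_cells):
    """Assign each scene point (e.g. a surface-patch centroid) to one of
    n_cells^3 equal cubic regions, so that a traced ray only tests the
    patches in the cells it actually traverses instead of the whole
    scene."""
    lo = np.asarray(bbox_min, dtype=float)
    hi = np.asarray(bbox_max, dtype=float)
    cell = (hi - lo) / n_cells
    grid = {}
    for idx, p in enumerate(np.asarray(points, dtype=float)):
        key = tuple(np.minimum(((p - lo) / cell).astype(int), n_cells - 1))
        grid.setdefault(key, []).append(idx)
    return grid
```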
Abstract:
Civil buildings are not specifically designed to support blast loads, but it is important to take these potential scenarios into account because of their catastrophic effects on people and structures. A practical way to consider explosions on reinforced concrete structures is necessary. With this objective, we propose a methodology to evaluate blast loads on large concrete buildings, using the LS-DYNA code for calculation, with Lagrangian finite elements and explicit time integration. The methodology has three steps. First, individual structural elements of the building, such as columns and slabs, are studied using continuum 3D element models subjected to blast loads. In these models reinforced concrete is represented with high precision, using advanced material models such as the CSCM_CONCRETE model and segregated rebars constrained within the continuum mesh. Regrettably, this approach cannot be used for large structures because of its excessive computational cost. Second, models based on structural elements are developed, using shell and beam elements. In these models concrete is represented using the CONCRETE_EC2 model and segregated rebars with an offset formulation, calibrated against the continuum element models from step one to obtain the same structural response: displacement, velocity, acceleration, damage and erosion. Third, the models based on structural elements are used to develop large models of complete buildings, which are used to study the global response of buildings subjected to blast loads and progressive collapse. This article describes the different techniques needed to properly calibrate the models based on structural elements, using shell and beam elements, in order to provide results of sufficient accuracy at moderate computational cost.
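Blast loads of the kind applied in such models are commonly idealized with a Friedlander overpressure history; a sketch follows with illustrative parameters (the article's actual load definitions are not given in the abstract, and the negative phase is neglected here).

```python
import math

def friedlander(t, p_peak, t_dur, b=1.0):
    """Friedlander overpressure at time t after blast arrival:
    p(t) = p_peak * (1 - t/t_dur) * exp(-b * t/t_dur) for 0 <= t <= t_dur,
    and zero afterwards. p_peak is the peak overpressure, t_dur the
    positive-phase duration, b the decay coefficient."""
    if t < 0.0 or t > t_dur:
        return 0.0
    return p_peak * (1.0 - t / t_dur) * math.exp(-b * t / t_dur)
```

A pressure-time curve of this form, applied over the loaded faces, is a typical input for the explicit time integration described above.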