915 results for models of computation
Abstract:
Here I develop a model of a radiative-convective atmosphere with both radiative and convective schemes highly simplified. The atmospheric absorption of radiation at selective wavelengths makes use of constant mass absorption coefficients in finite-width spectral bands. The convective regime is introduced by using a prescribed lapse rate in the troposphere. The main novelty of the radiative-convective model developed here is that it is solved without using any angular approximation for the radiation field. The solution obtained in the purely radiative mode (i.e. with convection ignored) leads to multiple stable equilibrium states, very similar to results recently found in simple models of planetary atmospheres. However, the introduction of convective processes removes these multiple equilibria. This shows the importance of taking convective processes into account even for qualitative analyses of planetary atmospheres.
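To make the convective treatment concrete, the following is a minimal sketch (not the paper's code) of adjusting an assumed radiative-equilibrium temperature profile to a prescribed tropospheric lapse rate; the profile shape, the 6.5 K/km lapse rate, and the height grid are illustrative assumptions.

```python
import numpy as np

# Minimal sketch: impose a prescribed lapse rate on a radiative-equilibrium
# temperature profile. All numbers here are illustrative assumptions, not
# values from the paper.

z = np.linspace(0.0, 20e3, 201)          # height grid [m]
T_rad = 210.0 + 80.0 * np.exp(-z / 3e3)  # assumed radiative-equilibrium profile [K]

gamma = 6.5e-3                            # prescribed lapse rate [K/m]
T_surf = T_rad[0]                         # surface temperature from the radiative solution

# In the troposphere the temperature follows T_surf - gamma*z; above the level
# where this line drops below the radiative profile (the tropopause), the
# radiative-equilibrium solution is kept.
T_conv_line = T_surf - gamma * z
T = np.where(T_conv_line > T_rad, T_conv_line, T_rad)

tropopause_index = int(np.argmax(T_conv_line < T_rad))
print(f"Approximate tropopause height: {z[tropopause_index] / 1e3:.1f} km")
```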
Abstract:
1. Identifying areas suitable for recolonization by threatened species is essential to support efficient conservation policies. Habitat suitability models (HSM) predict species' potential distributions, but the quality of their predictions should be carefully assessed when the species-environment equilibrium assumption is violated.
2. We studied the Eurasian otter Lutra lutra, whose numbers are recovering in southern Italy. To produce widely applicable results, we chose standard HSM procedures and assessed the models' capacity to predict the suitability of a recolonization area. We used two fieldwork datasets: presence-only data, used in Ecological Niche Factor Analysis (ENFA), and presence-absence data, used in a Generalized Linear Model (GLM). In addition to cross-validation, we independently evaluated the models with data from a recolonization event, providing presences on a previously unoccupied river.
3. Three of the models successfully predicted the suitability of the recolonization area, but the GLM built with data from before the recolonization disagreed with these predictions, missing the recolonized river's suitability and describing the otter's niche poorly. Our results highlighted three points of relevance to modelling practice: (1) absences may prevent models from correctly identifying areas suitable for a species' spread; (2) the selection of variables may introduce randomness into the predictions; and (3) the Area Under the Curve (AUC), a commonly used validation index, was not well suited to the evaluation of model quality, whereas the Boyce Index (CBI), based on presence data only, better reflected the models' fit to the recolonization observations.
4. For species with unstable spatial distributions, presence-only models may work better than presence-absence methods in making reliable predictions of areas suitable for expansion. An iterative modelling process, using new occurrences from each step of the species' spread, may also help in progressively reducing errors.
5. Synthesis and applications. Conservation plans depend on reliable models of the species' suitable habitats. In non-equilibrium situations, such as the case of threatened or invasive species, models may be affected negatively by the inclusion of absence data when predicting areas of potential expansion. Presence-only methods will here provide a better basis for productive conservation management practices.
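Since the abstract singles out the Boyce Index (CBI) as the more informative presence-only validation measure, here is a hedged, minimal sketch of the class-based Boyce index computation; the bin edges, sample data, and function name are illustrative assumptions rather than the authors' implementation.

```python
import numpy as np
from scipy.stats import spearmanr

def boyce_index(suit_presence, suit_background, n_classes=10):
    """Minimal sketch of a class-based Boyce index.

    suit_presence:   habitat-suitability values at evaluation presence sites
    suit_background: habitat-suitability values over the study area
    Returns the Spearman rank correlation between the predicted-to-expected
    ratio (P/E) and the rank of the suitability class.
    """
    edges = np.linspace(0.0, 1.0, n_classes + 1)
    p = np.histogram(suit_presence, bins=edges)[0] / len(suit_presence)
    e = np.histogram(suit_background, bins=edges)[0] / len(suit_background)
    valid = e > 0                          # ignore classes absent from the background
    f_ratio = p[valid] / e[valid]
    return spearmanr(f_ratio, np.arange(n_classes)[valid]).correlation

# Illustrative use with random numbers standing in for model output:
rng = np.random.default_rng(0)
background = rng.uniform(0, 1, 5000)
presences = rng.beta(4, 2, 200)            # presences skewed towards high suitability
print(round(boyce_index(presences, background), 2))
```

A value close to 1 indicates that evaluation presences fall disproportionately into the high-suitability classes, which is the kind of fit the abstract reports the presence-only models showing for the recolonized river.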
Abstract:
We study whether the neutron skin thickness Δrnp of 208Pb originates from the bulk or from the surface of the nucleon density distributions, according to the mean-field models of nuclear structure, and find that it depends on the stiffness of the nuclear symmetry energy. The bulk contribution to Δrnp arises from an extended sharp radius of neutrons, whereas the surface contribution arises from different widths of the neutron and proton surfaces. Nuclear models where the symmetry energy is stiff, as typical of relativistic models, predict a bulk contribution in Δrnp of 208Pb about twice as large as the surface contribution. In contrast, models with a soft symmetry energy like common nonrelativistic models predict that Δrnp of 208Pb is divided similarly into bulk and surface parts. Indeed, if the symmetry energy is supersoft, the surface contribution becomes dominant. We note that the linear correlation of Δrnp of 208Pb with the density derivative of the nuclear symmetry energy arises from the bulk part of Δrnp. We also note that most models predict a mixed-type (between halo and skin) neutron distribution for 208Pb. Although the halo-type limit is actually found in the models with a supersoft symmetry energy, the skin-type limit is not supported by any mean-field model. Finally, we compute parity-violating electron scattering in the conditions of the 208Pb parity radius experiment (PREX) and obtain a pocket formula for the parity-violating asymmetry in terms of the parameters that characterize the shape of the 208Pb nucleon densities.
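For readers unfamiliar with the notation, the neutron skin thickness discussed above is conventionally defined as the difference between the neutron and proton root-mean-square radii, and the bulk/surface decomposition can be pictured with a two-parameter Fermi profile of half-density radius C_q and surface diffuseness a_q; the expressions below summarize this standard notation and are not taken from the paper itself.

```latex
\Delta r_{np} = \langle r^2 \rangle_n^{1/2} - \langle r^2 \rangle_p^{1/2} ,
\qquad
\rho_q(r) = \frac{\rho_{0,q}}{1 + \exp\!\left[(r - C_q)/a_q\right]} , \qquad q = n, p .
```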
Abstract:
Understanding the structure of interphase chromosomes is essential to elucidate the regulatory mechanisms of gene expression. During recent years, high-throughput DNA sequencing has expanded the power of chromosome conformation capture (3C) methods, which provide information about the reciprocal spatial proximity of chromosomal loci. Since 2012, it has been known that the entire chromatin of interphase chromosomes is organized into regions with strongly increased frequency of internal contacts. These regions, with an average size of ∼1 Mb, were named topological domains. More recent studies demonstrated the presence of unconstrained supercoiling in interphase chromosomes. Using Brownian dynamics simulations, we show here that by including supercoiling in models of topological domains one can reproduce, and thus provide possible explanations for, several experimentally observed characteristics of interphase chromosomes, such as their complex contact maps.
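As a loose illustration of the quantity such simulations are compared against, the hypothetical sketch below converts a polymer (bead) configuration into a contact map by thresholding pairwise distances; the random-walk coordinates, bead count, and cutoff are invented and are not the Brownian dynamics model described above.

```python
import numpy as np

# Hypothetical polymer configuration: a random walk of N beads in 3D.
rng = np.random.default_rng(1)
N = 300
coords = np.cumsum(rng.normal(scale=1.0, size=(N, 3)), axis=0)

# Contact map: beads i and j are "in contact" if closer than a cutoff distance.
cutoff = 3.0
diff = coords[:, None, :] - coords[None, :, :]
distances = np.linalg.norm(diff, axis=-1)
contact_map = (distances < cutoff).astype(int)

# Contact frequency as a function of genomic separation |i - j|.
seps = np.abs(np.arange(N)[:, None] - np.arange(N)[None, :])
mean_contact = [contact_map[seps == s].mean() for s in range(1, N)]
print(f"Contact probability at separation 10: {mean_contact[9]:.3f}")
```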
Abstract:
Preface. The starting point for this work, and eventually the subject of the whole thesis, was the question of how to estimate the parameters of affine stochastic volatility jump-diffusion models. These models are very important for contingent claim pricing. Their major advantage, the availability of analytical solutions for characteristic functions, made them the models of choice for many theoretical constructions and practical applications. At the same time, estimation of the parameters of stochastic volatility jump-diffusion models is not a straightforward task. The problem comes from the variance process, which is not observable. Several estimation methodologies deal with estimation problems involving latent variables. One appeared particularly interesting: in contrast to the other methods, it requires neither discretization nor simulation of the process. This is the Continuous Empirical Characteristic Function (ECF) estimator, based on the unconditional characteristic function. However, the procedure had been derived only for stochastic volatility models without jumps, and thus it became the subject of my research. This thesis consists of three parts, each written as an independent and self-contained article. At the same time, the questions answered by the second and third parts of this work arise naturally from the issues investigated and the results obtained in the first one. The first chapter is the theoretical foundation of the thesis. It proposes an estimation procedure for stochastic volatility models with jumps in both the asset price and variance processes. The estimation procedure is based on the joint unconditional characteristic function of the stochastic process. The major analytical result of this part, as well as of the whole thesis, is a closed-form expression for the joint unconditional characteristic function of stochastic volatility jump-diffusion models. The empirical part of the chapter suggests that, besides stochastic volatility, jumps in both the mean and the volatility equation are relevant for modelling returns of the S&P500 index, which was chosen as a general representative of the stock asset class. Hence, the next question is which jump process to use to model returns of the S&P500. The decision about the jump process in the framework of affine jump-diffusion models boils down to defining the intensity of the compound Poisson process (a constant or some function of the state variables) and to choosing the distribution of the jump size. While the jump in the variance process is usually assumed to be exponential, there are at least three distributions of the jump size currently used for asset log-prices: normal, exponential and double exponential. The second part of this thesis shows that normal jumps in the asset log-returns should be used if we are to model the S&P500 index by a stochastic volatility jump-diffusion model. This is a surprising result, since the exponential distribution has fatter tails and, for this reason, either an exponential or a double-exponential jump size was expected to provide the best fit of the stochastic volatility jump-diffusion models to the data. The idea of testing the efficiency of the Continuous ECF estimator on simulated data first appeared when the initial estimation results of the first chapter were obtained: in the absence of a benchmark or any ground for comparison, it is unreasonable to be sure that our parameter estimates and the true parameters of the models coincide.
The conclusion of the second chapter provides one more reason to perform that kind of test. Thus, the third part of this thesis concentrates on the estimation of the parameters of stochastic volatility jump-diffusion models on the basis of asset price time series simulated from various "true" parameter sets. The goal is to show that the Continuous ECF estimator based on the joint unconditional characteristic function is capable of finding the true parameters, and the third chapter shows that our estimator indeed has this ability. Once it is clear that the Continuous ECF estimator based on the unconditional characteristic function works, the next question arises immediately: can the computational effort be reduced without affecting the efficiency of the estimator, or can the efficiency of the estimator be improved without dramatically increasing the computational burden? The efficiency of the Continuous ECF estimator depends on the number of dimensions of the joint unconditional characteristic function used in its construction. Theoretically, the more dimensions there are, the more efficient the estimation procedure. In practice, however, this relationship is not so straightforward, owing to increasing computational difficulties. The second chapter, for example, in addition to the choice of the jump process, discusses the possibility of using the marginal, i.e. one-dimensional, unconditional characteristic function in the estimation instead of the joint, bi-dimensional, unconditional characteristic function. As a result, the preference for one or the other depends on the model to be estimated; thus, the computational effort can be reduced in some cases without affecting the efficiency of the estimator. Improving the estimator's efficiency by increasing its dimensionality faces more difficulties. The third chapter of this thesis, in addition to what was discussed above, compares the performance of the estimators with bi- and three-dimensional unconditional characteristic functions on simulated data. It shows that the theoretical efficiency of the Continuous ECF estimator based on the three-dimensional unconditional characteristic function is not attainable in practice, at least for the moment, owing to the limitations of the computing power and optimization toolboxes available to the general public. Thus, the Continuous ECF estimator based on the joint, bi-dimensional, unconditional characteristic function has every reason to exist and to be used for the estimation of the parameters of stochastic volatility jump-diffusion models.
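As a loose illustration of the empirical characteristic function idea that runs through this preface (not the thesis's joint-CF estimator for jump-diffusions), the sketch below fits a deliberately simple model by minimizing a weighted distance between the empirical characteristic function of the data and the model characteristic function over a grid of arguments; the normal model, weight function, and grid are assumptions chosen only to keep the example short.

```python
import numpy as np
from scipy.optimize import minimize

def empirical_cf(x, u):
    """Empirical characteristic function of the sample x evaluated at arguments u."""
    return np.exp(1j * np.outer(u, x)).mean(axis=1)

def normal_cf(u, mu, sigma):
    """Characteristic function of a N(mu, sigma^2) distribution (stand-in model)."""
    return np.exp(1j * u * mu - 0.5 * (sigma * u) ** 2)

def ecf_objective(theta, x, u, weights):
    mu, log_sigma = theta
    diff = empirical_cf(x, u) - normal_cf(u, mu, np.exp(log_sigma))
    # Discretized version of a continuous weighted L2 distance between CFs.
    return np.sum(weights * np.abs(diff) ** 2)

rng = np.random.default_rng(42)
data = rng.normal(loc=0.5, scale=2.0, size=2000)   # "observed" returns (simulated)

u_grid = np.linspace(-2.0, 2.0, 81)                # CF arguments
w = np.exp(-u_grid ** 2)                           # assumed exponential weight function

res = minimize(ecf_objective, x0=[0.0, 0.0], args=(data, u_grid, w), method="Nelder-Mead")
mu_hat, sigma_hat = res.x[0], np.exp(res.x[1])
print(f"estimated mu = {mu_hat:.3f}, sigma = {sigma_hat:.3f}")
```

In the thesis's setting the characteristic function is the joint unconditional one of returns and the latent variance, and the integral over arguments is treated continuously rather than on a fixed grid; the sketch only conveys the shape of the objective.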
Liming in Agricultural Production Models with and Without the Adoption of Crop-Livestock Integration
Abstract:
Perennial forage crops used in crop-livestock integration (CLI) are able to accumulate large amounts of straw on the soil surface in a no-tillage system (NTS). In addition, they can potentially produce large amounts of soluble organic compounds that help improve the efficiency of liming in the subsurface, which favors root growth, thus reducing both the risk of yield loss during dry spells and the harmful effects of “overliming”. The aim of this study was to test the effects of liming on two models of agricultural production, with and without crop-livestock integration, over two years. An experiment was conducted in a Latossolo Vermelho (Oxisol) with a very clayey texture, located in an agricultural area under NTS in Bandeirantes, PR, Brazil. Liming was performed to increase base saturation (V) to 65, 75, and 90 %, while one plot per block was kept without lime (control). A randomized block design arranged in split plots, with four plots per block and four replications, was adopted. The soil properties evaluated were pH in CaCl2, soil organic matter (SOM), Ca, Mg, K, Al, and P. The effects of liming were observed to a greater depth and over a longer period through the mobilization of ions in the soil, leading to a reduction in SOM and Al concentration and an increase in pH and in the levels of Ca and Mg. In the first crop year, adoption of CLI led to an increase in the levels of K and Mg and a reduction in the level of SOM; in the second crop year, however, the rate of SOM decline decreased compared to the first crop year, and the level of K increased, whereas that of P decreased. The extent of the liming effects in depth and the improvement in the root environment provided by the treatments were only partially reflected in the changes in the chemical properties studied.
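For context only, lime rates targeting a given base saturation are commonly computed from the cation exchange capacity and the current and desired saturation values; the sketch below shows that textbook calculation under assumed inputs, since the abstract itself does not report the formula or the soil values used.

```python
def lime_requirement_t_ha(ctc_cmolc_dm3, v_current, v_target, prnt=100.0):
    """Textbook base-saturation lime requirement (0-20 cm layer), in t/ha.

    ctc_cmolc_dm3: cation exchange capacity at pH 7 (cmolc/dm3)
    v_current:     current base saturation (%)
    v_target:      target base saturation (%), e.g. 65, 75 or 90 as in the study
    prnt:          relative neutralizing power of the lime (%)
    All values below are illustrative; the abstract does not report them.
    """
    if v_target <= v_current:
        return 0.0
    return ctc_cmolc_dm3 * (v_target - v_current) / prnt

# Example: a clayey soil with CTC = 12 cmolc/dm3 and V = 48 %, limed to V = 75 %.
print(f"{lime_requirement_t_ha(12.0, 48.0, 75.0, prnt=90.0):.1f} t/ha")
```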
Integrating species distribution models (SDMs) and phylogeography for two species of Alpine Primula.
Abstract:
The major intention of the present study was to investigate whether an approach combining niche-based palaeodistribution modeling and phylogeography would support or modify hypotheses about Quaternary distributional history derived from phylogeographic methods alone. Our study system comprised two closely related species of Alpine Primula. We used species distribution models based on the extant distribution of the species and last glacial maximum (LGM) climate models to predict the distribution of the two species during the LGM. Phylogeographic data were generated using amplified fragment length polymorphisms (AFLPs). In Primula hirsuta, models of past distribution and phylogeographic data are partly congruent and support the hypothesis of widespread nunatak survival in the Central Alps. Species distribution models (SDMs) allowed us to differentiate between alpine regions that harbor potential nunatak areas and regions that have been colonized from other areas. SDMs revealed that diversity is a good indicator of nunataks, while rarity is a good indicator of peripheral relict populations that were not a source for the recolonization of the inner Alps. In P. daonensis, palaeodistribution models and phylogeographic data are incongruent. Besides the uncertainty inherent in this type of modeling approach (e.g., the relatively coarse 1-km grain size), the disagreement between models and data may partly be caused by shifts of the ecological niche in both species. Nevertheless, we demonstrate that the combination of palaeodistribution modeling with phylogeographic approaches provides a more differentiated picture of the distributional history of species and partly supports (P. hirsuta) and partly modifies (P. daonensis and P. hirsuta) hypotheses of Quaternary distributional history. Some of the refugial areas indicated by palaeodistribution models could not have been identified with phylogeographic data.
Abstract:
The objective of this work was to develop backpropagation neural network models to estimate solar radiation from extraterrestrial radiation data, daily temperature range, precipitation, cloudiness and relative sunshine duration. Data from Córdoba, Argentina, were used for development and validation. The behaviour of, and the agreement between, observed values and the estimates obtained by the neural networks were assessed for different combinations of inputs. The estimates showed a root mean square error between 3.15 and 3.88 MJ m-2 d-1, the latter corresponding to the model that estimates radiation using only precipitation and daily temperature range. In all models, the results show good agreement with the seasonal pattern of solar radiation. These results indicate the adequate performance and suitability of this methodology for estimating complex phenomena such as solar radiation.
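As a rough illustration of this kind of model (not the authors' network or the Córdoba data), the sketch below trains a small backpropagation network on synthetic stand-ins for the listed predictors and reports the root mean square error; the feature construction, network size, and data are invented assumptions.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error

# Synthetic stand-ins for the predictors: extraterrestrial radiation (Ra),
# daily temperature range (dT), precipitation (P) and relative sunshine (n/N).
rng = np.random.default_rng(0)
n = 1500
Ra = rng.uniform(15, 42, n)                  # MJ m-2 d-1
dT = rng.uniform(2, 18, n)                   # degrees C
P = rng.exponential(3.0, n)                  # mm
sunshine = rng.uniform(0, 1, n)              # dimensionless

# Invented relationship plus noise, only to give the network something to learn.
Rs = 0.16 * Ra * np.sqrt(dT) * (0.5 + 0.5 * sunshine) - 0.05 * P
Rs += rng.normal(0, 1.5, n)

X = np.column_stack([Ra, dT, P, sunshine])
X_tr, X_te, y_tr, y_te = train_test_split(X, Rs, test_size=0.3, random_state=0)

model = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(10,), max_iter=2000, random_state=0),
)
model.fit(X_tr, y_tr)
rmse = mean_squared_error(y_te, model.predict(X_te)) ** 0.5
print(f"RMSE: {rmse:.2f} MJ m-2 d-1")
```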
Abstract:
Although hydrocarbon-bearing fluids have been known from the alkaline igneous rocks of the Khibiny intrusion for many years, their origin remains enigmatic. A recently proposed model of post-magmatic hydrocarbon (HC) generation through Fischer-Tropsch (FT) type reactions suggests the hydration of Fe-bearing phases and the release of H2, which reacts with magmatically derived CO2 to form CH4 and higher HCs. However, new petrographic, microthermometric, laser Raman, bulk gas and isotope data are presented and discussed in the context of previously published work in order to reassess models of HC generation. The gas phase is dominated by CH4, with only minor proportions of higher hydrocarbons. No remnants of the proposed primary CO2-rich fluid are found in the complex. The majority of the fluid inclusions are secondary in nature and trapped in healed microfractures, indicating a high fluid flux after magma crystallisation. Entrapment conditions for fluid inclusions are 450-550 degrees C at 2.8-4.5 kbar. These temperatures are too high for hydrocarbon gas generation through the FT reaction. Chemical analyses of the rims of Fe-rich phases suggest that they are not the result of alteration but instead represent changes in magma composition during crystallisation. Furthermore, there is no clear relationship between the presence of Fe-rich minerals and the abundance of fluid inclusion planes (FIPs) as reported elsewhere. δ13C values for methane range from -22.4‰ to -5.4‰, confirming a largely abiogenic origin for the gas. The presence of primary CH4-dominated fluid inclusions and of melt inclusions containing a methane-rich gas phase indicates a magmatic origin of the HCs. An increase in methane content, together with a decrease in δ13C values towards the intrusion margin, suggests that magmatically derived abiogenic hydrocarbons may have mixed with biogenic hydrocarbons derived from the surrounding country rocks.
Abstract:
It is generally accepted that between 70 and 80% of manufacturing costs can be attributed to design. Nevertheless, it is difficult for the designer to estimate manufacturing costs accurately, especially when alternative constructions are compared at the conceptual design phase, because of the lack of cost information and appropriate tools. In general, previous reports concerning the optimisation of a welded structure have used the mass of the product as the basis for cost comparison. However, it can easily be shown with a simple example that the use of product mass as the sole manufacturing cost estimator is unsatisfactory. This study describes a method of formulating welding time models for cost calculation, and presents the results of the models for particular sections, based on typical costs in Finland. This was achieved by collecting information concerning welded products from different companies. The data included 71 different welded assemblies taken from the mechanical engineering and construction industries. The welded assemblies contained in total 1 589 welded parts, 4 257 separate welds, and a total welded length of 3 188 metres. The data were modelled for statistical calculations, and models of welding time were derived by using linear regression analysis. The models were tested by using appropriate statistical methods, and were found to be accurate. General welding time models have been developed, valid for welding in Finland, as well as specific, more accurate models for particular companies. The models are presented in such a form that they can be used easily by a designer, enabling the cost calculation to be automated.
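To illustrate the general approach of regressing welding time on design features (without reproducing the Finnish models themselves), here is a minimal sketch with invented data; the chosen features, values, and resulting coefficients are assumptions for the example only.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Invented example data: welded length (m), number of welds, plate thickness (mm)
# against total welding time (min) for a handful of hypothetical assemblies.
X = np.array([
    [2.0,  4,  6],
    [5.5, 12,  8],
    [1.2,  3,  5],
    [8.0, 20, 10],
    [3.3,  7,  6],
    [6.1, 15, 12],
])
y = np.array([35., 110., 22., 190., 60., 150.])   # welding time in minutes

model = LinearRegression().fit(X, y)
print("coefficients:", np.round(model.coef_, 1), "intercept:", round(model.intercept_, 1))

# Predicted welding time for a new assembly: 4 m of welds, 9 welds, 8 mm plate.
print("predicted time [min]:", round(model.predict([[4.0, 9, 8]])[0], 1))
```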
Abstract:
Geophysical data may provide crucial information about hydrological properties, states, and processes that are difficult to obtain by other means. Large data sets can be acquired over widely different scales in a minimally invasive manner and at comparatively low cost, but their effective use in hydrology requires an understanding of the fidelity of geophysical models, the assumptions made in their construction, and the links between geophysical and hydrological properties. Geophysics has been applied to groundwater prospecting for almost a century, but only in the last 20 years has it been regularly used together with classical hydrological data to build predictive hydrological models. A largely unexplored avenue for future work is to use geophysical data to falsify or rank competing conceptual hydrological models. A promising cornerstone for such a model selection strategy is the Bayes factor, but it can only be calculated reliably when the main sources of uncertainty throughout the hydrogeophysical parameter estimation process are considered. Most classical geophysical imaging tools tend to favor models with smoothly varying property fields that are at odds with most conceptual hydrological models of interest. It is thus necessary to account for this bias or to use alternative approaches in which proposed conceptual models are honored at all steps in the model building process.
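For reference, the Bayes factor mentioned above compares two conceptual models through their marginal likelihoods (evidences); in generic notation, with data D and models M1 and M2 (symbols assumed here, not taken from the text):

```latex
\mathrm{BF}_{12} = \frac{p(D \mid M_1)}{p(D \mid M_2)} ,
\qquad
p(D \mid M_k) = \int p(D \mid \theta_k, M_k)\, p(\theta_k \mid M_k)\, \mathrm{d}\theta_k .
```

Computing these evidences reliably is precisely where the uncertainty sources mentioned above enter, since they shape the likelihood p(D | θ_k, M_k) used in the integral.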
Abstract:
Maximum entropy modeling (Maxent) is a widely used algorithm for predicting species distributions across space and time. Properly assessing the uncertainty in such predictions is non-trivial and requires validation with independent datasets. Notably, model complexity (the number of model parameters) remains a major concern in relation to overfitting and, hence, the transferability of Maxent models. An emerging approach is to validate the cross-temporal transferability of model predictions using paleoecological data. In this study, we assess the effect of model complexity on the performance of Maxent projections across time using two European plant species (Alnus glutinosa (L.) Gaertn. and Corylus avellana L.) with an extensive late Quaternary fossil record in Spain as a study case. We fit 110 models with different levels of complexity under present-day conditions and tested model performance using AUC (area under the receiver operating characteristic curve) and AICc (corrected Akaike Information Criterion) through the standard procedure of randomly partitioning current occurrence data. We then compared these results to an independent validation by projecting the models to mid-Holocene (6000 years before present) climatic conditions in Spain to assess their ability to predict fossil pollen presence-absence and abundance. We find that calibrating Maxent models with default settings results in the generation of overly complex models. While model performance increased with model complexity when predicting current distributions, it was higher with intermediate complexity when predicting mid-Holocene distributions. Hence, models of intermediate complexity offered the best trade-off for predicting species distributions across time. Reliable temporal model transferability is especially relevant for forecasting species distributions under future climate change. Consequently, species-specific model tuning should be used to find the best modeling settings to control for complexity, notably with paleoecological data to independently validate model projections. For cross-temporal projections of species distributions for which paleoecological data are not available, models of intermediate complexity should be selected.
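The AICc used above to penalize model complexity has a standard closed form; the short sketch below computes it from a model's log-likelihood, number of parameters, and number of occurrence records (the example values are invented and are not the study's results).

```python
import math

def aicc(log_likelihood, k_params, n_samples):
    """Corrected Akaike Information Criterion (AICc)."""
    aic = 2 * k_params - 2 * log_likelihood
    correction = (2 * k_params * (k_params + 1)) / (n_samples - k_params - 1)
    return aic + correction

# Invented example: compare a complex and a simpler Maxent-style model
# fitted to 120 occurrence records (lower AICc is preferred).
print(round(aicc(log_likelihood=-310.0, k_params=25, n_samples=120), 1))  # more complex
print(round(aicc(log_likelihood=-318.0, k_params=8, n_samples=120), 1))   # more parsimonious
```

The correction term grows quickly as the parameter count approaches the sample size, which is why overly complex models fitted to modest numbers of occurrences are penalized so strongly.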
Abstract:
Radiostereometric analysis (RSA) is a highly accurate method for the measurement of in vivo micromotion of orthopaedic implants. Validation of the RSA method is a prerequisite for performing clinical RSA studies. Only a limited number of studies have utilised the RSA method in the evaluation of migration and inducible micromotion during fracture healing. Volar plate fixation of distal radial fractures has increased in popularity, yet there is still very little prospective randomised evidence supporting the use of these implants over other treatments. The aim of this study was to investigate the precision, accuracy, and feasibility of using RSA in the evaluation of healing in distal radius fractures treated with a volar fixed-angle plate. A physical phantom model was used to validate the RSA method for simple distal radius fractures. A computer simulation model was then used to validate the RSA method for more complex interfragmentary motion in intra-articular fractures. A separate pre-clinical investigation was performed in order to evaluate the possibility of using novel resorbable markers for RSA. Based on the validation studies, a prospective RSA cohort study of fifteen patients with plated AO type-C distal radius fractures and a 1-year follow-up was performed. RSA was shown to be highly accurate and precise in the measurement of fracture micromotion using both physical and computer-simulated models of distal radius fractures. Resorbable RSA markers demonstrated potential for use in RSA. The RSA method was found to have a high clinical precision. The fractures underwent significant translational and rotational migration during the first two weeks after surgery, but not thereafter. Maximal grip caused significant translational and rotational interfragmentary micromotion. This inducible micromotion was detectable up to eighteen weeks, even after the achievement of radiographic union. The application of RSA to the measurement of fracture fragment migration and inducible interfragmentary micromotion in AO type-C distal radius fractures is feasible but technically demanding. RSA may be a unique tool for defining the progress of fracture union.
Abstract:
Atherosclerosis is a life-long vascular inflammatory disease and the leading cause of death in Finland and other Western societies. The development of atherosclerotic plaques is progressive, and plaques form when lipids begin to accumulate in the vessel wall. This accumulation triggers the migration of inflammatory cells, which is a hallmark of vascular inflammation. Often, such a plaque becomes unstable and forms a vulnerable plaque, which may rupture, causing thrombosis and, in the worst case, myocardial infarction or stroke. Identification of these vulnerable plaques before they rupture could save lives. At present, there exists no appropriate, non-invasive clinical method for their identification. The aim of this thesis was to evaluate novel positron emission tomography (PET) probes for the detection of vulnerable atherosclerotic plaques and to characterize two mouse models of atherosclerosis. These studies were performed using ex vivo and in vivo imaging modalities. The vulnerability of atherosclerotic plaques was evaluated in terms of the presence of active inflammatory cells, namely macrophages. Age and the duration of the high-fat diet had a drastic impact on the development of atherosclerotic plaques in mice. For imaging of atherosclerosis, 6-month-old mice kept on a high-fat diet for 4 months showed mature, metabolically active atherosclerotic plaques. [18F]FDG and 68Ga accumulated in areas representative of vulnerable plaques. However, the slow clearance of 68Ga limits its use for plaque imaging. The newly synthesized [68Ga]DOTA-RGD and [18F]EF5 tracers demonstrated efficient uptake in plaques as compared to the healthy vessel wall, but the pharmacokinetic properties of these tracers were not optimal in the models used. In conclusion, these studies resulted in the identification of new strategies for the assessment of plaque stability and of mouse models of atherosclerosis that could be used for plaque imaging. Of the probes evaluated, [18F]FDG was the best tracer for plaque imaging. However, further studies are warranted to clarify the applicability of [18F]EF5 and [68Ga]DOTA-RGD for imaging of atherosclerosis with other experimental models.
Abstract:
This thesis is concerned with state and parameter estimation in state space models. The estimation of states and parameters is an important task when mathematical modeling is applied to many different application areas, such as global positioning systems, target tracking, navigation, brain imaging, the spread of infectious diseases, biological processes, telecommunications, audio signal processing, stochastic optimal control, machine learning, and physical systems. In Bayesian settings, the estimation of states or parameters amounts to computation of the posterior probability density function. Except for a very restricted number of models, it is impossible to compute this density function in closed form; hence, we need approximation methods. A state estimation problem involves estimating the states (latent variables) that are not directly observed in the output of the system. In this thesis, we use the Kalman filter, extended Kalman filter, Gauss–Hermite filters, and particle filters to estimate the states based on available measurements. Among these filters, particle filters are numerical methods for approximating the filtering distributions of non-linear, non-Gaussian state space models via Monte Carlo. The performance of a particle filter depends heavily on the chosen importance distribution; for instance, an inappropriate choice of the importance distribution can lead to failure of convergence of the particle filter algorithm. In this thesis, we analyze the theoretical Lᵖ convergence of the particle filter with general importance distributions, where p ≥ 2 is an integer. A parameter estimation problem is concerned with inferring the model parameters from measurements. For high-dimensional complex models, estimation of parameters can be done by Markov chain Monte Carlo (MCMC) methods. In its operation, the MCMC method requires the unnormalized posterior distribution of the parameters and a proposal distribution. In this thesis, we show how the posterior density function of the parameters of a state space model can be computed by filtering-based methods, where the states are integrated out. This type of computation is then applied to estimate the parameters of stochastic differential equations. Furthermore, we compute the partial derivatives of the log-posterior density function and use the hybrid Monte Carlo and scaled conjugate gradient methods to infer the parameters of stochastic differential equations. The computational efficiency of MCMC methods depends heavily on the chosen proposal distribution. A commonly used proposal distribution is the Gaussian; in this kind of proposal, the covariance matrix must be well tuned. To tune it, adaptive MCMC methods can be used. In this thesis, we propose a new way of updating the covariance matrix using the variational Bayesian adaptive Kalman filter algorithm.
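To make the particle filtering discussion concrete, here is a minimal bootstrap particle filter sketch for an invented one-dimensional non-linear state space model; the transition and measurement functions, noise levels, and particle count are assumptions for illustration, not the models or settings studied in the thesis.

```python
import numpy as np

# Minimal bootstrap particle filter for a made-up 1D non-linear state space model.
rng = np.random.default_rng(0)
T, N = 100, 500                       # time steps, particles
q, r = 0.5, 1.0                       # process and measurement noise std

def f(x, t):                          # assumed state transition
    return 0.5 * x + 8.0 * np.cos(1.2 * t)

def h(x):                             # assumed measurement function
    return 0.05 * x ** 2

# Simulate a "true" trajectory and noisy measurements.
x_true = np.zeros(T)
y = np.zeros(T)
for t in range(1, T):
    x_true[t] = f(x_true[t - 1], t) + q * rng.normal()
    y[t] = h(x_true[t]) + r * rng.normal()

# Bootstrap filter: the importance distribution is the transition density itself.
particles = rng.normal(0.0, 1.0, N)
x_est = np.zeros(T)
for t in range(1, T):
    particles = f(particles, t) + q * rng.normal(size=N)        # propagate
    logw = -0.5 * ((y[t] - h(particles)) / r) ** 2              # weight by likelihood
    w = np.exp(logw - logw.max())
    w /= w.sum()
    x_est[t] = np.sum(w * particles)                            # filtering mean
    particles = rng.choice(particles, size=N, p=w)              # multinomial resampling

print(f"RMSE of filtering mean: {np.sqrt(np.mean((x_est - x_true) ** 2)):.2f}")
```

In this bootstrap variant the importance distribution is simply the transition density, which is exactly the kind of choice whose consequences motivate the analysis of general importance distributions mentioned above.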