974 results for math computation
Abstract:
We study the singular effects of vanishingly small surface tension on the dynamics of finger competition in the Saffman-Taylor problem, using the asymptotic techniques described by Tanveer [Philos. Trans. R. Soc. London, Ser. A 343, 155 (1993)] and Siegel and Tanveer [Phys. Rev. Lett. 76, 419 (1996)], as well as direct numerical computation, following the numerical scheme of Hou, Lowengrub, and Shelley [J. Comput. Phys. 114, 312 (1994)]. We demonstrate the dramatic effects of small surface tension on the late time evolution of two-finger configurations with respect to exact (nonsingular) zero-surface-tension solutions. The effect is present even when the relevant zero-surface-tension solution has asymptotic behavior consistent with selection theory. Such singular effects, therefore, cannot be traced back to steady state selection theory, and imply a drastic global change in the structure of phase-space flow. They can be interpreted in the framework of a recently introduced dynamical solvability scenario according to which surface tension unfolds the structurally unstable flow, restoring the hyperbolicity of multifinger fixed points.
Abstract:
Gradients of variation, or clines, have always intrigued biologists. Classically, they have been interpreted as the outcomes of antagonistic interactions between selection and gene flow. Alternatively, clines may also establish neutrally with isolation by distance (IBD) or secondary contact between previously isolated populations. The relative importance of natural selection and these two neutral processes in the establishment of clinal variation can be tested by comparing genetic differentiation at neutral genetic markers and at the studied trait. A third neutral process, surfing of a newly arisen mutation during the colonization of a new habitat, is more difficult to test. Here, we designed a spatially explicit approximate Bayesian computation (ABC) simulation framework to evaluate whether the strong cline in the genetically based reddish coloration observed in the European barn owl (Tyto alba) arose as a by-product of a range expansion or whether selection has to be invoked to explain this colour cline, for which we have previously ruled out the actions of IBD or secondary contact. Using ABC simulations and genetic data on 390 individuals from 20 locations genotyped at 22 microsatellite loci, we first determined how barn owls colonized Europe after the last glaciation. Using these results in new simulations on the evolution of the colour phenotype, and assuming various genetic architectures for the colour trait, we demonstrate that the observed colour cline cannot be due to the surfing of a neutral mutation. Taking advantage of spatially explicit ABC, which proved to be a powerful method to disentangle the respective roles of selection and drift in range expansions, we conclude that the formation of the colour cline observed in the barn owl must be due to natural selection.
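To make the inferential logic concrete, the following Python sketch shows rejection ABC in its minimal form; the toy cline model, flat prior, slope summary statistic and tolerance are illustrative assumptions, not the spatially explicit framework used in the study.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate(theta, n=390):
    """Toy stand-in for a spatially explicit simulation: a phenotypic
    cline whose steepness is controlled by the parameter theta."""
    x = np.linspace(0.0, 1.0, n)                  # sampling positions
    return theta * x + rng.normal(0.0, 0.1, n)    # slope + noise

def summary(data):
    """Summary statistic: the fitted slope of the cline."""
    x = np.linspace(0.0, 1.0, len(data))
    return np.polyfit(x, data, 1)[0]

s_obs = summary(simulate(2.0))   # 'observed' data with a known slope

# Rejection step: draw from the prior, keep draws whose simulated
# summary falls within a tolerance eps of the observed summary.
eps, accepted = 0.2, []
for _ in range(20000):
    theta = rng.uniform(-5.0, 5.0)                # assumed flat prior
    if abs(summary(simulate(theta)) - s_obs) < eps:
        accepted.append(theta)

print(f"ABC posterior mean ~ {np.mean(accepted):.2f} (true value 2.0)")
```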
Abstract:
A simple model exhibiting a noise-induced ordering transition (NIOT) and a noise-induced disordering transition (NIDT), in which the noise is purely multiplicative, is presented. Both transitions are found in two dimensions as well as in one dimension. We show analytically and numerically that the critical behavior of these two transitions is described by the so-called multiplicative noise (MN) universality class. A computation of the set of critical exponents is presented in both d=1 and d=2.
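As an illustration of the kind of model involved, the Python sketch below integrates a generic multiplicative-noise Langevin equation on a one-dimensional periodic lattice with an Euler-Maruyama step; the equation's form and every parameter value are illustrative assumptions, not those of the paper. Critical exponents would then be extracted from how the order parameter scales near the transition.

```python
import numpy as np

rng = np.random.default_rng(1)

# Generic multiplicative-noise Langevin equation (illustrative form):
#   d(phi)/dt = -a*phi - b*phi**3 + D*laplacian(phi) + phi*eta(x, t)
a, b, D, amp = -0.5, 1.0, 1.0, 1.0     # amp = noise amplitude
L, dt, steps = 256, 0.01, 5000

phi = np.full(L, 0.5)
for _ in range(steps):
    lap = np.roll(phi, 1) + np.roll(phi, -1) - 2.0 * phi   # periodic Laplacian
    eta = rng.normal(0.0, amp / np.sqrt(dt), L)            # white noise
    phi += dt * (-a * phi - b * phi**3 + D * lap + phi * eta)
    phi = np.maximum(phi, 0.0)    # keep phi >= 0; phi = 0 is absorbing

print("mean order parameter:", phi.mean())
```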
Abstract:
Optimizing collective behavior in multiagent systems requires algorithms to find not only appropriate individual behaviors but also a suitable composition of agents within a team. Over the last two decades, evolutionary methods have emerged as a promising approach for the design of agents and their compositions into teams. The choice of a crossover operator that facilitates the evolution of optimal team composition is recognized to be crucial, but its effect has so far never been thoroughly quantified. Here, we highlight the limitations of two different crossover operators that exchange entire agents between teams: restricted agent swapping (RAS), which exchanges only corresponding agents between teams, and free agent swapping (FAS), which allows an arbitrary exchange of agents. Our results show that RAS suffers from premature convergence, whereas FAS entails insufficient convergence. Consequently, in both cases the exploration and exploitation aspects of the evolutionary algorithm are not well balanced, resulting in the evolution of suboptimal team compositions. To overcome this problem, we propose combining the two methods. Our approach first applies FAS to explore the search space and then RAS to exploit it. This mixed approach is a much more efficient strategy for the evolution of team compositions than either strategy on its own. Our results suggest that such a mixed agent-swapping algorithm should always be preferred whenever the optimal composition of individuals in a multiagent system is unknown.
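The two operators, and the mixed strategy proposed above, can be sketched in a few lines of Python; teams are lists of placeholder agents, and the swap probability and switch point are illustrative assumptions.

```python
import random

random.seed(0)

def restricted_swap(team_a, team_b, p=0.5):
    """RAS: agent i of one team may only be exchanged with the
    corresponding agent i of the other team."""
    for i in range(len(team_a)):
        if random.random() < p:
            team_a[i], team_b[i] = team_b[i], team_a[i]
    return team_a, team_b

def free_swap(team_a, team_b, p=0.5):
    """FAS: an agent may be exchanged with an arbitrary agent of the
    other team, so compositions can be reshuffled freely."""
    for i in range(len(team_a)):
        if random.random() < p:
            j = random.randrange(len(team_b))
            team_a[i], team_b[j] = team_b[j], team_a[i]
    return team_a, team_b

def mixed_swap(team_a, team_b, generation, switch_point=100):
    """Mixed strategy: explore with FAS early, exploit with RAS later."""
    op = free_swap if generation < switch_point else restricted_swap
    return op(team_a, team_b)

print(mixed_swap(list("ABCD"), list("abcd"), generation=10))  # FAS phase
```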
Abstract:
Sex chromosomes are expected to evolve suppressed recombination, which leads to degeneration of the Y and heteromorphism between the X and Y. Some sex chromosomes remain homomorphic, however, and the factors that prevent degeneration of the Y in these cases are not well understood. The homomorphic sex chromosomes of the European tree frogs (Hyla spp.) present an interesting paradox. Recombination in males has never been observed in crossing experiments, but molecular data are suggestive of occasional recombination between the X and Y. The hypothesis that these sex chromosomes recombine has not been tested statistically, however, nor has the X-Y recombination rate been estimated. Here, we use approximate Bayesian computation coupled with coalescent simulations of sex chromosomes to quantify the X-Y recombination rate from existing data. We find that microsatellite data from H. arborea, H. intermedia and H. molleri support a recombination rate between X and Y that is significantly different from zero. We estimate that rate to be approximately 10^5 times smaller than that between X chromosomes. Our findings support the notion that a very low recombination rate may be sufficient to maintain homomorphism in sex chromosomes.
Abstract:
We propose a definition of classical differential cross sections for particles with essentially nonplanar orbits, such as spinning ones. We also give a method for its computation. The calculations are carried out explicitly for electromagnetic, gravitational, and short-range scalar interactions up to the linear terms in the slow-motion approximation. The contribution of the spin-spin terms is found to be at best 10^-6 times the post-Newtonian ones for the gravitational interaction.
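For orientation, one natural way to set up such a definition (our notation, sketched from the abstract rather than transcribed from the paper) parametrizes incoming trajectories by an impact vector (b, β) in the plane transverse to the beam and maps it to the outgoing direction (θ, φ), which for nonplanar orbits depends on both coordinates:

\[
d\sigma = b\,db\,d\beta, \qquad d\Omega = \sin\theta\,d\theta\,d\varphi,
\qquad
\frac{d\sigma}{d\Omega}
= \sum_i \frac{b_i}{\sin\theta}\,
\left|\det\frac{\partial(\theta,\varphi)}{\partial(b,\beta)}\right|_i^{-1},
\]

where the sum runs over all impact-plane points scattered into the same final direction; for planar orbits φ decouples and this reduces to the textbook formula dσ/dΩ = (b/sin θ)|db/dθ|.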
Abstract:
(2+1)-dimensional anti-de Sitter (AdS) gravity is quantized in the presence of an external scalar field. We find that the coupling between the scalar field and gravity is equivalently described by a perturbed conformal field theory at the boundary of AdS3. This allows us to perform a microscopic computation of the transition rates between black hole states due to absorption and induced emission of the scalar field. Detailed thermodynamic balance then yields Hawking radiation as spontaneous emission, and we find agreement with the semiclassical result, including greybody factors. This result also has application to four- and five-dimensional black holes in supergravity.
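The detailed-balance step can be sketched as follows (standard reasoning in our notation, with T_H the Hawking temperature and σ_abs(ω) the absorption cross section; not a transcription of the paper's computation): equilibrium between absorption and emission by the black hole requires

\[
\frac{\Gamma_{\mathrm{em}}(\omega)}{\Gamma_{\mathrm{abs}}(\omega)} = e^{-\omega/T_H}
\quad\Longrightarrow\quad
d\Gamma_{\mathrm{spont}} = \sigma_{\mathrm{abs}}(\omega)\,
\frac{1}{e^{\omega/T_H}-1}\,\frac{d^3k}{(2\pi)^3},
\]

i.e. a thermal Hawking flux filtered by the greybody factor σ_abs(ω).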
Abstract:
Cortical folding (gyrification) is determined during the first months of life, so that adverse events occurring during this period leave traces that will be identifiable at any age. As recently reviewed by Mangin and colleagues(2), several methods exist to quantify different characteristics of gyrification. For instance, sulcal morphometry can be used to measure shape descriptors such as the depth, length or indices of inter-hemispheric asymmetry(3). These geometrical properties have the advantage of being easy to interpret. However, sulcal morphometry relies tightly on the accurate identification of a given set of sulci and hence provides a fragmented description of gyrification. A more fine-grained quantification of gyrification can be achieved with curvature-based measurements, where smoothed absolute mean curvature is typically computed at thousands of points over the cortical surface(4). The curvature is however not straightforward to comprehend, as it remains unclear whether there is any direct relationship between curvedness and a biologically meaningful correlate such as cortical volume or surface. To address the diverse issues raised by the measurement of cortical folding, we previously developed an algorithm to quantify local gyrification with high spatial resolution and a simple interpretation. Our method is inspired by the Gyrification Index(5), a method originally used in comparative neuroanatomy to evaluate cortical folding differences across species. In our implementation, which we name local Gyrification Index (lGI(1)), we measure the amount of cortex buried within the sulcal folds as compared with the amount of visible cortex in circular regions of interest. Given that the cortex grows primarily through radial expansion(6), our method was specifically designed to identify early defects of cortical development. In this article, we detail the computation of the local Gyrification Index, which is now freely distributed as a part of the FreeSurfer Software (http://surfer.nmr.mgh.harvard.edu/, Martinos Center for Biomedical Imaging, Massachusetts General Hospital). FreeSurfer provides a set of automated tools for reconstruction of the brain's cortical surface from structural MRI data. The cortical surface extracted in the native space of the images with sub-millimeter accuracy is then further used for the creation of an outer surface, which serves as a basis for the lGI calculation. A circular region of interest is then delineated on the outer surface, and its corresponding region of interest on the cortical surface is identified using a matching algorithm, as described in our validation study(1). This process is iterated repeatedly with largely overlapping regions of interest, resulting in cortical maps of gyrification for subsequent statistical comparisons (Fig. 1). Of note, another measurement of local gyrification with a similar inspiration was proposed by Toro and colleagues(7), where the folding index at each point is computed as the ratio of the cortical area contained in a sphere to the area of a disc with the same radius. The two implementations differ in that the one by Toro et al. is based on Euclidean distances and thus considers discontinuous patches of cortical area, whereas ours uses a strict geodesic algorithm and includes only the continuous patch of cortical area opening at the brain surface in a circular region of interest.
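In formula form, the measure described above amounts to (notation ours):

\[
\mathrm{lGI}(v) \;=\;
\frac{A_{\mathrm{cortical}}\!\left(\mathrm{ROI}_v\right)}
     {A_{\mathrm{outer}}\!\left(\mathrm{ROI}_v\right)},
\]

where ROI_v is the circular region of interest centered at vertex v of the outer surface, A_outer its area, and A_cortical the area of the matched patch of cortical surface, including the cortex buried within the sulcal folds; values near 1 indicate a flat cortex and larger values a more folded one.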
Abstract:
Preface
The starting point for this work, and eventually the subject of the whole thesis, was the question: how to estimate the parameters of affine stochastic volatility jump-diffusion models. These models are very important for contingent claim pricing. Their major advantage, the availability of analytical solutions for characteristic functions, made them the models of choice for many theoretical constructions and practical applications. At the same time, estimation of the parameters of stochastic volatility jump-diffusion models is not a straightforward task. The problem stems from the variance process, which is not observable. There are several estimation methodologies that deal with the estimation of latent variables. One appeared particularly interesting: the Continuous Empirical Characteristic Function (ECF) estimator based on the unconditional characteristic function, which, in contrast to the other methods, requires neither discretization nor simulation of the process. However, the procedure had been derived only for stochastic volatility models without jumps. Thus, it became the subject of my research. This thesis consists of three parts. Each is written as an independent and self-contained article. At the same time, the questions answered by the second and third parts of this work arise naturally from the issues investigated and the results obtained in the first one. The first chapter is the theoretical foundation of the thesis. It proposes an estimation procedure for stochastic volatility models with jumps in both the asset price and variance processes. The estimation procedure is based on the joint unconditional characteristic function of the stochastic process. The major analytical result of this part, as well as of the whole thesis, is the closed-form expression for the joint unconditional characteristic function of stochastic volatility jump-diffusion models. The empirical part of the chapter suggests that, besides stochastic volatility, jumps in both the mean and the volatility equation are relevant for modelling returns of the S&P500 index, which was chosen as a general representative of the stock asset class. Hence, the next question is: what jump process to use to model returns of the S&P500. The decision about the jump process in the framework of affine jump-diffusion models boils down to defining the intensity of the compound Poisson process, a constant or some function of state variables, and to choosing the distribution of the jump size. While the jump in the variance process is usually assumed to be exponential, there are at least three distributions of the jump size currently used for the asset log-prices: normal, exponential and double exponential. The second part of this thesis shows that normal jumps in the asset log-returns should be used if we are to model the S&P500 index by a stochastic volatility jump-diffusion model. This is a surprising result: the exponential distribution has fatter tails, and for this reason either the exponential or the double-exponential jump size was expected to provide the best fit of the stochastic volatility jump-diffusion models to the data. The idea of testing the efficiency of the Continuous ECF estimator on simulated data had already appeared when the first estimation results of the first chapter were obtained. In the absence of a benchmark or any ground for comparison, it is unreasonable to be sure that our parameter estimates and the true parameters of the models coincide.
The conclusion of the second chapter provides one more reason to do that kind of test. Thus, the third part of this thesis concentrates on the estimation of the parameters of stochastic volatility jump-diffusion models on the basis of asset price time series simulated from various "true" parameter sets. The goal is to show that the Continuous ECF estimator based on the joint unconditional characteristic function is capable of finding the true parameters, and the third chapter demonstrates that our estimator indeed has this ability. Once it is clear that the Continuous ECF estimator based on the unconditional characteristic function works, the next question arises naturally: whether the computational effort can be reduced without affecting the efficiency of the estimator, or whether the efficiency of the estimator can be improved without dramatically increasing the computational burden. The efficiency of the Continuous ECF estimator depends on the number of dimensions of the joint unconditional characteristic function used for its construction. Theoretically, the more dimensions there are, the more efficient the estimation procedure is. In practice, however, this relationship is not so straightforward, owing to the increasing computational difficulties. The second chapter, for example, in addition to the choice of the jump process, discusses the possibility of using the marginal, i.e. one-dimensional, unconditional characteristic function in the estimation instead of the joint, bi-dimensional, unconditional characteristic function. As a result, the preference for one or the other depends on the model to be estimated. Thus, the computational effort can be reduced in some cases without affecting the efficiency of the estimator. Improving the estimator's efficiency by increasing its dimensionality faces more difficulties. The third chapter of this thesis, in addition to what was discussed above, compares the performance of the estimators with bi- and three-dimensional unconditional characteristic functions on the simulated data. It shows that the theoretical efficiency of the Continuous ECF estimator based on the three-dimensional unconditional characteristic function is not attainable in practice, at least for the moment, due to the limitations of the computing power and optimization toolboxes available to the general public. Thus, the Continuous ECF estimator based on the joint, bi-dimensional, unconditional characteristic function has every reason to exist and to be used for the estimation of the parameters of stochastic volatility jump-diffusion models.
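To illustrate the principle of characteristic-function-based estimation (a minimal one-dimensional sketch, not the thesis's joint-CF estimator for SVJD models), the following Python code fits a Gaussian, whose CF is available in closed form, by minimizing a weighted L2 distance between the model CF and the empirical CF:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(2)
x = rng.normal(loc=0.5, scale=2.0, size=5000)   # toy data in place of returns

u = np.linspace(-3.0, 3.0, 121)                 # grid of CF arguments
du = u[1] - u[0]
w = np.exp(-u**2)                               # weight keeping the integral finite
ecf = np.exp(1j * np.outer(u, x)).mean(axis=1)  # empirical characteristic function

def objective(theta):
    mu, sig = theta
    model_cf = np.exp(1j * u * mu - 0.5 * (sig * u) ** 2)   # Gaussian CF
    return np.sum(w * np.abs(ecf - model_cf) ** 2) * du     # weighted L2 distance

fit = minimize(objective, x0=[0.0, 1.0], method="Nelder-Mead")
print("estimated (mu, sigma):", fit.x)          # should approach (0.5, 2.0)
```

In the thesis, the Gaussian CF would be replaced by the closed-form joint unconditional characteristic function of the affine SVJD model, with the distance evaluated over a multi-dimensional grid of CF arguments.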
Abstract:
Background: Molecular tools may help to uncover closely related and still diverging species from a wide variety of taxa and provide insight into the mechanisms, pace and geography of marine speciation. There is a certain controversy on the phylogeography and speciation modes of species-groups with an Eastern Atlantic-Western Indian Ocean distribution, with previous studies suggesting that older events (Miocene) and/or more recent (Pleistocene) oceanographic processes could have influenced the phylogeny of marine taxa. The spiny lobster genus Palinurus allows for testing among speciation hypotheses, since it has a particular distribution with two groups of three species each in the Northeastern Atlantic (P. elephas, P. mauritanicus and P. charlestoni) and Southeastern Atlantic and Southwestern Indian Oceans (P. gilchristi, P. delagoae and P. barbarae). In the present study, we obtain a more complete understanding of the phylogenetic relationships among these species through a combined dataset with both nuclear and mitochondrial markers, by testing alternative hypotheses on both the mutation rate and tree topology under the recently developed approximate Bayesian computation (ABC) methods.
Results: Our analyses support a North-to-South speciation pattern in Palinurus, with all the South African species forming a monophyletic clade nested within the Northern Hemisphere species. Coalescent-based ABC methods allowed us to reject the previously proposed hypothesis of a Middle Miocene speciation event related to the closure of the Tethyan Seaway. Instead, divergence times obtained for Palinurus species using the combined mtDNA-microsatellite dataset and standard mutation rates for mtDNA agree with known glaciation-related processes occurring during the last 2 million years.
Conclusion: The Palinurus speciation pattern is a typical example of a series of rapid speciation events occurring within a group, with very short branches separating different species. Our results support the hypothesis that recent climate change-related oceanographic processes have influenced the phylogeny of marine taxa, with most Palinurus species originating during the last two million years. The present study highlights the value of new coalescent-based statistical methods such as ABC for testing different speciation hypotheses using molecular data.
Abstract:
Whole-body counting is a technique of choice for assessing the intake of gamma-emitting radionuclides. An appropriate calibration is necessary, which is done either by experimental measurement or by Monte Carlo (MC) calculation. The aim of this work was to validate a MC model for calibrating whole-body counters (WBCs) by comparing the results of computations with measurements performed on an anthropomorphic phantom and to investigate the effect of a change in phantom's position on the WBC counting sensitivity. GEANT MC code was used for the calculations, and an IGOR phantom loaded with several types of radionuclides was used for the experimental measurements. The results show a reasonable agreement between measurements and MC computation. A 1-cm error in phantom positioning changes the activity estimation by >2%. Considering that a 5-cm deviation of the positioning of the phantom may occur in a realistic counting scenario, this implies that the uncertainty of the activity measured by a WBC is ∼10-20%.
Abstract:
Objectives: Therapeutic drug monitoring (TDM) aims at optimizing treatment by individualizing the dosage regimen based on blood concentration measurements. Maintaining concentrations within a target range requires pharmacokinetic (PK) and clinical capabilities. Bayesian calculation represents a gold standard in the TDM approach but requires computing assistance. The aim of this benchmarking was to assess and compare computer tools designed to support TDM clinical activities.
Methods: The literature and the Internet were searched to identify software. Each program was scored against a standardized grid covering pharmacokinetic relevance, user-friendliness, computing aspects, interfacing, and storage. A weighting factor was applied to each criterion of the grid to consider its relative importance. To assess the robustness of the software, six representative clinical vignettes were also processed through all of them.
Results: 12 software tools were identified, tested and ranked, representing a comprehensive review of the available software characteristics. The number of drugs handled varies from 2 to more than 180, and integration of different population types is available for some programs. Moreover, 8 programs offer the ability to add new drug models based on population PK data. 10 computer tools incorporate Bayesian computation to predict the dosage regimen (individual parameters are calculated based on population PK models). All of them are able to compute a Bayesian a posteriori dosage adaptation based on a blood concentration, while 9 are also able to suggest an a priori dosage regimen based only on individual patient covariates. Among those applying Bayesian analysis, MM-USC*PACK uses a non-parametric approach. The top 2 programs emerging from this benchmark are MwPharm and TCIWorks. The other programs evaluated also have good potential but are less sophisticated or less user-friendly.
Conclusions: Whereas 2 software packages are ranked at the top of the list, such complex tools would possibly not fit all institutions, and each program must be regarded with respect to the individual needs of hospitals or clinicians. Programs should be easy and fast for routine activities, including for non-experienced users. Although interest in TDM tools is growing and effort has been put into them in recent years, there is still room for improvement, especially in terms of institutional information system interfacing, user-friendliness, capability of data storage and automated report generation.
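The Bayesian a posteriori step shared by these tools can be sketched as a MAP estimate of individual PK parameters under population priors, given one measured concentration; the one-compartment IV bolus model and all numbers below are illustrative assumptions, not taken from any benchmarked program.

```python
import numpy as np
from scipy.optimize import minimize

# Illustrative population values for a one-compartment IV bolus model.
pop_cl, pop_v = 5.0, 50.0        # population clearance (L/h) and volume (L)
omega_cl, omega_v = 0.3, 0.25    # between-subject variability (log scale)
sigma = 0.15                     # residual error (log scale)
dose, t_obs, c_obs = 500.0, 12.0, 4.2   # dose (mg), sampling time (h), level (mg/L)

def conc(cl, v, t):
    """C(t) = (dose/V) * exp(-(CL/V) * t) for an IV bolus."""
    return (dose / v) * np.exp(-(cl / v) * t)

def neg_log_posterior(theta):
    log_cl, log_v = theta
    pred = conc(np.exp(log_cl), np.exp(log_v), t_obs)
    ll = -0.5 * ((np.log(c_obs) - np.log(pred)) / sigma) ** 2    # likelihood
    lp = (-0.5 * ((log_cl - np.log(pop_cl)) / omega_cl) ** 2     # log-normal
          - 0.5 * ((log_v - np.log(pop_v)) / omega_v) ** 2)      # priors
    return -(ll + lp)

fit = minimize(neg_log_posterior, x0=[np.log(pop_cl), np.log(pop_v)])
cl_i, v_i = np.exp(fit.x)
print(f"individual CL ~ {cl_i:.2f} L/h, V ~ {v_i:.1f} L")
# A maintenance dose targeting an average steady-state level then follows
# from: dosing rate = CL_i * C_target.
```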
Abstract:
The labor share of national income is constant under the assumptions of a Cobb-Douglas production function and perfect competition. In this article we relax these assumptions and investigate whether the non-constant behavior of the labor share of national income is explained by (i) a non-unitary elasticity of substitution between capital and labor and (ii) imperfect competition in the product market. We focus on Spain and the U.S. and estimate a production function with constant elasticity of substitution and imperfect competition in the product market. The degree of imperfect competition is measured by computing the price markup based on the dual approach. We show that the elasticity of substitution is greater than one in Spain and smaller than one in the U.S. We also show that the price markup moves the elasticity of substitution away from one, increasing it in Spain and reducing it in the U.S. These results are used to explain the declining path of the labor share of national income, common to both economies, and their contrasting capital paths.
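The mechanism can be summarized with a CES technology and a price markup μ (a sketch in our notation, not the exact estimated specification): with

\[
Y = \Big[\alpha K^{\frac{\sigma-1}{\sigma}} + (1-\alpha)\,L^{\frac{\sigma-1}{\sigma}}\Big]^{\frac{\sigma}{\sigma-1}},
\qquad
s_L \equiv \frac{wL}{Y} = \frac{1-\alpha}{\mu}\left(\frac{L}{Y}\right)^{\frac{\sigma-1}{\sigma}},
\]

the labor share is constant only when σ = 1 (Cobb-Douglas) and the markup is constant; otherwise it moves with labor productivity Y/L, at a level scaled by the markup μ.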
Abstract:
False identity documents constitute a potentially powerful source of forensic intelligence because they are essential elements of transnational crime and provide cover for organized crime. In previous work, a systematic profiling method using false documents' visual features has been built within a forensic intelligence model. In the current study, the comparison process and metrics lying at the heart of this profiling method are described and evaluated. This evaluation takes advantage of 347 false identity documents of four different types seized in two countries whose sources were known to be common or different (following police investigations and the dismantling of counterfeit factories). Intra-source and inter-source variations were evaluated through the computation of more than 7500 similarity scores. The profiling method could thus be validated and its performance assessed using two complementary approaches to measuring type I and type II error rates: a binary classification and the computation of likelihood ratios. Very low error rates were measured across the four document types, demonstrating the validity and robustness of the method for linking documents to a common source or differentiating them. These results pave the way for an operational implementation of a systematic profiling process integrated in a developed forensic intelligence model.
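Formally, the two evaluation approaches can be sketched as follows (our notation): a threshold on the similarity score s yields binary type I/type II error rates, while the likelihood ratio weighs the score under the two competing source hypotheses,

\[
\mathrm{LR}(s) =
\frac{p\big(s \mid H_{\text{same source}}\big)}
     {p\big(s \mid H_{\text{different sources}}\big)},
\]

with the two densities estimated from the intra-source and inter-source score distributions; LR > 1 supports a common source and LR < 1 different sources.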
Abstract:
This study aims to improve the accuracy and usability of Iowa Falling Weight Deflectometer (FWD) data by incorporating significant enhancements into the fully automated software system for rapid processing of FWD data. These enhancements include: (1) refined prediction of the backcalculated pavement layer modulus through deflection basin matching/optimization, (2) temperature correction of the backcalculated Hot-Mix Asphalt (HMA) layer modulus, (3) computation of the 1993 AASHTO design guide effective structural number (SNeff) and effective k-value (keff), (4) computation of the Iowa DOT asphalt concrete (AC) overlay design related Structural Rating (SR) and k-value (k), and (5) enhanced user-friendliness of input and output in the software tool. A high-quality, easy-to-use backcalculation software package, referred to as I-BACK, the Iowa Pavement Backcalculation Software, was developed to achieve the project goals and requirements. This report presents the theoretical background behind the incorporated enhancements as well as guidance on the use of I-BACK. The developed tool provides more finely tuned ANN pavement backcalculation results through the implementation of a deflection basin matching optimizer for conventional flexible, full-depth, rigid, and composite pavements. Implementation of this tool within the Iowa DOT will facilitate accurate pavement structural evaluation and rehabilitation designs for pavement/asset management purposes. This research has also set the framework for the development of a simplified FWD deflection-based HMA overlay design procedure, which is one of the recommended areas for future research.
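As a pointer to enhancement (3), the 1993 AASHTO Guide expresses the effective structural number of an existing pavement in terms of its total thickness D (inches) and the effective pavement modulus E_p (psi) backcalculated from FWD deflections (quoted from the standard guide formula; the report's implementation may add refinements):

\[
SN_{\mathrm{eff}} = 0.0045\, D\, \sqrt[3]{E_p}.
\]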