977 results for intersection computation


Relevance:

10.00%

Publisher:

Abstract:

We propose a definition of classical differential cross sections for particles with essentially nonplanar orbits, such as spinning ones, and give a method for their computation. The calculations are carried out explicitly for electromagnetic, gravitational, and short-range scalar interactions up to the linear terms in the slow-motion approximation. For the gravitational interaction, the contribution of the spin-spin terms is found to be at most 10^-6 times that of the post-Newtonian ones.


(2+1)-dimensional anti-de Sitter (AdS) gravity is quantized in the presence of an external scalar field. We find that the coupling between the scalar field and gravity is equivalently described by a perturbed conformal field theory at the boundary of AdS3. This allows us to perform a microscopic computation of the transition rates between black hole states due to absorption and induced emission of the scalar field. Detailed thermodynamic balance then yields Hawking radiation as spontaneous emission, and we find agreement with the semiclassical result, including greybody factors. This result also has applications to four- and five-dimensional black holes in supergravity.


Cortical folding (gyrification) is determined during the first months of life, so that adverse events occurring during this period leave traces that remain identifiable at any age. As recently reviewed by Mangin and colleagues(2), several methods exist to quantify different characteristics of gyrification. For instance, sulcal morphometry can be used to measure shape descriptors such as depth, length or indices of inter-hemispheric asymmetry(3). These geometrical properties have the advantage of being easy to interpret. However, sulcal morphometry relies tightly on the accurate identification of a given set of sulci and hence provides a fragmented description of gyrification. A more fine-grained quantification of gyrification can be achieved with curvature-based measurements, where the smoothed absolute mean curvature is typically computed at thousands of points over the cortical surface(4). The curvature is, however, not straightforward to interpret, as it remains unclear whether there is any direct relationship between curvedness and a biologically meaningful correlate such as cortical volume or surface area. To address the diverse issues raised by the measurement of cortical folding, we previously developed an algorithm that quantifies local gyrification with high spatial resolution and a simple interpretation. Our method is inspired by the Gyrification Index(5), a method originally used in comparative neuroanatomy to evaluate cortical folding differences across species. In our implementation, which we name the local Gyrification Index (lGI(1)), we measure the amount of cortex buried within the sulcal folds compared with the amount of visible cortex in circular regions of interest. Given that the cortex grows primarily through radial expansion(6), our method was specifically designed to identify early defects of cortical development.
In this article, we detail the computation of the local Gyrification Index, which is now freely distributed as part of the FreeSurfer software (http://surfer.nmr.mgh.harvard.edu/, Martinos Center for Biomedical Imaging, Massachusetts General Hospital). FreeSurfer provides a set of automated tools for reconstructing the brain's cortical surface from structural MRI data. The cortical surface, extracted in the native space of the images with sub-millimeter accuracy, is then used to create an outer surface, which serves as the basis for the lGI calculation. A circular region of interest is then delineated on the outer surface, and its corresponding region of interest on the cortical surface is identified using a matching algorithm described in our validation study(1). This process is iterated with largely overlapping regions of interest, resulting in cortical maps of gyrification for subsequent statistical comparisons (Fig. 1). Of note, another measurement of local gyrification with a similar inspiration was proposed by Toro and colleagues(7), in which the folding index at each point is computed as the ratio of the cortical area contained in a sphere to the area of a disc with the same radius. The two implementations differ in that the one by Toro et al. is based on Euclidean distances and thus considers discontinuous patches of cortical area, whereas ours uses a strict geodesic algorithm and includes only the continuous patch of cortical area opening at the brain surface in a circular region of interest.
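The ratio at the core of the method can be sketched in a few lines. This is an illustration only: precomputed patch areas stand in for FreeSurfer's triangulated-mesh computations and geodesic region matching, and the function name is hypothetical.

```python
# Illustrative sketch of the local Gyrification Index (lGI) idea:
# the ratio of the full (buried plus visible) cortical surface area
# to the area of the outer hull surface, for matched circular ROIs.
# Real computations operate on triangulated meshes, not scalar areas.

def local_gyrification_index(cortical_patch_area: float,
                             outer_patch_area: float) -> float:
    """lGI = cortical surface area / outer hull area for matched ROIs."""
    if outer_patch_area <= 0:
        raise ValueError("outer patch area must be positive")
    return cortical_patch_area / outer_patch_area

# A heavily folded region buries much of its surface, giving lGI well above 1.
print(local_gyrification_index(cortical_patch_area=30.0,
                               outer_patch_area=10.0))  # -> 3.0
```

An unfolded (lissencephalic) patch would give a value near 1, which is what makes the index easy to interpret.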


Preface: The starting point for this work, and eventually the subject of the whole thesis, was the question of how to estimate the parameters of affine stochastic volatility jump-diffusion models. These models are very important for contingent claim pricing. Their major advantage, the availability of analytical solutions for characteristic functions, made them the models of choice for many theoretical constructions and practical applications. At the same time, estimating the parameters of stochastic volatility jump-diffusion models is not a straightforward task. The problem stems from the variance process, which is not observable. Several estimation methodologies deal with the estimation of latent variables. One appeared particularly interesting: the Continuous Empirical Characteristic Function (ECF) estimator based on the unconditional characteristic function, which, in contrast to the other methods, requires neither discretization nor simulation of the process. However, the procedure had been derived only for stochastic volatility models without jumps, and it thus became the subject of my research. This thesis consists of three parts, each written as an independent, self-contained article. At the same time, the questions answered by the second and third parts arise naturally from the issues investigated and the results obtained in the first one. The first chapter is the theoretical foundation of the thesis. It proposes an estimation procedure for stochastic volatility models with jumps in both the asset price and variance processes. The estimation procedure is based on the joint unconditional characteristic function of the stochastic process. The major analytical result of this part, as well as of the whole thesis, is a closed-form expression for the joint unconditional characteristic function for stochastic volatility jump-diffusion models.
The empirical part of the chapter suggests that, besides stochastic volatility, jumps in both the mean and the volatility equations are relevant for modelling returns of the S&P500 index, which was chosen as a general representative of the stock asset class. Hence, the next question is what jump process to use to model S&P500 returns. The decision about the jump process in the framework of affine jump-diffusion models boils down to defining the intensity of the compound Poisson process, either a constant or some function of the state variables, and to choosing the distribution of the jump size. While the jump in the variance process is usually assumed to be exponential, there are at least three jump-size distributions currently used for the asset log-prices: normal, exponential and double exponential. The second part of this thesis shows that normal jumps in the asset log-returns should be used if we are to model the S&P500 index with a stochastic volatility jump-diffusion model. This is a surprising result: the exponential distribution has fatter tails, and for this reason either an exponential or a double exponential jump size was expected to provide the best fit of the stochastic volatility jump-diffusion models to the data. The idea of testing the efficiency of the Continuous ECF estimator on simulated data had already appeared when the first estimation results of the first chapter were obtained. In the absence of a benchmark or any ground for comparison, it is unreasonable to be sure that our parameter estimates coincide with the true parameters of the models. The conclusion of the second chapter provides one more reason for that kind of test. Thus, the third part of this thesis concentrates on estimating the parameters of stochastic volatility jump-diffusion models from asset price time series simulated from various "true" parameter sets.
The goal is to show that the Continuous ECF estimator based on the joint unconditional characteristic function is capable of finding the true parameters, and the third chapter shows that the estimator indeed has this ability. Once it is clear that the Continuous ECF estimator based on the unconditional characteristic function works, the next question arises immediately: can the computational effort be reduced without affecting the efficiency of the estimator, or can the efficiency of the estimator be improved without dramatically increasing the computational burden? The efficiency of the Continuous ECF estimator depends on the number of dimensions of the joint unconditional characteristic function used in its construction. Theoretically, the more dimensions there are, the more efficient the estimation procedure. In practice, however, this relationship is not so straightforward, owing to increasing computational difficulties. The second chapter, for example, in addition to the choice of the jump process, discusses the possibility of using the marginal, i.e. one-dimensional, unconditional characteristic function in the estimation instead of the joint, bi-dimensional, one. As a result, the preference for one or the other depends on the model to be estimated; the computational effort can thus be reduced in some cases without affecting the efficiency of the estimator. Improving the estimator's efficiency by increasing its dimensionality faces more difficulties. The third chapter of this thesis, in addition to what was discussed above, compares the performance of the estimators with bi- and three-dimensional unconditional characteristic functions on simulated data.
It shows that the theoretical efficiency of the Continuous ECF estimator based on the three-dimensional unconditional characteristic function is not attainable in practice, at least for the moment, owing to the limits of the computing power and optimization toolboxes available to the general public. Thus, the Continuous ECF estimator based on the joint, bi-dimensional, unconditional characteristic function remains fully justified for estimating the parameters of stochastic volatility jump-diffusion models.
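The core idea of an empirical characteristic function estimator can be illustrated on a much simpler model than the thesis's jump-diffusions. The sketch below, under stated assumptions, matches the empirical CF of simulated normal data to the closed-form normal CF over a frequency grid; the grid search, sample sizes, and the normal model itself are illustrative stand-ins, not the thesis's procedure.

```python
# Minimal sketch of a continuous-ECF-style estimator: minimize the
# discretized squared distance between the empirical characteristic
# function of the data and the model characteristic function.
import cmath
import random

def empirical_cf(data, u):
    """phi_hat(u) = (1/n) * sum_j exp(i*u*x_j)."""
    return sum(cmath.exp(1j * u * x) for x in data) / len(data)

def normal_cf(u, mu, sigma):
    """Closed-form CF of N(mu, sigma^2)."""
    return cmath.exp(1j * u * mu - 0.5 * (sigma * u) ** 2)

random.seed(0)
sample = [random.gauss(1.0, 2.0) for _ in range(2000)]
grid = [0.1 * k for k in range(1, 21)]
ecf = {u: empirical_cf(sample, u) for u in grid}   # precompute once

def distance(mu, sigma):
    """Discretized integrated squared CF distance over the grid."""
    return sum(abs(ecf[u] - normal_cf(u, mu, sigma)) ** 2 for u in grid)

# Crude grid search standing in for a numerical optimizer.
best = min(((m / 10, s / 10) for m in range(-30, 31) for s in range(5, 40)),
           key=lambda p: distance(*p))
print(best)  # should land near the true (mu, sigma) = (1.0, 2.0)
```

The appeal noted in the preface is visible even here: no discretization or simulation of the process is needed once the model CF is known in closed form.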


Background: Molecular tools may help to uncover closely related and still diverging species from a wide variety of taxa and provide insight into the mechanisms, pace and geography of marine speciation. There is some controversy over the phylogeography and speciation modes of species groups with an Eastern Atlantic-Western Indian Ocean distribution, with previous studies suggesting that older (Miocene) events and/or more recent (Pleistocene) oceanographic processes could have influenced the phylogeny of marine taxa. The spiny lobster genus Palinurus allows testing among speciation hypotheses, since it has a particular distribution with two groups of three species each in the Northeastern Atlantic (P. elephas, P. mauritanicus and P. charlestoni) and the Southeastern Atlantic and Southwestern Indian Oceans (P. gilchristi, P. delagoae and P. barbarae). In the present study, we obtain a more complete understanding of the phylogenetic relationships among these species through a combined dataset with both nuclear and mitochondrial markers, by testing alternative hypotheses on both the mutation rate and tree topology under the recently developed approximate Bayesian computation (ABC) methods. Results: Our analyses support a North-to-South speciation pattern in Palinurus, with all the South African species forming a monophyletic clade nested within the Northern Hemisphere species. Coalescent-based ABC methods allowed us to reject the previously proposed hypothesis of a Middle Miocene speciation event related to the closure of the Tethyan Seaway. Instead, divergence times obtained for Palinurus species using the combined mtDNA-microsatellite dataset and standard mutation rates for mtDNA agree with known glaciation-related processes occurring during the last 2 million years. Conclusion: The Palinurus speciation pattern is a typical example of a series of rapid speciation events occurring within a group, with very short branches separating different species.
Our results support the hypothesis that recent climate change-related oceanographic processes have influenced the phylogeny of marine taxa, with most Palinurus species originating during the last two million years. The present study highlights the value of new coalescent-based statistical methods such as ABC for testing different speciation hypotheses using molecular data.
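The class of methods highlighted in the conclusion can be sketched in its simplest form, ABC by rejection: draw parameters from the prior, simulate data under the model, and keep draws whose summary statistic falls close to the observed one. The toy Gaussian model, prior, and tolerance below are invented for illustration and have nothing to do with the lobster dataset.

```python
# Minimal approximate Bayesian computation (ABC) rejection sampler.
import random

random.seed(1)

def simulate_mean(theta, n=50):
    """Toy data-generating model: mean of n Gaussian observations."""
    return sum(random.gauss(theta, 1.0) for _ in range(n)) / n

observed_summary = 2.0    # summary statistic of the "observed" data
tolerance = 0.1

accepted = []
for _ in range(20000):
    theta = random.uniform(-5.0, 5.0)        # draw from a flat prior
    if abs(simulate_mean(theta) - observed_summary) < tolerance:
        accepted.append(theta)               # approximate posterior draw

posterior_mean = sum(accepted) / len(accepted)
print(round(posterior_mean, 2))  # concentrates near 2.0
```

Competing speciation hypotheses are compared in the same spirit, by simulating under each scenario and seeing which one most often reproduces the observed summaries.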


Whole-body counting is a technique of choice for assessing the intake of gamma-emitting radionuclides. An appropriate calibration is necessary, performed either by experimental measurement or by Monte Carlo (MC) calculation. The aim of this work was to validate an MC model for calibrating whole-body counters (WBCs) by comparing the results of computations with measurements performed on an anthropomorphic phantom, and to investigate the effect of a change in the phantom's position on the WBC counting sensitivity. The GEANT MC code was used for the calculations, and an IGOR phantom loaded with several types of radionuclides was used for the experimental measurements. The results show reasonable agreement between measurements and MC computation. A 1-cm error in phantom positioning changes the activity estimate by >2%. Considering that a 5-cm deviation in the positioning of the phantom may occur in a realistic counting scenario, this implies that the uncertainty of the activity measured by a WBC is ∼10-20%.


Objectives: Therapeutic drug monitoring (TDM) aims at optimizing treatment by individualizing the dosage regimen based on blood concentration measurements. Maintaining concentrations within a target range requires pharmacokinetic (PK) and clinical capabilities. Bayesian calculation represents a gold standard in the TDM approach but requires computing assistance. The aim of this benchmarking was to assess and compare computer tools designed to support TDM clinical activities.

Methods: The literature and the Internet were searched to identify software. Each program was scored against a standardized grid covering pharmacokinetic relevance, user-friendliness, computing aspects, interfacing, and storage. A weighting factor was applied to each criterion of the grid to reflect its relative importance. To assess the robustness of the software, six representative clinical vignettes were also processed through all of them.

Results: 12 software tools were identified, tested and ranked, providing a comprehensive review of the available software characteristics. The number of drugs handled varies from 2 to more than 180, and integration of different population types is available for some programs. Nevertheless, 8 programs offer the ability to add new drug models based on population PK data. 10 computer tools incorporate Bayesian computation to predict the dosage regimen (individual parameters are calculated based on population PK models). All of them are able to compute a Bayesian a posteriori dosage adaptation based on a blood concentration, while 9 are also able to suggest an a priori dosage regimen based only on individual patient covariates. Among those applying Bayesian analysis, MM-USC*PACK uses a non-parametric approach. The top 2 programs emerging from this benchmark are MwPharm and TCIWorks. Other programs evaluated also have good potential but are less sophisticated or less user-friendly.

Conclusions: Whereas 2 software packages are ranked at the top of the list, such complex tools would possibly not fit all institutions, and each program must be regarded with respect to the individual needs of hospitals or clinicians. Programs should be easy and fast for routine activities, including for non-experienced users. Although interest in TDM tools is growing and effort has been put into them in recent years, there is still room for improvement, especially in terms of institutional information system interfacing, user-friendliness, data storage capability and automated report generation.
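The weighted scoring grid described in the Methods can be sketched as follows. The criteria names follow the abstract, but the weights and per-program scores are invented for illustration; the study's actual grid and values are not reproduced here.

```python
# Sketch of a weighted scoring grid: each program is scored per
# criterion, each criterion carries a weighting factor, and programs
# are ranked by their weighted totals.
weights = {"pharmacokinetic relevance": 0.35,
           "user-friendliness": 0.25,
           "computing aspects": 0.20,
           "interfacing": 0.10,
           "storage": 0.10}

scores = {"ProgramA": {"pharmacokinetic relevance": 9, "user-friendliness": 8,
                       "computing aspects": 7, "interfacing": 6, "storage": 7},
          "ProgramB": {"pharmacokinetic relevance": 7, "user-friendliness": 9,
                       "computing aspects": 8, "interfacing": 5, "storage": 6}}

def weighted_total(program_scores):
    """Sum of criterion scores, each multiplied by its weight."""
    return sum(weights[c] * s for c, s in program_scores.items())

ranking = sorted(scores, key=lambda p: weighted_total(scores[p]), reverse=True)
print(ranking)  # -> ['ProgramA', 'ProgramB']
```

Separating the weights from the raw scores keeps the relative importance of criteria explicit and easy to revisit.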


The labor share of national income is constant under the assumptions of a Cobb-Douglas production function and perfect competition. In this article we relax these assumptions and investigate whether the non-constant behavior of the labor share of national income is explained by (i) a non-unitary elasticity of substitution between capital and labor and (ii) imperfect competition in the product market. We focus on Spain and the U.S. and estimate a production function with constant elasticity of substitution and imperfect competition in the product market. The degree of imperfect competition is measured by computing the price markup based on the dual approach. We show that the elasticity of substitution is greater than one in Spain and smaller than one in the U.S. We also show that the price markup drives the elasticity of substitution away from one: it increases it in Spain and reduces it in the U.S. These results are used to explain the declining path of the labor share of national income, common to both economies, and their contrasting capital paths.
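The mechanism in this abstract can be illustrated numerically. Under CES production with a price markup, the labor share moves with the capital-labor ratio unless the elasticity of substitution is exactly one; the parameter values below are illustrative only, not the paper's estimates for Spain or the U.S.

```python
# Labor share under CES production Y = (a*K^rho + (1-a)*L^rho)^(1/rho),
# rho = (sigma-1)/sigma, with wage = MPL / markup, so the labor share
# is MPL * L / (markup * Y). With sigma = 1 (Cobb-Douglas) the share
# collapses to the constant (1 - a) / markup.
def labor_share(K, L, alpha=0.33, sigma=1.25, markup=1.2):
    rho = (sigma - 1.0) / sigma          # substitution parameter
    if abs(rho) < 1e-12:                 # sigma = 1: Cobb-Douglas limit
        return (1.0 - alpha) / markup
    Y = (alpha * K ** rho + (1.0 - alpha) * L ** rho) ** (1.0 / rho)
    mpl = (1.0 - alpha) * L ** (rho - 1.0) * Y ** (1.0 - rho)
    return mpl * L / (markup * Y)

# With sigma > 1, capital deepening lowers the labor share ...
print(labor_share(K=1.0, L=1.0), labor_share(K=4.0, L=1.0))
# ... while sigma = 1 keeps it constant whatever the capital path.
print(labor_share(K=1.0, L=1.0, sigma=1.0),
      labor_share(K=4.0, L=1.0, sigma=1.0))
```

The markup simply scales the share down by 1/markup, which is how imperfect competition interacts with the substitution channel in this sketch.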


False identity documents constitute a potentially powerful source of forensic intelligence because they are essential elements of transnational crime and provide cover for organized crime. In previous work, a systematic profiling method using false documents' visual features was built within a forensic intelligence model. In the current study, the comparison process and metrics lying at the heart of this profiling method are described and evaluated. This evaluation takes advantage of 347 false identity documents of four different types, seized in two countries, whose sources were known to be common or different (following police investigations and the dismantling of counterfeit factories). Intra-source and inter-source variations were evaluated through the computation of more than 7500 similarity scores. The profiling method could thus be validated and its performance assessed using two complementary approaches to measuring type I and type II error rates: binary classification and the computation of likelihood ratios. Very low error rates were measured across the four document types, demonstrating the validity and robustness of the method for linking documents to a common source or differentiating them. These results pave the way for an operational implementation of a systematic profiling process integrated into a developed forensic intelligence model.
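The binary-classification evaluation can be sketched as follows: given similarity scores for known same-source and different-source document pairs, a decision threshold yields a type I rate (different sources falsely linked) and a type II rate (common sources missed). The scores and threshold below are invented for illustration, not the study's data.

```python
# Type I / type II error rates for a threshold on similarity scores.
def error_rates(same_source_scores, diff_source_scores, threshold):
    """Declare 'common source' whenever score >= threshold."""
    false_links = sum(s >= threshold for s in diff_source_scores)
    missed_links = sum(s < threshold for s in same_source_scores)
    type1 = false_links / len(diff_source_scores)
    type2 = missed_links / len(same_source_scores)
    return type1, type2

same = [0.92, 0.88, 0.95, 0.81, 0.90]   # intra-source similarity scores
diff = [0.30, 0.45, 0.20, 0.55, 0.38]   # inter-source similarity scores
print(error_rates(same, diff, threshold=0.7))  # -> (0.0, 0.0)
```

Sweeping the threshold trades the two error types against each other; the likelihood-ratio approach mentioned in the abstract avoids fixing a single threshold by weighing each score against the two score distributions.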


This study aims to improve the accuracy and usability of Iowa Falling Weight Deflectometer (FWD) data by incorporating significant enhancements into the fully automated software system for rapid processing of the FWD data. These enhancements include: (1) refined prediction of the backcalculated pavement layer modulus through deflection basin matching/optimization, (2) temperature correction of the backcalculated Hot-Mix Asphalt (HMA) layer modulus, (3) computation of the 1993 AASHTO design guide effective SN (SNeff) and effective k-value (keff), (4) computation of the Iowa DOT asphalt concrete (AC) overlay design Structural Rating (SR) and k-value (k), and (5) enhanced user-friendliness of input and output from the software tool. A high-quality, easy-to-use backcalculation software package, referred to as I-BACK (the Iowa Pavement Backcalculation Software), was developed to achieve the project goals and requirements. This report presents the theoretical background behind the incorporated enhancements as well as guidance on the use of I-BACK. The developed tool provides more finely tuned ANN pavement backcalculation results through the implementation of a deflection basin matching optimizer for conventional flexible, full-depth, rigid, and composite pavements. Implementation of this tool within the Iowa DOT will facilitate accurate pavement structural evaluation and rehabilitation design for pavement/asset management purposes. This research has also set the framework for the development of a simplified FWD-deflection-based HMA overlay design procedure, which is one of the recommended areas for future research.
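One of the quantities listed in item (3) can be sketched directly. To my understanding, the 1993 AASHTO guide computes the effective structural number from the total pavement thickness and the backcalculated effective pavement modulus as SNeff = 0.0045 * D * Ep^(1/3); this formula and the inputs below are stated from general familiarity with the guide, not taken from this report, so treat them as an assumption.

```python
# Effective structural number per the commonly cited 1993 AASHTO form:
# SNeff = 0.0045 * D * Ep**(1/3), D in inches, Ep in psi.
def effective_structural_number(thickness_in: float, ep_psi: float) -> float:
    """SNeff from total thickness and backcalculated effective modulus."""
    return 0.0045 * thickness_in * ep_psi ** (1.0 / 3.0)

# Illustrative inputs, not Iowa data: a 12-inch structure with a
# backcalculated effective modulus of 250,000 psi.
sn_eff = effective_structural_number(thickness_in=12.0, ep_psi=250_000.0)
print(round(sn_eff, 2))  # -> 3.4
```

In an overlay design workflow, SNeff would then be compared against the structural number required for future traffic to size the overlay.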



Debris flows are among the most dangerous processes in mountainous areas due to their rapid rate of movement and long runout zone. Sudden and rather unexpected impacts not only damage buildings and infrastructure but also threaten human lives. Medium- to regional-scale susceptibility analyses allow the identification of the most endangered areas and suggest where further detailed studies have to be carried out. Since data availability is usually the key limiting factor for larger regions, empirical models with low data requirements are suitable for first overviews. In this study a susceptibility analysis was carried out for the Barcelonnette Basin, situated in the southern French Alps. By means of a methodology based on empirical rules for source identification and the empirical angle-of-reach concept for the 2-D runout computation, a worst-case scenario was first modelled. In a second step, scenarios for high-, medium- and low-frequency events were developed. A comparison with the footprints of a few mapped events indicates reasonable results but suggests a high dependency on the quality of the digital elevation model. This fact emphasises the need for careful interpretation of the results while remaining conscious of the inherent assumptions of the model used and the quality of the input data.
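The angle-of-reach concept behind the runout computation reduces, in its simplest form, to geometry: a flow travels until the line dipping at the reach angle from the source intersects the terrain, so for a simple vertical drop H the horizontal travel distance is L = H / tan(angle). The numbers below are illustrative; actual reach angles are calibrated per event frequency class.

```python
# Minimal sketch of the empirical angle-of-reach (Fahrboeschung) concept.
import math

def runout_distance(drop_height_m: float, reach_angle_deg: float) -> float:
    """Horizontal travel distance implied by the angle of reach."""
    return drop_height_m / math.tan(math.radians(reach_angle_deg))

# A 500 m drop with an 11-degree reach angle (a far-travelling,
# low-frequency scenario) runs out roughly 2.6 km.
print(round(runout_distance(500.0, 11.0)))
```

Lower reach angles model rarer, more mobile events, which is how the high-, medium- and low-frequency scenarios differ in such analyses.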


In the United States many bridge structures have been designed without consideration of their unique construction problems. Many problems could have been avoided if construction knowledge and experience had been utilized in the design process. A systematic process is needed to create and capture construction knowledge for use in the design process. This study was conducted to develop a system to capture construction considerations from field personnel and incorporate them into a knowledge base for use by bridge designers. This report presents the results of this study. As part of this study a microcomputer-based constructability system was developed. The system is a user-friendly microcomputer database which codifies construction knowledge, provides easy access to specifications, and provides simple design computation checks for the designer. A structure for the final database was developed and used in the prototype system. A process for collecting, developing and maintaining the database is presented and explained. The study involved a constructability survey, interviews with designers and constructors, and visits to construction sites to collect constructability concepts. The report describes the development of the constructability system and addresses the future needs for the Iowa Department of Transportation to make the system operational. A user's manual for the system is included with the report.


A combined strategy based on the computation of absorption energies, using the ZINDO/S semiempirical method, for a statistically relevant number of thermally sampled configurations extracted from QM/MM trajectories is used to establish a one-to-one correspondence between the structures of the different early intermediates (dark, batho, BSI, lumi) involved in the initial steps of the rhodopsin photoactivation mechanism and their optical spectra. A systematic analysis of the results based on a correlation-based feature selection algorithm shows that the origin of the color shifts among these intermediates can be mainly ascribed to alterations in intrinsic properties of the chromophore structure, which are tuned by several residues located in the protein binding pocket. In addition to the expected electrostatic and dipolar effects caused by the charged residues (Glu113, Glu181) and to strong hydrogen bonding with Glu113, other interactions such as π-stacking with Ala117 and Thr118 backbone atoms, van der Waals contacts with Gly114 and Ala292, and CH/π weak interactions with Tyr268, Ala117, Thr118, and Ser186 side chains are found to make non-negligible contributions to the modulation of the color tuning among the different rhodopsin photointermediates.
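The correlation-based screening behind the feature-selection step can be sketched as ranking structural descriptors of the chromophore by the strength of their correlation with the computed absorption energy across sampled configurations. The descriptor names and the tiny dataset below are invented for illustration; the study's actual algorithm (correlation-based feature selection) also penalizes redundancy between features, which this sketch omits.

```python
# Rank features by absolute Pearson correlation with absorption energy.
import math

def pearson(xs, ys):
    """Sample Pearson correlation coefficient of two equal-length lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    vy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (vx * vy)

absorption = [2.48, 2.51, 2.44, 2.55, 2.47]        # eV, per configuration
features = {"bond_length_alternation": [0.08, 0.09, 0.07, 0.11, 0.08],
            "twist_angle_deg":         [12.0, 9.0, 14.0, 13.0, 10.0]}

ranked = sorted(features,
                key=lambda f: abs(pearson(features[f], absorption)),
                reverse=True)
print(ranked)
```

Descriptors at the top of such a ranking are the candidates for explaining the color shifts among the photointermediates.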


Precast prestressed concrete panels have been used as subdecks in bridge construction in Iowa and other states. To investigate the performance of these types of composite slabs at locations adjacent to abutment and pier diaphragms in skewed bridges, a research project was conducted that involved surveys of design agencies and precast producers, field inspections of existing bridges, analytical studies, and experimental testing. The survey results from the design agencies and panel producers showed that standardization of precast panel construction would be desirable, that additional inspections at the precast plant and at the bridge site would be beneficial, and that some form of economic study should be undertaken to determine the actual cost savings associated with composite slab construction. Three bridges in Hardin County, Iowa were inspected to observe general geometric relationships and construction details and to note the visual condition of the bridges. Hairline cracks beneath several of the prestressing strands in many of the precast panels were observed, and a slight discoloration of the concrete was seen beneath most of the strands. Also, some rust staining was visible at isolated locations on several panels. Based on the findings of these inspections, future inspections are recommended to monitor the condition of these and other bridges constructed with precast panel subdecks. Five full-scale composite slab specimens were constructed in the Structural Engineering Laboratory at Iowa State University. One specimen modeled bridge deck conditions that are not adjacent to abutment or pier diaphragms, and the other four specimens represented the geometric conditions which occur for skewed diaphragms of 0, 15, 30, and 40 degrees. The specimens were subjected to wheel loads of service and factored-level magnitudes at many locations on the slab surface and to concentrated loads which produced failure of the composite slab.
The measured slab deflections and bending strains at both service and factored load levels compared reasonably well with the results predicted by simplified finite element analyses of the specimens. To analytically evaluate the nominal strength of a composite slab specimen, yield-line and punching shear theories were applied. Yield-line limit loads were computed using the crack patterns generated during an ultimate strength test. In most cases, these analyses indicated that the failure mode was not flexural. Since the punching shear limit loads in most instances were close to the failure loads, and since the failure surfaces immediately adjacent to the wheel load footprint appeared to be a truncated prism shape, the probable failure mode for all of the specimens was punching shear. The development lengths for the prestressing strands in the rectangular and trapezoidal-shaped panels were qualitatively investigated by monitoring strand slippage at the ends of selected prestressing strands. The initial strand transfer length was established experimentally by monitoring concrete strains during strand detensioning, and this length was verified analytically by a finite element analysis. Even though the computed strand embedment lengths in the panels were not sufficient to fully develop the ultimate strand stress, sufficient slab strength existed. Composite behavior of the slab specimens was evaluated by monitoring slippage between a panel and the topping slab and by computing the difference in flexural strains between the top of the precast panel and the underside of the topping slab at various locations. Prior to the failure of a composite slab specimen, a localized loss of composite behavior was detected. The static load strength performance of the composite slab specimens significantly exceeded the design load requirements.
Even with skew angles of up to 40 degrees, the nominal strength of the slabs did not appear to be affected when the ultimate strength test load was positioned on the portion of each slab containing the trapezoidal-shaped panel. At service and factored level loads, the joint between precast panels did not appear to influence the load distribution along the length of the specimens. Based on the static load strength of the composite slab specimens, the continued use of precast panels as subdecks in bridge deck construction is recommended.
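The punching-shear comparison described above can be sketched with an ACI-style nominal capacity check of the kind commonly used for two-way slabs, consistent with the truncated-prism failure surfaces observed around the wheel footprint. The formula form Vc = 4 * sqrt(f'c) * b0 * d (psi and inch units, critical perimeter at d/2 from the loaded area) and all dimensions below are stated as assumptions for illustration, not the report's actual calculations or specimen values.

```python
# ACI-style nominal two-way (punching) shear capacity for a rectangular
# loaded footprint, with the critical perimeter b0 taken at d/2 from
# each face of the footprint.
import math

def punching_shear_capacity(fc_psi, footprint_w_in, footprint_l_in, d_in):
    """Nominal punching shear capacity (lb) on the critical perimeter."""
    b0 = 2 * (footprint_w_in + d_in) + 2 * (footprint_l_in + d_in)
    return 4.0 * math.sqrt(fc_psi) * b0 * d_in

vc = punching_shear_capacity(fc_psi=5000.0, footprint_w_in=10.0,
                             footprint_l_in=20.0, d_in=6.5)
print(round(vc / 1000.0, 1))  # capacity in kips
```

Comparing such a computed capacity against the measured failure load is how a punching-shear failure mode would be confirmed over a flexural (yield-line) one.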