971 results for Numerical Evaluation
Abstract:
In all higher nonhuman primates, species survival depends on the safe carrying of infants clinging to the body hair of adults. In this work, measurements of the mechanical properties of ape hair (gibbon, orangutan, and gorilla) are presented, focusing on the constraints for safe infant carrying. The tensile properties of the hair are shown to be species-dependent. Analysis of the mechanics of the mounting position, typical of heavier infant carrying among African apes, shows that both clinging and friction are necessary to carry heavy infants. As a consequence, a required relationship exists between infant weight, the hair-hair friction coefficient, and body angle. The hair-hair friction coefficient is measured using natural ape skin samples, and its dependence on load and humidity is analyzed. Numerical evaluation of the equilibrium constraint is in agreement with the knuckle-walking quadruped position of African apes. Bipedality is clearly incompatible with the usual clinging and mounting pattern of infant carrying, requiring a revision of models of hominization in relation to the divergence between apes and hominins. These results suggest that the safe carrying of heavy infants justifies the emergence of a bipedal form of locomotion. Ways to test this possibility are outlined here.
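The abstract above does not give the equilibrium constraint explicitly; as a hedged illustration, the following sketch assumes a simple friction model in which an infant of weight W clings to a back inclined at angle θ from the horizontal, pressed onto the hair by a grip force G, so that equilibrium requires W·sin(θ) ≤ μ·(G + W·cos(θ)). The function name and all parameter values are hypothetical, not taken from the paper.

```python
import math

def max_body_angle(weight_n, grip_n, mu):
    """Largest back inclination (degrees from horizontal) at which an
    infant of weight `weight_n` (N) stays in place, given a clinging
    grip force `grip_n` (N) pressing it onto the hair and a hair-hair
    friction coefficient `mu`.  Assumed equilibrium condition along
    the back:  W*sin(theta) <= mu * (G + W*cos(theta)).
    Solved by bisection on theta in [0, 90] degrees."""
    def slips(theta_deg):
        t = math.radians(theta_deg)
        return weight_n * math.sin(t) > mu * (grip_n + weight_n * math.cos(t))

    lo, hi = 0.0, 90.0
    if not slips(hi):
        return 90.0  # even a vertical (bipedal) posture would be safe
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if slips(mid):
            hi = mid
        else:
            lo = mid
    return lo

# Illustrative numbers only: a 50 N infant with a 20 N grip and
# mu = 0.3 is safe only at fairly horizontal, quadruped-like angles.
print(round(max_body_angle(50.0, 20.0, 0.3), 1))
```

Under these assumed numbers the safe angle comes out well below vertical, which is consistent with the abstract's claim that bipedality conflicts with the clinging-and-mounting carrying pattern.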
Abstract:
The objectives of this work were (i) to review numerical methods for derivative pricing; and (ii) to compare these methods under the assumption that market prices reflect those obtained from the Black-Scholes formula for pricing European-style options. We applied these methods to price call options on Telebrás shares. Accuracy and computational cost were the criteria used to compare the following models: binomial, Monte Carlo, and finite differences. The results indicate that the binomial model has good accuracy and low cost, followed by Monte Carlo and finite differences. However, the Monte Carlo method could be used when the derivative depends on more than two underlying assets. The finite-difference method is recommended when one obtains a partial differential equation whose solution is the value of the derivative.
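As a minimal sketch of the comparison described above, the following code prices a European call with the closed-form Black-Scholes formula and with a Cox-Ross-Rubinstein binomial tree, showing the convergence the abstract relies on. The parameter values are illustrative and are not the Telebrás data.

```python
import math

def bs_call(s, k, r, sigma, t):
    """Black-Scholes price of a European call."""
    d1 = (math.log(s / k) + (r + 0.5 * sigma**2) * t) / (sigma * math.sqrt(t))
    d2 = d1 - sigma * math.sqrt(t)
    N = lambda x: 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))  # standard normal CDF
    return s * N(d1) - k * math.exp(-r * t) * N(d2)

def crr_call(s, k, r, sigma, t, n):
    """Cox-Ross-Rubinstein binomial price of the same call, n steps."""
    dt = t / n
    u = math.exp(sigma * math.sqrt(dt))
    d = 1.0 / u
    p = (math.exp(r * dt) - d) / (u - d)      # risk-neutral up probability
    disc = math.exp(-r * dt)
    # Terminal payoffs indexed by number of up moves, then backward induction.
    values = [max(s * u**j * d**(n - j) - k, 0.0) for j in range(n + 1)]
    for _ in range(n):
        values = [disc * (p * values[j + 1] + (1 - p) * values[j])
                  for j in range(len(values) - 1)]
    return values[0]

bs = bs_call(100, 100, 0.1, 0.3, 0.5)
tree = crr_call(100, 100, 0.1, 0.3, 0.5, 500)
print(bs, tree)  # the binomial value converges to the Black-Scholes value
```

The tree's cost grows quadratically in the number of steps, which is why the abstract finds the binomial model accurate and cheap for single-asset options, while Monte Carlo scales better to several underlying assets.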
Abstract:
A positive measure ψ defined on [a,b] such that its moments μ_n = ∫_a^b t^n dψ(t) exist for n = 0, ±1, ±2, …, is called a strong positive measure on [a,b]. If 0 ≤ a […] the numerical evaluation of the nodes and weights of such quadrature rules is also considered. © 2010 IMACS. Published by Elsevier B.V. All rights reserved.
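The excerpt above is truncated, so the specific quadrature rules for strong measures are not recoverable here. As a hedged illustration of the ordinary (non-strong) case only, the following sketch computes the nodes and weights of a classical Gauss-Legendre rule, whose defining property is exactness for polynomials up to degree 2n−1:

```python
import numpy as np

# Classical Gauss-Legendre rule on [-1, 1]: the nodes are the roots of
# the n-th Legendre polynomial; numpy computes nodes and weights via
# the standard eigenvalue (Golub-Welsch) construction.
n = 5
nodes, weights = np.polynomial.legendre.leggauss(n)

# An n-point Gauss rule integrates polynomials up to degree 2n-1 exactly:
exact = 2.0 / 7.0                       # integral of t^6 over [-1, 1]
approx = float(np.dot(weights, nodes**6))
print(approx, exact)
```

Rules associated with strong measures (with moments of negative order as well) generalize this construction; the details belong to the truncated part of the abstract.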
Abstract:
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES)
Abstract:
Bound-constrained minimization is a subject of active research. To assess the performance of existing solvers, numerical evaluations and comparisons are carried out. Arbitrary decisions that may have a crucial effect on the conclusions of numerical experiments are highlighted in the present work. As a result, a detailed evaluation based on performance profiles is applied to the comparison of bound-constrained minimization solvers. Extensive numerical results are presented and analyzed.
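Performance profiles, as used above, are the Dolan-Moré construction: for each solver, the fraction of problems it solves within a factor τ of the best solver. A minimal self-contained sketch, with a hypothetical two-solver cost matrix:

```python
import math

def performance_profile(costs, taus):
    """Dolan-More performance profiles.
    costs[s][p]: cost (e.g. CPU time) of solver s on problem p, with
    math.inf marking a failure.  Returns, for each solver, the fraction
    of problems solved within a factor tau of the best solver, for each
    tau in `taus`."""
    n_solvers, n_problems = len(costs), len(costs[0])
    best = [min(costs[s][p] for s in range(n_solvers))
            for p in range(n_problems)]
    ratios = [[costs[s][p] / best[p] for p in range(n_problems)]
              for s in range(n_solvers)]
    return [[sum(r <= tau for r in ratios[s]) / n_problems for tau in taus]
            for s in range(n_solvers)]

# Two hypothetical solvers on four problems (inf = solver failed).
costs = [[1.0, 2.0, 4.0, math.inf],   # solver A
         [2.0, 1.0, 1.0, 3.0]]        # solver B
profiles = performance_profile(costs, taus=[1.0, 2.0, 4.0])
print(profiles)
```

The profile at τ = 1 reads off how often a solver is the fastest, while its limit for large τ gives the solver's overall success rate, which is why the abstract prefers profiles over single aggregate numbers.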
Abstract:
Computing experimentally testable predictions from the Standard Model with perturbative methods is difficult. The challenges lie in the computation of ever more complicated Feynman integrals and in the growing size of the calculations for scattering processes with many particles. New mathematical methods must therefore be developed, and the growing complexity must be tamed by automating the calculations. Chapter 2 gives a short introduction to this subject. The subsequent chapters are devoted to individual contributions to the solution of these problems. In Chapter 3 we present a project that will be important for the analyses of LHC data. The goal of the project is the computation of one-loop corrections to processes with many particles in the final state. The numerical method is presented and explained. It uses helicity spinors and, building on them, a new tensor reduction method that largely avoids problems with inverse Gram determinants. A computer program was developed that can carry out the calculations automatically. The implementation is described, and details of the optimization and verification are presented. The fourth chapter deals with analytical methods. It presents the xloops project, which can compute various Feynman integrals with arbitrary masses and momentum configurations analytically. The essential mathematical methods that xloops uses to solve the integrals are explained. Two ideas for new computational procedures that can be realized with these methods are presented: first, the unified computation of one-loop N-point integrals, and second, the automated series expansion of integral solutions in higher powers of the dimensional regularization parameter ε. First results for the latter procedure are presented.
The usefulness of the automated series expansion from Chapter 4 depends on the numerical evaluability of the expansion coefficients. The coefficients are in general multiple polylogarithms. Chapter 5 presents a method for their numerical evaluation. This new method for multiple polylogarithms, together with known methods for other polylogarithm functions, was implemented as part of the C++ library GiNaC.
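As a hedged illustration of what such expansion coefficients look like numerically, the following sketch evaluates a multiple polylogarithm by plain truncation of its defining nested sum. This is only a naive demonstration inside the region of convergence, not the optimized algorithm implemented in GiNaC:

```python
import math

def multiple_polylog(s, z, nmax=400):
    """Truncated nested-sum evaluation of
        Li_{s1,...,sk}(z1,...,zk)
          = sum over n1 > n2 > ... > nk >= 1 of
            prod_i  z_i^{n_i} / n_i^{s_i}.
    Adequate only well inside the region of convergence; a naive
    illustration, not a production algorithm."""
    depth = len(s)

    def tail(level, n_upper):
        # Sum over the index at `level`, constrained to be < n_upper.
        if level == depth:
            return 1.0
        total = 0.0
        for n in range(1, n_upper):
            total += z[level] ** n / n ** s[level] * tail(level + 1, n)
        return total

    total = 0.0
    for n in range(1, nmax + 1):          # outermost index n1
        total += z[0] ** n / n ** s[0] * tail(1, n)
    return total

# Depth 1 reduces to the classical polylogarithm:
print(multiple_polylog([2], [0.5]))   # Li_2(1/2) = pi^2/12 - ln(2)^2/2
```

A useful sanity check at depth 2 is the classical identity Li_{1,1}(z, 1) = ln²(1−z)/2, which the truncated sum reproduces to high accuracy for |z| well below 1.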
Abstract:
In this thesis a mathematical model is derived that describes charge and energy transport in semiconductor devices such as transistors. Moreover, numerical simulations of these physical processes are performed. To accomplish this, methods of theoretical physics, functional analysis, numerical mathematics, and computer programming are applied. After an introduction to the current state of semiconductor device simulation methods and a brief review of the relevant history, attention shifts to the construction of a model, which serves as the basis of the subsequent derivations in the thesis. The starting point is an important equation from the theory of dilute gases. From this equation the model equations are derived and specified by means of a series expansion method. This is done in a multi-stage derivation process, which is mainly taken from a scientific paper and does not constitute the focus of this thesis. In the following phase we specify the mathematical setting and make the model assumptions precise, using methods of functional analysis. Since the equations we deal with are coupled, we face a nonstandard problem; by contrast, the theory of scalar elliptic equations is by now well established. Subsequently, we address the numerical discretization of the equations. A special finite-element method is used for the discretization; this particular approach is needed to make the numerical results suitable for practical application. Through a series of transformations of the discrete model we derive a system of algebraic equations that is amenable to numerical evaluation. Using computer programs developed for this work, we solve the equations to obtain approximate solutions. These programs are based on new, specialized iteration procedures that were developed and thoroughly tested within the framework of this research.
Because of their importance and novelty, these procedures are explained and demonstrated in detail. We compare the new iterations with a standard method, adapted to fit the present context. A further innovation is the computation of solutions on three-dimensional domains, which are still rare. Special attention is paid to the applicability of the 3D simulation tools, and the programs are designed to have a justifiable computational cost. Simulation results for some contemporary semiconductor devices are shown, and detailed comments on the results are given. Finally, we give an outlook on future developments and enhancements of the models and of the algorithms used.
Abstract:
In the present dissertation we consider Feynman integrals in the framework of dimensional regularization. As all such integrals can be expressed in terms of scalar integrals, we focus on this latter kind of integrals in their Feynman parametric representation and study their mathematical properties, partially applying graph theory, algebraic geometry, and number theory. The three main topics are the graph theoretic properties of the Symanzik polynomials, the termination of the sector decomposition algorithm of Binoth and Heinrich, and the arithmetic nature of the Laurent coefficients of Feynman integrals.

The integrand of an arbitrary dimensionally regularised, scalar Feynman integral can be expressed in terms of the two well-known Symanzik polynomials. We give a detailed review of the graph theoretic properties of these polynomials. Due to the matrix-tree theorem, the first of these polynomials can be constructed from the determinant of a minor of the generic Laplacian matrix of a graph. By use of a generalization of this theorem, the all-minors matrix-tree theorem, we derive a new relation which furthermore relates the second Symanzik polynomial to the Laplacian matrix of a graph.

Starting from the Feynman parametric representation, the sector decomposition algorithm of Binoth and Heinrich serves for the numerical evaluation of the Laurent coefficients of an arbitrary Feynman integral in the Euclidean momentum region. This widely used algorithm contains an iterated step, consisting of an appropriate decomposition of the domain of integration and the deformation of the resulting pieces. This procedure leads to a disentanglement of the overlapping singularities of the integral. By giving a counter-example we exhibit the problem that this iterative step of the algorithm does not terminate in every possible case. We solve this problem by presenting an appropriate extension of the algorithm, which is guaranteed to terminate.
This is achieved by mapping the iterative step to an abstract combinatorial problem, known as Hironaka's polyhedra game. We present a publicly available implementation of the improved algorithm. Furthermore we explain the relationship of the sector decomposition method with the resolution of singularities of a variety, given by a sequence of blow-ups, in algebraic geometry.

Motivated by the connection between Feynman integrals and topics of algebraic geometry, we consider the set of periods as defined by Kontsevich and Zagier. This special set of numbers contains the set of multiple zeta values and certain values of polylogarithms, which in turn are known to be present in results for Laurent coefficients of certain dimensionally regularized Feynman integrals. By use of the extended sector decomposition algorithm we prove a theorem which implies that the Laurent coefficients of an arbitrary Feynman integral are periods if the masses and kinematical invariants take values in the Euclidean momentum region. The statement is formulated for an even more general class of integrals, allowing for an arbitrary number of polynomials in the integrand.
Abstract:
BACKGROUND Port-wine stains (PWS) are capillary malformations occurring in 0.3% of newborn children. The treatment of choice is pulsed dye laser (PDL), which requires several sessions. The efficacy of this treatment is currently evaluated on the basis of clinical inspection and of digital photographs taken throughout the treatment. Laser Doppler imaging (LDI) is a noninvasive method of imaging tissue perfusion by the microcirculatory system (capillaries). The aim of this paper is to demonstrate that LDI allows a quantitative, numerical evaluation of the efficacy of PDL treatment of PWS. METHOD The PDL sessions were organized according to the usual scheme, every other month, from September 1, 2012, to September 30, 2013. LDI was performed at the start and at the conclusion of the PDL treatment, and simultaneously on healthy skin in order to obtain reference values. The results evidenced by LDI were analyzed with the Wilcoxon signed-rank test before and after each session, and in the intervals between the three PDL treatment sessions. RESULTS Our prospective study included 20 children. On average, the vascularization of the PWS was reduced by 56% after three laser sessions. Initial vascularization of the PWS was 62% higher than that of healthy skin at the start of treatment, and 6% higher after three sessions. During the 2 months between two sessions, vascularization of the capillary network increased by 27%. CONCLUSION This study shows that LDI can demonstrate and measure the efficacy of PDL treatment of PWS in children. The figures obtained when measuring the results by LDI corroborate the clinical assessments and may allow us to refine, and perhaps even modify, our present use of PDL and thus improve the efficacy of the treatment.
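The analysis above uses the Wilcoxon signed-rank test for paired before/after measurements. A minimal pure-Python sketch of the exact small-sample version follows; the enumeration of sign patterns is only feasible for small n, and the example numbers are illustrative, not the study's data:

```python
from itertools import product

def wilcoxon_signed_rank(before, after):
    """Exact two-sided Wilcoxon signed-rank test for paired samples.
    Returns (W, p): W is the smaller of the positive/negative rank sums;
    p enumerates all 2^n sign assignments (feasible only for small n).
    Zero differences are dropped; tied |differences| get average ranks.
    One common convention for the two-sided p is used here."""
    diffs = [a - b for a, b in zip(after, before) if a != b]
    n = len(diffs)
    order = sorted(range(n), key=lambda i: abs(diffs[i]))
    ranks = [0.0] * n
    i = 0
    while i < n:                      # assign average ranks to tie groups
        j = i
        while j + 1 < n and abs(diffs[order[j + 1]]) == abs(diffs[order[i]]):
            j += 1
        avg = (i + j) / 2 + 1
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    w_pos = sum(r for d, r in zip(diffs, ranks) if d > 0)
    w_neg = sum(r for d, r in zip(diffs, ranks) if d < 0)
    w = min(w_pos, w_neg)
    # Null distribution: every subset of ranks is equally likely to be
    # the "positive" set; count sign patterns at least as extreme.
    total = sum(ranks)
    count = 0
    for signs in product((0, 1), repeat=n):
        s = sum(r for b, r in zip(signs, ranks) if b)
        if min(s, total - s) <= w:
            count += 1
    return w, count / 2 ** n

# Hypothetical perfusion values for 8 patients, before and after PDL.
before = [62, 55, 70, 48, 66, 58, 73, 60]
after = [30, 28, 35, 40, 33, 37, 36, 34]
print(wilcoxon_signed_rank(before, after))
```

When every difference has the same sign, as in this illustrative data, W is 0 and the exact p-value is 2/2^n, the smallest attainable for the sample size.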
Abstract:
Nonlinear transformations are a good alternative for the numerical evaluation of singular and quasi-singular integrals appearing in the Boundary Element Method, especially in the p-adaptive version. Some aspects of their numerical implementation in 2-D potential codes are discussed, and some examples are shown.
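One widely used nonlinear transformation of this kind is the Telles cubic mapping, which clusters Gauss points toward the singularity and makes the Jacobian vanish there. The abstract does not say which transformation it uses, so the following is a hedged sketch of the Telles variant on a model integrand with a logarithmic endpoint singularity:

```python
import numpy as np

def telles_points(n, eta_bar):
    """Gauss-Legendre points/weights mapped by the Telles cubic
    transformation, clustering integration points toward a
    (near-)singularity at eta_bar in [-1, 1]."""
    g, w = np.polynomial.legendre.leggauss(n)
    es = eta_bar**2 - 1.0
    gbar = (np.cbrt(eta_bar * es + abs(es))
            + np.cbrt(eta_bar * es - abs(es)) + eta_bar)
    denom = 1.0 + 3.0 * gbar**2
    eta = ((g - gbar)**3 + gbar * (gbar**2 + 3.0)) / denom
    jac = 3.0 * (g - gbar)**2 / denom       # vanishes at the singularity
    return eta, w * jac

# Model integrand with a logarithmic singularity at eta = -1.
f = lambda x: np.log(1.0 + x)
exact = 2.0 * np.log(2.0) - 2.0             # integral of ln(1+x) over [-1, 1]

g, w = np.polynomial.legendre.leggauss(16)
plain = float(np.dot(w, f(g)))
eta, wt = telles_points(16, -1.0)
telles = float(np.dot(wt, f(eta)))
print(abs(plain - exact), abs(telles - exact))
```

With the same 16 points, the transformed rule is markedly more accurate than plain Gauss-Legendre, which is the practical appeal of such transformations in BEM codes.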
Abstract:
Two main alternating facies were observed at Ocean Drilling Program (ODP) Site 1165, drilled in 3357 m water depth into the Wild Drift (Cooperation Sea, Antarctica): a dark gray, laminated, terrigenous one (interpreted as muddy contourites) and a greenish, homogeneous, biogenic and coarse fraction-bearing one (interpreted as hemipelagic deposits with ice rafted debris [IRD]). These two cyclically alternating facies reflect orbitally driven changes (Milankovitch periodicities) recorded in spectral reflectance, bulk density, and magnetic susceptibility data and in opal content changes. Superimposed on these short-term variations, significant uphole changes in average sedimentation rates, total clay content, IRD amount, and mineral composition were interpreted to represent the long-term lower to upper Miocene transition from a temperate climate to a cold-climate glaciation. The analysis of the short-term variations (interpreted to reflect ice sheet expansions controlled by 41-k.y. insolation changes) requires a closely sampled record like that provided by the archive multisensor track. Among these measurements, the cycles are best described by spectral reflectance data and, in particular, by a parameter calculated as the ratio of the reflectivity in the green color band and the average reflectivity (gray). In this data report a numerical evaluation of spectral reflectance data was performed and substantiated by correlation with core photos to provide an objective description of the color variations within Site 1165 sediments. The resulting color description provides a reference to categorize the available samples in terms of facies and, hence, a framework for further analyses. Moreover, a link between visually described features and numerical series suitable for spectral analyses is provided.
Abstract:
The solution to the Green and Ampt infiltration equation is expressible in terms of the Lambert W-1 function. Approximations for Green and Ampt infiltration are thus derivable from approximations for the W-1 function and vice versa. An infinite family of asymptotic expansions to W-1 is presented. Although these expansions do not converge near the branch point of the W function (corresponding to Green-Ampt infiltration with immediate ponding), a method is presented for approximating W-1 that is exact at the branch point and asymptotically, with interpolation between these limits. Some existing and several new simple and compact yet robust approximations applicable to Green-Ampt infiltration and flux are presented, the most accurate of which has a maximum relative error of 5 x 10^(-5)%. This error is orders of magnitude lower than any existing analytical approximations. (c) 2005 Elsevier Ltd. All rights reserved.
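The W-1 branch discussed above is the lower real branch of w·e^w = x on (−1/e, 0). A minimal sketch (not one of the paper's approximations) evaluates it by Newton iteration started from the standard asymptotic guess:

```python
import math

def lambert_w_minus1(x, tol=1e-14):
    """W_{-1}(x) for x in (-1/e, 0): the lower real branch of w*e^w = x.
    Newton iteration from the asymptotic guess w ~ ln(-x) - ln(-ln(-x)),
    which degrades near the branch point x = -1/e (the immediate-ponding
    limit in the Green-Ampt context)."""
    if not (-1.0 / math.e < x < 0.0):
        raise ValueError("W_{-1} is real only on (-1/e, 0)")
    L = math.log(-x)                    # L < -1 on this interval
    w = L - math.log(-L)                # asymptotic starting guess
    for _ in range(100):
        ew = math.exp(w)
        step = (w * ew - x) / (ew * (w + 1.0))
        w -= step
        if abs(step) < tol * max(1.0, abs(w)):
            break
    return w

w = lambert_w_minus1(-0.1)
print(w, w * math.exp(w))   # w*e^w recovers -0.1
```

Because the derivative of w·e^w vanishes at the branch point w = −1, plain Newton slows down there, which is exactly why the paper's branch-point-exact interpolating approximations are useful.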
Abstract:
The kinematic mapping of a rigid open-link manipulator is a homomorphism between Lie groups. The homomorphism has solution groups that act on an inverse kinematic solution element. A canonical representation of solution group operators that act on a solution element of three and seven degree-of-freedom (dof) dextrous manipulators is determined by geometric analysis. Seven canonical solution groups are determined for the seven-dof Robotics Research K-1207 and Hollerbach arms. The solution element of a dextrous manipulator is a collection of trivial fibre bundles with solution fibres homotopic to the torus. If fibre solutions are parameterised by a scalar, a direct inverse function that maps the scalar and Cartesian base space coordinates to solution element fibre coordinates may be defined. A direct inverse parameterisation of a solution element may be approximated by a local linear map generated by an inverse augmented Jacobian correction of a linear interpolation. The action of canonical solution group operators on a local linear approximation of the solution element of inverse kinematics of dextrous manipulators generates cyclical solutions. The solution representation is proposed as a model of inverse kinematic transformations in primate nervous systems. Simultaneous calibration of a composition of stereo-camera and manipulator kinematic models is under-determined by equi-output parameter groups in the composition of stereo-camera and Denavit-Hartenberg (DH) models. An error measure for simultaneous calibration of a composition of models is derived, and parameter subsets with no equi-output groups are determined by numerical experiments to simultaneously calibrate the composition of homogeneous or pan-tilt stereo-camera with DH models.
For acceleration of exact Newton second-order re-calibration of DH parameters after a sequential calibration of stereo-camera and DH parameters, an optimal numerical evaluation of DH matrix first-order and second-order error derivatives with respect to a re-calibration error function is derived, implemented, and tested. A distributed object environment for point-and-click image-based tele-command of manipulators and stereo-cameras is specified and implemented that supports rapid prototyping of numerical experiments in distributed system control. The environment is validated by a hierarchical k-fold cross-validated calibration to Cartesian space of a radial basis function regression correction of an affine stereo model. Basic design and performance requirements are defined for scalable virtual micro-kernels that broker inter-Java-virtual-machine remote method invocations between components of secure manageable fault-tolerant open distributed agile Total Quality Managed ISO 9000+ conformant Just in Time manufacturing systems.
Abstract:
The spacing of adjacent wheel lines of dual-lane loads induces different lateral live load distributions on bridges, which cannot be determined using the current American Association of State Highway and Transportation Officials (AASHTO) Load and Resistance Factor Design (LRFD) or Load Factor Design (LFD) equations for vehicles with standard axle configurations. Current Iowa law requires dual-lane loads to meet a five-foot requirement, the adequacy of which needs to be verified. To improve the state policy and AASHTO code specifications, it is necessary to understand the actual effects of wheel-line spacing on lateral load distribution. The main objective of this research was to investigate the impact of the wheel-line spacing of dual-lane loads on the lateral load distribution on bridges. To achieve this objective, a numerical evaluation using two-dimensional linear elastic finite element (FE) models was performed. For simulation purposes, 20 prestressed-concrete bridges, 20 steel bridges, and 20 slab bridges were randomly sampled from the Iowa bridge database. Based on the FE results, the load distribution factors (LDFs) of the concrete and steel bridges and the equivalent lengths of the slab bridges were derived. To investigate the variations of LDFs, a total of 22 types of single-axle four-wheel-line dual-lane loads were taken into account with configurations consisting of combinations of various interior and exterior wheel-line spacing. The corresponding moment and shear LDFs and equivalent widths were also derived using the AASHTO equations and the adequacy of the Iowa DOT five-foot requirement was evaluated. Finally, the axle weight limits per lane for different dual-lane load types were further calculated and recommended to complement the current Iowa Department of Transportation (DOT) policy and AASHTO code specifications.
Abstract:
In this paper we consider instabilities of localised solutions in planar neural field firing rate models of Wilson-Cowan or Amari type. Importantly we show that angular perturbations can destabilise spatially localised solutions. For a scalar model with Heaviside firing rate function we calculate symmetric one-bump and ring solutions explicitly and use an Evans function approach to predict the point of instability and the shapes of the dominant growing modes. Our predictions are shown to be in excellent agreement with direct numerical simulations. Moreover, beyond the instability our simulations demonstrate the emergence of multi-bump and labyrinthine patterns. With the addition of spike-frequency adaptation, numerical simulations of the resulting vector model show that it is possible for structures without rotational symmetry, and in particular multi-bumps, to undergo an instability to a rotating wave. We use a general argument, valid for smooth firing rate functions, to establish the conditions necessary to generate such a rotational instability. Numerical continuation of the rotating wave is used to quantify the emergent angular velocity as a bifurcation parameter is varied. Wave stability is found via the numerical evaluation of an associated eigenvalue problem.