907 results for fixed point method


Relevance:

80.00%

Publisher:

Abstract:

We exhibit the construction of stable arc exchange systems from the stable laminations of hyperbolic diffeomorphisms. We prove a one-to-one correspondence between (i) Lipschitz conjugacy classes of C(1+H) stable arc exchange systems that are C(1+H) fixed points of renormalization and (ii) Lipschitz conjugacy classes of C(1+H) diffeomorphisms f with hyperbolic basic sets Lambda that admit an invariant measure absolutely continuous with respect to the Hausdorff measure on Lambda. Let HD(s)(Lambda) and HD(u)(Lambda) be, respectively, the Hausdorff dimensions of the stable and unstable leaves intersected with the hyperbolic basic set Lambda. If HD(u)(Lambda) = 1, then the Lipschitz conjugacy is, in fact, a C(1+H) conjugacy in (i) and (ii). We prove that if the stable arc exchange system is a C(1+HD(s)+alpha) fixed point of renormalization with bounded geometry, then the stable arc exchange system is smoothly conjugate to an affine stable arc exchange system.

Relevance:

80.00%

Publisher:

Abstract:

BACKGROUND: A possible strategy for increasing smoking cessation rates could be to provide smokers who have contact with healthcare systems with feedback on the biomedical or potential future effects of smoking, e.g. measurement of exhaled carbon monoxide (CO), lung function, or genetic susceptibility to lung cancer. OBJECTIVES: To determine the efficacy of biomedical risk assessment provided in addition to various levels of counselling, as a contributing aid to smoking cessation. SEARCH STRATEGY: We systematically searched the Cochrane Collaboration Tobacco Addiction Group Specialized Register, Cochrane Central Register of Controlled Trials 2008 Issue 4, MEDLINE (1966 to January 2009), and EMBASE (1980 to January 2009). We combined methodological terms with terms related to smoking cessation counselling and biomedical measurements. SELECTION CRITERIA: Inclusion criteria were: a randomized controlled trial design; subjects participating in smoking cessation interventions; interventions based on a biomedical test to increase motivation to quit; control groups receiving all other components of intervention; an outcome of smoking cessation rate at least six months after the start of the intervention. DATA COLLECTION AND ANALYSIS: Two assessors independently conducted data extraction on each paper, with disagreements resolved by consensus. Results were expressed as a relative risk (RR) for smoking cessation with 95% confidence intervals (CI). Where appropriate, a pooled effect was estimated using the Mantel-Haenszel fixed-effect method. MAIN RESULTS: We included eleven trials using a variety of biomedical tests. Two pairs of trials had sufficiently similar recruitment, setting and interventions to calculate a pooled effect; there was no evidence that CO measurement in primary care (RR 1.06, 95% CI 0.85 to 1.32) or spirometry in primary care (RR 1.18, 95% CI 0.77 to 1.81) increased cessation rates. We did not pool the other seven trials. One trial in primary care detected a significant benefit of lung age feedback after spirometry (RR 2.12; 95% CI 1.24 to 3.62). One trial that used ultrasonography of carotid and femoral arteries and photographs of plaques detected a benefit (RR 2.77; 95% CI 1.04 to 7.41) but enrolled a population of light smokers. Five trials failed to detect evidence of a significant effect. One of these tested CO feedback alone and CO + genetic susceptibility as two different interventions; none of the three possible comparisons detected significant effects. Three others used a combination of CO and spirometry feedback in different settings, and one tested for a genetic marker. AUTHORS' CONCLUSIONS: There is little evidence about the effects of most types of biomedical tests for risk assessment. Spirometry combined with an interpretation of the results in terms of 'lung age' had a significant effect in a single good quality trial. Mixed quality evidence does not support the hypothesis that other types of biomedical risk assessment increase smoking cessation in comparison to standard treatment. Only two pairs of studies were similar enough in terms of recruitment, setting, and intervention to allow meta-analysis.
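
For readers unfamiliar with the pooling step, the sketch below shows how a Mantel-Haenszel fixed-effect pooled risk ratio can be computed from per-trial 2x2 tables; the counts are invented placeholders, not data from the review, and confidence intervals are omitted.

```python
# Minimal sketch: Mantel-Haenszel fixed-effect pooled risk ratio (RR)
# from per-trial 2x2 tables. The numbers below are placeholders, not
# data from the review.

def mantel_haenszel_rr(tables):
    """tables: list of (events_tx, total_tx, events_ctrl, total_ctrl)."""
    num = 0.0
    den = 0.0
    for a, n1, c, n0 in tables:
        n = n1 + n0                 # total participants in the trial
        num += a * n0 / n           # weighted events in the intervention arm
        den += c * n1 / n           # weighted events in the control arm
    return num / den

# Two hypothetical primary-care trials of CO feedback.
trials = [(30, 200, 28, 198), (45, 350, 40, 345)]
print(round(mantel_haenszel_rr(trials), 3))
```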

Relevance:

80.00%

Publisher:

Abstract:

Questions: A multiple plot design was developed for permanent vegetation plots. How reliable are the different methods used in this design and which changes can we measure? Location: Alpine meadows (2430 m a.s.l.) in the Swiss Alps. Methods: Four inventories were obtained from 40 m2 plots: four subplots (0.4 m2) with a list of species, two 10 m transects with the point method (50 points on each), one subplot (4 m2) with a list of species and visual cover estimates as a percentage, and the complete plot (40 m2) with a list of species and visual estimates in classes. This design was tested by five to seven experienced botanists in three plots. Results: Whatever the sampling size, only 45-63% of the species were seen by all the observers. However, the majority of the overlooked species had cover < 0.1%. Pairs of observers overlooked 10-20% fewer species than single observers. The point method was the best method for cover estimation, but it took much longer than visual cover estimates, and 100 points allowed for the monitoring of only a very limited number of species. The visual estimate as a percentage was more precise than the estimate in classes. Working in pairs did not improve the estimates, but one botanist repeating the survey was more reliable than a succession of different observers. Conclusion: Lists of species are insufficient for monitoring. It is necessary to add cover estimates to allow for subsequent interpretations in spite of the overlooked species. The choice of method depends on the available resources: the point method is time consuming but gives precise data for a limited number of species, while visual estimates are quick but allow for recording only large changes in cover. Constant pairs of observers improve the reliability of the records.
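
As a reminder of how the point method yields cover values, the sketch below converts point-intercept hits along a transect into percentage cover; the species names and hit counts are invented for illustration.

```python
# Minimal sketch: cover estimate from the point (point-intercept) method.
# Cover of a species = fraction of sampled points at which it is hit.
# Species names and hit counts below are made up for illustration.

def point_method_cover(hits, n_points):
    """hits: mapping species -> number of points where it was recorded."""
    return {sp: 100.0 * h / n_points for sp, h in hits.items()}

transect_hits = {"Carex curvula": 21, "Festuca halleri": 9, "Leontodon helveticus": 2}
print(point_method_cover(transect_hits, n_points=50))   # percent cover per species
```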

Relevance:

80.00%

Publisher:

Abstract:

The present paper studies the probability of ruin of an insurer when excess of loss reinsurance with reinstatements is applied. In the setting of the classical Cramér-Lundberg risk model, piecewise deterministic Markov processes are used to describe the free surplus process in this more general situation. It is shown that the finite-time ruin probability is both the solution of a partial integro-differential equation and the fixed point of a contractive integral operator. We exploit the latter representation to develop and implement a recursive algorithm for numerical approximation of the ruin probability that involves high-dimensional integration. Furthermore, we study the behavior of the finite-time ruin probability under various levels of initial surplus and security loadings and compare the efficiency of the numerical algorithm with the computational alternative of stochastic simulation of the risk process.
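
As a rough illustration of the simulation benchmark mentioned above (not the paper's recursive fixed-point algorithm, and without the reinstatement-reinsurance layer), the following sketch estimates a finite-time ruin probability for a classical Cramér-Lundberg surplus process by Monte Carlo; all parameter values are arbitrary assumptions.

```python
# Minimal sketch: finite-time ruin probability of U(t) = u + c*t - sum of
# claims, estimated by simulating the compound Poisson risk process.
# Parameters (premium rate, claim rate, claim sizes, horizon) are illustrative.
import random

def ruin_probability(u, c, lam, claim_mean, horizon, n_paths=20000, seed=1):
    rng = random.Random(seed)
    ruins = 0
    for _ in range(n_paths):
        t, surplus = 0.0, u
        while True:
            dt = rng.expovariate(lam)          # exponential interarrival time
            if t + dt > horizon:               # no further claims before the horizon
                break
            t += dt
            surplus += c * dt                  # premiums earned since the last claim
            surplus -= rng.expovariate(1.0 / claim_mean)   # exponential claim size
            if surplus < 0:                    # ruin can only occur at claim instants
                ruins += 1
                break
    return ruins / n_paths

print(ruin_probability(u=10.0, c=1.2, lam=1.0, claim_mean=1.0, horizon=50.0))
```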

Relevance:

80.00%

Publisher:

Abstract:

Subshifts are sets of configurations over an infinite grid defined by a set of forbidden patterns. In this thesis, we study two-dimensional subshifts of finite type (2D SFTs), where the underlying grid is Z^2 and the set of forbidden patterns is finite. We are mainly interested in the interplay between the computational power of 2D SFTs and their geometry, examined through the concept of expansive subdynamics. 2D SFTs with expansive directions form an interesting and natural class of subshifts that lie between dimensions 1 and 2. An SFT that has only one non-expansive direction is called extremely expansive. We prove that in many aspects, extremely expansive 2D SFTs display the totality of behaviours of general 2D SFTs. For example, we construct an aperiodic extremely expansive 2D SFT and we prove that the emptiness problem is undecidable even when restricted to the class of extremely expansive 2D SFTs. We also prove that every Medvedev class contains an extremely expansive 2D SFT and we provide a characterization of the sets of directions that can be the set of non-expansive directions of a 2D SFT. Finally, we prove that for every computable sequence of 2D SFTs with an expansive direction, there exists a universal object that simulates all of the elements of the sequence. We use the so-called hierarchical, self-simulating or fixed-point method for constructing 2D SFTs, which has previously been used by Gács, Durand, Romashchenko and Shen.
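
To make the opening definition concrete, the sketch below checks a finite window of a configuration against a finite list of forbidden patterns; the alphabet and patterns are toy assumptions, and this is only the local check, not a construction of an SFT.

```python
# Minimal sketch of the basic definition: a 2D SFT is the set of Z^2
# configurations avoiding a finite list of forbidden patterns. Here we
# only test a finite window; symbols and patterns are toy examples.

def contains_forbidden(config, forbidden):
    """config: dict (i, j) -> symbol over a finite window.
    forbidden: list of dicts (di, dj) -> symbol (finite patterns)."""
    for (i, j) in config:
        for pat in forbidden:
            if all(config.get((i + di, j + dj)) == s for (di, dj), s in pat.items()):
                return True
    return False

# Toy example: forbid two horizontally adjacent 1s.
window = {(0, 0): 1, (0, 1): 0, (1, 0): 0, (1, 1): 1}
forbidden = [{(0, 0): 1, (0, 1): 1}]
print(contains_forbidden(window, forbidden))   # False for this window
```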

Relevance:

80.00%

Publisher:

Abstract:

In this thesis, we study some algebraic, geometric and topological properties of compact Riemann surfaces. Two main topics are treated. First, using the fact that every compact Riemann surface of genus g greater than or equal to 2 has a finite number of Weierstrass points, we conclude that such surfaces have a finite number of automorphisms. We then take a closer look at the Eichler trace formula, a theorem that gives the character of an automorphism acting on the space of holomorphic q-differentials. We begin our study with the Klein quartic, working through an example computation with Eichler's theorem in order to become familiar with the statement of the theorem. Finally, we prove the Eichler trace formula, taking care to treat the case where the automorphism acts without fixed points separately from the case where it has fixed points.
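
For context on the finiteness statement, Hurwitz's classical bound (quoted here as a reminder, not as part of the thesis) quantifies it, and the Klein quartic mentioned above attains it:

```latex
% Hurwitz's automorphism theorem, quoted for context only:
\[
  |\operatorname{Aut}(X)| \;\le\; 84\,(g-1)
  \quad\text{for every compact Riemann surface $X$ of genus } g \ge 2,
\]
% with equality for the Klein quartic: $g = 3$ and $|\operatorname{Aut}(X)| = 168$.
```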

Relevance:

80.00%

Publisher:

Abstract:

Most commercial and financial data are stored in decimal form. Recently, support for decimal arithmetic has received increased attention due to its growing importance in financial analysis, banking, tax calculation, currency conversion, insurance, telephone billing and accounting. Performing decimal arithmetic on systems that do not support decimal computations may give results with representation, conversion, and/or rounding errors. In this world of precision, such errors are no longer tolerable. The errors can be eliminated and better accuracy can be achieved if decimal computations are done using Decimal Floating Point (DFP) units. But the floating-point arithmetic units in today's general-purpose microprocessors are based on the binary number system, and decimal computations are done using binary arithmetic. Only a few common decimal numbers can be exactly represented in Binary Floating Point (BFP). In many cases, the law requires that results generated from financial calculations performed on a computer exactly match manual calculations. Currently, many applications involving fractional decimal data perform decimal computations either in software or with a combination of software and hardware. Performance can be dramatically improved by complete hardware DFP units, and this leads to the design of processors that include DFP hardware. VLSI implementations using the same modular building blocks can decrease system design and manufacturing cost. A multiplexer realization is a natural choice from the viewpoint of cost and speed. This thesis focuses on the design and synthesis of efficient decimal MAC (Multiply-Accumulate) architectures for high-speed decimal processors based on the IEEE Standard for Floating-Point Arithmetic (IEEE 754-2008). The research goal is to design and synthesize decimal MAC architectures to achieve higher performance. Efficient design methods and architectures are developed for a high-performance DFP MAC unit as part of this research.
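
The representation issue described above is easy to see in practice; the snippet below contrasts binary floating point with a decimal floating-point type (a software analogue of a DFP unit).

```python
# Minimal illustration of the representation error described above:
# many decimal fractions are not exact in binary floating point, while
# a decimal floating-point type keeps them exact.
from decimal import Decimal

print(0.1 + 0.2 == 0.3)                                     # False: binary rounding error
print(0.1 + 0.2)                                            # 0.30000000000000004
print(Decimal("0.1") + Decimal("0.2") == Decimal("0.3"))    # True: exact decimal arithmetic
```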

Relevance:

80.00%

Publisher:

Abstract:

This thesis is an attempt to initiate the development of a discrete geometry of the discrete plane H = {(q^m x0, q^n y0) : m, n ∈ Z, the set of integers}, where q ∈ (0,1) is fixed and (x0, y0) is a fixed point in the first quadrant of the complex plane, x0, y0 ≠ 0. The discrete plane was first considered by Harman in 1972, to evolve a discrete analytic function theory for geometric difference functions. We briefly discuss, through various sections, the principle of discretization, an outline of discrete analytic function theory, the concept of geometry of space, and a summary of the work done in this thesis.
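
A minimal sketch of the grid in question, generated over a finite range of exponents; the values of q, x0, y0 and the exponent range are arbitrary choices for illustration.

```python
# Minimal sketch of the discrete plane H = {(q**m * x0, q**n * y0) : m, n in Z}
# described above, truncated to a finite range of exponents.

def discrete_plane(q, x0, y0, m_range, n_range):
    return [(x0 * q**m, y0 * q**n) for m in m_range for n in n_range]

points = discrete_plane(q=0.5, x0=1.0, y0=1.0, m_range=range(-2, 3), n_range=range(-2, 3))
print(len(points), points[:3])   # 25 grid points; a geometric (not arithmetic) lattice
```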

Relevance:

80.00%

Publisher:

Abstract:

Decimal multiplication is an integral part of financial, commercial, and internet-based computations. This paper presents a novel double digit decimal multiplication (DDDM) technique that offers low latency and high throughput. The design performs two digit multiplications simultaneously in one clock cycle. Double digit fixed point decimal multipliers for 7-digit, 16-digit and 34-digit operands are simulated using Leonardo Spectrum from Mentor Graphics Corporation with an ASIC library. The paper also presents area and delay comparisons for these fixed point multipliers on Xilinx, Altera, Actel and QuickLogic FPGAs. The multiplier design can be extended to support decimal floating point multiplication for the IEEE 754-2008 standard.
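
As a conceptual software analogue of the double-digit idea (not the paper's hardware architecture), the sketch below consumes the multiplier operand two decimal digits per iteration, so that each iteration corresponds to one "double digit" partial product.

```python
# Conceptual analogue only: a decimal multiplier that processes the
# multiplier operand two digits at a time, mirroring the idea of handling
# two digit-multiplications per cycle.

def double_digit_decimal_multiply(a, b):
    """Multiply non-negative integers pair-of-digits by pair-of-digits in base 10."""
    result, shift = 0, 0
    while b > 0:
        pair = b % 100                    # take the two least significant digits
        result += a * pair * 10**shift    # one "double digit" partial product
        b //= 100
        shift += 2
    return result

print(double_digit_decimal_multiply(12345, 6789) == 12345 * 6789)   # True
```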

Relevance:

80.00%

Publisher:

Abstract:

We consider the linear regression model y = X b + e under the usual assumptions, and assume in addition that the parameter vector lies in an ellipsoid. An optimal estimator for the parameter vector is given by the minimax estimator. After a decision-theoretic formulation of the minimax estimation problem, approaches to determining the minimax estimator via the Bayesian approach, spectral methods and the representation of Hoffmann and Läuter are presented and related to one another. A study of models with three explanatory variables and a common eigenvector leads to a structuring of the problem according to the multiplicity of the largest eigenvalue. In one previously unsolved case, determining the minimax estimator can be reduced to finding a root of a nonlinear real-valued function. An example is given in which this root cannot be expressed in radicals. The root can nevertheless be computed numerically by nested intervals (bisection) or by Newton's method. By deriving a fixed-point equation from the representation of Hoffmann and Läuter, it was possible to find the desired solutions in a simulation.
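
The two numerical routes mentioned above (bisection or Newton's method for the root, and iteration of a fixed-point equation) are sketched generically below; the test equation is an arbitrary illustration, not the actual minimax condition.

```python
# Generic numerical sketch of the root-finding and fixed-point iteration
# steps mentioned above, on an illustrative scalar equation.

def bisect(f, a, b, tol=1e-12):
    """Root of f on [a, b], assuming f(a) and f(b) have opposite signs."""
    while b - a > tol:
        m = 0.5 * (a + b)
        if f(a) * f(m) <= 0:
            b = m
        else:
            a = m
    return 0.5 * (a + b)

def fixed_point(g, x0, tol=1e-12, max_iter=1000):
    """Iterate x <- g(x) until the update is below tol (assumes g is a contraction)."""
    x = x0
    for _ in range(max_iter):
        x_new = g(x)
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

f = lambda x: x**3 - 2 * x - 5                               # illustrative nonlinear equation
print(bisect(f, 2.0, 3.0))                                   # ~2.0945514815
print(fixed_point(lambda x: (2 * x + 5) ** (1 / 3), 2.0))    # same root, rewritten as x = g(x)
```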

Relevance:

80.00%

Publisher:

Abstract:

The starting point of this dissertation is a method developed by V. Maz'ya for approximating a given function f : R^n -> R by a linear combination f_h of smooth, radial, exponentially decaying basis functions which, in contrast to splines, form only an approximate partition of unity and therefore define a method that does not converge as h -> 0. This method became known under the name Approximate Approximations. It turns out, however, that this lack of convergence is irrelevant in practice, since the error between f and the approximation f_h can be pushed below the machine precision of present-day computers through a suitable choice of parameters. Moreover, the method has great advantages for the numerical solution of Cauchy problems of the form Lu = f with a suitable linear partial differential operator L on R^n. If the right-hand side f is approximated by f_h, explicit formulas for the corresponding approximate volume potentials u_h can be given in many cases, involving only a one-dimensional integral (e.g. the error function). For the numerical solution of boundary value problems, the method developed by Maz'ya had not yet been used, apart from heuristic and experimental considerations of the so-called boundary point method. This is where the dissertation starts. Based on radial basis functions, a new approximation method is developed which carries the advantages of the method Maz'ya developed for Cauchy problems over to the numerical solution of boundary value problems. As representative examples, the interior Dirichlet problem for the Laplace equation and for the Stokes equations in R^2 is treated; for each of the individual approximation steps, convergence is investigated and error estimates are given.
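
A minimal one-dimensional sketch of the quasi-interpolation idea behind approximate approximations, assuming the standard Gaussian generating function; the parameter choices are illustrative, and the scheme does not converge as h -> 0, but its saturation error can be pushed below machine precision by increasing D.

```python
# Minimal 1D sketch of Gaussian quasi-interpolation ("approximate
# approximation"): f is approximated by Gaussians centred at grid points,
# which form only an approximate partition of unity.
import math

def quasi_interpolate(f, x, h, D, m_range):
    """f_h(x) = (pi*D)^(-1/2) * sum_m f(m*h) * exp(-(x - m*h)**2 / (D*h**2))."""
    s = 0.0
    for m in m_range:
        s += f(m * h) * math.exp(-((x - m * h) ** 2) / (D * h * h))
    return s / math.sqrt(math.pi * D)

f = math.sin
x, h, D = 0.7, 0.05, 2.0
approx = quasi_interpolate(f, x, h, D, range(-200, 201))
print(approx, f(x))   # close, although the scheme does not converge as h -> 0
```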

Relevance:

80.00%

Publisher:

Abstract:

The approximation method introduced by Maz'ya, the method of approximate approximations, can also be used for the numerical solution of boundary integral equations (the boundary point method). In this case, the entries of the matrix of the resulting linear system for computing the approximate density depend only on the positions of the boundary points and the directions of the outer unit normals at these points. This numerical method is studied for the Dirichlet problem for the Laplace equation and the Stokes equations in a bounded two-dimensional domain. The boundary point method consists of three steps. In the first step, the unknown density is approximated by a linear combination of radial, exponentially decaying basis functions. In the second step, the integration over the boundary is replaced by integration over the tangent lines at the boundary points; analytic expressions can even be obtained for the resulting approximate potentials. In the third step, the linear system is solved, and an approximation of the unknown density, and hence of the solution of the boundary value problem, is constructed. Convergence of this method is proved for smooth convex domains.

Relevance:

80.00%

Publisher:

Abstract:

We study four measures of problem instance behavior that might account for the observed differences in interior-point method (IPM) iterations when these methods are used to solve semidefinite programming (SDP) problem instances: (i) an aggregate geometry measure related to the primal and dual feasible regions (aspect ratios) and norms of the optimal solutions, (ii) the (Renegar-) condition measure C(d) of the data instance, (iii) a measure of the near-absence of strict complementarity of the optimal solution, and (iv) the level of degeneracy of the optimal solution. We compute these measures for the SDPLIB suite problem instances and measure the correlation between these measures and IPM iteration counts (solved using the software SDPT3) when the measures have finite values. Our conclusions are roughly as follows: the aggregate geometry measure is highly correlated with IPM iterations (CORR = 0.896), and is a very good predictor of IPM iterations, particularly for problem instances with solutions of small norm and aspect ratio. The condition measure C(d) is also correlated with IPM iterations, but less so than the aggregate geometry measure (CORR = 0.630). The near-absence of strict complementarity is weakly correlated with IPM iterations (CORR = 0.423). The level of degeneracy of the optimal solution is essentially uncorrelated with IPM iterations.
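
The correlation figures quoted above are ordinary sample correlations between an instance measure and the iteration count; a minimal sketch with made-up placeholder numbers (not SDPLIB results):

```python
# Minimal sketch of the kind of correlation reported above: Pearson
# correlation between a problem-instance measure and IPM iteration counts.
# The arrays below are placeholders, not SDPLIB results.
import numpy as np

geometry_measure = np.array([1.2, 3.5, 0.8, 7.1, 2.4, 5.0])
ipm_iterations = np.array([14, 23, 12, 31, 18, 27])
print(np.corrcoef(geometry_measure, ipm_iterations)[0, 1])   # sample correlation
```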

Relevance:

80.00%

Publisher:

Abstract:

In this work, a methodology for including higher-order moments in portfolio selection is implemented, using the Generalized Hyperbolic Distribution, followed by a comparative analysis against the Markowitz model.
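
A minimal sketch of what "higher-order moments" adds to the Markowitz picture: besides the portfolio mean and variance, the skewness and kurtosis of portfolio returns are also evaluated; the returns and weights below are simulated placeholders, and the Generalized Hyperbolic fit itself is not shown.

```python
# Minimal sketch: portfolio moments beyond mean-variance. Asset returns
# and weights are simulated placeholders, not data from the study.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
returns = rng.standard_t(df=5, size=(1000, 3)) * 0.01   # heavy-tailed asset returns
weights = np.array([0.5, 0.3, 0.2])

portfolio = returns @ weights
print("mean     ", portfolio.mean())
print("variance ", portfolio.var(ddof=1))
print("skewness ", stats.skew(portfolio))
print("kurtosis ", stats.kurtosis(portfolio))   # excess kurtosis > 0 for heavy tails
```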