979 results for Piecewise linear techniques
Abstract:
We establish a one-to-one correspondence between the renormalizations and the proper totally invariant closed sets (i.e., α-limit sets) of an expanding Lorenz map, which enables us to distinguish periodic and non-periodic renormalizations. We describe the minimal renormalization by constructing the minimal totally invariant closed set, so that we can define the renormalization operator. Using consecutive renormalizations, we obtain a complete topological characterization of α-limit sets and a nonwandering set decomposition. For a piecewise linear Lorenz map with slopes ≥ 1, we show that each renormalization is periodic and every proper α-limit set is countable.
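For orientation, a standard definition of an expanding Lorenz map (notation ours; the paper's exact hypotheses may differ slightly):

```latex
% An expanding Lorenz map with critical point c \in (0,1):
f \colon [0,1] \setminus \{c\} \to [0,1], \qquad
f \text{ continuous and strictly increasing on } [0,c) \text{ and } (c,1],
\qquad
\lim_{x \to c^-} f(x) = 1, \quad \lim_{x \to c^+} f(x) = 0,
% together with the expansion condition
\inf_{x \neq c} f'(x) > 1 .
```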
Abstract:
This paper examines the impact of urban sprawl, a phenomenon of particular interest in Spain, which is currently experiencing this process of rapid, low-density urban expansion. Many adverse consequences are attributed to urban sprawl (e.g., traffic congestion, air pollution and social segregation), though here we are concerned primarily with the rising costs of providing local public services. Our initial aim is to develop an accurate measure of urban sprawl so that we might empirically test its impact on municipal budgets. Then, we undertake an empirical analysis using a cross-sectional data set of 2,500 Spanish municipalities for the year 2003 and a piecewise linear function to account for the potentially nonlinear relationship between sprawl and local costs. The estimations derived from the expenditure equations for both aggregate and six disaggregated spending categories indicate that low-density development patterns lead to greater provision costs of local public services.
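The piecewise linear expenditure specification mentioned above can be sketched with a hinge (broken-stick) regression basis; the variable names, knot location, and synthetic data below are illustrative, not the paper's:

```python
import numpy as np

def fit_piecewise(x, y, knot):
    """Least-squares fit of y = b0 + b1*x + b2*max(x - knot, 0)."""
    X = np.column_stack([np.ones_like(x), x, np.maximum(x - knot, 0.0)])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta

# Synthetic example: cost per capita falls steeply with density up to
# a knot, then flattens (slope -8 before, -2 after the knot at 5).
rng = np.random.default_rng(0)
density = rng.uniform(0, 10, 200)
cost = 100 - 8 * density + 6 * np.maximum(density - 5, 0)
beta = fit_piecewise(density, cost, knot=5.0)
print(beta)  # ~[100, -8, 6] on this noiseless data
```

On noiseless data the basis recovers the two slopes exactly; in practice the knot itself can be chosen by profiling the residual sum of squares over candidate knots.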
Abstract:
We describe an explicit relationship between strand diagrams and piecewise-linear functions for elements of Thompson’s group F. Using this correspondence, we investigate the dynamics of elements of F, and we show that conjugacy of one-bump functions can be described by a Mather-type invariant.
Stabilized Petrov-Galerkin methods for the convection-diffusion-reaction and the Helmholtz equations
Abstract:
We present two new stabilized high-resolution numerical methods for the convection–diffusion–reaction (CDR) and the Helmholtz equations, respectively. The work embarks upon an a priori analysis of some consistency-recovery procedures for stabilization methods belonging to the Petrov–Galerkin framework. It was found that the use of some standard practices (e.g. M-matrix theory) for the design of essentially non-oscillatory numerical methods is not feasible when consistency-recovery methods are employed. Hence, with respect to convective stabilization, such recovery methods are not preferred. Next, we present the design of a high-resolution Petrov–Galerkin (HRPG) method for the 1D CDR problem. The problem is studied from a fresh point of view, including practical implications for the formulation of the maximum principle, M-matrix theory, monotonicity and total variation diminishing (TVD) finite volume schemes. The proposed method follows earlier methods that may be viewed as an upwinding operator plus a discontinuity-capturing operator. Some remarks are then made on the extension of the HRPG method to multiple dimensions. Finally, we present a new numerical scheme for the Helmholtz equation that yields quasi-exact solutions. The focus is on approximating the solution to the Helmholtz equation in the interior of the domain using compact stencils. Piecewise linear/bilinear polynomial interpolation is considered on a structured mesh/grid. The only a priori requirement is a mesh/grid resolution of at least eight elements per wavelength. No stabilization parameters are involved in the definition of the scheme. The scheme consists of taking the average of the equation stencils obtained by the standard Galerkin finite element method and the classical finite difference method. Dispersion analyses in 1D and 2D illustrate the quasi-exact properties of this scheme.
Finally, some remarks are made on the extension of the scheme to unstructured meshes by designing a method within the Petrov–Galerkin framework.
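The stencil-averaging idea behind the Helmholtz scheme can be checked in 1D. The sketch below (our own construction, not the authors' code) applies the three-point stencil for u'' + k²u = 0 to a plane wave and compares the residuals of the finite-difference, Galerkin, and averaged versions:

```python
import numpy as np

def stencil_residual(k, h, mass_weights):
    """Residual factor when a 3-point stencil for u'' + k^2 u = 0 is
    applied to the plane wave sin(k x); mass_weights = (side, middle)."""
    w_side, w_mid = mass_weights
    lap = (2.0 * np.cos(k * h) - 2.0) / h**2            # second difference
    mass = k**2 * (2.0 * w_side * np.cos(k * h) + w_mid)
    return lap + mass

k, h = 2.0 * np.pi, 1.0 / 16.0                # 16 points per wavelength
fd  = stencil_residual(k, h, (0.0, 1.0))      # lumped mass: classical FD
fem = stencil_residual(k, h, (1 / 6, 4 / 6))  # consistent mass: Galerkin FEM
avg = stencil_residual(k, h, (1 / 12, 10 / 12))  # average of the two
print(abs(fd), abs(fem), abs(avg))            # averaged residual is far smaller
```

The FD and FEM residuals are O(h²) with opposite signs, so their average cancels the leading error term and leaves an O(h⁴) residual, which is the mechanism behind the quasi-exact behavior described above.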
Multigrid preconditioner for nonconforming discretization of elliptic problems with jump coefficients
Abstract:
In this paper, we present a multigrid preconditioner for solving the linear system arising from the piecewise linear nonconforming Crouzeix-Raviart discretization of second-order elliptic problems with jump coefficients. The preconditioner uses the standard conforming subspaces as coarse spaces. Numerical tests show both robustness with respect to the jump in the coefficient and near-optimality with respect to the number of degrees of freedom.
Abstract:
OBJECTIVES: To compare immunological, virological and clinical outcomes in persons initiating combination antiretroviral therapy (cART) of different durations within 6 months of seroconversion (early treated) with those who deferred therapy (deferred group). DESIGN: CD4 cell and HIV-RNA measurements for 'early treated' individuals following treatment cessation were compared with the corresponding ART-free period for the 'deferred' group using piecewise linear mixed models. Individuals identified during primary HIV infection were included if they seroconverted from 1 January 1996 onwards and were at least 15 years of age at seroconversion. Those with at least two CD4 counts below 350 cells/μl or AIDS within the first 6 months following seroconversion were excluded. RESULTS: Of 348 'early treated' patients, 147 stopped cART following treatment for at least 6 (n = 38), more than 6-12 (n = 40) or more than 12 months (n = 69). CD4 cell loss was steeper for the first 6 months following cART cessation, but the subsequent loss rate was similar to that of the 'deferred' group (n = 675, P = 0.26). Although those treated for more than 12 months appeared to maintain higher CD4 cell counts following cART cessation, those treated for 12 months or less had CD4 cell counts 6 months after cessation comparable to those in the 'deferred' group. There was no difference in HIV-RNA set points between the 'early' and 'deferred' groups (P = 0.57). AIDS rates were similar but death rates, mainly due to non-AIDS causes, were higher in the 'deferred' group (P = 0.05). CONCLUSION: Transient cART, initiated within 6 months of seroconversion, seems to have no effect on viral load set point and only a limited beneficial effect on CD4 cell levels in individuals treated for more than 12 months. Its long-term effects remain inconclusive and need further investigation.
Abstract:
Statistical computing when input/output is driven by a Graphical User Interface is considered. A proposal is made for automatic control of computational flow to ensure that only strictly required computations are actually carried out. The computational flow is modeled by a directed graph for implementation in any object-oriented programming language with symbolic manipulation capabilities. A complete implementation example is presented to compute and display frequency-based piecewise linear density estimators such as histograms or frequency polygons.
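The directed-graph control of computational flow can be sketched as a small dependency graph with stale-flag invalidation; the class and method names are hypothetical, not the paper's implementation:

```python
class Node:
    """A computation node that recomputes only when marked stale."""
    def __init__(self, func, *parents):
        self.func, self.parents = func, parents
        self.children, self.cache, self.stale = [], None, True
        for p in parents:
            p.children.append(self)

    def invalidate(self):
        self.stale = True
        for c in self.children:          # propagate downstream only
            c.invalidate()

    def value(self):
        if self.stale:                   # compute only when strictly required
            self.cache = self.func(*(p.value() for p in self.parents))
            self.stale = False
        return self.cache

# data -> histogram bin counts; changing the data invalidates the counts
calls = []
def bins(xs):
    calls.append(1)                      # track actual recomputations
    out = {}
    for x in xs:
        out[int(x)] = out.get(int(x), 0) + 1
    return out

data = Node(lambda: [1.0, 1.2, 2.8, 3.1])
hist = Node(bins, data)
h1 = hist.value()
h2 = hist.value()      # served from cache, bins() is not called again
data.invalidate()      # e.g. a GUI event replaces the data
h3 = hist.value()      # recomputed on demand
```

The GUI only ever calls `invalidate()` on changed inputs and `value()` on displayed outputs; everything in between is recomputed lazily and at most once.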
Abstract:
Age-related changes in lumbar vertebral microarchitecture are evaluated, as assessed by trabecular bone score (TBS), in a cohort of 5,942 French women. The magnitude of TBS decline between 45 and 85 years of age is piecewise linear in the spine and averaged 14.5 %. The TBS decline rate increases by 50 % after 65 years. INTRODUCTION: This study aimed to evaluate age-related changes in lumbar vertebral microarchitecture, as assessed by TBS, in a cohort of French women aged 45-85 years. METHODS: An all-comers cohort of French Caucasian women was selected from two clinical centers. Data obtained from these centers were cross-calibrated for TBS and bone mineral density (BMD). BMD and TBS were evaluated at L1-L4 and for all lumbar vertebrae combined using GE-Lunar Prodigy densitometer images. Weight, height, and body mass index (BMI) also were determined. To validate our all-comers cohort, the BMD normative data of our cohort and French Prodigy data were compared. RESULTS: A cohort of 5,942 French women aged 45 to 85 years was created. Dual-energy X-ray absorptiometry normative data obtained for BMD from this cohort were not significantly different from French Prodigy normative data (p = 0.15). TBS values at L1-L4 were poorly correlated with BMI (r = -0.17) and weight (r = -0.14) and not correlated with height. TBS values obtained for all lumbar vertebrae combined (L1, L2, L3, L4) decreased with age. The magnitude of TBS decline at L1-L4 between 45 and 85 years of age was piecewise linear in the spine and averaged 14.5 %, but this rate increased by 50 % after 65 years. Similar results were obtained for other regions of interest in the lumbar spine. As opposed to BMD, TBS was not affected by spinal osteoarthrosis. CONCLUSION: The age-specific reference curve for TBS generated here could therefore be used to help clinicians improve osteoporosis patient management and to monitor microarchitectural changes related to treatment or other diseases in routine clinical practice.
Abstract:
We propose a finite element approximation of a system of partial differential equations describing the coupling between the propagation of electrical potential and large deformations of the cardiac tissue. The underlying mathematical model is based on the active strain assumption, in which it is assumed that a multiplicative decomposition of the deformation tensor into a passive and an active part holds, the latter carrying the information of the electrical potential propagation and anisotropy of the cardiac tissue into the equations of either incompressible or compressible nonlinear elasticity, governing the mechanical response of the biological material. In addition, by changing from an Eulerian to a Lagrangian configuration, the bidomain or monodomain equations modeling the evolution of the electrical propagation exhibit a nonlinear diffusion term. Piecewise quadratic finite elements are employed to approximate the displacement field, whereas the pressure, electrical potentials and ionic variables are approximated by piecewise linear elements. Various numerical tests performed with a parallel finite element code illustrate that the proposed model can capture some important features of the electromechanical coupling, and show that our numerical scheme is efficient and accurate.
Abstract:
In this article we study the effect of uncertainty on an entrepreneur who must choose the capacity of his business before knowing the demand for his product. The unit profit of operation is known with certainty but there is no flexibility in our one-period framework. We show how the introduction of global uncertainty reduces the investment of the risk neutral entrepreneur and, even more, that of the risk averse one. We also show how marginal increases in risk reduce the optimal capacity of both the risk neutral and the risk averse entrepreneur, without any restriction on the concave utility function and with limited restrictions on the definition of a mean preserving spread. These general results are explained by the fact that the newsboy has a piecewise-linear, and concave, monetary payoff with a kink endogenously determined at the level of optimal capacity. Our results are compared with those in the two literatures on price uncertainty and demand uncertainty and, particularly, with the recent contributions of Eeckhoudt, Gollier and Schlesinger (1991, 1995).
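The kinked newsboy payoff for the risk neutral case can be illustrated numerically; the prices and demand distribution below are hypothetical:

```python
import numpy as np

p, c = 10.0, 3.25                 # unit revenue and unit capacity cost
demand = np.arange(100)           # demand uniform on {0, ..., 99}
prob = np.full(100, 0.01)

def expected_profit(q):
    """Piecewise-linear payoff p*min(q, D) - c*q, averaged over demand."""
    return float(np.sum(prob * p * np.minimum(q, demand)) - c * q)

profits = [expected_profit(q) for q in range(100)]
q_star = int(np.argmax(profits))

# Classical critical fractile: smallest q with F(q) >= (p - c) / p
F = np.cumsum(prob)
q_fractile = int(np.argmax(F >= (p - c) / p))
print(q_star, q_fractile)   # both equal 67 for these numbers
```

The payoff p·min(q, D) − c·q is linear in demand on either side of the kink at D = q, which is exactly the structure the abstract exploits; the brute-force maximizer coincides with the critical-fractile solution.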
Abstract:
Locality to other nodes on a peer-to-peer overlay network can be established by means of a set of landmarks shared among the participating nodes. Each node independently collects a set of latency measures to landmark nodes, which are used as a multi-dimensional feature vector. Each peer node uses the feature vector to generate a unique scalar index which is correlated to its topological locality. A popular dimensionality reduction technique is the space-filling Hilbert curve, as it possesses good locality-preserving properties. However, there exists little comparison between Hilbert's curve and other techniques for dimensionality reduction. This work carries out a quantitative analysis of their properties. Linear and non-linear techniques for scaling the landmark vectors to a single dimension are investigated. Hilbert's curve, Sammon's mapping and Principal Component Analysis have been used to generate a 1D space with locality-preserving properties. This work provides empirical evidence to support the use of Hilbert's curve in the context of locality preservation when generating peer identifiers by means of landmark vector analysis. A comparative analysis is carried out with an artificial 2D network model and with a realistic network topology model with a typical power-law distribution of node connectivity in the Internet. Nearest-neighbour analysis confirms Hilbert's curve to be very effective in both artificial and realistic network topologies. Nevertheless, the results in the realistic network model show that there is scope for improvement, and better techniques to preserve locality information are required.
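Hilbert-curve index generation from a 2D feature vector can be sketched with the classical bit-manipulation mapping (a textbook construction, not the paper's code):

```python
def xy2d(n, x, y):
    """Hilbert-curve index of cell (x, y) on an n x n grid (n a power of 2).
    Nearby cells receive nearby indices, which is the locality property
    exploited when deriving scalar peer identifiers."""
    d = 0
    s = n // 2
    while s > 0:
        rx = 1 if x & s else 0
        ry = 1 if y & s else 0
        d += s * s * ((3 * rx) ^ ry)
        if ry == 0:               # rotate/flip the quadrant into canonical position
            if rx == 1:
                x, y = n - 1 - x, n - 1 - y
            x, y = y, x
        s //= 2
    return d

# Cells of a 4x4 grid listed in curve order: consecutive cells are adjacent
order = sorted(((x, y) for x in range(4) for y in range(4)),
               key=lambda c: xy2d(4, c[0], c[1]))
print(order[:4])   # [(0, 0), (1, 0), (1, 1), (0, 1)]
```

For higher-dimensional landmark vectors the same idea applies coordinate-pair by coordinate-pair, though the paper's actual landmark dimensionality is not reproduced here.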
Abstract:
Liquid chromatography-mass spectrometry (LC-MS) datasets can be compared or combined following chromatographic alignment. Here we describe a simple solution to the specific problem of aligning one LC-MS dataset and one LC-MS/MS dataset, acquired on separate instruments from an enzymatic digest of a protein mixture, using feature extraction and a genetic algorithm. First, the LC-MS dataset is searched within a few ppm of the calculated theoretical masses of peptides confidently identified by LC-MS/MS. A piecewise linear function is then fitted to these matched peptides using a genetic algorithm with a fitness function that is insensitive to incorrect matches but sufficiently flexible to adapt to the discrete shifts common when comparing LC datasets. We demonstrate the utility of this method by aligning ion trap LC-MS/MS data with accurate LC-MS data from an FTICR mass spectrometer and show how hybrid datasets can improve peptide and protein identification by combining the speed of the ion trap with the mass accuracy of the FTICR, similar to using a hybrid ion trap-FTICR instrument. We also show that the high resolving power of FTICR can improve precision and linear dynamic range in quantitative proteomics. The alignment software, msalign, is freely available as open source.
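The outlier-insensitive piecewise fit can be illustrated with a simplified, deterministic stand-in for the paper's genetic algorithm: a grid search over candidate breakpoints with a robust Theil-Sen fit per segment and an inlier-count fitness. All data, names, and tolerances below are synthetic:

```python
import numpy as np

def theil_sen(x, y):
    """Robust line fit: median of pairwise slopes, median intercept."""
    slopes = [(y[j] - y[i]) / (x[j] - x[i])
              for i in range(len(x)) for j in range(i + 1, len(x))]
    m = float(np.median(slopes))
    b = float(np.median(y - m * x))
    return m, b

def fit_breakpoint(x, y, grid):
    """Pick the breakpoint whose two-segment robust fit aligns the most
    points, so a few incorrect matches cannot dominate the fitness."""
    best_bp, best_score = None, -1
    for cand in grid:
        left = x < cand
        m1, b1 = theil_sen(x[left], y[left])
        m2, b2 = theil_sen(x[~left], y[~left])
        pred = np.where(left, m1 * x + b1, m2 * x + b2)
        score = int(np.sum(np.abs(y - pred) < 0.5))   # count of well-aligned points
        if score > best_score:
            best_bp, best_score = cand, score
    return best_bp

# Synthetic retention times: the slope changes at t = 50; three gross mismatches.
x = np.arange(0.0, 100.0, 2.5)
y = np.where(x < 50, x, 50 + 1.2 * (x - 50))
y[[5, 17, 30]] += 40.0            # incorrect peptide matches
bp = fit_breakpoint(x, y, range(30, 71))
print(bp)
```

The consensus-style fitness (counting aligned points rather than summing squared residuals) is what makes the fit insensitive to incorrect matches, which is the property the abstract attributes to its GA fitness function.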
Abstract:
Tremor is a clinical feature characterized by oscillations of a part of the body. The detection and study of tremor is an important step in investigations seeking to explain underlying control strategies of the central nervous system under natural (or physiological) and pathological conditions. It is well established that tremorous activity is composed of deterministic and stochastic components. For this reason, the use of digital signal processing (DSP) techniques that take into account the nonlinearity and nonstationarity of such signals may bring into the signal analysis new information that is often obscured by traditional linear techniques (e.g. Fourier analysis). In this context, this paper introduces the application of the empirical mode decomposition (EMD) and the Hilbert spectrum (HS), which are relatively new DSP techniques for the analysis of nonlinear and nonstationary time series, to the study of tremor. Our results, obtained from the analysis of experimental signals collected from 31 patients with different neurological conditions, showed that the EMD could automatically decompose acquired signals into basic components, called intrinsic mode functions (IMFs), representing tremorous and voluntary activity. The identification of a physical meaning for IMFs in the context of tremor analysis suggests an alternative and new way of detecting tremorous activity. These results may be relevant for those applications requiring automatic detection of tremor. Furthermore, the energy of IMFs was visualized as a function of time and frequency by means of the HS. This analysis showed that the variation of energy of tremorous and voluntary activity could be distinguished and characterized on the HS. Such results may be relevant for those applications aiming to identify neurological disorders.
In general, both the HS and the EMD proved very useful for objective analysis of any kind of tremor and could therefore be used for functional assessment.
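The EMD sifting step can be sketched as follows; this is a bare-bones illustration (endpoint-anchored cubic envelopes, fixed number of sifts), not the authors' processing pipeline:

```python
import numpy as np
from scipy.interpolate import CubicSpline

def sift(x, t, n_sifts=8):
    """Extract a first intrinsic mode function by repeatedly removing
    the mean of the upper and lower extrema envelopes."""
    h = x.copy()
    for _ in range(n_sifts):
        d = np.diff(h)
        maxima = np.where((d[:-1] > 0) & (d[1:] <= 0))[0] + 1
        minima = np.where((d[:-1] < 0) & (d[1:] >= 0))[0] + 1
        if len(maxima) < 2 or len(minima) < 2:
            break
        knots_up = np.r_[0, maxima, len(t) - 1]   # anchor envelopes at endpoints
        knots_lo = np.r_[0, minima, len(t) - 1]
        up = CubicSpline(t[knots_up], h[knots_up])(t)
        lo = CubicSpline(t[knots_lo], h[knots_lo])(t)
        h = h - 0.5 * (up + lo)                   # remove the local mean
    return h

t = np.linspace(0.0, 2.0, 2000)
fast = np.sin(2 * np.pi * 12 * t)   # tremor-like oscillation
slow = np.sin(2 * np.pi * 1 * t)    # voluntary-movement-like drift
imf1 = sift(fast + slow, t)
core = slice(200, 1800)             # ignore spline end effects
r = np.corrcoef(imf1[core], fast[core])[0, 1]
print(r)                            # close to 1: the fast component is isolated
```

The first IMF captures the fastest oscillatory mode, which is why tremorous activity tends to separate from slower voluntary movement in the first few IMFs.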
Abstract:
Total ozone trends are typically studied using linear regression models that assume a first-order autoregression of the residuals [so-called AR(1) models]. We consider total ozone time series over 60°S–60°N from 1979 to 2005 and show that most latitude bands exhibit long-range correlated (LRC) behavior, meaning that ozone autocorrelation functions decay by a power law rather than exponentially as in AR(1). At such latitudes the uncertainties of total ozone trends are greater than those obtained from AR(1) models and the expected time required to detect ozone recovery is correspondingly longer. We find no evidence of LRC behavior in southern middle- and high-subpolar latitudes (45°–60°S), where the long-term ozone decline attributable to anthropogenic chlorine is the greatest. We thus confirm an earlier prediction based on an AR(1) analysis that this region (especially the highest latitudes, and especially the South Atlantic) is the optimal location for the detection of ozone recovery, with a statistically significant ozone increase attributable to chlorine likely to be detectable by the end of the next decade. In northern middle and high latitudes, on the other hand, there is clear evidence of LRC behavior. This increases the uncertainties on the long-term trend attributable to anthropogenic chlorine by about a factor of 1.5 and lengthens the expected time to detect ozone recovery by a similar amount (from ∼2030 to ∼2045). If the long-term changes in ozone are instead fit by a piecewise-linear trend rather than by stratospheric chlorine loading, then the strong decrease of northern middle- and high-latitude ozone during the first half of the 1990s and its subsequent increase in the second half of the 1990s projects more strongly on the trend and makes a smaller contribution to the noise.
This both increases the trend and weakens the LRC behavior at these latitudes, to the extent that ozone recovery (according to this model, and in the sense of a statistically significant ozone increase) is already on the verge of being detected. The implications of this rather controversial interpretation are discussed.
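The AR(1) baseline that the LRC analysis generalizes has a simple closed-form inflation of trend uncertainty: with lag-one autocorrelation φ, the trend variance grows by (1 + φ)/(1 − φ). A Monte Carlo sketch with illustrative numbers (not the paper's data):

```python
import numpy as np

def trend_var_inflation(phi):
    """Asymptotic variance inflation for a trend fitted to AR(1) noise,
    equivalently an effective sample size n_eff = n * (1 - phi) / (1 + phi)."""
    return (1.0 + phi) / (1.0 - phi)

rng = np.random.default_rng(1)
n, phi, trials = 200, 0.5, 2000
tt = np.arange(n)

def ols_slope(y):
    return np.polyfit(tt, y, 1)[0]

white = np.array([ols_slope(rng.standard_normal(n)) for _ in range(trials)])
ar1 = []
for _ in range(trials):
    e = rng.standard_normal(n)
    y = np.empty(n)
    y[0] = e[0]
    for i in range(1, n):
        y[i] = phi * y[i - 1] + e[i]
    # rescale to unit marginal variance for a fair comparison with white noise
    ar1.append(ols_slope(y * np.sqrt(1 - phi**2)))
ratio = np.var(ar1) / np.var(white)
print(ratio, trend_var_inflation(phi))   # ratio near 3 for phi = 0.5
```

For LRC noise no such fixed factor exists: the effective inflation keeps growing with record length, which is why the LRC trends in the abstract carry larger uncertainties and longer detection times than their AR(1) counterparts.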