203 results for Smoothness
Abstract:
Pardo, Patie, and Savov derived, under mild conditions, a Wiener-Hopf type factorization for the exponential functional of proper Lévy processes. In this paper, we extend this factorization by relaxing a finite moment assumption as well as by considering the exponential functional of killed Lévy processes. As a by-product, we derive some interesting fine distributional properties enjoyed by a large class of these random variables, such as the absolute continuity of the distribution and the smoothness, boundedness or complete monotonicity of its density. This type of result is then used to derive similar properties for the law of the maxima and first passage times of some stable Lévy processes. Thus, for example, we show that for any stable process with $\rho\in(0,\frac{1}{\alpha}-1]$, where $\rho\in[0,1]$ is the positivity parameter and $\alpha$ is the stable index, the first passage time has a bounded and non-increasing density on $\mathbb{R}_+$. We also generate many instances of integral or power series representations for the law of the exponential functional of Lévy processes with one- or two-sided jumps. The proof of our main results requires devices different from those developed by Pardo, Patie, and Savov. It relies in particular on a generalization of a transform recently introduced by Chazal et al., together with some extensions to killed Lévy processes of Wiener-Hopf techniques. The factorizations developed here also allow for further applications, which we only indicate here.
Abstract:
We propose a Nyström/product integration method for a class of second-kind integral equations on the real line which arise in problems of two-dimensional scalar and elastic wave scattering by unbounded surfaces. Stability and convergence of the method are established, with convergence rates dependent on the smoothness of components of the kernel. The method is applied to the problem of acoustic scattering by a sound-soft one-dimensional surface which is the graph of a function f, and superalgebraic convergence is established in the case when f is infinitely smooth. Numerical results are presented illustrating this behavior for the case when f is periodic (the diffraction grating case). The Nyström method for this problem is stable and convergent uniformly with respect to the period of the grating, in contrast to standard integral equation methods for diffraction gratings, which fail at a countable set of grating periods.
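The basic Nyström idea underlying this abstract can be sketched for a generic second-kind equation on a finite interval: replace the integral by a quadrature rule and collocate at the quadrature nodes. The unit interval, the separable kernel, and the plain Gauss-Legendre rule below are illustrative assumptions, not the paper's graded product-integration rule or its rough-surface scattering kernel.

```python
import numpy as np

def nystrom_solve(kernel, f, n=20):
    """Solve u(x) - int_0^1 k(x,y) u(y) dy = f(x) by the Nystrom method
    with an n-point Gauss-Legendre rule: discretize the integral, then
    solve the resulting linear system (I - K W) u = f at the nodes."""
    t, w = np.polynomial.legendre.leggauss(n)  # nodes/weights on [-1, 1]
    x = 0.5 * (t + 1.0)                        # map nodes to [0, 1]
    w = 0.5 * w                                # rescale weights accordingly
    K = kernel(x[:, None], x[None, :])         # matrix of k(x_i, y_j)
    A = np.eye(n) - K * w[None, :]             # (I - K W)
    u = np.linalg.solve(A, f(x))
    return x, u

# Toy problem with known solution u(x) = x:
# k(x, y) = x*y gives f(x) = x - x/3 = 2x/3.
x, u = nystrom_solve(lambda x, y: x * y, lambda x: 2.0 * x / 3.0)
err = np.abs(u - x).max()
```

Because the quadrature is exact for the polynomial integrand of this toy kernel, the Nyström solution here reproduces the exact solution to machine precision; for the scattering kernels of the paper, the quadrature must instead be adapted to the kernel's smoothness, which is what the product-integration variant does.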
First order k-th moment finite element analysis of nonlinear operator equations with stochastic data
Abstract:
We develop and analyze a class of efficient Galerkin approximation methods for uncertainty quantification of nonlinear operator equations. The algorithms are based on sparse Galerkin discretizations of tensorized linearizations at nominal parameters. Specifically, we consider abstract, nonlinear, parametric operator equations $J(\alpha,u)=0$ for random input $\alpha(\omega)$ with almost sure realizations in a neighborhood of a nominal input parameter $\alpha_0$. Under some structural assumptions on the parameter dependence, we prove existence and uniqueness of a random solution, $u(\omega) = S(\alpha(\omega))$. We derive a multilinear, tensorized operator equation for the deterministic computation of k-th order statistical moments of the random solution's fluctuations $u(\omega) - S(\alpha_0)$. We introduce and analyze sparse tensor Galerkin discretization schemes for the efficient, deterministic computation of the k-th statistical moment equation. We prove a shift theorem for the k-point correlation equation in anisotropic smoothness scales and deduce that sparse tensor Galerkin discretizations of this equation converge with an accuracy-versus-complexity ratio which equals, up to logarithmic terms, that of the Galerkin discretization of a single instance of the mean field problem. We illustrate the abstract theory for nonstationary diffusion problems in random domains.
Abstract:
Upscaling ecological information to larger scales in space and downscaling remote sensing observations or model simulations to finer scales remain grand challenges in Earth system science. Downscaling often involves inferring subgrid information from coarse-scale data, and such ill-posed problems are classically addressed using regularization. Here, we apply two-dimensional Tikhonov Regularization (2DTR) to simulate subgrid surface patterns for ecological applications. Specifically, we test the ability of 2DTR to simulate the spatial statistics of high-resolution (4 m) remote sensing observations of the normalized difference vegetation index (NDVI) in a tundra landscape. We find that the 2DTR approach as applied here can capture the major mode of spatial variability of the high-resolution information, but not multiple modes of spatial variability, and that the Lagrange multiplier (γ) used to impose the condition of smoothness across space is related to the range of the experimental semivariogram. We used observed and 2DTR-simulated maps of NDVI to estimate landscape-level leaf area index (LAI) and gross primary productivity (GPP). NDVI maps simulated using a γ value that approximates the range of observed NDVI result in a landscape-level GPP estimate that differs by ca. 2% from the estimate based on observed NDVI. Following findings that GPP per unit LAI is lower near vegetation patch edges, we simulated vegetation patch edges using multiple approaches and found that simulated GPP declined by up to 12% as a result. 2DTR can generate random landscapes rapidly and can be applied to disaggregate ecological information and to compare spatial observations against simulated landscapes.
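The regularized-downscaling setup described above can be sketched as a small linear inverse problem: find a fine-grid field whose coarse-cell averages match the data, with a Laplacian roughness penalty weighted by γ. The grid sizes, block-averaging observation operator, and γ value below are illustrative assumptions, not the authors' NDVI configuration.

```python
import numpy as np

n, bs = 8, 4                  # fine grid n x n, coarse blocks bs x bs
nb = n // bs                  # coarse grid nb x nb

# A: observation operator mapping the fine field to coarse-cell means.
A = np.zeros((nb * nb, n * n))
for bi in range(nb):
    for bj in range(nb):
        for i in range(bs):
            for j in range(bs):
                A[bi * nb + bj, (bi * bs + i) * n + (bj * bs + j)] = 1.0 / bs**2

# L: 2D second-difference (Laplacian-type) operator as the roughness penalty.
D = np.zeros((n - 2, n))
for i in range(n - 2):
    D[i, i:i + 3] = [1.0, -2.0, 1.0]
I = np.eye(n)
L = np.vstack([np.kron(I, D), np.kron(D, I)])

def tikhonov_downscale(b_coarse, gamma):
    """Minimize ||A x - b||^2 + gamma ||L x||^2 via the normal equations."""
    rhs = A.T @ b_coarse.ravel()
    M = A.T @ A + gamma * (L.T @ L)
    return np.linalg.solve(M, rhs).reshape(n, n)

b = np.array([[0.2, 0.8], [0.8, 0.2]])   # synthetic coarse "NDVI" means
x = tikhonov_downscale(b, gamma=1e-3)
```

The penalty term makes the otherwise underdetermined problem uniquely solvable: among all fine fields consistent with the coarse means, the smoothest one (in the ‖Lx‖ sense) is selected, with γ controlling the trade-off.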
Abstract:
We quantified gait and stride characteristics (velocity, frequency, stride length, stance and swing duration, and duty factor) in the bursts of locomotion of two small, intermittently moving, closely related South American gymnophthalmid lizards: Vanzosaura rubricauda and Procellosaurinus tetradactylus. They occur in different environments: V. rubricauda is widely distributed in open areas with various habitats and substrates, while P. tetradactylus is endemic to dunes in the semi-arid Brazilian Caatinga. Both use trot or walking trot characterised by a lateral sequence. On various substrates in a gradient of roughness (perspex, cardboard, sand, gravel), both species have low relative velocities in comparison with those reported for larger, continuously moving lizards. To generate velocity, these animals increase stride frequency but decrease relative stride length. For these parameters, P. tetradactylus showed lower values than V. rubricauda. In their relative range of velocities, no significant differences in stride length and frequency were recorded for gravel. However, the slopes of a correlation between velocity and its components were lower in P. tetradactylus on cardboard, whereas on sand this was only observed for velocity and stride length. The data showed that the difference in rhythmic parameters between the two species increased with the smoothness of the substrates. Moreover, P. tetradactylus shows a highly specialised locomotor strategy involving lower stride length and frequency for generating lower velocities than in V. rubricauda. This suggests the evolution of a central motor pattern generator to control slower limb movements and to produce fewer and longer pauses in intermittent locomotion. (c) 2008 Elsevier GmbH. All rights reserved.
Abstract:
In this paper, we consider codimension one Anosov actions of $\mathbb{R}^k$, $k \ge 1$, on closed connected orientable manifolds of dimension $n + k$ with $n \ge 3$. We show that the fundamental group of the ambient manifold is solvable if and only if the weak foliation of codimension one is transversely affine. We also study the situation where one 1-parameter subgroup of $\mathbb{R}^k$ admits a cross-section, and compare this to the case where the whole action is transverse to a fibration over a manifold of dimension n. As a by-product, generalizing a theorem of Ghys in the case k = 1, we show that, under some assumptions about the smoothness of the sub-bundle $E^{ss} \oplus E^{uu}$, and in the case where the action preserves the volume, it is topologically equivalent to a suspension of a linear Anosov action of $\mathbb{Z}^k$ on $\mathbb{T}^n$.
Abstract:
Environmentally friendly biocomposites were successfully prepared by dissolving chitosan and cellulose in a NaOH/thiourea solvent with subsequent heating and film casting. Under the considered conditions, NaOH/thiourea led to chain depolymerization of both biopolymers without a dramatic loss of film-forming capacity. Compatibility of the two biopolymers in the biocomposite was first assessed through scanning electron microscopy, revealing an organization intermediate between the cellulose fiber network and the smoothness of pure chitosan. DSC analyses showed exothermic peaks close to 285 and 315 degrees C for the biocomposite, compared to the exothermic peaks of chitosan (275 degrees C) and cellulose (265 and 305 degrees C), suggesting interactions between chitosan and cellulose. Contact angle analyses pointed out the deformation that can occur at the surface due to the high affinity of these materials with water. The T2 NMR relaxometry behavior of the biocomposites appeared to be dominated by chitosan. Other properties of the films, such as crystallinity and water sorption isotherms, among others, are also discussed. (C) 2010 Published by Elsevier Ltd.
Abstract:
Four different trials of stratified three-layered fine paper, made of sulphate pulp, were performed to investigate whether a stratified fine fraction or fibres from birch can improve the properties of a paper compared to a reference sheet. All trials had five different scenarios, and each scenario was calendered with a different linear load. All sheets had a grammage of 80 g/m2.
In the first trial, the paper contained birch, pine and a filler of calcium carbonate (marble), and was manufactured with the pilot paper machine XPM and the stratified headbox Formator at RCF (Stora Enso Research Center in Falun). The furnish consisted of 75% birch and 25% pine.
The second trial contained coated sheets with paper from trial one as the base paper. The coating slip contained calcium carbonate and clay, and the coat weight was approximately 10-12 g/m2.
The third trial, also with birch and pine but without filler, was performed at STFI (Skogsindustrins Tekniska Forskningsinstitut in Stockholm) with the laboratory-scale paper machine StratEx and the stratified headbox AQ-vanes. The furnish consisted of 75% birch and 25% pine, except for one scenario which consisted of 75% pine and 25% birch.
The last trial contained fractionated pulp of birch and pine and was performed at STFI; 50% was fine fraction and 50% was coarse fraction.
This test does not show any clear benefits of making stratified sheets of birch and pine when it comes to properties such as bending stiffness, tensile index and surface smoothness. The retention can be improved with birch in the surface plies. It is possible that the formation can be improved with birch in the surface plies and pine in the middle ply. It is also possible that fine fraction in the surface plies and coarse fraction in the middle ply can improve both surface smoothness and bending stiffness.
The results in this test are shown with confidence intervals, which highlights the difficulties of analysing sheets manufactured with a pilot paper machine or a laboratory-scale paper machine.
Abstract:
This paper describes the formulation of a Multi-objective Pipe Smoothing Genetic Algorithm (MOPSGA) and its application to the least-cost water distribution network design problem. Evolutionary Algorithms have been widely utilised for the optimisation of both theoretical and real-world non-linear optimisation problems, including water system design and maintenance problems. In this work we present a pipe smoothing based approach to the creation and mutation of chromosomes which utilises engineering expertise with the view to increasing the performance of the algorithm whilst promoting engineering feasibility within the population of solutions. MOPSGA is based upon the standard Non-dominated Sorting Genetic Algorithm-II (NSGA-II) and incorporates a modified population initialiser and mutation operator which directly target elements of a network with the aim of increasing network smoothness (in terms of progression from one diameter to the next) using network element awareness and an elementary heuristic. The pipe smoothing heuristic used in this algorithm is based upon a fundamental principle employed by water system engineers when designing water distribution pipe networks: the diameter of any pipe is never greater than the sum of the diameters of the pipes directly upstream, resulting in a transition from large to small diameters from the source to the extremities of the network. MOPSGA is assessed on a number of water distribution network benchmarks from the literature, including some real-world-based, large-scale systems. The performance of MOPSGA is directly compared to that of NSGA-II with regard to solution quality, engineering feasibility (network smoothness) and computational efficiency. MOPSGA is shown to promote both engineering and hydraulic feasibility whilst attaining good infrastructure costs compared to NSGA-II.
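The engineers' diameter rule stated above is easy to express as a feasibility check. The data layout below (dicts keyed by pipe id) is a hypothetical encoding for illustration, not MOPSGA's actual chromosome representation.

```python
def smoothness_violations(pipes, upstream):
    """Return the ids of pipes whose diameter exceeds the summed diameter
    of the pipes directly upstream of them (the heuristic described above).

    pipes:    {pipe_id: diameter}
    upstream: {pipe_id: [ids of pipes immediately upstream]}
    Source pipes (with no upstream pipes) are always considered feasible.
    """
    bad = []
    for pid, diameter in pipes.items():
        ups = upstream.get(pid, [])
        if ups and diameter > sum(pipes[u] for u in ups):
            bad.append(pid)
    return bad

# A branching main: a 300 mm source splits into two 200 mm pipes,
# one of which feeds a 250 mm pipe -- which violates the rule.
pipes = {"p1": 300, "p2": 200, "p3": 200, "p4": 250}
upstream = {"p2": ["p1"], "p3": ["p1"], "p4": ["p3"]}
```

In the paper's setting such a check would guide the initialiser and mutation operator (repairing or avoiding violations) rather than merely flagging them.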
Abstract:
In this paper we consider strictly convex, monotone, continuous, complete preorderings on $\mathbb{R}^n_+$ that are locally representable by a concave utility function. By Alexandroff's (1939) theorem, this function is twice differentiable almost everywhere. We show that if the bordered Hessian determinant of a concave utility representation vanishes on a null set, then demand is countably rectifiable, that is, except for a null set of bundles, it is a countable union of $C^1$ manifolds. This property of consumer demand is enough to guarantee that the equilibrium prices of a pure exchange economy will be locally unique for almost every endowment. We give an example of an economy satisfying these conditions but not the Katzner (1968) - Debreu (1970, 1972) smoothness conditions.
The right to housing in sustainable cities: parameters for public housing policies. Natal, 2013
Abstract:
The right to housing is included in several international human rights instruments and, in the Brazilian legal system, belongs to the constitutional catalog of fundamental social rights (art. 6) and to the urban development policy (art. 182 and 183). Moreover, all federative levels of government are responsible for making it effective, through building programs and the improvement of housing and sanitation conditions (art. 23, IX), which justifies investment in urban planning and in public policies for housing affordability, since these are tools for achieving this right. Newer strategies in this area have been based on tax incentives combined with mortgage lending as a way to induce the construction of new housing units or the renovation of those in a precarious situation. However, there is still a housing deficit and a lack of environmental soundness, compounded by the formation of informal settlements. Consequently, constant reflection on the issue is needed in order to identify parameters that can actually guide housing policies, so as to meet the constitutional social functions of the city and ensure the well-being of its citizens (art. 182). On the other hand, government intervention in this segment cannot consider only the availability of the home itself, but also the quality of its surroundings, observing aspects related to environmental sanitation, urban mobility, leisure, and the essential services of health, education and social assistance. It follows that the smoothness and efficiency of a housing policy are conditioned on the concept of adequate housing, in other words, housing that is structurally safe, comfortable, environmentally and legally legitimate, and made viable through extensive coordination with other public policies. Only by complying with this guideline is it possible to realize the right to housing in sustainable cities.
Abstract:
The so-called Dual Mode Adaptive Robust Control (DMARC) is proposed. The DMARC is a control strategy which interpolates between Model Reference Adaptive Control (MRAC) and Variable Structure Model Reference Adaptive Control (VS-MRAC). The main idea is to combine the transient performance advantages of the VS-MRAC controller with the smooth steady-state control signal of the MRAC controller. Two basic algorithms are developed for the DMARC controller. In the first algorithm, the controller's adjustment is made in real time through the variation of a parameter in the adaptation law. In the second algorithm, the control law is generated using fuzzy logic with Takagi-Sugeno's model to obtain a combination of the MRAC and VS-MRAC control laws. In both cases, the combined control structure is shown to be robust to parametric uncertainties and external disturbances, with a fast transient performance, practically without oscillations, and a smooth steady-state control signal.
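The interpolation idea above can be sketched as an error-scheduled convex blend of the two control signals. The exponential scheduling rule and the gain `rho` below are hypothetical stand-ins for the adaptation-law parameter (first algorithm) or the Takagi-Sugeno fuzzy rules (second algorithm); they only illustrate the qualitative behaviour.

```python
import math

def dmarc_control(u_mrac, u_vsmrac, error, rho=5.0):
    """Blend the MRAC and VS-MRAC control signals with an error-driven
    weight: near 1 (VS-MRAC dominates) during large-error transients,
    near 0 (smooth MRAC dominates) as the tracking error vanishes."""
    lam = 1.0 - math.exp(-rho * abs(error))
    return (1.0 - lam) * u_mrac + lam * u_vsmrac

# Large error -> essentially the relay-like VS-MRAC signal;
# small error -> essentially the smooth MRAC signal.
u_transient = dmarc_control(0.3, 1.0, error=2.0)
u_steady = dmarc_control(0.3, 1.0, error=0.001)
```

Any monotone map from |error| to [0, 1] gives the same qualitative dual-mode behaviour; the paper's two algorithms differ precisely in how this weight is generated.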
Abstract:
The recent observational advances of Astronomy and a more consistent theoretical framework have turned Cosmology into one of the most exciting frontiers of contemporary science. In this thesis, homogeneous and inhomogeneous Universe models containing dark matter and different kinds of dark energy are confronted with recent observational data. Initially, we analyze constraints from the existence of old high-redshift objects, type Ia Supernovae, and the gas mass fraction of galaxy clusters for two distinct classes of homogeneous and isotropic models: decaying vacuum and X(z)CDM cosmologies. By considering the quasar APM 08279+5255 at z = 3.91, with age between 2 and 3 Gyr, we obtain 0.2 < Ω_M < 0.4, while the β parameter, which quantifies the contribution of Λ(t), is restricted to the interval 0.07 < β < 0.32, thereby implying that the minimal age of the Universe amounts to 13.4 Gyr. A lower limit to the quasar formation redshift (z_f > 5.11) was also obtained. Our analyses, including flat, closed and hyperbolic models, show that there is no age crisis for this kind of decaying Λ(t) scenario. Tests against SNe Ia and gas mass fraction data were carried out for flat X(z)CDM models. For an equation of state w(z) = w_0 + w_1 z, the best fit is w_0 = -1.25, w_1 = 1.3 and Ω_M = 0.26, whereas for models with w(z) = w_0 + w_1 z/(1+z), we obtain w_0 = -1.4, w_1 = 2.57 and Ω_M = 0.26. In another line of development, we have discussed the influence of the observed inhomogeneities by considering the Zeldovich-Kantowski-Dyer-Roeder (ZKDR) angular diameter distance. By applying the statistical χ² method to a sample of angular diameters for compact radio sources, the best fit to the cosmological parameters for XCDM models is Ω_M = 0.26, w = -1.03 and α = 0.9, where w and α are the equation of state and smoothness parameters, respectively. Such results are compatible with a phantom energy component (w < -1).
The possible bidimensional spaces associated with the plane (α, Ω_M) were constrained by using data from SNe Ia and the gas mass fraction of galaxy clusters. For Supernovae, the parameters are restricted to the intervals 0.32 < Ω_M < 0.5 (2σ) and 0.32 < α < 1.0 (2σ), while for the gas mass fraction we find 0.18 < Ω_M < 0.32 (2σ) with all allowed values of α. For a joint analysis involving Supernovae and gas mass fraction data, we obtained 0.18 < Ω_M < 0.38 (2σ). On general grounds, the present study suggests that the influence of cosmological inhomogeneities in the matter distribution needs to be considered in more detail in the analyses of observational tests. Further, the analytical treatment based on the ZKDR distance may give non-negligible corrections to the so-called background tests of FRW-type cosmologies.
Abstract:
The scheme is based on Ami Harten's ideas (Harten, 1994), the main tools coming from wavelet theory, in the framework of multiresolution analysis for cell averages. But instead of evolving cell averages on the finest uniform level, we propose to evolve just the cell averages on the grid determined by the significant wavelet coefficients. Typically, there are few cells in each time step: big cells on smooth regions, and smaller ones close to irregularities of the solution. For the numerical flux, we use a simple uniform central finite difference scheme, adapted to the size of each cell. If any of the required neighboring cell averages is not present, it is interpolated from coarser scales. But we switch to the ENO scheme in the finest part of the grids. To show the feasibility and efficiency of the method, it is applied to a system arising in polymer flooding of an oil reservoir. In terms of CPU time and memory requirements, it outperforms Harten's multiresolution algorithm.
The proposed method applies to systems of conservation laws in 1D,
$$\partial_t u(x,t) + \partial_x f(u(x,t)) = 0, \qquad u(x,t) \in \mathbb{R}^m. \quad (1)$$
In the spirit of finite volume methods, we shall consider the explicit scheme
$$v_\mu^{n+1} = v_\mu^n - \frac{\Delta t}{h_\mu}\left(\bar f_\mu - \bar f_{\mu^-}\right) = [D v^n]_\mu, \quad (2)$$
where $\mu$ is a point of an irregular grid $\Gamma$, $\mu^-$ is the left neighbor of $\mu$ in $\Gamma$, $v_\mu^n \approx \frac{1}{\mu - \mu^-} \int_{\mu^-}^{\mu} u(x, t_n)\,dx$ are approximated cell averages of the solution, $\bar f_\mu = \bar f_\mu(v^n)$ are the numerical fluxes, and $D$ is the numerical evolution operator of the scheme.
According to the definition of $\bar f_\mu$, several schemes of this type have been proposed and successfully applied (LeVeque, 1990); Godunov, Lax-Wendroff, and ENO are some of the popular names. The Godunov scheme resolves shocks well, but its first-order accuracy is poor in smooth regions. Lax-Wendroff is of second order, but produces dangerous oscillations close to shocks. ENO schemes are good alternatives, with high order and without serious oscillations, but the price is a high computational cost.
Ami Harten proposed in (Harten, 1994) a simple strategy to save expensive ENO flux calculations. The basic tools come from multiresolution analysis for cell averages on uniform grids, and the principle is that wavelet coefficients can be used for the characterization of local smoothness. Typically, only a few wavelet coefficients are significant. At the finest level, they indicate discontinuity points, where ENO numerical fluxes are computed exactly. Elsewhere, cheaper fluxes can be safely used, or just interpolated from coarser scales. Different applications of this principle have been explored by several authors; see for example (G-Müller and Müller, 1998).
Our scheme also uses Ami Harten's ideas. But instead of evolving the cell averages on the finest uniform level, we propose to evolve the cell averages on sparse grids associated with the significant wavelet coefficients. This means that the total number of cells is small, with big cells in smooth regions and smaller ones close to irregularities. This task requires improved new tools, which are described next.
Abstract:
The history-matching procedure for an oil reservoir is of paramount importance for obtaining a characterization of the reservoir parameters (static and dynamic) that leads to more accurate production forecasts. Throughout this process one seeks reservoir model parameters which are able to reproduce the behaviour of the real reservoir. The resulting reservoir model may then be used to predict production and can aid oil field management. During the history-matching procedure the reservoir model parameters are modified and, for every new set of reservoir model parameters found, a fluid flow simulation is performed so that it is possible to evaluate whether or not this new set of parameters reproduces the observations from the actual reservoir. The reservoir is said to be matched when the discrepancies between the model predictions and the observations of the real reservoir are below a certain tolerance. The determination of the model parameters via history matching requires the minimisation of an objective function (the difference between the observed and simulated productions according to a chosen norm) in a parameter space populated by many local minima. In other words, more than one set of reservoir model parameters fits the observations. With respect to this non-uniqueness of the solution, the inverse problem associated with history matching is ill-posed. In order to reduce this ambiguity, it is necessary to incorporate a priori information and constraints on the reservoir model parameters to be determined. In this dissertation, the regularization of the inverse problem associated with history matching was performed via the introduction of a smoothness constraint on the following parameters: permeability and porosity. This constraint has the geological bias of asserting that these two properties vary smoothly in space.
In this sense, it is necessary to find the right relative weight of this constraint in the objective function, one that stabilizes the inversion and yet introduces minimum bias. A sequential search method called COMPLEX was used to find the reservoir model parameters that best reproduce the observations of a semi-synthetic model. This method does not require the use of derivatives when searching for the minimum of the objective function. Here, it is shown that the judicious introduction of the smoothness constraint in the objective function formulation reduces the associated ambiguity and introduces minimum bias in the estimates of permeability and porosity of the semi-synthetic reservoir model.
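The regularized objective described above can be sketched as data misfit plus a weighted roughness penalty on the parameter field. The linear stand-in `simulate`, the 1D parameter profile, and the quadratic first-difference penalty are all illustrative assumptions, not the dissertation's flow simulator, parameterization, or COMPLEX search.

```python
import numpy as np

def objective(params, d_obs, simulate, lam):
    """Regularized history-matching objective: squared data misfit plus
    lam times a first-difference smoothness penalty on the parameter
    field.  `simulate` stands in for the fluid flow simulator; lam is
    the relative weight of the smoothness constraint discussed above."""
    misfit = np.sum((simulate(params) - d_obs) ** 2)
    roughness = np.sum(np.diff(params) ** 2)   # penalize cell-to-cell jumps
    return misfit + lam * roughness

# Toy 1D "permeability" profile and a hypothetical linear forward model.
simulate = lambda k: 2.0 * k
d_obs = 2.0 * np.linspace(1.0, 2.0, 5)         # data from a smooth true profile
smooth = np.linspace(1.0, 2.0, 5)              # smooth candidate (matches data)
rough = np.array([1.0, 2.0, 1.0, 2.0, 1.0])    # oscillatory candidate
```

Among candidates with comparable misfit, the penalty term steers any minimizer (derivative-free, like COMPLEX, or otherwise) toward spatially smooth permeability and porosity fields, which is exactly the geological bias the constraint encodes.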