97 results for Stochastic Dominance
Abstract:
Context tree models have been introduced by Rissanen in [25] as a parsimonious generalization of Markov models. Since then, they have been widely used in applied probability and statistics. The present paper investigates non-asymptotic properties of two popular procedures of context tree estimation: Rissanen's algorithm Context and penalized maximum likelihood. First showing how they are related, we prove finite horizon bounds for the probability of over- and under-estimation. Concerning overestimation, no boundedness or loss-of-memory conditions are required: the proof relies on new deviation inequalities for empirical probabilities of independent interest. The under-estimation properties rely on classical hypotheses for processes of infinite memory. These results improve on and generalize the bounds obtained in Duarte et al. (2006) [12], Galves et al. (2008) [18], Galves and Leonardi (2008) [17], Leonardi (2010) [22], refining asymptotic results of Buhlmann and Wyner (1999) [4] and Csiszar and Talata (2006) [9]. (C) 2011 Elsevier B.V. All rights reserved.
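The penalized-likelihood side of this comparison can be sketched in a few lines: build an empirical context tree and keep a node's children only when their gain in log-likelihood exceeds a penalty. This is a minimal illustration under our own helper names, not the paper's exact estimator or constants:

```python
from collections import Counter
from math import log

def count_contexts(x, max_depth):
    """Counts of the next symbol after every context (past suffix) up to max_depth."""
    counts = {}
    for i in range(max_depth, len(x)):
        for d in range(max_depth + 1):
            ctx = tuple(x[i - d:i])          # last d symbols before position i
            counts.setdefault(ctx, Counter())[x[i]] += 1
    return counts

def log_likelihood(counter):
    n = sum(counter.values())
    return sum(c * log(c / n) for c in counter.values())

def prune(ctx, counts, alphabet, max_depth, penalty):
    """Keep ctx's children only if their log-likelihood gain beats the penalty."""
    if len(ctx) == max_depth:
        return [ctx]
    children = [(a,) + ctx for a in alphabet if (a,) + ctx in counts]
    if not children:
        return [ctx]
    gain = (sum(log_likelihood(counts[c]) for c in children)
            - log_likelihood(counts[ctx]))
    if gain > penalty * len(children):   # BIC would use ~((|A|-1)/2) log n per node
        leaves = []
        for c in children:
            leaves.extend(prune(c, counts, alphabet, max_depth, penalty))
        return leaves
    return [ctx]
```

On a deterministic alternating sequence this recovers the order-1 context tree with zero penalty, while a large penalty collapses the tree to the root.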
Abstract:
The dynamical discrete web (DyDW), introduced in the recent work of Howitt and Warren, is a system of coalescing simple symmetric one-dimensional random walks which evolve in an extra continuous dynamical time parameter tau. The evolution is by independent updating of the underlying Bernoulli variables indexed by discrete space-time that define the discrete web at any fixed tau. In this paper, we study the existence of exceptional (random) values of tau where the paths of the web do not behave like usual random walks and the Hausdorff dimension of the set of such exceptional tau. Our results are motivated by those about exceptional times for dynamical percolation in high dimension by Haggstrom, Peres and Steif, and in dimension two by Schramm and Steif. The exceptional behavior of the walks in the DyDW is rather different from the situation for the dynamical random walks of Benjamini, Haggstrom, Peres and Steif. For example, we prove that the walk from the origin S_0(tau) violates the law of the iterated logarithm (LIL) on a set of tau of Hausdorff dimension one. We also discuss how these and other results should extend to the dynamical Brownian web, the natural scaling limit of the DyDW. (C) 2009 Elsevier B.V. All rights reserved.
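A toy version of the construction: Bernoulli arrows indexed by discrete space-time determine the web at a fixed tau, and a dynamical step resamples individual arrows (here one uniformly chosen arrow per step, a caricature of the independent updating; all names are illustrative):

```python
import random

rng = random.Random(0)
T = 100                                   # discrete time horizon
# Underlying Bernoulli variables: one +/-1 arrow per space-time site (x, t).
arrows = {(x, t): rng.choice((-1, 1))
          for t in range(T) for x in range(-T, T + 1)}

def walk_from_origin(arrows, T):
    """Path of the walk started at the origin, read off the current arrows."""
    x, path = 0, [0]
    for t in range(T):
        x += arrows[(x, t)]
        path.append(x)
    return path

def dynamical_step(arrows, rng):
    """One update in dynamical time: resample a single uniformly chosen arrow."""
    site = rng.choice(list(arrows))
    arrows[site] = rng.choice((-1, 1))
```

Repeating `dynamical_step` and re-reading `walk_from_origin` traces how the path from the origin deforms as tau evolves.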
Abstract:
We prove that for any α-mixing stationary process the hitting time of any n-string A(n) converges, when suitably normalized, to an exponential law. We identify the normalization constant λ(A(n)). A similar statement holds also for the return time. To establish this result we prove two other results of independent interest. First, we show a relation between the rescaled hitting time and the rescaled return time, generalizing a theorem of Haydn, Lacroix and Vaienti. Second, we show that for positive entropy systems, the probability of observing any n-string in n consecutive observations goes to zero as n goes to infinity. (C) 2010 Elsevier B.V. All rights reserved.
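In the simplest special case, i.i.d. fair coin flips, the rescaled hitting time of a string can be examined by direct simulation (a sketch only; the theorem covers general α-mixing processes, and for self-overlapping strings such as 101 the constant λ differs from 1):

```python
import random

def hitting_time(string, bits):
    """First index i such that `string` occupies bits[i - len(string):i], else None."""
    for i in range(len(string), len(bits) + 1):
        if bits[i - len(string):i] == string:
            return i
    return None

rng = random.Random(0)
A, p = "101", 1 / 8                 # target n-string and its probability
norm_taus = []
for _ in range(2000):
    bits = "".join(rng.choice("01") for _ in range(2000))
    tau = hitting_time(A, bits)
    if tau is not None:
        norm_taus.append(p * tau)   # hitting time rescaled by the string probability
mean = sum(norm_taus) / len(norm_taus)
```

For the pattern 101, the expected hitting time under fair coins is 10, so the rescaled mean concentrates near 10/8 = 1.25 rather than 1; absorbing that gap is exactly the role of the normalization constant λ(A(n)).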
Abstract:
We considered whether ecological restoration using a high diversity of native tree species serves to restore nitrogen dynamics in the Brazilian Atlantic Forest. We measured δ15N and N content in green foliage and soil; vegetation N:P ratio; and soil N mineralization in a preserved natural forest and in restored forests of ages 21 and 52 years. Green foliage δ15N values, N content, N:P ratio, inorganic N, and net mineralization and nitrification rates were all higher the older the forest. Our findings indicate that the recuperation of N cycling has not yet been achieved in the restored forests even after 52 years, but show that they are following a developmental trajectory in which their N cycling intensity becomes similar to that of a natural mature forest of the same original forest formation. This study demonstrated that some young restored forests are more N-limited than mature natural forests. We document that the recuperation of N cycling in tropical forests can be achieved through ecological restoration actions. (C) 2011 Elsevier B.V. All rights reserved.
Abstract:
Outgassing of carbon dioxide (CO2) from rivers and streams to the atmosphere is a major loss term in the coupled terrestrial-aquatic carbon cycle of major low-gradient river systems (the term "river system" encompasses the rivers and streams of all sizes that compose the drainage network in a river basin). However, the magnitude and controls on this important carbon flux are not well quantified. We measured carbon dioxide flux rates (FCO2), gas transfer velocity (k), and partial pressures (pCO2) in rivers and streams of the Amazon and Mekong river systems in South America and Southeast Asia, respectively. FCO2 and k values were significantly higher in small rivers and streams (channels <100 m wide) than in large rivers (channels >100 m wide). Small rivers and streams also had substantially higher variability in k values than large rivers. Observed FCO2 and k values suggest that previous estimates of basinwide CO2 evasion from tropical rivers and wetlands have been conservative and are likely to be revised upward substantially in the future. Data from the present study combined with data compiled from the literature collectively suggest that the physical control of gas exchange velocities and fluxes in low-gradient river systems makes a transition from the dominance of wind control at the largest spatial scales (in estuaries and river mainstems) toward increasing importance of water current velocity and depth at progressively smaller channel dimensions upstream. These results highlight the importance of incorporating scale-appropriate k values into basinwide models of whole-ecosystem carbon balance.
Abstract:
We reconstructed Middle Pleistocene surface hydrography in the western South Atlantic based on planktonic foraminiferal assemblages, the modern analog technique, and Globorotalia truncatulinoides isotopic ratios of core SP1251 (38°29.7'S / 53°40.7'W / 3400 m water depth). Biostratigraphic analysis suggests that the sediments were deposited between 0.3 and 0.12 Ma and therefore correlate to Marine Isotopic Stage 6 or 8. Faunal assemblage-based winter and summer SST estimates suggest that the western South Atlantic at 38°S was 4-6 °C colder than at present, within the expected range for a glacial interval. High relative abundances of subantarctic species, particularly the dominance of Neogloboquadrina pachyderma (left), support lower-than-present SSTs throughout the recorded period. The oxygen isotopic composition of G. truncatulinoides suggests a northward shift of the Brazil-Malvinas Confluence Zone and of the associated mid-latitude frontal system during this Middle Pleistocene cold period, and a stronger-than-present influence of superficial subantarctic waters and a lowering of SSTs at our core site during the recorded Middle Pleistocene glacial.
Abstract:
The Random Parameter model was proposed to explain the structure of the covariance matrix in problems where most, but not all, of the eigenvalues of the covariance matrix can be explained by Random Matrix Theory. In this article, we explore the scaling properties of the model, as observed in the multifractal structure of the simulated time series. We use the Wavelet Transform Modulus Maxima technique to obtain the dependence of the multifractal spectrum on the model parameters. The model shows a scaling structure compatible with the stylized facts for a reasonable choice of parameter values. (C) 2009 Elsevier B.V. All rights reserved.
Abstract:
This paper develops a Markovian jump model to describe fault occurrence in a three-joint manipulator robot. The model includes changes of operating points and the probability that a fault occurs in an actuator. After a fault, the robot works as a manipulator with free joints. Based on the developed model, a comparative study of three Markovian controllers, H2, H∞, and mixed H2/H∞, is presented, applied to an actual manipulator robot subject to one and two consecutive faults.
Abstract:
The selection criteria for Euler-Bernoulli or Timoshenko beam theories are generally given by means of some deterministic rule involving beam dimensions. The Euler-Bernoulli beam theory is used to model the behavior of flexure-dominated (or "long") beams. The Timoshenko theory applies to shear-dominated (or "short") beams. In the mid-length range, both theories should be equivalent, and some agreement between them would be expected. Indeed, it is shown in the paper that, for some mid-length beams, the deterministic displacement responses of the two theories agree very well. However, the article points out that the behavior of the two beam models is radically different in terms of uncertainty propagation. In the paper, some beam parameters are modeled as parameterized stochastic processes. The two formulations are implemented and solved via a Monte Carlo-Galerkin scheme. It is shown that, for an uncertain elasticity modulus, propagation of uncertainty to the displacement response is much larger for Timoshenko beams than for Euler-Bernoulli beams. On the other hand, propagation of the uncertainty for random beam height is much larger for Euler beam displacements. Hence, any reliability or risk analysis becomes completely dependent on the beam theory employed. The authors believe this is not widely acknowledged by the structural safety or stochastic mechanics communities. (C) 2010 Elsevier Ltd. All rights reserved.
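The height-sensitivity claim can be reproduced with a plain Monte Carlo experiment on a cantilever using closed-form deflections (a sketch under illustrative parameter values, not the paper's Monte Carlo-Galerkin scheme): the Euler-Bernoulli deflection scales as h^-3 while the Timoshenko shear term scales only as h^-1, so a random height h propagates more strongly through the Euler-Bernoulli response.

```python
import random
import statistics

def tip_deflection(E, h, L, b, P, nu=0.3, shear=True):
    """Cantilever tip deflection under an end load P (illustrative closed forms)."""
    I = b * h**3 / 12          # second moment of area, rectangular section
    A = b * h                  # cross-section area
    G = E / (2 * (1 + nu))     # shear modulus
    k = 5 / 6                  # shear correction factor, rectangular section
    w = P * L**3 / (3 * E * I)             # Euler-Bernoulli bending term ~ h**-3
    if shear:
        w += P * L / (k * G * A)           # Timoshenko shear term ~ h**-1
    return w

rng = random.Random(1)
L, b, P, E = 1.0, 0.1, 1.0e4, 30.0e9       # a short beam: L/h ~ 3
hs = [rng.gauss(0.3, 0.03) for _ in range(20000)]   # uncertain height, 10% CV

def cv(xs):
    return statistics.pstdev(xs) / statistics.fmean(xs)

cv_eb = cv([tip_deflection(E, h, L, b, P, shear=False) for h in hs])
cv_t = cv([tip_deflection(E, h, L, b, P, shear=True) for h in hs])
```

For the short beam chosen here the shear term damps the relative variability of the Timoshenko response, so the coefficient of variation `cv_eb` exceeds `cv_t`, in line with the abstract's observation for random beam height.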
Abstract:
Fault resistance is a critical component of electric power system operation due to its stochastic nature. If not considered, this parameter may interfere with fault analysis studies. This paper presents an iterative fault analysis algorithm for unbalanced three-phase distribution systems that considers a fault resistance estimate. The proposed algorithm is composed of two subroutines, namely the fault resistance and the bus impedance subroutines. The fault resistance subroutine, based on local fault records, estimates the fault resistance. The bus impedance subroutine, based on the previously estimated fault resistance, estimates the system voltages and currents. Numerical simulations on the IEEE 37-bus distribution system demonstrate the algorithm's robustness and potential for offline applications, providing additional fault information to Distribution Operation Centers and enhancing the system restoration process. (C) 2011 Elsevier Ltd. All rights reserved.
Abstract:
Power distribution automation and control are important tools in the current restructured electricity markets. Unfortunately, due to their stochastic nature, distribution system faults are hardly avoidable. This paper proposes a novel fault diagnosis scheme for power distribution systems, composed of three different processes: fault detection and classification, fault location, and fault section determination. The fault detection and classification technique is wavelet-based. The fault-location technique is impedance-based and uses local voltage and current fundamental phasors. The fault section determination method is based on an artificial neural network and uses the local current and voltage signals to estimate the faulted section. The proposed hybrid scheme was validated through Alternative Transients Program/Electromagnetic Transients Program simulations and was implemented as embedded software. It is currently used as a fault diagnosis tool in a Southern Brazilian power distribution company.
Abstract:
In recent decades, the air traffic system has been changing to adapt itself to new social demands, mainly the safe growth of worldwide traffic capacity. Those changes are ruled by the Communication, Navigation, Surveillance/Air Traffic Management (CNS/ATM) paradigm, based on digital communication technologies (mainly satellites) as a way of improving communication, surveillance, navigation, and air traffic management services. However, CNS/ATM poses new challenges and needs, mainly related to the safety assessment process. In the face of these new challenges, and considering the main characteristics of CNS/ATM, this work proposes a methodology that combines the "absolute" and "relative" safety assessment methods adopted by the International Civil Aviation Organization (ICAO) in ICAO Doc. 9689 [14], using Fluid Stochastic Petri Nets (FSPN) as the modeling formalism, and compares the safety metrics estimated from the simulation of both the proposed (under analysis) and the legacy system models. To demonstrate its usefulness, the proposed methodology was applied to the Automatic Dependent Surveillance-Broadcast (ADS-B) based air traffic control system. In conclusion, the proposed methodology proved able to assess CNS/ATM system safety properties, with the FSPN formalism providing important modeling capabilities and discrete-event simulation allowing the estimation of the desired safety metrics. (C) 2011 Elsevier Ltd. All rights reserved.
Abstract:
The roots of swarm intelligence are deeply embedded in the biological study of self-organized behaviors in social insects. Particle swarm optimization (PSO) is one of the modern metaheuristics of swarm intelligence, which can be effectively used to solve nonlinear and non-continuous optimization problems. The basic principle of the PSO algorithm rests on the assumption that potential solutions (particles) are flown through hyperspace with acceleration towards more optimal solutions. Each particle adjusts its flight according to the flying experiences of both itself and its companions, using equations for position and velocity. During the process, the coordinates in hyperspace associated with each particle's previous best fitness solution and the overall best value attained so far by any particle within the group are tracked and recorded in memory. In recent years, PSO approaches have been successfully applied to different problem domains with multiple objectives. In this paper, a multiobjective PSO approach, based on the concepts of Pareto optimality, dominance, external archiving of elite particles, and a truncated Cauchy distribution, is proposed and applied to the constrained design of a brushless DC (direct current) wheel motor. Promising results in terms of convergence and spacing performance metrics indicate that the proposed multiobjective PSO scheme is capable of producing good solutions.
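The position/velocity update described here, in its canonical single-objective form, can be sketched as follows (the paper's multiobjective variant adds Pareto archiving and a truncated Cauchy distribution, omitted from this sketch; parameter values are the usual textbook choices):

```python
import random

def pso(f, dim, n=30, iters=100, w=0.7, c1=1.5, c2=1.5, lo=-5.0, hi=5.0, seed=0):
    """Minimize f over [lo, hi]^dim with a basic particle swarm."""
    rng = random.Random(seed)
    X = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n)]  # positions
    V = [[0.0] * dim for _ in range(n)]                               # velocities
    P = [x[:] for x in X]                    # personal best positions
    pbest = [f(x) for x in X]
    g = min(range(n), key=lambda i: pbest[i])
    G, gbest = P[g][:], pbest[g]             # global best position/value
    for _ in range(iters):
        for i in range(n):
            for d in range(dim):
                # velocity update: inertia + cognitive pull + social pull
                V[i][d] = (w * V[i][d]
                           + c1 * rng.random() * (P[i][d] - X[i][d])
                           + c2 * rng.random() * (G[d] - X[i][d]))
                X[i][d] += V[i][d]           # position update
            fx = f(X[i])
            if fx < pbest[i]:
                pbest[i], P[i] = fx, X[i][:]
                if fx < gbest:
                    gbest, G = fx, X[i][:]
    return G, gbest
```

Applied to the 2-D sphere function the swarm converges towards the origin; the multiobjective extension essentially replaces the single global best G with an external archive of non-dominated particles.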
Abstract:
Mine simulation depends on data that are both coherent and representative of the mining operation. This paper describes a methodology for modeling operational data which has been developed for mine simulation. The methodology has been applied to a case study of an open-pit mine, where the cycle times of the truck fleet were modeled for mine simulation purposes. The results obtained show that once the operational data have been treated using the proposed methodology, the system variables prove to fit theoretical distributions. The research indicated the need for tracking the origin of data inconsistencies through the development of a process to manage inconsistent data from the mining operation.
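One step of such data treatment, fitting a cycle-time sample to a theoretical distribution, can be sketched with a lognormal maximum-likelihood fit on the log scale (an assumed choice of distribution and parameter values; the paper does not specify which distributions were used):

```python
import math
import random
import statistics

def fit_lognormal(samples):
    """MLE of a lognormal: mean and std of the log-transformed sample."""
    logs = [math.log(s) for s in samples]
    return statistics.fmean(logs), statistics.pstdev(logs)

# Synthetic truck cycle times (minutes); generating parameters are illustrative.
rng = random.Random(42)
cycles = [rng.lognormvariate(3.4, 0.25) for _ in range(2000)]

mu, sigma = fit_lognormal(cycles)
median_cycle = math.exp(mu)      # lognormal median in the original units
```

The fitted (mu, sigma) pair can then feed a goodness-of-fit check and, ultimately, the random cycle-time generator inside the mine simulator.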
Abstract:
This paper presents new insights and novel algorithms for strategy selection in sequential decision making with partially ordered preferences; that is, where some strategies may be incomparable with respect to expected utility. We assume that incomparability amongst strategies is caused by indeterminacy/imprecision in probability values. We investigate six criteria for consequentialist strategy selection: Gamma-Maximin, Gamma-Maximax, Gamma-Maximix, Interval Dominance, Maximality and E-admissibility. We focus on the popular decision tree and influence diagram representations. Algorithms resort to linear/multilinear programming; we describe implementation and experiments. (C) 2010 Elsevier B.V. All rights reserved.
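When each strategy's expected utility is summarized by an interval of values (a simplification of the imprecise-probability models the paper actually manipulates; Maximality and E-admissibility need the full credal set and are omitted), three of the criteria become one-liners:

```python
def gamma_maximin(U):
    """Pick the strategy with the best worst-case expected utility."""
    return max(U, key=lambda s: U[s][0])

def gamma_maximax(U):
    """Pick the strategy with the best best-case expected utility."""
    return max(U, key=lambda s: U[s][1])

def interval_dominance(U):
    """Keep s unless some other strategy's lower expectation beats s's upper."""
    return [s for s in U
            if not any(U[t][0] > U[s][1] for t in U if t != s)]

# Interval expected utilities per strategy (illustrative numbers).
U = {"a": (1, 3), "b": (2, 7), "c": (4, 6)}
```

Here Gamma-Maximin selects "c" (best lower bound), Gamma-Maximax selects "b" (best upper bound), and Interval Dominance eliminates only "a", illustrating how the criteria can disagree under the same imprecision.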