971 results for Super threshold random variable
Abstract:
This paper presents a region-based methodology for segmenting Digital Elevation Models obtained from laser scanning data. The methodology combines two sequential techniques: a recursive splitting technique using the quadtree structure, followed by a region merging technique using the Markov Random Field model. The recursive splitting technique begins by dividing the Digital Elevation Model into homogeneous regions. However, owing to slight height differences in the Digital Elevation Model, region fragmentation can be relatively high. To minimize this fragmentation, a region merging technique based on the Markov Random Field model is applied to the previously segmented data. The resulting regions are first structured using the so-called Region Adjacency Graph, in which each node represents a region of the segmented Digital Elevation Model and two nodes are connected if their regions share a common boundary. Next, the random variable associated with each node is assumed to follow the Markov Random Field model. This assumption allows the derivation of the a posteriori probability distribution function, whose solution is obtained by Maximum a Posteriori estimation. Regions with a high probability of similarity are merged. Experiments carried out with laser scanning data showed that the methodology separates the objects in the Digital Elevation Model with little fragmentation.
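To make the splitting step concrete, below is a minimal sketch of quadtree-based recursive splitting on a height grid. The homogeneity test (a simple height-range threshold), the parameter values, and all names are illustrative assumptions, not the paper's actual criterion; the subsequent MRF merging over the Region Adjacency Graph is omitted.

```python
import numpy as np

def quadtree_split(dem, r0, c0, size, max_range=0.15, min_size=4, regions=None):
    """Recursively split a square DEM tile into height-homogeneous quadrants.

    A tile becomes a leaf region when the spread of its heights falls below
    max_range (a hypothetical threshold) or the minimum tile size is reached;
    otherwise it is split into four quadrants and the test recurses.
    """
    if regions is None:
        regions = []
    tile = dem[r0:r0 + size, c0:c0 + size]
    if size <= min_size or tile.max() - tile.min() <= max_range:
        regions.append((r0, c0, size))   # homogeneous leaf tile
        return regions
    half = size // 2
    for dr, dc in [(0, 0), (0, half), (half, 0), (half, half)]:
        quadtree_split(dem, r0 + dr, c0 + dc, half, max_range, min_size, regions)
    return regions

dem = np.random.rand(64, 64)             # stand-in for a laser-scanning DEM
leaves = quadtree_split(dem, 0, 0, 64)
print(len(leaves), "homogeneous regions before MRF-based merging")
```

The merging stage would then build the Region Adjacency Graph over these leaf regions and fuse adjacent nodes whose MAP similarity is high.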
Abstract:
This paper describes a framework based on the decomposition of the first-order optimality conditions and applies it to solve the Probabilistic Power Flow (PPF) problem in a coordinated but decentralized way in the context of multi-area power systems. The purpose of the decomposition framework is to solve the problem iteratively, through a process of solving smaller subproblems associated with each area of the power system. This strategy allows the probabilistic analysis of the variables of interest in a particular area without explicit knowledge of the network data of the other interconnected areas; only border information related to the tie-lines between areas needs to be exchanged. An efficient method for probabilistic analysis, considering uncertainty in n system loads, is applied. The proposal is to use a particular case of the point estimate method, known as the Two-Point Estimate Method (TPM), rather than the traditional approach based on Monte Carlo simulation. The main feature of the TPM is that it requires solving only 2n power flows to obtain the behavior of any random variable. An iterative coordination algorithm between areas is also presented. This algorithm solves the multi-area PPF problem in a decentralized way, ensures the independent operation of each area, and integrates the decomposition framework and the TPM appropriately. The IEEE RTS-96 system is used to show the operation and effectiveness of the proposed approach, and Monte Carlo simulations are used to validate the results. © 2011 IEEE.
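The 2n-evaluation structure of the TPM can be sketched as follows, using Hong's two-point scheme in the zero-skewness case (an assumption made here for brevity); the toy model function stands in for a deterministic power flow solver.

```python
import numpy as np

def two_point_estimate(model, mu, sigma):
    """Hong's 2n-point estimate, zero-skewness case: for each of the n
    uncertain inputs, evaluate the model at mu_k +/- sqrt(n)*sigma_k with
    all other inputs at their means -- 2n evaluations in total."""
    n = len(mu)
    xi = np.sqrt(n)                       # standard location for zero skewness
    w = 1.0 / (2 * n)                     # weight of each evaluation
    m1 = m2 = 0.0                         # running first and second raw moments
    for k in range(n):
        for sign in (+1.0, -1.0):
            x = np.array(mu, dtype=float)
            x[k] += sign * xi * sigma[k]
            y = model(x)                  # one deterministic "power flow"
            m1 += w * y
            m2 += w * y * y
    return m1, np.sqrt(max(m2 - m1 ** 2, 0.0))

# Toy stand-in for a power-flow quantity driven by n uncertain loads:
model = lambda loads: loads.sum() ** 1.2
mean, std = two_point_estimate(model, mu=[1.0, 0.8, 1.2], sigma=[0.1, 0.05, 0.2])
print(f"output mean ~ {mean:.4f}, std ~ {std:.4f}")
```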
Abstract:
Graduate Program in Agronomy (Energy in Agriculture) - FCA
Abstract:
National Council for Scientific and Technological Development (CNPq)
Abstract:
Questions Does the spatial association between isolated adult trees and understorey plants change along a gradient of sand dunes? Does this association depend on the life form of the understorey plant? Location Coastal sand dunes, southeast Brazil. Methods We recorded the occurrence of understorey plant species in 100 paired 0.25 m² plots under adult trees and in adjacent treeless sites along an environmental gradient from beach to inland. Occurrence probabilities were modelled as a function of the fixed variables of the presence of a neighbour, distance from the seashore and life form, and a random variable, the block (i.e. the pair of plots). Generalized linear mixed models (GLMM) were fitted in a backward step-wise procedure using Akaike's information criterion (AIC) for model selection. Results The occurrence of understorey plants was affected by the presence of an adult tree neighbour, but the effect varied with the life form of the understorey species. A positive spatial association was found between isolated adult neighbours and young trees, whereas a negative association was found for shrubs. Moreover, a neutral association was found for lianas, whereas for herbs the effect of the presence of an adult neighbour ranged from neutral to negative, depending on the subgroup considered. The strength of the negative association with forbs increased with distance from the seashore. However, for the other life forms, the associational pattern with adult trees did not change along the gradient. Conclusions For most of the understorey life forms there is no evidence that the spatial association between isolated adult trees and understorey plants changes with the distance from the seashore, as predicted by the stress gradient hypothesis, a common hypothesis in the literature about facilitation in plant communities. Furthermore, the positive spatial association between isolated adult trees and young trees identified along the entire gradient studied indicates a positive feedback that explains the transition from open vegetation to forest in subtropical coastal dune environments.
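The AIC-guided backward comparison described in the Methods can be illustrated with the sketch below. It uses plain logistic regression on synthetic data, deliberately omitting the random block effect that the study's GLMMs included, so it only demonstrates the selection logic; all variable names and effect sizes are invented.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 400
df = pd.DataFrame({
    "neighbour": rng.integers(0, 2, n),
    "lifeform":  rng.choice(["tree", "shrub", "liana", "herb"], n),
    "distance":  rng.uniform(0, 300, n),
})
# Bake a neighbour-by-lifeform interaction into the occurrence probability:
eta = (0.5 * df["neighbour"]
       + 1.0 * (df["lifeform"] == "tree") * df["neighbour"]
       - 1.0 * (df["lifeform"] == "shrub") * df["neighbour"]
       - 0.003 * df["distance"])
df["occ"] = (rng.random(n) < 1 / (1 + np.exp(-eta))).astype(int)

# Backward comparison over a nested sequence of candidate models:
candidates = [
    "occ ~ neighbour * lifeform + distance",   # full fixed-effects structure
    "occ ~ neighbour * lifeform",              # drop distance
    "occ ~ neighbour + lifeform",              # drop the interaction
    "occ ~ lifeform",                          # drop the neighbour effect
]
fits = {f: smf.logit(f, df).fit(disp=0) for f in candidates}
for f in candidates:
    print(f"AIC = {fits[f].aic:7.1f}   {f}")
print("selected:", min(fits, key=lambda f: fits[f].aic))
```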
Abstract:
Despite the many issues faced in the past, silicon has kept evolving at a constant pace. Today an ever-increasing number of cores is integrated onto the same die. Unfortunately, the extraordinary performance achievable by the many-core paradigm is limited by several factors. Memory bandwidth limitations, combined with inefficient synchronization mechanisms, can severely limit the potential computational capabilities. Moreover, the huge HW/SW design space requires accurate and flexible tools to perform architectural exploration and to validate design choices. In this thesis we focus on the aforementioned aspects: a flexible and accurate Virtual Platform has been developed, targeting a reference many-core architecture. This tool has been used to perform architectural explorations, focusing on the instruction caching architecture and on a hybrid HW/SW synchronization mechanism. Besides architectural implications, another issue of embedded systems is considered: energy efficiency. Near-Threshold Computing (NTC) is a key research area in the Ultra-Low-Power domain, as it promises a tenfold improvement in energy efficiency compared to super-threshold operation and mitigates thermal bottlenecks. The physical implications of modern deep sub-micron technology severely limit the performance and reliability of modern designs. Reliability becomes a major obstacle when operating in NTC; in particular, memory operation becomes unreliable and can compromise system correctness. In the present work a novel hybrid memory architecture is devised to overcome reliability issues and, at the same time, improve energy efficiency by means of aggressive voltage scaling when workload requirements allow it. Variability is another great drawback of near-threshold operation. The greatly increased sensitivity to threshold voltage variations is today a major concern for electronic devices. We introduce a variation-tolerant extension of the baseline many-core architecture: by means of micro-architectural knobs and a lightweight runtime control unit, the baseline architecture becomes dynamically tolerant to variations.
Abstract:
The generalized failure rate of a continuous random variable has demonstrable importance in operations management. If the valuation distribution of a product has an increasing generalized failure rate (that is, the distribution is IGFR), then the associated revenue function is unimodal, and when the generalized failure rate is strictly increasing, the global maximum is uniquely specified. The assumption that the distribution is IGFR is thus useful and frequently invoked in the recent pricing, revenue, and supply chain management literature. This note contributes to the IGFR literature in several ways. First, we investigate the prevalence of the IGFR property for the left and right truncations of valuation distributions. Second, we extend the IGFR notion to discrete distributions and contrast it with the continuous distribution case. The note also addresses two errors in the previous IGFR literature. Finally, for future reference, we analyze all common (continuous and discrete) distributions for the prevalence of the IGFR property, and derive and tabulate their generalized failure rates.
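For reference, the generalized failure rate g(x) = x f(x) / (1 − F(x)) and the unimodality of the revenue function under IGFR are easy to check numerically; a brief sketch follows (the distribution choices are illustrative, not the note's tabulated list).

```python
import numpy as np
from scipy import stats

def generalized_failure_rate(dist, x):
    """g(x) = x * f(x) / (1 - F(x)) for a continuous valuation distribution."""
    return x * dist.pdf(x) / dist.sf(x)

x = np.linspace(0.01, 5, 500)
for name, dist in [("exponential", stats.expon()),
                   ("lognormal(s=1)", stats.lognorm(s=1.0))]:
    g = generalized_failure_rate(dist, x)
    print(f"{name}: g increasing on grid? {bool(np.all(np.diff(g) > 0))}")

# Under IGFR the revenue curve R(p) = p * P(V > p) is unimodal; for Exp(1)
# we have g(p) = p, so the maximum sits where g(p) = 1, i.e. at p = 1:
p = np.linspace(0.01, 5, 500)
R = p * stats.expon().sf(p)
print("revenue maximised near p =", p[np.argmax(R)])
```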
Abstract:
In many applications the observed data can be viewed as a censored high dimensional full data random variable X. By the curse of dimensionality it is typically not possible to construct estimators that are asymptotically efficient at every probability distribution in a semiparametric censored data model of such a high dimensional censored data structure. We provide a general method for the construction of one-step estimators that are efficient at a chosen submodel of the full-data model, are still well behaved off this submodel, and can be chosen to always improve on a given initial estimator. These one-step estimators rely on good estimators of the censoring mechanism and thus require a parametric or semiparametric model for the censoring mechanism. We present a general theorem that provides a template for proving the desired asymptotic results. We illustrate the general one-step estimation methods by constructing locally efficient one-step estimators of marginal distributions and regression parameters with right-censored data, current status data and bivariate right-censored data, in all models allowing the presence of time-dependent covariates. The conditions of the asymptotic theorem are rigorously verified in one of the examples, and the key condition of the general theorem is verified for all examples.
Abstract:
In the estimation of a survival function, current status data arise when the only information available on individuals is their survival status at a single monitoring time. Here we briefly review extensions of this data structure in two directions: (i) doubly censored current status data, where there is incomplete information on the origin of the failure time random variable, and (ii) current status information on more complicated stochastic processes. Simple examples of these data forms are presented for motivation.
Abstract:
Many seemingly disparate approaches for marginal modeling have been developed in recent years. We demonstrate that many current approaches for the marginal modeling of correlated binary outcomes produce likelihoods that are equivalent to the copula-based models proposed herein. These general copula models of underlying latent threshold random variables yield likelihood-based models for marginal fixed effects estimation and interpretation in the analysis of correlated binary data. Moreover, we propose a nomenclature and set of model relationships that substantially elucidates the complex area of marginalized models for binary data. A diverse collection of didactic mathematical and numerical examples is given to illustrate the concepts.
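The latent threshold construction can be sketched directly: each binary outcome is an indicator that a latent Gaussian variable falls below a threshold set by its marginal probability, with the copula correlation controlling the dependence. The probabilities and correlation matrix below are illustrative.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Latent-threshold (Gaussian copula) model for a cluster of three correlated
# binary outcomes: Y_j = 1{ Z_j <= Phi^{-1}(p_j) }, with Z ~ N(0, R).
p = np.array([0.3, 0.5, 0.7])             # target marginal probabilities
R = np.array([[1.0, 0.6, 0.3],            # latent (copula) correlation
              [0.6, 1.0, 0.6],
              [0.3, 0.6, 1.0]])
thresholds = stats.norm.ppf(p)

Z = rng.multivariate_normal(np.zeros(3), R, size=100_000)
Y = (Z <= thresholds).astype(int)

# The marginal means match p regardless of R -- the property that makes such
# models "marginal" -- while R induces within-cluster dependence:
print("empirical marginals:", Y.mean(axis=0))
print("corr(Y1, Y2):", np.corrcoef(Y[:, 0], Y[:, 1])[0, 1])
```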
Abstract:
In this paper I consider the impact of a noisy indicator of a manager's manipulative behavior on optimal effort incentives and the extent of earnings management. The analysis extends a two-task, single-performance-measure LEN model by including a binary random variable. I show that contracting on the noisy indicator variable is not always useful. More specifically, the principal uses the indicator variable to prevent earnings management only under conditions where manipulative behavior is not excessive. Thus, under conditions of excessive earnings management, accounting adjustments that yield a more congruent overall performance measure can be more effective at mitigating the earnings management problem than an appraisal of the existence of earnings management.
Abstract:
Geometrical dependencies are investigated for an analytical representation of the probability density function (pdf) of the travel time between a random point and a known or another random point in Tchebyshev's metric. In the most popular case, a rectangular service area, the pdf of this random variable depends directly on the position of the server. Two approaches are introduced for the exact analytical calculation of the pdf: an ad-hoc approach, useful for solving a specific case 'by hand', and a superposition approach, an algorithmic method for the general case. The main concept of each approach is explained, and a short comparison is made to demonstrate their validity.
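As an illustration of the setting (not of the paper's two analytical approaches), the distribution of the Chebyshev travel time from a fixed server to a uniformly random point in a rectangle can be written down from the clipped area of the Chebyshev ball and checked by Monte Carlo; all dimensions below are arbitrary.

```python
import numpy as np

def chebyshev_time_cdf(t, server, a, b):
    """Exact CDF of T = max(|X - xs|, |Y - ys|) for (X, Y) uniform on
    [0,a] x [0,b]: the fraction of the rectangle covered by the Chebyshev
    ball of radius t centred at the server."""
    xs, ys = server
    w = max(min(xs + t, a) - max(xs - t, 0.0), 0.0)   # clipped ball width
    h = max(min(ys + t, b) - max(ys - t, 0.0), 0.0)   # clipped ball height
    return w * h / (a * b)

a, b, server = 4.0, 2.0, (1.0, 0.5)
rng = np.random.default_rng(2)
pts = rng.uniform([0.0, 0.0], [a, b], size=(200_000, 2))
T = np.max(np.abs(pts - server), axis=1)

for t in [0.25, 0.5, 1.0, 2.0]:
    print(f"t = {t}: exact F(t) = {chebyshev_time_cdf(t, server, a, b):.4f}, "
          f"Monte Carlo = {np.mean(T <= t):.4f}")
```

The pdf is the piecewise derivative of this CDF, and its pieces change with the server position, which is the dependence the abstract refers to.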
Abstract:
A characterization of the von Mises–Fisher random variable is provided in terms of the first exit point of the drifted Wiener process from the unit hypersphere. Laplace transform formulae for the first exit time of the drifted Wiener process from the unit hypersphere are provided. Post representations in terms of Bell polynomials are provided for the densities of the first exit times from the circle and from the sphere.
Abstract:
It has been observed in various practical applications that data do not conform to the normal distribution, which is symmetric with no skewness. The skew normal distribution proposed by Azzalini (1985) is appropriate for the analysis of data which are unimodal but exhibit some skewness. The skew normal distribution includes the normal distribution as a special case where the skewness parameter is zero. In this thesis, we study the structural properties of the skew normal distribution, with an emphasis on the reliability properties of the model. More specifically, we obtain the failure rate, the mean residual life function, and the reliability function of a skew normal random variable. We also compare it with the normal distribution with respect to certain stochastic orderings. Appropriate machinery is developed to obtain the reliability of a component when the strength and stress follow the skew normal distribution. Finally, IQ score data from Roberts (1988) are analyzed to illustrate the procedure.
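A brief numerical sketch of these reliability quantities, using scipy's skew normal implementation; the parameter values and the stress/strength distribution choices are illustrative, not those of the thesis.

```python
import numpy as np
from scipy import stats, integrate

alpha = 3.0                         # skewness parameter of Azzalini's model
X = stats.skewnorm(alpha)           # density f(x) = 2*phi(x)*Phi(alpha*x)

def failure_rate(t):
    return X.pdf(t) / X.sf(t)

def mean_residual_life(t):
    tail, _ = integrate.quad(X.sf, t, np.inf)   # integral of the survival fn
    return tail / X.sf(t)                       # = E[X - t | X > t]

for t in [-1.0, 0.0, 1.0, 2.0]:
    print(f"t = {t:+.1f}: h(t) = {failure_rate(t):.3f}, "
          f"MRL(t) = {mean_residual_life(t):.3f}")

# Stress-strength reliability P(strength > stress), by Monte Carlo:
rng = np.random.default_rng(3)
strength = stats.skewnorm(3.0, loc=2.0).rvs(100_000, random_state=rng)
stress = stats.skewnorm(-1.0).rvs(100_000, random_state=rng)
print("R = P(strength > stress) ~", np.mean(strength > stress))
```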
Abstract:
We study the first passage statistics to absorbing boundaries of a Brownian motion in bounded two-dimensional domains of different shapes and configurations of the absorbing and reflecting boundaries. From extensive numerical analysis we obtain the probability distribution P(ω) of the random variable ω=τ1/(τ1+τ2), which measures how similar the first passage times τ1 and τ2 of two independent realizations of a Brownian walk starting at the same location are. We construct a chart for each domain, determining whether P(ω) has a unimodal, bell-shaped form or a bimodal, M-shaped behavior. While in the former case the mean first passage time (MFPT) is a valid characteristic of the first passage behavior, in the latter case it is an insufficient measure for the process. Strikingly, we find a distinct turnover between the two modes of P(ω), characteristic of the domain shape and the respective location of the absorbing and reflective boundaries. Our results demonstrate that large fluctuations of the first passage times may occur frequently in two-dimensional domains, rendering quite vague the general use of the MFPT as a robust measure of the actual behavior even in bounded domains, in which all moments of the first passage distribution exist.
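A minimal simulation sketch of the ω statistic for a simple fully absorbing geometry (the unit disk, an illustrative choice, with a crude Euler discretization of the Brownian paths):

```python
import numpy as np

rng = np.random.default_rng(4)

def first_passage_time(start, dt=1e-3, radius=1.0):
    """First time a discretised 2D Brownian path started at `start` leaves
    the absorbing circle of the given radius (crude Euler scheme)."""
    pos = np.array(start, dtype=float)
    t, step = 0.0, np.sqrt(dt)
    while pos @ pos < radius ** 2:
        pos += step * rng.standard_normal(2)
        t += dt
    return t

start = (0.5, 0.0)                   # off-centre start inside the unit disk
taus = np.array([first_passage_time(start) for _ in range(1000)])
tau1, tau2 = taus[::2], taus[1::2]   # pair up independent realisations
omega = tau1 / (tau1 + tau2)

# A peak at omega ~ 0.5 indicates bell-shaped (similar) first passage times;
# weight near 0 and 1 indicates the bimodal, M-shaped regime.
hist, edges = np.histogram(omega, bins=10, range=(0.0, 1.0), density=True)
for lo, hi, dens in zip(edges[:-1], edges[1:], hist):
    print(f"omega in [{lo:.1f}, {hi:.1f}): density = {dens:.2f}")
```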