920 results for probability distribution
Abstract:
Impedance inversion is very important in seismic technology. It is based on the seismic profile: a good inversion result requires a high-quality profile produced by high-resolution imaging, and high-resolution processing in turn demands a high signal-to-noise ratio, so improving the signal-to-noise ratio is essential for seismic inversion. The main idea is that the physical parameter (wave impedance), which describes the stratigraphy directly, is derived from seismic data that express the structural style only indirectly. The solution of impedance inversion based on the convolution model is non-unique, so applying prior information as a constraint in the inversion is an effective approach. An updated impedance inversion technique is presented that overcomes the flaws of the traditional model and highlights the influence of structure. The impedance model is built with the inversion constrained by the sedimentary model, the layer-filling style and the congruence relations, so that impedance inversion constrained by geological rules can be realized. The main innovations of this dissertation are: 1. The optimal migration aperture is obtained from the angle between the traveltime surfaces of the diffracted and reflected waves; constrained by the structural model, the dips of these traveltime surfaces are determined. 2. The conventional F-X-Y noise-prediction method is updated, and the signal-to-noise ratio is improved. 3. Taking full account of the probability distributions of the seismic data and of the geological events, an objective function is constructed using Bayesian estimation as the criterion, giving a mathematical description of the practical problem. 4. Taking the influence of structure into account, the seismic profile is interpreted to build structural models; a series of structural models, and corresponding impedance models, is built, so that the high-frequency content of the inversion is controlled by geological rules. 5. The conjugate gradient method is selected as the solver because it fits the demands of geophysics, and the efficiency of the algorithm is enhanced. Because the geological information is used fully, the impedance inversion result is reasonable and complex reservoirs can be forecast more reliably.
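For context, a common Bayesian formulation of convolution-model impedance inversion (a generic sketch under standard assumptions, not necessarily the exact objective used in this dissertation) writes the seismic trace as a wavelet convolved with the reflectivity derived from impedance, and minimizes a posterior objective that combines a data-misfit term with a prior built from the geologically constrained impedance model:

$$ s(t) = w(t) * r(t) + n(t), \qquad r_i = \frac{Z_{i+1} - Z_i}{Z_{i+1} + Z_i}, $$

$$ J(Z) = \frac{1}{2\sigma_n^2}\,\big\| s - W\,r(Z) \big\|^2 \;+\; \lambda\,\big\| C^{-1/2}\big(Z - Z_{\text{prior}}\big) \big\|^2, $$

where $W$ is the wavelet convolution matrix, $Z_{\text{prior}}$ is the impedance model derived from the structural interpretation, $C$ encodes its covariance, and $J$ can be minimized with conjugate gradients.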
Abstract:
Given $n$ noisy observations $g_i$ of the same quantity $f$, it is common practice to estimate $f$ by minimizing the function $\sum_{i=1}^{n}(g_i-f)^2$. From a statistical point of view this corresponds to computing the maximum likelihood estimate under the assumption of Gaussian noise. However, it is well known that this choice leads to results that are very sensitive to the presence of outliers in the data. For this reason it has been proposed to minimize functions of the form $\sum_{i=1}^{n}V(g_i-f)$, where $V$ is a function that increases less rapidly than the square. Several choices for $V$ have been proposed and successfully used to obtain "robust" estimates. In this paper we show that, for a class of functions $V$, using these robust estimators corresponds to assuming that the data are corrupted by Gaussian noise whose variance fluctuates according to some given probability distribution, which uniquely determines the shape of $V$.
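Concretely, the correspondence described here is the familiar Gaussian scale-mixture construction (stated as a standard identity, not as the paper's exact derivation): if the noise is Gaussian with a random variance $\sigma^2$ drawn from a mixing distribution $P$, the effective penalty $V$ satisfies, up to an additive constant,

$$ e^{-V(t)} \;\propto\; \int_0^{\infty} \frac{1}{\sqrt{2\pi\sigma^2}}\, \exp\!\Big(-\frac{t^2}{2\sigma^2}\Big)\, dP(\sigma^2), $$

so a heavy-tailed mixing distribution $P$ yields a $V$ that grows more slowly than the quadratic and hence down-weights outliers.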
Abstract:
We develop a mean field theory for sigmoid belief networks based on ideas from statistical mechanics. Our mean field theory provides a tractable approximation to the true probability distribution in these networks; it also yields a lower bound on the likelihood of evidence. We demonstrate the utility of this framework on a benchmark problem in statistical pattern recognition -- the classification of handwritten digits.
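As a reminder of the general mechanism (a generic variational bound via Jensen's inequality, not this paper's specific parameterization), a factorized mean-field distribution $Q$ over the hidden units $H$ yields a tractable lower bound on the log-likelihood of the evidence $v$:

$$ \log P(v) \;=\; \log \sum_{H} P(H, v) \;\ge\; \sum_{H} Q(H)\,\log\frac{P(H,v)}{Q(H)} \;=\; \mathbb{E}_{Q}\big[\log P(H,v)\big] + \mathcal{H}(Q), $$

and the mean-field parameters of $Q$ are chosen to make this bound as tight as possible.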
Abstract:
In this thesis we study the general problem of reconstructing a function, defined on a finite lattice, from a set of incomplete, noisy and/or ambiguous observations. The goal of this work is to demonstrate the generality and practical value of a probabilistic (in particular, Bayesian) approach to this problem, particularly in the context of Computer Vision. In this approach, the prior knowledge about the solution is expressed in the form of a Gibbsian probability distribution on the space of all possible functions, so that the reconstruction task is formulated as an estimation problem. Our main contributions are the following: (1) We introduce the use of specific error criteria for the design of the optimal Bayesian estimators for several classes of problems, and propose a general (Monte Carlo) procedure for approximating them. This new approach leads to a substantial improvement over the existing schemes, both regarding the quality of the results (particularly for low signal-to-noise ratios) and the computational efficiency. (2) We apply the Bayesian approach to the solution of several problems, some of which are formulated and solved in these terms for the first time. Specifically, these applications are: the reconstruction of piecewise constant surfaces from sparse and noisy observations; the reconstruction of depth from stereoscopic pairs of images; and the formation of perceptual clusters. (3) For each one of these applications, we develop fast, deterministic algorithms that approximate the optimal estimators, and illustrate their performance on both synthetic and real data. (4) We propose a new method, based on the analysis of the residual process, for estimating the parameters of the probabilistic models directly from the noisy observations. This scheme leads to an algorithm, with no free parameters, for the restoration of piecewise uniform images. (5) We analyze the implementation of the algorithms that we develop on non-conventional hardware, such as massively parallel digital machines, and analog and hybrid networks.
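In this Bayesian setting the standard construction (sketched here in generic form; the thesis develops specific error criteria on top of it) combines the Gibbsian prior with the observation model to give a posterior Gibbs distribution,

$$ P(f \mid g) \;\propto\; \exp\!\big(-U(f)\big)\, P(g \mid f), $$

and an optimal estimator is then defined by the chosen error criterion; for example, a maximizer-of-the-posterior-marginals estimate $\hat f_s = \arg\max_{f_s} P(f_s \mid g)$ at each site $s$ can be approximated by Monte Carlo sampling from the posterior and counting the relative frequency of each label at each site.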
Abstract:
We consider a mobile sensor network monitoring a spatio-temporal field. Given limited cache sizes at the sensor nodes, the goal is to develop a distributed cache management algorithm to efficiently answer queries with a known probability distribution over the spatial dimension. First, we propose a novel distributed information theoretic approach in which the nodes locally update their caches based on full knowledge of the space-time distribution of the monitored phenomenon. At each time instant, local decisions are made at the mobile nodes concerning which samples to keep and whether or not a new sample should be acquired at the current location. These decisions account for minimizing an entropic utility function that captures the average amount of uncertainty in queries given the probability distribution of query locations. Second, we propose a different correlation-based technique, which only requires knowledge of the second-order statistics, thus relaxing the stringent constraint of having a priori knowledge of the query distribution, while significantly reducing the computational overhead. It is shown that the proposed approaches considerably improve the average field estimation error by maintaining efficient cache content. It is further shown that the correlation-based technique is robust to model mismatch in case of imperfect knowledge of the underlying generative correlation structure.
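The following is a minimal numerical sketch in the spirit of the correlation-based idea above: cache contents are chosen using only second-order statistics so as to reduce the query-weighted posterior variance of the field estimate. The centralized greedy selection, the squared-exponential kernel, and all names and sizes are illustrative assumptions, not the paper's distributed algorithm.

```python
# Greedy cache selection from second-order statistics (illustrative sketch).
import numpy as np

def sq_exp_cov(x, y, length=0.2, var=1.0):
    """Squared-exponential covariance between 1-D location arrays x and y."""
    d = x[:, None] - y[None, :]
    return var * np.exp(-0.5 * (d / length) ** 2)

def query_weighted_posterior_var(cache, queries, q_prob, noise=1e-3):
    """Average posterior variance at query locations, weighted by the query pmf."""
    if len(cache) == 0:
        return np.sum(q_prob * np.diag(sq_exp_cov(queries, queries)))
    K_cc = sq_exp_cov(cache, cache) + noise * np.eye(len(cache))
    K_qc = sq_exp_cov(queries, cache)
    prior_var = np.diag(sq_exp_cov(queries, queries))
    post_var = prior_var - np.sum(K_qc @ np.linalg.inv(K_cc) * K_qc, axis=1)
    return np.sum(q_prob * post_var)

rng = np.random.default_rng(0)
candidates = rng.uniform(0, 1, size=40)          # candidate sample locations
queries = np.linspace(0, 1, 50)                  # possible query locations
q_prob = np.exp(-((queries - 0.7) ** 2) / 0.02)  # known query distribution (pmf)
q_prob /= q_prob.sum()

cache, budget = [], 5
for _ in range(budget):                          # fill the cache greedily
    best = min((c for c in candidates if c not in cache),
               key=lambda c: query_weighted_posterior_var(
                   np.array(cache + [c]), queries, q_prob))
    cache.append(best)
print("cached locations:", np.round(sorted(cache), 3))
```

The cached locations concentrate where queries are likely, which is the qualitative behaviour the abstract describes for efficient cache content.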
Abstract:
A novel approach for estimating articulated body posture and motion from monocular video sequences is proposed. Human pose is defined as the instantaneous two-dimensional configuration (i.e., the projection onto the image plane) of a single articulated body in terms of the position of a predetermined set of joints. First, statistical segmentation of the human bodies from the background is performed and low-level visual features are found given the segmented body shape. The goal is to be able to map these, generally low-level, visual features to body configurations. The system estimates different mappings, each one associated with a specific cluster in the visual feature space. Given a set of body motion sequences for training, unsupervised clustering is obtained via the Expectation Maximization algorithm. Then, for each of the clusters, a function is estimated to build the mapping from low-level features to 3D pose. Currently this mapping is modeled by a neural network. Given new visual features, a mapping from each cluster is performed to yield a set of possible poses. From this set, the system selects the most likely pose given the learned probability distribution and the visual feature similarity between hypothesis and input. Performance of the proposed approach is characterized using a new set of known body postures, showing promising results.
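Below is a minimal sketch of the cluster-then-map pipeline described above: an EM-fitted Gaussian mixture partitions the feature space, one small regressor per cluster maps features to pose, and a new observation is resolved by weighing the cluster responsibilities. The feature and pose dimensions, the library components, and the random data are assumptions for illustration, not the system's actual modules.

```python
# Cluster visual features with EM, then learn one feature-to-pose mapping per cluster.
import numpy as np
from sklearn.mixture import GaussianMixture
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)
features = rng.normal(size=(600, 10))   # stand-in low-level visual features
poses = rng.normal(size=(600, 24))      # stand-in pose vectors (e.g., joint coordinates)

# Unsupervised clustering of the feature space via EM (Gaussian mixture).
gmm = GaussianMixture(n_components=4, random_state=1).fit(features)
labels = gmm.predict(features)

# One mapping (here a small neural network) per cluster: features -> pose.
mappers = {}
for k in range(gmm.n_components):
    idx = labels == k
    mappers[k] = MLPRegressor(hidden_layer_sizes=(32,), max_iter=500,
                              random_state=1).fit(features[idx], poses[idx])

# At run time every cluster proposes a pose hypothesis; the most likely one is
# selected here using the cluster responsibilities of the new feature vector.
new_feat = rng.normal(size=(1, 10))
resp = gmm.predict_proba(new_feat)[0]
hypotheses = {k: m.predict(new_feat)[0] for k, m in mappers.items()}
best = max(hypotheses, key=lambda k: resp[k])
print("selected cluster:", best, "pose estimate shape:", hypotheses[best].shape)
```

In the system described above the selection additionally weighs the visual similarity between hypothesis and input; the sketch keeps only the probabilistic part for brevity.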
Abstract:
The paper investigates stochastic processes forced by independent and identically distributed jumps occurring according to a Poisson process. The impact of different distributions of the jump amplitudes is analyzed for processes with linear drift. Exact expressions for the probability density functions are derived when the jump amplitudes follow exponential, gamma, and mixed-exponential distributions, for both natural and reflecting boundary conditions. The mean level-crossing properties are studied in relation to the different jump amplitudes. As an example of application of the previous theoretical derivations, the role of different rainfall-depth distributions in an existing stochastic soil water balance model is analyzed. It is shown how the shape of the distribution of daily rainfall depths plays a more relevant role in the soil moisture probability distribution as the rainfall frequency decreases, as predicted by future climatic scenarios. © 2010 The American Physical Society.
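A Monte Carlo sketch of this class of processes is shown below: a state that decays through a linear drift term and is forced by Poisson-timed jumps with exponentially distributed amplitudes. The parameter values, the linear-loss form of the drift, and the Euler time-stepping are illustrative assumptions, not the paper's calibration or its exact solutions.

```python
# Monte Carlo simulation of a linear-drift process with Poisson exponential jumps.
import numpy as np

rng = np.random.default_rng(2)
lam, mean_jump, k = 0.2, 1.0, 0.1     # jump rate, mean jump amplitude, linear loss rate
dt, t_end, n_paths = 0.01, 100.0, 2000
n_steps = int(t_end / dt)

x = np.zeros(n_paths)
for _ in range(n_steps):
    x -= k * x * dt                               # linear drift (loss term)
    jump = rng.random(n_paths) < lam * dt         # at most one jump per small step
    x += jump * rng.exponential(mean_jump, size=n_paths)

# Empirical steady-state probability distribution of the state.
hist, edges = np.histogram(x, bins=40, density=True)
print("empirical mean:", round(float(x.mean()), 3),
      "(theory: lam * mean_jump / k =", lam * mean_jump / k, ")")
```

The empirical mean converges to the shot-noise stationary mean $\lambda\,\mathbb{E}[\text{jump}]/k$, which provides a quick sanity check on the simulation.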
Abstract:
Given a probability distribution on an open book (a metric space obtained by gluing a disjoint union of copies of a half-space along their boundary hyperplanes), we define a precise concept of when the Fréchet mean (barycenter) is sticky. This nonclassical phenomenon is quantified by a law of large numbers (LLN) stating that the empirical mean eventually almost surely lies on the (codimension 1 and hence measure 0) spine that is the glued hyperplane, and a central limit theorem (CLT) stating that the limiting distribution is Gaussian and supported on the spine. We also state versions of the LLN and CLT for the cases where the mean is nonsticky (i.e., not lying on the spine) and partly sticky (i.e., lying on the spine but not sticky). © Institute of Mathematical Statistics, 2013.
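For reference, the Fréchet mean of a probability distribution $\mu$ on a metric space $(M,d)$ (here the open book) and its empirical counterpart are defined in the standard way as minimizers of expected squared distance; the stickiness phenomenon concerns where these minimizers can lie:

$$ \bar\mu \;=\; \operatorname*{arg\,min}_{x \in M} \int_M d(x, y)^2 \, \mu(dy), \qquad \bar\mu_n \;=\; \operatorname*{arg\,min}_{x \in M} \frac{1}{n}\sum_{i=1}^{n} d(x, X_i)^2 . $$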
Abstract:
In this paper a methodology for the application of computer simulation to the evacuation certification of aircraft is suggested. The methodology suggested here involves the use of computer simulation, historic certification data, component testing and full-scale certification trials. The proposed methodology sets out a protocol for how computer simulation should be undertaken in a certification environment and draws on experience from both the marine and building industries. Along with the suggested protocol, a phased introduction of computer models to certification is suggested. Given the sceptical nature of the aviation community regarding any change to certification methodology, this would involve, as a first step, the use of computer simulation in conjunction with full-scale testing. The computer model would be used to reproduce a probability distribution of likely aircraft performance under current certification conditions; in addition, several other more challenging scenarios could be developed. The combination of a full-scale trial, computer simulation (and, if necessary, component testing) would provide better insight into the actual performance capabilities of the aircraft by generating a performance probability distribution, or performance envelope, rather than a single datum. Once further confidence in the technique is established, the second step would involve only computer simulation and component testing. This would only be contemplated after sufficient experience and confidence in the use of computer models have been developed. The third step in the adoption of computer simulation for certification would involve the introduction of several scenarios based on, for example, exit availability informed by accident analysis. The final step would be the introduction of more realistic accident scenarios into the certification process. This would require the continued development of aircraft evacuation modelling technology to include additional behavioural features common in real accident scenarios.
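The sketch below illustrates the "performance envelope" idea only: repeated stochastic simulations yield a probability distribution of total evacuation time instead of a single trial datum. The toy model (a queue at each available exit with random per-passenger delays), the passenger and exit counts, and the 90 s threshold used for reporting are assumptions for illustration and bear no relation to any certified evacuation model.

```python
# Toy Monte Carlo: distribution of evacuation times rather than a single datum.
import numpy as np

rng = np.random.default_rng(3)

def simulate_evacuation(n_passengers=180, n_exits=4):
    """Return total evacuation time (s) for one stochastic replication."""
    # Random reaction time plus travel time to the assigned exit.
    ready = rng.uniform(1, 6, n_passengers) + rng.uniform(2, 15, n_passengers)
    exits = rng.integers(0, n_exits, n_passengers)   # exit assignment
    exit_free_at = np.zeros(n_exits)
    done = np.empty(n_passengers)
    for p in np.argsort(ready):                      # serve passengers in arrival order
        service = rng.uniform(0.8, 1.6)              # time to pass through the exit
        start = max(ready[p], exit_free_at[exits[p]])
        exit_free_at[exits[p]] = start + service
        done[p] = start + service
    return done.max()

times = np.array([simulate_evacuation() for _ in range(500)])
print("mean %.1f s, 95th percentile %.1f s, P(<= 90 s) = %.2f"
      % (times.mean(), np.percentile(times, 95), (times <= 90).mean()))
```

Reporting a percentile or an exceedance probability, rather than one trial outcome, is exactly the shift from a single datum to a performance probability distribution that the methodology argues for.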
Proposed methodology for the use of computer simulation to enhance aircraft evacuation certification
Abstract:
In this paper a methodology for the application of computer simulation to evacuation certification of aircraft is suggested. This involves the use of computer simulation, historic certification data, component testing, and full-scale certification trials. The methodology sets out a framework for how computer simulation should be undertaken in a certification environment and draws on experience from both the marine and building industries. In addition, a phased introduction of computer models to certification is suggested. This involves as a first step the use of computer simulation in conjunction with full-scale testing. The combination of full-scale trial, computer simulation (and if necessary component testing) provides better insight into aircraft evacuation performance capabilities by generating a performance probability distribution rather than a single datum. Once further confidence in the technique is established, the requirement for the full-scale demonstration could be dropped. The second step in the adoption of computer simulation for certification involves the introduction of several scenarios based on, for example, exit availability informed by accident analysis. The final step would be the introduction of more realistic accident scenarios. This would require the continued development of aircraft evacuation modeling technology to include additional behavioral features common in real accident scenarios.
Abstract:
First results of a coupled modeling and forecasting system for pelagic fisheries are presented. The system currently consists of three mathematically fundamentally different model subsystems: POLCOMS-ERSEM, providing the physical-biogeochemical environment, implemented in the domain of the North-West European shelf, and the SPAM model, which describes sandeel stocks in the North Sea. The third component, the SLAM model, connects POLCOMS-ERSEM and SPAM by computing the physical-biological interaction. Our main lesson from coupling the model subsystems is that well-defined and generic model interfaces are very important for a successful and extendable coupled model framework. The integrated approach, simulating ecosystem dynamics from physics to fish, allows the pathways in the ecosystem to be analysed, so that the propagation of changes in the ocean climate and lower trophic levels can be investigated and the impacts on the higher trophic level, in this case the sandeel population, can be quantified, demonstrated here on the basis of hindcast data. The coupled forecasting system is tested on some typical scientific questions arising in spatial fish stock management and marine spatial planning, including the determination of local and basin-scale maximum sustainable yield, stock connectivity and source/sink structure. Our simulations indicate that sandeel stocks are currently exploited close to the maximum sustainable yield, but large uncertainty is associated with determining the stock's maximum sustainable yield due to stock eigendynamics and climatic variability. Our statistical ensemble simulations indicate that the predictive horizon set by climatic interannual variability is 2–6 yr, after which only an asymptotic probability distribution of stock properties, such as biomass, is predictable.
Abstract:
Large waves pose risks to ships, offshore structures, coastal infrastructure and ecosystems. This paper analyses 10 years of in-situ measurements of significant wave height (Hs) and maximum wave height (Hmax) from the ocean weather ship Polarfront in the Norwegian Sea. During the period 2000 to 2009, surface elevation was recorded every 0.59 s during sampling periods of 30 min. The Hmax observations scale linearly with Hs on average. A widely-used empirical Weibull distribution is found to estimate average values of Hmax/Hs and Hmax better than a Rayleigh distribution, but tends to underestimate both for all but the smallest waves. In this paper we propose a modified Rayleigh distribution which compensates for the heterogeneity of the observed dataset: the distribution is fitted to the whole dataset and improves the estimate of the largest waves. Over the 10-year period, the Weibull distribution approximates the observed Hs and Hmax well, and an exponential function can be used to predict the probability distribution function of the ratio Hmax/Hs. However, the Weibull distribution tends to underestimate the occurrence of extremely large values of Hs and Hmax. The persistence of Hs and Hmax in winter is also examined. Wave fields with Hs > 12 m and Hmax > 16 m do not last longer than 3 h. Low-to-moderate wave heights that persist for more than 12 h dominate the relationship of the wave field with the winter NAO index over 2000–2009. In contrast, the inter-annual variability of wave fields with Hs > 5.5 m or Hmax > 8.5 m and wave fields persisting over ~2.5 days is not associated with the winter NAO index.
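The short sketch below illustrates the kind of distribution fitting discussed above: Weibull and Rayleigh models are fitted to a sample of significant wave heights and their tail probabilities compared. The synthetic sample and its parameters stand in for the Polarfront record and are assumptions, not the observed data.

```python
# Fit Weibull and Rayleigh distributions to wave heights and compare tails.
import numpy as np
from scipy import stats

hs = stats.weibull_min.rvs(c=1.6, scale=2.5, size=5000, random_state=4)  # synthetic Hs (m)

# Fit both candidate distributions with the location fixed at zero.
c, loc, scale = stats.weibull_min.fit(hs, floc=0)
ray_loc, ray_scale = stats.rayleigh.fit(hs, floc=0)

for h in (8.0, 10.0, 12.0):
    p_w = stats.weibull_min.sf(h, c, loc, scale)
    p_r = stats.rayleigh.sf(h, ray_loc, ray_scale)
    print(f"P(Hs > {h:4.1f} m): Weibull {p_w:.2e}  Rayleigh {p_r:.2e}")
```

Comparing exceedance probabilities at large thresholds is what reveals the underestimation of extreme Hs and Hmax noted in the abstract.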
Abstract:
Joint quantum measurements of noncommuting observables are possible, if one accepts an increase in the measured variances. A necessary condition for a joint measurement to be possible is that a joint probability distribution exists for the measurement. This fact suggests that there may be a link with Bell inequalities, as these will be satisfied if and only if a joint probability distribution for all involved observables exists. We investigate the connections between Bell inequalities and conditions for joint quantum measurements to be possible. Mermin's inequality for the three-particle Greenberger-Horne-Zeilinger state turns out to be equivalent to the condition for a joint measurement on two out of the three quantum systems to exist. Gisin's Bell inequality for three coplanar measurement directions, meanwhile, is shown to be less strict than the condition for the corresponding joint measurement.
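For concreteness, the three-particle Mermin inequality referred to here can be written in its standard form, with $A_i$, $B_i$, $C_i = \pm 1$ denoting local observables on the three subsystems:

$$ \big|\,\langle A_1 B_2 C_2\rangle + \langle A_2 B_1 C_2\rangle + \langle A_2 B_2 C_1\rangle - \langle A_1 B_1 C_1\rangle\,\big| \;\le\; 2 . $$

Every local hidden-variable model, and hence every joint probability distribution over all the involved observables, must satisfy this bound, whereas the Greenberger-Horne-Zeilinger state can reach the algebraic maximum of 4.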
Abstract:
In the Crawford-Sobel (uniform, quadratic utility) cheap-talk model, we consider a simple mediation scheme (a communication device) in which the informed agent reports one of N possible elements of a partition to the mediator and then the mediator suggests one of N actions to the uninformed decision-maker according to the probability distribution of the device. We show that such a simple mediated equilibrium cannot improve upon the unmediated N-partition Crawford-Sobel equilibrium when the preference divergence parameter (bias) is small.
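As background, in the uniform-quadratic Crawford-Sobel model referenced here (state $\theta \sim U[0,1]$, bias $b>0$, quadratic losses), the boundaries $a_0 = 0 < a_1 < \dots < a_N = 1$ of an $N$-partition equilibrium satisfy the standard indifference (arbitrage) condition

$$ a_{i+1} - a_i \;=\; a_i - a_{i-1} + 4b, \qquad i = 1, \dots, N-1, $$

so an $N$-partition equilibrium exists only if $2N(N-1)b < 1$; the mediation scheme studied in the paper is compared against this unmediated benchmark.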