971 results for Bounded relative error
Abstract:
Affine transformations are often used in recognition systems to approximate the effects of perspective projection. The underlying mathematics assumes exact feature data, with no positional uncertainty. In practice, heuristics are added to handle uncertainty. We provide a precise analysis of affine point matching, obtaining an expression for the range of affine-invariant values consistent with bounded uncertainty. This analysis reveals that the range of affine-invariant values depends on the actual $x$-$y$ positions of the features; i.e., with uncertainty, affine representations are not invariant with respect to the Cartesian coordinate system. We analyze the effect of this on geometric hashing and alignment recognition methods.
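As a hedged illustration of the point-matching setting above (not code from the paper; the basis points, the perturbation bound eps, and the Monte Carlo sampling are assumptions made for the sketch), the following snippet computes affine-invariant coordinates of a point with respect to a basis triple and the range of values those coordinates take under bounded positional uncertainty:

```python
import numpy as np

def affine_coords(p, b0, b1, b2):
    """Affine-invariant coordinates (alpha, beta) of p in the basis (b0, b1, b2):
    p = b0 + alpha*(b1 - b0) + beta*(b2 - b0)."""
    A = np.column_stack((b1 - b0, b2 - b0))
    return np.linalg.solve(A, p - b0)

rng = np.random.default_rng(0)
b0, b1, b2 = np.array([0.0, 0.0]), np.array([10.0, 0.0]), np.array([0.0, 10.0])
p = np.array([4.0, 3.0])
exact = affine_coords(p, b0, b1, b2)

# Bounded uncertainty: perturb every feature by at most eps in each coordinate.
eps = 0.5
samples = []
for _ in range(10000):
    noise = rng.uniform(-eps, eps, size=(4, 2))
    samples.append(affine_coords(p + noise[0], b0 + noise[1],
                                 b1 + noise[2], b2 + noise[3]))
samples = np.array(samples)

print("exact affine coordinates:", exact)
print("observed range of alpha :", samples[:, 0].min(), samples[:, 0].max())
print("observed range of beta  :", samples[:, 1].min(), samples[:, 1].max())
```

Re-running the sketch with the same eps but with the feature configuration stretched or shrunk changes the width of the observed ranges, which is the coordinate-system dependence the abstract describes.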
Abstract:
This article provides importance sampling algorithms for computing the probabilities of various types of ruin of spectrally negative Lévy risk processes, namely ruin over the infinite time horizon, ruin within a finite time horizon and ruin past a finite time horizon. For the special case of the compound Poisson process perturbed by diffusion, algorithms for computing the probabilities of ruin by creeping (i.e. induced by the diffusion term) and by jumping (i.e. by a claim amount) are provided. It is shown that these algorithms have either bounded relative error or logarithmic efficiency as $t, x \to \infty$, where $t>0$ is the time horizon and $x>0$ is the starting point of the risk process, with $y = t/x$ held constant and assumed to lie either below or above a certain constant.
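The general spectrally negative Lévy algorithms are beyond a short sketch, but the flavour of bounded-relative-error importance sampling can be shown on the classical Cramér-Lundberg special case with exponential claims, using a Siegmund-type exponential tilt. All parameter values below are assumptions for illustration, not taken from the article:

```python
import numpy as np

def ruin_prob_is(x, lam=1.0, mu=1.0, c=1.5, n_paths=20000, seed=1):
    """Siegmund-type importance sampling for the infinite-horizon ruin
    probability of the compound Poisson model: Poisson(lam) claim arrivals,
    Exp(mu) claim sizes, premium rate c (net profit condition c > lam/mu)."""
    gamma = mu - lam / c               # adjustment (Lundberg) coefficient
    lam_t = lam * mu / (mu - gamma)    # claim arrival rate under the tilt
    mu_t = mu - gamma                  # claim-size rate under the tilt (still exponential)
    rng = np.random.default_rng(seed)
    est = np.empty(n_paths)
    for i in range(n_paths):
        s = 0.0                        # claim-surplus process at claim epochs
        while s <= x:                  # under the tilt the level x is crossed a.s.
            s += rng.exponential(1.0 / mu_t) - c * rng.exponential(1.0 / lam_t)
        est[i] = np.exp(-gamma * s)    # likelihood ratio on the ruin event
    return est.mean(), est.std(ddof=1) / np.sqrt(n_paths)

x = 20.0
psi_hat, se = ruin_prob_is(x)
psi_exact = (1.0 / 1.5) * np.exp(-(1.0 - 1.0 / 1.5) * x)   # closed form for Exp claims
print(psi_hat, "+/-", se, " exact:", psi_exact)
```

Because the likelihood ratio on the ruin event is exp(-gamma * S(tau)) <= exp(-gamma * x), while the ruin probability itself decays like a constant times exp(-gamma * x), the relative error of this estimator stays bounded as x grows.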
Abstract:
The saddlepoint method provides accurate approximations for the distributions of many test statistics and estimators, and for important probabilities arising in various stochastic models. The saddlepoint approximation is a large-deviations technique which is substantially more accurate than limiting normal or Edgeworth approximations, especially in the presence of very small sample sizes or very small probabilities. The outstanding accuracy of the saddlepoint approximation can be explained by the fact that it has bounded relative error.
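As a self-contained worked example of the bounded-relative-error property (chosen here for illustration, not taken from the text): for the mean of n i.i.d. Exp(1) variables the exact density is a Gamma density, so the saddlepoint approximation can be compared directly. Its relative error turns out to be essentially constant in the argument, roughly 1/(12n), even far out in the tail.

```python
import numpy as np
from scipy.stats import gamma

def saddlepoint_density_exp_mean(x, n):
    """Saddlepoint density approximation for the mean of n iid Exp(1) variables.
    CGF: K(t) = -log(1 - t); the saddlepoint solves K'(t) = x, giving t = 1 - 1/x."""
    t_hat = 1.0 - 1.0 / x
    K = -np.log(1.0 - t_hat)                 # = log(x)
    K2 = 1.0 / (1.0 - t_hat) ** 2            # K''(t_hat) = x**2
    return np.sqrt(n / (2 * np.pi * K2)) * np.exp(n * (K - t_hat * x))

n = 5
xs = np.array([0.5, 1.0, 2.0, 4.0, 8.0])     # includes points far into the right tail
exact = gamma.pdf(xs, a=n, scale=1.0 / n)    # mean of n Exp(1) ~ Gamma(n, scale 1/n)
approx = saddlepoint_density_exp_mean(xs, n)
print("relative error:", approx / exact - 1.0)
```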
Abstract:
Stochastic simulation is an important and practical technique for computing probabilities of rare events, like the payoff probability of a financial option, the probability that a queue exceeds a certain level, or the probability of ruin of the insurer's risk process. Rare events occur so infrequently that they cannot reasonably be recorded during a standard simulation run: specific simulation algorithms that counteract the rarity of the event to be simulated are required. An important algorithm in this context is based on changing the sampling distribution and is called importance sampling. Optimal Monte Carlo algorithms for computing rare-event probabilities are either logarithmically efficient or possess bounded relative error.
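A minimal sketch of the importance-sampling idea mentioned above (generic, not tied to any of the listed works): to estimate the Gaussian tail probability P(Z > u), sample from a proposal centred on the rare set and reweight by the likelihood ratio; crude Monte Carlo would need on the order of 1/P(Z > u) samples just to observe the event.

```python
import numpy as np
from scipy.stats import norm

def gaussian_tail_is(u, n=100000, seed=0):
    """Importance-sampling estimate of P(Z > u), Z ~ N(0,1), using the
    exponentially tilted proposal N(u, 1) and its likelihood ratio."""
    rng = np.random.default_rng(seed)
    x = rng.normal(loc=u, size=n)                 # proposal centred on the rare set
    w = np.exp(-u * x + 0.5 * u * u) * (x > u)    # dN(0,1)/dN(u,1) on the event {x > u}
    est = w.mean()
    rel_err = w.std(ddof=1) / (np.sqrt(n) * est)  # estimated relative error
    return est, rel_err

for u in (2.0, 4.0, 6.0):
    est, re = gaussian_tail_is(u)
    print(u, est, norm.sf(u), re)                 # compare with the exact tail
```

This particular tilt is logarithmically efficient rather than of bounded relative error: the per-sample relative error still grows with u, but only slowly, in contrast to crude Monte Carlo.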
Abstract:
The estimation of $P(S_n > u)$ by simulation, where $S_n$ is the sum of independent, identically distributed random variables $Y_1, \ldots, Y_n$, is of importance in many applications. We propose two simulation estimators based upon the identity $P(S_n > u) = nP(S_n > u, M_n = Y_n)$, where $M_n = \max(Y_1, \ldots, Y_n)$. One estimator uses importance sampling (for $Y_n$ only), and the other uses conditional Monte Carlo, conditioning upon $Y_1, \ldots, Y_{n-1}$. Properties of the relative error of the estimators are derived, and a numerical study is given in terms of the M/G/1 queue, in which $n$ is replaced by an independent geometric random variable $N$. The conclusion is that the new estimators compare extremely favorably with previous ones. In particular, the conditional Monte Carlo estimator is the first heavy-tailed example of an estimator with bounded relative error. Further improvements are obtained in the random-$N$ case by incorporating control variates and stratification techniques into the new estimation procedures.
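The conditional Monte Carlo identity quoted above can be sketched directly. The version below uses a fixed n and Pareto-distributed summands as a stand-in for the heavy-tailed setting; the distribution, its parameters and the sample sizes are assumptions for illustration, and the random-N, M/G/1 refinements of the paper are not reproduced.

```python
import numpy as np

def cond_mc_tail(u, n, alpha=1.5, n_rep=200000, seed=0):
    """Conditional Monte Carlo estimator of P(S_n > u) for iid Pareto summands
    with survival function F_bar(y) = (1 + y)**(-alpha), y >= 0.
    Uses P(S_n > u) = n*P(S_n > u, M_n = Y_n) and conditions on Y_1,...,Y_{n-1}:
    given those values, the event requires Y_n > max(M_{n-1}, u - S_{n-1})."""
    rng = np.random.default_rng(seed)
    y = rng.uniform(size=(n_rep, n - 1)) ** (-1.0 / alpha) - 1.0   # Pareto via inversion
    s = y.sum(axis=1)                    # S_{n-1}
    m = y.max(axis=1)                    # M_{n-1}
    z = n * (1.0 + np.maximum(m, u - s)) ** (-alpha)
    return z.mean(), z.std(ddof=1) / (np.sqrt(n_rep) * z.mean())

for u in (10.0, 100.0, 1000.0):
    est, rel_err = cond_mc_tail(u, n=10)
    print(u, est, rel_err)               # the relative error stays flat as u grows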
Abstract:
The questions that one should answer in engineering computations - deterministic, probabilistic/randomized, as well as heuristic - are (i) how good the computed results/outputs are and (ii) how much the outputs cost, in terms of the amount of computation and the amount of storage used to obtain them. The absolutely error-free quantities, as well as the completely errorless computations occurring in a natural process, can never be captured by any means that we have at our disposal. While the computations, including the input real quantities, in nature/natural processes are exact, all the computations that we do using a digital computer, or that are carried out in embedded form, are never exact. The input data for such computations are also never exact, because any measuring instrument has an inherent error of a fixed order associated with it, and this error, as a matter of hypothesis and not as a matter of assumption, is not less than 0.005 per cent. Here by error we mean relative error bounds. The fact that the exact error is never known, under any circumstances and in any context, implies that the term error is nothing but error-bounds. Further, in engineering computations it is the relative error or, equivalently, the relative error-bounds (and not the absolute error) which is supremely important in providing us the information regarding the quality of the results/outputs. Another important fact is that inconsistency and/or near-inconsistency in nature, i.e., in problems created from nature, is completely nonexistent, while in our modelling of natural problems we may introduce inconsistency or near-inconsistency due to human error, due to the inherent non-removable error associated with any measuring device, or due to assumptions introduced to make the problem solvable or more easily solvable in practice. Thus, if we discover any inconsistency or possibly any near-inconsistency in a mathematical model, it is certainly due to any or all of the three foregoing factors. We do, however, go ahead and solve such inconsistent/near-inconsistent problems and do get results that can be useful in real-world situations. The talk considers several deterministic, probabilistic, and heuristic algorithms in numerical optimisation, in other numerical and statistical computations, and in PAC (probably approximately correct) learning models. It highlights the quality of the results/outputs by specifying relative error-bounds along with the associated confidence level, and the cost, viz. the amounts of computation and storage, through complexity. It points out the limitations of error-free computations (wherever possible, i.e., where the number of arithmetic operations is finite and known a priori) as well as of the usage of interval arithmetic. Further, the interdependence among the error, the confidence, and the cost is discussed.
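As a small concrete companion to the talk's themes of relative error-bounds, confidence, and cost (the function names and the example integrand are assumptions, not material from the talk), the sketch below runs a Monte Carlo computation until a requested relative error-bound is met at a given confidence level and reports the cost as the number of samples used:

```python
import numpy as np
from scipy.stats import norm

def mc_to_relative_error(sampler, rel_tol=0.01, confidence=0.95,
                         batch=10000, max_samples=10**7, seed=0):
    """Draw batches until the confidence-interval half-width, relative to the
    current estimate, falls below rel_tol. Returns the estimate, the achieved
    relative error-bound, and the cost (number of samples drawn)."""
    z = norm.ppf(0.5 + confidence / 2.0)
    rng = np.random.default_rng(seed)
    values = np.empty(0)
    while values.size < max_samples:
        values = np.concatenate([values, sampler(rng, batch)])
        est = values.mean()
        half = z * values.std(ddof=1) / np.sqrt(values.size)
        if est != 0.0 and half / abs(est) <= rel_tol:
            break
    return est, half / abs(est), values.size

# Example: estimate E[exp(Z)], Z ~ N(0,1); the exact value is exp(0.5).
est, rel, cost = mc_to_relative_error(lambda rng, k: np.exp(rng.normal(size=k)))
print(est, rel, cost, np.exp(0.5))
```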
Abstract:
An analysis is carried out, using the prolate spheroidal wave functions, of certain regularized iterative and noniterative methods previously proposed for the achievement of object restoration (or, equivalently, spectral extrapolation) from noisy image data. The ill-posedness inherent in the problem is treated by means of a regularization parameter, and the analysis shows explicitly how the deleterious effects of the noise are then contained. The error in the object estimate is also assessed, and it is shown that the optimal choice for the regularization parameter depends on the signal-to-noise ratio. Numerical examples are used to demonstrate the performance of both unregularized and regularized procedures and also to show how, in the unregularized case, artefacts can be generated from pure noise. Finally, the relative error in the estimate is calculated as a function of the degree of superresolution demanded for reconstruction problems characterized by low space–bandwidth products.
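The analysis above is specific to the prolate spheroidal expansion, but the role of the regularization parameter can be illustrated with a generic, noniterative Tikhonov-regularized restoration from band-limited noisy data. The operator, signal and parameter values below are assumptions for the sketch, not the article's method:

```python
import numpy as np

def tikhonov_restore(A, y, mu):
    """Noniterative regularized estimate argmin ||A x - y||^2 + mu * ||x||^2,
    computed from the SVD of the band-limiting operator A."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    return Vt.T @ ((s / (s ** 2 + mu)) * (U.T @ y))   # damped inverse filter

rng = np.random.default_rng(0)
n, k = 64, 12
F = np.fft.fft(np.eye(n)) / np.sqrt(n)
A = np.vstack([F.real[:k], F.imag[:k]])       # keep only the k lowest Fourier modes
x_true = np.zeros(n); x_true[20:30] = 1.0     # compact object to be restored
y = A @ x_true + 0.01 * rng.normal(size=A.shape[0])

for mu in (1e-8, 1e-4, 1e-1):                 # the best choice tracks the noise level
    x_hat = tikhonov_restore(A, y, mu)
    print(mu, np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```

The printed relative error as a function of mu shows the trade-off the abstract discusses: too little regularization amplifies noise, too much suppresses the recoverable detail.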
Abstract:
With the emergence of Unmanned Aircraft Systems (UAS) there is a growing need for safety standards and regulatory frameworks to manage the risks associated with their operations. The primary driver for airworthiness regulations (i.e., those governing the design, manufacture, maintenance and operation of UAS) is the risk presented to people in the regions overflown by the aircraft. Models characterising the nature of these risks are needed to inform the development of airworthiness regulations. The output from these models should include measures of the collective, individual and societal risk. A brief review of these measures is provided. Based on the review, it was determined that a model of the operation of a UAS over inhabited areas must be capable of describing the distribution of possible impact locations, given a failure at a particular point in the flight plan. Existing models either do not take the impact distribution into consideration, or propose complex and computationally expensive methods for its calculation. A computationally efficient approach for estimating the boundary (and in turn the area) of the impact distribution for fixed-wing unmanned aircraft is proposed. A series of geometric templates that approximate the impact distributions is derived using an empirical analysis of the results obtained from a 6-Degree-of-Freedom (6DoF) simulation. The impact distributions can be aggregated to provide impact footprint distributions for a range of generic phases of flight and missions. The maximum impact footprint areas obtained from the geometric templates are shown to have a relative error of typically less than 1% compared with the areas calculated using the computationally more expensive 6DoF simulation. Computation times for the geometric models are on the order of one second or less using a standard desktop computer. Future work includes characterising the distribution of impact locations within the footprint boundaries.
Abstract:
Automated crowd counting has become an active field of computer vision research in recent years. Existing approaches are scene-specific, as they are designed to operate in the single camera viewpoint that was used to train the system. Real-world camera networks often span multiple viewpoints within a facility, including many regions of overlap. This paper proposes a novel scene-invariant crowd counting algorithm that is designed to operate across multiple cameras. The approach uses camera calibration to normalise features between viewpoints and to compensate for regions of overlap. This compensation is performed by constructing an 'overlap map' which provides a measure of how much an object at one location is visible within other viewpoints. An investigation into the suitability of various feature types and regression models for scene-invariant crowd counting is also conducted. The features investigated include object size, shape, edges and keypoints. The regression models evaluated include neural networks, K-nearest neighbours, linear regression and Gaussian process regression. Our experiments demonstrate that accurate crowd counting was achieved across seven benchmark datasets, with optimal performance observed when all features were used and when Gaussian process regression was used. The combination of scene invariance and multi-camera crowd counting is evaluated by training the system on footage obtained from the QUT camera network and testing it on three cameras from the PETS 2009 database. Highly accurate crowd counting was observed, with a mean relative error of less than 10%. Our approach enables a pre-trained system to be deployed in a new environment without any additional training, bringing the field one step closer toward a 'plug and play' system.
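A hedged sketch of the regression stage only: the feature values, their relation to the count, and the kernel choice below are synthetic stand-ins, and the paper's calibration-based normalisation and overlap map are not reproduced. It shows Gaussian process regression mapping holistic frame features to a crowd count, evaluated by mean relative error.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n_frames = 200
count = rng.integers(5, 60, size=n_frames).astype(float)
# Synthetic per-frame features standing in for normalised size/shape/edge/keypoint cues.
features = np.column_stack([
    count * 120 + rng.normal(0, 300, n_frames),   # foreground area
    count * 15 + rng.normal(0, 60, n_frames),     # blob perimeter
    count * 40 + rng.normal(0, 150, n_frames),    # edge pixels
    count * 8 + rng.normal(0, 30, n_frames),      # keypoints
])

model = make_pipeline(
    StandardScaler(),
    GaussianProcessRegressor(kernel=RBF(length_scale=1.0) + WhiteKernel(),
                             normalize_y=True),
)
model.fit(features[:150], count[:150])
pred = model.predict(features[150:])
mre = np.mean(np.abs(pred - count[150:]) / count[150:])
print("mean relative error:", round(float(mre), 3))
```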
Abstract:
Objectives This study introduces and assesses the precision of a standardized protocol for anthropometric measurement of the juvenile cranium using three-dimensional surface-rendered models, for implementation in forensic investigation or paleodemographic research. Materials and methods A subset of multi-slice computed tomography (MSCT) DICOM datasets (n=10) of modern Australian subadults (birth to 10 years) was accessed from the “Skeletal Biology and Forensic Anthropology Virtual Osteological Database” (n>1200), obtained from retrospective clinical scans taken at Brisbane children's hospitals (2009–2013). The capabilities of Geomagic Design X™ form the basis of this study, which introduces standardized protocols using triangle surface mesh models to (i) ascertain linear dimensions using reference plane networks and (ii) calculate the area of complex regions of interest on the cranium. Results The protocols described in this paper demonstrate high levels of repeatability between five observers of varying anatomical expertise and software experience. Intra- and inter-observer error was indiscernible, with total technical error of measurement (TEM) values ≤0.56 mm, constituting <0.33% relative error (rTEM) for linear measurements, and a TEM value of ≤12.89 mm², equating to <1.18% (rTEM) of the total area of the anterior fontanelle and contiguous sutures. Conclusions Exploiting the advances of MSCT in routine clinical assessment, this paper assesses the application of this virtual approach to acquire highly reproducible morphometric data in a non-invasive manner for human identification and for population studies in growth and development. The protocols and precision testing presented are imperative for the advancement of “virtual anthropology” into routine Australian medico-legal death investigation.
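For reference, TEM and rTEM figures of the kind quoted above can be computed as below. This is the standard two-observer form of the technical error of measurement; the measurement values are hypothetical, and the study's five-observer comparisons use the corresponding multi-observer formula.

```python
import numpy as np

def tem(measurements):
    """Absolute technical error of measurement for two observers (or two sessions):
    TEM = sqrt(sum(d_i^2) / (2 * n)), with d_i the difference of the paired values."""
    m = np.asarray(measurements, dtype=float)   # shape (n_specimens, 2)
    d = m[:, 0] - m[:, 1]
    return np.sqrt(np.sum(d ** 2) / (2 * d.size))

def rtem(measurements):
    """Relative TEM (%): TEM divided by the grand mean of all measurements."""
    m = np.asarray(measurements, dtype=float)
    return 100.0 * tem(m) / m.mean()

# Hypothetical repeated cranial linear measurements (mm) by two observers.
pairs = [(92.4, 92.1), (88.7, 89.0), (95.2, 95.6), (90.1, 90.0), (93.8, 93.5)]
print(f"TEM = {tem(pairs):.2f} mm, rTEM = {rtem(pairs):.2f}%")
```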
Abstract:
Due to the health impacts caused by exposure to air pollutants in urban areas, monitoring and forecasting of air quality parameters have become an important topic in atmospheric and environmental research. Knowledge of the dynamics and complexity of air pollutant behavior has made artificial intelligence models a useful tool for more accurate pollutant concentration prediction. This paper focuses on an innovative method of daily air pollution prediction using a combination of Support Vector Machine (SVM) as predictor and Partial Least Squares (PLS) as a data selection tool, based on measured CO concentrations. The CO concentrations of the Rey monitoring station in the south of Tehran, from Jan. 2007 to Feb. 2011, have been used to test the effectiveness of this method. The hourly CO concentrations have been predicted using the SVM and the hybrid PLS–SVM models. Similarly, daily CO concentrations have been predicted based on the aforementioned four years of measured data. Results demonstrated that both models have good prediction ability; however, the hybrid PLS–SVM has better accuracy. In the analysis presented in this paper, statistical estimators including the relative mean error, the root mean squared error and the mean absolute relative error have been employed to compare the performance of the models. It was concluded that the errors decrease after size reduction and that the coefficients of determination increase from 56–81% for the SVM model to 65–85% for the hybrid PLS–SVM model. It was also found that the hybrid PLS–SVM model required less computational time than the SVM model, as expected, supporting the more accurate and faster prediction ability of the hybrid PLS–SVM model.
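A minimal sketch of the hybrid idea on synthetic data (the station's actual predictors, kernel settings and preprocessing are not reproduced; all values are assumptions): fit an SVM regressor on the raw inputs, separately fit it on a low-dimensional PLS projection, and compare mean absolute relative errors.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.svm import SVR
from sklearn.model_selection import train_test_split

# Synthetic stand-in for the CO record: lagged concentrations and covariates.
rng = np.random.default_rng(0)
n, p = 1200, 15
X = rng.normal(size=(n, p))
y = X[:, :4] @ np.array([0.8, 0.5, -0.3, 0.2]) + 5.0 + 0.1 * rng.normal(size=n)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

def mare(pred, truth):
    return np.mean(np.abs(pred - truth) / np.abs(truth))   # mean absolute relative error

# Plain SVM predictor on all inputs.
svm = SVR(kernel="rbf", C=10.0).fit(X_tr, y_tr)
print("SVM     MARE:", round(mare(svm.predict(X_te), y_te), 3))

# Hybrid: PLS selects a low-dimensional subspace, then SVM predicts from the scores.
pls = PLSRegression(n_components=4).fit(X_tr, y_tr)
svm_pls = SVR(kernel="rbf", C=10.0).fit(pls.transform(X_tr), y_tr)
print("PLS-SVM MARE:", round(mare(svm_pls.predict(pls.transform(X_te)), y_te), 3))
```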
Abstract:
The chemical composition of rainwater changes from the sea to inland under the influence of several major factors: the topographic location of the area, its distance from the sea, and the annual rainfall. A model is developed here to quantify the variation in precipitation chemistry under the influence of inland distance and rainfall amount. Various sites in India categorized as 'urban', 'suburban' and 'rural' have been considered for model development. pH, HCO₃, NO₃ and Mg do not change much from coast to inland, while changes in SO₄ and Ca are subject to local emissions. Cl and Na originate solely from sea salinity and are the chemistry parameters in the model. Non-linear multiple regressions performed for the various categories revealed that both rainfall amount and precipitation chemistry obey a power-law reduction with distance from the sea. Cl and Na decrease rapidly over the first 100 km from the sea, decrease marginally over the next 100 km, and then stabilize. Regression parameters estimated for the different cases were found to be consistent (R² ≈ 0.8). Variation in one of the parameters accounted for urbanization. The model was validated using data points from the southern peninsular region of the country; estimates were found to be within the 99.9% confidence interval. Finally, the relationship between the three parameters (rainfall amount, coastline distance, and concentration in terms of Cl and Na) was validated with experiments conducted in a small experimental watershed in south-west India. The chemistry estimated using the model correlated well with the observed values, with a relative error of approximately 5%. Monthly variation in the chemistry is predicted from a downscaling model and then compared with the observed data. Hence, the model developed for rain chemistry is useful in estimating concentrations at different spatio-temporal scales and is especially applicable to the south-west region of India.
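The power-law form of the model can be sketched as a simple curve fit; the distances and concentrations below are made-up illustrative values, not the study's data:

```python
import numpy as np
from scipy.optimize import curve_fit

def power_law(d, a, b):
    """Concentration modelled as a power-law decay with inland distance d (km)."""
    return a * np.power(d, -b)

# Hypothetical Cl concentrations (mg/L) versus distance from the coast (km).
distance = np.array([5, 20, 50, 100, 150, 200, 300, 500], dtype=float)
cl_obs = np.array([12.0, 6.5, 4.1, 2.9, 2.5, 2.3, 2.1, 2.0])

params, cov = curve_fit(power_law, distance, cl_obs, p0=(20.0, 0.5))
pred = power_law(distance, *params)
r2 = 1.0 - np.sum((cl_obs - pred) ** 2) / np.sum((cl_obs - cl_obs.mean()) ** 2)
rel_err = np.abs(pred - cl_obs) / cl_obs

print("a, b:", params)
print("R^2:", round(r2, 3), " mean relative error:", round(rel_err.mean(), 3))
```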
Abstract:
The shape of tracheal cartilage has been widely treated as symmetric in analytical and numerical models. However, according to both histological images and in vivo medical images, tracheal cartilage has a highly asymmetric shape. Taking the cartilage as a symmetric structure introduces bias in the calculation of the collapse behavior, as well as of the compliance and muscular stress; however, this has rarely been discussed. In this paper, tracheal collapse is modeled with the asymmetric shape taken into account. For comparison, the symmetric shape, reconstructed from half of the cartilage, is also presented. A comparison is made of the cross-sectional area, airway compliance and stress in the muscular membrane determined with the asymmetric and symmetric shapes. The results indicate that the symmetric assumption introduces a small error, around 5%, in predicting the cross-sectional area under loading conditions. The relative error of the compliance is more than 10%; in particular, when the pressure is close to zero, the error can exceed 50%. The model based on the symmetric shape produces a significant difference in the predicted stress in the muscular membrane, either under- or over-estimating it. In conclusion, tracheal cartilage should not be treated as a symmetric structure. The results obtained in this study are helpful in evaluating the error induced by this geometric assumption.
Abstract:
A strong-coupling expansion for the Green's functions, self-energies, and correlation functions of the Bose-Hubbard model is developed. We illustrate the general formalism, which includes all possible (normal-phase) inhomogeneous effects, such as disorder or a trap potential, as well as the effects of thermal excitations. The expansion is then employed to calculate the momentum distribution of the bosons in the Mott phase for an infinite homogeneous periodic system at zero temperature through third order in the hopping. By using scaling theory for the critical behavior at zero momentum and at the critical value of the hopping for the Mott insulator-to-superfluid transition, along with a generalization of the random-phase-approximation-like form for the momentum distribution, we are able to extrapolate the series to infinite order and produce very accurate quantitative results for the momentum distribution in a simple functional form for one, two, and three dimensions. The accuracy is better in higher dimensions and is on the order of a few percent relative error everywhere except close to the critical value of the hopping divided by the on-site repulsion. In addition, we find simple phenomenological expressions for the Mott-phase lobes in two and three dimensions which are much more accurate than the truncated strong-coupling expansions and any other analytic approximation we are aware of. The strong-coupling expansions and scaling-theory results are benchmarked against numerically exact quantum Monte Carlo simulations in two and three dimensions and against density-matrix renormalization-group calculations in one dimension. These analytic expressions will be useful for quick comparison of experimental results to theory and in many cases can bypass the need for expensive numerical simulations.