989 results for Interval Arithmetic Operations


Relevance: 100.00%

Abstract:

Implicit surfaces are useful in many areas of computer graphics. One of their main advantages is that they can easily be used as modeling primitives. Even so, they are not widely used, because their visualization takes considerable time. When an accurate visualization is needed, the best option is ray tracing. However, small parts of the surfaces disappear during visualization. This is caused by the truncation inherent in the computer's floating-point representation: some bits are lost during the mathematical operations in the intersection algorithms. This thesis presents algorithms that solve these problems. The research is based on Modal Interval Analysis, which includes tools for solving problems with quantified uncertainty. The thesis provides the mathematical foundations needed to develop these algorithms.
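
A minimal Python sketch of the idea behind such robust intersection tests: interval evaluation of the implicit function along a ray, bisecting the parameter range until a root is isolated. The Interval class, the unit-sphere example, and the tolerances are illustrative assumptions; plain floating-point endpoints are used, without the outward rounding or modal extensions that make the thesis's algorithms rigorous.

from dataclasses import dataclass

@dataclass(frozen=True)
class Interval:
    lo: float
    hi: float
    def __add__(self, o):
        return Interval(self.lo + o.lo, self.hi + o.hi)
    def __mul__(self, o):
        p = (self.lo * o.lo, self.lo * o.hi, self.hi * o.lo, self.hi * o.hi)
        return Interval(min(p), max(p))
    def contains_zero(self):
        return self.lo <= 0.0 <= self.hi

def point(x):
    return Interval(x, x)

def F(t, origin, direction):
    # Unit sphere: F = x^2 + y^2 + z^2 - 1, evaluated over a whole t-range at once.
    s = point(-1.0)
    for o, d in zip(origin, direction):
        c = point(o) + t * point(d)
        s = s + c * c
    return s

def first_hit(origin, direction, t=Interval(0.0, 10.0), eps=1e-8):
    # If the enclosure of F over this t-range excludes zero, no root can hide in it.
    if not F(t, origin, direction).contains_zero():
        return None
    if t.hi - t.lo < eps:
        return t                      # a tiny interval on which F may vanish
    mid = 0.5 * (t.lo + t.hi)
    return (first_hit(origin, direction, Interval(t.lo, mid), eps)
            or first_hit(origin, direction, Interval(mid, t.hi), eps))

print(first_hit((0.0, 0.0, -3.0), (0.0, 0.0, 1.0)))  # isolates the root near t = 2

Because the interval result encloses every value of F over the whole t-range, a range whose enclosure excludes zero can be discarded safely, which is exactly the guarantee that plain floating-point intersection tests lack.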

Relevance: 100.00%

Abstract:

Basic concepts for an interval arithmetic standard are discussed in the paper. Interval arithmetic deals with closed and connected sets of real numbers. Unlike floating-point arithmetic, it is free of exceptions. A complete set of formulas to approximate real interval arithmetic on the computer is displayed in section 3 of the paper. The essential comparison relations and lattice operations are discussed in section 6. Evaluation of functions for interval arguments is studied in section 7. The desirability of variable-length interval arithmetic is also discussed. The requirement to adapt the digital computer to the needs of interval arithmetic is as old as interval arithmetic itself. An obvious and simple possible solution is shown in section 8.
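
As a concrete illustration of what such formulas look like in practice, here is a hedged Python sketch of interval addition and multiplication with the endpoints widened outward by one ulp via math.nextafter (Python 3.9+); this one-ulp widening is a simple, slightly pessimistic stand-in for the directed rounding an actual standard would mandate.

import math

def _down(x):
    return math.nextafter(x, -math.inf)

def _up(x):
    return math.nextafter(x, math.inf)

def add(a, b):
    # [a0, a1] + [b0, b1] = [a0 + b0, a1 + b1], widened outward to absorb rounding.
    return (_down(a[0] + b[0]), _up(a[1] + b[1]))

def mul(a, b):
    # The product range is spanned by the four endpoint products.
    p = (a[0] * b[0], a[0] * b[1], a[1] * b[0], a[1] * b[1])
    return (_down(min(p)), _up(max(p)))

print(mul(add((1.0, 2.0), (0.1, 0.1)), (-1.0, 3.0)))

Since round-to-nearest is within half an ulp of the exact result, stepping each endpoint one float outward always yields a valid, if slightly wide, enclosure.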

Relevance: 100.00%

Abstract:

We propose an arithmetic of function intervals as a basis for convenient rigorous numerical computation. Function intervals can be used as mathematical objects in their own right or as enclosures of functions over the reals. We present two areas of application of function interval arithmetic, together with software that implements the arithmetic: (1) validated ordinary differential equation solving, using the AERN library and within the Acumen hybrid system modeling tool; (2) numerical theorem proving, using the PolyPaver prover. © 2014 Springer-Verlag.
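
A toy Python sketch of the function-interval idea, not the actual representation used by AERN or PolyPaver: a function interval is a pair of bound functions, arithmetic acts pointwise on the bounds, and the classical bracket x - x^3/6 <= sin(x) <= x (for x >= 0) serves as the example enclosure.

import math

# A function interval: a pair (lower, upper) of callables bracketing an unknown
# function on its domain. Pointwise arithmetic preserves the enclosure.
def fi_add(f, g):
    return (lambda x: f[0](x) + g[0](x), lambda x: f[1](x) + g[1](x))

# Classical bracket, valid for x >= 0: x - x^3/6 <= sin(x) <= x.
sin_enclosure = (lambda x: x - x**3 / 6.0, lambda x: x)
one = (lambda x: 1.0, lambda x: 1.0)      # degenerate enclosure of the constant 1

g = fi_add(sin_enclosure, one)            # encloses sin(x) + 1 on x >= 0
assert g[0](0.5) <= math.sin(0.5) + 1.0 <= g[1](0.5)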

Relevance: 100.00%

Abstract:

This work shows an application of a generalized approach for constructing dilation-erosion adjunctions on fuzzy sets. More precisely, operations on fuzzy quantities and fuzzy numbers are considered. Through the generalized approach, an analogy with well-known interval computations can be drawn, which lets us define outer and inner operations on fuzzy objects. These operations prove useful in bioprocess control, ecology, and other domains where data uncertainty exists.
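
To make the interval analogy concrete, here is a small Python sketch of outer versus inner operations on ordinary intervals, the pattern the paper transfers to fuzzy quantities; the tuple encoding and the example values are assumptions for illustration.

def outer_add(a, b):
    # Outer (standard) addition: encloses every x + y with x in a and y in b.
    return (a[0] + b[0], a[1] + b[1])

def inner_sub(a, b):
    # Inner subtraction: endpoint-wise difference, acting as an adjoint-like
    # inverse of outer_add whenever the result is a proper interval.
    lo, hi = a[0] - b[0], a[1] - b[1]
    return (lo, hi) if lo <= hi else None   # improper: no inner result exists

a, b = (1.0, 4.0), (0.5, 1.0)
print(outer_add(inner_sub(a, b), b))        # recovers a = (1.0, 4.0)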

Relevance: 100.00%

Abstract:

The selection of predefined analytic grids (partitions of the numeric ranges) to represent input and output functions as histograms has been proposed as an approximation mechanism to control the tradeoff between accuracy and computation time in several areas, ranging from simulation to constraint solving. In particular, the application of interval methods to probabilistic function characterization has been shown to have advantages over other methods based on the simulation of random samples. However, standard interval arithmetic has always been used for the computation steps. In this paper, we introduce an alternative approximate arithmetic aimed at controlling the cost of the interval operations. Its distinctive feature is that the grids are taken into account by the operators. We apply the technique to probability density functions in order to improve the accuracy of the probability estimates. Results show that this approach has advantages over existing approaches in some particular situations, although computation times tend to increase significantly when analyzing large functions.
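
The grid-aware flavor of the arithmetic can be sketched as follows in Python; snapping results outward onto a predefined grid is only the simplest possible version of operators that take grids into account, and the grid and values are illustrative assumptions.

import bisect

def snap_outward(iv, grid):
    # Round the endpoints outward to points of a sorted, predefined grid.
    lo_i = bisect.bisect_right(grid, iv[0]) - 1   # largest grid point <= lo
    hi_i = bisect.bisect_left(grid, iv[1])        # smallest grid point >= hi
    return (grid[max(lo_i, 0)], grid[min(hi_i, len(grid) - 1)])

def grid_add(a, b, grid):
    s = (a[0] + b[0], a[1] + b[1])   # floating-point interval sum ...
    return snap_outward(s, grid)     # ... kept on the grid to bound later costs

grid = [i / 4.0 for i in range(-40, 41)]       # step 0.25 on [-10, 10]
print(grid_add((0.1, 0.3), (0.2, 0.2), grid))  # (0.25, 0.5)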

Relevance: 100.00%

Abstract:

An approximate number is an ordered pair consisting of a (real) number and an error bound, briefly an error, which is a (real) non-negative number. To compute with approximate numbers, the arithmetic operations on errors must be well understood. To model computations with errors, one should suitably define and study arithmetic operations and order relations over the set of non-negative numbers. In this work we discuss the algebraic properties of non-negative numbers, starting from familiar properties of real numbers. We focus on certain operations on errors which seem not to have been studied sufficiently from an algebraic point of view, restricting ourselves to the operations on errors related to addition and multiplication by scalars. We pay special attention to subtractability-like properties of errors and the induced "distance-like" operation. This operation is implicitly used under different names in several contemporary fields of applied mathematics (inner subtraction and inner addition in interval analysis, the generalized Hukuhara difference in fuzzy set theory, etc.). Here we present some new results on the algebraic properties of this operation.
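
A minimal Python sketch of this model of approximate numbers, under the natural assumptions that error bounds add under addition and scale by |k| under scalar multiplication:

from dataclasses import dataclass

@dataclass(frozen=True)
class Approx:
    value: float
    error: float   # non-negative bound: the true number lies in value +/- error

    def __add__(self, other):
        # |x - v1| <= e1 and |y - v2| <= e2 imply |(x + y) - (v1 + v2)| <= e1 + e2.
        return Approx(self.value + other.value, self.error + other.error)

    def scale(self, k: float):
        return Approx(k * self.value, abs(k) * self.error)

a = Approx(3.14, 0.01)
print(a + a.scale(-1.0))   # Approx(value=0.0, error=0.02): errors never cancel

That the difference a + a.scale(-1.0) carries error 0.02 rather than 0 is precisely the failure of subtractability that motivates studying a distance-like operation on errors.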

Relevance: 90.00%

Abstract:

Algorithms are described for the basic arithmetic operations and square rooting in a negative base. A new operation called polarization, which reverses the sign of a number, facilitates subtraction using addition. Some special features of negative-base arithmetic are also mentioned.
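
A hedged Python sketch of base -2 (negabinary) conversion, one of the building blocks such algorithms rest on; negation is demonstrated here by converting back and forth, whereas the paper's polarization operation works on the digit representation directly.

def to_negabinary(n: int) -> str:
    """Digits of n in base -2, most significant first."""
    if n == 0:
        return "0"
    digits = []
    while n != 0:
        n, r = divmod(n, -2)
        if r < 0:              # keep each remainder in {0, 1}
            n, r = n + 1, r + 2
        digits.append(str(r))
    return "".join(reversed(digits))

def from_negabinary(s: str) -> int:
    return sum(int(d) * (-2) ** i for i, d in enumerate(reversed(s)))

print(to_negabinary(6))                    # '11010' = 16 - 8 - 2 = 6
print(from_negabinary(to_negabinary(-6)))  # -6: both signs fit without a sign bit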

Relevance: 90.00%

Abstract:

The questions one should answer in engineering computations, whether deterministic, probabilistic/randomized, or heuristic, are (i) how good the computed results/outputs are and (ii) what the cost is, in terms of the amount of computation and the amount of storage used in obtaining the outputs. Absolutely error-free quantities, like the completely errorless computations carried out in a natural process, can never be captured by any means at our disposal. While the computations in nature/natural processes, including their real input quantities, are exact, the computations we perform on a digital computer, or in embedded form, are never exact. The input data for such computations are also never exact, because any measuring instrument has an inherent error of a fixed order associated with it; this error, as a matter of hypothesis rather than assumption, is not less than 0.005 per cent. By error we mean here relative error bounds. Since the exact error is never known, under any circumstances and in any context, the term error means nothing but error bounds. Further, in engineering computations it is the relative error, or equivalently the relative error bounds (and not the absolute error), that is supremely important in conveying the quality of the results/outputs.

Another important fact is that inconsistency and/or near-inconsistency in nature, i.e., in problems created from nature, is completely nonexistent, while in our modelling of natural problems we may introduce inconsistency or near-inconsistency due to human error, due to the inherent non-removable error associated with any measuring device, or due to assumptions introduced to make the problem solvable, or more easily solvable, in practice. Thus, if we discover any inconsistency, or possibly any near-inconsistency, in a mathematical model, it is certainly due to one or more of these three factors. We do, however, go ahead and solve such inconsistent/near-inconsistent problems, and we do get results that can be useful in real-world situations.

The talk considers several deterministic, probabilistic, and heuristic algorithms in numerical optimisation, in other numerical and statistical computations, and in PAC (probably approximately correct) learning models. It characterizes the quality of the results/outputs by specifying relative error bounds along with the associated confidence level, and the cost, viz. the amounts of computation and storage, through complexity. It points out the limitations of error-free computation (wherever possible, i.e., where the number of arithmetic operations is finite and known a priori) as well as of the use of interval arithmetic, and it discusses the interdependence among the error, the confidence, and the cost.

Relevance: 90.00%

Abstract:

Subdivision surfaces provide a promising alternative in geometric modeling and have advantages over the classical trimmed-NURBS representation, in particular for modeling piecewise-smooth surfaces. In this thesis, we consider the problem of geometric operations on subdivision surfaces under the strict requirement of topological correctness. Since this problem can be ill-conditioned, we propose an approach for managing the uncertainty that exists in geometric computation. When we demand exact topological information and consider the robustness of geometric operations on solid models, it becomes clear that the problem can be ill-conditioned in the presence of the uncertainty that is ubiquitous in the data. We therefore propose an interactive approach for managing the uncertainty of geometric operations, within a computational framework based on the IEEE arithmetic standard and subdivision surface modeling. An algorithm for the planar-cut problem is then presented whose goal is to satisfy the topological requirement stated above.
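
The flavor of that uncertainty management can be sketched with a three-valued side-of-plane test in Python; the error bound used here is a crude assumed estimate, not the thesis's actual analysis, but it shows the key move: admitting "uncertain" instead of forcing a binary answer that the floating-point data cannot support.

def plane_side(p, n, d, rel_err=1e-12):
    # Classify point p against the plane n . x + d = 0 under IEEE arithmetic.
    v = sum(ni * pi for ni, pi in zip(n, p)) + d
    # Crude assumed bound on the accumulated rounding error of the dot product.
    bound = rel_err * (sum(abs(ni * pi) for ni, pi in zip(n, p)) + abs(d))
    if v > bound:
        return "above"
    if v < -bound:
        return "below"
    return "uncertain"   # defer to topological reasoning instead of guessing

print(plane_side((1.0, 1.0, 1.0), (0.0, 0.0, 1.0), -1.0))  # 'uncertain': on the plane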

Relevance: 90.00%

Abstract:

The investigation of the dynamic aeroelastic stability behavior of aircraft requires very complex computational models, which must reproduce the essential elastomechanical and unsteady aerodynamic properties of the structure. On the one hand, modeling entails simplifications and idealizations within the application of the finite element method and of aerodynamic theory, whose effects on the simulation result must be assessed. On the other hand, the structural-dynamic characteristics can be identified by ground vibration testing, whose results contain measurement inaccuracies. For a robust flutter investigation, the uncertainties identified in all process steps must be determined conservatively, by fixing lower and upper bounds, so as to ensure sufficient flutter stability for all flight conditions. To this end, the present work develops a computational method that combines classical flutter analysis with the methods of fuzzy and interval arithmetic. The flutter equations of motion are formulated as a parameter-dependent nonlinear eigenvalue problem. The change of the complex eigensolution due to a varying influence parameter is tracked by numerical continuation, starting from the nominal solution, using a modified Newton iteration algorithm. As a result, the computed aeroelastic damping and frequency curves are obtained as functions of airspeed, together with uncertainty bands.
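
A miniature Python sketch of the fuzzy-interval ingredient: a fuzzy parameter represented by nested alpha-cut intervals, propagated through a model assumed monotone on each cut. The numbers and the toy damping model are invented for illustration and have nothing to do with real flutter data.

# A fuzzy parameter as nested intervals indexed by membership level alpha.
fuzzy_speed = {1.0: (250.0, 250.0), 0.5: (245.0, 255.0), 0.0: (240.0, 260.0)}

def propagate(fuzzy, f):
    # Alpha-cut propagation; endpoint evaluation is valid when f is monotone
    # on each cut, otherwise an enclosure method is needed.
    out = {}
    for alpha, (lo, hi) in fuzzy.items():
        a, b = f(lo), f(hi)
        out[alpha] = (min(a, b), max(a, b))
    return out

# Toy damping model: damping decreases with airspeed (illustrative only).
damping_band = propagate(fuzzy_speed, lambda v: 0.08 - 1e-4 * (v - 240.0))
print(damping_band[0.0])   # widest band, at membership level 0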

Relevance: 90.00%

Abstract:

Extensions of aggregation functions to Atanassov orthopairs (often referred to as intuitionistic fuzzy sets or AIFS) usually involve replacing the standard arithmetic operations with those defined for the membership and non-membership orthopairs. One problem with such constructions is that the usual choice of operations has led to formulas which do not generalize the aggregation of ordinary fuzzy sets (where the membership and non-membership values add to 1). Previous extensions of the weighted arithmetic mean and ordered weighted averaging operator also have the absorbent element 〈1,0〉, which becomes particularly problematic in the case of the Bonferroni mean, whose generalizations are useful for modeling mandatory requirements. As well as considering the consistency and interpretability of the operations used for their construction, we hold that it is also important for aggregation functions over higher order fuzzy sets to exhibit analogous behavior to their standard definitions. After highlighting the main drawbacks of existing Bonferroni means defined for Atanassov orthopairs and interval data, we present two alternative methods for extending the generalized Bonferroni mean. Both lead to functions with properties more consistent with the original Bonferroni mean, and which coincide in the case of ordinary fuzzy values.
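
For reference, the standard Bonferroni mean of inputs x_1, ..., x_n in [0, 1], with parameters p, q >= 0, which any extension to higher order fuzzy sets should recover on ordinary fuzzy values, is

    B^{p,q}(x_1, \dots, x_n) = \left( \frac{1}{n(n-1)} \sum_{i \neq j} x_i^{p} x_j^{q} \right)^{1/(p+q)}

The constructions discussed in the paper replace the sums and products in this formula with operations on membership/non-membership orthopairs, aiming at behavior consistent with this definition.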

Relevance: 90.00%

Abstract:

Uncertainty in data affects the decision-making process, as it increases both the risk and the cost of the decision. One of the challenges in minimizing the impact of bounded uncertainty on any scheduling algorithm is the lack of information: only the upper bound and the lower bound are provided, without any known probability or membership function. By contrast, probabilistic uncertainty can use probability distributions, and fuzzy uncertainty can use membership functions. McNaughton's algorithm is used to find the optimal schedule that minimizes the makespan while allowing preemption of tasks. The challenge here is the bounded inaccuracy of the algorithm's input parameters, known as bounded uncertain data. This research uses interval programming to minimize the impact of the bounded uncertainty of the input parameters on McNaughton's algorithm; it reduces the uncertainty of the cost-function estimate and improves its optimality. The research is based on the hypothesis that performing the calculations on interval values and approximating only the end result produces more accurate results than approximating each interval input first and then performing numerical calculations.
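
A small Python sketch of the interval version of McNaughton's bound; since the optimal preemptive makespan max(max_i p_i, (1/m) sum_i p_i) is monotone in every processing time, evaluating it at the interval endpoints (computing on intervals first, approximating last) is exact. The task data are assumptions for illustration.

def mcnaughton_makespan(durations, m):
    """Optimal preemptive makespan on m identical machines (McNaughton's bound)."""
    return max(max(durations), sum(durations) / m)

def interval_makespan(duration_intervals, m):
    # Bounds on the optimal makespan when each duration is only known as [lo, hi].
    # The formula is nondecreasing in every duration, so endpoint evaluation is exact.
    los = [lo for lo, _ in duration_intervals]
    his = [hi for _, hi in duration_intervals]
    return (mcnaughton_makespan(los, m), mcnaughton_makespan(his, m))

print(interval_makespan([(2.0, 3.0), (4.0, 5.0), (3.0, 4.0)], m=2))  # (4.5, 6.0)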

Relevance: 90.00%

Abstract:

In this work we use Interval Mathematics to establish interval counterparts for the main tools used in digital signal processing. More specifically, the approach developed here is oriented to signals, systems, sampling, quantization, coding, and Fourier transforms. A detailed study of some interval arithmetics that handle complex numbers is provided, namely: rectangular complex interval arithmetic, circular complex arithmetic, and interval arithmetic for polar sectors. This leads us to investigate some properties that are relevant for the development of a theory of interval digital signal processing. It is shown that the sets IR and R(C), endowed with any correct arithmetic, are not algebraic fields, meaning that those sets do not behave like the real and complex numbers. An alternative to the notion of interval complex width is also provided, and the Kulisch-Miranker order is used to write complex numbers in interval form, enabling operations on endpoints. The use of interval signals and systems is made possible by the representation of complex values in floating-point systems. That is, if a number x ∈ R is not representable in a floating-point system F, then it is mapped to an interval [x̲, x̄] such that x̲ is the largest number in F smaller than x and x̄ is the smallest number in F greater than x. This interval representation is the starting point for definitions of interval signals and systems taking real or complex values, and it provides the extension of notions such as causality, stability, time invariance, homogeneity, additivity, and linearity to interval systems. The process of quantization is extended to its interval counterpart, and interval versions of quantization levels, quantization error, and the encoded signal are then provided. It is shown that the interval quantization levels represent complex quantization levels and that the classical quantization error ranges over the interval quantization error. An estimate for the interval quantization error and an interval version of the Z-transform (and hence of the Fourier transform) are provided. Finally, the results of a MATLAB implementation are given.
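
The floating-point-to-interval mapping described above can be sketched directly in Python (math.nextafter requires Python 3.9+); exact rationals stand in for the non-representable real x.

from fractions import Fraction
import math

def enclosing_interval(x_exact: Fraction):
    # Tightest interval [lo, hi] of floats containing the exact value.
    f = float(x_exact)              # nearest float; may round up or down
    if Fraction(f) == x_exact:
        return (f, f)               # exactly representable: degenerate interval
    if Fraction(f) < x_exact:
        return (f, math.nextafter(f, math.inf))
    return (math.nextafter(f, -math.inf), f)

lo, hi = enclosing_interval(Fraction(1, 10))
print(lo, hi)    # 1/10 is not representable in binary: lo < 1/10 < hi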

Relevance: 90.00%

Abstract:

Constrained intervals, i.e., intervals viewed as mappings from [0, 1] to polynomials of degree one (linear functions) with non-negative slopes, together with an arithmetic on constrained intervals, generate a space that turns out to be a cancellative abelian monoid, albeit with a richer set of properties than the usual (standard) space of interval arithmetic. This means that we have not only the classical embedding as developed by H. Rådström and S. Markov, and the extension by E. Kaucher, but also the properties of these polynomials. We study the geometry of the embedding of intervals into a quasilinear space and some properties of the mapping of constrained intervals into a space of polynomials. It is assumed that the reader is familiar with the basic notions of interval arithmetic and interval analysis. © 2013 Springer-Verlag Berlin Heidelberg.
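
A hedged Python sketch of the constrained-interval view; sampling the parameter is used here only to visualize ranges and is no substitute for the paper's algebra of polynomials.

# A constrained interval [a, b] is the linear function x(t) = a + t*(b - a), t in [0, 1].
# Each interval keeps its own parameter, so an interval subtracted from itself cancels,
# unlike in standard interval arithmetic, where [1, 3] - [1, 3] = [-2, 2].
def constrained(a, b):
    return lambda t: a + t * (b - a)

def sampled_range(expr, steps=1000):
    # Illustrative sampling of a one-parameter expression over t in [0, 1].
    vals = [expr(i / steps) for i in range(steps + 1)]
    return (min(vals), max(vals))

x = constrained(1.0, 3.0)
print(sampled_range(lambda t: x(t) - x(t)))   # (0.0, 0.0): X - X = 0, cancellation holds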